Search Results

Search found 16971 results on 679 pages for 'blogs'.


  • Manufacturing Leadership 2011 Awards Event, Oracle Winners!

    - by stephen.slade(at)oracle.com
    Ready to Revolutionize Manufacturing? During the annual Manufacturing Leadership 2011 awards event, over 20 Oracle customers will receive leadership awards in various categories for mastery in applications, technology, and/or innovation. The event will be held at the Breakers Hotel in Palm Beach May 6-9, where Oracle will sponsor exhibits and speaking engagements as well as host customers and prospects. If you are in manufacturing and want to reach the upper quartile of performance, up there with the best-in-class performers, this is an event you should consider attending. Four-time Super Bowl champion quarterback Terry Bradshaw will be the keynote speaker. Event link: http://blog.managingautomation.com/summit/

    Read the article

  • Premature-Optimization and Performance Anxiety

    - by James Michael Hare
    While writing my post analyzing the new .NET 4 ConcurrentDictionary class (here), I fell into one of the classic blunders that I myself always love to warn about.  After analyzing the differences in time between a Dictionary with locking and the new ConcurrentDictionary class, I noted that the ConcurrentDictionary was faster for read-heavy multi-threaded operations.  Then, I made the classic blunder of thinking that because the original Dictionary with locking was faster for those write-heavy uses, it was the best choice for those types of tasks.  In short, I fell into the premature-optimization anti-pattern. Basically, the premature-optimization anti-pattern is when a developer codes very early for a perceived (whether rightly or wrongly) performance gain, sacrificing good design and maintainability in the process.  At best, the performance gains are usually negligible; at worst, they can either negatively impact performance, or can degrade maintainability so much that time to market suffers or the code becomes very fragile due to the complexity. Keep in mind the distinction above.  I'm not talking about valid performance decisions.  There are decisions one should make when designing and writing an application that are valid performance decisions.  Examples of this are knowing the best data structures for a given situation (Dictionary versus List, for example) and choosing efficient algorithms (linear search vs. binary search).  But these in my mind are macro optimizations.  The error is not in deciding to use a better data structure or algorithm; the anti-pattern, as stated above, is when you attempt to over-optimize early on in such a way that it sacrifices maintainability. In my case, I was actually considering trading the safety and maintainability gains of the ConcurrentDictionary (no locking required) for a slight performance gain by using the Dictionary with locking.  This would have been a mistake, as I would be trading maintainability (ConcurrentDictionary requires no locking, which helps readability) and safety (ConcurrentDictionary is safe for iteration even while being modified, and you don't risk a developer locking incorrectly) -- and I fell for it even when I knew to watch out for it.  I think in my case, and it may be true for others as well, a large part of it was due to the time I was trained as a developer.  I began college in the 90s, when C and C++ were king and hardware speed and memory were still relatively priceless commodities not to be squandered.  In those days, using a long instead of a short could waste precious resources, and as such, we were taught to try to minimize space and favor performance.  This is why in many cases such early code-bases were very hard to maintain.  I don't know how many times I heard back then to avoid too many function calls because of the overhead -- and in fact just last year I heard a new hire in the company where I work declare that she didn't want to refactor a long method because of function call overhead.  Now back then, that may have been a valid concern, but with today's modern hardware, even if you're calling a trivial method in an extremely tight loop (which chances are the JIT compiler would optimize anyway), the results of removing method calls to speed up performance are negligible for the great majority of applications.  Now, obviously, there are those applications where speed is absolutely king (for example drivers, computer games, operating systems) and where such sacrifices may be made.  
But I would strongly advise against such optimization because of its cost.  Many folks performing an optimization think it's always a win-win -- that they're simply adding speed to the application, so what could possibly be wrong with that?  What they don't realize is the cost of their choice.  For every piece of straightforward code that you obfuscate with performance enhancements, you risk introducing bugs and adding to the long-term technical debt of the application.  It will become so fragile over time that maintenance will become a nightmare.  I've seen such applications in places I have worked.  There are times I've seen applications where the designer was so obsessed with performance that they even designed their own memory management system for their application to try to squeeze out every ounce of performance.  Unfortunately, application stability often suffers as a result, and it is very difficult for anyone other than the original designer to maintain. I've even seen this recently where I heard a C++ developer bemoaning that in VS2010 the iterators are about twice as slow as they used to be because Microsoft added range checking (probably as part of the C++0x standard implementation).  To me this was almost a joke.  Twice as slow sounds bad, but it's almost never as bad as you think -- especially if you're gaining safety.  The only time twice is really that much slower is when once was too slow to begin with.  Think about it.  2 minutes is slow as a response time because 1 minute is slow.  But if an iterator takes 1 microsecond to move one position and a new, safer iterator takes 2 microseconds, this is trivial!  The only way you'd ever really notice this would be in iterating a collection just for the sake of iterating (i.e. no other operations).  To my mind, the added safety makes the extra time worth it. Always favor safety and maintainability when you can.  I know it can be a hard habit to break, especially if you started your career early on or in a language such as C, where programmers are very performance conscious.  But in reality, these types of micro-optimizations only end up hurting you in the long run. Remember the two laws of optimization.  I'm not sure where I first heard these, but they are so true: For beginners: Do not optimize. For experts: Do not optimize yet. If you're a beginner, resist the urge to optimize at all costs.  And if you are an expert, delay that decision.  As long as you have chosen the right data structures and algorithms for your task, your performance will probably be more than sufficient.  Chances are it will be network, database, or disk hits that will be your slow-down, not your code.  As they say, 98% of your code's bottleneck is in 2% of your code, so premature optimization may add maintenance and safety debt that won't have any measurable impact.  Instead, code for maintainability and safety, and then, and only then, when you find a true bottleneck, go back and optimize further.
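    To make the trade-off concrete, here is a minimal sketch of the two approaches -- my own illustration rather than the benchmark from the post, with invented type and method names:

    using System;
    using System.Collections.Concurrent;
    using System.Collections.Generic;
    using System.Threading.Tasks;

    public static class DictionaryTradeoff
    {
        private static readonly Dictionary<int, int> plain = new Dictionary<int, int>();
        private static readonly object gate = new object();
        private static readonly ConcurrentDictionary<int, int> concurrent =
            new ConcurrentDictionary<int, int>();

        // Locked access: potentially faster for write-heavy loads, but every
        // caller must remember the lock -- one omission breaks thread safety.
        public static void AddLocked(int key, int value)
        {
            lock (gate) { plain[key] = value; }
        }

        // Concurrent access: thread safety is the collection's responsibility.
        public static void AddConcurrent(int key, int value)
        {
            concurrent[key] = value;
        }

        public static void Main()
        {
            Parallel.For(0, 100000, i => AddLocked(i, i));
            Parallel.For(0, 100000, i => AddConcurrent(i, i));
            Console.WriteLine(plain.Count + " / " + concurrent.Count);
        }
    }

    With the concurrent version there is simply no lock for a future maintainer to forget, which is exactly the maintainability and safety point above.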

    Read the article

  • Oracle's Sun ZFS Storage Appliances and Oracle VM

    - by uwes
    Oracle's Sun ZFS Storage Appliance Is the Optimal Platform for Deploying Consolidated Applications in an Oracle Virtual Machine (OVM) Environment. Unsurpassed integration - the Oracle VM and Storage Engineering teams provide seamless integration points and an Oracle VM Connect Plug-In for the Sun ZFS Storage Appliance in FC, NFS, and iSCSI environments.  And Sun ZFS Storage is engineered and tested to work with Oracle VM agility features, including Live (VM) Migration and Oracle RAC Live Migration. More information can be found under the following links: ZFS Storage Appliance Server Virtualization Oracle.com page ZFS Storage Appliance Oracle.com page ZFS Storage Appliance Oracle Technology Network page Software download support.oracle.com page

    Read the article

  • Starting Spring Lineup

    - by onefloridacoder
    This morning I finished removing all VS2008-related frameworks and installed items related to VS2010, based on posts around the community.  Here’s what I started with on my dev laptop, the config for my laptop:  HP Pavilion dv6  Win7 64-bit 4 GB RAM Installed Developer Tools and Frameworks: Sync 2.0 SDK Visual Studio 2010 Pex Power Tools Enterprise Library 5.0 SQL Server 2008 Developer Edition Visual Studio 2008 Ultimate Expression Blend 4 RC Team Foundation Server 2010 Team Foundation Server 2010 SDK   The only item I did not reinstall on top of VS2010 was ReSharper 4.5, only because I read mixed reviews on the dev experience.  At this point I really just want to get used to the new(er) IDEs without adding any confusion to my dev machine.  I’ll level off my desire for early adoption at Blend, EntLib 5.0, and Pex - I’m also interested in the Moles integration.   Something else I didn’t have to install was my IDE theme, which was left behind in my user folder and merged during installation – it was nice to see once the dust settled.

    Read the article

  • Harry Foxwell talks about "Oracle Solaris 11 System Administration: The Complete Reference"

    - by Glynn Foster
    In a previous blog entry, New Oracle Solaris 11 Administration book, I blogged about the fact that a new book has been written to provide an excellent resource for administrators starting to learn some of the new features in Oracle Solaris 11. Despite an extensive set of online resources from the Oracle Technology Network, it's also useful to have something on the bookshelf that you can quickly refer to - and Harry Foxwell and his team of co-authors have done just that. Check out the video below, where Harry goes into detail about why the book was written, the target audience, and what he's excited about in Oracle Solaris 11. Best of all, though, is that this is a brilliant book for any aspiring Linux administrator who wants to get to know the Oracle Solaris operating system a little better.

    Read the article

  • Meet us at Devoxx!

    - by terrencebarr
    It’s Devoxx time again! If you’re at Devoxx, be sure to check the schedule for a whole range of exciting Java and Oracle topics: JavaFX, OpenJDK, JDK 7, Java Embedded, Java EE, JCP, NetBeans, Greenfoot, as well as Java Duchess and JUG meetings. Talks, labs, BOFs, demos, and more. Embedded Java will also play a prominent role. Want to see Java on Raspberry Pi in action? Find out what’s happening with Java in the IoT (Internet of Things)? Play with NetBeans and Tinkerforge? Check out the full Devoxx schedule. Why do I think Java has the most exciting part of its future still ahead of it? Catch up with me at my talk on Wed 14:00: ”Small, Smart, Connected: Java in the Internet of Things”. Cheers, – Terrence Filed under: Mobile & Embedded Tagged: embedded, Embedded Java, Java, Java Embedded, JavaFX, NetBeans, OpenJDK

    Read the article

  • Extreme Optimization Numerical Libraries for .NET – Part 1 of n

    - by JoshReuben
    While many of my colleagues are fascinated by constructing the ultimate ViewModel or ServiceBus, I feel that this kind of plumbing code is re-invented far too many times – at some point in the near future, it will be out-of-the-box standard infrastructure. How many times have you been to a customer site and built a different variation of the same kind of code frameworks? How many times can you abstract Prism or reliable and discoverable WCF communication? As the bar is raised for what's bundled with the framework and more tasks become declarative, automated, and configurable, information systems will expose a higher level of abstraction, forcing software engineers to focus on more advanced computer science and algorithmic tasks. I've spent the better half of the past decade building skills in .NET and expanding my mathematical horizons by working through the Schaum's guides. In this series I am going to examine how these skill sets come together in the implementation provided by ExtremeOptimization. Download the trial version here: http://www.extremeoptimization.com/downloads.aspx Overview The library implements a set of algorithms for: linear algebra, complex numbers, numerical integration and differentiation, solving equations, optimization, random numbers, regression, ANOVA, statistical distributions, and hypothesis tests. EONumLib combines three libraries in one, organized in a consistent namespace hierarchy: Mathematics Library - Extreme.Mathematics namespace Vector and Matrix Library - Extreme.Mathematics.LinearAlgebra namespace Statistics Library - Extreme.Statistics namespace System Requirements: .NET Framework 4.0  Mathematics Library The classes are organized into the following namespace hierarchy: Extreme.Mathematics – common data types, exception types, and delegates. Extreme.Mathematics.Calculus - numerical integration and differentiation of functions. Extreme.Mathematics.Curves - points, lines and curves, including polynomials and Chebyshev approximations; curve fitting and interpolation. Extreme.Mathematics.Generic - generic arithmetic & linear algebra. Extreme.Mathematics.EquationSolvers - root finding algorithms. Extreme.Mathematics.LinearAlgebra - vectors, matrices, matrix decompositions, solvers for simultaneous linear equations and least squares. Extreme.Mathematics.Optimization – multi-dimensional function optimization plus linear programming. Extreme.Mathematics.SignalProcessing - one- and two-dimensional discrete Fourier transforms. Extreme.Mathematics.SpecialFunctions
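    To give a flavor of the problem domain -- this is a plain C# illustration of the kind of task Extreme.Mathematics.Calculus covers, not the library's own API, which I won't guess at here -- a minimal trapezoidal-rule integrator looks like this:

    using System;

    public static class TrapezoidDemo
    {
        // Approximates the integral of f over [a, b] using n trapezoids.
        public static double Integrate(Func<double, double> f, double a, double b, int n)
        {
            double h = (b - a) / n;
            double sum = (f(a) + f(b)) / 2.0;
            for (int i = 1; i < n; i++)
            {
                sum += f(a + i * h);
            }
            return sum * h;
        }

        public static void Main()
        {
            // Integral of x^2 over [0, 1] is 1/3.
            Console.WriteLine(Integrate(x => x * x, 0.0, 1.0, 1000));
        }
    }

    A production library's version of such a routine adds adaptive step control, error estimates, and many more algorithms; the sketch just shows the shape of the problem these namespaces address.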

    Read the article

  • J2EE Applications, SPARC T4, Solaris Containers, and Resource Pools

    - by user12620111
    I've obtained a substantial performance improvement on a SPARC T4-2 Server running a J2EE Application Server Cluster by deploying the cluster members into Oracle Solaris Containers and binding those containers to cores of the SPARC T4 Processor. This is not a surprising result; in fact, it is consistent with other results that are available on the Internet. See the "references", below, for some examples. Nonetheless, here is a summary of my configuration and results. (1.0) Before deploying a J2EE Application Server Cluster into a virtualized environment, many decisions need to be made. I'm not claiming that all of the decisions that I have made will work well for every environment. In fact, I'm not even claiming that all of the decisions are the best possible for my environment. I'm only claiming that of the small sample of configurations that I've tested, this is the one that is working best for me. Here are some of the decisions that needed to be made: (1.1) Which virtualization option? There are several virtualization options and isolation levels available. Options include: Hard partitions:  Dynamic Domains on Sun SPARC Enterprise M-Series Servers Hypervisor-based virtualization such as Oracle VM Server for SPARC (LDOMs) on SPARC T-Series Servers OS virtualization using Oracle Solaris Containers Resource management tools in the Oracle Solaris OS to control the amount of resources an application receives, such as CPU cycles, physical memory, and network bandwidth. Oracle Solaris Containers provide the right level of isolation and flexibility for my environment. To borrow some words from my friends in marketing, "The SPARC T4 processor leverages the unique, no-cost virtualization capabilities of Oracle Solaris Zones"  (1.2) How to associate Oracle Solaris Containers with resources? There are several options available to associate containers with resources, including (a) resource pool association, (b) dedicated-cpu resources, and (c) capped-cpu resources. I chose to create resource pools and associate them with the containers because I wanted explicit control over the cores and virtual processors.  (1.3) Cluster topology? Is it best to deploy (a) multiple application servers on one node, (b) one application server on multiple nodes, or (c) multiple application servers on multiple nodes? After a few quick tests, it appears that one application server per Oracle Solaris Container is a good solution. (1.4) Number of cluster members to deploy? I chose to deploy four big 64-bit application servers. I would like to go back and test many 32-bit application servers, but that is left for another day. (2.0) Configuration tested. (2.1) I was using a SPARC T4-2 Server, which has 2 CPUs and 128 virtual processors. 
To understand the physical layout of the hardware on Solaris 10, I used the OpenSolaris psrinfo perl script available at http://hub.opensolaris.org/bin/download/Community+Group+performance/files/psrinfo.pl: test# ./psrinfo.pl -pv The physical processor has 8 cores and 64 virtual processors (0-63) The core has 8 virtual processors (0-7)   The core has 8 virtual processors (8-15)   The core has 8 virtual processors (16-23)   The core has 8 virtual processors (24-31)   The core has 8 virtual processors (32-39)   The core has 8 virtual processors (40-47)   The core has 8 virtual processors (48-55)   The core has 8 virtual processors (56-63)     SPARC-T4 (chipid 0, clock 2848 MHz) The physical processor has 8 cores and 64 virtual processors (64-127)   The core has 8 virtual processors (64-71)   The core has 8 virtual processors (72-79)   The core has 8 virtual processors (80-87)   The core has 8 virtual processors (88-95)   The core has 8 virtual processors (96-103)   The core has 8 virtual processors (104-111)   The core has 8 virtual processors (112-119)   The core has 8 virtual processors (120-127)     SPARC-T4 (chipid 1, clock 2848 MHz) (2.2) The "before" test: without processor binding. I started with a 4-member cluster deployed into 4 Oracle Solaris Containers. Each container used a unique gigabit Ethernet port for HTTP traffic. The containers shared a 10 gigabit Ethernet port for JDBC traffic. (2.3) The "after" test: with processor binding. I ran one application server in the Global Zone and another application server in each of the three non-global zones (NGZ):  (3.0) Configuration steps. The following steps need to be repeated for all three Oracle Solaris Containers. (3.1) Stop AppServers from the BUI. (3.2) Stop the NGZ. test# ssh test-z2 init 5 (3.3) Enable resource pools: test# svcadm enable pools (3.4) Create the resource pool: test# poolcfg -dc 'create pool pool-test-z2' (3.5) Create the processor set: test# poolcfg -dc 'create pset pset-test-z2' (3.6) Specify the maximum number of CPUs that may be added to the processor set: test# poolcfg -dc 'modify pset pset-test-z2 (uint pset.max=32)' (3.7) Bash syntax to add virtual CPUs to the processor set: test# (( i = 64 )); while (( i < 96 )); do poolcfg -dc "transfer to pset pset-test-z2 (cpu $i)"; (( i = i + 1 )) ; done (3.8) Associate the resource pool with the processor set: test# poolcfg -dc 'associate pool pool-test-z2 (pset pset-test-z2)' (3.9) Tell the zone to use the resource pool that has been created: test# zonecfg -z test-z2 set pool=pool-test-z2 (3.10) Boot the Oracle Solaris Container: test# zoneadm -z test-z2 boot (3.11) Save the configuration to /etc/pooladm.conf: test# pooladm -s (4.0) Results. Using the resource pools improves both throughput and response time: (5.0) References: System Administration Guide: Oracle Solaris Containers-Resource Management and Oracle Solaris Zones Capitalizing on large numbers of processors with WebSphere Portal on Solaris WebSphere Application Server and T5440 (Dileep Kumar's Weblog)  http://www.brendangregg.com/zones.html Reuters Market Data System, RMDS 6 Multiple Instances (Consolidated), Performance Test Results in Solaris, Containers/Zones Environment on Sun Blade X6270 by Amjad Khan, 2009.

    Read the article

  • Preview - Profit, May 2010

    - by Aaron Lazenby
    Whew! Last Friday, we put the finishing touches on the May 2010 edition of Profit, Oracle's quarterly business and technology journal. The issue will be back from the printer and live on the website in mid-April. Here's a preview: Turning Crisis into Opportunity: During the depths of the financial crisis, San Francisco, California-based Wells Fargo & Company launched a bold acquisition of Wachovia Bank--one of the largest financial services mergers in history. Learn how Oracle software helped Wells Fargo CFO Howard Atkins prepare his office for the merger--and assisted with the integration of the companies once the deal was done. Building on Success: Global construction firm Hill International takes project management to new heights with Oracle's Primavera solutions. Product Management, in Black and White: Catch up with Zebra Technologies to see how Oracle's Agile applications connect with an existing Oracle E-Business Suite system. A Perfect Match: Learn how technology makes good medicine in this interview with National Marrow Donor Program CIO Michael Jones. The IT Ties That Bind: How information systems are helping manage knowledge workers in a post-9-to-5 work world. I'll post a link to the new edition once it's live. Hope you enjoy!

    Read the article

  • Oracle ADF Mobile - Develop iOS and Android Mobile Applications with Oracle ADF

    - by Shay Shmeltzer
    We are very happy to announce the release of Oracle ADF Mobile.  The new Oracle ADF Mobile enables developers to build applications that run on iOS and Android devices. There are several unique aspects to the Oracle ADF Mobile solution: Develop once, run on many - the same code base is used for both iOS and Android applications Uses Java - no need to learn device-specific languages Leverage ADF - the same concepts you are familiar with (component-based UI construction, task flows, data controls) Leverage JDeveloper - the same development environment you know, with the same declarative and visual style Create native-looking applications - HTML5-based UI components (that you can also skin) Use device services - leverage the camera, SMS, location, contacts, etc. without learning device-specific APIs Create hybrid applications - run on the device and consume remote data and UI if needed Here is the 3-minute introduction. Oracle ADF Mobile is available as an extension to Oracle JDeveloper 11.1.2.3 - use Help->Check for Updates to install it. Then head over to the Oracle ADF Mobile page for all the resources you need. If you are an Oracle ADF developer, it's time to update your resume - you are now a mobile device developer too :-)

    Read the article

  • Non-English Character Display in Oracle SQL Developer

    - by thatjeffsmith
    I get a variation on this question at least once a week, if not more frequently. I’m from Israel, and the language on the databases is Hebrew. When I use the old and deprecated SQL*Plus (Windows rich client) I can see the Hebrew clearly; when I use the latest SQL Developer, I get gibberish. This question appears on the forums about every week or so as well. So what’s the deal? Well, it starts with a basic misunderstanding of NLS client parameters. These should accurately reflect the language and locality setup on your LOCAL machine. DO NOT COPY what’s set in the database. These parameters work together with the database so that information can be transferred back and forth correctly. Having the wrong NLS parameters locally can be bad. [ORACLE DOCS] Setting the NLS_LANG parameter properly is essential to proper data conversion. The character set that is specified by the NLS_LANG parameter should reflect the setting for the client operating system. Setting NLS_LANG correctly enables proper conversion from the client operating system character encoding to the database character set. When these settings are the same, Oracle Database assumes that the data being sent or received is encoded in the same character set as the database character set, so character set validation or conversion may not be performed. This can lead to corrupt data if conversions are necessary. OK, so what are you supposed to do? Set the font! 9 times out of 10, this preference fixes the problem with display issues. Make sure you set a font that supports the characters you’re trying to display. It’s as simple as that. This preference defines the font used to display characters in the editors and the data grids. If you have it set to a font that doesn’t have Hebrew character support, you’re not going to see Hebrew in SQL Developer. A few years ago…wow, like 15 years ago, I learned that the Tahoma font is pretty Unicode-friendly. Bad font selection: a font that’s not non-English friendly. Good font selection: the exact same text, rendered with the Tahoma font. Summary Having problems seeing non-English text in SQL Developer? Check the font! And do not start messing with NLS parameters without talking to your DBA first.

    Read the article

  • New book: Az adattárház-készítés technológiája (Bánné Varga Gabriella)

    - by user645740
    Gabriella Bánné Varga's book, Az adattárház-készítés technológiája (The Technology of Data Warehouse Construction), has been published and is now available for purchase. The book's subtitle: from architecture through dimensional modeling to business intelligence applications, with an introduction to Oracle tools. Gabriella has produced a thorough book that fills a real gap! Until now, the selection of books available in Hungarian on this subject was rather poor. Fortunately, that has now changed. The book's technical reviewer is Gábor Gollnhofer. The book's topics: concepts related to data warehouse architecture, development methodologies, dimensional modeling, dimensions (with emphasis on the time dimension), customers, fact tables, processes, BI tools, and an overview of the related Oracle tools. It was an honor to help this book come into being with my comments. Gabriella Bánné Varga's book: Az adattárház-készítés technológiája.

    Read the article

  • TOMORROW! UPK for Testing Webinar

    - by Karen Rihs
    UPK Webinar:  UPK for Testing September 13, 2012 10 am Pacific / 1 pm Eastern As an implementation and enablement tool, Oracle’s User Productivity Kit (UPK) provides value throughout the software lifecycle.  Application testing is one area where customers like Northern Illinois University (NIU) are finding huge value in UPK and are using it to validate their systems.  Join us for an OAUG-sponsored event on Sept 13th to hear Beth Renstrom, UPK Product Manager, and Bettylynne Gregg, NIU ERP Coordinator, discuss how the Test It Mode, Test Scripts, and Test Cases of UPK can be used to facilitate application testing. Click Here to Register

    Read the article

  • Where's My Windows Azure Subscriptions

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/11/03/wheres-my-windows-azure-subscriptions.aspx Yesterday when I opened the Windows Azure management portal I found some resources were missing. I checked the websites for those missing cloud services and they were still live. Then I checked my billing history but didn't find any problem. When I went back to the portal I found that all of those resources were under my MSDN subscription. I wondered whether this was related to the recent Windows Azure platform update.   This feature, named "Enterprise Management", provides the ability to manage your organization in a directory which is hosted entirely in the cloud, or alternatively kept in sync with an on-premises Windows Server Active Directory solution. By default, every existing Windows Azure account has a default Windows Azure Active Directory (a.k.a. WAAD) associated with it. In the address bar I can find the default login WAAD of my account, which is "microsoft.onmicrosoft.com". To change the WAAD we can click "subscriptions" at the top of the management portal, select the active directory from the "filter by directory" list, select the subscription we want to see, and then press "apply". As you can see, the subscription under my MSDN was located in a WAAD named "beijingtelecom.onmicrosoft.com". This is because when Microsoft applied this feature, they checked whether you had an existing WAAD in your subscription. If not, a new one was created; otherwise your subscription was moved into your existing directory. Since I created a WAAD for testing several months ago, this subscription was moved to that directory.   Changing the subscription's directory is simple. First we need to create a new WAAD with the name we prefer. Below, I created a new directory named "shaunxu". Then select "settings" from the left navigation bar, select the subscription we want to change, and click "edit directory". You don't have permission to edit/change the directory unless your Microsoft Account is the service administrator of the subscription. Then in the popup window, select the WAAD you want to change to and press "next". All done. You need to log off and log back in to the portal, and then your subscription will be in the directory you wanted. And after these steps I can view my resources in this subscription.   Summary In this post I described how to move subscriptions into a new directory. With this new feature we can manage our Windows Azure subscriptions more flexibly. But there are some things we need to keep in mind. 1. Only the service administrator is able to move a subscription. 2. Currently there's no way for us to see our Windows Azure services in more than one directory at the same time. For example, I can see my services under "shaunxu.onmicrosoft.com" and I must change the filter directory from the "subscriptions" menu to see the other services under "microsoft.onmicrosoft.com". 3. Currently we cannot delete an existing WAAD.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • FREE SPECIALIZATION EXAMS 2013

    - by agallego
    Get your Oracle Specialist certificate for FREE, 27 and 28 November 2013. You can now take the Oracle specialization implementation exams and become a specialist. You may take any of the implementation exams on the following list: Oracle Fusion Customer Relationship Management 11g Sales Certified Implementation Specialist (1Z0-456) Oracle Fusion Financials 11g General Ledger Implementation Specialist (1Z0-508) Oracle Fusion Financials 11g Accounts Payable Implementation Specialist (1Z0-507) Oracle Fusion Financials 11g Accounts Receivable Implementation Specialist (1Z0-506) Oracle Fusion Human Capital Management 11g Human Resources Certified Implementation Specialist (1Z0-584) Oracle Fusion Human Capital Management 11g Talent Management Certified Implementation Specialist (1Z0-585) Oracle ATG Web Commerce 10 Implementation Developer Certified Implementation Specialist (1Z0-510) Oracle Hyperion Planning 11 Essentials (1Z0-533) Oracle Business Intelligence Applications 7.9.6 for ERP Essentials (1Z0-525) Oracle Hyperion Financial Management 11 Essentials (1Z0-532) Oracle Business Intelligence Applications 7.9.6 for CRM Essentials (1Z0-524) Oracle Hyperion Data Relationship Management 11.1.2 Essentials (1Z1-588) Oracle Business Intelligence Foundation Suite 11g Essentials (1Z0-591) Oracle Essbase 11 Essentials (1Z0-531) SPARC T4-Based Server Installation Essentials (1Z0-597) Oracle Solaris 11 Installation and Configuration Essentials (1Z0-580) Sun ZFS Storage Appliance Certified Implementation Specialist Exalogic Elastic Cloud X2-2 Certified Implementation Specialist Oracle Exadata 11g Essentials (1Z0-536) Oracle VM 3 for x86 Essentials (1Z0-590) Oracle Linux Fundamentals (1Z0-402) Oracle Linux System Administration (1Z0-403) Oracle Service-Oriented Architecture Certified Implementation Specialist (1Z0-451) Oracle Unified Business Process Management Suite 11g Certified Implementation Specialist (1Z0-560) Oracle WebLogic Server 12c Certified Implementation Specialist (1Z0-599) Oracle WebCenter Portal 11g Essentials (1Z0-541) Oracle WebCenter Content 11g Essentials (1Z0-542) Oracle Application Development Framework 11g Certified Implementation Specialist Oracle Identity Analytics 11g Certified Implementation Specialist (1Z0-545) Oracle Enterprise Manager 12c Essentials (1Z0-457) You can consult the information about each exam at the links above. To prepare for an exam, follow the study guide you will find on that exam's page. Requirements: be an Oracle Gold, Platinum, or Diamond partner and have an Oracle Pearson VUE user account. When? 27 and 28 November at 9:00, 12:00, and 16:00. Where? Core Networks, C.E. Parque Norte, Edificio Olmo, Planta 1, Serrano Galvache 56 | 28033, Madrid. To enroll: create an account with Pearson VUE (www.pearsonvue.com/oracle). Register here. For more information about the specialization program, click here. Don't miss this opportunity; enroll today. For any questions, contact [email protected]. Ana María Gallego Partner Enablement Manager Spain and Portugal

    Read the article

  • Extreme Performance and Scale Delivered by SOA on Oracle Exalogic

    - by J Swaroop
    Demands to incorporate internet-scale applications, data, and social media traffic with existing IT infrastructure require extreme availability, reliability, and scalability. In this session on industrial-strength SOA, learn how Oracle Exalogic and Oracle Exadata engineered systems address these requirements. Topics covered: (1) how SOA and BPM benefit from "hardware and software engineered for each other"; (2) how Oracle Exadata provides the data tier with unparalleled scalability and performance for SOA and BPM running on Oracle Exalogic; (3) customer case studies; (4) best practices and topology guidelines; (5) information on tools that help operate, manage, provision, and deploy -- to help reduce overall TCO. Extreme engineering at its best! Session details: 10/2/12 (Tuesday) 11:45 AM - Moscone South - 308

    Read the article

  • John Hitchcock of Pace Describes the Oracle Agile PLM Customer Experience

    John Hitchcock, Senior Manager of Configuration Management at Pace (formerly 2Wire, Inc.), sat down for an interview during Oracle's Innovation Summit with Kerrie Foy, Manager of PLM Product Marketing at Oracle. Learn why his organization upgraded to the latest version of Agile and expanded the footprint to achieve impressive savings and productivity gains across the global, networked product value-chain.

    Read the article

  • Most Unprofessional Workplace

    - by TehGrumpyCoder
    I've worked lots of places in lots of roles: Delivery truck driver, Boilermaker, antenna rigger, Professional Musician, Electronic Technician, Electrical Engineer, and for most of my career: Software Turkey. I want to say this large company is the most unprofessional place I've ever worked, but then I think about other jobs such as TTI that stiffed us all for 10 months' salary -- or had us work 2-1/2 years at 66%, however you want to look at it -- or maybe NeoPlanet with a cast from a bad sitcom running the show. I could go on, but I digress (as usual). So maybe this place isn't the *most* unprofessional, but the personnel rank up there. I'm in a small room off a factory. There are 3 managerial offices, and 36 common-folk of various skill-sets in a variety of single to quad cubicles. No matter where you sit, because of the layout and location, you've got a hard wall as one wall of your cubicle. Because of that hard wall, everything echoes. I get off the phone, and the guy in the next cubicle makes a comment in response to my phone conversation... I hate that it can be heard and I hate that they do that! These people have no problem yelling from cube to cube to carry on running conversations, some of which are actually work-related. There's a lady two cubes away who talks so loud I can clearly hear every phone conversation she has... all work-related, but still... Then the one in the next cubicle must have been raised on a farm, because there's only one volume setting: LOUD... "HEY MARGE, CAN I GET IN FOR A QUICK APPOINTMENT AFTER WORK TONIGHT?" ... sigh Also that cube is the 'party cube', so that's where all the candy, cake, donuts, and leftovers sit. Anything MzLoud brings in has to have a verbal recipe associated with it at least 10 times during the day, and of course at volume. I've had running conversations carried on over the top of my cube from people in the next one on each side. The weird thing is... the boss sits with an open door closer to this whole fiasco than me. So I wear a pair of Bose noise-cancelling headphones, and crank up Kenny Burrell, Herb Ellis, Wes Montgomery, or Jimmy Smith to the point I can't hear the racket... what the heck, I already have a hearing loss from playing guitar.

    Read the article

  • Code Reuse is (Damn) Hard

    - by James Michael Hare
    Being a development team lead, the task of interviewing new candidates was part of my job.  Like any typical interview, we started with some easy questions to get them warmed up and help calm their nerves before hitting the hard stuff. One of those easier questions was almost always: “Name some benefits of object-oriented development.”  Nearly every time, the candidate would chime in with a plethora of canned answers which typically included: “it helps ease code reuse.”  Of course, this is a gross oversimplification.  Tools only ease reuse; it's developers that ultimately can cause code to be reusable or not, regardless of the language or methodology. But it did get me thinking…  we always used to say that as part of our mantra as to why Object-Oriented Programming was so great.  With polymorphism, inheritance, encapsulation, etc. we in essence set up the concepts to help facilitate reuse as much as possible.  And yes, as a developer now of many years, I unquestionably held that belief for ages before it really struck me how my views on reuse have jaded over the years.  In fact, in many ways Agile rightly eschews reuse as taking a backseat to developing what's needed for the here and now.  It used to be I was in complete opposition to that view, but more and more I've come to see the logic in it.  Too many times I've seen developers (myself included) get lost in design paralysis trying to come up with the perfect abstraction that would stand for all time.  Nearly without fail, all of these pieces of code become obsolete in a matter of months or years. It’s not that I don’t like reuse – it’s just that reuse is hard.  In fact, reuse is DAMN hard.  Many times it is just a distraction that eats up architect and developer time, and worse yet can be counter-productive and force wrong decisions.  Now don’t get me wrong, I love the idea of reusable code when it makes sense.  These are the few cases where you are designing something that is inherently reusable.  The problem is, most business-class code is inherently unfit for reuse! Furthermore, the code that is reusable will often fail to be reused if you don’t have the proper framework in place for effective reuse that includes standardized versioning, building, releasing, and documenting the components.  That should always be standard across the board when promoting reusable code.  All of this is hard, and it should only be done when you have code that is truly reusable or you will be exerting a large amount of development effort for very little bang for your buck. But my goal here is not to get into how to reuse (that is a topic unto itself) but what should be reused.  First, let’s look at an extension method.  There are many times where I want to kick off a thread to handle a task, then when I want to rein that thread in, of course I want to do a Join on it.  But what if I only want to wait a limited amount of time and then Abort?  Well, I could of course write that logic out by hand each time, but it seemed like a great extension method:

    public static class ThreadExtensions
    {
        public static bool JoinOrAbort(this Thread thread, TimeSpan timeToWait)
        {
            bool isJoined = false;

            if (thread != null)
            {
                isJoined = thread.Join(timeToWait);

                if (!isJoined)
                {
                    thread.Abort();
                }
            }
            return isJoined;
        }
    }

    When I look at this code, I can immediately see things that jump out at me as reasons why this code is very reusable.  
Some of them are standard OO principles, and some are kind-of home-grown litmus tests: Single Responsibility Principle (SRP) – The only reason this extension method need change is if the Thread class itself changes (one responsibility). Stable Dependencies Principle (SDP) – This method only depends on classes that are more stable than it is (System.Threading.Thread), and is itself very stable, hence other classes may safely depend on it. It is also not dependent on any business domain, and thus isn't subject to changes as the business itself changes. Open-Closed Principle (OCP) – This class is inherently closed to change. Small and Stable Problem Domain – This method only cares about System.Threading.Thread. All-or-None Usage – A user of a reusable class should want the functionality of that class, not parts of that functionality.  That’s not to say they must use every method, but they shouldn’t be using a method just to get half of its result. Cost of Reuse vs. Cost to Recreate – since this class is highly stable and minimally complex, we can offer it up for reuse very cheaply by promoting it as “ready-to-go” and already unit tested (important!) and available through a standard release cycle (very important!). Okay, all seems good there; now let's look at an entity and DAO.  I don’t know about you all, but there have been times I’ve been in organizations that get the grand idea that all DAOs and entities should be standardized and shared.  While this may work for small or static organizations, it’s near ludicrous for anything large or volatile.

namespace Shared.Entities
{
    public class Account
    {
        public int Id { get; set; }

        public string Name { get; set; }

        public Address HomeAddress { get; set; }

        public int Age { get; set; }

        public DateTime LastUsed { get; set; }

        // etc, etc, etc...
    }
}

...

namespace Shared.DataAccess
{
    public class AccountDao
    {
        public Account FindAccount(int id)
        {
            // dao logic to query and return account
        }

        ...
    }
}

Now to be fair, I’m not saying there doesn’t exist an organization where some entities may be extremely static and unchanging.  But at best such entities and DAOs will be problematic cases of reuse.  Let’s examine those same tests: Single Responsibility Principle (SRP) – The reasons to change for these classes will be strongly dependent on what the definition of the account is, which can change over time and may have multiple influences depending on the number of systems an account can cover. Stable Dependencies Principle (SDP) – This method depends on the data model beneath itself, which is largely dependent on the business definition of an account, which can be inherently unstable. Open-Closed Principle (OCP) – This class is not really closed for modification.  Every time the account definition may change, you’d need to modify this class. Small and Stable Problem Domain – The definition of an account is inherently unstable and in fact may be very large.  What if you are designing a system that aggregates account information from several sources? All-or-None Usage – What if your view of the account encompasses data from 3 different sources but you only care about one of those sources or one piece of data?  Should you have to take the hit of looking up all the other data?  On the other hand, should you have ten different methods returning portions of data in chunks people tend to ask for?  Neither is really a great solution. 
Cost of Reuse vs. Cost to Recreate – DAOs are really trivial to rewrite, and unless your definition of an account is EXTREMELY stable, the cost to promote, support, and release a reusable account entity and DAO is usually far higher than the cost to recreate them as needed. It’s no accident that my case for reuse was a utility class and my case for non-reuse was an entity/DAO.  In general, the smaller and more stable an abstraction is, the higher its level of reuse.  When I became the lead of the Shared Components Committee at my workplace, one of the original goals we looked at satisfying was to find (or create), version, release, and promote a shared library of common utility classes, frameworks, and data access objects.  Now, of course, many of you will point to nHibernate and Entity for the latter, but we were looking at larger, macro collections of data that span multiple data sources of varying types (databases, web services, etc.). As we got deeper and deeper into the details of how to manage and release these items, it quickly became apparent that while the case for reuse was typically a slam dunk for utilities and frameworks, the data access objects just didn’t “smell” right.  We ended up having session after session of design meetings to try and find the right way to share these data access components. When someone asked me why it was taking so long to iron out the shared entities, my response was quite simple: “Reuse is hard...”  And that’s when I realized that while reuse is an awesome goal and we should strive to make code maintainable, often times you end up creating far more work for yourself than necessary by trying to force code to be reusable that inherently isn’t. Think about the times you’ve worked in a company where, in design sessions, people fight over the best way to implement a class to make it maximally reusable, extensible, and any other buzzwordable.  Then think about how quickly that design became obsolete.  Many times I set out to do a project and think, “yes, this is the best design, I can extend it easily!” only to find out the business requirements change COMPLETELY in such a way that the design is rendered invalid.  Code, in general, tends to rust and age over time.  As such, writing reusable code can often be difficult and many times ends up being a futile exercise and, worse yet, sometimes makes the code harder to maintain because it obfuscates the design in the name of extensibility or reusability. So what do I think are reusable components? Generic Utility classes – these tend to be small classes that assist in a task and have no business context whatsoever. Implementation Abstraction Frameworks – home-grown frameworks that try to isolate changes to third party products you may be depending on (like writing a messaging abstraction layer for publishing/subscribing that is independent of whether you use JMS, MSMQ, etc.) – see the sketch at the end of this entry. Simplification and Uniformity Frameworks – To some extent this is similar to an abstraction framework, but there may be one chosen provider and a development-shop mandate to perform certain complex tasks in a certain way.  Or, perhaps, to simplify and dumb down a complex task for the average developer (such as implementing a particular development shop’s method of encryption). And what are less reusable? Application and Business Layers – tend to fluctuate a lot as requirements change and new features are added, so they tend to be an unstable dependency.  May be reused across applications but are also very volatile. 
Entities and Data Access Layers – these tend to be tuned to the scope of the application, so reusing them can be hard unless the abstraction is very stable. So what’s the big lesson?  Reuse is hard.  In fact it’s damn hard.  And much of the time I’m not convinced we should focus too hard on it. If you’re designing a utility or framework, then by all means design it for reuse.  But you must also set down a good versioning, release, and documentation process to maximize your chances.  For anything else, design it to be maintainable and extendable, but don’t waste the effort on reusability for something that most likely will be obsolete in a year or two anyway.
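As a footnote to the "Implementation Abstraction Frameworks" bullet above, here is a minimal sketch of the messaging abstraction idea -- the interface and class names are illustrative, not from any real product:

using System;

// The stable abstraction that application code depends on.
public interface IMessagePublisher
{
    void Publish(string topic, string message);
}

// One adapter per third-party product; product-specific calls live only here.
public class MsmqPublisher : IMessagePublisher
{
    public void Publish(string topic, string message)
    {
        // The real MSMQ calls would be wrapped here.
        Console.WriteLine("MSMQ -> " + topic + ": " + message);
    }
}

// Business code advertises its dependency on the abstraction, not the product.
public class OrderNotifier
{
    private readonly IMessagePublisher publisher;

    public OrderNotifier(IMessagePublisher publisher)
    {
        this.publisher = publisher;
    }

    public void OrderPlaced(string orderId)
    {
        publisher.Publish("orders", "Order " + orderId + " placed");
    }
}

Swapping MSMQ for JMS (or anything else) now means writing one new adapter, while OrderNotifier and its tests never change -- which is why this category of code tends to be worth the cost of reuse.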

    Read the article

  • Heterogeneous data access with OWB: ODI EE Enterprise ETL

    - by Fekete Zoltán
    Following up on the previous two blog entries, the question arises: how can heterogeneous data sources be accessed with Oracle Warehouse Builder? Recommended reading: Oracle Warehouse Builder 11gR2: OWB ETL Using ODI Knowledge Modules. Naturally, OWB has always been able to reach any ODBC-compatible database, as well as mainframe databases, through Oracle Database Heterogeneous Services with ODBC or with the Oracle Gateways. Oracle Database Gateways: MS SQL Server, Sybase, Teradata, Informix, ODBC, DRDA, APPC, WebSphere MQ, DB2, DB2/400. With the purchase of the appropriate Application Adapters, OWB can connect to sources such as SAP, Oracle E-Business Suite, PeopleSoft, Siebel, Oracle Customer Data Hub (CDH), Universal Customer Master (UCM), and Product Information Management (PIM). Starting with OWB 11gR2, OWB can use the Oracle Data Integrator Knowledge Modules for heterogeneous data access, via JDBC and other heterogeneous access methods. Recommended reading: Oracle Warehouse Builder 11gR2: OWB ETL Using ODI Knowledge Modules. Download: Oracle Warehouse Builder. BTW, the OWB Java client runs on both Linux and Windows. The server side, of course, runs in the Oracle Database: on the Solaris, Linux, HP-UX, AIX, and Windows operating systems.

    Read the article

  • eSTEP TechCast - December 2011 - Solaris 11 - The First Cloud OS

    - by uwes
    Dear partner, we are pleased to announce our next eSTEP TechCast on Thursday, 1st December and would be happy if you could join. Please see below the details for the next TechCast. Date and time: Thursday, 1 December 2011, 11:00 - 12:00 GMT (12:00 - 13:00 CET) Abstract: Solaris 11 contains many new features, particularly around improved virtualisation and network performance. Additionally, new software packaging for fool-proof upgrades, higher availability and reduced maintenance windows replaces the former SVR4 packaging and upgrade/patching methods. Target audience: Tech Presales Speaker: Andrew Gabriel Call Info: Call-in toll-free number: 08006948154 (United Kingdom) Call-in toll-free number: +44-2081181001 (United Kingdom) Conference Code: 803 594 3 Security Passcode: 9876 Webex Info (Oracle Web Conference): Meeting Number: 597 686 322, Meeting Password: tech2011 Playback / Recording / Archive: The webcasts will be recorded and will be available shortly after the event in the eSTEP portal under the Events tab, where you can also find material from previously delivered eSTEP TechCasts. Use your email address and PIN: eSTEP_2011 to get access. Feel free to have a look. We are happy to get your comments and feedback. Thanks and best regards, Partner HW Enablement EMEA

    Read the article

  • Red Gate in the Community

    - by Nick Harrison
    Much has been said recently about Red Gate's community involvement and commitment to the DotNet community. Much of this has been unduly negative. Before you start throwing stones and spewing obscenities, consider some additional facts: Red Gate's software is actually very good. I have worked on many projects where Red Gate's software was instrumental in finishing successfully. Red Gate is VERY good to the community. I have spoken at many user groups and code camps where Red Gate has been a sponsor. Red Gate consistently offers up money to pay for the venue or food, and they will often give away licenses as door prizes. There are many such community events that would not take place without Red Gate's support. All I have ever seen them ask for is to have their products mentioned or be listed as a sponsor. They don't insist on anyone following a specific script. They don't monitor how their products are showcased. They let their products speak for themselves. Red Gate sponsors the Simple Talk web site. I publish there regularly. Red Gate has never exerted editorial pressure on me. No one has ever told me we can't publish this unless you mention Red Gate products. No one has ever said you need to say nice things about Red Gate products in order to be published. They have told me: "you need to make this less academic, so you don't alienate too many readers", "you need to actually write an introduction so people will know what you are talking about", and "you need to write this so that someone who isn't a reflection nut will follow what you are trying to say". In short, they have been good editors worried about the quality of the content and what the readers are likely to be interested in. For me personally, Red Gate and Simple Talk have both been excellent to work with. As for the developer outrage… I am a little embarrassed by so much of the response that I am seeing. So many of the complaints remind me of little children whining "but you promised". Semantics aside, a promise is just a promise. It's not like they "pinky sweared". Sadly, no amount of name calling or "double dog daring" will change the economics of the situation. Red Gate is not a multibillion-dollar corporation. They are a mid-size company doing the best they can. Without a doubt, their pockets are not as deep as Microsoft's. I honestly believe that they did try to make the "freemium" model work. Sadly it did not. I have no doubt that they intended for it to work and that they tried to make it work. I also have no doubt that they labored over making this decision. This could not have been an easy decision to make. Many people are gleefully proclaiming a massive backlash against Red Gate, swearing off their wonderful products and promising to bash them at every opportunity from now on. This is childish behavior that does not represent professionals. This type of behavior is more in line with bullies in the school yard than professionals in a professional community. Now for my own prediction… This backlash against Red Gate is not likely to last very long. We will all realize that we still need their products. We may look around for alternatives, but realize that they really do have the best in class for every product that they produce, and that they really are not exorbitantly priced. We will see them sponsoring Code Camps and User Groups and be reminded, "hey, this isn't such a bad company". On the other hand, software shops like Red Gate will remember this backlash and give a second thought to supporting open source projects. 
They will worry about getting involved when an individual wants to turn over control of a product that they developed but can no longer support alone. Who wants to run the risk of not being able to follow through on their best intentions? In the end we may all suffer, even the toddlers among us throwing the temper tantrum, "BUT YOU PROMISED!" Disclaimer Before anyone asks or jumps to conclusions, I do not get paid by Red Gate to say any of this. I have often written about their products, and I have long thought that they are a wonderful company with amazing products. If they ever open an office in the SE United States, I will be one of the first to apply.

    Read the article

  • Upgrading Visual Studio 2010

    - by Martin Hinshelwood
    I have been running Visual Studio 2010 as my main development studio on my development computer since the RC was released. I need to upgrade that to the RTM, but first I need to remove it. Microsoft have done a lot of work to make this easy, and it works: it's as easy as uninstalling from the control panel. I have had many previous versions of Visual Studio 2010 on this same computer, with no need to rebuild to remove all the bits.

    Figure: Run the uninstall from the control panel to remove Visual Studio 2010 RC
    Figure: The uninstall removes everything for you.
    Figure: A green tick means everything went OK. If you get a red cross, try installing the RTM anyway; it should warn you about what was not uninstalled properly, and you can remove it manually.

    Once you have the VS2010 RC uninstalled, installing should be a breeze. The install for 2010 is much faster than for 2008, which could take all day and then some on slower computers. This one takes around 20 minutes, even on my small laptop. I always do a full install: although I have to do C#, I sometimes get to use a proper programming language, VB.NET. Seriously, there is nothing worse than trying to open a project when the other developer has used something you don't have. It's not their fault. It's yours! Save yourself the angst and install fully; it's only 5.9GB.

    Figure: I always select all of the options.

    Now go forth and develop! Preferably in VB.NET…

    Technorati Tags: Visual Studio, VS2010, VS 2010

    Read the article

  • links for 2010-05-21

    - by Bob Rhubart
    Stewart Bryson: Rittman Mead America is Recruiting
    "We don’t employ any junior consultants," writes Bryson, "so you would have to be highly experienced with some or all of the Oracle BI stack (OBIEE, OWB, ODI, the Oracle Database, Hyperion), preferably have consulting experience, and excellent client-facing skills. Our consultants also provide all of our training services, and most of us write and speak at conferences, so being articulate and passionate about Oracle BI is another requirement."
    (tags: jobs employment consultants oracle businessintelligence)

    Read the article

  • Using Sitecore RenderingContext Parameters as MVC controller action arguments

    - by Kyle Burns
    I have been working with the Technical Preview of Sitecore 6.6 on a project and have been, for the most part, happy with the way that Sitecore (which truly is an MVC implementation unto itself) has been expanded to support ASP.NET MVC. That said, getting up to speed with the combined platform has not been entirely without stumbles, and today I want to share one area where Sitecore could have really made things shine from the "it just works" perspective.

    A couple of days ago I was asked by a colleague about the usage of the "Parameters" field that is defined on Sitecore's Controller Rendering data template. Based on the standard way that Sitecore handles a field named Parameters, I was able to deduce that the field expected key/value pairs separated by the "&" character (for example, myArgument=somevalue&otherArgument=othervalue), but beyond that I wasn't sure and didn't see anything from a documentation perspective to guide me, so it was time to dig and find out where the data in the field was made available.

    My first thought was that it would be really nice if Sitecore handled the parameters in this field consistently with the way that ASP.NET MVC handles the various parameter collections on the HttpRequest object, and automatically mapped them to parameters of the executing action method. Being the hopeful sort, I configured a name/value pair on one of my renderings, added a parameter with a matching name to the controller action, and fired up the debugger to see... that the parameter was not populated.

    Having established that the field's value was not going to be presented to me the way that I had hoped, my next working assumption was that Sitecore would handle this field similarly to other such data and plug it into some ambient object that I could reference from within the controller method. After a considerable amount of guessing, testing, and cracking code open with Red Gate's Reflector (a must-have companion to Sitecore documentation), I found that the most direct way to access the parameter was through the ambient RenderingContext object, using code similar to:

        string myArgument = string.Empty;
        var rc = Sitecore.Mvc.Presentation.RenderingContext.CurrentOrNull;
        if (rc != null)
        {
            var parms = rc.Rendering.Parameters;
            myArgument = parms["myArgument"];
        }

    At this point, we know how this field is used out of the box by Sitecore and can provide information from Sitecore's Content Editor that will be available when the controller action is executing, but it feels a little dirty. In order to properly test the action method, I would have to do a lot of setup work and possibly use an isolation framework such as Pex and Moles to get at a value that my action method depends on. Notice I said that my method depends on the value, but in order to meet that dependency I've accepted another dependency, on Sitecore's RenderingContext. I'm a big believer in ensuring, when possible, that any piece of code explicitly advertises its dependencies in the method signature, so I found myself still wanting this to work the same as if the parameters were in the request route, query string, or form: by adding a myArgument parameter to the action method and having the framework populate it. Lucky for us, the ASP.NET MVC framework is extremely flexible and provides some extensibility points that are easy to grok and use.
    ASP.NET MVC is able to provide information from the request as input parameters to controller actions because it uses objects which implement an interface called IValueProvider and have been registered to service the application. The most basic statement of responsibility for an IValueProvider implementation is: "I know about some data which is indexed by key. If you hand me the key for a piece of data that I know about, I give you that data." When preparing to invoke a controller action, the framework queries the registered IValueProvider implementations with the name of each method argument to see if any of them can supply a value for the parameter. (The rest of this post assumes you're working along, and will make a lot more sense if you do.)

    Let's pull Sitecore out of the equation for a second to simplify things and create an extremely simple IValueProvider implementation. For this example, first create a new ASP.NET MVC 3 project in Visual Studio, selecting "Internet Application" and otherwise taking the defaults (I'm assuming that anyone reading this far in the post either already knows how to do this or will take a quick run through one of the many available basic MVC tutorials, such as the MVC Music Store). Once the new project is created, go to the Index action of HomeController. This action sets a Message property on the ViewBag to "Welcome to ASP.NET MVC!" and invokes the View, which has been coded to display the Message. For our example, we will remove the hard-coded message from this controller (although we'll leave it just as hard-coded somewhere else - this is sample code). For the first step in our exercise, add a string parameter to the Index action method called welcomeMessage and use the value of this argument to set the ViewBag.Message property. The updated Index action should look like:

        public ActionResult Index(string welcomeMessage)
        {
            ViewBag.Message = welcomeMessage;
            return View();
        }

    This represents the entirety of the change that you will make to either the controller or the view. If you run the application now, the home page will display and no message will be presented to the user, because no value was supplied to the action method. Let's now write a value provider to ensure this parameter gets populated. We'll start by creating a new class called StaticValueProvider. When the class is created, we'll update the using statements to ensure that they include the following:

        using System.Collections.Specialized;
        using System.Globalization;
        using System.Web.Mvc;

    With the appropriate using statements in place, we'll update the StaticValueProvider class to implement the IValueProvider interface. The System.Web.Mvc library already contains a pretty flexible dictionary-like implementation called NameValueCollectionValueProvider, so we'll just wrap that and let it do most of the real work for us.
    The completed class looks like:

        public class StaticValueProvider : IValueProvider
        {
            private NameValueCollectionValueProvider _wrappedProvider;

            public StaticValueProvider(ControllerContext controllerContext)
            {
                var parameters = new NameValueCollection();
                parameters.Add("welcomeMessage", "Hello from the value provider!");
                _wrappedProvider = new NameValueCollectionValueProvider(parameters, CultureInfo.InvariantCulture);
            }

            public bool ContainsPrefix(string prefix)
            {
                return _wrappedProvider.ContainsPrefix(prefix);
            }

            public ValueProviderResult GetValue(string key)
            {
                return _wrappedProvider.GetValue(key);
            }
        }

    Notice that the only entry in the collection matches the name of the argument to our HomeController's Index action. This is the important "secret sauce" that will make things work.

    We've got our new value provider now, but that's not quite enough to be finished. MVC obtains IValueProvider instances using factories that are registered when the application starts up. These factories extend the abstract ValueProviderFactory class by initializing and returning the appropriate implementation of IValueProvider from the GetValueProvider method. While I wouldn't do so in production code, for the sake of this example I'm going to add the following class definition within the StaticValueProvider.cs source file:

        public class StaticValueProviderFactory : ValueProviderFactory
        {
            public override IValueProvider GetValueProvider(ControllerContext controllerContext)
            {
                return new StaticValueProvider(controllerContext);
            }
        }

    Now that we have a factory, we can register it by adding the following line to the end of the Application_Start method in Global.asax.cs:

        ValueProviderFactories.Factories.Add(new StaticValueProviderFactory());

    If you've done everything right to this point, you should be able to run the application and be presented with the home page reading "Hello from the value provider!".

    Now that you have the basics of the IValueProvider down, you have everything you need to enhance your Sitecore MVC implementation by adding an IValueProvider that exposes values from the ambient RenderingContext's Parameters property. I'll provide the code for the IValueProvider implementation (which should look VERY familiar), and you can use the work we've already done as a reference to create and register the factory:

        public class RenderingContextValueProvider : IValueProvider
        {
            private NameValueCollectionValueProvider _wrappedProvider = null;

            public RenderingContextValueProvider(ControllerContext controllerContext)
            {
                var collection = new NameValueCollection();
                var rc = RenderingContext.CurrentOrNull;
                if (rc != null && rc.Rendering != null)
                {
                    foreach (var parameter in rc.Rendering.Parameters)
                    {
                        collection.Add(parameter.Key, parameter.Value);
                    }
                }
                _wrappedProvider = new NameValueCollectionValueProvider(collection, CultureInfo.InvariantCulture);
            }

            public bool ContainsPrefix(string prefix)
            {
                return _wrappedProvider.ContainsPrefix(prefix);
            }

            public ValueProviderResult GetValue(string key)
            {
                return _wrappedProvider.GetValue(key);
            }
        }

    In this post I've discussed the MVC IValueProvider used to map data to controller action method arguments and how it can be integrated into your Sitecore 6.6 MVC solution.
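    The post leaves creating and registering the factory for RenderingContextValueProvider as an exercise. As a reference, here is a minimal sketch that follows the same pattern as StaticValueProviderFactory above; the class name RenderingContextValueProviderFactory is my own invention, but the wiring mirrors the earlier example exactly:

        // Hypothetical factory name; the pattern mirrors StaticValueProviderFactory.
        public class RenderingContextValueProviderFactory : ValueProviderFactory
        {
            public override IValueProvider GetValueProvider(ControllerContext controllerContext)
            {
                // Hand MVC a provider that reads the ambient Sitecore RenderingContext.
                return new RenderingContextValueProvider(controllerContext);
            }
        }

        // Registered at application startup, e.g. at the end of Application_Start
        // in Global.asax.cs, just like the static example:
        // ValueProviderFactories.Factories.Add(new RenderingContextValueProviderFactory());

    With that in place, a rendering whose Parameters field contains myArgument=Hello should be able to feed a controller action declared as public ActionResult Index(string myArgument) directly, with no reference to RenderingContext inside the action body.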

    Read the article
