Search Results

Search found 4646 results on 186 pages for 'multi'.

  • Is Nick Clegg a man or a mouse?

    - by BizTalk Visionary
    Well we got the hung election so many of us wanted! I believe it really is time for electoral change. Why? Consider: the ConMen under Cameroon have polled 36% of the great British voting public – well, those that got to vote!! That means 64% of us don’t want him as PM. So what gives him the right to govern? An ancient voting system ideal for two-party politics. But for the last 30 years we’ve had multi-party politics, and going forward we may see 4 or 5 parties stepping up. We have to set in place a system that makes this work! So what does that mean today: Nick has a golden chance to push forward the case, and in fact the absolute right, for the change. He needs to keep this in mind when he discusses coalition with both Labour and the ConMen. So the mouse approach: decide it is only fair to side with the ‘biggest’ vote and team up with the ConMen. Chances of electoral change? Big fat zero. Chance of achieving any of his other targets? Big fat zero. Why? Simple (as the Meerkat would say). Cameroon needs to become PM by hook or by crook. Once PM he holds the whip hand. Labour will dump Brown and head off into leadership-race land, Clegg will be knocking on Number 10, having meaningless meetings and seeing no reward. Finally, while Labour is at sixes and sevens, the ‘new’ PM will call a new election, gain the majority they need and dump luckless Nick!! So the man approach: team up with Labour. As one of the conditions – Brown to go. Run a referendum for PR. Get PR through, then force Labour to have a new election under PR. Nick is now a hero and should be in a much better place following a PR election!! The man bit is standing up to the media attack for supporting Labour. Come on Nick – be a man for a better Britain!!

    Read the article

  • Customization: It’s Wanted in Enterprise Tech Platforms Too

    - by Mike Stiles
    Did you know that every customer service person does their job the exact same way in every business organization?  And did you know that every business organization cares about the exact same metrics? I hope not, because both those things couldn’t be farther from the truth. And if there are different needs and approaches in different enterprises, it stands to reason technology platforms must become increasingly customizable. Oracle Social Cloud sees that coming and is doing something about it, at least in terms of social media management. Today we introduce Social Station, a customizable user experience workspace within the Oracle Social Relationship Management (SRM) platform. We think a lot about customer-centricity and customer experience around here, and we know our own customers are ready to start moving forward in being able to set up their work environments in the ways that work best for them. That kind of thing increases productivity, helps deliver on social objectives faster, and generally just makes life more pleasant. A recent IDG Enterprise report says that enterprises currently investing in more consumerized, easy-to-use technologies experience a 56% increase in employee productivity and a 46% increase in customer satisfaction. Imagine that. When you make it easier and more pleasant for employees to help customers, more customers get helped and everyone ends up happier. So what does this Social Station do and what does it mean, exactly? It’s an innovative move to take some pretty high-end tech (take a bow developers) and simplify it, making things more intuitive: Drag and drop lets you easily build out and personalize your social workspace with different modules. The new Custom Analytics module can mix and match over 120 metrics with thousands of customizable reporting options. You can check constantly refreshed updates and keep a real-time eye on the numbers you’re trying to move. One-click sharing and annotation in the Custom Analytics module improves sharing and collaboration across teams, departments and executives. Multi-view layout helps you leverage social insights by letting you monitor conversations by network, stream, metric, graph type, date range, and relative time period. The Enhanced Calendar is a better visual representation of content, posts, networks and views, letting you easily toggle between functions and views. The Oracle Social Station sets us up to always be developing & launching additional social modules for you, covering areas like content curation, influencer engagement, and command center creation. Oracle Social Cloud Group VP Meg Bear says, “Consumers today have high expectations of their technology application capabilities and usability, and those expectations don’t stop when they enter their workplaces.” In other words, internal enterprise technology platforms must reflect the personalization and customization being called for in consumer products and marketing. “One size fits all” is becoming an endangered concept. @mikestiles @oraclesocial

    Read the article

  • Introducing the Oracle MDM Blog - Why All MDM Solutions Aren't Equal

    - by ken.pulverman
    Welcome to the Oracle MDM Blog.  Dave Butler, Tony Ouk, and I - Ken Pulverman - will be bringing you news and information from the world of MDM at Oracle.  Dave is our resident expert with more than 30 years of experience in data and information management. Tony has deep expertise in our Exadata product line, which provides a strong hardware synergy with MDM.  I come from Siebel Systems, where I helped found the team that built our integration product line and then our Universal Customer Master, which is part of our MDM offering at Oracle. I thought I'd hit the ground running with a topic we are going to want to continue to bend your ear about.  We had a recent meeting with Ford Goodman, our head of MDM commercial sales in the US, and he was very fired up about an important topic.  He's irked that all MDM solutions get painted with the same brush even though they aren't the same at all. There are companies out there trying to represent frameworks and toolkits as out-of-the-box solutions.  They give you the pleasure (read pain) of doing things like developing your own multi-application data model, building your own web services, or creating your own APIs.  Huh?  What gets sold as flexibility is in reality a barrier to ever going live.  At Siebel Systems we obsessed over the notion of a customer.  Our data model took over 10 years to perfect, as defining a customer is a very complex task indeed.  There are divisions, subsidiaries, branches, acquisitions, sites, etc.  You'll want to do your homework, but trust me - you aren't going to want to take the time or resources to build these canonical data structures yourself.  And what about APIs?  Again, it sounds flexible.  In reality it's a lot of work. Our DNA at Oracle is to reduce the cost of information technology, so we pre-integrate our technology with all of our major applications and pre-build integrations and connectors for all the major systems you work with.  This is tedious work that requires detailed knowledge of the interfaces of all the applications involved.  It is also version-specific, as the interface features and technology are always changing.  We have a substantial organization to manage this complexity so you don't have to.  Suffice it to say, we'd like to help our customers peel back the rhetoric of companies that fly the MDM flag without a real offering that you can quickly benefit from. Please watch this space for more information on this storyline as well as news and information around Oracle MDM.

    Read the article

  • Oracle WebCenter: The Best of the Best

    - by kellsey.ruppel(at)oracle.com
    You may remember that the key goals of the new release of WebCenter are providing a Modern User Experience, unparalleled Application Integration, converging all the best of the existing portal platforms into WebCenter, and delivering a Common User Experience Architecture.  Last week, we provided an overview of Oracle WebCenter, and this week, we'll focus on Convergence and how the new release of Oracle WebCenter is the Best of the Best. Our development team has been working very hard to bring all the best capabilities from each of the existing portal products into one modern user experience platform that provides a robust foundation for moving customers into the future.  Each of the development teams still maintains their existing products to support the current customers, but they've been tasked with converging their unique best-of-breed features into the new WebCenter release so that it will meet the broadest set of use cases possible. For example, we've taken the fastest and most scalable portlet engine in the industry with Oracle WebLogic Portal, integrated it within WebCenter, and improved performance further, to deliver even more performance for our customers.  In addition, we've focused on extending the reach of all the different user experience resources so that customers can deliver robust capabilities into their existing portals, applications, composite applications, dashboards, mobile applications - really any channel that requires information.  And finally, we've combined a whole set of community and multi-site capabilities, leveraging the pioneering capabilities of the Plumtree portal, directly into the new WebCenter release.  While we've built and delivered the new WebCenter release, we've also provided new feature releases of all the existing products.  In this way, customers can continue to gain value out of their existing investments while at the same time having the smoothest path to upgrading to the new WebCenter release. With the new WebCenter release, we are delivering a converged platform to address all portal requirements that have been delivered by different point products in our portal portfolio in the past. Additionally, this release delivers the most modern user experience, one that goes well beyond the experience the other portal products provided. This is because the new WebCenter release has been built from the ground up with modern technologies around rich clients, SOA, and customizations, compared with other portal products whose architecture has been adapted to add capabilities like AJAX, personalization, and social computing. The new WebCenter release addresses the broadest set of use cases using a single product set and a single architecture, spanning extranet sites to social communities. It helps customers manage, maintain and develop one technology set, but leverage it throughout their organization, whether it's embedded in an application or a new destination for improved customer and employee productivity. Additionally, the new release of WebCenter leverages the best and most performant features of all the existing portfolio products to deliver the fastest and most scalable portal platform.
    Most importantly, it supports the broadest development models, spanning from J2EE/Java to HTML/REST to .NET. Keep checking back this week as we provide additional resources and information on how the new release of Oracle WebCenter is the Best of the Best - converging all the best capabilities from each of the existing portal products into one modern user experience platform.

    Read the article

  • Oracle Utilities Application Framework future feature deprecation

    - by Paula Speranza-Hadley
    From time to time, existing functionality is replaced with alternative features to offer greater flexibility and standardization. In Oracle Utilities Application Framework V4.2.0.0.0, the following features are being announced for deprecation in the next release, or have been previously announced and are not being delivered with this version of the Oracle Utilities Application Framework:
      - No SQL Server Support – Oracle Utilities Application Framework V4.2.0.0.0 or above does not ship with any support for SQL Server.
      - No MPL Support – Oracle Utilities Application Framework V4.2.0.0.0 or above does not ship with the Multi-Purpose Listener (MPL) component of the XML Application Integration (XAI) component. Customers using the MPL should migrate to Oracle Service Bus.
      - No provided Crystal Reports/Business Objects Interface – Oracle Utilities Application Framework V4.2.0.0.0 or above does not ship with a supported Crystal Reports/Business Objects Interface. This facility is now available as a downloadable customization for existing or new customers. Responsibility for maintenance and new features is now the individual customer's responsibility.
      - XAI Servlet deprecation – The XAI Servlet (xaiserver and classicxai) will be removed in the next release of the Oracle Utilities Application Framework. Customers are encouraged to migrate to the native Web Services Support as outlined in the XAI Best Practices whitepaper available from My Oracle Support (Doc Id: 942074.1).
      - ConfigLab deprecation – The ConfigLab facility will be removed in the next release of the Oracle Utilities Application Framework for products it is shipped with. Customers are recommended to migrate to the Configuration Migration Assistant, which provides the same functionality and more.
      - Archiving deprecation – The inbuilt Archiving has been removed from Oracle Utilities Application Framework V4.2.0.0.0 or above, for products it is shipped with. Customers considering an Archiving solution should migrate to the Information Lifecycle Management based solution provided for your product.
      - DISTRIBUTED batch execution mode deprecation – The DISTRIBUTED execution mode used by the batch component of the Oracle Utilities Application Framework will be deprecated in the next release of the Oracle Utilities Application Framework. Customers using DISTRIBUTED mode should migrate to CLUSTERED mode as outlined in the Batch Best Practices For Oracle Utilities Application Framework Based Products whitepaper available from My Oracle Support (Doc Id: 836362.1).
      - XAI Schema Editor deprecation – The XAI Schema Editor, which is a component of the Oracle Utilities Software Development Kit, will be removed in the next release of the Oracle Utilities Application Framework. Customers should migrate their existing schemas to Business Object based schemas and use the browser-based Schema Editor instead.

    Read the article

  • ArchBeat Link-o-Rama for 11/15/2011

    - by Bob Rhubart
    Java Magazine - November/December 2011 - by and for the Java Community Java Magazine is an essential source of knowledge about Java technology, the Java programming language, and Java-based applications for people who rely on them in their professional careers, or who aspire to. Enterprise 2.0 Conference: November 14-17 | Kellsey Ruppel "Oracle is proud to be a Gold sponsor of the Enterprise 2.0 West Conference, November 14-17, 2011 in Santa Clara, CA. You will see the latest collaboration tools and technologies, and learn from thought leaders in Enterprise 2.0's comprehensive conference." The Return of Oracle Wikis: Bigger and Better | @oracletechnet The Oracle Wikis are back - this time, with Oracle SSO on top and powered by Atlassian's Confluence technology. These wikis offer quite a bit more functionality than the old platform. Cloud Migration Lifecycle | Tom Laszewski Laszewski breaks down the four steps in the Set Up Phase of the Cloud Migration lifecycle. Architecture all day. Oracle Technology Network Architect Day - Phoenix, AZ - Dec14 Spend the day with your peers learning from Oracle experts in engineered systems, cloud computing, Oracle Coherence, Oracle WebLogic, and more. Registration is free, but seating is limited. SOA all the Time; Architects in AZ; Clearing Info Integration Hurdles This week on the Architect Home Page on OTN. Live Webcast: New Innovations in Oracle Linux Date: Tuesday, November 15, 2011 Time: 9:00 AM PT / Noon ET Speakers: Chris Mason, Elena Zannoni. People in glass futures should throw stones | Nicholas Carr "Remember that Microsoft video on our glassy future? Or that one from Corning? Or that one from Toyota?" asks Carr. "What they all suggest, and assume, is that our rich natural 'interface' with the world will steadily wither away as we become more reliant on software mediation." Integration of SABSA Security Architecture Approaches with TOGAF ADM | Jeevak Kasarkod Jeevak Kasarkod's overview of a new paper from the OpenGroup and the SABSA institute "which delves into the incorporatation of risk management and security architecture approaches into a well established enterprise architecture methodology - TOGAF." Cloud Computing at the Tactical Edge | Grace Lewis - SEI Lewis describes the SEI's work with Cloudlets, " lightweight servers running one or more virtual machines (VMs), [that] allow soldiers in the field to offload resource-consumptive and battery-draining computations from their handheld devices to nearby cloudlets." Simplicity Is Good | James Morle "When designing cluster and storage networking for database platforms, keep the architecture simple and avoid the complexities of multi-tier topologies," says Morle. "Complexity is the enemy of availability." Mainframe as the cloud? Tom Laszewski There's nothing new about using the mainframe in the cloud, says Laszewski. Let Devoxx 2011 begin! | The Aquarium The Aquarium marks the kick-off of Devoxx 2011 with "a quick rundown of the Java EE and GlassFish side of things."

    Read the article

  • Apple iPad 2 in April, iPhone 5 in June With New Hardware [Rumours]

    - by Gopinath
    Blogs and news sites are buzzing with rumours of Apple’s next generation iPad and iPhone devices. These rumours interest the bloggers, geeks and end users of Apple devices, as Apple keeps a very tight lip on the new features of their upcoming products. The gadget blog Engadget has some very interesting rumours on the release of iPad 2 & iPhone 5 as well as the new hardware they are going to have. Let’s get into the details if you love to read the rumours of high-profile blogs.
    iPad 2 Release Date and Specs
    Apple seems to be all set to release iPad 2 in April, that is almost a year after the release of the first iPad. It’s common for Apple to take a year before releasing a new version of their products. So if at all the rumours are to be believed, I can place an order for iPad 2 in April. Just like many of you out there, I’m also holding off on buying an iPad and waiting for iPad 2, as it’s going to have at the minimum a retina display, FaceTime features and a few game-changing features in Apple’s style. The report claims iPad 2 will have:
      - front and back cameras
      - retina display
      - SD card slot (seems to be no USB)
      - a dual GSM / CDMA chipset that lets you use it with both GSM (AT&T, Airtel) and CDMA (Verizon, Reliance) telecom providers
    iPhone 5 Release Date and Specs
    When it comes to iPhone 5 information, the rumour claims that the new iPhone is a completely redesigned device and it’s slated to release in the US summer (i.e. June 2011). The device is also being tested by senior Apple executives right inside the campus, and they are strictly not allowed to carry it outside. This restriction is to make sure that iPhone 5 will not land up in a bar and then in the hands of geek blogs, like how it happened with iPhone 4 last year. When it comes to the hardware of iPhone 5:
      - Apple’s new A5 CPU (a Cortex A9-based, multi-core chip)
      - a dual GSM / CDMA chipset that lets you use it with both GSM (AT&T, Airtel) and CDMA (Verizon, Reliance) telecom providers
    via Engadget and cc image credit flickr/mr-blixt
    This article, titled Apple iPad 2 in April, iPhone 5 in June With New Hardware [Rumours], was originally published at Tech Dreams. Grab our RSS feed or fan us on Facebook to get updates from us.

    Read the article

  • SQLAuthority News – SQL Server 2012 – Microsoft Learning Training and Certification

    - by pinaldave
    Here is the conversation I had right after I had posted my earlier blog post about Download Microsoft SQL Server 2012 RTM Now.
    Rajesh: So SQL Server is available for me to download?
    Pinal: Yes, sure – check the link here.
    Rajesh: It is a trial – do you know when it will be available for everybody?
    Pinal: I think you mean General Availability (GA), which is on April 1st, 2012.
    Rajesh: I want to have a head start with the SQL Server 2012 examinations and I want to know about every single exam.
      - Exam 70-461: Querying Microsoft SQL Server 2012 – This exam is intended for SQL Server database administrators, implementers, system engineers, and developers with two or more years of experience who are seeking to prove their skills and knowledge in writing queries.
      - Exam 70-462: Administering Microsoft SQL Server 2012 Databases – This exam is intended for Database Professionals who perform installation, maintenance, and configuration tasks as their primary areas of responsibility. They will often set up database systems and are responsible for making sure those systems operate efficiently.
      - Exam 70-463: Implementing a Data Warehouse with Microsoft SQL Server 2012 – The primary audience for this exam is Extract Transform Load (ETL) and Data Warehouse Developers. They are most likely to focus on hands-on work creating business intelligence (BI) solutions, including data cleansing, ETL, and Data Warehouse implementation.
      - Exam 70-464: Developing Microsoft SQL Server 2012 Databases – This exam is intended for database professionals who build and implement databases across an organization while ensuring high levels of data availability. They perform tasks including creating database files, creating data types and tables, planning, creating, and optimizing indexes, implementing data integrity, implementing views, stored procedures, and functions, and managing transactions and locks.
      - Exam 70-465: Designing Database Solutions for Microsoft SQL Server 2012 – This exam is intended for database professionals who design and build database solutions in an organization. They are responsible for the creation of plans and designs for database structure, storage, objects, and servers.
      - Exam 70-466: Implementing Data Models and Reports with Microsoft SQL Server 2012 – The primary audience for this exam is BI Developers. They are most likely to focus on hands-on work creating the BI solution, including implementing multi-dimensional data models, implementing and maintaining OLAP cubes, and creating information displays used in business decision making.
      - Exam 70-467: Designing Business Intelligence Solutions with Microsoft SQL Server 2012 – The primary audience for this exam is the BI Architect. BI Architects are responsible for the overall design of the BI infrastructure, including how it relates to other data systems in use.
    Looking at Rajesh’s passion, I am motivated too! I may want to start attempting the exams in the near future. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Oracle Customer Hub - Directions, Roadmap and Customer Success

    - by Mala Narasimharajan
    By Gurinder Bahl. With less than a week to go before OOW 2012, I would like to introduce you all to the core Oracle Customer MDM Strategy sessions. Fragmentation of customer data across disparate systems prohibits companies from achieving a complete and accurate view of their customers. Oracle Customer Hub provides a comprehensive set of services, utilities and applications to create and maintain a trusted master customer system of record across the enterprise. Customer Hub centralizes customer data from disparate systems across your enterprise into a master repository. Existing systems are integrated in real time or via batch with the Hub, allowing you to leverage legacy platform investments while capitalizing on the benefits of a single customer identity. Don’t miss out on two sessions geared towards Oracle Customer Hub:
    1) Attend session CON9747 - Turn Customer Data into an Enterprise Asset with Oracle Fusion Customer Hub Applications at Oracle Open World 2012 on Monday, Oct 1st, 10:45 AM - 11:45 AM @ Moscone West – 2008. Manouj Tahiliani, Sr. Director MDM Product Management, will provide insight into the vision of Oracle Fusion Customer Hub solutions and review the roadmap. You will discover how Fusion Customer MDM can help your enterprise improve data quality, create accurate and complete customer information, manage governance and help create great customer experiences. You will also understand how to leverage data quality capabilities and create a sophisticated customer foundation within Oracle Fusion Applications. You will also hear Danette Patterson, Group Lead, Church Pension Group, talk about how Oracle Fusion Customer Hub applications provide a modern, next-generation, multi-domain foundation for managing customer information in a private cloud.
    2) Don't miss session CON9692 - Customer MDM is key to Strategic Business Success and Customer Experience Management at Oracle Open World 2012 on Wednesday, October 3rd 2012 from 3:30-4:30pm @ Westin San Francisco Metropolitan 1. JP Hurtado, Director, Customer Systems, will provide insight on how RCCL overcame challenges of data quality, guest recognition & centralized customer view to provide a consolidated customer view to multiple reservation, CRM, marketing, service, sales, data warehouse and loyalty systems. You will learn how Royal Caribbean Cruise Lines (RCCL), which has over 30 million customers and maintains multiple brands, has leveraged Oracle Customer Hub (Siebel UCM) as the backbone of its customer data management strategy for the past 5 years. Gurinder Bahl from MDM Product Management will provide an update on Oracle Customer Hub strategy, what we have achieved since last Open World and our future plans for the Oracle Customer Hub. You will learn about Customer Hub Data Quality capabilities around data analysis, cleansing, matching, address validation as well as reporting and monitoring capabilities.
    The MDM track at Oracle Open World covers a variety of topics related to MDM. In addition to the product management team presenting product updates and roadmap, we have several Customer Panels and Conference sessions. You can see an overview of MDM sessions here. Looking forward to seeing you at Open World, the perfect opportunity to learn about cutting-edge Oracle technologies.

    Read the article

  • What’s new in VS.10 & TFS.10?

    - by johndoucette
    Getting my geek on… I have decided to call the products VS.10 (Visual Studio 2010), TP.10 (Test Professional 2010), and TFS.10 (Team Foundation Server 2010). Thanks Neno Loje. What's new in Visual Studio & Team Foundation Server 2010? Focusing on Visual Studio Team System (VSTS) ALM-related parts:
    Visual Studio Ultimate 2010
    NEW: IntelliTrace® (aka the historical debugger) NEW: Architecture Tools New Project Type: Modeling Project UML Diagrams UML Use Case Diagram UML Class Diagram UML Sequence Diagram (supports reverse engineering) UML Activity Diagram UML Component Diagram Layer Diagram (with Team Build integration for layer validation) Architecture Explorer Dependency visualization DGML Web & Load Tests
    Visual Studio Premium 2010
    NEW: Architecture Tools Read-only model viewer Development Tools Code Analysis New Rules like SQL Injection detection Rule Sets Code Profiler Multi-Tier Profiling JScript Profiling Profiling applications on virtual machines in sampling mode Code Metrics Test Tools Code Coverage NEW: Test Impact Analysis NEW: Coded UI Test Database Tools (DB schema versioning & deployment)
    Visual Studio Professional 2010
    Debugger Mixed Mode Debugging for 64-bit Applications Export/Import of Breakpoints and data tips
    Visual Studio Test Professional 2010
    Microsoft Test Manager (MTM, formerly known as "Camano") Fast Forward Testing
    Visual Studio Team Foundation Server 2010
    Work Item Tracking and Project Management New MSF templates for Agile and CMMI (V 5.0) Hierarchical Work Items Custom Work Item Link Types Ready to use Excel agile project management workbooks for managing your backlogs (including capacity planning) Convert Work Item query to an Excel report MS Excel integration Support for Work Item hierarchies Formatting is preserved after doing a 'Refresh' MS Project integration Hierarchy and successor/predecessor info is now synchronized NEW: Test Case Management Version Control Public Workspaces Branch & Merge Visualization Tracking of Changesets & Work Items Gated Check-In Team Build Build Controllers and Agents Workflow 4-based build process NEW: Lab Management (only a pre-release is available at the moment!) Project Portal & Reporting Dashboards (on SharePoint Portal) Burndown Chart TFS Web Parts (to show data from TFS) Administration & Operations Topology enhancements Application tier network load balancing (NLB) SQL Server scale out Improved SharePoint flexibility Report Server flexibility Zone support Kerberos support Separation of TFS and SQL administration Setup Separate install from configure Improved installation wizards Optional components Simplified account requirements Improved Reporting Services configuration Setup consolidation Upgrading from previous TFS versions Improved IIS flexibility Administration Consolidation of command line tools User rename support Project Collections Archive/restore individual project collections Move Team Project Collections Server consolidation Team Project Collection Split Team Project Collection Isolation Server request cancellation Licensing: TFS server license included in MSDN subscriptions
    Removed features (former features not part of Visual Studio 2010): Debug » Start With Application Verifier Object Test Bench IntelliSense for C++ / CLI Debugging support for SQL 2000

    Read the article

  • SharpDX and game engines, back to zero?

    - by Baboon
    I'm a desktop developer (I mainly do WPF for a living) but I want to make games as a hobby. So a few months ago, I started reading blogs, gamedevSE, you name it. I understand in the C++ DirectX world, you have engines such as Unity3D with designers and whatnot, but as much as I'm OK with spending months understanding game development, I'd prefer to stay with my comfy C#. So I thought I'd develop my first games in C#/DirectX through SharpDX. But then, I can't use game engines anymore, since they're made for C++/DirectX and not SharpDX. (OK, I could do P/Invokes but that defeats the purpose of SharpDX.) I do know about XNA, and I also do know I can't publish to the marketplace with it (and quite frankly, I don't really want to learn an API that is in jeopardy). So how do you reconcile writing games in C# with using existing game engines instead of reinventing the wheel? Wait for ports? So I've found out the following:
    1. After doing the first tutorials of MOgre and digging around, it seems MOgre gives you the worst of both worlds: you can't port it with Mono because it directly references a C++/CLI dll (Ogre) and C++/CLI isn't supported by Mono. And since it's not C++ itself, you can't make a pure build. Which, as far as I understand, means I'd be stuck on Windows (and not even WinRT/Metro compatible), without the capability of porting anything to mobiles or other OSes (Mac/Linux). Even though it looks really nice to develop with MOgre, I'd like to learn something a little more open to future broadening.
    2. On the other hand, MonoGame seems to be a rewrite of XNA with SharpDX, which sounds very promising: Mono allows me to easily port my games to other platforms, mobiles included. SharpDX allows me to access the latest DirectX versions. XNA would be my first choice if only MS showed some hope for the future. It really looks like MonoGame is nothing else than XNA on SharpDX.
    3. Axiom looks good too but I lack info on the subject (and pages with poor design don't give me a good feeling about an API...).
    4. XNA with SunBurn looks good: It should be portable to Mono (can anyone with experience give us feedback on that?), thus multi-platform. It's then marketplace-able since Mono itself is.
    Did I miss something or are 2) and 4) my best options (aside from the fact Mono doesn't support any XAML)?
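    For context on what "XNA on SharpDX" means in practice, here is a minimal sketch of the XNA-style game skeleton that MonoGame preserves (illustrative only; the MyGame class is hypothetical, but Game, GraphicsDeviceManager and SpriteBatch are the standard XNA/MonoGame types):

      using Microsoft.Xna.Framework;
      using Microsoft.Xna.Framework.Graphics;

      public class MyGame : Game
      {
          private readonly GraphicsDeviceManager _graphics;
          private SpriteBatch _spriteBatch;

          public MyGame()
          {
              // MonoGame maps this device onto SharpDX/DirectX or OpenGL depending on the target platform.
              _graphics = new GraphicsDeviceManager(this);
              Content.RootDirectory = "Content";
          }

          protected override void LoadContent()
          {
              // SpriteBatch is the standard 2D drawing helper bound to the graphics device.
              _spriteBatch = new SpriteBatch(GraphicsDevice);
          }

          protected override void Update(GameTime gameTime)
          {
              // Input, physics and game logic go here.
              base.Update(gameTime);
          }

          protected override void Draw(GameTime gameTime)
          {
              GraphicsDevice.Clear(Color.CornflowerBlue);
              // _spriteBatch drawing goes here.
              base.Draw(gameTime);
          }
      }

    The same class compiles against Microsoft XNA 4.0 or MonoGame, which is a large part of what makes option 2 attractive if you already know the XNA API.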

    Read the article

  • Best advice for setting up Ubuntu on my mother's computer

    - by idealmachine
    Intended use My mother had an old Compaq desktop computer running Windows 98, which she used for occasional Web browsing and playing cards. The name of her card game is Hoyle Card Games 3. Although I had to repair it several times over the last 10 years, it worked fine until it finally died at the end of last year. Hardware specifications A relative brought up a newer computer soon afterward: Operating system: Windows XP Asus K8N motherboard (with broken on-board sound; getting a sound card) Athlon 64? processor (don't remember the clock speed) 512 MB RAM Hope the graphics card works... Replacement sound card will be one of: Ensoniq ES1370 AudioPCI Diamond Monster Sound MX300 (Aureal chipset) Sound Blaster Audigy 2 SE Peripherals HP Scanjet 3400c scanner (USB connected) HP LaserJet multi-function printer (parallel port connected, and printing works with a PCL driver) Same serial mouse as old computer Question I had set up an SSH/VNC connection to allow for remotely working out problems. Or so I thought. A month later, the computer would not boot, rendering the SSH connection useless and an OS reinstall necessary. Unfortunately, I have neither the original Windows disc nor the product key. Unless I were to pay $200 for a full Windows 7 Home Premium license for my computer, I would not be able to re-install Windows XP on hers. I consider myself an advanced Linux user, having used Debian for years. So here are my questions. I have only one day to decide whether to use Ubuntu or buy Windows: A quick search leads me to believe all the hardware listed above is supposed to work with Linux, but am I mistaken? Would Ubuntu/Xubuntu suffice (specify which one if it matters), or would I be better off paying the $200 necessary for Windows XP? Is the card game likely to run on Wine? I believe the minimum system requirement is Windows 95. Failing Wine compatibility, will VirtualBox run fast enough on such a computer (Windows 98 as the guest OS)? Are there any free card games just as good? She plays mainly Bridge, Poker, and Solitaire. Is there any "Large Fonts" option for those with poor vision? The lack of it would be a big disadvantage. BONUS: Although I would probably replace the old mouse upon a move to Ubuntu, is it even possible to get a serial mouse working?

    Read the article

  • Oracle Releases New Mainframe Re-Hosting in Oracle Tuxedo 11g

    - by Jason Williamson
    I'm excited to say that we've released our next generation of Re-hosting in 11g. In fact I'm doing some hands-on labs now for our Systems Integrators in Italy in a couple of weeks and targeting Latin America next month. If you are an SI, or Rehosting firm and are looking to become an Oracle Partner or get a better understanding of Tuxedo and how to use the workbench for rehosting...drop me a line. Oracle Tuxedo Application Runtime for CICS and Batch 11g provides a CICS API emulation and Batch environment that exploits the full range of Oracle Tuxedo's capabilities. Re-hosted applications run in a multi-node, grid environment with centralized production control. Also, enterprise integration of CICS application services benefits from an open and SOA-enabled framework. Key features include: CICS Application Runtime: Can run IBM CICS applications unchanged in an application grid, which enables the distribution of large workloads across multiple processors and nodes. This simplifies CICS administration and can scale to over 100,000 users and over 50,000 transactions per second. 3270 Terminal Server: Protects business users from change through support for tn3270 terminal emulation. Distributed CICS Resource Management: Simplifies deployment and administration by allowing customers to run CICS regions in a distributed configuration. Batch Application Runtime: Provides robust IBM JES-like job management that enables local or remote job submissions. In addition, distributed batch initiators can enable parallelization of jobs and support fail-over, shortening the batch window and helping to meet stringent SLAs. Batch Execution Environment: Helps to run IBM batch unchanged and also supports JCL functionality and all common batch utilities. Oracle Tuxedo Application Rehosting Workbench 11g provides a set of automated migration tools integrated around a central repository. The tools provide high precision which results in very low error rates and the ability to handle large applications. This enables less expensive, low-risk migration projects. Key capabilities include: Workbench Repository and Cataloguer: Ensures integrity of the migrated application assets through full dependency checking. The Cataloguer generates and maintains all relevant meta-data on source and target components. File Migrator: Supports reliable migration of datasets and flat files to an ISAM or Oracle Database 11g. This is done through the automated migration utilities for data unloading, reloading and validation. It also generates logical access functions to shield developers from data repository changes. DB2 Migrator: Similarly, this tool automates the migration of DB2 schema and data to Oracle Database 11g. COBOL Migrator: Supports migration of IBM mainframe COBOL assets (OLTP and Batch) to open systems. Adapts programs for compiler dialects and data access variations. JCL Migrator: Supports migration of IBM JCL jobs to a Tuxedo ART environment, maintaining the flow and characteristics of batch jobs.

    Read the article

  • Cutting Paper through Visualization and Collaboration

    - by [email protected]
    It's hard not to hear about "Going Green" these days. Many are working to be more environmentally conscious in their personal lives, and this has extended to the corporate world as well. I know I'm always looking for new ways. Environmental responsibility is important at Oracle too, and we have an entire section of our website dedicated to our solutions around the Eco-Enterprise. You can check it out here: http://www.oracle.com/green/index.html Perhaps the biggest and most obvious challenge in the world of business is the fact that we use so much paper. There are many good reasons why we print today too. For example: Printing off a document, spreadsheet, or CAD design that will be reviewed and marked up while on a plane Having a printout of a facility when a field engineer performs on-site maintenance During a multi-party design review where a number of people will review a drawing in a meeting room, scribbling onto a large scale drawing print to provide their collaborative comments These are just a few potential use cases, and they're valid ones. However, there's a way in which you can turn these paper processes into digital ones. AutoVue allows you to view, mark-up, and collaborate on all the data you would print. Indeed, this is the core of what AutoVue does. So if we take the examples above, we could address each as follows: Allow you to view the document, spreadsheet, or CAD drawing in AutoVue on your laptop. Even if you originally had this data vaulted in some time of system of record (like an ECM solution) and view your data from there, AutoVue allows you to "Work Offline" and take the documents you need to review on your laptop. From there, the many annotation tools in AutoVue will give you what you need to comment upon the documents that you are reviewing. The challenge with the mobile workforce is always access to information. People who perform maintenance and repair operations often are in locations with little to no Internet connectivity. However, technology is coming to these people in the form of laptops, tablet PCs, and other portable devices too. AutoVue can address situations with limited bandwidth through our streaming technology for viewing, meaning that the most up to date information can be pulled up from the central server - without the need for large data transfer. When there is no connectivity at all, the "Work Offline" option will handle this. For a design review session, the Real-Time Collaboration capabilities of AutoVue will let all the participants view the same document in a synchronized view, allowing each person to be able to mark-up the document at the same time. Since this is done in a web-based manner, not only is it not necessary to print the document, but you benefit by reducing the travel needed for these sessions. Not only are trees spared, but jet fuel as well. There are many steps involved with "Going Green", but each step is a necessary one. What we do today will directly influence our future generations, and we're looking to help.

    Read the article

  • How to setup Dual Head with "radeon" driver for R770?

    - by user1709408
    I want to make a dual head setup without xrandr but with Xinerama. I put a "Screen 1" line into xorg.conf, but the card still shows identical output on DVI-2 and DVI-3. It is important for me to use Xinerama (to glue three monitors together); that's why I decided not to use randr (randr is incompatible with Xinerama, as I read somewhere). Here is my video card (HD 4850 X2):
      lspci | grep R700
      03:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI R700 [Radeon HD 4850]
      04:00.0 Display controller: Advanced Micro Devices [AMD] nee ATI R700 [Radeon HD 4850]
    Here is how the monitors are connected:
      grep "DVI" /var/log/Xorg.0.log
      [ 1210.002] (II) RADEON(0): Output DVI-0 using monitor section Monitor0
      [ 1210.048] (II) RADEON(0): Output DVI-1 has no monitor section
      [ 1210.079] (II) RADEON(0): EDID for output DVI-0
      [ 1210.080] (II) RADEON(0): Printing probed modes for output DVI-0
      [ 1210.128] (II) RADEON(0): EDID for output DVI-1
      [ 1210.128] (II) RADEON(0): Output DVI-0 connected
      [ 1210.128] (II) RADEON(0): Output DVI-1 disconnected
      [ 1210.128] (II) RADEON(0): Output DVI-0 using initial mode 1920x1200
      [ 1210.160] (II) RADEON(1): Output DVI-2 using monitor section Monitor2
      [ 1210.215] (II) RADEON(1): Output DVI-3 has no monitor section
      [ 1210.246] (II) RADEON(1): EDID for output DVI-2
      [ 1210.247] (II) RADEON(1): Printing probed modes for output DVI-2
      [ 1210.299] (II) RADEON(1): EDID for output DVI-3
      [ 1210.300] (II) RADEON(1): Printing probed modes for output DVI-3
      [ 1210.300] (II) RADEON(1): Output DVI-2 connected
      [ 1210.300] (II) RADEON(1): Output DVI-3 connected
      [ 1210.300] (II) RADEON(1): Output DVI-2 using initial mode 1920x1200
      [ 1210.300] (II) RADEON(1): Output DVI-3 using initial mode 1920x1200
    Here is my /etc/X11/xorg.conf:
      Section "ServerFlags"
          Option "RandR" "0"
          Option "Xinerama" "1"
      EndSection
      Section "ServerLayout"
          Identifier "Three Head Layout"
          Screen "MyPrecious0"
          Screen "MyPrecious2" RightOf "MyPrecious0"
          Screen "MyPrecious3" LeftOf "MyPrecious0"
      EndSection
      Section "Screen"
          Identifier "MyPrecious0"
          Monitor "Monitor0"
          Device "Device300"
      EndSection
      Section "Screen"
          Identifier "MyPrecious2"
          Monitor "Monitor2"
          Device "Device400"
      EndSection
      Section "Screen"
          Identifier "MyPrecious3"
          Monitor "Monitor3"
          Device "Device401"
      EndSection
      Section "Device"
          Identifier "Device300"
          BusID "PCI:3:0:0"
          Screen 0
          Driver "radeon"
      EndSection
      Section "Device"
          Identifier "Device400"
          BusID "PCI:4:0:0"
          Screen 0
          Driver "radeon"
      EndSection
      Section "Device"
          Identifier "Device401"
          BusID "PCI:4:0:0"
          Screen 1
          Driver "radeon"
      EndSection
      Section "Monitor"
          Identifier "Monitor0"
      EndSection
      Section "Monitor"
          Identifier "Monitor2"
      EndSection
      Section "Monitor"
          Identifier "Monitor3"
      EndSection
    I tried to switch to the vesa driver (didn't work for me). I tried to add options like Option "ZaphodHeads" "DVI-2" and Option "ZaphodHeads" "DVI-3" into sections "Device400" and "Device401" (this didn't help, because the "ZaphodHeads" option is for randr, and randr is disabled by decision). I tried to merge sections "Device400" and "Device401" into one section and add Option "ZaphodHeads" "DVI-2,DVI-3" (see the comment about randr above). The single-section setup helps to change the log line "RADEON(1): Output DVI-3 has no monitor section" into "RADEON(1): Output DVI-3 using monitor section Monitor3", but nothing was enough to switch from screen cloning to separate screens.
    This problem (lack of documentation on the radeon driver) is similar to these:
      Radeon display driver clones monitors while using Xinerama (the moderators' decision to close that problem was wrong)
      Ubuntu 12.10 multi-monitor setup isn't working
    The problem is solvable, because this hardware worked as three-headed for me earlier with gentoo/xorg-server-1.3. Xorg -configure creates a setup for the first monitor on the first GPU. Please don't advise using fglrx/aticonfig/amdcccle (this goes against my religious beliefs).
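    For reference, this is roughly what the per-Device "ZaphodHeads" variant described above would look like (reconstructed from the description, not taken from the post, and untested - whether ZaphodHeads takes effect with RandR disabled is exactly the open question here):

      Section "Device"
          Identifier "Device400"
          BusID "PCI:4:0:0"
          Screen 0
          Driver "radeon"
          Option "ZaphodHeads" "DVI-2"
      EndSection
      Section "Device"
          Identifier "Device401"
          BusID "PCI:4:0:0"
          Screen 1
          Driver "radeon"
          Option "ZaphodHeads" "DVI-3"
      EndSection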

    Read the article

  • World Record Performance on PeopleSoft Enterprise Financials Benchmark on SPARC T4-2

    - by Brian
    Oracle's SPARC T4-2 server achieved World Record performance on Oracle's PeopleSoft Enterprise Financials 9.1 executing 20 Million Journals lines in 8.92 minutes on Oracle Database 11g Release 2 running on Oracle Solaris 11. This is the first result published on this version of the benchmark. The SPARC T4-2 server was able to process 20 million general ledger journal edit and post batch jobs in 8.92 minutes on this benchmark that reflects a large customer environment that utilizes a back-end database of nearly 500 GB. This benchmark demonstrates that the SPARC T4-2 server with PeopleSoft Financials 9.1 can easily process 100 million journal lines in less than 1 hour. The SPARC T4-2 server delivered more than 146 MB/sec of IO throughput with Oracle Database 11g running on Oracle Solaris 11. Performance Landscape Results are presented for PeopleSoft Financials Benchmark 9.1. Results obtained with PeopleSoft Financials Benchmark 9.1 are not comparable to the the previous version of the benchmark, PeopleSoft Financials Benchmark 9.0, due to significant change in data model and supports only batch. PeopleSoft Financials Benchmark, Version 9.1 Solution Under Test Batch (min) SPARC T4-2 (2 x SPARC T4, 2.85 GHz) 8.92 Results from PeopleSoft Financials Benchmark 9.0. PeopleSoft Financials Benchmark, Version 9.0 Solution Under Test Batch (min) Batch with Online (min) SPARC Enterprise M4000 (Web/App) SPARC Enterprise M5000 (DB) 33.09 34.72 SPARC T3-1 (Web/App) SPARC Enterprise M5000 (DB) 35.82 37.01 Configuration Summary Hardware Configuration: 1 x SPARC T4-2 server 2 x SPARC T4 processors, 2.85 GHz 128 GB memory Storage Configuration: 1 x Sun Storage F5100 Flash Array (for database and redo logs) 2 x Sun Storage 2540-M2 arrays and 2 x Sun Storage 2501-M2 arrays (for backup) Software Configuration: Oracle Solaris 11 11/11 SRU 7.5 Oracle Database 11g Release 2 (11.2.0.3) PeopleSoft Financials 9.1 Feature Pack 2 PeopleSoft Supply Chain Management 9.1 Feature Pack 2 PeopleSoft PeopleTools 8.52 latest patch - 8.52.03 Oracle WebLogic Server 10.3.5 Java Platform, Standard Edition Development Kit 6 Update 32 Benchmark Description The PeopleSoft Enterprise Financials 9.1 benchmark emulates a large enterprise that processes and validates a large number of financial journal transactions before posting the journal entry to the ledger. The validation process certifies that the journal entries are accurate, ensuring that ChartFields values are valid, debits and credits equal out, and inter/intra-units are balanced. Once validated, the entries are processed, ensuring that each journal line posts to the correct target ledger, and then changes the journal status to posted. In this benchmark, the Journal Edit & Post is set up to edit and post both Inter-Unit and Regular multi-currency journals. The benchmark processes 20 million journal lines using AppEngine for edits and Cobol for post processes. See Also Oracle PeopleSoft Benchmark White Papers oracle.com SPARC T4-2 Server oracle.com OTN PeopleSoft Financial Management oracle.com OTN Oracle Solaris oracle.com OTN Oracle Database 11g Release 2 Enterprise Edition oracle.com OTN Disclosure Statement Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 1 October 2012.

    Read the article

  • C#/.NET Little Wonders: The ConcurrentDictionary

    - by James Michael Hare
    Once again we consider some of the lesser known classes and keywords of C#.  In this series of posts, we will discuss how the concurrent collections have been developed to help alleviate these multi-threading concerns.  Last week’s post began with a general introduction and discussed the ConcurrentStack<T> and ConcurrentQueue<T>.  Today's post discusses the ConcurrentDictionary<T> (originally I had intended to discuss ConcurrentBag this week as well, but ConcurrentDictionary had enough information to create a very full post on its own!).  Finally next week, we shall close with a discussion of the ConcurrentBag<T> and BlockingCollection<T>. For more of the "Little Wonders" posts, see the index here. Recap As you'll recall from the previous post, the original collections were object-based containers that accomplished synchronization through a Synchronized member.  While these were convenient because you didn't have to worry about writing your own synchronization logic, they were a bit too finely grained and if you needed to perform multiple operations under one lock, the automatic synchronization didn't buy much. With the advent of .NET 2.0, the original collections were succeeded by the generic collections which are fully type-safe, but eschew automatic synchronization.  This cuts both ways in that you have a lot more control as a developer over when and how fine-grained you want to synchronize, but on the other hand if you just want simple synchronization it creates more work. With .NET 4.0, we get the best of both worlds in generic collections.  A new breed of collections was born called the concurrent collections in the System.Collections.Concurrent namespace.  These amazing collections are fine-tuned to have best overall performance for situations requiring concurrent access.  They are not meant to replace the generic collections, but to simply be an alternative to creating your own locking mechanisms. Among those concurrent collections were the ConcurrentStack<T> and ConcurrentQueue<T> which provide classic LIFO and FIFO collections with a concurrent twist.  As we saw, some of the traditional methods that required calls to be made in a certain order (like checking for not IsEmpty before calling Pop()) were replaced in favor of an umbrella operation that combined both under one lock (like TryPop()). Now, let's take a look at the next in our series of concurrent collections!For some excellent information on the performance of the concurrent collections and how they perform compared to a traditional brute-force locking strategy, see this wonderful whitepaper by the Microsoft Parallel Computing Platform team here. ConcurrentDictionary – the fully thread-safe dictionary The ConcurrentDictionary<TKey,TValue> is the thread-safe counterpart to the generic Dictionary<TKey, TValue> collection.  Obviously, both are designed for quick – O(1) – lookups of data based on a key.  If you think of algorithms where you need lightning fast lookups of data and don’t care whether the data is maintained in any particular ordering or not, the unsorted dictionaries are generally the best way to go. Note: as a side note, there are sorted implementations of IDictionary, namely SortedDictionary and SortedList which are stored as an ordered tree and a ordered list respectively.  While these are not as fast as the non-sorted dictionaries – they are O(log2 n) – they are a great combination of both speed and ordering -- and still greatly outperform a linear search. 
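    As a quick orientation before the details below, here is a minimal sketch (not from the original post) of the kind of keyed, constant-time lookup both dictionary flavors are built for, using the concurrent variant (types live in System.Collections.Concurrent):

      // Build a small lookup table; the indexer adds or overwrites the entry atomically.
      var errorNames = new ConcurrentDictionary<int, string>();
      errorNames[404] = "Not Found";
      errorNames[500] = "Internal Server Error";

      // Reads take no lock; TryGetValue avoids an exception when the key is missing.
      string name;
      if (errorNames.TryGetValue(404, out name))
      {
          Console.WriteLine("404 means " + name);
      }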
Now, once again keep in mind that if all you need to do is load a collection once and then allow multi-threaded reading you do not need any locking.  Examples of this tend to be situations where you load a lookup or translation table once at program start, then keep it in memory for read-only reference.  In such cases locking is completely non-productive. However, most of the time when we need a concurrent dictionary we are interleaving both reads and updates.  This is where the ConcurrentDictionary really shines!  It achieves its thread-safety with no common lock to improve efficiency.  It actually uses a series of locks to provide concurrent updates, and has lockless reads!  This means that the ConcurrentDictionary gets even more efficient the higher the ratio of reads-to-writes you have. ConcurrentDictionary and Dictionary differences For the most part, the ConcurrentDictionary<TKey,TValue> behaves like it’s Dictionary<TKey,TValue> counterpart with a few differences.  Some notable examples of which are: Add() does not exist in the concurrent dictionary. This means you must use TryAdd(), AddOrUpdate(), or GetOrAdd().  It also means that you can’t use a collection initializer with the concurrent dictionary. TryAdd() replaced Add() to attempt atomic, safe adds. Because Add() only succeeds if the item doesn’t already exist, we need an atomic operation to check if the item exists, and if not add it while still under an atomic lock. TryUpdate() was added to attempt atomic, safe updates. If we want to update an item, we must make sure it exists first and that the original value is what we expected it to be.  If all these are true, we can update the item under one atomic step. TryRemove() was added to attempt atomic, safe removes. To safely attempt to remove a value we need to see if the key exists first, this checks for existence and removes under an atomic lock. AddOrUpdate() was added to attempt an thread-safe “upsert”. There are many times where you want to insert into a dictionary if the key doesn’t exist, or update the value if it does.  This allows you to make a thread-safe add-or-update. GetOrAdd() was added to attempt an thread-safe query/insert. Sometimes, you want to query for whether an item exists in the cache, and if it doesn’t insert a starting value for it.  This allows you to get the value if it exists and insert if not. Count, Keys, Values properties take a snapshot of the dictionary. Accessing these properties may interfere with add and update performance and should be used with caution. ToArray() returns a static snapshot of the dictionary. That is, the dictionary is locked, and then copied to an array as a O(n) operation.  GetEnumerator() is thread-safe and efficient, but allows dirty reads. Because reads require no locking, you can safely iterate over the contents of the dictionary.  The only downside is that, depending on timing, you may get dirty reads. Dirty reads during iteration The last point on GetEnumerator() bears some explanation.  Picture a scenario in which you call GetEnumerator() (or iterate using a foreach, etc.) and then, during that iteration the dictionary gets updated.  This may not sound like a big deal, but it can lead to inconsistent results if used incorrectly.  The problem is that items you already iterated over that are updated a split second after don’t show the update, but items that you iterate over that were updated a split second before do show the update.  
    Thus you may get a combination of items that are "stale" because you iterated before the update, and "fresh" because they were updated after GetEnumerator() but before the iteration reached them.

    Let's illustrate with an example. Let's say you load up a concurrent dictionary like this:

        // load up a dictionary.
        var dictionary = new ConcurrentDictionary<string, int>();

        dictionary["A"] = 1;
        dictionary["B"] = 2;
        dictionary["C"] = 3;
        dictionary["D"] = 4;
        dictionary["E"] = 5;
        dictionary["F"] = 6;

    Then you have one task (using the wonderful TPL!) to iterate using dirty reads:

        // attempt iteration in a separate thread
        var iterationTask = new Task(() =>
        {
            // iterates using a dirty read
            foreach (var pair in dictionary)
            {
                Console.WriteLine(pair.Key + ":" + pair.Value);
            }
        });

    And one task to attempt updates in a separate thread:

        // attempt updates in a separate thread
        var updateTask = new Task(() =>
        {
            // iterates, and updates the value by one
            foreach (var pair in dictionary)
            {
                dictionary[pair.Key] = pair.Value + 1;
            }
        });

    Now that we've done this, we can fire up both tasks and wait for them to complete:

        // start both tasks
        updateTask.Start();
        iterationTask.Start();

        // wait for both to complete.
        Task.WaitAll(updateTask, iterationTask);

    Now, if you didn't know about the dirty reads, you may have expected to see the iteration before the updates (such as A:1, B:2, C:3, D:4, E:5, F:6). However, because the reads are dirty, we will quite possibly get a combination of some updated, some original. My own run netted this result:

        F:6
        E:6
        D:5
        C:4
        B:3
        A:2

    Note that, of course, iteration is not in order because ConcurrentDictionary, like Dictionary, is unordered. Also note that both E and F show the value 6. This is because the output task reached F before the update, but the updates for the rest of the items occurred before their output (probably because console output is very slow, comparatively).

    If we want to always guarantee that we will get a consistent snapshot to iterate over (that is, at the point we ask for it we see precisely what is in the dictionary, with no subsequent updates visible during iteration), we should iterate over a call to ToArray() instead:

        // attempt iteration in a separate thread
        var iterationTask = new Task(() =>
        {
            // iterates over a locked snapshot taken by ToArray()
            foreach (var pair in dictionary.ToArray())
            {
                Console.WriteLine(pair.Key + ":" + pair.Value);
            }
        });

    The atomic Try…() methods

    As you can imagine, TryAdd() and TryRemove() have few surprises. Both first check the existence of the item to determine if it can be added or removed, based on whether or not the key currently exists in the dictionary:

        // try add attempts an add and returns false if it already exists
        if (dictionary.TryAdd("G", 7))
            Console.WriteLine("G did not exist, now inserted with 7");
        else
            Console.WriteLine("G already existed, insert failed.");

    TryRemove() also has the virtue of returning the value portion of the removed entry matching the given key:

        // attempt to remove the value, if it exists it is removed and the original is returned
        int removedValue;
        if (dictionary.TryRemove("C", out removedValue))
            Console.WriteLine("Removed C and its value was " + removedValue);
        else
            Console.WriteLine("C did not exist, remove failed.");

    Now TryUpdate() is an interesting creature.
    You might think from its name that TryUpdate() first checks for an item's existence and then updates if the item exists, otherwise returning false. Well, not quite... It turns out that when you call TryUpdate() on a concurrent dictionary, you pass it not only the new value you want it to have, but also the value you expected it to have before the update. If the item exists in the dictionary, and it has the value you expected, it will update it to the new value atomically and return true. If the item is not in the dictionary or does not have the value you expected, it is not modified and false is returned.

        // attempt to update the value, if it exists and if it has the expected original value
        if (dictionary.TryUpdate("G", 42, 7))
            Console.WriteLine("G existed and was 7, now it's 42.");
        else
            Console.WriteLine("G either didn't exist, or wasn't 7.");

    The composite Add methods

    The ConcurrentDictionary also has composite add methods that can be used to perform updates and gets, with an add if the item does not exist at the time of the update or get.

    The first of these, AddOrUpdate(), allows you to add a new item to the dictionary if it doesn't exist, or update the existing item if it does. For example, let's say you are creating a dictionary of counts of stock ticker symbols you've subscribed to from a market data feed:

        public sealed class SubscriptionManager
        {
            private readonly ConcurrentDictionary<string, int> _subscriptions = new ConcurrentDictionary<string, int>();

            // adds a new subscription, or increments the count of the existing one.
            public void AddSubscription(string tickerKey)
            {
                // add a new subscription with count of 1, or update existing count by 1 if exists
                var resultCount = _subscriptions.AddOrUpdate(tickerKey, 1, (symbol, count) => count + 1);

                // now check the result to see if we just incremented the count, or inserted first count
                if (resultCount == 1)
                {
                    // subscribe to symbol...
                }
            }
        }

    Notice the update value factory Func delegate. If the key does not exist in the dictionary, the add value is used (in this case 1, representing the first subscription for this symbol), but if the key already exists, it passes the key and current value to the update delegate, which computes the new value to be stored in the dictionary. The return result of this operation is the value used (in our case: 1 if added, existing value + 1 if updated).

    Likewise, GetOrAdd() allows you to attempt to retrieve a value from the dictionary, and if the value does not currently exist in the dictionary it will insert a value. This can be handy in cases where perhaps you wish to cache data: you would query the cache to see if the item exists, and if it doesn't you would put the item into the cache for the first time:

        public sealed class PriceCache
        {
            private readonly ConcurrentDictionary<string, double> _cache = new ConcurrentDictionary<string, double>();

            // gets the cached price, or queries and caches the current price if it isn't cached yet.
            public double QueryPrice(string tickerKey)
            {
                // check for the price in the cache, if it doesn't exist it will call the delegate to create value.
                return _cache.GetOrAdd(tickerKey, symbol => GetCurrentPrice(symbol));
            }

            private double GetCurrentPrice(string tickerKey)
            {
                // do code to calculate actual true price.
            }
        }

    There are other variations of these two methods, which vary in whether a value or a factory delegate is provided, but otherwise they work much the same.

    Oddities with the composite Add methods

    The AddOrUpdate() and GetOrAdd() methods are totally thread-safe, on this you may rely, but they are not atomic. It is important to note that the methods that use delegates execute those delegates outside of the lock. This was done intentionally so that a user delegate (over which the ConcurrentDictionary has no control, of course) does not take too long and lock out other threads.

    This is not necessarily an issue, per se, but it is something you must consider in your design. The main thing to consider is that your delegate may get called to generate an item, but that item may not be the one returned! Consider this scenario: thread A calls GetOrAdd and sees that the key does not currently exist, so it calls the delegate. Now thread B also calls GetOrAdd and also sees that the key does not currently exist, and for whatever reason in this race condition its delegate completes first and it adds its new value to the dictionary. Now thread A is done and goes to get the lock, and sees that the item now exists. In this case, even though it called the delegate to create the item, it will pitch it, because an item arrived between the time it attempted to create one and the time it attempted to add it.

    Let's illustrate. Assume this totally contrived example program, which has a dictionary of char to int. In this dictionary we want to store a char and its ordinal (that is, A = 1, B = 2, etc). So for our value generator, we will simply increment the previous value in a thread-safe way (perhaps using Interlocked):

        public static class Program
        {
            private static int _nextNumber = 0;

            // the holder of the char to ordinal
            private static ConcurrentDictionary<char, int> _dictionary
                = new ConcurrentDictionary<char, int>();

            // get the next id value
            public static int NextId
            {
                get { return Interlocked.Increment(ref _nextNumber); }
            }

    Then, we add a method that will perform our insert:

        public static void Inserter()
        {
            for (int i = 0; i < 26; i++)
            {
                _dictionary.GetOrAdd((char)('A' + i), key => NextId);
            }
        }

    Finally, we run our test by starting two tasks to do this work and get the results…

        public static void Main()
        {
            // two tasks attempting to get/insert
            var tasks = new List<Task>
            {
                new Task(Inserter),
                new Task(Inserter)
            };

            tasks.ForEach(t => t.Start());
            Task.WaitAll(tasks.ToArray());

            foreach (var pair in _dictionary.OrderBy(p => p.Key))
            {
                Console.WriteLine(pair.Key + ":" + pair.Value);
            }
        }

    If you run this with only one task, you get the expected A:1, B:2, ..., Z:26. But running this in parallel you will get something a bit more complex. My run netted these results:

        A:1
        B:3
        C:4
        D:5
        E:6
        F:7
        G:8
        H:9
        I:10
        J:11
        K:12
        L:13
        M:14
        N:15
        O:16
        P:17
        Q:18
        R:19
        S:20
        T:21
        U:22
        V:23
        W:24
        X:25
        Y:26
        Z:27

    Notice that B is 3? This is most likely because both threads attempted to call GetOrAdd() at roughly the same time and both saw that B did not exist, thus they both called the generator and one thread got back 2 and the other got back 3. However, only one of those threads can get the lock at a time for the actual insert, and thus the one that generated the 3 won: the 3 was inserted and the 2 got discarded.
    This is why, with these methods, your factory delegates should avoid any logic that would be unsafe if the value they generate is pitched in favor of another item generated at roughly the same time. As such, it is probably a good idea to keep those generators as stateless as possible.

    Summary

    The ConcurrentDictionary is a very efficient and thread-safe version of the Dictionary generic collection. It has all the benefits of type-safety that its generic collection counterpart does, and in addition is extremely efficient, especially when there are more reads than writes concurrently.

    Technorati Tags: C#, .NET, Concurrent Collections, Collections, Little Wonders, Black Rabbit Coder, James Michael Hare

    Read the article

  • RTS Movement + Navigation + Destination

    - by Oliver Jones
    I'm looking into building my own simple RTS game, and I'm trying to get my head around the movement of single and multi-selected units. (Developing in Unity.) After much research, I now know that it's a bigger task than I thought, so I need to break it down. I already have an A* navigation system with static obstacles taken into account. I don't want to worry about dynamic local avoidance right now. So I guess my first breakdown question would be: how would I go about moving multiple units to the same location? Right now my units move to the location, but because they're all told to go to the same location, they start to 'fight' over one another to get there. I think there are two paths to go down:

    1) Give each individual unit a separate destination point that is close to the 'master' destination point, and get the units to move to that.
    2) Group my selected units in a flock formation, and move that entire flock towards the destination point.

    Questions about each path:

    1a) How can I go about finding a suitable destination point that is close to the master destination? What happens if there isn't a suitable destination point?
    1b) Would this be more CPU heavy, as it has to compute a path for each unit? (40 unit count.)
    2a) Is this a good idea? Not giving the units themselves a destination, but instead the flock (which holds the units within). The units within the flock could then maintain a formation (local avoidance) - though, again, local avoidance is not an issue at this current time.
    2b) I'm not sure what results I would get with a flock of 5 units versus a flock of 40 units, as the radius would be greater, which might mess up my A* navigation system. In other words: a flock of 2 units will be able to move down an alleyway, but a flock of 40 won't. But my nav system won't take that into account.

    I would appreciate any feedback. Kind regards, Ollie Jones
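    One possible starting point for option 1 - not from the original post - is to generate per-unit destinations in concentric rings around the master point and hand each unit its own target. FormationPlanner and unitSpacing below are made-up names for the sketch, and each generated point would still need to be validated against (or snapped to) a walkable node in the existing A* grid:

        // Hedged sketch: spread N units around a master destination point in concentric rings.
        using System.Collections.Generic;
        using UnityEngine;

        public static class FormationPlanner
        {
            // Returns one destination per unit, arranged in rings around the master point.
            public static List<Vector3> GetDestinations(Vector3 master, int unitCount, float unitSpacing = 1.5f)
            {
                var destinations = new List<Vector3> { master };   // first unit takes the master point itself
                int ring = 1;
                while (destinations.Count < unitCount)
                {
                    float radius = ring * unitSpacing;
                    // roughly one slot per unitSpacing of circumference on this ring
                    int slots = Mathf.Max(1, Mathf.FloorToInt(2f * Mathf.PI * radius / unitSpacing));
                    for (int i = 0; i < slots && destinations.Count < unitCount; i++)
                    {
                        float angle = (2f * Mathf.PI * i) / slots;
                        destinations.Add(master + new Vector3(Mathf.Cos(angle), 0f, Mathf.Sin(angle)) * radius);
                    }
                    ring++;
                }
                return destinations;
            }
        }

    With around 40 units this still means one path request per unit, so the per-unit approach trades CPU for simplicity; the flock approach avoids that cost but, as noted above, needs some way to feed the flock's radius back into the pathfinder.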

    Read the article

  • Now Shipping! NetAdvantage for .NET 2010 Volume 3!

    The new NetAdvantage Ultimate includes all four Line of Business user interface control sets for ASP .NET, Windows Forms, WPF and Silverlight plus two advanced Data Visualization UI control sets for WPF and Silverlight. With six NetAdvantage products in one robust package, Infragistics® gives you hundreds of controls and infinite development possibilities. Unified XAML Product Strategy-Share Code, Get More Controls In the 10.3 release, Infragistics continues to deliver code parity between the XAML platforms, WPF and Silverlight. In the line of business toolsets, Infragistics introduces the new xamSchedule™, full-featured, Outlook® 2010-style schedule controls, and the new xamDataTree™, a data bound tree view that comfortably handles tens of thousands of tree nodes. Mimicking our Silverlight Drag and Drop Framework, the WPF Drag and Drop Framework CTP empowers you to add your own rich touches to your applications. Track Users' Behaviors New to all NetAdvantage Silverlight controls is the Infragistics Analytics Framework (IGAF), which empowers you to track user behavior in RIAs running on Silverlight 4. Building on the Microsoft® Silverlight Analytics Framework, with IGAF you can analyze the user's behaviors to ensure the experience you want to deliver. NetAdvantage for Windows Forms--New Office® 2010 Ribbon and Application Menu 2010 Create new experiences with Windows Forms. Now with Office 2010 styling, NetAdvantage for Windows Forms has new features such as Microsoft® Office 2010 ribbon and enhanced Infragistics.Excel to export the contents of the high performance WinGrid™ into Microsoft Excel® 2010. The new Windows Message Support enables Infragistics standalone editor controls to process numerous Windows® OS messages, allowing them to respond just like native controls to changes in the Windows environment. Create Faster Web 2.0 Experiences with NetAdvantage for ASP .NET Infragistics continues to push the envelope to deliver the fastest ASP .NET WebForms controls available on the market. Our lightning fast ASP .NET grids are now enhanced with XPS/PDF Exporting and Summary Rows. This release also includes support for jQuery Templating (as a CTP) within our WebDataGrid™ and WebDataTree™ controls allowing you to quickly cut down overall page size. Deliver Business Intelligence with Power, Flexibility and the Office 2010 Experience NetAdvantage for WPF Data Visualization and NetAdvantage for Silverlight Data Visualization help you deliver flexible, powerful and usable end user experiences in Business Intelligence applications. Both suites include the Pivot Grid that delivers the full power of online analytical processing (OLAP) to present multi-dimensional data, sliced and diced in cross-tabulated form for end users to drill down into, interact with and easily extract meaning from the data. Mapping Made Easy 10.3 marks the official release of the WPF Data Visualization xamMap™ control to map anything and everything from geographic to geo-spacial mapping data. Map layers allow you to add successive levels of detail, navigational panes for panning in all directions, color swatch panes that facilitate value scales like Choropleth shading, and scale panes allowing users to zoom-in and out. Both toolsets introduce the first of many relationship maps! With the xamOrgChart™ CTP you can map out organizational charts of up to 50K employees, competitive brackets (think World Cup) and any other relational, organizational map your application needs. 
http://www.infragistics.com

    Read the article

  • Windows Azure Learning Plan - Architecture

    - by BuckWoody
    This is one in a series of posts on a Windows Azure Learning Plan. You can find the main post here. This one deals with what an Architect needs to know about Windows Azure.

    General Architectural Guidance
    Overview and general information about Azure - what it is, how it works, and where you can learn more.

    Cloud Computing, A Crash Course for Architects (Video) - http://www.msteched.com/2010/Europe/ARC202
    Patterns and Practices for Cloud Development - http://msdn.microsoft.com/en-us/library/ff898430.aspx
    Design Patterns, Anti-Patterns and Windows Azure - http://blogs.msdn.com/b/ignitionshowcase/archive/2010/11/27/design-patterns-anti-patterns-and-windows-azure.aspx
    Application Patterns for the Cloud - http://blogs.msdn.com/b/kashif/archive/2010/08/07/application-patterns-for-the-cloud.aspx
    Architecting Applications for High Scalability (Video) - http://www.msteched.com/2010/Europe/ARC309
    David Aiken on Azure Architecture Patterns (Video) - http://blogs.msdn.com/b/architectsrule/archive/2010/09/09/arcast-tv-david-aiken-on-azure-architecture-patterns.aspx
    Cloud Application Architecture Patterns (Video) - http://blogs.msdn.com/b/bobfamiliar/archive/2010/10/19/cloud-application-architecture-patterns-by-david-platt.aspx
    10 Things Every Architect Needs to Know about Windows Azure - http://geekswithblogs.net/iupdateable/archive/2010/10/20/slides-and-links-for-windows-azure-platform-session-at-software.aspx
    Key Differences Between Public and Private Clouds - http://blogs.msdn.com/b/kadriu/archive/2010/10/24/key-differences-between-public-and-private-clouds.aspx
    Microsoft Application Platform at a Glance - http://blogs.msdn.com/b/jmeier/archive/2010/10/30/microsoft-application-platform-at-a-glance.aspx
    Windows Azure is not just about Roles - http://vikassahni.wordpress.com/2010/11/17/windows-azure-is-not-just-about-roles/
    Example Application for Windows Azure - http://msdn.microsoft.com/en-us/library/ff966482.aspx

    Implementation Guidance
    Practical applications for the architect to consider.

    5 Enterprise steps for adopting a Platform as a Service - http://blogs.msdn.com/b/davidmcg/archive/2010/12/02/5-enterprise-steps-for-adopting-a-platform-as-a-service.aspx?wa=wsignin1.0
    Performance-Based Scaling in Windows Azure - http://msdn.microsoft.com/en-us/magazine/gg232759.aspx
    Windows Azure Guidance for the Development Process - http://blogs.msdn.com/b/eugeniop/archive/2010/04/01/windows-azure-guidance-development-process.aspx
    Microsoft Developer Guidance Maps - http://blogs.msdn.com/b/jmeier/archive/2010/10/04/developer-guidance-ia-at-a-glance.aspx
    How to Build a Hybrid On-Premise/In Cloud Application - http://blogs.msdn.com/b/ignitionshowcase/archive/2010/11/09/how-to-build-a-hybrid-on-premise-in-cloud-application.aspx
    A Common Scenario of Multi-instances in Windows Azure - http://blogs.msdn.com/b/windows-azure-support/archive/2010/11/03/a-common-scenario-of-multi_2d00_instances-in-windows-azure-.aspx
    Slides and Links for Windows Azure Platform Best Practices - http://geekswithblogs.net/iupdateable/archive/2010/09/29/slides-and-links-for-windows-azure-platform-best-practices-for.aspx
    AppFabric Architecture and Deployment Topologies guide - http://blogs.msdn.com/b/appfabriccat/archive/2010/09/10/appfabric-architecture-and-deployment-topologies-guide-now-available-via-microsoft-download-center.aspx
    Windows Azure Platform Appliance - http://www.microsoft.com/windowsazure/appliance/

    Integrating Cloud Technologies into Your Organization
    Interoperability with Open Source and other applications; business and cost decisions.

    Interoperability Labs at Microsoft - http://www.interoperabilitybridges.com/
    Windows Azure Service Level Agreements - http://www.microsoft.com/windowsazure/sla/

    Read the article

  • Oracle and Eloqua Welcome Compendium’s Content Marketing

    - by Mike Stiles
    Yesterday, Oracle announced its acquisition of Compendium, a cloud-based content marketing provider that helps companies plan, produce and deliver engaging content across multiple channels throughout their customers' lifecycle. Why? Because every part of the above paragraph speaks to where modern marketing is and where it’s headed. Customers have now been empowered, thanks to the Internet and particularly social, with access to almost limitless amounts of information about companies and products. This includes the especially influential voices of friends and objective acquaintances that have experience with the product or brand. With mobile, this info is available instantly in the palm of their hand. All of this research and influence mind you, is taking place long before a prospect will ever engage with the brand itself or one of its sales reps. So how does a brand effectively insert itself into these conversations and this flow of the customer journey? Now, more than ever, marketers must deliver relevant and engaging content across multiple channels and throughout the entire customer journey to be useful, helpful, and influential. Compendium has a data-driven content marketing platform that lines up relevant content with customer data and personas so brands can accelerate the conversion of prospects. Now think about combining that with the Oracle Eloqua Marketing Cloud, part of Oracle's comprehensive CX solution. Marketers will be able to automate content delivery across channels by aligning persona-based content with customers' digital body language. Better customer engagement, improved sales lead quality, better return on marketing investment, and higher customer loyalty. Now we’re talking. Does data-driven content marketing have an impact? Compendium customer CVENT is a SaaS company specializing in meetings management tech. They wanted to increase leads & ad performance on their blog and dramatically increase their content. They also wanted to manage the creation, workflow, promotion and distribution of that content. With Compendium, CVENT created over 9,000 content elements, and sales-ready leads grew 325%. So Oracle Eloqua helps you target audiences, know buyers, and automate multi-channel marketing campaigns. Compendium lets you plan, publish, manage and measure content across content types and channels. Now kick it up yet another notch with Oracle’s Analytics, Big Data and Social solutions, and you’re using your marketing dollars to reach the right people in the right place at the right time with the right content. And as if that weren’t enough, your customers will love you for it. @mikestiles

    Read the article

  • ASP.Net 4.5 Garbage Collection Improvement

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2013/06/24/asp.net-4.5-garbage-collection-improvement.aspx

    I just read Five Great .NET Framework 4.5 Features on CodeProject by Shivprasad Koirala. Feature 5 in his article mentions the GC background cleanup and has a good explanation of the work the GC has to do for ASP.NET on the server.

    "Garbage collector is one real heavy task in a .NET application. And it becomes heavier when it is an ASP.NET application. ASP.NET applications run on the server and a lot of clients send requests to the server thus creating loads of objects, making the GC really work hard for cleaning up unwanted objects."

    "To overcome the above problem, server GC was introduced. In server GC there is one more thread created which runs in the background. This thread works in the background and keeps cleaning…objects thus minimizing the load on the main GC thread. Due to double GC threads running, the main application threads are less suspended, thus increasing application throughput. To enable server GC, we need to use the gcServer XML tag and enable it to true."

        <configuration>
          <runtime>
            <gcServer enabled="true"/>
          </runtime>
        </configuration>

    This is not done by default. The MSDN information page says: "There are only two garbage collection options, workstation or server. For single-processor computers, the default workstation garbage collection should be the fastest option. Either workstation or server can be used for two-processor computers. Server garbage collection should be the fastest option for more than two processors. Use the GCSettings.IsServerGC property to determine if server garbage collection is enabled."

    "In the .NET Framework 4 and earlier versions, concurrent garbage collection is not available when server garbage collection is enabled. Starting with the .NET Framework 4.5, server garbage collection is concurrent. To use non-concurrent server garbage collection, set the <gcServer> element to true and the <gcConcurrent> element to false."

    So if you're using ASP.NET 4.5 and have a multi-core server, you should try turning on server garbage collection and do some profiling to see if it improves the performance of your site.
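    As a quick check of which mode actually took effect (a minimal sketch, not from the original post), you can log the current GC configuration at startup; GCSettings lives in System.Runtime, and the IsServerGC property is available from .NET 4.5:

        using System;
        using System.Runtime;

        class GcModeCheck
        {
            static void Main()
            {
                // True when server GC has been enabled via <gcServer enabled="true"/> (or by the host).
                Console.WriteLine("Server GC: " + GCSettings.IsServerGC);

                // On .NET 4.5 server GC is also background (concurrent) unless <gcConcurrent> is disabled.
                Console.WriteLine("Latency mode: " + GCSettings.LatencyMode);
            }
        }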

    Read the article

  • Silverlight Cream for April 18, 2010 -- #840

    - by Dave Campbell
    In this Issue: CrocusGirl, Giorgetti Alessandro(-2-), smartyP, Pete Brown, David Poll, David Anson, and Bill Reiss. Shoutouts: Yasser Makram has a post up discussing Human Centered ALM with Telerik TeamPulse and Team Foundation Server. I saw this demo'd at DevConnnections and it definitely deserves a look. Shawn Wildermuth posted his materials from DevConnections all on one post: Back from DevConnections with SourceCode Shawn Wildermuth also posted an Updated RIA Services + MVVM Example Laurent Bugnion announced a Small change in MVVM Light Toolkit templates for Blend 4 RC Laurent Bugnion also announced Crowdsourcing MVVM Light Toolkit support The Expression Blend and Design Blog announced Expression Blend 4 Release Candidate Available! Dan Wahlin posted Slides and Code from my Silverlight MVVM Talk at DevConnections From SilverlightCream.com: Windows Phone 7 Design Notes – Part#1: Metro Resources CrocusGirl has blogged about WP7 and the Metro design concept. She has a bunch of resources up and information about Metro and the design methodology. Stay tuned for Part 2. Silverlight, M-V-VM ... and IoC - part 1 Giorgetti Alessandro has part 1 of a multi-parter up on IoC and MVVM for LOB apps in Silverlight ... a pretty quick into to MVVM. Silverlight, M-V-VM … and IoC – part 2 Giorgetti Alessandro also posted part 2 of his series, and this one digs deeper into the code and discusses what goes into the view and the model. Using the Facebook Developer Toolkit With Windows Phone 7 smartyP has a post addressing using the Facebook Developer toolkit with WP7... it took some hacking, and he explains it, and provides it for download. Silverlight and WPF Tip: Fitting items in a ListBox Having trouble fitting items into a Listbox in Silverlight or WPF without getting horizontal scrollbars? Pete Brown has a solution for you in 4 steps. Making printing easier in Silverlight 4 David Poll has a great detailed post up about printing in SL4, taking it to building a higher-level API that allows printing of collections... all demos and source included. Detailed information about the Silverlight Toolkit's new stacked series support David Anson details the improvements to Data Visualization in the Toolkit release from last week. Space Rocks game step 9: the asteroid sprite Bill Reiss has his latest game episode up and this time he's putting asteroid sprites in play. No placement, movement, or collisions yet, but it's a beginning. And, he's updated all his code to Silverlight 4. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • PASS 13 Dispatches: moving to the cloud

    - by Tony Davis
    PASS Summit 13, Day 1 keynote by Quentin Clarke, and we're hearing about "redefining mission critical in the cloud". With a move to the Windows Azure cloud comes the promise of capacity on demand, automatic HA, backups, patching and so on, as well as passing responsibility to MS for managing hardware and upgrades. However, for many databases and applications the best route to the cloud is not necessarily obvious.

    For most, the path of least resistance is IaaS – SQL Server in an Azure VM. It removes the hardware burden, but you still have to manage your databases, and implementing HA for SQL Server is your responsibility. Also, scaling up comes at quite a cost – the biggest VM (8 CPU cores, 56 GB RAM, 16 1TB drives with 500 IOPS each) weighs in at over $4500 per month.

    With PaaS, in the form of Windows Azure SQL Database, you get a "3-copies replica set" so HA comes out of the box, and the majority of the administration burden is removed, but you are moving your database into a very different environment. For a start, it's a shared environment, with other customers using the same compute nodes in the cluster, and potentially even sharing the same database (multi-tenancy). Unless you pay for SQL DB Premium edition, the resources available for your workload will depend on how nicely others "play" in the shared environment. You'll potentially need to do a lot of tuning and application rewriting to avoid throttling issues, optimising application-database communication to deal with the increased latency between the two, and so on. You'll need aggressive application caching. You'll also need retry logic to deal with (expected) node failure and the need to reconnect.

    In Tuesday's PASS Summit pre-con from the SQLCAT team, they spent a lot of time covering some of the telemetric techniques (collecting the necessary monitoring data into Azure storage) used to perform capacity planning and work out the hotspots and bottlenecks in your cloud applications. Tools like WAD (Windows Azure Diagnostics), performance counters, SQL Database DMVs, and others will be essential. Of course, to truly exploit the vast horizontal scaling that is available from the existence of thousands of compute nodes, you'll also need to consider how to "shard" your data so Azure can move it between nodes at will.

    Finding the right path to the Cloud isn't easy, but it's coming. I spoke to people one year ago who saw no real benefit in trying to move their infrastructure and databases to the cloud, but now at their company, it's the conversation that won't go away.

    Tony.
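    Not from Tony's post, but as a rough illustration of the kind of retry logic involved: a minimal sketch that retries a SQL Database operation with exponential back-off. The attempt count and delays are arbitrary, and a production implementation would inspect the SqlException error numbers (or use the Transient Fault Handling Application Block) to retry only genuinely transient errors:

        // Minimal, illustrative retry wrapper; names and limits here are assumptions for the sketch.
        using System;
        using System.Data.SqlClient;
        using System.Threading;

        public static class RetryHelper
        {
            public static T ExecuteWithRetry<T>(Func<T> operation, int maxAttempts = 4)
            {
                for (int attempt = 1; ; attempt++)
                {
                    try
                    {
                        return operation();
                    }
                    catch (SqlException)
                    {
                        if (attempt >= maxAttempts)
                            throw;

                        // Back off before reconnecting; a real implementation would check the
                        // error number to confirm the failure is transient (throttling, failover).
                        Thread.Sleep(TimeSpan.FromSeconds(Math.Pow(2, attempt)));
                    }
                }
            }
        }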

    Read the article

< Previous Page | 133 134 135 136 137 138 139 140 141 142 143 144  | Next Page >