Search Results

Search found 10755 results on 431 pages for 'cluster shared volume'.


  • Windows Phone – A beautiful phone which I admire but I don’t recommend to friends and family

    - by Gopinath
    Microsoft’s Windows Phones are the most beautiful phones I’ve seen. Look at the photo Microsoft shared on their Facebook page today: it’s gorgeous. Windows Phones come in vibrant colors and the user interface is very lively. Put an iPhone, an Android phone and a Windows Phone on a table, and the Windows Phone definitely stands out. The Android and iOS interfaces are routine: a bunch of app icons arranged in rows across multiple screens. Windows Phone is very different; the live tiles concept is mesmerizing. I love Windows Phone, but I neither buy one nor recommend it to family and friends! Why? Because it does not have all the apps I need.

    Microsoft advertises that Windows Phone has 100K apps in its Windows Marketplace. It’s true that more than 100K apps are available for Windows Phone, but not many of them are really useful, and most of the popular apps I use on Android are missing. When I say this to my friends at Microsoft they don’t agree, and one of them asked me to list the apps that are not available. So for him, today I spent an hour scanning through the apps installed on my Google Nexus and searching for the same apps in the Windows Marketplace. As expected, many of them are not available. Here is the list of my favorite Android apps that are not available for Windows Phone:
    - Mint: I use this app more than any of the banking apps installed on my mobile. It’s one app to keep a tab on all my expenses and income, and the best money management and tracking app.
    - Google Chrome: the web without Chrome is too boring, whether on the desktop or on mobile. IE is too heavy, Firefox is losing its grip, and Chrome is the new darling of the web.
    - Pulse, Flipboard: two of the best apps for reading news and following the content of favorite blogs.
    - Dropbox: syncs content across devices and provides access to your content on any device. It really does not matter what your gadget is (mobile, tablet or computer); Dropbox lets you access your content.
    - GMail, Google Maps: should I even say how important these two apps are in our day-to-day life?
    - Vonage Extension: for around 30 bucks a month, Vonage provides landline service in the USA, unlimited calls to India and many other countries, and the Vonage Extension app that lets an Android/iOS mobile make unlimited international calls for free. Without the Vonage Extension app, I’m almost cut off from my family and friends back home in India.
    - Instagram: the most popular camera app, used by everyone from the common man to celebrities.
    - Raaga, Dhingana: music is part and parcel of life, and these two are the most popular apps for listening to Indian music.
    - Quora: the place where most of the sensible discussions on the web happen.
    - Google Analytics, Google AdSense: I’m a blogger, and these two apps mean a lot to me.
    The list goes on and on! There are many more useful apps that are not available on Windows Phone, such as TuneIn, MyTWC, Chrome To Phone, and Google Voice. Without all these apps, Windows Phone is just another old Nokia phone. Even though Windows Phone is the most beautiful phone, it needs apps to attract customers. Without apps, a smartphone is more or less the dumb feature phone we loved to use before the release of the iPhone. I hope that in a year or two the beautiful Windows Phone will have all the missing apps. When that happens I’ll buy a phone for myself and recommend it to my family and friends. But till then I prefer to stay away.

    Read the article

  • What’s New for Oracle Commerce? Executive Q&A with John Andrews, VP Product Management, Oracle Commerce

    - by Katrina Gosek
    Oracle Commerce was positioned as a leader by Gartner in the Magic Quadrant for E-Commerce for the fifth time. This inspired me to sit down with Oracle Commerce VP of Product Management, John Andrews, to get his perspective on what continues to make Oracle a leader in the industry and what’s new for Oracle Commerce in 2013.

    Q: Why do you believe Oracle Commerce continues to be a leader in the industry?
    John: Oracle has a great acquisition strategy: it brings best-of-breed technologies into the product fold and then continues to grow and innovate them. This is particularly true with the products unified into the Oracle Commerce brand. Oracle acquired ATG in late 2010, and then Endeca in late 2011. This means that under the hood of Oracle Commerce you have market-leading technologies for cross-channel commerce and customer experience, both designed and developed in direct response to the unique challenges online businesses face. And we continue to innovate on capabilities core to what our customers need to be successful: contextual and personalized experience delivery, merchant-inspired tools, and an architecture built for performance and scalability.

    Q: It’s not a slow-moving industry. What are you doing to keep the pace of innovation at Oracle Commerce?
    John: Oracle owes our customers the most innovative commerce capabilities. By unifying the core components of ATG and Endeca we are delivering on this promise. Oracle Commerce continues to innovate and redefine how commerce is done, in a way that drives business results and keeps customers coming back for experiences tailored just for them. Our January and May 2013 releases not only marked the seventh significant release for the solution since the acquisitions of ATG and Endeca; they also demonstrate rapid and significant progress on unifying the commerce and customer experience capabilities of the two commerce technologies.

    Q: Can you tell us what was notable about these latest releases under the Oracle Commerce umbrella?
    John: Specifically, our latest product innovations give businesses selling online the ability to get to market faster with more personalized commerce experiences in the following ways:
    - Mobile: the latest Commerce Reference Application in this release offers a wider range of examples for online businesses to leverage for iOS development, and specifically new iPad reference capabilities. This release marks the first release of the iOS Universal application that serves both iPhone and iPad devices from a single download or binary. Business users can now drive page content management and the layout of search results and category pages, as well as create additional storefront elements such as categories, facets/dimensions, and breadcrumbs through the Experience Manager tools.
    - Cross-Channel Commerce: key commerce platform capabilities have been added to support cross-channel commerce, including an expanded inventory model to maintain inventory for stores, pickup in store, and web-based returns. Online businesses with in-store operations can now offer advanced shipping options on the web and make returns and exchange logic easily available on the web.
    - Multi-Site Capabilities: significant enhancements to the Commerce Platform multi-site architecture, first introduced in 2010, allow business users to quickly launch and manage multiple sites on the same cluster and share data, carts, and other components. With this latest release, business users can now partition or share customer profiles, control users’ site-based access, and manage personalization assets using site groups.
    - Internationalization: continued language support and enhancements for business user tools, as well as search and navigation. Guided Search now supports 35 total languages, with 11 new languages (including Danish, Arabic, Norwegian, and Serbian Cyrillic) added in this release. Commerce Platform tools now include localized support for 17 locales, with 4 new languages (Danish, Portuguese (European), Finnish, and Thai). No development or customization is required for business users to use the applications in any of these supported languages.
    - Business Tool Experience: valuable new Commerce Merchandising features include a new workflow for making emergency changes quickly and increased visibility into promotion rules and qualifications in preview mode. The Oracle Commerce business tools continue to become more feature-rich, providing intuitive, easy-to-use yet powerful capabilities that let business users manage content and the shopping experience.
    - Commerce & Experience Unification: demonstrable unification of commerce and customer experience capabilities, including productized cartridges that provide supported integration between the Commerce Platform and Experience Management tools, cross-channel returns, Oracle Service Cloud integration, and an integrated iPad application.
    The mission guiding our product development is to deliver differentiated, personalized user experiences across any device in a contextual manner, and to give the business the best tools to tune and optimize those user experiences to meet their business objectives. We also need to do this in a way that is operationally efficient for the business, keeping the overall total cost of ownership low, yet allows the business to expand, whether to new business models, geographies or brands.
    To learn more about the latest Oracle Commerce releases and mission, visit the links below:
    - Hear more from John about the Oracle Commerce mission
    - Hear from Oracle Commerce customers
    - Documentation on the new releases
    - Listen to the Oracle ATG Commerce 10.2 Webcast
    - Listen to the Oracle Endeca Commerce 3.1.2 Webcast

    Read the article

  • Agile Database Techniques: Effective Strategies for the Agile Software Developer – book review

    - by DigiMortal
    Agile development requires a mind shift, and developers are not the only ones who must be agile. Every chain is only as strong as its weakest link, and the same goes for development teams. Agile Database Techniques: Effective Strategies for the Agile Software Developer by Scott W. Ambler is a book that calls on data professionals, too, to be part of agile development. DBAs are often in a situation where they are not part of application development, and later they have to live with a large set of applications that all use databases in different ways. Only some of these applications are unproblematic when you look at what the database server has to do to serve them. I have seen many applications that abuse database servers because the developers have no clue what is going on in the database (~3K queries to the database per web application request: have you seen something like this? I have…).

    Agile Database Techniques covers some object and database design technologies and gives development teams suggestions about topics where they need help or assistance from DBAs. The book is also good reading for DBAs, who usually are not very strong in object technologies. You can take this book as a bridge between these two worlds. I think teams that build object applications on top of databases should buy this book and try out at least one or two projects with Ambler’s suggestions.

    Table of contents
    Foreword by Jon Kern. Foreword by Douglas K. Barry. Acknowledgments. Introduction. About the Author.
    Part One: Setting the Foundation.
    - Chapter 1: The Agile Data Method.
    - Chapter 2: From Use Cases to Databases — Real-World UML.
    - Chapter 3: Data Modeling 101.
    - Chapter 4: Data Normalization.
    - Chapter 5: Class Normalization.
    - Chapter 6: Relational Database Technology, Like It or Not.
    - Chapter 7: The Object-Relational Impedance Mismatch.
    - Chapter 8: Legacy Databases — Everything You Need to Know But Are Afraid to Deal With.
    Part Two: Evolutionary Database Development.
    - Chapter 9: Vive L’Évolution.
    - Chapter 10: Agile Model-Driven Development (AMDD).
    - Chapter 11: Test-Driven Development (TDD).
    - Chapter 12: Database Refactoring.
    - Chapter 13: Database Encapsulation Strategies.
    - Chapter 14: Mapping Objects to Relational Databases.
    - Chapter 15: Performance Tuning.
    - Chapter 16: Tools for Evolutionary Database Development.
    Part Three: Practical Data-Oriented Development Techniques.
    - Chapter 17: Implementing Concurrency Control.
    - Chapter 18: Finding Objects in Relational Databases.
    - Chapter 19: Implementing Referential Integrity and Shared Business Logic.
    - Chapter 20: Implementing Security Access Control.
    - Chapter 21: Implementing Reports.
    - Chapter 22: Realistic XML.
    Part Four: Adopting Agile Database Techniques.
    - Chapter 23: How You Can Become Agile.
    - Chapter 24: Bringing Agility into Your Organization.
    Appendix: Database Refactoring Catalog. References and Suggested Reading. Index.

    Read the article

  • New Communications Industry Data Model with "Factory Installed" Predictive Analytics using Oracle Da

    - by charlie.berger
    Oracle Introduces Oracle Communications Data Model to Provide Actionable Insight for Communications Service Providers
    We’ve integrated pre-installed analytical methodologies with the new Oracle Communications Data Model to deliver automated, simple, yet powerful predictive analytics solutions for customers. Churn, sentiment analysis, identifying customer segments: all things that can be anticipated and hence preconceived and implemented inside an application. Read on for more information!
    TM Forum Management World, Nice, France - 18 May 2010

    News Facts
    To help communications service providers (CSPs) manage and analyze rapidly growing data volumes cost effectively, Oracle today introduced the Oracle Communications Data Model. With the Oracle Communications Data Model, CSPs can achieve rapid time to value by quickly implementing a standards-based enterprise data warehouse that features communications industry-specific reporting, analytics and data mining. The combination of the Oracle Communications Data Model, Oracle Exadata and the Oracle Business Intelligence (BI) Foundation represents the most comprehensive data warehouse and BI solution for the communications industry. Also announced today, Hong Kong Broadband Network enhanced its data warehouse system, going live on Oracle Communications Data Model in three months. The leading provider increased its subscriber base by 37 percent in six months and reduced customer churn to less than one percent.

    Product Details
    Oracle Communications Data Model provides an industry-specific schema and embedded analytics that address key areas such as customer management, marketing segmentation, product development and network health. CSPs can efficiently capture and monitor critical data and transform it into actionable information to support the development and delivery of next-generation services using:
    - More than 1,300 industry-specific measurements and key performance indicators (KPIs) such as network reliability statistics, provisioning metrics and customer churn propensity.
    - Embedded OLAP cubes for extremely fast dimensional analysis of business information.
    - Embedded data mining models for sophisticated trending and predictive analysis.
    - Support for multiple lines of business, such as cable, mobile, wireline and Internet, which can be easily extended to support future requirements.
    With Oracle Communications Data Model, CSPs can jump-start the implementation of a communications data warehouse in line with communications-industry standards, including the TM Forum Information Framework (SID), formerly known as the Shared Information Model. Oracle Communications Data Model is optimized for any Oracle Database 11g platform, including Oracle Exadata, which can improve call data record query performance by 10x or more.

    Supporting Quotes
    "Oracle Communications Data Model covers a wide range of business areas that are relevant to modern communications service providers and is a comprehensive solution - with its data model and pre-packaged templates including BI dashboards, KPIs, OLAP cubes and mining models. It helps us save a great deal of time in building and implementing a customized data warehouse and enables us to leverage the advanced analytics quickly and more effectively," said Yasuki Hayashi, executive manager, NTT Comware Corporation.
    "Data volumes will only continue to grow as communications service providers expand next-generation networks, deploy new services and adopt new business models. They will increasingly need efficient, reliable data warehouses to capture key insights on data such as customer value, network value and churn probability. With the Oracle Communications Data Model, Oracle has demonstrated its commitment to meeting these needs by delivering data warehouse tools designed to fill communications industry-specific needs," said Elisabeth Rainge, program director, Network Software, IDC.
    "The TM Forum Conformance Mark provides reassurance to customers seeking standards-based, and therefore cost-effective and flexible, solutions. TM Forum is extremely pleased to work with Oracle to certify its Oracle Communications Data Model solution. Upon successful completion, this certification will represent the broadest and most complete implementation of the TM Forum Information Framework to date, with more than 130 aggregate business entities," said Keith Willetts, chairman and chief executive officer, TM Forum.

    Supporting Resources
    - Oracle Communications
    - Oracle Communications Data Model Data Sheet
    - Oracle Communications Data Model Podcast
    - Oracle Data Warehousing
    - Oracle Communications on YouTube, Delicious, Facebook, Twitter and LinkedIn
    - Oracle Database on Twitter
    - The Data Warehouse Insider Blog

    Read the article

  • Restore Your PC from Windows Home Server

    - by Mysticgeek
    If your computer crashes, or you get a virus infection that makes it unrecoverable, doing a clean install can be a hassle, let alone getting your data back. If you’re backing up your computers to Windows Home Server, you can completely restore them to the last successful backup.
    Note: For this process to work, the PC you want to restore must be connected to your network via Ethernet. If it is connected wirelessly, the restore won’t work.

    Restore a PC from Windows Home Server
    On the computer you want to restore, pop in the Windows Home Server Home Computer Restore disc and boot from it. If you don’t have one already made, you can easily make one following these instructions; we have also included the link to the restore disc below. Boot from the CD, then select whether your machine has 512MB of RAM or more. The disc will initialize, then you choose your language and keyboard settings. If everything goes correctly, your network card will be detected and you can continue. However, if it isn’t detected, as in our example, click on the Show Details button. In the Detect Hardware screen click on the Install Drivers button.
    Now you will need a USB flash drive with the correct drivers on it. It has to be a flash drive or a floppy (if you happen to still have one of those) because you can’t take out the Restore CD. To make sure you have the correct drivers on the USB flash drive, open the Windows Home Server Console on another computer on your network. In the Computers and Backup section, right-click on the computer you want to restore and select View Backups. Select the backup you want to restore from and click the Open button in the Restore or View Files section. Now drag the entire contents of the folder named Windows Home Server Drivers for Restore to the USB flash drive.
    Back on the machine you’re trying to restore, insert the USB flash drive with the correct drivers and click the Scan button. Wait a few moments while the drivers are found, then click OK, then Continue.
    The Restore Computer Wizard starts up. Enter your home server password and click Next. Select the computer you want to restore; if it isn’t selected by default you can pull it up from the dropdown list under Another Computer. Make certain you’re selecting the correct machine. Now select the backup you want to restore. In this example we only have one, but chances are you’ll have several; if you have several backups to choose from, you might want to check out their details. Now you can select the disk from the backup and restore it to the destination volume. If you need to initialize a disk, change a drive letter, or perform other disk management tasks, click on Run Disk Manager. For example, we want to change the destination drive letter to (C:).
    After you’ve made all the changes to the destination disk, you can continue with the restore process. If everything looks correct, confirm the restore configuration; if you need to make any changes at this point, you can still go back and make them. Now Windows Home Server will restore your drive. The amount of time it takes will vary depending on the amount of data you have to restore, your network connection speed, and your hardware. You are notified when the restore completes successfully. Click Finish and the PC will reboot, restored and working correctly. All the updates, programs, and files that were saved to the last successful backup will be back; anything you installed after that backup will be gone.
    If you have your computers set to back up every night, then hopefully that won’t be a big issue.

    Conclusion
    Backing up the computers on your network to Windows Home Server is a valuable tool in your backup strategy. Sometimes you may only need to restore a couple of files; we’ve covered how to restore them from backups on WHS, and that works really well. If the unthinkable happens and you need to restore the entire computer, WHS makes that easy too.
    Download Windows Home Server Home Computer Restore CD

    Read the article

  • How do I fix the Software Center after installing the Linux Mint MATE desktop?

    - by tinuz
    I installed the MATE desktop using this manual, but now I can't open my Ubuntu Software Center or the settings in the Update Manager. I removed the MATE desktop, but that didn't fix the problem. I also reinstalled the Software Center, software-properties-gtk and software-properties-common using:

        sudo apt-get update; sudo apt-get --purge --reinstall install software-center software-properties-common software-properties-gtk

    But when using this line I get the following error:

        Reading package lists... Done
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        0 upgraded, 0 newly installed, 3 reinstalled, 0 to remove and 0 not upgraded.
        Need to get 0 B/735 kB of archives.
        After this operation, 0 B of additional disk space will be used.
        (Reading database ... 304824 files and directories currently installed.)
        Preparing to replace software-center 5.0.2 (using .../software-center_5.0.2_all.deb) ...
        Unpacking replacement software-center ...
        Preparing to replace software-properties-common 0.81.13.1 (using .../software-properties-common_0.81.13.1_all.deb) ...
        Unpacking replacement software-properties-common ...
        Preparing to replace software-properties-gtk 0.81.13.1 (using .../software-properties-gtk_0.81.13.1_all.deb) ...
        Unpacking replacement software-properties-gtk ...
        Processing triggers for desktop-file-utils ...
        Processing triggers for gnome-menus ...
        Processing triggers for bamfdaemon ...
        Rebuilding /usr/share/applications/bamf.index...
        Processing triggers for hicolor-icon-theme ...
        Processing triggers for man-db ...
        Processing triggers for shared-mime-info ...
        Unknown media type in type 'all/all'
        Unknown media type in type 'all/allfiles'
        Unknown media type in type 'uri/mms'
        Unknown media type in type 'uri/mmst'
        Unknown media type in type 'uri/mmsu'
        Unknown media type in type 'uri/pnm'
        Unknown media type in type 'uri/rtspt'
        Unknown media type in type 'uri/rtspu'
        Unknown media type in type 'interface/x-winamp-skin'
        Setting up software-center (5.0.2) ...
        Traceback (most recent call last):
          File "/usr/sbin/update-software-center", line 38, in <module>
            from softwarecenter.db.update import rebuild_database
          File "/usr/share/software-center/softwarecenter/db/update.py", line 59, in <module>
            from softwarecenter.db.database import parse_axi_values_file
          File "/usr/share/software-center/softwarecenter/db/database.py", line 26, in <module>
            from softwarecenter.db.application import Application
          File "/usr/share/software-center/softwarecenter/db/application.py", line 25, in <module>
            from softwarecenter.backend.channel import is_channel_available
          File "/usr/share/software-center/softwarecenter/backend/channel.py", line 25, in <module>
            from softwarecenter.distro import get_distro
          File "/usr/share/software-center/softwarecenter/distro/__init__.py", line 165, in <module>
            distro_instance=_get_distro()
          File "/usr/share/software-center/softwarecenter/distro/__init__.py", line 148, in _get_distro
            module = __import__(distro_id, globals(), locals(), [], -1)
        ImportError: No module named LinuxMint
        Setting up software-properties-common (0.81.13.1) ...
        Setting up software-properties-gtk (0.81.13.1) ...

    Is there a way to fix this problem without having to reinstall Ubuntu 11.10? Thanks in advance, tinuz

    Read the article

  • Localization with ASP.NET MVC ModelMetadata

    - by kazimanzurrashid
    When using DisplayFor/EditorFor, ASP.NET MVC has built-in support for showing localized validation messages, but no support for showing the associated label in localized text, unless you are using .NET 4.0 with MVC Futures. Let’s say you are creating a create form for Product, where you support both English and German, like the following (screenshots in the original post show the English and German versions of the form).
    I have recently added a few helpers for localization to MvcExtensions; let’s see how we can use them to localize the form. As I have mentioned in the past, I am not a big fan of decorating classes with attributes, which is the recommended way in ASP.NET MVC. Instead, we will use the fluent configuration of MvcExtensions (similar to FluentNHibernate or EF Code First) to configure our view models. For the form above we will use:

        public class ProductEditModelConfiguration : ModelMetadataConfiguration<ProductEditModel>
        {
            public ProductEditModelConfiguration()
            {
                Configure(model => model.Id).Hide();

                Configure(model => model.Name).DisplayName(() => LocalizedTexts.Name)
                                              .Required(() => LocalizedTexts.NameCannotBeBlank)
                                              .MaximumLength(64, () => LocalizedTexts.NameCannotBeMoreThanSixtyFourCharacters);

                Configure(model => model.Category).DisplayName(() => LocalizedTexts.Category)
                                                  .Required(() => LocalizedTexts.CategoryMustBeSelected)
                                                  .AsDropDownList("categories", () => LocalizedTexts.SelectCategory);

                Configure(model => model.Supplier).DisplayName(() => LocalizedTexts.Supplier)
                                                  .Required(() => LocalizedTexts.SupplierMustBeSelected)
                                                  .AsListBox("suppliers");

                Configure(model => model.Price).DisplayName(() => LocalizedTexts.Price)
                                               .FormatAsCurrency()
                                               .Required(() => LocalizedTexts.PriceCannotBeBlank)
                                               .Range(10.00m, 1000.00m, () => LocalizedTexts.PriceMustBeBetweenTenToThousand);
            }
        }

    As you can see, we are using Func<string> to set the localized text; this is just an overload of the regular string-based method. There are a few more methods in the ModelMetadata configuration which accept this Func<string> where localization can be applied, such as Description, Watermark, ShortDisplayName etc. LocalizedTexts is just a regular resource; we have both English and German versions (shown in the original post’s screenshots).
    Now let’s see the view markup:

        <%@ Page Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<Demo.Web.ProductEditModel>" %>
        <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
            <%= LocalizedTexts.Create %>
        </asp:Content>
        <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
            <h2><%= LocalizedTexts.Create %></h2>
            <%= Html.ValidationSummary(false, LocalizedTexts.CreateValidationSummary) %>
            <% Html.EnableClientValidation(); %>
            <% using (Html.BeginForm()) { %>
                <fieldset>
                    <%= Html.EditorForModel() %>
                    <p>
                        <input type="submit" value="<%= LocalizedTexts.Create %>" />
                    </p>
                </fieldset>
            <% } %>
            <div>
                <%= Html.ActionLink(LocalizedTexts.BackToList, "Index") %>
            </div>
        </asp:Content>

    As you can see, we are using the same LocalizedTexts for the other parts of the view which are not included in the ModelMetadata, like the page title and button text. We are also using EditorForModel instead of EditorFor for individual fields; both are supported. One of the added benefits of the fluent-syntax-based configuration is that we get full compile-time checking for our resources, as we are not depending on string-based resource names as in ASP.NET MVC. You will find the complete localized CRUD example in the MvcExtensions sample folder. That’s it for today.
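    Since the post only shows the resource files in screenshots, here is a minimal hand-written stand-in for the designer-generated LocalizedTexts class, assuming a Demo.Web namespace and a LocalizedTexts.resx containing the keys used above; in a real project Visual Studio generates this class for you, so treat it purely as an illustration:

        using System.Globalization;
        using System.Resources;

        namespace Demo.Web
        {
            // Hypothetical stand-in for the designer-generated resource class;
            // the real one is generated from LocalizedTexts.resx (and its
            // LocalizedTexts.de.resx counterpart for German).
            public static class LocalizedTexts
            {
                private static readonly ResourceManager Manager =
                    new ResourceManager("Demo.Web.LocalizedTexts", typeof(LocalizedTexts).Assembly);

                // Each property resolves against the current UI culture, which is
                // what lets the Func<string> overloads pick up English or German.
                public static string Name
                {
                    get { return Manager.GetString("Name", CultureInfo.CurrentUICulture); }
                }

                public static string NameCannotBeBlank
                {
                    get { return Manager.GetString("NameCannotBeBlank", CultureInfo.CurrentUICulture); }
                }

                // ...one property per resource key used in the configuration above.
            }
        }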

    Read the article

  • Installing SOA Suite 11.1.1.3

    - by James Taylor
    With the release of Oracle SOA Suite 11.1.1.3 last week (28 April 2010) I thought I would attempt to implement a complete SOA environment with SOA Suite, BPM and OSB on the WLS infrastructure. One major point of difference with 11.1.1.3 is that it is released as a point release, so you must have 11.1.1.2 installed first and then upgrade it to 11.1.1.3. This post performs the upgrade on Linux; if you are upgrading on Windows you will need to substitute the directories and files accordingly. This post assumes that you already have SOA Suite 11.1.1.2 installed.
    1. Download the 11.1.1.3 software (WLS 11.1.1.3, RCU 11.1.1.3, SOA Suite 11.1.1.3, OSB 11.1.1.3) from http://www.oracle.com/technology/software/products/middleware/htdocs/fmw_11_download.html and copy the files to a staging area. For the purpose of this document the staging area is /u01/stage.
    2. Shut down your existing SOA Suite 11.1.1.2 environment.
    3. Execute the WLS 11.1.1.3 install from the stage directory: wls1033_linux32.bin
    4. Choose the existing 11.1.1.2 Middleware Home.
    5. Ignore the security update notification.
    6. Accept the default products to be upgraded.
    7. The upgrade of WebLogic has now been completed.
    8. Upgrade the SOA Suite database schemas using the RCU utility. Unzip the RCU utility into the staging area and run the install: /u01/stage/rcuHome/bin/rcu
    9. Drop the existing repository and provide connection details.
    10. Install SOA Suite patch set 11.1.1.3. Unzip the SOA Suite patch set and execute the runInstaller with the following command: /u01/stage/Disk1/runInstaller -jreLoc $MW_HOME/jdk160_18/jre
    11. Choose the existing 11.1.1.2 Middleware Home.
    12. Start the install.
    13. Your SOA Suite install should now be complete. Next we need to update the database repository. Log in to SQL*Plus as sysdba and execute the following command:
    SELECT version, status FROM schema_version_registry WHERE owner = 'DEV_SOAINFRA';
    The result should be similar to this:
    VERSION      STATUS   OWNER
    -----------  -------  -------------
    11.1.1.2.0   VALID    DEV_SOAINFRA
    As you can see, the version of this repository is still 11.1.1.2.
    14. To upgrade these versions you have two options. The first is to install via RCU, but this will remove any existing services. The second option is to use the Patch Set Assistant. From the $MW_HOME directory run the following command (a verification query you can run afterwards appears at the end of this post): ./Oracle_SOA1/bin/psa -dbType Oracle -dbConnectString 'localhost:1521:xe' -dbaUserName sys -schemaUserName DEV_SOAINFRA
    15. Install OSB. For the OSB install I did not install the IDE or the examples. Unzip the OSB download to the stage area and run the runInstaller from the command line: /u01/stage/osb/Disk1/runInstaller -jreLoc $MW_HOME/jdk160_18/jre
    16. Choose Custom Install so as NOT to install the IDE (Eclipse) or Examples.
    17. Unselect the Examples and IDE checkboxes.
    18. Accept the defaults and start installing.
    19. Once the install has completed, configure the domain by running the Configuration Wizard: $MW_HOME/oracle_common/common/bin/config.sh You can create a new domain; in this document I will extend the soa_domain.
    20. Select the required components from the checklist. I have selected the BPM Suite; this is unrelated to OSB but I wanted it for my development purposes. To use this functionality additional licenses are required.
    21. Configure the database connectivity.
    22. Configure the database connectivity for the OSB schema.
    23. Accept the defaults if installing on a standard machine; if you require a cluster or advanced configuration, choose the appropriate option.
    24. The upgrade is complete and OSB has been installed. You can now start your environment.
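    Once the Patch Set Assistant has finished, it is worth re-checking the schema registry. A minimal sketch, assuming the upgrade succeeded; the expected 11.1.1.3.0 value is inferred from the patch level being applied, not shown in the original post:

        -- Re-run the registry check from step 13 after psa completes; if the
        -- Patch Set Assistant succeeded, VERSION should now read 11.1.1.3.0
        -- (assumed) with STATUS = VALID.
        SELECT version, status
        FROM   schema_version_registry
        WHERE  owner = 'DEV_SOAINFRA';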

    Read the article

  • Big Data – Buzz Words: Importance of Relational Database in Big Data World – Day 9 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned what HDFS is. In this article we will take a quick look at the importance of the relational database in the Big Data world.

    A Big Question?
    Here are a few questions I have often received since the beginning of this Big Data series:
    - Does the relational database have no place in the Big Data story?
    - Is the relational database no longer relevant as Big Data evolves?
    - Is the relational database not capable of handling Big Data?
    - Is it true that one no longer has to learn about relational data if Big Data is the final destination?
    Well, every single time I hear that a person wants to learn about Big Data and is no longer interested in learning about relational databases, I find that a bit far-fetched. I am not here to give the ambiguous answer of "it depends". I am personally very clear that anyone who aspires to become a Big Data scientist or Big Data expert should learn about relational databases.

    NoSQL Movement
    The recent NoSQL movement happened because of two important advantages of NoSQL databases:
    - Performance
    - Flexible schema
    In my personal experience I have found both of the above advantages when I use a NoSQL database. There are instances when I have found a relational database too restrictive, because my data is unstructured or uses datatypes my relational database does not support, and likewise there are cases where a NoSQL solution performs much better than a relational database. I must say that I am a big fan of NoSQL solutions these days, but I have also seen occasions and situations where a relational database is still the perfect fit, even though the database is growing rapidly and has all the symptoms of Big Data.

    Situations Where the Relational Database Outperforms
    Ad-hoc reporting is one of the most common scenarios where NoSQL does not have an optimal solution. Reporting queries often need to aggregate on columns that are not indexed and are chosen only while the report is being built; in this kind of scenario NoSQL databases (document stores, distributed key-value stores) often do not perform well. For ad-hoc reporting I have found it much easier to work with relational databases (a sketch of such a query follows at the end of this section).
    SQL is the most popular computer language of all time. I have been using it for over 10 years and I feel that I will be using it for a long time to come. There are plenty of tools, connectors and general awareness of the SQL language in the industry. Pretty much every programming language has written drivers for SQL, and most developers learned the language during their school or college time. In many cases, writing a query in SQL is much easier than writing it in a NoSQL-supported language. I believe this is the current situation, but in the future it could reverse if NoSQL query languages become equally popular.
    ACID (Atomicity, Consistency, Isolation, Durability): not all NoSQL solutions offer an ACID-compliant language. There are always situations (for example banking transactions, eCommerce shopping carts etc.) where, without ACID, operations can be invalid and database integrity can be at risk. Even when the data volume truly qualifies as Big Data, there are always operations in the application which absolutely need an ACID-compliant, mature language.
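    To make the ad-hoc reporting point concrete, here is a minimal sketch of the kind of report query that relational databases handle naturally; the table and column names are hypothetical, and the grouping column (region) deliberately carries no index, exactly the situation described above:

        -- Hypothetical ad-hoc report: monthly revenue by region, grouping on
        -- a column that was never indexed because the report was not planned
        -- when the schema was designed.
        SELECT region,
               EXTRACT(YEAR FROM order_date)  AS order_year,
               EXTRACT(MONTH FROM order_date) AS order_month,
               SUM(order_total)               AS revenue
        FROM   orders
        WHERE  order_date >= DATE '2013-01-01'
        GROUP  BY region,
                  EXTRACT(YEAR FROM order_date),
                  EXTRACT(MONTH FROM order_date)
        ORDER  BY order_year, order_month, revenue DESC;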
    The Mixed Bag
    I have often heard the argument that all the big social media sites have nowadays moved away from relational databases. Actually this is not entirely true. While researching Big Data and relational databases, I found that many of the popular social media sites use Big Data solutions along with relational databases. Many use relational databases to deliver results to the end user at run time, and many still use a relational database as their major backbone. Here are a few examples:
    - Facebook uses MySQL to display the timeline. (Reference Link)
    - Twitter uses MySQL. (Reference Link)
    - Tumblr uses sharded MySQL. (Reference Link)
    - Wikipedia uses MySQL for data storage. (Reference Link)
    There are many more prominent organizations running large-scale applications that use a relational database along with various Big Data frameworks to satisfy their various business needs.

    Summary
    I believe that the RDBMS is like vanilla ice cream. Everybody loves it and everybody has it. NoSQL and other solutions are like chocolate ice cream or custom ice cream: there is a huge base which loves them and wants them, but not every ice cream maker can make them just right for everyone’s taste. No matter how fancy an ice cream store is, there is always plain vanilla ice cream available there. Just the same, there are always cases and situations in the Big Data story where the traditional relational database is part of the whole story. In real-world scenarios there will always be cases where relational database concepts and ideology are needed. It is extremely important to accept the relational database as one of the key components of Big Data instead of treating it as a substandard technology.

    Ray of Hope – NewSQL
    In this module we discussed that there are places where we need ACID compliance in our Big Data application, and NoSQL does not support that out of the box. A new term has been coined for applications/tools which support most of the properties of a traditional RDBMS while supporting Big Data infrastructure: NewSQL.

    Tomorrow
    In tomorrow’s blog post we will discuss NewSQL.
    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Customize Team Build 2010 – Part 12: How to debug my custom activities

    In this series the following parts have been published:
    Part 1: Introduction
    Part 2: Add arguments and variables
    Part 3: Use more complex arguments
    Part 4: Create your own activity
    Part 5: Increase AssemblyVersion
    Part 6: Use custom type for an argument
    Part 7: How is the custom assembly found
    Part 8: Send information to the build log
    Part 9: Impersonate activities (run under other credentials)
    Part 10: Include Version Number in the Build Number
    Part 11: Speed up opening my build process template
    Part 12: How to debug my custom activities
    Part 13: Get control over the Build Output
    Part 14: Execute a PowerShell script
    Part 15: Fail a build based on the exit code of a console application

    Developers are "spoilt" persons who expect an easy debugging experience for every technique they work with, so they also expect one when developing custom activities for the build process template. This post describes how you can debug your custom activities without having to develop on the build server itself.

    Remote debugging prerequisites
    The prerequisite for these steps is to install the Microsoft Visual Studio Remote Debugging Monitor. You can find information on how to install it at http://msdn.microsoft.com/en-us/library/bt727f1t.aspx. I chose the option to run the remote debugger on the build server from a file share.

    Debugging symbols prerequisites
    To be able to start debugging, you need to have the pdb files on the build server together with the assembly. The pdb must have been built with full debug info (a project-file sketch for this setting follows at the end of this post).

    Steps
    In my setup I have a development machine and a build server. To set up remote debugging, I performed the following steps:
    1. On your development machine, locate the folder C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\Remote Debugger
    2. Create a share for the Remote Debugger folder. Make sure that the share (and the folder) has the correct permissions, so the user on the build server has access to the share.
    3. On the build server, go to the shared "Remote Debugger" folder.
    4. Start msvsmon.exe, which is located in the folder that matches the platform of the build server. This will open a WinForms application.
    5. Go back to your development machine and open the BuildProcess solution.
    6. Start the Attach to Process command (Ctrl+Alt+P).
    7. Type the name of the build server in the Qualifier. In my case the user account that started msvsmon is a different user than the one on my development machine; in that case you have to type the qualifier in the format shown in the Remote Debugging Monitor (in my case LOCAL\Administrator@TFSLAB) and confirm it by pressing <Enter>.
    8. Since the build service is running with other credentials, check the option "Show processes from all users". Now the Attach to Process dialog shows the TFSBuildServiceHost process.
    9. Set the breakpoint in the activity you want to debug and kick off a build.
    Be aware that when you attach to the TFSBuildServiceHost you debug every single build that is run by this Windows service, so make sure you don’t debug a build server that is in production! You can download the full solution at BuildProcess.zip; it includes the sources of every part and will continue to evolve.
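    Since the post does not show the project setting for full debug info, here is a minimal sketch of the MSBuild property group that produces full pdb files for a Release build. The properties themselves (DebugSymbols, DebugType, Optimize) are standard MSBuild; the condition and the idea of applying this to the custom-activity project are assumptions for illustration:

        <!-- Sketch: forces full pdb generation so msvsmon can bind breakpoints;
             add to (or adjust in) the .csproj of the custom activity assembly. -->
        <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">
          <DebugSymbols>true</DebugSymbols>
          <DebugType>full</DebugType>
          <Optimize>false</Optimize>
        </PropertyGroup>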

    Read the article

  • Silverlight Firestarter thoughts, and thanks to one and all!

    - by Dave Campbell
    A few metrics that of course got out of hand, but some may find interesting:
    1/2 - My share of the MVP of the Year award in February of 2009, shared with Laurent Bugnion
    2 - Number of degrees I hold: B.S., M.S. Electrical Engineering
    3 - Number of years in the U.S. Army
    3.5 - Number of years SilverlightCream has been posted
    4 - Number of times awarded MVP
    6 - Number of professional positions I've worked: antenna rigger, boilermaker, musician, electronic technician, hardware engineer, software engineer
    16 - Number of companies I've worked for during my career as an engineer
    19 - Age at which I wrote my first line of code
    28 - Age at which I hit the workforce as an engineer
    33 - Number of years working as an engineer
    43 - Number of years writing code
    62 - Number of years since instantiation
    116 - Number of tags to search SilverlightCream with
    645 - Number of blogs I view to find articles (at this moment)
    664 - Number of articles tagged wp7dev at SilverlightCream right now
    700 - Number of Twitter followers for WynApse
    981 - Number of individual bloggers in the SilverlightCream database
    1002 - Number of SilverlightCream blog posts
    1100 - Number of people live in Redmond for the Firestarter (I think)
    1428 - Number of total blog posts at GeeksWithBlogs (not counting this one)
    4200 - Number of FeedBurner subscribers (approximately)
    6500 - Number of Twitter followers for SilverlightNews (approximately)
    7087 - Number of posts tagged and aggregated at SilverlightCream right now
    13000 - Number of people registered to watch the Firestarter online (I think)
    The overwhelming feeling I have returning from the Silverlight Firestarter: priceless.
    There is absolutely no way I could personally thank everyone who, over the last few years, has held out a hand and offered me a step up to get to the point where Scott Guthrie called me out in his keynote. So I'm just going to hit the highlights here...
    Scott Guthrie: Thanks for not only being at the level you are at Microsoft, but for being so approachable, easy to talk to, willing to help everyone, and above all knowledgeable. My first-level manager at my last position asked if Visual Studio was a graphics program... and you step up to a laptop at a conference and type "File->New Program"... 'nuff said... oh yeah, thanks for the shoutout!
    John Papa: Thanks for being a good friend, ramrodding the Firestarter, being a great guy to be around, and for the poster... holy crap is that cool.
    Tim Heuer: Thanks for all you did as a great DE in Phoenix, for helping out so many of us, for of course being a great guy, and for the poster as well... I think you and John shared that task.
    In no order at all: my buddy Michael Washington, Laurent Bugnion (the other half of the first Silverlight MVP of the Year), Tim Sneath, Mike Harsh, Chad Campbell and Bryant Likes (from back in the day), Adam Kinney, Jesse Liberty, Jeff Paries, Pete Brown, András Velvárt, David Kelly, Michael Palermo, Scott Cate, Erik Mork, and on and on... don't feel bad if your name didn't appear, I simply have too many supporters to name.
    Silverlight Firestarter
    Indeed, all the people mentioned here, and all the MVPs, knew Silverlight was NOT dead, but because of a very unfortunate circumstance that became the popular media opinion. Consequently the Firestarter exploded from a laid-back event into a global conference. People worked their asses off getting bits ready and presentations using those bits, all to stem the flow of misinformation. All involved, please accept my personal thanks for an absolutely awesome job.
    I had the privilege of watching the prep on Wednesday afternoon, and was blown away the first time I saw the 3D demo... and have been blown away every time I've seen it since. Not to mention all the other goodness in Silverlight 5. Yes, I hit 1000 posts on my blog, but more importantly, all of you are blogging and using Silverlight, and Microsoft hit one completely out of the park... no... they knocked it out of the neighborhood with the Firestarter. It was amazing to be there for it, and it will be awesome to use the new bits as we get them. Keep reading; there's tons more to come with Silverlight, with SilverlightCream following along behind. As usual, this old hacker is humbled to be allowed to play with all the cool kids... Thanks one and all for everything, and stay in the 'Light.

    Read the article

  • How to remove the Mint stuff from my system after installing the MATE desktop from the Mint repository

    - by tinuz
    I installed the MATE desktop using this manual, http://www.unixmen.com/linux-tutorials/linux-distributions/linux-distributions4-ubuntu/1975-install-linuxmint12-extensions-mate-a-mgse-in-ubuntu-1110-, but now I can't open my Ubuntu Software Center or the settings in the Update Manager. I removed the MATE desktop, but that didn't fix the problem. I also reinstalled the Software Center, software-properties-gtk and software-properties-common using:

        sudo apt-get update; sudo apt-get --purge --reinstall install software-center software-properties-common software-properties-gtk

    But when using this line I get the following error:

        Reading package lists... Done
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        0 upgraded, 0 newly installed, 3 reinstalled, 0 to remove and 0 not upgraded.
        Need to get 0 B/735 kB of archives.
        After this operation, 0 B of additional disk space will be used.
        (Reading database ... 304824 files and directories currently installed.)
        Preparing to replace software-center 5.0.2 (using .../software-center_5.0.2_all.deb) ...
        Unpacking replacement software-center ...
        Preparing to replace software-properties-common 0.81.13.1 (using .../software-properties-common_0.81.13.1_all.deb) ...
        Unpacking replacement software-properties-common ...
        Preparing to replace software-properties-gtk 0.81.13.1 (using .../software-properties-gtk_0.81.13.1_all.deb) ...
        Unpacking replacement software-properties-gtk ...
        Processing triggers for desktop-file-utils ...
        Processing triggers for gnome-menus ...
        Processing triggers for bamfdaemon ...
        Rebuilding /usr/share/applications/bamf.index...
        Processing triggers for hicolor-icon-theme ...
        Processing triggers for man-db ...
        Processing triggers for shared-mime-info ...
        Unknown media type in type 'all/all'
        Unknown media type in type 'all/allfiles'
        Unknown media type in type 'uri/mms'
        Unknown media type in type 'uri/mmst'
        Unknown media type in type 'uri/mmsu'
        Unknown media type in type 'uri/pnm'
        Unknown media type in type 'uri/rtspt'
        Unknown media type in type 'uri/rtspu'
        Unknown media type in type 'interface/x-winamp-skin'
        Setting up software-center (5.0.2) ...
        Traceback (most recent call last):
          File "/usr/sbin/update-software-center", line 38, in <module>
            from softwarecenter.db.update import rebuild_database
          File "/usr/share/software-center/softwarecenter/db/update.py", line 59, in <module>
            from softwarecenter.db.database import parse_axi_values_file
          File "/usr/share/software-center/softwarecenter/db/database.py", line 26, in <module>
            from softwarecenter.db.application import Application
          File "/usr/share/software-center/softwarecenter/db/application.py", line 25, in <module>
            from softwarecenter.backend.channel import is_channel_available
          File "/usr/share/software-center/softwarecenter/backend/channel.py", line 25, in <module>
            from softwarecenter.distro import get_distro
          File "/usr/share/software-center/softwarecenter/distro/__init__.py", line 165, in <module>
            distro_instance=_get_distro()
          File "/usr/share/software-center/softwarecenter/distro/__init__.py", line 148, in _get_distro
            module = __import__(distro_id, globals(), locals(), [], -1)
        ImportError: No module named LinuxMint
        Setting up software-properties-common (0.81.13.1) ...
        Setting up software-properties-gtk (0.81.13.1) ...

    Is there a way to fix this problem without having to reinstall Ubuntu 11.10? Thanks in advance, tinuz

    Read the article

  • Tuxedo 11gR1 Client Server Affinity

    - by todd.little
    One of the major new features in Oracle Tuxedo 11gR1 is the ability to define an affinity between clients and servers. In previous releases of Tuxedo, the only way to ensure that multiple requests from a client went to the same server was to establish a conversation with tpconnect() and then use tpsend() and tprecv(). Although this works, it has some drawbacks. First, for single-threaded servers, the server is tied up for the entire duration of the conversation and cannot service other clients, an obvious scalability issue. The more significant drawback, I believe, is that the application programmer has to switch from the simple request/response model provided by tpcall() to the half-duplex tpsend() and tprecv() calls used with conversations. Switching between the two typically requires a fair amount of redesign and recoding.
    The Client Server Affinity feature in Tuxedo 11gR1 allows an application to define, by way of configuration, the affinities that can exist between clients and servers. This is done in the *SERVICES section of the UBBCONFIG file. Using new parameters for services defined in the *SERVICES section, customers can determine when an affinity session is created or deleted, the scope of the affinity, and whether requests can be routed outside the affinity scope (a configuration sketch follows at the end of this post).
    The AFFINITYSCOPE parameter can be MACHINE, GROUP, or SERVER, meaning that while the affinity session is in place, all requests from the client will be routed to the same MACHINE, GROUP, or SERVER. The creation and deletion of affinity is controlled by the SESSIONROLE parameter: a service can be defined as either BEGIN, END, or NONE, where BEGIN starts an affinity session, END deletes the affinity session, and NONE does not impact the affinity session. Finally, customers can define how strictly they want the affinity scope adhered to using the AFFINITYSTRICT parameter. If set to MANDATORY, all requests made during an affinity session will be routed to a server in the affinity scope. Thus, if the affinity scope is SERVER, all subsequent tpcall() requests will be sent to the same server the affinity scope was established with; if that server doesn't offer the requested service, the call will fail with TPNOENT, even though other servers do offer the service. Setting AFFINITYSTRICT to PRECEDENT tells Tuxedo to try to route the request to a server in the affinity scope, but if that's not possible, Tuxedo may route the request to servers out of scope.
    All of this begs the question: why have this feature? There are many uses for this capability, but the most common is when state is maintained in a server, a group of servers, or a machine, and subsequent requests from a client must be routed to where that state is maintained. This might be something as simple as a database cursor maintained by a server on behalf of a client. Alternatively, it might be that the server has a connection to an external system and subsequent requests need to go back to the server that holds that connection. A more sophisticated case is where a group of servers maintains some sort of cache in shared memory and subsequent requests need to be routed to where the cache is maintained. Although this last case might be handled by data-dependent routing, using client-server affinity allows the cache to be partitioned dynamically instead of statically.
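    To make the configuration concrete, here is a minimal sketch of a *SERVICES section using the parameters described above. The service names and the session shape are hypothetical, and the exact layout of a full UBBCONFIG is omitted, so treat this as an illustration of the parameter combinations rather than a complete, tested configuration:

        *SERVICES
        # Starts an affinity session scoped to one server group; while the
        # session lasts, requests MUST stay in scope (else TPNOENT).
        OPEN_ACCOUNT    SESSIONROLE=BEGIN AFFINITYSCOPE=GROUP AFFINITYSTRICT=MANDATORY
        # Participates in the session without changing it; Tuxedo prefers
        # in-scope servers but may route out of scope if it has to.
        GET_BALANCE     SESSIONROLE=NONE AFFINITYSTRICT=PRECEDENT
        # Ends the affinity session.
        CLOSE_ACCOUNT   SESSIONROLE=END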

    Read the article

  • Initial Review - Mastering Unreal Technology I: Introduction to Level Design with Unreal Engine 3

    - by Matt Christian
    Recently I purchased three large volumes on using the Unreal 3 Engine to create levels and custom games. This past weekend I cracked the spine of the first and started reading. Here are my early impressions (I'm ~250 pages into it; with appendices it's about 900).

    Pros
    Interestingly, the book starts with an overview of the Unreal engines leading up to Unreal 3 (including Gears of War) and follows with some discussion on planning a mod and what goes into the game development process. This is nice for an intro to the book, and is much preferred to a simple chapter detailing what is on the included CD, how to install and set up UnrealEd, etc. While the chapter on Unreal history and planning could be considered 'fluff', it's much less 'fluffy' than most books provide.
    I need to mention one thing here that is pretty crucial to the way I'm going to continue reviewing this book. Most technical books like this are used as a shelf reference: a thick volume you use for looking up techniques every now and again. Even so, I prefer reading from cover to cover, including chapters I may already be knowledgeable about (I'm sure this is typical for most people). If there were a chapter on installing UnrealEd (the previously mentioned 'fluff'), I would probably force myself to read it, even though I've installed the game and engine multiple times on different systems.
    Chapter 3 is where we really get to the introduction to UnrealEd, creating your first basic level. This large chapter details creating two small rooms, adding static meshes, adding lighting, creating and adding particle emitters, creating a door that animates with Unreal Matinee and Kismet, static meshes with physics, and other little additions to make your level look less beginner. It really is a chapter that overviews the entire rest of the book, as each following chapter details the creation and intermediate usage of static meshes, materials, lighting, and so on.
    One other very nice part of this book is the way the tutorials are set up. Each tutorial builds off the previous one, and all are step-by-step. If you haven't completed one yet, you can find all the starting files on the CD that comes with the book.

    Cons
    While the description of the overview chapter (Chapter 3) is fresh in your mind, let me start the cons by saying this chapter is set up extremely confusingly for the noob. At one point, you end up creating a door mesh and setting it up as an InteropMesh so that it is ready to be animated, only to switch to particles and spend a good portion of time working on a different piece of the level. Yes, this is actually how I develop my levels (jumping back and forth), though it's very odd for a book to jump out of sequence.
    The next item might be a positive or a negative depending on your skill level with UnrealEd. Most of the introduction to the editor layout is found in one of the appendices instead of before Chapter 3. For new readers this might lead to confusion, as Appendix A would typically be read between Chapters 2 and 3. However, it is a positive for those with some experience in UnrealEd, as they don't have to force themselves through a 'learn every editor button' chapter. I'm listing this in the Cons section because the book is an 'Introduction to...' and is probably directed toward a lot of very beginner developers.
    Finally, there's a lack of general description of the underlying engine and of what each piece in UnrealEd is or does. Sometimes you'll be performing tutorial after tutorial with barely a paragraph in between describing any of what you've just done. Tutorial 1.1 Step 6 says to press Button X, so you do. But why? This is partly a problem with the structure of the tutorials rather than the content of the book. Since the tutorials are so focused on a step-by-step (procedural) description of a process, you learn the process and not the reasoning behind it. For example, you might learn how to size a material to a surface, but you will only learn what buttons to press and not what each one does.
    This becomes extremely apparent in the chapter on static meshes, as most of the chapter is spent in 3D Studio Max. Since there are books on 3DSM and modelling, this book really only tells you the steps and says to go grab a book on modelling if you're really interested in 3DSM. Again, I've learned the process to develop my own meshes in 3DSM, but I don't know the why behind the steps.

    Conclusion
    So far the book is very good, though I would have a hard time recommending it to a complete beginner. I would suggest anyone looking at this book (obviously including the other two, more advanced volumes) pick up a copy of UDK or Unreal 3 (available online or via download services such as Steam), watch some online tutorials, and play with it first. You'll find plenty of online videos created by the authors; they may suit as a better introduction to the editor.

    Read the article

  • Sun Fire X4800 M2 Delivers World Record TPC-C for x86 Systems

    - by Brian
Oracle's Sun Fire X4800 M2 server equipped with eight 2.4 GHz Intel Xeon Processor E7-8870 chips obtained a result of 5,055,888 tpmC on the TPC-C benchmark. This result is a world record for x86 servers. Oracle demonstrated this world record database performance running Oracle Database 11g Release 2 Enterprise Edition with Partitioning.

The Sun Fire X4800 M2 server delivered a new x86 TPC-C world record of 5,055,888 tpmC with a price/performance of $0.89/tpmC using Oracle Database 11g Release 2. This configuration is available 06/26/12.
The Sun Fire X4800 M2 server delivers 3.0x better performance than the next 8-processor result, an IBM System p 570 equipped with POWER6 processors.
The Sun Fire X4800 M2 server has 3.1x better price/performance than the 8-processor 4.7 GHz POWER6 IBM System p 570.
The Sun Fire X4800 M2 server has 1.6x better performance than the 4-processor IBM x3850 X5 system equipped with Intel Xeon processors.
This is the first TPC-C result on any system using eight Intel Xeon Processor E7-8800 Series chips.
The Sun Fire X4800 M2 server is the first x86 system to exceed 5 million tpmC.
The Oracle solution utilized the Oracle Linux operating system and Oracle Database 11g Enterprise Edition Release 2 with Partitioning to produce the x86 world record TPC-C benchmark performance.

Performance Landscape

Select TPC-C results (sorted by tpmC, bigger is better):

System             p/c/t     tpmC       Price/tpmC  Avail       Database       Memory Size
Sun Fire X4800 M2  8/80/160  5,055,888  0.89 USD    6/26/2012   Oracle 11g R2  4 TB
IBM x3850 X5       4/40/80   3,014,684  0.59 USD    7/11/2011   DB2 ESE 9.7    3 TB
IBM x3850 X5       4/32/64   2,308,099  0.60 USD    5/20/2011   DB2 ESE 9.7    1.5 TB
IBM System p 570   8/16/32   1,616,162  3.54 USD    11/21/2007  DB2 9.0        2 TB

p/c/t - processors, cores, threads; Avail - availability date

Oracle and IBM TPC-C response times:

System             tpmC       New Order 90th% Response (sec)  New Order Average Response (sec)
Sun Fire X4800 M2  5,055,888  0.210                           0.166
IBM x3850 X5       3,014,684  0.500                           0.272
Ratios - Oracle Better  1.6x  1.4x                            1.3x

Oracle uses average New Order response time for comparison between Oracle and IBM. Graphs of Oracle's and IBM's response times for New Order can be found in the full disclosure reports on TPC's website: TPC-C Official Result Page.

Configuration Summary and Results

Hardware Configuration:

Server: Sun Fire X4800 M2
8 x 2.4 GHz Intel Xeon Processor E7-8870
4 TB memory
8 x 300 GB 10K RPM SAS internal disks
8 x dual-port 8 Gb/s FC HBA

Data Storage: 10 x Sun Fire X4270 M2 servers configured as COMSTAR heads, each with
1 x 3.06 GHz Intel Xeon X5675 processor
8 GB memory
10 x 2 TB 7.2K RPM 3.5" SAS disks
2 x Sun Storage F5100 Flash Array storage (1.92 TB each)
1 x Brocade 5300 switch

Redo Storage: 2 x Sun Fire X4270 M2 servers configured as COMSTAR heads, each with
1 x 3.06 GHz Intel Xeon X5675 processor
8 GB memory
11 x 2 TB 7.2K RPM 3.5" SAS disks

Clients: 8 x Sun Fire X4170 M2 servers, each with
2 x 3.06 GHz Intel Xeon X5675 processors
48 GB memory
2 x 300 GB 10K RPM SAS disks

Software Configuration:

Oracle Linux (Sun Fire X4800 M2)
Oracle Solaris 11 Express (COMSTAR for Sun Fire X4270 M2)
Oracle Solaris 10 9/10 (Sun Fire X4170 M2)
Oracle Database 11g Release 2 Enterprise Edition with Partitioning
Oracle iPlanet Web Server 7.0 U5
Tuxedo CFS-R Tier 1

Results:

System: Sun Fire X4800 M2
tpmC: 5,055,888
Price/tpmC: 0.89 USD
Available: 6/26/2012
Database: Oracle Database 11g
Cluster: no
New Order Average Response: 0.166 seconds

Benchmark Description

TPC-C is an OLTP system benchmark. It simulates a complete environment where a population of terminal operators executes transactions against a database. The benchmark is centered around the principal activities (transactions) of an order-entry environment. These transactions include entering and delivering orders, recording payments, checking the status of orders, and monitoring the level of stock at the warehouses.

Key Points and Best Practices

Oracle Database 11g Release 2 Enterprise Edition with Partitioning scales easily to this high level of performance.

COMSTAR (Common Multiprotocol SCSI Target) is the software framework that enables an Oracle Solaris host to serve as a SCSI target platform. COMSTAR uses a modular approach to break the huge task of handling all the different pieces in a SCSI target subsystem into independent functional modules which are glued together by the SCSI Target Mode Framework (STMF). The modules implementing functionality at the SCSI level (disk, tape, medium changer, etc.) are not required to know about the underlying transport, and the modules implementing the transport protocol (FC, iSCSI, etc.) are not aware of the SCSI-level functionality of the packets they are transporting. The framework hides the details of allocation, providing execution context and cleanup of SCSI commands and associated resources, and simplifies the task of writing the SCSI or transport modules.

Oracle iPlanet Web Server middleware is used for the client tier of the benchmark. Each web server instance supports more than a quarter-million users while satisfying the response time requirement of the TPC-C benchmark.

See Also

Oracle Press Release -- Sun Fire X4800 M2 TPC-C
Executive Summary tpc.org
Complete Sun Fire X4800 M2 TPC-C Full Disclosure Report tpc.org
Transaction Processing Performance Council (TPC) Home Page
Ideas International Benchmark Page
Sun Fire X4800 M2 Server oracle.com OTN
Oracle Linux oracle.com OTN
Oracle Solaris oracle.com OTN
Oracle Database 11g Release 2 Enterprise Edition oracle.com OTN
Sun Storage F5100 Flash Array oracle.com OTN

Disclosure Statement

TPC Benchmark C, tpmC, and TPC-C are trademarks of the Transaction Processing Performance Council (TPC). Sun Fire X4800 M2 (8/80/160) with Oracle Database 11g Release 2 Enterprise Edition with Partitioning, 5,055,888 tpmC, $0.89 USD/tpmC, available 6/26/2012. IBM x3850 X5 (4/40/80) with DB2 ESE 9.7, 3,014,684 tpmC, $0.59 USD/tpmC, available 7/11/2011. IBM x3850 X5 (4/32/64) with DB2 ESE 9.7, 2,308,099 tpmC, $0.60 USD/tpmC, available 5/20/2011. IBM System p 570 (8/16/32) with DB2 9.0, 1,616,162 tpmC, $3.54 USD/tpmC, available 11/21/2007. Source: http://www.tpc.org/tpcc, results as of 7/15/2011.

    Read the article

  • Big Data – Role of Cloud Computing in Big Data – Day 11 of 21

    - by Pinal Dave
In yesterday’s blog post we learned the importance of NewSQL. In this article we will understand the role of the Cloud in the Big Data story.

What is Cloud?

Cloud has been the biggest buzzword of the last few years. Everyone knows about the Cloud and it is extremely well defined online. In this article we will discuss the cloud in the context of Big Data. Cloud computing is a method of providing shared computing resources to applications which require dynamic resources. These resources include applications, computing, storage, networking, development and various deployment platforms. The fundamental idea of cloud computing is that it shares pretty much all the resources and delivers them to end users as a service. Examples of Cloud Computing and Big Data are Google and Amazon.com; both have fantastic Big Data offerings with the help of the cloud. We will discuss this later in this blog post.

There are two different Cloud Deployment Models: 1) the Public Cloud and 2) the Private Cloud.

Public Cloud: cloud infrastructure built by commercial providers (Amazon, Rackspace, etc.) that creates a highly scalable data center, hides the complex infrastructure from the consumer and provides various services.

Private Cloud: cloud infrastructure built by a single organization which manages a highly scalable data center internally.

Here is a quick comparison between Public Cloud and Private Cloud, from Wikipedia:

                 Public Cloud                        Private Cloud
Initial cost     Typically zero                      Typically high
Running cost     Unpredictable                       Unpredictable
Customization    Impossible                          Possible
Privacy          No (host has access to the data)    Yes
Single sign-on   Impossible                          Possible
Scaling up       Easy while within defined limits    Laborious but no limits

Hybrid Cloud: cloud infrastructure composed of two or more clouds, such as a public and a private cloud. A hybrid cloud gives the best of both worlds, as it combines multiple cloud deployment models.

Cloud and Big Data – Common Characteristics

Many characteristics of cloud architecture and cloud computing are essentially important for Big Data as well. They overlap heavily, and in many places it simply makes sense to use the power of both architectures and build a highly scalable framework. Here is the list of characteristics of cloud computing important in Big Data:

Scalability
Elasticity
Ad-hoc Resource Pooling
Low Cost to Set Up Infrastructure
Pay on Use or Pay as You Go
Highly Available

Leading Big Data Cloud Providers

There are many players in the Big Data Cloud, but we will list a few of the best-known ones.

Amazon: Amazon is arguably the most popular Infrastructure as a Service (IaaS) provider. The history of how Amazon started in this business is very interesting. They started out with a massive infrastructure to support their own business. Gradually they figured out that their own resources were underutilized most of the time. They decided to get the maximum out of the resources they had, and hence launched the Amazon Elastic Compute Cloud (Amazon EC2) service in 2006. Their cloud products have evolved a lot recently and are now one of their primary businesses besides retail. Amazon also offers Big Data services under Amazon Web Services. Here is the list of the included services:

Amazon Elastic MapReduce – processes very high volumes of data
Amazon DynamoDB – a fully managed NoSQL (Not Only SQL) database service
Amazon Simple Storage Service (S3) – a web-scale service designed to store and accommodate any amount of data
Amazon High Performance Computing – provides low-latency, tuned high performance computing clusters
Amazon Redshift – a petabyte-scale data warehousing service

Google: Though Google is known for its search engine, we all know that it is much more than that.

Google Compute Engine – offers secure, flexible computing from energy-efficient data centers
Google BigQuery – allows SQL-like queries to run against large datasets
Google Prediction API – a cloud-based machine learning tool

Other Players: Besides Amazon and Google there are other players in the Big Data market as well. Microsoft is attempting Big Data in the cloud with Microsoft Azure. Additionally, Rackspace and NASA together have initiated OpenStack. The goal of OpenStack is to provide a massively scaled, multitenant cloud that can run on any hardware.

Things to Watch

Cloud-based solutions integrate very well with the Big Data story and are also very economical to implement. However, there are a few things one should be very careful about when deploying Big Data on cloud solutions. Here is a list of things to watch:

Data Integrity
Initial Cost
Recurring Cost
Performance
Data Access Security
Location
Compliance

Every company has a different approach to Big Data, along with different rules and regulations. Based on various factors, one can implement their own custom Big Data solution on a cloud.

Tomorrow

In tomorrow's blog post we will discuss various Operational Databases supporting Big Data.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • OSI Model

    - by kaleidoscope
The Open System Interconnection Reference Model (OSI Reference Model or OSI Model) is an abstract description for layered communications and computer network protocol design. In its most basic form, it divides network architecture into seven layers which, from top to bottom, are the Application, Presentation, Session, Transport, Network, Data Link, and Physical Layers. It is therefore often referred to as the OSI Seven Layer Model. A layer is a collection of conceptually similar functions that provides services to the layer above it and receives services from the layer below it.

Description of the OSI layers:

Layer 1: Physical Layer
· Defines the electrical and physical specifications for devices. In particular, it defines the relationship between a device and a physical medium.
· Establishment and termination of a connection to a communications medium.
· Participation in the process whereby the communication resources are effectively shared among multiple users.
· Modulation, or conversion between the representation of digital data in user equipment and the corresponding signals transmitted over a communications channel.

Layer 2: Data Link Layer
· Provides the functional and procedural means to transfer data between network entities.
· Detects and possibly corrects errors that may occur in the Physical Layer. The error check is performed using the Frame Check Sequence (FCS).
· The destination address is then examined to determine whether the host needs to process the rest of the frame itself or pass it on to another host.
· The layer is divided into two sublayers: the Media Access Control (MAC) layer and the Logical Link Control (LLC) layer.
· The MAC sublayer controls how a computer on the network gains access to the data and permission to transmit it.
· The LLC sublayer controls frame synchronization, flow control and error checking.

Layer 3: Network Layer
· Provides the functional and procedural means of transferring variable-length data sequences from a source to a destination via one or more networks.
· Performs network routing functions, and might also perform fragmentation and reassembly, and report delivery errors.
· Routers operate at this layer, sending data throughout the extended network and making the Internet possible.

Layer 4: Transport Layer
· Provides transparent transfer of data between end users, providing reliable data transfer services to the upper layers.
· Controls the reliability of a given link through flow control, segmentation/desegmentation, and error control.
· The Transport Layer can keep track of the segments and retransmit those that fail.

Layer 5: Session Layer
· Controls the dialogues (connections) between computers.
· Establishes, manages and terminates the connections between the local and remote application.
· Provides for full-duplex, half-duplex, or simplex operation, and establishes checkpointing, adjournment, termination, and restart procedures.
· Implemented explicitly in application environments that use remote procedure calls.

Layer 6: Presentation Layer
· Establishes a context between Application Layer entities, in which the higher-layer entities can use different syntax and semantics, as long as the presentation service understands both and the mapping between them. The presentation service data units are then encapsulated into Session Protocol data units and moved down the stack.
· Provides independence from differences in data representation (e.g., encryption) by translating from application format to network format, and vice versa. The presentation layer works to transform data into the form that the application layer can accept. This layer formats and encrypts data to be sent across a network, providing freedom from compatibility problems. It is sometimes called the syntax layer.

Layer 7: Application Layer
· This layer interacts with software applications that implement a communicating component.
· Identifies communication partners, determines resource availability, and synchronizes communication.
  o When identifying communication partners, the application layer determines the identity and availability of communication partners for an application with data to transmit.
  o When determining resource availability, the application layer must decide whether sufficient network resources for the requested communication exist.
  o In synchronizing communication, all communication between applications requires cooperation that is managed by the application layer.

Technorati Tags: Kunal, OSI, Networking
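To make the layering concrete, here is a minimal sketch (my illustration, not from the original article) of where application code actually meets the stack: the program below is pure Layer 7 logic, the SOCK_STREAM socket is the service interface of Layer 4 (TCP), and everything from Layer 3 down is handled entirely by the operating system and network hardware. The host example.com and port 80 are placeholder values.

    // A Layer 7 program using the Layer 4 (TCP) service through BSD sockets.
    // Layers 1-3 (physical, data link, network) never appear in the code.
    #include <cstdio>
    #include <cstring>
    #include <netdb.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main() {
        addrinfo hints{}, *res = nullptr;
        hints.ai_family   = AF_UNSPEC;     // let the Network Layer pick IPv4 or IPv6
        hints.ai_socktype = SOCK_STREAM;   // Transport Layer: TCP's reliable byte stream

        if (getaddrinfo("example.com", "80", &hints, &res) != 0) return 1;

        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0) return 1;

        // Application Layer payload: an HTTP request carried over TCP.
        const char req[] = "HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n";
        send(fd, req, strlen(req), 0);

        char reply[512];
        ssize_t n = recv(fd, reply, sizeof(reply) - 1, 0);
        if (n > 0) { reply[n] = '\0'; printf("%s", reply); }

        close(fd);
        freeaddrinfo(res);
        return 0;
    }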

    Read the article

  • Unable to build my c++ code with g++ 4.6.3

    - by Mriganka
I am facing multiple issues with building my C++ code on Ubuntu 12.04. This code was building and running fine on RH Enterprise. I am using g++ 4.6.3. Here's the output of g++ -v:

    g++ -v
    Using built-in specs.
    COLLECT_GCC=g++
    COLLECT_LTO_WRAPPER=/usr/lib/gcc/i686-linux-gnu/4.6/lto-wrapper
    Target: i686-linux-gnu
    Configured with: ../src/configure -v --with-pkgversion='Ubuntu/Linaro 4.6.3-1ubuntu5' --with-bugurl=file:///usr/share/doc/gcc-4.6/README.Bugs --enable-languages=c,c++,fortran,objc,obj-c++ --prefix=/usr --program-suffix=-4.6 --enable-shared --enable-linker-build-id --with-system-zlib --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --with-gxx-include-dir=/usr/include/c++/4.6 --libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-gnu-unique-object --enable-plugin --enable-objc-gc --enable-targets=all --disable-werror --with-arch-32=i686 --with-tune=generic --enable-checking=release --build=i686-linux-gnu --host=i686-linux-gnu --target=i686-linux-gnu
    Thread model: posix
    gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5)

Here's a sample of my code:

    #include "Word.h"
    #include <string>

    using namespace std;

    pthread_mutex_t Word::_lock = PTHREAD_MUTEX_INITIALIZER;

    Word::Word(): _occurrences(1)
    {
        memset(_buf, 0, 25);
    }

    Word::Word(char *str): _occurrences(1)
    {
        memset(_buf, 0, 25);
        if (str != NULL)
        {
            strncpy(_buf, str, strlen(str));
        }
    }

None of g++ -c -ansi, g++ -c -std=c++98 or g++ -c -std=c++03 is able to build the code. I get the following compilation errors:

    mriganka@ubuntu:~/WordCount$ make
    g++ -c -g -ansi Word.cpp -o Word.o
    Word.cpp: In constructor ‘Word::Word()’:
    Word.cpp:10:21: error: ‘memset’ was not declared in this scope
    Word.cpp: In constructor ‘Word::Word(char*)’:
    Word.cpp:16:21: error: ‘memset’ was not declared in this scope
    Word.cpp:19:34: error: ‘strlen’ was not declared in this scope
    Word.cpp:19:35: error: ‘strncpy’ was not declared in this scope
    Word.cpp: In member function ‘void Word::operator=(const Word&)’:
    Word.cpp:37:42: error: ‘strlen’ was not declared in this scope
    Word.cpp:37:43: error: ‘strncpy’ was not declared in this scope
    Word.cpp: In copy constructor ‘Word::Word(const Word&)’:
    Word.cpp:44:21: error: ‘memset’ was not declared in this scope
    Word.cpp:45:52: error: ‘strlen’ was not declared in this scope
    Word.cpp:45:53: error: ‘strncpy’ was not declared in this scope

So basically g++ 4.6.3 on Ubuntu 12.04 is not able to recognize the standard C++ headers, and I am not finding a way out of this situation.

Second problem: in order to make progress, I included <string.h> instead of <string>. But now I am facing linking errors with my message queue and pthread library functions. Here's the error that I am getting:

    mriganka@ubuntu:~/WordCount$ make
    g++ -c -g -ansi Word.cpp -o Word.o
    g++ -lrt -I/usr/lib/i386-linux-gnu Word.o HashMap.o main.o -o word_count
    main.o: In function `main':
    /home/mriganka/WordCount/main.cpp:75: undefined reference to `pthread_create'
    /home/mriganka/WordCount/main.cpp:90: undefined reference to `mq_open'
    /home/mriganka/WordCount/main.cpp:93: undefined reference to `mq_getattr'
    /home/mriganka/WordCount/main.cpp:113: undefined reference to `mq_send'
    /home/mriganka/WordCount/main.cpp:123: undefined reference to `pthread_join'
    /home/mriganka/WordCount/main.cpp:129: undefined reference to `mq_close'
    /home/mriganka/WordCount/main.cpp:130: undefined reference to `mq_unlink'
    main.o: In function `count_words(void*)':
    /home/mriganka/WordCount/main.cpp:151: undefined reference to `mq_open'
    /home/mriganka/WordCount/main.cpp:154: undefined reference to `mq_getattr'
    /home/mriganka/WordCount/main.cpp:162: undefined reference to `mq_timedreceive'
    collect2: ld returned 1 exit status

Here's my makefile:

    CC=g++
    CFLAGS=-c -g -ansi
    LDFLAGS=-lrt
    INC=-I/usr/lib/i386-linux-gnu
    SOURCES=Word.cpp HashMap.cpp main.cpp
    OBJECTS=$(SOURCES:.cpp=.o)
    EXECUTABLE=word_count

    all: $(SOURCES) $(EXECUTABLE)

    $(EXECUTABLE): $(OBJECTS)
        $(CC) $(LDFLAGS) $(INC) -pthread $(OBJECTS) -o $@

    .cpp.o:
        $(CC) $(CFLAGS) $< -o $@

    clean:
        rm -f *.o word_count

Please help me to resolve both issues. I searched online relentlessly for a solution to these problems, but no one seems to have encountered them.
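A likely fix, offered as my suggestion rather than as part of the original post: the 'was not declared in this scope' errors are the classic sign of a missing <cstring> include. GCC 4.6 cleaned up its internal header dependencies, so headers such as <string> no longer pull in the C string functions that older libstdc++ releases (like the one on that RHEL box) included indirectly. The undefined references look like a link-order problem: Ubuntu's toolchain links with --as-needed semantics by default, so libraries named before the object files that use them (-lrt, and the -lpthread implied by -pthread) get discarded. Under those assumptions, the source change would be:

    // Word.cpp -- name the C string functions' header explicitly instead of
    // relying on <string> to drag it in (GCC 4.6 no longer does).
    #include "Word.h"
    #include <cstring>   // memset, strlen, strncpy
    #include <string>

and in the Makefile, the libraries should move after the objects on the link line, for example: $(CC) $(INC) $(OBJECTS) -o $@ -pthread $(LDFLAGS). With -lrt and the pthread library following the object files, the mq_* and pthread_* references should resolve.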

    Read the article

  • Guest (and occasional co-host) on Jesse Liberty's Yet Another Podcast

    - by Jon Galloway
I was a recent guest on Jesse Liberty's Yet Another Podcast talking about the latest Visual Studio, ASP.NET and Azure releases.

Download / Listen: Yet Another Podcast #75 – Jon Galloway on ASP.NET / MVC / Azure

Co-hosted shows: Jesse's been inviting me to co-host shows and I told him I'd show up when I was available. It's a nice change to be a drive-by co-host on a show (compared with the work that goes into organizing / editing / typing show notes for Herding Code shows). My main focus is on Herding Code, but it's nice to pop in and talk to Jesse's excellent guests when it works out. Some shows I've co-hosted over the past year:

Yet Another Podcast #76 – Glenn Block on Node.js & Technology in China
Yet Another Podcast #73 – Adam Kinney on developing for Windows 8 with HTML5
Yet Another Podcast #64 – John Papa & Javascript
Yet Another Podcast #60 – Steve Sanderson and John Papa on Knockout.js
Yet Another Podcast #54 – Damian Edwards on ASP.NET
Yet Another Podcast #53 – Scott Hanselman on Blogging
Yet Another Podcast #52 – Peter Torr on Windows Phone Multitasking
Yet Another Podcast #51 – Shawn Wildermuth: //build, Xaml Programming & Beyond

And some more on the way that haven't been released yet. On some of these I'm pretty quiet; on others I get wacky and hassle the guests because, hey, not my podcast so not my problem.

Show notes from the ASP.NET / MVC / Azure show:

What was just released: Visual Studio 2012 Web Developer features, ASP.NET 4.5 Web Forms

ASP.NET 4.5 Web Forms:
Strongly typed data controls
Data access via command methods
Similar binding syntax to ASP.NET MVC
Some context: Damian Edwards and WebFormsMVP

Two questions from Jesse:
Q: Are you making this harder or more complicated for Web Forms developers? Short answer: Nothing's removed, it's just a new option. History of SqlDataSource, ObjectDataSource.
Q: If I'm using some MVC patterns, why not just move to MVC? Short answer: This works really well in hybrid applications and doesn't require a rewrite. It allows sharing models, validation, and other code between Web Forms and MVC.

ASP.NET MVC:
Adaptive rendering (oh, also, this is in Web Forms 4.5 as well)
Display Modes
Mobile project template using jQuery Mobile
OAuth login to allow Twitter, Google, Facebook, etc. login
Jon (and friends') MVC 4 book on the way: Professional ASP.NET MVC 4

Windows 8 development:
Jesse and Jon announce they're working on a new book: Pro Windows 8 Development with XAML and C#
Jon and Jesse agree that it's nice to be able to write Windows 8 applications using the same skills they picked up for Silverlight, WPF, and Windows Phone development.

Compare / contrast ASP.NET MVC and Windows 8 development:
Q: Does ASP.NET and HTML5 development overlap? Jon thinks they overlap in the MVC world because you're writing HTML views without controls. Jon describes how his web development career moved from a preoccupation with server code to a focus on user interaction, which occurs in the browser. Jon mentions his NDC Oslo presentation on Learning To Love HTML as Beautiful Code.
Q: How do you apply C# / XAML or HTML5 skills to Windows 8 development?
Q: If I'm a XAML programmer, what's the learning curve on getting up to speed on ASP.NET MVC? Jon describes the difference in application lifecycle and state management, and says it's nice that web development is really interactive compared to application development.
Q: Can you learn MVC by reading a book? Or is it a lot bigger than that?

What is Azure, and why would I use it?
Jon describes the traditional Azure platform model and how Azure Web Sites fits in.
Q: Why wouldn't Jesse host his blog on Azure Web Sites? Domain names on Azure Web Sites; file hosting options.
Q: Is Azure just another host? How is it different from any of the other shared hosting options? A: Azure gives you the ability to scale up or down whenever you want. A: Other services are available if or when you want them.

    Read the article

  • World Record Batch Rate on Oracle JD Edwards Consolidated Workload with SPARC T4-2

    - by Brian
Oracle produced a world record batch throughput for single-system results on Oracle's JD Edwards EnterpriseOne Day-in-the-Life benchmark using Oracle's SPARC T4-2 server running Oracle Solaris Containers and consolidating JD Edwards EnterpriseOne, Oracle WebLogic servers and Oracle Database 11g Release 2. The workload includes both online and batch components.

The SPARC T4-2 server delivered a result of 8,000 online users while concurrently executing a mix of JD Edwards EnterpriseOne long and short batch processes at 95.5 UBEs/min (Universal Batch Engines per minute).
To obtain this record benchmark result, the JD Edwards EnterpriseOne, Oracle WebLogic and Oracle Database 11g Release 2 servers were each executed in separate Oracle Solaris Containers, which enabled optimal distribution of system resources and performance together with scalable and manageable virtualization.
One SPARC T4-2 server running Oracle Solaris Containers and consolidating JD Edwards EnterpriseOne, Oracle WebLogic servers and Oracle Database 11g Release 2 utilized only 55% of the available CPU power.
The Oracle DB server in a Shared Server configuration allows for optimized CPU resource utilization and significant memory savings on the SPARC T4-2 server without sacrificing performance.
This SPARC T4-2 configuration achieved 33% more users/core, 47% more UBEs/min and 78% more users per rack unit than the IBM Power 770 server.
The SPARC T4-2 server with 2 processors ran the JD Edwards Day-in-the-Life benchmark and supported 8,000 concurrent online users while concurrently executing mixed batch workloads at 95.5 UBEs per minute. The IBM Power 770 server with twice as many processors supported only 12,000 concurrent online users while concurrently executing mixed batch workloads at only 65 UBEs per minute.
This benchmark demonstrates more than 2x cost savings by consolidating the complete solution on a single SPARC T4-2 server, compared to earlier published results of 10,000 users and 67 UBEs per minute on two servers (a SPARC T4-2 and a SPARC T4-1).
The Oracle DB server used mirrored (RAID 1) volumes for the database, providing high availability for the data without impacting performance.

Performance Landscape

JD Edwards EnterpriseOne Day in the Life (DIL) Benchmark, consolidated online with batch workload:

System                                         Rack Units  Batch Rate (UBEs/m)  Online Users  Users/Unit  Users/Core  Version
SPARC T4-2 (2 x SPARC T4, 2.85 GHz)            3           95.5                 8,000         2,667       500         9.0.2
IBM Power 770 (4 x POWER7, 3.3 GHz, 32 cores)  8           65                   12,000        1,500       375         9.0.2

Batch Rate (UBEs/m) - batch transaction rate in UBEs per minute

Configuration Summary

Hardware Configuration:

1 x SPARC T4-2 server with
2 x SPARC T4 processors, 2.85 GHz
256 GB memory
4 x 300 GB 10K RPM SAS internal disks
2 x 300 GB internal SSDs
2 x Sun Storage F5100 Flash Arrays

Software Configuration:

Oracle Solaris 10
Oracle Solaris Containers
JD Edwards EnterpriseOne 9.0.2
JD Edwards EnterpriseOne Tools (8.98.4.2)
Oracle WebLogic Server 11g (10.3.4)
Oracle HTTP Server 11g
Oracle Database 11g Release 2 (11.2.0.1)

Benchmark Description

JD Edwards EnterpriseOne is an integrated applications suite of Enterprise Resource Planning (ERP) software. Oracle offers 70 JD Edwards EnterpriseOne application modules to support a diverse set of business operations.

Oracle's Day in the Life (DIL) kit is a suite of scripts that exercises the most common transactions of JD Edwards EnterpriseOne applications, including business processes such as payroll, sales order, purchase order, work order, and manufacturing processes such as ship confirmation. These are labeled by industry acronyms such as SCM, CRM, HCM, SRM and FMS. The kit's scripts execute transactions typical of a mid-sized manufacturing company. The workload consists of online transactions and the UBE (Universal Batch Engine) workload of 61 short and 4 long UBEs. LoadRunner runs the DIL workload, collects the users' transaction response times and reports the key metric of Combined Weighted Average Transaction Response time. The UBE workload runs from the JD Edwards Enterprise Application server.

UBE processes come in three flavors: short UBEs (< 1 minute) engage in business reports and summary analysis; mid UBEs (> 1 minute) create large reports of accounts, balances, and full addresses; long UBEs (> 2 minutes) simulate payroll, sales order, and night-only jobs. The UBE workload generates large numbers of PDF report files and log files. The UBE queues are categorized as QBATCHD, a single-threaded queue for large and medium UBEs, and the QPROCESS queue for short UBEs run concurrently. The UBE performance metric is the maximum number of concurrent UBE processes at a given transaction rate, in UBEs/minute.

Key Points and Best Practices

Two JD Edwards EnterpriseOne Application Servers, two Oracle WebLogic Servers 11g Release 1 coupled with two Oracle Web Tier HTTP server instances, and one Oracle Database 11g Release 2 database on a single SPARC T4-2 server were hosted in separate Oracle Solaris Containers bound to four processor sets, to demonstrate consolidation of multiple applications, web servers and the database with the best resource utilization.
Interrupt fencing was configured on all Oracle Solaris Containers to channel the interrupts to processors other than the processor sets used for the JD Edwards Application server, the Oracle WebLogic servers and the database server.
An Oracle WebLogic vertical cluster was configured in each WebServer container with twelve managed instances each, to load-balance users' requests and to provide the infrastructure that enables scaling to a high number of users with ease of deployment and high availability.
The database log writer was run in the real-time RT class and bound to a processor set.
The database redo logs were configured on raw disk partitions.
The Oracle Solaris Container running the Enterprise Application server completed 61 short UBEs and 4 long UBEs concurrently as the mixed-size batch workload. The mixed-size UBEs ran concurrently from the Enterprise Application server with the 8,000 online users driven by LoadRunner.

See Also

SPARC T4-2 Server oracle.com OTN
JD Edwards EnterpriseOne oracle.com OTN
Oracle Solaris oracle.com OTN
Oracle Database 11g Release 2 Enterprise Edition oracle.com OTN
Oracle Fusion Middleware oracle.com OTN

Disclosure Statement

Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 09/30/2012.
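As a quick sanity check on the comparison claims (my worked arithmetic, not part of the original post, using only the numbers in the table above plus the fact that each SPARC T4 chip has 8 cores, so the T4-2 has 16 cores against the Power 770's 32):

    \frac{8000/16}{12000/32} = \frac{500}{375} \approx 1.33 \quad\text{(33\% more users per core)}
    \frac{95.5}{65} \approx 1.47 \quad\text{(47\% more UBEs/min)}
    \frac{8000/3}{12000/8} = \frac{2667}{1500} \approx 1.78 \quad\text{(78\% more users per rack unit)}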

    Read the article

  • top Tweets SOA Partner Community – May 2012

    - by JuergenKress
Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity

SOA Community BPMN2.0 Oracle notations poster from eaiesb http://wp.me/p10C8u-pu
Torsten Winterberg Look out for new Oracle #BPM edition coming up soon: The Oracle BPM Standard edition! Great news for easy entry, small licence fees. Yes!
Danilo Schmiedel Had a great chat with customer yesterday about #OracleBPM. Next step will be a 5day event combining modeling and implementation @soacommunity
Frank Nimphius Still reading "Oracle Business Process Management Suite 11g Handbook". Excellent resource for a non-SOA but ADF guy like me ;-)
Oracle New webcast: Maximize #Oracle #WebLogic Server ROI with Oracle #Enterprise #Manager 12c on May 2 at 10 am PT. Register http://bit.ly/JFUrR9
OTNArchBeat BPM in Financial Services Industry | Sanjeev Sharma http://bit.ly/HCCxui
JDeveloper & ADF BPEL 11.1.1.6 Certified for Prebuilt E-Business Suite 12.1.3 SOA Integrations http://dlvr.it/1V9SxR
Oracle UPK & Tutor Collaborate Attendees: Visit the UPK demo pod, SIGS, and sessions: If you are attending Collaborate 2012 - Sun. http://bit.ly/J39z65
Heidi Buelow see #fmw track RT @demed: Are you going to #KSCOPE12 in San Antonio, June 24-28? http://kscope12.com/component/seminar/seminarslist?topicsid=6 Use promo code Fusion for discount!
Sabine Leitner #SIG #Middleware 15.05. Frankfurt #Oracle #DOAG Planung & Aufbau WebLogic Server #WLS http://bit.ly/HKsCWV @OracleWebLogic @soacommunity
SOA Community MDS explorer by Red Samurai http://wp.me/p10C8u-pp
Biemond ® Retrieve or set a HTTP header from Oracle BPEL: With Oracle SOA Suite 11g patch 12928372 you can finally retrie http://bit.ly/JejTHC
Lucas Jellema Call for papers for UKOUG 2012 has opened: http://techandebs.ukoug.org/default.asp?p=9306 (deadline 1st of June)
OTNArchBeat BPM API usage: List all BPM Processes for a user | Kavitha Srinivasan http://bit.ly/IJKVfj
demed SOA, Cloud + Service Tech symposium (London, Sep 24-25) call for paper is open http://www.servicetechsymposium.com/call2012.php @techsymp #oraclesoa
OracleBlogs Lessons learned configuring OER 11g Workflows http://ow.ly/1iMsKh
OTNArchBeat Scripting WebLogic Admin Server Startup | Antony Reynolds http://bit.ly/IH5ciU
orclateamsoa A-Team Blog #ateam: BPM API usage: List all BPM Processes for a user http://ow.ly/1iJADp
Lucas Jellema Just blogged about our Live FMW Application Development show during OBUG 2012, next Tuesday 24th April in Maastricht:
OracleBlogs OEG integration with OSB/OWSM - 11g http://ow.ly/1iKx7G
SOA Community SOA Community Newsletter April 2012 http://wp.me/p10C8u-pl
Frank Dorst RT @whitehorsesnl: Whiteblog: BPM Process Spaces in Oracle Webcenter (Patch Set 5) (http://bit.ly/Hxzh29) #soacommunity #bpm #oracle
David Shaffer The Advanced SOA suite training class next week in Redwood City is full! Learned a lot about accepting credit card payments.
OTNArchBeat Running Built-In Test Simulator with SOA Suite Healthcare 11g in PS4 and PS5 | Shub Lahiri http://bit.ly/IgI8GN
SOA Community Oracle Fusion Middleware Innovation Awards 2012, Call for Nominations #ofmaward #soa #bpm #soacommunity
OTNArchBeat Updated SOA Documents now available in ITSO Reference Library http://bit.ly/I3Y6Sg
Oracle Middleware Data Integrator & SOA - why 2 products better than one for integration? Webcast: Apr 24 10 AM PT http://bit.ly/IzmtKR
Andrejus Baranovskis Red Samurai MDS Cleaner V2.0 http://fb.me/FxLVz82w
SOA Community “@rluttikhuizen: Chapter 4 of SOA Made Simple book "Classification of Services" ready for collegial review” can #soacommunity get a preview?
Xavier Verhaeghe #Gartner figures are out: #Oracle top in App Server market share (43.1%) and Relational #Database, too (48.8%) in 2011
Sabine Leitner WLS12c, Exa*, IDM, EM12c, DB @ Private, Public, Hybrid #Cloud Event 26.04. FFM #Oracle http://bit.ly/zcRuxi @OracleCloudZone @soacommunity
Michel Schildmeijer @wlscommunity @MiddlewareMagic @OTNArchBeat @Oracle_Fusion Oracle WebLogic / SOA Suite 11g HACMP Cluster take-over http://lnkd.in/G78qMd
Oracle Middleware Hear how ODI and SOA's unified approach are key to untangling your business. April 24 10AM PT http://bit.ly/IdcsUz #Oracle
OTNArchBeat Using SAP Adapter with OSB 11g (PS3) | Shub Lahiri http://bit.ly/IswR9K
SOA Community Integrating with Oracle Fusion Applications: Discovering Integration Artifacts https://blogs.oracle.com/governance/entry/integrating_with_oracle_fusion_applications #soacommunity #oer #governance
OracleBlogs Tuning B2B Server Engine Threads in SOA Suite 11g http://ow.ly/1iH5bx
OracleBlogs Top Tweets SOA Partner Community April 2012 http://ow.ly/1iVHfA
SOA Community Oracle SOA Suite 11g Database Growth Management http://wp.me/p10C8u-pi
Sabine Leitner WLS12c, Exa*, IDM, EM12c, DB @ Private, Public, Hybrid #Cloud Event 24.04. München #Oracle http://bit.ly/zcRuxi @OracleCloudZone @soacommunity
SOA Community Testing Business Rules by Mark Nelson http://redstack.wordpress.com/2012/04/18/testing-business-rules/ #soacommunity #soa #rules #oracle
SOA Community Top Tweets SOA Partner Community - April 2012 http://wp.me/p10C8u-pn
OTNArchBeat Webcast: Untangle Your Business with Oracle Unified SOA and Data Integration - April 24 http://bit.ly/IQexqT
OTNArchBeat "Do more with SOA Integration: Best of Packt" contributors include @gschmutz, @llaszews, many others http://amzn.to/HVWwYt
ServiceTechSymposium Symposium agenda page coming together - page launched today with keynotes, sessions to be added shortly. http://www.servicetechsymposium.com/agenda2012.php
SOA Community Shipping Specialization plaques - congratulation #Fujitsu - request yours https://soacommunity.wordpress.com/2011/02/23/who-are-the-soa-experts-specialization-recognized-by-customers/ #soacommunity #OPN http://pic.twitter.com/YMRm2ion
ServiceTechSymposium Call for Presentations submission deadline moved up to May 21, 2012. Send your presentation submissions ASAP!
ServiceTechSymposium Symposium keynote by Vicente Navarro, European Space Agency, added to agenda: "SOA & Service-Orientation at the European Space Agency"
SOA Community Running a large #soa project? Make sure you read - Oracle SOA Suite 11g Database Growth Management #soacommunity #opn
SOA Community List all BPM Processes for a user by Yogesh l #bpm #oracle #soacommunity

For regular information on Oracle SOA Suite become a member of the SOA Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required)

Blog Twitter LinkedIn Mix Forum

Technorati Tags: soacommunity, twitter, Oracle, SOA Community, Jürgen Kress, OPN, SOA, BPM

    Read the article

  • BizTalk Server Monitoring – SharePoint Web Part

    - by SURESH GIRIRAJAN
I have worked with customers using BizTalk as shared infrastructure in the enterprise, where two or more BizTalk apps for different business groups run on it. These customers are not using the BizTalk ESB portal even though they are using the BizTalk ESB exception framework. The main issue for all these business groups is that they have no visibility into the BizTalk apps running in production, even though they have SCOM and other monitoring in place. So let me list a few issues I am trying to address and how I mitigate them: how to get visibility into production, how to provision access to the BizTalk resources with minimal effort, and how to take advantage of the resources we already have today.

I was working on creating REST data services for BizTalk RFID a year ago (available on CodePlex), and I thought I would extend that idea to take advantage of the BizTalk Data Services available on CodePlex. I extended the BizTalk Data Services and will upload the updated service soon.

Let me walk through how my solution works. As the first step, I am using the BizTalk Data Service (a REST service) which exposes most of the BizTalk artifacts as resources, such as Applications, Orchestrations, Send ports, Receive ports, Host instances and In-process instances.

BizTalk Server Monitoring – SharePoint Web Part

I am hosting the BizTalk Data Service in IIS, with the application pool configured to run under BizTalk administrator credentials. With this setup I make the service accessible anonymously. As the next step of the solution, I created a SharePoint Visual Web Part which consumes the BizTalk Data Service and displays all the BizTalk application and platform settings in read-only mode, even though BizTalk Data Services also offers actions such as starting and stopping Orchestrations, Send ports, Receive locations, Host instances, etc.

Host Instances
BizTalk Applications
BizTalk Running / Suspended Instances

This BizTalk Monitoring SharePoint Web Part is then added to SharePoint, which eliminates the need to grant access to BizTalk users explicitly: when a BizTalk contractor or BizTalk application user needs access to the BizTalk environment, all they need is access to the SharePoint website. You can configure the web part to point to a different endpoint depending on your environment. I am making it read-only to keep it simple for users and for provisioning; it removes the dependency on a BizTalk admin, at least for viewing BizTalk application status, errors and so on. If any changes need to be made to a BizTalk application, it is the application owner's responsibility to coordinate with the BizTalk admins.

There are options like the BizTalk ESB portal, BizTalk 360, etc., but this is one approach to reduce the number of steps required to give access to BizTalk application users and to maximize the resources an enterprise already has. You can also expose this data service through Azure Service Bus and access it from other apps, such as mobile devices, or from a website hosted in Azure.

One last thing: I have tested this only with BizTalk Server 2010 on an x64 VM, but it should work on other versions. I will try to upload the code shortly with instructions on how to set it up. I welcome thoughts and suggestions… Hope this helps….
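Because the data service is hosted anonymously over plain HTTP, any REST-capable client can read the same resources the web part displays. As a minimal illustration (my sketch, not from the post; the web part itself is a C# SharePoint component, and the /BizTalkDataServices/Applications URL below is a placeholder, not the service's documented path), here is a read-only client using libcurl:

    // Hypothetical read-only client for a BizTalk Data Services endpoint.
    // Assumes the service is hosted anonymously in IIS as described above;
    // the URL is a placeholder for whatever your deployment exposes.
    #include <curl/curl.h>
    #include <iostream>
    #include <string>

    // libcurl write callback: append each chunk of the HTTP body to a std::string.
    static size_t append_chunk(char* data, size_t size, size_t nmemb, void* out) {
        static_cast<std::string*>(out)->append(data, size * nmemb);
        return size * nmemb;
    }

    int main() {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL* curl = curl_easy_init();
        if (!curl) return 1;

        std::string body;
        curl_easy_setopt(curl, CURLOPT_URL,
                         "http://biztalk-server/BizTalkDataServices/Applications");
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, append_chunk);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);

        CURLcode rc = curl_easy_perform(curl);
        if (rc == CURLE_OK)
            std::cout << body << std::endl;   // feed of application resources
        else
            std::cerr << curl_easy_strerror(rc) << std::endl;

        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return 0;
    }

The same GET, pointed at the Host Instances or Orchestrations resources, would feed whatever dashboard or mobile client you put in front of it.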

    Read the article

  • Is there a low carbon future for the retail industry?

    - by user801960
Recently Oracle published a report in conjunction with The Future Laboratory and a global panel of experts to highlight the issue of energy use in modern industry and the serious need to reduce carbon emissions radically by 2050. Emissions must be cut by 80-95% below 1990 levels – but what can the retail industry do to keep up with this?

There are three key aspects of the retail industry where carbon emissions can be cut: manufacturing, transport and IT.

Manufacturing

Naturally, manufacturing is going to be a big area where businesses across all industries will be forced to make considerable savings in carbon emissions as well as other forms of pollution. Many retailers of all sizes will use third-party factories and will have little control over specific environmental impacts from the factory, but retailers can reduce environmental impact at the factories by managing orders more efficiently – better planning for stock requirements means economies of scale both in terms of finance and the environment.

The John Lewis Partnership has made detailed commitments to reducing manufacturing and packaging waste on both its own-brand products and products it sources from third-party suppliers. It aims to divert 95 percent of its operational waste from landfill by 2013, which is a huge logistics challenge. The John Lewis Partnership's website provides a large amount of information on its responsibilities towards the environment.

Transport

Similarly to manufacturing, tightening up logistical planning for stock distribution will make savings on carbon emissions from haulage. More accurate supply and demand analysis will mean less stock re-allocation after initial distribution, and better warehouse management will mean more efficient stock distribution. UK grocery retailer Morrisons has introduced double-decked trailers to its haulage fleet and adjusted distribution logistics accordingly to reduce the number of kilometres travelled by the fleet. Morrisons measures route-planning efficiency in terms of cases moved per kilometre and has, over the last two years, increased the number of cases per kilometre by 12.7%. See Morrisons' Corporate Responsibility report for more information.

IT

IT infrastructure is often overlooked when businesses first consider environmental efficiency. Datacentres and web servers often need to run 24/7 to handle both consumer orders and internal logistics, and this both requires a lot of energy and puts out a lot of heat. Many businesses are lowering environmental impact by reducing IT system fragmentation in their offices, while an increasing number are outsourcing their datacentres to cloud-based services. Using centralised datacentres reduces the power usage at smaller offices, while using cloud-based services means the datacentres can be based in a more environmentally friendly location. For example, Facebook is opening a massive datacentre in Sweden – close to the Arctic Circle – to reduce the need for artificial cooling methods. In addition, moving to a cloud-based solution makes IT services more easily scalable, reducing redundant IT systems that would still use energy. In store, the UK's Carbon Trust reports that on average, lighting accounts for 25% of a retailer's electricity costs, and for grocery retailers, up to 50% of the electricity bill comes from refrigeration units. On a smaller scale, retailers can invest in greener technologies in store and in their offices.

The report concludes that widely shared objectives of energy security, reduced emissions and continued economic growth are dependent on the development of a smart grid capable of delivering energy efficiency and demand response, as well as integrating renewable and variable sources of energy.

The report is available to download from http://emeapressoffice.oracle.com/imagelibrary/detail.aspx?MediaDetailsID=1766

I'd be interested to hear your thoughts on the report.

    Read the article

  • Silverlight Cream for May 13, 2010 -- #861

    - by Dave Campbell
In this Issue: Sigurd Snørteland, Jeff Prosise, DaveDev, Joe Zhou, Chris Eargle, John Papa (-2-, -3-), and David Anson (-2-).

Shoutouts: In with the links I've listed below, Sigurd Snørteland also sent a link to this app he's working on, which is actually pretty cool to see: ZuneLight. The code is not yet available. He also has a no-code demo of a Silverlight Media Center. Pieter Voloshyn, Luiz Thadeu, and Jhun Iti have a very nice Silverlight image editor up: Thumba

From SilverlightCream.com:

WP7 - Silverlight on mobile
Sigurd Snørteland submitted some links for me that have been translated to English from his blog. I hope the pages come out good because he's got a lot of good stuff on there. This one has a link to a presentation he did, and 4 projects you can load up in the emulator that he's converted to the phone: weather, worldclock, coverflow, and solitaire ... pretty cool... thanks for the links Sigurd!

Understanding Page Orientation in Silverlight for Windows Phone
Jeff Prosise has a really nice post up on page orientation in WP7 ... what it means to your app, how to detect it, and example code for what to do then... also love a quote by Jeff: "Silverlight for Windows Phone is the hottest thing since color TV"

Why you should check out Expression Blend Behaviors when using Silverlight
DaveDev has a post up describing Behaviors and why we should use them, plus tons of external links to resources, blogs, videos... all good stuff...

Fiddler inspector for WCF Silverlight Polling Duplex and WCF RIA
Joe Zhou announces and provides a link to a new Fiddler inspector that understands the framing in Polling Duplex and also raw binary XML and binary SOAP.

Windows Phone Controls v0.7
Chris Eargle reports the release of Version 0.7 of the Windows Phone Controls project on CodePlex ... this includes a Pivot control and a Panorama control... both very nicely done.

Binding to Silverlight ComboBox and Using SelectedValue, SelectedValuePath and DisplayMemberPath
John Papa responds to a user question and put up a nice post about binding to a ComboBox and then going from the selected item to some other property ... code included

No More Boxes! Exploring the PathListBox (Silverlight TV #25)
Silverlight TV 25 went up on Tuesday ... thought it was going to be Thursday?? anyway ... John Papa and Adam Kinney are discussing the PathListBox and looking at some cool demos thereof.

Exposing SOAP, OData, and JSON Endpoints for RIA Services (Silverlight TV 26)
Since today IS Thursday, we have a new Silverlight TV, number 26, and John Papa is chatting with Deepesh Mohnani of the WCF RIA Services team about exposing all sorts of endpoints... should be something in there for everybody :)

Workaround for a Silverlight data binding bug affecting various scenarios - including DataGrid+ContextMenu
David Anson details the rabbit-trail he and others on the team followed in response to a problem reported via Twitter where the binding on a DataGrid seemed off by a row(!) ... weird but true, validated, and SL3/4 are bug-for-bug compatible with this too! ... But David wouldn't leave us there.. he also has a workaround.

Sharing the code for a simple Silverlight 4 REST-based cloud-oriented file management app for Azure and S3
David Anson had an opportunity to build an app he's wanted to build for a while and shares it with us: Blobstore -- a small, lightweight Silverlight 4 application that acts as a basic front-end for the Windows Azure Simple Data Storage and the Amazon Simple Storage Service (S3) -- and remember I said he shared the source :)

Stay in the 'Light!
Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream
Join me @ SilverlightCream | Phoenix Silverlight User Group
Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • SQL SERVER – Solution – Puzzle – SELECT * vs SELECT COUNT(*)

    - by pinaldave
Earlier I published the puzzle Why SELECT * throws an error but SELECT COUNT(*) does not. The question received many interesting comments. Let us go over a few of the answers which are valid. Before I start, let me acknowledge Rob Farley, who not only answered correctly first but also started an interesting conversation in the same thread.

The obvious question is: what is the right answer? I would like to point to the official Microsoft Connect items which discuss the same:

RGarvao: https://connect.microsoft.com/SQLServer/feedback/details/671475/select-test-where-exists-select
tiberiu utan: http://connect.microsoft.com/SQLServer/feedback/details/338532/count-returns-a-value-1

Rob Farley: count(*) is about counting rows, not a particular column. It doesn't even look to see what columns are available, it'll just count the rows, which in the case of a missing FROM clause, is 1. "select *" is designed to return columns, and therefore barfs if there are none available. Even more odd is this one: select 'blah' where exists (select *) You might be surprised at the results…

Koushik: The engine performs a "Constant Scan" for COUNT(*), whereas in the case of "SELECT *" the engine tries to perform Index/Cluster/Table scans.

amikolaj: When you query 'select * from sometable', SQL replaces * with the current schema of that table. Without a source for the schema, SQL throws an error. So when you query 'select count(*)', you are counting the one row. * is just a constant to SQL here. Check out the execution plan. Like the description states – 'Scan an internal table of constants.' You could do select COUNT('my name is adam and this is my answer') and get the same answer.

Netra Acharya: SELECT *: here, * represents all columns from a table, so it always looks for a table (as we know, there should be a FROM clause before specifying the table name) and throws an error whenever this condition is not satisfied. SELECT COUNT(*): here, COUNT is a function, so it is not mandatory to provide a table. Check it out:

DECLARE @cnt INT
SET @cnt = COUNT(*)
SELECT @cnt
SET @cnt = COUNT('x')
SELECT @cnt

Naveen: Select 1 / Select '*' will return 1/* as expected. Select Count(1)/Count(*) will return the count of the result set of the select statement. Count(1)/Count(*) will have one 1/* for each row in the result set of the select statement. The result set of Select 1 or Select '*' contains only 1 result, so the count is 1. Whereas "Select *" is syntax which expects a table or an equivalent of a table (table functions, etc.). It is like a compilation error for that query.

Ramesh: Hi Friends, COUNT is an aggregate function and it expects rows (a list of records) for a specified single column, or whole rows for *. So 'select *' definitely gives an error, because '*' is meant to have all the fields but there is no table, and without a table it can only raise an error. In the case of 'Select Count(*)', there is one record in the count function, so you will get the result '1'. Try using Select COUNT('RAMESH') and think of the error 'Must specify table to select from.' in place of 'RAMESH'. Pinal: if I am wrong then please clarify this.

Sachin Nandanwar: Any aggregate function expects a constant or a column name as an expression. DO NOT be confused by * in an aggregate function. The aggregate function does not treat it as a column name or a set of column names but as a constant value, as * is a keyword in SQL. You can replace any value instead of * for the COUNT function; for example, Select COUNT(5) will result in 1. The error resulting from select * is obvious: it expects an object from which it can extract the result set.

I sincerely thank you all for the wonderful conversation; I personally enjoyed it and I am sure all of you feel the same.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

Filed under: CodeProject, Pinal Dave, PostADay, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article
