Search Results

Search found 18790 results on 752 pages for 'photo blogs'.


  • Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 5)

    - by hinkmond
    So, here's the finished product. I have 8 networked Raspberry Pi devices strategically placed around our Oracle Santa Clara Building 21 office. I attached a JFET-transistor-based EMF sensor to each device to capture any strange fluctuations in the electromagnetic field (which paranormal spirits can supposedly disturb as they pass by). And, I have a Web app (embedded in this page) which can take the readings and show a graphical display in real-time. As you can see, all the Raspberry Pi devices are blinking away green, indicating they are all operational and all sensors are working correctly. But, I don't see anything... Darn... Maybe I have to stare at the Web app for a while. I don't know when the "alleged" ghosts in our Oracle Santa Clara office are supposed to be active, but let me know if you see anything... Oh, and by the way, Happy Halloween from the Internet of Spooky Things! See the previous posts for the full series on the steps to this cool demo: Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 1) Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 2) Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 3) Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 4) Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 5) Hinkmond
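
    For readers following along at home, here is a minimal sketch of the sensor-polling side, assuming the EMF sensor's output is wired to a GPIO pin exposed through the Linux sysfs interface. The pin number and polling interval are made up for illustration; the actual wiring is covered in Parts 1-4 of the series.

        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;

        public class EmfPoller {
            // Hypothetical GPIO pin; the real pin depends on how the JFET sensor is wired.
            private static final Path GPIO_VALUE = Paths.get("/sys/class/gpio/gpio17/value");

            public static void main(String[] args) throws Exception {
                while (true) {
                    // Read the current sensor state (0 or 1) from the sysfs GPIO interface.
                    String raw = new String(Files.readAllBytes(GPIO_VALUE)).trim();
                    if ("1".equals(raw)) {
                        // A fluctuation was detected; a real version would POST this
                        // reading to the Web app instead of just printing it.
                        System.out.println("EMF fluctuation at " + System.currentTimeMillis());
                    }
                    Thread.sleep(500); // poll twice a second
                }
            }
        }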

    Read the article

  • Introduction to WebCenter Personalization: &ldquo;The Conductor&rdquo;

    - by Steve Pepper
    There are some new faces in the town of WebCenter with the latest 11g PS3 release.  A new component has introduced itself as "Oracle WebCenter Personalization", a.k.a. WCP, to simplify delivery of a personalized experience and content to end users.  This posting reviews one of the primary components within WCP: "The Conductor".
    The Conductor: This ain't just an ordinary cloud...
    One of the founding principles behind WebCenter Personalization was to provide an open client-side API that remains independent of the technology invoking it, in addition to independence from the architecture running it.  The Conductor delivers this, and much, much more. The Conductor is the engine behind WebCenter Personalization that allows flow-based documents, called "Scenarios", to be managed and executed on the server side through a well-published RESTful API.  The Conductor also supports an extensible model for custom provider integration that can be easily invoked within a Scenario to promote seamless integration with existing business assets.
    Introducing the Scenario Conductor
    Scenarios are declarative documents, authored offline using the custom Personalization JDeveloper bundle included with WebCenter.  A Scenario contains one (or more) statements that can:
    - Create variables that are scoped to the current execution context
    - Iterate over collections, or loop until a specific condition is met
    - Execute one or more statements when a condition is met
    - Invoke other Scenarios that exist within the same namespace
    - Invoke a data provider that integrates with custom applications
    Once a variable is assigned within the Scenario's execution context, it can be referenced anywhere within the same Scenario using the common Expression Language syntax used in J2EE web containers. Scenarios are then published and tested to the Integrated WebLogic Server domain, or published remotely to other domains running WebCenter Personalization.
    Various Client-side Models
    The Conductor server API is built upon RESTful services that support a wide variety of clients able to communicate over HTTP.  The Conductor supports the following client-side models:
    - REST: Popular browser-based languages can be used to manage and execute Conductor Scenarios.  There are other public methods to retrieve configured provider metadata that can be used by custom applications. The Conductor currently supports XML and JSON for its API syntax.
    - Java: WebCenter Personalization delivers a robust and lightweight Java client with the popular Jersey framework as its foundation (see the sketch at the end of this post).  It has never been easier to write a remote Java client to manage remote RESTful services.
    - Expression Language (EL): Allows the results of Scenario execution to control your user interface or embed personalized content using the session-scoped managed bean.  The EL client can also be used in straight JSP pages with minimal configuration.
    Extensible Provider Framework
    The Conductor supports a pluggable provider framework for integrating custom code with Scenario execution.  There are two types of providers supported by the Conductor:
    - Function Provider: Function Providers are simple annotated Java classes with static methods that are meant to be served as utilities.  Some common uses would include object creation or instantiation, data transformation, and the like.  Function Providers can be invoked using the common EL syntax from variable assignments, conditions, and loops. For example: ${myUtilityClass:doStuff(arg1,arg2)}. If you are familiar with EL Functions, Function Providers are based on the same concept.
    - Data Provider: Like Function Providers, Data Providers are annotated Java classes, but they must adhere to a much stricter object model.  Data Providers have access to a wealth of Conductor services, such as the namespace-scoped configuration API that can be managed by Oracle Enterprise Manager, the Scenario execution context for expression resolution, and more.  Oracle ships with three out-of-the-box data providers that support integration with standardized content servers (CMIS), federated profile properties through the Properties Service, and WebCenter Activity Graph.
    Useful References
    If you are looking to get started immediately writing your own application using WebCenter Personalization Services, you will find the following references helpful in getting you on your way:
    - Personalizing WebCenter Applications
    - Authoring Personalized Scenarios in JDeveloper
    - Using Personalization APIs Externally
    - Implementing and Calling Function Providers
    - Implementing and Calling Data Providers
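
    As promised above, a rough illustration of the Java client-side model: a minimal Jersey 1.x client invoking a Conductor REST endpoint. The host, port, and resource path below are placeholders, not the documented Conductor URL scheme; consult "Using Personalization APIs Externally" for the real API.

        import com.sun.jersey.api.client.Client;
        import com.sun.jersey.api.client.WebResource;

        public class ConductorClientSketch {
            public static void main(String[] args) {
                // Hypothetical endpoint; the actual Conductor resource paths are
                // documented in the references listed above.
                String url = "http://localhost:7101/wcps/api/conductor/namespaces/myNs/scenarios/myScenario";

                Client client = Client.create();
                WebResource scenario = client.resource(url);

                // Ask the server to execute the Scenario and return JSON.
                String result = scenario.accept("application/json").get(String.class);
                System.out.println(result);
            }
        }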

    Read the article

  • Digital HD Transition

    - by Bill Evjen
    The HD Experience
    - Roughly 53% of the viewing public has HD-capable devices in their home
    - 24% think they are watching HD while they have no subscription to any HD content
    Today's HD Considerations
    - Choices abound: format resolution (720p, 1080i/p), frame rates, compression and wrapping, audio compression and delivery, metadata packaging, delivery, and usage, content delivery protocols
    - Metadata is going to be a part of the overall experience
    - Emerging technologies: Super Hi-Vision (SHV, UHDTV 4320p), 3D, HEVC/H.265, WebM/VP8, HDBaseT, P2PTV, Dolby Pulse/HE-AAC
    - Industry standardization: metadata registration, packaging, and delivery standards
    - Improved picture and sound quality is a logical next step, but we need to also think about the end-to-end viewing experience, including: 3D video and audio content; mixed-mode viewing to bring interactive and immersive experiences; content transportability both on-to and off-of the aircraft
    High Definition Standardization
    - Analog switch-off around the world: DTV transition completed in 17 countries, in progress in 45 countries; the EU has mandated the end of 2012 as the final date for analog switch-off
    - D-Cinema was standardized by SMPTE in 2006
    - Airlines are installing HD displays today; passengers are bringing their own devices now
    - HD TVs on airlines are getting bigger and bigger (bigger than SD was), now up to 23"
    - Gray scale data input for color: 6 to 8 bit
    - Contrast: 400 to 700
    - Backlit: LED
    - Encryption: can it be the same for HD? PPV in the cabin?

    Read the article

  • Hybrid IT or Cloud Initiative – a Perfect Enterprise Architecture Maturation Opportunity

    - by Ted McLaughlan
    All too often in the growth and maturation of Enterprise Architecture initiatives, the effort stalls or is delayed due to lack of “applied traction”. By this, I mean the EA activities - whether targeted towards compliance, risk mitigation or value opportunity propositions – may not be attached to measurable, active, visible projects that could advance and prove the value of EA. EA doesn’t work by itself, in a vacuum, without collaborative engagement and a means of proving usefulness. A critical vehicle to this proof is successful orchestration and use of assets and investment resources to meet a high-profile business objective – i.e. a successful project. More and more organizations are now exploring and considering some degree of IT outsourcing, buying and using external services and solutions to deliver their IT and business requirements – vs. building and operating in-house, in their own data centers. The rapid growth and success of “Cloud” services makes some decisions easier and some IT projects more successful, while dramatically lowering IT risks and enabling rapid growth. This is particularly true for “Software as a Service” (SaaS) applications, which essentially are complete web applications hosted and delivered over the Internet. Whether SaaS solutions – or any kind of cloud solution - are actually, ultimately the most cost-effective approach truly depends on the organization’s business and IT investment strategy. This leads us to Enterprise Architecture, the connectivity between business strategy and investment objectives, and the capabilities purchased or created to meet them. If an EA framework already exists, the approach to selecting a cloud-based solution and integrating it with internal IT systems (i.e. a “Hybrid IT” solution) is well-served by leveraging EA methods. If an EA framework doesn’t exist, or is simply not mature enough to address complex, integrated IT objectives – a hybrid IT/cloud initiative is the perfect project to advance and prove the value of EA. Why is this? For starters, the success of any complex IT integration project - spanning multiple systems, contracts and organizations, public and private – depends on active collaboration and coordination among the project stakeholders. For a hybrid IT initiative, inclusive of one or more cloud services providers, the IT services, business workflow and data governance challenges alone can be extremely complex, requiring many diverse layers of organizational expertise and authority. Establishing subject matter expertise, authorities and strategic guidance across all the disciplines involved in a hybrid-IT or hybrid-cloud system requires top-level, comprehensive experience and collaborative leadership.
Tools and practices reflecting industry expertise and EA alignment can also be very helpful – such as Oracle’s “Cloud Candidate Selection Tool”. Using tools like this, and facilitating this critical collaboration by leading, organizing and coordinating the input and expertise into a shared, referenceable, reusable set of authority models and practices – this is where EA shines, and where Enterprise Architects can be most valuable. The “enterprise”, in this case, becomes something greater than the core organization – it includes internal systems, public cloud services, 3rd-party IT platforms and datacenters, distributed users and devices; a whole greater than the sum of its parts. Through facilitated project collaboration, leading to identification or creation of solid governance models and processes, a durable and useful Enterprise Architecture framework will usually emerge by itself, if not actually identified and managed as such. The transition from planning collaboration to actual coordination, where the program plan, schedule and resources become synchronized and aligned to other investments in the organization portfolio, is where EA methods and artifacts appear and become most useful. The actual scope and use of these artifacts, in the context of this project, can then set the stage for the most desirable, helpful and pragmatic form of the now-maturing EA framework and community of practice. Considering or starting a hybrid-IT or hybrid-cloud initiative? Running into some complex relationship challenges? This is the perfect time to take advantage of your new, growing or possibly latent Enterprise Architecture practice.

    Read the article

  • Reminder: WebLogic Global, Virtual Developer Day November 5

    - by jeckels
    Just a quick reminder about the FREE virtual developer day focused on WebLogic (and Coherence) coming on November 5th. This day, with content tailored for developers, will guide you through tooling updates and best practices around creating applications with WebLogic and Coherence as target platforms. We'll also explore advances in how you can manage your build, deploy and ongoing management processes to streamline your application's life cycle. And of course, we'll conclude with some hands-on labs that ensure this isn't all a bunch of made-up stuff - get your hands dirty in the code!
    November 5, 2013
    9am PT / 12pm ET
    REGISTER NOW
    We're offering two tracks for your attendance, though of course you're free to attend any session you wish. The first will be for pure developers, with sessions around developing for WebLogic with HTML5, processing live events with Coherence, and looking at development tooling. The second is for developers who are involved in the building and management processes as part of the application life cycle. These sessions focus on using Maven for builds, using Chef and Puppet for configuration, and more.
    We look forward to seeing you there - don't forget to invite a friend!

    Read the article

  • My First 5K

    - by Chris Williams
    So… yesterday I registered for my first 5K event. It’s in Eden Prairie this weekend. It’s a pretty major milestone for me, especially since I absolutely hate running with a passion. Still, I have to admit I’m rather excited about it. Given that this is my first event, I have no illusions about winning. My immediate goal is simple… don’t come in last. I’ll let you know how it goes.

    Read the article

  • Creating Corporate Windows Phone Applications

    - by Tim Murphy
    Most developers write Windows Phone applications for their own gratification and their own wallets.  While most of the time I would put myself in the same camp, I am also a consultant.  This means that I have corporate clients who want corporate solutions.  I recently got a request for a system rebuild that includes a Windows Phone component.  This brought up the question of what the important aspects to consider are when building for this situation.
    Let's break it down into the points that are important to a company using a mobile application.  The company wants to make sure that its proprietary software is safe from use by unauthorized users.  It also wants to make sure that the data is secure on the device.
    The first point is a challenge.  There is no such thing as true private distribution in the Windows Phone ecosystem at this time.  What is available is the ability to specify your application for targeted distribution.  Even with targeted distribution you can't ensure that only individuals within your organization will be able to load your application.  Because of this I am taking two additional steps.  The first is to register the phone's DeviceUniqueId within your system.  Add a system sign-in and that should cover access to your application.
    The second half of the problem is securing the data on the phone.  This is where the ProtectedData API within the System.Security.Cryptography namespace comes in.  It allows you to encrypt your data before pushing it to isolated storage on the device.
    With the announcement of Windows Phone 8 coming this fall, many of these points will have different solutions.  Private signing and distribution of applications will be available.  We will also have native access to BitLocker.  When you combine these capabilities, enterprise application development for Windows Phone will be much simpler.  Until then, work with the above suggestions to develop your enterprise solutions. del.icio.us Tags: Windows Phone 7,Windows Phone,Corporate Deployment,Software Design,Mango,Targeted Applications,ProtectedData API,Windows Phone 8

    Read the article

  • Profiling Startup Of VS2012 &ndash; Ants Profiler

    - by Alois Kraus
    I just downloaded ANTS Profiler 7.4 to check how fast it is and how deep I can analyze the startup of Visual Studio 2012. The Pro version, which is the useful one, costs 445€, which is OK. To measure a complex system I decided to simply profile VS2012 (Update 1) on my older Intel 6600 2.4 GHz with 3 GB RAM and a 32-bit Windows 7. ANTS Profiler is really easy to use. So let's try it out. ANTS Profiler does want to start the profiled application on its own, which seems to be rather common. I chose Method Level timing of all managed methods. In the configuration menu I asked for all call stacks to get full details. Once this is configured you are ready to go.   After that you can select the Method Grid to view Wall Clock Time in ms. I hate percentages, which are on by default, because I want to look at where absolute time is spent and not something else.   From the Method Grid I can drill down to see where time is spent, and I can look at the decompiled methods where the time is spent. This does really look nice. But did you see the size of the scroll bar in the method grid? Although I wanted all call stacks, I get only about 4 pages of methods to drill down into. From the scroll bar count I would guess that the profiler shows me about 150 methods for the complete VS startup. This is nonsense. I will never find a bottleneck in VS when I am presented only a fraction of the methods that were actually executed. I also tried, in the configuration window, to profile the extremely trivial functions as well, but there was no noticeable difference. It seems that ANTS Profiler filters away far too many details to be useful for bigger systems. If you want to optimize a CPU-bound operation inside NUnit, then ANTS Profiler, with its line-level timings, is a very nice tool to work with. But for bigger stuff it is certainly not usable. I also do not like that I must start the profiled application from the profiler UI. This makes it hard to profile processes which are started by some other process. Next: JetBrains dotTrace

    Read the article

  • New MOS note regarding Oracle Fusion Middleware certifications

    - by Sadia2
    To get started with the My Oracle Support Certification Tool for newer Oracle Fusion Middleware releases, see Doc ID 1368736.1. This includes Oracle WebLogic Server 10.3.4+, and many popular certifications for Oracle Fusion Middleware 11.1.1.4 and 11.1.1.5. Beginning with FMW 11.1.1.6 and other FMW 11g R2 (11.1.2) releases (e.g., Forms & Reports, Identity and Access Management) there is a concerted effort to load all FMW certifications into the MOS Certification tool. To help you find certification information for older Oracle Fusion Middleware releases, see Doc ID 431578.1.

    Read the article

  • Customer Experience Metrics That Matter Most

    - by Charles Knapp
    When customers contact your company, they don't ask to be deflected or handled or converted. They want to be satisfied. To improve the customer experience, you need more than traditional measures such as deflection rates, handling times, and conversion rates. In this new Oracle AppCast podcast, tune in to a conversation with me about customer experience metrics that you can use to grow your business. Would you like to learn more? Please join us at the one-of-a-kind Customer Experience Summit at the Oracle OpenWorld Conference, October 3-5 in San Francisco.

    Read the article

  • The 2012 JAX Innovation Awards

    - by Janice J. Heiss
    A new article, now up on otn/java, titled "The 2012 JAX Innovation Awards," reports on important Java developments celebrated by the Awards, which were announced in July of 2012. The Awards, given by S&S Media Group, aim to "reward those technologies, companies, organizations and individuals that make outstanding contributions to Java." The Awards fall into three categories: Most Innovative Java Technology, Most Innovative Java Company, and Top Java Ambassador. In addition, a finalist who did not win an award receives a Special Jury prize, "in acknowledgement of their unique contribution and positive impact on the Java ecosystem."
    The winners were: JetBrains for Most Innovative Java Company; Adam Bien as Top Java Ambassador; Restructure 101, created by Headway Software, as Most Innovative Technology; and Charles Nutter, Special Jury award. Each winner received a $2,500 prize, and the five finalists in each category were invited to attend the JAX Conference in San Francisco, California.
    JetBrains Fellow Ann Oreshnikova listed her favorite JetBrains innovations:
    - Nullability annotations and nullability checker
    - CamelCase navigation and completion
    - Continuous Integration in grid (on multiple agents), in TeamCity
    - IntelliJ Platform and its language support framework
    - MPS language workbench
    - Kotlin programming language
    When asked what currently excites him about Java, Adam Bien, winner of the Java Ambassador Award, expressed enthusiasm over the increasing interest of smaller companies and startups in Java EE. "This is a very good sign," he said. "Only a few years ago J2EE was mostly used by larger companies -- now it becomes interesting even for one-person shows. Enterprise Java events are also extremely popular. On the Java SE side, I'm really excited about Project Nashorn."
    Special Jury Prize winner Charles Nutter of Red Hat remarked that "JRuby seems to have hit a tipping point this past year, moving from 'just another Ruby implementation' to 'the best Ruby implementation for X,' where X may be performance, scaling, big data, stability, reliability, security, and a number of other features important for today's applications." Check out the complete article here.

    Read the article

  • Performing a clean database build with MSBuild part 2

    - by Robert May
    In part 1, I showed a complicated mechanism for performing a clean database build. There’s an easier way.  The easier way is to use the msbuild extension tasks out on codeplex.  While you’ll still need to forcibly take the database offline (ALTER DATABASE [mydb] SET OFFLINE WITH ROLLBACK IMMEDIATE), the other msbuild tasks more easily allow you to create and delete the database.  Eventually, I’ll post an example. Technorati Tags: MSBuild

    Read the article

  • Being rocked...

    - by ZacHarlan
    After almost four and a half years, I finally escaped from the world of telemarketing.     I'm now at a place that writes really good code, values testing, does routine code reviews, and collaborates with each other so continuously and effectively that somebody should make a documentary about it!   Today alone, I had two really smart and well-respected developers go line by line through my code and show me how to make it better.  Seriously, people pay really good money for something like this, and they don't get near the quality of feedback that I got!     +1 for me finally getting to a point in my career where I get to work with some of the best of the best in the software world!   I've been rocked by the fact that places like this actually exist.     I've been Rocked by the sheer size, complexity and simplicity of our website.     Most importantly I've been ROCKED by the fact that this many smart people check their egos at the door, gel together and look for ways to make software better than how they found it.  This is how to grow a business with tech... hire great people and watch them go!   Seriously, bravo.

    Read the article

  • Java Magazine: Java at Sea!

    - by Tori Wieldt
    The September/October issue of Java Magazine is now out, with several great Java stories, including:
    - Java at Sea: Liquid Robotics charts a new course with expert help from Java pioneer James Gosling.
    - Duke's Choice Awards: Meet this year's winners! (The awards will be presented at the JavaOne Sunday night reception at the Taylor Street Cafe.)
    - Looking Ahead to Project Lambda: Java Language Architect Brian Goetz on the importance of lambda expressions. (A quick taste of the syntax follows at the end of this post.)
    - JCP Q&A: Ben Evans: The London JUG representative talks about the JCP and the Java community.
    - Java EE Connector Architecture 1.6: Adam Bien on deep integration with connector services in a lean way.
    - DataFX: Populate JavaFX Controls with Real-World Data: Tools to retrieve, parse, and render data in a variety of JavaFX controls.
    - Fix This: Stephen Chin challenges your JavaFX skills.
    Java Magazine is a bi-monthly online publication. It includes technical articles on the Java language and platform; Java innovations and innovators; JUG and JCP news; Java events; links to online Java communities; and videos and multimedia demos. Subscriptions are free.
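
    For readers who haven't followed Project Lambda yet, here is a quick, self-contained taste of what the new syntax buys you compared to the anonymous-inner-class style we write today:

        import java.util.Arrays;
        import java.util.Collections;
        import java.util.Comparator;
        import java.util.List;

        public class LambdaTaste {
            public static void main(String[] args) {
                List<String> names = Arrays.asList("Tori", "Brian", "Adam");

                // Pre-lambda style: an anonymous inner class just to pass behavior.
                Collections.sort(names, new Comparator<String>() {
                    public int compare(String a, String b) {
                        return a.compareTo(b);
                    }
                });

                // With Project Lambda, the same behavior is a one-liner.
                Collections.sort(names, (a, b) -> a.compareTo(b));

                for (String n : names) {
                    System.out.println(n);
                }
            }
        }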

    Read the article

  • Configuration Tips for better Performance with ADF Mobile Apps

    - by SRINI INDLA
    Some tips to keep in mind to make sure an ADF Mobile application's performance is optimal:
    1. Select release mode in the deployment profile. This is perhaps the most important thing to remember to ensure the best performance for ADF Mobile apps. Selecting this option causes the deployer to package an optimized JVM and minified JS libraries with the mobile app, thereby significantly improving the overall performance of the application.
    2. For iOS you do not need to do anything else other than selecting release mode in the deploy profile. However, on Android you have to create a keystore and configure it in JDev --> Tools --> Preferences --> ADF Mobile --> Platforms : Android as shown in the snapshot below.
    3. Steps for generating the keystore for Android using keytool (see the example after this list).
    4. Logging level setting in logging.properties: make sure the log level is set to SEVERE for both the framework logger and the application logger, as follows:
    oracle.adfmf.framework.level=SEVERE
    oracle.adfmf.application.level=SEVERE
    5. When using SOAP web services with a WebService Data Control, make sure you select the option to copy the WSDL. This will cause JDev to download the WSDL and all the XSDs referenced by the WSDL from the server at design time and package them with the application during deployment. This way the application does not incur the cost of downloading these resources at run time from the device.
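
    The keystore-generation snapshot from step 3 is not reproduced here; a typical keytool invocation looks like the following. The keystore file name, alias, key size, and validity period are placeholder choices, not requirements:

        keytool -genkey -v -keystore release.keystore -alias mykey -keyalg RSA -keysize 2048 -validity 10000

    keytool then prompts for a keystore password and the distinguished-name fields, and the resulting release.keystore file is what you point JDev at in the Platforms : Android preference page.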

    Read the article

  • The Column Prediction_Status, MDP_Matrix and Engine. How are they Related? Understand Prediction_status Values

    - by user702295
    Do you know what these values are telling you?
    COUNT(*) PREDICTION_STATUS DO_FORE DO_AGGRI AGGRI_98 AGGRI_99 LEVEL_ID
    19854 99 1 1 1 1 3
    1077 99 0 1 1 1 0
    262691 99 1 1 -1
    56 99 0 1 1 1 2
    1 98 1 1 1 1
    1 99 0 1 1 1
    748796 1 1 1 4
    351633 1 1 1 1 1 2
    1877829 97 1 1 4
    840 99 1 1 1 1
    27 99 0 1 1 1 3
    1 97 1 1 -1
    66712 99 1 1 1 1 2
    53213 1 1 1 1 1 3
    2560 98 1 1 4
    Check out The Column Prediction_Status, MDP_Matrix and Engine. How are they Related? Understand Prediction_status Values (Doc ID 1509754.1). This customer is adding an additional processing burden while adding no value.  The incoming data should be scrubbed to eliminate the overhead.

    Read the article

  • Best Practices for High Volume CPA Import Operations with ebXML in B2B 11g

    - by Shub Lahiri, A-Team
    Background
    B2B 11g supports the ebXML messaging protocol, where multiple CPAs can be imported via command-line utilities.  This note highlights one aspect of the best practices for import of CPAs, when large numbers of CPAs, in excess of several hundred, are required to be maintained within the B2B repository.
    Symptoms
    The import of a CPA is usually a 2-step process, namely creating a soa.zip file using the b2bcpaimport utility based on a CPA properties file and then using b2bimport to import it into the b2b repository.  The commands are provided below:
    ant -f ant-b2b-util.xml b2bcpaimport -Dpropfile="<Path to cpp_cpa.properties>" -Dstandard=true
    ant -f ant-b2b-util.xml b2bimport -Dlocalfile=true -Dexportfile="<Path to soa.zip>" -Doverwrite=true
    Usually the first command completes fairly quickly regardless of the number of CPAs in the repository. However, as the number of trading partners within the repository goes up, the time to complete the second command can go up to ~30 secs per operation. So, this could add up to a significant amount if there is a need to import hundreds of CPAs into a production system within a limited-downtime maintenance window.
    Remedy
    In situations where there is a large number of entries to be imported, it is best to set up a staging environment and go through the import operation of each individual CPA in an empty repository. Since this will be done in an empty repository, the time taken for completion should be reasonable.  After all the partner profiles have been imported, a full repository export can be taken to capture the metadata for all the entries in one file.  If this single file with all the partner entries is imported into a loaded repository, the total time taken for import of all the CPAs should see a dramatic reduction.
    Results
    Let us take a look at the numbers to see the benefit of this approach. With a pre-loaded repository of ~400 partners, the individual import time for each entry takes ~30 secs. So, if we had to import another 100 partners, the individual entries would take ~50 minutes (100 times ~30 secs). On the other hand, if we prepare the repository export file of the same 100 partners from a staging environment earlier, the import takes about ~5 mins. The total processing time for the loading of metadata, especially in a production environment, can thus be shortened by almost a factor of 10.
    Summary
    The following diagram summarizes the entire approach and process.
    Acknowledgements
    The material posted here has been compiled with the help from the B2B Engineering and Product Management teams.

    Read the article

  • invitation: EMEA Hardware: Quarterly Partner Sales Update Roadshow

    - by mseika
    Dear Partner
    We are pleased to invite you to attend the first Oracle EMEA Hardware Quarterly Partner Sales Update Roadshow, running in 10 different cities across EMEA. The 3-hour sales session will run in the afternoon in various locations. You can register directly under the "Register Now" button.
    - Learn to articulate the Oracle Hardware business value proposition to your customers.
    - Explain Oracle Hardware positioning versus the competition.
    - Understand Oracle Hardware as the best platform to run the complete Oracle-on-Oracle stack, from Application to Disk.
    Locations & Timings
    Date            Country               Location      Timings
    2nd July 2013   France                Paris         13.00 - 16.15
    2nd July 2013   Saudi Arabia          Riyadh        13.00 - 16.15
    4th July 2013   United Arab Emirates  Dubai         13.00 - 16.15
    8th July 2013   South Africa          Johannesburg  13.00 - 16.15
    9th July 2013   Germany               Frankfurt     14.00 - 17.15
    10th July 2013  Germany               Munich        14.00 - 17.15
    11th July 2013  Switzerland           Zürich        14.00 - 17.15
    15th July 2013  United Kingdom        Reading       13.00 - 16.15
    17th July 2013  Spain                 Madrid        14.00 - 17.15
    18th July 2013  Italy                 Milan         13.00 - 16.15
    Price: FREE
    Find your location and book your seat here!
    We hope you will take maximum advantage of these great learning and networking opportunities and look forward to welcoming you to your nearest event!
    Best regards,
    Giuseppe Facchetti, Partner Business Development Manager, Servers, Oracle EMEA
    Sasan Moaveni, Storage Partner Sales Manager, Oracle EMEA

    Read the article

  • Maximizing the Value of Software

    - by David Dorf
    A few years ago we decided to increase our investments in documenting retail processes and architectures.  There were several goals, but the main two were to help retailers maximize the value they derive from our software and to help system integrators implement our software faster.  The sale is only part of our success metric -- it's actually more important that the customer realize the benefits of the software.  That's when we actually celebrate. This week many of our customers are gathered in Chicago to discuss their successes during our annual Crosstalk conference.  That provides the perfect forum to announce the release of the Oracle Retail Reference Library.  The RRL is available for free to Oracle Retail customers and partners.  It contains thousands of hours of work and represents years of experience in the retail industry.  The Retail Reference Library is composed of three offerings:
    Retail Reference Model
    We've been sharing the RRM for several years now, with lots of accolades.  The RRM is a set of business process diagrams at varying levels of granularity. This release marks the debut of Visio documents, which should make it easier for retailers to adopt and edit the diagrams.  The processes represent an approximation of the Oracle Retail software, but at higher levels they are pretty generic and therefore usable with other software as well.  Using these processes, the business and IT are better able to communicate the expectations of the software.  They can be used to guide customization when necessary, and help identify areas for optimization in the organization.
    Retail Reference Architecture
    When embarking on a software implementation project, it can be daunting to start from a blank sheet of paper.  So we offer the RRA, a comprehensive set of documents that describe the retail enterprise in terms of logical architecture, physical deployments, and systems integration.  These documents and diagrams describe how all the systems typically found in a retailer's enterprise work together.  They serve as a way to jump-start implementations using best practices we've captured over the years.
    Retail Semantic Glossary
    Have you ever seen two people argue over something because they're using misaligned terminology?  It's a huge waste, and it happens all the time.  The Retail Semantic Glossary is a simple application that allows retailers to define terms and metrics in a centralized database.  This initial version comes with limited content, with the goal of adding more over subsequent releases.  This is the single source for defining key performance indicators, metrics, algorithms, and terms so that the retail organization speaks in a consistent language.
    These three offerings are downloaded from MyOracleSupport separately and linked together using the start page above.  Everything is navigated using a Web browser.  See the Oracle Retail Documentation blog for more details.

    Read the article

  • About the K computer

    - by nospam(at)example.com (Joerg Moellenkamp)
    Okay - after getting yet another mail because of the new #1 on the Top500 list, I want to add some comments from my side:
    Yes, the system is using SPARC processors. And that is great news for a SPARC fan like me. It is using the SPARC VIIIfx processor from Fujitsu clocked at 2 GHz.
    No, it isn't the only one. Most people are saying there are two systems in the Top500 list using SPARC (#77 JAXA and #1 K) but in fact there are three. The Tianhe-1 (#2 on the Top500 list) supercomputer contains 2048 Galaxy "FT-1000" 1 GHz 8-core processors. Don't know it? The FeiTeng-1000 - this proc is an 8-core, 8-threads-per-core, 1 GHz processor made in China. And it's SPARC based. By the way, this sounds really familiar to me; perhaps the people just took the open-sourced UltraSPARC-T2 design, because some of the parameters sound just too similar. However, it looks like Tianhe-1 is using the SPARCs as input nodes and not as compute nodes.
    No, I don't see it as the next M-series processor. Simple reason: you can't create SMP systems out of them; it simply doesn't have the functionality to do so. Even when there are multiple CPUs on a single board, they are not connected like an SMP/NUMA shared-memory machine; they are connected with the cluster interconnect (in this case the Tofu interconnect) and work like a large cluster.
    Yes, it has a lot of oomph in Linpack; however, I assume a lot came from the extensions to the SPARCv9 standard.
    No, Linpack has no relevance for any commercial workload. Linpack is such a special load that even some HPC people are arguing that it isn't really a good benchmark for HPC. It's embarrassingly parallel, and it can work with relatively small interconnects compared to the interconnects in SMP systems (although cluster interconnects are now getting into spheres where SMP interconnects were only a few years ago). Amdahl isn't hitting that hard when running Linpack.
    Yes, it's a good move to use SPARC. At some time in the last 10 years, there was an interesting twist in perception: SPARC was considered a proprietary architecture and x86 the open architecture. However, it's vice versa: try to create an x86 clone and you have a lot of intellectual property problems; create a SPARC clone and you have to spend 100 bucks or so to get the specification from the SPARC Foundation and develop your own SPARC processor. Fujitsu has been doing this for a long time now. So they had their own processor, their own know-how. So why was SPARC a good choice? Well, essentially Fujitsu can do what they want with their core as it is their core - for example, adding the extensions to the SPARCv9 instruction set. Getting Intel to create extensions to x86 to help you with your product is a little bit harder. So Fujitsu could do what they needed to do with their processor in order to create such a supercomputer.
    No, the K is really using no FPGAs or GPUs as accelerators. The K is really using the CPU for this job. Yes, it has a significantly enhanced FPU capable of executing 8 instructions in parallel.
    No, it doesn't run Solaris. Yes, it uses Linux. No, it doesn't hurt me... as my colleague Roland Rambau (he knows a lot about HPC) once said to me: it doesn't matter which OS is staying out of the way of the workload in HPC.

    Read the article

  • Cloud Computing Forces Better Design Practices

    - by Herve Roggero
    Is cloud computing simply different than on-premise development, or is cloud computing actually forcing you to create better applications than you normally would? In other words, is cloud computing merely imposing different design principles, or forcing better design principles?  A little while back I got into a discussion with a developer in which I was arguing that cloud computing, and specifically Windows Azure in his case, was forcing developers to adopt better design principles. His opinion was that cloud computing was not yielding better systems; just different systems. In this blog, I will argue that cloud computing does force developers to use better design practices, and hence better applications. So the first thing to define, of course, is the word "better", in the context of application development. Looking at a few definitions online, better means "superior quality". As it relates to this discussion then, I stipulate that cloud computing can yield higher quality applications in terms of scalability, everything else being equal. Before going further I need to also outline the difference between performance and scalability. Performance and scalability are two related concepts, but they don't mean the same thing. Scalability is the measure of system performance given various loads. So when developers design for performance, they usually give higher priority to a given load and tend to optimize for the given load. When developers design for scalability, the actual performance at a given load is not as important; the ability to ensure reasonable performance regardless of the load becomes the objective. This can lead to very different design choices. For example, if your objective is to obtain the fastest response time possible for a service you are building, you may choose to implement a TCP connection that never closes until the client chooses to close the connection (in other words, a tightly coupled service from a connectivity standpoint), and on which a connection session is established for faster processing on the next request (like SQL Server or other database systems, for example). If your objective is to scale, you may implement a service that answers requests without keeping session state, so that server resources are released as quickly as possible, like a REST service for example (a minimal sketch of such a stateless service appears at the end of this post). This alternate design would likely have a slower response time than the TCP service for any given load, but would continue to function at very large loads because of its inherently loosely coupled design. An example of a REST service is the NO-SQL implementation in the Microsoft cloud called Azure Tables. Now, back to cloud computing... Cloud computing is designed to help you scale your applications, specifically when you use Platform as a Service (PaaS) offerings. However it's not automatic. You can design a tightly-coupled TCP service as discussed above, and as you can imagine, it probably won't scale even if you place the service in the cloud because it isn't using a connection pattern that will allow it to scale [note: I am not implying that all TCP systems do not scale; I am just illustrating the scalability concepts with an imaginary TCP service that isn't designed to scale for the purpose of this discussion].
    The other service, using REST, will have a better chance to scale because, by design, it minimizes resource consumption for individual requests and doesn't tie a client connection to a specific endpoint (which means you can easily deploy this service to hundreds of machines without much trouble, as long as your pockets are deep enough). The TCP and REST services discussed above are both valid designs; the TCP service is faster and the REST service scales better. So is it fair to say that one service is fundamentally better than the other? No; not unless you need to scale. And if you don't need to scale, then you don't need the cloud in the first place. However, it is interesting to note that if you do need to scale, then a loosely coupled system becomes a better design because it can almost always scale better than a tightly-coupled system. And because most applications grow over time, with an increasing user base, new functional requirements, increased data and so forth, most applications eventually do need to scale. So in my humble opinion, I conclude that a loosely coupled system is not just different than a tightly coupled system; it is a better design, because it will stand the test of time. And in my book, if a system stands the test of time better than another, it is of superior quality. Because cloud computing demands loosely coupled systems so that its underlying service architecture can be leveraged, developers ultimately have no choice but to design loosely coupled systems for the cloud. And because loosely coupled systems are better... ... the cloud forces better design practices. My 2 cents.
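
    As promised above, here is a minimal sketch of the loosely-coupled option: a stateless, session-free HTTP service built on the JDK's built-in com.sun.net.httpserver. This is an illustration of the design principle, not the author's actual example. Each request carries everything the server needs, so any of a hundred identical instances behind a load balancer could answer it:

        import com.sun.net.httpserver.HttpExchange;
        import com.sun.net.httpserver.HttpHandler;
        import com.sun.net.httpserver.HttpServer;
        import java.io.IOException;
        import java.io.OutputStream;
        import java.net.InetSocketAddress;

        public class StatelessService {
            public static void main(String[] args) throws Exception {
                HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

                // No per-client session is kept: the full input arrives in the query
                // string, and all resources are released once the response is sent.
                server.createContext("/square", new HttpHandler() {
                    public void handle(HttpExchange exchange) throws IOException {
                        // e.g. GET /square?n=12 returns 144
                        long n = Long.parseLong(exchange.getRequestURI().getQuery().replace("n=", ""));
                        byte[] body = Long.toString(n * n).getBytes("UTF-8");
                        exchange.sendResponseHeaders(200, body.length);
                        OutputStream os = exchange.getResponseBody();
                        os.write(body);
                        os.close();
                    }
                });

                server.start(); // scale out by running many identical copies
            }
        }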

    Read the article

  • JCP 2012 Award Nominations Announced

    - by heathervc
    The 10th Annual JCP Program Award Nominations have been posted on JCP.org.  The community gets together every year during JavaOne to congratulate the winners and nominees at the JCP Community Party held in San Francisco. This year there are three awards: JCP Member/Participant of the Year, Outstanding Spec Lead, and Most Significant JSR.
    Member of the Year: Stephen Colebourne, Markus Eisele, Google JUG Chennai, Werner Keil, London Java Community and SouJava, Antoine Sabot-Durand
    Outstanding Spec Lead:
    - Michael Ernst, JSR 308, Annotations on Java Types
    - Victor Grazi, Credit Suisse, JSR 354, Money and Currency API
    - Nigel Deakin, Oracle, JSR 343, Java Message Service 2.0
    - Pete Muir, Red Hat, JSR 346, Contexts and Dependency Injection for Java EE 1.1
    Most Significant JSR:
    - API for JSON Processing, JSR 353
    - Money and Currency API, JSR 354
    - Java State Management, JSR 350
    - Java Message Service 2, JSR 343
    - JCP.Next, JSR 348, JSR 355, and JSR 358
    Congratulations to the nominees; you can read the nomination text and more information about the awards here.  And remember to join us on Tuesday, 2 October at the Infusion Lounge to celebrate with the winners and nominees!

    Read the article

  • Welcome to ubiquitous file sharing (December 08, 2009)

    - by user12612012
    The core of any file server is its file system, and ZFS provides the foundation on which we have built our ubiquitous file sharing and single access control model.  ZFS has a rich, Windows- and NFSv4-compatible ACL implementation (ZFS only uses ACLs), it understands both UNIX IDs and Windows SIDs, and it is integrated with the identity mapping service; it knows when a UNIX/NIS user and a Windows user are equivalent, and similarly for groups.  We have a single access control architecture, regardless of whether you are accessing the system via NFS or SMB/CIFS.
    The NFS and SMB protocol services are also integrated with the identity mapping service, and shares are not restricted to UNIX permissions or Windows permissions.  All access control is performed by ZFS, the system can always share file systems simultaneously over both protocols, and our model is native access to any share from either protocol.
    Modal architectures have unnecessary restrictions, confusing rules, administrative overhead and weird deployments to try to make them work; they exist as a compromise, not because they offer a benefit.  Having some shares that only support UNIX permissions, others that only support ACLs and some that support both in a quirky way really doesn't seem like the sort of thing you'd want in a multi-protocol file server.  Perhaps because the server has been built on a file system that was designed for UNIX permissions, possibly with ACL support bolted on as an add-on afterthought, or because the protocol services are not truly integrated with the operating system, it may not be capable of supporting a single integrated model.
    With a single, integrated sharing and access control model:
    If you connect from Windows or another SMB/CIFS client:
    - The system creates a credential containing both your Windows identity and your UNIX/NIS identity.  The credential includes UNIX/NIS IDs and SIDs, and UNIX/NIS groups and Windows groups.
    - If your Windows identity is mapped to an ephemeral ID, files created by you will be owned by your Windows identity (ZFS understands both UNIX IDs and Windows SIDs).
    - If your Windows identity is mapped to a real UNIX/NIS UID, files created by you will be owned by your UNIX/NIS identity.
    - If you access a file that you previously created from UNIX, the system will map your UNIX identity to your Windows identity and recognize that you are the owner.  Identity mapping also supports access checking if you are being assessed for access via the ACL.
    If you connect via NFS (typically from a UNIX client):
    - The system creates a credential containing your UNIX/NIS identity (including groups).
    - Files you create will be owned by your UNIX/NIS identity.
    - If you access a file that you previously created from Windows and the file is owned by your UID, no mapping is required. Otherwise the system will map your Windows identity to your UNIX/NIS identity and recognize that you are the owner.  Again, mapping is fully supported during ACL processing.
    The NFS, SMB/CIFS and ZFS services all work cooperatively to ensure that your UNIX identity and your Windows identity are equivalent when you access the system.  This, along with the single ACL-based access control implementation, results in a system that provides that elusive ubiquitous file sharing experience.
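
    By way of illustration, here is roughly what that single model looks like from the Solaris command line, using the standard idmap and ZFS ACL forms. The user name, Windows domain, and file path are made up for the example:

        # Declare that the Windows user and the UNIX user are the same identity
        idmap add winuser:alice@example.com unixuser:alice

        # Grant an ACL entry on a ZFS file; the same ACL governs NFS and SMB access
        chmod A+user:alice:read_data/write_data:allow /tank/share/report.txt

        # Inspect the resulting ACL
        ls -v /tank/share/report.txt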

    Read the article

  • BizTalk 2009 - Messages: Last 50 suspended

    - by StuartBrierley
    Having previously talked about the lack of the traditional HAT in BizTalk 2009, the question then becomes how you replicate some of the functionality that was previously relied on. I have already covered the Last 100 Messages Received and the Last 100 Messages Sent queries, so what about suspended messages? In BizTalk 2004 we had a query in HAT to return the last 100 suspended message instances.  Let's create a direct replacement in a BizTalk 2009 HAT-less environment. Basically we are creating a query to search for the last fifty messages that were suspended by BizTalk. Coming up: Service instances - Last 100

    Read the article
