Search Results

Search found 2571 results on 103 pages for 'extend'.


  • Virtualization in Solaris 11 Express

    - by lynn.rohrer(at)oracle.com
    In Oracle Solaris 10 we introduced Oracle Solaris Containers -- lightweight virtual application environments that allow you to consolidate your Oracle Solaris applications onto a single Oracle Solaris server and make the most of your system resources. The majority of our customers are now using Oracle Solaris Containers on their enterprise systems for applications ranging from web servers to Oracle Database installations. We can also make these Containers highly available with Oracle Solaris Cluster, the industry's first virtualization-aware enterprise cluster product. Using Oracle Solaris Cluster you can fail over applications in a Container to another Container on a single system or across systems for additional availability. We've added significant features in Oracle Solaris 11 Express to improve and extend the Oracle Solaris Zone model:
    - Integration of Zones with our new Solaris 11 packaging system (aka Image Packaging System) to provide easy software updates within a zone
    - Support for Oracle Solaris 10 Zones to run your Solaris 10 applications unaltered on an Oracle Solaris 11 Express system
    - Integration with the new Oracle Solaris 11 network stack architecture (more on this in a future blog post)
    - Improved observability with the zonestat management interface and commands
    - Delegated administration rights for owners of individual non-global zones
    - Tight integration with Oracle Solaris ZFS to allow dedicated datasets per zone
    - With ZFS as the default file system, we can now provide easy-to-manage Boot Environments for zones
    This quick summary is just to whet your appetite to learn more about the Oracle Solaris 11 Express Zones enhancements. Fortunately, we can serve a full meal at the Oracle Solaris 11 Express Technology Spotlight on Virtualization page on the Oracle Technology Network.

    Read the article

  • Welcome to the Weblog on Oracle ADF Mobile!

    - by joe.huang
    Welcome to the ADF Mobile team's weblog. My name is Joe Huang - I am the product manager for ADF Mobile. Oracle ADF Mobile is a part of Oracle's Application Development Framework (ADF) that supports the development of enterprise/business applications that run on mobile devices. The development tool for this framework is, of course, Oracle JDeveloper. As some of you may know, we currently support the development of mobile browser-based applications - this part of the product is called ADF Mobile Browser. Additionally, we are close to releasing a technology preview of ADF Mobile Client, which supports development of on-device, disconnected-capable mobile applications. What's truly unique about the ADF Mobile development process is that it is a very visual and declarative experience, while still allowing power Java developers to completely extend the framework to their liking. The framework also provides a rich set of services needed by an enterprise-grade mobile application - services that would literally take years to implement if they were to be built from the ground up. However, by using JDeveloper and ADF Mobile, you get the entire framework at your service! In the coming entries, the ADF Mobile product development team will publish news, best practices, our observations on mobile technology trends, or just our experiences in playing with "gadgets". Be sure to check back on this page!
    Sincerely,
    Joe Huang, Oracle

    Read the article

  • eSeminar: Oracle’s Fusion Update for Partners

    - by Richard Lefebvre
    Oracle's Fusion Update for Partners
    Thursday, November 17th - 6pm CET
    At OOW, Oracle unveiled Oracle Fusion Applications, the next generation of business applications. By setting the standard for application architecture, design and deployment, customers will be able to extend the value of their applications environment by using Oracle Fusion Applications components side-by-side with their existing applications portfolio. Delivered as a complete suite of modular applications, Oracle Fusion Applications coexist with existing Oracle Applications. As one module, a product family or the entire suite, customers can choose to leverage the advances pioneered by Oracle at a pace that matches business needs for a new level of performance. David Bowin, Director of Oracle's Fusion Applications Team, will host an eSeminar session to address various questions that our partners have regarding Oracle's Fusion Applications. See the schedule below and mark your calendar to attend.
    9:00am - 10:00am Pacific (6pm CET)
    Click this link to add the event to your calendar: http://oukc.oracle.com/static11/opn/ics/98300.ics
    Dial-In: 1-877-664-9137 / Passcode 98300
    International: 706-634-9619 / http://www.intercall.com/national/oracleuniversity/gdnam.html
    Access Live Event Learning Link: http://oukc.oracle.com/static09/opn/login/?t=livewebcast|c=1069641479
    Webconference access: http://ouweb.webex.com
    Session number: 591807958

    Read the article

  • Take Control of Workflow with Workflow Analyzer!

    - by user793553
    Take Control of Workflow with Workflow Analyzer! Immediate analysis and output of your EBS Workflow environment. The EBS Workflow Analyzer is a script that reviews the current Workflow footprint, analyzes the configuration and environment, and provides feedback and recommendations on best practices and areas of concern. Go to Doc ID 1369938.1 for more details, the script download, and a short overview video.
    Proactive benefits:
    - Immediate analysis and output of the Workflow environment
    - Identifies aged records
    - Identifies Workflow errors & volumes
    - Identifies looping Workflow items and stuck activities
    - Identifies Workflow system setup and configurations
    - Identifies and recommends Workflow best practices
    - Easy-to-add tool for regular Workflow maintenance
    - Execute the analysis anytime to compare trending against past outputs
    The Workflow Analyzer presents key details in an easy-to-review graphical manner. See the examples below.
    Workflow Runtime Data Table Gauge: the gauge shows critical (red), bad (yellow) and good (green) depending on the number of workflow items (WF_ITEMS).
    Workflow Error Notifications Pie Chart: a pie chart shows the workflow error notification types.
    Workflow Runtime Table Footprint Bar Chart: a bar chart shows the workflow runtime table footprint.
    The analyzer also gives detailed listings of setups and configurations. As an example, the workflow services are listed along with their status for review. The analyzer draws attention to key details with yellow and red boxes highlighting areas of review. You can extend any query by reviewing the SQL script and then running it on your own, or by making modifications for your own needs.
    Find more details in these notes:
    - Doc ID 1369938.1 Workflow Analyzer script for E-Business Suite Workflow Monitoring and Maintenance
    - Doc ID 1425053.1 How to run EBS Workflow Analyzer Tool as a Concurrent Request
    Or visit the My Oracle Support EBS - Core Workflow Community.

    Read the article

  • Windows Azure Virtual Machine Test Drive Kit

    - by Clint Edmonson
    The public preview of hosted Virtual Machines in Windows Azure is now available to the general public. This platform preview enables you to evaluate our new IaaS and Enterprise Networking capabilities. Once you have registered for the 90 Day Free Trial and created a new account, you can access the preview directly at this link: https://account.windowsazure.com/PreviewFeatures
    If you've been to any of my presentations lately, you'll know that I'm fired up about these new offerings. As I've worked through some scenarios for myself and with my customers, I've been collecting the resources that helped me to ramp up. Here's a collection of links to the items I've found most useful:
    Core Resources
    - Digital Chalk Talk Videos - detailed technical overviews of the new Windows Azure services and supporting technologies as announced June 7, including Virtual Machines (IaaS Windows and Linux), Storage, Command Line Tools: http://www.meetwindowsazure.com/DigitalChalkTalks
    - Scenarios Videos on YouTube - "how to" guides, including "Create and Manage Virtual Networks", "Create & Manage SQL Database", and many more: http://www.youtube.com/user/windowsazure
    - Windows Azure Trust Center - provides a comprehensive view of Windows Azure security and compliance practices: http://www.windowsazure.com/en-us/support/trust-center/
    - MSDN Forums for Windows Azure: http://www.windowsazure.com/en-us/support/preview-support/
    - Microsoft Knowledge Base article: Microsoft server software support for Windows Azure Virtual Machines
    Videos
    - Deep Dive into Running Virtual Machines on Windows Azure
    - Windows Azure Virtual Machines and Virtual Networks
    - Windows Azure IaaS and How It Works
    - Deep Dive into Windows Azure Virtual Machines: From the Cloud Vendor and Enterprise Perspective
    - An Overview of Managing Applications, Services, and Virtual Machines in Windows Azure
    - Monitoring and Managing Your Windows Azure Applications and Services
    - Overview of Windows Azure Networking Features
    - Hybrid Will Rule: Options to Connect, Extend and Integrate Applications in Your Data Center and Windows Azure
    - Business Continuity in the Windows Azure Cloud
    - Linux on Windows Azure
    Blogs
    - Understanding Windows Azure Virtual Machines
    - An Overview of Windows Azure Virtual Network
    - Virtual Machines and Windows
    - Running SQL Server in a Windows Azure Virtual Machine
    - Support for Linux Virtual Machines on Windows Azure

    Read the article

  • Is my sequence diagram correct?

    - by Dummy Derp
    NOTE: I am self-studying UML, so I have nobody to verify my diagrams and hence I am posting here - please bear with me. This is a problem I got from a PDF available on Google that simply had the following problem statement: A library contains books and journals. The task is to develop a computer system for borrowing books. In order to borrow a book the borrower must be a member of the library. There is a limit on the number of books that can be borrowed by each member of the library. The library may have several copies of a given book. It is possible to reserve a book. Some books are for short term loans only. Other books may be borrowed for 3 weeks. Users can extend the loans. Draw a use case diagram for a library. I already drew the use case diagram and had it checked by a community member. This time I drew sequence diagrams for borrowing a book and extending the date of return. Please let me know if they are correct. I drew them using Visual Paradigm and I don't know how to keep control of the sequence numbers. If you do, please let me know :) Diagrams

    Read the article

  • Technical development decision for my newly established software company

    - by test test
    I have a new software company where I am planning to develop a CRM system, and I have settled on the technological approach I am going to use:
    - I will use an open source Java-based CRM engine.
    - I will use a third-party reporting tool named JasperReports to provide reporting capabilities for the CRM.
    - I will develop the interface and any customization the customer might ask for using the ASP.NET MVC framework, since my knowledge and experience are based on ASP.NET, and I will use the CRM API to integrate my ASP.NET web application with the Java-based CRM.
    I have developed a simple demo which integrates these three main components (CRM engine, ASP.NET application and the reporting tool) and they worked well. But I am afraid of the following risk if I go with the above approach: I would have to hire developers with different skills and experience - developers with Java skills to be able to modify the Java-based CRM and write plug-ins (when needed) to extend the CRM capabilities, and other developers with ASP.NET skills to be able to build the application itself, such as application forms, the portal from where users will be able to start the CRM processes, searching capabilities, etc. So does the above point raise risks when I start hiring a new team and building the CRM application, or am I on the right track at this early stage?

    Read the article

  • My first encounter with SmartAssembly

    - by Peter Larsson
    Let me start by saying that I am a supreme VB6 programmer, but I have very little experience with VB.Net, so I think I still need some more time learning SmartAssembly. SmartAssembly makes obfuscating and merging dll files a piece of cake! With its simple, straightforward and clean GUI I did make my tests work. With other obfuscators like Xenocode, Salamander etc., which let you (and in some cases force you to) control more advanced settings, you really have to know what you are doing, especially when it comes to protecting code that uses external dependencies. My most annoying experience is that if you start checking radio buttons and activating different obfuscating features in SmartAssembly, you will end up breaking your working code as well, if you, like me, are not that experienced and don't know what you're doing. SmartAssembly has some troubleshooting information on their website which explains why the application will fail in some scenarios. So why not extend these checks into some deeper analysis stage on the dll's? By doing that I think more people could get fully functional dll's out of the box instead of trying different settings and then testing the protected dll to see if it's working or not. //Peter

    Read the article

  • Focus on Identity Management at Oracle OpenWorld12

    - by Tanu Sood
    Heading to Oracle OpenWorld 2012? Then we have Identity Management and relevant sessions all mapped out for you to help you navigate Oracle OpenWorld. Do make use of the Focus On Identity Management document online or, if you'd like to have a copy handy, use the pdf version instead. In the meantime, here are the 3 must-attend Identity Management sessions for this year:
    - Trends in Identity Management - Monday, October 1, at 10:45 a.m., Moscone West L3, room 3003 (session ID# CON9405). Led by Amit Jasuja, this session focuses on how the latest release of Oracle Identity Management addresses emerging identity management requirements for mobile, social, and cloud computing. It also explores how existing Oracle Identity Management customers are simplifying implementations and reducing total cost of ownership.
    - Mobile Access Management - Tuesday, October 2, at 10:15 a.m., Moscone West L3, room 3022 (session ID# CON9437). There are now more than 5 billion mobile devices on the planet, including an increasing number of personal devices being used to access corporate data and applications. This session focuses on ways to extend your existing identity management infrastructure and policies to securely and seamlessly enable mobile user access.
    - Evolving Identity Management - Thursday, October 4, at 12:45 p.m., Moscone West L3, room 3008 (session ID# CON9640). Identity management requirements have evolved and are continuing to evolve as organizations seek to secure cloud and mobile access. This session explores emerging requirements and shares best practices for evolving your identity management implementation, including the value of a service-oriented, platform approach.
    For a complete listing of all identity management sessions, hands-on labs, and more, see Focus on Identity Management now. See you at OOW12.

    Read the article

  • OEPE with ADF binding support available: Total Eclipse

    - by Frank Nimphius
    The current release of Oracle Enterprise Pack for Eclipse, though a technology preview, brings Oracle ADF binding to the Eclipse IDE. You can download the software from the link below:
    Oracle Enterprise Pack for Eclipse (12.1.1.1.0) Technical Preview - new in June 2012, certified on Windows 7/XP/Vista, MacOS, and Linux, supported on JDK 6.
    For many Eclipse users, ADF is new and therefore I expect them to need guidance and help in case they run into issues they don't know how to recover from. Similarly, ADF users familiar with Oracle JDeveloper that want to give OEPE a try will find things different in Eclipse and thus may have questions. For both audiences I suggest posting issues to the OEPE forum on the Oracle Technology Network. I'll extend my OTN monitoring to include the OEPE forum on a daily basis to learn about developer needs and requirements and - of course - to catch bugs that need to be filed. From my side this is a part-time involvement, which means that the more ADF questions show up on the forum, the more help I could need in answering them. The OTN forum for JDeveloper in my opinion wouldn't be the right place to go unless the question is a generic ADF question that does not depend on the integration in Eclipse. Here's the OEPE forum link for a start: https://forums.oracle.com/forums/forum.jspa?forumID=578
    Frank

    Read the article

  • Get a Little Smarter . . .

    - by Michelle Kimihira
    Author: Rimi Bewtra, Senior Director, Product Marketing, Oracle Fusion Middleware
    This month I had a chance to gain some valuable insights into Oracle's latest product innovations and customer successes after my conversation with Amit Zavery, Vice President of Product Management for Oracle Fusion Middleware. In this 10-minute podcast, Amit quickly outlined a few of Oracle's recent major announcements, including:
    - Oracle Exalogic Elastic Cloud - our flagship engineered system for running business applications - provides extreme performance, reliability and scalability while delivering lower total cost of ownership, reduced risk, higher user productivity and one-stop support.
    - Oracle Application Development Framework (ADF) Mobile, an HTML5 and Java-based framework that enables developers to easily build, deploy, and extend enterprise hybrid mobile applications across multiple mobile operating systems, including iOS and Android, from a single code base.
    And did you know Oracle has 125,000 Fusion Middleware customers? Amit shared a few of his favorite customer success stories and gave me the latest view from the leading industry analysts. If you have 10 minutes, you too can get a little smarter ... take a listen and let's catch up soon.
    Additional Information
    - Product Information on Oracle.com: Oracle Fusion Middleware
    - Follow us on Twitter and Facebook
    - Subscribe to our regular Fusion Middleware Newsletter

    Read the article

  • Stack Trace Logger [migrated]

    - by Chris Okyen
    I need to write a parent Java class that classes using recursion can extend. The parent class will be able to tell whenever the call stack changes (you enter a method, temporarily leave it to go to another method call, or you are finished with the method) and then print it out. I want it to print on the console, but also clear the console every time, so it shows the stacks horizontally and you can see the height of each stack to see what was popped off and what was pushed on. It should also print out whether the base case was reached for recursive functions. First: how can I use the StackTraceElement and Thread classes to detect automatically whenever the stack has popped or pushed an element, without calling it manually? Second: how would I do the clearing? For instance, if I had the code:
        public class Recursion {
            private static void recursion(int i) {
                if (i < 10)
                    System.out.println('A');
                else {
                    recursion(i / 10);
                    System.out.println('B');
                }
            }
            public static void main(String[] argv) {
                recursion(102);
            }
        }
    It would need to print out the stack when entering main(), when entering recursion(102) from main(), when it enters recursion(102 / 10), which is recursion(10), from recursion(102), and when it enters recursion(10 / 10), which is recursion(1), from recursion(10). It should print a message when it reaches the base case recursion(1), then print out the stacks as it returns back through recursion(10), recursion(102) and main(), and finally print out that we are exiting main().
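    For the first question: the JDK has no callback that fires on stack push/pop, so a parent class typically has to be called explicitly (e.g. a logStack() call at the start and end of each method) and can then read the current frames itself via Thread.currentThread().getStackTrace(). Below is a minimal sketch of such a helper; the class and method names are my own, not from any library:
        // Hypothetical parent class a recursive class could extend.
        public class StackLogger {
            // Print the current call stack on one line, deepest frame first.
            protected static void logStack(String event) {
                StackTraceElement[] frames = Thread.currentThread().getStackTrace();
                StringBuilder sb = new StringBuilder(event).append(": ");
                // Typically frames[0] is getStackTrace() and frames[1] is logStack(); skip both.
                for (int i = 2; i < frames.length; i++) {
                    sb.append(frames[i].getMethodName());
                    if (i < frames.length - 1) sb.append(" <- ");
                }
                System.out.println(sb);
                System.out.println("stack depth: " + (frames.length - 2));
            }
        }
    A subclass would call logStack("enter recursion(" + i + ")") on entry and logStack("exit recursion(" + i + ")") before returning. For the second question, clearing the console is terminal-dependent (for example, printing ANSI escape sequences); there is no portable Java API for it.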

    Read the article

  • How do I implement the bg, &, and fg commands' functionality in my custom Unix shell program written in C

    - by user1631009
    I am trying to extend the functionality of the custom Unix shell which I earlier wrote as part of my lab assignment. It currently supports all external commands through execvp calls, built-in commands like pwd, cd, history, echo and export, and also redirection and pipes. Now I want to add support for running a command in the background, e.g. $ ls -la & Next, I also want to implement the bg and fg job control commands. I know backgrounding can be achieved by forking a new child process and not waiting for it in the parent process. But how do I then bring this command back to the foreground using fg? I have the idea of entering each background command in a list and assigning each of them a serial number, but I don't know how to make the processes execute in the background and then bring them back to the foreground. I guess the wait() and waitpid() system calls would come in handy, but I am not that comfortable with them. I tried reading the man pages but am still in the dark. Can someone please explain in layman's terms how to achieve this in UNIX system programming? And does it have something to do with the SIGCONT and SIGTSTP signals?

    Read the article

  • Annotate source code with diagrams as comments

    - by Steven Lu
    I write a lot of (primarily C++ and JavaScript) code that touches upon computational geometry, graphics and those kinds of topics, so I have found that visual diagrams have been an indispensable part of the process of solving problems. I have just now decided that, oh, wouldn't it be fantastic if I could somehow attach a hand-drawn diagram to a piece of code as a comment? This would allow me to come back to something I worked on days, weeks or months earlier and far more quickly re-grok my algorithms. As a visual learner, I feel like this has the potential to improve my productivity with almost every type of programming, because simple diagrams can help with understanding and reasoning about any type of non-trivial data structure. Graphs, for example: during graph theory class at university I had only ever been able to truly comprehend the graph relationships that I could actually draw diagrammatic representations of. So... no IDE to my knowledge lets you save a picture as a comment on code. My thinking was that I or someone else could come up with some reasonably easy-to-use tool that can convert an image into a base64 binary string which I can then insert into my code. If the conversion/insertion process can be streamlined enough, it would allow a far better connection between the diagram and the actual code, so I no longer need to search chronologically through my notebooks. Even more awesome: plugins for the IDEs to automatically parse out and display the image. There is absolutely nothing difficult about this from a theoretical point of view. My guess is that it would take some extra time for me to actually figure out how to extend my favorite IDEs and maintain these plugins, so I'd be totally happy with a sort of code post-processor which would do the same parsing out and rendering of the images and show them side by side with the code, inside of a browser or something, since I'm a JavaScript programmer by trade. What do people think? Would anyone pay for this? I would.
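    The encoding step itself is trivial in most languages. As a rough illustration of the idea (a sketch only - the file name and the comment marker format are made up, and real tooling would also need to wrap long lines and ship an IDE-side decoder), here is what it looks like in Java:
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.util.Base64;

        // Minimal sketch: turn a hand-drawn diagram into a comment-friendly base64 string.
        public class DiagramComment {
            public static void main(String[] args) throws Exception {
                byte[] png = Files.readAllBytes(Paths.get("diagram.png")); // hypothetical input file
                String b64 = Base64.getEncoder().encodeToString(png);
                // Emit something an IDE plugin or post-processor could later find and render.
                System.out.println("// [diagram:png;base64] " + b64);
            }
        }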

    Read the article

  • Additional new material WebLogic Community

    - by JuergenKress
    Update: Commercially Supported GlassFish Versions - Aquarium blogger David Delabassee shares background information and links to where you can download the recently released GlassFish Server Bundle Patch 3.1.2.8. Read the article.
    Announcing WebLogic on Oracle Database Appliance 2.7 - Oracle WebLogic Server on Oracle Database Appliance 2.7 offers a complete solution for building and deploying enterprise Java EE applications in a fully integrated system of software, servers, storage, and networking that delivers highly available database and WebLogic services. Learn more.
    APAC Partner iDay: What's New in Oracle WebLogic, 8-Apr 12 noon SG/2pm AEDT/9:30 IST - Invite your Partners - Register
    Virtual Developer Conference: Creating a Foundation for Cloud Applications using Oracle WebLogic and Oracle Coherence - OnDemand
    Webcast: WebLogic Configuration using Chef and Puppet - On-Demand
    Podcast Series: Part 3 - Oracle WebLogic Server and Oracle Database Integration - Podcast
    Coherence*Web: Sharing an httpSession Among Applications in Different Oracle WebLogic Clusters - SOA solution architect Jordi Villena shows how easy it is to extend Coherence*Web to enable session sharing. Read the article.
    Multi-Factor Authentication in Oracle WebLogic - Using multi-factor authentication to protect web applications deployed on Oracle WebLogic. Read the article.
    Video: Coherence Community on Java.net - 4 Projects available under CDDL-1.0 - Brian Oliver (Senior Principal Solutions Architect, Oracle Coherence) and Randy Stafford (Architect At-Large, Oracle Coherence Product Development) discuss the evolution of the Oracle Coherence Community on Java.net and how you can actively participate in open source Coherence Community projects. Watch the video.
    Working with Oracle Security Token Service in an Architecture Involving Oracle WebLogic Server and Oracle Service Bus - Oracle Fusion Middleware specialist Ronaldo Fernandes takes you step by step through the process of creating a single sign-on between Oracle WebLogic and Oracle Service Bus using Oracle Security Token Service (OSTS) to generate SAML tokens. Read the article.
    WebLogic Partner Community: For regular information become a member in the WebLogic Partner Community, please visit: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.
    Blog | Twitter | LinkedIn | Mix | Forum | Wiki
    Technorati Tags: WebLogic, WebLogic Community, Oracle, OPN, Jürgen Kress

    Read the article

  • Ubuntu One synchronizing problems

    - by user72249
    I am a user of Ubuntu and Ubuntu One. First: I have a serious problem with Ubuntu One. I have a Windows machine and several Ubuntu machines, and they all sync to one account. My first problem is that I can't really delete files. Whenever I delete something, Ubuntu One resyncs it, so it appears again in my folder. Furthermore, when I move something to another directory it syncs the new location and resyncs the old one, so my files get duplicated. I can't reorganize my files. So I tried to delete the duplicated files through the web dashboard, but I can't reorder or select multiple files by extension. For example, I wanted to move all PDFs to another location. Second: I can't configure the location of the main One folder in the Ubuntu One app. For example, I want my One folder to be on another partition rather than in my HOME folder. This is a problem on Linux and Windows alike. I tried to move that folder in Windows with the hardlink method: I uninstalled U1, then created a folder link to another drive, then installed U1 again. Then I lost all my files in U1. It's a lucky thing that I have a backup, but I thought U1 was a stable solution for me, and I planned to extend my space. These bugs are major problems! It's worth paying for if it works perfectly. I think that's more important than releasing a new version every month.

    Read the article

  • Microsoft Sync Framework

    - by kaleidoscope
    Introduction
    It is a platform that enables collaboration and offline access for applications, services and devices. Sync Framework features technologies and tools that enable roaming, data sharing and taking data offline. Moreover, developers can build synchronization ecosystems that integrate any application with data from any store, by using any protocol over any network.
    Highlights
    * Add sync support to new and existing applications, services and devices
    * Enable collaboration and offline capabilities for any application
    * Roam and share information from any data store, over any protocol and over any network configuration
    * Leverage sync capabilities exposed in Microsoft technologies to create sync ecosystems
    * Extend the architecture to support custom data types including files
    Benefits of using Sync Framework
    * An extensible model that lets you integrate multiple data sources into a synchronization ecosystem.
    * A managed API for all components and a native API for select components.
    * Conflict handling for automatic and custom resolution schemes.
    * Filters that let you synchronize a subset of data, such as only those files that contain images.
    * A compact and efficient metadata model that enables synchronization for virtually any participant, without significant changes to the data store:
      - Any data store: add synchronization to a wide range of applications, services and devices.
      - Any data type: introduce new data types to synchronize.
      - Any protocol: use existing architectures and protocols to synchronize data. The transport-agnostic architecture allows integration of synchronization into a variety of protocols, including over-the-air and embedded devices.
      - Any network configuration: enable synchronization for your applications, devices and services in true peer-to-peer or hub-and-spoke configurations. Easily recover from network interruptions. Reduce network traffic by efficiently selecting changes to synchronize.
    Technorati Tags: Anish Sharma, Microsoft Sync Framework

    Read the article

  • Oracle OpenWorld Countdown Begins

    - by Michelle Kimihira
    Oracle OpenWorld is a little over 3 weeks away and it is bigger than ever! We are very excited to meet with you and share our exciting innovations around Oracle Fusion Middleware. To help you navigate, there will be a series of blogs to help you make the most out of the event. Thomas Kurian, Executive Vice President, Product Development, will be delivering his keynote, "The Oracle Cloud: Oracle's Cloud Platform and Applications Strategy", on Tuesday, October 2 at 8:00 AM - 9:45 AM in Moscone North, Hall D. Be sure to attend this session and gain insight on how Oracle's complete suite of cloud applications is transforming how customers manage their businesses. Here are the top 5 Oracle Fusion Middleware General Sessions you don't want to miss:
    - Monday, 10/1, 10:45 AM - 11:45 AM - GEN9504 - General Session: Innovation Platform for Oracle Apps, Including Fusion Applications - Amit Zavery, Vice President, Fusion Middleware Product Management - Moscone West, 3002/3004
    - Monday, 10/1, 1:45 PM - 2:45 PM - GEN11554 - General Session: Extend Oracle Applications to Mobile Devices with Oracle's Mobile Technologies - Moscone West, 3002/3004
    - Monday, 10/1, 4:45 PM - 5:45 PM - GEN11422 - General Session: Building and Managing a Private Oracle Java and Middleware Cloud - Moscone West, 3014
    - Tuesday, 10/2, 10:15 AM - 11:15 AM - GEN9394 - General Session: Oracle Fusion Middleware Strategies Driving Business Innovation - Hassan Rizvi, Executive Vice President of Product Development - Moscone North, Hall D
    - Tuesday, 10/2, 11:45 AM - 12:45 PM - CON9162 - Oracle Fusion Middleware: Meet This Year's Most Impressive Customer Projects - Moscone West, 3001
    Here is what else you can expect to see on the Oracle Fusion Middleware Blog leading up to Oracle OpenWorld 2012:
    - Week of 10-14 September: Best of Oracle Fusion Middleware and Oracle Fusion Middleware for Enterprise Applications
    - Week of 17-21 September: What to expect in Hassan Rizvi's (Executive Vice President of Product Development) and Amit Zavery's (Vice President of Product Management) sessions
    - Week of 24-28 September: All Things Mobile and Fusion Middleware Lineup

    Read the article

  • Why are SW engineering interviews disproportionately difficult?

    - by stackoverflowuser2010
    First, some background on me. I have a PhD in CS and have had jobs both as a software engineer and as an R&D research scientist, both at Very Large Corporations You Know Very Well. I recently changed jobs and interviewed for both types of jobs (as I have done in the past). My observation: SW engineer job interviews are way, way disproportionately more difficult than CS researcher job interviews, but the researcher job is higher paying, more competitive, more rewarding, more interesting, and has a higher upside.
    Here's a typical interview loop for a researcher:
    - Phone interview to see if my research is in alignment with the lab's research
    - In-person: give a presentation on my recent research for one hour (which represents maybe 9 months' worth of work), answer questions
    - In-person one-on-one interviews with about 5 researchers, where they ask me very reasonable questions on my work/publications/patents, including: technical questions, where my work fits into related work, and how I can extend my work to new areas
    Here's a typical interview loop for a SW engineer:
    - Phone interview where I'm asked algorithm questions and maybe do some coding. Pretty standard.
    - In-person interviews at the whiteboard where they drill the F*** out of you on esoteric C++ minutiae (e.g. how does a polymorphic virtual function call work), algorithms (make an all-pairs-shortest-path algorithm work for 1B vertices), system design (design a database load balancer), etc. This goes on for six or seven interviews. Ridiculous.
    Why would anyone be willing to put up with this? What is the point of asking about C++ trivia or writing code to prove yourself? Why not make the SE interview more like the researcher interview, where you give a talk about what you've done? How are technical job interviews for other fields, like physics, chemistry, civil engineering, mechanical engineering?

    Read the article

  • 5 Ways Microsoft Can Improve the Windows 8 Start Screen

    - by Matt Klein
    After having used Windows 8 over the past few months, we've found a few ways Microsoft could immediately improve the Start Screen to make it less disorienting and more usable, not only for tablets but for desktops and laptops as well. It's safe to say that the one thing Windows 8 doesn't lack is criticism. Since the Consumer Preview debuted in February, it has proven to be one of the most polarizing Windows releases ever. But regardless of whether you love or hate it, Windows 8 is where Microsoft's venerable operating system is headed. Portable computing is here to stay, and if the company is to survive, let alone remain relevant, it has to change, adapt, embrace, and extend. Perhaps the single most universally controversial change to Windows is Microsoft's decision to remove the Start button (or orb, if you've moved beyond XP) and with it, what we know as the Start Menu. In their place we now have a Start hot corner (a workable alternative) and the newly redesigned Metro Start Screen. The Start Screen is, if nothing else, different. Beyond a doubt, there has not been such a radical redesign of Windows' Start functionality since it went to a two-column design with a nested "All Programs" menu in Windows XP. The Start Screen can be a little jarring because it requires users to not only relearn what they've known for nearly two decades but to also rethink the way they interact with Windows. However, the Start Screen maintains its core elements: a Start "menu", a place for all installed programs (All apps), and a search pane. The Start Screen is attractive, clean, bold, and very imperfect. Here are five changes we'd like to see in the Start Screen before Windows 8 goes gold.

    Read the article

  • Prevent Eclipse Java Builder from Compiling Java-Like Source

    - by redjamjar
    I'm in the process of writing an Eclipse plugin for my programming language Whiley (see http://whiley.org). The plugin is working reasonably well, although there's lots to do. Two pieces of the jigsaw are: I've created a "Whiley Builder" by subclassing IncrementalProjectBuilder, which handles building and cleaning of "*.whiley" files; and I've created a content type called "Whiley Source Files" for "*.whiley" files, which extends "org.eclipse.jdt.core.javaSource" (this follows Andrew Eisenberg's suggestion). The advantage of having the content type extend javaSource is that it immediately fits into the package explorer, etc. In principle, I could flesh out ICompilationUnit to provide more useful info, although I haven't done that yet. The disadvantage is that the Java builder is trying to compile my Whiley files ... and it obviously can't. Originally, I had the Java builder run first, then the Whiley builder. Superficially, this actually worked out quite well, since all of the errors from the Java builder were discarded by the Whiley builder (for Whiley files). However, I actually want the Whiley builder to run first, as this is the best way for me to resolve dependencies between Java and Whiley files. Which leads me to my question: can I stop the Java builder from trying to compile certain Java-like resources? Specifically, in my case, those with the "*.whiley" extension. As an alternative, I was wondering whether my Whiley builder could somehow update the resource delta to remove those files which it has dealt with. Thoughts?
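    One direction worth trying (a sketch under assumptions, not a confirmed fix for the content-type approach): keep "*.whiley" resources off the Java builder's radar by adding an exclusion pattern to the project's source classpath entries through the JDT core API. The project handle and the pattern below are illustrative:
        import org.eclipse.core.runtime.CoreException;
        import org.eclipse.core.runtime.Path;
        import org.eclipse.jdt.core.IClasspathEntry;
        import org.eclipse.jdt.core.IJavaProject;
        import org.eclipse.jdt.core.JavaCore;

        public class WhileyExclusion {
            // Sketch: exclude "*.whiley" resources from every source folder so the
            // Java builder ignores them while the Whiley builder still sees them.
            public static void excludeWhileyFiles(IJavaProject project) throws CoreException {
                IClasspathEntry[] entries = project.getRawClasspath();
                for (int i = 0; i < entries.length; i++) {
                    if (entries[i].getEntryKind() == IClasspathEntry.CPE_SOURCE) {
                        entries[i] = JavaCore.newSourceEntry(
                                entries[i].getPath(),
                                new Path[] { new Path("**/*.whiley") }); // exclusion pattern
                    }
                }
                project.setRawClasspath(entries, null);
            }
        }
    This is roughly equivalent to adding an "Excluded: **/*.whiley" pattern on the source folder in the project's Java Build Path settings, so it can also be tried manually in the UI before being wired into the plugin.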

    Read the article

  • How should I implement a command processing application?

    - by Nini Michaels
    I want to make a simple, proof-of-concept application (a REPL) that takes a number and then processes commands on that number. Example: I start with 1. Then I write "add 2", and it gives me 3. Then I write "multiply 7", and it gives me 21. Then I want to know if it is prime, so I write "is prime" (on the current number, 21), and it gives me false. "is odd" would give me true. And so on. Now, for a simple application with few commands, even a simple switch would do for processing the commands. But if I want extensibility, how would I need to implement the functionality? Do I use the command pattern? Do I build a simple parser/interpreter for the language? What if I want more complex commands, like "multiply 5 until >200"? What would be an easy way to extend it (add new commands) without recompiling? Edit: to clarify a few things, my end goal would not be to make something similar to WolframAlpha, but rather a list (of numbers) processor. But I want to start slowly at first (on single numbers). I have in mind something similar to the way one would use Haskell to process lists, but a very simple version. I'm wondering if something like the command pattern (or an equivalent) would suffice, or if I have to make a new mini-language and a parser for it to achieve my goals?
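    For the single-number case, a command-registry sketch along these lines (the names are illustrative, not from any framework) shows how the command pattern avoids a central switch; adding commands without recompiling would then mean discovering implementations at runtime, for example via java.util.ServiceLoader or a scripting engine:
        import java.util.LinkedHashMap;
        import java.util.Map;
        import java.util.Scanner;
        import java.util.function.BiFunction;

        // Minimal command-pattern REPL sketch: each command maps (current value, argument)
        // to a new value; predicate-style commands print a result and return the value unchanged.
        public class NumberRepl {
            private final Map<String, BiFunction<Long, String, Long>> commands = new LinkedHashMap<>();
            private long value;

            NumberRepl(long start) {
                value = start;
                commands.put("add", (v, arg) -> v + Long.parseLong(arg));
                commands.put("multiply", (v, arg) -> v * Long.parseLong(arg));
                commands.put("is odd", (v, arg) -> { System.out.println(v % 2 != 0); return v; });
            }

            void run() {
                Scanner in = new Scanner(System.in);
                while (in.hasNextLine()) {
                    String line = in.nextLine().trim();
                    // Longest-registered-prefix style match so multi-word commands like "is odd" work.
                    commands.entrySet().stream()
                            .filter(e -> line.startsWith(e.getKey()))
                            .findFirst()
                            .ifPresent(e -> {
                                String arg = line.substring(e.getKey().length()).trim();
                                value = e.getValue().apply(value, arg);
                                System.out.println(value);
                            });
                }
            }

            public static void main(String[] args) {
                new NumberRepl(1).run();
            }
        }
    Compound commands like "multiply 5 until >200" are where a plain registry stops being enough and a small parser (command, argument, optional condition) starts to pay off.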

    Read the article

  • New Enhancements for InnoDB Memcached

    - by Calvin Sun
    In MySQL 5.6, we continued our development on InnoDB Memcached and completed a few widely desirable features that make InnoDB Memcached a competitive feature in more scenarios. Notably, they are: 1) support for multiple table mappings, 2) a background thread to auto-commit long-running transactions, and 3) enhancements in binlog performance. Let's go over each of these features one by one. In the last section, we will go over a couple of internally performed performance tests.
    Support multiple table mapping
    In our earlier release, all InnoDB Memcached operations were mapped to a single InnoDB table. In real life, users might want to use the InnoDB Memcached feature on different tables. Thus being able to support access to different tables at run time, and having different mappings for different connections, becomes a very desirable feature. In this GA release, we allow the user to do both. We will discuss the key concepts and key steps in using this feature.
    1) "mapping name" in the "get" and "set" commands
    In order to allow InnoDB Memcached to map to a new table, the user (DBA) is still required to "pre-register" the table(s) in the InnoDB Memcached "containers" table (there is a security consideration behind this requirement). If you would like to know more about the "containers" table, please refer to my earlier blogs on blogs.innodb.com. Once registered, InnoDB Memcached will be able to look for such tables when they are referred to. Each such registered table will have a unique "registration name" (or mapping_name) corresponding to the "name" field in the "containers" table. To access these tables, the user will include the "registration name" in their get or set commands, in the form of "get @@new_mapping_name.key"; the prefix "@@" is required to signal a mapped-table change. The key and the "mapping name" are separated by a configurable delimiter, by default ".". So the syntax is:
    get [@@mapping_name.]key_name
    set [@@mapping_name.]key_name
    or
    get @@mapping_name
    set @@mapping_name
    Here is an example. Let's set up three tables in the "containers" table. The first is a map to the InnoDB table "test/demo_test" with mapping name "setup_1":
    INSERT INTO containers VALUES ("setup_1", "test", "demo_test", "c1", "c2", "c3", "c4", "c5", "PRIMARY");
    Similarly, we set up table mappings for table "test/new_demo" with name "setup_2" and for table "my_database/my_demo" with name "setup_3":
    INSERT INTO containers VALUES ("setup_2", "test", "new_demo", "c1", "c2", "c3", "c4", "c5", "secondary_index_x");
    INSERT INTO containers VALUES ("setup_3", "my_database", "my_demo", "c1", "c2", "c3", "c4", "c5", "idx");
    To switch to table "my_database/my_demo" and get the value corresponding to "key_a", the user will do:
    get @@setup_3.key_a (this will also output the value corresponding to key "key_a")
    or simply:
    get @@setup_3
    Once this is done, this connection will stay switched to the "my_database/my_demo" table until another table mapping switch is requested, so it can continue to issue regular commands like:
    get key_b
    set key_c 0 0 7
    These DMLs will all be directed to the "my_database/my_demo" table. This also implies that different connections can have different bindings (to different tables).
    2) Delimiter
    For the delimiter "." that separates the "mapping name" and the key value, we also added a configure option in the "config_options" system table with the name "table_map_delimiter":
    INSERT INTO config_options VALUES("table_map_delimiter", ".");
    So if the user wants to change to a different delimiter, they can change it in the config_options table.
    3) Default mapping
    Once we have multiple table mappings, there should always be a "default" mapping setting. For this, we decided that if there exists a mapping name of "default", then it will be chosen as the default mapping. Otherwise, the first row of the containers table will be chosen as the default setting. Please note, user tables can be repeated in the "containers" table (for example, when the user wants to access different columns of the table in different settings), as long as they use different mapping/configure names in the first column, which is enforced by a unique index.
    4) bind command
    In addition, we also extended the protocol and added a bind command. Its usage is fairly straightforward: to switch to the "setup_3" mapping above, you simply issue:
    bind setup_3
    This will switch this connection's InnoDB table to "my_database/my_demo".
    In summary, with this feature you can now directly access different tables from different sessions, and even within a single connection you can query different tables.
    Background thread to auto-commit long running transactions
    This is a feature related to the "batch" concept we discussed in earlier blogs. This "batch" feature allows us to batch the read and write operations, and commit them only after a certain number of calls. The "batch" size is controlled by the configure parameters "daemon_memcached_w_batch_size" and "daemon_memcached_r_batch_size". This can significantly boost performance. However, it also comes with some disadvantages: for example, you will not be able to view "uncommitted" operations from the SQL end unless you set the transaction isolation level to READ UNCOMMITTED, and in addition, certain row locks will be held for an extended period of time, which might reduce concurrency. To deal with this, we introduce a background thread that "auto-commits" transactions if they are idle for a certain amount of time (the default is 5 seconds). The background thread wakes up every second, loops through every "connection" opened by Memcached, and checks for idle transactions. If such a transaction is idle longer than a certain limit and not being used, it will be committed. This limit is configurable by changing "innodb_api_bk_commit_interval". Its default value is 5 seconds, the minimum is 1 second, and the maximum is 1073741824 seconds. With the help of this background thread, you will not need to worry about long-running uncommitted transactions when you set daemon_memcached_w_batch_size and daemon_memcached_r_batch_size to a large number. This also reduces the number of locks that could be held due to long-running transactions, and thus further increases concurrency.
    Enhancement in binlog performance
    As you might all know, the binlog operation is not done by the InnoDB storage engine; rather, it is handled in the MySQL layer. In order to support binlog operations through InnoDB Memcached, we would have to artificially create some MySQL constructs in order to access the binlog handler APIs. In the previous lab release, for simplicity, we opened and destroyed these MySQL constructs (such as THD) for each operation.
    This required us to always set the "batch" size to 1 when binlog is on, no matter what "daemon_memcached_w_batch_size" and "daemon_memcached_r_batch_size" are configured to. This put a big restriction on our capability to scale, and there is also quite a bit of overhead in creating and destroying such constructs that bogs the performance down. With this release, we made the necessary changes to keep the MySQL constructs for as long as they are valid for a particular connection, so there are no longer repeated and redundant open and close (table) calls. Now, even with the binlog option enabled (with innodb_api_enable_binlog), we can still batch transactions with daemon_memcached_w_batch_size and daemon_memcached_r_batch_size, and thus scale write/read performance. Although there is still overhead that keeps InnoDB Memcached from performing as fast as when binlog is turned off, it is much better compared to the previous release. And we are continuing to optimize the solution in this area to improve the performance as much as possible.
    Performance Study
    Amerandra of our System QA team has conducted some performance studies on queries through our InnoDB Memcached connection and through the plain SQL end, and they show some interesting results. The test was conducted on a "Linux 2.6.32-300.7.1.el6uek.x86_64 ix86 (64)" machine with 16 GB memory, an Intel Xeon 2.0 GHz X86_64 CPU (2 CPUs, 4 cores each), and 2 RAID disks (1027 GB, 733.9 GB). Results are described in the following tables.
    Table 1: Performance comparison on Set operations (QPS)
    Connections | 5.6.7-RC-Memcached-plugin, memcached-threads=8*** | 5.6.7-RC* Set** | X faster
    8   | 30,000 | 5,600  | 5.36
    32  | 59,000 | 13,000 | 4.54
    128 | 68,000 | 8,000  | 8.50
    512 | 63,000 | 6,800  | 9.23
    * mysql-5.6.7-rc-linux2.6-x86_64
    ** The "set" operation, when implemented in InnoDB Memcached, involves a couple of DMLs: it first queries the table to see whether the "key" exists; if it does not, the new key/value pair is inserted. If it does exist, the "value" field of the matching row (by key) is updated. So when used in the above query, it is a precompiled stored procedure, and the query just executes such procedures.
    *** added "--daemon_memcached_option=-t8" (default is 4 threads)
    So we can see that with this "set" query, InnoDB Memcached can run 4.5 to 9 times faster than the MySQL server.
    Table 2: Performance comparison on Get operations (QPS)
    Connections | 5.6.7-RC-Memcached-plugin, memcached-threads=8 | 5.6.7-RC* Get | X faster
    8   | 42,000  | 27,000 | 1.56
    32  | 101,000 | 55,000 | 1.83
    128 | 117,000 | 52,000 | 2.25
    512 | 109,000 | 52,000 | 2.10
    With the "get" query (or the select query), Memcached performs 1.5 to 2 times faster than normal SQL.
    Summary
    In summary, we added several much-desired features to InnoDB Memcached in this release, allowing the user to operate on different tables through this Memcached interface. We now also provide a background commit thread to commit long-running idle transactions, thus allowing the user to configure large batch writes/reads without worrying about large numbers of rows being held or not being able to see (uncommitted) data. We also greatly enhanced the performance when binlog is enabled. We will continue making efforts in both performance enhancement and functionality areas to make InnoDB Memcached a good demo case for our InnoDB APIs. Jimmy Yang, September 29, 2012
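    To make the new mapping syntax concrete from a client's point of view, here is a hedged sketch using the spymemcached Java client (my choice of client library, host, and key names is illustrative; any client that speaks the memcached protocol should behave the same way against the plugin's default port 11211):
        import java.net.InetSocketAddress;
        import net.spy.memcached.MemcachedClient;

        public class InnoDBMemcachedDemo {
            public static void main(String[] args) throws Exception {
                // Assumes the InnoDB Memcached plugin is listening on localhost:11211.
                MemcachedClient client = new MemcachedClient(new InetSocketAddress("localhost", 11211));

                // Switch this connection to the "setup_3" mapping (my_database/my_demo);
                // subsequent plain keys stay bound to that table.
                Object a = client.get("@@setup_3.key_a");
                client.set("key_c", 0, "new_val").get(); // wait for the asynchronous set to complete
                Object b = client.get("key_b");

                System.out.println(a + " / " + b);
                client.shutdown();
            }
        }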

    Read the article

  • NoVa Code Camp 2010.1 – Don't Miss It!

    - by John Blumenauer
    Tomorrow, June 12th, will be NoVa Code Camp 2010.1, held at the Microsoft Technical Center in Reston, VA. What's in store? Lots of great topics by some truly knowledgeable speakers from the mid-Atlantic region. This event will have four talks on Azure alone, plus sessions on ASP.NET MVC 2, SharePoint, WP7, Silverlight, MEF, WCF, and some great presentations centered around best practices and design. The schedule can be found at: http://novacodecamp.org/RecentCodeCamps/NovaCodeCamp201001/Schedule/tabid/202/Default.aspx The session descriptions and speaker list are at: http://novacodecamp.org/RecentCodeCamps/NovaCodeCamp201001/Sessions/tabid/197/Default.aspx We're also fortunate this year to have several excellent sponsors. The sponsor list can be found at: http://novacodecamp.org/RecentCodeCamps/NovaCodeCamp201001/Sponsors/tabid/198/Default.aspx. As a result of the excellent sponsors, attendees will be enjoying nice food throughout the day, and the end-of-day raffle will have some great surprises regarding swag! I'll be presenting MEF, with an introduction and then how it can be used to extend Silverlight applications. If you're new to MEF and/or Silverlight, don't worry - I'll be easing into the concepts so everyone will leave with an understanding of MEF by the end of the session. Don't miss NoVa Code Camp 2010.1. See YOU there!

    Read the article

  • Project collision shapes to plane for 2.5D collision detection

    - by Jkh2
    I am working on a top-down 2.5D game. In the game, anything that overlaps on the screen should be 'colliding' with each other, regardless of whether they are on the same plane in the 3D world. This is illustrated below from a side-on view: the orange and green circles are spheres floating in the 3D world. They are projected onto a plane parallel to the viewport plane (y = 0 in the image), and if they overlap there is a collision event between them. These spheres are attached to other meshes to represent the sphere bounding boxes for collisions. The way I plan to implement this at the moment is the following:
    - Get the 3D world position at the center of the sphere.
    - Use Camera.WorldToViewportPoint to project the point onto the viewport plane.
    - Move a Sphere Collider with the radius of the sphere to that point.
    - Test for collisions using Unity colliders.
    My question is how to extend this to work for rotated cuboids. For instance, if I have two rotated cuboids and I follow the logic above, it would not work as intended: the cuboids may not collide in 3D, but they could still intersect on the view plane. An example is below. Is there a way to project a cuboid so that it would be aligned with the plane? Would it be a valid cuboid for all rotations if I did this?

    Read the article
