Search Results

Search found 18160 results on 727 pages for 'jdk 7 feature complete'.


  • EM12c Release 4: Cloud Control to Major Tom...

    - by abulloch
    With the latest release of Enterprise Manager 12c, Release 4 (12.1.0.4), the EM development team has added new functionality to assist the EM Administrator in monitoring the health of the EM infrastructure. Taking feedback delivered by customers directly and through customer advisory boards, some nice enhancements have been made to the “Manage Cloud Control” sections of the UI, commonly known in the EM community as “the MTM pages” (MTM stands for Monitor the Monitor). This part of the EM Cloud Control UI is viewed by many as the mission control for EM Administrators. In this post we’ll highlight some of the new information that’s on display in these redesigned pages and explain how the information they present can help EM Administrators identify potential bottlenecks or issues with the EM infrastructure.

    The first page we’ll take a look at is the newly designed Repository information page. You can get to this from the main Setup menu, through Manage Cloud Control, then Repository. Once this page loads you’ll see the new layout that includes 3 tabs containing more drill-down information.

    The Repository Tab

    The first tab, Repository, gives you a series of 6 panels or regions on screen that display key information that the EM Administrator needs to review from time to time to ensure that their infrastructure is in good health. Rather than go through every panel, let’s call out a few and let you explore the others later yourself on your own EM site. Firstly, we have the Repository Details panel. At a glance the EM Administrator can see the current version of the EM repository database and, more critically, three important pieces of information relating to availability and reliability: Is the database in Archive Log mode? Is the database using Flashback? When was the last database backup taken? In the test environment above the answers are not too worrying; however, Production environments should have at least Archive Log mode enabled, Flashback is a nice feature to enable prior to upgrades (for fast rollback), and all Production sites should have a backup. In this case the backup information in the control file indicates there have been no recorded backups.

    The next region of interest on this page shows key information around the Repository configuration, specifically the initialisation parameters (from the spfile). If you’re storing your EM Repository in a Cluster Database you can view the parameters on each individual instance using the Instance Name drop-down selector in the top right of the region. Additionally, you’ll note there is now a check performed on the active configuration to ensure that you’re using, at the very least, Oracle’s minimum recommended values. Should the values in your EM Repository not meet these requirements, they will be flagged in this table with a red X for non-compliance. You can of course change these values within EM by selecting the Database target and modifying the parameters in the spfile (and optionally the run-time values, if the parameter allows dynamic changes).

    The last region to call out on this page before moving on is the new-look Repository Scheduler Job Status region. This region is an update of a similar region seen on previous releases of the MTM pages in Cloud Control, but some important new functionality has been added that customers have requested. First up: restarting Repository jobs. As you can see from the graphic, you can now optionally select a job (by selecting the row in the UI table element) and click on the Restart Job button to take care of any jobs which have stopped or stalled for any reason. Previously this needed to be done at the command line using EMDIAG or through a PL/SQL package invocation; you can now take care of it directly from within the UI. Next, you’ll see that a feature has been added to allow the EM Administrator to customise the run-time for some of the background jobs that run in the Repository. We heard from some customers that ensuring these jobs don’t clash with Production backups, etc., is a key requirement. This new functionality allows you to select the pencil icon to edit the schedule time for these more resource-intensive background jobs and modify the schedule to avoid clashes like this. Moving on to the next tab, let’s select the Metrics tab.

    The Metrics Tab

    There are some big changes here: this page contains new information regions that help the Administrator understand the direct impact the inbound metric flows are having on the EM Repository. Many customers have provided feedback that they are in the dark about the impact of adding new targets, or large numbers of new hosts or new target types, into EM and the impact this has on the Repository. This page helps the EM Administrator get to grips with this. Let’s take a quick look at two regions on this page. First up, there’s a bubble chart showing a comprehensive view of the top resource consumers of metric data over the last 30 days, charted as the number of rows loaded against the number of collections for the metric. The size of the bubble indicates relative volume. You can see from this example above that a quick glance shows that Host metrics are the largest inbound flow into the repository when measured by number of rows. Closely following behind are a large number of collections for Oracle WebLogic Server and Application Deployment. Taken together, the Host collections amount to around 0.7MB of data, while the total information collected for WebLogic Server and Application Deployments is 0.38MB and 0.37MB respectively. If you want this breakdown of the volume of data collected, simply hover over the bubble in the chart and you’ll get a floating tooltip showing the information. Clicking on any bubble in the chart takes you one level deeper into a drill-down of the metric collection. Doing this reveals the individual metric elements for these target types and again shows a representation of the relative cost, in terms of number of rows, number of collections and storage cost of data, for each metric type.

    Looking at another panel on this page we can see a different view of this data. This panel shows the Top N metrics (the drop-down allows you to select 10, 15 or 20) sorted by volume of data. In the case above, the largest metric collection by volume over the last 30 days is the information about OS Registered Software on a Host target. Taken together, these two regions provide a powerful tool for the EM Administrator to understand the potential impact of any new targets that have been discovered and promoted into management by EM12c. It’s a great tool for identifying the cause of a sudden increase in Repository storage consumption or Redo log and Archive log generation. Using the information on this page, EM Administrators can take action to mitigate any load impact by deploying monitoring templates to the targets causing the most load, if appropriate. The last tab we’ll look at on this page is the Schema tab.

    The Schema Tab

    Selecting this tab brings up a window onto the SYSMAN schema with a focus on space usage in the EM Repository. Understanding which tablespaces are growing, and at what rate, is essential information for the EM Administrator to stay on top of managing space allocations for the EM Repository so that it works as efficiently as possible and performs well for users, not least because managing storage well ensures the continued availability of EM for monitoring purposes. The first region to highlight here shows the trend of space usage for the tablespaces in the EM Repository over time. You can see the upward trend here, showing that storage in the EM Repository has been steadily consumed over the last few days. This is normal, as the EM installation used here is brand new, with Agents being added daily to bring targets into monitoring. If your Enterprise Manager configuration has reached a steady state over a period of time, where the number of new inbound targets is relatively small and the metric collection settings are fairly uniform and standardised (using Templates and Template Collections), you’re likely to see a trend of space allocation that plateaus. The table below the trend chart shows the Top 20 Tables/Indexes sorted in descending order of space consumed. You can switch the trend view chart and corresponding detail table by choosing a different tablespace in the EM Repository using the drop-down picker on the top right of this region.

    The last region to highlight on this page shows information about the purge policies in effect in the EM Repository. This information illustrates to EM Administrators the default purge policies in effect for the different categories of information available in the EM Repository. Of course, it’s also been a long-requested feature to have the ability to modify these default retention periods, and you can do this using this screen too. As there are interdependencies between some data elements, you can’t modify retention policies on a feature-by-feature basis. Instead, retention policies bundle categories of information together in Groups, and policies are modified at the Group level. Understanding the impact of this really deserves a blog post all of its own, as modifying these can have a significant impact on both the EM Repository’s storage footprint and its performance. For now, we’re just highlighting the feature’s visibility on these new pages.

    As a user of EM12c, we hope the new features you see here address some of the feedback that’s been given on these pages over the past few releases. We’ll look out for any comments or feedback you have on these pages!

    Read the article

  • A bunch of goodness is being let out the door

    - by Enrique Lima
    Soma was the keynote speaker at TechEd Middle East in Dubai, and during the keynote he announced a couple of items moving out the door as releases. First, Visual Studio 2010 SP1, available for subscribers on March 8th and for the rest of the world on March 10th. The TFS-Project Integration Feature Pack, the Visual Studio Load Test Feature Pack and LightSwitch Beta 2 are also out and available. Check Soma’s blog … http://blogs.msdn.com/b/somasegar/archive/2011/03/07/visual-studio-2010-enhancements.aspx

    Read the article

  • 2D Barcode Addendum

    - by Tim Dexter
    Having finally got my external drive back (long story) today from Oklahoma (thank you so much Sammy), I'm back with a full complement of Oracle and blogging tools at my disposal. I have missed JDeveloper this past week, which, I have found, I immensely prefer over Eclipse (let the flaming commence :0). I use Zoundry Raven for writing articles and it's not installed locally but on my external drive, so I have been soldiering on with the blog server's pain-in-the-backside UI for writing. Now that I have my favorite editor back and things are calming down workwise, I will start to get the Excel template posts out. Today though, a note about 2D barcode support, or more specifically any barcode that needs some data manipulation before the barcode font is applied. I wrote about these fonts a long time back and laid out the java class you would need to write if you had an algorithm from the font manufacturer to use. I missed out a valuable point and James at Luminex fell into the trap. He wanted to use the datamatrix font from IDAutomation and had built the java class to be called from the RTF template, but it was not encoding, or at least did not appear to be. New debugging feature to the rescue. Kan over at the bipconsultng blog documented the feature a while back. Just adding <?xdo-debug-level:'STATEMENT'?> to my test template generated all the debug files in my c:\temp directory. No messing with files, just a simple command ... at last! Kan has documented the feature here. With the log in hand I spotted a java error stack referencing a missing code128a method, huh? Looking at James' class he had the following snippet:

    ENCODERS.put("code128a", mUtility.getClass().getMethod("code128a", clazz));
    ENCODERS.put("code128b", mUtility.getClass().getMethod("code128b", clazz));
    ENCODERS.put("code128c", mUtility.getClass().getMethod("code128c", clazz));
    ENCODERS.put("pdf417", mUtility.getClass().getMethod("pdf417", clazz));
    ENCODERS.put("datamatrix", mUtility.getClass().getMethod("datamatrix", clazz));

    His class did not include the other code128 and pdf417 methods, yet BIP was expecting them. An easy fix: just comment them out, rebuild and deploy, and the encoding started working. If you are hitting similar problems, check that class and ensure all of the referenced methods are available; if not, delete them or get commenting. James now has purdy labels popping out that his hardware can read, sweet!
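    To make the failure mode concrete, here is a minimal, hypothetical sketch of the reflective registration pattern described above. The class and method names are illustrative only (this is not BI Publisher's actual API); the point is simply that every name registered via getMethod() must exist on the class, or registration fails just as it did in James' debug log.

    import java.lang.reflect.Method;
    import java.util.HashMap;
    import java.util.Map;

    public class BarcodeUtil {
        // Map of barcode format name to the encoder method invoked by reflection.
        private static final Map<String, Method> ENCODERS = new HashMap<String, Method>();

        // The one encoder we actually implement; a real version would apply
        // the font vendor's encoding algorithm to the data.
        public static String datamatrix(String data) {
            return data; // placeholder transformation
        }

        static {
            try {
                Class<?>[] clazz = new Class<?>[] { String.class };
                // Only register methods that exist on this class. Registering
                // "code128a" here without writing a code128a() method would
                // reproduce the missing-method error stack from the post.
                ENCODERS.put("datamatrix", BarcodeUtil.class.getMethod("datamatrix", clazz));
            } catch (NoSuchMethodException e) {
                throw new ExceptionInInitializerError(e);
            }
        }

        // Look up and invoke the encoder for a given format name.
        public static String encode(String format, String data) throws Exception {
            Method m = ENCODERS.get(format);
            if (m == null) {
                throw new IllegalArgumentException("No encoder registered for " + format);
            }
            return (String) m.invoke(null, data);
        }
    }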

    Read the article

  • How to Use Offline Files in Windows to Cache Your Networked Files Offline

    - by Taylor Gibb
    The problem with storing all your files on a file server or networked machine is simple: when you leave the network, how are you going to access your files? Instead of using a VPN or Dropbox, you can use the Offline Files feature built into Windows. Note: you should probably not use this guide to make your 2 terabyte movie collection available offline; while it may work, it is not recommended, because the Offline Files feature isn’t made for storing massive amounts of data offline.

    Read the article

  • Integrating NetBeans for Raspberry Pi Java Development

    - by speakjava
    Raspberry Pi IDE Java Development

    The Raspberry Pi is an incredible device for building embedded Java applications but, despite being able to run an IDE on the Pi, doing so really pushes things to the limit. It's much better to use a PC or laptop to develop the code and then deploy and test on the Pi. What I thought I'd do in this blog entry was to run through the steps necessary to set up NetBeans on a PC for Java code development, with automatic deployment to the Raspberry Pi as part of the build process.

    I will assume that your starting point is a Raspberry Pi with an SD card that has one of the latest Raspbian images on it. This is good because this now includes JDK 7 as part of the distro, so there is no need to download and install a separate JDK. I will also assume that you have installed the JDK and NetBeans on your PC. These can be downloaded here.

    There are numerous approaches you can take to this, including mounting the file system from the Raspberry Pi remotely on your development machine. I tried this and I found that NetBeans got rather upset if the file system disappeared, either through network interruption or the Raspberry Pi being turned off. The following method uses copying over SSH, which will fail more gracefully if the Pi is not responding.

    Step 1: Enable SSH on the Raspberry Pi

    To run the Java applications you create, you will need to start Java on the Raspberry Pi with the appropriate class name, classpath and parameters. For non-JavaFX applications you can do this either from the Raspberry Pi desktop or, if you do not have a monitor connected, through a remote command line. To execute the remote command line you need to enable SSH (a secure shell login over the network) and connect using an application like PuTTY. You can enable SSH when you first boot the Raspberry Pi, as the raspi-config program runs automatically. You can also run it at any time afterwards with the command:

    sudo raspi-config

    This will bring up a menu of options. Select '8 Advanced Options' and on the next screen select 'A4 SSH'. Select 'Enable' and the task is complete.

    Step 2: Configure Raspberry Pi Networking

    By default, the Raspbian distribution configures the ethernet connection to use DHCP rather than a static IP address. You can continue to use DHCP if you want, but to avoid having to potentially change settings whenever you reboot the Pi, using a static IP address is simpler. To configure this on the Pi you need to edit the /etc/network/interfaces file. You will need to do this as root using the sudo command, so something like sudo vi /etc/network/interfaces. In this file you will see this line:

    iface eth0 inet dhcp

    This needs to be changed to the following:

    iface eth0 inet static
        address 10.0.0.2
        gateway 10.0.0.254
        netmask 255.255.255.0

    You will need to change the address and gateway values to an appropriate IP address for your network and the address of your gateway.

    Step 3: Create a Public-Private Key Pair On Your Development Machine

    How you do this will depend on which operating system you are using.

    Mac OSX or Linux: Run the command:

    ssh-keygen -t rsa

    Press ENTER/RETURN to accept the default destination for saving the key. We do not need a passphrase, so simply press ENTER/RETURN for an empty one and once more to confirm. The key will be created in the file .ssh/id_rsa.pub in your home directory. Display the contents of this file using the cat command:

    cat ~/.ssh/id_rsa.pub

    Open a window, SSH to the Raspberry Pi and login. Change directory to .ssh and edit the authorized_keys file (don't worry if the file does not exist). Copy and paste the contents of the id_rsa.pub file into the authorized_keys file and save it.

    Windows: Since Windows is not a UNIX-derivative operating system, it does not include the necessary key generating software by default. To generate the key I used puttygen.exe, which is available from the same site that provides the PuTTY application, here. Download this and run it on your Windows machine. Follow the instructions to generate a key. I remove the key comment, but you can leave that if you want. Click "Save private key", confirm that you don't want to use a passphrase and select a filename and location for the key. Copy the public key from the part of the window marked "Public key for pasting into OpenSSH authorized_keys file". Use PuTTY to connect to the Raspberry Pi and login. Change directory to .ssh and edit the authorized_keys file (don't worry if this does not exist). Paste the key information at the end of this file and save it.

    Logout and then start PuTTY again. This time we need to create a saved session using the private key. Type the IP address of the Raspberry Pi into the "Host Name (or IP address)" field and expand "SSH" under the "Connection" category. Select "Auth" (see the screen shot below). Click the "Browse" button under "Private key file for authentication" and select the file you saved from puttygen. Go back to the "Session" category and enter a short name in the saved sessions field, as shown below. Click "Save" to save the session.

    Step 4: Test The Configuration

    You should now have the ability to use scp (Mac/Linux) or pscp.exe (Windows) to copy files from your development machine to the Raspberry Pi without needing to authenticate by typing in a password (so we can automate the process in NetBeans). It's a good idea to test this using something like:

    scp /tmp/foo pi@10.0.0.2:/tmp

    on Linux or Mac, or

    pscp.exe foo pi@raspi:/tmp

    on Windows (note that on Windows we use the saved configuration name instead of the IP address or hostname so the public key is picked up). pscp.exe is another tool available from the creators of PuTTY.

    Step 5: Configure the NetBeans Build Script

    Start NetBeans and create a new project (or open an existing one that you want to deploy automatically to the Raspberry Pi). Select the Files tab in the explorer window and expand your project. You will see a build.xml file. Double click this to edit it. This file will mostly be comments. At the end (but within the </project> tag) add the XML for <target name="-post-jar">, shown below in case you want to cut and paste:

    <target name="-post-jar">
      <echo level="info" message="Copying dist directory to remote Pi"/>
      <exec executable="scp" dir="${basedir}">
        <arg line="-r"/>
        <arg value="dist"/>
        <arg value="pi@10.0.0.2:NetBeans/CopyTest"/>
      </exec>
    </target>

    For Windows it will be slightly different:

    <target name="-post-jar">
      <echo level="info" message="Copying dist directory to remote Pi"/>
      <exec executable="C:\pi\putty\pscp.exe" dir="${basedir}">
        <arg line="-r"/>
        <arg value="dist"/>
        <arg value="pi@raspi:NetBeans/CopyTest"/>
      </exec>
    </target>

    You will also need to ensure that pscp.exe is in your PATH (or specify a fully qualified pathname). From now on, when you clean and build the project, the dist directory will automatically be copied to the Raspberry Pi ready for testing.
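    As a quick sanity check of the whole pipeline, something like the following minimal class can be built, auto-copied by the -post-jar target, and then run on the Pi over SSH. The class name CopyTest matches the hypothetical project path used in the build script above; adjust it to your own project.

    // Minimal deployment smoke test; prints enough to confirm that the jar
    // running on the Pi is the one just copied by the -post-jar target.
    public class CopyTest {
        public static void main(String[] args) {
            System.out.println("Hello from the Raspberry Pi!");
            System.out.println("os.arch = " + System.getProperty("os.arch"));
            System.out.println("java.version = " + System.getProperty("java.version"));
        }
    }

    On the Pi, java -jar NetBeans/CopyTest/dist/CopyTest.jar should then print the ARM architecture and the JDK 7 version string.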

    Read the article

  • The Work Order Printing Challenge

    - by celine.beck
    One of the biggest concerns we've heard from maintenance practitioners is the difficulty of printing and batch printing work order details along with their accompanying attachments. Indeed, maintenance workers traditionally rely on work order packets to complete their job. A standard work order packet can include a variety of information like equipment documentation, operating instructions, checklists, end-of-task feedback forms and the like. Now, the problem is that most Asset Lifecycle Management applications do not provide a simple and efficient solution for process printing with document attachments. Work order forms can be easily printed, but attachments are usually left out of the printing process. This sounds like a minor problem, but when you are processing high volumes of work orders on a regular basis, this inconvenience can result in significant inefficiencies. In order to print a work order and its related attachments, maintenance personnel need to print the work order details, then go back to the work order and open each individual attachment using the proper authoring application to view and print each document. The printed output is collated into a work order packet.

    The AutoVue Document Print Service products that were just released in April 2010 aim at helping organizations address the work order printing challenge. Customers and partners can leverage the AutoVue Document Print Services to build a complete printing solution that complements their existing print server solution with AutoVue's document- and platform-agnostic print services. The idea is to leverage AutoVue's printing services to invoke printing either programmatically or manually, directly from within the work order management application, and efficiently process the printing of complete work order packets, including all types of attachments, from office files to more advanced engineering documents like 2D CAD drawings. Oracle partners like MIPRO Consulting, specialists in PeopleSoft implementations, have already expressed interest in the AutoVue Document Print Service products for their ability to offer print services to the PeopleSoft ALM suite, so that customers are able to print packages of documents for maintenance personnel. For more information on the subject, please consult MIPRO Consulting's article entitled Unsung Value: Primavera and AutoVue Integration into PeopleSoft posted on their blog. The blog post entitled Introducing AutoVue Document Print Service provides additional information on how the solution works. We would also love to hear your thoughts on the topic, so please do not hesitate to post your comments/feedback on our blog.

    Related Articles:
    Introducing AutoVue Document Print Service
    Print Any Document Type with AutoVue Document Print Services

    Read the article

  • Should we use RSS or Atom for feed generation?

    - by Henrik Söderlund
    For various reasons we are required to add feeds to our product. The main reason is to be able to say to potential buyers that "yes, we have feeds". We do not actually expect the feature to be used that much. Ideally we would like to provide both RSS and Atom feeds. However, at the moment we are severely pressed for time and are forced to select just one of these. Should we use Atom or RSS? Feature-wise we are fine with either, so I am only looking for information about the popularity and support for the various formats. Are there many feed readers out there without Atom support?
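    For what it's worth, the skeleton of a valid Atom feed is small. The sketch below (plain Java string building, with placeholder titles and urn:uuid identifiers that are not from the question) shows the minimum elements Atom requires: a feed-level id, title, updated and author, plus the same id/title/updated trio per entry.

    import java.text.SimpleDateFormat;
    import java.util.Date;
    import java.util.TimeZone;

    public class MinimalAtomFeed {
        public static void main(String[] args) {
            // Atom requires RFC 3339 timestamps; format the current time in UTC.
            SimpleDateFormat rfc3339 = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'");
            rfc3339.setTimeZone(TimeZone.getTimeZone("UTC"));
            String updated = rfc3339.format(new Date());

            StringBuilder feed = new StringBuilder();
            feed.append("<?xml version=\"1.0\" encoding=\"utf-8\"?>\n");
            feed.append("<feed xmlns=\"http://www.w3.org/2005/Atom\">\n");
            // Mandatory feed-level metadata.
            feed.append("  <title>Example Product Feed</title>\n");
            feed.append("  <id>urn:uuid:60a76c80-d399-11d9-b93c-0003939e0af6</id>\n");
            feed.append("  <updated>").append(updated).append("</updated>\n");
            feed.append("  <author><name>Example Author</name></author>\n");
            // Each entry needs its own id, title and updated element.
            feed.append("  <entry>\n");
            feed.append("    <title>First item</title>\n");
            feed.append("    <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id>\n");
            feed.append("    <updated>").append(updated).append("</updated>\n");
            feed.append("    <summary>Body of the first item.</summary>\n");
            feed.append("  </entry>\n");
            feed.append("</feed>\n");
            System.out.println(feed);
        }
    }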

    Read the article

  • CISDI Cloud - Industrial Cloud Computing Platform based on Oracle Products

    - by Wenyu Duan
    In today's era, Cloud Computing is becoming integral to the vision and corporate strategy of leading organizations and is often seen as a key business driver to achieve growth and innovation. Headquartered in Chongqing, China, CISDI Engineering Co., Ltd. is a large state-owned engineering company, offering consulting, engineering design, EPC contracting, and equipment integration services to steel producers all over the world. With over 50 years of experience, CISDI offers quality services for every aspect of production for projects in the metal industry, and the company has evolved into a leading international engineering service group with 18 subsidiaries providing complete lifecycle services for E&C projects. A CISDI group delegation led by Mr. Zhaohui Yu, CEO of CISDI Group, Mr. Zhiyou Li, CEO of CISDI Info, Mr. Qing Peng, CTO of CISDI Info and Mr. Xin Xiao, Head of CISDI Info's R&D, joined Oracle OpenWorld 2012 and presented a very impressive cloud initiative case in their session titled “E&C Industry Solution in CISDI Cloud - An Industrial Cloud Computing Platform Based on Oracle Products”.

    CISDI group plans to expand through three phases in the construction of its cloud computing platform: first, it will relocate its existing technologies to Oracle systems, along with establishing a private cloud for CISDI; secondly, it will gradually provide mixed cloud services for its subsidiaries and partners; and finally it plans to launch an industrial cloud with a highly mature, secure and scalable environment providing cloud services for customers in the engineering construction and steel industries, among others. “CISDI Cloud” will become the growth engine for the organization to expand its global reach through online services, achieving the strategic objective of being the preferred choice of E&C companies worldwide.

    The new cloud computing platform is designed to provide access to the shared computing resources pool in a self-service, dynamic, elastic and measurable way. Its flexible and scalable grid structure can support elastic expansion and sustainable growth, and can bring significant benefits in speed, agility and efficiency. Further, the platform can greatly cut down deployment and maintenance costs. The CISDI delegation highlighted these points as the key reasons why the group decided to have a strategic collaboration with Oracle for building this world-class industrial cloud:
    - Oracle's strategy: Open, Complete and Integrated
    - Oracle is the only company that can provide engineered systems, with a complete product chain of hardware and software
    - Exadata, Exalogic and EM 12c provide a solid foundation for “CISDI Cloud”

    The cloud blueprint and advanced architecture for the industrial cloud computing platform presented in the session show how Oracle products and technologies, together with industrial applications from CISDI, can provide an end-to-end portfolio of E&C industry services in the cloud. CISDI group was recognized for business leadership and innovative solutions and was presented with the Engineering and Construction Industry Excellence Award during Oracle OpenWorld.

    Read the article

  • When done is not done

    - by Tony Davis
    Most developers and DBAs will know what it’s like to be asked to do "a quick tidy up" on a project that, on closer inspection, turns out to be a barely working prototype: as the cynical programmer says, "when you’re told that a project is 90% done, prepare for the next 90%". It is easy to convince a layperson that an application is complete just by using test data and sticking to the workflow that the development team has implemented and tested. The application is ‘done’ only in the sense that the anticipated paths through the software features, using known data, are fully supported. Reality often strikes only when testers reveal the application’s strange and erratic behavior in response to end-user behavior that strays from the "ideal". The problem is this: how do we measure progress, accurately and objectively?

    Development methods such as Scrum or Kanban, when implemented rigorously, can mitigate these problems for developers, to some extent. They force a team to progress one small but complete feature at a time, to find out how long it really takes for this feature to be "done done"; in other words, done to the point where its performance and scalability are understood, it is tested for all conceivable edge cases and doesn’t break… it is ready for prime time. At that point, the team has a much more realistic idea of how long it will take them to really complete all the remaining features, and so how far away the end is. However, it is when software crosses team boundaries that we feel the limitations of such techniques. No matter how well drilled the development team is, problems will still arise if they don’t deploy frequently to a production environment. If they work feverishly for months on end before finally tossing the finished piece of software over the fence for the DBA to deploy to the "real world", then once again will dawn the realization that "done done" is still out of reach, as the DBA uncovers poorly coded transactions, unscalable queries, inefficient caching, and so on. By deploying regularly, end users will also have a much earlier opportunity to tell you how far what you implemented strayed from what they wanted.

    If you have a tale to tell, anonymized of course, of a "quick polish" project that turned out to be anything but, and what the major problems were, please do share it. Cheers, Tony.

    Read the article

  • Silverlight Cream for May 25, 2010 - 2 -- #870

    - by Dave Campbell
    In this Issue: Kirupa, Matthias Shapiro(-2-, -3-), Giorgetti Alessandro, Kunal Chowdhury, Mike Snow, and Jason Zander. Shoutout: This looks like a really nice WP7 app done by a team of folks for Imagine Cup 2010: Ahead ... I hope to see some blog posts and code on this! From SilverlightCream.com, and remember you can send me a link to your post or submit at SilverlightCream.com: Control Storyboards Easily using Behaviors Kirupa is following through on a promise to discuss the Behaviors that come on-board Blend. He's starting with two to help deal with Storyboards: ControlStoryboardAction and the StoryboardCompletedTrigger. PHP, MySQL and Silverlight: The Complete Tutorial (Part 1) Matthias Shapiro has a 3-parter up on PHP, MySQL, and Silverlight -- wondered how I missed this first one until I realized they all posted in 2 days... this first post sets up the MySQL database to be used. PHP, MySQL, and Silverlight: The Complete Tutorial (Part 2) In part 2, Matthias Shapiro writes a PHP web service that grabs the data from the database and sends it in JSON format to the Silverlight app (see part 3). PHP, MySQL, and Silverlight: The Complete Tutorial (Part 3) Matthias Shapiro's part 3 is the Silverlight part that reads the JSON produced by the PHP webservice from Part 2, to provide display and edits of the data... and this whole series includes source. Silverlight: adding an IsEditing property to the DataForm Giorgetti Alessandro laments the lack of an IsEditing property in the DataForm, then goes on to demonstrate his path to a suitable workaround. Step-by-Step Command Binding in Silverlight 4 Kunal Chowdhury has a nicely-detailed post on Command Binding in Silverlight 4 and builds up a demo MVVM app in the process... source project included. Silverlight Tip of the Day #24 – Resolving Unknown Objects in VS I'm not sure I would call Mike Snow's latest Silverlight Tip 'Silverlight' ... but if you don't know it, you need to. Sample: Windows Phone 7 Example Application with Landscape Layout Whoa... check out the WP7 app Jason Zander did with landscape mode defined... you're going to want to refer back to this one... Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Webcast Replay Available: Technical Preview of EBS 12.2 Online Patching

    - by BillSawyer
    I am pleased to release the replay and presentation for ATG Live Webcast: Technical Preview of EBS 12.2 Online Patching (Presentation). Kevin Hudson, Senior Director and one of the Online Patching architects, discussed one of the cornerstone new features in our upcoming Oracle E-Business Suite 12.2 release. This ground-breaking feature is based upon Edition-Based Redefinition, a new 11gR2 Database feature that was built to Oracle Applications division specifications to allow the E-Business Suite's database tier to be patched while the environment is running. Online Patching combines the use of Edition-Based Redefinition and new E-Business Suite technologies to allow patching of the E-Business Suite's database and application tier servers while the environment is being actively used by its end users. (June 2012)

    Finding other recorded ATG webcasts: The catalog of ATG Live Webcast replays, presentations, and all ATG training materials is available in this blog's Webcasts and Training section.

    Read the article

  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object.

    You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris.

    This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead.

    The Long Road To Stubs

    This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled:

    4631488 lib/Makefile is too patient: .WAITs should be reduced

    This CR encapsulates a number of chronic issues with Solaris builds: We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware. Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel. To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: It's really hard to get right. It's really easy to get it wrong and never know it, because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.
    As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand-written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other. From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach: To analyze an object, you have to build it first. This is a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available. The analysis will take time, and remember that we're constantly trying to make builds faster, not slower. By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand-written rules described above. The hand-written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand-written approach.

    Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years, until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon to be inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot.

    In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game-changing series of realizations: The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime. If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object. In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of their publicly available functions and data. Could we build a stub object using the mapfile for the real object? It ought to be very fast to build stub objects, as there are no input objects to process. Unlike the real object, stub objects would not actually require any dependencies, and so all of the stubs for the entire system could be built in parallel. When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.
    We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out. Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following: Present the same set of global symbols, with the same ELF versioning, as the real object. Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment. Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose. For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object. If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object.

    We imagined the stub library feature working as follows: A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any objects or shared libraries on the command line are ignored. The extra information needed (function or data, size, and bss details) would be added to the mapfile. When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match.

    In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like:

    # DATA(i386) __iob 0x3c0
    # DATA(amd64,sparcv9) __iob 0xa00
    # DATA(sparc) __iob 0x140
    iob;

    A further problem then became clear: If we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan: A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code. Another perl script is used after both objects have been built to compare the real and stub objects, using data from elfdump, and validate that they present the same linking interface. By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.
    Ultimately though, the result was unsatisfactory as a basis for a real product. There were so many issues: The use of stylized comments was fine for a prototype, but not close to professional enough for shipping product. The idea of having to document and support it was a large concern. The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so will raise barriers to converting existing code. A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work. A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one.

    At that point, we needed to apply this prototype to building Solaris. As you might imagine, modifying all the makefiles in the core Solaris code base in order to do this is a massive task, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years.

    Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not, have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had stubs in mind as I moved forward. The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010:

    6916788 ld version 2 mapfile syntax
    PSARC/2009/688 Human readable and extensible ld mapfile syntax

    In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010:

    6916796 OSnet mapfiles should use version 2 link-editor syntax

    That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this: We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax.
    I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second:

    % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \
        -ztext -zdefs -Bdirect ...
    real        0.019708910
    user        0.010101680
    sys         0.008528431

    In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished.

    Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects, however, was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens.

    And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... and so, I backed away, put it down for a few months and did other work... until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it.

    Without stubs, the following gives a simplified high level view of how Solaris is built: An initially empty directory known as the proto, and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution. A top level setup rule creates the proto area and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area. Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built. Subsequent passes run lint, and do packaging.

    Given this structure, the additions to use stub objects are: A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
    A new target is added to library Makefiles called stub. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them. A new target is added to the Makefiles called stubinstall, which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the plumbing used by the existing install rule. The setup rule runs stubinstall over the entire lib subtree as part of its initialization. All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization.

    There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them.

    After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do. I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would make merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.
    And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub library enabled version! This is why it is important to always measure, and not just assume. One can tell from first principles, based on all those removed dependency rules in the library makefiles, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down: Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel. It was suggested that we might be I/O bound, and so the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timings between the stub and non-stub cases were just too suspiciously identical. Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up. Eventually, a more plausible and obvious reason emerged: We build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust.

    And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with:

    6993877 ld should produce stub objects
    PSARC/2010/397 ELF Stub Objects

    followed by the work to convert the ON consolidation in snv_161 (February 2011) with:

    7009826 OSnet should use stub objects
    4631488 lib/Makefile is too patient: .WAITs should be reduced

    This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then.

    Conclusions and Looking Forward

    Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.

    Read the article

  • Byte Size Tips: How to Disable the Useless Dashboard on Mac OS X

    - by The Geek
    After getting my new MacBook Air with the awesome battery life, I decided to give OS X a spin for a while to see how I liked it. About 34 seconds later, I encountered my first irritation: The stupid Dashboard feature is just completely useless. Here’s how to disable it. Note: overall, Mac OS X is a really great operating system. It’s just this one feature that makes no sense. Simply Remove Dashboard from Spaces     
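    The excerpt ends at the heading of the walkthrough ("Simply Remove Dashboard from Spaces"), which removes Dashboard as a Space via System Preferences. For reference, a widely used Terminal equivalent on OS X of this era (an alternative to the article's steps, not taken from them) is:

    # Disable Dashboard, then restart the Dock so the change takes effect
    defaults write com.apple.dashboard mcx-disabled -boolean YES
    killall Dock

    # To bring Dashboard back later
    defaults write com.apple.dashboard mcx-disabled -boolean NO
    killall Dock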

    Read the article

  • Why AdMob’s reported iPhone and Android market shares are inflated

    AdMob, the mobile advertiser that was bought by Google some months ago, has released its latest market share figures for mobile browsers. Their main findings have already been discussed extensively: smartphones are on the rise, at 48% versus 35% last month; feature phones are falling quickly, from 58% to 35%. Still, the absolute number of feature phones rose by 31%, which means that the market as a whole is growing rapidly. The AdMob report, however, is not about browser market share but about ad impressions....

    Read the article

  • SATA errors reported during boot: exception Emask 0x40 SAct 0x0 SErr 0x80800 action 0x0

    - by digby280
    I have noticed some errors during the Linux boot. They seem to continue to occur after boot, adding lines to the log every few seconds. Once booted, this normally does not appear to cause any problems. However, around 1 in 10 boots results in a kernel panic, and the computer has on two or three occasions suddenly rebooted after being powered on for a number of hours. I presume the cause of those reboots is a kernel panic as well. I am running Ubuntu 11.10 and have had Ubuntu installed on the computer for around a year. I have googled around and not found anything useful. I have provided the kernel log lines and the output of smartctl below. Can anyone explain exactly what these errors mean, or better still, how to resolve them?

    Apr 2 16:51:27 dell580 kernel: [ 19.831140] EXT4-fs (sdb2): re-mounted. Opts: errors=remount-ro,user_xattr,commit=0
    Apr 2 16:51:27 dell580 kernel: [ 19.934194] tg3 0000:03:00.0: eth0: Link is down
    Apr 2 16:51:28 dell580 kernel: [ 20.929468] tg3 0000:03:00.0: eth0: Link is up at 100 Mbps, full duplex
    Apr 2 16:51:28 dell580 kernel: [ 20.929471] tg3 0000:03:00.0: eth0: Flow control is on for TX and on for RX
    Apr 2 16:51:28 dell580 kernel: [ 20.929727] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
    Apr 2 16:51:29 dell580 kernel: [ 21.609381] EXT4-fs (sdb2): re-mounted. Opts: errors=remount-ro,user_xattr,commit=0
    Apr 2 16:51:29 dell580 kernel: [ 21.616515] ata2.01: exception Emask 0x40 SAct 0x0 SErr 0x80800 action 0x0
    Apr 2 16:51:29 dell580 kernel: [ 21.616519] ata2.01: SError: { HostInt 10B8B }
    Apr 2 16:51:29 dell580 kernel: [ 21.616525] ata2.00: hard resetting link
    Apr 2 16:51:29 dell580 kernel: [ 21.934036] ata2.01: hard resetting link
    Apr 2 16:51:29 dell580 kernel: [ 22.408890] ata2.00: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
    Apr 2 16:51:29 dell580 kernel: [ 22.408907] ata2.01: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    Apr 2 16:51:29 dell580 kernel: [ 22.440934] ata2.00: configured for UDMA/100
    Apr 2 16:51:29 dell580 kernel: [ 22.449040] ata2.01: configured for UDMA/133
    Apr 2 16:51:29 dell580 kernel: [ 22.449818] ata2: EH complete
    Apr 2 16:51:33 dell580 kernel: [ 26.122664] ata2.01: exception Emask 0x40 SAct 0x0 SErr 0x80800 action 0x0
    Apr 2 16:51:33 dell580 kernel: [ 26.122670] ata2.01: SError: { HostInt 10B8B }
    Apr 2 16:51:33 dell580 kernel: [ 26.122677] ata2.00: hard resetting link
    Apr 2 16:51:33 dell580 kernel: [ 26.442684] ata2.01: hard resetting link
    Apr 2 16:51:34 dell580 kernel: [ 26.925545] ata2.00: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
    Apr 2 16:51:34 dell580 kernel: [ 26.925561] ata2.01: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    Apr 2 16:51:34 dell580 kernel: [ 26.961542] ata2.00: configured for UDMA/100
    Apr 2 16:51:34 dell580 kernel: [ 26.969616] ata2.01: configured for UDMA/133
    Apr 2 16:51:34 dell580 kernel: [ 26.970400] ata2: EH complete
    Apr 2 16:51:35 dell580 kernel: [ 28.111180] ata2.01: exception Emask 0x40 SAct 0x0 SErr 0x80800 action 0x0
    Apr 2 16:51:35 dell580 kernel: [ 28.111184] ata2.01: SError: { HostInt 10B8B }
    Apr 2 16:51:35 dell580 kernel: [ 28.111191] ata2.00: hard resetting link
    Apr 2 16:51:35 dell580 kernel: [ 28.429674] ata2.01: hard resetting link
    Apr 2 16:51:36 dell580 kernel: [ 28.904557] ata2.00: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
    Apr 2 16:51:36 dell580 kernel: [ 28.904572] ata2.01: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    Apr 2 16:51:36 dell580 kernel: [ 28.936609] ata2.00: configured for UDMA/100
    Apr 2 16:51:36 dell580 kernel: [ 28.944692] ata2.01: configured for UDMA/133
    Apr 2 16:51:36 dell580 kernel: [ 28.945464] ata2: EH complete
    Apr 2 16:51:38 dell580 kernel: [ 31.581756] eth0: no IPv6 routers present
    Apr 2 16:51:38 dell580 kernel: [ 32.103066] ata2.01: exception Emask 0x40 SAct 0x0 SErr 0x80800 action 0x0
    Apr 2 16:51:38 dell580 kernel: [ 32.103074] ata2.01: SError: { HostInt 10B8B }
    Apr 2 16:51:38 dell580 kernel: [ 32.103085] ata2.00: hard resetting link
    Apr 2 16:51:38 dell580 kernel: [ 32.419669] ata2.01: hard resetting link
    Apr 2 16:51:39 dell580 kernel: [ 32.894518] ata2.00: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
    Apr 2 16:51:39 dell580 kernel: [ 32.894533] ata2.01: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    Apr 2 16:51:39 dell580 kernel: [ 32.926536] ata2.00: configured for UDMA/100
    Apr 2 16:51:39 dell580 kernel: [ 32.934715] ata2.01: configured for UDMA/133
    Apr 2 16:51:39 dell580 kernel: [ 32.935578] ata2: EH complete

    Here's the output of smartctl for the drive.

    smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.0.0-17-generic] (local build)
    Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net

    === START OF INFORMATION SECTION ===
    Model Family:     SAMSUNG SpinPoint F1 DT
    Device Model:     SAMSUNG HD103UJ
    Serial Number:    S13PJ90QC19706
    LU WWN Device Id: 5 0000f0 00b1c7960
    Firmware Version: 1AA01113
    User Capacity:    1,000,204,886,016 bytes [1.00 TB]
    Sector Size:      512 bytes logical/physical
    Device is:        In smartctl database [for details use: -P show]
    ATA Version is:   8
    ATA Standard is:  ATA-8-ACS revision 3b
    Local Time is:    Mon Apr 2 17:13:48 2012 BST
    SMART support is: Available - device has SMART capability.
    SMART support is: Enabled

    === START OF READ SMART DATA SECTION ===
    SMART overall-health self-assessment test result: PASSED

    General SMART Values:
    Offline data collection status: (0x00) Offline data collection activity was never started. Auto Offline Data Collection: Disabled.
    Self-test execution status: ( 41) The self-test routine was interrupted by the host with a hard or soft reset.
    Total time to complete Offline data collection: (11772) seconds.
    Offline data collection capabilities: (0x7b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. Conveyance Self-test supported. Selective Self-test supported.
    SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer.
    Error logging capability: (0x01) Error logging supported. General Purpose Logging supported.
    Short self-test routine recommended polling time: ( 2) minutes.
    Extended self-test routine recommended polling time: ( 197) minutes.
    Conveyance self-test routine recommended polling time: ( 21) minutes.
    SCT capabilities: (0x003f) SCT Status supported. SCT Error Recovery Control supported. SCT Feature Control supported. SCT Data Table supported.
    SMART Attributes Data Structure revision number: 16
    Vendor Specific SMART Attributes with Thresholds:
    ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
      1 Raw_Read_Error_Rate     0x000f 100   100   051    Pre-fail Always  -           0
      3 Spin_Up_Time            0x0007 076   076   011    Pre-fail Always  -           7940
      4 Start_Stop_Count        0x0032 099   099   000    Old_age  Always  -           521
      5 Reallocated_Sector_Ct   0x0033 100   100   010    Pre-fail Always  -           0
      7 Seek_Error_Rate         0x000f 253   253   051    Pre-fail Always  -           0
      8 Seek_Time_Performance   0x0025 100   100   015    Pre-fail Offline -           0
      9 Power_On_Hours          0x0032 100   100   000    Old_age  Always  -           642
     10 Spin_Retry_Count        0x0033 100   100   051    Pre-fail Always  -           0
     11 Calibration_Retry_Count 0x0012 100   100   000    Old_age  Always  -           0
     12 Power_Cycle_Count       0x0032 100   100   000    Old_age  Always  -           482
     13 Read_Soft_Error_Rate    0x000e 100   100   000    Old_age  Always  -           0
    183 Runtime_Bad_Block       0x0032 100   100   000    Old_age  Always  -           759
    184 End-to-End_Error        0x0033 100   100   000    Pre-fail Always  -           0
    187 Reported_Uncorrect      0x0032 100   100   000    Old_age  Always  -           0
    188 Command_Timeout         0x0032 100   100   000    Old_age  Always  -           0
    190 Airflow_Temperature_Cel 0x0022 073   069   000    Old_age  Always  -           27 (Min/Max 16/27)
    194 Temperature_Celsius     0x0022 073   067   000    Old_age  Always  -           27 (Min/Max 16/28)
    195 Hardware_ECC_Recovered  0x001a 100   100   000    Old_age  Always  -           320028
    196 Reallocated_Event_Count 0x0032 100   100   000    Old_age  Always  -           0
    197 Current_Pending_Sector  0x0012 100   100   000    Old_age  Always  -           0
    198 Offline_Uncorrectable   0x0030 100   100   000    Old_age  Offline -           0
    199 UDMA_CRC_Error_Count    0x003e 099   099   000    Old_age  Always  -           1494
    200 Multi_Zone_Error_Rate   0x000a 100   100   000    Old_age  Always  -           0
    201 Soft_Read_Error_Rate    0x000a 253   253   000    Old_age  Always  -           0

    SMART Error Log Version: 1
    ATA Error Count: 211 (device log contains only the most recent five errors)
    CR = Command Register [HEX], FR = Features Register [HEX], SC = Sector Count Register [HEX], SN = Sector Number Register [HEX], CL = Cylinder Low Register [HEX], CH = Cylinder High Register [HEX], DH = Device/Head Register [HEX], DC = Device Command Register [HEX], ER = Error register [HEX], ST = Status register [HEX]
    Powered_Up_Time is measured from power on, and printed as DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes, SS=sec, and sss=millisec. It "wraps" after 49.710 days.

    Error 211 occurred at disk power-on lifetime: 0 hours (0 days + 0 hours)
    When the command that caused the error occurred, the device was active or idle.
    After command completion occurred, registers were:
    ER ST SC SN CL CH DH
    -- -- -- -- -- -- --
    84 51 0f 31 63 8f e1  Error: ICRC, ABRT 15 sectors at LBA = 0x018f6331 = 26174257
    Commands leading to the command that caused the error were:
    CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
    -- -- -- -- -- -- -- --   ---------------  --------------------
    c8 00 00 40 62 8f e1 08   00:01:00.460     READ DMA
    c8 00 20 00 7c 30 e0 08   00:01:00.450     READ DMA
    c8 00 00 10 49 8f e1 08   00:01:00.440     READ DMA
    c8 00 e0 20 d0 30 e0 08   00:01:00.420     READ DMA
    c8 00 00 c0 59 90 e1 08   00:01:00.400     READ DMA

    Error 210 occurred at disk power-on lifetime: 0 hours (0 days + 0 hours)
    When the command that caused the error occurred, the device was active or idle.
    After command completion occurred, registers were:
    ER ST SC SN CL CH DH
    -- -- -- -- -- -- --
    84 51 cf e9 cf 66 e0  Error: ICRC, ABRT 207 sectors at LBA = 0x0066cfe9 = 6737897
    Commands leading to the command that caused the error were:
    CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
    -- -- -- -- -- -- -- --   ---------------  --------------------
    c8 00 00 b8 cf 66 e0 08   00:08:29.780     READ DMA
    c8 00 60 60 c9 18 e0 08   00:08:29.770     READ DMA
    c8 00 40 20 c9 18 e0 08   00:08:29.770     READ DMA
    c8 00 20 00 c9 18 e0 08   00:08:29.760     READ DMA
    c8 00 20 98 cf 66 e0 08   00:08:29.750     READ DMA

    Error 209 occurred at disk power-on lifetime: 0 hours (0 days + 0 hours)
    When the command that caused the error occurred, the device was active or idle.
    After command completion occurred, registers were:
    ER ST SC SN CL CH DH
    -- -- -- -- -- -- --
    84 51 2f d1 74 e0 e0  Error: ICRC, ABRT 47 sectors at LBA = 0x00e074d1 = 14709969
    Commands leading to the command that caused the error were:
    CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
    -- -- -- -- -- -- -- --   ---------------  --------------------
    c8 00 00 00 74 e0 e0 08   00:00:30.940     READ DMA
    c8 00 20 18 36 de e0 08   00:00:30.930     READ DMA
    c8 00 08 48 f1 dd e0 08   00:00:30.930     READ DMA
    c8 00 08 a8 f0 dd e0 08   00:00:30.930     READ DMA
    c8 00 08 90 f0 dd e0 08   00:00:30.930     READ DMA

    Error 208 occurred at disk power-on lifetime: 0 hours (0 days + 0 hours)
    When the command that caused the error occurred, the device was active or idle.
    After command completion occurred, registers were:
    ER ST SC SN CL CH DH
    -- -- -- -- -- -- --
    84 51 7f 21 88 9d e0  Error: ICRC, ABRT 127 sectors at LBA = 0x009d8821 = 10324001
    Commands leading to the command that caused the error were:
    CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
    -- -- -- -- -- -- -- --   ---------------  --------------------
    c8 00 a0 00 88 9d e0 08   00:00:27.610     READ DMA
    c8 00 58 a8 e7 9c e0 08   00:00:27.610     READ DMA
    c8 00 00 28 e6 9c e0 08   00:00:27.610     READ DMA
    c8 00 00 e0 e4 9c e0 08   00:00:27.610     READ DMA
    c8 00 00 90 e0 9c e0 08   00:00:27.600     READ DMA

    Error 207 occurred at disk power-on lifetime: 0 hours (0 days + 0 hours)
    When the command that caused the error occurred, the device was active or idle.
    After command completion occurred, registers were:
    ER ST SC SN CL CH DH
    -- -- -- -- -- -- --
    04 51 26 6a 6a c3 e0  Error: ABRT at LBA = 0x00c36a6a = 12806762
    Commands leading to the command that caused the error were:
    CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
    -- -- -- -- -- -- -- --   ---------------  --------------------
    ca 00 00 90 69 c3 e0 08   00:29:39.350     WRITE DMA
    ca 00 40 90 68 c3 e0 08   00:29:39.350     WRITE DMA
    ca 00 40 50 65 c3 e0 08   00:29:39.350     WRITE DMA
    ca 00 40 d0 64 c3 e0 08   00:29:39.350     WRITE DMA
    ca 00 40 90 63 c3 e0 08   00:29:39.350     WRITE DMA

    SMART Self-test log structure revision number 1
    Num  Test_Description  Status                    Remaining  LifeTime(hours)  LBA_of_first_error
    # 1  Short offline     Interrupted (host reset)  90%        638              -
    # 2  Short offline     Interrupted (host reset)  90%        638              -
    # 3  Extended offline  Interrupted (host reset)  90%        638              -
    # 4  Short offline     Interrupted (host reset)  90%        638              -
    # 5  Extended offline  Interrupted (host reset)  90%        638              -

    SMART Selective self-test log data structure revision number 1
    SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
       1        0        0  Not_testing
       2        0        0  Not_testing
       3        0        0  Not_testing
       4        0        0  Not_testing
       5        0        0  Not_testing
    Selective self-test flags (0x0):
    After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
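    Two details in the output above are worth connecting. The kernel errors are ICRC (interface CRC) errors, and the drive's UDMA_CRC_Error_Count raw value is 1494: both count corruption on the SATA link itself, which typically implicates the cable, connector, or controller rather than the disk surface (the surface-related attributes above are all clean). Note also that every logged self-test was interrupted by a host reset before it could finish. A commonly suggested next step for this signature (not part of the original post, and assuming the drive is /dev/sdb as the mount messages suggest) is to re-seat or replace the SATA cable and then collect a clean self-test result:

    # Run a short self-test (about 2 minutes, per the polling time above)
    sudo smartctl -t short /dev/sdb

    # When it finishes, read the self-test log
    sudo smartctl -l selftest /dev/sdb

    # After swapping the cable, watch whether the CRC counter keeps climbing
    sudo smartctl -A /dev/sdb | grep UDMA_CRC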

    Read the article

  • Mass emailing bouncebacks - SendBlaster

    - by Matt
    I am currently using a mass emailer called SendBlaster; if anyone has experience using this program for mass emails, any help would be fantastic. The program has a feature that allows you to track reads and opens of the emails sent, but the problem I have is with delivery failures/bouncebacks. The "manage bouncebacks" feature is very confusing, and appears to be incapable of showing which email addresses have bounced. For some reason the sender address does not receive delivery failures, as it does with other mass email programs that I've used. If anyone knows a way to efficiently manage delivery failures/bouncebacks using this program, please help! Thanks

    Read the article

  • Switching from Visual Studio to Eclipse [closed]

    - by Jouke van der Maas
    I've been using Visual Studio for about 6 years now, which is enough time to know most useful keyboard shortcuts and little features. I recently had to switch to Eclipse and Java for school, and now I'm constantly searching for the right keys to press. I have searched around for a definitive guide on this, but I couldn't find any. Here's what I want to know:

    1. For any feature in Visual Studio, what is the equivalent feature in Eclipse called, and what is its default keyboard shortcut?
    2. Are there any things that work very differently in Eclipse, that one might misunderstand or do wrong at first when switching?
    3. Are there features in Visual Studio that Eclipse does not have, and is there a workaround?

    I hope we can create a guide to make life easier for future developers who have to make this switch. You can answer any of the three questions above (no need to do all three), and multiple per answer if you want. I can't mark questions as community wiki anymore, but I do think that's appropriate here.

    Read the article

  • Looking for Windows shared web hosting with PHP support

    - by Ladislav Mrnka
    I'm looking for Windows-based shared web hosting which supports multiple hosted web sites (multiple domains). Supported technologies should include: ASP.NET 4, ASP.NET MVC, IIS 7, MS SQL 2008, PHP, MySQL. It is for my hobby projects, so it should not be too expensive. I tried GoDaddy's Windows Deluxe hosting, but the experience was very bad and I want to move elsewhere. WordPress hosted on GoDaddy's Windows hosting is unloaded every few minutes, and the next request takes around 20s to complete. A subsequent request to an empty site takes around 3s to complete. Even a request for RSS, which transfers 1.2KB, takes several seconds. The delay happens in PHP processing, because static content is served within 200ms. It helped to migrate to Linux hosting (all requests are served under 1s), but Linux hosting is not what I'm looking for.

    Read the article

  • SQL SERVER – Download FREE PDFs from SQLAuthority.com

    - by Pinal Dave
    Throughout the last seven years, we have created many PDF downloads from SQLAuthority.com, and many are very much appreciated by users. I just wanted to list all the downloads we have created so far in a single place, hence this blog post, which contains all of them:

    SQL Server Interview Questions and Answers Download
    Beginning Big Data with NuoDB
    SQL Server Management Studio Keyboard Shortcuts Download
    SQL Server 2008 Certification Path Complete Download
    SQL Server Cheat Sheet Download
    SQL Server Database Coding Standards and Guidelines Complete List Download
    SQL Server Indexing Checklist

    Let me know which of the PDFs you like the most, and whether you expect us to create any more PDF articles. Leave a comment. Additionally, we have created a script bank for all the scripts which have been used on SQLAuthority.com so far. You can get access to the scripts by clicking on the following link: SQLAuthority.com Scripts Download

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • “Our Users are Doing Something Surprising”… but what?

    - by antonio romero
    I've just started a discussion on the OWB LinkedIn Group based on a blog post from Laura Klein's "Users Know" blog, entitled "Your Users are Doing Something Surprising"... As a PM I found the post thought-provoking and a good reminder to learn from our customers: ...You may have written user stories and work flows... But you know who didn't read your user stories? That's right: your users. The result? Somewhere out there, a whole lot of your users are doing something totally unexpected with your product.... Your customers want to do something with your product so badly that they're going out of their way to come up with clever ways to do it on their own. There are three excellent reasons for you to know what your customers are actually doing with your product: so you know if you are missing an opportunity to pivot your product or marketing; so you know if you are missing an important feature; and so you don't accidentally destroy a commonly used workaround or "unplanned feature". Truer words were rarely blogged. In fact, just in the last few weeks I have had several "users" (some customers, and some internal to Oracle, in fact) turn up having built unexpected but powerful things around OWB, because it has such extensibility mechanisms built into it: OMB*Plus, the old Java APIs back before 10.2, and now the code template/knowledge module framework OWB shares with ODI. Some of our external users show astounding knowledge of how to make OWB really sing. (We hope to feature case studies from several of them over the course of the year on the OWB blog.) My question to all of you: can you identify things you have done or are doing with OWB, or that you depend on in it, that you think would come as a surprise to us? This could be either some development so advanced as to leave us all gob-smacked, or just some common (to you) thing that you use it for that you find enormously valuable, but that you think is a bit off the theoretical "main line" use case of loading data warehouses. I invite the readers of this blog to come visit the OWB and ODI LinkedIn group and share their unusual applications of OWB, or the very ordinary-looking features that you don't want us to forget or would like us to extend. Your anecdotes will impress the crowd and will also help shape future data integration products from Oracle... Come on, surprise us. :)

    Read the article

  • top tweets WebLogic Partner Community – November 2012

    - by JuergenKress
    Send us your tweets @wlscommunity #WebLogicCommunity and follow us on twitter http://twitter.com/wlscommunity. Please feel free to send us your news!

    Andrejus Baranovskis: ADF BC View Accessor To Centralize Business Logic Processing http://fb.me/ZdH3reTC
    OracleBlogs: Devoxx Coming Up! http://ow.ly/2t855p
    OTNArchBeat: Webcast: #JMX with #Oracle #WebLogic Server 12c - featuring @FrankMunz Nov 13 10am PT 1pm ET http://pub.vitrue.com/ulyl
    OracleSupport_: Detailed nomenclature of #weblogic logging services http://pub.vitrue.com/LwLK
    WebLogic Community: Java Management Extensions with Oracle WebLogic Server 12c – Webcast November 13th 2012 http://wp.me/p1LMIb-oH
    Andrejus Baranovskis: Difference Between Initialized and New Mode in ADF BC http://fb.me/1d00veJLm
    Oracle Technet: Ondrej Brejla shares information on the release of NetBeans IDE 7.3 Beta 2. http://pub.vitrue.com/Q0Ji
    OracleBlogs: Oracle ADF Essentials & ADF training material now on the iPad. By Grant Ronald http://ow.ly/2t6m7y
    Markus Eisele: #NetBeans 7.3 Beta2 is Out! https://blogs.oracle.com/netbeansphp/entry/netbeans_7_3_beta2_is …
    WebLogic Community: Oracle ADF Essentials & ADF training material now on the iPad. By Grant Ronald http://wp.me/p1LMIb-oj
    Frank Munz: Also next week, Tue, 10am PST: @Oracle devcast about WLS 12c JMX ecosystem 4 DevOps. Join now: http://goo.gl/oikWX
    Oracle WebLogic: #EclipseLink #JPA deployed on #webLogic using #Eclipse #WTP - very detailed tutorial http://pub.vitrue.com/tckQ
    Middleware Magic: Middleware Magic completes 2 years of spreading its magic http://goo.gl/fb/8vdA4 #Weblogic #J2EE #news
    Adam Bien: Interview in the "Java Spotlight Episode 107" podcast: I had a nice chat during the JavaOne 2012 conference in ... http://bit.ly/VBLiij
    OracleSupport_WLS: #WebLogic 12c example code projects with a focus on #Java EE 6 http://pub.vitrue.com/Og8C
    JDeveloper & ADF: ADF Insider: Angels in the ADF Architecture http://dlvr.it/2RYBjq
    Andreas Koop: [blog post] ADF: Smart Input Date Client Converter. Environment: tested with JDeveloper / ADF 11.1.2.3 (should also... http://bit.ly/SIValJ
    Steven Davelaar: Added 16 new ADF samples from @andrejusb http://java.net/projects/smuenchadf/pages/ADFSamplesAuthorABA1 …
    JDeveloper & ADF: Transaction Level ADF BC Entity Validation http://dlvr.it/2QWN7K
    Oracle Exalogic: Do you know the secret to Exalogic's speed? It's called Exabus. More at the OTN Garage - http://youtu.be/dreH2XmplyA
    OracleSupport_WLS: New tutorial: configure and administrate #clusters http://pub.vitrue.com/Gduy
    JDeveloper & ADF: Workaround for an Xcode/iOS SDK Issue http://dlvr.it/2QTRlJ
    Masoud Kalali: #GlassFish trunk will switch to require JDK 7 to build; details at the GlassFish #JDK 7 Switch FAQ: https://wikis.oracle.com/display/GlassFish/JDK+7+Switch+FAQ …
    ADF Code Corner: ADF Oracle Magazine article "Master and Commander" about a global command pattern strategy for regions with ctx events http://bit.ly/PLvxUL
    Maciej Gruszka: @wlscommunity Cloud Application Foundation webcast about OOW announcements soon available for replay
    Adam Bien: Real World Java EE Patterns book ("Green Edition") is available for lending. For unlimited time and free: http://www.amazon.com/gp/feature.html/?ie=UTF8&camp=1789&creative=390957&docId=1000739811&linkCode=ur2&tag=wwwadambienco-20 …
    WebLogic Community: Slides for today's #WebLogicCommunity are uploaded to the workspace. Not yet a member? http://www.oracle.com/partners/goto/wls-emea … #weblogic
    Adam Bien: My (unprepared) night hacking starts at 11 AM CET: http://nighthacking.com
    WebLogic Community: We will start our ExaLogic webcast in 5 minutes http://weblogiccommunity.wordpress.com/2012/10/31/join-us-for-our-weblogic-communtiy-webcast-on-november-2nd-2012-oow-update-weblogic-exalogic/ …
    Gertjan van het Hof: WebLogic Community webcast on November 2nd 2012, 11:00 CET! OOW update WebLogic & ExaLogic « WebLogic Community http://weblogiccommunity.wordpress.com/2012/10/31/join-us-for-our-weblogic-communtiy-webcast-on-november-2nd-2012-oow-update-weblogic-exalogic/ …
    GlassFish: Java EE 7 schedule posted http://java.net/projects/javaee-spec/pages/Home … slated for final release on 4/29/2013
    OracleSupport_WLS: Updating #EclipseLink in #WebLogic http://pub.vitrue.com/j2wc
    WebLogic Community: Join us for our WebLogicCommunity webcast tomorrow, November 2nd. Get an update on all OOW announcements http://weblogiccommunity.wordpress.com/2012/10/31/join-us-for-our-weblogic-communtiy-webcast-on-november-2nd-2012-oow-update-weblogic-exalogic/ … #wlscommunity
    OTNArchBeat: Oracle ADF Mobile - Login Functionality | @AndrejusB http://pub.vitrue.com/Wqqk
    WebLogic Community: OpenWorld General Session 2012: Middleware & JavaOne http://wp.me/p1LMIb-oe
    OracleSupport_WLS: How to use RDA to generate #Weblogic thread dumps at specified intervals http://pub.vitrue.com/auuP
    OracleBlogs: Join us for our WebLogic Community webcast on November 2nd 2012! OOW update WebLogic & ExaLogic http://ow.ly/2sXAel
    OracleSupport_WLS: Monitoring #Spring in #WebLogic - #Middleware magic blog post http://pub.vitrue.com/OcSq
    ultan: Oracle Launches Mobile Applications User Experience Design Patterns https://blogs.oracle.com/userassistance/entry/oracle_launches_mobile_applications_user … @odtug @adf_emg @tapadoo #xcake #android
    WebLogic Community: Managing EclipseLink using JMX http://wp.me/p1LMIb-oh
    WebLogic Community: WebLogic Partner Community Newsletter October 2012 http://wp.me/p1LMIb-n5
    Simon Haslam: #ukoug Oracle Scene mag: "Getting to Know Oracle Fusion Middleware" intro by @wlscommunity & myself http://viewer.zmags.com/publication/81b2adef#/81b2adef/30 …
    Andrejus Baranovskis: LOV Validation and Programmatic Row Insert Performance http://fb.me/167ehvEBL
    Andrejus Baranovskis: ADF Project Development Time Distribution http://fb.me/zMijgiKF
    Edwin Biemond: Using JSON-REST in ADF Mobile: In the current version of ADF Mobile the ADF DataControls (URL and WS) only sup... http://bit.ly/Rdr9IX
    WebLogic Community: Oracle Enterprise Manager Cloud Control 12c: Best Practices for Middleware Management http://wp.me/p1LMIb-mA
    WebLogic Community: Tuxedo 12c http://wp.me/p1LMIb-my
    Lucas Jellema: Online and free: ADF Advanced eCourses from Oracle - http://download.oracle.com/tutorials/jtcd3/ecourse_adf_part1/html/temp_frameset/index.htm … and http://download.oracle.com/tutorials/jtcd3/ecourse_adf_part2/html/temp_frameset/index.htm …
    Lucas Jellema: Finally Luc can tell all his stories on ADF Mobile - he is Mr ADF Mobile after all. On the AMIS Blog: http://technology.amis.nl/2012/10/22/adf-mobile-is-now-generally-available/ … with more coming!
    Gerkmann-Bartels: [blog] ADF Mobile Samples are still there... http://maybe-interesting.blogspot.de/
    Markus Eisele: Do you know the #Oracle #Parcel #Service? A #weblogic #JavaEE6 example app on #github! http://bit.ly/XNVnqS by @jeffreyawest! Contribute!
    WebLogic Community: Distributed the WebLogic Community newsletter October edition - read it! Or register for #wlscommunity http://www.oracle.com/partners/goto/wls-emea … #opn #oracle
    OracleBlogs: Getting Started with ADF Mobile Sample Apps http://ow.ly/2sOJOi
    Pieter Kranenburg: Oracle Forms Modernization? Check out http://forms.qafe.com for retainment of investment, knowledge and being future proof #OracleForms
    Markus Eisele: [blog] Review: "Java EE 6 Cookbook for Securing, Tuning, and Extending Enterprise..." http://dlvr.it/2MWGCq #packtpub #javaee #review
    Gertjan van het Hof: ADF Mobile HTML5 is available. https://blogs.oracle.com/fusionmiddleware/ …
    Adam Bien: My (Adam Bien) JavaOne Session Videos and Resources: CON3896 - Interactive Onstage Java EE Overengineering, Mond... http://bit.ly/XNpSNm
    Torsten Winterberg: #ADF Mobile is GA now on OTN: http://www.oracle.com/technetwork/developer-tools/adf/overview/adf-mobile-096323.html … Finally!
    Oracle WebLogic: New Blog Post: Instructions on how to configure a WebLogic Cluster and use it with Oracle Http Server http://ow.ly/2sOdPJ
    luc bors: #Oracle #ADF Mobile is production. Download the extension here http://bit.ly/TChziZ
    WebLogic Community: Move Data into the Grid for Scalable, Predictable Response Times http://wp.me/p1LMIb-mw
    Andrejus Baranovskis: Why Oracle ADF Developers are Sensitive People http://fb.me/209osORtC
    Lucas Jellema: Article by Edwin Biemond on the AMIS blog on Configuring FMW Servers using Puppet - http://technology.amis.nl/2012/10/13/configure-fmw-servers-with-puppet/ … - integration of WebLogic in Puppet
    Oracle UsableApps: Must Read: New Oracle Applications UX White Paper: Research and Design Process: http://www.oracle.com/webfolder/ux/applications/Fusion/whitePapers.html … @oracle #usableapps
    Sten Vesterli: You know ADF Security is missing from the free ADF Essentials? Check out a solution by @andrejusb: http://andrejusb.blogspot.com/2012/10/adf-essentials-security-implementation.html …
    Oracle WebLogic: Monitoring #Spring in #WebLogic - #Middleware magic blog post http://pub.vitrue.com/uT69
    WebLogic Community: Java Cloud Service for developers http://wp.me/p1LMIb-mu
    Gerkmann-Bartels: #MUST read 4 #WLS Admins: How to Analyze Java Thread Dumps http://zite.to/RKyx2x
    WebLogic Community: top tweets WebLogic Partner Community – October 2012 http://wp.me/p1LMIb-ob
    Andrejus Baranovskis: ADF Mobile - Login Functionality http://fb.me/2gxwZV9jc
    WebLogic Community: "@MaciejGruszka: Another #WebLogic bootcamp for #Oracle partners. Right now - Copenhagen, Denmark" THANKS - trainings at https://blogs.oracle.com/emeapartnerweblogic/ …
    OracleBlogs: top tweets WebLogic Partner Community October 2012 http://ow.ly/2sXuAn
    eclipsecon: Today is the Call for Papers early bird deadline. Submit a session now! http://eclipsecon.org/2013/early-talk-selection …
    WebLogic Community: Join us for our WebLogic Community webcast on November 2nd 2012! OOW update WebLogic & ExaLogic http://wp.me/p1LMIb-oA

    WebLogic Partner Community: For regular information become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki
    Technorati Tags: twitter, WebLogic, WebLogic Community, Oracle, OPN, Jürgen Kress

    Read the article

< Previous Page | 86 87 88 89 90 91 92 93 94 95 96 97  | Next Page >