Search Results


  • 5 Steps to getting started with IronRuby

    - by Eric Nelson
    IronRuby is an Open Source implementation of the Ruby programming language for .NET, built on Microsoft's Dynamic Language Runtime. The project's #1 goal is to be a true Ruby implementation, meaning it runs existing Ruby code. Check out this summary of using the Ruby standard library and 3rd party libraries in IronRuby. IronRuby has tight integration with .NET, so any .NET types can be used from IronRuby and the IronRuby runtime can be embedded into any .NET application. These 5 steps should get you nicely up and running on IronRuby – OR … you could just watch a video session from the lead developer which took place earlier this month (March 2010 - 60mins). But the 5 steps will be quicker :-)

    Step 1 – Install IronRuby :-) You can install IronRuby automatically using an MSI or manually. For simplicity I would recommend the MSI install. TIP: As of the 25th of March IronRuby has not quite shipped. The download above is a Release Candidate (RC), which means it is still undergoing final testing by the team. You will need to uninstall this version (RC3) once the final release is available. The good news is that uninstalling IronRuby RC3 will work without a hitch, as the MSI does relatively little.

    Step 2 – Install an IronRuby friendly editor. You will need to install an editor to work with IronRuby, as there is no designer support for IronRuby inside Visual Studio. There are many editors to choose from, but I would recommend one of these: SciTE (download the MSI): a lightweight text editor which is simple to get up and running. SciTE understands Ruby syntax and allows you to easily run IronRuby code within the editor with a small change to the config file. SharpDevelop 3.2 (download the MSI): an open source development environment for C#, VB, Boo and now IronRuby. IronRuby support is new, but it does include integrated debugging. You might also want to check out the main site for SharpDevelop. TIP: There are commercial tools for Ruby development which offer richer support such as IntelliSense. They can be coerced into working with IronRuby. A good one to start with is RubyMine, which needs some small changes to make it work with IronRuby.

    Step 3 – Run the IronRuby Tutorial. Run through the IronRuby tutorial, which is included in the IronRuby download. It covers the basics of the Ruby language and how IronRuby integrates with .NET. In a typical install it will end up at C:\Program Files\IronRuby 0.9.4.0\Samples\Tutorial, which will give you the tutorial implemented in .NET and Ruby. TIP: You might also want to check out these two introductory posts: Using IronRuby and .NET to produce the ‘Hello World of WPF’ and What's IronRuby, and how do I put it on Rails?

    Step 4 – Get some good books to read. Get a great book on Ruby and IronRuby. There are several free ebooks on Ruby which will help you learn the language. The Little Book of Ruby is a good place to start. I would also recommend you purchase IronRuby Unleashed (Buy on Amazon UK | Buy on Amazon USA). You might also want to check out this mini-review. Other books are due out soon, including IronRuby in Action. TIP: Also check out the official documentation for using .NET from IronRuby.

    Step 5 – Keep an eye on the team blogs, including Jimmy Schementi, Jim Deville and Tomas Matousek (full list). TIP: And keep a watch out for the final release of IronRuby – due anytime soon!
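
    To give a feel for the embedding story mentioned above, here is a minimal sketch in C# using the DLR hosting API. It assumes you have referenced IronRuby.dll and Microsoft.Scripting.dll from the IronRuby install; the Ruby snippets themselves are just illustrative.

        using System;
        using IronRuby;                    // Ruby.CreateEngine() lives here (IronRuby.dll)
        using Microsoft.Scripting.Hosting; // DLR hosting types (Microsoft.Scripting.dll)

        class EmbeddingSample
        {
            static void Main()
            {
                // Spin up an in-process Ruby engine via the DLR hosting API.
                ScriptEngine engine = Ruby.CreateEngine();

                // Run Ruby code and marshal the result back as a .NET int.
                int sum = engine.Execute<int>("(1..10).to_a.inject { |total, n| total + n }");
                Console.WriteLine("Ruby says the sum is {0}", sum); // 55

                // Ruby code can call into .NET types once the assembly is loaded.
                engine.Execute("load_assembly 'mscorlib'; puts System::DateTime.now");
            }
        }

    The same ScriptEngine can also run whole .rb files via engine.ExecuteFile, which is typically how a host application would execute user-supplied scripts.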


  • Stack Exchange Notifier Chrome Extension [v1.2.9.3 released]

    - by Vladislav Tserman
    About
    Stack Exchange Notifier is a handy extension for the Google Chrome browser that displays your current reputation and badges on Stack Exchange sites and notifies you of reputation changes. You will now get notified of comments on your own posts (questions and answers) and of any comments that refer to you by @username in a comment, even if you do not own the post (aka mentions). All Stack Exchange sites are supported.

    Screenshots / Access
    Install the extension from the Google Chrome Extension Gallery.

    Platform
    Google Chrome browser extension.

    Contact
    Created by me (Vladislav Tserman). I'm available at: vladjan (at) gmail.com. Follow Stack Exchange Notifier on Twitter to get notified about news and updates: http://twitter.com/se_notifier

    Code
    Written in Java with Google Web Toolkit under Eclipse Helios. Stack Exchange Notifier uses the Stack Exchange API and is powered by Google App Engine for Java.

    Changelog
    I will be porting the extension away from the App Engine back-end due to some limitations. New versions of the extension will make direct calls to the Stack Exchange API right from your browser. Please do not expect new versions of the extension any time soon. Sorry. Read more about the limitations here http://stackapps.com/questions/1713 and here http://stackoverflow.com/questions/3949815. Currently, you may sometimes experience some issues using the extension, but most users will have no problems. You may notice too many errors in the logs, but there is nothing I can do about this now. Thanks for using my little app; thanks to all of you it still works in spite of many issues with the API.

    Version 1.2.9.3 - Thursday, October 14, 2010 - Bug fix release (back-end improvements)
    Version 1.2.9.2 - Thursday, October 07, 2010 - Bug fix release (a high rate of occasional API errors was noticed, so fixes were added to handle them where possible)
    Version 1.2.9.1 - Tuesday, October 05, 2010 - Mostly a bug fix release, back-end performance improvements
    - You will now get notified of comments on your own posts (questions and answers) that are not older than 1 year, and of any comments that refer to you by @username in a comment, even if you do not own the post (aka mentions). This is an experimental feature; let me know if you like/need it.
    - New 'All sites' view displays all websites from the Stack Exchange network (part of a new feature that is not finished yet)
    Version 1.2.9 - Saturday, September 25, 2010
    - Fixes an issue where some users got an empty Account view.
    - When hovering over @Username in the account view, the title now displays '@Username on @SiteName' to make the site name easy to identify
    Version 1.2.7 - Wednesday, September 22, 2010
    - Fixed an issue with notifications.
    - Minor improvements
    Version 1.2.5 - Tuesday, September 21, 2010
    - Fixed an issue where some characters in the response payload raised an exception when parsing to JSON.
    v1.2.3 (Sunday, September 19, 2010)
    - Support for new OpenID providers was added (Yahoo, MyOpenID, AOL)
    - UI improvements
    - Several minor defects were fixed
    v1.2.2 (Thursday, September 16, 2010)
    - New types of notifications added. The extension now notifies you of comments that are directed at you. Comments are expandable, so clicking on a comment title will expand its height to accommodate all available text.
    - UI and error handling improvements

    Future
    The application is still in beta. I hope you're not having any problems, but if you are, please let me know. Leave your feedback and bug reports in the comments. I'm available at: vladjan (at) gmail.com. I'm working on adding new features. I want to hear from the users and incorporate as much feedback as possible into the extension. Any suggestions for improvements/features to add?


  • POP Forums v9 Beta 1 for ASP.NET MVC 3 posted to CodePlex!

    - by Jeff
    As promised, I posted a beta build of my forum app for ASP.NET MVC 3. Get the new goodies here: http://popforums.codeplex.com/releases/view/58228

    This is the first beta for the ASP.NET MVC 3 version of POP Forums. It is nearly feature complete, and ready for testing and feedback. For previous release notes, look here, here and here. Check out the live preview: http://preview.popforums.com/Forums. Setup instructions are on the home page of this project.

    The new hotness in the beta, or what has been done since the last preview:
    - All views converted to use Razor
    - E-mail subscription/notification of new posts
    - New post indicators/mark read buttons
    - Permalinks to posts
    - Jump to newest post (from new post indicators)
    - Recent topics
    - Favorite topics
    - Moderator functions for topics (pin/close/delete, plus move and rename)
    - Search, ported from v8. Not a ton of optimization here, or new unit testing, but the old version worked pretty well
    - User posts (topics the user posted in)
    - Forgot password
    - Vanity items (signatures and avatars)
    - Hide vanity items per user preference
    - Some minor data caching where appropriate
    - A little bit of UI refinement
    - Lots-o-bug fixes
    - Lots-o-unit tests

    What's next? The plan between now and the next beta is as follows:
    - Continue working through features/tasks, and fix bugs as they're reported
    - Integrate the forum into a real, production site
    - Refine the UI
    - Refactor as much as possible... the code organization is not entirely logical in some places

    After the second beta, a release candidate will follow, with a real "final" release after that. Subsequent releases should come relatively frequently and without a lot of risk. The trick in building this thing has been that it mostly tossed the previous WebForms version, which was all full of crusties. The timetable for this is a little harder to pin down, as day jobs and families will have their effect.

    Other notes: Refactoring will be a priority. As the features of MVC have evolved, so have my desires to use it in a fashion that makes things clear and easy to follow. I don't even know if anyone will ever start mucking around in the code, but on the off chance they do, I'd like what they find to not suck. Other nice-to-haves are builds to target Windows Azure and SQL CE. A nice setup UI would be super too. I think the ASP.NET MVC world has gone long enough without a decent forum. The biggest challenge that I've found is making the forum something that can be dropped in any app. While it does rope its views into an area, areas are mostly just routing details. I haven't thought of a clever way yet to limit dependency injection, for example, to just the forum bits. I mean, everyone should be using Ninject, but how realistic is that? ;)

    How much time and effort should you spend on POP Forums in its current state? Change is inevitable, but at this point I'm reasonably committed to not changing the database schema. I really think it will stay as-is. All bets are off for the various interfaces throughout the app, but the data should generally resist change. It's not even that different from v8, which was one of the original goals, because I didn't want to rewrite SQL or introduce a new ORM or whatever. My point is that if you wanted to build a site around this today, even though it's not entirely functional, I think it's low risk in terms of data loss. I can't vouch for whether or not you know what you're doing.

    I've been having some chats with people lately about quoting posts, and honestly there has to be something better and straightforward. That continues to be a holy grail of mine, and some day, I hope to find it. Enjoy... it's starting to feel more real every day!


  • SQL SERVER – List of All the Samples Database Available to Download for FREE

    - by Pinal Dave
    It is very common to have a sample database for any database product. Different companies keep improving their products and coming up with innovations, and to demonstrate the capability of their new enhancements they need sample databases. Microsoft has various sample databases available for free download for their SQL Server product. I have collected them here in a single blog post.

    Download an AdventureWorks Database
    The AdventureWorks OLTP database supports standard online transaction processing scenarios for a fictitious bicycle manufacturer (Adventure Works Cycles). Scenarios include Manufacturing, Sales, Purchasing, Product Management, Contact Management, and Human Resources.

    Coconut Dal
    Coconut Dal is a lightweight data access layer, for use in projects where the Entity Framework cannot be used or Microsoft's Enterprise Library Data Block is unsuitable. Anyone who is handwriting ADO.NET should use a library instead, and Coconut Dal might be the answer.

    DataBooster – Extension to ADO.NET Data Provider
    The dbParallel DataBooster library is a high-performance extension to the ADO.NET Data Provider and includes two aspects: 1) a slimmed-down API encapsulation which simplifies the most common data access operations (DbConnection -> DbCommand -> DbParameter -> DbDataReader) into a single class, DbAccess, to help applications with a clean DAL and avoid over-packing and redundant copies of transferred data; 2) a booster for writing mass data to the database, based on a rational utilization of database concurrency and an effective utilization of network bandwidth.

    Tabular AMO 2012
    The sample is made of two project parts. The first part is a library of functions to manage tabular models -AMO2Tabular V2-. The second part is a sample to build a tabular model -AdventureWorks Tabular AMO 2012- using the AMO2Tabular library; the created model is similar to the ‘AdventureWorks Tabular Model 2012’.

    SQL Server Analysis Services Product Samples
    SQL Server Analysis Services provides a unified and integrated view of all your business data as the foundation for all of your traditional reporting, online analytical processing (OLAP) analysis, Key Performance Indicator (KPI) scorecards, and data mining.

    Analysis Services Samples for SQL Server 2008 R2
    This release is dedicated to the samples that ship for Microsoft SQL Server 2008 R2. For many of these samples you will also need to download the AdventureWorks family of databases.

    SQL Server Reporting Services Product Samples
    This project contains Reporting Services samples released with the Microsoft SQL Server product. These samples are in the following five categories: Application Samples, Extension Samples, Model Samples, Report Samples, and Script Samples. If you are interested in contributing Reporting Services samples, please let us know by posting in the developers' forum.

    Reporting Services Samples for SQL Server 2008 R2
    This release is dedicated to the samples that ship for Microsoft SQL Server 2008 R2 PCU1. For many of these samples you will also need to download the AdventureWorks family of databases.

    SQL Server Integration Services Product Samples
    This project contains Integration Services samples released with the Microsoft SQL Server product. These samples are in the following two categories: Package Samples and Programming Samples. If you are interested in contributing Integration Services samples, please let us know by posting in the developers' forum.

    Integration Services Samples for SQL Server 2008 R2
    This release is dedicated to the samples that ship for Microsoft SQL Server 2008 R2. For many of these samples you will also need to download the AdventureWorks family of databases.

    Windows Azure SQL Reporting Admin Sample
    The SQLReportingAdmin sample for Windows Azure SQL Reporting demonstrates the usage of SQL Reporting APIs, and manages (add/update/delete) permissions of SQL Reporting users.

    Windows Azure SQL Reporting ReportViewer-SOAP API usage sample
    These sample projects demonstrate how to embed a Microsoft ReportViewer control that points to reports hosted on SQL Reporting report servers and how to use SQL Reporting SOAP APIs in your Windows Azure Web application.

    Enterprise Library 5.0 – Integration Pack for Windows Azure
    This NuGet package contains a zip file with the source code for the Enterprise Library Integration Pack for Windows Azure.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Sample Database
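
    As a side note on the Coconut Dal / DataBooster entries above, here is a minimal, self-contained sketch of the raw ADO.NET DbConnection -> DbCommand -> DbParameter -> DbDataReader chain those libraries wrap. The connection string is a placeholder, and the query assumes the AdventureWorks sample database mentioned above.

        using System;
        using System.Data.SqlClient; // classic ADO.NET provider for SQL Server

        class RawAdoNetSketch
        {
            static void Main()
            {
                // Placeholder connection string -- point it at your own server.
                const string connectionString =
                    "Data Source=.;Initial Catalog=AdventureWorks;Integrated Security=True";

                using (var connection = new SqlConnection(connectionString))   // DbConnection
                using (var command = new SqlCommand(                           // DbCommand
                    "SELECT TOP 5 Name FROM Production.Product WHERE Color = @color",
                    connection))
                {
                    command.Parameters.AddWithValue("@color", "Red");          // DbParameter

                    connection.Open();
                    using (SqlDataReader reader = command.ExecuteReader())     // DbDataReader
                    {
                        while (reader.Read())
                        {
                            Console.WriteLine(reader.GetString(0));
                        }
                    }
                }
            }
        }

    Every one of those using blocks and parameter declarations is boilerplate that a thin DAL can collapse into a single call, which is exactly the pitch both libraries make.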


  • MySQL Connect 8 Days Away - Replication Sessions

    - by Mat Keep
    Following on from my post about MySQL Cluster sessions at the forthcoming Connect conference, it's now the turn of MySQL Replication - another technology at the heart of scaling and high availability for MySQL. Unless you've only just returned from a 6-month alien abduction, you will know that MySQL 5.6 includes the largest set of replication enhancements ever packaged into a single new release:
    - Global Transaction IDs + HA utilities for a self-healing cluster (yes, both automatic failover and manual switchover available!)
    - Crash-safe slaves and binlog
    - Binlog Group Commit and Multi-Threaded Slaves for high performance
    - Replication Event Checksums and Time-Delayed replication
    - and many more

    There are a number of sessions dedicated to learning more about these important new enhancements, delivered by the same engineers who developed them. Here is a summary:

    Saturday 29th, 13.00 - Replication Tips and Tricks, Mats Kindahl
    In this session, the developers of MySQL Replication present a bag of useful tips and tricks related to the MySQL 5.5 GA and MySQL 5.6 development milestone releases, including multisource replication, using logs for auditing, handling filtering, examining the binary log, using relay slaves, splitting the replication stream, and handling failover.

    Saturday 29th, 17.30 - Enabling the New Generation of Web and Cloud Services with MySQL 5.6 Replication, Lars Thalmann
    This session showcases the new replication features, including:
    • High performance (group commit, multithreaded slave)
    • High availability (crash-safe slaves, failover utilities)
    • Flexibility and usability (global transaction identifiers, annotated row-based replication [RBR])
    • Data integrity (event checksums)

    Saturday 29th, 19.00 - MySQL Replication Birds of a Feather
    In this session, the MySQL Replication engineers discuss all the goodies, including global transaction identifiers (GTIDs) with autofailover; multithreaded, crash-safe slaves; checksums; and more. The team discusses the design behind these enhancements and how to get started with them. You will get the opportunity to present your feedback on how these can be further enhanced and can share any additional replication requirements you have to further scale your critical MySQL-based workloads.

    Sunday 30th, 10.15 - Hands-On Lab, MySQL Replication, Luis Soares and Sven Sandberg
    But how do you get started, how does it work, and what are the best practices and tools? During this hands-on lab, you will learn how to get started with replication, how it works, architecture, replication prerequisites, setting up a simple topology, and advanced replication configurations. The session also covers some of the new features in the MySQL 5.6 development milestone releases.

    Sunday 30th, 13.15 - Hands-On Lab, MySQL Utilities, Chuck Bell
    Would you like to learn how to more effectively manage a host of MySQL servers and manage high-availability features such as replication? This hands-on lab addresses these areas and more. Participants will get familiar with all of the MySQL utilities, using each of them with a variety of options to configure and manage MySQL servers.

    Sunday 30th, 14.45 - Eliminating Downtime with MySQL Replication, Luis Soares
    The presentation takes a deep dive into new replication features such as global transaction identifiers and crash-safe slaves. It also showcases a range of Python utilities that, combined with the Release 5.6 feature set, result in a self-healing data infrastructure. By the end of the session, attendees will be familiar with the new high-availability features in the whole MySQL 5.6 release and how to make use of them to protect and grow their business.

    Sunday 30th, 17.45 - Scaling for the Web and the Cloud with MySQL Replication, Luis Soares
    In a replication topology, high performance directly translates into improving read consistency from slaves and reducing the risk of data loss if a master fails. MySQL 5.6 introduces several new replication features to enhance performance. In this session, you will learn about these new features, how they work, and how you can leverage them in your applications. In addition, you will learn about some other best practices that can be used to improve performance.

    So how can you make sure you don't miss out - the good news is that registration is still open ;-) And just to whet your appetite, listen to the On-Demand webinar that presents an overview of MySQL 5.6 Replication.


  • Adding Fake Build Information in TFS 2010

    - by Jakob Ehn
    We have been using TFS 2010 Build for distributing a build in parallel on several agents, but where the actual compilation is done by a bunch of external tools and compilers, i.e. no MSBuild involved. We are using the ParallelTemplate.xaml template that Jim Lamb blogged about previously, which distributes each configuration to a different agent. We developed custom activities for running these external compilers and collecting the information and errors by reading standard out/error and pushing it back to the build log. But since we aren't using MSBuild we don't get the nice configuration summary section on the build summary page that we are used to. We would like to show the result of each configuration with any errors/warnings as usual, together with a link to the log file.

    TFS 2010 API to the rescue! What we need to do is add information to the InformationNode structure that is associated with every TFS build. The log that you normally see in the Log view is built up as a tree structure of IBuildInformationNode objects. This structure can be accessed by using the InformationNodeConverters class. This class also contains some helper methods for creating a BuildProjectNode, which contains the information about each project that was built, for example which configuration, the number of errors and warnings, and a link to the log file.

    Here is a code snippet that first creates a "fake" build from scratch and then adds two BuildProjectNodes, one for Debug|x86 and one for Release|x86, with some associated build information:

        TfsTeamProjectCollection collection =
            TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri("http://lt-jakob2010:8080/tfs"));
        IBuildServer buildServer = collection.GetService<IBuildServer>();
        var buildDef = buildServer.GetBuildDefinition("TeamProject", "BuildDefinition");

        // Create a fake build with a random build number
        var detail = buildDef.CreateManualBuild(new Random().Next().ToString());

        // Create the Debug|x86 project summary, including an error, a warning and a log link
        IBuildProjectNode buildProjectNode = detail.Information.AddBuildProjectNode(
            DateTime.Now, "Debug", "MySolution.sln", "x86",
            "$/project/MySolution.sln", DateTime.Now, "Default");
        buildProjectNode.CompilationErrors = 1;
        buildProjectNode.CompilationWarnings = 1;
        buildProjectNode.Node.Children.AddBuildError("Compilation", "File1.cs", 12, 5, "", "Syntax error", DateTime.Now);
        buildProjectNode.Node.Children.AddBuildWarning("File2.cs", 3, 1, "", "Some warning", DateTime.Now, "Compilation");
        buildProjectNode.Node.Children.AddExternalLink("Log File", new Uri(@"\\server\share\logfiledebug.txt"));
        buildProjectNode.Save();

        // Create the Release|x86 project summary with a clean compile and a log link
        buildProjectNode = detail.Information.AddBuildProjectNode(
            DateTime.Now, "Release", "MySolution.sln", "x86",
            "$/project/MySolution.sln", DateTime.Now, "Default");
        buildProjectNode.CompilationErrors = 0;
        buildProjectNode.CompilationWarnings = 0;
        buildProjectNode.Node.Children.AddExternalLink("Log File", new Uri(@"\\server\share\logfilerelease.txt"));
        buildProjectNode.Save();

        detail.Information.Save();
        detail.FinalizeStatus(BuildStatus.Failed);

    When you run this code, the resulting build shows two configurations with error and warning information and a link to a log file, just like a regular MSBuild run would have produced. This is very useful when using TFS 2010 Build in heterogeneous environments. It would also be possible to do this when running compilations completely outside TFS Build and then push the results into TFS for easy access. You can push all information, including the compilation summary, drop location, test results etc. using the API.


  • Unable to apt-get upgrade in ubuntu 11.10

    - by blackhole
    These are the errors shown by different clients.

    Update Manager:

        Traceback (most recent call last):
          File "/usr/lib/python2.7/dist-packages/aptdaemon/worker.py", line 968, in simulate
            trans.unauthenticated = self._simulate_helper(trans)
          File "/usr/lib/python2.7/dist-packages/aptdaemon/worker.py", line 1092, in _simulate_helper
            return depends, self._cache.required_download, \
          File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 235, in required_download
            pm.get_archives(fetcher, self._list, self._records)
        SystemError: E:Method has died unexpectedly!, E:Sub-process returned an error code (100), E:Method /usr/lib/apt/methods/ did not start correctly

    Synaptic Package Manager:

        E: Method has died unexpectedly!
        E: Sub-process returned an error code (100)
        E: Method /usr/lib/apt/methods/ did not start correctly
        E: Unable to lock the download directory

    Command: sudo apt-get upgrade

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages will be upgraded:
          libfreetype6 libfreetype6-dev
        2 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        Failed to exec method /usr/lib/apt/methods/
        E: Method has died unexpectedly!
        E: Sub-process returned an error code (100)
        E: Method /usr/lib/apt/methods/ did not start correctly

    Can anyone tell me how to resolve these issues? I have no volatile packages or anything, so I am even posting a preview of my sources.list file:

        # deb cdrom:[Ubuntu 10.10 _Maverick Meerkat_ - Release i386 (20101007)]/ maverick main restricted
        # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
        # newer versions of the distribution.
        deb http://in.archive.ubuntu.com/ubuntu/ oneiric main restricted
        ## Major bug fix updates produced after the final release of the
        ## distribution.
        deb http://in.archive.ubuntu.com/ubuntu/ oneiric-updates main restricted
        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team. Also, please note that software in universe WILL NOT receive any
        ## review or updates from the Ubuntu security team.
        deb http://in.archive.ubuntu.com/ubuntu/ oneiric universe
        deb http://in.archive.ubuntu.com/ubuntu/ oneiric-updates universe
        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team, and may not be under a free licence. Please satisfy yourself as to
        ## your rights to use the software. Also, please note that software in
        ## multiverse WILL NOT receive any review or updates from the Ubuntu
        ## security team.
        deb http://in.archive.ubuntu.com/ubuntu/ oneiric multiverse
        deb http://in.archive.ubuntu.com/ubuntu/ oneiric-updates multiverse
        ## Uncomment the following two lines to add software from the 'backports'
        ## repository.
        ## N.B. software from this repository may not have been tested as
        ## extensively as that contained in the main release, although it includes
        ## newer versions of some applications which may provide useful features.
        ## Also, please note that software in backports WILL NOT receive any review
        ## or updates from the Ubuntu security team.
        # deb http://in.archive.ubuntu.com/ubuntu/ maverick-backports main restricted universe multiverse
        # deb-src http://in.archive.ubuntu.com/ubuntu/ maverick-backports main restricted universe multiverse
        ## Uncomment the following two lines to add software from Canonical's
        ## 'partner' repository.
        ## This software is not part of Ubuntu, but is offered by Canonical and the
        ## respective vendors as a service to Ubuntu users.
        deb http://archive.canonical.com/ubuntu oneiric partner
        deb-src http://archive.canonical.com/ubuntu oneiric partner
        ## This software is not part of Ubuntu, but is offered by third-party
        ## developers who want to ship their latest software.
        deb http://extras.ubuntu.com/ubuntu oneiric main
        deb-src http://extras.ubuntu.com/ubuntu oneiric main
        deb http://in.archive.ubuntu.com/ubuntu/ oneiric-security main restricted
        deb http://in.archive.ubuntu.com/ubuntu/ oneiric-security universe
        deb http://in.archive.ubuntu.com/ubuntu/ oneiric-security multiverse
        # deb http://archive.canonical.com/ lucid partner


  • Oracle Big Data Software Downloads

    - by Mike.Hallett(at)Oracle-BI&EPM
    Companies have been making business decisions for decades based on transactional data stored in relational databases. Beyond that critical data is a potential treasure trove of less structured data: weblogs, social media, email, sensors, and photographs that can be mined for useful information. Oracle offers a broad integrated portfolio of products to help you acquire and organize these diverse data sources and analyze them alongside your existing data to find new insights and capitalize on hidden relationships.

    Oracle Big Data Connectors
    Downloads here; includes:
    - Oracle SQL Connector for Hadoop Distributed File System Release 2.1.0
    - Oracle Loader for Hadoop Release 2.1.0
    - Oracle Data Integrator Companion 11g
    - Oracle R Connector for Hadoop v 2.1

    Oracle Big Data Documentation
    The Oracle Big Data solution offers an integrated portfolio of products to help you organize and analyze your diverse data sources alongside your existing data to find new insights and capitalize on hidden relationships.
    - Oracle Big Data, Release 2.2.0 - E41604_01 zip (27.4 MB)
    - Integrated Software and Big Data Connectors User's Guide HTML PDF

    Oracle Data Integrator (ODI) Application Adapter for Hadoop
    Apache Hadoop is designed to handle and process data that typically comes from non-relational data sources, and data volumes beyond what is handled by relational databases. Typical processing in Hadoop includes data validation and transformations that are programmed as MapReduce jobs. Designing and implementing a MapReduce job usually requires expert programming knowledge. However, when you use Oracle Data Integrator with the Application Adapter for Hadoop, you do not need to write MapReduce jobs. Oracle Data Integrator uses Hive and the Hive Query Language (HiveQL), a SQL-like language for implementing MapReduce jobs. Employing familiar and easy-to-use tools and pre-configured knowledge modules (KMs), the application adapter provides the following capabilities:
    - Loading data into Hadoop from the local file system and HDFS
    - Performing validation and transformation of data within Hadoop
    - Loading processed data from Hadoop to an Oracle database for further processing and generating reports

    Oracle Database Loader for Hadoop
    Oracle Loader for Hadoop is an efficient and high-performance loader for fast movement of data from a Hadoop cluster into a table in an Oracle database. It pre-partitions the data if necessary and transforms it into a database-ready format. Oracle Loader for Hadoop is a Java MapReduce application that balances the data across reducers to help maximize performance.

    Oracle R Connector for Hadoop
    Oracle R Connector for Hadoop is a collection of R packages that provide:
    - Interfaces to work with Hive tables, the Apache Hadoop compute infrastructure, the local R environment, and Oracle database tables
    - Predictive analytic techniques, written in R or Java as Hadoop MapReduce jobs, that can be applied to data in HDFS files
    You install and load this package as you would any other R package. Using simple R functions, you can perform tasks such as:
    - Access and transform HDFS data using a Hive-enabled transparency layer
    - Use the R language for writing mappers and reducers
    - Copy data between R memory, the local file system, HDFS, Hive, and Oracle databases
    - Schedule R programs to execute as Hadoop MapReduce jobs and return the results to any of those locations

    Oracle SQL Connector for Hadoop Distributed File System
    Using Oracle SQL Connector for HDFS, you can use an Oracle Database to access and analyze data residing in Hadoop in these formats:
    - Data Pump files in HDFS
    - Delimited text files in HDFS
    - Hive tables
    For other file formats, such as JSON files, you can stage the input in Hive tables before using Oracle SQL Connector for HDFS. Oracle SQL Connector for HDFS uses external tables to provide Oracle Database with read access to Hive tables, and to delimited text files and Data Pump files in HDFS.

    Related Documentation
    - Cloudera's Distribution Including Apache Hadoop Library HTML
    - Oracle R Enterprise HTML
    - Oracle NoSQL Database HTML

    Recent Blog Posts
    - Big Data Appliance vs. DIY Price Comparison
    - Big Data: Architecture Overview
    - Big Data: Achieve the Impossible in Real-Time
    - Big Data: Vertical Behavioral Analytics
    - Big Data: In-Memory MapReduce
    - Flume and Hive for Log Analytics
    - Building Workflows in Oozie


  • MySQL for Excel 1.3.0 Beta has been released

    - by Javier Treviño
    The MySQL Windows Experience Team is proud to announce the release of MySQL for Excel version 1.3.0. This is a beta release for 1.3.x.

    MySQL for Excel is an application plug-in enabling data analysts to very easily access and manipulate MySQL data within Microsoft Excel. It enables you to work directly with a MySQL database from within Microsoft Excel so you can easily do tasks such as:
    - Importing MySQL data into Excel
    - Exporting Excel data directly into MySQL, to a new or existing table
    - Editing MySQL data directly within Excel

    As this is a beta version, the MySQL for Excel product can be downloaded only by using the standalone product installer at this link: http://dev.mysql.com/downloads/windows/excel/

    Your feedback on this beta version is very much appreciated; you can raise bugs on the MySQL bugs page or give us your comments on the MySQL for Excel forum.

    Changes in MySQL for Excel 1.3.0 (2014-06-06, Beta)
    This section documents all changes and bug fixes applied to MySQL for Excel since the release of 1.2.1. Several new features were added; for more information see What Is New In MySQL for Excel (http://dev.mysql.com/doc/refman/5.6/en/mysql-for-excel-what-is-new.html).

    Known limitations:
    - Upgrading from versions MySQL for Excel 1.2.0 and lower is not possible due to a bug fixed in MySQL for Excel 1.2.1. In that scenario, the old version (MySQL for Excel 1.2.0 or lower) must be uninstalled first. Upgrading from version 1.2.1 works correctly.
    - <CTRL> + <A> cannot be used to select all database objects. Either <SHIFT> + <Arrow Key> or <CTRL> + click must be used instead.
    - PivotTables are normally placed to the right (skipping one column) of the imported data; they will not be created if there is another existing Excel object at that position.

    Functionality Added or Changed:
    - Imported data can now be refreshed by using the native Refresh feature. Fields in the imported data sheet are then updated against the live MySQL database using the saved connection ID.
    - Functionality was added to import data directly into PivotTables, which can be created from any Import operation.
    - Multiple objects (tables and views) can now be imported into Excel, where before only one object could be selected. Relational information is also utilized when importing multiple objects.
    - All options now have descriptive tooltips. Hovering over an option/preference displays helpful information about its use.
    - A new Export Data, Advanced Options option was added that shows all available data types in the Data Type combo box, instead of only showing a subset of the most popular data types.
    - The option dialogs now include a Reset to Defaults button that resets the dialog's options to their default values. Each option dialog is set individually.
    - A new Add Summary Fields for Numeric Columns option was added to the Import Data dialog that automatically adds summary fields for numeric data after the last row of the imported data. The specific summary function is selectable from many options, such as "Total" and "Average."
    - A new collation option was added for the schema and table creation wizards. The default schema collation is "Server Default", and the default table collation is "Schema Default". Collation options may be selected from a drop-down list of all available collations.

    Quick links:
    - MySQL for Excel documentation: http://dev.mysql.com/doc/en/mysql-for-excel.html
    - MySQL on Windows blog: http://blogs.oracle.com/MySQLOnWindows
    - MySQL for Excel forum: http://forums.mysql.com/list.php?172
    - MySQL YouTube channel: http://www.youtube.com/user/MySQLChannel

    Enjoy and thanks for the support!


  • Silverlight Cream for January 04, 2011 -- #1022

    - by Dave Campbell
    In this Issue: Dennis Doomen, Doug Holland, Kunal Chowdhury, Sacha Barber, Paul Sheriff, Mike Snow(-2-), Peter Kuhn(-2-), and Mike Ormond.

    Above the Fold:
    Silverlight: "Silverlight: Fixing the BookShelf Sample" Peter Kuhn
    WP7: "Searching the Windows Phone 7 Marketplace Programmatically" Doug Holland
    Prism/Cinch: "PRISM 4 Custom Transitioning Region" Sacha Barber

    Shoutouts:
    Sacha Barber, the author of Cinch, asks for some advice from users: Cinch V2 : Question For The Reader
    Michael Crump introduces us to SnippetManager as a way to organize your Silverlight snippets... I'm thinking any snippet: A better way to organize your Silverlight Code Snippets.
    Andy Beaulieu announced an update of Physics Helper 4.2 using Farseer 3.2 ... check out the breaking changes though!
    Dennis Doomen blogged about a new release of his Fluent Assertions: A new year with a new release of Fluent Assertions, with a blog post about it below

    From SilverlightCream.com:

    Verifying PropertyChanged events in Silverlight using Fluent Assertions
    Dennis Doomen released his latest Fluent Assertions for .NET and Silverlight and wrote up a big post about the new event monitoring syntax.

    Searching the Windows Phone 7 Marketplace Programmatically
    Doug Holland has a post up on MSDN blogs talking about searching the WP7 Marketplace programmatically... ya know you should be able to do it... here's how.

    Beginners Guide to Visual Studio LightSwitch (Part - 5)
    Kunal Chowdhury has Part 5 of a tutorial series on LightSwitch up at SilverlightShow... working with custom validation this time, and for the first time in this series so far actually writes some code!

    PRISM 4 Custom Transitioning Region
    Sacha Barber took time to look at Prism4/MEF and Cinch2, found things to be fine, then wrote a custom PRISM region adaptor that uses a TransitionalElement from the Microsoft Transitionals project... code available, blog post to come.

    Get Application Title from Windows Phone
    Paul Sheriff has a cool chunk of code up... getting the application's title programmatically... and other attributes as well, if you were wondering why you might wanna do that.

    Detecting Users Win7 Mobile Theme Color
    Mike Snow has a couple as well... first up is how to detect your user's theme... obviously useful if you wanna match it.

    Selecting an Item in a ComboBox after Adding Items
    Second for Mike Snow is a general Silverlight issue... setting the selected item on a ComboBox after filling it... if you haven't stumbled across this yet, you will...

    A Simplified Grid Markup Reloaded
    Peter Kuhn has a pair of posts up since last time... this first is an extension of Colin Eberhardt's simplified Grid markup system, but it's only useful if you don't plan on using Blend... can we get a show of hands? :)

    Silverlight: Fixing the BookShelf Sample
    Next, Peter Kuhn has some changes to the BookShelf code, but more importantly has some excellent tips about shader effects, effects on visual elements, and how to make best use of all the above.

    Displaying HTML Content in Windows Phone 7
    Mike Ormond has a WP7 post up describing problems a customer had early on displaying rich text, an attempt to use the WebBrowser control to pull it off, and the problems that caused... check out the resultant code, and read the comments as well.

    Stay in the 'Light!
    Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream
    Join me @ SilverlightCream | Phoenix Silverlight User Group

    Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10


  • October 2012 Critical Patch Update and Critical Patch Update for Java SE Released

    - by Eric P. Maurice
    Hi, this is Eric Maurice. Oracle has just released the October 2012 Critical Patch Update and the October 2012 Critical Patch Update for Java SE.  As a reminder, the release of security patches for Java SE continues to be on a different schedule than for other Oracle products due to commitments made to customers prior to the Oracle acquisition of Sun Microsystems.  We do however expect to ultimately bring Java SE in line with the regular Critical Patch Update schedule, thus increasing the frequency of scheduled security releases for Java SE to 4 times a year (as opposed to the current 3 yearly releases).  The schedules for the “normal” Critical Patch Update and the Critical Patch Update for Java SE are posted online on the Critical Patch Updates and Security Alerts page. The October 2012 Critical Patch Update provides a total of 109 new security fixes across a number of product families including: Oracle Database Server, Oracle Fusion Middleware, Oracle E-Business Suite, Supply Chain Products Suite, Oracle PeopleSoft Enterprise, Oracle Customer Relationship Management (CRM), Oracle Industry Applications, Oracle FLEXCUBE, Oracle Sun products suite, Oracle Linux and Virtualization, and Oracle MySQL. Out of these 109 new vulnerabilities, 5 affect Oracle Database Server.  The most severe of these Database vulnerabilities has received a CVSS Base Score of 10.0 on Windows platforms and 7.5 on Linux and Unix platforms.  This vulnerability (CVE-2012-3137) is related to the “Cryptographic flaws in Oracle Database authentication protocol” disclosed at the Ekoparty Conference.  Because of timing considerations (proximity to the release date of the October 2012 Critical Patch Update) and the need to extensively test the fixes for this vulnerability to ensure compatibility across the products stack, the fixes for this vulnerability were not released through a Security Alert, but instead mitigation instructions were provided prior to the release of the fixes in this Critical Patch Update in My Oracle Support Note 1492721.1.  Because of the severity of these vulnerabilities, Oracle recommends that this Critical Patch Update be installed as soon as possible. Another 26 vulnerabilities fixed in this Critical Patch Update affect Oracle Fusion Middleware.  The most severe of these Fusion Middleware vulnerabilities has received a CVSS Base Score of 10.0; it affects Oracle JRockit and is related to Java vulnerabilities fixed in the Critical Patch Update for Java SE.  The Oracle Sun products suite gets 18 new security fixes with this Critical Patch Update.  Note also that Oracle MySQL has received 14 new security fixes; the most severe of these MySQL vulnerabilities has received a CVSS Base Score of 9.0. Today’s Critical Patch Update for Java SE provides 30 new security fixes.  The most severe CVSS Base Score for these Java SE vulnerabilities is 10.0 and this score affects 10 vulnerabilities.  As usual, Oracle reports the most severe CVSS Base Score, and these CVSS 10.0s assume that the user running a Java Applet or Java Web Start application has administrator privileges (as is typical on Windows XP). However, when the user does not run with administrator privileges (as is typical on Solaris and Linux), the corresponding CVSS impact scores for Confidentiality, Integrity, and Availability are "Partial" instead of "Complete", typically lowering the CVSS Base Score to 7.5 denoting that the compromise does not extend to the underlying Operating System.  
    Also, as is typical in the Critical Patch Update for Java SE, most of the vulnerabilities affect Java and Java FX client deployments only.  Only 2 of the Java SE vulnerabilities fixed in this Critical Patch Update affect client and server deployments of Java SE, and only one affects server deployments of JSSE.  This reflects the fact that Java running on servers operates in a more secure and controlled environment.  As discussed during a number of sessions at JavaOne, Oracle is considering security enhancements for Java in desktop and browser environments.  Finally, note that the Critical Patch Update for Java SE is cumulative; in other words, it includes all previously released security fixes, including the fix provided through Security Alert CVE-2012-4681, which was released on August 30, 2012.

    For More Information:
    - The October 2012 Critical Patch Update advisory is located at http://www.oracle.com/technetwork/topics/security/cpuoct2012-1515893.html
    - The October 2012 Critical Patch Update for Java SE advisory is located at http://www.oracle.com/technetwork/topics/security/javacpuoct2012-1515924.html
    - An online video about the importance of keeping up with Java releases and the use of the Java auto update is located at http://medianetwork.oracle.com/video/player/1218969104001
    - More information about Oracle Software Security Assurance is located at http://www.oracle.com/us/support/assurance/index.html


  • E-Business Suite 12.1.3 Data Masking Certified with Enterprise Manager 12c

    - by Elke Phelps (Oracle Development)
    Following up on our prior announcement for EM 11g, we're pleased to announce the certification of the E-Business Suite 12.1.3 Data Masking Template for the Data Masking Pack with Enterprise Manager Cloud Control 12c. You can use the Oracle Data Masking Pack with Oracle Enterprise Manager Grid Control 12c to scramble sensitive data in cloned E-Business Suite environments.  Due to data dependencies, scrambling E-Business Suite data is not a trivial task.  The data needs to be scrubbed in such a way that allows the application to continue to function.  You may scramble data in E-Business Suite cloned environments with EM12c using the following template: E-Business Suite 12.1.3 Data Masking Template for Data Masking Pack with EM12c (Patch 14407414)

    What does data masking do in E-Business Suite environments?
    Application data masking does the following:
    - De-identify the data: Scramble identifiers of individuals, also known as personally identifiable information or PII. Examples include information such as name, account, address, location, and driver's license number.
    - Mask sensitive data: Mask data that, if associated with personally identifiable information (PII), would cause privacy concerns. Examples include compensation, health and employment information.
    - Maintain data validity: Provide a fully functional application.

    How can EBS customers use data masking?
    The Oracle E-Business Suite Template for Data Masking Pack can be used in situations where confidential or regulated data needs to be shared with other non-production users who need access to some of the original data, but not necessarily every table.  Examples of non-production users include internal application developers or external business partners such as offshore testing companies, suppliers or customers.  The template works with the Oracle Data Masking Pack and Oracle Enterprise Manager to obscure sensitive E-Business Suite information that is copied from production to non-production environments. The Oracle E-Business Suite Template for Data Masking Pack is applied to a non-production environment with the Enterprise Manager Grid Control Data Masking Pack.  When applied, the Oracle E-Business Suite Template for Data Masking Pack will create an irreversibly scrambled version of your production database for development and testing.

    What's new with EM 12c?
    Some of the execution steps may also be performed with the EM Command Line Interface (EM CLI).  Support of EM CLI is a new feature with the E-Business Suite Release 12.1.3 template for EM 12c.

    Is there a charge for this?
    Yes. You must purchase licenses for the Oracle Data Masking Pack plug-in. The Oracle E-Business Suite 12.1.3 Template for the Data Masking Pack is included with the Oracle Data Masking Pack license.  You can contact your Oracle account manager for more details about licensing.

    References
    Additional details and requirements are provided in the following My Oracle Support Note:
    - Using Oracle E-Business Suite Release 12.1.3 Template for the Data Masking Pack with Oracle Enterprise Manager 12.1.0.2 Data Masking Tool (Note 1481916.1)
    - Masking Sensitive Data in the Oracle Database Real Application Testing User's Guide 11g Release 2 (11.2)

    Related Articles
    - Scrambling Sensitive Data in E-Business Suite
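
    As a toy illustration of what "irreversibly scrambled" means in practice (this is not how the Oracle Data Masking Pack works internally; just a hedged sketch of the general idea), the C# snippet below replaces a name with a pseudonym derived from a keyed one-way hash, so the output stays consistent across tables but cannot be reversed to recover the original value. The secret key and field values are hypothetical.

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class MaskingSketch
        {
            // Hypothetical per-environment secret; without it the mapping cannot be rebuilt.
            private static readonly byte[] Key = Encoding.UTF8.GetBytes("non-production-secret");

            // Derive a stable, irreversible pseudonym from a PII value.
            static string Pseudonymize(string piiValue)
            {
                using (var hmac = new HMACSHA256(Key))
                {
                    byte[] digest = hmac.ComputeHash(Encoding.UTF8.GetBytes(piiValue));
                    // Keep the token short and column-friendly; collisions are rare enough for a demo.
                    return "USER_" + BitConverter.ToString(digest, 0, 6).Replace("-", "");
                }
            }

            static void Main()
            {
                // The same input always maps to the same token, so referential
                // integrity across tables is preserved...
                Console.WriteLine(Pseudonymize("Jane Q. Customer"));
                Console.WriteLine(Pseudonymize("Jane Q. Customer"));
                // ...but nothing in the token leaks the original value.
                Console.WriteLine(Pseudonymize("John Doe"));
            }
        }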


  • Everything You Need to Know About Monitoring Oracle GoldenGate

    - by Irem Radzik
    By Joe deBuzna

    Having over 16 years of database replication experience, with 6 of those split between complex Oracle GoldenGate installations across three continents and researching monitoring requirements for both GoldenGate core replication and the GoldenGate monitoring GUIs, I've seen GoldenGate used and monitored in every way conceivable. And definite patterns have emerged. Next week at OpenWorld, on Tuesday Oct 2nd at 5pm, please come by Moscone West-3005 for the "Everything you need to know about Monitoring Oracle GoldenGate" session to hear me discuss how GoldenGate customers are monitoring their implementations today, common methods and tricks, what's new in the GUIs, and what's on the roadmap ahead.

    As you may have seen in previous blog posts and in our launch webcast, we now have a Plug-in for Oracle Enterprise Manager in addition to the new Oracle GoldenGate Monitor product. For those of you who won't be at OpenWorld, please check out our Management Pack for Oracle GoldenGate data sheet and the Oracle GoldenGate 11gR2 New Features white paper to learn more about the new Oracle GoldenGate 11gR2 release.

    In this latest release we also have enhanced conflict detection and resolution. It is a cornerstone of any Active-Active database replication solution, and in the latest release we just took ours to the next level with built-in optimized resolution routines (no more dependency on sqlexec!). At OpenWorld we have a session, CON8557 - Best Practice for Conflict Detection & Resolution, 3:30-4:30 on Wed Oct 3rd at Moscone West-3005. Oracle Development Manager Bharath Aleti and I will highlight the most commonly used options and best practices gained from our interaction with numerous customers and consultants. Hope you can join us next week.


  • Pull Request Changes, Multi-Selection in Advanced View, and Advertisement Changes

    [Do you tweet? Follow us on Twitter @matthawley and @adacole_msft] We deployed a new version of the CodePlex website today.

    Pull Request Changes
    In this release, we have begun to re-focus on Pull Requests to ensure a productive experience between the project users and developers. We feel we made significant progress in this area for this release and look forward to using your feedback to drive future iterations. One of the biggest hurdles people have indicated is the inability to see what a pull request includes without pulling the source down from a Mercurial client. With today's changes, any user has the ability to view a pull request, the changesets / changes included, and perform an inline diff of the file. When a pull request is made, the CodePlex website will query for all outgoing changes from the fork to the main repository for a point-in-time comparison. Because of this point-in-time comparison:
    - All existing pull requests created prior to this release will not have changesets associated with them.
    - If new commits are pushed to the fork while a pull request is active, they will not appear associated with the pull request. The pull request will need to be re-submitted for them to appear.

    Once a pull request is created, you can "View the Pull Request", which takes you to a page displaying much more detailed information about that pull request, including who it was requested by and when, the associated changesets, the description, who it's assigned to (we'll come back to this), and the listing of summarized file changes. You'll also notice that each modified file has the ability to view a diff of all changes made. When you click "(view diff)" for a file, an inline diff experience appears. This new experience allows you to quickly navigate through all of the modified files as well as viewing the various change blocks for each file. As you browse through each file's changes, we update the URL to include the file path so you can quickly send a direct link to a pull request's file. Clicking "(close diff)" will bring you back to the original pull request view. View this pull request live on WikiPlex.

    Pull Request Review Assignment
    Another new feature we added for pull requests is the ability for project members to assign pull requests for review. Any project member has the ability to assign (and re-assign if needed) a pull request to a project member. Once the assignment has been made, that project member will be notified via email of the assignment. Once they complete the review of the pull request, they can either accept or deny it, similarly to the previous process.

    Multi-Selection in Advanced View Filters
    One of the more recent requests we have heard from users is the ability to multi-select advanced view filters for work items. We are happy to announce this is now possible. Simply control-click the multiple options for each filter item and your work item query will be refined as such. Should you happen to unselect all options for a given filter, it will automatically reset to the default option for that filter. Furthermore, the "Direct Link" URL will be updated to include the multi-selected options for each filter. Note: The "Direct Link" feature was released in our previous deployment, just never written about. It allows you to capture the current state of your query and send it to other individuals.

    Advertisement Changes
    Very recently, the advertiser (The Lounge) we partnered with to provide advertising revenue for projects, or donated to charity, was acquired by Lake Quincy Media. There has been no change in the advertising platform offering, and all projects have been converted over to using the new infrastructure. Project owners should note the new contact information for getting paid.

    The CodePlex team values your feedback, and is frequently monitoring Twitter, our Discussions and Issue Tracker for new features or problems. If you've not visited the Issue Tracker recently, please take a few moments to log an idea or vote for the features you would most like to see implemented on CodePlex.


  • Cloud Apps News @#OOW12

    - by Natalia Rachelson
    All eyes were on Oracle this past week and the news cycle was in full swing. What better time to make some key announcements that were guaranteed to create buzz ... and so we did. The name of the game was Cloud! Here are the key Cloud announcements for Apps, which included Fusion Tap that enables mobility across all Cloud Apps, HCM customer momentum in the Cloud, and our very first ERP Cloud Services customer.

    Oracle Unveils Oracle Fusion Tap for the iPad
    Oracle Fusion Tap - Productivity Amplified Anywhere, Anytime
    "Both the enterprise and technology providers must recognize the need to innovate and adapt for the increasing mobility of the workforce - not just for sales teams, but across the organization," said Carter Lusher, Research Fellow and Chief Analyst of Enterprise Applications Ecosystem, Ovum. "A mobile application that quickly and powerfully allows employees to make connections, analyze data, and complete activities at any time and wherever they may be located drives new levels of business value and enhances efficiency. Frankly, mobile access is no longer a 'nice to have' but a 'must have.'" "The mobile workforce is a business reality, and Oracle Fusion Tap is an example of how Oracle delivers mobile and cloud innovations that fundamentally improve productivity and how we work," said Chris Leone, Senior Vice President of Application Development, Oracle. "With Oracle Fusion Tap users will have an all-in-one, easily extensible app that puts mission-critical data and colleague connection at their fingertips." The entire release is available here: http://www.oracle.com/us/corporate/press/1855392

    Customers Live on Oracle Fusion Human Capital Management
    Oracle HCM Cloud Service Helps Power HR's Contribution to the Business
    "More than 25 of the 100-plus customers who have selected Oracle Fusion Human Capital Management (HCM) are already live. Ardent Leisure, Peach Aviation, Toshiba Medical Systems and Zillow have deployed Oracle HCM Cloud Service and are using it to transform their HR operations. They join companies such as Principal Financial Group and Elizabeth Arden, who are already using Oracle HCM Cloud Service to help manage international growth and deliver pervasive, role-based, configurable solutions to their employees. With these recent go-lives, Oracle takes a leading position in successfully bringing live HCM customers in the cloud." "As a technology company, Zillow looked to a partner who could scale with us. Zillow has gone live on Oracle HCM Cloud Service, which will give us the ability to automate and streamline HR operations for our employees in the near future," said Sarah Bilton, Senior Director HR, Zillow. Read the entire release here: http://www.oracle.com/us/corporate/press/1859573

    Lending Club Selects Oracle ERP Cloud Service to Help Increase Insight and Efficiencies
    Oracle ERP Cloud Service Provides an Open Architecture, Best-of-Breed Decision-Making, and Scalability in the Cloud
    "Lending Club, the leading platform for investing in and obtaining personal loans, has selected Oracle ERP Cloud Service to help improve decision-making and workflow, implement robust reporting, and take advantage of the inherent scalability and cost savings provided by the cloud. With more than 76,000 borrowers and 90,000 investors, Lending Club utilizes technology and innovation to reduce the cost of traditional banking and offer borrowers better rates and investors better returns. After an extensive search, Lending Club selected Oracle ERP Cloud Service due to the breadth and depth of capabilities and ongoing innovation of Oracle ERP Cloud Service, as well as Oracle's open architecture, industry leadership and commitment to partners." "Lending Club is an innovative, data-intensive, high-growth company and we needed a solution and partner that could match us," said Carrie Dolan, CFO, Lending Club. "We conducted a thorough review of our options, and Oracle ERP Cloud Service was the clear winner in terms of capabilities and business value as well as commitment to us as a customer." Read the entire release here: http://www.oracle.com/us/corporate/press/1859020

    Read the article

  • Additional new material WebLogic Community

    - by JuergenKress
    Virtual Developer Conference On Demand - Register
    Updated Book: WebLogic 12c: Distinctive Recipes - Architecture, Development, Administration by Oracle ACE Director Frank Munz - Blog | YouTube
    Webcast: Migrating from GlassFish to WebLogic - Replay
    Reliance Commercial Finance Accelerates Time-to-Market, Improves IT Staff Productivity by 70% - Blog | Oracle Magazine
    Retrieving WebLogic Server Name and Port in ADF Application by Andrejus Baranovskis, Oracle ACE Director - Blog
    Using Oracle WebLogic 12c with NetBeans IDE - Oracle ACE Director Markus Eisele walks you through installing and configuring all the necessary components, and helps you get started with a simple Hello World project. Read the article.
    Video: Oracle A-Team ADF Mobile Persistence Sample - This video by Oracle Fusion Middleware A-Team architect Steven Davelaar demonstrates how to use the ADF Mobile Persistence Sample JDeveloper extension to generate a fully functional ADF Mobile application that reads and writes data using an ADF BC SOAP web service. Watch the video.
    Java ME 8 Release - Download Java ME today! This release is an implementation of the Java ME 8 standards JSR 360 (CLDC 8) and JSR 361 (MEEP 8), and includes support of alignment with Java SE 8 language features and APIs, an enhanced services-enabled application platform, the ability to "right-size" the platform to address a wide range of target devices, and more. Learn more.
    Download Java ME SDK 8 - It includes application development support for Oracle Java ME Embedded 8 platforms and includes plugins for NetBeans 8. See the Java ME 8 Developer Tools Documentation to learn more.
    JavaOne 2014 Early Bird Rate - Register early to save $400 off the onsite price. With the release of Java 8 this year, we have exciting new sessions and an interactive demo space!
    NetBeans IDE 8.0 Patch Update - The NetBeans Team has released a patch for NetBeans IDE 8.0. Download it today to get fixes that enhance stability and performance.
    Java 8 Questions Forum - For any questions about this new release, please join the conversation on the Java 8 Questions Forum.
    Java ME 8: Getting Started with Samples and Demo Code - Learn in a few steps how to get started with Java ME 8!
    The New Java SE 8 Features - Java SE 8 introduces enhancements such as lambda expressions that enable you to write more concise yet readable code, better utilize multicore systems, and detect more errors at compile time. See What's New in JDK 8 and the new Java SE 8 documentation portal.
    Pay Less for Java-Related Books! - Save 20% on all new Oracle Press books related to Java. Download the free preview sampler for the Java 8 book written by Herbert Schildt, Maurice Naftalin, Henrik Ebbers and J.F. DiMarzio.
    New book: EJB 3 in Action, Second Edition
    WebLogic 12c Does WebSockets - Getting Started by C2B2
    Video: Building Robots with Java Embedded
    Video: Nighthacking TV - Watch presentations by Stephen Chin and community members about Java SE, Java Embedded, Java EE, Hadoop, Robots and more.
    Migrating the Spring Pet Clinic to Java EE 7
    Trip report: Jozi JUG Java Day in Johannesburg
    How to Build GlassFish 4 from Source
    4,000 posts later: The Aquarium
    WebLogic Partner Community - For regular information, become a member of the WebLogic Partner Community: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.
    Blog Twitter LinkedIn Mix Forum Wiki
    Technorati Tags: WebLogic,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • Replicating between Cloud and On-Premises using Oracle GoldenGate

    - by Ananth R. Tiru
    Do you have applications running on the cloud that you need to connect with your on-premises systems? The most likely answer to this question is a resounding YES! If so, then you understand the importance of keeping the data fresh at all times across the cloud and on-premises environments. This is also one of the key focus areas for the new GoldenGate 12c release, which we announced a couple of weeks ago via a press release.

    Most enterprises have spent years avoiding the data “silos” that inhibit productivity. For example, an enterprise which has adopted a CRM strategy could be relying on an on-premises marketing application used for developing and nurturing leads. At the same time it could be using a SaaS-based sales application to create opportunities and quotes. The sales and marketing teams which use these systems need to be able to access and share the data in a reliable and cohesive way. This example can be extended to other application areas such as HR, supply chain, and finance, and to the demands the users place on getting a consistent view of the data.

    When it comes to moving data in hybrid environments, some of the key requirements include minimal latency, reliability, and security:
    - Data must remain fresh. As data ages it becomes less relevant and less valuable—day-old data is often insufficient in today’s competitive landscape.
    - Reliability must be guaranteed despite system or connectivity issues that can occur between the cloud and on-premises instances.
    - Security is a key concern when replicating between cloud and on-premises instances.

    There are several options to consider when replicating between the cloud and on-premises instances.
    Option 1 – Secured network established between the cloud and on-premises. A secured network is established between the cloud and on-premises, which enables the applications (including replication software) running on the cloud and on-premises to have seamless connectivity to other applications, irrespective of where they are physically located.
    Option 2 – Restricted network established between the cloud and on-premises. A restricted network is established between the cloud and on-premises instances, which enables certain ports (required by replication) to be opened on both the cloud and the on-premises instances and whitelists the IP addresses of the cloud and on-premises instances.
    Option 3 – Restricted network access from on-premises and cloud through an HTTP proxy. This option can be considered when the ports required by the applications (including replication software) are not open and the cloud instance is not whitelisted on the on-premises instance. This option of tunneling through an HTTP proxy should only be considered when proper security exceptions are obtained.

    Oracle GoldenGate
    Oracle GoldenGate is used by major Fortune 500 companies and other industry leaders worldwide to support mission-critical systems for data availability and integration. Oracle GoldenGate addresses the requirements for ensuring data consistency between cloud and on-premises instances, thus enabling business processes to run effectively and reliably. The architecture diagram below illustrates the scenario where the cloud and the on-premises instance are connected using GoldenGate through a secured network. In this scenario, Oracle GoldenGate is installed and configured on both the cloud and the on-premises instances. On the cloud instance, Oracle GoldenGate is installed and configured on the machine where the database instance can be accessed.
    Oracle GoldenGate can be configured for unidirectional or bi-directional replication between the cloud and on-premises instances (a minimal parameter-file sketch follows below). The specific configuration of the Oracle GoldenGate processes will depend upon the option selected for establishing connectivity between the cloud and on-premises instances. The knowledge article (ID 1588484.1) titled 'Replicating between Cloud and On-Premises using Oracle GoldenGate' discusses the options for replicating between the cloud and on-premises instances in detail. The article can be found on My Oracle Support.

    To learn more about Oracle GoldenGate 12c, register for our launch webcast where we will go into these new features in more detail. You may also want to download our white paper "Oracle GoldenGate 12c Release 1 New Features Overview". I would love to hear your requirements for replicating between on-premises and cloud instances, as well as your comments about the strategy discussed in the knowledge article to address your needs. Please post your comments in this blog or in the Oracle GoldenGate public forum: https://forums.oracle.com/community/developer/english/business_intelligence/system_management_and_integration/goldengate
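    To make this concrete, here is a minimal, hypothetical sketch of a unidirectional setup under Option 1 (secured network): an extract parameter file on the on-premises side pushing changes to a trail on the cloud instance, and a replicat applying them there. All names (process names, host, schema, trail paths) are invented for illustration; the actual parameters depend on your topology and GoldenGate release.

    -- on-premises side: extract capturing changes and sending them to the cloud host
    EXTRACT extcld
    USERID ogguser, PASSWORD ********
    RMTHOST cloudhost.example.com, MGRPORT 7809
    RMTTRAIL ./dirdat/ct
    TABLE crm.leads;

    -- cloud side: replicat applying the remote trail to the cloud database
    REPLICAT repcld
    USERID ogguser, PASSWORD ********
    ASSUMETARGETDEFS
    DISCARDFILE ./dirrpt/repcld.dsc, PURGE
    MAP crm.leads, TARGET crm.leads;

    For bi-directional replication you would mirror this extract/replicat pair in the opposite direction and add loop-avoidance settings.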

    Read the article

  • Seeking advice on tools and technology for my new game [closed]

    - by k.k. slider
    I'm a C# developer who has been programming a game in my spare time using XNA and Visual Studio. The game's logic is mostly done and I've completed a prototype that has most of the functionality of (what I envision to be) the final game. However, having heard about the uncertain future and (possibly) limited audience for XNA games, I'm looking to switch platforms... but I don't know what technology would best suit my needs. Below are some specifics about my game and what exactly I'm looking for, if you're interested:
    - The game is a 2D turn-based tactical RPG (strategy game) for two players. It is a basic sprite and tile based game with animations and sound. 3D capabilities are not necessary.
    - I'd like to allow players to compete with others online, and have a basic ranking/matchmaking system. I will probably need something that can interact with a server and a database (the game is turn-based and has no RNG, so cheating would be easy to detect even if most computation is done client-side and minimal data is sent to the server).
    - Ideally, I would be able to release an early version of the game and have people give feedback as I develop additional features (similar to Minecraft). I'd prefer to have a way to release periodic updates to the game instead of releasing an absolute final product.
    - To reach the widest possible audience, I'd prefer technology that allows me to release on PC, Android, iOS, and (maybe) Mac. This is a game with simple mouse inputs which can fit on a mobile touch screen.
    - The game should be monetizable. If I find success with this game, then I may consider becoming a full-time indie game developer. I have several other game ideas and have learned quite a bit from my first attempt at game development. My first thought was an F2P/microtransaction model, but I'm open to other suggestions.
    - Language isn't a primary concern of mine, since I have a decent amount of experience using several languages to program large projects. I'm willing to spend money (e.g. on a developer's license), but the more expensive it gets, the more hesitant I am to use it.
    I've looked into the following solutions... there are a LOT of tools out there... if anyone has experience with any of these and would like to recommend/reject any of them, it would be helpful.
    - C#/.NET (XNA/MonoGame/SDL/SlimDX/Xamarin/ExEn/ANX?)
    - HTML5/JS (AppMobi/PhoneGap/Marmalade/FlashCanvas/Cordova/libRocket?)
    - Python (Pyglet/Pygame/Kivy?)
    - Java (JavaFX/libGDX?)
    - Unity/Construct 2/Cocos2D/NME/Corona/other game creation software? I'd like something that can do 2D and isn't limited by being too high-level.
    - Other languages (Lua/LOVE? Moai?)
    Thanks for answering this rather long and tedious question...

    Read the article

  • BI&EPM in Focus November 2012

    - by Mike.Hallett(at)Oracle-BI&EPM
    Customers
    · San Diego Unified School District Harnesses Attendance, Procurement, and Operational Data with Oracle Exalytics, Generating $4.4 Million in Savings
    · NilsonGroup chooses Oracle Exalytics In-Memory Machine as their solution to access critical data to keep its stores competitive with real-time Mobile BI: Video
    · Nykredit, in the Danish Financial Sector, describes their experiences from testing the Exalytics Business Intelligence Machine: Video
    · Sodexo chose Oracle Exalytics as their business analytics platform: Video
    · AstraZeneca (US, Canada, MedImmune) Improves Insight, Analytics, and Reporting, Enterprisewide with Unified Planning on a Single Platform
    · Experian Consolidates Reporting Systems for One, Global View of Financial Data and Improves Planning for Continued Growth
    · Munchkin Gives its Line of Children’s Products Plenty of Room to Grow in an Upgraded Enterprise Application Environment
    · Top 20 EPM Customer Snapshots, in One Handy Booklet (link)
    · Customer and Partner Successes: Link to Complete Archive
    Enterprise Performance Management
    · Nov 15: Is Hope and Email the Core of your Reconciliation Process? (link)
    · Replay: Integrated Business Planning, Featuring Leggett & Platt (link)
    · Whitepaper: The New Competitive Advantage - Strategic CIO's Embrace the Cloud (link)
    · Press: Oracle Enterprise Performance Management Driving Significant Improvements in Budget Management and Reporting for Public Sector Organizations (link)
    · Enterprise Performance Management Video Feature Overviews, Now Available on YouTube (link)
    · NEW Solution Brief - Oracle Hyperion Planning on the Oracle Exalytics In-Memory Machine (link)
    · For the insurance sector: Datasheet for new release V2.0 - Oracle Quantitative Management and Reporting for Solvency II (link)
    · Whitepaper FSN 2012: Managing Risk and Uncertainty, an Executive's Guide to Integrated Business Planning (link)
    · NEW Datasheet for Oracle Planning and Budgeting Cloud Service (link)
    · Blog: Planning in the Cloud - For Real Business Intelligence
    · Press: Latest Release of Oracle Exalytics In-Memory Machine Software Enables Customers to View and Analyze Data at the Speed of Business (link)
    · Press: New Release (11.1.1.6.2BP1) of Oracle Business Intelligence Enables Users to Quickly Access and Analyze Key Business Information, Anytime, Anywhere (link)
    · Mark Hurd Interviewed on USA Today about Big Data & Analytics (link)
    · Whitepaper: Mastering Big Data - CFO Strategies to Transform Insight into Opportunity (link)
    · Nov 15: Improve Asset Utilization. Achieve Greater Profitability: Oracle Enterprise Asset Management Analytics (link)
    · Replay: Oracle Enterprise Asset Mgmt Analytics and Oracle Manufacturing Analytics (link)
    · Overload to Impact: An Industry Scorecard on Big Data Business Challenges (link)
    · Webcast Replay: Overview of Oracle Endeca Information Discovery (link)
    · OBIEE 11g: Required and Recommended Patches and Patch Sets (link)

    Read the article

  • MySQL for Excel new features (1.2.0): Save and restore Edit sessions

    - by Javier Rivera
    Today we are going to talk about another new feature included in the latest MySQL for Excel release to date (1.2.0), which can be installed directly from our MySQL Installer downloads page.

    Since the first release you have been able to open a session to directly edit data from a MySQL table in an Excel worksheet and see those changes reflected immediately in the database. You could also open multiple sessions to work with different tables at the same time (when they belong to the same schema). The problem was that if for any reason you were forced to close Excel or the workbook you were working on, you had no way to save the state of those open sessions, and to continue where you left off you needed to reopen them one by one. Well, that's no longer a problem, since we are now introducing a new feature to save and restore active Edit sessions.

    All you need to do is click the Options button on the main MySQL for Excel panel and make sure the Edit Session Options (highlighted in yellow) are set correctly, especially that "Restore saved Edit sessions" is checked. Then just begin an Edit session like you normally would: select the connection and schema on the main panel, select the table you want to edit data from, click Edit MySQL Data, and import the MySQL data into Excel. You can edit data like you always did with the previous version.

    To test the save and restore functionality, first save the workbook while at least one Edit session is open, and close the file. Then reopen the workbook. The next steps differ depending on your version of Excel:

    Excel 2013 extra step (first): In Excel 2013 you first need to open the workbook with saved Edit sessions, then click the MySQL for Excel icon on the Data menu (notice how in this version, every time you open or create a new file the MySQL for Excel panel is closed in the new window). Please note that if you work in Excel 2013 with several workbooks with open Edit sessions at the same time, you'll need to repeat this step each time you open one of them.

    Following steps: In Excel 2010 or earlier, you just need to make sure the MySQL for Excel panel is already open at this point; if it's not, please do the previous step specified above (Excel 2013 extra step). For Excel 2010 or older versions you will only need to do this previous step once.

    When saved sessions are detected, you will be prompted what to do with them: click Restore to continue working where you left off; click Discard to delete the saved sessions (all Edit session information for this file will be deleted from your computer, so you will no longer be prompted the next time you open this same file); or click Nothing to continue without opening saved sessions (this will keep the saved Edit sessions intact, so you will be prompted about them again the next time you open this workbook).

    And there you have it: you can now save your Edit sessions, close your workbook or turn off your computer, and still reopen them in the future to continue working right where you were. Today we talked about how you can save your active Edit sessions and restore them later, another feature included in the latest MySQL for Excel release (1.2.0). Please remember you can try this product and many others for free by downloading the installer directly from our MySQL Installer downloads page. Happy editing!

    Read the article

  • MySQL Connector/Net 6.6.2 has been released

    - by fernando
    MySQL Connector/Net 6.6.2, a new version of the all-managed .NET driver for MySQL, has been released. This is the first of two beta releases intended to introduce users to the new features in the release. This release is feature complete and should be stable enough for users to understand the new features and how we expect them to work. As is the case with all non-GA releases, it should not be used in any production environment. It is appropriate for use with MySQL server versions 5.0-5.6.

    It is now available in source and binary form from http://dev.mysql.com/downloads/connector/net/#downloads and mirror sites (note that not all mirror sites may be up to date at this point; if you can't find this version on some mirror, please try again later or choose another download site.)

    The 6.6 version of MySQL Connector/Net brings the following new features:
      * Stored routine debugging
      * Entity Framework 4.3 Code First support
      * Pluggable authentication (now third parties can plug new authentication mechanisms into the driver)
      * Full Visual Studio 2012 support: everything from Server Explorer to IntelliSense and the stored routine debugger

    Stored Procedure Debugging
    -------------------------------------------
    We are very excited to introduce stored procedure debugging into our Visual Studio integration. It works in a very intuitive manner: simply click 'Debug Routine' from Server Explorer. You can debug stored routines, functions & triggers. Some of the new features in this release include:
      * Besides normal breakpoints, you can define conditional & pass-count breakpoints.
      * The debugger editor now shows colorizing.
      * You can now change the values of locals in a function scope (previously this caused a deadlock due to functions executing within their own transaction).
      * You can now also debug triggers for 'replace' SQL statements.
      * In general, anything related to locals, watches, breakpoints, stepping & the call stack should work in a similar way to C#'s Visual Studio debugger.
    Some limitations remain, due to the current debugger architecture:
      * Some MySQL functions cannot be debugged currently (get_lock, release_lock, begin, commit, rollback, set transaction level).
      * Only one debug session may be active on a given server.
    The debugger is feature complete at this point. We look forward to your feedback.

    Documentation
    -------------------------------------
    The documentation is still being developed and will be readily available soon (before Beta 2). You can view current Connector/Net documentation at http://dev.mysql.com/doc/refman/5.5/en/connector-net.html
    You can find our team blog at http://blogs.oracle.com/MySQLOnWindows.
    You can also post questions on our forums at http://forums.mysql.com/.
    Enjoy and thanks for the support!
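    For orientation, here is a minimal sketch of calling a stored routine through Connector/Net, the kind of routine you could then step through with the new 'Debug Routine' integration. The connection string, routine name, and parameter are hypothetical; only the MySql.Data API calls themselves are standard:

    using System;
    using MySql.Data.MySqlClient;  // Connector/Net ADO.NET provider

    class StoredRoutineDemo
    {
        static void Main()
        {
            // Hypothetical connection settings; adjust to your server.
            const string connStr = "server=localhost;user=test;password=test;database=sakila";
            using (var conn = new MySqlConnection(connStr))
            {
                conn.Open();
                // Invoke a (hypothetical) stored procedure by name.
                var cmd = new MySqlCommand("film_in_stock_count", conn);
                cmd.CommandType = System.Data.CommandType.StoredProcedure;
                cmd.Parameters.AddWithValue("p_film_id", 1);
                Console.WriteLine(cmd.ExecuteScalar());
            }
        }
    }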

    Read the article

  • Does *every* project benefit from written specifications?

    - by nikie
    I know this is holy war territory, so please read the question to the end before answering. There are many cases where written specifications make a lot of sense. For example, if you're a contractor and you want to get paid, you need written specs. If you're working in a team with 20 people, you need written specs. If you're writing a programming language compiler or interpreter (and it's not perl), you'll usually write a formal specification. I don't doubt that there are many more cases where written specifications are a really good idea. I just think that there are cases where there's so little benefit in written specs that it doesn't outweigh the costs of writing and maintaining them.

    EDIT: The close votes say that "it is difficult to say what is asked here", so let me clarify: The usefulness of written, detailed specifications is often claimed like a dogma. (If you want examples, look at the comments.) But I don't see the use of them for the kind of development I'm doing. So what is asked here is: How would written specifications help me?

    Background information: I work for a small company that's developing vertical market software. If our product is easier to use and has better performance than the competition, it sells. If it's harder to use, even if it behaves 100% as the specification says, it doesn't sell. So there are no "external forces" for having written specs. The advantage would have to be somewhere in the development process.

    Now, I can see how frozen specifications would make a developer's life easier. But we'll never have frozen specs. If we see in the middle of development that feature X is not intuitive to use the way it's specified, then we can only choose between changing the specification or developing a product that won't sell.

    You'll probably ask by now: How do you know when you're done? Well, we're continually improving our product. The competition does the same. So (hopefully) we're never done. We keep improving the software, and when we reach a point where the benefits of the improvements we've added since the last release outweigh the costs of an update, we create a new release that is then tested, localized, documented and deployed. This also means that there's rarely any schedule pressure. Nobody has to do overtime to make a deadline. If a feature isn't done by the time we want to release the next version, it'll simply go into the following version.

    The next question might be: How do your developers know what they're supposed to implement? The answer is: They have a lot of domain knowledge. They know the customers' business well enough that a high-level description of the feature (or even just the problem that the customer needs solved) is enough to implement it. If it's not clear, the developer creates a few fake screens to get feedback from marketing/management or customers, but this is nowhere near the level of detail of actual specifications. This might be inefficient for larger teams, but for a small team with low turnover it works quite well. It has the additional benefit that the developer in question often comes up with a better solution than the person writing the specs might have.

    This question is already getting very long, but let me address one last point: Testing. Like I said in the beginning, if our software behaves 100% like the spec says, it can still be crap. In fact, if it's so unintuitive that you need a spec to know how to test it, it probably is crap. It makes sense to have fixed, written tests for some core functionality and for regression bugs, but again, this is nowhere near a full written spec of how the software should behave and when. The main test is: hand the software to a user who doesn't know it yet and tell them to use the new feature X. If they can figure out how to use it and it works, it works.

    Read the article

  • Error in installing ZTE AC2738 on ubuntu 3.0.0-12-generic

    - by Netro
    I am getting this error ,struct usb_serial_driver has no member named shutdown. I am installing on 64bit ubuntu 3.0.0-12-generic ... Beginning Verify CD ... ... Verify CD Succeed! ... Beginning Copy Install Package Files ... ... will take a long time, waiting 5 seconds, please ... Copy Install Package Files Succeed! ... 'ztemtApp' previous version not found. and install now Beginning install ... ... Current linux release version is 'Ubuntu' ... Checking 'App' process ... Checking old installation ... Installing ... Current Path is : . : /tmp/ztemt_datacard/Linux 1. Checking Previous Version ... 2. Copying Data Bin ... ... will take a few seconds, please waiting ... /tmp/ztemt_datacard/Linux 3. Auto Load Usb Driver Module ... Rather than invoking init scripts through /etc/init.d, use the service(8) utility, e.g. service acpid restart Since the script you are attempting to invoke has been converted to an Upstart job, you may also use the stop(8) and then start(8) utilities, e.g. stop acpid ; start acpid. The restart(8) utility is also available. acpid stop/waiting acpid start/running, process 11802 4. Changing pppd Options ... 5. Changing File Permission ... 6. Deleting Qt lib When Local QT Vertion > V4.4.0 ... ... Package 'libqtgui4' exist ... QT_VERSION = 4 7. Deleting process id file: EVDOApp.pid ... 8. Making USB Serial Driver Module : ztemt.ko ... ... will take a few seconds, please waiting ... make -C /lib/modules/3.0.0-12-generic/build M=/usr/local/bin/ztemtApp/zteusbserial/below2.6.27 modules make[1]: Entering directory `/usr/src/linux-headers-3.0.0-12-generic' CC [M] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.o /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘destroy_serial’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:159:14: error: ‘struct usb_serial_driver’ has no member named ‘shutdown’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:165:18: error: ‘struct usb_serial_port’ has no member named ‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘serial_open’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:241:36: error: ‘struct usb_serial_port’ has no member named ‘mutex’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:246:8: error: ‘struct usb_serial_port’ has no member named ‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:251:6: error: ‘struct usb_serial_port’ has no member named ‘tty’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:253:10: error: ‘struct usb_serial_port’ has no member named ‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:265:3: warning: passing argument 1 of ‘serial->type->open’ from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:265:3: note: expected ‘struct tty_struct *’ but argument is of type ‘struct usb_serial_port *’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:265:3: warning: passing argument 2 of ‘serial->type->open’ from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:265:3: note: expected ‘struct usb_serial_port *’ but argument is of type ‘struct file *’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:270:20: error: ‘struct usb_serial_port’ has no member named ‘mutex’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:276:6: error: ‘struct usb_serial_port’ has no member named 
‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:278:6: error: ‘struct usb_serial_port’ has no member named ‘tty’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:279:20: error: ‘struct usb_serial_port’ has no member named ‘mutex’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘serial_close’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:355:18: error: ‘struct usb_serial_port’ has no member named ‘mutex’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:357:10: error: ‘struct usb_serial_port’ has no member named ‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:358:21: error: ‘struct usb_serial_port’ has no member named ‘mutex’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:371:8: error: ‘struct usb_serial_port’ has no member named ‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:372:10: error: ‘struct usb_serial_port’ has no member named ‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:375:3: error: too many arguments to function ‘port->serial->type->close’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:377:11: error: ‘struct usb_serial_port’ has no member named ‘tty’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:378:12: error: ‘struct usb_serial_port’ has no member named ‘tty’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:379:9: error: ‘struct usb_serial_port’ has no member named ‘tty’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:380:8: error: ‘struct usb_serial_port’ has no member named ‘tty’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:386:20: error: ‘struct usb_serial_port’ has no member named ‘mutex’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘serial_write’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:407:11: error: ‘struct usb_serial_port’ has no member named ‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:413:2: warning: passing argument 1 of ‘port->serial->type->write’ from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:413:2: note: expected ‘struct tty_struct *’ but argument is of type ‘struct usb_serial_port *’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:413:2: warning: passing argument 2 of ‘port->serial->type->write’ from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:413:2: note: expected ‘struct usb_serial_port *’ but argument is of type ‘const unsigned char *’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:413:2: warning: passing argument 3 of ‘port->serial->type->write’ makes pointer from integer without a cast [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:413:2: note: expected ‘const unsigned char *’ but argument is of type ‘int’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:413:2: error: too few arguments to function ‘port->serial->type->write’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘serial_write_room’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:429:11: error: ‘struct usb_serial_port’ has no member named ‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:435:2: warning: passing argument 1 of ‘port->serial->type->write_room’ from incompatible pointer 
type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:435:2: note: expected ‘struct tty_struct *’ but argument is of type ‘struct usb_serial_port *’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘serial_chars_in_buffer’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:451:11: error: ‘struct usb_serial_port’ has no member named ‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:457:2: warning: passing argument 1 of ‘port->serial->type->chars_in_buffer’ from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:457:2: note: expected ‘struct tty_struct *’ but argument is of type ‘struct usb_serial_port *’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘serial_throttle’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:472:11: error: ‘struct usb_serial_port’ has no member named ‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:479:3: warning: passing argument 1 of ‘port->serial->type->throttle’ from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:479:3: note: expected ‘struct tty_struct *’ but argument is of type ‘struct usb_serial_port *’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘serial_unthrottle’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:491:11: error: ‘struct usb_serial_port’ has no member named ‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:498:3: warning: passing argument 1 of ‘port->serial->type->unthrottle’ from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:498:3: note: expected ‘struct tty_struct *’ but argument is of type ‘struct usb_serial_port *’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘serial_ioctl’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:511:11: error: ‘struct usb_serial_port’ has no member named ‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:518:3: warning: passing argument 1 of ‘port->serial->type->ioctl’ from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:518:3: note: expected ‘struct tty_struct *’ but argument is of type ‘struct usb_serial_port *’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:518:3: warning: passing argument 2 of ‘port->serial->type->ioctl’ makes integer from pointer without a cast [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:518:3: note: expected ‘unsigned int’ but argument is of type ‘struct file *’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:518:3: error: too many arguments to function ‘port->serial->type->ioctl’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘serial_set_termios’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:535:11: error: ‘struct usb_serial_port’ has no member named ‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:542:3: warning: passing argument 1 of ‘port->serial->type->set_termios’ from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:542:3: note: expected ‘struct tty_struct *’ but argument is of type ‘struct usb_serial_port *’ 
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:542:3: warning: passing argument 2 of ‘port->serial->type->set_termios’ from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:542:3: note: expected ‘struct usb_serial_port *’ but argument is of type ‘struct termios *’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:542:3: error: too few arguments to function ‘port->serial->type->set_termios’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘serial_break’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:554:11: error: ‘struct usb_serial_port’ has no member named ‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:561:3: warning: passing argument 1 of ‘port->serial->type->break_ctl’ from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:561:3: note: expected ‘struct tty_struct *’ but argument is of type ‘struct usb_serial_port *’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘serial_tiocmget’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:629:11: error: ‘struct usb_serial_port’ has no member named ‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:635:3: warning: passing argument 1 of ‘port->serial->type->tiocmget’ from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:635:3: note: expected ‘struct tty_struct *’ but argument is of type ‘struct usb_serial_port *’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:635:3: error: too many arguments to function ‘port->serial->type->tiocmget’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘serial_tiocmset’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:651:11: error: ‘struct usb_serial_port’ has no member named ‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:657:3: warning: passing argument 1 of ‘port->serial->type->tiocmset’ from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:657:3: note: expected ‘struct tty_struct *’ but argument is of type ‘struct usb_serial_port *’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:657:3: warning: passing argument 2 of ‘port->serial->type->tiocmset’ makes integer from pointer without a cast [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:657:3: note: expected ‘unsigned int’ but argument is of type ‘struct file *’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:657:3: error: too many arguments to function ‘port->serial->type->tiocmset’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘usb_serial_port_work’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:697:12: error: ‘struct usb_serial_port’ has no member named ‘tty’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘port_release’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:709:2: error: ‘struct device’ has no member named ‘bus_id’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘usb_serial_probe’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:858:2: error: implicit declaration of function ‘lock_kernel’ [-Werror=implicit-function-declaration] 
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:861:3: error: implicit declaration of function ‘unlock_kernel’ [-Werror=implicit-function-declaration] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1034:3: error: ‘struct usb_serial_port’ has no member named ‘mutex’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1182:23: error: ‘struct device’ has no member named ‘bus_id’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1182:51: error: ‘struct device’ has no member named ‘bus_id’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1183:3: error: ‘struct device’ has no member named ‘bus_id’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘usb_serial_disconnect’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1257:13: error: ‘struct usb_serial_port’ has no member named ‘tty’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1258:21: error: ‘struct usb_serial_port’ has no member named ‘tty’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: At top level: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1280:2: warning: initialization from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1280:2: warning: (near initialization for ‘serial_ops.ioctl’) [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1281:2: warning: initialization from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1281:2: warning: (near initialization for ‘serial_ops.set_termios’) [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1284:2: warning: initialization from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1284:2: warning: (near initialization for ‘serial_ops.break_ctl’) [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1286:2: error: unknown field ‘read_proc’ specified in initializer /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1286:2: warning: initialization from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1286:2: warning: (near initialization for ‘serial_ops.ioctl’) [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1287:2: warning: initialization from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1287:2: warning: (near initialization for ‘serial_ops.tiocmget’) [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1288:2: warning: initialization from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1288:2: warning: (near initialization for ‘serial_ops.tiocmset’) [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘usb_serial_init’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1352:2: error: implicit declaration of function ‘info’ [-Werror=implicit-function-declaration] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘fixup_generic’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1406:2: error: ‘struct usb_serial_driver’ has no member named ‘shutdown’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1406:2: error: ‘struct 
usb_serial_driver’ has no member named ‘shutdown’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1406:1: error: ‘usb_serial_generic_shutdown’ undeclared (first use in this function) /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1406:1: note: each undeclared identifier is reported only once for each function it appears in
cc1: some warnings being treated as errors
make[2]: *** [/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.o] Error 1
make[1]: *** [_module_/usr/local/bin/ztemtApp/zteusbserial/below2.6.27] Error 2
make[1]: Leaving directory `/usr/src/linux-headers-3.0.0-12-generic'
make: *** [modules] Error 2
Install finished!
Any suggestions?
Update: The installation document says that in some special cases the setup package can't automatically compile the driver, so you need to change the configuration yourself and compile the driver manually: enter the directory /usr/local/bin/ztemtApp/zteusbserial and find the version corresponding to the current system's kernel. I can see 2.6.27 2.6.28 2.6.29 2.6.30 2.6.31 2.6.32 2.6.33 2.6.34 2.6.35 2.6.36 2.6.37 2.6.38 2.6.39 below2.6.2 in the directory. Which version should I pick for installation? The readme file says we have to do this to insert ztemt.ko into the kernel.
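For context, these errors come from kernel API changes in the usb-serial layer: the usb_serial_driver 'shutdown' callback no longer exists on recent kernels (to the best of my knowledge it was split into 'disconnect' and 'release' around 2.6.31), port->tty and port->open_count were removed in the tty_port conversion, and callbacks such as open() now receive the tty explicitly, which matches the "expected 'struct tty_struct *'" warnings above. A rough, hypothetical sketch of the newer callback shapes (not taken from the ZTE sources):

#include <linux/module.h>
#include <linux/tty.h>
#include <linux/usb.h>
#include <linux/usb/serial.h>

/* open() now takes the tty explicitly; port->tty and port->open_count
 * were removed when the tty_port conversion happened. */
static int ztemt_open(struct tty_struct *tty, struct usb_serial_port *port)
{
        return usb_serial_generic_open(tty, port);
}

/* The old .shutdown callback was split into .disconnect and .release. */
static void ztemt_release(struct usb_serial *serial)
{
        /* free any per-device state here */
}

static struct usb_serial_driver ztemt_device = {
        .driver  = { .owner = THIS_MODULE, .name = "ztemt" },
        .open    = ztemt_open,
        .release = ztemt_release,  /* replaces the removed .shutdown */
};

So the practical options are to pick the bundled source directory closest to your kernel (none of them cover 3.0) or to port the driver to the newer API along these lines.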

    Read the article

  • Elfsign Object Signing on Solaris

    - by danx
    Elfsign Object Signing on Solaris

    Don't let this happen to you—use elfsign!

    Solaris elfsign(1) is a command that signs and verifies ELF format executables. That includes not just executable programs (such as ls or cp), but other ELF format files, including libraries (such as libnvpair.so) and kernel modules (such as autofs). Elfsign has been available since Solaris 10, and the ELF files distributed with Solaris since Solaris 10 are signed by either Sun Microsystems or its successor, Oracle Corporation.

    When an ELF file is signed, elfsign adds a new section to the ELF file, .SUNW_signature, that contains an RSA public key signature and other information about the signer: the algorithm used, the algorithm OID, the signer CN/OU, and a time stamp. The signature section can later be verified by elfsign or other software by matching the signature in the file against the ELF file contents (excluding the signature).

    ELF executable files may also be signed by a 3rd party or by the customer. This is useful for verifying the origin and authenticity of executable files installed on a system. The 3rd-party or customer public key certificate should be installed in /etc/certs/ to allow verification by elfsign. For currently-released versions of Solaris, only cryptographic framework plugin libraries are verified by Solaris. However, all ELF files may be verified by the elfsign command at any time.

    Elfsign Algorithms

    Elfsign signatures are created by taking a digest of the ELF section contents, then signing the digest with RSA. To verify, one takes a digest of the ELF file and compares it with the expected digest that's computed from the signature and the RSA public key. Originally elfsign took an MD5 digest of a SHA-1 digest of the ELF file sections, then signed the resulting digest with RSA. In Solaris 11.1, then Solaris 11.1 SRU 7 (5/2013), the available elfsign crypto algorithms were expanded to keep up with evolving cryptography. The following table shows the available elfsign algorithms:

      Elfsign Algorithm              Solaris Release      Comments
      elfsign sign -F rsa_md5_sha1   S10, S11.0, S11.1    Default for S10. Not recommended*
      elfsign sign -F rsa_sha1       S11.1                Default for S11.1. Not recommended
      elfsign sign -F rsa_sha256     S11.1 patch SRU7+    Recommended

    *Most or all CAs do not accept MD5 CSRs and do not issue MD5 certs due to MD5 hash collision problems.

    RSA Key Length. I recommend using an RSA-2048 key length with elfsign as the best balance between a long expected "lifetime", interoperability, and performance. RSA-2048 keys have an expected lifetime through 2030 (and probably beyond). For details, see Recommendation for Key Management: Part 1: General, NIST Publication SP 800-57 part 1 (rev. 3, 7/2012, PDF), tables 2 and 4 (pp. 64, 67).

    Step 1: create or obtain a key and cert

    The first step in using elfsign is to obtain a key and cert from a public Certificate Authority (CA), or create your own self-signed key and cert. I'll briefly explain both methods.

    Obtaining a Certificate from a CA

    To obtain a cert from a CA, such as Verisign, Thawte, or Go Daddy (to name a few random examples), you create a private key and a Certificate Signing Request (CSR) file and send it to the CA, following the instructions of the CA on their website. They send back a signed public key certificate. The public key cert, along with the private key you created, is used by elfsign to sign an ELF file. The public key cert is distributed with the software and is used by elfsign to verify elfsign signatures in ELF files.
You need to request a RSA "Class 3 public key certificate", which is used for servers and software signing. Elfsign uses RSA and we recommend RSA-2048 keys. The private key and CSR can be generated with openssl(1) or pktool(1) on Solaris. Here's a simple example that uses pktool to generate a private RSA_2048 key and a CSR for sending to a CA: $ pktool gencsr keystore=file format=pem outcsr=MYCSR.p10 \ subject="CN=canineswworks.com,OU=Canine SW object signing" \ outkey=MYPRIVATEKEY.key $ openssl rsa -noout -text -in MYPRIVATEKEY.key Private-Key: (2048 bit) modulus: 00:d2:ef:42:f2:0b:8c:96:9f:45:32:fc:fe:54:94: . . . [omitted for brevity] . . . c9:c7 publicExponent: 65537 (0x10001) privateExponent: 26:14:fc:49:26:bc:a3:14:ee:31:5e:6b:ac:69:83: . . . [omitted for brevity] . . . 81 prime1: 00:f6:b7:52:73:bc:26:57:26:c8:11:eb:6c:dc:cb: . . . [omitted for brevity] . . . bc:91:d0:40:d6:9d:ac:b5:69 prime2: 00:da:df:3f:56:b2:18:46:e1:89:5b:6c:f1:1a:41: . . . [omitted for brevity] . . . f3:b7:48:de:c3:d9:ce:af:af exponent1: 00:b9:a2:00:11:02:ed:9a:3f:9c:e4:16:ce:c7:67: . . . [omitted for brevity] . . . 55:50:25:70:d3:ca:b9:ab:99 exponent2: 00:c8:fc:f5:57:11:98:85:8e:9a:ea:1f:f2:8f:df: . . . [omitted for brevity] . . . 23:57:0e:4d:b2:a0:12:d2:f5 coefficient: 2f:60:21:cd:dc:52:76:67:1a:d8:75:3e:7f:b0:64: . . . [omitted for brevity] . . . 06:94:56:d8:9d:5c:8e:9b $ openssl req -noout -text -in MYCSR.p10 Certificate Request: Data: Version: 2 (0x2) Subject: OU=Canine SW object signing, CN=canineswworks.com Subject Public Key Info: Public Key Algorithm: rsaEncryption Public-Key: (2048 bit) Modulus: 00:d2:ef:42:f2:0b:8c:96:9f:45:32:fc:fe:54:94: . . . [omitted for brevity] . . . c9:c7 Exponent: 65537 (0x10001) Attributes: Signature Algorithm: sha1WithRSAEncryption b3:e8:30:5b:88:37:68:1c:26:6b:45:af:5e:de:ea:60:87:ea: . . . [omitted for brevity] . . . 06:f9:ed:b4 Secure storage of RSA private key. The private key needs to be protected if the key signing is used for production (as opposed to just testing). That is, protect the key to protect against unauthorized signatures by others. One method is to use a PIN-protected PKCS#11 keystore. The private key you generate should be stored in a secure manner, such as in a PKCS#11 keystore using pktool(1). Otherwise others can sign your signature. Other secure key storage mechanisms include a SCA-6000 crypto card, a USB thumb drive stored in a locked area, a dedicated server with restricted access, Oracle Key Manager (OKM), or some combination of these. I also recommend secure backup of the private key. Here's an example of generating a private key protected in the PKCS#11 keystore, and a CSR. $ pktool setpin # use if PIN not set yet Enter token passphrase: changeme Create new passphrase: Re-enter new passphrase: Passphrase changed. $ pktool gencsr keystore=pkcs11 label=MYPRIVATEKEY \ format=pem outcsr=MYCSR.p10 \ subject="CN=canineswworks.com,OU=Canine SW object signing" $ pktool list keystore=pkcs11 Enter PIN for Sun Software PKCS#11 softtoken: Found 1 asymmetric public keys. Key #1 - RSA public key: MYPRIVATEKEY Here's another example that uses openssl instead of pktool to generate a private key and CSR: $ openssl genrsa -out cert.key 2048 $ openssl req -new -key cert.key -out MYCSR.p10 Self-Signed Cert You can use openssl or pktool to create a private key and a self-signed public key certificate. A self-signed cert is useful for development, testing, and internal use. The private key created should be stored in a secure manner, as mentioned above. 
The following example creates a private key, MYSELFSIGNED.key, and a public key cert, MYSELFSIGNED.pem, using pktool and displays the contents with the openssl command. $ pktool gencert keystore=file format=pem serial=0xD06F00D lifetime=20-year \ keytype=rsa hash=sha256 outcert=MYSELFSIGNED.pem outkey=MYSELFSIGNED.key \ subject="O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com" $ pktool list keystore=file objtype=cert infile=MYSELFSIGNED.pem Found 1 certificates. 1. (X.509 certificate) Filename: MYSELFSIGNED.pem ID: c8:24:59:08:2b:ae:6e:5c:bc:26:bd:ef:0a:9c:54:de:dd:0f:60:46 Subject: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com Issuer: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com Not Before: Oct 17 23:18:00 2013 GMT Not After: Oct 12 23:18:00 2033 GMT Serial: 0xD06F00D0 Signature Algorithm: sha256WithRSAEncryption $ openssl x509 -noout -text -in MYSELFSIGNED.pem Certificate: Data: Version: 3 (0x2) Serial Number: 3496935632 (0xd06f00d0) Signature Algorithm: sha256WithRSAEncryption Issuer: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com Validity Not Before: Oct 17 23:18:00 2013 GMT Not After : Oct 12 23:18:00 2033 GMT Subject: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com Subject Public Key Info: Public Key Algorithm: rsaEncryption Public-Key: (2048 bit) Modulus: 00:bb:e8:11:21:d9:4b:88:53:8b:6c:5a:7a:38:8b: . . . [omitted for brevity] . . . bf:77 Exponent: 65537 (0x10001) Signature Algorithm: sha256WithRSAEncryption 9e:39:fe:c8:44:5c:87:2c:8f:f4:24:f6:0c:9a:2f:64:84:d1: . . . [omitted for brevity] . . . 5f:78:8e:e8 $ openssl rsa -noout -text -in MYSELFSIGNED.key Private-Key: (2048 bit) modulus: 00:bb:e8:11:21:d9:4b:88:53:8b:6c:5a:7a:38:8b: . . . [omitted for brevity] . . . bf:77 publicExponent: 65537 (0x10001) privateExponent: 0a:06:0f:23:e7:1b:88:62:2c:85:d3:2d:c1:e6:6e: . . . [omitted for brevity] . . . 9c:e1:e0:0a:52:77:29:4a:75:aa:02:d8:af:53:24: c1 prime1: 00:ea:12:02:bb:5a:0f:5a:d8:a9:95:b2:ba:30:15: . . . [omitted for brevity] . . . 5b:ca:9c:7c:19:48:77:1e:5d prime2: 00:cd:82:da:84:71:1d:18:52:cb:c6:4d:74:14:be: . . . [omitted for brevity] . . . 5f:db:d5:5e:47:89:a7:ef:e3 exponent1: 32:37:62:f6:a6:bf:9c:91:d6:f0:12:c3:f7:04:e9: . . . [omitted for brevity] . . . 97:3e:33:31:89:66:64:d1 exponent2: 00:88:a2:e8:90:47:f8:75:34:8f:41:50:3b:ce:93: . . . [omitted for brevity] . . . ff:74:d4:be:f3:47:45:bd:cb coefficient: 4d:7c:09:4c:34:73:c4:26:f0:58:f5:e1:45:3c:af: . . . [omitted for brevity] . . . af:01:5f:af:ad:6a:09:bf Step 2: Sign the ELF File object By now you should have your private key, and obtained, by hook or crook, a cert (either from a CA or use one you created (a self-signed cert). The next step is to sign one or more objects with your private key and cert. Here's a simple example that creates an object file, signs, verifies, and lists the contents of the ELF signature. $ echo '#include <stdio.h>\nint main(){printf("Hello\\n");}'>hello.c $ make hello cc -o hello hello.c $ elfsign verify -v -c MYSELFSIGNED.pem -e hello elfsign: no signature found in hello. $ elfsign sign -F rsa_sha256 -v -k MYSELFSIGNED.key -c MYSELFSIGNED.pem -e hello elfsign: hello signed successfully. format: rsa_sha256. signer: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com. signed on: October 17, 2013 04:22:49 PM PDT. 
    $ elfsign list -f format -e hello
    rsa_sha256
    $ elfsign list -f signer -e hello
    O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com
    $ elfsign list -f time -e hello
    October 17, 2013 04:22:49 PM PDT
    $ elfsign verify -v -c MYSELFSIGNED.key -e hello
    elfsign: verification of hello failed.
    format: rsa_sha256.
    signer: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com.
    signed on: October 17, 2013 04:22:49 PM PDT.

    Signing using the pkcs11 keystore

    To sign the ELF file using a private key in the secure pkcs11 keystore, replace "-k MYSELFSIGNED.key" in the "elfsign sign" command line with "-T MYPRIVATEKEY", where MYPRIVATEKEY is the pkcs11 token label.

    Step 3: Install the cert and test on another system

    Just signing the object isn't enough. You need to copy or install the cert and the signed ELF file(s) on another system to test that the signature is OK. Your public key cert should be installed in /etc/certs. Use elfsign verify to verify the signature. Elfsign verify checks each cert in /etc/certs until it finds one that matches the elfsign signature in the file. If one isn't found, the verification fails. Here's an example:

    $ su
    Password:
    # rm /etc/certs/MYSELFSIGNED.key
    # cp MYSELFSIGNED.pem /etc/certs
    # exit
    $ elfsign verify -v hello
    elfsign: verification of hello passed.
    format: rsa_sha256.
    signer: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com.
    signed on: October 17, 2013 04:24:20 PM PDT.

    After testing, package your cert along with your ELF object to allow elfsign verification after your cert and object are installed or copied.

    Under the Hood: elfsign verification

    Here are the steps taken to verify an ELF file signed with elfsign. The steps to sign the file are similar, except the private key exponent is used instead of the public key exponent, and the .SUNW_signature section is written to the ELF file instead of being read from it.

    1. Generate a digest (SHA-256) of the ELF file sections. This digest uses all ELF sections loaded in memory, but excludes the ELF header, the .SUNW_signature section, and the symbol table.
    2. Extract the RSA signature (RSA-2048) from the .SUNW_signature section.
    3. Extract the RSA public key modulus and public key exponent (65537) from the public key cert.
    4. Calculate the expected digest as follows: signature ^ publicKeyExponent % publicKeyModulus (that is, raise the signature to the public key exponent, modulo the public key modulus; see the openssl sketch after the reference list below).
    5. Strip the PKCS#1 padding (most significant bytes) from the above. The padding is 0x00, 0x01, 0xff, 0xff, . . ., 0xff, 0x00.
    6. If the actual digest == expected digest, the ELF file is verified (OK).

    Further Information

    elfsign(1), pktool(1), and openssl(1) man pages.
    "Signed Solaris 10 Binaries?" blog by Darren Moffat (2005) shows how to use elfsign.
    "Simple CLI based CA on Solaris" blog by Darren Moffat (2008) shows how to set up a simple CA for use with self-signed certificates.
    "How to Create a Certificate by Using the pktool gencert Command," System Administration Guide: Security Services (available at docs.oracle.com)
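    As a quick illustration of step 4 above, assuming you have already extracted the raw RSA signature bytes from the .SUNW_signature section into a file sig.bin (elfsign does this internally; the extraction itself is left hypothetical here), openssl can perform the raw RSA public key operation and show the PKCS#1 padded digest:

    $ openssl rsautl -verify -raw -certin -inkey MYSELFSIGNED.pem -in sig.bin | xxd
    (expected output, roughly: 0001 ffff ... ff00 followed by the 32-byte SHA-256 digest)

    This is only a sketch of the math, not a replacement for elfsign verify, which also recomputes the digest over the ELF sections and compares the two.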

    Read the article

  • Know more about Enqueue Deadlock Detection

    - by Liu Maclean
    A question raised at an ORACLE ALLSTAR session: how does Oracle detect an enqueue deadlock? The commonly quoted answer is that enqueue deadlock detection runs about every 3 seconds, so in the classic scenario the process that ends up raising the ORA-00060 deadlock error waits roughly 3s first. The following test on 10.2.0.5 reproduces it:

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bi
PL/SQL Release 10.2.0.5.0 - Production
CORE 10.2.0.5.0 Production
TNS for Linux: Version 10.2.0.5.0 - Production
NLSRTL Version 10.2.0.5.0 - Production

SQL> select * from global_name;

GLOBAL_NAME
--------------------------------------------------------------------------------
www.oracledatabase12g.com

PROCESS A:
set timing on;
update maclean1 set t1=t1+1;

PROCESS B:
update maclean2 set t1=t1+1;

PROCESS A:
update maclean2 set t1=t1+1;

PROCESS B:
update maclean1 set t1=t1+1;

After about 3s, PROCESS A reports:

ERROR at line 1:
ORA-00060: deadlock detected while waiting for resource
Elapsed: 00:00:03.02

Process A, the session that hit the deadlock, waited about 3s before the error was raised. Is this interval fixed, or can it be tuned? Let's look at the relevant hidden parameter:

SQL> col name for a30
SQL> col value for a5
SQL> col DESCRIB for a50
SQL> set linesize 140 pagesize 1400
SQL> SELECT x.ksppinm NAME, y.ksppstvl VALUE, x.ksppdesc describ
  2  FROM SYS.x$ksppi x, SYS.x$ksppcv y
  3  WHERE x.inst_id = USERENV ('Instance')
  4  AND y.inst_id = USERENV ('Instance')
  5  AND x.indx = y.indx
  6  AND x.ksppinm='_enqueue_deadlock_scan_secs';

NAME                           VALUE DESCRIB
------------------------------ ----- --------------------------------------------------
_enqueue_deadlock_scan_secs    0     deadlock scan interval

SQL> alter system set "_enqueue_deadlock_scan_secs"=18 scope=spfile;
System altered.
Elapsed: 00:00:00.01

SQL> startup force;
ORACLE instance started.
Total System Global Area  851443712 bytes
Fixed Size                  2100040 bytes
Variable Size             738198712 bytes
Database Buffers          104857600 bytes
Redo Buffers                6287360 bytes
Database mounted.
Database opened.

PROCESS A:
SQL> set timing on;
SQL> update maclean1 set t1=t1+1;
1 row updated.
Elapsed: 00:00:00.06

Process B:
SQL> update maclean2 set t1=t1+1;
1 row updated.
SQL> update maclean1 set t1=t1+1;

Process A:
SQL> alter session set events '10704 trace name context forever,level 10:10046 trace name context forever,level 8';
Session altered.

SQL> update maclean2 set t1=t1+1;
update maclean2 set t1=t1+1
*
ERROR at line 1:
ORA-00060: deadlock detected while waiting for resource
Elapsed: 00:00:18.05

The 10704/10046 trace shows:

ksqcmi: TX,90011,4a9 mode=6 timeout=21474836
WAIT #12: nam='enq: TX - row lock contention' ela= 2930070 name|mode=1415053318 usn<<16 | slot=589841 sequence=1193 obj#=56810 tim=1308114759849120
WAIT #12: nam='enq: TX - row lock contention' ela= 2930636 name|mode=1415053318 usn<<16 | slot=589841 sequence=1193 obj#=56810 tim=1308114762779801
WAIT #12: nam='enq: TX - row lock contention' ela= 2930439 name|mode=1415053318 usn<<16 | slot=589841 sequence=1193 obj#=56810 tim=1308114765710430
*** 2012-06-12 09:58:43.089
WAIT #12: nam='enq: TX - row lock contention' ela= 2931698 name|mode=1415053318 usn<<16 | slot=589841 sequence=1193 obj#=56810 tim=1308114768642192
WAIT #12: nam='enq: TX - row lock contention' ela= 2930428 name|mode=1415053318 usn<<16 | slot=589841 sequence=1193 obj#=56810 tim=1308114771572755
WAIT #12: nam='enq: TX - row lock contention' ela= 2931408 name|mode=1415053318 usn<<16 | slot=589841 sequence=1193 obj#=56810 tim=1308114774504207
DEADLOCK DETECTED ( ORA-00060 )
[Transaction Deadlock]
The following deadlock is not an ORACLE error. It is a deadlock due to user error in the design of an application or from issuing incorrect ad-hoc SQL.
The following information may aid in determining the deadlock:

As the trace shows, Process A waited on 'enq: TX - row lock contention' for 18s rather than 3s before ORA-00060 was raised, matching the hidden parameter "_enqueue_deadlock_scan_secs" we set (its default is 0).

Let's dig a little deeper:

SQL> alter system set "_enqueue_deadlock_scan_secs"=4 scope=spfile;
System altered.
Elapsed: 00:00:00.01

SQL> alter system set "_enqueue_deadlock_time_sec"=9 scope=spfile;
System altered.
Elapsed: 00:00:00.00

SQL> startup force;
ORACLE instance started.
Total System Global Area  851443712 bytes
Fixed Size                  2100040 bytes
Variable Size             738198712 bytes
Database Buffers          104857600 bytes
Redo Buffers                6287360 bytes
Database mounted.
Database opened.

SQL> set linesize 140 pagesize 1400
SQL> show parameter dead

NAME                                 TYPE                             VALUE
------------------------------------ -------------------------------- ------------------------------
_enqueue_deadlock_scan_secs          integer                          4
_enqueue_deadlock_time_sec           integer                          9

SQL> set timing on
SQL> select * from maclean1 for update wait 8;
        T1
----------
        11
Elapsed: 00:00:00.01

PROCESS B:
SQL> select * from maclean2 for update wait 8;
        T1
----------
         3
SQL> select * from maclean1 for update wait 8;
select * from maclean1 for update wait 8

PROCESS A:
SQL> select * from maclean2 for update wait 8;
select * from maclean2 for update wait 8
*
ERROR at line 1:
ORA-30006: resource busy; acquire with WAIT timeout expired
Elapsed: 00:00:08.00

This is interesting. The select for update wait set an enqueue request timeout of 8s. With "_enqueue_deadlock_scan_secs"=4 (deadlock scan interval), the deadlock could have been detected after about 4s, yet Process A never reported a deadlock: it simply raised "ORA-30006: resource busy; acquire with WAIT timeout expired" after the full 8s instead of ORA-00060. In other words, no deadlock detection was performed for Process A.

The reason is the other hidden parameter, "_enqueue_deadlock_time_sec" (requests with timeout <= this will not have deadlock detection). When the enqueue request timeout is less than "_enqueue_deadlock_time_sec" (default 5, so timeout < 5s), the server process skips deadlock detection for the request, on the assumption that a request with a short timeout will give up soon anyway. Only when the timeout exceeds "_enqueue_deadlock_time_sec" does Oracle perform deadlock detection for it.

The next test confirms this:

SQL> show parameter dead

NAME                                 TYPE                             VALUE
------------------------------------ -------------------------------- ------------------------------
_enqueue_deadlock_scan_secs          integer                          4
_enqueue_deadlock_time_sec           integer                          9

Process A:
SQL> set timing on;
SQL> select * from maclean1 for update wait 10;
        T1
----------
        11

Process B:
SQL> select * from maclean2 for update wait 10;
        T1
----------
         3
SQL> select * from maclean1 for update wait 10;

PROCESS A:
SQL> select * from maclean2 for update wait 10;
select * from maclean2 for update wait 10
*
ERROR at line 1:
ORA-00060: deadlock detected while waiting for resource
Elapsed: 00:00:06.02

This time select for update wait 10 specified a 10s timeout. Because 10s exceeds "_enqueue_deadlock_time_sec" (9s), Process A was subject to deadlock detection. But the deadlock was detected after about 6s, not the 4s set by "_enqueue_deadlock_scan_secs". Repeated tests show that the effective detection time is "_enqueue_deadlock_scan_secs" rounded up to the next multiple of 3.

To summarize, enqueue deadlock detection behaves as follows (see the Python sketch after this summary):

1. In a single-instance environment, deadlock detection happens after about 3s by default. The interval is controlled by the hidden parameter _enqueue_deadlock_scan_secs (deadlock scan interval), default 0, and the actual detection time is _enqueue_deadlock_scan_secs rounded up to the nearest multiple of 3: _enqueue_deadlock_scan_secs=0 detects in about 3s, _enqueue_deadlock_scan_secs=4 in about 6s, and so on.

2. The hidden parameter _enqueue_deadlock_time_sec (requests with timeout <= this will not have deadlock detection, default 5) also matters. When the enqueue request timeout is smaller than _enqueue_deadlock_time_sec, the server process performs no deadlock detection for that request; only when the timeout exceeds _enqueue_deadlock_time_sec does the _enqueue_deadlock_scan_secs-based detection described above apply. The request timeout here is the TIMEOUT in select for update wait [TIMEOUT].
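As a toy model, these two rules can be captured in a few lines of Python. This is my inference from the tests in this post, not documented Oracle behavior:

import math

def ora_00060_after(scan_secs=0, time_sec=5, request_timeout=None):
    """Seconds until ORA-00060, or None if deadlock detection is skipped
    (the session then just hits ORA-30006 when its wait expires)."""
    # Rule 2: requests with timeout <= _enqueue_deadlock_time_sec
    # get no deadlock detection at all.
    if request_timeout is not None and request_timeout <= time_sec:
        return None
    # Rule 1: detection fires at _enqueue_deadlock_scan_secs rounded up
    # to the next multiple of 3, with a 3s minimum.
    return max(3, 3 * math.ceil(scan_secs / 3))

print(ora_00060_after())                                              # 3    -> the default 3s case
print(ora_00060_after(scan_secs=18))                                  # 18   -> the 18.05s test
print(ora_00060_after(scan_secs=4, time_sec=9, request_timeout=8))    # None -> ORA-30006 at 8s
print(ora_00060_after(scan_secs=4, time_sec=9, request_timeout=10))   # 6    -> the 6.02s test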
Note: neither hidden parameter exists in 10.2.0.1; _enqueue_deadlock_time_sec was introduced in patchset 10.2.0.3 and _enqueue_deadlock_scan_secs in patchset 10.2.0.5.

In RAC, deadlock detection defaults to about 10s and is controlled by the hidden parameter _lm_dd_interval (dd time interval in seconds), which has existed since 8.0.6. Its default has changed across releases: about 60s in 10g and 10s in 11g. The lower 11g default is possible because the LMD deadlock-detection code was reworked; in earlier releases RAC LMD deadlock detection could consume significant CPU, and in 11g Oracle's RAC development team optimized LMD to limit its CPU usage.

To observe how the 11g LMD performs deadlock detection, we can use 11g UTS TRACE to watch the DD (deadlock detection) activity:

SQL> select * from v$version;
BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
CORE 11.2.0.3.0 Production
TNS for Linux: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production

SQL> select * from global_name
  2  ;
GLOBAL_NAME
--------------------------------------------------------------------------------
www.oracledatabase12g.com

SQL> alter system set "_lm_dd_interval"=20 scope=spfile;
System altered.

SQL> startup force;
ORACLE instance started.
Total System Global Area 1570009088 bytes
Fixed Size                  2228704 bytes
Variable Size            1325403680 bytes
Database Buffers          234881024 bytes
Redo Buffers                7495680 bytes
Database mounted.
Database opened.

SQL> set linesize 140 pagesize 1400
SQL> show parameter lm_dd
NAME                                 TYPE                             VALUE
------------------------------------ -------------------------------- ------------------------------
_lm_dd_interval                      integer                          20

SQL> select count(*) from gv$instance;
  COUNT(*)
----------
         2

instance 1:
SQL> oradebug setorapid 12
Oracle pid: 12, Unix process pid: 8608, image: [email protected] (LMD0)

Enable UTS TRACE on the LMD0 process so we can see the RAC-level deadlock-detection messages:

SQL> oradebug event 10046 trace name context forever,level 8:10708 trace name context forever,level 103: trace[rac.*] disk high;
Statement processed.
Elapsed: 00:00:00.00

SQL> update maclean1 set t1=t1+1;
1 row updated.

instance 2:
SQL> update maclean2 set t1=t1+1;
1 row updated.
SQL> update maclean1 set t1=t1+1;

Instance 1:
SQL> update maclean2 set t1=t1+1;
update maclean2 set t1=t1+1
*
ERROR at line 1:
ORA-00060: deadlock detected while waiting for resource
Elapsed: 00:00:20.51

The LMD0 UTS TRACE shows:

2012-06-12 22:27:00.929284 : [kjmpbmsg:process][type 22][msg 0x7fa620ac85a8][from 1][seq 8148.0][len 192]
2012-06-12 22:27:00.929346 : [kjmxmpm][type 22][seq 0.0][msg 0x7fa620ac85a8][from 1]
*** 2012-06-12 22:27:00.929
* kjddind: received DDIND msg with subtype x6
* reqp->dd_master_inst_kjxmddi == 1
* kjddind: dump sgh:
2012-06-12 22:27:00.929346*: kjddind: req->timestamp [0.15], kjddt [0.13]
2012-06-12 22:27:00.929346*: >> DDmsg:KJX_DD_REMOTE,TS[0.15],Inst 1->2,ddxid[id1,id2,inst:2097153,31,1],ddlock[0x95023930,829],ddMasterInst 1
2012-06-12 22:27:00.929346*: lock [0x95023930,829], op = [mast]
2012-06-12 22:27:00.929346*: reqp->timestamp [0.15], kjddt [0.13]
2012-06-12 22:27:00.929346*: kjddind: updated local timestamp [0.15]
* kjddind: case KJX_DD_REMOTE
2012-06-12 22:27:00.929346*: ADD IO NODE WFG: 0 frame pointer
2012-06-12 22:27:00.929346*: PUSH: type=res, enqueue(0xffffffff.0xffffffff)=0xbbb9af40, block=KJUSEREX, snode=1
2012-06-12 22:27:00.929346*: PROCESS: type=res, enqueue(0xffffffff.0xffffffff)=0xbbb9af40, block=KJUSEREX, snode=1
2012-06-12 22:27:00.929346*: POP: type=res, enqueue(0xffffffff.0xffffffff)=0xbbb9af40, block=KJUSEREX, snode=1
2012-06-12 22:27:00.929346*: kjddopr[TX 0xe000c.0x32][ext 0x5,0x0]: blocking lock 0xbbb9a800, owner 2097154 of inst 2
2012-06-12 22:27:00.929346*: PUSH: type=txn, enqueue(0xffffffff.0xffffffff)=0xbbb9a800, block=KJUSEREX, snode=1
2012-06-12 22:27:00.929346*: PROCESS: type=txn, enqueue(0xffffffff.0xffffffff)=0xbbb9a800, block=KJUSEREX, snode=1
2012-06-12 22:27:00.929346*: ADD NODE TO WFG: type=txn, enqueue(0xffffffff.0xffffffff)=0xbbb9a800, block=KJUSEREX, snode=1
2012-06-12 22:27:00.929346*: POP: type=txn, enqueue(0xffffffff.0xffffffff)=0xbbb9a800, block=KJUSEREX, snode=1
2012-06-12 22:27:00.929346*: kjddopt: converting lock 0xbbce92f8 on 'TX' 0x80016.0x5d4,txid [2097154,34]of inst 2
2012-06-12 22:27:00.929346*: PUSH: type=res, enqueue(0xffffffff.0xffffffff)=0xbbce92f8, block=KJUSEREX, snode=1
2012-06-12 22:27:00.929346*: PROCESS: type=res, enqueue(0xffffffff.0xffffffff)=0xbbce92f8, block=KJUSEREX, snode=1
2012-06-12 22:27:00.929346*: ADD NODE TO WFG: type=res, enqueue(0xffffffff.0xffffffff)=0xbbce92f8, block=KJUSEREX, snode=1
2012-06-12 22:27:00.929855 : GSIPC:AMBUF: rcv buff 0x7fa620aa8cd8, pool rcvbuf, rqlen 1102
2012-06-12 22:27:00.929878 : GSIPC:GPBMSG: new bmsg 0x7fa620aa8d48 mb 0x7fa620aa8cd8 msg 0x7fa620aa8d68 mlen 192 dest x100 flushsz -1
2012-06-12 22:27:00.929878*: << DDmsg:KJX_DD_REMOTE,TS[0.15],Inst 2->1,ddxid[id1,id2,inst:2097153,31,1],ddlock[0x95023930,829],ddMasterInst 1
2012-06-12 22:27:00.929878*: lock [0xbbce92f8,287], op = [mast]
2012-06-12 22:27:00.929878*: ADD IO NODE WFG: 0 frame pointer
2012-06-12 22:27:00.929923 : [kjmpbmsg:compl][msg 0x7fa620ac8588][typ p][nmsgs 1][qtime 0][ptime 0]
2012-06-12 22:27:00.929947 : GSIPC:PBAT: flush start. flag 0x79 end 0 inc 4.4
2012-06-12 22:27:00.929963 : GSIPC:PBAT: send bmsg 0x7fa620aa8d48 blen 224 dest 1.0
2012-06-12 22:27:00.929979 : GSIPC:SNDQ: enq msg 0x7fa620aa8d48, type 65521 seq 8325, inst 1, receiver 0, queued 1
2012-06-12 22:27:00.929996 : GSIPC:BSEND: flushing sndq 0xb491dd28, id 0, dcx 0xbc517770, inst 1, rcvr 0 qlen 0 1
2012-06-12 22:27:00.930014 : GSIPC:BSEND: no batch1 msg 0x7fa620aa8d48 type 65521 len 224 dest (1:0)
2012-06-12 22:27:00.930088 : kjbsentscn[0x0.3f72dc][to 1]
2012-06-12 22:27:00.930144 : GSIPC:SENDM: send msg 0x7fa620aa8d48 dest x10000 seq 8325 type 65521 tkts x1 mlen xe00110
2012-06-12 22:27:00.930531 : GSIPC:KSXPCB: msg 0x7fa620aa8d48 status 30, type 65521, dest 1, rcvr 0
WAIT #0: nam='ges remote message' ela= 1372 waittime=80 loop=0 p3=74 obj#=-1 tim=1339554420931640
2012-06-12 22:27:00.931728 : GSIPC:RCVD: ksxp msg 0x7fa620af6490 sndr 1 seq 0.8149 type 65521 tkts 1
2012-06-12 22:27:00.931746 : GSIPC:RCVD: watq msg 0x7fa620af6490 sndr 1, seq 8149, type 65521, tkts 1
2012-06-12 22:27:00.931763 : GSIPC:RCVD: seq update (0.8148)->(0.8149) tp -15 fg 0x4 from 1 pbattr 0x0
2012-06-12 22:27:00.931779 : GSIPC:TKT: collect msg 0x7fa620af6490 from 1 for rcvr 0, tickets 1
2012-06-12 22:27:00.931794 : kjbrcvdscn[0x0.3f72dc][from 1][idx
2012-06-12 22:27:00.931810 : kjbrcvdscn[no bscn
dd_master_inst_kjxmddi == 1
* kjddind: dump sgh:
NXTIN (nil) 0 wq 0 cvtops x0 0x0.0x0(ext 0x0,0x0)[0000-0000-00000000] inst 1
BLOCKER 0xbbb9a800 5 wq 1 cvtops x28 TX 0xe000c.0x32(ext 0x5,0x0)[20000-0002-00000022] inst 2
BLOCKED 0xbbce92f8 5 wq 2 cvtops x1 TX 0x80016.0x5d4(ext 0x2,0x0)[20000-0002-00000022] inst 2
NXTOUT (nil) 0 wq 0 cvtops x0 0x0.0x0(ext 0x0,0x0)[0000-0000-00000000] inst 1
2012-06-12 22:27:00.932058*: kjddind: req->timestamp [0.15], kjddt [0.15]
2012-06-12 22:27:00.932058*: >> DDmsg:KJX_DD_VALIDATE,TS[0.15],Inst 1->2,ddxid[id1,id2,inst:2097153,31,1],ddlock[0x95023930,829],ddMasterInst 1
2012-06-12 22:27:00.932058*: lock [(nil),0], op = [vald_dd]
2012-06-12 22:27:00.932058*: kjddind: updated local timestamp [0.15]
* kjddind: case KJX_DD_VALIDATE
*** 2012-06-12 22:27:00.932
* kjddvald called: kjxmddi stuff:
* cont_lockp (nil)
* dd_lockp 0x95023930
* dd_inst 1
* dd_master_inst 1
* sgh graph:
NXTIN (nil) 0 wq 0 cvtops x0 0x0.0x0(ext 0x0,0x0)[0000-0000-00000000] inst 1
BLOCKER 0xbbb9a800 5 wq 1 cvtops x28 TX 0xe000c.0x32(ext 0x5,0x0)[20000-0002-00000022] inst 2
BLOCKED 0xbbce92f8 5 wq 2 cvtops x1 TX 0x80016.0x5d4(ext 0x2,0x0)[20000-0002-00000022] inst 2
NXTOUT (nil) 0 wq 0 cvtops x0 0x0.0x0(ext 0x0,0x0)[0000-0000-00000000] inst 1
POP WFG NODE: lock=(nil)
* kjddvald: dump the PRQ:
BLOCKER 0xbbb9a800 5 wq 1 cvtops x28 TX 0xe000c.0x32(ext 0x5,0x0)[20000-0002-00000022] inst 2
BLOCKED 0xbbce92f8 5 wq 2 cvtops x1 TX 0x80016.0x5d4(ext 0x2,0x0)[20000-0002-00000022] inst 2
* kjddvald: KJDD_NXTONOD ->node_kjddsg.dinst_kjddnd =1
* kjddvald: ... which is not my node, my subgraph is validated but the cycle is not complete
Global blockers dump start:---------------------------------
DUMP LOCAL BLOCKER/HOLDER: block level 5 res [0x80016][0x5d4],[TX][ext 0x2,0x0]

The deadlock is found! This confirms that on 11.2.0.3 RAC, LMD detected the deadlock after the 20s we set via "_lm_dd_interval". Recall that in 10g, where _lm_dd_interval defaulted to 60s, a deadlocked server process in RAC might hang for a full 60 seconds before the deadlock was detected; in 11g (with the optimized LMD code) the default was lowered, so RAC enqueue deadlock detection generally completes within about 10s.
In 11g, besides "_lm_dd_interval", several other hidden parameters influence the RAC LMD's deadlock detection; here is the full list:

SQL> col name for a50
SQL> col describ for a60
SQL> col value for a20
SQL> set linesize 140 pagesize 1400
SQL> SELECT x.ksppinm NAME, y.ksppstvl VALUE, x.ksppdesc describ
  2  FROM SYS.x$ksppi x, SYS.x$ksppcv y
  3  WHERE x.inst_id = USERENV ('Instance')
  4  AND y.inst_id = USERENV ('Instance')
  5  AND x.indx = y.indx
  6  AND x.ksppinm like '_lm_dd%';

NAME                                               VALUE                DESCRIB
-------------------------------------------------- -------------------- ------------------------------------------------------------
_lm_dd_interval                                    20                   dd time interval in seconds
_lm_dd_scan_interval                               5                    dd scan interval in seconds
_lm_dd_search_cnt                                  3                    number of dd search per token get
_lm_dd_max_search_time                             180                  max dd search time per token
_lm_dd_maxdump                                     50                   max number of locks to be dumped during dd validation
_lm_dd_ignore_nodd                                 FALSE                if TRUE nodeadlockwait/nodeadlockblock options are ignored

6 rows selected.
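As with the single-instance case, the RAC timing observed here reduces to a one-line toy model. This is again an inference from the 11.2.0.3 test above, not documented behavior:

def rac_ora_00060_after(lm_dd_interval=10):
    """Approximate seconds until LMD reports ORA-00060 in RAC (11g default 10)."""
    return lm_dd_interval

print(rac_ora_00060_after(20))  # 20 -> matches the Elapsed: 00:00:20.51 test
print(rac_ora_00060_after())    # 10 -> 11g default; 10g defaulted to 60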

    Read the article
