Search Results

Search found 25554 results on 1023 pages for 'oracle solaris 11 express'.


  • Oracle 12c RAC Flex Cluster: Hub Nodes and Leaf Nodes

    - by Liu Maclean
    Oracle 12c reworks the cluster architecture and introduces the Flex Cluster and Flex ASM, which split cluster members into two roles: Hub Nodes and Leaf Nodes. Hub Node: "A node in an Oracle Flex Cluster that is tightly connected with other servers and has direct access to a shared disk." Leaf Node: "Servers that are loosely coupled with Hub Nodes, which may not have direct access to the shared storage." In other words, a Leaf Node does not need access to the shared storage, so it has no shared-disk requirement. A Hub Node behaves like a conventional 12c cluster node, while a Leaf Node is lighter weight: it attaches to a Hub Node, joins the cluster through that Hub Node, and fails over to another Hub Node if its Hub Node leaves the cluster. The Flex Cluster uses a hub-and-spoke topology to cut down the number of cluster interconnections: only Hub Nodes access the OCR and voting disks, so Leaf Nodes need no direct path to them. In a conventional cluster every node is connected to every other node, so a 12-node cluster needs [n * (n-1)]/2 = 66 interconnections, and a 1000-node cluster would need 499,500. A 1000-node Flex Cluster arranged as 40 Hub Nodes, each serving 24 Leaf Nodes, needs only about 1,740 connections, which the cluster software manages far more easily. If a Hub Node fails, its Leaf Nodes can be relocated to another Hub Node; if a Leaf Node fails, its workload can be relocated to another Leaf Node.

    Read the article

  • How can I count the number of unique instances of an IP address in the following string in Ruby?

    - by kamal
    "10.1.3.1" "10.1.3.1" "10.1.3.1" "10.1.3.1" "10.1.3.1" "10.1.3.1" "10.1.3.1" "10.1.3.1" "10.1.3.1" "10.1.3.1" "10.1.3.1" "10.1.3.1" nil "10.1.3.4" "10.1.3.4" "10.1.3.4" "10.1.3.4" "10.1.3.4" "10.1.3.4" "10.1.3.4" "10.1.3.4" "10.1.3.4" "10.1.3.4" "10.1.3.4" "10.1.3.4" nil "10.1.3.10" "10.1.3.10" "10.1.3.10" "10.1.3.10" "10.1.3.10" "10.1.3.10" "10.1.3.10" "10.1.3.10" "10.1.3.10" "10.1.3.10" "10.1.3.10" "10.1.3.10" nil "10.1.3.11" "10.1.3.11" "10.1.3.11" "10.1.3.11" "10.1.3.11" "10.1.3.11" "10.1.3.11" "10.1.3.11" "10.1.3.11" "10.1.3.11" "10.1.3.11" nil "10.1.3.12" "10.1.3.12" "10.1.3.12" "10.1.3.12" "10.1.3.12" "10.1.3.12" "10.1.3.12" "10.1.3.12" "10.1.3.12" "10.1.3.12" "10.1.3.12" nil "10.1.3.30" "10.1.3.30" nil "10.1.3.38" "10.1.3.38" "10.1.3.38" "10.1.3.38" "10.1.3.38" nil "10.1.3.55" "10.1.3.55" "10.1.3.55" "10.1.3.55" "10.1.3.55" "10.1.3.55" "10.1.3.55" "10.1.3.55" "10.1.3.55" "10.1.3.55" "10.1.3.55" "10.1.3.55" "10.1.3.55" "10.1.3.55" "10.1.3.55" "10.1.3.55" "10.1.3.55" "10.1.3.55" "10.1.3.55" "10.1.3.55" "10.1.3.55" "10.1.3.55" "10.1.3.55" "10.1.3.55" "10.1.3.55" "10.1.3.55" "10.1.3.55" "10.1.3.55" "10.1.3.55" "10.1.3.55" nil "10.1.3.60" "10.1.3.60" "10.1.3.60" "10.1.3.60" "10.1.3.60" "10.1.3.60" "10.1.3.60" nil "10.1.3.66" "10.1.3.66" "10.1.3.66" "10.1.3.66" "10.1.3.66" "10.1.3.66" "10.1.3.66" nil "10.1.3.101" "10.1.3.101" nil "10.1.3.102" "10.1.3.102" "10.1.3.102" "10.1.3.102" "10.1.3.102" "10.1.3.102" "10.1.3.102" "10.1.3.102" "10.1.3.102" "10.1.3.102" "10.1.3.102" "10.1.3.102" "10.1.3.102" "10.1.3.102" "10.1.3.102" "10.1.3.102" "10.1.3.102" "10.1.3.102" "10.1.3.102" "10.1.3.102" "10.1.3.102" "10.1.3.102" nil "10.1.3.103" "10.1.3.103" "10.1.3.103" "10.1.3.103" "10.1.3.103" "10.1.3.103" "10.1.3.103" "10.1.3.103" "10.1.3.103" "10.1.3.103" "10.1.3.103" "10.1.3.103" "10.1.3.103" "10.1.3.103" "10.1.3.103" "10.1.3.103" "10.1.3.103" "10.1.3.103" "10.1.3.103" "10.1.3.103" "10.1.3.103" "10.1.3.103" "10.1.3.103" "10.1.3.103" "10.1.3.103" "10.1.3.103" "10.1.3.103" "10.1.3.103" "10.1.3.103" "10.1.3.103" "10.1.3.103" "10.1.3.103" "10.1.3.103" "10.1.3.103" "10.1.3.103" "10.1.3.103" "10.1.3.103" "10.1.3.103" "10.1.3.103" "10.1.3.103" "10.1.3.103" "10.1.3.103" "10.1.3.103" "10.1.3.103" nil "10.1.3.104" "10.1.3.104" nil "10.1.3.106" "10.1.3.106" nil "10.1.3.107" "10.1.3.107" "10.1.3.107" "10.1.3.107" "10.1.3.107" "10.1.3.107" "10.1.3.107" nil "10.1.3.108" "10.1.3.108" "10.1.3.108" "10.1.3.108" "10.1.3.108" "10.1.3.108" nil "10.1.3.110" "10.1.3.110" "10.1.3.110" "10.1.3.110" "10.1.3.110" nil the above string is stdout of: #!/usr/bin/ruby require "rubygems" require "fastercsv" scannedIPs = Hash.new(0) count = 0 FCSV.foreach("HOUND-1.csv", :headers => true, :skip_blanks => false) do |row| text = row[1] p text end

    Read the article

  • How to find the insertion point in an array using binary search?

    - by ????
    The basic idea of binary search in an array is simple, but it might return an "approximate" index if the search fails to find the exact item. (we might sometimes get back an index for which the value is larger or smaller than the searched value). For looking for the exact insertion point, it seems that after we got the approximate location, we might need to "scan" to left or right for the exact insertion location, so that, say, in Ruby, we can do arr.insert(exact_index, value) I have the following solution, but the handling for the part when begin_index >= end_index is a bit messy. I wonder if a more elegant solution can be used? (this solution doesn't care to scan for multiple matches if an exact match is found, so the index returned for an exact match may point to any index that correspond to the value... but I think if they are all integers, we can always search for a - 1 after we know an exact match is found, to find the left boundary, or search for a + 1 for the right boundary.) My solution: DEBUGGING = true def binary_search_helper(arr, a, begin_index, end_index) middle_index = (begin_index + end_index) / 2 puts "a = #{a}, arr[middle_index] = #{arr[middle_index]}, " + "begin_index = #{begin_index}, end_index = #{end_index}, " + "middle_index = #{middle_index}" if DEBUGGING if arr[middle_index] == a return middle_index elsif begin_index >= end_index index = [begin_index, end_index].min return index if a < arr[index] && index >= 0 #careful because -1 means end of array index = [begin_index, end_index].max return index if a < arr[index] && index >= 0 return index + 1 elsif a > arr[middle_index] return binary_search_helper(arr, a, middle_index + 1, end_index) else return binary_search_helper(arr, a, begin_index, middle_index - 1) end end # for [1,3,5,7,9], searching for 6 will return index for 7 for insertion # if exact match is found, then return that index def binary_search(arr, a) puts "\nSearching for #{a} in #{arr}" if DEBUGGING return 0 if arr.empty? 
result = binary_search_helper(arr, a, 0, arr.length - 1) puts "the result is #{result}, the index for value #{arr[result].inspect}" if DEBUGGING return result end arr = [1,3,5,7,9] b = 6 arr.insert(binary_search(arr, b), b) p arr arr = [1,3,5,7,9,11] b = 6 arr.insert(binary_search(arr, b), b) p arr arr = [1,3,5,7,9] b = 60 arr.insert(binary_search(arr, b), b) p arr arr = [1,3,5,7,9,11] b = 60 arr.insert(binary_search(arr, b), b) p arr arr = [1,3,5,7,9] b = -60 arr.insert(binary_search(arr, b), b) p arr arr = [1,3,5,7,9,11] b = -60 arr.insert(binary_search(arr, b), b) p arr arr = [1] b = -60 arr.insert(binary_search(arr, b), b) p arr arr = [1] b = 60 arr.insert(binary_search(arr, b), b) p arr arr = [] b = 60 arr.insert(binary_search(arr, b), b) p arr and result: Searching for 6 in [1, 3, 5, 7, 9] a = 6, arr[middle_index] = 5, begin_index = 0, end_index = 4, middle_index = 2 a = 6, arr[middle_index] = 7, begin_index = 3, end_index = 4, middle_index = 3 a = 6, arr[middle_index] = 5, begin_index = 3, end_index = 2, middle_index = 2 the result is 3, the index for value 7 [1, 3, 5, 6, 7, 9] Searching for 6 in [1, 3, 5, 7, 9, 11] a = 6, arr[middle_index] = 5, begin_index = 0, end_index = 5, middle_index = 2 a = 6, arr[middle_index] = 9, begin_index = 3, end_index = 5, middle_index = 4 a = 6, arr[middle_index] = 7, begin_index = 3, end_index = 3, middle_index = 3 the result is 3, the index for value 7 [1, 3, 5, 6, 7, 9, 11] Searching for 60 in [1, 3, 5, 7, 9] a = 60, arr[middle_index] = 5, begin_index = 0, end_index = 4, middle_index = 2 a = 60, arr[middle_index] = 7, begin_index = 3, end_index = 4, middle_index = 3 a = 60, arr[middle_index] = 9, begin_index = 4, end_index = 4, middle_index = 4 the result is 5, the index for value nil [1, 3, 5, 7, 9, 60] Searching for 60 in [1, 3, 5, 7, 9, 11] a = 60, arr[middle_index] = 5, begin_index = 0, end_index = 5, middle_index = 2 a = 60, arr[middle_index] = 9, begin_index = 3, end_index = 5, middle_index = 4 a = 60, arr[middle_index] = 11, begin_index = 5, end_index = 5, middle_index = 5 the result is 6, the index for value nil [1, 3, 5, 7, 9, 11, 60] Searching for -60 in [1, 3, 5, 7, 9] a = -60, arr[middle_index] = 5, begin_index = 0, end_index = 4, middle_index = 2 a = -60, arr[middle_index] = 1, begin_index = 0, end_index = 1, middle_index = 0 a = -60, arr[middle_index] = 9, begin_index = 0, end_index = -1, middle_index = -1 the result is 0, the index for value 1 [-60, 1, 3, 5, 7, 9] Searching for -60 in [1, 3, 5, 7, 9, 11] a = -60, arr[middle_index] = 5, begin_index = 0, end_index = 5, middle_index = 2 a = -60, arr[middle_index] = 1, begin_index = 0, end_index = 1, middle_index = 0 a = -60, arr[middle_index] = 11, begin_index = 0, end_index = -1, middle_index = -1 the result is 0, the index for value 1 [-60, 1, 3, 5, 7, 9, 11] Searching for -60 in [1] a = -60, arr[middle_index] = 1, begin_index = 0, end_index = 0, middle_index = 0 the result is 0, the index for value 1 [-60, 1] Searching for 60 in [1] a = 60, arr[middle_index] = 1, begin_index = 0, end_index = 0, middle_index = 0 the result is 1, the index for value nil [1, 60] Searching for 60 in [] [60]
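    For comparison, a common way to avoid the messy begin_index >= end_index special-casing is an iterative "lower bound" search over the half-open range [0, arr.length); this is a sketch of that idea, not a drop-in replacement for the instrumented version above:

        # Returns the first index whose element is >= a, i.e. the insertion
        # point; returns arr.length when a is greater than every element.
        def insertion_index(arr, a)
          lo, hi = 0, arr.length
          while lo < hi
            mid = (lo + hi) / 2
            if arr[mid] < a
              lo = mid + 1   # a belongs somewhere to the right of mid
            else
              hi = mid       # arr[mid] is a candidate; narrow to the left half
            end
          end
          lo
        end

        arr = [1, 3, 5, 7, 9]
        arr.insert(insertion_index(arr, 6), 6)
        p arr   # => [1, 3, 5, 6, 7, 9]

    Because the returned index is always the leftmost position with arr[index] >= a, it also serves as the left boundary when duplicates are present.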

    Read the article

  • Makefile error: Unexpected end of line seen

    - by Winston C. Yang
    Trying to install Git, I ran configure and make, but got the following error message:

        make: Fatal error in reader: Makefile, line 221: Unexpected end of line seen

    The Makefile looks like:

        218: GIT-VERSION-FILE: FORCE
        219:         @$(SHELL_PATH) ./GIT-VERSION-GEN
        220: -include GIT-VERSION-FILE
        221:
        222: uname_S := $(shell sh -c 'uname -s 2>/dev/null || echo not')

    What's causing the error? The following information may or may not be relevant: I tried to install Git 1.7.0.3 on SunOS 5.9 (Solaris 9) in a directory in my account. The gcc version is 3.4.2 (older than the 3.4.6 stated by sunfreeware.com). I don't have root privileges.
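    For what it's worth, -include and the := assignment on lines 220 and 222 are GNU make syntax, which the bundled Solaris /usr/ccs/bin/make cannot parse; building with GNU make sidesteps the parse error. A hedged sketch (the gmake path varies by system and is an assumption here):

        # GNU make is commonly installed as gmake on Solaris
        /usr/sfw/bin/gmake prefix=$HOME/local all       # or simply: gmake ...
        /usr/sfw/bin/gmake prefix=$HOME/local install   # prefix under $HOME avoids needing root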

    Read the article

  • Error premature end of file pops up when accessing a URL

    - by kayteen
    Hi, I am using ColdFusion 8.0.1 on Solaris 10, and when I try to run the URL http://IPADDRESS/flex2gateway/http I receive the error message "Premature end of file". Please help me out if I am missing any installation or fix. Error details:

        [Flex] Premature end of file.
        flex.messaging.MessageException: Premature end of file.
        at flex.messaging.io.amfx.AmfxMessageDeserializer.fatalError(AmfxMessageDeserializer.java:249)
        at org.apache.xerces.util.ErrorHandlerWrapper.fatalError(Unknown Source)
        at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source)
        at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source)
        at org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source)
        at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
        at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
        at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
        at org.apache.xerces.parsers.AbstractSAXParser.parse(Unknown Source)
        at javax.xml.parsers.SAXParser.parse(SAXParser.java:395)
        at javax.xml.parsers.SAXParser.parse(SAXParser.java:198)
        at flex.messaging.io.amfx.AmfxMessageDeserializer.parse(AmfxMessageDeserializer.java:103)
        at flex.messaging.io.amfx.AmfxMessageDeserializer.readMessage(AmfxMessageDeserializer.java:90)
        at flex.messaging.endpoints.amf.SerializationFilter.invoke(SerializationFilter.java:113)

    Read the article

  • SharePoint Visual web part and Oracle connection problem

    - by Rishi
    Hi, I'm trying to build a "visual web part" for SharePoint 2010 which should connect to an Oracle table and display records on a SharePoint page. For development, Oracle 11g client (with ODP.NET), SharePoint Server 2010, Visual Studio 2010 and Oracle 10g Express are all running on my machine. First, I've written sample code in an ASP.NET web app to connect to my local Oracle table and display data in a grid view, and it works fine. My code is:

        OracleConnection con;
        try
        {
            // Connect
            string constr = "Data Source=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=XE)));User Id=SYSTEM; Password=password";
            con = new OracleConnection(constr);
            // Open database connection
            con.Open();
            // Execute a SQL SELECT
            OracleCommand cmd = new OracleCommand("select * from T_ACTIONPOINTS WHERE AP_STATUS='Active' ", con);
            OracleDataReader dr = cmd.ExecuteReader();
            GridView.DataSource = dr;
            GridView.DataBind();
            GridView.AllowPaging = true;
        }
        catch (Exception e)
        {
            lblError.Text = e.Message;
        }

    Now, I'm trying to create a new "SharePoint" visual web part project using the same code and deploying it to my local SP server. But when it runs, I get an error (the error screenshot and Solution Explorer view are not reproduced here). It looks like something is wrong with compatibility. Can someone point me in the right direction?

    Read the article

  • JRun server crashes when a page has cfform, cfgrid, cflayout, etc.

    - by kayteen
    Hi, I am having a weird problem. I have an application that works perfectly on my development machine and UAT machine, both Windows 2003 Server with CF8. When I uploaded the same application to a Solaris box with CF8 and accessed the site, it worked perfectly until I hit a page that has CFFORM, CFLAYOUT, CFGRID, etc. The JRun server just crashes [jrpp-2 unexpected constant #48...]. There is nothing available in any of the logs. Please help me resolve this! Thanks, Bittoo More info: http://forums.adobe.com/thread/605411?tstart=0

    Read the article

  • Using Windows Previous Versions to access ZFS Snapshots (July 14, 2009)

    - by user12612012
    The Previous Versions tab on the Windows desktop provides a straightforward, intuitive way for users to view or recover files from ZFS snapshots. ZFS snapshots are read-only, point-in-time instances of a ZFS dataset, based on the same copy-on-write transactional model used throughout ZFS. ZFS snapshots can be used to recover deleted files or previous versions of files, and they are space efficient because unchanged data is shared between the file system and its snapshots. Snapshots are available locally via the .zfs/snapshot directory and remotely via Previous Versions on the Windows desktop. Shadow Copies for Shared Folders was introduced with Windows Server 2003 but subsequently renamed to Previous Versions with the release of Windows Vista and Windows Server 2008. Windows shadow copies, or snapshots, are based on the Volume Snapshot Service (VSS) and, as the [Shared Folders part of the] name implies, are accessible to clients via SMB shares, which is good news when using the Solaris CIFS Service. And the nice thing is that no additional configuration is required - it "just works". On Windows clients, snapshots are accessible via the Previous Versions tab in Windows Explorer using the Shadow Copy client, which is available by default on Windows XP SP2 and later. For Windows 2000 and pre-SP2 Windows XP, the client software is available for download from Microsoft: Shadow Copies for Shared Folders Client. Assuming that we already have a shared ZFS dataset, we can create ZFS snapshots and view them from a Windows client.

        zfs snapshot tank/home/administrator@snap101
        zfs snapshot tank/home/administrator@snap102

    To view the snapshots on Windows, map the dataset on the client, then right click on a folder or file and select Previous Versions. Note that Windows will only display previous versions of objects that differ from the originals, so you may have to modify files after creating a snapshot in order to see previous versions of those files. The screenshot above shows various snapshots in the Previous Versions window, created at different times. On the left panel, the .zfs folder is visible, illustrating that this is a ZFS share. The .zfs setting can be toggled as desired; it makes no difference when using previous versions. To make the .zfs folder visible:

        zfs set snapdir=visible tank/home/administrator

    To hide the .zfs folder:

        zfs set snapdir=hidden tank/home/administrator

    The following screenshot shows the Previous Versions panel when a file has been selected. In this case the user is prompted to view, copy or restore the file from one of the available snapshots. As can be seen from the screenshots above, the Previous Versions window doesn't display snapshot names: snapshots are listed by snapshot creation time, sorted from most recent to oldest. There's nothing we can do about this; it's the way that the interface works. Perhaps one point of note, to avoid confusion, is that the ZFS snapshot creation time is not the same as the root directory creation timestamp. In ZFS, all object attributes in the original dataset are preserved when a snapshot is taken, including the creation time of the root directory. Thus the root directory creation timestamp is the time that the directory was created in the original dataset.
        # ls -d% all /home/administrator
                timestamp: atime         Mar 19 15:40:23 2009
                timestamp: ctime         Mar 19 15:40:58 2009
                timestamp: mtime         Mar 19 15:40:58 2009
                timestamp: crtime        Mar 19 15:18:34 2009
        # ls -d% all /home/administrator/.zfs/snapshot/snap101
                timestamp: atime         Mar 19 15:40:23 2009
                timestamp: ctime         Mar 19 15:40:58 2009
                timestamp: mtime         Mar 19 15:40:58 2009
                timestamp: crtime        Mar 19 15:18:34 2009

    The snapshot creation time can be obtained using the zfs command as shown below.

        # zfs get all tank/home/administrator@snap101
        NAME                             PROPERTY  VALUE
        tank/home/administrator@snap101  type      snapshot
        tank/home/administrator@snap101  creation  Mon Mar 23 18:21 2009

    In this example, the dataset was created on March 19th and the snapshot was created on March 23rd. In conclusion, Shadow Copies for Shared Folders provides a straightforward way for users to view or recover files from ZFS snapshots. The Windows desktop provides an easy to use, intuitive GUI, and no configuration is required to use or access previous versions of files or folders. REFERENCES FOR MORE INFORMATION: ZFS, ZFS Learning Center, Introduction to Shadow Copies of Shared Folders, Shadow Copies for Shared Folders Client

    Read the article

  • How can I temporarily redirect printf output to a c-string?

    - by Ben S
    I'm writing an assignment which involves adding some functionality to PostgreSQL on a Solaris box. As part of the assignment, we need to print some information on the client side (i.e., using elog). PostgreSQL already has lots of helper methods which print out the required information; however, the helper methods are packed with hundreds of printf calls, and the elog method only works with C-style strings. Is there a way that I could temporarily redirect printf calls to a buffer so I could easily send it over elog to the client? If that's not possible, what would be the simplest way to modify the helper methods to end up with a buffer as output?
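    One possible approach on POSIX systems (Solaris included) is to swap stdout's file descriptor for a temporary file around the helper calls, then read the file back into a buffer for elog. A sketch under that assumption; the function names here are illustrative, not PostgreSQL APIs:

        #include <stdio.h>
        #include <unistd.h>

        static FILE *capture_file;
        static int saved_stdout;

        /* Route subsequent printf output into a temporary file. */
        void begin_capture(void)
        {
            fflush(stdout);
            saved_stdout = dup(fileno(stdout));          /* remember the real stdout */
            capture_file = tmpfile();
            dup2(fileno(capture_file), fileno(stdout));  /* printf now writes to the file */
        }

        /* Restore stdout and copy what was captured into buf. */
        void end_capture(char *buf, size_t len)
        {
            size_t n;
            fflush(stdout);
            dup2(saved_stdout, fileno(stdout));          /* put the real stdout back */
            close(saved_stdout);
            rewind(capture_file);
            n = fread(buf, 1, len - 1, capture_file);
            buf[n] = '\0';
            fclose(capture_file);
        }

    The helper methods stay untouched: call begin_capture(), invoke the helper, call end_capture(), and pass the resulting buffer to elog.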

    Read the article

  • Connect to a remote Oracle 11g server using OracleClient of .NET 2.0

    - by Raghu M
    I have to connect to an Oracle server on the network from a .NET / C# (WinForms) application. I am trying to use System.Data.OracleClient, but in vain. Here are the details I can possibly think of (that might help someone reading this question): Platform: Visual Studio 2005 / .NET 2.0 with C# on Windows Vista Home Premium. Library: System.Data.OracleClient. Server: Oracle 11g (located on the same LAN). Please note that I don't have Oracle installed locally, and I have hunted every discussion forum possible for help - but most of them assume a local Oracle installation! Here is my connection string: "User Id=TSUSER;Password=ts12TS;Data Source=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=MyServerIP)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=ORCL)));" And I get this error: OCIEnvCreate failed with return code -1 but error message text was not available. Stack trace:

        at System.Data.OracleClient.OciHandle..ctor(OciHandle parentHandle, HTYPE handleType, MODE ocimode, HANDLEFLAG handleflags)
        at System.Data.OracleClient.OracleInternalConnection.OpenOnLocalTransaction(String userName, String password, String serverName, Boolean integratedSecurity, Boolean unicode, Boolean omitOracleConnectionName)
        at System.Data.OracleClient.OracleInternalConnection..ctor(OracleConnectionString connectionOptions)
        at System.Data.OracleClient.OracleConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningObject)
        at System.Data.ProviderBase.DbConnectionFactory.CreatePooledConnection(DbConnection owningConnection, DbConnectionPool pool, DbConnectionOptions options)
        at System.Data.ProviderBase.DbConnectionPool.CreateObject(DbConnection owningObject)
        at System.Data.ProviderBase.DbConnectionPool.UserCreateRequest(DbConnection owningObject)
        at System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject)
        at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
        at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
        at System.Data.OracleClient.OracleConnection.Open()
        at DGKit.Util.DataUtil.Generate() in D:\SVNRoot\sandbox\DGDev\Util\DataUtil.cs:line 68

    Read the article

  • Any book that covers internals of recent versions of Unix OS

    - by claws
    This summer I'm getting into UNIX (mostly *BSD) development. I have graduate-level knowledge of operating systems. I can also understand the code and read from here and there, but I want to make the most of my time, and reading books is best for that. From my search I found that these two books, The Design and Implementation of the 4.4 BSD Operating System (1996) and "Unix Internals: The New Frontiers" by Uresh Vahalia (1996) (see here for the 2nd edition), are the established books on UNIX OS internals. But these books are pretty dated. So, are there any recent books that cover the internals of a recent Unix OS? How about books on other Unix operating systems? They seem more recent than the books above, but how close are they to OpenBSD/FreeBSD? Solaris 10 and OpenSolaris Kernel Architecture, 2nd edition (July 20, 2006); HP-UX 11i Internals (February 1, 2004). I'd really rather not use HP-UX, as it's not open source.

    Read the article

  • Export large amount of data from Oracle 10G to SQL Server 2005

    - by uniball
    Dear all, I need to export 100 million data rows (average row length ~ 100 bytes) from an Oracle 10g database table into SQL Server 2005 (over a WAN/VLAN with 6 Mbit/s capacity) on a regular basis. So far, these are the options that I have tried, with a quick summary of each. Has anyone tried this before? Are there other, better options? Which option would be the best in terms of performance and reliability? The times were estimated by testing with smaller amounts of data and extrapolating. Using the data import wizard on the SQL Server, or SSIS packages, to import the data: it will take around 150 hours to complete the task. Using an Oracle batch job to spool data into a comma-delimited flat file, then using an SSIS package to FTP this file to the SQL Server and load directly from the flat file: the issue here is the size of the flat file, which is expected to run into the gigabytes. Although this option is drastically different, I am even considering using a Linked Server to query the Oracle data directly at run time, to avoid bringing the data in at all. Performance is a big problem, and I have limited control over the Oracle database in terms of creating table indexes. Regards, Uniball
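    For the flat-file option, the spool is typically a plain SQL*Plus script; a minimal sketch, where the table and column names are placeholders for illustration:

        -- spool_export.sql: dump a table as comma-delimited text from SQL*Plus
        SET HEADING OFF FEEDBACK OFF PAGESIZE 0 TERMOUT OFF TRIMSPOOL ON LINESIZE 1000
        SPOOL /export/big_table.csv
        SELECT col1 || ',' || col2 || ',' || col3 FROM big_table;
        SPOOL OFF
        EXIT

    Compressing the spooled file before the FTP step would also help with the multi-gigabyte size over a 6 Mbit/s link.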

    Read the article

  • Minimizing Java Thread Context Switching Overhead

    - by binil
    I have a Java application running on a Sun 1.6 32-bit VM / Solaris 10 (x86) / Nehalem 8-core (2 threads per core). A specific use case in the application is to respond to some external message. In my performance test environment, when I prepare and send the response in the same thread that receives the external input, I get about a 50 us advantage over handing the message off to a separate thread to send the response. I use a ThreadPoolExecutor with a SynchronousQueue to do the handoff. In your experience, what is an acceptable delay between scheduling a task to a thread pool and its getting picked up for execution? What ideas have worked for you in the past to improve this?
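    For reference, a minimal sketch of the handoff being described, with a rough submit-to-run latency measurement (the pool sizing is illustrative):

        import java.util.concurrent.*;

        public class HandoffLatency {
            public static void main(String[] args) throws Exception {
                // SynchronousQueue has no capacity: submit() hands the task
                // directly to a waiting worker thread.
                ThreadPoolExecutor pool = new ThreadPoolExecutor(
                        1, 1, 60L, TimeUnit.SECONDS, new SynchronousQueue<Runnable>());
                pool.prestartAllCoreThreads();   // avoid measuring thread creation

                final long t0 = System.nanoTime();
                Future<Long> delay = pool.submit(new Callable<Long>() {
                    public Long call() {
                        return System.nanoTime() - t0;   // time until the task actually runs
                    }
                });
                System.out.println("handoff took " + delay.get() / 1000 + " us");
                pool.shutdown();
            }
        }

    Pinning the producer thread and the pool's worker to the same core (or at least the same socket) is one of the usual levers, since the cost here is largely wakeup latency and cache migration.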

    Read the article

  • Is my Perl script grabbing environment variables from "someplace else"?

    - by Michael Wilson
    On a Solaris box in a "mysterious production system" I'm running a Perl script that references an environment variable. No big deal. The contents of that variable from the shell both pre- and post-execution are what I expect. However, when reported by the script, it appears as though it's running in some other sub-shell which is clobbering my vars with different values for the duration of the script. Unfortunately I really can't paste the code. I'm trying to get an atomic case, but I'm at my wit's end here.
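    A quick way to see exactly what the script's environment contains at run time (a debugging sketch, not a fix) is to dump %ENV from inside the script and diff it against the shell's env output:

        # inside the Perl script, at the point where the variable looks wrong
        foreach my $key (sort keys %ENV) {
            print STDERR "$key=$ENV{$key}\n";
        }

    Comparing that against env | sort from the invoking shell usually shows whether a wrapper script, cron, or some launcher rewrote the environment before Perl ever saw it.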

    Read the article

  • String Manipulation in Bash

    - by user348000
    Hello, I am a newbie in Bash and I am doing some string manipulation. I have the following file, among other files, in my directory: jdk-6u20-solaris-i586.sh. I am doing the following to get jdk-6u20 in my script:

        myvar=`ls -la | awk '{print $9}' | egrep "i586" | cut -c1-8`
        echo $myvar

    But now I want to convert jdk-6u20 to jdk1.6.0_20, and I can't seem to figure out how to do it. It must be as generic as possible; for example, if I had jdk-6u25, I should be able to convert it in the same way to jdk1.6.0_25, and so on. Any suggestions?
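    One way to do it with plain parameter expansion, assuming the name always has the jdk-<major>u<update> form (the file glob below is an assumption to match the directory described):

        #!/bin/bash
        file=$(ls jdk-*-i586.sh)       # e.g. jdk-6u20-solaris-i586.sh
        short=${file%%-solaris*}       # jdk-6u20
        major=${short#jdk-}            # 6u20
        update=${major#*u}             # 20
        major=${major%%u*}             # 6
        myvar="jdk1.${major}.0_${update}"
        echo "$myvar"                  # jdk1.6.0_20

    The same conversion works unchanged for jdk-6u25, giving jdk1.6.0_25.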

    Read the article

  • Commvault Oracle RMAN Restore to new host

    - by Glenn Stauffer
    We use Commvault Simpana 8, and I have a situation where I have backups of an Oracle database on tape that were taken from Host A. Host A suffered a disk failure (lost its RAID configuration) and the sysadmins are trying to restore it; in the meantime, I'm working to bring the database back up on another host - Host B. I'm running into problems and am trying to sort out the parameters that need to be passed to the Commvault media agent to get this to work. Unfortunately, I do not have access to Commvault support, and the backup person is unavailable. Anyone have a clue? The backups are there, and the media agent reported a successful write when they ran last night. This is what fails:

        RMAN> run {
          allocate channel t1 device type sbt_tape
            parms='SBT_LIBRARY=/usr/local/galaxy/Base/libobk.so,BLKSIZE=262144,
            ENV=(CvClientName=dbsrv2,CvInstanceName=Instance001,
            CVOraRacDBName=BBDB, CVOraRACDBClientName=BBDB)';
          restore spfile to pfile '/tmp/bbdb.ora' from autobackup;
        }

        allocated channel: t1
        channel t1: sid=34 devtype=SBT_TAPE
        channel t1: CommVault Systems for Oracle: Version 7.0.0(Build76)
        Starting restore at 09-MAY-10
        channel t1: looking for autobackup on day: 20100509
        channel t1: autobackup found: c-3941155360-20100509-01
        released channel: t1
        RMAN-00571: ===========================================================
        RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
        RMAN-00571: ===========================================================
        RMAN-03002: failure of restore command at 05/09/2010 18:01:35
        ORA-19870: error reading backup piece c-3941155360-20100509-01
        ORA-19507: failed to retrieve sequential file, handle="c-3941155360-20100509-01", parms=""
        ORA-27029: skgfrtrv: sbtrestore returned error
        ORA-19511: Error received from media manager layer, error text:
        sbtrestore: Job[0] thread[26316]: InitializeCLRestore() failed.

    Read the article

  • Commvault Oracle RMAN Restore to new host

    - by Glenn Stauffer
    We use Commvault Simpana 8, and I have a situation where I have backups of an Oracle database on tape that were taken from Host A. Host A suffered a disk failure (lost its RAID configuration) and the sysadmins are trying to restore it; in the meantime, I'm working to bring the database back up on another host - Host B. I'm running into problems and am trying to sort out the parameters that need to be passed to the Commvault media agent to get this to work. Unfortunately, I do not have access to Commvault support, and the backup person is unavailable. Anyone have a clue? The backups are there, and the media agent reported a successful write when they ran last night. This is what fails:

        run {
          allocate channel t1 device type sbt_tape
            parms='SBT_LIBRARY=/usr/local/galaxy/Base/libobk.so,BLKSIZE=262144,
            ENV=(CvClientName=dbsrv2,CvInstanceName=Instance001,
            CVOraSID=BBPROD)';
          restore spfile to pfile '/tmp/bbdb.ora' from autobackup;
        }

        allocated channel: t1
        channel t1: sid=34 devtype=SBT_TAPE
        channel t1: CommVault Systems for Oracle: Version 7.0.0(Build76)
        Starting restore at 09-MAY-10
        channel t1: looking for autobackup on day: 20100509
        channel t1: autobackup found: c-3941155360-20100509-01
        released channel: t1
        RMAN-00571: ===========================================================
        RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
        RMAN-00571: ===========================================================
        RMAN-03002: failure of restore command at 05/09/2010 18:01:35
        ORA-19870: error reading backup piece c-3941155360-20100509-01
        ORA-19507: failed to retrieve sequential file, handle="c-3941155360-20100509-01", parms=""
        ORA-27029: skgfrtrv: sbtrestore returned error
        ORA-19511: Error received from media manager layer, error text:
        sbtrestore: Job[0] thread[26316]: InitializeCLRestore() failed.

    Read the article

  • How to fix Ogre3d segfault with std::_Rb_tree_insert_and_rebalance?

    - by Balázs Béla
    Hello all. I'm working on a 3d music visualizer using Ogre3d, basically it's a spectrum analyzer, a lot like the old xmms plugin: http://www.youtube.com/watch?v=_6NKBiwYN24 It works well, the bars are drawn and updated, and there are no framerate issues, but it crashes randomly. Sometimes it can run without problems and finish the song; other times it crashes instantly; other times the music just stops, without a crash. Here is the source code for the main class: https://github.com/balazsbela/OgreVisualizer/blob/master/src/VisualizerApplication.cpp#L221 Also, the crashes seem to happen less often when I display the framerate overlay from the Ogre samples. Would limiting the framerate help? The crashes are seemingly random. Is it a performance issue? Please help me out, I'm quite lost on this one. I also posted on the Ogre3d forums but received no responses: http://www.ogre3d.org/forums/viewtopic.php?f=2&t=63207 I also tried Stack Overflow: http://stackoverflow.com/questions/5050147/how-to-fix-ogre3d-segfault-with-std-rb-tree-insert-and-rebalance Thank you. Backtrace: balazsbela@darknet:~/workspace/OgreVisualizer/Release$ gdb OgreVisualizer core GNU gdb (GDB) 7.2-debian Copyright (C) 2010 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "i486-linux-gnu". For bug reporting instructions, please see: <http://www.gnu.org/software/gdb/bugs/>... Reading symbols from /home/balazsbela/workspace/OgreVisualizer/Release/OgreVisualizer...done. [New Thread 17705] [New Thread 17702] [New Thread 17703] [New Thread 17700] Reading symbols from /usr/lib/libv4l/v4l1compat.so...(no debugging symbols found)...done. Loaded symbols for /usr/lib/libv4l/v4l1compat.so Reading symbols from /usr/local/lib/libOgreMain.so.1.7.1...done. Loaded symbols for /usr/local/lib/libOgreMain.so.1.7.1 Reading symbols from /usr/lib/libfftw3.so.3...(no debugging symbols found)...done. Loaded symbols for /usr/lib/libfftw3.so.3 Reading symbols from /usr/lib/libSDL_sound-1.0.so.1...(no debugging symbols found)...done. Loaded symbols for /usr/lib/libSDL_sound-1.0.so.1 Reading symbols from /usr/lib/libSDL-1.2.so.0...(no debugging symbols found)...done. Loaded symbols for /usr/lib/libSDL-1.2.so.0 Reading symbols from /usr/lib/libSDL_mixer-1.2.so.0...(no debugging symbols found)...done. Loaded symbols for /usr/lib/libSDL_mixer-1.2.so.0 Reading symbols from /usr/lib/libOIS-1.2.0.so...(no debugging symbols found)...done. Loaded symbols for /usr/lib/libOIS-1.2.0.so Reading symbols from /usr/lib/libstdc++.so.6...(no debugging symbols found)...done. Loaded symbols for /usr/lib/libstdc++.so.6 Reading symbols from /lib/i686/cmov/libm.so.6...Reading symbols from /usr/lib/debug/lib/i686/cmov/libm-2.11.2.so...done. done. Loaded symbols for /lib/i686/cmov/libm.so.6 Reading symbols from /lib/libgcc_s.so.1...(no debugging symbols found)...done. Loaded symbols for /lib/libgcc_s.so.1 Reading symbols from /lib/i686/cmov/libc.so.6...Reading symbols from /usr/lib/debug/lib/i686/cmov/libc-2.11.2.so...done. done. Loaded symbols for /lib/i686/cmov/libc.so.6 Reading symbols from /lib/i686/cmov/libpthread.so.0...Reading symbols from /usr/lib/debug/lib/i686/cmov/libpthread-2.11.2.so...done. done. Loaded symbols for /lib/i686/cmov/libpthread.so.0 Reading symbols from /usr/local/lib/libv4l1.so.0...done.
Loaded symbols for /usr/local/lib/libv4l1.so.0 Reading symbols from /usr/lib/libfreetype.so.6...(no debugging symbols found)...done. Loaded symbols for /usr/lib/libfreetype.so.6 Reading symbols from /usr/lib/libSM.so.6...(no debugging symbols found)...done. Loaded symbols for /usr/lib/libSM.so.6 Reading symbols from /usr/lib/libICE.so.6...(no debugging symbols found)...done. Loaded symbols for /usr/lib/libICE.so.6 Reading symbols from /usr/lib/libX11.so.6...(no debugging symbols found)...done. Loaded symbols for /usr/lib/libX11.so.6 Reading symbols from /usr/lib/libXext.so.6...(no debugging symbols found)...done. Loaded symbols for /usr/lib/libXext.so.6 Reading symbols from /usr/lib/libXt.so.6...(no debugging symbols found)...done. Loaded symbols for /usr/lib/libXt.so.6 Reading symbols from /usr/lib/libXaw.so.7...(no debugging symbols found)...done. Loaded symbols for /usr/lib/libXaw.so.7 Reading symbols from /lib/i686/cmov/libdl.so.2...Reading symbols from /usr/lib/debug/lib/i686/cmov/libdl-2.11.2.so...done. done. Loaded symbols for /lib/i686/cmov/libdl.so.2 Reading symbols from /usr/lib/libboost_thread.so.1.42.0...(no debugging symbols found)...done. Loaded symbols for /usr/lib/libboost_thread.so.1.42.0 Reading symbols from /usr/lib/libboost_date_time.so.1.42.0...(no debugging symbols found)...done. Loaded symbols for /usr/lib/libboost_date_time.so.1.42.0 Reading symbols from /usr/lib/libfreeimage.so.3...(no debugging symbols found)...done. Loaded symbols for /usr/lib/libfreeimage.so.3 Reading symbols from /usr/lib/libzzip-0.so.13...(no debugging symbols found)...done. Loaded symbols for /usr/lib/libzzip-0.so.13 Reading symbols from /usr/lib/libz.so.1...(no debugging symbols found)...done. Loaded symbols for /usr/lib/libz.so.1 Reading symbols from /usr/lib/libsmpeg-0.4.so.0...(no debugging symbols found)...done. Loaded symbols for /usr/lib/libsmpeg-0.4.so.0 Reading symbols from /usr/lib/libmikmod.so.2...(no debugging symbols found)...done. Loaded symbols for /usr/lib/libmikmod.so.2 Reading symbols from /usr/lib/libvorbis.so.0...(no debugging symbols found)...done. Loaded symbols for /usr/lib/libvorbis.so.0 Reading symbols from /usr/lib/libvorbisfile.so.3...(no debugging symbols found)...done. Loaded symbols for /usr/lib/libvorbisfile.so.3 Reading symbols from /usr/lib/libFLAC.so.8...(no debugging symbols found)...done. Loaded symbols for /usr/lib/libFLAC.so.8 Reading symbols from /usr/lib/libogg.so.0...(no debugging symbols found)...done. Loaded symbols for /usr/lib/libogg.so.0 Reading symbols from /usr/lib/sse2/libspeex.so.1...(no debugging symbols found)...done. Loaded symbols for /usr/lib/sse2/libspeex.so.1 Reading symbols from /usr/lib/libasound.so.2...(no debugging symbols found)...done. Loaded symbols for /usr/lib/libasound.so.2 Reading symbols from /lib/i686/cmov/librt.so.1...Reading symbols from /usr/lib/debug/lib/i686/cmov/librt-2.11.2.so...done. done. Loaded symbols for /lib/i686/cmov/librt.so.1 Reading symbols from /usr/lib/libdirectfb-1.2.so.9...(no debugging symbols found)...done. Loaded symbols for /usr/lib/libdirectfb-1.2.so.9 Reading symbols from /usr/lib/libfusion-1.2.so.9...(no debugging symbols found)...done. Loaded symbols for /usr/lib/libfusion-1.2.so.9 Reading symbols from /usr/lib/libdirect-1.2.so.9...(no debugging symbols found)...done. Loaded symbols for /usr/lib/libdirect-1.2.so.9 Reading symbols from /usr/lib/libvga.so.1...(no debugging symbols found)...done. 
Loaded symbols for /usr/lib/libvga.so.1 Reading symbols from /lib/ld-linux.so.2...Reading symbols from /usr/lib/debug/lib/ld-2.11.2.so...done. done. Loaded symbols for /lib/ld-linux.so.2 Reading symbols from /usr/local/lib/libv4l2.so.0...done. Loaded symbols for /usr/local/lib/libv4l2.so.0 Reading symbols from /lib/libuuid.so.1...(no debugging symbols found)...done. Loaded symbols for /lib/libuuid.so.1 Reading symbols from /usr/lib/libxcb.so.1...(no debugging symbols found)...done. Loaded symbols for /usr/lib/libxcb.so.1 Reading symbols from /usr/lib/libXmu.so.6...(no debugging symbols found)...done. Loaded symbols for /usr/lib/libXmu.so.6 Reading symbols from /usr/lib/libXpm.so.4...(no debugging symbols found)...done. Loaded symbols for /usr/lib/libXpm.so.4 Reading symbols from /usr/lib/libjpeg.so.62...(no debugging symbols found)...done. Loaded symbols for /usr/lib/libjpeg.so.62 Reading symbols from /usr/lib/libmng.so.1...(no debugging symbols found)...done. Loaded symbols for /usr/lib/libmng.so.1 Reading symbols from /usr/lib/libopenjpeg.so.2...(no debugging symbols found)...done. Loaded symbols for /usr/lib/libopenjpeg.so.2 Reading symbols from /lib/libpng12.so.0...(no debugging symbols found)...done. Loaded symbols for /lib/libpng12.so.0 Reading symbols from /usr/lib/libIlmImf.so.6...(no debugging symbols found)...done. Loaded symbols for /usr/lib/libIlmImf.so.6 Reading symbols from /usr/lib/libImath.so.6...(no debugging symbols found)...done. Loaded symbols for /usr/lib/libImath.so.6 Reading symbols from /usr/lib/libHalf.so.6...(no debugging symbols found)...done. Loaded symbols for /usr/lib/libHalf.so.6 Reading symbols from /usr/lib/libIex.so.6...(no debugging symbols found)...done. Loaded symbols for /usr/lib/libIex.so.6 Reading symbols from /usr/lib/libIlmThread.so.6...(no debugging symbols found)...done. Loaded symbols for /usr/lib/libIlmThread.so.6 Reading symbols from /lib/libx86.so.1...(no debugging symbols found)...done. Loaded symbols for /lib/libx86.so.1 Reading symbols from /usr/local/lib/libv4lconvert.so.0...done. Loaded symbols for /usr/local/lib/libv4lconvert.so.0 Reading symbols from /usr/lib/libXau.so.6...(no debugging symbols found)...done. Loaded symbols for /usr/lib/libXau.so.6 Reading symbols from /usr/lib/libXdmcp.so.6...(no debugging symbols found)...done. Loaded symbols for /usr/lib/libXdmcp.so.6 Reading symbols from /usr/lib/liblcms.so.1...(no debugging symbols found)...done. Loaded symbols for /usr/lib/liblcms.so.1 Reading symbols from /usr/local/lib/OGRE/RenderSystem_GL.so...done. Loaded symbols for /usr/local/lib/OGRE/RenderSystem_GL.so Reading symbols from /usr/lib/libGLU.so.1...(no debugging symbols found)...done. Loaded symbols for /usr/lib/libGLU.so.1 Reading symbols from /usr/lib/libGL.so.1...(no debugging symbols found)...done. Loaded symbols for /usr/lib/libGL.so.1 Reading symbols from /usr/lib/libXrandr.so.2...(no debugging symbols found)...done. Loaded symbols for /usr/lib/libXrandr.so.2 Reading symbols from /usr/lib/libGLcore.so.1...(no debugging symbols found)...done. Loaded symbols for /usr/lib/libGLcore.so.1 Reading symbols from /usr/lib/tls/libnvidia-tls.so.1...(no debugging symbols found)...done. Loaded symbols for /usr/lib/tls/libnvidia-tls.so.1 Reading symbols from /usr/lib/libXrender.so.1...(no debugging symbols found)...done. Loaded symbols for /usr/lib/libXrender.so.1 Reading symbols from /usr/lib/libXcursor.so.1...(no debugging symbols found)...done. 
Loaded symbols for /usr/lib/libXcursor.so.1 Reading symbols from /usr/lib/libXfixes.so.3...(no debugging symbols found)...done. Loaded symbols for /usr/lib/libXfixes.so.3 Reading symbols from /lib/i686/cmov/libnss_compat.so.2...Reading symbols from /usr/lib/debug/lib/i686/cmov/libnss_compat-2.11.2.so...done. done. Loaded symbols for /lib/i686/cmov/libnss_compat.so.2 Reading symbols from /lib/i686/cmov/libnsl.so.1...Reading symbols from /usr/lib/debug/lib/i686/cmov/libnsl-2.11.2.so...done. done. Loaded symbols for /lib/i686/cmov/libnsl.so.1 Reading symbols from /lib/i686/cmov/libnss_nis.so.2...Reading symbols from /usr/lib/debug/lib/i686/cmov/libnss_nis-2.11.2.so...done. done. Loaded symbols for /lib/i686/cmov/libnss_nis.so.2 Reading symbols from /lib/i686/cmov/libnss_files.so.2...Reading symbols from /usr/lib/debug/lib/i686/cmov/libnss_files-2.11.2.so...done. done. Loaded symbols for /lib/i686/cmov/libnss_files.so.2 Reading symbols from /usr/lib/alsa-lib/libasound_module_rate_speexrate.so...(no debugging symbols found)...done. Loaded symbols for /usr/lib/alsa-lib/libasound_module_rate_speexrate.so Reading symbols from /usr/lib/sse2/libspeexdsp.so.1...(no debugging symbols found)...done. Loaded symbols for /usr/lib/sse2/libspeexdsp.so.1 Core was generated by `./OgreVisualizer'. Program terminated with signal 11, Segmentation fault. #0 0xb6dc563d in std::_Rb_tree_insert_and_rebalance(bool, std::_Rb_tree_node_base*, std::_Rb_tree_node_base*, std::_Rb_tree_node_base&) () from /usr/lib/libstdc++.so.6 (gdb) bt #0 0xb6dc563d in std::_Rb_tree_insert_and_rebalance(bool, std::_Rb_tree_node_base*, std::_Rb_tree_node_base*, std::_Rb_tree_node_base&) () from /usr/lib/libstdc++.so.6 #1 0xb73bb3c2 in std::_Rb_tree<Ogre::Node*, Ogre::Node*, std::_Identity<Ogre::Node*>, std::less<Ogre::Node*>, Ogre::STLAllocator<Ogre::Node*, Ogre::CategorisedAllocPolicy<(Ogre::MemoryCategory)0> > >::_M_insert_(std::_Rb_tree_node_base const*, std::_Rb_tree_node_base const*, Ogre::Node* const&) () from /usr/local/lib/libOgreMain.so.1.7.1 #2 0xb73b5a52 in _M_insert_unique (this=0xb6157ea0, child=0xb616aff8, forceParentUpdate=false) at /usr/include/c++/4.4/bits/stl_tree.h:1182 #3 insert (this=0xb6157ea0, child=0xb616aff8, forceParentUpdate=false) at /usr/include/c++/4.4/bits/stl_set.h:411 #4 Ogre::Node::requestUpdate (this=0xb6157ea0, child=0xb616aff8, forceParentUpdate=false) at /home/balazsbela/Downloads/ogre_src_v1-7-1/OgreMain/src/OgreNode.cpp:805 #5 0xb73b6a40 in Ogre::Node::needUpdate (this=0xb616aff8, forceParentUpdate=92) at /home/balazsbela/Downloads/ogre_src_v1-7-1/OgreMain/src/OgreNode.cpp:789 #6 0xb73b5038 in Ogre::Node::setScale (this=0x1825c, scale=...) at /home/balazsbela/Downloads/ogre_src_v1-7-1/OgreMain/src/OgreNode.cpp:638 #7 0x0805d306 in VisualizerApplication::adjustNodes (this=0x9cd4808) at ../src/VisualizerApplication.cpp:236 #8 0xb6e867f0 in ?? () from /usr/lib/libSDL_mixer-1.2.so.0 #9 0xb6e8719a in ?? () from /usr/lib/libSDL_mixer-1.2.so.0 #10 0xb6ed9b0d in ?? () from /usr/lib/libSDL-1.2.so.0 #11 0xb6ee185e in ?? () from /usr/lib/libSDL-1.2.so.0 #12 0xb6f2e0bd in ?? () from /usr/lib/libSDL-1.2.so.0 #13 0xb6bc7955 in start_thread (arg=0xb198ab70) at pthread_create.c:300 #14 0xb6ca6e7e in clone () at ../sysdeps/unix/sysv/linux/i386/clone.S:130 (gdb) Ogre.log: (http)://pastie.org/1581790

    Read the article

  • Simple GET operation with JSON data in ADF Mobile

    - by PadmajaBhat
    Usecase: This sample uses a RESTful service which contains a GET method that fetches employee details for an employee with a given employee ID, along with other methods. The data is fetched in JSON format. This RESTful service is then invoked via ADF Mobile, and the JSON data thus obtained is parsed and rendered on the mobile device in a table.

    Prerequisite: Download JDev build JDEVADF_11.1.2.4.0_GENERIC_130421.1600.6436.1 or higher with mobile support.

    Steps: Run EmployeeService.java in JSONService.zip. This is a simple service with a method, getEmpById(id), that takes an employee ID as parameter and produces employee details in JSON format. Copy the target URL generated on running this service. The target URL will be as shown below:

        http://127.0.0.1:7101/JSONService-Project1-context-root/jersey/project1

    Now, let us invoke this service in our mobile application. For this, create an ADF Mobile application. Name the application JSON_SearchByEmpID and finish the wizard. Now, let us create a connection to our service. To do this, we create a URL Connection. Invoke the new gallery wizard on the ApplicationController project. Select the URL Connection option. In the Create URL Connection window, enter the connection name as 'conn'. For the URL endpoint, supply the URL you copied earlier on running the service. Remember to use your system IP instead of localhost. Test the connection and click OK. At this point, a connection to the REST service has been created. Since JSON data is not supported directly in the WSDC wizard, we need to invoke the operation through Java code using RestServiceAdapter. For this, in the ApplicationController project, create a Java class called 'EmployeeDC'. We will be creating a DC from this class. Add the following code to the newly created class to invoke the getEmpById method.

         1 public Employee fetchEmpDetails(){
         2     RestServiceAdapter restServiceAdapter = Model.createRestServiceAdapter();
         3     restServiceAdapter.clearRequestProperties();
         4     restServiceAdapter.setConnectionName("conn"); //URL connection created with this name
         5     restServiceAdapter.setRequestType(RestServiceAdapter.REQUEST_TYPE_GET);
         6     restServiceAdapter.addRequestProperty("Content-Type", "application/json");
         7     restServiceAdapter.addRequestProperty("Accept", "application/json; charset=UTF-8");
         8     restServiceAdapter.setRetryLimit(0);
         9     restServiceAdapter.setRequestURI("/getById/"+inputEmpID);
        10     String response = "";
        11     JSONBeanSerializationHelper jsonHelper = new JSONBeanSerializationHelper();
        12     try {
        13         response = restServiceAdapter.send(""); //Invoke the GET operation
        14         System.out.println("Response received!");
        15         Employee responseObject = (Employee) jsonHelper.fromJSON(Employee.class, response);
        16         return responseObject;
        17     } catch (Exception e) {
        18     }
        19     return null;
        20 }

    Here, in lines 2 to 9, we create the RestServiceAdapter and set the various properties required to invoke the web service. At line 4, we are pointing to the connection 'conn' created previously. Since we want to invoke the getEmpById method of the service, which is defined by the URL http://IP:7101/REST_Sanity_JSON-Project1-context-root/resources/project1/getById/{id}, we are updating the request URI to point to this URI at line 9. inputEmpID is a variable that will hold the value input by the user for the employee ID; we will be creating it in a while. As the method we are invoking is a GET operation and consumes JSON data, these properties are being set in lines 5 through 7. Finally, we are sending the request in line 13.
    In line 15, we use jsonHelper.fromJSON to convert the received JSON data to a Java object. The required Java object's structure is defined in the class Employee.java, whose structure is provided later. Since the response from our service is a simple response consisting of attributes like employee ID, name, designation, etc., we will just return this parsed response (line 16) and use it to create the DC. As mentioned previously, we would like the user to input the employee ID for which he/she wants to perform the search. So, in the same class, define a variable inputEmpID which will hold the value input by the user. Generate accessors for this variable. Lastly, we need to create the Employee class. The Employee class will define how we want to structure the JSON object received from the service. To design the Employee class, run the service's method in the browser or via the analyzer using the path parameter 1. This will give you the output JSON structure. Ours is a simple service that returns a JSONObject with a set of data. Hence, the Employee class will just contain this set of data defined with the proper data types. Create Employee.java in the same project as EmployeeDC.java and write the below code:

        package application;

        import oracle.adfmf.java.beans.PropertyChangeListener;
        import oracle.adfmf.java.beans.PropertyChangeSupport;

        public class Employee {
            private String dept;
            private String desig;
            private int id;
            private String name;
            private int salary;
            private PropertyChangeSupport propertyChangeSupport = new PropertyChangeSupport(this);

            public void setDept(String dept) {
                String oldDept = this.dept;
                this.dept = dept;
                propertyChangeSupport.firePropertyChange("dept", oldDept, dept);
            }

            public String getDept() {
                return dept;
            }

            public void setDesig(String desig) {
                String oldDesig = this.desig;
                this.desig = desig;
                propertyChangeSupport.firePropertyChange("desig", oldDesig, desig);
            }

            public String getDesig() {
                return desig;
            }

            public void setId(int id) {
                int oldId = this.id;
                this.id = id;
                propertyChangeSupport.firePropertyChange("id", oldId, id);
            }

            public int getId() {
                return id;
            }

            public void setName(String name) {
                String oldName = this.name;
                this.name = name;
                propertyChangeSupport.firePropertyChange("name", oldName, name);
            }

            public String getName() {
                return name;
            }

            public void setSalary(int salary) {
                int oldSalary = this.salary;
                this.salary = salary;
                propertyChangeSupport.firePropertyChange("salary", oldSalary, salary);
            }

            public int getSalary() {
                return salary;
            }

            public void addPropertyChangeListener(PropertyChangeListener l) {
                propertyChangeSupport.addPropertyChangeListener(l);
            }

            public void removePropertyChangeListener(PropertyChangeListener l) {
                propertyChangeSupport.removePropertyChangeListener(l);
            }
        }

    Now, let us create a DC out of EmployeeDC.java. The DC shown below is created. Now, you can design the mobile page as usual and invoke the operation of the service. To design the page, go to the ViewController project and locate adfmf-feature.xml. Create a new feature called 'SearchFeature' by clicking the plus icon. Go to the content tab, add an amx page, and call it SearchPage.amx. Remove the primary and secondary buttons, as we don't need them, and rename the header. Drag and drop inputEmpID from the DC palette onto the Panel Page in the structure pane as an input text with label. Next, drop the fetchEmpDetails method as an ADF button. For a change, let us display the output in a table component instead of the usual form.
    However, you will notice that if you drag and drop Employee onto the structure pane, there is no option for an ADF Mobile Table. Hence, we will need to create the table on our own. To do this, let us first drop Employee as an ADF Read-Only Form. This step is needed to get the required bindings; we will be deleting this form in a while. Now, from the Component palette, search for 'Table Layout'. Drag and drop this below the command button. Within the table layout, insert 'Row Layout' and 'Cell Format' components. The final table structure should be as shown below. Here, we have also defined some inline styling to render the UI in a nice manner.

        <amx:tableLayout id="tl1" borderWidth="2" halign="center"
                         inlineStyle="vertical-align:middle;" width="100%" cellPadding="10">
          <amx:rowLayout id="rl1">
            <amx:cellFormat id="cf1" width="30%">
              <amx:outputText value="#{bindings.dept.hints.label}" id="ot7"
                              inlineStyle="color:rgb(0,148,231);"/>
            </amx:cellFormat>
            <amx:cellFormat id="cf2">
              <amx:outputText value="#{bindings.dept.inputValue}" id="ot8"/>
            </amx:cellFormat>
          </amx:rowLayout>
          <amx:rowLayout id="rl2">
            <amx:cellFormat id="cf3" width="30%">
              <amx:outputText value="#{bindings.desig.hints.label}" id="ot9"
                              inlineStyle="color:rgb(0,148,231);"/>
            </amx:cellFormat>
            <amx:cellFormat id="cf4">
              <amx:outputText value="#{bindings.desig.inputValue}" id="ot10"/>
            </amx:cellFormat>
          </amx:rowLayout>
          <amx:rowLayout id="rl3">
            <amx:cellFormat id="cf5" width="30%">
              <amx:outputText value="#{bindings.id.hints.label}" id="ot11"
                              inlineStyle="color:rgb(0,148,231);"/>
            </amx:cellFormat>
            <amx:cellFormat id="cf6">
              <amx:outputText value="#{bindings.id.inputValue}" id="ot12"/>
            </amx:cellFormat>
          </amx:rowLayout>
          <amx:rowLayout id="rl4">
            <amx:cellFormat id="cf7" width="30%">
              <amx:outputText value="#{bindings.name.hints.label}" id="ot13"
                              inlineStyle="color:rgb(0,148,231);"/>
            </amx:cellFormat>
            <amx:cellFormat id="cf8">
              <amx:outputText value="#{bindings.name.inputValue}" id="ot14"/>
            </amx:cellFormat>
          </amx:rowLayout>
          <amx:rowLayout id="rl5">
            <amx:cellFormat id="cf9" width="30%">
              <amx:outputText value="#{bindings.salary.hints.label}" id="ot15"
                              inlineStyle="color:rgb(0,148,231);"/>
            </amx:cellFormat>
            <amx:cellFormat id="cf10">
              <amx:outputText value="#{bindings.salary.inputValue}" id="ot16"/>
            </amx:cellFormat>
          </amx:rowLayout>
        </amx:tableLayout>

    The values used in the output text of the table come from the bindings obtained from the ADF form created earlier. As we have used the bindings and don't need the form anymore, let us delete the form. One last thing before we deploy: when the user changes the employee ID, we want to clear the table contents. For this, we associate a value change listener with the input text box. Click New in the resulting dialog to create a managed bean. Next, we create a method within the managed bean. For this, click on the New button associated with the method. Call the method 'empIDChange'. Open myClass.java and write the below code in empIDChange().

        public void empIDChange(ValueChangeEvent valueChangeEvent) {
            // Add event code here...
            // Resetting the values to blank values when employee id changes
            AdfELContext adfELContext = AdfmfJavaUtilities.getAdfELContext();
            ValueExpression ve = AdfmfJavaUtilities.getValueExpression("#{bindings.dept.inputValue}", String.class);
            ve.setValue(adfELContext, "");
            ve = AdfmfJavaUtilities.getValueExpression("#{bindings.desig.inputValue}", String.class);
            ve.setValue(adfELContext, "");
            ve = AdfmfJavaUtilities.getValueExpression("#{bindings.id.inputValue}", int.class);
            ve.setValue(adfELContext, "");
            ve = AdfmfJavaUtilities.getValueExpression("#{bindings.name.inputValue}", String.class);
            ve.setValue(adfELContext, "");
            ve = AdfmfJavaUtilities.getValueExpression("#{bindings.salary.inputValue}", int.class);
            ve.setValue(adfELContext, "");
        }

    That's it. Deploy the application to an Android emulator or device. Some snippets from the app.

    Read the article

  • Optimistic non-locking copy of InnoDB .frm files

    - by jothir
    MySQL Enterprise Backup (MEB) does hot backup of InnoDB data and log files. Up to MEB 3.6.1, the user backs up only the InnoDB tables in a three-step process:

    STEP 1. Take the backup using the --only-innodb option.
    STEP 2. Temporarily make the tables read-only by executing "FLUSH TABLES WITH READ LOCK".
    STEP 3. Copy the .frm files manually, then release the lock.

    MEB 3.7.0 has an enhancement to InnoDB file copying: the .frm files get copied along with the hot backup done for InnoDB files. I would like to make the blog a little interactive by explaining the feature as questions and answers:

    1. What are these .frm files? The files containing the metadata, such as the table definition, of a MySQL table. For backups, the full set of .frm files is always required along with the backup data, to be able to restore tables that are altered or dropped after the backup.

    2. Can the .frm files not be copied by MEB itself? --only-innodb-with-frm is the new option introduced in MEB 3.7.1 to copy the .frm files, without locking the tables, during the backup operation itself. This is to reduce the pain of manually copying the .frm files. The option is intended for backups where you can ensure that no ALTER TABLE, CREATE TABLE, DROP TABLE, or other DDL statements modify the .frm files of InnoDB tables during the backup operation.

    3. How is data consistency ensured? MEB validates the .frm files after copying by comparing them with the server directory, to see whether the timestamp of any of the .frm files is greater than the saved system time (the .frm check time). A change in the timestamp of a .frm file shows that a table was altered during the backup. The total number of .frm files in the server directory is also verified against the copied contents. If the number of copied .frm files is lower than in the server directory, tables were dropped during the backup; if it is higher, new tables were created during the backup operation.

    4. How does MEB handle data inconsistency? MEB copies the .frm files through several iterations, does the validation, and throws a WARNING if any inconsistency is found in the .frm files at the end of the backup operation. The user is thus warned that some DDL operations occurred during the backup operation, and has to copy the .frm files manually or run the backup again.

    5. What is the option, and how is it used? The new option is --only-innodb-with-frm, which does an optimistic copy of the .frm files without locking. It can be used when the user wants to back up only InnoDB tables along with their .frm files. The option takes one of two values: all | related. --only-innodb-with-frm=all copies the .frm files of all InnoDB tables. --only-innodb-with-frm=related works in conjunction with the --include option, to allow a partial backup of the .frm files corresponding to the tables specified in --include. Let me show the usage with example output:

        ./mysqlbackup -uroot --backup-dir=/logs/backupWithFrmAll --only-innodb-with-frm=all backup

        MySQL Enterprise Backup version 3.7.1 [2012/06/05]
        Copyright (c) 2003, 2012, Oracle and/or its affiliates. All Rights Reserved.
        INFO: Starting with following command line ...
        ./mysqlbackup -uroot --backup-dir=/logs/backupWithFrmAll --only-innodb-with-frm=all backup
        INFO: Got some server configuration information from running server.
        IMPORTANT: Please check that mysqlbackup run completes successfully.
                   At the end of a successful 'backup' run mysqlbackup
                   prints "mysqlbackup completed OK!".
--------------------------------------------------------------------
                      Server Repository Options:
--------------------------------------------------------------------
 datadir                          = /mysql/trydb/
 innodb_data_home_dir             =
 innodb_data_file_path            = ibdata1:10M:autoextend
 innodb_log_group_home_dir        = /mysql/trydb/
 innodb_log_files_in_group        = 2
 innodb_log_file_size             = 5242880
--------------------------------------------------------------------
                      Backup Config Options:
--------------------------------------------------------------------
 datadir                          = /logs/backupWithFrmAll/datadir
 innodb_data_home_dir             = /logs/backupWithFrmAll/datadir
 innodb_data_file_path            = ibdata1:10M:autoextend
 innodb_log_group_home_dir        = /logs/backupWithFrmAll/datadir
 innodb_log_files_in_group        = 2
 innodb_log_file_size             = 5242880
mysqlbackup: INFO: Unique generated backup id for this is 13451979804504860
mysqlbackup: INFO: Uses posix_fadvise() for performance optimization.
mysqlbackup: INFO: System tablespace file format is Antelope.
mysqlbackup: INFO: Found checkpoint at lsn 1656792.
mysqlbackup: INFO: Starting log scan from lsn 1656320.
120817 15:36:22 mysqlbackup: INFO: Copying log...
120817 15:36:22 mysqlbackup: INFO: Log copied, lsn 1656792.
         We wait 1 second before starting copying the data files...
120817 15:36:23 mysqlbackup: INFO: Copying /mysql/trydb/ibdata1 (Antelope file format).
120817 15:36:23 mysqlbackup: INFO: Copying /mysql/trydb/innodb1/table2.ibd (Antelope file format).
120817 15:36:23 mysqlbackup: INFO: Copying /mysql/trydb/innodb1/table3.ibd (Antelope file format).
120817 15:36:23 mysqlbackup: INFO: Copying /mysql/trydb/innodb1/table1.ibd (Antelope file format).
mysqlbackup: INFO: Opening backup source directory '/mysql/trydb/'
120817 15:36:23 mysqlbackup: INFO: Starting to backup .frm files in the subdirectories of /mysql/trydb/
mysqlbackup: INFO: Copying innodb data and logs during final stage ...
mysqlbackup: INFO: A copied database page was modified at 1656792.
         (This is the highest lsn found on page)
         Scanned log up to lsn 1656792.
         Was able to parse the log up to lsn 1656792.
         Maximum page number for a log record 0
mysqlbackup: INFO: Copying non-innodb files took 2.000 seconds
120817 15:36:25 mysqlbackup: INFO: Full backup completed!
mysqlbackup: INFO: Backup created in directory '/logs/backupWithFrmAll'
-------------------------------------------------------------
  Parameters Summary
-------------------------------------------------------------
  Start LSN                  : 1656320
  End LSN                    : 1656792
-------------------------------------------------------------
mysqlbackup completed OK!

bash$ ls /logs/backupWithFrmAll/datadir/innodb1/
table1.frm  table1.ibd  table2.frm  table2.ibd  table3.frm  table3.ibd

Here the backup directory contains the .frm files of all the InnoDB tables.

./mysqlbackup -uroot --backup-dir=/logs/backupWithFrm --include="innodb1.table3.*" --only-innodb-with-frm=related backup
MySQL Enterprise Backup version 3.7.1 [2012/06/05]
Copyright (c) 2003, 2012, Oracle and/or its affiliates. All Rights Reserved.
INFO: Starting with following command line ...
./mysqlbackup -uroot --backup-dir=/logs/backupWithFrm
       --include=innodb1.table3.* --only-innodb-with-frm=related backup
INFO: Got some server configuration information from running server.
IMPORTANT: Please check that mysqlbackup run completes successfully.
           At the end of a successful 'backup' run mysqlbackup
           prints "mysqlbackup completed OK!".
--------------------------------------------------------------------
                      Server Repository Options:
--------------------------------------------------------------------
 datadir                          = /mysql/trydb/
 innodb_data_home_dir             =
 innodb_data_file_path            = ibdata1:10M:autoextend
 innodb_log_group_home_dir        = /mysql/trydb
 innodb_log_files_in_group        = 2
 innodb_log_file_size             = 5242880
--------------------------------------------------------------------
                      Backup Config Options:
--------------------------------------------------------------------
 datadir                          = /logs/backupWithFrm/datadir
 innodb_data_home_dir             = /logs/backupWithFrm/datadir
 innodb_data_file_path            = ibdata1:10M:autoextend
 innodb_log_group_home_dir        = /logs/backupWithFrm/datadir
 innodb_log_files_in_group        = 2
 innodb_log_file_size             = 5242880
mysqlbackup: INFO: Unique generated backup id for this is 13451973458118162
mysqlbackup: INFO: Uses posix_fadvise() for performance optimization.
mysqlbackup: INFO: The --include option specified: innodb1.table3.*
mysqlbackup: INFO: System tablespace file format is Antelope.
mysqlbackup: INFO: Found checkpoint at lsn 1656792.
mysqlbackup: INFO: Starting log scan from lsn 1656320.
120817 15:25:47 mysqlbackup: INFO: Copying log...
120817 15:25:47 mysqlbackup: INFO: Log copied, lsn 1656792.
         We wait 1 second before starting copying the data files...
120817 15:25:48 mysqlbackup: INFO: Copying /mysql/trydb/ibdata1 (Antelope file format).
120817 15:25:49 mysqlbackup: INFO: Copying /mysql/trydb/innodb1/table3.ibd (Antelope file format).
mysqlbackup: INFO: Opening backup source directory '/mysql/trydb'
120817 15:25:49 mysqlbackup: INFO: Starting to backup .frm files in the subdirectories of /mysql/trydb
mysqlbackup: INFO: Copying innodb data and logs during final stage ...
mysqlbackup: INFO: A copied database page was modified at 1656792.
         (This is the highest lsn found on page)
         Scanned log up to lsn 1656792.
         Was able to parse the log up to lsn 1656792.
         Maximum page number for a log record 0
mysqlbackup: INFO: Copying non-innodb files took 2.000 seconds
120817 15:25:51 mysqlbackup: INFO: Full backup completed!
mysqlbackup: INFO: Backup created in directory '/logs/backupWithFrm'
-------------------------------------------------------------
  Parameters Summary
-------------------------------------------------------------
  Start LSN                  : 1656320
  End LSN                    : 1656792
-------------------------------------------------------------
mysqlbackup completed OK!

bash$ ls /logs/backupWithFrm/datadir/innodb1/
table3.frm table3.ibd

Thus the backup directory contains only the .frm file matching the InnoDB table name specified in the --include option.

In a nutshell, the new --only-innodb-with-frm option provides a true hot InnoDB-only backup that includes the .frm files, with an additional check for any DDL that happened during the backup.
If DDL has happened, the DBA can decide whether to repeat the backup or to live with the potential inconsistency. This is the ideal solution for users who have all their "real" data in InnoDB and seldom change their schemas. You may also like: http://dev.mysql.com/doc/mysql-enterprise-backup/3.7/en/backup-partial-options.html
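As a footnote, here is a minimal sketch of the manual pre-3.7.1 workflow described in the three steps above, using the example server's datadir (the destination directory /logs/backupInnodbOnly is a hypothetical placeholder, not taken from the output above). Note that the read lock must be held by the same session that copies the files, hence the mysql client's built-in system command:

bash$ ./mysqlbackup -uroot --backup-dir=/logs/backupInnodbOnly --only-innodb backup
bash$ mysql -uroot
mysql> FLUSH TABLES WITH READ LOCK;  -- step 2: block DDL so the .frm files stay consistent
mysql> system cp /mysql/trydb/innodb1/*.frm /logs/backupInnodbOnly/datadir/innodb1/
mysql> UNLOCK TABLES;  -- step 3 done: release the lock

With --only-innodb-with-frm=all, the lock and the manual copy both disappear, replaced by the optimistic .frm copy and the DDL checks described in questions 3 and 4 above.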

    Read the article

  • Selling Federal Enterprise Architecture (EA)

    - by TedMcLaughlan
Selling Federal Enterprise Architecture

A taxonomy of subject areas, from which to develop a prioritized marketing and communications plan to evangelize EA activities within and among US Federal Government organizations and constituents. Any and all feedback is appreciated, particularly in developing and extending this discussion as a tool for use – more information and details are also available.

"Selling" the discipline of Enterprise Architecture (EA) in the Federal Government (particularly in non-DoD agencies) is difficult, notwithstanding the general availability and use of the Federal Enterprise Architecture Framework (FEAF) for some time now, and the relatively mature use of the reference models in the OMB Capital Planning and Investment Control (CPIC) cycles. EA in the Federal Government also tends to be a very esoteric and hard-to-decipher conversation – early apologies to those who agree to continue reading this somewhat lengthy article.

Alignment to the FEAF and OMB compliance mandates is long underway across the Federal Departments and Agencies (and visible via tools like PortfolioStat and ITDashboard.gov) – but there is still a gap between the top-down compliance directives and enablement programs, and the bottom-up awareness and effective use of EA for either IT investment management or actual mission effectiveness. "EA isn't getting deep enough penetration into programs, components, sub-agencies, etc.", confirmed a panelist at the most recent EA Government Conference in DC.

Newer guidance from OMB may be especially difficult to handle, where bottom-up input can't be accurately aligned, analyzed and reported via a standardized EA discipline at the Agency level – for example in addressing the new (for FY13) Exhibit 53D "Agency IT Reductions and Reinvestments" and the information required for "Cloud Computing Alternatives Evaluation" (supporting the new Exhibit 53C, "Agency Cloud Computing Portfolio").

Therefore, EA must be "sold" directly to the communities that matter, from a coordinated, proactive messaging perspective that takes BOTH the Program-level value drivers AND the broader Agency mission and IT maturity context into consideration. Selling EA means persuading others to take additional time and possibly assign additional resources, for a mix of direct and indirect benefits – many of which aren't likely to be realized in the short term. This means there's probably little current, allocated budget to work with; ergo the challenge of trying to sell an "unfunded mandate".

Also, the concept of "Enterprise" in large Departments like Homeland Security tends to cross all kinds of organizational boundaries – as Richard Spires recently indicated by commenting that "...organizational boundaries still trump functional similarities. Most people understand what we're trying to do internally, and at a high level they get it. The problem, of course, is when you get down to them and their system and the fact that you're going to be touching them...there's always that fear factor," Spires said.

It is quite clear to the Federal IT Investment community that for EA to meet its objective, understandable, relevant value must be measured and reported using a repeatable method – as described by GAO's recent report "Enterprise Architecture Value Needs To Be Measured and Reported". What's not clear is the method or guidance to sell this value. In fact, the current GAO "Framework for Assessing and Improving Enterprise Architecture Management (Version 2.0)", a.k.a.
the "EAMMF", does not include words like "sell", "persuade", "market", etc., except in reference ("within Core Element 19: Organization business owner and CXO representatives are actively engaged in architecture development") to a brief section in the CIO Council's 2001 "Practical Guide to Federal Enterprise Architecture", entitled "3.3.1. Develop an EA Marketing Strategy and Communications Plan." Furthermore, Core Element 19 of the EAMMF is advised to be applied in "Stage 3: Developing Initial EA Versions". This kind of EA sales campaign truly should start much earlier in the maturity progress, i.e. in Stages 0 or 1. So, what are the understandable, relevant benefits (or value) to sell, that can find an agreeable, participatory audience, and can pave the way towards success of a longer-term, funded set of EA mechanisms that can be methodically measured and reported? Pragmatic benefits from a useful EA that can help overcome the fear of change? And how should they be sold? Following is a brief taxonomy (it's a taxonomy, to help organize SME support) of benefit-related subjects that might make the most sense, in creating the messages and organizing an initial "engagement plan" for evangelizing EA "from within". An EA "Sales Taxonomy" of sorts. We're not boiling the ocean here; the subjects that are included are ones that currently appear to be urgently relevant to the current Federal IT Investment landscape. Note that successful dialogue in these topics is directly usable as input or guidance for actually developing early-stage, "Fit-for-Purpose" (a DoDAF term) Enterprise Architecture artifacts, as prescribed by common methods found in most EA methodologies, including FEAF, TOGAF, DoDAF and our own Oracle Enterprise Architecture Framework (OEAF). The taxonomy below is organized by (1) Target Community, (2) Benefit or Value, and (3) EA Program Facet - as in: "Let's talk to (1: Community Member) about how and why (3: EA Facet) the EA program can help with (2: Benefit/Value)". Once the initial discussion targets and subjects are approved (that can be measured and reported), a "marketing and communications plan" can be created. A working example follows the Taxonomy. Enterprise Architecture Sales Taxonomy Draft, Summary Version 1. Community 1.1. Budgeted Programs or Portfolios Communities of Purpose (CoPR) 1.1.1. Program/System Owners (Senior Execs) Creating or Executing Acquisition Plans 1.1.2. Program/System Owners Facing Strategic Change 1.1.2.1. Mandated 1.1.2.2. Expected/Anticipated 1.1.3. Program Managers - Creating Employee Performance Plans 1.1.4. CO/COTRs – Creating Contractor Performance Plans, or evaluating Value Engineering Change Proposals (VECP) 1.2. Governance & Communications Communities of Practice (CoP) 1.2.1. Policy Owners 1.2.1.1. OCFO 1.2.1.1.1. Budget/Procurement Office 1.2.1.1.2. Strategic Planning 1.2.1.2. OCIO 1.2.1.2.1. IT Management 1.2.1.2.2. IT Operations 1.2.1.2.3. Information Assurance (Cyber Security) 1.2.1.2.4. IT Innovation 1.2.1.3. Information-Sharing/ Process Collaboration (i.e. policies and procedures regarding Partners, Agreements) 1.2.2. Governing IT Council/SME Peers (i.e. an "Architects Council") 1.2.2.1. Enterprise Architects (assumes others exist; also assumes EA participants aren't buried solely within the CIO shop) 1.2.2.2. Domain, Enclave, Segment Architects – i.e. the right affinity group for a "shared services" EA structure (per the EAMMF), which may be classified as Federated, Segmented, Service-Oriented, or Extended 1.2.2.3. 
1.2.2.3. External Oversight/Constraints
1.2.2.3.1. GAO/OIG & Legal
1.2.2.3.2. Industry Standards
1.2.2.3.3. Official Public Notification, Response
1.2.3. Mission Constituents – Participant & Analyst Community of Interest (CoI)
1.2.3.1. Mission Operators/Users
1.2.3.2. Public Constituents
1.2.3.3. Industry Advisory Groups, Stakeholders
1.2.3.4. Media

2. Benefit/Value (Note: the actual benefits may not be discretely attributable to EA alone; EA is a very collaborative, cross-cutting discipline.)
2.1. Program Costs – EA enables sound decisions regarding...
2.1.1. Cost Avoidance – a TCO theme
2.1.2. Sequencing – alignment of capability delivery
2.1.3. Budget Instability – a Federal reality
2.2. Investment Capital – EA illuminates new investment resources via...
2.2.1. Value Engineering – contractor-driven cost savings on existing budgets, direct or collateral
2.2.2. Reuse – reuse of investments between programs can result in savings, chargeback models; avoiding duplication
2.2.3. License Refactoring – IT license & support models may not reflect actual or intended usage
2.3. Contextual Knowledge – EA enables informed decisions by revealing...
2.3.1. Common Operating Picture (COP) – i.e. cross-program impacts and synergy, relative to context
2.3.2. Expertise & Skill – who truly should be involved in architectural decisions, both business and IT
2.3.3. Influence – the impact of politics and relationships can be examined
2.3.4. Disruptive Technologies – new technologies may reduce costs or mitigate risk in unanticipated ways
2.3.5. What-If Scenarios – can become much more refined, current, verifiable; the basis for Target Architectures
2.4. Mission Performance – EA enables beneficial decision results regarding...
2.4.1. IT Performance and Optimization – towards 100% effective, available resource utilization
2.4.2. IT Stability – towards 100%, real-time uptime
2.4.3. Agility – responding to rapid changes in mission
2.4.4. Outcomes – measures of mission success, KPIs – vs. only "Outputs"
2.4.5. Constraints – appropriate response to constraints
2.4.6. Personnel Performance – better line-of-sight through performance plans to mission outcome
2.5. Mission Risk Mitigation – EA mitigates decision risks in terms of...
2.5.1. Compliance – all the right boxes are checked
2.5.2. Dependencies – cross-agency, segment, government
2.5.3. Transparency – risks, impact and resource utilization are illuminated quickly, comprehensively
2.5.4. Threats and Vulnerabilities – current, realistic awareness and profiles
2.5.5. Consequences – realization of risk can be mapped as a series of consequences, from earlier decisions or new decisions required for current issues
2.5.5.1. Unanticipated – illuminating signals of future or non-symmetric risk; helping to "future-proof"
2.5.5.2. Anticipated – discovering the level of impact that matters

3. EA Program Facet (What parts of the EA can and should be communicated, using business or mission terms?)
3.1. Architecture Models – the visual tools to be created and used
3.1.1. Operating Architecture – the Business Operating Model/Architecture elements of the EA truly drive all other elements, plus expose communication channels
3.1.2. Use Of – how can the EA models be used, and how are they populated, from a reasonable, pragmatic yet compliant perspective? What are the core/minimal models required? What's the relationship of these models with existing system models?
3.1.3. Scope – what level of granularity within the models, and what level of abstraction across the models, is likely to be most effective and useful?
3.2. Traceability – the maturity, status, completeness of the tools
3.2.1. Status – what in fact is the degree of maturity across the integrated EA model and other relevant governance models, and who may already be benefiting from it?
3.2.2. Visibility – how does the EA visibly and effectively prove IT investment performance goals are being reached, with positive mission outcome?
3.3. Governance – what's the interaction and participation method; how are the tools used?
3.3.1. Contributions – how is the EA program informed, and how does it accept submissions and collect data? Who are the experts?
3.3.2. Review – how is the EA validated, and against what criteria?

Taxonomy Usage Example:

1. To speak with:
a. ...a particular set of System Owners Facing Strategic Change, via mandate (like the "Cloud First" mandate); about...
b. ...how the EA program's visible and easily accessible Infrastructure Reference Model (i.e. "IRM" or "TRM"), if updated more completely with current system data, can...
c. ...help shed light on ways to mitigate risks and avoid future costs associated with NOT leveraging potentially-available shared services across the enterprise...

2. ...the following Marketing & Communications (Sales) Plan can be constructed:
a. Create an easy-to-read "Consequence Model" that illustrates how adoption of a cloud capability (like elastic operational storage) can enable rapid and durable compliance with the mandate – using EA traceability. Traceability might be from the IRM to the ARM (that identifies reusable services invoking the elastic storage), and then to the PRM with performance measures (such as % utilization of purchased storage allocation) included in the OMB Exhibits; and
b. Schedule a meeting with the Program Owners, timed during their Acquisition Strategy meetings in response to the mandate, to use the "Consequence Model" for advising them to organize a rapid and relevant RFI solicitation for this cloud capability (regarding alternatives for sourcing elastic operational storage); and
c. Schedule a series of short "Discovery" meetings with the system architecture leads (as agreed by the Program Owners), to further populate/validate the "As-Is" models and frame the "To-Be" models (via scenarios), to better inform the RFI, obtain the best feedback from the vendor community, and provide potential value for and avoid impact to all other programs and systems.

-- end example --

Note that communications with the intended audience should take a page out of the standard "Search Engine Optimization" (SEO) playbook, using keywords and phrases relating to "value" and "outcome" vs. "compliance" and "output". Searches in email boxes and in internal and external search engines for phrases like "cost avoidance strategies", "mission performance metrics" and "innovation funding" should yield messages and content from the EA team.

This targeted, informed, practical sales approach should result in additional buy-in and participation, additional EA information contribution and model validation, development of more SMEs, and quick "proof points" (with real-life testing) to bolster the case for EA. The proof point here is a successful, timely procurement that satisfies not only the external mandate and external oversight review, but also meets internal EA compliance/conformance goals and therefore is more transparently useful across the community.
In short, if sold effectively, the EA will perform and be recognized. EA won’t therefore be used only for compliance, but also (according to a validated, stated purpose) to directly influence decisions and outcomes. The opinions, views and analysis expressed in this document are those of the author and do not necessarily reflect the views of Oracle.

    Read the article

  • MySQL 5.5: imminent release? Oracle expected to announce the new version of the open-source DBMS on Wednesday

MySQL 5.5: imminent release? Oracle expected to announce the new version of the open-source DBMS on Wednesday. Update: December 13, 2010. This Wednesday, Oracle is holding a webinar to present "a major update to MySQL". Tomas Ulin, Vice President of MySQL Development, and Rob Young, Senior Product Manager, will unveil the latest advances in the open-source DBMS that the database giant picked up with its acquisition of Sun. Oracle announced a release candidate (RC) of MySQL 5.5 at Oracle OpenWorld in September (see above). This time, the project leads may announce its official availability.

    Read the article

  • The Java SE 7 and 8 specifications proposed by Oracle adopted "by an overwhelming majority" by the JCP Executive Committee

The Java SE 7 and 8 specifications proposed by Oracle adopted "by an overwhelming majority" by the JCP Executive Committee. Update: December 7, 2010. Oracle has announced that its proposals for the specifications of the next two versions of the Java platform (its "Java Specification Requests", or JSRs) were adopted following a vote by the JCP Executive Committee. Oracle states that the vote, concerning JSRs 336 and 337, ended in an "overwhelming majority" in its favor, with no further detail on the ballot figures. The now-approved roadmap provides for a standardization of...

    Read the article

  • Keeping track of File System Utilization in Ops Center 12c

    - by S Stelting
Enterprise Manager Ops Center 12c provides significant monitoring capabilities, combined with very flexible incident management. These capabilities even extend to monitoring the file systems associated with Solaris or Linux assets. Depending on your needs, you can monitor and manage incidents, or you can fine-tune alert monitoring rules for specific file systems. This article will show you how to use Ops Center 12c to:

- Track file system utilization
- Adjust file system monitoring rules
- Disable file system rules
- Create custom monitoring rules

If you're interested in this topic, please join us for a WebEx presentation!

Date: Thursday, November 8, 2012
Time: 11:00 am, Eastern Standard Time (New York, GMT-05:00)
Meeting Number: 598 796 842
Meeting Password: oracle123

To join the online meeting:
1. Go to https://oracleconferencing.webex.com/oracleconferencing/j.php?ED=209833597&UID=1512095432&PW=NOWQ3YjJlMmYy&RT=MiMxMQ%3D%3D
2. If requested, enter your name and email address.
3. If a password is required, enter the meeting password: oracle123
4. Click "Join".

To view in other time zones or languages, please click the link: https://oracleconferencing.webex.com/oracleconferencing/j.php?ED=209833597&UID=1512095432&PW=NOWQ3YjJlMmYy&ORT=MiMxMQ%3D%3D

Monitoring File Systems for OS Assets

The Libraries tab provides basic, device-level information about the storage associated with an OS instance. This tab shows you the local file system associated with the instance and any shared storage libraries mounted by Ops Center. More detailed information about file system storage is available under the Analytics tab, in the sub-tab named Charts. Here, you can select and display the individual mount points of an OS, and export the utilization data if desired. In this example, the OS instance has a basic root file partition and several NFS directories. Each file system mount point can be independently chosen for display in the Ops Center chart.

File Systems and Incident Reporting

Every asset managed by Ops Center has a "monitoring policy", which determines what represents a reportable issue with the asset. The policy is made up of a set of monitoring rules, where each rule describes:

- An attribute to monitor
- The conditions which represent an issue
- The level or levels of severity for the issue

When the conditions are met, Ops Center sends a notification and creates an incident. By default, OS instances have three monitoring rules associated with file systems:

- File System Reachability: Triggers an incident if a file system is not reachable
- NAS Library Status: Triggers an incident for a value of "WARNING" or "DEGRADED" for a NAS-based file system
- File System Used Space Percentage: Triggers an incident when file system utilization grows beyond defined thresholds

You can view these rules in the Monitoring tab for an OS. Of course, the catch with the default monitoring rules is that they apply to every file system associated with an OS instance. As a result, any issue with NAS accessibility or disk utilization will trigger an incident. This can cause incidents for file systems to be reported multiple times if the same shared storage is used by many assets.

Depending on the level of control you'd like, there are a number of ways to fine-tune incident reporting. Note that any changes to an asset's monitoring policy will detach it from the default, creating a new monitoring policy for the asset.
If you'd like, you can extract a monitoring policy from an asset, which allows you to save it and apply the customized monitoring profile to other OS assets.

Solution #1: Modify the Reporting Thresholds

In some cases, you may want to modify the basic conditions for incident reporting on your file systems. The changes you make to a default monitoring rule will apply to all of the file systems associated with your operating system. Selecting the File System Used Space Percentage entry and clicking the "Edit Alert Monitoring Rule Parameters" button opens a pop-up dialog which allows you to modify the rule. The first screen lets you decide when you will check for file system usage, and how long you will wait before opening an incident in Ops Center. By default, Ops Center monitors continuously and reports disk utilization issues which exist for more than 15 minutes. The second screen lets you define actual threshold values: by default, Ops Center opens a Warning-level incident if utilization rises above 80%, and a Critical-level incident for utilization above 95%.

Solution #2: Disable Incident Reporting for File Systems

If you'd rather not report file system incidents, you can disable the monitoring rules altogether. In this case, you can select the monitoring rules and click the "Disable Alert Monitoring Rule(s)" button to open the pop-up confirmation dialog. Like the first solution, this option affects all file system monitoring. It allows you to completely disable incident reporting for NAS library status or file system space consumption.

Solution #3: Create New Monitoring Rules for Specific File Systems

If you'd like to have the greatest flexibility when monitoring file systems, you can create entirely new rules. Clicking the "Add Alert Monitoring Rule" button (the icon with the green plus sign) opens a wizard which allows you to define a new rule. This rule will be based on a threshold, and will be used to monitor operating system assets. We'd like to add a rule to track disk utilization for a specific file system – the /nfs-guest directory. To do this, we specify the following attribute:

FileSystemUsages.name=/nfs-guest.usedSpacePercentage

The value of name in the attribute allows us to define a specific NFS shared directory or file system... in the case of this OS, we could have chosen any of the values shown in the File Systems Utilization chart at the beginning of this article. usedSpacePercentage lets us define a threshold based on the percentage of total disk space used. There are a number of other values that we could use for threshold-based monitoring of FileSystemUsages, including:

- freeSpace
- freeSpacePercentage
- totalSpace
- usedSpace
- usedSpacePercentage

The final sections of the screen allow us to determine when to monitor for disk usage, and how long to wait after utilization reaches a threshold before creating an incident. The next screen lets us define the threshold values and severity levels for the monitoring rule. If historical data is available, Ops Center will display it in the screen. Clicking the Apply button will create the new monitoring rule and activate it in your monitoring policy.

If you combine this with one of the previous solutions, you can precisely define which file systems will generate incidents and notifications. For example, this monitoring policy has the default "File System Used Space Percentage" rule disabled, but the new rule reports ONLY on utilization for the /nfs-guest directory.
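Since the attribute string is the key to Solution #3, here is a quick sketch of how the same pattern would extend to another metric and mount point (the /export/home mount point below is a hypothetical example, not one of the mount points from this article's OS instance):

FileSystemUsages.name=/export/home.freeSpacePercentage

A threshold rule built on this attribute would alert when the percentage of free space on /export/home drops past your chosen Warning and Critical values, simply substituting freeSpacePercentage from the list above for usedSpacePercentage. The name component always identifies the mount point exactly as it appears in the File Systems Utilization chart.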

    Read the article
