Search Results

Search found 6772 results on 271 pages for 'exit strategy'.


  • Execute SQL SP in Excel VBA

    - by TheOCD
    Hi, I am having a problem getting all the columns back when I execute the following code in Excel VBA. I only get 6 out of 23 columns back. The connection, command, etc. work fine (I can see the exec command in SQL Profiler), and data headers are created for all 23 columns, but I only get data for 6 columns. Side note: this is not production-level code and I have left out error handling on purpose; the stored procedure works fine in SQL Server Management Studio, ASP.NET and a C# WinForms app. This is Excel 2003 connecting to SQL Server 2008. Can someone help me troubleshoot it?

    Dim connection As ADODB.connection
    Dim recordset As ADODB.recordset
    Dim command As ADODB.command
    Dim strProcName As String 'Stored Procedure name
    Dim strConn As String ' connection string.
    Dim selectedVal As String

    'Set ADODB requirements
    Set connection = New ADODB.connection
    Set recordset = New ADODB.recordset
    Set command = New ADODB.command

    If Workbooks("Book2.xls").MultiUserEditing = True Then
        MsgBox "You do not have Exclusive access to the workbook at this time." & _
            vbNewLine & "Please have all other users close the workbook and then try again.", vbOKOnly + vbExclamation
        Exit Sub
    Else
        On Error Resume Next
        ActiveWorkbook.ExclusiveAccess
        'On Error GoTo No_Bugs
    End If

    'set the active sheet
    Set oSht = Workbooks("Book2.xls").Sheets(1)

    'get the connection string, if empty just exit
    strConn = ConnectionString()
    If strConn = "" Then
        Exit Sub
    End If

    ' selected value, if <NOTHING> just exit
    selectedVal = selectedValue()
    If selectedVal = "<NOTHING>" Then
        Exit Sub
    End If

    If Not oSht Is Nothing Then
        'Open database connection
        connection.ConnectionString = strConn
        connection.Open

        ' set command stuff.
        command.ActiveConnection = connection
        command.CommandText = "GetAlbumByName"
        command.CommandType = adCmdStoredProc
        command.Parameters.Refresh
        command.Parameters(1).Value = selectedVal

        'Execute stored procedure and return to a recordset
        Set recordset = command.Execute()

        If recordset.BOF = False And recordset.EOF = False Then
            Sheets("Sheet2").[A1].CopyFromRecordset recordset
            ' Create headers and copy data
            With Sheets("Sheet2")
                For Column = 0 To recordset.Fields.Count - 1
                    .Cells(1, Column + 1).Value = recordset.Fields(Column).Name
                Next
                .Range(.Cells(1, 1), .Cells(1, recordset.Fields.Count)).Font.Bold = True
                .Cells(2, 1).CopyFromRecordset recordset
            End With
        Else
            MsgBox "b4 BOF or after EOF.", vbOKOnly + vbExclamation
        End If

        'Close database connection and clean up
        If CBool(recordset.State And adStateOpen) = True Then recordset.Close
        Set recordset = Nothing
        If CBool(connection.State And adStateOpen) = True Then connection.Close
        Set connection = Nothing
    Else
        MsgBox "oSheet2 is Nothing.", vbOKOnly + vbExclamation
    End If

    Read the article

  • Daemonize() issues on Debian

    - by djTeller
    Hi, I'm currently writing a multi-process client and a multi-threaded server for a project I have. The server is a daemon. In order to accomplish that, I'm using the following daemonize() code:

    static void daemonize(void)
    {
        pid_t pid, sid;

        /* already a daemon */
        if ( getppid() == 1 ) return;

        /* Fork off the parent process */
        pid = fork();
        if (pid < 0) {
            exit(EXIT_FAILURE);
        }

        /* If we got a good PID, then we can exit the parent process. */
        if (pid > 0) {
            exit(EXIT_SUCCESS);
        }

        /* At this point we are executing as the child process */

        /* Change the file mode mask */
        umask(0);

        /* Create a new SID for the child process */
        sid = setsid();
        if (sid < 0) {
            exit(EXIT_FAILURE);
        }

        /* Change the current working directory. This prevents the current
           directory from being locked; hence not being able to remove it. */
        if ((chdir("/")) < 0) {
            exit(EXIT_FAILURE);
        }

        /* Redirect standard files to /dev/null */
        freopen( "/dev/null", "r", stdin);
        freopen( "/dev/null", "w", stdout);
        freopen( "/dev/null", "w", stderr);
    }

    int main( int argc, char *argv[] )
    {
        daemonize();

        /* Now we are a daemon -- do the work for which we were paid */
        return 0;
    }

    I get a strange side effect when testing the server on Debian (Ubuntu): the accept() function always fails to accept connections, and the value returned is -1. I have no idea what is causing this, since on RedHat and CentOS it works well. When I remove the call to daemonize(), everything works fine on Debian; when I add it back, the same accept() error reproduces. I've been monitoring /proc//fd and everything looks good. Something in daemonize() and this Debian release just doesn't seem to work together. (Debian GNU/Linux 5.0, Linux 2.6.26-2-286 #1 SMP) Any idea what is causing this? Thank you

    Read the article

  • Socket connection to a telnet-based server hangs on read

    - by mixwhit
    I'm trying to write a simple socket-based client in Python that will connect to a telnet server. I can test the server by telnetting to its port (5007), and entering text. It responds with a NAK (error) or an AK (success), sometimes accompanied by other text. Seems very simple. I wrote a client to connect and communicate with the server, but it hangs on the first attempt to read the response. The connection is successful. Queries like getsockname and getpeername are successful. The send command returns a value that equals the number of characters I'm sending, so it seems to be sending correctly. But in the end, it always hangs when I try to read the response. I've tried using both file-based objects like readline and write (via socket.makefile), as well as using send and recv. With the file object I tried making it with "rw" and reading and writing via that object, and later tried one object for "r" and another for "w" to separate them. None of these worked. I used a packet sniffer to watch what's going on. I'm not versed in all that I'm seeing, but during a telnet session I can see my typed text and the server's text coming back. During my Python socket connection, I can see my text going to the server, but packets back don't seem to have any text in them. Any ideas on what I'm doing wrong, or any strategies to try? Here's the code I'm using (in this case, it's with send and recv):

    #!/usr/bin/python

    host = "localhost"
    port = 5007
    msg = "HELLO EMC 1 1"
    msg2 = "HELLO"

    import socket
    import sys

    try:
        skt = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    except socket.error, e:
        print("Error creating socket: %s" % e)
        sys.exit(1)

    try:
        skt.connect((host,port))
    except socket.gaierror, e:
        print("Address-related error connecting to server: %s" % e)
        sys.exit(1)
    except socket.error, e:
        print("Error connecting to socket: %s" % e)
        sys.exit(1)

    try:
        print(skt.send(msg))
        print("SEND: %s" % msg)
    except socket.error, e:
        print("Error sending data: %s" % e)
        sys.exit(1)

    while 1:
        try:
            buf = skt.recv(1024)
            print("RECV: %s" % buf)
        except socket.error, e:
            print("Error receiving data: %s" % e)
            sys.exit(1)
        if not len(buf):
            break
        sys.stdout.write(buf)

    Read the article

  • SQL Server 2008 R2 Enterprise won't install on Windows 2008 R2 Enterprise

    - by Carlos Paulino
    I've been trying to install SQL Server on a new Windows Server 2008. I have tried everything but I haven't been able to narrow down the problem. When the installation fails I get " Exit code (Decimal): -2068643839". The problem with this is that according to Microsoft this is a generic error code. I follow their guide to look into the detail.txt inside C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\ But I can't find something that specifies the exact error. Any suggestions ? Thanks in advanced. I uploaded to detail.txt to http://www.megaupload.com/?d=0MV46SZH because it is to big to paste here. Below is the summary.txt ---------- Overall summary: Final result: SQL Server installation failed. To continue, investigate the reason for the failure, correct the problem, uninstall SQL Server, and then rerun SQL Server Setup. Exit code (Decimal): -2068643839 Exit facility code: 1203 Exit error code: 1 Exit message: SQL Server installation failed. To continue, investigate the reason for the failure, correct the problem, uninstall SQL Server, and then rerun SQL Server Setup. Start time: 2011-02-28 11:29:56 End time: 2011-02-28 11:34:45 Requested action: Install Machine Properties: Machine name: SA-SERVER Machine processor count: 8 OS version: Windows Server 2008 R2 OS service pack: Service Pack 1 OS region: United States OS language: English (United States) OS architecture: x64 Process architecture: 64 Bit OS clustered: No Product features discovered: Product Instance Instance ID Feature Language Edition Version Clustered Package properties: Description: SQL Server Database Services 2008 R2 ProductName: SQL Server 2008 R2 Type: RTM Version: 10 SPLevel: 0 Installation location: F:\x64\setup\ Installation edition: ENTERPRISE User Input Settings: ACTION: Install ADDCURRENTUSERASSQLADMIN: True AGTSVCACCOUNT: NT AUTHORITY\SYSTEM AGTSVCPASSWORD: ***** AGTSVCSTARTUPTYPE: Manual ASBACKUPDIR: Backup ASCOLLATION: Latin1_General_CI_AS ASCONFIGDIR: Config ASDATADIR: Data ASDOMAINGROUP: <empty> ASLOGDIR: Log ASPROVIDERMSOLAP: 1 ASSVCACCOUNT: <empty> ASSVCPASSWORD: ***** ASSVCSTARTUPTYPE: Automatic ASSYSADMINACCOUNTS: <empty> ASTEMPDIR: Temp BROWSERSVCSTARTUPTYPE: Disabled CONFIGURATIONFILE: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20110228_112601\ConfigurationFile.ini CUSOURCE: ENABLERANU: False ENU: True ERRORREPORTING: False FARMACCOUNT: <empty> FARMADMINPORT: 0 FARMPASSWORD: ***** FEATURES: SQLENGINE,BIDS,CONN,IS,BC,SDK,SSMS,ADV_SSMS,SNAC_SDK,OCS FILESTREAMLEVEL: 0 FILESTREAMSHARENAME: <empty> FTSVCACCOUNT: <empty> FTSVCPASSWORD: ***** HELP: False IACCEPTSQLSERVERLICENSETERMS: False INDICATEPROGRESS: False INSTALLSHAREDDIR: C:\Program Files\Microsoft SQL Server\ INSTALLSHAREDWOWDIR: C:\Program Files (x86)\Microsoft SQL Server\ INSTALLSQLDATADIR: <empty> INSTANCEDIR: D:\SQLServer INSTANCEID: MSSQLSERVER INSTANCENAME: MSSQLSERVER ISSVCACCOUNT: NT AUTHORITY\SYSTEM ISSVCPASSWORD: ***** ISSVCSTARTUPTYPE: Automatic NPENABLED: 0 PASSPHRASE: ***** PCUSOURCE: PID: ***** QUIET: False QUIETSIMPLE: False ROLE: AllFeatures_WithDefaults RSINSTALLMODE: FilesOnlyMode RSSVCACCOUNT: NT AUTHORITY\NETWORK SERVICE RSSVCPASSWORD: ***** RSSVCSTARTUPTYPE: Automatic SAPWD: ***** SECURITYMODE: SQL SQLBACKUPDIR: <empty> SQLCOLLATION: SQL_Latin1_General_CP1_CI_AS SQLSVCACCOUNT: NT AUTHORITY\SYSTEM SQLSVCPASSWORD: ***** SQLSVCSTARTUPTYPE: Automatic SQLSYSADMINACCOUNTS: SA-SERVER\Administrator SQLTEMPDBDIR: <empty> SQLTEMPDBLOGDIR: <empty> SQLUSERDBDIR: <empty> SQLUSERDBLOGDIR: <empty> SQMREPORTING: 
False TCPENABLED: 1 UIMODE: Normal X86: False Configuration file: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20110228_112601\ConfigurationFile.ini Detailed results: Feature: Database Engine Services Status: Failed: see logs for details MSI status: Passed Configuration status: Passed Feature: SQL Client Connectivity SDK Status: Failed: see logs for details MSI status: Passed Configuration status: Passed Feature: Integration Services Status: Failed: see logs for details MSI status: Passed Configuration status: Passed Feature: Client Tools Connectivity Status: Failed: see logs for details MSI status: Passed Configuration status: Passed Feature: Management Tools - Complete Status: Failed: see logs for details MSI status: Passed Configuration status: Passed Feature: Management Tools - Basic Status: Failed: see logs for details MSI status: Passed Configuration status: Passed Feature: Client Tools SDK Status: Failed: see logs for details MSI status: Passed Configuration status: Passed Feature: Client Tools Backwards Compatibility Status: Failed: see logs for details MSI status: Passed Configuration status: Passed Feature: Business Intelligence Development Studio Status: Failed: see logs for details MSI status: Passed Configuration status: Passed Feature: Microsoft Sync Framework Status: Failed: see logs for details MSI status: Passed Configuration status: Passed Rules with failures: Global rules: Scenario specific rules: Rules report file: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20110228_112601\SystemConfigurationCheck_Report.htm

    Read the article

  • Improving Partitioned Table Join Performance

    - by Paul White
    The query optimizer does not always choose an optimal strategy when joining partitioned tables. This post looks at an example, showing how a manual rewrite of the query can almost double performance, while reducing the memory grant to almost nothing. Test Data The two tables in this example use a common partitioning partition scheme. The partition function uses 41 equal-size partitions: CREATE PARTITION FUNCTION PFT (integer) AS RANGE RIGHT FOR VALUES ( 125000, 250000, 375000, 500000, 625000, 750000, 875000, 1000000, 1125000, 1250000, 1375000, 1500000, 1625000, 1750000, 1875000, 2000000, 2125000, 2250000, 2375000, 2500000, 2625000, 2750000, 2875000, 3000000, 3125000, 3250000, 3375000, 3500000, 3625000, 3750000, 3875000, 4000000, 4125000, 4250000, 4375000, 4500000, 4625000, 4750000, 4875000, 5000000 ); GO CREATE PARTITION SCHEME PST AS PARTITION PFT ALL TO ([PRIMARY]); There two tables are: CREATE TABLE dbo.T1 ( TID integer NOT NULL IDENTITY(0,1), Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T1 PRIMARY KEY CLUSTERED (TID) ON PST (TID) );   CREATE TABLE dbo.T2 ( TID integer NOT NULL, Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T2 PRIMARY KEY CLUSTERED (TID, Column1) ON PST (TID) ); The next script loads 5 million rows into T1 with a pseudo-random value between 1 and 5 for Column1. The table is partitioned on the IDENTITY column TID: INSERT dbo.T1 WITH (TABLOCKX) (Column1) SELECT (ABS(CHECKSUM(NEWID())) % 5) + 1 FROM dbo.Numbers AS N WHERE n BETWEEN 1 AND 5000000; In case you don’t already have an auxiliary table of numbers lying around, here’s a script to create one with 10 million rows: CREATE TABLE dbo.Numbers (n bigint PRIMARY KEY);   WITH L0 AS(SELECT 1 AS c UNION ALL SELECT 1), L1 AS(SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B), L2 AS(SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B), L3 AS(SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B), L4 AS(SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B), L5 AS(SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B), Nums AS(SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n FROM L5) INSERT dbo.Numbers WITH (TABLOCKX) SELECT TOP (10000000) n FROM Nums ORDER BY n OPTION (MAXDOP 1); Table T1 contains data like this: Next we load data into table T2. The relationship between the two tables is that table 2 contains ‘n’ rows for each row in table 1, where ‘n’ is determined by the value in Column1 of table T1. There is nothing particularly special about the data or distribution, by the way. INSERT dbo.T2 WITH (TABLOCKX) (TID, Column1) SELECT T.TID, N.n FROM dbo.T1 AS T JOIN dbo.Numbers AS N ON N.n >= 1 AND N.n <= T.Column1; Table T2 ends up containing about 15 million rows: The primary key for table T2 is a combination of TID and Column1. The data is partitioned according to the value in column TID alone. Partition Distribution The following query shows the number of rows in each partition of table T1: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T1 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are 40 partitions containing 125,000 rows (40 * 125k = 5m rows). The rightmost partition remains empty. The next query shows the distribution for table 2: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T2 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are roughly 375,000 rows in each partition (the rightmost partition is also empty): Ok, that’s the test data done. 
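    To sanity-check the relationship just described (each T1 row should have exactly Column1 matching rows in T2) before moving on, a quick verification query along these lines should return two identical columns for the values 1 through 5; it is only a check on the generated data, not part of the test workload:

    SELECT
        T1.Column1,
        RowsPerT1Row = COUNT_BIG(*) / COUNT_BIG(DISTINCT T1.TID)
    FROM dbo.T1 AS T1
    JOIN dbo.T2 AS T2
        ON T2.TID = T1.TID
    GROUP BY T1.Column1
    ORDER BY T1.Column1;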
Test Query and Execution Plan The task is to count the rows resulting from joining tables 1 and 2 on the TID column: SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; The optimizer chooses a plan using parallel hash join, and partial aggregation: The Plan Explorer plan tree view shows accurate cardinality estimates and an even distribution of rows across threads (click to enlarge the image): With a warm data cache, the STATISTICS IO output shows that no physical I/O was needed, and all 41 partitions were touched: Running the query without actual execution plan or STATISTICS IO information for maximum performance, the query returns in around 2600ms. Execution Plan Analysis The first step toward improving on the execution plan produced by the query optimizer is to understand how it works, at least in outline. The two parallel Clustered Index Scans use multiple threads to read rows from tables T1 and T2. Parallel scan uses a demand-based scheme where threads are given page(s) to scan from the table as needed. This arrangement has certain important advantages, but does result in an unpredictable distribution of rows amongst threads. The point is that multiple threads cooperate to scan the whole table, but it is impossible to predict which rows end up on which threads. For correct results from the parallel hash join, the execution plan has to ensure that rows from T1 and T2 that might join are processed on the same thread. For example, if a row from T1 with join key value ‘1234’ is placed in thread 5’s hash table, the execution plan must guarantee that any rows from T2 that also have join key value ‘1234’ probe thread 5’s hash table for matches. The way this guarantee is enforced in this parallel hash join plan is by repartitioning rows to threads after each parallel scan. The two repartitioning exchanges route rows to threads using a hash function over the hash join keys. The two repartitioning exchanges use the same hash function so rows from T1 and T2 with the same join key must end up on the same hash join thread. Expensive Exchanges This business of repartitioning rows between threads can be very expensive, especially if a large number of rows is involved. The execution plan selected by the optimizer moves 5 million rows through one repartitioning exchange and around 15 million across the other. As a first step toward removing these exchanges, consider the execution plan selected by the optimizer if we join just one partition from each table, disallowing parallelism: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = 1 AND $PARTITION.PFT(T2.TID) = 1 OPTION (MAXDOP 1); The optimizer has chosen a (one-to-many) merge join instead of a hash join. The single-partition query completes in around 100ms. If everything scaled linearly, we would expect that extending this strategy to all 40 populated partitions would result in an execution time around 4000ms. Using parallelism could reduce that further, perhaps to be competitive with the parallel hash join chosen by the optimizer. This raises a question. If the most efficient way to join one partition from each of the tables is to use a merge join, why does the optimizer not choose a merge join for the full query? 
Forcing a Merge Join Let’s force the optimizer to use a merge join on the test query using a hint: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN); This is the execution plan selected by the optimizer: This plan results in the same number of logical reads reported previously, but instead of 2600ms the query takes 5000ms. The natural explanation for this drop in performance is that the merge join plan is only using a single thread, whereas the parallel hash join plan could use multiple threads. Parallel Merge Join We can get a parallel merge join plan using the same query hint as before, and adding trace flag 8649: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN, QUERYTRACEON 8649); The execution plan is: This looks promising. It uses a similar strategy to distribute work across threads as seen for the parallel hash join. In practice though, performance is disappointing. On a typical run, the parallel merge plan runs for around 8400ms; slower than the single-threaded merge join plan (5000ms) and much worse than the 2600ms for the parallel hash join. We seem to be going backwards! The logical reads for the parallel merge are still exactly the same as before, with no physical IOs. The cardinality estimates and thread distribution are also still very good (click to enlarge): A big clue to the reason for the poor performance is shown in the wait statistics (captured by Plan Explorer Pro): CXPACKET waits require careful interpretation, and are most often benign, but in this case excessive waiting occurs at the repartitioning exchanges. Unlike the parallel hash join, the repartitioning exchanges in this plan are order-preserving ‘merging’ exchanges (because merge join requires ordered inputs): Parallelism works best when threads can just grab any available unit of work and get on with processing it. Preserving order introduces inter-thread dependencies that can easily lead to significant waits occurring. In extreme cases, these dependencies can result in an intra-query deadlock, though the details of that will have to wait for another time to explore in detail. The potential for waits and deadlocks leads the query optimizer to cost parallel merge join relatively highly, especially as the degree of parallelism (DOP) increases. This high costing resulted in the optimizer choosing a serial merge join rather than parallel in this case. The test results certainly confirm its reasoning. Collocated Joins In SQL Server 2008 and later, the optimizer has another available strategy when joining tables that share a common partition scheme. This strategy is a collocated join, also known as as a per-partition join. It can be applied in both serial and parallel execution plans, though it is limited to 2-way joins in the current optimizer. Whether the optimizer chooses a collocated join or not depends on cost estimation. The primary benefits of a collocated join are that it eliminates an exchange and requires less memory, as we will see next. Costing and Plan Selection The query optimizer did consider a collocated join for our original query, but it was rejected on cost grounds. The parallel hash join with repartitioning exchanges appeared to be a cheaper option. There is no query hint to force a collocated join, so we have to mess with the costing framework to produce one for our test query. 
Pretending that IOs cost 50 times more than usual is enough to convince the optimizer to use collocated join with our test query: -- Pretend IOs are 50x cost temporarily DBCC SETIOWEIGHT(50);   -- Co-located hash join SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (RECOMPILE);   -- Reset IO costing DBCC SETIOWEIGHT(1); Collocated Join Plan The estimated execution plan for the collocated join is: The Constant Scan contains one row for each partition of the shared partitioning scheme, from 1 to 41. The hash repartitioning exchanges seen previously are replaced by a single Distribute Streams exchange using Demand partitioning. Demand partitioning means that the next partition id is given to the next parallel thread that asks for one. My test machine has eight logical processors, and all are available for SQL Server to use. As a result, there are eight threads in the single parallel branch in this plan, each processing one partition from each table at a time. Once a thread finishes processing a partition, it grabs a new partition number from the Distribute Streams exchange…and so on until all partitions have been processed. It is important to understand that the parallel scans in this plan are different from the parallel hash join plan. Although the scans have the same parallelism icon, tables T1 and T2 are not being co-operatively scanned by multiple threads in the same way. Each thread reads a single partition of T1 and performs a hash match join with the same partition from table T2. The properties of the two Clustered Index Scans show a Seek Predicate (unusual for a scan!) limiting the rows to a single partition: The crucial point is that the join between T1 and T2 is on TID, and TID is the partitioning column for both tables. A thread that processes partition ‘n’ is guaranteed to see all rows that can possibly join on TID for that partition. In addition, no other thread will see rows from that partition, so this removes the need for repartitioning exchanges. CPU and Memory Efficiency Improvements The collocated join has removed two expensive repartitioning exchanges and added a single exchange processing 41 rows (one for each partition id). Remember, the parallel hash join plan exchanges had to process 5 million and 15 million rows. The amount of processor time spent on exchanges will be much lower in the collocated join plan. In addition, the collocated join plan has a maximum of 8 threads processing single partitions at any one time. The 41 partitions will all be processed eventually, but a new partition is not started until a thread asks for it. Threads can reuse hash table memory for the new partition. The parallel hash join plan also had 8 hash tables, but with all 5,000,000 build rows loaded at the same time. The collocated plan needs memory for only 8 * 125,000 = 1,000,000 rows at any one time. Collocated Hash Join Performance The collated join plan has disappointing performance in this case. The query runs for around 25,300ms despite the same IO statistics as usual. This is much the worst result so far, so what went wrong? It turns out that cardinality estimation for the single partition scans of table T1 is slightly low. The properties of the Clustered Index Scan of T1 (graphic immediately above) show the estimation was for 121,951 rows. 
This is a small shortfall compared with the 125,000 rows actually encountered, but it was enough to cause the hash join to spill to physical tempdb: A level 1 spill doesn’t sound too bad, until you realize that the spill to tempdb probably occurs for each of the 41 partitions. As a side note, the cardinality estimation error is a little surprising because the system tables accurately show there are 125,000 rows in every partition of T1. Unfortunately, the optimizer uses regular column and index statistics to derive cardinality estimates here rather than system table information (e.g. sys.partitions). Collocated Merge Join We will never know how well the collocated parallel hash join plan might have worked without the cardinality estimation error (and the resulting 41 spills to tempdb) but we do know: Merge join does not require a memory grant; and Merge join was the optimizer’s preferred join option for a single partition join Putting this all together, what we would really like to see is the same collocated join strategy, but using merge join instead of hash join. Unfortunately, the current query optimizer cannot produce a collocated merge join; it only knows how to do collocated hash join. So where does this leave us? CROSS APPLY sys.partitions We can try to write our own collocated join query. We can use sys.partitions to find the partition numbers, and CROSS APPLY to get a count per partition, with a final step to sum the partial counts. The following query implements this idea: SELECT row_count = SUM(Subtotals.cnt) FROM ( -- Partition numbers SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1 ) AS P CROSS APPLY ( -- Count per collocated join SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals; The estimated plan is: The cardinality estimates aren’t all that good here, especially the estimate for the scan of the system table underlying the sys.partitions view. Nevertheless, the plan shape is heading toward where we would like to be. Each partition number from the system table results in a per-partition scan of T1 and T2, a one-to-many Merge Join, and a Stream Aggregate to compute the partial counts. The final Stream Aggregate just sums the partial counts. Execution time for this query is around 3,500ms, with the same IO statistics as always. This compares favourably with 5,000ms for the serial plan produced by the optimizer with the OPTION (MERGE JOIN) hint. This is another case of the sum of the parts being less than the whole – summing 41 partial counts from 41 single-partition merge joins is faster than a single merge join and count over all partitions. Even so, this single-threaded collocated merge join is not as quick as the original parallel hash join plan, which executed in 2,600ms. On the positive side, our collocated merge join uses only one logical processor and requires no memory grant. The parallel hash join plan used 16 threads and reserved 569 MB of memory:   Using a Temporary Table Our collocated merge join plan should benefit from parallelism. The reason parallelism is not being used is that the query references a system table. 
We can work around that by writing the partition numbers to a temporary table (or table variable): SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   CREATE TABLE #P ( partition_number integer PRIMARY KEY);   INSERT #P (partition_number) SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1;   SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals;   DROP TABLE #P;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; Using the temporary table adds a few logical reads, but the overall execution time is still around 3500ms, indistinguishable from the same query without the temporary table. The problem is that the query optimizer still doesn’t choose a parallel plan for this query, though the removal of the system table reference means that it could if it chose to: In fact the optimizer did enter the parallel plan phase of query optimization (running search 1 for a second time): Unfortunately, the parallel plan found seemed to be more expensive than the serial plan. This is a crazy result, caused by the optimizer’s cost model not reducing operator CPU costs on the inner side of a nested loops join. Don’t get me started on that, we’ll be here all night. In this plan, everything expensive happens on the inner side of a nested loops join. Without a CPU cost reduction to compensate for the added cost of exchange operators, candidate parallel plans always look more expensive to the optimizer than the equivalent serial plan. Parallel Collocated Merge Join We can produce the desired parallel plan using trace flag 8649 again: SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: One difference between this plan and the collocated hash join plan is that a Repartition Streams exchange operator is used instead of Distribute Streams. The effect is similar, though not quite identical. The Repartition uses round-robin partitioning, meaning the next partition id is pushed to the next thread in sequence. The Distribute Streams exchange seen earlier used Demand partitioning, meaning the next partition id is pulled across the exchange by the next thread that is ready for more work. There are subtle performance implications for each partitioning option, but going into that would again take us too far off the main point of this post. Performance The important thing is the performance of this parallel collocated merge join – just 1350ms on a typical run. The list below shows all the alternatives from this post (all timings include creation, population, and deletion of the temporary table where appropriate) from quickest to slowest: Collocated parallel merge join: 1350ms Parallel hash join: 2600ms Collocated serial merge join: 3500ms Serial merge join: 5000ms Parallel merge join: 8400ms Collated parallel hash join: 25,300ms (hash spill per partition) The parallel collocated merge join requires no memory grant (aside from a paltry 1.2MB used for exchange buffers). 
This plan uses 16 threads at DOP 8; but 8 of those are (rather pointlessly) allocated to the parallel scan of the temporary table. These are minor concerns, but it turns out there is a way to address them if it bothers you. Parallel Collocated Merge Join with Demand Partitioning This final tweak replaces the temporary table with a hard-coded list of partition ids (dynamic SQL could be used to generate this query from sys.partitions): SELECT row_count = SUM(Subtotals.cnt) FROM ( VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10), (11),(12),(13),(14),(15),(16),(17),(18),(19),(20), (21),(22),(23),(24),(25),(26),(27),(28),(29),(30), (31),(32),(33),(34),(35),(36),(37),(38),(39),(40),(41) ) AS P (partition_number) CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: The parallel collocated hash join plan is reproduced below for comparison: The manual rewrite has another advantage that has not been mentioned so far: the partial counts (per partition) can be computed earlier than the partial counts (per thread) in the optimizer’s collocated join plan. The earlier aggregation is performed by the extra Stream Aggregate under the nested loops join. The performance of the parallel collocated merge join is unchanged at around 1350ms. Final Words It is a shame that the current query optimizer does not consider a collocated merge join (Connect item closed as Won’t Fix). The example used in this post showed an improvement in execution time from 2600ms to 1350ms using a modestly-sized data set and limited parallelism. In addition, the memory requirement for the query was almost completely eliminated  – down from 569MB to 1.2MB. The problem with the parallel hash join selected by the optimizer is that it attempts to process the full data set all at once (albeit using eight threads). It requires a large memory grant to hold all 5 million rows from table T1 across the eight hash tables, and does not take advantage of the divide-and-conquer opportunity offered by the common partitioning. The great thing about the collocated join strategies is that each parallel thread works on a single partition from both tables, reading rows, performing the join, and computing a per-partition subtotal, before moving on to a new partition. From a thread’s point of view… If you have trouble visualizing what is happening from just looking at the parallel collocated merge join execution plan, let’s look at it again, but from the point of view of just one thread operating between the two Parallelism (exchange) operators. Our thread picks up a single partition id from the Distribute Streams exchange, and starts a merge join using ordered rows from partition 1 of table T1 and partition 1 of table T2. By definition, this is all happening on a single thread. As rows join, they are added to a (per-partition) count in the Stream Aggregate immediately above the Merge Join. Eventually, either T1 (partition 1) or T2 (partition 1) runs out of rows and the merge join stops. The per-partition count from the aggregate passes on through the Nested Loops join to another Stream Aggregate, which is maintaining a per-thread subtotal. Our same thread now picks up a new partition id from the exchange (say it gets id 9 this time). 
The count in the per-partition aggregate is reset to zero, and the processing of partition 9 of both tables proceeds just as it did for partition 1, and on the same thread. Each thread picks up a single partition id and processes all the data for that partition, completely independently from other threads working on other partitions. One thread might eventually process partitions (1, 9, 17, 25, 33, 41) while another is concurrently processing partitions (2, 10, 18, 26, 34) and so on for the other six threads at DOP 8. The point is that all 8 threads can execute independently and concurrently, continuing to process new partitions until the wider job (of which the thread has no knowledge!) is done. This divide-and-conquer technique can be much more efficient than simply splitting the entire workload across eight threads all at once. Related Reading Understanding and Using Parallelism in SQL Server Parallel Execution Plans Suck © 2013 Paul White – All Rights Reserved Twitter: @SQL_Kiwi
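    Picking up the parenthetical above about generating the hard-coded partition list with dynamic SQL from sys.partitions, a minimal sketch could look like the following; object names follow the examples in the post, and this is illustrative rather than a tuned script:

    DECLARE @values nvarchar(max);

    -- Build ",(1),(2),...,(41)" from sys.partitions and strip the leading comma
    SELECT @values = STUFF(
        (
            SELECT N',(' + CONVERT(nvarchar(10), p.partition_number) + N')'
            FROM sys.partitions AS p
            WHERE p.[object_id] = OBJECT_ID(N'T1', N'U')
            AND p.index_id = 1
            ORDER BY p.partition_number
            FOR XML PATH(N'')
        ), 1, 1, N'');

    DECLARE @sql nvarchar(max) = N'
    SELECT row_count = SUM(Subtotals.cnt)
    FROM (VALUES ' + @values + N') AS P (partition_number)
    CROSS APPLY
    (
        SELECT cnt = COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2 ON T2.TID = T1.TID
        WHERE $PARTITION.PFT(T1.TID) = P.partition_number
        AND $PARTITION.PFT(T2.TID) = P.partition_number
    ) AS SubTotals
    OPTION (QUERYTRACEON 8649);';

    EXECUTE sys.sp_executesql @sql;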

    Read the article

  • Mobile HCM: It’s not the future, it is right now

    - by Natalia Rachelson
    A guest post by Steve Boese, Director Product Strategy, Oracle

    I’ll bet you reached for your iPhone or Android or BlackBerry and took a quick look at email or Facebook or last night’s text messages before you even got out of bed this morning. Come on, admit it, it’s ok, you are among friends here. See, feel better now? But seriously, the incredible growth and near-ubiquity of increasingly powerful, capable, and for many of us, essential in our daily lives mobile devices has profoundly changed the way we communicate, consume information, socialize, and more and more, conduct business and get our work done. And if you doubt that profound change has happened, just think for a moment about the last time you misplaced your iPhone. The shivers, the cold sweats, the panic... We have all been there. And indeed your personal experiences with mobile technology echoes throughout the world - here are a few data points to consider: Market research firm IDC estimates 1.8 billion mobile phones will be shipped in 2012. A recent Pew study reports 46% of Americans own a smartphone of some kind. And finally in the USA, ownership of tablets like the iPad has doubled from 10% to 19% in the last year. So truly for the Human Resources leader, the question is no longer, ‘Should HR explore ways to exploit mobile devices and their always-on nature to better support and empower the modern workforce?’, but rather ‘How can HR best take advantage of smartphone and tablet capability to provide information, enable transactions, and enhance decision making?’. Because even though moving HCM applications to mobile devices seems inherently logical given today’s fast-moving and mobile workforces, and its promise to deliver incredible value to the organization, HR leaders also have to consider many factors before devising their Mobile HCM strategy and embarking on mobile HR technology projects. Here are just some of the important considerations for HR leaders as you build your strategies and evaluate mobile HCM solutions: Does your organization provide mobile devices to the workforce today, and if so, will the current set of deployed devices have the necessary capability and ecosystems to support your mobile HCM initiatives? Will you allow workers to use or bring their own mobile devices, (commonly abbreviated as ‘BYOD’), and if so are your IT and Security organizations in agreement and capable of supporting that strategy? Do you know which workers need access to mobile HCM applications? Often mobile HCM capability flows down in an organization, with executives and other ‘road-warrior’ types having the most immediate needs, followed by field sales staff, project managers, and even potential job candidates.
But just as an organization will have to spend time understanding ‘who’ should have access to mobile HCM technology, the ‘what’ of the way the solutions should be deployed to these groups will also vary. What works and makes sense for the executive, (company-wide dashboards and analytics on an iPad), might not be as relevant for a retail store manager, (employee schedules, location-level sales and inventory data, transaction approvals, etc.). With Oracle Fusion HCM, we are taking an approach to mobile HR that encompasses not just the mobile solution needs for the various types of worker, but also incorporates the fundamental attributes of great mobile applications - the ability to support end-to-end transactions, apps that respond with lightning-fast speed, with functions that are embedded in a worker’s daily activities, and features that can be mashed-up easily with other business areas like Finance and CRM. Finally, and perhaps most importantly for the Oracle Fusion HCM team, delivering mobile experiences that truly enhance, enable, and empower the mobile workforce, and deliver on the design mantras of the best-in-class consumer applications, continues to shape and drive design decisions. Mobile is no longer the future, it is right now, and the cutting-edge HR leader of today will need to consider how mobile fits her HCM technology strategy from here on out. You can learn more about our ideas and plans for Oracle Fusion HCM mobile solutions at https://fusiontap.oracle.com/.

    Read the article

  • Value Chain Planning in Las Vegas

    - by Paul Homchick
    Several Oracle Value Chain Planning experts will be presenting at the Mandalay Bay Convention Center in Las Vegas for Collaborate 2010, April 18th-22nd, 2010. We have five sessions, as follows:

    Monday, April 19, 1:15 pm - 2:15 pm, Breakers H, Roger Goossens, Oracle VCP Vice President: Leveraging Oracle Value Chain Planning for Your Planning Business Transformation
    Monday, April 19, 3:45 pm - 4:45 pm, Breakers I, Scott Malcolm, Oracle VCP Development: Complex Supply Chain Planning Made Easy: Introducing Oracle Rapid Planning
    Tuesday, April 20, 2:00 pm - 3:00 pm, Breakers I, John Bermudez, Oracle VCP Strategy: Synchronize Your Financial and Operating Plans with Oracle Integrated Business Planning
    Wednesday, April 21, 10:30 am - 11:30 am, Breakers I, Vikash Goyal, Oracle VCP Strategy: Oracle Demantra: What’s New?
    Wednesday, April 21, 2:15 pm - 3:15 pm, Mandalay Bay Ballroom A, Roger Goossens, Oracle VCP Vice President: Value Chain Planning for JD Edwards EnterpriseOne

    We will also be in the demogrounds, so stop by to see the latest VCP innovations from Oracle and talk to our experts.

    Read the article

  • Reach for the Stars…Even if you Miss you’ll Land in the Cloud

    - by Kristin Rose
    “You make investment in the next generation of technology, while continuing to invest in your existing.” – Larry Ellison

    Last week’s Oracle Cloud and Oracle Platinum Services announcement highlighted some of the exciting ways in which Oracle made the switch from being an On-Premise Application provider to both an On-Premise and Cloud Application provider. The announcement was led by Oracle CEO Larry Ellison and Oracle President Mark Hurd. Together they announced the industry’s broadest and most advanced Cloud strategy and introduced Oracle Cloud Social Services, a broad Enterprise Social Platform offering. Attendees also anxiously awaited Larry’s first tweet. Be sure to watch the webcast replay below to learn more about the new developments in Oracle's Cloud strategy and game-changing advances in Oracle Support.

    Sending you Cloud Dreams and Twitter Wishes,
    The OPN Communications Team

    Read the article

  • We're Back: I'm Here

    - by Brian Dayton
    After a busy Fall and Winter post-Oracle OpenWorld 2009, Oracle's Application Strategy Blog is back. More on what we've been up to shortly. Me, I'm blogging here for the first time. After nearly 6 years at Oracle working on the Oracle Fusion Middleware business, I've recently joined the Oracle Applications team. For me, what's old is new again. Prior to working on applications infrastructure at Oracle...and at BEA Systems before that...I worked at PeopleSoft in a number of roles spanning Enterprise Performance Management, Supply Chain, Public Sector, Financial Services and more. Some of the acronyms are the same; there are (of course) some new ones too. But what I'm really excited about is the intersection of Enterprise Applications and Applications Infrastructure that's happening right now. "Aligning IT with Business Strategy" has been the buzzphrase for longer than we can all remember, but what I've seen over the past 5 months makes me start to believe that it's finally starting to happen.

    Read the article

  • SQL SERVER – Repair a SQL Server Database Using a Transaction Log Explorer

    - by Pinal Dave
    In this blog, I’ll show how to use ApexSQL Log, a SQL Server transaction log viewer. You can download it for free, install, and play along. But first, let’s describe some disaster recovery scenarios where it’s useful. About SQL Server disaster recovery Along with database development and administration, you must work on a good recovery plan. Disasters do happen and no one’s immune. What you can do is take all actions needed to be ready for a disaster and go through it with minimal data loss and downtime. Besides creating a recovery plan, it’s necessary to have a list of steps that will be executed when a disaster occurs and to test them before a disaster. This way, you’ll know that the plan is good and viable. Testing can also be used as training for all team members, so they can all understand and execute it when the time comes. It will show how much time is needed to have your servers fully functional again and how much data you can lose in a real-life situation. If these don’t meet recovery-time and recovery-point objectives, the plan needs to be improved. Keep in mind that all major changes in environment configuration, business strategy, and recovery objectives require a new recovery plan testing, as these changes most probably induce a recovery plan changing and tweaking. What is a good SQL Server disaster recovery plan? A good SQL Server disaster recovery strategy starts with planning SQL Server database backups. An efficient strategy is to create a full database backup periodically. Between two successive full database backups, you can create differential database backups. It is essential is to create transaction log backups regularly between full database backups. Keep in mind that transaction log backups can be created only on databases in the full recovery model. In other words, a simple, but efficient backup strategy would be a full database backup every night, a transaction log backup every hour, or every 15 minutes. The frequency depends on how much data you can afford to lose and how busy the database is. Another option, instead of creating a full database backup every night, is to create a full database backup once a week (e.g. on Friday at midnight) and differential database backup every night until next Friday when you will create a full database backup again. Once you create your SQL Server database backup strategy, schedule the backups. You can do that easily using SQL Server maintenance plans. Why are transaction logs important? Transaction log backups contain transactions executed on a SQL Server database. They provide enough information to undo and redo the transactions and roll back or forward the database to a point in time. In SQL Server disaster recovery situations, transaction logs enable to repair a SQL Server database and bring it to the state before the disaster. Be aware that even with regular backups, there will be some data missing. These are the transactions made between the last transaction log backup and the time of the disaster. In some situations, to repair your SQL Server database it’s not necessary to re-create the database from its last backup. The database might still be online and all you need to do is roll back several transactions, such as wrong update, insert, or delete. The restore to a point in time feature is available in SQL Server, but for large databases, it is very time-consuming, as SQL Server first restores a full database backup, and then restores transaction log backups, one after another, up to the recovery point. 
During that time, the database is unavailable. This is where a SQL Server transaction log viewer can help. For optimal recovery, besides having a database in the full recovery model, it’s important that you haven’t manually truncated the online transaction log. This ensures that all transactions made after the last transaction log backup are still in the online transaction log. All you have to do is read and replay them. How to read a SQL Server transaction log? SQL Server doesn’t provide an option to read transaction logs. There are several SQL Server commands and functions that read the content of a transaction log file (fn_dblog, fn_dump_dblog, and DBCC PAGE), but they are undocumented. They require T-SQL knowledge, return a large number of not easy to read and understand columns, sometimes in binary or hexadecimal format. Another challenge is reading UPDATE statements, as it’s necessary to match it to a value in the MDF file. When you finally read the transactions executed, you have to create a script for it. How to easily repair a SQL database? The easiest solution is to use a transaction log reader that will not only read the transactions in the transaction log files, but also automatically create scripts for the read transactions. In the following example, I will show how to use ApexSQL Log to repair a SQL database after a crash. If a database has crashed and both MDF and LDF files are lost, you have to rely on the full database backup and all subsequent transaction log backups. In another scenario, the MDF file is lost, but the LDF file is available. First, restore the last full database backup on SQL Server using SQL Server Management Studio. I’ll name it Restored_AW2014. Then, start ApexSQL Log It will automatically detect all local servers. If not, click the icon right to the Server drop-down list, or just type in the SQL Server instance name. Select the Windows or SQL Server authentication type and select the Restored_AW2014 database from the database drop-down list. When all options are set, click Next. ApexSQL Log will show the online transaction log file. Now, click Add and add all transaction log backups created after the full database backup I used to restore the database. In case you don’t have transaction log backups, but the LDF file hasn’t been lost during the SQL Server disaster, add it using Add.   To repair a SQL database to a point in time, ApexSQL Log needs to read and replay all the transactions in the transaction log backups (or the LDF file saved after the disaster). That’s why I selected the Whole transaction log option in the Filter setup. ApexSQL Log offers a range of various filters, which are useful when you need to read just specific transactions. You can filter transactions by the time of the transactions, operation type (e.g. to read only data inserts), table name, SQL Server login that made the transaction, etc. In this scenario, to repair a SQL database, I’ll check all filters and make sure that all transactions are included. In the Operations tab, select all schema operations (DDL). If you omit these, only the data changes will be read so if there were any schema changes, such as a new function created, or an existing table modified, they will be ignored and database will not be properly repaired. The data repair for modified tables will fail. In the Tables tab, I’ll make sure all tables are selected. I will uncheck the Show operations on dropped tables option, to reduce the number of transactions. Click Next. ApexSQL Log offers three options. 
Select Open results in grid, to get a user-friendly presentation of the transactions. As you can see, details are shown for every transaction, including the old and new values for updated columns, which are clearly highlighted. Now, select them all and then create a redo script by clicking the Create redo script icon in the menu.   For a large number of transactions and in a critical situation, when acting fast is a must, I recommend using the Export results to file option. It will save some time, as the transactions will be directly scripted into a redo file, without showing them in the grid first. Select Generate reconstruction (REDO) script , change the output path if you want, and click Finish. After the redo T-SQL script is created, ApexSQL Log shows the redo script summary: The third option will create a command line statement for a batch file that you can use to schedule execution, which is not really applicable when you repair a SQL database, but quite useful in daily auditing scenarios. To repair your SQL database, all you have to do is execute the generated redo script using an integrated developer environment tool such as SQL Server Management Studio or any other, against the restored database. You can find more information about how to read SQL Server transaction logs and repair a SQL database on ApexSQL Solution center. There are solutions for various situations when data needs to be recovered, restored, or transactions rolled back. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
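    As a quick companion to the backup cadence and the undocumented log functions described in this post, here is a minimal T-SQL sketch; the database name and file paths are placeholders rather than anything from the article, and in practice you would schedule the backups through maintenance plans or SQL Server Agent jobs rather than run them by hand:

    -- Full backup (e.g. nightly), optional differential, and frequent log backups
    -- (log backups require the full recovery model, as noted above)
    BACKUP DATABASE [MyDB] TO DISK = N'D:\Backup\MyDB_full.bak' WITH INIT;
    BACKUP DATABASE [MyDB] TO DISK = N'D:\Backup\MyDB_diff.bak' WITH DIFFERENTIAL;
    BACKUP LOG [MyDB] TO DISK = N'D:\Backup\MyDB_log.trn';

    -- The undocumented fn_dblog function can be queried directly, although the
    -- output is much harder to interpret than a dedicated log reader
    SELECT [Current LSN], Operation, Context, [Transaction ID]
    FROM fn_dblog(NULL, NULL);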

    Read the article

  • “Cloud Integration in Minutes” – True or False?

    - by Bruce Tierney
    The short answer is “yes”. Connecting on-premise and cloud applications “in minutes” is true…provided you only consider the connectivity subset of integration and have a small number of cloud integration touch points. At the recent Gartner AADI conference, 230 attendees filled up the Oracle session to get a more comprehensive answer to this question. During the session, titled “Simplifying Integration – The Cloud & Mobile Pre-requisite”, Oracle’s Tim Hall described cloud connectivity and then, equally importantly, the other essential and sometimes overlooked aspects of integration required to ensure a long term application and service integration strategy. To understand the challenges and opportunities faced by cloud integration, the session started off with a slide that describes how connectivity can quickly transition from simplicity to complexity as the number of applications and service vendor instances grows: Increased complexity puts increased demand on the integration platform.

    As companies expand from on-premise applications into a hybrid on-premise/cloud infrastructure with support for mobile, cloud, and social, there is a new sense of urgency to implement a unified and comprehensive service integration platform. Without getting this unified platform in place, companies face increased complexity and cost managing a growing patchwork of niche integration toolsets as well as the disparate standards mandated by each SaaS vendor, as shown in the image below: Incomplete and overlapping offerings from a patchwork of niche vendors.

    Also at Gartner AADI, Oracle SOA Suite customer Geeta Pyne, Director of Middleware at BMC, presented their successful strategy on how BMC efficiently manages their cloud integration despite disparate requirements from each vendor. From one of Geeta’s slides:

    Interfaces are dictated by SaaS vendors; wide variety (SOAP, REST, Socket, HTTP/POX, SFTP); Flexibility of Oracle Service Bus/SOA Suite helps to support
    Every vendor has their way to handle Security; WS-Security, Custom Header; Support in Oracle Service Bus helps to adhere to disparate requirements

    At BMC, the flexibility of Oracle Service Bus and Oracle SOA Suite allowed them to support the wide variation in the functional requirements as mandated by their SaaS vendors. In contrast to the patchwork platform approach of escalating complexity from overlapping SaaS toolkits, Oracle’s strategy is to provide a unified platform to support disparate requirements from your SaaS vendors, on-premise apps, legacy apps, and more. Furthermore, Oracle SOA Suite includes the many aspects of comprehensive integration beyond basic connectivity, including orchestration, analytics (BAM, events…), service virtualization and more in a single unified interface: Oracle SOA Suite – Unified and comprehensive.

    To summarize, yes you can achieve “cloud integration in minutes” when considering the connectivity subset of integration, but be sure to look for ways to simplify as you consider a more comprehensive view of integration beyond basic connectivity, such as service virtualization, management, event processing and more. And finally, be sure your integration platform has the deep flexibility to handle the requirements of all your future SaaS applications…many of which are unknown to you now.

    Read the article

  • NHibernate Conventions

    - by Ricardo Peres
    Introduction It seems that nowadays everyone loves conventions! Not the ones that you go to, but the ones that you use, that is! It just happens that NHibernate also supports conventions, and we'll see exactly how. Conventions in NHibernate are supported in two ways: naming of tables and columns when not explicitly indicated in the mappings; and full domain mapping. Naming of Tables and Columns NHibernate has always supported the concept of a naming strategy. A naming strategy in NHibernate converts class and property names to table and column names and vice-versa, when a name is not explicitly supplied. Concretely, it must be an implementation of the NHibernate.Cfg.INamingStrategy interface, of which NHibernate includes two implementations: DefaultNamingStrategy: the default implementation, where each column and table are mapped to identically named properties and classes, for example, "MyEntity" will translate to "MyEntity"; ImprovedNamingStrategy: underscores (_) are used to separate Pascal-cased fragments, for example, entity "MyEntity" will be mapped to a "my_entity" table. The naming strategy can be defined at configuration level (the Configuration instance) by calling the SetNamingStrategy method: 1: cfg.SetNamingStrategy(ImprovedNamingStrategy.Instance); Both the DefaultNamingStrategy and the ImprovedNamingStrategy classes offer singleton instances in the form of Instance static fields. DefaultNamingStrategy is the one NHibernate uses if you don't specify one. Domain Mapping In mapping by code, we have the choice of relying on conventions to do the mapping automatically. This means a class will inspect our classes and decide how they will relate to the database objects. The class that handles conventions is NHibernate.Mapping.ByCode.ConventionModelMapper, a specialization of the base by-code mapper, NHibernate.Mapping.ByCode.ModelMapper. The ModelMapper relies on an internal SimpleModelInspector to help it decide what and how to map, but the mapper lets you override its decisions. You apply code conventions like this: 1: //pick the types that you want to map 2: IEnumerable<Type> types = Assembly.GetExecutingAssembly().GetExportedTypes(); 3:  4: //conventions based mapper 5: ConventionModelMapper mapper = new ConventionModelMapper(); 6:  7: HbmMapping mapping = mapper.CompileMappingFor(types); 8:  9: //the one and only configuration instance 10: Configuration cfg = ...; 11: cfg.AddMapping(mapping); This is a very simple example; it lacks, at least, the id generation strategy, which you can add with an event handler like this: 1: mapper.BeforeMapClass += (IModelInspector modelInspector, Type type, IClassAttributesMapper classCustomizer) => 2: { 3: classCustomizer.Id(x => 4: { 5: //set the hilo generator 6: x.Generator(Generators.HighLow); 7: }); 8: }; The mapper will fire events like this whenever it needs information about what to do. And basically this is all it takes to automatically map your domain! It will correctly configure many-to-one and one-to-many relations, choosing bags or sets depending on your collections, will get the table and column names from the naming strategy we saw earlier, and will apply the usual defaults to all properties, such as laziness and fetch mode. However, there is at least one thing missing: many-to-many relations. The conventions mapper doesn't know how to find and configure them, which is a pity, but, alas, not difficult to overcome. 
To start, for my projects, I have this rule: each entity exposes a public property of type ISet<T> where T is, of course, the type of the other endpoint entity. Extensible as it is, NHibernate lets me implement this very easily: 1: mapper.IsOneToMany((MemberInfo member, Boolean isLikely) => 2: { 3: Type sourceType = member.DeclaringType; 4: Type destinationType = member.GetMemberFromDeclaringType().GetPropertyOrFieldType(); 5:  6: //check if the property is of a generic collection type 7: if ((destinationType.IsGenericCollection() == true) && (destinationType.GetGenericArguments().Length == 1)) 8: { 9: Type destinationEntityType = destinationType.GetGenericArguments().Single(); 10:  11: //check if the type of the generic collection property is an entity 12: if (mapper.ModelInspector.IsEntity(destinationEntityType) == true) 13: { 14: //check if there is an equivalent property on the target type that is also a generic collection and points to this entity 15: PropertyInfo collectionInDestinationType = destinationEntityType.GetProperties().Where(x => (x.PropertyType.IsGenericCollection() == true) && (x.PropertyType.GetGenericArguments().Length == 1) && (x.PropertyType.GetGenericArguments().Single() == sourceType)).SingleOrDefault(); 16:  17: if (collectionInDestinationType != null) 18: { 19: return (false); 20: } 21: } 22: } 23:  24: return (true); 25: }); 26:  27: mapper.IsManyToMany((MemberInfo member, Boolean isLikely) => 28: { 29: //a relation is many to many if it isn't one to many 30: Boolean isOneToMany = mapper.ModelInspector.IsOneToMany(member); 31: return (!isOneToMany); 32: }); 33:  34: mapper.BeforeMapManyToMany += (IModelInspector modelInspector, PropertyPath member, IManyToManyMapper collectionRelationManyToManyCustomizer) => 35: { 36: Type destinationEntityType = member.LocalMember.GetPropertyOrFieldType().GetGenericArguments().First(); 37: //set the mapping table column names from each source entity name plus the _Id sufix 38: collectionRelationManyToManyCustomizer.Column(destinationEntityType.Name + "_Id"); 39: }; 40:  41: mapper.BeforeMapSet += (IModelInspector modelInspector, PropertyPath member, ISetPropertiesMapper propertyCustomizer) => 42: { 43: if (modelInspector.IsManyToMany(member.LocalMember) == true) 44: { 45: propertyCustomizer.Key(x => x.Column(member.LocalMember.DeclaringType.Name + "_Id")); 46:  47: Type sourceType = member.LocalMember.DeclaringType; 48: Type destinationType = member.LocalMember.GetPropertyOrFieldType().GetGenericArguments().First(); 49: IEnumerable<String> names = new Type[] { sourceType, destinationType }.Select(x => x.Name).OrderBy(x => x); 50:  51: //set inverse on the relation of the alphabetically first entity name 52: propertyCustomizer.Inverse(sourceType.Name == names.First()); 53: //set mapping table name from the entity names in alphabetical order 54: propertyCustomizer.Table(String.Join("_", names)); 55: } 56: }; We have to understand how the conventions mapper thinks: For each collection of entities found, it will ask the mapper if it is a one-to-many; in our case, if the collection is a generic one that has an entity as its generic parameter, and the generic parameter type has a similar collection, then it is not a one-to-many; Next, the mapper will ask if the collection that it now knows is not a one-to-many is a many-to-many; Before a set is mapped, if it corresponds to a many-to-many, we set its mapping table. 
Now, this is tricky: because we have no way to maintain state, we sort the names of the two endpoint entities and we combine them with a “_”; for the first alphabetical entity, we set its relation to inverse – remember, on a many-to-many relation, only one endpoint must be marked as inverse; finally, we set the column name as the name of the entity with an “_Id” suffix; Before the many-to-many relation is processed, we set the column name as the name of the other endpoint entity with the “_Id” suffix, as we did for the set. And that’s it. With these rules, NHibernate will now happily find and configure many-to-many relations, as well as all the others. You can wrap this in a new conventions mapper class, so that it is more easily reusable: 1: public class ManyToManyConventionModelMapper : ConventionModelMapper 2: { 3: public ManyToManyConventionModelMapper() 4: { 5: base.IsOneToMany((MemberInfo member, Boolean isLikely) => 6: { 7: return (this.IsOneToMany(member, isLikely)); 8: }); 9:  10: base.IsManyToMany((MemberInfo member, Boolean isLikely) => 11: { 12: return (this.IsManyToMany(member, isLikely)); 13: }); 14:  15: base.BeforeMapManyToMany += this.BeforeMapManyToMany; 16: base.BeforeMapSet += this.BeforeMapSet; 17: } 18:  19: protected virtual Boolean IsManyToMany(MemberInfo member, Boolean isLikely) 20: { 21: //a relation is many to many if it isn't one to many 22: Boolean isOneToMany = this.ModelInspector.IsOneToMany(member); 23: return (!isOneToMany); 24: } 25:  26: protected virtual Boolean IsOneToMany(MemberInfo member, Boolean isLikely) 27: { 28: Type sourceType = member.DeclaringType; 29: Type destinationType = member.GetMemberFromDeclaringType().GetPropertyOrFieldType(); 30:  31: //check if the property is of a generic collection type 32: if ((destinationType.IsGenericCollection() == true) && (destinationType.GetGenericArguments().Length == 1)) 33: { 34: Type destinationEntityType = destinationType.GetGenericArguments().Single(); 35:  36: //check if the type of the generic collection property is an entity 37: if (this.ModelInspector.IsEntity(destinationEntityType) == true) 38: { 39: //check if there is an equivalent property on the target type that is also a generic collection and points to this entity 40: PropertyInfo collectionInDestinationType = destinationEntityType.GetProperties().Where(x => (x.PropertyType.IsGenericCollection() == true) && (x.PropertyType.GetGenericArguments().Length == 1) && (x.PropertyType.GetGenericArguments().Single() == sourceType)).SingleOrDefault(); 41:  42: if (collectionInDestinationType != null) 43: { 44: return (false); 45: } 46: } 47: } 48:  49: return (true); 50: } 51:  52: protected virtual new void BeforeMapManyToMany(IModelInspector modelInspector, PropertyPath member, IManyToManyMapper collectionRelationManyToManyCustomizer) 53: { 54: Type destinationEntityType = member.LocalMember.GetPropertyOrFieldType().GetGenericArguments().First(); 55: //set the mapping table column names from each source entity name plus the _Id sufix 56: collectionRelationManyToManyCustomizer.Column(destinationEntityType.Name + "_Id"); 57: } 58:  59: protected virtual new void BeforeMapSet(IModelInspector modelInspector, PropertyPath member, ISetPropertiesMapper propertyCustomizer) 60: { 61: if (modelInspector.IsManyToMany(member.LocalMember) == true) 62: { 63: propertyCustomizer.Key(x => x.Column(member.LocalMember.DeclaringType.Name + "_Id")); 64:  65: Type sourceType = member.LocalMember.DeclaringType; 66: Type destinationType = 
member.LocalMember.GetPropertyOrFieldType().GetGenericArguments().First(); 67: IEnumerable<String> names = new Type[] { sourceType, destinationType }.Select(x => x.Name).OrderBy(x => x); 68:  69: //set inverse on the relation of the alphabetically first entity name 70: propertyCustomizer.Inverse(sourceType.Name == names.First()); 71: //set mapping table name from the entity names in alphabetical order 72: propertyCustomizer.Table(String.Join("_", names)); 73: } 74: } 75: } Conclusion Of course, there is much more to mapping than this; I suggest you look at all the events and functions offered by the ModelMapper to see where you can hook in to make it behave the way you want. If you need any help, just let me know!
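For illustration only (not from the original post): a minimal sketch of wiring the reusable conventions mapper above into an NHibernate configuration, mirroring the post's own usage example. The entity assembly and the use of hibernate.cfg.xml for connection settings are assumptions.

using System;
using System.Collections.Generic;
using System.Reflection;
using NHibernate;
using NHibernate.Cfg;
using NHibernate.Cfg.MappingSchema;

public static class SessionFactoryBuilder
{
    public static ISessionFactory Build()
    {
        // Pick the entity types to map; here, every exported type of the current assembly (assumption)
        IEnumerable<Type> types = Assembly.GetExecutingAssembly().GetExportedTypes();

        // The conventions mapper from the post, extended with many-to-many support
        ManyToManyConventionModelMapper mapper = new ManyToManyConventionModelMapper();
        HbmMapping mapping = mapper.CompileMappingFor(types);

        // Connection settings are assumed to come from hibernate.cfg.xml
        Configuration cfg = new Configuration().Configure();
        cfg.SetNamingStrategy(ImprovedNamingStrategy.Instance); // optional: underscore table/column names
        cfg.AddMapping(mapping);

        return cfg.BuildSessionFactory();
    }
}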

    Read the article

  • Oracle Customer Hub - Directions, Roadmap and Customer Success

    - by Mala Narasimharajan
    By Gurinder Bahl. With less than a week to go before OOW 2012, I would like to introduce you all to the core Oracle Customer MDM Strategy sessions. Fragmentation of customer data across disparate systems prohibits companies from achieving a complete and accurate view of their customers. Oracle Customer Hub provides a comprehensive set of services, utilities and applications to create and maintain a trusted master customer system of record across the enterprise. Customer Hub centralizes customer data from disparate systems across your enterprise into a master repository. Existing systems are integrated in real-time or via batch with the Hub, allowing you to leverage legacy platform investments while capitalizing on the benefits of a single customer identity. Don't miss out on two sessions geared towards Oracle Customer Hub: 1) Attend session CON9747 - Turn Customer Data into an Enterprise Asset with Oracle Fusion Customer Hub Applications at Oracle Open World 2012 on Monday, Oct 1st, 10:45 AM - 11:45 AM @ Moscone West – 2008. Manouj Tahiliani, Sr. Director of MDM Product Management, will provide insight into the vision of Oracle Fusion Customer Hub solutions and review the roadmap. You will discover how Fusion Customer MDM can help your enterprise improve data quality, create accurate and complete customer information, manage governance and help create great customer experiences. You will also understand how to leverage data quality capabilities and create a sophisticated customer foundation within Oracle Fusion Applications. You will also hear Danette Patterson, Group Lead, Church Pension Group, talk about how Oracle Fusion Customer Hub applications provide a modern, next-generation, multi-domain foundation for managing customer information in a private cloud. 2) Don't miss session CON9692 - Customer MDM is key to Strategic Business Success and Customer Experience Management at Oracle Open World 2012 on Wednesday, October 3rd 2012 from 3:30-4:30pm @ Westin San Francisco Metropolitan 1. JP Hurtado, Director, Customer Systems, will provide insight into how RCCL overcame challenges of data quality, guest recognition, and centralized customer view to provide a consolidated customer view to multiple reservation, CRM, marketing, service, sales, data warehouse and loyalty systems. You will learn how Royal Caribbean Cruise Lines (RCCL), which has over 30 million customers and maintains multiple brands, has leveraged Oracle Customer Hub (Siebel UCM) as the backbone of its customer data management strategy for the past 5 years. Gurinder Bahl from MDM Product Management will provide an update on Oracle Customer Hub strategy, what we have achieved since last Open World, and our future plans for the Oracle Customer Hub. You will learn about Customer Hub data quality capabilities around data analysis, cleansing, matching and address validation, as well as reporting and monitoring. The MDM track at Oracle Open World covers a variety of topics related to MDM. In addition to the product management team presenting product updates and roadmap, we have several customer panels and conference sessions. You can see an overview of MDM sessions here. Looking forward to seeing you at Open World, the perfect opportunity to learn about cutting-edge Oracle technologies.

    Read the article

  • OpenWorld: Spotlight on Fusion CRM

    - by Tony Berk
    Oracle OpenWorld is less than 2 weeks away, so you need to start figuring out how you are going to maximize your week. I don't want to discourage you, but I'm pretty sure it is impossible to attend all 2000+ sessions. So you need to focus on what's important to you. Many of our CRM customers will be interested in Fusion CRM, since they have already started Fusion implementations or are determining when to start. If that's you, or you are just looking for an overview of Fusion CRM, we've got you covered! Let's start at the top! For an overview of what is in Fusion CRM and where it is going, you should attend the general session and roadmap session: General Session: Oracle Fusion CRM—Improving Sales Effectiveness, Efficiency, and Ease of Use (Session ID: GEN9674) - Oct 2, 11:45 AM. Anthony Lye, Senior VP, Oracle, leads this general session focused on Oracle Fusion CRM. Oracle Fusion CRM optimizes territories, combines quota management and incentive compensation, integrates sales and marketing, and cleanses and enriches data—all within a single application platform. Oracle Fusion can be configured, changed, and extended at runtime by end users, business managers, IT, and developers. Oracle Fusion CRM can be used from the Web, from a smartphone, from Microsoft Outlook, or from an iPad. Deloitte, sponsor of the CRM Track, will also present key concepts on CRM implementations. Oracle Fusion Customer Relationship Management: Overview/Strategy/Customer Experiences/Roadmap (CON9407) - Oct 1, 3:15PM. In this session, learn how Oracle Fusion CRM enables companies to create better sales plans, generate more quality leads, and achieve higher win rates, and find out why customers are adopting Oracle Fusion CRM. Gain a deeper understanding of the unique capabilities only Oracle Fusion CRM provides, and learn how Oracle's commitment to CRM innovation is driving a wide range of future enhancements. There is also a General Session for all Fusion Applications providing insight into the current strategy of the full product line and a high-level roadmap for each product area: Oracle Fusion Applications—Overview, Strategy, and Roadmap (GEN9433) - Oct 1, 10:45AM. This session will be repeated on Oct 3, 10:15AM. Now, if you want to drill down into some more detail, there are a lot more sessions with Oracle product management and customers. I'll highlight a few, but suggest you review the Fusion CRM Focus On document or search the Content Catalog or Session Builder. Driving Sales Performance with Oracle Fusion CRM (CON9744) - Oct 3, 10:15AM. Demonstrates how sales executives can gain instant visibility into their business, deliver pervasive coaching to their reps, maximize their sales pipeline, and drive team alignment. The result is increased sales performance that enables sales executives to deliver more revenue without increasing their resources or expenses. Maximize Your Revenue Potential with Oracle Fusion CRM Sales Planning (CON9751) - Oct 2, 1:15PM. Learn how Oracle Fusion CRM helps companies intelligently optimize sales planning and manage sales performance, including the ability to predict their future sales opportunities and use those predictions in conjunction with past sales data to optimally define their sales territories, sales quotas, and incentive compensation plans. Boost Marketing's Contribution to Revenue with Oracle Fusion CRM Marketing (CON9746) - Oct 3, 11:45AM. Learn how Oracle Fusion CRM can help your organization integrate sales and marketing, using one CRM platform.
See how Oracle Fusion CRM can help your organization learn where to invest its precious marketing dollars; drive more revenue with cross-channel marketing and prospecting capabilities, including but not limited to e-mail, Web, and social media; improve lead conversion with integrated lead management functionality; and do more with less by automating many manual tasks. Oracle Fusion CRM: Social Marketing (CON11559) - Oct 1, 3:15PM. Learn how Oracle's acquisition of Collective Intellect, Vitrue, and Involver extends Oracle Fusion Marketing as a world-class social marketing solution. Oracle Fusion Social CRM Strategy and Roadmap: Future of Collaboration and Social Engagement (CON9750) - Oct 4, 11:15AM. Hear how Oracle can help you know your customers better, encourage brand affinity, and improve collaboration within your ecosystem. This session reviews Oracle's social media solution and shows how you can discover hidden insights buried in your enterprise and social data. Also learn how Oracle Social Network revolutionizes how enterprise users work, collaborate, and share to achieve successful outcomes. Of course, we recommend you hear from the current Fusion CRM customers too. So, don't miss Oracle Fusion Customer Relationship Management: Customer Adoption and Experiences (CON9415) on Oct 3 at 10:15AM for a panel of customers discussing implementation experiences, best practices and benefits. After listening to all of this great information, you are probably going to have questions. Well, the experts will be on hand to help answer your questions and plan how your organization can get going with Fusion CRM. Be sure to head down to the DEMOgrounds and CRM Pavilion in the Moscone West Exhibit Hall. And finally, there is the always popular Meet the Experts session focused on Fusion CRM (MTE9658) on Oct 2 at 5PM (pre-registration via Schedule Builder is recommended). In addition, there are more sessions on Mobility, Extensibility, Incentive Compensation, Fusion Customer Hub and other key components of the Fusion Applications infrastructure, Oracle Cloud and much, much more! For a full list, utilize the Fusion CRM Focus On document and Content Catalog. Enjoy!

    Read the article

  • When should complexity be removed?

    - by ElGringoGrande
    Prematurely introducing complexity by implementing design patterns before they are needed is not good practice. But if you follow all (or even most of) the SOLID principles and use common design patterns, you will introduce some complexity as features and requirements are added or changed, to keep your design as maintainable and flexible as needed. However, once that complexity is introduced and working like a champ, when do you remove it? Example. I have an application written for a client. When originally created, there were several ways to give raises to employees. I used the strategy pattern and a factory to keep the whole process nice and clean (a rough sketch of that shape is shown below). Over time, certain raise methods were added or removed by the application owner. Time passes and a new owner takes over. This new owner is hard-nosed, keeps everything simple and only has one single way to give a raise. The complexity needed by the strategy pattern is no longer needed. If I were to code this from the requirements as they are now, I would not introduce this extra complexity (but I would make sure I could introduce it with little or no work should the need arise). So do I remove the strategy implementation now? I don't think this new owner will ever change how raises are given. But the application itself has demonstrated that this could happen. Of course, this is just one example in an application where a new owner takes over and has simplified many processes. I could remove dozens of classes, interfaces and factories and make the whole application much simpler. Note that the current implementation does work just fine and the owner is happy with it (and surprised and even happier that I was able to implement her changes so quickly because of the discussed complexity). I admit that a small part of this doubt is because it is highly likely the new owner isn't going to use me any longer. I don't really care that somebody else will take this over since it has not been a big income generator. But I do care about 2 (related) things. I care a bit that the new maintainer will have to think a bit harder when trying to understand the code. Complexity is complexity and I don't want to anger the psycho maniac coming after me. But even more I worry about a competitor seeing this complexity and thinking I just implement design patterns to pad my hours on jobs, then spreading this rumor to hurt my other business. (I have heard this mentioned.) So... In general, should previously needed complexity be removed even though it works and there has been a historically demonstrated need for the complexity, but you have no indication that it will be needed in the future? Even if the question above is generally answered "no", is it wise to remove this "un-needed" complexity if handing off the project to a competitor (or stranger)?
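    Purely to illustrate the trade-off being weighed here (all names below are hypothetical, not from the original application), the strategy-plus-factory shape versus the simplified single-method shape might look roughly like this in C#:

    using System;
    using System.Collections.Generic;

    // The "flexible" shape: one strategy per raise method, resolved through a factory.
    public interface IRaiseStrategy
    {
        decimal CalculateRaise(decimal currentSalary);
    }

    public class MeritRaiseStrategy : IRaiseStrategy
    {
        public decimal CalculateRaise(decimal currentSalary)
        {
            return currentSalary * 0.05m; // hypothetical 5% merit raise
        }
    }

    public static class RaiseStrategyFactory
    {
        private static readonly Dictionary<string, Func<IRaiseStrategy>> Strategies =
            new Dictionary<string, Func<IRaiseStrategy>>
            {
                { "merit", () => new MeritRaiseStrategy() }
                // other raise methods were registered here and later removed
            };

        public static IRaiseStrategy Create(string raiseType)
        {
            return Strategies[raiseType]();
        }
    }

    // The "simple" shape the new owner actually needs: one single way to give a raise.
    public class PayrollService
    {
        public decimal CalculateRaise(decimal currentSalary)
        {
            return currentSalary * 0.05m; // the one remaining raise rule
        }
    }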

    Read the article

  • Issue 15: SVP Focus

    - by rituchhibber
    SVP FOCUS -- Chris Baker, SVP, Oracle Worldwide ISV-OEM-Java Sales. Chris Baker is the Global Head of ISV/OEM Sales responsible for working with ISV/OEM partners to maximise Oracle's business through those partners, whilst maximising those partners' business to their end users. Chris works with partners, customers, innovators, investors and employees to develop innovative business solutions using Oracle products, services and skills. RESOURCES -- Oracle PartnerNetwork (OPN) OPN Solutions Catalog Oracle Exastack Program Oracle Exastack Optimized Oracle Cloud Computing Oracle Engineered Systems Oracle and Java "By taking part in marketing activities, our partners accelerate their sales cycles." -- Firstly, could you please explain Oracle's current strategy for ISV partners, globally and in EMEA? Oracle customers use independent software vendor (ISV) applications to run their businesses. They use them to generate revenue and to fulfil obligations to their own customers. Our strategy is very straightforward. We want all of our ISV partners and OEMs to concentrate on the things that they do the best—building applications to meet the unique industry and functional requirements of their customers. We want to ensure that we deliver a best-in-class application platform so ISVs are free to concentrate their effort on their application functionality and user experience. We invest over four billion dollars in research and development every year, and we want our ISVs to benefit from all of that investment in operating systems, virtualisation, databases, middleware, engineered systems, and other hardware. By doing this, we help them to reduce their costs, gain more consistency and agility for quicker implementations, and also rapidly differentiate themselves from other application vendors. It's all about simplification because we believe that around 25 to 30 percent of the development costs incurred by many ISVs are caused by customising infrastructure and have nothing to do with their applications. Our strategy is to enable our ISV partners to standardise their application platform using engineered architecture, so they can write once to the Oracle stack and deploy seamlessly in the cloud, on-premise, or in hybrid deployments. It's really important that architecture is the same in order to keep cost and time overheads at a minimum, so we provide standardisation and an environment that enables our ISVs to concentrate on the core business that makes them the most money and brings them success. How do you believe this strategy is helping the ISVs to work hand-in-hand with Oracle to ensure that end customers get the industry-leading solutions that they need? We work with our ISVs not just to help them be successful, but also to help them market themselves. We have something called the 'Oracle Exastack Ready Program', which enables ISVs to publicise themselves as 'Ready' to run the core software platforms that run on Oracle's engineered systems including Exadata and Exalogic. So, for example, they can become 'Database Ready', which means that they use the latest version of Oracle Database and therefore can run their application without modification on Exadata or the Oracle Database Appliance. Alternatively, they can become WebLogic Ready, Oracle Linux Ready and Oracle Solaris Ready, which means they run on the latest release and therefore can run their application, with no new porting work, on Oracle Exalogic.
Those 'Ready' logos are important in helping ISVs advertise to their customers that they are using the latest technologies which have been fully tested. We now also have Exadata Ready and Exalogic Ready programmes which allow ISVs to promote the certification of their applications on these platforms. This highlights these partners to Oracle customers as having solutions that run fluently on the Oracle Exadata Database Machine, the Oracle Exalogic Elastic Cloud or one of our other engineered systems. This makes it easy for customers to identify solutions and provides ISVs with an avenue to connect with Oracle customers who are rapidly adopting engineered systems. We have also taken this programme to the next level in the shape of 'Oracle Exastack Optimized' for partners whose applications run best on the Oracle stack and have invested the time to fully optimise application performance. We ensure that Exastack Optimized partner status is promoted and supported by press releases, and we help our ISVs go to market and differentiate themselves through the use of our technology and the standardisation it delivers. To date we have had several hundred organisations successfully work through our Exastack Optimized programme. How does Oracle's strategy of offering pre-integrated open platform software and hardware allow ISVs to bring their products to market more quickly? One of the problems for many ISVs is that they have to think very carefully about the technology on which their solutions will be deployed, particularly in the cloud or hosted environments. They have to think hard about how they secure these environments, whether the concern is, for example, middleware, identity management, or securing personal data. If they don't use the technology that we build-in to our products to help them to fulfil these roles, they then have to build it themselves. This takes time, requires testing, and must be maintained. By taking advantage of our technology, partners will now know that they have a standard platform. They will know that they can confidently talk about implementation being the same every time they do it. Very large ISV applications could once take a year or two to be implemented at an on-premise environment. But it wasn't just the configuration of the application that took the time, it was actually the infrastructure - the different hardware configurations, operating systems and configurations of databases and middleware. Now we strongly believe that it's all about standardisation and repeatability. It's about making sure that our partners can do it once and are then able to roll it out many different times using standard componentry. What actions would you recommend for existing ISV partners that are looking to do more business with Oracle and its customer base, not only to maximise benefits, but also to maximise partner relationships? My team, around the world and in the EMEA region, is available and ready to talk to any of our ISVs and to explore the possibilities together. We run programmes like 'Excite' and 'Insight' to help us to understand how we can help ISVs with architecture and widen their environments. But we also want to work with, and look at, new opportunities - for example, the Machine-to-Machine (M2M) market or 'The Internet of Things'. Over the next few years, many millions, indeed billions of devices will be collecting massive amounts of data and communicating it back to the central systems where ISVs will be running their applications. 
The only way that our partners will be able to provide a single vendor 'end-to-end' solution is to use Oracle integrated systems at the back end and Java on the 'smart' devices collecting the data—a complete solution from device to data centre. So there are huge opportunities to work closely with our ISVs, using Oracle's complete M2M platform, to provide the infrastructure that enables them to extract maximum value from the data collected. If any partners don't know where to start or who to contact, then they can contact me directly at [email protected] or indeed any of our teams across the EMEA region. We want to work with ISVs to help them to be as successful as they possibly can through simplification and speed to market, and we also want all of the top ISVs in the world based on Oracle. What opportunities are immediately opened to new ISV partners joining the OPN? As you know OPN is very, very important. New members will discover a huge amount of content that instantly becomes accessible to them. They can access a wealth of no-cost training and enablement materials to build their expertise in Oracle technology. They can download Oracle software and use it for development projects. They can help themselves become more competent by becoming part of a true community and uncovering new opportunities by working with Oracle and their peers in the Oracle Partner Network. As well as publishing massive amounts of information on OPN, we also hold our global Oracle OpenWorld event, at which partners play a huge role. This takes place at the end of September and the beginning of October in San Francisco. Attending ISV partners have an unrivalled opportunity to contribute to elements such as the OpenWorld / OPN Exchange, at which they can talk to other partners and really begin thinking about how they can move their businesses on and play key roles in a very large ecosystem which revolves around technology and standardisation. Finally, are there any other messages that you would like to share with the Oracle ISV community? The crucial message that I always like to reinforce is architecture, architecture and architecture! The key opportunities that ISVs have today revolve around standardising their architectures so that they can confidently think: "I will I be able to do exactly the same thing whenever a customer is looking to deploy on-premise, hosted or in the cloud". The right architecture is critical to being competitive and to really start changing the game. We want to help our ISV partners to do just that; to establish standard architecture and to seize the opportunities it opens up for them. New market opportunities like M2M are enormous - just look at how many devices are all around you right now. We can help our partners to interface with these devices more effectively while thinking about their entire ecosystem, rather than just the piece that they have traditionally focused upon. With standardised architecture, we can help people dramatically improve their speed, reach, agility and delivery of enhanced customer satisfaction and value all the way from the Java side to their centralised systems. All Oracle ISV partners must take advantage of these opportunities, which is why Oracle will continue to invest in and support them. Oracle OpenWorld 2010 Whether you attended Oracle OpenWorld 2009 or not, don't forget to save the date now for Oracle OpenWorld 2010. The event will be held a little earlier next year, from 19th-23rd September, so please don't miss out. 
With thousands of sessions and hundreds of exhibits and demos already lined up, there's no better place to learn how to optimise your existing systems, get an inside line on upcoming technology breakthroughs, and meet with your partner peers, Oracle strategists and even the developers responsible for the products and services that help you get better results for your end customers. Register Now for Oracle OpenWorld 2010!

    Read the article

  • Can't send commands via SSH to Juniper firewalls

    - by Massimo
    I have some Juniper SSG firewalls which I need to manage, and I'd like to be able to send commands to them from some monitoring scripts. I configured SSH access using public keys, and I'm able to automatically login to the firewalls. When I run SSH interactively, everything works fine: $ssh <firewall IP> FIREWALL-> <command> <command output> FIREWALL-> exit Connection to <firewall IP> closed. $ But when I try to run the command from the command line, it doesn't work: $ssh <firewall IP> <command> $ This, of course, works fine when sending a command to a remote Linux box: $ssh <linux box IP> <command> <command output> $ Why is this happening? What is the difference between running SSH interactively and specifying the command to run on the SSH command line? Update: It also works fine with a Cisco router. Only these Juniper firewalls seem to behave this way. From the debug output from SSH, it looks like the connection gets established correctly, but the Juniper box replies with an EOF when sending the command, while instead the Linux box replies with the actual command output: Linux: debug1: Authentication succeeded (publickey). debug1: channel 0: new [client-session] debug2: channel 0: send open debug1: Entering interactive session. debug2: callback start debug2: client_session2_setup: id 0 debug1: Sending command: uptime debug2: channel 0: request exec confirm 0 debug2: callback done debug2: channel 0: open confirm rwindow 0 rmax 32768 debug2: channel 0: rcvd adjust 131072 debug1: client_input_channel_req: channel 0 rtype exit-status reply 0 16:44:44 up 25 days, 1:06, 3 users, load average: 0.08, 0.02, 0.01 debug2: channel 0: rcvd eof debug2: channel 0: output open -> drain debug2: channel 0: obuf empty debug2: channel 0: close_write debug2: channel 0: output drain -> closed debug2: channel 0: rcvd close debug2: channel 0: close_read debug2: channel 0: input open -> closed debug2: channel 0: almost dead debug2: channel 0: gc: notify user debug2: channel 0: gc: user detached debug2: channel 0: send close debug2: channel 0: is dead debug2: channel 0: garbage collecting debug1: channel 0: free: client-session, nchannels 1 debug1: Transferred: stdin 0, stdout 0, stderr 0 bytes in 0.1 seconds debug1: Bytes per second: stdin 0.0, stdout 0.0, stderr 0.0 debug1: Exit status 0 Juniper: debug1: Authentication succeeded (publickey). debug1: channel 0: new [client-session] debug2: channel 0: send open debug1: Entering interactive session. debug2: callback start debug2: client_session2_setup: id 0 debug1: Sending environment. debug1: Sending env LANG = en_US.UTF-8 debug2: channel 0: request env confirm 0 debug1: Sending command: get system debug2: channel 0: request exec confirm 0 debug2: callback done debug2: channel 0: open confirm rwindow 2048 rmax 1024 debug2: channel 0: rcvd eof debug2: channel 0: output open -> drain debug2: channel 0: obuf empty debug2: channel 0: close_write debug2: channel 0: output drain -> closed debug1: client_input_channel_req: channel 0 rtype exit-status reply 0 debug2: channel 0: rcvd close debug2: channel 0: close_read debug2: channel 0: input open -> closed debug2: channel 0: almost dead debug2: channel 0: gc: notify user debug2: channel 0: gc: user detached debug2: channel 0: send close debug2: channel 0: is dead debug2: channel 0: garbage collecting debug1: channel 0: free: client-session, nchannels 1 debug1: Transferred: stdin 0, stdout 0, stderr 0 bytes in 0.2 seconds debug1: Bytes per second: stdin 0.0, stdout 0.0, stderr 0.0 debug1: Exit status 1

    Read the article

  • Customer Centricity: It's Not Easy, But Worth It

    - by tony.berk
    Defining customer centricity is relatively easy: focusing on the customer and their experiences and interactions with your company. Implementing a customer-centric strategy is not so easy. We've highlighted customers who have focused on their customers and experienced great success, including SJ, the Swedish rail operator, and Vopak, the world's largest provider of conditioned storage facilities for bulk liquids. In this interview with Stuart Lennie, President, Volvo IT, North America and VP, Volvo's Global Sales to Order Solutions Unit, we get the opportunity to learn from another company that is not just talking about the customer, but actually implementing the significant strategic shifts required to become customer-centric. Volvo has developed a vision, a strategy and a methodology to keep existing customers by understanding what is important to them. To see other customer success stories, visit Siebel CRM Success. Click here to learn more about Oracle's CRM products.

    Read the article

  • We're Back: I'm Here

    - by [email protected]
    After a busy Fall and Winter post-Oracle OpenWorld 2009, Oracle's Application Strategy Blog is back. More on what we've been up to shortly. Me, I'm blogging here for the first time. After nearly 6 years at Oracle working on the Oracle Fusion Middleware business, I've recently joined the Oracle Applications team. For me, what's old is new again. Prior to working on applications infrastructure at Oracle...and at BEA Systems before that...I worked at PeopleSoft in a number of roles spanning Enterprise Performance Management, Supply Chain, Public Sector, Financial Services and more. Some of the acronyms are the same; there are (of course) some new ones too. But what I'm really excited about is the intersection of Enterprise Applications and Applications Infrastructure that's happening right now. "Aligning IT with Business Strategy" has been the buzzphrase for longer than we can all remember, but what I've seen over the past 5 months makes me start to believe that it's finally starting to happen.

    Read the article

  • Tackling Security and Compliance Barriers with a Platform Approach to IDM: Featuring SuperValu

    - by Darin Pendergraft
    On October 25, 2012, ISACA and Oracle sponsored a webcast discussing how SUPERVALU has embraced the platform approach to IDM. Scott Bonnell, Sr. Director of Product Management at Oracle, and Phil Black, Security Director for IAM at SUPERVALU, discussed how a platform strategy could be used to formulate an upgrade plan for a large SUN IDM installation. See the webcast replay here: ISACA Webcast Replay (requires Internet Explorer or Chrome). Some of the main points discussed in the webcast include: getting support for an upgrade project by aligning with corporate initiatives; how to leverage an existing IDM investment while planning for future growth; how SUN and Oracle IDM architectures can be used in a coexistence strategy; and the advantages of a rationalized, modern IDM platform architecture. ISACA Webcast Featuring SuperValu - Tackling Security and Compliance Barriers with a Platform Approach to Identity Management from OracleIDM

    Read the article

  • Oracle OpenWorld - 3 Days and Counting!

    - by Theresa Hickman
    If you haven't set your schedule for OpenWorld yet, here's your chance to reserve a seat at some of the key Financial Management sessions. There are over 120 sessions specific to our Financials audience that will not only focus on Oracle's financial product lines, but will also discuss controls and compliance, as well as analytics, budgeting/planning, and financial reporting and the close process. For a complete list of sessions, view any of the Focus on Documents located on the OpenWorld site. Key Sessions (Day | Time | Session | Location):
    Monday | 3:15 | Oracle Fusion Financials: Overview, Strategy, Customer Experiences, and Roadmap | Moscone West - 2003
    Monday | 3:15 | Oracle Financials: Strategy, Update, and Roadmap | Moscone West - 3006
    Tuesday | 11:45 | General Session: What's Next for Financial Management Solutions at Oracle? | Moscone West - 3002/3004
    Tuesday | 1:15 | Exploring Oracle Preventive Controls Governor's Features Through Real-Life Examples | Palace Hotel - Presidio
    Weds | 10:15 | Oracle Hyperion Enterprise Performance Management: A Bridge to Oracle Fusion Financials | Palace Hotel - Concert
    Weds | 1:15 | Oracle Fusion Financials Coexistence with Oracle E-Business Suite | Moscone West - 2011
    Weds | 3:30 | McDonald's Adopts Financial Analytics to Increase Business Performance | Moscone West - 2011
    Thursday | 12:45 | User Panel: Reducing Upgrade Errors and Effort While Improving Compliance | Palace Hotel - Presidio

    Read the article

  • ArchBeat Link-o-Rama for 2012-08-31

    - by Bob Rhubart
    SOA Suite 11g Asynchronous Testing with soapUI | Greg Mally Greg Mally walks you through testing asynchronous web services with the free edition of soapUI. The Role of Oracle VM Server for SPARC in a Virtualization Strategy | Matthias Pfutzner Matthias Pfutzner's overview of hardware and software virtualization basics, and the role that Oracle VM Server for SPARC plays in a virtualization strategy. Cloud Computing: Oracle RDS on AWS - Connecting with DB tools | Tom Laszewski Cloud expert and author Tom Laszewski shares brief comments about the tools he used to connect two Oracle RDS instances in AWS. Keystore Wallet File – cwallet.sso – Zum Teufel! | Christian Screen "One of the items that trips up a FMW implementation, if only for mere minutes, is the cwallet.sso file," says Oracle ACE Christian Screen. In this short post he offers information to help you avoid landing on your face. Thought for the Day "With good program architecture debugging is a breeze, because bugs will be where they should be." — David May Source: SoftwareQuotes.com

    Read the article

  • Five Best Practices for Going Mobile

    - by kellsey.ruppel
    76% of IT decision makers indicate mobile trends will have a high to extremely high impact on their organization. Has your organization gone mobile? Looking for some ideas on how to get started? John Brunswick shares his Best Practices for Going Mobile. Mobile technology has gone from nice-to-have to a cornerstone of user engagement. Mobile access enables social networking, decision support, purchasing, content consumption, and location-based searching, extending experiences beyond what is available in traditional desktop computing. Organizations rushing to ensure their brand's mobile availability may have taken a tactical approach to implementation, but strategically approaching mobile can enable greater returns on a similar investment and subsequent mobile projects. Here are some strategic considerations for delivering products, services, and information to mobile constituents.
    Who, Why, and What? Ask yourself these key questions: who are you attempting to engage through the channel, and why are they engaging you through this channel? What experience will satisfy their needs? What outcome will support your core business? Will you be informing and/or transacting with this person?
    Mobile Behavior. Mobile users generally engage for a very specific purpose. Ensure that access to information, services, and products is streamlined. Arriving on a mobile site through search only to be asked to search again frustrates users.
    Mobile Is Broad. After establishing the audience and goal, review technology requirements to support them. Do you need a mobile Website, native mobile application, or both? Do you need to support multiple devices? Know the difference between native mobile and mobile Web.
    Social Strategy. Users are more likely to trust reviews from peers than marketing information from a vendor. If you are selling products or services, be sure to make social integration part of your strategy.
    Content Management. Consider a shared content platform strategy for Web and mobile projects. Fresh, consistent content is important for high-quality experiences. Read more from John Brunswick.
    We'll also be talking mobile strategies and how you can transform your portal experience and optimize online engagement -- making your portals more interactive and more engaging across multiple channels in a webcast tomorrow. We hope you'll join us!

    Read the article

  • Mobile Web or Objective-C?

    Cameron Moll is worried about a future in which we'll all write Objective-C for the iPhone OS instead of writing web standards for the mobile web. At one point in time, J2ME (now Java ME) and WAP were the starting points for a discussion on mobile strategy and the web. Then, for a brief period of time, you talked about HTML/CSS. Now, for a growing majority of mobile strategies that don't require a global presence on widely varying devices, the discussion begins with iPhone. Emphasis mine. Strategy...

    Read the article

< Previous Page | 36 37 38 39 40 41 42 43 44 45 46 47  | Next Page >