Search Results

Search found 10067 results on 403 pages for 'pdp 11'.


  • Formula needed: Sort Array

    - by aw
    I have the following array: a = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16] I use it for some visual stuff like this:
        1  2  3  4
        5  6  7  8
        9  10 11 12
        13 14 15 16
    Now I want to sort the array so it displays like this:
        1  3  6  10
        2  5  9  13
        4  8  12 15
        7  11 14 16
    //So the original array should look like this: a = [1,5,2,9,6,3,13,10,7,4,14,11,8,15,12,16]
    Yeah, now I'm looking for a smart formula to do that:
        ticker = 0;
        originalArray = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16];
        newArray = [];
        while(ticker < originalArray.length) {
            //do the magic here
            ticker++;
        }
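
    The target array [1,5,2,9,6,3,13,10,7,4,14,11,8,15,12,16] is the original 4x4 grid read along its anti-diagonals, each diagonal walked from its bottom-left cell upwards. A minimal JavaScript sketch of that traversal (the grid size n = 4 is assumed from the example):
        var n = 4;
        var originalArray = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16];
        var newArray = [];
        for (var d = 0; d <= 2 * (n - 1); d++) {
            // walk the anti-diagonal r + c = d from its bottom-left cell upwards
            for (var r = Math.min(d, n - 1); r >= 0 && d - r < n; r--) {
                newArray.push(originalArray[r * n + (d - r)]);
            }
        }
        // newArray is now [1,5,2,9,6,3,13,10,7,4,14,11,8,15,12,16]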

    Read the article

  • what's the purpose of fcntl with parameter F_DUPFD

    - by Daniel
    I traced an Oracle process and found that it first opens the file /etc/netconfig as file descriptor 11, then duplicates it as 256 by calling fcntl with the F_DUPFD command, and then closes the original descriptor 11. Later it reads using descriptor 256. So what is the point of duplicating the file handle? Why not just work on the original one?
        12931:  0.0006 open("/etc/netconfig", O_RDONLY|O_LARGEFILE) = 11
        12931:  0.0002 fcntl(11, F_DUPFD, 0x00000100) = 256
        12931:  0.0001 close(11) = 0
        12931:  0.0002 read(256, " # p r a g m a i d e n".., 1024) = 1024
        12931:  0.0003 read(256, " t s t p i _ c".., 1024) = 215
        12931:  0.0002 read(256, 0x106957054, 1024) = 0
        12931:  0.0001 lseek(256, 0, SEEK_SET) = 0
        12931:  0.0002 read(256, " # p r a g m a i d e n".., 1024) = 1024
        12931:  0.0003 read(256, " t s t p i _ c".., 1024) = 215
        12931:  0.0003 read(256, 0x106957054, 1024) = 0
        12931:  0.0001 close(256) = 0
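
    A plausible reason (an assumption, not something the trace alone proves) is that the process wants to keep low-numbered descriptors free: on Solaris, the 32-bit stdio implementation historically could only use descriptors below 256, so libraries that hold a file open for a long time commonly move it above 255 with F_DUPFD. A minimal C sketch of the same pattern:
        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
            int fd = open("/etc/netconfig", O_RDONLY);
            if (fd < 0) { perror("open"); return 1; }

            /* F_DUPFD: duplicate fd onto the lowest free descriptor >= 256 */
            int high = fcntl(fd, F_DUPFD, 256);
            if (high < 0) { perror("fcntl"); close(fd); return 1; }

            close(fd);                      /* the low descriptor is free again */

            char buf[1024];
            read(high, buf, sizeof buf);    /* keep working on the high descriptor */
            close(high);
            return 0;
        }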

    Read the article

  • Delete all but 5 newest entries in MySQL table

    - by manyxcxi
    I currently have PHP code that handles the logic for this because I do not know how to handle it in SQL. I want to create a stored procedure that will select all the elements in a table for a given run_id and delete all of them except for the 'newest' 5 entries (as noted by the stop_time column).
        CREATE TABLE `TAA`.`RunHistory` (
          `id` int(11) NOT NULL auto_increment,
          `start_time` datetime default NULL,
          `stop_time` datetime default NULL,
          `success_lines` int(11) default NULL,
          `error_lines` int(11) default NULL,
          `config_id` int(11) NOT NULL,
          `file_id` int(11) NOT NULL,
          `notes` text NOT NULL,
          `log_file` longblob,
          `save` tinyint(1) NOT NULL default '0',
          PRIMARY KEY (`id`)
        ) ENGINE=InnoDB AUTO_INCREMENT=128 DEFAULT CHARSET=utf8;
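
    A minimal MySQL sketch of the delete. The question mentions a run_id that does not appear in the DDL above, so config_id is used here as a stand-in for the grouping column; swap in whichever column identifies a run. Joining against a derived table sidesteps MySQL's restriction on selecting from the table being deleted from:
        DELETE rh
        FROM RunHistory rh
        LEFT JOIN (
            SELECT id
            FROM RunHistory
            WHERE config_id = 123            -- the run being trimmed (placeholder value)
            ORDER BY stop_time DESC
            LIMIT 5                          -- the 5 newest rows to keep
        ) keepers ON keepers.id = rh.id
        WHERE rh.config_id = 123
          AND keepers.id IS NULL;            -- delete everything not in the keep set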

    Read the article

  • Online conversion to CSV using Perl

    - by Octopus
    I have an application generating logs every 5 seconds. The logs are in the format below:
        11:13:49.250,interface,0,RX,0
        11:13:49.250,interface,0,TX,0
        11:13:49.250,interface,1,close,0
        11:13:49.250,interface,4,error,593
        11:13:49.250,interface,4,idle,2994215
    and so on for other interfaces. I am working to convert these into the CSV format below:
        Time,interface.RX,interface.TX,interface.close,...
        11:13:49,0,0,0,...
    Simple so far, but the problem is that I have to produce the CSV online, i.e. as soon as the log file is updated the CSV should also be updated. Is there any way to do this using Perl?
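
    A minimal Perl sketch of one way to follow the log as it grows (tail-style) and emit a CSV row each time the timestamp changes; the file paths and the column-naming scheme are assumptions:
        #!/usr/bin/perl
        use strict;
        use warnings;

        open(my $log, '<', 'interface.log')  or die "open log: $!";   # path assumed
        open(my $csv, '>>', 'interface.csv') or die "open csv: $!";   # path assumed

        my (%row, $current);
        while (1) {
            while (my $line = <$log>) {
                chomp $line;
                my ($ts, $name, $idx, $counter, $value) = split /,/, $line;
                next unless defined $value;
                $ts =~ s/\..*$//;                                # drop the milliseconds
                if (defined $current && $ts ne $current) {       # timestamp changed: flush a CSV row
                    print {$csv} join(',', $current, map { $row{$_} } sort keys %row), "\n";
                    %row = ();
                }
                $current = $ts;
                $row{"$name.$idx.$counter"} = $value;
            }
            sleep 1;             # wait for the writer to append more lines
            seek $log, 0, 1;     # clear the EOF flag so the inner loop resumes
        }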

    Read the article

  • php multidimensional array problem

    - by ntan
    Hi to all, I am trying to set up a multidimensional array, but my problem is that I cannot control the order in which the incoming data arrives. To explain:
        $x[1][11]=11;
        $x[1]=1;
        var_dump($x);
    In the above code I get only $x[1]. The right order would be:
        $x[1]=1;
        $x[1][11]=11;
        var_dump($x);
    But in my case I cannot ensure that $x[1] will come first and $x[1][11] will come after. Is there any way that I can use the first example and still get the array right? Keep in mind that the array depth is large. Thats
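
    In PHP a key cannot hold both a scalar and a sub-array, so whichever assignment runs second wins. A minimal sketch of one workaround, assuming the scalar can live under its own sub-key (the key name 'value' is invented for illustration), which makes the arrival order irrelevant:
        <?php
        $x = array();

        $x[1][11]      = 11;   // child value arrives first
        $x[1]['value'] = 1;    // parent value arrives later - no longer clobbers $x[1]

        $y = array();
        $y[1]['value'] = 1;    // same data in the opposite order...
        $y[1][11]      = 11;

        var_dump($x == $y);    // ...produces the same structure: bool(true)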

    Read the article

  • Getting Oracle Exception: ORA-1017: invalid username/password; logon denied

    - by Paks
    I have tried so many things but I can't resolve that error. I can connect with my username and password to the database in SQL Developer, in SQL*Plus, and in Server Explorer (Visual Studio 2008), and all works fine. But if I compile my project I get that error. Why is that? I tried to set case sensitivity to false, but the same error appears. I don't know what else to do. My Oracle version:
        Oracle Database 11g Express Edition Release 11.2.0.2.0 - Production
        PL/SQL Release 11.2.0.2.0 - Production
        CORE 11.2.0.2.0 Production
        TNS for 32-bit Windows: Version 11.2.0.2.0 - Production
        NLSRTL Version 11.2.0.2.0 - Production
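
    If the 11g case-sensitive logon behaviour really is the culprit (an assumption; the excerpt only says case sensitivity was toggled somewhere), the setting to check is the sec_case_sensitive_logon parameter, and the password may need to be re-created in the exact case the application sends. A minimal SQL*Plus sketch with placeholder names:
        SHOW PARAMETER sec_case_sensitive_logon

        -- either disable case-sensitive passwords instance-wide...
        ALTER SYSTEM SET sec_case_sensitive_logon = FALSE;

        -- ...or keep it enabled and reset the password in the exact case the application sends
        ALTER USER some_user IDENTIFIED BY "SomePassword";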

    Read the article

  • SQL SELECT using in() but excluding others.

    - by Pickledegg
    I have a table called 'countries' linked to another table 'networks' with a many-to-many relationship:
        countries                      countries_networks             networks
        +-------------+----------+     +-------------+---------+      +-------------+---------------+
        | Field       | Type     |     | Field       | Type    |      | Field       | Type          |
        +-------------+----------+     +-------------+---------+      +-------------+---------------+
        | id          | int(11)  |     | id          | int(11) |      | id          | int(11)       |
        | countryName | char(35) |     | country_id  | int(11) |      | name        | varchar(100)  |
        +-------------+----------+     | network_id  | int(11) |      | description | varchar(255)  |
                                       +-------------+---------+      +-------------+---------------+
    To retrieve all countries that have a network_id of 6 & 7, I just do the following (I could go further and use networks.name, but I already know the countries_networks.network_id values, so I just use those to keep the SQL small):
        SELECT DISTINCT countryName
        FROM countries AS Country
        INNER JOIN countries_networks AS n ON Country.id = n.country_id
        WHERE n.network_id IN (6,7)
    This is fine, but I then want to retrieve the countries with a network_id of JUST 8, and no others. I've tried the following, but it's still returning countries that also have networks 6 & 7. Is it something to do with my JOIN?
        SELECT DISTINCT countryName
        FROM countries AS Country
        INNER JOIN countries_networks AS n ON Country.id = n.country_id
        WHERE n.network_id IN (8) AND n.network_id NOT IN (6,7)
    Thanks.
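
    A minimal sketch of one way to get "network 8 and nothing else": group per country and require that the country has a row for network 8 and no row for any other network (portable CASE form; the schema above is assumed):
        SELECT c.countryName
        FROM countries AS c
        INNER JOIN countries_networks AS n ON c.id = n.country_id
        GROUP BY c.countryName
        HAVING SUM(CASE WHEN n.network_id = 8  THEN 1 ELSE 0 END) > 0   -- has network 8
           AND SUM(CASE WHEN n.network_id <> 8 THEN 1 ELSE 0 END) = 0;  -- and no other network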

    Read the article

  • how to read csv file in jquery using codeigniter framework

    - by webghost
    Suppose this is my CSV file:
        empId,lastName,firstName,middleName,street1,street2,city,state,zip,gender,birthDate,ssn,empStatus,joinDate,workStation,location,custom1,workState,salary,payFrequency,FITWStatus,FITWExemptions,DD1Routing,DD1Account,DD1Amount,DD1AmountCode,DD1Checking,DD2Routing,DD2Account,DD2Amount,DD2AmountCode,DD2Checking
        1,Dela Cruz,Juano,Santos,,,,,,1,,,Part Time Internship,, asd Division, Makati,one, asd,150,Bi Weekly,Not Applicable,100,,,,,,1234,9876,100,SAVINGS,BLANK
        3,Palogan,Ralph,,,,,,,1,11-Mar-11,,Full Time Contract,2-Mar-11, sdf Department, pasay,, ,,,Not Applicable,,,,,,,,,,,
        5,San,Goku,,,,hidden leaf,,,1,11-Mar-11,,,,,,,,,,Not Applicable,0,,,,,,,,,,
    This is my form:
        <label>Choose File:</label><font color="#FF0000">*</font>
        <input type="file" name="file" id="file" />
        <input type="button" id="importButton" value="Import" name="importButton" />
    How do I read the data in the CSV and store it in a MySQL database (CodeIgniter)? Any example code on how to do it?
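
    A minimal CodeIgniter-style controller method sketch using PHP's fgetcsv; the 'employees' table name is a placeholder, the CSV column names are assumed to match the table columns, and the form would also need to be a multipart file-upload form for $_FILES to be populated:
        <?php
        // inside your controller class
        public function import()
        {
            $tmp = $_FILES['file']['tmp_name'];            // uploaded file from the form
            $handle = fopen($tmp, 'r');
            if ($handle === false) {
                show_error('Could not open the uploaded CSV file');
            }

            $header = fgetcsv($handle);                    // first line: column names
            while (($fields = fgetcsv($handle)) !== false) {
                $row = array_combine($header, $fields);    // column name => value
                $this->db->insert('employees', $row);      // Active Record insert
            }
            fclose($handle);
        }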

    Read the article

  • What happens if a user jumps over 10 versions before updating, and every version had a new data model?

    - by dontWatchMyProfile
    Example: User installs app v.1.0, adds data. Then the dev submits 10 updates in 10 weeks. After 11 weeks, the user wants v.11.0 and grabs a copy from the app store. Assuming that the app has got 11 .xcdatamodel versions inside, where ***11.xcdatamodel is the current one, what would happen now since the persistent store of the user is ages old? would the migration happen 10 times, step-by-step through every migration iteration? Or does the actual migration of data (lets assume gigabytes of data) happen exactly once, after Core Data (or the persistent store coordinator) has figured out precisely what to do to go from v.1.0 to v.11.0?

    Read the article

  • Selecting distinct values from mysql with largest timestamp

    - by user987048
    I am building a mail system. The inbox is only supposed to grab the last message (the one with the highest time value) per combination of user and sender, where the user or sender is the user ID. Here is the table structure:
        CREATE TABLE IF NOT EXISTS `mail` (
          `user` int(11) NOT NULL,
          `sender` int(11) NOT NULL,
          `body` text NOT NULL,
          `new` enum('0','1') NOT NULL default '1',
          `time` int(11) NOT NULL,
          KEY `user` (`user`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8;
    So, with a table with the following data:
        user    sender    new    time
        *****************************
        1       0         0      5
        1       0         0      6
        2       1         0      7
        1       0         1      8
        1       2         0      9
        1       0         1      11
        1       2         1      12
    I want to select the following (WHERE user OR sender = X, in this case 1):
        user    sender    new    time
        *****************************
        2       1         0      7
        1       2         0      9
        1       0         1      11
    How would I go about doing something like this?
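
    A common greatest-n-per-group sketch, assuming the goal is the newest row per (user, sender) pair among rows involving user 1; treat it as a starting point rather than a drop-in answer:
        SELECT m.*
        FROM mail AS m
        INNER JOIN (
            SELECT `user`, `sender`, MAX(`time`) AS max_time
            FROM mail
            WHERE `user` = 1 OR `sender` = 1
            GROUP BY `user`, `sender`
        ) AS latest
            ON  m.`user`   = latest.`user`
            AND m.`sender` = latest.`sender`
            AND m.`time`   = latest.max_time;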

    Read the article

  • Oracle 11g R2: one year since release (Exadata edition)

    - by Yusuke.Yamamoto
    A Japanese-language article marking the first anniversary (November 17, 2010) of the Oracle Database 11g Release 2 launch. It collects Oracle Database 11g R2 overview material, customer voices, and Exadata-related updates, together with a timeline of milestones, including: 2009-11-11 Oracle Exadata Database Machine Version 2 announced; 2009-11-17 Oracle Database 11g R2 released; 2010-06-23 Oracle Application Express 4.0 released; 2010-09-13 Patch Set 11.2.0.2 for Linux released; 2010-10-20 Oracle Exadata Database Machine X2 announced; 2010-11-17 one year since the Oracle Database 11g R2 release; 2011-03-29 Oracle SQL Developer 3.0 released; 2011-06-27 Oracle Exadata Database Machine reaches the 1,000-unit mark. Links point to IT Leaders and Think IT articles and to the Oracle Database 11g Release 2 (11gR2) pages.

    Read the article

  • Oracle TimesTen In-Memory Database Performance on SPARC T4-2

    - by Brian
    The Oracle TimesTen In-Memory Database is optimized to run on Oracle's SPARC T4 processor platforms running Oracle Solaris 11, providing unsurpassed scalability, performance, upgradability, protection of investment and return on investment. The following demonstrate the value of combining Oracle TimesTen In-Memory Database with SPARC T4 servers and Oracle Solaris 11:
    On a Mobile Call Processing test, the 2-socket SPARC T4-2 server outperforms Oracle's SPARC Enterprise M4000 server (4 x 2.66 GHz SPARC64 VII+) by 34%, and Oracle's SPARC T3-4 (4 x 1.65 GHz SPARC T3) by 2.7x, or 5.4x per processor.
    Utilizing the TimesTen Performance Throughput Benchmark (TPTBM), the SPARC T4-2 server protects investments with 2.1x the overall performance of a 4-socket SPARC Enterprise M4000 server in read-only mode and 1.5x the performance in update-only testing (4.2x more performance per processor than the SPARC64 VII+ 2.66 GHz based system), 10x more performance per processor than the SPARC T2+ 1.4 GHz server, and 1.6x better performance per processor than the SPARC T3 1.65 GHz based server.
    In replication testing, the two-socket SPARC T4-2 server is over 3x faster than a four-socket SPARC Enterprise T5440 server in both the asynchronous replication environment and the highly available 2-Safe replication. This testing emphasizes parallel replication between systems.
    Performance Landscape
    Mobile Call Processing Test Performance
        System       Processor                 Sockets/Cores/Threads    Tps
        SPARC T4-2   SPARC T4, 2.85 GHz        2 / 16 / 128             218,400
        M4000        SPARC64 VII+, 2.66 GHz    4 / 16 / 32              162,900
        SPARC T3-4   SPARC T3, 1.65 GHz        4 / 64 / 512             80,400
    TimesTen Performance Throughput Benchmark (TPTBM) Read-Only
        System       Processor                 Sockets/Cores/Threads    Tps
        SPARC T3-4   SPARC T3, 1.65 GHz        4 / 64 / 512             7.9M
        SPARC T4-2   SPARC T4, 2.85 GHz        2 / 16 / 128             6.5M
        M4000        SPARC64 VII+, 2.66 GHz    4 / 16 / 32              3.1M
        T5440        SPARC T2+, 1.4 GHz        4 / 32 / 256             3.1M
    TimesTen Performance Throughput Benchmark (TPTBM) Update-Only
        System       Processor                 Sockets/Cores/Threads    Tps
        SPARC T4-2   SPARC T4, 2.85 GHz        2 / 16 / 128             547,800
        M4000        SPARC64 VII+, 2.66 GHz    4 / 16 / 32              363,800
        SPARC T3-4   SPARC T3, 1.65 GHz        4 / 64 / 512             240,500
    TimesTen Replication Tests
        System        Processor                Sockets/Cores/Threads    Asynchronous    2-Safe
        SPARC T4-2    SPARC T4, 2.85 GHz       2 / 16 / 128             38,024          13,701
        SPARC T5440   SPARC T2+, 1.4 GHz       4 / 32 / 256             11,621          4,615
    Configuration Summary
    Hardware Configurations:
    SPARC T4-2 server: 2 x SPARC T4 processors, 2.85 GHz; 256 GB memory; 1 x 8 Gbs FC Qlogic HBA; 1 x 6 Gbs SAS HBA; 4 x 300 GB internal disks; Sun Storage F5100 Flash Array (40 x 24 GB flash modules); 1 x Sun Fire X4275 server configured as COMSTAR head.
    SPARC T3-4 server: 4 x SPARC T3 processors, 1.6 GHz; 512 GB memory; 1 x 8 Gbs FC Qlogic HBA; 8 x 146 GB internal disks; 1 x Sun Fire X4275 server configured as COMSTAR head.
    SPARC Enterprise M4000 server: 4 x SPARC64 VII+ processors, 2.66 GHz; 128 GB memory; 1 x 8 Gbs FC Qlogic HBA; 1 x 6 Gbs SAS HBA; 2 x 146 GB internal disks; Sun Storage F5100 Flash Array (40 x 24 GB flash modules); 1 x Sun Fire X4275 server configured as COMSTAR head.
    Software Configuration: Oracle Solaris 11 11/11; Oracle TimesTen 11.2.2.4.
    Benchmark Descriptions
    TimesTen Performance Throughput Benchmark (TPTBM) is shipped with TimesTen and measures the total throughput of the system. The workload can test read-only, update-only, delete and insert operations as required.
    Mobile Call Processing is a customer-based workload for processing calls made by mobile phone subscribers. The workload has a mixture of read-only, update, and insert-only transactions. The peak throughput performance is measured from multiple concurrent processes executing the transactions until a peak performance is reached via saturation of the available resources.
    Parallel Replication tests use both asynchronous and 2-Safe replication methods. For asynchronous replication, transactions are processed in batches to maximize the throughput capabilities of the replication server and network. In 2-Safe replication, also known as no-data-loss or high availability, transactions are replicated between servers immediately, emphasizing low latency. For both environments, performance is measured in the number of parallel replication servers and the maximum transactions-per-second for all concurrent processes.
    See Also: SPARC T4-2 Server (oracle.com, OTN); Oracle TimesTen In-Memory Database (oracle.com, OTN); Oracle Solaris (oracle.com, OTN); Oracle Database 11g Release 2 Enterprise Edition (oracle.com, OTN).
    Disclosure Statement: Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 1 October 2012.

    Read the article

  • World Record Siebel PSPP Benchmark on SPARC T4 Servers

    - by Brian
    Oracle's SPARC T4 servers set a new World Record for Oracle's Siebel Platform Sizing and Performance Program (PSPP) benchmark suite. The result used Oracle's Siebel Customer Relationship Management (CRM) Industry Applications Release 8.1.1.4 and Oracle Database 11g Release 2 running Oracle Solaris on three SPARC T4-2 and two SPARC T4-1 servers. The SPARC T4 servers running the Siebel PSPP 8.1.1.4 workload, which includes Siebel Call Center and Order Management System, demonstrate the impressive throughput performance of the SPARC T4 processor by achieving 29,000 users. This is the first Siebel PSPP 8.1.1.4 benchmark supporting 29,000 concurrent users with a rate of 239,748 Business Transactions/hour. The benchmark demonstrates vertical and horizontal scalability of Siebel CRM Release 8.1.1.4 on SPARC T4 servers.
    Performance Landscape
        Systems:                1 x SPARC T4-1 (1 x SPARC T4, 2.85 GHz) - Web
                                3 x SPARC T4-2 (2 x SPARC T4, 2.85 GHz) - App/Gateway
                                1 x SPARC T4-1 (1 x SPARC T4, 2.85 GHz) - DB
        Txn/hr:                 239,748
        Users:                  29,000
        Response Times (sec):   Call Center 0.165 / Order Management 0.925
        Transactions (Call Center + Order Management):  197,128 + 42,620
        Users (Call Center + Order Management):         20,300 + 8,700
    Configuration Summary
    Web Server Configuration: 1 x SPARC T4-1 server; 1 x SPARC T4 processor, 2.85 GHz; 128 GB memory; Oracle Solaris 10 8/11; iPlanet Web Server 7.
    Application Server Configuration: 3 x SPARC T4-2 servers, each with 2 x SPARC T4 processors, 2.85 GHz; 256 GB memory; 3 x 300 GB SAS internal disks; Oracle Solaris 10 8/11; Siebel CRM 8.1.1.5 SIA.
    Database Server Configuration: 1 x SPARC T4-1 server; 1 x SPARC T4 processor, 2.85 GHz; 128 GB memory; Oracle Solaris 11 11/11; Oracle Database 11g Release 2 (11.2.0.2).
    Storage Configuration: 1 x Sun Storage F5100 Flash Array with 80 x 24 GB flash modules.
    Benchmark Description
    The Siebel 8.1 PSPP benchmark includes Call Center and Order Management:
    Siebel Financial Services Call Center provides the most complete solution for sales and service, allowing customer service and telesales representatives to provide superior customer support, improve customer loyalty, and increase revenues through cross-selling and up-selling. High-level description of the use cases tested: Incoming Call Creates Opportunity, Quote and Order, and Incoming Call Creates Service Request. Three complex business transactions are executed simultaneously for a specific number of concurrent users. The ratios of these 3 scenarios were 30%, 40% and 30% respectively, together totaling 70% of all transactions simulated in this benchmark. Between each user operation and the next one, the think time averaged approximately 10, 13, and 35 seconds respectively.
    Siebel Order Management allows employees such as salespeople and call center agents to create and manage quotes and orders through their entire life cycle. Siebel Order Management can be tightly integrated with back-office applications, allowing users to perform tasks such as checking credit, confirming availability, and monitoring the fulfillment process. High-level description of the use cases tested: Order & Order Items Creation and Order Updates. Two complex Order Management transactions were executed simultaneously for a specific number of concurrent users, concurrently with the three Call Center scenarios above. The ratio of these 2 scenarios was 50% each, together totaling 30% of all transactions simulated in this benchmark. Between each user operation and the next one, the think time averaged approximately 20 and 67 seconds respectively.
    Key Points and Best Practices
    No processor cores or cache were activated or deactivated on the SPARC T-Series systems to achieve special benchmark effects.
    See Also: Siebel White Papers; SPARC T4-1 Server (oracle.com, OTN); SPARC T4-2 Server (oracle.com, OTN); Siebel CRM (oracle.com, OTN); Oracle Solaris (oracle.com, OTN); Oracle Database 11g Release 2 Enterprise Edition (oracle.com, OTN).
    Disclosure Statement: Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 30 September 2012.

    Read the article

  • My frustum culling is culling from the wrong point

    - by Xbetas
    I'm having problems with my frustum being in the wrong origin. It follows the rotation of my camera but not the position. In my camera class I'm generating a view-matrix: void Camera::Update() { UpdateViewMatrix(); glMatrixMode(GL_MODELVIEW); //glLoadIdentity(); glLoadMatrixf(GetViewMatrix().m); } Then extracting the planes using the projection matrix and modelview matrix: void UpdateFrustum() { Matrix4x4 projection, model, clip; glGetFloatv(GL_PROJECTION_MATRIX, projection.m); glGetFloatv(GL_MODELVIEW_MATRIX, model.m); clip = model * projection; m_Planes[RIGHT][0] = clip.m[ 3] - clip.m[ 0]; m_Planes[RIGHT][1] = clip.m[ 7] - clip.m[ 4]; m_Planes[RIGHT][2] = clip.m[11] - clip.m[ 8]; m_Planes[RIGHT][3] = clip.m[15] - clip.m[12]; NormalizePlane(RIGHT); m_Planes[LEFT][0] = clip.m[ 3] + clip.m[ 0]; m_Planes[LEFT][1] = clip.m[ 7] + clip.m[ 4]; m_Planes[LEFT][2] = clip.m[11] + clip.m[ 8]; m_Planes[LEFT][3] = clip.m[15] + clip.m[12]; NormalizePlane(LEFT); m_Planes[BOTTOM][0] = clip.m[ 3] + clip.m[ 1]; m_Planes[BOTTOM][1] = clip.m[ 7] + clip.m[ 5]; m_Planes[BOTTOM][2] = clip.m[11] + clip.m[ 9]; m_Planes[BOTTOM][3] = clip.m[15] + clip.m[13]; NormalizePlane(BOTTOM); m_Planes[TOP][0] = clip.m[ 3] - clip.m[ 1]; m_Planes[TOP][1] = clip.m[ 7] - clip.m[ 5]; m_Planes[TOP][2] = clip.m[11] - clip.m[ 9]; m_Planes[TOP][3] = clip.m[15] - clip.m[13]; NormalizePlane(TOP); m_Planes[NEAR][0] = clip.m[ 3] + clip.m[ 2]; m_Planes[NEAR][1] = clip.m[ 7] + clip.m[ 6]; m_Planes[NEAR][2] = clip.m[11] + clip.m[10]; m_Planes[NEAR][3] = clip.m[15] + clip.m[14]; NormalizePlane(NEAR); m_Planes[FAR][0] = clip.m[ 3] - clip.m[ 2]; m_Planes[FAR][1] = clip.m[ 7] - clip.m[ 6]; m_Planes[FAR][2] = clip.m[11] - clip.m[10]; m_Planes[FAR][3] = clip.m[15] - clip.m[14]; NormalizePlane(FAR); } void NormalizePlane(int side) { float length = 1.0/(float)sqrt(m_Planes[side][0] * m_Planes[side][0] + m_Planes[side][1] * m_Planes[side][1] + m_Planes[side][2] * m_Planes[side][2]); m_Planes[side][0] /= length; m_Planes[side][1] /= length; m_Planes[side][2] /= length; m_Planes[side][3] /= length; } And check against it with: bool PointInFrustum(float x, float y, float z) { for(int i = 0; i < 6; i++) { if( m_Planes[i][0] * x + m_Planes[i][1] * y + m_Planes[i][2] * z + m_Planes[i][3] <= 0 ) return false; } return true; } Then i render using: camera->Update(); UpdateFrustum(); int numCulled = 0; for(int i = 0; i < (int)meshes.size(); i++) { if(!PointInFrustum(meshCenter.x, meshCenter.y, meshCenter.z)) { meshes[i]->SetDraw(false); numCulled++; } else meshes[i]->SetDraw(true); } What am i doing wrong?

    Read the article

  • My frustum culling is culling from the wrong point [SOLVED]

    - by Xbetas
    I'm having problems with my frustum being in the wrong origin. It follows the rotation of my camera but not the position. In my camera class I'm generating a view-matrix: void Camera::Update() { UpdateViewMatrix(); glMatrixMode(GL_MODELVIEW); //glLoadIdentity(); glLoadMatrixf(GetViewMatrix().m); } Then extracting the planes using the projection matrix and modelview matrix: void UpdateFrustum() { Matrix4x4 projection, model, clip; glGetFloatv(GL_PROJECTION_MATRIX, projection.m); glGetFloatv(GL_MODELVIEW_MATRIX, model.m); clip = model * projection; m_Planes[RIGHT][0] = clip.m[ 3] - clip.m[ 0]; m_Planes[RIGHT][1] = clip.m[ 7] - clip.m[ 4]; m_Planes[RIGHT][2] = clip.m[11] - clip.m[ 8]; m_Planes[RIGHT][3] = clip.m[15] - clip.m[12]; NormalizePlane(RIGHT); m_Planes[LEFT][0] = clip.m[ 3] + clip.m[ 0]; m_Planes[LEFT][1] = clip.m[ 7] + clip.m[ 4]; m_Planes[LEFT][2] = clip.m[11] + clip.m[ 8]; m_Planes[LEFT][3] = clip.m[15] + clip.m[12]; NormalizePlane(LEFT); m_Planes[BOTTOM][0] = clip.m[ 3] + clip.m[ 1]; m_Planes[BOTTOM][1] = clip.m[ 7] + clip.m[ 5]; m_Planes[BOTTOM][2] = clip.m[11] + clip.m[ 9]; m_Planes[BOTTOM][3] = clip.m[15] + clip.m[13]; NormalizePlane(BOTTOM); m_Planes[TOP][0] = clip.m[ 3] - clip.m[ 1]; m_Planes[TOP][1] = clip.m[ 7] - clip.m[ 5]; m_Planes[TOP][2] = clip.m[11] - clip.m[ 9]; m_Planes[TOP][3] = clip.m[15] - clip.m[13]; NormalizePlane(TOP); m_Planes[NEAR][0] = clip.m[ 3] + clip.m[ 2]; m_Planes[NEAR][1] = clip.m[ 7] + clip.m[ 6]; m_Planes[NEAR][2] = clip.m[11] + clip.m[10]; m_Planes[NEAR][3] = clip.m[15] + clip.m[14]; NormalizePlane(NEAR); m_Planes[FAR][0] = clip.m[ 3] - clip.m[ 2]; m_Planes[FAR][1] = clip.m[ 7] - clip.m[ 6]; m_Planes[FAR][2] = clip.m[11] - clip.m[10]; m_Planes[FAR][3] = clip.m[15] - clip.m[14]; NormalizePlane(FAR); } void NormalizePlane(int side) { float length = 1.0/(float)sqrt(m_Planes[side][0] * m_Planes[side][0] + m_Planes[side][1] * m_Planes[side][1] + m_Planes[side][2] * m_Planes[side][2]); m_Planes[side][0] *= length; m_Planes[side][1] *= length; m_Planes[side][2] *= length; m_Planes[side][3] *= length; } And check against it with: bool PointInFrustum(float x, float y, float z) { for(int i = 0; i < 6; i++) { if( m_Planes[i][0] * x + m_Planes[i][1] * y + m_Planes[i][2] * z + m_Planes[i][3] <= 0 ) return false; } return true; } Then i render using: camera->Update(); UpdateFrustum(); int numCulled = 0; for(int i = 0; i < (int)meshes.size(); i++) { if(!PointInFrustum(meshCenter.x, meshCenter.y, meshCenter.z)) { meshes[i]->SetDraw(false); numCulled++; } else meshes[i]->SetDraw(true); } Matrices look like (Camera is at (5, 0, 0)): ModelView [0,0,0.99,0] [0,1,0,0] [-0.99,0,0,0] [0,0,-5,1] Projection [0.814,0,0,0] [0,1.303,0,0] [0,0,-1,0] [0,0,-0.02,0] Clip [0,0,-1,-0.999] [0,1.30,0,0] [-0.814,0,0,0] [0,0,4.98,4.99] What am i doing wrong?
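
    The only functional change from the earlier post above is in NormalizePlane: since length already holds the reciprocal of the plane-normal magnitude, dividing by it (as in the first post) scales the coefficients the wrong way, while multiplying (as here) actually normalizes them:
        // first post: length = 1.0/sqrt(...), so "/= length" multiplies by the magnitude (wrong scale)
        m_Planes[side][0] /= length;
        // this post:  "*= length" multiplies by the reciprocal, i.e. divides by the magnitude (correct)
        m_Planes[side][0] *= length;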

    Read the article

  • Is This Your Idea of Disaster Recovery?

    - by rickramsey
    Don't just make do with less. Protect what you've got. By, for instance, deploying Oracle Solaris 10 inside a zone cluster. "Wait," you say, "what is a zone cluster?" It is a zone deployed across different physical servers. "Who would do that!" you ask in a mild panic. Why, an upstanding sysadmin citizen interested in protecting his or her employer's investment with appropriate high availability and disaster recovery. If one server gets wiped out by Hurricane Sandy along with pretty much the entire East Coast of the USA, your zone continues to run on the other server(s). Provided you set them up in Edinburgh. This white paper (pdf) explains what a zone cluster is and how to use it. If a white paper reminds you of having to read War and Peace in school, just use this Oracle RAC and Solaris Cluster Cheat Sheet, instead. "But wait!" you exclaim. "I didn't realize Solaris 10 offered zone clusters!" I didn't, either! And in an earlier version of this blog post I said that zone clusters were only available with Oracle Solaris 11. But Karoly Vegh pointed me to the documentation for Oracle Solaris Cluster 3.3, which explains how to manage zone clusters in Oracle Solaris 10. Bite my fist! So, the point I was trying to make is not just that you can run Oracle Solaris 10 zone clusters, but that you can run them in an Oracle Solaris 11 environment. Now let's return to our conversation and pick up where we left off ... "Oh no! Whatever shall I do?" Fear not. Remember how Oracle Solaris 11 lets you create a Solaris 10 branded zone inside a system running Oracle Solaris 11? Well, the Solaris Cluster engineers thought that was a bang-up idea, and decided to extend Oracle Solaris Cluster so that you could run your Solaris 10 applications inside the protective cocoon of an Oracle Solaris 11 zone cluster. Take advantage of the installation improvements and network virtualization capabilities of Oracle Solaris 11 while still running your application on Oracle Solaris 10. You Luddite, you. That capability is in the latest release of Oracle Solaris Cluster, version 4.1, which became available last Friday. "Last Friday! Is it too late to get a copy?" You can still get a free copy from our download center (see below). And, if you'd like to know what other goodies the 4.1 release of Oracle Solaris Cluster provides, see: What's New In Oracle Solaris Cluster 4.1 (pdf) Free download Oracle Solaris Cluster 4.1 (SPARC or x86) Tech Article: How to Upgrade to Oracle Solaris Cluster 4.0, by Tim Read. As always, you can get the latest information about Oracle Solaris Cluster, plus technical how-to articles, documentation, and more from Oracle Solaris Cluster Resource Page for Sysadmins and Developers. And don't forget about the online launch of Oracle Solaris 11.1 and Oracle Solaris Cluster 4.1, scheduled for Nov 7. "I feel so much better, now!" Think nothing of it. That's what we're here for. - Rick Website Newsletter Facebook Twitter
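
    For readers who want to try the zone-cluster idea described above, a minimal command-line sketch of standing one up (the name zc1 is a placeholder; the exact properties set inside the interactive configure session vary by Oracle Solaris Cluster release, so treat this as an outline rather than a recipe):
        # on one node of the global cluster
        clzonecluster configure zc1    # zonecfg-style session: create the zone cluster,
                                       # optionally set brand=solaris10, and add one node
                                       # entry per physical cluster node
        clzonecluster install zc1      # install the zone on every member node
        clzonecluster boot zc1         # boot the whole zone cluster
        clzonecluster status zc1       # verify each zone-cluster node reports Online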

    Read the article

  • The Growing Importance of Network Virtualization

    - by user12608550
    The Growing Importance of Network Virtualization We often focus on server virtualization when we discuss cloud computing, but just as often we neglect to consider some of the critical implications of that technology. The ability to create virtual environments (or VEs [1]) means that we can create, destroy, activate and deactivate, and more importantly, MOVE them around within the cloud infrastructure. This elasticity and mobility has profound implications for how network services are defined, managed, and used to provide cloud services. It's not just servers that benefit from virtualization, it's the network as well. Network virtualization is becoming a hot topic, and not just for discussion but for companies like Oracle and others who have recently acquired net virtualization companies [2,3]. But even before this topic became so prominent, Solaris engineers were working on technologies in Solaris 11 to virtualize network services, known as Project Crossbow [4]. And why is network virtualization so important? Because old assumptions about network devices, topology, and management must be re-examined in light of the self-service, elasticity, and resource sharing requirements of cloud computing infrastructures. Static, hierarchical network designs, and inter-system traffic flows, need to be reconsidered and quite likely re-architected to take advantage of new features like virtual NICs and switches, bandwidth control, load balancing, and traffic isolation. For example, traditional multi-tier Web services (Web server, App server, DB server) that share net traffic over Ethernet wires can now be virtualized and hosted on shared-resource systems that communicate within a larger server at system bus speeds, increasing performance and reducing wired network traffic. And virtualized traffic flows can be monitored and adjusted as needed to optimize network performance for dynamically changing cloud workloads. Additionally, as VEs come and go and move around in the cloud, static network configuration methods cannot easily accommodate the routing and addressing flexibility that VE mobility implies; virtualizing the network itself is a requirement. Oracle Solaris 11 [5] includes key network virtualization technologies needed to implement cloud computing infrastructures. It includes features for the creation and management of virtual NICs and switches, and for the allocation and control of the traffic flows among VEs [6]. Additionally it allows for both sharing and dedication of hardware components to network tasks, such as allocating specific CPUs and vNICs to VEs, and even protocol-specific management of traffic. So, have a look at your current network topology and management practices in view of evolving cloud computing technologies. And don't simply duplicate the physical architecture of servers and connections in a virtualized environment…rethink the traffic flows among VEs and how they can be optimized using Oracle Solaris 11 and other Oracle products and services. [1] I use the term "virtual environment" or VE here instead of the more commonly used "virtual machine" or VM, because not all virtualized operating system environments are full OS kernels under the control of a hypervisor…in other words, not all VEs are VMs. In particular, VEs include Oracle Solaris zones, as well as SPARC VMs (previously called LDoms), and x86-based Solaris and Linux VMs running under hypervisors such as OEL, Xen, KVM, or VMware. 
[2] Oracle follows VMware into network virtualization space with Xsigo purchase; http://www.mercurynews.com/business/ci_21191001/oracle-follows-vmware-into-network-virtualization-space-xsigo [3] Oracle Buys Xsigo; http://www.oracle.com/us/corporate/press/1721421 [4] Oracle Solaris 11 Networking Virtualization Technology, http://www.oracle.com/technetwork/server-storage/solaris11/technologies/networkvirtualization-312278.html [5] Oracle Solaris 11; http://www.oracle.com/us/products/servers-storage/solaris/solaris11/overview/index.html [6] For example, the Solaris 11 'dladm' command can be used to limit the bandwidth of a virtual NIC, as follows: dladm create-vnic -l net0 -p maxbw=100M vnic0
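
    As a small illustration of the point in [6], a sketch of the kind of thing Crossbow makes possible from the command line (link and flow names are placeholders): an internal virtual switch, a virtual NIC attached to it, and a bandwidth-capped flow for one class of traffic.
        # build a software-only "wire" and hang a virtual NIC off it
        dladm create-etherstub stub0
        dladm create-vnic -l stub0 vnic1

        # classify HTTP traffic on that vNIC into its own flow and cap it at 50 Mbps
        flowadm add-flow -l vnic1 -a transport=tcp,local_port=80 httpflow
        flowadm set-flowprop -p maxbw=50M httpflow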

    Read the article

  • PBCS Hyperion Planning in the Cloud PartnerLab 2-Day Training

    - by Mike.Hallett(at)Oracle-BI&EPM
    Objective of the PartnerLab: To help partners engage the interest and commitment of their clients for Oracle Planning and Budgeting Cloud Service projects. This is your unique opportunity to learn how to expand your business with the PBCS Application. This 2-day PartnerLab workshop will enable your team to understand the fundamental concepts of the PBCS Application, the implications of Oracle Public Cloud deployment, and to effectively present and demonstrate PBCS to prospective clients. Participants must already be competent with the on-premise Hyperion Planning application: this training will build on existing expertise to cover SaaS Cloud specific deployment implications and how best to demonstrate this to clients and win services-led PBCS implementation engagements.
    Register here now and see full Agenda for 07-08 July 2014 in Oracle Paris – Colombes, 15, bd Charles de Gaulle, 92715 Colombes Cedex, France.
    Register here now and see full Agenda for 15-16 July 2014 in Oracle Italy, via Fulvio Testi 136, Cinisello Balsamo, Milan, Italy.
    This training is free of charge to OPN Member Partners. This PartnerLab is a 2-day in-class workshop event led by Oracle Pre-Sales subject matter experts. The 2 days consist of discussions, presentations, demonstrations and hands-on exercises. Note: the hands-on exercises are in an already installed environment that you can have access to after the event (see more @ Hyperion Demonstration Systems for Partners). The PartnerLab will be delivered in English or the local language.
    Mandatory prerequisites for a participant: please view the material available and complete the assessments before you attend the PartnerLab event. Material and assessments cover foundational information about Oracle Hyperion Planning and Oracle Planning and Budgeting Cloud Service.
    View material prior to the live PartnerLab: Oracle Hyperion Planning 11 Sales Specialist guided learning path; Oracle Hyperion Planning 11 PreSales Specialist guided learning path; Oracle Hyperion Planning 11 Implementation Specialist guided learning path; Oracle Planning and Budgeting Cloud Service Specialist guided learning path; PBCS How-to Videos; Learn More at Oracle Planning and Budgeting Cloud Service.
    Take and pass these online assessments prior to the live PartnerLab training: Oracle Hyperion Planning 11 Sales Specialist online exam; Oracle Hyperion Planning 11 PreSales Specialist online exam.

    Read the article

  • Oracle 11g R2: one year since release

    - by Yusuke.Yamamoto
    A Japanese-language article marking the first anniversary (November 17, 2010) of the Oracle Database 11g Release 2 launch. It collects Oracle Database 11g R2 overview material, customer voices, and a timeline of milestones, including: 2009-11-11 Oracle Exadata Database Machine Version 2 announced; 2009-11-17 Oracle Database 11g R2 released; 2010-06-23 Oracle Application Express 4.0 released; 2010-07-09 IDC Japan 2009 Windows RDBMS market-share figures; 2010-08-17 TPC-C Benchmark Price/Performance result; 2010-09-13 Patch Set 11.2.0.2 for Linux released; 2010-10-20 Oracle Exadata Database Machine X2 announced; 2010-11-17 one year since the Oracle Database 11g R2 release. Links point to IT Leaders and Think IT articles and to the Oracle Database 11g Release 2 (11gR2) pages.

    Read the article

  • Sorting and Re-arranging List of HashMaps

    - by HonorGod
    I have a List which is straight forward representation of a database table. I am trying to sort and apply some magic after the data is loaded into List of HashMaps. In my case this is the only hard and fast way of doing it becoz I have a rules engine that actually updates the values in the HashMap after several computations. Here is a sample data representation of the HashMap (List of HashMap) - {fromDate=Wed Mar 17 10:54:12 EDT 2010, eventId=21, toDate=Tue Mar 23 10:54:12 EDT 2010, actionId=1234} {fromDate=Wed Mar 17 10:54:12 EDT 2010, eventId=11, toDate=Wed Mar 17 10:54:12 EDT 2010, actionId=456} {fromDate=Sat Mar 20 10:54:12 EDT 2010, eventId=20, toDate=Thu Apr 01 10:54:12 EDT 2010, actionId=1234} {fromDate=Wed Mar 24 10:54:12 EDT 2010, eventId=22, toDate=Sat Mar 27 10:54:12 EDT 2010, actionId=1234} {fromDate=Wed Mar 17 10:54:12 EDT 2010, eventId=11, toDate=Fri Mar 26 10:54:12 EDT 2010, actionId=1234} {fromDate=Sat Mar 20 10:54:12 EDT 2010, eventId=11, toDate=Wed Mar 31 10:54:12 EDT 2010, actionId=1234} {fromDate=Mon Mar 15 10:54:12 EDT 2010, eventId=12, toDate=Wed Mar 17 10:54:12 EDT 2010, actionId=567} I am trying to achieve couple of things - 1) Sort the list by actionId and eventId after which the data would look like - {fromDate=Wed Mar 17 10:54:12 EDT 2010, eventId=11, toDate=Wed Mar 17 10:54:12 EDT 2010, actionId=456} {fromDate=Mon Mar 15 10:54:12 EDT 2010, eventId=12, toDate=Wed Mar 17 10:54:12 EDT 2010, actionId=567} {fromDate=Wed Mar 24 10:54:12 EDT 2010, eventId=22, toDate=Sat Mar 27 10:54:12 EDT 2010, actionId=1234} {fromDate=Wed Mar 17 10:54:12 EDT 2010, eventId=21, toDate=Tue Mar 23 10:54:12 EDT 2010, actionId=1234} {fromDate=Sat Mar 20 10:54:12 EDT 2010, eventId=20, toDate=Thu Apr 01 10:54:12 EDT 2010, actionId=1234} {fromDate=Wed Mar 17 10:54:12 EDT 2010, eventId=11, toDate=Fri Mar 26 10:54:12 EDT 2010, actionId=1234} {fromDate=Sat Mar 20 10:54:12 EDT 2010, eventId=11, toDate=Wed Mar 31 10:54:12 EDT 2010, actionId=1234} 2) If we group the above list by actionId they would be resolved into 3 groups - actionId=1234, actionId=567 and actionId=456. Now here is my question - For each group having the same eventId, I need to update the records so that they have wider fromDate to toDate. Meaning, if you consider the last two rows they have same actionId = 1234 and same eventId = 11. Now we can to pick the least fromDate from those 2 records which is Wed Mar 17 10:54:12 and farther toDate which is Wed Mar 31 10:54:12 and update those 2 record's fromDate and toDate to Wed Mar 17 10:54:12 and Wed Mar 31 10:54:12 respectively. Any ideas? PS: I already have some pseudo code to start with. 
import java.util.ArrayList; import java.util.Calendar; import java.util.Collections; import java.util.Comparator; import java.util.Date; import java.util.HashMap; import java.util.List; import org.apache.commons.lang.builder.CompareToBuilder; public class Tester { boolean ascending = true ; boolean sortInstrumentIdAsc = true ; boolean sortEventTypeIdAsc = true ; public static void main(String args[]) { Tester tester = new Tester() ; tester.printValues() ; } public void printValues () { List<HashMap<String,Object>> list = new ArrayList<HashMap<String,Object>>() ; HashMap<String,Object> map = new HashMap<String,Object>(); map.put("actionId", new Integer(1234)) ; map.put("eventId", new Integer(21)) ; map.put("fromDate", getDate(1) ) ; map.put("toDate", getDate(7) ) ; list.add(map); map = new HashMap<String,Object>(); map.put("actionId", new Integer(456)) ; map.put("eventId", new Integer(11)) ; map.put("fromDate", getDate(1)) ; map.put("toDate", getDate(1) ) ; list.add(map); map = new HashMap<String,Object>(); map.put("actionId", new Integer(1234)) ; map.put("eventId", new Integer(20)) ; map.put("fromDate", getDate(4) ) ; map.put("toDate", getDate(16) ) ; list.add(map); map = new HashMap<String,Object>(); map.put("actionId", new Integer(1234)) ; map.put("eventId", new Integer(22)) ; map.put("fromDate",getDate(8) ) ; map.put("toDate", getDate(11)) ; list.add(map); map = new HashMap<String,Object>(); map.put("actionId", new Integer(1234)) ; map.put("eventId", new Integer(11)) ; map.put("fromDate",getDate(1) ) ; map.put("toDate", getDate(10) ) ; list.add(map); map = new HashMap<String,Object>(); map.put("actionId", new Integer(1234)) ; map.put("eventId", new Integer(11)) ; map.put("fromDate",getDate(4) ) ; map.put("toDate", getDate(15) ) ; list.add(map); map = new HashMap<String,Object>(); map.put("actionId", new Integer(567)) ; map.put("eventId", new Integer(12)) ; map.put("fromDate", getDate(-1) ) ; map.put("toDate",getDate(1)) ; list.add(map); System.out.println("\n Before Sorting \n "); for(int j = 0 ; j < list.size() ; j ++ ) System.out.println(list.get(j)); Collections.sort ( list , new HashMapComparator2 () ) ; System.out.println("\n After Sorting \n "); for(int j = 0 ; j < list.size() ; j ++ ) System.out.println(list.get(j)); } public static Date getDate(int days) { Calendar cal = Calendar.getInstance(); cal.setTime(new Date()); cal.add(Calendar.DATE, days); return cal.getTime() ; } public class HashMapComparator2 implements Comparator { public int compare ( Object object1 , Object object2 ) { if ( ascending == true ) { return new CompareToBuilder() .append(( ( HashMap ) object1 ).get ( "actionId" ), ( ( HashMap ) object2 ).get ( "actionId" )) .append(( ( HashMap ) object2 ).get ( "eventId" ), ( ( HashMap ) object1 ).get ( "eventId" )) .toComparison(); } else { return new CompareToBuilder() .append(( ( HashMap ) object2 ).get ( "actionId" ), ( ( HashMap ) object1 ).get ( "actionId" )) .append(( ( HashMap ) object2 ).get ( "eventId" ), ( ( HashMap ) object1 ).get ( "eventId" )) .toComparison(); } } } }
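
    A minimal Java sketch of step 2 under the stated goal (widen every row in a group sharing both actionId and eventId to the group's earliest fromDate and latest toDate); it assumes the list has already been sorted as in step 1 and needs java.util.Map and java.util.LinkedHashMap in addition to the imports above:
        // group rows by the (actionId, eventId) pair, preserving the sorted order
        Map<String, List<HashMap<String, Object>>> groups = new LinkedHashMap<String, List<HashMap<String, Object>>>();
        for (HashMap<String, Object> row : list) {
            String key = row.get("actionId") + "|" + row.get("eventId");
            if (!groups.containsKey(key)) {
                groups.put(key, new ArrayList<HashMap<String, Object>>());
            }
            groups.get(key).add(row);
        }
        // widen every row in a group to the group's earliest fromDate and latest toDate
        for (List<HashMap<String, Object>> group : groups.values()) {
            Date minFrom = null, maxTo = null;
            for (HashMap<String, Object> row : group) {
                Date from = (Date) row.get("fromDate");
                Date to   = (Date) row.get("toDate");
                if (minFrom == null || from.before(minFrom)) { minFrom = from; }
                if (maxTo   == null || to.after(maxTo))      { maxTo   = to;   }
            }
            for (HashMap<String, Object> row : group) {
                row.put("fromDate", minFrom);
                row.put("toDate", maxTo);
            }
        }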

    Read the article

  • SOA PARTNER COMMUNITY NEWSLETTER JULY 2012

    - by mseika
    SOA PARTNER COMMUNITY NEWSLETTER JULY 2012 Dear SOA partner community member To provide our community members the best of our knowledge, we want your feedback on our SOA Partner community. Thus we are organizing SOA Partner Community Survey 2012. We request you to participate in the survey and give your valuable feedback on various areas of marketing, sales and education. To continue our successful BPM Suite, Oracle is launching together with you Process Accelerators initiative. It’s your opportunity to co-develop and market predefined processes. Oracle Fusion Applications Design Patterns are a great tool to develop your SOA or BPM solution or process accelerators. To promote your SOA & BPM Specialization we continue to offer several benefits. This month we would like to highlight our Specialization Plaques - make sure you request one for your office! Our Fusion Middleware Summer Camps are booked out, if could not get a seat you can attend the SOA & BPM track @ Virtual Developer Day: Oracle Fusion Development Oracle demo systems offer´s two new demos: Business Driven Development based on BPM Suite & SOA Lifecycle Management. Jürgen KressOracle SOA & BPM Partner Adoption EMEA NEW CONTENT Community SurveyProcess Accelerators KitPlaques SOA & BPM SpecializedSOA & BPM at Virtual Developer Day News from our Partners & CommunityOverview of SOA Diagnostics in 11.1.1.6 Business driven development(BDD) demo now available! SOA Lifecycle Management Oracle Fusion applications design patterns Updated material by Oracle Connect and Network SOA Blogs SOA on Facebook SOA on LinkedIn SOA on Twitter Mix SOA Forum COMMUNITY SURVEY Like every year we would like to get your feedback in our SOA Partner Community Survey 2012. Make sure that You attend to further develop our community and support our planning! It is key for us to get your feedback to prepare for the next fiscal year. Back to top PROCESS ACCELERATORS KIT Oracle is very interested to co-develop and market with you, our partners, pre-defined processes for BPM Suite.I am very happy to announce a new program called “Oracle BPM Partner Solution Catalog”. This program will provide a one-stop shop for our customers looking for Oracle BPM partner solutions available in the market today.The Oracle BPM Solution Catalog will be hosted on our very popular Oracle Technology Network (OTN). To give you an idea of the scale of customer visibility, OTN today receives over 1Million hits per day from our business and developer community. We would like to invite you to list your Oracle BPM 11g solutions available today.In order to participate in this program, you need to do the following: Fill in the attached slide templates - #3 and #4 for each Oracle BPM 11g solution you would like to list on OTN.Please add links to whitepapers , videos, references to the specific solution in the template slide. We recommend that you create a landing page on your website for these linked artifacts and just point to the same from within the PowerPoint template. This will give you the flexibility to update the information as frequently as needed. If you have the particular solution in production or a reference available, please list them as well. Send the PowerPoint template slides (1 set of slides for each Oracle BPM solution) to [email protected]. In addition to having the opportunity to list your solutions on OTN for Oracle customers, you will have the chance to advertise your new wins/implementations/solutions in an Oracle Sponsored PM Webinar held every quarter. 
This program is targeted to go live by the end of summer 2012. At this point, we are targeting a soft launch in July end 2012 so send on your BPM solutions information as soon as possible. We would love to have your solution(s) listed in the “Oracle BPM Partner Solution Catalog” at the time of the launch. This will be a live repository so you can keep adding more solutions as they become available. If you have any questions, please feel free to contact us [email protected], Product Strategy Director, Oracle BPM , Phone +1 650.506.5486.Thank you and look forward to hearing from you. Oracle BPM team Process Accelerators Overview.pdf ProcessAcceleratorsDataSheet.pdf Demos draUPK.zip & trmUPK.zip BPM Solution repository slides.ppt Additional BPM material BPM Process Development Lifecycle Document that describes recommended approach to collaborative process modeling across business and IT tools ADF 11g PS5 Application with Customized BPM Worklist Task Flow (MDS Seeded Customization) by Andrejus Baranovskis BPMN process editor problems in 11.1.1.6 by Mark Nelson BPM – Disable DBMS job to refresh B2B Materialized View by Mark Nelson For the complete kit please visit the BPM folder at our SOA Community Workspace (SOA Community membership required). For the complete presentation please visit our SOA Community Workspace (SOA Community membership required). Information is Oracle and Partner confidential! Back to top PLAQUES SOA & BPM SPECIALIZED We continue to offer you a nice SOA & BPM Specialization plaque with your logo to proof your success. If you are a SOA or BPM Specialized partner and would like to request the plaque please send Brigitte an e-mail with the following information: Partner Name Partner logo (preferred eps file) Partner Status gold or platinum Your shipping address Your Specialization: SOA or BPM We recommend to mount the plaque at your office reception in addition you can use the SOA Specialization logos at your website download Logo: Gold & Platinum or the BPM logos Gold & Platinum Back to top SOA & BPM AT VIRTUAL DEVELOPER DAY Register now for this FREE hands-on online workshop Get up to date and learn everything you wanted to know about Oracle ADF & Fusion Development plus live Q&A chats with Oracle technical staffOracle Application Development Framework (ADF) is the standards based, strategic framework for Oracle Fusion Applications and Oracle Fusion Middleware. Oracle ADF’s integration with the Oracle SOA Suite, Oracle WebCenter and Oracle BI creates a complete productive development platform for your custom applications.Join us at this FREE virtual event and learn the latest in Fusion Development including: Is Oracle ADF development faster and simpler than Forms, Apex or .Net? Mobile Application Development with ADF Mobile Oracle ADF development with Eclipse Oracle WebCenter Portal and ADF Development Application Lifecycle Management with ADF Building Process Centric Applications with ADF and BPM Oracle Business Intelligence and ADF Integration Live Q&A chats with Oracle technical staff Developer lead, manager or architect - this event has something for everyone. Don’t miss this opportunity.Tuesday, July 10, 2012. 9:00 a.m. PT -1:00 p.m. PT 11:00 a.m. CT - 3:00 p.m. CT 12:00 p.m. ET - 4:00 p.m. ET 1:00 p.m. BRT - 5:00 p.m. BRT Register online now! for this FREE event. 
Agenda: 09:00 am Opening 09:30 am Keynote: Oracle Fusion Development Track1Introduction to Fusion Development Track2What's New in Fusion Development Track3Fusion Development in the Enterprise 10:00 am Is Oracle ADF Development Faster and Simpler than Oracle Forms, APEX or .Net? Mobile Application Development with ADF Mobile Oracle WebCenter Portal and ADF Development 11:00 am Rich Web UI made simple - an ADF Faces Overview Oracle Enterprise Pack for Eclipse - ADF Development Building Process Centric Applications with ADF and BPM 12:00 noon Next Generation Controller for JSF Application Lifecycle Management for ADF Oracle Business Intelligence and ADF Integration *Hands On Lab – WebCenter and ADF Lab w/ JDeveloper - Lab materials will be provided ahead of the event to give you ample time to work through the lab and increase the productivity of the live chat sessions the day of the event. Sessions abstractsRegister online now! for this FREE event Read more on Community Events and post your comment here. Back to top NEWS FROM OUR PARTNERS AND COMMUNITY Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity JDeveloper & ADF?Troubleshooting BPMN process editor problems in 11.1.1.6http://dlvr.it/1p0FfS SOA Community?SOA & BPM @ Virtual Developer Day: Oracle Fusion Development - July 10th 2012https://soacommunity.wordpress.com/2012/07/02/soa-bpm-virtual-developer-day- oracle-fusion-developmentjuly-10th-2012/#soacommunity #soa #bom #education orclateamsoa ?A-Team Blog #ateam: BAM design pointers - In working recently with a large Oracle customer on SOA and BAM, I discove.http://ow.ly/1kYqES SOA CommunitySOA Community Newsletter June 2012http://wp.me/p10C8u-qw SOA CommunityBPMN process editor problems in 11.1.1.6 by Mark Nelsonhttp://redstack.wordpress.com/2012/06/27/ bpmn-process-editor-problems-in-11-1-1-6 #soacommunity #bpm OTNArchBeat ?SOA Learning Library: free short, topic-focused training on Oracle SOA & BPM products | @SOACommunity http://pub.vitrue.com/NE1G Andrejus Baranovskis ?ADF 11g PS5 Application with Customized BPM Worklist Task Flow (MDS Seeded Customization)http://fb.me/1coX4r1X1 SOA CommunitySOA Learning Library provides a comprehensive curriculum for the SOA and BPM product suites https://soacommunity.wordpress.com/2012/06/27/soa-learning-library #soacommunity #soa #bpm OTNArchBeat ?A Universal JMX Client for Weblogic - Part 1: Monitoring BPEL Thread Pools in SOA 11g | Stefan Koserhttp://pub.vitrue.com/mQVZ OTNArchBeat ?BPM - Disable DBMS job to refresh B2B Materialized View | Mark Nelson http://pub.vitrue.com/3PR0Oracle SOA ?Learn how Choice Hotels Implements Innovative Google Maps Solution with #OracleSOA http://bit.ly/MTwIJ3 SOA Communitytop Tweets SOA Partner Community - June 2012 Send your tweets @soacommunity #soacommunity https://soacommunity.wordpress.com/2012/06/25/top-tweets-soa-partner-community-june-2012 Torsten Winterberg#OPITZ is pushing Oracle commitment to the next level: New Specializations done: ADF, BPM, WLS, Exadatahttp://bit.ly/KX1WVS ServiceTechSymposium ?Only 8 more days left until Super Early Bird Registration Discount expires! http://www.servicetechsymposium.com OracleBlogsSOA Management in 3 minutes - Video explainerhttp://ow.ly/1kN5pn SOA Community ?SOA, Cloud & Service Technology Symposium 2012 London - Enter Promo Code: Djmxz370https://soacommunity.wordpress.com/2012/06/22/soa-cloud-service-technology-symposium-2012-london #soasymposium #soacommunity #soa Heidi BuelowGreat course! 
w David Read RT @soacommunity: product management ADF for BPM training 5 seats left https://soacommunity.wordpress.com/2012/06/12/fusion-middleware-summer-campsadvanced-partner-trainings/ #bpm #soacommunity SOA Community ?product management ADF for BPM training 5 seats lefthttps://soacommunity.wordpress.com/2012/06/12/fusion-middleware-summer-campsadvanced-partner-trainings/ #bpm #soacommunity OTNArchBeat ?Oacle Fusion Applications Design Patterns Now Available For Developers | Ultan O'Broinhttp://pub.vitrue.com/UEiF OTNArchBeat ?SOA, Cloud & Service Technology Symposium 2012London - Special Oracle Discounthttp://pub.vitrue.com/8E0J SOA CommunityBecome a facebook fan of soacommunity http://www.facebook.com/soacommunity #soacommunity SOA Community ?SOA Suite HealthCare Integration Architecture https://blogs.oracle.com/SOAForHealthcare/entry/soa_suite_healthcare_integration_architecture #soacommunity #soa Andrejus Baranovskis ?Running Pre-built Virtual Machine for SOA Suite and BPM Suite 11g PS5 on Mac OS X Snow Leopard (10.6http://fb.me/vB8nO0Vg OracleBlogsPrinciples of Service-Oriented Architecture by Douwe P. van den Bos http://ow.ly/1kIcOP OTNArchBeatOracle Public Cloud Architecture | @TylerJewell http://ow.ly/bHAcL The SOA Network ?Business Process Management, Service-Oriented Architecture, and Web 2.0: Business Transformation or.http://bit.ly/LBgREL #ITNews #SOA OracleBlogs ?Oracle SOA Foundation Practitioner Certificationhttp://ow.ly/1kGYYg Frank Nimphius ?Learn Advanced ADF. ORACLE Fusion Middleware Summer Camps in Lisbon - July 9th - 13thhttp://bit.ly/KGCl3i SOA CommunityTransform Your Application Integration with Best Practices from Oracle Customershttps://blogs.oracle.com/SOA/entry/transform_your_application_integration_with #soacommunity #soa #bpm Simone GeibWhat you always wanted to know about #oraclesoa diagnostics: Shawn Bailey, Overview of SOA Diagnostics in 11.1.1.6,http://ow.ly/bxK0M Oracle SOA ?Save the date: Jun 21 10AM, SOA & BPM Customer Insight Series. Hear how Choice Hotels went from legacy to #oraclesoa http://bit.ly/LsNDGl OTNArchBeat ?New VirtualBox images for Oracle SOA Suite & Oracle BPM Suite 11.1.1.6.0http://ow.ly/bwDAl OracleBlogs ?Process development lifecycle in Oracle BPM 11g http://ow.ly/1ktesY Daniel AmadeiNew post: Oracle BPEL 11g Message Delivery & Recovery.http://amadei.com.br/blog/index.php /oracle-bpel-11g-message-delivery SOA Community ?Sending out the June edition of the #soacommunity newsletter - read it or become a member http://www.oracle.com/goto/emea/soa!#soa #bpm Arun Pareek ?For the past six months Ahmed Aboulnaga and me have been working on Oracle SOA Suite 11g Administrator's Handbook.http://lnkd.in/CAvpUQ SOA CommunitySun shine all day no clouds - solar eclipse is over... #sunshine #cloud http://www.infoq.com/presentations/Swarm-Computing Michel SchildmeijerWatch my blog Oracle Service Bus 11g: listing projects and services with WLST - part 1 http://lnkd.in/B7f3GQ @TITAN_GS @wlscommunity OTNArchBeatBook Review: Oracle Application Integration Architecture (AIA) Foundation Pack 11gR1: Essentials | Rajesh Rahejahttp://ow.ly/bn2cc OTNArchBeat ?Driving from Business Architecture to Business Process Services | @vghariharan http://ow.ly/bn5UB OTNArchBeat ?SOA Analysis within the Department of Defense Architecture Framework (DoDAF) 2.0 - Part II | Dawit Lessanu http://ow.ly/bn6sX Simone Geib ?Contact me directly for ideas how to improvehttp://bit.ly/advancedsoasuite and additional posts, presentations, white papers, ... 
#soasuite Simone Geib ?#soasuite advanced OTN page has become too cluttered. Broke it into separate pages to start with. http://bit.ly/advancedsoasuite OracleBlogs ?June Webcast: SOA Gateway Implementation and Troubleshooting (2 sessions) http://ow.ly/1kbRFA ServiceTechSymposium ?New session just posted to calendar: "NoSQL for Data Services, Data Virtualization & Big Data" by Guido Schmutz, Trivadis AG ://ow.ly/bjjOeDebra Lilley ?looks good - real proof people are using the apps ! RT @fteter: Very cool Fusion Applications Help site: http://bit.ly/L3nvOR #FusionApps demed ?rapid proliferation of cloud computing will drive convergence of SOA and cloud paradigms" http://ovum.com/2012/05/18/soa-paves-the-way-for-cloud/ SOA CommunityMiddleware Oracle Excellence Awards 2012-HAPPY NEW YEAR! https://soacommunity.wordpress.com/2012/05/31/middleware-oracle-excellence-awards-2012happy-new-year/ #soacommunity #opn #opnaward #specialization #oracle SOA CommunityHappy New Year #soacommunity thanks for the business! Time for a drink http://pic.twitter.com/zkK08KWB OTNArchBeat ?Who should ‘own’ the Enterprise Architecture? | Michael Glas http://bit.ly/K0ge0Q SOA Communitytop Tweets SOA Partner Community &ndash; May 2012 http://wp.me/p10C8u-pP ServiceTechSymposiumNew session just posted to Symposium calendar: "Elastic SOA in the Cloud" by Steve Millidge, C2B2 Consulting http://www.servicetechsymposium.com/agenda2012.php #elastic_soa_in_the_cloud orclateamsoa ?A-Team Blog #ateam: How to Set JVM Parameters in Oracle SOA 11Ghttp://ow.ly/1k2cnl ServiceTechSymposium ?New session just posted to Symposium calendar: "SOA Governance at EDP: A Global Energy Company" by Manuel Rosa, Linkhttp://www.servicetechsymposium.com/agenda2012.php#soa_governance_at_edp SOA Community ?VirtualBox image SOA Suite & BPM Suite 11.1.1.6.0&ndash;Your feedback?http://wp.me/p10C8u-qh Oracle MiddlewareSave the date: Jun 21 10AM, SOA & BPM Customer Insight Series. Hear how Choice Hotels went from legacy to#oraclesoa http://bit.ly/LU1y5N OTNArchBeat ?Goodbye, Silos. Hello SOA. | @stephanieoverbyhttp://pub.vitrue.com/NJJO SOA CommunityBPM Standard Edition - to start your BPM project http://wp.me/p10C8u-qj Please feel free to send us your news! And add your blog to our SOA blog wiki. Back to top OVERVIEW OF SOA DIAGNOSTICS IN 11.1.1.6 What tools are available for diagnosing SOA Suite issues? There are a variety of tools available to help you and Support diagnose SOA Suite issues in 11g but it can be confusing as to which tool is appropriate for a particular situation and what their relationships are. This blog post will introduce the various tools and attempt to clarify what each is for and how they are related. Let's first list the tools we'll be addressing: RDA: Remote Diagnostic Agent DFW: Diagnostic Framework Selective Tracing DMS: Dynamic Monitoring Service ODL: Oracle Diagnostic Logging ADR: Automatic Diagnostics Repository ADRCI: Automatic Diagnostics Repository Command Interpreter WLDF: WebLogic Diagnostic Framework This overview is not mean to be a comprehensive guide on using all of these tools, however, extensive reference materials are included that will provide many more details on their execution. Another point to note is that all of these tools are applicable for Fusion Middleware as a whole but specific products may or may not have implemented features to leverage them. A couple of the tools have a WebLogic Scripting Tool or 'WLST' interface. 
WLST is a command interface for executing pre-built functions and custom scripts against a domain. A detailed WLST tutorial is beyond the scope of this post but you can find general information here. There are more specific resources in the below sections.In this post when we refer to 'Enterprise Manager' or 'EM' we are referring to Enterprise Manager Fusion Middleware Control. read the full blog post here. Read more on Oracle and post your comment here. Back to top BUSINESS DRIVEN DEVELOPMENT (BDD) DEMO NOW AVAILABLE! For access to the Oracle demo systems please visit OPN and talk to your Partner Expert DSS is pleased to announce the availability of the demo “Business Driven Development“. This innovative demonstration uses a case-study approach to show business users how they can easily streamline their Business Processes - delivering greater efficiency, agility, visibility and collaboration with Oracle BPM and WebCenter. The BDD demonstration uses a case study-based approach to highlight a business problem at a fictional company, Avitek Corporation, and uses Oracle BPM and Oracle WebCenter to solve the business problem. This holistic approach has specifically been used to appeal to a non-technical business analyst user. This demo is NOT focused on product features, but aims to guide users through a complete BPM lifecycle. The scenario is based on improving a simple order process (scenario details are in the demo script). Avitek Corporation is sufferinng from a manual email-driven ordering process. Sales reps don’t know where the customer orders are stuck (no visibility) and finance users are unable to manually approve every order (no automation). There are several areas where this process can be improved with Business Process Management technology. This demo shows how improving following areas will ignificantly help resolve the business problems Avitek Corporation is facing. Areas for improvement include: Utilizing BPM for process management, rather than an unregulated, email-based process. Utilizing automated services, rather than requiring a human to key into a system. For example, Finance checking the customer’s credit rating is something that could be automated. Centralizing business rules that can be integrated into a business process, rather than requiring a human to process them. For example, Finance must determine when orders can be automatically approved. Provide insight and visibility into the process. For example, Sales Rep needs to know the status of their customer’s orders. The BDD Demo uses the following products. Oracle BPM Suite 11g PS4FP Oracle WebCenter 11g PS4FP (for Process Spaces) Oracle Business Activity Monitoring 11g Oracle Database 11g Back to top SOA LIFECYCLE MANAGEMENT For access to the Oracle demo systems please visit OPN and talk to your Partner Expert We are pleased to announce the availability of the SOA Management demo that showcases some of the key provisioning and lifecycle management capabilities of SOA Management Pack Enterprise Edition (EE). This demo specifically focuses on some of the lifecycle management solutions for Oracle SOA Suite and Oracle Service Bus (OSB). Demo Highlights The demo showcases the following capabilities. Provisioning of SOA Composites Provisioning of OSB Projects Provision SOA and OSB artifacts in a future maintenance window Back to top ORACLE FUSION APPLICATIONS DESIGN PATTERNS The Oracle Fusion Applications user experience design patterns are published! 
These new, reusable usability solutions and best-practices, which will join the Oracle dashboard patterns and guidelines that are already available online, are used by Oracle to artfully bring to life a new standard in the user experience, or UX, of enterprise applications. Now, the Oracle applications development community can benefit from the science behind the Oracle Fusion Applications user experience, too. These Oracle Fusion Applications UX Design Patterns, or blueprints, enable Oracle applications developers and system implementers everywhere to leverage professional usability insight when: tailoring an Oracle Fusion application, creating coexistence solutions that existing users will be delighted with, thus enabling graceful user transitions to Oracle Fusion Applications down the road, or designing exciting, new, highly usable applications in the cloud or on-premise. Based on the Oracle Application Development Framework (ADF) components, the Oracle Fusion Applications patterns and guidelines are proven with real users and in the Applications UX usability labs, so you can get right to work coding productivity-enhancing designs that provide an advantage for your entire business. What’s the best way to get started? We’ve made that easy, too. The Design Filter Tool (DeFT) selects the best pattern for your user type and task. Simply adapt your selection for your own task flow and content, and you’re on your way to a really great applications user experience. More Oracle applications design patterns and training are coming your way in the future. To provide feedback on the sets that are currently available, let me know in the comments! Read more on Fusionapps and post your comment here. Back to top UPDATED ORACLE MATERIAL Integrated SOA Gateway Documentation - Implementation Guide | Developer’s Guide Webcast Series: Oracle’s SOA and Oracle Business Process Management Solutions (Choice Hotels, Eaton, Farmers Insurance) BAM design pointers By Kavitha Srinivasan Seeking Oracle Fusion Middleware Go Live StoriesOracle Fusion Middleware product management is looking for recent go live stories to share with the Oracle sales team, sales consulting, product management and other internal groups. Customer contact details may remain anonymous. Your successful implementation will be featured in a quarterly report. The chance to present on an internal webcast is also available. Contact Maria Forney ([email protected]) if you have a noteworthy implementation success story. This is a good opportunity for partners interested in showcasing Oracle Fusion Middleware implementations, and gaining more exposure within Oracle. Performance tuning resources. All in one: docs, blogs, WPs, ppts: http://bit.ly/soa_resources Back to top HAVE YOU MISSED OUR LAST SOA PARTNER COMMUNITY WEBCASTS? UPK Webcast Business Driven Application Management & BPM11g & Application Grid & GoldenGate & Fusion Middleware Pricing & OC4J to WebLogic & Next Generation SOA & Fusion Middleware in Utility & Fusion Middleware in Communications & Fusion Middleware in Public Services & Fusion Middleware in Financial Services Please check your local OPN trainings calendar for additional training dates and locations. 
Back to top SOA PARTNER COMMUNITY CALENDAR On-Demand Trainings Event Name Language Type SOA Virtual Developers Day English Tech In-Class Trainings Date Event name Location / Country Contact person Type 09-13.07.2012 BPM Suite 11g advanced training by David Read Lisbon, Portugal Jürgen Kress Tech 09-13.07.2012 ADF 11g advanced training by Grant Ronald and Frank Nimphius Lisbon, Portugal Jürgen Kress Tech 09-13.07.2012 WebCenter Portal advanced training by Stefan Krantz and Angelo Santagata Lisbon, Portugal Jürgen Kress Tech 10.07.2012 Fusion Middleware Virtual Developer Day Online OTN Tech 10- 12.07.2012 WebLogic 12c training by Cosmin Tudor Lisbon, Portugal Jürgen Kress Tech 16-18.07.2012 SOA Suite 11g advanced training by Niall Commiskey Munich, Germany Jürgen Kress Tech 16-18.07.2012 ADF for BPM Suite 11g advanced training by David Read Munich, Germany Jürgen Kress Tech 16-18.07.2012 WebCenter Sites 11g advanced training by Product Management Munich, Germany Jürgen Kress Tech 17-20.07.2012 Oracle BPM 11g Implementation Bootcamp Live Virtual Class Oracle University Tech 23-26.07.2012 Oracle BPM 11g Implementation Bootcamp Utrecht, Netherlands Oracle University Tech 29-31.08.2012 Oracle BPM 11g Implementation Bootcamp Live Virtual Class Oracle University Tech 02-05.10.2012 Oracle BPM 11g Implementation Bootcamp Utrecht, Netherlands Oracle University Tech 15-18.10.2012 Oracle BPM 11g Implementation Bootcamp Utrecht, Netherlands Oracle University Tech 28-30.11.2012 Oracle AIA 11g Implementation Bootcamp Live Virtual Class Oracle University Tech 11-14.12.2012 Oracle BPM 11g Implementation Bootcamp Live Virtual Class Oracle University Tech 20-22.2.2013 Oracle AIA 11g Implementation Bootcamp Utrecht, Netherlands Oracle University Tech 14-17.1.2013 Oracle BPM 11g Implementation Bootcamp Utrecht, Netherlands Oracle University Tech 15-18.3.2013 Oracle BPM 11g Implementation Bootcamp Utrecht, Netherlands Oracle University Tech Please check your local OPN Training Calendar for additional training and locations here. Back to top SOASCHOOL.COM - SOA CERTIFIED PROFESSIONAL(SOACP) PROGRAM The SOASchool.com - SOA Certified Professional (SOACP) program is dedicated to excellence in the field of SOA and service-oriented computing. Through a series of seasoned course modules and exams, IT professionals have the opportunity to obtain a number of different certifications to recognize their accomplishment of gaining "project ready" SOA proficiency. This comprehensive and strictly vendor-neutral program was developed in cooperation with best-selling SOA author Thomas Erl and several major SOA organizations and academic institutions. Through the involvement of the SOA Education Committee, course contents and certification requirements are constantly reviewed and revised to stay current with developments in the service-oriented computing industry. The program is currently comprised of 12 course modules and 5 certifications and is expanding to 18 course modules and 8 certifications throughout 2009. For more information, visit www.soaschool.com and www.soacp.com. Blog Twitter LinkedIn Mix Forum Wiki Back to top YOUR CONTENT ON THE NEWSLETTER AND ON THE SOA COMMUNITY PORTAL Publishing Your StoriesWe would like to invite our partners to publish information in the newsletter or on our SOA Community portal. Especially we are looking for your real life experience with our SOA technology. Please send your documents to Jürgen Kress. We look forward to getting your suggestions! 
Back to top SOA DISCUSSION FORUM BECOMES INTERACTIVE AT THE SOA COMMUNITY! Do you want to chat to experts, including partners and Oracle SOA Product Development? Do you want to get the latest information about our SOA solutions and events? Attend our private online SOA Discussion Forum at OTN. Please send your OTN forums user name to Brigitte Felisaz. You must be a registered user to access the SOA Discussion Forum. Back to top INVITE YOUR COLLEAGUES TO JOIN THE SOA COMMUNITY Please feel free to invite your colleagues to join the SOA Community and to participate in the SOA Assessment tests. For registration, please log in to the Oracle PartnerNetwork and go to: www.oracle.com/goto/emea/soa For any questions on the above or concerning SOA and Oracle in general, please contact the Oracle EMEA Alliances & Channels SOA Team. Best regards, Oracle EMEA SOA Team, Jürgen Kress. Jürgen Kress, SOA Partner Adoption EMEA, Tel. +49 89 1430 1479, E-Mail: [email protected]

    Read the article

  • Service Discovery in WCF 4.0 – Part 1

    - by Shaun
    When designing a service oriented architecture (SOA) system, there will be a lot of services with many service contracts, endpoints and behaviors. Besides the client calling the service, in a large distributed system a service may invoke other services. In this case, one service might need to know the endpoints it invokes. This might not be a problem in a small system. But when you have more than 10 services this might be a problem. For example in my current product, there are around 10 services, such as the user authentication service, UI integration service, location service, license service, device monitor service, event monitor service, schedule job service, accounting service, player management service, etc..   Benefit of Discovery Service Since almost all my services need to invoke at least one other service. This would be a difficult task to make sure all services endpoints are configured correctly in every service. And furthermore, it would be a nightmare when a service changed its endpoint at runtime. Hence, we need a discovery service to remove the dependency (configuration dependency). A discovery service plays as a service dictionary which stores the relationship between the contracts and the endpoints for every service. By using the discovery service, when service X wants to invoke service Y, it just need to ask the discovery service where is service Y, then the discovery service will return all proper endpoints of service Y, then service X can use the endpoint to send the request to service Y. And when some services changed their endpoint address, all need to do is to update its records in the discovery service then all others will know its new endpoint. In WCF 4.0 Discovery it supports both managed proxy discovery mode and ad-hoc discovery mode. In ad-hoc mode there is no standalone discovery service. When a client wanted to invoke a service, it will broadcast an message (normally in UDP protocol) to the entire network with the service match criteria. All services which enabled the discovery behavior will receive this message and only those matched services will send their endpoint back to the client. The managed proxy discovery service works as I described above. In this post I will only cover the managed proxy mode, where there’s a discovery service. For more information about the ad-hoc mode please refer to the MSDN.   Service Announcement and Probe The main functionality of discovery service should be return the proper endpoint addresses back to the service who is looking for. In most cases the consume service (as a client) will send the contract which it wanted to request to the discovery service. And then the discovery service will find the endpoint and respond. Sometimes the contract and endpoint are not enough. It also contains versioning, extensions attributes. This post I will only cover the case includes contract and endpoint. When a client (or sometimes a service who need to invoke another service) need to connect to a target service, it will firstly request the discovery service through the “Probe” method with the criteria. Basically the criteria contains the contract type name of the target service. Then the discovery service will search its endpoint repository by the criteria. The repository might be a database, a distributed cache or a flat XML file. If it matches, the discovery service will grab the endpoint information (it’s called discovery endpoint metadata in WCF) and send back. And this is called “Probe”. 
Finally the client received the discovery endpoint metadata and will use the endpoint to connect to the target service. Besides the probe, discovery service should take the responsible to know there is a new service available when it goes online, as well as stopped when it goes offline. This feature is named “Announcement”. When a service started and stopped, it will announce to the discovery service. So the basic functionality of a discovery service should includes: 1, An endpoint which receive the service online message, and add the service endpoint information in the discovery repository. 2, An endpoint which receive the service offline message, and remove the service endpoint information from the discovery repository. 3, An endpoint which receive the client probe message, and return the matches service endpoints, and return the discovery endpoint metadata. WCF 4.0 discovery service just covers all these features in it's infrastructure classes.   Discovery Service in WCF 4.0 WCF 4.0 introduced a new assembly named System.ServiceModel.Discovery which has all necessary classes and interfaces to build a WS-Discovery compliant discovery service. It supports ad-hoc and managed proxy modes. For the case mentioned in this post, what we need to build is a standalone discovery service, which is the managed proxy discovery service mode. To build a managed discovery service in WCF 4.0 just create a new class inherits from the abstract class System.ServiceModel.Discovery.DiscoveryProxy. This class implemented and abstracted the procedures of service announcement and probe. And it exposes 8 abstract methods where we can implement our own endpoint register, unregister and find logic. These 8 methods are asynchronized, which means all invokes to the discovery service are asynchronously, for better service capability and performance. 1, OnBeginOnlineAnnouncement, OnEndOnlineAnnouncement: Invoked when a service sent the online announcement message. We need to add the endpoint information to the repository in this method. 2, OnBeginOfflineAnnouncement, OnEndOfflineAnnouncement: Invoked when a service sent the offline announcement message. We need to remove the endpoint information from the repository in this method. 3, OnBeginFind, OnEndFind: Invoked when a client sent the probe message that want to find the service endpoint information. We need to look for the proper endpoints by matching the client’s criteria through the repository in this method. 4, OnBeginResolve, OnEndResolve: Invoked then a client sent the resolve message. Different from the find method, when using resolve method the discovery service will return the exactly one service endpoint metadata to the client. In our example we will NOT implement this method.   Let’s create our own discovery service, inherit the base System.ServiceModel.Discovery.DiscoveryProxy. We also need to specify the service behavior in this class. Since the build-in discovery service host class only support the singleton mode, we must set its instance context mode to single. 
1: using System; 2: using System.Collections.Generic; 3: using System.Linq; 4: using System.Text; 5: using System.ServiceModel.Discovery; 6: using System.ServiceModel; 7:  8: namespace Phare.Service 9: { 10: [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple)] 11: public class ManagedProxyDiscoveryService : DiscoveryProxy 12: { 13: protected override IAsyncResult OnBeginFind(FindRequestContext findRequestContext, AsyncCallback callback, object state) 14: { 15: throw new NotImplementedException(); 16: } 17:  18: protected override IAsyncResult OnBeginOfflineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state) 19: { 20: throw new NotImplementedException(); 21: } 22:  23: protected override IAsyncResult OnBeginOnlineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state) 24: { 25: throw new NotImplementedException(); 26: } 27:  28: protected override IAsyncResult OnBeginResolve(ResolveCriteria resolveCriteria, AsyncCallback callback, object state) 29: { 30: throw new NotImplementedException(); 31: } 32:  33: protected override void OnEndFind(IAsyncResult result) 34: { 35: throw new NotImplementedException(); 36: } 37:  38: protected override void OnEndOfflineAnnouncement(IAsyncResult result) 39: { 40: throw new NotImplementedException(); 41: } 42:  43: protected override void OnEndOnlineAnnouncement(IAsyncResult result) 44: { 45: throw new NotImplementedException(); 46: } 47:  48: protected override EndpointDiscoveryMetadata OnEndResolve(IAsyncResult result) 49: { 50: throw new NotImplementedException(); 51: } 52: } 53: } Then let’s implement the online, offline and find methods one by one. WCF discovery service gives us full flexibility to implement the endpoint add, remove and find logic. For the demo purpose we will use an internal dictionary to store the services’ endpoint metadata. In the next post we will see how to serialize and store these information in database. Define a concurrent dictionary inside the service class since our it will be used in the multiple threads scenario. 1: [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple)] 2: public class ManagedProxyDiscoveryService : DiscoveryProxy 3: { 4: private ConcurrentDictionary<EndpointAddress, EndpointDiscoveryMetadata> _services; 5:  6: public ManagedProxyDiscoveryService() 7: { 8: _services = new ConcurrentDictionary<EndpointAddress, EndpointDiscoveryMetadata>(); 9: } 10: } Then we can simply implement the logic of service online and offline. 
1: protected override IAsyncResult OnBeginOnlineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state) 2: { 3: _services.AddOrUpdate(endpointDiscoveryMetadata.Address, endpointDiscoveryMetadata, (key, value) => endpointDiscoveryMetadata); 4: return new OnOnlineAnnouncementAsyncResult(callback, state); 5: } 6:  7: protected override void OnEndOnlineAnnouncement(IAsyncResult result) 8: { 9: OnOnlineAnnouncementAsyncResult.End(result); 10: } 11:  12: protected override IAsyncResult OnBeginOfflineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state) 13: { 14: EndpointDiscoveryMetadata endpoint = null; 15: _services.TryRemove(endpointDiscoveryMetadata.Address, out endpoint); 16: return new OnOfflineAnnouncementAsyncResult(callback, state); 17: } 18:  19: protected override void OnEndOfflineAnnouncement(IAsyncResult result) 20: { 21: OnOfflineAnnouncementAsyncResult.End(result); 22: } Regards the find method, the parameter FindRequestContext.Criteria has a method named IsMatch, which can be use for us to evaluate which service metadata is satisfied with the criteria. So the implementation of find method would be like this. 1: protected override IAsyncResult OnBeginFind(FindRequestContext findRequestContext, AsyncCallback callback, object state) 2: { 3: _services.Where(s => findRequestContext.Criteria.IsMatch(s.Value)) 4: .Select(s => s.Value) 5: .All(meta => 6: { 7: findRequestContext.AddMatchingEndpoint(meta); 8: return true; 9: }); 10: return new OnFindAsyncResult(callback, state); 11: } 12:  13: protected override void OnEndFind(IAsyncResult result) 14: { 15: OnFindAsyncResult.End(result); 16: } As you can see, we checked all endpoints metadata in repository by invoking the IsMatch method. Then add all proper endpoints metadata into the parameter. Finally since all these methods are asynchronized we need some AsyncResult classes as well. Below are the base class and the inherited classes used in previous methods. 
1: using System; 2: using System.Collections.Generic; 3: using System.Linq; 4: using System.Text; 5: using System.Threading; 6:  7: namespace Phare.Service 8: { 9: abstract internal class AsyncResult : IAsyncResult 10: { 11: AsyncCallback callback; 12: bool completedSynchronously; 13: bool endCalled; 14: Exception exception; 15: bool isCompleted; 16: ManualResetEvent manualResetEvent; 17: object state; 18: object thisLock; 19:  20: protected AsyncResult(AsyncCallback callback, object state) 21: { 22: this.callback = callback; 23: this.state = state; 24: this.thisLock = new object(); 25: } 26:  27: public object AsyncState 28: { 29: get 30: { 31: return state; 32: } 33: } 34:  35: public WaitHandle AsyncWaitHandle 36: { 37: get 38: { 39: if (manualResetEvent != null) 40: { 41: return manualResetEvent; 42: } 43: lock (ThisLock) 44: { 45: if (manualResetEvent == null) 46: { 47: manualResetEvent = new ManualResetEvent(isCompleted); 48: } 49: } 50: return manualResetEvent; 51: } 52: } 53:  54: public bool CompletedSynchronously 55: { 56: get 57: { 58: return completedSynchronously; 59: } 60: } 61:  62: public bool IsCompleted 63: { 64: get 65: { 66: return isCompleted; 67: } 68: } 69:  70: object ThisLock 71: { 72: get 73: { 74: return this.thisLock; 75: } 76: } 77:  78: protected static TAsyncResult End<TAsyncResult>(IAsyncResult result) 79: where TAsyncResult : AsyncResult 80: { 81: if (result == null) 82: { 83: throw new ArgumentNullException("result"); 84: } 85:  86: TAsyncResult asyncResult = result as TAsyncResult; 87:  88: if (asyncResult == null) 89: { 90: throw new ArgumentException("Invalid async result.", "result"); 91: } 92:  93: if (asyncResult.endCalled) 94: { 95: throw new InvalidOperationException("Async object already ended."); 96: } 97:  98: asyncResult.endCalled = true; 99:  100: if (!asyncResult.isCompleted) 101: { 102: asyncResult.AsyncWaitHandle.WaitOne(); 103: } 104:  105: if (asyncResult.manualResetEvent != null) 106: { 107: asyncResult.manualResetEvent.Close(); 108: } 109:  110: if (asyncResult.exception != null) 111: { 112: throw asyncResult.exception; 113: } 114:  115: return asyncResult; 116: } 117:  118: protected void Complete(bool completedSynchronously) 119: { 120: if (isCompleted) 121: { 122: throw new InvalidOperationException("This async result is already completed."); 123: } 124:  125: this.completedSynchronously = completedSynchronously; 126:  127: if (completedSynchronously) 128: { 129: this.isCompleted = true; 130: } 131: else 132: { 133: lock (ThisLock) 134: { 135: this.isCompleted = true; 136: if (this.manualResetEvent != null) 137: { 138: this.manualResetEvent.Set(); 139: } 140: } 141: } 142:  143: if (callback != null) 144: { 145: callback(this); 146: } 147: } 148:  149: protected void Complete(bool completedSynchronously, Exception exception) 150: { 151: this.exception = exception; 152: Complete(completedSynchronously); 153: } 154: } 155: } 1: using System; 2: using System.Collections.Generic; 3: using System.Linq; 4: using System.Text; 5: using System.ServiceModel.Discovery; 6: using Phare.Service; 7:  8: namespace Phare.Service 9: { 10: internal sealed class OnOnlineAnnouncementAsyncResult : AsyncResult 11: { 12: public OnOnlineAnnouncementAsyncResult(AsyncCallback callback, object state) 13: : base(callback, state) 14: { 15: this.Complete(true); 16: } 17:  18: public static void End(IAsyncResult result) 19: { 20: AsyncResult.End<OnOnlineAnnouncementAsyncResult>(result); 21: } 22:  23: } 24:  25: sealed class OnOfflineAnnouncementAsyncResult : 
AsyncResult 26: { 27: public OnOfflineAnnouncementAsyncResult(AsyncCallback callback, object state) 28: : base(callback, state) 29: { 30: this.Complete(true); 31: } 32:  33: public static void End(IAsyncResult result) 34: { 35: AsyncResult.End<OnOfflineAnnouncementAsyncResult>(result); 36: } 37: } 38:  39: sealed class OnFindAsyncResult : AsyncResult 40: { 41: public OnFindAsyncResult(AsyncCallback callback, object state) 42: : base(callback, state) 43: { 44: this.Complete(true); 45: } 46:  47: public static void End(IAsyncResult result) 48: { 49: AsyncResult.End<OnFindAsyncResult>(result); 50: } 51: } 52:  53: sealed class OnResolveAsyncResult : AsyncResult 54: { 55: EndpointDiscoveryMetadata matchingEndpoint; 56:  57: public OnResolveAsyncResult(EndpointDiscoveryMetadata matchingEndpoint, AsyncCallback callback, object state) 58: : base(callback, state) 59: { 60: this.matchingEndpoint = matchingEndpoint; 61: this.Complete(true); 62: } 63:  64: public static EndpointDiscoveryMetadata End(IAsyncResult result) 65: { 66: OnResolveAsyncResult thisPtr = AsyncResult.End<OnResolveAsyncResult>(result); 67: return thisPtr.matchingEndpoint; 68: } 69: } 70: } Now we have finished the discovery service. The next step is to host it. The discovery service is a standard WCF service. So we can use ServiceHost on a console application, windows service, or in IIS as usual. The following code is how to host the discovery service we had just created in a console application. 1: static void Main(string[] args) 2: { 3: using (var host = new ServiceHost(new ManagedProxyDiscoveryService())) 4: { 5: host.Opened += (sender, e) => 6: { 7: host.Description.Endpoints.All((ep) => 8: { 9: Console.WriteLine(ep.ListenUri); 10: return true; 11: }); 12: }; 13:  14: try 15: { 16: // retrieve the announcement, probe endpoint and binding from configuration 17: var announcementEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["announcementEndpointAddress"]); 18: var probeEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["probeEndpointAddress"]); 19: var binding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding; 20: var announcementEndpoint = new AnnouncementEndpoint(binding, announcementEndpointAddress); 21: var probeEndpoint = new DiscoveryEndpoint(binding, probeEndpointAddress); 22: probeEndpoint.IsSystemEndpoint = false; 23: // append the service endpoint for announcement and probe 24: host.AddServiceEndpoint(announcementEndpoint); 25: host.AddServiceEndpoint(probeEndpoint); 26:  27: host.Open(); 28:  29: Console.WriteLine("Press any key to exit."); 30: Console.ReadKey(); 31: } 32: catch (Exception ex) 33: { 34: Console.WriteLine(ex.ToString()); 35: } 36: } 37:  38: Console.WriteLine("Done."); 39: Console.ReadKey(); 40: } What we need to notice is that, the discovery service needs two endpoints for announcement and probe. In this example I just retrieve them from the configuration file. I also specified the binding of these two endpoints in configuration file as well. 
1: <?xml version="1.0"?> 2: <configuration> 3: <startup> 4: <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/> 5: </startup> 6: <appSettings> 7: <add key="announcementEndpointAddress" value="net.tcp://localhost:10010/announcement"/> 8: <add key="probeEndpointAddress" value="net.tcp://localhost:10011/probe"/> 9: <add key="bindingType" value="System.ServiceModel.NetTcpBinding, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/> 10: </appSettings> 11: </configuration> And this is the console screen when I ran my discovery service. As you can see there are two endpoints listening for announcement message and probe message.   Discoverable Service and Client Next, let’s create a WCF service that is discoverable, which means it can be found by the discovery service. To do so, we need to let the service send the online announcement message to the discovery service, as well as offline message before it shutdown. Just create a simple service which can make the incoming string to upper. The service contract and implementation would be like this. 1: [ServiceContract] 2: public interface IStringService 3: { 4: [OperationContract] 5: string ToUpper(string content); 6: } 1: public class StringService : IStringService 2: { 3: public string ToUpper(string content) 4: { 5: return content.ToUpper(); 6: } 7: } Then host this service in the console application. In order to make the discovery service easy to be tested the service address will be changed each time it’s started. 1: static void Main(string[] args) 2: { 3: var baseAddress = new Uri(string.Format("net.tcp://localhost:11001/stringservice/{0}/", Guid.NewGuid().ToString())); 4:  5: using (var host = new ServiceHost(typeof(StringService), baseAddress)) 6: { 7: host.Opened += (sender, e) => 8: { 9: Console.WriteLine("Service opened at {0}", host.Description.Endpoints.First().ListenUri); 10: }; 11:  12: host.AddServiceEndpoint(typeof(IStringService), new NetTcpBinding(), string.Empty); 13:  14: host.Open(); 15:  16: Console.WriteLine("Press any key to exit."); 17: Console.ReadKey(); 18: } 19: } Currently this service is NOT discoverable. We need to add a special service behavior so that it could send the online and offline message to the discovery service announcement endpoint when the host is opened and closed. WCF 4.0 introduced a service behavior named ServiceDiscoveryBehavior. When we specified the announcement endpoint address and appended it to the service behaviors this service will be discoverable. 1: var announcementAddress = new EndpointAddress(ConfigurationManager.AppSettings["announcementEndpointAddress"]); 2: var announcementBinding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding; 3: var announcementEndpoint = new AnnouncementEndpoint(announcementBinding, announcementAddress); 4: var discoveryBehavior = new ServiceDiscoveryBehavior(); 5: discoveryBehavior.AnnouncementEndpoints.Add(announcementEndpoint); 6: host.Description.Behaviors.Add(discoveryBehavior); The ServiceDiscoveryBehavior utilizes the service extension and channel dispatcher to implement the online and offline announcement logic. In short, it injected the channel open and close procedure and send the online and offline message to the announcement endpoint.   On client side, when we have the discovery service, a client can invoke a service without knowing its endpoint. 
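A quick side note before moving on to the client: the ServiceDiscoveryBehavior added in code above can also be enabled purely through configuration. The snippet below is only a sketch — the element names follow the standard WCF 4.0 discovery configuration schema (the serviceDiscovery behavior element and the announcementEndpoint standard endpoint kind) as I recall it, and the address simply reuses the announcement address from this example's appSettings — so verify the exact schema against the WCF documentation before relying on it.

<system.serviceModel>
  <behaviors>
    <serviceBehaviors>
      <behavior>
        <!-- Makes every service hosted with this (default) behavior discoverable
             and sends online/offline announcements to the managed discovery service. -->
        <serviceDiscovery>
          <announcementEndpoints>
            <endpoint kind="announcementEndpoint"
                      binding="netTcpBinding"
                      address="net.tcp://localhost:10010/announcement" />
          </announcementEndpoints>
        </serviceDiscovery>
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>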
WCF discovery assembly provides a class named DiscoveryClient, which can be used to find the proper service endpoint by passing the criteria. In the code below I initialized the DiscoveryClient, specified the discovery service probe endpoint address. Then I created the find criteria by specifying the service contract I wanted to use and invoke the Find method. This will send the probe message to the discovery service and it will find the endpoints back to me. The discovery service will return all endpoints that matches the find criteria, which means in the result of the find method there might be more than one endpoints. In this example I just returned the first matched one back. In the next post I will show how to extend our discovery service to make it work like a service load balancer. 1: static EndpointAddress FindServiceEndpoint() 2: { 3: var probeEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["probeEndpointAddress"]); 4: var probeBinding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding; 5: var discoveryEndpoint = new DiscoveryEndpoint(probeBinding, probeEndpointAddress); 6:  7: EndpointAddress address = null; 8: FindResponse result = null; 9: using (var discoveryClient = new DiscoveryClient(discoveryEndpoint)) 10: { 11: result = discoveryClient.Find(new FindCriteria(typeof(IStringService))); 12: } 13:  14: if (result != null && result.Endpoints.Any()) 15: { 16: var endpointMetadata = result.Endpoints.First(); 17: address = endpointMetadata.Address; 18: } 19: return address; 20: } Once we probed the discovery service we will receive the endpoint. So in the client code we can created the channel factory from the endpoint and binding, and invoke to the service. When creating the client side channel factory we need to make sure that the client side binding should be the same as the service side. WCF discovery service can be used to find the endpoint for a service contract, but the binding is NOT included. This is because the binding was not in the WS-Discovery specification. In the next post I will demonstrate how to add the binding information into the discovery service. At that moment the client don’t need to create the binding by itself. Instead it will use the binding received from the discovery service. 1: static void Main(string[] args) 2: { 3: Console.WriteLine("Say something..."); 4: var content = Console.ReadLine(); 5: while (!string.IsNullOrWhiteSpace(content)) 6: { 7: Console.WriteLine("Finding the service endpoint..."); 8: var address = FindServiceEndpoint(); 9: if (address == null) 10: { 11: Console.WriteLine("There is no endpoint matches the criteria."); 12: } 13: else 14: { 15: Console.WriteLine("Found the endpoint {0}", address.Uri); 16:  17: var factory = new ChannelFactory<IStringService>(new NetTcpBinding(), address); 18: factory.Opened += (sender, e) => 19: { 20: Console.WriteLine("Connecting to {0}.", factory.Endpoint.ListenUri); 21: }; 22: var proxy = factory.CreateChannel(); 23: using (proxy as IDisposable) 24: { 25: Console.WriteLine("ToUpper: {0} => {1}", content, proxy.ToUpper(content)); 26: } 27: } 28:  29: Console.WriteLine("Say something..."); 30: content = Console.ReadLine(); 31: } 32: } Similarly, the discovery service probe endpoint and binding were defined in the configuration file. 
1: <?xml version="1.0"?> 2: <configuration> 3: <startup> 4: <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/> 5: </startup> 6: <appSettings> 7: <add key="announcementEndpointAddress" value="net.tcp://localhost:10010/announcement"/> 8: <add key="probeEndpointAddress" value="net.tcp://localhost:10011/probe"/> 9: <add key="bindingType" value="System.ServiceModel.NetTcpBinding, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/> 10: </appSettings> 11: </configuration> OK, now let’s have a test. Firstly start the discovery service, and then start our discoverable service. When it started it will announced to the discovery service and registered its endpoint into the repository, which is the local dictionary. And then start the client and type something. As you can see the client asked the discovery service for the endpoint and then establish the connection to the discoverable service. And more interesting, do NOT close the client console but terminate the discoverable service but press the enter key. This will make the service send the offline message to the discovery service. Then start the discoverable service again. Since we made it use a different address each time it started, currently it should be hosted on another address. If we enter something in the client we could see that it asked the discovery service and retrieve the new endpoint, and connect the the service.   Summary In this post I discussed the benefit of using the discovery service and the procedures of service announcement and probe. I also demonstrated how to leverage the WCF Discovery feature in WCF 4.0 to build a simple managed discovery service. For test purpose, in this example I used the in memory dictionary as the discovery endpoint metadata repository. And when finding I also just return the first matched endpoint back. I also hard coded the bindings between the discoverable service and the client. In next post I will show you how to solve the problem mentioned above, as well as some additional feature for production usage. You can download the code here.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • ORA-03113 in code. In addition, TNS-12535 and ORA-03137 in alert file

    - by user1348107
    I've got an exception that contain ORA-03113: (SiPPSS.GetPrintWorkDirectDetail) - ERR:ORA-03113: end-of-file on communication channel Process ID: 7448 Session ID: 30 Serial number: 9802 ?????:12110937 ????:T855 Oracle.DataAccess.Client.OracleException ORA-03113: end-of-file on communication channel Process ID: 7448 Session ID: 30 Serial number: 9802 ?? Oracle.DataAccess.Client.OracleException.HandleErrorHelper(Int32 errCode, OracleConnection conn, IntPtr opsErrCtx, OpoSqlValCtx* pOpoSqlValCtx, Object src, String procedure, Boolean bCheck) ?? Oracle.DataAccess.Client.OracleException.HandleError(Int32 errCode, OracleConnection conn, String procedure, IntPtr opsErrCtx, OpoSqlValCtx* pOpoSqlValCtx, Object src, Boolean bCheck) ?? Oracle.DataAccess.Client.OracleCommand.ExecuteReader(Boolean requery, Boolean fillRequest, CommandBehavior behavior) ?? Oracle.DataAccess.Client.OracleDataAdapter.Fill(DataTable[] dataTables, Int32 startRecord, Int32 maxRecords, IDbCommand command, CommandBehavior behavior) ?? System.Data.Common.DbDataAdapter.Fill(DataTable dataTable) ?? SiPPSS.VSireiMeisaiDsTableAdapters.V_SIREI_MEISAITableAdapter.FillByRunningNoAndProcNo(V_SIREI_MEISAIDataTable dataTable, String RUNNING_NO, String PROC_NO) ?? C:\SVM\trunk\SiPPSSServer\Server\Dao\View\VSireiMeisaiDs.Designer.vb:? 386 ?? SiPPSS.GetPrintWorkDirectDetail.Execute(BLogicParam param) ?? C:\SVM\trunk\SiPPSSServer\Server\BLogic\Screen\Printing\Rprt0701\GetPrintWorkDirectDetail.vb:? 105 In this case, the oracle alert log as beblow: Fatal NI connect error 12170. VERSION INFORMATION: TNS for 64-bit Windows: Version 11.2.0.1.0 - Production Oracle Bequeath NT Protocol Adapter for 64-bit Windows: Version 11.2.0.1.0 - Production Windows NT TCP/IP NT Protocol Adapter for 64-bit Windows: Version 11.2.0.1.0 - Production Time: 01-11?-2012 13:50:45 Tracing not turned on. Tns error struct: ns main err code: 12535 TNS-12535: TNS: ??????·???????? ns secondary err code: 12560 nt main err code: 505 TNS-00505: ??????????? 
nt secondary err code: 60 nt OS err code: 0 Client address: (ADDRESS=(PROTOCOL=tcp)(HOST=10.41.102.53)(PORT=1794)) Thu Nov 01 13:54:17 2012 Thread 1 cannot allocate new log, sequence 1880 Private strand flush not complete Current log# 1 seq# 1879 mem# 0: D:\ORACLE\ORADATA\SIPOREX\REDO01.LOG Thread 1 advanced to log sequence 1880 (LGWR switch) Current log# 2 seq# 1880 mem# 0: D:\ORACLE\ORADATA\SIPOREX\REDO02.LOG Thu Nov 01 13:54:21 2012 Archived Log entry 1118 added for thread 1 sequence 1879 ID 0xe48db805 dest 1: Thu Nov 01 14:40:12 2012 Thread 1 cannot allocate new log, sequence 1881 Private strand flush not complete Current log# 2 seq# 1880 mem# 0: D:\ORACLE\ORADATA\SIPOREX\REDO02.LOG Thread 1 advanced to log sequence 1881 (LGWR switch) Current log# 3 seq# 1881 mem# 0: D:\ORACLE\ORADATA\SIPOREX\REDO03.LOG Thu Nov 01 14:40:16 2012 Archived Log entry 1119 added for thread 1 sequence 1880 ID 0xe48db805 dest 1: Thu Nov 01 15:27:42 2012 Thread 1 cannot allocate new log, sequence 1882 Private strand flush not complete Current log# 3 seq# 1881 mem# 0: D:\ORACLE\ORADATA\SIPOREX\REDO03.LOG Thread 1 advanced to log sequence 1882 (LGWR switch) Current log# 1 seq# 1882 mem# 0: D:\ORACLE\ORADATA\SIPOREX\REDO01.LOG Thu Nov 01 15:27:46 2012 Archived Log entry 1120 added for thread 1 sequence 1881 ID 0xe48db805 dest 1: Thu Nov 01 16:23:48 2012 Thread 1 cannot allocate new log, sequence 1883 Private strand flush not complete Current log# 1 seq# 1882 mem# 0: D:\ORACLE\ORADATA\SIPOREX\REDO01.LOG Thread 1 advanced to log sequence 1883 (LGWR switch) Current log# 2 seq# 1883 mem# 0: D:\ORACLE\ORADATA\SIPOREX\REDO02.LOG Thu Nov 01 16:23:52 2012 Archived Log entry 1121 added for thread 1 sequence 1882 ID 0xe48db805 dest 1: Thu Nov 01 17:05:50 2012 Thread 1 cannot allocate new log, sequence 1884 Private strand flush not complete Current log# 2 seq# 1883 mem# 0: D:\ORACLE\ORADATA\SIPOREX\REDO02.LOG Thread 1 advanced to log sequence 1884 (LGWR switch) Current log# 3 seq# 1884 mem# 0: D:\ORACLE\ORADATA\SIPOREX\REDO03.LOG Thu Nov 01 17:05:55 2012 Archived Log entry 1122 added for thread 1 sequence 1883 ID 0xe48db805 dest 1: Thu Nov 01 17:26:52 2012 Fatal NI connect error 12170. VERSION INFORMATION: TNS for 64-bit Windows: Version 11.2.0.1.0 - Production Oracle Bequeath NT Protocol Adapter for 64-bit Windows: Version 11.2.0.1.0 - Production Windows NT TCP/IP NT Protocol Adapter for 64-bit Windows: Version 11.2.0.1.0 - Production Time: 01-11?-2012 17:26:52 Tracing not turned on. Tns error struct: ns main err code: 12535 TNS-12535: TNS: ??????·???????? ns secondary err code: 12560 nt main err code: 505 TNS-00505: ??????????? nt secondary err code: 60 nt OS err code: 0 Client address: (ADDRESS=(PROTOCOL=tcp)(HOST=10.41.102.62)(PORT=1286)) Thu Nov 01 17:27:16 2012 Fatal NI connect error 12170. VERSION INFORMATION: TNS for 64-bit Windows: Version 11.2.0.1.0 - Production Oracle Bequeath NT Protocol Adapter for 64-bit Windows: Version 11.2.0.1.0 - Production Windows NT TCP/IP NT Protocol Adapter for 64-bit Windows: Version 11.2.0.1.0 - Production Time: 01-11?-2012 17:27:16 Tracing not turned on. Tns error struct: ns main err code: 12535 TNS-12535: TNS: ??????·???????? ns secondary err code: 12560 nt main err code: 505 TNS-00505: ??????????? 
nt secondary err code: 60 nt OS err code: 0 Client address: (ADDRESS=(PROTOCOL=tcp)(HOST=10.41.102.62)(PORT=1285)) Thu Nov 01 18:08:39 2012 Thread 1 advanced to log sequence 1885 (LGWR switch) Current log# 1 seq# 1885 mem# 0: D:\ORACLE\ORADATA\SIPOREX\REDO01.LOG Thu Nov 01 18:08:40 2012 Archived Log entry 1123 added for thread 1 sequence 1884 ID 0xe48db805 dest 1: Thu Nov 01 19:33:21 2012 Thread 1 cannot allocate new log, sequence 1886 Private strand flush not complete Current log# 1 seq# 1885 mem# 0: D:\ORACLE\ORADATA\SIPOREX\REDO01.LOG Thread 1 advanced to log sequence 1886 (LGWR switch) Current log# 2 seq# 1886 mem# 0: D:\ORACLE\ORADATA\SIPOREX\REDO02.LOG Thu Nov 01 19:33:25 2012 Archived Log entry 1124 added for thread 1 sequence 1885 ID 0xe48db805 dest 1: Thu Nov 01 20:32:25 2012 Thread 1 cannot allocate new log, sequence 1887 Private strand flush not complete Current log# 2 seq# 1886 mem# 0: D:\ORACLE\ORADATA\SIPOREX\REDO02.LOG Thread 1 advanced to log sequence 1887 (LGWR switch) Current log# 3 seq# 1887 mem# 0: D:\ORACLE\ORADATA\SIPOREX\REDO03.LOG Thu Nov 01 20:32:29 2012 Archived Log entry 1125 added for thread 1 sequence 1886 ID 0xe48db805 dest 1: Thu Nov 01 21:13:07 2012 Thread 1 advanced to log sequence 1888 (LGWR switch) Current log# 1 seq# 1888 mem# 0: D:\ORACLE\ORADATA\SIPOREX\REDO01.LOG Thu Nov 01 21:13:08 2012 Archived Log entry 1126 added for thread 1 sequence 1887 ID 0xe48db805 dest 1: Thu Nov 01 22:00:00 2012 Setting Resource Manager plan SCHEDULER[0x3006]:DEFAULT_MAINTENANCE_PLAN via scheduler window Setting Resource Manager plan DEFAULT_MAINTENANCE_PLAN via parameter Thu Nov 01 22:00:00 2012 Starting background process VKRM Thu Nov 01 22:00:00 2012 VKRM started with pid=32, OS id=4048 Thu Nov 01 22:00:59 2012 Thread 1 cannot allocate new log, sequence 1889 Private strand flush not complete Current log# 1 seq# 1888 mem# 0: D:\ORACLE\ORADATA\SIPOREX\REDO01.LOG Thread 1 advanced to log sequence 1889 (LGWR switch) Current log# 2 seq# 1889 mem# 0: D:\ORACLE\ORADATA\SIPOREX\REDO02.LOG Thu Nov 01 22:01:03 2012 Archived Log entry 1127 added for thread 1 sequence 1888 ID 0xe48db805 dest 1: Thu Nov 01 22:32:36 2012 Thread 1 advanced to log sequence 1890 (LGWR switch) Current log# 3 seq# 1890 mem# 0: D:\ORACLE\ORADATA\SIPOREX\REDO03.LOG Thu Nov 01 22:32:37 2012 Archived Log entry 1128 added for thread 1 sequence 1889 ID 0xe48db805 dest 1: Thu Nov 01 22:33:18 2012 Errors in file d:\oracle\diag\rdbms\siporex\siporex\trace\siporex_ora_11884.trc (incident=101313): ORA-03137: TTC protocol internal error : [12333] [8] [49] [50] [] [] [] [] Incident details in: d:\oracle\diag\rdbms\siporex\siporex\incident\incdir_101313\siporex_ora_11884_i101313.trc Thu Nov 01 22:33:21 2012 Trace dumping is performing id=[cdmp_20121101223321] Thu Nov 01 22:40:43 2012 Thread 1 cannot allocate new log, sequence 1891 Private strand flush not complete Current log# 3 seq# 1890 mem# 0: D:\ORACLE\ORADATA\SIPOREX\REDO03.LOG Thread 1 advanced to log sequence 1891 (LGWR switch) Current log# 1 seq# 1891 mem# 0: D:\ORACLE\ORADATA\SIPOREX\REDO01.LOG Thu Nov 01 22:40:47 2012 Archived Log entry 1129 added for thread 1 sequence 1890 ID 0xe48db805 dest 1: Thu Nov 01 23:47:30 2012 Thread 1 cannot allocate new log, sequence 1892 Private strand flush not complete Current log# 1 seq# 1891 mem# 0: D:\ORACLE\ORADATA\SIPOREX\REDO01.LOG Thread 1 advanced to log sequence 1892 (LGWR switch) Current log# 2 seq# 1892 mem# 0: D:\ORACLE\ORADATA\SIPOREX\REDO02.LOG Thu Nov 01 23:47:34 2012 Archived Log entry 1130 added for thread 1 
sequence 1891 ID 0xe48db805 dest 1: Fri Nov 02 00:49:31 2012 Thread 1 cannot allocate new log, sequence 1893 Private strand flush not complete Current log# 2 seq# 1892 mem# 0: D:\ORACLE\ORADATA\SIPOREX\REDO02.LOG Thread 1 advanced to log sequence 1893 (LGWR switch) Current log# 3 seq# 1893 mem# 0: D:\ORACLE\ORADATA\SIPOREX\REDO03.LOG Fri Nov 02 00:49:35 2012 Archived Log entry 1131 added for thread 1 sequence 1892 ID 0xe48db805 dest 1: Fri Nov 02 01:43:12 2012 Thread 1 cannot allocate new log, sequence 1894 Private strand flush not complete Current log# 3 seq# 1893 mem# 0: D:\ORACLE\ORADATA\SIPOREX\REDO03.LOG Thread 1 advanced to log sequence 1894 (LGWR switch) Current log# 1 seq# 1894 mem# 0: D:\ORACLE\ORADATA\SIPOREX\REDO01.LOG Fri Nov 02 01:43:17 2012 Archived Log entry 1132 added for thread 1 sequence 1893 ID 0xe48db805 dest 1: Fri Nov 02 01:52:51 2012 Errors in file d:\oracle\diag\rdbms\siporex\siporex\trace\siporex_ora_6124.trc (incident=101273): ORA-03137: TTC protocol internal error : [12333] [4] [80] [82] [] [] [] [] Incident details in: d:\oracle\diag\rdbms\siporex\siporex\incident\incdir_101273\siporex_ora_6124_i101273.trc Fri Nov 02 01:52:54 2012 Trace dumping is performing id=[cdmp_20121102015254] Fri Nov 02 02:00:00 2012 Clearing Resource Manager plan via parameter Fri Nov 02 02:43:37 2012 Thread 1 cannot allocate new log, sequence 1895 Private strand flush not complete Current log# 1 seq# 1894 mem# 0: D:\ORACLE\ORADATA\SIPOREX\REDO01.LOG Thread 1 advanced to log sequence 1895 (LGWR switch) Current log# 2 seq# 1895 mem# 0: D:\ORACLE\ORADATA\SIPOREX\REDO02.LOG Fri Nov 02 02:43:41 2012 Archived Log entry 1133 added for thread 1 sequence 1894 ID 0xe48db805 dest 1: Fri Nov 02 04:46:18 2012 Thread 1 cannot allocate new log, sequence 1896 Private strand flush not complete Current log# 2 seq# 1895 mem# 0: D:\ORACLE\ORADATA\SIPOREX\REDO02.LOG Thread 1 advanced to log sequence 1896 (LGWR switch) Current log# 3 seq# 1896 mem# 0: D:\ORACLE\ORADATA\SIPOREX\REDO03.LOG Fri Nov 02 04:46:22 2012 Archived Log entry 1134 added for thread 1 sequence 1895 ID 0xe48db805 dest 1: Fri Nov 02 04:51:41 2012 Errors in file d:\oracle\diag\rdbms\siporex\siporex\trace\siporex_ora_4048.trc (incident=101425): ORA-03137: TTC protocol internal error : [12333] [4] [67] [85] [] [] [] [] Incident details in: d:\oracle\diag\rdbms\siporex\siporex\incident\incdir_101425\siporex_ora_4048_i101425.trc Fri Nov 02 04:51:44 2012 Trace dumping is performing id=[cdmp_20121102045144] Fri Nov 02 05:54:44 2012 Thread 1 cannot allocate new log, sequence 1897 Private strand flush not complete Current log# 3 seq# 1896 mem# 0: D:\ORACLE\ORADATA\SIPOREX\REDO03.LOG Thread 1 advanced to log sequence 1897 (LGWR switch) Current log# 1 seq# 1897 mem# 0: D:\ORACLE\ORADATA\SIPOREX\REDO01.LOG Fri Nov 02 05:54:48 2012 Archived Log entry 1135 added for thread 1 sequence 1896 ID 0xe48db805 dest 1: Fri Nov 02 07:00:34 2012 Thread 1 cannot allocate new log, sequence 1898 Private strand flush not complete Current log# 1 seq# 1897 mem# 0: D:\ORACLE\ORADATA\SIPOREX\REDO01.LOG Thread 1 advanced to log sequence 1898 (LGWR switch) Current log# 2 seq# 1898 mem# 0: D:\ORACLE\ORADATA\SIPOREX\REDO02.LOG Fri Nov 02 07:00:38 2012 Archived Log entry 1136 added for thread 1 sequence 1897 ID 0xe48db805 dest 1: Fri Nov 02 08:32:41 2012 Thread 1 cannot allocate new log, sequence 1899 Private strand flush not complete Current log# 2 seq# 1898 mem# 0: D:\ORACLE\ORADATA\SIPOREX\REDO02.LOG Thread 1 advanced to log sequence 1899 (LGWR switch) Current log# 3 seq# 
1899 mem# 0: D:\ORACLE\ORADATA\SIPOREX\REDO03.LOG Fri Nov 02 08:32:45 2012 Archived Log entry 1137 added for thread 1 sequence 1898 ID 0xe48db805 dest 1: Fri Nov 02 09:48:57 2012 Thread 1 advanced to log sequence 1900 (LGWR switch) Current log# 1 seq# 1900 mem# 0: D:\ORACLE\ORADATA\SIPOREX\REDO01.LOG Fri Nov 02 09:48:58 2012 Archived Log entry 1138 added for thread 1 sequence 1899 ID 0xe48db805 dest 1: Fri Nov 02 10:18:15 2012 Thread 1 cannot allocate new log, sequence 1901 Private strand flush not complete Current log# 1 seq# 1900 mem# 0: D:\ORACLE\ORADATA\SIPOREX\REDO01.LOG Thread 1 advanced to log sequence 1901 (LGWR switch) Current log# 2 seq# 1901 mem# 0: D:\ORACLE\ORADATA\SIPOREX\REDO02.LOG Fri Nov 02 10:18:19 2012 Archived Log entry 1139 added for thread 1 sequence 1900 ID 0xe48db805 dest 1: Fri Nov 02 10:22:58 2012 Thread 1 cannot allocate new log, sequence 1902 Private strand flush not complete Current log# 2 seq# 1901 mem# 0: D:\ORACLE\ORADATA\SIPOREX\REDO02.LOG Thread 1 advanced to log sequence 1902 (LGWR switch) Current log# 3 seq# 1902 mem# 0: D:\ORACLE\ORADATA\SIPOREX\REDO03.LOG Fri Nov 02 10:23:02 2012 Archived Log entry 1140 added for thread 1 sequence 1901 ID 0xe48db805 dest 1: Fri Nov 02 10:27:38 2012 Thread 1 cannot allocate new log, sequence 1903 Checkpoint not complete Current log# 3 seq# 1902 mem# 0: D:\ORACLE\ORADATA\SIPOREX\REDO03.LOG Thread 1 cannot allocate new log, sequence 1903 Private strand flush not complete Current log# 3 seq# 1902 mem# 0: D:\ORACLE\ORADATA\SIPOREX\REDO03.LOG Thread 1 advanced to log sequence 1903 (LGWR switch) Current log# 1 seq# 1903 mem# 0: D:\ORACLE\ORADATA\SIPOREX\REDO01.LOG Fri Nov 02 10:27:45 2012 Archived Log entry 1141 added for thread 1 sequence 1902 ID 0xe48db805 dest 1: Fri Nov 02 10:32:27 2012 Thread 1 cannot allocate new log, sequence 1904 Checkpoint not complete Current log# 1 seq# 1903 mem# 0: D:\ORACLE\ORADATA\SIPOREX\REDO01.LOG Thread 1 cannot allocate new log, sequence 1904 Private strand flush not complete Current log# 1 seq# 1903 mem# 0: D:\ORACLE\ORADATA\SIPOREX\REDO01.LOG Thread 1 advanced to log sequence 1904 (LGWR switch) Current log# 2 seq# 1904 mem# 0: D:\ORACLE\ORADATA\SIPOREX\REDO02.LOG Fri Nov 02 10:32:34 2012 Archived Log entry 1142 added for thread 1 sequence 1903 ID 0xe48db805 dest 1: Fri Nov 02 10:35:42 2012 Errors in file d:\oracle\diag\rdbms\siporex\siporex\trace\siporex_ora_15856.trc (incident=101353): ORA-03137: TTC protocol internal error : [12333] [8] [49] [50] [] [] [] [] Incident details in: d:\oracle\diag\rdbms\siporex\siporex\incident\incdir_101353\siporex_ora_15856_i101353.trc Fri Nov 02 10:35:44 2012 Trace dumping is performing id=[cdmp_20121102103544] I don't know main reason of this issue as well as how to fixing it. Please help me.

    Read the article

  • BASH if conditions

    - by Daniil
    Hi, I did ask a question before. The answer made sense, but I could never get it to work. And now I gotta get it working. But I cannot figure out BASH's if statements. What am I doing wrong below:

    START_TIME=9
    STOP_TIME=17
    HOUR=$((`date +"%k"`))
    if [[ "$HOUR" -ge "9" ]] && [[ "$HOUR" -le "17" ]] && [[ "$2" != "-force" ]] ; then
        echo "Cannot run this script without -force at this time"
        exit 1
    fi

    The idea is that I don't want this script to continue executing, unless forced to, during hours of 9am to 5pm. But it will always evaluate the condition to true and thus won't allow me to run the script.

    ./script.sh [action] (-force)

    Thx

    Edit: The output of set -x:

    $ ./test2.sh restart
    + START_TIME=9
    + STOP_TIME=17
    ++ date +%k
    + HOUR=11
    + [[ 11 -ge 9 ]]
    + [[ 11 -le 17 ]]
    + [[ '' != \-\f\o\r\c\e ]]
    + echo 'Cannot run this script without -force at this time'
    Cannot run this script without -force at this time
    + exit 1

    and then with -force

    $ ./test2.sh restart -force
    + START_TIME=9
    + STOP_TIME=17
    ++ date +%k
    + HOUR=11
    + [[ 11 -ge 9 ]]
    + [[ 11 -le 17 ]]
    + [[ '' != \-\f\o\r\c\e ]]
    + echo 'Cannot run this script without -force at this time'
    Cannot run this script without -force at this time
    + exit 1
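    From the trace, $2 expands to an empty string even when -force is passed on the command line, which suggests the [[ ... ]] test runs in a context where the positional parameters are not what the script expects (for example inside a function, where $2 would refer to the function's own arguments rather than the script's). A minimal sketch of a position-independent check is shown below; it is my own illustration under that assumption, not the poster's script:

    #!/bin/bash
    # Block execution between 09:00 and 17:59 (same comparison as the original script)
    # unless -force appears anywhere in the script's arguments.
    START_TIME=9
    STOP_TIME=17
    HOUR=$(date +%k)

    FORCE=0
    for arg in "$@"; do
        if [[ "$arg" == "-force" ]]; then
            FORCE=1
        fi
    done

    if (( HOUR >= START_TIME && HOUR <= STOP_TIME )) && (( FORCE == 0 )); then
        echo "Cannot run this script without -force at this time" >&2
        exit 1
    fi

    Scanning "$@" avoids depending on -force being exactly the second argument, and the arithmetic test (( ... )) removes the need to quote the numbers.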

    Read the article

  • Need help to create database schema for wholesale online tee store

    - by techiepark
    Hi, I'm currently working on a wholesale online t-shirt shop. I have done this for fixed quantity and price, and it's working fine. Now I need to do this for variable quantity and price. Here is the reference link showing what I have to do. The basic tables I have created are:

    CREATE TABLE attribute (
      attribute_id int(11) NOT NULL auto_increment,
      name varchar(100) NOT NULL,
      PRIMARY KEY (attribute_id)
    );

    CREATE TABLE attribute_value (
      attribute_value_id int(11) NOT NULL auto_increment,
      attribute_id int(11) NOT NULL,
      value varchar(100) NOT NULL,
      PRIMARY KEY (attribute_value_id),
      KEY idx_attribute_value_attribute_id (attribute_id)
    );

    CREATE TABLE product (
      product_id int(11) NOT NULL auto_increment,
      name varchar(100) NOT NULL,
      description varchar(1000) NOT NULL,
      price decimal(10,2) NOT NULL,
      image varchar(150) default NULL,
      thumbnail varchar(150) default NULL,
      PRIMARY KEY (product_id),
      FULLTEXT KEY idx_ft_product_name_description (name,description)
    );

    CREATE TABLE product_attribute (
      product_id int(11) NOT NULL,
      attribute_value_id int(11) NOT NULL,
      PRIMARY KEY (product_id,attribute_value_id)
    );

    I'm not sure how to store the price based on variable quantity. Please help me create the product table and its related tables. My requirement is the same as in the above reference link.
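    One common way to model quantity-dependent wholesale pricing is to move the price out of the product table into a tier table keyed by product and minimum quantity. The sketch below is only an illustration — the table name, column names, and sample values are mine, not from the post or the referenced link:

    -- Each row gives the unit price that applies once the ordered quantity
    -- reaches min_quantity for that product.
    CREATE TABLE product_price_tier (
      price_tier_id int(11) NOT NULL auto_increment,
      product_id    int(11) NOT NULL,
      min_quantity  int(11) NOT NULL,
      unit_price    decimal(10,2) NOT NULL,
      PRIMARY KEY (price_tier_id),
      UNIQUE KEY idx_tier_product_qty (product_id, min_quantity)
    );

    -- Example lookup: the unit price for product 1 when 25 pieces are ordered.
    SELECT unit_price
    FROM product_price_tier
    WHERE product_id = 1
      AND min_quantity <= 25
    ORDER BY min_quantity DESC
    LIMIT 1;

    With a layout like this, the existing price column in product can stay as a default or list price, while the tier table drives the wholesale price actually charged for a given quantity.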

    Read the article

< Previous Page | 52 53 54 55 56 57 58 59 60 61 62 63  | Next Page >