Search Results

Search found 3786 results on 152 pages for 'instances'.


  • Cron is running but not outputting data

    - by Youri
    I'm trying to make my Amazon EC2 instances stop and start from a crontab. The EC2 API tools are successfully installed, and running the command manually works. The cron entry (which I put in with the command crontab -e): 10 * * * * ubuntu /usr/bin/ec2-stop-instances [instanceid] /tmp/ec2.log The file /tmp/ec2.log is created. When I use the command grep CRON /var/log/syslog I can see that the cron has actually run, but I don't get any output in the /tmp/ec2.log file. I have set all the Amazon variables needed. Even if I create a deliberately wrong cron entry, like this: 10 * * * * ubuntu /usr/bin/ec2-stop-instancwweqes [instanceid] /tmp/ec2.log I get no output in the file. Shouldn't there be an error? I also tried not defining the user: 10 * * * * /usr/bin/ec2-stop-instances [instanceid] /tmp/ec2.log And the direct command: 10 * * * * ubuntu ec2-stop-instances [instanceid] /tmp/ec2.log Can someone please help me? If I can somehow debug this, I can get to the solution. Thanks in advance.
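    For reference, below is a minimal sketch (not from the original question) of what a debuggable entry could look like. A crontab installed with crontab -e has no user field, and cron does not load the interactive shell profile, so the EC2 variables have to be sourced explicitly; the env-file path is an assumption for illustration:

        # user crontab (crontab -e): minute hour dom month dow command (no user column)
        # source the EC2 environment first, then capture both stdout and stderr in the log
        10 * * * * . $HOME/.ec2rc && /usr/bin/ec2-stop-instances [instanceid] >> /tmp/ec2.log 2>&1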

    Read the article

  • New Features and Changes in OIM11gR2

    - by Abhishek Tripathi
    WEB CONSOLES in OIM 11gR2
    In 11gR1 there were three admin web consoles: the Self Service Console, the Administration Console, and the Advanced Administration Console. In OIM 11gR2 the Self Service and Administration Consoles have been combined into a single console, now called the Identity Self Service Console (http://host:port/identity). This console has three areas: managing your own profile (My Profile), managing requests such as requesting application instances and approving requests (Requests), and general administration tasks such as creating and managing users, roles, organizations, attestation, etc. (Administration). In OIM 11gR2 a new sysadmin console (http://host:port/sysadmin) has also been added for administrators, which includes some of the Design Console functions in addition to the general administration features.
    Application Instances
    An application instance is the object that is provisioned to a user. Application instances are checked out in the catalog, and users can request application instances via the catalog.
    - In OIM 11gR2, resources and entitlements are bundled into an application instance, which the user can select and request from the catalog.
    - An application instance is a combination of an IT Resource and a Resource Object (RO). You therefore cannot create another application instance with the same RO and IT Resource if that combination already exists for some other application instance; at least one of the two (RO or IT Resource) must have a different name.
    - If you want the users of a particular organization to be able to request an application instance through the catalog, the application instance must be attached to that organization.
    - An application instance can be associated with multiple organizations.
    - An application instance can also have entitlements associated with it. An entitlement can include roles/groups or a responsibility.
    - Application instances are published to the catalog by the scheduled task "Catalog Synchronization Job".
    - Application instances can have a child/parent relationship, where a child application instance inherits all attributes of its parent.
    An important point to remember with application instances: if you delete an application instance in OIM 11gR2 and create a new one with the same name, OIM will not allow it. It throws an error saying an application instance already exists with the same Resource Object and IT Resource. This is because some references to the deleted application instance are not removed in OIM. To completely delete an application instance from OIM, you must: 1. Delete the application instance from the sysadmin console. 2. Run the Application Instance Post Delete Processing Job in Revoke/Delete mode. 3. Run the Catalog Synchronization Job. Once done, you should be able to create a new application instance with the previous RO and IT Resource names.
    Catalog
    The catalog allows users to request roles, application instances, and entitlements in an application.
    - Catalog items: roles, application instances, and entitlements that can be requested via the catalog are called catalog items.
    - Detailed information (the attributes of a catalog item):
    - Category: each catalog item is associated with one and only one category. Catalog administrators can provide a value for each catalog item.
    - Tags: search keywords that help when searching the catalog. When users search the catalog, the search is performed against the tags. To define a tag, go to Catalog, search for the resource, select the resource, and update the tag field with a custom search keyword.
    Tags are of three types:
    a) Auto-generated tags: the catalog synchronization process auto-tags the catalog item using the item type, item name, and item display name.
    b) User-defined tags: additional keywords entered by the catalog administrator.
    c) Arbitrary tags: if a metadata attribute was marked as searchable when it was defined, it also becomes part of the tags.
    Sandbox
    The sandbox is a new feature introduced in OIM 11gR2. It serves as a temporary development environment for UI customizations so that they don't affect other users before they are published and merged into the existing OIM UI. All UI customizations should be done inside a sandbox; this ensures that your changes don't affect other users until you have finalized them and the customization is complete. Once the UI customization is completed, the sandbox must be published for the customizations to be merged into the existing UI and become available to other users. Creating and activating a sandbox is mandatory for customizing the UI: without an active sandbox, OIM does not allow you to customize any page.
    a) Before you perform any customization activity in OIM (such as creating/modifying forms or custom attributes, creating application instances, or adding roles/attributes to the catalog), you must create a sandbox and activate it.
    b) You can create multiple sandboxes in OIM, but only one sandbox can be active at any given time.
    c) You can export/import a sandbox to move the changes from one environment to another.
    Creating a sandbox
    To create a sandbox, log in to Identity Self Service (/identity) or System Administration (/sysadmin), click the "Sandboxes" link at the top right, and then click Create Sandbox.
    Publishing a sandbox
    Before you publish a sandbox, it is recommended to back up MDS. Use /EM (Enterprise Manager) to back up MDS by following the steps below.
    Creating an MDS backup
    1. Log in to Oracle Enterprise Manager as the administrator.
    2. On the landing page, click oracle.iam.console.identity.self-service.ear(V2.0).
    3. From the Application Deployment menu at the top, select MDS configuration.
    4. Under Export, select the "Export metadata documents to an archive on the machine where this web browser is running" option, and then click Export. All the metadata is exported in a ZIP file.
    Creating a password policy through the admin console
    In 11gR1 and previous versions, password policies could be created and applied via the OIM Design Console only. From OIM 11gR2 onwards, password policies can be created and assigned using the admin console as well.

    Read the article

  • Retrieve Performance Data from SOA Infrastructure Database

    - by fip
    My earlier blog posting shows how to enable, retrieve and interpret BPEL engine performance statistics to aid performance troubleshooting. The strength of the BPEL engine statistics in EM is their breakdown per request. But the BPEL performance statistics mentioned in that posting have some limitations: The statistics are stored in memory instead of being persisted. To avoid memory overflow, the data are kept in a buffer of limited size; when the number of statistic entries exceeds the limit, old data are flushed out to make room for new statistics. The buffer can therefore only keep the last X entries, and the statistics from five hours ago may not be there anymore. The BPEL engine performance statistics only include latencies; they do not provide throughputs. Fortunately, Oracle SOA Suite runs with the SOA Infrastructure database, and a lot of performance data is naturally persisted there. It is at a coarser grain than the in-memory BPEL statistics, but it has its own strength in that it is persisted. Here I would like to offer examples of some basic SQL queries you can run against the infrastructure database of Oracle SOA Suite 11g to acquire performance statistics for a given period of time. You can run them immediately after you modify the date range to match your actual system.
    1. Asynchronous/one-way message incoming rates
    The following query shows the number of messages sent to one-way/async BPEL processes during a given time period, organized by process name and state:
        select composite_name composite, state, count(*) Count
        from dlv_message
        where receive_date >= to_timestamp('2012-10-24 21:00:00','YYYY-MM-DD HH24:MI:SS')
        and receive_date <= to_timestamp('2012-10-24 21:59:59','YYYY-MM-DD HH24:MI:SS')
        group by composite_name, state
        order by Count;
    2. Throughput of BPEL process instances
    The following query shows the number of synchronous and asynchronous process instances created during a given time period. It lists instances in all states, including unfinished and faulted ones, and the results include all composites across all SOA partitions:
        select state, count(*) Count, composite_name composite, component_name, componenttype
        from cube_instance
        where creation_date >= to_timestamp('2012-10-24 21:00:00','YYYY-MM-DD HH24:MI:SS')
        and creation_date <= to_timestamp('2012-10-24 21:59:59','YYYY-MM-DD HH24:MI:SS')
        group by composite_name, component_name, componenttype
        order by count(*) desc;
    3. Throughput and latencies of BPEL process instances
    This query builds on the previous one and provides more comprehensive information: not only the throughput but also the maximum, minimum and average elapsed time of BPEL process instances.
        select composite_name Composite, component_name Process, componenttype, state, count(*) Count,
        trunc(Max(extract(day from (modify_date-creation_date))*24*60*60 + extract(hour from (modify_date-creation_date))*60*60 + extract(minute from (modify_date-creation_date))*60 + extract(second from (modify_date-creation_date))),4) MaxTime,
        trunc(Min(extract(day from (modify_date-creation_date))*24*60*60 + extract(hour from (modify_date-creation_date))*60*60 + extract(minute from (modify_date-creation_date))*60 + extract(second from (modify_date-creation_date))),4) MinTime,
        trunc(AVG(extract(day from (modify_date-creation_date))*24*60*60 + extract(hour from (modify_date-creation_date))*60*60 + extract(minute from (modify_date-creation_date))*60 + extract(second from (modify_date-creation_date))),4) AvgTime
        from cube_instance
        where creation_date >= to_timestamp('2012-10-24 21:00:00','YYYY-MM-DD HH24:MI:SS')
        and creation_date <= to_timestamp('2012-10-24 21:59:59','YYYY-MM-DD HH24:MI:SS')
        group by composite_name, component_name, componenttype, state
        order by count(*) desc;
    4. Combining it all together
    Now let's combine all three queries and parameterize the start and end timestamps to make the script a bit more robust. The following script prompts for the start and end time before querying against the database:
        accept startTime prompt 'Enter start time (YYYY-MM-DD HH24:MI:SS)'
        accept endTime prompt 'Enter end time (YYYY-MM-DD HH24:MI:SS)'
        Prompt "==== Rejected Messages ====";
        REM 2012-10-24 21:00:00
        REM 2012-10-24 21:59:59
        select count(*), composite_dn
        from rejected_message
        where created_time >= to_timestamp('&&StartTime','YYYY-MM-DD HH24:MI:SS')
        and created_time <= to_timestamp('&&EndTime','YYYY-MM-DD HH24:MI:SS')
        group by composite_dn;
        Prompt " ";
        Prompt "==== Throughput of one-way/asynchronous messages ====";
        select state, count(*) Count, composite_name composite
        from dlv_message
        where receive_date >= to_timestamp('&StartTime','YYYY-MM-DD HH24:MI:SS')
        and receive_date <= to_timestamp('&EndTime','YYYY-MM-DD HH24:MI:SS')
        group by composite_name, state
        order by Count;
        Prompt " ";
        Prompt "==== Throughput and latency of BPEL process instances ===="
        select state, count(*) Count,
        trunc(Max(extract(day from (modify_date-creation_date))*24*60*60 + extract(hour from (modify_date-creation_date))*60*60 + extract(minute from (modify_date-creation_date))*60 + extract(second from (modify_date-creation_date))),4) MaxTime,
        trunc(Min(extract(day from (modify_date-creation_date))*24*60*60 + extract(hour from (modify_date-creation_date))*60*60 + extract(minute from (modify_date-creation_date))*60 + extract(second from (modify_date-creation_date))),4) MinTime,
        trunc(AVG(extract(day from (modify_date-creation_date))*24*60*60 + extract(hour from (modify_date-creation_date))*60*60 + extract(minute from (modify_date-creation_date))*60 + extract(second from (modify_date-creation_date))),4) AvgTime,
        composite_name Composite, component_name Process, componenttype
        from cube_instance
        where creation_date >= to_timestamp('&StartTime','YYYY-MM-DD HH24:MI:SS')
        and creation_date <= to_timestamp('&EndTime','YYYY-MM-DD HH24:MI:SS')
        group by composite_name, component_name, componenttype, state
        order by count(*) desc;
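    As a usage sketch (not part of the original post): save the combined script to a file, for example soa_perf.sql, and run it with SQL*Plus against the SOA infrastructure schema; the connect string below is an assumption and must be adapted to your environment.

        sqlplus dev_soainfra/password@soadb @soa_perf.sql

    SQL*Plus will then prompt for the start and end timestamps (for example 2012-10-24 21:00:00 and 2012-10-24 21:59:59, as in the queries above) and substitute them into the &StartTime and &EndTime variables.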

    Read the article

  • April 18: Learn about Oracle Hyperion Data Relationship Management

    - by Theresa Hickman
    Do you have multiple charts of accounts on different application instances? Would you like an easy way to synchronize your charts of accounts across instances? If you answered yes, then please join us in an informal reference call with Johnson Controls who were able to synchronize their charts of accounts across 5 HFM (Hyperion Financial Management) instances using Hyperion Data Relationship Management (DRM). Johnson Controls is a global technology and industrial leader with 162,000 employees, serving customers in more than 150 countries. This call will include a brief overview of Johnson Controls and their solution followed by a candid discussion and an open question and answer session. When: April 18, 2012 Time: 8:00 am PST Duration: 1 Hour Speaker: Raymond Chontos, HFM Application Manager Global Financial Systems Click here to register.

    Read the article

  • SSL setup with GoDaddy subdomains and EC2 servers

    - by Kevin
    We have two EC2 instances that are used to host various scripts. Our main page 'companyname.com' is hosted with GoDaddy but is unrelated to those EC2 instances. I need to set up SSL connections for the two EC2 micro instances, one running the Linux AMI and the other running Windows Server. I purchased two single-domain Comodo certificates and am at the step of generating CSRs on the instances. I'm not sure what to put as the "Server Name" on EC2. I would like each server to be accessible through a subdomain, which I have forwarded on GoDaddy to the Elastic IPs on EC2. For the server name, do I use the Elastic IP, the EC2 public DNS, or the subdomain that I want? And which of these do I then place in my VirtualHosts file on Apache? The Windows instance is running IIS 7, but the Apache box is the priority.
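    For what it's worth, a hedged sketch of the Apache/Linux side: the name clients will actually type (the forwarded subdomain, shown here as the hypothetical api.companyname.com) is what normally goes into the CSR's Common Name and into the VirtualHost's ServerName, rather than the Elastic IP or the EC2 public DNS. The subdomain and file paths are placeholders:

        # generate the private key and CSR; answer the Common Name prompt with the subdomain
        openssl req -new -newkey rsa:2048 -nodes \
            -keyout api.companyname.com.key -out api.companyname.com.csr

        # matching Apache virtual host (mod_ssl), once Comodo returns the certificate
        <VirtualHost *:443>
            ServerName api.companyname.com
            SSLEngine on
            SSLCertificateFile      /etc/ssl/certs/api.companyname.com.crt
            SSLCertificateKeyFile   /etc/ssl/private/api.companyname.com.key
            SSLCertificateChainFile /etc/ssl/certs/comodo-intermediate.crt
        </VirtualHost>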

    Read the article

  • UAT Testing for SOA 10G Clusters

    - by [email protected]
    A lot of customers ask how to verify their SOA clusters and make them production ready. Here is a list of test cases that I recommend using for 10G SOA clusters.
    Test cases for each component - Oracle Application Server 10G
    General Application Server test cases
    This section covers very general test cases to make sure that the Application Server cluster has been set up correctly and that you can start and stop all the components in the server via opmnctl and the AS Console.
    Test Case 1: Check that you can see the AS instances in the console
    Implementation: Log on to the AS Console and check whether you can see all the nodes in your AS cluster. You should be able to see all the Oracle AS instances that are part of the cluster. This means that the OPMN clustering worked and the AS instances successfully joined the AS cluster.
    Result: All the instances in the AS cluster should be listed in the EM console. If the instances are not listed, check the following files to see whether OPMN joined the cluster properly: $ORACLE_HOME\opmn\logs\opmn.log and $ORACLE_HOME\opmn\logs\opmn.dbg. If OPMN did not join the cluster properly, check the opmn.xml file to make sure the discovery multicast address and port are correct (see the OPMN documentation). Restart the whole instance using opmnctl stopall followed by opmnctl startall, then log on to the AS Console to see if the instance is listed as part of the cluster.
    Test Case 2: Check that you can start/stop each component
    Implementation: Check each OC4J component on each AS instance. Start and stop each and every component through the AS Console to see if it starts and stops. Do that for each and every instance.
    Result: Each component should start and stop through the AS Console. You can also verify that a component started by checking opmnctl status after logging on to each box associated with the cluster.
    Test Case 3: Add/modify a datasource entry through the AS Console on a remote AS instance (not on the instance where EM is physically running)
    Implementation: Pick an OC4J instance. Create a new data source through the AS Console. Optionally, modify an existing data source or connection pool.
    Result: Open $ORACLE_HOME\j2ee\<oc4j_name>\config\data-sources.xml to see if the new (and/or the modified) connection details and data source exist. If they do, the AS Console has successfully updated a remote file and the MBeans are communicating correctly.
    Test Case 4: Start and stop AS instances using the opmnctl @cluster command
    Implementation: Go to $ORACLE_HOME\opmn\bin and use opmnctl @cluster to start and stop the AS instances.
    Result: Use opmnctl @cluster status to check the start and stop statuses.
    HTTP server test cases
    This section deals with use cases to test HTTP server failover scenarios.
    In these examples the HTTP server will be talking to the BPEL Console (or any other web application that the client wants), so the URL will be http://hostname:port/BPELConsole.
    Test Case 1: Shut down one of the HTTP servers while accessing the BPEL Console and see the request routed to the second HTTP server in the cluster
    Implementation: Access the BPEL Console. Check $ORACLE_HOME\Apache\Apache\logs\access_log for the timestamp and the URL that was accessed by the user. The timestamp and URL would look like this: 1xx.2x.2xx.xxx [24/Mar/2009:16:04:38 -0500] "GET /BPELConsole=System HTTP/1.1" 200 15. After you have figured out which HTTP server this is running on, shut down that HTTP server by using opmnctl stopproc (this is a graceful shutdown). Access the BPEL Console again (note that you should have a load balancer in front of the HTTP servers and have configured the Apache virtual host; see the EDG for the steps). Check $ORACLE_HOME\Apache\Apache\logs\access_log again for the timestamp and the URL that was accessed by the user; they would look like the entry above.
    Result: Even though you are shutting down the HTTP server, the request is routed to the surviving HTTP server, which is then able to route the request to the BPEL Console, and you are able to access the console. By checking the access log file you can confirm that the request is being picked up by the surviving node.
    Test Case 2: Repeat the same test as above, but instead of calling opmnctl stopproc, pull the network cord of one of the HTTP servers, so that the LBR routes the request to the surviving HTTP node (this simulates a network failure).
    Test Case 3: In test case 1 we simulated a graceful shutdown; in this case we will simulate an Apache crash
    Implementation: Use opmnctl status -l to get the PID of the HTTP server that you would like to forcefully bring down. On Linux, use kill -9 <PID> to kill the HTTP server. Access the BPEL Console.
    Result: As you shut down the HTTP server, OPMN will restart it. The restart may be so quick that the LBR may still route the request to the same server. One way to check whether the HTTP server restarted is to check the new PID and the timestamp in the access log for the BPEL Console.
    BPEL test cases
    This section covers scenarios dealing with BPEL clustering using jGroups, BPEL deployment, and testing related to BPEL failover.
    Test Case 1: Verify that jGroups has initialized correctly
    There is no real testing in this use case, just a visual verification, by looking at log files, that jGroups has initialized correctly. Check the opmn log for the BPEL container on all nodes at $ORACLE_HOME/opmn/logs/<group name><container name><group name>~1.log. This logfile contains jGroups-related information during startup and steady-state operation.
    Soon after startup you should find log entries for UDP or TCP.
    Example jGroups log entries for UDP:
        Apr 3, 2008 6:30:37 PM org.collaxa.thirdparty.jgroups.protocols.UDP createSockets
        INFO: sockets will use interface 144.25.142.172
        Apr 3, 2008 6:30:37 PM org.collaxa.thirdparty.jgroups.protocols.UDP createSockets
        INFO: socket information:
        local_addr=144.25.142.172:1127, mcast_addr=228.8.15.75:45788, bind_addr=/144.25.142.172, ttl=32
        sock: bound to 144.25.142.172:1127, receive buffer size=64000, send buffer size=32000
        mcast_recv_sock: bound to 144.25.142.172:45788, send buffer size=32000, receive buffer size=64000
        mcast_send_sock: bound to 144.25.142.172:1128, send buffer size=32000, receive buffer size=64000
        Apr 3, 2008 6:30:37 PM org.collaxa.thirdparty.jgroups.protocols.TP$DiagnosticsHandler bindToInterfaces
        -------------------------------------------------------
        GMS: address is 144.25.142.172:1127
        -------------------------------------------------------
    Example jGroups log entries for TCP:
        Apr 3, 2008 6:23:39 PM org.collaxa.thirdparty.jgroups.blocks.ConnectionTable start
        INFO: server socket created on 144.25.142.172:7900
        Apr 3, 2008 6:23:39 PM org.collaxa.thirdparty.jgroups.protocols.TP$DiagnosticsHandler bindToInterfaces
        -------------------------------------------------------
        GMS: address is 144.25.142.172:7900
        -------------------------------------------------------
    In the log below, "server socket created on" indicates that the TCP socket is established on the node's own IP address and port, while "created socket to" shows that the second node has connected to the first node, matching the logfile above by IP address and port.
        Apr 3, 2008 6:25:40 PM org.collaxa.thirdparty.jgroups.blocks.ConnectionTable start
        INFO: server socket created on 144.25.142.173:7901
        Apr 3, 2008 6:25:40 PM org.collaxa.thirdparty.jgroups.protocols.TP$DiagnosticsHandler bindToInterfaces
        -------------------------------------------------------
        GMS: address is 144.25.142.173:7901
        -------------------------------------------------------
        Apr 3, 2008 6:25:41 PM org.collaxa.thirdparty.jgroups.blocks.ConnectionTable getConnection
        INFO: created socket to 144.25.142.172:7900
    Result: By reviewing the log files, you can confirm whether BPEL clustering at the jGroups level is working and whether the jGroups channel is communicating.
    Test Case 2: Test connectivity between BPEL nodes
    Implementation: Test connections between the different cluster nodes using ping, telnet, and traceroute. The presence of firewalls and the number of hops between cluster nodes can affect performance, as they have a tendency to take down connections after some time or simply block them. Also reference Metalink Note 413783.1: "How to Test Whether Multicast is Enabled on the Network."
    Result: Using the above tools you can confirm whether multicast is working and whether the BPEL nodes are communicating.
    Test Case 3: Test deployment of a BPEL suitcase to one BPEL node
    Implementation: Deploy a HelloWorld BPEL suitcase (or any other client-specific BPEL suitcase) to only one BPEL instance using ant, JDeveloper, or the BPEL Console. Log on to the second BPEL Console to check whether the BPEL suitcase has been deployed.
    Result: If jGroups has been configured and is communicating correctly, BPEL clustering will allow you to deploy a suitcase to a single node, and jGroups will notify the second instance of the deployment. The second BPEL instance will go to the DB and pick up the new deployment after receiving the notification. The result is that the new deployment will be "deployed" to each node by only deploying to a single BPEL instance in the BPEL cluster.
    Test Case 4: Test whether the BPEL server fails over and all asynchronous processes are picked up by the secondary BPEL instance
    Implementation: Deploy two asynchronous processes: a ParentAsynch process which calls a ChildAsynchProcess with a variable telling it how many times to loop or how many seconds to sleep, and a ChildAsynchProcess that loops, sleeps, or has an onAlarm. Make sure that the processes are deployed to both servers. Shut down one BPEL server. On the active BPEL server, call ParentAsynch a few times (use the load generation page). When you have enough ParentAsynch instances, shut down this BPEL instance and start the other one. Please wait until this BPEL instance shuts down fully before starting up the second one. Log on to the BPEL Console and see that the instances were picked up by the second BPEL node and completed.
    Result: The BPEL instances will fail over to the secondary node and complete the flow.
    ESB test cases
    This section covers the use cases involved in testing an ESB cluster. For this section please follow Metalink Note 470267.1, which covers the basic tests to verify your ESB cluster.

    Read the article

  • BizTalk Server Monitoring – SharePoint Web Part

    - by SURESH GIRIRAJAN
    I have worked with customers that use BizTalk as shared infrastructure in the enterprise, where two or more BizTalk apps run on it for different business groups. These customers are not using the BizTalk ESB portal, even though they are using the BizTalk ESB exception framework. The main issue for all these business groups is that they don't have visibility into the BizTalk apps running in production, even though they have SCOM and other monitoring in place. So I am trying to address a few issues listed below and describe how I mitigate them: how to get visibility into production, how to provision access to the BizTalk resources with minimal effort, and how we can take advantage of the resources we have today. A year ago I was working on creating REST data services for BizTalk RFID, which are available on CodePlex. I thought I would extend that idea and take advantage of the BizTalk Data Services also available on CodePlex. I have extended the BizTalk Data Services and will upload the updated service soon. Let me walk through how my solution works. As a first step, I am using the BizTalk data service (a REST service), which exposes most of the BizTalk artifacts as resources, such as applications, orchestrations, send ports, receive ports, host instances, in-process instances, etc. BizTalk Server Monitoring – SharePoint Web Part: I am hosting the BizTalk data service in IIS with the application pool configured to run under BizTalk administrator credentials. With this setup I make the service accessible anonymously. As the next step of this solution I have created a SharePoint visual web part which consumes the BizTalk data service and displays all the BizTalk application and platform settings in read-only mode, even though BizTalk Data Services also lets you browse resources as well as perform actions like starting and stopping orchestrations, send ports, receive locations, host instances, etc. (Screenshots: Host Instances, BizTalk Applications, BizTalk Running / Suspended Instances.) This BizTalk monitoring SharePoint web part is then added to SharePoint. It eliminates the need to grant access to the BizTalk users explicitly: when a BizTalk contractor or BizTalk application user needs access to the BizTalk environment, all they need is access to the SharePoint website. You can configure the web part to point to a different endpoint based on your environment. I am making this read-only to keep it easier for the users and in terms of provisioning. This removes the dependency on the BizTalk admin, at least for viewing the BizTalk application status and errors. If we need to make any changes to a BizTalk application, it is the application owner's responsibility to coordinate with the BizTalk admins. There are options like the BizTalk ESB portal, BizTalk 360, etc., but this is one approach to reduce the number of steps required to give access to BizTalk application users and also to maximize the resources we have in the enterprise today. You can also expose this data service through the Azure Service Bus and access it from other apps like mobile devices, or create a web site hosted in Azure, etc. One last thing: I have tested this only with BizTalk Server 2010 on an x64 VM, but it should work on other versions. I will try to upload the code shortly with instructions on how to set it up. I welcome thoughts and suggestions. Hope this helps.

    Read the article

  • Where should instantiated classes be stored?

    - by Eric C.
    I'm having a bit of a design dilemma here. I'm writing a library that consists of a bunch of template classes that are designed to be used as a base for creating content. For example: public class Template { public string Name {get; set;} public string Description {get; set;} public string Attribute1 {get; set;} public string Attribute2 {get; set;} public Template() { //constructor } public void DoSomething() { //does something } ... } The problem is, not only is the library providing the templates, it will also supply quite a few predefined templates which are instances of these template classes. The question is, where do I put these instances of the templates? The three solutions I've come up with so far are: 1) Provide serialized instances of the templates as files. On the one hand, this solution would keep the instances separated from the library itself, which is nice, but it would also potentially add complexity for the user. Even if we provided methods for loading/deserializing the files, they'd still have to deal with a bunch of files, and some kind of config file so the app knows where to look for those files. Plus, creating the template files would probably require a separate app, so if the user wanted to stick with the files method of storing templates, we'd have to provide some kind of app for creating the template files. Also, this requires external dependencies for testing the templates in the user's code. 2) Add readonly instances to the template class Example: public class Template { public string Name {get; set;} public string Description {get; set;} public string Attribute1 {get; set;} public string Attribute2 {get; set;} public Template PredefinedTemplate { get { Template templateInstance = new Template(); templateInstance.Name = "Some Name"; templateInstance.Description = "A description"; ... return templateInstance; } } public Template() { //constructor } public void DoSomething() { //does something } ... } This method would be convenient for users, as they would be able to access the predefined templates in code directly, and would be able to unit test code that used them. The drawback here is that the predefined templates pollute the Template type namespace with a bunch of extra stuff. I suppose I could put the predefined templates in a different namespace to get around this drawback. The only other problem with this approach is that I'd have to basically duplicate all the namespaces in the library in the predefined namespace (e.g. Templates.SubTemplates and Predefined.Templates.SubTemplates) which would be a pain, and would also make refactoring more difficult. 3) Make the templates abstract classes and make the predefined templates inherit from those classes. For example: public abstract class Template { public string Name {get; set;} public string Description {get; set;} public string Attribute1 {get; set;} public string Attribute2 {get; set;} public Template() { //constructor } public void DoSomething() { //does something } ... } and public class PredefinedTemplate : Template { public PredefinedTemplate() { this.Name = "Some Name"; this.Description = "A description"; this.Attribute1 = "Some Value"; ... } } This solution is pretty similar to #2, but it ends up creating a lot of classes that don't really do anything (none of our predefined templates are currently overriding behavior), and don't have any methods, so I'm not sure how good a practice this is. Has anyone else had any experience with something like this? 
Is there a best practice of some kind, or a different/better approach that I haven't thought of? I'm kind of banging my head against a wall trying to figure out the best way to go. Thanks!
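    As a point of comparison for option 1, here is a minimal sketch (not from the question) of what the file-based approach could look like with XmlSerializer; the file names and locations are placeholders:

        // Option 1 sketch: persist predefined templates as XML files and load them on demand.
        using System.IO;
        using System.Xml.Serialization;

        public static class TemplateFile
        {
            public static void Save(Template template, string path)
            {
                var serializer = new XmlSerializer(typeof(Template));
                using (var stream = File.Create(path))
                {
                    serializer.Serialize(stream, template);
                }
            }

            public static Template Load(string path)
            {
                var serializer = new XmlSerializer(typeof(Template));
                using (var stream = File.OpenRead(path))
                {
                    return (Template)serializer.Deserialize(stream);
                }
            }
        }

        // usage: var predefined = TemplateFile.Load(@"Templates\SomeName.xml");

    This keeps the predefined instances out of the Template type's namespace, at the cost of the file management and deployment concerns described above.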

    Read the article

  • BizTalk server problem

    - by WtFudgE
    Hi, we have a BizTalk server (a virtual one (1!)...) at our company, and a SQL server where the data is kept. We have a lot of data traffic; I'm talking about hundreds of thousands of messages. So I'm actually not even sure that one server is safe enough, but our company is not that easy to convince. Recently we have had a lot of problems. Allow me to describe the situation in detail, so I'm not missing anything. Our server has 5 applications: One with 3 orchestrations, 12 send ports, 16 receive locations. One with 4 orchestrations, 32 send ports, 20 receive locations. One with 4 orchestrations, 24 send ports, 20 receive locations. One with 47 (yes, 47) orchestrations, 37 send ports, 6 receive locations. And one common application with a couple of resources. Our problems have occurred since we deployed the application with the 47 orchestrations. A lot of these orchestrations use assign shapes which use C# code to do the mapping. This is because we use HL7 extensions, which are kind of special, so by using C# code and XPath it was a lot easier to do the mapping, because a lot of these schemas look alike. The C# reads in XmlNodes received through XPath and returns XmlNodes which are then assigned again to BizTalk messages. I'm not sure if this could be the cause, but I thought I'd mention it. The send and receive ports have a lot of different types: File, MQSeries, SQL, MLLP, FTP. Each of these types has a different host instance, to balance out the load. Our orchestrations use the BizTalkApplication host. On this server a couple of scripts are also running, mostly FTP upload scripts and also a zipper script, which zips files every half an hour into a daily zip and deletes the zip files after a month. We use this zip script on our backup files (we back up a lot, and the backups are also on our server); we did this because the server had problems with sending files to a location where there were a lot (A LOT) of files, and after the files were reduced to zips it went better. The problems we are having recently are mainly two major ones. Our most important problem is the following. We kept a receive location with a lot of messages on a queue for testing. After we start this receive location, which uses the 47 orchestrations, the number of running service instances starts to skyrocket. OK, this is pretty normal. Let's say about 10,000, and then we stop the receive location to see how BizTalk handles these 10,000 instances. Normally they would go down pretty fast, and sometimes they do, but after a while it starts to "throttle", meaning they just stop being processed and the number of service instances stays the same; for example, in 30 seconds it goes down from 10,000 to 4,000, and then it stays at 4,000 and lowers very, very slowly, like 30 in 5 minutes or so. This means that all the other service instances of the other applications are also stuck in there, and they are also not processed. We noticed that after restarting our host instances the instance number went down fast again. So we tried to selectively restart different host instances to locate the problem. We noticed that eventually restarting the file send/receive host instance would do the trick, so we thought file sends would be the problem, considering that we make a lot of backups. So we replaced the file-type backups with MQSeries backups. The same problem occurred, and funny thing, restarting the file send/receive host still fixes the problem. No errors can be found in the event viewer either. A second problem we're having is
    that sometimes, at around 6 am, all or some of the host instances are stopped. In the event viewer we noticed the following errors (there are more than one):
    The receive location "MdnBericht SQL" with URL "SQL://ZNACDBPEG/mdnd0001/" is shutting down. Details: "The error threshold has been exceeded. The receive location is shutting down.".
    The Messaging Engine failed to add a receive location "M2m Othello Export Start Bestand" with URL "\\m2mservices\Othello_import$\DataFilter Start\*.xml" to the adapter "FILE". Reason: "The FILE adapter cannot access the folder \\m2mservices\Othello_import$\DataFilter Start. Verify this folder exists. Error: Logon failure: unknown user name or bad password.".
    The FILE adapter cannot access the folder \\m2mservices\Othello_import$\DataFilter Start. Verify this folder exists. Error: Logon failure: unknown user name or bad password.
    An attempt to connect to the "BizTalkMsgBoxDb" SQL Server database on server "ZNACDBBTS" failed. Error: "Login failed for user ''. The user is not associated with a trusted SQL Server connection."
    It would seem that there's a login failure at this time, and that because of it other services are also experiencing problems and eventually get shut down. The thing is, our user is an admin, and it's impossible that its password is wrong "sometimes". We have considered that the problem could be due to an infrastructure issue, but that's not really our department. I know it's a long post, but we're not sure what to do anymore. Would adding another server and balancing the load solve our problems? Is there a way to measure our load and know where to start splitting? What are normal load numbers, etc.? I appreciate any answers, because these issues are getting worse and we're also on a deadline. Thanks a lot for any replies!

    Read the article

  • Lucene complex structure search

    - by archer
    Basically I have a pretty simple database that I'd like to index with Lucene. The domain classes are: // Person domain class Person { Set<Pair> keys; } // Pair domain class Pair { KeyItem keyItem; String value; } // KeyItem domain, name is a unique field within the DB (!!) class KeyItem { String name; } I have tens of millions of profiles and hundreds of millions of Pairs; however, since most of the KeyItems' "name" fields are duplicates, there are only a few dozen KeyItem instances. I came up with that structure to save on KeyItem instances; basically, any profile with any fields can be saved into that structure. Let's say we have a profile with the properties - name: Andrew Morton - education: University of New South Wales - country: Australia - occupation: Linux programmer. To store it, we'll have a single Profile instance, 4 KeyItem instances: name, education, country and occupation, and 4 Pair instances with the values "Andrew Morton", "University of New South Wales", "Australia" and "Linux Programmer". All other profiles will reference (all or some of) the same KeyItem instances: name, education, country and occupation. My question is, how do I index all of that so I can search for a Profile by particular values of KeyItem::name and Pair::value? Ideally I'd like this kind of query to work: name:Andrew* AND occupation:Linux* Should I create a custom Indexer and Searcher? Or could I use the standard ones and just map KeyItem and Pair as Lucene components somehow?
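    For illustration only, here is a minimal sketch using the plain Lucene 3.x API rather than a custom Indexer/Searcher (class and package names shift between Lucene versions, so treat the exact signatures as assumptions): each Person becomes one Document and each Pair becomes a field named after its KeyItem, after which the query shown above works with the standard QueryParser.

        import java.io.IOException;
        import org.apache.lucene.analysis.standard.StandardAnalyzer;
        import org.apache.lucene.document.Document;
        import org.apache.lucene.document.Field;
        import org.apache.lucene.index.IndexWriter;
        import org.apache.lucene.queryParser.ParseException;
        import org.apache.lucene.queryParser.QueryParser;
        import org.apache.lucene.search.Query;
        import org.apache.lucene.util.Version;

        class PersonIndexer {
            // One Lucene Document per Person; each Pair becomes a field named after its KeyItem.
            static void index(IndexWriter writer, Person person) throws IOException {
                Document doc = new Document();
                for (Pair pair : person.keys) {
                    doc.add(new Field(pair.keyItem.name, pair.value,
                                      Field.Store.YES, Field.Index.ANALYZED));
                }
                writer.addDocument(doc);
            }

            // The desired query then works with the standard query syntax.
            static Query buildQuery() throws ParseException {
                QueryParser parser = new QueryParser(Version.LUCENE_30, "name",
                                                     new StandardAnalyzer(Version.LUCENE_30));
                return parser.parse("name:Andrew* AND occupation:Linux*");
            }
        }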

    Read the article

  • How to get the list of SQL Server instances of a particular machine

    - by Cute
    Can anyone tell me how to get remote SQL Server instances using C# and SMO or any other API? I have a remote server named "RemoteMC", which has 2 instances of SQL Server: "RemoteMC" and "RemoteMC\sqlexpress". I try to get the instances in code like this: Server srv = new Server("RemoteMC"); DataTable dt = SmoApplication.EnumAvailableSqlServers(true); But it returns "Host\sqlexpress" and I don't know what went wrong. How can I get the result back as: RemoteMC RemoteMC\sqlexpress ?
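    As a hedged sketch of one approach (not a verified answer): EnumAvailableSqlServers(true) limits the scan to the local machine, so passing false to include network-visible instances and then filtering the returned DataTable on the server name may give the expected list, provided the SQL Server Browser service on RemoteMC is reachable. The column names are taken from the SMO documentation; verify them against dt.Columns in your environment.

        using System;
        using System.Data;
        using Microsoft.SqlServer.Management.Smo;

        class ListRemoteInstances
        {
            static void Main()
            {
                // false = do not restrict the enumeration to the local machine
                DataTable dt = SmoApplication.EnumAvailableSqlServers(false);

                foreach (DataRow row in dt.Rows)
                {
                    // "Server" holds the machine name, "Name" the full instance name
                    if (string.Equals(row["Server"] as string, "RemoteMC",
                                      StringComparison.OrdinalIgnoreCase))
                    {
                        Console.WriteLine(row["Name"]);  // e.g. RemoteMC or RemoteMC\sqlexpress
                    }
                }
            }
        }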

    Read the article

  • [numpy] storing record arrays in object arrays

    - by Peter Prettenhofer
    I'd like to convert a list of record arrays -- dtype is (uint32, float32) -- into a numpy array of dtype np.object: X = np.array(instances, dtype = np.object) where instances is a list of arrays with data type np.dtype([('f0', '<u4'), ('f1', '<f4')]). However, the above statement results in an array whose elements are also of type np.object: X[0] array([(67111L, 1.0), (104242L, 1.0)], dtype=object) Does anybody know why? The following statement should be equivalent to the above but gives the desired result: X = np.empty((len(instances),), dtype = np.object) X[:] = instances X[0] array([(67111L, 1.0), (104242L, 1.0)], dtype=[('f0', '<u4'), ('f1', '<f4')]) thanks & best regards, peter

    Read the article

  • Asp.net Mvc - Kigg: Maintain User object in HttpContext.Items between requests.

    - by Pickels
    Hello, first I want to say that I hope this doesn't look like I am lazy, but I have some trouble understanding a piece of code from the following project: http://kigg.codeplex.com/ I was going through the source code and I noticed something that would be useful for my own little project I am making. In their BaseController they have the following code: private static readonly Type CurrentUserKey = typeof(IUser); public IUser CurrentUser { get { if (!string.IsNullOrEmpty(CurrentUserName)) { IUser user = HttpContext.Items[CurrentUserKey] as IUser; if (user == null) { user = AccountRepository.FindByClaim(CurrentUserName); if (user != null) { HttpContext.Items[CurrentUserKey] = user; } } return user; } return null; } } This isn't an exact copy of the code; I adjusted it a little to my needs. This part of the code I still understand: they store their IUser in HttpContext.Items. I guess they do it so that they don't have to call the database each time they need the User object. The part that I don't understand is how they maintain this object between requests, since if I understand correctly HttpContext.Items is a per-request cache storage. After some more digging I found the following code: internal static IDictionary<UnityPerWebRequestLifetimeManager, object> GetInstances(HttpContextBase httpContext) { IDictionary<UnityPerWebRequestLifetimeManager, object> instances; if (httpContext.Items.Contains(Key)) { instances = (IDictionary<UnityPerWebRequestLifetimeManager, object>) httpContext.Items[Key]; } else { lock (httpContext.Items) { if (httpContext.Items.Contains(Key)) { instances = (IDictionary<UnityPerWebRequestLifetimeManager, object>) httpContext.Items[Key]; } else { instances = new Dictionary<UnityPerWebRequestLifetimeManager, object>(); httpContext.Items.Add(Key, instances); } } } return instances; } This is the part where some magic happens that I don't understand. I think they use Unity to do some dependency injection on each request? In my project I am using Ninject and I am wondering how I can get the same result. I guess InRequestScope in Ninject is the same as UnityPerWebRequestLifetimeManager? I am also wondering which class/method they are binding to which interface. Since HttpContext.Items gets destroyed on each request, how do they prevent losing their user object? Anyway, it's kind of a long question, so I am grateful for any push in the right direction. Kind regards, Pickels
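    In case it helps, here is a hedged sketch of roughly the same idea with Ninject: InRequestScope (from the Ninject web extensions) plays the part of UnityPerWebRequestLifetimeManager, giving one instance per HTTP request, while a CurrentUser property can keep using HttpContext.Items exactly as in the Kigg code. The module and type names below are illustrative only, not Kigg's:

        // Illustrative only -- not Kigg code.
        using Ninject.Modules;
        using Ninject.Web.Common;   // provides the InRequestScope() extension

        public class RepositoryModule : NinjectModule
        {
            public override void Load()
            {
                // One repository instance per web request, comparable to the Unity
                // per-web-request lifetime manager used in Kigg.
                Bind<IAccountRepository>().To<AccountRepository>().InRequestScope();
            }
        }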

    Read the article

  • Simple "Hello World!" console application crashes when run by windows TaskScheduler (1.0)

    - by user326627
    I have a batch file which starts multiple instances of a simple console application (Hello World!). I work on Windows Server 2008 64-bit. I configured it to run in the Task Scheduler, at startup, whether a user is logged in or not. The latter setting means that the instances will run without a GUI (i.e. no window). When I run this task, some of the instances just fail after consuming 100% CPU. The application event log shows the following error: "Faulting module KERNEL32.dll, version 6.0.6002.18005, time stamp 0x49e0421d, exception code 0xc0000142, fault offset 0x00000000000b8fb8, process id 0x29bc, application start time 0x01cae17d94a61895." Running the batch file directly works just fine. It seems to me that the OS has a problem loading that many instances of the application when no window is displayed. However, I can't figure out why... Any ideas?

    Read the article

  • A LINQ query to find something

    - by rap-uvic
    Hi, I have an object A which contains multiple instances of object B, each of which in turn contains multiple instances of object C. I need to write a function which, given an object A, searches through the instances of B and C and finds a particular object C. How would I do this using LINQ?
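    For what it's worth, a sketch of one way to do this with SelectMany; the collection and member names are made up, since the real classes aren't shown:

        using System.Linq;

        public static class Finder
        {
            // Assumes A exposes a collection Bs, B exposes a collection Cs, and C has an Id.
            public static C FindC(A a, int targetId)
            {
                return a.Bs
                        .SelectMany(b => b.Cs)                 // flatten all C instances under this A
                        .FirstOrDefault(c => c.Id == targetId);
            }
        }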

    Read the article

  • What happens when I instantiate a class in Python?

    - by Konstantin
    Could you clarify some ideas behind Python classes and class instances? Consider this: class A(): name = 'A' a = A() a.name = 'B' # point 1 (instance of class A is used here) print a.name print A.name prints: B A If instead at point 1 I use the class name, the output is different: A.name = 'B' # point 1 (updated, class A itself is used here) prints: B B Even if classes in Python were some kind of prototype for class instances, I'd expect already-created instances to remain intact, i.e. output like this: A B Can you explain what is actually going on?
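    One way to see what is going on is to look at the two attribute dictionaries; a small sketch consistent with the code above:

        class A():
            name = 'A'

        a = A()
        print(a.__dict__)            # {}   -> the instance has no 'name' of its own yet
        print(A.__dict__['name'])    # 'A'  -> 'name' lives on the class

        a.name = 'B'                 # creates an attribute on the instance only
        print(a.__dict__)            # {'name': 'B'}
        print(A.name)                # still 'A'

        b = A()
        A.name = 'C'                 # rebinding the class attribute...
        print(b.name)                # 'C'  -> ...is seen by instances that never shadowed it
        print(a.name)                # 'B'  -> but not by the one that did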

    Read the article

  • C#/XNA Giant Memory Leak

    - by user1440926
    This is my first post here. I'm making a game in Visual Studio 2010 using XNA, and I've hit a giant memory leak. My game starts out using 17K of RAM and after ten minutes it's up to 65K. I ran some memory profilers, and they all say that new instances of the String object are being created, but they aren't live; the number of live String instances hasn't changed at all. It's also creating instances of Char[] (which I'd expect from it), Object[], and StringBuilder. My game is pretty new, but there's too much code to post here. I have no idea how to get rid of instances that aren't live. Please help!
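    As a hedged illustration of a common source of exactly these allocations in XNA games (String, Char[], Object[] and StringBuilder are what string concatenation produces under the hood): building text every frame inside Update or Draw. The field names below are made up; the point is the before/after pattern rather than any specific game code.

        // Inside Draw(): allocates a new string (plus temporaries) every frame, roughly 60 times per second.
        spriteBatch.DrawString(font, "Score: " + score, position, Color.White);

        // One alternative: rebuild the text only when the value actually changes.
        if (score != lastDisplayedScore)
        {
            lastDisplayedScore = score;
            scoreText = "Score: " + score;   // allocation happens only on change
        }
        spriteBatch.DrawString(font, scoreText, position, Color.White);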

    Read the article

  • Is an Object the smallest pageable unit in the Heap?

    - by DonnieKun
    Hi, if I have 2 GB of RAM and I have 2 instances of an object which is 1.5 GB each, the operating system will help and swap the pages to and from the hard disk. What if I have 1 instance that is 3 GB? Can the same paging method break this instance down into 2 pages? Or will I encounter an out-of-memory issue? Thanks.

    Read the article

  • Communication Between Your PC and Azure VM via Windows Azure Connect

    - by Shaun
    With the new release of the Windows Azure platform there are a lot of new features available. In my previous post I introduced one of them, remote desktop access to an azure virtual machine. Now I would like to talk about another cool one: Windows Azure Connect.
    What is Windows Azure Connect
    I would like to quote the definition of Windows Azure Connect from MSDN: With Windows Azure Connect, you can use a simple user interface to configure IP-sec protected connections between computers or virtual machines (VMs) in your organization's network, and roles running in Windows Azure. IP-sec protects communications over Internet Protocol (IP) networks through the use of cryptographic security services. There is an image available on MSDN as well that I would like to forward here. As we can see, using Windows Azure Connect, Worker Role 1 and Web Role 1 are connected with the development machines and database servers, some of which are inside the organization and some of which are not. With Windows Azure Connect, the roles deployed in the cloud can consume resources located inside our intranet or anywhere in the world. That means the roles can connect to the local database and access local shared resources such as shared files, folders, printers, etc.
    Difference between Windows Azure Connect and AppFabric
    It may seem that Windows Azure Connect duplicates the Windows Azure AppFabric: both of them aim to solve the problem of how to communicate between resources in the cloud and resources inside the local network. The table below lists the differences as I understand them.
    Category | Windows Azure Connect | Windows Azure AppFabric
    Purpose | An IP-sec connection between the local machines and azure roles. | An application service running on the cloud.
    Connectivity | IP-sec, domain-join | Net.TCP, HTTP, HTTPS
    Components | Windows Azure Connect driver | Service Bus, Access Control, Caching
    Usage | Azure roles connect to a local database server; azure roles use local shared files, folders and printers, etc.; azure roles join the local AD. | Expose a local service to the Internet; move the authorization process to the cloud; integrate existing identities such as Live ID, Google ID, etc. with existing local services; utilize the distributed cache.
    And here are some scenarios showing which of them should be used.
    Scenario | Connect | AppFabric
    I have a service deployed in the intranet and I want people to be able to use it from the Internet. | | Y
    I have a website deployed on Azure that needs to use a database deployed inside the company, and I don't want to expose the database to the Internet. | Y |
    I have a service deployed in the intranet that uses AD authorization, and a website deployed on Azure which needs to use this service. | Y |
    I have a service deployed in the intranet; some people on the Internet can use it, but they need to be authorized and authenticated. | | Y
    I have a service in the intranet and a website deployed on Azure. The service can be used from the Internet, and the website should be able to use it as well, with AD authorization for more functionality. | Y | Y
    How to Enable Windows Azure Connect
    OK, we have talked a lot about Windows Azure Connect and its differences from the Windows Azure AppFabric. Now let's see how to enable and use Windows Azure Connect. First of all, since this feature is in the CTP stage, we have to apply for it before using it. On the Windows Azure Portal we can see our CTP feature status under the Home, Beta Program page.
    You can apply to join the Beta Programs on this page, and after a few days Microsoft will send an email to you (the email of your Live ID) when the feature is available. In my case we can see that Windows Azure Connect has been activated by Microsoft, and then we can click the Connect button on top, or click the Virtual Network item in the left navigation bar. The first thing we need to do, if it's our first time entering the Connect page, is to enable Windows Azure Connect. After that we can see our Windows Azure Connect information on this page.
    Add a Local Machine to Azure Connect
    As explained above, Windows Azure Connect can make an IP-sec connection between local machines and azure role instances, so we first add a local machine to our Azure Connect. To do this we click the Install Local Endpoint button on top and the portal gives us a URL. Copy this URL to the machine we want to add and it will download the software for us. This software is installed on the local machines that we want to join to the Connect. After it is installed, a tray icon appears to indicate that this machine has joined our Connect. The local application refreshes its status to the Windows Azure platform every 5 minutes, but we can click the Refresh button to make it retrieve the latest status at once. Now my local machine is ready to connect, and we can see my machine in the Windows Azure Portal if we switch back to the portal and select the Activated Endpoints node.
    Add a Windows Azure Role to Azure Connect
    Let's create a very simple azure project with a basic ASP.NET web role inside. To make it available on Windows Azure Connect we open the azure project properties of this role from the Solution Explorer in Visual Studio, select the Virtual Network tab, and check Activate Windows Azure Connect. The next step is to get the activation token from the Windows Azure Portal. On the same page there is a button named Get Activation Token; click this button and the portal will display the token. We copy this token, paste it into the box in the Visual Studio tab, and then deploy the application to azure. After the deployment completes we can see the role instance listed in the Windows Azure Portal - Virtual Connect section.
    Establish the Connect Group
    The final task is to create a connect group which contains the machines and role instances that need to be connected to each other. This can be done in the portal very easily. The machines and instances will NOT be connected until we have created the group for them. The machines and instances can be used in one or more groups. In the Virtual Connect section, click the Groups and Roles node in the left side navigation bar and click the Create Group button on top. This brings up a dialog. What we need to do is specify a group name and description, and then select the local computers and azure role instances into this group. After the Azure Fabric has updated the group settings we can see the groups and the endpoints on the page. And if we switch back to the local machine we can see that the tray icon has changed and the status has turned to Connected. Windows Azure Connect updates the group information every 5 minutes. If you find the status is still Disconnected, right-click the tray icon and select the Refresh menu to retrieve the latest group policy and make it connected.
    Test the Azure Connect between the Local Machine and the Azure Role Instance
    Now our local machine and azure role instance have been connected. This means each of them can communicate with the other at the IP level. For example, we can open the SQL Server port so that our azure role can connect to it by using the machine name or the IP address. Windows Azure Connect uses IPv6 to connect the local machines and role instances. You can get the IP address from the Windows Azure Portal Virtual Network section when selecting an endpoint. I don't want to give a full example of how to use the Connect, but I would like to show two very simple tests. The first one is PING. When a local machine and role instance are connected through Windows Azure Connect, we can PING either of them if we open the ICMP protocol in the firewall settings. To do this we need to run a command before testing. Open the command window on the local machine and the role instance, and execute the following command: netsh advfirewall firewall add rule name="ICMPv6" dir=in action=allow enable=yes protocol=icmpv6 Thanks to Jason Chen, Patriek van Dorp, Anton Staykov and Steve Marx, who helped me enable the ICMPv6 setting. For the full discussion we had, please visit here. You can use the Remote Desktop Access feature to log on to the azure role instance; please refer to my previous blog post to learn how to use Remote Desktop Access in Windows Azure. Then we can PING the machine or the role instance by specifying its name. Below is the screen where I PING my local machine from my azure instance. We can use the IPv6 address to PING each other as well; in the following image I PING my role instance from my local machine through the IPv6 address. Another example I would like to demonstrate here is folder sharing. I shared a folder on my local machine, and then if we log on to the role instance we can see the folder content from the file explorer window.
    Summary
    In this blog post I introduced another new feature: Windows Azure Connect. With this feature our local resources and role instances (virtual machines) can be connected to each other. In this way our azure application can use our local resources, such as database servers, printers, etc., without exposing them to the Internet.
    Hope this helps, Shaun
    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • OFM 11g: Implementing OAM SSO with Forms

    - by olaf.heimburger
There is some confusion about the integration of OFM 11g Forms with Oracle Access Manager 11g (OAM). Some say this does not work, some say it works, but... Actually, having implemented it many times, I belong to the latter group. Here is how.

Caveat
Before you start installing anything, take a step back and consider your current implementation and what you really need and want to achieve. The current integration of Forms 11g with OAM 11g does not support self-service account creation and password resets from the Forms application. If you really need this, you must use the existing Oracle AS 10.1.4.3 infrastructure. On the other hand, if your user population is pretty stable, you can enjoy the latest Forms 11g with OAM 11g.

Assumptions
The whole process should be done in one day. I assume that all domains and instances are started during setup; if you need to restart them on demand or on purpose, be sure to have proper start/stop scripts - I don't mention them.

Preparation
It goes without saying that you should always do a proper backup before you change anything on your production environment. By a proper backup I also mean a tested and verified restore process. If you have never dared to test it before, do it now. It pays off.

Requirements
For OAM 11g to work properly you need an LDAP repository. For the integration of Forms 11g you need an Oracle Internet Directory (OID) configured with the Oracle AS SSO LDAP extensions. For better support I usually give the latest version a try; in this case OID 11g is a good choice. During the installation and integration steps we use an upgrade wizard that needs the old OID configuration on the same host but in a different ORACLE_HOME.

Installation vs Configuration
With OFM 11g Oracle introduced a clear separation between installation of the binaries (the software) and configuration of the instances (the runtime). This is really great, as you can install all the software and create new instances when needed. In the following we adhere to this scheme: install the software first and configure the instances later.

Installation Steps
The Oracle documentation contains all the necessary steps for the installation of all pieces of software, but some hints help to avoid traps and pitfalls.

Step 1 The Database
Start the installation with the database. It is quite obvious, but we need an Oracle database for all the other steps. If you have one at hand, fine. If not, install at least an Oracle 10.2.0.4 version. This database can be on a different host.

Step 2 The Repository Creation Utility
The next step should be to run the Repository Creation Utility (RCU). This is a client application that just needs to connect to your database. It can be run on any Windows or Linux 32-bit machine that can reach the database. When you run it, be sure to install the OID schema and the OAM schema. If you miss one of these, you can run the RCU again to install the missing schema.

Step 3 The Foundation
With OFM 11g Oracle started to use WebLogic Server 11g (WLS) as the foundation for all OFM 11g installations. We therefore install it first. Depending on your operating system, it is possible that no native installer is available. My approach to this dilemma is to use the WLS Generic Installer for all my installations. It does not include a JDK either, but if you have both for your platform you are ready to go.

Step 3a The JDK
To make things interesting, Oracle currently has two JDKs in its portfolio: the Sun JDK and the JRockit JDK.
Both are available for a number of platforms. If you are lucky and both are available for your platform, install each of them in its own directory (and not in one of your ORACLE_HOMEs); you can use either of them later as you like.

Step 3b Install WLS for OID and OAM
With the JDK installed, we start the generic installer with java -jar wls_generic.jar. STOP! Before you do this, check the Java version first. It should be 1.6.0_18 or later and not the GCC one (some Linux distros have it installed by default). To verify the version, issue a java -version command and make sure that the output does not contain the text gcj and that the version matches. If this does not work, use an absolute path like /opt/java/jdk1.6.0_23/bin/java to start the installer. The installer allows you to specify a path to install the software into, say /opt/oracle/iam/11.1.1.3 for the OID and OAM installation. We will call this IAM_HOME.

Step 4 Install OID
Now we are ready to install OID. Start the OID installer (in the Disk1 directory) and select the installation-only step. This installs the software only and does not configure an instance. Use the IAM_HOME as the target directory.

Step 5 Install SOA Suite
The IAM 11g Suite uses the BPEL component of the SOA Suite 11g for its workflows. This is a pretty closed environment and not to be used for SCA composites. We install the SOA Suite in $IAM_HOME/soa. The installer only installs the binaries; configuration will be done later.

Step 6 Install OAM
Once the installation of OID and SOA is done, we are ready to install the OAM software in the same IAM_HOME. Make sure to install the OAM binaries in a directory different from the one you used during the OID and SOA installation. As before, we only install the software; the instance will be created later.

Step 7 Backup the Installation
At this point I normally do a backup (or a snapshot in a virtual image) of the installation. It is good to have when you need to go back to this point.

Step 8 Configure OID
The software is installed and now we need instances to run it. This process is called configuration. For OID, use the config.sh found in $IAM_HOME/oid/bin to start the configuration wizard. Normally this runs smoothly; if you encounter issues, check the Oracle Support site for help. This configuration will also start the OID instance.

Step 9 Install the Oracle AS SSO Schema
Before we install the Forms software we need to install the Oracle AS SSO schema into the database and OID. This is a rather dangerous procedure, but it is fully documented in the IAM Installation Guide, Chapter 10. You should finish this in one go; do not reboot your host during the whole procedure. As a precaution, make a backup of the OID instance before you start. Once the backup is ready, read the chapter, including every note, carefully. You can avoid a number of issues by following all the steps, and you will end up with a working solution.

Step 10 Configure OAM
Reached this step? Great. You are ready to create an OAM instance. Use $IAM_HOME/iam/common/bin/config.sh for this. This opens the WLS Domain Creation Wizard and asks for the libraries to be installed. You should at least select the OAM with Database repository item. The configuration will also start the OAM instance.

Step 11 Install WLS for Forms 11g
It is quite tempting to install everything in one ORACLE_HOME. Unfortunately this does not work for all OFM packages, so we do another WLS installation in another ORACLE_HOME. The same considerations as in step 3b apply.
We call this one FORMS_HOME.

Step 12 Install Forms
In the FORMS_HOME we now install the binaries for the Forms 11g software. Again, this is an install-only step; configuration starts with the next step.

Step 13 Configure Forms
To configure Forms 11g we start the Configuration Wizard (config.sh) in FORMS_HOME/bin. This wizard should create a new WebLogic domain and an OHS instance! Do not extend existing domains or instances! Forms should run in its own instances! When all information is supplied, the wizard creates the domain and instance and starts them automatically.

Step 14 Set up your Forms SSO Environment
Once you have implemented and tested your Forms 11g instance, you can configure it for SSO. Yes, this requires the old Oracle AS SSO solution: OIDDAS for creating and assigning users, and SSO to set up your partner applications. In this step you should consider creating every user necessary for use within the environment. When done, do not forget to test it.

Step 15 Migrate the SSO Repository
Since the final goal is to get rid of the old SSO implementation, we need to migrate the old SSO repository into the new OID structure. Additionally, this step migrates all partner application configurations into OAM 11g. Quite convenient. To do this, start the upgrade agent (ua, ua.bat, or ua.cmd) at the operating system level in $IAM_HOME/bin. Once finished, this wizard creates new osso.conf files for each partner application in $IAM_HOME/upgrade/temp/oam/.
Note: At the time of this writing, this step only works if everything is on the same host (i.e. OID, OAM, etc.). This restriction might be lifted in later releases.

Step 16 Change your OHS osso.conf and shut down OC4J_SECURITY
In step 14 we verified that SSO for our Forms environment works fine. Now we shut the old system down and reconfigure the OHS that acts as the Forms entry point. First we go to the OHS configuration directory and rename the old osso.conf to osso.conf.10g. Next we change moduleconf/mod_osso.conf to point to the new osso.conf file, and copy the new osso.conf file from $IAM_HOME/upgrade/temp/oam/ to the OHS configuration directory. Restart OHS and test Forms by using the same Forms links. OAM should now kick in and show the login dialog asking for your user credentials.
Done. Now your Forms environment is successfully integrated with OAM 11g. Enjoy.

What's Next?
This rather lengthy setup is just the foundation for your growing environment of OAM 11g protections. In the next entry we will show that Forms 11g and ADF Faces 11g can use the same OAM installation and provide real single sign-on.

References
Nearly everything is documented. Use the documentation!
Oracle® Fusion Middleware Installation Guide for Oracle Identity Management 11gR1
Oracle® Fusion Middleware Installation Guide for Oracle Identity Management 11gR1, Chapter 11-14
Oracle® Fusion Middleware Administrator's Guide for Oracle Access Manager 11gR1, Appendix B
Oracle® Fusion Middleware Upgrade Guide for Oracle Identity Management 11gR1, Chapter 10

    Read the article

  • SQL Server Licensing in a VMware vSphere Cluster

    - by Helvick
    If I have SQL Server 2008 instances running in virtual machines on a VMware vSphere cluster with vMotion\DRS enabled so that the VM's can (potentially) run on any one of the physical servers in the cluster what precisely are the license requirements? For example assume that I have 4 physical ESX Hosts with dual physical CPU's and 3 separate single vCPU Virtual Machines running SQL Server 2008 running in that cluster. How many SQL Standard Processor licenses would I need? Is it 3 (one per VM) or 12 (one per VM on each physical host) or something else? How many SQL Enterprise Processor licenses would I need? Is it 3 (one per VM) or 8 (one for each physical CPU in the cluster) or, again, something else? The range in the list prices for these options goes from $17k to $200k so getting it right is quite important. Bonus question: If I choose the Server+CAL licensing model do I need to buy multiple Server instance licenses for each of the ESX hosts (so 12 copies of the SQL Server Standard server license so that there are enough licenses on each host to run all VM's) or again can I just license the VM and what difference would using Enterprise per server licensing make? Edited to Add Having spent some time reading the SQL 2008 Licensing Guide (63 Pages! Includes Maps!*) I've come across this: • Under the Server/CAL model, you may run unlimited instances of SQL Server 2008 Enterprise within the server farm, and move those instances freely, as long as those instances are not running on more servers than the number of licenses assigned to the server farm. • Under the Per Processor model, you effectively count the greatest number of physical processors that may support running instances of SQL Server 2008 Enterprise at any one time across the server farm and assign that number of Processor licenses And earlier: ..For SQL Server, these rule changes apply to SQL Server 2008 Enterprise only. By my reading this means that for my 3 VM's I only need 3 SQL 2008 Enterprise Processor Licenses or one copy of Server Enterprise + CALs for the cluster. By implication it means that I have to license all processors if I choose SQL 2008 Standard Processor licensing or that I have to buy a copy of SQL Server 2008 Standard for each ESX host if I choose to use CALs. *There is a map to demonstrate that a Server Farm cannot extend across an area broader than 3 timezones unless it's in the European Free Trade Area, I wasn't expecting that when I started reading it.
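Purely as an editorial aside to make the arithmetic behind the candidate numbers explicit (this is not part of the original question and says nothing about which count is contractually correct), here is a throwaway C# sketch of where 3, 12 and 8 come from for the cluster described above; the variable names are mine.

    using System;

    class LicenseCountSketch
    {
        static void Main()
        {
            // Cluster from the question: 4 ESX hosts, 2 physical CPUs each, 3 single-vCPU SQL VMs.
            int hosts = 4, cpusPerHost = 2, sqlVms = 3;

            int perVm = sqlVms;                       // one processor license per VM            -> 3
            int perVmOnEveryHost = sqlVms * hosts;    // one per VM on each host it could run on -> 12
            int perPhysicalCpu = hosts * cpusPerHost; // one per physical CPU in the cluster     -> 8

            Console.WriteLine(perVm + ", " + perVmOnEveryHost + ", " + perPhysicalCpu);
        }
    }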

    Read the article

  • Triple (3) Monitors under Linux

    - by widgisoft
I have a 3-monitor setup (each 1680x1050) via an Nvidia NVS440 (2 GPUs, 2 outputs per GPU, totalling 4 outputs); this works fine under Windows XP and 7 but caused considerable headaches under Linux (Ubuntu 9.04). I had previously used an XFX 9600GT plus the onboard XFX 9300GS to produce the same result, but that card was noisy and power hungry, and I was hoping there was some magical switch in the NVS440 that got rid of this annoying problem - it turns out the NVS440 is just 2 cards on one physical PCB :-p (I searched the net high and low for people using this card under Linux but found nothing; if anything the card uses less power and is fanless, so I stood to benefit from it either way.)

Anyway, using either setup there were 5 solutions available:

1. Have 3 separate X instances, all unjoined
2. Have 3 separate X instances, joined by Xinerama
3. Have 2 separate X instances - one using TwinView, both joined by Xinerama
4. Have 2 separate X instances - one using TwinView, but no Xinerama
5. Have a single TwinView setup and leave the 3rd screen unplugged :-p

The 4th option, using 2 separate X instances and TwinView (but no Xinerama), was the best balance in terms of performance and usability, but it caused some really annoying issues:

· You couldn't control (without altering the shortcuts) which screen an application opened onto - and once it was opened you couldn't move it to another screen without opening a terminal and forcing it to move.
· Nvidia's overriding or falsifying of Xinerama breaks, and the 2 screens joined by TwinView behave like a single huge screen, causing popups to open in the middle of both screens and maximising of windows to stretch to the width of the first 2 screens.
· Firefox can only run one instance per user, so having multiple Firefox windows requires at least 2 users.

The second option "feels" like the right option, but OpenGL is basically disabled, and playing any sort of game or even running anything graphical causes a huge performance drop and instability - even trying to run a basic emulator for GBA or Gens just causes the system to fall over. It works just enough to stare at your desktop and do nothing, but as soon as you start doing some work - opening windows, dragging things around, running multiple copies of Firefox - it just feels really slow.

The last option, only going dual screen, works perfectly and everything performs as required: full GPU acceleration, two logical screen spaces - perfect, just make it work across GPUs like Windows does! :-p

Anyway, I know RandR was supposed to pick up the slack when it introduced GPU objects of sorts to allow multiple GPUs to be stitched together to create one huge desktop at a much deeper layer than Xinerama. I was wondering if this has now been fixed (I noticed X server 1.7 is out) and whether anyone has got it running successfully?
Again, my requirements are:

· One huge desktop to drag any window across
· Maximising of windows to each screen (as XP does)
· Running fullscreen apps on the primary screen and disabling the mouse from moving onto the others, or stretched across all 3

Finally, as a side note: I am aware of the Matrox triple (and dual) head splitter, but even the price they go for on eBay is more than I can afford at the moment. My argument: I shouldn't have to buy extra hardware to get something to work on Linux when it's something that has existed in the Windows world for a long time (can you tell I don't get on with X? :-p). If I had the cash I'd have bought the latest version of this box already (the new version finally supports large resolutions, and the displays I have are 1680x1050 each).

    Read the article

  • DropDownList and SelectListItem Array Item Updates in MVC

    - by Rick Strahl
So I ran into an interesting behavior today as I deployed my first MVC 4 app tonight. I have a list form with a filter drop down that allows selection of categories. This list is static and rarely changes, so rather than loading the items from the database each time, I load them once and cache the actual SelectListItem[] array in a static property. However, when we put the site online tonight we immediately noticed that the drop down list was coming up with pre-set values that randomly changed. It didn't take me long to trace this back to the cached list of SelectListItem[]. Clearly the list was getting updated - apparently through the model binding process in the selection postback. To clarify the scenario, here's the drop down list definition in the Razor view:

    @Html.DropDownListFor(mod => mod.QueryParameters.Category, Model.CategoryList, "All Categories")

where Model.CategoryList gets set with:

    [HttpPost]
    [CompressContent]
    public ActionResult List(MessageListViewModel model)
    {
        InitializeViewModel(model);

        busEntry entryBus = new busEntry();
        var entries = entryBus.GetEntryList(model.QueryParameters);
        model.Entries = entries;
        model.DisplayMode = ApplicationDisplayModes.Standard;

        model.CategoryList = AppUtils.GetCachedCategoryList();

        return View(model);
    }

The AppUtils.GetCachedCategoryList() method gets the cached list, or loads the list on first access. The code to load the list is housed in a Web utility class. The method looks like this:

    /// <summary>
    /// Returns a static category list that is cached
    /// </summary>
    /// <returns></returns>
    public static SelectListItem[] GetCachedCategoryList()
    {
        if (_CategoryList != null)
            return _CategoryList;

        lock (_SyncLock)
        {
            if (_CategoryList != null)
                return _CategoryList;

            var catBus = new busCategory();
            var categories = catBus.GetCategories().ToList();

            // Turn list into a SelectItem list
            var catList = categories
                .Select(cat => new SelectListItem()
                {
                    Text = cat.Name,
                    Value = cat.Id.ToString()
                })
                .ToList();

            catList.Insert(0, new SelectListItem()
            {
                Value = ((int)SpecialCategories.AllCategoriesButRealEstate).ToString(),
                Text = "All Categories except Real Estate"
            });
            catList.Insert(1, new SelectListItem()
            {
                Value = "-1",
                Text = "--------------------------------"
            });

            _CategoryList = catList.ToArray();
        }

        return _CategoryList;
    }

    private static SelectListItem[] _CategoryList;

This seemed normal enough to me - I've been doing stuff like this forever, caching smallish lists in memory to avoid an extra trip to the database. This list is used in various places throughout the application - for the list display and also when adding new items, setting up notifications, etc.

Watch that ModelBinder!
However, it turns out that this code is clearly causing a problem. It appears that the model binder on the [HttpPost] method is actually updating the list that's bound to, changing the actual entry items in the list and setting their Selected values. If you look at the code above, I'm not setting the SelectListItem.Selected value anywhere - the only place this value can get set is through model binding. Sure enough, when stepping through the code I see that when an item is selected the actual model - model.CategoryList[x].Selected - reflects that. This is bad on several levels: First, it's obviously affecting the application behavior - nobody wants to see their drop down list values jump all over the place randomly.
But it's also a problem because the array is getting updated by multiple ASP.NET threads, which would likely lead to odd crashes from time to time. Not good! In retrospect the model binding behavior makes perfect sense. The actual items and the Selected property are the ModelBinder's way of keeping track of one or more selected values. So while I assumed the list to be read-only, the ModelBinder is actually updating it on a postback, producing the rather surprising results. I totally missed this during testing, and it's another one of those little "Did you know?" moments. So, is there a way around this? Yes, but it's maybe not quite obvious. I can't change the behavior of the ModelBinder, but I can certainly change the way that the list is generated. Rather than returning the cached list, I can return a brand new cloned list from the cached items like this:

    /// <summary>
    /// Returns a static category list that is cached
    /// </summary>
    /// <returns></returns>
    public static SelectListItem[] GetCachedCategoryList()
    {
        if (_CategoryList != null)
        {
            // Have to create new instances via projection
            // to avoid ModelBinding updates to affect this
            // globally
            return _CategoryList
                .Select(cat => new SelectListItem()
                {
                    Value = cat.Value,
                    Text = cat.Text
                })
                .ToArray();
        }
        …
    }

The key is that newly created instances of SelectListItems are returned, not just filtered instances of the original list. The key here is 'new instances', so that the model binding updates do not touch the actual static instance. The code above uses LINQ and a projection into new SelectListItem instances to create this array of fresh instances. And this code works correctly - no more cross-talk between users. Unfortunately this code is also less efficient - it has to reselect the items and uses extra memory for the new array. Knowing what I know now, I probably would not have cached the list and just taken the hit to read from the database. If there is even a possibility of thread clashes, I'm very wary of creating code like this. But since the method already exists and handles this load in one place, this fix was easy enough to put in. Live and learn. It's little things like this that can cause some interesting head scratchers sometimes…© Rick Strahl, West Wind Technologies, 2005-2012Posted in MVC  ASP.NET  .NET
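As a footnote to the fix above: another way to sidestep the problem is to never cache SelectListItem instances at all, and instead cache only immutable value/text pairs and project them into fresh SelectListItems on every request. This is only a minimal sketch of that idea, not code from the article; the CategoryEntry type and LoadCategoriesFromDatabase method are hypothetical placeholders for whatever the application uses to read categories.

    using System.Collections.Generic;
    using System.Linq;
    using System.Web.Mvc;

    public static class CategoryListCache
    {
        // Immutable cache: plain value/text pairs that the ModelBinder never sees.
        private static readonly object _syncLock = new object();
        private static volatile KeyValuePair<string, string>[] _categories;

        public static SelectListItem[] GetCategoryList()
        {
            if (_categories == null)
            {
                lock (_syncLock)
                {
                    if (_categories == null)
                    {
                        // Hypothetical data access call - replace with your own lookup.
                        _categories = LoadCategoriesFromDatabase()
                            .Select(c => new KeyValuePair<string, string>(c.Id.ToString(), c.Name))
                            .ToArray();
                    }
                }
            }

            // Fresh SelectListItem instances on every call, so a postback can only
            // flip Selected on its own private copy.
            return _categories
                .Select(c => new SelectListItem { Value = c.Key, Text = c.Value })
                .ToArray();
        }

        // Placeholder for whatever the application uses to read categories.
        private static IEnumerable<CategoryEntry> LoadCategoriesFromDatabase()
        {
            yield return new CategoryEntry { Id = 1, Name = "General" };
        }

        public class CategoryEntry
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }
    }

This still pays the per-request allocation cost mentioned above, but the cached data itself can never be mutated by a request, which removes the cross-user and cross-thread risk by construction.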

    Read the article

< Previous Page | 18 19 20 21 22 23 24 25 26 27 28 29  | Next Page >