Search Results

Search found 24784 results on 992 pages for 'process integration packs'.


  • Virtualmin & git integration

    - by weby3456
    I've installed Virtualmin on my VPS to manage my websites. It has been working as expected for nearly a year now. Recently I wanted to add some features to one of my sites, and I need git integration. I've correctly installed git & gitweb on my server, and I can create repositories and browse them under http://sub.domain.com/git/gitweb.cgi. Here is the current relevant directory tree:
        /home/user/domains/sub.domain.com/public_html/git/
        drwxr-sr-x user   user .
        drwxr-x--- user   user ..
        -rw-r--r-- user   user git-favicon.png
        -rw-r--r-- user   user git-logo.png
        -rwxr-xr-x user   user gitweb.cgi
        -rw-r--r-- user   user gitweb.css
        drwxrwx--- apache user reponame.git
        /home/user/domains/sub.domain.com/public_html/git/reponame.git/
        drwxrwx--- apache user .
        drwxr-sr-x user   user ..
        drwxrwx--- apache user branches
        -rwxrwx--- apache user config
        -rwxrwx--- user   user description
        -rwxrwx--- apache user HEAD
        drwxrwx--- apache user hooks
        drwxrwx--- apache user info
        drwxrwx--- apache user objects
        drwxrwx--- apache user refs
    But I have some questions: When I visit http://sub.domain.com/git/gitweb.cgi, the owner is listed as 'Apache'. Why, and how can I change that? Usually, to create a new git repository, I'll do something like:
        $ mkdir proj
        $ cd proj
        $ git init
        Initialized empty Git repository in /home/user/proj/.git/
        // here I'm creating the files or copying them from somewhere else
        $ git add *.php
        $ git add README
        $ git commit -m 'initial version'
    But after creating the repository in Virtualmin, I find a new directory named 'reponame.git' but no '.git' directory. When I try to run any git command (e.g. git status) I receive "fatal: This operation must be run in a work tree". How can I work with that repository? Currently I need to explicitly grant users access before they can view the repositories via gitweb. How can I make certain repositories public?
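    For readers puzzled by the "work tree" error above: a directory like reponame.git is a bare repository (it has no checked-out files), so day-to-day work happens in a clone of it rather than inside it. A minimal sketch, assuming the paths shown above (the ~/proj location is illustrative):
        # Clone the bare repository into a normal working copy
        $ git clone /home/user/domains/sub.domain.com/public_html/git/reponame.git ~/proj
        $ cd ~/proj
        # ...create or edit files...
        $ git add README *.php
        $ git commit -m 'initial version'
        # Publish the commits back to the bare repository
        $ git push origin master
    As for making repositories public, gitweb's $export_ok setting (which marks a repository exportable when a named file, conventionally git-daemon-export-ok, exists in it) is the usual knob; verify against your gitweb.conf before relying on it.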

  • Apache httpd LDAP integration

    - by David W.
    I am configuring a CollabNet Subversion integration. I have the following collabnet_subversion.conf file:
        <Location /svn>
          DAV svn
          SVNParentPath /mnt/svn/new_repos
          SVNListParentPath on
          AuthName "VegiBanc Source Repository"
          AuthType basic
          AuthzLDAPAuthoritative off
          AuthBasicProvider ldap
          AuthLDAPURL ldap://ldap.vegibanc.com/dc=vegibanc,dc=com?sAMAccountName" NONE
          AuthLDAPBindDN "CN=SVN-Admin,OU=Service Accounts,OU=VegiBanc Users,OU=vegibanc,DC=vegibanc,DC=com"
          AuthLDAPBindPassword "swordfish"
        </Location>
    This works great: any user in our Active Directory can access our Subversion repository. Now I want to limit this to only people in the Active Directory group Development:
        <Location /svn>
          DAV svn
          SVNParentPath /mnt/svn/new_repos
          SVNListParentPath on
          AuthName "VegiBanc Source Repository"
          AuthType basic
          AuthzLDAPAuthoritative off
          AuthBasicProvider ldap
          AuthLDAPURL ldap://ldap.vegibanc.com/dc=vegibanc,dc=com?sAMAccountName" NONE
          AuthLDAPBindDN "CN=SVN-Admin,OU=Service Accounts,OU=VegiBanc Users,OU=VegiBanc,DC=vegibanc,DC=com"
          AuthLDAPBindPassword "swordfish"
          Require ldap-group CN=Development OU=Security Groups OU=VegiBanc, dc=vegibanc, dc=com
        </Location>
    I added Require ldap-group, but now no one can log in. I have LogLevel set to debug, but all I get is this in my error_log (single line broken up for easier reading):
        [Thu Oct 11 13:09:28 2012] [info] [client 10.55.9.45] [6752]
        vauth_ldap authenticate: user dweintraub authentication failed;
        URI /svn/ [ldap_search_ext_s() for user failed][Bad search filter]
    And I get this in my access_log:
        10.55.9.45 - - [11/Oct/2012:13:09:27 -0500] "GET /svn/ HTTP/1.1" 401 401
        10.55.9.45 - dweintraub [11/Oct/2012:13:09:28 -0500] "GET /svn/ HTTP/1.1" 500 535
    Yes, I am in that group. (Or, at least, how can I confirm that, just to make sure that's not the issue? I have ADExplorer from the SysinternalsSuite; it's where I'm getting all of my info.)
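    For anyone comparing against their own config: the "Bad search filter" error is consistent with two syntax problems visible above, the stray quote in AuthLDAPURL and the missing commas in the group DN. A hedged sketch of how those directives are normally written for mod_authnz_ldap (the DN values are taken from the post; verify the filter portion against your directory):
        # URL format is ldap://host/basedn?attribute?scope?filter, quoted as a whole,
        # with the connection type (NONE) outside the quotes
        AuthLDAPURL "ldap://ldap.vegibanc.com/dc=vegibanc,dc=com?sAMAccountName?sub?(objectClass=*)" NONE
        # The group must be one full DN, comma-separated
        Require ldap-group CN=Development,OU=Security Groups,OU=VegiBanc,DC=vegibanc,DC=com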

  • Sharepoint/WSS Reporting Services Integration woes

    - by mhollers
    After a number of failed attempts, I seem to have successfully installed the Reporting Services add-in to my WSS farm. However, I seem to be missing most of the enhanced functionality, e.g. no Report Library template and no Report Center site template; the only additional functionality available is the Report Viewer web part. Background: a 2-server WSS 3.0 farm with Central Administration (CA), the web front end (WFE) and the Reporting Services add-in installed on one server, and SQL Server 2005 SP2 with Reporting Services (RS) and all databases installed on the other. I have a VM environment set up and have rolled this back and repeated the process a number of times. I have configured RS within CA and activated the 'Report Server Integration Feature'. Within 'site settings' I have a 'Reporting Services' heading with a 'Manage shared schedules' item/link; I'm not sure if there should be other options. I was under the impression that to view reports within SharePoint I could either create a new site using the 'Report Center' template or add a report library to an existing site, neither of which seems available. I am at a loss as to what to do, as all the information online seems to deal with installation issues/errors, which I seem to have eventually got past.

  • Domain changes required for SSL integration

    - by user131003
    Currently my site supports regular payment options (the user is taken to the payment gateway/PG website). Now I'm trying to implement a "seamless" PG integration, and I need SSL for this. I have a dedicated server with 5 static IPs from Hostgator (HG). Options: I take SSL for www.my_domain.com. According to HG, I need to change the IP of the main site, as the current IP is not really dedicated (it is being shared by cPanel etc.), so they need to bind another dedicated IP to the main domain for SSL to work. This would require a DNS change for the main website and hence cause a few hours of downtime (which is OK). I've noticed that most e-commerce websites use subdomains like secure.my_domain.com for SSL/HTTPS. This sounds like a better approach, but I have a few doubts in this case: a) Would I need to re-register with the existing PGs (PayPal, Google Checkout, Authorize.net) if I switch to a subdomain? Re-registering is not an option for me. b) Would a DNS change be required for www.my_domain.com in this case? This confusion arose because of the following reply from HG: "If the sub domain secure.my_domain.com is added to an existing cPanel it will use the IP for that cPanel so as long as it is a Dedicated IP that will be fine. If secure.my_domain.com gets setup as its own cPanel it will need to be assigned to a Dedicated IP which would have a DNS change involved." Please suggest.

  • JMS Step 7 - How to Write to an AQ JMS (Advanced Queueing JMS) Queue from a BPEL Process

    - by John-Brown.Evans
    This post continues the series of JMS articles which demonstrate how to use JMS queues in a SOA context. The previous posts were:
        JMS Step 1 - How to Create a Simple JMS Queue in Weblogic Server 11g
        JMS Step 2 - Using the QueueSend.java Sample Program to Send a Message to a JMS Queue
        JMS Step 3 - Using the QueueReceive.java Sample Program to Read a Message from a JMS Queue
        JMS Step 4 - How to Create an 11g BPEL Process Which Writes a Message Based on an XML Schema to a JMS Queue
        JMS Step 5 - How to Create an 11g BPEL Process Which Reads a Message Based on an XML Schema from a JMS Queue
        JMS Step 6 - How to Set Up an AQ JMS (Advanced Queueing JMS) for SOA Purposes
    This example demonstrates how to write a simple message to an Oracle AQ via the WebLogic AQ JMS functionality, from a BPEL process and a JMS adapter. If you have not yet reviewed the previous posts, please do so first, especially the JMS Step 6 post, as this one references objects created there.
    1. Recap and Prerequisites
    In the previous example, we created an Oracle Advanced Queue (AQ) and some related JMS objects in WebLogic Server to be able to access it via JMS.
    Here are the objects which were created, with their names and JNDI names:
        Database objects:
        AQJMSUSER    - Database user
        MyQueueTable - Advanced Queue (AQ) table
        UserQueue    - Advanced Queue
        WebLogic Server objects:
        aqjmsuserDataSource                 - Data source                            - JNDI: jdbc/aqjmsuserDataSource
        AqJmsModule                         - JMS system module
        AqJmsForeignServer                  - JMS foreign server
        AqJmsForeignServerConnectionFactory - JMS foreign server connection factory  - JNDI: AqJmsForeignServerConnectionFactory
        AqJmsForeignDestination             - AQ JMS foreign destination             - JNDI: queue/USERQUEUE
        eis/aqjms/UserQueue                 - Connection pool                        - JNDI: eis/aqjms/UserQueue
    2. Create a BPEL Composite with a JMS Adapter Partner Link
    This step requires that you have a valid Application Server Connection defined in JDeveloper, pointing to the application server on which you created the JMS queue and connection factory. You can create this connection in JDeveloper under the Application Server Navigator. Give it any name and be sure to test the connection before completing it. This sample will write a simple XML message to the AQ JMS queue via the JMS adapter, based on the following XSD file, which consists of a single string element:
        stringPayload.xsd
        <?xml version="1.0" encoding="windows-1252" ?>
        <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                    xmlns="http://www.example.org"
                    targetNamespace="http://www.example.org"
                    elementFormDefault="qualified">
          <xsd:element name="exampleElement" type="xsd:string">
          </xsd:element>
        </xsd:schema>
    The following steps are all executed in JDeveloper. The SOA project will be created inside a JDeveloper application. If you do not already have an application to contain the project, you can create a new one via File > New > General > Generic Application. Give the application any name, for example JMSTests, and, when prompted for a project name and type, call the project JmsAdapterWriteAqJms and select SOA as the project technology type. If you already have an application, continue below.
    Create a SOA Project
    Create a new project and select SOA Tier > SOA Project as its type. Name it JmsAdapterWriteAqJms. When prompted for the composite type, choose Composite With BPEL Process. When prompted for the BPEL process, name it JmsAdapterWriteAqJms too and choose Synchronous BPEL Process as the template. This will create a composite with a BPEL process and an exposed SOAP service. Double-click the BPEL process to open and begin editing it. You should see a simple BPEL process with a Receive and a Reply activity. As we created a default process without an XML schema, the input and output variables are simple strings.
    Create an XSD File
    An XSD file is required later to define the message format to be passed to the JMS adapter. In this step, we create a simple XSD file containing a string variable and add it to the project. First select the xsd item in the left-hand navigation tree to ensure that the XSD file is created under that item. Select File > New > General > XML and choose XML Schema. Call it stringPayload.xsd and, when the editor opens, select the Source view. Then replace the contents with the contents of the stringPayload.xsd example above and save the file. You should see it under the XSD item in the navigation tree.
    Create a JMS Adapter Partner Link
    We will create the JMS adapter as a service at the composite level. If it is not already open, double-click the composite.xml file in the navigator to open it.
    From the Component Palette, drag a JMS adapter onto the right-hand swim lane, under External References. This will start the JMS Adapter Configuration Wizard. Use the following entries:
        Service Name: JmsAdapterWrite
        Oracle Enterprise Messaging Service (OEMS): Oracle Advanced Queueing
        AppServer Connection: use an existing application server connection pointing to the WebLogic server on which the connection factory created earlier is located. You can use the "+" button to create a connection directly from the wizard if you do not already have one.
        Adapter Interface > Interface: Define from operation and schema (specified later)
        Operation Type: Produce Message
        Operation Name: Produce_message
    Produce Operation Parameters:
        Destination Name: wait for the list to populate. (Only foreign servers are listed here, because Oracle Advanced Queueing was selected earlier, in step 3.) Select the foreign server destination created earlier, AqJmsForeignDestination (queue). This will automatically populate the Destination Name field with the name of the foreign destination, queue/USERQUEUE.
        JNDI Name: the JNDI name to use for the JMS connection. This is the JNDI name of the connection pool created in the WebLogic Server. JDeveloper does not verify the value entered here; if you enter a wrong value, the JMS adapter won't find the queue and you will get an error message at runtime. In our example, this is the value eis/aqjms/UserQueue.
        Messages URL: we will use the XSD file we created earlier, stringPayload.xsd, to define the message format for the JMS adapter. Press the magnifying glass icon to search for schema files. Expand Project Schema Files > stringPayload.xsd and select exampleElement : string.
    Press Next and Finish, which will complete the JMS adapter configuration.
    Wire the BPEL Component to the JMS Adapter
    In this step, we link the BPEL process/component to the JMS adapter. From the composite.xml editor, drag the right-arrow icon from the BPEL process to the JMS adapter's in-arrow. This completes the steps at the composite level.
    3. Complete the BPEL Process Design
    Invoke the JMS Adapter
    Open the BPEL component by double-clicking it in the design view of the composite.xml. This will display the BPEL process in the design view. You should see the JmsAdapterWrite partner link under one of the two swim lanes. We want it in the right-hand swim lane; if JDeveloper displays it in the left-hand lane, right-click it and choose Display > Move To Opposite Swim Lane. An Invoke activity is required in order to invoke the JMS adapter. Drag an Invoke activity between the Receive and Reply activities, then drag the right-hand arrow from the Invoke activity to the JMS adapter partner link. This will open the Invoke editor. The correct default values are entered automatically and are fine for our purposes; we only need to define the input variable to use for the JMS adapter. By pressing the green "+" symbol, a variable of the correct type can be auto-generated, for example with the name Invoke1_Produce_Message_InputVariable. Press OK after creating the variable.
    Assign Variables
    Drag an Assign activity between the Receive and Invoke activities. We will simply copy the input variable to the JMS adapter and, for completeness, so the process has an output to print, copy it again to the process's output variable.
    Double-click the Assign activity and create two Copy rules:
        - for the first, drag Variables > inputVariable > payload > client:process > client:input_string to Invoke1_Produce_Message_InputVariable > body > ns2:exampleElement
        - for the second, drag the same input variable to outputVariable > payload > client:processResponse > client:result
    This will create the two copy rules. Press OK. This completes the BPEL and composite design.
    4. Compile and Deploy the Composite
    Compile the process by pressing the Make or Rebuild icons, or by right-clicking the project name in the navigator and selecting Make... or Rebuild... If the compilation is successful, deploy it to the SOA server connection defined earlier (right-click the project name in the navigator, select Deploy to Application Server, choose the application server connection, choose the partition on the server (usually default) and press Finish). You should see the message ---- Deployment finished. ---- in the Deployment frame if the deployment was successful.
    5. Test the Composite
    Execute a Test Instance
    In a browser, log in to the Enterprise Manager 11g Fusion Middleware Control (EM) for your SOA installation. Navigate to SOA > soa-infra (soa_server1) > default (or wherever you deployed your composite), click on JmsAdapterWriteAqJms [1.0], then press the Test button. Enter any string into the text input field, for example "Test message from JmsAdapterWriteAqJms", then press Test Web Service. If the instance is successful, you should see the same text you entered in the Response payload frame.
    Monitor the Advanced Queue
    The test message will be written to the advanced queue created at the top of this sample. To confirm it, log in to the database as AQJMSUSER and query the MYQUEUETABLE database table, for example from a shell window with SQL*Plus:
        sqlplus aqjmsuser/aqjmsuser
        SQL> SELECT user_data FROM myqueuetable;
    This will display the message contents. Similarly, you can use the JDeveloper Database Navigator to view the contents: use a database connection to the AQJMSUSER and, in the navigator, expand Queues Tables and select MYQUEUETABLE. Select the Data tab and scroll to the USER_DATA column to view its contents.
    This concludes this example. The following post will be the last one in this series. In it, we will learn how to read the message we just wrote using a BPEL process and AQ JMS.
    Best regards
    John-Brown Evans
    Oracle Technology Proactive Support Delivery

  • Random process hangs after clean install

    - by Toshe
    After installing a fresh Kubuntu 11.04 Natty on my desktop PC, I experienced some issues with application and process hangs. There is also a problem with my USB 3 hard disk. This sort of problem did not happen on Kubuntu 10.10 installed on the same PC (on a separate partition). The hangs manifest themselves with kernel log messages like these:
        [ 960.480151] INFO: task amarok:2505 blocked for more than 120 seconds.
        [ 960.480153] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        [ 960.480155] amarok D 0000000000000000 0 2505 1 0x00000000
        [ 960.480158] ffff8800a556bb38 0000000000000086 ffff8800a556bfd8 ffff8800a556a000
        [ 960.480162] 0000000000013d00 ffff8800cb7f3178 ffff8800a556bfd8 0000000000013d00
        [ 960.480165] ffffffff81a0b020 ffff8800cb7f2dc0 ffffea000242ac58 ffff88012704c870
        [ 960.480169] Call Trace:
        [ 960.480172] [<ffffffff815c19f7>] __mutex_lock_slowpath+0xf7/0x180
        [ 960.480175] [<ffffffff815c144b>] mutex_lock+0x2b/0x50
        [ 960.480180] [<ffffffffa0d0ad42>] video_open+0x102/0x400 [cx8800]
        [ 960.480183] [<ffffffff815c2cbe>] ? _raw_spin_lock+0xe/0x20
        [ 960.480186] [<ffffffff8117c55d>] ? __d_lookup+0x10d/0x170
        [ 960.480191] [<ffffffffa0ca0731>] v4l2_open+0x101/0x130 [videodev]
        [ 960.480194] [<ffffffff81168f4a>] chrdev_open+0xda/0x1f0
        [ 960.480197] [<ffffffff81168e70>] ? chrdev_open+0x0/0x1f0
        [ 960.480200] [<ffffffff81162cee>] __dentry_open+0xce/0x2f0
        [ 960.480202] [<ffffffff8116ef33>] ? generic_permission+0x23/0xc0
        [ 960.480205] [<ffffffff811641e1>] nameidata_to_filp+0x71/0x80
        [ 960.480208] [<ffffffff811733c8>] finish_open+0xc8/0x1b0
        [ 960.480210] [<ffffffff811725b7>] ? do_path_lookup+0x87/0x160
        [ 960.480213] [<ffffffff81173b88>] do_filp_open+0x2c8/0x7c0
        [ 960.480216] [<ffffffff81172902>] ? user_path_at+0x62/0xa0
        [ 960.480219] [<ffffffff81131d4d>] ? handle_mm_fault+0x16d/0x250
        [ 960.480222] [<ffffffff812e6c47>] ? __strncpy_from_user+0x27/0x60
        [ 960.480225] [<ffffffff81180ea7>] ? alloc_fd+0xf7/0x150
        [ 960.480228] [<ffffffff8116425a>] do_sys_open+0x6a/0x150
        [ 960.480230] [<ffffffff81164360>] sys_open+0x20/0x30
        [ 960.480233] [<ffffffff8100c002>] system_call_fastpath+0x16/0x1b
        [ 1080.480027] INFO: task knotify4:1663 blocked for more than 120 seconds.
        [ 1080.480030] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        [ 1080.480032] knotify4 D 0000000000000000 0 1663 1 0x00000000
        [ 1080.480036] ffff880123a2bb28 0000000000000082 ffff880123a2bfd8 ffff880123a2a000
        [ 1080.480040] 0000000000013d00 ffff880121003178 ffff880123a2bfd8 0000000000013d00
        [ 1080.480044] ffff8800cb7e16e0 ffff880121002dc0 ffffffff81060a27 ffff88012704c870
        [ 1080.480048] Call Trace:
        [ 1080.480054] [<ffffffff81060a27>] ? mutex_spin_on_owner+0x97/0xd0
        [ 1080.480059] [<ffffffff815c19f7>] __mutex_lock_slowpath+0xf7/0x180
        [ 1080.480069] [<ffffffff812e4f61>] ? vsnprintf+0x221/0x620
        [ 1080.480072] [<ffffffff815c144b>] mutex_lock+0x2b/0x50
        [ 1080.480076] [<ffffffffa0b2d10f>] cx8802_request_acquire+0x5f/0xf0 [cx8802]
        [ 1080.480081] [<ffffffffa0e87e08>] mpeg_open+0x78/0x270 [cx88_blackbird]
        [ 1080.480084] [<ffffffff8117c55d>] ? __d_lookup+0x10d/0x170
        [ 1080.480092] [<ffffffffa0ca0731>] v4l2_open+0x101/0x130 [videodev]
        [ 1080.480096] [<ffffffff81168f4a>] chrdev_open+0xda/0x1f0
        [ 1080.480099] [<ffffffff81168e70>] ? chrdev_open+0x0/0x1f0
        [ 1080.480102] [<ffffffff81162cee>] __dentry_open+0xce/0x2f0
        [ 1080.480105] [<ffffffff8116ef33>] ? generic_permission+0x23/0xc0
        [ 1080.480108] [<ffffffff811641e1>] nameidata_to_filp+0x71/0x80
        [ 1080.480111] [<ffffffff811733c8>] finish_open+0xc8/0x1b0
        [ 1080.480113] [<ffffffff811725b7>] ? do_path_lookup+0x87/0x160
        [ 1080.480116] [<ffffffff81173b88>] do_filp_open+0x2c8/0x7c0
        [ 1080.480119] [<ffffffff81172902>] ? user_path_at+0x62/0xa0
        [ 1080.480122] [<ffffffff811663f1>] ? get_empty_filp+0xa1/0x170
        [ 1080.480125] [<ffffffff812e6c47>] ? __strncpy_from_user+0x27/0x60
        [ 1080.480128] [<ffffffff81180ea7>] ? alloc_fd+0xf7/0x150
        [ 1080.480131] [<ffffffff8116425a>] do_sys_open+0x6a/0x150
        [ 1080.480134] [<ffffffff81164360>] sys_open+0x20/0x30
        [ 1080.480137] [<ffffffff8100c002>] system_call_fastpath+0x16/0x1b
        [ 1080.480147] INFO: task dolphin:1842 blocked for more than 120 seconds.
        [ 1080.480148] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        [ 1080.480150] dolphin D 0000000000000000 0 1842 1 0x00000004
        [ 1080.480154] ffff8800cb4f3b38 0000000000000082 ffff8800cb4f3fd8 ffff8800cb4f2000
        [ 1080.480157] 0000000000013d00 ffff8800cb7f4858 ffff8800cb4f3fd8 0000000000013d00
        [ 1080.480161] ffffffff81a0b020 ffff8800cb7f44a0 ffffea000267b0d0 ffff88012704c870
        [ 1080.480164] Call Trace:
        [ 1080.480168] [<ffffffff815c19f7>] __mutex_lock_slowpath+0xf7/0x180
        [ 1080.480171] [<ffffffff815c144b>] mutex_lock+0x2b/0x50
        [ 1080.480176] [<ffffffffa0d0ad42>] video_open+0x102/0x400 [cx8800]
        [ 1080.480179] [<ffffffff815c2cbe>] ? _raw_spin_lock+0xe/0x20
        [ 1080.480181] [<ffffffff8117c55d>] ? __d_lookup+0x10d/0x170
        [ 1080.480186] [<ffffffffa0ca0731>] v4l2_open+0x101/0x130 [videodev]
        [ 1080.480190] [<ffffffff81168f4a>] chrdev_open+0xda/0x1f0
        [ 1080.480192] [<ffffffff81168e70>] ? chrdev_open+0x0/0x1f0
        [ 1080.480195] [<ffffffff81162cee>] __dentry_open+0xce/0x2f0
        [ 1080.480198] [<ffffffff8116ef33>] ? generic_permission+0x23/0xc0
        [ 1080.480200] [<ffffffff811641e1>] nameidata_to_filp+0x71/0x80
        [ 1080.480203] [<ffffffff811733c8>] finish_open+0xc8/0x1b0
        [ 1080.480206] [<ffffffff811725b7>] ? do_path_lookup+0x87/0x160
        [ 1080.480208] [<ffffffff81173b88>] do_filp_open+0x2c8/0x7c0
        [ 1080.480211] [<ffffffff81172902>] ? user_path_at+0x62/0xa0
        [ 1080.480214] [<ffffffff81131d4d>] ? handle_mm_fault+0x16d/0x250
        [ 1080.480217] [<ffffffff812e6c47>] ? __strncpy_from_user+0x27/0x60
        [ 1080.480220] [<ffffffff81180ea7>] ? alloc_fd+0xf7/0x150
        [ 1080.480223] [<ffffffff8116425a>] do_sys_open+0x6a/0x150
        [ 1080.480225] [<ffffffff81164360>] sys_open+0x20/0x30
        [ 1080.480228] [<ffffffff8100c002>] system_call_fastpath+0x16/0x1b
    Attempts to kill the hung process are unsuccessful:
        root@deskpc:~# ps -ef |grep amarok
        myuser 2505    1 0 10:47 ?     00:00:00 /usr/bin/amarok
        root   2747 2020 0 11:06 pts/3 00:00:00 grep --color=auto amarok
        root@deskpc:~# kill -9 2505
        root@deskpc:~# ps -ef |grep amarok
        myuser 2505    1 0 10:47 ?     00:00:00 /usr/bin/amarok
        root   2749 2020 0 11:06 pts/3 00:00:00 grep --color=auto amarok
        root@deskpc:~# kill -9 2505
        root@deskpc:~# ps -ef |grep amarok
        myuser 2505    1 0 10:47 ?     00:00:00 /usr/bin/amarok
        root   2751 2020 0 11:06 pts/3 00:00:00 grep --color=auto amarok
        root@deskpc:~#
    When trying to access my external USB 3 disk, the following kernel messages are observed:
        [ 2169.330012] xhci_hcd 0000:06:00.0: Timeout while waiting for a slot
        [ 2169.330018] hub 3-0:1.0: couldn't allocate port 1 usb_device
    I am not sure the two problems (application hangs and USB 3 timeouts) are related, but they do not happen under Kubuntu 10.10. Judging by the dmesg messages, it looks to me like this is a kernel (or potentially kernel driver) problem, but I'm not sure how to debug it. Any ideas? I ran apport-bug, but it advised me to post a question here first. Shall I report the issue on the official K/Ubuntu bugzilla?
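    For anyone triaging similar hangs: the "D" state in the traces above is uninterruptible sleep, which is why kill -9 has no effect; the process is stuck inside the kernel (here, in the cx88 video driver's open path). A generic way to inspect such tasks (standard procps and sysrq usage; the sysrq step assumes the magic SysRq key is enabled):
        # List processes stuck in uninterruptible sleep and the kernel function they wait in
        ps -eo pid,stat,wchan:32,cmd | awk '$2 ~ /D/'
        # Ask the kernel to dump stack traces of all blocked tasks into the kernel log
        echo w | sudo tee /proc/sysrq-trigger
        dmesg | tail -n 50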

  • Bring NSRunAlertPanel to Front in Background Process

    - by mon4goos
    If you call NSRunAlertPanel() from a background process in Cocoa, the dialogue does not come to the front and instead stays behind other windows. This post (http://stackoverflow.com/questions/2639479/nsrunalertpanel-shows-up-behind-the-active-window) shows that you can bring the dialogue to the front if you convert the process to a foreground process. If you keep the process a background process, however, is there any way to achieve this behavior?
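    For context, the conversion the linked post describes is typically done with the Carbon Process Manager calls below before showing the panel (a sketch of the workaround being asked about, not of a background-process solution):
        #include <Carbon/Carbon.h>
        // Promote this background process to a regular foreground application
        ProcessSerialNumber psn = { 0, kCurrentProcess };
        TransformProcessType(&psn, kProcessTransformToForegroundApplication);
        SetFrontProcess(&psn);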

  • pthread and child process data sharing in C

    - by mustafabattal
    Hi everyone, my question is somewhat conceptual: how is a parent process's data shared with a child process created by a fork() call, or with a thread created by pthread_create()? For example, are global variables directly passed to the child process, and if so, does a modification of such a variable made by the child process affect its value in the parent process? I appreciate partial and complete answers in advance. If I'm missing any existing resource, I'm sorry; I've done some searching on Google but couldn't find good results. Thanks again for your time and answers.
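    A small C sketch of the standard POSIX semantics (fork() gives the child a copy-on-write copy of the parent's globals, while a thread shares the parent's address space); compile with cc -pthread:
        #include <pthread.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int counter = 0;                 /* a global variable */

        void *thread_body(void *arg) {
            counter++;                   /* modifies the very same variable the parent sees */
            return NULL;
        }

        int main(void) {
            pid_t pid = fork();
            if (pid == 0) {              /* child: works on its own copy-on-write copy */
                counter++;               /* invisible to the parent */
                exit(0);
            }
            wait(NULL);
            printf("after fork:   counter = %d\n", counter);  /* still 0 */

            pthread_t t;
            pthread_create(&t, NULL, thread_body, NULL);
            pthread_join(t, NULL);
            printf("after thread: counter = %d\n", counter);  /* now 1 */
            return 0;
        }
    To share a variable across fork(), explicitly shared memory (e.g. mmap with MAP_SHARED, or System V/POSIX shared memory) is the usual route.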

  • Focus process window, ShowWindow vs System Tray

    - by ais
    I'm trying to open a process's window. This code works if the window is minimized, but if the program is in the system tray the window isn't opened.
        [DllImport("User32")]
        private static extern int SetForegroundWindow(IntPtr hwnd);
        [DllImport("User32")]
        private static extern bool ShowWindow(IntPtr hWnd, int nCmdShow);
        private const int SW_RESTORE = 9; // restores a minimized window (constant not shown in the original snippet)
        private static void ShowWindow(Process process)
        {
            // Note: for a window hidden to the tray, Process.MainWindowHandle is typically
            // IntPtr.Zero, which is likely why nothing is restored in that case.
            ShowWindow(process.MainWindowHandle, SW_RESTORE);
            SetForegroundWindow(process.MainWindowHandle);
        }

  • What is Apache process model?

    - by Morgan Cheng
    I have been googling this question for some time but got no answers. What's the Apache process model? By process model, I mean how Apache manages processes or threads to handle HTTP requests. Does it fork one process for each HTTP request? Does it have a process/thread pool? Can we configure it? Is there any online doc for such Apache details?
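    For reference, the answer depends on the compiled-in Multi-Processing Module (MPM): prefork maintains a pool of single-threaded worker processes, while worker (and, in later versions, event) uses several processes each running many threads; no mainstream MPM forks per request. The pools are configurable, e.g. a typical Apache 2.2 prefork stanza (values are illustrative defaults, not recommendations):
        <IfModule mpm_prefork_module>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            MaxClients          150
            MaxRequestsPerChild   0
        </IfModule>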

  • make-like build tools for data?

    - by miku
    Make is a standard tool for building software, but make decides whether a target needs to be regenerated by comparing file modification times. Are there any proven, preferably small, tools that handle builds not for software but for data? Something that regenerates targets based not only on mod times but on certain other properties (e.g. completeness). (Or, alternatively, some paper that describes such a tool.) As an illustration, I'd like to automate the following process:
        - get data (e.g. a tarball) from some regularly updated source
        - copy it somewhere if it's not there (based e.g. on some filename scheme)
        - convert the files to a different format (but only if there aren't successfully converted ones there, e.g. from a previous attempt - custom comparison routine)
        - for each file, find a certain data element and fetch some additional file from, say, a URL, but only if that hasn't been downloaded yet (decide on existence of the file and file "freshness")
        - finally compute something (e.g. a word count for something identifiable and store it in the database, but only if the DB does not have an entry for that exact ID yet)
    Observations:
        - there are different stages
        - each stage is usually simple to compute or implement in isolation
        - each stage may be simple, but the data volume may be large
        - each stage may produce a few errors
        - each stage may have different signals for when (re)processing is needed
    Requirements:
        - builds should be interruptible and idempotent (== robust)
        - when interrupted, already-processed objects should be reused to speed up the next run
        - data paths should be easy to adjust (simple syntax, nothing new to learn, an internal DSL would be OK)
        - some form of dependency graph that describes the process would be nice for later visualizations
        - it should leverage existing programs, if possible
    I've done some research on make alternatives like rake and have worked a lot with ant and maven in the past. All these tools naturally focus on code and software builds, not on data builds. A system we have in place now for a similar task is pretty much just shell scripts, which are compact (and are OK glue for a variety of other programs written in other languages), so I wonder if worse is better?
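    One way to approximate the per-stage "other signals" in plain make is the stamp-file idiom: each stage's recipe performs its own freshness or completeness check and touches a marker file only on success, so an interrupted run resumes where it stopped. A hypothetical sketch (all filenames and scripts are illustrative; recipe lines must be tab-indented in a real Makefile):
        data.tar.gz:
                curl -o $@ http://example.org/export/data.tar.gz

        converted.stamp: data.tar.gz
                ./convert.sh data.tar.gz out/ && touch $@

        loaded.stamp: converted.stamp
                ./wordcount_to_db.py out/ && touch $@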

  • dpkg E: Sub-process /usr/bin/dpkg returned an error

    - by user81269
    I decided to shift around the partitions on my hard drive for a fresh install of Kubuntu. I booted my Ubuntu 10.10 live disc, shifted everything around, and attempted to install grub, but it didn't work, so I burnt an Ubuntu 12.04 disc and installed that. I got the computer working and wanted to install some packages, but didn't have an internet connection at the time. So (I know this was stupid) I got some debs from previous versions of Ubuntu, as I needed my music, and the other install took a long time to boot. Once I got my internet connection back, everything worked OK for a little while. Then I stumbled upon this problem after removing ten broken packages using synaptic:
        drhax@Spamotard:~$ sudo apt-get install -f
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages will be REMOVED:
          libgtk2.0-cil
        0 upgraded, 0 newly installed, 1 to remove and 417 not upgraded.
        1 not fully installed or removed.
        After this operation, 2,638 kB disk space will be freed.
        Do you want to continue [Y/n]? y
        (Reading database ... 103052 files and directories currently installed.)
        Removing libgtk2.0-cil ...
        E: File does not exist: /usr/share/cli-common/packages.d/policy.2.6.gtk-dotnet.installcligac
        dpkg: error processing libgtk2.0-cil (--remove):
         subprocess installed post-removal script returned error exit status 1
        Errors were encountered while processing:
         libgtk2.0-cil
        E: Sub-process /usr/bin/dpkg returned an error code (1)
    Help would be appreciated. This is my first post, but I do know a fair bit about Ubuntu, so feel free to point out any stupid mistakes I have made.
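    For readers hitting the same error: the package's post-removal script aborts because a file left over from the mixed-version install is missing. A commonly suggested workaround, stated here as an assumption to verify rather than an official fix, is to satisfy the script and retry (the path comes straight from the error message above):
        # Recreate the file the postrm script expects, then let apt finish the removal
        sudo touch /usr/share/cli-common/packages.d/policy.2.6.gtk-dotnet.installcligac
        sudo apt-get install -f
        # If that still fails, the failing maintainer script itself can be inspected at
        # /var/lib/dpkg/info/libgtk2.0-cil.postrm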

  • SOA &amp; BPM Integration Days February 23rd &amp; 24th 2011 D&uuml;sseldorf Germany

    - by Jürgen Kress
    The key German SOA experts will present at the SOA & BPM Integration Days 2011. For all German SOA users it's the key event in 2011, so make sure you register today! Speakers include:
        Torsten Winterberg, OPITZ
        Hajo Normann, HP
        Guido Schmutz, Trivadis
        Dirk Krafzig, SOAPARK
        Niko Köbler, OPITZ
        Clemens Utschig-Utschig, Boehringer Ingelheim
        Nicolai Josuttis, IT communication
        Bernd Trops, SOPERA
        Berthold Maier, Oracle Deutschland
        Jürgen Kress, Oracle EMEA
    The agenda is exciting for all SOA and BPM experts! See you in Düsseldorf, Germany!
    It's finally here: the Entwickler Akademie and the magazine Business Technology present the first SOA, BPM & Integration Days! This unique training event brings together the leading minds on SOA, BPM and integration in the German-speaking world. For two days you will receive valuable information, insights and experience from day-to-day project work, without a marketing filter. You only have to decide which topics you want to focus on. Beyond that, you have the unique chance to discuss your questions and challenges with hands-on experts. At the SOA, BPM & Integration Days you profit from the concentrated practical experience of the circle of authors behind the SOA Spezial magazine (a publication of Software & Support Verlag), well-known book authors and long-standing speakers at the JAX conferences. You should not miss this event! Details and registration at: www.soa-bpm-days.de
    For more information on SOA Specialization and the SOA Partner Community please feel free to register at www.oracle.com/goto/emea/soa (OPN account required).
    Technorati Tags: SOA & BPM Integration Days, SOA, BPM, Hajo Normann, Torsten Winterberg, Clemens Utschig-Utschig, Berthold Maier, Bernd Trops, Guido Schmutz, Nicolai Josuttis, Niko Köbler, Dirk Krafzig, Jürgen Kress, Oracle, JAX, Javamagazin, Entwickler Akademie, Business Technology

  • Designing a Content-Based ETL Process with .NET and SFDC

    - by Patrick
    As my firm makes the transition to using SFDC as our main operational system, we've spun together a couple of SFDC portals where we can post customer-specific documents to be viewed at will. As such, we've needed pseudo-ETL applications that can extract metadata from the documents our analysts generate internally (most are industry-standard PDFs, XML, or MS Office formats) and place them in networked "queue" folders. From there, our applications scoop up the queued documents and upload them to the appropriate SFDC CRM Content Library along with some select pieces of metadata. I've mostly used DbAmp to broker communication with SFDC (DbAmp is a Linked Server provider that allows you to use SQL conventions to interact with your SFDC Org data). I've been able to create [console] applications in C# that work pretty well, and they're usually structured something like this:
        static void Main()
        {
            // Load parameters from app.config.
            // Get documents from queue.
            var files = someInterface.GetFiles(someFilterOrRegexPattern);
            foreach (var file in files)
            {
                // Extract metadata from the file.
                // Validate some attributes of the file; add any validation errors
                // to an in-memory structure (e.g. List<ValidationErrors>).
                if (isValid)
                {
                    var fileData = File.ReadAllBytes(file);
                    // Upload using some wrapper for an ORM or DAL.
                    someInterface.Upload(fileData, meta.Param1, meta.Param2, ...);
                }
                else
                {
                    // Bounce the file.
                }
            }
            // Report any validation errors (via message bus or SMTP or some such).
        }
    And that's pretty much it. Most of the time I wrap all these operations in a "Worker" class that takes the needed interfaces as constructor parameters. This approach has worked reasonably well, but I just get this feeling in my gut that there's something awful about it and would love some feedback. Is writing an ETL process as a C# console app a bad idea? I'm also wondering if there are some design patterns that would be useful in this scenario that I'm clearly overlooking. Thanks in advance!
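    A minimal C# sketch of the "Worker" shape described above, with the dependencies injected through the constructor (all interface and class names are hypothetical):
        public class DocumentUploadWorker
        {
            private readonly IFileQueue _queue;       // wraps the networked "queue" folder
            private readonly ISfdcUploader _uploader; // wraps the DbAmp-backed upload

            public DocumentUploadWorker(IFileQueue queue, ISfdcUploader uploader)
            {
                _queue = queue;
                _uploader = uploader;
            }

            // Runs the extract/validate/upload loop from Main() above;
            // keeping it here leaves Main() small and makes the stages unit-testable
            // against fake IFileQueue/ISfdcUploader implementations.
            public void Run(string filterPattern)
            {
                foreach (var file in _queue.GetFiles(filterPattern))
                {
                    // extract, validate, then upload or bounce, as in the original loop
                }
            }
        }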

  • Boost your infrastructure with Coherence into the Cloud

    - by Nino Guarnacci
    Authors: Nino Guarnacci & Francesco Scarano. The original article can be found at this URL: http://blogs.oracle.com/slc/coherence_into_the_cloud_boost.
    Thinking about the enterprise cloud, many possible configurations and new opportunities in enterprise environments come to mind. The various customer needs that drive this new trend are often very different, but almost always united by two main objectives:
        - elasticity of infrastructure, both hardware and software
        - investments matched to the progressive needs of the current infrastructure
    in other words, characteristics of innovation and economy. A concrete use case that I worked on recently demanded the fulfillment of these two basic requirements of economy and innovation. The client needed to manage a variety of data caches that can process complex queries and parallel computational operations, while keeping the caches in a consistent state across the different server instances on which the application was installed. In addition, the customer was looking for a solution that would allow him to manage the likely load peaks during certain times of the year. For this reason, the customer required a replication site to which part of the requests could be conveyed during peak periods; the desire, however, was to avoid immobilizing investments in owned hardware/software architectures, so the request was to seek a solution based on cloud technologies and architectures already offered by the market.
    Coherence can already address the requirement of large caches spread between different nodes in a cluster, providing technology for search and parallel computing that uses all of the hardware infrastructure's resources simultaneously. Moreover, thanks to the "Push Replication" functionality, which can replicate and update the information contained in the cache even to a site hosted in the cloud, the need for a resilient infrastructure that can also rely on nodes temporarily housed in cloud architectures is satisfied. There are different types of configurations that can be realized using the "Push Replication" functionality of Coherence:
        - Active - Passive (Hub and Spoke)
        - Active - Active
        - Multi Master
        - Centralized Replication
    Since the architecture of this particular project consists of two sites (Site 1 and Site Cloud), of which only Site 1 is enabled to write into the cache, it was decided to adopt an Active-Passive (Hub and Spoke) configuration. If the requirement should change over time, it will be particularly easy to change this to an Active-Active configuration.
    Although very simple, the small sample in this post, inspired by the project, is effective for better understanding the features and capabilities of Coherence and its configurations. Let's create two distinct Coherence clusters, located miles apart, in two different domain contexts, one of them "hosted" at home (on-premise) and the other hosted by any cloud provider on the network (or just the same laptop, to test it :)). These two clusters, which we call Site 1 and Site Cloud, will contain the necessary information, and a simple client will insert data only into Site 1. On both sites a listener will be subscribed, listening for changes to specific objects within the various caches.
    To implement these features, you need 4 simple classes:
        CachedResponse.java - the POJO class that will be inserted into the cache; it holds useful information about the hypothetical navigation links
        ResponseSimulatorHelper.java - a link simulator, which randomly creates objects of type CachedResponse to be added into the caches
        CacheCommands.java - the model of our example: it receives instructions from the controller and performs basic operations against the cache, such as insert, delete, update and listen for objects within the cache
        Shell.java - our controller, which issues the commands to be executed within the caches of the two sites
    So, in summary, we execute the java class "Shell", asking it to put 100 objects of type "CachedResponse" into the cache through the java class "CacheCommands"; the simulator "ResponseSimulatorHelper" will randomly create new instances of "CachedResponse" objects. Finally, the Shell class will listen for events occurring within the cache on Site Cloud, while insertions and deletions are performed on Site 1. Now we create the two configurations of the two respective sites/clusters, Site 1 and Site Cloud. For Site 1 we define a cache of type "distributed" with "read and write" features, using the cache store class for "push replication", a functionality offered by the "incubator" project of Oracle Coherence. For Site Cloud we likewise define a "distributed" cache type, with the TCP proxy feature enabled so it can receive updates from Site 1.
        Coherence cache config XML file for the "storage node" on Site 1: site1-prod-cache-config.xml
        Coherence cache config XML file for the "storage node" on Site Cloud: site2-prod-cache-config.xml
    For the two "Shell" clients, which will connect respectively to the two clusters, we provide two simple access configurations:
        Coherence cache config XML file for the Shell on Site 1: site1-shell-prod-cache-config.xml
        Coherence cache config XML file for the Shell on Site Cloud: site2-shell-prod-cache-config.xml
    Now, we just have to put everything together and run our tests.
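    For orientation, a minimal sketch of what a cacheable POJO like CachedResponse.java might look like (the fields are guesses at what a "navigation link" object carries; Coherence can store any Serializable object, though the real sample may use POF serialization instead):
        import java.io.Serializable;

        public class CachedResponse implements Serializable {
            private final String name; // matched by the listener filter "name like '%'" used below
            private final String url;  // the simulated navigation link
            private final long hits;   // a simulated counter

            public CachedResponse(String name, String url, long hits) {
                this.name = name;
                this.url = url;
                this.hits = hits;
            }

            public String getName() { return name; }
            public String getUrl()  { return url; }
            public long getHits()   { return hits; }
        }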
    To start at least one "storage" node (which holds the data) for Site Cloud, we can run the standard class provided OOTB by Oracle Coherence, com.tangosol.net.DefaultCacheServer, with the following parameters and values:
        -Xmx128m -Xms64m
        -Dcom.sun.management.jmxremote
        -Dtangosol.coherence.management=all
        -Dtangosol.coherence.management.remote=true
        -Dtangosol.coherence.distributed.localstorage=true
        -Dtangosol.coherence.cacheconfig=config/site2-prod-cache-config.xml
        -Dtangosol.coherence.clusterport=9002
        -Dtangosol.coherence.site=SiteCloud
    To start at least one "storage" node (which holds the data) for Site 1, we again run the standard class provided by Coherence, com.tangosol.net.DefaultCacheServer, with the following parameters and values:
        -Xmx128m -Xms64m
        -Dcom.sun.management.jmxremote
        -Dtangosol.coherence.management=all
        -Dtangosol.coherence.management.remote=true
        -Dtangosol.coherence.distributed.localstorage=true
        -Dtangosol.coherence.cacheconfig=config/site1-prod-cache-config.xml
        -Dtangosol.coherence.clusterport=9001
        -Dtangosol.coherence.site=Site1
    Then we start the first client "Shell" for Site Cloud, launching the java class it.javac.Shell with these parameters and values:
        -Xmx64m -Xms64m
        -Dcom.sun.management.jmxremote
        -Dtangosol.coherence.management=all
        -Dtangosol.coherence.management.remote=true
        -Dtangosol.coherence.distributed.localstorage=false
        -Dtangosol.coherence.cacheconfig=config/site2-shell-prod-cache-config.xml
        -Dtangosol.coherence.clusterport=9002
        -Dtangosol.coherence.site=SiteCloud
    Finally, we start the second client "Shell" for Site 1, launching a new instance of the class it.javac.Shell with the following parameters and values:
        -Xmx64m -Xms64m
        -Dcom.sun.management.jmxremote
        -Dtangosol.coherence.management=all
        -Dtangosol.coherence.management.remote=true
        -Dtangosol.coherence.distributed.localstorage=false
        -Dtangosol.coherence.cacheconfig=config/site1-shell-prod-cache-config.xml
        -Dtangosol.coherence.clusterport=9001
        -Dtangosol.coherence.site=Site1
    And now, let's execute some tests to validate and better understand our configuration.
    TEST 1
    The purpose of this test is to load objects into the Site 1 cache and see how many objects are cached on Site Cloud. In the "Shell" launched with parameters to access Site 1, write and run the command:
        load test/100
    In the "Shell" launched with parameters to access Site Cloud, write and run the command:
        size passive-cache
    Expected result: if all is OK, the first "Shell" has uploaded 100 objects into a cache named "test"; consequently, the "push replication" functionality has updated Site Cloud by sending the 100 objects to the second cluster, where they will have been put into a corresponding cache, which we named "passive-cache".
    TEST 2
    The purpose of this test is to listen for delete and add events happening on Site 1 that are replicated within the cache on Site Cloud.
In the "Shell" launched with parameters to access the "Site Cloud" let’s write and run the command: listen passive-cache/name like '%' or a "cohql" query, with your preferred parameters In the "Shell" launched with parameters to access the "Site 1" let’s write and run the following commands: load test/10 load test2/20 delete test/50 Expected result If all is OK, the "Shell" to Site Cloud let us to listen to all the add and delete events within the cache "cache-passive", whose objects satisfy the query condition "name like '%' " (ie, every objects in the cache; you could change the tests and create different queries).Through the Shell to "Site 1" we launched the commands to add and to delete objects on different caches (test and test2). With the "Shell" running on "Site Cloud" we got the evidence (displayed or printed, or in a log file) that its cache has been filled with events and related objects generated by commands executed from the" Shell "on" Site 1 ", thanks to "push-replication" feature.  Other tests can be performed, such as, for example, the subscription to the events on the "Site 1" too, using different "cohql" queries, changing the cache configuration,  to effectively demonstrate both the potentiality and  the versatility produced by these different configurations, even in the cloud, as in our case. More information on how to configure Coherence "Push Replication" can be found in the Oracle Coherence Incubator project documentation at the following link: http://coherence.oracle.com/display/INC10/Home More information on Oracle Coherence "In Memory Data Grid" can be found at the following link: http://www.oracle.com/technetwork/middleware/coherence/overview/index.html To download and execute the whole sources and configurations of the example explained in the above post,  click here to download them; After download the last available version of the Push-Replication Pattern library implementation from the Oracle Coherence Incubator site, and download also the related and required version of Oracle Coherence. For simplicity the required .jarS to execute the example (that can be found into the Push-Replication-Pattern  download and Coherence Distribution download) are: activemq-core-5.3.1.jar activemq-protobuf-1.0.jar aopalliance-1.0.jar coherence-commandpattern-2.8.4.32329.jar coherence-common-2.2.0.32329.jar coherence-eventdistributionpattern-1.2.0.32329.jar coherence-functorpattern-1.5.4.32329.jar coherence-messagingpattern-2.8.4.32329.jar coherence-processingpattern-1.4.4.32329.jar coherence-pushreplicationpattern-4.0.4.32329.jar coherence-rest.jar coherence.jar commons-logging-1.1.jar commons-logging-api-1.1.jar commons-net-2.0.jar geronimo-j2ee-management_1.0_spec-1.0.jar geronimo-jms_1.1_spec-1.1.1.jar http.jar jackson-all-1.8.1.jar je.jar jersey-core-1.8.jar jersey-json-1.8.jar jersey-server-1.8.jar jl1.0.jar kahadb-5.3.1.jar miglayout-3.6.3.jar org.osgi.core-4.1.0.jar spring-beans-2.5.6.jar spring-context-2.5.6.jar spring-core-2.5.6.jar spring-osgi-core-1.2.1.jar spring-osgi-io-1.2.1.jar At this URL could be found the original article: http://blogs.oracle.com/slc/coherence_into_the_cloud_boost Authors: Nino Guarnacci & Francesco Scarano

  • Improving the speed of writing code in C#

    - by Robert Harvey
    Laugh if you want, but I used to develop substantial line-of-business applications in VB6, long before the .NET framework came along. Why, when I was your age, we used to walk two miles in the snow, uphill. Both ways... Love it or hate it, VB6 had a REPL-like feel, and a very rapid development cycle. I would like to know how to come closer to that process in C#. In VB6, I could write a function, execute it, debug it and have it fully functional in a few minutes. I am told this is how the Lisp crowd works. It's a very rapid-fire style of programming. In C# I write a function, then I write a unit test for that function (which is OK, I understand the value of that), then I right-click, run test, wait for the project to compile (takes about 10 seconds right now, which would be an eternity for a REPL loop), and get an exception. Honestly, this feels more like my junior college days, when I used to feed punch cards into a hopper and wait for a printout (exaggerating only slightly for effect). Additionally, my tendency nowadays is to make everything public while I'm testing it. Unit testing with private accessors works fine, but you can't trace through the code (unless, of course, I'm doing something wrong) while you're using them. So what I'd like to know is, what adjustments have you made to your development process in C# to streamline it, and make it possible to write and verify your code very rapidly?

  • GL Expense Allocation in OPM Actual Costing

    - by Annemarie Provisero
    ADVISOR WEBCAST: GL Expense Allocation in OPM Actual Costing
    PRODUCT FAMILY: Oracle Process Manufacturing
    March 16, 2011 at 8 am PT, 9 am MT, 11 am ET
    This session is designed for customers and functional users, and discusses the setups necessary for expense allocation (the flow of expenses from General Ledger balances to Oracle Process Manufacturing Costing and back to GL; examples include screen shots of test cases). Topics will include:
        - The concept of GL expense allocation in OPM
        - Situations where this functionality is most suitable
        - Setups needed to make this work
        - Examples with screen shots (performed test cases)
        - Known gaps in this functionality
    A short live demonstration (only if applicable) and a question and answer period will be included. Oracle Advisor Webcasts are dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services. Click here to register for this session.
    The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

  • Oracle OpenWorld 2012 Hands-on Lab: “Using Oracle AIA Foundation Pack for Oracle E-Business Suite Integration”

    - by Lionel Dubreuil
    Sharpen your Oracle skill sets and master Oracle technology in Oracle OpenWorld Hands-on Labs. In self-paced, practical learning sessions covering everything from business applications to middleware, database, storage, and enterprise management solutions, you'll discover new ways to derive maximum benefit from your Oracle hardware and software solutions. Oracle experts will be available in person to answer questions and guide you through each lab. Hands-on Labs fill up early, and seats are limited, so don't be late. This lab, HOL10233 - Using Oracle AIA Foundation Pack for Oracle E-Business Suite Integration, is scheduled for:
        Date: Monday, Oct 1
        Time: 10:45 AM - 11:45 AM
        Location: Marriott Marquis - Nob Hill CD
    In this Hands-on Lab, learn how to integrate Oracle E-Business Suite with third-party applications in your ecosystem with Oracle Application Integration Architecture (AIA) Foundation Pack. This hands-on lab focuses on SOA-based integration with Oracle E-Business Suite, using Oracle E-Business Suite Integrated SOA Gateway or the Oracle Applications Adapter to orchestrate a process across disparate applications in your ecosystem. Objectives for this session are to:
        - Learn how to design and build an integration, from functional definition to development
        - Learn how to automatically build services with robust error handling logic
        - Learn how to discover and reuse services available in Oracle E-Business Suite

  • check what process was causing the problem of high cpu load

    - by linuxk
    I'm running an nginx WordPress server in KVM using 12.04 server x86. It ran very well for about 4 months, until 2 hours ago, when I found that my website was down with no ping response. Virt-manager logged a high cpu load (please see the picture below) before the unexpected shutdown. I want to know what process caused the unexpected shutdown. The following log entries make me think my server was attacked. Any suggestions and help would be appreciated. kern.log and syslog showed me the same output:
        Nov 11 03:54:11 www kernel: [1344541.156239] [UFW BLOCK] IN=eth0 OUT= MAC= SRC=0.0.0.0 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2
        Nov 11 03:54:11 www kernel: [1344541.156315] [UFW BLOCK] IN=eth0 OUT= MAC= SRC=0101:080a:2334:c900:0100:0000:0000:0000 DST=ff02:0000:0000:0000:0000:0000:0000:0001 LEN=72 TC=0 HOPLIMIT=1 FLOWLBL=0 PROTO=ICMPv6 TYPE=130 CODE=0
    /nginx/access.log showed me:
        119.235.237.17 - - [11/Nov/2012:03:45:29 +0900] "GET /blog HTTP/1.1" 200 30493 "-" "Yeti/1.0 (NHN Corp.; http://help.naver.com/robots/)"
        my-server-ip - - [11/Nov/2012:11:05:30 +0900] "POST /wp-cron.php?doing_wp_cron=13 HTTP/1.0" 499 0 "-" "WordPress/3.4.2; http://mywebsite.com"
    The server was turned back on here.
        119.235.237.16 - - [11/Nov/2012:11:05:30 +0900] "GET /blog HTTP/1.1" 200 32935 "-" "Yeti/1.0 (NHN Corp.; http://help.naver.com/robots/)"
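    For this kind of post-mortem, two standard places to look (generic commands, not specific to this report) are the kernel log around the crash time for out-of-memory kills, and the reboot history:
        # Any OOM-killer activity before the crash?
        grep -i -e "out of memory" -e "killed process" /var/log/kern.log
        # Reboot/shutdown history
        last -x | head
    Note that [UFW BLOCK] lines like those above, with PROTO=2 (IGMP) to the 224.0.0.1 all-hosts multicast address, are typically routine firewall noise rather than evidence of an attack.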

  • Difficulty / Process for mobile 2D multiplayer RPG game

    - by user2827214
    I'm looking to develop a 2D multiplayer RPG for Android, with a bird's-eye view similar to that of Zelda/Pokemon. The game is very simple in all respects, since my intent is for thousands of players to occupy the same world, which I imagine requires good performance. However, I am unsure about the performance requirements of two properties:
        - The tile map used as a background is dynamic (interactive). For example, a player steps in the water, and the water turns black. Every tile in the game does this.
        - The tile map is the same object for all players, but it is displayed differently on each user's mobile device, even though the players exist in the same world. For example, the water that turned black is displayed as red on all other players' screens.
    I have knowledge of Java, but almost none regarding game dev tools. Is there a best process for these requirements? Should I develop in pure Java, or use some tool like Slick2D, etc.? How performance-intensive are these properties, and is this even possible? Edit: There are no collisions in the game or difficult animations; I'm imagining simply changing the colors of the tiles as in the examples, and a client-server architecture.

  • How to share code as open source?

    - by Ethel Evans
    I have a little program that I wrote for a local group to handle a somewhat complicated scheduling issue: scheduling multiple meetings in multiple locations that change weekly according to certain criteria. It's a niche need, but I wouldn't be surprised if there are other groups that could use software like this. In fact, we've had requests from others for directions on starting a group like this, and if their groups get as big, they might also want special software to help with scheduling. I plan to continue developing the program and eventually make it an online web app, but a very simple alpha version is completed as a console app. I'd like to make it available as open source, but I have no idea what kind of process I should go through first. Right now, all I have is Java code, not even thoroughly unit-tested. I haven't shown the code to anyone else. There is no documentation. I don't know where I would put the code so others could access it. I don't know anything about licensing it. I don't know what kind of support people will expect from me if I release it as open source. I have no idea what else I should worry about. Can someone outline for me (or post an article that outlines) the process of taking open source software from "coded" to "completed / available"? I really don't want to embarrass myself by doing things weirdly.

  • Dealing with institutionalized programmers.

    - by Singleton
    Sometimes programmers who work on a project for a long time tend to get institutionalized. It is difficult to convince them with reasoning, and even if we manage to convince them, they are reluctant to take suggestions on board. How do we handle the situation without creating friction in the team? Institutionalized, that is, in terms of practices. I recently joined a project where the build & release process was made very complicated by unnecessary roadblocks. My suggestion was that we could get rid of some of the development overhead (like filling in a few spreadsheets) just by integrating the defect management and version control tools (both are IBM Rational tools, so integration can be very easy and a one-off effort). Also, by using tools like Maven & Ant (the project involves Java and some COTS products), build & release can be simplified and manual errors & intervention reduced. I managed to convince the team and am ready to put in the effort to develop a proof of concept. But the 'senior' developer is not willing to take it on board. One reason could be that the current process makes him valuable to the team.

  • Tuesday at Oracle OpenWorld 2012 - Must See Session: “Oracle Fusion Applications: Best Practices in Integration Design Patterns”

    - by Lionel Dubreuil
    Don’t miss this “CON8685 - Oracle Fusion Applications: Best Practices in Integration Design Patterns” session:
        Speakers: Rajesh Raheja - Senior Director, Development, Oracle; Ravi Sankaran - Director, Applications Development, Oracle
        Date: Tuesday, Oct 2
        Time: 1:15 PM - 2:15 PM
        Location: Palace Hotel - Telegraph
    Oracle Fusion Applications provide various ways to integrate their functional capabilities with other Oracle applications as well as third-party and legacy applications. In this session, you will learn the patterns used when communicating with Oracle Fusion Applications with a SOA approach. It addresses identifying the integration artifacts available (also known as assets) in Oracle Enterprise Repository; how to invoke synchronous and asynchronous Web services; importing and exporting bulk data; and integration issues to look out for. The patterns are applicable to on-premises and SaaS/cloud deployment modes and are indicated as such. Objectives for this session are to:
        - Highlight the various ways to integrate with Oracle Fusion Applications
        - Showcase use of Oracle Fusion Middleware technologies for integration
        - Describe best practices and design patterns for integration
