Search Results

Search found 1657 results on 67 pages for 'writes on'.

Page 3/67

  • I/O Reads and Writes per process Unix/SunOS?

    - by Alex
    Can prstat or something similar tell me how many reads/writes a process is doing, the way Task Manager on Windows can show I/O Reads, I/O Writes and many other per-process I/O columns? I'm using SunOS 5.10, but feel free to post answers for other Unix flavours too.

    Read the article

  • JMS Step 4 - How to Create an 11g BPEL Process Which Writes a Message Based on an XML Schema to a JMS Queue

    - by John-Brown.Evans
    This post continues the series of JMS articles which demonstrate how to use JMS queues in a SOA context. The previous posts were:

        JMS Step 1 - How to Create a Simple JMS Queue in Weblogic Server 11g
        JMS Step 2 - Using the QueueSend.java Sample Program to Send a Message to a JMS Queue
        JMS Step 3 - Using the QueueReceive.java Sample Program to Read a Message from a JMS Queue

    In this example we will create a BPEL process which will write (enqueue) a message to a JMS queue using a JMS adapter. The JMS adapter will enqueue the full XML payload to the queue. This sample will use the following WebLogic Server objects:
        Object Name             Type                 JNDI Name
        TestConnectionFactory   Connection Factory   jms/TestConnectionFactory
        TestJMSQueue            JMS Queue            jms/TestJMSQueue
        eis/wls/TestQueue       Connection Pool      eis/wls/TestQueue

    The first two, the Connection Factory and JMS Queue, were created as part of the first blog post in this series, JMS Step 1 - How to Create a Simple JMS Queue in Weblogic Server 11g. If you haven't created those objects yet, please see that post for details on how to do so. The Connection Pool will be created as part of this example.

    1. Verify Connection Factory and JMS Queue

    As mentioned above, this example uses a WLS Connection Factory called TestConnectionFactory and a JMS queue TestJMSQueue. As these are prerequisites for this example, let us verify they exist. Log in to the WebLogic Server Administration Console and select Services > JMS Modules > TestJMSModule. You should see both objects listed. If not, or if the TestJMSModule is missing, please see the abovementioned article and create these objects before continuing.

    2. Create a JMS Adapter Connection Pool in WebLogic Server

    The BPEL process we are about to create uses a JMS adapter to write to the JMS queue. The JMS adapter is deployed to the WebLogic server and needs to be configured to include a connection pool which references the connection factory associated with the JMS queue.

    In the WebLogic Server Console, go to Deployments (pressing Next to page through the list, if necessary) and select (click on) the JmsAdapter. Select Configuration > Outbound Connection Pools and expand oracle.tip.adapter.jms.IJmsConnectionFactory. This will display the list of connections configured for this adapter, for example eis/aqjms/Queue, eis/aqjms/Topic etc. These JNDI names are actually quite confusing: we are expecting to configure a connection pool here, but the names refer to queues and topics. One would expect these to be called *ConnectionPool or *_CF or similar, but to conform to this nomenclature, we will call our entry eis/wls/TestQueue. This JNDI name is also the name we will use later, when creating a BPEL process to access this JMS queue!

    Select New, check the oracle.tip.adapter.jms.IJmsConnectionFactory check box and press Next. Enter the JNDI Name eis/wls/TestQueue for the connection instance, then press Finish. Expand oracle.tip.adapter.jms.IJmsConnectionFactory again and select (click on) eis/wls/TestQueue. The ConnectionFactoryLocation must point to the JNDI name of the connection factory associated with the JMS queue you will be writing to. In our example, this is the connection factory called TestConnectionFactory, with the JNDI name jms/TestConnectionFactory. (As a reminder, this connection factory is contained in the JMS Module called TestJMSModule, under Services > Messaging > JMS Modules > TestJMSModule, which we verified at the beginning of this document.) Enter jms/TestConnectionFactory into the Property Value field for Connection Factory Location. After entering it, you must press Return/Enter and then Save for the value to be accepted.

    If your WebLogic server is running in Development mode, you should see the message that the changes have been activated and the deployment plan successfully updated. If not, you will need to activate the changes manually in the WebLogic server console. Although the changes have been activated, the JmsAdapter needs to be redeployed in order for the changes to become effective.
    This should be confirmed by the message "Remember to update your deployment to reflect the new plan when you are finished with your changes" at the top of the screen. The next step is to redeploy the JmsAdapter. Navigate back to the Deployments screen, either by selecting it in the left-hand navigation tree or by selecting the "Summary of Deployments" link in the breadcrumbs list at the top of the screen. Then select the checkbox next to JmsAdapter and press the Update button. On the Update Application Assistant page, select "Redeploy this application using the following deployment files" and press Finish. After a few seconds you should get the message that the selected deployments were updated. The JMS adapter configuration is complete and it can now be used to access the JMS queue.

    To summarize: we have created a JMS adapter connection pool with the JNDI name eis/wls/TestQueue, which references the connection factory jms/TestConnectionFactory. This is the JNDI name to be accessed by a process such as a BPEL process, when using the JMS adapter to access the previously created JMS queue with the JNDI name jms/TestJMSQueue. In the following step, we will set up a BPEL process to use this JMS adapter to write to the JMS queue.

    3. Create a BPEL Composite with a JMS Adapter Partner Link

    This step requires that you have a valid Application Server Connection defined in JDeveloper, pointing to the application server on which you created the JMS Queue and Connection Factory. You can create this connection in JDeveloper under the Application Server Navigator. Give it any name and be sure to test the connection before completing it. This sample will use the connection name jbevans-lx-PS5, as that is the name of the connection pointing to my SOA PS5 installation.

    When using a JMS adapter from within a BPEL process, there are various configuration options, such as the operation type (consume message, produce message etc.), delivery mode and message type. One of these options is the choice of the format of the JMS message payload. This can be structured around an existing XSD, in which case the full XML element and tags are passed, or it can be opaque, meaning that the payload is sent as-is to the JMS adapter. In the case of an XSD-based message, the payload can simply be copied to the input variable of the JMS adapter. In the case of an opaque message, the JMS adapter's input variable is of type base64Binary, so the payload needs to be converted to base64 binary first. I will go into this in more detail in a later blog entry. This sample will pass a simple message to the adapter, based on the following simple XSD file, which consists of a single string element:

    stringPayload.xsd

        <?xml version="1.0" encoding="windows-1252" ?>
        <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                    xmlns="http://www.example.org"
                    targetNamespace="http://www.example.org"
                    elementFormDefault="qualified">
          <xsd:element name="exampleElement" type="xsd:string">
          </xsd:element>
        </xsd:schema>

    The following steps are all executed in JDeveloper. The SOA project will be created inside a JDeveloper Application. If you do not already have an application to contain the project, you can create a new one via File > New > General > Generic Application. Give the application any name, for example JMSTests, and, when prompted for a project name and type, call the project JmsAdapterWriteWithXsd and select SOA as the project technology type. If you already have an application, continue below.
    Create a SOA Project

    Create a new project and choose SOA Tier > SOA Project as its type. Name it JmsAdapterWriteSchema. When prompted for the composite type, choose Composite With BPEL Process. When prompted for the BPEL Process, name it JmsAdapterWriteSchema too and choose Synchronous BPEL Process as the template. This will create a composite with a BPEL process and an exposed SOAP service. Double-click the BPEL process to open and begin editing it. You should see a simple BPEL process with a Receive and Reply activity. As we created a default process without an XML schema, the input and output variables are simple strings.

    Create an XSD File

    An XSD file is required later to define the message format to be passed to the JMS adapter. In this step, we create a simple XSD file, containing a string variable, and add it to the project. First select the xsd item in the left-hand navigation tree to ensure that the XSD file is created under that item. Select File > New > General > XML and choose XML Schema. Call it stringPayload.xsd and, when the editor opens, select the Source view, then replace the contents with the contents of the stringPayload.xsd example above and save the file. You should see it under the xsd item in the navigation tree.

    Create a JMS Adapter Partner Link

    We will create the JMS adapter as a service at the composite level. If it is not already open, double-click the composite.xml file in the navigator to open it. From the Component Palette, drag a JMS adapter over onto the right-hand swim lane, under External References. This will start the JMS Adapter Configuration Wizard. Use the following entries:

        Service Name: JmsAdapterWrite
        Oracle Enterprise Messaging Service (OEMS): Oracle Weblogic JMS
        AppServer Connection: Use an existing application server connection pointing to the WebLogic server on which the above JMS queue and connection factory were created. You can use the "+" button to create a connection directly from the wizard, if you do not already have one. This example uses a connection called jbevans-lx-PS5.
        Adapter Interface > Interface: Define from operation and schema (specified later)
        Operation Type: Produce Message
        Operation Name: Produce_message
        Destination Name: Press the Browse button, select Destination Type: Queues, then press Search. Wait for the list to populate, then select the entry for TestJMSQueue, which is the queue created earlier.
        JNDI Name: The JNDI name to use for the JMS connection. This is probably the most important step in this exercise and the most common source of error. This is the JNDI name of the JMS adapter's connection pool created in the WebLogic Server, which points to the connection factory. JDeveloper does not verify the value entered here; if you enter a wrong value, the JMS adapter won't find the queue and you will get an error message at runtime, which is very difficult to trace. In our example, this is the value eis/wls/TestQueue. (See the earlier step on how to create a JMS Adapter Connection Pool in WebLogic Server for details.)
        Messages URL: We will use the XSD file we created earlier, stringPayload.xsd, to define the message format for the JMS adapter. Press the magnifying glass icon to search for schema files. Expand Project Schema Files > stringPayload.xsd and select exampleElement: string.

    Press Next and Finish, which will complete the JMS Adapter configuration.

    Wire the BPEL Component to the JMS Adapter

    In this step, we link the BPEL process/component to the JMS adapter.
    From the composite.xml editor, drag the right-arrow icon from the BPEL process to the JMS adapter's in-arrow. This completes the steps at the composite level.

    4. Complete the BPEL Process Design

    Invoke the JMS Adapter

    Open the BPEL component by double-clicking it in the design view of the composite.xml, or open it from the project navigator by selecting the JmsAdapterWriteSchema.bpel file. This will display the BPEL process in the design view. You should see the JmsAdapterWrite partner link under one of the two swim lanes. We want it in the right-hand swim lane; if JDeveloper displays it in the left-hand lane, right-click it and choose Display > Move To Opposite Swim Lane.

    An Invoke activity is required in order to invoke the JMS adapter. Drag an Invoke activity between the Receive and Reply activities. Drag the right-hand arrow from the Invoke activity to the JMS adapter partner link. This will open the Invoke editor. The correct default values are entered automatically and are fine for our purposes; we only need to define the input variable to use for the JMS adapter. By pressing the green "+" symbol, a variable of the correct type can be auto-generated, for example with the name Invoke1_Produce_Message_InputVariable. Press OK after creating the variable. (For some reason, while I was testing this, the JMS adapter moved back to the left-hand swim lane again after this step. There is no harm in leaving it there, but I find it easier to follow if it is in the right-hand lane, because I kind of think of the message coming in on the left and being routed out on the right. But you can follow your personal preference here.)

    Assign Variables

    Drag an Assign activity between the Receive and Invoke activities. We will simply copy the input variable to the JMS adapter's input variable and, so that the process has an output to print, copy it to the process's output variable as well. Double-click the Assign activity and create two Copy rules:

        for the first, drag Variables > inputVariable > payload > client:process > client:input_string to Invoke1_Produce_Message_InputVariable > body > ns2:exampleElement
        for the second, drag the same input variable to outputVariable > payload > client:processResponse > client:result

    This will create the two copy rules. Press OK. This completes the BPEL and composite design.

    5. Compile and Deploy the Composite

    We won't go into too much detail on how to compile and deploy. In JDeveloper, compile the process by pressing the Make or Rebuild icons or by right-clicking the project name in the navigator and selecting Make... or Rebuild... If the compilation is successful, deploy it to the SOA server connection defined earlier. (Right-click the project name in the navigator, select Deploy to Application Server, choose the application server connection, choose the partition on the server (usually default) and press Finish.) You should see the message ---- Deployment finished. ---- in the Deployment frame, if the deployment was successful.

    6. Test the Composite

    This is the exciting part. Open two tabs in your browser and log in to the WebLogic Administration Console in one tab and the Enterprise Manager 11g Fusion Middleware Control (EM) for your SOA installation in the other. We will use the Console to monitor the messages being written to the queue and the EM to execute the composite. In the Console, go to Services > Messaging > JMS Modules > TestJMSModule > TestJMSQueue > Monitoring. Note the number of messages under Messages Current.
    In the EM, go to SOA > soa-infra (soa_server1) > default (or wherever you deployed your composite to) and click on JmsAdapterWriteSchema [1.0], then press the Test button. Under Input Arguments, enter any string into the text input field for the payload, for example Test Message, then press Test Web Service. If the instance is successful, you should see the same text in the Response message, "Test Message". In the Console, refresh the Monitoring screen to confirm a new message has been written to the queue. Check the checkbox and press Show Messages. Click on the newest message and view its contents. They should include the full XML of the entered payload.

    7. Troubleshooting

    If you get an exception similar to the following at runtime ...

        BINDING.JCA-12510 JCA Resource Adapter location error.
        Unable to locate the JCA Resource Adapter via .jca binding file element
        The JCA Binding Component is unable to startup the Resource Adapter specified in the element: location='eis/wls/QueueTest'.
        The reason for this is most likely that either
        1) the Resource Adapters RAR file has not been deployed successfully to the WebLogic Application server or
        2) the '' element in weblogic-ra.xml has not been set to eis/wls/QueueTest. In the last case you will have to add a new WebLogic JCA connection factory (deploy a RAR).
        Please correct this and then restart the Application Server
            at oracle.integration.platform.blocks.adapter.fw.AdapterBindingException.createJndiLookupException(AdapterBindingException.java:130)
            at oracle.integration.platform.blocks.adapter.fw.jca.cci.JCAConnectionManager$JCAConnectionPool.createJCAConnectionFactory(JCAConnectionManager.java:1387)
            at oracle.integration.platform.blocks.adapter.fw.jca.cci.JCAConnectionManager$JCAConnectionPool.newPoolObject(JCAConnectionManager.java:1285)
            ...

    ... then this is very likely due to an incorrect JNDI name entered for the JMS connection in the JMS Adapter Wizard. Recheck those steps. The error message prints the JNDI name used; in this example, it was incorrectly entered as eis/wls/QueueTest instead of eis/wls/TestQueue.

    This concludes this example.

    Best regards
    John-Brown Evans
    Oracle Technology Proactive Support Delivery
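    As an optional cross-check outside the two consoles, a standalone JMS client along the lines of the QueueReceive.java sample from JMS Step 3 can dequeue and print the message that the BPEL process wrote. The following is only a minimal sketch of that idea, not part of the original article: it assumes the WebLogic JNDI provider is reachable at t3://localhost:7001 (adjust host, port and any credentials to your installation) and that a WebLogic client library such as wlfullclient.jar is on the classpath; the JNDI names jms/TestConnectionFactory and jms/TestJMSQueue are the ones created in this series.

        import java.util.Hashtable;
        import javax.jms.*;
        import javax.naming.Context;
        import javax.naming.InitialContext;

        // Sketch: receive one message from TestJMSQueue to verify the BPEL enqueue.
        public class VerifyTestJMSQueue {
            public static void main(String[] args) throws Exception {
                Hashtable<String, String> env = new Hashtable<String, String>();
                env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
                env.put(Context.PROVIDER_URL, "t3://localhost:7001"); // assumption: default local port

                Context ctx = new InitialContext(env);
                QueueConnectionFactory qcf = (QueueConnectionFactory) ctx.lookup("jms/TestConnectionFactory");
                Queue queue = (Queue) ctx.lookup("jms/TestJMSQueue");

                QueueConnection connection = qcf.createQueueConnection();
                QueueSession session = connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
                QueueReceiver receiver = session.createReceiver(queue);
                connection.start();

                // Wait up to 5 seconds for the message written by the BPEL process.
                Message msg = receiver.receive(5000);
                if (msg instanceof TextMessage) {
                    // The text should contain the full XML payload based on stringPayload.xsd.
                    System.out.println("Received: " + ((TextMessage) msg).getText());
                } else {
                    System.out.println("No text message received: " + msg);
                }

                receiver.close();
                session.close();
                connection.close();
            }
        }

    Note that receiving the message consumes it, so the Messages Current counter in the Console will drop accordingly; if you only want to inspect the message without consuming it, use the Show Messages screen as described above.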

    Read the article

  • flat files vs. RDBMS database, few read/writes, few changes

    - by Bob Lapique
    I have to handle data from long-term (years, decades) climate monitoring stations. The data flow usually starts with raw data (voltages, etc.) plus quality-check information (pressure, temperature, flow rate, etc.), generally recorded at 1 Hz. Then the data are assigned a quality flag (by a human and/or a program), processed (calibration curves applied) and flagged. So we basically end up with two datasets: raw and processed data. New data are typically added once a day (~500 KB/day/instrument). Simultaneous queries are not likely to ever happen. I wanted to go for an RDBMS (we have a MySQL server) and have some experience in database design, but the IT guy keeps telling me that flat files will do the job just as well. I suspect he is trying to make his life easier when it comes to backing up and upgrading MySQL. There are not that many links between the data and they don't change much, but the quality flags will change. An RDBMS makes it easier to compare data from different instruments over a many-day scale than daily text files do. Well, what would you advise? Thanks.

    Read the article

  • If a blogger writes a whole article about my website, how important are anchor texts?

    - by Noam
    If there is a full article about my web service, with my brand name in the title, many relevant keywords that I would like Google to consider in my rankings, and links to my website with simple anchor text such as <brand name> and <page title>: does it make a big difference if I get links on the actual keywords I'm after, or is it enough that these keywords are part of the written text?

    Read the article

  • Xubuntu: how do I automatically mount an external NTFS drive with writes allowed?

    - by user74372
    I would have thought mine was such a common question that there would be a simple solution already built in to Xubuntu, but there isn't. I have two separate external hard disks and connect them to a USB port at different times. I would like them to be automatically mounted as read/write, but apparently the designers of gnome-volume-manager decided that shouldn't be possible, and they are mounted as read-only. In fact, I can write new files to them, but cannot then delete the new files I just wrote! Is there a workaround? I read somewhere that /etc/fstab doesn't apply to removable media, which are mounted by gnome-volume-manager and therefore cannot be unmounted by a user.

    Read the article

  • Does booting a live USB cause writes to occur to it?

    - by InkBlend
    I know that Ubuntu will not and cannot write to a live CD-R/DVD±R when booting from it, as it is a read-only medium. However, the procedure (at least at the data level) for running Ubuntu off an optical disc is different from running it off a USB drive, which is usually write-enabled. If I make a live USB, turn off persistent files, and boot from it, will any data be written to the USB drive (e.g. settings, error logs, temporary files)? Or will Ubuntu just read from it and nothing else?

    Read the article

  • How to cope with "Hidden IT..." Who writes and maintains the ad-hoc software applications?

    - by matcauthon
    Bigger companies usually have the problem that it is not possible to write all the programs employees want (to save time and to optimize processes), due to a lack of staff and money. Then hidden programs get created by people with (at least some) coding experience (or by cheap students/interns...). Under some circumstances these applications grow in importance and spread from one user to a whole department. Then comes the critical point: who will maintain the application, add new features, ...? And this app is critical. It IS needed. But the intern has left the company. No one knows how it works. You only have a bunch of sources and some sort of documentation. How do you cope with these applications? Can you "forbid" them? Can you control them? Do you have to write all apps (not Excel macros or other minor stuff) in the IT department?

    Read the article

  • SQL Server 2008 BULK INSERT causes more reads than writes. Why?

    - by sh1ng
    I have a huge table (a few billion rows) with a clustered index and two non-clustered indexes. A BULK INSERT operation produces 112000 reads and only 383 writes (duration 19948 ms). This is very confusing to me. Why do reads exceed writes? How can I reduce them?

    Update: the query:

        insert bulk DenormalizedPrice4 (
            [DP_ID] BigInt, [DP_CountryID] Int, [DP_OperatorID] SmallInt, [DP_OperatorPriceID] BigInt,
            [DP_SpoID] Int, [DP_TourTypeID] Int, [DP_CheckinDate] Date, [DP_CurrencyID] SmallInt,
            [DP_Cost] Decimal(9,2), [DP_FirstCityID] Int, [DP_FirstHotelID] Int, [DP_FirstBuildingID] Int,
            [DP_FirstHotelGlobalStarID] Int, [DP_FirstHotelGlobalMealID] Int,
            [DP_FirstHotelAccommodationTypeID] Int, [DP_FirstHotelRoomCategoryID] Int,
            [DP_FirstHotelRoomTypeID] Int, [DP_Days] TinyInt, [DP_Nights] TinyInt,
            [DP_ChildrenCount] TinyInt, [DP_AdultsCount] TinyInt, [DP_TariffID] Int,
            [DP_DepartureCityID] Int, [DP_DateCreated] SmallDateTime, [DP_DateDenormalized] SmallDateTime,
            [DP_IsHide] Bit, [DP_FirstHotelAccommodationID] Int
        ) with (CHECK_CONSTRAINTS)

    There are no triggers or foreign keys. The clustered index is on DP_ID, and the two non-clustered indexes are non-unique (fill factor 90%). One more thing: the database is stored on RAID 50 with a 256K stripe size.

    Read the article

  • Are reads and (transactional) writes faster for entities of the same group than otherwise?

    - by indiehacker
    What advantage is there to designing child-parent relationships, which allow us to do writes in transactions, when there is never a real concern for consistency, contention and those sorts of more complex issues? Does it make writes and reads faster? Consider my situation, where there are many .png images that are referenced by one mosaic layer, and these .png images are written just once by a single user. The user can design many mosaic layers, and her mosaic layers and referenced image entities are never changed/updated; they are just deleted some time in the future. Other users can come to the web project site and interactively view the mosaic layer in different layouts/configurations of the images as they play (query) with different criteria, so reads should be very fast. There is no real worry about contention, or about users conflicting with one another when writing new image entities, and because of that I am assuming there is no "requirement" for the .png image entities to be grouped under their mosaic layer in a child-parent relationship. However, since the documentation says entities in the same group are stored close to one another, perhaps grouping the many image entities as children of a single mosaic layer parent has the advantage that writing (in a transaction) and reading will happen much faster?
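    For reference, here is a minimal sketch of the grouping being discussed, assuming this is about the App Engine datastore (as the entity-group terminology suggests) and using the Java low-level datastore API; the kind names MosaicLayer and Image and their properties are made up for illustration. Giving every image the layer as its parent puts them all in one entity group, which is what allows a single transaction to cover all of the writes:

        import com.google.appengine.api.datastore.DatastoreService;
        import com.google.appengine.api.datastore.DatastoreServiceFactory;
        import com.google.appengine.api.datastore.Entity;
        import com.google.appengine.api.datastore.Key;
        import com.google.appengine.api.datastore.Transaction;

        // Sketch only: hypothetical kinds and properties, not from the question.
        public class MosaicLayerWriter {
            public static void writeLayer(byte[][] pngImages) {
                DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();

                // The mosaic layer entity is the root of the entity group.
                Entity layer = new Entity("MosaicLayer");
                layer.setProperty("createdBy", "some-user");
                Key layerKey = datastore.put(layer);

                // All images are children of the layer, so they belong to the same
                // entity group and can be written atomically in one transaction.
                Transaction txn = datastore.beginTransaction();
                try {
                    for (byte[] png : pngImages) {
                        Entity image = new Entity("Image", layerKey); // child of the layer
                        image.setProperty("sizeBytes", (long) png.length);
                        datastore.put(txn, image);
                    }
                    txn.commit();
                } finally {
                    if (txn.isActive()) {
                        txn.rollback();
                    }
                }
            }
        }

    Whether the resulting co-location also makes reads noticeably faster is exactly what the question asks; the grouping itself is mainly what enables the atomic multi-entity write.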

    Read the article

  • perfmon reporting higher IOPs than possible?

    - by BlueToast
    We created a monitoring report for IOPS using the performance counters Disk Reads/sec and Disk Writes/sec on four servers (physical boxes, no virtualization), each with 4x 15k 146 GB SAS drives in RAID 10 per server, set to sample and record data every second, and logged for 24 hours before stopping the reports. These are the results we got:

        Server1: maximum disk reads/sec 4249.437, maximum disk writes/sec 4178.946
        Server2: maximum disk reads/sec 2550.140, maximum disk writes/sec 5177.821
        Server3: maximum disk reads/sec 1903.300, maximum disk writes/sec 5299.036
        Server4: maximum disk reads/sec 8453.572, maximum disk writes/sec 11584.653

    The average disk reads and writes per second were generally low; for one particular server the average was about 33 writes/sec, but when monitoring in real time it would often spike to several hundred and sometimes into the thousands. Could someone explain why these numbers are significantly higher than theoretical calculations assuming each drive can do 180 IOPS? Additional details (RAID card): HP Smart Array P410i, total cache size 1 GB, write cache disabled, array accelerator cache ratio 25% read / 75% write.
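    For scale, the theoretical ceiling being referred to works out roughly as follows (a back-of-the-envelope sketch only; it ignores controller and OS caching, command queuing and sequential access patterns, any of which can push the measured counters well past the raw spindle numbers):

        4 drives x 180 IOPS/drive = 720 IOPS aggregate (raw spindle limit)
        RAID 10 reads:  up to ~720 IOPS, since reads can be served by any of the four spindles
        RAID 10 writes: up to ~360 IOPS, since each logical write costs a write on both halves of a mirror pair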

    Read the article

  • How to specify the Event Log Source where ASP.NET writes unhandled exceptions?

    - by Knagis
    By default, ASP.NET writes any unhandled exception to the default ASP.NET X.Y.Z.0 event log source. Is it possible to specify, via configuration, that the events and exceptions for a particular application have to be logged under a specific event log source? The reason is that I want all issues directly related to my application to be stored in a separate event log category that can then be filtered on.

    Read the article

  • Does OpenCL allow concurrent writes to same memory address?

    - by Wonko
    Are two (or more) different threads allowed to write to the same memory location in global space in OpenCL? The write always changes a uchar from 0 to 1, so the outcome should be predictable, but I'm getting erratic results in my program, so I'm wondering whether the reason could be that some of the writes fail. Could it help to declare the buffer write-only and copy it to a read-only buffer afterwards?

    Read the article

  • How do I log file system read/writes by filename in Linux?

    - by Casey
    I'm looking for a simple method that will log file system operations. It should display the name of the file being accessed or modified. I'm familiar with powertop, and it appears this works to an extent, insofar as it shows the user files that were written to. Are there any other utilities that support this feature? Some of my findings:

        powertop: best for write-access logging, but more focused on CPU activity
        iotop: shows real-time disk access by process, but not the file name
        lsof: shows the open files per process, but not real-time file access
        iostat: shows the real-time I/O performance of disks/arrays, but does not indicate file or process

    Read the article

  • How do I know if my disks are being hit with too many I/O reads or writes or both?

    - by Mark F
    I know a bit about disk I/O and bottlenecks relating to this, especially when relating to databases. How do I really know what the maximum I/O numbers will be for my disks? What metric might be available for working out, roughly (but it needs to be a good approximation), how much I/O capacity I have left? I've seen it before where things are bubbling along nicely and then, all of a sudden, everything screams to a halt, and it ends up being an I/O-bound problem. Is there a better way to predict when I/O is reaching its limits? This article was interesting but did not give the answer I desire. So, is my best bet just looking at 'CPU I/O WAIT'? There must be a more reactive method than this.

    Read the article

  • What happens to missed writes after a zpool clear?

    - by Kevin
    I am trying to understand ZFS' behaviour under a specific condition, but the documentation is not very explicit about this, so I'm left guessing. Suppose we have a zpool with redundancy, and take the following sequence of events:

        1. A problem arises in the connection between device D and the server. This causes a large number of failures and ZFS therefore faults the device, putting the pool in a degraded state.
        2. While the pool is in the degraded state, the pool is mutated (data is written and/or changed).
        3. The connectivity issue is physically repaired, so that device D is reliable again.
        4. Knowing that most data on D is valid, and not wanting to stress the pool with a needless resilver, the admin instead runs zpool clear pool D. This is indicated by Oracle's documentation as the appropriate action where the fault was due to a transient problem that has been corrected.

    I've read that zpool clear only clears the error counter and restores the device to online status. However, this is a bit troubling, because if that's all it does, it will leave the pool in an inconsistent state! This is because the mutations in step 2 will not have been successfully written to D. Instead, D will reflect the state of the pool prior to the connectivity failure. This is of course not the normative state for a zpool and could lead to hard data loss upon failure of another device; however, the pool status will not reflect this issue!

    I would at least assume, based on ZFS' robust integrity mechanisms, that an attempt to read the mutated data from D would catch the mistakes and repair them. However, this raises two problems: reads are not guaranteed to hit all mutations unless a scrub is done; and once ZFS does hit the mutated data, it (I'm guessing) might fault the drive again, because it would appear to ZFS to be corrupting data, since it doesn't remember the previous write failures.

    Theoretically, ZFS could circumvent this problem by keeping track of mutations that occur during a degraded state and writing them back to D when it is cleared. For some reason I suspect that's not what happens, though. I'm hoping someone with intimate knowledge of ZFS can shed some light on this aspect.

    Read the article

  • RAID 5 creation (using mdadm): lots of reads/writes during creation, is this normal?

    - by Gbrits
    I created a software RAID 5 array using:

        mdadm -C /dev/md2 -l5 -n4 /dev/sd[i-l]

    At the same time I'm using dstat to watch I/O activity:

        dstat -c -d -D total,sda1,md2,sdi,sdj,sdk,sdl -l -m -n

    and notice that disks sd[i-k] are all being read from while sdl is being written to. Now, I do understand that RAID 5 has to be initialized, but it takes a really long time, and all disks are clean and freshly formatted (using xfs), so I figure there might be some kind of shortcut to skip the (unnecessary?) checking. Is there? The creation is part of a time-critical nightly batch process (run on Amazon EC2), so it's not a one-time thing. Thanks, Geert-Jan

    Read the article

  • How do I know if my disks are being hit with too many I/O reads or writes or both?

    - by Mark F
    Hi All, I know a bit about disk I/O and bottlenecks relating to this, especially when relating to databases. But how do I really know what the maximum I/O numbers will be for my disks? What metric might be available for working out, roughly (but it needs to be a good approximation), how much I/O capacity I have left? I've seen it before where things are bubbling along nicely and then, all of a sudden, everything screams to a halt, and it ends up being an I/O-bound problem. Is there a better way to predict when I/O is reaching its limits? This article was interesting but did not give the answer I desire: http://serverfault.com/questions/61510/linux-how-can-i-see-whats-waiting-for-disk-io. So is my best bet just looking at 'CPU I/O WAIT'? There must be a more reactive method than this. Best, M

    Read the article

  • CONSOLE appender was turned off. But JBoss still writes some traces to STDOUT

    - by Vladimir Bezugliy
    The CONSOLE appender was turned off, but JBoss still writes some traces to STDOUT during startup. Why?

        C:\as\jboss-5.1.0.GA\bin>start.cmd
        Calling C:\as\jboss-5.1.0.GA\bin\run.conf.bat
        ===============================================================================
          JBoss Bootstrap Environment
          JBOSS_HOME: C:\as\jboss-5.1.0.GA
          JAVA: C:\as\jdk1.6.0_07\bin\java
          JAVA_OPTS: -Dprogram.name=run.bat -Xms512M -Xmx758M -XX:MaxPermSize=256M -server
          CLASSPATH: C:\as\jboss-5.1.0.GA\bin\run.jar
        ===============================================================================
        13:11:25,080 INFO  [ServerImpl] Starting JBoss (Microcontainer)...
        13:11:25,080 INFO  [ServerImpl] Release ID: JBoss [The Oracle] 5.1.0.GA (build: SVNTag=JBoss_5_1_0_GA date=200905221053)
        13:11:25,080 INFO  [ServerImpl] Bootstrap URL: null
        13:11:25,080 INFO  [ServerImpl] Home Dir: C:\as\jboss-5.1.0.GA
        13:11:25,080 INFO  [ServerImpl] Home URL: file:/C:/as/jboss-5.1.0.GA/
        13:11:25,080 INFO  [ServerImpl] Library URL: file:/C:/as/jboss-5.1.0.GA/lib/
        13:11:25,080 INFO  [ServerImpl] Patch URL: null
        13:11:25,080 INFO  [ServerImpl] Common Base URL: file:/C:/as/jboss-5.1.0.GA/common/
        13:11:25,080 INFO  [ServerImpl] Common Library URL: file:/C:/as/jboss-5.1.0.GA/common/lib/
        13:11:25,080 INFO  [ServerImpl] Server Name: web
        13:11:25,080 INFO  [ServerImpl] Server Base Dir: C:\as\jboss-5.1.0.GA\server
        13:11:25,080 INFO  [ServerImpl] Server Base URL: file:/C:/as/jboss-5.1.0.GA/server/
        13:11:25,080 INFO  [ServerImpl] Server Config URL: file:/C:/as/jboss-5.1.0.GA/server/web/conf/
        13:11:25,080 INFO  [ServerImpl] Server Home Dir: C:\as\jboss-5.1.0.GA\server\web
        13:11:25,080 INFO  [ServerImpl] Server Home URL: file:/C:/as/jboss-5.1.0.GA/server/web/
        13:11:25,080 INFO  [ServerImpl] Server Data Dir: C:\as\jboss-5.1.0.GA\server\web\data
        ...

    Read the article

  • Why is this Javascript that writes out a Google Ad not displaying properly on the iPhone?

    - by Dave M G
    I have this JavaScript code that checks whether there is a DIV named "google-ad" and, if there is, writes in the necessary code to display the Google ad. This works great in browsers like Firefox, Chrome, Safari on Mac, and on Android. However, when I bundle this code with Adobe's PhoneGap Build and deploy it to iPhones, it behaves strangely: it launches a browser window displaying just the Google ad on its own. In order to keep using my app, every time I change a page and a new Google ad is loaded, I have to close the browser window to get back to the app. Why is this code launching browser windows outside of my PhoneGap app on iPhone?

        if (document.getElementById("google-ad") && document.getElementById("google-ad") !== null
                && document.getElementById("google-ad") !== "undefined") {

            if (userStatus.status == 0) {
                document.getElementById("centre").style.padding = "137px 0 12px 0";
                document.getElementById("header-container").style.margin = "-138px 0 0 0";
                document.getElementById("header-container").style.height = "132px";
                document.getElementById("header-username").style.top = "52px";
                document.getElementById("google-ad").style.height = "50px";
                document.getElementById("google-ad").style.width = "320px";
                document.getElementById("google-ad").style.backgroundColor = "#f0ebff";
                document.getElementById("google-ad").style.display = "block";

                window["google_ad_client"] = 'ca-pub-0000000000000000';
                window["google_ad_slot"] = "00000000";
                window["google_ad_width"] = 320;
                window["google_ad_height"] = 50;

                window.adcontainer = document.getElementById('google-ad');
                window.adhtml = '';

                // Capture document.write output from show_ads.js and route it into the ad DIV.
                function mywrite(html) {
                    adhtml += html;
                    adcontainer.innerHTML = adhtml;
                }

                document.write_ = document.write;
                document.write = mywrite;

                script = document.createElement('script');
                script.src = 'http://pagead2.googlesyndication.com/pagead/show_ads.js';
                script.type = 'text/javascript';
                document.body.appendChild(script);
            }
        }

    Read the article

  • Should we hire someone who writes C in Perl?

    - by paxdiablo
    One of my colleagues recently interviewed some candidates for a job and one said they had very good Perl experience. Since my colleague didn't know Perl, he asked me for a critique of some code written (off-site) by that potential hire, so I had a look and told him my concerns (the main one was that it originally had no comments and it's not like we gave them enough time). However, the code works so I'm loathe to say no-go without some more input. Another concern is that this code basically looks exactly how I'd code it in C. It's been a while since I did Perl (and I didn't do a lot, I'm more a Python bod for quick scripts) but I seem to recall that it was a much more expressive language than what this guy used. I'm looking for input from real Perl coders, and suggestions for how it could be improved (and why a Perl coder should know that method of improvement). You can also wax lyrical about whether people who write one language in a totally different language should (or shouldn't be hired). I'm interested in your arguments but this question is primarily for a critique of the code. The spec was to successfully process a CSV file as follows and output the individual fields: User ID,Name , Level,Numeric ID pax, Pax Morgan ,admin,0 gt," Turner, George" rubbish,user,1 ms,"Mark \"X-Men\" Spencer","guest user",2 ab,, "user","3" The output was to be something like this (the potential hire's code actually output this): User ID,Name , Level,Numeric ID: [User ID] [Name] [Level] [Numeric ID] pax, Pax Morgan ,admin,0: [pax] [Pax Morgan] [admin] [0] gt," Turner, George " rubbish,user,1: [gt] [ Turner, George ] [user] [1] ms,"Mark \"X-Men\" Spencer","guest user",2: [ms] [Mark "X-Men" Spencer] [guest user] [2] ab,, "user","3": [ab] [] [user] [3] Here is the code they submitted: #!/usr/bin/perl # Open file. open (IN, "qq.in") || die "Cannot open qq.in"; # Process every line. while (<IN>) { chomp; $line = $_; print "$line:\n"; # Process every field in line. while ($line ne "") { # Skip spaces and start with empty field. if (substr ($line,0,1) eq " ") { $line = substr ($line,1); next; } $field = ""; $minlen = 0; # Detect quoted field or otherwise. if (substr ($line,0,1) eq "\"") { $line = substr ($line,1); $pastquote = 0; while ($line ne "") { # Special handling for quotes (\\ and \"). if (length ($line) >= 2) { if (substr ($line,0,2) eq "\\\"") { $field = $field . "\""; $line = substr ($line,2); next; } if (substr ($line,0,2) eq "\\\\") { $field = $field . "\\"; $line = substr ($line,2); next; } } # Detect closing quote. if (($pastquote == 0) && (substr ($line,0,1) eq "\"")) { $pastquote = 1; $line = substr ($line,1); $minlen = length ($field); next; } # Only worry about comma if past closing quote. if (($pastquote == 1) && (substr ($line,0,1) eq ",")) { $line = substr ($line,1); last; } $field = $field . substr ($line,0,1); $line = substr ($line,1); } } else { while ($line ne "") { if (substr ($line,0,1) eq ",") { $line = substr ($line,1); last; } if ($pastquote == 0) { $field = $field . substr ($line,0,1); } $line = substr ($line,1); } } # Strip trailing space. while ($field ne "") { if (length ($field) == $minlen) { last; } if (substr ($field,length ($field)-1,1) eq " ") { $field = substr ($field,0, length ($field)-1); next; } last; } print " [$field]\n"; } } close (IN);

    Read the article
