Search Results

Search found 20163 results on 807 pages for 'struct size'.


  • JMS Step 5 - How to Create an 11g BPEL Process Which Reads a Message Based on an XML Schema from a JMS Queue

    - by John-Brown.Evans
    Welcome to another post in the series of blogs which demonstrate how to use JMS queues in a SOA context. The previous posts were:
    JMS Step 1 - How to Create a Simple JMS Queue in Weblogic Server 11g
    JMS Step 2 - Using the QueueSend.java Sample Program to Send a Message to a JMS Queue
    JMS Step 3 - Using the QueueReceive.java Sample Program to Read a Message from a JMS Queue
    JMS Step 4 - How to Create an 11g BPEL Process Which Writes a Message Based on an XML Schema to a JMS Queue
    Today we will create a BPEL process which will read (dequeue) from the JMS queue the message we enqueued in the last example. The JMS adapter will dequeue the full XML payload from the queue.
    1. Recap and Prerequisites
    In the previous examples, we created a JMS Queue, a Connection Factory and a Connection Pool in the WebLogic Server Console. Then we designed and deployed a BPEL composite, which took a simple XML payload and enqueued it to the JMS queue. In this example, we will read that same message from the queue, using a JMS adapter and a BPEL process.
    As many of the configuration steps required to read from that queue were done in the previous samples, this one will concentrate on the new steps. A summary of the required objects is listed below. To find out how to create them, please see the previous samples, which also include instructions on how to verify the objects are set up correctly.
    WebLogic Server Objects
    Object Name            Type                JNDI Name
    TestConnectionFactory  Connection Factory  jms/TestConnectionFactory
    TestJMSQueue           JMS Queue           jms/TestJMSQueue
    eis/wls/TestQueue      Connection Pool     eis/wls/TestQueue
    Schema XSD File
    The following XSD file is used for the message format. It was created in the previous example and will be copied to the new process.
    stringPayload.xsd
    <?xml version="1.0" encoding="windows-1252" ?>
    <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                xmlns="http://www.example.org"
                targetNamespace="http://www.example.org"
                elementFormDefault="qualified">
      <xsd:element name="exampleElement" type="xsd:string">
      </xsd:element>
    </xsd:schema>
    JMS Message
    After executing the previous samples, the following XML message should be in the JMS queue located at jms/TestJMSQueue:
    <?xml version="1.0" encoding="UTF-8" ?><exampleElement xmlns="http://www.example.org">Test Message</exampleElement>
    JDeveloper Connection
    You will need a valid Application Server Connection in JDeveloper pointing to the SOA server to which the process will be deployed.
    2. Create a BPEL Composite with a JMS Adapter Partner Link
    In the previous example, we created a composite in JDeveloper called JmsAdapterWriteSchema. In this one, we will create a new composite called JmsAdapterReadSchema.
    There are probably many ways of incorporating a JMS adapter into a SOA composite for incoming messages. One way is to design the process in such a way that the adapter polls for new messages and, when it dequeues one, initiates a SOA or BPEL instance. This is possibly the most common use case. Other use cases include mid-flow adapters, which are activated from within the BPEL process. In this example we will use a polling adapter, because it is the simplest to set up and demonstrate. But it has one disadvantage as a demonstrative model: when a polling adapter is active, it will dequeue all messages as soon as they reach the queue. This makes it difficult to monitor the messages we are writing to the queue, because they will disappear from the queue as soon as they have been enqueued. To work around this, we will shut down the composite after deploying it and restart it as required. (Another solution would be to pause consumption for the queue and resume it again when needed. This can be done in the WLS console under JMS Modules -> queue -> Control -> Consumption -> Pause/Resume.)
    We will model the composite as a one-way incoming process. Usually, a BPEL process will do something useful with the message after receiving it, such as passing it to a database or file adapter, a human workflow or an external web service. But we only want to demonstrate how to dequeue a JMS message using BPEL and a JMS adapter, so we won't complicate the design with further activities. However, we do want to be able to verify that we have read the message correctly, so the BPEL process will include a small piece of embedded Java code which will print the message to standard output, so we can view it in the SOA server's log file. Alternatively, you can view the instance in the Enterprise Manager and verify the message.
    The following steps are all executed in JDeveloper. Create the project in the same JDeveloper application used for the previous examples or create a new one.
    Create a SOA Project
    Create a new project and choose SOA Tier > SOA Project as its type. Name it JmsAdapterReadSchema. When prompted for the composite type, choose Empty Composite.
    Create a JMS Adapter Partner Link
    In the composite editor, drag a JMS adapter over from the Component Palette to the left-hand swim lane, under Exposed Services. This will start the JMS Adapter Configuration Wizard. Use the following entries:
    Service Name: JmsAdapterRead
    Oracle Enterprise Messaging Service (OEMS): Oracle WebLogic JMS
    AppServer Connection: Use an application server connection pointing to the WebLogic server on which the JMS queue and connection factory mentioned under Prerequisites above are located.
    Adapter Interface > Interface: Define from operation and schema (specified later)
    Operation Type: Consume Message
    Operation Name: Consume_message
    Consume Operation Parameters
    Destination Name: Press the Browse button, select Destination Type: Queues, then press Search. Wait for the list to populate, then select the entry for TestJMSQueue, which is the queue created in a previous example.
    JNDI Name: The JNDI name to use for the JMS connection. As in the previous example, this is probably the most common source of error. This is the JNDI name of the JMS adapter's connection pool created in the WebLogic Server, which points to the connection factory. JDeveloper does not verify the value entered here. If you enter a wrong value, the JMS adapter won't find the queue and you will get an error message at runtime, which is very difficult to trace. In our example, this is the value eis/wls/TestQueue. (See the earlier step on how to create a JMS Adapter Connection Pool in WebLogic Server for details.)
    Messages / Message Schema URL: We will use the XSD file created during the previous example, in the JmsAdapterWriteSchema project, to define the format for the incoming message payload and, at the same time, demonstrate how to import an existing XSD file into a JDeveloper project. Press the magnifying glass icon to search for schema files. In the Type Chooser, press the Import Schema File button. Select the magnifying glass next to URL to search for schema files. Navigate to the location of the JmsAdapterWriteSchema project > xsd and select the stringPayload.xsd file. Check the "Copy to Project" checkbox, press OK and confirm the following Localize Files popup. Now that the XSD file has been copied to the local project, it can be selected from the project's schema files. Expand Project Schema Files > stringPayload.xsd and select exampleElement: string. Press Next and Finish, which will complete the JMS Adapter configuration. Save the project.
    Create a BPEL Component
    Drag a BPEL Process from the Component Palette (Service Components) to the Components section of the composite designer. Name it JmsAdapterReadSchema, select Template: Define Service Later and press OK.
    Wire the JMS Adapter to the BPEL Component
    Now wire the JMS adapter to the BPEL process by dragging the arrow from the adapter to the BPEL process. A Transaction Properties popup will be displayed. Set the delivery mode to async.persist.
    This completes the steps at the composite level.
    3. Complete the BPEL Process Design
    Invoke the BPEL Flow via the JMS Adapter
    Open the BPEL component by double-clicking it in the design view of the composite.xml, or open it from the project navigator by selecting the JmsAdapterReadSchema.bpel file. This will display the BPEL process in the design view. You should see the JmsAdapterRead partner link in the left-hand swim lane. Drag a Receive activity onto the BPEL flow diagram, then drag a wire (left-hand yellow arrow) from it to the JMS adapter. This will open the Receive activity editor. Auto-generate the variable by pressing the green "+" button and check the "Create Instance" checkbox. This will result in a BPEL instance being created when a new JMS message is received.
    At this point it would actually be OK to compile and deploy the composite, and it would pick up any messages from the JMS queue. In fact, you can do that to test it, if you like. But it is very rudimentary and would not be doing anything useful with the message. Also, you could only verify the actual message payload by looking at the instance's flow in the Enterprise Manager. There are various other possibilities; we could pass the message to another web service, write it to a file using a file adapter or to a database via a database adapter, etc. But these will all introduce unnecessary complications to our sample. So, to keep it simple, we will add a small piece of Java code to the BPEL process which will write the payload to standard output. This will be written to the server's log file, which will be easy to monitor.
    Add a Java Embedding Activity
    First get the full name of the process's input variable, as this will be needed for the Java code. Go to the Structure pane and expand Variables > Process > Variables. Then expand the input variable, for example "Receive1_Consume_Message_InputVariable > body > ns2:exampleElement", and note the variable's name and path, if they are different from this one. Drag a Java Embedding activity from the Component Palette (Oracle Extensions) to the BPEL flow, after the Receive activity, then open it to edit. Delete the example code and replace it with the following, replacing the variable parts with those in your sample, if necessary:
    System.out.println("JmsAdapterReadSchema process picked up a message");
    oracle.xml.parser.v2.XMLElement inputPayload =
       (oracle.xml.parser.v2.XMLElement)getVariableData(
                              "Receive1_Consume_Message_InputVariable",
                              "body",
                              "/ns2:exampleElement");
    String inputString = inputPayload.getFirstChild().getNodeValue();
    System.out.println("Input String is " + inputPayload.getFirstChild().getNodeValue());
    Tip: If you are not sure of the exact syntax of the input variable, create an Assign activity in the BPEL process and copy the variable to another, temporary one. Then check the syntax created by the BPEL designer.
    This completes the BPEL process design in JDeveloper. Save, compile and deploy the process to the SOA server.
    4. Test the Composite
    Shut Down the JmsAdapterReadSchema Composite
    After deploying the JmsAdapterReadSchema composite to the SOA server, it is automatically activated. If there are already any messages in the queue, the adapter will begin polling them. To ease the testing process, we will deactivate the composite first. Log in to the Enterprise Manager (Fusion Middleware Control) and navigate to SOA > soa-infra (soa_server1) > default (or wherever you deployed your composite to) and click on JmsAdapterReadSchema [1.0].
    Press the Shut Down button to disable the composite and confirm the following popup.
    Monitor Messages in the JMS Queue
    In a separate browser window, log in to the WebLogic Server Console and navigate to Services > Messaging > JMS Modules > TestJMSModule > TestJMSQueue > Monitoring. This is the location of the JMS queue we created in an earlier sample (see the prerequisites section of this sample). Check whether there are any messages already in the queue. If so, you can dequeue them using the QueueReceive Java program created in an earlier sample. This will ensure that the queue is empty and doesn't contain any messages in the wrong format, which would cause the JmsAdapterReadSchema to fail.
    Send a Test Message
    In the Enterprise Manager, navigate to the JmsAdapterWriteSchema composite created earlier, press Test and send a test message, for example "Message from JmsAdapterWriteSchema". Confirm that the message was written correctly to the queue by verifying it via the queue monitor in the WLS Console.
    Monitor the SOA Server's Output
    A program deployed on the SOA server will write its standard output to the terminal window in which the server was started, unless this has been redirected somewhere else, for example to a file. If it has not been redirected, go to the terminal session in which the server was started; otherwise open and monitor the file to which it was redirected.
    Re-Enable the JmsAdapterReadSchema Composite
    In the Enterprise Manager, navigate to the JmsAdapterReadSchema composite again and press Start Up to re-enable it. This should cause the JMS adapter to dequeue the test message, and the following output should be written to the server's standard output:
    JmsAdapterReadSchema process picked up a message.
    Input String is Message from JmsAdapterWriteSchema
    Note that you can also monitor the payload received by the process by navigating to the JmsAdapterReadSchema's Instances tab in the Enterprise Manager. Then select the latest instance and view the flow of the BPEL component. The Receive activity will contain and display the dequeued message too.
    5. Troubleshooting
    This sample demonstrates how to dequeue an XML JMS message using a BPEL process, with no additional functionality; for example, it doesn't contain any error handling. Therefore, any errors in the payload will result in exceptions being written to the log file or standard output. If you get any errors related to the payload, such as
    Message handle error ... ORABPEL-09500 ... XPath expression failed to execute. An error occurs while processing the XPath expression; the expression is /ns2:exampleElement. ... etc.
    check that the variable used in the Java embedding part of the process was entered correctly. Possibly follow the tip mentioned in the previous section. If this doesn't help, you can delete the Java embedding part and simply verify the message via the flow diagram in the Enterprise Manager, or use a different method, such as writing it to a file via a file adapter.
    This concludes this example. In the next post, we will begin with an AQ JMS example, which uses JMS to write to an Advanced Queue stored in the database.
    Best regards
    John-Brown Evans
    Oracle Technology Proactive Support Delivery

    Read the article

  • Direct3D11 and SharpDX - How to pass a model instance's world matrix as an input to a vertex shader

    - by Nathan Ridley
    Using Direct3D11, I'm trying to pass a matrix into my vertex shader from the instance buffer that is associated with a given model's vertices and I can't seem to construct my InputLayout without throwing an exception. The shader looks like this: cbuffer ConstantBuffer : register(b0) { matrix World; matrix View; matrix Projection; } struct VIn { float4 position: POSITION; matrix instance: INSTANCE; float4 color: COLOR; }; struct VOut { float4 position : SV_POSITION; float4 color : COLOR; }; VOut VShader(VIn input) { VOut output; output.position = mul(input.position, input.instance); output.position = mul(output.position, View); output.position = mul(output.position, Projection); output.color = input.color; return output; } The input layout looks like this: var elements = new[] { new InputElement("POSITION", 0, Format.R32G32B32_Float, 0, 0, InputClassification.PerVertexData, 0), new InputElement("INSTANCE", 0, Format.R32G32B32A32_Float, 0, 0, InputClassification.PerInstanceData, 1), new InputElement("COLOR", 0, Format.R32G32B32A32_Float, 12, 0) }; InputLayout = new InputLayout(device, signature, elements); The buffer initialization looks like this: public ModelDeviceData(Model model, Device device) { Model = model; var vertices = Helpers.CreateBuffer(device, BindFlags.VertexBuffer, model.Vertices); var instances = Helpers.CreateBuffer(device, BindFlags.VertexBuffer, Model.Instances.Select(m => m.WorldMatrix).ToArray()); VerticesBufferBinding = new VertexBufferBinding(vertices, Utilities.SizeOf<ColoredVertex>(), 0); InstancesBufferBinding = new VertexBufferBinding(instances, Utilities.SizeOf<Matrix>(), 0); IndicesBuffer = Helpers.CreateBuffer(device, BindFlags.IndexBuffer, model.Triangles); } The buffer creation helper method looks like this: public static Buffer CreateBuffer<T>(Device device, BindFlags bindFlags, params T[] items) where T : struct { var len = Utilities.SizeOf(items); var stream = new DataStream(len, true, true); foreach (var item in items) stream.Write(item); stream.Position = 0; var buffer = new Buffer(device, stream, len, ResourceUsage.Default, bindFlags, CpuAccessFlags.None, ResourceOptionFlags.None, 0); return buffer; } The line that instantiates the InputLayout object throws this exception: *HRESULT: [0x80070057], Module: [General], ApiCode: [E_INVALIDARG/Invalid Arguments], Message: The parameter is incorrect.* Note that the data for each model instance is simply an instance of SharpDX.Matrix. 
    EDIT Based on Tordin's answer, it seems like I have to modify my code like so: var elements = new[] { new InputElement("POSITION", 0, Format.R32G32B32_Float, 0, 0, InputClassification.PerVertexData, 0), new InputElement("INSTANCE0", 0, Format.R32G32B32A32_Float, 0, 0, InputClassification.PerInstanceData, 1), new InputElement("INSTANCE1", 1, Format.R32G32B32A32_Float, 0, 0, InputClassification.PerInstanceData, 1), new InputElement("INSTANCE2", 2, Format.R32G32B32A32_Float, 0, 0, InputClassification.PerInstanceData, 1), new InputElement("INSTANCE3", 3, Format.R32G32B32A32_Float, 0, 0, InputClassification.PerInstanceData, 1), new InputElement("COLOR", 0, Format.R32G32B32A32_Float, 12, 0) }; and in the shader: struct VIn { float4 position: POSITION; float4 instance0: INSTANCE0; float4 instance1: INSTANCE1; float4 instance2: INSTANCE2; float4 instance3: INSTANCE3; float4 color: COLOR; }; VOut VShader(VIn input) { VOut output; matrix world = { input.instance0, input.instance1, input.instance2, input.instance3 }; output.position = mul(input.position, world); output.position = mul(output.position, View); output.position = mul(output.position, Projection); output.color = input.color; return output; } However, I still get an exception.
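    For reference, a per-instance float4x4 cannot be fed through a single input element: it is normally described as four R32G32B32A32_FLOAT elements that share one semantic name with semantic indices 0 to 3, read from the slot the instance buffer is bound to, with byte offsets 0, 16, 32 and 48. Below is a minimal sketch of such a layout using the native Direct3D 11 structures; this is not the poster's SharpDX code, but the SharpDX InputElement arguments map onto the same fields, so the slot and offset values carry over (judging from the constructor calls above, SharpDX takes the offset before the slot).

    #include <d3d11.h>

    // Slot 0: per-vertex position + color, slot 1: per-instance world matrix (one float4 per row).
    // Assumes the HLSL input declares "float4x4 instance : INSTANCE;", which the compiler
    // expands to the semantics INSTANCE0..INSTANCE3.
    static const D3D11_INPUT_ELEMENT_DESC layout[] =
    {
        // SemanticName, SemanticIndex, Format, InputSlot, AlignedByteOffset, InputSlotClass, InstanceDataStepRate
        { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT,    0,  0, D3D11_INPUT_PER_VERTEX_DATA,   0 },
        { "COLOR",    0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA,   0 },
        { "INSTANCE", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 1,  0, D3D11_INPUT_PER_INSTANCE_DATA, 1 },
        { "INSTANCE", 1, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 16, D3D11_INPUT_PER_INSTANCE_DATA, 1 },
        { "INSTANCE", 2, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 32, D3D11_INPUT_PER_INSTANCE_DATA, 1 },
        { "INSTANCE", 3, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 48, D3D11_INPUT_PER_INSTANCE_DATA, 1 },
    };
    // CreateInputLayout validates this against the vertex shader's input signature, so the
    // semantic name/index pairs, slots and offsets must all line up with the HLSL declaration.

    Whether the HLSL side keeps a single "float4x4 instance : INSTANCE;" or the four float4 rows from the edit, the signature it exposes is INSTANCE with indices 0 through 3, so the layout's name/index pairs need to resolve to exactly those, and the per-instance elements need to reference the slot the instance VertexBufferBinding is actually bound to.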

    Read the article

  • Child transforms problem when loading 3DS models using assimp

    - by MhdSyrwan
    I'm trying to load a textured 3d model into my scene using the assimp model loader. The problem is that child meshes are not situated correctly (they don't have the correct transformations). In brief: all the mTransformation matrices are identity matrices, why would that be? I'm using this code to render the model: void recursive_render (const struct aiScene *sc, const struct aiNode* nd, float scale) { unsigned int i; unsigned int n=0, t; aiMatrix4x4 m = nd->mTransformation; m.Scaling(aiVector3D(scale, scale, scale), m); // update transform m.Transpose(); glPushMatrix(); glMultMatrixf((float*)&m); // draw all meshes assigned to this node for (; n < nd->mNumMeshes; ++n) { const struct aiMesh* mesh = scene->mMeshes[nd->mMeshes[n]]; apply_material(sc->mMaterials[mesh->mMaterialIndex]); if (mesh->HasBones()){ printf("model has bones"); abort(); } if(mesh->mNormals == NULL) { glDisable(GL_LIGHTING); } else { glEnable(GL_LIGHTING); } if(mesh->mColors[0] != NULL) { glEnable(GL_COLOR_MATERIAL); } else { glDisable(GL_COLOR_MATERIAL); } for (t = 0; t < mesh->mNumFaces; ++t) { const struct aiFace* face = &mesh->mFaces[t]; GLenum face_mode; switch(face->mNumIndices) { case 1: face_mode = GL_POINTS; break; case 2: face_mode = GL_LINES; break; case 3: face_mode = GL_TRIANGLES; break; default: face_mode = GL_POLYGON; break; } glBegin(face_mode); for(i = 0; i < face->mNumIndices; i++)// go through all vertices in face { int vertexIndex = face->mIndices[i];// get group index for current index if(mesh->mColors[0] != NULL) Color4f(&mesh->mColors[0][vertexIndex]); if(mesh->mNormals != NULL) if(mesh->HasTextureCoords(0))//HasTextureCoords(texture_coordinates_set) { glTexCoord2f(mesh->mTextureCoords[0][vertexIndex].x, 1 - mesh->mTextureCoords[0][vertexIndex].y); //mTextureCoords[channel][vertex] } glNormal3fv(&mesh->mNormals[vertexIndex].x); glVertex3fv(&mesh->mVertices[vertexIndex].x); } glEnd(); } } // draw all children for (n = 0; n < nd->mNumChildren; ++n) { recursive_render(sc, nd->mChildren[n], scale); } glPopMatrix(); } What's the problem in my code? I've added some code to abort the program if there's any bone in the meshes, but the program doesn't abort, which means: no bones. Is that normal? if (mesh->HasBones()){ printf("model has bones"); abort(); } Note: I am using OpenGL & SFML & assimp
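    One thing worth double-checking in the snippet above: aiMatrix4x4::Scaling is a static helper that writes a pure scaling matrix into its output argument, so the call m.Scaling(aiVector3D(scale, scale, scale), m) overwrites the node transform that was just copied from nd->mTransformation instead of scaling it, and because it runs at every recursion level the scale is re-applied for every child. A minimal sketch of keeping the two matrices separate is below (not the poster's code; the header path assumes assimp 3.x and the same variable names as above):

    #include <assimp/scene.h>   // aiNode, aiMatrix4x4, aiVector3D (assimp 3.x header layout)
    #include <GL/gl.h>

    // Push nd's local transform onto the OpenGL matrix stack, optionally combined
    // with a uniform scale. Sketch only.
    static void push_node_transform(const aiNode* nd, float scale, bool apply_scale)
    {
        aiMatrix4x4 m = nd->mTransformation;        // local transform relative to the parent

        if (apply_scale) {                          // apply the scale once (e.g. only for the root node),
            aiMatrix4x4 s;                          // otherwise it compounds at every recursion level
            aiMatrix4x4::Scaling(aiVector3D(scale, scale, scale), s);
            m = s * m;                              // scale the node transform instead of replacing it
        }

        m.Transpose();                              // assimp matrices are row-major, OpenGL expects column-major
        glPushMatrix();
        glMultMatrixf(reinterpret_cast<const GLfloat*>(&m));
    }

    That said, if mTransformation really is the identity for every node, the transforms were lost before rendering ever runs: check which post-processing flags are passed to the importer, since for example aiProcess_PreTransformVertices collapses the node hierarchy and leaves identity node transforms behind.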

    Read the article

  • Background Image not showing up in IE8

    - by Davey
    So I have a tiny header image that repeats on the x axis, but for some reason it won't show up in IE8. Anyone know a work around? Thanks in advanced. <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /> <meta content='' name='description' /> <meta content='' name='keywords' /> <link rel="stylesheet" type="text/css" href="style.css" media="screen" /> <title>Book Site</title> </head> <body> <div id="wrapper"> <div id="header"> <div id="title"> <span class="maintitle">Site Title Goes Here</span> <br /> <span class="subtitle">Transitional Justice, Post-Conflict Reconstruction & Reconciliation in Rwanda and Beyond Phil Clark and Zachary D. Kaufman, editors</span> </div> <img class="thebook" src="images/thebook.png" /> <span class="bookblurb"> <span class="bookbuy">Buy the book</span> get it online <br /> from Columbia, Hurst or your favorite reseller </span> </div> <div id="navbar"> <ul> <li>HOME</li> <li>ABOUT THE BOOK</li> <li>AUTHORS</li> <li>NEWS & EVENTS</li> <li>KIGALI PUBLIC LIBRARY</li> <li>CONTACT US</li> </ul> </div> <div id="content"> <div id="blockone"> <div id="polaroid"> <img class="polaroid" src="images/polaroid.png" /> <br /> <span class="roidplace">Gisimba Memorial Centre</span> <br /> <span class="roidname">Kigali, Rwanda</span> </div> <div id="textblockone"> <h3>An incisive analysis of genocide and its aftermath</h3> <br /> <span class="description">In After Genocide leading scholars and practitioners analyse the political, legal and regional impact of events in post-genocide Rwanda within the broader themes of transitional justice, reconstruction and reconciliation. Given the forthcoming fifteenth anniversary of the Rwandan genocide, and continued mass violence in Africa, especially in Darfur, the Democratic Republic of Congo (DRC) and northern Uganda, this volume is unquestionably of continuing relevance. </span> </div> </div> <div id="form"> <div id="statement"> This book should be labeled for the mature individual only. But for that mature individual it is of extreme interest. It shows, far from any Manichean stereotyping, the many facets of having to try to live in an impossibly complex social and human situation. Highly recommended. 
<br /><br /> <span class="author">-Grard Prunier</span> <br /><span class="bookname">The Rwanda Crisis: History of a Genocide (Hurst, 1995)</span> </div> <div id="contactform"> <span class="contactus">Contact us for additional information and site updates</span> <br /> <span class="theform"> <form class="forming"> Name: <input type="text" name="firstname" /> <br /> Title: <input type="text" name="title" /> <br /> Institution: <input type="text" name="institution" /> <br /> Email: <input type="text" name="email" /> <br /> Message: <input type="text" name="message" class="message" /> </form> </span> </div> </div> </div> <div id="footer"> <p class="footernav">&copy; 2008 After Genocide <span class="footerlinks">Sitemap | Terms | Privacy | Contact </span> <span class="plug">Web design by <span class="avity">Avity</span> </p> </div> </div> </body> </html> ----------------css------------------- html, body { margin:0; padding:0; background-color:#fdffe3; font-family: Arial, Helvetica, sans-serif; } #wrapper { width:1020px; margin:0 auto; } /*begin header style*/ #header { background:url("images/headback.png")repeat-x; width:1020px; height:120px; font-family:arial; position:relative; } #title { width:565px; height:100px; float:left; margin:20px 0 0 100px; } .maintitle { font-size:40px; } .subtitle { font-size:13px; } .thebook { float:left; margin:10px 0 0 30px; border:2px solid #666666; } .bookblurb { float:left; width:110px; margin:15px 0 0 15px; font-size:13px; } .bookbuy { font-weight:bold; font-size:14px; } /*end header style*/ /*begin navigation style*/ #navbar { margin:5px 0 0 0; height: 30px; width: 1020px; background-color: #3a3e30; } #navbar ul { padding: 0px; font-family: Arial, Helvetica, sans-serif; font-size: 12px; color: #FFF; line-height: 30px; white-space: nowrap; margin:0 0 0 140px; } #navbar ul li { list-style-type: none; display: inline; margin:0 40px 0 0; } /*end navigation style*/ /*begin content style*/ #content { width:775px; margin:0 auto; } #blockone { margin:25px 0 0 0; } #polaroid { float:left; width:230px; } .roidplace { font-weight:bold; font-size:11px; } .roidname { font-size:11px; margin:0 0 0 40px; } #textblockone { width:745px; margin:0 0 0 0; font-family: Arial, Helvetica, sans-serif; } .description { font-size:13px; } #form { background:url("images/formbackround.png") no-repeat; width:758px; height:231px; margin:80px 0 0 10px; } #statement { width:320px; margin:30px 0 0 30px; position:absolute; font-size:15px; font-style:italic; float:left; } .author { font-weight:bold; font-size:14; } .bookname { font-weight:bold; font-size:11px; color:#3f91ad; } #contactform { float:right; width:320px; margin:20px 30px 0 0; } .contactus { font-weight:bold; font-size:12px; } .theform { } .forming { } .message { height:50px; } #footer { width:1020px; height:65px; background-color:#dfdacc; margin:35px 0 0 0; font-size:13px; font-weight:bold; } .footernav { margin:30px 0 0 150px; position:absolute; width:1020px; } .footerlinks { margin:0 10px 0 10px; color:#0f77a9; } .plug { margin:0 0 0 175px; } .avity { color:#0f77a9; } Live site: http://cheapramen.com/testsite/

    Read the article

  • The RMagick manual is quite vague. Can you help me by changing a picture's filetype and size?

    - by Joern Akkermann
    Hi, it's about RMagick with Ruby on Rails. I do the following: image = params[:image] # params[:image] is the image from the file-form. name = image.original_filename.scan(/[^\/\\]+/).last name = dir + t.day.to_s + t.month.to_s + t.year.to_s + t.hour.to_s + t.min.to_s + t.sec.to_s + name f = File.new(name, "wb") f.write image f.close image = Magick::Image.read(name) image = image.resize_to_fit(200, 250) f = File.new(name, "wb") f.write image.to_blob f.close Do I really need to save the file first and then change it? And what about changing not only the size but also the file type? I want a JPG with 60% quality. Please help me. Yours, Joern.

    Read the article

  • Reconstructing the disk order in RAID 6 with 7 disks

    - by rkotulla
    a little background to this question first: I am running a RAID-6 within a QNAP TS869L external RAID/NAS system. I started with 5 disks of 3 TB each back in the day, and later added another 2 disks of 3 TB to the RAID. The QNAP internals handled the growing and re-syncing etc, and everything seemed to be perfectly fine. About 2 weeks ago, I had one of the disks (disk #5; disk #2 has gone bad in the meantime) fail, and somehow (I have no idea why), also disks 1 and 2 got kicked out of the array. I replaced disk #5, but the RAID didn't start working again. After some calls to QNAP technical support, they re-created the array (using mdadm --create --force --assume-clean ...), but the resulting array couldn't find a filesystem, and I was kindly referred to contact a data recovery company that I can't afford. After some digging through old log files, resetting the disk to factory default, etc, I found a few errors that were made during this re-create - I wish I still had some of the original metadata, but unfortunately I don't (I definitely learned that lesson). I'm currently at the point where I know the correct chunk-size (64K), metadata-version (1.0; factory default was 0.9, but from what I read 0.9 doesn't handle disks over 2 TB, mine are 3 TB), and I now find the ext4 filesystem that should be on the disks. The only variable left to determine is the right disk order! I started using the description found in answer #4 of "Recover RAID 5 data after created new array instead of re-using" but am a little confused on what the order should be for a proper RAID-6. RAID-5 is pretty well documented in a number of places, but RAID-6 much less so. Also, does the layout, i.e. the distribution of parity and data chunks across the disks, change after growing the array from 5 to 7 disks, or does the re-sync re-organize them the way a native 7-disk RAID-6 would have been laid out?
Thanks some more mdadm output that might be helpful: mdadm version: [~] # mdadm --version mdadm - v2.6.3 - 20th August 2007 mdadm details from one of the disks in the array: [~] # mdadm --examine /dev/sda3 /dev/sda3: Magic : a92b4efc Version : 1.0 Feature Map : 0x0 Array UUID : 1c1614a5:e3be2fbb:4af01271:947fe3aa Name : 0 Creation Time : Tue Jun 10 10:27:58 2014 Raid Level : raid6 Raid Devices : 7 Used Dev Size : 5857395112 (2793.02 GiB 2998.99 GB) Array Size : 29286975360 (13965.12 GiB 14994.93 GB) Used Size : 5857395072 (2793.02 GiB 2998.99 GB) Super Offset : 5857395368 sectors State : clean Device UUID : 7c572d8f:20c12727:7e88c888:c2c357af Update Time : Tue Jun 10 13:01:06 2014 Checksum : d275c82d - correct Events : 7036 Chunk Size : 64K Array Slot : 0 (0, 1, failed, 3, failed, 5, 6) Array State : Uu_u_uu 2 failed mdadm details for the array in the current disk-order (based on my best guess reconstructed from old log-files) [~] # mdadm --detail /dev/md0 /dev/md0: Version : 01.00.03 Creation Time : Tue Jun 10 10:27:58 2014 Raid Level : raid6 Array Size : 14643487680 (13965.12 GiB 14994.93 GB) Used Dev Size : 2928697536 (2793.02 GiB 2998.99 GB) Raid Devices : 7 Total Devices : 5 Preferred Minor : 0 Persistence : Superblock is persistent Update Time : Tue Jun 10 13:01:06 2014 State : clean, degraded Active Devices : 5 Working Devices : 5 Failed Devices : 0 Spare Devices : 0 Chunk Size : 64K Name : 0 UUID : 1c1614a5:e3be2fbb:4af01271:947fe3aa Events : 7036 Number Major Minor RaidDevice State 0 8 3 0 active sync /dev/sda3 1 8 19 1 active sync /dev/sdb3 2 0 0 2 removed 3 8 51 3 active sync /dev/sdd3 4 0 0 4 removed 5 8 99 5 active sync /dev/sdg3 6 8 83 6 active sync /dev/sdf3 output from /proc/mdstat (md8, md9, and md13 are internally used RAIDs holding swap, etc; the one I'm after is md0) [~] # more /proc/mdstat Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] md0 : active raid6 sdf3[6] sdg3[5] sdd3[3] sdb3[1] sda3[0] 14643487680 blocks super 1.0 level 6, 64k chunk, algorithm 2 [7/5] [UU_U_UU] md8 : active raid1 sdg2[2](S) sdf2[3](S) sdd2[4](S) sdc2[5](S) sdb2[6](S) sda2[1] sde2[0] 530048 blocks [2/2] [UU] md13 : active raid1 sdg4[3] sdf4[4] sde4[5] sdd4[6] sdc4[2] sdb4[1] sda4[0] 458880 blocks [8/7] [UUUUUUU_] bitmap: 21/57 pages [84KB], 4KB chunk md9 : active raid1 sdg1[6] sdf1[5] sde1[4] sdd1[3] sdc1[2] sda1[0] sdb1[1] 530048 blocks [8/7] [UUUUUUU_] bitmap: 37/65 pages [148KB], 4KB chunk unused devices: <none>

    Read the article

  • linux raid 1: right after replacing and syncing one drive, the other disk fails - understanding what is going on with mdstat/mdadm

    - by devicerandom
    We have an old RAID 1 Linux server (Ubuntu Lucid 10.04), with four partitions. A few days ago /dev/sdb failed, and today we noticed /dev/sda had ominous pre-failure SMART signs (~4000 reallocated sector count). We replaced /dev/sdb this morning and rebuilt the RAID on the new drive, following this guide: http://www.howtoforge.com/replacing_hard_disks_in_a_raid1_array Everything went smoothly until the very end. When it looked like it was finishing synchronizing the last partition, the other old drive failed. At this point I am very unsure of the state of the system. Everything seems to be working and the files all seem accessible, just as if it synchronized everything, but I'm new to RAID and I'm worried about what is going on. The /proc/mdstat output is:
    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
    md3 : active raid1 sdb4[2](S) sda4[0]
          478713792 blocks [2/1] [U_]
    md2 : active raid1 sdb3[1] sda3[2](F)
          244140992 blocks [2/1] [_U]
    md1 : active raid1 sdb2[1] sda2[2](F)
          244140992 blocks [2/1] [_U]
    md0 : active raid1 sdb1[1] sda1[2](F)
          9764800 blocks [2/1] [_U]
    unused devices: <none>
    The order of [_U] vs [U_]: why isn't it consistent across all the arrays? Is the first U /dev/sda or /dev/sdb? (I tried looking on the web for this trivial information but found no explicit indication.) If I read correctly, for md0 [_U] should be /dev/sda1 (down) and /dev/sdb1 (up). But if /dev/sda has failed, how can it be the opposite for md3? I understand /dev/sdb4 is now a spare, probably because it failed to synchronize 100%, but why does it show /dev/sda4 as up? Shouldn't it be [__]? Or [_U] anyway? The /dev/sda drive apparently cannot even be accessed by SMART anymore, so I wouldn't expect it to be up. What is wrong with my interpretation of the output?
I attach also the outputs of mdadm --detail for the four partitions: /dev/md0: Version : 00.90 Creation Time : Fri Jan 21 18:43:07 2011 Raid Level : raid1 Array Size : 9764800 (9.31 GiB 10.00 GB) Used Dev Size : 9764800 (9.31 GiB 10.00 GB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 0 Persistence : Superblock is persistent Update Time : Tue Nov 5 17:27:33 2013 State : clean, degraded Active Devices : 1 Working Devices : 1 Failed Devices : 1 Spare Devices : 0 UUID : a3b4dbbd:859bf7f2:bde36644:fcef85e2 Events : 0.7704 Number Major Minor RaidDevice State 0 0 0 0 removed 1 8 17 1 active sync /dev/sdb1 2 8 1 - faulty spare /dev/sda1 /dev/md1: Version : 00.90 Creation Time : Fri Jan 21 18:43:15 2011 Raid Level : raid1 Array Size : 244140992 (232.83 GiB 250.00 GB) Used Dev Size : 244140992 (232.83 GiB 250.00 GB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 1 Persistence : Superblock is persistent Update Time : Tue Nov 5 17:39:06 2013 State : clean, degraded Active Devices : 1 Working Devices : 1 Failed Devices : 1 Spare Devices : 0 UUID : 8bcd5765:90dc93d5:cc70849c:224ced45 Events : 0.1508280 Number Major Minor RaidDevice State 0 0 0 0 removed 1 8 18 1 active sync /dev/sdb2 2 8 2 - faulty spare /dev/sda2 /dev/md2: Version : 00.90 Creation Time : Fri Jan 21 18:43:19 2011 Raid Level : raid1 Array Size : 244140992 (232.83 GiB 250.00 GB) Used Dev Size : 244140992 (232.83 GiB 250.00 GB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 2 Persistence : Superblock is persistent Update Time : Tue Nov 5 17:46:44 2013 State : clean, degraded Active Devices : 1 Working Devices : 1 Failed Devices : 1 Spare Devices : 0 UUID : 2885668b:881cafed:b8275ae8:16bc7171 Events : 0.2289636 Number Major Minor RaidDevice State 0 0 0 0 removed 1 8 19 1 active sync /dev/sdb3 2 8 3 - faulty spare /dev/sda3 /dev/md3: Version : 00.90 Creation Time : Fri Jan 21 18:43:22 2011 Raid Level : raid1 Array Size : 478713792 (456.54 GiB 490.20 GB) Used Dev Size : 478713792 (456.54 GiB 490.20 GB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 3 Persistence : Superblock is persistent Update Time : Tue Nov 5 17:19:20 2013 State : clean, degraded Active Devices : 1 Working Devices : 2 Failed Devices : 0 Spare Devices : 1 Number Major Minor RaidDevice State 0 8 4 0 active sync /dev/sda4 1 0 0 1 removed 2 8 20 - spare /dev/sdb4 The active sync on /dev/sda4 baffles me. I am worried because if tomorrow morning I have to replace /dev/sda, I want to be sure what should I sync with what and what is going on. I am also quite baffled by the fact /dev/sda decided to fail exactly when the raid finished resyncing. I'd like to understand what is really happening. Thanks a lot for your patience and help. Massimo

    Read the article

  • Hard drive after PCB swap strange stuff

    - by ramyy
    I've done a PCB swap on my HDD. The HDD model is WD6400AAKS-00A7B2. The original PCB part number matches the new one (first three letter groups), though the cache size doesn't match (16MB original, 8MB new). The hardware store that did the swap told me it was difficult and that they had to adapt the firmware. I can see that the firmware version does not match the original (01.03B01 original, 05.04E05 new). Still, the serial number and model of the drive are correct, the hard drive appears normal in the BIOS, all the partitions show up and everything looks normal. I have encountered three things, though.
    First, I left the drive unpowered for 2-3 weeks after the swap, to avoid corrupting the data or anything else the new PCB might cause, until I could buy a new drive and back up the data. When I got the new drive and powered the old drive manually (I have a laptop, so I use a normal desktop power supply and a USB/SATA connector), I heard the motor start and then a ticking, as if the motor was somehow struggling to start, then the motor sound started again, then the ticking, and so on. I tried powering it again and the same thing happened. The third time it started normally and I could see everything normally. I took the chance and copied all the data over to the new drive. When I was done, I powered off the drive (after more than 25 hours of continuous operation), tried to power it up again and it did so normally, as it has every time I powered it up since; but I've become very suspicious now. What could be the problem here? And why did this change, when it used to power up normally right after the swap?
    The second thing is that I found size differences with some files, including movies, songs, (.iso) files for games, and programs. The size is the same, but size on disk is a little larger on the new drive for these files. I've tried some of those files (with size differences) and they worked fine, but one cannot try whether all files work for about 580 GB worth of data, so it still makes you suspicious of the integrity of the copied data. I tried copying these files onto the same partition of the old drive where they already exist; they come out the same size as when copied to the new drive (so allocation unit size is not the issue). I took an image of a partition (sector by sector, including empty ones) and when I explore it, these file sizes are equal to the original (old drive); when I copy them anywhere else, their size on disk increases, i.e. it becomes equal to the ones I copy from the old drive itself to anywhere. Why the size difference, and can one trust the integrity of the data?
    The third thing is that when I connect my new external USB HDD, the partitions of the old HDD unmount and then mount again. Connected are: (USB mouse + old HDD), then the external HDD. Why does that happen?
    For reference: I compared the SMART reports from right after the swap and from after the copying; no error readings or reallocated sectors were reported. Here they are: http://www.image-share.com/ijpg-1939-219.html I later ran both WD Data Lifeguard tests and they passed. I'm worried about this drive since I must be sure the data is fine and safe on the new one, and I will consider the old drive a backup for the new one, since you can't trust anything anymore. I hope you can forgive the length of the post, but I couldn't leave out any of the details; this hard drive contains very important data to me and I have to deal with the situation with great care.

    Read the article

  • Why is Postfix trying to connect to other machines SMTP port 25?

    - by TryTryAgain
    Jul 5 11:09:25 relay postfix/smtp[3084]: connect to ab.xyz.com[10.41.0.101]:25: Connection refused Jul 5 11:09:25 relay postfix/smtp[3087]: connect to ab.xyz.com[10.41.0.247]:25: Connection refused Jul 5 11:09:25 relay postfix/smtp[3088]: connect to ab.xyz.com[10.41.0.101]:25: Connection refused Jul 5 11:09:25 relay postfix/smtp[3084]: connect to ab.xyz.com[10.41.0.247]:25: Connection refused Jul 5 11:09:25 relay postfix/smtp[3087]: connect to ab.xyz.com[10.41.0.110]:25: Connection refused Jul 5 11:09:25 relay postfix/smtp[3088]: connect to ab.xyz.com[10.41.0.110]:25: Connection refused Jul 5 11:09:25 relay postfix/smtp[3084]: connect to ab.xyz.com[10.41.0.102]:25: Connection refused Jul 5 11:09:30 relay postfix/smtp[3085]: connect to ab.xyz.com[10.41.0.102]:25: Connection refused Jul 5 11:09:30 relay postfix/smtp[3086]: connect to ab.xyz.com[10.41.0.247]:25: Connection refused Jul 5 11:09:30 relay postfix/smtp[3086]: connect to ab.xyz.com[10.41.0.102]:25: Connection refused Jul 5 11:09:55 relay postfix/smtp[3087]: connect to ab.xyz.com[10.40.40.130]:25: Connection timed out Jul 5 11:09:55 relay postfix/smtp[3084]: connect to ab.xyz.com[10.40.40.130]:25: Connection timed out Jul 5 11:09:55 relay postfix/smtp[3088]: connect to ab.xyz.com[10.40.40.130]:25: Connection timed out Jul 5 11:09:55 relay postfix/smtp[3087]: connect to ab.xyz.com[10.41.0.135]:25: Connection refused Jul 5 11:09:55 relay postfix/smtp[3084]: connect to ab.xyz.com[10.41.0.110]:25: Connection refused Jul 5 11:09:55 relay postfix/smtp[3088]: connect to ab.xyz.com[10.41.0.247]:25: Connection refused Is this a DNS thing, doubtful as I've changed from our local DNS to Google's..still Postfix will occasionally try and connect to ab.xyz.com from a variety of addresses that may or may not have port 25 open and act as mail servers to begin with. Why is Postfix attempting to connect to other machines as seen in the log? Mail is being sent properly, other than that, it appears all is good. Occasionally I'll also see: relay postfix/error[3090]: 3F1AB42132: to=, relay=none, delay=32754, delays=32724/30/0/0, dsn=4.4.1, status=deferred (delivery temporarily suspended: connect to ab.xyz.com[10.41.0.102]:25: Connection refused) I have Postfix setup with very little restrictions: mynetworks = 127.0.0.0/8, 10.0.0.0/8 only. Like I said it appears all mail is getting passed through, but I hate seeing errors and it is confusing me as to why it would be attempting to connect to other machines as seen in the log. 
Some Output of cat /var/log/mail.log|grep 3F1AB42132 Jul 5 02:04:01 relay postfix/smtpd[1653]: 3F1AB42132: client=unknown[10.41.0.109] Jul 5 02:04:01 relay postfix/cleanup[1655]: 3F1AB42132: message-id= Jul 5 02:04:01 relay postfix/qmgr[1588]: 3F1AB42132: from=, size=3404, nrcpt=1 (queue active) Jul 5 02:04:31 relay postfix/smtp[1634]: 3F1AB42132: to=, relay=none, delay=30, delays=0.02/0/30/0, dsn=4.4.1, status=deferred (connect to ab.xyz.com[10.41.0.110]:25: Connection refused) Jul 5 02:13:58 relay postfix/qmgr[1588]: 3F1AB42132: from=, size=3404, nrcpt=1 (queue active) Jul 5 02:14:28 relay postfix/smtp[1681]: 3F1AB42132: to=, relay=none, delay=628, delays=598/0.01/30/0, dsn=4.4.1, status=deferred (connect to ab.xyz.com[10.41.0.247]:25: Connection refused) Jul 5 02:28:58 relay postfix/qmgr[1588]: 3F1AB42132: from=, size=3404, nrcpt=1 (queue active) Jul 5 02:29:28 relay postfix/smtp[1684]: 3F1AB42132: to=, relay=none, delay=1527, delays=1497/0/30/0, dsn=4.4.1, status=deferred (connect to ab.xyz.com[10.41.0.135]:25: Connection refused) Jul 5 02:58:58 relay postfix/qmgr[1588]: 3F1AB42132: from=, size=3404, nrcpt=1 (queue active) Jul 5 02:59:28 relay postfix/smtp[1739]: 3F1AB42132: to=, relay=none, delay=3327, delays=3297/0/30/0, dsn=4.4.1, status=deferred (connect to ab.xyz.com[10.40.40.130]:25: Connection timed out) Jul 5 03:58:58 relay postfix/qmgr[1588]: 3F1AB42132: from=, size=3404, nrcpt=1 (queue active) Jul 5 03:59:28 relay postfix/smtp[1839]: 3F1AB42132: to=, relay=none, delay=6928, delays=6897/0.03/30/0, dsn=4.4.1, status=deferred (connect to ab.xyz.com[10.41.0.101]:25: Connection refused) Jul 5 04:11:03 relay postfix/qmgr[2039]: 3F1AB42132: from=, size=3404, nrcpt=1 (queue active) Jul 5 04:11:33 relay postfix/error[2093]: 3F1AB42132: to=, relay=none, delay=7653, delays=7622/30/0/0, dsn=4.4.1, status=deferred (delivery temporarily suspended: connect to ab.xyz.com[10.41.0.101]:25: Connection refused) Jul 5 05:21:03 relay postfix/qmgr[2039]: 3F1AB42132: from=, size=3404, nrcpt=1 (queue active) Jul 5 05:21:33 relay postfix/error[2217]: 3F1AB42132: to=, relay=none, delay=11853, delays=11822/30/0/0, dsn=4.4.1, status=deferred (delivery temporarily suspended: connect to ab.xyz.com[10.41.0.101]:25: Connection refused) Jul 5 06:29:25 relay postfix/qmgr[2420]: 3F1AB42132: from=, size=3404, nrcpt=1 (queue active) Jul 5 06:29:55 relay postfix/error[2428]: 3F1AB42132: to=, relay=none, delay=15954, delays=15924/30/0/0.08, dsn=4.4.1, status=deferred (delivery temporarily suspended: connect to ab.xyz.com[10.41.0.101]:25: Connection refused) Jul 5 07:39:24 relay postfix/qmgr[2885]: 3F1AB42132: from=, size=3404, nrcpt=1 (queue active) Jul 5 07:39:54 relay postfix/error[2936]: 3F1AB42132: to=, relay=none, delay=20153, delays=20123/30/0/0, dsn=4.4.1, status=deferred (delivery temporarily suspended: connect to ab.xyz.com[10.40.40.130]:25: Connection timed out)

    Read the article

  • lxc containers hang after upgrade to 13.10

    - by doug123
    I have 3 lxc containers. They were all working fine on 12.10 and I upgraded the containers with do-release-upgrade on the containers to 13.04 and 13.10 and that worked great. Then I upgraded the host to 13.04 and then 13.10 and now the 3 containers hang with this: >lxc-start -n as1 -l DEBUG -o $(tty) lxc-start 1383145786.513 INFO lxc_start_ui - using rcfile /var/lib/lxc/as1/config lxc-start 1383145786.513 WARN lxc_log - lxc_log_init called with log already initialized lxc-start 1383145786.513 INFO lxc_apparmor - aa_enabled set to 1 lxc-start 1383145786.514 DEBUG lxc_conf - allocated pty '/dev/pts/2' (5/6) lxc-start 1383145786.514 DEBUG lxc_conf - allocated pty '/dev/pts/13' (7/8) lxc-start 1383145786.514 DEBUG lxc_conf - allocated pty '/dev/pts/14' (9/10) lxc-start 1383145786.514 DEBUG lxc_conf - allocated pty '/dev/pts/15' (11/12) lxc-start 1383145786.514 DEBUG lxc_conf - allocated pty '/dev/pts/17' (13/14) lxc-start 1383145786.514 DEBUG lxc_conf - allocated pty '/dev/pts/18' (15/16) lxc-start 1383145786.514 DEBUG lxc_conf - allocated pty '/dev/pts/19' (17/18) lxc-start 1383145786.514 DEBUG lxc_conf - allocated pty '/dev/pts/20' (19/20) lxc-start 1383145786.514 INFO lxc_conf - tty's configured lxc-start 1383145786.514 DEBUG lxc_start - sigchild handler set lxc-start 1383145786.514 DEBUG lxc_console - opening /dev/tty for console peer lxc-start 1383145786.514 DEBUG lxc_console - using '/dev/tty' as console lxc-start 1383145786.514 DEBUG lxc_console - 6242 got SIGWINCH fd 25 lxc-start 1383145786.514 DEBUG lxc_console - set winsz dstfd:22 cols:177 rows:53 lxc-start 1383145786.514 INFO lxc_start - 'as1' is initialized lxc-start 1383145786.522 DEBUG lxc_start - Not dropping cap_sys_boot or watching utmp lxc-start 1383145786.524 DEBUG lxc_conf - mac address of host interface 'vethB4L35W' changed to private fe:7c:96:a0:ae:29 lxc-start 1383145786.525 DEBUG lxc_conf - instanciated veth 'vethB4L35W/vethVC61K2', index is '26' lxc-start 1383145786.529 DEBUG lxc_cgroup - cgroup 'memory.limit_in_bytes' set to '20G' lxc-start 1383145786.529 DEBUG lxc_cgroup - cgroup 'cpuset.cpus' set to '12-23' lxc-start 1383145786.529 INFO lxc_cgroup - cgroup has been setup lxc-start 1383145786.555 DEBUG lxc_conf - move 'eth0' to '6249' lxc-start 1383145786.555 INFO lxc_conf - 'as1' hostname has been setup lxc-start 1383145786.575 DEBUG lxc_conf - 'eth0' has been setup lxc-start 1383145786.575 INFO lxc_conf - network has been setup lxc-start 1383145786.575 INFO lxc_conf - looking at .44 42 252:0 / / rw,relatime - ext4 /dev/mapper/limitorderbook1-root rw,errors=remount-ro,data=ordered . lxc-start 1383145786.575 INFO lxc_conf - now p is . /. lxc-start 1383145786.575 INFO lxc_conf - looking at .52 44 0:5 / /dev rw,relatime - devtmpfs udev rw,size=32961632k,nr_inodes=8240408,mode=755 . lxc-start 1383145786.575 INFO lxc_conf - now p is . /dev. lxc-start 1383145786.575 INFO lxc_conf - looking at .61 52 0:11 / /dev/pts rw,nosuid,noexec,relatime - devpts devpts rw,mode=600,ptmxmode=000 . lxc-start 1383145786.575 INFO lxc_conf - now p is . /dev/pts. lxc-start 1383145786.575 INFO lxc_conf - looking at .68 44 0:15 / /run rw,nosuid,noexec,relatime - tmpfs tmpfs rw,size=6594456k,mode=755 . lxc-start 1383145786.575 INFO lxc_conf - now p is . /run. lxc-start 1383145786.575 INFO lxc_conf - looking at .69 68 0:18 / /run/lock rw,nosuid,nodev,noexec,relatime - tmpfs none rw,size=5120k . lxc-start 1383145786.575 INFO lxc_conf - now p is . /run/lock. 
lxc-start 1383145786.575 INFO lxc_conf - looking at .72 68 0:19 / /run/shm rw,nosuid,nodev,relatime - tmpfs none rw . lxc-start 1383145786.575 INFO lxc_conf - now p is . /run/shm. lxc-start 1383145786.575 INFO lxc_conf - looking at .73 68 0:21 / /run/user rw,nosuid,nodev,noexec,relatime - tmpfs none rw,size=102400k,mode=755 . lxc-start 1383145786.575 INFO lxc_conf - now p is . /run/user. lxc-start 1383145786.575 INFO lxc_conf - looking at .76 44 0:14 / /sys rw,nosuid,nodev,noexec,relatime - sysfs sysfs rw . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys. lxc-start 1383145786.575 INFO lxc_conf - looking at .77 76 0:16 / /sys/fs/cgroup rw,relatime - tmpfs none rw,size=4k,mode=755 . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup. lxc-start 1383145786.575 INFO lxc_conf - looking at .78 77 0:20 / /sys/fs/cgroup/cpuset rw,relatime - cgroup cgroup rw,cpuset,clone_children . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup/cpuset. lxc-start 1383145786.575 INFO lxc_conf - looking at .79 77 0:23 / /sys/fs/cgroup/cpu rw,relatime - cgroup cgroup rw,cpu . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup/cpu. lxc-start 1383145786.575 INFO lxc_conf - looking at .80 77 0:24 / /sys/fs/cgroup/cpuacct rw,relatime - cgroup cgroup rw,cpuacct . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup/cpuacct. lxc-start 1383145786.575 INFO lxc_conf - looking at .81 77 0:25 / /sys/fs/cgroup/memory rw,relatime - cgroup cgroup rw,memory . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup/memory. lxc-start 1383145786.575 INFO lxc_conf - looking at .82 77 0:26 / /sys/fs/cgroup/devices rw,relatime - cgroup cgroup rw,devices . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup/devices. lxc-start 1383145786.575 INFO lxc_conf - looking at .83 77 0:27 / /sys/fs/cgroup/freezer rw,relatime - cgroup cgroup rw,freezer . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup/freezer. lxc-start 1383145786.575 INFO lxc_conf - looking at .84 77 0:28 / /sys/fs/cgroup/blkio rw,relatime - cgroup cgroup rw,blkio . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup/blkio. lxc-start 1383145786.575 INFO lxc_conf - looking at .85 77 0:29 / /sys/fs/cgroup/perf_event rw,relatime - cgroup cgroup rw,perf_event . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup/perf_event. lxc-start 1383145786.575 INFO lxc_conf - looking at .94 77 0:30 / /sys/fs/cgroup/hugetlb rw,relatime - cgroup cgroup rw,hugetlb . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup/hugetlb. lxc-start 1383145786.575 INFO lxc_conf - looking at .95 77 0:31 / /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime - cgroup systemd rw,name=systemd . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup/systemd. lxc-start 1383145786.575 INFO lxc_conf - looking at .96 76 0:17 / /sys/fs/fuse/connections rw,relatime - fusectl none rw . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/fuse/connections. lxc-start 1383145786.575 INFO lxc_conf - looking at .98 76 0:6 / /sys/kernel/debug rw,relatime - debugfs none rw . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/kernel/debug. lxc-start 1383145786.575 INFO lxc_conf - looking at .101 76 0:10 / /sys/kernel/security rw,relatime - securityfs none rw . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/kernel/security. lxc-start 1383145786.575 INFO lxc_conf - looking at .102 76 0:22 / /sys/fs/pstore rw,relatime - pstore none rw . 
lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/pstore. lxc-start 1383145786.575 INFO lxc_conf - looking at .103 44 0:3 / /proc rw,nosuid,nodev,noexec,relatime - proc proc rw . lxc-start 1383145786.575 INFO lxc_conf - now p is . /proc. lxc-start 1383145786.575 INFO lxc_conf - looking at .104 44 9:2 / /data rw,relatime - ext4 /dev/md2 rw,errors=remount-ro,data=ordered . lxc-start 1383145786.575 INFO lxc_conf - now p is . /data. lxc-start 1383145786.575 INFO lxc_conf - looking at .105 44 8:1 / /boot rw,relatime - ext2 /dev/sda1 rw,errors=continue . lxc-start 1383145786.575 INFO lxc_conf - now p is . /boot. lxc-start 1383145786.576 DEBUG lxc_conf - mounted '/data/srv/lxc/as1' on '/usr/lib/x86_64-linux-gnu/lxc' lxc-start 1383145786.576 DEBUG lxc_conf - mounted 'none' on '/usr/lib/x86_64-linux-gnu/lxc//dev/pts', type 'devpts' lxc-start 1383145786.576 DEBUG lxc_conf - mounted 'none' on '/usr/lib/x86_64-linux-gnu/lxc//proc', type 'proc' lxc-start 1383145786.576 DEBUG lxc_conf - mounted 'none' on '/usr/lib/x86_64-linux-gnu/lxc//sys', type 'sysfs' lxc-start 1383145786.576 DEBUG lxc_conf - mounted 'none' on '/usr/lib/x86_64-linux-gnu/lxc//run', type 'tmpfs' lxc-start 1383145786.576 INFO lxc_conf - mount points have been setup lxc-start 1383145786.577 INFO lxc_conf - console has been setup lxc-start 1383145786.577 INFO lxc_conf - 8 tty(s) has been setup lxc-start 1383145786.577 INFO lxc_conf - rootfs path is ./data/srv/lxc/as1., mount is ./usr/lib/x86_64-linux-gnu/lxc. lxc-start 1383145786.577 INFO lxc_apparmor - I am 1, /proc/self points to 1 lxc-start 1383145786.577 DEBUG lxc_conf - created '/usr/lib/x86_64-linux-gnu/lxc/lxc_putold' directory lxc-start 1383145786.577 DEBUG lxc_conf - mountpoint for old rootfs is '/usr/lib/x86_64-linux-gnu/lxc/lxc_putold' lxc-start 1383145786.577 DEBUG lxc_conf - pivot_root syscall to '/usr/lib/x86_64-linux-gnu/lxc' successful lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/dev/pts' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/run/lock' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/run/shm' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/run/user' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/cpuset' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/cpu' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/cpuacct' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/memory' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/devices' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/freezer' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/blkio' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/perf_event' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/hugetlb' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/systemd' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/fuse/connections' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/kernel/debug' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/kernel/security' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/pstore' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/proc' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/data' lxc-start 1383145786.577 
DEBUG lxc_conf - umounted '/lxc_putold/boot' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/dev' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/run' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold' lxc-start 1383145786.577 INFO lxc_conf - created new pts instance lxc-start 1383145786.578 DEBUG lxc_conf - drop capability 'sys_boot' (22) lxc-start 1383145786.578 DEBUG lxc_conf - capabilities have been setup lxc-start 1383145786.578 NOTICE lxc_conf - 'as1' is setup. lxc-start 1383145786.578 DEBUG lxc_cgroup - cgroup 'memory.limit_in_bytes' set to '20G' lxc-start 1383145786.578 DEBUG lxc_cgroup - cgroup 'cpuset.cpus' set to '12-23' lxc-start 1383145786.578 INFO lxc_cgroup - cgroup has been setup lxc-start 1383145786.578 INFO lxc_apparmor - setting up apparmor lxc-start 1383145786.578 INFO lxc_apparmor - changed apparmor profile to lxc-container-default lxc-start 1383145786.578 NOTICE lxc_start - exec'ing '/sbin/init' lxc-start 1383145786.578 INFO lxc_conf - looking at .15 20 0:14 / /sys rw,nosuid,nodev,noexec,relatime - sysfs sysfs rw . lxc-start 1383145786.578 INFO lxc_conf - now p is . /sys. lxc-start 1383145786.578 INFO lxc_conf - looking at .16 20 0:3 / /proc rw,nosuid,nodev,noexec,relatime - proc proc rw . lxc-start 1383145786.578 INFO lxc_conf - now p is . /proc. lxc-start 1383145786.578 INFO lxc_conf - looking at .17 20 0:5 / /dev rw,relatime - devtmpfs udev rw,size=32961632k,nr_inodes=8240408,mode=755 . lxc-start 1383145786.578 INFO lxc_conf - now p is . /dev. lxc-start 1383145786.578 INFO lxc_conf - looking at .18 17 0:11 / /dev/pts rw,nosuid,noexec,relatime - devpts devpts rw,mode=600,ptmxmode=000 . lxc-start 1383145786.578 INFO lxc_conf - now p is . /dev/pts. lxc-start 1383145786.578 INFO lxc_conf - looking at .19 20 0:15 / /run rw,nosuid,noexec,relatime - tmpfs tmpfs rw,size=6594456k,mode=755 . lxc-start 1383145786.578 INFO lxc_conf - now p is . /run. lxc-start 1383145786.578 INFO lxc_conf - looking at .20 1 252:0 / / rw,relatime - ext4 /dev/mapper/limitorderbook1-root rw,errors=remount-ro,data=ordered . lxc-start 1383145786.578 INFO lxc_conf - now p is . /. lxc-start 1383145786.578 INFO lxc_conf - looking at .22 15 0:16 / /sys/fs/cgroup rw,relatime - tmpfs none rw,size=4k,mode=755 . lxc-start 1383145786.578 INFO lxc_conf - now p is . /sys/fs/cgroup. lxc-start 1383145786.578 INFO lxc_conf - looking at .23 15 0:17 / /sys/fs/fuse/connections rw,relatime - fusectl none rw . lxc-start 1383145786.578 INFO lxc_conf - now p is . /sys/fs/fuse/connections. lxc-start 1383145786.578 INFO lxc_conf - looking at .24 15 0:6 / /sys/kernel/debug rw,relatime - debugfs none rw . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/kernel/debug. lxc-start 1383145786.579 INFO lxc_conf - looking at .25 15 0:10 / /sys/kernel/security rw,relatime - securityfs none rw . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/kernel/security. lxc-start 1383145786.579 INFO lxc_conf - looking at .26 19 0:18 / /run/lock rw,nosuid,nodev,noexec,relatime - tmpfs none rw,size=5120k . lxc-start 1383145786.579 INFO lxc_conf - now p is . /run/lock. lxc-start 1383145786.579 INFO lxc_conf - looking at .27 19 0:19 / /run/shm rw,nosuid,nodev,relatime - tmpfs none rw . lxc-start 1383145786.579 INFO lxc_conf - now p is . /run/shm. 
lxc-start 1383145786.579 INFO lxc_conf - looking at .28 22 0:20 / /sys/fs/cgroup/cpuset rw,relatime - cgroup cgroup rw,cpuset,clone_children . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/cgroup/cpuset. lxc-start 1383145786.579 INFO lxc_conf - looking at .29 19 0:21 / /run/user rw,nosuid,nodev,noexec,relatime - tmpfs none rw,size=102400k,mode=755 . lxc-start 1383145786.579 INFO lxc_conf - now p is . /run/user. lxc-start 1383145786.579 INFO lxc_conf - looking at .30 15 0:22 / /sys/fs/pstore rw,relatime - pstore none rw . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/pstore. lxc-start 1383145786.579 INFO lxc_conf - looking at .31 22 0:23 / /sys/fs/cgroup/cpu rw,relatime - cgroup cgroup rw,cpu . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/cgroup/cpu. lxc-start 1383145786.579 INFO lxc_conf - looking at .32 22 0:24 / /sys/fs/cgroup/cpuacct rw,relatime - cgroup cgroup rw,cpuacct . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/cgroup/cpuacct. lxc-start 1383145786.579 INFO lxc_conf - looking at .33 22 0:25 / /sys/fs/cgroup/memory rw,relatime - cgroup cgroup rw,memory . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/cgroup/memory. lxc-start 1383145786.579 INFO lxc_conf - looking at .34 22 0:26 / /sys/fs/cgroup/devices rw,relatime - cgroup cgroup rw,devices . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/cgroup/devices. lxc-start 1383145786.579 INFO lxc_conf - looking at .35 22 0:27 / /sys/fs/cgroup/freezer rw,relatime - cgroup cgroup rw,freezer . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/cgroup/freezer. lxc-start 1383145786.579 INFO lxc_conf - looking at .36 22 0:28 / /sys/fs/cgroup/blkio rw,relatime - cgroup cgroup rw,blkio . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/cgroup/blkio. lxc-start 1383145786.579 INFO lxc_conf - looking at .37 22 0:29 / /sys/fs/cgroup/perf_event rw,relatime - cgroup cgroup rw,perf_event . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/cgroup/perf_event. lxc-start 1383145786.579 INFO lxc_conf - looking at .38 22 0:30 / /sys/fs/cgroup/hugetlb rw,relatime - cgroup cgroup rw,hugetlb . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/cgroup/hugetlb. lxc-start 1383145786.579 INFO lxc_conf - looking at .39 20 9:2 / /data rw,relatime - ext4 /dev/md2 rw,errors=remount-ro,data=ordered . lxc-start 1383145786.579 INFO lxc_conf - now p is . /data. lxc-start 1383145786.579 INFO lxc_conf - looking at .40 20 8:1 / /boot rw,relatime - ext2 /dev/sda1 rw,errors=continue . lxc-start 1383145786.579 INFO lxc_conf - now p is . /boot. lxc-start 1383145786.579 INFO lxc_conf - looking at .41 22 0:31 / /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime - cgroup systemd rw,name=systemd . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/cgroup/systemd. lxc-start 1383145786.579 NOTICE lxc_start - '/sbin/init' started with pid '6249' lxc-start 1383145786.579 WARN lxc_start - invalid pid for SIGCHLD <4>init: ureadahead main process (7) terminated with status 5 <4>init: console-font main process (94) terminated with status 1 And it will just sit there like that for hours at least. The container becomes pingable but I can't ssh and if I try lxc-console -n as1 I get a blank screen. If I do lxc-stop -n as1 or ^C in the window where it has hung I get: ^CTERM environment variable not set. 
<4>init: plymouth-upstart-bridge main process (192) terminated with status 1 <4>init: hwclock-save main process (187) terminated with status 70 * Asking all remaining processes to terminate... ...done. * All processes ended within 1 seconds... ...done. * Deactivating swap... ...fail! mount: cannot mount block device /dev/md2 read-only * Will now restart But after 20 minutes it hasn't restarted. Any ideas why these containers are hanging?

    Read the article

  • obiee memory usage

    - by user554629
    Heap memory is a frequent customer topic. Here's the quick refresher, oriented towards AIX, but the principles apply to other unix implementations. 1. 32-bit processes have a maximum addressability of 4GB; usable application heap size of 2-3 GB. On AIX it is controlled by an environment variable: export LDR_CNTRL=....=MAXDATA=0x080000000 # 2GB ( The leading zero is deliberate, not required ) 1a. It is possible to get 3.25GB heap size for a 32-bit process using @DSA (Discontiguous Segment Allocation): export LDR_CNTRL=MAXDATA=0xd0000000@DSA # 3.25 GB 32-bit only. One side-effect of using AIX segments "c" and "d" is that shared libraries will be loaded privately, and not shared. If you need the additional heap space, this is worth the trade-off. This option is frequently used for 32-bit java. 1b. 64-bit processes have no need for the @DSA option. 2. 64-bit processes can double the 32-bit heap size to 4GB using: export LDR_CNTRL=....=MAXDATA=0x100000000 # 1 with 8-zeros 2a. But this setting would place the same memory limitations on obiee as a 32-bit process. 2b. The major benefit of 64-bit is to break the binds of 32-bit addressing. At a minimum, use 8GB: export LDR_CNTRL=....=MAXDATA=0x200000000 # 2 with 8-zeros 2c. Many large customers are providing extra safety to their servers by using 16GB: export LDR_CNTRL=....=MAXDATA=0x400000000 # 4 with 8-zeros There is no performance penalty for providing virtual memory allocations larger than required by the application. - If the server only uses 2GB of space in 64-bit ... specifying 16GB just provides an upper bound cushion. When an unexpected user query causes a sudden memory surge, the extra memory keeps the server running. 3. The next benefit to 64-bit is that you can provide huge thread stack sizes for strange queries that might otherwise crash the server. nqsserver uses fast recursive algorithms to traverse complicated control structures. This means lots of thread space to hold the stack frames. 3a. Stack frames mostly contain register values; 64-bit registers are twice as large as 32-bit. At a minimum you should quadruple the size of the server stack threads in NQSConfig.INI when migrating from 32- to 64-bit, to prevent a rogue query from crashing the server. Allocate more than is normally necessary for safety. 3b. There is no penalty for allocating more stack size than you need ... it is just virtual memory; no real resources are consumed until the extra space is needed. 3c. Increasing thread stack sizes may require the process heap size (MAXDATA) to be increased. Heap space is used for dynamic memory requests, and for thread stacks. No performance penalty to run with large heap and thread stack sizes. In a 32-bit world, this safety would require careful planning to avoid exceeding 2GB usable storage. 3d. Increasing the number of threads also may require additional heap storage. Most thread stack frames on obiee are allocated when the server is started, and the real memory usage increases as threads run work. Does 2.8GB sound like a lot of memory for an AIX application server? - I guess it is what you are accustomed to seeing from "grandpa's applications". 
- One of the primary design goals of obiee is to trade memory for services ( db, query caches, etc) - 2.8GB is still well under the 4GB heap size allocated with MAXDATA=0x100000000 - 2.8GB process size is also possible even on 32-bit Windows applications - It is not unusual to receive a sudden request for 30MB of contiguous storage on obiee. - This is not a memory leak; eventually the nqsserver storage will stabilize, but it may take days to do so. vmstat is the tool of choice to observe memory usage. On AIX vmstat will show something that may be startling to some people ... that available free memory ( the 2nd column ) is always trending toward zero ... no available free memory. Some customers have concluded that "nearly zero memory free" means it is time to upgrade the server with more real memory. After the upgrade, the server again shows very little free memory available. Should you be concerned about this? Many customers are !! Here is what is happening: - AIX filesystems are built on a paging model. If you read/write a filesystem block it is paged into memory ( no read/write system calls ) - This filesystem "page" has its own "backing store" on disk, the original filesystem block. When the system needs the real memory page holding the file block, there is no need to "page out". The page can be stolen immediately, because the original is still on disk in the filesystem. - The filesystem pages tend to collect ... every filesystem block that was ever seen since system boot is available in memory. If another application needs the file block, it is retrieved with no physical I/O. What happens if the system does need the memory ... to satisfy a 30MB heap request by nqsserver, for example? - Since the filesystem blocks have their own backing store ( not on a paging device ) the kernel can just steal any filesystem block ... on a least-recently-used basis to satisfy a new real memory request for "computation pages". No cause for alarm. vmstat is accurately displaying whether all filesystem blocks have been touched, and now reside in memory. Back to nqsserver: when should you be worried about its memory footprint? Answer: Almost never. Stop monitoring it ... stop fussing over it ... stop trying to optimize it. This is a production application, and nqsserver uses the memory it requires to accomplish the job, based on demand. C'mon ... never worry? I'm from New York ... worry is what we do best. Ok, here is the metric you should be watching, using vmstat: - Are you paging ... there are several columns of vmstat output: bash-2.04$ vmstat 3 3 System configuration: lcpu=4 mem=4096MB kthr memory page faults cpu ----- ------------ ------------------------ ------------ ----------- r b avm fre re pi po fr sr cy in sy cs us sy id wa 0 0 208492 2600 0 0 0 0 0 0 13 45 73 0 0 99 0 0 0 208492 2600 0 0 0 0 0 0 9 12 77 0 0 99 0 0 0 208492 2600 0 0 0 0 0 0 9 40 86 0 0 99 0 avm is the "available free memory" indicator that trends toward zero. re is "re-page": the kernel steals a real memory page for one process, then immediately repages it back to the original process. pi is "page in": a process memory page previously paged out is paged back in because the process needs it. po is "page out": a process memory block was paged out because it was needed by some other process. Light paging activity ( re, pi, po ) is not a concern for worry. 
Processes get started, need some memory, go away. Sustained paging activity  is cause for concern.   obiee users are having a terrible day if these counters are always changing. Hang on ... if nqsserver needs that memory and I reduce MAXDATA to keep the process under control, won't the nqsserver process crash when the memory is needed? Yes it will.   It means that nqsserver is configured to require too much memory and there are  lots of options to reduce the real memory requirement.  - number of threads  - size of query cache  - size of sort But I need nqsserver to keep running. Real memory is over-committed.    Many things can cause this:- running all application processes on a single server    ... DB server, web servers, WebLogic/WebSphere, sawserver, nqsserver, etc.   You could move some of those to another host machine and communicate over the network  The need for real memory doesn't go away, it's just distributed to other host machines. - AIX LPAR is configured with too little memory.     The AIX admin needs to provide more real memory to the LPAR running obiee. - More memory to this LPAR affects other partitions. Then it's time to visit your friendly IBM rep and buy more memory.

    Read the article

  • evaluating a code of a graph [migrated]

    - by mazen.r.f
    This is relatively long code. If you have the patience and the will to figure out how to make it work, please take a look; I will appreciate your feedback. I have spent two days trying to write code that represents a graph and then calculates the shortest path using Dijkstra's algorithm, but I am not able to get the right result. The code runs without errors, yet the result is not correct: I always get 0. Briefly, I have three classes: Vertex, Edge, and Graph. The Vertex class represents the nodes in the graph; it has an id and a carried value (which carries the weight of the links connected to it while running Dijkstra's algorithm) and a vector named previous_nodes holding the ids of the nodes the path goes through before arriving at the node itself. The Edge class represents the edges in the graph; it has two vertices (one on each side) and a weight (the distance between the two vertices). The Graph class represents the graph; it has two vectors, one for the vertices included in the graph and the other for the edges. Inside the Graph class there is a method named shortest that takes the source node id and the destination and calculates the shortest path using Dijkstra's algorithm; I think it is the most important part of the code. My approach is to create two vectors of vertices, one named vertices for the vertices in the graph and another named ver_out for the vertices taken out of the calculation, plus two vectors of type Edge: one named edges for all the edges in the graph and the other named track to temporarily hold the edges linked to the current source node in every round; after each round the vector track is cleared. In main() I created five vertices and 10 edges to simulate a graph. The shortest path is supposed to be 4, but I always get 0, which means something is wrong in my code. If you are interested in helping me find my mistake and make the code work, please take a look. The way shortest works is as follows: at the beginning all the edges are in the vector edges; we select the edges related to the source and put them in the vector track, then we iterate through track and add the weight of every edge to the vertex (node) on its other end (not the source vertex). After that we clear track, remove the source vertex from the vector vertices, select a new source, and start over: select the edges related to the new source, put them in track, iterate over the edges in track adding the weights to the corresponding vertices, then remove this vertex from the vector vertices, clear track, select a new source, and so on. Here is the code. 
#include<iostream> #include<vector> #include <stdlib.h> // for rand() using namespace std; class Vertex { private: unsigned int id; // the name of the vertex unsigned int carried; // the weight a vertex may carry when calculating shortest path vector<unsigned int> previous_nodes; public: unsigned int get_id(){return id;}; unsigned int get_carried(){return carried;}; void set_id(unsigned int value) {id = value;}; void set_carried(unsigned int value) {carried = value;}; void previous_nodes_update(unsigned int val){previous_nodes.push_back(val);}; void previous_nodes_erase(unsigned int val){previous_nodes.erase(previous_nodes.begin() + val);}; Vertex(unsigned int init_val = 0, unsigned int init_carried = 0) :id (init_val), carried(init_carried) // constructor { } ~Vertex() {}; // destructor }; class Edge { private: Vertex first_vertex; // a vertex on one side of the edge Vertex second_vertex; // a vertex on the other side of the edge unsigned int weight; // the value of the edge ( or its weight ) public: unsigned int get_weight() {return weight;}; void set_weight(unsigned int value) {weight = value;}; Vertex get_ver_1(){return first_vertex;}; Vertex get_ver_2(){return second_vertex;}; void set_first_vertex(Vertex v1) {first_vertex = v1;}; void set_second_vertex(Vertex v2) {second_vertex = v2;}; Edge(const Vertex& vertex_1 = 0, const Vertex& vertex_2 = 0, unsigned int init_weight = 0) : first_vertex(vertex_1), second_vertex(vertex_2), weight(init_weight) { } ~Edge() {} ; // destructor }; class Graph { private: std::vector<Vertex> vertices; std::vector<Edge> edges; public: Graph(vector<Vertex> ver_vector, vector<Edge> edg_vector) : vertices(ver_vector), edges(edg_vector) { } ~Graph() {}; vector<Vertex> get_vertices(){return vertices;}; vector<Edge> get_edges(){return edges;}; void set_vertices(vector<Vertex> vector_value) {vertices = vector_value;}; void set_edges(vector<Edge> vector_ed_value) {edges = vector_ed_value;}; unsigned int shortest(unsigned int src, unsigned int dis) { vector<Vertex> ver_out; vector<Edge> track; for(unsigned int i = 0; i < edges.size(); ++i) { if((edges[i].get_ver_1().get_id() == vertices[src].get_id()) || (edges[i].get_ver_2().get_id() == vertices[src].get_id())) { track.push_back (edges[i]); edges.erase(edges.begin()+i); } }; for(unsigned int i = 0; i < track.size(); ++i) { if(track[i].get_ver_1().get_id() != vertices[src].get_id()) { track[i].get_ver_1().set_carried((track[i].get_weight()) + track[i].get_ver_2().get_carried()); track[i].get_ver_1().previous_nodes_update(vertices[src].get_id()); } else { track[i].get_ver_2().set_carried((track[i].get_weight()) + track[i].get_ver_1().get_carried()); track[i].get_ver_2().previous_nodes_update(vertices[src].get_id()); } } for(unsigned int i = 0; i < vertices.size(); ++i) if(vertices[i].get_id() == src) vertices.erase(vertices.begin() + i); // removing the sources vertex from the vertices vector ver_out.push_back (vertices[src]); track.clear(); if(vertices[0].get_id() != dis) {src = vertices[0].get_id();} else {src = vertices[1].get_id();} for(unsigned int i = 0; i < vertices.size(); ++i) if((vertices[i].get_carried() < vertices[src].get_carried()) && (vertices[i].get_id() != dis)) src = vertices[i].get_id(); //while(!edges.empty()) for(unsigned int round = 0; round < vertices.size(); ++round) { for(unsigned int k = 0; k < edges.size(); ++k) { if((edges[k].get_ver_1().get_id() == vertices[src].get_id()) || (edges[k].get_ver_2().get_id() == vertices[src].get_id())) { track.push_back (edges[k]); 
edges.erase(edges.begin()+k); } }; for(unsigned int n = 0; n < track.size(); ++n) if((track[n].get_ver_1().get_id() != vertices[src].get_id()) && (track[n].get_ver_1().get_carried() > (track[n].get_ver_2().get_carried() + track[n].get_weight()))) { track[n].get_ver_1().set_carried((track[n].get_weight()) + track[n].get_ver_2().get_carried()); track[n].get_ver_1().previous_nodes_update(vertices[src].get_id()); } else if(track[n].get_ver_2().get_carried() > (track[n].get_ver_1().get_carried() + track[n].get_weight())) { track[n].get_ver_2().set_carried((track[n].get_weight()) + track[n].get_ver_1().get_carried()); track[n].get_ver_2().previous_nodes_update(vertices[src].get_id()); } for(unsigned int t = 0; t < vertices.size(); ++t) if(vertices[t].get_id() == src) vertices.erase(vertices.begin() + t); track.clear(); if(vertices[0].get_id() != dis) {src = vertices[0].get_id();} else {src = vertices[1].get_id();} for(unsigned int tt = 0; tt < edges.size(); ++tt) { if(vertices[tt].get_carried() < vertices[src].get_carried()) { src = vertices[tt].get_id(); } } } return vertices[dis].get_carried(); } }; int main() { cout<< "Hello, This is a graph"<< endl; vector<Vertex> vers(5); vers[0].set_id(0); vers[1].set_id(1); vers[2].set_id(2); vers[3].set_id(3); vers[4].set_id(4); vector<Edge> eds(10); eds[0].set_first_vertex(vers[0]); eds[0].set_second_vertex(vers[1]); eds[0].set_weight(5); eds[1].set_first_vertex(vers[0]); eds[1].set_second_vertex(vers[2]); eds[1].set_weight(9); eds[2].set_first_vertex(vers[0]); eds[2].set_second_vertex(vers[3]); eds[2].set_weight(4); eds[3].set_first_vertex(vers[0]); eds[3].set_second_vertex(vers[4]); eds[3].set_weight(6); eds[4].set_first_vertex(vers[1]); eds[4].set_second_vertex(vers[2]); eds[4].set_weight(2); eds[5].set_first_vertex(vers[1]); eds[5].set_second_vertex(vers[3]); eds[5].set_weight(5); eds[6].set_first_vertex(vers[1]); eds[6].set_second_vertex(vers[4]); eds[6].set_weight(7); eds[7].set_first_vertex(vers[2]); eds[7].set_second_vertex(vers[3]); eds[7].set_weight(1); eds[8].set_first_vertex(vers[2]); eds[8].set_second_vertex(vers[4]); eds[8].set_weight(8); eds[9].set_first_vertex(vers[3]); eds[9].set_second_vertex(vers[4]); eds[9].set_weight(3); unsigned int path; Graph graf(vers, eds); path = graf.shortest(2, 4); cout<< path << endl; return 0; }

    Read the article

  • Evaluating code for a graph [migrated]

    - by mazen.r.f
    This is relatively long code. Please take a look at this code if you are still willing to do so. I will appreciate your feedback. I have spent two days trying to come up with code to represent a graph, calculating the shortest path using Dijkstra's algorithm. But I am not able to get the right result, even though the code runs without errors. The result is not correct and I am always getting 0. I have three classes: Vertex, Edge, and Graph. The Vertex class represents the nodes in the graph and it has id and carried (which carries the weight of the links connected to it while using Dijkstra's algorithm) and a vector of the ids belonging to the other nodes the path will go through before arriving at the node itself. This vector is named previous_nodes. The Edge class represents the edges in the graph and has two vertices (one on each side) and a weight (the distance between the two vertices). The Graph class represents the graph. It has two vectors, where one is the vertices included in this graph, and the other is the edges included in the graph. Inside the class Graph, there is a method named shortest() that takes the source node id and the destination and calculates the shortest path using Dijkstra's algorithm. I think that it is the most important part of the code. My approach is that I will create two vectors, one for the vertices in the graph named vertices, and another vector named ver_out (it will include the vertices taken out of the calculation in the graph). I will also have two vectors of type Edge, where one is named edges (for all the edges in the graph), and the other is named track (to temporarily contain the edges linked to the temporary source node in every round). After the calculation of every round, the vector track will be cleared. In main(), I've created five vertices and 10 edges to simulate a graph. The result of the shortest path is supposed to be 4, but I am always getting 0. That means I have something wrong in my code. If you are interested in helping me find my mistake and making the code work, please take a look. The way shortest() works is as follows: at the beginning, all the edges will be included in the vector edges. We select the edges related to the source and put them in the vector track, then we iterate through track and add the weight of every edge to the vertex (node) related to it (not the source vertex). After that, we clear track and remove the source vertex from the vector vertices and select a new source. Then we start over again and select the edges related to the new source, put them in track, iterate over the edges in track, adding the weights to the corresponding vertices, then remove this vertex from the vector vertices. Then clear track, and select a new source, and so on. (For comparison, a minimal sketch of the standard algorithm follows the code below.) 
#include<iostream> #include<vector> #include <stdlib.h> // for rand() using namespace std; class Vertex { private: unsigned int id; // the name of the vertex unsigned int carried; // the weight a vertex may carry when calculating shortest path vector<unsigned int> previous_nodes; public: unsigned int get_id(){return id;}; unsigned int get_carried(){return carried;}; void set_id(unsigned int value) {id = value;}; void set_carried(unsigned int value) {carried = value;}; void previous_nodes_update(unsigned int val){previous_nodes.push_back(val);}; void previous_nodes_erase(unsigned int val){previous_nodes.erase(previous_nodes.begin() + val);}; Vertex(unsigned int init_val = 0, unsigned int init_carried = 0) :id (init_val), carried(init_carried) // constructor { } ~Vertex() {}; // destructor }; class Edge { private: Vertex first_vertex; // a vertex on one side of the edge Vertex second_vertex; // a vertex on the other side of the edge unsigned int weight; // the value of the edge ( or its weight ) public: unsigned int get_weight() {return weight;}; void set_weight(unsigned int value) {weight = value;}; Vertex get_ver_1(){return first_vertex;}; Vertex get_ver_2(){return second_vertex;}; void set_first_vertex(Vertex v1) {first_vertex = v1;}; void set_second_vertex(Vertex v2) {second_vertex = v2;}; Edge(const Vertex& vertex_1 = 0, const Vertex& vertex_2 = 0, unsigned int init_weight = 0) : first_vertex(vertex_1), second_vertex(vertex_2), weight(init_weight) { } ~Edge() {} ; // destructor }; class Graph { private: std::vector<Vertex> vertices; std::vector<Edge> edges; public: Graph(vector<Vertex> ver_vector, vector<Edge> edg_vector) : vertices(ver_vector), edges(edg_vector) { } ~Graph() {}; vector<Vertex> get_vertices(){return vertices;}; vector<Edge> get_edges(){return edges;}; void set_vertices(vector<Vertex> vector_value) {vertices = vector_value;}; void set_edges(vector<Edge> vector_ed_value) {edges = vector_ed_value;}; unsigned int shortest(unsigned int src, unsigned int dis) { vector<Vertex> ver_out; vector<Edge> track; for(unsigned int i = 0; i < edges.size(); ++i) { if((edges[i].get_ver_1().get_id() == vertices[src].get_id()) || (edges[i].get_ver_2().get_id() == vertices[src].get_id())) { track.push_back (edges[i]); edges.erase(edges.begin()+i); } }; for(unsigned int i = 0; i < track.size(); ++i) { if(track[i].get_ver_1().get_id() != vertices[src].get_id()) { track[i].get_ver_1().set_carried((track[i].get_weight()) + track[i].get_ver_2().get_carried()); track[i].get_ver_1().previous_nodes_update(vertices[src].get_id()); } else { track[i].get_ver_2().set_carried((track[i].get_weight()) + track[i].get_ver_1().get_carried()); track[i].get_ver_2().previous_nodes_update(vertices[src].get_id()); } } for(unsigned int i = 0; i < vertices.size(); ++i) if(vertices[i].get_id() == src) vertices.erase(vertices.begin() + i); // removing the sources vertex from the vertices vector ver_out.push_back (vertices[src]); track.clear(); if(vertices[0].get_id() != dis) {src = vertices[0].get_id();} else {src = vertices[1].get_id();} for(unsigned int i = 0; i < vertices.size(); ++i) if((vertices[i].get_carried() < vertices[src].get_carried()) && (vertices[i].get_id() != dis)) src = vertices[i].get_id(); //while(!edges.empty()) for(unsigned int round = 0; round < vertices.size(); ++round) { for(unsigned int k = 0; k < edges.size(); ++k) { if((edges[k].get_ver_1().get_id() == vertices[src].get_id()) || (edges[k].get_ver_2().get_id() == vertices[src].get_id())) { track.push_back (edges[k]); 
edges.erase(edges.begin()+k); } }; for(unsigned int n = 0; n < track.size(); ++n) if((track[n].get_ver_1().get_id() != vertices[src].get_id()) && (track[n].get_ver_1().get_carried() > (track[n].get_ver_2().get_carried() + track[n].get_weight()))) { track[n].get_ver_1().set_carried((track[n].get_weight()) + track[n].get_ver_2().get_carried()); track[n].get_ver_1().previous_nodes_update(vertices[src].get_id()); } else if(track[n].get_ver_2().get_carried() > (track[n].get_ver_1().get_carried() + track[n].get_weight())) { track[n].get_ver_2().set_carried((track[n].get_weight()) + track[n].get_ver_1().get_carried()); track[n].get_ver_2().previous_nodes_update(vertices[src].get_id()); } for(unsigned int t = 0; t < vertices.size(); ++t) if(vertices[t].get_id() == src) vertices.erase(vertices.begin() + t); track.clear(); if(vertices[0].get_id() != dis) {src = vertices[0].get_id();} else {src = vertices[1].get_id();} for(unsigned int tt = 0; tt < edges.size(); ++tt) { if(vertices[tt].get_carried() < vertices[src].get_carried()) { src = vertices[tt].get_id(); } } } return vertices[dis].get_carried(); } }; int main() { cout<< "Hello, This is a graph"<< endl; vector<Vertex> vers(5); vers[0].set_id(0); vers[1].set_id(1); vers[2].set_id(2); vers[3].set_id(3); vers[4].set_id(4); vector<Edge> eds(10); eds[0].set_first_vertex(vers[0]); eds[0].set_second_vertex(vers[1]); eds[0].set_weight(5); eds[1].set_first_vertex(vers[0]); eds[1].set_second_vertex(vers[2]); eds[1].set_weight(9); eds[2].set_first_vertex(vers[0]); eds[2].set_second_vertex(vers[3]); eds[2].set_weight(4); eds[3].set_first_vertex(vers[0]); eds[3].set_second_vertex(vers[4]); eds[3].set_weight(6); eds[4].set_first_vertex(vers[1]); eds[4].set_second_vertex(vers[2]); eds[4].set_weight(2); eds[5].set_first_vertex(vers[1]); eds[5].set_second_vertex(vers[3]); eds[5].set_weight(5); eds[6].set_first_vertex(vers[1]); eds[6].set_second_vertex(vers[4]); eds[6].set_weight(7); eds[7].set_first_vertex(vers[2]); eds[7].set_second_vertex(vers[3]); eds[7].set_weight(1); eds[8].set_first_vertex(vers[2]); eds[8].set_second_vertex(vers[4]); eds[8].set_weight(8); eds[9].set_first_vertex(vers[3]); eds[9].set_second_vertex(vers[4]); eds[9].set_weight(3); unsigned int path; Graph graf(vers, eds); path = graf.shortest(2, 4); cout<< path << endl; return 0; }
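For comparison, here is a minimal sketch of the standard Dijkstra relaxation loop that the description above is aiming for. It is a sketch only: it is written in C# (used elsewhere on this page) rather than the question's C++, it uses a plain adjacency matrix instead of the Vertex/Edge classes, and the names DijkstraSketch and Shortest are invented for the example. It builds the same five-vertex, ten-edge graph as the question's main(), so it should print 4 for source 2 and destination 4.
using System;

class DijkstraSketch
{
    // Classic O(V^2) Dijkstra over an adjacency matrix; 0 means "no edge".
    static int Shortest(int[,] w, int src, int dst)
    {
        int n = w.GetLength(0);
        var dist = new int[n];     // tentative distances (the question's "carried")
        var done = new bool[n];    // vertices removed from the calculation
        for (int i = 0; i < n; i++) dist[i] = int.MaxValue;
        dist[src] = 0;

        for (int round = 0; round < n; round++)
        {
            // Pick the unvisited vertex with the smallest tentative distance.
            int u = -1;
            for (int i = 0; i < n; i++)
                if (!done[i] && (u == -1 || dist[i] < dist[u])) u = i;
            if (dist[u] == int.MaxValue) break;   // remaining vertices are unreachable
            done[u] = true;

            // Relax every edge leaving u.
            for (int v = 0; v < n; v++)
                if (w[u, v] > 0 && dist[u] + w[u, v] < dist[v])
                    dist[v] = dist[u] + w[u, v];
        }
        return dist[dst];
    }

    static void Main()
    {
        // Same undirected graph as the question's main(): 5 vertices, 10 weighted edges.
        var w = new int[5, 5];
        int[,] edges = { {0,1,5}, {0,2,9}, {0,3,4}, {0,4,6}, {1,2,2},
                         {1,3,5}, {1,4,7}, {2,3,1}, {2,4,8}, {3,4,3} };
        for (int e = 0; e < edges.GetLength(0); e++)
        {
            w[edges[e, 0], edges[e, 1]] = edges[e, 2];
            w[edges[e, 1], edges[e, 0]] = edges[e, 2];
        }
        Console.WriteLine(Shortest(w, 2, 4));   // expected: 4 (2 -> 3 -> 4)
    }
}
The main difference from the question's approach is that the tentative distances live in a single dist array that is updated in place; the question's get_ver_1()/get_ver_2() return Vertex objects by value, so calling set_carried() on the returned copies never updates the vertices actually stored in the graph.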

    Read the article

  • How-to read data from selected tree node

    - by Frank Nimphius
    By default, the SelectionListener property of an ADF bound tree points to the makeCurrent method of the FacesCtrlHierBinding class in ADF to synchronize the current row in the ADF binding layer with the selected tree node. To customize the selection behavior, or just to read the selected node value in Java, you override the default configuration with an EL string pointing to a managed bean method property. In the following I show how you change the selection listener while preserving the default ADF selection behavior. To change the SelectionListener, select the tree component in the Structure Window and open the Oracle JDeveloper Property Inspector. From the context menu, select the Edit option to create a new listener method in a new or an existing managed bean. For this example, I created a new managed bean. 
On tree node select, the managed bean code prints the selected tree node value(s) import java.util.List; import javax.el.ELContext; import javax.el.ExpressionFactory; import javax.el.MethodExpression; import javax.faces.application.Application; import javax.faces.context.FacesContext; import java.util.Iterator; import oracle.adf.view.rich.component.rich.data.RichTree; import oracle.jbo.Row; import oracle.jbo.uicli.binding.JUCtrlHierBinding; import oracle.jbo.uicli.binding.JUCtrlHierNodeBinding; import org.apache.myfaces.trinidad.event.SelectionEvent; import org.apache.myfaces.trinidad.model.CollectionModel; import org.apache.myfaces.trinidad.model.RowKeySet; import org.apache.myfaces.trinidad.model.TreeModel; public class TreeSampleBean { public TreeSampleBean() {} public void onTreeSelect(SelectionEvent selectionEvent) { //original selection listener set by ADF //#{bindings.allDepartments.treeModel.makeCurrent} String adfSelectionListener = "#{bindings.allDepartments.treeModel.makeCurrent}";   //make sure the default selection listener functionality is //preserved. you don't need to do this for multi select trees //as the ADF binding only supports single current row selection     /* START PRESERVER DEFAULT ADF SELECT BEHAVIOR */ FacesContext fctx = FacesContext.getCurrentInstance(); Application application = fctx.getApplication(); ELContext elCtx = fctx.getELContext(); ExpressionFactory exprFactory = application.getExpressionFactory();   MethodExpression me = null;   me = exprFactory.createMethodExpression(elCtx, adfSelectionListener,                                           Object.class, newClass[]{SelectionEvent.class});   me.invoke(elCtx, new Object[] { selectionEvent });     /* END PRESERVER DEFAULT ADF SELECT BEHAVIOR */   RichTree tree = (RichTree)selectionEvent.getSource(); TreeModel model = (TreeModel)tree.getValue();  //get selected nodes RowKeySet rowKeySet = selectionEvent.getAddedSet();   Iterator rksIterator = rowKeySet.iterator();   //for single select configurations,this only is called once   while (rksIterator.hasNext()) {     List key = (List)rksIterator.next();     JUCtrlHierBinding treeBinding = null;     CollectionModel collectionModel = (CollectionModel)tree.getValue();     treeBinding = (JUCtrlHierBinding)collectionModel.getWrappedData();     JUCtrlHierNodeBinding nodeBinding = null;     nodeBinding = treeBinding.findNodeByKeyPath(key);     Row rw = nodeBinding.getRow();     //print first row attribute. Note that in a tree you have to     //determine the node type if you want to select node attributes     //by name and not index      String rowType = rw.getStructureDef().getDefName();       if(rowType.equalsIgnoreCase("DepartmentsView")){      System.out.println("This row is a department: " +                          rw.getAttribute("DepartmentId"));     }     else if(rowType.equalsIgnoreCase("EmployeesView")){      System.out.println("This row is an employee: " +                          rw.getAttribute("EmployeeId"));     }        else{       System.out.println("Huh????");     }     // ... do more useful stuff here   } } -------------------- Download JDeveloper 11.1.2.1 Sample Workspace

    Read the article

  • CentOS - Add additional hard drive raid arrays on Dell Perc 5/i card

    - by Quanano
    We have a Dell Poweredge 2900 system with Dell Perc 5/i card and 4 SAS hard drives attached, with NTFS partitions on them. We installed CentOS on one raid array on this controller with a different controller and it is working fine. We are now trying to access the drives shown above and they are not being shown in /dev as sdb, etc. sda is the drive that we installed centos on and it has sda1, sda2, sda3, etc. The CDROM has been picked up as well. If I scan for scsi devices then the perc and adaptec controllers are both found. sg0 is the CDROM and sg2 is the centos installed, however I think sg1 is the other drive but I cannot see anyway to mount the partitions, as only the drive is listed in /dev. Thanks. EXTRA INFO fdisk -l: Disk /dev/sda: 72.7 GB, 72746008576 bytes 255 heads, 63 sectors/track, 8844 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x11e3119f Device Boot Start End Blocks Id System /dev/sda1 * 1 64 512000 83 Linux Partition 1 does not end on cylinder boundary. /dev/sda2 64 8845 70528000 8e Linux LVM Disk /dev/mapper/vg_lal2server-lv_root: 34.4 GB, 34431041536 bytes 255 heads, 63 sectors/track, 4186 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/mapper/vg_lal2server-lv_root doesn't contain a valid partition table Disk /dev/mapper/vg_lal2server-lv_swap: 21.1 GB, 21139292160 bytes 255 heads, 63 sectors/track, 2570 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/mapper/vg_lal2server-lv_swap doesn't contain a valid partition table Disk /dev/mapper/vg_lal2server-lv_home: 16.6 GB, 16647192576 bytes 255 heads, 63 sectors/track, 2023 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/mapper/vg_lal2server-lv_home doesn't contain a valid partition table These are all from the install hdd not the additional hard drives modprobe a320raid FATAL: Module a320raid not found. lsscsi -v: [0:0:0:0] cd/dvd TSSTcorp CDRWDVD TS-H492C DE02 /dev/sr0 dir: /sys/bus/scsi/devices/0:0:0:0 [/sys/devices/pci0000:00/0000:00:1f.1/host0/target0:0:0/0:0:0:0] [4:0:10:0] enclosu DP BACKPLANE 1.05 - dir: /sys/bus/scsi/devices/4:0:10:0 [/sys/devices/pci0000:00/0000:00:05.0/0000:01:00.0/0000:02:0e.0/host4/target4:0:10/4:0:10:0] [4:2:0:0] disk DELL PERC 5/i 1.03 /dev/sda dir: /sys/bus/scsi/devices/4:2:0:0 [/sys/devices/pci0000:00/0000:00:05.0/0000:01:00.0/0000:02:0e.0/host4/target4:2:0/4:2:0:0] . 
lsmod: Module Size Used by fuse 66285 0 des_generic 16604 0 ecb 2209 0 md4 3461 0 nls_utf8 1455 0 cifs 278370 0 autofs4 26888 4 ipt_REJECT 2383 0 ip6t_REJECT 4628 2 nf_conntrack_ipv6 8748 2 nf_defrag_ipv6 12182 1 nf_conntrack_ipv6 xt_state 1492 2 nf_conntrack 79453 2 nf_conntrack_ipv6,xt_state ip6table_filter 2889 1 ip6_tables 19458 1 ip6table_filter ipv6 322029 31 ip6t_REJECT,nf_conntrack_ipv6,nf_defrag_ipv6 bnx2 79618 0 ses 6859 0 enclosure 8395 1 ses dcdbas 9219 0 serio_raw 4818 0 sg 30124 0 iTCO_wdt 13662 0 iTCO_vendor_support 3088 1 iTCO_wdt i5000_edac 8867 0 edac_core 46773 3 i5000_edac i5k_amb 5105 0 shpchp 33482 0 ext4 364410 3 mbcache 8144 1 ext4 jbd2 88738 1 ext4 sd_mod 39488 3 crc_t10dif 1541 1 sd_mod sr_mod 16228 0 cdrom 39771 1 sr_mod megaraid_sas 77090 2 aic79xx 129492 0 scsi_transport_spi 26151 1 aic79xx pata_acpi 3701 0 ata_generic 3837 0 ata_piix 22846 0 radeon 1023359 1 ttm 70328 1 radeon drm_kms_helper 33236 1 radeon drm 230675 3 radeon,ttm,drm_kms_helper i2c_algo_bit 5762 1 radeon i2c_core 31276 4 radeon,drm_kms_helper,drm,i2c_algo_bit dm_mirror 14101 0 dm_region_hash 12170 1 dm_mirror dm_log 10122 2 dm_mirror,dm_region_hash dm_mod 81500 11 dm_mirror,dm_log blkid: /dev/sda1: UUID="bc4777d9-ae2c-4c58-96ea-cedb342b8338" TYPE="ext4" /dev/sda2: UUID="j2wRZr-Mlko-QWBR-BndC-V2uN-vdhO-iKCuYu" TYPE="LVM2_member" /dev/mapper/vg_lal2server-lv_root: UUID="9238208a-1daf-4c3c-aa9b-469f0387ebee" TYPE="ext4" /dev/mapper/vg_lal2server-lv_swap: UUID="dbefb39c-5871-4bc9-b767-1ef18f12bd3d" TYPE="swap" /dev/mapper/vg_lal2server-lv_home: UUID="ec698993-08b7-443e-84f0-9f9cb31c5da8" TYPE="ext4" dmesg shows: megaraid_sas: fw state:c0000000 megasas: fwstate:c0000000, dis_OCR=0 scsi2 : LSI SAS based MegaRAID driver scsi 2:0:0:0: Direct-Access SEAGATE ST3146855SS S527 PQ: 0 ANSI: 5 scsi 2:0:1:0: Direct-Access SEAGATE ST3146855SS S527 PQ: 0 ANSI: 5 scsi 2:0:2:0: Direct-Access SEAGATE ST3146855SS S527 PQ: 0 ANSI: 5 scsi 2:0:3:0: Direct-Access SEAGATE ST3146855SS S527 PQ: 0 ANSI: 5 scsi 2:0:4:0: Direct-Access HITACHI HUS154545VLS300 D590 PQ: 0 ANSI: 5 scsi 2:0:5:0: Direct-Access HITACHI HUS154545VLS300 D590 PQ: 0 ANSI: 5 scsi 2:0:8:0: Direct-Access FUJITSU MBA3073RC D305 PQ: 0 ANSI: 5 scsi 2:0:9:0: Direct-Access FUJITSU MBA3073RC D305 PQ: 0 ANSI: 5 i.e. the 3 RAID Arrays Seagate Hitatchi and Fujitsu hard drives respectively. FURTHER UPDATE I have installed the megaraid storage manager console and connected to the server. It appears that the two CentOS installation hard drives are OK. The other 6 drives, one raid array of 4 and one raid array of 2 disks. The other drives are listed as (Foreign) Unconfigured Good.

    Read the article

  • ASP.Net MVC Interview Questions and Answers

    - by Samir R. Bhogayta
    About ASP.Net MVC: ASP.Net MVC is the framework provided by Microsoft that lets you develop applications that follow the principles of the Model-View-Controller (MVC) design pattern. .Net programmers new to MVC often think it is similar to the WebForms model (normal ASP.Net), but it is far different from WebForms programming. This article will help you quickly learn the basics of MVC along with some frequently asked interview questions and answers on ASP.Net MVC. 1. What is ASP.Net MVC? ASP.Net MVC is the framework provided by Microsoft to achieve separation of concerns, which leads to easier maintainability of the application. The Model handles data-related activity, the View deals with user-interface-related work, and the Controller manages the application flow by communicating between Model and View. 2. Why use ASP.Net MVC? The strengths of MVC (i.e. ASP.Net MVC) listed below answer this question: MVC reduces the dependency between components, which makes your code more testable. MVC does not recommend the use of server controls, hence the processing time required to generate the HTML response is drastically reduced. Integrating JavaScript libraries like jQuery into ASP.Net MVC is easy compared to the WebForms approach. 3. What do you mean by Razor? Razor is the new view engine introduced in MVC 3.0. The view engine is responsible for processing the view files [e.g. .aspx, .cshtml] in order to generate the HTML response. Previous versions of MVC depended on the ASPX view engine. 4. Can we use the ASPX view engine in the latest versions of MVC? Yes, but the recommended way is to prefer the Razor view engine. 5. What are the benefits of Razor views? The syntax for server-side code is simplified, the length of code is drastically reduced, and Razor syntax is easy to learn and reduces complexity. 6. What is the extension of a Razor view file? .cshtml (for C#) and .vbhtml (for VB). 7. How to create a Controller in MVC: create a simple class and extend it from the Controller class. 
The bare minimum requirement for a class to become a controller is to inherit from ControllerBase; the Controller class, which you normally extend, itself inherits from ControllerBase. 8. How to create an action method in MVC: add a simple method inside a controller class with an ActionResult return type. 9. How is a view accessed on the server? The browser generates a request carrying information such as the controller name, action name and parameters; when the server receives this URL it resolves the controller and action names, then calls the specified action with the provided parameters. The action normally does some processing and returns a ViewResult specifying the view name (a blank name triggers a search according to the naming conventions). 10. What is the default form method (i.e. GET, POST) for an action method? GET. To change this you can add an action-level attribute, e.g. [HttpPost]. 11. What is a filter in MVC? When the user (browser) sends a request to the server, an action method of a controller gets invoked; sometimes you may need to execute custom code before or after the action method is invoked. This custom code is called a filter. 12. What are the different types of filters in MVC? a. Authorization filter b. Action filter c. Result filter d. Exception filter [Do not forget the order mentioned above, as filters get executed in that sequence.] 13. Explain the use of a filter with an example. Suppose you are working on an MVC application where the URL is sent in an encrypted format instead of plain text; once the encrypted URL is received by the server, the action parameters are ignored because of the encryption. To solve this you can create a global action filter by overriding the OnActionExecuting method of the controller class; in it you can extract the action parameters from the encrypted URL and set them on filterContext so that plain-text parameters reach the actions (a minimal controller and filter sketch follows this list). 14. What is an HTML helper? An HTML helper is a method that returns a string; the returned string is usually an HTML tag. The standard HTML helpers (e.g. Html.BeginForm(), Html.TextBox()) available in MVC are lightweight, as they do not rely on the event model or view state the way ASP.Net server controls do.
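To make the answers above concrete, here is a small, hedged sketch (not production code) of a controller with GET and POST action methods and a custom action filter, along the lines of questions 7, 8, 10 and 11-13. The names ProductController, AuditFilter and Create are invented for the example; the types used (Controller, ActionResult, ActionFilterAttribute, ActionExecutingContext, [HttpPost]) are the standard System.Web.Mvc ones.
using System.Diagnostics;
using System.Web.Mvc;

// Custom action filter (questions 11-13): code that runs before an action is invoked.
public class AuditFilter : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        // Custom code executed before the action method runs,
        // e.g. inspect or rewrite the incoming action parameters here.
        Debug.WriteLine("Executing: " + filterContext.ActionDescriptor.ActionName);
        base.OnActionExecuting(filterContext);
    }
}

// A controller is simply a class derived from Controller (question 7).
public class ProductController : Controller
{
    // An action method returns an ActionResult (question 8).
    [AuditFilter]
    public ActionResult Index()
    {
        return View();   // renders Views/Product/Index.cshtml by convention
    }

    // The default form method is GET; opt in to POST with an action-level attribute (question 10).
    [HttpPost]
    public ActionResult Create(string name)
    {
        // ... persist the new product here ...
        return RedirectToAction("Index");
    }
}
To run the filter for every action, as in the scenario of question 13, it would typically be registered globally (GlobalFilters.Filters.Add(new AuditFilter()) in Global.asax for MVC 3 and later) instead of being attached to individual actions.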

    Read the article

  • How do I implement SkyBox in xna 4.0 Reach Profile (for Windows Phone 7)?

    - by Biny
    I'm trying to Implement SkyBox in my phone game. Most of the samples in the web are for HiDef profile, and they are using custom effects (that not supported on Windows Phone). I've tried to follow this guide. But for some reason my SkyBox is not rendered. This is my SkyBox class: using System; using System.Collections.Generic; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics; using Rocuna.Core; using Rocuna.GameEngine.Graphics; using Rocuna.GameEngine.Graphics.Components; namespace Rocuna.GameEngine.Extension.WP7.Graphics { /// <summary> /// Sky box element for phone games. /// </summary> public class SkyBox : SkyBoxBase { /// <summary> /// Initializes a new instance of the <see cref="SkyBoxBase"/> class. /// </summary> /// <param name="game">The Game that the game component should be attached to.</param> public SkyBox(TextureCube cube, Game game) : base(game) { Cube = cube; CubeFaces = new Texture2D[6]; PositionOffset = new Vector3(20, 20, 20); CreateGraphic(512); StripTexturesFromCube(); InitializeData(Game.GraphicsDevice); } #region Properties /// <summary> /// Gets or sets the position offset. /// </summary> /// <value> /// The position offset. /// </value> public Vector3 PositionOffset { get; set; } /// <summary> /// Gets or sets the position. /// </summary> /// <value> /// The position. /// </value> public Vector3 Position { get; set; } /// <summary> /// Gets or sets the cube. /// </summary> /// <value> /// The cube. /// </value> public TextureCube Cube { get; set; } /// <summary> /// Gets or sets the pixel array. /// </summary> /// <value> /// The pixel array. /// </value> public Color[] PixelArray { get; set; } /// <summary> /// Gets or sets the cube faces. /// </summary> /// <value> /// The cube faces. /// </value> public Texture2D[] CubeFaces { get; set; } /// <summary> /// Gets or sets the vertex buffer. /// </summary> /// <value> /// The vertex buffer. /// </value> public VertexBuffer VertexBuffer { get; set; } /// <summary> /// Gets or sets the index buffer. /// </summary> /// <value> /// The index buffer. /// </value> public IndexBuffer IndexBuffer { get; set; } /// <summary> /// Gets or sets the effect. /// </summary> /// <value> /// The effect. 
/// </value> public BasicEffect Effect { get; set; } #endregion protected override void LoadContent() { } public override void Update(GameTime gameTime) { var camera = Game.GetService<GraphicManager>().CurrentCamera; this.Position = camera.Position + PositionOffset; base.Update(gameTime); } public override void Draw(GameTime gameTime) { DrawOrder = int.MaxValue; var graphics = Effect.GraphicsDevice; graphics.DepthStencilState = new DepthStencilState() { DepthBufferEnable = false }; graphics.RasterizerState = new RasterizerState() { CullMode = CullMode.None }; graphics.BlendState = new BlendState(); graphics.SamplerStates[0] = SamplerState.AnisotropicClamp; graphics.SetVertexBuffer(VertexBuffer); graphics.Indices = IndexBuffer; Effect.Texture = CubeFaces[0]; Effect.CurrentTechnique.Passes[0].Apply(); graphics.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, _vertices.Count, 0, 2); Effect.Texture = CubeFaces[1]; Effect.CurrentTechnique.Passes[0].Apply(); graphics.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, _vertices.Count, 6, 2); Effect.Texture = CubeFaces[2]; Effect.CurrentTechnique.Passes[0].Apply(); graphics.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, _vertices.Count, 12, 2); Effect.Texture = CubeFaces[3]; Effect.CurrentTechnique.Passes[0].Apply(); graphics.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, _vertices.Count, 18, 2); Effect.Texture = CubeFaces[4]; Effect.CurrentTechnique.Passes[0].Apply(); graphics.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, _vertices.Count, 24, 2); Effect.Texture = CubeFaces[5]; Effect.CurrentTechnique.Passes[0].Apply(); graphics.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, _vertices.Count, 30, 2); base.Draw(gameTime); } #region Fields private List<VertexPositionNormalTexture> _vertices = new List<VertexPositionNormalTexture>(); private List<ushort> _indices = new List<ushort>(); #endregion #region Private methods private void InitializeData(GraphicsDevice graphicsDevice) { VertexBuffer = new VertexBuffer(graphicsDevice, typeof(VertexPositionNormalTexture), _vertices.Count, BufferUsage.None); VertexBuffer.SetData<VertexPositionNormalTexture>(_vertices.ToArray()); // Create an index buffer, and copy our index data into it. IndexBuffer = new IndexBuffer(graphicsDevice, typeof(ushort), _indices.Count, BufferUsage.None); IndexBuffer.SetData<ushort>(_indices.ToArray()); // Create a BasicEffect, which will be used to render the primitive. 
Effect = new BasicEffect(graphicsDevice); Effect.TextureEnabled = true; Effect.EnableDefaultLighting(); } private void CreateGraphic(float size) { Vector3[] normals = { Vector3.Right, Vector3.Left, Vector3.Up, Vector3.Down, Vector3.Backward, Vector3.Forward, }; Vector2[] textureCoordinates = { Vector2.One, Vector2.UnitY, Vector2.Zero, Vector2.UnitX, Vector2.Zero, Vector2.UnitX, Vector2.One, Vector2.UnitY, Vector2.Zero, Vector2.UnitX, Vector2.One, Vector2.UnitY, Vector2.Zero, Vector2.UnitX, Vector2.One, Vector2.UnitY, Vector2.UnitY, Vector2.Zero, Vector2.UnitX, Vector2.One, Vector2.UnitY, Vector2.Zero, Vector2.UnitX, Vector2.One, }; var index = 0; foreach (var normal in normals) { var side1 = new Vector3(normal.Z, normal.X, normal.Y); var side2 = Vector3.Cross(normal, side1); AddIndex(CurrentVertex + 0); AddIndex(CurrentVertex + 1); AddIndex(CurrentVertex + 2); AddIndex(CurrentVertex + 0); AddIndex(CurrentVertex + 2); AddIndex(CurrentVertex + 3); AddVertex((normal - side1 - side2) * size / 2, normal, textureCoordinates[index++]); AddVertex((normal - side1 + side2) * size / 2, normal, textureCoordinates[index++]); AddVertex((normal + side1 + side2) * size / 2, normal, textureCoordinates[index++]); AddVertex((normal + side1 - side2) * size / 2, normal, textureCoordinates[index++]); } } protected void StripTexturesFromCube() { PixelArray = new Color[Cube.Size * Cube.Size]; for (int s = 0; s < CubeFaces.Length; s++) { CubeFaces[s] = new Texture2D(Game.GraphicsDevice, Cube.Size, Cube.Size, false, SurfaceFormat.Color); switch (s) { case 0: Cube.GetData<Color>(CubeMapFace.PositiveX, PixelArray); CubeFaces[s].SetData<Color>(PixelArray); break; case 1: Cube.GetData(CubeMapFace.NegativeX, PixelArray); CubeFaces[s].SetData(PixelArray); break; case 2: Cube.GetData(CubeMapFace.PositiveY, PixelArray); CubeFaces[s].SetData(PixelArray); break; case 3: Cube.GetData(CubeMapFace.NegativeY, PixelArray); CubeFaces[s].SetData(PixelArray); break; case 4: Cube.GetData(CubeMapFace.PositiveZ, PixelArray); CubeFaces[s].SetData(PixelArray); break; case 5: Cube.GetData(CubeMapFace.NegativeZ, PixelArray); CubeFaces[s].SetData(PixelArray); break; } } } protected void AddVertex(Vector3 position, Vector3 normal, Vector2 textureCoordinates) { _vertices.Add(new VertexPositionNormalTexture(position, normal, textureCoordinates)); } protected void AddIndex(int index) { if (index > ushort.MaxValue) throw new ArgumentOutOfRangeException("index"); _indices.Add((ushort)index); } protected int CurrentVertex { get { return _vertices.Count; } } #endregion } }

    Read the article

  • Thematic map contd.

    - by jsharma
    The previous post (creating a thematic map) described the use of an advanced style (a color ranged-bucket style). The bucket style definition object has an attribute ('classification') which specifies the data classification scheme to use. Its value can be one of {'equal', 'quantile', 'logarithmic', 'custom'}. We used 'logarithmic' in the previous example. Here we'll describe how to use a custom classification algorithm, specifically the Jenks Natural Breaks algorithm, via its JavaScript implementation in geostats.js. The sample code from the previous post needs a few changes, which are listed below. Include the geostats.js file before or after oraclemapsv2.js: <script src="geostats.js"></script> Then modify the bucket style definition to use custom classification:

      bucketStyleDef = {
        numClasses : colorSeries[colorName].classes,
        classification: 'custom',   // was 'logarithmic'
        algorithm: jenksFromGeostats,
        styles: theStyles,
        gradient: useGradient ? 'linear' : 'off'
      };

    The function which implements the custom classification scheme is specified as the algorithm attribute value. It must accept two input parameters, an array of OM.feature objects and the name of the feature attribute (e.g. TOTPOP) to use in the classification, and it must return an array of buckets (i.e. an array of OM.style.Bucket or, in this case, OM.style.RangedBucket objects). However, the algorithm also needs to know the number of classes (i.e. the number of buckets to create), so we use a global variable to pass that information in. (Note: this bug/oversight will be fixed, and the custom algorithm will then be passed three parameters: the features array, the attribute name, and the number of classes.) So createBucketColorStyle() has the following changes:

      var numClasses;
      function createBucketColorStyle(colorName, colorSeries, rangeName, useGradient) {
        var theBucketStyle;
        var bucketStyleDef;
        var theStyles = [];
        //var numClasses;
        numClasses = colorSeries[colorName].classes;
        ...
and the function jenksFromGeostats is defined as:

    function jenksFromGeostats(featureArray, columnName) {
      var items = []; // array of attribute values to be classified
      $.each(featureArray, function(i, feature) {
        items.push(parseFloat(feature.getAttributeValue(columnName)));
      });
      // create the geostats object
      var theSeries = new geostats(items);
      // call getJenks, which returns an array of bounds
      var theClasses = theSeries.getJenks(numClasses);
      if (theClasses) {
        theClasses[theClasses.length-1] = parseFloat(theClasses[theClasses.length-1]) + 1;
      } else {
        alert('empty result from getJenks');
      }
      var theBuckets = [], aBucket = null;
      for (var k = 0; k < numClasses; k++) {
        aBucket = new OM.style.RangedBucket({
          low: parseFloat(theClasses[k]),
          high: parseFloat(theClasses[k+1])
        });
        theBuckets.push(aBucket);
      }
      return theBuckets;
    }

A screenshot of the resulting map with 5 classes is shown below. It is also possible to simply create the buckets and supply them when defining the bucket style, instead of specifying the function (algorithm). In that case the bucket style definition object would be:

    bucketStyleDef = {
      numClasses : colorSeries[colorName].classes,
      classification: 'custom',
      buckets: theBuckets, // since we are supplying all the buckets
      styles: theStyles,
      gradient: useGradient ? 'linear' : 'off'
    };

    Read the article

  • CCSpriteHole in cocos2d 2.0?

    - by rakkarage
    i was using this cocos2d class CCSpriteHole in cocos2d 1.0 fine... http://jpsarda.tumblr.com/post/15779708304/new-cocos2d-iphone-extensions-a-progress-bar-and-a i am trying to convert it to cocos2d 2.0... i got it to compile by changing glVertexPointer to glVertexAttribPointer like in the 2.0 version of CCSpriteScale9 here http://jpsarda.tumblr.com/post/9162433577/scale9grid-for-cocos2d and changing contentSizeInPixels_ to contentSize_... -(id) init { if( (self=[super init]) ) { opacityModifyRGB_ = YES; opacity_ = 255; color_ = colorUnmodified_ = ccWHITE; capSize=capSizeInPixels=CGSizeZero; //Not used blendFunc_.src = CC_BLEND_SRC; blendFunc_.dst = CC_BLEND_DST; // update texture (calls updateBlendFunc) [self setTexture:nil]; // default transform anchor anchorPoint_ = ccp(0.5f, 0.5f); vertexDataCount=24; vertexData = (ccV2F_C4F_T2F*) malloc(vertexDataCount * sizeof(ccV2F_C4F_T2F)); [self setTextureRectInPixels:CGRectZero untrimmedSize:CGSizeZero]; } return self; } -(id) initWithTexture:(CCTexture2D*)texture rect:(CGRect)rect { NSAssert(texture!=nil, @"Invalid texture for sprite"); // IMPORTANT: [self init] and not [super init]; if( (self = [self init]) ) { [self setTexture:texture]; [self setTextureRect:rect]; } return self; } -(id) initWithTexture:(CCTexture2D*)texture { NSAssert(texture!=nil, @"Invalid texture for sprite"); CGRect rect = CGRectZero; rect.size = texture.contentSize; return [self initWithTexture:texture rect:rect]; } -(id) initWithFile:(NSString*)filename { NSAssert(filename!=nil, @"Invalid filename for sprite"); CCTexture2D *texture = [[CCTextureCache sharedTextureCache] addImage: filename]; if( texture ) return [self initWithTexture:texture]; return nil; } +(id)spriteWithFile:(NSString*)f { return [[self alloc] initWithFile:f]; } - (void) dealloc { if (vertexData) free(vertexData); } -(void) updateColor { ccColor4F color4; color4.r=(float)color_.r/255.0f; color4.g=(float)color_.g/255.0f; color4.b=(float)color_.b/255.0f; color4.a=(float)opacity_/255.0f; for (int i=0; i<vertexDataCount; i++) { vertexData[i].colors=color4; } } -(void)updateTextureCoords:(CGRect)rect { CCTexture2D *tex = texture_; if(!tex) return; float atlasWidth = (float)tex.pixelsWide; float atlasHeight = (float)tex.pixelsHigh; float left,right,top,bottom; left = rect.origin.x/atlasWidth; right = left + rect.size.width/atlasWidth; top = rect.origin.y/atlasHeight; bottom = top + rect.size.height/atlasHeight; // // |/|/|/| // CGSize capTexCoordsSize=CGSizeMake(capSizeInPixels.width/atlasWidth, capSizeInPixels.height/atlasHeight); // From left to right //Top band // Left vertexData[0].texCoords=(ccTex2F){left,top}; vertexData[1].texCoords=(ccTex2F){left,top+capTexCoordsSize.height}; vertexData[2].texCoords=(ccTex2F){left+capTexCoordsSize.width,top}; vertexData[3].texCoords=(ccTex2F){left+capTexCoordsSize.width,top+capTexCoordsSize.height}; // Center vertexData[4].texCoords=(ccTex2F){right-capTexCoordsSize.width,top}; vertexData[5].texCoords=(ccTex2F){right-capTexCoordsSize.width,top+capTexCoordsSize.height}; // Right vertexData[6].texCoords=(ccTex2F){right,top}; vertexData[7].texCoords=(ccTex2F){right,top+capTexCoordsSize.height}; //Center band // Left vertexData[8].texCoords=(ccTex2F){left,bottom-capTexCoordsSize.height}; vertexData[9].texCoords=(ccTex2F){left,top+capTexCoordsSize.height}; vertexData[10].texCoords=(ccTex2F){left+capTexCoordsSize.width,bottom-capTexCoordsSize.height}; vertexData[11].texCoords=(ccTex2F){left+capTexCoordsSize.width,top+capTexCoordsSize.height}; // Center 
vertexData[12].texCoords=(ccTex2F){right-capTexCoordsSize.width,bottom-capTexCoordsSize.height}; vertexData[13].texCoords=(ccTex2F){right-capTexCoordsSize.width,top+capTexCoordsSize.height}; // Right vertexData[14].texCoords=(ccTex2F){right,bottom-capTexCoordsSize.height}; vertexData[15].texCoords=(ccTex2F){right,top+capTexCoordsSize.height}; //Bottom band //Left vertexData[16].texCoords=(ccTex2F){left,bottom}; vertexData[17].texCoords=(ccTex2F){left,bottom-capTexCoordsSize.height}; vertexData[18].texCoords=(ccTex2F){left+capTexCoordsSize.width,bottom}; vertexData[19].texCoords=(ccTex2F){left+capTexCoordsSize.width,bottom-capTexCoordsSize.height}; // Center vertexData[20].texCoords=(ccTex2F){right-capTexCoordsSize.width,bottom}; vertexData[21].texCoords=(ccTex2F){right-capTexCoordsSize.width,bottom-capTexCoordsSize.height}; // Right vertexData[22].texCoords=(ccTex2F){right,bottom}; vertexData[23].texCoords=(ccTex2F){right,bottom-capTexCoordsSize.height}; } -(void) updateVertices { float left=0; //-spriteSizeInPixels.width*0.5f; float right=left+contentSize_.width; float bottom=0; //-spriteSizeInPixels.height*0.5f; float top=bottom+contentSize_.height; float holeLeft=holeRect.origin.x*CC_CONTENT_SCALE_FACTOR(); float holeRight=holeLeft+holeRect.size.width*CC_CONTENT_SCALE_FACTOR(); float holeBottom=holeRect.origin.y*CC_CONTENT_SCALE_FACTOR(); float holeTop=holeBottom+holeRect.size.height*CC_CONTENT_SCALE_FACTOR(); // // |/|/|/| // // From left to right //Top band // Left vertexData[0].vertices=(ccVertex2F){left,top}; vertexData[1].vertices=(ccVertex2F){left,holeTop}; vertexData[2].vertices=(ccVertex2F){holeLeft,top}; vertexData[3].vertices=(ccVertex2F){holeLeft,holeTop}; // Center vertexData[4].vertices=(ccVertex2F){holeRight,top}; vertexData[5].vertices=(ccVertex2F){holeRight,holeTop}; // Right vertexData[6].vertices=(ccVertex2F){right,top}; vertexData[7].vertices=(ccVertex2F){right,holeTop}; //Center band // Left vertexData[8].vertices=(ccVertex2F){left,holeBottom}; vertexData[9].vertices=(ccVertex2F){left,holeTop}; vertexData[10].vertices=(ccVertex2F){holeLeft,holeBottom}; vertexData[11].vertices=(ccVertex2F){holeLeft,holeTop}; // Center vertexData[12].vertices=(ccVertex2F){holeRight,holeBottom}; vertexData[13].vertices=(ccVertex2F){holeRight,holeTop}; // Right vertexData[14].vertices=(ccVertex2F){right,holeBottom}; vertexData[15].vertices=(ccVertex2F){right,holeTop}; //Bottom band //Left vertexData[16].vertices=(ccVertex2F){left,bottom}; vertexData[17].vertices=(ccVertex2F){left,holeBottom}; vertexData[18].vertices=(ccVertex2F){holeLeft,bottom}; vertexData[19].vertices=(ccVertex2F){holeLeft,holeBottom}; // Center vertexData[20].vertices=(ccVertex2F){holeRight,bottom}; vertexData[21].vertices=(ccVertex2F){holeRight,holeBottom}; // Right vertexData[22].vertices=(ccVertex2F){right,bottom}; vertexData[23].vertices=(ccVertex2F){right,holeBottom}; } -(void) setHole:(CGRect)r inRect:(CGRect)totalSurface { holeRect=r; self.contentSize=totalSurface.size; holeRect.origin=ccpSub(holeRect.origin,totalSurface.origin); CGPoint holeCenter=ccp(holeRect.origin.x+holeRect.size.width*0.5f,holeRect.origin.y+holeRect.size.height*0.5f); self.anchorPoint=ccp(holeCenter.x/contentSize_.width,holeCenter.y/contentSize_.height); //[self updateTextureCoords:rectInPixels_]; [self updateVertices]; [self updateColor]; } -(void) draw { BOOL newBlend = NO; if( blendFunc_.src != CC_BLEND_SRC || blendFunc_.dst != CC_BLEND_DST ) { newBlend = YES; glBlendFunc( blendFunc_.src, blendFunc_.dst ); } glBindTexture(GL_TEXTURE_2D, 
[texture_ name]); glVertexAttribPointer(kCCVertexAttrib_Position, 2, GL_FLOAT, GL_FALSE, sizeof(ccV2F_C4F_T2F), &vertexData[0].vertices); glVertexAttribPointer(kCCVertexAttrib_TexCoords, 2, GL_FLOAT, GL_FALSE, sizeof(ccV2F_C4F_T2F), &vertexData[0].texCoords); glVertexAttribPointer(kCCVertexAttrib_Color, 4, GL_FLOAT, GL_FALSE, sizeof(ccV2F_C4F_T2F), &vertexData[0].colors); glDrawArrays(GL_TRIANGLE_STRIP, 0, 8); glVertexAttribPointer(kCCVertexAttrib_Position, 2, GL_FLOAT, GL_FALSE, sizeof(ccV2F_C4F_T2F), &vertexData[8].vertices); glVertexAttribPointer(kCCVertexAttrib_TexCoords, 2, GL_FLOAT, GL_FALSE, sizeof(ccV2F_C4F_T2F), &vertexData[8].texCoords); glVertexAttribPointer(kCCVertexAttrib_Color, 4, GL_FLOAT, GL_FALSE, sizeof(ccV2F_C4F_T2F), &vertexData[8].colors); glDrawArrays(GL_TRIANGLE_STRIP, 0, 8); glVertexAttribPointer(kCCVertexAttrib_Position, 2, GL_FLOAT, GL_FALSE, sizeof(ccV2F_C4F_T2F), &vertexData[16].vertices); glVertexAttribPointer(kCCVertexAttrib_TexCoords, 2, GL_FLOAT, GL_FALSE, sizeof(ccV2F_C4F_T2F), &vertexData[16].texCoords); glVertexAttribPointer(kCCVertexAttrib_Color, 4, GL_FLOAT, GL_FALSE, sizeof(ccV2F_C4F_T2F), &vertexData[16].colors); glDrawArrays(GL_TRIANGLE_STRIP, 0, 8); if( newBlend ) glBlendFunc(CC_BLEND_SRC, CC_BLEND_DST); } -(void)setTextureRectInPixels:(CGRect)rect untrimmedSize:(CGSize)untrimmedSize { rectInPixels_ = rect; rect_ = CC_RECT_PIXELS_TO_POINTS( rect ); //[self setContentSizeInPixels:untrimmedSize]; [self updateTextureCoords:rectInPixels_]; } -(void)setTextureRect:(CGRect)rect { CGRect rectInPixels = CC_RECT_POINTS_TO_PIXELS( rect ); [self setTextureRectInPixels:rectInPixels untrimmedSize:rectInPixels.size]; } // // RGBA protocol // #pragma mark CCSpriteHole - RGBA protocol -(GLubyte) opacity { return opacity_; } -(void) setOpacity:(GLubyte) anOpacity { opacity_ = anOpacity; // special opacity for premultiplied textures if( opacityModifyRGB_ ) [self setColor: (opacityModifyRGB_ ? colorUnmodified_ : color_ )]; [self updateColor]; } - (ccColor3B) color { if(opacityModifyRGB_){ return colorUnmodified_; } return color_; } -(void) setColor:(ccColor3B)color3 { color_ = colorUnmodified_ = color3; if( opacityModifyRGB_ ){ color_.r = color3.r * opacity_/255; color_.g = color3.g * opacity_/255; color_.b = color3.b * opacity_/255; } [self updateColor]; } -(void) setOpacityModifyRGB:(BOOL)modify { ccColor3B oldColor = self.color; opacityModifyRGB_ = modify; self.color = oldColor; } -(BOOL) doesOpacityModifyRGB { return opacityModifyRGB_; } #pragma mark CCSpriteHole - CocosNodeTexture protocol -(void) updateBlendFunc { if( !texture_ || ! [texture_ hasPremultipliedAlpha] ) { blendFunc_.src = GL_SRC_ALPHA; blendFunc_.dst = GL_ONE_MINUS_SRC_ALPHA; [self setOpacityModifyRGB:NO]; } else { blendFunc_.src = CC_BLEND_SRC; blendFunc_.dst = CC_BLEND_DST; [self setOpacityModifyRGB:YES]; } } -(void) setTexture:(CCTexture2D*)texture { // accept texture==nil as argument NSAssert( !texture || [texture isKindOfClass:[CCTexture2D class]], @"setTexture expects a CCTexture2D. Invalid argument"); texture_ = texture; [self updateBlendFunc]; } -(CCTexture2D*) texture { return texture_; } @end but now positioning and scaling seem to not work? and it starts in the wrong position... but changing the opacity still works. so i was wondering if anyone can see why my 2.0 version is not working? or if maybe there is a better way to do a sprite hole with cocos2d/opengl 2.0? shaders? thanks

    Read the article

  • How to get friendly device name from DEV_BROADCAST_DEVICEINTERFACE and Device Instance ID

    - by snicker
    I've registered a window with RegisterDeviceNotification and can successfully receive DEV_BROADCAST_DEVICEINTERFACE messages. However, the dbcc_name field in the returned struct is always empty. The struct I have is defined as such:

      [StructLayout(LayoutKind.Sequential)]
      public struct DEV_BROADCAST_DEVICEINTERFACE
      {
          public int dbcc_size;
          public int dbcc_devicetype;
          public int dbcc_reserved;
          public Guid dbcc_classguid;
          [MarshalAs(UnmanagedType.LPStr)]
          public string dbcc_name;
      }

    And I'm using Marshal.PtrToStructure on the LParam of the WM_DEVICECHANGE message. Should this be working? Or, even better, is there an alternative way to get the name of a device upon connection? EDIT (02/05/2010 20:56GMT): I found out how to get the dbcc_name field to populate by doing this:

      [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Auto)]
      public struct DEV_BROADCAST_DEVICEINTERFACE
      {
          public int dbcc_size;
          public int dbcc_devicetype;
          public int dbcc_reserved;
          public Guid dbcc_classguid;
          [MarshalAs(UnmanagedType.ByValTStr, SizeConst=255)]
          public string dbcc_name;
      }

    but I still need a way to get a "Friendly" name from what is in dbcc_name. It looks like the following: \\?\USB#VID_05AC&PID_1294&MI_00#0#{6bdd1fc6-810f-11d0-bec7-08002be2092f} And I really just want it to say "Apple iPhone" (which is what the device is in this case).
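    One route to that friendly name is the native SetupAPI: open the device interface by the path received in dbcc_name, recover its SP_DEVINFO_DATA, then query SPDRP_FRIENDLYNAME (falling back to SPDRP_DEVICEDESC). The outline below is plain C++ rather than C#, so from managed code each call would have to be P/Invoked; it is an illustrative sketch, not a drop-in answer, and assumes the interface path is passed through exactly as received.

      // Sketch: resolve a device interface path (as received in dbcc_name) to a
      // human-readable name via SetupAPI. Minimal error handling; wide (W) variants
      // are used throughout.
      #include <windows.h>
      #include <setupapi.h>
      #include <string>
      #pragma comment(lib, "setupapi.lib")

      std::wstring FriendlyNameFromInterfacePath(const wchar_t* interfacePath)
      {
          std::wstring name;
          HDEVINFO devs = SetupDiCreateDeviceInfoList(NULL, NULL);
          if (devs == INVALID_HANDLE_VALUE)
              return name;

          SP_DEVICE_INTERFACE_DATA ifaceData = { sizeof(SP_DEVICE_INTERFACE_DATA) };
          if (SetupDiOpenDeviceInterfaceW(devs, interfacePath, 0, &ifaceData))
          {
              // This call "fails" with ERROR_INSUFFICIENT_BUFFER when asked only for
              // the required size, but it still fills in the SP_DEVINFO_DATA we need.
              SP_DEVINFO_DATA devInfo = { sizeof(SP_DEVINFO_DATA) };
              DWORD needed = 0;
              SetupDiGetDeviceInterfaceDetailW(devs, &ifaceData, NULL, 0, &needed, &devInfo);

              wchar_t buffer[512] = L"";
              // Try the friendly name first, then fall back to the device description.
              if (SetupDiGetDeviceRegistryPropertyW(devs, &devInfo, SPDRP_FRIENDLYNAME,
                      NULL, (PBYTE)buffer, sizeof(buffer), NULL) ||
                  SetupDiGetDeviceRegistryPropertyW(devs, &devInfo, SPDRP_DEVICEDESC,
                      NULL, (PBYTE)buffer, sizeof(buffer), NULL))
              {
                  name = buffer;
              }
          }
          SetupDiDestroyDeviceInfoList(devs);
          return name;
      }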

    Read the article

  • Boost.MultiIndex: Is there a way to share an object between two processes?

    - by Arman
    Hello, I have a big Boost.MultiIndex container, about 10 GB. In order to reduce the reading time, I thought there should be a way to keep the data in memory so that other client programs can read and analyse it. What is the proper way to organize this? The container looks like:

      struct particleID
      {
        int ID;           // real ID for particle from Gadget2 file "ID" block
        unsigned int IDf; // position in the file
        particleID(int id, const unsigned int idf) : ID(id), IDf(idf) {}
        bool operator<(const particleID& p) const { return ID < p.ID; }
        unsigned int getByGID() const { return (ID & 0x0FFF); };
      };
      struct ID{};
      struct IDf{};
      struct IDg{};
      typedef multi_index_container<
        particleID,
        indexed_by<
          ordered_unique< tag<IDf>, BOOST_MULTI_INDEX_MEMBER(particleID,unsigned int,IDf)>,
          ordered_non_unique< tag<ID>, BOOST_MULTI_INDEX_MEMBER(particleID,int,ID)>,
          ordered_non_unique< tag<IDg>, BOOST_MULTI_INDEX_CONST_MEM_FUN(particleID,unsigned int,getByGID)>
        >
      > particlesID_set;

    Any ideas are welcome. Kind regards, Arman.
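    One direction often suggested for this kind of sharing (an editor's sketch, not taken from the question) is to place the container in a Boost.Interprocess managed shared memory segment, so that writer and reader processes open the same named segment. In the outline below the segment name, the segment size and the reduced two-index container are illustrative assumptions; whether a 10 GB shared segment is practical depends on the operating system and available memory.

      // Sketch: a multi_index_container allocated inside a shared memory segment.
      // Segment name/size and the trimmed index list are placeholders.
      #include <boost/interprocess/managed_shared_memory.hpp>
      #include <boost/interprocess/allocators/allocator.hpp>
      #include <boost/multi_index_container.hpp>
      #include <boost/multi_index/ordered_index.hpp>
      #include <boost/multi_index/member.hpp>

      namespace bip = boost::interprocess;
      namespace bmi = boost::multi_index;

      struct particleID {
        int ID;
        unsigned int IDf;
        particleID(int id, unsigned int idf) : ID(id), IDf(idf) {}
      };
      struct ID{}; struct IDf{};

      typedef bip::allocator<particleID,
              bip::managed_shared_memory::segment_manager> shm_allocator;

      typedef bmi::multi_index_container<
        particleID,
        bmi::indexed_by<
          bmi::ordered_unique< bmi::tag<IDf>,
            bmi::member<particleID, unsigned int, &particleID::IDf> >,
          bmi::ordered_non_unique< bmi::tag<ID>,
            bmi::member<particleID, int, &particleID::ID> >
        >,
        shm_allocator                 // container storage lives in the segment
      > shared_particles_set;

      int main()
      {
        // Writer: create or open the named segment and construct the container in it.
        bip::managed_shared_memory segment(bip::open_or_create, "particles_shm",
                                           1024ull * 1024 * 1024); // 1 GB for the sketch
        shared_particles_set* particles =
          segment.find_or_construct<shared_particles_set>("particles")
            (shared_particles_set::ctor_args_list(),
             shm_allocator(segment.get_segment_manager()));
        particles->insert(particleID(42, 7u));

        // A reader process would open the segment with bip::open_only and call
        // segment.find<shared_particles_set>("particles").
        return 0;
      }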

    Read the article

  • PJSIP Custom Registration Header (iOS)

    - by Daniel Redington
    I am attempting to set up SIP communication with an internal server (using the PJSIP library); however, this server requires a custom header field, with a specified value, in the REGISTER request. For example's sake we'll call this required header "MyHeader". From what I've found, the pjsua_acc_add() function will add an account and register it to the server using a config struct. The parameter "reg_hdr_list" of the config struct has the description "The optional custom SIP headers to be put in the registration request.", which sounds like exactly what I need; however, it doesn't seem to have any effect on the request itself. Here's what I have so far:

      pjsua_acc_config cfg;
      pjsua_acc_config_default(&cfg);
      //...Some other config stuff related to the server...
      pjsip_hdr test;
      test.name = pj_str("MyHeader");
      test.sname = pj_str("MyHdr");
      test.type = PJSIP_H_OTHER;
      test.prev = cfg.reg_hdr_list.prev;
      test.next = cfg.reg_hdr_list.next;
      cfg.reg_hdr_list = test;
      pj_status_t status;
      status = pjsua_acc_add(&cfg, PJ_TRUE, &acc_id);

    On the server side, there are no extra header fields at all. And the struct that is used to define the header (pjsip_hdr) has no "value" or equivalent field, so even if it did create the header, how does it set the value? Here's the reference for the header list definition: Link And the reference for the header struct: Link Any help would be greatly appreciated!
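    For what it's worth, PJSIP's own samples usually add custom headers with pjsip_generic_string_hdr, which carries both a name and a value, and push it onto the list with pj_list_push_back rather than assigning to reg_hdr_list directly. The sketch below is only an illustration, assumes a PJSIP version that provides pjsip_generic_string_hdr_init2(), and uses the placeholders "MyHeader" and "my-value".

      // Sketch: attach a name/value custom header to the registration request.
      #include <pjsua-lib/pjsua.h>

      static pj_status_t add_account_with_custom_header(pjsua_acc_id *acc_id)
      {
          pjsua_acc_config cfg;
          pjsua_acc_config_default(&cfg);
          // ... account id, registrar URI, credentials, etc. ...

          // A generic string header holds both the header name and its value.
          static pjsip_generic_string_hdr my_hdr;   // keep it alive past pjsua_acc_add()
          pj_str_t hname  = pj_str((char*)"MyHeader");
          pj_str_t hvalue = pj_str((char*)"my-value");
          pjsip_generic_string_hdr_init2(&my_hdr, &hname, &hvalue);

          // reg_hdr_list is a linked-list head; append the header node to it.
          pj_list_push_back(&cfg.reg_hdr_list, &my_hdr);

          return pjsua_acc_add(&cfg, PJ_TRUE, acc_id);
      }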

    Read the article

  • jQuery / jqGrids / Submitting form data troubles...

    - by Kelso
    Ive been messing with jqgrids alot of the last couple days, and I have nearly everything the way I want it from the display, tabs with different grids, etc. Im wanting to make use of Modal for adding and editing elements on my grid. My problem that Im running into is this. I have my editurl:"editsu.php" set, if that file is renamed, on edit, i get a 404 in the modal.. great! However, with that file in place, nothing at all seems to happen. I even put a die("testing"); line at the top, so it sees the file, it just doesnt do anything with it. Below is the content. ........ the index page jQuery("#landings").jqGrid({ url:'server.php?tid=1', datatype: "json", colNames:['ID','Tower','Sector', 'Client', 'VLAN','IP','DLink','ULink','Service','Lines','Freq','Radio','Serial','Mac'], colModel:[ {name:'id', index:'id', width : 50, align: 'center', sortable:true,editable:true,editoptions:{size:10}}, {name:'tower', index:'tower', width : 85, align: 'center', sortable:true,editable:false,editoptions:{readonly:true,size:30}}, {name:'sector', index:'sector', width : 50, align: 'center',sortable:true,editable:true,editoptions:{readonly:true,size:20}}, {name:'customer',index:'customer', width : 175, align: 'left', editable:true,editoptions:{readonly:true,size:35}}, {name:'vlan', index:'vlan', width : 35, align: 'left',editable:true,editoptions:{size:10}}, {name:'suip', index:'suip', width : 65, align: 'left',editable:true,editoptions:{size:20}}, {name:'datadl',index:'datadl', width:55, editable: true,edittype:"select",editoptions:{value:"<? $qr = qquery("select * from datatypes"); while ($q = ffetch($qr)) {echo "$q[id]:$q[name];";}?>"}}, {name:'dataul', index:'dataul', width : 55, editable: true,edittype:"select",editoptions:{value:"<? $qr = qquery("select * from datatypes"); while ($q = ffetch($qr)) {echo "$q[id]:$q[name];";}?>"}}, {name:'servicetype', index:'servicetype', width : 85, editable: true,edittype:"select",editoptions:{value:"<? $qr = qquery("select * from servicetype"); while ($q = ffetch($qr)) {echo "$q[id]:$q[name];";}?>"}}, {name:'voicelines', index:'voicelines', width : 35, align: 'center',editable:true,editoptions:{size:30}}, {name:'freqname', index:'freqname', width : 35, editable: true,edittype:"select",editoptions:{value:"<? $qr = qquery("select * from freqband"); while ($q = ffetch($qr)) {echo "$q[id]:$q[name];";}?>"}}, {name:'radioname', index:'radioname', width : 120, editable: true,edittype:"select",editoptions:{value:"<? $qr = qquery("select * from radiotype"); while ($q = ffetch($qr)) {echo "$q[id]:$q[name];";}?>"}}, {name:'serial', index:'serial', width : 100, align: 'right',editable:true,editoptions:{size:20}}, {name:'mac', index:'mac', width : 120, align: 'right',editable:true,editoptions:{size:20}} ], rowNum:20, rowList:[30,50,70], pager: '#pagerl', sortname: 'sid', mtype: "GET", viewrecords: true, sortorder: "asc", altRows: true, caption:"Landings", editurl:"editsu.php", height:420 }); jQuery("#landings").jqGrid('navGrid','#pagerl',{edit:true,add:true,del:false,search:false},{height:400,reloadAfterSubmit:false},{height:400,reloadAfterSubmit:false},{reloadAfterSubmit:false},{}); now for the editsu.php file.. 
    $operation = $_REQUEST['oper'];
    if ($operation == "edit") {
      qquery("UPDATE customers SET
                vlan = '".$_POST['vlan']."',
                datadl = '".$_POST['datadl']."',
                dataul = '".$_POST['dataul']."',
                servicetype = '".$_POST['servicetype']."',
                voicelines = '".$_POST['voicelines']."',
                freqname = '".$_POST['freqname']."',
                radioname = '".$_POST['radioname']."',
                serial = '".$_POST['serial']."',
                mac = '".$_POST['mac']."'
              WHERE id = '".$_POST['id']."'") or die(mysql_error());
    }

I'm just having a hard time troubleshooting this to figure out where it's getting hung up. My next question after this would be to see if it's possible to make it so that when you click "add", it auto-inserts a row into the DB with a couple of variables predetermined and then brings up the modal window, but I'll work on the first problem first. Thanks!

    Read the article

  • How can I reset addAttributeToFilter in Magento searches

    - by Bobby
    I'm having problems getting the addAttributeToFilter function within a loop to behave in Magento. I have test data in my store to support searches for all of the following data:

      $attributeSelections = array(
        array('size' => 44, 'color' => 67, 'manufacturer' => 17),
        array('size' => 43, 'color' => 69, 'manufacturer' => 17),
        array('size' => 42, 'color' => 70, 'manufacturer' => 17));

    And my code to search through these combinations:

      foreach ($attributeSelections as $selection) {
        $searcher = Mage::getSingleton('catalogsearch/advanced')->getProductCollection();
        foreach ($selection as $k => $v) {
          $searcher->addAttributeToFilter("$k", array('eq' => "$v"));
          echo "$k: $v<br />";
        }
        $result = $searcher->getData();
        print_r($result);
      }

    This loop gives the following results (slightly sanitised for viewing pleasure):

      size: 44 color: 67 manufacturer: 17
      Array ( [0] => Array ( [entity_id] => 2965 [entity_type_id] => 4 [attribute_set_id] => 28 [type_id] => simple [sku] => 1006-0001 [size] => 44 [color] => 67 [manufacturer] => 17 ) )
      size: 43 color: 69 manufacturer: 17
      Array ( [0] => Array ( [entity_id] => 2965 [entity_type_id] => 4 [attribute_set_id] => 28 [type_id] => simple [sku] => 1006-0001 [size] => 44 [color] => 67 [manufacturer] => 17 ) )
      size: 42 color: 70 manufacturer: 17
      Array ( [0] => Array ( [entity_id] => 2965 [entity_type_id] => 4 [attribute_set_id] => 28 [type_id] => simple [sku] => 1006-0001 [size] => 44 [color] => 67 [manufacturer] => 17 ) )

    So my loop is functioning and generating the search. However, the values fed into addAttributeToFilter on the first iteration of the loop seem to remain stored for each search. I've tried clearing my search object, for example with unset($searcher) and unset($result). I've also tried Magento functions such as getNewEmptyItem(), resetData(), distinct() and clear(), but none have the desired effect. Basically, what I am trying to do is check for duplicate products before my script attempts to programmatically create a product with these attribute combinations. The array of attribute selections may be of varying sizes, hence the need for a loop. I would be very appreciative if anyone could shed some light on my problem.

    Read the article

  • Understanding evaluation of expressions containing '++' and '->' operators in C.

    - by Leif Ericson
    Consider this example:

      struct { int num; } s, *ps;
      s.num = 0;
      ps = &s;
      ++ps->num;
      printf("%d", s.num); /* Prints 1 */

    It prints 1. So I understand that this is because, according to operator precedence, -> is higher than ++, so the value ps->num (which is 0) is fetched first and then the ++ operator operates on it, incrementing it to 1.

      struct { int num; } s, *ps;
      s.num = 0;
      ps = &s;
      ps++->num;
      printf("%d", s.num); /* Prints 0 */

    In this example I get 0 and I don't understand why; the explanation of the first example should be the same for this example. But it seems that this expression is evaluated as follows: first, the ++ operator operates, and it operates on ps, so it increments it to point to the next struct. Only then does -> operate, and it does nothing, because it just fetches the num field of the next struct and does nothing with it. But this contradicts the precedence of the operators, which says that -> has higher precedence than ++. Can someone explain this behavior? Edit: After reading two answers which refer to C++ precedence tables indicating that the prefix ++/-- operators have lower precedence than ->, I did some googling and came up with this link, which states that this rule applies to C as well. It fits exactly and fully explains this behavior, but I must add that the table in this link contradicts a table in my own copy of K&R ANSI C. So if you have suggestions as to which source is correct, I would like to know. Thanks.
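    An editor's sketch (not part of the question): postfix ++ and -> sit on the same, highest precedence level and group left to right, so ps++->num parses as (ps++)->num, while prefix ++ binds less tightly, so ++ps->num parses as ++(ps->num). Writing the implicit parentheses out makes both results fall out; the snippet below is C++, but the grouping is identical in C.

      #include <cstdio>

      struct S { int num; };

      int main()
      {
          S s = {0};
          S* ps = &s;

          ++(ps->num);          // this is what "++ps->num" means: prefix ++ applies to
                                // the lvalue ps->num, so s.num becomes 1
          std::printf("%d\n", s.num);   // prints 1

          s.num = 0;
          ps = &s;
          (void)(ps++)->num;    // this is what "ps++->num" means: num is read through
                                // the old value of ps, then ps itself is advanced;
                                // s.num is never touched
          std::printf("%d\n", s.num);   // prints 0
          return 0;
      }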

    Read the article

< Previous Page | 130 131 132 133 134 135 136 137 138 139 140 141  | Next Page >