Search Results

Search found 17068 results on 683 pages for 'merge array'.


  • GLSL: Strange light reflections

    - by Tom
    According to this tutorial I'm trying to make a normal mapping using GLSL, but something is wrong and I can't find the solution. The output render is in this image: Image1 in this image is a plane with two triangles and each of it is different illuminated (that is bad). The plane has 6 vertices. In the upper left side of this plane are 2 identical vertices (same in the lower right). Here are some vectors same for each vertice: normal vector = 0, 1, 0 (red lines on image) tangent vector = 0, 0,-1 (green lines on image) bitangent vector = -1, 0, 0 (blue lines on image) here I have one question: The two identical vertices does need to have the same tangent and bitangent? I have tried to make other values to the tangents but the effect was still similar. Here are my shaders Vertex shader: #version 130 // Input vertex data, different for all executions of this shader. in vec3 vertexPosition_modelspace; in vec2 vertexUV; in vec3 vertexNormal_modelspace; in vec3 vertexTangent_modelspace; in vec3 vertexBitangent_modelspace; // Output data ; will be interpolated for each fragment. out vec2 UV; out vec3 Position_worldspace; out vec3 EyeDirection_cameraspace; out vec3 LightDirection_cameraspace; out vec3 LightDirection_tangentspace; out vec3 EyeDirection_tangentspace; // Values that stay constant for the whole mesh. uniform mat4 MVP; uniform mat4 V; uniform mat4 M; uniform mat3 MV3x3; uniform vec3 LightPosition_worldspace; void main(){ // Output position of the vertex, in clip space : MVP * position gl_Position = MVP * vec4(vertexPosition_modelspace,1); // Position of the vertex, in worldspace : M * position Position_worldspace = (M * vec4(vertexPosition_modelspace,1)).xyz; // Vector that goes from the vertex to the camera, in camera space. // In camera space, the camera is at the origin (0,0,0). vec3 vertexPosition_cameraspace = ( V * M * vec4(vertexPosition_modelspace,1)).xyz; EyeDirection_cameraspace = vec3(0,0,0) - vertexPosition_cameraspace; // Vector that goes from the vertex to the light, in camera space. M is ommited because it's identity. vec3 LightPosition_cameraspace = ( V * vec4(LightPosition_worldspace,1)).xyz; LightDirection_cameraspace = LightPosition_cameraspace + EyeDirection_cameraspace; // UV of the vertex. No special space for this one. UV = vertexUV; // model to camera = ModelView vec3 vertexTangent_cameraspace = MV3x3 * vertexTangent_modelspace; vec3 vertexBitangent_cameraspace = MV3x3 * vertexBitangent_modelspace; vec3 vertexNormal_cameraspace = MV3x3 * vertexNormal_modelspace; mat3 TBN = transpose(mat3( vertexTangent_cameraspace, vertexBitangent_cameraspace, vertexNormal_cameraspace )); // You can use dot products instead of building this matrix and transposing it. See References for details. LightDirection_tangentspace = TBN * LightDirection_cameraspace; EyeDirection_tangentspace = TBN * EyeDirection_cameraspace; } Fragment shader: #version 130 // Interpolated values from the vertex shaders in vec2 UV; in vec3 Position_worldspace; in vec3 EyeDirection_cameraspace; in vec3 LightDirection_cameraspace; in vec3 LightDirection_tangentspace; in vec3 EyeDirection_tangentspace; // Ouput data out vec3 color; // Values that stay constant for the whole mesh. 
uniform sampler2D DiffuseTextureSampler; uniform sampler2D NormalTextureSampler; uniform sampler2D SpecularTextureSampler; uniform mat4 V; uniform mat4 M; uniform mat3 MV3x3; uniform vec3 LightPosition_worldspace; void main(){ // Light emission properties // You probably want to put them as uniforms vec3 LightColor = vec3(1,1,1); float LightPower = 40.0; // Material properties vec3 MaterialDiffuseColor = texture2D( DiffuseTextureSampler, vec2(UV.x,-UV.y) ).rgb; vec3 MaterialAmbientColor = vec3(0.1,0.1,0.1) * MaterialDiffuseColor; //vec3 MaterialSpecularColor = texture2D( SpecularTextureSampler, UV ).rgb * 0.3; vec3 MaterialSpecularColor = vec3(0.5,0.5,0.5); // Local normal, in tangent space. V tex coordinate is inverted because normal map is in TGA (not in DDS) for better quality vec3 TextureNormal_tangentspace = normalize(texture2D( NormalTextureSampler, vec2(UV.x,-UV.y) ).rgb*2.0 - 1.0); // Distance to the light float distance = length( LightPosition_worldspace - Position_worldspace ); // Normal of the computed fragment, in camera space vec3 n = TextureNormal_tangentspace; // Direction of the light (from the fragment to the light) vec3 l = normalize(LightDirection_tangentspace); // Cosine of the angle between the normal and the light direction, // clamped above 0 // - light is at the vertical of the triangle -> 1 // - light is perpendicular to the triangle -> 0 // - light is behind the triangle -> 0 float cosTheta = clamp( dot( n,l ), 0,1 ); // Eye vector (towards the camera) vec3 E = normalize(EyeDirection_tangentspace); // Direction in which the triangle reflects the light vec3 R = reflect(-l,n); // Cosine of the angle between the Eye vector and the Reflect vector, // clamped to 0 // - Looking into the reflection -> 1 // - Looking elsewhere -> < 1 float cosAlpha = clamp( dot( E,R ), 0,1 ); color = // Ambient : simulates indirect lighting MaterialAmbientColor + // Diffuse : "color" of the object MaterialDiffuseColor * LightColor * LightPower * cosTheta / (distance*distance) + // Specular : reflective highlight, like a mirror MaterialSpecularColor * LightColor * LightPower * pow(cosAlpha,5) / (distance*distance); //color.xyz = E; //color.xyz = LightDirection_tangentspace; //color.xyz = EyeDirection_tangentspace; } I have replaced the original color value by EyeDirection_tangentspace vector and then I got other strange effect but I can not link the image (not eunogh reputation) Is it possible that with this shaders is something wrong, or maybe in other place in my code e.g with my matrices? SOLVED Solved... 3 days needed for changing one letter from this: glBindBuffer(GL_ARRAY_BUFFER, vbo); glVertexAttribPointer ( 4, // attribute 3, // size GL_FLOAT, // type GL_FALSE, // normalized? sizeof(VboVertex), // stride (void*)(12*sizeof(float)) // array buffer offset ); to this: glBindBuffer(GL_ARRAY_BUFFER, vbo); glVertexAttribPointer ( 4, // attribute 3, // size GL_FLOAT, // type GL_FALSE, // normalized? sizeof(VboVertex), // stride (void*)(11*sizeof(float)) // array buffer offset ); see difference? :)
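
    The accepted fix above boils down to an attribute-offset bug: attribute 4 was being fed data starting at 12*sizeof(float) instead of 11*sizeof(float), so the tangent-space basis was built from shifted values and the two triangles ended up lit differently. As a hedged illustration of why a single float of offset matters, here is a small C++ sketch of an interleaved vertex setup; the field order is an assumption (the question never shows VboVertex), chosen so that the bitangent starts at float 11 as in the corrected call, and offsetof keeps every byte offset tied to the struct.

      // Hedged illustration only: the real VboVertex layout is not shown in the question,
      // so this 14-float layout is an assumption chosen to match the corrected offset of
      // 11 * sizeof(float) for attribute 4. offsetof ties each byte offset to the struct,
      // which rules out the off-by-one-float mistake that caused the artefact.
      #include <GL/glew.h>   // assumption: whichever GL loader the project already uses
      #include <cstddef>

      struct VboVertex {
          float position[3];   // floats 0..2
          float uv[2];         // floats 3..4
          float normal[3];     // floats 5..7
          float tangent[3];    // floats 8..10
          float bitangent[3];  // floats 11..13 -> attribute 4 starts at 11 * sizeof(float)
      };

      void setupVertexAttributes(GLuint vbo) {
          glBindBuffer(GL_ARRAY_BUFFER, vbo);
          const GLsizei stride = sizeof(VboVertex);
          glEnableVertexAttribArray(0);
          glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, (void*)offsetof(VboVertex, position));
          glEnableVertexAttribArray(1);
          glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, stride, (void*)offsetof(VboVertex, uv));
          glEnableVertexAttribArray(2);
          glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, stride, (void*)offsetof(VboVertex, normal));
          glEnableVertexAttribArray(3);
          glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, stride, (void*)offsetof(VboVertex, tangent));
          glEnableVertexAttribArray(4);
          glVertexAttribPointer(4, 3, GL_FLOAT, GL_FALSE, stride, (void*)offsetof(VboVertex, bitangent));
      }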


  • C to C++ Conversion [closed]

    - by Annalyne
    Can someone convert this code to C++, pretty please? :( #include <stdio.h> #include <stdlib.h> #include <time.h> #define WEAPON_ROPE 10 #define WEAPON_REVOLVER 20 #define WEAPON_LEADPIPE 30 #define WEAPON_CANDLESTICK 40 #define WEAPON_KNIFE 50 #define WEAPON_WRENCH 60 #define PEOPLE_MRGREEN 100 #define PEOPLE_MSSCARLET 200 #define PEOPLE_CONLMUSTARD 300 #define PEOPLE_PROFPLUM 400 #define PEOPLE_MISPEACOCK 500 #define PEOPLE_MISWHITE 600 #define PLACE_KITCHEN 1 #define PLACE_HALL 2 #define PLACE_POOLROOM 3 #define PLACE_STUDY 4 #define PLACE_LOUNG 5 #define PLACE_LIBRARY 6 #define PLACE_CONSERVATORY 7 #define PLACE_DINING 8 #define PLACE_BILLIARDS 9 int main() { int die = 0; int players[6][9] = {{0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0}}; int allCards[] = {WEAPON_ROPE, WEAPON_REVOLVER, WEAPON_LEADPIPE, WEAPON_CANDLESTICK, WEAPON_CANDLESTICK, WEAPON_KNIFE, WEAPON_WRENCH, PEOPLE_MRGREEN, PEOPLE_MSSCARLET, PEOPLE_CONLMUSTARD, PEOPLE_CONLMUSTARD, PEOPLE_PROFPLUM, PEOPLE_MISPEACOCK, PEOPLE_MISWHITE, PLACE_KITCHEN, PLACE_HALL, PLACE_POOLROOM, PLACE_STUDY, PLACE_LOUNG, PLACE_LIBRARY, PLACE_CONSERVATORY, PLACE_DINING, PLACE_BILLIARDS}; int deckSize = 23; // number of cards in allCards array int count; for (count = 0; count < deckSize; ++count) { printf(", %d", allCards[count]); } // End for // These three array's are so you can put a card back, if need be... int weaponCards[] = {WEAPON_ROPE, WEAPON_REVOLVER, WEAPON_LEADPIPE, WEAPON_CANDLESTICK, WEAPON_CANDLESTICK, WEAPON_KNIFE, WEAPON_WRENCH}; int weaponDeckSize = 7; int peopleCards[] = {PEOPLE_MRGREEN, PEOPLE_MSSCARLET, PEOPLE_CONLMUSTARD, PEOPLE_CONLMUSTARD, PEOPLE_PROFPLUM, PEOPLE_MISPEACOCK, PEOPLE_MISWHITE}; int peopleDeckSize = 7; int placeCards[] = {PLACE_KITCHEN, PLACE_HALL, PLACE_POOLROOM, PLACE_STUDY, PLACE_LOUNG, PLACE_LIBRARY, PLACE_CONSERVATORY, PLACE_DINING, PLACE_BILLIARDS}; int placeDeckSize = 9; srand(clock()); // seed rand() using clock() which gives // the current tick your processor is at... int killer[3]; // no need to initialize yet. killer[0-2] will initialize int deckShuffle = rand() % weaponDeckSize; // picks one number out of the deck killer[0] = weaponCards[deckShuffle]; allCards[deckShuffle] = 0; // Card drawn. No longer exists in deck deckShuffle = rand() % peopleDeckSize; // picks another random card out of the deck killer[1] = peopleCards[deckShuffle]; allCards[deckShuffle + weaponDeckSize] = 0; // Card drawn. No longer exists in deck deckShuffle = rand() % placeDeckSize; // randomly picks the last card needed killer[2] = placeCards[deckShuffle]; allCards[deckShuffle + weaponDeckSize + peopleDeckSize] = 0; // Card drawn. 
No longer exists in deck int numberOfCards = 0; printf("CLUE\n"); printf("written by John Schintone\n"); printf("Origonal game delvoped by Hasbro\n"); int numberOfPlayers = 0; while ((numberOfPlayers < 3) || (numberOfPlayers > 6)) { printf("How many players are Going to play :\n"); printf("[number] > "); scanf("%d",&numberOfPlayers); // A very fast if statement which only uses integers/char's switch(numberOfPlayers) { case 6: { numberOfCards = 3; } break; case 5: { numberOfCards = 4; } break; case 4: { numberOfCards = 5; } break; case 3: { numberOfCards = 6; } break; default: { printf("You must enter a number between 3 and 6...\n"); } // End default } // End switch } // End while int index1, index2; // Note: ++index1; is faster than index1++; and will almost always // produce better code (index1++ happens after this statement line. // ++index1 increments index1 before this statement line) for (index1 = 0; index1 < numberOfPlayers; ++index1) { printf("Player %d", index1); for (index2 = 0; index2 < numberOfCards; ++index2) { // Remember that allCards[deckShuffle] == 0 because we removed that // card ages ago... works out well, just don't forget you did that : ) while (allCards[deckShuffle] == 0) { deckShuffle = rand() % deckSize; } // End while players[index1][index2] = allCards[deckShuffle]; allCards[deckShuffle] = 0; // Card removed for after loop... printf(", %d", players[index1][index2]); switch(players[index1][index2]) { case WEAPON_ROPE: { } break; // Add more... case PEOPLE_MRGREEN: { } break; // Add more... case PLACE_KITCHEN: { } break; // Add more... default: { printf("Program has caught player %d cheating...", index1); } // End default } // End switch } // End for printf("\n"); } // End for printf("The killer is %d, with the %d, and in the %d \n\n", killer[0], killer[1], killer[2]); printf("Type h for this help... \n"); printf("Type e to escape... \n"); printf("Type r to roll the die... \n"); char command = '\0'; // \0 represents zero, or the null character while (command != 'e') { printf("[one character] > "); scanf("%c", &command); if (command == 'r') { die = rand() % 6 + 1; printf("Your number is: %d \n", die); } // end while if (command == 'h') { printf("Type h for this help... \n"); printf("Type e to escape... \n"); printf("Type r to roll the die... \n"); } // End if printf("\n"); } // End while return(0); // Success. Program worked ok } // End main() Function
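
    Since the request is literally a C-to-C++ conversion, a short hedged sketch of how a few of the C idioms above map onto idiomatic C++ may be more useful than a line-by-line port: std::vector replaces the fixed card arrays, <random> replaces srand(clock()) and rand(), and std::cout replaces printf. The weapon values reused below are the WEAPON_* constants from the original; the drawCard helper is illustrative and not part of the original program.

      #include <iostream>
      #include <random>
      #include <vector>

      // Minimal sketch of the C-to-C++ mapping, not a full port of the Clue program above.
      int main() {
          std::vector<int> weaponCards = {10, 20, 30, 40, 50, 60};  // the WEAPON_* values
          std::mt19937 rng(std::random_device{}());                 // replaces srand(clock()) / rand()

          // Draw one random card and remove it from the deck ("card no longer exists in deck").
          auto drawCard = [&rng](std::vector<int>& deck) {
              std::uniform_int_distribution<std::size_t> pick(0, deck.size() - 1);
              std::size_t i = pick(rng);
              int card = deck[i];
              deck.erase(deck.begin() + static_cast<std::ptrdiff_t>(i));
              return card;
          };

          int killerWeapon = drawCard(weaponCards);
          std::cout << "The killer used card " << killerWeapon << "\n";  // replaces printf
          return 0;
      }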


  • Getting a mirrored mesh from my data structure

    - by Steve
    Here's the background: I'm in the beginning stages of an RTS game in Unity. I have a procedurally generated terrain with a perlin-noise height map, as well as a function to generate a river. The problem is that the graphical creation of the map is taking the data structure of the map and rotating it by 180 degrees. I noticed this problem when I was creating my rivers. I would set the River's height to flat, and noticed that the actual tiles that were flat in the graphical representation were flipped and mirrored. Here's 3 screenshots of the map from different angles: http://imgur.com/a/VLHHq As you can see, if you flipped (graphically) the river by 180 degrees on the z axis, it would fit where the terrain is flattened. I have a suspicion it is being caused by a misunderstanding on my part of how vertices work. Alas, here is a snippet of the code that is used: This code here creates a new array of Tile objects, which hold the information for each tile, including its type, coordinate, height, and it's 4 vertices public DTileMap (int size_x, int size_y) { this.size_x = size_x; this.size_y = size_y; //Initialize Map_Data Array of Tile Objects map_data = new Tile[size_x, size_y]; for (int j = 0; j < size_y; j++) { for (int i = 0; i < size_x; i++) { map_data [i, j] = new Tile (); map_data[i,j].coordinate.x = (int)i; map_data[i,j].coordinate.y = (int)j; map_data[i,j].vertices[0] = new Vector3 (i * GTileMap.TileMap.tileSize, map_data[i,j].Height, -j * GTileMap.TileMap.tileSize); map_data[i,j].vertices[1] = new Vector3 ((i+1) * GTileMap.TileMap.tileSize, map_data[i,j].Height, -(j) * GTileMap.TileMap.tileSize); map_data[i,j].vertices[2] = new Vector3 (i * GTileMap.TileMap.tileSize, map_data[i,j].Height, -(j-1) * GTileMap.TileMap.tileSize); map_data[i,j].vertices[3] = new Vector3 ((i+1) * GTileMap.TileMap.tileSize, map_data[i,j].Height, -(j-1) * GTileMap.TileMap.tileSize); } } This code sets the river tiles to height 0 foreach (Tile t in map_data) { if (t.realType == "Water") { t.vertices[0].y = 0f; t.vertices[1].y = 0f; t.vertices[2].y = 0f; t.vertices[3].y = 0f; } } And below is the code to generate the actual graphics from the data: public void BuildMesh () { DTileMap.DTileMap map = new DTileMap.DTileMap (size_x, size_z); int numTiles = size_x * size_z; int numTris = numTiles * 2; int vsize_x = size_x + 1; int vsize_z = size_z + 1; int numVerts = vsize_x * vsize_z; // Generate the mesh data Vector3[] vertices = new Vector3[ numVerts ]; Vector3[] normals = new Vector3[numVerts]; Vector2[] uv = new Vector2[numVerts]; int[] triangles = new int[ numTris * 3 ]; int x, z; for (z=0; z < vsize_z; z++) { for (x=0; x < vsize_x; x++) { normals [z * vsize_x + x] = Vector3.up; uv [z * vsize_x + x] = new Vector2 ((float)x / size_x, 1f - (float)z / size_z); } } for (z=0; z < vsize_z; z+=1) { for (x=0; x < vsize_x; x+=1) { if (x == vsize_x - 1 && z == vsize_z - 1) { vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x - 1, z - 1].vertices [3]; } else if (z == vsize_z - 1) { vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x, z - 1].vertices [2]; } else if (x == vsize_x - 1) { vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x - 1, z].vertices [1]; } else { vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x, z].vertices [0]; vertices [z * vsize_x + x+1] = DTileMap.DTileMap.map_data [x, z].vertices [1]; vertices [(z+1) * vsize_x + x] = DTileMap.DTileMap.map_data [x, z].vertices [2]; vertices [(z+1) * vsize_x + x+1] = DTileMap.DTileMap.map_data [x, z].vertices [3]; } } } } for (z=0; 
z < size_z; z++) { for (x=0; x < size_x; x++) { int squareIndex = z * size_x + x; int triOffset = squareIndex * 6; triangles [triOffset + 0] = z * vsize_x + x + 0; triangles [triOffset + 2] = z * vsize_x + x + vsize_x + 0; triangles [triOffset + 1] = z * vsize_x + x + vsize_x + 1; triangles [triOffset + 3] = z * vsize_x + x + 0; triangles [triOffset + 5] = z * vsize_x + x + vsize_x + 1; triangles [triOffset + 4] = z * vsize_x + x + 1; } } // Create a new Mesh and populate with the data Mesh mesh = new Mesh (); mesh.vertices = vertices; mesh.triangles = triangles; mesh.normals = normals; mesh.uv = uv; // Assign our mesh to our filter/renderer/collider MeshFilter mesh_filter = GetComponent<MeshFilter> (); MeshCollider mesh_collider = GetComponent<MeshCollider> (); mesh_filter.mesh = mesh; mesh_collider.sharedMesh = mesh; calculateMeshTangents (mesh); BuildTexture (map); } If this looks familiar to you, its because i got most of it from Quill18. I've been slowly adapting it for my uses. And please include any suggestions you have for my code. I'm still in the very early prototyping stage.
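
    One thing worth checking, though it is only a guess from the snippets above: the tile data stores its corners at z = -j * tileSize, while BuildMesh writes vertex (x, z) at index z * vsize_x + x with z growing in the positive direction, and mixing those two conventions is exactly the kind of thing that produces a mesh that looks like the data rotated or mirrored by 180 degrees. The C++ sketch below (tileSize, the corner numbering and the function names are all hypothetical) only shows the invariant to enforce: the data index j and the world-space z coordinate must grow the same way both when the corners are generated and when the shared vertex array is filled.

      #include <cstdio>

      // Hypothetical sketch: keep one sign convention for the z axis everywhere,
      // otherwise the rendered mesh is a mirrored copy of the data structure.
      struct Vec3 { float x, y, z; };

      Vec3 tileCorner(int i, int j, int corner, float tileSize, float height) {
          int dx = (corner == 1 || corner == 3) ? 1 : 0;  // 0/2 = left corners, 1/3 = right corners
          int dz = (corner == 2 || corner == 3) ? 1 : 0;  // 0/1 = near row, 2/3 = far row
          // Use +j here; writing -j (as in "-j * GTileMap.TileMap.tileSize") flips the map.
          return Vec3{ (i + dx) * tileSize, height, (j + dz) * tileSize };
      }

      int vertexIndex(int x, int z, int vsize_x) {  // vsize_x = size_x + 1, as in BuildMesh
          return z * vsize_x + x;                   // same orientation as the corner generation
      }

      int main() {
          Vec3 c = tileCorner(2, 3, 3, 1.0f, 0.0f);
          std::printf("tile (2,3) far-right corner: (%.1f, %.1f, %.1f), vertex index %d\n",
                      c.x, c.y, c.z, vertexIndex(3, 4, 36));
          return 0;
      }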


  • Having trouble with pathfinding

    - by user2144536
    I'm trying to implement pathfinding in a game I'm programming using this method. I'm implementing it with recursion but some of the values after the immediate circle of tiles around the player are way off. For some reason I cannot find the problem with it. This is a screen cap of the problem: The pathfinding values are displayed in the center of every tile. Clipped blocks are displayed with the value of 'c' because the values were too high and were covering up the next value. The red circle is the first value that is incorrect. The code below is the recursive method. //tileX is the coordinates of the current tile, val is the current pathfinding value, used[][] is a boolean //array to keep track of which tiles' values have already been assigned public void pathFind(int tileX, int tileY, int val, boolean[][] used) { //increment pathfinding value int curVal = val + 1; //set current tile to true if it hasn't been already used[tileX][tileY] = true; //booleans to know which tiles the recursive call needs to be used on boolean topLeftUsed = false, topUsed = false, topRightUsed = false, leftUsed = false, rightUsed = false, botomLeftUsed = false, botomUsed = false, botomRightUsed = false; //set value of top left tile if necessary if(tileX - 1 >= 0 && tileY - 1 >= 0) { //isClipped(int x, int y) returns true if the coordinates givin are in a tile that can't be walked through (IE walls) //occupied[][] is an array that keeps track of which tiles have an enemy in them // //if the tile is not clipped and not occupied set the pathfinding value if(isClipped((tileX - 1) * 50 + 25, (tileY - 1) * 50 + 25) == false && occupied[tileX - 1][tileY - 1] == false && !(used[tileX - 1][tileY - 1])) { pathFindingValues[tileX - 1][tileY - 1] = curVal; topLeftUsed = true; used[tileX - 1][tileY - 1] = true; } //if it is occupied set it to an arbitrary high number so enemies find alternate routes if the best is clogged if(occupied[tileX - 1][tileY - 1] == true) pathFindingValues[tileX - 1][tileY - 1] = 1000000000; //if it is clipped set it to an arbitrary higher number so enemies don't travel through walls if(isClipped((tileX - 1) * 50 + 25, (tileY - 1) * 50 + 25) == true) pathFindingValues[tileX - 1][tileY - 1] = 2000000000; } //top middle if(tileY - 1 >= 0 ) { if(isClipped(tileX * 50 + 25, (tileY - 1) * 50 + 25) == false && occupied[tileX][tileY - 1] == false && !(used[tileX][tileY - 1])) { pathFindingValues[tileX][tileY - 1] = curVal; topUsed = true; used[tileX][tileY - 1] = true; } if(occupied[tileX][tileY - 1] == true) pathFindingValues[tileX][tileY - 1] = 1000000000; if(isClipped(tileX * 50 + 25, (tileY - 1) * 50 + 25) == true) pathFindingValues[tileX][tileY - 1] = 2000000000; } //top right if(tileX + 1 <= used.length && tileY - 1 >= 0) { if(isClipped((tileX + 1) * 50 + 25, (tileY - 1) * 50 + 25) == false && occupied[tileX + 1][tileY - 1] == false && !(used[tileX + 1][tileY - 1])) { pathFindingValues[tileX + 1][tileY - 1] = curVal; topRightUsed = true; used[tileX + 1][tileY - 1] = true; } if(occupied[tileX + 1][tileY - 1] == true) pathFindingValues[tileX + 1][tileY - 1] = 1000000000; if(isClipped((tileX + 1) * 50 + 25, (tileY - 1) * 50 + 25) == true) pathFindingValues[tileX + 1][tileY - 1] = 2000000000; } //left if(tileX - 1 >= 0) { if(isClipped((tileX - 1) * 50 + 25, (tileY) * 50 + 25) == false && occupied[tileX - 1][tileY] == false && !(used[tileX - 1][tileY])) { pathFindingValues[tileX - 1][tileY] = curVal; leftUsed = true; used[tileX - 1][tileY] = true; } if(occupied[tileX - 1][tileY] == true) 
pathFindingValues[tileX - 1][tileY] = 1000000000; if(isClipped((tileX - 1) * 50 + 25, (tileY) * 50 + 25) == true) pathFindingValues[tileX - 1][tileY] = 2000000000; } //right if(tileX + 1 <= used.length) { if(isClipped((tileX + 1) * 50 + 25, (tileY) * 50 + 25) == false && occupied[tileX + 1][tileY] == false && !(used[tileX + 1][tileY])) { pathFindingValues[tileX + 1][tileY] = curVal; rightUsed = true; used[tileX + 1][tileY] = true; } if(occupied[tileX + 1][tileY] == true) pathFindingValues[tileX + 1][tileY] = 1000000000; if(isClipped((tileX + 1) * 50 + 25, (tileY) * 50 + 25) == true) pathFindingValues[tileX + 1][tileY] = 2000000000; } //botom left if(tileX - 1 >= 0 && tileY + 1 <= used[0].length) { if(isClipped((tileX - 1) * 50 + 25, (tileY + 1) * 50 + 25) == false && occupied[tileX - 1][tileY + 1] == false && !(used[tileX - 1][tileY + 1])) { pathFindingValues[tileX - 1][tileY + 1] = curVal; botomLeftUsed = true; used[tileX - 1][tileY + 1] = true; } if(occupied[tileX - 1][tileY + 1] == true) pathFindingValues[tileX - 1][tileY + 1] = 1000000000; if(isClipped((tileX - 1) * 50 + 25, (tileY + 1) * 50 + 25) == true) pathFindingValues[tileX - 1][tileY + 1] = 2000000000; } //botom middle if(tileY + 1 <= used[0].length) { if(isClipped((tileX) * 50 + 25, (tileY + 1) * 50 + 25) == false && occupied[tileX][tileY + 1] == false && !(used[tileX][tileY + 1])) { pathFindingValues[tileX][tileY + 1] = curVal; botomUsed = true; used[tileX][tileY + 1] = true; } if(occupied[tileX][tileY + 1] == true) pathFindingValues[tileX][tileY + 1] = 1000000000; if(isClipped((tileX) * 50 + 25, (tileY + 1) * 50 + 25) == true) pathFindingValues[tileX][tileY + 1] = 2000000000; } //botom right if(tileX + 1 <= used.length && tileY + 1 <= used[0].length) { if(isClipped((tileX + 1) * 50 + 25, (tileY + 1) * 50 + 25) == false && occupied[tileX + 1][tileY + 1] == false && !(used[tileX + 1][tileY + 1])) { pathFindingValues[tileX + 1][tileY + 1] = curVal; botomRightUsed = true; used[tileX + 1][tileY + 1] = true; } if(occupied[tileX + 1][tileY + 1] == true) pathFindingValues[tileX + 1][tileY + 1] = 1000000000; if(isClipped((tileX + 1) * 50 + 25, (tileY + 1) * 50 + 25) == true) pathFindingValues[tileX + 1][tileY + 1] = 2000000000; } //call the method on the tiles that need it if(tileX - 1 >= 0 && tileY - 1 >= 0 && topLeftUsed) pathFind(tileX - 1, tileY - 1, curVal, used); if(tileY - 1 >= 0 && topUsed) pathFind(tileX , tileY - 1, curVal, used); if(tileX + 1 <= used.length && tileY - 1 >= 0 && topRightUsed) pathFind(tileX + 1, tileY - 1, curVal, used); if(tileX - 1 >= 0 && leftUsed) pathFind(tileX - 1, tileY, curVal, used); if(tileX + 1 <= used.length && rightUsed) pathFind(tileX + 1, tileY, curVal, used); if(tileX - 1 >= 0 && tileY + 1 <= used[0].length && botomLeftUsed) pathFind(tileX - 1, tileY + 1, curVal, used); if(tileY + 1 <= used[0].length && botomUsed) pathFind(tileX, tileY + 1, curVal, used); if(tileX + 1 <= used.length && tileY + 1 <= used[0].length && botomRightUsed) pathFind(tileX + 1, tileY + 1, curVal, used); }
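
    The linked method is essentially a breadth-first flood fill, and doing it with recursion is the likely culprit: the depth-first call order can stamp a tile with a large value before a shorter route ever reaches it, which matches the too-high numbers outside the first ring of tiles. (Separately, the bounds checks of the form tileX + 1 <= used.length look like they allow an index equal to the array length; < is probably intended.) Below is a hedged C++ sketch of the same fill driven by a queue; the grid layout and the blocked test are stand-ins for isClipped/occupied, not the original code.

      #include <queue>
      #include <vector>

      // Minimal sketch of the tile-distance fill done breadth-first with a queue instead of
      // recursion: every tile is assigned the smallest possible value exactly once.
      void floodFill(int startX, int startY,
                     const std::vector<std::vector<bool>>& blocked,
                     std::vector<std::vector<int>>& dist) {
          const int h = (int)blocked.size(), w = (int)blocked[0].size();
          const int UNSEEN = -1;
          dist.assign(h, std::vector<int>(w, UNSEEN));

          std::queue<std::pair<int, int>> q;
          dist[startY][startX] = 0;
          q.push({startX, startY});

          const int dx[8] = {-1, 0, 1, -1, 1, -1, 0, 1};   // 8 neighbours, as in the code above
          const int dy[8] = {-1, -1, -1, 0, 0, 1, 1, 1};

          while (!q.empty()) {
              auto [x, y] = q.front();
              q.pop();
              for (int d = 0; d < 8; ++d) {
                  int nx = x + dx[d], ny = y + dy[d];
                  if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;    // stay inside the map
                  if (blocked[ny][nx] || dist[ny][nx] != UNSEEN) continue; // clipped or already set
                  dist[ny][nx] = dist[y][x] + 1;
                  q.push({nx, ny});
              }
          }
      }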


  • How to split a string with negative numbers using ActionScript 3.0

    - by inzombiak
    I'm having trouble loading my level. I'm using Ogmo to create my level then I import it. I have no problem converting 0's and 1's into an Array, but I can't figure out how to do the same for -1's. It separates the "-" and the "1". Any help would be great. I've posted my code and the XML files below levelXML = new XML(e.target.data); playerX = int(levelXML.Entities.Player.@x); playerY = int(levelXML.Entities.Player.@y); levelGrid = levelXML.Grid; levelGrid = levelGrid.split("\n").join(""); levelTiles = levelXML.Tiles; levelTiles = levelTiles.split("\n").join(""); levelTiles = levelTiles.split(",").join(""); tileArray = levelTiles.split(""); gridArray = levelGrid.split(""); for(i = 0; i <= 34; i++) { levelArray[i] = new Array(); for(j = 0; j <= 34; j++) { if(tileArray[j*35 + i] == 0) { gridArray[j*35+i] = -1; } var currentSymbol = gridArray[j*35+i]; levelArray[i][j] = currentSymbol; if(gridArray[j*35 + i] == 1) { wall = new Wall; addChild(wall); wall.x = i*20 + 10; wall.y = j*20 + 10; } else if(gridArray[j*35 + i] == -1) { pellet = new Pellet; addChild(pellet); pellet.x = i*20 + 10; pellet.y = j*20 + 10; } } } I know the code is very dirty, but I needed a quick fix. Grid exportMode="Bitstring" 11111111111111111111111111111111111 10000000000000000011000000000000001 10000000000000000011000000000000001 10011111001111110011001111110011001 10011111001111110011001111110011001 10011111001111110011001111110011001 10000000000000000000000000000000001 10000000000000000000000000000000001 10011111001100111111100110011111001 10011111001100000100000110011111001 10000000001100000100000110000000001 10000000001111100100111110000000001 11111111001111100100111110011111111 00000001001111100100111110010000000 00000001001100000000000110010000000 11111111001100000000000110011111111 00000000000000111111100000000000000 00000000000000100000100000000000000 11111111001100100000100110011111111 00000001001100111111100110010000000 00000001001100000000000110010000000 11111111001100111111100110011111111 10000000000000000100000000000000001 10000000000000000100000000000000001 10011111001111100100111110011111001 10000011000000000000000000011000001 10000011000000000000000000011000001 11110011001100111111100110011001111 11110011001100111111100110011001111 10000000001100000100000110000000001 10000000001100000100000110000000001 10011111111111100100111111111111001 10000000000000000000000000000000001 10000000000000000000000000000000001 11111111111111111111111111111111111 Tiles tileset="Tiles" exportMode="CSV"-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1 -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1 -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1 -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1 -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1 -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1 -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1 -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1 -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1 
-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1 -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1 -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1 -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1 -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1 -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1 -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1 -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,0,-1,-1,-1,-1,-1,-1,-1,-1 -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,0,-1,-1,-1,-1,-1,-1,-1,-1 -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,0,-1,-1,-1,-1,-1,-1,-1,-1 -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,0,-1,-1,-1,-1,-1,-1,-1,-1 -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,0,-1,-1,-1,-1,-1,-1,-1,-1 -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,0,-1,-1,-1,-1,-1,-1,-1,-1 -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,0,-1,-1,-1,-1,-1,-1,-1,-1 -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,0,-1,-1,-1,-1,-1,-1,-1,-1 -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,0,-1,-1,-1,-1,-1,-1,-1,-1 -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,0,-1,-1,-1,-1,-1,-1,-1,-1 -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,0,-1,-1,-1,-1,-1,-1,-1,-1 -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,0,-1,-1,-1,-1,-1,-1,-1,-1 -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,0,-1,-1,-1,-1,-1,-1,-1,-1 -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,0,0,0,0,0,0,0,0,-1 -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,0,-1 -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,0,-1 -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,0,-1 -1,-1,-1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-1 -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1
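
    The "-" gets separated from the "1" because the commas are removed and the string is then split into single characters with split(""). A hedged sketch of the alternative: split each row on "," and convert each token to an integer, so a token like "-1" keeps its sign. The sketch is C++ for illustration only; the ActionScript equivalent would be split(",") per row followed by parseInt (or int()) on each element.

      #include <iostream>
      #include <sstream>
      #include <string>
      #include <vector>

      // Split the Tiles CSV on newlines (rows) and commas (cells), then parse each token,
      // instead of stripping the commas and splitting into single characters.
      std::vector<std::vector<int>> parseTiles(const std::string& csv) {
          std::vector<std::vector<int>> rows;
          std::istringstream lines(csv);
          std::string line;
          while (std::getline(lines, line)) {
              std::vector<int> row;
              std::istringstream cells(line);
              std::string cell;
              while (std::getline(cells, cell, ','))
                  row.push_back(std::stoi(cell));   // "-1" stays one token, so the sign survives
              rows.push_back(row);
          }
          return rows;
      }

      int main() {
          auto tiles = parseTiles("-1,-1,0\n0,-1,1\n");
          std::cout << tiles[0][0] << " " << tiles[1][2] << "\n";  // prints: -1 1
          return 0;
      }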


  • A* PathFinding Poor Performance

    - by RedShft
    After debugging for a few hours, the algorithm seems to be working. Right now to check if it works i'm checking the end node position to the currentNode position when the while loop quits. So far the values look correct. The problem is, the farther I get from the NPC, who is current stationary, the worse the performance gets. It gets to a point where the game is unplayable less than 10 fps. My current PathGraph is 2500 nodes, which I believe is pretty small, right? Any ideas on how to improve performance? struct Node { bool walkable; //Whether this node is blocked or open vect2 position; //The tile's position on the map in pixels int xIndex, yIndex; //The index values of the tile in the array Node*[4] connections; //An array of pointers to nodes this current node connects to Node* parent; int gScore; int hScore; int fScore; } class AStar { private: SList!Node openList; SList!Node closedList; //Node*[4] connections; //The connections of the current node; Node currentNode; //The current node being processed Node[] Path; //The path found; const int connectionCost = 10; Node start, end; ////////////////////////////////////////////////////////// void AddToList(ref SList!Node list, ref Node node ) { list.insert( node ); } void RemoveFrom(ref SList!Node list, ref Node node ) { foreach( elem; list ) { if( node.xIndex == elem.xIndex && node.yIndex == elem.yIndex ) { auto a = find( list[] , elem ); list.linearRemove( take(a, 1 ) ); } } } bool IsInList( SList!Node list, ref Node node ) { foreach( elem; list ) { if( node.xIndex == elem.xIndex && node.yIndex == elem.yIndex ) return true; } return false; } void ClearList( SList!Node list ) { list.clear; } void SetParentNode( ref Node parent, ref Node child ) { child.parent = &parent; } void SetStartAndEndNode( vect2 vStart, vect2 vEnd, Node[] PathGraph ) { int startXIndex, startYIndex; int endXIndex, endYIndex; startXIndex = cast(int)( vStart.x / 32 ); startYIndex = cast(int)( vStart.y / 32 ); endXIndex = cast(int)( vEnd.x / 32 ); endYIndex = cast(int)( vEnd.y / 32 ); foreach( node; PathGraph ) { if( node.xIndex == startXIndex && node.yIndex == startYIndex ) { start = node; } if( node.xIndex == endXIndex && node.yIndex == endYIndex ) { end = node; } } } void SetStartScores( ref Node start ) { start.gScore = 0; start.hScore = CalculateHScore( start, end ); start.fScore = CalculateFScore( start ); } Node GetLowestFScore() { Node lowest; lowest.fScore = 10000; foreach( elem; openList ) { if( elem.fScore < lowest.fScore ) lowest = elem; } return lowest; } //This function current sets the program into an infinite loop //I still need to debug to figure out why the parent nodes aren't correct void GeneratePath() { while( currentNode.position != start.position ) { Path ~= currentNode; currentNode = *currentNode.parent; } } void ReversePath() { Node[] temp; for(int i = Path.length - 1; i >= 0; i-- ) { temp ~= Path[i]; } Path = temp.dup; } public: //@FIXME It seems to find the path, but now performance is terrible void FindPath( vect2 vStart, vect2 vEnd, Node[] PathGraph ) { openList.clear; closedList.clear; SetStartAndEndNode( vStart, vEnd, PathGraph ); SetStartScores( start ); AddToList( openList, start ); while( currentNode.position != end.position ) { currentNode = GetLowestFScore(); if( currentNode.position == end.position ) break; else { RemoveFrom( openList, currentNode ); AddToList( closedList, currentNode ); for( int i = 0; i < currentNode.connections.length; i++ ) { if( currentNode.connections[i] is null ) continue; else { if( IsInList( closedList, 
*currentNode.connections[i] ) && currentNode.gScore < currentNode.connections[i].gScore ) { currentNode.connections[i].gScore = currentNode.gScore + connectionCost; currentNode.connections[i].hScore = abs( currentNode.connections[i].xIndex - end.xIndex ) + abs( currentNode.connections[i].yIndex - end.yIndex ); currentNode.connections[i].fScore = currentNode.connections[i].gScore + currentNode.connections[i].hScore; currentNode.connections[i].parent = &currentNode; } else if( IsInList( openList, *currentNode.connections[i] ) && currentNode.gScore < currentNode.connections[i].gScore ) { currentNode.connections[i].gScore = currentNode.gScore + connectionCost; currentNode.connections[i].hScore = abs( currentNode.connections[i].xIndex - end.xIndex ) + abs( currentNode.connections[i].yIndex - end.yIndex ); currentNode.connections[i].fScore = currentNode.connections[i].gScore + currentNode.connections[i].hScore; currentNode.connections[i].parent = &currentNode; } else { currentNode.connections[i].gScore = currentNode.gScore + connectionCost; currentNode.connections[i].hScore = abs( currentNode.connections[i].xIndex - end.xIndex ) + abs( currentNode.connections[i].yIndex - end.yIndex ); currentNode.connections[i].fScore = currentNode.connections[i].gScore + currentNode.connections[i].hScore; currentNode.connections[i].parent = &currentNode; AddToList( openList, *currentNode.connections[i] ); } } } } } writeln( "Current Node Position: ", currentNode.position ); writeln( "End Node Position: ", end.position ); if( currentNode.position == end.position ) { writeln( "Current Node Parent: ", currentNode.parent ); //GeneratePath(); //ReversePath(); } } Node[] GetPath() { return Path; } } This is my first attempt at A* so any help would be greatly appreciated.
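
    Most of the time here is probably going into the SList scans: GetLowestFScore walks the whole open list, and IsInList/RemoveFrom walk a list for every neighbour, so the cost grows rapidly with distance from the NPC. Below is a hedged C++ outline of the usual remedy: a binary heap for the open list and a hash set for the closed set. It is not a drop-in replacement for the D code (it only answers reachability and skips the parent/path bookkeeping); the 4-connected grid and the cost of 10 are taken from the question.

      #include <cmath>
      #include <cstdint>
      #include <queue>
      #include <unordered_set>
      #include <vector>

      struct Tile { bool walkable; };
      struct Entry { int x, y, g, f; };
      struct ByF { bool operator()(const Entry& a, const Entry& b) const { return a.f > b.f; } };

      static std::uint64_t key(int x, int y) {
          return (std::uint64_t(std::uint32_t(x)) << 32) | std::uint32_t(y);
      }

      bool reachable(int sx, int sy, int ex, int ey, const std::vector<std::vector<Tile>>& grid) {
          const int h = (int)grid.size(), w = (int)grid[0].size();
          const int cost = 10;                                   // connectionCost from the question
          auto heuristic = [&](int x, int y) { return std::abs(x - ex) + std::abs(y - ey); };

          std::priority_queue<Entry, std::vector<Entry>, ByF> open;   // O(log n) push/pop
          std::unordered_set<std::uint64_t> closed;                   // O(1) membership test
          open.push({sx, sy, 0, heuristic(sx, sy)});

          const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};     // 4 connections, as in Node
          while (!open.empty()) {
              Entry cur = open.top();
              open.pop();
              if (cur.x == ex && cur.y == ey) return true;
              if (!closed.insert(key(cur.x, cur.y)).second) continue; // already expanded
              for (int d = 0; d < 4; ++d) {
                  int nx = cur.x + dx[d], ny = cur.y + dy[d];
                  if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                  if (!grid[ny][nx].walkable || closed.count(key(nx, ny))) continue;
                  int g = cur.g + cost;
                  open.push({nx, ny, g, g + heuristic(nx, ny)});
              }
          }
          return false;
      }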


  • LVM / Device Mapper maps wrong device

    - by DaDaDom
    Hi, I run a LVM setup on a raid1 created by mdadm. md2 is based on sda6 (major:minor 8:6) and sdb6 (8:22). md2 is partition 9:2. The VG on top of md2 has 4 LVs, var, home, usr, tmp. First the problem: While booting it seems as if the device mapper takes the wrong partition for the mapping! Immediately after boot the information is like ~# dmsetup table systemlvm-home: 0 4194304 linear 8:22 384 systemlvm-home: 4194304 16777216 linear 8:22 69206400 systemlvm-home: 20971520 8388608 linear 8:22 119538048 systemlvm-home: 29360128 6291456 linear 8:22 243270016 systemlvm-tmp: 0 2097152 linear 8:22 41943424 systemlvm-usr: 0 10485760 linear 8:22 20971904 systemlvm-var: 0 10485760 linear 8:22 10486144 systemlvm-var: 10485760 6291456 linear 8:22 4194688 systemlvm-var: 16777216 4194304 linear 8:22 44040576 systemlvm-var: 20971520 10485760 linear 8:22 31457664 systemlvm-var: 31457280 20971520 linear 8:22 48234880 systemlvm-var: 52428800 33554432 linear 8:22 85983616 systemlvm-var: 85983232 115343360 linear 8:22 127926656 ~# cat /proc/mdstat Personalities : [raid1] md2 : active (auto-read-only) raid1 sda6[0] 151798080 blocks [2/1] [U_] md0 : active raid1 sda1[0] sdb1[1] 96256 blocks [2/2] [UU] md1 : active raid1 sda2[0] sdb2[1] 2931776 blocks [2/2] [UU] I have to manually "lvchange -an" all LVs, add /dev/sdb6 back to the raid and reactivate the LVs, then all is fine. But it prevents me from automounting the partitions and obviously leads to a bunch of other problems. If everything works fine, the information is like ~$ cat /proc/mdstat Personalities : [raid1] md2 : active raid1 sdb6[1] sda6[0] 151798080 blocks [2/2] [UU] ... ~# dmsetup table systemlvm-home: 0 4194304 linear 9:2 384 systemlvm-home: 4194304 16777216 linear 9:2 69206400 systemlvm-home: 20971520 8388608 linear 9:2 119538048 systemlvm-home: 29360128 6291456 linear 9:2 243270016 systemlvm-tmp: 0 2097152 linear 9:2 41943424 systemlvm-usr: 0 10485760 linear 9:2 20971904 systemlvm-var: 0 10485760 linear 9:2 10486144 systemlvm-var: 10485760 6291456 linear 9:2 4194688 systemlvm-var: 16777216 4194304 linear 9:2 44040576 systemlvm-var: 20971520 10485760 linear 9:2 31457664 systemlvm-var: 31457280 20971520 linear 9:2 48234880 systemlvm-var: 52428800 33554432 linear 9:2 85983616 systemlvm-var: 85983232 115343360 linear 9:2 127926656 I think that LVM for some reason just "takes" /dev/sdb6 which is then missing in the raid. I tried almost all options in the lvm.conf but none seems to work. Below is some more information, like config files. Does anyone have any idea about what is going on here and how to prevent that? If you need any additional information, please let me know Thanks in advance! 
Dominik The information (off a "repaired" system): ~# cat /etc/debian_version 5.0.4 ~# uname -a Linux kermit 2.6.26-2-686 #1 SMP Wed Feb 10 08:59:21 UTC 2010 i686 GNU/Linux ~# lvm version LVM version: 2.02.39 (2008-06-27) Library version: 1.02.27 (2008-06-25) Driver version: 4.13.0 ~# cat /etc/mdadm/mdadm.conf DEVICE partitions ARRAY /dev/md1 level=raid1 num-devices=2 metadata=00.90 UUID=11e9dc6c:1da99f3f:b3088ca6:c6fe60e9 ARRAY /dev/md0 level=raid1 num-devices=2 metadata=00.90 UUID=92ed1e4b:897361d3:070682b3:3baa4fa1 ARRAY /dev/md2 level=raid1 num-devices=2 metadata=00.90 UUID=601d4642:39dc80d7:96e8bbac:649924ba ~# mount /dev/md1 on / type ext3 (rw,errors=remount-ro) tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755) proc on /proc type proc (rw,noexec,nosuid,nodev) sysfs on /sys type sysfs (rw,noexec,nosuid,nodev) procbususb on /proc/bus/usb type usbfs (rw) udev on /dev type tmpfs (rw,mode=0755) tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev) devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620) /dev/md0 on /boot type ext3 (rw) /dev/mapper/systemlvm-usr on /usr type reiserfs (rw) /dev/mapper/systemlvm-tmp on /tmp type reiserfs (rw) /dev/mapper/systemlvm-home on /home type reiserfs (rw) /dev/mapper/systemlvm-var on /var type reiserfs (rw) ~# grep -v ^$ /etc/lvm/lvm.conf | grep -v "#" devices { dir = "/dev" scan = [ "/dev" ] preferred_names = [ ] filter = [ "a|/dev/md.*|", "r/.*/" ] cache_dir = "/etc/lvm/cache" cache_file_prefix = "" write_cache_state = 1 sysfs_scan = 1 md_component_detection = 1 ignore_suspended_devices = 0 } log { verbose = 0 syslog = 1 overwrite = 0 level = 0 indent = 1 command_names = 0 prefix = " " } backup { backup = 1 backup_dir = "/etc/lvm/backup" archive = 1 archive_dir = "/etc/lvm/archive" retain_min = 10 retain_days = 30 } shell { history_size = 100 } global { umask = 077 test = 0 units = "h" activation = 1 proc = "/proc" locking_type = 1 fallback_to_clustered_locking = 1 fallback_to_local_locking = 1 locking_dir = "/lib/init/rw" } activation { missing_stripe_filler = "/dev/ioerror" reserved_stack = 256 reserved_memory = 8192 process_priority = -18 mirror_region_size = 512 readahead = "auto" mirror_log_fault_policy = "allocate" mirror_device_fault_policy = "remove" } :~# vgscan -vvv Processing: vgscan -vvv O_DIRECT will be used Setting global/locking_type to 1 File-based locking selected. 
Setting global/locking_dir to /lib/init/rw Locking /lib/init/rw/P_global WB Wiping cache of LVM-capable devices /dev/block/1:0: Added to device cache /dev/block/1:1: Added to device cache /dev/block/1:10: Added to device cache /dev/block/1:11: Added to device cache /dev/block/1:12: Added to device cache /dev/block/1:13: Added to device cache /dev/block/1:14: Added to device cache /dev/block/1:15: Added to device cache /dev/block/1:2: Added to device cache /dev/block/1:3: Added to device cache /dev/block/1:4: Added to device cache /dev/block/1:5: Added to device cache /dev/block/1:6: Added to device cache /dev/block/1:7: Added to device cache /dev/block/1:8: Added to device cache /dev/block/1:9: Added to device cache /dev/block/253:0: Added to device cache /dev/block/253:1: Added to device cache /dev/block/253:2: Added to device cache /dev/block/253:3: Added to device cache /dev/block/8:0: Added to device cache /dev/block/8:1: Added to device cache /dev/block/8:16: Added to device cache /dev/block/8:17: Added to device cache /dev/block/8:18: Added to device cache /dev/block/8:19: Added to device cache /dev/block/8:2: Added to device cache /dev/block/8:21: Added to device cache /dev/block/8:22: Added to device cache /dev/block/8:3: Added to device cache /dev/block/8:5: Added to device cache /dev/block/8:6: Added to device cache /dev/block/9:0: Already in device cache /dev/block/9:1: Already in device cache /dev/block/9:2: Already in device cache /dev/bsg/0:0:0:0: Not a block device /dev/bsg/1:0:0:0: Not a block device /dev/bus/usb/001/001: Not a block device [... many more "not a block device"] /dev/core: Not a block device /dev/cpu_dma_latency: Not a block device /dev/disk/by-id/ata-SAMSUNG_HD160JJ_S08HJ10L507895: Aliased to /dev/block/8:16 in device cache /dev/disk/by-id/ata-SAMSUNG_HD160JJ_S08HJ10L507895-part1: Aliased to /dev/block/8:17 in device cache /dev/disk/by-id/ata-SAMSUNG_HD160JJ_S08HJ10L507895-part2: Aliased to /dev/block/8:18 in device cache /dev/disk/by-id/ata-SAMSUNG_HD160JJ_S08HJ10L507895-part3: Aliased to /dev/block/8:19 in device cache /dev/disk/by-id/ata-SAMSUNG_HD160JJ_S08HJ10L507895-part5: Aliased to /dev/block/8:21 in device cache /dev/disk/by-id/ata-SAMSUNG_HD160JJ_S08HJ10L507895-part6: Aliased to /dev/block/8:22 in device cache /dev/disk/by-id/ata-SAMSUNG_HD160JJ_S08HJ10L526800: Aliased to /dev/block/8:0 in device cache /dev/disk/by-id/ata-SAMSUNG_HD160JJ_S08HJ10L526800-part1: Aliased to /dev/block/8:1 in device cache /dev/disk/by-id/ata-SAMSUNG_HD160JJ_S08HJ10L526800-part2: Aliased to /dev/block/8:2 in device cache /dev/disk/by-id/ata-SAMSUNG_HD160JJ_S08HJ10L526800-part3: Aliased to /dev/block/8:3 in device cache /dev/disk/by-id/ata-SAMSUNG_HD160JJ_S08HJ10L526800-part5: Aliased to /dev/block/8:5 in device cache /dev/disk/by-id/ata-SAMSUNG_HD160JJ_S08HJ10L526800-part6: Aliased to /dev/block/8:6 in device cache /dev/disk/by-id/dm-name-systemlvm-home: Aliased to /dev/block/253:2 in device cache /dev/disk/by-id/dm-name-systemlvm-tmp: Aliased to /dev/block/253:3 in device cache /dev/disk/by-id/dm-name-systemlvm-usr: Aliased to /dev/block/253:1 in device cache /dev/disk/by-id/dm-name-systemlvm-var: Aliased to /dev/block/253:0 in device cache /dev/disk/by-id/dm-uuid-LVM-rL8Oq2dA7oeRYeu1orJA7Ufnb1kjOyvr25N7CRZpUMzR18NfS6zeSeAVnVT98LuU: Aliased to /dev/block/253:0 in device cache /dev/disk/by-id/dm-uuid-LVM-rL8Oq2dA7oeRYeu1orJA7Ufnb1kjOyvr3TpFXtLjYGEwn79IdXsSCZPl8AxmqbmQ: Aliased to /dev/block/253:1 in device cache 
/dev/disk/by-id/dm-uuid-LVM-rL8Oq2dA7oeRYeu1orJA7Ufnb1kjOyvrc5MJ4KolevMjt85PPBrQuRTkXbx6NvTi: Aliased to /dev/block/253:3 in device cache /dev/disk/by-id/dm-uuid-LVM-rL8Oq2dA7oeRYeu1orJA7Ufnb1kjOyvrYXrfdg5OSYDVkNeiQeQksgCI849Z2hx8: Aliased to /dev/block/253:2 in device cache /dev/disk/by-id/md-uuid-11e9dc6c:1da99f3f:b3088ca6:c6fe60e9: Already in device cache /dev/disk/by-id/md-uuid-601d4642:39dc80d7:96e8bbac:649924ba: Already in device cache /dev/disk/by-id/md-uuid-92ed1e4b:897361d3:070682b3:3baa4fa1: Already in device cache /dev/disk/by-id/scsi-SATA_SAMSUNG_HD160JJS08HJ10L507895: Aliased to /dev/block/8:16 in device cache /dev/disk/by-id/scsi-SATA_SAMSUNG_HD160JJS08HJ10L507895-part1: Aliased to /dev/block/8:17 in device cache /dev/disk/by-id/scsi-SATA_SAMSUNG_HD160JJS08HJ10L507895-part2: Aliased to /dev/block/8:18 in device cache /dev/disk/by-id/scsi-SATA_SAMSUNG_HD160JJS08HJ10L507895-part3: Aliased to /dev/block/8:19 in device cache /dev/disk/by-id/scsi-SATA_SAMSUNG_HD160JJS08HJ10L507895-part5: Aliased to /dev/block/8:21 in device cache /dev/disk/by-id/scsi-SATA_SAMSUNG_HD160JJS08HJ10L507895-part6: Aliased to /dev/block/8:22 in device cache /dev/disk/by-id/scsi-SATA_SAMSUNG_HD160JJS08HJ10L526800: Aliased to /dev/block/8:0 in device cache /dev/disk/by-id/scsi-SATA_SAMSUNG_HD160JJS08HJ10L526800-part1: Aliased to /dev/block/8:1 in device cache /dev/disk/by-id/scsi-SATA_SAMSUNG_HD160JJS08HJ10L526800-part2: Aliased to /dev/block/8:2 in device cache /dev/disk/by-id/scsi-SATA_SAMSUNG_HD160JJS08HJ10L526800-part3: Aliased to /dev/block/8:3 in device cache /dev/disk/by-id/scsi-SATA_SAMSUNG_HD160JJS08HJ10L526800-part5: Aliased to /dev/block/8:5 in device cache /dev/disk/by-id/scsi-SATA_SAMSUNG_HD160JJS08HJ10L526800-part6: Aliased to /dev/block/8:6 in device cache /dev/disk/by-path/pci-0000:00:0f.0-scsi-0:0:0:0: Aliased to /dev/block/8:0 in device cache /dev/disk/by-path/pci-0000:00:0f.0-scsi-0:0:0:0-part1: Aliased to /dev/block/8:1 in device cache /dev/disk/by-path/pci-0000:00:0f.0-scsi-0:0:0:0-part2: Aliased to /dev/block/8:2 in device cache /dev/disk/by-path/pci-0000:00:0f.0-scsi-0:0:0:0-part3: Aliased to /dev/block/8:3 in device cache /dev/disk/by-path/pci-0000:00:0f.0-scsi-0:0:0:0-part5: Aliased to /dev/block/8:5 in device cache /dev/disk/by-path/pci-0000:00:0f.0-scsi-0:0:0:0-part6: Aliased to /dev/block/8:6 in device cache /dev/disk/by-path/pci-0000:00:0f.0-scsi-1:0:0:0: Aliased to /dev/block/8:16 in device cache /dev/disk/by-path/pci-0000:00:0f.0-scsi-1:0:0:0-part1: Aliased to /dev/block/8:17 in device cache /dev/disk/by-path/pci-0000:00:0f.0-scsi-1:0:0:0-part2: Aliased to /dev/block/8:18 in device cache /dev/disk/by-path/pci-0000:00:0f.0-scsi-1:0:0:0-part3: Aliased to /dev/block/8:19 in device cache /dev/disk/by-path/pci-0000:00:0f.0-scsi-1:0:0:0-part5: Aliased to /dev/block/8:21 in device cache /dev/disk/by-path/pci-0000:00:0f.0-scsi-1:0:0:0-part6: Aliased to /dev/block/8:22 in device cache /dev/disk/by-uuid/13c1262b-e06f-40ce-b088-ce410640a6dc: Aliased to /dev/block/253:3 in device cache /dev/disk/by-uuid/379f57b0-2e03-414c-808a-f76160617336: Aliased to /dev/block/253:2 in device cache /dev/disk/by-uuid/4fb2d6d3-bd51-48d3-95ee-8e404faf243d: Already in device cache /dev/disk/by-uuid/5c6728ec-82c1-49c0-93c5-f6dbd5c0d659: Aliased to /dev/block/8:5 in device cache /dev/disk/by-uuid/a13cdfcd-2191-4185-a727-ffefaf7a382e: Aliased to /dev/block/253:1 in device cache /dev/disk/by-uuid/e0d5893d-ff88-412f-b753-9e3e9af3242d: Aliased to /dev/block/8:21 in device cache 
/dev/disk/by-uuid/e79c9da6-8533-4e55-93ec-208876671edc: Aliased to /dev/block/253:0 in device cache /dev/disk/by-uuid/f3f176f5-12f7-4af8-952a-c6ac43a6e332: Already in device cache /dev/dm-0: Aliased to /dev/block/253:0 in device cache (preferred name) /dev/dm-1: Aliased to /dev/block/253:1 in device cache (preferred name) /dev/dm-2: Aliased to /dev/block/253:2 in device cache (preferred name) /dev/dm-3: Aliased to /dev/block/253:3 in device cache (preferred name) /dev/fd: Symbolic link to directory /dev/full: Not a block device /dev/hpet: Not a block device /dev/initctl: Not a block device /dev/input/by-path/platform-i8042-serio-0-event-kbd: Not a block device /dev/input/event0: Not a block device /dev/input/mice: Not a block device /dev/kmem: Not a block device /dev/kmsg: Not a block device /dev/log: Not a block device /dev/loop/0: Added to device cache /dev/MAKEDEV: Not a block device /dev/mapper/control: Not a block device /dev/mapper/systemlvm-home: Aliased to /dev/dm-2 in device cache /dev/mapper/systemlvm-tmp: Aliased to /dev/dm-3 in device cache /dev/mapper/systemlvm-usr: Aliased to /dev/dm-1 in device cache /dev/mapper/systemlvm-var: Aliased to /dev/dm-0 in device cache /dev/md0: Already in device cache /dev/md1: Already in device cache /dev/md2: Already in device cache /dev/mem: Not a block device /dev/net/tun: Not a block device /dev/network_latency: Not a block device /dev/network_throughput: Not a block device /dev/null: Not a block device /dev/port: Not a block device /dev/ppp: Not a block device /dev/psaux: Not a block device /dev/ptmx: Not a block device /dev/pts/0: Not a block device /dev/ram0: Aliased to /dev/block/1:0 in device cache (preferred name) /dev/ram1: Aliased to /dev/block/1:1 in device cache (preferred name) /dev/ram10: Aliased to /dev/block/1:10 in device cache (preferred name) /dev/ram11: Aliased to /dev/block/1:11 in device cache (preferred name) /dev/ram12: Aliased to /dev/block/1:12 in device cache (preferred name) /dev/ram13: Aliased to /dev/block/1:13 in device cache (preferred name) /dev/ram14: Aliased to /dev/block/1:14 in device cache (preferred name) /dev/ram15: Aliased to /dev/block/1:15 in device cache (preferred name) /dev/ram2: Aliased to /dev/block/1:2 in device cache (preferred name) /dev/ram3: Aliased to /dev/block/1:3 in device cache (preferred name) /dev/ram4: Aliased to /dev/block/1:4 in device cache (preferred name) /dev/ram5: Aliased to /dev/block/1:5 in device cache (preferred name) /dev/ram6: Aliased to /dev/block/1:6 in device cache (preferred name) /dev/ram7: Aliased to /dev/block/1:7 in device cache (preferred name) /dev/ram8: Aliased to /dev/block/1:8 in device cache (preferred name) /dev/ram9: Aliased to /dev/block/1:9 in device cache (preferred name) /dev/random: Not a block device /dev/root: Already in device cache /dev/rtc: Not a block device /dev/rtc0: Not a block device /dev/sda: Aliased to /dev/block/8:0 in device cache (preferred name) /dev/sda1: Aliased to /dev/block/8:1 in device cache (preferred name) /dev/sda2: Aliased to /dev/block/8:2 in device cache (preferred name) /dev/sda3: Aliased to /dev/block/8:3 in device cache (preferred name) /dev/sda5: Aliased to /dev/block/8:5 in device cache (preferred name) /dev/sda6: Aliased to /dev/block/8:6 in device cache (preferred name) /dev/sdb: Aliased to /dev/block/8:16 in device cache (preferred name) /dev/sdb1: Aliased to /dev/block/8:17 in device cache (preferred name) /dev/sdb2: Aliased to /dev/block/8:18 in device cache (preferred name) /dev/sdb3: Aliased to /dev/block/8:19 
in device cache (preferred name) /dev/sdb5: Aliased to /dev/block/8:21 in device cache (preferred name) /dev/sdb6: Aliased to /dev/block/8:22 in device cache (preferred name) /dev/shm/network/ifstate: Not a block device /dev/snapshot: Not a block device /dev/sndstat: stat failed: Datei oder Verzeichnis nicht gefunden /dev/stderr: Not a block device /dev/stdin: Not a block device /dev/stdout: Not a block device /dev/systemlvm/home: Aliased to /dev/dm-2 in device cache /dev/systemlvm/tmp: Aliased to /dev/dm-3 in device cache /dev/systemlvm/usr: Aliased to /dev/dm-1 in device cache /dev/systemlvm/var: Aliased to /dev/dm-0 in device cache /dev/tty: Not a block device /dev/tty0: Not a block device [... many more "not a block device"] /dev/vcsa6: Not a block device /dev/xconsole: Not a block device /dev/zero: Not a block device Wiping internal VG cache lvmcache: initialised VG #orphans_lvm1 lvmcache: initialised VG #orphans_pool lvmcache: initialised VG #orphans_lvm2 Reading all physical volumes. This may take a while... Finding all volume groups /dev/ram0: Skipping (regex) /dev/loop/0: Skipping (sysfs) /dev/sda: Skipping (regex) Opened /dev/md0 RO /dev/md0: size is 192512 sectors Closed /dev/md0 /dev/md0: size is 192512 sectors Opened /dev/md0 RW O_DIRECT /dev/md0: block size is 1024 bytes Closed /dev/md0 Using /dev/md0 Opened /dev/md0 RW O_DIRECT /dev/md0: block size is 1024 bytes /dev/md0: No label detected Closed /dev/md0 /dev/dm-0: Skipping (regex) /dev/ram1: Skipping (regex) /dev/sda1: Skipping (regex) Opened /dev/md1 RO /dev/md1: size is 5863552 sectors Closed /dev/md1 /dev/md1: size is 5863552 sectors Opened /dev/md1 RW O_DIRECT /dev/md1: block size is 4096 bytes Closed /dev/md1 Using /dev/md1 Opened /dev/md1 RW O_DIRECT /dev/md1: block size is 4096 bytes /dev/md1: No label detected Closed /dev/md1 /dev/dm-1: Skipping (regex) /dev/ram2: Skipping (regex) /dev/sda2: Skipping (regex) Opened /dev/md2 RO /dev/md2: size is 303596160 sectors Closed /dev/md2 /dev/md2: size is 303596160 sectors Opened /dev/md2 RW O_DIRECT /dev/md2: block size is 4096 bytes Closed /dev/md2 Using /dev/md2 Opened /dev/md2 RW O_DIRECT /dev/md2: block size is 4096 bytes /dev/md2: lvm2 label detected lvmcache: /dev/md2: now in VG #orphans_lvm2 (#orphans_lvm2) /dev/md2: Found metadata at 39936 size 2632 (in area at 2048 size 194560) for systemlvm (rL8Oq2-dA7o-eRYe-u1or-JA7U-fnb1-kjOyvr) lvmcache: /dev/md2: now in VG systemlvm with 1 mdas lvmcache: /dev/md2: setting systemlvm VGID to rL8Oq2dA7oeRYeu1orJA7Ufnb1kjOyvr lvmcache: /dev/md2: VG systemlvm: Set creation host to rescue. 
Closed /dev/md2 /dev/dm-2: Skipping (regex) /dev/ram3: Skipping (regex) /dev/sda3: Skipping (regex) /dev/dm-3: Skipping (regex) /dev/ram4: Skipping (regex) /dev/ram5: Skipping (regex) /dev/sda5: Skipping (regex) /dev/ram6: Skipping (regex) /dev/sda6: Skipping (regex) /dev/ram7: Skipping (regex) /dev/ram8: Skipping (regex) /dev/ram9: Skipping (regex) /dev/ram10: Skipping (regex) /dev/ram11: Skipping (regex) /dev/ram12: Skipping (regex) /dev/ram13: Skipping (regex) /dev/ram14: Skipping (regex) /dev/ram15: Skipping (regex) /dev/sdb: Skipping (regex) /dev/sdb1: Skipping (regex) /dev/sdb2: Skipping (regex) /dev/sdb3: Skipping (regex) /dev/sdb5: Skipping (regex) /dev/sdb6: Skipping (regex) Locking /lib/init/rw/V_systemlvm RB Finding volume group "systemlvm" Opened /dev/md2 RW O_DIRECT /dev/md2: block size is 4096 bytes /dev/md2: lvm2 label detected lvmcache: /dev/md2: now in VG #orphans_lvm2 (#orphans_lvm2) with 1 mdas /dev/md2: Found metadata at 39936 size 2632 (in area at 2048 size 194560) for systemlvm (rL8Oq2-dA7o-eRYe-u1or-JA7U-fnb1-kjOyvr) lvmcache: /dev/md2: now in VG systemlvm with 1 mdas lvmcache: /dev/md2: setting systemlvm VGID to rL8Oq2dA7oeRYeu1orJA7Ufnb1kjOyvr lvmcache: /dev/md2: VG systemlvm: Set creation host to rescue. Using cached label for /dev/md2 Read systemlvm metadata (19) from /dev/md2 at 39936 size 2632 /dev/md2 0: 0 16: home(0:0) /dev/md2 1: 16 24: var(40:0) /dev/md2 2: 40 40: var(0:0) /dev/md2 3: 80 40: usr(0:0) /dev/md2 4: 120 40: var(80:0) /dev/md2 5: 160 8: tmp(0:0) /dev/md2 6: 168 16: var(64:0) /dev/md2 7: 184 80: var(120:0) /dev/md2 8: 264 64: home(16:0) /dev/md2 9: 328 128: var(200:0) /dev/md2 10: 456 32: home(80:0) /dev/md2 11: 488 440: var(328:0) /dev/md2 12: 928 24: home(112:0) /dev/md2 13: 952 206: NULL(0:0) Found volume group "systemlvm" using metadata type lvm2 Read volume group systemlvm from /etc/lvm/backup/systemlvm Unlocking /lib/init/rw/V_systemlvm Closed /dev/md2 Unlocking /lib/init/rw/P_global ~# vgdisplay --- Volume group --- VG Name systemlvm System ID Format lvm2 Metadata Areas 1 Metadata Sequence No 19 VG Access read/write VG Status resizable MAX LV 0 Cur LV 4 Open LV 4 Max PV 0 Cur PV 1 Act PV 1 VG Size 144,75 GB PE Size 128,00 MB Total PE 1158 Alloc PE / Size 952 / 119,00 GB Free PE / Size 206 / 25,75 GB VG UUID rL8Oq2-dA7o-eRYe-u1or-JA7U-fnb1-kjOyvr ~# pvdisplay --- Physical volume --- PV Name /dev/md2 VG Name systemlvm PV Size 144,77 GB / not usable 16,31 MB Allocatable yes PE Size (KByte) 131072 Total PE 1158 Free PE 206 Allocated PE 952 PV UUID ZSAzP5-iBvr-L7jy-wB8T-AiWz-0g3m-HLK66Y :~# lvdisplay --- Logical volume --- LV Name /dev/systemlvm/home VG Name systemlvm LV UUID YXrfdg-5OSY-DVkN-eiQe-Qksg-CI84-9Z2hx8 LV Write Access read/write LV Status available # open 2 LV Size 17,00 GB Current LE 136 Segments 4 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 253:2 --- Logical volume --- LV Name /dev/systemlvm/var VG Name systemlvm LV UUID 25N7CR-ZpUM-zR18-NfS6-zeSe-AVnV-T98LuU LV Write Access read/write LV Status available # open 2 LV Size 96,00 GB Current LE 768 Segments 7 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 253:0 --- Logical volume --- LV Name /dev/systemlvm/usr VG Name systemlvm LV UUID 3TpFXt-LjYG-Ewn7-9IdX-sSCZ-Pl8A-xmqbmQ LV Write Access read/write LV Status available # open 2 LV Size 5,00 GB Current LE 40 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 253:1 --- Logical volume --- LV Name /dev/systemlvm/tmp VG Name 
systemlvm LV UUID c5MJ4K-olev-Mjt8-5PPB-rQuR-TkXb-x6NvTi LV Write Access read/write LV Status available # open 2 LV Size 1,00 GB Current LE 8 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 253:3

    Read the article

  • Parsing concatenated, non-delimited XML messages from TCP-stream using C#

    - by thaller
    I am trying to parse XML messages which are sent to my C# application over TCP. Unfortunately, the protocol cannot be changed, the XML messages are not delimited, and no length prefix is used. Moreover, the character encoding is not fixed, but each message starts with an XML declaration <?xml>. The question is: how can I read one XML message at a time, using C#? Up to now, I have tried to read the data from the TCP stream into a byte array and use it through a MemoryStream. The problem is that the buffer might contain more than one XML message, or the first message may be incomplete. In these cases, I get an exception when trying to parse it with XmlReader.Read or XmlDocument.Load, but unfortunately the XmlException does not really allow me to distinguish the problem (except by parsing the localized error string). I tried using XmlReader.Read and counting the number of Element and EndElement nodes. That way I know when I am finished reading the first, entire XML message. However, there are several problems. If the buffer does not yet contain the entire message, how can I distinguish the XmlException from an actually invalid, non-well-formed message? In other words, if an exception is thrown before reading the first root EndElement, how can I decide whether to abort the connection with an error, or to collect more bytes from the TCP stream? If no exception occurs, the XmlReader is positioned at the start of the root EndElement. Casting the XmlReader to IXmlLineInfo gives me the current LineNumber and LinePosition; however, it is not straightforward to get the byte position where the EndElement really ends. In order to do that, I would have to convert the byte array into a string (with the encoding specified in the XML declaration), seek to LineNumber,LinePosition and convert that back to the byte offset. I tried to do that with StreamReader.ReadLine, but the stream reader gives no public access to the current byte position. All this seems very inelegant and not robust. I wonder if you have ideas for a better solution. Thank you. EDIT: I looked around and think that the situation is as follows (I might be wrong, corrections are welcome): I found no way to make the XmlReader continue parsing a second XML message (at least not if the second message has an XmlDeclaration). XmlTextReader.ResetState could do something similar, but for that I would have to assume the same encoding for all messages. Therefore I could not connect the XmlReader directly to the TcpStream. After closing the XmlReader, the buffer is not positioned at the reader's last position. So it is not possible to close the reader and use a new one to continue with the next message. I guess the reason for this is that the reader could not successfully seek on every possible input stream. When XmlReader throws an exception, it cannot be determined whether it happened because of a premature EOF or because of non-well-formed XML. XmlReader.EOF is not set in case of an exception. As a workaround I derived my own MemoryBuffer, which returns the very last byte as a single byte. This way I know that the XmlReader was really interested in the last byte and the following exception is likely due to a truncated message (this is kinda sloppy, in that it might not detect every non-well-formed message; however, after appending more bytes to the buffer, sooner or later the error will be detected). I could cast my XmlReader to the IXmlLineInfo interface, which gives access to the LineNumber and the LinePosition of the current node.
So after reading the first message I remember these positions and use it to truncate the buffer. Here comes the really sloppy part, because I have to use the character encoding to get the byte position. I am sure you could find test cases for the code below where it breaks (e.g. internal elements with mixed encoding). But up to now it worked for all my tests. The parser class follows here -- may it be useful (I know, its very far from perfect...) class XmlParser { private byte[] buffer = new byte[0]; public int Length { get { return buffer.Length; } } // Append new binary data to the internal data buffer... public XmlParser Append(byte[] buffer2) { if (buffer2 != null && buffer2.Length > 0) { // I know, its not an efficient way to do this. // The EofMemoryStream should handle a List<byte[]> ... byte[] new_buffer = new byte[buffer.Length + buffer2.Length]; buffer.CopyTo(new_buffer, 0); buffer2.CopyTo(new_buffer, buffer.Length); buffer = new_buffer; } return this; } // MemoryStream which returns the last byte of the buffer individually, // so that we know that the buffering XmlReader really locked at the last // byte of the stream. // Moreover there is an EOF marker. private class EofMemoryStream: Stream { public bool EOF { get; private set; } private MemoryStream mem_; public override bool CanSeek { get { return false; } } public override bool CanWrite { get { return false; } } public override bool CanRead { get { return true; } } public override long Length { get { return mem_.Length; } } public override long Position { get { return mem_.Position; } set { throw new NotSupportedException(); } } public override void Flush() { mem_.Flush(); } public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); } public override void SetLength(long value) { throw new NotSupportedException(); } public override void Write(byte[] buffer, int offset, int count) { throw new NotSupportedException(); } public override int Read(byte[] buffer, int offset, int count) { count = Math.Min(count, Math.Max(1, (int)(Length - Position - 1))); int nread = mem_.Read(buffer, offset, count); if (nread == 0) { EOF = true; } return nread; } public EofMemoryStream(byte[] buffer) { mem_ = new MemoryStream(buffer, false); EOF = false; } protected override void Dispose(bool disposing) { mem_.Dispose(); } } // Parses the first xml message from the stream. // If the first message is not yet complete, it returns null. // If the buffer contains non-wellformed xml, it ~should~ throw an exception. // After reading an xml message, it pops the data from the byte array. public Message deserialize() { if (buffer.Length == 0) { return null; } Message message = null; Encoding encoding = Message.default_encoding; //string xml = encoding.GetString(buffer); using (EofMemoryStream sbuffer = new EofMemoryStream (buffer)) { XmlDocument xmlDocument = null; XmlReaderSettings settings = new XmlReaderSettings(); int LineNumber = -1; int LinePosition = -1; bool truncate_buffer = false; using (XmlReader xmlReader = XmlReader.Create(sbuffer, settings)) { try { // Read to the first node (skipping over some element-types. // Don't use MoveToContent here, because it would skip the // XmlDeclaration too... while (xmlReader.Read() && (xmlReader.NodeType==XmlNodeType.Whitespace || xmlReader.NodeType==XmlNodeType.Comment)) { }; // Check for XML declaration. // If the message has an XmlDeclaration, extract the encoding. 
switch (xmlReader.NodeType) { case XmlNodeType.XmlDeclaration: while (xmlReader.MoveToNextAttribute()) { if (xmlReader.Name == "encoding") { encoding = Encoding.GetEncoding(xmlReader.Value); } } xmlReader.MoveToContent(); xmlReader.Read(); break; } // Move to the first element. xmlReader.MoveToContent(); // Read the entire document. xmlDocument = new XmlDocument(); xmlDocument.Load(xmlReader.ReadSubtree()); } catch (XmlException e) { // The parsing of the xml failed. If the XmlReader did // not yet look at the last byte, it is assumed that the // XML is invalid and the exception is re-thrown. if (sbuffer.EOF) { return null; } throw e; } { // Try to serialize an internal data structure using XmlSerializer. Type type = null; try { type = Type.GetType("my.namespace." + xmlDocument.DocumentElement.Name); } catch (Exception e) { // No specialized data container for this class found... } if (type == null) { message = new Message(); } else { // TODO: reuse the serializer... System.Xml.Serialization.XmlSerializer ser = new System.Xml.Serialization.XmlSerializer(type); message = (Message)ser.Deserialize(new XmlNodeReader(xmlDocument)); } message.doc = xmlDocument; } // At this point, the first XML message was sucessfully parsed. // Remember the lineposition of the current end element. IXmlLineInfo xmlLineInfo = xmlReader as IXmlLineInfo; if (xmlLineInfo != null && xmlLineInfo.HasLineInfo()) { LineNumber = xmlLineInfo.LineNumber; LinePosition = xmlLineInfo.LinePosition; } // Try to read the rest of the buffer. // If an exception is thrown, another xml message appears. // This way the xml parser could tell us that the message is finished here. // This would be prefered as truncating the buffer using the line info is sloppy. try { while (xmlReader.Read()) { } } catch { // There comes a second message. Needs workaround for trunkating. truncate_buffer = true; } } if (truncate_buffer) { if (LineNumber < 0) { throw new Exception("LineNumber not given. Cannot truncate xml buffer"); } // Convert the buffer to a string using the encoding found before // (or the default encoding). string s = encoding.GetString(buffer); // Seek to the line. int char_index = 0; while (--LineNumber > 0) { // Recognize \r , \n , \r\n as newlines... char_index = s.IndexOfAny(new char[] {'\r', '\n'}, char_index); // char_index should not be -1 because LineNumber>0, otherwise an RangeException is // thrown, which is appropriate. char_index++; if (s[char_index-1]=='\r' && s.Length>char_index && s[char_index]=='\n') { char_index++; } } char_index += LinePosition - 1; var rgx = new System.Text.RegularExpressions.Regex(xmlDocument.DocumentElement.Name + "[ \r\n\t]*\\>"); System.Text.RegularExpressions.Match match = rgx.Match(s, char_index); if (!match.Success || match.Index != char_index) { throw new Exception("could not find EndElement to truncate the xml buffer."); } char_index += match.Value.Length; // Convert the character offset back to the byte offset (for the given encoding). int line1_boffset = encoding.GetByteCount(s.Substring(0, char_index)); // remove the bytes from the buffer. buffer = buffer.Skip(line1_boffset).ToArray(); } else { buffer = new byte[0]; } } return message; } }
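    A language-agnostic way to picture the framing problem described above is to buffer the raw bytes and cut a message whenever the next XML declaration arrives, since the question states that every message starts with <?xml>. The sketch below is mine, in Python rather than C# (the XmlFramer class and the sample chunks are invented for illustration); it shares the weakness already noted in the question, namely that the final message can only be recognised when the next declaration or the end of the stream arrives, and it would misfire if the bytes <?xml ever occurred inside a message body.

    # Minimal framing sketch (not the poster's C# code): buffer incoming bytes
    # and emit a complete document whenever the *next* XML declaration shows up.
    class XmlFramer:
        MARKER = b"<?xml"

        def __init__(self):
            self.buffer = b""

        def feed(self, data):
            """Append newly received bytes; yield any messages that are now complete."""
            self.buffer += data
            while True:
                start = self.buffer.find(self.MARKER)
                if start < 0:
                    return
                nxt = self.buffer.find(self.MARKER, start + len(self.MARKER))
                if nxt < 0:
                    return  # the current message may still be incomplete
                yield self.buffer[start:nxt]
                self.buffer = self.buffer[nxt:]

        def flush(self):
            """On connection close, whatever is still buffered is the last message."""
            if self.buffer:
                rest, self.buffer = self.buffer, b""
                yield rest

    # Two concatenated messages arriving in arbitrary TCP-sized chunks.
    framer = XmlFramer()
    for chunk in (b"<?xml version='1.0'?><a>1</a><?xml ver", b"sion='1.0'?><b>2</b>"):
        for message in framer.feed(chunk):
            print(message)
    for message in framer.flush():
        print(message)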

    Read the article

  • LSI 9285-8e and Supermicro SC837E26-RJBOD1 duplicate enclosure ID and slot numbers

    - by Andy Shinn
    I am working with 2 x Supermicro SC837E26-RJBOD1 chassis connected to a single LSI 9285-8e card in a Supermicro 1U host. There are 28 drives in each chassis for a total of 56 drives in 28 RAID1 mirrors. The problem I am running in to is that there are duplicate slots for the 2 chassis (the slots list twice and only go from 0 to 27). All the drives also show the same enclosure ID (ID 36). However, MegaCLI -encinfo lists the 2 enclosures correctly (ID 36 and ID 65). My question is, why would this happen? Is there an option I am missing to use 2 enclosures effectively? This is blocking me rebuilding a drive that failed in slot 11 since I can only specify enclosure and slot as parameters to replace a drive. When I do this, it picks the wrong slot 11 (device ID 46 instead of device ID 19). Adapter #1 is the LSI 9285-8e, adapter #0 (which I removed due to space limitations) is the onboard LSI. Adapter information: Adapter #1 ============================================================================== Versions ================ Product Name : LSI MegaRAID SAS 9285-8e Serial No : SV12704804 FW Package Build: 23.1.1-0004 Mfg. Data ================ Mfg. Date : 06/30/11 Rework Date : 00/00/00 Revision No : 00A Battery FRU : N/A Image Versions in Flash: ================ BIOS Version : 5.25.00_4.11.05.00_0x05040000 WebBIOS Version : 6.1-20-e_20-Rel Preboot CLI Version: 05.01-04:#%00001 FW Version : 3.140.15-1320 NVDATA Version : 2.1106.03-0051 Boot Block Version : 2.04.00.00-0001 BOOT Version : 06.253.57.219 Pending Images in Flash ================ None PCI Info ================ Vendor Id : 1000 Device Id : 005b SubVendorId : 1000 SubDeviceId : 9285 Host Interface : PCIE ChipRevision : B0 Number of Frontend Port: 0 Device Interface : PCIE Number of Backend Port: 8 Port : Address 0 5003048000ee8e7f 1 5003048000ee8a7f 2 0000000000000000 3 0000000000000000 4 0000000000000000 5 0000000000000000 6 0000000000000000 7 0000000000000000 HW Configuration ================ SAS Address : 500605b0038f9210 BBU : Present Alarm : Present NVRAM : Present Serial Debugger : Present Memory : Present Flash : Present Memory Size : 1024MB TPM : Absent On board Expander: Absent Upgrade Key : Absent Temperature sensor for ROC : Present Temperature sensor for controller : Absent ROC temperature : 70 degree Celcius Settings ================ Current Time : 18:24:36 3/13, 2012 Predictive Fail Poll Interval : 300sec Interrupt Throttle Active Count : 16 Interrupt Throttle Completion : 50us Rebuild Rate : 30% PR Rate : 30% BGI Rate : 30% Check Consistency Rate : 30% Reconstruction Rate : 30% Cache Flush Interval : 4s Max Drives to Spinup at One Time : 2 Delay Among Spinup Groups : 12s Physical Drive Coercion Mode : Disabled Cluster Mode : Disabled Alarm : Enabled Auto Rebuild : Enabled Battery Warning : Enabled Ecc Bucket Size : 15 Ecc Bucket Leak Rate : 1440 Minutes Restore HotSpare on Insertion : Disabled Expose Enclosure Devices : Enabled Maintain PD Fail History : Enabled Host Request Reordering : Enabled Auto Detect BackPlane Enabled : SGPIO/i2c SEP Load Balance Mode : Auto Use FDE Only : No Security Key Assigned : No Security Key Failed : No Security Key Not Backedup : No Default LD PowerSave Policy : Controller Defined Maximum number of direct attached drives to spin up in 1 min : 10 Any Offline VD Cache Preserved : No Allow Boot with Preserved Cache : No Disable Online Controller Reset : No PFK in NVRAM : No Use disk activity for locate : No Capabilities ================ RAID Level Supported : RAID0, RAID1, RAID5, RAID6, 
RAID00, RAID10, RAID50, RAID60, PRL 11, PRL 11 with spanning, SRL 3 supported, PRL11-RLQ0 DDF layout with no span, PRL11-RLQ0 DDF layout with span Supported Drives : SAS, SATA Allowed Mixing: Mix in Enclosure Allowed Mix of SAS/SATA of HDD type in VD Allowed Status ================ ECC Bucket Count : 0 Limitations ================ Max Arms Per VD : 32 Max Spans Per VD : 8 Max Arrays : 128 Max Number of VDs : 64 Max Parallel Commands : 1008 Max SGE Count : 60 Max Data Transfer Size : 8192 sectors Max Strips PerIO : 42 Max LD per array : 16 Min Strip Size : 8 KB Max Strip Size : 1.0 MB Max Configurable CacheCade Size: 0 GB Current Size of CacheCade : 0 GB Current Size of FW Cache : 887 MB Device Present ================ Virtual Drives : 28 Degraded : 0 Offline : 0 Physical Devices : 59 Disks : 56 Critical Disks : 0 Failed Disks : 0 Supported Adapter Operations ================ Rebuild Rate : Yes CC Rate : Yes BGI Rate : Yes Reconstruct Rate : Yes Patrol Read Rate : Yes Alarm Control : Yes Cluster Support : No BBU : No Spanning : Yes Dedicated Hot Spare : Yes Revertible Hot Spares : Yes Foreign Config Import : Yes Self Diagnostic : Yes Allow Mixed Redundancy on Array : No Global Hot Spares : Yes Deny SCSI Passthrough : No Deny SMP Passthrough : No Deny STP Passthrough : No Support Security : No Snapshot Enabled : No Support the OCE without adding drives : Yes Support PFK : Yes Support PI : No Support Boot Time PFK Change : Yes Disable Online PFK Change : No PFK TrailTime Remaining : 0 days 0 hours Support Shield State : Yes Block SSD Write Disk Cache Change: Yes Supported VD Operations ================ Read Policy : Yes Write Policy : Yes IO Policy : Yes Access Policy : Yes Disk Cache Policy : Yes Reconstruction : Yes Deny Locate : No Deny CC : No Allow Ctrl Encryption: No Enable LDBBM : No Support Breakmirror : No Power Savings : Yes Supported PD Operations ================ Force Online : Yes Force Offline : Yes Force Rebuild : Yes Deny Force Failed : No Deny Force Good/Bad : No Deny Missing Replace : No Deny Clear : No Deny Locate : No Support Temperature : Yes Disable Copyback : No Enable JBOD : No Enable Copyback on SMART : No Enable Copyback to SSD on SMART Error : Yes Enable SSD Patrol Read : No PR Correct Unconfigured Areas : Yes Enable Spin Down of UnConfigured Drives : Yes Disable Spin Down of hot spares : No Spin Down time : 30 T10 Power State : Yes Error Counters ================ Memory Correctable Errors : 0 Memory Uncorrectable Errors : 0 Cluster Information ================ Cluster Permitted : No Cluster Active : No Default Settings ================ Phy Polarity : 0 Phy PolaritySplit : 0 Background Rate : 30 Strip Size : 64kB Flush Time : 4 seconds Write Policy : WB Read Policy : Adaptive Cache When BBU Bad : Disabled Cached IO : No SMART Mode : Mode 6 Alarm Disable : Yes Coercion Mode : None ZCR Config : Unknown Dirty LED Shows Drive Activity : No BIOS Continue on Error : No Spin Down Mode : None Allowed Device Type : SAS/SATA Mix Allow Mix in Enclosure : Yes Allow HDD SAS/SATA Mix in VD : Yes Allow SSD SAS/SATA Mix in VD : No Allow HDD/SSD Mix in VD : No Allow SATA in Cluster : No Max Chained Enclosures : 16 Disable Ctrl-R : Yes Enable Web BIOS : Yes Direct PD Mapping : No BIOS Enumerate VDs : Yes Restore Hot Spare on Insertion : No Expose Enclosure Devices : Yes Maintain PD Fail History : Yes Disable Puncturing : No Zero Based Enclosure Enumeration : No PreBoot CLI Enabled : Yes LED Show Drive Activity : Yes Cluster Disable : Yes SAS Disable : No Auto Detect BackPlane Enable 
: SGPIO/i2c SEP Use FDE Only : No Enable Led Header : No Delay during POST : 0 EnableCrashDump : No Disable Online Controller Reset : No EnableLDBBM : No Un-Certified Hard Disk Drives : Allow Treat Single span R1E as R10 : No Max LD per array : 16 Power Saving option : Don't Auto spin down Configured Drives Max power savings option is not allowed for LDs. Only T10 power conditions are to be used. Default spin down time in minutes: 30 Enable JBOD : No TTY Log In Flash : No Auto Enhanced Import : No BreakMirror RAID Support : No Disable Join Mirror : No Enable Shield State : Yes Time taken to detect CME : 60s Exit Code: 0x00 Enclosure information: # /opt/MegaRAID/MegaCli/MegaCli64 -encinfo -a1 Number of enclosures on adapter 1 -- 3 Enclosure 0: Device ID : 36 Number of Slots : 28 Number of Power Supplies : 2 Number of Fans : 3 Number of Temperature Sensors : 1 Number of Alarms : 1 Number of SIM Modules : 0 Number of Physical Drives : 28 Status : Normal Position : 1 Connector Name : Port B Enclosure type : SES VendorId is LSI CORP and Product Id is SAS2X36 VendorID and Product ID didnt match FRU Part Number : N/A Enclosure Serial Number : N/A ESM Serial Number : N/A Enclosure Zoning Mode : N/A Partner Device Id : 65 Inquiry data : Vendor Identification : LSI CORP Product Identification : SAS2X36 Product Revision Level : 0718 Vendor Specific : x36-55.7.24.1 Number of Voltage Sensors :2 Voltage Sensor :0 Voltage Sensor Status :OK Voltage Value :5020 milli volts Voltage Sensor :1 Voltage Sensor Status :OK Voltage Value :11820 milli volts Number of Power Supplies : 2 Power Supply : 0 Power Supply Status : OK Power Supply : 1 Power Supply Status : OK Number of Fans : 3 Fan : 0 Fan Speed :Low Speed Fan Status : OK Fan : 1 Fan Speed :Low Speed Fan Status : OK Fan : 2 Fan Speed :Low Speed Fan Status : OK Number of Temperature Sensors : 1 Temp Sensor : 0 Temperature : 48 Temperature Sensor Status : OK Number of Chassis : 1 Chassis : 0 Chassis Status : OK Enclosure 1: Device ID : 65 Number of Slots : 28 Number of Power Supplies : 2 Number of Fans : 3 Number of Temperature Sensors : 1 Number of Alarms : 1 Number of SIM Modules : 0 Number of Physical Drives : 28 Status : Normal Position : 1 Connector Name : Port A Enclosure type : SES VendorId is LSI CORP and Product Id is SAS2X36 VendorID and Product ID didnt match FRU Part Number : N/A Enclosure Serial Number : N/A ESM Serial Number : N/A Enclosure Zoning Mode : N/A Partner Device Id : 36 Inquiry data : Vendor Identification : LSI CORP Product Identification : SAS2X36 Product Revision Level : 0718 Vendor Specific : x36-55.7.24.1 Number of Voltage Sensors :2 Voltage Sensor :0 Voltage Sensor Status :OK Voltage Value :5020 milli volts Voltage Sensor :1 Voltage Sensor Status :OK Voltage Value :11760 milli volts Number of Power Supplies : 2 Power Supply : 0 Power Supply Status : OK Power Supply : 1 Power Supply Status : OK Number of Fans : 3 Fan : 0 Fan Speed :Low Speed Fan Status : OK Fan : 1 Fan Speed :Low Speed Fan Status : OK Fan : 2 Fan Speed :Low Speed Fan Status : OK Number of Temperature Sensors : 1 Temp Sensor : 0 Temperature : 47 Temperature Sensor Status : OK Number of Chassis : 1 Chassis : 0 Chassis Status : OK Enclosure 2: Device ID : 252 Number of Slots : 8 Number of Power Supplies : 0 Number of Fans : 0 Number of Temperature Sensors : 0 Number of Alarms : 0 Number of SIM Modules : 1 Number of Physical Drives : 0 Status : Normal Position : 1 Connector Name : Unavailable Enclosure type : SGPIO Failed in first Inquiry commnad FRU Part Number : 
N/A Enclosure Serial Number : N/A ESM Serial Number : N/A Enclosure Zoning Mode : N/A Partner Device Id : Unavailable Inquiry data : Vendor Identification : LSI Product Identification : SGPIO Product Revision Level : N/A Vendor Specific : Exit Code: 0x00 Now, notice that each slot 11 device shows an enclosure ID of 36, I think this is where the discrepancy happens. One should be 36. But the other should be on enclosure 65. Drives in slot 11: Enclosure Device ID: 36 Slot Number: 11 Drive's postion: DiskGroup: 5, Span: 0, Arm: 1 Enclosure position: 0 Device Id: 48 WWN: Sequence Number: 11 Media Error Count: 0 Other Error Count: 0 Predictive Failure Count: 0 Last Predictive Failure Event Seq Number: 0 PD Type: SATA Raw Size: 2.728 TB [0x15d50a3b0 Sectors] Non Coerced Size: 2.728 TB [0x15d40a3b0 Sectors] Coerced Size: 2.728 TB [0x15d400000 Sectors] Firmware state: Online, Spun Up Is Commissioned Spare : YES Device Firmware Level: A5C0 Shield Counter: 0 Successful diagnostics completion on : N/A SAS Address(0): 0x5003048000ee8a53 Connected Port Number: 1(path0) Inquiry Data: MJ1311YNG6YYXAHitachi HDS5C3030ALA630 MEAOA5C0 FDE Enable: Disable Secured: Unsecured Locked: Unlocked Needs EKM Attention: No Foreign State: None Device Speed: 6.0Gb/s Link Speed: 6.0Gb/s Media Type: Hard Disk Device Drive Temperature :30C (86.00 F) PI Eligibility: No Drive is formatted for PI information: No PI: No PI Drive's write cache : Disabled Drive's NCQ setting : Enabled Port-0 : Port status: Active Port's Linkspeed: 6.0Gb/s Drive has flagged a S.M.A.R.T alert : No Enclosure Device ID: 36 Slot Number: 11 Drive's postion: DiskGroup: 19, Span: 0, Arm: 1 Enclosure position: 0 Device Id: 19 WWN: Sequence Number: 4 Media Error Count: 0 Other Error Count: 0 Predictive Failure Count: 0 Last Predictive Failure Event Seq Number: 0 PD Type: SATA Raw Size: 2.728 TB [0x15d50a3b0 Sectors] Non Coerced Size: 2.728 TB [0x15d40a3b0 Sectors] Coerced Size: 2.728 TB [0x15d400000 Sectors] Firmware state: Online, Spun Up Is Commissioned Spare : NO Device Firmware Level: A580 Shield Counter: 0 Successful diagnostics completion on : N/A SAS Address(0): 0x5003048000ee8e53 Connected Port Number: 0(path0) Inquiry Data: MJ1313YNG1VA5CHitachi HDS5C3030ALA630 MEAOA580 FDE Enable: Disable Secured: Unsecured Locked: Unlocked Needs EKM Attention: No Foreign State: None Device Speed: 6.0Gb/s Link Speed: 6.0Gb/s Media Type: Hard Disk Device Drive Temperature :30C (86.00 F) PI Eligibility: No Drive is formatted for PI information: No PI: No PI Drive's write cache : Disabled Drive's NCQ setting : Enabled Port-0 : Port status: Active Port's Linkspeed: 6.0Gb/s Drive has flagged a S.M.A.R.T alert : No Update 06/28/12: I finally have some new information about (what we think) the root cause of this problem so I thought I would share. After getting in contact with a very knowledgeable Supermicro tech, they provided us with a tool called Xflash (doesn't appear to be readily available on their FTP). When we gathered some information using this utility, my colleague found something very strange: root@mogile2 test]# ./xflash.dat -i get avail Initializing Interface. Expander: SAS2X36 (SAS2x36) 1) SAS2X36 (SAS2x36) (50030480:00EE917F) (0.0.0.0) 2) SAS2X36 (SAS2x36) (50030480:00E9D67F) (0.0.0.0) 3) SAS2X36 (SAS2x36) (50030480:0112D97F) (0.0.0.0) This lists the connected enclosures. You see the 3 connected (we have since added a 3rd and a 4th which is not yet showing up) with their respective SAS address / WWN (50030480:00EE917F). 
Now we can use this address to get information on the individual enclosures: [root@mogile2 test]# ./xflash.dat -i 5003048000EE917F get exp Initializing Interface. Expander: SAS2X36 (SAS2x36) Reading the expander information.......... Expander: SAS2X36 (SAS2x36) B3 SAS Address: 50030480:00EE917F Enclosure Logical Id: 50030480:0000007F IP Address: 0.0.0.0 Component Identifier: 0x0223 Component Revision: 0x05 [root@mogile2 test]# ./xflash.dat -i 5003048000E9D67F get exp Initializing Interface. Expander: SAS2X36 (SAS2x36) Reading the expander information.......... Expander: SAS2X36 (SAS2x36) B3 SAS Address: 50030480:00E9D67F Enclosure Logical Id: 50030480:0000007F IP Address: 0.0.0.0 Component Identifier: 0x0223 Component Revision: 0x05 [root@mogile2 test]# ./xflash.dat -i 500304800112D97F get exp Initializing Interface. Expander: SAS2X36 (SAS2x36) Reading the expander information.......... Expander: SAS2X36 (SAS2x36) B3 SAS Address: 50030480:0112D97F Enclosure Logical Id: 50030480:0112D97F IP Address: 0.0.0.0 Component Identifier: 0x0223 Component Revision: 0x05 Did you catch it? The first 2 enclosures logical ID is partially masked out where the 3rd one (which has a correct unique enclosure ID) is not. We pointed this out to Supermicro and were able to confirm that this address is supposed to be set during manufacturing and there was a problem with a certain batch of these enclosures where the logical ID was not set. We believe that the RAID controller is determining the ID based on the logical ID and since our first 2 enclosures have the same logical ID, they get the same enclosure ID. We also confirmed that 0000007F is the default which comes from LSI as an ID. The next pointer that helps confirm this could be a manufacturing problem with a run of JBODs is the fact that all 6 of the enclosures that have this problem begin with 00E. I believe that between 00E8 and 00EE Supermicro forgot to program the logical IDs correctly and neglected to recall or fix the problem post production. Fortunately for us, there is a tool to manage the WWN and logical ID of the devices from Supermicro: ftp://ftp.supermicro.com/utility/ExpanderXtools_Lite/. Our next step is to schedule a shutdown of these JBODs (after data migration) and reprogram the logical ID and see if it solves the problem. Update 06/28/12 #2: I just discovered this FAQ at Supermicro while Google searching for "lsi 0000007f": http://www.supermicro.com/support/faqs/faq.cfm?faq=11805. I still don't understand why, in the last several times we contacted Supermicro, they would have never directed us to this article :\
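    To make the pattern in the update concrete, here is a throwaway Python snippet of my own (not a Supermicro or LSI tool) that groups the three expanders by the Enclosure Logical Id values quoted above; the two units sharing the factory default 50030480:0000007F are the ones the poster believes the controller collapses into the same enclosure ID 36.

    # Group the expanders quoted in the update by their Enclosure Logical Id,
    # to highlight the duplicated factory-default value.
    from collections import defaultdict

    expanders = {  # SAS address -> Enclosure Logical Id, as printed by xflash
        "50030480:00EE917F": "50030480:0000007F",
        "50030480:00E9D67F": "50030480:0000007F",
        "50030480:0112D97F": "50030480:0112D97F",
    }

    by_logical_id = defaultdict(list)
    for sas_address, logical_id in expanders.items():
        by_logical_id[logical_id].append(sas_address)

    for logical_id, members in by_logical_id.items():
        status = "DUPLICATE (unprogrammed default)" if len(members) > 1 else "ok"
        print(logical_id, members, status)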

    Read the article

  • Unix sort keys cause performance problems

    - by KenFar
    My data: It's a 71 MB file with 1.5 million rows. It has 6 fields All six fields combine to form a unique key - so that's what I need to sort on. Sort statement: sort -t ',' -k1,1 -k2,2 -k3,3 -k4,4 -k5,5 -k6,6 -o output.csv input.csv The problem: If I sort without keys, it takes 30 seconds. If I sort with keys, it takes 660 seconds. I need to sort with keys to keep this generic and useful for other files that have non-key fields as well. The 30 second timing is fine, but the 660 is a killer. More details using unix time: sort input.csv -o output.csv = 28 seconds sort -t ',' -k1 input.csv -o output.csv = 28 seconds sort -t ',' -k1,1 input.csv -o output.csv = 64 seconds sort -t ',' -k1,1 -k2,2 input.csv -o output.csv = 194 seconds sort -t ',' -k1,1 -k2,2 -k3,3 input.csv -o output.csv = 328 seconds sort -t ',' -k1,1 -k2,2 -k3,3 -k4,4 input.csv -o output.csv = 483 seconds sort -t ',' -k1,1 -k2,2 -k3,3 -k4,4 -k5,5 input.csv -o output.csv = 561 seconds sort -t ',' -k1,1 -k2,2 -k3,3 -k4,4 -k5,5 -k6,6 input.csv -o output.csv = 660 seconds I could theoretically move the temp directory to SSD, and/or split the file into 4 parts, sort them separately (in parallel) then merge the results, etc. But I'm hoping for something simpler since looks like sort is just picking a bad algorithm. Any suggestions? Testing Improvements using buffer-size: With 2 keys I got a 5% improvement with 8, 20, 24 MB and best performance of 8% improvement with 16MB, but 6% worse with 128MB With 6 keys I got a 5% improvement with 8, 20, 24 MB and best performance of 9% improvement with 16MB. Testing improvements using dictionary order (just 1 run each): sort -d --buffer-size=8M -t ',' -k1,1 -k2,2 input.csv -o output.csv = 235 seconds (21% worse) sort -d --buffer-size=8M -t ',' -k1,1 -k2,2 input.csv -o ouput.csv = 232 seconds (21% worse) conclusion: it makes sense that this would slow the process down, not useful Testing with different file system on SSD - I can't do this on this server now. Testing with code to consolidate adjacent keys: def consolidate_keys(key_fields, key_types): """ Inputs: - key_fields - a list of numbers in quotes: ['1','2','3'] - key_types - a list of types of the key_fields: ['integer','string','integer'] Outputs: - key_fields - a consolidated list: ['1,2','3'] - key_types - a list of types of the consolidated list: ['string','integer'] """ assert(len(key_fields) == len(key_types)) def get_min(val): vals = val.split(',') assert(len(vals) <= 2) return vals[0] def get_max(val): vals = val.split(',') assert(len(vals) <= 2) return vals[len(vals)-1] i = 0 while True: try: if ( (int(get_max(key_fields[i])) + 1) == int(key_fields[i+1]) and key_types[i] == key_types[i+1]): key_fields[i] = '%s,%s' % (get_min(key_fields[i]), key_fields[i+1]) key_types[i] = key_types[i] key_fields.pop(i+1) key_types.pop(i+1) continue i = i+1 except IndexError: break # last entry return key_fields, key_types While this code is just a work-around that'll only apply to cases in which I've got a contiguous set of keys - it speeds up the code by 95% in my worst case scenario.
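    For reference, a short usage sketch of the consolidate_keys() work-around quoted above (the key lists here are hypothetical): contiguous fields of the same type collapse into a single range, which then maps onto one -kM,N option instead of six separate keys.

    # Usage sketch; assumes consolidate_keys() from the question is defined,
    # and the key lists below are made up for illustration.
    key_fields = ['1', '2', '3', '4', '5', '6']
    key_types = ['string'] * 6

    fields, types = consolidate_keys(key_fields, key_types)
    print(fields, types)   # ['1,6'] ['string']  -> a single -k1,6 style sort key

    # A gap or a type change stops the merge:
    fields, types = consolidate_keys(['1', '2', '4'], ['string', 'string', 'integer'])
    print(fields, types)   # ['1,2', '4'] ['string', 'integer']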

    Read the article

  • git post-receive hook throws "command not found" error but seems to run properly and no errors when run manually

    - by Ben
    I have a post-receive hook that runs on a central git repository set up with gitolite to trigger a git pull on a staging server. It seems to work properly, but throws a "command not found" error when it is run. I am trying to track down the source of the error, but have not had any luck. Running the same commands manually does not produce an error. The error changes depending on what was done in the commit that is being pushed to the central repository. For instance, if 'git rm ' was committed and pushed to the central repo the error message will be "remote: hooks/post-receive: line 16: Removed: command not found" and if 'git add ' was committed and pushed to the central repo the error message will be "remote: hooks/post-receive: line 16: Merge: command not found". In either case the 'git pull' run on the staging server works correctly despite the error message. Here is the post-receive script: #!/bin/bash # # This script is triggered by a push to the local git repository. It will # ssh into a remote server and perform a git pull. # # The SSH_USER must be able to log into the remote server with a # passphrase-less SSH key *AND* be able to do a git pull without a passphrase. # # The command to actually perform the pull request on the remost server comes # from the ~/.ssh/authorized_keys file on the REMOTE_HOST and is triggered # by the ssh login. SSH_USER="remoteuser" REMOTE_HOST="staging.server.com" `ssh $SSH_USER@$REMOTE_HOST` # This is line 16 echo "Done!" The command that does the git pull on the staging server is in the ssh user's ~/.ssh/authorized_keys file and is: command="cd /var/www/staging_site; git pull",no-port-forwarding,no-X11-forwarding,no-agent-forwarding, ssh-rsa AAAAB3NzaC1yc2EAAAABIwAA... (the rest of the public key) This is the actual output from removing a file from my local repo, committing it locally, and pushing it to the central git repo: ben@tamarack:~/thejibe/testing/web$ git rm ./testing rm 'testing' ben@tamarack:~/thejibe/testing/web$ git commit -a -m "Remove testing file" [master bb96e13] Remove testing file 1 files changed, 0 insertions(+), 5 deletions(-) delete mode 100644 testing ben@tamarack:~/thejibe/testing/web$ git push Counting objects: 3, done. Delta compression using up to 2 threads. Compressing objects: 100% (2/2), done. Writing objects: 100% (2/2), 221 bytes, done. Total 2 (delta 1), reused 0 (delta 0) remote: From [email protected]:testing remote: aa72ad9..bb96e13 master -> origin/master remote: hooks/post-receive: line 16: Removed: command not found # The error msg remote: Done! To [email protected]:testing aa72ad9..bb96e13 master -> master ben@tamarack:~/thejibe/testing/web$ As you can see the post-receive script gets to the echo "Done!" line and when I look on the staging server the git pull has been successfully run, but there's still that nagging error message. Any suggestions on where to look for the source of the error message would be greatly appreciated. I'm tempted to redirect stderr to /dev/null but would prefer to know what the problem is.
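    One thing worth noticing in the transcript: line 16 wraps the ssh call in backticks, so whatever the remote command prints comes back through the command substitution and its first word is executed as a shell command, which matches the "Merge: command not found" / "Removed: command not found" text. Purely as an illustration, and in Python rather than the original bash, this is what running the same trigger while capturing the remote output (instead of re-executing it) looks like; SSH_USER and REMOTE_HOST are the placeholders from the question.

    #!/usr/bin/env python3
    # Illustrative rewrite of the trigger (not the original bash hook): run ssh
    # and capture its output as text instead of letting the shell substitute it
    # back in and execute it, which is what the backticks on line 16 do.
    import subprocess

    SSH_USER = "remoteuser"              # placeholders from the question
    REMOTE_HOST = "staging.server.com"

    result = subprocess.run(
        ["ssh", f"{SSH_USER}@{REMOTE_HOST}"],
        capture_output=True,
        text=True,
    )
    # The remote 'git pull' output arrives here as data, not as a command line.
    print(result.stdout, end="")
    print("Done!")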

    Read the article

  • Import/rip/convert DVD to Adobe Premiere Pro for Mac

    - by alexyu2010
    For those who want to edit their videos, Adobe Premiere Pro is inevitably a good choice: it is a professional, real-time, timeline-based video editing application that supports many video editing cards and plug-ins for accelerated processing, additional file format support and video/audio effects. Although Adobe Premiere Pro is said to be for professionals, it is not so complicated that a hobbyist can't excel at using it in an hour or so. General file formats supported by Adobe Premiere Pro: Up to now, Adobe Creative Suite has released several versions of Adobe Premiere Pro, including Adobe Premiere 1.0, Adobe Premiere 2.0, Adobe Premiere Pro CS3, Adobe Premiere Pro CS4 and the newly published Adobe Premiere Pro CS5. Although the file formats they support vary, some common formats are supported by all of them, such as AVI, MOV and MPG. Importing DVD, Adobe Premiere Pro says "NO": Adobe Premiere Pro will never accept DVDs directly, and it isn't rare to see people confused when they want to import their DVDs into Adobe Premiere Pro for editing. What to do? There is only one way out: ripping your DVDs to formats Adobe Premiere Pro can work with natively, and this is what DVD to Adobe Premiere Pro can do. Importing DVD to Adobe Premiere Pro on Mac: DVD to Adobe Premiere Pro converter for Mac is an application specially designed for ripping/converting DVD movies, DVD VOB files or DVD clips to Adobe Premiere Pro-compatible AVI, MOV and MPG files, combining a DVD ripping tool and a video converting tool in one versatile program for dealing with DVDs and videos. The Mac DVD to Adobe Premiere Pro converter can work with a wide variety of files including DVD, VOB, AVI, WMV, MPG, MOV, MP4, DV, FLV, MKV, ASF, SWF and HD video, for use with other editing tools like iMovie and FCP, for playback in QuickTime or iTunes, for portable devices like iPod, iPhone, iPad, iRiver, BlackBerry, Gphone and other mobile phones, or for upload to websites such as YouTube and MySpace. DVD to Adobe Premiere Pro converter for Mac can also help you do some basic editing. You can trim or crop your DVD movie or DVD clip, apply special effects to make it more artistic, merge several DVD clips into a single one, or tweak the output parameters for video and audio separately to get a better-quality rendering. Besides, to keep a good command of the process, a preview window is also available.

    Read the article

  • kernel panic after LVM setup

    - by Manuel Sopena Ballesteros
    I broke my webserver... My setup is: VMWare ESXi environemt CPanel installed CentOS release 6.5 (Final) 4 CPUs 2G RAM 2x VM disks 100G each LVM system This was my previous storage settings (the server was working fine at this time): # df -h Filesystem Size Used Avail Use% Mounted on /dev/mapper/vg_test01-lv_root 95G 1.4G 88G 2% / tmpfs 939M 0 939M 0% /dev/shm /dev/sdb1 99G 188M 94G 1% /tmp /dev/sda1 485M 54M 407M 12% /boot My web developer asked me to merge /tmp and / disks so this is what I did: Delete /dev/sdb1 partition using fdisk Create a new partition as LVM on /dev/sdb1 using fdisk Create a new physical volume -- pvcreate /dev/sdb1 Extend volume group -- vgextend /dev/sdb1 vg_test01 Extend logical volume -- lvextend -l +100%FREE /dev/vg_test01/lv_root Resize filesystem -- resize2fs /dev/vg_test01/lv_root This is the new configuration: # df -h Filesystem Size Used Avail Use% Mounted on /dev/mapper/vg_test01-lv_root 213G 105G 97G 52% / tmpfs 939M 0 939M 0% /dev/shm /dev/sda1 485M 54M 407M 12% /boot /usr/tmpDSK 4.0G 145M 3.6G 4% /tmp Since I have the new settings my web server is throwing kernel panics quite often (around every 2 days). The message says: INFO: task <taskName>:<pid> blocked for more than 120 seconds. The list of process affected that I can see from the console are: mysqld queueprocd httpd suphp vmtoolsd loop0 auditd The only way I can fix this is reseting (cold reboot) the VM. I don't think it is a hardware issue as sar is not showing any bottleneck: Linux 2.6.32-431.3.1.el6.x86_64 (test01) 08/22/2014 _x86_64_ (4 CPU) 12:00:01 AM CPU %user %nice %system %iowait %steal %idle 12:10:01 AM all 26.86 0.01 0.98 0.57 0.00 71.57 12:20:01 AM all 1.78 0.02 1.03 0.08 0.00 97.09 12:30:01 AM all 26.34 0.02 0.85 0.05 0.00 72.74 12:40:01 AM all 27.12 0.01 1.11 1.22 0.00 70.54 12:50:01 AM all 1.59 0.02 0.94 0.13 0.00 97.32 01:00:01 AM all 26.10 0.01 0.77 0.04 0.00 73.07 01:10:01 AM all 27.51 0.01 1.16 0.14 0.00 71.18 01:20:01 AM all 1.80 0.07 1.06 0.08 0.00 96.99 01:30:01 AM all 26.19 0.01 0.78 0.05 0.00 72.96 01:40:01 AM all 26.62 0.02 0.87 0.05 0.00 72.45 01:50:02 AM all 1.35 0.01 0.87 0.02 0.00 97.75 02:00:01 AM all 26.11 0.02 0.69 0.02 0.00 73.17 02:10:01 AM all 26.73 0.02 0.89 0.14 0.00 72.21 02:20:01 AM all 1.45 0.01 0.92 0.04 0.00 97.58 02:30:01 AM all 26.59 0.01 1.06 0.03 0.00 72.31 02:40:01 AM all 26.27 0.01 0.72 0.05 0.00 72.95 02:50:01 AM all 0.86 0.01 0.50 0.09 0.00 98.53 03:00:01 AM all 25.61 0.02 0.39 0.03 0.00 73.96 03:10:01 AM all 26.30 0.08 0.66 0.14 0.00 72.82 03:20:01 AM all 0.81 0.01 0.51 0.04 0.00 98.63 03:30:02 AM all 26.15 0.02 0.53 0.07 0.00 73.24 03:40:01 AM all 26.06 0.01 0.47 0.04 0.00 73.42 03:50:01 AM all 0.96 0.02 0.51 0.03 0.00 98.48 Average: all 17.69 0.02 0.79 0.14 0.00 81.36 06:58:14 AM LINUX RESTART 07:00:01 AM CPU %user %nice %system %iowait %steal %idle 07:10:01 AM all 1.04 0.02 0.57 0.95 0.00 97.42 07:20:02 AM all 0.66 0.01 0.39 0.06 0.00 98.87 07:30:01 AM all 25.71 0.01 0.45 0.16 0.00 73.67 07:40:01 AM all 25.88 0.01 0.35 0.08 0.00 73.68 07:50:01 AM all 1.13 0.02 0.55 0.11 0.00 98.19 As you can see the server became unresponsive at 03.50 AM and I had to reset the VM at 06.58 AM to bring the website up again. I would appreciate any help/assistance to fix this issue. thank you very much

    Read the article

  • ffmpeg hangs when creating a video

    - by FearUs
    I am trying to insert an audio channel with a video: first of all I extract the audio from the original video for processing: ffmpeg -i lotr.mp4 lotr.wav I then extract all frames for later processing too: ffmpeg -i lotr.mp4 -f image2 %d.jpg When done processing audio and video streams, I try to create the video ffmpeg -f image2 -r 15 -i %d.jpg new.mp4 then merge with the audio: ffmpeg -i new.mp4 -i lotr.wav -map 0:0 -map 1:0 new_w_audio.mp4 Result: CPU activity = 100%, the process hangs and never returns. PS: I even tried it without modifying the images or the audio (so just trying to unpack then repack the video) but still the same output FFmpeg version SVN-r26400, Copyright (c) 2000-2011 the FFmpeg developers built on Jan 18 2011 04:07:05 with gcc 4.4.2 configuration: --enable-gpl --enable-version3 --enable-libgsm --enable-libvorb is --enable-libtheora --enable-libspeex --enable-libmp3lame --enable-libopenjpeg --enable-libschroedinger --enable-libopencore_amrwb --enable-libopencore_amrnb --enable-libvpx --disable-decoder=libvpx --arch=x86 --enable-runtime-cpudetect - -enable-libxvid --enable-libx264 --enable-librtmp --extra-libs='-lrtmp -lpolarss l -lws2_32 -lwinmm' --target-os=mingw32 --enable-avisynth --enable-w32threads -- cross-prefix=i686-mingw32- --cc='ccache i686-mingw32-gcc' --enable-memalign-hack libavutil 50.36. 0 / 50.36. 0 libavcore 0.16. 1 / 0.16. 1 libavcodec 52.108. 0 / 52.108. 0 libavformat 52.93. 0 / 52.93. 0 libavdevice 52. 2. 3 / 52. 2. 3 libavfilter 1.74. 0 / 1.74. 0 libswscale 0.12. 0 / 0.12. 0 Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'new.mp4': Metadata: major_brand : isom minor_version : 512 compatible_brands: isomiso2mp41 creation_time : 1970-01-01 00:00:00 encoder : Lavf52.93.0 Duration: 00:00:29.66, start: 0.000000, bitrate: 193 kb/s Stream #0.0(und): Video: mpeg4, yuv420p, 200x134 [PAR 1:1 DAR 100:67], 192 k b/s, 15 fps, 15 tbr, 15 tbn, 15 tbc Metadata: creation_time : 1970-01-01 00:00:00 [wav @ 01fed010] max_analyze_duration reached Input #1, wav, from 'lotr.wav': Duration: 00:00:29.90, bitrate: 176 kb/s Stream #1.0: Audio: pcm_s16le, 11025 Hz, 1 channels, s16, 176 kb/s File 'new_w_audio.mp4' already exists. Overwrite ? [y/N] y [buffer @ 01b03820] w:200 h:134 pixfmt:yuv420p Output #0, mp4, to 'new_w_audio.mp4': Metadata: major_brand : isom minor_version : 512 compatible_brands: isomiso2mp41 creation_time : 1970-01-01 00:00:00 encoder : Lavf52.93.0 Stream #0.0(und): Video: mpeg4, yuv420p, 200x134 [PAR 1:1 DAR 100:67], q=2-3 1, 200 kb/s, 15 tbn, 15 tbc Metadata: creation_time : 1970-01-01 00:00:00 Stream #0.1: Audio: aac, 11025 Hz, 1 channels, s16, 64 kb/s Stream mapping: Stream #0.0 -> #0.0 Stream #1.0 -> #0.1 Press [q] to stop encoding
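    As a small aside, the four commands above can be driven from a wrapper so that each stage is run and checked on its own, which makes it easier to see exactly which invocation hangs. The Python sketch below is my own and simply replays the question's commands via subprocess; the only addition is ffmpeg's -y flag so the "Overwrite ? [y/N]" prompt shown in the transcript does not block the final step.

    # Replay the exact ffmpeg commands from the question, one checked stage at a time.
    import subprocess

    def run(args):
        print("running:", " ".join(args))
        subprocess.run(args, check=True)          # raise if ffmpeg exits non-zero

    run(["ffmpeg", "-i", "lotr.mp4", "lotr.wav"])                        # extract audio
    run(["ffmpeg", "-i", "lotr.mp4", "-f", "image2", "%d.jpg"])          # extract frames
    run(["ffmpeg", "-f", "image2", "-r", "15", "-i", "%d.jpg", "new.mp4"])
    run(["ffmpeg", "-y", "-i", "new.mp4", "-i", "lotr.wav",
         "-map", "0:0", "-map", "1:0", "new_w_audio.mp4"])               # mux audio back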

    Read the article

  • nested iterator errors

    - by Sean
    //arrayList.h #include<iostream> #include<sstream> #include<string> #include<algorithm> #include<iterator> using namespace std; template<class T> class arrayList{ public: // constructor, copy constructor and destructor arrayList(int initialCapacity = 10); arrayList(const arrayList<T>&); ~arrayList() { delete[] element; } // ADT methods bool empty() const { return listSize == 0; } int size() const { return listSize; } T& get(int theIndex) const; int indexOf(const T& theElement) const; void erase(int theIndex); void insert(int theIndex, const T& theElement); void output(ostream& out) const; // additional method int capacity() const { return arrayLength; } void reverse(); // new defined // iterators to start and end of list class iterator; class seamlessPointer; seamlessPointer begin() { return seamlessPointer(element); } seamlessPointer end() { return seamlessPointer(element + listSize); } // iterator for arrayList class iterator { public: // typedefs required by C++ for a bidirectional iterator typedef bidirectional_iterator_tag iterator_category; typedef T value_type; typedef ptrdiff_t difference_type; typedef T* pointer; typedef T& reference; // constructor iterator(T* thePosition = 0) { position = thePosition; } // dereferencing operators T& operator*() const { return *position; } T* operator->() const { return position; } // increment iterator& operator++() // preincrement { ++position; return *this; } iterator operator++(int) // postincrement { iterator old = *this; ++position; return old; } // decrement iterator& operator--() // predecrement { --position; return *this; } iterator operator--(int) // postdecrement { iterator old = *this; --position; return old; } // equality testing bool operator!=(const iterator right) const { return position != right.position; } bool operator==(const iterator right) const { return position == right.position; } protected: T* position; }; // end of iterator class class seamlessPointer: public arrayList<T>::iterator { // constructor seamlessPointer(T *thePosition) { iterator::position = thePosition; } //arithmetic operators seamlessPointer & operator+(int n) { arrayList<T>::iterator::position += n; return *this; } seamlessPointer & operator+=(int n) { arrayList<T>::iterator::position += n; return *this; } seamlessPointer & operator-(int n) { arrayList<T>::iterator::position -= n; return *this; } seamlessPointer & operator-=(int n) { arrayList<T>::iterator::position -= n; return *this; } T& operator[](int n) { return arrayList<T>::iterator::position[n]; } bool operator<(seamlessPointer &rhs) { if(int(arrayList<T>::iterator::position - rhs.position) < 0) return true; return false; } bool operator<=(seamlessPointer & rhs) { if (int(arrayList<T>::iterator::position - rhs.position) <= 0) return true; return false; } bool operator >(seamlessPointer & rhs) { if (int(arrayList<T>::iterator::position - rhs.position) > 0) return true; return false; } bool operator >=(seamlessPointer &rhs) { if (int(arrayList<T>::iterator::position - rhs.position) >= 0) return true; return false; } }; protected: // additional members of arrayList void checkIndex(int theIndex) const; // throw illegalIndex if theIndex invalid T* element; // 1D array to hold list elements int arrayLength; // capacity of the 1D array int listSize; // number of elements in list }; #endif //main.cpp #include<iostream> #include"arrayList.h" #include<fstream> #include<algorithm> #include<string> using namespace std; bool compare_nocase (string first, string second) { unsigned int i=0; while ( 
(i<first.length()) && (i<second.length()) ) { if (tolower(first[i])<tolower(second[i])) return true; else if (tolower(first[i])>tolower(second[i])) return false; ++i; } if (first.length()<second.length()) return true; else return false; } int main() { ifstream fin; ofstream fout; string str; arrayList<string> dict; fin.open("dictionary"); if (!fin.good()) { cout << "Unable to open file" << endl; return 1; } int k=0; while(getline(fin,str)) { dict.insert(k,str); // cout<<dict.get(k)<<endl; k++; } //sort the array sort(dict.begin, dict.end(),compare_nocase); fout.open("sortedDictionary"); if (!fout.good()) { cout << "Cannot create file" << endl; return 1; } dict.output(fout); fin.close(); return 0; } Two errors are: ..\src\test.cpp: In function 'int main()': ..\src\test.cpp:50:44: error: no matching function for call to 'sort(<unresolved overloaded function type>, arrayList<std::basic_string<char> >::seamlessPointer, bool (&)(std::string, std::string))' ..\src\/arrayList.h: In member function 'arrayList<T>::seamlessPointer arrayList<T>::end() [with T = std::basic_string<char>]': ..\src\test.cpp:50:28: instantiated from here ..\src\/arrayList.h:114:3: error: 'arrayList<T>::seamlessPointer::seamlessPointer(T*) [with T = std::basic_string<char>]' is private ..\src\/arrayList.h:49:44: error: within this context Why do I get these errors? Update I add public: in the seamlessPointer class and change begin to begin() Then I got the following errors: ..\hw3prob2.cpp:50:46: instantiated from here c:\wascana\mingw\bin\../lib/gcc/mingw32/4.5.0/include/c++/bits/stl_algo.h:5250:4: error: no match for 'operator-' in '__last - __first' ..\/arrayList.h:129:21: note: candidate is: arrayList<T>::seamlessPointer& arrayList<T>::seamlessPointer::operator-(int) [with T = std::basic_string<char>, arrayList<T>::seamlessPointer = arrayList<std::basic_string<char> >::seamlessPointer] c:\wascana\mingw\bin\../lib/gcc/mingw32/4.5.0/include/c++/bits/stl_algo.h:5252:4: instantiated from 'void std::sort(_RAIter, _RAIter, _Compare) [with _RAIter = arrayList<std::basic_string<char> >::seamlessPointer, _Compare = bool (*)(std::basic_string<char>, std::basic_string<char>)]' ..\hw3prob2.cpp:50:46: instantiated from here c:\wascana\mingw\bin\../lib/gcc/mingw32/4.5.0/include/c++/bits/stl_algo.h:2190:7: error: no match for 'operator-' in '__last - __first' ..\/arrayList.h:129:21: note: candidate is: arrayList<T>::seamlessPointer& arrayList<T>::seamlessPointer::operator-(int) [with T = std::basic_string<char>, arrayList<T>::seamlessPointer = arrayList<std::basic_string<char> >::seamlessPointer] Then I add operator -() in the seamlessPointer class ptrdiff_t operator -(seamlessPointer &rhs) { return (arrayList<T>::iterator::position - rhs.position); } Then I compile successfully. But when I run it, I found memeory can not read error. 
I debugged and stepped in, and found that the error happens in this STL function: template<typename _RandomAccessIterator, typename _Distance, typename _Tp, typename _Compare> void __adjust_heap(_RandomAccessIterator __first, _Distance __holeIndex, _Distance __len, _Tp __value, _Compare __comp) { const _Distance __topIndex = __holeIndex; _Distance __secondChild = __holeIndex; while (__secondChild < (__len - 1) / 2) { __secondChild = 2 * (__secondChild + 1); if (__comp(*(__first + __secondChild), *(__first + (__secondChild - 1)))) __secondChild--; *(__first + __holeIndex) = _GLIBCXX_MOVE(*(__first + __secondChild)); ////// stop here __holeIndex = __secondChild; } Of course, there must be something wrong with the customized operators of my iterator. Does anyone know the possible reason? Thank you.

    Read the article

  • word wrap in tcpdf

    - by ChuckO
    I'm using tcpdf to creat a pdf version of the html table below. How do I word wrap the text in the cells? <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"> <html> <head> <style type="text/css"> table.frm { width: 960px; Height:400px; margin-left: auto; margin-right: auto; border-width: 0px 0px 0px 0px; border-spacing: 0px; border-style: solid solid solid solid; border-color: gray gray gray gray; border-collapse: collapse; background-color: white; font-family: Verdana,Arial,Helvetica,sans-serif; font-size: 11px; } table.frm th { Width: 120px; border-width: 1px 1px 1px 1px; padding: 1px 1px 1px 1px; border-style: solid solid solid solid; border-collapse: collapse; border-color: gray gray gray gray; background-color: white; } table.frm td { width: 120px; height: 80px; vertical-align: top; border-width: 1px 1px 1px 1px; padding: 2px 2px 2px 2px; border-style: solid solid solid solid; border-collapse: collapse; border-color: gray gray gray gray; background-color: white; } </style> <title>Weekly Menu</title> </head> <body> <table class="frm"> <tr> <th align="center" colspan="8"><b>WEEKLY MENU</b></th> </tr> <tr> <th align="center" colspan="8"><b>Your Name Here</b></th> </tr> <tr> <th></th> <th>Monday</th> <th>Tuesday</th> <th>Wednesday</th> <th>Thursday</th> <th>Friday</th> <th>Saturday</th> <th>Sunday</th> </tr> <tr> <td><b>Breakfast</b></td> <td>Scrambled Eggs Black Coffee</td> <td>Vegetable Omelet Black Coffee</td> <td>2 slices Toast Black Coffee</td> <td>Cereal w/milk Black Coffee</td> <td>Orange Juice Black Coffee</td> <td>Cereal w/milk Black Coffee</td> <td>Pancakes w/syrup Black Coffee</td> </tr> <tr> <td><b>Lunch</b></td> <td>Tuna Salad Sandwich Diet Coke</td> <td>Greek Salad Black Coffee</td> <td></td> <td>Amer Cheese Sandwich Orange Juice</td> <td></td> <td></td> <td></td> </tr> <tr> <td><b>Dinner</b></td> <td>Burger Fried Onions Diet Coke</td> <td>Steak Fries Diet Sprite</td> <td></td> <td>Chicken Cutlet Baked Potato Peas</td> <td></td> <td></td> <td></td> </tr> <tr> <td><b>Snack</b></td> <td>Apple</td> <td>Orange</td> <td>Sm bag of chips</td> <td>Celery Sticks</td> <td></td> <td></td> <td></td> </tr> </table> </body> </html> This is the tcpdf code: $pdf = new TCPDF('Landscape', 'mm', '', true, 'UTF-8', false); $pdf->SetTitle('Weekly Menu'); $pdf->SetMargins(15, 7.5, 12.5); $pdf->SetAutoPageBreak(TRUE, PDF_MARGIN_BOTTOM); $pdf->SetPrintHeader(false); $pdf->SetPrintFooter(false); $pdf->AddPage(); $pdf->setFormDefaultProp(array('lineWidth'=>0, 'borderStyle'=>'dot', 'fillColor'=>array(235, 235, 255), 'strokeColor'=>array(255,255,250))); $pdf->SetFont('times', 'BU', 12); $pdf->cell(250, 8, 'Weekly Menu', 0, 1, 'C'); $pdf->cell(250, 8, $yourname, 0, 1, 'C'); $pdf->SetFont('times', '', 10); $cw=35; $ch=25; $pdf->SetXY(15,50); $pdf->cell(25,5,'',1,0,'L'); $pdf->cell($cw,5,$day1,1,0,'C'); $pdf->cell($cw,5,$day2,1,0,'C'); $pdf->cell($cw,5,$day3,1,0,'C'); $pdf->cell($cw,5,$day4,1,0,'C'); $pdf->cell($cw,5,$day5,1,0,'C'); $pdf->cell($cw,5,$day6,1,0,'C'); $pdf->cell($cw,5,$day7,1,1,'C'); $pdf->cell(25,$ch,'Breakfast',1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[0]->breakfast,1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[1]->breakfast,1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[2]->breakfast,1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[3]->breakfast,1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[4]->breakfast,1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[5]->breakfast,1,0,'L',0,0,false,'','T'); 
$pdf->cell($cw,$ch,$record[6]->breakfast,1,1,'L',0,0,false,'','T'); $pdf->cell(25,$ch,'Lunch',1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[0]->lunch,1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[1]->lunch,1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[2]->lunch,1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[3]->lunch,1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[4]->lunch,1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[5]->lunch,1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[6]->lunch,1,1,'L',0,0,false,'','T'); $pdf->cell(25,$ch,'Dinner',1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[0]->dinner,1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[1]->dinner,1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[2]->dinner,1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[3]->dinner,1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[4]->dinner,1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[5]->dinner,1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[6]->dinner,1,1,'L',0,0,false,'','T'); $pdf->cell(25,$ch,'Snack',1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[0]->snack,1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[1]->snack,1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[2]->snack,1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[3]->snack,1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[4]->snack,1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[5]->snack,1,0,'L',0,0,false,'','T'); $pdf->cell($cw,$ch,$record[6]->snack,1,1,'L',0,0,false,'','T'); EOD;

    Read the article

  • Troubleshoot Perl module installation on Mac OS X

    - by Daniel Standage
    I'm trying to install the Perl module Set::IntervalTree on Mac OS X. I recently installed it today on an Ubuntu box with no problem. I simply started cpan, entered install Set:IntervalTree, and it all worked out. However, the installation failed on Mac OS X--it spits out a huge list of compiler errors (below). How would I troubleshoot this. I don't even know where to begin. cpan[1]> install Set::IntervalTree CPAN: Storable loaded ok (v2.18) Going to read /Users/standage/.cpan/Metadata Database was generated on Fri, 14 Jan 2011 02:58:42 GMT CPAN: YAML loaded ok (v0.72) Going to read /Users/standage/.cpan/build/ ............................................................................DONE Found 1 old build, restored the state of 1 Running install for module 'Set::IntervalTree' Running make for B/BE/BENBOOTH/Set-IntervalTree-0.01.tar.gz CPAN: Digest::SHA loaded ok (v5.45) CPAN: Compress::Zlib loaded ok (v2.008) Checksum for /Users/standage/.cpan/sources/authors/id/B/BE/BENBOOTH/Set-IntervalTree-0.01.tar.gz ok Scanning cache /Users/standage/.cpan/build for sizes ............................................................................DONE x Set-IntervalTree-0.01/ x Set-IntervalTree-0.01/src/ x Set-IntervalTree-0.01/src/Makefile x Set-IntervalTree-0.01/src/interval_tree.h x Set-IntervalTree-0.01/src/test_main.cc x Set-IntervalTree-0.01/lib/ x Set-IntervalTree-0.01/lib/Set/ x Set-IntervalTree-0.01/lib/Set/IntervalTree.pm x Set-IntervalTree-0.01/Changes x Set-IntervalTree-0.01/MANIFEST x Set-IntervalTree-0.01/t/ x Set-IntervalTree-0.01/t/Set-IntervalTree.t x Set-IntervalTree-0.01/typemap x Set-IntervalTree-0.01/perlobject.map x Set-IntervalTree-0.01/IntervalTree.xs x Set-IntervalTree-0.01/Makefile.PL x Set-IntervalTree-0.01/README x Set-IntervalTree-0.01/META.yml CPAN: File::Temp loaded ok (v0.18) CPAN.pm: Going to build B/BE/BENBOOTH/Set-IntervalTree-0.01.tar.gz Checking if your kit is complete... 
Looks good Writing Makefile for Set::IntervalTree cp lib/Set/IntervalTree.pm blib/lib/Set/IntervalTree.pm AutoSplitting blib/lib/Set/IntervalTree.pm (blib/lib/auto/Set/IntervalTree) /usr/bin/perl /System/Library/Perl/5.10.0/ExtUtils/xsubpp -C++ -typemap /System/Library/Perl/5.10.0/ExtUtils/typemap -typemap perlobject.map -typemap typemap IntervalTree.xs > IntervalTree.xsc && mv IntervalTree.xsc IntervalTree.c g++ -c -Isrc -arch x86_64 -arch i386 -arch ppc -g -pipe -fno-common -DPERL_DARWIN -fno-strict-aliasing -I/usr/local/include -g -O0 -DVERSION=\"0.01\" -DXS_VERSION=\"0.01\" "-I/System/Library/Perl/5.10.0/darwin-thread-multi-2level/CORE" -Isrc IntervalTree.c In file included from /usr/include/c++/4.2.1/bits/basic_ios.h:44, from /usr/include/c++/4.2.1/ios:50, from /usr/include/c++/4.2.1/ostream:45, from /usr/include/c++/4.2.1/iostream:45, from IntervalTree.xs:16: /usr/include/c++/4.2.1/bits/locale_facets.h:4420:40: error: macro "do_open" requires 7 arguments, but only 2 given /usr/include/c++/4.2.1/bits/locale_facets.h:4467:34: error: macro "do_close" requires 2 arguments, but only 1 given /usr/include/c++/4.2.1/bits/locale_facets.h:4486:55: error: macro "do_open" requires 7 arguments, but only 2 given /usr/include/c++/4.2.1/bits/locale_facets.h:4513:23: error: macro "do_close" requires 2 arguments, but only 1 given In file included from /usr/include/c++/4.2.1/bits/locale_facets.h:4599, from /usr/include/c++/4.2.1/bits/basic_ios.h:44, from /usr/include/c++/4.2.1/ios:50, from /usr/include/c++/4.2.1/ostream:45, from /usr/include/c++/4.2.1/iostream:45, from IntervalTree.xs:16: /usr/include/c++/4.2.1/i686-apple-darwin10/x86_64/bits/messages_members.h:58:38: error: macro "do_open" requires 7 arguments, but only 2 given /usr/include/c++/4.2.1/i686-apple-darwin10/x86_64/bits/messages_members.h:67:71: error: macro "do_open" requires 7 arguments, but only 2 given /usr/include/c++/4.2.1/i686-apple-darwin10/x86_64/bits/messages_members.h:78:39: error: macro "do_close" requires 2 arguments, but only 1 given In file included from /usr/include/c++/4.2.1/bits/basic_ios.h:44, from /usr/include/c++/4.2.1/ios:50, from /usr/include/c++/4.2.1/ostream:45, from /usr/include/c++/4.2.1/iostream:45, from IntervalTree.xs:16: /usr/include/c++/4.2.1/bits/locale_facets.h:4486: error: ‘do_open’ declared as a ‘virtual’ field /usr/include/c++/4.2.1/bits/locale_facets.h:4486: error: expected ‘;’ before ‘const’ /usr/include/c++/4.2.1/bits/locale_facets.h:4513: error: variable or field ‘do_close’ declared void /usr/include/c++/4.2.1/bits/locale_facets.h:4513: error: expected ‘;’ before ‘const’ In file included from /usr/include/c++/4.2.1/bits/locale_facets.h:4599, from /usr/include/c++/4.2.1/bits/basic_ios.h:44, from /usr/include/c++/4.2.1/ios:50, from /usr/include/c++/4.2.1/ostream:45, from /usr/include/c++/4.2.1/iostream:45, from IntervalTree.xs:16: /usr/include/c++/4.2.1/i686-apple-darwin10/x86_64/bits/messages_members.h:67: error: expected initializer before ‘const’ /usr/include/c++/4.2.1/i686-apple-darwin10/x86_64/bits/messages_members.h:78: error: expected initializer before ‘const’ In file included from IntervalTree.xs:19: src/interval_tree.h:95: error: type/value mismatch at argument 1 in template parameter list for ‘template<class _Tp, class _Alloc> class std::vector’ src/interval_tree.h:95: error: expected a type, got ‘IntervalTree<T,N>::it_recursion_node’ src/interval_tree.h:95: error: template argument 2 is invalid src/interval_tree.h: In constructor ‘IntervalTree<T, N>::IntervalTree()’: 
src/interval_tree.h:130: error: expected type-specifier src/interval_tree.h:130: error: expected `;' src/interval_tree.h:135: error: expected type-specifier src/interval_tree.h:135: error: expected `;' src/interval_tree.h:141: error: request for member ‘push_back’ in ‘((IntervalTree<T, N>*)this)->IntervalTree<T, N>::recursionNodeStack’, which is of non-class type ‘int’ src/interval_tree.h: In member function ‘void IntervalTree<T, N>::LeftRotate(IntervalTree<T, N>::Node*)’: src/interval_tree.h:178: error: ‘y’ was not declared in this scope src/interval_tree.h: In member function ‘void IntervalTree<T, N>::RightRotate(IntervalTree<T, N>::Node*)’: src/interval_tree.h:240: error: ‘x’ was not declared in this scope src/interval_tree.h: In member function ‘void IntervalTree<T, N>::TreeInsertHelp(IntervalTree<T, N>::Node*)’: src/interval_tree.h:298: error: ‘x’ was not declared in this scope src/interval_tree.h:299: error: ‘y’ was not declared in this scope src/interval_tree.h: In member function ‘typename IntervalTree<T, N>::Node* IntervalTree<T, N>::insert(const T&, N, N)’: src/interval_tree.h:375: error: ‘y’ was not declared in this scope src/interval_tree.h:376: error: ‘x’ was not declared in this scope src/interval_tree.h:377: error: ‘newNode’ was not declared in this scope src/interval_tree.h:379: error: expected type-specifier src/interval_tree.h:379: error: expected `;' src/interval_tree.h: In member function ‘typename IntervalTree<T, N>::Node* IntervalTree<T, N>::GetSuccessorOf(IntervalTree<T, N>::Node*) const’: src/interval_tree.h:450: error: ‘y’ was not declared in this scope src/interval_tree.h: In member function ‘typename IntervalTree<T, N>::Node* IntervalTree<T, N>::GetPredecessorOf(IntervalTree<T, N>::Node*) const’: src/interval_tree.h:483: error: ‘y’ was not declared in this scope src/interval_tree.h: In destructor ‘IntervalTree<T, N>::~IntervalTree()’: src/interval_tree.h:546: error: ‘x’ was not declared in this scope src/interval_tree.h:547: error: type/value mismatch at argument 1 in template parameter list for ‘template<class _Tp, class _Alloc> class std::vector’ src/interval_tree.h:547: error: expected a type, got ‘(IntervalTree<T,N>::Node * <expression error>)’ src/interval_tree.h:547: error: template argument 2 is invalid src/interval_tree.h:547: error: invalid type in declaration before ‘;’ token src/interval_tree.h:551: error: request for member ‘push_back’ in ‘stuffToFree’, which is of non-class type ‘int’ src/interval_tree.h:554: error: request for member ‘push_back’ in ‘stuffToFree’, which is of non-class type ‘int’ src/interval_tree.h:557: error: request for member ‘empty’ in ‘stuffToFree’, which is of non-class type ‘int’ src/interval_tree.h:558: error: request for member ‘back’ in ‘stuffToFree’, which is of non-class type ‘int’ src/interval_tree.h:559: error: request for member ‘pop_back’ in ‘stuffToFree’, which is of non-class type ‘int’ src/interval_tree.h:561: error: request for member ‘push_back’ in ‘stuffToFree’, which is of non-class type ‘int’ src/interval_tree.h:564: error: request for member ‘push_back’ in ‘stuffToFree’, which is of non-class type ‘int’ src/interval_tree.h: In member function ‘void IntervalTree<T, N>::DeleteFixUp(IntervalTree<T, N>::Node*)’: src/interval_tree.h:613: error: ‘w’ was not declared in this scope src/interval_tree.h:614: error: ‘rootLeft’ was not declared in this scope src/interval_tree.h: In member function ‘T IntervalTree<T, N>::remove(IntervalTree<T, N>::Node*)’: src/interval_tree.h:697: error: ‘y’ was not declared in this 
scope src/interval_tree.h:698: error: ‘x’ was not declared in this scope src/interval_tree.h: In member function ‘std::vector<T, std::allocator<_CharT> > IntervalTree<T, N>::fetch(N, N)’: src/interval_tree.h:819: error: ‘x’ was not declared in this scope src/interval_tree.h:833: error: invalid types ‘int[size_t]’ for array subscript src/interval_tree.h:836: error: request for member ‘push_back’ in ‘((IntervalTree<T, N>*)this)->IntervalTree<T, N>::recursionNodeStack’, which is of non-class type ‘int’ src/interval_tree.h:837: error: request for member ‘back’ in ‘((IntervalTree<T, N>*)this)->IntervalTree<T, N>::recursionNodeStack’, which is of non-class type ‘int’ src/interval_tree.h:838: error: request for member ‘back’ in ‘((IntervalTree<T, N>*)this)->IntervalTree<T, N>::recursionNodeStack’, which is of non-class type ‘int’ src/interval_tree.h:839: error: request for member ‘back’ in ‘((IntervalTree<T, N>*)this)->IntervalTree<T, N>::recursionNodeStack’, which is of non-class type ‘int’ src/interval_tree.h:840: error: request for member ‘size’ in ‘((IntervalTree<T, N>*)this)->IntervalTree<T, N>::recursionNodeStack’, which is of non-class type ‘int’ src/interval_tree.h:846: error: request for member ‘size’ in ‘((IntervalTree<T, N>*)this)->IntervalTree<T, N>::recursionNodeStack’, which is of non-class type ‘int’ src/interval_tree.h:847: error: expected `;' before ‘back’ src/interval_tree.h:848: error: request for member ‘pop_back’ in ‘((IntervalTree<T, N>*)this)->IntervalTree<T, N>::recursionNodeStack’, which is of non-class type ‘int’ src/interval_tree.h:850: error: ‘back’ was not declared in this scope src/interval_tree.h:853: error: invalid types ‘int[size_t]’ for array subscript IntervalTree.c: In function ‘void boot_Set__IntervalTree(PerlInterpreter*, CV*)’: IntervalTree.c:365: warning: deprecated conversion from string constant to ‘char*’ src/interval_tree.h: In constructor ‘IntervalTree<T, N>::IntervalTree() [with T = std::tr1::shared_ptr<sv>, N = long int]’: IntervalTree.c:67: instantiated from here src/interval_tree.h:130: error: cannot convert ‘int*’ to ‘IntervalTree<std::tr1::shared_ptr<sv>, long int>::Node*’ in assignment src/interval_tree.h:135: error: cannot convert ‘int*’ to ‘IntervalTree<std::tr1::shared_ptr<sv>, long int>::Node*’ in assignment ...blah blah blah... ...blah blah blah... ...blah blah blah... ...blah blah blah... ...blah blah blah... ...blah blah blah... 
src/interval_tree.h:848: error: request for member ‘pop_back’ in ‘((IntervalTree<T, N>*)this)->IntervalTree<T, N>::recursionNodeStack’, which is of non-class type ‘int’ src/interval_tree.h:850: error: ‘back’ was not declared in this scope src/interval_tree.h:853: error: invalid types ‘int[size_t]’ for array subscript IntervalTree.c: In function ‘void boot_Set__IntervalTree(PerlInterpreter*, CV*)’: IntervalTree.c:365: warning: deprecated conversion from string constant to ‘char*’ src/interval_tree.h: In constructor ‘IntervalTree<T, N>::IntervalTree() [with T = std::tr1::shared_ptr<sv>, N = long int]’: IntervalTree.c:67: instantiated from here src/interval_tree.h:130: error: cannot convert ‘int*’ to ‘IntervalTree<std::tr1::shared_ptr<sv>, long int>::Node*’ in assignment src/interval_tree.h:135: error: cannot convert ‘int*’ to ‘IntervalTree<std::tr1::shared_ptr<sv>, long int>::Node*’ in assignment src/interval_tree.h: In member function ‘typename IntervalTree<T, N>::Node* IntervalTree<T, N>::insert(const T&, N, N) [with T = std::tr1::shared_ptr<sv>, N = long int]’: IntervalTree.xs:57: instantiated from here src/interval_tree.h:375: error: dependent-name ‘IntervalTree<T,N>::Node’ is parsed as a non-type, but instantiation yields a type src/interval_tree.h:375: note: say ‘typename IntervalTree<T,N>::Node’ if a type is meant src/interval_tree.h:376: error: dependent-name ‘IntervalTree<T,N>::Node’ is parsed as a non-type, but instantiation yields a type src/interval_tree.h:376: note: say ‘typename IntervalTree<T,N>::Node’ if a type is meant src/interval_tree.h:377: error: dependent-name ‘IntervalTree<T,N>::Node’ is parsed as a non-type, but instantiation yields a type src/interval_tree.h:377: note: say ‘typename IntervalTree<T,N>::Node’ if a type is meant src/interval_tree.h: In member function ‘std::vector<T, std::allocator<_CharT> > IntervalTree<T, N>::fetch(N, N) [with T = std::tr1::shared_ptr<sv>, N = long int]’: IntervalTree.xs:65: instantiated from here src/interval_tree.h:819: error: dependent-name ‘IntervalTree<T,N>::Node’ is parsed as a non-type, but instantiation yields a type src/interval_tree.h:819: note: say ‘typename IntervalTree<T,N>::Node’ if a type is meant IntervalTree.xs:65: instantiated from here src/interval_tree.h:847: error: dependent-name ‘IntervalTree<T,N>::it_recursion_node’ is parsed as a non-type, but instantiation yields a type src/interval_tree.h:847: note: say ‘typename IntervalTree<T,N>::it_recursion_node’ if a type is meant src/interval_tree.h: In destructor ‘IntervalTree<T, N>::~IntervalTree() [with T = std::tr1::shared_ptr<sv>, N = long int]’: IntervalTree.c:205: instantiated from here src/interval_tree.h:546: error: dependent-name ‘IntervalTree<T,N>::Node’ is parsed as a non-type, but instantiation yields a type src/interval_tree.h:546: note: say ‘typename IntervalTree<T,N>::Node’ if a type is meant src/interval_tree.h: In member function ‘void IntervalTree<T, N>::TreeInsertHelp(IntervalTree<T, N>::Node*) [with T = std::tr1::shared_ptr<sv>, N = long int]’: src/interval_tree.h:380: instantiated from ‘typename IntervalTree<T, N>::Node* IntervalTree<T, N>::insert(const T&, N, N) [with T = std::tr1::shared_ptr<sv>, N = long int]’ IntervalTree.xs:57: instantiated from here src/interval_tree.h:298: error: dependent-name ‘IntervalTree<T,N>::Node’ is parsed as a non-type, but instantiation yields a type src/interval_tree.h:298: note: say ‘typename IntervalTree<T,N>::Node’ if a type is meant src/interval_tree.h:299: error: dependent-name ‘IntervalTree<T,N>::Node’ is 
parsed as a non-type, but instantiation yields a type src/interval_tree.h:299: note: say ‘typename IntervalTree<T,N>::Node’ if a type is meant src/interval_tree.h: In member function ‘void IntervalTree<T, N>::LeftRotate(IntervalTree<T, N>::Node*) [with T = std::tr1::shared_ptr<sv>, N = long int]’: src/interval_tree.h:395: instantiated from ‘typename IntervalTree<T, N>::Node* IntervalTree<T, N>::insert(const T&, N, N) [with T = std::tr1::shared_ptr<sv>, N = long int]’ IntervalTree.xs:57: instantiated from here src/interval_tree.h:178: error: dependent-name ‘IntervalTree<T,N>::Node’ is parsed as a non-type, but instantiation yields a type src/interval_tree.h:178: note: say ‘typename IntervalTree<T,N>::Node’ if a type is meant src/interval_tree.h: In member function ‘void IntervalTree<T, N>::RightRotate(IntervalTree<T, N>::Node*) [with T = std::tr1::shared_ptr<sv>, N = long int]’: src/interval_tree.h:399: instantiated from ‘typename IntervalTree<T, N>::Node* IntervalTree<T, N>::insert(const T&, N, N) [with T = std::tr1::shared_ptr<sv>, N = long int]’ IntervalTree.xs:57: instantiated from here src/interval_tree.h:240: error: dependent-name ‘IntervalTree<T,N>::Node’ is parsed as a non-type, but instantiation yields a type src/interval_tree.h:240: note: say ‘typename IntervalTree<T,N>::Node’ if a type is meant lipo: can't open input file: /var/tmp//ccLthuaw.out (No such file or directory) make: *** [IntervalTree.o] Error 1 BENBOOTH/Set-IntervalTree-0.01.tar.gz make -- NOT OK Running make test Can't test without successful make Running make install Make had returned bad status, install seems impossible Failed during this command: BENBOOTH/Set-IntervalTree-0.01.tar.gz : make NO
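    For what it is worth, the leading errors ("macro 'do_open' requires 7 arguments...") are a known clash between macros that perl.h defines and the C++ standard headers that IntervalTree.xs pulls in; the workaround usually suggested is to undefine those macros between the Perl and C++ includes. A sketch of the top of the .xs file (the real file's include order may differ, so this is illustrative only):
      /* perl.h defines do_open/do_close as macros, which breaks <iostream> and
         <locale> under Apple's g++ 4.2; undefining them after the Perl headers
         and before any C++ header usually clears the first wall of errors. */
      #include "EXTERN.h"
      #include "perl.h"
      #include "XSUB.h"

      #undef do_open
      #undef do_close

      #include <iostream>
    The later "dependent-name ... is parsed as a non-type" errors are in the bundled interval_tree.h itself (missing typename keywords, exactly as the compiler notes suggest), and the final lipo failure is just fallout from the earlier compile errors, so checking CPAN for a newer Set::IntervalTree release that already carries these fixes may be the quicker path.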

    Read the article

  • How to move a ruby on rails application to a new server

    - by ManiacZX
    I have a rails app on an old Ubuntu server I need to move onto a new machine. I haven't worked with ruby on rails so I don't really know anything about the structure of the app. I want to load this onto an Ubuntu 8.04 AMI on Amazon EC2 and am looking for any information regarding the migration process such as: Do I copy over the entire folder defined as the application root in the mongrel config (for ex: /u/apps/myapp/current) or just certain folders? Am I looking for trouble if I go with the latest versions of ruby and the various gems? Any general gotchas to look out for in the process. Current server information: root@webnode001:/# cat /proc/version Linux version 2.6.15-27-server (buildd@terranova) (gcc version 4.0.3 (Ubuntu 4.0.3-1ubuntu5)) #1 SMP Fri Dec 8 18:43:54 UTC 2006 root@webnode001:/# rails -v Rails 1.2.3 root@webnode001:/# mongrel_rails cluster::configure --version Version 1.0.1 root@webnode001:/# gem -v 0.9.0 root@webnode001:/# gem list -l *** LOCAL GEMS *** actionmailer (1.3.3, 1.2.5) Service layer for easy email delivery and testing. actionpack (1.13.3, 1.12.5) Web-flow and rendering framework putting the VC in MVC. actionwebservice (1.2.3, 1.1.6) Web service support for Action Pack. activerecord (1.15.3, 1.15.2, 1.14.4) Implements the ActiveRecord pattern for ORM. activesupport (1.4.2, 1.4.1, 1.3.1) Support and utility classes used by the Rails framework. cgi_multipart_eof_fix (2.1) Fix an exploitable bug in CGI multipart parsing which affects Ruby <= 1.8.5 when multipart boundary attribute contains a non-halting regular expression string. daemons (1.0.7, 1.0.5, 1.0.4, 1.0.2) A toolkit to create and control daemons in different ways eventmachine (0.7.2, 0.7.0) Ruby/EventMachine socket engine library fastercsv (1.2.0, 1.1.0) FasterCSV is CSV, but faster, smaller, and cleaner. fastthread (1.0) Optimized replacement for thread.rb primitives ferret (0.11.4) Ruby indexing library. gem_plugin (0.2.2, 0.2.1) A plugin system based only on rubygems that uses dependencies only mongrel (1.0.1, 0.3.13.4) A small fast HTTP library and server that runs Rails, Camping, Nitro and Iowa apps. mongrel_cluster (0.2.1) Mongrel plugin that provides commands and Capistrano tasks for managing multiple Mongrel processes. mysql (2.7) MySQL/Ruby provides the same functions for Ruby programs that the MySQL C API provides for C programs. piston (1.3.3) Piston is a utility that enables merge tracking of remote repositories. rails (1.2.3, 1.1.6) Web-application framework with template engine, control-flow layer, and ORM. rake (0.7.3, 0.7.1) Ruby based make-like utility. sources (0.0.1) This package provides download sources for remote gem installation swiftiply (0.5.1) A fast clustering proxy for web applications.
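    As a rough outline (a sketch only -- the paths and gem list below come from the listing above, the database name is a placeholder, and a Rails 1.2.3 app generally will not run unmodified on much newer Rails or gem versions, so pinning the old versions is the safer first move):
      # Copy the whole deployed tree (current/ is typically a symlink into releases/,
      # so follow it) plus any shared/ directory, and dump the database on the old box.
      rsync -avzL root@webnode001:/u/apps/myapp/current/ /u/apps/myapp/current/
      mysqldump -u root -p myapp_production > myapp_production.sql

      # On the new EC2 instance, install the matching stack rather than the latest:
      gem install rails --version 1.2.3
      gem install mongrel mongrel_cluster mysql daemons fastthread ferret fastercsv

      # Load the data, fix hostnames/paths in config/database.yml and the mongrel
      # cluster config, then bring the cluster up.
      mysql -u root -p myapp_production < myapp_production.sql
      mongrel_rails cluster::start -C /u/apps/myapp/current/config/mongrel_cluster.yml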

    Read the article

  • How to unlock and remove a protected partition from Prestigio USB stick?

    - by mr.b
    Ok, so, I have one of those fancy schmancy devices, which is given to me by a frustrated friend of mine. Device is a Prestigio Leather 8GB, which identifies itself to Linux host as: Bus 001 Device 006: ID 1307:0165 Transcend Information, Inc. 2GB/4GB Flash Drive Kernel messages as USB device is plugged in: kernel: [ 2769.580042] usb 1-9: new high speed USB device using ehci_hcd and address 7 kernel: [ 2769.714782] scsi8 : usb-storage 1-9:1.0 kernel: [ 2770.713937] scsi 8:0:0:0: Direct-Access 8192MB flash drive 1.00 PQ: 0 ANSI: 2 kernel: [ 2770.714535] scsi 8:0:0:1: Direct-Access 8192MB flash drive 1.00 PQ: 0 ANSI: 2 kernel: [ 2770.715734] sd 8:0:0:0: Attached scsi generic sg3 type 0 kernel: [ 2770.716108] sd 8:0:0:1: Attached scsi generic sg4 type 0 kernel: [ 2770.722175] sd 8:0:0:0: [sdc] 962560 512-byte logical blocks: (492 MB/470 MiB) kernel: [ 2770.722657] sd 8:0:0:0: [sdc] Write Protect is on kernel: [ 2770.731078] sd 8:0:0:1: [sdd] 14012416 512-byte logical blocks: (7.17 GB/6.68 GiB) kernel: [ 2770.731215] sdc: kernel: [ 2770.738251] sd 8:0:0:1: [sdd] Write Protect is off kernel: [ 2770.880328] kernel: [ 2770.885876] sd 8:0:0:0: [sdc] Attached SCSI removable disk kernel: [ 2770.887442] sdd: unknown partition table kernel: [ 2771.049605] sd 8:0:0:1: [sdd] Attached SCSI removable disk So, symptoms are typical for U3-like devices: two separate devices inside of a single flash device. Windows sees it also as two identical usb devices, and mounts two separate drives to system, whereas first one presents itself as a CDROM device, holding a write-protected content, and second is a regular flash-disk partition, that "can" be written to. However, it seems like it's broken in some weird way, since it won't let me write anything to it, format it, nothing, but that's not the issue right now. Question: How can I unlock entire USB stick so it appears to system as a single, 8GB device which can be partitioned and used normally, without restrictions? Since it appeared to be an U3 device, I have tried standard utilities: both U3 Uninstaller by u3.com (found on SoftPedia), and opensource u3_tool from sourceforge (on both Windows and Linux). First utility failed to even detect USB stick as U3 device (simply stood idle while I re-plugged stick several times), while second tool failed with some obscure error about SCSI command unable to do something (I might be able to provide exact errors when I switch back to windows). u3_tool -i /dev/sg3 (Display device info) fails with u3_partition_info() failed: Device reported command failed: status 1 ...and every other option fails with same error, minus first part which states which command precisely has failed. So, apparently, this isn't a U3 device. Or, if it is, it doesn't behave like one. I read on a few occasions that this device protection is done by special command sent to device which tells it to lock itself, and so there should be an unlock command, that would set drive straight. Does anyone have any idea about what could I do to this device to fix it? P.S. I also mentioned a problem with being unable to use second "drive", but I'll tackle that problem when (and if) I manage to merge those two devices into one...

    Read the article

  • Restore single users Exchange 2003 mailbox from backup

    - by Campo
    I take weekly backups of Exchange in full. I also take complete weekly backups of the entire server. It is a Server 2003 R2 with AD and Exchange 2003 all on one box. One user's inbox has disappeared. She has 19000+ junk items now. It is possible the inbox got mixed into the junk. Regardless, it is such a huge mess she is not going to go through all of that.... I want to restore her mailbox from the backup. I followed this MS KB: http://support.microsoft.com/kb/823176 and had to use Method 3. I have a VM of Server 2003 R2 with Exchange, but I am having failures on the restore from NT Backup. The backup log just states to check the application log.... The application log points to the backup log... The only info is that it failed to restore; the only thing different is the computer name... The only error I can find in the Application log is "Information Store Database not found". All others just say that the backup failed. Any assistance is greatly appreciated. UPDATE: I have successfully proven I can restore the DB into a recovery storage group on my VM. Unfortunately, because the actual account is on a different store, I am unable to do the recovery... The error is: The attempt to log on to the Microsoft Exchange Server computer has failed. The MAPI provider failed. Microsoft Exchange Server Information Store ID no: 8004011d-0512-00000000 Two questions: QUESTION 1 Should I repeat my steps on the production Exchange server in the recovery storage group, then merge into her original account? I am just concerned about doing a recovery like that on the live server.... QUESTION 2 Is there any way I can extract her .PST from my recovery VM and then import it into her Outlook? On the recovery VM, I restored the raw DB from my full backup, repaired it with ESEUTIL, then mounted it in the recovery store. I was thinking I could just repeat that and mount it in the main store on the VM? Thanks for the suggestions.

    Read the article

  • Resizing Partitions on Live RHEL/cPanel Server

    - by Timothy R. Butler
    I've resized many partitions over the years on Linux, Windows and Mac OS X -- but always using a GUI. However, the time has come where the preset partition sizes my data center placed on my server aren't the right sizes and I need to resize a production server's disks. I could fiddle with it and probably do OK, but given that it is a production server, I wanted to get some advice about the right way to do this. I do have KVM over IP access, so if it is best to take the server offline and boot off a rescue partition, I can do that. root [/var/lib/mysql]# df -h Filesystem Size Used Avail Use% Mounted on /dev/sda2 9.9G 2.1G 7.3G 23% / tmpfs 7.8G 0 7.8G 0% /dev/shm /dev/sda1 99M 77M 18M 82% /boot /dev/sda8 884G 463G 376G 56% /home /dev/sda3 9.9G 8.0G 1.5G 85% /usr /dev/sda5 9.9G 9.1G 308M 97% /var /usr/tmpDSK 2.0G 38M 1.8G 3% /tmp As you can see /var and /usr are quite close to being full and I've actually had to symlink some logs on /usr to directories in /home to balance things out. What I would like to do is to add 6-10 GB each to /usr and /var, presumably taking the space from /home. As I think about how the disk is arranged, the best thought I've come up with is to reduce /home by 16 GB, say, and move /var to the spot freed up, then allocating /var's space to /usr. However, that would put /var at the far end of the disk, which seems less than idea, given that MySQL has all of its data on that partition. I'd love to take the space out of the closer end of /usr, but I assume that would take a very arduous (and perhaps risky) process of moving all of the data in /usr around. I seem to recall having such a process fail for me on a computer in the past. The other option might be to merge / and /usr since / is underutilized, though I'm not sure if that's a good idea. Do you have any suggestions both on the best reallocation plan and the commands to use to accomplish it? UPDATE: I should add -- here's the partition table. There's one unused partition, which, if memory serves, was the original tmp location before I created a tmp image: Name Flags Part Type FS Type [Label] Size (MB) ------------------------------------------------------------------------------ Unusable 1.05* sda1 Boot Primary Linux ext2 106.96* sda2 Primary Linux ext3 10737.42* sda3 Primary Linux ext3 10737.42* sda5 NC Logical Linux ext3 10738.47* sda6 NC Logical Linux swap / Solaris 2148.54* sda7 NC Logical Linux ext3 1074.80* sda8 NC Logical Linux ext3 964098.53*
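    Whatever plan is chosen, it has to happen from the KVM rescue image, since /usr, /var and /home cannot be resized while mounted. One approach that avoids physically moving /usr or /var at all is to shrink /home at its tail, create a new partition in the freed space, and relocate /var onto it. A heavily abbreviated sketch follows (sizes and the new device name are illustrative, and a tested backup comes before any of it):
      e2fsck -f /dev/sda8                  # /home must be clean before resizing
      resize2fs /dev/sda8 860G             # shrink the filesystem first, leaving space free at the end
      # then shrink the sda8 partition in fdisk/parted -- keeping its original start
      # sector exactly as it was -- and create a new partition (say sda9) in the freed space
      mkfs.ext3 /dev/sda9
      mkdir -p /mnt/newvar /mnt/oldvar
      mount /dev/sda9 /mnt/newvar
      mount /dev/sda5 /mnt/oldvar          # the existing /var
      cp -a /mnt/oldvar/. /mnt/newvar/
      # point /var at /dev/sda9 in /etc/fstab and reboot
    The old /var partition (/dev/sda5) can then be repurposed -- for example mounted under /usr for the directories that were symlinked out -- without any further resizing.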

    Read the article

  • Conflict between two Javascripts (MailChimp validation etc. scripts & jQuery hSlides.js)

    - by Brian
    I have two scripts running on the same page, one is the jQuery.hSlides.js script http://www.jesuscarrera.info/demos/hslides/ and the other is a custom script that is used for MailChimp list signup integration. The hSlides panel can be seen in effect here: http://theatricalbellydance.com. I've turned off the MailChimp script because it was conflicting with the hSlides script, causing it not to to fail completely (as seen here http://theatricalbellydance.com/home2/). Can someone tell me what could be done to the hSlides script to stop the conflict with the MailChimp script? The MailChimp Script var fnames = new Array(); var ftypes = new Array(); fnames[0] = 'EMAIL'; ftypes[0] = 'email'; fnames[3] = 'MMERGE3'; ftypes[3] = 'text'; fnames[1] = 'FNAME'; ftypes[1] = 'text'; fnames[2] = 'LNAME'; ftypes[2] = 'text'; fnames[4] = 'MMERGE4'; ftypes[4] = 'address'; fnames[6] = 'MMERGE6'; ftypes[6] = 'number'; fnames[9] = 'MMERGE9'; ftypes[9] = 'text'; fnames[5] = 'MMERGE5'; ftypes[5] = 'text'; fnames[7] = 'MMERGE7'; ftypes[7] = 'text'; fnames[8] = 'MMERGE8'; ftypes[8] = 'text'; fnames[10] = 'MMERGE10'; ftypes[10] = 'text'; fnames[11] = 'MMERGE11'; ftypes[11] = 'text'; fnames[12] = 'MMERGE12'; ftypes[12] = 'text'; var err_style = ''; try { err_style = mc_custom_error_style; } catch (e) { err_style = 'margin: 1em 0 0 0; padding: 1em 0.5em 0.5em 0.5em; background: rgb(255, 238, 238) none repeat scroll 0% 0%; font-weight: bold; float: left; z-index: 1; width: 80%; -moz-background-clip: -moz-initial; -moz-background-origin: -moz-initial; -moz-background-inline-policy: -moz-initial; color: rgb(255, 0, 0);'; } var mce_jQuery = jQuery.noConflict(); mce_jQuery(document).ready(function ($) { var options = { errorClass: 'mce_inline_error', errorElement: 'div', errorStyle: err_style, onkeyup: function () {}, onfocusout: function () {}, onblur: function () {} }; var mce_validator = mce_jQuery("#mc-embedded-subscribe-form").validate(options); options = { url: 'http://theatricalbellydance.us1.list-manage.com/subscribe/post-json?u=1d127e7630ced825cb1a8b5a9&id=9f12d2a6bb&c=?', type: 'GET', dataType: 'json', contentType: "application/json; charset=utf-8", beforeSubmit: function () { mce_jQuery('#mce_tmp_error_msg').remove(); mce_jQuery('.datefield', '#mc_embed_signup').each(function () { var txt = 'filled'; var fields = new Array(); var i = 0; mce_jQuery(':text', this).each(function () { fields[i] = this; i++; }); mce_jQuery(':hidden', this).each(function () { if (fields[0].value == 'MM' && fields[1].value == 'DD' && fields[2].value == 'YYYY') { this.value = ''; } else if (fields[0].value == '' && fields[1].value == '' && fields[2].value == '') { this.value = ''; } else { this.value = fields[0].value + '/' + fields[1].value + '/' + fields[2].value; } }); }); return mce_validator.form(); }, success: mce_success_cb }; mce_jQuery('#mc-embedded-subscribe-form').ajaxForm(options); }); function mce_success_cb(resp) { mce_jQuery('#mce-success-response').hide(); mce_jQuery('#mce-error-response').hide(); if (resp.result == "success") { mce_jQuery('#mce-' + resp.result + '-response').show(); mce_jQuery('#mce-' + resp.result + '-response').html(resp.msg); mce_jQuery('#mc-embedded-subscribe-form').each(function () { this.reset(); }); } else { var index = -1; var msg; try { var parts = resp.msg.split(' - ', 2); if (parts[1] == undefined) { msg = resp.msg; } else { i = parseInt(parts[0]); if (i.toString() == parts[0]) { index = parts[0]; msg = parts[1]; } else { index = -1; msg = resp.msg; } } } catch (e) { index = -1; msg = 
resp.msg; } try { if (index == -1) { mce_jQuery('#mce-' + resp.result + '-response').show(); mce_jQuery('#mce-' + resp.result + '-response').html(msg); } else { err_id = 'mce_tmp_error_msg'; html = '<div id="' + err_id + '" style="' + err_style + '"> ' + msg + '</div>'; var input_id = '#mc_embed_signup'; var f = mce_jQuery(input_id); if (ftypes[index] == 'address') { input_id = '#mce-' + fnames[index] + '-addr1'; f = mce_jQuery(input_id).parent().parent().get(0); } else if (ftypes[index] == 'date') { input_id = '#mce-' + fnames[index] + '-month'; f = mce_jQuery(input_id).parent().parent().get(0); } else { input_id = '#mce-' + fnames[index]; f = mce_jQuery().parent(input_id).get(0); } if (f) { mce_jQuery(f).append(html); mce_jQuery(input_id).focus(); } else { mce_jQuery('#mce-' + resp.result + '-response').show(); mce_jQuery('#mce-' + resp.result + '-response').html(msg); } } } catch (e) { mce_jQuery('#mce-' + resp.result + '-response').show(); mce_jQuery('#mce-' + resp.result + '-response').html(msg); } } } The hslides script: /* * hSlides (1.0) // 2008.02.25 // <http://plugins.jquery.com/project/hslides> * * REQUIRES jQuery 1.2.3+ <http://jquery.com/> * * Copyright (c) 2008 TrafficBroker <http://www.trafficbroker.co.uk> * Licensed under GPL and MIT licenses * * hSlides is an horizontal accordion navigation, sliding the panels around to reveal one of interest. * * Sample Configuration: * // this is the minimum configuration needed * $('#accordion').hSlides({ * totalWidth: 730, * totalHeight: 140, * minPanelWidth: 87, * maxPanelWidth: 425 * }); * * Config Options: * // Required configuration * totalWidth: Total width of the accordion // default: 0 * totalHeight: Total height of the accordion // default: 0 * minPanelWidth: Minimum width of the panel (closed) // default: 0 * maxPanelWidth: Maximum width of the panel (opened) // default: 0 * // Optional configuration * midPanelWidth: Middle width of the panel (centered) // default: 0 * speed: Speed for the animation // default: 500 * easing: Easing effect for the animation. Other than 'swing' or 'linear' must be provided by plugin // default: 'swing' * sensitivity: Sensitivity threshold (must be 1 or higher) // default: 3 * interval: Milliseconds for onMouseOver polling interval // default: 100 * timeout: Milliseconds delay before onMouseOut // default: 300 * eventHandler: Event to open panels: click or hover. For the hover option requires hoverIntent plugin <http://cherne.net/brian/resources/jquery.hoverIntent.html> // default: 'click' * panelSelector: HTML element storing the panels // default: 'li' * activeClass: CSS class for the active panel // default: none * panelPositioning: Accordion panelPositioning: top -> first panel on the bottom and next on the top, other value -> first panel on the top and next to the bottom // default: 'top' * // Callback funtctions. Inside them, we can refer the panel with $(this). * onEnter: Funtion raised when the panel is activated. // default: none * onLeave: Funtion raised when the panel is deactivated. 
// default: none * * We can override the defaults with: * $.fn.hSlides.defaults.easing = 'easeOutCubic'; * * @param settings An object with configuration options * @author Jesus Carrera <[email protected]> */ (function($) { $.fn.hSlides = function(settings) { // override default configuration settings = $.extend({}, $.fn.hSlides.defaults, settings); // for each accordion return this.each(function(){ var wrapper = this; var panelLeft = 0; var panels = $(settings.panelSelector, wrapper); var panelPositioning = 1; if (settings.panelPositioning != 'top'){ panelLeft = ($(settings.panelSelector, wrapper).length - 1) * settings.minPanelWidth; panels = $(settings.panelSelector, wrapper).reverse(); panelPositioning = -1; } // necessary styles for the wrapper $(this).css('position', 'relative').css('overflow', 'hidden').css('width', settings.totalWidth).css('height', settings.totalHeight); // set the initial position of the panels var zIndex = 0; panels.each(function(){ // necessary styles for the panels $(this).css('position', 'absolute').css('left', panelLeft).css('zIndex', zIndex).css('height', settings.totalHeight).css('width', settings.maxPanelWidth); zIndex ++; // if this panel is the activated by default, set it as active and move the next (to show this one) if ($(this).hasClass(settings.activeClass)){ $.data($(this)[0], 'active', true); if (settings.panelPositioning != 'top'){ panelLeft = ($(settings.panelSelector, wrapper).index(this) + 1) * settings.minPanelWidth - settings.maxPanelWidth; }else{ panelLeft = panelLeft + settings.maxPanelWidth; } }else{ // check if we are centering and some panel is active // this is why we can't add/remove the active class in the callbacks: positioning the panels if we have one active if (settings.midPanelWidth && $(settings.panelSelector, wrapper).hasClass(settings.activeClass) == false){ panelLeft = panelLeft + settings.midPanelWidth * panelPositioning; }else{ panelLeft = panelLeft + settings.minPanelWidth * panelPositioning; } } }); // iterates through the panels setting the active and changing the position var movePanels = function(){ // index of the new active panel var activeIndex = $(settings.panelSelector, wrapper).index(this); // iterate all panels panels.each(function(){ // deactivate if is the active if ( $.data($(this)[0], 'active') == true ){ $.data($(this)[0], 'active', false); $(this).removeClass(settings.activeClass).each(settings.onLeave); } // set position of current panel var currentIndex = $(settings.panelSelector, wrapper).index(this); panelLeft = settings.minPanelWidth * currentIndex; // if the panel is next to the active, we need to add the opened width if ( (currentIndex * panelPositioning) > (activeIndex * panelPositioning)){ panelLeft = panelLeft + (settings.maxPanelWidth - settings.minPanelWidth) * panelPositioning; } // animate $(this).animate({left: panelLeft}, settings.speed, settings.easing); }); // activate the new active panel $.data($(this)[0], 'active', true); $(this).addClass(settings.activeClass).each(settings.onEnter); }; // center the panels if configured var centerPanels = function(){ var panelLeft = 0; if (settings.panelPositioning != 'top'){ panelLeft = ($(settings.panelSelector, wrapper).length - 1) * settings.minPanelWidth; } panels.each(function(){ $(this).removeClass(settings.activeClass).animate({left: panelLeft}, settings.speed, settings.easing); if ($.data($(this)[0], 'active') == true){ $.data($(this)[0], 'active', false); $(this).each(settings.onLeave); } panelLeft = panelLeft + settings.midPanelWidth * 
panelPositioning ; }); }; // event handling if(settings.eventHandler == 'click'){ $(settings.panelSelector, wrapper).click(movePanels); }else{ var configHoverPanel = { sensitivity: settings.sensitivity, interval: settings.interval, over: movePanels, timeout: settings.timeout, out: function() {} } var configHoverWrapper = { sensitivity: settings.sensitivity, interval: settings.interval, over: function() {}, timeout: settings.timeout, out: centerPanels } $(settings.panelSelector, wrapper).hoverIntent(configHoverPanel); if (settings.midPanelWidth != 0){ $(wrapper).hoverIntent(configHoverWrapper); } } }); }; // invert the order of the jQuery elements $.fn.reverse = function(){ return this.pushStack(this.get().reverse(), arguments); }; // default settings $.fn.hSlides.defaults = { totalWidth: 0, totalHeight: 0, minPanelWidth: 0, maxPanelWidth: 0, midPanelWidth: 0, speed: 500, easing: 'swing', sensitivity: 3, interval: 100, timeout: 300, eventHandler: 'click', panelSelector: 'li', activeClass: false, panelPositioning: 'top', onEnter: function() {}, onLeave: function() {} }; })(jQuery);
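    Nothing inside hSlides itself should collide -- the plugin is wrapped in (function($){...})(jQuery) -- but the MailChimp snippet calls jQuery.noConflict(), which removes the global $ that any inline accordion set-up code relies on. Here is a sketch of one way to make the page's own initialisation immune to that (the selector and option values are illustrative, not taken from the live site):
      // Pass jQuery into the ready handler so "$" is a local alias that still
      // works after jQuery.noConflict() runs in the MailChimp embed.
      jQuery(function ($) {
          $('#accordion').hSlides({
              totalWidth: 730,
              totalHeight: 140,
              minPanelWidth: 87,
              maxPanelWidth: 425
          });
      });
      // Load order still matters: jquery.js, hoverIntent/hSlides, this block,
      // and only then the MailChimp script.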

    Read the article

  • Apache proxy pass in nginx

    - by summerbulb
    I have the following configuration in Apache: RewriteEngine On #APP ProxyPass /abc/ http://remote.com/abc/ ProxyPassReverse /abc/ http://remote.com/abc/ #APP2 ProxyPass /efg/ http://remote.com/efg/ ProxyPassReverse /efg/ http://remote.com/efg/ I am trying to have the same configuration in nginx. After reading some links, this is what I have : server { listen 8081; server_name localhost; proxy_redirect http://localhost:8081/ http://remote.com/; location ^~/abc/ { proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://remote.com/abc/; } location ^~/efg/ { proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://remote.com/efg/; } } I already have the following configuration: server { listen 8080; server_name localhost; location / { root html; index index.html index.htm; } location ^~/myAPP { alias path/to/app; index main.html; } location ^~/myAPP/images { alias another/path/to/images autoindex on; } } The idea here is to overcome a same-origin-policy problem. The main pages are on localhost:8080 but we need ajax calls to http://remote.com/abc. Both domains are under my control. Using the above configuration, the ajax calls either don't reach the remote server or get cut off because of the cross origin. The above solution worked in Apache and isn't working in nginx, so I am assuming it's a configuration problem. I think there is an implicit question here: should I have two server declarations or should I somehow merge them into one? EDIT: Added some more information EDIT2: I've moved all the proxy_pass configuration into the main server declaration and changed all the ajax calls to go through port 8080. I am now getting a new error: 502 Connection reset by peer. Wireshark shows packets going out to http://remote.com with a bad IP header checksum.
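    On the implicit question: one server block is enough, and it also sidesteps the same-origin problem, because the browser only ever talks to port 8080 while nginx relays /abc/ and /efg/ to remote.com behind the scenes. A sketch that folds the proxy locations into the existing 8080 server (directives only -- the Host header value may need tuning for the remote vhost):
      server {
          listen 8080;
          server_name localhost;

          location / {
              root  html;
              index index.html index.htm;
          }

          location ^~ /myAPP {
              alias path/to/app;
              index main.html;
          }

          # same-origin proxying: ajax calls go to :8080/abc/... and are relayed
          location /abc/ {
              proxy_set_header Host            remote.com;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
              proxy_pass       http://remote.com/abc/;
          }

          location /efg/ {
              proxy_set_header Host            remote.com;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
              proxy_pass       http://remote.com/efg/;
          }
      }
    The later 502 with bad IP checksums in Wireshark more often points at checksum offloading on the capture interface, or at the network path between nginx and remote.com, than at the location blocks themselves.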

    Read the article

  • Can't connect to svnserve on localhost - connection actively refused

    - by RMorrisey
    When I try to connect using Tortoise to my SVN server using: svn://localhost/ Tortoise tells me: "Can't connect to host 'localhost'. No connection could be made because the target machine actively refused it." How can I fix this? I am trying to set up a subversion server on my local PC for personal use. I am running Windows Vista, with SlikSVN and TortoiseSVN installed. I previously had everything working correctly, but I found that I couldn't merge(!), apparently due to a version mismatch between the SVN client and server. Anyway... I now have the following setup: I created a repository using svnadmin create; it resides at C:\svnGrove C:\svnGrove\conf\svnserve.conf (# comments omitted): [general] anon-access=read auth-access=write password-db=passwd #authz-db=authz realm=svnGrove C:\svnGrove\conf\passwd: [users] myname=mypass My Subversion Server service is pointed to: C:\Program Files\SlikSvn\bin\svnserve.exe --service -r C:\svnGrove It shows the TCP/IP service as a dependency. I have also tried running svnserve from the command line, with similar results. The below is provided by the 'about' option in TortoiseSVN: TortoiseSVN 1.6.10, Build 19898 - 32 Bit , 2010/07/16 15:46:08 Subversion 1.6.12, apr 1.3.8 apr-utils 1.3.9 neon 0.29.3 OpenSSL 0.9.8o 01 Jun 2010 zlib 1.2.3 The following is from svn --version on the command line (not sure why it says CollabNet, CollabNet was the previous SVN binary that I had set up. The uninstaller failed to remove everything gracefully): svn, version 1.6.12 (SlikSvn/1.6.12) WIN32 compiled Jun 22 2010, 20:45:29 Copyright (C) 2000-2009 CollabNet. Subversion is open source software, see http://subversion.tigris.org/ This product includes software developed by CollabNet (http://www.Collab.Net/). The following repository access (RA) modules are available: * ra_neon : Module for accessing a repository via WebDAV protocol using Neon. - handles 'http' scheme - handles 'https' scheme * ra_svn : Module for accessing a repository using the svn network protocol. - with Cyrus SASL authentication - handles 'svn' scheme * ra_local : Module for accessing a repository on local disk. - handles 'file' scheme * ra_serf : Module for accessing a repository via WebDAV protocol using serf. - handles 'http' scheme - handles 'https' scheme I disabled my Windows Firewall and CA Internet Security, without success in resolving the issue. Edit The old version of svnserve was still set up as a service after the uninstall, pointed to this path: C:\Program Files\Subversion\svn-win32-1.4.6\bin I edited the registry key for the service to point to the new path (shown above). Whether I run svnserve as a service, or using -d, I do not see an entry for that port number in the listing generated by netstat -anp tcp.
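    Since netstat already shows nothing listening on the port, the service is almost certainly not starting (or is launching the wrong binary) rather than being blocked. A few checks worth running from an elevated command prompt (the service name "svnserve" below is whatever the Subversion Server service is actually registered as on this machine):
      :: Anything listening on svnserve's default port (3690)?
      netstat -an | findstr :3690
      :: Is the service running, and which binary/arguments is it registered with?
      sc query svnserve
      sc qc svnserve
      :: Rule the service out by running svnserve by hand, then test from a second console:
      "C:\Program Files\SlikSvn\bin\svnserve.exe" -d -r C:\svnGrove
      svn ls svn://localhost/
    If the hand-run svnserve works, the remaining problem is the service registration left behind by the old CollabNet install rather than the repository or the client.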

    Read the article
