Search Results

Search found 1273 results on 51 pages for 'vertex shader'.


  • Blender to 3ds Max to cal3d format

    - by Kaliber64
    There are quite a few questions on cal3d, but they are old and don't apply anymore. In Blender (it must be 2.49a for the Python export script to work!): I have a scene with 7 meshes, 1 armature, and 10 bones. I tried cutting down to one mesh to simplify things, but that doesn't change anything. I found a small blend file that was used for cal3d and it exported just fine, so I tried to copy its setup, with no success. EDIT 8/13/2012: Here is what I have found in the last week. I made the mesh in the newest Blender (2.62?) and exported it so I could import it into the old one (2.49a). I did the animation in the old one, because importing new blend files into old Blenders just says it will lose keyframe data, and all was good. Then you hit the last problem: it won't export the meshes. BUT I found that meshes made in the old version export regardless; I can't find any that won't export. So if I used the old Blender to remake my model I could get it to export :) At this point I found a modified release of cal3d which fixes the morph objects and adds what cal3d left off (I found it because the core model variable would not initiate, so I made a really small test subject in the old Blender instead of remaking my big one, which took 4 hours). Under their license they have to release the modification, but it has no documentation, so I have to figure it out on my own. It's mostly the same. This lib also came with a 3ds Max exporter. My question now is: how do I transfer armature and mesh information from Blender to 3ds Max in order to export into cal3d format? Every time I try, the models are see-through and small, and there are no bones. The formats I have tried to import are .3ds, .obj (mesh only) and COLLADA. In all of them the mesh is invisible and there are no bones. It says the default texture is on, so I should be able to see it. All the vertices are present; I found a vertex highlighter so I can see those. If any of this is confusing let me know so I can clear it up. It's late.

    Read the article

  • How to optimise mesh data

    - by Wardy
    So I have some procedurally generated mesh data and I want to reduce it down to its minimum number of verts. In case it matters, this is a Unity project. Working from a simple example, let's assume a typical flat surface of points, 2 by 3. The point/vertex at [1,1] is used in many triangles. I've generated mesh for a voxel-type engine that adds verts to a list based on face visibility, and now I want to remove all the duplicates. Can anyone come up with an efficient way of doing this? What I have is so bad it's not even funny (and I don't even think it's logically correct):

        private void Optimize()
        {
            Vector3 v;
            Vector3 v2;
            for (int i = 0; i < Vertices.Count; i++)
            {
                v = Vertices[i];
                for (int j = i + 1; j < Vertices.Count; j++)
                {
                    v2 = Vertices[j];
                    if (v.x == v2.x && v.y == v2.y && v.z == v2.z)
                    {
                        for (int ind = 0; ind < Indices.Count; ind++)
                        {
                            if (Indices[ind] == j)
                                Indices[ind] = i;
                            else if (Indices[ind] > j && Indices[ind] > 0)
                                Indices[ind]--;
                        }
                        Vertices.RemoveAt(j);
                        Uvs.RemoveAt(j);
                        Normals.RemoveAt(j);
                    }
                }
            }
        }

    EDIT: OK, I managed to get this (code sample above updated) to render an "optimised" set of verts, but the UV data was all wrong, which would make sense because I was basically just removing any UV vector that represented a UV coord for a removed vert and not actually considering what I needed to do to "fix the tri", so to speak. The code now seemingly does work, but it's quite time consuming, so I'm still looking to optimise it further.
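
    On the efficiency question: a hash map keyed on the vertex position collapses the O(n^2) pairwise scan (plus the index-shifting inner loop) into a single pass. The asker's project is C#/Unity, but the idea transposes directly; here is a minimal sketch in Java, with illustrative names, holding each position as a float[3]:

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        public class MeshOptimizer {
            // Deduplicate vertices in one pass: map each unique position to a
            // new index, then rewrite the index buffer. O(n) expected time
            // instead of the quadratic remove-and-shift loop.
            public static void dedupe(List<float[]> vertices, int[] indices) {
                Map<String, Integer> remap = new HashMap<>();
                List<float[]> unique = new ArrayList<>();
                int[] newIndex = new int[vertices.size()];

                for (int i = 0; i < vertices.size(); i++) {
                    float[] v = vertices.get(i);
                    // Exact bitwise key; quantize here if epsilon merging is needed.
                    String key = Float.floatToIntBits(v[0]) + ","
                               + Float.floatToIntBits(v[1]) + ","
                               + Float.floatToIntBits(v[2]);
                    Integer idx = remap.get(key);
                    if (idx == null) {
                        idx = unique.size();
                        remap.put(key, idx);
                        unique.add(v);
                    }
                    newIndex[i] = idx;
                }
                for (int t = 0; t < indices.length; t++) {
                    indices[t] = newIndex[indices[t]];
                }
                vertices.clear();
                vertices.addAll(unique);
            }
        }

    UVs and normals ride along via the same remap: keep a parallel "unique" list per attribute, appending only when a new index is created. One caveat the UV trouble already hints at: two verts that share a position but need different UVs or normals must not be merged, so in that case the key has to include those attributes too.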

    Read the article

  • Marshalling C# Structs into DX11 cbuffers

    - by Craig
    I'm having some issues with the packing of my structures in C# and passing them through to cbuffers I have registered in HLSL. When I pack my struct in one manner, the information passes to the shader correctly:

        [StructLayout(LayoutKind.Explicit, Size = 16)]
        internal struct TestStruct
        {
            [FieldOffset(0)] public Vector3 mEyePosition;
            [FieldOffset(12)] public int type;
        }

    This works perfectly against this HLSL fragment:

        cbuffer PerFrame : register(b0)
        {
            float3 eyePos;
            int type;
        };

        float3 GetColour()
        {
            float3 returnColour = float3(0.0f, 0.0f, 0.0f);
            switch (type)
            {
                case 0: returnColour = float3(1.0f, 0.0f, 0.0f); break;
                case 1: returnColour = float3(0.0f, 1.0f, 0.0f); break;
                case 2: returnColour = float3(0.0f, 0.0f, 1.0f); break;
            }
            return returnColour;
        }

    However, when I use the following structure definitions...

        // Note this is 16 because HLSL packs in 4-float 'chunks'.
        // It is also simplified, but still demonstrates the problem.
        [StructLayout(LayoutKind.Explicit, Size = 16)]
        internal struct InternalTestStruct
        {
            [FieldOffset(0)] public int type;
        }

        [StructLayout(LayoutKind.Explicit, Size = 32)]
        internal struct TestStruct
        {
            [FieldOffset(0)] public Vector3 mEyePosition;
            // Missing 4 bytes here for correct packing.
            [FieldOffset(16)] public InternalTestStruct mInternal;
        }

    ...the following HLSL fragment no longer works:

        struct InternalType
        {
            int type;
        };

        cbuffer PerFrame : register(b0)
        {
            float3 eyePos;
            InternalType internalStruct;
        };

        float3 GetColour()
        {
            float3 returnColour = float3(0.0f, 0.0f, 0.0f);
            switch (internalStruct.type)
            {
                case 0: returnColour = float3(1.0f, 0.0f, 0.0f); break;
                case 1: returnColour = float3(0.0f, 1.0f, 0.0f); break;
                case 2: returnColour = float3(0.0f, 0.0f, 1.0f); break;
            }
            return returnColour;
        }

    Is there a problem with the way I am packing the struct, or is it another issue? To reiterate: I can pass a struct in a cbuffer as long as it does not contain a nested struct.

    Read the article

  • Uniform not being applied to proper mesh

    - by HaMMeReD
    OK, I've got some code where you select blocks on a grid. The selection works: I can modify the blocks to be raised when selected, and the correct one shows. I set a color which I use in the shader. However, I am trying to change the color before rendering the geometry, and the last rendered geometry in the sequence is the one that renders light. To debug the logic I decided to move the selected block up and make it white, in which case one block moves up and another block becomes white. I checked all my logic; it knows the correct one is selected, shows it in the correct place, and renders it correctly. When there is only one entity it works properly. There is a video of the bug in action: note how the highlighted and elevated blocks are not the same block. The code for the color and my renderer is here (for the items being drawn):

        public void render(Renderer renderer) {
            mGrid.render(renderer, mGameState);
            for (Entity e : mGameEntities) {
                UnitTypes ut = UnitTypes.valueOf((String) e.getObject(D.UNIT_TYPE.ordinal()));
                if (ut == UnitTypes.Soldier) {
                    renderer.testShader.begin();
                    renderer.testShader.setUniformMatrix("u_mvpMatrix", mEntityMatrix);
                    renderer.texture_soldier.bind(0);
                    Vector2 pos = (Vector2) e.getObject(D.COORDS.ordinal());
                    mEntityMatrix.set(renderer.mCamera.combined);
                    if (mSelectedEntities.contains(e)) {
                        mEntityMatrix.translate(pos.x, 1f, pos.y);
                        renderer.testShader.setUniformf("v_color", 0.5f, 0.5f, 0.5f, 1f);
                    } else {
                        mEntityMatrix.translate(pos.x, 0f, pos.y);
                        renderer.testShader.setUniformf("v_color", 1f, 1f, 1f, 1f);
                    }
                    mEntityMatrix.scale(0.2f, 0.2f, 0.2f);
                    renderer.model_soldier.render(renderer.testShader, GL20.GL_TRIANGLES);
                    renderer.testShader.end();
                } else if (ut == UnitTypes.Enemy_Infiltrator) {
                    renderer.testShader.begin();
                    renderer.testShader.setUniformMatrix("u_mvpMatrix", mEntityMatrix);
                    renderer.testShader.setUniformf("v_color", 1.0f, 1, 1, 1.0f);
                    renderer.texture_enemy_infiltrator.bind(0);
                    Vector2 pos = (Vector2) e.getObject(D.COORDS.ordinal());
                    mEntityMatrix.set(renderer.mCamera.combined);
                    mEntityMatrix.translate(pos.x, 0f, pos.y);
                    mEntityMatrix.scale(0.2f, 0.2f, 0.2f);
                    renderer.model_enemy_infiltrator.render(renderer.testShader, GL20.GL_TRIANGLES);
                    renderer.testShader.end();
                }
            }
        }
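
    A detail worth checking, judging from the posted order of calls: setUniformMatrix("u_mvpMatrix", ...) uploads whatever mEntityMatrix contains at the moment of the call, but the matrix is only rebuilt (set, translate, scale) after that call. Each entity would then be drawn with the transform computed for the previous entity while receiving its own color, which would make the elevated block and the light block different blocks, exactly as described. A sketch of the Soldier branch with the upload moved after the matrix is finished:

        // Build the complete transform first, then upload uniforms, then draw.
        renderer.testShader.begin();
        Vector2 pos = (Vector2) e.getObject(D.COORDS.ordinal());
        boolean selected = mSelectedEntities.contains(e);
        mEntityMatrix.set(renderer.mCamera.combined);
        mEntityMatrix.translate(pos.x, selected ? 1f : 0f, pos.y);
        mEntityMatrix.scale(0.2f, 0.2f, 0.2f);
        // The matrix is complete, so this upload matches this entity.
        renderer.testShader.setUniformMatrix("u_mvpMatrix", mEntityMatrix);
        if (selected) {
            renderer.testShader.setUniformf("v_color", 0.5f, 0.5f, 0.5f, 1f);
        } else {
            renderer.testShader.setUniformf("v_color", 1f, 1f, 1f, 1f);
        }
        renderer.texture_soldier.bind(0);
        renderer.model_soldier.render(renderer.testShader, GL20.GL_TRIANGLES);
        renderer.testShader.end();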

    Read the article

  • GLM vector transformations [duplicate]

    - by Reanimation
    This question already has an answer here: Car-like Physics - Basic Maths to Simulate Steering (2 answers). I have a cube rendered on the screen which represents a car (or similar). Using projection/model matrices and GLM I am able to move it back and forth along the axes and rotate it left or right. I'm having trouble with the vector mathematics to make the cube move forwards no matter what its current orientation is (i.e. if it's rotated 30 degrees to the right, then when it moves forwards it should travel along that 30-degree heading). I hope I've explained that correctly. This is what I've managed so far, using GLM to move the cube:

        glm::vec3 vel; // velocity vector

        void renderMovingCube() {
            glUseProgram(movingCubeShader.handle());
            GLuint matrixLoc4MovingCube = glGetUniformLocation(movingCubeShader.handle(), "ProjectionMatrix");
            glUniformMatrix4fv(matrixLoc4MovingCube, 1, GL_FALSE, &ProjectionMatrix[0][0]);

            glm::mat4 viewMatrixMovingCube;
            viewMatrixMovingCube = glm::lookAt(camOrigin, camLookingAt, camNormalXYZ);

            vel.x = cos(rotX);
            vel.y = sin(rotX);
            vel *= moveCube;

            // move cube
            ModelViewMatrix = glm::translate(viewMatrixMovingCube, globalPos * vel);

            // bring ground and cube to bottom of screen
            ModelViewMatrix = glm::translate(ModelViewMatrix, glm::vec3(0, -48, 0));

            ModelViewMatrix = glm::rotate(ModelViewMatrix, rotX, glm::vec3(0, 1, 0)); // manually turn

            // pass matrix to shader
            glUniformMatrix4fv(glGetUniformLocation(movingCubeShader.handle(), "ModelViewMatrix"), 1, GL_FALSE, &ModelViewMatrix[0][0]);
            movingCube.render(); // draw
            glUseProgram(0);
        }

    Keyboard input:

        void keyboard() {
            char BACKWARD = keys['S'];
            char FORWARD = keys['W'];
            char ROT_LEFT = keys['A'];
            char ROT_RIGHT = keys['D'];

            if (FORWARD) // W - move forwards
            {
                globalPos += vel;
                // globalPos.z -= moveCube;
                BACKWARD = false;
            }
            if (BACKWARD) // S - move backwards
            {
                globalPos.z += moveCube;
                FORWARD = false;
            }
            if (ROT_LEFT) // A - turn left
            {
                rotX += 0.01f;
                ROT_LEFT = false;
            }
            if (ROT_RIGHT) // D - turn right
            {
                rotX -= 0.01f;
                ROT_RIGHT = false;
            }
        }

    Where am I going wrong with my vectors? I would like to change the direction of the cube (which works) and then move forwards in that direction.
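
    The heading math is the likely sticking point: the cube yaws around the Y axis, so its forward vector lies in the XZ plane, while vel.x = cos(rotX); vel.y = sin(rotX) steers the motion into the XY plane (Y is up). A second issue: glm::translate(viewMatrixMovingCube, globalPos * vel) multiplies the accumulated position componentwise by the velocity each frame instead of translating by the position itself. A sketch of the intended update, in Java syntax for illustration (the math is identical in GLM, and the sin/cos signs depend on which way the model faces at rotX = 0):

        // Accumulate position along the current heading; for a yaw angle
        // rotX about the Y axis, the heading lies in the XZ plane.
        class CubeMover {
            float posX, posZ; // accumulated world position
            float rotX;       // yaw in radians, name kept from the question

            void stepForward(float speed) {
                posX += (float) Math.sin(rotX) * speed; // heading x-component
                posZ += (float) Math.cos(rotX) * speed; // heading z-component
            }
        }

    The render side then translates by the accumulated position and rotates by rotX, in that order, as the posted code already does.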

    Read the article

  • OpenGL textures in bitmap mode

    - by evenex_code
    For reasons detailed here, I need to texture a quad using a bitmap (as in 1 bit per pixel, not an 8-bit pixmap). Right now I have a bitmap stored in an on-device buffer, and am mounting it like so:

        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, BFR.G[(T+1)%2]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, W, H, 0, GL_COLOR_INDEX, GL_BITMAP, 0);

    The OpenGL spec has this to say about glTexImage2D: "If type is GL_BITMAP, the data is considered as a string of unsigned bytes (and format must be GL_COLOR_INDEX). Each data byte is treated as eight 1-bit elements..." Judging by the spec, each bit in my buffer should correspond to a single pixel. However, the following experiments show that, for whatever reason, it doesn't work as advertised:

    1) When I build my texture, I write to the buffer in 32-bit chunks. From the wording of the spec, it is reasonable to assume that writing 0x00000001 for each value would result in a texture with 1-px-wide vertical bars with 31-px-wide spaces between them. However, it appears blank.

    2) Next, I write with 0x000000FF. By my apparently flawed understanding of bitmap mode, I would expect this to produce 8-wide bars with 24-wide spaces between them. Instead, it produces a single white 1-px-wide bar.

    3) 0x55555555 = 01010101010101010101010101010101 in binary, so writing this value ought to create 1-wide vertical stripes with 1-pixel spacing. However, it creates a solid gray color.

    4) Using my original 8-bit pixmap in GL_BITMAP mode produces the correct animation.

    I have reached the conclusion that, even in GL_BITMAP mode, the texturer is still interpreting 8 bits as 1 element, despite what the spec seems to suggest. The fact that I can generate a gray color (when I was expecting to be working in two-tone), as well as the fact that my original 8-bit pixmap generates the correct picture, support this conclusion. Questions:

    1) Am I missing some kind of prerequisite call (perhaps for setting a stride length or pack alignment or something) that will signal to the texturer to treat each byte as 8 elements, as the spec suggests?

    2) Or does it simply not work because modern hardware does not support it? (I have read that GL_BITMAP mode was deprecated in 3.3; I am, however, forcing a 3.0 context.)

    3) Am I better off unpacking the bitmap into a pixmap using a shader? This is a far more roundabout solution than I was hoping for, but I suppose there is no such thing as a free lunch.
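
    On question 3: unpacking the bits yourself is the dependable route on modern contexts, since GL_BITMAP belongs to the legacy paths. A sketch of the CPU-side expansion, in Java for brevity (the indexing is language-independent), assuming MSB-first bit order, which was the old unpack default with GL_UNPACK_LSB_FIRST = GL_FALSE:

        class BitmapUnpack {
            // Expand a 1-bit-per-pixel bitmap into one byte per pixel.
            // Rows are assumed packed to a byte boundary.
            static byte[] unpack(byte[] bits, int width, int height) {
                int stride = (width + 7) / 8; // bytes per row
                byte[] pixels = new byte[width * height];
                for (int y = 0; y < height; y++) {
                    for (int x = 0; x < width; x++) {
                        int packed = bits[y * stride + x / 8] & 0xFF;
                        int bit = (packed >> (7 - (x % 8))) & 1; // MSB first
                        pixels[y * width + x] = (byte) (bit != 0 ? 0xFF : 0x00);
                    }
                }
                return pixels;
            }
        }

    The result can then go up as a single-channel texture (GL_RED/GL_R8 is available in a 3.0 context) and be thresholded in the fragment shader.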

    Read the article

  • "Windows detected a hard drive" issue in Windows 7 x64

    - by Jasiu
    I upgraded to the OCZ Agility 3 120 GB from a 60 GB OCZ Vertex 2 SSD. I cloned the drive from the Vertex to the new Agility. Everything seemed to have gone well and I have not had any problems. But in the past month I have been getting the "Windows detected a hard drive problem" error from the title. I downloaded the OCZ Toolbox and ran the SMART utility, and I don't see anything wrong:

        SMART READ DATA
        Model Number: OCZ-AGILITY3
        Serial Number: OCZ-Y1945X77438P4NU6
        WWN: 5-e8-3a-97 ebea5ba76
        Revision: 10

        Attributes:
          1 SSD Raw Read Error Rate: normalized rate 70 (total ECC and RAISE errors)
          5 SSD Retired Block Count: reserve blocks remaining 100%
          9 SSD Power-On Hours: total hours power on 968
         12 SSD Power Cycle Count: count of power on/off cycles 28
        171 SSD Program Fail Count: total Flash program operation failures 0
        172 SSD Erase Fail Count: total Flash erase operation failures 0
        174 SSD Unexpected Power Loss Count: total unexpected power losses 11
        177 SSD Wear Range Delta: delta between most-worn and least-worn Flash blocks 0
        181 SSD Program Fail Count: total Flash program operation failures 0
        182 SSD Erase Fail Count: total Flash erase operation failures 0
        187 SSD Reported Uncorrectable Errors: uncorrectable RAISE errors reported to the host for all data access 4145
        194 SSD Temperature Monitoring: current 30, high 30, low 30
        195 SSD ECC On-the-fly Count: normalized rate 120
        196 SSD Reallocation Event Count: total reallocated Flash blocks 100
        201 SSD Uncorrectable Soft Read Error Rate: normalized rate 120
        204 SSD Soft ECC Correction Rate (RAISE): normalized rate 120
        230 SSD Life Curve Status: current state of drive operation based upon the life curve 100
        231 SSD Life Left: approximate SSD life remaining 100%
        241 SSD Lifetime Writes from Host: 893 GB
        242 SSD Lifetime Reads from Host: 968 GB

    Does anyone have any ideas of what might be wrong, and/or how I can go about fixing this? Please let me know if there is other information I can provide. Thanks for your help. System: Windows 7 x64 SP1, AMD Phenom II X4 940, 8 GB RAM.

    Read the article

  • Will adding an SSD cache device to my ZFS storage improve performance?

    - by Sysadminicus
    The server has 4 GB of RAM and my zpool is made up of 15.5k SAS drives arranged like this:

        NAME          STATE   READ WRITE CKSUM
        tank          ONLINE     0     0     0
          raidz1-0    ONLINE     0     0     0
            c0t2d0    ONLINE     0     0     0
            c0t3d0    ONLINE     0     0     0
            c0t4d0    ONLINE     0     0     0
            c0t5d0    ONLINE     0     0     0
            c0t6d0    ONLINE     0     0     0
            c0t7d0    ONLINE     0     0     0
            c0t8d0    ONLINE     0     0     0
          raidz1-1    ONLINE     0     0     0
            c0t10d0   ONLINE     0     0     0
            c0t11d0   ONLINE     0     0     0
            c0t12d0   ONLINE     0     0     0
            c0t13d0   ONLINE     0     0     0
            c0t14d0   ONLINE     0     0     0
        spares
          c0t9d0      AVAIL
          c0t1d0      AVAIL

    The primary use is as an NFS store for a couple of VMware ESXi servers. I can't do any "true" benchmarks because this is a production system (no budget for test systems), but using dd and bonnie++ I can't get more than ~40-50 MB/s writes and ~70-90 MB/s reads. It seems I should be able to do much better, but I'm not sure where to optimize. Based on what I've read, I think dropping in an OCZ Vertex 2 Pro SSD as my L2ARC is going to be the best bang for the buck to improve throughput. Is there something else I should be looking into to help performance? If not: how do I know how big a cache device I need? Am I safe with only a single SSD as my cache device?

    Read the article

  • Reduce power consumption of gaming computer while idle

    - by White Phoenix
    This is my current build:

    - EVGA X58 (first generation) motherboard
    - Intel i7 965 clocked @ 3.3 GHz
    - 3x DDR3-1600 Corsair RAM at stock timings and voltages
    - Corsair AX750 80 Plus Gold PSU
    - 1 optical drive
    - 1 Seagate 7200.10 500 GB drive
    - 2x Western Digital Caviar Black 1 TB drives
    - OCZ Vertex 1 60 GB
    - EVGA GTX 460 overclocked at 800/1600/1850
    - Antec 1200 case
    - HT-Omega Striker 7.1 sound card
    - Windows 7 32-bit Professional (PAE enabled)

    I've already seen the posts "Reduce power use on computer" and "How do I lower power consumption of my computer", and while useful, I'm looking for answers specific to my build and OS. I'm pretty sure this build is an energy-intensive build by default, but I want to try to reduce the amount of energy it uses when I leave it idle (when I go to bed or go out, etc.). The first requirement for this machine is that I need to leave it on, so I cannot turn it off while it's unused: I run it as a file server for personal reasons, and I also leave it on in case people leave me messages on various IM services and chat clients (IRC, MSN, Steam, XFire, Pidgin, etc.). I'm also unable to replace the parts in my computer with cheaper, "greener" parts. What are some ways to minimize the amount of power the machine uses? I'm already using a high-efficiency power supply (80 Plus Gold), but I imagine there are other things that can be done in the BIOS and in Windows' power settings to reduce power usage while I'm not using the computer. From what I can tell, I can't use Sleep, since that would disable network access (the whole reason I leave the computer on in the first place). I already turn off my monitor when it's not in use, and I enabled Intel SpeedStep in the BIOS (I know, I have a 965, so why am I enabling SpeedStep?). Should I bring the graphics card back to stock speeds and lower the clock on the processor even more? The main reason I'm asking is that I think this computer alone is the reason my power bill is high, so I want to reduce its consumption to as low as possible without having to shut the thing down.

    Read the article

  • ZFS with L2ARC (SSD) slower for random seeks than without L2ARC

    - by Florian Kruse
    I am currently testing ZFS (OpenSolaris 2009.06) on an older file server to evaluate its use for our needs. Our current setup is as follows: dual core (2.4 GHz) with 4 GB RAM; 3x SATA controllers with 11 HDDs (250 GB) and one SSD (OCZ Vertex 2 100 GB). We want to evaluate the use of an L2ARC, so the current zpool is:

        $ zpool status
          pool: tank
         state: ONLINE
         scrub: none requested
        config:

        NAME         STATE   READ WRITE CKSUM
        afstank      ONLINE     0     0     0
          raidz1     ONLINE     0     0     0
            c11t0d0  ONLINE     0     0     0
            c11t1d0  ONLINE     0     0     0
            c11t2d0  ONLINE     0     0     0
            c11t3d0  ONLINE     0     0     0
          raidz1     ONLINE     0     0     0
            c13t0d0  ONLINE     0     0     0
            c13t1d0  ONLINE     0     0     0
            c13t2d0  ONLINE     0     0     0
            c13t3d0  ONLINE     0     0     0
        cache
          c14t3d0    ONLINE     0     0     0

    where c14t3d0 is the SSD (of course). We run IO tests with bonnie++ 1.03d; size is set to 200 GB (-s 200g) so that the test sample will never be completely in ARC/L2ARC. The results without the SSD (average values over several runs, which show no differences):

        write_chr     write_blk     rewrite      read_chr     read_blk      random seeks
        101.998 kB/s  214.258 kB/s  96.673 kB/s  77.702 kB/s  254.695 kB/s  900 /s

    With the SSD it becomes interesting. My assumption was that the results should, in the worst case, be at least the same. While the write/read/rewrite rates are not different, the random seek rate differs significantly between individual bonnie++ runs (between 188 /s and 1333 /s so far); the average is 548 +- 200 /s, so below the value without the SSD. So, my questions are mainly: Why do the random seek rates differ so much? If the seeks are really random, they should not differ much (my assumption), so even if the SSD were impairing the performance, it should be the same in each bonnie++ run. And why is the random seek performance worse in most of the bonnie++ runs? I would have assumed that some part of the bonnie++ data sits in the L2ARC, that random seeks on this data perform better, and that random seeks on other data simply perform as before.

    Read the article

  • Stop the constant random reboots of my GIGABYTE GA-B75M-D3V

    - by Frederic
    I've got some issues with a new system: it's rebooting constantly. The system consists of:

    Brand new:
    - Gigabyte GA-B75M-D3V with F9 BIOS (latest)
    - Intel Core i5-3470 Ivy Bridge
    - 2x 8 GB G.SKILL Ripjaws 1600 MHz memory (tested with memtest86)

    Coming from a stable system:
    - Creative X-Fi Titanium sound card
    - Asus Radeon HD4850
    - OCZ Vertex 3 120 GB SSD (SATA 3)
    - 1 TB hard disk (SATA 2)
    - ASUS Blu-ray drive
    - 400 W PSU

    Connected peripherals:
    - Toshiba TV (DisplayPort on the DVI of the motherboard, or the HD4850)
    - wired mouse, wireless keyboard (Logitech)
    - Azio Bluetooth USB key

    The main problem: it's not possible to read error codes from this motherboard; there is nothing in the manual and nothing on the internet. At the beginning I received a board with graphics problems on top of the rebooting problem, so I RMA'd it. The new one doesn't have any graphics problems, but it is still constantly rebooting. I removed everything except the hard disk, the sound card, the Blu-ray drive and the wireless keyboard: still unexpected reboots. I'm now running a test with just the motherboard and the hard disk, and I will update this text after the test. I've got some questions: Does anybody have an idea for a test? Could the PSU cause this problem? I used it for years with the stable system. Update 1: By the way, if anyone has the same problem: the manual won't say it, but you'll need to reset the BIOS between two tests (the screwdriver on the two pins) if you suspect a compatibility problem.

    Read the article

  • Windows Server 2012 Hyper-V very slow

    - by Matt Taylor
    I have been running several Hyper-V VMs on Windows Server 2008 R2 for the past couple of years and enjoying perfectly adequate performance for my testing/development/R&D environments. I'm a software developer, so my hardware knowledge is basic; however, I built the rig using:

    - Gigabyte GA-X58A-UD3R Intel X58 (socket 1366) DDR3 motherboard
    - Intel Core i7 960 3.20 GHz (Bloomfield, socket LGA1366)
    - 24 GB triple-channel RAM

    The host OS runs on an OCZ SSD, and all the VMs run on a 2 TB Marvell SATA3 RAID 0 array consisting of 2 Western Digital Caviar Black 7,200 rpm drives. I have tested the speed of the 2 TB array and appear to be getting less than 3Mbs, but it can adequately run a 4-VM farm including a DC, a (SQL) database server and IIS application servers. I recently upgraded the SSD on which the host runs to a 256 GB OCZ Vertex 4 and took the opportunity to upgrade to Windows Server 2012 and install the Hyper-V role. I tried importing one of my existing Windows Server 2008 R2 VMs (converted to .vhdx), and I have also tried creating a brand-new Windows Server 2008 R2 VM, but both run extremely slowly and I can see nothing obvious using the host and guest Task Manager / Resource Monitor tools. In both cases the VM has 8 GB RAM (fixed), 4 CPUs, a fixed-size (non-expanding) HD, and uses an external virtual network running on a separate NIC from the host's. I have upgraded the BIOS to the latest available version and checked the virtualization settings. I have run out of "obvious" (to a developer) things to check and configure, and my next option will be to reinstall the host OS, but before I do I would very much appreciate any advice from the experts out there. Thanks.

    Read the article

  • My SSD stopped working: it blue-screens right after Windows 7 launches. How can I reset it for a new Windows install?

    - by HattoriHanzo
    As I mentioned in the title, my SSD suddenly started to fail: after launching Windows 7 it goes straight to completely idle, and after a few minutes it blue-screens and restarts. I have another HD with Windows XP on it which works fine, and from there I can also see the SSD and access everything on it. Windows 7 on the SSD does work in safe mode, though I haven't managed to find out what causes the problem. Since I can still access the files and save them to another HD, I'm looking for the best way to wipe the drive and reinstall Windows 7. I have so far failed to find an easy-to-follow (or even understandable) guide: different sources on the internet recommend different ways of doing this, and some guides are simply full of terms I have no clue about. It's an OCZ Vertex 2 120 GB. I'd very much appreciate advice on what I should do, preferably in a way that lets me regain the best possible performance as well. It doesn't matter if I don't understand the science behind it, as long as I can follow the steps. Thanks!

    Read the article

  • Is it a good idea to switch to an SSD to use less battery?

    - by Walter Maier-Murdnelch
    I am thinking of buying an SSD for my laptop, mainly for the purpose of extended operating time when running on battery. At the moment I use a Hitachi HTS545032B9A300 (320 GB) as my main drive and a Seagate Momentus 5400.3 120 GB as a secondary drive. I dual-boot Windows and Linux, but I don't need the Windows partition any longer, so a 120 GB SSD would be more than sufficient space-wise. Speed is not an issue for me: I make heavy use of tmpfs (RAM disk) within Linux, and transfers of bigger files mainly go through a network filesystem anyway, so a cheaper SSD should do. For the purpose of comparison I chose the OCZ Vertex Plus 120 GB. Power consumption is always a big promotional point the industry uses to make me want to buy their SSDs, and one sheet on the OCZ page provides an astonishing comparison of desktop HDDs and SSDs. The numbers I got comparing my laptop HDD and their SSD were not so astonishing:

        Hitachi 320 GB HDD (from its datasheet):
        - Startup (W, peak, max.): 4.5
        - Seek (W, avg.): 1.7
        - Read/write (W, avg.): 1.4
        - Performance idle (W, avg.): 1.3
        - Active idle (W, avg.): 0.8
        - Low power idle (W, avg.): 0.5
        - Standby (W, avg.): 0.2
        - Sleep: 0.1

        OCZ 120 GB SSD:
        - Active: 1.5 W
        - Standby: 0.3 W

    I see that there are differences, but they are not as big as I thought they would be, and compared to the power consumption of the rest of my system I wonder if it makes a difference at all. Have I taken the wrong view of the whole thing, or might I be better off buying another battery for my laptop?

    Read the article

  • Does having TRIM enabled affect other hard drives on a computer (and how do you know when Windows is using it)?

    - by Breakthrough
    I recently purchased a solid-state drive (an OCZ Vertex 2, 80 GB) to use as my primary operating system partition. I also have three other SATA hard drives of assorted sizes. I successfully installed Windows 7 Professional onto the SSD (it works great: excellent response time and transfer rate) and used the other three HDDs for data storage. I was browsing through the Bible of OCZ SSDs and noticed the following in Section 60-76, Tweaks and TRIM:

        Q. How do I know if TRIM is enabled on my OCZ SSD?
        A. In Windows 7, go to Start/Run/cmd and type the following:

            fsutil.exe behavior query DisableDeleteNotify

        It should respond with DisableDeleteNotify = 0 if TRIM support
        is ready and active. If it's not, then type:

            fsutil.exe behavior set DisableDeleteNotify 0

    After a bit of searching on Google, I found similar advice elsewhere: set DisableDeleteNotify to 0, which makes sense, since for TRIM to work the solid-state drive needs to be notified when deletes occur (for the garbage collector), unlike a normal hard drive. When I ran the query on fsutil, I got the following result:

        DisableDeleteNotify = 48

    Following the instructions I found, I set this to 0 instead of 48. However, I am beginning to wonder: is this all the proof I really need that the OS is using TRIM? Also, since this setting applies globally to the computer, is TRIM data being sent to the other hard drives connected to the computer? And if so, would this cause any degradation in disk performance?

    Read the article

  • How to prevent an SSD from disappearing from BIOS

    - by Midimatt
    I've only recently upgraded my old machine to a new one, with a brand-new 60 GB SSD as my boot drive and a 1 TB main drive. Paranoid about completely breaking my SSD, I read up on a lot of issues I needed to watch out for, including making sure AHCI was turned on and TRIM enabled. The PC worked fine for a few weeks, until today. My wife was watching some TV on the machine when it started to act strangely and eventually blue-screened. She rebooted, and the boot manager was missing. When I got home from work I checked the BIOS, and the drive had disappeared. I panicked and looked up some possible fixes, and discovered a large number of people having problems with drive firmware, especially on OCZ Vertex and Agility drives; my drive is an Agility 3. The problems included blue screens followed by missing drives, and one solution was to reset the CMOS and try again. This worked, and now everything seems to be working fine. My question is: is there any way to prevent this from happening? Am I missing a setting for my SSD? All of the posts I found were from early to mid-2011, nothing from late 2011 to 2012, so I am wondering if I've missed anything. EDIT: I checked my drive's firmware and it is 2.15, which has had issues reported by users.

    Read the article

  • Using glDrawElements does not draw my .obj file

    - by Hallik
    I am trying to correctly import an .obj file from 3ds Max. I got this working with glBegin() and glEnd() from a previous question on here, but obviously had really poor performance, so I am trying to use glDrawElements now. I am importing a chessboard, its game pieces, etc. The board, each game piece, and each square on the board is stored in a struct GroupObject. I store the data like this:

        struct Vertex
        {
            float position[3];
            float texCoord[2];
            float normal[3];
            float tangent[4];
            float bitangent[3];
        };

        struct Material
        {
            float ambient[4];
            float diffuse[4];
            float specular[4];
            float shininess;   // [0 = min shininess, 1 = max shininess]
            float alpha;       // [0 = fully transparent, 1 = fully opaque]
            std::string name;
            std::string colorMapFilename;
            std::string bumpMapFilename;
            std::vector<int> indices;
            int id;
        };

        // A chess piece or square
        struct GroupObject
        {
            std::vector<Material *> materials;
            std::string objectName;
            std::string groupName;
            int index;
        };

    All faces are triangles, so there are always 3 points. When I loop through the f section of the obj file, I store v0, v1 and v2 in Material::indices (subtracting 1 from each value, to account for obj files being 1-based while my vectors are 0-based). When I get to the render method, I loop through every object, and every material attached to that object; I set the material information and call glDrawElements. However, the screen is black. I was able to draw the model just fine when I looped through each distinct material with all the indices associated with that material. This time around, so that I can use the stencil buffer for selecting GroupObjects, I changed the loop, but the screen is black. Here is my render loop; the only thing I changed is the for loop(s), so they go through each object, and each material in the object in turn:

        void GLEngine::drawModel()
        {
            glEnable(GL_BLEND);
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

            // Vertex arrays setup
            glEnableClientState(GL_VERTEX_ARRAY);
            glVertexPointer(3, GL_FLOAT, model.getVertexSize(), model.getVertexBuffer()->position);
            glEnableClientState(GL_NORMAL_ARRAY);
            glNormalPointer(GL_FLOAT, model.getVertexSize(), model.getVertexBuffer()->normal);
            glClientActiveTexture(GL_TEXTURE0);
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);
            glTexCoordPointer(2, GL_FLOAT, model.getVertexSize(), model.getVertexBuffer()->texCoord);
            glUseProgram(blinnPhongShader);

            objects = model.getObjects();

            // Loop through objects...
            for (int i = 0; i < objects.size(); i++)
            {
                ModelOBJ::GroupObject *object = objects[i];

                // Loop through materials used by the object...
                for (int j = 0; j < object->materials.size(); j++)
                {
                    ModelOBJ::Material *pMaterial = object->materials[j];
                    glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT, pMaterial->ambient);
                    glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, pMaterial->diffuse);
                    glMaterialfv(GL_FRONT_AND_BACK, GL_SPECULAR, pMaterial->specular);
                    glMaterialf(GL_FRONT_AND_BACK, GL_SHININESS, pMaterial->shininess * 128.0f);

                    // Draw faces, letting OpenGL loop through them
                    glDrawElements(GL_TRIANGLES, pMaterial->indices.size(), GL_UNSIGNED_INT, &pMaterial->indices);
                }
            }

            if (model.hasNormals())
                glDisableClientState(GL_NORMAL_ARRAY);
            if (model.hasTextureCoords())
            {
                glClientActiveTexture(GL_TEXTURE0);
                glDisableClientState(GL_TEXTURE_COORD_ARRAY);
            }
            if (model.hasPositions())
                glDisableClientState(GL_VERTEX_ARRAY);

            glBindTexture(GL_TEXTURE_2D, 0);
            glUseProgram(0);
            glDisable(GL_BLEND);
        }

    I don't know what I am missing that's important. If it's helpful, here is where I read an f face line and store the info in the obj importer, in pMaterial->indices:

        else if (sscanf(buffer, "%d/%d/%d", &v[0], &vt[0], &vn[0]) == 3) // v/vt/vn
        {
            fscanf(pFile, "%d/%d/%d", &v[1], &vt[1], &vn[1]);
            fscanf(pFile, "%d/%d/%d", &v[2], &vt[2], &vn[2]);

            v[0] = (v[0] < 0) ? v[0] + numVertices - 1 : v[0] - 1;
            v[1] = (v[1] < 0) ? v[1] + numVertices - 1 : v[1] - 1;
            v[2] = (v[2] < 0) ? v[2] + numVertices - 1 : v[2] - 1;

            currentMaterial->indices.push_back(v[0]);
            currentMaterial->indices.push_back(v[1]);
            currentMaterial->indices.push_back(v[2]);
        }

    Again, this worked when drawing it all together, separated only by materials, so I haven't changed code anywhere else except adding the indices to the materials within objects, and the loop in the draw method. Before, everything was showing up black; now, with the setup as above, I get an unhandled-exception write violation on the glDrawElements line. I set a breakpoint there, and there are over 600 elements in the pMaterial->indices array, so it's not empty; it has indices to use. When I call glDrawElements like this, I get the black screen but no errors:

        glDrawElements(GL_TRIANGLES, pMaterial->indices.size(), GL_UNSIGNED_INT, &pMaterial->indices[0]);

    I have also tried adding this when I loop through the faces on import:

        if (currentMaterial->startIndex == -1)
            currentMaterial->startIndex = v[0];
        currentMaterial->triangleCount++;

    and when drawing:

        // in the draw method
        glDrawElements(GL_TRIANGLES, pMaterial->triangleCount * 3, GL_UNSIGNED_INT,
                       model.getIndexBuffer() + pMaterial->startIndex);
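
    One hedged observation on the crash: &pMaterial->indices takes the address of the std::vector object itself, not of the index data it owns, so glDrawElements reads from the wrong memory; &pMaterial->indices[0] (equivalently pMaterial->indices.data()) is the form that points at the actual indices, which matches the posted variant that stops crashing. The remaining black screen then looks like a separate state issue; note, for instance, that the glMaterial* calls feed fixed-function lighting, which a bound shader program only sees if it reads gl_FrontMaterial. The same hand-over contract, sketched in Java/LWJGL terms, where the indices likewise have to be handed over as a buffer of the data itself:

        import java.nio.IntBuffer;
        import org.lwjgl.BufferUtils;
        import static org.lwjgl.opengl.GL11.GL_TRIANGLES;
        import static org.lwjgl.opengl.GL11.glDrawElements;

        class IndexedDraw {
            // glDrawElements needs the index data itself, not a reference
            // to the container that owns it.
            static void drawTriangles(int[] indices) {
                IntBuffer ib = BufferUtils.createIntBuffer(indices.length);
                ib.put(indices).flip(); // fill, then rewind for reading
                glDrawElements(GL_TRIANGLES, ib);
            }
        }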

    Read the article

  • How to update a uniform variable in GLSL

    - by paj777
    Hi all, I am trying to update the eye position in my shader from my application, but I keep getting error 1281 when I attempt this. I have no problems after the initialization, only when I subsequently try to update the values. Here is my code:

        void GraphicsObject::SendShadersDDS(char vertFile[], char fragFile[], char filename[])
        {
            char *vs = NULL, *fs = NULL;

            vert = glCreateShader(GL_VERTEX_SHADER);
            frag = glCreateShader(GL_FRAGMENT_SHADER);

            vs = textFileRead(vertFile);
            fs = textFileRead(fragFile);
            const char *ff = fs;
            const char *vv = vs;
            glShaderSource(vert, 1, &vv, NULL);
            glShaderSource(frag, 1, &ff, NULL);
            free(vs);
            free(fs);

            glCompileShader(vert);
            glCompileShader(frag);

            program = glCreateProgram();
            glAttachShader(program, frag);
            glAttachShader(program, vert);
            glLinkProgram(program);
            glUseProgram(program);

            LoadCubeTexture(filename, compressedTexture);
            GLint location = glGetUniformLocation(program, "tex");
            glUniform1i(location, 0);
            glActiveTexture(GL_TEXTURE0);

            EyePos = glGetUniformLocation(program, "EyePosition");
            glUniform4f(EyePos, EyePosition.X(), EyePosition.Y(), EyePosition.Z(), 1.0);
            DWORD bob = glGetError(); // All is fine here

            glEnable(GL_DEPTH_TEST);
        }

    And here is the function I call to update the eye position:

        void GraphicsObject::UpdateEyePosition(Vector3d& eyePosition)
        {
            glUniform4f(EyePos, eyePosition.X(), eyePosition.Y(), eyePosition.Z(), 1.0);
            DWORD bob = glGetError(); // bob equals 1281 after this call
        }

    I've tried a few ways of updating the variable, and this is the latest incarnation. Thanks for viewing; all comments welcome.
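
    Error 1281 is GL_INVALID_VALUE. With glUniform* calls, a usual cause of failures that appear only after initialization is updating a uniform while the program that owns the location is not the one currently in use: glUniform4f always targets whatever program glUseProgram last bound, so if anything else has been bound by the time UpdateEyePosition runs, the stored location is not valid there. A minimal guard, sketched in Java/LWJGL syntax since these are the same GL entry points the C++ code calls (program and eyePosLoc are illustrative names):

        import static org.lwjgl.opengl.GL20.glUniform4f;
        import static org.lwjgl.opengl.GL20.glUseProgram;

        class EyeUniform {
            private final int program;   // linked shader program handle
            private final int eyePosLoc; // glGetUniformLocation(program, "EyePosition")

            EyeUniform(int program, int eyePosLoc) {
                this.program = program;
                this.eyePosLoc = eyePosLoc;
            }

            // Rebind before updating: glUniform* writes to the program
            // currently in use, not to the one the location came from.
            void update(float x, float y, float z) {
                glUseProgram(program);
                glUniform4f(eyePosLoc, x, y, z, 1.0f);
            }
        }

    Rebinding before the update is the portable fix on older GL versions, which have no way to address a specific program's uniforms directly.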

    Read the article

  • Help with Donald B. Johnson's algorithm: I cannot understand the pseudo-code (part II)

    - by Pitelk
    Hi all, I cannot understand a certain part of the paper published by Donald Johnson about finding cycles (circuits) in a graph. More specifically, I cannot understand what the matrix Ak is, which is mentioned in the following line of the pseudo-code:

        Ak := adjacency structure of strong component K with least vertex
              in subgraph of G induced by {s, s+1, ..., n};

    To make things worse, a few lines later it mentions "for i in Vk do" without declaring what Vk is. As far as I understand, we have the following:

    1) In general, a strong component is a subgraph of a graph in which, for every node of the subgraph, there is a path to any other node of the subgraph (in other words, you can reach any node of the subgraph from any other node of the subgraph).

    2) A subgraph induced by a list of nodes is a graph containing all these nodes plus all the edges connecting these nodes. In the paper the mathematical definition is: F is a subgraph of G induced by W if W is a subset of V and F = (W, {(u, y) | u, y in W and (u, y) in E}), where u and y are vertices, E is the set of all edges in the graph, and W is a set of nodes.

    3) In the code implementation, the nodes are named by the integers 1 ... n.

    4) I suspect that Vk is the set of nodes of the strong component K.

    Now to the question. Let's say we have a graph G = (V, E) with V = {1,2,3,4,5,6,7,8,9} which can be divided into 3 strong components, SC1 = {1,4,7,8}, SC2 = {2,3,9}, SC3 = {5,6} (and their edges). Can anybody give me an example, for s = 1, s = 2 and s = 5, of what Vk and Ak are going to be according to the code? The pseudo-code is in my previous question at http://stackoverflow.com/questions/2908575/help-in-the-donalds-b-johnsons-algorithm-i-cannot-understand-the-pseudo-code and the paper can be found at http://stackoverflow.com/questions/2908575/help-in-the-donalds-b-johnsons-algorithm-i-cannot-understand-the-pseudo-code. Thank you in advance.
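
    On reading 4) above: on the natural reading of the quoted line, Vk is indeed the vertex set of component K, and Ak is that component's adjacency structure, i.e. the original adjacency lists restricted to Vk. So for s = 1 the induced subgraph is all of G, the component with the least vertex is SC1 (it contains vertex 1), and Vk = {1,4,7,8}, with Ak holding only the edges among those four vertices. For s = 2 and s = 5 the strong components must be recomputed on the subgraphs induced by {2,...,9} and {5,...,9}; whether SC1 minus vertex 1 stays strongly connected depends on edges the question does not list, so there is no single answer without them. A sketch of the computation in Java (the asker's target language; the adjacency-list representation is illustrative):

        import java.util.*;

        // Vertices are 1..n; adj.get(v) lists v's successors (adj.get(0) unused).
        class JohnsonHelper {

            // Strong components of the subgraph induced by {s, ..., n},
            // via Tarjan's algorithm; edges to vertices below s are ignored.
            static List<Set<Integer>> sccs(List<List<Integer>> adj, int s) {
                int n = adj.size() - 1;
                int[] index = new int[n + 1], low = new int[n + 1];
                boolean[] onStack = new boolean[n + 1];
                Arrays.fill(index, -1);
                Deque<Integer> stack = new ArrayDeque<>();
                List<Set<Integer>> out = new ArrayList<>();
                int[] counter = {0};
                for (int v = s; v <= n; v++)
                    if (index[v] == -1)
                        dfs(v, s, adj, index, low, onStack, stack, out, counter);
                return out;
            }

            private static void dfs(int v, int s, List<List<Integer>> adj,
                    int[] index, int[] low, boolean[] onStack,
                    Deque<Integer> stack, List<Set<Integer>> out, int[] counter) {
                index[v] = low[v] = counter[0]++;
                stack.push(v);
                onStack[v] = true;
                for (int w : adj.get(v)) {
                    if (w < s) continue; // not in the induced subgraph
                    if (index[w] == -1) {
                        dfs(w, s, adj, index, low, onStack, stack, out, counter);
                        low[v] = Math.min(low[v], low[w]);
                    } else if (onStack[w]) {
                        low[v] = Math.min(low[v], index[w]);
                    }
                }
                if (low[v] == index[v]) { // v roots a component: pop it
                    Set<Integer> comp = new HashSet<>();
                    int w;
                    do {
                        w = stack.pop();
                        onStack[w] = false;
                        comp.add(w);
                    } while (w != v);
                    out.add(comp);
                }
            }

            // Vk: the component whose least vertex is smallest; Ak: the
            // original adjacency lists restricted to Vk.
            static Map<Integer, List<Integer>> leastComponent(List<List<Integer>> adj, int s) {
                Set<Integer> vk = null;
                int best = Integer.MAX_VALUE;
                for (Set<Integer> comp : sccs(adj, s)) {
                    int min = Collections.min(comp);
                    if (min < best) { best = min; vk = comp; }
                }
                Map<Integer, List<Integer>> ak = new HashMap<>();
                if (vk == null) return ak;
                for (int v : vk) {
                    List<Integer> kept = new ArrayList<>();
                    for (int w : adj.get(v)) if (vk.contains(w)) kept.add(w);
                    ak.put(v, kept);
                }
                return ak;
            }
        }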

    Read the article

  • Help with Donald B. Johnson's algorithm: I cannot understand the pseudo-code

    - by Pitelk
    Hi, does anyone know Donald B. Johnson's algorithm, which enumerates all the elementary circuits (cycles) in a directed graph? I have the paper he published in 1975, but I cannot understand the pseudo-code. My goal is to implement this algorithm in Java. Some questions I have: for example, what is the matrix Ak it refers to? The pseudo-code mentions:

        Ak := adjacency structure of strong component K with least vertex
              in subgraph of G induced by {s, s+1, ..., n};

    Does that mean I have to implement another algorithm that finds the Ak matrix? Another question is what the following means:

        begin
            logical f;

    Also, does the line "logical procedure CIRCUIT (integer value v);" mean that the CIRCUIT procedure returns a logical variable? The pseudo-code also has the line "CIRCUIT := f;". What does this mean? It would be great if someone could translate this 1970s pseudo-code into a more modern style so I can understand it. In case you are interested in helping but cannot find the paper, please email me at [email protected] and I will send it to you. Thanks in advance.
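
    On the notation questions: the paper is written in an ALGOL-like style. "logical" declares a boolean ("begin logical f;" opens a block with a boolean local), "logical procedure CIRCUIT (integer value v);" declares a function of one integer that returns a boolean, and "CIRCUIT := f;" sets the return value, because ALGOL procedures return by assigning to their own name. In Java terms, the shape (only the shape, not the full algorithm) looks like this:

        // The ALGOL fragments in Java terms: 'logical' is boolean, and an
        // ALGOL function returns by assigning to its own name.
        boolean circuit(int v) {  // 'logical procedure CIRCUIT (integer value v);'
            boolean f = false;    // 'begin logical f;'
            // ... explore v's successors; f becomes true whenever an
            // elementary circuit through the start vertex is found ...
            return f;             // 'CIRCUIT := f;'
        }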

    Read the article

  • C++ design question on traversing binary trees

    - by user231536
    I have a binary tree T which I would like to copy to another tree. Suppose I have a visit method that gets evaluated at every node:

        struct visit
        {
            virtual void operator()(node* n) = 0;
        };

    and I have a visitor algorithm:

        void visitor(node* t, visit& v)
        {
            // do a preorder traversal using stack or recursion
            if (!t) return;
            v(t);
            visitor(t->left, v);
            visitor(t->right, v);
        }

    I have two questions: 1. I settled on the functor-based approach because I see that Boost.Graph does this (vertex visitors), and because I tend to repeat the same code to traverse the tree while doing different things at each node. Is this a good design for getting rid of the duplicated code? What alternative designs are there? 2. How do I use this to create a new binary tree from an existing one? I can keep a stack on the visit functor if I want, but then it gets tied to the algorithm in visitor. How would I incorporate postorder traversals here? Another functor class?
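
    On question 2: a copying visitor works with the plain preorder traversal precisely because preorder visits a parent before its children, so the parent's clone always exists when a child is reached. The question's code is C++; here is the same idea sketched in Java, with Node/Visit mirroring the question's node/visit:

        import java.util.HashMap;
        import java.util.Map;

        class Node {
            int value;
            Node left, right;
            Node(int value) { this.value = value; }
        }

        interface Visit {
            void apply(Node n);
        }

        class Traversals {
            // Same traversal skeleton as the C++ 'visitor' function.
            static void preorder(Node t, Visit v) {
                if (t == null) return;
                v.apply(t);
                preorder(t.left, v);
                preorder(t.right, v);
            }
        }

        // A copying visitor: each visited node clones itself and hooks its
        // children's clones on. Preorder guarantees a node's own clone was
        // already created when its parent was visited.
        class CopyVisit implements Visit {
            final Map<Node, Node> clones = new HashMap<>();
            Node rootCopy;

            public void apply(Node n) {
                Node copy = clones.computeIfAbsent(n, k -> new Node(k.value));
                if (rootCopy == null) rootCopy = copy;
                if (n.left != null)
                    copy.left = clones.computeIfAbsent(n.left, k -> new Node(k.value));
                if (n.right != null)
                    copy.right = clones.computeIfAbsent(n.right, k -> new Node(k.value));
            }
        }

    Usage: run Traversals.preorder(root, cv) on a CopyVisit cv, then read cv.rootCopy. For postorder, the Visit interface stays unchanged; only the traversal moves v.apply(t) after the two recursive calls, so one extra traversal function (or a mode flag) covers it rather than another functor class.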

    Read the article

  • Graph limitations - Should I use Decorator?

    - by Nick Wiggill
    I have a functional AdjacencyListGraph class that adheres to a defined interface, GraphStructure. In order to layer limitations on top of this (e.g. acyclic, non-null, unique vertex data, etc.), I can see two possible routes, each making use of the GraphStructure interface: 1. Create a single class ("ControlledGraph") that has a set of bit flags specifying the various possible limitations, handle all limitations in this class, and update the class if new limitation requirements become apparent. 2. Use the decorator pattern (DI, essentially) to create a separate class implementation for each individual limitation that a client class may wish to use; see the sketch below. The benefit here is that we are adhering to the Single Responsibility Principle. I would lean toward the latter, but by Jove, I hate the decorator pattern! It is the epitome of clutter, IMO. Truthfully it all depends on how many decorators might be applied in the worst case; in mine, so far, the count is seven (the number of discrete limitations I've recognised at this stage). The other problem with decorator is that I'm going to have to do interface method wrapping in every... single... decorator class. Bah. Which would you go for, if either? Or, if you can suggest a more elegant solution, that would be welcome. EDIT: It occurs to me that using the proposed ControlledGraph class with the strategy pattern may help here: some sort of template method / functors setup, with individual bits applying separate controls in the various graph-canonical interface methods. Or am I losing the plot?
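
    For concreteness, the per-decorator wrapping cost looks like this (a sketch against a hypothetical two-method GraphStructure; the real interface presumably has more methods, which is exactly the clutter being complained about):

        // Hypothetical minimal interface standing in for the real GraphStructure.
        interface GraphStructure {
            void addVertex(Object data);
            void addEdge(Object from, Object to);
        }

        // One limitation per decorator: every interface method must be
        // forwarded, with the check layered in front.
        class NonNullGraph implements GraphStructure {
            private final GraphStructure inner;

            NonNullGraph(GraphStructure inner) { this.inner = inner; }

            public void addVertex(Object data) {
                if (data == null) throw new IllegalArgumentException("null vertex data");
                inner.addVertex(data);
            }

            public void addEdge(Object from, Object to) {
                if (from == null || to == null) throw new IllegalArgumentException("null endpoint");
                inner.addEdge(from, to);
            }
        }

    Stacking then reads like new NonNullGraph(new AcyclicGraph(new AdjacencyListGraph())) (AcyclicGraph being another hypothetical decorator), and each new limitation is one more small class rather than another flag branch in ControlledGraph.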

    Read the article

  • OpenGL ES: draw an .obj file, but how?

    - by lacas
    I'd like to parse an .obj file. My parser is working well, but my display of the model is not. The obj file is here; my code is:

        public ObjModelParser parse() {
            long startTime = System.currentTimeMillis();
            InputStream fileIn = resources.openRawResource(resourceID);
            BufferedReader buffer = new BufferedReader(new InputStreamReader(fileIn));
            String line = "";
            Log.e("model loader", "Start parsing object " + resourceID);
            try {
                while ((line = buffer.readLine()) != null) {
                    StringTokenizer parts = new StringTokenizer(line, " ");
                    int numTokens = parts.countTokens();
                    if (numTokens == 0)
                        continue;
                    String part = parts.nextToken();
                    if (part.equals(VERTEX)) {
                        Log.e("v ", line);
                        vertices.add(Float.parseFloat(parts.nextToken()));
                        vertices.add(Float.parseFloat(parts.nextToken()));
                        vertices.add(Float.parseFloat(parts.nextToken()));
                    }
                    ...

    and my display code draws that model with TRIANGLE_STRIP and:

        gl.glDrawArrays(rendermode, 0, coords.length / dimension);

    What is the mistake here? Edited: the file here shows what the good coords from my program look like for a cube, versus the coords taken straight from the .obj file, which never show. Thanks, Leslie
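
    A common .obj pitfall that fits these symptoms: the v lines only give a pool of unique positions, while the f lines say how to stitch them into triangles, so handing the raw vertex pool to glDrawArrays (especially as a TRIANGLE_STRIP) draws a garbage ordering. Either draw with glDrawElements using the face indices, or expand the faces into a flat GL_TRIANGLES array, as sketched here (assuming the face indices were already converted to 0-based):

        import java.util.List;

        class ObjExpand {
            // Expand face indices into a flat triangle list for glDrawArrays.
            // 'vertices' is the parser's flat [x, y, z, x, y, z, ...] pool.
            static float[] expandFaces(List<Float> vertices, List<Integer> faceIndices) {
                float[] coords = new float[faceIndices.size() * 3];
                int j = 0;
                for (int idx : faceIndices) {
                    coords[j++] = vertices.get(idx * 3);     // x
                    coords[j++] = vertices.get(idx * 3 + 1); // y
                    coords[j++] = vertices.get(idx * 3 + 2); // z
                }
                return coords;
            }
        }

    The draw call then becomes gl.glDrawArrays(GL10.GL_TRIANGLES, 0, faceIndices.size()), since the faces in the file are triangles rather than a strip.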

    Read the article

  • How to create a Binary Tree from a General Tree?

    - by mno4k
    I have to solve the following constructor for a BinaryTree class in Java:

        BinaryTree(GeneralTree<T> aTree)

    This method should create a BinaryTree (bt) from a GeneralTree (gt) as follows: every vertex from gt will be represented as a leaf in bt. If gt is a leaf, then bt will be a leaf with the same value as gt. If gt is not a leaf, then bt will be constructed as an empty root, a left subtree (lt) and a right subtree (lr): lt is a strict binary tree created from the oldest subtree of gt (the left-most subtree), and lr is a strict binary tree created from gt without its left-most subtree. The first part is trivial enough, but the second one is giving me some trouble. I've gotten this far:

        public BinaryTree(GeneralTree<T> aTree) {
            if (aTree.isLeaf()) {
                root = new BinaryNode<T>(aTree.getRootData());
            } else {
                root = new BinaryNode<T>(null); // empty root
                // children of the GT are implemented as a LinkedList of subtrees
                LinkedList<GeneralTree<T>> childs = aTree.getChilds();
                childs.begin(); // start iteration through the list
                BinaryTree<T> lt = new BinaryTree<T>(childs.element(0)); // first element = left-most child
                this.addLeftChild(lt);
                aTree.DeleteChild(childs.element(0));
                BinaryTree<T> lr = new BinaryTree<T>(aTree);
                this.addRightChild(lr);
            }
        }

    Is this the right way? If not, can you think of a better way to solve this? Thank you!
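
    One thing worth flagging: the posted constructor consumes its argument via DeleteChild, so building the binary tree destroys the general tree. A sketch of the same recursion without mutation, in Java, assuming hypothetical BinaryNode setLeft/setRight accessors (GeneralTree, getChilds, and the child list's element(int) and size() follow the question's own API):

        // bt(gt): a leaf if gt is a leaf; otherwise an empty root whose left
        // subtree encodes the oldest child and whose right subtree encodes
        // "gt minus that child".
        private BinaryNode<T> build(GeneralTree<T> gt) {
            if (gt.isLeaf()) {
                return new BinaryNode<T>(gt.getRootData());
            }
            return buildFrom(gt.getRootData(), gt.getChilds(), 0);
        }

        // Represents "gt with children i..end". When no children remain,
        // what is left of gt is just its root, i.e. a leaf carrying the
        // original root data, which matches the recursive spec.
        private BinaryNode<T> buildFrom(T rootData, LinkedList<GeneralTree<T>> childs, int i) {
            if (i == childs.size()) {
                return new BinaryNode<T>(rootData);
            }
            BinaryNode<T> node = new BinaryNode<T>(null);
            node.setLeft(build(childs.element(i)));          // oldest remaining child
            node.setRight(buildFrom(rootData, childs, i + 1)); // the rest of gt
            return node;
        }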

    Read the article
