Search Results

Search found 3620 results on 145 pages for 'dto mapping'.


  • Why does multiplying texture coordinates scale the texture?

    - by manning18
    I'm having trouble visualizing this geometrically - why does multiplying the U,V values of a texture coordinate have the effect of scaling that texture by that factor? E.g. if you scale the texture coordinates by a factor of 3, doesn't that mean that texture coordinates of 0.1 and 0.2 would now sample at 0.3 and 0.6 in the U,V texture space of 0..1? How does that change the apparent size, e.g. HLSL: tex2D(textureSampler, TexCoords*3)? Multiplying by whole numbers makes the texture look smaller, multiplying by fractions makes it look bigger. I understand intuitively what happens if you add to the U,V coordinates, as that is simply an offset into the sampling range, but what is the case with multiplication? I have a feeling that when someone explains this to me I'm going to be feeling mighty stupid.
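
    A minimal sketch of the sampling arithmetic (an illustration assuming wrap/repeat addressing, where the sampler effectively uses frac(uv); it is not from the original question):

        // C# sketch: with repeat addressing, scaling UVs by 3 walks through the 0..1 range
        // three times across the same geometry, so three copies of the texture get squeezed
        // onto it -- which is why each copy looks a third of the size.
        using System;

        class UvScaleDemo
        {
            // what GL_REPEAT / D3D "wrap" addressing effectively does to a coordinate
            static double Wrap(double t) => t - Math.Floor(t);

            static void Main()
            {
                foreach (var u in new[] { 0.1, 0.2, 0.5, 0.9 })
                {
                    double scaled = u * 3.0;
                    Console.WriteLine($"u = {u:0.00} -> u*3 = {scaled:0.00} -> sampled at {Wrap(scaled):0.00}");
                }
            }
        }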

    Read the article

  • Center directional light shadow to the camera's eye

    - by Caesar
    I'm currently drawing my directional light shadow using this view and projection: XMFLOAT3 dir((float)pitch, (float)yaw, (float)roll); XMFLOAT3 center(0.0f, 0.0f, 0.0f); XMVECTOR lightDir = XMLoadFloat3(&dir); XMVECTOR lightPos = radius * lightDir; XMVECTOR targetPos = XMLoadFloat3(&center); XMVECTOR up = XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f); XMMATRIX V = XMMatrixLookAtLH(lightPos, targetPos, up); // This is the view // Transform bounding sphere to light space. XMFLOAT3 sphereCenterLS; XMStoreFloat3(&sphereCenterLS, XMVector3TransformCoord(targetPos, V)); // Ortho frustum in light space encloses scene. float l = sphereCenterLS.x - radius; float b = sphereCenterLS.y - radius; float n = sphereCenterLS.z - radius; float r = sphereCenterLS.x + radius; float t = sphereCenterLS.y + radius; float f = sphereCenterLS.z + radius; XMMATRIX P = XMMatrixOrthographicOffCenterLH(l, r, b, t, n, f); // This is the projection This works perfectly if the center of my scene is at 0.0, 0.0, 0.0. What I would like to do is move the center of the scene relative to the camera's position. How can I do that?
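
    One way to do that (a hedged sketch using System.Numerics rather than the left-handed DirectXMath calls above, so handedness and sign conventions may need adjusting): build the light view around the camera position instead of the origin, then fit the same off-center ortho box around that point. Snapping the center to shadow-map-texel-sized increments afterwards helps avoid shimmering as the camera moves.

        using System.Numerics;

        static Matrix4x4 DirectionalLightViewProj(Vector3 cameraPos, Vector3 lightDir, float radius)
        {
            // center the shadow volume on the camera (or a point ahead of it) instead of on the origin
            Vector3 center   = cameraPos;
            Vector3 lightPos = center - Vector3.Normalize(lightDir) * radius;

            Matrix4x4 view = Matrix4x4.CreateLookAt(lightPos, center, Vector3.UnitY);

            // transform the new center into light space and enclose it in the same ortho box as before
            Vector3 centerLS = Vector3.Transform(center, view);
            Matrix4x4 proj = Matrix4x4.CreateOrthographicOffCenter(
                centerLS.X - radius, centerLS.X + radius,
                centerLS.Y - radius, centerLS.Y + radius,
                centerLS.Z - radius, centerLS.Z + radius);

            return view * proj;   // row-vector convention in System.Numerics
        }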

    Read the article

  • How do you calculate UVW coordinates?

    - by Jenko
    I'm working on a 3D engine and I'm calculating UVT coordinates, where U and V represent pixels on the texture measured in 0-1, and T is: T = perspective / Z But I'm trying to use this perspective-correct triangle rasteriser, which requires a W per vertex. How do I calculate the W for each vertex for the drawPerspectiveTexturedPolygon() function? Hint: the code comments refer to W as the "homogeneous coordinate" ... does that mean anything?
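
    For what it's worth, the "homogeneous coordinate" is the clip-space w produced by the projection; for a standard perspective matrix it ends up equal to the vertex's view-space depth (up to sign conventions), which is presumably what the perspective term above is built from. A hedged sketch of how a rasteriser uses it (not the linked code itself):

        // perspective-correct interpolation: u/w, v/w and 1/w vary linearly in screen
        // space, so interpolate those and divide back per pixel
        static (float u, float v) PerspectiveCorrectUv(
            float t,                       // 0..1 along the edge/scanline in screen space
            float u0, float v0, float w0,  // vertex 0: texture coords and clip-space w
            float u1, float v1, float w1)  // vertex 1: texture coords and clip-space w
        {
            float invW   = Lerp(1f / w0, 1f / w1, t);
            float uOverW = Lerp(u0 / w0, u1 / w1, t);
            float vOverW = Lerp(v0 / w0, v1 / w1, t);
            return (uOverW / invW, vOverW / invW);
        }

        static float Lerp(float a, float b, float t) => a + (b - a) * t;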

    Read the article

  • CSM shadow errors when models are split

    - by KaiserJohaan
    I'm getting closer to fixing CSM, but there seems to be one more issue at hand. At certain angles, the models will be caught/split between two shadow map cascades, like below. first depth split second depth split - here you can see the model is caught between the splits How does one fix this? Increase the overlapping boundaries between the splits? Or is the frustrum erronous? CameraFrustrum CalculateCameraFrustrum(const float fovDegrees, const float aspectRatio, const float minDist, const float maxDist, const Mat4& cameraViewMatrix, Mat4& outFrustrumMat) { CameraFrustrum ret = { Vec4(1.0f, -1.0f, 0.0f, 1.0f), Vec4(1.0f, 1.0f, 0.0f, 1.0f), Vec4(-1.0f, 1.0f, 0.0f, 1.0f), Vec4(-1.0f, -1.0f, 0.0f, 1.0f), Vec4(1.0f, -1.0f, 1.0f, 1.0f), Vec4(1.0f, 1.0f, 1.0f, 1.0f), Vec4(-1.0f, 1.0f, 1.0f, 1.0f), Vec4(-1.0f, -1.0f, 1.0f, 1.0f), }; const Mat4 perspectiveMatrix = PerspectiveMatrixFov(fovDegrees, aspectRatio, minDist, maxDist); const Mat4 invMVP = glm::inverse(perspectiveMatrix * cameraViewMatrix); outFrustrumMat = invMVP; for (Vec4& corner : ret) { corner = invMVP * corner; corner /= corner.w; } return ret; } Mat4 CreateDirLightVPMatrix(const CameraFrustrum& cameraFrustrum, const Vec3& lightDir) { Mat4 lightViewMatrix = glm::lookAt(Vec3(0.0f), -glm::normalize(lightDir), Vec3(0.0f, -1.0f, 0.0f)); Vec4 transf = lightViewMatrix * cameraFrustrum[0]; float maxZ = transf.z, minZ = transf.z; float maxX = transf.x, minX = transf.x; float maxY = transf.y, minY = transf.y; for (uint32_t i = 1; i < 8; i++) { transf = lightViewMatrix * cameraFrustrum[i]; if (transf.z > maxZ) maxZ = transf.z; if (transf.z < minZ) minZ = transf.z; if (transf.x > maxX) maxX = transf.x; if (transf.x < minX) minX = transf.x; if (transf.y > maxY) maxY = transf.y; if (transf.y < minY) minY = transf.y; } Mat4 viewMatrix(lightViewMatrix); viewMatrix[3][0] = -(minX + maxX) * 0.5f; viewMatrix[3][1] = -(minY + maxY) * 0.5f; viewMatrix[3][2] = -(minZ + maxZ) * 0.5f; viewMatrix[0][3] = 0.0f; viewMatrix[1][3] = 0.0f; viewMatrix[2][3] = 0.0f; viewMatrix[3][3] = 1.0f; Vec3 halfExtents((maxX - minX) * 0.5, (maxY - minY) * 0.5, (maxZ - minZ) * 0.5); return OrthographicMatrix(-halfExtents.x, halfExtents.x, halfExtents.y, -halfExtents.y, halfExtents.z, -halfExtents.z) * viewMatrix; }
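
    Increasing the overlap between splits is indeed the usual fix: give neighbouring cascades a shared band and, in the shader, pick the cascade whose (slightly enlarged) range contains the pixel, or blend between the two near the boundary. A hedged sketch of the split computation only (a common log/uniform mix, not taken from the code above):

        using System;

        static float[] ComputeSplits(float near, float far, int cascadeCount, float lambda = 0.75f)
        {
            var splits = new float[cascadeCount + 1];
            splits[0] = near;
            for (int i = 1; i <= cascadeCount; i++)
            {
                float p = i / (float)cascadeCount;
                float logSplit = near * MathF.Pow(far / near, p);   // logarithmic scheme
                float uniSplit = near + (far - near) * p;           // uniform scheme
                splits[i] = lambda * logSplit + (1f - lambda) * uniSplit;
            }
            return splits;
        }

        // Cascade i then renders the range [splits[i] - overlap, splits[i + 1]], with overlap
        // a few percent of the split distance, so geometry near a boundary exists in both maps.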

    Read the article

  • Where can I find free or buy "next-gen" 3D Assets?

    - by Valmond
    Usually I buy 3D Assets from sites like turbosquid.com or similar. My problem is that I have lately implemented glow, normal maps, specular (and specular power) maps and reflection maps and I can't find any models that use those techniques. So where can I find / buy "next gen" assets (at least models/items with a normal map)? I have checked for similar posts but those I found are about either free only or 2D or 'ordinary' 3D so I hope this is not a duplicate.

    Read the article

  • PCF shadow shader math causing artifacts

    - by user2971069
    For a while now I have used PCSS as my shadow technique of choice, until I discovered a variant of percentage closer filtering. This method creates really smooth shadows, and with hopes of improving performance (it needs only a fraction of the texture samples), I tried to implement PCF in my shader. This is the relevant code: float c0, c1, c2, c3; float f = blurFactor; float2 coord = ProjectedTexCoords; if (receiverDistance - tex2D(lightSampler, coord + float2(0, 0)).x > 0.0007) c0 = 1; if (receiverDistance - tex2D(lightSampler, coord + float2(f, 0)).x > 0.0007) c1 = 1; if (receiverDistance - tex2D(lightSampler, coord + float2(0, f)).x > 0.0007) c2 = 1; if (receiverDistance - tex2D(lightSampler, coord + float2(f, f)).x > 0.0007) c3 = 1; coord = (coord % f) / f; return 1 - (c0 * (1 - coord.x) * (1 - coord.y) + c1 * coord.x * (1 - coord.y) + c2 * (1 - coord.x) * coord.y + c3 * coord.x * coord.y); This is a very basic implementation. blurFactor is initialized with 1 / LightTextureSize, so the if statements fetch the occlusion values for the four adjacent texels. I now want to weight each value based on the actual position of the texture coordinate: if it's near the bottom-right pixel, that occlusion value should be preferred. The weighting itself is done with a simple bilinear interpolation function, however this function takes a 2D vector in the range [0..1], so I have to convert my texture coordinate to get the distance from my first pixel to the second one into the range [0..1]. For that I used the mod operator to get it into the [0..f] range and then divided by f. This code makes sense to me, and for specific blurFactors it works, producing really smooth one-pixel-wide shadows, but not for all blurFactors. Initially blurFactor is (1 / LightTextureSize) to sample the 4 adjacent texels. I now want to increase the blurFactor by a factor x to get a smooth interpolation across maybe 4 or so pixels. But that is when weird artifacts show up. Here is an image: Using 1x on blurFactor produces a good result, 0.5x is, as expected, not so smooth. 2x however doesn't work at all. I found that only a factor of 1/2^n produces a good result; every other factor produces artifacts. I'm pretty sure the error lies here: coord = (coord % f) / f; Maybe the modulo is not calculated correctly? I have no idea how to fix that. Is it even possible for pixels that are further than 1 pixel away?
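
    For reference, a hedged sketch (not verified against the setup above) of where canonical 2x2 PCF gets its weights: the bilinear factors come from the sub-texel position of the sample, frac(uv * shadowMapSize - 0.5), and the four taps sit exactly one texel apart on texel centres. Wider blurs are normally built by adding more taps rather than by stretching this 2x2 footprint, which is consistent with arbitrary multiples of the texel size producing banding.

        using System;

        // occluded(u, v) returns 1 when the receiver is in shadow at that shadow-map position
        static float Pcf2x2(Func<float, float, float> occluded, float u, float v, float mapSize)
        {
            float texel = 1f / mapSize;

            // sub-texel position of the sample inside its 2x2 texel footprint
            float tx = u * mapSize - 0.5f;
            float ty = v * mapSize - 0.5f;
            float fx = tx - MathF.Floor(tx);
            float fy = ty - MathF.Floor(ty);

            // snap the first tap to the centre of the footprint's lower-left texel
            float u0 = (MathF.Floor(tx) + 0.5f) * texel;
            float v0 = (MathF.Floor(ty) + 0.5f) * texel;

            float c00 = occluded(u0,         v0);
            float c10 = occluded(u0 + texel, v0);
            float c01 = occluded(u0,         v0 + texel);
            float c11 = occluded(u0 + texel, v0 + texel);

            float top    = c00 * (1 - fx) + c10 * fx;
            float bottom = c01 * (1 - fx) + c11 * fx;
            return 1f - (top * (1 - fy) + bottom * fy);
        }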

    Read the article

  • Normal map applied as diffuse texture looks wrong

    - by KaiserJohaan
    Diffuse textures works fine, but I am having problem with normal maps, so I thought I'd tried to apply the normal maps as the diffuse map in my fragment shader so I could see everything is OK. I comment-out my normal map code and just set the diffuse map to the normal map and I get this: http://postimg.org/image/j9gudjl7r/ Looks like a smurf! This is the actual normal map of the main body: http://postimg.org/image/sbkyr6fg9/ Here is my fragment shader, notice I commented out normal map code so I could debug the normal map as a diffuse texture "#version 330 \n \ \n \ layout(std140) uniform; \n \ \n \ const int MAX_LIGHTS = 8; \n \ \n \ struct Light \n \ { \n \ vec4 mLightColor; \n \ vec4 mLightPosition; \n \ vec4 mLightDirection; \n \ \n \ int mLightType; \n \ float mLightIntensity; \n \ float mLightRadius; \n \ float mMaxDistance; \n \ }; \n \ \n \ uniform UnifLighting \n \ { \n \ vec4 mGamma; \n \ vec3 mViewDirection; \n \ int mNumLights; \n \ \n \ Light mLights[MAX_LIGHTS]; \n \ } Lighting; \n \ \n \ uniform UnifMaterial \n \ { \n \ vec4 mDiffuseColor; \n \ vec4 mAmbientColor; \n \ vec4 mSpecularColor; \n \ vec4 mEmissiveColor; \n \ \n \ bool mHasDiffuseTexture; \n \ bool mHasNormalTexture; \n \ bool mLightingEnabled; \n \ float mSpecularShininess; \n \ } Material; \n \ \n \ uniform sampler2D unifDiffuseTexture; \n \ uniform sampler2D unifNormalTexture; \n \ \n \ in vec3 frag_position; \n \ in vec3 frag_normal; \n \ in vec2 frag_texcoord; \n \ in vec3 frag_tangent; \n \ in vec3 frag_bitangent; \n \ \n \ out vec4 finalColor; " " \n \ \n \ void CalcGaussianSpecular(in vec3 dirToLight, in vec3 normal, out float gaussianTerm) \n \ { \n \ vec3 viewDirection = normalize(Lighting.mViewDirection); \n \ vec3 halfAngle = normalize(dirToLight + viewDirection); \n \ \n \ float angleNormalHalf = acos(dot(halfAngle, normalize(normal))); \n \ float exponent = angleNormalHalf / Material.mSpecularShininess; \n \ exponent = -(exponent * exponent); \n \ \n \ gaussianTerm = exp(exponent); \n \ } \n \ \n \ vec4 CalculateLighting(in Light light, in vec4 diffuseTexture, in vec3 normal) \n \ { \n \ if (light.mLightType == 1) // point light \n \ { \n \ vec3 positionDiff = light.mLightPosition.xyz - frag_position; \n \ float dist = max(length(positionDiff) - light.mLightRadius, 0); \n \ \n \ float attenuation = 1 / ((dist/light.mLightRadius + 1) * (dist/light.mLightRadius + 1)); \n \ attenuation = max((attenuation - light.mMaxDistance) / (1 - light.mMaxDistance), 0); \n \ \n \ vec3 dirToLight = normalize(positionDiff); \n \ float angleNormal = clamp(dot(normalize(normal), dirToLight), 0, 1); \n \ \n \ float gaussianTerm = 0.0; \n \ if (angleNormal > 0.0) \n \ CalcGaussianSpecular(dirToLight, normal, gaussianTerm); \n \ \n \ return diffuseTexture * (attenuation * angleNormal * Material.mDiffuseColor * light.mLightIntensity * light.mLightColor) + \n \ (attenuation * gaussianTerm * Material.mSpecularColor * light.mLightIntensity * light.mLightColor); \n \ } \n \ else if (light.mLightType == 2) // directional light \n \ { \n \ vec3 dirToLight = normalize(light.mLightDirection.xyz); \n \ float angleNormal = clamp(dot(normalize(normal), dirToLight), 0, 1); \n \ \n \ float gaussianTerm = 0.0; \n \ if (angleNormal > 0.0) \n \ CalcGaussianSpecular(dirToLight, normal, gaussianTerm); \n \ \n \ return diffuseTexture * (angleNormal * Material.mDiffuseColor * light.mLightIntensity * light.mLightColor) + \n \ (gaussianTerm * Material.mSpecularColor * light.mLightIntensity * light.mLightColor); \n \ } \n \ else if 
(light.mLightType == 4) // ambient light \n \ return diffuseTexture * Material.mAmbientColor * light.mLightIntensity * light.mLightColor; \n \ else \n \ return vec4(0.0); \n \ } \n \ \n \ void main() \n \ { \n \ vec4 diffuseTexture = vec4(1.0); \n \ if (Material.mHasDiffuseTexture) \n \ diffuseTexture = texture(unifDiffuseTexture, frag_texcoord); \n \ \n \ vec3 normal = frag_normal; \n \ if (Material.mHasNormalTexture) \n \ { \n \ diffuseTexture = vec4(normalize(texture(unifNormalTexture, frag_texcoord).xyz * 2.0 - 1.0), 1.0); \n \ // vec3 normalTangentSpace = normalize(texture(unifNormalTexture, frag_texcoord).xyz * 2.0 - 1.0); \n \ //mat3 tangentToWorldSpace = mat3(normalize(frag_tangent), normalize(frag_bitangent), normalize(frag_normal)); \n \ \n \ // normal = tangentToWorldSpace * normalTangentSpace; \n \ } \n \ \n \ if (Material.mLightingEnabled) \n \ { \n \ vec4 accumLighting = vec4(0.0); \n \ \n \ for (int lightIndex = 0; lightIndex < Lighting.mNumLights; lightIndex++) \n \ accumLighting += Material.mEmissiveColor * diffuseTexture + \n \ CalculateLighting(Lighting.mLights[lightIndex], diffuseTexture, normal); \n \ \n \ finalColor = pow(accumLighting, Lighting.mGamma); \n \ } \n \ else { \n \ finalColor = pow(diffuseTexture, Lighting.mGamma); \n \ } \n \ } \n"; Here is my wrapper around a texture OpenGLTexture::OpenGLTexture(const std::vector<uint8_t>& textureData, uint32_t textureWidth, uint32_t textureHeight, TextureFormat textureFormat, TextureType textureType, Logger& logger) : mLogger(logger), mTextureID(gNextTextureID++), mTextureType(textureType) { glGenTextures(1, &mTexture); CHECK_GL_ERROR(mLogger); glBindTexture(GL_TEXTURE_2D, mTexture); CHECK_GL_ERROR(mLogger); GLint glTextureFormat = (textureFormat == TextureFormat::TEXTURE_FORMAT_RGB ? GL_RGB : textureFormat == TextureFormat::TEXTURE_FORMAT_RGBA ? GL_RGBA : GL_RED); glTexImage2D(GL_TEXTURE_2D, 0, glTextureFormat, textureWidth, textureHeight, 0, glTextureFormat, GL_UNSIGNED_BYTE, &textureData[0]); CHECK_GL_ERROR(mLogger); glGenerateMipmap(GL_TEXTURE_2D); CHECK_GL_ERROR(mLogger); glBindTexture(GL_TEXTURE_2D, 0); CHECK_GL_ERROR(mLogger); } OpenGLTexture::~OpenGLTexture() { glDeleteBuffers(1, &mTexture); CHECK_GL_ERROR(mLogger); } And here is the sampler I create which is shared between Diffuse and normal textures // texture sampler setup glGenSamplers(1, &mTextureSampler); CHECK_GL_ERROR(mLogger); glSamplerParameteri(mTextureSampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR); CHECK_GL_ERROR(mLogger); glSamplerParameteri(mTextureSampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST); CHECK_GL_ERROR(mLogger); glSamplerParameteri(mTextureSampler, GL_TEXTURE_WRAP_S, GL_REPEAT); CHECK_GL_ERROR(mLogger); glSamplerParameteri(mTextureSampler, GL_TEXTURE_WRAP_T, GL_REPEAT); CHECK_GL_ERROR(mLogger); glSamplerParameterf(mTextureSampler, GL_TEXTURE_MAX_ANISOTROPY_EXT, mCurrentAnisotropy); CHECK_GL_ERROR(mLogger); glUniform1i(glGetUniformLocation(mDefaultProgram.GetHandle(), "unifDiffuseTexture"), OpenGLTexture::TEXTURE_UNIT_DIFFUSE); CHECK_GL_ERROR(mLogger); glUniform1i(glGetUniformLocation(mDefaultProgram.GetHandle(), "unifNormalTexture"), OpenGLTexture::TEXTURE_UNIT_NORMAL); CHECK_GL_ERROR(mLogger); glBindSampler(OpenGLTexture::TEXTURE_UNIT_DIFFUSE, mTextureSampler); CHECK_GL_ERROR(mLogger); glBindSampler(OpenGLTexture::TEXTURE_UNIT_NORMAL, mTextureSampler); CHECK_GL_ERROR(mLogger); SetAnisotropicFiltering(mCurrentAnisotropy); The diffuse textures looks like they should, but the normal looks so wierd. Why is this?

    Read the article

  • Why don't Normal maps in tangent space have a single blue color?

    - by seahorse
    Normal maps are predominantly blue in color because the Z component maps to blue, and since normals point out of the surface in the Z direction, blue is the predominant component. If that is true, then why aren't normal maps a single flat color, i.e. blue, with no other shades (not even shades of blue)? Since by definition tangent space is perpendicular to the normal at any point, the normal should always point along Z (the blue direction) with no X (red component) or Y (green component). So a normal map (since it is a "normal map") should just contain the color of those normals, which is pure blue (Z = blue component = 1, R = 0, G = 0), and the whole map should be a single blue with no shades in between. But normal maps are not like that; they have gradients of shades in them. Why is this so?
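
    A small illustration of where the shades come from (a sketch of the encoding only, not tied to any particular baker): a tangent-space map stores the detail normals of the high-poly source, encoded as n * 0.5 + 0.5 per channel, so any texel whose baked normal tilts away from +Z picks up red/green and loses some blue. Only perfectly flat areas stay at the uniform light blue.

        using System;

        // a unit tangent-space normal is stored per texel as n * 0.5 + 0.5
        static (byte r, byte g, byte b) EncodeNormal(float x, float y, float z) =>
            ((byte)MathF.Round((x * 0.5f + 0.5f) * 255),
             (byte)MathF.Round((y * 0.5f + 0.5f) * 255),
             (byte)MathF.Round((z * 0.5f + 0.5f) * 255));

        // EncodeNormal(0f, 0f, 1f)     -> (128, 128, 255)  flat texel: the familiar light blue
        // EncodeNormal(0.6f, 0f, 0.8f) -> (204, 128, 230)  baked detail tilted ~37 degrees toward +X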

    Read the article

  • samplerCubeShadow and texture offset

    - by Irbis
    I use sampler2DShadow when accessing a single shadow map. I create PCF in this way: result += textureProjOffset(ShadowSampler, ShadowCoord, ivec2(-1,-1)); result += textureProjOffset(ShadowSampler, ShadowCoord, ivec2(-1,1)); result += textureProjOffset(ShadowSampler, ShadowCoord, ivec2(1,1)); result += textureProjOffset(ShadowSampler, ShadowCoord, ivec2(1,-1)); result = result * 0.25; For a cube map I use samplerCubeShadow: result = texture(ShadowCubeSampler, vec4(normalize(position), depth)); How do I adapt the above PCF when accessing a cube map?
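
    One thing worth knowing is that the texel-offset lookups aren't available for cube samplers, so the usual approach (a hedged sketch, not the only way) is to offset the sample direction itself with two vectors perpendicular to it and average the four comparisons, exactly as in the 2D case:

        using System;
        using System.Numerics;

        // returns four directions to feed into texture(ShadowCubeSampler, vec4(dir, depth))
        static Vector3[] CubePcfDirections(Vector3 dir, float spread)
        {
            dir = Vector3.Normalize(dir);

            // pick any axis not parallel to dir and build a small tangent basis from it
            Vector3 helper = MathF.Abs(dir.Y) < 0.99f ? Vector3.UnitY : Vector3.UnitX;
            Vector3 right  = Vector3.Normalize(Vector3.Cross(helper, dir)) * spread;
            Vector3 up     = Vector3.Normalize(Vector3.Cross(dir, right)) * spread;

            return new[]
            {
                dir - right - up, dir - right + up,
                dir + right + up, dir + right - up,
            };
        }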

    Read the article

  • How can I prevent seams from showing up on objects using lower mipmap levels?

    - by Shivan Dragon
    Disclaimer: kindly right-click on the images and open them separately so that they're at full size, as there are fine details which don't show up otherwise. Thank you. I made a simple Blender model; it's a cylinder with the top cap removed: I've exported the UVs: Then I imported them into Photoshop and painted the inner area in yellow and the outer area in red. I made sure to cover the UV lines well: I then save the image and load it as a texture on the model in Blender. Actually, I just reload it as the image the UVs were exported to, and change the viewport view mode to textured. When I look at the mesh up close, there's yellow everywhere and everything seems fine: However, if I start zooming out, I start seeing red (literally and metaphorically) where the texture edges are: And the more I zoom, the more I see it: The same thing happens in Unity, though the effect seems less pronounced. Up close it is fine and yellow: Zoom out and you see red at the seams: Now, obviously, for this simple example a workaround is to spread the yellow well outside the UV margins, and it's fine from all distances. However, this is an issue when you try making a complex texture that should tile seamlessly at the edges. In this situation I either make a few lines of pixels overlap (in which case it looks bad from up close and OK from far away), or I leave them seamless and then I have those seams when seeing it from far away. So my question is, is there something I'm missing, or some extra thing I must do to have my texture look seamless from all distances?
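
    What's described here is the classic mip-level bleed at UV island borders, and the standard remedy is edge padding (dilation): bleed each island's border colour outward far enough that every mip level still samples island colour (many bakers default to 8-16 pixels). A hedged sketch of one dilation pass, assuming a coverage mask that marks which pixels belong to a UV island:

        using System;

        static void DilateOnce(uint[,] pixels, bool[,] covered)
        {
            int w = pixels.GetLength(0), h = pixels.GetLength(1);
            var nextCovered = (bool[,])covered.Clone();
            for (int x = 0; x < w; x++)
                for (int y = 0; y < h; y++)
                {
                    if (covered[x, y]) continue;
                    // copy the first covered 4-neighbour found and mark this pixel as filled
                    foreach (var (dx, dy) in new[] { (1, 0), (-1, 0), (0, 1), (0, -1) })
                    {
                        int nx = x + dx, ny = y + dy;
                        if (nx < 0 || ny < 0 || nx >= w || ny >= h || !covered[nx, ny]) continue;
                        pixels[x, y] = pixels[nx, ny];
                        nextCovered[x, y] = true;
                        break;
                    }
                }
            Array.Copy(nextCovered, covered, nextCovered.Length);
        }

        // run DilateOnce as many times as pixels of padding you need around each island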

    Read the article

  • OpenGL lighting with dynamic geometry

    - by Tank
    I'm currently thinking hard about how to implement lighting in my game. The geometry is quite dynamic (fixed 3D grid with custom geometry in each cell) and needs some light to get more depth and in general look nicer. A scene in my game always contains sunlight and local light sources like lamps (point lights). One can move underground, so sunlight must be able to illuminate as far as it can get. Here's a render of a typical situation: The lamp is positioned behind the wall to the top, and in the hollow cube there's a hole in the back, so that light can shine through. (I don't want soft shadows, this is just for illustration.) After spending the whole day searching Google, I stumbled on keywords like deferred rendering, forward rendering, ambient occlusion, screen space ambient occlusion etc. Some articles/tutorials even refer to "normal shading", but to be honest I don't really know how to do even simple shading. OpenGL of course has a fixed lighting pipeline with 8 possible light sources. However, they just illuminate all vertices without checking for occluding geometry. I'd be very thankful if someone could give me some pointers in the right direction. I don't need complete solutions or similar, just good sources with information understandable for someone with nearly no lighting experience (preferably with OpenGL).
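
    As a starting point for the "simple shading" part, a minimal per-point Lambert sketch under assumed conventions (not a full solution; shadows and occlusion are a separate problem layered on top of this):

        using System;
        using System.Numerics;

        static Vector3 ShadePoint(Vector3 n, Vector3 p, Vector3 albedo,
                                  Vector3 sunDir, Vector3 sunColor,
                                  (Vector3 pos, Vector3 color, float radius)[] lamps)
        {
            n = Vector3.Normalize(n);

            // directional sun: N dot L with the light direction pointing toward the surface
            Vector3 c = albedo * sunColor * MathF.Max(Vector3.Dot(n, -Vector3.Normalize(sunDir)), 0f);

            foreach (var lamp in lamps)
            {
                Vector3 toLight = lamp.pos - p;
                float dist  = toLight.Length();
                float atten = MathF.Max(1f - dist / lamp.radius, 0f);   // crude linear falloff
                c += albedo * lamp.color * atten * MathF.Max(Vector3.Dot(n, toLight / dist), 0f);
            }
            return c;
        }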

    Read the article

  • Exporting UV coords from Blender

    - by Soapy
    So I have searched on Google and various other websites but I've not found an answer. The only ones I did find did not work. So my question is: how do I get UV coords from Blender (2.63)? Currently I'm writing my own custom file exporter, and so far I have managed to export vertices and their normals. Is there a way to export the UV coords? N.B. I'm currently trying to figure it out using a simple cube that is unwrapped and has a texture applied to it.

    Read the article

  • Sampling Heightmap Edges for Normal map

    - by pl12
    I use a Sobel filter to generate normal maps from procedural height maps. The heightmaps are 258x258 pixels. I scale my texture coordinates like so: texCoord = (texCoord * (256/258)) + (1/258) Yet even with this I am left with the following problem: As you can see, the edges of the normal map still prove to be problematic. Setting the texture wrap mode to "clamp" also didn't help. EDIT: The Sobel filter works by sampling the 8 pixels surrounding a given pixel so that a derivative can be calculated in order to find the "normal" of the given pixel. The texture coordinates are instanced once per quad (for the quadtree that makes up the world) and are created as follows (it is quite possible that the problem results from the way I scale and offset the texCoords as seen above): Java: for(int i = 0; i<vertices.length; i++){ Vector2f coord = new Vector2f((vertices[i].x)/(worldSize), (vertices[i].z)/( worldSize)); texCoords[i] = coord; } The quad used for input here rests on the X0Z plane. 'worldSize' is the diameter of the planet. No negative texCoords occur, as the quad used as input for this method is not centered around the origin. Is there something I am missing here? Thanks.
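
    One thing that commonly causes exactly this: the Sobel taps for border pixels reach outside the patch, so the derivative at the edge is computed against heights that don't match the neighbouring patch. Clamping the tap indices (or padding each 258x258 patch with one row/column of its neighbour's heights) keeps the edge normals consistent. A hedged sketch of the clamped filter, independent of the shader code in question:

        using System;

        static (float x, float y, float z) SobelNormal(float[,] height, int x, int y, float strength)
        {
            int w = height.GetLength(0), h = height.GetLength(1);
            // clamp the taps so border pixels never read outside the patch
            float H(int i, int j) => height[Math.Clamp(i, 0, w - 1), Math.Clamp(j, 0, h - 1)];

            float dx = (H(x + 1, y - 1) + 2 * H(x + 1, y) + H(x + 1, y + 1))
                     - (H(x - 1, y - 1) + 2 * H(x - 1, y) + H(x - 1, y + 1));
            float dy = (H(x - 1, y + 1) + 2 * H(x, y + 1) + H(x + 1, y + 1))
                     - (H(x - 1, y - 1) + 2 * H(x, y - 1) + H(x + 1, y - 1));

            float len = MathF.Sqrt(dx * dx + dy * dy + strength * strength);
            return (-dx / len, -dy / len, strength / len);
        }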

    Read the article

  • HLSL 5 interpolation issues

    - by metredigm
    I'm having issues with the depth components of my shadowmapping shaders. The shadow map rendering shader is fine, and works very well. The world rendering shader is more problematic. The only value which seems to definitely be off is the pixel's position from the light's perspective, which I pass in parallel to the position. struct Pixel { float4 position : SV_Position; float4 light_pos : TEXCOORD2; float3 normal : NORMAL; float2 texcoord : TEXCOORD; }; The reason that I used the semantic 'TEXCOORD2' on the light's pixel position is because I believe that the problem lies with Direct3D's interpolation of values between shaders, and I started trying random semantics and also forcing linear and noperspective interpolations. In the world rendering shader, I observed in the pixel shader that the Z value of light_pos was always extremely close to, but less than the W value. This resulted in a depth result of 0.999 or similar for every pixel. Here is the vertex shader code : struct Vertex { float3 position : POSITION; float3 normal : NORMAL; float2 texcoord : TEXCOORD; }; struct Pixel { float4 position : SV_Position; float4 light_pos : TEXCOORD2; float3 normal : NORMAL; float2 texcoord : TEXCOORD; }; cbuffer Camera : register (b0) { matrix world; matrix view; matrix projection; }; cbuffer Light : register (b1) { matrix light_world; matrix light_view; matrix light_projection; }; Pixel RenderVertexShader(Vertex input) { Pixel output; output.position = mul(float4(input.position, 1.0f), world); output.position = mul(output.position, view); output.position = mul(output.position, projection); output.world_pos = mul(float4(input.position, 1.0f), world); output.world_pos = mul(output.world_pos, light_view); output.world_pos = mul(output.world_pos, light_projection); output.texcoord = input.texcoord; output.normal = input.normal; return output; } I suspect interpolation to be the culprit, as I used the camera matrices in place of the light matrices in the vertex shader, and had the same problem. The problem is evident as both of the same vectors were passed to a pixel from the VS, but only one of them showed a change in the PS. I have already thoroughly debugged the matrices' validity, the cbuffers' validity, and the multiplicative validity. I'm very stumped and have been trying to solve this for quite some time. Misc info : The light projection matrix and the camera projection matrix are the same, generated from D3DXMatrixPerspectiveFovLH(), with an FOV of 60.0f * 3.141f / 180.0f, a near clipping plane of 0.1f, and a far clipping plane of 1000.0f. Any ideas on what is happening? (This is a repost from my question on Stack Overflow)

    Read the article

  • Bad texture on model with different GPU

    - by Pacha
    I have some kind of distortion on the texture of my 3D model. It works perfectly well on an AMD GPU, but when testing on an integrated Intel HD graphics card it has a weird issue. I don't have a problem with the rest of my entities as they are not scaled. The models with the problems are scaled, as my engine supports different sizes for the platforms. I am using Ogre3D as the rendering engine, and GLSL as the shader language. Vertex shader: #version 120 varying vec2 UV; void main() { UV = gl_MultiTexCoord0; gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; } Fragment shader: #version 120 varying vec2 UV; uniform sampler2D diffuseMap; void main(void) { gl_FragColor = texture(diffuseMap, UV); } Screenshot (the error is on the right and left side; the top and bottom parts are rendered perfectly well):

    Read the article

  • Complex shading using one single (small) texture

    - by teodron
    Recently I stumbled upon a demo reel in UDK about how one can attain beautiful results using just one (rather tiny) texture that's being sent to the shader pipeline. The famous link is this one. Basically, the author states that they've used just one texture and gives a snapshot of the technique here. I see that every RGBA channel contains different grayscale information, and that info could be used inside a shader to obtain a colour-blended output. The problem is that the reel displays a fairly complex scene. To top that off, the author even makes use of a normal map. How did they manage to fit a normal map into an already cluttered texture? It makes sense to have a half-space normal map by using only RG from an RGB texture, but what about the rest of the information? Since it was proven to be possible, could someone please explain how it was done (the big picture, not the dirty details)? Here's the texture being used. Click to see it in full size.
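
    One common reading of setups like this (an assumption about the general approach, not a claim about that exact UDK material): each channel of the small texture is a grayscale mask, the shader blends a handful of flat tint colours by those masks, and a two-channel (half-space) normal rides in whatever channels are left, with Z reconstructed as sqrt(1 - x^2 - y^2). A sketch of just the mask-blend part:

        // masks = the RGBA texel; materials = four tint colours defined in the shader/material
        static (float r, float g, float b) BlendByMasks(
            (float r, float g, float b, float a) masks,
            (float r, float g, float b)[] materials)
        {
            float[] w = { masks.r, masks.g, masks.b, masks.a };
            float rOut = 0, gOut = 0, bOut = 0;
            for (int i = 0; i < 4; i++)
            {
                rOut += materials[i].r * w[i];
                gOut += materials[i].g * w[i];
                bOut += materials[i].b * w[i];
            }
            return (rOut, gOut, bOut);
        }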

    Read the article

  • HTTP error code 405: Tomcat URL mapping issue

    - by Andrew
    I am having trouble POSTing to my java HTTPServlet. I am getting "HTTP Status 405 - HTTP method GET is not supported by this URL" from my tomcat server". When I debug the servlet the login method is never called. I think it's a url mapping issue within tomcat... web.xml <servlet-mapping> <servlet-name>faxcom</servlet-name> <url-pattern>/faxcom/*</url-pattern> </servlet-mapping> FaxcomService.java @Path("/rest") public class FaxcomService extends HttpServlet{ private FAXCOM_x0020_ServiceLocator service; private FAXCOM_x0020_ServiceSoap port; @GET @Produces("application/json") public String testGet() { return "{ \"got here\":true }"; } @POST @Path("/login") @Consumes("application/json") // @Produces("application/json") public Response login(LoginBean login) { ArrayList<ResultMessageBean> rm = new ArrayList<ResultMessageBean>(10); try { service = new FAXCOM_x0020_ServiceLocator(); service.setFAXCOM_x0020_ServiceSoapEndpointAddress("http://cd-faxserver/faxcom_ws/faxcomservice.asmx"); service.setMaintainSession(true); // enable sessions support port = service.getFAXCOM_x0020_ServiceSoap(); rm.add(new ResultMessageBean(port.logOn( "\\\\CD-Faxserver\\FaxcomQ_API", /* path to the queue */ login.getUserName(), /* username */ login.getPassword(), /* password */ login.getUserType() /* 2 = user conf user */ ))); } catch (RemoteException e) { e.printStackTrace(); } catch (ServiceException e) { e.printStackTrace(); } // return rm; return Response.status(201).entity(rm).build(); } @POST @Path("/newFaxMessage") @Consumes(MediaType.APPLICATION_JSON) @Produces(MediaType.APPLICATION_JSON) public ArrayList<ResultMessageBean> newFaxMessage(FaxBean fax) { ArrayList<ResultMessageBean> rm = new ArrayList<ResultMessageBean>(); try { rm.add(new ResultMessageBean(port.newFaxMessage( fax.getPriority(), /* priority: 0 - low, 1 - normal, 2 - high, 3 - urgent */ fax.getSendTime(), /* send time */ /* "0.0" - immediate */ /* "1.0" - offpeak */ /* "9/14/2007 5:12:11 PM" - to set specific time */ fax.getResolution(), /* resolution: 0 - low res, 1 - high res */ fax.getSubject(), /* subject */ fax.getCoverpage(), /* cover page: "" – default, “(none)� – no cover page */ fax.getMemo(), /* memo */ fax.getSenderName(), /* sender's name */ fax.getSenderFaxNumber(), /* sender's fax */ fax.getRecipients().get(0).getName(), /* recipient's name */ fax.getRecipients().get(0).getCompany(), /* recipient's company */ fax.getRecipients().get(0).getFaxNumber(), /* destination fax number */ fax.getRecipients().get(0).getVoiceNumber(), /* recipient's phone number */ fax.getRecipients().get(0).getAccountNumber() /* recipient's account number */ ))); if (fax.getRecipients().size() > 1) { for (int i = 1; i < fax.getRecipients().size(); i++) rm.addAll(addRecipient(fax.getRecipients().get(i))); } } catch (RemoteException e) { // TODO Auto-generated catch block e.printStackTrace(); } return rm; } } Main.java private static void main(String[] args) { try { URL url = new URL("https://andrew-vm/faxcom/rest/login"); HttpURLConnection conn = (HttpURLConnection) url.openConnection(); conn.setRequestMethod("POST"); conn.setDoOutput(true); conn.setRequestProperty("Content-Type", "application/json"); FileInputStream jsonDemo = new FileInputStream("login.txt"); OutputStream os = (OutputStream) conn.getOutputStream(); os.write(IOUtils.toByteArray(jsonDemo)); os.flush(); if (conn.getResponseCode() != 200) { throw new RuntimeException("Failed : HTTP error code : " + conn.getResponseCode()); } BufferedReader br = new BufferedReader(new InputStreamReader( 
(conn.getInputStream()))); String output; System.out.println("Output from Server .... \n"); while ((output = br.readLine()) != null) { System.out.println(output); } // Don't want to disconnect - servletInstance will be destroyed // conn.disconnect(); } catch (MalformedURLException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } } I am working from this tutorial: http://www.mkyong.com/webservices/jax-rs/restfull-java-client-with-java-net-url/

    Read the article

  • Should business objects be able to create their own DTOs?

    - by Sam
    Suppose I have the following class: class Camera { public Camera( double exposure, double brightness, double contrast, RegionOfInterest regionOfInterest) { this.exposure = exposure; this.brightness = brightness; this.contrast = contrast; this.regionOfInterest = regionOfInterest; } public void ConfigureAcquisitionFifo(IAcquisitionFifo acquisitionFifo) { // do stuff to the acquisition FIFO } readonly double exposure; readonly double brightness; readonly double contrast; readonly RegionOfInterest regionOfInterest; } ... and a DTO to transport the camera info across a service boundary (WCF), say, for viewing in a WinForms/WPF/Web app: using System.Runtime.Serialization; [DataContract] public class CameraData { [DataMember] public double Exposure { get; set; } [DataMember] public double Brightness { get; set; } [DataMember] public double Contrast { get; set; } [DataMember] public RegionOfInterestData RegionOfInterest { get; set; } } Now I can add a method to Camera to expose its data: class Camera { // blah blah public CameraData ToData() { var regionOfInterestData = regionOfInterest.ToData(); return new CameraData() { Exposure = exposure, Brightness = brightness, Contrast = contrast, RegionOfInterestData = regionOfInterestData }; } } or, I can create a method that requires a special IReporter to be passed in for the Camera to expose its data to. This removes the dependency on the Contracts layer (Camera no longer has to know about CameraData): class Camera { // beep beep I'm a jeep public void ExposeToReporter(IReporter reporter) { reporter.GetCameraInfo(exposure, brightness, contrast, regionOfInterest); } } So which should I do? I prefer the second, but it requires the IReporter to have a CameraData field (which gets changed by GetCameraInfo()), which feels weird. Also, if there is any even better solution, please share with me! I'm still an object-oriented newb.
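
    A hedged sketch of a third option that often comes up here: keep Camera and CameraData ignorant of each other and put the translation in a dedicated assembler at the service boundary. The CameraAssembler, the Exposure/Brightness/Contrast getters and RegionOfInterestAssembler below are hypothetical additions (the Camera above only has private readonly fields), so the class would need to expose its state one way or another:

        public class CameraAssembler
        {
            public CameraData ToData(Camera camera)
            {
                // assumes Camera exposes read-only properties (or an ExposeToReporter-style
                // visitor); adjust to whichever access the class actually offers
                return new CameraData
                {
                    Exposure = camera.Exposure,
                    Brightness = camera.Brightness,
                    Contrast = camera.Contrast,
                    RegionOfInterest = new RegionOfInterestAssembler().ToData(camera.RegionOfInterest)
                };
            }
        }

    This keeps the domain object free of the contracts layer without the IReporter needing a mutable CameraData field.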

    Read the article

  • Fluent NHibernate many-to-one mapping

    - by Jit
    I am creating a NHibenate application with one to many relationship. Like City and State data. City table CREATE TABLE [dbo].[State]( [StateId] [varchar](2) NOT NULL primary key, [StateName] [varchar](20) NULL) CREATE TABLE [dbo].[City]( [Id] [int] primary key IDENTITY(1,1) NOT NULL , [State_id] [varchar](2) NULL refrences State(StateId), [CityName] [varchar](50) NULL) My mapping is follows public CityMapping() { Id(x = x.Id); Map(x = x.State_id); Map(x = x.CityName); HasMany(x = x.EmployeePreferedLocations) .Inverse() .Cascade.SaveUpdate() ; References(x = x.State) //.Cascade.All(); //.Class(typeof(State)) //.Not.Nullable() .Cascade.None() .Column("State_id") ; } public StateMapping() { Id(x => x.StateId) .GeneratedBy.Assigned(); Map(x => x.StateName); HasMany(x => x.Jobs) .Inverse(); //.Cascade.SaveUpdate(); HasMany(x => x.EmployeePreferedLocations) .Inverse(); HasMany(x => x.Cities) // .Inverse() .Cascade.SaveUpdate() //.Not.LazyLoad() ; } Models are as follows: [Serializable] public partial class City { public virtual System.String CityName { get; set; } public virtual System.Int32 Id { get; set; } public virtual System.String State_id { get; set; } public virtual IList<EmployeePreferedLocation> EmployeePreferedLocations { get; set; } public virtual JobPortal.Data.Domain.Model.State State { get; set; } public City(){} } public partial class State { public virtual System.String StateId { get; set; } public virtual System.String StateName { get; set; } public virtual IList<City> Cities { get; set; } public virtual IList<EmployeePreferedLocation> EmployeePreferedLocations { get; set; } public virtual IList<Job> Jobs { get; set; } public State() { Cities = new List<City>(); EmployeePreferedLocations = new List<EmployeePreferedLocation>(); Jobs = new List<Job>(); } //public virtual void AddCity(City city) //{ // city.State = this; // Cities.Add(city); //} } My Unit Testing code is below. City city = new City(); IRepository<State> rState = new Repository<State>(); Dictionary<string, string> critetia = new Dictionary<string, string>(); critetia.Add("StateId", "TX"); State frState = rState.GetByCriteria(critetia); city.CityName = "Waco"; city.State = frState; IRepository<City> rCity = new Repository<City>(); rCity.SaveOrUpdate(city); City frCity = rCity.GetById(city.Id); The problem is , I am not able to insert record. The error is below. "Invalid index 2 for this SqlParameterCollection with Count=2." But the error will not come if I comment State_id mapping field in the CityMapping file. I donot know what mistake is I did. If do not give the mapping Map(x = x.State_id); the value of this field is null, which is desired. Please help me how to solve this issue.
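
    For what it's worth (a hedged sketch, not verified against this schema): "Invalid index 2 for this SqlParameterCollection with Count=2" is the classic NHibernate symptom of the same column being mapped twice, and here State_id is mapped both as a plain property and through the References(...) many-to-one on the same "State_id" column. Keeping only the reference, and reading the id through City.State.StateId when needed, avoids the duplicate parameter:

        public CityMapping()
        {
            Id(x => x.Id);
            Map(x => x.CityName);

            // single mapping for the State_id column, via the many-to-one only
            References(x => x.State)
                .Column("State_id")
                .Cascade.None();

            HasMany(x => x.EmployeePreferedLocations)
                .Inverse()
                .Cascade.SaveUpdate();
        }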

    Read the article

  • Is there a way to transform a list of key/value pairs into a data transfer object

    - by weevie
    ...apart from the obvious looping through the list and a dirty great case statement! I've turned over a few LINQ queries in my head but nothing seems to get anywhere close. Here's an example DTO if it helps: class ClientCompany { public string Title { get; private set; } public string Forenames { get; private set; } public string Surname { get; private set; } public string EmailAddress { get; private set; } public string TelephoneNumber { get; private set; } public string AlternativeTelephoneNumber { get; private set; } public string Address1 { get; private set; } public string Address2 { get; private set; } public string TownOrDistrict { get; private set; } public string CountyOrState { get; private set; } public string PostCode { get; private set; } } We have no control over the fact that we're getting the data in as KV pairs, I'm afraid.
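
    A minimal reflection-based sketch, assuming the incoming keys match the property names above (an alias dictionary can translate them first if they don't):

        using System.Collections.Generic;
        using System.Reflection;

        static ClientCompany ToDto(IEnumerable<KeyValuePair<string, string>> pairs)
        {
            var dto = new ClientCompany();
            foreach (var pair in pairs)
            {
                var prop = typeof(ClientCompany).GetProperty(pair.Key,
                    BindingFlags.Public | BindingFlags.Instance | BindingFlags.IgnoreCase);
                // GetSetMethod(true) reaches the private setters declared on the DTO
                prop?.GetSetMethod(nonPublic: true)?.Invoke(dto, new object[] { pair.Value });
            }
            return dto;
        }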

    Read the article

  • iPad Safari's mapping of mouse events to touch events in image-maps

    - by Tim
    My website makes extensive use of image-maps. The images are of pages from a medieval manuscript. The mouseOver event of the AREA tags has a tooltip attached to it, which displays a modern typographic transcription of the ancient script for the line the mouse is hovering over. I just checked my website out on the iPad at the Apple store. The iPad is in many respects a joy to use; however, I am wondering about Apple's mapping of the mouse events to the finger-touch events. Apple probably had a good reason for doing things as they did, but their choices seem counterintuitive and overly complicated to me. Specifically, the iPad Safari browser clearly was responding to both fingerDown and fingerTap, and in different ways. When I tapped an area of the image-map, the tooltip wired to the mouse-over event of the AREA tag was displayed, and remained visible until I tapped somewhere else. When I held my finger down on an area of the image-map, the area changed color. So if iPad Safari detects a mouseOver event handler, it executes the mouseOver code and apparently prevents the "click" event from propagating, so that if you also have something wired to the click event, it doesn't work? Is that right? But more importantly, why isn't fingerDown the iPad-Safari counterpart for mouseOver? FingerDown seems a more likely candidate than Tap when mapping the mouseOver event. I would have expected things to be mapped in this way: MouseClick : FingerTap (i.e. finger down and then immediately up) MouseOver : FingerDown (finger down and stays on the spot) If Apple had treated fingerDown as the counterpart to mouseOver, then the tooltip could be displayed upon FingerDown and made invisible again on fingerUp, which would be the counterpart to mouseOut. Perhaps someone could enlighten me about the thinking process that led Apple to these particular mouse-to-touch event-mappings? Thanks

    Read the article

  • IIS 6 with wildcard mapping and UNC virtual directory problem

    - by El Che
    Hi. On our production servers (Win 2003 with IIS6, load balanced with an F5 BIGIP), we have a problem when introducing wildcard mapping on IIS6. We use .NET Framework 3.5 SP1. The issue manifests itself in the server only sometimes serving the images stored in a virtual directory pointing to a UNC path. Sometimes the images are displayed, and sometimes not. Removing the wildcard mapping solved this problem. I will need wildcard mapping on the server for future features, so any help/pointers on whether this is a known problem will be very helpful. In advance, thanks for any help. Edit: The exception it fails with is the following: Message: Failed to start monitoring changes to '\ourFileServer\folder1\thumbnails' because the network BIOS command limit has been reached. For more information on this error, please refer to Microsoft knowledge base article 810886. Hosting on a UNC share is not supported for the Windows XP Platform. Source: System.Web Data: System.Collections.ListDictionaryInternal TargetSite: Void .ctor(System.Web.DirectoryMonitor, System.String, Boolean, UInt32) StackTrace at System.Web.DirMonCompletion..ctor(DirectoryMonitor dirMon, String dir, Boolean watchSubtree, UInt32 notifyFilter) at System.Web.DirectoryMonitor.StartMonitoring() at System.Web.DirectoryMonitor.StartMonitoringFile(String file, FileChangeEventHandler callback, String alias) at System.Web.FileChangesMonitor.StartMonitoringFile(String alias, FileChangeEventHandler callback) at System.Web.Configuration.WebConfigurationHost.StartMonitoringStreamForChanges(String streamName, StreamChangeCallback callback) at System.Configuration.BaseConfigurationRecord.MonitorStream(String configKey, String configSource, String streamname) at System.Configuration.BaseConfigurationRecord.InitConfigFromFile()

    Read the article

  • Inheritance mapping with Fluent NHibernate

    - by Berryl
    Below is an example of how I currently use automapping overrides to set up my db representation of inheritance. It gets the job done functionality-wise, BUT it uses some internal default values. For example, the discriminator column name winds up being the literal value 'discriminator' instead of "ActivityType", and the discriminator values are the fully qualified type of each class, instead of "ACCOUNT" and "PROJECT". I am guessing that this is a bug that doesn't get much attention now that conventions are preferred, and that the convention approach works correctly. I am looking for a sample of usage. Cheers, Berryl public class ActivityBaseMap : IAutoMappingOverride<ActivityBase> { public void Override(AutoMapping<ActivityBase> mapping) { ... mapping.DiscriminateSubClassesOnColumn("ActivityType"); } } public class AccountingActivityMap : SubclassMap<AccountingActivity> { public AccountingActivityMap() { ... DiscriminatorValue("ACCOUNT"); } } public class ProjectActivityMap : SubclassMap<ProjectActivity> { public ProjectActivityMap() { ... DiscriminatorValue("PROJECT"); } }

    Read the article

  • OpenGL texture mapping blocking colours on FreeType?

    - by Dororo
    I'm using FreeType in order to allow fonts to be used in OpenGL. However, I'm having a problem where I cannot change the font colour whenever I do texture mapping. No matter what I select using glColor3f it will just come out white. The texture works fine. glClear(GL_COLOR_BUFFER_BIT); glLoadIdentity(); glColor3f(0.5,0.0,0.5); glPushMatrix(); glEnable(GL_BLEND); glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); glEnable(GL_TEXTURE_2D); glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE); glBindTexture(GL_TEXTURE_2D, texName); glBegin(GL_POLYGON); glTexCoord2f(0,1); glVertex2f(-16,-16); glTexCoord2f(0,0); glVertex2f(-16,16); glTexCoord2f(1,0); glVertex2f(16,16); glTexCoord2f(1,1); glVertex2f(16,-16); glEnd(); glDisable(GL_TEXTURE_2D); glDisable(GL_BLEND); glPopMatrix(); glColor3f(1,0,0); print(our_font, -300+screenWidth/2.0, screenHeight/2.0, "fifty two - %7.2f", spin); This is the problem code, I can confirm that drawing a polygon beneath this code will indeed make it red. The text is not changing to red though which it should; if you remove the texture mapping above it will turn red again, I can only think it is a problem with enabling and disabling and I've forgotten to do something...?

    Read the article
