Search Results

Search found 3055 results on 123 pages for 'ptr vector'.

  • what's wrong with my Ubuntu 11.10 bind9 configuration?

    - by John Bowlinger
    I've followed several tutorials on installing your own nameservers and I'm pretty much at my wit's end, because I cannot get them to resolve. Note, the actual domain and IP address have been changed for privacy to example.com and 192.168.0.1.

    My named.conf.local file:

        zone "example.com" {
            type master;
            file "/var/cache/bind/example.com.db";
        };
        zone "0.168.192.in_addr.arpa" {
            type master;
            file "/var/cache/bind/192.168.0.db";
        };

    My named.conf.options file:

        options {
            forwarders {
                192.168.0.1;
            };
            auth-nxdomain no;    # conform to RFC1035
            listen-on-v6 { any; };
        };

    My resolv.conf file:

        search example.com.
        nameserver 192.168.0.1

    My forward DNS file:

        ORIGIN example.com.
        $TTL 86400
        @   IN  SOA ns1.example.com. root.example.com. (
                2012083101 ; Serial
                604800     ; Refresh
                86400      ; Retry
                2419200    ; Expire
                3600 )     ; Negative Cache TTL
        example.com.      NS       ns1.example.com.
        example.com.      NS       ns2.example.com.
        example.com.      MX 10    mail.example.com.
        @                 IN A     192.168.0.1
        ns1.example.com   IN A     192.168.0.1
        ns2.example.com   IN A     192.168.0.2
        mail              IN A     192.168.0.1
        server1           IN A     192.168.0.1
        gateway           IN CNAME ns1.example.com.
        headoffice        IN CNAME server1.example.com.
        smtp              IN CNAME mail.example.com.
        pop               IN CNAME mail.example.com.
        imap              IN CNAME mail.example.com.
        www               IN CNAME server1.example.com.
        sql               IN CNAME server1.example.com.

    And my reverse DNS:

        $ORIGIN 0.168.192.in-addr.arpa.
        $TTL 86400
        @   IN  SOA ns1.example.com. root.example.com. (
                2009013101 ; Serial
                604800     ; Refresh
                86400      ; Retry
                2419200    ; Expire
                3600 )     ; Negative Cache TTL
        1   PTR mail.example.com.
        1   PTR server1.example.com.
        2   PTR ns1.example.com.

    Yet, when I restart bind9 and do:

        host ns1.example.com localhost

    I get:

        Using domain server:
        Name: localhost
        Address: 127.0.0.1#53
        Aliases:
        Host ns1.example.com.example.com not found: 2(SERVFAIL)

    Similarly, for:

        host 192.168.0.1 localhost

    I get:

        ;; connection timed out; no servers could be reached

    Anybody know what's going on? By the way, my domain name "www.example.com" that I've used in this question is being forwarded to my ISP's nameservers. Would that affect my bind9 configuration? I want to learn how to set up nameservers on my own, so that is why I'm going through all this trouble.

  • Inspire Geek Love with These Hilarious Geek Valentines

    - by Eric Z Goodnight
    Want to send some Geek Love to that special someone? Why not do it with these elementary school throwback valentines, and win their heart this upcoming Valentine’s day—the geek way! Read on to see the simple method for making your own custom Valentines, and to download a set of eleven ready-made ones any geek guy or gal should be delighted to get. It’s amore!

    How to Make Custom Valentines: The size we’ve used for all of our Valentines is 3” x 4” at 150 dpi. This is fairly low resolution for print, but makes a great graphic to email. With your new image open, navigate to Edit > Fill and fill your background layer with a rich, red color (or whatever appeals to you). By setting “Use” to “Foreground Color” as shown above, you’ll paint whatever foreground color you have in your color picker. Select the text tool and set a few text objects, using whatever fonts appeal to you. Pixel fonts, like this one, are freely downloadable, and we’ve already shared a great list of Valentines fonts. Copy an image from the internet if you’re confident your sweetie won’t mind a bit of fair use of copyrighted imagery. If they do mind, find yourself some great Creative Commons images. Use a free transform on your image, sizing it to whatever dimensions work best for your design. Right-click your newly added image layer in your panel and choose “Blending Effects” to pick a Layer Style. “Stroke” with this setting adds a black line around your image. Turning on “Outer Glow” with this setting also puts a dark black shadow around the top and bottom (and sides, although they are hidden). Add some more text; double entendre is recommended. Click and hold down on the “Rectangle Tool” to get the “Custom Shape Tool,” which has useful vector shapes built into it. Find the “Shape” dropdown in the menu to find the heart image, then click and drag to create a vector heart shape in your image. Your layers panel is where you can change the color if it happens to use the wrong one at first: click the color swatch in your panel, highlighted in blue above. A free transform will also resize your vector heart; you can use it to rotate, too. Add some details, like this Power or Standby symbol, which can be found in symbol fonts, taken from images online, or drawn by hand. Your Valentine is now ready to be saved as a JPG or PNG and sent to the object of your affection! Keep reading to see a list of 11 downloadable How-To Geek Valentines, including this one and the three from the header image.

    Download The HTG Set of Valentines: Download the HTG Geek Valentines (ZIP). When he’s not wooing ladies with Valentines cards, you can email the author at [email protected] with your Photoshop and Graphics questions. Your questions may be featured in a future How-To Geek article!

  • Skewed: a rotating camera in a simple CPU-based voxel raycaster/raytracer

    - by voxelizr
    TL;DR -- in my first simple software voxel raycaster, I cannot get camera rotations to work, seemingly correct matrices notwithstanding. The result is skewed: like a flat rendering, correctly rotated, yet distorted and without depth. (While axis-aligned, i.e. unrotated, depth and parallax are as expected.)

    I'm trying to write a simple voxel raycaster as a learning exercise. This is purely CPU-based for now until I figure out how things work exactly -- for now, OpenGL is just (ab)used to blit the generated bitmap to the screen as often as possible. Now I have gotten to the point where a perspective-projection camera can move through the world, and I can render (mostly, minus some artifacts that need investigation) perspective-correct 3-dimensional views of the "world", which is basically empty but contains a voxel cube of the Stanford Bunny. So I have a camera that I can move up and down, strafe left and right, and "walk forward/backward" -- all axis-aligned so far, no camera rotations. Herein lies my problem.

    Screenshot #1: correct depth when the camera is still strictly axis-aligned, i.e. unrotated.

    Now I have for a few days been trying to get rotation to work. The basic logic and theory behind matrices and 3D rotations is, in theory, very clear to me. Yet I have only ever achieved a "2.5D rendering" when the camera rotates... fish-eyey, a bit like in Google Street View: even though I have a volumetric world representation, it seems -- no matter what I try -- as if I would first create a rendering from the "front view", then rotate that flat rendering according to the camera rotation. Needless to say, I'm by now aware that rotating rays is not particularly necessary and is error-prone. Still, in my most recent setup, with the most simplified raycast ray-position-and-direction algorithm possible, my rotation still produces the same fish-eyey, flat-render-rotated style looks:

    Screenshot #2: camera "rotated to the right by 39 degrees" -- note how the blue-shaded left-hand side of the cube from screenshot #1 is not visible in this rotation, yet by now "it really should"!

    Now of course I'm aware of this: in a simple axis-aligned, no-rotation setup like I had in the beginning, the ray simply traverses the positive z-direction in small steps, diverging to the left or right and top or bottom only depending on pixel position and the projection matrix. As I "rotate the camera to the right or left" -- i.e. I rotate it around the Y-axis -- those very steps should simply be transformed by the proper rotation matrix, right? So for forward traversal the Z-step gets a bit smaller the more the camera rotates, offset by an "increase" in the X-step. Yet for the pixel-position-based horizontal and vertical divergence, increasing fractions of the x-step need to be "added" to the z-step. Somehow, none of the many matrices I experimented with, nor my experiments with matrix-less hardcoded verbose sin/cos calculations, really get this part right.
    Here's my basic per-ray pre-traversal algorithm -- syntax in Go, but take it as pseudocode:

        fx, fy:   pixel positions x and y
        rayPos:   vec3 for the ray starting position in world-space (calculated as below)
        rayDir:   vec3 for the xyz-steps to be added to rayPos in each step during ray traversal
        rayStep:  a temporary vec3
        camPos:   vec3 for the camera position in world space
        camRad:   vec3 for camera rotation in radians
        pmat:     typical perspective projection matrix

    The algorithm / pseudocode:

        // 1: rayPos is for now "this pixel, as a vector on the view plane in 3d, at The Origin"
        rayPos.X, rayPos.Y, rayPos.Z = ((fx / width) - 0.5), ((fy / height) - 0.5), 0

        // 2: rotate around Y axis depending on cam rotation. No prob since view plane still at Origin 0,0,0
        rayPos.MultMat(num.NewDmat4RotationY(camRad.Y))

        // 3: a temp vec3. planeDist is -0.15 or some such -- fov-based dist of view plane from eye
        //    and also the non-normalized, "in axis-aligned world" traversal step size "forward into the screen"
        rayStep.X, rayStep.Y, rayStep.Z = 0, 0, planeDist

        // 4: rotate this too -- 0,zstep should become some meaningful xzstep,xzstep
        rayStep.MultMat(num.NewDmat4RotationY(camRad.Y))

        // set up direction vector from still-origin-based-ray-position-off-rotated-view-plane
        // plus rotated-zstep-vector
        rayDir.X, rayDir.Y, rayDir.Z = -rayPos.X - rayStep.X, -rayPos.Y, rayPos.Z + rayStep.Z

        // perspective projection
        rayDir.Normalize()
        rayDir.MultMat(pmat)

        // before traversal, the ray starting position has to be transformed
        // from origin-relative to campos-relative
        rayPos.Add(camPos)

    I'm skipping the traversal and sampling parts -- as per screens #1 through #3, those are "basically mostly correct" (though not pretty) when axis-aligned / unrotated.
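
    For what it's worth, a common way to avoid exactly this "flat render, then rotate" symptom is to build each ray direction once, in camera space, and rotate only that direction -- never the origin-relative pixel position -- while the ray origin stays fixed at the camera position. A minimal C++ sketch of that idea (the Vec3/rotateY helpers are hypothetical stand-ins, not the asker's API):

        #include <cmath>

        struct Vec3 { double x, y, z; };

        // Standard rotation about the Y axis: x' = x*cos + z*sin, z' = -x*sin + z*cos.
        static Vec3 rotateY(const Vec3& v, double rad) {
            const double c = std::cos(rad), s = std::sin(rad);
            return { c * v.x + s * v.z, v.y, -s * v.x + c * v.z };
        }

        static Vec3 normalize(const Vec3& v) {
            const double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
            return { v.x / len, v.y / len, v.z / len };
        }

        // One ray per pixel: the origin is always camPos; only the direction is rotated.
        void makeRay(double fx, double fy, double width, double height,
                     double planeDist, const Vec3& camPos, double camRadY,
                     Vec3& rayPos, Vec3& rayDir) {
            // direction through this pixel on the (unrotated) view plane
            Vec3 d = { (fx / width) - 0.5, (fy / height) - 0.5, planeDist };
            rayDir = rotateY(normalize(d), camRadY);  // rotate the direction only
            rayPos = camPos;                          // the origin never gets rotated
        }

    With this setup there is no separate rayStep to rotate and no projection-matrix multiply on the direction: the perspective falls out of the per-pixel divergence itself, and rotating the direction alone cannot produce the rotated-flat-image look.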

  • How do I get FEATURE_LEVEL_9_3 to work with shaders in Direct3D11?

    - by Dominic
    Currently I'm going through some tutorials and learning DX11 on a DX10 machine (though I just ordered a new DX11-compatible computer) by means of setting the D3D_FEATURE_LEVEL_ setting to 10_0 and switching the vertex and pixel shader versions in D3DX11CompileFromFile to "vs_4_0" and "ps_4_0" respectively. This works fine as I'm not using any DX11-only features yet. I'd like to make it compatible with DX9.0c, which naively I thought I could do by changing the feature level setting to 9_3 or something and taking the vertex/pixel shader versions down to 3 or 2. However, no matter what I change the vertex/pixel shader versions to, it always fails when I try to call D3DX11CompileFromFile to compile the vertex/pixel shader files when I have D3D_FEATURE_LEVEL_9_3 enabled. Maybe this is due to the vertex/pixel shader files themselves being incompatible with the lower vertex/pixel shader versions, but I'm not expert enough to say. My shader files are listed below.

    Vertex shader:

        cbuffer MatrixBuffer
        {
            matrix worldMatrix;
            matrix viewMatrix;
            matrix projectionMatrix;
        };

        struct VertexInputType
        {
            float4 position : POSITION;
            float2 tex : TEXCOORD0;
            float3 normal : NORMAL;
        };

        struct PixelInputType
        {
            float4 position : SV_POSITION;
            float2 tex : TEXCOORD0;
            float3 normal : NORMAL;
        };

        PixelInputType LightVertexShader(VertexInputType input)
        {
            PixelInputType output;

            // Change the position vector to be 4 units for proper matrix calculations.
            input.position.w = 1.0f;

            // Calculate the position of the vertex against the world, view, and projection matrices.
            output.position = mul(input.position, worldMatrix);
            output.position = mul(output.position, viewMatrix);
            output.position = mul(output.position, projectionMatrix);

            // Store the texture coordinates for the pixel shader.
            output.tex = input.tex;

            // Calculate the normal vector against the world matrix only.
            output.normal = mul(input.normal, (float3x3)worldMatrix);

            // Normalize the normal vector.
            output.normal = normalize(output.normal);

            return output;
        }

    Pixel shader:

        Texture2D shaderTexture;
        SamplerState SampleType;

        cbuffer LightBuffer
        {
            float4 ambientColor;
            float4 diffuseColor;
            float3 lightDirection;
            float padding;
        };

        struct PixelInputType
        {
            float4 position : SV_POSITION;
            float2 tex : TEXCOORD0;
            float3 normal : NORMAL;
        };

        float4 LightPixelShader(PixelInputType input) : SV_TARGET
        {
            float4 textureColor;
            float3 lightDir;
            float lightIntensity;
            float4 color;

            // Sample the pixel color from the texture using the sampler at this texture coordinate location.
            textureColor = shaderTexture.Sample(SampleType, input.tex);

            // Set the default output color to the ambient light value for all pixels.
            color = ambientColor;

            // Invert the light direction for calculations.
            lightDir = -lightDirection;

            // Calculate the amount of light on this pixel.
            lightIntensity = saturate(dot(input.normal, lightDir));

            if (lightIntensity > 0.0f)
            {
                // Determine the final diffuse color based on the diffuse color and the amount of light intensity.
                color += (diffuseColor * lightIntensity);
            }

            // Saturate the final light color.
            color = saturate(color);

            // Multiply the texture pixel and the final diffuse color to get the final pixel color result.
            color = color * textureColor;

            return color;
        }
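
    If it helps anyone landing here: when targeting the 9_x feature levels through Direct3D 11, shaders are normally compiled against the special level_9 profiles ("vs_4_0_level_9_3" / "ps_4_0_level_9_3") rather than vs_3_0 or ps_2_0. A hedged C++ sketch of what that compile call might look like (error handling elided; the file and entry-point names here are assumptions based on the shader above):

        ID3D10Blob* vsBlob = nullptr;
        ID3D10Blob* errors = nullptr;

        // For D3D_FEATURE_LEVEL_9_3, the 10Level9 shader profiles are used.
        HRESULT hr = D3DX11CompileFromFile(
            L"light.vs",            // shader source file (assumed name)
            nullptr, nullptr,       // no macros, no include handler
            "LightVertexShader",    // entry point
            "vs_4_0_level_9_3",     // profile matching feature level 9_3
            0, 0,                   // compile flags
            nullptr,                // no thread pump: compile synchronously
            &vsBlob, &errors, nullptr);

    One caveat worth hedging on: instruction counts and features are tighter under the 9_x profiles, but the shaders in the question look simple enough that the profile string is the first thing to check.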

  • Computing a normal matrix in conjunction with gluLookAt

    - by Chris Smith
    I have a hand-rolled camera class that converts yaw, pitch, and roll angles into a forward, side, and up vector suitable for calling gluLookAt. Using this camera class I can modify the model-view matrix to move about the 3D world just fine. However, I am having trouble when using this camera class (and associated model-view matrix) when trying to perform directional lighting in my vertex shader.

    The problem is that the light direction, (0, 1, 0) for example, is relative to where the camera is looking and not the actual world coordinates. (Or is this eye coordinates vs. model coordinates?) I would like the light direction to be unaffected by the camera's viewing direction. For example, when the camera is looking down the Z axis the ground is lit correctly. However, if I point the camera straight at the ground, then it goes dark. This is (I think) because the light direction is parallel with the camera's 'up' vector, which is perpendicular to the ground's normal vector. I tried computing the normal matrix without taking the camera's model-view into account, but then none of my objects were rotated correctly.

    Sorry if this sounds vague. I suspect there is a straightforward answer, but I'm not 100% clear on how the normal matrix should be used for transforming vertex normals in my vertex shader. For reference, here is pseudocode for my rendering loop:

        pMatrix = new Matrix();
        pMatrix = makePerspective(...);

        mvMatrix = new Matrix();
        camera.apply(mvMatrix);  // Calls gluLookAt

        // Move the object into position.
        mvMatrix.translatev(position);
        mvMatrix.rotatef(rotation.x, 1, 0, 0);
        mvMatrix.rotatef(rotation.y, 0, 1, 0);
        mvMatrix.rotatef(rotation.z, 0, 0, 1);

        var nMatrix = new Matrix();
        nMatrix.set(mvMatrix.get().getInverse().getTranspose());

        // Set vertex shader uniforms.
        gl.uniformMatrix4fv(shaderProgram.pMatrixUniform, false, new Float32Array(pMatrix.getFlattened()));
        gl.uniformMatrix4fv(shaderProgram.mvMatrixUniform, false, new Float32Array(mvMatrix.getFlattened()));
        gl.uniformMatrix4fv(shaderProgram.nMatrixUniform, false, new Float32Array(nMatrix.getFlattened()));

        // ...
        gl.drawElements(gl.TRIANGLES, this.vertexIndexBuffer.numItems, gl.UNSIGNED_SHORT, 0);

    And the corresponding vertex shader:

        // Attributes
        attribute vec3 aVertexPosition;
        attribute vec4 aVertexColor;
        attribute vec3 aVertexNormal;

        // Uniforms
        uniform mat4 uMVMatrix;
        uniform mat4 uNMatrix;
        uniform mat4 uPMatrix;

        // Varyings
        varying vec4 vColor;

        // Constants
        const vec3 LIGHT_DIRECTION = vec3(0, 1, 0); // Opposite direction of photons.
        const vec4 AMBIENT_COLOR = vec4(0.2, 0.2, 0.2, 1.0);

        float ComputeLighting() {
            vec4 transformedNormal = vec4(aVertexNormal.xyz, 1.0);
            transformedNormal = uNMatrix * transformedNormal;
            float base = dot(normalize(transformedNormal.xyz), normalize(LIGHT_DIRECTION));
            return max(base, 0.0);
        }

        void main(void) {
            gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
            float lightWeight = ComputeLighting();
            vColor = vec4(aVertexColor.xyz * lightWeight, 1.0) + AMBIENT_COLOR;
        }

    Note that I am using WebGL, so if the answer is "use glFixThisProblem(...)", any pointers on how to re-implement that in WebGL if it's missing would be appreciated.

  • Improving the running time of Breadth First Search and Adjacency List creation

    - by user45957
    We are given an array of integers where all elements are between 0 and 9. We have to start from the 1st position and reach the end in the minimum number of moves, where from an index i we can move one position back or forward (i.e. to i-1 or i+1) or jump to any index having the same value as index i.

    Time limit: 1 second. Max input size: 100000.

    I have tried to solve this problem using a single-source shortest path approach with Breadth First Search, and though BFS itself is O(V+E) and runs in time, the adjacency list creation takes O(n^2) time, and therefore the overall complexity becomes O(n^2). Is there any way I can decrease the time complexity of the adjacency list creation? Or is there a better and more efficient way of solving the problem?

        #include <cstdlib>
        #include <iostream>
        #include <queue>
        #include <string>
        #include <vector>
        using namespace std;

        int main() {
            vector<int> v;
            string str;
            vector<int> sets[10];
            cin >> str;
            int in;
            for (int i = 0; i < str.length(); i++) {
                in = str[i] - '0';
                v.push_back(in);
                sets[in].push_back(i);
            }
            int n = v.size();
            if (n == 1) { cout << "0\n"; return 0; }
            if (v[0] == v[n-1]) { cout << "1\n"; return 0; }

            vector<int> adj[100001];
            for (int i = 0; i < 10; i++) {
                for (int j = 0; j < sets[i].size(); j++) {
                    if (sets[i][j] > 0)     adj[sets[i][j]].push_back(sets[i][j] - 1);
                    if (sets[i][j] < n - 1) adj[sets[i][j]].push_back(sets[i][j] + 1);
                    for (int k = j + 1; k < sets[i].size(); k++) {
                        if (abs(sets[i][j] - sets[i][k]) != 1) {
                            adj[sets[i][j]].push_back(sets[i][k]);
                            adj[sets[i][k]].push_back(sets[i][j]);
                        }
                    }
                }
            }

            queue<int> q;
            q.push(0);
            int dist[100001];
            bool visited[100001] = {false};
            dist[0] = 0;
            visited[0] = true;
            while (!q.empty()) {
                int dq = q.front();
                q.pop();
                for (int i = 0; i < adj[dq].size(); i++) {
                    if (visited[adj[dq][i]] == false) {
                        dist[adj[dq][i]] = dist[dq] + 1;
                        visited[adj[dq][i]] = true;
                        q.push(adj[dq][i]);
                    }
                }
            }
            cout << dist[n-1] << "\n";
            return 0;
        }
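
    A standard fix for this exact problem (hedged, but this is the usual trick): don't materialize the same-digit edges at all. Keep each digit's index list as a bucket, and when BFS first expands any node of a digit, visit the whole bucket and then clear it. Every index is still discovered at its correct distance, because BFS processes nodes in non-decreasing distance order, and the total work drops to O(n). A sketch:

        #include <iostream>
        #include <queue>
        #include <string>
        #include <vector>
        using namespace std;

        int main() {
            string s;
            cin >> s;
            int n = s.size();

            // bucket all indices by digit value
            vector<vector<int>> buckets(10);
            for (int i = 0; i < n; ++i)
                buckets[s[i] - '0'].push_back(i);

            vector<int> dist(n, -1);
            queue<int> q;
            dist[0] = 0;
            q.push(0);

            while (!q.empty()) {
                int u = q.front();
                q.pop();

                vector<int> nbrs;
                if (u > 0)     nbrs.push_back(u - 1);
                if (u + 1 < n) nbrs.push_back(u + 1);

                int d = s[u] - '0';
                for (int v : buckets[d]) nbrs.push_back(v);
                buckets[d].clear();  // each digit class is expanded at most once

                for (int v : nbrs) {
                    if (dist[v] == -1) {
                        dist[v] = dist[u] + 1;
                        q.push(v);
                    }
                }
            }
            cout << dist[n-1] << "\n";
            return 0;
        }

    The clearing step is what kills the O(n^2): each same-digit bucket is scanned at most once across the whole search, instead of once per member.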

  • Collision detection via adjacent tiles - sprite too big

    - by BlackMamba
    I have managed to create a collision detection system for my tile-based jump'n'run game (written in C++/SFML), where I check on each update what values the surrounding tiles of the player contain, and then I let the player move accordingly (i.e. move left when there is an obstacle on the right side). This works fine when the player sprite is not too big: given a tile size of 5x5 pixels, my solution worked quite well with sprite sizes of 3x4 and 5x5 pixels. My problem is that I actually need the player to be quite gigantic (34x70 pixels given the same tile size). When I try this, there seems to be an invisible, notably smaller bounding box where the player collides with obstacles, and the player also seems to shake strongly. Here are some images to explain what I mean:

    Works: http://tinypic.com/r/207lvfr/8
    Doesn't work: http://tinypic.com/r/2yuk02q/8
    Another example of non-functioning: http://tinypic.com/r/kexbwl/8 (the player isn't falling, he stays there in the corner)

    My code for getting the surrounding tiles looks like this (I removed some parts to make it more readable):

        std::vector<std::map<std::string, int> > Game::getSurroundingTiles(sf::Vector2f position)
        {
            // converting the pixel coordinates to tilemap coordinates
            sf::Vector2u pPos(static_cast<int>(position.x / tileSize.x),
                              static_cast<int>(position.y / tileSize.y));

            std::vector<std::map<std::string, int> > surroundingTiles;

            for (int i = 0; i < 9; ++i) {
                // calculating the relative position of the surrounding tile(s)
                int c = i % 3;
                int r = static_cast<int>(i / 3);

                // we subtract 1 to place the player in the middle of the 3x3 grid
                sf::Vector2u tilePos(pPos.x + (c - 1), pPos.y + (r - 1));

                // this tells us what kind of block this tile is
                int tGid = levelMap[tilePos.y][tilePos.x];

                // converts the coords from tile to world coords
                sf::Vector2u tileRect(tilePos.x * 5, tilePos.y * 5);

                // storing all the information
                std::map<std::string, int> tileDict;
                tileDict.insert(std::make_pair("gid", tGid));
                tileDict.insert(std::make_pair("x", tileRect.x));
                tileDict.insert(std::make_pair("y", tileRect.y));

                // adding the stored information to our vector
                surroundingTiles.push_back(tileDict);
            }

            // I organise the map so that it is arranged like the following:
            /*
             *  4 | 1 | 5
             *  -- -- --
             *  2 | / | 3
             *  -- -- --
             *  6 | 0 | 7
             */
            return surroundingTiles;
        }

    I then check in a loop through the surrounding tiles whether there is a 1 as gid (indicating an obstacle) and then check for intersections with that adjacent tile. The problem I just can't overcome is that I think I need to store the values of all the adjacent tiles and then check against them. How? And might there be a better solution? Any help is appreciated.

    P.S.: My implementation derives from this blog entry; I mostly just translated it from Objective-C/Cocos2d.
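
    A hedged suggestion for the big-sprite case: a 34x70 sprite on a 5 px grid overlaps roughly 7x14 tiles, so a fixed 3x3 neighbourhood around one centre tile can only ever see a small patch of the player -- which matches the "invisible smaller bounding box" symptom. The usual fix is to derive the tile range from the sprite's bounding box itself. A minimal C++ sketch (plain types instead of SFML; levelMap layout and the gid convention are taken from the question):

        #include <utility>
        #include <vector>

        struct Rect { float left, top, width, height; };

        // Collect every solid tile the bounding box overlaps, not just a 3x3 patch.
        std::vector<std::pair<int, int>> overlappedSolidTiles(
            const std::vector<std::vector<int>>& levelMap,
            const Rect& box, float tileW, float tileH)
        {
            int x0 = static_cast<int>(box.left / tileW);
            int y0 = static_cast<int>(box.top / tileH);
            int x1 = static_cast<int>((box.left + box.width  - 1) / tileW);
            int y1 = static_cast<int>((box.top  + box.height - 1) / tileH);

            std::vector<std::pair<int, int>> solid;
            for (int ty = y0; ty <= y1; ++ty)
                for (int tx = x0; tx <= x1; ++tx)
                    if (ty >= 0 && ty < (int)levelMap.size() &&
                        tx >= 0 && tx < (int)levelMap[ty].size() &&
                        levelMap[ty][tx] == 1)  // 1 = obstacle, as in the question
                        solid.push_back({tx, ty});
            return solid;
        }

    Resolving each axis separately (move in x, resolve against this list, then move in y, resolve again) also tends to cure the shaking.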


  • Omni-directional light shadow mapping with cubemaps in WebGL

    - by Winged
    First of all I must say that I have read a lot of posts describing the usage of cubemaps, but I'm still confused about how to use them. My goal is to achieve simple omni-directional (point) light shadowing in my WebGL application. I know that there are more techniques (like Two-Hemispheres or Camera Space Shadow Mapping) which are way more efficient, but for educational purposes cubemaps are my primary goal. Till now, I have adapted a simple shadow mapping technique which works with spotlights (with one exception: I don't know how to cut off the glitchy part beyond the reach of a single shadow map texture).

    So for now, this is how I understand the usage of cubemaps in shadow mapping:

    1. Set up a framebuffer (in the case of cubemaps, 6 framebuffers -- 6 instead of 1 because every use of framebufferTexture2D slows down execution) and a texture cubemap. Also, in WebGL depth components are not well supported, so I need to render to RGBA first.

        this.texture = gl.createTexture();
        gl.bindTexture(gl.TEXTURE_CUBE_MAP, this.texture);
        gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
        gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MAG_FILTER, gl.LINEAR);

        for (var face = 0; face < 6; face++)
            gl.texImage2D(gl.TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, gl.RGBA,
                          this.size, this.size, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);

        gl.bindTexture(gl.TEXTURE_CUBE_MAP, null);

        this.framebuffer = [];
        for (face = 0; face < 6; face++) {
            this.framebuffer[face] = gl.createFramebuffer();
            gl.bindFramebuffer(gl.FRAMEBUFFER, this.framebuffer[face]);
            gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                                    gl.TEXTURE_CUBE_MAP_POSITIVE_X + face, this.texture, 0);
            gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT,
                                       gl.RENDERBUFFER, this.depthbuffer);

            var e = gl.checkFramebufferStatus(gl.FRAMEBUFFER); // Check for errors
            if (e !== gl.FRAMEBUFFER_COMPLETE)
                throw "Cubemap framebuffer object is incomplete: " + e.toString();
        }

    2. Set up the light and the camera (I'm not sure if I should store all 6 view matrices and send them to the shaders later, or whether there is a way to do it with just one view matrix).

    3. Render the scene 6 times from the light's position, each time in another direction (X, -X, Y, -Y, Z, -Z):

        for (var face = 0; face < 6; face++) {
            gl.bindFramebuffer(gl.FRAMEBUFFER, shadow.buffer.framebuffer[face]);
            gl.viewport(0, 0, shadow.buffer.size, shadow.buffer.size);
            gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
            camera.lookAt(light.position.add(cubeMapDirections[face]));
            scene.draw(shadow.program);
        }

    4. In a second pass, calculate the projection of the current vertex using the light's projection and view matrices. Now I don't know if I should calculate 6 of them, because of the 6 faces of the cubemap. ScaleMatrix pushes the projected vertex into the 0.0-1.0 region.

        vDepthPosition = ScaleMatrix * uPMatrixFromLight * uVMatrixFromLight * vWorldVertex;

    5. In the fragment shader, calculate the distance between the current vertex and the light position, and check if it's deeper than the depth information read from the earlier rendered shadow map. I know how to do this with a 2D texture, but I have no idea how to use the cubemap texture here. I have read that texture lookups into cubemaps are performed with a normal vector instead of a UV coordinate. What vector should I use? Just a normalized vector pointing to the current vertex? For now, my code for this part looks like this (not working yet):

        float shadow = 1.0;
        vec3 depth = vDepthPosition.xyz / vDepthPosition.w;
        depth.z = length(vWorldVertex.xyz - uLightPosition) * linearDepthConstant;
        float shadowDepth = unpack(textureCube(uDepthMapSampler, vWorldVertex.xyz));
        if (depth.z > shadowDepth)
            shadow = 0.5;

    Could you give me some hints or examples (preferably in WebGL code) of how I should build this?

  • Direct3D - Zooming into Mouse Position

    - by roohan
    I'm trying to implement my camera class for a simulation, but I can't figure out how to zoom into my world based on the mouse position. I mean the object under the mouse cursor should remain at the same screen position. My zooming looks like this:

        VOID ZoomIn(D3DXMATRIX& WorldMatrix, FLOAT const& MouseX, FLOAT const& MouseY)
        {
            this->Position.z = this->Position.z * 0.9f;
            D3DXMatrixLookAtLH(&this->ViewMatrix, &this->Position, &this->Target, &this->UpDirection);
        }

    I passed the world matrix to the function because I had the idea to move my drawing origin according to the mouse position. But I can't find out how to calculate the offset to move my drawing origin by. Anyone got an idea how to calculate this? Thanks in advance.

    SOLVED. OK, I solved my problem. Here is the code if anyone is interested:

        VOID CAMERA2D::ZoomIn(FLOAT const& MouseX, FLOAT const& MouseY)
        {
            // Get the settings of the current viewport.
            D3DVIEWPORT9 ViewPort;
            this->Direct3DDevice->GetViewport(&ViewPort);

            // Convert the screen coordinates of the mouse to world space coordinates.
            D3DXVECTOR3 VectorOne;
            D3DXVECTOR3 VectorTwo;
            D3DXVec3Unproject(&VectorOne, &D3DXVECTOR3(MouseX, MouseY, 0.0f), &ViewPort,
                              &this->ProjectionMatrix, &this->ViewMatrix, &WorldMatrix);
            D3DXVec3Unproject(&VectorTwo, &D3DXVECTOR3(MouseX, MouseY, 1.0f), &ViewPort,
                              &this->ProjectionMatrix, &this->ViewMatrix, &WorldMatrix);

            // Calculate the resulting vector components.
            float WorldZ = 0.0f;
            float WorldX = ((WorldZ - VectorOne.z) * (VectorTwo.x - VectorOne.x)) / (VectorTwo.z - VectorOne.z) + VectorOne.x;
            float WorldY = ((WorldZ - VectorOne.z) * (VectorTwo.y - VectorOne.y)) / (VectorTwo.z - VectorOne.z) + VectorOne.y;

            // Move the camera into the screen.
            this->Position.z = this->Position.z * 0.9f;
            D3DXMatrixLookAtLH(&this->ViewMatrix, &this->Position, &this->Target, &this->UpDirection);

            // Calculate the world space vector again, based on the new view matrix.
            D3DXVec3Unproject(&VectorOne, &D3DXVECTOR3(MouseX, MouseY, 0.0f), &ViewPort,
                              &this->ProjectionMatrix, &this->ViewMatrix, &WorldMatrix);
            D3DXVec3Unproject(&VectorTwo, &D3DXVECTOR3(MouseX, MouseY, 1.0f), &ViewPort,
                              &this->ProjectionMatrix, &this->ViewMatrix, &WorldMatrix);

            // Calculate the resulting vector components.
            float WorldZ2 = 0.0f;
            float WorldX2 = ((WorldZ2 - VectorOne.z) * (VectorTwo.x - VectorOne.x)) / (VectorTwo.z - VectorOne.z) + VectorOne.x;
            float WorldY2 = ((WorldZ2 - VectorOne.z) * (VectorTwo.y - VectorOne.y)) / (VectorTwo.z - VectorOne.z) + VectorOne.y;

            // Create a temporary translation matrix for calculating the origin offset.
            D3DXMATRIX TranslationMatrix;
            D3DXMatrixIdentity(&TranslationMatrix);

            // Calculate the origin offset.
            D3DXMatrixTranslation(&TranslationMatrix, WorldX2 - WorldX, WorldY2 - WorldY, 0.0f);

            // Add the offset to the camera's world matrix.
            this->WorldMatrix = this->WorldMatrix * TranslationMatrix;
        }

    Maybe someone has an even better solution than mine.

  • Is my class structure good enough?

    - by Rivten
    So I wanted to try out this challenge on reddit, which is mostly about how well you can structure your data. I decided to challenge my C++ skills. Here's how I planned this:

    First, there's the Game class. It deals with time and is the only class main has access to. A Game has a Forest. For now, this class does not have a lot of things, only a size and a Factory; it will be put to better use when it comes to the SDL stuff, I guess. A Factory is the thing that deals with the Game Objects (a.k.a. Trees, Lumberjacks and Bears). It has a vector of all GameObjects and a queue of Events which will be processed at the end of each month. A GameObject is an abstract class which can be updated and which can notify the EventListener. The EventListener is a class which handles all the Events of a simulation; it can receive events from a GameObject and notify the Factory if needed, and the latter will handle the event appropriately. The Tree, Lumberjack and Bear classes all inherit from GameObject, and Sapling and Elder Tree inherit from Tree. Finally, an Event is defined by an event_type enumeration (LUMBERJACK_MAWED, SAPLING_EVOLUTION, ...) and an event_protagonists union (a GameObject, or a pair of GameObjects: who killed whom?).

    I was quite happy with this at first because it seems logical and flexible, but I ended up questioning this structure. Here's why:

    I dislike the fact that a GameObject needs to know about the Factory. Indeed, when a Bear moves somewhere, it needs to know if there's a Lumberjack there! Or rather, it is the Factory which keeps track of places and objects. It would be great if a GameObject could interact only with the EventListener... or maybe it's not that big a deal. Wouldn't it be better if I separated the Factory into three vectors, one for each kind of GameObject? The idea would be to optimize searching: if I'm looking to delete a dead Lumberjack, I would only have to look in one shorter vector rather than one very long vector. Another problem arises when I want to know whether there is a particular object on a given cell, because I have to look through all the GameObjects and see if any are at that cell. I would tend to think that the alternative would be to use a matrix, but then the issue would be that I would have empty cells (and therefore unused space). I don't really know if Sapling and Elder Tree should inherit from Tree. Indeed, a Sapling is a Tree, but what about its evolution? Should I just delete the Sapling and tell the Factory to create a new Tree at the exact same place? It doesn't seem natural to me to do so. How could I improve this? Is the design of an Event good enough? I've never used unions before in C++, but I didn't have any other ideas about what to use.

    Well, I hope I have been clear enough. Thank you for taking the time to help me!
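
    One hedged suggestion on the Event question specifically: a raw union cannot hold class types with constructors and carries no record of which member is active. If C++17 is available, std::variant expresses "one actor, or a pair of actors" directly and type-safely. A sketch:

        #include <utility>
        #include <variant>

        class GameObject;  // forward declaration is enough here

        enum class EventType { LumberjackMawed, SaplingEvolution /* ... */ };

        struct Event {
            EventType type;
            // either a single protagonist, or a (killer, victim) pair;
            // the variant remembers which alternative it holds, unlike a union
            std::variant<GameObject*, std::pair<GameObject*, GameObject*>> protagonists;
        };

    The handler can then use std::visit (or std::holds_alternative) on the variant instead of trusting the event type to say which union member is valid.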

  • Android Canvas Coordinate System

    - by Mitch
    I'm trying to find information on how to change the coordinate system for the canvas. I have some vector data I'd like to draw to a canvas using things like circles and lines, but the data's coordinate system doesn't match the canvas coordinate system. Is there a way to map the units I'm using to the screen's units? I'm drawing to an ImageView which isn't taking up the entire display. If I have to do my own calculations prior to each drawing call, how do I find the width and height of my ImageView? The getWidth() and getHeight() calls I tried seem to be returning the entire canvas size and not the size of the ImageView, which isn't helpful. I see some matrix stuff; is that something that will work for me? I tried to use the "public void scale(float sx, float sy)" method, but that works more like a pixel-level zoom rather than a vector scale function, expanding each pixel. This means that if the dimensions are increased to fit the screen, the line thickness is also increased.

    Update: After some research I'm starting to think there's no way to change the coordinate system to something else. I'll need to map all my coordinates to the screen's pixel coordinates, and do so by modifying each vector. The getWidth() and getHeight() calls seem to be working better for me now. I can't say what was wrong, but I suspect I can't use these methods inside the constructor.

  • boost.serialization and lazy initialization

    - by niXman
    I need to serialize a directory tree. I have no trouble with this type:

        std::map<
            std::string,              // path name
            std::vector<std::string>  // file names in the path
        > tree;

    But for serializing the directory tree with its content I need another type:

        std::map<
            std::string,                      // path name
            std::vector<                      // files array
                std::pair<
                    std::string,              // file name
                    std::vector<              // array of file pieces
                        std::pair<            // <<< for this I need lazy initialization
                            std::string,      // piece buffer
                            boost::uint32_t   // crc32 sum of the piece
                        >
                    >
                >
            >
        > tree;

    How can I serialize the innermost std::pair objects lazily, at the moment of their serialization?
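
    One way to get the lazy behaviour (a sketch, assuming read_piece and crc32_of are your own helpers): replace the innermost std::pair with a small struct and give it a split save/load via Boost.Serialization, so the piece buffer and its CRC are only produced when the archive actually writes them:

        #include <cstddef>
        #include <string>
        #include <boost/cstdint.hpp>
        #include <boost/serialization/split_member.hpp>
        #include <boost/serialization/string.hpp>

        // Hypothetical helpers -- replace with your real piece reader and CRC routine.
        std::string read_piece(const std::string& path, std::size_t index);
        boost::uint32_t crc32_of(const std::string& buf);

        // Stands in for the innermost std::pair<std::string, boost::uint32_t>.
        struct FilePiece {
            std::string path;       // which file this piece belongs to
            std::size_t index = 0;  // which piece of that file

            template <class Archive>
            void save(Archive& ar, const unsigned int /*version*/) const {
                // The lazy part: buffer and checksum exist only while saving.
                std::string buf = read_piece(path, index);
                boost::uint32_t crc = crc32_of(buf);
                ar & buf & crc;
            }

            template <class Archive>
            void load(Archive& ar, const unsigned int /*version*/) {
                std::string buf;
                boost::uint32_t crc = 0;
                ar & buf & crc;  // verify or store as needed
            }

            BOOST_SERIALIZATION_SPLIT_MEMBER()
        };

    std::pair itself serializes fine with boost/serialization/utility.hpp; the struct is only needed because a bare pair gives you no hook to defer the expensive part.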

  • Ruby on Rails: How to sanitize a string for SQL when not using find?

    - by williamjones
    I'm trying to sanitize a string that involves user input without having to resort to manually crafting my own possibly buggy regex if possible; however, if that is the only way, I would also appreciate it if anyone could point me in the right direction to a regex that is unlikely to be missing anything. There are a number of methods in Rails that can allow you to enter native SQL commands; how do people escape user input for those?

    The question I'm asking is a broad one, but in my particular case, I'm working with a column in my Postgres database that Rails does not natively understand as far as I know: the tsvector, which holds plain-text search information. Rails is able to write and read from it as if it's a string; however, unlike a string, it doesn't seem to be automatically escaping it when I do things like vector= inside the model. For example, when I do model.name = '::', where name is a string, it works fine. When I do model.vector = '::' it errors out:

        ActiveRecord::StatementInvalid: PGError: ERROR: syntax error in tsvector: "::"
        "vectors" = E'::' WHERE "id" = 1

    This seems to be a problem caused by the lack of escaping of the colons, and I can manually set vector = '\:\:' fine. I also had the bright idea that maybe I could just call something like:

        ActiveRecord::Base.connection.execute "UPDATE medias SET vectors = ? WHERE id = 1", "::"

    However, this syntax doesn't work, because the raw SQL commands don't have access to find's method of escaping and inputting strings via the ? mark. This strikes me as the same problem as calling connection.execute with any type of user input, as it all boils down to sanitizing the strings, but I can't seem to find any way to manually call Rails' SQL string sanitization methods. Can anyone provide any advice?

  • Why is the compiler not selecting my function-template overload in the following example?

    - by Steve Guidi
    Given the following function templates:

        #include <utility>
        #include <vector>

        struct Base { };
        struct Derived : Base { };

        // #1
        template <typename T1, typename T2>
        void f(const T1& a, const T2& b) { }

        // #2
        template <typename T1, typename T2>
        void f(const std::vector<std::pair<T1, T2> >& v, Base* p) { }

    Why is it that the following code always invokes overload #1 instead of overload #2?

        int main() {
            std::vector<std::pair<int, int> > v;
            Derived derived;

            f(100, 200);     // clearly calls overload #1
            f(v, &derived);  // always calls overload #1
        }

    Given that the second argument of f is a pointer to a type derived from Base, I was hoping that the compiler would choose overload #2, as it is a better match than the generic type in overload #1. Are there any techniques I could use to rewrite these functions so that the user can write code as displayed in the main function (i.e., leveraging compiler deduction of argument types)?
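
    For anyone puzzling over the same thing: in f(v, &derived), overload #1 deduces T2 = Derived*, an exact match, while overload #2 requires a Derived*-to-Base* conversion, so #1 wins overload resolution. One hedged way to get the intended behaviour is to deduce the pointee type too and constrain it, so both overloads become exact matches and partial ordering then prefers the more specialized #2. A sketch (C++11):

        #include <type_traits>
        #include <utility>
        #include <vector>

        struct Base { };
        struct Derived : Base { };

        // #1: generic fallback
        template <typename T1, typename T2>
        void f(const T1& a, const T2& b) { }

        // #2: exact match for any pointer to Base or a type derived from it;
        // the enable_if keeps it from hijacking unrelated pointer types
        template <typename T1, typename T2, typename P,
                  typename std::enable_if<std::is_base_of<Base, P>::value, int>::type = 0>
        void f(const std::vector<std::pair<T1, T2> >& v, P* p) { }

        int main() {
            std::vector<std::pair<int, int> > v;
            Derived derived;
            f(v, &derived);  // now selects #2
        }

    Since both candidates are now exact matches, the tie is broken by partial ordering of function templates, and the vector-of-pairs signature is strictly more specialized than (const T1&, const T2&).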

  • Changing JButton background colour temporarily?

    - by sashep
    Hi, I'm super new to Java and in need of some help. I'm making a little Java desktop application where I basically have a grid of 4 JButtons (a 2 x 2 grid), and I need the background colour of individual JButtons to change and, after one second, change back to the original colour (the game I'm trying to make is like Simon, where you have to follow a pattern of buttons that light up). I have a vector that contains randomly generated numbers in the range of 1 to 4, and I want to be able to take each element from the vector and get the corresponding button to change to a different colour for one second (for example, if the vector contains 2 4 1, I would want button 2 to change, then button 4, then button 1). Is this possible, or is there a better way to go about doing this with something other than JButtons? How do I implement this? Also, I'm running Mac OS X, which apparently (based on some things I've read on forums) doesn't like JButton backgrounds changing (I think it's because of the system look and feel). How can I change this so it works on a Mac? Thank you in advance for any help :)

  • [ActionScript 3] Array subclasses cannot be deserialized, Error #1034

    - by aaaidan
    I've just found a strange error when deserializing from a ByteArray, where Vectors cannot contain types that extend Array: there is a TypeError when they are deserialized.

        TypeError: Error #1034: Type Coercion failed: cannot convert []@4b8c42e1 to com.myapp.ArraySubclass.
            at flash.utils::ByteArray/readObject()
            at com.myapp::MyApplication()[/Users/aaaidan/MyApp/com/myapp/MyApplication.as:99]

    Here's how:

        public class Application extends Sprite {
            public function Application() {
                // register the custom class
                registerClassAlias("MyArraySubclass", MyArraySubclass);

                // write a vector containing an array subclass to a byte array
                var vec:Vector.<MyArraySubclass> = new Vector.<MyArraySubclass>();
                var arraySubclass:MyArraySubclass = new MyArraySubclass();
                arraySubclass.customProperty = "foo";
                vec.push(arraySubclass);

                var ba:ByteArray = new ByteArray();
                ba.writeObject(arraySubclass);
                ba.position = 0;

                // read it back
                var arraySubclass2:MyArraySubclass = ba.readObject() as MyArraySubclass; // throws TypeError
            }
        }

        public class MyArraySubclass extends Array {
            public var customProperty:String = "default";
        }

    It's a pretty specific case, but it seems very odd to me. Anyone have any ideas what's causing it, or how it could be fixed?


  • How to count term frequency for set of documents?

    - by ManBugra
    I have a Lucene index with the following documents:

        doc1 := { caldari, jita, shield, planet }
        doc2 := { gallente, dodixie, armor, planet }
        doc3 := { amarr, laser, armor, planet }
        doc4 := { minmatar, rens, space }
        doc5 := { jove, space, secret, planet }

    So these 5 documents use 14 different terms:

        [ caldari, jita, shield, planet, gallente, dodixie, armor, amarr, laser, minmatar, rens, jove, space, secret ]

    The frequency of each term:

        [ 1, 1, 1, 4, 1, 1, 2, 1, 1, 1, 1, 1, 2, 1 ]

    For easy reading:

        [ caldari:1, jita:1, shield:1, planet:4, gallente:1, dodixie:1, armor:2,
          amarr:1, laser:1, minmatar:1, rens:1, jove:1, space:2, secret:1 ]

    What I want to know now is how to obtain the term frequency vector for a set of documents. For example:

        Set<Document> docs := [ doc2, doc3 ]
        termFrequencies = magicFunction(docs);
        System.out.print( termFrequencies );

    would result in the output:

        [ caldari:0, jita:0, shield:0, planet:2, gallente:1, dodixie:1, armor:2,
          amarr:1, laser:1, minmatar:0, rens:0, jove:0, space:0, secret:0 ]

    Removing all zeros:

        [ planet:2, gallente:1, dodixie:1, armor:2, amarr:1, laser:1 ]

    Notice that the result vector contains only the term frequencies of the set of documents, NOT the overall frequencies of the whole index! The term 'planet' is present 4 times in the whole index, but the source set of documents contains it only 2 times. A naive implementation would be to just iterate over all documents in the docs set, create a map and count each term, but I need a solution that would also work with a document set size of 100,000 or 500,000. Is there a feature in Lucene I can use to obtain this term vector? If there is no such feature, what would a data structure look like that someone could create at index time to obtain such a term vector easily and quickly? I'm not much of a Lucene expert, so I'm sorry if the solution is obvious or trivial.

  • Computing complex math equations in python

    - by dassouki
    Are there any libraries or techniques that simplify computing equations? Take the following two examples:

        F = B * { [ a * b * sumOf(A / B, for all i) ] / [ sumOf(c * d * j) ] }

    where F is the cost from i to j, and B, a, b, c, d, j are all vectors in the format [[zone_i, zone_j, cost_of_i_to_j], ...]. This should produce a vector F = [[1, 2, F_1_2], ..., [i, j, F_i_j]].

        T_ij = [ P_i * A_i * F_i_j ] / [ sumOf(A_j * F_i_j), for j = 1 to n ]

    where n is the number of zones, T is a vector [[1, 2, A_1_2, P_1_2], ..., [i, j, A_i_j, P_i_j]], and F is a vector [[1, 2, F_1_2], ..., [i, j, F_i_j]]; so P_i would be the sum of all P_i_j over all j, and A_j would be the sum of all P_j over all i.

    I'm not sure what I'm looking for, but perhaps a parser for these equations, or methods to deal with multiple multiplications and products between vectors? To calculate some of the factors, for example A_j, this is what I use:

        from collections import defaultdict

        A_j_dict = defaultdict(float)
        for A_item in TG:
            A_j_dict[A_item[1]] += A_item[3]

    Although this works fine, I really feel that it is a brute-force, hacky method and unmaintainable in the case where we want to add more variables or parameters. Are there any math equation parsers you'd recommend?

    Side note: these equations are used to model travel. Currently I use Excel to solve a lot of these equations, and I find that process daunting. I'd rather move to Python, where the data is pulled directly from our database (Postgres) and the results are written back to the database. All that is figured out; I'm just struggling with evaluating the equations themselves. Thanks :)

  • How much of STL is too much?

    - by Darius Kucinskas
    I am using a lot of STL code with std::for_each, bind, and so on, but I've noticed that sometimes STL usage is not a good idea. For example, if you have a std::vector and want to perform one action on each item of the vector, your first idea is to use this:

        std::for_each(vec.begin(), vec.end(), Foo());

    and it is elegant and OK, for a while. But then comes the first set of bug reports and you have to modify the code. Now you need to add a parameter to the call to Foo(), so it becomes:

        std::for_each(vec.begin(), vec.end(), std::bind2nd(Foo(), X));

    but that is only a temporary solution. Now the project is maturing, you understand the business logic much better, and you want to add new modifications to the code. It is at this point that you realize you should use the good old:

        for (std::vector<T>::iterator it = vec.begin(); it != vec.end(); ++it)

    Is this happening only to me? Do you recognize this kind of pattern in your own code? Have you experienced similar anti-patterns when using the STL?
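
    One middle ground worth mentioning (hedged, since it assumes a C++11 compiler): a lambda keeps the loop body at the call site, so it can grow with the requirements without a named functor or bind2nd. A small sketch with a hypothetical Item type:

        #include <algorithm>
        #include <vector>

        struct Item { int value; };

        void update(std::vector<Item>& vec, int x) {
            // the body lives right here and can absorb new logic as it arrives
            std::for_each(vec.begin(), vec.end(), [x](Item& item) {
                item.value += x;
            });

            // or the C++11 range-based for, which reads even closer to the raw loop:
            for (Item& item : vec)
                item.value += x;
        }

    Either form avoids the bind2nd detour entirely: when the logic changes, only the body changes, not the calling convention.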

  • Matlab - Propagate points orthogonally on to the edge of shape boundaries

    - by Graham
    Hi, I have a set of points which I want to propagate onto the edge of a shape boundary defined by a binary image. The shape boundary is defined by a 1 px wide white edge. I also have the coordinates of these points stored in a 2-row by n-column matrix. The shape forms a concave boundary, with no holes within itself, made of around 2500 points. I want to cast a ray from each point of the set in an orthogonal direction and detect at which point it intersects the shape boundary. What would be the best method to do this? Are there ray tracing algorithms that could be used? Or would it be a case of taking the orthogonal unit vector, multiplying it by a scalar, and testing after each multiplication whether the end point of the vector is outside the shape boundary; and when the end point is outside the shape, just finding the point of intersection? Thank you very much in advance for any help!
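
    For what it's worth, the scalar-stepping idea described in the question is essentially a raster ray march, and for a 1 px boundary it works as long as the step stays below one pixel so the edge cannot be skipped. A minimal C++ sketch of that (an assumption: the image is row-major with nonzero values on boundary pixels):

        #include <cmath>
        #include <cstdint>
        #include <vector>

        struct Point { double x, y; };

        // March from p along the unit normal n in fixed sub-pixel steps until we
        // land on a boundary pixel or leave the image. Returns true on a hit.
        bool castToBoundary(const std::vector<std::uint8_t>& img, int w, int h,
                            Point p, Point n, Point& hit) {
            const double step = 0.5;  // < 1 px, so a 1 px wide edge cannot be stepped over
            for (double t = 0.0; ; t += step) {
                double x = p.x + t * n.x;
                double y = p.y + t * n.y;
                int xi = static_cast<int>(std::lround(x));
                int yi = static_cast<int>(std::lround(y));
                if (xi < 0 || yi < 0 || xi >= w || yi >= h)
                    return false;                      // left the image without a hit
                if (img[yi * w + xi]) {                // nonzero = boundary pixel
                    hit = {x, y};
                    return true;
                }
            }
        }

    With ~2500 points and marches bounded by the image size, the brute-force version is usually fast enough that a full ray-tracing structure isn't needed.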

  • Transfer data between C++ classes efficiently

    - by David
    Hi, need help... I have 3 classes: a Manager, which holds 2 pointers, one to class A and another to class B. A does not know about B and vice versa. A does some calculations and at the end it puts 3 floats onto the clipboard. Next, B pulls the 3 floats from the clipboard and does its own calculations. This loop is managed by the Manager and repeats many times (iteration after iteration). My problem: now class A produces a vector of floats which class B needs. This vector can have more than 1000 values, and I don't want to use the clipboard to transfer it to B, as it will become a time consumer, even a bottleneck, since this behavior repeats step by step. A simple solution would be for B to know A (set a pointer to A). Another would be to transfer a pointer to the vector via the Manager. But I'm looking for something different, more object-oriented, that won't break the existing separation between A and B. Any ideas? Many thanks, David
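
    One hedged possibility that keeps A and B ignorant of each other: let the Manager own a small, neutral buffer type that A fills in place and B reads from, so the only shared dependency is that tiny data interface. A sketch (class and member names are illustrative, not the asker's real API):

        #include <vector>

        // Neutral data channel owned by the Manager; A and B see only this type.
        struct FloatChannel {
            std::vector<float> data;
        };

        class A {
        public:
            void compute(FloatChannel& out) {
                out.data.clear();
                // ... push 1000+ results directly into the shared buffer ...
                out.data.push_back(1.0f);
            }
        };

        class B {
        public:
            void consume(const FloatChannel& in) {
                for (float f : in.data) { (void)f; /* ... own calculations ... */ }
            }
        };

        class Manager {
            A a;
            B b;
            FloatChannel channel;
        public:
            void step() {
                a.compute(channel);  // A fills the buffer in place: no copy, no clipboard
                b.consume(channel);  // B reads it; A and B still know nothing of each other
            }
        };

    Reusing the same vector across iterations also avoids reallocating 1000+ floats each step, which is where the clipboard approach would have hurt most.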

  • Android - Correspondence between ImageView coordinates and Bitmap Pixels

    - by Matteo
    In my application I want the user to be able to select some content of an image contained inside an ImageView. To select the content, I subclassed the ImageView class, making it implement the OnTouchListener, so as to draw over it a rectangle with borders decided by the user. Here is an example of the result of the drawing (to get an idea, you can think of it as when you click with the mouse on your desktop and drag the mouse).

    Now I need to determine which pixels of the Bitmap image correspond to the selected part. It's kind of easy to determine which points of the ImageView belong to the rectangle, but I don't know how to get the corresponding pixels, since the ImageView has a different aspect ratio than the original image. I followed the approach described especially here, but also here, but am not fully satisfied, because in my opinion the correspondence made is 1-to-1 between pixels and points on the ImageView, and it does not give me all the pixels in the original image that correspond to the selected area. Calling hoveredRect the rectangle on the ImageView, the points inside it are:

        class Point {
            float x, y;

            @Override
            public String toString() {
                return x + ", " + y;
            }
        }

        Vector<Point> pointsInRect = new Vector<Point>();

        for (int x = hoveredRect.left; x <= hoveredRect.right; x++) {
            for (int y = hoveredRect.top; y <= hoveredRect.bottom; y++) {
                Point pointInRect = new Point();
                pointInRect.x = x;
                pointInRect.y = y;
                pointsInRect.add(pointInRect);
            }
        }

    How can I obtain a Vector<Pixel> pixelsInImage containing the corresponding pixels?

  • Accessing Members of Containing Objects from Contained Objects.

    - by Bunkai.Satori
    If I have several levels of object containment (one object defines and instantiates another object, which defines and instantiates another object...), is it possible to get access to the variables and functions of the upper, containing objects? Example:

        class CObjectOne
        {
        public:
            CObjectOne() { Create(); }
            bool Create();
            std::vector<CObjectTwo> vObjectsTwo;
            int nVariableOne;
        };

        bool CObjectOne::Create()
        {
            CObjectTwo ObjectTwo(this);
            vObjectsTwo.push_back(ObjectTwo);
            return true;
        }

        class CObjectTwo
        {
        public:
            CObjectTwo(CObjectOne* pObject)
            {
                pObjectOne = pObject;
                Create();
            }
            bool Create();
            CObjectOne* GetObjectOne() { return pObjectOne; }
            std::vector<CObjectThree> vObjectsThree;
            CObjectOne* pObjectOne;
            int nVariableTwo;
        };

        bool CObjectTwo::Create()
        {
            CObjectThree ObjectThree(this);
            vObjectsThree.push_back(ObjectThree);
            return true;
        }

        class CObjectThree
        {
        public:
            CObjectThree(CObjectTwo* pObject)
            {
                pObjectTwo = pObject;
                Create();
            }
            bool Create();
            CObjectTwo* GetObjectTwo() { return pObjectTwo; }
            std::vector<CObjectFour> vObjectsFour;
            CObjectTwo* pObjectTwo;
            int nVariableThree;
        };

        bool CObjectThree::Create()
        {
            CObjectFour ObjectFour(this);
            vObjectsFour.push_back(ObjectFour);
            return true;
        }

        int main()
        {
            CObjectOne myObject1;
        }

    Say that from within CObjectThree I need to access nVariableOne in CObjectOne. I would like to do it as follows:

        int nValue = vObjectsThree[index].GetObjectTwo()->GetObjectOne()->nVariableOne;

    However, after compiling and running my application, I get a memory access violation error. What is wrong with the code above (it is an example, and might contain spelling mistakes)? Do I have to create the objects dynamically instead of statically? Is there any other way to reach variables stored in containing objects from within contained objects?
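
    A hedged diagnosis, since this pattern fails the same way on most compilers: each Create() hands out the this pointer of a stack-local object, which is then copied into the vector. The stored copy survives, but the pointer captured by its children still refers to the destroyed local, so the GetObjectTwo() chain dereferences a dangling pointer. (Vector reallocation invalidates such back-pointers again later, even if the first problem is fixed.) One conventional fix is to give each child a stable address by owning it through a smart pointer. A small sketch of the pattern:

        #include <memory>
        #include <vector>

        struct Outer;  // forward declaration

        struct Inner {
            Outer* parent;  // stays valid because Inner itself lives on the heap
            explicit Inner(Outer* p) : parent(p) {}
        };

        struct Outer {
            std::vector<std::unique_ptr<Inner>> children;
            int nVariable = 42;

            void create() {
                // built directly in its final home: no copy, no dangling 'this'
                children.push_back(std::make_unique<Inner>(this));
            }
        };

        int main() {
            Outer outer;
            outer.create();
            int value = outer.children[0]->parent->nVariable;  // safe: 42
            (void)value;
        }

    The same reasoning applies one level down: as long as every object holding a parent pointer is heap-owned (or its container never reallocates or moves it), a chain like GetObjectTwo()->GetObjectOne()->nVariableOne stays valid.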
