Search Results

Search found 1909 results on 77 pages for 'adjacency matrix'.

Page 59/77

  • Position Reconstruction from Depth by inverting Perspective Projection

    - by user1294203
    I had some trouble reconstructing position from depth sampled from the depth buffer. I use the equivalent of gluPerspective in GLM. The code in GLM is: template <typename valType> GLM_FUNC_QUALIFIER detail::tmat4x4<valType> perspective ( valType const & fovy, valType const & aspect, valType const & zNear, valType const & zFar ) { valType range = tan(radians(fovy / valType(2))) * zNear; valType left = -range * aspect; valType right = range * aspect; valType bottom = -range; valType top = range; detail::tmat4x4<valType> Result(valType(0)); Result[0][0] = (valType(2) * zNear) / (right - left); Result[1][1] = (valType(2) * zNear) / (top - bottom); Result[2][2] = - (zFar + zNear) / (zFar - zNear); Result[2][3] = - valType(1); Result[3][2] = - (valType(2) * zFar * zNear) / (zFar - zNear); return Result; } There don't seem to be any errors in the code. So I tried to invert the projection. The formulas for the z and w coordinates after projection are z' = -((zFar + zNear) / (zFar - zNear)) * z - (2 * zFar * zNear) / (zFar - zNear) and w' = -z, and dividing z' by w' gives the post-projective depth d (which lies in the depth buffer), so I need to solve for z, which finally gives z = (2 * zFar * zNear) / ((zFar - zNear) * d - (zFar + zNear)). Now, the problem is I don't get the correct position (I have compared the one reconstructed with a rendered position). I then tried the formula I get by doing the same derivation for a different projection matrix, and for some reason that formula gives me the correct position. I really don't understand why this is the case. Have I done something wrong? Could someone enlighten me please?
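
    As a cross-check, a minimal sketch of that inversion in code, assuming d is the GL-style depth in [-1, 1] (remap a [0, 1] buffer value with d = 2*depth - 1 first):

        #include <cstdio>

        // Recover view-space z from post-projective depth d = z'/w' for the
        // gluPerspective-style matrix above; z is negative in front of the camera.
        float viewZFromDepth(float d, float zNear, float zFar)
        {
            return 2.0f * zFar * zNear / ((zFar - zNear) * d - (zFar + zNear));
        }

        int main()
        {
            // Sanity check: the near and far planes should round-trip.
            std::printf("%f %f\n", viewZFromDepth(-1.0f, 1.0f, 100.0f),   // -1
                                   viewZFromDepth( 1.0f, 1.0f, 100.0f));  // -100
        }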

    Read the article

  • How to sell logistical procedures that require less time to perform but more finesse?

    - by foampile
    I am working with a group where part of the responsibilities is managing a certain set of configuration files which, of course, have the same skeleton/structure across different environments but different values (like server, user, this setting, that setting, etc.). Pretty classic scenario... The problem is that everyone just goes and modifies the final, environment-specific files and basically repeats the work for every environment. Personally, I am offended to have to perform repetitive, mundane tasks in this day and age when we have technologies to automate it all. So I devised a very simple procedure of abstracting the files into templates, stubbing env-specific values with parameters, and then wrote a simple Perl script that, given a template and an environment matrix with env-specific values for each param, produces the final file. So this is nothing special, cutting-edge or revolutionary; I am pretty sure that 20 years ago efficient places did their CM like that. However, it requires that changes are made at the template level and then distributed across the different environments using the script, not made in the final environment-specific files. This is where I am encountering resentment, as they feel "comfortable" doing it their old, manual, repetitive way. Personally, I don't have a problem with them working hard rather than smart, but the problem is that when I have to build on top of someone else's changes, I have to merge their changes into my template from a specific file, which takes time and is grueling. So my question is how to go about selling my method, which is so much faster, in an environment that is resistant to change and where most things have to be done at the level of the least competent team member?
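
    For illustration, a minimal sketch of the template-expansion idea (the @NAME@ token syntax and the sample values are hypothetical, and the original script was Perl; this C++ rendering just shows the shape):

        #include <iostream>
        #include <map>
        #include <string>

        // Expand @NAME@ tokens in a template from one environment's parameter map.
        std::string expand(std::string text, const std::map<std::string, std::string>& env) {
            for (const auto& [key, value] : env) {
                const std::string token = "@" + key + "@";
                for (std::size_t pos = 0; (pos = text.find(token, pos)) != std::string::npos; pos += value.size())
                    text.replace(pos, token.size(), value);
            }
            return text;
        }

        int main() {
            std::map<std::string, std::string> prod = {{"SERVER", "prod-db01"}, {"USER", "appsvc"}};
            std::cout << expand("host=@SERVER@ user=@USER@\n", prod);  // host=prod-db01 user=appsvc
        }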

    Read the article

  • How to Upgrade Windows 7 Easily (And Understand Whether You Should)

    - by The Geek
    Just the other day I was trying to use Remote Desktop to connect from my laptop in the living room to the desktop downstairs, when I realized that I couldn’t do it because the desktop was running Windows Home Premium; that’s when I realized we’d never covered how to upgrade Windows, so here you are. You can upgrade from any version of Windows to the next version up, but it’s obviously going to cost a bit of money, and there’s a very good chance that you’ll have no reason to upgrade. Keep reading for the differences between the versions, whether you should bother upgrading, and how to actually do it.

    Read the article

  • WebCenter 11.1.1.8 Certified with E-Business Suite 12.2

    - by Steven Chan (Oracle Development)
    Oracle WebCenter Suite is an integrated suite of Fusion Middleware 11gR1 tools used to create web sites and portals using service-oriented architecture (SOA). Application adapters are also available. WebCenter Portal 11.1.1.8 is now certified with Oracle E-Business Suite Release 12.2. This complements our existing certifications of WebCenter Portal 11.1.1.8 with EBS 12.0 and 12.1. WebCenter Portal 11.1.1.8 is part of Oracle Fusion Middleware 11g Release 1 Version 11.1.1.8.0, also known as FMW 11gR1 Patchset 7. Certified Platforms: Oracle WebCenter Portal is certified to run on any operating system for which Oracle WebLogic Server 11g is certified. For information on operating systems supported by Oracle WebLogic Server 11g and Oracle WebCenter Portal, refer to the 'Oracle Fusion Middleware on WebLogic Server - System Certification' in the Oracle Fusion Middleware 11g Release 1 (11.1.1.x) Certification Matrix. Integration with Oracle WebCenter Portal involves components spanning several different suites of Oracle products. There are no restrictions on which platform any particular component may be installed, so long as the platform is supported for that component. Migrating to Oracle WebCenter: If you're currently using Oracle Portal, you should be aware that Portal is now in maintenance mode. Updates with bug fixes will continue to be produced, but you should consider migrating to Oracle WebCenter for ongoing new features. References: Using WebCenter 11.1.1 with Oracle E-Business Suite Release 12.2 (Note 1332645.1); WebCenter Portal 11g Release 1 (11.1.1.8) Documentation. Related Articles: Oracle E-Business Suite 12.2 Now Available; WebCenter Portal 11.1.1.8 Certified with E-Business Suite 12

    Read the article

  • New Write Flash SSDs and more disk trays

    - by Steve Tunstall
    In case you haven't heard, the write-flash SSDs for the ZFSSA have been updated. Much faster now for the same price. Sweet. The new write-flash SSDs have a new part number of 7105026, so make sure you order the right ones. It's important to note that you MUST be on code level 2011.1.4.0 or higher to use these. They have increased in IOPS from 6,000 to 11,000, and in throughput from 200MB/s to 350MB/s. Also, you can now add six SAS HBAs (up from four) to the 7420, allowing one to have three SAS channels with 12 disk trays each, for a new total of 36 disk trays. With 3TB drives, that's 2.5 petabytes. Is that enough for you? Make sure you add new cards to the correct slots. I've talked about this before, but here is the handy-dandy matrix again so you don't have to go find it. Remember the rules: you can have six of any one kind of card (like six 10GigE cards), but you only really get 8 slots, since you have two SAS cards no matter what. If you want more than 12 disk trays, you need two more SAS cards, so think about expansion later, too. In fact, if you are going to have two different speeds of drives (in other words, you want to mix 15K-speed and 7,200-speed drives in the same system), I would highly recommend two different SAS channels. So I would want four SAS cards in that system, no matter how many trays you have.
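
    As a quick sanity check on that capacity figure (assuming the usual 24-disk trays, which isn't stated above):

        36 \text{ trays} \times 24 \text{ disks} \times 3\,\text{TB} = 2592\,\text{TB} \approx 2.5\,\text{PB}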

    Read the article

  • Memory allocation strategy for the vertex buffers (DirectX 10/11)

    - by Alex
    I have the following question. I am writing a CAD system, so I have a 3D scene with many different objects (walls, doors, windows and so on). The user can add or delete objects. The question is: how should I organize the storage of vertices for all my objects? I could create a vertex buffer for every object, but I think drawing and switching from one buffer to another would incur a performance penalty. Another way: I could create a few big buffers, one for each object type. But I don't understand how to update such buffers; it is too expensive to update the whole buffer (for example, the buffer for all walls). And what do I need to do if I want to delete an object from the middle of the buffer? Actually I have a question similar to this one: http://stackoverflow.com/questions/5515700/how-to-properly-update-vertex-buffers-in-directx-10 Most examples I've found work with very static models. Therefore, they tend to create a single vertex buffer with their list of points, which is then just manipulated by matrix transformations. I, on the other hand, will be updating the scene very often.
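
    One pattern that fits this, sketched under stated assumptions (one big dynamic buffer per object type, a fixed-size chunk per object, and deletion handled by marking the chunk free and skipping it at draw time, with occasional compaction):

        #include <d3d11.h>
        #include <cstring>

        // Overwrite one object's slice of a shared dynamic vertex buffer.
        // Assumes the buffer was created with D3D11_USAGE_DYNAMIC and
        // D3D11_CPU_ACCESS_WRITE. NO_OVERWRITE promises the driver we won't
        // touch ranges the GPU may be reading, so only the edited object's
        // chunk is re-uploaded, never the whole wall/door/window buffer.
        void UpdateObjectVerts(ID3D11DeviceContext* ctx, ID3D11Buffer* vb,
                               const void* verts, std::size_t bytes, std::size_t offsetBytes)
        {
            D3D11_MAPPED_SUBRESOURCE mapped;
            if (SUCCEEDED(ctx->Map(vb, 0, D3D11_MAP_WRITE_NO_OVERWRITE, 0, &mapped)))
            {
                std::memcpy(static_cast<char*>(mapped.pData) + offsetBytes, verts, bytes);
                ctx->Unmap(vb, 0);
            }
        }

    Deleting from the middle then costs nothing at the buffer level; the free chunk is simply reused by the next object of that type.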

    Read the article

  • Spherical to Cartesian Coordinates

    - by user1258455
    Well, I'm reading Frank Luna's DirectX 10 book and, while trying to understand the first demo, I found something that's not very clear, at least for me. In the updateScene method, when I press A, S, W or D, the angles mTheta and mPhi change, but after that, there are three lines of code that I don't understand exactly: // Convert Spherical to Cartesian coordinates: mPhi measured from +y // and mTheta measured counterclockwise from -z. float x = 5.0f*sinf(mPhi)*sinf(mTheta); float z = -5.0f*sinf(mPhi)*cosf(mTheta); float y = 5.0f*cosf(mPhi); I mean, the comment explains what they do, it says they convert spherical coordinates to Cartesian coordinates, but, mathematically, why? Why is the x value calculated from the product of the sines of both angles? And the z from the product of the sine and cosine? And why does the y just use the cosine? After that, those values (x, y and z) are used to build the view matrix. The book doesn't explain (mathematically) why those values are calculated like that (and I didn't find anything to help me understand it in the first part of the book, "Mathematical prerequisites"), so it would be good if someone could explain to me what exactly happens in those lines of code or just give me a link that helps me understand the math part. Thanks in advance!
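
    A short derivation for those three lines, with radius r = 5: since mPhi is measured from the +y axis, the projection of the point onto y is r cos(mPhi), and what's left is a horizontal circle of radius r sin(mPhi) in the xz-plane; mTheta then picks the point on that circle, measured counterclockwise from -z (so at mTheta = 0 the point sits on -z):

        y = r\cos\varphi
        x = (r\sin\varphi)\,\sin\theta
        z = -(r\sin\varphi)\,\cos\theta

    That is why y uses only the cosine (it depends only on the tilt away from +y), while x and z share the sin(mPhi) factor (the circle's radius) times the sine or cosine of mTheta (the position on the circle).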

    Read the article

  • Why aren't my other two constant buffers being updated in the shader?

    - by Paul Ske
    I posted previously about my two dynamic constant buffers not updating in the shader. The tessellation buffer isn't working, because I have to manually update the tessellation factor inside the hull shader. I believe the camera position isn't updating either, because when I perform distance adaptation the far edges are more tessellated than what's truly in front of the camera. I have all the buffers set to dynamic. Inside the render loop I have them set as: ID3D11Buffer *multiBuffers[3]; devcon->VSSetConstantBuffers(0,3,multiBuffers); ... devcon->DSSetConstantBuffers(0,3,multiBuffers); I only got that from a DirectX sample. Inside the shader file I have the three cbuffer structs. cbuffer ConstantBuffer { float4x4 WorldMatrix; float4x4 viewMatrix; float4x4 projectionMatrix; float4x4 modelWorldMatrix; // the rotation matrix float3 lightvec; // the light's vector float4 lightcol; // the light's color float4 ambientcol; // the ambient light's color bool isSelected; } cbuffer cameraBuffer { float3 cameraDirection; float padding; } cbuffer TessellationBuffer { float tessellationAmount; float3 padding2; } Am I missing something, or would anyone know why my buffers aren't updating in the shader file?
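
    For comparison, a minimal sketch of how a dynamic constant buffer is usually refreshed each frame before binding (CameraBuffer is an illustrative struct mirroring the cbuffer layout above, padded to 16 bytes):

        #include <d3d11.h>
        #include <DirectXMath.h>
        #include <cstring>

        struct CameraBuffer                  // mirrors cbuffer cameraBuffer
        {
            DirectX::XMFLOAT3 cameraDirection;
            float padding;                   // keep the 16-byte alignment
        };

        // Dynamic cbuffers don't push new values by themselves; they must be
        // re-mapped with WRITE_DISCARD (discarding the old contents) whenever
        // a value such as the camera position changes.
        bool UploadCamera(ID3D11DeviceContext* ctx, ID3D11Buffer* cb, const CameraBuffer& data)
        {
            D3D11_MAPPED_SUBRESOURCE mapped;
            if (FAILED(ctx->Map(cb, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
                return false;
            std::memcpy(mapped.pData, &data, sizeof(data));
            ctx->Unmap(cb, 0);
            return true;
        }

    If the slots might be mismatched, the three cbuffers can also be pinned explicitly with register(b0), register(b1) and register(b2) in the HLSL so they line up with the 0..2 range passed to the *SSetConstantBuffers calls.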

    Read the article

  • How can I create an orthographic display that handles different screen dimensions?

    - by Piku
    I'm trying to create an iPad/iPhone game using GLES 2.0 that contains a 3D scene with a heads-up display/GUI overlaid on top. However, this problem would also apply if I were to port my game to a computer and run it in a resizable window, or allow the user to change screen resolutions... When trying to make the 2D GUI/HUD work, I've made the assumption that all I'm really doing is drawing a load of 2D textured 'quads' on the screen, and I am trying to treat the orthographic projection as an old-style 2D display with 0,0 in the upper left and screenWidth,screenHeight in the lower right. This causes me all sorts of confusion when I rotate my iPad into landscape mode, since I can't work out what to put into my projection and modelview matrices to turn everything around the right way. It also gets messy if I want to support the iPad's large screen, an iPhone or a Retina display, since I then have to draw three sets of textures for everything and work out which ones to use. Should I be trying to map the 2D OpenGL coords 1:1 with the screen? While typing out this question it occurs to me that I could keep my origin in the centre, still running -1/+1 along the axes. This would let me scale my 2D content appropriately on the different screen sizes, but wouldn't I end up with the textures being scaled and possibly losing quality? I'm using OpenGL ES 2.0 and have a matrix library that has equivalents to the GLES 1.1 glOrthof() and glFrustum() calls.
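
    One common approach is the 1:1 pixel mapping from the question, rebuilt whenever the view-port size or orientation changes; a minimal sketch of the glOrthof(0, w, h, 0, -1, 1) equivalent, column-major as GL expects:

        // Orthographic projection mapping x in [0,w] to [-1,1] and
        // y in [0,h] to [1,-1], so (0,0) is the upper-left corner.
        void ortho2D(float w, float h, float* m /* 16 floats */)
        {
            for (int i = 0; i < 16; ++i) m[i] = 0.0f;
            m[0]  =  2.0f / w;   // x scale
            m[5]  = -2.0f / h;   // y scale, flipped so y grows downward
            m[10] = -1.0f;       // z scale for near = -1, far = 1
            m[12] = -1.0f;       // x translation
            m[13] =  1.0f;       // y translation
            m[15] =  1.0f;
        }

    On rotation only w and h swap, so the HUD keeps its pixel coordinates; the texture-quality question then reduces to choosing assets per resolution class, as described above.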

    Read the article

  • Light on every model and not in the whole scene

    - by alecnash
    I am using a custom shader and try to apply the effect to my models like this: foreach (ModelMesh mesh in Model.Meshes) { foreach (ModelMeshPart part in mesh.MeshParts) { part.Effect = effect; } mesh.Draw(); } My only issue is that every model now seems to have its own light source in it. Why is this happening, and is this a problem with my shader? Edit: These are the parameters passed to the shader: private void Get_lambertEffect() { if (_lambertEffect == null) _lambertEffect = Engine.LambertEffect; //Lambert technique (LambertWithShadows, LambertWithShadows2x2PCF, LambertWithShadows3x3PCF) _lambertEffect.CurrentTechnique = _lambertEffect.Techniques["LambertWithShadows3x3PCF"]; _lambertEffect.Parameters["texelSize"].SetValue(Engine.ShadowMap.TexelSize); //ShadowMap parameters _lambertEffect.Parameters["lightViewProjection"].SetValue(Engine.ShadowMap.LightViewProjectionMatrix); _lambertEffect.Parameters["textureScaleBias"].SetValue(Engine.ShadowMap.TextureScaleBiasMatrix); _lambertEffect.Parameters["depthBias"].SetValue(Engine.ShadowMap.DepthBias); _lambertEffect.Parameters["shadowMap"].SetValue(Engine.ShadowMap.ShadowMapTexture); //Camera view and projection parameters _lambertEffect.Parameters["view"].SetValue(Engine._camera.ViewMatrix); _lambertEffect.Parameters["projection"].SetValue(Engine._camera.ProjectionMatrix); _lambertEffect.Parameters["world"].SetValue( Matrix.CreateScale(Size) * world ); //Light and color _lambertEffect.Parameters["lightDir"].SetValue(Engine._sourceLight.Direction); _lambertEffect.Parameters["lightColor"].SetValue(Engine._sourceLight.Color); _lambertEffect.Parameters["materialAmbient"].SetValue(Engine.Material.Ambient); _lambertEffect.Parameters["materialDiffuse"].SetValue(Engine.Material.Diffuse); _lambertEffect.Parameters["colorMap"].SetValue(ColorTexture.Create(Engine.GraphicsDevice, Color.Red)); }

    Read the article

  • XNA 3D coordinates seem off

    - by Peteyslatts
    I'm going through a book, and the example it gave me seems like it should work, but when I try to implement it, it falls short. My Camera class takes in three vectors to generate the View and Projection matrices. I'm giving it a position vector of (0,0,5), a target vector of Vector3.Zero and a top vector (which way is up) of Vector3.Up. My three vertices are placed at (0,1,0), (-1,-1,0), (1,-1,0). It seems like it should work, because the vertices are centered around the origin and that's where I'm telling the camera to look, but when I run the game the only way to get the camera to see the vertices is to set its position to (0,0,-5), and even then the triangle is skewed. Not sure what's wrong here. Any suggestions would be helpful. Just to make sure I've given you guys everything (I don't think these are important, as the problem seems to be related to the coordinates, not the ability of the game to draw them): I'm using a VertexBuffer and a BasicEffect. My render code is as follows: effect.World = Matrix.Identity; effect.View = camera.view; effect.Projection = camera.projection; effect.VertexColorEnabled = true; foreach (EffectPass pass in effect.CurrentTechnique.Passes) { pass.Apply(); GraphicsDevice.DrawUserPrimitives<VertexPositionColor> (PrimitiveType.TriangleStrip, verts, 0, 1); }

    Read the article

  • Why isn't one of the constant buffers being loaded inside the shader?

    - by Paul Ske
    I did get the model to load under tessellation; the only problem is that one of the constant buffers isn't actually updating the shader's tessellation factor inside the hull shader. I created a message box at the rendering point, so I know for sure the tessellation factor is assigned to the dynamic constant buffer. Inside the shader code, where it says .Edges[1] = tessellationAmount; the tessellationAmount is supposed to be sent from the dynamic buffer to the shader. Otherwise it's just a plain box. To explain better: there's a matrixBuffer, a cameraBuffer and a TessellationBuffer for constants. There's a multiBuffer array that holds the matrix, camera and tessellation buffers. So, when I set the hull shader, pixel shader, vertex shader and domain shader, each gets assigned the multibuffer, e.g. devcon->HSSetConstantBuffers(0,3,multibuffer); The only way around the whole issue would be to go into the shader and hard-code how much the edges tessellate, and the inside as well, with the same number. My question is: why wouldn't the TessellationBuffer work in the shader?

    Read the article

  • Which version management design methodology should be used for dependent system nodes?

    - by actiononmail
    This is my first question, so please tell me if it is too vague or hard to understand. My question is more related to high-level design. We have a system (specifically an ATCA chassis) configured in a star topology, with a Master Node (MN) and other subordinate nodes (SN). All nodes are connected via Ethernet and run Linux with other proprietary applications. I have to build a recovery framework design so that any software entity, whether it's Linux, the ramdisk or an application, can be rolled back to a previous good version if something bad happens. Thus I am thinking of maintaining a state version matrix on the MN, where each state (1, 2, ..., n) represents good kernel, ramdisk and application versions for each SN. It may happen that one SN's version depends on another SN's version. Please see the following diagram: So I am in a dilemma whether to use the package-management methodology used by Debian-based distributions (like Ubuntu) or a Git repository methodology, in order to roll back to previous good versions on either one SN or on all the dependent SNs. The method should also make it easy to upgrade SNs along with MNs. Some of the features I am trying to achieve: 1) Upgrade of even a single software entity is achievable without hindering others. 2) Dependency checks must be done before applying a rollback or upgrade on each SN. 3) A user prompt should be given if a dependency check fails; if the user still goes for the rollback, all the SNs should get a notification to roll back their own releases (if required). 4) The binaries should be distributed to the SNs beforehand so that the recovery process is faster, rather than fetching everything from the MN each time. 5) Release patches from developers for bug fixes and feature enhancements can be applied on a running system. 6) Each version can be easily tracked and distinguished. Thanks

    Read the article

  • Find methods related to testcases in Java

    - by user3623718
    I want to automatically change some methods in a program. These methods contain compiler errors, and my program aims to fix those errors. After fixing the compiler errors I need to run the test cases related to the changed method (or class) to know whether the fix is correct and, if not, which test cases failed. As the programs under investigation are very big, I only need to run the test cases related to the changes. As an example, if I change one method, then I need to run only the test cases related to that method. Therefore, what I need is to be able to programmatically find the test cases related to each method and class. It would also be useful if there is a tool that can do this for me, for example a tool which creates a matrix showing which test case is related to which method(s). One easy way to do this is to run all test cases and record which methods they access. However, the problem is that at the beginning the input program contains compiler errors, and it is not possible to run the test cases because of these errors. Please let me know the best way to do this. An API or a tool that can be used programmatically would be best for me.

    Read the article

  • Proper way to encapsulate a Shader into different modules

    - by y7haar
    I am planning to build a shader system which can be accessed through different components/modules in C++. Each component has its own functionality, like transform-related stuff (handling the MVP matrix, ...), a texture handler, light calculation, etc. So here's an example: I would like to display an object which has a texture and a toon-shading material applied, and which should be movable. So I could write ONE shading program that handles all 3 functionalities, accessed through 3 different components (texture handler, toon shading, transform). This means I have to take care of feeding the GLSL shader with different uniforms/attributes, which implies knowing all the uniform locations and attribute locations that the GLSL shader owns, and providing different algorithms to calculate the value of each input variable. Similar functions would be grouped together in one component. A possible way would be to wrap each shader in its own definition file written in JSON/XML and parse that file in C++ to get all input members, then create and compile the resulting GLSL. But maybe there is another way that is not so complex? So I'm searching for a way to build a system like that, but I'm not sure yet which approach is best.
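
    One possible shape for those components, as a hedged C++ sketch (ShaderModule, TransformModule and the u_mvp uniform name are illustrative, not an existing framework): each module declares which uniforms it owns, resolves their locations once after the program links, and uploads its values before drawing.

        #include <GL/glew.h>   // any GL function loader works here

        // Each module owns one slice of a shader program's inputs.
        struct ShaderModule {
            virtual ~ShaderModule() = default;
            virtual void resolve(GLuint program) = 0;  // cache uniform locations once
            virtual void apply() = 0;                  // upload current values per draw
        };

        struct TransformModule : ShaderModule {
            GLint mvpLoc = -1;
            float mvp[16] = {};
            void resolve(GLuint program) override {
                mvpLoc = glGetUniformLocation(program, "u_mvp");
            }
            void apply() override {
                glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, mvp);
            }
        };

    The JSON/XML definition file then only needs to say which modules a shader uses; the per-module uniform lists live in code, next to the algorithms that fill them.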

    Read the article

  • How to make other semantics behave like SV_Position?

    - by object
    I'm having a lot of trouble with shadow mapping, and I believe I've found the problem. When passing vectors from the vertex shader to the pixel shader, does the hardware automatically change any of the values based on the semantic? I've compiled a barebones pair of shaders which should illustrate the problem. Vertex shader: struct Vertex { float3 position : POSITION; }; struct Pixel { float4 position : SV_Position; float4 light_position : POSITION; }; cbuffer Matrices { matrix projection; }; Pixel RenderVertexShader(Vertex input) { Pixel output; output.position = mul(float4(input.position, 1.0f), projection); output.light_position = output.position; // We simply pass the same vector in screenspace through different semantics. return output; } And a simple pixel shader to go along with it: struct Pixel { float4 position : SV_Position; float4 light_position : POSITION; }; float4 RenderPixelShader(Pixel input) : SV_Target { // At this point, (input.position.z / input.position.w) is a normal depth value. // However, (input.light_position.z / input.light_position.w) is 0.999f or similar. // If the primitive is touching the near plane, it very quickly goes to 0. return (0.0f).rrrr; } How is it possible to make the hardware treat light_position in the same way that position is treated between the vertex and pixel shaders? EDIT: Aha! (input.position.z) without dividing by W is the same as (input.light_position.z / input.light_position.w). Not sure why this is.
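
    A hedged reading of that edit, based on how the viewport stage treats SV_Position: its z reaches the pixel shader already divided by w, while a user semantic such as POSITION is interpolated as raw clip-space values, so

        \texttt{input.position.z} \;=\; \frac{z_{\text{clip}}}{w_{\text{clip}}} \;=\; \frac{\texttt{input.light\_position.z}}{\texttt{input.light\_position.w}}

    which is exactly the equality observed above. The hardware cannot be told to apply that transform to other semantics; the usual pattern is to pass the clip-space vector through and perform the divide by its own w manually in the pixel shader.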

    Read the article

  • Installation questions

    - by user12609425
    I've gotten a couple more questions about the installation process for Ops Center. "Can I install on any SPARC / X86 based platform?" Ops Center can run on Oracle Solaris on either architecture, or on Linux. The Certified Systems Matrix lists the supported OSes, and the Linux and Solaris install guides go into more detail about the hardware and OS requirements. "Can we install it on local zones or LDOMS?" Zones, yes; LDOMS, sort of. You can install the Enterprise Controller in a local zone. There are a few caveats, which are explained in the Preparing a Non-Global Zone section. You can also install a Proxy Controller in an Oracle Solaris 11 zone. Agent Controllers, which are the part of the infrastructure that's installed on managed systems, can be put on zones or LDOMS. "Do we need any dedicated network ports from all agent monitoring systems?"  Yes. The port requirements are covered in the Network Port Requirements and Protocols table, which is in the feature reference guide as well as in the install guides.

    Read the article

  • What would be the fastest way of storing or calculating legal move sets for chess pieces?

    - by ioSamurai
    For example, if a move is attempted I could just loop through a list of legal moves and compare the x,y, but I would have to write logic to calculate those at least every time the piece is moved. Or I can store them in an array [,] and then check: if x and y are not 0 then it is a legal move, and I save the data like this [0][1][0][0] etc. for each row, where 1 is a legal move, but I still have to populate it. I wonder what the fastest way is to store and read a legal move on a piece object for calculation. I could use matrix math as well, I suppose, but I don't know. Basically I want to persist the rules for a given piece so I can assign a piece object a little template of all its possible moves considering it is starting from its current location, which should be just one table of data. But is it faster to loop, to write LINQ queries, or to store arrays and matrices? public class Move { public int x; public int y; } public class ChessPiece : Move { private List<Move> possibleMoves { get; set; } public bool LegalMove(int x, int y) { foreach (var p in possibleMoves) { if (p.x == x && p.y == y) { return true; } } return false; } } Anyone know?
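
    One standard answer to the store-or-calculate question is to precompute a move mask per square once and make each legality test a single bit operation; a hedged sketch for a knight (in C++ although the snippet above is C#; the same table idea works per piece type):

        #include <array>
        #include <cstdint>

        // Precompute knight destination masks for all 64 squares.
        std::array<std::uint64_t, 64> knightAttacks() {
            std::array<std::uint64_t, 64> table{};
            const int d[8][2] = {{1,2},{2,1},{2,-1},{1,-2},{-1,-2},{-2,-1},{-2,1},{-1,2}};
            for (int sq = 0; sq < 64; ++sq) {
                const int x = sq % 8, y = sq / 8;
                for (const auto& m : d) {
                    const int nx = x + m[0], ny = y + m[1];
                    if (nx >= 0 && nx < 8 && ny >= 0 && ny < 8)
                        table[sq] |= std::uint64_t(1) << (ny * 8 + nx);
                }
            }
            return table;
        }

        // Legality is then one shift and one AND, with no list scan at all.
        bool isLegalKnightMove(const std::array<std::uint64_t, 64>& atk, int from, int to) {
            return (atk[from] >> to) & 1;
        }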

    Read the article

  • Checking validation of entries in a Sudoku game written in Java

    - by Mico0
    I'm building a simple Sudoku game in Java which is based on a matrix (an array[9][9]), and I need to validate my board state according to these rules: all rows have the digits 1-9; all columns have the digits 1-9; each 3x3 grid has the digits 1-9. This function should be as efficient as possible; for example, if the first case is not valid, I believe there's no need to check the other cases, and so on (correct me if I'm wrong). When I tried doing this I ran into a conflict: should I do one large for loop and check columns and rows inside it (in two other loops), or should I do each test separately and verify each case on its own? (Please don't suggest too advanced solutions with other class/object helpers.) This is what I thought about: Main validating function (which I want pretty clean): public boolean testBoard() { boolean isBoardValid = false; if (validRows()) { if (validColumns()) { if (validCube()) { isBoardValid = true; } } } return isBoardValid; } Different methods to do each specific test, such as: private boolean validRows() { int rowsDigitsCount = 0; for (int num = 1; num <= 9; num++) { boolean foundDigit = false; for (int row = 0; (row < board.length) && (!foundDigit); row++) { for (int col = 0; col < board[row].length; col++) { if (board[row][col] == num) { rowsDigitsCount++; foundDigit = true; break; } } } } return rowsDigitsCount == 9 ? true : false; } I don't know if I should keep doing the tests separately, because it looks like I'm duplicating my code.
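
    For contrast, a hedged one-pass alternative (sketched in C++, but it ports directly to Java): record the digits already seen in each row, column and 3x3 box as 9-bit masks, so all three rules are verified in a single sweep that stops at the first duplicate.

        #include <cstdint>

        // Returns false on the first digit repeated in its row, column or box.
        bool validBoard(const int board[9][9]) {
            std::uint16_t rows[9] = {}, cols[9] = {}, boxes[9] = {};
            for (int r = 0; r < 9; ++r) {
                for (int c = 0; c < 9; ++c) {
                    const int v = board[r][c];
                    if (v == 0) continue;                  // treat 0 as an empty cell
                    const std::uint16_t bit = 1u << (v - 1);
                    const int b = (r / 3) * 3 + c / 3;     // which 3x3 box
                    if ((rows[r] | cols[c] | boxes[b]) & bit) return false;
                    rows[r] |= bit; cols[c] |= bit; boxes[b] |= bit;
                }
            }
            return true;
        }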

    Read the article

  • How to set up my texture coordinates correctly in GLSL 150 and OpenGL 3.3?

    - by RubyKing
    I'm trying to do texture mapping in GLSL 150 and OpenGL 3.3. Here are my shaders; I've tried my best to get this as correct as possible, hopefully it is :) I'm guessing you want to know what the problem is: well, my texture shows, but not in its fullest form, just one section of it, not the full texture on the quad. All I can think of is that it's the texture coordinates in the main.cpp, which is linked at the bottom of this post. FRAGMENT SHADER #version 150 in vec2 Texcoord_VSPS; out vec4 color; // Values that stay constant for the whole mesh. uniform sampler2D myTextureSampler; //Main Entry Point void main() { // Output color = color of the texture at the specified UV color = texture2D( myTextureSampler, Texcoord_VSPS ); } VERTEX SHADER #version 150 //Position Container in vec3 position; //Container for TexCoords attribute vec2 Texcoord0; out vec2 Texcoord_VSPS; //out vec2 ex_texcoord; //TO USE A DIFFERENT COORDINATE SYSTEM JUST MULTIPLY THE MATRIX YOU WANT //Main Entry Point void main() { //Translations and w Cordinates stuff gl_Position = vec4(position.xyz, 1.0); Texcoord_VSPS = Texcoord0; } LINK TO MAIN.CPP http://pastebin.com/t7Vg9L0k
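
    Since the symptom (one stretched slice of the texture) often comes down to the attribute layout on the C++ side, here is a hedged reference sketch of an interleaved quad with per-vertex UVs spanning [0,1]; the stride and offset arguments are the usual suspects, and posLoc/texLoc stand in for whatever locations the program binds position and Texcoord0 to:

        #include <GL/glew.h>   // any GL function loader works here

        // x, y, z, u, v per vertex; a full-quad triangle strip.
        static const float quad[] = {
            -1.f, -1.f, 0.f,  0.f, 0.f,
             1.f, -1.f, 0.f,  1.f, 0.f,
            -1.f,  1.f, 0.f,  0.f, 1.f,
             1.f,  1.f, 0.f,  1.f, 1.f,
        };

        void setupQuad(GLuint vao, GLuint vbo, GLint posLoc, GLint texLoc) {
            glBindVertexArray(vao);
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glBufferData(GL_ARRAY_BUFFER, sizeof(quad), quad, GL_STATIC_DRAW);
            const GLsizei stride = 5 * sizeof(float);
            glVertexAttribPointer(posLoc, 3, GL_FLOAT, GL_FALSE, stride, (void*)0);
            glVertexAttribPointer(texLoc, 2, GL_FLOAT, GL_FALSE, stride, (void*)(3 * sizeof(float)));
            glEnableVertexAttribArray(posLoc);
            glEnableVertexAttribArray(texLoc);
        }

    If the UVs only cover part of [0,1], or the stride skips into position data, exactly one section of the texture shows up, which matches the symptom described.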

    Read the article

  • Getting Requirements Right

    - by Tim Murphy
    Originally posted on: http://geekswithblogs.net/tmurphy/archive/2013/10/28/getting-requirements-right.aspx I had a meeting with a stakeholder who stated, “I bet you wish I wasn’t in these meetings.” She said this because she kept changing what we thought the end product should look like. My reply was that it would be much worse if she came in at the end of the project and told us we had just built the wrong solution. You have to take the time to get the requirements right. Be honest with all involved parties as to the amount of time it is taking to refine the requirements. The only thing worse than wrong requirements is a surprise in budget overages. If you give open visibility to your progress, then management has the ability to shift priorities if needed. In order to capture the best requirements, use different approaches to help your stakeholders articulate their needs. Use mock-ups and matrix spreadsheets to allow them to visualize and confirm that everyone has the same understanding. The goal isn’t to record every last detail, but to have the major landmarks identified so there are fewer surprises along the way. Help the team members understand that you all have the same goal. You want to create the best possible solution for the given business problem. If you do this, everyone involved will do their best to outline a picture of what is to be built, and you will be able to design an appropriate solution to fill those needs more easily. Technorati Tags: requirements gathering,PSC Group,PSC

    Read the article

  • Scale an image with unscalable parts

    - by Uko
    Brief description of the problem: imagine having some vector picture(s) and text annotations on the sides, outside of the picture(s). Now the task is to scale the whole composition, preserving the aspect ratio, in order to fit some view-port. The tricky part is that the text is not scalable, only the picture(s). The distance between the text and the image is still relative to the whole image, but the text size is always a constant. Example: let's assume that our total composition is two times larger than the view-port. Then we can just scale it by 1/2. But because the text parts have a fixed font size, they will become larger than we expect and won't fit in the view-port. One option I can think of is an iterative process where we repeatedly scale our composition until the delta between it and the view-port satisfies some precision. But this algorithm is quite costly, as it involves working with the graphics, and the image may be composed of a lot of components, which will lead to a lot of matrix computations. What's more, this solution seems hard to debug, extend, etc. Are there any other approaches to solving this scaling problem?
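
    A possible non-iterative alternative, assuming each axis decomposes into a scalable extent E_img (the picture plus the image-relative gaps) and a fixed extent E_text (the annotations at constant font size): the fit condition is then linear in the scale s, so s can be solved per axis in closed form and the smaller of the two taken:

        s \cdot E_{\text{img}} + E_{\text{text}} \le V
        \quad\Longrightarrow\quad
        s = \frac{V - E_{\text{text}}}{E_{\text{img}}}

    where V is the view-port extent on that axis. This needs one measurement pass instead of repeated re-layouts.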

    Read the article

  • Closure Tables - Is this enough data to display a tree view?

    - by James Pitt
    Here is the table I have created by testing the closure table method:

        | id  | parentId | childId | hops |
        | 270 |        6 |       6 |    0 |
        | 271 |        7 |       7 |    0 |
        | 272 |        8 |       8 |    0 |
        | 273 |        9 |       9 |    0 |
        | 276 |       10 |      10 |    0 |
        | 281 |        9 |      10 |    1 |
        | 282 |        7 |       9 |    1 |
        | 283 |        7 |      10 |    2 |
        | 285 |        7 |       8 |    1 |
        | 286 |        6 |       7 |    1 |
        | 287 |        6 |       9 |    2 |
        | 288 |        6 |      10 |    3 |
        | 289 |        6 |       8 |    2 |
        | 293 |        6 |       9 |    1 |
        | 294 |        6 |      10 |    2 |

    I am trying to create a simple tree of this using PHP. There does not seem to be enough data to create the tree. For example, when I look purely at parentId = 6:

        - Part 6
          - Part 7
            - ?
            - ?
          - Part 9
            - ?
            - ?

    We know that parts 8 and 10 exist below Part 7 or 9, but not which. We know that part 10 exists at both 3 and 4 nodes deep, but where? If I look at other data in the table it is possible to tell that it should be:

        - Part 6
          - Part 7
            - Part 9
              - Part 10
          - Part 9
            - Part 10

    I thought one of the benefits of closure tables was that there was no need for recursive queries? Could you help explain what I am doing wrong? EDIT: For clarification, this is a mapping table. There is another table called "parts" which has a column called part_id that correlates to both the parentId and childId columns in the "closure" table. The "id" column in the table above (closure) is just for the purpose of maintaining a primary key; it is not really necessary. The method I have used to create this closure table is described in the following article: http://dirtsimple.org/2010/11/simplest-way-to-do-tree-based-queries.html EDIT2: It can have two and three hops. I will explain more easily by assigning names to the items: Part 6 = Bicycle, Part 7 = Gears, Part 8 = Chain, Part 9 = Bolt, Part 10 = Nut. The Nut is part of the Bolt. The Bolt and Nut combo exists directly within the Bicycle and within the Gears, which are part of the Bicycle. In relation to which method to use, I have looked at adjacency lists, edges, enum paths, closures, DAGs (networks) and the nested set model. I am still trying to work out what is what, but this is an extremely complex component database where there are multiple parents, and any modification to a sub-tree must propagate through the other trees. More importantly, there will be insertions, deletions and tree views, and I wish to avoid recursion during general use, even at the cost of database space and query time during entry.
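
    On the "enough data" point: the hops = 1 rows are exactly the direct parent-child edges, so the tree (here really a small DAG, since Part 9 hangs under both 6 and 7) can be rebuilt from them alone, without recursive queries. A minimal sketch with the edge list hard-coded from the table above (the original is PHP; this is a C++ rendering of the same idea):

        #include <iostream>
        #include <map>
        #include <string>
        #include <utility>
        #include <vector>

        // Direct edges = the closure rows above with hops == 1.
        const std::vector<std::pair<int, int>> edges = {
            {9, 10}, {7, 9}, {7, 8}, {6, 7}, {6, 9}};

        void print(const std::map<int, std::vector<int>>& kids, int node, int depth) {
            std::cout << std::string(depth * 2, ' ') << "Part " << node << "\n";
            const auto it = kids.find(node);
            if (it != kids.end())
                for (const int c : it->second) print(kids, c, depth + 1);
        }

        int main() {
            std::map<int, std::vector<int>> kids;
            for (const auto& [p, c] : edges) kids[p].push_back(c);
            print(kids, 6, 0);   // root is Part 6, the Bicycle
        }

    This prints the expected tree (including Part 8 under Part 7), visiting the shared Bolt/Nut subtree once per parent.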

    Read the article

  • DFS backtracking with Java

    - by Cláudio Ribeiro
    I'm having problems with DFS backtracking in an adjacency matrix. Here's my code (I added the test to the main in case someone wants to run it): public class Graph { private int numVertex; private int numEdges; private boolean[][] adj; public Graph(int numVertex, int numEdges) { this.numVertex = numVertex; this.numEdges = numEdges; this.adj = new boolean[numVertex][numVertex]; } public void addEdge(int start, int end){ adj[start-1][end-1] = true; adj[end-1][start-1] = true; } List<Integer> visited = new ArrayList<Integer>(); public Integer DFS(Graph G, int startVertex){ int i=0; if(pilha.isEmpty()) pilha.push(startVertex); for(i=1; i<G.numVertex; i++){ pilha.push(i); if(G.adj[i-1][startVertex-1] != false){ G.adj[i-1][startVertex-1] = false; G.adj[startVertex-1][i-1] = false; DFS(G,i); break; }else{ visited.add(pilha.pop()); } System.out.println("Stack: " + pilha); } return -1; } Stack<Integer> pilha = new Stack(); public static void main(String[] args) { Graph g = new Graph(6, 9); g.addEdge(1, 2); g.addEdge(1, 5); g.addEdge(2, 4); g.addEdge(2, 5); g.addEdge(2, 6); g.addEdge(3, 4); g.addEdge(3, 5); g.addEdge(4, 5); g.addEdge(6, 4); g.DFS(g, 1); } } I'm trying to solve the Euler path problem. The program solves basic graphs, but when it needs to backtrack it just does not do it. I think the problem might be in the stack manipulation or in the recursive DFS call. I've tried a lot of things, but still can't seem to figure out why it does not backtrack. Can somebody help me?
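
    Since the stated goal is the Euler path problem, it may help to compare with Hierholzer's algorithm, which finds an Euler path without ever backtracking over a wrong choice; a hedged C++ sketch over the same kind of adjacency matrix (entries count remaining edges; this is the standard algorithm, not a patch for the exact Java above):

        #include <stack>
        #include <vector>

        // Hierholzer's algorithm; adj is taken by value so edges can be consumed.
        std::vector<int> eulerPath(std::vector<std::vector<int>> adj, int start) {
            std::vector<int> path;
            std::stack<int> st;
            st.push(start);
            while (!st.empty()) {
                const int v = st.top();
                int u = 0;
                while (u < (int)adj.size() && adj[v][u] == 0) ++u;
                if (u == (int)adj.size()) {   // no edges left here: emit and pop
                    path.push_back(v);
                    st.pop();
                } else {                      // consume edge v-u and walk deeper
                    --adj[v][u];
                    --adj[u][v];
                    st.push(u);
                }
            }
            return path;                      // the Euler path, in reverse order
        }

    For a graph with exactly two odd-degree vertices, start at one of them; otherwise any vertex with edges works.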

    Read the article
