Search Results

Search found 619 results on 25 pages for 'edges'.

Page 13 of 25

  • Depth interpolation for z-buffer, with scanline

    - by Twodordan
I have to write my own software 3d rasterizer, and so far I am able to project my 3d model made of triangles into 2d space: I rotate, translate and project my points to get a 2d space representation of each triangle. Then, I take the 3 triangle points and I implement the scanline algorithm (using linear interpolation) to find all points[x][y] along the edges (left and right) of the triangles, so that I can scan the triangle horizontally, row by row, and fill it with pixels. This works. Except I have to also implement z-buffering. This means that, knowing the rotated and translated z coordinates of the 3 vertices of the triangle, I must interpolate the z coordinate for all other points I find with my scanline algorithm. The concept seems clear enough: I first find Za and Zb with these calculations:

        var Z_Slope = (bottom_point_z - top_point_z) / (bottom_point_y - top_point_y);
        var Za = top_point_z + ((current_point_y - top_point_y) * Z_Slope);

    Then for each Zp I do the same interpolation horizontally:

        var Z_Slope = (right_z - left_z) / (right_x - left_x);
        var Zp = left_z + ((current_point_x - left_x) * Z_Slope);

    And of course I write to the zBuffer, if the current z is closer to the viewer than the previous value at that index. (My coordinate system is x: left - right; y: top - bottom; z: your face - computer screen.) The problem is, it goes haywire. The project is here and if you select the "Z-Buffered" radio button, you'll see the results... (Note that the rest of the options before "Z-Buffered" use the painter's algorithm to correctly order the triangles. I also use the painter's algorithm -only- to draw the wireframe in "Z-Buffered" mode, for debugging purposes.) PS: I've read here that you must turn the z's into their reciprocals (meaning z = 1/z) before you interpolate. I tried that, and it appears that there's no change. What am I missing? (Could anyone clarify precisely where you must turn z into 1/z and where to turn it back?)
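
    For reference, the standard perspective-correct recipe is: take the reciprocal once per vertex right after projection, interpolate 1/z along the edges and across each span (1/z, unlike z, is linear in screen space), depth-test with the interpolated 1/z directly, and divide back only if the true z is ever needed. A minimal Java sketch, with names mirroring the question's variables:

        // Depth interpolation done on 1/z instead of z.
        final class DepthInterp {
            // 1/z at height y on the edge between a top and bottom vertex.
            static double edgeInvZ(double topZ, double topY,
                                   double bottomZ, double bottomY, double y) {
                double topInvZ = 1.0 / topZ;           // reciprocal taken once,
                double bottomInvZ = 1.0 / bottomZ;     // per vertex, after projection
                double slope = (bottomInvZ - topInvZ) / (bottomY - topY);
                return topInvZ + (y - topY) * slope;
            }
            // Interpolate 1/z across the span, then test against a 1/z buffer
            // (initialized to 0, i.e. infinitely far away).
            static boolean testAndSet(double[] invZBuffer, int idx,
                                      double leftInvZ, double leftX,
                                      double rightInvZ, double rightX, double x) {
                double slope = (rightInvZ - leftInvZ) / (rightX - leftX);
                double invZp = leftInvZ + (x - leftX) * slope;
                if (invZp > invZBuffer[idx]) {  // larger 1/z = closer, with z
                    invZBuffer[idx] = invZp;    // growing into the screen
                    return true;                // caller plots the pixel
                }
                return false;
            }
        }

    The key point is that the buffer itself stores 1/z, so nothing is converted back during rasterization; z = 1/invZp only matters if a later pass needs the actual depth.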

    Read the article

  • Edge Detection on Screen

    - by user2056745
I have an edge collision problem in a simple game that I am developing. It's about throwing a coin across the screen. I am using the code below to detect edge collisions so I can make the coin bounce off the edges of the screen. Everything works as I want except in one case: when the coin hits the left edge and then travels to the right edge, the system doesn't detect the collision. The other cases work perfectly, like hitting the right edge first and then the left edge. Can someone suggest a solution?

        public void onMove(float dx, float dy) {
            coinX += dx;
            coinY += dy;
            if (coinX > rightBorder) {
                coinX = ((rightBorder - coinX) / 3) + rightBorder;
            }
            if (coinX < leftBorder) {
                coinX = -(coinX) / 3;
            }
            if (coinY > bottomBorder) {
                coinY = ((bottomBorder - coinY) / 3) + bottomBorder;
            }
            invalidate();
        }
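
    For reference, here is a sketch (Java, mirroring the question's fields) that treats all four edges symmetrically. Two things stand out in the snippet above: the left-edge branch assumes leftBorder is 0, and there is no top-border check at all, so a coin bouncing upward can leave the screen:

        // Reflect the overshoot back inside, damped by the same factor of 3.
        public void onMove(float dx, float dy) {
            coinX += dx;
            coinY += dy;
            if (coinX > rightBorder)  coinX = rightBorder  - (coinX - rightBorder)  / 3;
            if (coinX < leftBorder)   coinX = leftBorder   + (leftBorder - coinX)   / 3;
            if (coinY > bottomBorder) coinY = bottomBorder - (coinY - bottomBorder) / 3;
            if (coinY < topBorder)    coinY = topBorder    + (topBorder - coinY)    / 3; // topBorder assumed
            invalidate();
        }

    With a very large dx, the reflected position can still land outside the opposite border in the same frame; wrapping the four checks in a loop until the coin lies inside both borders covers that case, and may be exactly the left-then-right case described above.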

    Read the article

  • Libgdx detect when player is outside of screen

    - by Rockyy
I'm trying to learn libGDX (coming from XNA/MonoDevelop), and I'm making a super simple test game to get to know it better. I was wondering how to detect when the player sprite is outside of the screen, and how to make it impossible to go past the screen edges. In XNA you could do something like this:

        // Prevent player from moving off the left edge of the screen
        if (player.Position.X < 0)
            player.Position = new Vector2(0, player.Position.Y);

    How is this achieved in libGDX? Is it the Stage that handles the 2D viewport in libGDX? This is my code so far:

        private Texture texture;
        private SpriteBatch batch;
        private Sprite sprite;

        @Override
        public void create () {
            float w = Gdx.graphics.getWidth();
            float h = Gdx.graphics.getHeight();
            batch = new SpriteBatch();
            texture = new Texture(Gdx.files.internal("player.png"));
            sprite = new Sprite(texture);
            sprite.setPosition(w/2 - sprite.getWidth()/2, h/2 - sprite.getHeight()/2);
        }

        @Override
        public void render () {
            Gdx.gl.glClearColor(1, 1, 1, 1);
            Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
            if (Gdx.input.isKeyPressed(Input.Keys.LEFT)) {
                if (Gdx.input.isKeyPressed(Input.Keys.CONTROL_LEFT))
                    sprite.translateX(-1f);
                else
                    sprite.translateX(-10.0f);
            }
            if (Gdx.input.isKeyPressed(Input.Keys.RIGHT)) {
                if (Gdx.input.isKeyPressed(Input.Keys.CONTROL_LEFT))
                    sprite.translateX(1f);
                else
                    sprite.translateX(10f);
            }
            batch.begin();
            sprite.draw(batch);
            batch.end();
        }
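
    For a SpriteBatch drawn in screen coordinates like this, no Stage is needed; clamping the sprite after the translate calls mirrors the XNA snippet. A minimal sketch using libGDX's MathUtils.clamp (com.badlogic.gdx.math.MathUtils), called from render() before drawing:

        // Keep the whole sprite inside the window.
        private void clampToScreen() {
            float w = Gdx.graphics.getWidth();
            float h = Gdx.graphics.getHeight();
            sprite.setX(MathUtils.clamp(sprite.getX(), 0, w - sprite.getWidth()));
            sprite.setY(MathUtils.clamp(sprite.getY(), 0, h - sprite.getHeight()));
        }

    Detecting "outside the screen" is the same comparison without the assignment, e.g. sprite.getX() + sprite.getWidth() < 0 for having left through the left edge.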

    Read the article

  • From .psd to working HTML and CSS - help me suck less

    - by kevinmajor1
I am not much of a designer; my strength lies in coding. That said, I'm often forced into the role of "The Man," responsible for all aspects of site creation. So I'm wondering if the pros can give me tips/solutions/links to tutorials for my main questions:

    - Resolution. What should I aim for? What are the lower and upper edges I should be aware of? I know that systems like 960 Grid were popular recently. Is that the number I should still aim for?
    - Slicing up a .psd - are there any tricks I should know? I've always found it difficult to get my slices pixel perfect, and I'm also really slow at it. I must be looking at it wrong, or missing something fundamental.
    - The same goes for text. Layouts are always filled with the classic "Lorem...", but I can never seem to get real content to fit quite as well on the screen.
    - The advanced (to me, anyway) looking things, like part of a logo/image overlaying what looks like a content area. How does one do that?
    - How do layouts change, or get informed by, the decision to go fixed or liquid?

    Again, any tips/tricks/suggestions/tutorials you can share would be greatly appreciated.

    Read the article

  • Scaling background without scaling foreground in platformer?

    - by David Xu
I'm currently developing a platform game and I've run into a problem with scaling resolutions. At a different resolution, I still want the game to display the foreground unscaled (characters, tiles, etc.), but I want the background to be scaled to fit the window. To explain this better: my viewport has 4 variables (x, y, width, height), where x and y are the top left corner and width and height are the dimensions. These can be 800x600, 1024x768 or 1280x960. When I design my levels, I design everything for the highest resolution (1280x960) and expect the game engine to scale it down if a user is running at a lower resolution. I have tried the following to make it work, but nothing I've come up with solves it so far:

        scale = view->width/1280;
        drawX = x * scale;
        drawY = y * scale;

    (this makes the translation too small for low resolutions) and

        scale = view->width/1280;
        bgWidth = background->width*scale;
        bgHeight = background->height*scale;
        drawX = x + background->width/2 - bgWidth/2;
        drawY = y + background->height/2 - bgHeight/2;

    (this makes the translation completely wrong at the edges of the map). The thing is, no matter what resolution the game is run at, the map remains the same size, and the foreground is unscaled. (With a lower resolution you just see less of the foreground in the viewport.) I was wondering if anyone had any idea how to solve this problem? Thank you in advance!
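
    One way to get this behaviour (a sketch under assumptions, not the only answer) is to stop transforming the background with the world at all: keep drawing the foreground 1:1 in world space, and place the background in screen space, scaled to cover the window and offset by the camera's relative progress across the map:

        // Returns {drawX, drawY, drawWidth, drawHeight} for a screen-space
        // background blit. view* is the camera rectangle in world units,
        // bg* the unscaled background size, world* the fixed map size (1280x960).
        static float[] backgroundPlacement(float viewX, float viewY,
                                           float viewW, float viewH,
                                           float bgW, float bgH,
                                           float worldW, float worldH) {
            float scale = Math.max(viewW / bgW, viewH / bgH); // cover the window
            float w = bgW * scale, h = bgH * scale;
            // Camera progress across the world (0..1) mapped onto the leftover
            // background range, so the background edges line up at the map edges.
            float tx = (worldW > viewW) ? viewX / (worldW - viewW) : 0;
            float ty = (worldH > viewH) ? viewY / (worldH - viewH) : 0;
            return new float[] { -tx * (w - viewW), -ty * (h - viewH), w, h };
        }

    The foreground pass stays exactly as it is now, so a smaller window simply shows less of the map while the background always fills it.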

    Read the article

• Create a simple game board in Android

    - by user2819446
I am a beginner in Android and I want to create a very simple 2D game. I've already programmed a Tic-Tac-Toe game. The drawing of the game board and connecting it with my game and input logic was quite difficult, as it was all done by hand: canvas drawing, calculating positions, etc. By now I've figured out that there must be a simpler way. All I want is a simple grid; something like this: http://www.blelb.com/deutsch/blelbspots/spot29/images/hermannneg.gif. The edges should be visible and black, and each cell editable, containing either an image or nothing, so I can detect whether the player is on that cell or not, move it, and so on. Think of it as Chess or something similar. Searching the internet over the last few days, I've become a bit overwhelmed by all the different options. After all, I think GridView or GridLayout is what I'm searching for, but I'm still stuck. I hope you can help me with some good advice or maybe a link to a nice tutorial. I have checked several already, and none were exactly what I was searching for.
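
    One lightweight route for a board like this (a sketch, plain Android, no GridView/GridLayout needed) is a custom View that owns the board as a 2D array, draws the black grid in onDraw, and maps touch coordinates back to a cell by dividing by the cell size:

        // Assumes the usual android.graphics / android.view imports.
        public class BoardView extends View {
            private static final int CELLS = 8;                   // board dimension
            private final int[][] board = new int[CELLS][CELLS];  // 0 = empty cell
            private final Paint line = new Paint();

            public BoardView(Context context) {
                super(context);
                line.setColor(Color.BLACK);
            }

            @Override protected void onDraw(Canvas canvas) {
                float cw = getWidth() / (float) CELLS;
                float ch = getHeight() / (float) CELLS;
                for (int i = 0; i <= CELLS; i++) {
                    canvas.drawLine(i * cw, 0, i * cw, getHeight(), line); // verticals
                    canvas.drawLine(0, i * ch, getWidth(), i * ch, line);  // horizontals
                }
                // For each non-empty board[row][col], draw its bitmap into that cell here.
            }

            @Override public boolean onTouchEvent(MotionEvent e) {
                int col = (int) (e.getX() / (getWidth() / (float) CELLS));
                int row = (int) (e.getY() / (getHeight() / (float) CELLS));
                // board[row][col] is the tapped cell: test it, move a piece, etc.
                invalidate();  // redraw after the board state changes
                return true;
            }
        }

    The same row/col arithmetic answers "is the player on this cell" for the game logic, since pieces live in the array rather than in the view hierarchy.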

    Read the article

  • Resolution independence - resize on the fly or ship all sizes?

    - by RecursiveCall
My game relies heavily on textures of various sizes, with some being full-screen. The game is targeted at multiple resolutions. I found that resizing textures (downsizing) works quite well for this game's art style (it's not pixel art or anything like that). I asked my artist to ensure that all textures at the edges of the screen are created in such a way that they can safely "overflow" off screen; this means that aspect ratio is not an issue. So with no aspect ratio issues, I figured that I would simply ask my artist to create assets in very high resolution, and then resize them down to the appropriate screen resolution. The question is, when and how do I do that? Do I pre-resize everything to common resolutions in Photoshop and package all assets in the final product (increasing the download size the user has to deal with), then select the appropriate asset based on the detected resolution? Or do I ship with the largest set of textures, detect the resolution on load, set a render target and draw all downsized assets to it, and use that? Or, for the latter, do I use some sort of CPU-side algorithm to resize on game load?
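
    If the answer ends up being "ship the largest art and downscale once at load", here is a minimal sketch of the CPU-side resize in plain Java (the file name and the 1280-wide design width are assumptions taken from the numbers above):

        import java.awt.Graphics2D;
        import java.awt.RenderingHints;
        import java.awt.image.BufferedImage;
        import java.io.File;
        import javax.imageio.ImageIO;

        final class AssetLoader {
            // Load an image and downscale it once, with bilinear filtering.
            static BufferedImage loadScaled(String path, float scale) throws Exception {
                BufferedImage src = ImageIO.read(new File(path));
                int w = Math.round(src.getWidth() * scale);
                int h = Math.round(src.getHeight() * scale);
                BufferedImage dst = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
                Graphics2D g = dst.createGraphics();
                g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                                   RenderingHints.VALUE_INTERPOLATION_BILINEAR);
                g.drawImage(src, 0, 0, w, h, null);
                g.dispose();
                return dst;
            }
        }

        // Usage: art is authored at 1280x960, so
        //   float scale = detectedScreenWidth / 1280f;
        //   BufferedImage bg = AssetLoader.loadScaled("art/background.png", scale);

    This trades a one-off hit at load time for the smallest download; the render-target variant does the same work on the GPU instead.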

    Read the article

  • Social Analytics in your current data

    - by Dan McGrath
By now everyone is aware of the massive boom in social networking (Twitter, Facebook, LinkedIn), and obviously a big part of its business model revolves around being able to mine this data to create information that can be used to make money for someone. Gartner has identified 'Social Analytics' as one of the top 10 strategic technologies for 2011. Has anyone looked at their existing data structures to determine if they could extract a social graph and then perform further data mining against it? How does it fit in with your other strategic development strategies? What information are you trying to extract from the data? Take, for example, a bank. It could conceivably determine a social graph through account relationships and transactions. Obviously there would be open edges on the graph where funds enter or leave the institution, but that shouldn't detract from the usefulness of the data. I'm looking for actual examples with the answers, as well as why/how they did it. References to other sites will be greatly appreciated. Note: I'm not at all referring to mining data out of actual social networks.

    Read the article

  • How can I keep my keyboard clean?

    - by Billy ONeal
    I know there are lots of articles out there on getting a keyboard clean, but I'd like to prevent my keyboard from getting nasty in the first place. The biggest problem isn't anything like food particles or liquids -- my keyboards almost instantly get a coating of finger oils and dead skin cells around the edges of the keys where my fingers rest, and the result is quite nasty. I have to clean my keyboards constantly to keep the problem in check. I'm wondering if there are any good ways to reduce this kind of buildup on my keyboard.

    Read the article

  • Diamond-square terrain generation problem

    - by kafka
I've implemented a diamond-square algorithm according to this article: http://www.lighthouse3d.com/opengl/terrain/index.php?mpd2 The problem is that I get these steep cliffs all over the map. It happens on the edges, when the terrain is recursively subdivided. Here is the source:

        void DiamondSquare(unsigned x1, unsigned y1, unsigned x2, unsigned y2, float range)
        {
            int c1 = (int)x2 - (int)x1;
            int c2 = (int)y2 - (int)y1;
            unsigned hx = (x2 - x1)/2;
            unsigned hy = (y2 - y1)/2;
            if((c1 <= 1) || (c2 <= 1))
                return;

            // Diamond stage
            float a = m_heightmap[x1][y1];
            float b = m_heightmap[x2][y1];
            float c = m_heightmap[x1][y2];
            float d = m_heightmap[x2][y2];
            float e = (a+b+c+d) / 4 + GetRnd() * range;
            m_heightmap[x1 + hx][y1 + hy] = e;

            // Square stage
            float f = (a + c + e + e) / 4 + GetRnd() * range;
            m_heightmap[x1][y1+hy] = f;
            float g = (a + b + e + e) / 4 + GetRnd() * range;
            m_heightmap[x1+hx][y1] = g;
            float h = (b + d + e + e) / 4 + GetRnd() * range;
            m_heightmap[x2][y1+hy] = h;
            float i = (c + d + e + e) / 4 + GetRnd() * range;
            m_heightmap[x1+hx][y2] = i;

            DiamondSquare(x1, y1, x1+hx, y1+hy, range / 2.0);    // Upper left
            DiamondSquare(x1+hx, y1, x2, y1+hy, range / 2.0);    // Upper right
            DiamondSquare(x1, y1+hy, x1+hx, y2, range / 2.0);    // Lower left
            DiamondSquare(x1+hx, y1+hy, x2, y2, range / 2.0);    // Lower right
        }

    Parameters: (x1,y1), (x2,y2) - coordinates that define a region of the heightmap (default (0,0)-(128,128)); range - basically the max height (default 32). Help would be greatly appreciated.
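
    For reference, the cliffs are characteristic of running diamond-square as an independent recursion per quadrant: the midpoint of an edge shared by two regions is written twice, each time with a different random offset, and the square stage above reuses the local centre e twice instead of the true neighbours across the boundary. The usual fix is the iterative formulation, sweeping the whole map one step size at a time so each point is written exactly once. A Java sketch (size must be 2^n + 1, corners seeded beforehand; the random term mimics a GetRnd() returning values in [-1, 1]):

        static void diamondSquare(float[][] map, int size, float range,
                                  java.util.Random rnd) {
            for (int step = size - 1; step > 1; step /= 2, range /= 2) {
                int half = step / 2;
                // Diamond stage: the centre of every square at this scale.
                for (int y = half; y < size; y += step)
                    for (int x = half; x < size; x += step)
                        map[x][y] = (map[x - half][y - half] + map[x + half][y - half]
                                   + map[x - half][y + half] + map[x + half][y + half]) / 4
                                  + (rnd.nextFloat() * 2 - 1) * range;
                // Square stage: edge midpoints, averaging whichever of the four
                // diamond neighbours exist (true neighbours, never e twice).
                for (int y = 0; y < size; y += half)
                    for (int x = ((y / half) % 2 == 0) ? half : 0; x < size; x += step) {
                        float sum = 0; int n = 0;
                        if (x - half >= 0)   { sum += map[x - half][y]; n++; }
                        if (x + half < size) { sum += map[x + half][y]; n++; }
                        if (y - half >= 0)   { sum += map[x][y - half]; n++; }
                        if (y + half < size) { sum += map[x][y + half]; n++; }
                        map[x][y] = sum / n + (rnd.nextFloat() * 2 - 1) * range;
                    }
            }
        }

    Because every midpoint is computed once, from neighbours on both sides of a former "region boundary", the seams disappear while the overall shape of the algorithm stays the same.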

    Read the article

  • I can't "unmaximize" my window

    - by Beska
I've got a Windows app (I don't think it matters which one, but in case you're wondering, it's SQL Server Profiler) that I can't put back into "windowed" mode. I can maximize or minimize it, either by right-clicking on the taskbar and selecting maximize, or, if the window is already maximized, by clicking the minimize button to minimize it. The problem is that when I click the middle button, the one that toggles between maximized and "windowed" mode, the windowed mode just makes it disappear. The program is still running fine, and I can bring it back up (maximized) by selecting it in the taskbar. It doesn't seem to be hanging out on any of the edges of the screen; as far as I can tell, it's just not there. And, of course, the app is "smart" enough to remember its status, so restarting the app doesn't help. Has anyone seen this? Know how to fix it?

    Read the article

  • Configuring Apache reverse proxy

    - by Martin
I have a load balancer server and edge servers. I am trying to configure a reverse proxy in order to hide the backend servers PL1, 2, 3. PL1, 2, 3 are not located in the same subnet; they are located in different locations:

               PL1
        Lb1 -> PL2
               PL3

    I tried to configure an Apache reverse proxy, but it is not sending requests to PL1, 2, 3. The reverse proxy only worked when I configured Apache to send requests to a local server on another port.

        ProxyRequests Off
        <Proxy *>
            Order deny,allow
            Allow from all
        </Proxy>
        ProxyPass /PL1 http://PL1server.com/
        ProxyPassReverse /PL1 http://PL1server.com/

    The above configuration did not work. Could you help me solve the issue? Or are there other proxy types, like Squid or SOCKS5, that could solve this? Does the reverse proxy fail if we use an IP address or domain URL in ProxyPass and ProxyPassReverse?

    Read the article

  • Need help understanding XNA 4.0 BoundingBox vs BoundingSphere Intersection

    - by nerdherd
I am new to both game programming and XNA, so I apologize if I'm missing a simple concept or something. I have created a simple 3D game with a player and a crate, and I'm working on getting my collision detection working properly. Right now I am using a BoundingSphere for my player and a BoundingBox for the crate. For some reason, XNA only detects a collision when my player's sphere touches the front face of the crate. I'm rendering all the BoundingSpheres and BoundingBoxes as wireframes so I can see what's going on, and everything visually appears to be correct, but I can't figure out this behavior. I have tried these checks:

        playerSphere.Intersects(crate.getBoundingBox())
        playerSphere.Contains(crate.getBoundingBox(), ContainmentType.Intersects)
        playerSphere.Contains(crate.getBoundingBox()) != ContainmentType.Disjoint

    But they all seem to produce the same behavior (in other words, they are only true when I hit the front face of the crate). The interesting thing is that when I use a BoundingSphere for my crate, the collision is detected as I would expect, but of course this makes the edges less accurate. Any thoughts or ideas? Have I missed something about how BoundingSpheres and BoundingBoxes compute their intersections? I'd be happy to post more code or screenshots to clarify if needed. Thanks!
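
    For reference, sphere-box intersection reduces to the classic closest-point test: clamp the sphere's centre to the box, then compare the squared distance with the squared radius. A self-contained Java version of the same math can be run beside the framework's result; if the two disagree, the box's min/max corners are usually not where the rendered wireframe suggests (e.g. a box that was scaled for drawing but not for collision):

        // True if a sphere (centre c, radius r) touches the axis-aligned box.
        static boolean sphereIntersectsBox(float cx, float cy, float cz, float r,
                                           float minX, float minY, float minZ,
                                           float maxX, float maxY, float maxZ) {
            float px = Math.max(minX, Math.min(cx, maxX)); // closest point on the
            float py = Math.max(minY, Math.min(cy, maxY)); // box to the centre
            float pz = Math.max(minZ, Math.min(cz, maxZ));
            float dx = cx - px, dy = cy - py, dz = cz - pz;
            return dx * dx + dy * dy + dz * dz <= r * r;
        }

    Printing the box's Min and Max alongside the sphere's Center each frame tends to make a "front face only" pattern obvious, for example a box whose depth extent collapsed onto its front plane.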

    Read the article

• How to reset display settings in XFCE \ Ubuntu 12.04 and also fglrx drivers

    - by Agent24
I recently upgraded to Ubuntu 12.04, and since I hate Unity I installed the Xubuntu package and am using XFCE instead. Since I have a Radeon HD5770 I also installed the fglrx drivers. This all went fine (aside from the fact that the post-release update fglrx drivers have an error on installation, and Ubuntu thinks they're not installed when they actually are). I configured my display settings (dual monitors, a 17" CRT on VGA and a 17" LCD on DVI) in the amdcccle program and everything was perfect. THEN, 2 days ago, I accidentally clicked on the "Display" settings in the XFCE settings manager. After that, everything got screwed. Normally I run the CRT at 1152x864 and the LCD at 1280x1024, with the CRT as my primary monitor (with panel) and the LCD without panels etc., just to display other windows when I want to drag them over there. The problem is that now, if I set my CRT to 1152x864, it stays at 1280x1024 virtually and half the stuff falls off the screen. It also puts the LCD at 1280x1024, but then overlays the CRT's display on top, with different wallpaper, in an L shape down the right-hand and bottom edges. In short, nothing makes sense and everything is FUBAR. I tried uninstalling fglrx through Synaptic, and renaming xorg.conf and also the XFCE XML file that has the monitor settings, but it still won't make sense. Unity, on the other hand, can currently set everything normally, so the problem appears to be only with XFCE. In any case, I can't even get the fglrx drivers back: when I reinstalled them, I couldn't run amdcccle anymore, as it says the driver isn't installed!! Can someone help me reset my XFCE settings so the monitors aren't screwed with some incorrect virtual desktop size, and also so I can get the fglrx drivers back and working? I really don't want to have to format and reinstall and go through all the hassle, but it looks like I may have to :(

    Read the article

• How to calibrate an HDTV that is being used as a monitor?

    - by Mike
I have an HDTV I am using as a second monitor. The first monitor is a real computer monitor, not a TV, and has a fantastic image. The second one, the HDTV, has a good image, but it is a little bit blurred and has the problem you can see in the picture below: a kind of red halo around the edges. I think it has something to do with red contrast. Other colors show this problem too, especially green. The problem is that the TV has no contrast adjustment. Instead it has something called IRE 10 points and IRE 2 points. Taking IRE 10 for example, it has 4 controls for each of the 10 points: luminosity, R, G and B. I could not find a page where I can understand what this IRE is and how I should adjust it. Can someone tell me how I should proceed to calibrate this TV for the best picture? Thanks for any help.

    Read the article

  • Game Trees Conceptual Question

    - by Chris Corbin
I am struggling to conceptually understand a question in a programming assignment for an algorithms class. The problem deals with a fictitious 2-player game named Easy. The rules of the game are simple: each player may choose one of 4 integers {0-3}, after which that integer is not available to the other player. The catch is, if a player picks {0}, it means they quit. The objective is for Player 1 to get {1} and Player 2 to get {2}, in which case they may win; however, if both or neither succeed, the game ends in a draw. I have been asked to draw the game tree for Easy, showing all nodes, which they explained as 4! = 24; labeling the edges, which represent moves (selecting a number), and the leaves with who won (1 means Player 1 won, -1 means Player 2 won, and 0 means a tie). I have drawn out a game tree which I believe is correct, but I am not 100% certain, hence I am asking the question. My game tree only has 16 leaves. I am thinking that when a player picks {0} and quits, the game tree stops there? I don't see how it is possible to get to 24 leaves. Any help would be greatly appreciated, and if you need more information I would be happy to provide it. Thanks
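
    Since the rules are so small, the tree can simply be enumerated. A tiny Java counter makes the 16-versus-24 question concrete: with the quit rule, a branch ends the moment 0 is picked, which gives 16 leaves (matching the tree described above); 4! = 24 would only arise if every branch always ran through all four picks, i.e. if picking 0 did not end the game:

        import java.util.*;

        final class EasyTree {
            // Count leaves: players alternate picks from the remaining integers;
            // picking 0 ends the game, as does running out of integers.
            static int leaves(Set<Integer> remaining) {
                if (remaining.isEmpty()) return 1;           // nothing left: a leaf
                int count = 0;
                for (int pick : new ArrayList<>(remaining)) {
                    if (pick == 0) { count++; continue; }    // quit: branch ends here
                    remaining.remove(pick);
                    count += leaves(remaining);
                    remaining.add(pick);
                }
                return count;
            }
            public static void main(String[] args) {
                System.out.println(leaves(new HashSet<>(Arrays.asList(0, 1, 2, 3))));
            }
        }

    Extending leaves() to track whose turn it is and who holds {1} and {2} turns the same enumeration into the win/draw labelling the assignment asks for.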

    Read the article

  • Partial recalculation of visibility on a 2D uniform grid

    - by Martin Källman
Problem: Imagine that we have a 2D uniform grid of dimensions N x N. For this grid we have also pre-computed a visibility look-up table, e.g. with DDA, which answers the boolean query "is cell X visible from cell Y?". The look-up table is a complete graph K_N of the cells V in the grid, with each edge E being a binary value denoting the visibility between its vertices. Question: If any given cell has its visibility modified, is it possible to extract the subset E_delta of edges which must have their visibility recomputed due to the change, so as to avoid a full recomputation for the entire grid? (Which is N(N-1)/2 or N^2, depending on the implementation.) Update: If it is not possible to solve this in closed form, then maintaining a separate mapping from each cell to every cell pair whose line intersects said cell might also be an option. This obviously consumes more memory, but the data is static. The increased memory requirement could be reduced by introducing a hierarchy, subdividing the grid into smaller parts; by doing so the above mapping can be reused for each sub-grid. This would come at a cost in terms of increased computation relative to the number of subdivisions, and would also require a resumable ray-casting algorithm.
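
    A sketch of the mapping suggested in the update: walk each pair's line once with Bresenham at start-up and file the pair under every cell the line visits (endpoints included, so edges incident to the changed cell are covered too). A cell change then touches only the pairs filed under that cell, trading static memory for update speed exactly as described above:

        import java.util.*;

        final class VisibilityIndex {
            final int n;                                        // grid is n x n
            final Map<Integer, List<int[]>> pairsThroughCell = new HashMap<>();

            VisibilityIndex(int n) {
                this.n = n;
                // Every pair walked once; heavy at start-up, static afterwards.
                for (int a = 0; a < n * n; a++)
                    for (int b = a + 1; b < n * n; b++)
                        for (int cell : cellsOnLine(a % n, a / n, b % n, b / n, n))
                            pairsThroughCell.computeIfAbsent(cell, k -> new ArrayList<>())
                                            .add(new int[] { a, b });
            }

            // Every pair whose stored visibility may change when this cell changes.
            List<int[]> pairsToRecompute(int cell) {
                return pairsThroughCell.getOrDefault(cell, Collections.emptyList());
            }

            // Standard all-octant Bresenham between two cell centres.
            static List<Integer> cellsOnLine(int x0, int y0, int x1, int y1, int n) {
                List<Integer> cells = new ArrayList<>();
                int dx = Math.abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
                int dy = -Math.abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
                int err = dx + dy;
                while (true) {
                    cells.add(y0 * n + x0);
                    if (x0 == x1 && y0 == y1) return cells;
                    int e2 = 2 * err;
                    if (e2 >= dy) { err += dy; x0 += sx; }
                    if (e2 <= dx) { err += dx; y0 += sy; }
                }
            }
        }

    One caveat: if the original table was built with a DDA that visits a different cell set than this walk, the index should reuse that same traversal routine so the two always agree.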

    Read the article

  • Worst SysAdmin Accident

    - by Ward
    In line with the question about Best sysadmin accident, what's the worst accident you've been involved in? Unlike the previous question, I mean "worst" in the sense of most system damage or actual harm to people. I'll start with mine: We have two remote wiring closets that are at the end of a 100-foot corridor which has a metal grate for the floor. After we had Cat6 cable installed, the contractors cleaned up all the debris that dropped through the grating to the concrete 3 feet below. A co-worker and I entered the corridor to check on the progress one day but were distracted and didn't notice that a piece of grating had been moved aside. My buddy stepped into air and his chest slammed into the steel crossbar. He was winded and sore enough to take a couple days off, but luckily the steel beam had rounded edges and the size of the opening was such that he didn't smack his head into it or the floor below. Obviously we learned that areas where the floor is partially removed need to be flagged.

    Read the article

  • resize image without image quality reduction

    - by ali
In web design you often need to design an image, for example in Photoshop, and then use multiple sizes of it. But I don't understand something here: when I resize an image (PNG or JPG) and reduce its dimensions in Photoshop, the image quality gets extremely reduced and the edges become messy, while resizing the image in a simple program like Microsoft Paint gives a really better output! So what's the reason? Is there a trick in Photoshop for image resizing which I've missed? Thanks for your help. UPDATE: I resize this way: Image > Image Size, then enter the new dimensions; all of the checkboxes are checked, and I have tried all of the resample modes, including Bicubic Sharper.

    Read the article

  • How to disable "N" Wireless Mode RTL8192 (Thinkpad Edge 15 Core i5) in natty

    - by Gustavo Rubio
I've seen many owners of ThinkPad Edges, which are supposed to be Linux-friendly, having problems with the wireless adapter. I've found several links inside Ask Ubuntu and in ubuntuforums with a lot of workarounds for those problems; mine seems to be weird, though. I use my laptop in both my office and at home. At home I have a router which is A/B/G, and there the wireless connection works just fine, using a WEP key. But at work I have a B/G/N wireless router and it doesn't work. My guess is that this adapter works in N mode, but somehow this is buggy in the driver bundled with Natty. I've tried to disable the "N" mode on the router, but that didn't work. Later I went to the Realtek website, downloaded their driver and compiled it myself. That kinda seems to work most of the time, but sometimes some websites keep trying to load, or load only parts of the page, and images start to look like their links are broken, and so on; much like what you get when a page was loading and the connection is suddenly lost. This problem, as I said, occurs only when using the Realtek driver from their website. dmesg gives me a lot of these:

        [ 5869.049454] rtl8192se_update_ratr_table: ratr_index=0 ratr_table=0x00000ff5
        [ 5879.240563] DHCP pkt src port:68, dest port:67!!

    So I thought I might as well switch back to the original driver, which seems to work just fine on A/B/G wireless networks but not on N networks. So if anybody knows how to disable that mode from within the driver, please let us know :) Thanks in advance! PS: I did find a link to a similar question and it was answered, but let me remind you I'm NOT using the Intel wireless version for my ThinkPad but the Realtek (RTL8192SvB).

    Read the article

  • Developing Schema Compare for Oracle (Part 2): Dependencies

    - by Simon Cooper
In developing Schema Compare for Oracle, one of the issues we came across was the size of the databases. As detailed in my last blog post, we had to allow schema pre-filtering due to the number of objects in a standard Oracle database. Unfortunately, this leads to some quite tricky situations regarding object dependencies. This post explains how we deal with these dependencies.

    1. Cross-schema dependencies

    Say, in the following database, you're populating SchemaA, and synchronizing SchemaA.Table1:

        -- Source
        CREATE TABLE SchemaA.Table1 (
            Col1 NUMBER REFERENCES SchemaB.Table1(Col1));
        CREATE TABLE SchemaB.Table1 (
            Col1 NUMBER PRIMARY KEY);

        -- Target
        CREATE TABLE SchemaA.Table1 (
            Col1 VARCHAR2(100) REFERENCES SchemaB.Table1(Col1));
        CREATE TABLE SchemaB.Table1 (
            Col1 VARCHAR2(100) PRIMARY KEY);

    We need to do a rebuild of SchemaA.Table1 to change Col1 from a VARCHAR2(100) to a NUMBER. This consists of:

    1. Creating a table with the new schema
    2. Inserting data from the old table to the new table, with appropriate conversion functions (in this case, TO_NUMBER)
    3. Dropping the old table
    4. Renaming the new table to the same name as the old table

    Unfortunately, in this situation, the rebuild will fail at step 1, as we're trying to create a NUMBER column with a foreign key reference to a VARCHAR2(100) column. As we're only populating SchemaA, the naive implementation of the object population prefiltering (sticking a WHERE owner = 'SCHEMAA' on all the data dictionary queries) will generate an incorrect sync script. What we actually have to do is:

    1. Drop the foreign key constraint on SchemaA.Table1
    2. Rebuild SchemaB.Table1
    3. Rebuild SchemaA.Table1, adding the foreign key constraint to the new table

    This means that in order to generate a correct synchronization script for SchemaA.Table1 we have to know what SchemaB.Table1 is, and that it also needs to be rebuilt to successfully rebuild SchemaA.Table1. SchemaB isn't the schema that the user wants to synchronize, but we still have to load the table and column information for SchemaB.Table1 the same way as any table in SchemaA. Fortunately, Oracle provides (mostly) complete dependency information in the dictionary views. Before we actually read the information on all the tables and columns in the database, we can get dependency information on all the objects that are either pointed at by objects in the schemas we're populating, or point to objects in the schemas we're populating (think about what would happen if SchemaB was being explicitly populated instead), with a suitable query on all_constraints (for foreign key relationships) and all_dependencies (for most other types of dependencies, e.g. a function using another function). The extra objects found can then be included in the actual object population, and the sync wizard then has enough information to figure out the right thing to do when we get to actually synchronize the objects. Unfortunately, this isn't enough.

    2. Dependency chains

    The solution above will only get the immediate dependencies of objects in populated schemas. What if there's a chain of dependencies?

        A.tbl1 -> B.tbl1 -> C.tbl1 -> D.tbl1

    If we're only populating SchemaA, the implementation above will only include B.tbl1 in the dependent objects list, whereas we might need to know about C.tbl1 and D.tbl1 as well, in order to ensure a modification on A.tbl1 can succeed. What we actually need is a graph traversal on the dependency graph that all_dependencies represents.
Fortunately, we don't have to read all the database dependency information from the server and run the graph traversal on the client computer, as Oracle provides a method of doing this in SQL: CONNECT BY. So, we can put all the dependencies we want to include together in a big bag with UNION ALL, then run a SELECT ... CONNECT BY on it, starting with objects in the schema we're populating. We should end up with all the objects that might be affected by modifications in the initial schema we're populating. Good solution? Well, no. For one thing, it's sloooooow. all_dependencies, on my test databases, has over 110,000 rows in it, and the entire query, for which Oracle was creating a temporary table to hold the big bag of graph edges, was often taking upwards of two minutes. This is too long, and would only get worse for large databases. But it had some more fundamental problems than just performance.

    3. Comparison dependencies

    Consider the following schema:

        -- Source
        CREATE TABLE SchemaA.Table1 (
            Col1 NUMBER REFERENCES SchemaB.Table1(Col1));
        CREATE TABLE SchemaB.Table1 (
            Col1 NUMBER PRIMARY KEY);

        -- Target
        CREATE TABLE SchemaA.Table1 (
            Col1 VARCHAR2(100));
        CREATE TABLE SchemaB.Table1 (
            Col1 VARCHAR2(100));

    What will happen if we use the dependency algorithm above on the source & target databases? Well, SchemaA.Table1 has a foreign key reference to SchemaB.Table1, so that will be included in the source database population. On the target, SchemaA.Table1 has no such reference, therefore SchemaB.Table1 will not be included in the target database population. In the resulting comparison of the two object models, what you will end up with is:

        Source            Target
        SchemaA.Table1 -> SchemaA.Table1
        SchemaB.Table1 -> (no object exists)

    When this comparison is synchronized, we will see that SchemaB.Table1 does not exist, so we will try the following sequence of actions:

    1. Create SchemaB.Table1
    2. Rebuild SchemaA.Table1, with the foreign key to SchemaB.Table1

    Oops. Because the dependencies are only followed within a single database, we've tried to create an object that already exists. To fix this we can include any objects found as dependencies in the source or target databases in the object population of both databases. SchemaB.Table1 will then be included in the target database population, and we won't try to create objects that already exist. All good? Well, consider the following schema (again, only explicitly populating SchemaA, and synchronizing SchemaA.Table1):

        -- Source
        CREATE TABLE SchemaA.Table1 (
            Col1 NUMBER REFERENCES SchemaB.Table1(Col1));
        CREATE TABLE SchemaB.Table1 (
            Col1 NUMBER PRIMARY KEY);
        CREATE TABLE SchemaC.Table1 (
            Col1 NUMBER);

        -- Target
        CREATE TABLE SchemaA.Table1 (
            Col1 VARCHAR2(100));
        CREATE TABLE SchemaB.Table1 (
            Col1 VARCHAR2(100) PRIMARY KEY);
        CREATE TABLE SchemaC.Table1 (
            Col1 VARCHAR2(100) REFERENCES SchemaB.Table1);

    Although we're now including SchemaB.Table1 on both sides of the comparison, there's a third table (SchemaC.Table1) that we don't know about that will cause the rebuild of SchemaB.Table1 to fail if we try to synchronize SchemaA.Table1. That's because we're only running the dependency query on the schemas we're explicitly populating; to solve this issue, we would have to run the dependency query again, but this time starting the graph traversal from the objects found in the other database.
Furthermore, this dependency chain could be arbitrarily extended. This leads us to the following algorithm for finding all the dependencies of a comparison:

    1. Find the initial dependencies of the schemas the user has selected to compare, on the source and target
    2. Include these objects in both the source and target object populations
    3. Run the dependency query on the source, starting with the objects found as dependents on the target, and vice versa
    4. Repeat 2 & 3 until no more objects are found

    For the schema above, this will result in the following sequence of actions:

    1. Find initial dependencies: SchemaA.Table1 -> SchemaB.Table1 found on the source; no objects found on the target
    2. Include objects in both source and target: SchemaB.Table1 included in source and target
    3. Run the dependency query, starting with the found objects: no objects to start with on the source; SchemaB.Table1 -> SchemaC.Table1 found on the target
    4. Include objects in both source and target: SchemaC.Table1 included in source and target
    5. Run the dependency query on the found objects: no objects found on the source; no objects to start with on the target
    6. Stop

    This will ensure that we include all the necessary objects to make any synchronization work. However, there is still the issue of query performance; the CONNECT BY on the entire database dependency graph is still too slow. After much sitting down and drawing complicated diagrams, we decided to move the graph traversal algorithm from the server onto the client (which turned out to run much faster on the client than on the server); and to ensure we don't read the entire dependency graph onto the client, we also pull the graph across in bits: we start off with dependency edges involving schemas selected for explicit population, and whenever the graph traversal comes across a dependency reference to a schema we don't yet know about, a thunk is hit that pulls in the dependency information for that schema from the database. We continue passing more dependent objects back and forth between the source and target until no more dependency references are found. This gives us the list of all the extra objects to populate in the source and target, and object population can then proceed.
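
    To make the client-side traversal concrete, here is a minimal Java sketch of the idea (the names and the edge-loading callback are illustrative, not the actual Schema Compare code): a breadth-first walk over dependency edges, with a per-schema thunk that loads that schema's edges from the database the first time the walk reaches it:

        import java.util.*;
        import java.util.function.Function;

        final class DependencyWalker {
            // loadEdges: schema name -> (object -> objects it depends on),
            // i.e. one lazily-fetched slice of the dependency graph.
            static Set<String> closure(Set<String> startObjects,
                    Function<String, Map<String, List<String>>> loadEdges) {
                Map<String, List<String>> edges = new HashMap<>();
                Set<String> loadedSchemas = new HashSet<>();
                Set<String> seen = new HashSet<>(startObjects);
                Deque<String> queue = new ArrayDeque<>(startObjects);
                while (!queue.isEmpty()) {
                    String obj = queue.poll();                  // e.g. "SCHEMAA.TABLE1"
                    String schema = obj.substring(0, obj.indexOf('.'));
                    if (loadedSchemas.add(schema))              // the "thunk": first visit
                        edges.putAll(loadEdges.apply(schema));  // pulls that schema's edges
                    for (String dep : edges.getOrDefault(obj, Collections.emptyList()))
                        if (seen.add(dep)) queue.add(dep);
                }
                return seen;    // every object reachable through dependencies
            }
        }

    Running this alternately against the source and the target, seeding each run with the objects the other side just discovered, is the back-and-forth described in the numbered algorithm above.
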
4. Object blacklists and fast dependencies

    When we tested this solution, we were puzzled to find that in some of our databases most of the system schemas (WMSYS, ORDSYS, EXFSYS, XDB, etc.) were being pulled in, and this was increasing the database registration and comparison time quite significantly. After debugging, we discovered that the culprits were database tables that used one of the Oracle PL/SQL types (e.g. the SDO_GEOMETRY spatial type). These were creating a dependency chain from the database tables we were populating to the system schemas, and hence pulling in most of the system objects in that schema. To solve this we introduced blacklists of objects we wouldn't follow any dependency chain through. As well as the Oracle-supplied PL/SQL types (MDSYS.SDO_GEOMETRY and ORDSYS.SI_COLOR, among others) we also decided to blacklist the entire PUBLIC and SYS schemas, as any references to those would likely lead to a blow-up in the dependency graph that would massively increase the database registration time, and could result in the client running out of memory. Even with these improvements, each dependency query was taking upwards of a minute. We discovered from Oracle execution plans that there were some columns, with dependency information we required, that were querying system tables with no indexes on them! To cut a long story short, running the following query:

        SELECT * FROM all_tab_cols WHERE data_type_owner = 'XDB';

    results in a full table scan of the SYS.COL$ system table! This single clause was responsible for over half the execution time of the dependency query. Hence, the 'Ignore slow dependencies' option was born: not querying this and a couple of similar clauses drastically speeds up the dependency query execution time, at the expense of producing incorrect sync scripts in rare edge cases. Needless to say, along with the sync script action ordering, the dependency code in the database registration is one of the most complicated and most rewritten parts of the Schema Compare for Oracle engine. The beta of Schema Compare for Oracle is out now; if you find a bug in it, please do tell us so we can get it fixed!

    Read the article

  • How to blend multiple normal maps?

    - by János Turánszki
I want to achieve a distortion effect which distorts the full screen. For that I spawn a couple of images with normal maps. I render their normal map part on some camera-facing quads onto a render target which is cleared with the color (127,127,255,255). This color means that there is no distortion whatsoever. Then I want to render some images like this one onto it. If I draw one somewhere on the screen, it looks correct, because it blends in seamlessly with the background (which is the same color that appears on the edges of the image). If I draw another one on top of it, then it is no longer a seamless transition. For this I created a blend state in DirectX 11 that keeps the maximum of two colors, so it is now a seamless transition, but this way the colors lower than 127 (0.5f normalized) will not contribute. I am not making a simulation, and the effect looks quite convincing and nice for a game, but in my spare time I am thinking how I could achieve a nicer or more correct effect with a blend state, maybe averaging the colors somehow? If I did it with a shader, I would add the colors and then normalize them, but I need to combine an arbitrary number of images onto a render target. This is my blend state now, which blends them seamlessly but not correctly:

        D3D11_BLEND_DESC bd;
        bd.RenderTarget[0].BlendEnable = true;
        bd.RenderTarget[0].SrcBlend = D3D11_BLEND_SRC_ALPHA;
        bd.RenderTarget[0].DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
        bd.RenderTarget[0].BlendOp = D3D11_BLEND_OP_MAX;
        bd.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_ONE;
        bd.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_ZERO;
        bd.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_MAX;
        bd.RenderTarget[0].RenderTargetWriteMask = 0x0f;

    Is there any way of improving upon this? (PS: I considered rendering each one with a separate shader incrementally on top of the others, but that would consume a lot of render targets, which is unacceptable.)
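
    For the shader route, one widely used approximation (sometimes called "whiteout" blending) sums the XY tilts and multiplies the Z components. Unlike MAX, the flat color (127,127,255) is a true identity under it, so values below 0.5 still contribute, and any number of maps can be folded in one at a time. A sketch of the per-pixel math as plain Java (8-bit RGB-encoded normals):

        // Blend two normal-map texels: unpack [0,255] -> [-1,1], sum XY,
        // multiply Z, renormalize, repack. Flat (127,127,255) is an identity.
        static int[] blendNormals(int[] a, int[] b) {   // {r,g,b}, each 0..255
            float ax = a[0] / 127.5f - 1, ay = a[1] / 127.5f - 1, az = a[2] / 127.5f - 1;
            float bx = b[0] / 127.5f - 1, by = b[1] / 127.5f - 1, bz = b[2] / 127.5f - 1;
            float x = ax + bx, y = ay + by, z = az * bz;
            float len = (float) Math.sqrt(x * x + y * y + z * z);
            return new int[] { Math.round((x / len + 1) * 127.5f),
                               Math.round((y / len + 1) * 127.5f),
                               Math.round((z / len + 1) * 127.5f) };
        }

    The catch for the fixed-function path is the renormalize step, which no D3D11 blend op expresses; one pragmatic compromise is to additively accumulate the signed offsets into a float render target (so values below 0.5 can subtract) and renormalize in a single final full-screen pass, which needs only the one extra target.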

    Read the article

  • Laptop connected to external monitor results in desktop overflow

    - by Andrew Fox
I have a laptop (MSi FX600) connected to an external monitor (Toshiba 19AV500U), with the screen resolution set to the monitor's recommended setting of 1440x900. The problem is that the desktop is overflowing the display, meaning all edges of the desktop are cut off, so I cannot see the close and minimize buttons at the top, and I only see half of the taskbar at the bottom. I have tried many different resolutions, but none of them solve the problem. I have had this same problem when connecting to other external monitors in the past, but not all of them; it seems to be fairly random. I have tried to find a way to manually adjust the resolution, but I cannot see a way to do this in my settings. All Windows updates are up to date, I am connecting through an HDMI cable, and I have it set to display only on the external monitor, not an extended desktop.

    Read the article

  • Doubling the DPI with a shader?

    - by Mathias Lykkegaard Lorenzen
I'm developing a game where the map is generated with Perlin noise, but on the CPU. I generate the noise onto a texture with a small size, and then I stretch it out to the whole screen to simulate a map. The reason for generating the noise on the CPU is that I want it to look the same on all devices. Now, here's the end result. Please ignore the bullets and the explosion in the picture. What matters is the background (the black/gray pixels) and the ground (the brown-ish pixels); they are rendered to the same texture through Perlin noise. However, this doesn't look very pretty. So I was wondering if it would be possible to double the number of pixels using a shader, rounding edges at the same time? In other words, improve the DPI. I'm using SharpDX with DirectX 11, through its toolkit feature, but any help that leads me in the right direction (for instance through HLSL) would be a great help. Thanks in advance.
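
    One concrete candidate for "double the pixels and round the edges" is the Scale2x/EPX rule: each source pixel becomes a 2x2 block, and a corner copies a neighbour only when the two adjacent neighbours agree, which smooths staircase edges without blurring flat areas. It ports naturally to an HLSL pixel shader (four neighbour samples and a few comparisons); a CPU reference in Java for clarity:

        // src/dst are ARGB pixels, row-major; dst is (2w) x (2h).
        static int[] scale2x(int[] src, int w, int h) {
            int[] dst = new int[w * 2 * h * 2];
            for (int y = 0; y < h; y++)
                for (int x = 0; x < w; x++) {
                    int p = src[y * w + x];
                    int a = y > 0     ? src[(y - 1) * w + x] : p;  // above
                    int b = x < w - 1 ? src[y * w + x + 1]   : p;  // right
                    int c = x > 0     ? src[y * w + x - 1]   : p;  // left
                    int d = y < h - 1 ? src[(y + 1) * w + x] : p;  // below
                    int e0 = (c == a && c != d && a != b) ? a : p; // top-left
                    int e1 = (a == b && a != c && b != d) ? b : p; // top-right
                    int e2 = (d == c && d != b && c != a) ? c : p; // bottom-left
                    int e3 = (b == d && b != a && d != c) ? d : p; // bottom-right
                    int o = (y * 2) * (w * 2) + x * 2;
                    dst[o] = e0;          dst[o + 1] = e1;
                    dst[o + w * 2] = e2;  dst[o + w * 2 + 1] = e3;
                }
            return dst;
        }

    Exact colour equality is what the rule expects, which suits a hard-edged two-tone map like this; for noisier sources the comparisons would need a tolerance.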

    Read the article

  • How can I calculate a vertex normal for a hard edge?

    - by K.G.
Here is a picture of a lovely polygon. Circled is a vertex, and numbered are its adjacent faces. I have calculated the normals of those faces as such (not yet normalized, 0-indexed):

        Vertex 1 normal 0:  0.000000 0.000000 -0.250000
        Vertex 1 normal 1:  0.000000 0.000000 -0.250000
        Vertex 1 normal 2: -0.250000 0.000000  0.000000
        Vertex 1 normal 3: -0.250000 0.000000  0.000000
        Vertex 1 normal 4:  0.250000 0.000000  0.000000

    What I'm wondering is, how can I determine, taking as given that I want this vertex to represent a hard edge, whether its normal should be the normal of 1/2 or 3/4? My plan after I glanced at the sketch I used to put this together was "Ha! I'll just use whichever two faces have the same normal!", and now I see that there are two sets of two faces for which this is true. Is there a rule I can apply, based on the face winding, angle of the adjacent edges, moon phase, or coin flip, to consistently choose a normal direction for this box? For the record, all of the other polygons I plan to use will have their normals dictated in Maya, but after encountering this problem, it made me really curious.
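
    The standard way to remove the guesswork is a crease (smoothing) angle: group the faces around the vertex by normal similarity, average within each group, and emit one vertex copy per group. The vertex then carries the 1/2 normal on one copy and the 3/4 normal on another, and which faces share a copy is decided by the angle threshold rather than by a coin flip. A Java sketch (assumes unit-length inputs, so normalize those 0.25-length face normals first; the greedy grouping is order-dependent, which is fine for an illustration):

        import java.util.*;

        // One averaged normal per smoothing group around a single vertex.
        static List<float[]> vertexNormals(List<float[]> faceNormals, float creaseDegrees) {
            float cosCrease = (float) Math.cos(Math.toRadians(creaseDegrees));
            List<float[]> groups = new ArrayList<>();      // running sums, one per group
            for (float[] nrm : faceNormals) {
                float[] home = null;
                for (float[] g : groups) {
                    float len = (float) Math.sqrt(g[0]*g[0] + g[1]*g[1] + g[2]*g[2]);
                    float dot = (nrm[0]*g[0] + nrm[1]*g[1] + nrm[2]*g[2]) / len;
                    if (dot >= cosCrease) { home = g; break; } // within the crease angle
                }
                if (home == null) groups.add(nrm.clone());     // new hard-edge group
                else { home[0] += nrm[0]; home[1] += nrm[1]; home[2] += nrm[2]; }
            }
            for (float[] g : groups) {                         // normalize each average
                float len = (float) Math.sqrt(g[0]*g[0] + g[1]*g[1] + g[2]*g[2]);
                g[0] /= len; g[1] /= len; g[2] /= len;
            }
            return groups;
        }

    For this box, any crease threshold below 90 degrees puts faces 1/2 in one group and 3/4 in another, and the renderer indexes whichever vertex copy belongs to the face being drawn; this is essentially what Maya's soften/harden edge produces on export.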

    Read the article
