Search Results

Search found 3789 results on 152 pages for 'pixel tracking'.


  • A Knight's Tale

    - by Phil Factor
    There are so many lessons to be learned from the story of Knight Capital losing nearly half a billion dollars as a result of a deployment gone wrong. The Knight Capital Group (KCG N) was an American global financial services firm engaging in market making, electronic execution, and institutional sales and trading. According to the recent order (File No. 3-15570) against Knight Capital by the U.S. Securities and Exchange Commission, Knight had for many years used software which broke up incoming “parent” orders into smaller “child” orders that were then transmitted to various exchanges or trading venues for execution. A tracking ‘cumulative quantity’ function counted the number of ‘child’ orders and stopped the process once the total of child orders matched the ‘parent’, at which point the parent order had been completed. Back in the mists of time, some code had been added to it which was executed if a particular flag was set. It was called ‘Power Peg’ and seems to have had a similar design and purpose but, one guesses, would have shared the same tracking function. This code had been abandoned in 2003, but never deleted. In 2005, the tracking function was moved to an earlier point in the main process. It would seem from the account that, from that point, had that flag ever been set, the old ‘Power Peg’ would have been executed like Godzilla bursting from the ice, making child orders without limit and without any tracking function. It wasn’t, presumably because the software that set the flag was removed. In 2012, nearly a decade after ‘Power Peg’ was abandoned, Knight prepared a new module for its software to cope with the imminent Retail Liquidity Program (RLP) for the New York Stock Exchange. By this time, the flag had remained unused, and someone made the fateful decision to reuse it and to replace the old ‘Power Peg’ code with the new RLP code. Had the two actions been done together in a single automated deployment, and the new deployment tested, all would have been well. It wasn’t. To quote… “Beginning on July 27, 2012, Knight deployed the new RLP code in SMARS in stages by placing it on a limited number of servers in SMARS on successive days. During the deployment of the new code, however, one of Knight’s technicians did not copy the new code to one of the eight SMARS computer servers. Knight did not have a second technician review this deployment and no one at Knight realized that the Power Peg code had not been removed from the eighth server, nor the new RLP code added. Knight had no written procedures that required such a review.” (para 15) “On August 1, Knight received orders from broker-dealers whose customers were eligible to participate in the RLP. The seven servers that received the new code processed these orders correctly. However, orders sent with the repurposed flag to the eighth server triggered the defective Power Peg code still present on that server. As a result, this server began sending child orders to certain trading centers for execution. Because the cumulative quantity function had been moved, this server continuously sent child orders, in rapid sequence, for each incoming parent order without regard to the number of share executions Knight had already received from trading centers. 
Although one part of Knight’s order handling system recognized that the parent orders had been filled, this information was not communicated to SMARS.” (para 16) SMARS routed millions of orders into the market over a 45-minute period, and obtained over 4 million executions in 154 stocks for more than 397 million shares. By the time that Knight stopped sending the orders, Knight had assumed a net long position in 80 stocks of approximately $3.5 billion and a net short position in 74 stocks of approximately $3.15 billion. Knight’s shares dropped more than 20% after traders saw extreme volume spikes in a number of stocks, including preferred shares of Wells Fargo (JWF) and semiconductor company Spansion (CODE). Both stocks, which see roughly 100,000 trades per day, had changed hands more than 4 million times by late morning. Ultimately, Knight lost over $460 million from this wild 45 minutes of trading. Obviously, I’m interested in all this because, at one time, I used to write trading systems for the City of London. The US SEC is in a far better position than any of us to work out the failings of Knight’s IT department, and the report makes for painful reading. I can’t help observing, though, that even with the breathtaking mistakes all along the way, a robust automated deployment process that was ‘all-or-nothing’, and tested from soup to nuts, would have prevented the disaster. The report reads like a Greek tragedy. All the way along one wants to shout ‘No! Not that way!’ and ‘Aargh! Don’t do it!’. As the tragedy unfolds, the audience weeps for the players, trapped by a cruel fate. All application development and deployment requires defense in depth. All IT goes wrong occasionally, but if there is a culture of defensive programming throughout, the consequences are usually containable. For financial systems, these defenses are required by statute, and ignored only by the foolish. Knight’s mistakes weren’t made by just one hapless sysadmin, but were progressive errors by an IT culture spanning at least ten years. One can spell these out, but I think they’re obvious. One can only hope that the industry studies what happened in detail, learns from the mistakes, and draws the right conclusions.

    Read the article

  • Enterprise Manager 12c: New DSS Demos Available

    - by Javier Puerta
    Enterprise Manager Cloud Control 12c Application Replay Demo Now Available!
    User Experience Monitoring with Enterprise Manager Cloud Control 12c and Real User Experience Insight 12R1 Now Available!
    Oracle Enterprise Manager Cloud Control 12c: Database Management Packs demo upgrade

    Enterprise Manager Cloud Control 12c Application Replay Demo Now Available! We are pleased to announce the availability of the Oracle Application Replay demo that showcases some of the key capabilities of performing realistic, production-scale testing of your web and packaged Oracle applications. This demo specifically focuses on capturing production web traffic from an E-Business Suite application and replaying the captured workload on a test E-Business Suite application to assess the impact of an application infrastructure change on the workload. The target audiences are application developers, quality assurance teams, IT managers and production control staff that deal in day-to-day change management activities and troubleshooting of production environments. Demo Highlights:
    - Enterprise Manager 12c workflows for capturing application workload
    - Seamless integration of Application Replay with Real User Experience Insight for application workload capture
    - Enterprise Manager 12c centralized workflows for replaying captured application workloads in a test environment
    - Demonstrates how to minimize risk when deploying a complex E-Business Suite application infrastructure change
    - Rich reporting capability for performance analysis and problem detection

    User Experience Monitoring with Enterprise Manager Cloud Control 12c and Real User Experience Insight 12R1 Now Available! We are pleased to announce the availability of the Oracle Real User Experience Insight demo that showcases some of the key capabilities of user experience monitoring. This demo specifically focuses on business reporting, integrated performance diagnostics, tracking of customer journeys through RUEI’s userflow tracking capabilities, and its Key Performance Indicators tracking and configuration. Demo Highlights:
    - Application-centric dashboard
    - Integration with Oracle Enterprise Manager 12c – JVMD, ADP and BTM
    - Session diagnostics and user session replay
    - Monitoring through “Key Performance Indicators” (KPIs) - create alerts/incidents
    - FUSION Application-centric dashboards & integrated BI

    Oracle Enterprise Manager Cloud Control 12c: Database Management Packs demo upgrade. DSS is pleased to announce an upgrade to the Oracle Enterprise Manager Cloud Control 12c: Database Management Packs demo. While retaining the content from the initial release of the demo—Diagnostic and Tuning Packs, Test Data Management and Data Masking, and Real Application Testing—the demo now includes a new Data Masking for Real Application Testing scenario. Demo Features:
    - Diagnostic and Tuning Packs
    - SQL Performance Analyzer
    - Database Replay
    - Data Masking
    - Masking Real Application Testing workloads
    - Testing pending Optimizer statistics
    - Test Data Management

    Read the article

  • How to track many in-game statistics

    - by Alex Schearer
    I am looking to track many in-game events, e.g. the score of each move, how many moves are taken, what types of moves, etc. A lot of stats can simply be tracked with a counter. In some cases I need to aggregate data in order to calculate the value (e.g. most common move). How are you tracking in-game stats for your games? How do you avoid creating a class with tens or hundreds of fields? How do you avoid littering the code with tracking invocations? How do you abstract the aggregate data so as to avoid rewriting it for each scenario?
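
    One pattern that avoids a class with tens or hundreds of fields is a single tracker keyed by stat name, with per-stat reducers for the aggregate cases. A minimal sketch of that idea, assuming nothing about any particular engine (C++ purely for illustration; every name here is made up):

    #include <functional>
    #include <string>
    #include <unordered_map>
    #include <utility>
    #include <vector>

    // One tracker instead of hundreds of fields: counters and sums keyed by
    // name, plus optional per-stat reducers for aggregates (max, most common, ...).
    class StatsTracker {
    public:
        void Record(const std::string& name, double value = 1.0) {
            counts_[name] += 1;
            sums_[name] += value;
            for (auto& reducer : reducers_[name]) reducer(value);  // aggregates update themselves
        }
        void AddReducer(const std::string& name, std::function<void(double)> fn) {
            reducers_[name].push_back(std::move(fn));
        }
        long long Count(const std::string& name) const {
            auto it = counts_.find(name);
            return it == counts_.end() ? 0 : it->second;
        }
        double Sum(const std::string& name) const {
            auto it = sums_.find(name);
            return it == sums_.end() ? 0.0 : it->second;
        }
    private:
        std::unordered_map<std::string, long long> counts_;
        std::unordered_map<std::string, double> sums_;
        std::unordered_map<std::string, std::vector<std::function<void(double)>>> reducers_;
    };

    Call sites then stay one-liners, e.g. tracker.Record("move.score", score), and an aggregate like "most common move" becomes a reducer that keeps its own histogram, so adding a new stat never adds a field to any class.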

    Read the article

  • Analytics - how to tell where converted traffic came from?

    - by Eric
    I must be missing something obvious. I have Analytics set up with conversion tracking (goals), and I had 4 customers complete the goals yesterday. I'm trying to find out where those 4 customers came from (organic search? If organic, what keywords? etc.), but I can't figure out how to do that in AdWords. When I click into the goal tracking overview, I see my 4 customers, and it breaks down so I can see that 3 of them came from AdWords (cpc) and 1 of them came from organic. I'd like to know exactly which ads brought the traffic and which keywords on the organic search led them to me. How can I do this? It seems like a simple request... but I can't figure it out... thanks for your help!

    Read the article

  • Perform Better with Microsoft Project 2007 and Visio 2007. Now 20% OFF!

    To increase productivity, it is essential that your employees have easy access to productivity tools that make the best use of their resources. Microsoft Office Project 2007 and Microsoft Visio 2007 now work Better Together by:
    · Reducing repetitive tasks with diagrams that refresh automatically with Data Refresh and Data Connect
    · Tracking the source of issues quickly with the Track Drivers feature
    · Tracking budgets with the new Cost Resources field
    · Building ready-to-use reports with the Visual Reports engine
    · Sharing and managing documents on collaborative workspaces with Windows SharePoint Services
    For a limited time, you can now license the Microsoft Office Project 2007 system and the Microsoft Office Visio 2007 system under the No Better Time offer at a 20% discount for Open Value and a 15% discount for Desktop SKUs. Hurry, there’s No Better Time for you to buy Microsoft Office Project 2007 and Microsoft Office Visio 2007. Click here to view the Terms and Conditions.

    Read the article

  • Determining whether a visitor reached two different pages in one visit

    - by Shaun
    I have a funnel that I would like to track. Tracking this funnel won't work with the default "goal funnel" tracking in Google due to the fact that I am mixing events and pageviews. As such, I've created a series of reports:
    - Visits to demo pages - an inclusion filter on "Page".
    - Triggers an Event on these pages - an inclusion filter on "Page" and "Event Category".
    - Does not bounce - an inclusion filter on "Page" and an exclusion filter on "Exit Page" for these same pages.
    - Reach our storefront - ??
    - Purchase something - an inclusion filter on "Page" and a report that shows "Transactions".
    At a basic level, I need to track users who reached demo pages, then reached any page on our store. Intuitively, I created a segment, used two inclusive "Page" filters (one for the demo pages and one for any page in our store), and combined them with an "AND" operator. I thought this was working until I tried to do the same thing in a dashboard widget and on a custom report. When I tried the same thing in those areas, I got zero results. I figured this might be because widgets and custom report filters function differently from segment filters (the options are different for all of them), so I tried applying my "demo page && store page" segment to a report that gave me a general page list. All I saw was a list of the specific pages. I tried simplifying things by creating a custom report that showed all visits to store pages, then applied a segment that filtered for users who visited demo pages. This got me the same numbers as my "demo page && store page" segment, but showed a list of demo pages. This has led me to believe that the "demo page && store page" segment approach and the "demo segment && store report" approach behave the same functionally. However, this experience has left me questioning whether they're giving me what I want. Are these methods showing me all users who reached both sets of pages? Is there a better/easier/more standard way of doing this aside from looking at visitor flow reports? I'm trying to avoid a combination of custom variables/events and the horizontal funnel approach, since that would consume a large number of our limited goals and seems more complicated than is necessary for tracking this funnel.

    Read the article

  • AdWords traffic not (properly) reflected in Analytics

    - by CJM
    I have an AdWords account which was set to use auto-tagging of URLs. When looking at the Analytics account for that site, I couldn't find any reference to AdWords traffic either in the Advertising section or the Traffic Sources section. So I manually constructed the URL tags and updated the campaign ad. Once the ad was approved and the clicks started coming through again, I could see the results in the Traffic Sources section of Analytics. In the Sources > Campaigns section, my campaign was listed, and under Sources > All Traffic, it was registering the same level of traffic from google/adwords. However, the Advertising > AdWords section is still drawing a blank. Any ideas? Are there explicit steps needed to enable full tracking of AdWords campaigns? If it is relevant, the AdWords campaign was set up with one account and the Analytics tracking with another, but both accounts have full access to both AdWords and Analytics.
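
    For reference, a manually tagged destination URL of the kind described above generally looks something like this (the parameter values here are placeholders, not the asker's actual campaign names):

    http://www.example.com/landing-page?utm_source=google&utm_medium=cpc&utm_campaign=spring_sale&utm_term=some+keyword&utm_content=ad_variant_a

    Note that mixing manual utm_* tags with auto-tagging (the gclid parameter) is a common source of attribution oddities, so it is worth checking that only one of the two schemes is active at a time.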

    Read the article

  • Get exact time of each click - AdWords

    - by almo
    I have an AdWords campaign running and want to get a list of all clicks and the exact time each click occurred. And I want to do this without having a tracking code on my homepage, so it needs to be via the API (AdWords/Analytics). You might think that this is not possible. But big bid management tools (e.g. Kenshoo) know all the details of every click, and they don't have tracking code on my website (only on the success page of my contact form). How is that possible? The dimensions in AdWords only let me group the clicks at the hour level.

    Read the article

  • Google Analytics data missing for 4 days

    - by Shumaila
    I used the wrong tracking ID in my e-commerce tracking code for Google Analytics and missed the data for a few days. To add that missing data to my account, I have written a short script which manually sends all the order data for those four days to the GA account. My concern is the date: those orders were placed on different dates, and when I run my script it will send my missing data with the current date, which I do not want (I want to send the date when each order was actually placed). Can anyone help me with this? I am really stuck with my work here.

    Read the article

  • OpenLayers - LayerRedraw() / Feature rotation / LineString coords

    - by Ozaki
    TLDR: I have an OpenLayers map with a layer called 'track'. I want to remove 'track' and add it back in, or figure out how to plot a triangle based off one set of coords and a heading (see below). I have an image 'imageFeature' on a layer that rotates on load to the direction being set. I want it to update this rotation that is set in 'styleMap' on a layer called 'tracking'. I set the var 'styleMap' to apply the external image and rotation. The 'imageFeature' is added to the layer at the coords specified. The 'imageFeature' is removed. The 'imageFeature' is added again in its new location. Rotation is not applied. As the 'styleMap' applies to the layer, I think that I have to remove the layer and add it again rather than just the 'imageFeature'.
    The layer:
    var tracking = new OpenLayers.Layer.GML("Tracking", "coordinates.json", {
        format: OpenLayers.Format.GeoJSON,
        styleMap: styleMap
    });
    The styleMap:
    var styleMap = new OpenLayers.StyleMap({
        fillOpacity: 1,
        pointRadius: 10,
        rotation: heading
    });
    Now, wrapped in a timed function, the imageFeature:
    map.layers[3].addFeatures(new OpenLayers.Feature.Vector(
        new OpenLayers.Geometry.Point(longitude, latitude),
        {rotation: heading, type: parseInt(Math.random() * 3)}
    ));
    Type refers to a lookup of 1 of 3 images:
    styleMap.addUniqueValueRules("default", "type", lookup);
    var lookup = {
        0: {externalGraphic: "Image1.png", rotation: heading},
        1: {externalGraphic: "Image2.png", rotation: heading},
        2: {externalGraphic: "Image3.png", rotation: heading}
    }
    I have tried the redraw() function, but it returns "tracking is undefined" or "map.layers[2] is undefined":
    tracking.redraw(true);
    map.layers[2].redraw(true);
    Heading is a variable from a JSON feed:
    var heading = 13.542;
    But so far I can't get anything to work; it will only rotate the image on load. The image will move in coordinates as it should, though. So what am I doing wrong with the redraw function, or how can I get this image to rotate live? Thanks in advance -Ozaki
    Add: I managed to get map.layers[2].redraw(true); to successfully redraw layer 2, but it still does not update the rotation. I am thinking it is because of the styleMap. It runs through the styleMap every n seconds, but there are no updates to the rotation, and the heading variable is updating correctly if I put a watch on it in Firebug. If I were to draw a triangle with an array of points and a LineString, how would I go about facing the triangle towards the heading? I have the lon/lat of one point and the heading.
    var points = new Array(
        new OpenLayers.Geometry.Point(lon1, lat1),
        new OpenLayers.Geometry.Point(lon2, lat2),
        new OpenLayers.Geometry.Point(lon3, lat3)
    );
    var line = new OpenLayers.Geometry.LineString(points);
    Looking for any way to solve this problem, image or line. Anyone know how to do either? Added a 100 rep bounty; I am really stuck with this.
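
    Since the triangle fallback is just trigonometry, here is a hedged sketch in the same style as the code above, assuming heading is in degrees clockwise from north and size is a made-up edge length in map units:

    var rad = Math.PI / 180;
    var size = 0.001; // triangle size in map units (placeholder value)
    // Offset a point by a distance along a compass bearing. This is crude
    // planar math, fine for small shapes; it is not geodesically exact.
    function offset(lon, lat, bearingDeg, dist) {
        return new OpenLayers.Geometry.Point(
            lon + dist * Math.sin(bearingDeg * rad),
            lat + dist * Math.cos(bearingDeg * rad));
    }
    var points = new Array(
        offset(lon1, lat1, heading, size),        // nose points along the heading
        offset(lon1, lat1, heading + 150, size),  // rear-left corner
        offset(lon1, lat1, heading - 150, size)   // rear-right corner
    );
    var line = new OpenLayers.Geometry.LineString(points);

    Recomputing the three vertices from the updated heading each tick sidesteps the styleMap rotation problem entirely.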

    Read the article

  • Fragment shaders on a texture

    - by Snowangelic
    Hello Stack Overflow. I am trying to add some post-processing capabilities to a program. The rendering is done using OpenGL. I just want to allow the program to load some home-made fragment shaders and use them on the video stream. I wrote a little piece of shader using "OpenGL Shader Builder" that just turns a texture to grayscale. The shader works well in the shader builder, but I can't make it work in the main program. The screen stays all black. Here is the setup:
    @implementation PluginGLView
    - (id) initWithCoder: (NSCoder *) coder
    {
        const GLubyte * strExt;
        if ((self = [super initWithCoder:coder]) == nil)
            return nil;
        glLock = [[NSLock alloc] init];
        if (nil == glLock) {
            [self release];
            return nil;
        }
        // Init pixel format attribs
        NSOpenGLPixelFormatAttribute attrs[] = {
            NSOpenGLPFAAccelerated,
            NSOpenGLPFANoRecovery,
            NSOpenGLPFADoubleBuffer,
            0
        };
        // Get pixel format from OpenGL
        NSOpenGLPixelFormat* pixFmt = [[NSOpenGLPixelFormat alloc] initWithAttributes:attrs];
        if (!pixFmt) {
            NSLog(@"No Accelerated OpenGL pixel format found\n");
            NSOpenGLPixelFormatAttribute attrs2[] = {
                NSOpenGLPFANoRecovery,
                0
            };
            // Get pixel format from OpenGL
            pixFmt = [[NSOpenGLPixelFormat alloc] initWithAttributes:attrs2];
            if (!pixFmt) {
                NSLog(@"No OpenGL pixel format found!\n");
                [self release];
                return nil;
            }
        }
        [self setPixelFormat:[pixFmt autorelease]];
        /*
        long swapInterval = 1;
        [[self openGLContext] setValues:&swapInterval forParameter:NSOpenGLCPSwapInterval];
        */
        [glLock lock];
        [[self openGLContext] makeCurrentContext];
        // Init object members
        strExt = glGetString(GL_EXTENSIONS);
        texture_range = gluCheckExtension((const unsigned char *)"GL_APPLE_texture_range", strExt) ? GL_TRUE : GL_FALSE;
        texture_hint = GL_STORAGE_SHARED_APPLE;
        client_storage = gluCheckExtension((const unsigned char *)"GL_APPLE_client_storage", strExt) ? GL_TRUE : GL_FALSE;
        rect_texture = gluCheckExtension((const unsigned char *)"GL_EXT_texture_rectangle", strExt) ? GL_TRUE : GL_FALSE;
        // Setup some basic OpenGL stuff
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
        glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        // Loads the shaders
        shader = LoadShader(GL_FRAGMENT_SHADER, "/Users/alexandremathieu/fragment.fs");
        program = glCreateProgram();
        glAttachShader(program, shader);
        glLinkProgram(program);
        glUseProgram(program);
        [NSOpenGLContext clearCurrentContext];
        [glLock unlock];
        image_width = 1024;
        image_height = 512;
        image_depth = 16;
        image_type = GL_UNSIGNED_SHORT_1_5_5_5_REV;
        image_base = (GLubyte *) calloc(((IMAGE_COUNT * image_width * image_height) / 3) * 4, image_depth >> 3);
        if (image_base == nil) {
            [self release];
            return nil;
        }
        // Create and load textures for the first time
        [self loadTextures:GL_TRUE];
        // Init fps timer
        //gettimeofday(&cycle_time, NULL);
        drawBG = YES;
        // Call for a redisplay
        noDisplay = YES;
        PSXDisplay.Disabled = 1;
        [self setNeedsDisplay:true];
        return self;
    }
    And here is the "render screen" function, which basically... renders the screen.
    - (void)renderScreen
    {
        int bufferIndex = whichImage;
        glBindTexture(GL_TEXTURE_RECTANGLE_EXT, bufferIndex+1);
        glUseProgram(program);
        int loc = glGetUniformLocation(program, "texture");
        glUniform1i(loc, bufferIndex+1);
        glTexSubImage2D(GL_TEXTURE_RECTANGLE_EXT, 0, 0, 0, image_width, image_height, GL_BGRA, image_type, image[bufferIndex]);
        glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f);
        glVertex2f(-1.0f, 1.0f);
        glTexCoord2f(0.0f, image_height);
        glVertex2f(-1.0f, -1.0f);
        glTexCoord2f(image_width, image_height);
        glVertex2f(1.0f, -1.0f);
        glTexCoord2f(image_width, 0.0f);
        glVertex2f(1.0f, 1.0f);
        glEnd();
        [[self openGLContext] flushBuffer];
        [NSOpenGLContext clearCurrentContext];
        //[glLock unlock];
    }
    And finally, here's the shader:
    uniform sampler2DRect texture;
    void main()
    {
        vec4 color, texel;
        color = gl_Color;
        texel = texture2DRect(texture, gl_TexCoord[0].xy);
        color *= texel;
        // Begin Shader
        float gray = 0.0;
        gray += (color.r + color.g + color.b) / 3.0;
        color = vec4(gray, gray, gray, color.a);
        // End Shader
        gl_FragColor = color;
    }
    The loading and using of shaders works, since I am able to turn the screen all red with this shader:
    void main()
    {
        gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
    }
    If the shader contains a syntax error, I get an error message from the LoadShader function, etc. If I remove the use of the shader, everything works normally. I think the problem comes from the "passing the texture as a uniform parameter" thing. But these are my very first steps with OpenGL and I can't be sure of anything. Don't hesitate to ask for more info. Thank you Stack O.
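
    For what it's worth, one detail in renderScreen stands out: glUniform1i on a sampler uniform expects a texture unit index, not a texture object name, and since no glActiveTexture call appears here, unit 0 is the one in use. A hedged sketch of the likely fix, assuming the texture really is meant to live on the default unit:

    glActiveTexture(GL_TEXTURE0);                         // make texture unit 0 explicit
    glBindTexture(GL_TEXTURE_RECTANGLE_EXT, bufferIndex + 1);
    glUseProgram(program);
    int loc = glGetUniformLocation(program, "texture");
    glUniform1i(loc, 0);   // 0 = the texture *unit*, not the texture object id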

    Read the article

  • Drawing transparent glyphs on the HTML canvas

    - by Bertrand Le Roy
    The HTML canvas has a set of methods, createImageData and putImageData, that look like they will enable you to draw transparent shapes pixel by pixel. The data structures that you manipulate with these methods are pseudo-arrays of pixels, with four bytes per pixel: one byte for red, one for green, one for blue and one for alpha. This alpha byte makes one believe that you are going to be able to manage transparency, but that’s a lie. Here is a little script that attempts to overlay a simple generated pattern on top of a uniform background:
    var wrong = document.getElementById("wrong").getContext("2d");
    wrong.fillStyle = "#ffd42a";
    wrong.fillRect(0, 0, 64, 64);
    var overlay = wrong.createImageData(32, 32),
        data = overlay.data;
    fill(data);
    wrong.putImageData(overlay, 16, 16);
    where the fill method is setting the pixels in the lower-left half of the overlay to opaque red, and the rest to transparent black. And here’s how it renders: As you can see, the transparency byte was completely ignored. Or was it? In fact, what happens is more subtle. What happens is that the pixels from the image data, including their alpha byte, replaced the existing pixels of the canvas. So the alpha byte is not lost, it’s just that it wasn’t used by putImageData to combine the new pixels with the existing ones. This is in fact a clue to how to write a putImageData that works: we can first dump that image data into an intermediary canvas, and then compose that temporary canvas onto our main canvas. The method that we can use for this composition is drawImage, which works not only with image objects, but also with canvas objects.
    var right = document.getElementById("right").getContext("2d");
    right.fillStyle = "#ffd42a";
    right.fillRect(0, 0, 64, 64);
    var overlay = wrong.createImageData(32, 32),
        data = overlay.data;
    fill(data);
    var overlayCanvas = document.createElement("canvas");
    overlayCanvas.width = overlayCanvas.height = 32;
    overlayCanvas.getContext("2d").putImageData(overlay, 0, 0);
    right.drawImage(overlayCanvas, 16, 16);
    And there it is, a version of putImageData that works like it should always have.

    Read the article

  • Tessellation Texture Coordinates

    - by Stuart Martin
    Firstly, some info - I'm using DirectX 11, C++, and I'm a fairly good programmer, but new to tessellation and not a master graphics programmer. I'm currently implementing a tessellation system for a terrain model, but I have reached a snag. My current system produces a terrain model from a height map, complete with multiple texture coordinates, normals, binormals and tangents for rendering. Now, when I was using a simple vertex and pixel shader combination, everything worked perfectly, but since moving to include a hull and domain shader I'm slightly confused and getting strange results. My terrain is a high-detail model, but the textured results are very large patches of solid colour. My current setup passes the model data into the vertex shader, then through the hull into the domain, and then finally into the pixel shader for use in rendering. My only thought is that in my hull shader I pass the information into the domain shader per patch, and this is producing the large areas of solid colour because each patch has identical information. Lighting and normal data are also slightly off, but not as visibly as texturing. Below is a copy of my hull shader that does not work correctly, because I think the way that I am passing the data through is incorrect. If anyone can help me out by suggesting an alternative way to get the required data into the pixel shader, or by showing me the correct way to handle the data in the hull shader, I'd be very thankful!
    cbuffer TessellationBuffer
    {
        float tessellationAmount;
        float3 padding;
    };
    struct HullInputType
    {
        float3 position : POSITION;
        float2 tex : TEXCOORD0;
        float3 normal : NORMAL;
        float3 tangent : TANGENT;
        float3 binormal : BINORMAL;
        float2 tex2 : TEXCOORD1;
    };
    struct ConstantOutputType
    {
        float edges[3] : SV_TessFactor;
        float inside : SV_InsideTessFactor;
    };
    struct HullOutputType
    {
        float3 position : POSITION;
        float2 tex : TEXCOORD0;
        float3 normal : NORMAL;
        float3 tangent : TANGENT;
        float3 binormal : BINORMAL;
        float2 tex2 : TEXCOORD1;
        float4 depthPosition : TEXCOORD2;
    };
    ConstantOutputType ColorPatchConstantFunction(InputPatch<HullInputType, 3> inputPatch, uint patchId : SV_PrimitiveID)
    {
        ConstantOutputType output;
        output.edges[0] = tessellationAmount;
        output.edges[1] = tessellationAmount;
        output.edges[2] = tessellationAmount;
        output.inside = tessellationAmount;
        return output;
    }
    [domain("tri")]
    [partitioning("integer")]
    [outputtopology("triangle_cw")]
    [outputcontrolpoints(3)]
    [patchconstantfunc("ColorPatchConstantFunction")]
    HullOutputType ColorHullShader(InputPatch<HullInputType, 3> patch, uint pointId : SV_OutputControlPointID, uint patchId : SV_PrimitiveID)
    {
        HullOutputType output;
        output.position = patch[pointId].position;
        output.tex = patch[pointId].tex;
        output.tex2 = patch[pointId].tex2;
        output.normal = patch[pointId].normal;
        output.tangent = patch[pointId].tangent;
        output.binormal = patch[pointId].binormal;
        return output;
    }
    Edited to include the domain shader:
    [domain("tri")]
    PixelInputType ColorDomainShader(ConstantOutputType input, float3 uvwCoord : SV_DomainLocation, const OutputPatch<HullOutputType, 3> patch)
    {
        float3 vertexPosition;
        PixelInputType output;
        // Determine the position of the new vertex.
        vertexPosition = uvwCoord.x * patch[0].position + uvwCoord.y * patch[1].position + uvwCoord.z * patch[2].position;
        output.position = mul(float4(vertexPosition, 1.0f), worldMatrix);
        output.position = mul(output.position, viewMatrix);
        output.position = mul(output.position, projectionMatrix);
        output.depthPosition = output.position;
        output.tex = patch[0].tex;
        output.tex2 = patch[0].tex2;
        output.normal = patch[0].normal;
        output.tangent = patch[0].tangent;
        output.binormal = patch[0].binormal;
        return output;
    }
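
    A note on the likely cause, hedged since only part of the pipeline is shown: the domain shader above interpolates position across the patch with the barycentric uvwCoord, but takes tex, normal and the other attributes from patch[0] only, so every vertex generated within a patch shares a single texture coordinate - which would produce exactly these large single-colour patches. The usual fix is to blend every attribute the same way as position:

    output.tex    = uvwCoord.x * patch[0].tex    + uvwCoord.y * patch[1].tex    + uvwCoord.z * patch[2].tex;
    output.tex2   = uvwCoord.x * patch[0].tex2   + uvwCoord.y * patch[1].tex2   + uvwCoord.z * patch[2].tex2;
    output.normal = normalize(uvwCoord.x * patch[0].normal + uvwCoord.y * patch[1].normal + uvwCoord.z * patch[2].normal);
    // ... and likewise for tangent and binormal.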

    Read the article

  • Generate texture for a heightmap

    - by James
    I've recently been trying to blend multiple textures based on the height at different points in a heightmap. However, I've been getting poor results. I decided to backtrack and just attempt to recreate one single texture from an SDL_Surface (I'm using SDL) and send that into OpenGL. I'll put my code for creating the texture and reading the colour values. It is a 24-bit TGA I'm loading, and I've confirmed that the rest of my code works, because I was able to send the surface's pixels directly to my createTextureFromData function and it drew fine.
    struct RGBColour
    {
        RGBColour() : r(0), g(0), b(0) {}
        RGBColour(unsigned char red, unsigned char green, unsigned char blue) : r(red), g(green), b(blue) {}
        unsigned char r;
        unsigned char g;
        unsigned char b;
    };
    // main loading code
    SDLSurfaceReader* reader = new SDLSurfaceReader(m_renderer);
    reader->readSurface("images/grass.tga");
    // new texture
    unsigned char* newTexture = new unsigned char[reader->m_surface->w * reader->m_surface->h * 3 * reader->m_surface->w];
    for (int y = 0; y < reader->m_surface->h; y++)
    {
        for (int x = 0; x < reader->m_surface->w; x += 3)
        {
            int index = (y * reader->m_surface->w) + x;
            RGBColour colour = reader->getColourAt(x, y);
            newTexture[index] = colour.r;
            newTexture[index + 1] = colour.g;
            newTexture[index + 2] = colour.b;
        }
    }
    unsigned int id = m_renderer->createTextureFromData(newTexture, reader->m_surface->w, reader->m_surface->h, RGB);
    // functions for reading pixels
    RGBColour SDLSurfaceReader::getColourAt(int x, int y)
    {
        Uint32 pixel;
        Uint8 red, green, blue;
        RGBColour rgb;
        pixel = getPixel(m_surface, x, y);
        SDL_LockSurface(m_surface);
        SDL_GetRGB(pixel, m_surface->format, &red, &green, &blue);
        SDL_UnlockSurface(m_surface);
        rgb.r = red;
        rgb.b = blue;
        rgb.g = green;
        return rgb;
    }
    // this function taken from SDL documentation
    // http://www.libsdl.org/cgi/docwiki.cgi/Introduction_to_SDL_Video#getpixel
    Uint32 SDLSurfaceReader::getPixel(SDL_Surface* surface, int x, int y)
    {
        int bpp = m_surface->format->BytesPerPixel;
        Uint8 *p = (Uint8*)m_surface->pixels + y * m_surface->pitch + x * bpp;
        switch (bpp)
        {
        case 1:
            return *p;
        case 2:
            return *(Uint16*)p;
        case 3:
            if (SDL_BYTEORDER == SDL_BIG_ENDIAN)
                return p[0] << 16 | p[1] << 8 | p[2];
            else
                return p[0] | p[1] << 8 | p[2] << 16;
        case 4:
            return *(Uint32*)p;
        default:
            return 0;
        }
    }
    I've been stumped by this, and I need help badly! Thanks so much for any advice.
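
    Looking at the copy loop, the indexing is the suspect part (hedged, since not all of the code is shown): x advances by 3 even though it is a pixel coordinate, and the byte offset is computed as y * w + x rather than (y * w + x) * 3, so the colour channels overwrite each other and only a third of each row is ever visited. For a tightly packed 24-bit buffer one would expect something like:

    unsigned char* newTexture = new unsigned char[reader->m_surface->w * reader->m_surface->h * 3];
    for (int y = 0; y < reader->m_surface->h; y++)
    {
        for (int x = 0; x < reader->m_surface->w; x++)        // step one pixel at a time
        {
            int index = (y * reader->m_surface->w + x) * 3;   // 3 bytes per pixel
            RGBColour colour = reader->getColourAt(x, y);
            newTexture[index]     = colour.r;
            newTexture[index + 1] = colour.g;
            newTexture[index + 2] = colour.b;
        }
    }

    (If the surface has row padding, SDL's pitch and OpenGL's GL_UNPACK_ALIGNMENT need the same care.)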

    Read the article

  • Getting the number of fragments which passed the depth test

    - by Etan
    In "modern" environments, the "NV Occlusion Query" extension provides a method to get the number of fragments which passed the depth test. However, on the iPad / iPhone using OpenGL ES, the extension is not available. What is the most performant approach to implement a similar behaviour in the fragment shader? Some of my ideas: Render the object completely in white, then count all the colors together using a two-pass shader where first a vertical line is rendered and for each fragment the shader computes the sum over the whole row. Then, a single vertex is rendered whose fragment sums all the partial sums of the first pass. Doesn't seem to be very efficient. Render the object completely in white over a black background. Downsample recursively, abusing the hardware linear interpolation between textures until being at a reasonably small resolution. This leads to fragments which have a greyscale level depending on the number of white pixels where in their corresponding region. Is this even accurate enough? Use mipmaps and simply read the pixel on the 1x1 level. Again the question of accuracy and if it is even possible using non-power-of-two textures. The problem wit these approaches is, that the pipeline gets stalled which results in major performance issues. Therefore, I'm looking for a more performant way to accomplish my goal. Using the EXT_OCCLUSION_QUERY_BOOLEAN extension Apple introduced EXT_OCCLUSION_QUERY_BOOLEAN in iOS 5.0 for iPad 2. "4.1.6 Occlusion Queries Occlusion queries use query objects to track the number of fragments or samples that pass the depth test. An occlusion query can be started and finished by calling BeginQueryEXT and EndQueryEXT, respectively, with a target of ANY_SAMPLES_PASSED_EXT or ANY_SAMPLES_PASSED_CONSERVATIVE_EXT. When an occlusion query is started with the target ANY_SAMPLES_PASSED_EXT, the samples-boolean state maintained by the GL is set to FALSE. While that occlusion query is active, the samples-boolean state is set to TRUE if any fragment or sample passes the depth test. When the occlusion query finishes, the samples-boolean state of FALSE or TRUE is written to the corresponding query object as the query result value, and the query result for that object is marked as available. If the target of the query is ANY_SAMPLES_PASSED_CONSERVATIVE_EXT, an implementation may choose to use a less precise version of the test which can additionally set the samples-boolean state to TRUE in some other implementation dependent cases." The first sentence hints on a behavior which is exactly what I'm looking for: getting the number of pixels which passed the depth test in an asynchronous manner without much performance loss. However, the rest of the document describes only how to get boolean results. Is it possible to exploit this extension to get the pixel count? Does the hardware support it so that there may be hidden API to get access to the pixel count? Other extensions which could be exploitable would be debugging features like the number of times the fragment shader was invoked (PSInvocations in DirectX - not sure if something simila is available in OpenGL ES). However, this would also result in a pipeline stall.

    Read the article

  • Projective texture and deferred lighting

    - by Vodácek
    In my previous question, I asked whether it is possible to do projective texturing with deferred lighting. Now (more than half a year later) I have a problem with my implementation of the same thing. I am trying to apply this technique in the light pass (my projector doesn't affect albedo). I have this projector view and projection matrix:
    Matrix projection = Matrix.CreateOrthographicOffCenter(-halfWidth * Scale, halfWidth * Scale, -halfHeight * Scale, halfHeight * Scale, 1, 100000);
    Matrix view = Matrix.CreateLookAt(Position, Target, Vector3.Up);
    where halfWidth and halfHeight are half of the texture's width and height, Position is the projector's position and Target is the projector's target. This seems to be OK. I am drawing a full-screen quad with this shader:
    float4x4 InvViewProjection;
    texture2D DepthTexture;
    texture2D NormalTexture;
    texture2D ProjectorTexture;
    float4x4 ProjectorViewProjection;
    sampler2D depthSampler = sampler_state
    {
        texture = <DepthTexture>;
        minfilter = point;
        magfilter = point;
        mipfilter = point;
    };
    sampler2D normalSampler = sampler_state
    {
        texture = <NormalTexture>;
        minfilter = point;
        magfilter = point;
        mipfilter = point;
    };
    sampler2D projectorSampler = sampler_state
    {
        texture = <ProjectorTexture>;
        AddressU = Clamp;
        AddressV = Clamp;
    };
    float viewportWidth;
    float viewportHeight;
    // Calculate the 2D screen position of a 3D position
    float2 postProjToScreen(float4 position)
    {
        float2 screenPos = position.xy / position.w;
        return 0.5f * (float2(screenPos.x, -screenPos.y) + 1);
    }
    // Calculate the size of one half of a pixel, to convert between texels and pixels
    float2 halfPixel()
    {
        return 0.5f / float2(viewportWidth, viewportHeight);
    }
    struct VertexShaderInput
    {
        float4 Position : POSITION0;
    };
    struct VertexShaderOutput
    {
        float4 Position : POSITION0;
        float4 PositionCopy : TEXCOORD1;
    };
    VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
    {
        VertexShaderOutput output;
        output.Position = input.Position;
        output.PositionCopy = output.Position;
        return output;
    }
    float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
    {
        float2 texCoord = postProjToScreen(input.PositionCopy) + halfPixel();
        // Extract the depth for this pixel from the depth map
        float4 depth = tex2D(depthSampler, texCoord);
        //return float4(depth.r, 0, 0, 1);
        // Recreate the position with the UV coordinates and depth value
        float4 position;
        position.x = texCoord.x * 2 - 1;
        position.y = (1 - texCoord.y) * 2 - 1;
        position.z = depth.r;
        position.w = 1.0f;
        // Transform position from screen space to world space
        position = mul(position, InvViewProjection);
        position.xyz /= position.w;
        // Compute projection
        float3 projection = tex2D(projectorSampler, postProjToScreen(mul(position, ProjectorViewProjection)) + halfPixel());
        return float4(projection, 1);
    }
    In the first part of the pixel shader, the position is recovered from the G-buffer (I am using this code in other shaders without any problem) and then transformed into projector view-projection space. The problem is that the projection doesn't appear. Here is an image of my situation: the green lines are the rendered projector frustum. Where is my mistake hidden? I am using XNA 4. Thanks for any advice, and sorry for my English. EDIT: The shader above is working, but the projection was too small. When I changed the Scale property to a large value (e.g. 100), the projection appears. But when the camera moves toward the projection, the projection expands, as can be seen in this YouTube video.

    Read the article

  • Android Canvas Coordinate System

    - by Mitch
    I'm trying to find information on how to change the coordinate system for the canvas. I have some vector data I'd like to draw to a canvas using things like circles and lines, but the data's coordinate system doesn't match the canvas coordinate system. Is there a way to map the units I'm using to the screen's units? I'm drawing to an ImageView which isn't taking up the entire display. If I have to do my own calculations prior to each drawing call, how do I find the width and height of my ImageView? The getWidth() and getHeight() calls I tried seem to be returning the entire canvas size and not the size of the ImageView, which isn't helpful. I see some matrix stuff; is that something that will work for me? I tried to use the "public void scale(float sx, float sy)" method, but that works more like a pixel-level zoom rather than a vector scale function, by expanding each pixel. This means that if the dimensions are increased to fit the screen, the line thickness is also increased. Update: After some research, I'm starting to think there's no way to change coordinate systems to something else. I'll need to map all my coordinates to the screen's pixel coordinates, and do so by modifying each vector. The getWidth() and getHeight() calls seem to be working better for me now. I can't say what was wrong, but I suspect I can't use these methods inside the constructor.
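
    For what it's worth, the manual mapping described in the update is just a pair of linear transforms. Assuming the data's bounding box (minX..maxX, minY..maxY) is known, each point maps as:

    screenX = (dataX - minX) / (maxX - minX) * viewWidth
    screenY = viewHeight - (dataY - minY) / (maxY - minY) * viewHeight

    with the second line flipped because screen Y grows downward while most vector data has Y growing upward. Applying this per point, rather than scaling the canvas, keeps line thickness constant.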

    Read the article

  • File Format Conversion to TIFF. Some issues???

    - by Nains
    I have a proprietary image format, SNG, which has a continuous array of image data along with image meta information in a separate HDR file. Now I need to convert this SNG format to the standard TIFF 6.0 format. So I studied the TIFF format, i.e. its header, Image File Directories (IFDs) and stripped image data. Now I have a few concerns about this conversion. Please assist me.
    SNG continuous data vs. TIFF stripped data: Should I convert the SNG data to TIFF as continuous data in one strip (a data load/edit time problem?) OR make logical StripOffsets of the SNG image data?
    The SNG data header uses only the necessary meta information, so while converting SNG to TIFF, some information can't be retrieved, such as NewSubFileType, the Software tag, etc. This raises the concern of whether, after conversion, any missing directory information such as NewSubFileType or the Software tag is a necessary and sufficient condition for a valid TIFF file.
    Encoding of each pixel component of an RGB sample in SNG data: here each SNG image data strip, per pixel component, is encoded as:
    Out^[i] := round( LineBuffer^[i * 3] * 0.072169 + LineBuffer^[i * 3 + 1] * 0.715160 + LineBuffer^[i * 3 + 2] * 0.212671);
    The only thing I can deduce from this is that each pixel is represented with 3 RGB components, and some coefficient is multiplied with each component to make the SNG viewer work with the RGB color information of the SNG image data. (The developer who earlier worked on this has left; now I am following the trace :)) Thus, while converting this to TIFF, the same decoding needs to be done. This raises the concern of how the RGB information in TIFF is produced, or better, whether we need this information at all. Please assist...
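
    A note that may help decode that line: 0.212671, 0.715160 and 0.072169 are the standard ITU-R BT.709 luma coefficients, so the expression is simply computing luminance:

    Y = 0.2126 * R + 0.7152 * G + 0.0722 * B

    with the components evidently stored in B, G, R order (the blue weight multiplies LineBuffer^[i * 3]). In other words, the SNG pipeline appears to be reducing RGB to a single grayscale value per pixel, which matters when choosing the PhotometricInterpretation for the output TIFF.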

    Read the article

  • Editing 8bpp indexed Bitmaps

    - by Pedro Sá
    Hi, I'm trying to edit the pixels of an 8bpp bitmap. Since this PixelFormat is indexed, I'm aware that it uses a color table to map the pixel values. Even though I can edit the bitmap by converting it to 24bpp, 8bpp editing is much faster (13ms vs 3ms). But changing each value when accessing the 8bpp bitmap results in some random RGB colors, even though the PixelFormat remains 8bpp. I'm currently developing in C#, and the algorithm is as follows:
    (C#)
    1. Load the original bitmap at 8bpp.
    2. Create an empty temp bitmap at 8bpp with the same size as the original.
    3. LockBits on both bitmaps and, using P/Invoke, call a C++ method where I pass the Scan0 of each BitmapData object. (I used a C++ method as it offers better performance when iterating through the bitmap's pixels.)
    (C++)
    4. Create an int[256] palette according to some parameters and edit the temp bitmap's bytes by passing the original's pixel values through the palette.
    (C#)
    5. UnlockBits.
    My question is: how can I edit the pixel values without getting the strange RGB colors, or even better, edit the 8bpp bitmap's color table? Regards, Pedro

    Read the article

  • Using SendKeys in Python to press {F12} results in other keys being pressed?

    - by ThantiK
    import time
    from ctypes import *
    import win32gui
    import win32com.client as comclt

    X = 119
    Y = 53

    def PILColorToRGB(pil_color):
        """ convert a PIL-compatible integer into an (r, g, b) tuple """
        hexstr = '%06x' % pil_color
        # reverse byte order
        r, g, b = hexstr[4:], hexstr[2:4], hexstr[:2]
        r, g, b = [int(n, 16) for n in (r, g, b)]
        return (r, g, b)

    wsh = comclt.Dispatch("WScript.Shell")
    w = win32gui
    user = windll.LoadLibrary("c:\\windows\\system32\\user32.dll")
    h = user.GetDC(0)
    gdi = windll.LoadLibrary("c:\\windows\\system32\\gdi32.dll")

    while True:
        FG = w.GetWindowText(w.GetForegroundWindow())  # FG = foreground window title.
        if FG == "World of Warcraft":
            rgb = PILColorToRGB(gdi.GetPixel(h, X, Y))  # X, Y
            time.sleep(0.333)  # don't check too often.
            if (rgb[0] >= 130):  # while pixel (X, Y) is red...
                # print "%d %d %d" % (rgb[0], rgb[1], rgb[2])  # debug
                wsh.SendKeys("{F12}")  # send a key.
                time.sleep(0.7)  # add some extra down-time if we send the key.
        else:
            time.sleep(5)

    Basically, all this code does is read a pixel on the screen and send a key (F12) if the pixel is red. But when using this code I regularly get some phantom key-code being pressed. The application I'm using this on is obviously World of Warcraft, and I have checked that all keybinds are standard keybinds. However, randomly it seems I get either an up arrow or a 'w' pressed, which moves my character forward whenever this code executes (F12 is bound to a macro, unbound from any movement). If I press F12 with a hardware event, it does not exhibit this behavior. What in the world could be going on here?

    Read the article

  • How to scale a sprite image without losing color key information?

    - by Michael P
    Hello everyone, I'm currently developing a simple application that displays a map and draws some markers on it. I'm developing for Windows Mobile, so I decided to use the DirectDraw and Imaging interfaces to make the application fast and pretty. The map moves when the user moves a finger on the touchscreen, so the whole map moving/scrolling animation has to be fast, but it is not. On every map update I have to draw a portion of the map, control buttons, and markers - buttons and markers are preloaded on a DirectDraw surface as a mipmap. So the only thing I do is BitBlt from the mipmap to a back buffer, and from the back buffer to the primary surface (I can't use page flipping due to the windowed mode of my application). Previously I used a premultiplied-alpha surface with a 32-bit ARGB pixel format for the images mipmap; everything was looking good, but drawing the entire "scene" was horribly slow - I could forget about smooth map scrolling. Now I'm using a mipmap with the native (RGB565) pixel format and a fuchsia (0xFF00FF) color key. Drawing is much better. My mipmap surface is generated at program load - images are loaded from files, scaled (with filtering) and drawn on the mipmap. The problem is that the image scaling process blends pixel colors, and those pixels which are on the border of a sprite region are blended with the surrounding fuchsia pixels, resulting in a semi-fuchsia color that is not treated as the color key. When I do blitting with the color key option, sprites have small fuchsia-like borders, and it looks really bad. How do I solve this problem? I can use alpha blitting, but it is too slow - even in ARGB 1555 format.
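
    One way out, sketched below under the assumption that the load-time scaling is done in your own code: scale keyed sprites with point sampling (nearest neighbour) instead of a blending filter, so no output pixel is ever a mix of sprite colour and fuchsia. The edges come out rougher, but the key stays exact. (The buffer layout here is an illustrative packed 16-bit array, not a real DirectDraw call.)

    // Hedged sketch: nearest-neighbour scaling of a colour-keyed sprite.
    // src and dst are tightly packed RGB565 buffers; names are illustrative.
    void ScaleKeyedSprite(const unsigned short* src, int srcW, int srcH,
                          unsigned short* dst, int dstW, int dstH)
    {
        for (int y = 0; y < dstH; ++y) {
            int sy = y * srcH / dstH;              // pick one source row, no blending
            for (int x = 0; x < dstW; ++x) {
                int sx = x * srcW / dstW;          // pick one source pixel
                dst[y * dstW + x] = src[sy * srcW + sx];  // key pixels stay pure fuchsia
            }
        }
    }

    If filtered scaling is a must, the alternative is to bleed each sprite's edge colours outward into the keyed region before filtering and re-key the border pixels afterwards, but point sampling is the simpler fix.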

    Read the article
