Search Results

Search found 29477 results on 1180 pages for 'complex script rendering'.

Page 29/1180 | < Previous Page | 25 26 27 28 29 30 31 32 33 34 35 36  | Next Page >

  • Complex Event Processing and SQL in London next week

    - by simonsabin
    Don’t forget that the StreamInsight team is coming to London and will be presenting at a SQL Social event on the 9th of June. StreamInsight is one of the exciting new features in SQL Server 2008 R2. There are numerous uses for StreamInsight, one being algorithmic trading, an exciting topic in the banking sector. For details of what StreamInsight is, go to the team’s blog http://blogs.msdn.com/streaminsight/archive/2010/04/22/rtm.aspx and follow some of the links. For more details of the SQL Social...(read more)

    Read the article

  • C++ OpenGL wireframe cube rendering blank

    - by caleb.breckon
    I'm just trying to draw a bunch of lines that make up a "cube". I can't for the life of me figure out why this is producing a black screen. The debugger does not break at any point. I'm sure it's a problem with my pointers, as I'm only decent at them in regular c++ and in OpenGL it gets even worse. const char* vertexSource = "#version 150\n" "in vec3 position;" "void main() {" " gl_Position = vec4(position, 1.0);" "}"; const char* fragmentSource = "#version 150\n" "out vec4 outColor;" "void main() {" " outColor = vec4(1.0, 1.0, 1.0, 1.0);" "}"; int main() { initializeGLFW(); // Initialize GLEW glewExperimental = GL_TRUE; glewInit(); // Create Vertex Array Object GLuint vao; glGenVertexArrays(1, &vao); glBindVertexArray(vao); // Create a Vertex Buffer Object and copy the vertex data to it GLuint vbo; glGenBuffers( 1, &vbo ); float vertices[] = { 1.0f, 1.0f, 1.0f, // Vertex 0 (X, Y, Z) -1.0f, 1.0f, 1.0f, // Vertex 1 (X, Y, Z) -1.0f, -1.0f, 1.0f, // Vertex 2 (X, Y, Z) 1.0f, -1.0f, 1.0f, // Vertex 3 (X, Y, Z) 1.0f, 1.0f, -1.0f, // Vertex 4 (X, Y, Z) -1.0f, 1.0f, -1.0f, // Vertex 5 (X, Y, Z) -1.0f, -1.0f, -1.0f, // Vertex 6 (X, Y, Z) 1.0f, -1.0f, -1.0f // Vertex 7 (X, Y, Z) }; GLuint indices[] = { 0, 1, 1, 2, 2, 3, 3, 0, 4, 5, 5, 6, 6, 7, 7, 4, 0, 4, 1, 5, 2, 6, 3, 7 }; glBindBuffer( GL_ARRAY_BUFFER, vbo ); glBufferData( GL_ARRAY_BUFFER, sizeof( vertices ), vertices, GL_STATIC_DRAW ); //glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, vbo); //glBufferData( GL_ELEMENT_ARRAY_BUFFER, sizeof( indices ), indices, GL_STATIC_DRAW ); // Create and compile the vertex shader GLuint vertexShader = glCreateShader( GL_VERTEX_SHADER ); glShaderSource( vertexShader, 1, &vertexSource, NULL ); glCompileShader( vertexShader ); // Create and compile the fragment shader GLuint fragmentShader = glCreateShader( GL_FRAGMENT_SHADER ); glShaderSource( fragmentShader, 1, &fragmentSource, NULL ); glCompileShader( fragmentShader ); // Link the vertex and fragment shader into a shader program GLuint shaderProgram = glCreateProgram(); glAttachShader( shaderProgram, vertexShader ); glAttachShader( shaderProgram, fragmentShader ); glBindFragDataLocation( shaderProgram, 0, "outColor" ); glLinkProgram (shaderProgram); glUseProgram( shaderProgram); // Specify the layout of the vertex data GLint posAttrib = glGetAttribLocation( shaderProgram, "position" ); glEnableVertexAttribArray( posAttrib ); glVertexAttribPointer( posAttrib, 3, GL_FLOAT, GL_FALSE, 0, 0 ); // Main loop while(glfwGetWindowParam(GLFW_OPENED)) { // Clear the screen to black glClearColor( 0.0f, 0.0f, 0.0f, 1.0f ); glClear( GL_COLOR_BUFFER_BIT ); // Draw lines from 2 vertices glDrawElements(GL_LINES, sizeof(indices), GL_UNSIGNED_INT, indices ); // Swap buffers glfwSwapBuffers(); } // Clean up glDeleteProgram( shaderProgram ); glDeleteShader( fragmentShader ); glDeleteShader( vertexShader ); //glDeleteBuffers( 1, &ebo ); glDeleteBuffers( 1, &vbo ); glDeleteVertexArrays( 1, &vao ); glfwTerminate(); exit( EXIT_SUCCESS ); }
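    One thing to compare against (a sketch meant to slot into the posted main(), not a guaranteed fix for the black screen): with a VAO bound in a core profile, index data generally has to live in a GL_ELEMENT_ARRAY_BUFFER rather than be passed as a client-side pointer, and glDrawElements expects an index count, whereas sizeof(indices) is a byte count.

        // Create an element buffer for the cube's line indices; the binding is
        // recorded in the currently bound VAO.
        GLuint ebo;
        glGenBuffers(1, &ebo);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

        // ...then, inside the main loop:
        glDrawElements(GL_LINES,
                       sizeof(indices) / sizeof(indices[0]),  // number of indices (24), not bytes
                       GL_UNSIGNED_INT,
                       nullptr);                              // byte offset into the bound EBO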

    Read the article

  • Rendering oily/polluted water?

    - by Fraser
    Any shader wizards out there have an idea of how to achieve an oily/polluted water effect, similar to this: Ideally, the water would not be uniformly oily, but instead the oil could be generated from some source (such as a polluting drain from a chemical plant) and then diffuse throughout the water body. My thought for this part would be to keep an "oil map" as a 2D texture that determines the density of oil at each point on the water surface. It would diffuse and move naturally with the water velocity at that point (I have a wave-particle simulation for dynamic waves, and am already doing something similar for foam on the water surface). However, I'm not sure how physically correct that would be, since oil might not move at the same velocity as the water. And I have no idea how to make all those trippy colors :-). Thoughts?
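    A CPU-side sketch of the "oil map" idea described above, using made-up names and assuming the water velocity field is available as two w*h float arrays; how closely the oil should track the water velocity is exactly the physical judgment call raised in the post.

        #include <vector>

        // Scalar oil density per cell of the water surface, 0..1.
        struct OilMap {
            int w, h;
            std::vector<float> density;
            OilMap(int w_, int h_) : w(w_), h(h_), density(w_ * h_, 0.0f) {}
            float& at(int x, int y) { return density[y * w + x]; }
        };

        // One simulation step: semi-Lagrangian advection by the water velocity,
        // followed by a cheap diffusion toward the 4-neighbour average.
        void stepOil(OilMap& oil, const std::vector<float>& velX,
                     const std::vector<float>& velY, float dt, float diffusion) {
            std::vector<float> next = oil.density;
            for (int y = 1; y < oil.h - 1; ++y) {
                for (int x = 1; x < oil.w - 1; ++x) {
                    int i = y * oil.w + x;
                    // Look backwards along the velocity to find where this oil came from.
                    int sx = (int)(x - velX[i] * dt);
                    int sy = (int)(y - velY[i] * dt);
                    if (sx >= 0 && sx < oil.w && sy >= 0 && sy < oil.h)
                        next[i] = oil.density[sy * oil.w + sx];
                    float avg = 0.25f * (oil.at(x - 1, y) + oil.at(x + 1, y) +
                                         oil.at(x, y - 1) + oil.at(x, y + 1));
                    next[i] += diffusion * (avg - next[i]);
                }
            }
            oil.density.swap(next);
        }

    The resulting density texture is then sampled in the water shader; the "trippy colors" are usually faked with a thin-film interference ramp indexed by view angle and oil thickness.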

    Read the article

  • How To Use Layer Masks and Vector Masks to Remove Complex Backgrounds in Photoshop

    - by Eric Z Goodnight
    Ever removed a background in Photoshop, only to find you want to use parts of that background later? Layer Masks and Vector Masks are the elegant and often misunderstood answer to this common problem. Keep reading to see how they work. In this article, we’ll learn exactly what a Layer Mask is, and two methods to use them in practically any version of Photoshop, including a simpler example for less experienced Photoshop users, and another for more seasoned users who are comfortable with the Pen tool and vectors.

    Read the article

  • Incorrect lighting results with deferred rendering

    - by Lasse
    I am trying to render a light-pass to a texture which I will later apply on the scene. But I seem to calculate the light position wrong. I am working on view-space. In the image above, I am outputting the attenuation of a point light which is currently covering the whole screen. The light is at 0,10,0 position, and I transform it to view-space first: Vector4 pos; Vector4 tmp = new Vector4 (light.Position, 1); // Transform light position for shader Vector4.Transform (ref tmp, ref Camera.ViewMatrix, out pos); shader.SendUniform ("LightViewPosition", ref pos); Now to me that does not look as it should. What I think it should look like is that the white area should be on the center of the scene. The camera is at the corner of the scene, and it seems as if the light would move along with the camera. Here's the fragment shader code: void main(){ // default black color vec3 color = vec3(0); // Pixel coordinates on screen without depth vec2 PixelCoordinates = gl_FragCoord.xy / ScreenSize; // Get pixel position using depth from texture vec4 depthtexel = texture( DepthTexture, PixelCoordinates ); float depthSample = unpack_depth(depthtexel); // Get pixel coordinates on camera-space by multiplying the // coordinate on screen-space by inverse projection matrix vec4 world = (ImP * RemapMatrix * vec4(PixelCoordinates, depthSample, 1.0)); // Undo the perspective calculations vec3 pixelPosition = (world.xyz / world.w) * 3; // How far the light should reach from it's point of origin float lightReach = LightColor.a / 2; // Vector in between light and pixel vec3 lightDir = (LightViewPosition.xyz - pixelPosition); float lightDistance = length(lightDir); vec3 lightDirN = normalize(lightDir); // Discard pixels too far from light source //if(lightReach < lightDistance) discard; // Get normal from texture vec3 normal = normalize((texture( NormalTexture, PixelCoordinates ).xyz * 2) - 1); // Half vector between the light direction and eye, used for specular component vec3 halfVector = normalize(lightDirN + normalize(-pixelPosition)); // Dot product of normal and light direction float NdotL = dot(normal, lightDirN); float attenuation = pow(lightReach / lightDistance, LightFalloff); // If pixel is lit by the light if(NdotL > 0) { // I have moved stuff from here to above so I can debug them. // Diffuse light color color += LightColor.rgb * NdotL * attenuation; // Specular light color color += LightColor.xyz * pow(max(dot(halfVector, normal), 0.0), 4.0) * attenuation; } RT0 = vec4(color, 1); //RT0 = vec4(pixelPosition, 1); //RT0 = vec4(depthSample, depthSample, depthSample, 1); //RT0 = vec4(NdotL, NdotL, NdotL, 1); RT0 = vec4(attenuation, attenuation, attenuation, 1); //RT0 = vec4(lightReach, lightReach, lightReach, 1); //RT0 = depthtexel; //RT0 = 100 / vec4(lightDistance, lightDistance, lightDistance, 1); //RT0 = vec4(lightDirN, 1); //RT0 = vec4(halfVector, 1); //RT0 = vec4(LightColor.xyz,1); //RT0 = vec4(LightViewPosition.xyz/100, 1); //RT0 = vec4(LightPosition.xyz, 1); //RT0 = vec4(normal,1); } What am I doing wrong here?
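    For reference, a standalone sketch of the space bookkeeping, using glm (an assumption; the post's CPU code looks like an XNA/OpenTK-style Vector4): a light position multiplied by the view matrix and a surface position reconstructed from depth through the inverse projection should land in the same view space, otherwise the attenuation will indeed appear to follow the camera around.

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>
        #include <cstdio>

        int main() {
            // Hypothetical camera at a corner of the scene, looking at the origin.
            glm::mat4 view = glm::lookAt(glm::vec3(30, 20, 30), glm::vec3(0), glm::vec3(0, 1, 0));
            glm::mat4 proj = glm::perspective(glm::radians(60.0f), 16.0f / 9.0f, 0.1f, 100.0f);

            // Light at (0, 10, 0): world -> view uses the view matrix only.
            glm::vec4 lightView = view * glm::vec4(0.0f, 10.0f, 0.0f, 1.0f);

            // A surface point reconstructed from its depth must land in the same space.
            glm::vec4 worldPoint(5.0f, 0.0f, 5.0f, 1.0f);
            glm::vec4 clip = proj * view * worldPoint;
            glm::vec4 ndc  = clip / clip.w;                  // what the depth buffer stores
            glm::vec4 back = glm::inverse(proj) * ndc;       // inverse *projection* only
            glm::vec3 viewPoint = glm::vec3(back) / back.w;  // should equal (view * worldPoint).xyz

            glm::vec3 expected = glm::vec3(view * worldPoint);
            std::printf("light (view space): %.2f %.2f %.2f\n", lightView.x, lightView.y, lightView.z);
            std::printf("reconstructed:      %.2f %.2f %.2f\n", viewPoint.x, viewPoint.y, viewPoint.z);
            std::printf("expected:           %.2f %.2f %.2f\n", expected.x, expected.y, expected.z);
        }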

    Read the article

  • Material, Pass, Technique and shaders

    - by Papi75
    I'm trying to make a clean and advanced Material class for the rendering of my game; here is my architecture: class Material { void sendToShader() { program->sendUniform( nameInShader, valueInMaterialOrOther ); } private: Blend blendmode; ///< Alpha, Add, Multiply, … Color ambient; Color diffuse; Color specular; DrawingMode drawingMode; // Line Triangles, … Program* program; std::map<string, TexturePacket> textures; // List of textures with TexturePacket = { Texture*, vec2 offset, vec2 scale} }; How can I handle the link between the Shader and the Material? (sendToShader method) If the user wants to send additional information to the shader (like time elapsed), how can I allow that? (User can't edit Material class) Thanks!
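    One possible shape for that link, sketched with assumed names rather than taken from the post: the material uploads its own uniforms, then hands the program to a user-supplied hook, so callers can push extra data such as elapsed time without ever editing the Material class.

        #include <functional>
        #include <string>
        #include <utility>

        struct Program {                                   // stand-in for the poster's Program class
            void sendUniform(const std::string& name, float value) {
                (void)name; (void)value;                   // e.g. glUniform1f(location(name), value)
            }
        };

        class Material {
        public:
            using ExtraUniforms = std::function<void(Program&)>;

            void setExtraUniforms(ExtraUniforms hook) { extra_ = std::move(hook); }

            void sendToShader() {
                if (!program_) return;
                program_->sendUniform("material.shininess", shininess_);
                // ... ambient, diffuse, specular, texture offsets/scales ...
                if (extra_) extra_(*program_);             // user-defined uniforms, if any
            }

        private:
            Program*      program_   = nullptr;
            float         shininess_ = 32.0f;
            ExtraUniforms extra_;
        };

        // Usage: the caller owns the extra data, Material stays generic:
        //   material.setExtraUniforms([&](Program& p) { p.sendUniform("timeElapsed", t); });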

    Read the article

  • How to install .deb file from within preinst script

    - by Ashwin D
    I have my own application packaged using dpkg. The application depends on several deb files which I'm trying to install from within the preinst script of my application. The preinst script checks if a dependent deb file is installed; if not, it goes on to install it using the dpkg -i command. This is repeated for all the dependent deb files needed by the main application. When I try to install the main application using dpkg -i, the command returns a failure when trying to execute the preinst script. Below is the error message: dpkg: error: dpkg status database is locked by another process. I deleted the /var/lib/dpkg/lock file and retried installing the application, but to no avail. If I run the preinst script separately like any other shell script, it runs without any issue, and all the deb files get installed properly. So the issue only occurs when the preinst script is run automatically by the dpkg -i command. I'm lost trying to determine the root cause. If anyone can shed some light on what the real issue might be, their help will be greatly appreciated. Thank you. Ashwin

    Read the article

  • Sorting objects before rendering

    - by dreta
    I'm trying to implement a scene graph, and in all the articles I've come across there is talk about object sorting. So you'd sort your objects by "material", for example. Until I sat down and started implementing it, I kind of took this for granted, because it made sense. But now I'm wondering: what does sorting actually change? In my engine, I have a manager for UBOs; I use those to store data that'll be shared between programs, which at the moment only involves time, the camera and projection matrices, and lights (I'm not worrying about managing which lights affect which objects ATM). Now for each model I have to change the model-to-world matrix uniform, and no sorting is going to change that. So is the jump from changing this matrix to also setting a material for each object that bad? I vaguely remember reading somewhere that each time you change something in the pipeline, it has to get flushed, and that can cause performance issues. But for each draw call I'm setting up a model-to-world matrix anyway, so what sense does it make to ever be concerned about this? BTW, is there any information about whether changing a uniform or calling glBufferSubData is more (or less) expensive?
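    For illustration, a sketch (with assumed names) of what the sorting buys: draw calls grouped by shader and material collapse redundant binds into one per run, while per-object uniforms such as the model-to-world matrix still change on every call, exactly as the post says.

        #include <algorithm>
        #include <cstdint>
        #include <vector>

        struct DrawItem {
            uint32_t shaderId;
            uint32_t materialId;
            uint32_t meshId;          // model-to-world matrix etc. live elsewhere
        };

        void submit(std::vector<DrawItem>& items) {
            std::sort(items.begin(), items.end(),
                      [](const DrawItem& a, const DrawItem& b) {
                          if (a.shaderId   != b.shaderId)   return a.shaderId   < b.shaderId;
                          if (a.materialId != b.materialId) return a.materialId < b.materialId;
                          return a.meshId < b.meshId;
                      });

            uint32_t boundShader = UINT32_MAX, boundMaterial = UINT32_MAX;
            for (const DrawItem& item : items) {
                if (item.shaderId != boundShader)     { /* glUseProgram(...) */           boundShader   = item.shaderId; }
                if (item.materialId != boundMaterial) { /* bind textures, set uniforms */ boundMaterial = item.materialId; }
                // Per-object uniforms (model-to-world matrix) still change here,
                // then glDrawElements(...) for item.meshId.
            }
        }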

    Read the article

  • How to mimic the same fixed-size horizon as in this racing game?

    - by Aybe
    I am trying to replicate the same horizon (buildings and sky) as in the image below: As you can see, the player has advanced in a straight line, yet the horizon still has the same size: This is my attempt using 3D; it's okay when the player is on the start line: It's not so great when the player has advanced as much as in image no. 2: This is an overview of where the horizon buildings and sky are located: Obviously this won't achieve such an effect when one is close to it, so I've tried to scale up the horizon on all axes, but the problem is that the buildings are too small depending on where you look at them from. How can one mimic such rendering?
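    One common trick, sketched here with glm (an assumption): draw the horizon layer with the camera's rotation but without its translation, the same way a skybox is handled, so driving forward never changes its apparent size.

        #include <glm/glm.hpp>

        // Strip the translation from the camera's view matrix, keeping only the rotation.
        glm::mat4 horizonViewMatrix(const glm::mat4& cameraView) {
            return glm::mat4(glm::mat3(cameraView));
        }

        // Per frame (pseudocode in comments):
        //   1. draw the horizon quad/cylinder with proj * horizonViewMatrix(view),
        //      with depth writes disabled so the track can draw over it
        //   2. draw the track, roadside buildings, and cars with proj * view as usual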

    Read the article

  • XNA 4.0 Point Vertex Rendering

    - by luis
    I have a buffer of about 134 million particles and a very powerful computer to render them smoothly, but I am getting an error when trying to render them as primitive lines: it says I cannot render more than around 1 million. I wonder how I can do this, and also whether there is a better way to render this other than with lines; I'm comfortable with 1-pixel points or anything, as long as the vertices are shown all the time. I'm basically just plotting the points. Thanks.
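    A sketch of the usual workaround, written in C++ here rather than XNA: split the buffer into chunks below the per-call primitive limit and issue one draw per chunk.

        #include <algorithm>
        #include <cstddef>

        void drawAllPoints(std::size_t totalVertices, std::size_t maxPrimitivesPerCall) {
            for (std::size_t first = 0; first < totalVertices; first += maxPrimitivesPerCall) {
                std::size_t count = std::min(maxPrimitivesPerCall, totalVertices - first);
                // One draw per chunk: DrawPrimitives/DrawUserPrimitives in XNA terms,
                // or glDrawArrays(GL_POINTS, first, count) in GL terms.
                (void)count;
            }
        }

    With 134 million vertices it may also be worth asking whether every point needs to be submitted each frame at all, for example by decimating down to roughly one point per covered pixel.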

    Read the article

  • Rendering My Fault Message

    - by onefloridacoder
    My coworkers setup  a nice way to get the fault messages from our service layer all the way back to the client’s service proxy layer.  This is what I needed to work on this afternoon.  Through a series of trials and errors I finally figured this out. The confusion was how I was looking at the exception in the quick watch viewer.  It appeared as though the EventArgs from the service call had somehow magically been cast back to FaultException(), not FaultException<T>.  I drilled into the EventArgs object with the quick watch and I copied this code to figure out where the Fault message was hiding.  Further, when I copied this quick watch code into the IDE I got squigglies.  Poop. 1: ((System 2: .ServiceModel 3: .FaultException<UnhandledExceptionFault>)(((System.Exception)(e.Result)).InnerException)).Detail.FaultMessage I wont bore you with the details but here’s how it turned out.   EventArgs which I’m calling “e” is not the result such as some collection of items you’re expecting.  It’s actually a FaultException, or in my case FaultException<T>.  Below is the calling code and the callback to handle the expected response or the fault from the completed event. 1: public void BeginRetrieveItems(Action<ObservableCollection<Model.Widget>> FindItemsCompleteCallback, Model.WidgetLocation location) 2: { 3: var proxy = new MyServiceContractClient(); 4:  5: proxy.RetrieveWidgetsCompleted += (s, e) => FindWidgetsCompleteCallback(FindWidgetsCompleted(e)); 6:  7: RetrieveWidgetsRequest request = new RetrieveWidgetsRequest { location.Id }; 8:  9: proxy.RetrieveWidgetsAsync(request); 10: } 11:  12: private ObservableCollection<Model.Widget> FindItemsCompleted(RetrieveWidgetsCompletedEventArgs e) 13: { 14: if (e.Error is FaultException<UnhandledExceptionFault>) 15: { 16: var fault = (FaultException<UnhandledExceptionFault>)e.Error; 17: var faultDetailMessage = fault.Detail.FaultMessage; 18:  19: UIMessageControlDuJour.Show(faultDetailMessage); 20: return new ObservableCollection<BinInventoryItemCountInfo>(); 21: } 22:  23: var widgets = new ObservableCollection<Model.Widget>(); 24:  25: if (e.Result.Widgets != null) 26: { 27: e.Result.Widgets.ToList().ForEach(w => widgets.Add(this.WidgetMapper.Map(w))); 28: } 29:  30: return widgets; 31: }

    Read the article

  • Simple rendering produces minor stutter

    - by Ben
    For some reason, this game loop renders the movement of a simple rectangle with no stuttering. double currTime; double prevTime = System.nanoTime() / NANO_TO_SEC; double FPSTIMER = System.nanoTime(); double maxTimeDiff = 100.0 / 1000.0; double delta = 1.0 / 60.0; int processes = 0, frames = 0; while(true){ currTime = System.nanoTime() / NANO_TO_SEC; if(currTime - prevTime > maxTimeDiff) prevTime = currTime; if(currTime >= prevTime){ process(); processes++; prevTime += delta; if(currTime < prevTime){ render(); frames++; } } else{ try{ Thread.sleep((long) (1000 * (prevTime - currTime))); } catch(Exception e){} } if(System.nanoTime() - FPSTIMER > 1000000000.0){ System.out.println("Process: " + (1000 / processes) + "ms FPS: " + (1000 / frames) + "ms"); processes = frames = 0; FPSTIMER += 1000000000.0; } } But for this game loop, I get really minor stuttering where the movement does not look smooth. long prevTime = System.currentTimeMillis(); long prevRenderTime = 0; long currRenderTime = 0; long delta = 0; long msPerTick = 1000 / 60; int frames = 0; int ticks = 0; double FPSTIMER = System.currentTimeMillis(); while (true){ long currTime = System.currentTimeMillis(); delta += (currTime - prevTime) / msPerTick; prevTime = currTime; while (delta >= 1){ ticks++; process(); delta -= 1; } prevRenderTime = System.currentTimeMillis(); render(); frames++; currRenderTime = System.currentTimeMillis(); try{ Thread.sleep((long) ((1000 / FPS) - (currRenderTime - prevRenderTime))); } catch(Exception e){} if(System.currentTimeMillis() - FPSTIMER > 1000.0){ System.out.println("Process: " + (1000.0 / ticks) + "ms FPS: " + (1000.0 / frames) + "ms"); ticks = frames = 0; FPSTIMER += 1000.0; } Is there any critical difference that I'm missing here? The one thing I noticed is that if I uncap the fps for the second game loop, the stuttering goes away. It doesn't make sense to me. Also, the second game loop came from Notch's Minicraft code with just my thread sleeping code added in.
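    For comparison, a sketch of the canonical fixed-timestep loop with a high-resolution clock and render interpolation. Millisecond-resolution timing, as in the second loop, quantizes frame times and is a common source of exactly this kind of micro-stutter.

        #include <chrono>

        void gameLoop(bool& running) {
            using clock = std::chrono::steady_clock;
            const double dt = 1.0 / 60.0;                 // fixed update step, in seconds
            double accumulator = 0.0;
            auto previous = clock::now();

            while (running) {
                auto now = clock::now();
                accumulator += std::chrono::duration<double>(now - previous).count();
                previous = now;

                while (accumulator >= dt) {               // catch up in fixed steps
                    // update(dt);
                    accumulator -= dt;
                }

                double alpha = accumulator / dt;          // 0..1, progress into the next step
                // render(alpha);                         // interpolate positions by alpha
                (void)alpha;
            }
        }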

    Read the article

  • How to properly implement alpha blending in a complex 3D scene

    - by Gajet
    I know this question might sound a bit easy to answer, but it's driving me crazy. There are too many possible situations that a good alpha blending mechanism should handle, and for each algorithm I can think of there is something missing. These are the methods I've thought about so far. First, I thought about sorting objects by depth; this simply fails because objects are not simple shapes, they might have curves and might loop inside each other, so I can't always tell which one is closer to the camera. Then I thought about sorting triangles, though I'm not sure how to implement it, and there is a rare case that might again cause a problem, in which two triangles pass through each other; again no one can tell which one is nearer. The next thing was using the depth buffer; after all, the main reason we have a depth buffer is because of the problems with sorting that I mentioned, but now we get another problem. Since objects might be transparent, in a single pixel there might be more than one object visible, so for which object should I store the pixel depth? I then thought maybe I could store only the front-most object's depth, and use that to determine how I should blend the next draw calls at that pixel. But again there was a problem: think about two semi-transparent planes with a solid plane in the middle of them. If I render the solid plane at the end, one can still see the most distant plane. Note that I was going to merge every two planes until there is only one color left for that pixel. Obviously I could use sorting methods here too, but they suffer from the same problems I've explained above. Finally, the only thing I imagine being able to work is to render all objects into different render targets, then sort those layers and display the final output. But this time I don't know how I can implement this algorithm.
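    For context, a sketch of the baseline the post is wrestling with: opaque geometry first, then transparent objects sorted back-to-front by view-space depth and drawn with depth writes off. Order-independent techniques such as depth peeling or weighted blended OIT exist precisely because this per-object sort breaks down in the cases described above.

        #include <algorithm>
        #include <vector>

        struct TransparentDraw {
            float viewDepth;   // distance from the camera along the view direction
            int   meshId;
        };

        void drawScene(std::vector<TransparentDraw>& transparent) {
            // 1. Draw all opaque objects with depth test and depth writes enabled.
            // 2. Sort transparent objects farthest-to-nearest.
            std::sort(transparent.begin(), transparent.end(),
                      [](const TransparentDraw& a, const TransparentDraw& b) {
                          return a.viewDepth > b.viewDepth;
                      });
            // 3. Draw them with blending on and depth writes off (depth test still on).
            for (const TransparentDraw& t : transparent) {
                // bind and draw t.meshId
                (void)t;
            }
        }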

    Read the article

  • Rendering output to an arbitrary quadrilateral

    - by Trainee4Life
    I want to render output on a device to an arbitrary quadrilateral, i.e. project a texture onto a quad. What are the possible ways I could implement it? So far, I have investigated: Drawing a textured quadrilateral - the quad is split into two triangles, and the resulting distortion looks odd. The issue I'm facing has been discussed here and here as well. Setting a transformation on the device - need help in getting this implemented. Pixel shaders - not able to implement the desired effect. Any help would be much appreciated.
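    One approach to the first bullet, sketched independently of any particular API: give each corner a homogeneous texture coordinate (u*q, v*q, q) and divide by q in the pixel shader, which makes the interpolation projective across the whole quad instead of affine per triangle. The q values can be derived from where the quad's diagonals intersect.

        #include <cmath>

        struct Vec2 { float x, y; };

        static float cross2(Vec2 a, Vec2 b) { return a.x * b.y - a.y * b.x; }

        // p[0..3]: quad corners in order around the quad; uv[0..3]: plain texcoords.
        // Writes (u*q, v*q, q) into uvq; the shader then samples tex(uvq.xy / uvq.z).
        void projectiveUVs(const Vec2 p[4], const Vec2 uv[4], float uvq[4][3]) {
            // Intersection of the diagonals p0-p2 and p1-p3.
            Vec2 d02 { p[2].x - p[0].x, p[2].y - p[0].y };
            Vec2 d13 { p[3].x - p[1].x, p[3].y - p[1].y };
            Vec2 d01 { p[1].x - p[0].x, p[1].y - p[0].y };
            float t = cross2(d01, d13) / cross2(d02, d13);
            Vec2 c { p[0].x + t * d02.x, p[0].y + t * d02.y };

            float dist[4];
            for (int i = 0; i < 4; ++i)
                dist[i] = std::hypot(p[i].x - c.x, p[i].y - c.y);

            for (int i = 0; i < 4; ++i) {
                float q = (dist[i] + dist[(i + 2) % 4]) / dist[(i + 2) % 4];
                uvq[i][0] = uv[i].x * q;
                uvq[i][1] = uv[i].y * q;
                uvq[i][2] = q;
            }
        }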

    Read the article

  • Avoiding lag when rendering a Texture2D for the first time

    - by Emir Lima
    I have found a similar question here, but it is about playing sounds. I am using 2048 x 2048 textures for sprite sheets, and every time I call spriteBatch.Draw using a sheet for the first time in the game's execution, it causes a considerable lag. The lag doesn't appear on subsequent draws. Has anyone faced this problem before? What can I do to overcome this? Update: I inserted code at the end of the content load routine that draws EVERY Texture2D loaded into the ContentManager before proceeding to the game screen. This works well. No lag occurs when different textures are rendered over time, EXCEPT if IsFullScreen is changed. Apparently, changing this property causes the textures loaded on the GPU to be lost. Is that correct?

    Read the article

  • Upstart Script on CentOS 6

    - by MarcusMaximus
    I'm trying to create an upstart script to run a python script on startup. In theory it looks simple enough but I just can't seem to get it to work. I'm using a skeleton script I found here and altered. description "Used to start python script as a service" author "Me <[email protected]>" # Stanzas # # Stanzas control when and how a process is started and stopped # See a list of stanzas here: http://upstart.ubuntu.com/wiki/Stanzas#respawn # When to start the service start on runlevel [2345] # When to stop the service stop on runlevel [016] # Automatically restart process if crashed respawn # Essentially lets upstart know the process will detach itself to the background expect fork # Start the process script exec su nonrootuser -c "python /usr/local/scripts/script.py" end script The test script I want it to run is currently a simple python script that runs without any issue when run from a terminal. #!/usr/bin/python2 import os, sys, time if __name__ == "__main__": for i in range (10000): message = "shotgunUpstartTest " , i , time.asctime() , " - Username: " , os.getenv("USERNAME") #print message time.sleep(60) out = open("/var/log/scripts/scriptlogfile", "a") print >> out, message out.close() The location/var/log/scripts has permissions 777 The file /usr/local/scripts/script.py has permissions 775 The upstart script /etc/init.d/pythonupstart.conf has permissions 755

    Read the article

  • Rendering transparent textures in DirectX

    - by Vibhore Tanwer
    I am working on a DirectX application with WPF, and I am facing a problem with videos and images that contain transparent pixels: I have to draw a color in the background and then a video/image over it. What I expect is that the background color should be visible while the video plays, with only the non-transparent pixels drawn over it, but what I get is a black background behind the video. I am using the following settings on the device to achieve alpha blending: device.RenderState.SourceBlend = Blend.SourceAlpha; device.RenderState.DestinationBlend = Blend.InvSourceAlpha; device.RenderState.AlphaBlendEnable = true; What am I missing here? What is the best approach to handle transparent videos? Any help will be of great value to me.

    Read the article

  • XNA 2D Spritesheet drawing rendering problem

    - by user24092
    I'm making a tile-based game, using one spritesheet containing all tile graphics. Each tile has a size of 32x32 pixels. The main problem is: when I draw a tile to the screen, if the tile position x and y are not rounded, or if scaling is enabled in the spriteBatch.Draw() method (scale != 1.0f), I get some lines from adjacent tiles on the spritesheet bleeding into the tile currently being drawn. I have already tried setting SamplerState to PointClamp and removing anti-aliasing, but it still doesn't work. Here I'll show images of some tests that I made, with a test sprite sheet that I've created (I made a 9x9 spritesheet, with each sprite of size 32x32 containing a unique solid color). Tests: http://img6.imageshack.us/img6/5946/testsqj.png SpriteSheet used: http://imageshack.us/a/img821/1341/tilesm.png I have already tried removing anti-aliasing and setting PointClamp as the sampler state, but I am still getting this issue: XNA keeps drawing part of the adjacent pixels of the texture on the screen. What I want is to get the correct area of the tilesheet texture (as seen in the first test, which gets just the yellow pixels). My question is: is there any way I can fix this, WITHOUT adding tile spacing or any other modification involving the tilesheet? Maybe disabling some texture filtering that is done by XNA, or something like that.

    Read the article
