Search Results

Search found 15884 results on 636 pages for 'non photorealistic render'.

Page 2/636 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • highlighting non-ascii text

    - by non-techie
    As a non-techie, I would really appreciate your help on this. I have some files (HTML) that need to be in pure ASCII form to be processed properly. Since these files are produced by humans, every so often non-ASCII characters sneak in. Often it is a stray " (curled variety) or something similar that is difficult to find and needs to be removed. I have found text editors (e.g. TextMate) that can strip out all non-ASCII characters, but I need to find one that can highlight where they are, rather than remove them (as I need to remove them from the source and not the HTML file). I hope this makes sense, and I appreciate any assistance you can provide. Thanks!
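
    If no editor does the highlighting, a small script can do the locating. A minimal Python sketch (my own illustration, not from the post) that prints the line and column of every non-ASCII character:

        # find_non_ascii.py -- report where every non-ASCII character sits in a file
        import sys

        def report_non_ascii(path):
            with open(path, encoding="utf-8", errors="replace") as handle:
                for line_no, line in enumerate(handle, start=1):
                    for col_no, ch in enumerate(line, start=1):
                        if ord(ch) > 127:
                            print(f"{path}:{line_no}:{col_no}: {ch!r}")

        if __name__ == "__main__":
            for name in sys.argv[1:]:
                report_non_ascii(name)

    Run it as "python find_non_ascii.py page.html" and fix each reported location in the source file rather than in the generated HTML.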

    Read the article

  • how does a non-programmer become a developer

    - by Sarang
    Every year different types of freshers get recruited. But our IT field is not limited to IT engineers and computer engineers; it is full of all different types of engineers. How can an engineer become a proper developer? I am asking this because, whatever engineering discipline a student has gone for, he or she can shift to IT development if he or she has certain qualities. What are those qualities one must have, or must develop, in order to be a developer?

    Read the article

  • Rails render partial with block

    - by brad
    I'm trying to re-use an html component that i've written that provides panel styling. Something like: <div class="v-panel"> <div class="v-panel-tr"></div> <h3>Some Title</h3> <div class="v-panel-c"> .. content goes here </div> <div class="v-panel-b"><div class="v-panel-br"></div><div class="v-panel-bl"></div></div> </div> So I see that render takes a block. I figured then I could do something like this: # /shared/_panel.html.erb <div class="v-panel"> <div class="v-panel-tr"></div> <h3><%= title %></h3> <div class="v-panel-c"> <%= yield %> </div> <div class="v-panel-b"><div class="v-panel-br"></div><div class="v-panel-bl"></div></div> </div> And I want to do something like: #some html view <%= render :partial => '/shared/panel', :locals =>{:title => "Some Title"} do %> <p>Here is some content to be rendered inside the panel</p> <% end %> Unfortunately this doesn't work with this error: ActionView::TemplateError (/Users/bradrobertson/Repos/VeloUltralite/source/trunk/app/views/sessions/new.html.erb:1: , unexpected tRPAREN old_output_buffer = output_buffer;;@output_buffer = ''; __in_erb_template=true ; @output_buffer.concat(( render :partial => '/shared/panel', :locals => {:title => "Welcome"} do ).to_s) on line #1 of app/views/sessions/new.html.erb: 1: <%= render :partial => '/shared/panel', :locals => {:title => "Welcome"} do -%> ... So it doesn't like the = obviously with a block, but if I remove it, then it just doesn't output anything. Does anyone know how to do what I'm trying to achieve here? I'd like to re-use this panel html in many places on my site.
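
    One commonly suggested workaround (a sketch only, not verified against the poster's exact Rails version): render the shared template as a :layout rather than a :partial, because a layout can yield to the block that render receives.

        <%# /shared/_panel.html.erb stays as written above (it already yields to the block) %>

        <%# calling view -- note :layout instead of :partial %>
        <% render :layout => '/shared/panel', :locals => { :title => "Some Title" } do %>
          <p>Here is some content to be rendered inside the panel</p>
        <% end %>

    Depending on the Rails version, the block form is written with <% ... do %> (older ERB, as the error above suggests) or <%= ... do %> (Rails 3-style ERB).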

    Read the article

  • Rails 3 render and method call in another controller

    - by akam
    Hello, I am using Rails 3. I would like to render a portion of a view which is built by a 'notification' method in the Message class, so I've added this in my application.html.erb: <li><%= render :action => "notification", :controller => "messages" %></li> The goal of my notification.html.erb file is to display the number of notifications in a red circle on all my pages. I don't think I am going about this the right way; any ideas? Thanks all :)
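
    A more conventional route (sketched here under the assumption of a Message model with a boolean read column, which the post does not show): a view cannot render another controller's action, so render a shared partial from the layout and let a helper supply the count.

        <%# app/views/shared/_notification.html.erb %>
        <span class="notification-badge"><%= unread_message_count %></span>

        <%# app/views/layouts/application.html.erb %>
        <li><%= render 'shared/notification' %></li>

        # app/helpers/application_helper.rb
        module ApplicationHelper
          # Message and its 'read' column are assumptions made for this sketch
          def unread_message_count
            Message.where(:read => false).count
          end
        end

    The red circle styling then lives in the partial, and every page that uses the layout picks it up.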

    Read the article

  • How to render a POST and make it show up on another page

    - by stack5914
    I'm trying to create a marketplace website similar to craigslist. I created a form according to the Django tutorial "Working with forms", but I don't know how to render the information I got from the POST form. I want to make the information (subject, price, etc.) that I got from the POST show up on another page like this: http://bakersfield.craigslist.org/atq/3375938126.html and I want the "Subject" (please look at forms.py) of this product (e.g. 1960 French Chair) to show up on another page like this: http://bakersfield.craigslist.org/ata/ Can I get some advice on handling the submitted information? Here is my present code. I'll appreciate all your answers and help. forms.py: from django import forms class SellForm(forms.Form): subject = forms.CharField(max_length=100) price = forms.CharField(max_length=100) condition = forms.CharField(max_length=100) email = forms.EmailField() body = forms.TextField() views.py: from django.shortcuts import render, render_to_response from django.http import HttpResponseRedirect from site1.forms import SellForm def sell(request): if request.method =="POST": form =SellForm(request.POST) if form.is_valid(): subject = form.cleaned_data['subject'] price = form.cleaned_data['price'] condition = form.cleaned_data['condition'] email = form.cleaned_data['email'] body = form.cleaned_data['body'] return HttpResponseRedirect('/books/') else: form=SellForm() render(request, 'sell.html',{'form':form,}) urls.py: from django.conf.urls import patterns, include, url from django.contrib import admin admin.autodiscover() urlpatterns = patterns('', url(r'^sechand/$','site1.views.sell'), url(r'^admin/', include(admin.site.urls)), ) sell.html: <form action = "/sell/" method = "post">{% csrf_token%} {{ form.as_p }} <input type = "submit" value="Submit" /> </form>
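
    One way to make the submitted data show up on its own page (a sketch that assumes a hypothetical Listing model and a named 'listing_detail' URL, neither of which exists in the posted code): save the cleaned data, redirect, and render it from a detail view.

        # views.py -- sketch; Listing and the URL name are assumptions
        from django.shortcuts import render, redirect, get_object_or_404
        from site1.forms import SellForm
        from site1.models import Listing  # hypothetical model with the same five fields

        def sell(request):
            if request.method == "POST":
                form = SellForm(request.POST)
                if form.is_valid():
                    listing = Listing.objects.create(**form.cleaned_data)
                    return redirect('listing_detail', pk=listing.pk)
            else:
                form = SellForm()
            return render(request, 'sell.html', {'form': form})

        def listing_detail(request, pk):
            listing = get_object_or_404(Listing, pk=pk)
            return render(request, 'listing_detail.html', {'listing': listing})

    The detail template can then show {{ listing.subject }}, {{ listing.price }} and so on. Note also that forms.TextField() does not exist in Django's forms module; forms.CharField(widget=forms.Textarea) is the usual choice for a body field.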

    Read the article

  • help with rails render action vs routing

    - by Stacia
    I was using an image cropping example that I found online and now I'm confused. There is actually no "crop" method in my controller. Instead (following the guide) I put render :action => 'cropping', :layout => "admin" in my create method. That renders the view called cropping.html.erb. It works fine, but I have no idea how to link to or render that page otherwise, for example if I wanted to hit a URL directly or press a button to recrop an image. Should I actually create a crop method in my controller and hook it up via routing if I want to be able to do this, or is there a way within my view to link to the same place that renders the cropping action? Sorry about the confusion :) It doesn't help that the first version of the tutorial did have a cropping method and the author removed it!! Any explanation of why one method is better than the other would be great. Thanks!!

    Read the article

  • XNA - Obtaining depth from the scene's render target?

    - by user1423893
    I'm currently rendering my scene to a render target so it can be used for rendering methods such as post processing and order independent transparency. rtScene = new RenderTarget2D( GraphicsDevice, GraphicsDevice.PresentationParameters.BackBufferWidth, GraphicsDevice.PresentationParameters.BackBufferHeight, false, SurfaceFormat.Rgba64, DepthFormat.Depth24Stencil8, // Requires a depth format for objects to be drawn correctly (e.g. wireframe model surrounding model) 0, RenderTargetUsage.PreserveContents ); I am required to use RenderTargetUsage.PreserveContents so that the same render target can be rendered to multiple times, once for each of the draw methods below. DrawBackground DrawDeferred DrawForward DrawTransparent The problem is that DrawTransparent requires a copy of the scene's depth as a texture. Is there any way to obtain this from the scene render target above (rtScene)? I can't have more than one render target with RenderTargetUsage.PreserveContents as this causes problems on hardware such as the XBOX 360, so rendering the depth to a separate render target at the same time as I render the scene isn't possible as far as I can tell. Would I be able to get around this problem by "Ping-Ponging" two render targets (using the more compatible RenderTargetUsage.DiscardContents) and using the result for the depth texture?
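
    The ping-pong idea can work along these lines (a sketch only, with invented names, and not tested against the XBOX 360 constraints mentioned above): keep two DiscardContents targets, always draw into one while sampling the other, and swap between passes. A single-channel depth target filled during the opaque passes could then be handed to DrawTransparent as an ordinary texture.

        using Microsoft.Xna.Framework.Graphics;

        class PingPongTargets
        {
            private RenderTarget2D current, previous;

            public PingPongTargets(GraphicsDevice device, int width, int height)
            {
                current  = new RenderTarget2D(device, width, height, false, SurfaceFormat.Rgba64,
                                              DepthFormat.Depth24Stencil8, 0, RenderTargetUsage.DiscardContents);
                previous = new RenderTarget2D(device, width, height, false, SurfaceFormat.Rgba64,
                                              DepthFormat.Depth24Stencil8, 0, RenderTargetUsage.DiscardContents);
            }

            // Call before each pass: draw into Current while the shader samples Previous.
            public void Swap(GraphicsDevice device)
            {
                RenderTarget2D tmp = current;
                current = previous;
                previous = tmp;
                device.SetRenderTarget(current);
            }

            public RenderTarget2D Current  { get { return current; } }
            public RenderTarget2D Previous { get { return previous; } }
        }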

    Read the article

  • GLSL shader render to texture not saving alpha value

    - by quadelirus
    I am rendering to a texture using a GLSL shader and then sending that texture as input to a second shader. For the first texture I am using RGB channels to send color data to the second GLSL shader, but I want to use the alpha channel to send a floating point number that the second shader will use as part of its program. The problem is that when I read the texture in the second shader the alpha value is always 1.0. I tested this in the following way: at the end of the first shader I did this: gl_FragColor(r, g, b, 0.1); and then in the second texture I read the value of the first texture using something along the lines of vec4 f = texture2D(previous_tex, pos); if (f.a != 1.0) { gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0); return; } No pixels in my output are black, whereas if I change the above code to read gl_FragColor(r, g, 0.1, 1.0); //Notice I'm now sending 0.1 for blue and in the second shader vec4 f = texture2D(previous_tex, pos); if (f.b != 1.0) { gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0); return; } All the appropriate pixels are black. This means that for some reason when I set the alpha value to something other than 1.0 in the first shader and render to a texture, it is still seen as being 1.0 by the second shader. Before I render to texture I glDisable(GL_BLEND); It seems pretty clear to me that the problem has to do with OpenGL handling alpha values in some way that isn't obvious to me since I can use the blue channel in the way I want, and figured someone out there will instantly see the problem.
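
    One frequent cause of exactly this symptom, offered as a guess since the post doesn't show how the render texture is allocated: if the texture attached to the FBO uses an RGB internal format, no alpha channel is stored at all, and reads come back as 1.0. Allocating it with an alpha-carrying format avoids that:

        /* sketch: give the FBO colour attachment an internal format that actually stores alpha */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8,          /* not GL_RGB / GL_RGB8 */
                     width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    A glColorMask call with the alpha channel disabled would produce the same behaviour and is also worth ruling out.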

    Read the article

  • Static and Non Static Method Intercall in Java

    - by Vishal
    I am clearing up my concepts on Java. My knowledge of Java is very much on the beginner side, so kindly bear with me. I am trying to understand calls between static and non-static methods. I know that: a static method can call another static method simply by its name within the same class; a static method can call a non-static method of the same class only after creating an instance of the class; a non-static method can call a static method of the same class simply by way of ClassName.methodName (not sure if this is correct?). My question is about a non-static method calling another non-static method of the same class. Within the class declaration, can we call a non-static method of the same class from another non-static method? Please explain with an example. Thank you.
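
    Since the question asks for an example, here is a minimal one (my own illustration, not from the post) showing all four combinations:

        public class Demo {
            static void staticHelper() {
                System.out.println("static helper");
            }

            static void staticEntry() {
                staticHelper();              // static -> static: the plain name is enough
                new Demo().instanceWork();   // static -> non-static: an instance is required
            }

            void instanceWork() {
                staticHelper();              // non-static -> static: plain name works (Demo.staticHelper() also works)
                instanceDetail();            // non-static -> non-static: direct call on the current object (this)
            }

            void instanceDetail() {
                System.out.println("instance detail, called on " + this);
            }

            public static void main(String[] args) {
                staticEntry();
            }
        }

    So the answer to the last question is yes: one non-static method can call another non-static method of the same class directly, because both run against the same object (this).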

    Read the article

  • Mustache.js render technique

    - by PanosJee
    Hello everyone, I am trying to use mustache.js to render some JSON in the browser. What I want to do is: <li> <span class="label">Location: </span> {{#locations}} {{.}}<span class="social-small-size "></span> {{/locations}} </li> The locations value is a JS array: [["Pendéli, Attiki, Greece", "facebook"], ["Greece", "linkedin"]] Initially I tried to use {{%IMPLICIT-ITERATOR iterator=loc}} in my attempt to split the data in the view, so the actual rendering code was {{loc[0]}}<span class="social-small-size {{loc[1]}}"></span> But that didn't work, although the loop worked and I got 2 spans, but without any content. I think the PRAGMA is what I need but I didn't figure it out. Any hints? :)
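
    Rather than fighting the implicit-iterator pragma, the nested arrays can be reshaped into objects with named fields before rendering (a sketch; it assumes the mustache.js build exposes Mustache.render, and the field names place and network are my own):

        var raw = [["Pendéli, Attiki, Greece", "facebook"], ["Greece", "linkedin"]];
        var view = {
          locations: raw.map(function (pair) {
            return { place: pair[0], network: pair[1] };
          })
        };
        var template =
          '<li><span class="label">Location: </span>' +
          '{{#locations}}{{place}}<span class="social-small-size {{network}}"></span>{{/locations}}' +
          '</li>';
        var html = Mustache.render(template, view);   // older builds use Mustache.to_html(template, view)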

    Read the article

  • Render html code in sql server client report (rdlc)

    - by masoud ramezani
    I am using an ASP.NET web application with the Microsoft Visual Studio ReportViewer control and an RDLC file for creating a report (not using SQL Server Reporting Services). I used the Product table to view the result. It has five fields and I display all the items in the report. One field is Description and it stores HTML code as the value (e.g.: <div><ul><li>a</li><li>b</li></ul><b>aaaa</b></div>). I want to display the output of this HTML code in my report's Description field. But in my report, it shows the raw HTML value that I stored in my table (<div><ul><li>a</li><li>b</li></ul><b>aaaa</b></div>). How can I render the HTML in my report? Please give me a solution.

    Read the article

  • write to depth buffer while using multiple render targets

    - by DocSeuss
    Presently my engine is set up to use deferred shading. My pixel shader output struct is as follows: struct GBuffer { float4 Depth : DEPTH0; //depth render target float4 Normal : COLOR0; //normal render target float4 Diffuse : COLOR1; //diffuse render target float4 Specular : COLOR2; //specular render target }; This works fine for flat surfaces, but I'm trying to implement relief mapping which requires me to manually write to the depth buffer to get correct silhouettes. MSDN suggests doing what I'm already doing to output to my depth render target - however, this has no impact on z culling. I think it might be because XNA uses a different depth buffer for every RenderTarget2D. How can I address these depth buffers from the pixel shader?

    Read the article

  • Problem with JOGL and Framebuffer Render-to-texture: Invalid Framebuffer Operation Error

    - by quadelirus
    Okay, so I am trying to render a scene to a small 32x32 texture and ran into problems. I get an "invalid framebuffer operation" error when I try to actually draw anything to the texture. I have simplified the code below so that it simply tries to render a quad to a texture and then bind that quad as a texture for another quad that is rendered to the screen. So my question is this... where is the error? This is using JOGL 1.1.1. The error occurs at Checkpoint2 in the code. import java.awt.event.*; import javax.media.opengl.*; import javax.media.opengl.glu.*; import javax.swing.JFrame; import java.nio.*; public class Main extends JFrame implements GLEventListener, KeyListener, MouseListener, MouseMotionListener, ActionListener{ /* GL related variables */ private final GLCanvas canvas; private GL gl; private GLU glu; private int winW = 600, winH = 600; private int texRender_FBO; private int texRender_RB; private int texRender_32x32; public static void main(String args[]) { new Main(); } /* creates OpenGL window */ public Main() { super("Problem Child"); canvas = new GLCanvas(); canvas.addGLEventListener(this); canvas.addKeyListener(this); canvas.addMouseListener(this); canvas.addMouseMotionListener(this); getContentPane().add(canvas); setSize(winW, winH); setLocationRelativeTo(null); setDefaultCloseOperation(EXIT_ON_CLOSE); setVisible(true); canvas.requestFocus(); } /* gl display function */ public void display(GLAutoDrawable drawable) { gl.glBindFramebufferEXT(GL.GL_FRAMEBUFFER_EXT, this.texRender_FBO); gl.glPushAttrib(GL.GL_VIEWPORT_BIT); gl.glViewport(0, 0, 32, 32); gl.glClearColor(1.f, 0.f, 0.f, 1.f); System.out.print("Checkpoint1: "); outputError(); gl.glBegin(GL.GL_QUADS); { //gl.glTexCoord2f(0.0f, 0.0f); gl.glColor3f(1.f, 0.f, 0.f); gl.glVertex3f(0.0f, 1.0f, 1.0f); //gl.glTexCoord2f(1.0f, 0.0f); gl.glColor3f(1.f, 1.f, 0.f); gl.glVertex3f(1.0f, 1.0f, 1.0f); //gl.glTexCoord2f(1.0f, 1.0f); gl.glColor3f(1.f, 1.f, 1.f); gl.glVertex3f(1.0f, 0.0f, 1.0f); //gl.glTexCoord2f(0.0f, 1.0f); gl.glColor3f(1.f, 0.f, 1.f); gl.glVertex3f(0.0f, 0.0f, 1.0f); } gl.glEnd(); System.out.print("Checkpoint2: "); outputError(); //Here I get an invalid framebuffer operation gl.glPopAttrib(); gl.glBindFramebufferEXT(GL.GL_FRAMEBUFFER_EXT, 0); gl.glClearColor(0.f, 0.f, 0.f, 1.f); gl.glClear(GL.GL_COLOR_BUFFER_BIT); gl.glColor3f(1.f, 1.f, 1.f); gl.glBindTexture(GL.GL_TEXTURE_1D, this.texRender_32x32); gl.glBegin(GL.GL_QUADS); { gl.glTexCoord2f(0.0f, 0.0f); //gl.glColor3f(1.f, 0.f, 0.f); gl.glVertex3f(0.0f, 1.0f, 1.0f); gl.glTexCoord2f(1.0f, 0.0f); //gl.glColor3f(1.f, 1.f, 0.f); gl.glVertex3f(1.0f, 1.0f, 1.0f); gl.glTexCoord2f(1.0f, 1.0f); //gl.glColor3f(1.f, 1.f, 1.f); gl.glVertex3f(1.0f, 0.0f, 1.0f); gl.glTexCoord2f(0.0f, 1.0f); //gl.glColor3f(1.f, 0.f, 1.f); gl.glVertex3f(0.0f, 0.0f, 1.0f); } gl.glEnd(); } /* initialize GL */ public void init(GLAutoDrawable drawable) { gl = drawable.getGL(); glu = new GLU(); gl.glClearColor(.3f, .3f, .3f, 1f); gl.glClearDepth(1.0f); gl.glMatrixMode(GL.GL_PROJECTION); gl.glLoadIdentity(); gl.glOrtho(0, 1, 0, 1, -10, 10); gl.glMatrixMode(GL.GL_MODELVIEW); //Set up the 32x32 texture this.texRender_FBO = genFBO(gl); gl.glBindFramebufferEXT(GL.GL_FRAMEBUFFER_EXT, this.texRender_FBO); this.texRender_32x32 = genTexture(gl); gl.glBindTexture(GL.GL_TEXTURE_2D, this.texRender_32x32); gl.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_RGB_FLOAT32_ATI, 32, 32, 0, GL.GL_RGB, GL.GL_FLOAT, null); gl.glFramebufferTexture2DEXT(GL.GL_FRAMEBUFFER_EXT, GL.GL_COLOR_ATTACHMENT0_EXT, GL.GL_TEXTURE_2D, 
this.texRender_32x32, 0); //gl.glDrawBuffer(GL.GL_COLOR_ATTACHMENT0_EXT); this.texRender_RB = genRB(gl); gl.glBindRenderbufferEXT(GL.GL_RENDERBUFFER_EXT, this.texRender_RB); gl.glRenderbufferStorageEXT(GL.GL_RENDERBUFFER_EXT, GL.GL_DEPTH_COMPONENT24, 32, 32); gl.glFramebufferRenderbufferEXT(GL.GL_FRAMEBUFFER_EXT, GL.GL_DEPTH_ATTACHMENT_EXT, GL.GL_RENDERBUFFER_EXT, this.texRender_RB); gl.glBindFramebufferEXT(GL.GL_FRAMEBUFFER_EXT, 0); gl.glBindRenderbufferEXT(GL.GL_RENDERBUFFER_EXT, 0); outputError(); } private void outputError() { int c; if ((c = gl.glGetError()) != GL.GL_NO_ERROR) System.out.println(glu.gluErrorString(c)); } private int genRB(GL gl) { int[] array = new int[1]; IntBuffer ib = IntBuffer.wrap(array); gl.glGenRenderbuffersEXT(1, ib); return ib.get(0); } private int genFBO(GL gl) { int[] array = new int[1]; IntBuffer ib = IntBuffer.wrap(array); gl.glGenFramebuffersEXT(1, ib); return ib.get(0); } private int genTexture(GL gl) { final int[] tmp = new int[1]; gl.glGenTextures(1, tmp, 0); return tmp[0]; } /* mouse and keyboard callback functions */ public void reshape(GLAutoDrawable drawable, int x, int y, int width, int height) { winW = width; winH = height; gl.glViewport(0, 0, width, height); } //Sorry about these, I just had to delete massive amounts of code to boil this thing down and these are hangers-on public void mousePressed(MouseEvent e) {} public void mouseDragged(MouseEvent e) {} public void mouseReleased(MouseEvent e) {} public void keyPressed(KeyEvent e) {} public void displayChanged(GLAutoDrawable drawable, boolean modeChanged, boolean deviceChanged) { } public void keyTyped(KeyEvent e) { } public void keyReleased(KeyEvent e) { } public void mouseMoved(MouseEvent e) { } public void actionPerformed(ActionEvent e) { } public void mouseClicked(MouseEvent e) { } public void mouseEntered(MouseEvent e) { } public void mouseExited(MouseEvent e) { } }

    Read the article

  • Behaviour of disabling "Allow non-administrators to receive notifications" GPO

    - by Jaymz
    Hi everyone, As the title suggests, I'm trying to figure out the specific behaviour of the following GPO when disabled: Administrative Templates > Windows Components > Allow non-administrators to receive update notifications We've just started using WSUS, and have added a few machines for testing. At the moment, this is set to Enabled. The problem with this setting is it seems to allow users to opt out of certain updates if they deselect the checkbox after hitting custom install. My main concern with disabling this setting is this: Does it stop non-admins from getting the installs deployed to them? My guess would be that it will just install them silently at the set scheduled time, suppressing any prompts and ensuring they don't get the opportunity to cancel them (this is what I want). My worry is that non-admin users will never get updates pushed to them unless an admin goes and logs on to their machine (not what I want, and seems like a silly situation to be in). Thanks in advance, Jaymz.

    Read the article

  • Render string to texture in Android and OpenGL ES

    - by Eddie Ringle
    I've googled around everywhere, but cannot find much about rendering strings to textures and then displaying that texture on a quad on the screen. Can someone provide a run-down of the process or point to good resources that describe how? Is rendering strings to textures even the best method for displaying text in an Android OpenGL ES app? EDIT: Okay, so LabelMaker interferes with alpha blending: the texture (created from a PNG with a transparent background) now has a solid black background rather than a transparent one. If I comment out all the LabelMaker-related code, it works fine.
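
    The approach most often described for this is to draw the string into a Bitmap with Canvas and upload that Bitmap as a texture. A sketch follows (the class, sizes, and text are my own, and the blending note at the end is only a guess at the LabelMaker symptom):

        import android.graphics.Bitmap;
        import android.graphics.Canvas;
        import android.graphics.Color;
        import android.graphics.Paint;
        import android.opengl.GLUtils;
        import javax.microedition.khronos.opengles.GL10;

        final class TextTexture {
            // Draw a string into a Bitmap, upload it as a GL texture, and return the texture id.
            static int create(GL10 gl, String text) {
                Bitmap bitmap = Bitmap.createBitmap(256, 64, Bitmap.Config.ARGB_8888);
                bitmap.eraseColor(Color.TRANSPARENT);
                Canvas canvas = new Canvas(bitmap);
                Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
                paint.setTextSize(32);
                paint.setColor(Color.WHITE);
                canvas.drawText(text, 10, 40, paint);

                int[] tex = new int[1];
                gl.glGenTextures(1, tex, 0);
                gl.glBindTexture(GL10.GL_TEXTURE_2D, tex[0]);
                gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
                gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
                GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
                bitmap.recycle();
                return tex[0];
            }
        }

    When drawing the quad, enabling straight alpha blending (gl.glEnable(GL10.GL_BLEND) with gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA)) keeps the transparent background from turning black.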

    Read the article

  • Render To Texture Using OpenGL is not working but normal rendering works just fine

    - by Franky Rivera
    things I initialize at the beginning of the program I realize not all of these pertain to my issue I just copy and pasted what I had //overall initialized //things openGL related I initialize earlier on in the project glClearColor( 0.0f, 0.0f, 0.0f, 1.0f ); glClearDepth( 1.0f ); glEnable(GL_ALPHA_TEST); glEnable( GL_STENCIL_TEST ); glEnable(GL_DEPTH_TEST); glDepthFunc( GL_LEQUAL ); glEnable(GL_CULL_FACE); glFrontFace( GL_CCW ); glEnable(GL_COLOR_MATERIAL); glEnable(GL_BLEND); glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); glHint( GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST ); //we also initialize our shader programs //(i added some shader program functions for definitions) //this enum list is else where in code //i figured it would help show you guys more about my //shader compile creation function right under this enum list VVVVVV /*enum eSHADER_ATTRIB_LOCATION { VERTEX_ATTRIB = 0, NORMAL_ATTRIB = 2, COLOR_ATTRIB, COLOR2_ATTRIB, FOG_COORD, TEXTURE_COORD_ATTRIB0 = 8, TEXTURE_COORD_ATTRIB1, TEXTURE_COORD_ATTRIB2, TEXTURE_COORD_ATTRIB3, TEXTURE_COORD_ATTRIB4, TEXTURE_COORD_ATTRIB5, TEXTURE_COORD_ATTRIB6, TEXTURE_COORD_ATTRIB7 }; */ //if we fail making our shader leave if( !testShader.CreateShader( "SimpleShader.vp", "SimpleShader.fp", 3, VERTEX_ATTRIB, "vVertexPos", NORMAL_ATTRIB, "vNormal", TEXTURE_COORD_ATTRIB0, "vTexCoord" ) ) return false; if( !testScreenShader.CreateShader( "ScreenShader.vp", "ScreenShader.fp", 3, VERTEX_ATTRIB, "vVertexPos", NORMAL_ATTRIB, "vNormal", TEXTURE_COORD_ATTRIB0, "vTexCoord" ) ) return false; SHADER PROGRAM FUNCTIONS bool CShaderProgram::CreateShader( const char* szVertexShaderName, const char* szFragmentShaderName, ... ) { //here are our handles for the openGL shaders int iGLVertexShaderHandle = -1, iGLFragmentShaderHandle = -1; //get our shader data char *vData = 0, *fData = 0; int vLength = 0, fLength = 0; LoadShaderFile( szVertexShaderName, &vData, &vLength ); LoadShaderFile( szFragmentShaderName, &fData, &fLength ); //data if( !vData ) return false; //data if( !fData ) { delete[] vData; return false; } //create both our shader objects iGLVertexShaderHandle = glCreateShader( GL_VERTEX_SHADER ); iGLFragmentShaderHandle = glCreateShader( GL_FRAGMENT_SHADER ); //well we got this far so we have dynamic data to clean up //load vertex shader glShaderSource( iGLVertexShaderHandle, 1, (const char**)(&vData), &vLength ); //load fragment shader glShaderSource( iGLFragmentShaderHandle, 1, (const char**)(&fData), &fLength ); //we are done with our data delete it delete[] vData; delete[] fData; //compile them both glCompileShader( iGLVertexShaderHandle ); //get shader status int iShaderOk; glGetShaderiv( iGLVertexShaderHandle, GL_COMPILE_STATUS, &iShaderOk ); if( iShaderOk == GL_FALSE ) { char* buffer; //get what happend with our shader glGetShaderiv( iGLVertexShaderHandle, GL_INFO_LOG_LENGTH, &iShaderOk ); buffer = new char[iShaderOk]; glGetShaderInfoLog( iGLVertexShaderHandle, iShaderOk, NULL, buffer ); //sprintf_s( buffer, "Failure Our Object For %s was not created", szFileName ); MessageBoxA( NULL, buffer, szVertexShaderName, MB_OK ); //delete our dynamic data free( buffer ); glDeleteShader(iGLVertexShaderHandle); return false; } glCompileShader( iGLFragmentShaderHandle ); //get shader status glGetShaderiv( iGLFragmentShaderHandle, GL_COMPILE_STATUS, &iShaderOk ); if( iShaderOk == GL_FALSE ) { char* buffer; //get what happend with our shader glGetShaderiv( iGLFragmentShaderHandle, GL_INFO_LOG_LENGTH, &iShaderOk ); buffer = new char[iShaderOk]; 
glGetShaderInfoLog( iGLFragmentShaderHandle, iShaderOk, NULL, buffer ); //sprintf_s( buffer, "Failure Our Object For %s was not created", szFileName ); MessageBoxA( NULL, buffer, szFragmentShaderName, MB_OK ); //delete our dynamic data free( buffer ); glDeleteShader(iGLFragmentShaderHandle); return false; } //lets check to see if the fragment shader compiled int iCompiled = 0; glGetShaderiv( iGLVertexShaderHandle, GL_COMPILE_STATUS, &iCompiled ); if( !iCompiled ) { //this shader did not compile leave return false; } //lets check to see if the fragment shader compiled glGetShaderiv( iGLFragmentShaderHandle, GL_COMPILE_STATUS, &iCompiled ); if( !iCompiled ) { char* buffer; //get what happend with our shader glGetShaderiv( iGLFragmentShaderHandle, GL_INFO_LOG_LENGTH, &iShaderOk ); buffer = new char[iShaderOk]; glGetShaderInfoLog( iGLFragmentShaderHandle, iShaderOk, NULL, buffer ); //sprintf_s( buffer, "Failure Our Object For %s was not created", szFileName ); MessageBoxA( NULL, buffer, szFragmentShaderName, MB_OK ); //delete our dynamic data free( buffer ); glDeleteShader(iGLFragmentShaderHandle); return false; } //make our new shader program m_iShaderProgramHandle = glCreateProgram(); glAttachShader( m_iShaderProgramHandle, iGLVertexShaderHandle ); glAttachShader( m_iShaderProgramHandle, iGLFragmentShaderHandle ); glLinkProgram( m_iShaderProgramHandle ); int iLinked = 0; glGetProgramiv( m_iShaderProgramHandle, GL_LINK_STATUS, &iLinked ); if( !iLinked ) { //we didn't link return false; } //NOW LETS CREATE ALL OUR HANDLES TO OUR PROPER LIKING //start from this parameter va_list parseList; va_start( parseList, szFragmentShaderName ); //read in number of variables if any unsigned uiNum = 0; uiNum = va_arg( parseList, unsigned ); //for loop through our attribute pairs int enumType = 0; for( unsigned x = 0; x < uiNum; ++x ) { //specify our attribute locations enumType = va_arg( parseList, int ); char* name = va_arg( parseList, char* ); glBindAttribLocation( m_iShaderProgramHandle, enumType, name ); } //end our list parsing va_end( parseList ); //relink specify //we have custom specified our attribute locations glLinkProgram( m_iShaderProgramHandle ); //fill our handles InitializeHandles( ); //everything went great return true; } void CShaderProgram::InitializeHandles( void ) { m_uihMVP = glGetUniformLocation( m_iShaderProgramHandle, "mMVP" ); m_uihWorld = glGetUniformLocation( m_iShaderProgramHandle, "mWorld" ); m_uihView = glGetUniformLocation( m_iShaderProgramHandle, "mView" ); m_uihProjection = glGetUniformLocation( m_iShaderProgramHandle, "mProjection" ); ///////////////////////////////////////////////////////////////////////////////// //texture handles m_uihDiffuseMap = glGetUniformLocation( m_iShaderProgramHandle, "diffuseMap" ); if( m_uihDiffuseMap != -1 ) { //store what texture index this handle will be in the shader glUniform1i( m_uihDiffuseMap, RM_DIFFUSE+GL_TEXTURE0 ); (0)+ } m_uihNormalMap = glGetUniformLocation( m_iShaderProgramHandle, "normalMap" ); if( m_uihNormalMap != -1 ) { //store what texture index this handle will be in the shader glUniform1i( m_uihNormalMap, RM_NORMAL+GL_TEXTURE0 ); (1)+ } } void CShaderProgram::SetDiffuseMap( const unsigned& uihDiffuseMap ) { (0)+ glActiveTexture( RM_DIFFUSE+GL_TEXTURE0 ); glBindTexture( GL_TEXTURE_2D, uihDiffuseMap ); } void CShaderProgram::SetNormalMap( const unsigned& uihNormalMap ) { (1)+ glActiveTexture( RM_NORMAL+GL_TEXTURE0 ); glBindTexture( GL_TEXTURE_2D, uihNormalMap ); } //MY 2 TEST SHADERS also my math order is correct it pertains 
to my matrix ordering in my math library once again i've tested the basic rendering. rendering to the screen works fine ----------------------------------------SIMPLE SHADER------------------------------------- //vertex shader looks like this #version 330 in vec3 vVertexPos; in vec3 vNormal; in vec2 vTexCoord; uniform mat4 mWorld; // Model Matrix uniform mat4 mView; // Camera View Matrix uniform mat4 mProjection;// Camera Projection Matrix out vec2 vTexCoordVary; // Texture coord to the fragment program out vec3 vNormalColor; void main( void ) { //pass the texture coordinate vTexCoordVary = vTexCoord; vNormalColor = vNormal; //calculate our model view projection matrix mat4 mMVP = (( mWorld * mView ) * mProjection ); //result our position gl_Position = vec4( vVertexPos, 1 ) * mMVP; } //fragment shader looks like this #version 330 in vec2 vTexCoordVary; in vec3 vNormalColor; uniform sampler2D diffuseMap; uniform sampler2D normalMap; out vec4 fragColor[2]; void main( void ) { //CORRECT fragColor[0] = texture( normalMap, vTexCoordVary ); fragColor[1] = vec4( vNormalColor, 1.0 ); }; ----------------------------------------SCREEN SHADER------------------------------------- //vertext shader looks like this #version 330 in vec3 vVertexPos; // This is the position of the vertex coming in in vec2 vTexCoord; // This is the texture coordinate.... out vec2 vTexCoordVary; // Texture coord to the fragment program void main( void ) { vTexCoordVary = vTexCoord; //set our position gl_Position = vec4( vVertexPos.xyz, 1.0f ); } //fragment shader looks like this #version 330 in vec2 vTexCoordVary; // Incoming "varying" texture coordinate uniform sampler2D diffuseMap;//the tile detail texture uniform sampler2D normalMap; //the normal map from earlier out vec4 vTheColorOfThePixel; void main( void ) { //CORRECT vTheColorOfThePixel = texture( normalMap, vTexCoordVary ); }; .Class RenderTarget Main Functions //here is my render targets create function bool CRenderTarget::Create( const unsigned uiNumTextures, unsigned uiWidth, unsigned uiHeight, int iInternalFormat, bool bDepthWanted ) { if( uiNumTextures <= 0 ) return false; //generate our variables glGenFramebuffers(1, &m_uifboHandle); // Initialize FBO glBindFramebuffer(GL_FRAMEBUFFER, m_uifboHandle); m_uiNumTextures = uiNumTextures; if( bDepthWanted ) m_uiNumTextures += 1; m_uiTextureHandle = new unsigned int[uiNumTextures]; glGenTextures( uiNumTextures, m_uiTextureHandle ); for( unsigned x = 0; x < uiNumTextures-1; ++x ) { glBindTexture( GL_TEXTURE_2D, m_uiTextureHandle[x]); // Reserve space for our 2D render target glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); glTexImage2D(GL_TEXTURE_2D, 0, iInternalFormat, uiWidth, uiHeight, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL); glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + x, GL_TEXTURE_2D, m_uiTextureHandle[x], 0); } //if we need one for depth testing if( bDepthWanted ) { glFramebufferTexture2D(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, m_uiTextureHandle[uiNumTextures-1], 0); glFramebufferTexture2D(GL_FRAMEBUFFER_EXT, GL_STENCIL_ATTACHMENT, GL_TEXTURE_2D, m_uiTextureHandle[uiNumTextures-1], 0);*/ // Must attach texture to framebuffer. 
Has Stencil and depth glBindRenderbuffer(GL_RENDERBUFFER, m_uiTextureHandle[uiNumTextures-1]); glRenderbufferStorage(GL_RENDERBUFFER, /*GL_DEPTH_STENCIL*/GL_DEPTH24_STENCIL8, TEXTURE_WIDTH, TEXTURE_HEIGHT ); glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, m_uiTextureHandle[uiNumTextures-1]); glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, m_uiTextureHandle[uiNumTextures-1]); } glBindFramebuffer(GL_FRAMEBUFFER, 0); //everything went fine return true; } void CRenderTarget::Bind( const int& iTargetAttachmentLoc, const unsigned& uiWhichTexture, const bool bBindFrameBuffer ) { if( bBindFrameBuffer ) glBindFramebuffer( GL_FRAMEBUFFER, m_uifboHandle ); if( uiWhichTexture < m_uiNumTextures ) glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + iTargetAttachmentLoc, m_uiTextureHandle[uiWhichTexture], 0); } void CRenderTarget::UnBind( void ) { //default our binding glBindFramebuffer( GL_FRAMEBUFFER, 0 ); } //this is all in a test project so here's my straight forward rendering function for testing this render function does basic rendering steps keep in mind i have already tested my textures i have already tested my box thats being rendered all basic rendering works fine its just when i try to render to a texture then display it in a render surface that it does not work. Also I have tested my render surface it is bound exactly to the screen coordinate space void TestRenderSteps( void ) { //Clear the color and the depth glClearColor( 0.0f, 0.0f, 0.0f, 1.0f ); glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT ); //bind the shader program glUseProgram( testShader.m_iShaderProgramHandle ); //1) grab the vertex buffer related to our rendering glBindBuffer( GL_ARRAY_BUFFER, CVertexBufferManager::GetInstance()->GetPositionNormalTexBuffer().GetBufferHandle() ); //2) how our stream will be split here ( 4 bytes position, ..ext ) CVertexBufferManager::GetInstance()->GetPositionNormalTexBuffer().MapVertexStride(); //3) set the index buffer if needed glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, CIndexBuffer::GetInstance()->GetBufferHandle() ); //send the needed information into the shader testShader.SetWorldMatrix( boxPosition ); testShader.SetViewMatrix( Static_Camera.GetView( ) ); testShader.SetProjectionMatrix( Static_Camera.GetProjection( ) ); testShader.SetDiffuseMap( iTextureID ); testShader.SetNormalMap( iTextureID2 ); GLenum buffers[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 }; glDrawBuffers(2, buffers); //bind to our render target //RM_DIFFUSE, RM_NORMAL are enums (0 && 1) renderTarget.Bind( RM_DIFFUSE, 1, true ); renderTarget.Bind( RM_NORMAL, 1, false); //false because buffer is already bound //i clear here just to clear the texture to make it a default value of white //by doing this i can see if what im rendering to my screen is just drawing to the screen //or if its my render target defaulted glClearColor( 1.0f, 1.0f, 1.0f, 1.0f ); glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT ); //i have this box object which i draw testBox.Draw(); //the draw call looks like this //my normal rendering works just fine so i know this draw is fine // glDrawElementsBaseVertex( m_sides[x].GetPrimitiveType(), // m_sides[x].GetPrimitiveCount() * 3, // GL_UNSIGNED_INT, // BUFFER_OFFSET(sizeof(unsigned int) * m_sides[x].GetStartIndex()), // m_sides[x].GetStartVertex( ) ); //we unbind the target back to default renderTarget.UnBind(); //i stop mapping my vertex format 
CVertexBufferManager::GetInstance()->GetPositionNormalTexBuffer().UnMapVertexStride(); //i go back to default in using no shader program glUseProgram( 0 ); //now that everything is drawn to the textures //lets draw our screen surface and pass it our 2 filled out textures //NOW RENDER THE TEXTURES WE COLLECTED TO THE SCREEN QUAD //bind the shader program glUseProgram( testScreenShader.m_iShaderProgramHandle ); //1) grab the vertex buffer related to our rendering glBindBuffer( GL_ARRAY_BUFFER, CVertexBufferManager::GetInstance()->GetPositionTexBuffer().GetBufferHandle() ); //2) how our stream will be split here CVertexBufferManager::GetInstance()->GetPositionTexBuffer().MapVertexStride(); //3) set the index buffer if needed glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, CIndexBuffer::GetInstance()->GetBufferHandle() ); //pass our 2 filled out textures (in the shader im just using the diffuse //i wanted to see if i was rendering anything before i started getting into other techniques testScreenShader.SetDiffuseMap( renderTarget.GetTextureHandle(0) ); //SetDiffuseMap definitions in shader program class testScreenShader.SetNormalMap( renderTarget.GetTextureHandle(1) ); //SetNormalMap definitions in shader program class //DO the draw call drawing our screen rectangle glDrawElementsBaseVertex( m_ScreenRect.GetPrimitiveType(), m_ScreenRect.GetPrimitiveCount() * 3, GL_UNSIGNED_INT, BUFFER_OFFSET(sizeof(unsigned int) * m_ScreenRect.GetStartIndex()), m_ScreenRect.GetStartVertex( ) );*/ //unbind our vertex mapping CVertexBufferManager::GetInstance()->GetPositionTexBuffer().UnMapVertexStride(); //default to no shader program glUseProgram( 0 ); } Last words: 1) I can render my box just fine 2) i can render my screen rect just fine 3) I cannot render my box into a texture then display it into my screen rect 4) This entire project is just a test project I made to test different rendering practices. So excuse any "ugly-ish" unclean code. This was made just on a fly run through when I was trying new test cases.

    Read the article

  • How do you share your craft with non programmers?

    - by EpsilonVector
    Sometimes I feel like a musician who can't play live shows. Programming is a pretty cool skill and a very broad world, but a lot of it happens "off camera" - in your head, in your office, away from spectators. You can of course talk about programming with other programmers, there is pair programming, and you do get to create something that you can show to people, but when it comes to explaining to non-programmers what it is that you do, or how your day at work was, it's sort of tricky. How do you get the non-programmers in your life to understand what it is that you do? NOTE: this is not a repeat of Getting non-programmers to understand the development process, because that question was about managing client expectations.

    Read the article

  • Best way to relate code smells to a non technical audience?

    - by Ed Guiness
    I have been asked to present examples of code issues that were found during a code review. My audience is mostly non-technical, and I want to express the issues in such a way that I convey the importance of "good code" versus "bad code". But as I review my presentation, it seems to me I've glossed over the reasons why it is important to write good code. I've mentioned a number of reasons, including ease of maintenance and the increased likelihood of bugs in bad code, but with my "non-tech" hat on they seem unconvincing. What is your advice for helping a non-technical audience relate to the importance of good code?

    Read the article

  • How do you explain commented-out code to a non-programmer? [closed]

    - by whirlwin
    What is the quickest and most comprehensible way to explain to a non-programmer what commented-out code is? When I mentioned it in conversation with non-programmers, they seemed lost. Such people could, for instance, be graphic designers working on the same team to build an application. Typically I would need to mention what I will be, or currently am, working on during an update meeting. At first I thought about substituting "commented-out" with "unused code". While that is true to some degree, it is also very ambiguous. If you are wondering, I am working with legacy code that contains commented-out code. This leads to my question: "how do you explain commented-out code to a non-programmer?"

    Read the article

  • Where can I find a collection of photorealistic scenes? [on hold]

    - by emchristiansen
    Where can I find a collection of 10-100 3D scenes with photorealistic levels of detail? For example, the scene rendered here contains photorealistic detail. Note I want the underlying object models, textures, etc, not final renderings. I'll be using these scenes to test a computer vision idea. BTW, this is a reposting of this question. I wasn't able to find a Stack Exchange site that is perfectly suited to the question, but as this is a modeling and rendering question, I'm hoping it is a good enough match for the venue.

    Read the article

  • How to configure non-admin accounts to install updates of non-Microsoft applications using Active Directory

    - by MadBoy
    How can I configure non-admin users so they can install updates for Java and Adobe Acrobat Reader (or any other application which may need such privileges) without needing an administrator password on Windows 7? Updates for Microsoft products install without problems. This can be an Active Directory (Windows 2003) solution, or a computer-based one (deployable through a GPO or login script).

    Read the article
