Search Results

Search found 5155 results on 207 pages for 'render to texture'.


  • Instead of the specified Texture, black circles on a green background are getting rendered. Why?

    - by vinzBad
    I'm trying to render a Texture via OpenGL. But instead of the texture, black circles on a green background are rendered. (They scale, depending on what the rotation of the texture is) Example: The texture I'm trying to render is the following: This is the code I use to render the texture; it's located in my Sprite-class. public void Render() { Matrix4 matrix = Matrix4.CreateTranslation(-OriginX, -OriginY, 0) * Matrix4.CreateRotationZ(Rotation) * Matrix4.CreateTranslation(X, Y, 0); Vector2[] corners = { new Vector2(0,0), //top left new Vector2(Width ,0),//top right new Vector2(Width,Height),//bottom right new Vector2(0,Height)//bottom left }; //copy the corners to the uv coordinates Vector2[] uv = corners.ToArray<Vector2>(); //transform the coordinates for (int i = 0; i < 4; i++) corners[i] = new Vector2(Vector3.Transform(new Vector3(corners[i]), matrix)); //GL.Color3(TintColor); GL.BindTexture(TextureTarget.Texture2D, _ID); GL.Begin(BeginMode.Quads); { for (int i = 0; i < 4; i++) { GL.TexCoord2(uv[i]); GL.Vertex3(corners[i].X, corners[i].Y, _layerDepth); } } GL.End(); if (EnableDebugDraw) { GL.Color3(Color.Violet); GL.PointSize(3); GL.Begin(BeginMode.Points); { for (int i = 0; i < 4; i++) GL.Vertex2(corners[i]); } GL.End(); GL.Color3(Color.Green); GL.Begin(BeginMode.Points); GL.Vertex2(X, Y); GL.End(); } } This is how I set up OpenGL. public static void SetupGL() { GL.Enable(EnableCap.AlphaTest); GL.AlphaFunc(AlphaFunction.Greater, 0.1f); GL.Enable(EnableCap.Texture2D); GL.Hint(HintTarget.PerspectiveCorrectionHint, HintMode.Nicest); } With this function I load the texture: public static uint LoadTexture(string path) { uint id; GL.GenTextures(1, out id); GL.BindTexture(TextureTarget.Texture2D, id); Bitmap bitmap = new Bitmap(path); BitmapData data = bitmap.LockBits(new System.Drawing.Rectangle(0, 0, bitmap.Width, bitmap.Height), ImageLockMode.ReadOnly, System.Drawing.Imaging.PixelFormat.Format32bppArgb); GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba, data.Width, data.Height, 0, OpenTK.Graphics.OpenGL.PixelFormat.Bgra, PixelType.UnsignedByte, data.Scan0); bitmap.UnlockBits(data); GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Linear); GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Linear); return id; } And here I call Sprite.Render() protected override void OnRenderFrame(FrameEventArgs e) { GL.ClearColor(Color.MidnightBlue); GL.Clear(ClearBufferMask.ColorBufferBit); _sprite.Render(); SwapBuffers(); base.OnRenderFrame(e); } As I stole this code from the Textures-Example from OpenTK, I don't understand why this doesn't work.
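
    One thing worth checking here (an editorial guess, not a confirmed fix): uv is copied from corners before the transform, so the texture coordinates run from 0 to Width and 0 to Height in pixels rather than from 0 to 1. With the default GL_REPEAT wrap mode that tiles the texture hundreds of times across the quad, which can easily alias into a dot pattern. A minimal sketch in raw C-style GL (the OpenTK calls GL.TexCoord2/GL.Vertex3 map one-to-one); textureId, corners and layerDepth are hypothetical stand-ins for the fields used in Render():

    #include <GL/gl.h>

    // Sketch only: draw the quad with UVs normalized to [0,1] so the texture
    // maps exactly once, instead of reusing the pixel-space corner positions.
    void DrawTexturedQuad(GLuint textureId, const float corners[4][2], float layerDepth)
    {
        static const float uv[4][2] = {
            {0.0f, 0.0f},  // top left
            {1.0f, 0.0f},  // top right
            {1.0f, 1.0f},  // bottom right
            {0.0f, 1.0f}   // bottom left
        };
        glBindTexture(GL_TEXTURE_2D, textureId);
        glBegin(GL_QUADS);
        for (int i = 0; i < 4; ++i) {
            glTexCoord2f(uv[i][0], uv[i][1]);
            glVertex3f(corners[i][0], corners[i][1], layerDepth);
        }
        glEnd();
    }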


  • Drawing a texture with an alpha channel doesn't work -- draws black

    - by DevDevDev
    I am modifying GLPaint to use a different background, so in this case it is white. Anyway the existing stamp they are using assumes the background is black, so I made a new background with an alpha channel. When I draw on the canvas it is still black, what gives? When I actually draw, I just bind the texture and it works. Something is wrong in this initialization. Here is the photo - (id)initWithCoder:(NSCoder*)coder { CGImageRef brushImage; CGContextRef brushContext; GLubyte *brushData; size_t width, height; if (self = [super initWithCoder:coder]) { CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer; eaglLayer.opaque = YES; // In this application, we want to retain the EAGLDrawable contents after a call to presentRenderbuffer. eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithBool:YES], kEAGLDrawablePropertyRetainedBacking, kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat, nil]; context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1]; if (!context || ![EAGLContext setCurrentContext:context]) { [self release]; return nil; } // Create a texture from an image // First create a UIImage object from the data in a image file, and then extract the Core Graphics image brushImage = [UIImage imageNamed:@"test.png"].CGImage; // Get the width and height of the image width = CGImageGetWidth(brushImage); height = CGImageGetHeight(brushImage); // Texture dimensions must be a power of 2. If you write an application that allows users to supply an image, // you'll want to add code that checks the dimensions and takes appropriate action if they are not a power of 2. // Make sure the image exists if(brushImage) { brushData = (GLubyte *) calloc(width * height * 4, sizeof(GLubyte)); brushContext = CGBitmapContextCreate(brushData, width, width, 8, width * 4, CGImageGetColorSpace(brushImage), kCGImageAlphaPremultipliedLast); CGContextDrawImage(brushContext, CGRectMake(0.0, 0.0, (CGFloat)width, (CGFloat)height), brushImage); CGContextRelease(brushContext); glGenTextures(1, &brushTexture); glBindTexture(GL_TEXTURE_2D, brushTexture); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, brushData); free(brushData); } //Set up OpenGL states glMatrixMode(GL_PROJECTION); CGRect frame = self.bounds; glOrthof(0, frame.size.width, 0, frame.size.height, -1, 1); glViewport(0, 0, frame.size.width, frame.size.height); glMatrixMode(GL_MODELVIEW); glDisable(GL_DITHER); glEnable(GL_TEXTURE_2D); glEnable(GL_BLEND); glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_DST_ALPHA); glEnable(GL_POINT_SPRITE_OES); glTexEnvf(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_TRUE); glPointSize(width / kBrushScale); } return self; }
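
    A hedged observation rather than a verified answer: the CGBitmapContextCreate call above asks for kCGImageAlphaPremultipliedLast, so the RGB values in brushData arrive premultiplied by alpha (note also that its height argument is passed as width, which is harmless for a square brush but wrong otherwise). For premultiplied sources the usual blend pairing is GL_ONE with GL_ONE_MINUS_SRC_ALPHA rather than GL_SRC_ALPHA with GL_ONE_MINUS_DST_ALPHA, which tends to darken against a light canvas:

    #include <OpenGLES/ES1/gl.h>  // iPhone GL ES 1.x header

    // Sketch, assuming the brush texture holds premultiplied alpha
    // (which is what kCGImageAlphaPremultipliedLast produces).
    void EnablePremultipliedBrushBlending(void)
    {
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
    }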


  • OpenGL: small black pixel on top right corner of texture

    - by user308226
    I wrote an uncompressed TGA texture loader and it works nearly perfectly, except for the fact that there's just one TINY little black patch on the upper right and it's driving me mad. I can get rid of it by using a texture border, but somehow I think that's not the practical solution. Has anyone encountered this kind of problem before and knows, generally, what's going wrong when something like this happens, or should I post the image-loading function code? Here's a picture; the little black dot is REALLY small. http://img651.imageshack.us/img651/2230/dasdwx.png
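
    A guess worth trying before resorting to a border (hedged; without the loader code this can't be confirmed): with the default GL_REPEAT wrap mode, linear filtering at the very edge of a texture blends in texels from the opposite side, and an off-by-one in the loader's last row or column then shows up as exactly this kind of stray dark pixel. Clamping removes the wrap-around; textureId is a stand-in for the loader's GL texture name:

    #include <GL/gl.h>  // GL_CLAMP_TO_EDGE requires GL 1.2+ headers

    // Sketch: clamp sampling at the texture edges instead of wrapping.
    void UseClampedSampling(GLuint textureId)
    {
        glBindTexture(GL_TEXTURE_2D, textureId);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    }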


  • Debugging a basic OpenGL texture fail? (iPhone)

    - by Ben
    Hey all, I have a very basic texture map problem in GL on iPhone, and I'm wondering what strategies there are for debugging this kind of thing. (Frankly, just staring at state machine calls and wondering if any of them is wrong or misordered is no way to live-- are there tools for this?) I have a 512x512 PNG file that I'm loading up from disk (not specially packed), creating a CGBitmapContext, then calling CGContextDrawImage to get bytes out of it. (This code is essentially stolen from an Apple sample.) I'm trying to map the texture to a "quad", with code that looks essentially like this-- all flat 2D stuff, nothing fancy: glEnable(GL_TEXTURE_2D); glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glEnableClientState(GL_TEXTURE_COORD_ARRAY); GLfloat vertices[8] = { viewRect.origin.x, viewRect.size.height, viewRect.origin.x, viewRect.origin.y, viewRect.size.width, viewRect.origin.y, viewRect.size.width, viewRect.size.height }; GLfloat texCoords[8] = { 0, 1.0, 0, 0, 1.0, 0, 1.0, 1.0 }; glBindTexture(GL_TEXTURE_2D, myTextureRef); // This was previously bound to glVertexPointer(2, GL_FLOAT , 0, vertices); glTexCoordPointer(2, GL_FLOAT, 0, texCoords); glDrawArrays(GL_TRIANGLE_FAN, 0, 4); glDisableClientState(GL_TEXTURE_COORD_ARRAY); glDisable(GL_TEXTURE_2D); My supposedly textured area comes out just black. I see no debug output from the CG calls to set up the texture. glGetError reports nothing. If I simplify this code block to just draw the verts, but set up a pure color, the quad area lights up exactly as expected. If I clear the whole context immediately beforehand to red, I don't see the red-- which means something is being rendered there, but not the contents of my PNG. What could I be doing wrong? And more importantly, what are the right tools and techniques for debugging this sort of thing, because running into this kind of problem and not being able to "step through it" in a debugger in any meaningful way is a bummer. Thanks!
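
    One cheap experiment that often pins this down (a guess, not a diagnosis): GL_TEXTURE_ENV_MODE is set to GL_MODULATE above, which multiplies every texel by the current vertex color, so if a previous code path (for example the flat-color debug quad) left the current color dark or with zero alpha, the textured quad renders black even though the texture uploaded fine. Forcing white, or switching to GL_REPLACE, rules that out in one run:

    #include <OpenGLES/ES1/gl.h>  // iPhone GL ES 1.x header

    // Sketch: call just before glDrawArrays to take the current color
    // out of the equation.
    void NeutralizeTextureEnv(void)
    {
        glColor4f(1.0f, 1.0f, 1.0f, 1.0f);                          // MODULATE by white = texture as-is
        glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE); // or keep GL_MODULATE once it works
    }

    As for tooling, on this generation of hardware the workable technique is still bisection: flip one state call at a time between the working flat-color path and the broken textured path until the output changes, and check glGetError immediately after glTexImage2D specifically, since it silently rejects bad format/size combinations.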


  • Creating an OpenGL texture with alpha using NSBitmapImageRep

    - by BROK3N S0UL
    I am loading a PNG using: theImage = [NSBitmapImageRep imageRepWithContentsOfFile:imagePath]; from which I can successfully create a gl texture and render correctly without any transparency. However, when I switch blending on using: glEnable(GL_BLEND); glBlendFunc(GL_SRC_ALPHA, GL_ONE); The texture renders with the correct transparent background, but the image colours are incorrect. I have tried several options in the blend function, GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_DST_ALPHA, etc. I was told that maybe I need to reorder the bits in the image data (maybe the channels have been mixed up), but I would not expect it to render correctly with blending off if that were the case. Alternatively, I could use libPNG I guess, but I would like to try using an NSBitmapImageRep if it is possible.
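
    A hedged pointer: GL_SRC_ALPHA with GL_ONE is additive blending, so everything the image covers gets brightened by the background, which reads as "incorrect colours". Cocoa bitmap reps also typically hand back premultiplied alpha for PNGs, and premultiplied data wants GL_ONE as the source factor. Under those two assumptions the standard over-operator is:

    #include <OpenGL/gl.h>  // Mac OS X GL header

    // Sketch, assuming premultiplied-alpha texture data from NSBitmapImageRep.
    void EnablePremultipliedOver(void)
    {
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
    }

    Checking ([theImage bitmapFormat] & NSAlphaNonpremultipliedBitmapFormat) tells you which case the rep actually delivered.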


  • Certain web pages are suddenly not rendering properly in Firefox

    - by LeopardSkinPillBoxHat
    I am using Firefox 3.6.3. I noticed in the last couple of days that several webpages which I visit regularly are not rendering properly. A lot of the text is overlapping with other text and it basically looks like the style sheet is completely screwed up. I have tried disabling all of my Add-Ons and it doesn't make a difference. When I use Coral IE Tab to render the pages using IE they display without any problems. The websites which are not rendering properly for me are The Age and Google Reader. One interesting thing I noticed is that if I modify the Google Reader URL to not use SSL (i.e. change https to http) it renders without any issues. However, The Age website is not using SSL, and that still doesn't render properly. I have also disabled my proxy server (I normally use one at work) but this doesn't make a difference either.


  • 256 Windows Azure Worker Roles, Windows Kinect and a 90's Text-Based Ray-Tracer

    - by Alan Smith
    For a couple of years I have been demoing a simple render farm hosted in Windows Azure using worker roles and the Azure Storage service. At the start of the presentation I deploy an Azure application that uses 16 worker roles to render a 1,500 frame 3D ray-traced animation. At the end of the presentation, when the animation was complete, I would play the animation and delete the Azure deployment. The standing joke with the audience was that it was a “$2 demo”, as the compute charges for running the 16 instances for an hour were $1.92; factor in the bandwidth charges and it’s a couple of dollars. The point of the demo is that it highlights one of the great benefits of cloud computing: you pay for what you use, and if you need massive compute power for a short period of time, using Windows Azure can work out very cost effective.

    The “$2 demo” was great for presenting at user groups and conferences in that it could be deployed to Azure, used to render an animation, and then removed in a one hour session. I have always had the idea of doing something a bit more impressive with the demo, and scaling it from a “$2 demo” to a “$30 demo”. The challenge was to create a visually appealing animation in high definition format and keep the demo time down to one hour. This article will take a run through how I achieved this.

    Ray Tracing

    Ray tracing, a technique for generating high quality photorealistic images, gained popularity in the 90’s with companies like Pixar creating feature length computer animations, and also the emergence of shareware text-based ray tracers that could run on a home PC. In order to render a ray traced image, the ray of light that would pass from the view point must be tracked until it intersects with an object. At the intersection, the color, reflectiveness, transparency, and refractive index of the object are used to calculate if the ray will be reflected or refracted. Each pixel may require thousands of calculations to determine what color it will be in the rendered image.

    Pin-Board Toys

    Having very little artistic talent and a basic understanding of maths, I decided to focus on an animation that could be modeled fairly easily and would look visually impressive. I’ve always liked the pin-board desktop toys that became popular in the 80’s, and when I was working as a 3D animator back in the 90’s I always had the idea of creating a 3D ray-traced animation of a pin-board, but never found the energy to do it. Even if I had a go at it, the render time to produce an animation that would look respectable on a 486 would have been measured in months.

    PolyRay

    Back in 1995 I landed my first real job, after spending three years being a beach-ski-climbing-paragliding-bum, and was employed to create 3D ray-traced animations for a CD-ROM that school kids would use to learn physics. I had got into the strange and wonderful world of text-based ray tracing, and was using a shareware ray-tracer called PolyRay. PolyRay takes a text file describing a scene as input and, after a few hours processing on a 486, produces a high quality ray-traced image. The following is an example of a basic PolyRay scene file.
    background Midnight_Blue

    static define matte surface { ambient 0.1 diffuse 0.7 }
    define matte_white texture { matte { color white } }
    define matte_black texture { matte { color dark_slate_gray } }
    define position_cylindrical 3
    define lookup_sawtooth 1
    define light_wood <0.6, 0.24, 0.1>
    define median_wood <0.3, 0.12, 0.03>
    define dark_wood <0.05, 0.01, 0.005>

    define wooden texture { noise surface { ambient 0.2 diffuse 0.7 specular white, 0.5
        microfacet Reitz 10 position_fn position_cylindrical position_scale 1
        lookup_fn lookup_sawtooth octaves 1 turbulence 1
        color_map( [0.0, 0.2, light_wood, light_wood] [0.2, 0.3, light_wood, median_wood]
                   [0.3, 0.4, median_wood, light_wood] [0.4, 0.7, light_wood, light_wood]
                   [0.7, 0.8, light_wood, median_wood] [0.8, 0.9, median_wood, light_wood]
                   [0.9, 1.0, light_wood, dark_wood] ) } }
    define glass texture { surface { ambient 0 diffuse 0 specular 0.2 reflection white, 0.1 transmission white, 1, 1.5 } }
    define shiny surface { ambient 0.1 diffuse 0.6 specular white, 0.6 microfacet Phong 7 }
    define steely_blue texture { shiny { color black } }
    define chrome texture { surface { color white ambient 0.0 diffuse 0.2 specular 0.4 microfacet Phong 10 reflection 0.8 } }

    viewpoint {
        from <4.000, -1.000, 1.000>
        at <0.000, 0.000, 0.000>
        up <0, 1, 0>
        angle 60
        resolution 640, 480
        aspect 1.6
        image_format 0
    }

    light <-10, 30, 20>
    light <-10, 30, -20>

    object { disc <0, -2, 0>, <0, 1, 0>, 30 wooden }
    object { sphere <0.000, 0.000, 0.000>, 1.00 chrome }
    object { cylinder <0.000, 0.000, 0.000>, <0.000, 0.000, -4.000>, 0.50 chrome }

    After setting up the background and defining colors and textures, the viewpoint is specified. The “camera” is located at a point in 3D space, and it looks towards another point. The angle, image resolution, and aspect ratio are specified. Two lights are present in the image at defined coordinates. The three objects in the image are a wooden disc to represent a table top, and a sphere and cylinder that intersect to form a pin that will be used for the pin board toy in the final animation. When the image is rendered, the following image is produced. The pins are modeled with a chrome surface, so they reflect the environment around them. Note that the scale of the pin shaft is not correct; this will be fixed later.

    Modeling the Pin Board

    The frame of the pin-board is made up of three boxes and six cylinders; the front box is modeled using a clear, slightly reflective solid with the same refractive index as glass. The other shapes are modeled as metal.

    object { box <-5.5, -1.5, 1>, <5.5, 5.5, 1.2> glass }
    object { box <-5.5, -1.5, -0.04>, <5.5, 5.5, -0.09> steely_blue }
    object { box <-5.5, -1.5, -0.52>, <5.5, 5.5, -0.59> steely_blue }
    object { cylinder <-5.2, -1.2, 1.4>, <-5.2, -1.2, -0.74>, 0.2 steely_blue }
    object { cylinder <5.2, -1.2, 1.4>, <5.2, -1.2, -0.74>, 0.2 steely_blue }
    object { cylinder <-5.2, 5.2, 1.4>, <-5.2, 5.2, -0.74>, 0.2 steely_blue }
    object { cylinder <5.2, 5.2, 1.4>, <5.2, 5.2, -0.74>, 0.2 steely_blue }
    object { cylinder <0, -1.2, 1.4>, <0, -1.2, -0.74>, 0.2 steely_blue }
    object { cylinder <0, 5.2, 1.4>, <0, 5.2, -0.74>, 0.2 steely_blue }

    In order to create the matrix of pins that make up the pin board I used a basic console application with a few nested loops to create two intersecting matrices of pins, which model the layout used in the pin boards. The resulting image is shown below. The pin board contains 11,481 pins, with the scene file containing 23,709 lines of code.
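
    The console-application generator itself is not listed in the article; the following is a rough sketch of the kind of nested-loop program described, in C++ for brevity. The grid size, spacing, radii and the re-use of the chrome texture are illustrative assumptions, not the values behind the real 11,481-pin board:

    #include <cstdio>

    // Emits PolyRay "object" lines for a grid of pins (sphere head + cylinder
    // shaft). Offsetting every other row by half the spacing produces the two
    // intersecting matrices of pins mentioned above.
    int main()
    {
        const int rows = 76, cols = 76;      // assumed grid dimensions
        const float spacing = 0.14f;         // assumed pin-to-pin distance
        const float x0 = -5.2f, y0 = -1.2f;  // assumed board origin
        for (int r = 0; r < rows; ++r) {
            for (int c = 0; c < cols; ++c) {
                float x = x0 + c * spacing + ((r % 2) ? spacing * 0.5f : 0.0f);
                float y = y0 + r * spacing;
                float z = 0.0f;  // later set per pin from a depth-image pixel
                std::printf("object { sphere <%.3f, %.3f, %.3f>, 0.05 chrome }\n", x, y, z);
                std::printf("object { cylinder <%.3f, %.3f, %.3f>, <%.3f, %.3f, %.3f>, 0.025 chrome }\n",
                            x, y, z, x, y, z - 0.4f);
            }
        }
        return 0;
    }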
    For the complete animation 2,000 scene files will be created, which is over 47 million lines of code. Each pin in the pin-board will slide out a specific distance when an object is pressed into the back of the board. This is easily modeled by setting the Z coordinate of the pin to a specific value. In order to set all of the pins in the pin-board to the correct position, a bitmap image can be used. The position of the pin can be set based on the color of the pixel at the appropriate position in the image. When the Windows Azure logo is used to set the Z coordinate of the pins, the following image is generated. The challenge now was to make a cool animation. The Azure Logo is fine, but it is static. Using a normal video to animate the pins would not work; the colors in the video would not be the same as the depth of the objects from the camera. In order to simulate the pin board accurately, a series of frames from a depth camera could be used.

    Windows Kinect

    The Kinect controllers for the X-Box 360 and Windows feature a depth camera. The Kinect SDK for Windows provides a programming interface for Kinect, providing easy access for .NET developers to the Kinect sensors. The Kinect Explorer provided with the Kinect SDK is a great starting point for exploring Kinect from a developer's perspective. Both the X-Box 360 Kinect and the Windows Kinect will work with the Kinect SDK; the Windows Kinect is required for commercial applications, but the X-Box Kinect can be used for hobby projects. The Windows Kinect has the advantage of providing a mode to allow depth capture with objects closer to the camera, which makes for a more accurate depth image for setting the pin positions.

    Creating a Depth Field Animation

    The depth field animation used to set the positions of the pins in the pin board was created using a modified version of the Kinect Explorer sample application. In order to simulate the pin board accurately, a small section of the depth range from the depth sensor will be used. Any part of the object in front of the depth range will result in a white pixel; anything behind the depth range will be black. Within the depth range the pixels in the image will be set to RGB values from 0,0,0 to 255,255,255. A screen shot of the modified Kinect Explorer application is shown below. The Kinect Explorer sample application was modified to include slider controls that are used to set the depth range that forms the image from the depth stream. This allows the fine tuning of the depth image that is required for simulating the position of the pins in the pin board. The Kinect Explorer was also modified to record a series of images from the depth camera and save them as a sequence of JPEG files that will be used to animate the pins in the animation. The Start and Stop buttons are used to start and stop the image recording. An example of one of the depth images is shown below. Once a series of 2,000 depth images has been captured, the task of creating the animation can begin.

    Rendering a Test Frame

    In order to test the creation of frames and get an approximation of the time required to render each frame, a test frame was rendered on-premise using PolyRay. The output of the rendering process is shown below. The test frame contained 23,629 primitive shapes, most of which are the spheres and cylinders that are used for the 11,800 or so pins in the pin board.
    The 1280x720 image contains 921,600 pixels, but as anti-aliasing was used the number of rays that were calculated was 4,235,777, with 3,478,754,073 object boundaries checked. The test frame of the pin board with the depth field image applied is shown below. The tracing time for the test frame was 4 minutes 27 seconds, which means rendering the 2,000 frames in the animation would take over 148 hours, or a little over 6 days. Although this is much faster than an old 486, waiting almost a week to see the results of an animation would make it challenging for animators to create, view, and refine their animations. It would be much better if the animation could be rendered in less than one hour.

    Windows Azure Worker Roles

    The cost of creating an on-premise render farm to render animations increases in proportion to the number of servers. The table below shows the cost of servers for creating a render farm, assuming a cost of $500 per server.

        Number of Servers    Cost
        1                    $500
        16                   $8,000
        256                  $128,000

    As well as the cost of the servers, there would be additional costs for networking, racks etc. Hosting an environment of 256 servers on-premise would require a server room with cooling, and some pretty hefty power cabling. The Windows Azure compute services provide worker roles, which are ideal for performing processor intensive compute tasks. With the scalability available in Windows Azure, a job that takes 256 hours to complete could be performed using different numbers of worker roles. The time and cost of using 1, 16 or 256 worker roles is shown below.

        Number of Worker Roles    Render Time    Cost
        1                         256 hours      $30.72
        16                        16 hours       $30.72
        256                       1 hour         $30.72

    Using worker roles in Windows Azure provides the same cost for the 256 hour job, irrespective of the number of worker roles used. Provided the compute task can be broken down into many small units, and the worker role compute power can be used effectively, it makes sense to scale the application so that the task is completed quickly, making the results available in a timely fashion. The task of rendering 2,000 frames in an animation is one that can easily be broken down into 2,000 individual pieces, which can be performed by a number of worker roles.

    Creating a Render Farm in Windows Azure

    The architecture of the render farm is shown in the following diagram. The render farm is a hybrid application with the following components:

    • On-Premise
      - Windows Kinect – Used combined with the Kinect Explorer to create a stream of depth images.
      - Animation Creator – This application uses the depth images from the Kinect sensor to create scene description files for PolyRay. These files are then uploaded to the jobs blob container, and job messages added to the jobs queue.
      - Process Monitor – This application queries the role instance lifecycle table and displays statistics about the render farm environment and render process.
      - Image Downloader – This application polls the image queue and downloads the rendered animation files once they are complete.
    • Windows Azure
      - Azure Storage – Queues and blobs are used for the scene description files and completed frames. A table is used to store the statistics about the rendering environment.

    The architecture of each worker role is shown below. The worker role is configured to use local storage, which provides file storage on the worker role instance that can be used by the applications to render the image and transform the format of the image.
    The service definition for the worker role with the local storage configuration highlighted is shown below.

    <?xml version="1.0" encoding="utf-8"?>
    <ServiceDefinition name="CloudRay">
      <WorkerRole name="CloudRayWorkerRole" vmsize="Small">
        <Imports>
        </Imports>
        <ConfigurationSettings>
          <Setting name="DataConnectionString" />
        </ConfigurationSettings>
        <LocalResources>
          <LocalStorage name="RayFolder" cleanOnRoleRecycle="true" />
        </LocalResources>
      </WorkerRole>
    </ServiceDefinition>

    The two executable programs, PolyRay.exe and DTA.exe, are included in the Azure project, with Copy Always set as the property. PolyRay will take the scene description file and render it to a Truevision TGA file. As the TGA format has not seen much use since the mid 90’s, it is converted to a JPG image using Dave's Targa Animator, another shareware application from the 90’s. Each worker role will use the following process to render the animation frames.

    1. The worker process polls the job queue; if a job is available, the scene description file is downloaded from blob storage to local storage.
    2. PolyRay.exe is started in a process with the appropriate command line arguments to render the image as a TGA file.
    3. DTA.exe is started in a process with the appropriate command line arguments to convert the TGA file to a JPG file.
    4. The JPG file is uploaded from local storage to the images blob container.
    5. A message is placed on the images queue to indicate a new image is available for download.
    6. The job message is deleted from the job queue.
    7. The role instance lifecycle table is updated with statistics on the number of frames rendered by the worker role instance, and the CPU time used.

    The code for this is shown below.

    public override void Run()
    {
        // Set environment variables
        string polyRayPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), PolyRayLocation);
        string dtaPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), DTALocation);

        LocalResource rayStorage = RoleEnvironment.GetLocalResource("RayFolder");
        string localStorageRootPath = rayStorage.RootPath;

        JobQueue jobQueue = new JobQueue("renderjobs");
        JobQueue downloadQueue = new JobQueue("renderimagedownloadjobs");
        CloudRayBlob sceneBlob = new CloudRayBlob("scenes");
        CloudRayBlob imageBlob = new CloudRayBlob("images");
        RoleLifecycleDataSource roleLifecycleDataSource = new RoleLifecycleDataSource();

        Frames = 0;

        while (true)
        {
            // Get the render job from the queue
            CloudQueueMessage jobMsg = jobQueue.Get();

            if (jobMsg != null)
            {
                // Get the file details
                string sceneFile = jobMsg.AsString;
                string tgaFile = sceneFile.Replace(".pi", ".tga");
                string jpgFile = sceneFile.Replace(".pi", ".jpg");

                string sceneFilePath = Path.Combine(localStorageRootPath, sceneFile);
                string tgaFilePath = Path.Combine(localStorageRootPath, tgaFile);
                string jpgFilePath = Path.Combine(localStorageRootPath, jpgFile);

                // Copy the scene file to local storage
                sceneBlob.DownloadFile(sceneFilePath);

                // Run the ray tracer.
                string polyrayArguments =
                    string.Format("\"{0}\" -o \"{1}\" -a 2", sceneFilePath, tgaFilePath);
                Process polyRayProcess = new Process();
                polyRayProcess.StartInfo.FileName =
                    Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), polyRayPath);
                polyRayProcess.StartInfo.Arguments = polyrayArguments;
                polyRayProcess.Start();
                polyRayProcess.WaitForExit();

                // Convert the image
                string dtaArguments =
                    string.Format(" {0} /FJ /P{1}", tgaFilePath, Path.GetDirectoryName(jpgFilePath));
                Process dtaProcess = new Process();
                dtaProcess.StartInfo.FileName =
                    Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), dtaPath);
                dtaProcess.StartInfo.Arguments = dtaArguments;
                dtaProcess.Start();
                dtaProcess.WaitForExit();

                // Upload the image to blob storage
                imageBlob.UploadFile(jpgFilePath);

                // Add a download job.
                downloadQueue.Add(jpgFile);

                // Delete the render job message
                jobQueue.Delete(jobMsg);

                Frames++;
            }
            else
            {
                Thread.Sleep(1000);
            }

            // Log the worker role activity.
            roleLifecycleDataSource.Alive
                ("CloudRayWorker", RoleLifecycleDataSource.RoleLifecycleId, Frames);
        }
    }

    Monitoring Worker Role Instance Lifecycle

    In order to get more accurate statistics about the lifecycle of the worker role instances used to render the animation, data was tracked in an Azure storage table. The following class was used to track the worker role lifecycles in Azure storage.

    public class RoleLifecycle : TableServiceEntity
    {
        public string ServerName { get; set; }
        public string Status { get; set; }
        public DateTime StartTime { get; set; }
        public DateTime EndTime { get; set; }
        public long SecondsRunning { get; set; }
        public DateTime LastActiveTime { get; set; }
        public int Frames { get; set; }
        public string Comment { get; set; }

        public RoleLifecycle()
        {
        }

        public RoleLifecycle(string roleName)
        {
            PartitionKey = roleName;
            RowKey = Utils.GetAscendingRowKey();
            Status = "Started";
            StartTime = DateTime.UtcNow;
            LastActiveTime = StartTime;
            EndTime = StartTime;
            SecondsRunning = 0;
            Frames = 0;
        }
    }

    A new instance of this class is created and added to the storage table when the role starts. It is then updated each time the worker renders a frame to record the total number of frames rendered and the total processing time. These statistics are used by the monitoring application to determine the effectiveness of use of resources in the render farm.

    Rendering the Animation

    The Azure solution was deployed to Windows Azure with the service configuration set to 16 worker role instances. This allows for the application to be tested in the cloud environment, and the performance of the application determined. When I demo the application at conferences and user groups I often start with 16 instances, and then scale up the application to the full 256 instances. The configuration to run 16 instances is shown below.
<?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">   <Role name="CloudRayWorkerRole">     <Instances count="16" />     <ConfigurationSettings>       <Setting name="DataConnectionString"         value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />     </ConfigurationSettings>   </Role> </ServiceConfiguration>     About six minutes after deploying the application the first worker roles become active and start to render the first frames of the animation. The CloudRay Monitor application displays an icon for each worker role instance, with a number indicating the number of frames that the worker role has rendered. The statistics on the left show the number of active worker roles and statistics about the render process. The render time is the time since the first worker role became active; the CPU time is the total amount of processing time used by all worker role instances to render the frames.   Five minutes after the first worker role became active the last of the 16 worker roles activated. By this time the first seven worker roles had each rendered one frame of the animation.   With 16 worker roles u and running it can be seen that one hour and 45 minutes CPU time has been used to render 32 frames with a render time of just under 10 minutes.     At this rate it would take over 10 hours to render the 2,000 frames of the full animation. In order to complete the animation in under an hour more processing power will be required. Scaling the render farm from 16 instances to 256 instances is easy using the new management portal. The slider is set to 256 instances, and the configuration saved. We do not need to re-deploy the application, and the 16 instances that are up and running will not be affected. Alternatively, the configuration file for the Azure service could be modified to specify 256 instances.   <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">   <Role name="CloudRayWorkerRole">     <Instances count="256" />     <ConfigurationSettings>       <Setting name="DataConnectionString"         value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />     </ConfigurationSettings>   </Role> </ServiceConfiguration>     Six minutes after the new configuration has been applied 75 new worker roles have activated and are processing their first frames.   Five minutes later the full configuration of 256 worker roles is up and running. We can see that the average rate of frame rendering has increased from 3 to 12 frames per minute, and that over 17 hours of CPU time has been utilized in 23 minutes. In this test the time to provision 140 worker roles was about 11 minutes, which works out at about one every five seconds.   We are now half way through the rendering, with 1,000 frames complete. This has utilized just under three days of CPU time in a little over 35 minutes.   The animation is now complete, with 2,000 frames rendered in a little over 52 minutes. The CPU time used by the 256 worker roles is 6 days, 7 hours and 22 minutes with an average frame rate of 38 frames per minute. The rendering of the last 1,000 frames took 16 minutes 27 seconds, which works out at a rendering rate of 60 frames per minute. 
    The frame counts in the server instances indicate that the use of a queue to distribute the workload has been very effective in distributing the load across the 256 worker role instances. The 16 instances that were deployed first have rendered between 11 and 13 frames each, whilst the 240 instances that were added when the application was scaled have rendered between 6 and 9 frames each.

    Completed Animation

    I’ve uploaded the completed animation to YouTube; a low resolution preview is shown below. Pin Board Animation Created using Windows Kinect and 256 Windows Azure Worker Roles. The animation can be viewed in 1280x720 resolution at the following link: http://www.youtube.com/watch?v=n5jy6bvSxWc

    Effective Use of Resources

    According to the CloudRay monitor statistics the animation took 6 days, 7 hours and 22 minutes of CPU time to render; this works out at 152 hours of compute time, rounded up to the nearest hour. As the usage for the worker role instances is billed for the full hour, it may have been possible to render the animation using fewer than 256 worker roles. When deciding the optimal usage of resources, the time required to provision and start the worker roles must also be considered. In the demo I started with 16 worker roles, and then scaled the application to 256 worker roles. It would have been more optimal to start the application with maybe 200 worker roles, and utilize the full hour that I was being billed for. This would, however, have prevented showing the ease of scalability of the application. The new management portal displays the CPU usage across the worker roles in the deployment. The average CPU usage across all instances is 93.27%, with over 99% used when all the instances are up and running. This shows that the worker role resources are being used very effectively.

    Grid Computing Scenarios

    Although I am using this scenario for a hobby project, there are many scenarios where a large amount of compute power is required for a short period of time. Windows Azure provides a great platform for developing these types of grid computing applications, and can work out very cost effective.

    • Windows Azure can provide massive compute power, on demand, in a matter of minutes.
    • The use of queues to manage the load balancing of jobs between role instances is a simple and effective solution.
    • Using a cloud-computing platform like Windows Azure allows proof-of-concept scenarios to be tested and evaluated on a very low budget.
    • No charges for inbound data transfer make the uploading of large data sets to Windows Azure Storage services cost effective. (Transaction charges still apply.)

    Tips for using Windows Azure for Grid Computing Scenarios

    I found the render farm a fairly simple scenario to implement in Windows Azure. I was impressed by the ease of scalability that Azure provides, and by the short time that the application took to scale from 16 to 256 worker role instances. In this case it was around 13 minutes; in other tests it took between 10 and 20 minutes. The following tips may be useful when implementing a grid computing project in Windows Azure.

    • Using an Azure Storage queue to load-balance the units of work across multiple worker roles is simple and very effective. The design I have used in this scenario could easily scale to many thousands of worker role instances.
    • Windows Azure accounts are typically limited to 20 cores.
      If you need to use more than this, a call to support and a credit card check will be required.
    • Be aware of how the billing model works. You will be charged for worker role instances for the full clock hour in which the instance is deployed. Schedule the workload to start just after the clock hour has started (a worked example follows this list).
    • Monitor the utilization of the resources you are provisioning; ensure that you are not paying for worker roles that are idle.
    • If you are deploying third party applications to worker roles, you may well run into licensing issues. Purchasing software licenses on a per-processor basis when using hundreds of processors for a short time period would not be cost effective.
    • Third party software may also require installation onto the worker roles, which can be accomplished using start-up tasks. Bear in mind that adding a startup task and possible re-boot will add to the time required for the worker role instance to start and activate. An alternative may be to use a prepared VM and use VM roles.
    • Consider using the Windows Azure Autoscaling Application Block (WASABi) to autoscale the worker roles in your application. When using a large number of worker roles, the utilization must be carefully monitored; if the scaling algorithms are not optimal, it could get very expensive!
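
    A worked example of the clock-hour billing point above, using the article's own rate (the $1.92 for 16 instance-hours quoted at the start, i.e. $0.12 per small-instance hour; current rates may differ):

    \[
    \frac{\$1.92}{16~\text{instance-hours}} = \$0.12~\text{per instance-hour}, \qquad
    256 \times 1~\text{clock hour} \times \$0.12 = \$30.72
    \]
    \[
    \text{deploy at 10:50, delete at 11:10} \;\Rightarrow\; 2~\text{clock hours} \;\Rightarrow\; 256 \times 2 \times \$0.12 = \$61.44
    \]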


  • Car race game collision condition

    - by ashok patidar
    In this, how can I rotate the car when it collides with the track side? package { import flash.display.MovieClip; import flash.events.Event; import flash.events.KeyboardEvent; import flash.text.TextField; import flash.ui.Keyboard; import Math; /** * ... * @author Ashok */ public class F1race extends MovieClip { public var increment:Number = 0; //amount the car moves each frame public var posNeg:Number = 1; public var acceleration:Number = .05; //acceleration of the car, or the amount increment gets increased by. public var speed:Number = 0; //the speed of the car that will be displayed on screen public var maxSpeed:Number = 100; public var keyLeftPressed:Boolean; public var keyRightPressed:Boolean; public var keyUpPressed:Boolean; public var keyDownPressed:Boolean; public var spedometer:TextField = new TextField(); public var carRotation:Number ; public var txt_hit:TextField = new TextField(); public function F1race() { carRotation = carMC.rotation; trace(carMC.rotation); //addChild(spedometer); //spedometer.x = 0; //spedometer.y = 0; addChild(txt_hit); txt_hit.x = 0; txt_hit.y = 100; //rotation of the car addEventListener(Event.ENTER_FRAME, onEnterFrameFunction); stage.addEventListener(KeyboardEvent.KEY_DOWN, keyPressed,false); stage.addEventListener(KeyboardEvent.KEY_UP, keyReleased,false); carMC.addEventListener(Event.ENTER_FRAME, carOver_road) } public function carOver_road(event:Event):void { //trace(texture.hitTestPoint(carMC.x,carMC.y,true),"--"); /* if(!texture.hitTestPoint(carMC.x,carMC.y,true)) { txt_hit.text = "WRONG WAY"; if(increment!=0) { increment=1; } } else { txt_hit.text = ""; //increment++; }*/ if (roadless.hitTestPoint(carMC.x - carMC.width / 2, carMC.y,true)) { trace("left Hit" + carMC.rotation); //acceleration = .005; //if(carMC.rotation>90 || carMC.rotation>90 //carMC.rotation += 2; if ((carMC.rotation >= 90) && (carMC.rotation <= 180)) { carMC.rotation += 3; carMC.x += 3; } if ((carMC.rotation <= -90) && (carMC.rotation >= -180)) { carMC.rotation += 3; texture.y -= 3; } if ((carMC.rotation > -90) && (carMC.rotation <= -1)) { carMC.rotation += 3; texture.y -= 3; } if(increment<0) { increment += 1.5 * acceleration; } if(increment>0) { increment -= 1.5 * acceleration; } } if (roadless.hitTestPoint(carMC.x + carMC.width / 2, carMC.y,true)) { trace("left right"); //carMC.rotation -= 2; if(increment<0) { increment += 1.5 * acceleration; } if(increment>0) { increment -= 1.5 * acceleration; } } if (roadless.hitTestPoint(carMC.x, carMC.y- carMC.height / 2,true)) { trace("left right"); //carMC.rotation -= 2; if(increment<0) { increment += 1.5 * acceleration; } if(increment>0) { increment -= 1.5 * acceleration; } } if (roadless.hitTestPoint(carMC.x, carMC.y+ carMC.height / 2,true)) { trace("left right"); //carMC.rotation -= 2; if(increment<0) { increment += 1.5 * acceleration; } if(increment>0) { increment -= 1.5 * acceleration; } } if ((!roadless.hitTestPoint(carMC.x - carMC.width / 2, carMC.y, true)) && (!roadless.hitTestPoint(carMC.x, carMC.y- carMC.height / 2,true)) && (!roadless.hitTestPoint(carMC.x, carMC.y+ carMC.height / 2,true)) && (!roadless.hitTestPoint(carMC.x, carMC.y+ carMC.height / 2,true))) { //acceleration = .05; } } public function onEnterFrameFunction(events:Event):void { speed = Math.round((increment) * 5); spedometer.text = String(speed); if ((carMC.rotation < 180)&&(carMC.rotation >= 0)){ carRotation = carMC.rotation; posNeg = 1; } if ((carMC.rotation < 0)&&(carMC.rotation > -180)){ carRotation = -1 * carMC.rotation; posNeg = -1; } if
(keyRightPressed) { carMC.rotation += .5 * increment; carMC.LWheel.rotation = 8; carMC.RWheel.rotation = 8; steering.gotoAndStop(2); } if (keyLeftPressed) { carMC.rotation -= .5 * increment; carMC.LWheel.rotation = -8; carMC.RWheel.rotation = -8; steering.gotoAndStop(3); } if (keyDownPressed) { steering.gotoAndStop(1); carMC.LWheel.rotation = 0; carMC.RWheel.rotation = 0; increment -= 0.5 * acceleration; texture.y -= ((90 - carRotation) / 90) * increment; roadless.y = texture.y; if (((carMC.rotation > 90)&&(carMC.rotation < 180))||((carMC.rotation < -90)&&(carMC.rotation > -180))) { texture.x += posNeg * (((((1 - (carRotation / 360)) * 360) - 180) / 90) * increment); roadless.x = texture.x; } if (((carMC.rotation <= 90)&&(carMC.rotation > 0))||((carMC.rotation >= -90)&&(carMC.rotation < -1))) { texture.x += posNeg * ((carRotation) / 90) * increment; roadless.x = texture.x; } increment -= 1 * acceleration; if ((Math.abs(speed)) < (Math.abs(maxSpeed))) { increment += acceleration; } if ((Math.abs(speed)) == (Math.abs(maxSpeed))) { trace("hello"); } } if (keyUpPressed) { steering.gotoAndStop(1); carMC.LWheel.rotation = 0; carMC.RWheel.rotation = 0; //trace(carMC.rotation); texture.y -= ((90 - carRotation) / 90) * increment; roadless.y = texture.y; if (((carMC.rotation > 90)&&(carMC.rotation < 180))||((carMC.rotation < -90)&&(carMC.rotation > -180))) { texture.x += posNeg * (((((1 - (carRotation / 360)) * 360) - 180) / 90) * increment); roadless.x = texture.x; } if (((carMC.rotation <= 90)&&(carMC.rotation > 0))||((carMC.rotation >= -90)&&(carMC.rotation < -1))) { texture.x += posNeg * ((carRotation) / 90) * increment; roadless.x = texture.x; } increment += 1 * acceleration; if ((Math.abs(speed)) < (Math.abs(maxSpeed))) { increment += acceleration; } } if ((!keyUpPressed) && (!keyDownPressed)){ /*if (increment > 0 && (!keyUpPressed)&& (!keyDownPressed)) { //texture.y -= ((90-carRotation)/90)*increment; increment -= 1.5 * acceleration; } if((increment==0)&&(!keyUpPressed)&& (!keyDownPressed)) { increment = 0; } if((increment<0)&&(!keyUpPressed)&& (!keyDownPressed)) { increment += 1.5 * acceleration; }*/ if (increment > 0) { increment -= 1.5 * acceleration; texture.y -= ((90 - carRotation) / 90) * increment; roadless.y = texture.y; if (((carMC.rotation > 90)&&(carMC.rotation < 180))||((carMC.rotation < -90)&&(carMC.rotation > -180))) { texture.x += posNeg * (((((1 - (carRotation / 360)) * 360) - 180) / 90) * increment); roadless.x = texture.x; } if (((carMC.rotation <= 90)&&(carMC.rotation > 0))||((carMC.rotation >= -90)&&(carMC.rotation < -1))) { texture.x += posNeg * ((carRotation) / 90) * increment; roadless.x = texture.x; } } if (increment == 0) { increment = 0; } if (increment < 0) { increment += 1.5 * acceleration; texture.y -= ((90 - carRotation) / 90) * increment; roadless.y = texture.y; if (((carMC.rotation > 90)&&(carMC.rotation < 180))||((carMC.rotation < -90)&&(carMC.rotation > -180))) { texture.x += posNeg * (((((1 - (carRotation / 360)) * 360) - 180) / 90) * increment); roadless.x = texture.x; } if (((carMC.rotation <= 90)&&(carMC.rotation > 0))||((carMC.rotation >= -90)&&(carMC.rotation < -1))) { texture.x += posNeg * ((carRotation) / 90) * increment; roadless.x = texture.x; } } } } public function keyPressed(event:KeyboardEvent):void { trace("keyPressed"); if (event.keyCode == Keyboard.LEFT) { keyLeftPressed = true; } if (event.keyCode == Keyboard.RIGHT) { keyRightPressed = true; } if (event.keyCode == Keyboard.UP) { keyUpPressed = true; } if (event.keyCode == Keyboard.DOWN) { 
keyDownPressed = true; } } public function keyReleased(event:KeyboardEvent):void { trace("keyReleased..."); //increment -= 1.5 * acceleration; //increment--; if (event.keyCode == Keyboard.LEFT) { keyLeftPressed = false; } if (event.keyCode == Keyboard.RIGHT) { keyRightPressed = false; } if (event.keyCode == Keyboard.UP) { keyUpPressed = false; } if (event.keyCode == Keyboard.DOWN) { keyDownPressed = false; } } } }
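
    A hedged, language-neutral sketch (C++ here rather than ActionScript) of one common way to get the rotate-on-contact behaviour the question asks about: steer the heading away from whichever side probe hit and bleed off speed, instead of branching on every rotation quadrant. hitLeft/hitRight are stand-ins for the roadless.hitTestPoint probes on each side of carMC:

    struct Car { float rotation; float speed; };

    // Called once per frame with the two side-probe results and the frame
    // time step; nudges the heading away from the wall and scrubs speed.
    void respondToWall(Car& car, bool hitLeft, bool hitRight, float dt)
    {
        const float turnRate = 90.0f;  // degrees per second, tuning value
        const float friction = 3.0f;   // proportional speed loss while scraping
        if (hitLeft)  car.rotation += turnRate * dt;  // steer away from the left wall
        if (hitRight) car.rotation -= turnRate * dt;  // steer away from the right wall
        if (hitLeft || hitRight)
            car.speed -= friction * car.speed * dt;
        // Keep rotation in (-180, 180], matching Flash's MovieClip.rotation.
        while (car.rotation > 180.0f)   car.rotation -= 360.0f;
        while (car.rotation <= -180.0f) car.rotation += 360.0f;
    }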


  • Texture will not apply to my 3D cube in DirectX

    - by numerical25
    I am trying to apply a texture onto my 3d cube but it is not showing up correctly. I believe that it might somewhat be working because the cube is all brown, which is almost the same complexion as the texture. And I did not originally make the cube brown. These are the steps I've done to add the texture. I first declared 2 new variables ID3D10EffectShaderResourceVariable* pTextureSR; ID3D10ShaderResourceView* textureSRV; I also added a variable and a struct to my shader .fx file Texture2D tex2D; SamplerState linearSampler { Filter = MIN_MAG_MIP_LINEAR; AddressU = Wrap; AddressV = Wrap; }; I then grabbed the image from my local hard drive from within the .cpp file. I believe this was successful; I checked all variables for errors, and everything has a memory address. Plus I pulled resources before and never had a problem. D3DX10CreateShaderResourceViewFromFile(mpD3DDevice,L"crate.jpg",NULL,NULL,&textureSRV,NULL); I grabbed the tex2D variable from my fx file and placed it into my resource variable pTextureSR = modelObject.pEffect->GetVariableByName("tex2D")->AsShaderResource(); And added the resource to the variable pTextureSR->SetResource(textureSRV); I also added the extra property to my vertex layout D3D10_INPUT_ELEMENT_DESC layout[] = { {"POSITION",0,DXGI_FORMAT_R32G32B32_FLOAT, 0 , 0, D3D10_INPUT_PER_VERTEX_DATA, 0}, {"COLOR",0,DXGI_FORMAT_R32G32B32A32_FLOAT, 0 , 12, D3D10_INPUT_PER_VERTEX_DATA, 0}, {"NORMAL",0,DXGI_FORMAT_R32G32B32A32_FLOAT, 0 , 24, D3D10_INPUT_PER_VERTEX_DATA, 0}, {"TEXCOORD",0, DXGI_FORMAT_R32G32_FLOAT, 0 , 36, D3D10_INPUT_PER_VERTEX_DATA, 0} }; as well as my struct struct VertexPos { D3DXVECTOR3 pos; D3DXVECTOR4 color; D3DXVECTOR3 normal; D3DXVECTOR2 texCoord; }; Then I created a new pixel shader that adds the texture to it. Below is the code in its entirety matrix Projection; matrix WorldMatrix; Texture2D tex2D; float3 lightSource; float4 lightColor = {0.5, 0.5, 0.5, 0.5}; // PS_INPUT - input variables to the pixel shader // This struct is created and filled in by the // vertex shader struct PS_INPUT { float4 Pos : SV_POSITION; float4 Color : COLOR0; float4 Normal : NORMAL; float2 Tex : TEXCOORD; }; SamplerState linearSampler { Filter = MIN_MAG_MIP_LINEAR; AddressU = Wrap; AddressV = Wrap; }; //////////////////////////////////////////////// // Vertex Shader - Main Function /////////////////////////////////////////////// PS_INPUT VS(float4 Pos : POSITION, float4 Color : COLOR, float4 Normal : NORMAL, float2 Tex : TEXCOORD) { PS_INPUT psInput; // Pass through both the position and the color psInput.Pos = mul( Pos, Projection ); psInput.Normal = Normal; psInput.Tex = Tex; return psInput; } /////////////////////////////////////////////// // Pixel Shader /////////////////////////////////////////////// float4 PS(PS_INPUT psInput) : SV_Target { float4 finalColor = 0; finalColor = saturate(dot(lightSource, psInput.Normal) * lightColor); return finalColor; } float4 textured( PS_INPUT psInput ) : SV_Target { return tex2D.Sample( linearSampler, psInput.Tex ); } // Define the technique technique10 Render { pass P0 { SetVertexShader( CompileShader( vs_4_0, VS() ) ); SetGeometryShader( NULL ); SetPixelShader( CompileShader( ps_4_0, textured() ) ); } } Below is my CPU code. It may be a little sloppy. But I am just adding code anywhere cause I am just experimenting and playing around. You should find most of the texture code at the bottom in CreateObject. #include "MyGame.h" #include "OneColorCube.h" /* This code sets a projection and shows a turning cube.
What has been added is the project, rotation and a rasterizer to change the rasterization of the cube. The issue that was going on was something with the effect file which was causing the vertices not to be rendered correctly.*/ typedef struct { ID3D10Effect* pEffect; ID3D10EffectTechnique* pTechnique; //vertex information ID3D10Buffer* pVertexBuffer; ID3D10Buffer* pIndicesBuffer; ID3D10InputLayout* pVertexLayout; UINT numVertices; UINT numIndices; }ModelObject; ModelObject modelObject; // World Matrix D3DXMATRIX WorldMatrix; // View Matrix D3DXMATRIX ViewMatrix; // Projection Matrix D3DXMATRIX ProjectionMatrix; ID3D10EffectMatrixVariable* pProjectionMatrixVariable = NULL; ID3D10EffectMatrixVariable* pWorldMatrixVarible = NULL; ID3D10EffectVectorVariable* pLightVarible = NULL; ID3D10EffectShaderResourceVariable* pTextureSR; bool MyGame::InitDirect3D() { if(!DX3dApp::InitDirect3D()) { return false; } D3D10_RASTERIZER_DESC rastDesc; rastDesc.FillMode = D3D10_FILL_WIREFRAME; rastDesc.CullMode = D3D10_CULL_FRONT; rastDesc.FrontCounterClockwise = true; rastDesc.DepthBias = false; rastDesc.DepthBiasClamp = 0; rastDesc.SlopeScaledDepthBias = 0; rastDesc.DepthClipEnable = false; rastDesc.ScissorEnable = false; rastDesc.MultisampleEnable = false; rastDesc.AntialiasedLineEnable = false; ID3D10RasterizerState *g_pRasterizerState; mpD3DDevice->CreateRasterizerState(&rastDesc, &g_pRasterizerState); //mpD3DDevice->RSSetState(g_pRasterizerState); // Set up the World Matrix D3DXMatrixIdentity(&WorldMatrix); D3DXMatrixLookAtLH(&ViewMatrix, new D3DXVECTOR3(0.0f, 10.0f, -20.0f), new D3DXVECTOR3(0.0f, 0.0f, 0.0f), new D3DXVECTOR3(0.0f, 1.0f, 0.0f)); // Set up the projection matrix D3DXMatrixPerspectiveFovLH(&ProjectionMatrix, (float)D3DX_PI * 0.5f, (float)mWidth/(float)mHeight, 0.1f, 100.0f); if(!CreateObject()) { return false; } return true; } //These are actions that take place after the clearing of the buffer and before the present void MyGame::GameDraw() { static float rotationAngleY = 15.0f; static float rotationAngleX = 0.0f; static D3DXMATRIX rotationXMatrix; static D3DXMATRIX rotationYMatrix; D3DXMatrixIdentity(&rotationXMatrix); D3DXMatrixIdentity(&rotationYMatrix); // create the rotation matrix using the rotation angle D3DXMatrixRotationY(&rotationYMatrix, rotationAngleY); D3DXMatrixRotationX(&rotationXMatrix, rotationAngleX); rotationAngleY += (float)D3DX_PI * 0.0008f; rotationAngleX += (float)D3DX_PI * 0.0005f; WorldMatrix = rotationYMatrix * rotationXMatrix; // Set the input layout mpD3DDevice->IASetInputLayout(modelObject.pVertexLayout); pWorldMatrixVarible->SetMatrix((float*)&WorldMatrix); // Set vertex buffer UINT stride = sizeof(VertexPos); UINT offset = 0; mpD3DDevice->IASetVertexBuffers(0, 1, &modelObject.pVertexBuffer, &stride, &offset); // Set primitive topology mpD3DDevice->IASetPrimitiveTopology(D3D10_PRIMITIVE_TOPOLOGY_TRIANGLELIST); //ViewMatrix._43 += 0.005f; // Combine and send the final matrix to the shader D3DXMATRIX finalMatrix = (WorldMatrix * ViewMatrix * ProjectionMatrix); pProjectionMatrixVariable->SetMatrix((float*)&finalMatrix); // make sure modelObject is valid // Render a model object D3D10_TECHNIQUE_DESC techniqueDescription; modelObject.pTechnique->GetDesc(&techniqueDescription); // Loop through the technique passes for(UINT p=0; p < techniqueDescription.Passes; ++p) { modelObject.pTechnique->GetPassByIndex(p)->Apply(0); // draw the cube using all 36 vertices and 12 triangles mpD3DDevice->Draw(36,0); } } //Render actually incapsulates Gamedraw, so you can call data 
before you actually clear the buffer or after you //present data void MyGame::Render() { DX3dApp::Render(); } bool MyGame::CreateObject() { //Create Layout D3D10_INPUT_ELEMENT_DESC layout[] = { {"POSITION",0,DXGI_FORMAT_R32G32B32_FLOAT, 0 , 0, D3D10_INPUT_PER_VERTEX_DATA, 0}, {"COLOR",0,DXGI_FORMAT_R32G32B32A32_FLOAT, 0 , 12, D3D10_INPUT_PER_VERTEX_DATA, 0}, {"NORMAL",0,DXGI_FORMAT_R32G32B32A32_FLOAT, 0 , 24, D3D10_INPUT_PER_VERTEX_DATA, 0}, {"TEXCOORD",0, DXGI_FORMAT_R32G32_FLOAT, 0 , 36, D3D10_INPUT_PER_VERTEX_DATA, 0} }; UINT numElements = (sizeof(layout)/sizeof(layout[0])); modelObject.numVertices = sizeof(vertices)/sizeof(VertexPos); for(int i = 0; i < modelObject.numVertices; i += 3) { D3DXVECTOR3 out; D3DXVECTOR3 v1 = vertices[0 + i].pos; D3DXVECTOR3 v2 = vertices[1 + i].pos; D3DXVECTOR3 v3 = vertices[2 + i].pos; D3DXVECTOR3 u = v2 - v1; D3DXVECTOR3 v = v3 - v1; D3DXVec3Cross(&out, &u, &v); D3DXVec3Normalize(&out, &out); vertices[0 + i].normal = out; vertices[1 + i].normal = out; vertices[2 + i].normal = out; } //Create buffer desc D3D10_BUFFER_DESC bufferDesc; bufferDesc.Usage = D3D10_USAGE_DEFAULT; bufferDesc.ByteWidth = sizeof(VertexPos) * modelObject.numVertices; bufferDesc.BindFlags = D3D10_BIND_VERTEX_BUFFER; bufferDesc.CPUAccessFlags = 0; bufferDesc.MiscFlags = 0; D3D10_SUBRESOURCE_DATA initData; initData.pSysMem = vertices; //Create the buffer HRESULT hr = mpD3DDevice->CreateBuffer(&bufferDesc, &initData, &modelObject.pVertexBuffer); if(FAILED(hr)) return false; /* //Create indices DWORD indices[] = { 0,1,3, 1,2,3 }; ModelObject.numIndices = sizeof(indices)/sizeof(DWORD); bufferDesc.ByteWidth = sizeof(DWORD) * ModelObject.numIndices; bufferDesc.BindFlags = D3D10_BIND_INDEX_BUFFER; initData.pSysMem = indices; hr = mpD3DDevice->CreateBuffer(&bufferDesc, &initData, &ModelObject.pIndicesBuffer); if(FAILED(hr)) return false;*/ ///////////////////////////////////////////////////////////////////////////// //Set up fx files LPCWSTR effectFilename = L"effect.fx"; modelObject.pEffect = NULL; hr = D3DX10CreateEffectFromFile(effectFilename, NULL, NULL, "fx_4_0", D3D10_SHADER_ENABLE_STRICTNESS, 0, mpD3DDevice, NULL, NULL, &modelObject.pEffect, NULL, NULL); if(FAILED(hr)) return false; pProjectionMatrixVariable = modelObject.pEffect->GetVariableByName("Projection")->AsMatrix(); pWorldMatrixVarible = modelObject.pEffect->GetVariableByName("WorldMatrix")->AsMatrix(); pTextureSR = modelObject.pEffect->GetVariableByName("tex2D")->AsShaderResource(); ID3D10ShaderResourceView* textureSRV; D3DX10CreateShaderResourceViewFromFile(mpD3DDevice,L"crate.jpg",NULL,NULL,&textureSRV,NULL); pLightVarible = modelObject.pEffect->GetVariableByName("lightSource")->AsVector(); //Dont sweat the technique. Get it! LPCSTR effectTechniqueName = "Render"; D3DXVECTOR3 vLight(1.0f, 1.0f, 1.0f); pLightVarible->SetFloatVector(vLight); modelObject.pTechnique = modelObject.pEffect->GetTechniqueByName(effectTechniqueName); if(modelObject.pTechnique == NULL) return false; pTextureSR->SetResource(textureSRV); //Create Vertex layout D3D10_PASS_DESC passDesc; modelObject.pTechnique->GetPassByIndex(0)->GetDesc(&passDesc); hr = mpD3DDevice->CreateInputLayout(layout, numElements, passDesc.pIAInputSignature, passDesc.IAInputSignatureSize, &modelObject.pVertexLayout); if(FAILED(hr)) return false; return true; } And here are my cube coordinates. I actually only added coordinates to one side. And that is the front side.
To double check I flipped the cube in all directions just to make sure i didnt accidentally place the text on the incorrect side //Create vectors and put in vertices // Create vertex buffer VertexPos vertices[] = { // BACK SIDES { D3DXVECTOR3(-5.0f, 5.0f, 5.0f), D3DXVECTOR4(1.0f,0.0f,0.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, { D3DXVECTOR3(-5.0f, -5.0f, 5.0f), D3DXVECTOR4(1.0f,0.0f,0.0f,0.0f), D3DXVECTOR2(1.0,0.0)}, { D3DXVECTOR3(5.0f, 5.0f, 5.0f), D3DXVECTOR4(1.0f,0.0f,0.0f,0.0f), D3DXVECTOR2(0.0,1.0)}, { D3DXVECTOR3(5.0f, 5.0f, 5.0f), D3DXVECTOR4(1.0f,0.0f,0.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, { D3DXVECTOR3(-5.0f, -5.0f, 5.0f), D3DXVECTOR4(1.0f,0.0f,0.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, { D3DXVECTOR3(5.0f, -5.0f, 5.0f), D3DXVECTOR4(1.0f,0.0f,0.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, // 2 FRONT SIDE { D3DXVECTOR3(-5.0f, 5.0f, -5.0f), D3DXVECTOR4(0.0f,1.0f,0.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, { D3DXVECTOR3(5.0f, 5.0f, -5.0f), D3DXVECTOR4(0.0f,1.0f,0.0f,0.0f), D3DXVECTOR2(2.0,0.0)}, { D3DXVECTOR3(-5.0f, -5.0f, -5.0f), D3DXVECTOR4(0.0f,1.0f,0.0f,0.0f), D3DXVECTOR2(0.0,2.0)}, { D3DXVECTOR3(-5.0f, -5.0f, -5.0f), D3DXVECTOR4(0.0f,1.0f,0.0f,0.0f), D3DXVECTOR2(0.0,2.0)}, { D3DXVECTOR3(5.0f, 5.0f, -5.0f), D3DXVECTOR4(0.0f,1.0f,0.0f,0.0f) , D3DXVECTOR2(2.0,0.0)}, { D3DXVECTOR3(5.0f, -5.0f, -5.0f), D3DXVECTOR4(0.0f,1.0f,0.0f,0.0f), D3DXVECTOR2(2.0,2.0)}, // 3 { D3DXVECTOR3(-5.0f, 5.0f, 5.0f), D3DXVECTOR4(0.0f,0.0f,1.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, { D3DXVECTOR3(5.0f, 5.0f, 5.0f), D3DXVECTOR4(0.0f,0.0f,1.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, { D3DXVECTOR3(-5.0f, 5.0f, -5.0f), D3DXVECTOR4(0.0f,0.0f,1.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, { D3DXVECTOR3(-5.0f, 5.0f, -5.0f), D3DXVECTOR4(0.0f,0.0f,1.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, { D3DXVECTOR3(5.0f, 5.0f, 5.0f), D3DXVECTOR4(0.0f,0.0f,1.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, { D3DXVECTOR3(5.0f, 5.0f, -5.0f), D3DXVECTOR4(0.0f,0.0f,1.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, // 4 { D3DXVECTOR3(-5.0f, -5.0f, 5.0f), D3DXVECTOR4(1.0f,0.5f,0.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, { D3DXVECTOR3(-5.0f, -5.0f, -5.0f), D3DXVECTOR4(1.0f,0.5f,0.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, { D3DXVECTOR3(5.0f, -5.0f, 5.0f), D3DXVECTOR4(1.0f,0.5f,0.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, { D3DXVECTOR3(5.0f, -5.0f, 5.0f), D3DXVECTOR4(1.0f,0.5f,0.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, { D3DXVECTOR3(-5.0f, -5.0f, -5.0f), D3DXVECTOR4(1.0f,0.5f,0.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, { D3DXVECTOR3(5.0f, -5.0f, -5.0f), D3DXVECTOR4(1.0f,0.5f,0.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, // 5 { D3DXVECTOR3(5.0f, 5.0f, -5.0f), D3DXVECTOR4(0.0f,1.0f,0.5f,0.0f), D3DXVECTOR2(0.0,0.0)}, { D3DXVECTOR3(5.0f, 5.0f, 5.0f), D3DXVECTOR4(0.0f,1.0f,0.5f,0.0f), D3DXVECTOR2(0.0,0.0)}, { D3DXVECTOR3(5.0f, -5.0f, -5.0f), D3DXVECTOR4(0.0f,1.0f,0.5f,0.0f), D3DXVECTOR2(0.0,0.0)}, { D3DXVECTOR3(5.0f, -5.0f, -5.0f), D3DXVECTOR4(0.0f,1.0f,0.5f,0.0f), D3DXVECTOR2(0.0,0.0)}, { D3DXVECTOR3(5.0f, 5.0f, 5.0f), D3DXVECTOR4(0.0f,1.0f,0.5f,0.0f), D3DXVECTOR2(0.0,0.0)}, { D3DXVECTOR3(5.0f, -5.0f, 5.0f), D3DXVECTOR4(0.0f,1.0f,0.5f,0.0f), D3DXVECTOR2(0.0,0.0)}, // 6 {D3DXVECTOR3(-5.0f, 5.0f, -5.0f), D3DXVECTOR4(0.5f,0.0f,1.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, {D3DXVECTOR3(-5.0f, -5.0f, -5.0f), D3DXVECTOR4(0.5f,0.0f,1.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, {D3DXVECTOR3(-5.0f, 5.0f, 5.0f), D3DXVECTOR4(0.5f,0.0f,1.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, {D3DXVECTOR3(-5.0f, 5.0f, 5.0f), D3DXVECTOR4(0.5f,0.0f,1.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, {D3DXVECTOR3(-5.0f, -5.0f, -5.0f), D3DXVECTOR4(0.5f,0.0f,1.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, {D3DXVECTOR3(-5.0f, -5.0f, 5.0f), D3DXVECTOR4(0.5f,0.0f,1.0f,0.0f), D3DXVECTOR2(0.0,0.0)}, 
};
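
To avoid typing texture coordinates for every face by hand, here is a minimal sketch, not from the original code: it assumes VertexPos exposes a D3DXVECTOR2 member (called uv here, the real name may differ) and that every face is laid out as two triangles in the top-left, top-right, bottom-left / bottom-left, top-right, bottom-right order the front side above uses.

// Hypothetical helper: stamp the standard 0..1 UV pattern onto each
// 6-vertex face, matching the triangle layout of the front side above.
void AssignFaceUVs(VertexPos* verts, UINT numVertices)
{
    static const D3DXVECTOR2 pattern[6] =
    {
        D3DXVECTOR2(0.0f, 0.0f), // top left
        D3DXVECTOR2(1.0f, 0.0f), // top right
        D3DXVECTOR2(0.0f, 1.0f), // bottom left
        D3DXVECTOR2(0.0f, 1.0f), // bottom left
        D3DXVECTOR2(1.0f, 0.0f), // top right
        D3DXVECTOR2(1.0f, 1.0f)  // bottom right
    };

    for(UINT i = 0; i < numVertices; ++i)
        verts[i].uv = pattern[i % 6];
}

Calling AssignFaceUVs(vertices, modelObject.numVertices) before the vertex buffer is created would give every face the same mapping as the front side.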

    Read the article

  • Help building maya render node spec

    - by Ak
    Hi there, I'm looking to build 4x Maya render slaves/nodes for a friend of mine when his project gets green-lit. The project involves mental ray and lots of glass. I'm unsure whether the new i7 9xx or 8xx chips with Hyper-Threading will do any better than a Core 2 Quad of the same (or close enough) speed. Does Hyper-Threading make a difference to Maya, or is it more about per-core performance? I'm sure he'd prefer I build another render node rather than pay for a bleeding-edge CPU that only adds fractionally more GHz. -- The rest of the spec so far: 4GB - 8GB RAM; a 64-bit OS, probably Windows 7 (I know Linux is free, but I want to build something my friend can support himself as easily as he supports his own workstation); a 1TB HDD to hold textures, Maya files and renders, which will be copied to central storage later; a motherboard with on-board video and a gigabit NIC; a 500 - 650 watt PSU; and a desktop case, something like a Cooler Master ATCS 840. The machines will be sold afterwards if necessary. -- If anyone has experience with Maya and has done any tests with the new CPUs vs. the older ones, I'd really appreciate your input.

    Read the article

  • Fastest possible way to render 480 x 320 background as iPhone OpenGL ES textures

    - by unknownthreat
    I need to display a 480 x 320 background image in OpenGL ES. The thing is, I experienced a bit of a slowdown on the iPhone when I used a 512 x 512 texture. So I am trying to find the optimal way to render a background at the iPhone's native resolution in OpenGL ES. How should I slice the background in this case to obtain the best possible performance? My main concern is speed. Should I go for 256 x 256 or other texture sizes here?
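
    One hedged option, since OpenGL ES 1.1 on the iPhone requires power-of-two textures: skip the slicing entirely, pad the 480 x 320 image into a single 512 x 512 texture, and draw one quad whose texture coordinates cover only the occupied 480/512 by 320/512 region. A minimal sketch (texture upload and matrix setup are assumed to exist elsewhere):

        // Hypothetical sketch: one quad sampling the used sub-rectangle of a
        // 512 x 512 texture that holds the 480 x 320 image in its corner.
        static const GLfloat quadVertices[] = {
            0.0f,   0.0f,
            480.0f, 0.0f,
            0.0f,   320.0f,
            480.0f, 320.0f,
        };
        static const GLfloat quadTexCoords[] = {
            0.0f,            0.0f,
            480.0f / 512.0f, 0.0f,
            0.0f,            320.0f / 512.0f,
            480.0f / 512.0f, 320.0f / 512.0f,
        };

        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glVertexPointer(2, GL_FLOAT, 0, quadVertices);
        glTexCoordPointer(2, GL_FLOAT, 0, quadTexCoords);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    Whether one padded 512 x 512 texture or a grid of 256 x 256 tiles wins is worth profiling on the device: the padded texture wastes memory, while the tiled version costs extra draw calls and texture binds.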

    Read the article

  • Drawing multiple triangles at once isn't working

    - by Deukalion
    I'm trying to draw multiple triangles at once to make up a "shape". I have a class that has an array of VertexPositionColor and an array of indices (generated by this Triangulation class): http://www.xnawiki.com/index.php/Polygon_Triangulation So, my "shape" has multiple points of VertexPositionColor, but I can't render each triangle in the shape to "fill" the shape. It only draws the first triangle.

    struct ShapeColor
    {
        // Properties (not all properties)
        VertexPositionColor[] Points;
        int[] Indexes;
    }

    The first method I tried; this should work, since I iterate through the index array in steps of three and each slice of three indices always makes up one triangle:

    //render = ShapeColor
    for (int i = 0; i < render.Indexes.Length; i += 3)
    {
        device.DrawUserIndexedPrimitives<VertexPositionColor>
        (
            PrimitiveType.TriangleList,
            new VertexPositionColor[]
            {
                render.Points[render.Indexes[i]],
                render.Points[render.Indexes[i+1]],
                render.Points[render.Indexes[i+2]]
            },
            0,
            3,
            new int[] { 0, 1, 2 },
            0,
            1
        );
    }

    or the method that should work:

    device.DrawUserIndexedPrimitives<VertexPositionColor>
    (
        PrimitiveType.TriangleList,
        render.Points,
        0,
        render.Points.Length,
        render.Indexes,
        0,
        render.Indexes.Length / 3,
        VertexPositionColor.VertexDeclaration
    );

    No matter which method I use, I get the "typical" result from my editor (in Windows Forms with XNA). It should show a filled shape, because the indices are right (I've checked a dozen times). I simply click the screen (it gets the world coordinates and adds a point with a color; once there are 3 points or more it should start filling out the shape, but it only draws the outlines (a different method) and only 1 triangle). The grid isn't rendered with "this" shape. Any ideas?

    Read the article

  • can't get texture to work

    - by user583713
    It's been a while since I've used Android OpenGL, but for whatever reason I get a white square box instead of the texture I want on the screen. Oh, I don't think this matters, but just in case: I put a LinearLayout view first and then the SurfaceView. Anyway, here is my code:

    public class GameEngine {
        private float vertices[];
        private float textureUV[];
        private int[] textureId = new int[1];
        private FloatBuffer vertextBuffer;
        private FloatBuffer textureBuffer;
        private short indices[] = {0,1,2,2,1,3};
        private ShortBuffer indexBuffer;
        private float x, y, z;
        private float rot, rotX, rotY, rotZ;

        public GameEngine() {
        }

        public void setEngine(float x, float y, float vertices[]) {
            this.x = x;
            this.y = y;
            this.vertices = vertices;

            ByteBuffer vbb = ByteBuffer.allocateDirect(this.vertices.length * 4);
            vbb.order(ByteOrder.nativeOrder());
            vertextBuffer = vbb.asFloatBuffer();
            vertextBuffer.put(this.vertices);
            vertextBuffer.position(0);
            vertextBuffer.clear();

            ByteBuffer ibb = ByteBuffer.allocateDirect(indices.length * 2);
            ibb.order(ByteOrder.nativeOrder());
            indexBuffer = ibb.asShortBuffer();
            indexBuffer.put(indices);
            indexBuffer.position(0);
            indexBuffer.clear();
        }

        public void draw(GL10 gl) {
            gl.glLoadIdentity();
            gl.glTranslatef(x, y, z);
            gl.glRotatef(rot, rotX, rotY, rotZ);

            gl.glBindTexture(GL10.GL_TEXTURE_2D, textureId[0]);

            gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
            gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);

            gl.glVertexPointer(2, GL10.GL_FLOAT, 0, vertextBuffer);
            gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);

            gl.glDrawElements(GL10.GL_TRIANGLES, indices.length, GL10.GL_UNSIGNED_SHORT, indexBuffer);

            gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
            gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
        }

        public void LoadTexture(float textureUV[], GL10 gl, InputStream is) throws IOException {
            this.textureUV = textureUV;

            ByteBuffer tbb = ByteBuffer.allocateDirect(this.textureUV.length * 4);
            tbb.order(ByteOrder.nativeOrder());
            textureBuffer = tbb.asFloatBuffer();
            textureBuffer.put(this.textureUV);
            textureBuffer.position(0);
            textureBuffer.clear();

            Bitmap bitmap = null;
            try {
                bitmap = BitmapFactory.decodeStream(is);
            } finally {
                try {
                    is.close();
                    is = null;

                    gl.glGenTextures(1, textureId, 0);
                    gl.glBindTexture(GL10.GL_TEXTURE_2D, textureId[0]);

                    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST);
                    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);

                    // Different possible texture parameters, e.g. GL10.GL_CLAMP_TO_EDGE
                    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_REPEAT);
                    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_REPEAT);

                    // Use the Android GLUtils to specify a two-dimensional texture image from our bitmap
                    GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
                    gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);

                    // Clean up
                    gl.glBindTexture(GL10.GL_TEXTURE_2D, textureId[0]);
                    bitmap.recycle();
                } catch (IOException e) {
                }
            }
        }

        public void setVector(float x, float y, float z) {
            this.x = x;
            this.y = y;
            this.z = z;
        }

        public void setRot(float rot, float x, float y, float z) {
            this.rot = rot;
            this.rotX = x;
            this.rotY = y;
            this.rotZ = z;
        }

        public float getZ() {
            return z;
        }
    }

    Read the article

  • How to create Executable Jar

    - by Siddharth
    When I try to create a jar file, I get the following error. Could someone please help me out?

    Exception in thread "LWJGL Application" com.badlogic.gdx.utils.GdxRuntimeException: Couldn't load file: img/black_ring.png
        at com.badlogic.gdx.graphics.Pixmap.<init>(Pixmap.java:137)
        at com.badlogic.gdx.graphics.glutils.FileTextureData.prepare(FileTextureData.java:55)
        at com.badlogic.gdx.graphics.Texture.load(Texture.java:175)
        at com.badlogic.gdx.graphics.Texture.create(Texture.java:159)
        at com.badlogic.gdx.graphics.Texture.<init>(Texture.java:133)
        at com.badlogic.gdx.graphics.Texture.<init>(Texture.java:122)
        at com.badlogic.runningball.UserBall.<init>(UserBall.java:19)
        at com.badlogic.runningball.GameScreen.<init>(GameScreen.java:25)
        at com.badlogic.runningball.RunningBall.create(RunningBall.java:12)
        at com.badlogic.gdx.backends.lwjgl.LwjglApplication.mainLoop(LwjglApplication.java:126)
        at com.badlogic.gdx.backends.lwjgl.LwjglApplication$1.run(LwjglApplication.java:113)
    Caused by: com.badlogic.gdx.utils.GdxRuntimeException: File not found: img/black_ring.png (Internal)
        at com.badlogic.gdx.files.FileHandle.read(FileHandle.java:108)
        at com.badlogic.gdx.files.FileHandle.length(FileHandle.java:364)
        at com.badlogic.gdx.files.FileHandle.readBytes(FileHandle.java:156)
        at com.badlogic.gdx.graphics.Pixmap.<init>(Pixmap.java:134)
        ... 10 more

    Read the article

  • Texturize a shape of multiple triangles in 2D

    - by Deukalion
    This is an example of a shape consisting of multiple points and triangles, which together form a shape: Red dots = Vector3 (X, Y, Z) or Vector2 (X, Y). If I have a texture of a certain size, how do I texturize this area in the best way, so that the texture inside the shape matches the shape and does not overlap anywhere? Ideally also with a way to scale the texture in case it's too small or too big for the shape, while still rendering correctly. Do I treat the shape as a rectangle and figure out its 4 corners? Or do I calculate the distance between Center - (Texture Width / 2) and each point, to see how many times the texture fits along that axis and estimate what texture coordinates each point should get? I've looked at texture mapping but haven't found any concrete examples that explain it well; the 0.0-1.0 values for texture coordinates are also confusing.
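
    A common first approach, sketched below under the assumption that a flat bounding-box mapping is acceptable: compute the shape's axis-aligned bounding box and normalize each vertex position against it, which yields 0.0-1.0 texture coordinates that stretch the texture exactly once across the shape with no overlap. The Vec2 type and function name are invented for the sketch.

        #include <vector>
        #include <algorithm>

        struct Vec2 { float x, y; };

        // Hypothetical helper: planar-map a polygon by normalizing each vertex
        // against the shape's bounding box, so UVs land in [0,1] x [0,1].
        std::vector<Vec2> ComputeBoundingBoxUVs(const std::vector<Vec2>& points)
        {
            float minX = points[0].x, maxX = points[0].x;
            float minY = points[0].y, maxY = points[0].y;
            for (const Vec2& p : points)
            {
                minX = std::min(minX, p.x); maxX = std::max(maxX, p.x);
                minY = std::min(minY, p.y); maxY = std::max(maxY, p.y);
            }

            std::vector<Vec2> uvs;
            uvs.reserve(points.size());
            for (const Vec2& p : points)
            {
                // Guard against zero-width or zero-height shapes.
                float u = (maxX > minX) ? (p.x - minX) / (maxX - minX) : 0.0f;
                float v = (maxY > minY) ? (p.y - minY) / (maxY - minY) : 0.0f;
                uvs.push_back({u, v});
            }
            return uvs;
        }

    To tile the texture instead of stretching it, divide the bounding-box extents by the texture size rather than normalizing to 1, and rely on a repeating wrap mode.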

    Read the article

  • Rails 3 - yield return or callback won't call in view <%= yield(:sidebar) || render('shared/sidebar'

    - by rzar
    Hey folks, I'm migrating a website from Rails 2 (latest) to Rails 3 (beta2), testing with Ruby 1.9.1p378 and Ruby 1.9.2dev (2010-04-05 trunk 27225). I'm stuck in a situation where I don't know which part is misbehaving; I suspect yield is the problem, but I don't know exactly. In my layout files I use the following technique quite often:

    app/views/layouts/application.html.erb:

    <%= yield(:sidebar) || render('shared/sidebar') %>

    For example, the partial looks like:

    app/views/shared/_sidebar.html.erb:

    <p>Default sidebar Content. Bla Bla</p>

    Now it is time for the key part! In any view, I want to create an optional content_for block. This can contain a piece of HTML etc.; example below. If this block is set, the piece of HTML inside should render in application.html.erb. If not, Rails should render the partial at shared/_sidebar.html.erb on the right-hand side.

    app/views/books/index.html.erb:

    <% content_for :sidebar do %>
      <strong>You have to read REWORK, a book from 37signals!</strong>
    <% end %>

    So you've got the idea, hopefully. This technique worked well in any Rails 2.x application. Now, in Rails 3 (beta2), only the yield part is working. The || render('shared/sidebar') side is never evaluated by Rails, or maybe by Ruby. Thanks for your input and time!

    Read the article

  • Django Forms - change the render multiple select widget

    - by John
    Hi, in my model I have a many-to-many field:

    mentors = models.ManyToManyField(MentorArea, verbose_name='Areas', blank=True)

    In my form I want to render this as: a drop-down box listing all MentorArea objects which have not yet been associated with the object; next to that, an Add button which calls a JavaScript function to add the selected area to the object; then, under that, a ul list showing each selected MentorArea object with an x next to it which, again via a JavaScript function, removes the MentorArea from the object. I know that to change how a field element is rendered you create a custom widget and override the render function, and I have done that to create the Add button:

    class AreaWidget(widgets.Select):
        def render(self, name, value, attrs=None, choices=()):
            jquery = u'''
            <input class="button def" type="button" value="Add" id="Add Area" />'''
            output = super(AreaWidget, self).render(name, value, attrs, choices)
            return output + mark_safe(jquery)

    However, I don't know how to list the currently selected ones underneath as a list. Can anyone help me? Also, what is the best way to filter down the list so that it only shows MentorArea objects which have not been added? I currently have the field as:

    mentors = forms.ModelMultipleChoiceField(queryset=MentorArea.objects.all(), widget=AreaWidget, required=False)

    but this shows all mentors, no matter whether they have been added or not. Thanks

    Read the article

  • Render multiple control collections in ASP.NET custom control

    - by Monty
    I've built a custom WebControl, which has the following structure:

    <gws:ModalBox ID="ModalBox1" HeaderText="Title" runat="server">
        <Contents>(controls...)</Contents>
        <Footer>(controls...)</Footer>
    </gws:ModalBox>

    The control contains two ControlCollection properties, 'Contents' and 'Footer'. I've never tried to build a control with multiple control collections before, but I solved it like this (simplified):

    [PersistChildren(false), ParseChildren(true)]
    public class ModalBox : WebControl
    {
        private ControlCollection _contents;
        private ControlCollection _footer;

        public ModalBox() : base()
        {
            this._contents = base.CreateControlCollection();
            this._footer = base.CreateControlCollection();
        }

        [PersistenceMode(PersistenceMode.InnerProperty)]
        public ControlCollection Contents
        {
            get { return this._contents; }
        }

        [PersistenceMode(PersistenceMode.InnerProperty)]
        public ControlCollection Footer
        {
            get { return this._footer; }
        }

        protected override void RenderContents(HtmlTextWriter output)
        {
            // Render content controls.
            foreach (Control control in this.Contents)
            {
                control.RenderControl(output);
            }

            // Render footer controls.
            foreach (Control control in this.Footer)
            {
                control.RenderControl(output);
            }
        }
    }

    While it seems to render properly, it stops working as soon as I add some ASP.NET labels and input controls inside the properties. I get the HttpException: Unable to find control with id 'KeywordTextBox' that is associated with the Label 'KeywordLabel'. Somewhat understandable, because the label appears before the textbox in the control collection. However, with default ASP.NET controls it does work, so why doesn't this work? What am I doing wrong? Is it even possible to have two control collections in one control? Should I render it differently? Thanks for any replies.

    Read the article

  • Problem Render backbone collection using Mustache template

    - by RameshVel
    I am quite new to Backbone.js and Mustache. I am trying to load the Backbone collection (an object array) on page load from a Rails JSON object, to save the extra call. I am having trouble rendering the Backbone collection using a Mustache template. My model and collection are:

    var Item = Backbone.Model.extend({
    });

    App.Collections.Items = Backbone.Collection.extend({
        model: Item,
        url: '/items'
    });

    and the view:

    App.Views.Index = Backbone.View.extend({
        el: '#itemList',

        initialize: function() {
            this.render();
        },

        render: function() {
            $(this.el).html(Mustache.to_html(JST.item_template(), this.collection));
            //var test = {name: "test", price: 100};
            //$(this.el).html(Mustache.to_html(JST.item_template(), test));
        }
    });

    In the render above I can render a single model (the commented-out test object), but not the collection. I am totally stuck here; I have tried both Underscore templating and Mustache, but no luck. And this is the template:

    <li>
        <div>
            <div style="float: left; width: 70px">
                <a href="#">
                    <img class="thumbnail" src="http://placehold.it/60x60" alt="">
                </a>
            </div>
            <div style="float: right; width: 292px">
                <h4> {{name}} <span class="price">Rs {{price}}</span></h4>
            </div>
        </div>
    </li>

    and my object array kind of looks like this

    Read the article

  • Invoking code both before and after WebControl.Render method

    - by Dirk
    I have a set of custom ASP.NET server controls, most of which derive from CompositeControl. I want to implement a uniform look for "required" fields across all control types by wrapping each control in a specific piece of HTML/CSS markup. For example:

    <div class="requiredInputContainer">
        ...custom control markup...
    </div>

    I'd love to abstract this behavior in such a way as to avoid having to do something ugly like this in every custom control, present and future:

    public class MyServerControl : TextBox, IRequirableField
    {
        public bool IsRequired { get; set; }

        protected override void Render(HtmlTextWriter writer)
        {
            RequiredFieldHelper.RenderBeginTag(this, writer);
            // render custom control markup
            RequiredFieldHelper.RenderEndTag(this, writer);
        }
    }

    public static class RequiredFieldHelper
    {
        public static void RenderBeginTag(IRequirableField field, HtmlTextWriter writer)
        {
            // check field.IsRequired, render based on its value
        }

        public static void RenderEndTag(IRequirableField field, HtmlTextWriter writer)
        {
            // check field.IsRequired, render based on its value
        }
    }

    If I were deriving all of my custom controls from the same base class, I could conceivably use Template Method to enforce the before/after behavior; but I have several base classes, and I'd rather not end up with a really convoluted class hierarchy anyway. It feels like I should be able to design something more elegant (i.e. something that adheres to DRY and OCP) by leveraging the functional aspects of C#, but I'm drawing a blank.

    Read the article

  • position for texture in opengl

    - by Tyzak
    Hello, I have a question concerning how to declare the texture coordinates for a cube; to be exact, I mean the glTexCoord2f(x.f, y.f) calls. For the front side, my declaration works:

    glBegin(GL_POLYGON); // front side
        glNormal3f(0.0f, 0.0f, 1.0f); // normal for the front side
        glTexCoord2f(0.0f, -1.f);
        glVertex3f(-fSeitenL/2.0f, -fSeitenL/2.0f, +fSeitenL/2.0f);
        glTexCoord2f(1.f, -1.f);
        glVertex3f(+fSeitenL/2.0f, -fSeitenL/2.0f, +fSeitenL/2.0f);
        glTexCoord2f(1.f, 0.0f);
        glVertex3f(+fSeitenL/2.0f, +fSeitenL/2.0f, +fSeitenL/2.0f);
        glTexCoord2f(0.0f, 0.0f);
        glVertex3f(-fSeitenL/2.0f, +fSeitenL/2.0f, +fSeitenL/2.0f);
    glEnd();

    but for the right side it doesn't work. I suspect I need other parameters for glTexCoord2f, but I don't know which ones.

    glBegin(GL_POLYGON); // right side
        glNormal3f(1.0f, 0.0f, 0.0f); // normal for the right side
        glTexCoord2f(0.0f, -1.f);
        glVertex3f(+fSeitenL/2.0f, -fSeitenL/2.0f, +fSeitenL/2.0f);
        glTexCoord2f(1.0f, -1.f);
        glVertex3f(+fSeitenL/2.0f, -fSeitenL/2.0f, -fSeitenL/2.0f);
        glTexCoord2f(1.0f, 0.0f);
        glVertex3f(+fSeitenL/2.0f, +fSeitenL/2.0f, -fSeitenL/2.0f);
        glTexCoord2f(0.0f, 0.0f);
        glVertex3f(+fSeitenL/2.0f, +fSeitenL/2.0f, +fSeitenL/2.0f);
    glEnd();

    After all that, I close the texture section with glDisable(GL_TEXTURE_2D). Thanks in advance.
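
    For what it's worth, a hedged sketch of the usual pattern: each face gets the same (0,0) to (1,1) texture-coordinate walk, and only the vertex positions change, so a single helper can emit any face. The function and its argument layout are invented for illustration:

        // Hypothetical helper: draws one textured quad face of the cube.
        // 'corners' must list the face's vertices in the same winding order
        // as the texture-coordinate pattern below.
        void DrawTexturedFace(const float corners[4][3], const float normal[3])
        {
            static const float uv[4][2] = {
                {0.0f, 0.0f}, {1.0f, 0.0f}, {1.0f, 1.0f}, {0.0f, 1.0f}
            };

            glBegin(GL_POLYGON);
            glNormal3fv(normal);
            for (int i = 0; i < 4; ++i)
            {
                glTexCoord2fv(uv[i]);
                glVertex3fv(corners[i]);
            }
            glEnd();
        }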

    Read the article

  • Generate texture from polygon (openGL)

    - by user146780
    I have a quad and I would like to use the gradient it produces as a texture for another polygon.

    glPushMatrix();
    glTranslatef(250, 250, 0);
    glBegin(GL_POLYGON);
        glColor3f(1.0f, 0.0f, 0.0f); // color components are clamped to [0, 1]
        glVertex2f(10, 0);
        glVertex2f(100, 0);
        glVertex2f(100, 100);
        glVertex2f(50, 50);
        glVertex2f(0, 100);
    glEnd(); //End polygon coordinates
    glPopMatrix();

    glBegin(GL_QUADS); //Begin quadrilateral coordinates
        glVertex2f(0, 0);
        glColor3f(0.0f, 1.0f, 0.0f);
        glVertex2f(150, 0);
        glVertex2f(150, 150);
        glColor3f(1.0f, 0.0f, 0.0f);
        glVertex2f(0, 150);
    glEnd(); //End quadrilateral coordinates

    My goal is to make the 5-vertex polygon have the gradient of the quad (maybe a texture is not the best bet). Thanks
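
    One hedged way to get that effect with legacy GL and no FBO extensions: render the gradient quad first, copy the framebuffer region it covers into a texture with glCopyTexImage2D, and then draw the 5-vertex polygon with that texture bound and per-vertex glTexCoord2f calls. A minimal sketch, assuming a power-of-two region and that the quad has already been drawn at the window origin:

        // Hypothetical sketch: grab the rendered gradient into a texture.
        GLuint gradientTex;
        glGenTextures(1, &gradientTex);
        glBindTexture(GL_TEXTURE_2D, gradientTex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        // ...after the gradient quad has been rendered:
        glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB,
                         0, 0,      // lower-left corner of the region, window coords
                         256, 256,  // width/height to grab (power-of-two)
                         0);

        // Later: glEnable(GL_TEXTURE_2D), bind gradientTex, and draw the
        // polygon with a glTexCoord2f call before each glVertex2f.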

    Read the article

  • How Can I Compress Texture in OpenGL on iPhone/iPad?

    - by nonamelive
    Hi, I'm making an iPad app which needs OpenGL to do a flip animation. I have a front image texture and a back image texture. Both textures are screenshots.

    // Capture an image of the screen
    UIGraphicsBeginImageContext(view.bounds.size);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Allocate some memory for the texture
    GLubyte *textureData = (GLubyte*)calloc(maxTextureSize*4, maxTextureSize);

    // Create a drawing context to draw image into texture memory
    CGContextRef textureContext = CGBitmapContextCreate(textureData, maxTextureSize, maxTextureSize, 8, maxTextureSize*4, CGImageGetColorSpace(image.CGImage), kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(textureContext, CGRectMake(0.0, maxTextureSize-size.height, size.width, size.height), image.CGImage);
    CGContextRelease(textureContext);
    // ...done creating the texture data

    [EAGLContext setCurrentContext:context];
    glGenTextures(1, &textureToView);
    glBindTexture(GL_TEXTURE_2D, textureToView);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, maxTextureSize, maxTextureSize, 0, GL_RGBA, GL_UNSIGNED_BYTE, textureData);

    // free texture data which is by now copied into the GL context
    free(textureData);

    Each texture takes up about 8 MB of memory, which is unacceptable for an iPhone/iPad app. Could anyone tell me how I can compress the textures to reduce the memory use? I'm a complete newbie to OpenGL. Any help would be appreciated!
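
    A hedged pointer rather than a definitive fix: for image assets known ahead of time, PVRTC compression (produced offline with Apple's texturetool and uploaded with glCompressedTexImage2D) cuts a 32-bit texture to 4 or 2 bits per pixel, though PVRTC textures must be square and power-of-two. For textures generated at runtime, such as these screenshots, a cheaper option is uploading 16-bit pixel data (e.g. GL_UNSIGNED_SHORT_5_6_5) instead of 32-bit, which halves the memory. A minimal sketch of the PVRTC upload path, with pvrtcData, width and height assumed to come from a texturetool-produced file:

        // Hypothetical: pvrtcData/width/height describe a file produced offline,
        // e.g. with: texturetool -e PVRTC --bits-per-pixel-4 -o tex.pvrtc tex.png
        GLsizei dataSize = width * height / 2;  // 4 bpp = half a byte per pixel
        glBindTexture(GL_TEXTURE_2D, textureToView);
        glCompressedTexImage2D(GL_TEXTURE_2D, 0,
                               GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG,
                               width, height, 0, dataSize, pvrtcData);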

    Read the article

< Previous Page | 12 13 14 15 16 17 18 19 20 21 22 23  | Next Page >