Search Results

Search found 1308 results on 53 pages for 'texture'.

Page 27/53 | < Previous Page | 23 24 25 26 27 28 29 30 31 32 33 34  | Next Page >

  • Silverlight Cream for December 16, 2010 -- #1011

    - by Dave Campbell
    In this Issue: John Papa, Tim Heuer, Jeff Blankenburg(-2-, -3-), Jesse Liberty, Jay Kimble, Wei-Meng Lee, Paul Sheriff, Mike Snow(-2-, -3-), Samuel Jack, James Ashley, and Peter Kuhn. Above the Fold: Silverlight: "Animation Texture Creator" Peter Kuhn WP7: "Windows Phone from Scratch #13 — Custom Behaviors Part II: ActionTrigger" Jesse Liberty Shoutouts: Awesome blog post by Jesse Liberty about writing in general: Ten Requirements For Tutorials, Videos, Demos and White Papers That Don’t Suck From SilverlightCream.com: 1000 Silverlight Cream Posts and Counting! John Papa has Silverlight TV number 55 up and it's an interview he did with me the day before the Firestarter in December... thanks John... great job in making me not look stooopid :) Silverlight service release today - 4.0.51204 Tim Heuer announced a service release of Silverlight ... check out his blog for the updates and near the bottom is a link to the developer runtime. What I Learned In WP7 – Issue #3 Jeff Blankenburg has been pushing out tips ... number 3 consisted of 3 good pieces of info for WP7 devs including more info about fonts and a good site for free audio files What I Learned In WP7 – Issue #4 In number 4, Jeff Blankenburg talks about where to get some nice free WP7 icons, and a link to a cool article on getting all sorts of device info What I Learned In WP7 – Issue #5 Number 5 finds Jeff Blankenburg giving up the XAP for a CodeMash session data app... or wait for it to appear in the Marketplace next week. Windows Phone from Scratch #13 — Custom Behaviors Part II: ActionTrigger Wow... Jesse Liberty is up to number 13 in his Windows Phone from Scratch series... this time it's part 2 of his Custom Behaviors post, and ActionTriggers specifically. Solving the Storage Problem in WP7 (for CF Developers) Jay Kimble has released his WP7 dropbox client to the wild ... this is cool for loading files at run-time... opens up some ideas for me at least. Building Location Service Apps in Windows Phone 7 Wei-Meng Lee has a big informative post on location services in WP7... getting a Bing Maps API key, getting the data, navigating and manipulating the map, adding pushpins... good stuff Using Xml Files on Windows Phone Paul Sheriff is discussing XML files as a database for your WP7 apps via LINQ to XML. Sample code included. ABC–Win7 App Mike Snow has been busy with Tips of the Day ... he published a children's app for tracing their ABC's and discusses some of the code bits involved. Win7 Mobile Application Bar – AG_E_PARSER_BAD_PROPERTY_VALUE Mike Snow's next post is about the infamous AG_E_PARSER_BAD_PROPERTY_VALUE error or worse in WP7 ... how he got it, and how he fixed it... could save you some hair... Forward Navigation on the Windows Phone Mike Snow's latest post is about forward navigation on the WP7 ... oh wait... there isn't any... check out the post. Day 2 of my “3 days to Build a Windows Phone 7 Game” challenge Samuel Jack details about 9 hours in day 2 of his quest to build an XNA app for WP7 from a cold start. Windows Phone 7 Side Loading James Ashley has a really complete write-up on side-loading apps onto your WP7 device. Don't get excited... this isn't a hack... these are instructions for side-loading using the Microsoft-approved method, which means a registered device. Animation Texture Creator Remember Peter Kuhn's post the other day about an Animation Texture Creator? ... well today he has some added tweaks and the source code! ... thanks Peter! Stay in the 'Light!

    Read the article

  • LibGDX Box2D Body and Sprite AND DebugRenderer out of sync

    - by Free Lancer
    I am having a couple issues with Box2D bodies. I have a GameObject holding a Sprite and Body. I use a ShapeRenderer to draw an outline of the Body's and Sprite's bounding boxes. I also added a Box2DDebugRenderer to make sure everything's lining up properly. My problem is the Sprite and Body at first overlap perfectly, but as I turn the Body moves a bit off the sprite then comes back when the Car is facing either North or South. Here's an image of what I mean: (Not sure what that line is, first time to show up) BLUE is the Body, RED is the Sprite, PURPLE is the Box2DDebugRenderer. Also, you probably noticed a purple square in the top right corner. Well that's the Car drawn by the Box2D Debug Renderer. I thought it might be the camera but I've been playing with the Cameras for hours and nothing seems to work. All give me weird results. Here's my code: Screen: public void show() { // --------------------- SETUP ALL THE CAMERA STUFF ------------------------------ // battleStage = new Stage( 720, 480, false ); // Setup the camera. In Box2D we operate on a meter scale, pixels won't do it. So we use // an Orthographic camera with a Viewport of 24 meters in width and 16 meters in height. battleStage.setCamera( new OrthographicCamera( CAM_METER_WIDTH, CAM_METER_HEIGHT ) ); battleStage.getCamera().position.set( CAM_METER_WIDTH / 2, CAM_METER_HEIGHT / 2, 0 ); // The Box2D Debug Renderer will handle rendering all physics objects for debugging debugger = new Box2DDebugRenderer( true, true, true, true ); //debugCam = new OrthographicCamera( CAM_METER_WIDTH, CAM_METER_HEIGHT ); } public void render(float delta) { // Update the Physics World, use 1/45 for something around 45 Frames/Second for mobile devices physicsWorld.step( 1/45.0f, 8, 3 ); // 1/45 for devices // Set the Camera matrices and clear the screen Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT); battleStage.getCamera().update(); // Draw game objects here battleStage.act(delta); battleStage.draw(); // Again update the Camera matrices and call the debug renderer debugCam.update(); debugger.render( physicsWorld, debugCam.combined); // Vehicle handles its own interaction with the HUD // update all Actors movements in the game Stage hudStage.act( delta ); // Draw each Actor onto the Scene at their new positions hudStage.draw(); } Car: (extends Actor) public Car( Texture texture, float posX, float posY, World world ) { super( "Car" ); mSprite = new Sprite( texture ); mSprite.setSize( WIDTH * Consts.PIXEL_METER_RATIO, HEIGHT * Consts.PIXEL_METER_RATIO ); mSprite.setOrigin( mSprite.getWidth()/2, mSprite.getHeight()/2); // set the origin to be at the center of the body mSprite.setPosition( posX * Consts.PIXEL_METER_RATIO, posY * Consts.PIXEL_METER_RATIO ); // place the car in the center of the game map FixtureDef carFixtureDef = new FixtureDef(); mBody = Physics.createBoxBody( BodyType.DynamicBody, carFixtureDef, mSprite ); } public void draw() { mSprite.setPosition( mBody.getPosition().x * Consts.PIXEL_METER_RATIO, mBody.getPosition().y * Consts.PIXEL_METER_RATIO ); mSprite.setRotation( MathUtils.radiansToDegrees * mBody.getAngle() ); // draw the sprite mSprite.draw( batch ); } Physics: (Create the Body) public static Body createBoxBody( final BodyType pBodyType, final FixtureDef pFixtureDef, Sprite pSprite ) { float pRotation = 0; float pWidth = pSprite.getWidth(); float pHeight = pSprite.getHeight(); final BodyDef boxBodyDef = new BodyDef(); boxBodyDef.type = pBodyType; boxBodyDef.position.x = pSprite.getX() / Consts.PIXEL_METER_RATIO; 
boxBodyDef.position.y = pSprite.getY() / Consts.PIXEL_METER_RATIO; // Temporary Box shape of the Body final PolygonShape boxPoly = new PolygonShape(); final float halfWidth = pWidth * 0.5f / Consts.PIXEL_METER_RATIO; final float halfHeight = pHeight * 0.5f / Consts.PIXEL_METER_RATIO; boxPoly.setAsBox( halfWidth, halfHeight ); // set the anchor point to be the center of the sprite pFixtureDef.shape = boxPoly; final Body boxBody = BattleScreen.getPhysicsWorld().createBody(boxBodyDef); boxBody.createFixture(pFixtureDef); return boxBody; } Sorry for all the code and long description, but it's hard to pin down what exactly might be causing the problem.
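    For reference, one common cause of a sprite that lines up at some headings but drifts while turning is mixing the body centre with the sprite corner: a Box2D body position is the centre of a box fixture, while sprite setPosition calls usually place the lower-left corner. A rough C++ sketch of the alignment against the native Box2D API (the ratio and the names here are invented, not taken from the post):

        #include <box2d/box2d.h>

        // Hypothetical pixels-per-meter ratio standing in for Consts.PIXEL_METER_RATIO.
        static const float PIXEL_METER_RATIO = 32.0f;

        struct SpriteTransform {
            float x, y;               // lower-left corner of the sprite, in pixels
            float originX, originY;   // rotation origin relative to that corner, in pixels
            float rotationDeg;
        };

        // Keep a sprite glued to a body whose box fixture is centred on the body origin.
        SpriteTransform syncSpriteToBody(const b2Body& body, float widthPx, float heightPx) {
            SpriteTransform t;
            const b2Vec2 centerPx = PIXEL_METER_RATIO * body.GetPosition();
            t.x = centerPx.x - widthPx * 0.5f;   // corner = centre minus half the size
            t.y = centerPx.y - heightPx * 0.5f;
            t.originX = widthPx * 0.5f;          // so rotation happens about the centre
            t.originY = heightPx * 0.5f;
            t.rotationDeg = body.GetAngle() * 180.0f / 3.14159265f;
            return t;
        }

    In the libGDX draw() above, the equivalent would be subtracting half the sprite size after multiplying the body position by the ratio, before applying the rotation.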

    Read the article

  • First-Time GLSL Shadow Mapping Problems

    - by Locke
    I'm working on building out a 2.5D engine and having massive problems getting my shadows working. I'm at a point where I'm VERY close. So, let's see a picture to see what I have: As you can see above, the image has lighting -- but the shadow map is displaying incorrectly. The shadow map is shown in the bottom left hand side of the screen as a normal 2D texture, so we can see what it looks like at any given time. If you notice, it appears that the shadows are generating backwards in the wrong direction -- I think. But the problem is a little more deep -- I'm just plotting the shadow onto the screen, which I know is wrong -- I'm ignoring the actual test to see if we NEED to show a shadow. The incoming parameters all appear to be correct -- so there has to be something wrong with my shader code somewhere. Here's what my code looks like: VERTEX: uniform mat4 LightModelViewProjectionMatrix; varying vec3 Normal; // The eye-space normal of the current vertex. varying vec4 LightCoordinate; // The texture coordinate of the light of the current vertex. varying vec3 LightDirection; // The eye-space direction of the light. void main() { Normal = normalize(gl_NormalMatrix * gl_Normal); LightDirection = normalize(gl_NormalMatrix * gl_LightSource[0].position.xyz); LightCoordinate = LightModelViewProjectionMatrix * gl_Vertex; LightCoordinate.xy = ( LightCoordinate.xy * 0.5 ) + 0.5; gl_Position = ftransform(); gl_TexCoord[0] = gl_MultiTexCoord0; } FRAGMENT: uniform sampler2D DiffuseMap; uniform sampler2D ShadowMap; varying vec3 Normal; // The eye-space normal of the current vertex. varying vec4 LightCoordinate; // The texture coordinate of the light of the current vertex. varying vec3 LightDirection; // The eye-space direction of the light. void main() { vec4 Texel = texture2D(DiffuseMap, vec2(gl_TexCoord[0])); // Directional lighting //Build ambient lighting vec4 AmbientElement = gl_LightSource[0].ambient; //Build diffuse lighting float Lambert = max(dot(Normal, LightDirection), 0.0); //max(abs(dot(Normal, LightDirection)), 0.0); vec4 DiffuseElement = ( gl_LightSource[0].diffuse * Lambert ); vec4 LightingColor = ( DiffuseElement + AmbientElement ); LightingColor.r = min(LightingColor.r, 1.0); LightingColor.g = min(LightingColor.g, 1.0); LightingColor.b = min(LightingColor.b, 1.0); LightingColor.a = min(LightingColor.a, 1.0); LightingColor *= Texel; //Everything up to this point is PERFECT // Shadow mapping // ------------------------------ vec4 ShadowCoordinate = LightCoordinate / LightCoordinate.w; float DistanceFromLight = texture2D( ShadowMap, ShadowCoordinate.st ).z; float DepthBias = 0.001; float ShadowFactor = 1.0; if( LightCoordinate.w > 0.0 ) { ShadowFactor = DistanceFromLight < ( ShadowCoordinate.z + DepthBias ) ? 0.5 : 1.0; } LightingColor.rgb *= ShadowFactor; //gl_FragColor = LightingColor; //Yes, I know this is wrong, but the line above (gl_FragColor = LightingColor;) produces the wrong effect gl_FragColor = LightingColor * texture2D( ShadowMap, ShadowCoordinate.st ); } I wanted to make sure the coordinates were correct for the shadow map -- so that's why you see it applied to the image as it is below. But the depth for each point seems to be wrong -- the shadows SHOULD be opposite (look at how the image is -- the shaded areas from normal lighting are facing the opposite direction of the shadows). Maybe my matrices are bad or something going in? They're isolated and appear to be correct -- nothing else is going in unusual. 
When I view from the light's view and get the MVP matrices for it, they're correct. EDIT: Added an image so you can see what happens when I use the correct command at the end of the GLSL: that's the image when the last line is just gl_FragColor = LightingColor;. Maybe someone has some idea of what I screwed up?
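    One detail worth flagging in the vertex shader above: the *0.5 + 0.5 remap is applied to LightCoordinate.xy before the fragment shader divides by w, so the offset gets divided as well (it only cancels out for a truly orthographic light). A common arrangement folds the remap into a bias matrix applied after the light's projection; a C++ sketch using GLM, with the model matrix assumed to be identity and all names hypothetical:

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        // Builds the matrix that carries a world-space vertex into shadow-map texture space.
        // Because the 0.5 scale/offset is applied in homogeneous coordinates, the perspective
        // divide done later in the fragment shader still lands the result in [0, 1].
        glm::mat4 buildShadowMatrix(const glm::vec3& lightPos, const glm::vec3& target,
                                    float fovDeg, float aspect, float nearP, float farP)
        {
            const glm::mat4 bias(0.5f, 0.0f, 0.0f, 0.0f,
                                 0.0f, 0.5f, 0.0f, 0.0f,
                                 0.0f, 0.0f, 0.5f, 0.0f,
                                 0.5f, 0.5f, 0.5f, 1.0f);   // column-major; last column is the offset
            const glm::mat4 lightView = glm::lookAt(lightPos, target, glm::vec3(0.0f, 1.0f, 0.0f));
            const glm::mat4 lightProj = glm::perspective(glm::radians(fovDeg), aspect, nearP, farP);
            return bias * lightProj * lightView;   // upload as LightModelViewProjectionMatrix
        }

    With the bias folded into the matrix, the vertex shader would drop the LightCoordinate.xy remap line, and the fragment shader's ShadowCoordinate = LightCoordinate / LightCoordinate.w stays as it is.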

    Read the article

  • Problems implementing a screen space shadow ray tracing shader

    - by Grieverheart
    Here I previously asked for the possibility of ray tracing shadows in screen space in a deferred shader. Several problems were pointed out. One of the most important problem is that only visible objects can cast shadows and objects between the camera and the shadow caster can interfere. Still I thought it'd be a fun experiment. The idea is to calculate the view coordinates of pixels and cast a ray to the light. The ray is then traced pixel by pixel to the light and its depth is compared with the depth at the pixel. If a pixel is in front of the ray, a shadow is casted at the original pixel. At first I thought that I could use the DDA algorithm in 2D to calculate the distance 't' (in p = o + t d, where o origin, d direction) to the next pixel and use it in the 3D ray equation to find the ray's z coordinate at that pixel's position. For the 2D ray, I would use the projected and biased 3D ray direction and origin. The idea was that 't' would be the same in both 2D and 3D equations. Unfortunately, this is not the case since the projection matrix is 4D. Thus, some tweak needs to be done to make this work this way. I would like to ask if someone knows of a way to do what I described above, i.e. from a 2D ray in texture coordinate space to get the 3D ray in screen space. I did implement a simple version of the idea which you can see in the following video: video here Shadows may seem a bit pixelated, but that's mostly because of the size of the step in 't' I chose. And here is the shader: #version 330 core uniform sampler2D DepthMap; uniform vec2 projAB; uniform mat4 projectionMatrix; const vec3 light_p = vec3(-30.0, 30.0, -10.0); noperspective in vec2 pass_TexCoord; smooth in vec3 viewRay; layout(location = 0) out float out_AO; vec3 CalcPosition(void){ float depth = texture(DepthMap, pass_TexCoord).r; float linearDepth = projAB.y / (depth - projAB.x); vec3 ray = normalize(viewRay); ray = ray / ray.z; return linearDepth * ray; } void main(void){ vec3 origin = CalcPosition(); if(origin.z < -60) discard; vec2 pixOrigin = pass_TexCoord; //tex coords vec3 dir = normalize(light_p - origin); vec2 texel_size = vec2(1.0 / 600.0); float t = 0.1; ivec2 pixIndex = ivec2(pixOrigin / texel_size); out_AO = 1.0; while(true){ vec3 ray = origin + t * dir; vec4 temp = projectionMatrix * vec4(ray, 1.0); vec2 texCoord = (temp.xy / temp.w) * 0.5 + 0.5; ivec2 newIndex = ivec2(texCoord / texel_size); if(newIndex != pixIndex){ float depth = texture(DepthMap, texCoord).r; float linearDepth = projAB.y / (depth - projAB.x); if(linearDepth > ray.z + 0.1){ out_AO = 0.2; break; } pixIndex = newIndex; } t += 0.5; if(texCoord.x < 0 || texCoord.x > 1.0 || texCoord.y < 0 || texCoord.y > 1.0) break; } } As you can see, here I just increment 't' by some arbitrary factor, calculate the 3D ray and project it to get the pixel coordinates, which is not really optimal. Hopefully, I would like to optimize the code as much as possible and compare it with shadow mapping and how it scales with the number of lights. PS: Keep in mind that I reconstruct position from depth by interpolating rays through a full screen quad.

    Read the article

  • DirectX particle system. ConstantBuffer

    - by Liuka
    I'm new to DirectX and I'm making a 2D game. I want to use a particle system to simulate a 3D starfield, so each star has to set its own constant buffer for the vertex shader, e.g. to set its world matrix. So if I have 500 stars (that move every frame) I need to call VSSetConstantBuffers 500 times, and map/unmap each buffer. With 500 stars I have an average of 220 fps and that's quite good. My bottleneck is VS/PSSetConstantBuffers. If I don't call this function I have 400 fps (obviously nothing is displayed, since I don't set the position of the stars). So is there a method to speed up the rendering of the particle system? PS: I'm using Intel integrated graphics (HD 2000-3000). With an NVIDIA (or AMD) GPU will I have the same bottleneck? If, for example, I don't call SetShaderResources I have 10-20 fps more (for 500 objects), that is not 180. Why does SetConstantBuffers take so long? LPVOID VSdataPtr = VSmappedResource.pData; memcpy(VSdataPtr, VSdata, CszVSdata); context->Unmap(VertexBuffer, 0); result = context->Map(PixelBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &PSmappedResource); if (FAILED(result)) { outputResult.OutputErrorMessage(TITLE, L"Cannot map the PixelBuffer", &result, OUTPUT_ERROR_FILE); return; } LPVOID PSdataPtr = PSmappedResource.pData; memcpy(PSdataPtr, PSdata, CszPSdata); context->Unmap(PixelBuffer, 0); context->VSSetConstantBuffers(0, 1, &VertexBuffer); context->PSSetConstantBuffers(0, 1, &PixelBuffer); This updates and sets the buffers. It's part of the render method of a sprite class that also contains the vertex buffer and the texture to apply to the quads (it's a 2D game). I have an array of 500 stars (sprites set up with a star texture). Every frame: clear the back buffer; draw the array of stars; present the back buffer; draw also calls the function update(), which calculates the position of the sprite on screen based on a "camera" class. OK, creating a vertex buffer with the vertices of each quad (star) seems to be good, since the stars don't change their "virtual" position; so.... In a particle system (where particles move) it's better to have all the objects in only one vertex array, rather than an array of different sprites/objects, in order to update all the vertices' positions with a single set-buffer call. In this case I have to use a dynamic vertex buffer with the vertex positions like this: verticesForQuad={{ XMFLOAT3((float)halfDImensions.x-1+pos.x, (float)halfDImensions.y-1+pos.y, 1.0f), XMFLOAT2(1.0f, 0.0f) }, { XMFLOAT3((float)halfDImensions.x-1+pos.x, -(float)halfDImensions.y-1+pos.y, 1.0f), XMFLOAT2(1.0f, 1.0f) }, { XMFLOAT3(-(float)halfDImensions.x-1+pos.x, (float)halfDImensions.y-1+pos.y, 1.0f), XMFLOAT2(0.0f, 0.0f) }, { XMFLOAT3(-(float)halfDImensions.x-1+pos.x, -(float)halfDImensions.y-1+pos.y, 1.0f), XMFLOAT2(0.0f, 1.0f) }, ....other quads} where halfDimensions is the half size in pixels of a texture and pos the virtual position of a star. Then create an array of verticesForQuad and create the vertex buffer: ZeroMemory(&vertexDesc, sizeof(vertexDesc)); vertexDesc.Usage = D3D11_USAGE_DEFAULT; vertexDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER; vertexDesc.ByteWidth = sizeof(VertexType)* 4*numStars; ZeroMemory(&resourceData, sizeof(resourceData)); resourceData.pSysMem = verticesForQuad; result = device->CreateBuffer(&vertexDesc, &resourceData, &CvertexBuffer); and call each frame Context->IASetVertexBuffers(0, 1, &CvertexBuffer, &stride, &offset); But if I want to add and remove objects I have to recreate the buffer each time, haven't I? Is there a faster way?
I think I can create a vertex buffer with a maximum size (e.g. 10000 objects) and, when I update it, set only the first 250 positions (for 250 objects, for example) and pass that number as the vertexCount to the draw function (numObjs*4), or am I wrong?
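    For what it's worth, a sketch of that single-dynamic-buffer idea in plain C++/D3D11 (the vertex layout, names and sizes below are assumptions, not the sprite class above): the live vertices are rewritten once per frame with Map/WRITE_DISCARD and drawn in one call, so no per-star VSSetConstantBuffers is needed, and adding or removing stars only changes how many vertices are written, not the buffer itself.

        #include <d3d11.h>
        #include <cstring>
        #include <vector>

        // Hypothetical per-star vertex; the real layout depends on the input layout in use.
        struct StarVertex { float x, y, z; float u, v; };

        // Created once, sized for the maximum number of vertices ever needed.
        ID3D11Buffer* CreateStarBuffer(ID3D11Device* device, UINT maxVertices)
        {
            D3D11_BUFFER_DESC desc = {};
            desc.Usage          = D3D11_USAGE_DYNAMIC;       // rewritten by the CPU every frame
            desc.BindFlags      = D3D11_BIND_VERTEX_BUFFER;
            desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
            desc.ByteWidth      = sizeof(StarVertex) * maxVertices;
            ID3D11Buffer* buffer = nullptr;
            device->CreateBuffer(&desc, nullptr, &buffer);
            return buffer;
        }

        // Each frame: write only the vertices in use, then issue a single draw.
        void DrawStars(ID3D11DeviceContext* ctx, ID3D11Buffer* buffer,
                       const std::vector<StarVertex>& verts)
        {
            D3D11_MAPPED_SUBRESOURCE mapped;
            if (SUCCEEDED(ctx->Map(buffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
            {
                std::memcpy(mapped.pData, verts.data(), verts.size() * sizeof(StarVertex));
                ctx->Unmap(buffer, 0);
            }
            UINT stride = sizeof(StarVertex), offset = 0;
            ctx->IASetVertexBuffers(0, 1, &buffer, &stride, &offset);
            ctx->Draw(static_cast<UINT>(verts.size()), 0);   // draw only what was written this frame
        }

    Indexing and topology details (for example, six indices per quad in a triangle list) are left out of this sketch.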

    Read the article

  • Using WPF and SlimDx (DirectX 10/11)

    - by slurmomatic
    I am using SlimDX with WinForms for a while now, but want to make the switch to WPF now. However, I can't figure out how to get DX10/11 working with WPF. The February release of SlimDX provides a WPF example, which only works with DX 9 though. I found the following solution: http://jmorrill.hjtcentral.com/Home/tabid/428/EntryId/437/Direct3D-10-11-Direct2D-in-WPF.aspx but can't get it to work with SlimDX. My main problem is the shared resource handle as I don't know how to retrieve the shared handle from a SlimDX texture. I can't find any information to this topic. In C++ the code looks like this: HRESULT D3DImageEx::GetSharedHandle(IUnknown *pUnknown, HANDLE * pHandle) { HRESULT hr = S_OK; *pHandle = NULL; IDXGIResource* pSurface; if (FAILED(hr = pUnknown->QueryInterface(__uuidof(IDXGIResource), (void**)&pSurface))) return hr; hr = pSurface->GetSharedHandle(pHandle); pSurface->Release(); return hr; } Basically, what I want to do (because I think that this is the solution), is to share a texture between a Direct3d9DeviceEx (for rendering the WPF D3DImage) and a Direct3d10Device (a texture render target for my scene). Any pointers in the right direction are greatly appreciated.

    Read the article

  • Cocoa equivalent of the Carbon method getPtrSize

    - by Michael Minerva
    I need to translate a Carbon method into Cocoa and I am having trouble finding any documentation about what the Carbon method getPtrSize really does. From the code I am translating it seems that it returns the byte representation of an image, but that doesn't really match up with the name. Could someone give me a good explanation of this method or link me to some documentation that describes it? The code I am translating is in a Common Lisp implementation called MCL that has a bridge to Carbon (I am translating into CCL, which is a Common Lisp implementation with a Cocoa bridge). Here is the MCL code (#_ before a method call means that it is a Carbon method): (defmethod COPY-CONTENT-INTO ((Source inflatable-icon) (Destination inflatable-icon)) ;; check for size compatibility to avoid disaster (unless (and (= (rows Source) (rows Destination)) (= (columns Source) (columns Destination)) (= (#_getPtrSize (image Source)) (#_getPtrSize (image Destination)))) (error "cannot copy content of source into destination inflatable icon: incompatible sizes")) ;; given that they are the same size only copy content (setf (is-upright Destination) (is-upright Source)) (setf (height Destination) (height Source)) (setf (dz Destination) (dz Source)) (setf (surfaces Destination) (surfaces Source)) (setf (distance Destination) (distance Source)) ;; arrays (noise-map Source) ;; accessor makes array if needed (noise-map Destination) ;; ;; accessor makes array if needed (dotimes (Row (rows Source)) (dotimes (Column (columns Source)) (setf (aref (noise-map Destination) Row Column) (aref (noise-map Source) Row Column)) (setf (aref (altitudes Destination) Row Column) (aref (altitudes Source) Row Column)))) (setf (connectors Destination) (mapcar #'copy-instance (connectors Source))) (setf (visible-alpha-threshold Destination) (visible-alpha-threshold Source)) ;; copy Image: slow byte copy (dotimes (I (#_getPtrSize (image Source))) (%put-byte (image Destination) (%get-byte (image Source) i) i)) ;; flat texture optimization: do not copy texture-id -> destination should get its own texture id from OpenGL (setf (is-flat Destination) (is-flat Source)) ;; do not compile flat textures: the display list overhead slows things down by about 2x (setf (auto-compile Destination) (not (is-flat Source))) ;; to make change visible we have to reset the compiled flag (setf (is-compiled Destination) nil))

    Read the article

  • Did I find a bug in WriteableBitmap when using string literals

    - by liserdarts
    For performance reasons I'm converting a large list of images into a single image. This code does exactly what I want. Private Function FlattenControl(Control As UIElement) As Image Control.Measure(New Size(1000, 1000)) Control.Arrange(New Rect(0, 0, 1000, 1000)) Dim ImgSource As New Imaging.WriteableBitmap(1000, 1000) ImgSource.Render(Control, New TranslateTransform) ImgSource.Invalidate Dim Img As New Image Img.Source = ImgSource Return Img End Function I can add all the images into a canvas pass the canvas to this function and I get back one image. My code to load all the images looks like this. Public Function BuildTextures(Layer As GLEED2D.Layer) As FrameworkElement Dim Container As New Canvas For Each Item In Layer.Items If TypeOf Item Is GLEED2D.TextureItem Then Dim Texture = CType(Item, GLEED2D.TextureItem) Dim Url As New Uri(Texture.texture_filename, UriKind.Relative) Dim Img As New Image Img.Source = New Imaging.BitmapImage(Url) Container.Children.Add(Img) End If Next Return FlattenControl(Container) End Function The GLEED2D.Layer and GLEED2D.TextureItem classes are from the free level editor GLEED2D (http://www.gleed2d.de/). The texture_filename on every TextureItem is "Images/tree_clipart_pine_tree.png" This works just fine, but it's just a proof of concept. What I really need to do (among other things) is have the path to the image hard coded. If I replace Texture.texture_filename in the code above with the string literal "Images/tree_clipart_pine_tree.png" the images do not appear in the final merged image. I can add a breakpoint and see that the WriteableBitmap has all of it's pixels as 0 after the call to Invalidate. I have no idea how this could cause any sort of difference, but it gets stranger. If I remove the call to FlattenControl and just return the Canvas instead, the images are visible. It's only when I use the string literal with the WriableBitmap that the images do not appear. I promise you that the value in the texture_filename property is exactly "Images/tree_clipart_pine_tree.png". I'm using Silverlight 3 and I've also reproduced this in Silverlight 4. Any ideas?

    Read the article

  • Draw Rectangle with XNA

    - by mazzzzz
    Hey guys, I was working on a game and wanted to highlight a spot on the screen when something happens. I created a class to do this for me, and found a bit of code to draw the rectangle static private Texture2D CreateRectangle(int width, int height, Color colori) { Texture2D rectangleTexture = new Texture2D(game.GraphicsDevice, width, height, 1, TextureUsage.None, SurfaceFormat.Color); // create the rectangle texture, but it will have no color! let's fix that Color[] color = new Color[width * height]; // one color entry per pixel in the texture for (int i = 0; i < color.Length; i++) // loop through all the colors, setting them to whatever values we want { color[i] = colori; } rectangleTexture.SetData(color); // set the color data on the texture return rectangleTexture; // return the texture } The problem is that the code above is called every update (60 times a second), and it was not written with optimization in mind; I have no clue how else to write code to do this, though. It needs to be extremely fast (the code above freezes the game, which has only skeleton code right now). Any suggestions? Note: Any new code would be great (WireFrame/Fill are both fine). I would like to be able to specify color. Something to point in the right direction would be great. Thanks, Max

    Read the article

  • Drawing only part of a texture

    - by Ben Reeves
    ..Continued on from my previous question I have a 320*480 RGB565 framebuffer which I wish to draw using OpenGL ES 1.0 on the iPhone. - (void)setupView { glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_CROP_RECT_OES, (int[4]){0, 0, 480, 320}); glEnable(GL_TEXTURE_2D); } // Updates the OpenGL view when the timer fires - (void)drawView { // Make sure that you are drawing to the current context [EAGLContext setCurrentContext:context]; //Get the 320*480 buffer const int8_t * frameBuf = [source getNextBuffer]; //Create enough storage for a 512x512 power of 2 texture int8_t lBuf[2*512*512]; memcpy (lBuf, frameBuf, 320*480*2); //Upload the texture glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 512, 512, 0, GL_RGB, GL_UNSIGNED_SHORT_5_6_5, lBuf); //Draw it glDrawTexiOES(0, 0, 1, 480, 320); [context presentRenderbuffer:GL_RENDERBUFFER_OES]; } If I produce the original texture in 512*512 the output is cropped incorrectly but other than that looks fine. However using the require output size of 320*480 everything is distorted and messed up. I'm pretty sure it's the way I'm copying the framebuffer into the new 512*512 buffer. I have tried this routine int8_t lBuf[512][512][2]; const char * frameDataP = frameData; for (int ii = 0; ii < 480; ++ii) { memcpy(lBuf[ii], frameDataP, 320); frameDataP += 320; } Which is better, but the width appears to be stretched and the height is messed up. Any help appreciated.
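    A small sketch of the row copy with the RGB565 strides written out, since each pixel is two bytes: a source row is 320*2 bytes and the destination row stride is 512*2 bytes (the loop above copies only 320 bytes per row, i.e. half a row, which would produce exactly this kind of distortion). Plain C++, and an assumption about what the copy is meant to do rather than the poster's final code:

        #include <cstdint>
        #include <cstring>

        // Copy a tightly packed 320x480 RGB565 frame into the top-left corner
        // of a 512x512 RGB565 buffer (2 bytes per pixel in both).
        void copyFrameIntoPow2(const uint8_t* src, uint8_t* dst)
        {
            const int srcW = 320, srcH = 480, dstW = 512, bpp = 2;
            for (int row = 0; row < srcH; ++row) {
                std::memcpy(dst + row * dstW * bpp,   // start of row in the padded buffer
                            src + row * srcW * bpp,   // start of row in the frame
                            srcW * bpp);              // one full 320-pixel row
            }
        }

    Alternatively, glTexSubImage2D can upload the 320x480 region into an already-allocated 512x512 texture without any CPU-side repacking.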

    Read the article

  • Space-saving character encoding for japanese?

    - by Constantin
    In my opinion this is a common problem: character encoding in combination with a bitmap font. Most multi-language encodings have huge gaps between different character ranges and a lot of unused code points. So if I want to use them I waste a lot of memory (not only for storing multi-byte text - I mean especially the gaps in my bitmap font) - and VRAM is usually really valuable... So the only reasonable thing seems to be: using a custom mapping on my texture for, e.g., UTF-8 characters (so that no space is wasted). BUT: this effort seems to be the same as using my own proprietary character encoding (and therefore my own order of characters in the texture). In my specific case I have texture space for 4096 different characters and need to display Latin languages as well as Japanese (it's a mess with UTF-8, which only has general support for the CJK code pages). Has anybody ever had a similar problem (I'd really wonder if not)? Is there already an approach for this? Edit: The same problem is described here http://www.tonypottier.info/Unicode_And_Japanese_Kanji/ but it doesn't provide a real solution for how to store these bitmap-font mappings space-efficiently for UTF-8. So any further help is welcome!
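    One way to read the "custom mapping" idea, sketched below as a hypothetical C++ glyph cache (all names invented): keep the text itself in a standard encoding and hand out texture slots lazily, so the 4096 slots are filled only by code points that actually occur in the displayed text rather than by any encoding's code-point layout.

        #include <cstdint>
        #include <unordered_map>

        // Maps Unicode code points to slots in a 4096-glyph bitmap-font texture.
        class GlyphAtlasMap {
        public:
            static const int kMaxSlots = 4096;

            // Returns the texture slot for a code point, allocating one on first use.
            // Returns -1 when the atlas is full (a real cache would evict or rebuild here).
            int slotFor(uint32_t codePoint) {
                auto it = slots_.find(codePoint);
                if (it != slots_.end()) return it->second;
                if (nextSlot_ >= kMaxSlots) return -1;
                int slot = nextSlot_++;
                slots_[codePoint] = slot;   // the glyph bitmap would be rendered into this slot
                return slot;
            }

        private:
            std::unordered_map<uint32_t, int> slots_;
            int nextSlot_ = 0;
        };

    The texture order then never has to match UTF-8, Shift-JIS or anything else; the map is the only place that knows which slot a character landed in.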

    Read the article

  • Textures in Opengl ES 2 not working properly

    - by Adl
    Hi! I'm working with OpenGL ES 2 on iPhone and right now I am trying to get my textures working on my objects. I'm using .obj files and all the data in them is correct. I have written a parser myself to retrieve all the data and convert it to static arrays in C. I discard the material properties for now, only getting the image path from the .mtl files manually. I have an object with 336 triangles, which makes this non-trivial to inspect, with its corresponding vertices, vertex faces and texture coordinates (u,v). Passing all the data into the shaders, the resulting image is this: http://img530.imageshack.us/img530/9637/pic1io.png http://img404.imageshack.us/img404/7358/pic2pg.png But it should look like this (displayed in an object viewer). Please ignore the material properties. http://img16.imageshack.us/img16/1401/pic3cq.png Using this image as a texture: http://img217.imageshack.us/img217/1300/shirtdiffuse.png I'm thinking it might have to do with the texture coordinate faces? They are defined in my .obj file, and I'm not using them at all. In books and tutorials I have not found anything concerning this. Regards Niclas
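    Since the post mentions ignoring the texture-coordinate faces: an .obj file indexes positions and UVs independently (f v/vt ...), while OpenGL ES 2 uses a single index per vertex, so each distinct v/vt pair normally has to become its own vertex. A rough C++ sketch of that expansion (types and names invented, normals omitted):

        #include <cstdint>
        #include <map>
        #include <utility>
        #include <vector>

        struct Vec3 { float x, y, z; };
        struct Vec2 { float u, v; };

        struct Mesh {
            std::vector<Vec3>     positions;   // one entry per unique (v, vt) pair
            std::vector<Vec2>     texcoords;   // kept parallel to positions
            std::vector<uint16_t> indices;     // what glDrawElements consumes
        };

        // For every face corner "v/vt" in the .obj, call this and push the result:
        //   mesh.indices.push_back(dedupVertex(mesh, cache, objPositions, objTexcoords, v, vt));
        uint16_t dedupVertex(Mesh& mesh,
                             std::map<std::pair<int, int>, uint16_t>& cache,
                             const std::vector<Vec3>& objPositions,
                             const std::vector<Vec2>& objTexcoords,
                             int v, int vt)   // zero-based .obj indices
        {
            const auto key = std::make_pair(v, vt);
            const auto it = cache.find(key);
            if (it != cache.end()) return it->second;        // this pair was seen before
            const uint16_t index = static_cast<uint16_t>(mesh.positions.size());
            mesh.positions.push_back(objPositions[v]);
            mesh.texcoords.push_back(objTexcoords[vt]);
            cache.emplace(key, index);
            return index;
        }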

    Read the article

  • displaying a saved buffer in OpenGL ES

    - by Adam
    Hi everyone, So basically I have a screenshot that I've saved that I want to later display on the screen. I've saved it with: glReadPixels(0, 0, self.bounds.size.width, self.bounds.size.height, GL_RGBA, GL_UNSIGNED_BYTE, savedBuffer); And later I'm trying to write it to the screen with: GLuint RenderedTex; glGenTextures(1, &RenderedTex); glEnable(GL_TEXTURE_2D); glBindTexture(GL_TEXTURE_2D, RenderedTex); glTexImage2D (GL_TEXTURE_2D, 0, GL_RGBA, self.bounds.size.width, self.bounds.size.height, 0, GL_RGBA, GL_UNSIGNED_BYTE, savedBuffer); glDisable(GL_TEXTURE_2D); I'm pretty new to OpenGL so I don't know if I'm doing things right... actually I know I'm not, because nothing shows up. Also not sure how to dispose of the texture when I'm done with it. Anyone know the correct way to do this? Edit: I think I might be having a problem loading the texture because it's not a power of 2, it's 320x480... also, I think this code is just loading the texture, but not drawing it, I'd need a call to glDrawArrays(GL_TRIANGLE_STRIP, 0, 4) in there somewhere...
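    As the post suspects, the snippet only creates the texture; nothing draws it yet, and the default GL_TEXTURE_MIN_FILTER expects mipmaps, which makes an un-mipmapped texture sample as black. A rough GL ES 1.x sketch of the missing draw, assuming a pixel-space orthographic projection is already set and the 320x480 pixels were padded into a 512x512 texture:

        #include <OpenGLES/ES1/gl.h>

        // Draw the saved screenshot as a textured quad covering 320x480 pixels.
        // Texture coordinates only cover the 320/512 x 480/512 region that holds data.
        static void drawSavedScreenshot(GLuint tex)
        {
            static const GLfloat verts[] = {
                0.0f,   0.0f,
                320.0f, 0.0f,
                0.0f,   480.0f,
                320.0f, 480.0f,
            };
            static const GLfloat texCoords[] = {
                0.0f,          0.0f,
                320.0f/512.0f, 0.0f,
                0.0f,          480.0f/512.0f,
                320.0f/512.0f, 480.0f/512.0f,
            };

            glEnable(GL_TEXTURE_2D);
            glBindTexture(GL_TEXTURE_2D, tex);
            // Without this, the default minification filter requires mipmap levels.
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

            glEnableClientState(GL_VERTEX_ARRAY);
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);
            glVertexPointer(2, GL_FLOAT, 0, verts);
            glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
            glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
            glDisableClientState(GL_TEXTURE_COORD_ARRAY);
            glDisableClientState(GL_VERTEX_ARRAY);
            glDisable(GL_TEXTURE_2D);
        }

    When the texture is no longer needed, glDeleteTextures(1, &tex) disposes of it.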

    Read the article

  • Opengl: Problem with masking

    - by Shaza
    Hey all, I'm working on creating a hole in a wall using masking in OpenGL; my code is quite simple, like this: //Draw the mask glEnable(GL_BLEND); glBlendFunc(GL_DST_COLOR,GL_ZERO); glBindTexture(GL_TEXTURE_2D, texture[3]); glBegin(GL_QUADS); glTexCoord2d(0,0); glVertex3f(-20,40,-20); glTexCoord2d(0,1);glVertex3f(-20,40,40); glTexCoord2d(1,1);glVertex3f(20,40,40); glTexCoord2d(1,0);glVertex3f(20,40,-20); glEnd(); //Draw the Texture glBlendFunc(GL_ONE, GL_ONE); glBindTexture(GL_TEXTURE_2D, texture[2]); glBegin(GL_QUADS); glTexCoord2d(0,0); glVertex3f(-20,40,-20); glTexCoord2d(0,1);glVertex3f(-20,40,40); glTexCoord2d(1,1);glVertex3f(20,40,40); glTexCoord2d(1,0);glVertex3f(20,40,-20); glEnd(); The problem is, I get the hole in the wall correctly, but it's semi-transparent; I'm getting a kind of black shade over it, and I can also see through it. Here's a photo of what I'm getting: any suggestions?

    Read the article

  • glDrawArrays() slow on iPad?

    - by Nick
    Hey guys, I was wondering how to speed up my iPad application using OpenGLES 2.0. At the moment we have every drawable object draw itself with a call to glDrawArrays(). Blend mode is on, we really need it. Without disabling blendmode, how would we improve performance for this app? For instances, if we now draw 1 texture across the whole screen, the app only gets 15FPS, which is really slow I think? Are we doing something terribly wrong? Our drawing code (for each drawable), is as follows: - (void) draw { GLuint textureAvailable = 0; if(texture != nil){ textureAvailable = 1; } glActiveTexture(GL_TEXTURE0); glBindTexture(GL_TEXTURE_2D, texture.name); glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, vertices); glEnableVertexAttribArray(ATTRIB_VERTEX); glVertexAttribPointer(ATTRIB_COLOR, 4, GL_FLOAT, 1, 0, colorsWithMultipliedAlpha); glEnableVertexAttribArray(ATTRIB_COLOR); glVertexAttribPointer(ATTRIB_TEXTUREMAP, 2, GL_FLOAT, 1, 0, textureMapping); glEnableVertexAttribArray(ATTRIB_TEXTUREMAP); //Note that we are NOT using position.z here because that is only used to determine drawing order int *jnUniforms = JNOpenGLConstants::getInstance().uniforms; glUniform4f(jnUniforms[UNIFORM_TRANSLATE], position.x, position.y, 0.0, 0.0); glUniform4f(jnUniforms[UNIFORM_SCALE], scale.x, scale.y, 1.0, 1.0); glUniform1f(jnUniforms[UNIFORM_ROTATION], rotation); glUniform1i(jnUniforms[UNIFORM_TEXTURE_SAMPLE], 0); glUniform2f(jnUniforms[UNIFORM_TEXTURE_REPEAT], textureRepeat.x, textureRepeat.y); glUniform1i(jnUniforms[UNIFORM_TEXTURE_AVAILABLE], textureAvailable); glDrawArrays(GL_TRIANGLE_STRIP, 0, 4); }

    Read the article

  • iPhone OpenGLES: Textures are consuming too much memory and the program crashes with signal "0"

    - by CustomAppsMan
    I am not sure what the problem is. My app is running fine on the simulator but when I try to run it on the iPhone it crashes during debugging or without debugging with signal "0". I am using the Texture2D.m and OpenGLES2DView.m from the examples provided by Apple. I profiled the app on the iPhone with Instruments using the Memory tracer from the Library and when the app died the final memory consumed was about 60Mb real and 90+Mb virtual. Is there some other problem or is the iPhone just killing the application because it has consumed too much memory? If you need any information please state it and I will try to provide it. I am creating thousands of textures at load time which is why the memory consumption is so high. Really cant do anything about reducing the number of pics being loaded. I was running before on just UIImage but it was giving me really low frame rates. I read on this site that I should use OpenGLES for higher frame rates. Also sub question is there any way not to use UIImage to load the png file and then use the Texture class provided to create the texture for OpenGLES functions to use it for drawing? Is there some function in OpenGLES which will create a texture straight from a png file?

    Read the article

  • OpenGL Mipmapping: how does OpenGL decide on map level?

    - by Droozle
    Hi, I am having trouble implementing mipmapping in OpenGL. I am using OpenFrameworks and have modified the ofTexture class to support the creation and rendering of mipmaps. The following code is the original texture creation code from the class (slightly modified for clarity): glEnable(texData.textureTarget); glBindTexture(texData.textureTarget, (GLuint)texData.textureID); glTexSubImage2D(texData.textureTarget, 0, 0, 0, w, h, texData.glType, texData.pixelType, data); glTexParameteri(texData.textureTarget, GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexParameteri(texData.textureTarget, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glDisable(texData.textureTarget); This is my version with mipmap support: glEnable(texData.textureTarget); glBindTexture(texData.textureTarget, (GLuint)texData.textureID); gluBuild2DMipmaps(texData.textureTarget, texData.glTypeInternal, w, h, texData.glType, texData.pixelType, data); glTexParameteri(texData.textureTarget, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR); glTexParameteri(texData.textureTarget, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR); glDisable(texData.textureTarget); The code does not generate errors (gluBuild2DMipmaps returns '0') and the textures are rendered without problems. However, I do not see any difference. The scene I render consists of "flat, square tiles" at z=0. It's basically a 2D scene. I zoom in and out by using "glScale()" before drawing the tiles. When I zoom out, the pixels of the tile textures start to "dance", indicating (as far as I can tell) unfiltered texture look-up. See: http://www.youtube.com/watch?v=b_As2Np3m8A at 25s. My question is: since I do not move the camera position, but only use scaling of the whole scene, does this mean OpenGL can not decide on the appropriate mipmap level and uses the full texture size (level 0)? Paul
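    Two details worth checking, shown below as a C++ sketch of an alternative upload path (assuming desktop GL on the Mac and a GL_TEXTURE_2D target): mipmaps are not supported at all on GL_TEXTURE_RECTANGLE_ARB, which openFrameworks uses by default for non-power-of-two textures, and GL_TEXTURE_MAG_FILTER only accepts GL_NEAREST or GL_LINEAR, so the GL_LINEAR_MIPMAP_LINEAR mag filter in the second snippet is an invalid enum.

        #include <OpenGL/gl.h>   // macOS desktop GL header; adjust per platform

        // Upload level 0 and let the driver build the mip chain (GL_GENERATE_MIPMAP, GL 1.4+),
        // instead of gluBuild2DMipmaps. Only valid for GL_TEXTURE_2D targets.
        void uploadMipmappedTexture(GLuint id, GLint internalFormat, int w, int h,
                                    GLenum format, GLenum type, const void* data)
        {
            glEnable(GL_TEXTURE_2D);
            glBindTexture(GL_TEXTURE_2D, id);
            glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
            glTexImage2D(GL_TEXTURE_2D, 0, internalFormat, w, h, 0, format, type, data);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);   // NEAREST or LINEAR only
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
            glDisable(GL_TEXTURE_2D);
        }

    As for the question itself: the mip level is chosen from the on-screen derivatives of the texture coordinates, so shrinking everything with glScale does push selection toward smaller levels; shimmering under minification usually means the mip chain or the MIN filter is not actually in effect.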

    Read the article

  • Simple gradient issue with OpenGL on iphone simulator

    - by Paul
    i was following the tutorial from raywenderlich website, the gradient seems not to work perfectly, is it only because of the iphone simulator, or is it something else? I can't try myself with an iphone. Here is the image : And the code : -(CCSprite *)spriteWithColor:(ccColor4F)bgColor textureSize:(float)textureSize { // 1: Create new CCRenderTexture CCRenderTexture *rt = [CCRenderTexture renderTextureWithWidth:textureSize height:screenSize.height]; // 2: Call CCRenderTexture:begin [rt beginWithClear:bgColor.r g:bgColor.g b:bgColor.b a:bgColor.a]; // 3: Draw into the texture glDisable(GL_TEXTURE_2D); glDisableClientState(GL_TEXTURE_COORD_ARRAY); float gradientAlpha = 0.5; CGPoint vertices[4]; ccColor4F colors[4]; int nVertices = 0; vertices[nVertices] = CGPointMake(0, 0); colors[nVertices++] = (ccColor4F){0, 0, 0, 0}; vertices[nVertices] = CGPointMake(textureSize, 0); colors[nVertices++] = (ccColor4F){0, 0, 0, gradientAlpha}; vertices[nVertices] = CGPointMake(0, screenSize.height); colors[nVertices++] = (ccColor4F){0, 0, 0, 0}; vertices[nVertices] = CGPointMake(textureSize, screenSize.height); colors[nVertices++] = (ccColor4F){0, 0, 0, gradientAlpha}; glVertexPointer(2, GL_FLOAT, 0, vertices); glColorPointer(4, GL_FLOAT, 0, colors); glDrawArrays(GL_TRIANGLE_STRIP, 0, (GLsizei)nVertices); glEnableClientState(GL_TEXTURE_COORD_ARRAY); glEnable(GL_TEXTURE_2D); // 4: Call CCRenderTexture:end [rt end]; // 5: Create a new Sprite from the texture return [CCSprite spriteWithTexture:rt.sprite.texture]; } Thanks

    Read the article

  • PNG alpha rendered as black in Blender

    - by Camilo Martin
    I'm a Blender novice, so this is probably easy to fix. When I use a transparent PNG as a texture in Blender, the parts that should be transparent are rendered as black. This is especially confusing since in the material preview it looks as if the material would indeed be transparent. Here's a screenshot: This is the test texture, shown on the right on top of a checkerboard. Here is the .blend file in case you want to check it.

    Read the article

  • Flex, Papervision3d: basic scenes, cameras, planes issue

    - by user157880
    // Import Papervision3D import org.papervision3d.Papervision3D; import org.papervision3d.scenes.*; import org.papervision3d.cameras.*; import org.papervision3d.objects.*; import org.papervision3d.materials.*; public var scene:Scene3D; public var camera:Camera3D; public var target:DisplayObject3D; public var screenshotArray: Array; //array to store pictures public var radius:Number; public var image:Image; private function initPapervision():void { target= new DisplayObject3D(); camera= new Camera3D(target); 1067: Implicit coercion of a value of type org.papervision3d.objects:DisplayObject3D to an unrelated type Number. scene= new Scene3D(thisContainer); 1137: Incorrect number of arguments. Expected no more than 0. scene.addChild( new DisplayObject3D() , "center" ); } public function handleCreationComplete():void { image = new Image(); image.source = "one_t.jpg"; image.maintainAspectRatio = true; image.scaleContent = false; image.autoLoad = false; image.addEventListener(Event.COMPLETE, handleImageLoad); image.load(); initPapervision(); } public function handleImageLoad(event:Event):void { init3D(); // onEnterFrame addEventListener( Event.ENTER_FRAME, loop3D ); addEventListener( MouseEvent.MOUSE_MOVE , handleMouseMove); } public function init3D():void { addScreenshots(20); } public function handleMouseMove(event:MouseEvent):void { camera.y = Math.max( thisContainer.mouseY*5 , -450 ); } public function addScreenshots(planeCount:Number):void { var obj:DisplayObject3D = scene.getChildByName ("center"); screenshotArray = new Array(planeCount); var texture:BitmapData = new BitmapData(image.content.loaderInfo.width, image.content.loaderInfo.height, false, 0); texture.draw(image); // Create texture with a bitmap from the library var materialSpace :MaterialObject3D = new BitmapMaterial(texture); materialSpace.doubleSided = true; materialSpace.smooth = false; radius = 500; camera.z = radius + 50; camera.y = 50; for (var i:Number = 0; i < planeCount; i++ ) { screenshotArray[i] = new Object() screenshotArray[i].plane = new Plane( materialSpace, 100, 100, 2, 2 ); 1180: Call to a possibly undefined method Plane. // Position plane var rotation:Number = (360/planeCount)* i ; var rotationRadians:Number = (rotation-90) * (Math.PI/180); screenshotArray[i].rotation = rotation; screenshotArray[i].plane.z = (radius * Math.cos(rotationRadians) ) * -1; screenshotArray[i].plane.x = radius * Math.sin(rotationRadians) * -1; screenshotArray[i].plane.y = 100; screenshotArray[i].plane.lookAt(obj); // Add to scene scene.addChild( screenshotArray[i].plane ); } scene.renderCamera(camera); 1061: Call to a possibly undefined method renderCamera through a reference with static type org.papervision3d.scenes:Scene3D. } public function loop3D(event:Event):void { var obj:DisplayObject3D = scene.getChildByName ("center"); for (var i:Number = 0; i < screenshotArray.length ; i++ ) { var rotation:Number = screenshotArray[i].rotation; rotation += (thisContainer.mouseX/50)/(screenshotArray.length/10); var rotationRadians:Number = (rotation-90) * (Math.PI/180); screenshotArray[i].rotation = rotation; screenshotArray[i].plane.z = (radius * Math.cos(rotationRadians) ) * -1; screenshotArray[i].plane.x = radius * Math.sin(rotationRadians) * -1; screenshotArray[i].plane.lookAt(obj); } //now lets render the scene scene.renderCamera(camera); 1061: Call to a possibly undefined method renderCamera through a reference with static type org.papervision3d.scenes:Scene3D. }

    Read the article

  • Parallax backgrounds in OpenGL ES on the iPhone

    - by Scott
    I've got basically a 2d game on the iPhone and I'm trying to set up multiple backgrounds that scroll at different speeds (known as parallax backgrounds). So my thought was to just stick the backgrounds BEHIND the foreground using different z-coordinate planes, and just make them bigger than the foreground (in size) to accommodate, so that the whole thing can be scrolled (just at a different speed). And (as far as I know) I basically implemented that. The only problem is that it seems to entirely ignore whatever z-value I give it, or rather it just zeroes all of them. I see the background (I've only tested ONE background so far, to keep it simple...so for now I just have a foreground and I want one background scrolling at a different speed), but it scrolls 1:1 with my foreground, so it obviously doesn't look right, and most of it is cut off (cause it's bigger). And I've tried various z-values for the background and various near/far clipping planes...it's always the same. I'm probably just doing one simple thing wrong, but I can't figure it out. I'm wondering if it has to do with me using only 2 coordinates in glVertexPointer for the foreground? (Of course for the background I AM passing in 3) I'll post some code: This is some initial setup: glMatrixMode(GL_PROJECTION); glLoadIdentity(); glOrthof(-1.0f, 1.0f, -1.5f, 1.5f, -10.0f, 10.0f); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); glEnableClientState(GL_VERTEX_ARRAY); //glEnableClientState(GL_COLOR_ARRAY); glEnableClientState(GL_TEXTURE_COORD_ARRAY); //transparency glEnable (GL_BLEND); glBlendFunc (GL_ONE, GL_ONE_MINUS_SRC_ALPHA); A little bit about my foreground's float array....it's interleaved. For my foreground it goes vertex x, vertex y, texture x, texture y, repeat. This all works just fine. This is my FOREGROUND rendering: glVertexPointer(2, GL_FLOAT, 4*sizeof(GLfloat), texes); glTexCoordPointer(2, GL_FLOAT, 4*sizeof(GLfloat), (GLvoid*)texes + 2*sizeof(GLfloat)); glDrawArrays(GL_TRIANGLES, 0, indexCount / 4); BACKGROUND rendering: Same drill here except this time it goes vertex x, vertex y, vertex z, texture x, texture y, repeat. Note the z value this time. I did make sure the data in this array was correct while debugging (getting the right z values). And again, it shows up...it's just not going far back in the distance like it should. glVertexPointer(3, GL_FLOAT, 5*sizeof(GLfloat), b1Texes); glTexCoordPointer(2, GL_FLOAT, 5*sizeof(GLfloat), (GLvoid*)b1Texes + 3*sizeof(GLfloat)); glDrawArrays(GL_TRIANGLES, 0, b1IndexCount / 5); And to move my camera, I just do a simple glTranslatef(x, y, 0.0f); I'm not understanding what I'm doing wrong cause this seems like the most basic 3D function imaginable...things further away are smaller and don't move as fast when the camera moves. Not the case for me. Seems like it should be pretty basic and not even really be affected by my projection and all that (though I've even tried doing glFrustum just for fun, no success). Please help, I feel like it's just one dumb thing. I will post more code if necessary.
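    The projection here is glOrthof, and an orthographic projection has no perspective at all, so the z value will never change a layer's apparent size or scroll speed. The usual workaround is to apply the camera translation scaled by a per-layer factor. A small C++ sketch of that idea with invented names (the GL calls are the same ES 1.x ones already used above):

        #include <OpenGLES/ES1/gl.h>

        struct Layer {
            float parallaxFactor;   // 1.0 = foreground, 0.5 = half-speed background, etc.
            void draw() const { /* issue this layer's glDrawArrays calls here */ }
        };

        // Instead of one glTranslatef for the whole scene, translate per layer.
        void drawScene(const Layer* layers, int count, float camX, float camY)
        {
            for (int i = 0; i < count; ++i) {
                glPushMatrix();
                glTranslatef(-camX * layers[i].parallaxFactor,
                             -camY * layers[i].parallaxFactor, 0.0f);
                layers[i].draw();
                glPopMatrix();
            }
        }

    The alternative is a real perspective projection (glFrustum), where translating the camera does produce depth-dependent scroll speeds automatically.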

    Read the article

  • How do I draw an OpenGL point sprite using libgdx for Android?

    - by nbolton
    Here are a few snippets of what I have so far... void create() { renderer = new ImmediateModeRenderer(); tiles = Gdx.graphics.newTexture( Gdx.files.getFileHandle("res/tiles2.png", FileType.Internal), TextureFilter.MipMap, TextureFilter.Linear, TextureWrap.ClampToEdge, TextureWrap.ClampToEdge); } void render() { Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT); Gdx.gl.glClearColor(0.6f, 0.7f, 0.9f, 1); } void renderSprite() { int handle = tiles.getTextureObjectHandle(); Gdx.gl.glBindTexture(GL.GL_TEXTURE_2D, handle); Gdx.gl.glEnable(GL.GL_POINT_SPRITE); Gdx.gl11.glTexEnvi(GL.GL_POINT_SPRITE, GL.GL_COORD_REPLACE, GL.GL_TRUE); renderer.begin(GL.GL_POINTS); renderer.vertex(pos.x, pos.y, pos.z); renderer.end(); } create() is called once when the program starts, and renderSprite() is called for each sprite (so pos is unique to each sprite), where the sprites are arranged in a sort-of 3D cube. Unfortunately though, this just renders a few white dots... I suppose that the texture isn't being bound, which is why I'm getting white dots. Also, when I draw my sprites at anything other than z = 0, they do not appear -- I read that I need to increase my zfar and znear, but I have no idea how to do this using libgdx (perhaps it's because I'm using an ortho projection? What do I use instead?). I know that the texture is usable, since I was able to render it using a SpriteBatch, but I guess I'm not using it properly with OpenGL.
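    Not the libGDX wrapper itself, but the raw GL ES 1.1 state those wrapper calls map to, as a C-style sketch; the bits most often missing when point sprites come out as plain white dots are glEnable(GL_TEXTURE_2D) and a point size (the names and the 32-pixel size are assumptions):

        #include <OpenGLES/ES1/gl.h>
        #include <OpenGLES/ES1/glext.h>

        // Minimal fixed-function state for one textured point sprite.
        static void drawPointSprite(GLuint texture, const GLfloat* xyz /* 3 floats */)
        {
            glEnable(GL_TEXTURE_2D);
            glBindTexture(GL_TEXTURE_2D, texture);
            glEnable(GL_POINT_SPRITE_OES);
            glTexEnvi(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_TRUE);
            glPointSize(32.0f);                    // on-screen size in pixels

            glEnableClientState(GL_VERTEX_ARRAY);
            glVertexPointer(3, GL_FLOAT, 0, xyz);
            glDrawArrays(GL_POINTS, 0, 1);
            glDisableClientState(GL_VERTEX_ARRAY);
        }

    As for sprites disappearing at non-zero z: with an orthographic projection the point still has to fall inside the near/far range, which is exactly what adjusting zfar/znear changes; there is no perspective shrinking either way.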

    Read the article

  • Manually writing a dx11 tessellation shader

    - by Tudor
    I am looking for resources on what the steps are for manually implementing tessellation (I'm using Unity cg). Today it seems to be all the rage to hide most of the GPU code far away and use rather rigid simplifications such as Unity's surface shaders. And that seems useless unless you're doing superficial stuff. A little background: I have procedurally generated meshes (using marching cubes) which have quality normals but no UVs and no tangents. I have successfully written a custom vertex and fragment shader to do triplanar texture and bumpmap projection as well as some custom stuff (custom lighting, procedurally warping the texture for variation, etc). I am using the GPU Gems book as reference. Now I need to implement tessellation, but it seems I must calculate the tangents at runtime by swizzling normals (ctrl+f this in Gems: <normal.z, normal.y, -normal.x>) before the tessellator gets them. And I also need to keep my custom vert+frag setup (with my custom parameters/textures being passed between them) - so apparently I cannot use surface shaders. Can anyone provide some guidance?

    Read the article

  • Ingredient Substitutes while Baking

    - by Rekha
    In our normal cooking, we substitute the vegetables for the gravies we prepare. When we start baking, we look for a good recipe. At least one or two ingredient will be missing. We do not know where to substitute what to bring same output. So we finally drop the plan of baking. Again after a month, we get the interest in baking. Again one or two lack of ingredient and that’s it. We keep on doing this for months. When I was going through the cooking blogs, I came across a site with the Ingredient Substitutes for Baking: (*) is to indicate that this substitution is ideal from personal experience. Flour Substitutes ( For 1 cup of Flour) All Purpose Flour 1/2 cup white cake flour plus 1/2 cup whole wheat flour 1 cup self-rising flour (omit using salt and baking powder if the recipe calls for it since self raising flour has it already) 1 cup plus 2 tablespoons cake flour 1/2 cup (75 grams) whole wheat flour 7/8 cup (130 grams) rice flour (starch) (do not replace all of the flour with the rice flour) 7/8 cup whole wheat Bread Flour 1 cup all purpose flour 1 cup all purpose flour plus 1 teaspoon wheat gluten (*) Cake Flour Place 2 tbsp cornstarch in 1 cup and fill the rest up with All Purpose flour (*) 1 cup all purpose flour minus 2 tablespoons Pastry flour Place 2 tbsp cornstarch in 1 cup and fill the rest up with All Purpose flour Equal parts of All purpose flour plus cake flour (*) Self-rising Flour 1½ teaspoons of baking powder plus ½ teaspoon of salt plus 1 cup of all-purpose flour. Cornstarch (1 tbsp) 2 tablespoons all-purpose flour 1 tablespoon arrowroot 4 teaspoons quick-cooking tapioca 1 tablespoon potato starch or rice starch or flour Tapioca (1 tbsp) 1 – 1/2 tablespoons all-purpose flour Cornmeal (stone ground) polenta OR corn flour (gives baked goods a lighter texture) if using cornmeal for breading,crush corn chips in a blender until they have the consistency of cornmeal. 
maize meal Corn grits Sweeteners ( for Every 1 cup ) * * (HV) denotes Healthy Version for low fat or fat free substitution in Baking Light Brown Sugar 2 tablespoons molasses plus 1 cup of white sugar Dark Brown Sugar 3 tablespoons molasses plus 1 cup of white sugar Confectioner’s/Powdered Sugar Process 1 cup sugar plus 1 tablespoon cornstarch Corn Syrup 1 cup sugar plus 1/4 cup water 1 cup Golden Syrup 1 cup honey (may be little sweeter) 1 cup molasses Golden Syrup Combine two parts light corn syrup plus one part molasses 1/2 cup honey plus 1/2 cup corn syrup 1 cup maple syrup 1 cup corn syrup Honey 1- 1/4 cups sugar plus 1/4 cup water 3/4 cup maple syrup plus 1/2 cup granulated sugar 3/4 cup corn syrup plus 1/2 cup granulated sugar 3/4 cup light molasses plus 1/2 cup granulated white sugar 1 1/4 cups granulated white or brown sugar plus 1/4 cup additional liquid in recipe plus 1/2 teaspoon cream of tartar Maple Syrup 1 cup honey,thinned with water or fruit juice like apple 3/4 cup corn syrup plus 1/4 cup butter 1 cup Brown Rice Syrup 1 cup Brown sugar (in case of cereals) 1 cup light molasses (on pancakes, cereals etc) 1 cup granulated sugar for every 3/4 cup of maple syrup and increase liquid in the recipe by 3 tbsp for every cup of sugar.If baking soda is used, decrease the amount by 1/4 teaspoon per cup of sugar substituted, since sugar is less acidic than maple syrup Molasses 1 cup honey 1 cup dark corn syrup 1 cup maple syrup 3/4 cup brown sugar warmed and dissolved in 1/4 cup of liquid ( use this if taste of molasses is important in the baked good) Cocoa Powder (Natural, Unsweetened) 3 tablespoons (20 grams) Dutch-processed cocoa plus 1/8 teaspoon cream of tartar, lemon juice or white vinegar 1 ounce (30 grams) unsweetened chocolate (reduce fat in recipe by 1 tablespoon) 3 tablespoons (20 grams) carob powder Semisweet baking chocolate (1 oz) 1 oz unsweetened baking chocolate plus 1 Tbsp sugar Unsweetened baking chocolate (1 oz ) 3 Tbsp baking cocoa plus 1 Tbsp vegetable oil or melted shortening or margarine Semisweet chocolate chips (1 cup) 6 oz semisweet baking chocolate, chopped (Alternatively) For 1 cup of Semi sweet chocolate chips you can use : 6 tablespoons unsweetened cocoa powder, 7 tablespoons sugar ,1/4 cup fat (butter or oil) Leaveners and Diary * * (HV) denotes Healthy Version for low fat or fat free substitution in Baking Compressed Yeast (1 cake) 1 envelope or 2 teaspoons active dry yeast 1 packet (1/4 ounce) Active Dry yeast 1 cake fresh compressed yeast 1 tablespoon fast-rising active yeast Baking Powder (1 tsp) 1/3 teaspoon baking soda plus 1/2 teaspoon cream of tartar 1/2 teaspoon baking soda plus 1/2 cup buttermilk or plain yogurt 1/4 teaspoon baking soda plus 1/3 cup molasses. When using the substitutions that include liquid, reduce other liquid in recipe accordingly Baking Soda(1 tsp) 3 tsp Baking Powder ( and reduce the acidic ingredients in the recipe. Ex Instead of buttermilk add milk) 1 tsp potassium bicarbonate Ideal substitution – 2 tsp Baking powder and omit salt in recipe Cream of tartar (1 tsp) 1 teaspoon white vinegar 1 tsp lemon juice Notes from What’s Cooking America – If cream of tartar is used along with baking soda in a cake or cookie recipe, omit both and use baking powder instead. If it calls for baking soda and cream of tarter, just use baking powder.Normally, when cream of tartar is used in a cookie, it is used together with baking soda. The two of them combined work like double-acting baking powder. 
When substituting for cream of tartar, you must also substitute for the baking soda. If your recipe calls for baking soda and cream of tarter, just use baking powder. One teaspoon baking powder is equivalent to 1/4 teaspoon baking soda plus 5/8 teaspoon cream of tartar. If there is additional baking soda that does not fit into the equation, simply add it to the batter. Buttermilk (1 cup) 1 tablespoon lemon juice or vinegar (white or cider) plus enough milk to make 1 cup (let stand 5-10 minutes) 1 cup plain or low fat yogurt 1 cup sour cream 1 cup water plus 1/4 cup buttermilk powder 1 cup milk plus 1 1/2 – 1 3/4 teaspoons cream of tartar Plain Yogurt (1 cup) 1 cup sour cream 1 cup buttermilk 1 cup crème fraiche 1 cup heavy whipping cream (35% butterfat) plus 1 tablespoon freshly squeezed lemon juice Whole Milk (1 cup) 1 cup fat free milk plus 1 tbsp unsaturated Oil like canola (HV) 1 cup low fat milk (HV) Heavy Cream (1 cup) 3/4 cup milk plus 1/3 cup melted butter.(whipping wont work) Sour Cream (1 cup) (pls refer also Substitutes for Fats in Baking below) 7/8 cup buttermilk or sour milk plus 3 tablespoons butter. 1 cup thickened yogurt plus 1 teaspoon baking soda. 3/4 cup sour milk plus 1/3 cup butter. 3/4 cup buttermilk plus 1/3 cup butter. Cooked sauces: 1 cup yogurt plus 1 tablespoon flour plus 2 teaspoons water. Cooked sauces: 1 cup evaporated milk plus 1 tablespoon vinegar or lemon juice. Let stand 5 minutes to thicken. Dips: 1 cup yogurt (drain through a cheesecloth-lined sieve for 30 minutes in the refrigerator for a thicker texture). Dips: 1 cup cottage cheese plus 1/4 cup yogurt or buttermilk, briefly whirled in a blender. Dips: 6 ounces cream cheese plus 3 tablespoons milk,briefly whirled in a blender. Lower fat: 1 cup low-fat cottage cheese plus 1 tablespoon lemon juice plus 2 tablespoons skim milk, whipped until smooth in a blender. Lower fat: 1 can chilled evaporated milk whipped with 1 teaspoon lemon juice. 1 cup plain yogurt plus 1 tablespoon cornstarch 1 cup plain nonfat yogurt Substitutes for Fats in Baking * * (HV) denoted Healthy Version for low fat or fat free substitution in Baking Butter (1 cup) 1 cup trans-free vegetable shortening 3/4 cups of vegetable oil (example. Canola oil) Fruit purees (example- applesauce, pureed prunes, baby-food fruits). Add it along with some vegetable oil and reduce any other sweeteners needed in the recipe since fruit purees are already sweet. 1 cup polyunsaturated margarine (HV) 3/4 cup polyunsaturated oil like safflower oil (HV) 1 cup mild olive oil (not extra virgin)(HV) Note: Butter creates the flakiness and the richness which an oil/purees cant provide. If you don’t want to compromise that much to taste, replace half the butter with the substitutions. Shortening(1 cup) 1 cup polyunsaturated margarine like Earth Balance or Smart Balance(HV) 1 cup + 2tbsp Butter ( better tasting than shortening but more expensive and has cholesterol and a higher level of saturated fat; makes cookies less crunchy, bread crusts more crispy) 1 cup + 2 tbsp Margarine (better tasting than shortening but more expensive; makes cookies less crunchy, bread crusts tougher) 1 Cup – 2tbsp Lard (Has cholesterol and a higher level of saturated fat) Oil equal amount of apple sauce stiffly beaten egg whites into batter equal parts mashed banana equal parts yogurt prune puree grated raw zucchini or seeds removed if cooked. 
Works well in quick breads/muffins/coffee cakes and does not alter taste pumpkin puree (if the recipe can handle the taste change) Low fat cottage cheese (use only half of the required fat in the recipe). Can give rubbery texture to the end result Silken Tofu – (use only half of the required fat in the recipe). Can give rubbery texture to the end result Equal parts of fruit juice Note: Fruit purees can alter the taste of the final product is used in large quantities. Cream Cheese (1 cup) 4 tbsps. margarine plus 1 cup low-fat cottage cheese – blended. Add few teaspoons of fat-free milk if needed (HV) Heavy Cream (1 cup) 1 cup evaporated skim milk (or full fat milk) 1/2 cup low fat Yogurt plus 1/2 low fat Cottage Cheese (HV) 1/2 cup Yogurt plus 1/2 Cottage Cheese Sour Cream (1 cup) 1 cup plain yogurt (HV) 3/4 cup buttermilk or plain yogurt plus 1/3 cup melted butter 1 cup crème fraiche 1 tablespoon lemon juice or vinegar plus enough whole milk to fill 1 cup (let stand 5-10 minutes) 1/2 cup low-fat cottage cheese plus 1/2 cup low-fat or nonfat yogurt (HV) 1 cup fat-free sour cream (HV) Note: How to Make Maple Syrup Substitute at home For 1 Cup Maple Syrup 1/2 cup granulated sugar 1 cup brown sugar, firmly packed 1 cup boiling water 1 teaspoon butter 1 teaspoon maple extract or vanilla extract Method In a heavy saucepan, place the granulated sugar and keep stirring until it melts and turns slightly brown. Alternatively in another pan, place brown sugar and water and bring to a boil without stirring. Now mix both the sugars and simmer in low heat until they come together as one thick syrup. Remove from heat, add butter and the extract. Use this in place of maple syrup. Store it in a fridge in an air tight container. Even though this was posted in their site long back, I found it helpful. So posting it for you. via chefinyou . cc image credit: flickr/zetrules

    Read the article

  • Reflections based on distance from plane

    - by Andrea Benedetti
    Let's consider, for example, a surface like the volleyball court, we can see that legs and shoes of the players are reflected, with a blur effect, but body and stadium don't (as each object not near to the court). I've already made a reflection effect, but it works as a specular reflection, and I need to achieve an effect like the photo above. So, I would like to make a reflection that is based on the distance between the object and the plane, in this manner a close object would reflect more than an object that is positioned far away from the plane. What is the best way to achieve this effect? My first idea was to use the depth value (taken from the reflected camera), and use that value to blend between reflection and court. But I don't know if it's a correct way. Edit: as rendering engine I use Ogre that already provides a reflections system: reflecting the camera through a plane (obviously I can select the models to draw from the reflected camera). After a render to texture pass I can blend the reflected texture with the original plane. So, if possible, I'm looking for a way that best suits my system.

    Read the article
