Search Results

Search found 2086 results on 84 pages for 'pixel shader'.


  • Google Earth-Unsupported graphics card

    - by VIPaul
    I've just installed Google Earth on my PC, which runs Ubuntu 12.04 LTS. When I open Google Earth, a window pops up and says: "Unsupported Graphics Card. Your graphics card does not meet the minimum spec required to run Google Earth, which is a 3D accelerated card with shader support. It is strongly recommended that you try running Google Earth on a different machine or in a different rendering mode, or upgrade to a newer graphics card. You may continue, but the application is unlikely to work." Maybe you'll say "Buy a better graphics card!", but I used Google Earth on this machine a year ago, when I had Windows 7, and everything worked well, so my graphics card is good enough. Does the Linux version have higher requirements than the Windows one?

    Read the article

  • How to achieve a smoother lighting effect

    - by Cyral
    I'm making a tile-based game in XNA. Currently my lighting looks like this: How can I get it to look like this? Instead of each block having its own tint, it would have a smooth overlay. I'm assuming I need some sort of shader, and to pass it the lighting and blur it somehow, but I'm not an expert with shaders. My current lighting calculates the light and then passes it to a SpriteBatch, which draws with a color parameter. EDIT: It no longer uses the SpriteBatch tint; I was testing and now pass parameters to set the light values. But I'm still looking for a way to smooth it.

    Read the article

  • Artifacts when draw particles with some alpha

    - by piotrek
    I want to draw some particles in my game, but when I draw one particle above another, the alpha channel of the upper particle "clears" the previously drawn particle. I set up blending in OpenGL this way:

        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    My fragment shader for the particles is very simple:

        precision highp float;
        precision highp int;
        uniform sampler2D u_maps[2];
        varying vec2 v_texture;
        uniform float opaque;
        uniform vec3 colorize;
        void main() {
            vec4 texColor = texture2D(u_maps[0], v_texture);
            gl_FragColor.rgb = texColor.rgb * colorize.rgb;
            gl_FragColor.a = texColor.a * opaque;
        }

    I attach a screenshot of the result: Do you know what I did wrong? I use OpenGL ES 2.0.
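    A common cause of exactly this artifact is depth writes during blended drawing: a particle quad that writes depth occludes particles drawn later behind it, so its transparent texels show the clear color instead of the other particle. A minimal sketch of the usual particle draw state, assuming that is the case here (drawParticles() is a hypothetical stand-in for the questioner's existing draw calls):

        #include <GLES2/gl2.h>

        void drawParticles(); // hypothetical: the questioner's existing particle draw calls

        void drawBlendedParticles() {
            glEnable(GL_BLEND);
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
            glEnable(GL_DEPTH_TEST);   // still test against the opaque scene
            glDepthMask(GL_FALSE);     // but do not write depth for transparent particles
            drawParticles();
            glDepthMask(GL_TRUE);      // restore depth writes for the rest of the frame
        }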

    Read the article

  • How do I use unpackHalf2x16?

    - by user1032861
    I'm trying to use (un)packHalf2x16, without success so far. I'm drawing with:

        glVertexAttribIPointer(0, 2, GL_UNSIGNED_INT, 0, 0);
        glEnableVertexAttribArray(0);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glDrawArrays(GL_POINTS, 0, n_points);
        glDisableVertexAttribArray(0);

    and in the shader:

        #version 330 core
        #extension GL_ARB_shading_language_packing : require
        in uvec2 A0;
        // (...)
        vec4 t = vec4(unpackHalf2x16(A0.x), unpackHalf2x16(A0.y));

    But nothing gets drawn. I'm pretty sure the buffer's content is right, and if I use vec4 t = vec4(0); I can see it's working properly. How is this packing/unpacking supposed to work? I can't find any example.
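    One thing to check, independent of the packing itself: glVertexAttribIPointer captures whatever buffer is currently bound to GL_ARRAY_BUFFER, and in the code as posted the VBO is bound only after the pointer is set. A minimal sketch of the reordered draw call, under that assumption:

        #include <GL/glew.h>   // any GL 3.3 function loader works; GLEW is only an example

        void drawPoints(GLuint vbo, GLsizei n_points) {
            glBindBuffer(GL_ARRAY_BUFFER, vbo);            // bind first...
            glVertexAttribIPointer(0, 2, GL_UNSIGNED_INT,  // ...so the pointer refers to this VBO
                                   0, nullptr);
            glEnableVertexAttribArray(0);
            glDrawArrays(GL_POINTS, 0, n_points);
            glDisableVertexAttribArray(0);
        }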

    Read the article

  • Good resources for learning about graphics hardware

    - by Ken
    I'm looking for some good learning resources for graphics hardware (and the associated low-level software). Basically I want to learn more about what goes on underneath the OpenGL/DirectX API layers in terms of how things are implemented. I'm familiar with what happens in principle during the various stages of the rendering pipeline (viewing, projection, clipping, rasterization, etc.). My goal is to be able to make better and more informed decisions about trade-offs and potential optimisations when doing graphics/shader programming, with respect to issues such as: batching; view culling; occlusion; draw order; avoiding state changes; triangles vs. point sprites; texture sampling; and so on. Basically, whatever a graphics programmer needs to know about modern graphics hardware in order to become more effective. I'm not really looking for specific optimisation techniques; rather, I need more general knowledge so that I will naturally write more efficient code.

    Read the article

  • Phone complains that identical GLSL struct definition differs in vert/frag programs

    - by stephelton
    When I provide the following struct definition in linked fragment and vertex shaders, my phone (Samsung Vibrant / Android 2.2) complains that the definition differs.

        struct Light {
            mediump vec3 _position;
            lowp vec4 _ambient;
            lowp vec4 _diffuse;
            lowp vec4 _specular;
            bool _isDirectional;
            mediump vec3 _attenuation; // constant, linear, and quadratic components
        };
        uniform Light u_light;

    I know the struct is identical because it's included from another file. These shaders work on a Linux implementation and on my Android 3.0 tablet. Both shaders declare "precision mediump float;". The exact error is: "Uniform variable u_light type/precision does not match in vertex and fragment shader". Am I doing anything wrong here, or is my phone's implementation broken? Any advice (other than filing a bug report)?

    Read the article

  • Transparent parts of texture are opaque black instead

    - by Aaron
    I render a sprite twice, one on top of the other. The sprites have transparent parts, so I should be able to see the bottom sprite under the top sprite. Instead, the transparent parts are opaque black (the clear colour), and the topmost sprite blocks the bottom sprite. My fragment shader is trivial:

        uniform sampler2D texture;
        varying vec2 f_texcoord;
        void main() {
            gl_FragColor = texture2D(texture, f_texcoord);
        }

    I have glEnable(GL_BLEND) and glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) in my initialization code. My texture comes from a PNG file that I load with libpng. I'm sure to use GL_RGBA when initializing the texture with glTexImage2D (otherwise the sprites look like noise).

    Read the article

  • Is it only possible to display 64k vertices on the monitor with 16-bit indices?

    - by Aufziehvogel
    I did the first 3D tutorial over at riemers.net and discovered that my graphics card only supports Shader Model 2.0 (the Reach profile in XNA), which means I can only use Int16 to store the indices (triangle to vertex). This means I can only address 2^16 = 65536 vertices. I also read that you should prefer 16-bit over 32-bit indices because not all hardware (like mine) supports 32-bit. Yet I am wondering: do all game scenes really get by with so few vertices? I thought faces of people alone used a lot of polygons (which are made up of vertices). It's not relevant for me yet, but I am interested: Do game scenes use only 65536 vertices? Do you use some trade-off to display more (e.g. 64k in the GPU buffer, the rest in RAM)? Is there some method to get more into the GPU buffer? I have already read in other posts that there seems to be a limit of 64k per mesh too, so maybe you can split things into multiple meshes?
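    For context, the 64k figure is a limit on what one set of 16-bit indices can address in a single draw call, not a limit on the whole scene; larger meshes are usually split into batches that each stay under 65536 vertices. A rough sketch of that trade-off in OpenGL terms (only illustrative, since the question is about XNA; the batch layout and struct here are assumptions, and vertex attribute setup is omitted):

        #include <GL/glew.h>
        #include <vector>

        // Hypothetical per-batch buffers: each batch has its own VBO/IBO and
        // addresses fewer than 65536 vertices with 16-bit indices.
        struct Batch { GLuint vbo; GLuint ibo; GLsizei indexCount; };

        void drawBatches(const std::vector<Batch>& batches) {
            for (const Batch& b : batches) {
                glBindBuffer(GL_ARRAY_BUFFER, b.vbo);
                glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, b.ibo);
                // vertex attribute pointers would be set here, after the ARRAY_BUFFER bind
                glDrawElements(GL_TRIANGLES, b.indexCount, GL_UNSIGNED_SHORT, nullptr);
            }
        }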

    Read the article

  • How many textures can I usually bind at once?

    - by Avi
    I'm developing a game engine, and it's only going to work on modern (Shader model 4+) hardware. I figure that, by the time I'm done with it, that won't be such an unreasonable requirement. My question is: how many textures can I bind at once on a modern graphics card? 16 would be sufficient. Can I expect most modern graphics cards to support that amount? My GTX 460 appears to support 32, but I have no idea if that's representative of most modern video cards.
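    For reference, in OpenGL terms (an assumption, since the question doesn't name an API) the actual limits can be queried at run time rather than guessed from specific cards; a minimal sketch:

        #include <GL/glew.h>
        #include <cstdio>

        // Requires a current GL context. Prints the per-fragment-stage and
        // whole-pipeline texture unit limits reported by the driver.
        void printTextureUnitLimits() {
            GLint fragmentUnits = 0, combinedUnits = 0;
            glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &fragmentUnits);
            glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &combinedUnits);
            std::printf("fragment-stage units: %d, combined across stages: %d\n",
                        fragmentUnits, combinedUnits);
        }

    OpenGL 3.x-class drivers are required to report at least 16 for the fragment stage, so 16 simultaneous textures is a safe floor for the stated Shader Model 4+ target.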

    Read the article

  • Per fragment lighting with OpenGL 4.x tessellated model

    - by Finlaybob
    I'm experienced with OpenGL 3+. I'm dabbling with tessellation shaders and have got to the point where I have a nicely tessellated teapot/plane demo (quick look here). As can be seen from the screenshots, the lighting is broken (though admittedly it doesn't look too bad in the image). I've tried to add a normal map to the equation, but it still doesn't come out right; I can calculate the normals, tangents and binormals per triangle in the geometry shader, but it still looks wrong. I think the question is: how do I add per-fragment lighting to a tessellated model? The teapot is 32 16-point patches; the plane is a single 16-point patch. The shaders are here, but they are a complete mess, so I don't blame anyone who can't make sense of them. But peruse them at your leisure if you like. Also, if this question is better suited somewhere else, i.e. Stack Overflow or the Programmers stack, please let me know.

    Read the article

  • rotate opengl mesh relative to camera

    - by shuall
    I have a cube in OpenGL. Its position is determined by multiplying its specific model matrix, the view matrix, and the projection matrix, and then passing the result to the shader, as per this tutorial (http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/). I want to rotate it relative to the camera. The only way I can think of to get the correct axis is by multiplying the inverse of the model matrix (because that's where all the previous rotations and transforms are stored) times the view matrix times the axis of rotation (x or y). I feel like there has to be a better way to do this, like using something other than the model, view and projection matrices, or maybe I'm doing something wrong; those are what all the tutorials I've seen use. PS: I'm also trying to stick with OpenGL 4 core profile features. Edit: if quaternions would fix my problem, could someone point me to a good tutorial/example for switching from 4x4 matrices to quaternions? I'm a little daunted by the task.
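    A sketch of one way to do this with GLM (the math library used by the linked tutorial): express the camera-space rotation axis in the model's local space using the inverse of view * model, then rotate the model matrix about that local axis. The function name and signature are illustrative, not from the question:

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        // Rotate 'model' about an axis that is fixed in camera space (e.g. the
        // camera's X or Y axis), passing through the model's local origin.
        glm::mat4 rotateRelativeToCamera(const glm::mat4& model, const glm::mat4& view,
                                         float angleRadians, const glm::vec3& cameraAxis) {
            glm::vec3 localAxis = glm::vec3(glm::inverse(view * model) *
                                            glm::vec4(cameraAxis, 0.0f));
            return glm::rotate(model, angleRadians, glm::normalize(localAxis));
        }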

    Read the article

  • Microsoft XNA code sample won't work with Blender model

    - by FreakinaBox
    I downloaded this code sample and integrated it into my game: http://xbox.create.msdn.com/en-US/education/catalog/sample/mesh_instancing It works with the model they supplied, but throws an exception whenever I use one of my models: "The current vertex declaration does not include all the elements required by the current vertex shader. TextureCoordinate0 is missing." I tried plugging my model into their original source code and got the same thing. My model is an FBX exported from Blender and has a texture. This is the call that throws the error:

        GraphicsDevice.DrawInstancedPrimitives(
            PrimitiveType.TriangleList, 0, 0,
            meshPart.NumVertices, meshPart.StartIndex,
            meshPart.PrimitiveCount, instances.Length);

    Read the article

  • OpenGL ES 2.0. Sprite Sheet Animation

    - by Project Dumbo Dev
    I've found a bunch of tutorials on how to make this work in OpenGL 1.0 and 1.1, but I can't find one for 2.0. I would approach it by loading the texture and using a matrix in the vertex shader to move through the sprite sheet. I'm looking for the most efficient way to do it. I've read that when you do what I'm proposing you end up constantly changing the VBOs, and that that is not good. Edit: I've been doing some research myself and came upon two approaches: updating the texture, and, in the earlier one, PBOs. I can't use PBOs since I'm using the ES version of OpenGL, so I suppose the best way is to use FBOs. What I still don't get is whether I should create a sprite atlas/batch and make an FBO/texture load for each frame, or whether I should load every frame into the buffer and change just the texture coordinates.
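    The VBO does not have to change at all if the frame selection happens through a uniform: the CPU picks the frame's UV offset and the vertex shader adds it to the static texture coordinates. A minimal sketch, assuming a grid-layout sprite sheet and a vec2 uniform named u_frameOffset in the vertex shader (both are assumptions, not from the question):

        #include <GLES2/gl2.h>

        // Select a frame of a columns x rows sprite sheet by uploading its UV offset.
        // The vertex shader would then do:
        //     v_texture = a_texture / vec2(columns, rows) + u_frameOffset;
        void setFrame(GLuint program, int frame, int columns, int rows) {
            float u = (frame % columns) / static_cast<float>(columns);
            float v = (frame / columns) / static_cast<float>(rows);
            glUseProgram(program);
            glUniform2f(glGetUniformLocation(program, "u_frameOffset"), u, v);
        }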

    Read the article

  • What is an achievable way of setting content budgets (e.g. polygon count) for level content in a 3D title?

    - by MrCranky
    In answering this question for swquinn, the answer raised a more pertinent question that I'd like to hear answers to. I'll post our own strategy (I promise I won't accept it as the answer), but I'd like to hear others. Specifically: how do you go about setting a sensible budget for your content team? Usually one of the very first questions asked in development is: what's our polygon budget? Of course, these days it's rare that vertex/poly count alone is the limiting factor; instead, shader complexity, fill rate and lighting complexity all come into play. What the content team want are some hard numbers/limits to work to, such that they have a reasonable expectation that their content, once it actually gets into the engine, will not be too heavy. Given that 'it depends' isn't a particularly useful answer, I'd like to hear a strategy that allows me to give them workable limits without being a) misleading or b) wrong.

    Read the article

  • Write depth buffer to texture

    - by innochenti
    I need to read the depth buffer from the GPU and write it to a texture. How can this be done? Here is how the texture for the depth buffer is created:

        depthBufferDesc.Width = screenWidth;
        depthBufferDesc.Height = screenHeight;
        depthBufferDesc.MipLevels = 1;
        depthBufferDesc.ArraySize = 1;
        depthBufferDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
        depthBufferDesc.SampleDesc.Count = 1;
        depthBufferDesc.SampleDesc.Quality = 0;
        depthBufferDesc.Usage = D3D10_USAGE_DEFAULT;
        depthBufferDesc.BindFlags = D3D10_BIND_DEPTH_STENCIL;
        depthBufferDesc.CPUAccessFlags = 0;
        depthBufferDesc.MiscFlags = 0;
        m_device->CreateTexture2D(&depthBufferDesc, NULL, &m_depthStencilBuffer);

    Also, I've got another question: is it possible to bind the depth buffer texture as a sampler in the pixel shader?
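    On the second question: yes, in D3D10 a depth buffer can be sampled in the pixel shader if the underlying texture is created with a typeless format and both bind flags, then given separate depth-stencil and shader-resource views. A sketch continuing the code above (the m_depthStencilView / m_depthSRV member names are illustrative, not from the question):

        depthBufferDesc.Format    = DXGI_FORMAT_R24G8_TYPELESS;
        depthBufferDesc.BindFlags = D3D10_BIND_DEPTH_STENCIL | D3D10_BIND_SHADER_RESOURCE;
        m_device->CreateTexture2D(&depthBufferDesc, NULL, &m_depthStencilBuffer);

        D3D10_DEPTH_STENCIL_VIEW_DESC dsvDesc = {};
        dsvDesc.Format        = DXGI_FORMAT_D24_UNORM_S8_UINT;           // how the depth pass sees it
        dsvDesc.ViewDimension = D3D10_DSV_DIMENSION_TEXTURE2D;
        m_device->CreateDepthStencilView(m_depthStencilBuffer, &dsvDesc, &m_depthStencilView);

        D3D10_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
        srvDesc.Format              = DXGI_FORMAT_R24_UNORM_X8_TYPELESS; // how the sampler sees it
        srvDesc.ViewDimension       = D3D10_SRV_DIMENSION_TEXTURE2D;
        srvDesc.Texture2D.MipLevels = 1;
        m_device->CreateShaderResourceView(m_depthStencilBuffer, &srvDesc, &m_depthSRV);

    Note that the shader-resource view can only be bound for sampling while the depth-stencil view is not bound as the active depth target.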

    Read the article

  • How can I achieve a smooth 2D lighting effect?

    - by Cyral
    I'm making a tile-based game in XNA. Currently my lighting looks like this: How can I get it to look like this? Instead of each block having its own tint, it would have a smooth overlay. I'm assuming I need some sort of shader, and to pass it the lighting and blur it somehow, but I'm not an expert with shaders. My current lighting calculates the light and then passes it to a SpriteBatch, which draws with a color parameter. EDIT: It no longer uses the SpriteBatch tint; I was testing and now pass parameters to set the light values. But I'm still looking for a way to smooth it.

    Read the article

  • Multiply mode in SpriteBatch

    - by ashes999
    I have a "lighting" texture (black background with white or colours for lights) that I want to draw as a multiplcation operation. SpriteBatch.Begin can specify BlendState.Additive, but there's no BlendState.Multiplicative. I also tried the solution in this answer, but it didn't work -- even when I (incorrectly?) changed the code to work with XNA 4 style ColorDestinationBlend, I ended up with the final solution being inverted (black area where the light is, everything else is visible). I initially thought of a shader, but I couldn't get shaders to work with MonoGame, so I'm falling back to SpriteBatch.

    Read the article

  • Huge 2d pixelized world

    - by aspcartman
    I would like the game field in an indie 2D strategy game to look something like this popular picture: http://0.static.wix.com/media/6a83ae_cd307e45ffd9c6b145237263ac1a86be.jpg_1024 Every "pixel" (block) changes its color slowly, sometimes a bright color wave happens, etc., but the spaces between these pixels should stay dark (not counting shading, lighting and other effects going on). Units are going to be "pixelized" in the same way and should position themselves according to those blocks. I have some experience in game development, but this task doesn't seem trivial to me. What approaches (a shader, tons of sprites, rendering in code; I don't know) would you recommend I follow? (I'm thinking of making this game using the Unity engine.) Thanks, everyone! :)

    Read the article

  • CGBitmapContextCreate on the iPhone/iPad

    - by toastie
    Hello, I have a method that needs to parse through a bunch of large PNG images pixel by pixel (the PNGs are 600x600 pixels each). It seems to work great on the Simulator, but on the device (an iPad) I get an EXC_BAD_ACCESS in some internal memory-copying function. It seems the size is the culprit, because if I try it on smaller images everything seems to work. Here's the memory-related meat of the method:

        + (CGRect) getAlphaBoundsForUImage: (UIImage*) image {
            CGImageRef imageRef = [image CGImage];
            NSUInteger width = CGImageGetWidth(imageRef);
            NSUInteger height = CGImageGetHeight(imageRef);
            CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
            unsigned char *rawData = malloc(height * width * 4);
            memset(rawData, 0, height * width * 4);
            NSUInteger bytesPerPixel = 4;
            NSUInteger bytesPerRow = bytesPerPixel * width;
            NSUInteger bitsPerComponent = 8;
            CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                bitsPerComponent, bytesPerRow, colorSpace,
                kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
            CGColorSpaceRelease(colorSpace);
            CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
            CGContextRelease(context);
            /* non-memory related stuff */
            free(rawData);

    When I run this on a bunch of images, it runs 12 times and then craps out, while on the simulator it runs with no problem. Do you guys have any ideas?

    Read the article

  • Interpolation and morphing of an image in LabVIEW and/or OpenCV

    - by Marc
    I am working on an image manipulation problem. I have an overhead projector that projects onto a screen, and I have a camera that takes pictures of that. I can establish a 1:1 correspondence between a subset of projector coordinates and a subset of camera pixels by projecting dots on the screen and finding the centers of mass of the resulting regions on the camera. I thus have a map proj_x, proj_y <-- cam_x, cam_y for scattered point pairs. My original plan was to regularize this map using the MathScript function griddata. This would work fine in MATLAB, as follows:

        [pgridx, pgridy] = meshgrid(allprojxpts, allprojypts);
        fitcx = griddata(proj_x, proj_y, cam_x, pgridx, pgridy);
        fitcy = griddata(proj_x, proj_y, cam_y, pgridx, pgridy);

    and the reverse for the camera-to-projector mapping. Unfortunately, this code causes LabVIEW to run out of memory on the meshgrid step (the camera is 5 megapixels, which apparently is too much for LabVIEW to handle). I then started looking through OpenCV and found the cvRemap function. Unfortunately, this function takes as its starting point a regularized pixel-to-pixel map like the one I was trying to generate above. However, it made me hope that functions for creating such a map might be available in OpenCV. I couldn't find one in the OpenCV 1.0 API (I am stuck with 1.0 for legacy reasons), but I was hoping it's there or that someone has an easy trick. So my question is one of the following: 1) How can I interpolate from scattered points to a grid in OpenCV, i.e., given z = f(x,y) for scattered values of x and y, how do I fill an image with f(im_x, im_y)? 2) How can I perform an image transform that maps image 1 to image 2, given that I know a scattered mapping of points in coordinate system 1 to coordinate system 2? This could be implemented in either LabVIEW or OpenCV. Note: I am tagging this post delaunay, because that's one method of doing a scattered interpolation, but the better tag would be "scattered interpolation".

    Read the article

  • Eigenvector computation using OpenCV

    - by Andriyev
    Hi, I have a matrix A representing similarities of pixel intensities of an image. For example, consider a 10 x 10 image: matrix A would then have dimension 100 x 100, and element A(i,j) would have a value in the range 0 to 1, representing the similarity of pixel i to pixel j in terms of intensity. I am using OpenCV for image processing and the development environment is C on Linux. The objective is to compute the eigenvectors of matrix A, and I have used the following approach:

        static CvMat mat, *eigenVec, *eigenVal;
        static double A[100][100] = {}, Ain1D[10000] = {};
        int cnt = 0;

        // Converting matrix A into a one-dimensional array
        // Reason: that is how cvMat requires it
        for (i = 0; i < affnDim; i++) {
            for (j = 0; j < affnDim; j++) {
                Ain1D[cnt++] = A[i][j];
            }
        }

        mat = cvMat(100, 100, CV_32FC1, Ain1D);
        cvEigenVV(&mat, eigenVec, eigenVal, 1e-300);

        for (i = 0; i < 100; i++) {
            val1 = cvmGet(eigenVal, i, 0);           // Fetching eigenvalue i
            for (j = 0; j < 100; j++) {
                matX[i][j] = cvmGet(eigenVec, i, j); // Fetching each component of eigenvector i
            }
        }

    Problem: after execution I get nearly all components of all the eigenvectors to be zero. I tried different images and also tried populating A with random values between 0 and 1, but got the same result. A few of the top eigenvalues returned look like the following:

        9805401476911479666115491135488.000000
        -9805401476911479666115491135488.000000
        -89222871725331592641813413888.000000
        89222862280598626902522986496.000000
        5255391142666987110400.000000

    I am now thinking along the lines of using cvSVD(), which performs singular value decomposition of a real floating-point matrix and might yield the eigenvectors. But before that I thought of asking here. Is there anything absurd in my current approach? Am I using the right API, i.e. cvEigenVV(), for the right input matrix (my matrix A is a floating-point matrix)? cheers
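    Two things stand out in the snippet as posted, sketched below under the assumption that they are the cause: the CvMat header is created as CV_32FC1 (32-bit float) over data that is actually double, and the eigenVec / eigenVal output matrices are never allocated. cvEigenVV also overwrites its input, so a copy is needed if A is used afterwards:

        #include <cv.h>      // OpenCV 1.0 C API
        #include <cfloat>

        void computeEigen() {
            // 100 x 100 case from the question; CV_32FC1 data must really be float.
            static float Ain1D[100 * 100];

            CvMat  mat      = cvMat(100, 100, CV_32FC1, Ain1D);
            CvMat* eigenVec = cvCreateMat(100, 100, CV_32FC1); // one eigenvector per row
            CvMat* eigenVal = cvCreateMat(100, 1,   CV_32FC1); // eigenvalues, descending order
            cvEigenVV(&mat, eigenVec, eigenVal, DBL_EPSILON);  // note: overwrites 'mat'

            // ... use cvmGet(eigenVal, i, 0) and cvmGet(eigenVec, i, j) as before ...

            cvReleaseMat(&eigenVec);
            cvReleaseMat(&eigenVal);
        }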

    Read the article

  • Tiff Analyzer

    - by Kevin
    I am writing a program to convert some data, mainly a bunch of TIFF images. Some of the TIFFs seem to have a minor problem with them. They show up fine in some viewers (IrfanView, the client's old system) but not in others (the client's new system, Windows Picture and Fax Viewer). I have manually looked at the binary data and all the tags seem OK. Can anyone recommend an app that can analyze them and tell me what, if anything, is wrong? Also, for clarity's sake, I'm only converting the data about the images, which is stored separately in a database, and copying the images; I'm not editing the images myself, so I'm pretty sure I'm not messing them up. UPDATE: For anyone interested, here are the tags from a bad and a good file:

        BAD
        Tag  Name                  Type      Length  Value
        256  Image Width           SHORT     1       1652
        257  Image Length          SHORT     1       704
        258  Bits Per Sample       SHORT     1       1
        259  Compression           SHORT     1       4
        262  Photometric           SHORT     1       0
        266  Fill Order            SHORT     1       1
        273  Strip Offsets         LONG      1       210 (d2 Hex)
        274  Orientation           SHORT     1       3
        277  Samples Per Pixel     SHORT     1       1
        278  Rows Per Strip        SHORT     1       450
        279  Strip Byte Counts     LONG      1       7264 (1c60 Hex)
        282  X Resolution          RATIONAL  1       <194 200 / 1 = 200.000
        283  Y Resolution          RATIONAL  1       <202 200 / 1 = 200.000
        284  Planar Configuration  SHORT     1       1
        296  Resolution Unit       SHORT     1       2

        GOOD
        Tag  Name                  Type      Length  Value
        254  New Subfile Type      LONG      1       0 (0 Hex)
        256  Image Width           SHORT     1       1193
        257  Image Length          SHORT     1       788
        258  Bits Per Sample       SHORT     1       1
        259  Compression           SHORT     1       4
        262  Photometric           SHORT     1       0
        266  Fill Order            SHORT     1       1
        270  Image Description     ASCII     45      256
        273  Strip Offsets         LONG      1       1118 (45e Hex)
        274  Orientation           SHORT     1       1
        277  Samples Per Pixel     SHORT     1       1
        278  Rows Per Strip        LONG      1       788 (314 Hex)
        279  Strip Byte Counts     LONG      1       496 (1f0 Hex)
        280  Min Sample Value      SHORT     1       0
        281  Max Sample Value      SHORT     1       1
        282  X Resolution          RATIONAL  1       <301 200 / 1 = 200.000
        283  Y Resolution          RATIONAL  1       <309 200 / 1 = 200.000
        284  Planar Configuration  SHORT     1       1
        293  Group 4 Options       LONG      1       0 (0 Hex)
        296  Resolution Unit       SHORT     1       2

    Read the article

  • Deleting HBITMAP causes an access violation at runtime

    - by Oliver
    Hi, I have the following code to take a screenshot of a window and get the colour of a specific pixel in it:

        void ProcessScreenshot(HWND hwnd) {
            HDC WinDC;
            HDC CopyDC;
            HBITMAP hBitmap;
            RECT rt;
            GetClientRect(hwnd, &rt);
            WinDC = GetDC(hwnd);
            CopyDC = CreateCompatibleDC(WinDC);

            // Create a bitmap compatible with the DC
            hBitmap = CreateCompatibleBitmap(WinDC,
                                             rt.right - rt.left,   // width
                                             rt.bottom - rt.top);  // height
            SelectObject(CopyDC, hBitmap);
            BitBlt(CopyDC,               // destination
                   0, 0,
                   rt.right - rt.left,   // width
                   rt.bottom - rt.top,   // height
                   WinDC,                // source
                   0, 0, SRCCOPY);

            COLORREF col = ::GetPixel(CopyDC, 145, 293);
            // Do some stuff with the pixel colour....

            delete hBitmap;
            ReleaseDC(hwnd, WinDC);
            ReleaseDC(hwnd, CopyDC);
        }

    The line 'delete hBitmap;' causes a runtime error: an access violation. I guess I can't just delete it like that? Because bitmaps take up a lot of space, if I don't get rid of it I will end up with a huge memory leak. My question is: does releasing the DC the HBITMAP is from deal with this, or does it stick around even after I have released the DC? If the latter is the case, how do I correctly get rid of the HBITMAP?
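    For reference, a sketch of the usual GDI cleanup for this pattern (reusing the question's variables): an HBITMAP is a GDI handle, not a C++ pointer, so it is freed with DeleteObject rather than delete, and a DC created with CreateCompatibleDC is destroyed with DeleteDC, not ReleaseDC. Releasing the window DC does not free the bitmap. It is also good practice to restore the bitmap originally selected into the memory DC before deleting:

        HGDIOBJ oldBitmap = SelectObject(CopyDC, hBitmap); // remember what was selected
        // ... BitBlt / GetPixel as in the question ...
        SelectObject(CopyDC, oldBitmap); // deselect hBitmap before deleting it
        DeleteObject(hBitmap);           // frees the bitmap and its pixel memory
        DeleteDC(CopyDC);                // memory DC created with CreateCompatibleDC
        ReleaseDC(hwnd, WinDC);          // window DC obtained with GetDC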

    Read the article

  • Qt - QImage and multi-threading problem

    - by umanga
    Greetings all, please refer to the image at http://i48.tinypic.com/316qb78.jpg. We are developing an application to extract cell edges from MRC images from an electron microscope. The MRC file format stores volumetric pixel data (http://en.wikipedia.org/wiki/Voxel), and we simply use a 3D char array (char***) to load and store the data (grayscale values) from an MRC file. As shown in the image, there are 3 viewers to display the XY, YZ and ZX planes respectively. Scrollbars on top of the viewers are used to change the image slice along an axis. Here are the steps we perform when the user changes the scrollbar position: 1) get the new scrollbar value (this is the selected slice); 2) for the relevant plane (YZ, XY or ZX), generate a char* slice array for the selected slice by reading the 3D char array; 3) create a new QImage* (Format_RGB888) and set its pixel values by reading the slice (using img->setPixel(x, y, c)); 4) paint this new QImage* in the paintEvent() method. We are going to execute the edge-detection process in a separate thread since it is an intensive process. During this process we need to draw the detected curve (a set of pixels) on top of the above QImage* (as a layer). This means we need to call drawPoint() methods outside the Qt GUI thread. Is QImage the best choice for this case? What is the best way to execute Qt drawing methods from another thread? Thanks in advance.
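    A sketch of one common arrangement (the class and signal names are illustrative, not from the question): QPainter is allowed to paint onto a QImage in a worker thread; only widgets must be painted from the GUI thread. So the worker can draw the detected curve into its own transparent overlay image and hand it back through a queued signal for paintEvent() to composite over the slice:

        #include <QtGui>

        class EdgeWorker : public QThread {
            Q_OBJECT
        public:
            explicit EdgeWorker(const QImage& slice) : m_slice(slice) {}
        signals:
            void overlayReady(QImage overlay); // delivered to the GUI thread (queued)
        protected:
            void run() {
                QImage overlay(m_slice.size(), QImage::Format_ARGB32);
                overlay.fill(0);                   // fully transparent
                QPainter painter(&overlay);
                painter.setPen(Qt::red);
                // ... run edge detection on m_slice and call
                //     painter.drawPoint(x, y) for each detected curve pixel ...
                emit overlayReady(overlay);
            }
        private:
            QImage m_slice;
        };

    The viewer would connect overlayReady() to a slot that stores the overlay and calls update(); paintEvent() then draws the slice image first and the overlay on top.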

    Read the article

  • NSString sizeWithFont: returning inconsistent results? known bug?

    - by Olof Hedman
    I'm trying to create a simple custom UIView which contains a string drawn with a single font, but where the first character is slightly larger. I thought this would be easily implemented with two UILabels placed next to each other. I use NSString sizeWithFont: to measure my string so that I can lay it out correctly, but I noticed that the font baseline in the returned rectangle varies by +/- 1 pixel depending on the font size I set. Here is my code:

        NSString* ctxt = [text substringToIndex:1];
        NSString* ttxt = [text substringFromIndex:1];

        CGSize sz = [ctxt sizeWithFont: cfont];
        clbl = [[UILabel alloc] initWithFrame:CGRectMake(0, 0, sz.width, sz.height)];
        clbl.text = ctxt;
        clbl.font = cfont;
        clbl.backgroundColor = [UIColor clearColor];
        [contentView addSubview:clbl];

        CGSize sz2 = [ttxt sizeWithFont: tfont];
        tlbl = [[UILabel alloc] initWithFrame:CGRectMake(sz.width, (sz.height - sz2.height), sz2.width, sz2.height)];
        tlbl.text = ttxt;
        tlbl.font = tfont;
        tlbl.backgroundColor = [UIColor clearColor];
        [contentView addSubview:tlbl];

    If I use 12.0 and 14.0 as sizes, it works fine. But if I instead use 13.0 and 15.0, then the first character is 1 pixel too high. Is this a known problem? Any suggestions on how to work around it? Creating a UIWebView with a CSS and HTML page seems like overkill for this, and more work to handle dynamic strings. Is that what I'm expected to do?

    Read the article
