Search Results

Search found 11395 results on 456 pages for 'intel integrated graphics'.


  • Java: VolatileImage slower than BufferedImage

    - by Norswap
    I'm making a game in Java and used BufferedImages to render content to the screen. I had performance issues on the low-end machines where the game is supposed to run, so I switched to VolatileImage, which is normally faster. Except it actually slows the whole thing down. The images are created with GraphicsConfiguration.createCompatibleVolatileImage(...) and are drawn to the screen with Graphics.drawImage(...) (follow link to see which one specifically). They are drawn onto a Canvas using double buffering. Does anyone have an idea of what is going wrong here?

    Read the article

  • Efficiently draw a grid in Windows Forms

    - by Joel
    I'm writing an implementation of Conway's Game of Life in C#. This is the code I'm using to draw the grid; it's in my panel_Paint event, and g is the graphics context.

        for (int y = 0; y < numOfCells * cellSize; y += cellSize)
        {
            for (int x = 0; x < numOfCells * cellSize; x += cellSize)
            {
                g.DrawLine(p, x, 0, x, y + numOfCells * cellSize);
                g.DrawLine(p, 0, x, y + size * drawnGrid, x);
            }
        }

    When I run my program, it is unresponsive until it finishes drawing the grid, which takes a few seconds at numOfCells = 100 and cellSize = 10. Removing all the multiplication makes it faster, but not by much. Is there a better/more efficient way to draw my grid? Thanks
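    For comparison, a minimal sketch of the kind of change usually suggested here (an illustration, not from the original post; it assumes the same Pen p, Graphics g and a square grid, and the names extent and pos are made up): draw each grid line exactly once, outside the nested loops, so the number of DrawLine calls drops from roughly 2 * numOfCells * numOfCells to 2 * (numOfCells + 1).

        // Sketch only: p, g, numOfCells and cellSize as in the question are assumed to be in scope.
        int extent = numOfCells * cellSize;          // total width/height of the grid in pixels
        for (int i = 0; i <= numOfCells; i++)
        {
            int pos = i * cellSize;
            g.DrawLine(p, pos, 0, pos, extent);      // one vertical line per column boundary
            g.DrawLine(p, 0, pos, extent, pos);      // one horizontal line per row boundary
        }

    Drawing the grid once into a cached bitmap and blitting that bitmap in the Paint handler is another common way to keep the UI responsive.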

    Read the article

  • Looking for a mobile platform to view vector data and use it like a simple map

    - by Orchestrator
    I would like to develop or use an existing platform that will allow me to view custom vector data and use it as a map on mobile phones such as Android/iPhone (maybe even WP7). I'm hoping that there's already a good infrastructure for what I need, so I wouldn't have to develop the whole thing myself. In conclusion: is there any existing platform that may answer my needs? If not, how would you suggest I begin? How should I store my vector data? How could I read it? Should I render it with a graphics engine like OpenGL? Is there any chance this solution could be cross-platform? I know it's possible, since it has already been done with apps like Waze, which works the same on iOS and Android. Thanks!

    Read the article

  • How is the implicit segment register of a near pointer determined?

    - by Daniel Trebbien
    In section 4.3 of the Intel® 64 and IA-32 Architectures Software Developer's Manual, Volume 1: Basic Architecture, it says: A near pointer is a 32-bit offset ... within a segment. Near pointers are used for all memory references in a flat memory model or for references in a segmented model where the identity of the segment being accessed is implied. This leads me to wonder: how is the implied segment register determined? I know that (%eip) and displaced (%eip) (e.g. -4(%eip)) addresses use %cs by default, and that (%esp) and displaced (%esp) addresses use %ss, but what about (%eax), (%edx), (%edi), (%ebp), etc.? And can the implicit segment register also depend on the instruction that the memory address operand appears in?

    Read the article

  • Java SWT - dissolve (fade) from one image to the next.

    - by carillonator
    I'm pretty new to Java and SWT, and I'm hoping to have one image dissolve into the next. I have one image now in a Label (relevant code):

        Device dev = shell.getDisplay();
        try {
            Image photo = new Image(dev, "photo.jpg");
        } catch (Exception e) { }
        Label label = new Label(shell, SWT.IMAGE_JPEG);
        label.setImage(photo);

    Now I'd like photo to fade into a different image at a speed that I specify. Is this possible as set up here, or do I need to delve more into the org.eclipse.swt.graphics API? Also, this will be a slide show that may contain hundreds of photos, only ever moving forward (never back to a previous image). Considering this, is there something I explicitly need to do in order to remove the old images from memory? Thanks!!

    Read the article

  • Why do we need a normalized coordinate system?

    - by jcyang
    Hi, I have a problem understanding the following sentences in my textbook, Computer Graphics with OpenGL: "To make the viewing process independent of the requirements of any output device, graphics systems convert object descriptions to normalized coordinates and apply the clipping routines." Why do normalized coordinates make the viewing process independent of the requirements of any output device? Aren't the projection coordinates already independent of the output device? We only need to first scale and then translate the projection coordinates to get device coordinates, so why do we need to convert the projection coordinates to normalized coordinates first? "Clipping is usually performed in normalized coordinates. This allows us to reduce computations by first concatenating the various transformation matrices." Why is clipping usually performed in normalized coordinates? What kinds of transformations are concatenated? Thanks. jcyang.
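    For what it's worth, a small worked equation (one common formulation, not quoted from the textbook; the matrix names N, P, V are just labels for the normalization, projection and viewing transforms) may make the second quote concrete. Because normalization is itself a matrix, it can be concatenated with the viewing and projection matrices into a single transform applied once per vertex, and clipping is then always done against the same fixed volume regardless of the device:

        \[ v_{ndc} = N \, P \, V \, v_{world}, \qquad \text{clip against } -1 \le x, y, z \le 1 \]

    Only after clipping does the device-specific viewport transform map the surviving normalized coordinates to device coordinates, which is why the earlier stages never need to know the output resolution.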

    Read the article

  • Will the Driver Support for Intel HD Graphics be Improved in 12.10?

    - by Hiranya
    I recently installed Ubuntu 12.04 on an HP Pavilion dv4 laptop. This is a Core i7 machine with Intel HD graphics and also a separate nVidia VGA card. I had a lot of issues getting Ubuntu 12.04 working on this system. First there were issues booting up the live CD for installation; I worked around that by using the 'nomodeset' option. Then I continued to have similar issues after installation had completed, so I had to permanently add the nomodeset option to my GRUB boot configuration. At the moment I have a working installation, but there are many issues:

    - The Ubuntu GUI is a bit flaky at times. The mouse pointer goes on and off when hovering over certain icons, and certain things don't get rendered properly on the screen.
    - I can't access any of the tty consoles. Hitting Ctrl+Alt+F[1-6] gives me a blank screen, and once that happens I can't even come back to the UI by hitting Ctrl+Alt+F7. I've realized that the tty consoles are actually working; I just can't see the text. If I enter a command like 'sudo reboot' into the empty screen, the machine reboots.
    - I can't get external displays (monitors, projectors, etc.) working. But I think this is probably because the VGA out is wired to the nVidia card, which is not being used by Linux.
    - The colord program crashes every now and then, triggering a popup message.

    So my main question is: will the support for Intel HD graphics be improved in the next release, or will I have to keep using the nomodeset option in the new release too? I'd also appreciate it if anybody could shed some light on any of the issues listed above. Thanks in advance.

    Read the article

  • What to do about "system running in low-graphics mode"?

    - by ubuntubabe
    My Dell, which was 5 years old, suddenly karked it and I had the "low graphics" black screen and useless dialogue box. As I believed it was a dead graphics card, I went out and bought a brand new machine. I put the new machine aside and tried again, in vain, to get the Dell working. I eventually got to the command line via Ctrl+Alt+F1, logged into my account from there, and simply started a series of sudo apt-get remove commands for various pieces of software that I knew were installed on my PC (software without any great consequence, like Google Earth, tweak, Skype, etc.). Lo and behold, after a sudo reboot my computer was fine again! So now I have 2 computers. BUT one week after buying the other one and installing 12.04, because I love Ubuntu, the SAME PROBLEM arrived! I once again deleted Google Earth and Skype, did a sudo reboot, and everything worked as before. I think there is a bug or something in 12.04, as this problem has never arisen with any other version of Ubuntu.

    Read the article

  • How to use Windows login for single-sign-on and for Active Directory entries for Desktop Java applications

    - by Touko
    I'd like my desktop Java application to have single sign-on tied to Active Directory users. In two steps, I'd like to: (1) be sure that the particular user has logged in to Windows with some user entry, and (2) look up some setup information for that user from Active Directory. With http://stackoverflow.com/questions/31394/java-programatic-way-to-determine-current-windows-user I can get the name of the current Windows user, but can I rely on that? I think System.getProperty("user.name") won't be secure enough? ("user.name" seems to be taken from environment variables, so I can't rely on it, I think?) Question http://stackoverflow.com/questions/390150/authenticating-against-active-directory-with-java-on-linux gives me authentication for a given name and password, but I'd like to authenticate based on the Windows logon. For the Active Directory access, LDAP would probably be the choice? I'm not totally sure I'm asking the right questions, but hopefully somebody has some ideas to point me forward.

    Read the article

  • Configure IIS7 to serve static content through the ASP.NET Runtime

    - by Anton Gogolev
    I searched high and low and still cannot find a definite answer. How do I configure IIS 7.0, or a web application in IIS, so that the ASP.NET runtime will handle all requests, including ones for static files like *.js, *.gif, etc.? What I'm trying to do is as follows. We have a kind of SaaSy site, which we can "skin" for every customer. "Skinning" means developing a custom master page and using a bunch of *.css and other images. Quite naturally, I'm using a VirtualPathProvider, which operates like this:

        public override System.Web.Hosting.VirtualFile GetFile(string virtualPath)
        {
            if (PhysicalFileExists(virtualPath))
            {
                var virtualFile = base.GetFile(virtualPath);
                return virtualFile;
            }
            if (VirtualFileExists(virtualPath))
            {
                var brandedVirtualPath = GetBrandedVirtualPath(virtualPath);
                var absolutePath = HttpContext.Current.Server.MapPath(brandedVirtualPath);
                Trace.WriteLine(string.Format("Serving '{0}' from '{1}'", brandedVirtualPath, absolutePath),
                    "BrandingAwareVirtualPathProvider");
                var virtualFile = new VirtualFile(brandedVirtualPath, absolutePath);
                return virtualFile;
            }
            return null;
        }

    The basic idea is as follows: we have a branding folder inside our webapp, which in turn contains folders for each "brand", with "brand" being equal to the host name. That is, requests to http://foo.example.com/ should use static files from branding/foo_example_com, whereas http://bar.example.com/ should use content from branding/bar_example_com. Now what I want IIS to do is to forward all requests for static files to the StaticFileHandler, which would then use this whole "infrastructure" and serve the correct files. However, try as I might, I cannot configure IIS to do this.

    Read the article

  • Problem using custom HttpHandler to process requests for both .aspx and non-extension pages in IIS7

    - by Noel
    I am trying to process both ".aspx" and non-extension page requests (i.e. both contact.aspx and /contact/) using a custom HttpHandler in IIS7. My handler works just fine in either one case or the other, but as soon as I try to process both cases, it only works for one. Please see the handlers snippet from my web.config below. If I keep only the mapping to "*.aspx", then all .aspx requests are processed correctly, but obviously extensionless requests won't work:

        <add name="AllPages.ASPX" path="*.aspx" verb="*" type="Test.PageHandlerFactory, Test" preCondition="" />

    If I change the mapping to "*", then all extensionless requests are processed correctly, but ".aspx" requests that should still be handled by this handler stop working. Note that I added the StaticFiles entry in order to process files that are on disk, like images, css, js, etc.

        <add name="WebResource" path="WebResource.axd" verb="GET" type="System.Web.Handlers.AssemblyResourceLoader" />
        <add name="StaticFiles" verb="GET,HEAD" path="*.*" type="System.Web.StaticFileHandler" resourceType="File" />
        <add name="AllPages" path="*" verb="*" type="Test.PageHandlerFactory, Test" preCondition="" />

    The crazy thing is that when I load an ".aspx" request (with the 2nd configuration shown), IIS7 gives a 404 not found error. The error also says that the request is processed by the StaticFiles handler. But I made sure to add resourceType="File" to the StaticFileHandler in order to avoid this; according to MS this means the request is only for "physical files on disk". Am I misreading/misinterpreting the "on disk" part? My .aspx file isn't on disk; that's why I want to use the handler in the first place.

    Read the article

  • Bastion - Indie Humble Bundle

    - by user68008
    I have downloaded Bastion for Ubuntu and installed it in the home folder normally. When executing "Games Bastion" nothing happens. Running Bastion directly from the installation folder results in the error below:

        Unhandled Exception: System.EntryPointNotFoundException: glProgramParameteri
          at (wrapper managed-to-native) OpenTK.Graphics.OpenGL.GL/Core:ProgramParameteri (uint,OpenTK.Graphics.OpenGL.AssemblyProgramParameterArb,int)
          at OpenTK.Graphics.OpenGL.GL.ProgramParameter (Int32 program, AssemblyProgramParameterArb pname, Int32 value) [0x00000] in <filename unknown>:0
          at Microsoft.Xna.Framework.Graphics.EffectPass.ApplyPass () [0x00000] in <filename unknown>:0
          at Microsoft.Xna.Framework.Graphics.Effect.DefineTechnique (System.String techniqueName, System.String passName, Int32 vertexIndex, Int32 fragmentIndex) [0x00000] in <filename unknown>:0
          at Microsoft.Xna.Framework.Graphics.SpriteEffect..ctor (Microsoft.Xna.Framework.Graphics.GraphicsDevice graphicsDevice) [0x00000] in <filename unknown>:0
          at Microsoft.Xna.Framework.Graphics.SpriteBatch..ctor (Microsoft.Xna.Framework.Graphics.GraphicsDevice graphicsDevice) [0x00000] in <filename unknown>:0
          at GSGE.ExceptionGame.LoadContent () [0x00000] in <filename unknown>:0
        <snip>

    I have tried some solutions from the internet, like adding the line below to OpenTK.dll.config:

        <dllmap os="linux" dll="libXi" target="libXi.so.6"/>

    This didn't help. I also tried running as sudo, and that didn't help. Some posts said that this might be a problem with Ubuntu's nouveau drivers, but I'm using the NVIDIA proprietary drivers.

        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=10.04
        DISTRIB_CODENAME=lucid
        DISTRIB_DESCRIPTION="Ubuntu 10.04.4 LTS"
        OpenGL vendor string: NVIDIA Corporation
        OpenGL renderer string: GeForce 9400 GT/PCI/SSE2
        OpenGL version string: 3.2.0 NVIDIA 195.36.24
        direct rendering: Yes

    Read the article

  • Is this a dual monitor reset bug?

    - by Tentresh
    My two displays are:

    - Intel GMA x4500 Laptop (1280x800 native resolution of the built-in display)
    - External display (1920x1080)

    A few minutes after I login to my dual monitor setup, it gets reset to mirror screens. If I restore the settings via the displays application, everything is fine. On each reset, the following messages are written into /var/log/Xorg.0.log:

        [ 60.852] (II) PM Event received: Capability Changed
        [ 60.852] I830PMEvent: Capability change
        [ 132.920] (II) intel(0): EDID vendor "SEC", prod id 12869
        [ 132.920] (II) intel(0): Printing DDC gathered Modelines:
        [ 132.920] (II) intel(0): Modeline "1280x800"x0.0 68.94 1280 1296 1344 1408 800 801 804 816 -hsync -vsync (49.0 kHz)
        [ 134.228] (II) intel(0): Allocated new frame buffer 1280x800 stride 5120, tiled

    Whereas right on startup or manual resolution reset, /var/log/Xorg.0.log reports the expected frame buffer allocation:

        [ 1562.382] (II) intel(0): EDID vendor "SEC", prod id 12869
        [ 1562.382] (II) intel(0): Printing DDC gathered Modelines:
        [ 1562.382] (II) intel(0): Modeline "1280x800"x0.0 68.94 1280 1296 1344 1408 800 801 804 816 -hsync -vsync (49.0 kHz)
        [ 1576.740] (II) intel(0): Allocated new frame buffer 3200x1080 stride 12800, tiled

    Is Ubuntu 12.04 not compatible with my video card? Can this be solved within Ubuntu? I like its interface, but manually fiddling with resolution on every login is not bearable.

    Read the article

  • Audio comes out of both headphone and speaker at the same time. Ubuntu 12.04 LTS [closed]

    - by pst007x
    I have the same issue on an Aspire: Ubuntu 12.04 LTS 64-bit, with an onboard Realtek audio chip. If I plug in a headset, audio does not switch from the internal speaker to the headset; instead it plays out of both at the same time. I have looked at the alsamixer settings, and they are all on. I installed gnome-alsamixer and noticed Headphone was ticked; if I untick it, the main audio mutes and the headphone no longer works. The headset only works together with the internal speaker. Audio works fine on my other desktop and laptop running this release.

        00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 03)
        salvatore@salvatore-Aspire-7730:~$ cat /proc/asound/version
        Advanced Linux Sound Architecture Driver Version 1.0.24.
        salvatore@salvatore-Aspire-7730:~$ head -n 1 /proc/asound/card*/codec#*
        ==> /proc/asound/card0/codec#0 <==
        Codec: Realtek ALC888
        ==> /proc/asound/card0/codec#1 <==
        Codec: LSI ID 1040
        ==> /proc/asound/card0/codec#2 <==
        Codec: Intel Cantiga HDMI
        salvatore@salvatore-Aspire-7730:~$ aplay -l
        **** List of PLAYBACK Hardware Devices ****
        card 0: Intel [HDA Intel], device 0: ALC888 Analog [ALC888 Analog]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 0: Intel [HDA Intel], device 1: ALC888 Digital [ALC888 Digital]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 0: Intel [HDA Intel], device 3: HDMI 0 [HDMI 0]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        salvatore@salvatore-Aspire-7730:~$ uname -a
        Linux salvatore-Aspire-7730 3.2.0-23-generic #36-Ubuntu SMP Tue Apr 10 20:39:51 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
        salvatore@salvatore-Aspire-7730:~$

    The alsa-base.conf file does not exist. Tried this:

        sudo apt-get remove --purge alsa-base
        sudo apt-get remove --purge pulseaudio
        sudo apt-get install alsa-base
        sudo apt-get install pulseaudio
        sudo alsa force-reload

    Then:

        sudo apt-get purge pulseaudio gstreamer0.10-pulseaudio
        sudo apt-get install pulseaudio gstreamer0.10-pulseaudio indicator-sound

    Tried this: open a terminal and edit the file as root with sudo gedit /etc/modprobe.d/alsa-base.conf, add a new line at the end of the file (options snd-hda-intel model=generic), save and then reboot. But alsa-base.conf does not exist.

    Read the article

  • Ubuntu 12.04 dual monitor reset bug

    - by Tentresh
    My two displays are:

    - Intel GMA x4500 Laptop (1280x800 native resolution of the built-in display)
    - External display (1920x1080)

    A few minutes after I login to my dual monitor setup, it gets reset to mirror screens. If I restore the settings via the displays application, everything is fine. On each reset, the following messages are written into /var/log/Xorg.0.log:

        [ 60.852] (II) PM Event received: Capability Changed
        [ 60.852] I830PMEvent: Capability change
        [ 132.920] (II) intel(0): EDID vendor "SEC", prod id 12869
        [ 132.920] (II) intel(0): Printing DDC gathered Modelines:
        [ 132.920] (II) intel(0): Modeline "1280x800"x0.0 68.94 1280 1296 1344 1408 800 801 804 816 -hsync -vsync (49.0 kHz)
        [ 134.228] (II) intel(0): Allocated new frame buffer 1280x800 stride 5120, tiled

    Whereas right on startup or manual resolution reset, /var/log/Xorg.0.log reports the expected frame buffer allocation:

        [ 1562.382] (II) intel(0): EDID vendor "SEC", prod id 12869
        [ 1562.382] (II) intel(0): Printing DDC gathered Modelines:
        [ 1562.382] (II) intel(0): Modeline "1280x800"x0.0 68.94 1280 1296 1344 1408 800 801 804 816 -hsync -vsync (49.0 kHz)
        [ 1576.740] (II) intel(0): Allocated new frame buffer 3200x1080 stride 12800, tiled

    Is Ubuntu 12.04 not compatible with my video card? Can this be solved within Ubuntu? I like its interface, but manually fiddling with resolution on every login is not bearable.

    Read the article

  • Reset or clear the UIView from UILabels

    - by Nicsoft
    Hello, I have created a UIView on which I have a number of labels, and I'm also drawing some lines in the same UIView. Sometimes I update the lines that I'm drawing. Now, the problem I am having is that when I update the lines, they get drawn as I want, but the labels are overwriting themselves. This wouldn't have been a problem if it weren't for the fact that their position shifts by about 1 pixel, and that makes the text go blurry. I don't know how to remove the labels before they are redrawn. I do remove the labels from the superview and add them back when drawRect is called, but setNeedsDisplay doesn't clear the screen before the graphics are updated, I guess (I think I read that setNeedsDisplay/drawRect doesn't clear the screen, just updates the content; couldn't find the text now while searching). What is the pattern to use here? Should I create a rectangle with the size of the screen (or the area where the labels are) and fill it with the background colour, or is there any other way to clear or reset the UIView (I don't want to release and create the UIView again)? The view is created in IB and associated with a custom UIView. In IB I add some buttons and other static labels. The labels and graphics described above are created programmatically. Any comments would be helpful! Thanks in advance!

    Read the article

  • Double buffering with C# has negative effect

    - by Roland Illig
    I have written the following simple program, which draws lines on the screen every 100 milliseconds (triggered by timer1). I noticed that the drawing flickers a bit (that is, the window is not always completely blue, but some gray shines through). So my idea was to use double-buffering. But when I did that, it made things even worse. Now the screen was almost always gray, and only occasionally did the blue color come through (demonstrated by timer2, switching the DoubleBuffered property every 2000 milliseconds). What could be an explanation for this?

        using System;
        using System.Drawing;
        using System.Windows.Forms;

        namespace WindowsFormsApplication1
        {
            public partial class Form1 : Form
            {
                public Form1()
                {
                    InitializeComponent();
                }

                private void Form1_Paint(object sender, PaintEventArgs e)
                {
                    Graphics g = CreateGraphics();
                    Pen pen = new Pen(Color.Blue, 1.0f);
                    Random rnd = new Random();
                    for (int i = 0; i < Height; i++)
                        g.DrawLine(pen, 0, i, Width, i);
                }

                // every 100 ms
                private void timer1_Tick(object sender, EventArgs e)
                {
                    Invalidate();
                }

                // every 2000 ms
                private void timer2_Tick(object sender, EventArgs e)
                {
                    DoubleBuffered = !DoubleBuffered;
                    this.Text = DoubleBuffered ? "yes" : "no";
                }
            }
        }
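    A likely factor worth noting (an assumption about the cause, not something stated in the question): the Paint handler above draws through CreateGraphics(), which targets the screen directly and bypasses the paint cycle, while DoubleBuffered only buffers what is drawn on the PaintEventArgs. When the never-painted back buffer is presented, it covers whatever CreateGraphics() drew, which would produce exactly the mostly-gray window described. A minimal sketch of the handler written against e.Graphics instead:

        private void Form1_Paint(object sender, PaintEventArgs e)
        {
            // Draw into the Graphics supplied by the paint event, so that
            // double buffering (and the normal erase/paint cycle) applies to it.
            Graphics g = e.Graphics;
            using (Pen pen = new Pen(Color.Blue, 1.0f))
            {
                for (int i = 0; i < ClientSize.Height; i++)
                    g.DrawLine(pen, 0, i, ClientSize.Width, i);
            }
        }

    Filling the client area with g.Clear(Color.Blue) or a single FillRectangle would be cheaper still, but the point of the sketch is only the change of drawing target.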

    Read the article

  • Producing CCITT compressed TIFF from CGImage

    - by Brian Postow
    I have a CGImage (Core Graphics, C/C++). It's grayscale. Well, originally it was B/W, but the CGImage may be RGB. That shouldn't matter. I want to create a CCITT Group 4 TIFF. I can create an LZW TIFF (grayscale or color) by creating a destination with the correct dictionary and adding the image in. No problem. However, there doesn't seem to be an equivalent kCGImagePropertyTIFFCompression value to represent CCITT-4. It should be 4, but that produces uncompressed output. I have a manual CCITT compression routine, so if I can get the binary (1 bit per pixel) data, I'm set. But I can't seem to get 1 BPP data out of a CGImage. I have code that is supposed to put the CGImage into a CGBitmapContext and then give me the data, but it seems to be giving me all black. I've asked a couple of questions today trying to get at this, but I just figured, let's ask the question I REALLY want answered and see if someone can answer it. There's GOT to be a way to do this. I've got to be missing something dumb. What is it?

    Read the article

  • Defining Light Coordinates

    - by Zachary
    I took a Computer Graphics exam a couple of days ago which had an extra credit question like the following: A light can be defined in one of two ways. It can be defined in world coordinates, e.g. a street light, or in viewer (eye) coordinates, e.g. a head-lamp worn by a miner. In either case the viewpoint can freely change. Describe how the light should be transformed differently in these two cases. Since I won't get to see the results of this until after spring break, I thought I would ask here. It seems like the analogies being used are misleading: could you not define a light source that is located at the viewer's eye in world coordinates just as well as you could in eye coordinates? I've been doing some research on how OpenGL handles light, and it seems as though it always uses eye coordinates; the ModelView matrix would be applied to any light in world coordinates. In that case the answer may just be that you would have to transform a light defined in world coordinates into eye coordinates using something like the ModelView matrix, while a light defined in eye coordinates would only need to be transformed by the projection matrix. Then again, I could be totally underthinking (or overthinking) this. Another thought I had is that it determines which way you render shadows, but that has more to do with the location of the light and its type (point, directional, emission, etc.) than with what coordinates it is represented in. Any ideas?
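    A small worked statement of the distinction (the notation is assumed here, not taken from the exam: V is the world-to-eye / ModelView matrix, and l is the light's position or direction):

        \[ l_{eye} = V \, l_{world} \quad \text{(world-anchored light, e.g. a street light: re-transform it whenever the viewpoint, and hence V, changes)} \]
        \[ l_{eye} = l_{lamp} \quad \text{(viewer-anchored light, e.g. a miner's head-lamp: already in eye coordinates, so V is never applied to it)} \]

    One caveat on the reasoning above: in fixed-function OpenGL, lighting is computed in eye coordinates, so light positions are not run through the projection matrix in either case.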

    Read the article

  • Graphics.MeasureCharacterRanges giving wrong size calculations in C#.Net?

    - by Owen Blacker
    I'm trying to render some text into a specific part of an image in a Web Forms app. The text will be user-entered, so I want to vary the font size to make sure it fits within the bounding box. I have code that was doing this fine in my proof-of-concept implementation, but I'm now trying it against the assets from the designer, which are larger, and I'm getting some odd results. I'm running the size calculation as follows:

        StringFormat fmt = new StringFormat();
        fmt.Alignment = StringAlignment.Center;
        fmt.LineAlignment = StringAlignment.Near;
        fmt.FormatFlags = StringFormatFlags.NoClip;
        fmt.Trimming = StringTrimming.None;
        int size = __startingSize;
        Font font = __fonts.GetFontBySize(size);
        while (GetStringBounds(text, font, fmt).IsLargerThan(__textBoundingBox))
        {
            context.Trace.Write("MyHandler.ProcessRequest", "Decrementing font size to " + size
                + ", as size is " + GetStringBounds(text, font, fmt).Size()
                + " and limit is " + __textBoundingBox.Size());
            size--;
            if (size < __minimumSize) { break; }
            font = __fonts.GetFontBySize(size);
        }
        context.Trace.Write("MyHandler.ProcessRequest", "Writing " + text + " in " + font.FontFamily.Name
            + " at " + font.SizeInPoints + "pt, size is " + GetStringBounds(text, font, fmt).Size()
            + " and limit is " + __textBoundingBox.Size());

    I then use the following line to render the text onto an image I'm pulling from the filesystem:

        g.DrawString(text, font, __brush, __textBoundingBox, fmt);

    where:

    - __fonts is a PrivateFontCollection, and PrivateFontCollection.GetFontBySize is an extension method that returns a FontFamily
    - RectangleF __textBoundingBox = new RectangleF(150, 110, 212, 64);
    - int __minimumSize = 8;
    - int __startingSize = 48;
    - Brush __brush = Brushes.White;
    - int size starts out at 48 and decrements within that loop
    - Graphics g has SmoothingMode.AntiAlias and TextRenderingHint.AntiAlias set
    - context is a System.Web.HttpContext (this is an excerpt from the ProcessRequest method of an IHttpHandler)

    The other methods are:

        private static RectangleF GetStringBounds(string text, Font font, StringFormat fmt)
        {
            CharacterRange[] range = { new CharacterRange(0, text.Length) };
            StringFormat myFormat = fmt.Clone() as StringFormat;
            myFormat.SetMeasurableCharacterRanges(range);
            using (Graphics g = Graphics.FromImage(new Bitmap(
                (int) __textBoundingBox.Width - 1,
                (int) __textBoundingBox.Height - 1)))
            {
                g.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.AntiAlias;
                g.TextRenderingHint = System.Drawing.Text.TextRenderingHint.AntiAlias;
                Region[] regions = g.MeasureCharacterRanges(text, font, __textBoundingBox, myFormat);
                return regions[0].GetBounds(g);
            }
        }

        public static string Size(this RectangleF rect)
        {
            return rect.Width + "×" + rect.Height;
        }

        public static bool IsLargerThan(this RectangleF a, RectangleF b)
        {
            return (a.Width > b.Width) || (a.Height > b.Height);
        }

    Now I have two problems. Firstly, the text sometimes insists on wrapping by inserting a line break within a word, when it should just fail to fit and cause the while loop to decrement again. I can't see why Graphics.MeasureCharacterRanges thinks the text fits within the box when it shouldn't be word-wrapping within a word. This behaviour is exhibited irrespective of the character set used (I get it with Latin-alphabet words, as well as with other parts of the Unicode range, like Cyrillic, Greek, Georgian and Armenian). Is there some setting I should be using to force Graphics.MeasureCharacterRanges to word-wrap only at whitespace characters (or hyphens)? This first problem is the same as post 2499067.

    Secondly, in scaling up to the new image and font size, Graphics.MeasureCharacterRanges is giving me heights that are wildly off. The RectangleF I am drawing within corresponds to a visually apparent area of the image, so I can easily see when the text is being decremented more than is necessary. Yet when I pass it some text, the GetBounds call is giving me a height that is almost double what the text actually takes. Using trial and error to set __minimumSize to force an exit from the while loop, I can see that 24pt text fits within the bounding box, yet Graphics.MeasureCharacterRanges is reporting that the height of that text, once rendered to the image, is 122px (when the bounding box is 64px tall and the text fits within that box). Indeed, without forcing the matter, the while loop iterates down to 18pt, at which point Graphics.MeasureCharacterRanges returns a value that fits. The trace log excerpt is as follows:

        Decrementing font size to 24, as size is 193×122 and limit is 212×64
        Decrementing font size to 23, as size is 191×117 and limit is 212×64
        Decrementing font size to 22, as size is 200×75 and limit is 212×64
        Decrementing font size to 21, as size is 192×71 and limit is 212×64
        Decrementing font size to 20, as size is 198×68 and limit is 212×64
        Decrementing font size to 19, as size is 185×65 and limit is 212×64
        Writing VENNEGOOR of HESSELINK in DIN-Black at 18pt, size is 178×61 and limit is 212×64

    So why is Graphics.MeasureCharacterRanges giving me a wrong result? I could understand it being, say, the line height of the font if the loop stopped around 21pt (which would visually fit, if I screenshot the results and measure them in Paint.NET), but it's going far further than it should because, frankly, it's returning the wrong damn results. Any and all help gratefully received. Thanks!
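    (On the first problem, a hedged sketch of one thing worth trying, not taken from the original post: if single-line output is acceptable, measuring with wrapping disabled makes an over-long string report a width larger than the box instead of being wrapped mid-word, so the decrementing loop keeps going. StringFormatFlags.NoWrap, the oversized layout rectangle, and the name GetStringBoundsNoWrap below are assumptions for illustration; note that NoWrap also prevents wrapping at spaces.)

        private static RectangleF GetStringBoundsNoWrap(string text, Font font, StringFormat fmt)
        {
            CharacterRange[] range = { new CharacterRange(0, text.Length) };
            StringFormat myFormat = (StringFormat) fmt.Clone();
            myFormat.FormatFlags |= StringFormatFlags.NoWrap;       // never break the string across lines
            myFormat.SetMeasurableCharacterRanges(range);
            RectangleF layout = new RectangleF(0, 0, 10000, 10000); // deliberately oversized so nothing is clipped
            using (Bitmap bmp = new Bitmap(1, 1))
            using (Graphics g = Graphics.FromImage(bmp))
            {
                g.TextRenderingHint = System.Drawing.Text.TextRenderingHint.AntiAlias;
                Region[] regions = g.MeasureCharacterRanges(text, font, layout, myFormat);
                return regions[0].GetBounds(g);
            }
        }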

    Read the article

  • How to scan convert right edges and slopes less than one?

    - by Zachary
    I'm writing a program which will use scan conversion on triangles to fill in the pixels contained within the triangle. One thing that has me confused is how to determine the x increment for the right edge of the triangle, or for slopes less than or equal to one. Here is the code I have to handle left edges with a slope greater than one (obtained from Computer Graphics: Principles and Practice, second edition):

        for (y = ymin; y <= ymax; y++)
        {
            edge.increment += edge.numerator;
            if (edge.increment > edge.denominator)
            {
                edge.x++;
                edge.increment -= edge.denominator;
            }
        }

    The numerator is set from (xMax - xMin) and the denominator from (yMax - yMin), which makes sense, as together they represent the slope of the line. As you move up the scan lines (represented by the y values), x is incremented by 1/(denominator/numerator), which results in x having a whole part and a fractional part. If the accumulated fractional part exceeds one, then the x value has to be incremented by 1 (as shown in the edge.increment > edge.denominator test). This works fine for any left edge with a slope greater than one, but I'm having trouble generalizing it for any edge, and googling has proved fruitless. Does anyone know the algorithm for that?
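    For illustration, here is a minimal sketch of one way the idea is commonly generalized (an assumption, not taken from the book; the names EdgeWalker and EdgeX are made up). It accumulates |dx| against dy and may step x several times per scanline when the edge is shallower than 45 degrees; whether the comparison is > or >= depends on the fill convention you adopt.

        using System;
        using System.Collections.Generic;

        static class EdgeWalker
        {
            // Walk an edge from (x0, y0) up to (x1, y1), with y0 < y1 assumed,
            // yielding the x intercept for each scanline y0, y0+1, ..., y1-1.
            public static IEnumerable<int> EdgeX(int x0, int y0, int x1, int y1)
            {
                int dy = y1 - y0;            // positive by assumption
                int dx = x1 - x0;            // negative for edges leaning to the left
                int step = Math.Sign(dx);    // direction in which x moves (+1, -1, or 0)
                int num = Math.Abs(dx);      // numerator of |dx| / dy
                int x = x0, err = 0;
                for (int y = y0; y < y1; y++)
                {
                    yield return x;
                    err += num;
                    while (err >= dy)        // shallow edges (|dx| > dy) advance x more than once
                    {
                        x += step;
                        err -= dy;
                    }
                }
            }
        }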

    Read the article

  • Image drawing library for Haskell?

    - by absz
    I'm working on a Haskell program for playing spatial games: I have a graph of a bunch of "individuals" playing the Prisoner's Dilemma, but only with their immediate neighbors, and copying the strategies of the people who do best. I've reached a point where I need to draw an image of the world, and this is where I've hit problems. Two of the possible geometries are easy: if people have four or eight neighbors each, then I represent each one as a filled square (with color corresponding to strategy) and tile the plane with these. However, I also have a situation where people have six neighbors (hexagons) or three neighbors (triangles). My question, then, is: what's a good Haskell library for creating images and drawing shapes on them? I'd prefer that it create PNGs, but I'm not incredibly picky. I was originally using Graphics.GD, but it only exports bindings to functions for drawing points, lines, arcs, ellipses, and non-rotated rectangles, which is not sufficient for my purposes (unless I want to draw hexagons pixel by pixel*). I looked into using foreign import, but it's proving a bit of a hassle (partly because the polygon-drawing function requires an array of gdPoint structs), and given that my requirements may grow, it would be nice to use an in-Haskell solution and not have to muck about with the FFI (though if push comes to shove, I'm willing to do that). Any suggestions? * That is also an option, actually; any tips on how to do that would also be appreciated, though I think a library would be easier.

    Read the article

  • Scaling Image to multiple sizes for Deep Zoom

    - by AnthonyWJones
    Let's assume I have a bitmap with a square aspect and a width of 2048 pixels. In order to create the set of files needed by Silverlight's DeepZoomImageTileSource, I need to scale this bitmap to 1024, then to 512, then to 256, etc., down to a 1-pixel image. There are two, I suspect naive, approaches: (1) For each image required, scale the original full-size image to the required size. However, it seems excessive to be scaling the full image down to the very small sizes. (2) Having scaled from one level to the next, discard the original image and use each successive scaled image as the source of the next smaller image. However, I suspect that this would generate images in the 256-64 range with poorer fidelity than option 1. Note that, unlike with the Deep Zoom Composer, this tool is expected to act in an on-demand fashion, hence it needs to complete in a reasonable timeframe (tops 30 seconds). On the plus side, I'm only creating a single multiscale image, not a pyramid of multiple high-res images. I am outside my comfort zone here; any graphics experts got any advice? Am I wrong about point 2? Is point 1 reasonably performant and I'm worrying about nothing? Option 3?
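    For what it's worth, a minimal sketch of option 2 (assumptions throughout: the levels are generated server-side where System.Drawing is available, and the names TilePyramid and BuildPyramid are hypothetical). Using a high-quality resampling filter at each 2:1 step keeps the small levels close to what scaling directly from the original would give, while touching far fewer pixels in total:

        using System;
        using System.Collections.Generic;
        using System.Drawing;
        using System.Drawing.Drawing2D;

        static class TilePyramid
        {
            // Yields the 1024, 512, ..., 1 pixel levels for a square source bitmap;
            // the caller is responsible for saving and disposing each level (and the source).
            public static IEnumerable<Bitmap> BuildPyramid(Bitmap source)
            {
                Bitmap current = source;
                while (current.Width > 1)
                {
                    int w = Math.Max(1, current.Width / 2);
                    var next = new Bitmap(w, w);
                    using (var g = Graphics.FromImage(next))
                    {
                        g.InterpolationMode = InterpolationMode.HighQualityBicubic;
                        g.DrawImage(current, new Rectangle(0, 0, w, w));
                    }
                    yield return next;
                    current = next;
                }
            }
        }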

    Read the article

  • Turning off antialiasing in Löve2D

    - by cjanssen
    I'm using Löve2D for writing a small game. Löve2D is an open-source game engine for Lua. The problem I'm encountering is that some antialiasing filter is automatically applied to your sprites when you draw them at non-integer positions:

        love.graphics.draw( sprite, x, y )

    So when x or y is not round (for example, x = 100.24), the sprite appears blurred. The same happens when the sprite size is not even, because (x, y) points to the center of the sprite; for example, a 31x30 sprite will appear blurred again, because its pixels are painted at non-integer positions. Since I am using pixel art, I want to avoid this entirely, otherwise the art is destroyed by this effect. The workaround I am using so far is to force the coordinates to be round by littering the code with calls to math.floor(), and to force all the sprites to have even sizes by adding a row or column of transparent pixels with the paint program, if needed. Is there some command to deactivate the antialiasing that I can call at program startup?

    Read the article

  • Need guidelines for optimizing WebGL performance by minimizing shader changes

    - by brainjam
    I'm trying to get an idea of the practicality of WebGL for rendering large architectural interior scenes consisting of hundreds of thousands of triangles. These triangles are distributed over many objects, and there are many materials in the scene. On the other hand, there are no moving parts, and the materials tend to be fairly simple, mostly based on texture maps. There is a lot of texture map sharing; for example, all the chairs in the scene will share a common map. There is also some multitexturing: up to three textures overlaid in a material. I've been doing a little experimentation and reading, and gather that frequently switching materials during a rendering pass will slow things down. For example, a scene with 200K triangles will show significant performance differences depending on whether there are 10 or 1000 objects, assuming that each time an object is displayed a new material is set up. So it seems that if performance is important, the scene should be sorted by material so as to minimize material switching. What I'm looking for is guidelines on how to think about the overhead of various state changes, and where I get the biggest bang for the buck. For example:

    - What are the relative performance costs of, say, gl.useProgram(), gl.uniformMatrix4fv(), and gl.drawElements()?
    - Should I try to write ubershaders to minimize shader switching?
    - Should I try to aggregate geometry to minimize the number of gl.drawElements() calls?

    I realize that mileage may vary depending on browser, OS, and graphics hardware, and I'm also not looking for heroic measures; just some guidelines from people who already have some experience in making scenes fast. I'll add that while I've had some experience with fixed-pipeline OpenGL programming in the past, I'm rather new to the WebGL/OpenGL ES 2.0 way of doing things.

    Read the article
