Search Results

Search found 2796 results on 112 pages for 'pixel fonts'.

Page 59/112 | < Previous Page | 55 56 57 58 59 60 61 62 63 64 65 66  | Next Page >

  • Start the Control Panel item Windows Update with WinExec.

    - by Bill
    Windows Vista canonical names: the Microsoft website says "In Windows Vista and later, the preferred method of launching a Control Panel item from a command line is to use the Control Panel item's canonical name." According to the same page, the following example shows how an application can start the Control Panel item Windows Update with WinExec:

        WinExec("%systemroot%\system32\control.exe /name Microsoft.WindowsUpdate", SW_NORMAL);

    For Delphi 2010 I tried:

        var
          CaptionString: string;
          Applet: string;
          Result: integer;
          ParamString: string;

        CaptionString := ListviewApplets1.Items.Item[ListviewApplets1.ItemIndex].Caption;
        if CaptionString = 'Folder Options' then          { 6DFD7C5C-2451-11d3-A299-00C04F8EF6AF }
          Applet := 'Microsoft.FolderOptions'
        else if CaptionString = 'Fonts' then              { 93412589-74D4-4E4E-AD0E-E0CB621440FD }
          Applet := 'Microsoft.Fonts'
        else if CaptionString = 'Windows Update' then     { 93412589-74D4-4E4E-AD0E-E0CB621440FD }
          Applet := 'Microsoft.WindowsUpdate'
        else if CaptionString = 'Game Controllers' then   { 259EF4B1-E6C9-4176-B574-481532C9BCE8 }
          Applet := 'Microsoft.GameControllers'
        else if CaptionString = 'Get Programs' then       { 15eae92e-f17a-4431-9f28-805e482dafd4 }
          Applet := 'Microsoft.GetPrograms'
        //...
        ParamString := (SystemFolder + '\control.exe /name ') + Applet;
        WinExec(ParamString, SW_NORMAL);  // <= does not execute; trapping the error returns ERROR_FILE_NOT_FOUND

    I tried an ExecAndWait(ParamString) method and it works perfectly with the same ParamString used with WinExec:

        ParamString := (SystemFolder + '\control.exe /name ') + Applet;
        ExecAndWait(ParamString);  // <= executes and runs perfectly

    The ExecAndWait method I used wraps Windows.CreateProcess:

        if Windows.CreateProcess( nil, PChar( CommandLine ), nil, nil, False, 0, nil, nil, StartupInfo, ProcessInfo ) then
        begin
          try

    My question: does WinExec require a different ParamString, or am I doing this wrong with WinExec? I did not post the full ExecAndWait method, but I can if someone wants to see it.
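
    For reference, a minimal Win32 sketch of the documented WinExec call (plain C++ rather than Delphi; the path-less form relies on WinExec's standard search of the system directory, and the comments describe an assumption about what may be going wrong, not a confirmed diagnosis):

        #include <windows.h>

        int main()
        {
            // WinExec is ANSI-only: it takes an LPCSTR command line. Handing it a
            // UTF-16 (Unicode) string reinterpreted as ANSI typically produces a
            // one-character "file name", which fails with ERROR_FILE_NOT_FOUND.
            UINT rc = WinExec("control.exe /name Microsoft.WindowsUpdate", SW_NORMAL);
            if (rc <= 31)                        // return values 0..31 indicate failure
                return static_cast<int>(rc);     // the value is the error code (2 = ERROR_FILE_NOT_FOUND)
            return 0;
        }

    In Delphi 2010 strings are UnicodeString, so if ParamString reaches WinExec through a cast to PAnsiChar that may well be the difference from the CreateProcess call, which receives a PChar (PWideChar) and the wide CreateProcess API.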

    Read the article

  • java setting resolution and print size for an Image

    - by Ingrid
    I wrote a program that generates a BufferedImage to be displayed on the screen and then printed. Part of the image includes grid lines that are 1 pixel wide, with about 10 pixels between lines. Because of screen resolution, the image is displayed much bigger than that, with several pixels for each line. I'd like to draw it smaller, but when I scale the image (either by using Image.getScaledInstance or Graphics2D.scale), I lose a significant amount of detail. I'd like to print the image as well, and am dealing with the same problem. In that case, I am using this code to set the resolution:

        HashPrintRequestAttributeSet set = new HashPrintRequestAttributeSet();
        PrinterResolution pr = new PrinterResolution(250, 250, ResolutionSyntax.DPI);
        set.add(pr);
        job.print(set);

    which works to make the image smaller without losing detail. But the problem is that the image is cut off at the same boundary as if I hadn't set the resolution. I'm also confused because I expected a larger number of DPI to make a smaller image, but it's working the other way. I'm using Java 1.6 on Windows 7 with Eclipse.

    Read the article

  • How to convert Vector Layer coordinates into Map Latitude and Longitude in OpenLayers

    - by Jenny
    I'm pretty confused. I have a point:

        x = -12669114.702301
        y = 5561132.6760608

    that I got from drawing a square on a vector layer with the DrawFeature control. The numbers seem... erm... awfully large, but they seem to work, because if I later draw a square with all the same points, it's in the same position, so I figure they have to be right. The problem is when I try to convert this point to latitude and longitude. I'm using:

        map.getLonLatFromPixel(pointToPixel(points[0]));

    where points[0] is a geometry Point, and the pointToPixel function takes any point and turns it into a pixel (since getLonLatFromPixel needs a pixel). It does this by simply taking the point's x and making it the pixel's x, and so on. The latitude and longitude I get are on the order of:

        lat: -54402718463.864
        lng: -18771380.353223

    This is very clearly wrong, and I'm left really confused. I tried projecting the object using:

        .transform(new OpenLayers.Projection("EPSG:4326"), map.getProjectionObject());

    but I don't really get it and am pretty sure I did it incorrectly anyway. My code is here: http://pastie.org/909644 I'm sort of at a loss. The coordinates seem consistent, because I can reuse them to get the same result... but they seem way larger than any of the examples I'm seeing on the OpenLayers website.

    Read the article

  • PNG composition using GD and PHP

    - by Dominic
    I am trying to take a rectangular PNG and add depth using GD by duplicating the background and moving it down 1 pixel and right 1 pixel. I am trying to preserve a transparent background as well. I am having a bunch of trouble with preserving the transparency. Any help would be greatly appreciated. Thanks!

        $obj = imagecreatefrompng('rectangle.png');
        $depth = 5;
        $obj_width = imagesx($obj);
        $obj_height = imagesy($obj);
        imagesavealpha($obj, true);
        for ($i = 1; $i <= $depth; $i++) {
            $layer = imagecreatefrompng('rectangle.png');
            imagealphablending($layer, false);
            imagesavealpha($layer, true);
            $new_obj = imagecreatetruecolor($obj_width + $i, $obj_height + $i);
            $new_obj_width = imagesx($new_obj);
            $new_obj_height = imagesy($new_obj);
            imagealphablending($new_obj, false);
            imagesavealpha($new_obj, true);
            $trans_color = imagecolorallocatealpha($new_obj, 0, 0, 0, 127);
            imagefill($new_obj, 0, 0, $trans_color);
            imagecopyresampled($new_obj, $layer, $i, $i, 0, 0, $obj_width, $obj_height, $obj_width, $obj_height);
            //imagesavealpha($new_obj, true);
            //imagesavealpha($obj, true);
        }
        header("Content-type: image/png");
        imagepng($new_obj);
        imagedestroy($new_obj);

    Read the article

  • How to convert a byte array of 19200 bytes in size where each byte represents 4 pixels (2 bits per pixel) to a bitmap

    - by Klinger
    I am communicating with an instrument (remote controlling it) and one of the things I need to do is to draw the instrument screen. In order to get the screen I issue a command and the instrument replies with an array of bytes that represents the screen. Below is what the instrument manual has to say about converting the response to the actual screen: "The command retrieves the framebuffer data used for the display. It is 19200 bytes in size, 2 bits per pixel, 4 pixels per byte, arranged as 320x240 characters. The data is sent in RLE encoded form. To convert this data into a BMP for use in Windows, it needs to be turned into a 4BPP. Also note that BMP files are upside down relative to this data, i.e. the top display line is the last line in the BMP." I managed to unpack the data, but now I am stuck on how to actually go from the unpacked byte array to a bitmap. My background on this is pretty close to zero and my searches have not revealed much either. I am looking for directions and/or articles I could use to help me understand how to get this done. Any code or even pseudo code would also help. :-) So, just to summarize it all: how to convert a byte array of 19200 bytes in size, where each byte represents 4 pixels (2 bits per pixel), to a bitmap arranged as 320x240 pixels. Thanks in advance.
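
    A minimal sketch of the expansion step (C++, hedged: the grey-scale palette, the bit order within each byte, and the top-to-bottom row order of the source data are assumptions to check against the instrument manual; it expands to 32-bit pixels rather than the 4BPP BMP the manual mentions, since a 32-bit buffer is easier to hand to most drawing APIs):

        #include <cstdint>
        #include <vector>

        // Expand a 19200-byte, 2-bits-per-pixel framebuffer (320x240, 4 pixels per
        // byte) into a 32-bit pixel buffer, flipping rows so it matches BMP's
        // bottom-up layout.
        std::vector<uint32_t> Unpack2bpp(const uint8_t* src, int width = 320, int height = 240)
        {
            static const uint32_t palette[4] = {
                0xFF000000,  // 00 -> black      (assumed mapping)
                0xFF555555,  // 01 -> dark grey
                0xFFAAAAAA,  // 10 -> light grey
                0xFFFFFFFF   // 11 -> white
            };

            std::vector<uint32_t> dst(static_cast<size_t>(width) * height);
            const int bytesPerRow = width / 4;  // 80 bytes per 320-pixel row
            for (int y = 0; y < height; ++y)
            {
                const uint8_t* srcRow = src + y * bytesPerRow;
                uint32_t* dstRow = &dst[static_cast<size_t>(height - 1 - y) * width];  // flip vertically
                for (int x = 0; x < width; ++x)
                {
                    uint8_t packed = srcRow[x / 4];
                    int shift = 6 - 2 * (x % 4);  // leftmost pixel assumed in the high bits
                    dstRow[x] = palette[(packed >> shift) & 0x3];
                }
            }
            return dst;
        }

    The resulting buffer can then be wrapped in a BITMAPINFOHEADER (or handed to whatever imaging class is in use) as a 320x240, 32-bit image.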

    Read the article

  • XAML PixelGrid to Prevent Blurry Text

    - by Bodekaer
    Hi, just wanted to share a small Grid I created, which can help prevent blurry text etc., as it adjusts the margin of the Grid to ensure a pixel-perfect position and size. Works great e.g. inside StackPanels with auto-height Labels/TextBlocks. Here is the code:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Media;

        namespace Controls
        {
            class PixelGrid : Grid
            {
                protected override void OnRenderSizeChanged(SizeChangedInfo sizeInfo)
                {
                    // POSITION
                    Vector position = VisualTreeHelper.GetOffset(this);
                    double targetX = Math.Round(position.X, MidpointRounding.ToEven);
                    double targetY = Math.Round(position.Y, MidpointRounding.ToEven);
                    double marginLeft = targetX - position.X;
                    double marginTop = targetY - position.Y;

                    // SIZE
                    double targetHeight = Math.Round(sizeInfo.NewSize.Height, MidpointRounding.ToEven);
                    double targetWidth = Math.Round(sizeInfo.NewSize.Width, MidpointRounding.ToEven);
                    double marginBottom = targetHeight - sizeInfo.NewSize.Height;
                    double marginRight = targetWidth - sizeInfo.NewSize.Width;

                    // Adjust margin to ensure pixel width
                    this.Margin = new Thickness(marginLeft, marginTop, marginRight, marginBottom);

                    base.OnRenderSizeChanged(sizeInfo);
                }
            }
        }

    Read the article

  • HLSL - Combining textures

    - by b34r
    Hi all, I'm trying to combine two textures in HLSL - specifically, I want to take the alpha values from a base image and the color data from an overlay image. My pixel shader for this looks like this:

        float4 PixelShaderFunction(VertexOut input) : COLOR0
        {
            float4 baseColor = tex2D( BaseSampler, input.baseCoords.xy ).rgba;
            float4 overlayColor = tex2D( OverlaySampler, input.overlayCoords.xy ).rgba;
            float4 color;
            color.r = overlayColor.r;
            color.g = overlayColor.g;
            color.b = overlayColor.b;
            color.a = baseColor.a;
            return color.rgba;
        }

    and my blend state looks like this:

        BlendState bs = new BlendState();
        bs.AlphaSourceBlend = Blend.SourceAlpha;
        bs.AlphaDestinationBlend = Blend.DestinationAlpha;
        bs.ColorSourceBlend = Blend.SourceColor;
        bs.ColorDestinationBlend = Blend.DestinationColor;

    What this leaves me with is a washed-out version of what should be the overlay color. I've tried numerous permutations of the BlendState settings, and played with the pixel shader math quite a bit, but to no avail. Can anyone point me in the right direction? Thanks in advance =)

    Read the article

  • Locking a GDI+ Bitmap in Native C++?

    - by user146780
    I can find many examples on how to do this in managed C++ but none for unmanaged. I want to get all the pixel data as efficiently as possible, but some of the Scan0 stuff I would need more info about so I can properly iterate through the pixel data and get each RGBA value from it. Right now I have this:

        Bitmap *b = new Bitmap(filename);
        if(b == NULL)
        {
            return 0;
        }
        UINT w,h;
        w = b->GetWidth();
        h = b->GetHeight();
        Rect *r = new Rect(0,0,w,h);
        BitmapData *lockdat;
        b->LockBits(r,ImageLockModeRead,PixelFormatDontCare,lockdat);
        delete(r);
        if(w == 0 && h == 0)
        {
            return 0;
        }
        Color c;
        std::vector<GLubyte> pdata(w * h * 4,0.0);
        for (unsigned int i = 0; i < h; i++)
        {
            for (unsigned int j = 0; j < w; j++)
            {
                b->GetPixel(j,i,&c);
                pdata[i * 4 * w + j * 4 + 0] = (GLubyte) c.GetR();
                pdata[i * 4 * w + j * 4 + 1] = (GLubyte) c.GetG();
                pdata[i * 4 * w + j * 4 + 2] = (GLubyte) c.GetB();
                pdata[i * 4 * w + j * 4 + 3] = (GLubyte) c.GetA();
            }
        }
        delete(b);
        return CreateTexture(pdata,w,h);

    How do I use lockdat to do the equivalent of GetPixel? Thanks
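
    A hedged sketch of reading the pixels through LockBits instead of GetPixel (plain GDI+ C++; it assumes GDI+ has already been started with GdiplusStartup and that the RGBA byte order of the GLubyte vector above is the desired output - note that BitmapData is a caller-owned struct, not an uninitialised pointer):

        #include <windows.h>
        #include <gdiplus.h>
        #include <vector>
        using namespace Gdiplus;

        std::vector<BYTE> ReadPixels(Bitmap& bmp)
        {
            const UINT w = bmp.GetWidth();
            const UINT h = bmp.GetHeight();

            Rect rect(0, 0, static_cast<INT>(w), static_cast<INT>(h));
            BitmapData data;
            // Ask GDI+ for a known layout instead of PixelFormatDontCare so the
            // bytes can be interpreted without checking data.PixelFormat.
            if (bmp.LockBits(&rect, ImageLockModeRead, PixelFormat32bppARGB, &data) != Ok)
                return std::vector<BYTE>();

            std::vector<BYTE> out(w * h * 4);
            for (UINT y = 0; y < h; ++y)
            {
                // Scan0 points at the first scanline; Stride is the byte distance
                // between scanlines (it may be padded, and can be negative for
                // bottom-up bitmaps), so rows are indexed through it, not w * 4.
                const BYTE* row = static_cast<const BYTE*>(data.Scan0) + static_cast<INT_PTR>(y) * data.Stride;
                for (UINT x = 0; x < w; ++x)
                {
                    const BYTE* px = row + x * 4;      // 32bppARGB is stored as B, G, R, A in memory
                    out[(y * w + x) * 4 + 0] = px[2];  // R
                    out[(y * w + x) * 4 + 1] = px[1];  // G
                    out[(y * w + x) * 4 + 2] = px[0];  // B
                    out[(y * w + x) * 4 + 3] = px[3];  // A
                }
            }
            bmp.UnlockBits(&data);
            return out;
        }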

    Read the article

  • Overlay an image over video using OpenGL ES shaders

    - by BlueVoodoo
    I am trying to understand the basic concepts of OpenGL. A week into it, I am still far from there. Once I am in GLSL, I know what to do, but I find getting there is the tricky bit. I am currently able to pass in video pixels which I manipulate and present. I have then been trying to add a still image as an overlay. This is where I get lost. My end goal is to end up in the same fragment shader with pixel data from both my video and my still image. I imagine this means I need two textures and have to pass on two pixel buffers. I am currently passing the video pixels like this:

        glGenTextures(1, &textures[0]);
        //target, texture
        glBindTexture(GL_TEXTURE_2D, textures[0]);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, buffer);

    Would I then repeat this process on textures[1] with the second buffer from the image? If so, do I then bind both GL_TEXTURE0 and GL_TEXTURE1? And would my shader look something like this once I am in it?

        uniform sampler2D videoData;
        uniform sampler2D imageData;

    It seems no matter what combination I try, image and video always end up being just video data in both of these. Sorry for the many questions merged in here, I just want to clear up my many assumptions and move on. To clarify the question a bit: what do I need to do to add pixels from a still image in the process described? ("Easy to understand" sample code or any type of hint would be appreciated.)
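
    A hedged sketch of the missing wiring (C-style GL calls; the names program, videoTex and imageTex are placeholders, not from the post). A common cause of "both samplers show video" is that sampler uniforms default to texture unit 0, so each sampler must be pointed explicitly at its own unit:

        // Second texture: created and filled exactly like the video one,
        // just with the still image's buffer.
        GLuint videoTex, imageTex;   // assume both already uploaded with glTexImage2D
        GLuint program;              // the linked shader program

        glUseProgram(program);

        glActiveTexture(GL_TEXTURE0);                                 // unit 0 <- video
        glBindTexture(GL_TEXTURE_2D, videoTex);
        glUniform1i(glGetUniformLocation(program, "videoData"), 0);   // unit index, not texture id

        glActiveTexture(GL_TEXTURE1);                                 // unit 1 <- still image
        glBindTexture(GL_TEXTURE_2D, imageTex);
        glUniform1i(glGetUniformLocation(program, "imageData"), 1);

    In the fragment shader the two samplers can then be read independently, e.g. texture2D(videoData, uv) and texture2D(imageData, uv).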

    Read the article

  • How to calculate the y-pixels of someone's weight on a graph? (math+programming question)

    - by RexOnRoids
    I'm not as smart as some of you geniuses, so I need some help from a math whiz. My app draws a graph of the user's weight over time. I need a surefire way to always get the right pixel position to draw the weight point at for a given weight. For example, say I want to plot the weight 80.0 (kg) on the graph when the range of weights is 80.0 to 40.0 kg. I want to be able to plug in the weight (given I also know the highest and lowest weights in the range) and get the pixel result 400 (y), for the top of the graph. The graph is 300 pixels high (starts at 100 and ends at 400). The highest weight, 80 kg, would be plotted at 400, while the lowest weight, 40 kg, would be plotted at 100, and the intermediate weights should be plotted appropriately. I tried this but it does not work:

        -(float)weightToPixel:(float)theWeight {
            float graphMaxY = 400; //The TOP of the graph
            float graphMinY = 100; //The BOTTOM of the graph
            float yOffset = 100; //Graph itself is offset 100 pixels in the Y direction
            float coordDiff = graphMaxY-graphMinY; //The size in pixels of the graph
            float weightDiff = self.highestWeight-self.lowestWeight; //The weight gap
            float pixelIncrement = coordDiff/weightDiff;
            float weightY = (theWeight*pixelIncrement)-(coordDiff-yOffset); //The return value
            return weightYpixel;
        }
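
    A small sketch of the linear mapping being asked for (written as a free C++-style function with hypothetical parameters; the original Objective-C method would read self.lowestWeight and self.highestWeight instead of taking them as arguments). The idea is to turn the weight into a 0..1 fraction of the weight range and scale that fraction onto the pixel range:

        // lowestWeight maps to the bottom of the graph (y = 100),
        // highestWeight to the top (y = 400).
        float weightToPixel(float weight, float lowestWeight, float highestWeight)
        {
            const float graphMinY = 100.0f;  // bottom of the graph
            const float graphMaxY = 400.0f;  // top of the graph

            // 0 at the lowest weight, 1 at the highest weight.
            float t = (weight - lowestWeight) / (highestWeight - lowestWeight);
            return graphMinY + t * (graphMaxY - graphMinY);
        }

    With a 40..80 kg range this gives 100 for 40 kg, 250 for 60 kg and 400 for 80 kg. If the drawing API's y axis grows downward instead, swap graphMinY and graphMaxY.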

    Read the article

  • OpenGL GL_LINES endpoints not joining

    - by old-school rules
    I'm having problems with the GL_LINES block... the lines in the sample below do not connect at the ends (although sometimes it randomly decides to connect a corner or two). Instead, the endpoints come within 1 pixel of one another, leaving a corner that is not fully squared, if that makes sense. It is a simple block to draw a solid 1-pixel rectangle.

        glBegin(GL_LINES);
        glColor3b(cr, cg, cb);
        glVertex3i(pRect->left, pRect->top, 0);
        glVertex3i(pRect->right, pRect->top, 0);
        glVertex3i(pRect->right, pRect->top, 0);
        glVertex3i(pRect->right, pRect->bottom, 0);
        glVertex3i(pRect->right, pRect->bottom, 0);
        glVertex3i(pRect->left, pRect->bottom, 0);
        glVertex3i(pRect->left, pRect->bottom, 0);
        glVertex3i(pRect->left, pRect->top, 0);
        glEnd();

    The sample below seems to correct the problem, giving me sharp, square corners; but I can't accept it because I don't know why it's acting this way...

        glBegin(GL_LINES);
        glColor3b(cr, cg, cb);
        glVertex3i(pRect->left, pRect->top, 0);
        glVertex3i(pRect->right + 1, pRect->top, 0);
        glVertex3i(pRect->right, pRect->top, 0);
        glVertex3i(pRect->right, pRect->bottom + 1, 0);
        glVertex3i(pRect->right, pRect->bottom, 0);
        glVertex3i(pRect->left - 1, pRect->bottom, 0);
        glVertex3i(pRect->left, pRect->bottom, 0);
        glVertex3i(pRect->left, pRect->top - 1, 0);
        glEnd();

    Any OpenGL programmers out there that can help, I would appreciate it :)
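
    For what it's worth, a hedged sketch of an alternative: OpenGL rasterizes lines with the diamond-exit rule, which typically leaves the last pixel of each segment undrawn, so separate GL_LINES segments meeting at a corner can each stop one pixel short (which is what the +1/-1 fudging compensates for). Drawing the outline as a loop shares one vertex per corner instead (reusing the same variables as the snippets above):

        glBegin(GL_LINE_LOOP);   // closes the rectangle back to the first vertex automatically
        glColor3b(cr, cg, cb);
        glVertex3i(pRect->left,  pRect->top,    0);
        glVertex3i(pRect->right, pRect->top,    0);
        glVertex3i(pRect->right, pRect->bottom, 0);
        glVertex3i(pRect->left,  pRect->bottom, 0);
        glEnd();

    Exact corner coverage can still vary between drivers, so pixel-exact rectangles are often drawn with a half-pixel offset or as four one-pixel-wide quads instead.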

    Read the article

  • Controlling the fontsize across multiple browsers

    - by Matthias
    Hello, I've got three browsers on my Windows XP Pro machine: Firefox 3.5.2, Opera 10 and IE 7. All pages are displayed fine in Firefox. Opera and IE seem to have a very similar issue: both upsize fonts even though the zoom mode in both browsers is set to 100%. I tend to believe that this might be a system-wide setting somewhere. Does anyone know this problem? Thanks in advance.

    Read the article

  • Low framerate on background apps

    - by user1698923
    My problem is that when a game is running in the foreground, in full-screen mode, any applications on my second monitor (such as YouTube videos or other videos; it is not app specific) drop their frame rate to about 2-3 FPS. It seems like some sort of power-management option that I can't track down. As far as I can tell, it's not due to the GPU not being able to keep up. For instance, my PC can play League of Legends at about 280 FPS when the frame rate is uncapped. If I cap it at 60 FPS using the in-game option, it has no effect on the performance of the background app.

    Summary:

        Operating System: Windows 8 Pro 64-bit
        Computer type: Desktop
        CPU: Intel Core i7 3820 @ 3.60GHz, 42 °C, Sandy Bridge-E 32nm Technology
        RAM: 12.0GB Triple-Channel DDR3 @ 533MHz (7-7-7-20)
        Motherboard: Gigabyte Technology Co., Ltd. X79-UD3 (SOCKET 0), 37 °C
        Graphics: DELL U2713HM (2560x1440@59Hz), DELL U2713HM (2560x1440@59Hz), 1280MB NVIDIA GeForce GTX 570 (Gigabyte), 58 °C
        Hard Drives: 212GB Volume0 (RAID); 1863GB Western Digital WDC WD20EARS-00MVWB0 (SATA), 36 °C; 1863GB Western Digital WDC WD20EARS-00MVWB0 (SATA), 34 °C
        Optical Drives: No optical disk drives detected
        Audio: ASUS Xonar Essence STX Audio Device

    Monitors (both DELL U2713HM on NVIDIA GeForce GTX 570):

        Current Resolution: 2560x1440 pixels; Work Resolution: 2560x1400 pixels
        State: Enabled, output devices support multiple displays; one Extended/Secondary, one Extended/Primary
        Monitor Width/Height: 2560x1440, 32 bits per pixel, 59 Hz
        Devices: \\.\DISPLAY4\Monitor0 and \\.\DISPLAY5\Monitor0

    NVIDIA GeForce GTX 570:

        Manufacturer: NVIDIA; Model: GeForce GTX 570; GPU: GF110; Device ID: 10DE-1086; Revision: A2; Subvendor: Gigabyte (1458); Series: GeForce GTX 500
        Current Performance Level: Level 3; Current GPU Clock: 845 MHz; Current Memory Clock: 1900 MHz; Current Shader Clock: 1690 MHz; Voltage: 0.988 V
        Technology: 40 nm; Die Size: 520 mm²; Release Date: Dec 07, 2010; DirectX Support: 11.0; OpenGL Support: 5.0; Bus Interface: PCI Express x16; Temperature: 57 °C
        Driver version: 9.18.13.2018; BIOS Version: 70.10.55.00.01; ROPs: 40; Shaders: 512 unified; Memory Type: GDDR5; Memory: 1280 MB; Bus Width: 64x5 (320 bit)
        Filtering Modes: 16x Anisotropic; Noise Level: Moderate; Max Power Draw: 219 Watts
        Performance levels (3): Level 1 "Default" (GPU 50 MHz, Memory 135 MHz, Shader 101 MHz); Level 2 "2D Desktop" (GPU 405 MHz, Memory 324 MHz, Shader 810 MHz); Level 3 "3D Applications" (GPU 845 MHz, Memory 1900 MHz, Shader 1690 MHz)

    Things I've tried:

        1) Updating the graphics driver
        2) Setting the Windows power mode to High Performance
        3) Resetting Nvidia global performance settings to default

    Read the article

  • What is your favorite TN3270 Client?

    - by Vaibhav Bajpai
    I am currently using Mocha W32 TN3270 at work and am wondering what good alternatives exist. Recommendations on monospaced fonts for the client, along with custom color settings, would be appreciated as well. I am using Monaco with the default color settings, but it just does not cut it. Some screenshots of your client at your workplace are welcome.

    Read the article

  • Kill overscan for ATI drivers?

    - by joeforker
    I have a dual-boot Windows 7 64-bit / Linux 64-bit machine that uses ATI's Catalyst drivers. Sometimes I attach it to a 1080p LCD TV over HDMI. ATI is daft enough to provide a border to account for overscan. Since I'm using an LCD TV there is no overscan, and the border makes it look like crap because the pixel mapping is not 1:1. How do I disable this driver "feature" in Windows and in Linux?

    Read the article

  • Quality gets worse using ffmpeg and Flash

    - by HOpety
    I have a bunch of Flash videos and am adding my brand to all of them. The problem is that the quality gets worse. I am doing it with this command:

        ffmpeg -i /input.flv -vhook "/usr/loca/vhook/drawtext.so -f /usr/share/fonts/somefont.ttf -x 5 -y 5 t MyBrand" -f flv -s 320x240 - | flvtools2 -U stdin /output.flv

    Please tell me what I am doing wrong. I need the same quality.

    Read the article

  • Not able to install java in ubuntu 9.10

    - by piemesons
    The error I am getting:

        E: I wasn't able to locate a file for the sun-java6-bin package. This might mean you need to manually fix this package. (due to missing arch)

    Then a blue dialog box appears with OK, which is not clickable, and pressing Enter is not working.

    EDIT: I tried the following, but it is still not working... same problem:

        sudo dpkg --configure -a
        sudo aptitude clean
        sudo aptitude update
        sudo aptitude dist-upgrade
        sudo aptitude install sun-java6-jre sun-java6-plugin sun-java6-fonts

    Read the article

  • Why do Chinese filenames display as boxes in Windows 7?

    - by Roddy
    I'm running Windows 7 Professional (UK) and trying to get filenames containing Chinese characters to display correctly in Explorer. I can create Chinese filenames in Explorer by pasting text from a webpage or using the Chinese IME to rename files, but the characters just display as boxes (the Unicode 'missing character' glyph). The Chinese fonts are installed on the system, and web pages display OK in the browser. In particular, I can see the correct Chinese filenames by pointing Chrome at file://C:\, for example.

    Read the article

  • Insert PDF image in MS Word

    - by serhio
    Hello. I have a .doc which I will convert to PDF. In this .doc I have an image. When I convert the doc to PDF and then zoom in, the image becomes ugly and pixelated. I found a tool that converted my bitmap .png image to a vector .pdf image. Now how can I insert the PDF image into MS Word (which I will finally convert to PDF once again)?

    Read the article

< Previous Page | 55 56 57 58 59 60 61 62 63 64 65 66  | Next Page >