Search Results

Search found 3171 results on 127 pages for 'pixel colors'.


  • How Did we get from CLI to Graphics?

    - by Nathaniel Bennett
    I'm confused about how graphics work, specifically within operating systems. How can a computer render a CLI/console alongside a GUI? GUIs are completely different from text, yet we have GUI windows that display text interfaces; in other words, how can we have a CLI inside a modern graphical operating system? That's what I'm mainly trying to get a grip on. How do graphics get rendered to the display? Is there some memory address that the GPU accesses which holds all the pixel data, with systems inside the OS that gather the pixel positions of windows and widgets, along with their Z index, and rasterize them to that memory, which the GPU then loads to the screen? And how are CLIs integrated with graphics? How does the OS tell the GPU that a certain part of the screen should display text while the rest displays pixel data? It's all very confusing. Shed some light on it, will you?
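    A toy sketch of the framebuffer model the question is reaching for (C# purely for illustration; no real OS works exactly like this, and the glyph bitmap below is made up): text in a terminal window is just glyph bitmaps rasterized into the same pixel memory as everything else.

    // A toy model: the display is a block of memory with one colour value per pixel.
    const int Width = 320, Height = 200;
    uint[] framebuffer = new uint[Width * Height];      // 32-bit ARGB, one entry per pixel

    // A made-up 8x8, 1-bit-per-pixel glyph for 'A' (each byte is one row).
    byte[] glyphA = { 0x18, 0x24, 0x42, 0x7E, 0x42, 0x42, 0x42, 0x00 };

    // "Printing" a character is nothing more than writing pixels into the buffer,
    // exactly like drawing a window border or an icon would be.
    void DrawGlyph(byte[] glyph, int px, int py, uint argb)
    {
        for (int row = 0; row < 8; row++)
            for (int col = 0; col < 8; col++)
                if ((glyph[row] & (0x80 >> col)) != 0)
                    framebuffer[(py + row) * Width + (px + col)] = argb;
    }

    DrawGlyph(glyphA, 10, 10, 0xFFFFFFFF);   // a white 'A' at pixel (10, 10)
    // A compositor does the same thing in spirit: every window, terminal included,
    // ends up as pixels in a buffer that the display hardware scans out.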

    Read the article

  • Just too bright

    - by Bunch
    Like a lot of folks I am using SSMS and VS pretty much all day, but staring at the text on the stark white background can be a bit much for my eyes after a while. I have seen quite a few different “themes” for these apps which change all the colors around to make them easier on your eyes. Some of them are pretty cool, but all I really wanted was to dim the background a little, not radically change the way everything looked. Since the stock colors for comments, breakpoints, keywords and the like are so familiar, I wanted a background that did not interfere with those colors. So I picked the following custom color for the item background. It comes off as a parchment-type color.
    Hue: 42     Red: 244
    Sat: 123    Green: 245
    Lum: 221    Blue: 224

    Read the article

  • How to structure my AdWords campaign well?

    - by Romain Dorange
    I am starting an AdWords campaign and will measure conversion rates using the AdWords conversion tracking pixel. A conversion might be an account creation or a concrete sale. As it will be a test campaign to get some insight on CTR, CR, etc. for the future, I am likely to try several configurations: two different ads with different landing URLs and messages (one focused on the product, the other containing a discount embedded in the URL), and 4 different groups/themes of keywords. I guess I have to build 4 ad groups based on the keywords, create 2 ads with the different messages, assign the two ads to each ad group, and follow the campaign precisely in the ads tab, where I can see the effectiveness of each ad per ad group (for a total of 8 lines of reporting). Am I right? Also, what are the KPIs I can get from an AdWords campaign to measure global effectiveness: ROI from concrete sales (tracking pixel with an e-commerce tag on the confirmation page), ROI from lead acquisition (tracking pixel on account creation), and the traffic increase from the campaign? Thanks a lot.

    Read the article

  • Outline Shader Effect for Orthogonal Geometry in XNA

    - by Griffin
    I just recently started learning the art of shading, but I can't give an outline width to 2D, concave geometry when restricted to a single vertex/pixel shader technique (thanks to XNA). The shape I need to outline has smooth, per-vertex coloring as well as opacity. The outline, which has smooth, per-vertex coloring, variable width, and opacity, cannot interfere with the original shape's colors. A pixel-depth border detection algorithm won't work because pixel depth isn't a shader model 3.0 semantic. Expanding geometry / redrawing won't work because it interferes with the original shape's colors. I'm wondering if I can do something with the stencil/depth buffer outside of the shader functions, since I have access to that through the graphics device, but I don't believe I'm able to manipulate the actual values. How might I do this?

    Read the article

  • How to structure my AdWords campaign for testing and different groups of keywords?

    - by Romain Dorange
    I am starting an AdWords campaign and will measure conversion rates using the AdWords conversion tracking pixel. A conversion might be an account creation or a concrete sale. As it will be a test campaign to get some insight on CTR, CR, etc. for the future, I am likely to try several configurations: two different ads with different landing URLs and messages (one focused on the product, the other containing a discount embedded in the URL), and 4 different groups or themes of keywords. I guess I have to build 4 ad groups based on the keywords, create 2 ads with the different messages, assign the two ads to each ad group, and follow the campaign precisely in the ads tab, where I can see the effectiveness of each ad per ad group (for a total of 8 lines of reporting). Also, what are the key performance indicators that I can get from an AdWords campaign to measure global effectiveness: return on investment from concrete sales (tracking pixel with an e-commerce tag on the confirmation page), return on investment from lead acquisition (tracking pixel on account creation), and the traffic increase from the campaign?

    Read the article

  • Why does distance field text rendering have a clear outline?

    - by jinhwan
    http://www.valvesoftware.com/publications/2007/SIGGRAPH2007_AlphaTestedMagnification.pdf All the steps for doing distance field rendering are clear, but 'how it works' is not clear to me. It looks like the distance field pixels created around the original pixel may affect the 2D texture sampling interpolation process, but I can't understand that interpolation. I've read that distance field rendering is processed with nearest-neighbour interpolation. If that is true, shouldn't distance field rendering produce a non-interpolated result? To my mind, it should look like retro-style pixel art. Where is my misunderstanding in this process? So far, I see no difference from an alpha test: both of them throw away every pixel that is not inside. How do the extra distance field pixels affect rendering under nearest-neighbour interpolation?
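    For what it's worth, a small C# sketch of how I read the paper (the names and the 0.5 threshold are illustrative, and the real lookup happens in a pixel shader): the texture unit bilinearly interpolates the stored distances, and only the interpolated value is thresholded, so the reconstructed edge can fall between texel centres. With true nearest-neighbour sampling of the distance texture the result really would look blocky, like a plain alpha test.

    using System;

    static class DistanceFieldSketch
    {
        // Bilinearly interpolate a stored distance field at normalized coords (u, v).
        static float SampleBilinear(float[,] field, float u, float v)
        {
            int h = field.GetLength(0), w = field.GetLength(1);
            float x = u * (w - 1), y = v * (h - 1);
            int x0 = (int)x, y0 = (int)y;
            int x1 = Math.Min(x0 + 1, w - 1), y1 = Math.Min(y0 + 1, h - 1);
            float fx = x - x0, fy = y - y0;
            float top    = field[y0, x0] * (1 - fx) + field[y0, x1] * fx;
            float bottom = field[y1, x0] * (1 - fx) + field[y1, x1] * fx;
            return top * (1 - fy) + bottom * fy;     // still a smooth distance value
        }

        // The edge test happens after interpolation, so it can land between texels.
        public static float Coverage(float[,] field, float u, float v)
        {
            return SampleBilinear(field, u, v) >= 0.5f ? 1f : 0f;
        }
    }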

    Read the article

  • What is a good method for coloring textures based on a palette in XNA?

    - by Bob
    I've been trying to work on a game with the look of an 8-bit game using XNA, specifically using the NES as a guide. The NES has a very specific palette and each sprite can use up to 4 colors from that palette. How could I emulate this? The way I currently accomplish this is with a texture whose defined values act as indexes into an array of colors I pass to the GPU. I imagine there must be a better way than this, but maybe this is the best way? I don't want to simply make sure I draw every sprite with the right colors, because I want to be able to dynamically alter the palette. I'd also prefer not to alter the texture directly using the CPU.
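    A hedged sketch of the C# side of that index-texture approach in XNA (the effect name, parameter name, and palette values are made up; the matching .fx would read the sprite's red channel and use it to index the Palette array). It assumes the usual Game plumbing (Content, spriteBatch) and an indexedSprite texture whose red channel holds values 0-3:

    // Four-entry "sub-palette" that can be swapped at runtime to recolor sprites.
    Color[] subPalette =
    {
        new Color(0, 0, 0),          // slot 0
        new Color(252, 152, 56),     // slot 1
        new Color(200, 76, 12),      // slot 2
        new Color(252, 252, 252),    // slot 3
    };

    // EffectParameter.SetValue takes Vector4[] for arrays of colors.
    Vector4[] paletteData = Array.ConvertAll(subPalette, c => c.ToVector4());

    Effect paletteEffect = Content.Load<Effect>("PaletteLookup");   // hypothetical .fx
    paletteEffect.Parameters["Palette"].SetValue(paletteData);

    // PointClamp keeps the index values exact; the custom effect replaces each
    // texel's index with the corresponding palette entry on the GPU.
    spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend,
                      SamplerState.PointClamp, null, null, paletteEffect);
    spriteBatch.Draw(indexedSprite, Vector2.Zero, Color.White);
    spriteBatch.End();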

    Read the article

  • Great site for creating color schemes

    - by Nick Harrison
    I have recently discovered a website that is a must-have for any developer who has struggled with picking the colors for their web site: http://colorschemedesigner.com/ You get several choices to determine how to specify the colors. This option brings in a complementary color in various shades along with the main color that you select on the color wheel. You also have the option of applying various adjustments to the scheme: you can make it pastel, increase or decrease the contrast, switch to gray tones, etc. You can also view a preview page as a light page or a dark page to see how the colors might be applied. Once you have everything the way you want it, you can switch over to the final tab and get the color list. Now none of us have an excuse for questionable color combinations.

    Read the article

  • Incorrect lighting results with deferred rendering

    - by Lasse
    I am trying to render a light-pass to a texture which I will later apply on the scene, but I seem to calculate the light position wrong. I am working in view-space. In the image above, I am outputting the attenuation of a point light which is currently covering the whole screen. The light is at position 0,10,0, and I transform it to view-space first:

    Vector4 pos;
    Vector4 tmp = new Vector4 (light.Position, 1); // Transform light position for shader
    Vector4.Transform (ref tmp, ref Camera.ViewMatrix, out pos);
    shader.SendUniform ("LightViewPosition", ref pos);

    Now to me that does not look as it should. What I think it should look like is that the white area should be on the center of the scene. The camera is at the corner of the scene, and it seems as if the light moves along with the camera. Here's the fragment shader code:

    void main(){
        // default black color
        vec3 color = vec3(0);

        // Pixel coordinates on screen without depth
        vec2 PixelCoordinates = gl_FragCoord.xy / ScreenSize;

        // Get pixel position using depth from texture
        vec4 depthtexel = texture( DepthTexture, PixelCoordinates );
        float depthSample = unpack_depth(depthtexel);

        // Get pixel coordinates on camera-space by multiplying the
        // coordinate on screen-space by inverse projection matrix
        vec4 world = (ImP * RemapMatrix * vec4(PixelCoordinates, depthSample, 1.0));

        // Undo the perspective calculations
        vec3 pixelPosition = (world.xyz / world.w) * 3;

        // How far the light should reach from it's point of origin
        float lightReach = LightColor.a / 2;

        // Vector in between light and pixel
        vec3 lightDir = (LightViewPosition.xyz - pixelPosition);
        float lightDistance = length(lightDir);
        vec3 lightDirN = normalize(lightDir);

        // Discard pixels too far from light source
        //if(lightReach < lightDistance) discard;

        // Get normal from texture
        vec3 normal = normalize((texture( NormalTexture, PixelCoordinates ).xyz * 2) - 1);

        // Half vector between the light direction and eye, used for specular component
        vec3 halfVector = normalize(lightDirN + normalize(-pixelPosition));

        // Dot product of normal and light direction
        float NdotL = dot(normal, lightDirN);

        float attenuation = pow(lightReach / lightDistance, LightFalloff);

        // If pixel is lit by the light
        if(NdotL > 0) {
            // I have moved stuff from here to above so I can debug them.
            // Diffuse light color
            color += LightColor.rgb * NdotL * attenuation;
            // Specular light color
            color += LightColor.xyz * pow(max(dot(halfVector, normal), 0.0), 4.0) * attenuation;
        }

        RT0 = vec4(color, 1);
        //RT0 = vec4(pixelPosition, 1);
        //RT0 = vec4(depthSample, depthSample, depthSample, 1);
        //RT0 = vec4(NdotL, NdotL, NdotL, 1);
        RT0 = vec4(attenuation, attenuation, attenuation, 1);
        //RT0 = vec4(lightReach, lightReach, lightReach, 1);
        //RT0 = depthtexel;
        //RT0 = 100 / vec4(lightDistance, lightDistance, lightDistance, 1);
        //RT0 = vec4(lightDirN, 1);
        //RT0 = vec4(halfVector, 1);
        //RT0 = vec4(LightColor.xyz,1);
        //RT0 = vec4(LightViewPosition.xyz/100, 1);
        //RT0 = vec4(LightPosition.xyz, 1);
        //RT0 = vec4(normal,1);
    }

    What am I doing wrong here?

    Read the article

  • ORE graphics using Remote Desktop Protocol

    - by Sherry LaMonica
    Oracle R Enterprise graphics are returned as raster, or bitmap, graphics. Raster images consist of tiny squares of color information, referred to as pixels, that form points of color to create a complete image. Plots that contain raster images render quickly in R and create small, high-quality exported image files in a wide variety of formats. However, it is a known issue that the rendering of raster images can be problematic when creating graphics using a Remote Desktop connection. Raster images do not display in the windows device using Remote Desktop under the default settings. This happens because Remote Desktop restricts the number of colors when connecting to a Windows machine to 16 bits per pixel, and interpolating raster graphics requires many colors, at least 32 bits per pixel. For example, this simple embedded R image plot will be returned in a raster-based format using a standalone Windows machine:

    R> library(ORE)
    R> ore.connect(user="rquser", sid="orcl", host="localhost", password="rquser", all=TRUE)
    R> ore.doEval(function() image(volcano, col=terrain.colors(30)))

    Here, we first load the ORE packages and connect to the database instance using database login credentials. The ore.doEval function executes the R code within the database embedded R engine and returns the image back to the client R session. Over a Remote Desktop connection under the default settings, this graph will appear blank due to the restricted number of colors. Users who encounter this issue have two options to display ORE graphics over Remote Desktop: either raise Remote Desktop's Color Depth or direct the plot output to an alternate device.

    Option #1: Raise Remote Desktop Color Depth setting
    In a Remote Desktop session, all environment variables, including display variables determining Color Depth, are determined by the RDP-Tcp connection settings. For example, users can reduce the Color Depth when connecting over a slow connection. The different settings are 15 bits, 16 bits, 24 bits, or 32 bits per pixel. To raise the Remote Desktop color depth: On the Windows server, launch Remote Desktop Session Host Configuration from the Accessories menu. Under Connections, right-click on RDP-Tcp and select Properties. On the Client Settings tab, either uncheck Limit Maximum Color Depth or set it to 32 bits per pixel. Click Apply, then OK, log out of the remote session and reconnect. After reconnecting, the Color Depth on the Display tab will be set to 32 bits per pixel. Raster graphics will now display as expected. For ORE users, the increased color depth results in slightly reduced performance during plot creation, but the graph will be created instead of displaying an empty plot.

    Option #2: Direct plot output to alternate device
    Plotting to a non-windows device is a good option if it's not possible to increase Remote Desktop Color Depth, or if performance is degraded when creating the graph. Several device drivers are available for off-screen graphics in R, such as postscript, pdf, and png. On-screen devices include windows, X11 and Cairo. Here we output to the Cairo device to render an on-screen raster graphic. The grid.raster function in the grid package is analogous to other grid graphical primitives - it draws a raster image within the current plot's grid.
    R> options(device = "CairoWin") # use Cairo device for plotting during the session
    R> library(Cairo) # load Cairo, grid and png libraries
    R> library(grid)
    R> library(png)
    R> res <- ore.doEval(function()image(volcano,col=terrain.colors(30))) # create embedded R plot
    R> img <- ore.pull(res, graphics = TRUE)$img[[1]] # extract image
    R> grid.raster(as.raster(readPNG(img)), interpolate = FALSE) # generate raster graph
    R> dev.off() # turn off first device

    By default, the interpolate argument to grid.raster is TRUE, which means that what is actually drawn by R is a linear interpolation of the pixels in the original image. Setting interpolate to FALSE uses a sample from the pixels in the original image. A list of graphics devices available in R can be found in the Devices help file from the grDevices package:

    R> help(Devices)

    Read the article

  • Image Erosion for face detection in C#

    - by Chris Dobinson
    Hi, I'm trying to implement face detection in C#. I currently have a black + white outline of a photo with a face within it (Here). However i'm now trying to remove the noise and then dilate the image in order to improve reliability when i implement the detection. The method I have so far is here: unsafe public Image Process(Image input) { Bitmap bmp = (Bitmap)input; Bitmap bmpSrc = (Bitmap)input; BitmapData bmData = bmp.LockBits(new Rectangle(0, 0, bmp.Width, bmp.Height), ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb); int stride = bmData.Stride; int stride2 = bmData.Stride * 2; IntPtr Scan0 = bmData.Scan0; byte* p = (byte*)(void*)Scan0; int nOffset = stride - bmp.Width * 3; int nWidth = bmp.Width - 2; int nHeight = bmp.Height - 2; var w = bmp.Width; var h = bmp.Height; var rp = p; var empty = CompareEmptyColor; byte c, cm; int i = 0; // Erode every pixel for (int y = 0; y < h; y++) { for (int x = 0; x < w; x++, i++) { // Middle pixel cm = p[y * w + x]; if (cm == empty) { continue; } // Row 0 // Left pixel if (x - 2 > 0 && y - 2 > 0) { c = p[(y - 2) * w + (x - 2)]; if (c == empty) { continue; } } // Middle left pixel if (x - 1 > 0 && y - 2 > 0) { c = p[(y - 2) * w + (x - 1)]; if (c == empty) { continue; } } if (y - 2 > 0) { c = p[(y - 2) * w + x]; if (c == empty) { continue; } } if (x + 1 < w && y - 2 > 0) { c = p[(y - 2) * w + (x + 1)]; if (c == empty) { continue; } } if (x + 2 < w && y - 2 > 0) { c = p[(y - 2) * w + (x + 2)]; if (c == empty) { continue; } } // Row 1 // Left pixel if (x - 2 > 0 && y - 1 > 0) { c = p[(y - 1) * w + (x - 2)]; if (c == empty) { continue; } } if (x - 1 > 0 && y - 1 > 0) { c = p[(y - 1) * w + (x - 1)]; if (c == empty) { continue; } } if (y - 1 > 0) { c = p[(y - 1) * w + x]; if (c == empty) { continue; } } if (x + 1 < w && y - 1 > 0) { c = p[(y - 1) * w + (x + 1)]; if (c == empty) { continue; } } if (x + 2 < w && y - 1 > 0) { c = p[(y - 1) * w + (x + 2)]; if (c == empty) { continue; } } // Row 2 if (x - 2 > 0) { c = p[y * w + (x - 2)]; if (c == empty) { continue; } } if (x - 1 > 0) { c = p[y * w + (x - 1)]; if (c == empty) { continue; } } if (x + 1 < w) { c = p[y * w + (x + 1)]; if (c == empty) { continue; } } if (x + 2 < w) { c = p[y * w + (x + 2)]; if (c == empty) { continue; } } // Row 3 if (x - 2 > 0 && y + 1 < h) { c = p[(y + 1) * w + (x - 2)]; if (c == empty) { continue; } } if (x - 1 > 0 && y + 1 < h) { c = p[(y + 1) * w + (x - 1)]; if (c == empty) { continue; } } if (y + 1 < h) { c = p[(y + 1) * w + x]; if (c == empty) { continue; } } if (x + 1 < w && y + 1 < h) { c = p[(y + 1) * w + (x + 1)]; if (c == empty) { continue; } } if (x + 2 < w && y + 1 < h) { c = p[(y + 1) * w + (x + 2)]; if (c == empty) { continue; } } // Row 4 if (x - 2 > 0 && y + 2 < h) { c = p[(y + 2) * w + (x - 2)]; if (c == empty) { continue; } } if (x - 1 > 0 && y + 2 < h) { c = p[(y + 2) * w + (x - 1)]; if (c == empty) { continue; } } if (y + 2 < h) { c = p[(y + 2) * w + x]; if (c == empty) { continue; } } if (x + 1 < w && y + 2 < h) { c = p[(y + 2) * w + (x + 1)]; if (c == empty) { continue; } } if (x + 2 < w && y + 2 < h) { c = p[(y + 2) * w + (x + 2)]; if (c == empty) { continue; } } // If all neighboring pixels are processed // it's clear that the current pixel is not a boundary pixel. 
rp[i] = cm; } } bmpSrc.UnlockBits(bmData); return bmpSrc; } As I understand it, in order to erode the image (and remove the noise), we need to check each pixel to see if it's surrounding pixels are black, and if so, then it is a border pixel and we need not keep it, which i believe my code does, so it is beyond me why it doesn't work. Any help or pointers would be greatly appreciated Thanks, Chris

    Read the article

  • Stable random color algorithm

    - by Olmo
    Here we have an interesting real-world algorithm requirement involving colors. 1) Nice random colors: in order to draw a beautiful chart (e.g. a pie chart) we need to pick a random set of colors that a) are different enough and b) play nicely together. That doesn't look hard; for example, you fix brightness and saturation and divide the hue into steps of 360/Num_sectors. 2) Stable: given Pie1 with sectors labelled ('A','B','C') and Pie2 with sectors labelled ('B','C','D'), it would be nice if color('B', pie1) = color('B', pie2), and the same for 'C' and so on, so people don't get confused when seeing similar updated charts, even if some sectors appear, some disappear, or the number of sectors changes. The label is the only stable thing. 3) Hard-coded colors: the algorithm accepts hard-coded label-color relationships as an input but still does a good job of (1) and (2) for the rest of the free labels. I think this algorithm, even if it looks quite ad hoc, will be useful in more than one situation. Any ideas?
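    A minimal sketch of requirements 1 and 2 (C#; the saturation/brightness constants and the FNV hash are arbitrary choices, not part of the question): derive the hue from a stable hash of the label, so a label keeps its color no matter which other sectors are present. Requirement 3 would just be a dictionary of hard-coded overrides consulted before falling back to the hash.

    using System;
    using System.Drawing;

    static class StableColors
    {
        public static Color ColorForLabel(string label)
        {
            // Stable hash: avoid string.GetHashCode(), which can vary between runs.
            uint hash = 2166136261;
            foreach (char c in label)
                hash = (hash ^ c) * 16777619;      // FNV-1a

            double hue = hash % 360;               // 0..359 degrees
            return FromHsv(hue, 0.65, 0.90);       // fixed saturation and brightness
        }

        // Minimal HSV to RGB conversion.
        static Color FromHsv(double h, double s, double v)
        {
            double c = v * s;
            double x = c * (1 - Math.Abs(h / 60 % 2 - 1));
            double m = v - c;
            double r, g, b;
            if (h < 60)       { r = c; g = x; b = 0; }
            else if (h < 120) { r = x; g = c; b = 0; }
            else if (h < 180) { r = 0; g = c; b = x; }
            else if (h < 240) { r = 0; g = x; b = c; }
            else if (h < 300) { r = x; g = 0; b = c; }
            else              { r = c; g = 0; b = x; }
            return Color.FromArgb(255, (int)((r + m) * 255), (int)((g + m) * 255), (int)((b + m) * 255));
        }
    }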

    Read the article

  • Sort algorithm with fewest number of operations

    - by luvieere
    What is the sorting algorithm with the fewest operations? I need to implement it in HLSL as part of a Pixel Shader 2.0 effect for WPF, so it needs a really small number of operations, considering the pixel shader's limitations. I need to sort 9 values, specifically the current pixel and its neighbors.
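    Not an answer so much as an illustration: for a fixed, tiny input size the usual fit for shader constraints is a sorting network, a hard-coded sequence of compare-and-swap steps with no data-dependent loops. A C# sketch of the building block follows (the 3-input network shown is complete; a 9-input network needs 25 such comparators, and if only the median of the 3x3 neighbourhood is needed there are smaller networks still):

    static class SortSketch
    {
        static void CompareSwap(float[] v, int a, int b)
        {
            // In HLSL this would typically be written with min()/max() instead of a branch.
            if (v[a] > v[b]) { float t = v[a]; v[a] = v[b]; v[b] = t; }
        }

        public static void SortNetwork3(float[] v)
        {
            // Complete, branch-order-fixed network for 3 values.
            CompareSwap(v, 0, 1);
            CompareSwap(v, 1, 2);
            CompareSwap(v, 0, 1);
        }
    }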

    Read the article

  • 9patch border issues

    - by synic
    Is there a trick to making 9-patch images with single-pixel borders? I'm not talking about the single-pixel black borders that you use to define the stretchable and content areas; I'm talking about the image itself. If I create an image that has a 1-pixel border around it, Android will often pick one of the edges and stretch it to 2 (or sometimes more) pixels, even though I specifically left the edges out of the stretchable area in the draw9patch utility.

    Read the article

  • WPF, ShowGridLines equivalent for wrap panel

    - by user275587
    I need to display a 1-pixel-wide border around all wrap panel cells, kind of like an Excel grid. Unfortunately the wrap panel does not implement the grid's ShowGridLines property. I can't put a border inside every cell, because adjacent cells would then have a 2-pixel border instead of 1 pixel. Since the wrap panel arranges its layout dynamically and does not expose its properties, I can't compute the correct border value for each cell. Is any workaround possible?
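    One commonly suggested workaround, sketched here in C# (untested against the poster's layout, and note it leaves the ragged right edge of a partially filled row without a closing line): give each cell a border on its left and top edges only, and let a single outer border supply the right and bottom edges, so neighbours never double up.

    // Assumes: using System.Windows; using System.Windows.Controls; using System.Windows.Media;
    var panel = new WrapPanel();
    var outer = new Border
    {
        BorderBrush = Brushes.Gray,
        BorderThickness = new Thickness(0, 0, 1, 1),   // close the right/bottom edges once
        Child = panel,
    };

    foreach (var label in new[] { "A1", "B2", "C3" })
    {
        panel.Children.Add(new Border
        {
            BorderBrush = Brushes.Gray,
            BorderThickness = new Thickness(1, 1, 0, 0), // each cell draws left/top only
            Padding = new Thickness(4),
            Child = new TextBlock { Text = label },
        });
    }
    // "outer" then goes wherever the bare WrapPanel used to go.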

    Read the article

  • How do I find the greatest of six or more numbers?

    - by Rajendra Bhole
    Hi,thanks in advance, I have the code in which i want to find out the greatest number from six numbers, the code as follows, if( (pixels->r == 244 || pixels->g == 242 || pixels->b == 245) || (pixels->r == 236 || pixels->g == 235 || pixels->b == 233) || (pixels->r == 250 || pixels->g == 249 || pixels->b == 247) || (pixels->r == 253 || pixels->g == 251 || pixels->b == 230) || (pixels->r == 253 || pixels->g == 246 || pixels->b == 230) || (pixels->r == 254 || pixels->g == 247 || pixels->b == 229)) { numberOfPixels1++; NSLog( @"Pixel data1 %d", numberOfPixels1); } if( (pixels->r == 250 || pixels->g == 240 || pixels->b == 239) ||(pixels->r == 243 || pixels->g == 234 || pixels->b == 229) || (pixels->r == 244 || pixels->g == 241 || pixels->b == 234) || (pixels->r == 251 || pixels->g == 252 || pixels->b == 244) || (pixels->r == 252 || pixels->g == 248 || pixels->b == 237) || (pixels->r == 254 || pixels->g == 246 || pixels->b == 225)) { numberOfPixels2++; NSLog( @"Pixel data2 %d", numberOfPixels2); } if( (pixels->r == 255 || pixels->g == 249 || pixels->b == 225) ||(pixels->r == 255 || pixels->g == 249 || pixels->b == 225) || (pixels->r == 241 || pixels->g == 231 || pixels->b == 195) || (pixels->r == 239 || pixels->g == 226 || pixels->b == 173) || (pixels->r == 224 || pixels->g == 210 || pixels->b == 147) || (pixels->r == 242 || pixels->g == 226 || pixels->b == 151)) { numberOfPixels3++; NSLog( @"Pixel data3 %d", numberOfPixels3); } if( (pixels->r == 235 || pixels->g == 214 || pixels->b == 159) ||(pixels->r == 235 || pixels->g == 217 || pixels->b == 133) || (pixels->r == 227 || pixels->g == 196 || pixels->b == 103) || (pixels->r == 225 || pixels->g == 193 || pixels->b == 106) || (pixels->r == 223 || pixels->g == 193 || pixels->b == 123) || (pixels->r == 222 || pixels->g == 184 || pixels->b == 119)) { numberOfPixels4++; NSLog( @"Pixel data4 %d", numberOfPixels4); } if( (pixels->r == 199 || pixels->g == 164 || pixels->b == 100) ||(pixels->r == 188 || pixels->g == 151 || pixels->b == 98) || (pixels->r == 156 || pixels->g == 107 || pixels->b == 67) || (pixels->r == 142 || pixels->g == 88 || pixels->b == 62) || (pixels->r == 121 || pixels->g == 77 || pixels->b == 48) || (pixels->r == 100 || pixels->g == 49 || pixels->b == 22)) { numberOfPixels5++; NSLog( @"Pixel data5 %d", numberOfPixels5); } if( (pixels->r == 101 || pixels->g == 48 || pixels->b == 32) ||(pixels->r == 96 || pixels->g == 49 || pixels->b == 33) || (pixels->r == 87 || pixels->g == 50 || pixels->b == 41) || (pixels->r == 64 || pixels->g == 32 || pixels->b == 21) || (pixels->r == 49 || pixels->g == 37 || pixels->b == 41) || (pixels->r == 27 || pixels->g == 28 || pixels->b == 46)) { numberOfPixels6++; NSLog( @"Pixel data6 %d", numberOfPixels6); } I have to find out greatest from numberOfPixels1....numberOfPixels6 from above code. There are any optimum way to find out the greatest number?

    Read the article

  • Adding data (not only text) to a multi column ListView (WPF)

    - by user811804
    I am working on a WPF application in C# (.NET 4.0) where I have a ListView with a GridView that has two columns. I dynamically want to add rows (in code). My dilemma is that only the first column will have regular text added to it. The second column will have an object that includes a multi column Grid with TextBlocks. (see link http://imageshack.us/photo/my-images/803/listview.png/) If I do what you would normally do when you want to enter text in all columns (ie. DisplayMemberBinding) all I get in the second column is the text "System.Windows.Grid", which obviously isn't what I want. For reference if I just try to add the Grid object (with the TextBlocks) with the code listView1.Items.Add(grid1) (not using DisplayMemberBinding) the object gets added to the second column only (with the first column being blank) and not how it normally works with text where the same text ends up in all columns. I hope my question is detailed enough and any help with this would be much appreciated. EDIT: I have tried the following code, howeever every time I click the button to add a new row every single row gets updated with the same datatemplate. (ie. the second column always shows the same data on every row.) xaml: <Window x:Class="TEST.MainWindow" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Name="AAA" Title="MainWindow" Height="350" Width="525" Loaded="Window_Loaded"> <Grid Name="grid1"> <Grid.ColumnDefinitions> <ColumnDefinition Width="374*" /> <ColumnDefinition Width="129*" /> </Grid.ColumnDefinitions> <Button Content="Button" Height="23" HorizontalAlignment="Left" Margin="21,12,0,0" Name="button1" VerticalAlignment="Top" Width="75" Grid.Column="1" Click="button1_Click" /> </Grid> code: public partial class MainWindow : Window { ListView listView1 = new ListView(); GridViewColumn viewCol2 = new GridViewColumn(); public MainWindow() { InitializeComponent(); Style style = new Style(typeof(ListViewItem)); style.Setters.Add(new Setter(ListViewItem.HorizontalContentAlignmentProperty, HorizontalAlignment.Stretch)); listView1.ItemContainerStyle = style; GridView gridView1 = new GridView(); listView1.View = gridView1; GridViewColumn viewCol1 = new GridViewColumn(); viewCol1.Header = "Option"; gridView1.Columns.Add(viewCol1); viewCol2.Header = "Value"; gridView1.Columns.Add(viewCol2); grid1.Children.Add(listView1); viewCol1.DisplayMemberBinding = new Binding("Option"); } private void Window_Loaded(object sender, RoutedEventArgs e) { } private void button1_Click(object sender, RoutedEventArgs e) { DataTemplate dataTemplate = new DataTemplate(); FrameworkElementFactory spFactory = new FrameworkElementFactory(typeof(Grid)); Random random = new Random(); int cols = random.Next(1, 6); int full = 100; for (int i = 0; i < cols; i++) { FrameworkElementFactory col1 = new FrameworkElementFactory(typeof(ColumnDefinition)); int partWidth = random.Next(0, full); full -= partWidth; col1.SetValue(ColumnDefinition.WidthProperty, new GridLength(partWidth, GridUnitType.Star)); spFactory.AppendChild(col1); } if (full > 0) { FrameworkElementFactory col1 = new FrameworkElementFactory(typeof(ColumnDefinition)); col1.SetValue(ColumnDefinition.WidthProperty, new GridLength(full, GridUnitType.Star)); spFactory.AppendChild(col1); } for (int i = 0; i < cols; i++) { FrameworkElementFactory text1 = new FrameworkElementFactory(typeof(TextBlock)); SolidColorBrush sb1 = new SolidColorBrush(); switch (i) { case 0: sb1.Color = Colors.Blue; break; case 1: sb1.Color 
= Colors.Red; break; case 2: sb1.Color = Colors.Yellow; break; case 3: sb1.Color = Colors.Green; break; case 4: sb1.Color = Colors.Purple; break; case 5: sb1.Color = Colors.Pink; break; case 6: sb1.Color = Colors.Brown; break; } text1.SetValue(TextBlock.BackgroundProperty, sb1); text1.SetValue(Grid.ColumnProperty, i); spFactory.AppendChild(text1); } if (full > 0) { FrameworkElementFactory text1 = new FrameworkElementFactory(typeof(TextBlock)); SolidColorBrush sb1 = new SolidColorBrush(Colors.Black); text1.SetValue(TextBlock.BackgroundProperty, sb1); text1.SetValue(Grid.ColumnProperty, cols); spFactory.AppendChild(text1); } dataTemplate.VisualTree = spFactory; viewCol2.CellTemplate = dataTemplate; int rows = listView1.Items.Count + 1; listView1.Items.Add(new { Option = "Row " + rows }); } }

    Read the article

  • How to select records as columns in SQL

    - by Leigh
    Hi, I have two tables: tblSizes and tblColors. tblColors has columns called ColorName, ColorPrice and SizeID. There is one size to multiple colors. I need to write a query that selects the size and all the colors (as columns) for that size, with the price of each color in its respective column. The colors must be returned as columns, for instance:
    SizeID : Width : Height : Red : Green : Blue
    1      : 220   : 220    : £15 : £20   : £29
    Hope this makes sense. Thank you

    Read the article

  • jQuery Color **Swatch** Picker

    - by Alan Storm
    Has anyone coded up a jQuery color picker that lets you pick colors from a predetermined list of colors? Something like a product color picker on an e-commerce site. Most of the searching I've done reveals a lot of general-purpose, pick-any-color-in-the-RGB-spectrum pickers, but few options for picking specific colors.

    Read the article

  • CreatePatternBrush and screen color depth

    - by Carlos Alloatti
    I am creating a brush using CreatePatternBrush with a bitmap created with CreateBitmap. The bitmap is 1 pixel wide and 24 pixels tall; I have the RGB value for each pixel, so I create an array of RGBQUADs and pass that to CreateBitmap. This works fine when the screen color depth is 32bpp, since the bitmap I create is also 32bpp. When the screen color depth is not 32bpp, this fails, and I understand why it does, since I should be creating a compatible bitmap instead. It seems I should use CreateCompatibleBitmap instead, but how do I put the pixel data I have into that bitmap? I have also read about CreateDIBPatternBrushPt, CreateDIBitmap, CreateDIBSection, etc. I don't understand what a DIBSection is, and find the subject generally confusing. I do understand that I need a bitmap with the same color depth as the screen, but how do I create it having only the 32bpp pixel data?

    Read the article

  • how to pass arguments into function within a function in r

    - by jon
    I am writing a function that involves other functions from base R with a lot of arguments. For example (the real function is much longer):

    myfunction <- function (dataframe, Colv = NA) {
        matrix <- as.matrix (dataframe)
        out <- heatmap(matrix, Colv = Colv)
        return(out)
    }
    data(mtcars)
    myfunction (mtcars, Colv = NA)

    heatmap has many arguments that can be passed to it:

    heatmap(x, Rowv=NULL, Colv=if(symm)"Rowv" else NULL,
            distfun = dist, hclustfun = hclust,
            reorderfun = function(d,w) reorder(d,w),
            add.expr, symm = FALSE, revC = identical(Colv, "Rowv"),
            scale=c("row", "column", "none"), na.rm = TRUE,
            margins = c(5, 5), ColSideColors, RowSideColors,
            cexRow = 0.2 + 1/log10(nr), cexCol = 0.2 + 1/log10(nc),
            labRow = NULL, labCol = NULL, main = NULL,
            xlab = NULL, ylab = NULL,
            keep.dendro = FALSE, verbose = getOption("verbose"), ...)

    I want to use these arguments without listing them inside myfunction:

    myfunction (mtcars, Colv = NA, col = topo.colors(16))
    Error in myfunction(mtcars, Colv = NA, col = topo.colors(16)) :
      unused argument(s) (col = topo.colors(16))

    I tried the following but it does not work:

    myfunction <- function (dataframe, Colv = NA) {
        matrix <- as.matrix (dataframe)
        out <- heatmap(matrix, Colv = Colv, ....)
        return(out)
    }
    data(mtcars)
    myfunction (mtcars, Colv = NA, col = topo.colors(16))

    Read the article

  • C++ converting binary(P5) image to ascii(P2) image (.pgm)

    - by tubby
    I am writing a simple program to convert grayscale binary (P5) to grayscale ascii (P2) but am having trouble reading in the binary and converting it to int. #include <iostream> #include <fstream> #include <sstream> using namespace::std; int usage(char* arg) { // exit program cout << arg << ": Error" << endl; return -1; } int main(int argc, char* argv[]) { int rows, cols, size, greylevels; string filetype; // open stream in binary mode ifstream istr(argv[1], ios::in | ios::binary); if(istr.fail()) return usage(argv[1]); // parse header istr >> filetype >> rows >> cols >> greylevels; size = rows * cols; // check data cout << "filetype: " << filetype << endl; cout << "rows: " << rows << endl; cout << "cols: " << cols << endl; cout << "greylevels: " << greylevels << endl; cout << "size: " << size << endl; // parse data values int* data = new int[size]; int fail_tracker = 0; // find which pixel failing on for(int* ptr = data; ptr < data+size; ptr++) { char t_ch; // read in binary char istr.read(&t_ch, sizeof(char)); // convert to integer int t_data = static_cast<int>(t_ch); // check if legal pixel if(t_data < 0 || t_data > greylevels) { cout << "Failed on pixel: " << fail_tracker << endl; cout << "Pixel value: " << t_data << endl; return usage(argv[1]); } // if passes add value to data array *ptr = t_data; fail_tracker++; } // close the stream istr.close(); // write a new P2 binary ascii image ofstream ostr("greyscale_ascii_version.pgm"); // write header ostr << "P2 " << rows << cols << greylevels << endl; // write data int line_ctr = 0; for(int* ptr = data; ptr < data+size; ptr++) { // print pixel value ostr << *ptr << " "; // endl every ~20 pixels for some readability if(++line_ctr % 20 == 0) ostr << endl; } ostr.close(); // clean up delete [] data; return 0; } sample image - Pulled this from an old post. Removed the comment within the image file as I am not worried about this functionality now. When compiled with g++ I get output: $> ./a.out a.pgm filetype: P5 rows: 1024 cols: 768 greylevels: 255 size: 786432 Failed on pixel: 1 Pixel value: -110 a.pgm: Error The image is a little duck and there's no way the pixel value can be -110...where am I going wrong? Thanks.

    Read the article

  • How to create a paging scrollView with space between views

    - by user1558819
    I followed the tutorial about how to create a scroll view page control: http://www.iosdevnotes.com/2011/03/uiscrollview-paging/ The tutorial is really good and I implemented the code without problems. Here is my question: I want to put a space between my page views, but when the page changes, the space between the views shows up in the next page. The scroll must stop after the space when I change the page. Can someone help me? I changed the tutorial code here:

    - (void)viewDidLoad {
        [super viewDidLoad];

        NSArray *colors = [NSArray arrayWithObjects:[UIColor redColor], [UIColor greenColor], [UIColor blueColor], nil];

    #define space 20

        for (int i = 0; i < colors.count; i++) {
            CGRect frame;
            frame.origin.x = (self.scrollView.frame.size.width + space) * i;
            frame.origin.y = 0;
            frame.size = self.scrollView.frame.size;

            UIView *subview = [[UIView alloc] initWithFrame:frame];
            subview.backgroundColor = [colors objectAtIndex:i];
            [self.scrollView addSubview:subview];

            [subview release];
        }

        self.scrollView.contentSize = CGSizeMake(self.scrollView.frame.size.width * colors.count + space*(colors.count-1), self.scrollView.frame.size.height);
    }

    Thanks,

    Read the article

  • Mysql query: count and distinct

    - by Azevedo
    I have this hypothetical MySQL table: TABLE cars (year, color, brand). How do I count the number of cars grouped by brand and year? I mean, how can I find out how many distinct colors there are per brand, like: brand "GM": total number of colors: 8; brand "GM": total number of years (grouped together): 14 (meaning there are 14 different years); brand "TOYOTA": total number of colors: 3; brand "TOYOTA": total number of years (grouped together): 10 (meaning there are 10 different years). I tried playing with some queries using COUNT, DISTINCT and GROUP BY but I can't get there. Actually I'm trying to get these 2 result sets:
    +-------+---------------+
    | brand | count(colors) |
    +-------+---------------+

    +-------+--------------+
    | brand | count(years) |
    +-------+--------------+
    Thanks a lot!

    Read the article

  • How do I store an image's pixels in matrix form?

    - by Rajendra Bhole
    Hi, I am developing an application in which I want to store the pixelized image in matrix format. The code is as follows:

    struct pixel {
        //unsigned char r, g, b,a;
        Byte r, g, b;
        int count;
    };

    (NSInteger) processImage1: (UIImage*) image {
        // Allocate a buffer big enough to hold all the pixels
        struct pixel* pixels = (struct pixel*) calloc(1, image.size.width * image.size.height * sizeof(struct pixel));
        if (pixels != nil) {
            // Create a new bitmap
            CGContextRef context = CGBitmapContextCreate(
                (void*) pixels,
                image.size.width,
                image.size.height,
                8,
                image.size.width * 4,
                CGImageGetColorSpace(image.CGImage),
                kCGImageAlphaPremultipliedLast
            );
            NSLog(@"1=%d, 2=%d, 3=%d", CGImageGetBitsPerComponent(image), CGImageGetBitsPerPixel(image), CGImageGetBytesPerRow(image));
            if (context != NULL) {
                // Draw the image in the bitmap
                CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, image.size.width, image.size.height), image.CGImage);
                NSUInteger numberOfPixels = image.size.width * image.size.height;

    I am confused about how to initialize the 2-D matrix in which to store the pixel data.
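    A hedged aside on the indexing itself (shown in C# purely for brevity; the layout question is the same in Objective-C): the buffer returned by calloc is already a matrix in row-major order, so element (row, col) lives at index row * width + col and no second copy is needed.

    struct Pixel { public byte R, G, B; public int Count; }

    static class PixelMatrix
    {
        // Row-major addressing: element (row, col) of an image that is `width` pixels wide.
        public static Pixel Get(Pixel[] buffer, int width, int row, int col)
            => buffer[row * width + col];

        public static void Set(Pixel[] buffer, int width, int row, int col, Pixel value)
            => buffer[row * width + col] = value;
    }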

    Read the article
