Search Results

Search found 2086 results on 84 pages for 'pixel shader'.

Page 75/84 | < Previous Page | 71 72 73 74 75 76 77 78 79 80 81 82  | Next Page >

  • Scaling Image to multiple sizes for Deep Zoom

    - by AnthonyWJones
    Let's assume I have a bitmap with a square aspect and a width of 2048 pixels. In order to create the set of files needed by Silverlight's DeepZoomImageTileSource I need to scale this bitmap to 1024, then to 512, then to 256, etc., down to a 1-pixel image. There are two, I suspect naive, approaches: (1) For each image required, scale the original full-size image to the required size. However, it seems excessive to be scaling the full image to the very small sizes. (2) Having scaled from one level to the next, discard the original image and use each successive scaled image as the source of the next smaller one. However, I suspect this would generate images in the 256-64 range with poorer fidelity than option 1. Note that unlike the Deep Zoom Composer, this tool is expected to act in an on-demand fashion, hence it needs to complete in a reasonable timeframe (tops 30 seconds). On the plus side I'm only creating a single multiscale image, not a pyramid of multiple high-res images. I am outside my comfort zone here, any graphics experts got any advice? Am I wrong about point 2? Is point 1 reasonably performant and I'm worrying about nothing? Option 3?
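
    A minimal sketch of approach 2 (successive halving) in plain Java, since the algorithm is the same on any platform; the Silverlight API is not shown and the class and method names here are hypothetical. Because every step is an exact 2:1 reduction, a bilinear filter comes close to averaging 2x2 blocks, which in practice keeps fidelity near what you would get by always scaling from the original:

        import java.awt.Graphics2D;
        import java.awt.RenderingHints;
        import java.awt.image.BufferedImage;
        import java.util.ArrayList;
        import java.util.List;

        public class PyramidBuilder {
            // Build every level by halving the previous level, never the full-size original.
            public static List<BufferedImage> buildLevels(BufferedImage full) {
                List<BufferedImage> levels = new ArrayList<>();
                levels.add(full);
                BufferedImage current = full;
                while (current.getWidth() > 1) {
                    int w = Math.max(1, current.getWidth() / 2);
                    int h = Math.max(1, current.getHeight() / 2);
                    BufferedImage half = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
                    Graphics2D g = half.createGraphics();
                    g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                            RenderingHints.VALUE_INTERPOLATION_BILINEAR);
                    g.drawImage(current, 0, 0, w, h, null);
                    g.dispose();
                    levels.add(half);
                    current = half; // the next level is derived from this one
                }
                return levels;
            }
        }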

    Read the article

  • GWT Animation final value is not respected

    - by brad
    I have a FlowPanel that I'm trying to animate back and forth like an iPhone nav. (See this post for my original question on how to do this.) So I have it "working" with the code shown below. I say working in quotes because I'm finding that the final position of my scroller is not precise and always changes when scrolling. The GWT.log always says the actual values I'm looking for, so for instance with the call below to scrollTo, my GWT.log says: ScrollStart: 0 scrollStop: -246 But when I actually inspect the element in Firebug, its CSS left position is never exactly -246px. Sometimes it's off by as much as 10px, so my panel has just stopped scrolling before finishing. The worst part is that this nav animates back and forth, so subsequent clicks can really throw it off, and I need pixel-perfect positioning otherwise the whole thing looks off. I don't even know where to start with debugging this other than what I've already done. Any tips are appreciated. Code to call animation scroller = new Scroller(); scroller.scrollTo(-246,400); Animation Code public class Scroller extends Animation { private FlowPanel scroller; private final Element e; public Scroller(){ scroller = new FlowPanel(); e = scroller.getElement(); } public void scrollTo(int position, int milliseconds) { scrollStart = e.getOffsetLeft(); scrollStop = position; GWT.log("ScrollStart: " + scrollStart + " scrollStop: " + scrollStop); run(milliseconds); } @Override protected void onUpdate(double progress) { double position = scrollStart + (progress * (scrollStop - scrollStart)); e.getStyle().setLeft(position, Style.Unit.PX); } }
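
    One likely cause is that nothing forces the element to land exactly on scrollStop: onUpdate is driven by timer ticks, and the next scroll then reads getOffsetLeft() from wherever the last tick stopped, so the error compounds. A hedged sketch of one common fix, written as an extra override inside the Scroller class above (scrollStart and scrollStop are assumed to be the int fields already used there):

        // GWT's Animation calls onComplete when the animation finishes, so the
        // element can be pinned to the exact target regardless of tick rounding.
        @Override
        protected void onComplete() {
            super.onComplete();
            e.getStyle().setLeft(scrollStop, Style.Unit.PX);
        }

    For the back-and-forth case it may also help to start the next animation from the previous scrollStop rather than from getOffsetLeft(), so rounding from one run cannot accumulate into the next.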

    Read the article

  • Turning off antialiasing in Löve2D

    - by cjanssen
    I'm using Löve2D for writing a small game. Löve2D is an open source game engine for Lua. The problem I'm encountering is that some antialias filter is automatically applied to your sprites when you draw them at non-integer positions. love.graphics.draw( sprite, x, y ) So when x or y is not round (for example, x=100.24), the sprite appears blurred. The same happens when the sprite size is not even, because (x,y) points to the center of the sprite. For example, a sprite which is 31x30 pixels will appear blurred again, because its pixels are painted at non-integer positions. Since I am using pixel art, I want to avoid this entirely, otherwise the art is destroyed by this effect. The workaround I am using so far is to force the coordinates to be round by littering the code with calls to math.floor(), and forcing all the sprites to have even sizes by adding a row or column of transparent pixels with the paint program, if needed. Is there some command to deactivate the antialiasing that I can call at program startup?

    Read the article

  • Using CSS max-height on an outer div to force scroll on an inner-div.

    - by Jay Neely
    I have an outer div with a variable height (and max-height) that's set with a specific pixel amount by JavaScript, containing two divs within. The 1st div is intended to hold a variable amount of content, e.g. a list of links. It has no height set. The 2nd div is intended to hold a fixed amount of content, and has a specific height set. Right now, the max-height isn't working. The 1st div keeps growing, even with overflow: auto; set, and pushes the 2nd div below it outside the bounds of the outer div. How can I make it so that when the 1st div gets too large for the outer div to contain both it and the fixed-height 2nd div, the 1st div will start to scroll? Example page: http://thevastdesign.com/scrollTest.html Thanks for any help. I'd appreciate a CSS solution the most, even if it requires some hacks. It only has to work in Firefox 3+, IE8, and IE7. Ideas?

    Read the article

  • How to accommodate for the iPhone 4 screen resolution?

    - by dontWatchMyProfile
    This is a programming question! Read on before you vote to close! According to Apple, the iPhone 4 has a new and better screen resolution: 3.5-inch (diagonal) widescreen Multi-Touch display, 960-by-640-pixel resolution at 326 ppi. This little detail affects our apps in a heavy way. Most of the demo apps on the net have one thing in common: they position views in the belief that the screen has a fixed size of 320 x 480 pixels. So what most, if not all, developers do is: they design everything in such a way that a touchable area is, for example, 50 x 50 pixels, just enough to tap it. Things have been positioned relative to the upper left to reach a specific position on screen, let's say the center, or somewhere at the bottom. Edit: It seems Apple has integrated a switch that lets you tell whether an app is high-res or not. Nice. When we develop high-resolution apps, they probably won't work on older devices. And if they do, they would suffer a lot from images four times the size, having to scale them down in memory.

    Read the article

  • Collision detection and how efficient it is

    - by Shadow
    How exactly do you implement collision detection? What are the costs involved? Do different platforms (C/C++, Java, Cocoa/iPhone, Flash, DirectX) have different optimizations for calculating collisions? And lastly, are there libraries available to do this for me, or some that I can just interpret for my platform of choice? As I understand it, you would need to loop through the collision map, find the area in question and then compare the input thing (e.g. a sprite) to the type of pixel that is in the questioned area. I understand the very basic idea, but I don't understand the underlying implementation, or even a higher-level one for that matter. It would seem that this type of detection, or any for that matter, is very costly. Tile map? Bit array? How are these created from an image (I would guess by looping and doing stuff)? The reason I ask this question is to get a better understanding of the efficiency behind the scenes and to understand exactly what is going on. Links, references, or examples would be very helpful. I know this question is a bit long-winded, so any help or references would be very welcome. Thanks SO!
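
    As a concrete anchor for the question: most engines split the work into a cheap broad phase (bounding boxes, grids or quadtrees, to reject most pairs) and an exact narrow phase (per-pixel mask comparison) that only runs when the cheap test passes. A minimal Java sketch of the broad-phase test, with hypothetical names:

        // Axis-aligned bounding box: the cheapest common collision test.
        public final class Aabb {
            public final float x, y, w, h;

            public Aabb(float x, float y, float w, float h) {
                this.x = x; this.y = y; this.w = w; this.h = h;
            }

            // True if the two rectangles overlap on both axes.
            public static boolean intersects(Aabb a, Aabb b) {
                return a.x < b.x + b.w && b.x < a.x + a.w
                    && a.y < b.y + b.h && b.y < a.y + a.h;
            }
        }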

    Read the article

  • Android - Read PNG image without alpha and decode as ARGB_8888

    - by loki666
    I try to read an image from the SD card (in the emulator) and then create a Bitmap image with the BitmapFactory.decodeByteArray method. I set the options: options.inPreferredConfig = Bitmap.Config.ARGB_8888 options.inDither = false Then I extract the pixels into a ByteBuffer. ByteBuffer buffer = ByteBuffer.allocateDirect(width*height*4) bitmap.copyPixelsToBuffer(buffer) I then use this ByteBuffer in the JNI to convert it into RGB format and want to calculate on it. But I always get wrong data - I tested without modifying the ByteBuffer. The only thing I do is pass it into the native method via JNI, cast it to an unsigned char* and convert it back into a ByteBuffer before returning it to Java. unsigned char* buffer = (unsigned char*)(env->GetDirectBufferAddress(byteBuffer)) jobject returnByteBuffer = env->NewDirectByteBuffer(buffer, length) Before displaying the image I get the data back with bitmap.copyPixelsFromBuffer( buffer ) But then it has wrong data in it. My question is whether this is because the image is internally converted into RGB_565, or what is wrong here? ..... I have an answer for it: yes, it is converted internally to RGB_565. Does anybody know how to create such a bitmap image from a PNG with the ARGB_8888 pixel format? If anybody has an idea, it would be great!
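
    A hedged sketch of the decode step using the standard Android BitmapFactory API (the JNI side is not shown): inPreferredConfig is only a hint, so it is worth checking the resulting config and forcing a copy if the decoder fell back to RGB_565:

        import android.graphics.Bitmap;
        import android.graphics.BitmapFactory;

        public final class PngDecoder {
            // Decode a PNG byte array and guarantee an ARGB_8888 bitmap before the pixels
            // are copied into a ByteBuffer for the JNI side.
            public static Bitmap decodeArgb8888(byte[] pngBytes) {
                BitmapFactory.Options options = new BitmapFactory.Options();
                options.inPreferredConfig = Bitmap.Config.ARGB_8888;
                options.inDither = false;
                options.inScaled = false; // avoid density scaling of the decoded image
                Bitmap bitmap = BitmapFactory.decodeByteArray(pngBytes, 0, pngBytes.length, options);
                if (bitmap.getConfig() != Bitmap.Config.ARGB_8888) {
                    // inPreferredConfig is only a hint; force the format if it was ignored.
                    bitmap = bitmap.copy(Bitmap.Config.ARGB_8888, false);
                }
                return bitmap;
            }
        }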

    Read the article

  • how to place dropdown list box in jquery grid column

    - by kumar
    Hello friends, I have jqGrid columns defined like this: using System; using System.Collections.Generic; using System.Linq; using System.Web; using Trirand.Web.Mvc; using System.Web.UI.WebControls; namespace JQGridMVCExamples.Models { public class OrdersJqGridModel { public OrdersJqGridModel() { OrdersGrid = new JQGrid { Columns = new List<JQGridColumn>() { new JQGridColumn { DataField = "OrderID", Width = 50 }, new JQGridColumn { DataField = "OrderDate", Width = 100, DataFormatString = "{0:d}" }, new JQGridColumn { DataField = "CustomerID", Width = 100 }, new JQGridColumn { DataField = "Freight", Width = 75 }, new JQGridColumn { DataField = "ShipName" } }, Width = Unit.Pixel(640) }; OrdersGrid.ToolBarSettings.ShowRefreshButton = true; } public JQGrid OrdersGrid { get; set; } } } and in the view I am calling it like this: <div> <%= Html.Trirand().JQGrid(Model.OrdersGrid, "JQGrid1") %> </div> The result is perfect, but for the Freight column in the grid I need to place a dropdown list dynamically for all result rows. Can anyone help me out? Thanks

    Read the article

  • 3.1.3 and 3.2: different behaviour

    - by teo
    I'm using a custom cell in a tableView with a UITextField: - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *CellIdentifier = @"Cell"; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (cell == nil) { cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleSubtitle reuseIdentifier:CellIdentifier] autorelease]; UITextField *txtField = [[UITextField alloc] initWithFrame:CGRectMake(0, 0, 280, 24)]; txtField.placeholder = @"<Enter Text>"; txtField.textAlignment = UITextAlignmentLeft; txtField.clearButtonMode = UITextFieldViewModeAlways; txtField.autocapitalizationType = UITextAutocapitalizationTypeNone; txtField.autocorrectionType = UITextAutocorrectionTypeNo; [cell.contentView addSubview:txtField]; [txtField release]; } } This works fine and the UITextField covers the cell. When I run this with the 3.2 SDK or on the iPad, the UITextField isn't aligned properly to the left, it overlaps the cell, and I have to use a UITextField width of 270 instead of 280: UITextField *txtField = [[UITextField alloc] initWithFrame:CGRectMake(0, 0, 270, 24)]; It seems something is wrong with the pixel ratio. How can this be fixed? Is there a way to determine the version of the OS the device has (3.1.2, 3.1.3, 3.2 or maybe even 4.0), or can it be done another way? Thank you Teo

    Read the article

  • Getting Depth Value on Kinect SDK 1.6

    - by AlexanderPD
    This is my first try with the Kinect and the Kinect SDK, so I'm having a lot of "newbie issues" :) My goal is to point my mouse at the Kinect standard video output and get the depth value. I already have both the normal video and the depth video outputs by using the two "Color Basic-WPF" and "Depth Basic-WPF" samples, and handling mouse events or position is not a problem. In fact I already did it all and I already get a depth value, but this value is always HIGHLY imprecise. It jumps from 500 to 4000 just by moving to the next pixel on a flat surface. So I'm pretty sure I'm reading the depth value in the wrong way. This is how I read it: short debugValue = depthPixels[x*y].Depth; debug.Text = "X = "+x+", Y = "+y+", value = "+debugValue.ToString(); I know it's pretty out of context; this little piece of code is inside the same SensorDepthFrameReady function in "Depth Basic-WPF"! "x" and "y" are the mouse coordinates and depthPixels is of type DepthImagePixel[], a temporary array filled with the "depthFrame.CopyDepthImagePixelDataTo(this.depthPixels);" instruction. The depth frame is filled here: DepthImageFrame depthFrame = e.OpenDepthImageFrame() The "e" comes from here: private void SensorDepthFrameReady(object sender, DepthImageFrameReadyEventArgs e) and this last one is called here: this.sensor.DepthFrameReady += this.SensorDepthFrameReady; How must I handle the depth value I get? I know the value should be between 800 and 4000, but I get values between about 500 and about 8000. I have already googled a lot (here on SO too) and I still can't understand whether the depth value is 11 or 13 bits. The SDK examples shrink this value to 8 bits, and this is making even more confusion in my head :(
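
    One detail worth double-checking first: a depth frame is a row-major array, so the sample under the mouse at (x, y) lives at index y * frameWidth + x, whereas the snippet above indexes with x*y. A tiny generic sketch of the lookup (plain Java, not the Kinect SDK types):

        // Row-major lookup: the value for pixel (x, y) in a width-by-height frame.
        public static int depthAt(short[] depthMillimetres, int frameWidth, int x, int y) {
            return depthMillimetres[y * frameWidth + x];
        }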

    Read the article

  • Popup window size in android

    - by Bostjan
    I'm creating a popup window in a ListActivity in the onListItemClick event. LayoutInflater inflater = (LayoutInflater) this.getSystemService(Context.LAYOUT_INFLATER_SERVICE); View pop = inflater.inflate(R.layout.popupcontact, null, false); ImageView atnot = (ImageView)pop.findViewById(R.id.aNot); height = pop.getMeasuredHeight(); width = pop.getMeasuredWidth(); Log.e("pw","height: "+String.valueOf(height)+", width: "+String.valueOf(width)); atnot.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { pw.dismiss(); } }); pw = new PopupWindow(pop, width, height, true); // The code below assumes that the root container has an id called 'main' //pw.showAtLocation(v, Gravity.CENTER, 0, 0); pw.showAsDropDown(v, 10, 5); Now, the height and width variables were supposed to be the height and width of the layout used for the popup window (popupcontact), but they return 0. I guess that is because the layout isn't rendered yet. Does anyone have a clue how I can control the size of the popup window without having to use absolute pixel values?
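
    A hedged sketch of the usual workaround, using the standard Android View API: a freshly inflated layout reports 0 for its measured size until it has been measured, so measure it by hand with UNSPECIFIED specs before creating the PopupWindow:

        import android.view.View;
        import android.widget.PopupWindow;

        // Inside the ListActivity from the question (names are illustrative).
        private PopupWindow showPopup(View pop, View anchor) {
            // Ask the view to measure itself without any size constraints.
            pop.measure(View.MeasureSpec.makeMeasureSpec(0, View.MeasureSpec.UNSPECIFIED),
                        View.MeasureSpec.makeMeasureSpec(0, View.MeasureSpec.UNSPECIFIED));
            PopupWindow pw = new PopupWindow(pop, pop.getMeasuredWidth(),
                    pop.getMeasuredHeight(), true);
            pw.showAsDropDown(anchor, 10, 5);
            return pw;
        }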

    Read the article

  • CIE XYZ colorspace: do I have RGBA or XYZA?

    - by Tronic
    I plan to write a painting program based on linear combinations of the xy plane points (0,1), (1,0) and (0,0). Such a system works identically to RGB, except that the primaries are not within the gamut but at the corners of a triangle that encloses the entire gamut. I have seen the three points being referred to as X, Y and Z (upper case) somewhere, but I cannot find the page anymore (I marked them in the picture myself). My pixel format stores the intensity of each of those three components the same way RGB does, together with an alpha value. This allows using pretty much any image manipulation operation designed for RGBA without modifying the code. What is my format called? Is it XYZA, RGBA or something else? Google doesn't seem to know of XYZA. RGBA will get confused with sRGB + alpha (which I also need to use in the same program). Notice that the primaries X, Y and Z and their intensities have little to do with the x, y and z coordinates (lower case) that are more commonly used.

    Read the article

  • GDI+ Rotated sub-image

    - by Andrew Robinson
    I have a rather large (30MB) image that I would like to take a small "slice" out of. The slice needs to represent a rotated portion of the original image. The following works but the corners are empty and it appears that I am taking a rectangular area of the original image, then rotating that and drawing it on an unrotated surface resulting in the missing corners. What I want is a rotated selection on the original image that is then drawn on an unrotated surface. I know I can first rotate the original image to accomplish this but this seems inefficient given its size. Any suggestions? Thanks, public Image SubImage(Image image, int x, int y, int width, int height, float angle) { var bitmap = new Bitmap(width, height); using (Graphics graphics = Graphics.FromImage(bitmap)) { graphics.TranslateTransform(bitmap.Width / 2.0f, bitmap.Height / 2.0f); graphics.RotateTransform(angle); graphics.TranslateTransform(-bitmap.Width / 2.0f, -bitmap.Height / 2.0f); graphics.DrawImage(image, new Rectangle(0, 0, width, height), x, y, width, height, GraphicsUnit.Pixel); } return bitmap; }
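
    The usual trick is to rotate the source about the centre of the requested slice while drawing onto the unrotated target, so the region that gets copied is itself a rotated rectangle of the original and the corners of the output stay filled. A hedged sketch of the same idea in Java2D (the question is GDI+, so treat this only as an illustration of the transform order; the signature is hypothetical and takes the slice centre rather than its top-left corner):

        import java.awt.Graphics2D;
        import java.awt.RenderingHints;
        import java.awt.geom.AffineTransform;
        import java.awt.image.BufferedImage;

        // Extracts a w-by-h slice whose centre is (cx, cy) in the source and which is
        // rotated by angleDegrees there, drawn upright into the output image.
        public static BufferedImage rotatedSlice(BufferedImage src, double cx, double cy,
                                                 int w, int h, double angleDegrees) {
            BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
            Graphics2D g = out.createGraphics();
            g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                    RenderingHints.VALUE_INTERPOLATION_BILINEAR);
            AffineTransform t = new AffineTransform();
            t.translate(w / 2.0, h / 2.0);            // centre of the output surface
            t.rotate(Math.toRadians(-angleDegrees));  // undo the slice's rotation
            t.translate(-cx, -cy);                    // bring the slice centre to the origin
            g.drawImage(src, t, null);                // source pixels outside w-by-h are clipped
            g.dispose();
            return out;
        }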

    Read the article

  • Javascript/Canvas/Images scaling problem in Firefox

    - by DocTiger
    I have a problem with the 2D context's drawImage function. Whenever I scale an image, it gets a dark border of one pixel, which is kind of ugly. That only happens in Firefox, not in Opera or WebKit. Is this an antialiasing problem? For hours I studied the examples and available documentation without getting rid of it... I couldn't try it on another computer yet, so maybe, just maybe, it's an issue with the graphics hardware/drivers. I have reproduced this effect with this minimal snippet, assuming exp.jpg is sized 200x200 pixels. <html> <body> <canvas id="canvas" width="400" height="400"></canvas> </body> <script type="text/javascript" src="../../media/pinax/js/jquery-1.3.2.min.js"></script> <script type="text/javascript" > context = $('#canvas')[0].getContext('2d'); img = new Image(); img.src = "exp.jpg"; //while (!img.complete); context.drawImage(img, 2,2,199,199); context.drawImage(img, 199,2,199,199); </script> </html>

    Read the article

  • Inconsistent table width when hiding/showing a set of columns

    - by Salman A. Kagzi
    I have an HTML table of around 40+ columns. To make this table fit on the screen and present the data in a readable format, we have sections in this table, i.e. some columns are always visible and the remainder are made visible when a specific radio button (describing a section) is selected. Each radio button is associated with a different number of columns. We show/hide a column by setting/removing the "display:none" style on all the cells under that column. This all works just fine. Now the real problem is with the width of the columns in this table. I can't use fixed widths with pixel settings. I have tried using percentage settings by giving 50% to the always-visible part, with the remaining 50% divided between the columns in a section. But I am unable to get consistent behaviour, i.e. the size of the table columns differs between IE and FF. Some columns are just right while some are really huge. How can I get the table to give consistent column widths across browsers?

    Read the article

  • How to optimize Conway's game of life for CUDA?

    - by nlight
    I've written this CUDA kernel for Conway's game of life: __global__ void gameOfLife(float* returnBuffer, int width, int height) { unsigned int x = blockIdx.x*blockDim.x + threadIdx.x; unsigned int y = blockIdx.y*blockDim.y + threadIdx.y; float p = tex2D(inputTex, x, y); float neighbors = 0; neighbors += tex2D(inputTex, x+1, y); neighbors += tex2D(inputTex, x-1, y); neighbors += tex2D(inputTex, x, y+1); neighbors += tex2D(inputTex, x, y-1); neighbors += tex2D(inputTex, x+1, y+1); neighbors += tex2D(inputTex, x-1, y-1); neighbors += tex2D(inputTex, x-1, y+1); neighbors += tex2D(inputTex, x+1, y-1); __syncthreads(); float final = 0; if(neighbors < 2) final = 0; else if(neighbors > 3) final = 0; else if(p != 0) final = 1; else if(neighbors == 3) final = 1; __syncthreads(); returnBuffer[x + y*width] = final; } I am looking for errors/optimizations. Parallel programming is quite new to me and I am not sure if I get how to do it right. The rest of the app is: memcpy the input array to a 2D texture inputTex stored in a CUDA array. The output is memcpy-ed from global memory to the host and then dealt with. As you can see, a thread deals with a single pixel. I am unsure if that is the fastest way, as some sources suggest doing a row or more per thread. If I understand correctly, NVIDIA themselves say that the more threads, the better. I would love advice on this from someone with practical experience.

    Read the article

  • How to read time from recorded surveillance camera video?

    - by stressed_geek
    I have a problem where I have to read the time of recording from video recorded by a surveillance camera. The time shows up in the top-left area of the video. Below is a link to a screen grab of the area which shows the time. Also, the digit color (white/black) keeps changing during the course of the video. http://i55.tinypic.com/2j5gca8.png Please guide me on how to approach this problem. I am a Java programmer, so I would prefer an approach through Java. EDIT: Thanks unhillbilly for the comment. I had looked at the Ron Cemer OCR library and its performance is well below our requirements. Since the OCR performance is less than desired, I was planning to build a character set using the screen grabs of all the digits, and then use some image/pixel comparison library to compare the frame time with the character set, which would give a probabilistic result after comparison. So I was looking for a good image comparison library (I would be OK with a non-Java library which I can run from the command line). Also, any advice on the above approach would be really helpful.
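
    A minimal Java sketch of the probabilistic comparison described above, with hypothetical names: binarise both the captured digit and a stored template of the same size, then score them by the fraction of pixels that agree. Since the digit colour flips between black and white, the inverted template would be scored as well and the better of the two kept:

        import java.awt.image.BufferedImage;

        public final class GlyphMatcher {
            // Fraction of pixels whose black/white classification matches the template.
            // Assumes the glyph has already been cropped to the template's size.
            public static double similarity(BufferedImage glyph, BufferedImage template, int threshold) {
                int w = template.getWidth(), h = template.getHeight();
                int agree = 0;
                for (int y = 0; y < h; y++) {
                    for (int x = 0; x < w; x++) {
                        boolean a = luminance(glyph.getRGB(x, y)) > threshold;
                        boolean b = luminance(template.getRGB(x, y)) > threshold;
                        if (a == b) agree++;
                    }
                }
                return (double) agree / (w * h);
            }

            private static int luminance(int argb) {
                int r = (argb >> 16) & 0xFF, g = (argb >> 8) & 0xFF, b = argb & 0xFF;
                return (r * 299 + g * 587 + b * 114) / 1000;
            }
        }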

    Read the article

  • How can you transform a set of numbers into mostly whole ones?

    - by Alice
    Small amount of background: I am working on a converter that bridges between a map maker (Tiled) that outputs XML, and an engine (Angel2D) that inputs Lua tables. Most of this is straightforward. However, Tiled outputs pixel offsets (integers of absolute values), while Angel2D inputs OpenGL units (floats of relative values); a conversion factor between the two is needed (for example, 32px = 1gu). Since OpenGL units are abstract, and the camera can zoom in or out if the objects are too small or big, the actual conversion factor isn't important; I could use a random number, and the user would merely have to zoom in or out. But it would be best if the conversion factor was selected such that most numbers output were small and whole (or fractions of small whole numbers), because that makes them easier to work with (and the whole point of the OpenGL units is that they are easy to work with). How would I find such a conversion factor reliably? My first attempt was to use the smallest number given; this resulted in no fractions below 1, but often led to lots of decimal places where the factors didn't line up. Then I tried the mode of the sequence, which led to the largest number of 1's possible, but often led to very long floats for background images. My current approach takes the GCD of the whole sequence, which, when it works, works great, but can easily be thrown off course by a single bad apple. Note that while I could easily just pass the numbers I am given along, or pick some fixed factor, or use one of the conversions I specified above, I am looking for a method to reliably scale this list of integers to small, whole numbers or simple fractions, because this would most likely be unsurprising to the end user; this is not a one-off conversion. The end users tend to use 1.0 as their "base" for manipulations (because it's simple and obvious), so it would make more sense for the sizes of entities to cluster around this.
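
    A hedged sketch of a more forgiving variant of the GCD idea, in Java with hypothetical names: instead of requiring every offset to share a divisor, accept the largest candidate that divides some required fraction of them, so a single bad apple cannot drag the factor down to 1:

        import java.util.List;

        public final class ScaleFactor {
            // requiredFraction of e.g. 0.95 means the chosen factor must divide 95%
            // of the offsets; one stray value can no longer force the answer to 1.
            public static int robustFactor(List<Integer> offsets, double requiredFraction) {
                int max = offsets.stream().mapToInt(v -> Math.abs(v)).max().orElse(1);
                for (int candidate = max; candidate > 1; candidate--) {
                    final int c = candidate;
                    long hits = offsets.stream().filter(v -> v % c == 0).count();
                    if (hits >= requiredFraction * offsets.size()) {
                        return candidate; // largest divisor that covers enough of the values
                    }
                }
                return 1; // nothing better found: pass the pixel values through unchanged
            }
        }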

    Read the article

  • Vehicle License Plate Detection

    - by Ash
    Hey all. Basically, for my final project at university I'm developing a vehicle license plate detection application. Now, I consider myself an intermediate programmer, but my mathematics knowledge doesn't go beyond secondary school, so producing detection formulae from scratch is basically impossible. I've spent a good amount of time looking up academic papers such as: http://www.scribd.com/doc/266575/Detecting-Vehicle-License-Plates-in-Images http://www.cic.unb.br/~mylene/PI_2010_2/ICIP10/pdfs/0003945.pdf http://www.eurasip.org/Proceedings/Eusipco/Eusipco2007/Papers/d3l-b05.pdf When it comes to the maths, I'm lost. Because of this, testing various graphic conversions proved productive (the before/after example images are not reproduced in this excerpt). However, that approach is only catered to one particular image, and if the same techniques were applied to different images I'm sure a different, most likely poorer, conversion would occur. I've read about a transform called the bottom-hat morphology transform, which according to the first paper does the following: "Basically, the transformation keeps all the dark details of the picture, and eliminates everything else (including bigger dark regions and light regions)." Sadly I can't find much information on this, however the image within the documentation near the end of the report shows its effectiveness. I'm aware this is complicated and vast; I'd just appreciate a little advice, even in terms of which transformation techniques I should focus on developing, or algorithms for edge detection or pixel detection. A few things I need to add: I'm developing in C#, I'm confining the project to UK registration plates only, and I can basically choose the images to convert as a demonstration. Thanks
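
    For reference, the transform named above has a compact definition that needs no new formulae: the bottom-hat (black top-hat) of an image I with structuring element B is bottomhat(I) = closing(I, B) - I, where the closing is a dilation followed by an erosion. What survives the subtraction is exactly the dark detail smaller than B, which is why it keeps the plate characters while discarding larger dark regions and light regions; erosion, dilation and closing are available ready-made in most imaging libraries, so applying it is mostly a matter of choosing B.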

    Read the article

  • Moving UIScrollView in App

    - by jsetting32
    Currently, I am attempting to create the same effect as the Yahoo! Weather app, where the vital day information is at the bottom of the page on top of a UIScrollView that's contained by a UIView. I am having a hard time thinking about how this is going to happen or how I should implement it. If the user taps on the top of the UIScrollView, which is located near the bottom of the loaded UIView, and starts to scroll up, the UIScrollView's frame should be moved to the TOP of the current UIView's frame. So the UIScrollView's y-value should change to the UIView's (self.view.frame.origin.y) if the user starts scrolling UP on the UIScrollView, which is located at the UIView's y-pixel ~280. Here's what the UIViewController should look like at the beginning of loading the ViewController... Then once the user slides his finger from the bottom to the top of the screen, this should happen... And when the user scrolls to the top of the UIScrollView with all the content within it, the view should go back to the start picture shown... How is this done? I was thinking several UIGestureRecognizers and instantiating the UIScrollView at the lower part of the UIView... _weatherView = [[UIScrollView alloc] initWithFrame:CGRectMake(self.view.frame.origin.x, self.view.frame.origin.y + 250, self.view.bounds.size.width, self.view.bounds.size.height - 44)]; _weatherView.contentSize = CGSizeMake(self.view.bounds.size.width, self.view.bounds.size.height * 4); _weatherView.backgroundColor = [UIColor clearColor]; [self.view addSubview:_weatherView]; Then adding some UIGestureRecognizer delegate methods... But does anyone have any ideas on which UIGestureRecognizer delegate methods to use and how they should be implemented? I can write the pseudo-code but I am having problems finding the delegate methods :P Thank you!!!

    Read the article

  • imageconvolution leaves black dot in the upper left corner

    - by Peter O.
    I'm trying to sharpen resized images using this code: imageconvolution($imageResource, array( array( -1, -1, -1 ), array( -1, 16, -1 ), array( -1, -1, -1 ), ), 8, 0); When a transparent PNG image is sharpened using the code above, it appears with a black dot in the upper left corner (I have tried different convolution kernels, but the result is the same). After resizing, the image looked OK. The 1st image is the original one, the 2nd image is the sharpened one. EDIT: What am I doing wrong? I'm using the color retrieved from the pixel: $color = imagecolorat($imageResource, 0, 0); imageconvolution($imageResource, array( array( -1, -1, -1 ), array( -1, 16, -1 ), array( -1, -1, -1 ), ), 8, 0); imagesetpixel($imageResource, 0, 0, $color); Is imagecolorat the right function? Or is the position correct? EDIT2: I have changed the coordinates, but still no luck. I've checked the transparency given by imagecolorat (according to this post). This is the dump: array(4) { red => 0 green => 0 blue => 0 alpha => 127 } Alpha 127 = 100% transparent. Those zeroes might cause the problem...

    Read the article

  • VB.NET trying simple captcha

    - by Pride Grimm
    I'm trying to write a simple captcha program in VB.NET. I just want to make an image from random numbers and display it, check the answer, and then proceed. I'm pretty new to VB.NET, so I found some code to generate the information. I will cite the owner when I find it again (http://www.knowlegezone.com/80/article/Technology/Software/Asp-Net/Simple-ASP-NET-CAPTCHA-Tutorial). This is in the onload() of default2.aspx: Public Sub returnNumer() Dim num1 As New Random Dim num2 As New Random Dim numQ1 As Integer Dim numQ2 As Integer Dim QString As String numQ1 = num1.Next(10, 15) numQ2 = num2.Next(17, 31) QString = numQ1.ToString + " + " + numQ2.ToString + " = " Session("answer") = numQ1 + numQ2 Dim bitmap As New Bitmap(85, 35) Dim Grfx As Graphics = Graphics.FromImage(bitmap) Dim font As New Font("Arial", 18, FontStyle.Bold, GraphicsUnit.Pixel) Dim Rect As New Rectangle(0, 0, 100, 50) Grfx.FillRectangle(Brushes.Brown, Rect) Grfx.DrawRectangle(Pens.PeachPuff, Rect) ' Border Grfx.DrawString(QString, font, Brushes.Azure, 0, 0) Response.ContentType = "Image/jpeg" bitmap.Save(Response.OutputStream, System.Drawing.Imaging.ImageFormat.Jpeg) bitmap.Dispose() Grfx.Dispose() End Sub So I put this in a separate page and reference it from the first page. This all works fine and dandy, but when I get the answer from the session like this: Dim literal As String = Convert.ToString(Session("answer")) it's always one behind. So if the image adds to 32, the answer in the session isn't 32, but after a refresh (and a new image) the Session("answer") will be 32. Is there a way to refresh the session on page 1 after default2.aspx loads? Is there a better way to do this? I thought about trying to run the code all on one page, and trying to set the src of an image to returnNumber(), but I need a bit of help on that one.

    Read the article

  • RetinaJS and LESS : Background image doesn't show on iOS

    - by jidma
    I am trying to make a background image into a retina image using LESS CSS and RetinaJS. In my index.html file: <html> <head> <meta http-equiv="content-type" content="text/html; charset=UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=0, minimum-scale=1.0, maximum-scale=1.0"> <meta name="apple-mobile-web-app-capable" content="yes"> <meta name="apple-mobile-web-app-status-bar-style" content="black"> [...] <link type="text/css" rel="stylesheet/less" href="resources/css/retina.less"> <script type="text/javascript" src="resources/js/less-1.3.0.minjs" ></script> [...] </head> <body> [...] <script type="text/javascript" src="resources/js/retina.js"></script> </body> </html> In my retina.less file: .at2x(@path, @w: auto, @h: auto) { background-image: url("@{path}"); @at2x_path: ~`"@{path}".split('.').slice(0, "@{path}".split('.').length - 1).join(".") + "@2x" + "." + "@{path}".split('.')["@{path}".split('.').length - 1]`; @media all and (-webkit-min-device-pixel-ratio : 1.5) { background-image: url("@{at2x_path}"); background-size: @w @h; } } .topMenu { .at2x('../../resources/img/topMenuTitle.png'); } I have both topMenuTitle.png (320px x 40px) and topMenuTitle@2x.png (640px x 80px) in the same folder. When I test this code: in Firefox I get the normal background; in the Xcode iPhone simulator I also get the normal background; on the iPhone device, I don't get any background at all. I'm using GWT if that matters. Any suggestions? Thanks.

    Read the article

  • Where is the "ListViewItemPlaceholderBackgroundThemeBrush" located?

    - by Dimi Toulakis
    I have a problem understanding one style definition in Windows 8 Metro apps. When you create a Metro style application with VS, there is also a folder named Common created. Inside this folder there is a file called StandardStyles.xaml. Now the following snippet is from this file: <!-- Grid-appropriate 250 pixel square item template as seen in the GroupedItemsPage and ItemsPage --> <DataTemplate x:Key="Standard250x250ItemTemplate"> <Grid HorizontalAlignment="Left" Width="250" Height="250"> <Border Background="{StaticResource ListViewItemPlaceholderBackgroundThemeBrush}"> <Image Source="{Binding Image}" Stretch="UniformToFill"/> </Border> <StackPanel VerticalAlignment="Bottom" Background="{StaticResource ListViewItemOverlayBackgroundThemeBrush}"> <TextBlock Text="{Binding Title}" Foreground="{StaticResource ListViewItemOverlayForegroundThemeBrush}" Style="{StaticResource TitleTextStyle}" Height="60" Margin="15,0,15,0"/> <TextBlock Text="{Binding Subtitle}" Foreground="{StaticResource ListViewItemOverlaySecondaryForegroundThemeBrush}" Style="{StaticResource CaptionTextStyle}" TextWrapping="NoWrap" Margin="15,0,15,10"/> </StackPanel> </Grid> </DataTemplate> What I do not understand here is the static resource definition, e.g. for the Border: Background="{StaticResource ListViewItemPlaceholderBackgroundThemeBrush}" It is not about how you work with templates and binding and resources. Where is this ListViewItemPlaceholderBackgroundThemeBrush located? Many thanks for your help. Dimi

    Read the article

  • Chrome targeted CSS

    - by Chris
    I have some CSS code that hides the cursor on a web page (it is a client-facing static screen with no interaction). The code I use to do this is below: *, html { cursor: url('/web/resources/graphics/blank.cur'), pointer; } Blank.cur is a totally blank cursor file. This code works perfectly well in all browsers when I host the web files on my local server, but when I upload to a Windows CE web server (our production unit) the cursor shows up as a black box. Odd. After some testing it seems that Chrome only has a problem with totally blank cursor files when they are served from the WinCE web server, so I created a blank cursor with one white pixel, specifically for Chrome. How do I then target this CSS rule at Chrome specifically? i.e. *, html { cursor: url('/web/resources/graphics/blank.cur'), pointer; } <!--[if CHROME]> *, html { cursor: url('/web/resources/graphics/blankChrome.cur'), pointer; } <![endif]-->

    Read the article
