Search Results

Search found 1699 results on 68 pages for 'alpha'.


  • How to modify the style of jQuery DatePicker's disabled dates?

    - by Clay
    Given this page: http://jqueryui.com/demos/datepicker/#min-max
    And viewing its source: http://jqueryui.com/themeroller/css/parseTheme.css.php

    I can change the following rule (using Chrome's inspect element feature) and see those changes reflected:

        .ui-state-disabled, .ui-widget-content .ui-state-disabled {
            opacity: .35;
            filter: Alpha(Opacity=35);
            background-image: none;
        }

    However, if I try to override it from my own test page with something like...

        .ui-state-disabled, .ui-widget-content .ui-state-disabled {
            opacity: .99 !important;
            filter: Alpha(Opacity=99) !important;
            background-image: none !important;
            color: Red !important;
        }

    ...I do not see my changes reflected in the calendar. I can make other changes in my own test page, and those are reflected for other classes in the datepicker, so I'm not having any kind of path issue to the .js or .css files. What am I missing here?

    UPDATE/SOLUTION: Firebug to the rescue... this took care of my styling needs:

        .ui-datepicker-week-end { color: #c0c0c0 !important; }
        div#ui-datepicker-div.ui-datepicker { color: #c0c0c0; }
        div#ui-datepicker-div.ui-datepicker:hover { cursor: default !important; }
        .ui-datepicker-calendar th { color: #222222 !important; }

    Read the article

  • Convert bitmap image information into CGImage in iPhone OS 3

    - by giftederic
    I want to create a CGImage from color information I already have. Here is the code for converting a CGImage to CML (CML_color is a matrix structure):

        - (void)CGImageReftoCML:(CGImageRef)image destination:(CML_color &)dest {
            CML_RGBA p;
            NSUInteger width = CGImageGetWidth(image);
            NSUInteger height = CGImageGetHeight(image);
            CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
            unsigned char *rawData = new unsigned char[height * width * 4];
            NSUInteger bytesPerPixel = 4;
            NSUInteger bytesPerRow = bytesPerPixel * width;
            NSUInteger bitsPerComponent = 8;
            CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                    bitsPerComponent, bytesPerRow, colorSpace,
                    kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
            CGColorSpaceRelease(colorSpace);
            CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
            CGContextRelease(context);

            int index = 0;
            for (int i = 0; i < height; i++) {
                for (int j = 0; j < width; j++) {
                    p.red = rawData[index++];
                    p.green = rawData[index++];
                    p.blue = rawData[index++];
                    p.alpha = rawData[index++];
                    dest(i, j) = p;
                }
            }
            delete[] rawData;
        }

    Now I want the reverse function, which converts the CML into a CGImage. I know all the color and alpha information needed to create the image, stored in the CML matrix, but how can I do that?
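    A minimal sketch of the reverse direction, assuming the same dimensions and the CML_RGBA layout above: fill a byte buffer from the matrix, then let CGBitmapContextCreateImage wrap it. The method name is hypothetical.

        - (CGImageRef)CMLtoCGImageRef:(CML_color &)src width:(NSUInteger)width height:(NSUInteger)height {
            NSUInteger bytesPerRow = width * 4;
            unsigned char *rawData = new unsigned char[height * bytesPerRow];

            // Copy the matrix back into an RGBA byte buffer.
            int index = 0;
            for (NSUInteger i = 0; i < height; i++) {
                for (NSUInteger j = 0; j < width; j++) {
                    CML_RGBA p = src(i, j);
                    rawData[index++] = p.red;
                    rawData[index++] = p.green;
                    rawData[index++] = p.blue;
                    rawData[index++] = p.alpha;
                }
            }

            CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
            CGContextRef context = CGBitmapContextCreate(rawData, width, height, 8,
                    bytesPerRow, colorSpace,
                    kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
            CGImageRef image = CGBitmapContextCreateImage(context); // caller must CGImageRelease

            CGContextRelease(context);
            CGColorSpaceRelease(colorSpace);
            delete[] rawData;   // safe: CGBitmapContextCreateImage copies the bits
            return image;
        }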

    Read the article

  • Flash "visible" issue

    - by justkevin
    I'm writing a tool in Flex that lets me design composite sprites using layered bitmaps and then "bake" them into a low-overhead single bitmapData. I've discovered a strange behavior I can't explain: toggling the "visible" property of my layers works twice for each layer (i.e., I can turn it off, then on again) and then never again for that layer -- the layer stays visible from that point on. If I override "set visible" on the layer as such:

        override public function set visible(value:Boolean):void {
            if (value == false)
                this.alpha = 0;
            else
                this.alpha = 1;
        }

    ...the problem goes away and I can toggle "visibility" as much as I want. Any ideas what might be causing this?

    Edit: Here is the code that makes the call:

        private function onVisibleChange():void {
            _layer.visible = layerVisible.selected;
            changed();
        }

    The changed() method "bakes" the bitmap:

        public function getBaked():BitmapData {
            var w:int = _composite.width + (_atmosphereOuterBlur * 2);
            var h:int = _composite.height + (_atmosphereOuterBlur * 2);
            var bmpData:BitmapData = new BitmapData(w, h, true, 0x00000000);
            var matrix:Matrix = new Matrix();
            var bounds:Rectangle = this.getBounds(this);
            matrix.translate(w / 2, h / 2);
            bmpData.draw(this, matrix, null, null, new Rectangle(0, 0, w, h), true);
            return bmpData;
        }

    Incidentally, while the layer is still visible, using the Flex debugger I can verify that the layer's visible value is "false".

    Read the article

  • Trouble using latex in Matplotlib / Scipy etc.

    - by ajhall
    I'm having some issues with my first attempts at using matplotlib and scipy to make some scatter plots of my data (too many variables, trying to see many things at once). Here's some code of mine that is working fairly well:

        import numpy
        from scipy import *
        import pylab
        from matplotlib import *
        import h5py

        FileID = h5py.File('3DiPVDplot1.mat', 'r')
        # (to view the contents: list(FileID) )
        group = FileID['/']
        CurrentsArray = group['Currents'].value
        IvIIIarray = group['IvIII'].value
        PFarray = group['PF'].value
        growthTarray = group['growthT'].value

        fig = pylab.figure()
        ax = fig.add_subplot(111)
        cax = ax.scatter(IvIIIarray, growthTarray, PFarray, CurrentsArray, alpha=0.75)
        cbar = fig.colorbar(cax)
        ax.set_xlabel('Cu / III')
        ax.set_ylabel('Growth T')
        ax.grid(True)
        pylab.show()

    I tried to change the code to use LaTeX fonts and interpreting, but none of it seems to work for me. Here's an example attempt that didn't work: the same script, with these two lines added after the imports...

        rc('text', usetex=True)
        rc('font', family='serif')

    ...and raw strings for the labels:

        ax.set_xlabel(r'Cu / III')
        ax.set_ylabel(r'Growth T')

    I'm using fink-installed python26 with the corresponding packages for scipy, matplotlib, etc. I've been using iPython and manual work instead of scripts in python. Since I'm completely new to python and scipy, I'm sure I'm making some stupid simple mistakes. Please enlighten me! I greatly appreciate the help!
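    A sketch of a variation that is often more robust, assuming a TeX distribution (latex, dvipng, ghostscript) is actually on the PATH that iPython sees; usetex fails quietly without them. Setting rcParams through the matplotlib module directly also removes any doubt about which rc the star imports picked up:

        import matplotlib
        matplotlib.rcParams.update({
            'text.usetex': True,        # hand all text rendering off to LaTeX
            'font.family': 'serif',
        })
        import pylab                    # figures created after this point use the settings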

    Read the article

  • GWT : Internet Explorer transparency issue

    - by dindeman
    This post concerns only IE. The last line of the following code is causing the issue.

        int width = 200;
        int height = 200;
        int overHeight = 40;

        AbsolutePanel absPanel = new AbsolutePanel();
        absPanel.setSize(width + "px", height + "px");

        SimplePanel underPanel = new SimplePanel();
        underPanel.setWidth(width + "px");
        underPanel.setHeight(height + "px");
        underPanel.getElement().getStyle().setBackgroundColor("red");

        SimplePanel overPanel = new SimplePanel();
        overPanel.setWidth(width + "px");
        overPanel.setHeight(overHeight + "px");
        overPanel.getElement().getStyle().setBackgroundColor("black");
        // Setting the IE opacity to 20% on the black element in order to obtain the see-through effect.
        overPanel.getElement().getStyle().setProperty("filter", "alpha(opacity=20)");

        absPanel.add(underPanel, 0, 0);
        absPanel.add(overPanel, 0, 0);
        RootPanel.get("test").add(absPanel);

        // The next line causes the problem.
        absPanel.getElement().getStyle().setProperty("filter", "alpha(opacity=100)");

    So basically this code should display a red square of 200px by 200px (underPanel) and, on top of it, a black rectangle of 200px by 40px (overPanel). The black rectangle is partially see-through since its transparency is set to 20%, so it should appear red, but a darker red than the square under it, since it is actually a faded black item.

    A rendering problem occurs because of the last line of code, which sets the opacity of the containing AbsolutePanel to 100% (which in theory should not affect the visual result). In that case the panel lying over remains see-through, but straight through to the background colour of the page! It's as if the panel sitting under it was not there at all... Any ideas? This is under GWT 2.0 and IE7.
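    A hedged guess at a workaround: IE's alpha filter gives an element its own compositing surface, so a filter on the parent can make children composite against that surface instead of against the layers below. If the parent never actually needs a filter of its own, clearing the property instead of forcing 100% sidesteps the effect:

        // Clear the filter rather than setting alpha(opacity=100); visually
        // equivalent in theory, but avoids giving absPanel its own surface.
        absPanel.getElement().getStyle().setProperty("filter", "");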

    Read the article

  • Custom Alignment and Backgrounds Through Greasemonkey

    - by Jivec
    I'm trying to implement something in Greasemonkey and it is giving me a fair bit of trouble, as I can't get it to work. I frequently use Wolfram Alpha (http://wolframalpha.com) for a lot of things. They have recently updated the home page with a new style, and there are settings that you can edit on this page (http://www.wolframalpha.com/homesettings.html). As you would expect, when you clear cookies you lose these settings.

    What I would like to do is have a Greasemonkey script that sets the background to whatever I like (and which will stay, regardless of the state of your cookies). It would also be cool if this background was displayed the whole way through Wolfram Alpha (i.e. when you make queries too, e.g. http://www.wolframalpha.com/input/?i=stack+overflow ).

    The other thing I'm trying to implement, but am struggling with, is to force the results pages to be left-aligned so that the browser window can be smaller. If anyone could help me with this it would be appreciated; I have tried to do it myself but I'm unsure how to get it to work.
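    A minimal sketch of the cookie-independent approach: inject CSS from the userscript on every Wolfram Alpha page. The selectors and colour below are placeholders; the real ones would need to be read off the live pages with Firebug:

        // ==UserScript==
        // @name        Wolfram Alpha custom style
        // @include     http://www.wolframalpha.com/*
        // ==/UserScript==

        GM_addStyle(
            "body { background: #223344 !important; }" +   // placeholder colour
            "#results { margin-left: 0 !important; }"      // hypothetical selector for left alignment
        );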

    Read the article

  • UITableViewCell imageView images loading small even when they are the correct size!

    - by Alex Barlow
    I'm having an issue whilst loading images into a UITableViewCell after an asynchronous download and placement into a UIImage variable. The images appear smaller than they actually are! But when the table is scrolled down and back up to the image, or the whole table is reloaded, they appear at the correct size... Here is a code excerpt:

        - (void)reviewImageDidLoad:(NSIndexPath *)indexPath {
            ThumbDownloader *thumbDownloader = [imageDownloadsInProgress objectForKey:indexPath];
            if (thumbDownloader != nil) {
                UITableViewCell *cell = [self.tableView cellForRowAtIndexPath:thumbDownloader.indexPathInTableView];

                [UIView beginAnimations:nil context:nil];
                [UIView setAnimationDuration:0.4];
                [self.tableView cellForRowAtIndexPath:indexPath].imageView.alpha = 0.0;
                [UIView commitAnimations];

                cell.imageView.image = thumbDownloader.review.thumb;

                [UIView beginAnimations:nil context:nil];
                [UIView setAnimationDuration:0.4];
                [self.tableView cellForRowAtIndexPath:indexPath].imageView.alpha = 1.0;
                [UIView commitAnimations];
            }
        }

    Here is an image of the app just after calling this method: http://www.flickr.com/photos/arbarlow/5288563627/

    After calling tableView reloadData or scrolling around, they appear correctly (go to the next flickr image to see the normal result, but I'm sure you can guess that). Does anyone have any ideas as to how to make the images appear correctly? I'm absolutely stumped!

    Regards, Alex (iPhone noob)
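    A hedged guess: when the cell was first laid out its imageView had no image (or a placeholder of a different size), and assigning a new image later does not by itself trigger layout. Asking the cell to lay itself out again after setting the image is a cheap thing to try:

        cell.imageView.image = thumbDownloader.review.thumb;
        [cell setNeedsLayout];  // re-run the cell's layout now that the image (and its size) exists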

    Read the article

  • How to generate a monochrome bit mask for a 32-bit bitmap

    - by Mordachai
    Under Win32, it is a common technique to generate a monochrome bitmask from a bitmap for transparency use by doing the following:

        SetBkColor(hdcSource, clrTransparency);
        VERIFY(BitBlt(hdcMask, 0, 0, bm.bmWidth, bm.bmHeight, hdcSource, 0, 0, SRCCOPY));

    This assumes that hdcSource is a memory DC holding the source image, and hdcMask is a memory DC holding a monochrome bitmap of the same size (so both are 32x32, but the source is 4-bit color while the target is 1-bit monochrome).

    However, this seems to fail for me when the source is 32-bit color + alpha. Instead of getting a monochrome bitmap in hdcMask, I get a mask that is all black: no bits get set to white (1). Whereas this works for the 4-bit color source. My search-foo is failing, as I cannot seem to find any references to this particular problem. I have isolated that this is indeed the issue in my code: i.e. if I use a source bitmap that is 16 color (4-bit), it works; if I use a 32-bit image, it produces the all-black mask.

    Is there an alternate method I should be using in the case of 32-bit color images? Is there an issue with the alpha channel that overrides the normal behavior of the above technique? Thanks for any help you may have to offer!

    ADDENDUM: I am still unable to find a technique that creates a valid monochrome bitmap for my GDI+ produced source bitmap. I have somewhat alleviated my particular issue by simply not generating a monochrome bitmask at all; instead I'm using TransparentBlt(), which seems to get it right (but I don't know what they're doing internally that's any different that allows them to correctly mask the image). It might be useful to have a really good, working function:

        HBITMAP CreateTransparencyMask(HDC hdc, HBITMAP hSource, COLORREF crTransparency);

    where it always creates a valid transparency mask, regardless of the color depth of hSource. Ideas?
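    A sketch of one possible CreateTransparencyMask, under the assumption that what breaks the BitBlt trick on 32-bpp sources is the alpha channel (the color-key compare runs against pixels whose RGB may be premultiplied, so nothing exactly equals crTransparency). It builds the mask by hand from the alpha channel, treating alpha == 0 as transparent; swap the test for an RGB compare against crTransparency if the source has no meaningful alpha:

        #include <windows.h>
        #include <vector>

        HBITMAP CreateTransparencyMask(HDC hdc, HBITMAP hSource)
        {
            BITMAP bm;
            GetObject(hSource, sizeof(bm), &bm);

            // Pull the source pixels out as top-down 32-bpp.
            BITMAPINFO bmi = {};
            bmi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
            bmi.bmiHeader.biWidth = bm.bmWidth;
            bmi.bmiHeader.biHeight = -bm.bmHeight;       // negative height = top-down rows
            bmi.bmiHeader.biPlanes = 1;
            bmi.bmiHeader.biBitCount = 32;
            bmi.bmiHeader.biCompression = BI_RGB;
            std::vector<DWORD> pixels(bm.bmWidth * bm.bmHeight);
            GetDIBits(hdc, hSource, 0, bm.bmHeight, &pixels[0], &bmi, DIB_RGB_COLORS);

            // Monochrome scanlines handed to CreateBitmap must be WORD-aligned.
            const int stride = ((bm.bmWidth + 15) / 16) * 2;
            std::vector<BYTE> mask(stride * bm.bmHeight, 0);
            for (int y = 0; y < bm.bmHeight; ++y)
                for (int x = 0; x < bm.bmWidth; ++x)
                    if (((pixels[y * bm.bmWidth + x] >> 24) & 0xFF) == 0)     // transparent pixel
                        mask[y * stride + x / 8] |= (BYTE)(0x80 >> (x % 8));  // -> white (1) in mask

            return CreateBitmap(bm.bmWidth, bm.bmHeight, 1, 1, &mask[0]);
        }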

    Read the article

  • NSUserDefaults: Saved Number Always 0, iPhone

    - by Stumf
    Hello all, I have looked at other answers and the docs. Maybe I am missing something, or maybe I have another issue. I am trying to save a number on exiting the app, and then when the app is loaded I want to check if this value exists and take action accordingly. This is what I have tried.

    To save on exiting:

        - (void)applicationWillTerminate:(UIApplication *)application {
            double save = [label.text doubleValue];
            [[NSUserDefaults standardUserDefaults] setDouble:save forKey:@"savedNumber"];
            [[NSUserDefaults standardUserDefaults] synchronize];
        }

    To check:

        - (IBAction)buttonclickSkip {
            double save = [[NSUserDefaults standardUserDefaults] doubleForKey:@"savedNumber"];
            if (save == 0) {
                [self performSelector:@selector(displayAlert) withObject:nil];
                test.enabled = YES;
                test.alpha = 1.0;
                skip.enabled = NO;
                skip.alpha = 0.0;
            } else {
                label.text = [NSString stringWithFormat:@"%.1f %%", save];
            }
        }

    The problem is I always get my alert message displayed; the saved value is not put into the label, so somehow == 0 is always true. If it makes any difference, I am testing this on the iPhone simulator. Many thanks, Stu
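    One hedged observation: applicationWillTerminate is not a reliable save point (it is skipped if the app crashes or is killed, and stopping a simulator run from the debugger can bypass it too), so the default may simply never have been written. Persisting the value the moment it changes removes that dependency; a sketch, assuming a helper like this is called wherever label.text is set:

        // Hypothetical helper: call whenever the value changes.
        - (void)saveNumber:(double)value {
            [[NSUserDefaults standardUserDefaults] setDouble:value forKey:@"savedNumber"];
            [[NSUserDefaults standardUserDefaults] synchronize];  // flush to disk immediately
        }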

    Read the article

  • Converting numbers into alphabets

    - by Nina
    Hi! I want to convert numbers into letters using JavaScript, e.g. 01=n, 02=i, 03=n, 04=a, so when someone enters the numbers 01020304 in the form he will get: nina. Whatever he enters gets replaced with the equivalent letters, including spaces. I will be really thankful if you can provide full code including the HTML form code, as I am a beginner. Thank you all for the quick response.

    I have found this code on one site; it converts letters into numbers, but the code for converting numbers into letters isn't working. Here is the code for converting letters into numbers (the excerpt cuts off mid-loop):

        var i, j;
        var getc;
        var len;
        var num, alpha;
        num = new Array("01","02","03","04","05","06","07","08","09","10","11","12","13",
                        "14","15","16","17","18","19","20","21","22","23","24","25","26",
                        "00","##","$$");
        alpha = new Array("a","b","c","d","e","f","g","h","i","j","k","l","m","n","o","p",
                          "q","r","s","t","u","v","w","x","y","z"," ",".",",");
        function encode() {
            len = document.f1.ta1.value.length;
            document.f1.ta2.value = "";
            for (i = 0; i
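    For the direction the question actually asks about (numbers to letters), here is a small decoder sketch using the same two arrays: it walks the input two characters at a time and looks each pair up in num. The form field names follow the encode() function above and are otherwise placeholders:

        function decode() {
            var input = document.f1.ta2.value;   // assumes the same form fields as above
            var out = "";
            for (var i = 0; i + 1 < input.length; i += 2) {
                var pair = input.substr(i, 2);
                for (var k = 0; k < num.length; k++) {
                    if (num[k] == pair) {
                        out += alpha[k];         // matching letter (or space / . / ,)
                        break;
                    }
                }
            }
            document.f1.ta1.value = out;
        }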

    Read the article

  • Correlate GROUP BY and LEFT JOIN on multiple criteria to show latest record?

    - by Sunbird
    In a simple stock management database, a quantity of new stock is added and shipped until the quantity reaches zero. Each stock movement is assigned a reference; only the latest reference is used. In the example provided, the latest references are never shown: stock IDs 1 and 4 should have the references charlie and foxtrot respectively, but instead show alpha and delta. How can a GROUP BY and LEFT JOIN on multiple criteria be correlated to show the latest record?

    http://sqlfiddle.com/#!2/6bf37/107

        CREATE TABLE stock (
            id tinyint PRIMARY KEY,
            quantity int,
            parent_id tinyint
        );

        CREATE TABLE stock_reference (
            id tinyint PRIMARY KEY,
            stock_id tinyint,
            stock_reference_type_id tinyint,
            reference varchar(50)
        );

        CREATE TABLE stock_reference_type (
            id tinyint PRIMARY KEY,
            name varchar(50)
        );

        INSERT INTO stock VALUES
        (1, 10, 1), (2, -5, 1), (3, -5, 1),
        (4, 20, 4), (5, -10, 4), (6, -5, 4);

        INSERT INTO stock_reference VALUES
        (1, 1, 1, 'Alpha'), (2, 2, 1, 'Beta'), (3, 3, 1, 'Charlie'),
        (4, 4, 1, 'Delta'), (5, 5, 1, 'Echo'), (6, 6, 1, 'Foxtrot');

        INSERT INTO stock_reference_type VALUES (1, 'Customer Reference');

        SELECT stock.id, SUM(stock.quantity) as quantity, customer.reference
        FROM stock
        LEFT JOIN stock_reference AS customer
            ON stock.id = customer.stock_id AND stock_reference_type_id = 1
        GROUP BY stock.parent_id
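    A sketch of the usual greatest-n-per-group fix, assuming "latest" means the reference attached to the highest stock id within each parent_id group: find that id per group in a derived table first, then join the reference to it.

        SELECT s.parent_id,
               SUM(s.quantity) AS quantity,
               latest.reference
        FROM stock s
        LEFT JOIN (
            SELECT parent_id, MAX(id) AS last_stock_id
            FROM stock
            GROUP BY parent_id
        ) last_movement ON last_movement.parent_id = s.parent_id
        LEFT JOIN stock_reference latest
            ON latest.stock_id = last_movement.last_stock_id
           AND latest.stock_reference_type_id = 1
        GROUP BY s.parent_id, latest.reference;

    With the sample data this yields Charlie for group 1 and Foxtrot for group 4, as the question expects.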

    Read the article

  • CIE XYZ colorspace: do I have RGBA or XYZA?

    - by Tronic
    I plan to write a painting program based on linear combinations of the xy plane points (0,1), (1,0) and (0,0). Such a system works identically to RGB, except that the primaries are not within the gamut but at the corners of a triangle that encloses the entire gamut. I have seen the three points being referred to as X, Y and Z (upper case) somewhere, but I cannot find the page anymore (I marked them on the picture myself). My pixel format stores the intensity of each of those three components the same way as RGB does, together with an alpha value. This allows using pretty much any image manipulation operation designed for RGBA without modifying the code. What is my format called? Is it XYZA, RGBA or something else? Google doesn't seem to know of XYZA, and RGBA will get confused with sRGB + alpha (which I also need to use in the same program). Notice that the primaries X, Y and Z and their intensities have little to do with the x, y and z coordinates (lower case) that are more commonly used.

    Read the article

  • Problem in inferring instances that have integer cardinality constraint

    - by Mikae Combarado
    Hello, I have created an RDF/OWL file using Protege 4.1 alpha. I also created a defined class in Protege called CheapPhone. This class has a restriction, shown below:

        (hasPrice some integer[< 350])

    Whenever the price of a phone is below 350, it is inferred to be a CheapPhone. There is no problem inferring this in Protege 4.1 alpha. However, I cannot infer this using Jena. I also created a defined class called SmartPhone. This class also has a restriction, shown below:

        (has3G value true) and (hasInternet value true)

    Whenever a phone has 3G and Internet, it is inferred to be a SmartPhone. In this situation, there is no problem inferring this in both Protege and Jena. I have started to think that there is a problem in the default inference engine of Jena. The code that I use in Java is below:

        Reasoner reasoner = ReasonerRegistry.getOWLReasoner();
        reasoner = reasoner.bindSchema(ontModel);
        OntModelSpec ontModelSpec = OntModelSpec.OWL_MEM_MINI_RULE_INF;
        ontModelSpec.setReasoner(reasoner);

        // Create ontology model with reasoner support
        // ontModel was created and read before, so I don't share the code in order
        // not to create garbage here
        OntModel model = ModelFactory.createOntologyModel(ontModelSpec, ontModel);

        OntClass sPhone = model.getOntClass(ns + "SmartPhone");
        ExtendedIterator s = sPhone.listInstances();
        while (s.hasNext()) {
            OntResource mp = (OntResource) s.next();
            System.out.println(mp.getURI());
        }

    This code works perfectly and returns the instances, but when I change the line below to make it appropriate for CheapPhone, it doesn't return anything.

        OntClass sPhone = model.getOntClass(ns + "CheapPhone");

    Am I doing something wrong?
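    A hedged pointer rather than a definitive answer: Jena's built-in rule reasoners cover only a subset of OWL, and a value restriction like (has3G value true) is much simpler than a datatype facet restriction like integer[< 350], which is the kind of construct usually delegated to an external DL reasoner. A sketch of wiring Pellet in through its Jena bindings (the package and constant names vary across Pellet releases; verify against the version in use):

        import org.mindswap.pellet.jena.PelletReasonerFactory;

        // Pellet supplies a complete OWL DL reasoner to Jena.
        OntModel model = ModelFactory.createOntologyModel(
                PelletReasonerFactory.THE_SPEC, ontModel);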

    Read the article

  • imageconvolution leaves black dot in the upper left corner

    - by Peter O.
    I'm trying to sharpen resized images using this code:

        imageconvolution($imageResource, array(
            array( -1, -1, -1 ),
            array( -1, 16, -1 ),
            array( -1, -1, -1 ),
        ), 8, 0);

    When a transparent PNG image is sharpened using the code above, it appears with a black dot in the upper left corner (I have tried different convolution kernels, but the result is the same). After resizing, the image looked OK.

    1st image is the original one; 2nd image is the sharpened one.

    EDIT: What am I doing wrong? I'm using the color retrieved from the pixel:

        $color = imagecolorat($imageResource, 0, 0);
        imageconvolution($imageResource, array(
            array( -1, -1, -1 ),
            array( -1, 16, -1 ),
            array( -1, -1, -1 ),
        ), 8, 0);
        imagesetpixel($imageResource, 0, 0, $color);

    Is imagecolorat the right function? Or is the position correct?

    EDIT2: I have changed coordinates, but still no luck. I've checked the transparency given by imagecolorat (according to this post). This is the dump:

        array(4) {
            red   => 0
            green => 0
            blue  => 0
            alpha => 127
        }

    Alpha 127 = 100% transparent. Those zeroes might cause the problem...

    Read the article

  • Finding Palindromes in an Array

    - by Jack L.
    For this assignment, I think that I got it right, but when I submit it online, it doesn't list it as correct even though I checked with Eclipse. The prompt:

    Write a method isPalindrome that accepts an array of Strings as its argument and returns true if that array is a palindrome (if it reads the same forwards as backwards) and false if not. For example, the array {"alpha", "beta", "gamma", "delta", "gamma", "beta", "alpha"} is a palindrome, so passing that array to your method would return true. Arrays with zero or one element are considered to be palindromes.

    My code:

        public static void main(String[] args) {
            // {"aay", "bee", "cee", "cee", "bee", "aay"} should return true
            String[] input = new String[6];
            input[0] = "aay";
            input[1] = "bee";
            input[2] = "cee";
            input[3] = "cee";
            input[4] = "bee";
            input[5] = "aay";
            System.out.println(isPalindrome(input));
        }

        public static boolean isPalindrome(String[] input) {
            for (int i = 0; i < input.length; i++) {  // Checks each element
                if (input[i] != input[input.length - 1 - i]) {
                    return false;  // If a single instance of non-symmetry
                }
            }
            return true;  // If symmetrical, only one element, or zero elements
        }

    As an example, {"aay", "bee", "cee", "cee", "bee", "aay"} returns true in Eclipse, but Practice-It! says it returns false. What is going on?
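    The discrepancy is the classic String identity trap: != compares references, not contents. In the Eclipse run all six strings are compile-time literals, which Java interns, so == happens to hold; a grader that builds the array from input produces distinct String objects, and the same code returns false. Comparing with equals() behaves consistently in both environments:

        if (!input[i].equals(input[input.length - 1 - i])) {
            return false;
        }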

    Read the article

  • additive texture combiner

    - by ivicaa
    I have a problem which is driving me crazy. Environment: iPhone, OpenGL ES 1.1. Basically I have a simple GL_COMBINE for vertex color and texture color.

        glColor4f(0.1f, 0.1f, 0.1f, 0);

        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);

        glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_ADD);
        glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB, GL_PRIMARY_COLOR);
        glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
        glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_RGB, GL_TEXTURE);
        glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);

        glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA, GL_ADD);
        glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_ALPHA, GL_PRIMARY_COLOR);
        glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_ALPHA, GL_SRC_ALPHA);
        glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_ALPHA, GL_TEXTURE);
        glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_ALPHA, GL_SRC_ALPHA);

    It should simply compute VertexColorRGBA + TextureRGBA. With alpha everything works fine, but as soon as I change R, G, B in the glColor4f call, the final alpha is also modified. Does anyone have a hint about this unexpected behavior? Thanks in advance! Ivica

    Read the article

  • Send array of string using tcp

    - by user2798455
    I'm not sure if this is possible, but here it goes. Usually I send strings like this:

        Connection.IOHandler.WriteLn('alpha');
        Connection.IOHandler.WriteLn('bravo');
        Connection.IOHandler.WriteLn('charley');
        //and so on..

    But what if I want to send them in just one go, all at once? Maybe I could put them in an array of strings, then send the array:

        someStr : array[1..3] of string = ('alpha','bravo','charley'); //this could be more
        ...
        StrListMem := TMemoryStream.Create;
        try
          StrListMem.WriteBuffer(someStr[0], Length(someStr));
          StrListMem.Position := 0;
          Connection.IOHandler.Write(StrListMem, 0, True);
        finally
          StrListMem.Free;
        end;

    I just have no idea how to do this right. Maybe somebody can give an example, and show how the receiver (client) will read it?
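    A caution and a sketch. Writing the array's memory into a TMemoryStream as above would serialize string pointers, not the characters themselves, since Delphi strings are references. A simple length-prefixed exchange over the existing WriteLn/ReadLn calls avoids that entirely (variable names are placeholders):

        // Sender: one count line, then one line per string.
        Connection.IOHandler.WriteLn(IntToStr(Length(someStr)));
        for i := Low(someStr) to High(someStr) do
          Connection.IOHandler.WriteLn(someStr[i]);

        // Receiver: read the count, then that many lines.
        count := StrToInt(Connection.IOHandler.ReadLn);
        for i := 1 to count do
          list.Add(Connection.IOHandler.ReadLn);   // list: a TStringList created beforehand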

    Read the article

  • rotating a circle in UIView?

    - by senthilmuthu
    Hi, I want to draw a circle in drawRect through the context, like a pie chart (taken from a tutorial), and rotate it via UITouch. I have given the code as follows; how can I rotate it? Any help please?

        #define PI 3.14159265358979323846
        #define snapshot_start 360
        #define snapshot_finish 360

        static inline float radians(double degrees) { return degrees * PI / 180; }

        - (void)drawRect:(CGRect)rect {
            // Drawing code
            CGRect parentViewBounds = self.bounds;
            CGFloat x = CGRectGetWidth(parentViewBounds) / 2;
            CGFloat y = CGRectGetHeight(parentViewBounds) * 0.55;

            // Get the graphics context and clear it
            CGContextRef ctx = UIGraphicsGetCurrentContext();
            CGContextClearRect(ctx, rect);

            // define stroke color and line width
            CGContextSetRGBStrokeColor(ctx, 1, 1, 1, 1.0);
            CGContextSetLineWidth(ctx, 4.0);

            // need some values to draw pie charts
            double snapshotCapacity = 20;
            double rawCapacity = 100;
            double systemCapacity = 1;
            int offset = 5;
            double pie1_start = 315.0;
            double pie1_finish = snapshotCapacity * 360.0 / rawCapacity;
            double system_finish = systemCapacity * 360.0 / rawCapacity;

            CGContextSetFillColor(ctx, CGColorGetComponents([[UIColor greenColor] CGColor]));
            CGContextMoveToPoint(ctx, x + 2 * offset, y);
            CGContextAddArc(ctx, x + 2 * offset, y, 100, radians(snapshot_start),
                            radians(snapshot_start + snapshot_finish), 0);
            CGContextClosePath(ctx);
            CGContextFillPath(ctx);

            // system capacity
            CGContextSetFillColor(ctx, CGColorGetComponents(
                    [[UIColor colorWithRed:15 green:165/255 blue:0 alpha:1] CGColor]));
            CGContextMoveToPoint(ctx, x + offset, y);
            CGContextAddArc(ctx, x + offset, y, 100,
                            radians(snapshot_start + snapshot_finish + offset),
                            radians(snapshot_start + snapshot_finish + system_finish), 0);
            CGContextClosePath(ctx);
            CGContextFillPath(ctx);

            /* data capacity */
            CGContextSetFillColor(ctx, CGColorGetComponents(
                    [[UIColor colorWithRed:99/255 green:184/255 blue:255/255 alpha:1] CGColor]));
            CGContextMoveToPoint(ctx, x, y);
            CGContextAddArc(ctx, x, y, 100,
                            radians(snapshot_start + snapshot_finish + system_finish + offset),
                            radians(snapshot_start), 0);
            CGContextClosePath(ctx);
            CGContextFillPath(ctx);
        }
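    Two common ways to rotate this, sketched under the assumption that the angle (angleInRadians, a placeholder) comes from your UITouch handling. The cheapest is rotating the whole view with a transform, which needs no redraw; alternatively, rotate the drawing context around the pie's center before adding the arcs:

        // Option 1: rotate the view itself, e.g. from touchesMoved:withEvent:
        self.transform = CGAffineTransformMakeRotation(angleInRadians);

        // Option 2: inside drawRect, spin the coordinate system about (x, y)
        CGContextTranslateCTM(ctx, x, y);
        CGContextRotateCTM(ctx, angleInRadians);
        CGContextTranslateCTM(ctx, -x, -y);
        // ...then add the arcs as before, and call [self setNeedsDisplay]
        // whenever angleInRadians changes.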

    Read the article

  • C++ polymorphism, function calls

    - by moai
    Okay, I'm pretty inexperienced as a programmer, let alone in C++, so bear with me here. What I wanted to do was to have a container class hold a parent class pointer and then use polymorphism to store a child class object. The thing is that I want to call one of the child class's functions through the parent class pointer. Here's a sort of example of what I mean in code:

        class SuperClass {
        public:
            int x;
        }

        class SubClass : public SuperClass {
        public:
            void function1() {
                x += 1;
            }
        }

        class Container {
        public:
            SuperClass * alpha;
            Container(SuperClass& beta) {
                alpha = beta;
            }
        }

        int main() {
            Container cont = new Container(new SubClass);
        }

    (I'm not sure that's right, I'm still really shaky on pointers. I hope it gets the point across, at least.) So, I'm not entirely sure whether I can do this or not. I have a sneaking suspicion the answer is no, but I want to be sure. If someone has another way to accomplish this sort of thing, I'd be glad to hear it.
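    For what it's worth, a compiling sketch of the standard pattern: declare the function as virtual on the base class, so a call through a SuperClass* dispatches to the child's override. (Calling a function that exists only on the child, through a base pointer, indeed does not work without a cast.)

        class SuperClass {
        public:
            int x;
            SuperClass() : x(0) {}
            virtual void function1() = 0;     // overridden by children
            virtual ~SuperClass() {}
        };

        class SubClass : public SuperClass {
        public:
            void function1() { x += 1; }
        };

        class Container {
        public:
            SuperClass *alpha;
            Container(SuperClass *beta) : alpha(beta) {}
        };

        int main() {
            Container cont(new SubClass);
            cont.alpha->function1();          // runs SubClass::function1
            delete cont.alpha;
            return 0;
        }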

    Read the article

  • iPhone: How to Determine Average Light/Dark of an Area of an UIImage

    - by TechZen
    I need to place labels with a transparent background over a variable-content UIImage. Readability will vary significantly depending on the relationship between the color of the label's text and the color/luminosity of the area of the image displayed under the label. Since the image will be constantly changing, the color of the label's text needs to change in sync.

    I have found several techniques for determining the color, perceived luminosity, etc. of a single pixel. However, I need to rather quickly (while a view loads) determine the rough perceived color/luminosity of an area of the UIImage under the frame of the UILabel. I presume I will also need to measure the alpha, because the same color/luminosity looks different at different alpha values.

    Is there a way to calculate such a value for an area? Will I be reduced to simply summing pixels? If it comes to that, is there an algorithm to accomplish this? I've thought of two possible approaches:

    1. Perform some "folding" operations, i.e. combining pixels from one half of the area with the other half, and repeating until I get a single value. Would this be practical? How would you logically combine pixels to average their perceived color/luminosity?

    2. Sample a statistically significant number of pixels in the area and then combine them (somehow) to get a rough measure.

    I think this problem comes up a lot these days with people being so fond of customizing backgrounds. Seems like something that would be worth my time to bang out a category or class to handle this and then share around.
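    One sketch worth trying before any hand-rolled summing: crop the region under the label and let Quartz average it by drawing it into a 1x1 bitmap context; the downsampling interpolation does the averaging. This assumes image is the UIImage and labelFrame is the label's rect in image coordinates.

        unsigned char rgba[4] = {0, 0, 0, 0};
        CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
        CGContextRef ctx = CGBitmapContextCreate(rgba, 1, 1, 8, 4, space,
                kCGImageAlphaPremultipliedLast);
        CGContextSetInterpolationQuality(ctx, kCGInterpolationHigh);

        CGImageRef region = CGImageCreateWithImageInRect(image.CGImage, labelFrame);
        CGContextDrawImage(ctx, CGRectMake(0, 0, 1, 1), region);  // downsample = average

        // rgba now holds the region's average color (premultiplied by alpha);
        // a rough perceived luminosity: 0.299*R + 0.587*G + 0.114*B.
        CGImageRelease(region);
        CGContextRelease(ctx);
        CGColorSpaceRelease(space);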

    Read the article

  • Getting empty update rectangle in OnPaint after calling InvalidateRect on a layered window

    - by Shawn
    I'm trying to figure out why I've been getting an empty update rectangle when I call InvalidateRect on a transparent window. The idea is that I've drawn something on the window (it gets temporarily switched to an alpha of 1/255 for the drawing), and then I switch it to fully transparent mode (i.e. alpha of 0) in order to interact with the desktop and to be able to move the drawing around the screen on top of the desktop. When I try to move the drawing, I get its bounding rectangle and use it to call InvalidateRect, as such:

        InvalidateRect(m_hTarget, &winRect, FALSE);

    I've confirmed that winRect is indeed correct, and that m_hTarget is the correct window and its rectangle fully encompasses winRect. I get into the OnPaint handler in the class corresponding to m_hTarget, which is derived from CWnd. In there, I create a CPaintDC, but when I try to access the update rectangle (dcPaint.m_ps.rcPaint) it's always empty.

    This rectangle gets passed to a function that determines if we need to update the screen (by using UpdateLayeredWindow in the case of a transparent window). If I hard-code a non-empty rectangle in here, the remaining code works correctly and I am able to move the drawing around the screen. I tried changing the 'FALSE' parameter to 'TRUE' in InvalidateRect, with no effect. I also tried using a standard CDC and then BeginPaint/EndPaint in my OnPaint handler, just to ensure that CPaintDC wasn't doing something odd... but I got the same results.

    The code that I'm using was originally designed for opaque windows. If m_hTarget corresponds to an opaque window, the same set of function calls results in the correct (i.e. non-empty) rectangle being passed to OnPaint. Once the window is layered, though, it doesn't seem to work right.

    Read the article

  • How to draw shadows that don't suck?

    - by mystify
    A CAShapeLayer uses a CGPathRef to draw its stuff. So I have a star path, and I want a smooth drop shadow with a radius of about 15 units. Probably there is some nice functionality in newer iPhone OS versions, but I need to do it myself for the old 3.0 (which most people still use).

    I tried to do some REALLY nasty stuff: I created a for-loop and sequentially created about 15 of those paths, transform-scaling them step by step to become bigger, assigning each to a newly created CAShapeLayer and decreasing its alpha a little bit on every iteration. Not only is this scaling mathematically incorrect (it should happen relative to the outline!), the shadow is not rounded and looks really ugly.

    That's why nice soft shadows have a radius: the tips of a star shouldn't appear totally sharp after a shadow radius of 15 units; they should be soft like cream. But in my ugly solution they're just as sharp as the star itself, since all I do is scale the star 15 times and decrease its alpha 15 times. Ugly.

    I wonder how the big guys do it. If you had an arbitrary path, and that path must throw a shadow, how does the algorithm for that work? Probably the path would have to be expanded like 30 times, point by point, relative to the tangent of the outline, away from the filled part, and just by 0.5 units each step to get a nice blend. Before I re-invent the wheel, maybe someone has a handy example or link?
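    A sketch of an alternative that avoids expanding paths altogether, assuming dropping from CAShapeLayer down to a custom drawRect (or drawInContext:) is acceptable: Quartz can blur a shadow for an arbitrary path in one pass via CGContextSetShadowWithColor, which predates 3.0.

        CGContextSaveGState(ctx);
        // offset (0, 3), blur radius 15, 60% black; tune to taste
        CGContextSetShadowWithColor(ctx, CGSizeMake(0, 3), 15.0f,
                [UIColor colorWithWhite:0.0 alpha:0.6].CGColor);
        CGContextAddPath(ctx, starPath);     // the same CGPathRef the layer used
        CGContextFillPath(ctx);              // fill and soft shadow drawn together
        CGContextRestoreGState(ctx);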

    Read the article

  • Creating UIButton using helper method

    - by ddawber
    I have a subclass of UITableView in which I want to generate a number of labels that share the same properties (font, textColor, backgroundColor, etc.). I decided the easiest way to achieve this would be to create a helper method which creates a label with some common properties set:

        - (UILabel *)defaultLabelWithFrame:(CGRect)frame {
            UILabel *label = [[UILabel alloc] initWithFrame:frame];
            label.font = [UIFont fontWithName:@"Helvetica" size:14];
            label.textColor = [UIColor colorWithWhite:128.0/255.0 alpha:1.0];
            label.backgroundColor = [UIColor clearColor];
            return label;
        }

    I use the method like this:

        UILabel *someLabel = [self defaultLabelWithFrame:CGRectMake(0, 0, 100, 100)];
        [self addSubview:someLabel];
        [someLabel release];

    My concern here is that when the label is created in the method it is retained, but when I then assign it to someLabel, it is retained again, and I have no way of releasing the memory created in the method. What would be the best approach here? I feel like I have two options:

    1. Create a subclass of UILabel for the default label type.

    2. Create an NSMutableArray called defaultLabels and store the labels in it:

        - (UILabel *)defaultLabelWithFrame:(CGRect)frame {
            UILabel *label = [[UILabel alloc] initWithFrame:frame];
            label.font = [UIFont fontWithName:@"Helvetica" size:14];
            label.textColor = [UIColor colorWithWhite:128.0/255.0 alpha:1.0];
            label.backgroundColor = [UIColor clearColor];
            [defaultLabels addObject:label];
            [label release]; //I can release here
            return [defaultLabels lastObject]; //I can release defaultLabels when done
        }

    I appreciate your thoughts. Cheers.
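    A note on the premise, plus the conventional third option: assigning the pointer to someLabel does not retain it again, so there is no double retain. The only real issue is that the helper returns an owned (+1) object from a method whose name doesn't start with alloc/new/copy. The Cocoa convention is for such a method to return an autoreleased object, with the caller not releasing it:

        - (UILabel *)defaultLabelWithFrame:(CGRect)frame {
            UILabel *label = [[[UILabel alloc] initWithFrame:frame] autorelease];
            label.font = [UIFont fontWithName:@"Helvetica" size:14];
            label.textColor = [UIColor colorWithWhite:128.0/255.0 alpha:1.0];
            label.backgroundColor = [UIColor clearColor];
            return label;
        }

        // Caller: addSubview retains the label, so no explicit release.
        UILabel *someLabel = [self defaultLabelWithFrame:CGRectMake(0, 0, 100, 100)];
        [self addSubview:someLabel];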

    Read the article

  • overridePendingTransition doesn't work

    - by Ixx
    I have already found some people asking the same thing, but the solutions didn't work for me: I see no animation. I am calling it this way:

        Intent intent = new Intent(this, MyActivity.class);
        startActivity(intent);
        overridePendingTransition(R.anim.fadein, R.anim.fadeout);

    fadein.xml and fadeout.xml are in the anim folder.

    fadein.xml:

        <?xml version="1.0" encoding="utf-8"?>
        <set xmlns:android="http://schemas.android.com/apk/res/android" >
            <alpha
                android:duration="1000"
                android:fromAlpha="0.0"
                android:interpolator="@android:anim/accelerate_interpolator"
                android:toAlpha="1.0" />
        </set>

    fadeout.xml:

        <?xml version="1.0" encoding="utf-8"?>
        <set xmlns:android="http://schemas.android.com/apk/res/android">
            <alpha
                android:duration="1000"
                android:fromAlpha="1.0"
                android:interpolator="@android:anim/accelerate_interpolator"
                android:toAlpha="0.0" />
        </set>

    I am using min. API 7. Manifest:

        <uses-sdk android:minSdkVersion="7"/>

    API 7 is also in my project.properties file:

        target=android-7

    What am I doing wrong?

    P.S. Removing the lines with the interpolator doesn't change anything. Already seen / tried:

    overridePendingTransition doesn't work
    overridePendingTransition does not work when flag_activity_reorder_to_front is used
    Fade in Activity from previous Activity in Android
    Activity transition in Android

    Read the article

  • cocos2d - how to draw a bottle sprite with dynamically changing water level

    - by Oliver
    I am trying to draw a (2D) sprite in cocos2d showing a bottle. The bottle should have a dynamic water level (i.e. the amount of water in the bottle can change over the lifetime of the sprite). I am wondering how to do this.

    I currently have a PNG file of the empty bottle. I adjusted the alpha channel of that PNG, so when rendering the sprite I can draw a blue rectangle and render the bottle texture over it. That gives the impression of the water being inside the bottle. However, the bottle's shape is of course not a rectangle itself, so the water can be seen outside the bounds of the bottle. I can change the bottle image so that only the bottle itself is transparent and set the "outside world" to an opaque color and alpha, but that again prevents the "world background" from being visible in that area.

    I simply don't have a clue how to do this in a sane manner. Do I really have to read every pixel of the bottle image, identify which pixels are "inside" the bottle, and then draw the water pixel by pixel? There must be an easier way, right? ;) Any best practices for these kinds of tasks?

    Edit: see the picture below to make it somewhat clearer what I am talking about ;)

    http://i47.tinypic.com/10rqww0.png
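    One sketch that avoids per-pixel work, assuming a second PNG shaped like the bottle's interior (the "water shape") can be exported from the same artwork: draw that water sprite first, clipped vertically to the fill level with the GL scissor rectangle, then draw the transparent-glass bottle sprite over it. The world background stays visible because both textures keep their transparency. Variable names are placeholders, and the scissor rectangle is in window pixels:

        // level: 0.0 (empty) .. 1.0 (full)
        glEnable(GL_SCISSOR_TEST);
        glScissor(bottleX, bottleY, bottleWidth, (GLint)(bottleHeight * level));
        [waterSprite visit];    // interior-shaped water texture, clipped to the level
        glDisable(GL_SCISSOR_TEST);
        [bottleSprite visit];   // glass overlay drawn unclipped on top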

    Read the article
