Search Results

Search found 4287 results on 172 pages for 'frame'.


  • Conditionally colour data points outside of confidence bands in R

    - by D W
    I need to colour datapoints that are outside of the confidence bands on the plot below differently from those within the bands. Should I add a separate column to my dataset to record whether the data points are within the confidence bands? Can you provide an example please? Example dataset:

        ## Dataset from http://www.apsnet.org/education/advancedplantpath/topics/RModules/doc1/04_Linear_regression.html
        ## Disease severity as a function of temperature

        # Response variable, disease severity
        diseasesev<-c(1.9,3.1,3.3,4.8,5.3,6.1,6.4,7.6,9.8,12.4)

        # Predictor variable, (Centigrade)
        temperature<-c(2,1,5,5,20,20,23,10,30,25)

        ## For convenience, the data may be formatted into a dataframe
        severity <- as.data.frame(cbind(diseasesev,temperature))

        ## Fit a linear model for the data and summarize the output from function lm()
        severity.lm <- lm(diseasesev~temperature,data=severity)

        jpeg('~/Desktop/test1.jpg')

        # Take a look at the data
        plot(
          diseasesev~temperature,
          data=severity,
          xlab="Temperature",
          ylab="% Disease Severity",
          pch=16,
          pty="s",
          xlim=c(0,30),
          ylim=c(0,30)
        )
        title(main="Graph of % Disease Severity vs Temperature")
        par(new=TRUE) # don't start a new plot

        ## Get datapoints predicted by best fit line and confidence bands
        ## at every 0.01 interval
        xRange=data.frame(temperature=seq(min(temperature),max(temperature),0.01))
        pred4plot <- predict(
          lm(diseasesev~temperature),
          xRange,
          level=0.95,
          interval="confidence"
        )

        ## Plot lines derived from best fit line and confidence band datapoints
        matplot(
          xRange,
          pred4plot,
          lty=c(1,2,2), # vector of line types and widths
          type="l",     # type of plot for each column of y
          xlim=c(0,30),
          ylim=c(0,30),
          xlab="",
          ylab=""
        )
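    A minimal sketch of the flag-then-colour idea, shown in Python/NumPy as a language-neutral analogue (the thread itself is R, where one would compare the residuals against predict(..., interval="confidence")); the data and the 95% level come from the question, the rest is an assumption:

        import numpy as np
        from scipy import stats

        # Data from the question
        temperature = np.array([2, 1, 5, 5, 20, 20, 23, 10, 30, 25], dtype=float)
        severity = np.array([1.9, 3.1, 3.3, 4.8, 5.3, 6.1, 6.4, 7.6, 9.8, 12.4])

        # Ordinary least-squares fit
        slope, intercept = np.polyfit(temperature, severity, 1)
        fitted = intercept + slope * temperature

        # Standard error of the fitted mean at each x (classic CI-band formula)
        n = len(temperature)
        resid = severity - fitted
        s = np.sqrt(np.sum(resid**2) / (n - 2))
        sxx = np.sum((temperature - temperature.mean())**2)
        se_mean = s * np.sqrt(1.0 / n + (temperature - temperature.mean())**2 / sxx)

        # Points outside the 95% confidence band get flagged for a different colour
        t_crit = stats.t.ppf(0.975, n - 2)
        outside = np.abs(resid) > t_crit * se_mean
        print(outside)  # boolean mask; use it to pick the plotting colour per point

    The boolean mask is exactly the "separate column" the question proposes: store it alongside the data and use it to select the plotting colour per point.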


  • Extracting DCT coefficients from encoded images and video

    - by misha
    Is there a way to easily extract the DCT coefficients (and quantization parameters) from encoded images and video? Any decoder software must be using them to decode block-DCT encoded images and video, so I'm pretty sure the decoder knows what they are. Is there a way to expose them to whoever is using the decoder? I'm implementing some video quality assessment algorithms that work directly in the DCT domain. Currently, the majority of my code uses OpenCV, so it would be great if anyone knows of a solution using that framework. I don't mind using other libraries (perhaps libjpeg, but that seems to be for still images only), but my primary concern is to do as little format-specific work as possible (I don't want to reinvent the wheel and write my own decoders). I want to be able to open any video/image (H.264, MPEG, JPEG, etc.) that OpenCV can open, and if it's block DCT-encoded, to get the DCT coefficients. In the worst case, I know that I can write my own block DCT code, run the decompressed frames/images through it, and then I'd be back in the DCT domain. That's hardly an elegant solution, and I hope I can do better. Presently, I use the fairly common OpenCV boilerplate to open images:

        IplImage *image = cvLoadImage(filename);
        // Run quality assessment metric

    The code I'm using for video is equally trivial:

        CvCapture *capture = cvCaptureFromAVI(filename);
        while (cvGrabFrame(capture)) {
            IplImage *frame = cvRetrieveFrame(capture);
            // Run quality assessment metric on frame
        }
        cvReleaseCapture(&capture);

    In both cases, I get a 3-channel IplImage in BGR format. Is there any way I can get the DCT coefficients as well?
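    For reference, the worst-case fallback the question describes (re-applying a block DCT to the already-decoded pixels) is only a few lines with OpenCV's Python bindings. Note that this returns to the DCT domain but does not recover the encoder's original coefficients or quantization tables; a sketch, with the file name made up:

        import cv2
        import numpy as np

        # Load and convert to a single float32 channel, as cv2.dct requires
        img = cv2.imread("input.jpg")  # hypothetical file name
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32)

        # Crop to a multiple of the 8x8 block size
        h, w = gray.shape
        h, w = h - h % 8, w - w % 8
        gray = gray[:h, :w]

        # Forward 8x8 block DCT over the decoded pixels
        coeffs = np.empty_like(gray)
        for y in range(0, h, 8):
            for x in range(0, w, 8):
                coeffs[y:y+8, x:x+8] = cv2.dct(gray[y:y+8, x:x+8])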


  • Making a visual bar timer for iPhone

    - by Ohmnastrum
    I've looked up all the results for progress bars and changing the width of an image, but they only cover scaling, and the stock progress bars aren't customizable enough to fit other functions or design schemes... unless I missed that part. I'm trying to make a bar timer that crops off of the right over a period of time. I tried using an NSTimer so that it would subtract from a value each time its function is called. The Timerbar function gets called as a result of another timer invalidating, and that works. What doesn't work is that the width isn't changing, just the position. Furthermore, I keep getting values like inf and 0 for power and pwrBarWidth. I was sure the changes would occur when mult was plugged into the equation. It seems like casting mult as an int is causing problems, but I'm not sure exactly how.

        int pwrBarMaxWidth = 137;
        int pwrBarWidth = 0;
        int limit = 1;
        float mult;
        float power = 0;

        -(void) Timerbar:(NSTimer *)barTimer {
            if(!waitForPlayer) {
                [barTimer invalidate];
            }
            if(mult > 0.0) {
                mult -= 0.001 * [colorChoices count];
                if(mult < 0.0) {
                    mult = 0.0;
                }
            }
            power = (mult * 10) / pwrBarMaxWidth;
            // causes the bar to repeat after it reaches a certain point
            pwrBarWidth = (int)power % limit;
            // At this point, however, the variable power is always "inf"
            // and pwrBarWidth is always 0.
            [powerBar setBounds:CGRectMake(powerBar.frame.origin.x,
                                           powerBar.frame.origin.y,
                                           pwrBarWidth, 20)]; // supposed to change the crop of the bar
        }

    Any reason why I'm getting inf as a value for power, 0 as a value for pwrBarWidth, and why the bar itself isn't cropping? If this question is a bit vague, I'll provide more information as needed.


  • Frames with PHP

    - by user562123
    Hi guys. Can I use HTML frames with PHP? I presumed I could do this with:

        <?php
            session_start();
            require("auth.php");
            require("do_html_header.php");
            if($_SESSION['SESS_admin'] == 0)
                require("do_menu.php");
            else
                require("do_menu3.php");
            do_html_header();
            print "<h1>Welcome ". $_SESSION['SESS_FIRST_NAME']."!</h1>";
            do_menu();
        ?>
        </body>
        <frameset rows="50%,50%">
            <frame noresize="noresize" src="limits.php" />
            <frame noresize="noresize" src="limits.php" />
        </frameset>
        </html>

    I have put the frameset everywhere, but it never seems to show up. Google just confused me. Thanks in advance :D


  • How to resize a UISwitch?

    - by mshsayem
    I have made a custom UISwitch (from this post). But the problem is, my custom texts are a bit long. Is there any way to resize the switch? [I tried setBounds, did not work] Edit: Here is the code I used:

        @interface CustomUISwitch : UISwitch
        - (void) setLeftLabelText: (NSString *) labelText;
        - (void) setRightLabelText: (NSString *) labelText;
        @end

        @implementation CustomUISwitch
        - (UIView *) slider {
            return [[self subviews] lastObject];
        }
        - (UIView *) textHolder {
            return [[[self slider] subviews] objectAtIndex:2];
        }
        - (UILabel *) leftLabel {
            return [[[self textHolder] subviews] objectAtIndex:0];
        }
        - (UILabel *) rightLabel {
            return [[[self textHolder] subviews] objectAtIndex:1];
        }
        - (void) setLeftLabelText: (NSString *) labelText {
            [[self leftLabel] setText:labelText];
        }
        - (void) setRightLabelText: (NSString *) labelText {
            [[self rightLabel] setText:labelText];
        }
        @end

        mySwitch = [[CustomUISwitch alloc] initWithFrame:CGRectZero];

        //Tried these, but did not work
        //CGRect aFrame = mySwitch.frame;
        //aFrame.size.width = 200;
        //aFrame.size.height = 100;
        //mySwitch.frame = aFrame;

        [mySwitch setLeftLabelText: @"longValue1"];
        [mySwitch setRightLabelText: @"longValue2"];


  • How to add a clear option to this whiteboard?

    - by swift
    I have to add a clear-screen option to my whiteboard application. The usual procedure is to draw a filled rect the size of the image, but in my app I have transparent panels added one above the other as layers; if I follow the usual procedure, the drawing on the underlying panels won't be visible any more. Please suggest any logic to do this.

        public void createFrame() {
            JFrame frame = new JFrame();
            JLayeredPane layerpane = frame.getLayeredPane();

            board = new Whiteboard(client); // board is a transparent panel
            // transparent image:
            board.image = new BufferedImage(590, 690, BufferedImage.TYPE_INT_ARGB);
            board.setBounds(74, 23, 590, 690);
            board.setImage(image);

            virtualboard.setImage(image); // virtualboard is a transparent panel
            virtualboard.setBounds(74, 23, 590, 690);

            JPanel background = new JPanel();
            background.setBackground(Color.white);
            background.setBounds(74, 25, 590, 685);

            layerpane.add(board, new Integer(5));
            layerpane.add(virtualboard, new Integer(4)); // panel where the remote user draws
            layerpane.add(background, new Integer(3));
            layerpane.add(board.colourButtons(), new Integer(2));
            layerpane.add(board.shapeButtons(), new Integer(1));
            layerpane.add(board.createEmptyPanel(), new Integer(0));
        }


  • How can I set the JButtons in a preferred order?

    - by Umzz Mo
    I have a grid of 30 buttons, and I want the numbering to run left to right, then down a row and right to left, then down again and left to right. Basically the numbering would be as follows:

         1  2  3  4  5  6  7  8  9 10
        20 19 18 17 16 15 14 13 12 11
        21 22 23 24 25 26 27 28 29 30

    So if I have a piece on button 10 and I instruct it to advance 2 squares, it should land on 12, not 19 (see the sketch after the code below). Here is my code:

        // Creates the buttons in a loop and adds them to the panel and frame.
        JPanel panel = new JPanel();
        panel.setLayout(new GridLayout(3, 10));
        JButton[] buttons = new JButton[30];
        for (int i = 0; i < 30; i++) {
            buttons[i] = new JButton("label" + i);
            buttons[i].setBackground(Color.white);
            // Puts the player 1 piece on buttons 1,3,5,7,9 and the
            // player 2 piece on buttons 2,4,6,8,10
            if (i < 10) {
                if (i % 2 == 0) {
                    buttons[i].setIcon(piece1);
                } else {
                    buttons[i].setIcon(piece2);
                }
            }
            panel.add(buttons[i]);
        }
        frame.add(panel, BorderLayout.CENTER);
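    The position-to-button mapping is plain serpentine (boustrophedon) index arithmetic; a hedged sketch of that mapping in Python (the thread is Java, and the function name here is made up):

        def path_to_grid(pos, cols=10):
            """Map a 1-based board position on a serpentine path to a
            0-based row-major grid index (odd rows run right-to-left)."""
            row, col = divmod(pos - 1, cols)
            if row % 2 == 1:          # odd rows are reversed
                col = cols - 1 - col
            return row * cols + col

        # Position 12 sits directly under position 9: grid index 18
        assert path_to_grid(12) == 18

    With such a mapping, piece movement is done on the 1-30 path positions and only converted to a grid/button index when the UI is updated, so advancing 2 from position 10 naturally gives 12.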


  • OpenGL Pixel Format Attributes (NSOpenGLPixelFormatAttribute) explanation?

    - by nacho4d
    Hi, I am not new to OpenGL, but not an expert either. Many tutorials teach how to draw (3D, 2D, projections, orthogonal, etc.), but how about setting up the view itself (NSOpenGLView in Cocoa, on the Mac)? For example, I have this:

        - (id) initWithFrame: (NSRect) frame
        {
            GLuint attribs[] = { // PF: PixelAttributes
                NSOpenGLPFANoRecovery,
                NSOpenGLPFAWindow,
                NSOpenGLPFAAccelerated,
                NSOpenGLPFADoubleBuffer,
                NSOpenGLPFAColorSize, 24,
                NSOpenGLPFAAlphaSize, 8,
                NSOpenGLPFADepthSize, 24,
                NSOpenGLPFAStencilSize, 8,
                NSOpenGLPFAAccumSize, 0,
                0
            };
            NSOpenGLPixelFormat* fmt = [[NSOpenGLPixelFormat alloc]
                initWithAttributes: (NSOpenGLPixelFormatAttribute*) attribs];
            return self = [super initWithFrame:frame pixelFormat:[fmt autorelease]];
        }

    And I don't understand their usage very well, especially when combining them. For example: if I want my view to be capable of full screen, should I write NSOpenGLPFAFullScreen only, or both? (By "capable" I mean not always in full screen.) Regarding double buffering, what is this exactly? (Below is Apple's definition.)

        If present, this attribute indicates that only double-buffered pixel
        formats are considered. Otherwise, only single-buffered pixel formats
        are considered.

    Regarding colour: if NSOpenGLPFAColorSize is 24 and NSOpenGLPFAAlphaSize is 8, does that mean the alpha and RGB components are treated differently? What happens if I set the former to 32 and the latter to 0? Etc., etc. In general, how do I learn to set up my view from scratch? Thanks in advance. Ignacio.


  • Can't access media assets in an SWC

    - by Sold Out Activist
    I'm having trouble accessing content in an SWC. The main project compiles without error, but the assets (sound, music) aren't displayed or played. My workflow, using Flash CS5:

    1. Create MAsset.fla
    2. Import sounds, art
    3. Assign class names, check "export in frame 1"
    4a. Write out the classes in the actions panel in frame 1
    4b. OR add a document class and write out the classes there
    5. Export the SWC. The file size is similar to what it is when I directly import the assets into the main project library.
    6. Create Project.fla and document class Project.as
    7. Import the SWC into the main project through the ActionScript panel.
    8. Add code, which calls the class names from the SWC (e.g. DCL_UI_MOUSE)
    9. Compile. No compiler errors, but nothing doing. And the resulting SWF file size doesn't reflect anything more than the compiled code from the main project.

    Regarding step 4, if I just write the class name in the root timeline or document class, the compiler error goes away and the asset appears to be compiled into the SWC. But I have also tried:

        var asset0000:DCL_UI_MOUSE;

    And:

        var asset0000:DCL_UI_MOUSE = new DCL_UI_MOUSE();

    Regardless, the assets don't make it into the final SWF. What am I doing wrong?


  • UIScrollView -> UIView -> UIImageView

    - by RapidM
    Hi everyone, I am trying to create a horizontal UIScrollView with a few images in it. The user should be able to scroll horizontally between all the images, and if he holds down his finger the scrolling should stop. I thought about making:

    - a UIScrollView (320 x 122 px)
    - a UIView for the content, with a really big width
    - UIImageViews for the individual images

    Here is my code:

        headScrollView = [[UIScrollView alloc] initWithFrame:CGRectMake(0, 0, 320, 122)];
        [self.view addSubview:headScrollView];

        // Set up a content view programmatically
        headContentView = [[UIView alloc] initWithFrame:
            CGRectMake(0, 0, 500, headScrollView.frame.size.height)];
        [headContentView setBackgroundColor:[UIColor lightGrayColor]];
        [headScrollView addSubview:headContentView];
        [headScrollView setContentSize:headContentView.frame.size];

        headImageView = [[UIImageView alloc] init];
        NSMutableArray *elements = [appDelegate getElements];
        for (int i = 0; i < [elements count]; i++) {
            headImageView.image = [UIImage imageNamed:
                [[elements objectAtIndex:i] nameOfElement]];
            [headContentView addSubview:headImageView];
        }

    But it's not working; there is no image there :-( When I do an NSLog I get 3 names for the images, so that part should work... Any ideas? Thanks!


  • R Random Data Sets within loops

    - by jugossery
    Here is what I want to do: I have a time-series data frame with, let us say, 100 time series of length 600, each in one column of the data frame. I want to pick 4 of the time series at random and then assign them random weights that sum to one (e.g. 0.1, 0.5, 0.3, 0.1). Using those, I want to compute the mean of the sum of the 4 weighted time-series variables (e.g. a convex combination). I want to do this, let us say, 100k times and store each result in the form

        ts1.name, ts2.name, ts3.name, ts4.name, weight1, weight2, weight3, weight4, mean

    so that I get a 9*100k data frame. I tried some things already, but R is very bad with loops, and I know vector-oriented solutions are better because of R's design. Thanks. Here is what I did, and I know it is horrible. The data frame is in the form

        v1,v2,v3,.....v100
        1,5,6,.......9
        2,4,6,.......10
        3,5,8,.......6
        2,2,8,.......2

    etc.

        e=NULL
        for (x in 1:100000) {
            s=sample(1:100,4)              # pick 4 variables randomly
            a=sample(seq(0,1,0.01),1)
            b=sample(seq(0,1-a,0.01),1)
            c=sample(seq(0,(1-a-b),0.01),1)
            d=1-a-b-c
            e=c(a,b,c,d)                   # 4 random weights
            average=mean(timeseries.df[,s]%*%t(e))
            e=rbind(e,s,average)           # in the end I get the 9*100k df
        }

    The procedure runs way too slow. EDIT: Thanks for the help I've had. I am not used to thinking in R, and not very used to translating every problem into a matrix-algebra equation, which is what you need in R. The problem becomes a little more complex if I want to calculate the standard deviation: I need the covariance matrix, and I am not sure if/how I can pick random elements for each sample from the original timeseries.df covariance matrix and then compute the sample variance

        t(sampleweights)%*%sample_cov.mat%*%sampleweights

    to get, in the end, the ts.weighted_standard_dev matrix. Last question: what is the best way to proceed if I want to bootstrap the original df x times and then apply the same computations, to test the robustness of my data? Thanks.
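    For comparison, a hedged sketch of the vectorised idea in Python/NumPy (the thread itself is R; the Dirichlet draw below is an assumed stand-in for the asker's sequential uniform weights, and the random matrix stands in for the real data):

        import numpy as np

        rng = np.random.default_rng(0)
        data = rng.normal(size=(600, 100))   # stand-in for the 600 x 100 data frame

        K = 10_000                           # one chunk; repeat for the full 100k to limit memory
        idx = rng.integers(0, data.shape[1], size=(K, 4))  # 4 random series per draw
        w = rng.dirichlet(np.ones(4), size=K)              # 4 weights summing to 1

        # All draws at once: (600, K, 4) * (1, K, 4) -> sum -> (600, K) -> mean -> (K,)
        means = (data[:, idx] * w[None, :, :]).sum(axis=-1).mean(axis=0)

        # 9 columns per draw: 4 series indices, 4 weights, and the mean
        result = np.column_stack([idx, w, means])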


  • Game currency conversion: efficient math

    - by Comradsky
    For variables: 4 text views named diamondText, goldText, silverText, and bronzeText; a money variable, unsigned int money; and an NSTimer that, every 0.1 s, runs this function:

        -(void)updateMoney {
            money++;
            bronzeText.text  = [NSString stringWithFormat:@"%d", money];
            silverText.text  = [NSString stringWithFormat:@"%d", money % 10];
            goldText.text    = [NSString stringWithFormat:@"%d", money % 100];
            diamondText.text = [NSString stringWithFormat:@"%d", money % 1000];
        }

    Given that my currency is diamond = 10 gold = 10 silver = 10 bronze = 1, what would be the most efficient way to calculate and display the money labels? How would you store this variable: with Game Center and an NSDictionary, or Game Center and something else? More details are below if you don't understand. To clarify: bronze has the last 2 digits, silver has the next 2 digits, and so on. I understand I could use 4 ints or an array, but I would rather use this method, unless there's a much more efficient way. Example: when money = 1000, bronzeText = nothing, silverText = 10, goldText = nothing, diamondText = nothing. What other ways would you do this that you think would be more efficient? I will be calling a function (void)collisionDetector that detects if my player.frame intersects a flyingObject.frame, and if that object is a coin it adds value to money and then calls (void)updateMoney. I'm just using the timer to test this and spawn the flying objects.
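    On the arithmetic itself: splitting a running total into per-denomination digits is successive divmod rather than bare %. A sketch in Python, following the two-digits-per-denomination reading that matches the question's money = 1000 example (the question's "diamond = 10 gold" definition would use divmod by 10 instead; which reading is intended is an assumption here, and the function name is made up):

        def split_money(money):
            """Split a total into (diamond, gold, silver, bronze),
            two digits per denomination."""
            money, bronze = divmod(money, 100)
            money, silver = divmod(money, 100)
            diamond, gold = divmod(money, 100)
            return diamond, gold, silver, bronze

        # Matches the example in the question: money = 1000 -> silver 10, rest 0
        assert split_money(1000) == (0, 0, 10, 0)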


  • Python: Calling method A from class A within class B?

    - by Tommo
    There are a number of questions that are similar to this, but none of the answers hits the spot, so please bear with me. I am trying my hardest to learn OOP using Python, but I keep running into errors (like this one) which just make me think this is all pointless and it would be easier to just use methods. Here is my code:

        class TheGUI(wx.Frame):
            def __init__(self, title, size):
                wx.Frame.__init__(self, None, 1, title, size=size)
                # The GUI is made ...
                textbox.TextCtrl(panel1, 1, pos=(67,7), size=(150, 20))
                button1.Bind(wx.EVT_BUTTON, self.button1Click)
                self.Show(True)

            def button1Click(self, event):
                # It needs to do the LoadThread function!

        class WebParser:
            def LoadThread(self, thread_id):
                # It needs to get the contents of textbox!

        TheGUI = TheGUI("Text RPG", (500,500))
        TheParser = WebParser
        TheApp.MainLoop()

    So the problem I am having is that the GUI class needs to use a function that is in the WebParser class, and the WebParser class needs to get text from a textbox that exists in the GUI class. I know I could do this by passing the objects around as parameters, but that seems utterly pointless; there must be a more logical way to do this that doesn't make using classes seem so pointless. Thanks in advance!
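    One conventional shape for this, sketched as a hedged illustration rather than the thread's accepted fix: pass the GUI (or just its text control) into WebParser once, at construction time. The widget names and layout below are made up:

        import wx  # assumes wxPython is installed

        class WebParser:
            def __init__(self, gui):
                self.gui = gui                       # keep a reference, no globals

            def load_thread(self, thread_id):
                text = self.gui.textbox.GetValue()   # read the GUI's textbox
                print("parsing", thread_id, "with query", text)

        class TheGUI(wx.Frame):
            def __init__(self, title, size):
                super().__init__(None, title=title, size=size)
                panel = wx.Panel(self)
                self.textbox = wx.TextCtrl(panel, pos=(67, 7), size=(150, 20))
                button = wx.Button(panel, label="Go", pos=(67, 40))
                button.Bind(wx.EVT_BUTTON, self.on_button)
                self.parser = WebParser(self)        # the GUI owns its parser
                self.Show()

            def on_button(self, event):
                self.parser.load_thread(thread_id=1)

        app = wx.App()
        gui = TheGUI("Text RPG", (500, 500))
        app.MainLoop()

    Passing the reference once at construction is not "passing objects around as parameters" on every call; it is ordinary composition, and it keeps WebParser usable with anything that has a textbox attribute.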


  • How do I pass my view to a nib at run time (dynamically) in a view controller class?

    - by Rajendra Bhole
    Hi, I want to create a view controller class programmatically, in which the view from the view controller's nib file is not fixed. I have an UntitledViewController class that contains 4 UIButtons:

        -(IBAction)buttonPressed:(id)sender {
            if((UIButton *)sender == button1){
                tagNumber = button1.tag;
                NewViewController *newVC = [[NewViewController alloc] init];
                //newVC.tagN
                [self.navigationController pushViewController:newVC animated:YES];
                NSLog(@"Button Tag Number:- %i", tagNumber);
            }
            if((UIButton *) sender == button2){
                tagNumber = button2.tag;
                NewViewController *newVC = [[NewViewController alloc] init];
                [self.navigationController pushViewController:newVC animated:YES];
            }
            if((UIButton *) sender == button3){
                tagNumber = button3.tag;
                NewViewController *newVC = [[NewViewController alloc] init];
                [self.navigationController pushViewController:newVC animated:YES];
            }
            if((UIButton *) sender == button4){
                int x = tagNumber;
                tagNumber = button4.tag;
                NewViewController *newVC = [[NewViewController alloc] init];
                newVC.untitledVC = self;
                [self.navigationController pushViewController:newVC animated:YES];
            }
        }

    When I click any one of the UIButtons, I want a new NewViewController to be created with the button's tag property, but without a fixed nib file. In NewViewController's loadView event I want to pick the view of the NewViewController nib using the UIButton tag property (in NewViewController.xib I'm creating 4 UIViews named view1, view2, view3 and view4).

        -(void)loadView {
            [super loadView];
            CGRect frame = CGRectMake(0, 0, 320, 480);
            UIView *newView = [[UIView alloc] initWithFrame:frame];
            newView.tag = untitledVC.tagNumber;
            self.view1 = newView;
        }

    Now I want that when I click button1, NewViewController is created with view1; when I click button2, NewViewController is created with view2; and so on, each with a different implementation. Thanks.


  • AS3 - TypeError: Error #1009: Cannot access a property or method of a null object reference

    - by user571620
    I'm very new to Flash and I really have no idea what I'm doing; thank you in advance. It gives me that error after I click a button to go to another frame. After I get the error, some buttons do not go to their destinations and instead just do nothing. The error is as follows:

        TypeError: Error #1009: Cannot access a property or method of a null object reference.
            at wmhssports_fla::MainTimeline/frame39()

    Here is the code for frame 39:

        stop();

        winter_btn.addEventListener(MouseEvent.CLICK, buttonClick1);
        function buttonClick1(event:MouseEvent):void {
            gotoAndPlay(39);
        }

        spring_btn_boys.addEventListener(MouseEvent.CLICK, buttonClick10);
        function buttonClick10(event:MouseEvent):void {
            gotoAndPlay(114);
        }

        fall_btn_boys.addEventListener(MouseEvent.CLICK, buttonClick11);
        function buttonClick11(event:MouseEvent):void {
            gotoAndPlay(135);
        }

    Edit: I could email the file to someone if they could look at it for me. It's REALLY sketchy due to my inexperience with Flash, but then again it's not that big of a clip. Email me at: [email protected]


  • Moving Computer Player in Xcode

    - by user1631497
    I am working on a 2D game using Xcode and Objective-C. I am trying to make my enemy player move left if I am to the left of him, until he is touching my character; the same goes when I am to the right of him. Basically I'm just trying to get him to move towards me, but my code to do so is not working. So I have the following:

        -(void)moveTowardsCharacter {
            enemyLeftTimer = [NSTimer scheduledTimerWithTimeInterval:0.05
                                                              target:self
                                                            selector:@selector(goEnemyLeft)
                                                            userInfo:nil
                                                             repeats:YES];
            if (enemyLeftTimer == nil){
                enemyLeftTimer = [NSTimer scheduledTimerWithTimeInterval:0.05
                                                                  target:self
                                                                selector:@selector(goEnemyLeft)
                                                                userInfo:nil
                                                                 repeats:YES];
            }
        }

        -(IBAction)enemyStopLeft {
            if (CGRectIntersectsRect(bluebox.frame, redbox.frame)) {
                [enemyLeftTimer invalidate];
                enemyLeftTimer = nil;
            }
        }

        -(void)goEnemyLeft {
            if (redbox.center.x + [self getRedboxWidthLeft] >
                bluebox.center.x + [self getBlueboxWidthRight]) {
                redbox.center = CGPointMake(redbox.center.x - 5, redbox.center.y);
            }
        }

    The first bit sets up a timer to keep moving left. The next stops the timer if my player (bluebox) and the computer player (redbox) collide. Finally, there is the method that actually moves the enemy left, but only if the bluebox is left of the redbox. getRedboxWidthLeft and getBlueboxWidthRight are just methods that get, well, the left and right edges. If you could, please help solve this code. Thank you.
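    The intended per-tick behaviour (step towards the player, clamp so there is no overshoot, freeze on contact) can be sketched language-neutrally; an illustration in Python, not the thread's fix:

        def step_towards(enemy_x, player_x, speed=5):
            """Move the enemy one step along x towards the player, without overshooting."""
            if enemy_x > player_x:
                return enemy_x - min(speed, enemy_x - player_x)
            if enemy_x < player_x:
                return enemy_x + min(speed, player_x - enemy_x)
            return enemy_x

        def tick(enemy_x, player_x, touching):
            """One timer tick: freeze on contact, otherwise keep chasing."""
            return enemy_x if touching else step_towards(enemy_x, player_x)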


  • C# Reflection StackTrace get value

    - by John
    I'm making pretty heavy use of reflection in my current project to greatly simplify communication between my controllers and the WCF services. What I want to do now is to obtain a value from the Session within an object that has no direct access to HttpSessionStateBase (i.e. not a controller), for example a ViewModel. I could pass it in, or pass a reference to it, etc., but that is not optimal in my situation. Since everything comes from a controller at some point in my scenario, I can do the following to walk the stack to the controller where the call originated; pretty simple stuff:

        var trace = new System.Diagnostics.StackTrace();
        foreach (var frame in trace.GetFrames())
        {
            var type = frame.GetMethod().DeclaringType;
            var prop = type.GetProperty("Session");
            if (prop != null)
            {
                // not sure about this part...
                var value = prop.GetValue(type, null);
                break;
            }
        }

    The trouble here is that I can't seem to work out how to get the "instance" of the controller, or of the Session property, so that I can read from it.


  • Improving the efficiency of multiple concurrent Core Animation animations

    - by Alex
    I have a view in my app that is very similar to the month view in the built-in Calendar app. There's a subview that holds the individual cells (a custom UIView subclass that draws text into its layer), and when the user navigates to the next "month", I create the new cells and slide the view to show them. When the animation stops, I remove the old, hidden cells and set things up so it's ready to go for the next animation. This all works nicely. However, I'd like to animate the cells' text color, as in the Calendar app, so that the outgoing ones transition to a lighter color and the incoming ones transition to a darker color. The problem is that I can have as many as 70 cells, so doing individual animations is very slow: between 5 and 10 fps on my iPhone 3GS. I'm trying to find a less computationally intense way of doing this. My reading of the Shark results is that the majority of the time is spent redrawing the text of each cell on each frame. This makes sense, since text rendering is hardly the cheapest operation. I've considered creating a second view, one holding the "outgoing" state and one holding the "incoming" state, and using a single opacity animation to gradually reveal the updated cells while both are sliding. I'm concerned that instead of having 70 cells, I'll have 140, which seems like a lot of views. So, is that too many views, or would there be a better way of doing this?


  • How to set the attributes of a cell programmatically without using a nib file?

    - by srikanth rongali
        - (void)viewDidLoad {
            [super viewDidLoad];
            // Uncomment the following line to display an Edit button in the
            // navigation bar for this view controller.
            self.title = @"Library";
            self.navigationItem.rightBarButtonItem =
                [[UIBarButtonItem alloc] initWithTitle:@"Close"
                                                 style:UIBarButtonItemStyleBordered
                                                target:self
                                                action:@selector(close:)];
            self.tableView.rowHeight = 80;
        }

        -(void)close:(id)sender {
            //
        }

        - (UITableViewCell *)tableView:(UITableView *)tableView
                 cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            static NSString *CellIdentifier = @"Cell";

            UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
            if (cell == nil) {
                cell = [[[UITableViewCell alloc] initWithFrame:CGRectZero
                                               reuseIdentifier:CellIdentifier] autorelease];
                UILabel *dateLabel = [[UILabel alloc] init];
                dateLabel.frame = CGRectMake(85.0f, 6.0f, 200.0f, 20.0f);
                dateLabel.tag = tag1;
                [cell setAccessoryType:UITableViewCellAccessoryDisclosureIndicator];
                cell.contentView.frame = CGRectMake(0.0f, 0.0f, 320.0f, 80.0f);
                [cell.contentView addSubview:dateLabel];
                [dateLabel release];
            }

            // Set up the cell...
            //[(UILabel *)[cell viewWithTag:tag1] setText:@"Date"];
            cell.textLabel.text = @"Date";
            return cell;
        }

    But control never enters the tableView: method, so I cannot see any label in my table. How can I make this work? Thank you.


  • Using large transparent pictures as texture atlases

    - by azlisum
    I'm new to Android programming and I'm trying to create a relatively big 2D game. I have to use lots of images and objects in my game, so I decided to use OpenGL ES. I have several texture atlases, all of them saved as PNGs because of the transparency. I also know, though I'm not sure why, that I have to use images whose height and width are powers of two. I test my game on an old HTC Hero running Android 2.3.3. When my texture atlases are 512x512 each, my game has a frame rate of between 50 and 60 fps. When I use a 1024x1024 non-transparent PNG, there is no problem either: the FPS is again between 50 and 60. But when I decide to use 1024x1024 transparent PNGs, my frame rate drops to 4.5 fps. Could this be a problem related to the age of the device I'm using for testing? These are the OpenGL functions I use each loop to draw batches:

        gl.glEnable(GL10.GL_TEXTURE_2D);
        gl.glEnable(GL10.GL_BLEND);
        gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);
        // drawing happens here
        gl.glDisable(GL10.GL_BLEND);

    Thanks in advance :)


  • How to limit EditText lines to 1 by coding (ignoring enter)?

    - by Vahe Musinyan
    I am trying to ignore the enter key, but I do not want to use the onKeyDown() function. There is a way to do this in XML:

    1. android:maxLines="1"
    2. android:lines="1"
    3. android:singleLine="true"

    I actually want to do the last one in code. Does anyone know how to do that?

        for (int i = 0; i < numClass; i++) {
            temp_ll = new LinearLayout(this);
            temp_ll.setOrientation(LinearLayout.HORIZONTAL);

            temp1 = new EditText(this);
            InputFilter[] FilterArray = new InputFilter[1];
            FilterArray[0] = new InputFilter.LengthFilter(12);
            temp1.setFilters(FilterArray); // set edit text length to max 12
            temp1.setHint(" class name ");
            temp1.setSingleLine(true);

            temp_ll.addView(temp1);
            frame.addView(temp_ll);
        }
        ll.addView(frame);


  • CLSF & CLK 2013 Trip Report by Jeff Liu

    - by jamesmorris
    This is a contributed post from Jeff Liu, lead XFS developer for the Oracle mainline Linux kernel team.

    Recently, I attended both the China Linux Storage and Filesystem workshop (CLSF) and the China Linux Kernel conference (CLK), which were held in Shanghai. Here are the highlights of both events.

    CLSF - 17th October

    XFS update (led by Jeff Liu)

    XFS keeps up a rapid pace of progress with a lot of changes, especially focused on infrastructure/performance improvements as well as new feature development. This can be seen in a sample statistic comparing XFS, Ext4+JBD2 and Btrfs via:

        # git diff --stat --minimal -C -M v3.7..v3.12-rc4 -- fs/xfs|fs/ext4+fs/jbd2|fs/btrfs

        XFS:       141 files changed, 27598 insertions(+), 19113 deletions(-)
        Ext4+JBD2:  39 files changed, 10487 insertions(+),  5454 deletions(-)
        Btrfs:      70 files changed, 19875 insertions(+),  8130 deletions(-)

    What made up those changes in XFS?

    - Self-describing metadata (CRC32c). This is a new feature and it contributed about 70% of the code changes; it can be enabled via `mkfs.xfs -m crc=1 /dev/xxx` for the v5 superblock.
    - Transaction log space reservation improvements. With this change, we can calculate the log space reservation at mount time rather than at runtime, reducing CPU overhead.
    - User namespace support, so both XFS and USERNS can be enabled in the kernel configuration beginning from Linux 3.10. Thanks to Dwight Engen's efforts on this.
    - Split project/group quota inodes. Originally, project quota could not be enabled together with group quota because they shared the same quota file inode; now it works, but only for the v5 superblock, i.e. with CRC enabled.
    - CONFIG_XFS_WARN, a new lightweight runtime debugger which can be deployed in production environments.
    - Readahead during log object recovery; this change speeds up log replay significantly.
    - Speculative preallocation inode tracking, clearing and throttling. The main purpose is to deal with inodes that have post-EOF space due to speculative preallocation, and to support improved quota management that frees up a significant amount of unwritten space when at or near EDQUOT. It supports background scanning, which occurs on a longish interval (5 minutes by default, tunable), and on-demand scanning/trimming via ioctl(2).

    Bitter arguments ensued in this session, especially over the comparison between Ext4 and Btrfs in different areas; I had to spend the whole first morning answering those questions. We basically agreed that XFS is the best choice on Linux nowadays, because:

    - Stability: XFS has a good stability record over the past 10 years. Fengguang Wu, who leads the 0-day kernel test project, also said that he has observed fewer errors than in other filesystems over the past year or more. I owe this to the XFS upstream code reviewers, who always perform serious code review as well as testing.
    - Good performance for large and small files. "XFS does not work very well for small files" has been an old story for years now.
    - Best choice (maybe) for distributed PB-scale filesystems; e.g. Ceph recommends deploying OSD daemons on XFS because Ext4 has a limited xattr size.
    - Best choice for large storage (>16TB); Ext4 does not support a single file larger than around 15.95TB.
    - Scalability: any objection to XFS being best on this point? :)
    - XFS handles transaction concurrency better than Ext4. Why? The maximum size of the log in XFS is 2038MB, compared to 128MB in Ext4.
    - Misc: Ext4 is widely used and has been proven fast and stable in various loads and scenarios; XFS just needs more customers; and Btrfs is still on the road to manhood.
    Ceph Introduction (led by Li Wang)

    This was a hot topic. Li gave us a nice introduction to the design as well as their current work. The Ceph client has been included in the Linux kernel since 2.6.34 and supported by OpenStack since Folsom, but it seems it has not yet been widely deployed in production. Their major work focuses on inline data support: with metadata and data storage separated, a file access needs two communications (fetch the metadata from the MDS, then get the data from the OSD), and small-file access is further limited by network latency. The solution: for small files, they would like to store the data with the metadata, so that when accessing a small file the metadata server can push both metadata and data to the client at the same time. In this way, they can avoid the overhead of calculating the data offset and save the communication with the OSD. They have only run some small-scale tests of this feature, but really saw noticeable improvements. Test environment: Intel 2-CPU 12-core, 64GB RAM, Ubuntu 12.04, Ceph 0.56.6 with 200GB SATA disks, 15 OSDs, 1 MDS, 1 MON. The sequential read performance for 1K files improved by about 50%. I asked Li and Zheng Yan (the core developer of Ceph, who also worked on Btrfs) whether Ceph is really stable and can be deployed in production for large-scale, PB-level storage, but they could not give a positive answer; it looks like Ceph is not even rolled out across DreamHost (subject to confirmation). According to Li, they have only deployed Ceph on small-scale storage (32 nodes), although they'd like to try 6000 nodes in the future.

    Improving Linux swap for flash storage (led by Shaohua Li)

    Because of its high density, low power and low price, flash storage (SSD) is a good candidate to partially replace DRAM. A quick answer to this is using SSD as swap, but Linux swap is designed for slow hard-disk storage, so there are a lot of challenges to using SSD efficiently for swap.

    SWAPOUT:

    - swap_map scan: swap_map is the in-memory data structure that tracks swap disk usage, but scanning it is a slow linear operation, which becomes a bottleneck when finding many adjacent pages for SSD use. Shaohua Li changed it to a cluster (128K) list, resulting in an O(1) algorithm. However, this approach needs restrictive cluster alignment and is only enabled for SSDs.
    - IO pattern: in most cases, swap IO arrives in an interleaved pattern, because of multiple reclaimers or because a free cluster is shared by all reclaimers. Even though the block layer can merge interleaved IO to some extent, we cannot count on it completely. Hence a per-CPU cluster was added on top of the previous change; it helps a reclaimer do sequential IO, and the block layer can merge IO more easily.
    - TLB flush: if we're reclaiming an active page, we first move the page from the active LRU list to the inactive LRU list, and then reclaim it from the inactive LRU to swap it out. During this process, we need to clear the PTE twice: first the 'A' (ACCESSED) bit, then the 'P' (PRESENT) bit. Processors need to send lots of IPIs, which makes the TLB flush really expensive. Some work has been done to improve this, including reworking smp_call_function_many() or removing the first TLB flush on x86, but there are still arguments here and only part of the work has been pushed to mainline.

    SWAPIN: a page fault does iodepth=1 synchronous IO, which is a bit of a waste if it only issues a page-sized IO. The obvious solution is swap readahead.
    But the current in-kernel swap readahead is arbitrary (always 8 pages), and it doesn't always perform well for either random or sequential access workloads. Shaohua introduced a new use of the madvise(MADV_WILLNEED) flag to do swap prefetch, so the changes happen in the userspace API and leave the in-kernel readahead unchanged (though I think some improvement could also be done there).

    SWAP discard: as we know, discard is important for SSD write throughput, but the current swap discard implementation is synchronous. He changed it to asynchronous discard, which allows discard and write to run at the same time; meanwhile, the unit of discard is also optimized to a cluster.

    Misc: lock contention. With many concurrent swapouts and swapins, the contention on locks such as anon_vma or swap_lock is high, so he changed swap_lock to a per-swap-device lock. But some lock contention remains on very fast SSDs because of the swapcache address_space lock.

    Zproject (led by Bob Liu)

    Bob gave us a very nice introduction to the current status of memory compression. There are now 3 projects (zswap/zram/zcache) which all aim to smooth swap IO storms and improve performance, but they all have their own pros and cons.

    - ZSWAP: implemented on the frontswap API, it uses a dynamic allocator named zbud to allocate free pages. "Zbud" means pairs of zpages are "buddied", and it can store at most two compressed pages in one page frame, so the maximum compression ratio is 50%. Each page frame is LRU-linked and can be shrunk under memory pressure. If the compressed memory pool reaches its limit, shrinking or reclaim happens: a page frame is decompressed into two newly allocated pages, which are then written to the real swap device, but this can fail when allocating the two pages.
    - ZRAM: acts as a compressed ramdisk used as a swap device, and uses zsmalloc as its allocator, which has high density but may have fragmentation issues. Besides, page reclaim is hard, since it needs more pages to uncompress and free just one page. ZRAM is preferred by embedded systems, which may not have any real swap device. Both ZRAM and ZSWAP are currently in the drivers/staging tree, and in the mm community there are discussions about merging ZRAM into ZSWAP or vice versa, but no agreement yet.
    - ZCACHE: handles file page compression, but it was removed from staging recently.

    From industry (led by Tang Jie, LSI)

    An LSI engineer introduced several new products to us. The first was RAID 5/6 cards that use full-stripe writes to improve performance. The second was the SandForce flash controller, which can understand data file types (data entropy) to reduce write amplification (WA) for nearly all writes; it's called DuraWrite, and the typical WA is 0.5. What's more, if its Dynamic Logical Capacity function module is enabled, the controller can do data compression transparently to the upper layers. LSI testing shows that with this virtual capacity enabled, a 1x TB drive can support up to 2x TB of data, but the application must monitor free flash space to maintain optimal performance and to guard against free-flash-space exhaustion. He said the most useful application for this is databases. Another thing I think is worth mentioning is the NV-DRAM memory in NMR/Raptor, which is directly exposed to the host system: applications can directly access the NV-DRAM via a memory address, using the standard system call mmap(). He said that it is very useful for database logging now.
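    As a hedged illustration of that mmap() access pattern (not LSI's actual interface; the device node below is made up), mapping such a region and writing it like ordinary memory looks like this in Python:

        import mmap

        # Map a region and hint/read/write it like plain memory.
        # "/dev/nvdram" is a hypothetical device node for illustration.
        with open("/dev/nvdram", "r+b") as f:
            mm = mmap.mmap(f.fileno(), 4096)   # map one page of the device
            mm[0:5] = b"LOG: "                 # write a log-record header in place
            mm.flush()                         # msync the changes back
            mm.close()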
    This kind of NVM product has begun to appear in recent years, and it is said that Samsung is building a research center in China for related products. IMHO, NVM will have an effect on the current OS layers, especially on file systems; e.g. journaling may need a redesign to fully utilize this non-volatile memory.

    OCFS2 (led by Canquan Shen)

    Without a doubt, Huawei has been the biggest contributor to OCFS2 in the past two years. They have posted 46 upstream patches, and 39 patches have been merged. Their current project is based on 32/64-node clusters, but they have also tried 128 nodes at the experimental stage. The major work in progress is support for ATS (atomic test and set), which can work together with DLM. This idea looks inspired by the VMware VMFS locking, i.e. http://blogs.vmware.com/vsphere/2012/05/vmfs-locking-uncovered.html

    CLK - 18th October 2013

    Improving Linux Development with Better Tools (Andi Kleen)

    This talk focused on how to find and solve bugs as Linux's complexity grows. Generally, we can do this with the following kinds of tools:

    - Static code checkers, e.g. sparse, smatch, coccinelle, clang checker, checkpatch, gcc -W/LTO, stanse. These can check a lot of things, from simple mistakes to complex problems, but the challenges are: some are very slow; false positives; and a concentrated effort may be needed to get the false positives down. In particular, no static checker he has found can follow indirect calls ("OO in C", common in the kernel):

        struct foo_ops {
            int (*do_foo)(struct foo *obj);
        };

        foo->do_foo(foo);

    - Dynamic runtime checkers, e.g. thread checkers, kmemcheck, lockdep. Ideally all kernel code would come with a test suite; then someone could run all the dynamic checkers.
    - Fuzzers/test suites, e.g. Trinity is a great tool; it finds many bugs, but needs a manual model for each syscall. Modern fuzzers are built around automatic feedback, but not for the kernel yet: http://taviso.decsystem.org/making_software_dumber.pdf
    - Debuggers/tracers to understand code, e.g. ftrace; it can dump on events/oopses/custom triggers, but in many cases still has too much overhead to run at all times during debugging.
    - Tools to read and understand source, e.g. grep/cscope work great for many cases, but do not understand indirect pointers (the OO-in-C model used in the kernel). "Give us all do_foo instances":

        struct foo_ops {
            int (*do_foo)(struct foo *obj);
        } = {
            .do_foo = my_foo,
        };

        foo->do_foo(foo);

      It would be great to have a cscope-like tool that understands this based on types/initializers (a toy sketch of such a scan appears at the end of this report).

    XFS: The High Performance Enterprise File System (Jeff Liu) [slides]

    I gave a talk introducing the disk layout and unique features, as well as the recent changes. The slides include some charts reflecting the performance of XFS/Btrfs/Ext4 for small files. About a dozen attendees raised their hands when I asked who had experience with XFS. I remember that when I asked the same question at LinuxCon Japan, only 3 people raised their hands, but they were Chris Mason, Ric Wheeler, and another attendee. The audience questions were mainly focused on stability and comparison with other file systems.

    Linux Containers (Feng Gao)

    The speaker introduced the purpose of the various kinds of namespaces, including mount/UTS/IPC/Network/Pid/User, as well as the system API/ABI. For the userspace tools, he mainly focused on libvirt LXC rather than LXC itself.
    Libvirt LXC is another userspace container management tool, implemented as one type of libvirt driver; it can manage containers, create namespaces, create a private filesystem layout for a container, create devices for a container, and set up resource controllers via cgroups. In this talk, Feng also mentioned two possible new namespaces for the future: the first is audit, though it is not yet clear whether it should be assigned to the user namespace or not; the other is syslog, but the question is, do we really need it?

    In-memory Compression (Bob Liu)

    The same nice introduction as at CLSF, which I have already covered above.

    Misc

    There were some other talks, related to ACPI-based memory hotplug, smart wake-affinity in the scheduler, etc., but my head is not big enough to record all of them.

    -- Jeff Liu
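    To make the "cscope that understands initializers" wish concrete, here is a toy, regex-level sketch in Python (an illustration only; a real tool would need type information, and this will also match unrelated ".member = value" assignments):

        import re
        import sys
        import pathlib

        # Scan C sources for designated initializers like ".do_foo = my_foo"
        # and build a map from callback name to the functions assigned to it.
        ASSIGN = re.compile(r"\.(\w+)\s*=\s*(\w+)")

        impls = {}
        for path in pathlib.Path(sys.argv[1]).rglob("*.c"):
            for member, func in ASSIGN.findall(path.read_text(errors="ignore")):
                impls.setdefault(member, set()).add(func)

        # "Give us all do_foo instances" becomes a dictionary lookup
        print(sorted(impls.get("do_foo", ())))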


  • openconnect on Ubuntu 14.04 LTS: I get "XML response has no "auth" node"

    - by Jas
    I run openconnect to connect to a Juniper VPN as follows:

        $ openconnect --version
        OpenConnect version v5.02
        Using GnuTLS. Features present: PKCS#11, TOTP software token, DTLS (using OpenSSL)

        $ sudo openconnect -v -u=myuser --no-xmlpost --no-proxy https://myserver
        Got HTTP response: HTTP/1.1 200 OK
        Content-Type: text/html; charset=utf-8
        Date: Mon, 25 Aug 2014 07:24:03 GMT
        x-frame-options: SAMEORIGIN
        Pragma: no-cache
        Cache-Control: no-store
        Expires: -1
        Transfer-Encoding: chunked
        HTTP body chunked (-2)
        XML response has no "auth" node
        Failed to obtain WebVPN cookie

    Can anyone help, please?


  • Errors when switching to specific static IP

    - by michaelc
    I had a Fedora box running using my static IP 69.169.136.6, etc., all configured according to what the ISP required. Just recently the hard drive failed (and I should have been keeping better backups); while it is being recovered, I would like to put up a web page on my Arch Linux PC explaining the problem. I presently do not have sufficient access to change the DNS record assigned to the domain. When I change my IP address to 69.169.136.6 while my system is running, ifconfig reports the new IP address, but http://whatismyip.com/ does not. When I change it and reboot, I can't ping; the message I receive is "connect: Network is unreachable" (when given one of google.com's IP addresses; hostnames give me "ping: unknown host xxx"). Until I have access to the DNS system, what can I do to make this work?

    Edit: with the new IP address, same problem; the IP is now 69.169.136.29. Some command output might be useful:

        # ping 69.169.136.1
        PING 69.169.136.1 (69.169.136.1) 56(84) bytes of data.
        64 bytes from 69.169.136.1: icmp_seq=1 ttl=64 time=0.377 ms

        # ping 69.169.190.211
        connect: Network is unreachable

        # ping 208.72.160.67
        connect: Network is unreachable

        # ifconfig
        eth0      Link encap:Ethernet  HWaddr 00:E0:4D:97:23:9B
                  inet addr:69.169.136.29  Bcast:69.169.137.255  Mask:255.255.254.0
                  inet6 addr: fe80::2e0:4dff:fe97:239b/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:132091 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:17 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:9635179 (9.1 Mb)  TX bytes:1322 (1.2 Kb)
                  Interrupt:29 Base address:0x6000

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:48 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:48 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:2480 (2.4 Kb)  TX bytes:2480 (2.4 Kb)

        # ip route
        69.169.136.0/23 dev eth0  proto kernel  scope link  src 69.169.136.29

        # cat /etc/resolv.conf
        # Generated by dhcpcd
        #nameserver 208.67.222.222
        #nameserver 208.67.220.220
        nameserver 69.169.190.211
        nameserver 208.72.160.67
        # /etc/resolv.conf.tail can replace this line

    Update: I have new static IP addresses, verified to work in Windows... Relevant portions of /etc/rc.conf below:

        # Static IP example
        #eth0="eth0 69.169.136.6 netmask 255.255.254.0 broadcast 69.169.136.1"
        #eth0="eth0 69.169.136.29 netmask 255.255.254.0 broadcast 69.169.137.255"
        eth0="eth0 69.169.136.32 netmask 255.255.254.0 broadcast 69.169.137.255"
        #eth0="dhcp"
        INTERFACES=(eth0)

        # Routes to start at boot-up (in this order)
        # Declare each route then list in ROUTES
        #  - prefix an entry in ROUTES with a ! to disable it
        #gateway="default gw 192.168.0.1"
        gateway="default gw 69.169.136.1"
        #gateway="69.169.136.1"
        ROUTES=(!gateway)
        #ROUTES=()


  • eth0:0 is configured but not listed in ifconfig output

    - by FractalizeR
    Hello. I have the following problem: my server was given two IPs from different subnets. Now I am trying to configure the system to work properly. I have created:

        [root@server ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
        # Intel Corporation 80003ES2LAN Gigabit Ethernet Controller (Copper)
        HWADDR=00:30:48:DA:B1:0E
        DEVICE=eth0
        BOOTPROTO=none
        BROADCAST=79.174.69.255
        IPADDR=79.174.69.241
        NETMASK=255.255.254.0
        NETWORK=79.174.68.0
        ONBOOT=yes
        GATEWAY=79.174.68.1
        TYPE=Ethernet

        [root@server ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0:0
        # Intel Corporation 80003ES2LAN Gigabit Ethernet Controller (Copper)
        HWADDR=00:30:48:DA:B1:0E
        DEVICE=eth0
        BOOTPROTO=none
        BROADCAST=79.174.69.255
        IPADDR=79.174.71.74
        NETMASK=255.255.255.0
        NETWORK=79.174.71.1
        ONBOOT=yes
        GATEWAY=79.174.71.1
        TYPE=Ethernet

    But both after "service network restart" and after "reboot":

        [root@server ~]# ifconfig
        eth0      Link encap:Ethernet  HWaddr 00:30:48:DA:B1:0E
                  inet addr:79.174.71.74  Bcast:79.174.71.255  Mask:255.255.255.0
                  inet6 addr: fe80::230:48ff:feda:b10e/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:910284 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:2924 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:100
                  RX bytes:257964879 (246.0 MiB)  TX bytes:232450 (227.0 KiB)
                  Memory:df220000-df240000

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:27 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:27 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:6976 (6.8 KiB)  TX bytes:6976 (6.8 KiB)

    Device eth0:0 is not shown as active. If I try:

        [root@server ~]# ifconfig eth0:0
        eth0:0    Link encap:Ethernet  HWaddr 00:30:48:DA:B1:0E
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  Memory:df220000-df240000

    it is shown as up and running, but no IP is assigned to it. It is also strange that the IP address assigned to eth0:0 in the config file is used by eth0 instead. /var/log/messages shows nothing about network configuration errors on either eth0 or eth0:0. system-config-network seems to understand all the settings correctly and re-saves them fine as well. "ifup eth0:0" executes OK, but ifconfig shows no eth0:0 device after that. What did I do wrong? Maybe the problem is that the IPs are from different subnets?

