Search Results

Search found 7188 results on 288 pages for 'frame buffer'.

  • How to call a javascript function from one frame to another in Chrome/Webkit

    - by bambax
    I have developed an application that has a list of items in one frame; when one clicks on an item, it does something in another frame (loads an image). This used to work fine in all browsers, including Chrome 3; it still works fine in FF, but recent versions of Chrome (I believe since 4) throw this error:

        Unsafe JavaScript attempt to access frame with URL (...) from frame with URL (...). Domains, protocols and ports must match.

    This is obviously a security "feature", but is it possible to get around it? Here is a simple test. index.html:

        <html>
          <frameset>
            <frame src="left.html" name="left"/>
            <frame src="right.html" name="right"/>
          </frameset>
        </html>

    left.html:

        <html>
          <body>
            <a href="javascript:parent.right.test('hello');">click me</a>
          </body>
        </html>

    right.html:

        <html>
          <body>
            <script>
              function test(msg) { alert(msg); }
            </script>
          </body>
        </html>

    The above works in FF 3.6 and Chrome 3, but in Chrome 5 it throws the above error...
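
    A hedged sketch of one commonly suggested workaround, not from the original post: replace the direct cross-frame call with window.postMessage, which is designed for exactly this kind of frame-to-frame signalling (file names follow the test case above; whether this is enough for file:// URLs would need testing - Chrome can also be started with the --allow-file-access-from-files flag). left.html sends a message instead of calling parent.right.test() directly:

        <a href="#" onclick="parent.frames['right'].postMessage('hello', '*'); return false;">click me</a>

    and right.html listens for it:

        <script>
          window.addEventListener('message', function (e) {
            // e.data carries the string sent from the other frame
            alert(e.data);
          }, false);
        </script>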

  • Inconsistent Frame / Bounds of UIImage and UIScrollView

    - by iFloh
    This one puzzles me. I have an image with a width of 1600 pixels and a height of 1819 pixels. I load the image as a UIImageView into a UIScrollView and set the contentSize property:

        iKarteScrollView.contentSize = CGSizeMake(iKarteImageView.bounds.size.width, iKarteImageView.bounds.size.height);
        NSLog(@"Frame - width %.6f, height %.6f - Bounds - width %.6f, height %.6f",
              myImageView.frame.size.width, myView.frame.size.height,
              myImageView.bounds.size.width, myImageView.bounds.size.height);
        NSLog(@"Content size width %.6f, height %.6f",
              myScrollView.contentSize.width, myScrollView.contentSize.height);

    The NSLog shows the following:

        Frame - width 1600.000000, height 1819.000000 - Bounds - width 1600.000000, height 1819.000000
        Content size width 1600.000000, height 1819.000000

    Now comes the miracle: in a subsequent method of the same object I call the same NSLog again, but this time the result is

        Frame - width 405.000000, height 411.000000 - Bounds - width 1601.510864, height 1625.236938
        Content size width 404.617920, height 460.000000

    Why is the frame size suddenly 405 by 411 pixels? How can the bounds be 1601 by 1625, where 1625 is roughly 200 pixels less than the original size? When positioning a further UIImageView at the coordinates of 20 by 1625, the UIImageView is displayed an estimated 200 pixels above the bottom of the content of the UIScrollView. I'm lost ...
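
    A hedged guess at what to check, not from the original thread: UIScrollView zooming applies a transform to the zoomed view, and a transform changes frame but not bounds, which would explain frame and bounds disagreeing; logging the scroll view's zoomScale in the same later method would confirm or rule this out:

        // Hypothetical diagnostic: if zoomScale is no longer 1.0, a frame/bounds
        // mismatch is expected, since frame reflects the applied zoom transform.
        NSLog(@"zoomScale %.6f, transform a=%.6f d=%.6f",
              myScrollView.zoomScale,
              myImageView.transform.a, myImageView.transform.d);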

  • ActionScript: Applying a frame to an image / background?

    - by Jay
    I am editing a custom calendar application in Flash. The purpose of this app is to let you select your own images and create a calendar out of them. You can basically drag and drop images of your choice and then apply frames/borders, or drag and drop embellishments. Here is the piece of code that draws a border/frame on the embellishment/image of your choice:

        tempListener.onLoadInit = function(target_mc:MovieClip) {
            var mcName = target_mc._name.substring(0, target_mc._name.indexOf("@", 0));
            if (mcName == "frame_Image") {
                target_mc.onPress = function() {
                    if (_root.selectedImage != null) {
                        var index = this._name.substring(this._name.indexOf("@", 0) + 1, this._name.length);
                        var objPath = nodesFrames.childNodes[index - 1].attributes.image;
                        if (_root.selectedImage._name.split("@")[0] == "image") {
                            var mask = _root.selectedImage[_root.selectedImage._parent._name + "_" + _root.selectedImage._name + "_maskMc"];
                            frameImageWidth = mask._width;
                            frameImageHeight = mask._height;
                            frameImageXScale = -1;
                            frameImageYScale = -1;
                        } else {
                            frameImageXScale = _root.selectedImage._xscale;
                            frameImageYScale = _root.selectedImage._yscale;
                            _root.selectedImage._xscale = 100;
                            _root.selectedImage._yscale = 100;
                            frameImageWidth = _root.selectedImage._width;
                            frameImageHeight = _root.selectedImage._height;
                        }
                        if (_root.selectedImage["frame"]) {
                        } else {
                            _root.selectedImage.createEmptyMovieClip("frame", _root.selectedImage.getNextHighestDepth());
                        }
                        var image_mcl1:MovieClipLoader = new MovieClipLoader();
                        image_mcl1.addListener(_root.mclFrameListener);
                        image_mcl1.loadClip("Images/" + objPath, _root.selectedImage["frame"]);
                    }
                }
            }
        }

    I need to somehow apply the chosen frame image to the entire background - not just to the embellishment or image. How do I go about this? Thanks in advance for your inputs. Please let me know if the question doesn't make sense; I will attach some images that can help you with the context.
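
    A hedged AS2 sketch of one way to do this, not from the original post (the clip name background_frame and the listener are hypothetical): load the frame image into a clip that sits on _root rather than on the selected image, then stretch it to the stage dimensions, reusing objPath from the handler above:

        // Hypothetical: a frame clip attached to the root, covering the whole stage
        _root.createEmptyMovieClip("background_frame", _root.getNextHighestDepth());
        var bg_mcl:MovieClipLoader = new MovieClipLoader();
        var bgListener:Object = new Object();
        bgListener.onLoadInit = function(target_mc:MovieClip) {
            // Stretch the loaded frame image to cover the whole stage
            target_mc._x = 0;
            target_mc._y = 0;
            target_mc._width = Stage.width;
            target_mc._height = Stage.height;
        };
        bg_mcl.addListener(bgListener);
        bg_mcl.loadClip("Images/" + objPath, _root.background_frame);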

  • mmap only needed pages of kernel buffer to user space

    - by axeoth
    See also this answer: http://stackoverflow.com/a/10770582/1284631

    I need something similar, but without having to allocate a buffer: the buffer is large in theory, but the user-space program only needs to access some parts of it (it mocks some registers of a hardware device). As I cannot allocate such a large buffer with vmalloc_user() (32-bit kernel, embedded environment, no swap...), I followed the same approach as in the quoted answer, trying to allocate only those pages that are really requested by user space. So: I use a my_mmap() function for the device file (actually, it is the .mmap field of a struct uio_info) to set up the fields of the vma; then, in the .fault handler of the vm_operations_struct (which I named my_fault()), I should return a page. Except that in my_fault() I cannot obtain a page through

        vmf->page = vmalloc_to_page(my_buf + (vmf->pgoff << PAGE_SHIFT));

    since there is no allocated buffer:

        my_buf = vmalloc_user(MY_BUF_SIZE);

    fails with "allocation failed: out of vmalloc space - use vmalloc= to increase size." (and there is no room or swap to increase that vmalloc= parameter). So I would need to get a page from the kernel and fill in the vmf->page field. My questions:

    1. How do I allocate a page (I assume that the offset of the page is known, as it is vmf->pgoff)?
    2. What base memory should I use instead of my_buf?

    PS: I also did set up vma->flags |= VM_NORESERVE; (in my_mmap()), but I'm not sure it helps. Is there any vmalloc_user_unreserved()-like function (let's say, lazy allocation)? Also, writing 1 to /proc/sys/vm/overcommit_memory and large values (e.g. 500) to /proc/sys/vm/overcommit_ratio before trying my_buf = vmalloc_user(<large_size>) didn't work.
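
    A hedged sketch of on-demand page allocation in the fault handler, not from the original post: alloc_page() hands back a single page with no vmalloc range behind it, so the "base memory" can simply be a driver-private array remembering which offsets have been faulted in already. MY_NR_PAGES and my_pages[] are hypothetical names; the handler signature matches older kernels like the one in the question:

        static int my_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
        {
            struct page *page;

            if (vmf->pgoff >= MY_NR_PAGES)
                return VM_FAULT_SIGBUS;

            page = my_pages[vmf->pgoff];   /* hypothetical per-device page table */
            if (!page) {
                /* allocate and zero a single page only when it is first touched */
                page = alloc_page(GFP_KERNEL | __GFP_ZERO);
                if (!page)
                    return VM_FAULT_OOM;
                my_pages[vmf->pgoff] = page;
            }
            get_page(page);   /* the fault path takes its own reference */
            vmf->page = page;
            return 0;
        }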

  • std::ifstream buffer caching

    - by ledokol
    Hello everybody. In my application I'm trying to merge sorted files (keeping them sorted, of course), so I have to iterate through each element in both files to write the minimal one to the third file. This works pretty slowly on big files, and since I don't see any other choice (the iteration has to be done), I'm trying to optimize the file loading. I can use some amount of RAM for buffering. I mean, instead of reading 4 bytes from both files every time, I can read something like 100MB once and work with that buffer afterwards, until there is no element left in the buffer; then I'll refill the buffer again. But I guess ifstream is already doing that - will doing it myself give me more performance, and is there any reason to? If fstream does buffer, maybe I can change the size of that buffer?

    Added: my current code looks like this (pseudocode):

        // this is done in a loop
        int i1 = input1.read_integer();
        int i2 = input2.read_integer();
        if (!input1.eof() && !input2.eof()) {
            if (i1 < i2) {
                output.write(i1);
                input2.seek_back(sizeof(int));
            } else {
                input1.seek_back(sizeof(int));
                output.write(i2);
            }
        } else {
            if (input1.eof())
                output.write(i2);
            else if (input2.eof())
                output.write(i1);
        }

    What I don't like here:

    - seek_back: I have to seek back to the previous position, as there is no way to peek 4 bytes;
    - too much reading from the file;
    - if one of the streams is at EOF, it still continues to check that stream instead of putting the contents of the other stream directly into the output (but this is not a big issue, because the chunk sizes are almost always equal).

    Can you suggest an improvement for that? Thanks.
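
    A hedged sketch of the buffer-size experiment the question asks about (the file name is assumed): std::ifstream does buffer through its underlying std::filebuf, and pubsetbuf() can hand that filebuf a larger buffer - portably it must be called before the file is opened, and whether a given size is honored is implementation-defined:

        #include <fstream>
        #include <vector>

        int main() {
            std::vector<char> big(100 * 1024 * 1024);  // 100 MB user-supplied buffer
            std::ifstream input1;
            // Install the buffer before open(), otherwise the call may be ignored
            input1.rdbuf()->pubsetbuf(&big[0], static_cast<std::streamsize>(big.size()));
            input1.open("sorted1.bin", std::ios::binary);
            // ... merge loop as above: reads now mostly hit the in-memory buffer
        }

    Separately, the seek_back can be avoided without any buffering tricks: keep the last integer read from each stream in a variable, and only read again from the stream whose value was just written out.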

  • How do I block my ISP from framing websites?

    - by PJB
    I've noticed recently, as of today, that all the websites I visit (except for a specific few sites such as Google) are put into a frame. I'm not sure what the reason for this is; there are no ads displayed and everything appears normal. The only reason I found out is because the headers weren't loading correctly and none of the page titles showed up properly - I quickly checked the source code and saw that instead of the source of the page I was expecting to see, there was a single line with a frame. I first thought it was maybe some kind of trojan, but after going through various checks I've determined it's my ISP and/or some kind of Internet registry (I traced the IP shown in the source code). What can be done to prevent this frame, short of using a VPN? I feel like I am being spied on. PS: I'm located in South Korea.

  • Network Error: no buffer space available

    - by braindump
    After some time of running fine, one of our Windows XP SP3 machines does not open some(!) new TCP/IP connections anymore. PuTTY says "Network Error: no buffer space available", and IE won't open any new connections, but e.g. network drive mappings still work - even new ones can be established. netstat does not show more open connections than usual, and ping and DNS lookups work fine. Any hints?

  • ActiveSync / Exchange 2007 password expiration buffer on device

    - by Matt Hamende
    I'm trying to determine whether there is any buffer of time between the moment a password expires in AD and the moment users stop receiving email on their mobile devices. Our setup:

    - Exchange 2007 ActiveSync
    - DCs are Server 2008 R2
    - primarily an Android shop, with maybe a few iOS devices

    I've heard some rumors of people still receiving email after their password expired / changed on the domain; I just want to see if anyone else has ever heard of this. I did a bit more reading and read about the token cache in IIS 7.0 and its 15-minute lag time; I'd still like to hear any thoughts about this.

  • Linux buffer cache effect on IO writes?

    - by Patrick LeBoutillier
    I'm copying large files (3 x 30G) between 2 filesystems on a Linux server (kernel 2.6.37, 16 cores, 32G RAM) and I'm getting poor performance. I suspect that the usage of the buffer cache is killing the I/O performance. To try and narrow down the problem I used fio directly on the SAS disk to monitor the performance. Here is the output of 2 fio runs (the first with direct=1, the second one direct=0).

    Config:

        [test]
        rw=write
        blocksize=32k
        size=20G
        filename=/dev/sda
        # direct=1

    Run 1:

        test: (g=0): rw=write, bs=32K-32K/32K-32K, ioengine=sync, iodepth=1
        Starting 1 process
        Jobs: 1 (f=1): [W] [100.0% done] [0K/205M /s] [0/6K iops] [eta 00m:00s]
        test: (groupid=0, jobs=1): err= 0: pid=4667
          write: io=20,480MB, bw=199MB/s, iops=6,381, runt=102698msec
          clat (usec): min=104, max=13,388, avg=152.06, stdev=72.43
          bw (KB/s) : min=192448, max=213824, per=100.01%, avg=204232.82, stdev=4084.67
          cpu : usr=3.37%, sys=16.55%, ctx=655410, majf=0, minf=29
          IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
          submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
          complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
          issued r/w: total=0/655360, short=0/0
          lat (usec): 250=99.50%, 500=0.45%, 750=0.01%, 1000=0.01%
          lat (msec): 2=0.01%, 4=0.02%, 10=0.01%, 20=0.01%

        Run status group 0 (all jobs):
          WRITE: io=20,480MB, aggrb=199MB/s, minb=204MB/s, maxb=204MB/s, mint=102698msec, maxt=102698msec

        Disk stats (read/write):
          sda: ios=0/655238, merge=0/0, ticks=0/79552, in_queue=78640, util=76.55%

    Run 2:

        test: (g=0): rw=write, bs=32K-32K/32K-32K, ioengine=sync, iodepth=1
        Starting 1 process
        Jobs: 1 (f=1): [W] [100.0% done] [0K/0K /s] [0/0 iops] [eta 00m:00s]
        test: (groupid=0, jobs=1): err= 0: pid=4733
          write: io=20,480MB, bw=91,265KB/s, iops=2,852, runt=229786msec
          clat (usec): min=16, max=127K, avg=349.53, stdev=4694.98
          bw (KB/s) : min=56013, max=1390016, per=101.47%, avg=92607.31, stdev=167453.17
          cpu : usr=0.41%, sys=6.93%, ctx=21128, majf=0, minf=33
          IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
          submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
          complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
          issued r/w: total=0/655360, short=0/0
          lat (usec): 20=5.53%, 50=93.89%, 100=0.02%, 250=0.01%, 500=0.01%
          lat (msec): 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.12%
          lat (msec): 100=0.38%, 250=0.04%

        Run status group 0 (all jobs):
          WRITE: io=20,480MB, aggrb=91,265KB/s, minb=93,455KB/s, maxb=93,455KB/s, mint=229786msec, maxt=229786msec

        Disk stats (read/write):
          sda: ios=8/79811, merge=7/7721388, ticks=9/32418456, in_queue=32471983, util=98.98%

    I'm not knowledgeable enough with fio to interpret the results, but I don't expect the overall performance using the buffer cache to be 50% less than with O_DIRECT. Can someone help me interpret the fio output? Are there any kernel tunings that could fix/minimize the problem? Thanks a lot.
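
    Not from the original thread, but the usual first knobs for buffered-write stalls on kernels of this era are the dirty-page thresholds; a hedged sketch of lowering them so writeback starts earlier and in smaller bursts (the values are starting points to experiment with, not recommendations):

        # Defaults around 2.6.37 are 10 (background) and 20 (foreground)
        sysctl -w vm.dirty_background_ratio=1
        sysctl -w vm.dirty_ratio=5
        # Add the same keys to /etc/sysctl.conf to make them persistent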

  • socket() failed: No buffer space available) while connecting to upstream,

    - by alfish
    On my Ubuntu 10.04 VPS, I get regular 500 errors on nginx (0.7.??) + FastCGI running a Drupal site, and when I trace the nginx error log I see plenty of these:

        socket() failed: No buffer space available) while connecting to upstream ...

    I have tried different combinations of config settings, but none fixed the problem. Currently I have 3 nginx workers, a keep-alive timeout of 15 seconds, and PHP_FCGI_CHILDREN=5, PHP_FCGI_MAX_REQUESTS=1000. I would really appreciate it if you can suggest a solution to this annoying problem.

  • Unable to watch a high resolution and high frame rate video

    - by Abhijith Madhav
    I have a video of a tennis match with:

    - Resolution: 1280 x 720
    - Codec: H264
    - Frame rate: 50fps

    (copied from the info given by Totem media player). My laptop is not powerful enough to play this video smoothly. How can I reduce the frame rate of this video so that my laptop can play it? I have observed that my laptop can play videos with 25fps without an issue. I use Ubuntu, but I wouldn't mind using Windows to edit/re-encode this video.
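
    A hedged one-liner for the re-encode, assuming ffmpeg is installed and the input file is named match.mp4 (both assumptions; quality flags are a starting point, not tuned):

        # Re-encode at 25 fps, keeping the audio track as-is
        ffmpeg -i match.mp4 -r 25 -c:v libx264 -crf 23 -preset medium -c:a copy match_25fps.mp4

    Dropping the resolution as well (e.g. adding -s 854x480) may help more than the frame rate alone, since decoding 720p H.264 is often what overwhelms an older laptop.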

  • java.sql.SQLException: ORA-06502: PL/SQL: numeric or value error: character string buffer too small

    - by jack
    Hi, I got an email from a user who sees the following error output when he's using our web site:

        java.sql.SQLException: ORA-06502: PL/SQL: numeric or value error: character string buffer too small
        ORA-06512: at "WEB_OWNER.SSFP_GET_WE_OBJ", line 300
        ORA-06512: at line 1
            at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:137)
            at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:315)
            at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:281)

    This is an error from Oracle webconnect, Oracle Application Server Containers for J2EE 10g (10.1.2.3.0). Any idea?
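
    A hedged illustration, not the actual procedure: ORA-06502 with "character string buffer too small" means a PL/SQL assignment at the reported line (line 300 of WEB_OWNER.SSFP_GET_WE_OBJ) put a value into a variable declared shorter than that value, along these lines:

        DECLARE
          v_name VARCHAR2(10);  -- declared too small for the value below
        BEGIN
          v_name := 'a string longer than ten characters';  -- raises ORA-06502
        END;
        /

    If that is the case here, the fix is widening the variable (or OUT parameter) in the stored procedure rather than anything on the JDBC side.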

  • Windows 7 Remote Desktop Connection Rendering Each Frame?

    - by TheDarkIn1978
    When connecting to my work computer over VPN and Remote Desktop Connection, images and videos are sent back one frame at a time, which really slows things down. I'm familiar with the "Experience" tab in the connection settings; the only options I have checked are "Visual styles" and "Persistent bitmap caching". Isn't it possible to work remotely in a way that is more similar to screen sharing over Skype, where there is lag but not every single frame has to be rendered and passed back to me one at a time? Both computers are running Windows 7 Professional.

  • Buffer requests in nginx while a symlink switches on backend

    - by Quintin Par
    In a release deployment, I would like client requests that come to nginx (in reverse proxy mode) to be buffered for possibly 1-2 seconds while a pdsh command is sent to switch symlinks on the back-end servers to /var/www/html/current. After the switch is complete, I would want to release the buffering while avoiding a thundering-herd clash. Is this possible in nginx? Can someone help?

    Edit: The idea is not to lose requests, and from the nginx forums I've come to know that retries can sometimes result in CPU spins.

  • MCE7: how to get video from the live TV buffer

    - by Vnuk
    I'm watching something on my MCE and I can jump back to see something again. How can I (if at all) extract a piece of that video, which is recorded on my drive somewhere? If I press record, it starts recording from now, and the past live buffer is lost. Is there a plugin that can do this?

  • Reverse Proxy that does not buffer uploads

    - by tsuraan
    From what I've seen of various reverse proxies (nginx, Apache, Varnish), they seem to buffer file uploads to disk before handing them off to the service they're proxying for. I need a reverse proxy that doesn't do this; I have a system that handles uploads itself, and buffering uploaded files to disk is not something that works for me. Does anybody know of a proxy server that can be configured to just pass traffic through to the proxied services without doing any buffering to disk?
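
    One concrete option, newer than this question so hedged accordingly: nginx later added a proxy_request_buffering directive that, when set to off, streams the request body to the upstream as it arrives instead of spooling it first (the upstream name here is hypothetical):

        location /upload {
            proxy_request_buffering off;   # stream the body to the upstream as it arrives
            proxy_pass http://backend;
        }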

  • TableView frame not resizing properly when pushing a new view controller and the keyboard is hiding

    - by Pete
    Hi, I must be missing something fundamental here. I have a UITableView inside of a UINavigationController. When a table row is selected in the UITableView (using tableView:didSelectRowAtIndexPath:) I call pushViewController to display a different view controller. The new view controller appears correctly, but when I pop that view controller and return, the UITableView is resized as if the keyboard were being displayed. I need to find a way to have the keyboard hide before I push the view controller so that the frame is restored correctly. If I comment out the code to push the view controller, then the keyboard hides correctly and the frame resizes correctly.

    The code I use to show the keyboard is as follows:

        - (void)keyboardDidShowNotification:(NSNotification *)inNotification {
            NSLog(@"Keyboard Show");
            if (keyboardVisible) return;
            // We now resize the view accordingly to accommodate the keyboard being visible
            keyboardVisible = YES;
            CGRect bounds = [[[inNotification userInfo] objectForKey:UIKeyboardFrameBeginUserInfoKey] CGRectValue];
            bounds = [self.view convertRect:bounds fromView:nil];
            CGRect tableFrame = tableViewNewEntry.frame;
            tableFrame.size.height -= bounds.size.height; // subtract the keyboard height
            if (self.tabBarController != nil) {
                tableFrame.size.height += 48; // add the tab bar height
            }
            [UIView beginAnimations:nil context:NULL];
            [UIView setAnimationDelegate:self];
            [UIView setAnimationDidStopSelector:@selector(shrinkDidEnd:finished:contextInfo:)];
            tableViewNewEntry.frame = tableFrame;
            [UIView commitAnimations];
        }

    The keyboard is hidden using:

        - (void)keyboardWillHideNotification:(NSNotification *)inNotification {
            if (!keyboardVisible) return;
            NSLog(@"Keyboard Hide");
            keyboardVisible = FALSE;
            CGRect bounds = [[[inNotification userInfo] objectForKey:UIKeyboardFrameBeginUserInfoKey] CGRectValue];
            bounds = [self.view convertRect:bounds fromView:nil];
            CGRect tableFrame = tableViewNewEntry.frame;
            tableFrame.size.height += bounds.size.height; // add the keyboard height
            if (self.tabBarController != nil) {
                tableFrame.size.height -= 48; // subtract the tab bar height
            }
            tableViewNewEntry.frame = tableFrame;
            [UIView beginAnimations:nil context:NULL];
            [UIView setAnimationDelegate:self];
            [UIView setAnimationDidStopSelector:@selector(_shrinkDidEnd:finished:contextInfo:)];
            tableViewNewEntry.frame = tableFrame;
            [UIView commitAnimations];
            [tableViewNewEntry scrollToNearestSelectedRowAtScrollPosition:UITableViewScrollPositionMiddle animated:YES];
            NSLog(@"Keyboard Hide Finished");
        }

    I trigger the keyboard being hidden by resigning first responder for any control that is the first responder in viewWillDisappear. I have added NSLog statements and see things happening in the log in this order:

        Show Keyboard
        ViewWillDisappear: Hiding Keyboard
        Hide Keyboard
        Keyboard Hide Finished
        PushViewController (an NSLog entry at the point I push the new view controller)

    From this trace I can see things happening in the right order, but it seems like when the view controller is pushed, the keyboard-hide code does not execute properly. Any ideas would be really appreciated. I have been banging my head against the keyboard for a while trying to find out what I am doing wrong.
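
    A hedged sketch of one common remedy, not from the original post: force the keyboard down in the row-selection handler itself, before the push, so the hide notification fires while this view is still frontmost (DetailViewController is a hypothetical name; pre-ARC memory management matches the era of the question):

        - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
            [self.view endEditing:YES]; // resigns whichever subview is currently first responder
            DetailViewController *dvc = [[DetailViewController alloc] init];
            [self.navigationController pushViewController:dvc animated:YES];
            [dvc release];
        }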

  • Help with a simple frame and graphics in Java

    - by Crystal
    For homework, I'm trying to create a "CustomButton" that has a frame, and in that frame I draw two triangles and a square over them; it's supposed to give the user the effect of a button press once it is depressed. So for starters, I am trying to set up the beginning graphics: drawing two triangles and a square. The problem I have: although I set my frame to 200 x 200, and I believe I drew the triangles to the correct edges of my frame, when I run the program I have to enlarge the window to make the whole artwork - my "CustomButton" - viewable. Is that normal? Thanks. Code:

        import java.awt.*;
        import java.awt.event.*;
        import javax.swing.*;

        public class CustomButton {
            public static void main(String[] args) {
                EventQueue.invokeLater(new Runnable() {
                    public void run() {
                        CustomButtonFrame frame = new CustomButtonFrame();
                        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                        frame.setVisible(true);
                    }
                });
            }
        }

        class CustomButtonFrame extends JFrame {
            // constructor for CustomButtonFrame
            public CustomButtonFrame() {
                setTitle("Custom Button");
                setSize(DEFAULT_WIDTH, DEFAULT_HEIGHT);
                CustomButtonSetup buttonSetup = new CustomButtonSetup();
                this.add(buttonSetup);
            }

            private static final int DEFAULT_WIDTH = 200;
            private static final int DEFAULT_HEIGHT = 200;
        }

        class CustomButtonSetup extends JComponent {
            public void paintComponent(Graphics g) {
                Graphics2D g2 = (Graphics2D) g;
                // first triangle coords
                int x[] = new int[TRIANGLE_SIDES];
                int y[] = new int[TRIANGLE_SIDES];
                x[0] = 0;   y[0] = 0;
                x[1] = 200; y[1] = 0;
                x[2] = 0;   y[2] = 200;
                Polygon firstTriangle = new Polygon(x, y, TRIANGLE_SIDES);
                // second triangle coords
                x[0] = 0;   y[0] = 200;
                x[1] = 200; y[1] = 200;
                x[2] = 200; y[2] = 0;
                Polygon secondTriangle = new Polygon(x, y, TRIANGLE_SIDES);
                g2.drawPolygon(firstTriangle);
                g2.setColor(Color.WHITE);
                g2.fillPolygon(firstTriangle);
                g2.drawPolygon(secondTriangle);
                g2.setColor(Color.GRAY);
                g2.fillPolygon(secondTriangle);
                // draw rectangle 10 pixels off border
                g2.drawRect(10, 10, 180, 180);
            }

            public static final int TRIANGLE_SIDES = 3;
        }
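
    A hedged explanation plus sketch, not from the original post: JFrame.setSize() includes the title bar and window borders, so the drawable content area ends up smaller than 200 x 200. The usual Swing fix is to give the component a preferred size and let pack() size the frame around it:

        // In CustomButtonSetup, declare the drawing area the component needs:
        @Override
        public Dimension getPreferredSize() {
            return new Dimension(200, 200);
        }

        // In the CustomButtonFrame constructor, replace setSize(...) with:
        this.add(buttonSetup);
        this.pack(); // the frame grows so the content pane is exactly 200 x 200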

  • XCode Build Parse Error 'Unexpected @ in Program'

    - by Grymjack
    I'm following a tutorial for creating an animation using Xcode Version 4.5.2 on Mountain Lion 10.8.2. When trying to build the code below, I get a parse error, Unexpected '@' in program, on the hopAnimation= line. While searching, I have found examples that build simple animations in a different way, but nothing that seems to address this particular problem. I'm a noob to Xcode programming, and if anyone could help me correct the syntax, I would highly appreciate it. I would also like to thank all the contributors to Stack Overflow for making this such a valuable resource; searching for the answers to most of my prior questions always seemed to have you guys at the top of the results list.

    ViewController.m:

        - (void)viewDidLoad {
            // load all the frames of our animation into an array
            NSArray *hopAnimation;
            hopAnimation=[[NSArray alloc] arrayWithObjects:
                [UIImage imageNamed:@”frame-1.png”],
                [UIImage imageNamed:@”frame-2.png”],
                [UIImage imageNamed:@”frame-3.png”],
                [UIImage imageNamed:@”frame-4.png”],
                [UIImage imageNamed:@”frame-5.png”],
                [UIImage imageNamed:@”frame-6.png”],
                [UIImage imageNamed:@”frame-7.png”],
                [UIImage imageNamed:@”frame-8.png”],
                [UIImage imageNamed:@”frame-9.png”],
                [UIImage imageNamed:@”frame-10.png”],
                [UIImage imageNamed:@”frame-11.png”],
                [UIImage imageNamed:@”frame-12.png”],
                [UIImage imageNamed:@”frame-13.png”],
                [UIImage imageNamed:@”frame-14.png”],
                [UIImage imageNamed:@”frame-15.png”],
                [UIImage imageNamed:@”frame-16.png”],
                [UIImage imageNamed:@”frame-17.png”],
                [UIImage imageNamed:@”frame-18.png”],
                [UIImage imageNamed:@”frame-19.png”],
                [UIImage imageNamed:@”frame-20.png”], nil];
            self.bunnyView1.animationImages=hopAnimation;
            self.bunnyView2.animationImages=hopAnimation;
            self.bunnyView3.animationImages=hopAnimation;
            self.bunnyView4.animationImages=hopAnimation;
            self.bunnyView5.animationImages=hopAnimation;
            self.bunnyView1.animationDuration=1;
            self.bunnyView2.animationDuration=1;
            self.bunnyView3.animationDuration=1;
            self.bunnyView4.animationDuration=1;
            self.bunnyView5.animationDuration=1;
            [super viewDidLoad];
        }
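
    A hedged observation, not part of the original post: the listing uses typographic quotes (”) around the string literals - what copying code out of a formatted tutorial page tends to produce - and those are exactly what the compiler reports as an unexpected '@'. Retyped with straight quotes (and using the plain class convenience constructor rather than the [[NSArray alloc] arrayWithObjects:...] mix), the first entries would read:

        hopAnimation = [NSArray arrayWithObjects:
                        [UIImage imageNamed:@"frame-1.png"],
                        [UIImage imageNamed:@"frame-2.png"],
                        // ... and so on through frame-20.png ...
                        nil];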

  • Python: convert buffer type of SQLITE column into string

    - by Volatil3
    I am new to Python 2.6. I have been trying to fetch a datetime value, which is in yyyy-mm-dd hh:mm:ss format, back into my Python program. On checking the column type in Python I get the error: 'buffer' object has no attribute 'decode'. I want to use the strptime() function to split up the date data and use it, but I can't find out how to convert a buffer to a string. The following is a sample of my code (also available here):

        conn = sqlite3.connect("mrp.db.db", detect_types=sqlite3.PARSE_DECLTYPES)
        cursor = conn.cursor()
        qryT = """
            SELECT dateDefinitionTest FROM t
            WHERE IDproject = 4 AND IDstatus = 5
            ORDER BY priority, setDate DESC
        """
        rec = (4, 4)
        cursor.execute(qryT, rec)
        resultsetTasks = cursor.fetchall()
        cursor.close()  # closing the resultset
        for item in resultsetTasks:
            taskDetails = {}
            _f = item[10].decode("utf-8")

    The exception I get is: 'buffer' object has no attribute 'decode'
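
    A hedged sketch of the conversion itself (the column index and the timestamp format are assumed from the question): a Python 2 buffer object can be turned into a plain byte string with str(), after which strptime() applies:

        import datetime

        raw = item[10]      # sqlite3 returned a buffer object
        text = str(raw)     # buffer -> str (bytes) in Python 2
        when = datetime.datetime.strptime(text, "%Y-%m-%d %H:%M:%S")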

  • Visualize the depth buffer

    - by Thanatos
    I'm attempting to visualize the depth buffer for debugging purposes by drawing it on top of the actual rendering when a key is pressed. It's mostly working, but the resulting image appears to be zoomed in. (It's not just the original image in an odd grayscale.) Why is it not the same size as the color buffer? This is what I'm using to view the depth buffer:

        void get_gl_size(int &width, int &height) {
            int iv[4];
            glGetIntegerv(GL_VIEWPORT, iv);
            width = iv[2];
            height = iv[3];
        }

        void visualize_depth_buffer() {
            int width, height;
            get_gl_size(width, height);

            float *data = new float[width * height];
            glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, data);
            glDrawPixels(width, height, GL_LUMINANCE, GL_FLOAT, data);
            delete [] data;
        }
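
    A hedged guess at the zoom, not from the original post: glDrawPixels honors the current raster position and pixel zoom, so if either was left non-default by earlier drawing, the image lands scaled or offset; resetting both just before the draw isolates that cause:

        // Hypothetical reset before the draw call
        glPixelZoom(1.0f, 1.0f);   // no scaling of the drawn pixel rectangle
        glWindowPos2i(0, 0);       // raster position at the lower-left corner (OpenGL 1.4+)
        glDrawPixels(width, height, GL_LUMINANCE, GL_FLOAT, data);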

  • realloc()ing memory for a buffer used in recv()

    - by Hristo
    I need to recv() data from a socket and store it into a buffer, but I need to make sure I get all of the data, so I have things in a loop. To make sure I don't run out of room in my buffer, I'm trying to use realloc to resize the memory allocated to the buffer. So far I have:

        // receive response
        int i = 0;
        int amntRecvd = 0;
        char *pageContentBuffer = (char*) malloc(4096 * sizeof(char));
        while ((amntRecvd = recv(proxySocketFD, pageContentBuffer + i, 4096, 0)) > 0) {
            i += amntRecvd;
            realloc(pageContentBuffer, 4096 + sizeof(pageContentBuffer));
        }

    However, this doesn't seem to be working properly, since Valgrind is complaining "valgrind: the 'impossible' happened:". Any advice as to how this should be done properly? Thanks, Hristo
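
    A hedged sketch of the usual pattern, not from the original post: the snippet above drops realloc's return value and grows by sizeof(a pointer) rather than a real increment, so each pass can write past the allocation (which is the kind of heap corruption that makes Valgrind give up). Tracking capacity explicitly and reassigning the pointer fixes both:

        #include <stdlib.h>
        #include <sys/socket.h>

        size_t used = 0, capacity = 4096;
        char *buf = malloc(capacity);
        ssize_t n;

        /* always keep at least 4096 bytes of headroom for the next recv() */
        while ((n = recv(proxySocketFD, buf + used, capacity - used, 0)) > 0) {
            used += (size_t)n;
            if (capacity - used < 4096) {
                char *tmp = realloc(buf, capacity * 2);  /* keep the new pointer */
                if (tmp == NULL) { /* handle allocation failure */ break; }
                buf = tmp;
                capacity *= 2;
            }
        }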

  • Vim: Delete Buffer When Quitting Split Window

    - by Rafid K. Abdullah
    I have this very useful function in my .vimrc:

        function! MyGitDiff()
            !git cat-file blob HEAD:% > temp/compare.tmp
            diffthis
            belowright vertical new
            edit temp/compare.tmp
            diffthis
        endfunction

    What it does is basically open the file I am currently working on, as it exists in the repository, in a vertical split window, then compare it with the working copy. This is very handy, as I can easily compare changes against the original file. However, there is a problem: after finishing the compare, I remove the split window by typing :q. This, however, doesn't remove the buffer from the buffer list, and I can still see the compare.tmp file in the buffer list. This is annoying because whenever I make a new compare, I get this message:

        Warning: File "temp/compare.tmp" has changed since editing started.

    Is there any way to delete the file from the buffer list as well as closing the vertical split window?
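
    A hedged sketch of one fix (standard Vim options, but untested against this exact function): mark the scratch buffer so Vim drops it as soon as it is no longer shown in a window, which makes a plain :q also remove it from the buffer list:

        function! MyGitDiff()
            !git cat-file blob HEAD:% > temp/compare.tmp
            diffthis
            belowright vertical new
            edit temp/compare.tmp
            " Drop this buffer from the list once its window is closed
            setlocal bufhidden=delete nobuflisted noswapfile
            diffthis
        endfunction

    Alternatively, :bdelete removes the buffer from the list on demand, though the split then shows another buffer rather than closing, so the setlocal approach is tidier here.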
