Search Results

Search found 18066 results on 723 pages for 'screen capture'.

Page 198/723 | < Previous Page | 194 195 196 197 198 199 200 201 202 203 204 205  | Next Page >

  • OpenOffice Calc Macro: Run shell command and return output as result of custom function

    - by Mark
    I would like to write a custom OpenOffice function that runs a shell command and puts the result into the cell from which it was invoked. I have a basic macro working, but I can't find a way to capture the command's output:

        Function MyTest( c1 )
            MyTest = Shell("bash -c "" echo hello "" ")
        End Function

    The above always returns 0. Looking at the documentation of the Shell command, I don't think it actually returns STDOUT. How would I capture the output so that I can return it in my function? Thanks!

    Read the article

  • Closure and nested lambdas in C++0x

    - by DanDan
    Using C++0x, how do I capture a variable when I have a lambda within a lambda? For example:

        std::vector<int> c1;
        int v = 10;                              // <--- I want to capture this variable
        std::for_each( c1.begin(), c1.end(),
            [v](int num)                         // <--- This is fine...
        {
            std::vector<int> c2;
            std::for_each( c2.begin(), c2.end(),
                [v](int num)                     // <--- error on this line, how do I recapture v?
            {
                // Do something
            });
        });
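    For reference, the nested capture compiles with a C++11-conforming compiler once the inner lambda simply lists v again in its own capture list (it then captures the outer lambda's copy of v). A minimal standalone sketch, not taken from the original post:

        #include <algorithm>
        #include <iostream>
        #include <vector>

        int main() {
            std::vector<int> c1 = {1, 2, 3};
            std::vector<int> c2 = {4, 5, 6};
            int v = 10;

            std::for_each(c1.begin(), c1.end(), [v, &c2](int outer) {
                // Inside the outer lambda, v names the captured copy, so the
                // inner lambda can capture that copy again by value.
                std::for_each(c2.begin(), c2.end(), [v, outer](int inner) {
                    std::cout << v + outer + inner << '\n';
                });
            });
        }

    If v needs to be shared and modified, capturing it by reference ([&v]) at both levels works the same way.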

    Read the article

  • OutputDebugString + DebugView = not tabs!

    - by Steve
    I am dumping \t delimited data using OutputDebugString and then using ex-Sysinternals DebugView to capture it. The problem is that all the data in DebugView appears to be space delimited, hence I need to perform a CTRL+H replace of "\x20" with "\t" to turn the spaces back into tabs before I can use it (I really need tab-delimited data). Is there any way to tell DebugView not to replace tabs with spaces? Or maybe there is a better tool available to capture the output of the OutputDebugString function? Any ideas are very welcome!
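    If DebugView itself is normalizing the whitespace, one workaround is to capture OutputDebugString yourself: the strings travel through a well-known shared section named DBWIN_BUFFER, guarded by the DBWIN_BUFFER_READY and DBWIN_DATA_READY events, so a small listener can dump them to a file with the tabs untouched. A rough Win32 sketch of that idea (run it instead of DebugView - only one listener can own the buffer at a time):

        #include <windows.h>
        #include <cstdio>

        int main() {
            HANDLE bufReady  = CreateEventA(NULL, FALSE, FALSE, "DBWIN_BUFFER_READY");
            HANDLE dataReady = CreateEventA(NULL, FALSE, FALSE, "DBWIN_DATA_READY");
            HANDLE mapping   = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL,
                                                  PAGE_READWRITE, 0, 4096, "DBWIN_BUFFER");
            if (!bufReady || !dataReady || !mapping)
                return 1;

            const char* shared = (const char*)MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0);
            if (!shared)
                return 1;

            FILE* out = fopen("debug_capture.txt", "w");
            for (;;) {
                SetEvent(bufReady);                          // tell writers the buffer is free
                if (WaitForSingleObject(dataReady, INFINITE) != WAIT_OBJECT_0)
                    break;
                DWORD pid = *(const DWORD*)shared;           // first 4 bytes: sender PID
                const char* text = shared + sizeof(DWORD);   // then the raw string, tabs intact
                fprintf(out, "%lu\t%s", (unsigned long)pid, text);
                fflush(out);
            }
            fclose(out);
            return 0;
        }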

    Read the article

  • How to intercept touch events globally?

    - by mystify
    I have a view which is sometimes covered by some other views. However, if the user slides a finger across the screen, I want to slide that underlying view across the screen too. I could start making custom views for all those covering subviews and forward all kinds of touch events, but that's somewhat cumbersome. Maybe there's some kind of notification or another way that a UIView or UIControl subclass can be aware of touch events happening right now, no matter where they are. In short: I need a UIView or UIControl subclass which knows about any touch events happening on the entire screen. Or, if that's not possible, at least one that knows about any touch events happening above itself in the same underlying superview. Another description: there are 20 views, all residing inside the same superview. The first view is covered by the 19 others. But if the user slides across the screen, that first view must slide too, so it must be aware of touch events. Is there any better solution than making all 19 views forward touch events? (Yes, all 19 views respond to touch events in this example.)

    Read the article

  • Method in RootViewController not Storing Array

    - by Antonio
    I have an array initialized in my RootViewController and a method that adds objects to that array. I created a RootViewController object in my SecondViewController. The method runs (it outputs a message) but it doesn't add anything to the array, and the array seems empty. Code is below, any suggestions?

    RootViewController.h

        #import "RootViewController.h"
        #import "SecondViewController.h"

        @implementation RootViewController

        - (void)viewDidLoad {
            [super viewDidLoad];
            myArray2 = [[NSMutableArray alloc] init];
            NSLog(@"View was loaded");
        }

        -(void)addToArray2 {
            NSLog(@"Array triggered from SecondViewController");
            [myArray2 addObject:@"Test"];
            [self showArray2];
        }

        -(void)showArray2 {
            NSLog(@"Array Count: %d", [myArray2 count]);
        }

        -(IBAction)switchViews {
            SecondViewController *screen = [[SecondViewController alloc] initWithNibName:nil bundle:nil];
            screen.modalTransitionStyle = UIModalTransitionStyleCoverVertical;
            [self presentModalViewController:screen animated:YES];
            [screen release];
        }

    SecondViewController.m

        #import "SecondViewController.h"
        #import "RootViewController.h"

        @implementation SecondViewController

        -(IBAction)addToArray {
            RootViewController *object = [[RootViewController alloc] init];
            [object addToArray2];
        }

        -(IBAction)switchBack {
            [self dismissModalViewControllerAnimated:YES];
        }

    EDIT: With Matt's code I got the following error: "expected specifier-qualifier-list before 'RootViewController'"

    Read the article

  • Linux, C++ audio capturing (just microphone) library

    - by TheOm3ga
    I'm developing a musical game; it's like SingStar, but instead of singing you have to play the recorder. It's called oFlute, and it's still in an early development stage. In the game I capture the microphone input, then run a simple FFT analysis and compare the results to typical recorder frequencies, thus getting the played note. At the beginning the audio library I was using was RtAudio, but I don't remember why I switched to PortAudio, which is what I'm currently using. The problem is that, from time to time, it either crashes randomly or stops capturing, as if there were no sound coming from the microphone. My question is: what's the best option to capture microphone input on Linux? I just need to open, read, and close a flow of bytes from the microphone. I've been reading this guide, and (un)surprisingly it says: I don't think that PortAudio is very good API for Unix-like operating systems. So, what do you recommend?
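    One lightweight option is to read the microphone straight from ALSA and skip the wrapper libraries entirely. A minimal capture sketch (it assumes the "default" device and mono 16-bit 44.1 kHz; link with -lasound):

        #include <alsa/asoundlib.h>
        #include <vector>

        int main() {
            snd_pcm_t* pcm = NULL;
            if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_CAPTURE, 0) < 0)
                return 1;

            // 1 channel, signed 16-bit little-endian, 44.1 kHz, ~0.5 s internal latency.
            if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                                   SND_PCM_ACCESS_RW_INTERLEAVED,
                                   1, 44100, 1, 500000) < 0) {
                snd_pcm_close(pcm);
                return 1;
            }

            std::vector<short> buffer(1024);              // 1024 frames per read
            for (int i = 0; i < 100; ++i) {               // grab ~100 blocks, then stop
                snd_pcm_sframes_t n = snd_pcm_readi(pcm, buffer.data(), buffer.size());
                if (n < 0)
                    snd_pcm_recover(pcm, (int)n, 0);      // recover from overruns and carry on
                // ...feed buffer[0..n) to the FFT here...
            }

            snd_pcm_close(pcm);
            return 0;
        }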

    Read the article

  • How to step frames in a movie recorded by iPhone

    - by aron
    Is there a way to have my iPhone program step frame by frame through a movie recorded by the iPhone? What I want to do is have the user record a QuickTime movie, then be able to step through the movie frame by frame. Alternatively, I suppose if there were a way to extract every single frame from the movie to a JPG, then I could easily step through the pictures. Does anyone know of a way to do this? I suppose the third option (which might not get past Apple's store) is to capture the movie the way the old jailbroken apps did, somehow capturing the pictures directly from the camera view. Any help appreciated. Thanks in advance!

    Read the article

  • Get active window title in X

    - by dutt
    I'm trying to get the title of the active window. The application is a background task, so if the user has Eclipse open the function returns "Eclipse - blabla"; it's not getting the window title of my own window. I'm developing this in Python 2.6 using PyQt4. My current solution, borrowed and slightly modified from an old answer here at SO, looks like this:

        def get_active_window_title():
            title = ''
            root_check = ''
            root = Popen(['xprop', '-root'], stdout=PIPE)
            if root.stdout != root_check:
                root_check = root.stdout
                for i in root.stdout:
                    if '_NET_ACTIVE_WINDOW(WINDOW):' in i:
                        id_ = i.split()[4]
                        id_w = Popen(['xprop', '-id', id_], stdout=PIPE)
                        for j in id_w.stdout:
                            if 'WM_ICON_NAME(STRING)' in j:
                                if title != j.split()[2]:
                                    return j.split("= ")[1].strip(' \n\"')

    It works for most windows, but not all. For example it can't find my Kopete chat windows, or the name of the application I'm currently developing. My next try looks like this:

        def get_active_window_title(self):
            screen = wnck.screen_get_default()
            if screen == None:
                return "Could not get screen"
            window = screen.get_active_window()
            if window == None:
                return "Could not get window"
            title = window.get_name()
            return title

    But for some reason window is always None. Does somebody have a better way of getting the current window title, or a way to modify one of my attempts so that it works for all windows?

    Edit: In case anybody is wondering, this is the way I found that seems to work for all windows:

        def get_active_window_title(self):
            root_check = ''
            root = Popen(['xprop', '-root'], stdout=PIPE)
            if root.stdout != root_check:
                root_check = root.stdout
                for i in root.stdout:
                    if '_NET_ACTIVE_WINDOW(WINDOW):' in i:
                        id_ = i.split()[4]
                        id_w = Popen(['xprop', '-id', id_], stdout=PIPE)
                        id_w.wait()
                        buff = []
                        for j in id_w.stdout:
                            buff.append(j)
                        for line in buff:
                            match = re.match("WM_NAME\((?P<type>.+)\) = (?P<name>.+)", line)
                            if match != None:
                                type = match.group("type")
                                if type == "STRING" or type == "COMPOUND_TEXT":
                                    return match.group("name")
            return "Active window not found"

    Read the article

  • Regex pattern help (I almost have it, just need a bit of expertise to finish it)

    - by Mohammad
    I need to match two cases: js/example_directory/example_name.js and js/example_directory/example_name.js?12345 (where 12345 is a digit string of unknown length, and the directory can be limitless in depth or not exist at all). I need to capture, in both cases, everything between js/ and .js, and if ? exists, capture the digit string after ?. This is what I have so far: ^js/(.*).js\??(\d+)? This works, except it also matches js/example_directory/example_name.js12345, which I want the regex to ignore. Any suggestions? Thank you all! Test your patterns here
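    A pattern that handles all three cases is ^js/(.*)\.js(?:\?(\d+))?$ - the dot is escaped and the end is anchored, so the ?digits part is optional but nothing else may follow .js. Below is a small C++11 std::regex check of it, used here purely for concreteness; the pattern itself is not tied to any particular language:

        #include <iostream>
        #include <regex>
        #include <string>

        int main() {
            std::regex re(R"(^js/(.*)\.js(?:\?(\d+))?$)");

            for (std::string s : { "js/example_directory/example_name.js",
                                   "js/example_directory/example_name.js?12345",
                                   "js/example_directory/example_name.js12345" }) {
                std::smatch m;
                if (std::regex_match(s, m, re))
                    std::cout << s << "  ->  path='" << m[1] << "' digits='" << m[2] << "'\n";
                else
                    std::cout << s << "  ->  no match\n";   // the .js12345 case lands here
            }
        }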

    Read the article

  • Mobile detection - Meta tag and max-device-width vs. php user agent?

    - by nimmbl
    Which form of mobile detection should I use, and why?

        <meta name="viewport" content="width=320,initial-scale=1,maximum-scale=1.0,user-scalable=no" />
        <link media="only screen and (max-device-width: 480px) and (min-device-width: 320px)" href="css/mobile.css" type="text/css" rel="stylesheet">
        <link media="handheld, only screen and (max-device-width: 319px)" href="css/mobile_simple.css" type="text/css" rel="stylesheet" />

    Or:

        include('mobile_device_detect.php');
        $mobile = mobile_device_detect();

    And why on earth would this:

        <?php if(strpos($_SERVER['HTTP_USER_AGENT'], 'iPhone') !== false) { ?>
            <meta name="viewport" content="width=320,initial-scale=1,maximum-scale=1.0,user-scalable=no" />
            <link media="screen" href="css/mobile.css" type="text/css" rel="stylesheet">
        <?php } else { ?>
            <link media="screen" href="css/mobile_simple.css" type="text/css" rel="stylesheet">
        <?php } ?>

    ignore this CSS?

        body {
            background: -webkit-gradient(linear, left top, left bottom, from(#555), to(#000));
        }

    Read the article

  • How do I use comma-separated-value file received from a URL query in Objective-c?

    - by chiemekailo
    How do I use a comma-separated-value file received from a URL query in Objective-C? When I query the URL I get CSV such as "OMRUAH=X",20.741,"3/16/2010","1:52pm",20.7226,20.7594. How do I capture and use this in my application? My problem is creating/initializing that NSString object in the first place. An example is this link "http://download.finance.yahoo.com/d/quotes.csv?s=GBPEUR=X&f=sl1d1t1ba&e=.csv", which returns CSV. I don't know how to capture this as an object, since I cannot use an NSXMLParser object.

    Read the article

  • CSS to specify positions over a scanned document

    - by itsols
    I'm trying to write the CSS rules to position text over a scanned document.

    Reason: The document is a pre-printed form. I am trying to position the text on-screen so that it relates to the 'spaces' on the actual form.

    Issue: Although I position the values using centimeters, they don't seem to get aligned with the ones on the actual page. I can see this misalignment since my scanned image is in the background of the page.

    What I've tried: I used a ruler to physically measure the locations and specify them with CSS, but on-screen it doesn't tally. I used the scanned image to position the CSS values, but then the printout is not correct. I even scaled the scanned page using Inkscape to the exact dimensions in centimeters and took into account all margins, etc.

    What I need: I am trying to correctly show the output values on-screen AND have them print correctly as well. I know that using two CSS sheets (one for print) is an option, but I'm developing this program away from where the actual printing is to be done. So is there a convenient way of matching the exact screen locations with those on the actual/final printout? Thanks!

    Read the article

  • How to call a method from another class that's been instantiated within the current class

    - by Pavan
    My screen has a few views laid out like this:

         __________________
        |        _____     |
        |       |     |    |   // viewX is a video screen
        |       |     |    |
        | viewX |  vY |    |   // viewY is a custom UIView I created. It contains a method
        |       |_____|    |   // I would like to call that toggles the hidden property of
        |__________________|   // this view; when it hides, a little button is placed in the
        |                  |   // top right corner on top of viewX (the video layer).
        |      viewZ       |
        |                  |
        |__________________|   // viewZ is a view containing many square views - thumbnails.

    My question is: I don't know how to register for touch events so that any touch is recognised no matter which view the user touches. At the moment I'm handling the touch events for each view inside that view, so all works well. However, what I'm trying to do is this: when the user taps anywhere on the screen except on viewY, viewY should disappear, by calling that method in the viewY class. This viewY class is instantiated and has no xib file attached to it; the UIView is created programmatically in the viewY class. This whole class for viewY behaviour is instantiated in viewX - the video view. My boss says to add delegates, although I have no clue how to do that... any help? Is there any way I can just make it really simple and be able to say REMOVE VIEW no matter which class I'm calling from? Also, I've seen other people achieve this by using those funky arrows (... <- etc.), although I'm not sure if that's what I need or how to implement such a thing. Ah, I think I've made my question quite complicated, but I really mean it to be a simple one, and I know it can be done in an easy way!

    Read the article

  • smart phone UI limitations

    - by Manny
    I would like to know what limitations there are on how far one can go in replacing UI components of current touch-screen smartphones, in particular iPhone, BlackBerry and Android based phones. What I would like to do is create a custom UI for dialing out and for incoming calls. I have some experience with BlackBerry development. Its theme builder can be used to customize certain items on the incoming call screen, but it doesn't look like you can increase the size of the answer button. I know BlackBerry also gives you access to all the phone APIs, but I'm not sure that you can create your own UI that takes preference over the BlackBerry incoming call screen. And if you try to customize the incoming call screen by adding any buttons to it, they would be rendered as pictures. I could possibly design a complete UI for Android, since different manufacturers have different UIs for Android based phones. Can I do what I want to do on iPhone, BlackBerry or Android? Or any other phone for that matter? I am guessing maybe Nokia phones using Qt, but I prefer the 3 platforms I listed. Thanks for all your help.

    Read the article

  • Android Video Camera : still picture

    - by Alex
    I use the camera intent to capture video. Here is the problem: if I use this line of code, I can record video, but onActivityResult doesn't work:

        Intent intent = new Intent("android.media.action.VIDEO_CAMERA");

    If I use this line of code, then after pressing the recording button the camera is frozen, I mean, the picture is still:

        Intent intent = new Intent(MediaStore.ACTION_VIDEO_CAPTURE);

    BTW, when I use Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE); to capture a picture, it works fine. The java file is as follows:

        package com.camera.picture;

        import java.io.File;
        import java.text.SimpleDateFormat;
        import java.util.Date;

        import android.app.Activity;
        import android.content.ContentValues;
        import android.content.Intent;
        import android.net.Uri;
        import android.os.Bundle;
        import android.os.Environment;
        import android.provider.MediaStore;
        import android.util.Log;
        import android.view.View;
        import android.widget.Button;
        import android.widget.ImageView;
        import android.widget.Toast;
        import android.widget.VideoView;

        public class PictureCameraActivity extends Activity {
            private static final int IMAGE_CAPTURE = 0;
            private static final int VIDEO_CAPTURE = 1;
            private Button startBtn;
            private Button videoBtn;
            private Uri imageUri;
            private Uri videoUri;
            private ImageView imageView;
            private VideoView videoView;

            /** Called when the activity is first created.
             *  Sets the content and gets the references to
             *  the basic widgets on the screen like
             *  {@code Button} or {@link ImageView}
             */
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);
                imageView = (ImageView) findViewById(R.id.img);
                videoView = (VideoView) findViewById(R.id.videoView);
                startBtn = (Button) findViewById(R.id.startBtn);
                startBtn.setOnClickListener(new View.OnClickListener() {
                    public void onClick(View v) {
                        startCamera();
                    }
                });
                videoBtn = (Button) findViewById(R.id.videoBtn);
                videoBtn.setOnClickListener(new View.OnClickListener() {
                    @Override
                    public void onClick(View v) {
                        // TODO Auto-generated method stub
                        startVideoCamera();
                    }
                });
            }

            public void startCamera() {
                Log.d("ANDRO_CAMERA", "Starting camera on the phone...");
                String fileName = "testphoto.jpg";
                ContentValues values = new ContentValues();
                values.put(MediaStore.Images.Media.TITLE, fileName);
                values.put(MediaStore.Images.Media.DESCRIPTION, "Image capture by camera");
                values.put(MediaStore.Images.Media.MIME_TYPE, "image/jpeg");
                imageUri = getContentResolver().insert(
                        MediaStore.Images.Media.EXTERNAL_CONTENT_URI, values);
                Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
                intent.putExtra(MediaStore.EXTRA_OUTPUT, imageUri);
                intent.putExtra(MediaStore.EXTRA_VIDEO_QUALITY, 1);
                startActivityForResult(intent, IMAGE_CAPTURE);
            }

            protected void onActivityResult(int requestCode, int resultCode, Intent data) {
                if (requestCode == IMAGE_CAPTURE) {
                    if (resultCode == RESULT_OK) {
                        Log.d("ANDROID_CAMERA", "Picture taken!!!");
                        imageView.setImageURI(imageUri);
                    }
                }
                if (requestCode == VIDEO_CAPTURE) {
                    if (resultCode == RESULT_OK) {
                        Log.d("ANDROID_CAMERA", "Video taken!!!");
                        Toast.makeText(this, "Video saved to:\n" + data.getData(),
                                Toast.LENGTH_LONG).show();
                        videoView.setVideoURI(videoUri);
                    }
                }
            }

            private void startVideoCamera() {
                // TODO Auto-generated method stub
                // create new Intent
                Log.d("ANDRO_CAMERA", "Starting camera on the phone...");
                String fileName = "testvideo.mp4";
                ContentValues values = new ContentValues();
                values.put(MediaStore.Video.Media.TITLE, fileName);
                values.put(MediaStore.Video.Media.DESCRIPTION, "Video captured by camera");
                values.put(MediaStore.Video.Media.MIME_TYPE, "video/mp4");
                videoUri = getContentResolver().insert(
                        MediaStore.Video.Media.EXTERNAL_CONTENT_URI, values);
                Intent intent = new Intent("android.media.action.VIDEO_CAMERA");
                //Intent intent = new Intent(MediaStore.ACTION_VIDEO_CAPTURE);
                intent.putExtra(MediaStore.EXTRA_OUTPUT, videoUri);
                intent.putExtra(MediaStore.EXTRA_VIDEO_QUALITY, 1);
                // start the Video Capture Intent
                startActivityForResult(intent, VIDEO_CAPTURE);
            }

            private static File getOutputMediaFile() {
                // TODO Auto-generated method stub
                // To be safe, you should check that the SDCard is mounted
                // using Environment.getExternalStorageState() before doing this.
                File mediaStorageDir = new File(Environment.getExternalStoragePublicDirectory(
                        Environment.DIRECTORY_PICTURES), "MyCameraApp");
                // This location works best if you want the created images to be shared
                // between applications and persist after your app has been uninstalled.
                // Create the storage directory if it does not exist
                if (!mediaStorageDir.exists()) {
                    if (!mediaStorageDir.mkdirs()) {
                        Log.d("MyCameraApp", "failed to create directory");
                        return null;
                    }
                }
                // Create a media file name
                String timeStamp = new SimpleDateFormat("yyyyMMdd_HHmmss").format(new Date());
                File mediaFile;
                mediaFile = new File(mediaStorageDir.getPath() + File.separator
                        + "VID_" + timeStamp + ".mp4");
                return mediaFile;
            }

            /** Create a file Uri for saving an image or video */
            private static Uri getOutputMediaFileUri() {
                return Uri.fromFile(getOutputMediaFile());
            }
        }

    Read the article

  • Excluding a specific substring from a regex

    - by Matt S
    I'm attempting to mangle a SQL query via regex. My goal is essentially to grab what is between FROM and ORDER BY, if ORDER BY exists. So, for example, for the query SELECT * FROM TableA WHERE ColumnA=42 ORDER BY ColumnB it should capture TableA WHERE ColumnA=42, and it should also capture that part when the ORDER BY expression isn't there. The closest I've been able to come is SELECT (.*) FROM (.*)(?=(ORDER BY)) which fails without the ORDER BY. Hopefully I'm missing something obvious. I've been hammering away in Expresso for the past hour trying to get this.
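    One way to make the tail optional is a lazy middle group followed by an optional, end-anchored ORDER BY clause. A small C++11 std::regex sketch of the idea, shown here only to demonstrate the pattern (the same constructs are available in .NET/Expresso):

        #include <iostream>
        #include <regex>
        #include <string>

        int main() {
            // Group 2 is everything between FROM and ORDER BY, or up to the end
            // of the statement when no ORDER BY is present.
            std::regex re(R"(SELECT\s+(.*?)\s+FROM\s+(.*?)(?:\s+ORDER\s+BY\s+(.*))?$)",
                          std::regex::icase);

            for (std::string sql : { "SELECT * FROM TableA WHERE ColumnA=42 ORDER BY ColumnB",
                                     "SELECT * FROM TableA WHERE ColumnA=42" }) {
                std::smatch m;
                if (std::regex_search(sql, m, re))
                    std::cout << "from-part: '" << m[2] << "'  order-by: '" << m[3] << "'\n";
            }
        }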

    Read the article

  • Efficient mapping of game entity positions in Java

    - by byte
    In Java (Swing), say I've got a 2D game where I have various types of entities on the screen, such as a player, bad guys, powerups, etc. When the player moves across the screen, in order to do efficient checking of what is in the immediate vicinity of the player, I would think I'd want indexed access to the things that are near the character, based on their position. For example, if player 'P' steps onto element 'E' in the following grid...

        | | | | |
        | | | | |
        | |P| | |
        | |E| | |
        | | | | |

    ...the check would be something like:

        if (player.getPosition().x == entity.getPosition().x
                && player.getPosition().y == entity.getPosition().y) {
            // do something
        }

    And that's fine, but it implies that the entities hold their positions, and therefore if I had MANY entities on the screen I would have to loop through all entities and check each one's position against the player's position. This seems really inefficient, especially once you get tons of entities. So I suspect I'd want some sort of map like:

        Map<Point, Entity> map = new HashMap<Point, Entity>();

    and store my point information there, so that I could access these entities in constant time. The only problem with that approach is that if I want to move an entity to a different point on the screen, I'd have to search through the values of the HashMap for the entity I want to move (inefficient, since I don't know its Point position ahead of time), and then once I've found it, remove it from the HashMap and re-insert it with the new position information. Any suggestions or advice on what sort of data structure / storage format I ought to be using here in order to have efficient access to entities based on their position, as well as positions based on the entity?
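    One common answer is to keep the position on the entity and keep a position-to-entity hash map next to it, routing every move through one method so the two can never disagree; the erase uses the position stored on the entity, so no scan of the map is needed. Sketched below in C++ with std::unordered_map purely to show the shape - the same structure works with a Java HashMap<Point, Entity> (with equals/hashCode on Point):

        #include <string>
        #include <unordered_map>

        struct Point { int x, y; };
        inline bool operator==(const Point& a, const Point& b) { return a.x == b.x && a.y == b.y; }

        struct PointHash {
            size_t operator()(const Point& p) const {
                unsigned long long k =
                    (static_cast<unsigned long long>(static_cast<unsigned int>(p.x)) << 32)
                    | static_cast<unsigned int>(p.y);
                return std::hash<unsigned long long>()(k);
            }
        };

        struct Entity { std::string name; Point pos; };

        class World {
            std::unordered_map<Point, Entity*, PointHash> byPos;   // position -> entity
        public:
            void place(Entity* e, Point p) { e->pos = p; byPos[p] = e; }

            Entity* at(Point p) const {                            // O(1) "what is here?"
                auto it = byPos.find(p);
                return it == byPos.end() ? nullptr : it->second;
            }

            void move(Entity* e, Point to) {                       // O(1) move, no scanning
                byPos.erase(e->pos);    // the entity remembers its old key
                e->pos = to;
                byPos[to] = e;
            }
        };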

    Read the article

  • Display streaming video in desktop app.

    - by Roddy
    I have a Windows native desktop app (C++/Delphi), and I'm successfully using Directshow to display live video in it from a 'local' video capture device. The next thing I want to do is display video from a 'remote' capture device, streamed over the LAN. To stream the video, I guess I can use something like Expression Encoder or VLC, but I'm not sure what's the easiest way to receive/decode the streamed video. Inserting an ActiveX VLC or Flash player might be one option (although the licensing may be an issue then), but I was wondering if there's any way to achieve this with Directshow... Application needs to run on XP, and the video decoding should ideally be royalty free. Suggestions, please!

    Read the article

  • Problem with handling click event of bitmap

    - by shaina
    I am using the navigationClick method to handle the click events of bitmap images & buttons. It works fine when I don't use a VerticalFieldManager, but when I adjust the positions of the fields using this field manager it doesn't capture the click events of all the controls; it only captures the click at index 0, which I think is the VerticalFieldManager, not the other fields (bitmaps & buttons). Please help me resolve this. The code is:

        protected boolean navigationClick(int status, int time) {
            Field field = this.getFieldWithFocus();
            if (field.getIndex() == 0) {
                Dialog.alert("Index 0 (bitmap) clicked");
            }
            if (field.getIndex() == 1) {
                Dialog.alert("Index 1 (Button 1) clicked");
            }
            if (field.getIndex() == 2) {
                Dialog.alert("Index 2 (Button 2) clicked");
            }
            if (field.getIndex() == 3) {
                Dialog.alert("Index 3 (bitmap 2) clicked");
            }
            return true;   // the posted snippet was missing this return
        }

    Read the article

  • How to convert X/Y position to Canvas Left/Top properties when using ItemsControl

    - by kshahar
    I am trying to use a Canvas to display objects that have "world" locations (rather than "screen" locations). The canvas is defined like this:

        <Canvas Background="AliceBlue">
            <ItemsControl Name="myItemsControl" ItemsSource="{Binding MyItems}">
                <ItemsControl.ItemTemplate>
                    <DataTemplate>
                        <Canvas>
                            <TextBlock Canvas.Left="{Binding WorldX}"
                                       Canvas.Top="{Binding WorldY}"
                                       Text="{Binding Text}"
                                       Width="Auto" Height="Auto"
                                       Foreground="Red" />
                        </Canvas>
                    </DataTemplate>
                </ItemsControl.ItemTemplate>
            </ItemsControl>
        </Canvas>

    MyItem is defined like this:

        public class MyItem
        {
            public MyItem(double worldX, double worldY, string text)
            {
                WorldX = worldX;
                WorldY = worldY;
                Text = text;
            }

            public double WorldX { get; set; }
            public double WorldY { get; set; }
            public string Text { get; set; }
        }

    In addition, I have a method to convert between world and screen coordinates:

        Point worldToScreen(double worldX, double worldY)
        {
            // return screen coordinates using the canvas properties and an internal MapData object
        }

    With the current implementation, the items are positioned in the wrong location, because their location is not converted to screen coordinates. How can I apply the worldToScreen method on the MyItem objects before they are added to the canvas?

    Read the article

  • Fiddler2 to debug SL

    - by Nair
    I have a SL3 + RIA Services application and I want to trace the calls made between the client and the server. Since I am debugging the application on localhost, I am unable to see any trace in Fiddler. I tried http://localhost.:port/websitename/page.aspx and I got a "The requested URL could not be retrieved" message. If I remove the '.' between localhost and the port, my page shows up but there is no Fiddler capture. How would one go about seeing/capturing all the calls made between the client and the service on localhost? Thanks,

    Read the article

  • C++ game loop example

    - by David
    Can someone write up a source for a program that just has a "game loop", which keeps looping until you press Esc, while the program shows a basic image? Here's the source I have right now, but I have to use SDL_Delay(2000); to keep the program alive for 2 seconds, during which the program is frozen.

        #include "SDL.h"

        int main(int argc, char* args[])
        {
            SDL_Surface* hello = NULL;
            SDL_Surface* screen = NULL;
            SDL_Init(SDL_INIT_EVERYTHING);
            screen = SDL_SetVideoMode(640, 480, 32, SDL_SWSURFACE);
            hello = SDL_LoadBMP("hello.bmp");
            SDL_BlitSurface(hello, NULL, screen, NULL);
            SDL_Flip(screen);
            SDL_Delay(2000);
            SDL_FreeSurface(hello);
            SDL_Quit();
            return 0;
        }

    I just want the program to stay open until I press Esc. I know how the loop works, I just don't know whether to implement it inside the main() function or outside of it. I've tried both, and both times it failed. If you could help me out that would be great :P
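    For comparison, a minimal SDL 1.2 sketch of such a loop (it lives inside main): poll the event queue every frame, quit on SDL_QUIT or the Escape key, and redraw inside the loop.

        #include "SDL.h"

        int main(int argc, char* args[])
        {
            SDL_Init(SDL_INIT_EVERYTHING);
            SDL_Surface* screen = SDL_SetVideoMode(640, 480, 32, SDL_SWSURFACE);
            SDL_Surface* hello  = SDL_LoadBMP("hello.bmp");

            bool running = true;
            while (running)
            {
                SDL_Event event;
                // Drain the event queue each frame; this is what keeps the window responsive.
                while (SDL_PollEvent(&event))
                {
                    if (event.type == SDL_QUIT)
                        running = false;
                    else if (event.type == SDL_KEYDOWN && event.key.keysym.sym == SDLK_ESCAPE)
                        running = false;
                }

                SDL_BlitSurface(hello, NULL, screen, NULL);
                SDL_Flip(screen);
                SDL_Delay(10);   // give some CPU time back; replace with real frame timing later
            }

            SDL_FreeSurface(hello);
            SDL_Quit();
            return 0;
        }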

    Read the article

  • How to convert a byte array of 19200 bytes in size where each byte represents 4 pixels (2 bits per pixel) to a bitmap

    - by Klinger
    I am communicating with an instrument (remote controlling it) and one of the things I need to do is to draw the instrument screen. In order to get the screen I issue a command and the instrument replies with an array of bytes that represents the screen. Below is what the instrument manual has to say about converting the response to the actual screen: "The command retrieves the framebuffer data used for the display. It is 19200 bytes in size, 2 bits per pixel, 4 pixels per byte, arranged as 320x240 characters. The data is sent in RLE encoded form. To convert this data into a BMP for use in Windows, it needs to be turned into a 4BPP. Also note that BMP files are upside down relative to this data, i.e. the top display line is the last line in the BMP." I managed to unpack the data, but now I am stuck on how to actually go from the unpacked byte array to a bitmap. My background on this is pretty close to zero and my searches have not revealed much either. I am looking for directions and/or articles I could use to help me understand how to get this done. Any code or even pseudo code would also help. :-) So, just to summarize it all: how to convert a byte array of 19200 bytes in size, where each byte represents 4 pixels (2 bits per pixel), to a bitmap arranged as 320x240 characters. Thanks in advance.
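    As a starting point, the unpacking itself is plain bit shifting: each source byte holds four 2-bit pixels, each row is 320/4 = 80 bytes, and the rows can be written out bottom-up to get BMP order. A C++ sketch of that step (putting the leftmost pixel in the two most significant bits is an assumption - flip the shift order if the image comes out wrong):

        #include <cstdint>
        #include <vector>

        // Expand a 320x240, 2-bits-per-pixel buffer (19200 bytes, already RLE-decoded)
        // into one grayscale byte per pixel, flipped vertically for BMP row order.
        std::vector<uint8_t> unpack2bpp(const uint8_t* src, int width = 320, int height = 240)
        {
            std::vector<uint8_t> dst(width * height);
            const uint8_t levels[4] = { 0, 85, 170, 255 };      // map 2-bit values 0..3 to gray
            for (int y = 0; y < height; ++y) {
                const uint8_t* row = src + y * (width / 4);     // 4 pixels per source byte
                uint8_t* out = &dst[(height - 1 - y) * width];  // vertical flip
                for (int x = 0; x < width; x += 4) {
                    uint8_t b = row[x / 4];
                    out[x + 0] = levels[(b >> 6) & 3];
                    out[x + 1] = levels[(b >> 4) & 3];
                    out[x + 2] = levels[(b >> 2) & 3];
                    out[x + 3] = levels[ b       & 3];
                }
            }
            return dst;   // feed this to any bitmap constructor that accepts raw 8-bit pixels
        }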

    Read the article

  • as3 dynamic button name capturing problem

    - by user16458
    I am using the code below to dynamically capture the name of the button pressed and then play the related balloon movie clip animation:

        stage.addEventListener(MouseEvent.CLICK, player);

        function player(evt:MouseEvent) {
            var nameofballoon = evt.target.name;
            nameofballoon = nameofballoon.substring(nameofballoon.length - 1, nameofballoon.length);
            var movie = "balloon" + nameofballoon;
            trace(movie);
            movie.gotoAndPlay("burst");
        }

    I'm getting this error even though the name of the clip captured by the event is correct:

        TypeError: Error #1006: value is not a function.
            at Balloons2_fla::MainTimeline/player()

    Any thoughts? What's wrong with this code?

    Read the article
