Search Results

Search found 13243 results on 530 pages for 'android camera'.

Page 430/530 | < Previous Page | 426 427 428 429 430 431 432 433 434 435 436 437  | Next Page >

  • What does the asterisk mean after a filename if you do ls -l?

    - by James.Elsey
    I've done an ls -l inside a directory, and my files are displaying like this:

        james@nevada:~/development/tools/android-sdk-linux_86/tools$ ll
        total 9512
        drwxr-xr-x 3 james james    4096 2010-05-07 19:48 ./
        drwxr-xr-x 6 james james    4096 2010-08-21 20:43 ../
        -rwxr-xr-x 1 james james  341773 2010-05-07 19:47 adb*
        -rwxr-xr-x 1 james james    3636 2010-05-07 19:47 android*
        -rwxr-xr-x 1 james james    2382 2010-05-07 19:47 apkbuilder*
        -rwxr-xr-x 1 james james    3265 2010-05-07 19:47 ddms*
        -rwxr-xr-x 1 james james   89032 2010-05-07 19:47 dmtracedump*
        -rwxr-xr-x 1 james james    1940 2010-05-07 19:47 draw9patch*
        -rwxr-xr-x 1 james james 6886136 2010-05-07 19:47 emulator*
        -rwxr-xr-x 1 james james  478199 2010-05-07 19:47 etc1tool*
        -rwxr-xr-x 1 james james    1987 2010-05-07 19:47 hierarchyviewer*
        -rwxr-xr-x 1 james james   23044 2010-05-07 19:47 hprof-conv*
        -rwxr-xr-x 1 james james    1939 2010-05-07 19:47 layoutopt*
        drwxr-xr-x 4 james james    4096 2010-05-07 19:48 lib/
        -rwxr-xr-x 1 james james   16550 2010-05-07 19:47 mksdcard*
        -rw-r--r-- 1 james james  205851 2010-05-07 19:48 NOTICE.txt
        -rw-r--r-- 1 james james      33 2010-05-07 19:47 source.properties
        -rwxr-xr-x 1 james james 1447936 2010-05-07 19:47 sqlite3*
        -rwxr-xr-x 1 james james    3044 2010-05-07 19:47 traceview*
        -rwxr-xr-x 1 james james  187965 2010-05-07 19:47 zipalign*

    What does that asterisk mean? I'm also unable to run a particular file, as follows:

        james@nevada:~/development/tools/android-sdk-linux_86/tools$ ./emulator
        bash: ./emulator: No such file or directory

    EDIT: I'm trying to get Eclipse to use the emulator, but it keeps complaining the file does not exist, yet it is here?
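    The trailing asterisk is not part of the filename: it is the classify marker that ls -F (which the ll alias typically enables) appends to executable files, just as / marks directories. The "No such file or directory" from ./emulator on a 64-bit Ubuntu is usually not about the file at all but about its interpreter: the SDK tools are 32-bit ELF binaries, and if the 32-bit loader/libraries are missing the kernel reports the missing loader as if the binary itself were absent. The sketch below is a hedged Python diagnostic (the filename is an assumption) that checks both the executable bit and the ELF class:

        # sketch: diagnose why ./emulator reports "No such file or directory";
        # run it from the SDK tools directory, "emulator" is an assumed filename
        import os, stat

        path = "emulator"
        st = os.stat(path)                      # raises if the file truly is not there
        print("executable bit set:", bool(st.st_mode & stat.S_IXUSR))

        with open(path, "rb") as f:
            header = f.read(5)
        if header[:4] == b"\x7fELF":
            # byte 4 of the ELF header is EI_CLASS: 1 = 32-bit, 2 = 64-bit
            print("ELF class:", "32-bit" if header[4] == 1 else "64-bit")
            # a 32-bit binary on a 64-bit host without 32-bit libraries fails with
            # "No such file or directory" because its loader cannot be found

    If the binary turns out to be 32-bit, installing the 32-bit compatibility libraries (ia32-libs on Ubuntu of that era) is the usual fix.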

    Read the article

  • Eclipse grinds to a halt when building workspace

    - by Chris Thompson
    Hi all, This is a bit of a vague question because, frankly, I don't even know where to begin diagnosing the issue. My Eclipse (Galileo) installation grinds to a complete halt when it's building the workspace -- to the point where I can't even type. I know the Android SDK I have installed is a major culprit, because I can watch the memory usage go through the roof (through the built-in heap monitor) when the Android SDK content loader starts up. Every time I save a file, though, the program just stops. The message at the bottom of the screen says "Building workspace (74%)" and sits there for about 30 seconds before completing and returning performance to normal. I have a few other plugins installed (Maven, SVN, etc.) but I'm assuming the main issue is Android. Has anybody had similar issues or any luck correcting this sort of problem? If there's any more information you think would be helpful, just let me know... I didn't want to do a core dump on this question... I'm running it on Windows 7 64-bit, for what it's worth. Thanks! Chris

    Read the article

  • Unzipping archives, preserving folder hierarchy

    - by Hydrangea
    I've got a problem and am not sure what it is, but I hope someone can help me think this through, because this has me stumped. Backstory: I wrote a Java app (Android) that unzips some zip files downloaded from the network. Until now, this was working great. Then, this week, the archives that I'm creating on my PC (in Ubuntu 12.04) unzip on the Android phone into a flat hierarchy instead of preserving the folders. I'm creating the archives the same way (right-click on the folder and choose Compress), but even though my old archives (created in 10.04) still unzip as expected, the new ones don't. On Ubuntu, the new zip files look the same to me as the old ones. When unzipped on my PC, the folders in these new archives are restored the same as the old ones... it's the Android app that extracts the old ones fine and the new ones flat. What I really want to know, though, is what the difference between the archives is. Question: How could one determine why one zip archive would be extracted with its folder hierarchy preserved, when an identical one (to all appearances on Ubuntu 12.04) is extracted with no hierarchy? Are there different ways in which a .zip file can "have" folders, but Ubuntu doesn't distinguish between them?
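    There are indeed two different ways a zip can carry hierarchy: through the path stored in each entry name (folder/file.txt) and through explicit directory entries, and two archives that look identical in Archive Manager can differ in exactly this. A quick way to see what each archive really stores is to dump the raw entry names; a minimal sketch (Python, with the archive names assumed):

        # sketch: compare the raw entry names stored in the old and the new archive;
        # "old.zip" and "new.zip" are placeholders for your two files
        import zipfile

        for name in ("old.zip", "new.zip"):
            with zipfile.ZipFile(name) as zf:
                print(name)
                for info in zf.infolist():
                    # hierarchy lives in the stored name: entries that keep folders
                    # contain '/', and explicit directory entries end with '/'
                    print("  ", info.filename)

    If the new archives store bare filenames with no '/' in the entry names, a strict extractor on the phone will flatten them even though desktop tools still cope.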

    Read the article

  • How to set up Windows 7 Professional as a NAS

    - by Enyalius
    I searched and didn't find any answers, so please forgive me if this is a repeat. Anyway, I have an older computer that I'm using as an HTPC, and I was hoping that I could use it as a NAS/multimedia server as well. My primary uses would include accessing content on my PS3 (same LAN), accessing content from other computers on my home network and (if I can) accessing content from my Android phone over the internet. I have used SubSonic to stream music to my Android phone and other computers before, but I would really like to find a way to do this natively if possible. I know that I can buy external hard disk cases that plug into the USB port of my router, and that I can get a Drobo or another network storage solution, but I would really just rather not spend the money (especially considering that I already have a computer that I should be able to use). Hardware involved:
    - Apple AirPort Extreme base station router (most recent revision)
    - Home Theater Personal Computer: Core 2 Duo @ 2.4GHz, 8GB DDR2 RAM, ~3.5TB hard drive space
    - Sony PlayStation 3 Thin 120GB
    - HTC Thunderbolt (I have 4G coverage), rooted and running Android 2.2.1
    - Various Apple laptops
    - Various Windows 7 desktops/laptops
    Thanks in advance! Note: I have looked at open-source NAS software, but I would like to preserve the Windows Media Center functionality in Windows 7, so other NAS software is not an option for me currently.

    Read the article

  • Upgrade or replace?

    - by Felix
    My current PC is about four years old, although I have made upgrades to it throughout its existence. The current specs are:
    - (old) Intel Pentium D 2.80GHz (32K L1 / 2M L2), Gigabyte 945GCMX-S2 motherboard
    - (old) 2.5GB DDR2 (slot0: 512MB @ 533MHz; slot1: 2GB @ 667MHz)
    - (new) HIS Radeon HD 4670 - I think this is limited by the motherboard not supporting PCIe 2.0 (?)
    - (old) WD Caviar 160GB - pretty slow
    - (new) WD Caviar Black 640GB
    (If any more specs are relevant, let me know and I'll add them.)
    Now, on to my question. I've been having performance issues lately, both in video games and in intensive applications. A couple of examples: Android application development (running Eclipse and the Android emulator) is painfully slow (on Linux). I only realized this when, at my new job as an Android dev, both tools are MUCH quicker (I'm not sure what CPU I have there). The guys at my new job got me NFS Hot Pursuit, in which I barely get like 5-10 FPS, even with the graphics options turned all the way down. My guess is that the bottleneck in my system is my CPU, so I'm thinking of upgrading to a quad-core i5 + new motherboard + 4GB DDR3 (or more, 'cause I know you'll all jump and say 8GB minimum). Now:
    - Is that a good idea? Is my CPU really a bottleneck, or is the whole system too old and I should replace it?
    - I run Windows 7 on the old, 160GB HDD (which is on IDE, by the way). Could this slow down games as well? Should I get a new drive for Windows if I want to play new games?
    - I know nothing about power supplies. Could that be a problem / will it be a problem if I upgrade to an i5?
    - How come DiRT 2 works on full graphics settings (pretty amazing graphics, by the way) and NFS Hot Pursuit pulls only 5-10 FPS?

    Read the article

  • HttpWebResponse with MJPEG and multipart/x-mixed-replace; boundary=--myboundary response content type

    - by arri.me
    I have an ASP.NET application that needs to show a video feed from a security camera. The video feed has a content type of 'multipart/x-mixed-replace; boundary=--myboundary' with the image data between the boundaries. I need assistance with passing that stream of data through to my page so that the client-side plugin I have can consume the stream just as it would if I browsed to the camera's web interface directly. The following code does not work:

        // Get response data
        byte[] data = HtmlParser.GetByteArrayFromStream(response.GetResponseStream());
        if (data != null)
        {
            HttpContext.Current.Response.OutputStream.Write(data, 0, data.Length);
        }
        return;
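    A multipart/x-mixed-replace feed never ends, so any helper that tries to read the response stream to completion (as GetByteArrayFromStream appears to) will block or buffer forever; the page has to act as a relay, copying bytes to the client as they arrive and flushing after each chunk while passing the content type through unchanged. The following is only a language-agnostic sketch of that relay loop, written in Python for brevity; the camera URL, chunk size and the write callback are placeholders:

        # sketch: relay an MJPEG (multipart/x-mixed-replace) stream chunk by chunk
        # instead of buffering it; the URL and write_to_client callback are placeholders
        import urllib.request

        CAMERA_URL = "http://camera.local/video.mjpg"

        def relay(write_to_client):
            with urllib.request.urlopen(CAMERA_URL) as upstream:
                while True:
                    chunk = upstream.read(8192)   # read whatever arrives, never "to the end"
                    if not chunk:
                        break
                    write_to_client(chunk)        # write and flush straight to the client

    In the ASP.NET version the same loop would read from response.GetResponseStream() and write/flush to HttpContext.Current.Response.OutputStream with output buffering disabled.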

    Read the article

  • WPF Memory Leak

    - by Oskar Kjellin
    I have a WPF form that I didn't create myself, so I am not very good at WPF. It is leaking badly, though: up to 400 MB, and closing the form does not help. The problem lies in my application loading all the pictures at once. I would like to only load the ones visible at the moment. There are about 300 pictures and they are a bit large, so my WPF form suffers from loading them all. I have a DataTemplate with my own type that has a property Thumbnail. The code in the template is like this:

        <Image Source="{Binding Path=Thumbnail}" Stretch="Fill"/>

    And then I have a grid with a control that has the above template as its source. The code for this control is below. Please provide me with hints on how to optimize the code, and perhaps get only the ones that are visible and only have that many controls loaded at the same time?

        <Setter Property="Template">
            <Setter.Value>
                <ControlTemplate TargetType="Controls:ElementFlow">
                    <Grid Background="{TemplateBinding Background}">
                        <Canvas x:Name="PART_HiddenPanel" IsItemsHost="True" Visibility="Hidden" />
                        <Viewport3D x:Name="PART_Viewport">
                            <!-- Camera -->
                            <Viewport3D.Camera>
                                <PerspectiveCamera FieldOfView="60" Position="0,1,4" LookDirection="0,-1,-4" UpDirection="0,1,0" />
                            </Viewport3D.Camera>
                            <ContainerUIElement3D x:Name="PART_ModelContainer" />
                            <ModelVisual3D>
                                <ModelVisual3D.Content>
                                    <AmbientLight Color="White" />
                                </ModelVisual3D.Content>
                            </ModelVisual3D>
                            <Viewport2DVisual3D RenderOptions.CachingHint="Cache" RenderOptions.CacheInvalidationThresholdMaximum="2" RenderOptions.CacheInvalidationThresholdMinimum="0.5"/>
                        </Viewport3D>
                    </Grid>
                </ControlTemplate>
            </Setter.Value>
        </Setter>
        </Style>

    Read the article

  • JMF Registry could not add item (Vista)

    - by Ranch
    Hello, I'm using JMF to capture a video stream (webcam) in my Java project. The camera I'm using is recognized by JMF (JMFStudio) and I manage to get the video stream. However, in the JMF Registry Editor there's a list of available capture devices, and when I click "Add" on one of the items (including the one I need) I get a "Could not add item" error. Therefore this camera is not set in its registry (god knows where) and later on it is not recognized by my project:

        RGBFormat fmt = new RGBFormat(); // could be YUV - doesn't matter
        Vector v = CaptureDeviceManager.getDeviceList(fmt);

    v is empty (while I know the video source is recognized by Java, since I manage to get video in JMF Studio). Now, this happens only on Vista (where else) and not on XP. I have a suspicion that somehow Vista security blocks Java from writing the registry file, but of course, I might be wrong. One more comment: this is the vfw:Microsoft WDM Image Capture (Win32):0 device. Any idea?

    Read the article

  • iPhone application-Memory handling issues

    - by Vin
    Hi All, I am having some memory management issues in my app. Maybe someone can help me out here. 1) While checking for leaks in Instruments, when I deploy and run the app on the device, the virtual memory used starts at 50 MB (even though I've just launched the app and am on the first screen). My resources contribute 2.6 MB of it and I don't know what is contributing the rest. What is the ideal virtual memory utilization for an app? 2) In a certain screen of the app, the user is allowed to take a picture with the camera. In Instruments, I observe that the virtual memory utilization jumps by around 20 MB when the camera is invoked. Is this normal, and can it be decreased? Looking forward to a reply soon. Thanks in advance

    Read the article

  • Samsung HMX-H100P camcorder and video encoding with mencoder

    - by jskg
    Hi everyone, my background is totally not related to video stuff, so pardon my newbie style. I own a Samsung HMX-H100P camcorder and I'm trying to encode videos to be uploaded to YouTube and Vimeo. First problem: videos generated by the camera with no processing appear like this when I play them with Totem (Linux) or VideoLan: http://www.youtube.com/watch?v=AANbl_DTuzE Second problem: when I try to encode the videos produced by the camera using mencoder, I get the video at the resolution I chose, but those ugly lines and the lagging are still present. Here's the command I use:

        mencoder $inputFile -aspect 16:9 -of lavf -lavfopts format=psp \
            -oac lavc -ovc lavc \
            -lavcopts aglobal=1:vglobal=1:coder=0:vcodec=libx264:acodec=libfaac:vbitrate=4500:abitrate=128 \
            -vf scale=1280:720 -ofps 25000/1001 -o $outputFile

    Any ideas? Thanks in advance
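    Horizontal "comb" lines on moving objects are the classic signature of interlaced footage: camcorders in this range typically record 1080i, and both desktop players and encoders show those stripes unless they deinterlace. One thing worth trying is a deinterlace filter ahead of the scale step in the existing filter chain; the sketch below just drives the same mencoder command from Python with yadif inserted, and whether yadif is available depends on how your mencoder was built, so treat the filter name as an assumption:

        # sketch: re-run the encode with a deinterlace step (yadif) before scaling;
        # file names and bitrates are placeholders, yadif depends on the mencoder build
        import subprocess

        cmd = [
            "mencoder", "input.mts",
            "-aspect", "16:9",
            "-of", "lavf", "-lavfopts", "format=psp",
            "-oac", "lavc", "-ovc", "lavc",
            "-lavcopts",
            "aglobal=1:vglobal=1:coder=0:vcodec=libx264:acodec=libfaac:vbitrate=4500:abitrate=128",
            "-vf", "yadif,scale=1280:720",   # deinterlace first, then scale
            "-ofps", "25000/1001",
            "-o", "output.mp4",
        ]
        subprocess.check_call(cmd)

    If the lines disappear in the encoded file, the playback problem in Totem/VLC is the same issue and can be handled with their own deinterlace settings.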

    Read the article

  • Displaying Video using a Window Handle

    - by fergs
    I'm working on a C# wrapper for Dallmeier cameras and currently have a working wrapper. I can connect to a camera by passing a window handle (in my application it's a picture box handle), which is used to send video and messages. Once connected, I can then send the StartLiveView command and a live video stream will be shown in the picture box. Can someone explain how this works by just giving the window handle? And how can I grab an image from this stream when Picturebox1.Image is null?

    Read the article

  • Video packet capture over multiple IP cameras

    - by nimals1986
    Hello, we are working on a C language application which is a simple RTSP/RTP client to record video from a number of Axis cameras. We launch a pthread for each camera, which establishes the RTP session and begins to record the packets captured using the recvfrom() call. A single camera with a single pthread records fine for well over a day without issues, but when testing with more of the cameras available, about 25 (so 25 pthreads), the recording to file goes fine for 15 to 20 minutes and then the recording just stops; the application still keeps running. It's been over a month and a half that we have been trying with varied implementations, but nothing seems to help. Please provide suggestions. We are using the CentOS 5 platform.

    Read the article

  • Computing orientation of a square and displaying an object with the same orientation

    - by Robin
    Hi, I wrote an application which detects a square within an image. To give you a good understanding of what such an image containing such a square, in this case a marker, could look like: (example image not reproduced here). What I get, after the detection, are the coordinates of the four corners of my marker. Now I don't know how to display an object on my marker. The object should have the same rotation/angle/direction as the marker. Are there any papers on how to achieve that, or any algorithms I can use that have proved to be pretty solid/working? It doesn't need to be a working solution; it could be a simple description of how to achieve that, or something similar. If you point me at a library or something, it should work under Linux; Windows is not needed but would be great in case I need to port the application at some point. I already looked at the ARToolKit, but they use camera parameter files and more complex matrices, while I only have the four corner points and a single image instead of a whole video / camera stream.
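    With only four corner points, the in-plane rotation of the marker can be read directly off one of its edges: take two adjacent corners and compute the angle of the vector between them, then draw your object rotated by that angle. A minimal sketch (Python; the corner ordering top-left, top-right, bottom-right, bottom-left is an assumption your detector has to guarantee):

        # sketch: in-plane rotation of a detected square from its corner points;
        # assumes corners come ordered top-left, top-right, bottom-right, bottom-left
        import math

        def marker_angle(corners):
            (x0, y0), (x1, y1) = corners[0], corners[1]   # the top edge of the marker
            # angle of that edge relative to the image x-axis, in degrees
            return math.degrees(math.atan2(y1 - y0, x1 - x0))

        corners = [(120, 80), (220, 95), (205, 195), (105, 180)]   # made-up example
        print(marker_angle(corners))   # roughly 8.5 degrees for these points

    If the marker can also be tilted out of the image plane, the four point correspondences are enough to estimate a homography (or, with camera intrinsics, a full pose), which is essentially what ARToolKit does internally with its parameter files.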

    Read the article

  • How to spot empty parking spaces?

    - by mithila
    I want to do a final-year B.Sc project on parking space detection. Can anybody give me some links related to it? Any textbook, tutorial or anything? What would be the prerequisites for this project? What kind of skills (programming/math) are needed? What are the initial steps to take? What kind of reading (image processing algorithms) is needed? Detail added in comments: I'm going to use a camera, not infrared. I would like to use still images, or one camera which captures images from a parking lot. I think real-time processing will be tough; at this moment I just need to start the project, so still images will work fine, but later it may be a real-time project.
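    A common way to start with still images is to mark each parking space once as a rectangle in the camera's view, keep a reference photo of the empty lot, and compare simple pixel statistics inside each rectangle between the reference and the current image. The sketch below shows that baseline idea (Python with Pillow and NumPy; the file names, rectangles and threshold are all assumptions to be tuned on real data):

        # sketch: flag occupied spaces by comparing each marked rectangle against
        # a reference image of the empty lot; all names and numbers are placeholders
        import numpy as np
        from PIL import Image

        SPACES = {"A1": (40, 120, 110, 190), "A2": (115, 120, 185, 190)}  # left, top, right, bottom
        THRESHOLD = 25.0   # mean absolute grey-level difference; tune on real images

        empty = np.asarray(Image.open("lot_empty.jpg").convert("L"), dtype=float)
        now = np.asarray(Image.open("lot_now.jpg").convert("L"), dtype=float)

        for name, (l, t, r, b) in SPACES.items():
            diff = np.abs(now[t:b, l:r] - empty[t:b, l:r]).mean()
            print(name, "occupied" if diff > THRESHOLD else "free", round(diff, 1))

    Raw differencing breaks down quickly under changing lighting and shadows, which is exactly where the image-processing reading comes in: background subtraction, edge/texture features inside each space, and eventually a small classifier trained on labelled patches.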

    Read the article

  • How to efficiently track my geolocation while traveling using an iPhone

    - by Peter Kruithof
    I'm going to travel through Thailand and I want to keep track of my location, to afterwards geotag the photos taken with a digital camera (the iPhone's camera is not good enough). There are two things that are important here: I don't want to update manually, and I want the battery to last as long as possible, since the times I will be able to charge will be scarce. I've thought about creating a web page that periodically sends my geolocation to a script that stores it in a database, but I don't know if GPS data is available in Mobile Safari. Second, I want the data I send to be as small as possible, and the frequency with which it is sent as low as possible, because of the pricing of mobile data usage abroad. Any suggestions on what would be a good solution here?
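    Mobile Safari does expose the W3C geolocation API (navigator.geolocation) from iOS 3 onwards, so the web-page approach is possible, though a page that keeps polling GPS also keeps the screen and radio awake, and that is the main battery cost. The receiving side can stay tiny: one endpoint that accepts a latitude, longitude and timestamp in the query string (a few dozen bytes per fix) and appends them to SQLite for geotagging later. A minimal sketch of such a collector using only the Python standard library (port, path and parameter names are assumptions):

        # sketch: minimal collector for periodic position fixes, e.g.
        #   GET /log?lat=13.7563&lon=100.5018&t=1279000000
        # port, path and parameter names are placeholders
        import sqlite3
        from http.server import BaseHTTPRequestHandler, HTTPServer
        from urllib.parse import urlparse, parse_qs

        db = sqlite3.connect("track.db")
        db.execute("CREATE TABLE IF NOT EXISTS fixes (t INTEGER, lat REAL, lon REAL)")

        class LogHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                q = parse_qs(urlparse(self.path).query)
                db.execute("INSERT INTO fixes VALUES (?, ?, ?)",
                           (int(q["t"][0]), float(q["lat"][0]), float(q["lon"][0])))
                db.commit()
                self.send_response(204)   # empty reply keeps mobile data usage minimal
                self.end_headers()

        HTTPServer(("", 8080), LogHandler).serve_forever()

    A dedicated GPS logger, or an on-phone app that batches fixes and uploads them only when on Wi-Fi, would use far less power and data than a page that has to stay open, so that trade-off is worth weighing too.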

    Read the article

  • Streaming webcam video in Flash using MP4 encoding

    - by Herms
    One of the features of the Flash app I'm working on is to be able to stream a webcam to others. We're just using the built-in webcam support in Flash and sending it through FMS. We've had some people ask for higher quality video, but we're already using the highest quality setting we can in Flash (setting quality to 100%). My understanding is that in the newer Flash players they added support for MPEG-4 encoding for the videos. I created a simple test Flex app to try and compare the video quality of the MP4 vs FLV encodings. However, I can't seem to get MP4 to work at all. According to the Flex documentation, the only thing I need to do to use MP4 instead of FLV is prepend "mp4:" to the name of the stream when calling publish: "Specify the stream name as a string with the prefix mp4: with or without the filename extension. The prefix indicates to the server that the file contains H.264-encoded video and AAC-encoded audio within the MPEG-4 Part 14 container format." When I try this nothing happens. I don't get any events raised on the client side, no exceptions thrown, and my logging on the server side doesn't show any streams starting. Here's the relevant code:

        // These are all defined and created within the class.
        private var nc:NetConnection;
        private var sharing:Boolean;
        private var pubStream:NetStream;
        private var format:String;
        private var streamName:String;
        private var camera:Camera;

        // called when the user clicks the start button
        private function startSharing():void {
            if (!nc.connected) {
                return;
            }
            if (sharing) {
                return;
            }
            if (pubStream == null) {
                pubStream = new NetStream(nc);
                pubStream.attachCamera(camera);
            }
            startPublish();
            sharing = true;
        }

        private function startPublish():void {
            var name:String;
            if (this.format == "mp4") {
                name = "mp4:" + streamName;
            } else {
                name = streamName;
            }
            //pubStream.publish(name, "live");
            pubStream.publish(name, "record");
        }

    Read the article

  • H.264 over RTP - Identify SPS and PPS Frames

    - by Toby
    I have a raw H.264 stream from an IP camera packed in RTP frames. I want to get the raw H.264 data into a file so I can convert it with ffmpeg. So when I want to write the data into my raw H.264 file, I found out it has to look like this:

        00 00 01 [SPS]
        00 00 01 [PPS]
        00 00 01 [NAL byte]
        [PAYLOAD RTP Frame 1]   // payload always without the first 2 bytes -> NAL
        [PAYLOAD RTP Frame 2]
        [... until a PAYLOAD frame with the mark bit is received]
        // From here it's a new video frame
        00 00 01 [NAL byte]
        [PAYLOAD RTP Frame 1]
        ....

    So I get the SPS and the PPS from the Session Description Protocol out of my preceding RTSP communication. Additionally, the camera sends the SPS and the PPS in two single messages before starting with the video stream itself. So I capture the messages in this order:

        1. Preceding RTSP communication (including SDP with SPS and PPS)
        2. RTP frame with payload: 67 42 80 28 DA 01 40 16 C4   // this is the SPS
        3. RTP frame with payload: 68 CE 3C 80                  // this is the PPS
        4. RTP frame with payload: ...                          // video data

    Then there come some frames with payload, and at some point an RTP frame with the marker bit = 1. This means (if I got it right) that I have a complete video frame. After this I write the prefix sequence (00 00 01) and the NAL from the payload again and go on with the same procedure. Now my camera sends me, after every 8 complete video frames, the SPS and the PPS again (again in two RTP frames, as seen in the example above). I know that especially the PPS can change in between streaming, but that's not the problem. My questions are now:
    1. Do I need to write the SPS/PPS every 8th video frame? If my SPS and my PPS don't change, should it be enough to have them written at the very beginning of my file and nothing more?
    2. How to distinguish between SPS/PPS and normal RTP frames? In my C++ code which parses the transmitted data I need to make a difference between the RTP frames with normal payload and the ones carrying the SPS/PPS. How can I distinguish them? Okay, the SPS/PPS frames are usually way smaller, but that's not a safe thing to rely on. Because if I ignore them I need to know which data I can throw away, or if I need to write them I need to put the 00 00 01 prefix in front of them. Or is it a fixed rule that they occur every 8th video frame?
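    The robust way to tell these apart is not size but the NAL unit type carried in the first payload byte: the low five bits are the type, and 7 means SPS, 8 means PPS (5 is an IDR slice, 1 a non-IDR slice). That matches the sample payloads above, since 0x67 & 0x1F = 7 and 0x68 & 0x1F = 8. Here is the check as a small sketch (Python only for brevity; in the C++ parser it is the same bitmask), valid for single-NAL-unit packets; if the camera ever sends fragmented FU-A packets (type 28), the real type sits in the low five bits of the following FU header byte:

        # sketch: classify an RTP payload by its H.264 NAL unit type
        # (first payload byte, low 5 bits); valid for single-NAL-unit packets
        NAL_TYPES = {1: "non-IDR slice", 5: "IDR slice", 7: "SPS", 8: "PPS"}

        def nal_type(payload: bytes) -> str:
            t = payload[0] & 0x1F
            return NAL_TYPES.get(t, "other (type %d)" % t)

        print(nal_type(bytes.fromhex("67428028DA014016C4")))   # -> SPS
        print(nal_type(bytes.fromhex("68CE3C80")))             # -> PPS

    As for question 1: writing the SPS/PPS once at the start of the file is normally enough as long as they really never change; repeating them before each IDR frame is also harmless and is what many muxers do.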

    Read the article

  • Tool to monitor HTTP traffic

    - by Samuh
    I have an application on my iPhone which sends out HTTP requests; is it possible to look into the HTTP stream using some tool? I use the standalone version of (IEInspector's) HttpAnalyzer on my Windows PC to monitor HTTP traffic from all processes, including the apps on an Android phone (thanks to the Android Debug Bridge interface). Is there a similar tool for OS X that I can use for iPhone apps? Is this even allowed? Thanks in advance.

    Read the article

  • Is there any drag-and-drop facility in BlackBerry?

    - by sairam333
    Hi all, I am new to BlackBerry. I have experience on Android and now I want to learn BlackBerry. In Android, an application contains a res folder in which we add the layout in XML form, and we can create the forms easily using views and layouts. In the same way, is there any such facility in BlackBerry? Please give me some suggestions on preparing forms in BlackBerry.

    Read the article

  • Comparing images using SIFT

    - by Luís Fernando
    I'm trying to compare two images that are taken with a digital camera. Since there may be movement of the camera, I want to first make the pictures "match" and then compare them (using some distance function). To match them, I'm thinking about cropping the second picture and using SIFT to find it inside the first picture... there will probably be a small difference in scale/translation/rotation, so then I'd need to find the transformation matrix that converts image 1 to image 2 (based on points found by SIFT). Any ideas on how to do that (or I guess this is a common problem that may have some open-source implementation)? Thanks
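    Once SIFT gives you matched point pairs, a transform that covers small scale/translation/rotation differences is an affine (or similarity) matrix, and it can be estimated by least squares from three or more correspondences; OpenCV has this built in, but the estimation itself is only a small linear solve. A hedged sketch with NumPy (the matched points below are placeholders, and in practice you would wrap this in RANSAC to reject bad matches):

        # sketch: least-squares affine transform image1 -> image2 from matched points;
        # src/dst are placeholder matches, use RANSAC around this to drop outliers
        import numpy as np

        src = np.array([(10.0, 12.0), (200.0, 15.0), (105.0, 180.0), (60.0, 90.0)])
        dst = np.array([(14.0, 20.0), (203.0, 28.0), (102.0, 188.0), (62.0, 99.0)])

        # solve [x y 1] @ M = [x' y'] for the 3x2 affine matrix M
        A = np.hstack([src, np.ones((len(src), 1))])
        M, *_ = np.linalg.lstsq(A, dst, rcond=None)

        mapped = A @ M          # src points mapped into image 2's coordinates
        print(np.round(M, 3))
        print(np.round(mapped, 1))

    After warping one image with the estimated matrix, the per-pixel distance function can be applied on the overlapping region only.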

    Read the article

  • Xcode: Application name in OS X cannot be localized?

    - by Andrew Chang
    I have a project named "Multi-Camera Supervisor". I have made the "MainMenu.xib" file localized, and the menu bar in the localized nib files in Xcode shows the translated items for both English and Japanese (screenshots omitted). But when I run my application from Xcode, the first menu item doesn't change. In the menu bars of the running application (English and Japanese screenshots omitted), you can see that the application name is still "Multi-Camera Supervisor". Meanwhile, the application name that appears on the Dock icon is not localized either. How should I solve this? How can I localize the application name not only in the main menu but also in the Dock?
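    The application menu title and the Dock label are not taken from the nib at run time; they come from the bundle's display name, which is localized with a per-language InfoPlist.strings file containing CFBundleDisplayName (and usually CFBundleName as well). The sketch below just generates such a file for the Japanese localization; the translated name is a placeholder, and the file would normally be added to the Xcode project as a localized resource rather than written by a script:

        # sketch: create ja.lproj/InfoPlist.strings with a localized bundle display name;
        # the Japanese text is only a placeholder for the real translated name
        import os

        localized_name = "マルチカメラ監視"   # placeholder translation
        os.makedirs("ja.lproj", exist_ok=True)
        with open("ja.lproj/InfoPlist.strings", "w", encoding="utf-8") as f:
            f.write('"CFBundleDisplayName" = "%s";\n' % localized_name)
            f.write('"CFBundleName" = "%s";\n' % localized_name)

    For the Finder and Dock to pick up the localized name you may also need LSHasLocalizedDisplayName set to YES in the Info.plist; treat that key as something to verify against Apple's documentation.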

    Read the article

  • Compressing three individual jpeg pics containing temporal redundancy?

    - by michael
    I am interfacing an embedded device with a camera module that returns a single JPEG-compressed frame each time I trigger it. I would like to take three successive shots (approx. 1 frame per 1/4 second) and further compress the images into a single file. The assumption here is that there is a lot of temporal redundancy, and therefore lots of room for more compression across the three frames (compared to sending three separate JPEG images). I will be implementing the solution on an embedded device in C, without any libraries and no OS. The camera will be taking pics in an area with very little movement (no visitors or screens in the background, maybe a tree with swaying branches), so I think my assumption about redundancy is pretty solid. When the file is finally viewed on a PC/Mac, I don't mind having to write something to extract the three frames (so it can be a nonstandard kludge). So I guess the actual question is: what is the best way to compress these three images together, given the fact that they are already in JPEG format (it is a possibility to convert back to a raw image, but only if I have to...)?
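    One way to sanity-check the redundancy assumption before writing any embedded code is to decode the three shots on a PC, subtract consecutive frames, and see how much better the difference frames compress than the originals; if the deltas shrink dramatically, a simple "key frame plus two delta frames" container is worth the effort. A rough sketch of that experiment (Python with Pillow, NumPy and zlib; the file names are placeholders, and the embedded version would work on raw frames with its own entropy coder rather than zlib):

        # sketch: measure how much temporal redundancy three successive shots have
        # by compressing frame deltas; frame1..3.jpg are placeholder file names
        import zlib
        import numpy as np
        from PIL import Image

        frames = [np.asarray(Image.open("frame%d.jpg" % i).convert("L"), dtype=np.int16)
                  for i in (1, 2, 3)]

        key = frames[0].astype(np.uint8).tobytes()
        print("key frame, deflated:", len(zlib.compress(key)))

        for i in (1, 2):
            delta = (frames[i] - frames[i - 1]).tobytes()
            # mostly-zero deltas deflate far better than full frames if the
            # temporal-redundancy assumption holds
            print("delta %d, deflated:" % i, len(zlib.compress(delta)))

    Note that deltas computed on already-JPEG-compressed frames carry the blocking noise of both frames, so on the device it is usually better to grab raw frames (if the module allows it), delta them, and only then entropy-code.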

    Read the article

  • Unreachable code detected by using const variables

    - by Anton Roth
    I have the following code:

        private const FlyCapture2Managed.PixelFormat f7PF = FlyCapture2Managed.PixelFormat.PixelFormatMono16;

        public PGRCamera(ExamForm input, bool red, int flags, int drawWidth, int drawHeight)
        {
            if (f7PF == FlyCapture2Managed.PixelFormat.PixelFormatMono8)
            {
                bpp = 8;                                 // unreachable warning
            }
            else if (f7PF == FlyCapture2Managed.PixelFormat.PixelFormatMono16)
            {
                bpp = 16;
            }
            else
            {
                MessageBox.Show("Camera misconfigured"); // unreachable warning
            }
        }

    I understand that this code is unreachable, but I don't want that message to appear, since it's a compile-time configuration which just needs a change in the constant to test different settings, and the bits per pixel (bpp) change depending on the pixel format. Is there a good way to have just one variable being constant, deriving the other from it, but not resulting in an unreachable-code warning? Note that I need both values: on start-up the camera needs to be configured to the proper pixel format, and my image-understanding code needs to know how many bits the image has. So, is there a good workaround, or do I just live with this warning?

    Read the article

  • main.out.xml error

    - by husainsn
    Hello, I installed Eclipse and the Android ADT. When I create an Android project and try to run it, I get the following in the main.out.xml file: "This document is empty. Right click here to insert content." However, the main.xml file has XML data for the layout. Please help: what do I need to do after right-clicking? Thanks

    Read the article

  • Trouble installing Rhodes framework

    - by user94154
    When I run rake run:android, I get the error (I'm using Ubuntu):

        Your java bin folder does not appear to be on your path.
        This is required to use rhodes.

    Here is the relevant part of my bash.bashrc file:

        export PATH="$PATH:$HOME/ruby/gems/bin"
        export GEM_HOME="$HOME/ruby/gems"
        export GEM_PATH="$GEM_HOME:/usr/lib/ruby/gems/1.8"
        export GEM_CACHE="$GEM_HOME/cache"
        export RUBYOPT=rubygems
        export ANDROID_HOME="/home/username/ruby_files/android-sdk-linux_86"
        PATH="$PATH:$ANDROID_HOME/tools"
        export JAVA_HOME=/usr/lib/jvm/java-6-sun-1.6.0_21
        export JAVA_HOME
        PATH=$PATH:$JAVA_HOME/bin
        export PATH

    Read the article
