Search Results

Search found 1303 results on 53 pages for 'voice recognition'.


  • How can I automatically edit an email before auto-forwarding it?

    - by Miss Cellanie
    Is there a way to automatically edit emails before forwarding them? I'm getting email notifications from Foursquare that I want to send to my phone as text messages. I know how to send messages to my number using an email address (I'm in the US and use Verizon), but I don't know how to strip out any unnecessary formatting, like HTML, before the email gets sent.
    What I want:
    - Ability to strip out HTML
    - Ability to start forwarding at a specific part of the email based on a search (e.g., I might know that Foursquare starts their messages with "Hey hey!" and only want content after that phrase occurs)
    - Ability to truncate at 160 characters
    Things I've tried:
    - I'm not using Foursquare DM pings through Twitter because I have two Twitter accounts and Twitter only allows a phone to be linked to one account at a time. I'm not willing to change which account it's linked to.
    - I tried to work around the Twitter limitation using Google Voice, but they don't support SMS short codes.
    I'll compromise on the features I want if I can find a free solution that doesn't require me to set up my own server. I do think this is computer related because it will happen on my computer, not on my phone.
    Edit: My current setup is Gmail in Firefox 3.0.15 on Windows XP. I use a netbook as my only personal computer. However, if the only way to accomplish this well is to set up my own mail server or something, I would still want to know that.
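    For illustration, a minimal Python sketch of the text processing described above (the plumbing for intercepting the mail is a separate problem, and running any script at all is something the question hopes to avoid; the "Hey hey!" marker and the 160-character limit are taken from the question):

        import re
        import html

        def to_sms_text(body_html, start_marker="Hey hey!", limit=160):
            # Strip tags and decode entities (rough, but fine for simple notification mail).
            text = re.sub(r"<[^>]+>", " ", body_html)
            text = html.unescape(text)
            text = re.sub(r"\s+", " ", text).strip()
            # Start forwarding at the marker phrase, if present.
            marker_at = text.find(start_marker)
            if marker_at != -1:
                text = text[marker_at + len(start_marker):].strip()
            # Truncate to a single SMS.
            return text[:limit]

        print(to_sms_text("<p>Hi!</p><p>Hey hey! Your friend checked in at Foo Bar.</p>"))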

    Read the article

  • How do I setup routing for two companies with different Internet connections on the same LAN?

    - by Clint Miller
    Here's the setup: Two companies (A & B) share office space and a LAN. A 2nd ISP is brought in and company A wants its own Internet connection (ISP A) and company B wants its own Internet connection (ISP B). VLANs are deployed internally to separate the two companies' networks (company A: VLAN 1, company B: VLAN 2, shared VOIP: VLAN 3). With separate VLANs it's simple enough to use separate DHCP servers (or separate scopes on the same server) to assign the default gateway to each company's gateway for their Internet connection. Static routes can be created on each gateway to point traffic destined for the other company's VLAN or the voice VLAN so that all nodes are reachable as expected. However, I think this is a form of asymmetrical routing, right? (The path from node A1 to node B1 is not the same as the path back from node B1 to node A1). Can I set up policy-based routing to correct this? In that case, can I assign the same default gateway to every device on all VLANs and create a routing policy on a L3 switch to look at the source address and forward traffic to the appropriate next hop? In that case, I want the routing logic to go like this: If the destination address is known, forward the traffic (traffic destined for a different VLAN). If the destination address is unknown, forward the traffic to ISP A's gateway if the source address is on VLAN A; or forward the traffic to ISP B's gateway if the source address is VLAN B. Am I thinking about this problem in the correct way? Is there another way to solve this problem that I am overlooking?
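    For what it's worth, on Cisco-style L3 switches the "unknown destinations go to your own ISP" logic is usually expressed with policy-based routing and set ip default next-hop, which only applies when no explicit route exists. A rough, IOS-style sketch with purely hypothetical addressing (VLAN 1 = 192.168.1.0/24 with ISP A's gateway at 192.168.1.1, VLAN 2 = 192.168.2.0/24 with ISP B's gateway at 192.168.2.1); the exact syntax depends on the switch platform:

        access-list 11 permit 192.168.1.0 0.0.0.255
        access-list 12 permit 192.168.2.0 0.0.0.255
        !
        route-map SPLIT-INTERNET permit 10
         match ip address 11
         set ip default next-hop 192.168.1.1
        route-map SPLIT-INTERNET permit 20
         match ip address 12
         set ip default next-hop 192.168.2.1
        !
        interface Vlan1
         ip policy route-map SPLIT-INTERNET
        interface Vlan2
         ip policy route-map SPLIT-INTERNET

    Because set ip default next-hop defers to any specific route in the table, inter-VLAN and voice-VLAN traffic still follows the normal routes, and only Internet-bound traffic is split per company.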

    Read the article

  • [SOLVED] How do I restore my audio after uninstalling Ventrilo?

    - by Marcx
    Hi, I have a Dell Studio 1555, bought in September, with Windows 7 64-bit Professional on it. The audio device works properly while listening to audio content (from disk or the internet). When I use Ventrilo, the audio from other people sounds good and I hear their voices clearly. When I use any other VoIP program, like TeamSpeak 3, MSN or Skype, I hear a distorted voice and it's impossible to understand anything... Anyway, everything worked fine until I installed Ventrilo, but removing it didn't solve my problem. Update: Here's a sample of how I hear other people's voices: Audio Sample. After some tests, the desktop has the same problem too. (I tried TeamSpeak 3.) Here are some details on my laptop and desktop:
    Laptop: Dell Studio 1555, Core 2 Duo P8600 2.4GHz, 4GB RAM Dual Channel, Ati HD 4570 512MB dedicated (up to 2048), IDT High Definition Audio
    Desktop: Asus P5KPL-AM motherboard, Dual Core CPU E5200 2.50GHz, 2x2GB PC6400 Dual Channel, Ati Radeon HD 4650 512MB, VIA High Definition Audio
    Both computers have Windows 7 Professional 64-bit. So how do I restore my audio?
    SOLVED: The problem was in the router firmware; there was a bug that recognized VoIP traffic as a DoS attack and the router garbled every packet... I installed the newest firmware and everything is fine :)

    Read the article

  • Create a wifi hotspot in a place where authentication is required [closed]

    - by SoftTimur
    I live in a residence where Internet is provided via cable. Once the computer is connected to the cable, launching a browser triggers an authentication: I have a username and password to enter, and then the internet is connected. With a gateway (e.g. the Wireless Cable Voice Gateway Model CBVG834G) and 2 cables, two PCs can connect to the Internet with my account at the same time. Now the question is, I don't like the cable and would like to create a wifi hotspot. It seems realizable with the same gateway. According to the instructions on page 2-4 of the manual: Enter http://192.168.0.1 in the address field of your Internet browser. Log in to the gateway with either of the default user names, MSO or admin... However, trying to open 192.168.0.1 gives me an error in the browser. Does anyone know what happened? Is it due to the authentication required by my residence? Is there any other way to build a wifi hotspot? PS: My system is Mac OS.

    Read the article

  • How to find Stolen MacBook with iCloud

    - by user1518089
    My MacBook Air was stolen about 6 weeks ago. Through iCloud and "Find Phone", I have some pictures and a location down to about 2 blocks. The pictures are from the current user taking photos which automatically appear on my local devices. (Yes they probably saw my pictures until I stopped taking them. Yes, they are stupid.) I was thinking about going there and hanging out until I recognized the current users, but it is in a very bad neighborhood and I would be noticed. The police have not done anything. Yes, the MacBook can be locked or a message sent. I am hoping to get it back. Does anyone have ideas on how to track them down? While Find Phone shows their location, it does not report an ip address. Is there a way to get an ip address? Does Facebook face recognition work on strangers? Come on tech geniuses, help me play detective. It does not have Drop Box installed.

    Read the article

  • What apps can you only get on Mac and not Windows?

    - by ytk
    What apps do you absolutely have to use a Mac to run, with no decent Windows PC equivalent? This is not a religious war. Please be specific and practical. It doesn't have to be a direct one-to-one comparison, but overall usefulness to the task. I'll start off with a few:
    - Keynote -- the animations are quite cool and not available in PowerPoint
    - iTunes' photo sync -- on Windows it makes a copy of all the photos you want to sync, effectively doubling the space taken up by your photos. On a Mac it's easier as long as you use iPhoto
    - Keychain -- a centralized password manager tied to the OS. The benefit of this is you don't have to set a master password (like Firefox) which you need to enter when starting the browser. And it doesn't reveal your passwords (like Chrome, which makes no effort to hide the passwords you have stored in Options)
    - Time Machine -- zero-configuration backup in the background. Easy interface for restoring a file, or even just a contact in the address book.
    - Text-to-speech -- works in any program, and sounds better than the Windows computer voice
    - Quick Look -- press the space bar to preview a file. Windows 95 had Quick View, but it was removed.

    Read the article

  • What are some good/reputable/widely-used libraries written in VB.NET?

    - by Dan Tao
    Generally speaking, when VB.NET and C# are compared, there is a lot of strong support for C#, accompanied by some bashing of VB.NET, until a respected developer comes along and acts as The Voice Of Reason, pointing out that while VB prior to VB.NET had its fair share of issues, VB.NET is really a very strong, fully OOP language that is, feature-wise, right about on par with C# (with the exception of certain things like a full-bodied lambda syntax [pre-VB10] or the yield keyword, as many C# faithful are quick to point out). I myself, having written plenty of code in both VB.NET and C#, fall squarely in the "I prefer C#, but don't consider VB.NET any less of a language" camp. However, one thing I have noticed is that when it comes to respected and/or widely-used libraries for .NET, everything is written in C#. Or at least that's been my impression. This strikes me as a little strange because, aside from the aforementioned sprinkling of nice features (in particular the yield keyword), I tend to view the VB.NET/C# divide as primarily a matter of personal taste. Obviously, plenty of developers prefer C#. But I personally know some developers (good ones) who prefer VB.NET, which leads me to suspect that surely some libraries (good ones) have been written in VB.NET. They must be out there, and I just haven't found them. What are some good libraries that have been written in VB.NET? The best would be open source, as that would allow interested developers to take a look at some good VB.NET code and see how effective the language can be when used properly. But I'd be interested to know about any libraries at all, particularly reputable ones.

    Read the article

  • A tool to aid completion of missing or incomplete source code documentation

    - by Pekka
    I have several finished, older PHP projects with a lot of includes that I would like to document in javadoc/phpDocumentor style. While working through each file manually and being forced to do a code review alongside the documenting would be the best thing, I am, simply out of time constraints, interested in tools to help me automate the task as much as possible. The tool I am thinking about would ideally have the following features:
    - Parse a PHP project tree and tell me where there are undocumented files, classes, and functions/methods (i.e. elements missing the appropriate docblock comment)
    - Provide a way to fairly easily add the missing docblocks by creating the empty structures and, ideally, opening the file in an editor (internal or external, I don't care) so I can put in the description.
    - Optional: automatic recognition of parameter types, return values and such. But that's not really required.
    The language in question is PHP, though I could imagine that a C/Java tool might be able to handle PHP files after some tweaking. Looking forward to your suggestions!
    Bounty: There are already very good suggestions (that I have not yet had the time to check out) for pointing out the gaps, but none yet providing aid in filling them. I want to give the question some more exposure; maybe there is some sort of a graphical extension to php_codesniffer that achieves the level of automation I'm dreaming of. Looking forward to any additional input!
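    Not a ready-made tool, but the first feature (finding the gaps) is small enough to sketch. A rough Python scan, assuming conventionally formatted PHP where a docblock ends with */ immediately above the declaration; it skips classes and will produce false positives on unusual layouts:

        import os, re, sys

        FUNC_RE = re.compile(r"^\s*(?:(?:public|protected|private|static|abstract|final)\s+)*function\s+(\w+)", re.M)

        def undocumented_functions(path):
            """Yield (file, line, name) for functions not directly preceded by a docblock."""
            with open(path, encoding="utf-8", errors="replace") as f:
                src = f.read()
            for m in FUNC_RE.finditer(src):
                before = src[:m.start()].rstrip()
                if not before.endswith("*/"):              # no closing docblock right above
                    line = src.count("\n", 0, m.start()) + 1
                    yield path, line, m.group(1)

        if __name__ == "__main__":
            root = sys.argv[1] if len(sys.argv) > 1 else "."
            for dirpath, _, files in os.walk(root):
                for name in files:
                    if name.endswith(".php"):
                        for hit in undocumented_functions(os.path.join(dirpath, name)):
                            print("%s:%d missing docblock for %s()" % hit)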

    Read the article

  • Modify python USB device driver to only use vendor_id and product_id, excluding BCD

    - by Tony
    I'm trying to modify the Android device driver for calibre (an e-book management program) so that it identifies devices by only vendor id and product id, excluding the BCD. The driver is a fairly simple Python plugin and is currently set up to use all three numbers, but apparently, when Android devices use custom Android builds (i.e. CyanogenMod for the Nexus One), the BCD changes, so calibre stops recognizing them. The current code looks like this, with a simple list of vendor ids that then have allowed product ids and BCDs with them:

        VENDOR_ID = {
            0x0bb4 : { 0x0c02 : [0x100], 0x0c01 : [0x100]},
            0x22b8 : { 0x41d9 : [0x216]},
            0x18d1 : { 0x4e11 : [0x0100], 0x4e12: [0x0100]},
            0x04e8 : { 0x681d : [0x0222]},
        }

    The line I'm specifically trying to change is:

        0x18d1 : { 0x4e11 : [0x0100], 0x4e12: [0x0100]},

    which is the line for identifying a Nexus One. My N1, running CyanogenMod 5.0.5, has the BCD 0x0226, and rather than just adding it to the list, I'd prefer to eliminate the BCD from the recognition process, so that any device with vendor id 0x18d1 and product id 0x4e11 or 0x4e12 would be recognized. The custom Android ROM doesn't change enough for the specifics to matter. The syntax seems to require the BCD in brackets. How can I edit this so that it matches anything in that field?
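    The matching code that consumes this dictionary isn't shown in the question, so as a purely hypothetical illustration of the intended behaviour (the helper below is made up, not calibre's actual driver code), vendor/product-only matching would amount to never consulting the innermost list:

        VENDOR_ID = {
            0x0bb4: {0x0c02: [0x100], 0x0c01: [0x100]},
            0x22b8: {0x41d9: [0x216]},
            0x18d1: {0x4e11: [0x0100], 0x4e12: [0x0100]},
            0x04e8: {0x681d: [0x0222]},
        }

        def matches(vendor_id, product_id, bcd=None):
            """Accept a device when vendor and product match; the BCD is ignored."""
            products = VENDOR_ID.get(vendor_id)
            return products is not None and product_id in products

        print(matches(0x18d1, 0x4e11, 0x0226))   # True for a CyanogenMod Nexus One

    Whether calibre's driver base class offers a supported way to do this (for example a wildcard value in place of the BCD list) is worth checking in its source before patching the comparison itself.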

    Read the article

  • OCR: How to improve accuracy - existing libraries for removing non-text 'furniture', shapes, etc to

    - by Rob
    I want to remove rectangles etc. that enclose text in a screenshot image, so that I can perform optical character recognition and get accurate text from the screenshot.
    Background: I'm doing this to extract data from a legacy application for use with other applications. This is the only way to get at this data, as the associated files are in a closed, proprietary, binary format. I will be using AutoItScript to drive the application to show data in its UI, then I will screenshot this and feed it to tesseract. I've already had some success in automating the UI, and have been able to use tesseract to get plain ascii text out of the bitmap. There are several AutoItScript forum articles discussing its use with tesseract/OCR, but not specifically for my question: http://www.autoitscript.com/forum/index.php?s=6c32c3ece12756e635a619cdf175eff9&showforum=2
    What I need to do: There are thin, 1-pixel-wide rectangles that closely enclose some text. When fed to tesseract, it sees them as characters: for example, a vertical line of the rectangle is read as an I. Any thoughts on how to remove the rectangles, or best practices? I'm asking if there is a generic command-line-based toolset to overwrite rectangles, for example, in .png files. I could then pass the .png through this, then pass it to tesseract.
    Details on the tesseract release/setup I've used are as follows: Go here: http://code.google.com/p/tesseract-ocr/downloads/list - For the basic English generic character set, to get Tesseract up and running and recognising your bitmapped text into ascii text, use tesseract-2.00.eng.tar.gz (current version at time of writing is: "English language data for Tesseract (2.00 and up) Jul 2007 989 KB 84845").
    Related questions I have already looked at on Stack Overflow:
    - http://stackoverflow.com/questions/1335581/how-to-give-best-chance-of-success-to-an-ocr-software
    - http://stackoverflow.com/questions/2296568/analysis-and-transformation-of-the-image-on-the-basis-of-this-analysis-for-better
    - http://stackoverflow.com/questions/2268028/reading-characters-off-of-the-screen
    In these, my question is not completely answered or a commercial solution is being sold. I do not want to consider a commercial solution at this stage.
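    I don't know of an off-the-shelf toolset for exactly this, but a rough pre-processing pass is easy to sketch in Python (Pillow and NumPy here are my assumption, not part of the tesseract setup described). It only whitens rows and columns that are almost entirely dark, i.e. full-width or full-height one-pixel rules, so partial boxes would need a smarter connected-component pass:

        import numpy as np
        from PIL import Image

        def remove_thin_rules(in_path, out_path, dark=128, coverage=0.8):
            """Whiten rows/columns that are mostly dark pixels (likely 1-px box edges)."""
            img = Image.open(in_path).convert("L")
            pixels = np.array(img)
            dark_mask = pixels < dark
            height, width = pixels.shape
            row_coverage = dark_mask.sum(axis=1) / float(width)
            col_coverage = dark_mask.sum(axis=0) / float(height)
            pixels[row_coverage > coverage, :] = 255    # horizontal rules
            pixels[:, col_coverage > coverage] = 255    # vertical rules
            Image.fromarray(pixels).save(out_path)

        remove_thin_rules("screenshot.png", "screenshot_clean.png")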

    Read the article

  • iPhone Audio Queue Service sample units

    - by pion
    I am looking at the Audio Queue Services documentation, specifically at the following code:

        // Writing an audio queue buffer to disk
        AudioFileWritePackets (                  // 1
            pAqData->mAudioFile,                 // 2
            false,                               // 3
            inBuffer->mAudioDataByteSize,        // 4
            inPacketDesc,                        // 5
            pAqData->mCurrentPacket,             // 6
            &inNumPackets,                       // 7
            inBuffer->mAudioData                 // 8
        );

    inBuffer->mAudioDataByteSize is the number of bytes of audio data being written. inBuffer->mAudioData is the new audio data to write to the audio file. Assuming the sample rate is 44100:

        AudioStreamBasicDescription mDataFormat;
        mDataFormat.mSampleRate = 44100.0f;
        mDataFormat.mBitsPerChannel = 16;
        ...
        NSInteger numberSamples = inBuffer->mAudioDataByteSize / 2;
        SInt16 *audioSample = (SInt16 *)inBuffer->mAudioData;

    I use core-plot to plot the above, where the x axis is the sample number [1 .. numberSamples] and the y axis is audioSample[0] .. audioSample[numberSamples]. I can see the chart in "real-time" where the y axis goes up and down depending on the loudness of my voice.
    Beginner questions: What does audioSample represent? What am I looking at here? What is the unit of audioSample? What do I need to do if I just want to plot the range between 50 - 100 Hz? Thanks in advance for your help.
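    To the beginner questions: each audioSample is a signed 16-bit linear PCM value, the instantaneous amplitude of the waveform at that sample time (unitless, in the range -32768..32767), so the plot is amplitude over time, not frequency. To look only at 50 - 100 Hz you need a Fourier transform of a window of samples and then keep only the bins in that band. A sketch of the math in Python/NumPy, purely illustrative (it is not the Objective-C/core-plot code):

        import numpy as np

        SAMPLE_RATE = 44100

        def band_spectrum(samples, lo=50.0, hi=100.0):
            """samples: signed 16-bit PCM values for one channel.
            Returns (frequencies, magnitudes) restricted to the lo..hi Hz band."""
            x = np.asarray(samples, dtype=np.float64) / 32768.0   # normalise to -1..1
            magnitudes = np.abs(np.fft.rfft(x))
            freqs = np.fft.rfftfreq(len(x), d=1.0 / SAMPLE_RATE)
            keep = (freqs >= lo) & (freqs <= hi)
            return freqs[keep], magnitudes[keep]

        # A one-second 75 Hz tone shows up inside the band.
        t = np.arange(SAMPLE_RATE) / float(SAMPLE_RATE)
        tone = (0.5 * np.sin(2 * np.pi * 75.0 * t) * 32767).astype(np.int16)
        freqs, mags = band_spectrum(tone)
        print(freqs[np.argmax(mags)])   # ~75.0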

    Read the article

  • C# Windows Mobile 6.5 and TCP connections

    - by Phillip
    Hello, I am developing an application which makes a TCP connection out to our company server to pull down data and provide real-time data updates when the information changes. I am using the .NET Compact Framework for the development and the .NET Framework 3.5 (soon to be updated to 4.0) for the server-side TCP connection. I want to leave the connection open after the initial data is sent to the device from the server, in order to keep the server in contact with the device should data updates need to be sent to the device. We already considered doing a WCF or connect/disconnect type of connection, but we believe the overhead on the server for creating the session, transmitting and session cleanup would be unacceptable (each device would be connecting every 60-90 seconds). So, leaving the connection open is the best option. What I need to know is, when I leave the TCP connection open, do I need to manually transmit a heartbeat (and if so, how do I do that with the .NET Compact Framework) or will the framework/stack do that for me? We have code that allows us to reconnect if the device gets disconnected (from network switching or a voice call), so that is handled. Thanks,
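    For context: a TCP connection that is simply left open sends nothing by itself, and OS-level keepalives are off by default (and typically far too slow for cellular NAT idle timeouts), so designs like this usually add a small application-level heartbeat that both sides understand. A language-agnostic sketch of that idea, in Python only for brevity (it is not Compact Framework API code; the endpoint is hypothetical):

        import socket
        import time

        HEARTBEAT_INTERVAL = 30      # seconds; tune to the carrier/NAT idle timeout
        PING = b"\x00"               # any marker byte the server reads and discards

        def heartbeat_loop(sock):
            """Keep an otherwise idle connection alive with tiny application-level pings."""
            while True:
                time.sleep(HEARTBEAT_INTERVAL)
                try:
                    sock.sendall(PING)           # server just ignores these bytes
                except OSError:
                    break                        # connection gone; fall back to the reconnect code

        # sock = socket.create_connection(("server.example.com", 9000))
        # heartbeat_loop(sock)   # typically run on a background thread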

    Read the article

  • How to parse the "<media:group>" using feedparser?

    - by Wayle.C
    The RSS file is shown below; I want to get the content in the media:group section. I checked the feedparser documentation, but it doesn't seem to mention this. How do I do it? Any help is appreciated.

        XYZ InfoX: Special hello
        http://www1.XYZInfoX.com/learninghello/home
        hello
        en
        Wed, 17 Mar 2010 08:50:06 GMT
        2010-03-17T08:50:06Z
        en
        Voice of America
        http://www1.XYZInfoX.com/learninghello
        http://media.XYZInfoX.com/designimages/XYZRSSIcon.gif

        <item>
            <title>Who Were the Deadliest Gunmen of the Wild West?</title>
            <link>http://www1.XYZInfoX.com/learninghello/home/Deadliest-Gunmen-of-the-Wild-West-87826807.html</link>
            <description>
                The story of two of them: "Killin'" Jim Miller was an outlaw, "Texas" John Slaughter was a lawman | EXPLORATIONS
            </description>
            <pubDate>Wed, 17 Mar 2010 00:38:48 GMT</pubDate>
            <guid isPermaLink="false">87826807</guid>
            <dc:creator></dc:creator>
            <dc:date>2010-03-17T00:38:48Z</dc:date>
            <media:group>
                <media:content url="http://media.XYZInfoX.com/images/archives_peace_comm_480_16mar_se.jpg" medium="image" isDefault="true" height="300" width="480" />
                <media:content url="http://media.XYZInfoX.com/images/archives_peace_comm_230_16mar_se_edited-1.jpg" medium="image" isDefault="false" height="230" width="230" />
                <media:content url="http://media.XYZInfoX.com/images/tex_trans_lawmans_230_16mar10_se.jpg" medium="image" isDefault="false" height="230" width="230" />
                <media:content url="http://www.XYZInfoX.com/MediaAssets2/learninghello/dalet/se-exp-outlaws-part2-17mar2010.Mp3" type="audio/mpeg" medium="audio" isDefault="false" />
            </media:group>
        </item>
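    In the feedparser versions I've used, the children of the media:group element are flattened into the entry's media_content attribute (a list of dicts), so something like the sketch below works; worth verifying against your feedparser version with a quick print of the entry keys:

        import feedparser

        feed = feedparser.parse("http://www1.XYZInfoX.com/learninghello/home")   # the feed URL from the question

        for entry in feed.entries:
            # media:content elements (inside media:group) typically show up here.
            for media in entry.get("media_content", []):
                print(media.get("url"), media.get("medium"), media.get("type"))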

    Read the article

  • toggling proximity sensor on iPhone loses an event

    - by slugolicious
    I'm using setProximitySensingEnabled and implemented proximityStateChanged in my UIApplication subclass. It looks like, when sensing is toggled, the first "off" event is lost. My UIApplication class is pretty basic...

        -(void)proximityStateChanged:(BOOL)state {
            NSLog(state ? @"ON" : @"OFF");
        }

    In my application delegate, I have a UISwitch that enables/disables the proximity sensor.

        -(IBAction)toggleProxy:(id)sender {
            [UIApplication sharedApplication].proximitySensingEnabled = prox.on;
        }

    "prox" is my UISwitch. The test works fine when it first starts. I tap the switch to turn it on, then put my hand over the sensor for a second, then move it away, and get:

        2009-03-11 12:43:00.465 Proximity[324:20b] ON
        2009-03-11 12:43:02.514 Proximity[324:20b] OFF
        2009-03-11 12:43:04.046 Proximity[324:20b] ON
        2009-03-11 12:43:05.621 Proximity[324:20b] OFF

    I then tap the switch to turn it off, then tap again to turn it on. Now I get:

        2009-03-11 12:43:12.005 Proximity[324:20b] ON
        2009-03-11 12:43:14.789 Proximity[324:20b] ON
        2009-03-11 12:43:16.467 Proximity[324:20b] OFF
        2009-03-11 12:43:17.516 Proximity[324:20b] ON
        2009-03-11 12:43:19.077 Proximity[324:20b] OFF

    Notice I get two ONs before an OFF. The OFF is lost somewhere. I can't replicate this behavior using Google's mobile app, so I'm wondering if they're resetting something in between proximity enabling. They don't have the proximity sensor on all the time, because if you cover the sensor, the screen doesn't go blank. You have to tilt the phone up and angle it back (to simulate the position it would be in at your ear) and then covering the sensor works. Anyone else playing with the sensor? In my particular app, I'm recording a voice message and when you move the phone away from your ear, I want to pause the recording (when I get an OFF). The first time I move the phone away from my ear, the recording is not paused. However, if I put it to my ear and move it away again, it is paused.

    Read the article

  • Beginner video capture and processing/Camera selection

    - by mattbauch
    I'll soon be undertaking a research project in real-time event recognition but have no experience with the programming aspect of video capture (I'm an upperclassman undergraduate in computer engineering). I want to start off on the right foot so advice from anyone with experience would be great. The ultimate goal is to track events such as a person standing up/sitting down, entering/leaving a room, possibly even shrugging/slumping in posture, etc. from a security camera-like vantage point. First of all, which cameras/companies would you recommend? I'm looking to spend ~$100, more if necessary but not much. Great resolution isn't a must, but is desirable if affordable. What about IP network cameras vs. a USB type webcam? Webcams are less expensive, but IP cameras seem like they'd be much less work to deal with in software. What features should I look for in the camera? Once I've selected a camera, what does converting its output to a series of RGB bitmaps entail? I've never dealt with video encoding/decoding so a starting point or a tutorial that will guide me up to this point would be great if anyone has suggestions. Finally, what is the best (least complicated/most efficient) way to display video from the camera plus my own superimposed images (boxes around events in progress, for instance) in a GUI application? I can work on any operating system in any language. I have some experience with win32 GUIs and Java GUIs. The focus of the project is on the algorithm and so I'm trying to get the video capture/display portion of the app done cleanly and quickly. Thanks for any responses!!
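    On the software side, the capture/overlay/display plumbing can be very small if the camera shows up as an ordinary webcam and you use OpenCV's Python bindings; a minimal sketch (the device index, box coordinates and the 'q' key are arbitrary choices here). Frames arrive as BGR bitmaps, so convert with cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) if your algorithm wants RGB:

        import cv2

        cap = cv2.VideoCapture(0)                  # first webcam the OS exposes
        while True:
            ok, frame = cap.read()                 # frame is a BGR bitmap (NumPy array)
            if not ok:
                break
            # Superimpose a box and label where an "event" was detected (fixed coordinates here).
            cv2.rectangle(frame, (100, 100), (300, 300), (0, 255, 0), 2)
            cv2.putText(frame, "event", (100, 90), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
            cv2.imshow("camera", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):  # quit on 'q'
                break
        cap.release()
        cv2.destroyAllWindows()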

    Read the article

  • Android - need UI help/advice

    - by Donal Rafferty
    I have been working on Android for the past couple of months, getting to know how various components work. One area where I am completely lacking in knowledge is any sort of user interface or graphical interface creation. As an exercise I have been asked to break down the HTC call screen into the components it contains and rebuild it as closely as possible. Here is a picture of the HTC call screen: From my understanding, the above UI has a custom title bar where "Meteor" and the call time appear. Then the main image in the middle block, along with a text view showing the called party, in this case "Voice Mail", and the number. The bottom is then a custom view, maybe with three custom buttons used within it. Would I be correct in my above assumptions? So the parts I should start programming are a custom title bar and a custom view with three custom buttons to place at the bottom? What layout would be recommended? I hope this question is seen as relevant to Stack Overflow; if it is not, then I will delete it. Thanks in advance

    Read the article

  • Parsec: backtracking not working

    - by Nathan Sanders
    I am trying to parse F# type syntax. I started writing an [F]Parsec grammar and ran into problems, so I simplified the grammar down to this:

        type ::= identifier | type -> type
        identifier ::= [A-Za-z0-9.`]+

    After running into problems with FParsec, I switched to Parsec, since I have a full chapter of a book dedicated to explaining it. My code for this grammar is:

        typeP = choice [identP, arrowP]

        identP = do
          id <- many1 (digit <|> letter <|> char '.' <|> char '`')
          -- more complicated code here later
          return id

        arrowP = do
          domain <- typeP
          string "->"
          range <- typeP
          return $ "("++domain++" -> "++range++")"

        run = parse (do t <- typeP
                        eof
                        return t)
                    "F# type syntax"

    The problem is that Parsec doesn't backtrack by default, so

        > run "int"
        Right "int"                                                -- works!
        > run "int->int"
        Left "F# type syntax"
        unexpected "-"
        expecting digit, letter, ".", "`" or end of input          -- doesn't work!

    The first thing I tried was to reorder typeP:

        typeP = choice [arrowP, identP]

    But this just stack overflows because the grammar is left-recursive: typeP never gets to trying identP because it keeps trying arrowP over and over. Next I tried try in various places, for example:

        typeP = choice [try identP, arrowP]

    But nothing I do seems to change the basic behaviours of (1) stack overflow or (2) non-recognition of "-" following an identifier. My mistake is probably obvious to anybody who has successfully written a Parsec grammar. Can somebody point it out?

    Read the article

  • Can a single developer still make money with shareware?

    - by Wouter van Nifterick
    I'm wondering if the shareware concept is dead nowadays. Like most developers, I've built up quite a collection of self-made tools and code libraries that help me to be productive. Some examples to give you an idea of the type of thing I'm talking about:
    - A self-learning program that renames and orders all my mp3 files and adds information to the id3 tags;
    - A Delphi component that wraps the Google Maps API;
    - A text-to-singing-voice converter for musical purposes;
    - A program to control a music synthesizer;
    - A Gps-log <- KML <- ESRI-shapefile converter;
    I've got one of these already freely downloadable on my website, and on average it gets downloaded about 150 times per month. Let's say I'd start charging 15 euros for it; would there actually be people who buy it? How many? What would it depend on? If I could get some money for some of these, I'd finish them up a bit and put them online, but without that, I probably won't bother. Maintaining a SourceForge project is not very rewarding by itself. Is there anyone who is making money with shareware? How much? Any tips?

    Read the article

  • How to better (unambiguously) use the terms CAPTCHA and various types of interactions?

    - by vgv8
    I am working on a survey of the state of the art and trends in spam prevention techniques. I observe that non-intrusive spam prevention techniques that are transparent to the visitor (like context-based filtering or honey traps) are frequently called non-captcha. Is that consistent with the definition of CAPTCHA as a "type of challenge-response [ 2 ] test used in computing to ensure that the response is not generated by a computer" [ 1 ]? Challenge-response does not seem to imply obligatory human involvement. So, which understanding (definition) of the term, and which classification, should I stick with? What should I call a CAPTCHA without direct human interaction, in order to avoid ambiguity and confusion of terms? How would I best (succinctly and unambiguously) coin a term for captchas requiring human interaction but without typing into a textbox? How would I best (succinctly and unambiguously) coin terms to mark the difference between human interaction with images (playing with, drag-and-dropping, rearranging, clicking on images) vs. just recognizing them (and then typing the answer into a textbox without interacting with the images)? PS. The problem is that recognizing a wiggled word in an image or typing the answer to a question is also interaction, and when I start to use the terms "interaction", "interactive", "captcha", "protection", "non-captcha", "non-interactive", "static", "dynamic", "visible", "hidden", the terms overlap ambiguously with one another (especially because the definitions, or the actual practice of usage, are vague or contradictory). [ 1 ] http://en.wikipedia.org/wiki/CAPTCHA

    Read the article

  • detecting pauses in a spoken word audio file using pymad, pcm, vad, etc

    - by james
    First I am going to broadly state what I'm trying to do and ask for advice. Then I will explain my current approach and ask for answers to my current problems.
    Problem: I have an MP3 file of a person speaking. I'd like to split it up into segments roughly corresponding to a sentence or phrase. (I'd do it manually, but we are talking hours of data.) If you have advice on how to do this programmatically, or pointers to some existing utilities, I'd love to hear it. (I'm aware of voice activity detection and I've looked into it a bit, but I didn't see any freely available utilities.)
    Current approach: I thought the simplest thing would be to scan the MP3 at certain intervals and identify places where the average volume was below some threshold. Then I would use some existing utility to cut up the mp3 at those locations. I've been playing around with pymad and I believe that I've successfully extracted the PCM (pulse code modulation) data for each frame of the mp3. Now I am stuck because I can't really seem to wrap my head around how the PCM data translates to relative volume. I'm also aware of other complicating factors like multiple channels, big endian vs. little, etc. Advice on how to map a group of PCM samples to relative volume would be key. Thanks!
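    Once the PCM is out of pymad, the volume question is usually answered with a windowed RMS (root mean square): samples are signed 16-bit amplitudes, so square them, average over a short window, and compare against a threshold. A rough Python sketch, assuming the frames have already been combined into one flat array of signed 16-bit mono samples (interleaved stereo would need de-interleaving or averaging first); the window, threshold and minimum pause length are all tuning knobs:

        import numpy as np

        def silent_spans(samples, rate=44100, window_s=0.05, threshold=0.02, min_pause_s=0.4):
            """samples: signed 16-bit mono PCM. Returns (start_sec, end_sec) spans of near-silence."""
            x = np.asarray(samples, dtype=np.float64) / 32768.0
            win = int(rate * window_s)
            n_windows = len(x) // win
            rms = np.sqrt((x[:n_windows * win].reshape(n_windows, win) ** 2).mean(axis=1))
            spans, start = [], None
            for i, level in enumerate(rms):
                if level < threshold and start is None:
                    start = i                      # silence begins
                elif level >= threshold and start is not None:
                    if (i - start) * window_s >= min_pause_s:
                        spans.append((start * window_s, i * window_s))
                    start = None                   # silence ended
            return spans

    The resulting timestamps can then be handed to whatever tool does the actual cutting (mp3splt, for example, accepts split points on the command line).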

    Read the article

  • osCommerce custom PHP page

    - by Afrosimon
    Hello! One of my clients has an old osCommerce website, and while working on it I have to implement what I would call a "custom PHP page", i.e. a page which queries a MySQL table not related to osCommerce and lists the results. I'm not sure of the version; this trick, which I have seen mentioned a lot, didn't give me any result: http://www.clubosc.com/how-to-know-what-version-of-oscommerce-you-are-using.html . And I'm having a hard time doing this seemingly simple task, since osCommerce doesn't allow any PHP code in the page creation, and I didn't find any module giving me this possibility (not that it is easy to search in this mess: http://addons.oscommerce.com/). At this point I figured it would be easier to just hack and slash through the code and come up with a custom page, so I copied the index.php (the entry point into the application):

        <?php
        require('includes/application_top.php');

        if(!$smarty->is_cached($sContentPage, $sCachingGroup)) {
            //we switch on the content recognition
            require('includes/pages/' . $sContentClass . '.php');
        }

        $smarty->display($sContentPage, $sCachingGroup);

        require(DIR_WS_INCLUDES . 'application_bottom.php');
        ?>

    Here I gave a specific value to $sContentClass (with or without the if makes no difference) and customized the corresponding PHP file so it shows my custom content but also initializes the same variables as the other PHP files in the pages/ folder. But alas, all of this curious and dubious code simply returns the home page. So here I am: is there an osCommerce guru around here, or does anyone have a better idea? (Oh, and I also posted on the osCommerce forum, but I'm still waiting for a response...) Thanks a lot in advance.

    Read the article

  • Android - sendOrderedBroadcast help

    - by Donal Rafferty
    I am trying to use a sendOrderedBroadcast in my Android app. I want to be able to send the Intent from one of my applications to another, and I then want to get data back from the application that receives the Intent, in this case a boolean true or false. Here is the current code:

        Intent i = new Intent();
        i.setAction(GlobalData.PROPOSE_IN_DOMAIN_ROAM_INTENT);
        i.putExtra("com.testnetworks.QCLEVEL", aProposedTheoreticalQoSLevel);
        sendOrderedBroadcast(i, null, null, null, Activity.RESULT_OK, null, null);

    Is this the correct way to achieve what I want? If so, what do I use as the resultReceiver* parameter (the 3rd parameter)? And then how do I receive data back from the broadcast? I have done a quick Google search and not come up with any examples; any help or examples greatly appreciated.

    UPDATED CODE:

        sendOrderedBroadcast(i, null, domainBroadcast, null, Activity.RESULT_OK, null, null);

        class DomainBroadcast extends BroadcastReceiver {
            @Override
            public void onReceive(Context arg0, Intent intent) {
                String action = intent.getAction();
                if(GlobalData.PROPOSE_IN_DOMAIN_ROAM_INTENT.equals(action)){
                    Log.d("BROADCAST", "Returning broadcast");
                    Bundle b = intent.getExtras();
                    Log.d("BROADCAST", "Returning broadcast " + b.getInt("com.testnetworks.INT_TEST"));
                }
            }
        }

    And in the receiving application:

        @Override
        public void onReceive(Context context, Intent intent) {
            String action = intent.getAction();
            if(GlobalData.PROPOSE_IN_DOMAIN_ROAM_INTENT.equals(action)){
                Bundle b = intent.getExtras();
                int testQCLevel = b.getInt("com.testnetworks.QCLEVEL");
                switch(testQCLevel){
                    case 1: Log.d("QCLevel ", "QCLevel = UNAVAILABLE"); break;
                    case 2: Log.d("QCLevel ", "QCLevel = BELOWUSABILITY"); break;
                    case 3: Log.d("QCLevel ", "QCLevel = VOICE"); break;
                }
                intent.putExtra("com.testnetworks.INT_TEST", 100);
            }
        }

    So according to the docs I should receive 100 back in my DomainBroadcast receiver, but it always comes back as 0. Can anyone see why?
    *resultReceiver - Your own BroadcastReceiver to treat as the final receiver of the broadcast.

    Read the article

  • Fast, very lightweight algorithm for camera motion detection?

    - by Ertebolle
    I'm working on an augmented reality app for iPhone that involves a very processor-intensive object recognition algorithm (pushing the CPU at 100% it can get through maybe 5 frames per second), and in an effort to both save battery power and make the whole thing less "jittery" I'm trying to come up with a way to only run that object recognizer when the user is actually moving the camera around. My first thought was to simply use the iPhone's accelerometers / gyroscope, but in testing I found that very often people would move the iPhone at a consistent enough attitude and velocity that there wouldn't be any way to tell that it was still in motion. So that left the option of analyzing the actual video feed and detecting movement in that. I got OpenCV working and tried running their pyramidal Lucas-Kanade optical flow algorithm, which works well but seems to be almost as processor-intensive as my object recognizer - I can get it to an acceptable framerate if I lower the depth levels / downsample the image / track fewer points, but then accuracy suffers and it starts to miss some large movements and trigger on small hand-shaking-y ones. So my question is, is there another optical flow algorithm that's faster than Lucas-Kanade if I just want to detect the overall magnitude of camera movement? I don't need to track individual objects, I don't even need to know which direction the camera is moving, all I really need is a way to feed something two frames of video and have it tell me how far apart they are.
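    One very cheap alternative, if direction and per-object tracking really don't matter: shrink consecutive grayscale frames to tiny thumbnails and look at their mean absolute difference. It reacts to global illumination changes as well as motion, so the threshold is a tuning knob. A sketch using OpenCV's Python bindings purely to illustrate the idea (the question itself is about the iPhone; the device index and threshold are arbitrary):

        import cv2

        def moved(prev_gray, curr_gray, size=(32, 32), threshold=12.0):
            """Crude global-motion test: mean absolute difference of tiny grayscale thumbnails."""
            a = cv2.resize(prev_gray, size, interpolation=cv2.INTER_AREA)
            b = cv2.resize(curr_gray, size, interpolation=cv2.INTER_AREA)
            return float(cv2.absdiff(a, b).mean()) > threshold

        cap = cv2.VideoCapture(0)
        ok, frame = cap.read()
        if not ok:
            raise RuntimeError("no camera frame")
        prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            curr = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            if moved(prev, curr):
                print("camera is moving: run the object recognizer on this frame")
            prev = curr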

    Read the article

  • Could you please provide me with comments on a Java game of mine?

    - by Peter Perhác
    Hello there. I have marked this question as community wiki, so no rep points are thrown around. I made this game, Forest Defender, as a proof-of-feasibility little project, which I would like to share with you and collect your constructive comments, first impressions, etc. It is the first playable (and enjoyable) game I have released to the public, so I am, naturally, very eager to get some recognition from you, as my peers. I read in a Stack Overflow blog post that "One of the major reasons we created Stack Overflow [is] to give every programmer a chance to be recognized by their peers. Recognized for their knowledge, their passion, [...]" It comes in the form of a Java applet. I used an animation framework called PulpCore and I must say that it's been extremely enjoyable to work with; I do recommend it to people interested in Java game development. Since the product is free, fun, entirely commercial-free and I am willing to share the code to it (on request), I thought it would be OK to post this as a topic here. Moderators, please feel free to move this to another place if you deem the other place more appropriate. EDIT: I am ever so stupid; I forgot to include a link :-) http://www.perhac.com/shared/forest-defender/index.html

    Read the article

  • I've made something that might be useful to the community. Now what?

    - by Chris McCall
    If the specifics are important, I made a cruisecontrol.net publisher plugin that notifies a series of phone numbers via voice, announcing the current state of the build. It uses Twilio to do so. I'd like to avoid getting hung up on the specifics of what it is I've made, as I have this question a lot, with a number of little hobby one-offs. What's the state of the art as far as making my hobby output available to the world at large? There seem to be a lot of options for open-source project hosting, community features, and what role to take in all of this. It's a little bewildering. What I'm looking for is to put this out into the wild for free and basically take a hands-off approach from there. Is that realistic? Which project hosting service can I use for free to allow developers to at least download the code, report issues and collaborate with each other to improve the product? What snags have you run into that could make me regret this decision? I'm interested in war stories, advice and guidance on making this little product available to the community where it can be used.

    Read the article
