Search Results

Search found 7955 results on 319 pages for 'signal processing'.

Page 22/319 | < Previous Page | 18 19 20 21 22 23 24 25 26 27 28 29  | Next Page >

  • Bad HD video deinterlacing processing

    - by Guy Fawkes
    I have Ubuntu 12.04 32-bit with Unity. My system configuration is: CPU: Core 2 Quad Q6600 (2.4 GHz), RAM: 8192 MB DDR2 Kingston, Video: Palit GeForce GTX 260 216 SP, and my screen resolution is 1680x1050. I also have Windows 7 Ultimate installed, and there I can watch the same files in Media Player Classic without any horizontal lines. I've installed the VDPAU driver, NVIDIA drivers 304.51, and MPlayer 2 (within SMPlayer). I've disabled the "Sync to VBlank" option in CCSM (otherwise the MPlayer process uses about 50-60 percent of my CPU by default), tried switching between the different deinterlace options in SMPlayer, passed "-vc ffh264vdpau,ffmpeg12vdpau" (without quotes) as parameters to MPlayer, and switched to "Ubuntu 2D", but none of it made a difference. Any suggestions? How should I set up MPlayer?
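
    One thing worth trying (a hedged suggestion, not a confirmed fix): when decoding with -vc ffh264vdpau the software deinterlacers that SMPlayer offers are bypassed, so deinterlacing has to be requested from the vdpau video output itself via its deint suboption, for example:

    ```bash
    # deint accepts 0-4; higher values select better (and more GPU-expensive) deinterlacers.
    # "video.ts" is a placeholder for the interlaced HD file.
    mplayer -vo vdpau:deint=4 -vc ffh264vdpau,ffmpeg12vdpau video.ts
    ```

    In SMPlayer the same suboptions can usually be appended to the vdpau entry in the video output driver setting.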

    Read the article

  • Software that can identify and remove foreground objects from image

    - by Antonio2011a
    I am interested in removing extraneous objects from an image. More specifically, the situation is that there is a series of images of a particular background. In each image there are objects in the foreground, but these objects differ in location across the series of images. Note that some objects are always in the foreground. The background is static. An example might be a busy tourist spot where you want to remove the people or tourist buses from the image. I'd like the software to take the series of images and reconstruct the background as much as possible. Is there software available with this capability? If so, what are the steps necessary to use that functionality? Similarly, if anyone knows a lot about image processing, are there any image processing algorithms that could handle this? Thanks.
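
    The classic algorithm for this is per-pixel median stacking: for each pixel position, take the median value across all shots, so the static background wins and transient foreground objects are voted out. A minimal sketch, assuming NumPy and Pillow are available and the images are the same size and roughly aligned (the folder path is a placeholder). Objects present in more than half of the shots will survive, which matches the caveat above about objects that are always in the foreground.

    ```python
    # Median-stack a series of aligned shots of the same scene.
    import glob
    import numpy as np
    from PIL import Image

    paths = sorted(glob.glob("shots/*.jpg"))                       # placeholder folder
    frames = [np.asarray(Image.open(p).convert("RGB"), dtype=np.uint8) for p in paths]
    stack = np.stack(frames, axis=0)                               # shape: (N, height, width, 3)
    background = np.median(stack, axis=0).astype(np.uint8)         # per-pixel median over the series
    Image.fromarray(background).save("background.png")
    ```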

    Read the article

  • Prevent Linux from processing incoming ICMP Host unreachable packets

    - by bbc
    I have a test setup with one host on a network (10.1.0.0/16) talking via TCP to another one on another network (10.2.0.0/16) and a gateway in the middle. Sometimes the TCP connection is lost, and when scanning the trace (pcap) it looks like it's because of just one ICMP Host Unreachable message sent by the gateway to 10.1.0.1 at some point. 10.1.0.1 then sends a TCP RST to 10.2.0.1. In my opinion the gateway (pfSense) is broken or not configured correctly, but anyway, for testing purposes, I'd like to block this kind of ICMP on the host (10.1.0.1) before it has an influence on my TCP connection (or does it? I'm not even sure). I've tried iptables: iptables -I INPUT -i eth0 -p icmp --icmp-type host-unreachable -j DROP but while it does a good job of preventing userspace applications like ping from receiving these ICMP messages, my TCP connection still comes to an end when the alleged "killer ICMP packet" is sent by the gateway. Am I right about how it is processed? If yes, then what can I do to achieve my goal?

    Read the article

  • Receiving and processing SMS messages through a script?

    - by ShankarG
    I am attempting to set up a system to receive and process SMS messages automatically. The system is intended for use in a context (an unfunded migrant workers' union in India) where both finances and sysadmin skills are extremely constrained (I would be the only person, in the near future, who would be administering the system). The intention is to make some functions - registration of members, generation of ID cards, communication of alerts and other information - easier. However, for receiving and sending SMS, I have not been able to find any email-to-SMS or other kind of gateway that functions in India. Perhaps there is one (edit: apparently Clickatell does have an India service, but the prices appear astronomical). If not, can one rely on a USB mobile modem (such as those provided by many mobile providers in India)? It seems like, with utilities such as gammu or bitpim, SMS operations on such a modem could be scripted. Is this actually feasible, though? Thanks in advance for your thoughts and suggestions. edit: Original first question removed since the two questions had little to do with each other. The original first question has been asked separately here
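
    Scripting a USB modem with Gammu is feasible in practice; the usual route is gammu-smsd, the daemon that ships with Gammu, which can run a script on every incoming message. A hedged sketch of a minimal configuration (the device path and script path are placeholders and depend on the modem):

    ```ini
    # /etc/gammu-smsdrc -- minimal sketch for an AT-compatible USB modem
    [gammu]
    device = /dev/ttyUSB0
    connection = at

    [smsd]
    service = files
    inboxpath = /var/spool/gammu/inbox/
    outboxpath = /var/spool/gammu/outbox/
    sentsmspath = /var/spool/gammu/sent/
    errorsmspath = /var/spool/gammu/error/
    RunOnReceive = /usr/local/bin/handle_sms.sh
    ```

    gammu-smsd hands each received message to the RunOnReceive script (sender number and text arrive via environment variables), so registration and alert logic can live in an ordinary shell or Python script; outgoing messages are sent by dropping files into the outbox or via gammu-smsd-inject.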

    Read the article

  • ec2 spot instance for daily processing task

    - by chaft
    I don't have much experience as a sysadmin or with Amazon AWS, so I hope someone can explain in simple terms or refer me to a good guide on how to achieve the below. I have a system running on EC2 and Amazon RDS, getting data in and saving it to the DB. I need to run a script once a day (at the end of the day) to process all that data and prepare a daily report. This process takes approximately an hour to run and needs to run on a high-memory instance. From what I've read so far, I guess the best way to do it is to have a high-memory spot instance run every day, set up to execute the script on startup and shut down when done. Is that the right way to do it? If so, how do I do it? How do I tell the spot instance to run every day: through a cron job on the other server, or is there a better way? How do I set it up to run the script on startup: through cloud-init? Any help would be appreciated. One last thing: the job is not very time sensitive as long as it runs every day. Thanks
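
    The cron-plus-user-data approach works. A hedged sketch (modern boto3 names; the AMI, instance type, price and script path are placeholders) of what an ordinary crontab entry on the always-on server could run each evening: the request is one-time, the user data runs the report and then powers the instance off, so nothing lingers.

    ```python
    # Request a one-time spot instance whose user data runs the report and shuts down.
    import base64
    import boto3

    user_data = """#!/bin/bash
    /opt/reports/run_daily_report.sh      # placeholder for the actual report script
    shutdown -h now                       # spot instances are terminated on shutdown
    """

    ec2 = boto3.client("ec2", region_name="us-east-1")
    ec2.request_spot_instances(
        SpotPrice="0.50",                                  # bid ceiling, adjust to the instance type
        InstanceCount=1,
        Type="one-time",
        LaunchSpecification={
            "ImageId": "ami-0123456789abcdef0",            # placeholder AMI with the script baked in
            "InstanceType": "r5.large",                    # any high-memory type
            "UserData": base64.b64encode(user_data.encode()).decode(),
        },
    )
    ```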

    Read the article

  • Python halts while iteratively processing my 1GB csv file

    - by Dan
    I have two files: metadata.csv contains an ID, followed by a vendor name, a filename, etc.; hashes.csv contains an ID, followed by a hash. The ID is essentially a foreign key of sorts, relating file metadata to its hash. I wrote this script to quickly extract all hashes associated with a particular vendor. It craps out before it finishes processing hashes.csv:

        stored_ids = []

        # this file is about 1 MB
        entries = csv.reader(open(options.entries, "rb"))
        for row in entries:
            # row[2] is the vendor
            if row[2] == options.vendor:
                # row[0] is the ID
                stored_ids.append(row[0])

        # this file is 1 GB
        hashes = open(options.hashes, "rb")

        # I iteratively read the file here,
        # just in case the csv module doesn't do this.
        for line in hashes:
            # not sure if stored_ids contains strings or ints here...
            # this probably isn't the problem though
            if line.split(",")[0] in stored_ids:
                # if it's one of the IDs we're looking for, print the file and hash to STDOUT
                print "%s,%s" % (line.split(",")[2], line.split(",")[4])

        hashes.close()

    This script gets about 2000 entries into hashes.csv before it halts. What am I doing wrong? I thought I was processing it line by line. P.S. the csv files are in the popular HashKeeper format and the files I am parsing are the NSRL hash sets. http://www.nsrl.nist.gov/Downloads.htm#converter UPDATE: working solution below. Thanks to everyone who commented!

        entries = csv.reader(open(options.entries, "rb"))
        stored_ids = dict((row[0], 1) for row in entries if row[2] == options.vendor)

        hashes = csv.reader(open(options.hashes, "rb"))
        matches = dict((row[2], row[4]) for row in hashes if row[0] in stored_ids)

        for k, v in matches.iteritems():
            print "%s,%s" % (k, v)
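
    For what it's worth, the same fix (a hashable membership structure instead of scanning a list) also works without holding all of the matches in memory, which matters for the 1 GB file. A small sketch of a streaming variant of the posted solution (Python 3 syntax, still keyed on the same columns):

    ```python
    import csv

    with open(options.entries, newline="") as f:
        stored_ids = {row[0] for row in csv.reader(f) if row[2] == options.vendor}

    # Stream the big file and print matches as they are found; nothing accumulates.
    with open(options.hashes, newline="") as f:
        for row in csv.reader(f):
            if row[0] in stored_ids:
                print("%s,%s" % (row[2], row[4]))
    ```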

    Read the article

  • PocketPC c++ windows message processing recursion problem

    - by user197350
    Hello, I am having a problem in a large-scale application that seems related to Windows messaging on the Pocket PC. What I have is a PocketPC application written in C++. It has only one standard message loop. while (GetMessage (&msg, NULL, 0, 0)) { { TranslateMessage (&msg); DispatchMessage (&msg); } } We also have standard dlgProcs. In the switch of the dlgProc, we call a proprietary 3rd-party API. This API uses a socket connection to communicate with another process. The problem I am seeing is this: whenever two of the same messages come in quickly (from the user tapping the screen twice too fast, which they shouldn't be doing) it seems as though recursion is created. Windows begins processing the first message, gets the API into a thread-safe state, and then jumps to process the next (identical UI) message. Since the second message also makes the API call, the call fails because it is locked. Because of the design of this legacy system, the API will stay locked until the recursion comes back out (which also is triggered by the user, so it could be locked the entire working day). I am struggling to figure out exactly why this is happening and what I can do about it. Is this because Windows recognizes that the socket communication will take time and preempts it? Is there a way I can force this API call to complete before preemption? Is there a way I can slow down the message processing or re-queue the message to ensure the first will execute (capturing it and doing a PostMessage back to itself didn't work)? We don't want to lock the UI down while the first call completes. Any insight is greatly appreciated! Thanks!!
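
    One common way to verify (and usually cure) this, sketched below and not taken from the poster's code: treat it as re-entrancy rather than preemption. If the third-party call pumps messages internally while it waits on the socket, the second tap's WM_COMMAND is dispatched while the first call is still on the stack, so guarding the handler with a busy flag (and/or disabling the control) keeps the duplicate out. The control ID and the wrapper name are placeholders.

    ```cpp
    // Inside the dialog procedure: ignore duplicate taps while the API call is in flight.
    case WM_COMMAND:
        if (LOWORD(wParam) == IDC_DO_CALL)          // IDC_DO_CALL: placeholder control id
        {
            static bool s_callInProgress = false;   // guard against re-entry
            if (s_callInProgress)
                break;                              // a second, re-entrant tap: drop it

            s_callInProgress = true;
            EnableWindow(GetDlgItem(hDlg, IDC_DO_CALL), FALSE);  // optional visual feedback

            CallThirdPartyApi();                    // placeholder wrapper around the locked API

            EnableWindow(GetDlgItem(hDlg, IDC_DO_CALL), TRUE);
            s_callInProgress = false;
        }
        break;
    ```

    If the duplicate taps must still be honoured later, remembering that one arrived and replaying it after the flag clears is safer than letting the dialog proc re-enter.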

    Read the article

  • How to display custom processing message in JQuery datatables

    - by Sukhi
    I am using the DataTables API to display data in my ASP.NET 4.0 application. I have one column [ Delete ] to delete the row data. When I click on this link I send a jQuery AJAX request to delete the row from the database. I want to display a message such as [ Deleting record... ] to the end user until the data has been deleted by the server-side processing. I put a div on my page and write the message [ Deleting record... ] into it when I click on the delete link, but when the delete operation completes the table also displays the message [ Processing... ] (which is the built-in DataTables message), which looks odd as two messages are displayed. What can I do better to display a message to the end user? JS code:

        $('#tblVideoList .delete').live('click', function (e) {
            e.preventDefault();
            var oTable = $('#tblVideoList').dataTable();
            var aPos = oTable.fnGetPosition(this.parentNode);
            var aData = oTable.fnGetData(aPos[0]);
            if (confirm('Are you sure want to delete the record.')) {
                $("#divDelete").show();
                var today = new Date();
                $.ajax({
                    type: "GET",
                    cache: false,
                    url: "samplepage.aspx",
                    success: function (msg) {
                        $("#divDelete").hide();
                        oTable.fnDraw();
                    }
                });
            }
            return false;
        });

    Thanks
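
    If the goal is a single message, one option (a sketch against the legacy 1.x DataTables API that the code above uses; not verified against the exact version) is to reuse the built-in overlay instead of a separate div and simply change its text, either at initialisation or just before the delete:

    ```javascript
    // Set the processing text once, when the table is created...
    var oTable = $('#tblVideoList').dataTable({
        "bProcessing": true,
        "oLanguage": { "sProcessing": "Deleting record..." }
    });

    // ...or swap it at runtime right before firing the AJAX delete.
    oTable.fnSettings().oLanguage.sProcessing = "Deleting record...";
    ```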

    Read the article

  • Use continue or Checked Exceptions when checking and processing objects

    - by Johan Pelgrim
    I'm processing, let's say, a list of "Document" objects. Before I record the processing of a document as successful, I first want to check a couple of things. Let's say the file the document refers to should be present, and something in the document should be present. Just two simple checks for the example, but think of 8 more checks before I have successfully processed my document. Which would you prefer?

        for (Document document : documents) {
            if (!fileIsPresent(document)) {
                doSomethingWithThisResult("File is not present");
                continue;
            }
            if (!isSomethingInTheDocumentPresent(document)) {
                doSomethingWithThisResult("Something is not in the document");
                continue;
            }
            doSomethingWithTheSucces();
        }

    Or

        for (Document document : documents) {
            try {
                fileIsPresent(document);
                isSomethingInTheDocumentPresent(document);
                doSomethingWithTheSucces();
            } catch (ProcessingException e) {
                doSomethingWithTheExceptionalCase(e.getMessage());
            }
        }

        public boolean fileIsPresent(Document document) throws ProcessingException {
            ...
            throw new ProcessingException("File is not present");
        }

        public boolean isSomethingInTheDocumentPresent(Document document) throws ProcessingException {
            ...
            throw new ProcessingException("Something is not in the document");
        }

    Which is more readable? Which is best? Is there an even better approach (maybe using a design pattern of some sort)? As far as readability goes, my preference is currently the exception variant... What is yours?
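
    A third option worth mentioning (a sketch, not from the original post; it assumes Java 8+ and reuses the boolean-returning check methods from the first variant): model each check as a small rule, so the ten-odd checks become data and the loop body stays flat.

    ```java
    import java.util.Arrays;
    import java.util.List;
    import java.util.Objects;

    interface DocumentCheck {
        // null means the check passed; otherwise a human-readable failure reason
        String apply(Document document);
    }

    List<DocumentCheck> checks = Arrays.asList(
        doc -> fileIsPresent(doc) ? null : "File is not present",
        doc -> isSomethingInTheDocumentPresent(doc) ? null : "Something is not in the document"
        // ...the other eight checks go here, one line each
    );

    for (Document document : documents) {
        String failure = checks.stream()
                               .map(check -> check.apply(document))
                               .filter(Objects::nonNull)
                               .findFirst()          // stops at the first failing check
                               .orElse(null);
        if (failure == null) {
            doSomethingWithTheSucces();
        } else {
            doSomethingWithThisResult(failure);
        }
    }
    ```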

    Read the article

  • .net real time stream processing - needed huge and fast RAM buffer

    - by mack369
    The application I'm developing communicates with a digital audio device which is capable of sending 24 different voice streams at the same time. The device is connected via USB, using an FTDI device (serial port emulator) and the D2XX drivers (the basic COM driver is too slow to handle a 4.5 Mbit transfer). Basically the application consists of 3 threads: the main thread (GUI, control, etc.); the bus reader, in which data is continuously read from the device and saved to a file buffer (there is no logic in this thread); and the data interpreter, which reads the data from the file buffer, converts it to samples, does simple sample processing and saves the samples to separate WAV files. The reason why I used a file buffer is that I wanted to be sure that I won't lose any samples. The application doesn't record all the time, so I chose this solution because it was safe. The application works fine, except that the buffered wave file generator is pretty slow. For 24 parallel records of 1 minute, it takes about 4 minutes to complete the recording. I'm pretty sure that eliminating the use of the hard drive in this process would speed it up considerably. The second problem is that the file buffer gets really heavy for long records and I can't clean it up until the end of data processing (that would slow down the process even more). For a RAM buffer I need at least 1 GB to make it work properly. What is the best way to allocate such a big amount of memory in .NET? I'm going to use this memory in 2 threads, so a fast synchronization mechanism is needed. I'm thinking about a circular buffer: one big array, the bus reader saves the data, the data interpreter reads it. What do you think about it? [edit] For buffering I'm currently using the BinaryReader and BinaryWriter classes backed by a file.
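
    One hedged sketch of the RAM-buffer idea (names are placeholders, not the poster's code): instead of one contiguous 1 GB array, pre-allocate a pool of fixed-size chunks and pass them between the two threads with BlockingCollection. The reader never allocates on the hot path, the interpreter never touches the disk, and the blocking collections provide the synchronization.

    ```csharp
    using System.Collections.Concurrent;

    const int ChunkSize  = 64 * 1024;          // 64 KB per chunk
    const int ChunkCount = 16 * 1024;          // 16384 chunks, about 1 GB in total

    var freeChunks   = new BlockingCollection<byte[]>(ChunkCount);
    var filledChunks = new BlockingCollection<byte[]>(ChunkCount);
    for (int i = 0; i < ChunkCount; i++)
        freeChunks.Add(new byte[ChunkSize]);   // many small arrays, no single huge allocation

    // Bus reader thread:
    //     byte[] chunk = freeChunks.Take();              // blocks if the interpreter falls behind
    //     int n = device.Read(chunk, 0, chunk.Length);   // 'device' stands in for the D2XX source
    //     filledChunks.Add(chunk);
    //
    // Data interpreter thread:
    //     byte[] chunk = filledChunks.Take();
    //     /* convert to samples, write the wav files */
    //     freeChunks.Add(chunk);                         // recycle the buffer
    ```

    The chunk size and count are a trade-off between latency and overhead; the point of the design is that memory is allocated once, up front, and simply recycled.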

    Read the article

  • Disable MSBuild output of "Processing /ORDER options..."

    - by Jippers
    The output file from our project build has gone from 6MB to over 75MB in text. Diff'ing the last good build and the first time it blew up, there's a section in the output file like this in the latest: Processing /ORDER options External code objects not listed in the /ORDER file: ?onCallDisconnected@CallStateConnected@CallImpl@space@@UAEXV?$shared_ptr@VCallImpl@space@@@boost@@V?$shared_ptr@VGenericCall@space@@@5@K@Z ; framework.lib(CallStates.obj) ??_DBoolSetting@space@@QAEXXZ ; framework.lib(SettingValueImpl.obj) ...... continues for ~50MB ??$?0U?$pair@$$CBV?$basic_string@_WU?$char_traits@_W@std@@V?$allocator@_W@2@@std@@J@std@@@?$allocator@U_Node@?$_Tree_nod@V?$_Tmap_traits@V?$basic_string@_WU?$char_traits@_W@std@@V?$allocator@_W@2@@std@@JU?$less@V?$basic_string@_WU?$char_traits@_W@std@@V?$allocator@_W@2@@std@@@2@V?$allocator@U?$pair@$$CBV?$basic_string@_WU?$char_traits@_W@std@@V?$allocator@_W@2@@std@@J@std@@@2@$0A@@std@@@std@@@std@@QAE@ABV?$allocator@U?$pair@$$CBV?$basic_string@_WU?$char_traits@_W@std@@V?$allocator@_W@2@@std@@J@std@@@1@@Z ; CallStatistics.obj Finished processing /ORDER options I'm not sure how this got in there, but anyone know how to turn it off?

    Read the article

  • Batch processing JDBC

    - by Wai Hein
    I am practicing JDBC batch processing and I am getting these errors: error 1: Unsupported feature; error 2: Execute cannot be empty or null. The property file includes:

        itemsdao.updateBookName = Update Books set bookname = ? where books.id = ?
        itemsdao.updateAuthorName = Update books set authorname = ? where books.id = ?

    I know I can execute both DML statements in one update, but I am practicing batch processing in JDBC. Below is my method:

        public void update(Item item) {
            String query = null;
            try {
                connection = DbConnector.getConnection();
                property = SqlPropertiesLoader.getProperties("dml.properties");
                connection.setAutoCommit(false);
                if (property == null) {
                    Logging.log.debug("dml.properties does not exist. Check property loader or file name is spelled right");
                    return;
                }
                query = property.getProperty("itemsdao.updateBookName");
                statement = connection.prepareStatement(query);
                statement.setString(1, item.getBookName());
                statement.setInt(2, item.getId());
                statement.addBatch(query);

                query = property.getProperty("itemsdao.updateAuthorName");
                statement = connection.prepareStatement(query);
                statement.setString(1, item.getAuthorName());
                statement.setInt(2, item.getId());
                statement.addBatch(query);

                statement.executeBatch();
                connection.commit();
            } catch (ClassNotFoundException e) {
                Logging.log.error("Connection class does not exist", e);
            } catch (SQLException e) {
                Logging.log.error("Violating PK constraint", e);
            }
            // helper class th
            finally {
                DbUtil.close(connection);
                DbUtil.closePreparedStatement(statement);
            }
        }
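
    The "Unsupported feature" error most likely comes from the addBatch(query) calls: that String-taking overload belongs to plain Statement, and many drivers reject it on a PreparedStatement. A hedged sketch of the usual shape, binding the parameters and calling the no-argument addBatch(), with one PreparedStatement per distinct SQL string:

    ```java
    connection.setAutoCommit(false);

    PreparedStatement nameStmt = connection.prepareStatement(
            property.getProperty("itemsdao.updateBookName"));
    nameStmt.setString(1, item.getBookName());
    nameStmt.setInt(2, item.getId());
    nameStmt.addBatch();                 // no SQL argument on a PreparedStatement

    PreparedStatement authorStmt = connection.prepareStatement(
            property.getProperty("itemsdao.updateAuthorName"));
    authorStmt.setString(1, item.getAuthorName());
    authorStmt.setInt(2, item.getId());
    authorStmt.addBatch();

    nameStmt.executeBatch();             // one executeBatch() per prepared statement
    authorStmt.executeBatch();
    connection.commit();
    ```

    Batching really pays off when many parameter sets are added to the same statement (e.g. looping over a list of items and calling addBatch() per item before a single executeBatch()).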

    Read the article

  • Groovy and XML: Not able to insert processing instruction

    - by rhellem
    Scenario: I need to update some attributes in an existing XML file. The file contains an XSL processing instruction, so when the XML is parsed and updated I need to add the instruction back before writing it to a file again. The problem is that, whatever I do, I'm not able to insert the processing instruction. Based on the Java example found at rgagnon.com I have created the example code below.

        import groovy.xml.*

        def xml = '''|<something>
                     |  <Settings>
                     |  </Settings>
                     |</something>'''.stripMargin()

        def document = DOMBuilder.parse( new StringReader( xml ) )
        def pi = document.createProcessingInstruction('xml-stylesheet', 'type="text/xsl" href="Bp8DefaultView.xsl"');
        document.insertBefore(pi, document.documentElement)
        println document.documentElement

    This creates the output:

        <?xml version="1.0" encoding="UTF-8"?>
        <something>
          <Settings>
          </Settings>
        </something>

    What I want:

        <?xml version="1.0" encoding="UTF-8"?>
        <?xml-stylesheet type="text/xsl" href="Bp8DefaultView.xsl"?>
        <something>
          <Settings>
          </Settings>
        </something>
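
    For what it's worth, the insert itself may already be working: println document.documentElement serializes only the root element, and the processing instruction is a sibling of that element under the Document node, so it never shows up in that output. A hedged sketch that serializes the whole document instead (plain JDK javax.xml.transform, no extra dependency):

    ```groovy
    import javax.xml.transform.TransformerFactory
    import javax.xml.transform.dom.DOMSource
    import javax.xml.transform.stream.StreamResult

    // Serialize the Document node, not just documentElement, so the PI is kept.
    def writer = new StringWriter()
    TransformerFactory.newInstance().newTransformer()
            .transform(new DOMSource(document), new StreamResult(writer))
    println writer.toString()
    ```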

    Read the article

  • Assistance with CC Processing script

    - by JM4
    I am currently implementing a credit card processing script, mostly as provided by the merchant gateway. The code calls functions within a class and returns a string based on the response. The final PHP code I am using (details removed, of course) with example information is:

        <?php
        $gw = new gwapi;
        $gw->setLogin("username", "password");
        $gw->setBilling("John","Smith","Acme, Inc.","888","Suite 200", "Beverly Hills",
            "CA","77777","US","555-555-5555","555-555-5556","[email protected]",
            "www.example.com");
        // "CA","90210","US","[email protected]");
        $gw->setOrder("1234","Big Order",1, 2, "PO1234","65.192.14.10");
        $r = $gw->doSale("1.00","4111111111111111","1010");
        print $gw->responses['responsetext'];
        ?>

    where setLogin allows me to log in, setBilling takes the sample consumer information, setOrder takes the order id and description, and doSale takes the amount charged, the CC number and the expiry date. When all the variables are validated and sent off for processing, a string is returned in the following format:

        response=1&responsetext=SUCCESS&authcode=123456&transactionid=23456&avsresponse=M&orderid=&type=sale&response_code=100

    where: response = transaction approved or declined; responsetext = textual response; authcode = transaction authorization code; transactionid = payment gateway transaction id; avsresponse = AVS response code; orderid = original order id passed in the transaction request; response_code = numeric mapping of the processor response. I am trying to solve the following: 1) How do I take the data that is passed back and display it appropriately on the page? If the transaction failed, or the AVS code doesn't match my liking, or something else is wrong, an error is displayed to the consumer; if the transaction processed, they are taken to a completion page and the transaction id is put in the SESSION as output to the consumer. 2) If the response_code value matches a table of values, certain actions are taken, i.e. if code = 100, take them to the success page; if code = 300, print a specific error on the original page to the customer, etc.
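
    A hedged sketch of the branching, built on the $gw->responses array the gateway class already exposes (field names are the ones in the response string above; the page names are placeholders and it assumes session_start() was called earlier):

    ```php
    <?php
    $code = (int) $gw->responses['response_code'];

    if ($code == 100) {
        // approved: stash the transaction id and move on to the completion page
        $_SESSION['transactionid'] = $gw->responses['transactionid'];
        header('Location: order-complete.php');
        exit;
    } elseif ($code == 300) {
        // declined/rejected: show the gateway's own text on the original payment page
        $errors[] = $gw->responses['responsetext'];
    } else {
        $errors[] = 'Payment could not be processed, please try again.';
    }
    ?>
    ```

    The same place is also where an AVS check can be added (e.g. reject on an avsresponse value you do not want to accept) before treating a code 100 as a success.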

    Read the article

  • Can I ignore a SIGFPE resulting from division by zero?

    - by Mikeage
    I have a program which deliberately performs a divide by zero (and stores the result in a volatile variable) in order to halt in certain circumstances. However, I'd like to be able to disable this halting, without changing the macro that performs the division by zero. Is there any way to ignore it? I've tried using #include <signal.h> ... int main(void) { signal(SIGFPE, SIG_IGN); ... } but it still dies with the message "Floating point exception (core dumped)". I don't actually use the value, so I don't really care what's assigned to the variable; 0, random, undefined... EDIT: I know this is not the most portable, but it's intended for an embedded device which runs on many different OSes. The default halt action is to divide by zero; other platforms require different tricks to force a watchdog induced reboot (such as an infinite loop with interrupts disabled). For a PC (linux) test environment, I wanted to disable the halt on division by zero without relying on things like assert.
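
    Plain SIG_IGN cannot work here even in principle: SIGFPE from an integer division is synchronous, and if it is ignored the faulting instruction is simply restarted (or the behaviour is undefined), which is why the process still dies. A hedged sketch of the usual workaround on Linux, installing a handler that jumps over the division; since the value is not used anyway, the variable just keeps whatever it held before:

    ```c
    #include <setjmp.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>

    static sigjmp_buf fpe_env;

    static void fpe_handler(int sig)
    {
        (void)sig;
        siglongjmp(fpe_env, 1);              /* skip past the faulting instruction */
    }

    int main(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = fpe_handler;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGFPE, &sa, NULL);

        volatile int zero = 0;
        volatile int result = 0;
        if (sigsetjmp(fpe_env, 1) == 0)
            result = 1 / zero;               /* raises SIGFPE; the handler jumps back */

        printf("still running, result = %d\n", result);
        return 0;
    }
    ```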

    Read the article

  • Externally disabling signals for a Linux program.

    - by Harry
    Hello, on Linux, is it possible to somehow disable signal delivery to a program externally... that is, without modifying its source code? Context: I'm calling a C (and also a Java) program from within a bash script on Linux. I don't want any interruptions for my bash script, or for the other programs that the script launches (as foreground processes). While I can use trap '' INT in my bash script to disable the Ctrl-C signal, this works only while program control happens to be in the bash code. That is, if I press Ctrl-C while the C program is running, the C program gets interrupted and exits! This C program is doing a critical operation during which I don't want it to be interrupted. I don't have access to the source code of this C program, so signal handling inside the C program is out of the question.

        #!/bin/bash
        trap 'echo You pressed Ctrl C' INT

        # A C program to emulate a real-world, long-running program,
        # which I don't want to be interrupted, and for which I
        # don't have the source code!
        #
        # File: y.c
        # To build: gcc -o y y.c
        #
        # #include <stdio.h>
        # int main(int argc, char *argv[]) {
        #     printf("Performing a critical operation...\n");
        #     for(;;); // Do nothing forever.
        #     printf("Performing a critical operation... done.\n");
        # }

        ./y

    Regards, /HS
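
    One external approach, sketched below under the assumption that util-linux's setsid is available: terminal-generated SIGINT is only delivered to the terminal's foreground process group, so starting the protected program in its own session keeps Ctrl-C away from it without touching its code. The script itself still sees (and traps) the signal.

    ```bash
    #!/bin/bash
    trap 'echo You pressed Ctrl C' INT

    setsid ./y &          # new session: ./y is no longer in the terminal's foreground group
    pid=$!

    # wait is interrupted whenever the trap fires, so retry until ./y really exits
    while kill -0 "$pid" 2>/dev/null; do
        wait "$pid"
    done
    ```

    The trade-off is that ./y can then no longer be stopped from the keyboard at all and has to be killed by PID; the same trick applies to the Java program.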

    Read the article

  • Eclipse Error: Processing Java changes since last activation

    - by Sean Ochoa
    I'm running Ubuntu 10.04 and I'm getting this error on startup of Eclipse: An internal error occurred during: Processing Java changes since last activation org.eclipse.core.resources.IWorkspace.addSaveParticipant(Ljava/lang/String;Lorg/eclipse/core/resources/ISaveParticipant;)Lorg/eclipse/core/resources/ISavedState; I pasted my full eclipse configuration here: http://pastebin.com/NtzN0HRG. And, here's a basic synopsis of what I have installed so far: EPIC (for Perl), Aptana (for web), Subversion connectors (with JavaHL), and PyDev. Any ideas?

    Read the article

  • Paypal Sandbox Do Direct Payment Internal Error 10001 Timeout Processing Request

    - by user552968
    This is what I'm sending to https://api-3t.sandbox.paypal.com/nvp: VERSION = 65.0 SIGNATURE = AFcWxV21C7fd0v3bYYYRCpSSRl31AxdW2pQp.tWHTjGNcHflR-LJhJ0t USER = seller_1283487740_biz_api1.gmail.com PWD = 1283487748 AMOUNT = 50.00 CREDITCARDTYPE = Visa ACCT = 4031477440127509 EXPDATE = 12/2015 CVV2 =123 IPADDRESS = 127.0.0.1 METHOD = DoDirectPayment I can GetBalance, I can produce other errors when I intentionally send something wrong, but DoDirectPayment or DoAuthorization returns this: TIMESTAMP = 2010-12-24T03:35:10Z CORRELATIONID = 2ca329fdbe3c0 ACK = Failure L_ERRORCODE0 = 10001 L_SHORTMESSAGE0 = Internal Error L_LONGMESSAGE0 = Timeout processing request

    Read the article

  • Modern Batch Processing in Linux

    - by Castro
    What tools, languages, and infrastructure do you use for batch processing in Linux? I am looking for something that facilitates tasks such as: processing files, logging, validation, job control (start, stop, restart a process), and MySQL connections. Thanks for any help!

    Read the article

  • Large data processing in x86 C# gives System.OutOfMemory exception

    - by Cool
    I am processing XML coming from a server, which contains both images and data, in one C# function (compiled as 32-bit). When I try to parse this XML in memory it gives me a System.OutOfMemoryException. Is there any way to avoid this error? My guess is that the system cannot find a contiguous block of 50-100 MB of memory. (My PC has 8 GB of RAM and a quad-core CPU.)
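
    Assuming the guess about address-space fragmentation is right, the usual way around it is not to hold the whole payload at once: stream it with XmlReader and pull the base64 image content out in chunks. A hedged sketch (the file name, element name and output path are placeholders; the same pattern works directly on the network response stream):

    ```csharp
    using System.IO;
    using System.Xml;

    using (XmlReader reader = XmlReader.Create(File.OpenRead("response.xml")))
    {
        while (reader.Read())
        {
            if (reader.NodeType == XmlNodeType.Element && reader.Name == "Image")
            {
                byte[] buffer = new byte[64 * 1024];
                int read;
                using (FileStream outFile = File.Create("image.bin"))
                {
                    // decode the base64 payload in 64 KB slices instead of one huge string
                    while ((read = reader.ReadElementContentAsBase64(buffer, 0, buffer.Length)) > 0)
                        outFile.Write(buffer, 0, read);
                }
            }
        }
    }
    ```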

    Read the article

  • audio processing in iPhone

    - by Janaka
    I am writing an iPhone application to apply filters to audio input and output the result in real time. I am new to audio processing; is using Audio Units the correct approach? I found out how to output data using an Audio Unit but couldn't figure out how to capture input audio. Is there a sample application showing how to connect input and output using Audio Units?

    Read the article

  • Data Processing in Business Intelligence Applications

    - by Manoj
    Is data processing a part of business intelligence tools? If not, how else is it done; if yes, can you give a few examples? My use case: I am using Pentaho Reporting to generate tables and graphs of my test data. Now I want to compare the data against the spec, see whether it passes or fails, and report the result. The spec is also in the database. What is the best way to do this?

    Read the article
