Search Results

Search found 7955 results on 319 pages for 'signal processing'.

Page 155 of 319

  • Struts:JSON:return multiple objects

    - by cp
    Hello. Is it possible to return multiple JSON objects in the response header with Struts 1? I am presently returning a single JSON object, but I now need to return a second data structure as well. All of the client-side processing works perfectly for the single data structure in the single JSON object, and I would rather not complicate it by packing two heterogeneous data structures into one returned JSON object. Thanks in advance.
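
    For reference, a minimal sketch (in Java, with hypothetical class and helper names) of the single-envelope workaround the question mentions: two heterogeneous structures under separate named keys, serialized once and written to the response. Gson stands in for whatever JSON library is actually in use, and the same envelope could just as well be placed in an X-JSON-style response header instead of the body.

        import java.util.Arrays;
        import java.util.LinkedHashMap;
        import java.util.List;
        import java.util.Map;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;
        import org.apache.struts.action.Action;
        import org.apache.struts.action.ActionForm;
        import org.apache.struts.action.ActionForward;
        import org.apache.struts.action.ActionMapping;
        import com.google.gson.Gson;

        public class TwoPayloadsAction extends Action {
            @Override
            public ActionForward execute(ActionMapping mapping, ActionForm form,
                    HttpServletRequest request, HttpServletResponse response) throws Exception {
                // One top-level envelope, two unrelated structures under their own keys.
                Map<String, Object> envelope = new LinkedHashMap<String, Object>();
                envelope.put("customers", loadCustomers());
                envelope.put("orderStats", loadOrderStats());

                response.setContentType("application/json");
                response.getWriter().write(new Gson().toJson(envelope));
                return null; // response already written, nothing to forward to
            }

            private List<String> loadCustomers() {
                return Arrays.asList("Acme", "Globex"); // placeholder data
            }

            private Map<String, Integer> loadOrderStats() {
                Map<String, Integer> stats = new LinkedHashMap<String, Integer>();
                stats.put("open", 3); // placeholder data
                stats.put("closed", 7);
                return stats;
            }
        }

    The existing client-side handling for each structure can stay as-is by reading envelope.customers and envelope.orderStats separately.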

    Read the article

  • Does it make sense to have several UDP ports ready? Will packets be dropped?

    - by Gubatron
    I'm coding a networking application on Android. I'm thinking of having a single UDP port and DatagramSocket that receives all the datagrams sent to it, and then having different processing queues for these messages. I'm wondering whether I should have a second or third UDP socket on standby. Some messages will be very short (100 bytes or so), but others will have to transfer files. My concern is: will the Android kernel drop the small messages if it's too busy handling the bigger ones?
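
    A minimal sketch of the single-socket design described above (the port number and the 200-byte cutoff are illustrative): one receive loop that does nothing but pull datagrams off the socket and hand them to in-memory queues, so slow processing of large transfers never blocks receive(). Datagrams are dropped by the OS when the socket's receive buffer overflows, so asking for a larger buffer also helps.

        import java.net.DatagramPacket;
        import java.net.DatagramSocket;
        import java.util.Arrays;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        public class UdpReceiver {
            private static final int PORT = 7777;              // illustrative port
            private static final int MAX_DATAGRAM = 64 * 1024; // largest payload we accept

            private final BlockingQueue<byte[]> controlQueue = new LinkedBlockingQueue<byte[]>();
            private final BlockingQueue<byte[]> bulkQueue = new LinkedBlockingQueue<byte[]>();

            public void run() throws Exception {
                DatagramSocket socket = new DatagramSocket(PORT);
                // A bigger OS receive buffer lowers the chance of drops while the
                // app is busy; the OS may silently cap the requested size.
                socket.setReceiveBufferSize(1 << 20);
                byte[] buf = new byte[MAX_DATAGRAM];
                try {
                    while (!Thread.currentThread().isInterrupted()) {
                        DatagramPacket packet = new DatagramPacket(buf, buf.length);
                        socket.receive(packet); // the only blocking call in this loop
                        byte[] payload = Arrays.copyOf(packet.getData(), packet.getLength());
                        // Hand off immediately; worker threads drain these queues.
                        if (payload.length <= 200) {
                            controlQueue.offer(payload); // short control messages
                        } else {
                            bulkQueue.offer(payload);    // file-transfer chunks
                        }
                    }
                } finally {
                    socket.close();
                }
            }
        }

    Drops happen per socket receive buffer, so as long as the receive loop stays this thin, the small messages are unlikely to be lost behind the large ones, with or without a standby socket.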

    Read the article

  • Creating a Web Service to automatically get information

    - by Sean P
    I want to create a web service that runs automatically, executing DB queries and some API calls, and then stores the resulting data so that I can use it without paying the processing or time penalty of doing the work every time a user accesses my web service. Is this possible? If so, point me in the right direction on how to implement something like this using VB.NET and ASP.NET. Thanks in advance!
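
    The question is about VB.NET/ASP.NET; purely as a language-agnostic illustration of the pattern being asked about (a background job periodically refreshes a cached result, and the request path only reads the cache), here is a sketch in Java with hypothetical names and a made-up refresh interval. In ASP.NET the same shape can be built with a scheduled task or timer plus the application cache.

        import java.util.Arrays;
        import java.util.Collections;
        import java.util.List;
        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;
        import java.util.concurrent.atomic.AtomicReference;

        public class ReportCache {
            // The latest precomputed result; request handlers read this instead of
            // running the DB queries and API calls themselves.
            private final AtomicReference<List<String>> cached =
                    new AtomicReference<List<String>>(Collections.<String>emptyList());
            private final ScheduledExecutorService scheduler =
                    Executors.newSingleThreadScheduledExecutor();

            public void start() {
                // Refresh immediately, then every 10 minutes (interval is arbitrary).
                scheduler.scheduleAtFixedRate(
                        () -> cached.set(runDbQueriesAndApiCalls()), 0, 10, TimeUnit.MINUTES);
            }

            // Called from the web service handler; returns instantly.
            public List<String> latest() {
                return cached.get();
            }

            private List<String> runDbQueriesAndApiCalls() {
                return Arrays.asList("placeholder result"); // stand-in for the real work
            }
        }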

    Read the article

  • Is it possible to render a template from middleware?

    - by pajton
    I have a middleware that does some processing. Under certain conditions it raises an exception, and the user sees my 500.html template, correctly served with an HTTP 500 status. Now, for some exceptions I would like to render a different template than the default 500.html. Is this possible, and if so, how can I achieve it?

    Read the article

  • Recommendations for Open Source Parallel programming IDE

    - by Andrew Bolster
    What are the best IDEs, IDE plugins, tools, etc., for programming with CUDA, MPI and the like? I've been working in these frameworks for a short while, but I feel like the IDE could be doing more heavy lifting in terms of scaling and job-processing interactions. (I usually use Eclipse or NetBeans, mostly in C/C++ with occasional Java; it's a vague question, but I can't think of any more specific way to put it.)

    Read the article

  • find and replace tokens in javascript

    - by Sourabh
    Hello, I have to do something like this:
    string = " this is a good example to show"
    search = array {this, good, show}
    I need to find the search words and replace each with a numbered token, so that string = " {1} is a {2} example to {3}" (order is kept intact). The string will then undergo some processing, after which string = " {1} is a {2} numbers to {3}" (order still intact). The tokens are then replaced back with the original words, so that the string becomes string = " this is a good number to show". How should this be implemented so that the whole process performs well? Thanks in advance.
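
    The question is specifically about JavaScript; as an illustration of the two-pass idea only (replace each search word with a numbered placeholder, run the processing, then map the placeholders back), here is a sketch in Java. The same steps translate directly to String.prototype.replace in JavaScript. The word-boundary regex is there so a search word is not replaced inside a longer word.

        import java.util.LinkedHashMap;
        import java.util.Map;
        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class TokenSwap {
            public static void main(String[] args) {
                String text = "this is a good example to show";
                String[] search = {"this", "good", "show"};

                // Pass 1: replace each search word with a numbered placeholder.
                Map<String, String> tokens = new LinkedHashMap<>();
                for (int i = 0; i < search.length; i++) {
                    String placeholder = "{" + (i + 1) + "}";
                    tokens.put(placeholder, search[i]);
                    text = text.replaceAll("\\b" + Pattern.quote(search[i]) + "\\b",
                            Matcher.quoteReplacement(placeholder));
                }
                // text is now "{1} is a {2} example to {3}"; run the real processing here.
                text = text.replace("example", "number"); // stand-in for that processing

                // Pass 2: put the original words back.
                for (Map.Entry<String, String> e : tokens.entrySet()) {
                    text = text.replace(e.getKey(), e.getValue());
                }
                System.out.println(text); // prints: this is a good number to show
            }
        }

    For many strings, compiling one alternation pattern over all the search words and doing a single matcher pass is usually faster than repeated whole-string replaces.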

    Read the article

  • Tracking down data load performance issues in SSIS package

    - by SteveC
    Are there any ways to determine what differences between databases affect an SSIS package's load performance? I've got a package which loads and does various bits of processing on ~100k records; on my laptop database it finishes in about 5 minutes. I tried the same package and the same data on the test server, which is a reasonable box in both CPU and memory, and it's still running... about 1 hour so far :-( I checked the package with a small set of data, and it ran through OK.

    Read the article

  • Proper handling of confirmation page in PHP

    - by wnoveno
    Hi, I have a sign-up page written in PHP that goes to a confirmation page after processing. On the confirmation/thank-you page there is a third-party script that counts the number of successful registrants. How would I limit the script so that it is loaded only once per successful registration?

    Read the article

  • if else within CTE ?

    - by stackoverflowuser
    I want to execute a select statement within a CTE based on a condition, something like the code below:
    ;with CTE_AorB ( if(condition) select * from table_A else select * from table_B ), CTE_C as ( select * from CTE_AorB // processing is removed )
    But I get an error on this. Is it possible to have if/else within CTEs? If not, is there a workaround or a better approach? Thanks.

    Read the article

  • Improve long mysql query

    - by John Adawan
    I have a PHP MySQL query like this:
    $query = "SELECT * FROM articles FORCE INDEX (articleindex) WHERE category='$thiscat' and did>'$thisdid' and mid!='$thismid' and status='1' and group='$thisgroup' and pid>'$thispid' LIMIT 10";
    As an optimization, I've indexed all the parameters in articleindex, and I use FORCE INDEX to make MySQL use that index, supposedly for faster processing. But it seems that this query is still quite slow, and it's causing a jam and maxing out the MySQL connection limit. Let's discuss how we can improve such a long query.

    Read the article

  • SQL Server architecture guidance

    - by Liam
    Hi, We are designing a new version of our existing product on a new schema. It is an internal web application with possibly 100 concurrent users (max) and will run on a SQL Server 2008 database. One of the recent discussion items is whether we should have a single database or split it into two separate databases for performance reasons. The database could grow anywhere from 50-100 GB over 5 years. We are developers, not DBAs, so it would be nice to get some general guidance. [I know the answer is not simple, as it depends on the schema, archiving policy, amount of data, etc.]
    Option 1: a single main database [this is my preferred option]. The plan would be to have all the tables in a single database and possibly use filegroups and partitioning to separate the data across multiple disks if required [using schemas where appropriate]. This should deal with the performance concerns. One of the comments on this was that a single server instance would still be processing all the data, so there would still be a processing bottleneck. For reporting we could have a separate reporting DB, but this is still being discussed.
    Option 2: split the database into 2 separate databases. DB1 - customers, accounts, customer resources, etc. DB2 - the bulk of the data [i.e. vehicle tracking data, financial transaction tables, etc.]. These tables would typically contain a lot of data [and could reside on a separate server if required]. This plan would involve keeping the main data in a smaller database [DB1] and retaining the [mainly] read-only, transaction-type data in a separate DB [DB2]. The UI would mainly read from DB1 and thus be more responsive. [I'm aware that this option makes it harder to enforce referential integrity.]
    Points for consideration: as we are at the design stage, we can at least make proper use of indexes to deal with performance issues, which is why Option 1 is attractive to me and is the more standard approach. For both options we are considering implementing an archiving database. Apologies for the long question. In summary, the question is: 1 DB or 2? Thanks in advance, Liam

    Read the article

  • scraping blog contents

    - by goh
    Hi lads, After obtaining the URLs for various Blogspot, Tumblr and WordPress pages, I faced some problems processing the HTML pages. The thing is, I wish to distinguish between the content, title and date for each blog post. I might be able to get the date through a regex, but there are so many custom scripts in use now that the HTML classes and structure differ from site to site. Does anyone have a solution that may help?

    Read the article

  • Explanation of the different functionality in Verifone VMAC versions?

    - by bazily
    I'm looking for an explanation of the different functionality in versions of an application called VMAC (Verix blah blah blah), also called the "comm server", which is used on Verifone payment terminals. I've got terminals with versions 1.7 and 3.3 of VMAC, and I'm unaware of the differences. If someone is a Verifone expert, it would be helpful to know how much of the communication with the processing host it handles versus the merchant services provider's application.

    Read the article

  • Tools to convert XML to HTML using XSLT

    - by armannvg
    I'm beginning to work on a project which has some extensive XML/XSLT processing to render output HTML. Some changes need to be made to the XSLT, and I need a tool that can help me modify it without having to run the whole solution every time - something that can help me visualize the effect of the changes on the rendered HTML. I've found StylusStudio, but I would prefer a free tool.
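
    As a stopgap while looking for a proper tool, the JDK's built-in (XSLT 1.0) processor can regenerate the HTML from the command line so the effect of an XSLT edit can be checked in a browser without running the whole solution. A minimal sketch, with the file names taken from the command line:

        import java.io.File;
        import javax.xml.transform.Transformer;
        import javax.xml.transform.TransformerFactory;
        import javax.xml.transform.stream.StreamResult;
        import javax.xml.transform.stream.StreamSource;

        public class QuickXslt {
            // Usage: java QuickXslt input.xml stylesheet.xslt output.html
            public static void main(String[] args) throws Exception {
                Transformer t = TransformerFactory.newInstance()
                        .newTransformer(new StreamSource(new File(args[1])));
                t.transform(new StreamSource(new File(args[0])),
                        new StreamResult(new File(args[2])));
                System.out.println("Wrote " + args[2]);
            }
        }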

    Read the article

  • How to check whether given file is in PROPER word file format?

    - by shekhar
    Hi, I am developing an application in C# for processing MS Word files. My application hangs when I pass an invalid .doc file as input - for example, if I take a foo.pdf file and pass it to my application after changing its extension to foo.doc. Is it possible to check whether a file is a valid .doc file before trying to open it? Please enlighten! Thanks in advance.
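
    The question is in C#; as an illustration of the usual technique (checking the file's magic bytes instead of trusting the extension), here is a sketch in Java. Legacy binary .doc files are OLE2 compound documents, whose first eight bytes are the signature checked below; a renamed PDF starts with "%PDF" and fails the check immediately. Note that a match only proves the file is an OLE2 container (an .xls would also match), so this is a cheap pre-filter before the real open, not a full validation.

        import java.io.DataInputStream;
        import java.io.EOFException;
        import java.io.FileInputStream;
        import java.io.IOException;

        public class DocSniffer {
            // OLE2 compound-document signature used by legacy .doc files.
            // (.docx files are ZIP archives instead and start with "PK".)
            private static final byte[] OLE2_MAGIC = {
                (byte) 0xD0, (byte) 0xCF, 0x11, (byte) 0xE0,
                (byte) 0xA1, (byte) 0xB1, 0x1A, (byte) 0xE1
            };

            public static boolean looksLikeLegacyDoc(String path) throws IOException {
                byte[] header = new byte[OLE2_MAGIC.length];
                try (DataInputStream in = new DataInputStream(new FileInputStream(path))) {
                    in.readFully(header);
                } catch (EOFException tooShort) {
                    return false; // fewer than 8 bytes cannot be a valid .doc
                }
                for (int i = 0; i < header.length; i++) {
                    if (header[i] != OLE2_MAGIC[i]) {
                        return false;
                    }
                }
                return true;
            }
        }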

    Read the article

  • Problem in java.util.Set.addAll() method

    - by Yatendra Goel
    I have a java.util.Set<City> cities and I need to add cities to this set in two ways: by adding an individual city (via a cities.add(city) call), or by adding another set of cities to this set (via a cities.addAll(anotherCitiesSet) call). The problem with the second approach is that I don't know whether there were any duplicate cities in anotherCitiesSet. I want to do some processing whenever a duplicate entry is about to be added to the cities set.
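
    A minimal sketch of one way to get the per-duplicate hook: skip addAll for the bulk case and loop over the incoming set, using the boolean that Set.add returns (false means the element was already present). addAll only reports a single boolean for the whole call, so it cannot tell you which elements were duplicates. City here is a stand-in type assumed to have value-based equals()/hashCode().

        import java.util.Set;

        public class CityMerge {
            // Hypothetical element type; records get equals()/hashCode() for free.
            record City(String name) {}

            static void addAllWithDuplicateCheck(Set<City> cities, Set<City> incoming) {
                for (City city : incoming) {
                    boolean added = cities.add(city); // false => already in the set
                    if (!added) {
                        handleDuplicate(city);        // the extra processing goes here
                    }
                }
            }

            static void handleDuplicate(City city) {
                System.out.println("Duplicate city: " + city.name());
            }
        }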

    Read the article

  • Does Meta Refresh work during page load?

    - by waquin
    Page A has a meta refresh that redirects to another page, C, after a certain amount of time (time T). From page A a link is clicked that takes a long time to load, longer than time T, and would eventually load another page, B. Will the meta refresh on page A cause the page to be redirected to C, or will the processing of the link override the meta refresh, eventually loading page B?

    Read the article

  • ideas: per-file authentication in order to download

    - by suIIIha
    I would love to use mod_xsendfile, but I live in a shared environment which does not provide that module. Processing large files such as videos through a server-side script and sending them to the browser that way seems unacceptable in my case, so I am looking for a way to enable per-file authentication that does not consume too many resources. Nobody should be able to learn the actual path of the file they are downloading. Please suggest how to do that.

    Read the article

  • The internal implementation of R's dataset

    - by Yin Zhu
    I am trying to build a data processing program. Currently I use a double matrix to represent the data table: each row is an instance, and each column represents a feature. I also have an extra vector holding the target value for each instance; it is of double type for regression and of integer type for classification. I want to make this more general, so I am wondering what kind of structure R uses to store a dataset, i.e. the internal implementation in R.

    Read the article

  • Dump to CSV/Postgres memory

    - by alex
    I have a large table (300 million rows) that I would like to dump to a CSV - I need to do some processing that cannot be done with SQL. Right now I am using Squirrel as a client, and it apparently does not deal very well with large datasets - at least as far as I can tell from my own (limited) experience. If I run the query on the actual host, will it use less memory? Thanks for any help.

    Read the article

  • Akka framework support for finding duplicate messages

    - by scala_is_awesome
    I'm trying to build a high-performance distributed system with Akka and Scala. If a message requesting an expensive (and side-effect-free) computation arrives, and the exact same computation has already been requested before, I want to avoid computing the result again. If the computation requested previously has already completed and the result is available, I can cache it and re-use it.
    However, the time window in which a duplicate computation can be requested may be arbitrarily small; e.g. I could get a thousand or a million messages requesting the same expensive computation at the same instant for all practical purposes. There is a commercial product called Gigaspaces that supposedly handles this situation, but there seems to be no framework support for dealing with duplicate work requests in Akka at the moment. Given that the Akka framework already has access to all the messages being routed through it, a framework solution could make a lot of sense here. Here is what I am proposing the Akka framework do:
    1. Create a trait to indicate a type of message (say, "ExpensiveComputation" or something similar) that is to be subject to the following caching approach.
    2. Smartly (hashing etc.) identify identical messages received by (the same or different) actors within a user-configurable time window. Other options: select a maximum buffer size of memory to be used for this purpose, subject to (say, LRU) replacement, etc. Akka could also choose to cache only the results of messages that were expensive to process; messages that took very little time to process can be re-processed if needed, with no need to waste precious buffer space caching them and their results.
    3. When identical messages (received within that time window, possibly "at the same instant") are identified, avoid unnecessary duplicate computation. The framework would do this automatically; essentially, the duplicate messages would never be received by a new actor for processing - they would silently vanish, and the result from processing the message once (whether that computation was already done in the past or is ongoing right then) would be sent to all appropriate recipients (immediately if already available, and upon completion of the computation if not).
    Note that messages should be considered identical even if their "reply" fields differ, as long as the semantics/computations they represent are identical in every other respect. Also note that the computation should be purely functional, i.e. free from side effects, for the suggested caching optimization to work without changing the program semantics at all. If what I am suggesting is not compatible with the Akka way of doing things, and/or if you see some strong reasons why this is a very bad idea, please let me know. Thanks, Is Awesome, Scala
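
    Akka does not ship this, as the question notes; purely as a framework-agnostic sketch of the core mechanism (publish one future per distinct request key so that concurrent identical requests share a single computation, whether it is finished or still in flight), here is the idea in plain Java. The class and method names are made up, and eviction (the time window / LRU buffer described above) is deliberately left out.

        import java.util.concurrent.CompletableFuture;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ConcurrentMap;
        import java.util.function.Function;

        public class DeduplicatingCache<K, V> {
            private final ConcurrentMap<K, CompletableFuture<V>> inFlight = new ConcurrentHashMap<>();
            private final Function<K, V> expensiveComputation;

            public DeduplicatingCache(Function<K, V> expensiveComputation) {
                this.expensiveComputation = expensiveComputation;
            }

            // All callers asking for the same key share one computation: the first
            // caller installs the future, later callers (even "at the same instant")
            // simply get that same future back and attach their own callbacks.
            public CompletableFuture<V> resultFor(K key) {
                return inFlight.computeIfAbsent(key,
                        k -> CompletableFuture.supplyAsync(() -> expensiveComputation.apply(k)));
            }
        }

    An actor (or any handler) would derive the key from the message minus its reply field, call resultFor, and complete each requester from the shared future; because the computation is side-effect-free, collapsing the duplicates cannot change program semantics, which is exactly the property point 3 above relies on.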

    Read the article
