Search Results

Search found 7538 results on 302 pages for 'parallel processing'.

Page 16 of 302

  • Modular Architecture for Processing Pipeline

    - by anjruu
    I am trying to design the architecture of a system that I will be implementing in C++, and I was wondering if people could think of a good approach, or critique the approach that I have designed so far. First of all, the general problem is an image processing pipeline. It contains several stages, and the goal is to design a highly modular solution, so that any of the stages can be easily swapped out and replaced with a piece of custom code (so that the user can have a speed increase if s/he knows that a certain stage is constrained in a certain way in his or her problem). The current thinking is something like this:
        struct output; /* Contains the output values from the pipeline. */
        class input_routines {
        public:
            virtual foo stage1(...){...}
            virtual bar stage2(...){...}
            virtual qux stage3(...){...}
            ...
        };
        output pipeline(input_routines stages);
    This would allow people to subclass input_routines and override whichever stage they wanted. That said, I've worked in systems like this before, and I find the subclassing and the default stuff tends to get messy and can be difficult to use, so I'm not giddy about writing one myself. I was also thinking about a more STL-ish approach, where the different stages (there are 6 or 7) would be defaulted template parameters. Can anyone offer a critique of the pattern above, thoughts on the template approach, or any other architecture that comes to mind?
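
    A minimal sketch of the defaulted-template-parameter approach mentioned above (illustrative only; the type and stage names are assumptions, not from the original post):

        // Each stage is a policy type with an operator(); supplying your own
        // type for a template parameter swaps out that stage.
        #include <vector>

        struct Image  { std::vector<unsigned char> pixels; };
        struct Output { /* final pipeline results */ };

        struct DefaultDenoise { Image  operator()(const Image& in) const { return in; } };
        struct DefaultSegment { Image  operator()(const Image& in) const { return in; } };
        struct DefaultMeasure { Output operator()(const Image& in) const { return Output{}; } };

        template <class Stage1 = DefaultDenoise,
                  class Stage2 = DefaultSegment,
                  class Stage3 = DefaultMeasure>
        class Pipeline {
        public:
            Output run(const Image& input) const {
                // Each stage feeds the next.
                return stage3_(stage2_(stage1_(input)));
            }
        private:
            Stage1 stage1_;
            Stage2 stage2_;
            Stage3 stage3_;
        };

        // Usage: Pipeline<> p;              // all default stages
        //        Pipeline<MyFastDenoise> q; // override only stage 1 (hypothetical type)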

    Read the article

  • What are some alternatives to word processing with Markdown?

    - by Hassan
    I've used MS Word-style editors for a long time, but I never got used to how unintuitive and cumbersome they are. I'm not talking specifically about MS Word, but also other editors that seem to mimic Word, like OpenOffice, NeoOffice, etc. I've found myself preferring to write in Markdown (much like on this site). I've found a few good Markdown editors, and I like using them a lot more than using Word-style editors. Here is what they generally look like: As you can see, it works much differently than a Word-style editor. This is a generally cleaner way of writing, since formatting is done right in the text, and is extremely simple to use (no highlighting some text, then clicking a button in some menu you have to find). Although editing text this way is great, I've realized that the syntax can only be used for very specific needs (bullets, numbered lists, headings and sub-headings, bold, italic, and some other common ones). However, many features are missing. Here are some features that would be nice in a word processor: Tables. Indenting paragraphs. Good image support (you can link to images, but not add them, since Markdown is just text). More simple to use than Word and its cronies. Cross-platform. Some of these can be fixed with in-line HTML, but nobody wants to do that. It seems Markdown was designed for editing text on the internet. Is there a similar setup that works better for desktop word processors?

    Read the article

  • Lazy Processing of Streams

    - by Giorgio
    I have the following problem scenario: I have a text file and I have to read it and split it into lines. Some lines might need to be dropped (according to criteria that are not fixed). The lines that are not dropped must be parsed into some predefined records. Records that are not valid must be dropped. Duplicate records may exist and, in such a case, they are consecutive. If duplicate / multiple records exist, only one item should be kept. The remaining records should be grouped according to the value contained in one field; all records belonging to the same group appear one after another (e.g. AAAABBBBCCDEEEFF and so on). The records of each group should be numbered (1, 2, 3, 4, ...). For each group the numbering starts from 1. The records must then be saved somewhere / consumed in the same order as they were produced. I have to implement this in Java or C++. My first idea was to define functions / methods like: One method to get all the lines from the file. One method to filter out the unwanted lines. One method to parse the filtered lines into valid records. One method to remove duplicate records. One method to group records and number them. The problem is that the data I am going to read can be too big and might not fit into main memory: so I cannot just construct all these lists and apply my functions one after the other. On the other hand, I think I do not need to fit all the data in main memory at once because once a record has been consumed all its underlying data (basically the lines of text between the previous record and the current record, and the record itself) can be disposed of. With the little knowledge I have of Haskell I have immediately thought about some kind of lazy evaluation, in which instead of applying functions to lists that have been completely computed, I have different streams of data that are built on top of each other and, at each moment, only the needed portion of each stream is materialized in main memory. But I have to implement this in Java or C++. So my question is which design pattern or other technique can allow me to implement this lazy processing of streams in one of these languages.
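
    A minimal sketch of a pull-based pipeline in C++ along the lines described above: each stage asks the stage below it for the next item only when needed, so only a small window of data is in memory at any time. The types and the toy parse function are illustrative only:

        #include <fstream>
        #include <optional>
        #include <string>

        struct Record { std::string group; std::string payload; };

        // Stage 1: produce lines one at a time.
        class LineSource {
        public:
            explicit LineSource(const std::string& path) : in_(path) {}
            std::optional<std::string> next() {
                std::string line;
                if (std::getline(in_, line)) return line;
                return std::nullopt;                        // end of stream
            }
        private:
            std::ifstream in_;
        };

        // Stage 2: parse and filter; invalid lines are skipped, never stored.
        class RecordParser {
        public:
            explicit RecordParser(LineSource& src) : src_(src) {}
            std::optional<Record> next() {
                while (auto line = src_.next()) {
                    if (auto rec = parse(*line)) return rec; // drop invalid lines
                }
                return std::nullopt;
            }
        private:
            static std::optional<Record> parse(const std::string& line) {
                if (line.empty()) return std::nullopt;       // toy rule: empty = invalid
                return Record{line.substr(0, 1), line};      // toy parse: group = first char
            }
            LineSource& src_;
        };

        // Further stages (duplicate removal, grouping/numbering) follow the same
        // pattern: keep only the previous item or the current group key, call
        // next() on the stage below, and hand one finished record at a time
        // to the consumer.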

    Read the article

  • What Parallel computing APIs make good use of sockets?

    - by Ole Jak
    My program uses sockets. What parallel computing APIs could I use that would help me without forcing me to move from sockets to anything else? When running on a cluster with a special, non-socket interconnect, such an API would emulate something like sockets but use that infrastructure underneath (so programs would perform much faster than over plain sockets, yet still use the sockets API).
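
    For illustration, a minimal sketch using MPI, one widely used API of the kind described here (the same source can run over TCP sockets or over a cluster's native interconnect, depending on how the MPI library was built); the program itself is a placeholder, not from the original post:

        /* run with e.g.: mpirun -np 2 ./a.out */
        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char** argv) {
            MPI_Init(&argc, &argv);

            int rank, size;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
            MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

            if (size >= 2) {
                if (rank == 0) {
                    int value = 42;
                    MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
                } else if (rank == 1) {
                    int value = 0;
                    MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                    printf("rank 1 received %d\n", value);
                }
            }

            MPI_Finalize();
            return 0;
        }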

    Read the article

  • How to use parallel execution in a shell script?

    - by eSKay
    I have a C shell script that does something like this:
        #!/bin/csh
        gcc example.c -o ex
        gcc combine.c -o combine
        ex file1 r1    <-- 1
        ex file2 r2    <-- 2
        ex file3 r3    <-- 3
        #... many more like the above
        combine r1 r2 r3 final
        \rm r1 r2 r3
    Is there some way I can make lines 1, 2 and 3 run in parallel instead of one after another?

    Read the article

  • Possible to distribute or parallel process a sequential program?

    - by damigu
    In C++, I've written a mathematical program (for diffusion limited aggregation) where each new point calculated is dependent on all of the preceding points. Is it possible to have such a program work in a parallel or distributed manner to increase computing speed? If so, what type of modifications to the code would I need to look into?
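
    For illustration, a hedged sketch of one common workaround when a single run is inherently sequential: running several independent simulations in parallel (for example, to gather statistics over many aggregates). The names and the trivial random walk are placeholders, not the original program:

        #include <random>
        #include <thread>
        #include <utility>
        #include <vector>

        struct Aggregate { std::vector<std::pair<double, double>> points; };

        Aggregate run_dla(unsigned seed, int n_points) {
            std::mt19937 rng(seed);
            std::uniform_real_distribution<double> step(-1.0, 1.0);
            Aggregate a;
            double x = 0.0, y = 0.0;
            for (int i = 0; i < n_points; ++i) {
                x += step(rng);              // placeholder for the real walk /
                y += step(rng);              // sticking rule, which depends on a.points
                a.points.push_back({x, y});
            }
            return a;
        }

        int main() {
            const int runs = 4;
            std::vector<Aggregate> results(runs);
            std::vector<std::thread> workers;
            for (int r = 0; r < runs; ++r)
                workers.emplace_back([&, r] { results[r] = run_dla(1000 + r, 10000); });
            for (auto& t : workers) t.join();   // runs are independent, so no locking needed
            return 0;
        }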

    Read the article

  • Serial plans: Threshold / Parallel_degree_limit = 1

    - by jean-pierre.dijcks
    As a very short follow-up on the previous post, here is some more on getting a serial plan and why that happens. Another reason to get a serial plan - besides auto DOP not being on, which we looked at in the earlier post - and often a more prevalent one, is that the plan simply does not take long enough to be considered for a parallel path. The resulting plan and note look like this (note that this is a serial plan!):

        explain plan for select count(1) from sales;
        SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY());

        Plan hash value: 672559287
        --------------------------------------------------------------------------------------
        | Id  | Operation            | Name  | Rows  | Cost (%CPU)| Time     | Pstart| Pstop |
        --------------------------------------------------------------------------------------
        |   0 | SELECT STATEMENT     |       |     1 |     5   (0)| 00:00:01 |       |       |
        |   1 |  SORT AGGREGATE      |       |     1 |            |          |       |       |
        |   2 |   PARTITION RANGE ALL|       |   960 |     5   (0)| 00:00:01 |     1 |    16 |
        |   3 |    TABLE ACCESS FULL | SALES |   960 |     5   (0)| 00:00:01 |     1 |    16 |

        Note
        -----
           - automatic DOP: Computed Degree of Parallelism is 1 because of parallel threshold

        14 rows selected.

    The parallel threshold refers to parallel_min_time_threshold, and since I did not change the default (10s), the plan is not considered for a parallel degree computation and therefore stays with the serial execution. Now we go into the land of crazy: assume I do want to force this DOP of 1. I could set the parameter in the init.ora, but to highlight it in this case I changed it on the session:

        alter session set parallel_degree_limit = 1;

    The result I get is:

        ERROR:
        ORA-02097: parameter cannot be modified because specified value is invalid
        ORA-00096: invalid value 1 for parameter parallel_degree_limit, must be from among CPU IO AUTO INTEGER>=2

    Which of course makes perfect sense...

    Read the article

  • apt-get is broken

    - by Amol Shinde
    I Cannot install any package in the server, As I am newbie in Server. In Morning I found that some, I am not able to install any package from command line in the server,Now every package is now manually downloaded packages and then installed in the server. Can any one Please tell me what is the issue and how could it be resolved. OS:- Ubuntu 10.04.4 LTS \n \l (64 Bit) Below is the error: iam@ubuntu$ sudo apt-get install pidgin Reading package lists... Done Building dependency tree Reading state information... Done pidgin is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 102 not upgraded. 32 not fully installed or removed. After this operation, 0B of additional disk space will be used. Traceback (most recent call last): File "/usr/bin/apt-listchanges", line 33, in <module> from ALChacks import * File "/usr/share/apt-listchanges/ALChacks.py", line 32, in <module> sys.stderr.write(_("Can't set locale; make sure $LC_* and $LANG are correct!\n")) NameError: name '_' is not defined perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LC_CTYPE = "UTF-8", LANG = "en_IN" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Setting up shared-mime-info (0.71-1ubuntu2) ... /var/lib/dpkg/info/shared-mime-info.postinst: line 13: 21935 Segmentation fault update-mime-database.real /usr/share/mime dpkg: error processing shared-mime-info (--configure): subprocess installed post-installation script returned error exit status 139 dpkg: dependency problems prevent configuration of libgtk2.0-0: libgtk2.0-0 depends on shared-mime-info; however: Package shared-mime-info is not configured yet. dpkg: error processing libgtk2.0-0 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of chromium-browser: chromium-browser depends on libgtk2.0-0 (>= 2.20.0); however: Package libgtk2.0-0 is not configured yet. dpkg: error processing chromium-browser (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of chromium-codecs-ffmpeg: chromium-codecs-ffmpeg depends on chromium-browser (>= 4.0.203.0~); however: Package chromium-browser is not configured yet. dpkg: error processing chromium-codecs-ffmpeg (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of chromium-browser-l10n: chromium-browser-l10n depends on chromium-browser (= 18.0.1025.151~r130497-0ubuntu0.10.04.No apport report written because the error message indicates its a followup error from a previous failure. No apport report written because the error message indicates its a followup error from a previous failure. 
No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already 1); however: Package chromium-browser is not configured yet. dpkg: error processing chromium-browser-l10n (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libevdocument2: libevdocument2 depends on libgtk2.0-0 (>= 2.14.0); however: Package libgtk2.0-0 is not configured yet. dpkg: error processing libevdocument2 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libevview2: libevview2 depends on libevdocument2 (>= 2.29.5); however: Package libevdocument2 is not configured yet. libevview2 depends on libgtk2.0-0 (>= 2.20.0); however: Package libgtk2.0-0 is not configured yet. dpkg: error processing libevview2 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of evince: evince depends on libevdocument2 (>= 2.29.5); however: Package libevdocument2 is not configured yet. evince depends on libevview2 (>= 2.29.No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already 5); however: Package libevview2 is not configured yet. evince depends on libgtk2.0-0 (>= 2.16.0); however: Package libgtk2.0-0 is not configured yet. evince depends on shared-mime-info; however: Package shared-mime-info is not configured yet. dpkg: error processing evince (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of firefox: firefox depends on libgtk2.0-0 (>= 2.20.0); however: Package libgtk2.0-0 is not configured yet. dpkg: error processing firefox (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of gcalctool: gcalctool depends on libgtk2.0-0 (>= 2.18.0); however: Package libgtk2.0-0 is not configured yet. dpkg: error processing gcalctool (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libgdict-1.0-6: libgdict-1.0-6 depends on libgtk2.0-0 (>= 2.18.0); however: Package libgtk2.0-0 is not configured yet. dpkg: error processing libgdict-1.0-6 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of gnome-utils: gnome-utils depends on libgdict-1.0-6 (>= 2.23.90); however: Package libgdict-1.0-6 is not configured yet. gnome-utils depends on libgtk2.0-0 (>= 2.18.0); however: Package libgtk2.0-0 is not configured yet. 
dpkg: error processing gnome-utils (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of gtk2-engines-pixbuf: gtk2-engines-pixbuf depends on gtk2.0-binver-2.10.0; however: Package gtk2.0-binver-2.10.0 is not installed. Package libgtk2.0-0 which provides gtk2.0-binver-2.10.0 is not configured yet. gtk2-engines-pixbuf depends on libgtk2.0-0 (= 2.20.1-0ubuntu2.1); however: Package libgtk2.0-0 is not configured yet. dpkg: error processing gtk2-engines-pixbuf (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libedataserverui1.2-8: libedataserverui1.2-8 depends on libgtk2.0-0 (>= 2.14.0); however: Package libgtk2.0-0 is not configured yet. dpkg: error processing libedataserverui1.2-8 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libgail18: libgail18 depends on libgtk2.0-0 (= 2.20.1-0ubuntu2.1); however: Package libgtk2.0-0 is not configured yet. dpkg: error processing libgail18 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libgtk2.0-bin: libgtk2.0-bin depends on libgtk2.0-0 (>= 2.20.1-0ubuntu2.1); however: Package libgtk2.0-0 is not configured yet. dpkg: error processing libgtk2.0-bin (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libgtk2.0-dev: libgtk2.0-dev depends on libgtk2.0-0 (= 2.20.1-0ubuntu2.1); however: Package libgtk2.0-0 is not configured yet. dpkg: error processing libgtk2.0-dev (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libnotify-dev: libnotify-dev depends on libgtk2.0-dev (>= 2.10); however: Package libgtk2.0-dev is not configured yet. dpkg: error processing libnotify-dev (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of network-manager-gnome: network-manager-gnome depends on libgtk2.0-0 (>= 2.16.0); however: Package libgtk2.0-0 is not configured yet. dpkg: error processing network-manager-gnome (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of openoffice.org-core: openoffice.org-core depends on libgtk2.0-0 (>= 2.10); however: Package libgtk2.0-0 is not configured yet. dpkg: error processing openoffice.org-core (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of openoffice.org-draw: openoffice.org-draw depends on openoffice.org-core (= 1:3.2.0-7ubuntu4.4); however: Package openoffice.org-core is not configured yet. dpkg: error processing openoffice.org-draw (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of openoffice.org-impress: openoffice.org-impress depends on openoffice.org-core (= 1:3.2.0-7ubuntu4.4); however: Package openoffice.org-core is not configured yet. openoffice.org-impress depends on openoffice.org-draw (= 1:3.2.0-7ubuntu4.4); however: Package openoffice.org-draw is not configured yet. dpkg: error processing openoffice.org-impress (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of pidgin: pidgin depends on libgtk2.0-0 (>= 2.18.0); however: Package libgtk2.0-0 is not configured yet. 
dpkg: error processing pidgin (--configure): dependency problems - leaving unconfigured No apport report written because MaxReports is reached already Setting up update-manager (1:0.134.12.1) ... locale: Cannot set LC_CTYPE to default locale: No such file or directory dpkg: error processing update-manager (--configure): subprocess installed post-installation script returned error exit status 245 No apport report written because MaxReports is reached already dpkg: dependency problems prevent configuration of update-notifier: update-notifier depends on libgtk2.0-0 (>= 2.14.0); however: Package libgtk2.0-0 is not configured yet. update-notifier depends on update-manager; however: Package update-manager is not configured yet. dpkg: error processing update-notifier (--configure): dependency problems - leaving unconfigured No apport report written because MaxReports is reached already dpkg: dependency problems prevent configuration of xulrunner-1.9.2: xulrunner-1.9.2 depends on libgtk2.0-0 (>= 2.18.0); however: Package libgtk2.0-0 is not configured yet. dpkg: error processing xulrunner-1.9.2 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of xulrunner-1.9.2-dev: xulrunner-1.9.2-dev depends on xulrunner-1.9.2 (= 1.9.2.28+build1+nobinonly-0ubuntu0.10.04.1); however: Package xulrunner-1.9.2 is not configured yet. xulrunner-1.9.2-dev depends on libnotify-dev; however: Package libnotify-dev is not configured yet. dpkg: error processing xulrunner-1.9.2-dev (--configure): dependency problems - leaving unconfigured No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already dpkg: dependency problems prevent configuration of icedtea6-plugin: icedtea6-plugin depends on xulrunner-1.9.2; however: Package xulrunner-1.9.2 is not configured yet. icedtea6-plugin depends on libgtk2.0-0 (>= 2.8.0); however: Package libgtk2.0-0 is not configured yet. dpkg: error processing icedtea6-plugin (--configure): dependency problems - leaving unconfigured Setting up libgweather-common (2.30.0-0ubuntu1.1) ... No apport report written because MaxReports is reached already locale: Cannot set LC_CTYPE to default locale: No such file or directory dpkg: error processing libgweather-common (--configure): subprocess installed post-installation script returned error exit status 245 No apport report written because MaxReports is reached already dpkg: dependency problems prevent configuration of libgweather1: libgweather1 depends on libgtk2.0-0 (>= 2.11.0); however: Package libgtk2.0-0 is not configured yet. libgweather1 depends on libgweather-common (>= 2.24.0); however: Package libgweather-common is not configured yet. dpkg: error processing libgweather1 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of openoffice.org-style-galaxy: openoffice.org-style-galaxy depends on openoffice.org-core (>= 1:3.2.0~beta); however: Package openoffice.org-core is not configured yet. No apport report written because MaxReports is reached already dpkg: error processing openoffice.org-style-galaxy (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of openoffice.org-common: openoffice.org-common depends on openoffice.org-style-default | openoffice.org-style; however: Package openoffice.org-style-default is not installed. 
Package openoffice.org-style-galaxy which provides openoffice.org-style-default is not configured yet. Package openoffice.org-style is not installed. Package openoffice.org-style-galaxy which provides openoffice.org-style is not configured yet. No apport report written because MaxReports is reached already dpkg: error processing openoffice.org-common (--configure): dependency problems - leaving unconfigured No apport report written because MaxReports is reached already Errors were encountered while processing: shared-mime-info libgtk2.0-0 chromium-browser chromium-codecs-ffmpeg chromium-browser-l10n libevdocument2 libevview2 evince firefox gcalctool libgdict-1.0-6 gnome-utils gtk2-engines-pixbuf libedataserverui1.2-8 libgail18 libgtk2.0-bin libgtk2.0-dev libnotify-dev network-manager-gnome openoffice.org-core openoffice.org-draw openoffice.org-impress pidgin update-manager update-notifier xulrunner-1.9.2 xulrunner-1.9.2-dev icedtea6-plugin libgweather-common libgweather1 openoffice.org-style-galaxy openoffice.org-common E: Sub-process /usr/bin/dpkg returned an error code (1) While typing command in terminal, command is not auto-completing.

    Read the article

  • Parallel curve like algorithm for graphs

    - by skrat
    Is there a well-known algorithm for calculating a "parallel graph"? By parallel graph I mean the same as a parallel curve, also called an "offset curve", but for a graph instead of a curve. Given this picture, how can I calculate the points of the black outlined polygons?

    Read the article

  • What is IE's Maximum Number of Parallel Connections Across All Hosts?

    - by timeitquery
    Based on the IE documentation on MSDN, IE 8 supports up to 6 parallel connections per server, and IE 6 and 7 support 2. What is the upper limit of parallel connections across all hosts? So if I have 60 hosts with 8 requests per host - 480 requests in the HTML page - does it mean that IE 8 will have 360 connections in parallel and IE 6 or 7 would have 120? (Ignoring HTML rendering time and whether the calls are blocking or not.)

    Read the article

  • Designing status management for a file processing module

    - by bot
    The background One of the functionality of a product that I am currently working on is to process a set of compressed files ( containing XML files ) that will be made available at a fixed location periodically (local or remote location - doesn't really matter for now) and dump the contents of each XML file in a database. I have taken care of the design for a generic parsing module that should be able to accommodate the parsing of any file type as I have explained in my question linked below. There is no need to take a look at the following link to answer my question but it would definitely provide a better context to the problem Generic file parser design in Java using the Strategy pattern The Goal I want to be able to keep a track of the status of each XML file and the status of each compressed file containing the XML files. I can probably have different statuses defined for the XML files such as NEW, PROCESSING, LOADING, COMPLETE or FAILED. I can derive the status of a compressed file based on the status of the XML files within the compressed file. e.g status of the compressed file is COMPLETE if no XML file inside the compressed file is in a FAILED state or status of the compressed file is FAILED if the status of at-least one XML file inside the compressed file is FAILED. A possible solution The Model I need to maintain the status of each XML file and the compressed file. I will have to define some POJOs for holding the information about an XML file as shown below. Note that there is no need to store the status of a compressed file as the status of a compressed file can be derived from the status of its XML files. public class FileInformation { private String compressedFileName; private String xmlFileName; private long lastModifiedDate; private int status; public FileInformation(final String compressedFileName, final String xmlFileName, final long lastModified, final int status) { this.compressedFileName = compressedFileName; this.xmlFileName = xmlFileName; this.lastModifiedDate = lastModified; this.status = status; } } I can then have a class called StatusManager that aggregates a Map of FileInformation instances and provides me the status of a given file at any given time in the lifetime of the appliciation as shown below : public class StatusManager { private Map<String,FileInformation> processingMap = new HashMap<String,FileInformation>(); public void add(FileInformation fileInformation) { fileInformation.setStatus(0); // 0 will indicates that the file is in NEW state. 1 will indicate that the file is in process and so on.. processingMap.put(fileInformation.getXmlFileName(),fileInformation); } public void update(String filename,int status) { FileInformation fileInformation = processingMap.get(filename); fileInformation.setStatus(status); } } That takes care of the model for the sake of explanation. So whats my question? Edited after comments from Loki and answer from Eric : - I would like to know if there are any existing design patterns that I can refer to while coming up with a design. I would also like to know how I should go about designing the status management classes. I am more interested in understanding how I can model the status management classes. I am not interested in how other components are going to be updated about a change in status at the moment as suggested by Eric.

    Read the article

  • Draw a parallel line

    - by VOX
    I have x1,y1 and x2,y2 which form a line segment. How can I get another line x3,y3 - x4,y4 which is parallel to the first line, as in the picture? I can simply add n to x1 and x2 to get a parallel line, but that is not what I wanted. I want the lines to be parallel the way they are in the picture.
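
    A minimal sketch of offsetting the segment along its unit normal, which keeps the new segment parallel to (and equidistant from) the original whatever its direction; the Segment type and function name are illustrative, and it assumes a non-degenerate segment:

        #include <cmath>

        struct Segment { double x1, y1, x2, y2; };

        Segment offsetSegment(const Segment& s, double distance) {
            double dx  = s.x2 - s.x1;
            double dy  = s.y2 - s.y1;
            double len = std::sqrt(dx * dx + dy * dy);   // assumes len > 0
            // Unit normal (perpendicular) to the segment; negate `distance`
            // to offset to the other side.
            double nx = -dy / len;
            double ny =  dx / len;
            return Segment{ s.x1 + nx * distance, s.y1 + ny * distance,
                            s.x2 + nx * distance, s.y2 + ny * distance };
        }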

    Read the article

  • Parallel Assignment operator in Ruby

    - by Bragaadeesh
    Hi, I was going through an example from the Programming in Ruby book. This is that example:
        def fib_up_to(max)
          i1, i2 = 1, 1        # parallel assignment (i1 = 1 and i2 = 1)
          while i1 <= max
            yield i1
            i1, i2 = i2, i1+i2
          end
        end
        fib_up_to(100) {|f| print f, " " }
    The above program simply prints the Fibonacci numbers up to 100. That's fine. My question here is: when I replace the parallel assignment with something like this,
        i1 = i2
        i2 = i1+i2
    I am not getting the desired output. My question here is, is it advisable to use parallel assignments? (I come from a Java background and it feels really weird to see this type of assignment.) One more doubt: is parallel assignment an operator? Thanks

    Read the article

  • How to approach parallel processing of messages?

    - by Dan
    I am redesigning the messaging system for my app to use intel threading building blocks and am stumped trying to decide between two possible approaches. Basically, I have a sequence of message objects and for each message type, a sequence of handlers. For each message object, I apply each handler registered for that message objects type. The sequential version would be something like this (pseudocode): for each message in message_sequence <- SEQUENTIAL for each handler in (handler_table for message.type) apply handler to message <- SEQUENTIAL The first approach which I am considering processes the message objects in turn (sequentially) and applies the handlers concurrently. Pros: predictable ordering of messages (ie, we are guaranteed a FIFO processing order) (potentially) lower latency of processing each message Cons: more processing resources available than handlers for a single message type (bad parallelization) bad use of processor cache since message objects need to be copied for each handler to use large overhead for small handlers The pseudocode of this approach would be as follows: for each message in message_sequence <- SEQUENTIAL parallel_for each handler in (handler_table for message.type) apply handler to message <- PARALLEL The second approach is to process the messages in parallel and apply the handlers to each message sequentially. Pros: better use of processor cache (keeps the message object local to all handlers which will use it) small handlers don't impose as much overhead (as long as there are other handlers also to be run) more messages are expected than there are handlers, so the potential for parallelism is greater Cons: Unpredictable ordering - if message A is sent before message B, they may both be processed at the same time, or B may finish processing before all of A's handlers are finished (order is non-deterministic) The pseudocode is as follows: parallel_for each message in message_sequence <- PARALLEL for each handler in (handler_table for message.type) apply handler to message <- SEQUENTIAL The second approach has more advantages than the first, but non-deterministic ordering is a big disadvantage.. Which approach would you choose and why? Are there any other approaches I should consider (besides the obvious third approach: parallel messages and parallel handlers, which has the disadvantages of both and no real redeeming factors as far as I can tell)? Thanks!
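
    A minimal sketch of the second approach described above (messages in parallel, handlers for each message in sequence), assuming Intel TBB's tbb::parallel_for; the types and the handler table are illustrative, and it assumes handlers for different messages are safe to run concurrently and the table is not modified during processing:

        #include <tbb/parallel_for.h>
        #include <cstddef>
        #include <functional>
        #include <map>
        #include <vector>

        struct Message { int type; /* payload ... */ };
        using Handler = std::function<void(const Message&)>;

        // handler_table[type] -> handlers registered for that message type
        std::map<int, std::vector<Handler>> handler_table;

        void process(const std::vector<Message>& messages) {
            // Messages are distributed across worker threads...
            tbb::parallel_for(std::size_t(0), messages.size(), [&](std::size_t i) {
                const Message& m = messages[i];
                auto it = handler_table.find(m.type);
                if (it == handler_table.end()) return;
                // ...but the handlers for any one message stay sequential,
                // keeping the message hot in that worker's cache.
                for (const Handler& h : it->second)
                    h(m);
            });
        }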

    Read the article

  • iPhone post-processing with a single FBO in OpenGL ES 2.0?

    - by Jing
    I am trying to implement post-processing (blur, bloom, etc.) on the iPhone using OpenGL ES 2.0. I am running into some issues. When rendering during my second rendering step, I end up drawing a completely black quad to the screen instead of the scene (it appears that the texture data is missing) so I am wondering if the cause is using a single FBO. Is it incorrect to use a single FBO in the following fashion? For the first pass (regular scene rendering), I attach a texture as COLOR_ATTACHMENT_0 and render to a texture. glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texturebuffer, 0) For the second pass (post-processing), I attach the color renderbuffer to COLOR_ATTACHMENT_0 glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRenderbuffer) Then use the texture from the first pass for rendering as a quad on the screen.
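
    A hedged sketch of the two-pass flow described above, gathered into one function for clarity; the handle names and helper callbacks are placeholders, not from the original post, and presenting the renderbuffer on screen afterwards is omitted:

        #include <OpenGLES/ES2/gl.h>   /* iOS OpenGL ES 2.0 header */

        void renderFrame(GLuint fbo, GLuint sceneTexture, GLuint colorRenderbuffer,
                         GLuint postProgram, GLint width, GLint height,
                         void (*drawScene)(void), void (*drawFullScreenQuad)(void))
        {
            /* Pass 1: render the scene into the texture attached to the FBO. */
            glBindFramebuffer(GL_FRAMEBUFFER, fbo);
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                   GL_TEXTURE_2D, sceneTexture, 0);
            glViewport(0, 0, width, height);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            drawScene();

            /* Pass 2: switch the attachment to the on-screen color renderbuffer,
             * then sample the pass-1 texture while drawing a full-screen quad.
             * The texture must no longer be the current render target here. */
            glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                      GL_RENDERBUFFER, colorRenderbuffer);
            glClear(GL_COLOR_BUFFER_BIT);
            glUseProgram(postProgram);
            glActiveTexture(GL_TEXTURE0);
            glBindTexture(GL_TEXTURE_2D, sceneTexture);
            glUniform1i(glGetUniformLocation(postProgram, "u_scene"), 0);
            drawFullScreenQuad();
        }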

    Read the article

  • Can a Perl subroutine return data but keep processing?

    - by Perl QuestionAsker
    Is there any way to have a subroutine send data back while still processing? For instance (this example is used simply to illustrate): a subroutine reads a file, and while it is reading through the file, if some condition is met, it "returns" that line and keeps processing. I know there are those who will answer "why would you want to do that?" and "why don't you just ...?", but I really would like to know if this is possible. Thank you so much in advance.

    Read the article

  • Parallel prologue and epilogue in Grid Engine

    - by ajdecon
    We have a cluster being used to run MPI jobs for a customer. Previously this cluster used Torque as the scheduler, but we are transitioning to Grid Engine 6.2u5 (for some other features). Unfortunately, we are having trouble duplicating some of our maintenance scripts in the Grid Engine environment. In Torque, we have a prologue.parallel script which is used to carry out an automated health-check on the node. If this script returns a fail condition, Torque will helpfully offline the node and re-queue the job to use a different group of nodes. In Grid Engine, however, the queue "prolog" only runs on the head node of the job. We can manually run our prologue script from the startmpi.sh initialization script, for the mpi parallel environment; but I can't figure out how to detect a fail condition and carry out the same "mark offline and requeue" procedure. Any suggestions?

    Read the article

  • Networked Parallel Port in Linux / KVM / QEMU

    - by korkman
    What I want to use is the "-parallel" tcp or udp option from KVM / QEMU, but I don't seem to find any server for this client. I don't serve a printer but a hardware dongle. I checked ser2net, which does provide "/dev/lp0" sharing, but it doesn't seem to work for KVM / QEMU. I suspect KVM / QEMU requires "/dev/parport0". I did rmmod lp, modprobe ppdev, linked ser2net to parport0, but it didn't work out. Perhaps ser2net is not suited for this. I tried socat as well, and I tried netcat. No success. Does anyone know any KVM / QEMU compatible parallel port server? Or did any of netcat, socat or ser2net work for you?

    Read the article

  • dpkg: error processing /var/cache/apt/archives/python2.6-minimal_2.6.6-5ubuntu1_i386.deb (--unpack)

    - by udo
    I had an issue (Question 199582) which was resolved. Unfortunately I am stuck at this point now. Running root@X100e:/var/cache/apt/archives# apt-get dist-upgrade Reading package lists... Done Building dependency tree Reading state information... Done Calculating upgrade... Done The following NEW packages will be installed: file libexpat1 libmagic1 libreadline6 libsqlite3-0 mime-support python python-minimal python2.6 python2.6-minimal readline-common 0 upgraded, 11 newly installed, 0 to remove and 0 not upgraded. Need to get 0B/5,204kB of archives. After this operation, 19.7MB of additional disk space will be used. Do you want to continue [Y/n]? Y (Reading database ... 6108 files and directories currently installed.) Unpacking python2.6-minimal (from .../python2.6-minimal_2.6.6-5ubuntu1_i386.deb) ... new installation of python2.6-minimal; /usr/lib/python2.6/site-packages is a directory which is expected a symlink to /usr/local/lib/python2.6/dist-packages. please find the package shipping files in /usr/lib/python2.6/site-packages and file a bug report to ship these in /usr/lib/python2.6/dist-packages instead aborting installation of python2.6-minimal dpkg: error processing /var/cache/apt/archives/python2.6-minimal_2.6.6-5ubuntu1_i386.deb (--unpack): subprocess new pre-installation script returned error exit status 1 Errors were encountered while processing: /var/cache/apt/archives/python2.6-minimal_2.6.6-5ubuntu1_i386.deb E: Sub-process /usr/bin/dpkg returned an error code (1) results in above error. Running root@X100e:/var/cache/apt/archives# dpkg -i python2.6-minimal_2.6.6-5ubuntu1_i386.deb (Reading database ... 6108 files and directories currently installed.) Unpacking python2.6-minimal (from python2.6-minimal_2.6.6-5ubuntu1_i386.deb) ... new installation of python2.6-minimal; /usr/lib/python2.6/site-packages is a directory which is expected a symlink to /usr/local/lib/python2.6/dist-packages. please find the package shipping files in /usr/lib/python2.6/site-packages and file a bug report to ship these in /usr/lib/python2.6/dist-packages instead aborting installation of python2.6-minimal dpkg: error processing python2.6-minimal_2.6.6-5ubuntu1_i386.deb (--install): subprocess new pre-installation script returned error exit status 1 Errors were encountered while processing: python2.6-minimal_2.6.6-5ubuntu1_i386.deb results in above error. Running root@X100e:/var/cache/apt/archives# dpkg -i --force-depends python2.6-minimal_2.6.6-5ubuntu1_i386.deb (Reading database ... 6108 files and directories currently installed.) Unpacking python2.6-minimal (from python2.6-minimal_2.6.6-5ubuntu1_i386.deb) ... new installation of python2.6-minimal; /usr/lib/python2.6/site-packages is a directory which is expected a symlink to /usr/local/lib/python2.6/dist-packages. please find the package shipping files in /usr/lib/python2.6/site-packages and file a bug report to ship these in /usr/lib/python2.6/dist-packages instead aborting installation of python2.6-minimal dpkg: error processing python2.6-minimal_2.6.6-5ubuntu1_i386.deb (--install): subprocess new pre-installation script returned error exit status 1 Errors were encountered while processing: python2.6-minimal_2.6.6-5ubuntu1_i386.deb is not able to fix this. Any clues how to fix this?

    Read the article

  • Getting a NullPointerException at seemingly random intervals, not sure why

    - by Miles
    I'm running an example from a Kinect library for Processing (http://www.shiffman.net/2010/11/14/kinect-and-processing/) and sometimes get a NullPointerException pointing to this line: int rawDepth = depth[offset]; The depth array is created in this line: int[] depth = kinect.getRawDepth(); I'm not exactly sure what a NullPointerException is, and much googling hasn't really helped. It seems odd to me that the code compiles 70% of the time and returns the error unpredictably. Could the hardware itself be affecting it? Here's the whole example if it helps: // Daniel Shiffman // Kinect Point Cloud example // http://www.shiffman.net // https://github.com/shiffman/libfreenect/tree/master/wrappers/java/processing import org.openkinect.*; import org.openkinect.processing.*; // Kinect Library object Kinect kinect; float a = 0; // Size of kinect image int w = 640; int h = 480; // We'll use a lookup table so that we don't have to repeat the math over and over float[] depthLookUp = new float[2048]; void setup() { size(800,600,P3D); kinect = new Kinect(this); kinect.start(); kinect.enableDepth(true); // We don't need the grayscale image in this example // so this makes it more efficient kinect.processDepthImage(false); // Lookup table for all possible depth values (0 - 2047) for (int i = 0; i < depthLookUp.length; i++) { depthLookUp[i] = rawDepthToMeters(i); } } void draw() { background(0); fill(255); textMode(SCREEN); text("Kinect FR: " + (int)kinect.getDepthFPS() + "\nProcessing FR: " + (int)frameRate,10,16); // Get the raw depth as array of integers int[] depth = kinect.getRawDepth(); // We're just going to calculate and draw every 4th pixel (equivalent of 160x120) int skip = 4; // Translate and rotate translate(width/2,height/2,-50); rotateY(a); for(int x=0; x<w; x+=skip) { for(int y=0; y<h; y+=skip) { int offset = x+y*w; // Convert kinect data to world xyz coordinate int rawDepth = depth[offset]; PVector v = depthToWorld(x,y,rawDepth); stroke(255); pushMatrix(); // Scale up by 200 float factor = 200; translate(v.x*factor,v.y*factor,factor-v.z*factor); // Draw a point point(0,0); popMatrix(); } } // Rotate a += 0.015f; } // These functions come from: http://graphics.stanford.edu/~mdfisher/Kinect.html float rawDepthToMeters(int depthValue) { if (depthValue < 2047) { return (float)(1.0 / ((double)(depthValue) * -0.0030711016 + 3.3309495161)); } return 0.0f; } PVector depthToWorld(int x, int y, int depthValue) { final double fx_d = 1.0 / 5.9421434211923247e+02; final double fy_d = 1.0 / 5.9104053696870778e+02; final double cx_d = 3.3930780975300314e+02; final double cy_d = 2.4273913761751615e+02; PVector result = new PVector(); double depth = depthLookUp[depthValue];//rawDepthToMeters(depthValue); result.x = (float)((x - cx_d) * depth * fx_d); result.y = (float)((y - cy_d) * depth * fy_d); result.z = (float)(depth); return result; } void stop() { kinect.quit(); super.stop(); } And here are the errors: processing.app.debug.RunnerException: NullPointerException at processing.app.Sketch.placeException(Sketch.java:1543) at processing.app.debug.Runner.findException(Runner.java:583) at processing.app.debug.Runner.reportException(Runner.java:558) at processing.app.debug.Runner.exception(Runner.java:498) at processing.app.debug.EventThread.exceptionEvent(EventThread.java:367) at processing.app.debug.EventThread.handleEvent(EventThread.java:255) at processing.app.debug.EventThread.run(EventThread.java:89) Exception in thread "Animation Thread" java.lang.NullPointerException at 
org.openkinect.processing.Kinect.enableDepth(Kinect.java:70) at PointCloud.setup(PointCloud.java:48) at processing.core.PApplet.handleDraw(PApplet.java:1583) at processing.core.PApplet.run(PApplet.java:1503) at java.lang.Thread.run(Thread.java:637)

    Read the article

  • Using Parallel Extensions with ThreadStatic attribute. Could it leak memory?

    - by the-locster
    I'm using Parallel Extensions fairly heavily and I've just now encountered a case where using thread-local storage might be sensible to allow re-use of objects by worker threads. As such I was looking at the ThreadStatic attribute, which marks a static field/variable as having a unique value per thread. It seems to me that it would be unwise to use PE with the ThreadStatic attribute without any guarantee of thread re-use by PE. That is, if threads are created and destroyed to some degree, would the variables (and thus the objects they point to) remain in thread-local storage for some indeterminate amount of time, thus causing a memory leak? Or perhaps the thread storage is tied to the threads and disposed of when the threads are disposed? But then you still potentially have threads in a pool that are long-lived and that accumulate thread-local storage from the various pieces of code the threads are used for. Is there a better approach to obtaining thread-local storage with PE? Thank you.

    Read the article

  • How does Task Parallel Library scale on a terminal server or in a web application?

    - by Lasse V. Karlsen
    I understand that the TPL uses work-stealing queues for its tasks when I execute things like Parallel.For and similar constructs. If I understand this correctly, the construct will spin up a number of tasks, each of which will start processing items. If one of the tasks completes its allotted items, it will start stealing items from the other tasks which haven't yet completed theirs. This solves the problem where items 1-100 are cheap to process and items 101-200 are costly, and one of the two tasks would otherwise just sit idle until the other completed. (I know this is a simplified explanation.) However, how will this scale on a terminal server or in a web application (assuming we use TPL in code that would run in the web app)? Can we risk saturating the CPUs with tasks just because there are N instances of our application running side by side? Is there any information on this topic that I should read? I've yet to find anything in particular, but that doesn't mean there is none.

    Read the article
