Search Results

Search found 4860 results on 195 pages for 'parallel extensions'.

Page 3 of 195

  • Parallel port blocking

    - by asalamon74
    I have a legacy Java program which handles a special card printer by sending binary data to the LPT1 port (no printer driver is involved; the Java program creates the binary stream). The program was working correctly with the client's old computer. The Java program sent all the bytes to the printer, and after sending the last byte the program was not blocked. It took another minute to finish the card printing, but the user was able to continue working with the program. After changing the client's computer (but not the printer or the Java program), the program does not finish the task till the card is ready; it is blocked until the last second. It seems to me that LPT1 has a different behavior now than it did before. Is it possible to change this in Windows? I've checked the BIOS for parallel port settings: the parallel port is set to EPP+ECP (but I also tried the other two options: Bidirectional, Output only). Maybe some kind of parallel port buffer is too small? How can I increase it?

  • Read non-blocking from multiple fifos in parallel

    - by Ole Tange
    I sometimes sit with a bunch of output fifos from programs that run in parallel. I would like to merge these fifos. The naïve solution is:

        cat fifo* > output

    But this requires the first fifo to complete before reading the first byte from the second fifo, and this will block the parallel running programs. Another way is:

        (cat fifo1 & cat fifo2 & ... ) > output

    But this may mix the output, giving half-lines in the output. When reading from multiple fifos, there must be some rule for merging the files. Typically doing it on a line-by-line basis is enough for me, so I am looking for something that does:

        parallel_non_blocking_cat fifo* > output

    which will read from all fifos in parallel and merge the output one full line at a time. I can see it is not hard to write that program. All you need to do is:

    1. open all fifos
    2. do a blocking select on all of them
    3. read non-blocking from the fifo which has data into the buffer for that fifo
    4. if the buffer contains a full line (or record), print out the line
    5. if all fifos are closed/EOF: exit
    6. goto 2

    So my question is not: can it be done? My question is: Is it done already, and can I just install a tool that does this?
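
    For reference, a minimal Python sketch of steps 1-6 above (the script name merge_fifos.py is illustrative, not an existing tool; one caveat: each fifo should already have a writer attached when the script starts, or the non-blocking open will see an immediate EOF):

        #!/usr/bin/env python3
        # Sketch of parallel_non_blocking_cat: merge fifos one full line
        # at a time, following steps 1-6 from the question.
        # Usage: merge_fifos.py fifo1 fifo2 ... > output
        import os, select, sys

        def merge_fifos(paths):
            fds, buffers = {}, {}
            for p in paths:                      # step 1: open all fifos
                fd = os.open(p, os.O_RDONLY | os.O_NONBLOCK)
                fds[fd], buffers[fd] = p, b""
            while fds:
                ready, _, _ = select.select(list(fds), [], [])  # step 2
                for fd in ready:
                    chunk = os.read(fd, 65536)   # step 3: non-blocking read
                    if chunk:
                        buffers[fd] += chunk
                        # step 4: emit only complete lines
                        *lines, buffers[fd] = buffers[fd].split(b"\n")
                        for line in lines:
                            sys.stdout.buffer.write(line + b"\n")
                    else:                        # EOF on this fifo (step 5)
                        if buffers[fd]:
                            sys.stdout.buffer.write(buffers[fd] + b"\n")
                        os.close(fd)
                        del fds[fd], buffers[fd]
                sys.stdout.buffer.flush()        # step 6: loop back to select

        if __name__ == "__main__":
            merge_fifos(sys.argv[1:])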

  • Parallel computing in .net

    - by HotTester
    Since the launch of .NET 4.0, a term that has come into the limelight is parallel computing. Does parallel computing provide us real benefits, or is it just another concept or feature? Further, is .NET really going to utilize it in applications? And is parallel computing different from parallel programming? Kindly throw some light on the issue from the perspective of .NET; some examples would be helpful. Thanks...

  • Parallelism in .NET – Part 11, Divide and Conquer via Parallel.Invoke

    - by Reed
    Many algorithms are easily written to work via recursion.  For example, most data-oriented tasks where a tree of data must be processed are much more easily handled by starting at the root, and recursively “walking” the tree.  Some algorithms work this way on flat data structures, such as arrays, as well.  This is a form of divide and conquer: an algorithm design which is based around breaking up a set of work recursively, “dividing” the total work in each recursive step, and “conquering” the work when the remaining work is small enough to be solved easily.

    Recursive algorithms, especially ones based on a form of divide and conquer, are often a very good candidate for parallelization.  This is apparent from a common sense standpoint.  Since we’re dividing up the total work in the algorithm, we have an obvious, built-in partitioning scheme.  Once partitioned, the data can be worked upon independently, so there is good, clean isolation of data.

    Implementing this type of algorithm is fairly simple.  The Parallel class in .NET 4 includes a method suited for this type of operation: Parallel.Invoke.  This method works by taking any number of delegates defined as an Action, and operating them all in parallel.  The method returns when every delegate has completed:

        Parallel.Invoke(
            () => { Console.WriteLine("Action 1 executing in thread {0}",
                Thread.CurrentThread.ManagedThreadId); },
            () => { Console.WriteLine("Action 2 executing in thread {0}",
                Thread.CurrentThread.ManagedThreadId); },
            () => { Console.WriteLine("Action 3 executing in thread {0}",
                Thread.CurrentThread.ManagedThreadId); }
        );

    Running this simple example demonstrates the ease of using this method.  For example, on my system, I get three separate thread IDs when running the above code.  By allowing any number of delegates to be executed directly, concurrently, the Parallel.Invoke method provides us an easy way to parallelize any algorithm based on divide and conquer.  We can divide our work in each step, and execute each task in parallel, recursively.

    For example, suppose we wanted to implement our own quicksort routine.  The quicksort algorithm can be designed based on divide and conquer.  In each iteration, we pick a pivot point, and use that to partition the total array.  We swap the elements around the pivot, then recursively sort the lists on each side of the pivot.
    For example, let’s look at this simple, sequential implementation of quicksort:

        public static void QuickSort<T>(T[] array) where T : IComparable<T>
        {
            QuickSortInternal(array, 0, array.Length - 1);
        }

        private static void QuickSortInternal<T>(T[] array, int left, int right)
            where T : IComparable<T>
        {
            if (left >= right)
            {
                return;
            }

            SwapElements(array, left, (left + right) / 2);

            int last = left;
            for (int current = left + 1; current <= right; ++current)
            {
                if (array[current].CompareTo(array[left]) < 0)
                {
                    ++last;
                    SwapElements(array, last, current);
                }
            }

            SwapElements(array, left, last);

            QuickSortInternal(array, left, last - 1);
            QuickSortInternal(array, last + 1, right);
        }

        static void SwapElements<T>(T[] array, int i, int j)
        {
            T temp = array[i];
            array[i] = array[j];
            array[j] = temp;
        }

    Here, we implement the quicksort algorithm in a very common, divide and conquer approach.  Running this against the built-in Array.Sort routine shows that we get the exact same answers (although the framework’s sort routine is slightly faster).  On my system, for example, I can use the framework’s sort to sort ten million random doubles in about 7.3s, and this implementation takes about 9.3s on average.

    Looking at this routine, though, there is a clear opportunity to parallelize.  At the end of QuickSortInternal, we recursively call into QuickSortInternal with each partition of the array after the pivot is chosen.  This can be rewritten to use Parallel.Invoke by simply changing it to:

        // Code above is unchanged...
        SwapElements(array, left, last);

        Parallel.Invoke(
            () => QuickSortInternal(array, left, last - 1),
            () => QuickSortInternal(array, last + 1, right)
        );

    This routine will now run in parallel.  When executing, we now see the CPU usage across all cores spike while it executes.  However, there is a significant problem here – by parallelizing this routine, we took it from an execution time of 9.3s to an execution time of approximately 14 seconds!  We’re using more resources, as seen in the CPU usage, but the overall result is a dramatic slowdown in overall processing time.

    This occurs because parallelization adds overhead.  Each time we split this array, we spawn two new tasks to parallelize this algorithm!  This is far, far too many tasks for our cores to operate upon at a single time.  In effect, we’re “over-parallelizing” this routine.  This is a common problem when working with divide and conquer algorithms, and leads to an important observation: when parallelizing a recursive routine, take special care not to add more tasks than necessary to fully utilize your system.

    This can be done with a few different approaches, in this case.  Typically, the way to handle this is to stop parallelizing the routine at a certain point, and revert back to the serial approach.  Since the first few recursions will all still be parallelized, our “deeper” recursive tasks will be running in parallel, and can take full advantage of the machine.  This also dramatically reduces the overhead added by parallelizing, since we’re only adding overhead for the first few recursive calls.  There are two basic approaches we can take here.  The first approach would be to look at the total work size, and if it’s smaller than a specific threshold, revert to our serial implementation.  In this case, we could just check right - left, and if it’s under a threshold, call the methods directly instead of using Parallel.Invoke.
    The second approach is to track how “deep” in the “tree” we currently are, and if we are below some number of levels, stop parallelizing.  This approach is more general-purpose, since it works on routines which parse trees as well as routines working off of a single array, but it may not work as well if a poor partitioning strategy is chosen or the tree is not balanced evenly.

    This can be written very easily.  If we pass a maxDepth parameter into our internal routine, we can restrict the number of times we parallelize by changing the recursive call to:

        // Code above is unchanged...
        SwapElements(array, left, last);

        if (maxDepth < 1)
        {
            QuickSortInternal(array, left, last - 1, maxDepth);
            QuickSortInternal(array, last + 1, right, maxDepth);
        }
        else
        {
            --maxDepth;
            Parallel.Invoke(
                () => QuickSortInternal(array, left, last - 1, maxDepth),
                () => QuickSortInternal(array, last + 1, right, maxDepth));
        }

    We no longer allow this to parallelize indefinitely – only to a specific depth, at which time we revert to a serial implementation.  By starting the routine with a maxDepth equal to Environment.ProcessorCount, we can restrict the total number of parallel operations significantly, but still provide adequate work for each processing core.

    With this final change, my timings are much better.  On average, I get the following timings:

    - Framework via Array.Sort: 7.3 seconds
    - Serial Quicksort Implementation: 9.3 seconds
    - Naive Parallel Implementation: 14 seconds
    - Parallel Implementation Restricting Depth: 4.7 seconds

    Finally, we are now faster than the framework’s Array.Sort implementation.

  • Favorite Visual Studio 2010 Extensions, Update

    - by Scott Dorman
    With the release of the Visual Studio Pro Power Tools (and many other new extensions having been released), my list of favorite Visual Studio extensions has changed. All of these extensions are available in the Visual Studio Gallery. Here is the list of extensions that I currently have installed and find useful: Bing Start Page, CodeCompare, Collapse Selection In Solution Explorer, Collapse Solution, Color Picker Completion, Extension Analyzer, Find Results Highlighter, Find Results Tweak (available from CodePlex), Format Document, HelpViewerKeywordIndex, HighlightMultiWord, Image Insertion, Indentation Matcher Extension, ItalicComments, MoveToRegionVSX, Numbered Bookmarks, PowerCommands for Visual Studio 2010, Regular Expressions Margin, Search Work Items for TFS 2010, Source Outliner, Spell Checker, Structure Adornment (this also installs the following extensions: BlockTagger, BlockTaggerImpl, SettingsStore, SettingsStoreImpl), StyleCop, Team Foundation Server Power Tools, TFS Auto Shelve, Visual Studio Color Theme Editor, Visual Studio Pro Power Tools, VS10x Code Map, VS10x Code Marker, VS10x Collapse All Projects, VS10x Editor View Enhancer, VS10x Insert Debug Names, VS10x Selection Popup, VS10x Super Copy Paste, VSCommands 2010, and Word Wrap with Auto-Indent.

  • Favorite Visual Studio 2010 Extensions

    - by Scott Dorman
    Now that Visual Studio 2010 has been released, there are a lot of extensions being written. In fact, as of today (May 1, 2010 at 15:40 UTC) there are 809 results for Visual Studio 2010 in the Visual Studio Gallery. If you filter this list to show just the free items, there are still 251 extensions available. Given that number (and it is currently increasing weekly), it can be difficult to find extensions that are useful. Here is the list of extensions that I currently have installed and find useful: Word Wrap with Auto-Indent, Indentation Matcher Extension, Structure Adornment (this also installs the following extensions: BlockTagger, BlockTaggerImpl, SettingsStore, SettingsStoreImpl), Source Outliner, Triple Click, ItalicComments, Go To Definition, Spell Checker, Remove and Sort Using, Format Document, Open Folder in Windows Explorer, Find Results Highlighter, Regular Expressions Margin, VSCommands, HelpViewerKeywordIndex, StyleCop, Visual Studio Color Theme Editor, PowerCommands for Visual Studio 2010, Extension Analyzer, CodeCompare, Team Foundation Server Power Tools, VS10x Selection Popup, Color Picker Completion, and Numbered Bookmarks.

  • Which databases support parallel processing across multiple servers?

    - by David
    I need a database engine that can utilize multiple servers for processing a single SQL query in parallel. So far I know that this is possible with some engines, though none of them are feasible for me, either because of pricing or missing features. The engines currently known to me are: MS SQL (Enterprise), DB2 (Enterprise), Oracle (Enterprise), GridSQL, and Greenplum. Which other engines have this feature? Do you have any experience with using this feature? Edit: I have now proposed a method for creating one myself. Any input is welcome. Edit: I have found another one: Informix Extended Parallel Server.

  • Parallel Port Problem in 12.04

    - by Frank Oberle
    I have a “dumb” printer attached to a parallel port in my machine which works fine under the “other” resident operating system (from Redmond) on the same machine. I recently added Ubuntu 12.04 as a dual boot on the machine, but Ubuntu doesn't seem to recognize the parallel port at all. All I need to set up a printer is a really plain-vanilla, fixed-pitch, text-only generic driver, which is present, but no parallel ports show up. (The other printers, all on USB ports, seem to work just fine.) Following what appeared to me to be the most reasonable of the many conflicting pieces of advice on the web, here's what I did: I added the following lines to /etc/modules:

        parport_pc
        ppdev
        parport

    Then, after rebooting, I checked to see that the lines were still present, and they were. I ran dmesg | grep par and got the following references in the output that seemed like they might have to do with the parallel port:

        [   14.169511] parport_pc 0000:03:07.0: PCI INT A -> GSI 21 (level, low) -> IRQ 21
        [   14.169516] PCI parallel port detected: 9710:9805, I/O at 0xce00(0xcd00), IRQ 21
        [   14.169577] parport0: PC-style at 0xce00 (0xcd00), irq 21, using FIFO [PCSPP,TRISTATE,COMPAT,ECP]
        [   14.354254] lp0: using parport0 (interrupt-driven).
        [   14.571358] ppdev: user-space parallel port driver
        [   16.588304] type=1400 audit(1347226670.386:5): apparmor="STATUS" operation="profile_load" name="/usr/lib/cups/backend/cups-pdf" pid=964 comm="apparmor_parser"
        [   16.588756] type=1400 audit(1347226670.386:6): apparmor="STATUS" operation="profile_load" name="/usr/sbin/cupsd" pid=964 comm="apparmor_parser"
        [   16.673679] type=1400 audit(1347226670.470:7): apparmor="STATUS" operation="profile_load" name="/usr/lib/lightdm/lightdm/lightdm-guest-session-wrapper" pid=1010 comm="apparmor_parser"
        [   16.675252] type=1400 audit(1347226670.470:8): apparmor="STATUS" operation="profile_load" name="/usr/lib/telepathy/mission-control-5" pid=1014 comm="apparmor_parser"
        [   16.675716] type=1400 audit(1347226670.470:9): apparmor="STATUS" operation="profile_load" name="/usr/lib/telepathy/telepathy-*" pid=1014 comm="apparmor_parser"
        [   16.676636] type=1400 audit(1347226670.474:10): apparmor="STATUS" operation="profile_replace" name="/usr/lib/cups/backend/cups-pdf" pid=1015 comm="apparmor_parser"
        [   16.677124] type=1400 audit(1347226670.474:11): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/cupsd" pid=1015 comm="apparmor_parser"
        [ 1545.725328] parport0: ppdev0 forgot to release port

    I have no idea what any of that means, but the line “parport0: ppdev0 forgot to release port” seems unusual. I was still unable to add a printer for my old clunker, so I tried the direct approach, typing

        echo "Hello" > /dev/lp0

    and received a Permission denied message. I then tried

        echo "Hello" > /dev/parport0

    which didn't give me any message at all, but still didn't print anything. Running the command sudo /usr/lib/cups/backend/parallel gives the following:

        direct parallel:/dev/lp0 "unknown" "LPT #1" "" ""

    Checking the permissions for /dev/parport0, Owner, Group, and Other are all set to read and write:

        crw-rw---- 1 root lp 6, 0 Sep 9 16:37 /dev/lp0
        crw-rw-rw- 1 root lp 99, 0 Sep 9 16:37 /dev/parport0

    The output of the command lpinfo -v includes the following line:

        direct parallel:/dev/lp0

    I've read several web postings that seem to suggest this has been a problem for several years, but the bug reports were closed because there wasn't enough information to address the issue (shades of Microsoft!). Any suggestions as to what I might be missing here?

  • Improve efficiency when using parallel to read from compressed stream

    - by Yoga
    This is another question extending the previous one [1]. I have a compressed file and stream it to feed into a python program, e.g.

        bzcat data.bz2 | parallel --no-notice -j16 --pipe python parse.py > result.txt

    The parse.py can read from stdin continuously and print to stdout. My ec2 instance has 16 cores, but the top command is showing a load average of only 3 to 4. From ps, I am seeing a lot of stuff like:

        sh -c 'dd bs=1 count=1 of=/tmp/7D_YxccfY7.chr 2>/dev/null';

    I know I could use -a in.txt to improve performance, but in my case I am streaming from bz2 (I cannot extract it since I don't have enough disk space). How can I improve the efficiency in my case?

    [1] Gnu parallel not utilizing all the CPU
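
    One alternative, sketched below under the question's own file names (data.bz2, parse.py, result.txt), is to do the fan-out in Python itself: decompress in-process and deal lines round-robin to a fixed pool of parse.py workers. The worker count is an assumption, and since all workers share result.txt, each must write complete lines at a time or their output may interleave:

        #!/usr/bin/env python3
        # Sketch: feed a bz2 stream to N parse.py workers without
        # extracting the archive to disk first.
        import bz2
        import itertools
        import subprocess
        import sys

        NWORKERS = 16  # assumption: one worker per core

        with open("result.txt", "wb") as out:
            # Each worker is an independent `python parse.py` with its own stdin.
            workers = [subprocess.Popen([sys.executable, "parse.py"],
                                        stdin=subprocess.PIPE, stdout=out)
                       for _ in range(NWORKERS)]
            # Decompress in-process; deal lines round-robin to the workers.
            with bz2.open("data.bz2", "rb") as stream:
                for worker, line in zip(itertools.cycle(workers), stream):
                    worker.stdin.write(line)
            for w in workers:
                w.stdin.close()
            for w in workers:
                w.wait()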

  • Save Web Content Directly to Google Drive in Chrome [Extension]

    - by Asian Angel
    Are you looking for a quick and easy way to save images, documents, and more directly to Google Drive while browsing? Then you may want to grab a copy of the ‘Save to Google Drive’ extension for Chrome. Once you have installed the extension it is very easy to start saving all that wonderful web content to your Google Drive account via the Context Menu or the Toolbar Button as seen in the screenshot above. One thing to keep in mind is that the first time you use the extension you will be asked for permission to access your account as seen in the screenshot below. Here is a quick look at the options currently available for the extension…

  • Databases supporting parallel processing across multiple servers

    - by David
    I need a database engine that can utilize multiple servers for processing a single SQL query in parallel. So far I know that this is possible with some engines, though none of them are feasible for me, either because of pricing or missing features. The engines currently known to me are: MS SQL (Enterprise), DB2 (Enterprise), Oracle (Enterprise), GridSQL, and Greenplum.

  • Parallel File Copy

    - by Jon
    I have a list of files I need to copy on a Linux system - each file ranges from 10 to 100GB in size. I only want to copy to the local filesystem. Is there a way to do this in parallel - with multiple processes each responsible for copying a file - in a simple manner? I can easily write a multithreaded program to do this, but I'm interested in finding out if there's a low level Linux method for doing this.
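
    For illustration, a minimal multi-process sketch in Python (file paths and worker count are illustrative, not from the question); note that parallel copies can be slower than serial ones on a single spinning disk because of seek contention:

        #!/usr/bin/env python3
        # Sketch: one whole-file copy per worker process.
        from concurrent.futures import ProcessPoolExecutor
        import shutil

        # Illustrative (source, destination) pairs.
        FILES = [("/data/src/a.img", "/data/dst/a.img"),
                 ("/data/src/b.img", "/data/dst/b.img"),
                 ("/data/src/c.img", "/data/dst/c.img")]

        def copy(pair):
            src, dst = pair
            shutil.copy(src, dst)   # each worker copies one whole file
            return dst

        if __name__ == "__main__":
            with ProcessPoolExecutor(max_workers=4) as pool:
                for done in pool.map(copy, FILES):
                    print("finished", done)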

  • Dns with parallel resolution

    - by viraptor
    I'm looking for a simple caching (it can be caching-only) DNS server which can do parallel resolving on its own. Is there something like that available? Alternatively, I know there's the c-ares library, which can do multiple-host resolution, but it's not a drop-in replacement for libresolv that I could use in the affected software. Maybe there is some other lib which can fulfill this requirement?
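
    Where the affected software itself can be modified, one workaround is to overlap the blocking lookups across threads. A minimal Python sketch, with an illustrative host list:

        #!/usr/bin/env python3
        # Sketch: run blocking DNS lookups in parallel threads so the
        # total wait is roughly the slowest lookup, not the sum.
        from concurrent.futures import ThreadPoolExecutor
        import socket

        HOSTS = ["example.com", "example.org", "example.net"]

        def resolve(host):
            try:
                return host, socket.gethostbyname(host)  # blocking per thread
            except socket.gaierror as exc:
                return host, "error: %s" % exc

        with ThreadPoolExecutor(max_workers=len(HOSTS)) as pool:
            for host, addr in pool.map(resolve, HOSTS):
                print(host, "->", addr)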

  • Is there a canonical book on parallel programming with focus on C++ ?

    - by quant_dev
    I am looking for a good book about parallel programming with focus on C++. Something suitable for a person reasonably good in C++ programming, but with no experience in concurrent software development. On the other hand, I'd prefer a practical book, without loads of silly examples about philosophers eating lunch. Is there a book out there that's the de-facto standard for describing best practices, design methodologies, and other helpful information on parallel programming with focus on C++ ? What about that book makes it special?

  • USB to LPT adapter?

    - by Dave
    I'm bummed out; pretty much all of our computers here lack parallel ports. I have an EETools ChipMax programming tool that has one of the old-school Centronics connectors on the back. I figured that someone must make a USB to LPT adapter. Sure enough, I found one from IOGEAR, the GUC1284B, which is a USB to Parallel Printer cable. Note the boldface on Printer: it must connect to a printer -- it isn't some generic USB to parallel interface, unfortunately. Does anyone here know of an adapter that works for parallel devices that aren't printers? I'd hate to have to buy a USB version of the ChipMax when I don't need to use it very much.

  • Parallel task in C# 4.0

    - by Jalpesh P. Vadgama
    In today’s computing world, everything is about parallel processing. You have multi-core CPUs where different cores do different work in parallel, or work on the same task in parallel. For example, my machine has a 4-core CPU. So the code that I write should take care of this. C# does provide that kind of facility to write code for multi-core CPUs, with the Task Parallel Library. We will explore that in this post. Read More

  • Parallel processing slower than sequential?

    - by zebediah49
    EDIT: For anyone who stumbles upon this in the future: ImageMagick uses an MP (multi-processing) library. It's faster to use available cores if they're around, but if you have parallel jobs, it's unhelpful. Do one of the following:

    - do your jobs serially (with ImageMagick in parallel mode), or
    - set MAGICK_THREAD_LIMIT=1 for your invocation of the ImageMagick binary in question.

    By making ImageMagick use only one thread, it slows down by 20-30% in my test cases, but this meant I could run one job per core without issues, for a significant net increase in performance.

    Original question: While converting some images using ImageMagick, I noticed a somewhat strange effect. Using xargs was significantly slower than a standard for loop. Since xargs limited to a single process should act like a for loop, I tested that, and found it to be about the same. Thus, we have this demonstration.

    - Quad core (AMD Athlon X4, 2.6GHz)
    - Working entirely on a tmpfs (16 GB RAM total; no swap)
    - No other major loads

    Results:

        /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 1 convert -auto-level
        real 0m3.784s
        user 0m2.240s
        sys  0m0.230s

        /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 2 convert -auto-level
        real 0m9.097s
        user 0m28.020s
        sys  0m0.910s

        /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 10 convert -auto-level
        real 0m9.844s
        user 0m33.200s
        sys  0m1.270s

    Can anyone think of a reason why running two instances of this program takes more than twice as long in real time, and more than ten times as long in processor time, to complete the same task? After that initial hit, more processes do not seem to have as significant an effect. I thought it might have to do with disk seeking, so I did that test entirely in RAM. Could it have something to do with how convert works, and having more than one copy at once meaning it cannot use the processor cache as efficiently, or something?

    EDIT: When done with 1000x 769KB files, performance is as expected. Interesting.

        /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 1 convert -auto-level
        real 3m37.679s
        user 5m6.980s
        sys  0m6.340s

        /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 1 convert -auto-level
        real 3m37.152s
        user 5m6.140s
        sys  0m6.530s

        /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 2 convert -auto-level
        real 2m7.578s
        user 5m35.410s
        sys  0m6.050s

        /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 4 convert -auto-level
        real 1m36.959s
        user 5m48.900s
        sys  0m6.350s

        /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 10 convert -auto-level
        real 1m36.392s
        user 5m54.840s
        sys  0m5.650s

  • Parallel port recording to file on Win XP

    - by Nikola Kotur
    Hi there. I need to write a simple program that records all the input from the parallel port into a file. Data flows from an industrial machine, and the setup is fairly simple, but I can't find any good open-source examples of parallel port reading for Windows. Do you know software that does this (and would let me learn how to do it myself), or is there any guideline for parallel port programming on XP? Thanks.

  • OpenGL extensions available on different Android devices

    - by MH114
    I'm in the process of writing an OpenGL ES powered framework for my next Android game(s). Currently I'm supporting three different techniques for drawing sprites:

    1. the basic way: using vertex arrays (slow)
    2. using vertex buffer objects (VBOs) (faster)
    3. using the draw_texture extension (fastest, but only for basic sprites, i.e. no transforming)

    Vertex arrays are supported in OpenGL ES 1.0 and thus in every Android device. I'm guessing most (if not all) of the current devices also support VBOs and draw_texture. Instead of guessing, I'd like to know the extensions supported by different devices. If the majority of devices support VBOs, I could simplify my code and focus only on VBOs + draw_texture. It'd be helpful to know what different devices support, so if you have an Android device, please report its extensions list. :)

        String extensions = gl.glGetString(GL10.GL_EXTENSIONS);

    I've got an HTC Hero, so I can share those extensions next.

  • Reasons for Parallel Extensions working slowly

    - by darja
    I am trying to make my calculation application faster by using Parallel Extensions. I am new to it, so I have just replaced the main foreach loop with Parallel.ForEach. But the calculation became slower. What are common reasons for decreased performance with Parallel Extensions? Thanks

  • Anyone have a database of file extensions & icons to go with the extensions?

    - by neddy
    Ok, I'm developing some software which requires file icons to display lists of files on a computer. I don't want to use the system ExtractAssociatedIcon APIs; I'd rather load the icons for the file extensions out of a database (as some systems may not have certain file types associated, etc.). Does anyone have a database of file extensions and icons to go with the extensions that I can use? Cheers in advance,

  • Parallel software?

    - by mavric
    What is the meaning of "parallel software", and what are the differences between "parallel software" and "regular software"? What are its advantages and disadvantages? Does writing "parallel software" require specific hardware or a specific programming language?
