Search Results

Search found 3521 results on 141 pages for 'parallel computing'.


  • How to execute a command on multiple hosts using IPv6 only?

    - by math
    First of all, there is pdsh, which is essentially a parallel distributed shell that can execute commands on a list of given hosts. However, I find myself in an IPv6-only setting. It seems that pdsh is not able to use IPv6, as I am getting error messages:

      pdsh -w ^hostnames my_command
      pdsh@myhost: gethostbyname("foobar") failed

    I also tried using IPv6 addresses directly, which didn't work either. So how do you run a single shell script for administrative purposes (no SGE stuff, or similar) on a bunch of hosts that are reachable over IPv6 only?
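    One workaround, sketched below, is to skip pdsh and fan the command out over plain ssh yourself. This is a minimal, hypothetical sketch (host names invented), assuming a recent JDK and OpenSSH, whose -6 flag forces IPv6; BatchMode assumes key-based logins so that no password prompt blocks an unattended run:

      import java.util.List;
      import java.util.concurrent.ExecutorService;
      import java.util.concurrent.Executors;
      import java.util.concurrent.TimeUnit;

      public class ParallelSsh {
          public static void main(String[] args) throws Exception {
              // Hypothetical IPv6-only hosts; replace with your own list.
              List<String> hosts = List.of("node1.example.net", "node2.example.net");
              String command = "uptime";

              ExecutorService pool = Executors.newFixedThreadPool(hosts.size());
              for (String host : hosts) {
                  pool.submit(() -> {
                      // "-6" forces OpenSSH to use IPv6; BatchMode avoids
                      // interactive password prompts on unattended runs.
                      Process p = new ProcessBuilder(
                              "ssh", "-6", "-o", "BatchMode=yes", host, command)
                              .inheritIO().start();
                      return p.waitFor();
                  });
              }
              pool.shutdown();
              pool.awaitTermination(5, TimeUnit.MINUTES);
          }
      }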

    Read the article

  • Cloud hosting and single hardware point of failure?

    - by PeterB
    From talking to sales I thought Rackspace Cloud ran on a SAN plus compute nodes (as VMware's offerings do), only to find out it doesn't, so when the host server goes down for maintenance, all cloud servers on that host go down (in our case for 2.5 hours). I understand Amazon EC2 has this single-server point of failure as well. Which cloud hosting solutions don't rely on a single server? I've yet to find a list by architecture. Is there a term that distinguishes between these types of 'cloud'? Is one of these 'grid computing' and the other 'virtualisation'? Can a SAN-backed solution provide the same reliability as two mirrored cloud servers on (say) Rackspace Cloud? I am more familiar with the VMware architecture and would like to understand the advantages and disadvantages of each approach. I understand the standard architecture is to have multiple cloud servers with mirrored data between them; until we need multiple database servers, I'm wondering whether a SAN/node hosting solution would give us the uptime we need without the added complexity.

    Read the article

  • WinXP: Error 1167 -- Device (LPT1) not connected

    - by Thomas Matthews
    I am writing a program that opens LPT1 and writes a value to it. The WriteFile function is returning error code 1167, "The device is not connected". The Device Manager shows that LPT1 is present. I have a cable connected between a development board and the PC; the cable converts JTAG pin signals to parallel-port signals. The cable is connected, power is applied, and the development board is powered on. I am using Windows XP and MS Visual Studio 2008 (C language, console application, debug environment). Here are the relevant code fragments:

      HANDLE parallel_port_handle;

      void initializePort(void)
      {
          TCHAR * port_name = TEXT("LPT1:");
          parallel_port_handle = CreateFile(
              port_name,
              GENERIC_READ | GENERIC_WRITE,
              0,              // must be opened with exclusive access
              NULL,           // default security attributes
              OPEN_EXISTING,  // must use OPEN_EXISTING
              0,              // not overlapped I/O
              NULL            // hTemplate must be NULL for comm devices
          );
          if (parallel_port_handle == INVALID_HANDLE_VALUE)
          {
              // Handle the error.
              printf("CreateFile failed with error %d.\n", GetLastError());
              Pause();
              exit(1);
          }
          return;
      }

      void writePort(unsigned char a_ucPins, unsigned char a_ucValue)
      {
          DWORD dwResult;
          if (a_ucValue)
          {
              g_siIspPins = (unsigned char) (a_ucPins | g_siIspPins);
          }
          else
          {
              g_siIspPins = (unsigned char) (~a_ucPins & g_siIspPins);
          }
          /* This is sample code for Windows/DOS without a Windows driver. */
          // _outp( g_usOutPort, g_siIspPins );
          //------------------------------------------------------------
          // For Windows XP and later
          //------------------------------------------------------------
          if (!WriteFile(parallel_port_handle, &g_siIspPins, 1, &dwResult, NULL))
          {
              printf("Could not write to LPT1 (error %d)\n", GetLastError());
              Pause();
              return;
          }
      }

    If you believe this should be posted on Stack Overflow, please migrate it over (thanks).

    Read the article

  • How do I get Java to use my multi-core processor?

    - by Rudiger
    I'm using a GZIPInputStream in my program, and I know the performance would improve if I could get Java to run my program in parallel. In general, is there a command-line option for the standard VM to run on many cores? It's running on just one as it is. Thanks!

    Edit: I'm running plain ol' Java SE 6 update 17 on Windows XP. Would putting the GZIPInputStream on a separate thread explicitly help? No! Do not put the GZIPInputStream on a separate thread! Do NOT multithread I/O!

    Edit 2: I suppose I/O is the bottleneck, as I'm reading and writing to the same disk... In general, though, is there a way to make GZIPInputStream faster? Or a replacement for GZIPInputStream that runs in parallel?

    Edit 3: Code snippet I used:

      GZIPInputStream gzip = new GZIPInputStream(new FileInputStream(INPUT_FILENAME));
      DataInputStream in = new DataInputStream(new BufferedInputStream(gzip));
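    For context, a single gzip stream can only be decompressed sequentially, so no VM flag will spread it across cores; the usual way to use several cores is to process several independent files at once. A minimal sketch of that, with invented file names and assuming a recent JDK:

      import java.io.BufferedInputStream;
      import java.io.IOException;
      import java.io.InputStream;
      import java.nio.file.Files;
      import java.nio.file.Path;
      import java.util.List;
      import java.util.concurrent.ExecutorService;
      import java.util.concurrent.Executors;
      import java.util.concurrent.TimeUnit;
      import java.util.zip.GZIPInputStream;

      public class ParallelGunzip {
          public static void main(String[] args) throws Exception {
              // Hypothetical inputs: one task per independent .gz file.
              List<Path> inputs = List.of(Path.of("a.gz"), Path.of("b.gz"), Path.of("c.gz"));

              ExecutorService pool = Executors.newFixedThreadPool(
                      Runtime.getRuntime().availableProcessors());
              for (Path gz : inputs) {
                  pool.submit(() -> {
                      // A larger buffer also helps the single-stream case a little.
                      try (InputStream in = new BufferedInputStream(
                              new GZIPInputStream(Files.newInputStream(gz), 64 * 1024))) {
                          byte[] buf = new byte[64 * 1024];
                          while (in.read(buf) != -1) {
                              // ...process the decompressed bytes...
                          }
                      } catch (IOException e) {
                          e.printStackTrace();
                      }
                  });
              }
              pool.shutdown();
              pool.awaitTermination(1, TimeUnit.HOURS);
          }
      }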

    Read the article

  • Does the chunk of the System.Collections.Concurrent.Partitioner need to be thread safe?

    - by Scott Chamberlain
    I am working with the Parallel libraries in .NET 4 and I am creating a Partitioner. The example shown on MSDN only has a chunk size of 1 (every time a new result is retrieved, it hits the data source instead of a local cache). The version I am writing will pull 10000 SQL rows at a time, then feed rows from the cache until it is empty, then pull another batch. Each partition in the Partitioner has its own chunk. I know that every call to the IEnumerator over the SQL data source needs to be thread safe, but for use in a Parallel.ForEach, do I also need to make every call to the per-partition cache thread safe?
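    The question is about .NET, but the chunking idea can be sketched in Java to show where the locking has to live: the fetch from the shared source is synchronized, while each partition's private cache is touched by exactly one thread and needs no lock. All names and sizes below are invented, and a recent JDK is assumed:

      import java.util.ArrayDeque;
      import java.util.List;
      import java.util.concurrent.ExecutorService;
      import java.util.concurrent.Executors;
      import java.util.concurrent.TimeUnit;
      import java.util.stream.IntStream;

      public class ChunkedPartitions {
          static final int CHUNK = 10_000;
          static final int TOTAL_ROWS = 100_000;
          static int nextRow = 0; // shared cursor into the "table"

          // The only synchronized step: pull the next batch from the shared source.
          static synchronized List<Integer> fetchBatch() {
              if (nextRow >= TOTAL_ROWS) return List.of();
              int from = nextRow;
              nextRow = Math.min(nextRow + CHUNK, TOTAL_ROWS);
              return IntStream.range(from, nextRow).boxed().toList();
          }

          public static void main(String[] args) throws Exception {
              int partitions = Runtime.getRuntime().availableProcessors();
              ExecutorService pool = Executors.newFixedThreadPool(partitions);
              for (int p = 0; p < partitions; p++) {
                  pool.submit(() -> {
                      // Per-partition cache: only this thread reads it, so no lock.
                      ArrayDeque<Integer> cache = new ArrayDeque<>();
                      while (true) {
                          if (cache.isEmpty()) {
                              List<Integer> batch = fetchBatch();
                              if (batch.isEmpty()) break; // source exhausted
                              cache.addAll(batch);
                          }
                          int row = cache.poll();
                          // ...work on 'row'...
                      }
                  });
              }
              pool.shutdown();
              pool.awaitTermination(1, TimeUnit.MINUTES);
          }
      }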

    Read the article

  • pipelined function

    - by user289429
    Can someone provide an example of how to use a parallel table function in Oracle PL/SQL? We need to run a massive query for each of 15 years and combine the results. Effectively, we want:

      SELECT * FROM TABLE(TableFunction(CURSOR(SELECT * FROM year_table)))

    The innermost SELECT yields all the years, and the table function takes each year, runs the massive query, and returns a collection. The problem we have is that all years are being fed to a single invocation of the table function; we would prefer the table function to be called in parallel for each year. We tried all sorts of partitioning by hash and range, and it didn't help. Also, can we drop the keyword PIPELINED from the function declaration, since we are not performing any transformation and just need the aggregate of the result set?

    Read the article

  • Do you know any build systems with decent support for parallelization?

    - by dahpgjgamgan
    Hi, I am looking for a build system (working on MS Windows) that has good support for parallelization of tasks/targets (or whatever you call them). To be more specific: during a build (initiated on an MS Windows machine) I need to copy source files to a number of different machines (which are not necessarily running Windows) and start a remote job on each of them, and I would really like to do that on all machines at once. Does anyone know a build system capable of executing such a task in parallel? From what I googled, the options currently available are:

    - the -j switch in make, but I don't know if nmake supports this
    - some custom NAnt tasks
    - msbuild has some form of support for parallelization; it seems similar to make (meaning you don't specify what to do in parallel, just indicate that it would be nice to build things that way)
    - FAKE (F# Make) is written in a functional programming language, and such languages are known for good parallelization support, but I'm not very skillful in the functional programming area

    Any other solutions I could explore?

    Read the article

  • In the days of modern computing, in 'typical business apps' - why does performance matter?

    - by Prog
    This may seem like an odd question to some of you. I'm a hobbyist Java programmer. I have developed several games, an AI program that creates music, another program for painting, and similar stuff. This is to tell you that I have experience in programming, but not in the professional development of business applications.

    I see a lot of talk on this site about performance. People often debate what would be the most efficient algorithm in C# to perform a task, or why Python is slow and Java is faster, etc. What I'm trying to understand is: why does this matter? There are specific areas of computing where I see why performance matters: games, where tens of thousands of computations happen every second in a constant update loop, or low-level systems that other programs rely on, such as OSs and VMs. But for the normal, typical, high-level business app, why does performance matter?

    I can understand why it used to matter, decades ago. Computers were much slower and had much less memory, so you had to think carefully about these things. But today, we have so much memory to spare and computers are so fast: does it actually matter if a particular Java algorithm is O(n^2)? Will it actually make a difference to the end users of this typical business app? When you press a GUI button in a typical business app, and behind the scenes it invokes an O(n^2) algorithm, in these days of modern computing, do you actually feel the inefficiency?

    My question is split in two: In practice, does performance matter today in a typical, normal business program? If it does, please give me real-world examples of places in such an application where performance and optimizations are important.
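    To make the question concrete, here is the kind of place where it does bite in ordinary business code: an accidentally quadratic loop over a large result set. A self-contained Java sketch (sizes invented) comparing O(n^2) string concatenation with the O(n) StringBuilder version:

      public class QuadraticDemo {
          public static void main(String[] args) {
              int n = 100_000; // e.g. rows in a report

              // O(n^2): each += copies the whole string built so far.
              long t0 = System.nanoTime();
              String report = "";
              for (int i = 0; i < n; i++) {
                  report += i + "\n";
              }
              long quadraticMs = (System.nanoTime() - t0) / 1_000_000;

              // O(n): StringBuilder appends in amortized constant time.
              t0 = System.nanoTime();
              StringBuilder sb = new StringBuilder();
              for (int i = 0; i < n; i++) {
                  sb.append(i).append('\n');
              }
              long linearMs = (System.nanoTime() - t0) / 1_000_000;

              System.out.println("O(n^2) concat: " + quadraticMs + " ms");
              System.out.println("O(n) builder:  " + linearMs + " ms");
          }
      }

    On typical hardware the first loop takes seconds while the second takes milliseconds, and the gap grows with n; whether users feel it depends entirely on n.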

    Read the article

  • Why is Python used for high-performance/scientific computing (but Ruby isn't)?

    - by Cyclops
    There's a quote from a PyCon 2011 talk that goes: "At least in our shop (Argonne National Laboratory) we have three accepted languages for scientific computing. In this order they are C/C++, Fortran in all its dialects, and Python. You'll notice the absolute and total lack of Ruby, Perl, Java." It was in the more general context of high-performance computing. Granted, the quote is only from one shop, but another question about languages for HPC also lists Python as one to learn (and not Ruby). Now, I can understand C/C++ and Fortran being used in that problem space (and Perl/Java not being used). But I'm surprised that there would be a major difference in Python and Ruby use for HPC, given that they are fairly similar. (Note: I'm a fan of Python, but have nothing against Ruby.) Is there some specific reason why the one language took off? Is it about the libraries available? Some specific language features? The community? Or maybe just historical contingency, and it could have gone the other way?

    Read the article

  • Why does Clojure hang after having performed my calculations?

    - by Thomas
    Hi all, I'm experimenting with filtering elements in parallel. For each element, I need to perform a distance calculation to see if it is close enough to a target point. Never mind that data structures already exist for doing this; I'm just doing initial experiments for now. Anyway, I wanted to run some very basic experiments where I generate random vectors and filter them. Here's my implementation that does all of this:

      (defn pfilter [pred coll]
        (map second
             (filter first
                     (pmap (fn [item] [(pred item) item]) coll))))

      (defn random-n-vector [n]
        (take n (repeatedly rand)))

      (defn distance [u v]
        (Math/sqrt (reduce + (map #(Math/pow (- %1 %2) 2) u v))))

      (defn -main [& args]
        (let [[n-str vectors-str threshold-str] args
              n (Integer/parseInt n-str)
              vectors (Integer/parseInt vectors-str)
              threshold (Double/parseDouble threshold-str)
              random-vector (partial random-n-vector n)
              u (random-vector)]
          (time
            (println n vectors
                     (count (pfilter (fn [v] (< (distance u v) threshold))
                                     (take vectors (repeatedly random-vector))))))))

    The code executes and returns what I expect: the parameter n (the length of the vectors), vectors (the number of vectors), and the number of vectors that are closer than the threshold to the target vector. What I don't understand is why the program hangs for an additional minute before terminating. Here is the output of a run which demonstrates the problem:

      $ time lein run 10 100000 1.0
      [null] 10 100000 12283
      [null] "Elapsed time: 3300.856 msecs"

      real    1m6.336s
      user    0m7.204s
      sys     0m1.495s

    Any comments on how to filter in parallel in general are also more than welcome, as I haven't yet confirmed that pfilter actually works.

    Read the article

  • How exactly do MbUnit's [Parallelizable] and DegreeOfParallelism work?

    - by BenA
    I thought I understood how MbUnit's parallel test execution worked, but the behaviour I'm seeing differs enough from my expectation that I suspect I'm missing something! I have a set of UI tests that I wish to run concurrently. All of the tests are in the same assembly, split across three different namespaces. All of the tests are completely independent of one another, so I'd like all of them to be eligible for parallel execution. To that end, I put the following in AssemblyInfo.cs:

      [assembly: DegreeOfParallelism(8)]
      [assembly: Parallelizable(TestScope.All)]

    My understanding was that this combination of assembly attributes should cause all of the tests to be considered [Parallelizable], and that the test runner should use 8 threads during execution. My individual tests are marked with the [Test] attribute and nothing else. None of them are data-driven. However, what I actually see is at most 5-6 threads being used, meaning that my test runs take longer than they should. Am I missing something? Do I need to do anything else to ensure that all 8 threads are used by the runner?

    N.B. The behaviour is the same irrespective of which runner I use. The GUI, command line and TD.Net runners all behave as described above, again leading me to think I've missed something.

    EDIT: As pointed out in the comments, I'm running v3.1 of MbUnit (update 2, build 397). The documentation suggests that the assembly-level [Parallelizable] attribute is available, but it does also seem to reference v3.2 of the framework despite that not yet being available.

    EDIT 2: To further clarify, the structure of my assembly is as follows:

      assembly
        - namespace
          - fixture
            - tests (each carrying only the [Test] attribute)
          - fixture
            - tests (each carrying only the [Test] attribute)
        - namespace
          - fixture
            - tests (each carrying only the [Test] attribute)
          - fixture
            - tests (each carrying only the [Test] attribute)
        - namespace
          - fixture
            - tests (each carrying only the [Test] attribute)
          - fixture
            - tests (each carrying only the [Test] attribute)

    Read the article

  • [C++][OpenMP] Proper use of "atomic directive" to lock STL container

    - by conradlee
    I have a large number of sets of integers, which I have, in turn, put into a vector of pointers. I need to be able to update these sets of integers in parallel without causing a race condition. More specifically, I am using OpenMP's "parallel for" construct. For dealing with shared resources, OpenMP offers a handy "atomic directive", which allows one to avoid a race condition on a specific piece of memory without using locks. It would be convenient if I could use the atomic directive to prevent simultaneous updates to my integer sets; however, I'm not sure whether this is possible. Basically, I want to know whether the following code could lead to a race condition:

      vector< set<int>* > membershipDirectory(numSets, new set<int>);

      #pragma omp for schedule(guided, expandChunksize)
      for (int i = 0; i < 100; i++) {
          set<int>* sp = membershipDirectory[5];
          #pragma omp atomic
          sp->insert(45);
      }

    (Apologies for any syntax errors in the code; I hope you get the point.) I have seen a similar example of this for incrementing an integer, but I'm not sure whether it works when working with a pointer to a container, as in my case.

    Read the article

  • Parallelizing for loop

    - by vman049
    I have MATLAB code which I'm trying to parallelize with a simple change from "for" to "parfor". I'm unable to do so because of an error I'm receiving on the variable "votes", which states:

      Valid indices for 'votes' are restricted in PARFOR loops.
      Explanation: For MATLAB to execute parfor loops efficiently, the amount of data sent to the MATLAB workers must be minimal. One of the ways MATLAB achieves this is by restricting the way variables can be indexed in parfor iterations. The indicated variable is indexed in a way that is incompatible with parfor.
      Suggested Action: Fix the indexing. For a description of the indexing restrictions, see "Sliced Variables" in the Parallel Computing Toolbox documentation.

    Below is my code:

      votes = zeros(num_layers, size(spikes, 1), size(SVMs_layer1, 1));
      predDir = zeros(size(spikes, 1), 1);
      chronProb = zeros([num_layers, size(chronDists)]);

      for i = 1:num_layers
          switch i
              case 1
                  B = B1; k_elem_temp = k_elem1; rest_elem_temp = rest_elem1;
              case 2
                  B = B2; k_elem_temp = k_elem2; rest_elem_temp = rest_elem2;
              case 3
                  B = B3; k_elem_temp = k_elem3; rest_elem_temp = rest_elem3;
          end
          for j = 1:length(chronPred)
              if chronDists(i, j, :) ~= 0
                  parfor k = 1:8
                      chronProb(i, j, k) = logistic(B{k}(1) + chronDists(i, j, k).*(B{k}(2)));
                      votes(i, j, k_elem_temp(k, :)) = votes(i, j, k_elem_temp(k, :)) + chronProb(i, j, k)/num_k(i)/num_layers;
                      votes(i, j, rest_elem_temp(k, :)) = votes(i, j, rest_elem_temp(k, :)) + (1 - chronProb(i, j, k))/num_rest(i)/num_layers;
                  end
              end
          end
      end

    Do you have any suggestions as to how I could adjust my code so that it runs in parallel? Thank you!

    Read the article

  • Why don't xUnit frameworks allow tests to run in parallel?

    - by Xavier Nodet
    Do you know of any xUnit framework that allows running tests in parallel, to make use of the multiple cores in today's machines? I don't... If none (or so few) of them do it, maybe there is a reason... Is it that tests are usually so quick that people simply don't feel the need to parallelize them? Or is there something deeper that precludes distributing (at least some of) the tests over multiple threads? Thanks!
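    At least one does: JUnit 4 ships an experimental ParallelComputer that runs test classes (or methods) on multiple threads. A minimal sketch with invented test classes, assuming JUnit 4 on the classpath:

      import org.junit.Test;
      import org.junit.experimental.ParallelComputer;
      import org.junit.runner.JUnitCore;

      public class ParallelTests {
          // Two hypothetical test classes with independent, slow tests.
          public static class SuiteA {
              @Test public void slowA() throws Exception { Thread.sleep(1000); }
          }
          public static class SuiteB {
              @Test public void slowB() throws Exception { Thread.sleep(1000); }
          }

          public static void main(String[] args) {
              // ParallelComputer.classes() runs the test classes on separate
              // threads; ParallelComputer.methods() would instead parallelize
              // the methods within each class.
              JUnitCore.runClasses(ParallelComputer.classes(), SuiteA.class, SuiteB.class);
          }
      }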

    Read the article

  • SAPPHIRE NOW: SAP opens a site for testing HANA, its In-Memory Computing solution at the heart of its "innovation strategy"

    SAPPHIRE NOW: SAP opens a site for testing HANA, its In-Memory Computing solution at the heart of its "innovation strategy". Another day, another atmosphere at SAP's SAPPHIRE NOW, currently being held in Madrid. While yesterday's presentation by Jim Hagemann Snabe was placed under the sign of science fiction, today's, by Vishal Sikka, executive member of SAP's Board, was placed under that of mythology and ancient Greece. The message, however, confirmed the one introduced by the company's co-CEO: Cloud, Mobility and In-Memory...

    Read the article

  • Apache Cassandra version 0.7 released, the flagship NoSQL DBMS of cloud computing, used by Facebook and Twitter

    Apache Cassandra version 0.7 released. The flagship NoSQL DBMS of cloud computing, used by Facebook and Twitter. Updated 12/01/2011 by Idelways. The Apache Foundation has just announced the availability of version 0.7 of Cassandra, the NoSQL database management system designed to handle massive amounts of data. Among the new features of this version is the arrival of secondary indexes, an efficient way of running queries against data stored locally on the nodes. Another new feature, support for "large rows", allows up to 2 billion...

    Read the article

  • Branching strategy for parallel development that won't be in the same release?

    - by Telastyn
    My team is working on a product, which for business reasons needs to be released on a regular schedule. An issue has arisen where we want to do development in parallel for the upcoming release, as well as the 'next' release. This is to become standard practice, so it's not as straightforward as cutting a feature branch for the new work. We'll continually have 2+ teams working on different releases of the same product. Is there an SCM best practice for this sort of arrangement?

    Read the article

  • Relationship between "Task Parallel Library" and "Task-based Asynchronous Pattern"?

    - by Sid
    In the context of C# and .NET 4/4.5 used for an application running on a web server, what is the relationship between the "Task Parallel Library" and the "Task-based Asynchronous Pattern"? I understand one is a library and the other is a pattern. But to dig deeper: is it that "the library is used by the pattern to enforce good practices"? I'm also not clear on whether both are supported in .NET 4.0 (with the async and await keywords). Edit: It seems that async and await are only in .NET 4.5 ...

    Read the article

  • Oracle Cloud Computing Summit (June 15)

    - by kiyoshi.nira
    [Japanese announcement; most of the original text was lost to character encoding.] Oracle Cloud Computing Summit. Date: June 15, 2010 (Tue), 10:00-16:30 (reception from 9:30). Venue: Tokyo (〒105-8563, ...4-8-1) (map). Audience: CIOs and IT managers. -> Oracle Cloud Computing Summit

    Read the article

  • How do you explain more advanced computing concepts to a non super user?

    - by EvilChookie
    I often have to explain computing concepts to non-super-users, and I usually do it by relating computing concepts to real-life situations. I wouldn't mind seeing how other super users do it, and some really good explanations might come in handy instead of me having to wing it. So, how do you explain advanced computing topics to 'normal' people? Notes: one explanation per answer, and let the best float to the top. CW turned on, since this is subjective. Also, feel free to edit my tags if you can think of better ones =)

    Read the article

  • STL algorithms and concurrent programming

    - by Andrew
    Hello everyone, can any of the STL algorithms or container operations, such as std::fill or std::transform, be executed in parallel if I enable OpenMP for my compiler? I am working with MSVC 2008 at the moment. Or maybe there are other ways to make them concurrent? Thanks.
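    The question targets C++/OpenMP, but for comparison, the same data-parallel fill/transform/sort pattern is built into Java's standard library; a minimal sketch:

      import java.util.Arrays;

      public class ParallelAlgorithms {
          public static void main(String[] args) {
              double[] data = new double[10_000_000];

              // Analogue of a parallel std::fill / std::generate:
              // each element is computed on the common fork-join pool.
              Arrays.parallelSetAll(data, i -> Math.sqrt(i));

              // Analogue of a parallel std::transform, via parallel streams.
              double sum = Arrays.stream(data).parallel().map(x -> x * 2).sum();

              // Analogue of a parallel std::sort.
              Arrays.parallelSort(data);

              System.out.println("sum = " + sum);
          }
      }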

    Read the article
