Search Results

Search found 2261 results on 91 pages for 'numerical computing'.

  • General purpose physics engine

    - by Lucas
    Is there any general-purpose physics engine that allows huge simulations of rigid bodies? I'm using PhysX from Nvidia, but the focus of that engine is game development and soft bodies. I want to know whether there exists a physics engine that runs on top of the PS3's Cell processor or on CUDA cores, allowing massive scientific physics simulations.

  • "Unable to associated Elastic IP with cluster" in Eclipse Plugin Tutorial

    - by Jeffrey Chee
    Hi all, I am currently trying to evaluate AWS for my company and was trying to follow the tutorials on the web: http://developer.amazonwebservices.com/connect/entry.jspa?externalID=2241 However, I get the error below during startup of the server instance:

        Unable to associated Elastic IP with cluster: Unable to detect that the Elastic IP was correctly associated.
        java.lang.Exception: Unable to detect that the Elastic IP was correctly associated
            at com.amazonaws.ec2.cluster.Cluster.associateElasticIp(Cluster.java:802)
            at com.amazonaws.ec2.cluster.Cluster.start(Cluster.java:311)
            at com.amazonaws.eclipse.wtp.ElasticClusterBehavior.launch(ElasticClusterBehavior.java:611)
            at com.amazonaws.eclipse.wtp.Ec2LaunchConfigurationDelegate.launch(Ec2LaunchConfigurationDelegate.java:47)
            at org.eclipse.debug.internal.core.LaunchConfiguration.launch(LaunchConfiguration.java:853)
            at org.eclipse.debug.internal.core.LaunchConfiguration.launch(LaunchConfiguration.java:703)
            at org.eclipse.debug.internal.core.LaunchConfiguration.launch(LaunchConfiguration.java:696)
            at org.eclipse.wst.server.core.internal.Server.startImpl2(Server.java:3051)
            at org.eclipse.wst.server.core.internal.Server.startImpl(Server.java:3001)
            at org.eclipse.wst.server.core.internal.Server$StartJob.run(Server.java:300)
            at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)

    Then, after a while, another error occurs:

        Unable to publish server configuration files: Unable to copy remote file after trying 4 times.
        Local file: 'XXXXXXXX/XXX.zip'
        Results from first attempt: Unexpected exception: java.net.ConnectException: Connection timed out: connect
        root cause: java.net.ConnectException: Connection timed out: connect
            at com.amazonaws.eclipse.ec2.RemoteCommandUtils.copyRemoteFile(RemoteCommandUtils.java:128)
            at com.amazonaws.eclipse.wtp.tomcat.Ec2TomcatServer.publishServerConfiguration(Ec2TomcatServer.java:172)
            at com.amazonaws.ec2.cluster.Cluster.publishServerConfiguration(Cluster.java:369)
            at com.amazonaws.eclipse.wtp.ElasticClusterBehavior.publishServer(ElasticClusterBehavior.java:538)
            at org.eclipse.wst.server.core.model.ServerBehaviourDelegate.publish(ServerBehaviourDelegate.java:866)
            at org.eclipse.wst.server.core.model.ServerBehaviourDelegate.publish(ServerBehaviourDelegate.java:708)
            at org.eclipse.wst.server.core.internal.Server.publishImpl(Server.java:2731)
            at org.eclipse.wst.server.core.internal.Server$PublishJob.run(Server.java:278)
            at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)

    Can anyone point me to what I'm doing wrong? I followed the tutorials and the video tutorials on YouTube exactly. Best Regards, ~Jeffrey

  • Hadoop safemode recovery - taking a lot of time

    - by Algorist
    Hi, We are running our cluster on Amazon EC2 and use the Cloudera scripts to set up Hadoop. On the master node, we start the services below:

        $AS_HADOOP '"$HADOOP_HOME"/bin/hadoop-daemon.sh start namenode'
        $AS_HADOOP '"$HADOOP_HOME"/bin/hadoop-daemon.sh start secondarynamenode'
        $AS_HADOOP '"$HADOOP_HOME"/bin/hadoop-daemon.sh start jobtracker'
        $AS_HADOOP '"$HADOOP_HOME"/bin/hadoop dfsadmin -safemode wait'

    On the slave machines, we run these services:

        $AS_HADOOP '"$HADOOP_HOME"/bin/hadoop-daemon.sh start datanode'
        $AS_HADOOP '"$HADOOP_HOME"/bin/hadoop-daemon.sh start tasktracker'

    The main problem we are facing is that HDFS safemode recovery takes more than an hour, and this delays our job completion. These are the main log messages:

        1. domU-12-31-39-0A-34-61.compute-1.internal 10/05/05 20:44:19 INFO ipc.Client: Retrying connect to server: ec2-184-73-64-64.compute-1.amazonaws.com/10.192.11.240:8020. Already tried 21 time(s).
        2. The reported blocks 283634 needs additional 322258 blocks to reach the threshold 0.9990 of total blocks 606499. Safe mode will be turned off automatically.

    The first message appears in the tasktrackers' logs because the jobtracker is not started; the jobtracker doesn't start because of the HDFS safemode recovery. The second message is logged during the recovery process itself. Is there something I am doing wrong? How much time does normal HDFS safemode recovery take? Would there be any speedup from not starting the tasktrackers until the jobtracker is up? Are there any known Hadoop problems on Amazon clusters? Thanks for your help. Regards, Bala Mudiam
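
    The 'hadoop dfsadmin -safemode wait' step can also be done from Java, which makes it easy to gate the jobtracker (and, in turn, the tasktrackers) on the namenode actually leaving safe mode. A minimal sketch against the Hadoop 0.20-era API (the class and enum names below are from that release; this is illustrative and does not shorten the recovery itself):

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.hdfs.DistributedFileSystem;
        import org.apache.hadoop.hdfs.protocol.FSConstants;

        // Polls the namenode until it reports that safe mode is off, then returns,
        // so dependent daemons can be started only once HDFS is actually writable.
        public class WaitForSafeMode {
            public static void main(String[] args) throws Exception {
                Configuration conf = new Configuration(); // reads core-site.xml / hdfs-site.xml
                DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);
                while (dfs.setSafeMode(FSConstants.SafeModeAction.SAFEMODE_GET)) { // true = still in safe mode
                    System.out.println("Namenode still in safe mode; sleeping 10s...");
                    Thread.sleep(10000);
                }
                System.out.println("Safe mode is off.");
            }
        }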

  • Jini : single server with multiple clients

    - by user200340
    Hi all, I have a question about how to let multiple clients access a single file located on the server side while keeping the file consistent. I have a simple PhoneBook server-client Jini program running at the moment. The server only provides getter functions to clients, such as getName(String number) and getNumber(String name), on a PhoneBook class (serializable); the phonebook data is stored in a text file (phonebook.txt) at the moment. I have tried to implement functions that write new records into phonebook.txt. If the name being written already exists, an integer is appended to it. For example, if the existing phonebook.txt contains

        ....
        John 01-01010101
        ....

    and the record being written is "John 01-12345678", then "John_1 01-12345678" is written into phonebook.txt. However, if I start two clients A and B (on the same machine, using localhost), and A tries to write "John 01-11111111" while B tries to write "John 01-22222222", the earlier record gets overwritten by the later one. So there must be something I did completely wrong. My client and server code are just like the Jini HelloWorld example. The server-side code has:

    1. a LookupDiscovery with parameter new String[]{""};
    2. a DiscoveryListener for the LookupDiscovery;
    3. registrations saved into a Hashtable;
    4. for every discovered lookup service, the registrar registers the ServiceItem; the ServiceItem contains null attributeSets, a null serviceId, and the service.

    The client code has:

    1. a LookupDiscovery with parameter new String[]{""};
    2. a DiscoveryListener for the LookupDiscovery;
    3. a ServiceTemplate with null attributeSets, a null serviceId and a type, where the type is the interface class;
    4. for each found ServiceRegistrar, if the ServiceTemplate can be looked up, the returned Object is cast into the interface type.

    I have tried to Google for more details, and JavaSpaces could be the piece I'm missing, but I am still not sure about it (I have only worked with Jini for a very short time). Any help would be greatly appreciated.
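
    One way to stop the overwrites is to make sure only one JVM (the server) ever touches phonebook.txt, and to serialize all mutations there, instead of letting each client's downloaded proxy run its own read-modify-write cycle. A minimal server-side sketch, with all class and method names made up for illustration:

        import java.io.*;

        // All writes funnel through one synchronized method, so two clients can
        // never interleave their check-then-append cycles on phonebook.txt.
        public final class PhoneBookStore {
            private final File file = new File("phonebook.txt");

            public synchronized void addRecord(String name, String number) throws IOException {
                String unique = name;
                int suffix = 0;
                while (contains(unique)) unique = name + "_" + (++suffix); // John -> John_1 -> John_2 ...
                try (FileWriter w = new FileWriter(file, true)) {           // append, never rewrite
                    w.write(unique + " " + number + System.lineSeparator());
                }
            }

            private boolean contains(String name) throws IOException {
                if (!file.exists()) return false;
                try (BufferedReader r = new BufferedReader(new FileReader(file))) {
                    for (String line; (line = r.readLine()) != null; )
                        if (line.startsWith(name + " ")) return true;
                }
                return false;
            }
        }

    JavaSpaces, which you mention, attacks the same problem with transactional take/write operations, but plain server-side locking is enough for a single shared file.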

  • Dashboard for collaborative science / data processing projects

    - by rescdsk
    Hi, continuous integration servers like Hudson are a pretty amazing addition to software development. I work in an academic research lab, and I'd love to apply similar principles to scientific data analysis: I want a dashboard-like view of which collections of data are fine, which ones are failing their tests (simple shell scripts, mostly), and so on. A lot like the Chromium dashboard (warning: the page takes a long time to load). It takes work from at least 4 people, and maybe 10 or 12 hours of computer time, to bring our data (from behavioral studies) from its raw form to its final, easily-analyzed form. I've tried Hudson and buildbot, but neither is really appropriate to our workflow; we just want to run a bunch of tests on maybe fifty independent collections of subject data and display the results nicely. So: does anyone have a recommendation for a way to generate this kind of report easily? Or can you think of a good way to shoehorn this kind of workflow into a continuous integration server? Or can you recommend a unit-testing dashboard that could deal with tests that are little shell scripts rather than little functions? Thank you!

  • Best cubicle toys for programming

    - by dlamblin
    I need some employee/co-worker Christmas gift ideas. Do you have any good cubicle toys that do any of the following:

    - help you think about programming problems;
    - help solve programming problems by representing common abstractions;
    - can be directly programmed;
    - can interface with a PC-based IDE to be programmed;
    - present problems that can be solved, to kick-start problem solving.

    Disallowed items:

    - reference material in book, pamphlet, poster or cheat-sheet form, even if it has kick-ass pop-cultural references;
    - edibles (discuss separately);
    - things that need their own lab space and/or extensive tools to be worked with.

    So yes, Lego Mindstorms come to mind, but they aren't cubicle toys, because they cost more than cubicle toys would and they have too many losable parts. Comments on the answers so far: the 20 Questions game sounds quite neat, as it could get you thinking. The bean balls could be used as tokens in a problem, so I can see that working. The magnetic toys, like the Ball of Whacks or the ball-and-stick sets, offer hands-on fun of a structural nature; now, could a similar hands-on toy help represent the solution to a problem? The Gui Mags clearly could, but they're quite utility-oriented. The AVR Butterfly is less of a toy, but definitely priced attractively: cheaper and more responsive than a BASIC Stamp. I'm not going to pick an answer; there are several great suggestions here. Thank you.

  • Software tools for allowing end users to reprogram interfaces

    - by iceman
    What would be examples of commercial software products (especially web services) that allow the end user to reprogram their user interface? I mean end users who do not know programming but are allowed to add more functionality. One way of doing it is allowing XML gadgets, as iGoogle does. What are the other approaches, and what technologies enable them? This would be a futuristic application, like collaborative software development for users.

  • Hadoop map reduce

    - by Aina Ari
    I'm very new to MapReduce, and I have completed the Hadoop wordcount example. That example produces an unsorted file (of key-value pairs) of word counts. Is it possible to sort it by the number of word occurrences by chaining another MapReduce task onto the earlier one? Thanks in advance.
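
    Yes: the usual pattern is a second job that swaps each (word, count) pair to (count, word) and lets the framework's sort-by-key phase do the work. A sketch against the old mapred API; the paths, and the assumption that the wordcount output is tab-separated "word<TAB>count" lines, are illustrative:

        import java.io.IOException;
        import org.apache.hadoop.fs.Path;
        import org.apache.hadoop.io.*;
        import org.apache.hadoop.mapred.*;

        public class SortByCount {
            // Emits (count, word) so that shuffle/sort orders words by frequency.
            public static class SwapMapper extends MapReduceBase
                    implements Mapper<LongWritable, Text, IntWritable, Text> {
                public void map(LongWritable offset, Text line,
                                OutputCollector<IntWritable, Text> out, Reporter reporter)
                        throws IOException {
                    String[] parts = line.toString().split("\t"); // wordcount output: word<TAB>count
                    out.collect(new IntWritable(Integer.parseInt(parts[1])), new Text(parts[0]));
                }
            }

            // Reverses the natural IntWritable order: most frequent words first.
            public static class Descending extends WritableComparator {
                public Descending() { super(IntWritable.class, true); }
                @SuppressWarnings({"rawtypes", "unchecked"})
                public int compare(WritableComparable a, WritableComparable b) {
                    return b.compareTo(a);
                }
            }

            public static void main(String[] args) throws IOException {
                JobConf conf = new JobConf(SortByCount.class);
                conf.setJobName("sort-by-count");
                conf.setMapperClass(SwapMapper.class);  // default reducer is the identity
                conf.setNumReduceTasks(1);              // one reducer -> one globally sorted file
                conf.setOutputKeyClass(IntWritable.class);
                conf.setOutputValueClass(Text.class);
                conf.setOutputKeyComparatorClass(Descending.class);
                FileInputFormat.setInputPaths(conf, new Path("wordcount-out")); // assumed path
                FileOutputFormat.setOutputPath(conf, new Path("wordcount-sorted"));
                JobClient.runJob(conf);
            }
        }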

  • Draw on screen border in Commodore 64

    - by Stefano Borini
    OK, I hope this doesn't get closed, because I have had this curiosity for 25 years and I would love to understand the trick. On the Commodore 64, the border was not addressable by the 6569 VIC. All you could do was draw pixels in the central area, the one where the cursor moved. The border was always uniform, although you could change its color with POKE 53280,color, if I remember correctly. Nevertheless, I clearly remember game intros where the border carried graphics, as if it were fully addressable. I tried to understand how it worked but never got to the bottom of it. Legend says it was a clever use of sprites, which could, under some circumstances, be drawn in the border, but I don't know if that's an urban legend. Edit: I just read this in one of the provided links: "Sprites were multiplexed across vertical raster lines (over 8 sprites, sometimes up to 120 sprites). Until the group Crest released Krestage 3 in May 2007, there was the common perception that no more than 8 sprites could appear on one raster line, but assigning new Y coordinates made a sprite reappear further down the screen." This is evil: you beat the raster and reposition the sprite before it gets there...

  • 404 redirect with cloud storage

    - by Jeremy DeGroot
    I'm hoping to reach someone with experience using a service like Amazon's S3 with this question. On my site we have a dedicated image server, and on this server we have an automatic 404 redirect through Apache so that, if a user tries to access an image that doesn't exist, they see a snazzy "Image Not Available" image. We're looking to move the hosting of these images to a cloud storage solution (S3 or Rackspace's CloudFiles), and I'm wondering if anyone has had any success replicating this behavior on a cloud storage service, and if so, how they did it.

  • Any Open Source Pregel like framework for distributed processing of large Graphs?

    - by Akshay Bhat
    Google has described a novel framework for distributed processing of massive graphs: http://portal.acm.org/citation.cfm?id=1582716.1582723 I wanted to know whether, as with Hadoop (MapReduce), there are any open-source implementations of this framework. I am actually in the process of writing a pseudo-distributed one using Python and the multiprocessing module, and thus wanted to know if someone else has also tried implementing it, since public information about this framework is extremely scarce (the link above and a blog post at Google Research).
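
    For a feel of the model such an implementation has to provide, here is a toy, strictly single-process sketch of Pregel's superstep loop (in Java rather than Python; the graph, the max-value computation and all names are made-up illustrations, not the paper's API). Every superstep delivers last round's messages, runs each vertex's compute step, and halts when no messages are sent:

        import java.util.*;

        // Toy Pregel-style superstep loop: every vertex converges to the global maximum.
        public class MiniPregel {
            public static void main(String[] args) {
                int[][] adj = {{1}, {0, 2}, {1, 3}, {2}}; // small path graph (made-up input)
                long[] value = {3, 6, 2, 1};
                int n = adj.length;
                List<List<Long>> inbox = new ArrayList<>();
                for (int v = 0; v < n; v++) inbox.add(new ArrayList<>());

                for (int superstep = 0; ; superstep++) {
                    List<List<Long>> outbox = new ArrayList<>();
                    for (int v = 0; v < n; v++) outbox.add(new ArrayList<>());
                    boolean anyMessage = false;
                    for (int v = 0; v < n; v++) {              // "compute()" for every vertex
                        long best = value[v];
                        for (long m : inbox.get(v)) best = Math.max(best, m);
                        if (superstep == 0 || best > value[v]) { // changed: gossip to neighbours
                            value[v] = best;
                            for (int u : adj[v]) { outbox.get(u).add(best); anyMessage = true; }
                        }
                    }
                    inbox = outbox;                            // messages arrive next superstep
                    if (!anyMessage) break;                    // all vertices idle -> halt
                }
                System.out.println(Arrays.toString(value));    // prints [6, 6, 6, 6]
            }
        }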

  • Retro video games programming

    - by SebKom
    I just watched the Super Mario Bros. World -1 glitch on YouTube and I really began wondering about the code behind those games. Which language was used? What about the OS for the video game consoles? Are there any websites with resources about this subject? (I am a 90s video gamer, so I am particularly interested in the programming behind those games, but feel free to make this a wiki and include links to resources about video game programming in general, if you want.)

  • Problem with copying local data onto HDFS on a Hadoop cluster using Amazon EC2/S3

    - by Deepak Konidena
    Hi, I have set up a Hadoop cluster containing 5 nodes on Amazon EC2. Now, when I log into the master node and submit the following command:

        bin/hadoop jar <program>.jar <arg1> <arg2> <path/to/input/file/on/S3>

    it throws the following errors (not at the same time): the first is thrown when I don't replace the slashes in the key with '%2F', and the second when I do replace them:

        1) Java.lang.IllegalArgumentException: Invalid hostname in URI S3://<ID>:<SECRETKEY>@<BUCKET>/<path-to-inputfile>
        2) org.apache.hadoop.fs.S3.S3Exception: org.jets3t.service.S3ServiceException: S3 PUT failed for '/' XML Error Message: The request signature we calculated does not match the signature you provided. Check your key and signing method.

    Notes: 1) When I ran jps to see which tasks were running on the master, it showed just

        1116 NameNode
        1699 Jps
        1180 JobTracker

    leaving out DataNode and TaskTracker. 2) My secret key contains two '/' (forward slashes), and I replace them with '%2F' in the S3 URI. PS: The program runs fine on EC2 when run on a single node. It's only when I launch a cluster that I run into issues with copying data to/from S3 from/to HDFS. Also, what does distcp do? Do I need to distribute the data even after copying it from S3 to HDFS? (I thought HDFS took care of that internally.) If you could direct me to a link that explains running MapReduce programs on a Hadoop cluster using Amazon EC2/S3, that would be great. Regards, Deepak.
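
    One common way to sidestep the signature/escaping problem is to keep the keys out of the URI entirely and put them in the job configuration (or in conf/core-site.xml) under the S3 filesystem properties. A minimal sketch; the job class, bucket and paths are placeholders, and the property names are the ones used by Hadoop's s3/s3n filesystems of that era:

        import org.apache.hadoop.fs.Path;
        import org.apache.hadoop.mapred.FileInputFormat;
        import org.apache.hadoop.mapred.JobConf;

        public class S3JobSetup {
            public static void main(String[] args) {
                JobConf conf = new JobConf(S3JobSetup.class);
                // Credentials live in the configuration, so the URI needs no
                // <ID>:<SECRETKEY> part and no %2F-escaping of slashes in the key.
                conf.set("fs.s3.awsAccessKeyId", "YOUR_ACCESS_KEY");   // s3:// block store
                conf.set("fs.s3.awsSecretAccessKey", "YOUR_SECRET_KEY");
                conf.set("fs.s3n.awsAccessKeyId", "YOUR_ACCESS_KEY");  // s3n:// native files
                conf.set("fs.s3n.awsSecretAccessKey", "YOUR_SECRET_KEY");
                FileInputFormat.setInputPaths(conf, new Path("s3n://mybucket/path/to/input"));
                // ...set mapper/reducer/output as usual, then JobClient.runJob(conf).
            }
        }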

  • Where can I get resources for developing for Mac OS Classic?

    - by Benjamin Pollack
    I recently got bored and fired up my old Mac OS Classic emulator, and then got nostalgic for writing old-school applications for the system. So, my question: Where can I get dev tools that can still target Classic? (Ideally free, since this is just for fun, but if grabbing a used version of CodeWarrior on eBay is the best way to go, so be it.) Where can I get at least reference materials so I don't have to guess-and-check my way around Carbon/the System Toolbox? Are there any forums still running that would be open to answering old-school Mac questions for when I get stuck? This is purely for fun, so don't worry about how impractical this is. I know.

  • Better to build or buy a compute grid platform?

    - by James B
    I am looking to do some quite processor-intensive brute-force processing for string matching. I have run my prototype in a multi-threaded environment and compared the performance to an implementation using GridGain with a couple of nodes (also multithreaded). What I observed was that my GridGain implementation performed slower than my multithreaded implementation. It could be that there was a flaw in my GridGain implementation, but it was only a prototype, and I thought the results were indicative. So my question is this: what are the advantages of having to learn and then build an implementation for a particular grid platform (Hadoop, GridGain, or EC2 if going hosted; other suggestions welcome), when one could fairly easily put together a lightweight compute grid platform with a much shallower learning curve? In other words, what do we get for free with these cloud/grid platforms that is worth having or tricky to implement? (Please note, I don't have any need for a data grid.) Cheers, -James (p.s. Happy to make this community wiki if need be.)

  • A leader election algorithm for an oriented hypercube

    - by mick
    I'm stuck on a problem where I have to design a leader election algorithm for an oriented hypercube. This should be done using a tournament with a number of rounds equal to the dimension D of the hypercube. In each stage d, with 1 <= d < D, two candidate leaders of neighbouring d-dimensional hypercubes compete to become the single candidate leader of the (d+1)-dimensional hypercube that is the union of their respective hypercubes.
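
    A centralized simulation of the tournament helps pin down the invariant before designing the messages: after each round there is exactly one candidate per subcube of the current dimension. In the sketch below (Java; node labels are assumed to be D-bit strings, and the larger identity is assumed to win each duel), subcubes are merged along one dimension per round. Note that this computes who wins but is not the message-passing protocol itself; in the distributed version, the round-d duel message crosses a dimension-d link and may need relaying by nodes that lost earlier rounds:

        import java.util.Random;

        // Centralized sketch of the tournament on a D-dimensional oriented hypercube.
        public class HypercubeElection {
            public static void main(String[] args) {
                final int D = 4;           // dimension (example value)
                final int n = 1 << D;      // 2^D nodes, labelled by D-bit strings
                long[] id = new long[n];   // random identities; larger wins a duel
                Random rng = new Random(42);
                for (int v = 0; v < n; v++) id[v] = rng.nextLong();

                // cand[g] = candidate of the d-dimensional subcube with label g
                // (the high D-d bits). Each round merges subcubes differing in one bit.
                int[] cand = new int[n];
                for (int v = 0; v < n; v++) cand[v] = v; // every node starts as a candidate
                for (int d = 0, groups = n; d < D; d++) {
                    groups /= 2;
                    int[] next = new int[groups];
                    for (int g = 0; g < groups; g++) {
                        int a = cand[2 * g], b = cand[2 * g + 1]; // the two d-cube candidates
                        next[g] = id[a] > id[b] ? a : b;          // winner leads the (d+1)-cube
                    }
                    cand = next;
                }
                System.out.println("elected leader: node " + cand[0]);
            }
        }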

  • Force CloudFront distribution/file update

    - by Martin
    I'm using Amazon's CloudFront to serve the static files of my web apps. Is there no way to tell a CloudFront distribution that it needs to refresh its files, or to point out a single file that should be refreshed? Amazon recommends that you version your files, like logo_1.gif, logo_2.gif and so on, as a workaround for this problem, but that seems like a pretty stupid solution. Is there absolutely no other way?
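
    If you do end up versioning, it can at least be automated so nobody hand-renames logo_2.gif: derive each asset's name from a hash of its content at deploy time, so any change produces a new URL while unchanged files keep their cached ones. A small sketch of that idea (the path is a placeholder, and the hash in the comment is made up):

        import java.io.FileInputStream;
        import java.io.InputStream;
        import java.security.MessageDigest;

        // Prints a content-hashed name for an asset, e.g. logo.gif -> logo-3f2a1b8c.gif.
        public class AssetVersioner {
            public static void main(String[] args) throws Exception {
                String path = "logo.gif";                      // placeholder asset
                MessageDigest md5 = MessageDigest.getInstance("MD5");
                try (InputStream in = new FileInputStream(path)) {
                    byte[] buf = new byte[8192];
                    for (int k; (k = in.read(buf)) != -1; ) md5.update(buf, 0, k);
                }
                byte[] digest = md5.digest();
                StringBuilder hex = new StringBuilder();
                for (int i = 0; i < 4; i++) hex.append(String.format("%02x", digest[i]));
                int dot = path.lastIndexOf('.');
                System.out.println(path.substring(0, dot) + "-" + hex + path.substring(dot));
            }
        }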

  • Industry-style practices for increasing productivity in a small scientific environment

    - by drachenfels
    Hi, I work in a small, independent scientific lab at a university in the United States, and it has come to my notice that, compared with a lot of practices ostensibly followed in industry, like daily check-ins to a version control system, use of a single IDE/editor for all languages (like emacs), etc., we follow rather shoddy programming practices. So I was thinking of getting together all my programs, scripts, etc., and building a streamlined environment to increase productivity, and I'd like suggestions from people on Stack Overflow. Here is my primary plan (questions/things for which I'd like suggestions are in italics): I use MATLAB, C and Python scripts, and I'd like to edit and compile them from a single editor and keep them under proper version control.

    1. Install Cygwin, and get it to work well with Windows so I can use git or a similar version control system (is there a DVCS that works directly from the Windows CLI, so I can skip the Cygwin step?).
    2. Set up emacs to work with C, Python, and MATLAB files, so I can edit and compile all three from a single editor (I'm not very familiar with the emacs menus, but is there a way to set the path to the compiler for certain languages? I know I can Google this, but emacs documentation has proved very hard for me to read so far, so I'd appreciate it if someone told me in simple language).
    3. Start checking in code at the end of each day or half-day so as to maintain a proper record of the progress of my code (two questions: can you check out files directly from emacs? and is there a way to check LabVIEW files into a DVCS like git?).

    Lastly, I'd like to apologize for the rather vague nature of the question; I hope I shall learn to ask better questions over time. I'd appreciate it if people gave their suggestions, though, and pointed to any resources which may help me learn.

  • Which number of processes will give me the best performance ?

    - by Maarten
    I am doing some expensive calculations right now. It is one program, of which I run several instances at the same time. I am running them under Linux on a machine with 4 CPUs with 6 cores each. The CPUs are Intel Xeon X5660, which support hyperthreading. (That's some insane hardware, huh?) Right now I am running 24 processes at once. Would it be better to run more, because of HT?
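
    There is no general answer for hyperthreading on CPU-bound numerical code: the extra hardware threads share execution units, so the gain ranges from roughly nothing to modest, and occasionally it's a loss. The reliable approach is to time a fixed amount of work at several worker counts. A throwaway thread-pool analogue of the multi-process setup (Java; the burn() loop is a stand-in for the real calculation):

        import java.util.concurrent.*;

        // Times a fixed batch of CPU-bound tasks at several pool sizes.
        public class PoolSizeProbe {
            static double burn() {                   // stand-in for the real calculation
                double x = 0;
                for (int i = 1; i < 20_000_000; i++) x += Math.sqrt(i);
                return x;
            }

            public static void main(String[] args) throws Exception {
                int logical = Runtime.getRuntime().availableProcessors(); // 48 with HT on this box
                for (int size : new int[]{logical / 2, logical, logical * 2}) {
                    ExecutorService pool = Executors.newFixedThreadPool(size);
                    long t0 = System.nanoTime();
                    Future<?>[] fs = new Future<?>[96];                   // fixed workload
                    for (int i = 0; i < fs.length; i++)
                        fs[i] = pool.submit((Callable<Double>) PoolSizeProbe::burn);
                    for (Future<?> f : fs) f.get();
                    pool.shutdown();
                    System.out.printf("%d workers: %.1f s%n", size, (System.nanoTime() - t0) / 1e9);
                }
            }
        }

    If separate processes matter to you (memory isolation, no shared GC), the same measurement works by launching N copies of the real program and timing a fixed total workload at each N.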

  • Writing fortran robust and "modern" code

    - by Blklight
    In some scientific environments, you often cannot go without FORTRAN, as most of the developers only know that idiom and there is a lot of legacy code and related experience. And frankly, there are not many other cross-platform options for high-performance programming (C++ would do the task, but the syntax, zero-based arrays, and pointers are too much for most engineers ;-) ). I'm a C++ guy, but I'm stuck with some F90 projects. So, let's assume a new project must use FORTRAN (F90), but I want to build the most modern software architecture out of it, while staying compatible with most "recent" compilers (Intel ifort, but also Sun/HP/IBM compilers). So I'm thinking of imposing:

    - global variables forbidden, no gotos, no jump labels, "implicit none", etc.;
    - "object-oriented programming" (modules with datatypes + related subroutines);
    - modular/reusable, well-documented functions and reusable libraries;
    - assertions/preconditions/invariants (implemented using preprocessor statements);
    - unit tests for all (most) subroutines and "objects";
    - an intense "debug mode" (#ifdef DEBUG) with more checks and all possible Intel compiler checks enabled (array bounds, subroutine interfaces, etc.);
    - a uniform, enforced, legible coding style, maintained with code-processing tools;
    - C stubs/wrappers for libpthread, libDL (and eventually GPU kernels, etc.);
    - C/C++ implementations of utility functions (strings, file operations, sockets, memory alloc/dealloc reference counting for debug mode, etc.).

    (These may all seem like "evident" modern programming assumptions, but in a legacy FORTRAN world, most of them are big changes to the typical programmer's workflow.) The goal of all this is trustworthy, maintainable and modular code, whereas in typical FORTRAN, modularity is often not a primary goal, and code is trustworthy only if the original developer was very clever and the code has not been changed since then! (I'm joking a bit here, but not much.) I searched around for references on object-oriented FORTRAN and programming-by-contract (assertions/preconditions/etc.), and found only ugly and outdated documents, syntaxes and papers by people with no large-scale project involvement, and dead projects. Any good URLs, advice, or reference papers/books on the subject?
