Search Results

Search found 2515 results on 101 pages for 'distributed filesystems'.

  • Connecting to an RMI object without registry

    - by Mark Probst
    I think I need to connect to a remote RMI object without going through the registry, but I don't know how. My situation is this: I'm implementing a simple job distribution service which consists of one distributor and multiple workers. The distributor has a registered RMI object to which clients connect to send jobs and workers connect to accept jobs.

    Unfortunately, the distributor and worker hosts are behind a firewall. To get to the distributor host I am tunneling two ports (one for the registry, one for the distributor object) via SSH, so I can reach the registry and the distributor from outside the firewall. To make that work I have to set "-Djava.rmi.server.hostname=localhost" on the distributor JVM, so that clients connect to their local, tunneled port instead of the port on the actual distributor host, which is blocked. This creates a problem for the workers, though: they need to connect to the distributor directly, but because of the "localhost" redirection they behave like clients and try to connect to a port on their own host, which is not available, because I'm not tunneling on the workers (it is impractical).

    Now, if I could connect to a remote object directly by giving the hostname and port, I could do away with both the registry on the distributor and the "localhost" hack, and make the workers connect properly. How do I do that? Or is there a different solution to this problem?
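
    One way to hand a worker a remote reference without any registry, sketched below on the assumption that a shared file (or any other side channel) is reachable by both sides: the stub returned by UnicastRemoteObject.exportObject is serializable, so the distributor can write it out once and each worker can deserialize it and call it directly. Note that the stub still embeds the hostname recorded at export time via java.rmi.server.hostname, so this removes the registry but not the need for that property to resolve correctly from the workers. The JobDistributor interface, port, and file path here are hypothetical.

        import java.io.*;
        import java.rmi.Remote;
        import java.rmi.RemoteException;
        import java.rmi.server.UnicastRemoteObject;

        interface JobDistributor extends Remote {        // hypothetical remote interface
            String takeJob() throws RemoteException;
        }

        class StubFileDemo {
            // Distributor side: export on a fixed port and write the stub to a shared file.
            static void exportAndSave(JobDistributor impl) throws Exception {
                Remote stub = UnicastRemoteObject.exportObject(impl, 4242);
                try (ObjectOutputStream out = new ObjectOutputStream(
                        new FileOutputStream("/shared/distributor.stub"))) {
                    out.writeObject(stub);
                }
            }

            // Worker side: deserialize the stub and call it directly, no registry lookup.
            static JobDistributor load() throws Exception {
                try (ObjectInputStream in = new ObjectInputStream(
                        new FileInputStream("/shared/distributor.stub"))) {
                    return (JobDistributor) in.readObject();
                }
            }
        }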

  • wanting a good memory + disk caching solution

    - by brofield
    I'm currently storing generated HTML pages in a memcached in-memory cache. This works great, but I want to increase the storage capacity of the cache beyond available memory. What I would really like is:

      - memcached semantics (i.e. not reliable, just a cache)
      - memcached API preferred (but not required)
      - a large in-memory first-level cache (MRU)
      - a huge on-disk second-level cache (main)
      - eviction from the on-disk cache at maximum storage using LRU or LFU
      - a proven implementation

    In searching for a solution I've found the following candidates, but they all miss my mark in some way. Does anyone know of either other options that I haven't considered, or a way to make memcachedb do evictions? Already considered:

      - memcachedb: the best fit, but it doesn't do evictions; it is explicitly "not a cache", and I can't see any way to do evictions (either manual or automatic)
      - tugela cache: abandoned, no support; I don't want to recommend it to customers
      - nmdb: doesn't use the memcache API; new and unproven; I don't want to recommend it to customers
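
    On the first-level MRU/LRU layer: inside a JVM the eviction semantics alone take only a few lines, which can serve as a yardstick when evaluating candidates against the list above. A minimal sketch, not a memcached replacement; the second-level spill is only indicated by a comment:

        import java.util.LinkedHashMap;
        import java.util.Map;

        // Bounded LRU map: accessOrder=true makes get() refresh recency,
        // and removeEldestEntry evicts once the bound is exceeded.
        class LruCache<K, V> extends LinkedHashMap<K, V> {
            private final int maxEntries;

            LruCache(int maxEntries) {
                super(16, 0.75f, true);
                this.maxEntries = maxEntries;
            }

            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                // Hook point: a second-level store could spill the evicted
                // entry to disk here instead of discarding it.
                return size() > maxEntries;
            }
        }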

  • What's the difference between Paxos and W+R>N in Cassandra?

    - by user1128016
    Dynamo-like databases (e.g. Cassandra) provide the ability to enforce consistency by means of a quorum, i.e. the number of synchronously written replicas (W) and the number of replicas to read (R) are chosen in such a way that W + R > N, where N is the replication factor. On the other hand, Paxos-based systems like ZooKeeper are also used as consistent fault-tolerant storage. What is the difference between these two approaches? Does Paxos provide guarantees that are not provided by the W + R > N scheme?
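
    For intuition, the quorum condition is a pigeonhole argument: when W + R > N, every set of R replicas read must intersect every set of W replicas written, so a read always touches at least one replica holding the latest acknowledged write. A tiny illustration with concrete numbers:

        class QuorumCheck {
            public static void main(String[] args) {
                // N = 3 replicas, W = 2 write acks, R = 2 replicas per read:
                int n = 3, w = 2, r = 2;
                // Any 2 readers and any 2 writers out of 3 must share a replica.
                System.out.println("read/write quorums overlap: " + (w + r > n)); // true
            }
        }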

  • What should I do to accommodate large-scale data storage and retrieval?

    - by kailashbuki
    There are two columns in a table inside a MySQL database. The first column contains a fingerprint while the second one contains the list of documents which have that fingerprint. It's much like an inverted index built by search engines. An instance of a record inside the table is shown below:

        34  "doc1, doc2, doc45"

    The number of fingerprints is very large (it can range up to trillions). There are basically the following operations on the database: inserting/updating a record, and retrieving a record according to a match on the fingerprint. The table definition Python snippet is:

        self.cursor.execute("CREATE TABLE IF NOT EXISTS `fingerprint` (fp BIGINT, documents TEXT)")

    And the snippet for the insert/update operation is:

        if self.cursor.execute("UPDATE `fingerprint` SET documents=CONCAT(documents,%s) WHERE fp=%s",
                               ("," + newDocId, thisFP)) == 0L:
            self.cursor.execute("INSERT INTO `fingerprint` VALUES (%s, %s)", (thisFP, newDocId))

    The only bottleneck I have observed so far is the query time in MySQL. My whole application is web-based, so time is a critical factor. I have also thought of using Cassandra but have little knowledge of it. Please suggest a better way to tackle this problem.
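
    One thing stands out in the schema as posted: fp carries no index, so both the UPDATE and every lookup scan the whole table, which alone would explain the query time. A sketch of an indexed, normalized layout, written in Java/JDBC purely for illustration (the table name fingerprint_doc and column doc are invented):

        import java.sql.*;

        class FingerprintStore {
            // One row per (fingerprint, document) pair; the composite primary key
            // doubles as an index on fp, so lookups and inserts avoid full scans.
            static void init(Connection c) throws SQLException {
                try (Statement s = c.createStatement()) {
                    s.execute("CREATE TABLE IF NOT EXISTS fingerprint_doc ("
                            + " fp BIGINT NOT NULL,"
                            + " doc VARCHAR(64) NOT NULL,"
                            + " PRIMARY KEY (fp, doc))");
                }
            }

            static void add(Connection c, long fp, String doc) throws SQLException {
                // INSERT IGNORE (MySQL) makes the operation idempotent,
                // replacing the UPDATE-then-INSERT dance in the question.
                try (PreparedStatement p = c.prepareStatement(
                        "INSERT IGNORE INTO fingerprint_doc (fp, doc) VALUES (?, ?)")) {
                    p.setLong(1, fp);
                    p.setString(2, doc);
                    p.executeUpdate();
                }
            }
        }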

  • How do you export or release a Mac OS X program made in Xcode? Program does not load on other computers

    - by SolidSnake4444
    I made a program in Xcode: a simple calculator that takes a first number and a second number, and then either adds, subtracts, multiplies, or divides depending on the radio button. I build and run, and the program comes up and works fine. But when I went to show my friends on their Macs, double-clicking the program made it pop up in the Dock for about 0.05 seconds and then disappear; we could never actually run the program. It still works perfectly on my computer, however. What am I doing wrong? How can I take the program I made and run it on different Macs?

    I have the release set to 10.5 but the active SDK set to 10.6. It runs in both the 10.5 and 10.6 simulators. One friend has 10.6.3 like me and the other has 10.5.x (can't remember the last part). To get the app, I changed from debug to release and set the active SDK to 10.5. Then in the release folder I found the app and sent that over iChat. I feel this will be a problem in the future if I ever make a legit application to distribute. Thank you!

  • C++ Winsock P2P

    - by Goober
    Scenario: Does anyone have any good examples of peer-to-peer (P2P) networking in C++ using Winsock? It's a requirement I have for a client who specifically needs to use this technology (god knows why). I need to determine whether this is feasible. Any help would be greatly appreciated.

  • How to bundle shell script and C code?

    - by eSKay
    I have a C shell script that calls two C programs, one after the other, with some file handling before, in between, and afterwards. So, as it stands, I have three different files: one C shell script and two .c files. I need to give this script to other users. The problem is that I have to distribute three files, which the users must keep in the same folder before executing the script. Is there a better way to do this? [I know I can make one C file out of those two, but I will still be left with a shell script and a C program. The two C programs do entirely different things, so I want them to be separate.]

  • How to make my Java Swing application a Client-Server application?

    - by Jonas
    I have made a Java Swing application. Now I would like to make it a Client-Server application. All clients should be notified when data on the server is changed, so I'm not looking for a Web Service. The Client-Server application will be run on a single LAN, it's a business application. The Server will contain a database, JavaDB. What technology and library is easiest to start with? Should I implement it from scratch using Sockets, or should I use Java RMI, or maybe JMS? Or are there other alternatives that are easier to start with? And is there any server library that I should use? Is Jetty an alternative?
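
    Since server push is the stated requirement, one low-ceremony option on a single LAN is RMI with a callback interface: each client registers a remote listener with the server, and the server invokes it whenever data changes. A rough sketch of just the interfaces, with invented names (threading, error handling, and unexport omitted):

        import java.rmi.Remote;
        import java.rmi.RemoteException;

        // Implemented and exported by each Swing client so the server can call back.
        interface ChangeListener extends Remote {
            void dataChanged(String description) throws RemoteException;
        }

        // Implemented by the server and bound in an RMI registry on the LAN.
        interface DataService extends Remote {
            void register(ChangeListener listener) throws RemoteException;
            void submitChange(String description) throws RemoteException; // notifies listeners
        }

    One caveat: callbacks arrive on RMI threads, so a Swing client would hand them to the EDT via SwingUtilities.invokeLater before touching the UI. JMS topics would give the same push semantics with more infrastructure.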

  • Condor job using DAG with some jobs needing to run on the same host

    - by gurney alex
    I have a computation task which is split into several individual program executions, with dependencies. I'm using Condor 7 as the task scheduler (with the Vanilla Universe, due to constraints on the programs beyond my reach, so no checkpointing is involved), so a DAG looks like a natural solution. However, some of the programs need to run on the same host, and I could not find a reference on how to do this in the Condor manuals. Example DAG file:

        JOB A A.condor
        JOB B B.condor
        JOB C C.condor
        JOB D D.condor
        PARENT A CHILD B C
        PARENT B C CHILD D

    I need to express that B and D must run on the same compute node, without breaking the parallel execution of B and C. Thanks for your help.
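
    If pinning both jobs to a known machine up front is acceptable, the standard ClassAd way is a Requirements expression in the submit descriptions of B and D, as in this sketch (the hostname is invented). Pinning D to whichever machine B actually landed on is harder, and would need something like a POST script on B that records the machine and rewrites D's requirements before D is submitted.

        # In both B.condor and D.condor
        Requirements = (Machine == "node01.example.com")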

  • Can computer clusters be used for general everyday applications?

    - by Matt Pascoe
    Does anyone know how a computer cluster can be used for everyday applications, for example video games? I would like to build a computer cluster that can run applications over the cluster that were not specifically designed for computer clusters, and still see a performance increase. One use would be for video games, but I would also like to use the increased computing power for running a large network of virtual machines.

  • Is a Service Bus a good option to communicate with multiple external servers?

    - by PFreitas
    We are developing an application that communicates, via web services or TCP/IP sockets, with multiple servers (up to 50 different external companies). Basically, the exchanged messages are the same (XML), but depending on the inputs to our application we may call one or more external servers. The benefits we would expect from introducing a Service Bus into the architecture are:

      1. Removing the need to manage all the point-to-point configurations (all 50 endpoints);
      2. Simplifying the communication layer of our application by having only one server to talk to.

    Is a Service Bus a good architectural option for this scenario? What is the best (simplest) Service Bus for this kind of communication? I read a few MSDN articles on Azure Service Bus Relay, but it didn't seem to fit our needs. Am I wrong? Thanks for your help.

  • Parallelizing a serial algorithm

    - by user643813
    Hey folks, I am working on porting a text mining / natural language application from single-core to a Map-Reduce style system. One of the steps involves a while loop similar to this:

        Queue<Element> queue = new ArrayDeque<>(initialElements);
        while (!queue.isEmpty()) {
            Element e = queue.poll();
            Set<Element> result = calculateResultSet(e);
            if (!result.isEmpty()) {
                queue.addAll(result);   // results feed back into the work list
            }
        }

    Each iteration depends on the result of the one before (kind of). There is no way of determining the number of iterations this loop will have to perform. Is there a way of parallelizing a serial algorithm such as this one? I am trying to think of a feedback mechanism that is able to provide its own input, but how would one go about parallelizing it? Thanks for any help/remarks.
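
    Loops of this shape do parallelize, as a self-feeding task pool: every element becomes a task, and each task submits whatever new elements it discovers. The subtle part is termination; an empty queue doesn't mean no more work is coming while tasks are still running, so a counter of outstanding tasks decides when everything is done. A sketch assuming calculateResultSet is thread-safe (Element and the method body are placeholders):

        import java.util.Set;
        import java.util.concurrent.CountDownLatch;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.atomic.AtomicInteger;

        class Element {}   // placeholder for the application's real element type

        class FeedbackPool {
            private final ExecutorService pool = Executors.newFixedThreadPool(
                    Runtime.getRuntime().availableProcessors());
            private final AtomicInteger pending = new AtomicInteger();
            private final CountDownLatch done = new CountDownLatch(1);

            void submit(Element e) {
                pending.incrementAndGet();
                pool.execute(() -> {
                    try {
                        for (Element next : calculateResultSet(e)) {
                            submit(next);             // feed results back into the pool
                        }
                    } finally {
                        if (pending.decrementAndGet() == 0) {
                            done.countDown();         // nothing queued or running remains
                        }
                    }
                });
            }

            void awaitCompletion() throws InterruptedException {
                done.await();                         // blocks until the counter hits zero
                pool.shutdown();
            }

            // Placeholder for the application's real computation.
            Set<Element> calculateResultSet(Element e) { return Set.of(); }
        }

    The same feedback structure maps onto iterative MapReduce as repeated rounds: each round's reducer output becomes the next round's input, until a round produces nothing new.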

  • Split text files across threads

    - by Kevin
    The problem: I have a few text files (10) with numbers in them on every line. I need to have them split across some threads that I create using the pthread library. These worker threads need to find the largest prime number that gets sent to them (and, overall, the largest prime from all of the text files).

    My current thoughts on a solution: have two arrays, with all of the text files in one array, while the other array contains a binary file from which I can read, say, 1000 lines; then send a pointer to an index of that binary file in a struct that contains the id, file pointer, and file position, and let each thread crank through that. A little bit of what I am talking about:

        pthread_create(&threads[index], NULL, calc_sqrt, (void *)threadFields[index]); // pass the struct to each worker

    The struct:

        typedef struct threadFields {
            int *id, *position;
            FILE *Fin;
        } tField;

    If anyone has any insight or a better solution it would be greatly appreciated. Thanks

  • How do I control the output file names and content of a Hadoop streaming job?

    - by Eran Kampf
    Is there a way to control the output filenames of a Hadoop Streaming job? Specifically, I would like my job's output file contents and names to be organized by the key the reducer outputs: each file would only contain values for one key, and its name would be the key.

    Update: Just found the answer. Using a Java class that derives from MultipleOutputFormat as the job's output format allows control of the output file names: http://hadoop.apache.org/core/docs/current/api/org/apache/hadoop/mapred/lib/MultipleOutputFormat.html

    I haven't seen any samples for this out there... Can anyone point me to a Hadoop Streaming sample that makes use of a custom output format Java class?
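
    For the record, a minimal sketch of that approach against the old mapred API: subclass MultipleTextOutputFormat and override generateFileNameForKeyValue so each key is routed to a file named after it. Whether a given Hadoop version's streaming jar accepts such a class via -outputformat is worth verifying; the class name here is made up:

        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapred.lib.MultipleTextOutputFormat;

        // Routes each reducer key to its own output file, named after the key.
        public class KeyNamedOutputFormat extends MultipleTextOutputFormat<Text, Text> {
            @Override
            protected String generateFileNameForKeyValue(Text key, Text value, String name) {
                return key.toString();  // e.g. all values for key "foo" land in file "foo"
            }
        }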

  • Pseudorandom Number Generation with Specific Non-Uniform Distributions

    - by carnun
    Hello all, I'm writing a program that simulates various random walks (with differing distributions). At each timestep I need randomly generated, two-dimensional step distances and angles drawn from the distribution of the random walk. I'm hoping someone can check my understanding of how to generate these random numbers. As I understand it, I can use inverse transform sampling as follows: if F(x) is the CDF of the step distribution (the integral of its pdf f(x)) and y is a random number from a uniform distribution on [0, 1], then setting F(x) = y and solving for x yields a random number from the non-uniform distribution. Is this a feasible solution?
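
    That is the standard method, with the one detail made explicit above: it is the CDF F, not the pdf f, that gets inverted. A small sketch for an exponentially distributed step length, where the inverse CDF has a closed form (lambda is an arbitrary example parameter; the isotropic angle needs no transform at all):

        import java.util.Random;

        class StepSampler {
            private final Random rng = new Random();

            // Exponential(lambda): F(x) = 1 - exp(-lambda * x),
            // so F^{-1}(y) = -ln(1 - y) / lambda.
            double sampleStepLength(double lambda) {
                double y = rng.nextDouble();        // uniform on [0, 1)
                return -Math.log(1.0 - y) / lambda;
            }

            // Angles for an isotropic walk are simply uniform on [0, 2*pi).
            double sampleAngle() {
                return 2.0 * Math.PI * rng.nextDouble();
            }
        }

    For distributions whose CDF has no closed-form inverse, the same idea still works numerically (e.g. bisection on F(x) = y), or rejection sampling can be used instead.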

  • Oracle global_names DELETE problem

    - by jyzuz
    I'm using a database link to execute a DELETE statement on another DB, but the DB link name doesn't conform to global naming, and this requirement cannot change. Also, global_names is set to false and cannot be changed either. When I try to use these links, however, I receive:

        ORA-02069: global_names parameter must be set to TRUE for this operation
        Cause: A remote mapping of the statement is required but cannot be achieved
               because GLOBAL_NAMES should be set to TRUE for it to be achieved.
        Action: Issue ALTER SESSION SET GLOBAL_NAMES = TRUE (if possible)

    What is the alternative action when setting global_names=true is not possible? Cheers, Jean

  • DBAs say no to SQL Server DTC?

    - by NabilS
    I am trying to get our DBAs to enable DTC on a cluster of SQL Server 2005. Unfortunately they keep refusing. Their argument is that they would need to set up a dedicated host for DTC (which could take months!), as it is not a matter of ticking a few boxes. Is this true? How intrusive is DTC in a shared environment such as a SQL farm? Do I have an argument against this? Thanks

  • How to provide an API client with 1,000,000 database results?

    - by Chris Dutrow
    What is a good way to provide an API client with 1,000,000 database results? We are currently using PostgreSQL. A few suggested methods:

      - Paging using cursors
      - Paging using random numbers (add a "greater than" condition plus ORDER BY to each query)
      - Save the information to a file and let the client download it
      - Iterate through the results, then POST the data to the client's server
      - Return only keys to the client, then let the client request the objects from cloud files like Amazon S3 (this may still require paging just to get the file names)

    What haven't I thought of that is stupidly simple and way better than any of these options?
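
    The first two options collapse into what is usually called keyset paging: each response carries the last key seen, and the next query resumes strictly after it, so nothing is rescanned the way OFFSET paging rescans skipped rows. A sketch in Java/JDBC (the results table and payload column are invented):

        import java.sql.*;
        import java.util.ArrayList;
        import java.util.List;

        class KeysetPager {
            // Fetch one page of rows with id > afterId, ordered by id.
            // The last id returned becomes the client's cursor for the next call.
            static List<String> page(Connection c, long afterId, int pageSize)
                    throws SQLException {
                String sql = "SELECT id, payload FROM results WHERE id > ? ORDER BY id LIMIT ?";
                try (PreparedStatement p = c.prepareStatement(sql)) {
                    p.setLong(1, afterId);
                    p.setInt(2, pageSize);
                    List<String> rows = new ArrayList<>();
                    try (ResultSet rs = p.executeQuery()) {
                        while (rs.next()) {
                            rows.add(rs.getLong("id") + ":" + rs.getString("payload"));
                        }
                    }
                    return rows;
                }
            }
        }

    Because the cursor is an ordinary column value, the endpoint stays stateless, which tends to suit an API better than holding a server-side database cursor open between requests.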

  • Using Git or Mercurial, how would you know that no one is checking in files when you do a clone or a pull?

    - by Jian Lin
    Using Git or Mercurial, how would you know that, when you do a clone or a pull, no one is checking in (pushing) files? This can be important because:

      1) You might never know the code is in an inconsistent state, and spend 2 hours trying to debug what's wrong.
      2) With all the framework code, potentially hundreds of files, if some files are inconsistent with the others, can't rake db:migrate or script/generate controller cause some damage or inconsistencies to the code base?

  • Mesos slave not 'Running' multiple executors simultaneously

    - by user3084164
    I am using Mesos to distribute a bunch of tasks to different machines (mesos-slaves). Here is what happens:

      1. My scheduler gets resource offers and accepts them.
      2. Mesos stages multiple executors on the same mesos-slave (each slave has 4 CPUs).
      3. Only ONE executor enters the 'Running' state on each of the slaves, while the others are shown in the 'Staging' state.
      4. Only after the current executor finishes execution does the next executor start running.

    Given that I have 4 CPUs on each machine, shouldn't each slave be running 4 executors simultaneously? Each executor requires 1 CPU.

  • Any way to recover ext4 filesystems from a deleted LVM logical volume?

    - by Vegar Nilsen
    The other day I had a proper brain fart moment while expanding a disk on a Linux guest under VMware. I stretched the VMware disk file to the desired size and then did what I usually do on Linux guests without LVM: I deleted the LVM partition and recreated it, starting in the same spot as the old one, but extended to the new size of the disk (which would be followed by fsck and resize2fs). And then I realized that LVM doesn't behave the same way as ext2/3/4 on raw partitions...

    After restoring the Linux guest from the most recent backup (taken only five hours earlier, luckily), I'm now curious how I could have recovered from the following scenario. It is, after all, virtually guaranteed that I'll be a dumb ass in the future as well.

    The setup: a virtual Linux guest with one disk, partitioned into one /boot (primary) partition (/dev/sda1) of 256 MB, with the rest in a logical, extended partition (/dev/sda5). /dev/sda5 was then set up as a physical volume with pvcreate, and one volume group (vgroup00) was created on top of it with the usual vgcreate command. vgroup00 was then split into two logical volumes, root and swap, used for / and swap respectively. / is an ext4 file system.

    Since I had backups of the broken guest, I was able to recreate the volume group with vgcfgrestore from the backup LVM setup found under /etc/lvm/backup, with the same UUID for the physical volume and all that. After running this I had two logical volumes of the same size as earlier, with 4 GB of free space where I had stretched the disk. However, when I tried to run "fsck /dev/mapper/vgroup00-root" it complained about a broken superblock. I tried to locate backup superblocks by running "mke2fs -n /dev/mapper/vgroup00-root", but none of those worked either. Then I tried TestDisk, but when I asked it to find superblocks it only gave an error about not being able to open the file system due to a broken file system.

    So, with the default allocation policy for LVM2 in Ubuntu Server 10.04 64-bit, is it possible that the logical volumes were allocated from the end of the volume group? That would definitely explain why the restored logical volumes didn't contain the expected data. Could I have recovered by recreating /dev/sda5 with exactly the same size and disk position as earlier? Are there any other tools I could have used to find and recover the file system?

    (And clearly, the question is not whether or not I should have done this differently from the start, I know that. This is a question about what to do when shit has already hit the fan.)
