Search Results

Search found 3409 results on 137 pages for 'distributed computing'.

Page 97/137 | < Previous Page | 93 94 95 96 97 98 99 100 101 102 103 104  | Next Page >

  • How do you handle files that can't support concurrent edits in Mercurial?

    - by Scott Whitlock
    I'm using Mercurial with TortoiseHg. Each developer has their own repositories, and there's one central repository on the server for synchronizing our changes. (This will sound lame, but we're using it to manage the source for a legacy VB6 project. Nothing we can do about that...) As has been pointed out elsewhere, there is a big problem in VB6 with merging the .frx (form resources) files. So code changes seem to merge fine, but if two developers both make changes at the same time in the form design view, we can't merge. I'm ok with disallowing concurrent edits, but of course the whole point of Mercurial is that it's distributed so there is no option to force a file to be locked before editing. I don't believe there's a Mercurial solution for this, so I'm wondering: other developers who are using Mercurial for version control, do you have some 3rd party tool that assists with locking files for editing in the cases where it's necessary? Did we make a mistake using Mercurial instead of something like SVN?
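
    One way teams sometimes address this, sketched below as a hypothetical helper rather than anything Mercurial provides: a tiny advisory-lock script that records locks on a network share every developer can reach. It only works if everyone agrees to run it before opening a .frx file in the designer, and the share path and script name here are assumptions for illustration.

        #!/usr/bin/env python
        """frxlock.py -- hypothetical advisory-lock helper for binary files.

        Usage:  python frxlock.py lock MyForm.frx
                python frxlock.py unlock MyForm.frx
        """
        import getpass
        import os
        import sys

        # Assumption: a network share that every developer can read and write.
        LOCK_DIR = r"\\server\vb6-locks"

        def lock_path(name):
            # One lock file per tracked file, flattened to a safe file name.
            return os.path.join(LOCK_DIR, name.replace(os.sep, "_") + ".lock")

        def lock(name):
            path = lock_path(name)
            try:
                with open(path, "x") as f:      # "x" fails if the lock already exists
                    f.write(getpass.getuser())
            except FileExistsError:
                with open(path) as f:
                    sys.exit("%s is already locked by %s" % (name, f.read().strip()))
            print("locked %s" % name)

        def unlock(name):
            os.remove(lock_path(name))
            print("unlocked %s" % name)

        if __name__ == "__main__":
            {"lock": lock, "unlock": unlock}[sys.argv[1]](sys.argv[2])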

    Read the article

  • Can Git or Mercurial be set to bypass the local repository and go straight to the central one?

    - by Jian Lin
    Using Git or Mercurial, if the working directory is 1GB, then the local repository will be another 1GB (at least), normally residing on the same hard drive. And then, when pushed to a central repository, there will be another 1GB. Can Git or Mercurial be set to use only a working directory and a central repository, without keeping 3 copies of this 1GB of data? (Actually, when the central repository is also updated, there are 4 copies of the same data... can this be reduced? In the SVN scenario with 5 users there would be 6GB of data in total; with distributed version control, would there be 12GB?)

    Read the article

  • Sum of path products in DAG

    - by Jules
    Suppose we have a DAG with edges labeled with numbers. Define the value of a path as the product of its labels. For each (source, sink) pair I want to find the sum of the values of all paths from source to sink. You can do this in polynomial time with dynamic programming, but there are still choices to be made in how you decompose the problem. In my case I have one DAG that has to be evaluated repeatedly with different labelings. My question is: for a given DAG, how can we pre-compute a good strategy for computing these values repeatedly under different labelings? It would be nice if there were an algorithm that finds an optimal way, for example one that minimizes the number of multiplications. But perhaps that is too much to ask; I would be very happy with an algorithm that just gives a good decomposition.
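
    For reference, a minimal sketch (in Python, since the question is language-agnostic) of the straightforward dynamic program the question alludes to: one topological sweep per source, accumulating path products edge by edge. It recomputes everything for each labeling and does not attempt the harder part of the question, pre-computing an evaluation strategy that minimizes multiplications.

        from collections import defaultdict, deque

        def path_product_sums(n, edges):
            """Sum over all source->sink paths of the product of edge labels.

            n: number of nodes (0..n-1); edges: iterable of (u, v, label) forming a DAG.
            Returns a dict mapping (source, sink) -> summed value.
            """
            adj = defaultdict(list)
            indeg = [0] * n
            for u, v, w in edges:
                adj[u].append((v, w))
                indeg[v] += 1

            # Kahn's algorithm gives a topological order of the DAG.
            order, queue = [], deque(i for i in range(n) if indeg[i] == 0)
            while queue:
                u = queue.popleft()
                order.append(u)
                for v, _ in adj[u]:
                    indeg[v] -= 1
                    if indeg[v] == 0:
                        queue.append(v)

            result = {}
            for s in range(n):
                acc = [0.0] * n
                acc[s] = 1.0                     # the empty path from s to itself
                for u in order:                  # one O(V + E) sweep per source
                    if acc[u] == 0:
                        continue
                    for v, w in adj[u]:
                        acc[v] += acc[u] * w     # extend every path ending at u by edge (u, v)
                for t in range(n):
                    if t != s and acc[t]:
                        result[(s, t)] = acc[t]
            return result

        # Paths 0->1->2 and 0->2: 2*3 + 5 = 11
        print(path_product_sums(3, [(0, 1, 2.0), (1, 2, 3.0), (0, 2, 5.0)])[(0, 2)])  # prints 11.0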

    Read the article

  • Make MySQL database replication always use the most free node?

    - by Chad Johnson
    We started using Multi-Master Replication Manager for MySQL, and I am wondering whether it is possible to treat this setup like symmetric multiprocessing: a process pops off the process queue, and the node (in this case a server) that is most free is selected for the job. It seems that what actually happens is that the service switches to a slave ONLY when mysqld crashes or goes away. Is there a way to make database replication for MySQL act in more of a distributed manner? Maybe there is other software besides MMM that can do this? Is there a way to switch the reader role to another server when mysqld slows down (rather than just when it fails)?

    Read the article

  • Help using Horner's rule and hash functions in Java?

    - by Matt
    I am trying to use Horner's rule to convert words to integers. I understand how it works and how, if the word is long, it may cause an overflow. My ultimate goal is to use the converted integer in a hash function h(x) = x mod tableSize. My book suggests that, because of the overflow, you could "apply the mod operator after computing each parenthesized expression in Horner's rule." I don't exactly understand what they mean by this. Say the expression looks like this: ((14*32+15)*32+20)*32+5. Do I take mod tableSize after each parenthesized expression and add the results together? What would it look like with this hash function and this example of Horner's rule?
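
    To make the book's suggestion concrete: the per-step results are not added together; there is one running value that is reduced with mod after every multiply-and-add step, and the final bucket is unchanged because (a*b + c) mod m equals ((a mod m)*b + c) mod m. A small sketch (Python rather than Java, with a hypothetical table size of 101) using the numbers from the question:

        TABLE_SIZE = 101            # hypothetical tableSize for illustration
        RADIX = 32
        letters = [14, 15, 20, 5]   # the values from ((14*32+15)*32+20)*32+5

        # Direct evaluation with one mod at the end (can overflow for long words).
        full = ((14 * RADIX + 15) * RADIX + 20) * RADIX + 5

        # Horner's rule with mod applied after each parenthesized step:
        h = 0
        for c in letters:
            h = (h * RADIX + c) % TABLE_SIZE   # the running value stays small

        print(full % TABLE_SIZE, h)            # both print 57 -> same bucket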

    Read the article

  • Subversion for version control

    - by Gabriel Parenza
    Hi, I am working on an application whose primary purpose is to provide source control management. My idea is to use SVNKit for file check-out and check-in. However, while working with SVNKit, I realised it does not have the speed I was looking for. For instance, whenever developers create a ChangeRequest, which can encompass changes to 3-40 files, I have to create a directory structure distributed across 32 folders; doing so takes around 50 seconds. Another instance is that, after creating a change request, developers can add files to the request, and copying even a single file from trunk to a branch takes around 6-7 seconds. My question is: has anyone had a similar experience, and what did you do to improve performance? Moreover, is my approach correct? NOTE: I am using the "http" protocol and can't use the "svn" protocol.

    Read the article

  • Binomial test in Python

    - by Morlock
    I need to do a binomial test in Python that allows calculation for values of n on the order of 10000. I have implemented a quick binomial_test function using scipy.misc.comb; however, it is pretty much limited to around n = 1000, I guess because it hits the largest representable number while computing the factorials or the combinatorial itself. Here is my function:

        from scipy.misc import comb

        def binomial_test(n, k):
            """Calculate binomial probability"""
            p = comb(n, k) * 0.5**k * 0.5**(n-k)
            return p

    How could I use a native Python (or numpy, scipy...) function to calculate that binomial probability? If possible, I need scipy 0.7.2-compatible code. Many thanks!
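
    One dependency-free workaround (a sketch, not any particular scipy API) is to do the arithmetic in log space with math.lgamma, which keeps n around 10000 well within floating-point range; this mirrors the function above rather than a full two-sided test.

        import math

        def binomial_probability(n, k, p=0.5):
            """P(X = k) for X ~ Binomial(n, p), computed in log space.

            math.lgamma(x) is log((x-1)!), so log_comb below is log(C(n, k)).
            Assumes 0 < p < 1; uses only the standard library, so it does not
            depend on any particular scipy version.
            """
            log_comb = math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            return math.exp(log_comb + k * math.log(p) + (n - k) * math.log(1 - p))

        print(binomial_probability(10000, 5000))   # ~0.00798, no overflow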

    Read the article

  • Parallelism in Python

    - by fmark
    What are the options for achieving parallelism in Python? I want to perform a bunch of CPU-bound calculations over some very large rasters and would like to parallelise them. Coming from a C background, I am familiar with three approaches to parallelism: message-passing processes, possibly distributed across a cluster, e.g. MPI; explicit shared-memory parallelism, using pthreads or fork(), pipe(), et al.; and implicit shared-memory parallelism, using OpenMP. Deciding on an approach is an exercise in trade-offs. In Python, what approaches are available and what are their characteristics? Is there a clusterable MPI clone? What are the preferred ways of achieving shared-memory parallelism? I have heard references to problems with the GIL, as well as references to tasklets. In short, what do I need to know about the different parallelization strategies in Python before choosing between them?
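
    As one concrete data point among those options: MPI bindings such as mpi4py exist for cluster-style message passing, while the standard-library multiprocessing module gives process-based parallelism on one box that sidesteps the GIL for CPU-bound work. A minimal sketch, with process_tile standing in for whatever per-raster-chunk computation is actually needed:

        import os
        from multiprocessing import Pool

        def process_tile(tile_id):
            """Placeholder for CPU-bound work on one chunk of a raster."""
            total = 0
            for i in range(1_000_000):
                total += (tile_id * i) % 7
            return tile_id, total

        if __name__ == "__main__":
            # One worker process per core; threads alone would serialize on the GIL
            # for pure-Python CPU-bound code.
            with Pool(processes=os.cpu_count()) as pool:
                for tile_id, result in pool.imap_unordered(process_tile, range(32)):
                    print(tile_id, result)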

    Read the article

  • Need details about applications that are running on Windows Azure

    - by veda
    I have an application which requires a large amount of data storage (on the order of petabytes) as well as computing resources. Instead of going for clusters, I am planning to propose using the Windows Azure cloud for this application. I have gone through the Windows Azure white papers and collected some details, but I feel that is not enough. I need case studies of applications that run on Azure and use Azure storage efficiently. I looked for research papers related to the performance of applications on Windows Azure, but as Azure is quite new, I wasn't able to find any. Now I am looking for white papers or details about applications that use Azure storage, to substantiate my proposal. I also need to understand the Windows Azure storage architecture and virtual machine architecture. Does anyone know of research papers, blogs, or other material related to these topics?

    Read the article

  • Data directory in automake

    - by Alex Farber
    I have some data files that should be distributed with my program. Using dist_pkgdata_DATA in Makefile.am, I get these files installed to /usr/local/data/share/package-name. The problem is that this data is read-only, and my program needs to modify it. Playing with the dist_sharedstate_DATA, dist_localstate_DATA, and dist-data_DATA variables, I got different installation directories, like /usr/local/com and /usr/local/var, but the data is always read-only. How can I distribute modifiable data files with my package? I need some common directory for all users, or maybe local data in a user directory.

    Read the article

  • In C#, can I hide/modify accessors in subclasses?

    - by Diego
    I'm not even sure what this principle is called or how to search for it, so I sincerely apologize if it has been brought up before, but the best way to explain it is with an example.

        class Properties {
            public string Name { get; set; }
        }

        class MyClass {
            class SubProperties: Properties {
                public override Name {
                    get { return GetActualName(); }
                    set { _value = SetActualName(value); }
                }
            }

            public SubProperties ClassProperties;

            private GetActualName() { ClassProperties.Name = "name"; }
            private SetActualName(string s) { ClassProperties.Name = SomeOtherFunction(s); }
        }

    The idea is to have any object that instantiates MyClass see a fully accessible ClassProperties property. To that object it would look exactly like a Properties object, but behind the scenes MyClass is actually computing and modifying the values of the fields. This method of declaration is obviously wrong, since I can't access GetActualName() and SetActualName() from within the SubProperties definition. How would I achieve something like this?

    Read the article

  • Standard way of distributing source code?

    - by penyuan
    I am relatively new to programming and have built a few working C++ command-line programs with Xcode on Mac OS X (with no dependencies on Mac-only libraries or APIs). My question is: what is the standard way of packaging and distributing the source code (and possibly compiled binaries)? For example, almost all Linux programs seem to be distributed such that a user simply needs to run ./configure && make && make install from the source directory. Thank you.

    Read the article

  • C or Ada for engineering computations?

    - by yCalleecharan
    Hi, as an engineer I currently use C to write programs dealing with numerical methods. I like C as it's very fast. I don't want to move to C++, and I have been reading a bit about Ada, which has some very good sides. I believe that much of the software in big industries has been, or more correctly was, written in Ada. I would like to know how C compares with Ada. Is Ada as fast as C? I understand that no language is perfect, but I would like to know if Ada was designed for scientific computing. Thanks a lot...

    Read the article

  • Scripting window positioning in OS X

    - by Matt Trent
    My primary computing setup is a MacBook Pro with a large external monitor. I'm looking for a convenient way of moving a number of the windows of open programs from the laptop screen to the external monitor when I plug it in. I've seen a few partial scripts for accomplishing this, but nothing comprehensive. Ideally I'd like to specify the laptop-screen location and external-monitor location for each window and be able to toggle between them. Is anyone aware of an existing utility or AppleScript that can do this?

    Read the article

  • Can I use/include gsdll32.dll redmonnt.dll (no modificaitons) in a commercial application for my cli

    - by scriptmaster
    Hi - I am not a license expert; however, after a lot of research I am still struggling to answer the following questions and would like to know if my assumptions are right!
    1) Is it legal to include gsdll32.dll and redmonnt.dll in a commercial product?
    2) Should I release any source code of the commercial app where I am using this library?
    3) Is there a commercial license (a small fee that my client can pay to use these libraries) for GhostScript?
    4) What's the meaning of "although it may be distributed ("aggregated") with commercial products" in this paragraph: http://pages.cs.wisc.edu/~ghost/doc/AFPL/8.00/New-user.htm#Commercial_use
    5) What are the alternative solutions?

    Read the article

  • Distributing WPF apps to a legacy user base: How seamless is it?

    - by Christian Nunciato
    I'm considering developing a WPF application, to be hosted by a legacy Windows app (C++), and I'm trying to get a better sense of how feasible it'll be to do so, given the broad user base I'm targeting. Knowing WPF targets .NET 3.5, I'm looking for some insight as to what the field looks like right now -- who's already got the runtime, whether it's distributed by Windows Update, if so, how (e.g., as an optional or required download, to which operating systems, etc.), whether XP pre-XP2 supports it (and how), and so on. The current version's got many thousands of users, using all manner of Windows operating systems, and while I'd very much like to leverage WPF to breathe some life into their user experience, I want to make sure I'm not shutting anyone out by doing so, or burdening them with a download they might have to do manually. I realize most, or all, of this information's out there already, in various places, but I figured I'd ask here first, since I'm sure some of you've probably already gone down this road and have valuable experiences to share. Thanks in advance!

    Read the article

  • Efficiency of while(true) ServerSocket Listen

    - by Submerged
    I am wondering if a typical while(true) ServerSocket listen loop takes an entire core to wait for and accept a client connection (even when implementing Runnable and using Thread.start()). I am implementing a type of distributed computing cluster, and each computer needs every core it has for computation. A master node needs to communicate with these computers (invoking static methods that modify the algorithm's behaviour). The reason I need to use sockets is their cross-platform / cross-language capabilities; in some cases, PHP will be invoking these Java static methods. I used a Java profiler (YourKit), and I can see my ServerSocket listen thread: it never sleeps and is always running. Is there a better approach to do what I want, or will the performance hit be negligible? Please feel free to offer any suggestion if you can think of a better way (I've tried RMI, but it isn't supported cross-language). Thanks, everyone!
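
    The short version is that a blocking accept() parks the thread in the kernel rather than spinning; a profiler shows the thread inside the loop, but it consumes essentially no CPU while waiting. A quick way to convince yourself, sketched here in Python for brevity (the OS-level behavior is the same for a Java ServerSocket), is to compare CPU time against wall-clock time while a thread is blocked in accept():

        import socket
        import threading
        import time

        srv = socket.socket()
        srv.bind(("127.0.0.1", 0))     # any free port; nobody will actually connect
        srv.listen()

        def wait_for_client():
            srv.accept()                # blocks in the kernel until a connection arrives

        threading.Thread(target=wait_for_client, daemon=True).start()

        cpu_before = time.process_time()    # CPU seconds used by this process so far
        time.sleep(2)                       # two seconds of wall-clock waiting
        cpu_used = time.process_time() - cpu_before
        print("CPU used while blocked: %.4f s" % cpu_used)   # prints ~0.0000 s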

    Read the article

  • Anyone up to creating a Tomcat-based alternative for GAE?

    - by bach
    Hi, if we had the possibility to run a GAE app without any code changes on our own servlet engine, that would be great, because if Google changes their billing policy, or their current policy doesn't fit our app's needs, we could just jump to our own server and do things that are not allowed in GAE, compromising on a single JVM and a single DB. We don't actually need a distributed system but more of a realtime system, with synchronization, true locking mechanisms, other servers/software installed on the server machine, a socket interface, etc. Such a package should include at least: Tomcat (or equivalent), DataNucleus Access Platform, and a Task Queue service. Any idea if it's easy to get such a thing, or if it already exists somewhere? Thanks

    Read the article

  • DLL dependant on curllib.dll - How can I fix this?

    - by haraldo
    Hi there, I'm new to developing in C++. I've developed a DLL in which I'm using curllib to make HTTP requests. When running the DLL through depend.exe, it tells me that my DLL now depends on curllib.dll. This simply doesn't work for me: my DLL is set up as a static library, not a shared one, and will be distributed on its own, so I cannot rely on a user having libcurl.dll installed. I thought that including libcurl in my project would be all that was needed and my DLL could be independent. If this is impossible to resolve, is there an alternative method I can use to make HTTP requests? Obviously I would prefer to use libcurl. Thanks in advance.

    Read the article

  • How random is Math.random() in Java across different JVMs or different machines?

    - by user881480
    I have a large program distributed across many different physical servers; each program spawns many threads, and each thread uses Math.random() in its operations to draw a piece from a number of common resource pools. The goal is to utilize the pools evenly across all operations. Sometimes it doesn't appear very random when looking at a snapshot of a resource pool to see which pieces are being drawn at that instant (it might actually be random, but it's hard to measure and know for sure). Is there something better than Math.random() that performs just as well (or at least not much worse)?
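
    One thing worth separating out, illustrated below in Python only because the effect is language-independent: even a perfectly uniform random draw leaves snapshot imbalances on the order of the square root of the number of draws, whereas a shared round-robin counter is exactly even at any instant. If exact evenness is the goal, the fix is usually the selection strategy rather than a better random generator.

        import collections
        import itertools
        import random

        POOL_SIZE = 16
        DRAWS = 10_000

        # Uniform random selection: even in expectation, but any snapshot is noisy.
        random_counts = collections.Counter(
            random.randrange(POOL_SIZE) for _ in range(DRAWS))

        # Round-robin selection: even at every instant.
        rr = itertools.cycle(range(POOL_SIZE))
        round_robin_counts = collections.Counter(next(rr) for _ in range(DRAWS))

        print(max(random_counts.values()) - min(random_counts.values()))             # typically tens
        print(max(round_robin_counts.values()) - min(round_robin_counts.values()))   # 0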

    Read the article

  • Agile development; on-line free tools!

    - by BT.
    We have been looking to implement an Agile methodology within our geographically distributed development team, so I need suggestions for any free on-line application that you have used and found useful. Right now we are using paper cards and a wall to manage this :), but we want to shift to an on-line version, preferably free. I have used TargetProcess at my previous job! My core requirements are: business analysts can add user stories; we can assign and prioritize user stories for developers; the QA team can add test cases around user stories; and project managers can track the time of all resources and pull reports for upper management.

    Read the article

  • What is a good Java web crawler library?

    - by DrDee
    Hi, I am about to develop a crawler in Java but don't feel like reinventing the wheel. A quick Google search gives a whole bunch of Java libraries for building a web crawler. Nutch is of course a very robust package, but it seems a bit too advanced for my needs: I only need to crawl a handful of websites a week, each containing a couple of thousand pages. Which open-source Java library would you recommend, considering speed, multithreading (or even distributed operation), extensibility with new functionality, active maintenance, and documentation?

    Read the article

  • Can EC2 instances be set up to come from different IP ranges?

    - by Joshua Frank
    I need to run a web crawler and I want to do it from EC2 because I want the HTTP requests to come from different IP ranges so I don't get blocked. So I thought distributing this on EC2 instances might help, but I can't find any information about what the outbound IP range will be. I don't want to go to the trouble of figuring out the extra complexity of EC2 and distributed data, only to find that all the instances use the same address block and I get blocked by the server anyway. NOTE: This isn't for a DoS attack or anything. I'm trying to harvest data for a legitimate business purpose, I'm respecting robots.txt, and I'm only making one request per second, but the host is still shutting me down. Edit: Commenter Paul Dixon suggests that the act of blocking even my modest crawl indicates that the host doesn't want me to crawl them and therefore that I shouldn't do it (even assuming I can work around the blocking). Do people agree with this?

    Read the article

  • What's the coolest machine you've ever worked on?

    - by mxg
    What's the most exotic, coolest, most unique, or most interesting machine you've worked on? Most of us work on machines with x86 architectures running some Windows or Linux variant. I'm sure there are those of you out there who are working on, or have worked on, machines with experimental architectures or operating systems. Maybe you worked on a machine that has some significance in the history of computing. I'd be interested to hear about it, and I'm sure others reading SO would be as well. EDIT: I appreciate all of you who took some time to talk about your experiences with interesting or unusual machines. I enjoyed reading your answers. Although it wasn't my intent to get nostalgic, I see that theme amongst the responses.

    Read the article

  • SQL Server, Remote Stored Procedure, and DTC Transactions

    - by marc
    Our organization has a lot of its essential data in a mainframe Adabas database. We have ODBC access to this data, and from C# we have queried/updated it successfully using ODBC/Natural "stored procedures". What we'd like to be able to do now is query a mainframe table from within SQL Server 2005 stored procs, dump the results into a table variable, massage them, and join the result with native SQL data as a result set. Executing the Natural proc from SQL works fine when we're just selecting from it; however, when we insert the result into a table variable, SQL seems to start a distributed transaction, which in turn wreaks havoc with our connections. Given that we're not performing updates, is it possible to turn off this DTC-escalation behavior? Any tips on getting DTC set up properly to talk to DataDirect's (formerly Neon Systems) Shadow ODBC driver?

    Read the article
