Search Results

Search found 3912 results on 157 pages for 'distributed caching'.


  • DFS - Stop sync of large folder that has since been removed

    - by g18c
    We have site-to-site DFSR on Windows Server 2008 R2 that had been running perfectly between site A and site B until someone dumped a 20GB folder into it. This has overwhelmed the upload and made the internet almost useless at site A (the upload bandwidth is low at the branch office). We have removed this folder from the DFS share on site A, but the internet is still really slow. Is there any way to cancel this sync, or some other way to get DFSR back into a happy state?

    Read the article

  • Is there any way to distribute x264 encoding jobs across multiple computers (to increase the encoding speed)?

    - by Breakthrough
    Does anyone know of a current, active solution to encoding x264 videos across many computers (via the network) to increase encoding FPS? Brownie points for cross-platform and open source, but just so you all know, I usually use Windows. Programs that I have heard of, and why I do not believe they are suitable: x264farm: Not actively developed. Good interface, but does not support two-pass encoding, and fails with newer x264 builds. ELDER: Again, not actively developed, but my issue was that it didn't work with new x264 builds, and it was very difficult to configure (read: randomly stopped working). While I don't absolutely need a program which is being actively developed, I would like one that supports two-pass encoding, and works with new(er) x264 builds. Additional information: So far, I've offered (and awarded!) two separate bounties on this question since I first posted it over two years ago, and I still haven't found a solution to this problem. What I'm looking for basically is a simple program to allow me to encode x264 videos using the processing power of multiple computers connected over a LAN. Furthermore, it would be nice if it worked with new(er) x264 builds, and supported two-pass encoding. If at any time someone has an updated answer, or a new solution to this problem, please post it and it will be given some consideration.

    Read the article

  • Why doesn't SSHFS let me look into a mounted directory?

    - by Jan
    I use SSHFS to mount a directory on a remote server. There is a user xxx on client and server. UID and GID are identical on both boxes. I use sshfs -o kernel_cache -o auto_cache -o reconnect -o compression=no \ -o cache_timeout=600 -o ServerAliveInterval=15 \ [email protected]:/mnt/content /home/xxx/path_to/content to mount the directory on the remote server. When I log in as xxx on the client I have no problems. I can cd into /home/xxx/path_to/content. But when I log in on the client as another user zzz and then $ ls -l /home/xxx/path_to I get this d????????? ? ? ? ? ? content and on $ ls -l /home/xxx/path_to/content I get ls: cannot access content: Permission denied When I do $ ls -l /mnt on the remote server I get drwxr-xr-x 6 xxx xxx 4096 2011-07-25 12:51 content What am I doing wrong? The permissions seem to be correct to me. Am I wrong?

    Read the article

  • Users get kicked out of a network drive (DFS)

    - by user71563
    Hi, in early January 2011 we switched completely to Windows Server 2008 R2 and Windows 7. On our domain controller we set up a DFS namespace that is presented to users as the Z: drive. We used the same DFS setup during our time with Windows Server 2003 R2 and Windows XP, and back then it always worked without problems. Since moving to Windows 7, we sometimes see that when a user accesses the Z: drive, Explorer jumps back to Computer before the user can do anything. After two or three attempts Explorer stays in the network drive and the user can work. This phenomenon occurs irregularly and we cannot pin down exactly why. No obvious entries are logged in the event log at the time. Does anyone know this problem or has anyone had similar experiences? I am grateful for any help. Greetings, sY!v3Rs

    Read the article

  • Problems when looping over a series of ssh-ed commands

    - by Jack Medley
    I have a series of server machines which I want to run the same command on. Each command takes hours, and (even though I am running the commands using nohup and setting them to run in the background) I have to wait for each to finish before the next starts. Here is roughly how I have set it up. On the host machine: for i in {1..9}; do ssh RemoteMachine${i} ./RunJobs.sh; done Where RunJobs.sh on each remote machine is: source ~/.bash_profile cd AriadneMatching for file in FileDirectory/Input_*; do nohup ./Executable ${file} & done exit Does anyone know of a way so that I don't have to wait for each job to finish before the next starts? Or alternatively a better way of doing this? I have a feeling that what I am doing is fairly sub-optimal. Cheers, Jack
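
    One way to avoid waiting on each host in turn is to launch every ssh session at once and only then wait for them all. Below is a minimal Python sketch of that idea; the host names and RunJobs.sh path are taken from the question, everything else is illustrative, and whether the remote jobs detach cleanly still depends on how nohup's output is redirected on the remote side.

        # launch_jobs.py - start ./RunJobs.sh on all remote machines in parallel
        import subprocess

        hosts = [f"RemoteMachine{i}" for i in range(1, 10)]

        # Start one ssh process per host without blocking on any of them.
        procs = [
            subprocess.Popen(
                ["ssh", host, "./RunJobs.sh"],
                stdout=subprocess.DEVNULL,  # discard remote output so ssh can return
                stderr=subprocess.DEVNULL,
            )
            for host in hosts
        ]

        # Wait for all of them together instead of one at a time.
        for host, proc in zip(hosts, procs):
            proc.wait()
            print(f"{host} finished with exit code {proc.returncode}")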

    Read the article

  • How can I run my program on a large number of computers? [closed]

    - by zenpoy
    I'm looking for a (preferably free) service for running an executable I wrote. It's not malicious, it's not a virus, it's not a scam, and if this is really important I can upload the Python source code instead. I wrote a small crawler to gather information regarding the style of web pages for my MA project, and I need a lot more data. EDIT: Here is more information on my problem, how I am approaching it, and where I'm stuck. As part of my research I'm trying to classify text based on its style (font-family for now). My data comes from web pages, so I wrote a client/server application - the client is a crawler that gathers this data and sends it to the server. The problem is that something like 99% of the internet is Arial, Verdana and Helvetica - other fonts are far rarer, so I need to spend a very long time gathering enough data regarding those fonts. Hope this explains it.

    Read the article

  • Load-balanced Linux server across internet?

    - by LinuxGnut
    I'm investigating setting up a load-balanced server solution consisting of three CentOS 5.4 boxes. Two of these boxes will reside in one facility, while the third will reside in a different facility. I'm currently working on setting up heartbeat, ldirectord and ipvsadm to load-balance the machines, but I'm not sure it's going to work. I'm not overly familiar with the details behind how all of these work, but is the load balancing going to work correctly when these servers are not all on the same LAN? I'm not sure whether heartbeat uses SNMP to send signals or not, which would only work over a LAN. Has anyone tried this or found a different solution?

    Read the article

  • Availability of big files on multiple servers

    - by Imises
    I have to handle many (1,000 - 30,000) big files ranging from 200MB up to 2GB. The demand for these files is variable (0 - 300 downloads per file). This is why a single file must be saved on 2 or more servers. My servers are located in different datacenters (in France), with different-sized HDDs (750GB to 4TB). Currently I share the files using PHP and ncftpget / ncftpput, but it's very slow. I need a solution for balancing these files across 7+ servers.
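
    A deterministic way to decide which two (or more) servers hold a given file is rendezvous (highest-random-weight) hashing: every server scores every filename, and the top-scoring servers own the replicas. A rough Python sketch of that placement rule follows; the server names are hypothetical, and the actual copying would still be done with rsync, FTP or similar.

        import hashlib

        def replica_servers(filename, servers, copies=2):
            """Return the `copies` servers responsible for `filename`."""
            def score(server):
                return hashlib.sha1(f"{server}:{filename}".encode()).hexdigest()
            return sorted(servers, key=score, reverse=True)[:copies]

        # Hypothetical server names, one per datacenter/box.
        servers = ["fr-dc1-a", "fr-dc1-b", "fr-dc2-a", "fr-dc2-b"]
        print(replica_servers("video_0001.bin", servers))

    A nice property of this scheme is that adding or removing a server only reassigns the files that server should own, which keeps rebalancing traffic low.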

    Read the article

  • Git for non-distributed, independent, lone programming... best practice(s)?

    - by explorest
    I am currently studying the git documentation to get the hang of distributed version control workflows and the git command line. I want to start using git with small, personal, pet projects so as to gain experience before using it on a larger scale (i.e., bigger projects, team development). Which areas of git should I, as a lone developer, devote most of my study time to, and which parts should I leave for the larger-scale work later on? In other words, which features of git will only really be grasped in team work, and therefore are not worth getting too involved with at an individual level?

    Read the article

  • scrapy - python question

    - by tom smith
    Hi. Maybe this is not the correct place to post, but I'm going to try anyway! I've got a couple of test Python parsing scripts that I created. They work well enough for me to test what I'm working on. However, I recently came across the Python framework Scrapy, which is used for web scraping. My app runs as a distributed process across a testbed of multiple servers. I'm trying to understand Scrapy, to see if it provides benefits over what I'm doing. So, if possible, I'd really like to talk with a few people who are grounded in, or who use, Scrapy. Thanks -tom [email protected]

    Read the article

  • Best CPUs for speeding up compiling times of C++ w/ DistGCC

    - by Jay
    I'm putting together a distributed build farm with DistGCC to speed up our team's compile times and am just looking for thoughts on which processors to use in the hosts. Are we going to get a noticeable decrease in time using 8 cores vs. 4 hyper-threaded cores? Is there a big difference in time between i7 and Xeon? Etc., etc. I just need advice from people who've put together kick-a build clusters. We've got most of the normal things for speeding up builds in place (precompiled headers, ccache, local gigabit connections between the machines, tons of RAM, etc.), so please just give advice on the best processor to use. Money is a factor, but anything's doable if the performance increase is noticeable. Thanks. Jay

    Read the article

  • What is the difference between Java RMI and JMS?

    - by Sanoj
    When designing a distributed application in Java there seem to be a few technologies that address the same kind of problem. I have briefly read about Java Remote Method Invocation and the Java Message Service, but it is hard to really see the difference. Java RMI seems to be more tightly coupled than JMS, because JMS uses asynchronous communication, but otherwise I don't see any big differences. What is the difference between them? Is one of them newer than the other? Which one is more common/popular in enterprises? What advantages do they have over each other? When is one preferred over the other? Do they differ much in how difficult they are to implement? I also think that Web Services and CORBA address the same problem.

    Read the article

  • ZooKeeper and RabbitMQ/Qpid together - overkill or a good combination?

    - by Chris Sears
    Greetings, I'm evaluating some components for a multi-data center distributed system. We're going to be using message queues (via either RabbitMQ or Qpid) so agents can make asynchronous requests to other agents without worrying about addressing, routing, load balancing or retransmission. In many cases, the agents will be interacting with components that were not designed for highly concurrent access, so locking and cross-agent coordination will be needed to avoid race conditions. Also, we'd like the system to automatically respond to agent or data center failures. With the above use cases in mind, ZooKeeper seemed like it might be a good fit. But I'm wondering if trying to use both ZK and message queuing is overkill. It seems like what Zookeeper does could be accomplished by my own cluster manager using AMQP messaging, but that would be hard to get really right. On the other hand, I've seen some examples where ZooKeeper was used to implement message queuing, but I think RabbitMQ/Qpid are a more natural fit for that. Has anyone out there used a combination like this? Thanks in advance, -Chris
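
    For the locking and cross-agent coordination part specifically, ZooKeeper's lock recipe is small to drive from an agent, independently of whatever carries the messages. Here is a rough sketch using the Python kazoo client; the ensemble address and lock path are made up, and kazoo is just one client option among several.

        from kazoo.client import KazooClient

        # Hypothetical ensemble address; replace with the real ZooKeeper hosts.
        zk = KazooClient(hosts="zk1:2181,zk2:2181,zk3:2181")
        zk.start()

        # Mutual exclusion around a component not built for concurrent access.
        lock = zk.Lock("/locks/legacy-component", "agent-42")
        with lock:  # blocks until this agent holds the lock
            print("holding the lock; safe to touch the shared component")

        zk.stop()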

    Read the article

  • What is a data serialization system?

    - by Yang
    According to the Apache Avro project, "Avro is a serialization system". By calling it a data serialization system, does that mean Avro is a product or an API? Also, I am not quite sure what a data serialization system actually is. For now, my understanding is that it is a protocol that defines how a data object is passed over the network. Can anyone explain it in an intuitive way, so that it is easier for people with a limited distributed-computing background to understand? Thanks in advance!
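
    Avro is both a specification and a set of libraries (Java, Python, C and others), not a network protocol by itself: a serialization system defines how an in-memory record is turned into bytes (for a file or a socket) and back, usually driven by a schema. The toy sketch below uses only the Python standard library to show the idea of schema-driven encoding; it is an illustration of the concept, not the Avro API.

        import struct

        # A tiny "schema": field order and types (i = int32, 16s = 16-byte string).
        USER_SCHEMA = struct.Struct(">i16s")

        def serialize(user_id, name):
            """Encode a record into a fixed layout of bytes."""
            return USER_SCHEMA.pack(user_id, name.encode("utf-8"))

        def deserialize(payload):
            """Decode the bytes back into the original record."""
            user_id, raw_name = USER_SCHEMA.unpack(payload)
            return user_id, raw_name.rstrip(b"\0").decode("utf-8")

        blob = serialize(42, "yang")   # bytes that can go into a file or over a socket
        print(deserialize(blob))       # -> (42, 'yang')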

    Read the article

  • Exposing multiple services efficiently over C# .NET remoting: more channels or more endpoints?

    - by wb
    I am using remoting over TCP for a prototype distributed server application where I want to have several varying services exposed from each remoting server process. In some cases I want the services running in the same process, but I don't want whatever is using a service to care about that. I am wondering: is it more efficient to have multiple services in the same process going over the same remoting channel, distinguished by endpoint URI/URL, or should I be creating new channels on different ports for each service in the same process? Using up ports isn't much of a problem, as the number of services will be low and the network and machine configuration is completely controlled. Also, it's not clear to me whether remoting sends the URI string for every single message or just at connection time, and whether the remoting framework is intelligent enough to reduce work if calls are made on the same machine or even in the same process. Thanks in advance.

    Read the article

  • MPI Large Data all to all transfer

    - by csslayer
    My MPI application has processes that generate some large data. Say we have N+1 processes (one for master control, the others are workers), and each worker process generates large data, which is currently simply written to normal files, named file1, file2, ..., fileN. The size of each file may be quite different. Now I need to send all of fileM to the rank M process to do the next job, so it's just like an all-to-all data transfer. My problem is how I should use the MPI API to send these files efficiently. I used to transfer these with a Windows shared folder before, but I don't think that's a good idea. I have thought about MPI_File and MPI_Alltoall, but those functions don't seem very suitable for my case. Simple MPI_Send and MPI_Recv seem hard to use because every process needs to transfer large data, and I don't want to use a distributed file system for now.
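
    One pattern that fits this is a personalized all-to-all done by hand: first exchange the sizes everyone will send, then post non-blocking point-to-point transfers for the payloads. Below is a rough mpi4py sketch of that pattern, with dummy byte strings standing in for the file contents; for multi-gigabyte files you would additionally chunk each transfer rather than hold a whole file in memory.

        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()
        size = comm.Get_size()

        # outgoing[j] stands in for the bytes this rank must deliver to rank j
        # (in the real application, the contents of fileJ read from disk).
        outgoing = [b"" if j == rank else bytes([rank]) * (1024 * (j + 1))
                    for j in range(size)]

        # Step 1: everyone tells everyone how many bytes to expect.
        recv_sizes = comm.alltoall([len(buf) for buf in outgoing])

        # Step 2: post non-blocking receives and sends for the payloads.
        incoming = [bytearray(n) for n in recv_sizes]
        requests = []
        for j in range(size):
            if j == rank:
                continue
            requests.append(comm.Irecv([incoming[j], MPI.BYTE], source=j, tag=77))
            requests.append(comm.Isend([outgoing[j], MPI.BYTE], dest=j, tag=77))

        MPI.Request.Waitall(requests)
        print(f"rank {rank}: received {[len(buf) for buf in incoming]} bytes per peer")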

    Read the article

  • RMI no such object in table, Server communication error

    - by ben-casey
    My goal is to create a distributed computing program that launches a server and a client at the same time. I need to be able to install it on a couple of machines and have all the machines communicating with each other, i.e. a master node and 5 slave nodes, all from one application. My problem is that I cannot properly use UnicastRef; I'm thinking that it is a problem with launching everything on the same port. Is there a better way I am overlooking? This is the part of my code that matters (my main class):

        try {
            RMIServer obj = new RMIServer();
            obj.start(5225);
        } catch (Exception e) {
            e.printStackTrace();
        }
        try {
            System.out.println("We are slave's ");
            Registry rr = LocateRegistry.getRegistry("127.0.0.1", Store.PORT, new RClient());
            Call ss = (Call) rr.lookup("FILLER");
            System.out.println(ss.getHello());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    And this is the server class:

        public RMIServer() {
        }

        public void start(int port) throws Exception {
            try {
                Registry registry = LocateRegistry.createRegistry(port, new RClient(), new RServer());
                Call stuff = new Call();
                registry.bind("FILLER", stuff);
                System.out.println("Server ready");
            } catch (Exception e) {
                System.err.println("Server exception: " + e.toString());
                e.printStackTrace();
            }
        }

    I don't know what I am missing or overlooking, but the output looks like this:

        Listen on 5225
        Listen on 8776
        Server ready
        We are slave's
        Listen on 8776
        java.rmi.NoSuchObjectException: no such object in table
            at sun.rmi.transport.StreamRemoteCall.exceptionReceivedFromServer(StreamRemoteCall.java:255)
            at sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:233)
            at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:359)
            at sun.rmi.registry.RegistryImpl_Stub.lookup(Unknown Source)
            at Main.main(Main.java:62)

    Line 62 is this ::: Call ss = (Call) rr.lookup("FILLER");

    Read the article

  • What kind of storage with two-way replication for multi site C# application?

    - by twk
    Hi, I have a web-based system written in ASP.NET backed by MSSQL. A synchronized replica of this system has to run at mobile locations and must be available regardless of the state of the connection to the main system (interruptions of a few hours happen). For now I am using a copy of the main web application and a copy of the MSSQL server with merge replication to the main system. This works unreliably, and setting up the replication is a pain. The amount of data the system contains is not huge, so I can migrate to a different storage type. For the new version of this system I would like to implement a new replication scheme. I am considering migrating to db4o for storage, with its replication support. I am also thinking about other possible solutions like CouchDB, which has native replication support. I would like to stay with C#. Could you recommend a way to go for such a distributed environment? PS. Master-slave replication is not an option: any side must be allowed to add/update data.

    Read the article

  • Start with remoting or with WCF

    - by Sheldon
    Hi. I'm just starting with distributed application development. I need to create (all by myself) an enterprise application for document management. The application will run on an intranet (within the firewall; no internet access is required now, BUT it probably will be later). The application needs to manage images that will be stored in MySQL Server (as blobs); those images will then be retrieved by the app and eventually one or more of them will be converted to PDF. Performance is the most important non-functional requirement. I have a couple of doubts. First, what do you suggest using, .NET Remoting or WCF over TCP/IP? (I think the second is best, since the moment I need to expose the business logic over the internet I only need to change the protocol.) Second, where do you suggest doing the transformation of the images to PDF? I'm using iText. (I have thought of keeping the business logic within IIS and exposing it via WCF, with that business logic responsible for getting the images and transforming them to PDF, because IIS and the MySQL Server are on the same physical machine.) I ask about where to do the transformation because the app must be accessible from multiple devices, and, for example, for mobile devices the PDF may not be necessary. Thank you very much in advance.

    Read the article

  • Problem with Boost::Asio for C++

    - by Martin Lauridsen
    Hi there, for my bachelor's thesis I am implementing a distributed version of an algorithm for factoring large integers (finding the prime factorisation). This has applications in e.g. the security of the RSA cryptosystem. My vision is that clients (Linux or Windows) will download an application and compute some numbers (these are independent, and thus suited for parallelization). The numbers (not found very often) will be sent to a master server, which collects them. Once enough numbers have been collected by the master server, it will do the rest of the computation, which cannot easily be parallelized. Anyhow, on to the technicalities. I was thinking of using Boost::Asio for a socket client/server implementation, for the clients' communication with the master server. Since I want to compile for both Linux and Windows, I thought Windows would be as good a place to start as any. So I downloaded the Boost library and compiled it as described on the Boost Getting Started page: bootstrap .\bjam It all compiled just fine. Then I tried to compile one of the tutorial examples, client.cpp, from Asio, found (here.. edit: can't post link because of restrictions). I am using the Visual C++ compiler from Microsoft Visual Studio 2008, like this: cl /EHsc /I D:\Downloads\boost_1_42_0 client.cpp But I get this error: /out:client.exe client.obj LINK : fatal error LNK1104: cannot open file 'libboost_system-vc90-mt-s-1_42.lib' Anyone have any idea what could be wrong, or how I could move forward? I have been trying pretty much all week to get a simple client/server socket program for C++ working, but with no luck. Serious frustration kicking in. Thank you in advance.

    Read the article

  • What library can I use to do simple, lightweight message passing?

    - by Mike
    I will be starting a project which requires communication between distributed nodes(the project is in C++). I need a lightweight message passing library to pass very simple messages(basically just strings of text) between nodes. The library must have the following characteristics: No external setup required. I need to be able to get everything up-and-running in my code - I don't want to require the user to install any packages or edit any configuration files(other than a list of IP addresses and ports to connect to). The underlying protocol which the library uses must be TCP(or if it is UDP, the library must guarantee the eventual receipt of the message). The library must be able to send and receive arbitrarily large strings(think up to 3GB+). The library needn't support any security mechanisms, fault tolerance, or encryption - I just need it to be fast, simple, and easy to use. I've considered MPI, but concluded it would require too much setup on the user's machine for my project. What library would you recommend for such a project? I would roll my own, but due to time constraints, I don't think that will be feasible.

    Read the article

  • Transferring HashMap between client and server using Sockets (JAVA)

    - by sar
    I am working on a Java project in which there are multiple terminals. These terminals act as clients and servers. For example, if there are 3 terminals A, B and C, then at any given point in time one of them, say A, will be a client making a broadcast request. The other two terminals, B and C, will reply. I am using sockets to make them communicate. Once the reply is received from all the other terminals, A will check the pool of channels to see if any one of the channels is free. It takes up the free channel and marks its availability false. The channel pool is implemented using a HashMap: HashMap channelpool = new HashMap(); channelpool = 1=true, 2=true, 3=false, 4=true, 5=true, 6=true, 7=true, 8=true, 9=true, 10=true So initially all the channels are true and any terminal can take any channel. But once a channel is taken it is set to false for the period of use and then reset to true. Now this HashMap has to be shared among the distributed terminals and kept up to date. I cannot use a shared resource among the terminals to store the HashMap. Can someone tell me an easy way to transfer the HashMap between the terminals? I would appreciate it if someone could point me to a website which discusses this.
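
    The question is about Java, but the wire pattern is language-independent: serialize the map, prefix it with its length, write it to the socket, and have the receiver read exactly that many bytes before decoding. Below is a small Python sketch of that framing, purely illustrative; in Java the same idea is usually done with ObjectOutputStream/ObjectInputStream or a JSON library.

        import json
        import socket
        import struct

        def send_map(sock, mapping):
            """Serialize the map, prefix it with its length, and send it."""
            payload = json.dumps(mapping).encode("utf-8")
            sock.sendall(struct.pack(">I", len(payload)) + payload)

        def recv_map(sock):
            """Read the 4-byte length prefix, then exactly that many payload bytes."""
            header = _recv_exact(sock, 4)
            (length,) = struct.unpack(">I", header)
            return json.loads(_recv_exact(sock, length).decode("utf-8"))

        def _recv_exact(sock, n):
            data = b""
            while len(data) < n:
                chunk = sock.recv(n - len(data))
                if not chunk:
                    raise ConnectionError("socket closed before full message arrived")
                data += chunk
            return data

        # Example: the channel pool from the question, keyed by channel number.
        channelpool = {str(ch): True for ch in range(1, 11)}
        channelpool["3"] = False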

    Read the article

  • Prevent your Silverlight XAP file from caching in your browser.

    - by mbcrump
    If you work with Silverlight daily then you have run into this problem: your XAP file has been cached in your browser and you have to empty the browser cache to resolve it. If you're using Google Chrome you typically do the following: go to Options -> Clear Browsing History -> Empty the Cache, and finally click Clear Browsing data. As you can see, this is a lot of unnecessary steps. It is even worse when you have a customer who says, "I can't see the new features you just implemented!" and you realize it's a cached XAP problem.

    I have been struggling with a way to prevent my XAP file from caching inside of a browser for a while now and decided to implement the following solution:

    1. If the Visual Studio debugger is attached, add a unique query string to the source param to force the XAP file to be refreshed.
    2. If the Visual Studio debugger is not attached, add the source param as Visual Studio generates it. This is also in case I forget to remove the code above in my production environment.
    3. I want the ASP.NET code to be inline in my .aspx page (I do not want a separate code-behind .cs or .vb page attached to the .aspx page).

    Below is an example of the hosting code generated when you create a new Silverlight project. As a quick refresher, the hard-coded param name="source" specifies the location of your XAP file.

        <form id="form1" runat="server" style="height:100%">
          <div id="silverlightControlHost">
            <object data="data:application/x-silverlight-2," type="application/x-silverlight-2" width="100%" height="100%">
              <param name="source" value="ClientBin/SilverlightApplication2.xap"/>
              <param name="onError" value="onSilverlightError" />
              <param name="background" value="white" />
              <param name="minRuntimeVersion" value="4.0.50826.0" />
              <param name="autoUpgrade" value="true" />
              <a href="http://go.microsoft.com/fwlink/?LinkID=149156&v=4.0.50826.0" style="text-decoration:none">
                <img src="http://go.microsoft.com/fwlink/?LinkId=161376" alt="Get Microsoft Silverlight" style="border-style:none"/>
              </a>
            </object>
            <iframe id="_sl_historyFrame" style="visibility:hidden;height:0px;width:0px;border:0px"></iframe>
          </div>
        </form>

    We are going to use a little bit of inline ASP.NET to generate the param name="source" dynamically and prevent the XAP file from caching. Let's look at the completed solution:

        <form id="form1" runat="server" style="height:100%">
          <div id="silverlightControlHost">
            <object data="data:application/x-silverlight-2," type="application/x-silverlight-2" width="100%" height="100%">
              <%
                  string strSourceFile = @"ClientBin/SilverlightApplication2.xap";
                  string param;
                  if (System.Diagnostics.Debugger.IsAttached)
                      // Debugger attached - refresh the XAP file.
                      param = "<param name=\"source\" value=\"" + strSourceFile + "?" + DateTime.Now.Ticks + "\" />";
                  else
                  {
                      // Production mode
                      param = "<param name=\"source\" value=\"" + strSourceFile + "\" />";
                  }
                  Response.Write(param);
              %>
              <param name="onError" value="onSilverlightError" />
              <param name="background" value="white" />
              <param name="minRuntimeVersion" value="4.0.50826.0" />
              <param name="autoUpgrade" value="true" />
              <a href="http://go.microsoft.com/fwlink/?LinkID=149156&v=4.0.50826.0" style="text-decoration:none">
                <img src="http://go.microsoft.com/fwlink/?LinkId=161376" alt="Get Microsoft Silverlight" style="border-style:none"/>
              </a>
            </object>
            <iframe id="_sl_historyFrame" style="visibility:hidden;height:0px;width:0px;border:0px"></iframe>
          </div>
        </form>

    We add the location of our XAP file to strSourceFile, and if the debugger is attached the code appends DateTime.Now.Ticks to the XAP file source and forces the browser to download the .XAP. If you view the page source of your Silverlight application you can verify it worked properly by looking at the param name="source" tag, as shown below.

        <param name="source" value="ClientBin/SilverlightApplication2.xap?634299001187160148" />

    If the debugger is not attached it will use the standard source tag, as shown below.

        <param name="source" value="ClientBin/SilverlightApplication2.xap"/>

    At this point you may be asking: how do I prevent my XAP file from being cached in my production app? Well, you have two easy options:

    1) I really don't recommend this approach, but you can force the XAP to be refreshed every time with the following snippet.

        <param name="source" value="ClientBin/SilverlightApplication2.xap?<%=Guid.NewGuid().ToString() %>"/>

    NOTE: You could also substitute Guid.NewGuid().ToString() with anything that creates a random value (I used DateTime.Now.Ticks earlier).

    2) Another solution that I like even better involves checking the XAP creation date and appending it to the param name="source". This method was described by Lars Holm Jenson.

        <%
            string strSourceFile = @"ClientBin/SilverlightApplication2.xap";
            string param;
            if (System.Diagnostics.Debugger.IsAttached)
                param = "<param name=\"source\" value=\"" + strSourceFile + "\" />";
            else
            {
                string xappath = HttpContext.Current.Server.MapPath(@"") + @"\" + strSourceFile;
                DateTime xapCreationDate = System.IO.File.GetLastWriteTime(xappath);
                param = "<param name=\"source\" value=\"" + strSourceFile + "?ignore=" + xapCreationDate.ToString() + "\" />";
            }
            Response.Write(param);
        %>

    As you can see, this problem has been solved. It will work with all web browsers and with stubborn proxy servers that cache your .XAP. If you enjoyed this article then check out my blog for others like this. You may also want to subscribe to my blog or follow me on Twitter.

    Read the article

  • ASP.NET MVC 2 disable cache for browser back button in partial views

    - by brainnovative
    I am using Html.RenderAction<CartController>(c => c.Show()); on my master page to display the cart on all pages. The problem is when I add an item to the cart and then hit the browser back button: it shows the old cart (from cache) until I hit the refresh button or navigate to another page. I've tried this and it works perfectly, but it disables the cache globally for the whole page and for all pages in my site (since this action method is used on the master page). I need to keep caching enabled for several other partial views (action methods) for performance reasons. I wouldn't like to use client-side script with AJAX to refresh the cart (and login view) on page load - but that's the only solution I can think of right now. Does anyone know better?

    Read the article
