Search Results

Search found 70336 results on 2814 pages for 'file history'.

  • Best Solution for Load Balancing NFS File Access?

    - by DairyKnight
    I'm trying to find an optimal solution for accessing the NFS file share in my company. We have a central file server in North America that receives 30–50 GB of updated data every day, and direct access from our Europe and Asia branches is very slow, so I'm trying to set up two replica servers on those continents. I'm currently using rsync, but I wonder if there is a better solution that acts more like a distributed RAID, allowing users to access a file transparently whether or not it has been synced: if the file hasn't been synced yet, the request would be dispatched to the remote server. I'm now looking into DRBD, but it doesn't seem to offer this kind of automatic request dispatching. Does anyone know of a better solution?
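
    For context, a minimal sketch of the kind of cron-driven rsync pull each replica can run; the master hostname and export path below are placeholders, and the flags are the part that matters:

        #!/bin/bash
        # Pull changes from the North America master to this replica.
        # -a preserves permissions and timestamps, -z compresses over the WAN,
        # --partial resumes interrupted transfers of large files instead of
        # restarting them, and --delete keeps the replica an exact mirror.
        rsync -az --partial --delete \
            filemaster.example.com:/export/nfs/ /export/nfs/

    This keeps the baseline replication cheap, but it does not address the window where a file exists on the master and not yet on the replica, which is the gap a smarter solution would need to close.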

  • Simple-Talk development: a quick history lesson

    - by Michael Williamson
    Up until a few months ago, Simple-Talk ran on a pure .NET stack, with IIS as the web server and SQL Server as the database. Unfortunately, the platform for the site hadn't quite gotten the love and attention it deserved. On the one hand, in the words of our esteemed editor Tony: "I'd consider the current platform to be a 'success'; it cost $10K, has lasted for 6 years, was finished end to end in 6 months, and although we moan about it, it has got us quite a long way." On the other hand, it was becoming increasingly clear that it needed some serious work. Among other issues, we had authors who wouldn't blog because our blogging platform, Community Server, was too painful for them to use.

    Forgetting about Simple-Talk for a moment, if you ask somebody what blogging platform they'd choose, the odds are they'd say WordPress. Regardless of its technical merits, it's probably the most popular blogging platform, and it certainly seemed easier to use than Community Server. The issue was that WordPress is normally hosted on a Linux stack running PHP, Apache and MySQL, quite a difference from our Microsoft technology stack. We certainly didn't want to rewrite the entire site; we just wanted a better blogging platform, with the rest of the existing, legacy site left as is.

    At a very high level, Simple-Talk's technical design was originally very straightforward: when your browser sends an HTTP request to Simple-Talk, IIS (the web server) takes the request, does some work, and sends back a response. To keep the legacy site running, but with WordPress running the blogs, a different design is called for: we now use nginx as a reverse proxy, which delegates requests to the appropriate application. So, when your browser sends a request to Simple-Talk, nginx takes that request and checks which part of the site you're trying to access. Most of the time, it just passes the request along to IIS, which can then respond in much the same way it always has. However, if your request is for the blogs, then nginx delegates the request to WordPress.

    Unfortunately, as simple as that design sounds, it hides an awful lot of complexity. In particular, the legacy site running on IIS was made up of four .NET applications. I've already mentioned one of them, Community Server, which handled the old blogs as well as managing membership and the forums. We have a couple of other applications to manage our newsletters and our articles, plus our own custom application to do some of the rendering on the site, such as the front page and the articles. Saying the site was made up of four .NET applications might conjure up an image of four applications, each with its own database, communicating over well-defined APIs. Sadly, reality was a little disappointing: we had four .NET applications that all ran on the same database. Worse still, many queries happily joined across tables from multiple applications, meaning that each application was heavily dependent on the exact data schema every other application used. Add to this that many of the queries were at least dozens of lines long, and practically identical to other queries except in a few key spots, and we can see that attempting to replace one component of the system would be more than a little tricky.

    However, the problems with the old system do give us a good place to start thinking about desirable qualities for any changes to the platform.
    Specifically:

      - Maintainability: the tight coupling between each .NET application made it difficult to update any one application without also having to make changes elsewhere.
      - Replaceability: the tight coupling also meant that replacing one component wouldn't be straightforward, especially if it wasn't on a similar Microsoft stack. We'd like to be able to replace different parts without having to modify the existing codebase extensively.
      - Reusability: we'd like to be able to combine the different pieces of the system in different ways for different sites.
      - Repeatable deployments: rather than having to deploy the site manually with a long list of instructions, we should be able to deploy the entire site with a single command, allowing you to create a new instance of the site easily, whether on production, staging servers, test servers or your own local machine.
      - Testability: if we can deploy the site with a single command, and each part of the site is no longer dependent on the specifics of how every other part of the site works, we can begin to run automated tests against the site, and against individual parts, both to prevent regressions and to do a little test-driven development.

    In the next part, I'll describe the high-level architecture we now have that hopefully brings us a little closer to these five traits.

  • The Evolution of Search: A History of Google Search [Video]

    - by Jason Fitzpatrick
    Internet search has changed enormously in the last decade; this video tour of Google's evolving search strategies shows us where we've been and where we're going. In the video, Google staff reflect on the last decade of search, innovations at Google, and where they're taking the search engine experience in the future. While the video clearly has a Google bias (they produced it, after all), it's still an interesting look at how Google, and internet search as a whole, have changed over the years. The Evolution of Search in Six Minutes [The Official Google Blog]

  • Puppet file transfer slow

    - by Noodles
    I have a puppet master and slaves in different datacenters, with ~40 ms of latency between them. When I run "puppet agent --test" on a slave to apply the latest manifest, it takes ~360 seconds to finish. After some digging I can see that the main cause of the slowdown is file transfers: it seems to take ~10 seconds to transfer each file. The files are only small configuration files, so I can't understand why they take so long. This is an example of a file in my manifest:

        file { "/etc/rsyncd.conf":
          owner  => "root",
          group  => "root",
          mode   => 644,
          source => "puppet:///files/rsyncd/rsyncd.conf",
        }

    Running puppet-profiler I see this:

        10.21s - File[/etc/rsyncd.conf]

    It also seems I cannot update more than one server at once: if I run two servers at the same time, puppet takes twice as long. I have changed the puppet master from WEBrick to Mongrel, but this doesn't seem to help. This makes deploying changes painful; a simple config change can take an hour to roll out to all servers.

  • Corrupt file indicative of corrupt hard drive?

    - by Elipsicon
    I have noticed that two files on my (almost full) 2 TB hard drive have been corrupted. In one file, a consecutive 20 kB (!) span has changed, even though the file's modification date hasn't changed and I haven't worked with it for over a year. This tells me that something "below" the file system level has messed with the data, and the only thing I can think of is hardware failure, most likely hard disk failure. I've already tested my RAM and it works flawlessly. I'm using ext4 on Linux, if that helps. Is this normal? Is it time to replace my hard drive before something worse happens? What can I do to prevent this from happening in the future? Is there some built-in feature of, or extension to, ext4 that includes additional error-correcting codes and/or watches files for changes that weren't caused by the OS?
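
    For what it's worth, ext4 checksums its journal (and, in recent versions, some metadata), but not file data; detecting data corruption at the file system level is ZFS/Btrfs territory. In the meantime, a checksum manifest built with standard coreutils can at least detect silent changes; a minimal sketch:

        # Build a checksum manifest for everything under /data.
        find /data -type f -print0 | xargs -0 sha256sum > ~/manifest.sha256

        # Later: report only the files whose contents no longer match.
        sha256sum --quiet -c ~/manifest.sha256

    Legitimate edits show up as mismatches too, so the manifest needs refreshing after intentional changes.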

  • Windows File System Analysis

    - by bouvierr
    I am looking for a free tool to analyze the NTFS file system of my Windows 7 PC. I want to easily see how data is distributed throughout the entire file system. The following applications seem very good, but they are not free and are probably overkill for my requirements:

      - FolderSizes 5
      - MailMeter Windows File System Reporting Tool

    I am aware that some applications (like Folder Size 2.5) can add a column to Windows Explorer showing the size of each folder, but I am looking for something more like a reporting tool. Thank you for your suggestions.

  • OSX: Selecting default application for all unknown and different file types (extensions)

    - by Leo
    I work in cluster computing and am using Mac OS X 10.6. I send off hundreds of computing jobs a day, and each one comes back with a different extension. For example, svmGeneSelect.o12345 is the standard output of my svmGeneSelect job, which is job number 12345. I don't control the extensions, and all the files are plain text. I want OS X to open any file whose extension it hasn't seen before in my favorite text editor when I click on it. Or, even better, let me set up default file associations for extension patterns, e.g. TextEdit for anything matching *.o*. I do NOT want to create file associations for individual files, since each extension will only ever exist once, and I do not want to go through selecting the application for every single file. Thanks for any help you can offer.
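
    One possible approach, assuming the third-party duti utility is acceptable: files with extensions OS X has never seen fall back to the generic public.data type rather than getting per-extension entries, so pointing that type at a text editor should catch all of them at once. A sketch:

        # After installing duti (available via MacPorts), make TextEdit the
        # default handler for any file OS X cannot otherwise identify.
        duti -s com.apple.TextEdit public.data all

    This isn't true *.o* pattern matching, but since the job outputs have no recognized type, the fallback covers them.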

  • Difference between sh file.sh and file.sh

    - by RAS
    I have two questions: what is the difference between executing sh filename.sh and filename.sh, and how can I make both of them give me the same output? I'm asking because I'm facing a problem right now. I'm trying to run a Java + SWT application from the terminal. When I run filename.sh, it gives me the desired output, but when I run sh filename.sh or bash filename.sh, it throws an error:

        Exception in thread "main" java.lang.NoClassDefFoundError: MainForm/java
        Caused by: java.lang.ClassNotFoundException: MainForm.java
            at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
            at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
        Could not find the main class: MainForm.java. Program will exit.

    I know this question has been asked here before, but I'm still not clear about it. I have gone through the following links:

      - What is the difference between ./ and sh to run a script?
      - Can scripts run even when they are not set as executable?

    Can anyone help me with this?
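
    The usual explanation, for what it's worth: running the script directly (./filename.sh, or filename.sh if its directory is on PATH) honors the script's shebang line, while sh filename.sh forces /bin/sh, which is dash rather than bash on many Linux systems, so bash-only syntax (for example in how the script builds the Java classpath) can silently break. A tiny sketch to see which interpreter actually runs, on Linux:

        #!/bin/bash
        # whichshell.sh - print the interpreter actually executing this script
        echo "running under: $(readlink /proc/$$/exe)"

    Compare the two invocations:

        chmod +x whichshell.sh
        ./whichshell.sh     # prints .../bash, from the shebang
        sh whichshell.sh    # prints .../dash on e.g. Ubuntu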

  • Unable to write into character device file in Ubuntu

    - by Surjya Narayana Padhi
    I have just written a Linux character driver and created one character device file named X, which I can see in /dev. Now I want to do some read/write operations on this file. I opened it in the vi editor, wrote some text into it, saved with :wq and exited, and it didn't show any error. But when I cat the same file, I am not able to see any content. I have tried this several times with the same result. Please let me know if I am doing something wrong.
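
    One likely explanation, offered as a guess: vi normally saves by writing a new file and renaming it over the old one, so instead of going through the driver's write handler it may simply replace the device node with a regular file. Shell redirection exercises the driver directly; a short sketch, using the device node X described above:

        # Write through the driver's write() handler.
        echo "hello driver" > /dev/X

        # Read back through the driver's read() handler.
        cat /dev/X

        # Or read a fixed amount explicitly:
        dd if=/dev/X bs=64 count=1

    If cat still shows nothing, check that the driver buffers written data and that its read() returns it, then returns 0 at end-of-data; otherwise cat will appear to hang or print nothing.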

  • Enable File sharing in Windows Vista

    - by LiveEn
    There seems to be a problem with my Windows Vista: in the Network and Sharing Center, only Network Discovery is visible, and I can't find an option for file sharing as mentioned on other websites. There is also no sharing option on any folder. Can someone please tell me how to enable file sharing in Windows Vista? I can't share any of my files on the network.

  • Why did Alan Kay say, "The Internet was so well done, but the web was by amateurs"?

    - by kalaracey
    OK, so I paraphrased. The full quote: "The Internet was done so well that most people think of it as a natural resource like the Pacific Ocean, rather than something that was man-made. When was the last time a technology with a scale like that was so error-free? The Web, in comparison, is a joke. The Web was done by amateurs." -- Alan Kay. I am trying to understand the history of the Internet and the web, and this statement is hard to understand. I have read elsewhere that the Internet is now used for very different things than it was designed for, and so perhaps that factors in. What makes the Internet so well done, and what makes the web so amateurish? (Of course, Alan Kay is fallible, and no one here is Alan Kay, so we can't know precisely why he said that, but what are some possible explanations?) See also the original interview.

  • AFP, SMB, NFS: which is the best data transfer protocol?

    - by Kami
    I have a computer with large hard disks running Gentoo, and I have to serve medium-to-large files over a wired network to Apple devices (all running OS X). Which protocol is best for the following needs?

      - Speed
      - Ease of use (for both the clients and the server)
      - Fewest limitations (maximum file size, restricted character sets for filenames)
      - Security
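
    The honest answer tends to depend on the workload, so it may be worth benchmarking each protocol on the actual network. A rough throughput probe, assuming the share under test is already mounted at the placeholder path /mnt/share:

        # Write: push 1 GiB through the share and force it out to the server.
        dd if=/dev/zero of=/mnt/share/testfile bs=1M count=1024 conv=fdatasync

        # Read: time pulling the file back.
        time cat /mnt/share/testfile > /dev/null

    Repeating this for AFP, SMB and NFS mounts of the same export, with the same client and file sizes, makes the numbers comparable.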

  • 'No such file or directory' error in bash, but the file exists?

    - by michael
    On Ubuntu, I get a 'No such file or directory' error when I try to execute a command. I have checked with ls -la; the file adb is there, and it has the 'x' flag set. So why am I getting 'No such file or directory'?

        ~/Programs/android-sdk-linux_x86/platform-tools$ ./adb
        bash: ./adb: No such file or directory
        ~/Programs/android-sdk-linux_x86/platform-tools$ ls -la
        total 34120
        drwxrwxr-x 3 silverstri silverstri     4096 2011-10-08 18:50 .
        drwxrwxr-x 8 silverstri silverstri     4096 2011-10-08 18:51 ..
        -rwxrwxr-x 1 silverstri silverstri  3764858 2011-10-08 18:50 aapt
        -rwxrwxr-x 1 silverstri silverstri   366661 2011-10-08 18:50 adb
        -rwxrwxr-x 1 silverstri silverstri   906346 2011-10-08 18:50 aidl
        -rwxrwxr-x 1 silverstri silverstri   328445 2011-10-08 18:50 dexdump
        -rwxrwxr-x 1 silverstri silverstri     2603 2011-10-08 18:50 dx
        drwxrwxr-x 2 silverstri silverstri     4096 2011-10-08 18:50 lib
        -rwxrwxr-x 1 silverstri silverstri 14269620 2011-10-08 18:50 llvm-rs-cc
        -rwxrwxr-x 1 silverstri silverstri 14929076 2011-10-08 18:50 llvm-rs-cc-2
        -rw-rw-r-- 1 silverstri silverstri      241 2011-10-08 18:50 llvm-rs-cc.txt
        -rw-rw-r-- 1 silverstri silverstri   332494 2011-10-08 18:50 NOTICE.txt
        -rw-rw-r-- 1 silverstri silverstri      291 2011-10-08 18:50 source.properties
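
    A common cause of this exact symptom, offered as a guess since the question doesn't say which architecture: on 64-bit Ubuntu, adb from the Android SDK is a 32-bit binary, and the 'No such file or directory' refers to the missing 32-bit ELF loader rather than to adb itself. A few commands to check:

        # Is the binary 32-bit while the host is 64-bit?
        file ./adb
        uname -m

        # Which loader does it request, and does that loader exist?
        readelf -l ./adb | grep 'program interpreter'
        ls -l /lib/ld-linux.so.2

        # On Ubuntu releases of that era the fix was the 32-bit compatibility
        # libraries (the package name has varied between releases):
        sudo apt-get install ia32-libs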

  • Extracting Data from a Source System to History Tables

    - by Derek D.
    This is a topic about which I find very little written; however, it is very important that the method for extracting data be done in a way that does not hinder the performance of the source system. In this example, the goal is to extract data from a source system into another database (or server) all [...]

  • Bash does not remember programs with non-zero exit status in history

    - by Amigable Clark Kant
    I enter a command. It fails. I press arrow up, modify something and enter it again ... hold it right there. It used to work like that. Now it's more like: I enter a command. It fails. I press arrow up, get the last command that didn't fail (likely "ls" or something useless) and type the whole thing back in by hand. What happened? It wasn't always like this, but it's been quite some time since this behavior changed, I'll give you that. Some years ago, at least. How do I put some sanity back into my bash prompt?
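
    Bash itself never filters history by exit status; HISTCONTROL and HISTIGNORE only match on the command text. So the likely culprit, offered as an assumption to verify, is a PROMPT_COMMAND hook in a dotfile or distro snippet that deletes failed commands. A sketch of what such a hook looks like, plus how to find and disable it:

        # A hook like this silently drops the last history entry whenever
        # the previous command returned non-zero:
        PROMPT_COMMAND='[ $? -ne 0 ] && history -d $(history 1 | awk "{print \$1}")'

        # Hunt for anything similar in the usual startup files:
        grep -n 'PROMPT_COMMAND\|history -d' ~/.bashrc ~/.bash_profile /etc/bash.bashrc 2>/dev/null

        # Disable it for the current session to confirm the diagnosis:
        unset PROMPT_COMMAND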

  • Reproducible file corruption for files on windows share

    - by bbuser
    We have about 40 file servers in our intranet to distribute software packages. The servers have names like example01, example02, etc. Every name resolves to a single IP address (A record), and each IP resolves back to that name (PTR). The thing is, for a certain file (mypackage.cab) I get different results depending on whether I use:

        \\192.0.2.01\fs\pkg\X12345678
        \\example01.foo\fs\pkg\X12345678

    In one case the file is correct; in the other, the file has exactly the right size but is all zeros. For a certain combination of client and server I can reproduce this reliably. It doesn't matter whether I download in Windows Explorer, via robocopy, or even from Linux with smbclient: it's always the same, one file corrupt, the other OK. It happens only for certain combinations of clients and servers. For example:

        client01 example01.foo -> OK     (192.0.2.01 is also OK)
        client01 example02.foo -> broken (but 192.0.2.02 is OK)
        client02 example01.foo -> broken (but 192.0.2.01 is OK)
        client02 example02.foo -> OK     (192.0.2.02 is also OK)
        client03 example06.foo -> OK     (but 192.0.2.06 is broken)
        client03 example07.foo -> OK     (192.0.2.07 is also OK)
        ...

    In some cases I get the broken file when I use the IP address, in other cases when I use the name. For every client the majority of servers is OK, but from every client I tested I have at least four cases of broken files. All this happens only for mypackage.cab (about 5 kB in size); it never happens for any of the other files in the same directory. Confused? Certainly I am. Any idea what can cause this, or what to try in order to figure it out, is welcome. Clients are Windows XP. Servers are NetApp filers I don't have access to. I can (and will) contact the filer team again, but first I want to have an idea of what is going on.
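
    To take Windows out of the picture and hand the filer team hard evidence, a Linux box can fetch the file from each server by name and by IP and compare checksums. A sketch, assuming guest access to the share works (otherwise add -U user%pass) and reusing the names from above:

        #!/bin/bash
        # Fetch mypackage.cab from each server, by hostname and by IP,
        # and print a checksum per source to map out the broken pairs.
        FILE='pkg/X12345678/mypackage.cab'
        for srv in example01.foo 192.0.2.01 example02.foo 192.0.2.02; do
            smbclient "//$srv/fs" -N -c "get $FILE /tmp/probe.cab" >/dev/null 2>&1 \
                && printf '%-16s %s\n' "$srv" "$(md5sum < /tmp/probe.cab)"
        done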

  • Is using something other than XML advisable for my configuration file?

    - by Earlz
    I have a small tool I'm designing which requires a configuration file of some sort. The configuration file in my case is really more of a database, but it needs to be lightweight, and the end user should find it easily editable if needed. However, it will also contain a lot of things (depending on certain factors, 1 MB or more). I've decided I'd rather use plain old text than SQLite or some such. With text, though, I have to deal with the variety of formats. So far, my options are:

      - XML
      - JSON
      - a custom format

    The data in my file is quite simple, consisting for the most part of key-value pairs, so a custom format wouldn't be that difficult, but I'd rather not have to write the support for it. I've never seen JSON used for configuration files, and XML would bloat the file size substantially, I think (I also just have a dislike of XML in general). What should I do in this case? Factors to consider:

      - The configuration file can be uploaded to a web service (so size matters).
      - Users must be able to edit it by hand if necessary (ease of editing and reading matters).
      - It must be possible to generate and process it automatically (speed doesn't matter a lot, but it shouldn't be excessively slow).
      - The "keys" and "values" are plain strings but must be escaped, because they can contain anything (unicode and escaping have to work easily).
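
    For what it's worth, JSON covers all four factors: it is compact, hand-editable, trivially machine-processable, and escaping and unicode come for free. A sketch of round-tripping arbitrary key/value strings with the jq command-line tool (assumed available; inside the tool itself, any JSON library does the same job):

        # Write a pair whose strings contain quotes, a newline and unicode;
        # jq performs all the escaping.
        jq -n --arg key 'max "size"' --arg value $'line1\nline2 ☃' \
           '{($key): $value}' > config.json

        # List every key, unescaped.
        jq -r 'keys[]' config.json

        # Look a value up by key.
        jq -r --arg key 'max "size"' '.[$key]' config.json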
