Search Results

Search found 10417 results on 417 pages for 'large'.


  • Sudden increase in available hard drive space

    - by Faken
    I was copying large files over to my computer for video editing and keeping an eye on the available hard drive space; I knew I should have had just enough room to fit everything. However, as I was getting close to filling up the drive, the available free space suddenly jumped by about 30 GB. Any idea what happened?

    Read the article

  • apache/nginx html file size limit

    - by Daniel
    I want to send extremely large HTML files to users via Apache and nginx, but the files are being truncated when they are served to the user's browser. Which setting determines the maximum file size, and where can I reconfigure this limit?
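
    Neither server enforces a hard size limit on static HTML files by default, so truncation usually points at response buffering (proxy or FastCGI) rather than at the file itself. A minimal nginx sketch of the directives commonly involved, assuming nginx fronts an upstream or PHP over FastCGI; the values are illustrative only:

        http {
            sendfile on;                       # static files go through the kernel, no size cap
            proxy_buffers 16 64k;              # in-memory buffers for proxied responses
            proxy_max_temp_file_size 1024m;    # cap for spooling a large response to disk
            fastcgi_buffers 16 64k;            # same idea for FastCGI (e.g. PHP-FPM) responses
            fastcgi_max_temp_file_size 1024m;
        }

    If nginx cannot write to its temp directories, large buffered responses can also come out short; the error log will say so.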

    Read the article

  • Tool or script to detect moved or renamed files on Linux prior to a backup

    - by Pharaun
    Basically I am searching to see if there exists a tool or script that can detect moved or renamed files, so that I can get a list of renamed/moved files and apply the same operations on the other end of the network to conserve bandwidth. Disk storage is cheap but bandwidth isn't, and the problem is that the files often get reorganized or moved into a better directory structure, so when you use rsync to do the backup, rsync won't notice that a file was merely renamed or moved and will re-transmit it over the network all over again, despite the same file existing on the other end. So I am wondering if there exists a script or tool that records where all the files are and their names, then, just prior to a backup, rescans and detects moved or renamed files; I can then take that list and re-apply the move/rename operations on the other side.

    The "general" features of the files:
    - Large, unchanging files
    - They can be renamed or moved around

    [Edit:] These are all good answers, and what I ended up doing was looking at all of the answers; I will be writing some code to deal with this. What I am thinking/working on now is:
    - Using something like AIDE for the "initial" scan, which also lets me keep checksums on the files; they are supposed to never change, so checksums help detect corruption.
    - Creating an inotify daemon that monitors these files/directories and records any renames and moves to a log file.
    - Covering the edge cases where inotify might fail to record that something happened to the filesystem, with a final step of using find to search for files whose change time is later than the last backup.

    This has several benefits:
    - Checksums from AIDE make it possible to check that the media did not get corrupted.
    - inotify keeps resource usage low, with no need to re-scan the filesystem over and over.
    - No need to patch rsync; if I have to patch things I can, but I would prefer to avoid it to keep the maintenance burden lower (i.e. no need to re-patch every time there is an update).

    I've used Unison before and it's really nice, however I could have sworn that Unison keeps copies around on the filesystem and that its "archive" files can grow to be rather large?
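
    As a concrete sketch of the inotify step above, inotifywait (from the inotify-tools package) can log rename/move events so they can be replayed on the backup side before the next rsync run; the watched path and log file are placeholders:

        #!/bin/bash
        # Record every rename/move under /data with a timestamp, so the same
        # operations can be re-applied to the remote copy before syncing.
        inotifywait -m -r \
            -e moved_from -e moved_to \
            --timefmt '%F %T' --format '%T %e %w%f' \
            /data >> /var/log/file-moves.log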

    Read the article

  • Addons which actually make Firefox run faster?

    - by Zombies
    I would like to know of add-ons that actually enhance Firefox's performance, whether intentionally or unintentionally. I find that Firefox tends to have major performance issues with certain websites: those with a fair amount of JavaScript and CSS, and probably a large DOM tree which may even be growing dynamically through JavaScript. The worst offenders are sites with heavy or poorly performing JavaScript, heavy Facebook integration, and too many advertisements.

    Read the article

  • What's a good text expander for Windows?

    - by chris.w.mclean
    What's a good text expander out there for Windows? Ideally it needs to work with MS Word and needs to be configurable in how it gets triggered (i.e. the string hdt followed by a space gets transformed into Help Desk Ticket, but hdt on its own gets ignored). It also needs an import option so that a large list of tags and expansions can be loaded. Plugins for UltraEdit/Notepad++ would also be acceptable.

    Read the article

  • MCollective alternative?

    - by WinkyWolly
    I really want to run MCollective on my fleet of servers, however there are a large number of untrusted users on each machine, which makes using MCollective less than ideal in my eyes. I'm aware there are precautions you can take, but I'm not familiar enough with ActiveMQ and want something that is a bit more mindful of environments like mine out of the box. Essentially I'm looking for a fact-collection tool. (Tagging under puppet/server since there is no mcollective tag and I don't have enough reputation to create one.)
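
    For plain fact collection without a message bus, one low-tech sketch is to pull Facter output (already present wherever the Puppet agent is installed) over SSH from a management host; the host list and cache path below are placeholders:

        #!/bin/bash
        # Gather facts from each host over key-based SSH and store them per host.
        mkdir -p /var/cache/host-facts
        while read -r host; do
            ssh -o BatchMode=yes "$host" facter > "/var/cache/host-facts/${host}.txt"
        done < /etc/host-list.txt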

    Read the article

  • How to convert a really big HTML file to PDF in Windows

    - by PeterStrange
    We have a few really large HTML files (60-100 MB) that we cannot convert to PDF with any reliability. Adobe Acrobat 9 crashes once it hits the 2 GB memory limit for 32-bit applications. OpenOffice converts, but removes some of the anchors. ActivePDF WebGrabber crashes. Is a 64-bit setup an option for this type of thing? I see a bunch of options out there, but can they do better than Adobe Acrobat 9 itself?
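
    One command-line converter worth trying (it is not mentioned in the question) is wkhtmltopdf, which renders through the WebKit engine and is available in 64-bit builds; a minimal invocation, with the file names as placeholders:

        :: Convert a large HTML file to PDF from the Windows command line.
        :: book.html and book.pdf are placeholder names.
        wkhtmltopdf book.html book.pdf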

    Read the article

  • scp stalls and ssh sessions freeze up (but eventually start again)

    - by coleifer
    I am running Ubuntu on various computers on a home network. Some are on 9.04 x64, some on 10.04 x64 and one on 9.04 x32. Running scp with a large file starts out at 2.1 mbps and drops down to about 200k, stalling and recovering repeatedly until the transfer completes. I've noticed the same behaviour when I have a secure shell open on any of these servers. I have tried this with 2 different routers, both brand new, from different brands.
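
    Two low-risk experiments that sometimes narrow this kind of problem down (they are not from the original post): cap scp's bandwidth to see whether the stalls are congestion related, and switch to rsync over ssh so an interrupted transfer resumes instead of restarting; the host and file names are placeholders:

        # Cap scp at roughly 1 MB/s; the -l limit is given in Kbit/s.
        scp -l 8192 bigfile.mkv user@server:/data/

        # The same copy via rsync over ssh; --partial lets a stalled transfer resume.
        rsync --partial --progress -e ssh bigfile.mkv user@server:/data/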

    Read the article

  • Is it possible to invert the image output on a computer for a rear projection screen?

    - by Faken
    Hello everyone, Is there a way to flip/invert/mirror the video output on a computer with an onboard Intel video card? I'm building a large rear-projection screen and I need to invert the image so it projects properly. One of my projectors has a function that automatically inverts the image, but the other one may or may not (likely not), and I need two projectors to drive the system, so I'm hoping to do the inversion on the computer side before projecting.
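
    If the machine happens to run Linux/X11 rather than Windows, xrandr can mirror an output in hardware; the output name below is a placeholder, so check xrandr's own listing first:

        # List connected outputs and their names.
        xrandr

        # Mirror the projector's output horizontally for rear projection.
        xrandr --output HDMI-1 --reflect x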

    Read the article

  • Off-the-shelf solutions for migrating data from azure blobstorage to rackspace cloud files

    - by S.C.
    I have a large amount of data (500+ GB) stored in Azure blob storage that I need to transfer to Rackspace Cloud Files. I know it is possible to perform such a migration using the SDKs from both services, but is there a free, standard, one-step process for doing this? I've built a proof-of-concept utility but would like to avoid having to optimize it to perform the transfer within a reasonable amount of time. Thanks.

    Read the article

  • Broken pipe on nginx

    - by schneck
    Hi there, I set up PHP/FastCGI with nginx and now I want to upload very large files via a Java applet. After about 30 seconds, the applet reports a "Broken pipe". I find nothing in the server logs. I have raised every relevant setting in php.ini (max_execution_time, max_input_time, memory_limit, post_max_size) to very high values, but nothing helps. Any idea?
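
    The php.ini values only matter once the request reaches PHP; with nginx in front, large uploads are usually capped or timed out on the nginx side first. A minimal sketch of the directives to check, with illustrative values, placed in the relevant server block:

        server {
            client_max_body_size  2048m;   # larger request bodies are rejected with a 413
            client_body_timeout   300s;    # allowed gap between reads of the request body
            fastcgi_read_timeout  300s;    # how long nginx waits on the PHP backend
            send_timeout          300s;
        }

    On the PHP side, upload_max_filesize also has to be raised alongside post_max_size.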

    Read the article

  • Amazon EC2 Ubuntu with GitLab and nginx - can't load?

    - by thebluefox
    Ok, so I've spooled up an Amazon EC2 server running Ubuntu, and then followed the instructions below to install GitLab: http://doc.gitlab.com/ce/install/installation.html

    The only step I've not been able to complete is running the following check on the status:

        sudo -u git -H bundle exec rake gitlab:check RAILS_ENV=production

    I get the following error:

        rake aborted!
        Errno::ENOMEM: Cannot allocate memory - whoami

    which I presume is because my EC2 instance is just a free-tier setup, so it isn't that well specced. Regardless, I've been trying to access this through my browser. I've set up the Elastic IP and pointed my domain at it (for the purpose of this, let's say it's git.mydom.co.uk). Doing a whois on this domain shows me it's pointing to the right place. For some reason though, I get "Oops, Chrome could not connect to git.mydom.co.uk".

    Now, for a period of time I was getting the nginx holding page (telling me I still needed to perform configuration). This disappeared after removing the default file from /etc/nginx/sites-enabled/ (after reading that this could be the issue on a troubleshooting page). Since then, I've had nothing, even when I symlinked the file back in from /sites-available. I've tried changing the owner of the git.mydom.co.uk file sat inside /sites-enabled and /sites-available to www-data, as suggested here, but I could only change the permission of the file in /sites-available, and not the symlinked one in /sites-enabled.

    The content of this file is as follows:

        upstream gitlab {
          server unix:/home/git/gitlab/tmp/sockets/gitlab.socket;
        }

        server {
          listen *:80 default_server;   # e.g., listen 192.168.1.1:80; In most cases *:80 is a good idea
          server_name git.mydom.co.uk;  # e.g., server_name source.example.com;
          server_tokens off;            # don't show the version number, a security best practice
          root /home/git/gitlab/public;

          # Increase this if you want to upload large attachments
          # Or if you want to accept large git objects over http
          client_max_body_size 20m;

          # individual nginx logs for this gitlab vhost
          access_log /var/log/nginx/gitlab_access.log;
          error_log /var/log/nginx/gitlab_error.log;

          location / {
            # serve static files from defined root folder
            # @gitlab is a named location for the upstream fallback, see below
            try_files $uri $uri/index.html $uri.html @gitlab;
          }

    All the paths mentioned in here look ok... I'm about at the end of my knowledge now!
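
    The Errno::ENOMEM from rake is consistent with a free-tier instance simply running out of memory; a common workaround (not part of the original post) is to add a swap file before re-running the check. A minimal sketch, with the size as a placeholder:

        # Create and enable a 1 GB swap file (run as root).
        dd if=/dev/zero of=/swapfile bs=1M count=1024
        chmod 600 /swapfile
        mkswap /swapfile
        swapon /swapfile

        # Re-run the GitLab check once swap is active.
        sudo -u git -H bundle exec rake gitlab:check RAILS_ENV=production

    For the browser timeout it is also worth confirming that the instance's security group allows inbound port 80 and that nginx is actually listening, for example with sudo netstat -plnt.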

    Read the article

  • Looking for a tagging media library application for Windows

    - by E3 Group
    I'm looking for a program that can:
    1) index specific folders and capture video, music and picture files
    2) allow me to assign tags or categories to these files
    3) allow me to search by tags or filenames
    I have a large collection of movies, music, etc. that I want to categorise and tag with multiple tags. I haven't yet been able to find any applications that will do this for me.

    Read the article

  • When running Adobe Acrobat's OCR on a PDF document, which downsampling produces a higher quality: 600 dpi or 72 dpi?

    - by Ricardo Altamirano
    I have a large PDF document that consists of scanned pages of a textbook. I want to run Adobe Acrobat 9's text recognition function on it, but I'm presented with this menu when I do. I'm confused by the options in the highlighted menu. What option will produce the highest quality/most readable text? I thought 600 dpi implies a higher quality image than 72 dpi, so I'm confused by "High (72 dpi)" and "Lowest (600 dpi)."

    Read the article

  • Is there a Distributed SAN/Storage System out there?

    - by Joel Coel
    Like many other places, we ask our users not to save files to their local machines. Instead, we encourage them to put files on a file server so that others (with appropriate permissions) can use them and so the files are backed up properly. The result of this is that most users have large hard drives that are sitting mostly empty. It's 2010 now. Surely there is a system out there that lets you turn that empty space into a virtual SAN or document library?

    What I envision is a client program that is pushed out to users' PCs and coordinates with a central server. The server looks to users just like a normal file server, but instead of keeping entire file contents it merely keeps a record of where those files can be found among the various user PCs. It then coordinates with the right clients to serve up file requests. The client software would be able to respond to such requests directly, as well as be smart enough to cache recent files locally. For redundancy the server could make sure files are copied to multiple PCs, perhaps allowing you to define groups in different locations so that an instance of the entire repository lives in each group, protecting against a disaster in one building taking down everything else.

    Obviously you wouldn't point your database server here, but for simpler things I see several advantages:
    - Files can often be transferred from a nearer machine.
    - Disk space grows automatically as your company does.
    - It should ultimately be cheaper, as you don't need to keep a separate set of disks.

    I can see a few downsides as well:
    - Occasional degradation of user PC performance, if the machine has to serve or accept a large file transfer during a busy period.
    - Writes have to be propagated around the network several times (though I suspect this isn't really much of a problem, as reading happens in most places more than writing).
    - There is still a need to send a complete copy of the data offsite occasionally, and this would make it very hard to do differentials.

    Think of this like a cloud storage system that lives entirely within your corporate LAN and makes use of your existing user equipment. Our old main file server is due for retirement in about 2 years, and I'm looking into replacing it with a small SAN. I'm thinking something like this would be a better fit. As a school, we have a couple of computer labs I can leave running that would be perfect for adding a little extra redundancy to the system.

    Unfortunately, the closest thing I can find is Dienst, and it's just a paper that dates back to 1994. Am I just using the wrong buzzwords in my searches, or does this really not exist? If not, is there a big downside that I'm missing?

    Read the article

  • How to disable or tune filesystem cache sharing for OpenVZ?

    - by gertvdijk
    For OpenVZ, an example of container-based virtualization, it seems that the host and all guests share the filesystem cache. This sounds paradoxical when talking about virtualization, but it is actually a feature of OpenVZ, and it makes sense too: because only one kernel is running, it's possible to benefit from sharing the same pages of filesystem cache in memory. And while it sounds beneficial, I think my setup here actually suffers in performance because of it.

    Here's why I think so: my machines aren't actually sharing any files on disk, so I can't benefit from this feature in OpenVZ. Several OpenVZ machines are running MySQL with MyISAM tables. MyISAM relies on the system's filesystem cache for caching of data files, unlike InnoDB's buffer pool. Also, some virtual machines are known to do heavy and large I/O operations on the same filesystem in the host. For example, when running cat *.MYD > /dev/null on some large database in one machine, I saw the filesystem cache dropping in another, monitored with htop. This essentially flushes all the useful filesystem cache in the guests (FIFO) and so it flushes the MySQL caches in the guests. Now users are complaining that MySQL is very slow, and it is: some simple SELECT queries take several seconds at times when disk I/O is heavily used by other machines.

    So, simply put: is there a way to avoid the filesystem cache being wiped out by other virtual machines in container-based virtualization?

    Some thoughts:
    - Choosing the algorithm for flushing the filesystem cache in the kernel (possible? how?).
    - Reserving a certain number of pages for a single VM (there seems to be no option for filesystem-cache type pages, from reading man vzctl).
    - Will running MySQL on another filesystem get me anywhere?

    If not, I think my alternatives are:
    - Use KVM for the VMs running MySQL with MyISAM. KVM actually assigns memory to the VM and does not allow caches to be swapped out unless a balloon driver is used.
    - Move to InnoDB and tune the buffer pools, dirty pages, etc. This is now considered 'nice to have' for the long term, as not everyone responsible for administration of the system understands InnoDB.
    - More suggestions welcome.

    System software: Proxmox (now 1.9, could be upgraded to 2.x). One big LV assigned for the VMs.
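
    If the InnoDB route is taken, the buffer pool replaces the reliance on the shared filesystem cache inside each container, and O_DIRECT keeps the data files out of the host page cache altogether. A minimal my.cnf sketch; the sizes are placeholders to be fitted to each container's memory limit:

        [mysqld]
        default-storage-engine         = InnoDB
        innodb_buffer_pool_size        = 1G        # main data/index cache, sized per container
        innodb_log_file_size           = 256M      # larger redo logs smooth out write bursts
        innodb_flush_log_at_trx_commit = 2         # relax per-commit fsync load slightly
        innodb_flush_method            = O_DIRECT  # bypass the shared page cache for data files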

    Read the article

  • Backing up oracle to TAPE

    - by andreas
    Hi folks, our Oracle database has grown very large as of late (roughly 400-500 GB) and backing up to the filesystem is no longer scalable for us. We are looking at using RMAN to back up to tape directly, rather than to the filesystem and then to tape. Can anyone shed some light on this please?
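
    Backing up straight to tape with RMAN goes through the SBT (system backup to tape) interface, which needs the tape vendor's media management library (for example Oracle Secure Backup, NetBackup or TSM) installed and linked first. A minimal RMAN sketch once that layer is in place; channel parameters are vendor specific and omitted here:

        RMAN> CONFIGURE DEFAULT DEVICE TYPE TO sbt;
        RMAN> CONFIGURE DEVICE TYPE sbt PARALLELISM 2;
        RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
        RMAN> DELETE NOPROMPT OBSOLETE;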

    Read the article
