Search Results

Search found 11409 results on 457 pages for 'large teams'.


  • How to configure Dovecot to not serve large emails to high-latency clients?

    - by Daniel Quinn
    I have a Dovecot mail server running at home on a flaky cable connection. For the most part, the IMAP functionality works beautifully, but I'd like to add one feature if I can: I want Dovecot not to serve large messages to high-latency clients. That is to say, if someone decides it's a good idea to send me a 9.3 MB email, I don't want to get it unless I'm on my LAN at home. This can't be an uncommon request, but I'm having trouble finding the configuration option in the documentation. Any ideas and/or good keywords to use in Googling would be awesome.

  • A simple Volume Replication Tool for a large data set?

    - by Jin
    I'm looking for a solution to the following:

      Server A (Site A): Win 2008 R2, approx. 10 TB (15 TB max) of data, well over 8 million files
      Server B (Site B): Win 2008 R2

    I want to asynchronously replicate Server A's volume to a volume on Server B for data redundancy - something I can point my users at ("go here for data") when/if Server A goes belly up due to machine problems, disaster, etc. Windows 2008 R2 does have DFS, but Microsoft apparently does not support a dataset this large (or, more accurately, more than 8 million files, according to the docs I could find). I also looked at Veritas Volume Replication, but that seems like too much, as I would also require Veritas Volume Manager. There is plenty of backup software that makes a 1:1 copy, which would be OK, but since the data will be transferring over the Internet, I'd like something that compresses during transfer, the way DFS does. Does anyone have any suggestions?

  • How do I protect large file downloads through PHP and/or Apache?

    - by Eric
    We have some large files (1-8 GB) that are not publicly accessible. Currently we serve them up through a PHP script that buffers the files in 1 MB chunks and writes them to the output. It's incredibly CPU-intensive and slows the server down when only a few downloads are active. We want to move the file-transfer work to Apache or some more efficient method. We are using cookie authentication. FTP downloads are out unless there's some way to authenticate FTP sessions through the existing PHP session cookie. Ideally we'd like something where we can use PHP to hide the link to the file while passing the file-transfer work off to Apache, which is no doubt far more efficient at HTTP file transfers than PHP. We also want to be able to resume downloads. Any help is appreciated.
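
    The pattern usually suggested for this is to let the script do only the authentication and then hand the transfer back to the web server via the mod_xsendfile module (nginx's equivalent is X-Accel-Redirect); in PHP that amounts to header('X-Sendfile: /path/to/file') after the cookie check. A minimal sketch of the same handoff, written as a Python WSGI app purely for illustration - require_session and the file path are placeholders:

        def require_session(environ):
            """Placeholder auth check against the existing session cookie."""
            return 'session=' in environ.get('HTTP_COOKIE', '')

        def application(environ, start_response):
            if not require_session(environ):
                start_response('403 Forbidden', [('Content-Type', 'text/plain')])
                return [b'Forbidden\n']
            # Hand the transfer back to the web server: with mod_xsendfile enabled,
            # Apache streams the file itself and honours Range requests, so
            # resumed downloads work without the script being involved.
            start_response('200 OK', [
                ('X-Sendfile', '/srv/private/big-file.bin'),  # path never exposed to the client
                ('Content-Type', 'application/octet-stream'),
                ('Content-Disposition', 'attachment; filename="big-file.bin"'),
            ])
            return [b'']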

  • Why is Internet access and Wi-Fi always so terrible at large tech conferences?

    - by Joel Spolsky
    Every tech conference I've ever been to, and I've been to a lot, has had absolutely abysmal Wi-Fi and Internet access. Sometimes it's the DHCP server running out of addresses. Sometimes the backhaul is clearly inadequate. Sometimes there's one router for a ballroom with 3000 people. But it's always SOMETHING. It never works. What are some of the best practices for conference organizers? What questions should they ask the conference venue or ISP to know, in advance, if the Wi-Fi is going to work? What are the most common causes of crappy Wi-Fi at conferences? Are they avoidable, or is Wi-Fi simply not an adequate technology for large conferences?

  • Where can I find a large body of *Python3* source code?

    - by Ira Baxter
    I'm testing a Python parser. I have Python 2.6/2.7 firmly under control, and some good (large) code samples on which I've tested it. Now I'm interested in testing my Python 3 variant. I've been to various Python open-source websites (e.g., http://pythonsource.com/), which list lots of packages, but they are pretty unclear as to whether these are Python 2.x or 3.x source files. The several samples that I downloaded all turned out to be Python 2.x. Where can I find a number of large Python 3 source codebases? I don't really want 1000 little separate Python 3 files; I'd prefer big applications.
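
    A quick way to triage downloaded samples is to check whether they parse under a Python 3 grammar at all; a small sketch (run it with a Python 3 interpreter, with file names on the command line). Code that happens to be valid under both 2.x and 3.x will also pass, so this only filters out the clearly 2.x-only files:

        import ast
        import sys

        def parses_as_python3(path):
            """True if the file parses under this interpreter's (Python 3) grammar."""
            try:
                with open(path, 'rb') as f:
                    ast.parse(f.read(), filename=path)
                return True
            except SyntaxError:
                return False

        if __name__ == '__main__':
            for name in sys.argv[1:]:
                print('py3' if parses_as_python3(name) else 'not py3', name)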

  • What is the reason large sites don't use MySQL with ASP.NET?

    - by Luke101
    I have read this article from highscalability about stackoverflow and other large websites. Many large, high-traffic .NET sites such as plentyoffish.com, Myspace and SO all use .NET technologies and SQL Server for their database. In the article, SO is quoted as saying: "As you add more and more database servers the SQL Server license costs can be outrageous. So by starting scale up and gradually going scale out with non-open source software you can be in a world of financial hurt." I don't understand why high-traffic .NET sites don't convert their databases to MySQL, as it is way cheaper than SQL Server.

  • Is my large Windows folder slowing down my machine?

    - by Moses
    I have a problem with my Windows installation running very slowly and my Windows folder being too large; I suspect the two problems are related.

      - My Windows folder is 17.4 GB.
      - I have 1807 folders, totalling 2.4 GB, that are prefaced with a $.
      - My System32 folder is 1.55 GB.
      - My Microsoft.NET folder is 654 MB - I don't know which of my programs, if any, are using it.
      - My Service Pack folder is 568 MB.
      - The Software Distribution folder is 536 MB.
      - The ie8updates folder is 380 MB.

    How can I reduce the size of these folders, and could their size be why I am running so slow?
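
    The $-prefaced folders are most likely update-uninstall leftovers, and folder size by itself is probably not what makes the machine slow, but a quick way to see which subfolders actually dominate is to total them up. A rough sketch, assuming it is run with enough rights to read C:\Windows:

        import os

        def folder_size(path):
            """Total size in bytes of all files under path (unreadable files skipped)."""
            total = 0
            for root, _dirs, files in os.walk(path, onerror=lambda err: None):
                for name in files:
                    try:
                        total += os.path.getsize(os.path.join(root, name))
                    except OSError:
                        pass
            return total

        base = r'C:\Windows'
        sizes = []
        for entry in os.listdir(base):
            full = os.path.join(base, entry)
            if os.path.isdir(full):
                sizes.append((folder_size(full), entry))
        for size, entry in sorted(sizes, reverse=True)[:20]:
            print('%8.1f MB  %s' % (size / 2.0**20, entry))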

  • Which Large File System Format to use for a USB Flash drive compatible with Ubuntu/Mac/Windows?

    - by wajiw
    I've had this problem for a long time and can't find a solution. I switch between the three OSes all the time and use a 1 TB USB drive to do so. I can't seem to find a format that is compatible across all systems and handles large files (at least 8-9 GB). Does anyone have a solution for this? Recently I've tried exFAT, but that messes up the filesystem when reading it on Windows after adding files from Ubuntu (using the FUSE driver). The OSes I'm currently using are Windows Vista/7, Mac OS X (10.6.5) and Ubuntu 10.10.

  • Why does cpio say "WARNING! These file names were not selected" when copying a large number of files?

    - by mmm bacon
    For over 10 years, I've been using this strategy to copy a large number of files between UNIX filesystems:

        cd source_directory
        find . -depth -print | cpio -pdm /path/to/destination_directory

    It works like a champ. However, I'm now getting this error from cpio:

        cpio: WARNING! These file names were not selected:
        (long list of files here...)

    The source directory is on OS X 10.5, and the destination directory is an NFS filesystem from an OpenSolaris server. Copying over NFS has never been a problem in the past. There's nothing strange about the filenames - no special characters or anything like that. Any ideas?

  • How do large companies handle software updates for users without administrative rights?

    - by CT
    I just started working for a small-to-medium-sized company doing IT support - maybe 150 or fewer users. Right now every user has administrative rights to their own machine, which allows them to install updates or whatever else they would like. I'm tired of getting onto users' machines that are bloated with crap they put on themselves, so my first thought is to take away administrative rights to their computers. This would also have other advantages, such as preventing a lot of drive-by malware from the web. The problem is that users would then be unable to install updates (even though I find most ignore these anyway). How do large companies handle software updates on all client machines? EDIT: Windows environment. Most servers are Windows Server 2003 Enterprise. Clients are all Windows: XP, Vista, and 7.

  • How well does Solr scale over a large number of facet values?

    - by Continuation
    I'm using Solr and I want to facet over a field "group". Since "group" is created by users, there can potentially be a huge number of values for it. Would Solr be able to handle a use case like this, or is Solr not really appropriate for facet fields with a large number of values? I understand that I can set facet.limit to restrict the number of values returned for a facet field. Would this help in my case? Say there are 100,000 matching values for "group" in a search and I set facet.limit to 50: would that speed up the query, or would the query still be slow because Solr still needs to process and sort through all the facet values to return the top 50? Any tips on how to tune Solr for a large number of facet values? Thanks.
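
    A note on the facet.limit part: as far as I know it only caps what is returned - Solr still has to count every distinct value of the field to find the top 50, so the usual tuning knobs are facet.method (fc vs. enum) and cache sizing. A minimal faceted query with a capped value list, sketched with plain Python against an assumed single-core Solr at localhost:8983:

        import json
        import urllib.parse
        import urllib.request

        params = {
            'q': '*:*',
            'rows': 0,               # no documents, only facet counts
            'facet': 'true',
            'facet.field': 'group',
            'facet.limit': 50,       # return only the top 50 values
            'facet.mincount': 1,     # skip values with zero hits
            'wt': 'json',
        }
        url = 'http://localhost:8983/solr/select?' + urllib.parse.urlencode(params)
        with urllib.request.urlopen(url) as resp:
            data = json.loads(resp.read().decode('utf-8'))

        # facet_fields.group is a flat [value, count, value, count, ...] list
        print(data['facet_counts']['facet_fields']['group'])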

  • File storage service that allows clients to upload large files to my account?

    - by deceze
    Can anyone recommend an online file storage service which fulfills these requirements?

      - I can create an account.
      - I can invite clients to upload files into my account.
      - Clients do not need to register to be able to upload.
      - Clients must not be able to see anything but their own files - or better, no files at all; they only get a dropbox.
      - Only I can access the uploaded files; everything is non-public.
      - The service is multilingual.

    I just need clients to be able to send me potentially large files in a dead-simple manner online, that's all. No registration step to go through, no software to download, no syncing or sharing. No setting up of individual folders and permissions for each individual client. No copying and pasting of links (a la Mediafire, Rapidshare etc.).

  • Creating a file server - How can I use a large VHD file in Hyper-V? (700GB)

    - by barfoon
    Hey everyone, after a few discussions (here, here, and here), I am still unable to create a simple VM that will be used as a file server hosted on my Hyper-V box. I have created a fixed 700 GB SCSI drive (.vhd file), as I have learned an IDE drive of this size is not possible. Not to sound too cynical, but it's blown me away how much trouble it's been to create a large amount of space and start using it. What is the best way to create a file server with a drive of this size hosted on Hyper-V Server 2008, and how can I get it going? Inclusion of OS, drivers, integration tools, etc. - anything you feel is required - would be greatly appreciated. Extra information: I am using the stand-alone version of Hyper-V Server, not Windows Server 2008. I have tried loading the Linux Integration Tools (linked in the comments of the last link above) onto a SUSE 11 VM, but the installation fails and the machine cannot see the VHD at all. Thanks very much.

  • Fast way to perform addition of 2 LARGE float arrays in Android. Optional JNI or OpenGL ES

    - by nathan
    I simply need to add floatArray1 to floatArray2, storing the result in floatArray2 - no third array. All arrays are one-dimensional but very large, probably as large as the OS will let me get away with. The most I would need is two float arrays with 40,000 floats each, but I could get away with a tenth of that at minimum. I would love to do this in 1/30th or 1/60th of a second, but that does not seem possible? Also, if the code is JNI, NDK or OpenGL ES, that's fine. Does Android have an assembly language or machine code I could use somehow?
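
    For a rough sense of scale only: 40,000 floats is a small array, and a single in-place elementwise add over it should fit comfortably inside a 1/60 s budget even without anything exotic. The sketch below shows the operation with NumPy on a desktop purely for illustration; on Android the analogue would be a plain loop in Java, or in C via the NDK.

        import time
        import numpy as np

        n = 40000
        a = np.random.rand(n).astype(np.float32)
        b = np.random.rand(n).astype(np.float32)

        t0 = time.perf_counter()
        b += a                      # elementwise, in place, no third array
        elapsed = time.perf_counter() - t0
        print('added %d floats in %.3f ms' % (n, elapsed * 1e3))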

  • How could I portably split large backup files over multiple discs?

    - by sourcejedi
    Context: I make backups/archives, primarily of photos. I'm experimenting with Bup, which is designed for backup to hard disk; basically it creates Git repos which include packfiles of up to 1 GB. But I still need last-ditch backups to keep offline and move offsite (and keeping them on read-only media is good too!). What are the options for archiving and splitting large files over several discs like CDs (and reading them back!)? I'd prefer methods which:

      - will stay readable in the future;
      - are portable, e.g. to Windows;
      - have known, simple implementations, so I could re-implement them myself if necessary. (Using Bup packs will stretch my robustness budget, so I want to be confident about how the other parts of the system would behave.)

    I heard split archives are possible with both ZIP and 7-Zip. Is that right?
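
    On the "known simple implementations" point: plain fixed-size splitting (what split(1) and cat do) is about as future-proof and portable as it gets, and it is trivial to re-implement. A minimal sketch in Python; the chunk and buffer sizes are arbitrary placeholders:

        def split_file(path, chunk_size=650 * 2**20, buf_size=1 << 20):
            """Write path.000, path.001, ... each at most chunk_size bytes; return part count."""
            part = 0
            with open(path, 'rb') as src:
                while True:
                    data = src.read(buf_size)
                    if not data:
                        return part
                    with open('%s.%03d' % (path, part), 'wb') as dst:
                        written = 0
                        while data:
                            dst.write(data)
                            written += len(data)
                            if written >= chunk_size:
                                break
                            data = src.read(min(buf_size, chunk_size - written))
                    part += 1

        def join_file(path, parts, buf_size=1 << 20):
            """Concatenate path.000 .. path.(parts-1) back into path."""
            with open(path, 'wb') as dst:
                for part in range(parts):
                    with open('%s.%03d' % (path, part), 'rb') as src:
                        while True:
                            data = src.read(buf_size)
                            if not data:
                                break
                            dst.write(data)

    The reading side can also be done with nothing more than cat (or copy /b on Windows) to concatenate the parts in name order, which is the portability argument for this format over smarter containers.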

  • How can I send super large files directly to another computer over the Internet for free?

    - by Cruise
    I regularly need to transfer very large files (30 GB) to my friend - financial statistics. I don't have any problem with bandwidth: it is very broad here. I did some research in the area, so:

      1. I would not use FTP, as it is very tricky to get working behind a NAT.
      2. I would not use Skype/MSN/ICQ, as they are not designed for file transfer and underperform on huge files.
      3. I would not use file-sharing services, as I would need to pay for big files (30 GB is a problem here) and I don't like holding any piece of my data on a third-party server.

    So, I need some smart tool that will do what I need: sending files directly browser-to-browser, not browser-server-browser. Is it so complex? Is there some web application on the Internet that can do this?

  • In Vim, what is the best way to select, delete, or comment out large portions of multi-screen text?

    - by Edward Tanguay
    Selecting a large amount of text that extends over many screens in an IDE like Eclipse is fairly easy, since you can use the mouse. But what is the best way to, for example, select and delete multi-screen blocks of text, or write three large methods out to another file and then delete them for testing purposes, in Vim when using it via PuTTY/ssh, where you cannot use the mouse? I can easily yank-to-the-end-of-line or yank-to-the-end-of-code-block, but if the text extends over many screens, or has lots of blank lines in it, I feel like my hands are tied in Vim. Any solutions? And a related question: is there a way to somehow select 40 lines, and then comment them all out (with "#" or "//"), as is common in most IDEs?

  • [Ruby] How can I randomly iterate through a large Range?

    - by void
    I would like to randomly iterate through a range. Each value will be visited only once and all values will eventually be visited. For example:

        (0..9).sort_by{rand}.map{|x| f(x)}

    where f(x) is some function that operates on each value. A Fisher-Yates shuffle could be used to increase efficiency, but this code is sufficient for many purposes. My problem is that sort_by will transform the range into an array, which is not cool because I am working with astronomically large numbers. Ruby will quickly consume a large amount of RAM trying to create a monstrous array. This is also why the following code will not work:

        tried = {} # store previous attempts
        bigint = 99**99
        bigint.times {
          x = rand(bigint)
          redo if tried[x]
          tried[x] = true
          f(x) # some function
        }

    This code is very naive and quickly runs out of memory as tried obtains more entries. What sort of algorithm can accomplish what I am trying to do?
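
    One constant-memory approach is to walk a full-period linear congruential generator over a power-of-two modulus at least as large as the range, and skip the overshoot: every value is visited exactly once per period and nothing is ever stored. Sketched in Python here just to keep the arithmetic explicit - it ports to Ruby directly:

        import random

        def shuffled_range(n):
            """Yield each integer in 0 .. n-1 exactly once, in pseudo-random order,
            using O(1) memory (no array of n elements is ever built).

            An LCG over m (the next power of two >= n) has full period when c is
            odd and a % 4 == 1 (Hull-Dobell conditions for a power-of-two modulus);
            values >= n are simply skipped.
            """
            m = 4
            while m < n:
                m *= 2
            a = 4 * random.randrange(m // 4) + 1   # a % 4 == 1
            c = 2 * random.randrange(m // 2) + 1   # c odd
            x = random.randrange(m)
            for _ in range(m):
                if x < n:
                    yield x
                x = (a * x + c) % m

        print(list(shuffled_range(10)))   # some permutation of 0..9

    The visiting order is only weakly random (power-of-two LCGs have very regular low bits), so if statistical quality matters, a format-preserving permutation of the index (e.g. a small Feistel network) is the heavier-duty version of the same idea.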

  • How can I persist a large Perl object for re-use between runs?

    - by Alnitak
    I've got a large XML file which takes over 40 seconds to parse with XML::Simple. I'd like to be able to cache the resulting parsed object so that on the next run I can just retrieve the parsed object and not re-parse the whole file. I've looked at using Data::Dumper, but the documentation is a bit lacking on how to store and retrieve its output from disk files. Other classes I've looked at (e.g. Cache::Cache) appear designed for storage of many small objects, not a single large one. Can anyone recommend a module designed for this? EDIT: The XML file is ftp://ftp.rfc-editor.org/in-notes/rfc-index.xml. On my Mac Pro, the benchmark figures for reading the entire file with XML::Simple vs Storable are:

                s/iter   test1   test2
        test1     47.8      --   -100%
        test2    0.148  32185%      --
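
    The question is Perl, and the benchmark above already points at the usual answer (Storable's store/retrieve, or nstore for a portable byte order). For reference, the general parse-once-then-cache pattern, independent of language, looks like this sketch in Python with pickle - re-parsing only when the source file is newer than the cache:

        import os
        import pickle

        def load_parsed(source, cache, parse):
            """Return parse(source), re-parsing only when the cache is missing or stale."""
            if (os.path.exists(cache)
                    and os.path.getmtime(cache) >= os.path.getmtime(source)):
                with open(cache, 'rb') as f:
                    return pickle.load(f)
            data = parse(source)
            with open(cache, 'wb') as f:
                pickle.dump(data, f, protocol=pickle.HIGHEST_PROTOCOL)
            return data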

  • Copying a large directory tree locally? cp or rsync?

    - by Rory
    I have to copy a large directory tree, about 1.8 TB. It's all local. Out of habit I'd use rsync, but I wonder if there's much point, and whether I should rather use cp. I'm worried about permissions and uid/gid, since they have to be preserved in the copy (I know rsync does this), as well as things like symlinks. The destination is empty, so I don't have to worry about conditionally updating some files. It's all local disk access, so I don't have to worry about ssh or the network. The reason I'd be tempted away from rsync is that rsync checksums files; I don't need that, and I'm concerned it might take longer than cp. So what do you reckon, rsync or cp?

  • Utility for notifying a user that their roaming profile is getting too large to copy before shutdown?

    - by leeand00
    My users are having an issue with their roaming profiles getting too large and then being lost. I believe this is because they are storing too much in their roaming profiles. Is there a program that can be installed on Windows that will:

      - listen for a logoff event;
      - check the size of the roaming profile against a size limit I set;
      - if the roaming profile is too big, notify the user that they have to decrease the size of the profile?

    Does a program like this exist, or does it need to be written?
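
    As far as I recall, Windows has a built-in "Limit profile size" user group policy (enforced by proquota) that does roughly this. If a home-grown notifier is preferred, the core really is only a few lines; a rough sketch in Python, meant to be run as a logoff script via Group Policy - the limit and the message text are placeholders:

        import ctypes
        import os

        LIMIT_MB = 200  # placeholder quota

        profile = os.environ.get('USERPROFILE', '')
        size_mb = sum(
            os.path.getsize(os.path.join(root, name))
            for root, _dirs, names in os.walk(profile)
            for name in names
            if os.path.isfile(os.path.join(root, name))
        ) / 2.0**20

        if size_mb > LIMIT_MB:
            ctypes.windll.user32.MessageBoxW(
                None,
                'Your roaming profile is %.0f MB (limit %d MB). Please move large '
                'files out of it or the profile may fail to sync.' % (size_mb, LIMIT_MB),
                'Roaming profile too large',
                0x30,  # MB_OK | MB_ICONWARNING
            )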

  • How do I upload large (30MB) files via a web interface?

    - by Dan
    Because I'm stumped... The client needs to be able to upload large images to a library, but the upload fails after 5-6 MB (over my poor connection). It seems to be timing out, as the file size at failure isn't consistent. The setup is a form which is accepted by PHP. I've googled and played with php.ini, and everything is set for big uploads and long timeouts. The platform is a dedicated Windows server at GoDaddy. What's going wrong?
