Search Results

Search found 21702 results on 869 pages for 'large objects'.


  • Sending a large number of mails causing problems on CentOS 6 / Plesk 10

    - by papakost
    I have a VPS running CentOS 6. When the system sends the daily newsletter, after a while (e.g. after about 2000 emails) I get the error "Unable to send mail" and system memory usage goes really high. Up to that point the mails are delivered normally. The other symptoms are: I cannot see anything in /var/log/maillog (the file does not seem to be written to); all files in /var/spool/mail are 0 bytes in size; and from time to time the httpd log shows errors like: /usr/sbin/sendmail: error while loading shared libraries: libc.so.6: cannot open shared object file: Error 23. The "Activate mail service on domain" setting in Plesk is deactivated. Any idea what's going wrong here?
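
    One diagnostic worth trying (a minimal sketch, not from the thread; the /proc path is standard on Linux): errno 23 on Linux is ENFILE, "too many open files in system", which would explain sendmail failing to map libc.so.6 mid-run:

      import errno

      # errno 23 on Linux is ENFILE: the system-wide open-file table is full,
      # which matches "cannot open shared object file: Error 23" above.
      assert errno.errorcode[23] == "ENFILE"

      # /proc/sys/fs/file-nr: allocated handles, allocated-but-unused, maximum
      with open("/proc/sys/fs/file-nr") as f:
          allocated, unused, maximum = f.read().split()
      print("%s of %s file handles allocated" % (allocated, maximum))
      # If allocated approaches maximum during a newsletter run, raising
      # fs.file-max via sysctl helps; a handle leak is the more likely cause.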

  • Uploading many large files to a remote server

    - by TiernanO
    I am in the process of creating an offsite backup and need to do an initial load of data. Currently that's about 400 GB, give or take 10 GB or so. The backup system produces files of about 4 GB each, plus some other, smaller related files. So I need to transfer all 400-ish gigs to a remote server, but how? What is the best method? I have full remote access to the server, so I can install anything I need. There are Windows, Linux and a Solaris VM running on the box itself, so any of those can be used there, and I have Windows and Linux at home. I have 2 internet connections in the house, 10 Mb/s upload on each, so something that could split the transfer across connections would be handy (kind of like GetRight, but in reverse... PutRight?).
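
    A minimal sketch of one approach (host names and paths are hypothetical): split the backup files into two batches and run one rsync-over-SSH process per batch; --partial lets an interrupted 4 GB file resume rather than restart. Note that actually steering each process over a different uplink needs policy routing or two distinct remote endpoints, which this sketch does not handle:

      import glob, subprocess

      files = sorted(glob.glob("/backups/out/*"))     # hypothetical local path
      batches = [files[0::2], files[1::2]]            # one batch per connection

      procs = [
          subprocess.Popen(
              ["rsync", "-av", "--partial", "--progress"]
              + batch + ["user@remote:/backups/incoming/"]
          )
          for batch in batches if batch
      ]
      for p in procs:
          p.wait()    # both transfers run concurrently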

  • Large RAID 10 vs. small RAID 1

    - by user116399
    The machine will store and serve millions of small files (<15 KB each), requiring about 400 GB of total storage. Considering the exact same SATA hard drive maker and model, in the exact same environment (OS, CPU, RAM, RAID controller, etc.), which of the setups below would be faster?
      A) RAID 1 with 2 drives of 2 TB each, for 2 TB of total storage
      B) RAID 10 with 4 drives of 2 TB each, for 4 TB of total storage
    [EDIT]: I'm aware RAID 10 is faster than RAID 1. But, at least in theory, the larger the disk, the longer seeks and writes take. So will the performance gain of RAID 10 be outweighed by the "drag" caused by the larger disk area when seek/write operations happen?

  • MySQL: table organisation for very large sets with high update frequency

    - by Remiz
    I'm facing a dilemma in the choice of my MySQL schema. Here is an extremely simplified picture of my database (schema here: http://i43.tinypic.com/2wp5lxz.png). In one sentence: for each customer, the application harvests text data and attaches tags to each piece of data collected. As an approximation of the usage of each table, here is what I expect:
      customer: ~5000, shouldn't grow fast
      data: 5 million per customer, could double or triple for big customers
      tag: ~1000, fairly fixed size
      data_tag: easily hundreds of millions per customer; each piece of data can be tagged a lot
    The harvesting process is permanent, meaning that around every 15 minutes new data arrives and is tagged, which requires constant index refreshing. A lot of my queries are a SELECT COUNT of DATA between specific DATES, tagged with a specific TAG, for a specific CUSTOMER (very rarely will they involve several customers). You can imagine that with this volume of data I'm facing a challenge in terms of data organization and indexing. Again, this is a very minimalistic and simplified version of my structure. My question is, is it better:
      to stick with this model and manage crazy index optimization? (which potentially means billions of rows in the data_tag table)
      to change the schema and use one data table and one data_tag table per customer? (which means 5000 tables in my database)
    I'm running all of this on a replicated, dedicated MySQL 5.0 server (quad-core, 8 GB of RAM). I only use InnoDB; I also have another server running Sphinx. Knowing all this, I can't wait to hear your opinion. Thanks.
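
    For reference, the COUNT query described maps onto a single composite index in the one-big-table model; a sketch with hypothetical column names, since the linked schema image gives no exact ones:

      # Assumed columns: data_tag(customer_id, tag_id, data_id) and
      # data(id, created_at). Both sets of names are guesses from the text.
      COUNT_SQL = """
          SELECT COUNT(*)
          FROM data_tag dt
          JOIN data d ON d.id = dt.data_id
          WHERE dt.customer_id = %s
            AND dt.tag_id = %s
            AND d.created_at BETWEEN %s AND %s
      """
      # A composite index lets InnoDB resolve the WHERE from the index alone:
      #   CREATE INDEX ix_cust_tag ON data_tag (customer_id, tag_id, data_id);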

  • I can't open a Word file because it's too large

    - by Jane
    I was creating a file in MS Word 2007 and included a number of images, without compressing them as I put them into the file. I managed to save the file, but have not been able to reopen it since, as Word says I have exceeded the 32 MB limit. I am working on an old MacBook (OS X 10.4.11). I have tried to open the file in both OpenOffice and LibreOffice, but it just causes those programs to crash. Is there any way of reducing the file size without opening the document?

  • IIS takes a large amount of time before loading pages

    - by Lukes123
    I am running an ASP.NET 3.5 website on IIS 6 with Server 2003. Whenever I modify any of the ASPX files, any page on the site then takes about 2-3 minutes before it starts to load. Even the smallest modification causes this to happen. Why is this?

  • Streaming a large file

    - by Rich
    Quick question: say I wanted to download a file of considerable size, 10 GB say, and I sent the GET request to a web server for that file. The question is: if the client stopped reading from the TCP connection, would the entire file still be downloaded, or does it depend on the client sending back an ACK asking for the next packet? Hope that makes sense. This question was originally asked involving the Tor network, but I just want to know how a standard internet connection would handle this. Thanks
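
    The mechanism at work here is TCP flow control; a small client sketch (host and path are placeholders) makes the behaviour visible:

      import socket, time

      s = socket.create_connection(("example.com", 80))
      s.sendall(b"GET /bigfile HTTP/1.1\r\nHost: example.com\r\n\r\n")

      for _ in range(4):
          s.recv(65536)     # read a little of the response...
      time.sleep(600)       # ...then stop reading entirely

      # The kernel only ACKs what fits in the socket receive buffer; once that
      # buffer and the advertised TCP window fill, the server's writes block,
      # so the bulk of a 10 GB file is never sent while the client isn't reading.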

  • Best way to copy large amount of data between partitions

    - by skinp
    I'm looking to transfer data between 2 logical volumes on an HP-UX server. I have a couple of these transfers to do, some of which are mostly binary (Oracle tablespaces...) and some of which are more text files (logs...). Used data size of the volumes is between 100 GB and 1 TB. Also, I will be changing the block size from 1K to 8K on some of these partitions. Things I'm looking for: guaranteed data integrity, the fastest possible transfer speed, and preserved file ownership and permissions. Right now I've thought about dd, cp and rsync, but I'm not sure which is best to use, or the best way to use them...
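
    A sketch of the rsync option (mount points are hypothetical, and it assumes rsync is installed on the HP-UX box): -a preserves permissions, ownership and timestamps, -H keeps hard links, and a second --checksum dry-run pass re-reads both sides to verify integrity:

      import subprocess

      src, dst = "/mnt/lv_old/", "/mnt/lv_new/"   # hypothetical mount points

      # Copy: archive mode keeps ownership/permissions; -H keeps hard links.
      subprocess.check_call(["rsync", "-aH", "--numeric-ids", src, dst])

      # Verify: --checksum compares file contents; -i itemizes any mismatch.
      subprocess.check_call(
          ["rsync", "-aH", "--checksum", "--dry-run", "-i", src, dst]
      )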

  • Default profile for large

    - by user63434
    Hi, I am setting up a master image to clone to Windows 7 clients of the same machine type. I logged in as administrator, installed all the programs and changed the desktop settings etc., but my local administrator profile is now 244 MB in size, and it will become the local machine's default profile after sysprep. We have a 2003 server, and I want to use a mandatory profile for all login users, which means I need to copy this profile to the server so that any user logging in to the domain uses it. Loading a 244 MB profile is going to be very slow, since it is removed from the client at logoff, so the next login takes a long time again. Is there anything I can do? Can I copy just the bare minimum files from the default profile to the server? I am not sure which parts I need; I read that I must copy My Documents and My Documents/Pictures so that folder redirection will work. What else do I need to copy to the server? I also have Firefox Xmarks sync and MS Word etc. Thanks

  • SVN Checkout error on large repositories

    - by Brian Mitchell
    I wonder if anyone can help me. We recently migrated our Subversion repository from a VisualSVN Server on Windows to a Subversion server on CentOS. The migration was successful, however we are getting the following error message: REPORT of '/svn/MangoRepository/!svn/vcc/default': Could not read chunk size: connection was closed by server (http://servername). The workaround is simply to run an update on the repo, and it will continue where it left off. I'm just wondering if anyone has a permanent fix for this, as it can be quite frustrating to repeat myself to 60-70 developers.

  • /var/run/utmp is getting large and slowing my server down

    - by Travis
    I removed it and touched an empty version a few weeks ago, and noticed a big upswing in performance on my server. The file was 400+ MB. I've been keeping an eye on it since, and I'm noticing it is growing fairly quickly. I tailed the file and I'm seeing a lot of "TTYXXLOGIN" entries. Should I be concerned? Is there a way to minimize its logging? Should I logrotate it and forget about it? Thanks in advance.

  • Notepad++: need a macro or TypeItIn for automation of large lists

    - by user2526699
    I'm sure there is a way to do this but I cannot seem to figure it out. I will try my best to explain. I have a list with 20,000 lines in Notepad++, with two tabs open. The right tab ('new 6') is the main list; the left tab ('new 7') holds what needs to be added to the beginning of each line in the right tab. Here is an image of my Notepad++ to give you a better understanding. I need to automate the following, as I have over 20,000 lines to process this way:
      1. copy line 1 of tab 'new 7'
      2. switch to tab 'new 6'
      3. paste the clipboard at the beginning of line 1 of tab 'new 6'
      4. switch back to tab 'new 7' and repeat for line 2, line 3, and so on
    I have both PasteItIn and TypeItIn downloaded, but if I need some other program/app, or if it's built in to Notepad++, that would be great. I need this done by the program itself, or with at most one button press per line.
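
    For a one-shot job of 20,000 lines, a short script beats driving the editor; a minimal sketch, assuming the two tabs are saved out as new7.txt (prefixes) and new6.txt (main list) — the file names are placeholders:

      # Prepend line N of new7.txt to line N of new6.txt.
      with open("new7.txt") as prefixes, open("new6.txt") as lines, \
           open("merged.txt", "w") as out:
          for prefix, line in zip(prefixes, lines):
              out.write(prefix.rstrip("\r\n") + line)

    Notepad++'s column mode (Alt+drag selection) may also manage this without a script, by pasting the whole left-hand column at the start of every line.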

  • Large recovery partitions

    - by Unsigned
    Is there any good reason why factory restore partitions are generally much larger than they need to be? Examples from my own experience:
      Dell XPS laptop — partition: 13.67 GB, used: 6.68 GB
      Dell Inspiron laptop — partition: 14.7 GB, used: 7.2 GB
      Toshiba laptop — partition: 15.3 GB, used: 9 GB
    In all cases, shrinking the partition to only slightly more than the used space had no ill effects on future factory restores. Why the exorbitant amount of extra space, given that none of the three computers ever writes any data to the recovery partition? Is there a good reason I'm overlooking?

  • Suggestion on algorithm to distribute objects of different value

    - by Unknown
    Hello, I have the following problem: given N objects of different values (N < 30, and the values are multiples of a constant k, i.e. k, 2k, 3k, 4k, 6k, 8k, 12k, 16k, 24k and 32k), I need an algorithm that distributes all items to M players (M <= 6) such that the total value of the objects each player gets is as even as possible (in other words, I want to distribute all objects to all players in the fairest way possible). I don't need (pseudo)code to solve this (also, this is not homework :) ), but I'll appreciate any ideas or links to algorithms that could solve this. Thanks!
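
    One standard idea: this is multiway number partitioning, and the greedy LPT heuristic — sort values descending and always hand the next object to the currently poorest player — is simple and close to optimal at this size (N < 30, M <= 6); for an exact answer, branch-and-bound over the same search space is still feasible. A sketch:

      import heapq

      def distribute(values, m):
          """Greedy LPT: give each value, largest first, to the poorest player."""
          players = [(0, i, []) for i in range(m)]   # (total, player id, items)
          heapq.heapify(players)
          for v in sorted(values, reverse=True):
              total, i, items = heapq.heappop(players)
              items.append(v)
              heapq.heappush(players, (total + v, i, items))
          return sorted(players)

      # Values in units of k, e.g.:
      # distribute([1, 2, 3, 4, 6, 8, 12, 16, 24, 32], 3)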

  • Problem with fetching dictionary objects in array from plist (iPhone SDK)

    - by neha
    Hi all, what datatype do you use to fetch items whose type is dictionary in a plist, i.e. NSMutableDictionary or NSDictionary? I'm using the following code to retrieve dictionary objects from an array of dictionaries in a plist:
      NSMutableDictionary *_myDict = [contentArray objectAtIndex:0];
      NSLog(@"MYDICT : %@", _myDict);
      NSString *myKey = (NSString *)[_myDict valueForKey:@"Contents"];
      [[cell lblFeed] setText:[NSString stringWithFormat:@"%@", myKey]];
    On the first line it's showing me objc_msgSend. contentArray is an NSArray, and its contents show the 2 objects that are in the plist, where they are dictionary objects. So why this error? Can anybody please help? Thanks in advance.

  • Cocoa - Enumerate mutable array, removing objects

    - by Ward
    Hey there, I have a mutable array that contains mutable dictionaries with string values for the keys latitude, longitude and id. Some of the latitude and longitude values are the same, and I want to remove the duplicates from the array so I only have one object per location. I can enumerate my array and, using a second enumeration, review each object to find objects that have different ids but the same latitude and longitude. But if I try to remove an object, I'm mutating the array during enumeration. Is there any way to remove objects from an array while enumerating, so I only enumerate the current set of objects as the array is updated? Hope this question makes sense. Thanks, Howie
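
    Whatever the language, the safe pattern is the same: don't mutate the collection being enumerated; either enumerate a copy, or build the deduplicated collection and swap it in. A Python sketch of the latter (the Cocoa analogue is collecting survivors into a second NSMutableArray, or enumerating a copy while removing from the original):

      seen = set()
      unique = []
      for obj in places:          # places: the array of dicts (hypothetical name)
          key = (obj["latitude"], obj["longitude"])
          if key not in seen:     # keep the first object seen per location
              seen.add(key)
              unique.append(obj)
      places[:] = unique          # one pass, no mutation during iteration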

  • How to count differences between two files on Linux?

    - by Zsolt Botykai
    Hi all, I need to work with large files and must find the differences between two of them. I don't need the differing bits themselves, just the number of differences. For the differing rows I came up with:
      diff --suppress-common-lines --speed-large-files -y File1 File2 | wc -l
    It works, but is there a better way to do it? And how do I count the exact number of differences (with standard tools like bash, diff, awk, sed, or some old version of Perl)? Thanks in advance.
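
    If the two files are line-for-line comparable, a short script streams both and counts mismatching line pairs directly (zip_longest pads the shorter file, so extra lines also count as differences); a minimal sketch:

      import itertools

      with open("File1") as a, open("File2") as b:
          diffs = sum(1 for x, y in itertools.zip_longest(a, b) if x != y)
      print(diffs)
      # On old Python 2 installs the same function is itertools.izip_longest.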

  • Can someone provide an example of seeking, reading, and writing a >4GB file using boost iostreams

    - by Queueless
    I have read that Boost.Iostreams supposedly supports 64-bit access to large files in a semi-portable way. Their FAQ mentions 64-bit offset functions, but there are no examples of how to use them. Has anyone used this library for handling large files? A simple example of opening two files, seeking to their middles, and copying one to the other would be very helpful. Thanks.

  • Flex AdvancedDataGrid with Grouping: how do I get objects to appear under the first GroupingField if the second GroupingField is null?

    - by shadenite
    I am using an AdvancedDataGrid with two GroupingFields. The dataProvider is a list of objects with these two field values, but occasionally the second field value can be null. When it loads, the AdvancedDataGrid UI shows a root folder (first GroupingField) and some subfolders (second GroupingField). This is all good. However, the objects with a null value for the second GroupingField just get placed in a subfolder with no label. I want those objects to appear as leaf nodes directly beneath the root folder (first GroupingField), minus the blank subfolder. A good way to picture this is a file explorer. Is there a good way to do this? Make the folder icon disappear, maybe, after expanding this node through ActionScript? Desired layout:
      ParentFolder
        SubFolder
          Leaf Object
          Leaf Object
        SubFolder
          Leaf Object
          Leaf Object
        Leaf Object

  • How to remove objects from an Enumerable collection in a loop

    - by johnc
    Duplicate of: Modifying A Collection While Iterating Through It. Does anyone have a nice pattern to get around the inability to remove objects while looping through an enumerable collection (e.g. an IList, or the KeyValuePairs in a dictionary)? For example, the following fails, as it modifies the list being enumerated during the foreach:
      foreach (MyObject myObject in MyListOfMyObjects)
      {
          if (condition) MyListOfMyObjects.Remove(myObject);
      }
    In the past I have used two methods: I have replaced the foreach with a reversed for loop (so that removing an object doesn't change the indexes I am looping over), and I have also tried storing a new collection of objects to remove within the loop, then looping through that collection and removing the objects from the original. These work fine, but neither feels nice, and I was wondering if anyone has come up with a more elegant solution to the issue.
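
    The same two patterns, sketched in Python for illustration; condition() stands in for the predicate above. (In C#, List<T>.RemoveAll(predicate) gives the filter form directly.)

      # Pattern 1: build the replacement collection in one pass (usually clearest).
      my_objects[:] = [o for o in my_objects if not condition(o)]

      # Pattern 2: iterate over a snapshot copy; mutate the original freely.
      for o in list(my_objects):
          if condition(o):
              my_objects.remove(o)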

  • Can an NSManagedObjectContext re-fault objects automatically?

    - by frenetisch applaudierend
    I am trying to create an application which allows background threads to update Core Data objects while the user might be reading the same data. My approach is to use multiple NSManagedObjectContexts; before a background thread does a -save: operation, I fetch the object the user is currently working on and fire the fault for all its properties and relationships recursively. This way I have all the objects the user could interact with in my NSManagedObjectContext, without seeing the already-updated values. But this can only work if the NSManagedObjectContext cannot decide by itself that e.g. memory usage is too high and start faulting objects which I do not explicitly reference (other than through the NSManagedObject relationship). So the question is: can the NSManagedObjectContext decide that an object needs to be re-faulted, without intervention from my side? Thanks for your effort, Markus

  • Django model objects filter

    - by ha22109
    Hello all, I have a model 'Test', which has 2 foreign keys. models.py:
      class Test(models.Model):
          id = models.AutoField(primary_key=True)
          name = models.ForeignKey(model2)
          login = models.ForeignKey(model1)
          status = models.CharField(max_length=200)

      class model1(models.Model):
          id = models.CharField(max_length=200, primary_key=True)
          ...
          is_active = models.IntegerField()

      class model2(models.Model):
          id = models.ForeignKey(model1)
          ...
          status = models.CharField(max_length=200)
    When I add a 'Test' object and select a certain login, only the model2 objects related to that login should be shown in the 'name' field. How can I achieve this? It has to happen at runtime: if I change the login field value, the objects in 'name' should also change.
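
    A sketch of one common approach, reusing the field names above: narrow the 'name' queryset inside a ModelForm from the submitted login. The runtime half still needs the form re-rendered when login changes (an onchange submit or an AJAX view); the form class itself is hypothetical:

      from django import forms
      from myapp.models import Test, model2   # hypothetical app path

      class TestForm(forms.ModelForm):
          class Meta:
              model = Test

          def __init__(self, *args, **kwargs):
              super(TestForm, self).__init__(*args, **kwargs)
              login_id = self.data.get("login") or self.initial.get("login")
              if login_id:
                  # model2's FK back to model1 is the field named "id" above
                  self.fields["name"].queryset = model2.objects.filter(id=login_id)
              else:
                  self.fields["name"].queryset = model2.objects.none()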

  • Proper reconstitution of Aggregate objects in the Repository?

    - by Jebb
    Assuming that no ORM (e.g. Doctrine) is used inside the Repository, my question is: what is the proper way of instantiating Aggregate objects? Should the Repository instantiate the child objects directly and assign them to the Aggregate Root through its setters, or is the Aggregate Root responsible for constructing its child entities/objects?
    Example 1:
      class UserRepository
      {
          public function find($id)
          {
              // Create user domain entity.
              $user = new User();
              $user->setName('Juan');

              // Create child orders entity directly and assign via setter.
              $orders = new Orders($orders);
              $user->setOrders($orders);
          }
      }
    Example 2:
      class UserRepository
      {
          public function find($id)
          {
              // Create user domain entity.
              $user = new User();
              $user->setName('Juan');

              // Get orders.
              $orders = $ordersDao->findByUser(1);
              $user->setOrders($orders);
          }
      }
    whereas in example 2, the instantiation of orders is taken care of inside the user entity.

  • JavaScript check if anonymous object has a method

    - by Baddie
    How can I check if an anonymous object that was created as such:
      var myObj = {
          prop1: 'no',
          prop2: function () { return false; }
      };
    does indeed have prop2 defined? prop2 will always be defined as a function, but for some objects it is not required and will not be defined. I tried what was suggested here: http://stackoverflow.com/questions/595766/how-to-determine-if-native-javascript-object-has-a-property-method but I don't think it works for anonymous objects.
