Search Results

Search found 886 results on 36 pages for 'no duplicates'.

Page 27/36

  • while(true) and loop-breaking - anti-pattern?

    - by KeithS
    Consider the following code:

        public void doSomething(int input) {
            while(true) {
                TransformInSomeWay(input);
                if(ProcessingComplete(input)) break;
                DoSomethingElseTo(input);
            }
        }

    Assume that this process involves a finite but input-dependent number of steps; the loop is designed to terminate on its own as a result of the algorithm, and is not designed to run indefinitely (until cancelled by an outside event). Because the test to see if the loop should end is in the middle of a logical set of steps, the while loop itself currently doesn't check anything meaningful; the check is instead performed at the "proper" place within the conceptual algorithm. I was told that this is bad code, because it is more bug-prone due to the ending condition not being checked by the loop structure. It's more difficult to figure out how you'd exit the loop, and it could invite bugs as the breaking condition might be bypassed or omitted accidentally given future changes. Now, the code could be structured as follows:

        public void doSomething(int input) {
            TransformInSomeWay(input);
            while(!ProcessingComplete(input)) {
                DoSomethingElseTo(input);
                TransformInSomeWay(input);
            }
        }

    However, this duplicates a call to a method in code, violating DRY; if TransformInSomeWay were later replaced with some other method, both calls would have to be found and changed (and the fact that there are two may be less obvious in a more complex piece of code). You could also write it like:

        public void doSomething(int input) {
            var complete = false;
            while(!complete) {
                TransformInSomeWay(input);
                complete = ProcessingComplete(input);
                if(!complete) {
                    DoSomethingElseTo(input);
                }
            }
        }

    ... but you now have a variable whose only purpose is to shift the condition-checking to the loop structure, and it also has to be checked multiple times to provide the same behavior as the original logic. For my part, I say that given the algorithm this code implements in the real world, the original code is the most readable. If you were going through it yourself, this is the way you'd think about it, and so it would be intuitive to people familiar with the algorithm. So, which is "better"? Is it better to give the responsibility of condition checking to the while loop by structuring the logic around the loop? Or is it better to structure the logic in a "natural" way as indicated by requirements or a conceptual description of the algorithm, even though that may mean bypassing the loop's built-in capabilities?

    Read the article

  • [EF + Oracle] Inserting Data (Sequences) (2/2)

    - by JTorrecilla
    Prologue: In the previous chapter we saw how to create DB records with EF; now we are going to cover some questions about Oracle. ORACLE: One characteristic of SQL Server that differs from Oracle is “Identity”. For anyone who has not worked with SQL Server: this property, which applies to integer columns, indicates that a column auto-increments, so it is filled automatically without writing it in the insert statement. In EF with SQL Server, the properties that map to Identity columns are filled after invoking the SaveChanges method. In Oracle there is no Identity property, but there is something similar. Sequences: Sequences are DB objects that provide auto-increment values, but they are not related directly to a table. The syntax is as follows: name, min value, max value and begin value.

        CREATE SEQUENCE nombre_secuencia
        INCREMENT BY numero_incremento
        START WITH numero_por_el_que_empezara
        MAXVALUE valor_maximo | NOMAXVALUE
        MINVALUE valor_minimo | NOMINVALUE
        CYCLE | NOCYCLE
        ORDER | NOORDER

    How do we get the sequence value? To obtain the next value from the sequence:

        SELECT nb_secuencia.Nextval
        FROM Dual

    Because there is no direct way to indicate that a column is related to a sequence, there are several ways to imitate the behavior: use a trigger (in the DB), use stored procedures or functions(…), or my particular option. The EF model only imports table objects, stored procedures or functions, but not sequences. Because of that, I decided to create my own extension method to invoke the next value from a sequence:

        public static class EFSequence
        {
            public static int GetNextValue(this ObjectContext contexto, string SequenceName)
            {
                string Connection = ConfigurationManager.ConnectionStrings["JTorrecillaEntities2"].ConnectionString;
                Connection = Connection.Substring(Connection.IndexOf(@"connection string=") + 19);
                Connection = Connection.Remove(Connection.Length - 1, 1);
                using (IDbConnection con = new Oracle.DataAccess.Client.OracleConnection(Connection))
                {
                    using (IDbCommand cmd = con.CreateCommand())
                    {
                        con.Open();
                        cmd.CommandText = String.Format("Select {0}.nextval from DUAL", SequenceName);
                        return Convert.ToInt32(cmd.ExecuteScalar());
                    }
                }
            }
        }

    This ObjectContext extension method invokes a query with the sequence indicated by the parameter. It takes the connection string from the app settings, removes the metadata that was created by VS when it generated the EF model, and then returns the next value from the sequence. The next value of a sequence is unique, so concurrent users creating records in the DB using the sequence will not get duplicates. This is my own implementation; I know there could be several other and better ways to do it. If I find any other way, I promise to post it. To use the example you need to add a reference to the Oracle (ODP.NET) DLL.

    Read the article

  • Backup Gmail using Mail.app and IMAP without redundancy

    - by Cawas
    I don't care for actually using Mail.app; I use mostly the Gmail interface, and Mail.app just for offline use, for quickly reading and eventually replying. Everything is working fine, I think I've followed every guide out there... Here's a great one. But I could find nothing about avoiding redundancy. Well, I can manually do that either by using POP or by checking off most of my labels out of IMAP. But I do use a lot of labels and I often label messages with more than 1 label. And I want them in Mail.app. Is there any way to make it keep just 1 copy of repeated messages? Maybe there's a message ID or checksum that could be used... If there isn't a way to do it, be assured I still prefer having the extra messages and "wasting" space rather than not having any. Edit: I've come across many solutions for finding duplicate files, but they just delete the files. That just makes things worse: Mail will just sync it all again. I've realized it's probably better to keep two accounts set up, POP for backup and IMAP for everything else with "All Mail" removed from it. That's because if the "All Mail" on the server is deleted for any reason, my local "All Mail" will also get deleted, while POP will keep all files regardless of the server. This doesn't solve the redundancy issue at all, but it doesn't create any new issue either, and I can even use the search properly, without duplicated results, if I search just on the POP. So it helps optimize a little bit. But I still think the best way to solve this issue would be having something such as aamann's Mail Scripts tweaked to hard-link the duplicates rather than deleting them, and optimized to not need to scan everything every time. I'm trying to contact him and see what we can do. At any rate, I'm still looking for an answer!
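
    On the hard-link idea at the end: a minimal sketch, assuming Mail.app keeps each message as an .emlx file somewhere under ~/Library/Mail (the exact layout varies by OS X version) and that Message-ID is a good-enough duplicate key. It only prints matches until DRY_RUN is flipped, and since Mail.app may rewrite or re-download files, treat it strictly as an experiment on a copy of the mail folder:

        #!/usr/bin/env python3
        # Sketch: find Mail.app messages that share a Message-ID and hard-link the
        # later copies to the first one seen, so duplicates stop costing disk space.
        # Paths and the .emlx assumption are illustrative, not verified.
        import os
        from email.parser import BytesHeaderParser

        MAIL_ROOT = os.path.expanduser("~/Library/Mail")   # assumption: default location
        DRY_RUN = True                                      # set to False to actually link

        parser = BytesHeaderParser()
        seen = {}   # Message-ID -> path of the first copy found

        for dirpath, _, filenames in os.walk(MAIL_ROOT):
            for name in filenames:
                if not name.endswith(".emlx"):
                    continue
                path = os.path.join(dirpath, name)
                with open(path, "rb") as fh:
                    fh.readline()                  # .emlx files start with a length line
                    msg_id = parser.parse(fh)["Message-ID"]
                if not msg_id:
                    continue
                if msg_id in seen:
                    original = seen[msg_id]
                    print("duplicate:", path, "->", original)
                    if not DRY_RUN:
                        os.unlink(path)
                        os.link(original, path)    # replace the copy with a hard link
                else:
                    seen[msg_id] = path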

    Read the article

  • Subversion and Quickbooks Files

    - by Jorge Fernandez
    I currently have a large problem on one of the file servers I manage for an accounting firm. QuickBooks has a tendency to create multiple files of the same thing over and over to prevent data loss. This is a good thing when you handle just a few files, but at an accounting firm it becomes a problem. Some of the older clients have 5-10 files in their respective folders, each with a different cut-off date. Because of user error, some of these files aren't labeled properly with their correct cut-off dates. This is where Subversion came to mind. Using the revision system would allow for 1 file to be the master and have all of its revisions. Has anyone ever tried this with QuickBooks files? I've only used SVN with code for applications, where each file is much smaller. How does SVN stand up with larger files like 10-25 MB? I'm not exactly sure how SVN handles revisions - does it keep a duplicate of the files and duplicate the disk space needed?

    Read the article

  • Postfix aliases and duplicate e-mails, how to fix?

    - by macke
    I have aliases set up in Postfix, such as the following: [email protected]: [email protected], [email protected] ... When an email is sent to [email protected], and any of the recipients in that alias is cc'ed, which is quite common (i.e. "Reply all"), the e-mail is delivered in duplicate. For instance, if an e-mail is sent to [email protected] and [email protected] is cc'ed, it'll get delivered twice. According to the Postfix FAQ, this is by design, as Postfix sends e-mail in parallel without expanding the groups, which makes it faster than sendmail. Now that's all fine and dandy, but is it possible to configure Postfix to actually remove duplicate recipients before sending the e-mail? I've found a lot of posts from people all over the net who have the same problem, but I have yet to find an answer. If this is not possible to do in Postfix, is it possible to do it somewhere along the way? I've tried educating my users, but it's rather futile I'm afraid... I'm running Postfix on Mac OS X Server 10.6, amavis is set as content_filter and dovecot is set as mailbox_command. I've tried setting up procmail as a content_filter for smtp delivery (as per the suggestion below), but I can't seem to get it right. For various reasons, I can't replace the standard OS X configuration, meaning postfix, amavis and dovecot stay put. I can however add to it if I wish.
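
    Postfix itself won't collapse the expanded alias against the cc'ed recipient, so one pragmatic workaround is to de-duplicate after delivery rather than before. A rough sketch (not a Postfix fix) that removes later copies sharing a Message-ID within one Maildir; the path is an assumption, and it should be tried on a copy of a mailbox first:

        #!/usr/bin/env python3
        # Sketch: drop the second (and later) copies of messages that share a
        # Message-ID inside one Maildir. Run it from cron or after delivery.
        import mailbox
        import sys

        maildir_path = sys.argv[1] if len(sys.argv) > 1 else "/var/mail/example/Maildir"  # assumption

        box = mailbox.Maildir(maildir_path, create=False)
        seen = set()
        removed = 0

        for key in list(box.iterkeys()):
            msg_id = box[key].get("Message-ID")
            if msg_id is None:
                continue                 # never touch messages without a Message-ID
            if msg_id in seen:
                box.remove(key)          # this copy is a duplicate delivery
                removed += 1
            else:
                seen.add(msg_id)

        print("removed %d duplicate messages" % removed)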

    Read the article

  • Postfix SASL Authentication using PAM_Python

    - by Christian Joudrey
    Cross-post from: http://stackoverflow.com/questions/4337995/postfix-sasl-authentication-using-pam-python Hey guys, I just set up a Postfix server in Ubuntu and I want to add SASL authentication using PAM_Python. I've compiled pam_python.so and made sure that it is in /lib/security. I've also created the /etc/pam.d/smtp file and added:

        auth required pam_python.so test.py

    The test.py file has been placed in /lib/security and contains:

        #
        # Duplicates pam_permit.c
        #
        DEFAULT_USER = "nobody"

        def pam_sm_authenticate(pamh, flags, argv):
            try:
                user = pamh.get_user(None)
            except pamh.exception, e:
                return e.pam_result
            if user == None:
                pamh.user = DEFAULT_USER
            return pamh.PAM_SUCCESS

        def pam_sm_setcred(pamh, flags, argv):
            return pamh.PAM_SUCCESS

        def pam_sm_acct_mgmt(pamh, flags, argv):
            return pamh.PAM_SUCCESS

        def pam_sm_open_session(pamh, flags, argv):
            return pamh.PAM_SUCCESS

        def pam_sm_close_session(pamh, flags, argv):
            return pamh.PAM_SUCCESS

        def pam_sm_chauthtok(pamh, flags, argv):
            return pamh.PAM_SUCCESS

    When I test the authentication using auth plain amltbXkAamltbXkAcmVhbC1zZWNyZXQ= I get the following response: 535 5.7.8 Error: authentication failed: no mechanism available. In the Postfix logs I have this:

        Dec 2 00:37:19 duo postfix/smtpd[16487]: warning: SASL authentication problem: unknown password verifier
        Dec 2 00:37:19 duo postfix/smtpd[16487]: warning: SASL authentication failure: Password verification failed
        Dec 2 00:37:19 duo postfix/smtpd[16487]: warning: localhost.localdomain[127.0.0.1]: SASL plain authentication failed: no mechanism available

    Any ideas? tl;dr: Anyone have step-by-step instructions on how to set up PAM_Python with Postfix? Christian
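
    The test.py above just clones pam_permit, so even once the SASL side is sorted out it will accept anyone. For comparison, here is a hedged sketch of a module that actually verifies a credential, based on pam_python's documented pamh.Message/pamh.conversation interface; the hard-coded CREDENTIALS dict is purely illustrative (swap in a real lookup), and the Python 2 except syntax matches what pam_python expected at the time:

        # /lib/security/test.py -- toy pam_python module that checks a password.
        # The CREDENTIALS dict is a stand-in; replace it with a real lookup.
        CREDENTIALS = {"jimmy": "real-secret"}   # hypothetical test account

        def pam_sm_authenticate(pamh, flags, argv):
            try:
                user = pamh.get_user(None)
            except pamh.exception, e:
                return e.pam_result
            if user is None:
                return pamh.PAM_USER_UNKNOWN
            # If the application did not hand us a token, prompt for one.
            if pamh.authtok is None:
                resp = pamh.conversation(
                    pamh.Message(pamh.PAM_PROMPT_ECHO_OFF, "Password: "))
                pamh.authtok = resp.resp
            if CREDENTIALS.get(user) == pamh.authtok:
                return pamh.PAM_SUCCESS
            return pamh.PAM_AUTH_ERR

        def pam_sm_setcred(pamh, flags, argv):
            return pamh.PAM_SUCCESS

        def pam_sm_acct_mgmt(pamh, flags, argv):
            return pamh.PAM_SUCCESS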

    Read the article

  • Does any economically-feasible publicly available software compare audio files to determine if they are dupes?

    - by drachenstern
    In the vein of this question http://unix.stackexchange.com/questions/3037/is-there-an-easy-way-to-replace-duplicate-files-with-hardlinks is there any software that will automatically parse a library of my songs and find the ones that really are duplicates, so that one of them can be eliminated? Here's an example: My brother used to be a huge fan of remixing CDs. He would take all of his favorite tracks and put them on one. Then he would use my computer to read them in. So now I have like 6 copies of Californication on my HDD, and they're all a few bytes different overall. I have hundreds of songs in my library like this. I want to trim them down to just the unique ones. They don't all have correct ID3 tags, so figuring out that Untitled(74).mp3 is the same as californication.mp3 is the same as whowrotethis.mp3 is tricky. I do NOT want to consider a concert album and a studio album rip to be the same (if I just did artist/title matching I would end up with this scenario, which doesn't work for me). I use Windows (pick your platform) and will be getting an OSX box later in the year. I'll run Linux if that's what it takes to get it organized. I have unprotected AAC and mp3 files. Bonus points for messing with WAV or MIDI, and bonus points for converting from those into MP3 (I can always use Audacity and LAME to convert later if I know they match, or to convert ahead of time if that will make things easier). Are there any suggestions, or do I need to go to Programmers or SO and build a list of requirements for comparing these things and write the software myself?
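
    Since the copies differ by a few bytes, tag or checksum matching won't catch them, but an acoustic fingerprint can. A hedged sketch that shells out to the fpcalc tool from the Chromaprint project (assumed to be installed and on the PATH); grouping by exact fingerprint equality is a simplification - proper near-duplicate detection would compare fingerprints fuzzily, which is what AcoustID-based taggers do:

        #!/usr/bin/env python3
        # Sketch: group audio files whose Chromaprint fingerprints are identical.
        # Requires the external `fpcalc` binary from the Chromaprint project.
        import os
        import subprocess
        import sys
        from collections import defaultdict

        AUDIO_EXTS = {".mp3", ".m4a", ".aac", ".wav"}

        def fingerprint(path):
            out = subprocess.run(["fpcalc", path], capture_output=True, text=True)
            for line in out.stdout.splitlines():
                if line.startswith("FINGERPRINT="):
                    return line.split("=", 1)[1]
            return None

        groups = defaultdict(list)
        root = sys.argv[1] if len(sys.argv) > 1 else "."

        for dirpath, _, files in os.walk(root):
            for name in files:
                if os.path.splitext(name)[1].lower() in AUDIO_EXTS:
                    path = os.path.join(dirpath, name)
                    fp = fingerprint(path)
                    if fp:
                        groups[fp].append(path)

        for fp, paths in groups.items():
            if len(paths) > 1:
                print("possible duplicates:")
                for p in paths:
                    print("   ", p)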

    Read the article

  • How to deduplicate 40TB of data?

    - by Michael Stauffer
    I've inherited a research cluster with ~40 TB of data across three filesystems. The data stretches back almost 15 years, and there is most likely a good amount of duplicates, as researchers copy each other's data for different reasons and then just hang on to the copies. I know about de-duping tools like fdupes and rmlint. I'm trying to find one that will work on such a large dataset. I don't care if it takes weeks (or maybe even months) to crawl all the data - I'll probably throttle it anyway to go easy on the filesystems. But I need to find a tool that's either somehow super efficient with RAM, or can store all the intermediary data it needs in files rather than RAM. I'm assuming that my RAM (64 GB) will be exhausted if I crawl through all this data as one set. I'm experimenting with fdupes now on a 900 GB tree. It's 25% of the way through and RAM usage has been slowly creeping up the whole time; now it's at 700 MB. Or, is there a way to direct a process to use disk-mapped RAM so there's much more available and it doesn't use system RAM? I'm running CentOS 6.
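
    A sketch of the "keep the intermediary data in files rather than RAM" idea: group candidates by size, then by a hash of the first MiB, then by a full hash, with all bookkeeping in an SQLite file on disk so memory stays roughly flat. It is illustrative only (untested anywhere near 40 TB) and it reports duplicate groups rather than deleting anything:

        #!/usr/bin/env python3
        # Sketch: duplicate finder that keeps its working set in an SQLite file on
        # disk, so RAM stays roughly flat no matter how many files are scanned.
        # Pass 1 records sizes, pass 2 hashes only the first MiB of same-sized
        # files, pass 3 fully hashes the remaining candidates and reports groups.
        import hashlib
        import os
        import sqlite3
        import sys

        root = sys.argv[1] if len(sys.argv) > 1 else "."
        db = sqlite3.connect("dedupe-scratch.db")   # intermediary data lives here
        db.execute("CREATE TABLE IF NOT EXISTS files "
                   "(path TEXT PRIMARY KEY, size INTEGER, quick TEXT)")

        def digest(path, first_mib_only=False):
            h = hashlib.sha1()
            try:
                with open(path, "rb") as fh:
                    while True:
                        chunk = fh.read(1024 * 1024)
                        if not chunk:
                            break
                        h.update(chunk)
                        if first_mib_only:
                            break
            except OSError:
                return None
            return h.hexdigest()

        # Pass 1: sizes only.
        for dirpath, _, files in os.walk(root):
            for name in files:
                p = os.path.join(dirpath, name)
                try:
                    db.execute("INSERT OR IGNORE INTO files (path, size) VALUES (?, ?)",
                               (p, os.path.getsize(p)))
                except OSError:
                    pass
        db.commit()

        # Pass 2: quick hash, but only for sizes that occur more than once.
        for (size,) in db.execute("SELECT size FROM files GROUP BY size "
                                  "HAVING COUNT(*) > 1").fetchall():
            for (p,) in db.execute("SELECT path FROM files WHERE size = ?", (size,)).fetchall():
                db.execute("UPDATE files SET quick = ? WHERE path = ?", (digest(p, True), p))
        db.commit()

        # Pass 3: full hash where size and quick hash both collide, then report.
        for size, quick in db.execute("SELECT size, quick FROM files WHERE quick IS NOT NULL "
                                      "GROUP BY size, quick HAVING COUNT(*) > 1").fetchall():
            by_full = {}
            for (p,) in db.execute("SELECT path FROM files WHERE size = ? AND quick = ?",
                                   (size, quick)):
                full = digest(p)
                if full:
                    by_full.setdefault(full, []).append(p)
            for paths in by_full.values():
                if len(paths) > 1:
                    print("duplicates:")
                    for p in paths:
                        print("   ", p)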

    Read the article

  • Exchange 2010 Mail Enabled Public Folder Unable to Receive External (anon) e-mail

    - by Alex
    Hello All, I am having issues with my "Public Folders" mail enabled folders receiving e-mails from external senders. The folder is set up with three Accepted Domains (names changed for privacy reasons):

        1 - domain1.com (primary & Authoritative)
        2 - domain2.com (Authoritative)
        3 - domain3.com (Authoritative)

    When someone attempts to send an e-mail to [email protected] from inside the organization, the e-mail is received and placed in the appropriate folder. However, when someone tries to send an e-mail from outside the organization (such as a gmail account), the following error message is received: "Google tried to deliver your message, but it was rejected by the recipient domain. We recommend contacting the other email provider for further information about the cause of this error. The error that the other server returned was: 554 554 Recipient address rejected: User unknown (state 14)." When I try to send an e-mail to the same folder, using the same e-mail address above ([email protected]), but with domain2.com instead of domain3.com, it works as intended (both internal & external). I have checked, double checked, and triple checked my DNS settings, comparing those for domain2 & domain3, and they appear identical. I have tried recreating the folders in question, with the same results. I have also run Get-PublicFolderClientPermission "\Web Programs\folder" with the following results for user Anonymous:

        RunspaceId   : 5ff99653-a8c3-4619-8eeb-abc723dc908b
        Identity     : \Web Programs\folder
        User         : Anonymous
        AccessRights : {CreateItems}

    Domain2.com & Domain3.com are duplicates of each other, but only domain2.com works as intended. All other Exchange functions are functioning properly. If anyone out there has any suggestions, I would love to hear them. I've just hit a brick wall. Thanks for all your help in advance! --Alex

    Read the article

  • Rsync: General file/folder synchronization

    - by Rey Leonard Amorato
    I have a file server which is in charge of pulling a folder tree from multiple workstations on a daily basis. My current method for this is rsync (which works pretty well provided directory names and/or files remain the same); however, when files are renamed or moved about within subdir1, rsync will copy them over to the server, creating duplicates. I have to manually find and delete extraneous files/folders that had been left on the server during previous syncs. Note that I cannot use rsync's --delete flag, because any sync from a workstation would then mirror that particular folder tree instead of merging it into the server. Visual diagram: the server and each workstation (Workstation1, Workstation2, ..., Workstation(n)) contain Folder*, with subdirectories (subdir1 ... subdir(n)) and files (file1, file2 ... file(n)) beneath it. Is there a simple script (preferably in bash, nothing fancy) that can accomplish the deletion of the extraneous files/folders in the event a file is renamed or moved to a different subdir? Is there a different program, much like rsync, that can accomplish this task autonomously and in a much simpler manner? I have looked at Unison, but I did not like the fact that it keeps a local database for the syncing info. Any tips at all as to how I am supposed to tackle this? Thank you in advance for your help. EDIT: I have tried Unison just recently and I can safely say it is out of the question now. Unison is a bi-directional synchronization tool and, from my testing, it mirrors the files existing on the server to all workstations - this is unwanted. Preferably, I would want files/folders to stay within their respective workstations and just merge to the server. AKA uni-directional sync, but with renames/moves propagated to the server. I might have to look into Git/Mercurial/Bazaar as mentioned by kyle, but I am still unsure if they are fit for the job.
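
    One way to reconcile renames without rsync's --delete is to compare by content after each sync: anything in the server copy of a workstation's tree whose content hash no longer appears anywhere in that workstation's current tree must be a leftover from a rename or move. A hedged sketch, assuming each workstation mirrors into its own subtree on the server and its live tree is reachable (e.g. over an NFS/SSH mount) when this runs:

        #!/usr/bin/env python3
        # Sketch: after rsync, delete server-side files whose content no longer
        # exists anywhere in the corresponding workstation tree (renamed/moved).
        import hashlib
        import os
        import sys

        def sha1_of(path):
            h = hashlib.sha1()
            with open(path, "rb") as fh:
                for chunk in iter(lambda: fh.read(1 << 20), b""):
                    h.update(chunk)
            return h.hexdigest()

        def hashes_under(root):
            out = set()
            for dirpath, _, files in os.walk(root):
                for name in files:
                    out.add(sha1_of(os.path.join(dirpath, name)))
            return out

        workstation_tree = sys.argv[1]   # e.g. /mnt/workstation1/Folder (assumption)
        server_tree = sys.argv[2]        # e.g. /srv/backups/workstation1/Folder (assumption)

        current = hashes_under(workstation_tree)
        for dirpath, _, files in os.walk(server_tree):
            for name in files:
                path = os.path.join(dirpath, name)
                if sha1_of(path) not in current:
                    print("stale copy, removing:", path)
                    os.remove(path)

    Hashing every file on every run is expensive; a size-and-mtime prefilter would cut most of that cost, but the idea stays the same.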

    Read the article

  • Permissions on mac for itunes library with multiple users - idea

    - by John
    I currently have a lot of music on an external drive and my iTunes set up from there. However, periodically, when the external drive isn't connected, iTunes will default back to the library location of my home directory user path. I don't want to mess with an external drive, as my Mac's HD is large enough to house the music collection. However, I have 4 family members - all with their own logins - using this same gob of music. I don't want 4 copies of the library, only one with all libraries referencing it. So, what I want to do is: 1 - move all music files to a shared directory at /Macintosh HD/users/music. I created this directory and adjusted permissions, so all four users can read and write to this directory. 2 - get all four accounts to reference this library instead of the external or local home locations. I am hoping I can just check the box to keep the library organized in my account, which is the admin, and let iTunes move it all. Then delete the current libraries for each account and re-add from the new shared location. Will the iTunes organization process cause permissions issues, either by setting permissions so only my account has access to the files, or write permissions, or any other 'gotcha'? I am having a hard time coming up with a smooth solution that won't break everything and cause me to have mega duplicates or access issues. I would prefer not to do any XML library file editing if possible. Am I dreaming? Thanks for the help.

    Read the article

  • Deleting files in the windows installer folder

    - by qw3n
    How do you clean up the windows/installer folder on an XP machine? I looked on different forums, but the tool many mention is no longer officially supported and, from what I understand, not specifically for this task. Also, I was confused about which tool to use or how to use it. The reason I ask this is that I have an older computer with an ~86 GB drive, and ~80 GB of it is being used by windows/installer. I'm assuming that at least some of these are glitches and shouldn't be in there. Note that the person who uses the computer mentioned trying to interrupt an install at some point, and I don't know if this has anything to do with it. Also, there are not that many programs installed on this computer (~25). I know that similar questions have been asked several times already, but the accepted answer to "Is it safe to delete from C:\Windows\Installer?" is mainly talking about whether it is safe to delete (along with most of the duplicates). I'm asking how to find and delete the files that shouldn't be there, especially since we're not talking 5-10 GB but something that practically fills the entire hard drive. And for those who are wondering, I ran CCleaner, but it doesn't seem to check this folder.
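
    One way to see which cached packages are still referenced is to ask Windows Installer itself and compare against the folder contents. A cautious sketch using pywin32 and the WindowsInstaller.Installer automation object; the property names (Products, Patches, "LocalPackage") are taken from the MSI automation interface as I recall it, so treat this as a starting point and verify every candidate before deleting anything:

        # Sketch (Windows, needs pywin32): list cached installer packages that are
        # still registered, then flag files in C:\Windows\Installer that are not.
        # It only prints candidates -- deleting a referenced .msi/.msp breaks
        # repair and uninstall for that product.
        import os
        import win32com.client

        installer = win32com.client.Dispatch("WindowsInstaller.Installer")
        referenced = set()

        products = installer.Products                      # registered product codes
        for i in range(products.Count):
            code = products.Item(i)
            pkg = installer.ProductInfo(code, "LocalPackage")
            if pkg:
                referenced.add(os.path.normcase(pkg))
            patches = installer.Patches(code)              # patches applied to this product
            for j in range(patches.Count):
                ppkg = installer.PatchInfo(patches.Item(j), "LocalPackage")
                if ppkg:
                    referenced.add(os.path.normcase(ppkg))

        folder = r"C:\Windows\Installer"
        for name in os.listdir(folder):
            path = os.path.join(folder, name)
            if not os.path.isfile(path) or not name.lower().endswith((".msi", ".msp")):
                continue
            if os.path.normcase(path) not in referenced:
                size_mb = os.path.getsize(path) / (1024.0 * 1024)
                print("possibly orphaned: %s (%.1f MB)" % (path, size_mb))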

    Read the article

  • Optimal file system type and mount options for an rsnapshot dedicated drive

    - by Nimmy Lebby
    We have an external USB 2 drive that we are using as a backup drive for our configuration. We use rsnapshot for the backups. It uses a few standard commands for managing snapshots:

        rm -rf: deletes expired snapshots
        mv: moves older snapshots down a slot
        cp -al: duplicates the last snapshot to a new slot
        rsync -a --delete --numeric-ids --relative: synchronizes the new snapshot

    As you can see from the log below, the majority of the time is spent on the rm -rf and the cp -al steps:

        [25/Dec/2010:14:00:02] rsnapshot hourly: started
        [25/Dec/2010:14:00:02] echo 21012 > /var/run/rsnapshot.pid
        [25/Dec/2010:14:00:02] rm -rf /mnt/extdrive/snapshots/hourly.5/
        [25/Dec/2010:14:15:48] mv /mnt/extdrive/snapshots/hourly.4/ /mnt/extdrive/snapshots/hourly.5/
        [25/Dec/2010:14:15:48] mv /mnt/extdrive/snapshots/hourly.3/ /mnt/extdrive/snapshots/hourly.4/
        [25/Dec/2010:14:15:48] mv /mnt/extdrive/snapshots/hourly.2/ /mnt/extdrive/snapshots/hourly.3/
        [25/Dec/2010:14:15:48] mv /mnt/extdrive/snapshots/hourly.1/ /mnt/extdrive/snapshots/hourly.2/
        [25/Dec/2010:14:15:48] cp -al /mnt/extdrive/snapshots/hourly.0 /mnt/extdrive/snapshots/hourly.1
        [25/Dec/2010:14:23:32] rsync -a --delete --numeric-ids --relative /etc /mnt/extdrive/snapshots/hourly.0/sm4/
        [25/Dec/2010:14:23:52] touch /mnt/extdrive/snapshots/hourly.0/
        [25/Dec/2010:14:23:52] rm -f /var/run/rsnapshot.pid
        [25/Dec/2010:14:23:52] rsnapshot hourly: completed successfully

    My questions: I'm currently using ext4 for the filesystem. Maybe this is not the best choice from those available in Red Hat. Anyone have any recommendations that would speed up the process? The partition's mount options are sync,dirsync 1 2. Is there a way to optimize this, since it's solely used for rsnapshot? Of course, reasoning would be greatly appreciated.

    Read the article

  • download management

    - by Jonathan
    I download many files, usually 2 or 3 a day, often 10ish. Some of them are duplicates because I just can't be bothered to find the original in my downloads folder. I have previously tried DAP and used that to create a new subfolder for each day's downloads, yet I have found this insufficient, as sometimes I wish to find files by name/file type, or I have multiple parts of a download spread over more than one day. Another problem I have found is with zips/rars/etc.: after downloading and extracting them I then have both the archive and the folder. I like how on a Mac it automatically extracts the zip after it has been downloaded and removes the zip. What I'd like to be able to do is sort the downloads by date, but dynamically, so they are just in the big downloads folder and I can press a button to show me all the files from a particular site, from a particular day, or of a certain file type. Is there any software that will do this? I use Chrome as a browser but also have Firefox and like it. Jonathan
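
    A rough sketch of the automatic-filing part: sweep the Downloads folder, move finished files into dated subfolders, and unpack zip archives then discard them, Mac-style. The folder names, the polling loop and the temp-file suffixes are assumptions; finding files by site would still need the browser's own download history:

        #!/usr/bin/env python3
        # Sketch: sweep ~/Downloads into dated subfolders and auto-extract zips.
        import datetime
        import os
        import shutil
        import time
        import zipfile

        DOWNLOADS = os.path.expanduser("~/Downloads")
        TEMP_SUFFIXES = (".part", ".crdownload", ".tmp")   # skip in-progress downloads

        def sweep_once():
            for name in os.listdir(DOWNLOADS):
                src = os.path.join(DOWNLOADS, name)
                if not os.path.isfile(src) or name.endswith(TEMP_SUFFIXES):
                    continue
                day = datetime.date.fromtimestamp(os.path.getmtime(src)).isoformat()
                dest_dir = os.path.join(DOWNLOADS, day)
                os.makedirs(dest_dir, exist_ok=True)
                if zipfile.is_zipfile(src):
                    with zipfile.ZipFile(src) as zf:
                        zf.extractall(os.path.join(dest_dir, os.path.splitext(name)[0]))
                    os.remove(src)           # keep the contents, drop the archive
                else:
                    shutil.move(src, os.path.join(dest_dir, name))

        if __name__ == "__main__":
            while True:                      # simple polling; a watcher library would also work
                sweep_once()
                time.sleep(60)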

    Read the article

  • internet-based sync software that will keep running after Windows Live Sync stops doing PC-to-PC syncs?

    - by Warren P
    According to the wikipedia page, Microsoft Live Sync will shortly stop offering the PC-to-PC sync service. There are lots of apps to sync two PCs on the same LAN, but I want to sync two PCs that are in different cities, across the internet, traversing two different NATs, and that requires some kind of service running on the internet that both connect into. There are already a few questions about syncing folders and files, but this is not a duplicate because none of them answer this basic question: Microsoft Live Sync works better than RSYNC, or any of the linked SYNC solutions in any of the "not really duplicates", because it works even when the two PCs have NAT and firewalls between them that forbid direct connectivity; Windows Live Sync has a free, always-on internet server that all the client PCs connect into. I'm looking for a FREE (no-fees) Microsoft Live Sync work-alike PC-to-PC sync solution that works between PCs and Macs, at least, as well as between PCs, and works behind NAT and firewalls at least as well as Microsoft's solution. (Note that Microsoft's solution makes only outbound socket calls to a Microsoft server, so this solution must necessarily include a server-hub component that is hosted publicly on a free site and which does not require that I set up, manage and pay for my own public internet hosting site.) Hint: None of the answers in the linked duplicate are equivalent (PureSync, FreeFileSync, BestSync 2010, SyncButler, Comodo BackUp, QuickShadow, Gbridge), in that none of them work for the PC-to-Mac situation, where firewalls and NATs prevent direct connection, or else they require money to be paid. When Microsoft Live Sync / Live Mesh finally kills direct PC-to-PC mode, the limitation will be that you will have to pay for more than 25 GB of cloud service, and you can then only sync PC #1 to PC #2 if you first sync to the cloud, then down to other clients. I can currently sync 100 GB of data from one computer to another, only temporarily "moving the data" through Microsoft's data servers without using up my SkyDrive storage quota.

    Read the article

  • what is the fastest way to copy all data to a new larger hard drive?

    - by SUPER user
    I was certain this would have been covered before, but I cannot find an answer amongst all the almost-duplicates that come up; sorry if I've missed something obvious. I have a full 320gb disk inside my machine, a new 1tb disk to replace it, and a USB 2.0 chassis. It is only data on a single partition, no OS/apps involved, and the old drive will be kept somewhere as backup (no secure wiping etc). The simple option would be to put new disk in USB chassis, copy files, then swap them over. But for USB pen drives, reading is around 4x faster than writing. If the same is true for a USB SATA chassis (is it?) then it would be significantly faster to swap the drives first and read from the old drive over USB, right? Then the other consideration is that copying lots of files is usually slower than a single file of equivalent size. Is Windows 7 smart enough to do everything in a single lump like that, or is there specialised software that should be used instead? (Even if SATA-SATA copying is faster than involving USB, knowing what to do when it isn't an option is useful information.) Summary: Does a USB SATA chassis suffer from a read/write inequality? (like a USB pen drive does, but unlike a direct SATA connection) Can Windows 7 do sequential access? (I can't find confirmation if Robocopy does this.) Or is it necessary to use a bootable CD/USB with something like Clonezilla to achieve sequential copy speeds?
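
    Whether a given USB-to-SATA chassis really writes much slower than it reads is easy to measure before picking a strategy. A rough benchmark sketch, assuming the enclosure is mounted as E:\ (adjust the path); note the read figure can be inflated by the OS cache, so a scratch file larger than RAM gives a stricter answer:

        #!/usr/bin/env python3
        # Sketch: rough sequential write vs. read benchmark for an external drive.
        import os
        import time

        TARGET = r"E:\benchmark.tmp"      # assumption: a path on the USB-attached drive
        SIZE_MB = 1024
        CHUNK = b"\0" * (1024 * 1024)

        start = time.time()
        with open(TARGET, "wb") as fh:
            for _ in range(SIZE_MB):
                fh.write(CHUNK)
            fh.flush()
            os.fsync(fh.fileno())          # make sure the data actually hit the disk
        write_secs = time.time() - start

        start = time.time()
        with open(TARGET, "rb") as fh:
            while fh.read(1024 * 1024):    # caveat: may be partly served from cache
                pass
        read_secs = time.time() - start

        os.remove(TARGET)
        print("write: %.0f MB/s   read: %.0f MB/s"
              % (SIZE_MB / write_secs, SIZE_MB / read_secs))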

    Read the article

  • Drobo-like linux file server - how do I do it?

    - by John Hunt
    I've been pondering for a long time about how I can set up a server which operates much like the Drobo storage thing. The reason I don't actually want a Drobo is that I've heard scare stories, plus I'd like to do this on the cheap. So ideally I'm looking for something like LVM so I can create a logical volume that spans many hard disks of varying sizes... obviously that only offers redundancy if I put the LV on a RAID array (as far as I know). I have however been reading about technologies such as Microsoft's Drive Extender, which duplicates files at the filesystem level and makes sure that the mirrored files are on a different physical disk. Does anyone know of or recommend a filesystem or method like this, as it'll hopefully make much better use of the space available than RAID ever could? Performance isn't an issue; I'd just really like to make the most of the hard disks I have lying around whilst having a bit of redundancy in case a disk dies. I understand full well that this is no replacement for a backup, but I'll only be storing files of medium importance and using the NAS itself as a backup of my main PC and other systems. Thanks in advance! I'm hoping zfs or btrfs or something can do something clever for me :)

    Read the article

  • No apparent reason for high load average

    - by Oz.
    We have several web servers running on Amazon (EC2) c1.xlarge, over Amazon AMI. The servers are duplicates of each other, running the exact same hardware and software. Each server's spec is:

        7 GB of memory
        20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each)
        1690 GB of instance storage
        64-bit platform
        I/O Performance: High
        API name: c1.xlarge

    A couple of weeks ago we ran a yum upgrade on one of the servers. Starting with this upgrade, the upgraded server started showing a high load average. Needless to say, we did not update the other servers, and we cannot do so until we understand the reason for this behavior. The strange thing is that when we compare the servers using top or iostat, we cannot find the reason for the high load. Note that we have moved traffic from the "problematic" server to the others, which has made the "problematic" server less crowded in terms of requests, and still its load is higher. Do you have any idea what it could be, or where else we can check? Many thanks for the help! Oz.

        #
        # proper server
        # w command
        #
        00:42:26 up 2 days, 19:54, 2 users, load average: 0.41, 0.48, 0.49
        USER     TTY      FROM          LOGIN@   IDLE   JCPU   PCPU  WHAT
                 pts/1    82.80.137.29  00:28    14:05  0.01s  0.01s -bash
                 pts/2    82.80.137.29  00:38    0.00s  0.02s  0.00s w

        #
        # proper server
        # iostat command
        #
        Linux 3.2.12-3.2.4.amzn1.x86_64 _x86_64_ (8 CPU)
        avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
                   9.03   0.02     4.26     0.17    0.13  86.39
        Device:   tps  Blk_read/s  Blk_wrtn/s  Blk_read   Blk_wrtn
        xvdap1   1.63        1.50       55.00    367236   13444008
        xvdfp1   4.41       45.93       70.48  11227226   17228552
        xvdfp2   2.61        2.01       59.81    491890   14620104
        xvdfp3   8.16       14.47       94.23   3536522   23034376
        xvdfp4   0.98        0.79       45.86    192818   11209784

        #
        # problematic server
        # w command
        #
        00:43:26 up 2 days, 21:52, 2 users, load average: 1.35, 1.10, 1.17
        USER     TTY      FROM          LOGIN@   IDLE   JCPU   PCPU  WHAT
                 pts/0    82.80.137.29  00:28    15:04  0.02s  0.02s -bash
                 pts/1    82.80.137.29  00:38    0.00s  0.05s  0.00s w

        #
        # problematic server
        # iostat command
        #
        Linux 3.2.20-1.29.6.amzn1.x86_64 _x86_64_ (8 CPU)
        avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
                   7.97   0.04     3.43     0.19    0.07  88.30
        Device:   tps  Blk_read/s  Blk_wrtn/s  Blk_read   Blk_wrtn
        xvdap1   2.10        1.49       76.54    374660   19253592
        xvdfp1   5.64       40.98       85.92  10308946   21612112
        xvdfp2   3.97        4.32       93.18   1087090   23439488
        xvdfp3  10.87       30.30      115.14   7622474   28961720
        xvdfp4   1.12        0.28       65.54     71034   16487112

    Read the article

  • Excel fails to open Python-generated CSV files

    - by johnjdc
    I have many Python scripts that output CSV files. It is occasionally convenient to open these files in Excel. After installing OS X Mavericks, Excel no longer opens these files properly: Excel doesn't parse the files and it duplicates the rows of the file until it runs out of memory. Specifically, when Excel attempts to open the file, a prompt appears that reads: "File not loaded completely." Example of code I'm using to generate the CSV files:

        import csv

        with open('csv_test.csv', 'wb') as f:
            writer = csv.writer(f)
            writer.writerow([1, 2, 3])
            writer.writerow([4, 5, 6])

    Even the simple file generated by the above code fails to load properly in Excel. However, if I open the CSV file in a text editor and copy/paste the text into Excel, parse it with text to columns, and then save as CSV from Excel, then I can reopen the CSV file in Excel without issue. Do I need to pass an additional parameter in my scripts to make Excel parse the CSV files the same way it used to? Or is there some setting I can change in OS X Mavericks or Excel? Thanks.
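
    One way to sidestep Excel's CSV importer entirely is to have the same script emit a native .xlsx workbook. A small sketch using the third-party openpyxl package (an added dependency, and only a workaround - it does not explain the Mavericks behavior); the CSV output can still be kept for every other consumer:

        # Sketch: write the same two rows as a native .xlsx workbook so Excel
        # opens it without going through its CSV parser at all.
        from openpyxl import Workbook   # pip install openpyxl

        wb = Workbook()
        ws = wb.active
        ws.append([1, 2, 3])
        ws.append([4, 5, 6])
        wb.save("csv_test.xlsx")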

    Read the article

  • AviSynth ChangeFPS: combining videos with different framerates

    - by Daniel Saner
    I have two video recordings of the same scene, but with different framerates, that I would like to combine using an AviSynth script. One video is recorded at 30fps, the other at 120fps. What I would like to do is to keep them temporally synchronised, meaning that for each frame of the 30fps video, the output should display 4 frames from the 120fps video. I would like the final output video to play at 30fps so that the duration is 4 times the original recordings. From AviSynth's documentation, it seems ChangeFPS is the function I'll need, since it removes and duplicates frames, while 'AssumeFPS' just changes the playback speed (and my plan is basically to quadruple every frame of the 30fps clip). However, the filter does not seem to do what it says. If I try: clip30 = AviSource("0326.avi").ChangeFPS(120) clip120 = AviSource("0326-120fps.avi") it doesn't affect the playback speed or frame count of the 30fps clip at all, but removes every fourth frame from the 120fps clip, which is not at all what I want. Unfortunately, appending .ChangeFPS(7.5) to the clip120 instead does not have the same inverse effect—in that case, it does exactly what's to be expected. Alternatively, if I try: clip30 = AviSource("0326.avi").AssumeFPS(7.5) clip120 = AviSource("0326-120fps.avi") there is no effect at all, both clips are played back at 30fps, meaning that only a quarter of the 120fps clip has been shown by the time the 30fps clip is over. So how can I combine these two clips in the manner I want? I was unable to find any other internal or external filters that would help me do that. It seems to me that if ChangeFPS did what the manual says, it would be the right one for the job.

    Read the article

  • Is there a screen sharing/remote desktop app for mac that lets you use a different host screen resolution?

    - by MarqueIV
    Ok, there are tons and tons of questions about remote desktop for mac and they're all being closed as duplicates. I however am specifically looking for one that will let me use a different resolution than the host, the way you can with Remote Desktop for Windows. For instance, when I connect to my 11" Macbook Air booted into Windows7 from my quad-screen desktop, also booted into Win7 using Microsoft's Remote Desktop Client, it blanks out the screen on the notebook, then virtualizes the video across all four of my desktop's monitors at their native resolutions (2560x1600, 2 x 1920x1200 and 1600x1200) and the notebook now acts as if it has four physical monitors connected to it. All of this from a notebook that only has a 1366 x 768 native resolution. Even when running OS X on the client running RDC, while it doesn't support multi-monitors like its Win counterpart, it still lets me run at the native resolution of the client screen of 2560x1600. Again, it just blanks out the host screen while doing so. However when using Mac's screen sharing, since that is just glorified VNC, it just mirrors what's already on the host's screen, meaning it will always be a single screen with the resolution of 1366x768. This of course makes sense since VNC is a mirroring solution, not a video-virtualizing one like RDC, but it means that on my quad-monitor setup, the remote window isn't even large enough to fill up a single monitor, let alone four (unless you have a client that can scale it up, but that's video scaling. It's still only 1366x768.) So what I'm looking for is if there is a solution on the Mac that lets me do the same thing as RDC in a Win environment. Don't care if I have to pay. I'd gladly pay several hundred dollars for this. I just need that specific feature. Note: People have suggested various VNC clients, but the VNC host still runs at 1366x768 so that will not work here. Ever. Also, people have suggested Synergy/Synergy+/Teleport and such which share the keyboard and mouse, not video. Completely different animal unrelated to what I'm looking for.

    Read the article

  • organizing my music and my itunes

    - by Cawas
    What can we do to organize our music? I've got over 20k items in my iTunes Library, at least 5k with ratings and play counts, apparently just 12k music files, and I can't understand how this question has not been properly answered yet. Maybe there is no answer. I have too many duplicates, broken links, bad music, corrupted files... Well, a big mess with no tags! Probably there's no single piece of software capable of just organizing everything, though I'd love one. Hopefully some time in the near future we all will be able to just sync the cloud of our automagically selected music to the newly created offline copy. But meanwhile... Please do consider that I've at least given a shot (even if not a full test drive) to every single answer linked here already, plus a few more. I'm fine with using other software (Mac too, please) to organize, but I'd need it to sync (retrieve and put back) at least iTunes ratings, because of iPhone and smart playlists. Not looking for an iTunes replacement. I'm hoping to hear what you hardcore music organizers out there are using as your own solutions! :) I myself am using way too many tools, getting way too little done, and ending up going song by song.

    Read the article

  • Internal Code Signing: Key Distribution, or Certificate Server?

    - by Myrddin Emrys
    I should first note that we have nobody in IT with significant familiarity with self-signed certificates. We have a moderately sprawling network (one forest, many locations), and we are now rolling out internal code signing; until now users have run untrusted code, or we even disabled(!) the warnings. Intranet applications, scripts, and sites will now be signed with self-signed certificates. I am aware of two obvious ways we can deploy this: distributing the keys directly via a group policy, and setting up a cert server. Can someone explain the trade-offs between these two methods? How many certs before the group policy method is unwieldy? Are they large enough that remote users will have issues? Does the group policy method distribute duplicates on every login? Is there a better method I am not aware of? I can find a lot of documentation on certificates and various ways to create them, but I have not been able to find something that summarizes the difference between the distribution methods and what criteria make one or the other superior.

    Read the article

  • Good process/software for organizing photos past/present

    - by Matthew
    So I have tons of photos taken all the time. I have a lot from years past that I never went through (meaning deleting duplicates, etc.). I've got a new PC with Windows 7, and I'm wondering what a good process is to organize those photos. They're in folders that have really no meaning (it used to be that people would put them in a folder wherever, even the desktop or somewhere else, not just the My Pictures folder). I'm going to keep all pictures in the "My Pictures" folder from now on. I've used Picasa from Google, and it works great. Is this the recommended free software for this? What process do I use to move the old pictures over into new "organized" folders? Lately in Picasa, when I import off my camera card, I would just select something that names the folder after the date it was taken. Is this advised? Just give me ideas on how to stay organized with photos. Should I tag them also? Should I rename the file names? Keep in mind I have over 16,000 photos I'll have to go through, so it can't be anything too thorough.

    Read the article

  • Permissions in OS X for iTunes library with multiple users

    - by John
    I currently have a lot of music on an external drive and my iTunes set up from there. However, periodically, when the external drive isn't connected, iTunes will default back to the library location of my home directory user path. I don't want to mess with an external drive, as my Mac's HD is large enough to house the music collection. However, I have 4 family members – all with their own logins – using this same gob of music. I don't want four copies of the library, only one with all libraries referencing it. So, what I want to do is: Move all music files to a shared directory at /Macintosh HD/users/music. I created this directory and adjusted permissions, so all four users can read and write to this directory. Get all four accounts to reference this library instead of the external or local home locations I am hoping I can just check the box to keep library organized in my account, which is the admin and let iTunes move it all. Then delete current libraries for each account and re-add from the new shared location. Will the iTunes organization process cause permissions issues either by setting permissions to all the files access to my account only or write permissions or any other 'gotcha'? I am having a hard time coming up with a smooth solution that won't break everything and cause me to have mega duplicates or access issues. I would prefer not to do any XML library file editing if possible. Am I dreaming?
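
    For step 1, the shared folder can be prepared up front so that whatever iTunes later moves into it stays reachable by every account. A sketch, assuming all four accounts belong to the built-in 'staff' group and the library lives at /Users/Shared/Music (run it with sudo); note that files written later still get each user's umask, so an inherited ACL may ultimately be the cleaner OS X answer:

        #!/usr/bin/env python3
        # Sketch: make an existing shared music folder group-readable/writable for
        # the 'staff' group, with setgid on directories so new items keep the group.
        import grp
        import os

        SHARED = "/Users/Shared/Music"          # assumption: the shared library location
        GID = grp.getgrnam("staff").gr_gid      # assumption: all accounts are in 'staff'

        for dirpath, dirnames, filenames in os.walk(SHARED):
            os.chown(dirpath, -1, GID)          # leave the owner, change the group
            os.chmod(dirpath, 0o2775)           # rwxrwxr-x plus setgid
            for name in filenames:
                path = os.path.join(dirpath, name)
                os.chown(path, -1, GID)
                os.chmod(path, 0o664)           # rw-rw-r--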

    Read the article
