Search Results

Search found 2601 results on 105 pages for 'commit'.


  • Show the progress of committing

    - by Vepr
    I need to show the progress of committing my files to SVN. I am using VS2008 and C#. I have added a progress bar, but once the commit starts I don't know how to report its progress (I would also like to show which file is currently being uploaded, in a label for example).


  • What information should an SVN/versioned-file commit comment contain?

    - by RenderIn
    I'm curious what kind of content should go in a versioned file's commit comment. Should it describe generally what changed (e.g. "The widget screen was changed to display only active widgets"), or should it be more specific (e.g. "A new condition was added to the where clause of the fetchWidget query to retrieve only active widgets by default")? And how atomic should a single commit be? Just the file containing the updated query in a single commit (e.g. "Updated the widget screen to display only active widgets by default"), or should that change, several others, and the interface changes to the screen share one commit with a more general description (e.g. "Updated the widget screen: A) display only active widgets by default, B) added button to toggle showing inactive widgets")? I see Subversion commit comments being used very differently and was wondering what others have had success with. Some comments are as brief as "updated files", others are many paragraphs long, and others are formatted so they can be queried and associated with an external system such as JIRA. I used to describe the reason for the change as well as the specific technical changes in detail. Lately I've been scaling back to a general "this is what I changed on this page" kind of comment.
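
    For illustration only (this example is not from the question, and the issue key and wording are made up): a comment that stays short but can still be queried against an external tracker might look like the following.

        # a one-line summary that names the tracker issue, plus the key technical fact
        svn commit -m "JIRA-123: widget screen now shows only active widgets by default (new condition in the fetchWidget where clause)"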


  • Commit in SVN without letting others check it out - is it possible?

    - by strDisplayName
    Hey everybody, I am using SVN as my source control with AnkhSVN and TortoiseSVN. I am making a huge change in a project that will take a few days, but in the meantime I still want to commit once in a while as a backup. If I commit, though, other team members will get my unfinished work when they update. Is there a way I can "commit for backup" without changing the revision they see (so others won't get updated with my changes)? Thanks!
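
    One common approach to "commit for backup" is a private branch (a sketch, not from the original thread; it assumes a conventional trunk/branches layout and Subversion 1.5 or later). The branch commits still create repository revisions, but they only touch the branch, so teammates who update trunk never receive the intermediate work.

        # create and switch to a private branch (paths are examples)
        svn copy ^/trunk ^/branches/big-change -m "Private branch for the big change"
        svn switch ^/branches/big-change
        # ...commit to the branch as often as you like...
        svn commit -m "Checkpoint: half-finished refactoring"
        # when the work is done, fold it back into trunk
        svn switch ^/trunk
        svn merge --reintegrate ^/branches/big-change
        svn commit -m "Merge the big change back into trunk"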


  • How to concatenate all commit messages from subversion into one text file with no metadata?

    - by user144182
    I would like to take all the commit messages in my Subversion log and just concatenate them into one text file. Each commit message has this format:
    - r1 message
    - r1 message
    - r1 message
    What I would like is something like:
    - r1 message
    - r1 message
    - r2 message
    - r2 message
    - r3 message
    [...]
    - r1000 message
    Update: I thought the above was clear, but what I don't want in the output is this type of info: r2130 | user | 2010-03-19 10:36:13 -0400 (Fri, 19 Mar 2010) | 1 line. No metadata, I simply want the commit messages.
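
    A rough way to do this from a shell (a sketch, not from the thread, and untested; it assumes the default svn log layout of a dashed separator line, a "rNNN | author | date | N lines" header, a blank line, then the message body):

        svn log | awk '
            /^-+$/ { hdr = 1; next }   # dashed separator: the next line is a header
            hdr    { hdr = 0; next }   # skip the "rNNN | author | date | N lines" line
                   { print }           # everything else is commit message text
        ' > all-messages.txt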


  • What happens if a file I want to commit to SVN is updated so often I don't manage to do a merge quickly enough?

    - by sharptooth
    Consider a situation. I want to commit a changed file to SVN and see that someone else committed the same file after I checked it out, so I have to "update" and merge changes. While I'm doing that someone commits the same file again, so when I try to commit the merged file I have to update again. Now if other users commit often enough it looks like I will never be able to commit my changes. Is that really so? How is this problem solved in real development environments?


  • Is it good to commit files often if using Mercurial or Git?

    - by Jian Lin
    It is often suggested (for example on hginit.com) that with Mercurial or Git we can commit frequently to keep track of intermediate changes to the code we write. However, say we work on a project and commit files often. Now, for one reason or another, the manager wants part of the feature to go out, so we need to push, but I have heard that in Mercurial or Git there is no way to push individual files or a folder: either everything committed gets pushed or nothing gets pushed. So either we have to revert all the files we don't want to push, or we should never commit until just before we push -- right after we commit, we push?
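
    As a side note (not from the original question): while you indeed cannot push individual files, both tools can push a chosen changeset together with its ancestors, which is sometimes enough. For example (revision identifiers and remote/branch names assumed):

        # Mercurial: push only up to (and including) changeset REV
        hg push -r REV
        # Git: push commit SHA (and its ancestors) to the remote master branch
        git push origin SHA:master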


  • MacBook Pro Late 2009 SATA Resets, Slowness (Is my motherboard dying on both machines?)

    - by A Student at a University
    My MacBook Pro runs slower the longer it's on. I am getting kernel warnings. Some, but not all, resets correlate with AC power connects and disconnects. I don't think the warnings do. (How do I tell?) What are these errors? What causes them? Can this damage the drive or corrupt data? What is it seeing that motivates these? 02:37:16[ 0.791992] ahci 0000:00:0b.0: PCI INT A -> Link[LSI0] -> GSI 20 (level, low) -> IRQ 20 02:37:16[ 0.792047] ahci 0000:00:0b.0: irq 43 for MSI/MSI-X 02:37:16[ 0.792053] ahci 0000:00:0b.0: controller can't do PMP, turning off CAP_PMP 02:37:16[ 0.792104] ahci 0000:00:0b.0: AHCI 0001.0200 32 slots 6 ports 1.5 Gbps 0x3 impl IDE mode 02:37:16[ 0.792107] ahci 0000:00:0b.0: flags: 64bit ncq sntf pm led pio slum part boh 02:37:16[ 0.792111] ahci 0000:00:0b.0: setting latency timer to 64 02:37:16[ 0.813473] scsi0 : ahci 02:37:16[ 0.823340] scsi1 : ahci 02:37:16[ 0.848164] ata1: SATA max UDMA/133 abar m8192@0xe7484000 port 0xe7484100 irq 43 02:37:16[ 0.848166] ata2: SATA max UDMA/133 abar m8192@0xe7484000 port 0xe7484180 irq 43 02:37:16[ 1.190132] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) 02:37:16[ 1.190153] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 300) 02:37:16[ 1.213568] ata1.00: ATA-8: OCZ-VERTEX2, 1.23, max UDMA/133 02:37:16[ 1.213572] ata1.00: 195371568 sectors, multi 1: LBA48 NCQ (depth 31/32) 02:37:16[ 1.227293] ata2.00: ATA-8: ST9500420ASG, 0002SDM1, max UDMA/133 02:37:16[ 1.227297] ata2.00: 976773168 sectors, multi 16: LBA48 NCQ (depth 31/32) 02:37:16[ 1.229570] ata2.00: configured for UDMA/133 02:37:16[ 1.240120] ata2: exception Emask 0x10 SAct 0x0 SErr 0x5850000 action 0xe frozen 02:37:16[ 1.240123] ata2: irq_stat 0x00000040, connection status changed 02:37:16[ 1.240127] ata2: SError: { PHYRdyChg CommWake LinkSeq TrStaTrns DevExch } 02:37:16[ 1.240133] ata2: hard resetting link 02:37:16[ 1.260738] ata1.00: configured for UDMA/133 02:37:16[ 1.280111] ata1: exception Emask 0x10 SAct 0x0 SErr 0x5850000 action 0xe frozen 02:37:16[ 1.280114] ata1: irq_stat 0x00000040, connection status changed 02:37:16[ 1.280118] ata1: SError: { PHYRdyChg CommWake LinkSeq TrStaTrns DevExch } 02:37:16[ 1.280122] ata1: hard resetting link 02:37:16[ 1.990101] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 300) 02:37:16[ 1.994215] ata2.00: configured for UDMA/133 02:37:16[ 1.994220] ata2: EH complete 02:37:16[ 2.030097] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) 02:37:16[ 2.090773] ata1.00: configured for UDMA/133 02:37:16[ 2.090778] ata1: EH complete 02:37:16[ 2.090931] scsi 0:0:0:0: Direct-Access ATA OCZ-VERTEX2 1.23 PQ: 0 ANSI: 5 02:37:16[ 2.091045] sd 0:0:0:0: Attached scsi generic sg0 type 0 02:37:16[ 2.091121] sd 0:0:0:0: [sda] 195371568 512-byte logical blocks: (100 GB/93.1 GiB) 02:37:16[ 2.091159] scsi 1:0:0:0: Direct-Access ATA ST9500420ASG 0002 PQ: 0 ANSI: 5 02:37:16[ 2.091163] sd 0:0:0:0: [sda] Write Protect is off 02:37:16[ 2.091165] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 02:37:16[ 2.091183] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA 02:37:16[ 2.091252] sd 1:0:0:0: Attached scsi generic sg1 type 0 02:37:16[ 2.091446] sd 1:0:0:0: [sdb] 976773168 512-byte logical blocks: (500 GB/465 GiB) 02:37:16[ 2.091580] sd 1:0:0:0: [sdb] Write Protect is off 02:37:16[ 2.091582] sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 02:37:16[ 2.091637] sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA 02:37:16[ 2.093140] sd 0:0:0:0: [sda] Attached SCSI disk 02:37:16[ 2.093773] sd 
1:0:0:0: [sdb] Attached SCSI disk 02:37:16[ 2.693899] EXT4-fs (dm-0): mounted filesystem with ordered data mode. Opts: (null) 02:37:16[ 5.483492] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro 02:37:16[ 7.905040] EXT4-fs (dm-2): mounted filesystem with ordered data mode. Opts: (null) 02:37:25[ 19.553095] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=600 02:37:25[ 19.555266] EXT4-fs (dm-2): re-mounted. Opts: commit=600 02:37:25[ 19.641532] ata1: exception Emask 0x10 SAct 0x0 SErr 0x5950000 action 0xe frozen t4 02:37:25[ 19.641532] ata1: irq_stat 0x00000040, connection status changed 02:37:25[ 19.641532] ata1: SError: { PHYRdyChg CommWake Dispar LinkSeq TrStaTrns DevExch } 02:37:25[ 19.641533] ata1: hard resetting link 02:37:25[ 19.642076] ata2: exception Emask 0x10 SAct 0x0 SErr 0x5950000 action 0xe frozen t4 02:37:25[ 19.642078] ata2: irq_stat 0x00000040, connection status changed 02:37:25[ 19.642081] ata2: SError: { PHYRdyChg CommWake Dispar LinkSeq TrStaTrns DevExch } 02:37:25[ 19.642084] ata2: hard resetting link 02:37:26[ 20.392606] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) 02:37:26[ 20.392610] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 300) 02:37:26[ 20.396697] ata2.00: configured for UDMA/133 02:37:26[ 20.396703] ata2: EH complete 02:37:26[ 20.451491] ata1.00: configured for UDMA/133 02:37:26[ 20.451498] ata1: EH complete 02:37:30[ 24.563725] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=600 02:37:30[ 24.565939] EXT4-fs (dm-2): re-mounted. Opts: commit=600 02:37:30[ 24.627236] ata1: exception Emask 0x10 SAct 0x0 SErr 0x5900000 action 0xe frozen t4 02:37:30[ 24.627240] ata1: irq_stat 0x00000040, connection status changed 02:37:30[ 24.627242] ata1: SError: { Dispar LinkSeq TrStaTrns DevExch } 02:37:30[ 24.627246] ata1: hard resetting link 02:37:30[ 24.632241] ata2: exception Emask 0x10 SAct 0x0 SErr 0x5950000 action 0xe frozen t4 02:37:30[ 24.632244] ata2: irq_stat 0x00000040, connection status changed 02:37:30[ 24.632247] ata2: SError: { PHYRdyChg CommWake Dispar LinkSeq TrStaTrns DevExch } 02:37:30[ 24.632250] ata2: hard resetting link 02:37:31[ 25.372582] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) 02:37:31[ 25.382615] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 300) 02:37:31[ 25.386782] ata2.00: configured for UDMA/133 02:37:31[ 25.386788] ata2: EH complete 02:37:31[ 25.431668] ata1.00: configured for UDMA/133 02:37:31[ 25.431674] ata1: EH complete 02:45:54[ 529.141844] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=0 02:45:55[ 529.544529] EXT4-fs (dm-2): re-mounted. 
Opts: commit=0 02:45:55[ 529.622561] ata1: limiting SATA link speed to 1.5 Gbps 02:45:55[ 529.622568] ata1: exception Emask 0x10 SAct 0x0 SErr 0x5850000 action 0xe frozen 02:45:55[ 529.622572] ata1: irq_stat 0x00400040, connection status changed 02:45:55[ 529.622576] ata1: SError: { PHYRdyChg CommWake LinkSeq TrStaTrns DevExch } 02:45:55[ 529.622583] ata1: hard resetting link 02:45:55[ 529.622609] ata2: limiting SATA link speed to 1.5 Gbps 02:45:55[ 529.622613] ata2: exception Emask 0x10 SAct 0x0 SErr 0x5950000 action 0xe frozen 02:45:55[ 529.622616] ata2: irq_stat 0x00000040, connection status changed 02:45:55[ 529.622620] ata2: SError: { PHYRdyChg CommWake Dispar LinkSeq TrStaTrns DevExch } 02:45:55[ 529.622624] ata2: hard resetting link 02:45:56[ 530.380135] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:45:56[ 530.380157] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:45:56[ 530.384305] ata2.00: configured for UDMA/133 02:45:56[ 530.384314] ata2: EH complete 02:45:56[ 530.399225] ata1.00: configured for UDMA/133 02:45:56[ 530.399233] ata1: EH complete 02:45:58[ 532.395990] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=600 02:45:58[ 532.518270] EXT4-fs (dm-2): re-mounted. Opts: commit=600 02:45:58[ 532.590968] ata1: exception Emask 0x10 SAct 0x0 SErr 0x5840000 action 0xe frozen t4 02:45:58[ 532.590973] ata1: irq_stat 0x00000040, connection status changed 02:45:58[ 532.590977] ata1: SError: { CommWake LinkSeq TrStaTrns DevExch } 02:45:58[ 532.590983] ata1: hard resetting link 02:45:58[ 532.591034] ata2: exception Emask 0x10 SAct 0x0 SErr 0x5950000 action 0xe frozen t4 02:45:58[ 532.591037] ata2: irq_stat 0x00000040, connection status changed 02:45:58[ 532.591041] ata2: SError: { PHYRdyChg CommWake Dispar LinkSeq TrStaTrns DevExch } 02:45:58[ 532.591045] ata2: hard resetting link 02:45:59[ 533.340147] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:45:59[ 533.340168] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:45:59[ 533.344416] ata2.00: configured for UDMA/133 02:45:59[ 533.344424] ata2: EH complete 02:45:59[ 533.360839] ata1.00: configured for UDMA/133 02:45:59[ 533.360847] ata1: EH complete 02:45:59[ 533.584449] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=0 02:45:59[ 533.586999] EXT4-fs (dm-2): re-mounted. Opts: commit=0 02:45:59[ 533.660117] ata2: exception Emask 0x10 SAct 0x0 SErr 0x5950000 action 0xe frozen 02:45:59[ 533.660122] ata2: irq_stat 0x00000040, connection status changed 02:45:59[ 533.660126] ata2: SError: { PHYRdyChg CommWake Dispar LinkSeq TrStaTrns DevExch } 02:45:59[ 533.660132] ata2: hard resetting link 02:45:59[ 533.660141] ata1: exception Emask 0x10 SAct 0x0 SErr 0x5850000 action 0xe frozen 02:45:59[ 533.660143] ata1: irq_stat 0x00000040, connection status changed 02:45:59[ 533.660147] ata1: SError: { PHYRdyChg CommWake LinkSeq TrStaTrns DevExch } 02:45:59[ 533.660151] ata1: hard resetting link 02:46:00[ 534.412536] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:00[ 534.412562] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:00[ 534.416768] ata2.00: configured for UDMA/133 02:46:00[ 534.416777] ata2: EH complete 02:46:00[ 534.431396] ata1.00: configured for UDMA/133 02:46:00[ 534.431401] ata1: EH complete 02:46:03[ 537.384649] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=600 02:46:03[ 537.504214] EXT4-fs (dm-2): re-mounted. 
Opts: commit=600 02:46:03[ 537.585992] ata1: exception Emask 0x10 SAct 0x0 SErr 0x5900000 action 0xe frozen t4 02:46:03[ 537.585996] ata1: irq_stat 0x00000040, connection status changed 02:46:03[ 537.585999] ata1: SError: { Dispar LinkSeq TrStaTrns DevExch } 02:46:03[ 537.586002] ata1: hard resetting link 02:46:03[ 537.586028] ata2: exception Emask 0x10 SAct 0x0 SErr 0x5950000 action 0xe frozen t4 02:46:03[ 537.586030] ata2: irq_stat 0x00000040, connection status changed 02:46:03[ 537.586033] ata2: SError: { PHYRdyChg CommWake Dispar LinkSeq TrStaTrns DevExch } 02:46:03[ 537.586036] ata2: hard resetting link 02:46:04[ 538.330147] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:04[ 538.330168] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:04[ 538.334389] ata2.00: configured for UDMA/133 02:46:04[ 538.334398] ata2: EH complete 02:46:04[ 538.343511] ata1.00: configured for UDMA/133 02:46:04[ 538.343519] ata1: EH complete 02:46:04[ 538.456413] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=0 02:46:04[ 538.459404] EXT4-fs (dm-2): re-mounted. Opts: commit=0 02:46:04[ 538.540138] ata1.00: limiting speed to UDMA/100:PIO4 02:46:04[ 538.540144] ata1: exception Emask 0x10 SAct 0x0 SErr 0x5850000 action 0xe frozen 02:46:04[ 538.540148] ata1: irq_stat 0x00000040, connection status changed 02:46:04[ 538.540153] ata1: SError: { PHYRdyChg CommWake LinkSeq TrStaTrns DevExch } 02:46:04[ 538.540159] ata1: hard resetting link 02:46:04[ 538.540202] ata2.00: limiting speed to UDMA/100:PIO4 02:46:04[ 538.540207] ata2: exception Emask 0x10 SAct 0x0 SErr 0x5950000 action 0xe frozen 02:46:04[ 538.540211] ata2: irq_stat 0x00000040, connection status changed 02:46:04[ 538.540215] ata2: SError: { PHYRdyChg CommWake Dispar LinkSeq TrStaTrns DevExch } 02:46:04[ 538.540220] ata2: hard resetting link 02:46:05[ 539.290054] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:05[ 539.290041] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:05[ 539.294100] ata2.00: configured for UDMA/100 02:46:05[ 539.294106] ata2: EH complete 02:46:05[ 539.314125] ata1.00: configured for UDMA/100 02:46:05[ 539.314132] ------------[ cut here ]------------ 02:46:05[ 539.314140] WARNING: at /build/buildd/linux-2.6.35/drivers/ata/libata-eh.c:3638 ata_eh_finish+0xdf/0xf0() 02:46:05[ 539.314144] Hardware name: MacBookPro5,3 02:46:05[ 539.314146] Modules linked in: michael_mic arc4 xt_multiport binfmt_misc rfcomm sco bnep l2cap parport_pc ppdev nvidia(P) ipt_REJECT xt_recent snd_hda_codec_cirrus xt_limit xt_tcpudp ipt_addrtype xt_state snd_hda_intel snd_hda_codec snd_hwdep snd_pcm snd_seq_midi applesmc led_class ip6table_filter lib80211_crypt_tkip snd_rawmidi snd_seq_midi_event ip6_tables input_polldev hid_apple snd_seq wl(P) snd_timer snd_seq_device snd joydev bcm5974 usbhid mbp_nvidia_bl uvcvideo btusb videodev v4l1_compat v4l2_compat_ioctl32 nf_nat_irc hid nf_conntrack_irc soundcore snd_page_alloc i2c_nforce2 coretemp lib80211 bluetooth nf_nat_ftp nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_conntrack_ftp nf_conntrack lp parport iptable_filter ip_tables x_tables usb_storage firewire_ohci firewire_core forcedeth crc_itu_t ahci libahci 02:46:05[ 539.314221] Pid: 202, comm: scsi_eh_0 Tainted: P 2.6.35-25-generic #44-Ubuntu 02:46:05[ 539.314224] Call Trace: 02:46:05[ 539.314233] [<ffffffff8106091f>] warn_slowpath_common+0x7f/0xc0 02:46:05[ 539.314237] [<ffffffff8106097a>] warn_slowpath_null+0x1a/0x20 02:46:05[ 539.314242] [<ffffffff813dc77f>] ata_eh_finish+0xdf/0xf0 02:46:05[ 539.314246] 
[<ffffffff813e441e>] sata_pmp_error_handler+0x2e/0x40 02:46:05[ 539.314256] [<ffffffffa00021bf>] ahci_error_handler+0x1f/0x90 [libahci] 02:46:05[ 539.314261] [<ffffffff813dd6d2>] ata_scsi_error+0x492/0x5e0 02:46:05[ 539.314266] [<ffffffff813b24cd>] scsi_error_handler+0x10d/0x190 02:46:05[ 539.314270] [<ffffffff813b23c0>] ? scsi_error_handler+0x0/0x190 02:46:05[ 539.314275] [<ffffffff8107f266>] kthread+0x96/0xa0 02:46:05[ 539.314280] [<ffffffff8100aee4>] kernel_thread_helper+0x4/0x10 02:46:05[ 539.314284] [<ffffffff8107f1d0>] ? kthread+0x0/0xa0 02:46:05[ 539.314288] [<ffffffff8100aee0>] ? kernel_thread_helper+0x0/0x10 02:46:05[ 539.314291] ---[ end trace 76dbffc2d5d49d9b ]--- 02:46:05[ 539.314296] ata1: EH complete 02:46:12[ 547.040091] ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen 02:46:12[ 547.040098] ata1.00: failed command: IDENTIFY DEVICE 02:46:12[ 547.040106] ata1.00: cmd ec/00:00:00:00:00/00:00:00:00:00/40 tag 0 pio 512 in 02:46:12[ 547.040108] res 40/00:01:00:00:00/00:00:00:00:00/e0 Emask 0x4 (timeout) 02:46:12[ 547.040111] ata1.00: status: { DRDY } 02:46:12[ 547.040117] ata1: hard resetting link 02:46:13[ 547.390144] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:13[ 547.408430] ata1.00: configured for UDMA/100 02:46:13[ 547.408438] ------------[ cut here ]------------ 02:46:13[ 547.408447] WARNING: at /build/buildd/linux-2.6.35/drivers/ata/libata-eh.c:3638 ata_eh_finish+0xdf/0xf0() 02:46:13[ 547.408451] Hardware name: MacBookPro5,3 02:46:13[ 547.408453] Modules linked in: michael_mic arc4 xt_multiport binfmt_misc rfcomm sco bnep l2cap parport_pc ppdev nvidia(P) ipt_REJECT xt_recent snd_hda_codec_cirrus xt_limit xt_tcpudp ipt_addrtype xt_state snd_hda_intel snd_hda_codec snd_hwdep snd_pcm snd_seq_midi applesmc led_class ip6table_filter lib80211_crypt_tkip snd_rawmidi snd_seq_midi_event ip6_tables input_polldev hid_apple snd_seq wl(P) snd_timer snd_seq_device snd joydev bcm5974 usbhid mbp_nvidia_bl uvcvideo btusb videodev v4l1_compat v4l2_compat_ioctl32 nf_nat_irc hid nf_conntrack_irc soundcore snd_page_alloc i2c_nforce2 coretemp lib80211 bluetooth nf_nat_ftp nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_conntrack_ftp nf_conntrack lp parport iptable_filter ip_tables x_tables usb_storage firewire_ohci firewire_core forcedeth crc_itu_t ahci libahci 02:46:13[ 547.408528] Pid: 202, comm: scsi_eh_0 Tainted: P W 2.6.35-25-generic #44-Ubuntu 02:46:13[ 547.408531] Call Trace: 02:46:13[ 547.408540] [<ffffffff8106091f>] warn_slowpath_common+0x7f/0xc0 02:46:13[ 547.408544] [<ffffffff8106097a>] warn_slowpath_null+0x1a/0x20 02:46:13[ 547.408549] [<ffffffff813dc77f>] ata_eh_finish+0xdf/0xf0 02:46:13[ 547.408553] [<ffffffff813e441e>] sata_pmp_error_handler+0x2e/0x40 02:46:13[ 547.408563] [<ffffffffa00021bf>] ahci_error_handler+0x1f/0x90 [libahci] 02:46:13[ 547.408567] [<ffffffff813dd6d2>] ata_scsi_error+0x492/0x5e0 02:46:13[ 547.408572] [<ffffffff813b24cd>] scsi_error_handler+0x10d/0x190 02:46:13[ 547.408577] [<ffffffff813b23c0>] ? scsi_error_handler+0x0/0x190 02:46:13[ 547.408582] [<ffffffff8107f266>] kthread+0x96/0xa0 02:46:13[ 547.408587] [<ffffffff8100aee4>] kernel_thread_helper+0x4/0x10 02:46:13[ 547.408591] [<ffffffff8107f1d0>] ? kthread+0x0/0xa0 02:46:13[ 547.408595] [<ffffffff8100aee0>] ? kernel_thread_helper+0x0/0x10 02:46:13[ 547.408598] ---[ end trace 76dbffc2d5d49d9c ]--- 02:46:13[ 547.408620] ata1: EH complete 02:46:13[ 547.562470] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=600 02:46:13[ 547.671380] EXT4-fs (dm-2): re-mounted. 
Opts: commit=600 02:46:13[ 547.738198] ata1.00: limiting speed to UDMA/33:PIO4 02:46:13[ 547.738204] ata1: exception Emask 0x10 SAct 0x0 SErr 0x5800000 action 0xe frozen t4 02:46:13[ 547.738208] ata1: irq_stat 0x00000040, connection status changed 02:46:13[ 547.738212] ata1: SError: { LinkSeq TrStaTrns DevExch } 02:46:13[ 547.738218] ata1: hard resetting link 02:46:13[ 547.738262] ata2: exception Emask 0x10 SAct 0x0 SErr 0x5900000 action 0xe frozen t4 02:46:13[ 547.738265] ata2: irq_stat 0x00000040, connection status changed 02:46:13[ 547.738269] ata2: SError: { Dispar LinkSeq TrStaTrns DevExch } 02:46:13[ 547.738274] ata2: hard resetting link 02:46:14[ 548.482561] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:14[ 548.484083] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:14[ 548.486809] ata2.00: configured for UDMA/100 02:46:14[ 548.486818] ata2: EH complete 02:46:14[ 548.498998] ata1.00: configured for UDMA/33 02:46:14[ 548.499004] ata1: EH complete 02:46:18[ 552.410499] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=600 02:46:18[ 552.522521] EXT4-fs (dm-2): re-mounted. Opts: commit=600 02:46:18[ 552.529674] ata1: exception Emask 0x10 SAct 0x0 SErr 0x5800000 action 0xe frozen t4 02:46:18[ 552.529678] ata1: irq_stat 0x00000040, connection status changed 02:46:18[ 552.529680] ata1: SError: { LinkSeq TrStaTrns DevExch } 02:46:18[ 552.529684] ata1: hard resetting link 02:46:18[ 552.529716] ata2: exception Emask 0x10 SAct 0x0 SErr 0x5800000 action 0xe frozen t4 02:46:18[ 552.529718] ata2: irq_stat 0x00000040, connection status changed 02:46:18[ 552.529720] ata2: SError: { LinkSeq TrStaTrns DevExch } 02:46:18[ 552.529723] ata2: hard resetting link 02:46:19[ 553.280059] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:19[ 553.280068] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:19[ 553.284141] ata2.00: configured for UDMA/100 02:46:19[ 553.284150] ata2: EH complete 02:46:19[ 553.301629] ata1.00: configured for UDMA/33 02:46:19[ 553.301637] ata1: EH complete


  • How to get users to commit and collaborate to make a website valuable? [closed]

    - by AzizAG
    I own a website that needs a fairly large number of users to collaborate and contribute occasionally in order to be valuable; basically, the website has no value without users helping me put content on it. To avoid confusion, I'm thinking of websites like Wikipedia, Stack Exchange and Yahoo! Answers, where most of the content comes from peer effort. How do they actually get users interested and committed in the first place? What do I have to do to get users involved in the website and actually help me grow it?


  • How I do VCS

    - by Wes McClure
    After years of dabbling with different version control systems and techniques, I wanted to share some of what I like and dislike in a few blog posts.  To start this out, I want to talk about how I use VCS in a team environment.  These come in a series of tips or best practices that I try to follow.  Note: This list is subject to change in the future. Always use some form of version control for all aspects of software development. Development is an evolution.  Looking back at where we were is an invaluable asset in that process.  This includes data schemas and documentation. Reverting / reapplying changes is absolutely critical for efficient development. The tools I use: Code: Hg (preferred), SVN Database: TSqlMigrations Documents: Sometimes in code repository, also SharePoint with versioning Always tag a commit (changeset) with comments This is a quick way to describe to someone else (or your future self) what the changeset entails. Be brief but courteous. One or two sentences about the task, not the actual changes. Use precommit hooks or setup the central repository to reject changes without comments. Link changesets to documentation If your project management system integrates with version control, or has a way to externally reference stories, tasks etc then leave a reference in the commit.  This helps locate more information about the commit and/or related changesets. It’s best to have a precommit hook or system that requires this information, otherwise it’s easy to forget. Ability to work offline is required, including commits and history Yes this requires a DVCS locally but doesn’t require the central repository to be a DVCS.  I prefer to use either Git or Hg but if it isn’t possible to migrate the central repository, it’s still possible for a developer to push / pull changes to that repository from a local Hg or Git repository. Never lock resources (files) in a central repository… Rude! We have merge tools for a reason, merging sucked a long time ago, it doesn’t anymore… stop locking files! This is unproductive, rude and annoying to other team members. Always review everything in your commit. Never ever commit a set of files without reviewing the changes in each. Never add a file without asking yourself, deep down inside, does this belong? If you leave to make changes during a review, start the review over when you come back.  Never assume you didn’t touch a file, double check. This is another reason why you want to avoid large, infrequent commits. Requirements for tools Quickly show pending changes for the entire repository. Default action for a resource with pending changes is a diff. Pluggable diff & merge tool Produce a unified diff or a diff of all changes.  This is helpful to bulk review changes instead of opening each file. The central repository is not your own personal dump yard.  Breaking this rule is a sure fire way to get the F bomb dropped in front of your name, multiple times. If you turn on Visual Studio’s commit on closing studio option, I will personally break your fingers. By the way, the person(s) in charge of this feature should be fired and never be allowed near programming, ever again. Commit (integrate) to the central repository / branch frequently I try to do this before leaving each day, especially without a DVCS.  One never knows when they might need to work from remote the following day. Never commit commented out code If it isn’t needed anymore, delete it! If you aren’t sure if it might be useful in the future, delete it! This is why we have history. 
If you don’t know why it’s commented out, figure it out and then either uncomment it or delete it. Don’t commit build artifacts, user preferences and temporary files. Build artifacts do not belong in VCS, everything in them is present in the code. (ie: bin\*, obj\*, *.dll, *.exe) User preferences are your settings, stop overriding my preferences files! (ie: *.suo and *.user files) Most tools allow you to ignore certain files and Hg/Git allow you to version this as an ignore file.  Set this up as a first step when creating a new repository! Be polite when merging unresolved conflicts. Count to 10, cuss, grab a stress ball and realize it’s not a big deal.  Actually, it’s an opportunity to let you know that someone else is working in the same area and you might want to communicate with them. Following the other rules, especially committing frequently, will reduce the likelihood of this. Suck it up, we all have to deal with this unintended consequence at times.  Just be careful and GET FAMILIAR with your merge tool.  It’s really not as scary as you think.  I personally prefer KDiff3 as its merging capabilities rock. Don’t blindly merge and then blindly commit your changes, this is rude and unprofessional.  Make sure you understand why the conflict occurred and which parts of the code you want to keep.  Apply scrutiny when you commit a manual merge: review the diff! Make sure you test the changes (build and run automated tests) Become intimate with your version control system and the tools you use with it. Avoid trial and error as much as is possible, sit down and test the tool out, read some tutorials etc.  Create test repositories and walk through common scenarios. Find the most efficient way to do your work.  These tools will be used repetitively, so inefficiencies will add up. Sometimes this involves a mix of tools, both GUI and CLI. I like a combination of both Tortoise Hg and hg cli to get the job efficiently. Always tag releases Create a way to find a given release, whether this be in comments or an explicit tag / branch.  This should be readily discoverable. Create release branches to patch bugs and then merge the changes back to other development branch(es). If using feature branches, strive for periodic integrations. Feature branches often cause forked code that becomes irreconcilable.  Strive to re-integrate somewhat frequently with the branch this code will ultimately be merged into.  This will avoid merge conflicts in the future. Feature branches are best when they are mutually exclusive of active development in other branches. Use and abuse local commits , at least one per task in a story. This builds a trail of changes in your local repository that can be pushed to a central repository when the story is complete. Never commit a broken build or failing tests to the central repository. It’s ok for a local commit to break the build and/or tests.  In fact, I encourage this if it helps group the changes more logically.  This is one of the main reasons I got excited about DVCS, when I wanted more than one changeset for a set of pending changes but some files could be grouped into both changesets (like solution file / project file changes). If you have more than a dozen outstanding changed resources, there should probably be more than one commit involved. 
    An exception is when maintaining code bases that require shotgun surgery; in that case, it’s a design smell :) Don’t version sensitive information, especially usernames / passwords. There is one area where I haven’t found a solution I like yet: versioning 3rd party libraries and/or code. I really dislike keeping any assemblies in the repository, but it seems to be a common practice for external libraries. Please feel free to share your ideas about this below. -Wes
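
    As a small, concrete companion to the "always tag releases" and ignore-file advice above (these commands are only a sketch and are not part of the original post; the version number and ignore patterns are examples):

        # tag a release so it is easy to find later
        hg tag -m "Release 1.2.0" v1.2.0
        git tag -a v1.2.0 -m "Release 1.2.0"
        # version the ignore rules as one of the first commits of a new repository
        printf 'syntax: glob\nbin/*\nobj/*\n*.suo\n*.user\n' > .hgignore
        hg add .hgignore && hg commit -m "Add ignore rules"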


  • How to pull one commit at a time from a remote git repository?

    - by Norman Ramsey
    I'm trying to set up a darcs mirror of a git repository. I have something that works OK, but there's a significant problem: if I push a whole bunch of commits to the git repo, those commits get merged into a single darcs patchset. I really want to make sure each git commit gets set up as a single darcs patchset. I bet this is possible by doing some kind of git fetch followed by interrogation of the local copy of the remote branch, but my git fu is not up to the job. Here's the (ksh) code I'm using now, more or less:

        git pull -v   # pulls all the commits from remote --- bad!

        # gets information about only the last commit pulled -- bad!
        author="$(git log HEAD^..HEAD --pretty=format:"%an <%ae>")"
        logfile=$(mktemp)
        git log HEAD^..HEAD --pretty=format:"%s%n%b%n" > $logfile

        # add all new files to darcs and record a patchset. this part is OK
        darcs add -q --umask=0002 -r .
        darcs record -a -A "$author" --logfile="$logfile"
        darcs push -a
        rm -f $logfile

    My idea is:
    1. Try git fetch to get a local copy of the remote branch (not sure exactly what arguments are needed).
    2. Somehow interrogate the local copy to get a hash for every commit since the last mirroring operation (I have no idea how to do this).
    3. Loop through all the hashes, pulling just that commit and recording the associated patchset (I'm pretty sure I know how to do this if I get my hands on the hash).
    I'd welcome either help fleshing out the scenario above or suggestions about something else I should try. Ideas?
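
    A sketch of that idea (not from the question; it assumes the remote is named origin, the mirrored branch is master, and the new commits form a simple linear run on top of the current HEAD):

        git fetch origin                      # update origin/master without touching the working tree
        for rev in $(git rev-list --reverse HEAD..origin/master); do
            git merge --ff-only "$rev"        # fast-forward the working tree to exactly this commit
            author="$(git log -1 --pretty=format:"%an <%ae>" "$rev")"
            logfile=$(mktemp)
            git log -1 --pretty=format:"%s%n%b%n" "$rev" > "$logfile"
            darcs add -q --umask=0002 -r .    # same darcs steps as before, once per git commit
            darcs record -a -A "$author" --logfile="$logfile"
            darcs push -a
            rm -f "$logfile"
        done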


  • Can I specify the files to commit in subversion in a file rather than on the command line?

    - by René Nyffenegger
    I have renamed (with svn move) a lot of files in a Subversion project. Now I am trying to commit these from Windows' cmd.exe. It seems that I hit a limit (probably imposed by cmd.exe) in that the list of files is too long for the command line to swallow. I thought and hoped that I could list the files to commit in a separate file that I could specify with the commit command (something like svn ci --files-to-commit=renamed-files.txt -m "Renamed a lot of files"). Yet either such an option does not exist or I am unable to find it. Unfortunately, I cannot do a plain svn ci . as I have made other changes in the project as well. Neither can I do svn ci *pattern-of-renamed-files*, since this would only check in the added files, not the deleted ones. Before I start checking the files in as smaller chunks (and thus increase the revision number unnecessarily without giving a hint as to the 'atomicity' of the operation), I thought I would ask whether this is indeed impossible to do.
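
    For what it's worth, Subversion does have an option for this: --targets reads the list of paths to operate on from a file, so the command line stays short. A minimal sketch (the file name is taken from the question):

        # renamed-files.txt lists one path per line; svn move produces both the new
        # (added) and the old (deleted) paths, so include both in the list
        svn commit --targets renamed-files.txt -m "Renamed a lot of files"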


  • Does GitHub.com have to create a merge commit when you merge from a fork?

    - by Nishant
    I cloned the master and started doing my work. Due to permissions I push my branch to my fork. I then send a pull request to the master repository and someone with permission does the merge. I notice that GitHub.com creates a merge commit, which to me looks like just a diff of the entire change set; that is not strictly necessary, but it is helpful in the sense that I can look at the merge commit to see the entire diff. I can see the same SHA hashes as on my own branch, hence it looks like the merge is an extra commit that probably isn't necessary, since it's a fast-forward?
    master: a
    myfork (computer): a->b->c
    myfork (github): a->b->c
    The pull request from myfork to master (which it says it can automatically merge) shows the entire diff, and when I merge it, master shows up as a->b->c->d. The d is a merge commit, which I think is not really required because it is a fast-forward? Can someone explain why this happens? I think this would be the same scenario if I rebased on master had master gone ahead, but that has not happened. Master is still at a when I merge.
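
    For comparison, a local illustration (not from the original post; the branch name is assumed): a plain git merge of a branch that is strictly ahead fast-forwards with no extra commit, while GitHub's merge button behaves much like a non-fast-forward merge and records the extra commit d.

        git checkout master
        git merge myfork-branch                       # fast-forward: master moves to c, no commit d
        # the pull-request button instead acts like a non-fast-forward merge:
        git merge --no-ff myfork-branch -m "Merge pull request"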


  • Transactional Messaging in the Windows Azure Service Bus

    - by Alan Smith
    Introduction I’m currently working on broadening the content in the Windows Azure Service Bus Developer Guide. One of the features I have been looking at over the past week is the support for transactional messaging. When using the direct programming model and the WCF interface some, but not all, messaging operations can participate in transactions. This allows developers to improve the reliability of messaging systems. There are some limitations in the transactional model, transactions can only include one top level messaging entity (such as a queue or topic, subscriptions are no top level entities), and transactions cannot include other systems, such as databases. As the transaction model is currently not well documented I have had to figure out how things work through experimentation, with some help from the development team to confirm any questions I had. Hopefully I’ve got the content mostly correct, I will update the content in the e-book if I find any errors or improvements that can be made (any feedback would be very welcome). I’ve not had a chance to look into the code for transactions and asynchronous operations, maybe that would make a nice challenge lab for my Windows Azure Service Bus course. Transactional Messaging Messaging entities in the Windows Azure Service Bus provide support for participation in transactions. This allows developers to perform several messaging operations within a transactional scope, and ensure that all the actions are committed or, if there is a failure, none of the actions are committed. There are a number of scenarios where the use of transactions can increase the reliability of messaging systems. Using TransactionScope In .NET the TransactionScope class can be used to perform a series of actions in a transaction. The using declaration is typically used de define the scope of the transaction. Any transactional operations that are contained within the scope can be committed by calling the Complete method. If the Complete method is not called, any transactional methods in the scope will not commit.   // Create a transactional scope. using (TransactionScope scope = new TransactionScope()) {     // Do something.       // Do something else.       // Commit the transaction.     scope.Complete(); }     In order for methods to participate in the transaction, they must provide support for transactional operations. Database and message queue operations typically provide support for transactions. Transactions in Brokered Messaging Transaction support in Service Bus Brokered Messaging allows message operations to be performed within a transactional scope; however there are some limitations around what operations can be performed within the transaction. In the current release, only one top level messaging entity, such as a queue or topic can participate in a transaction, and the transaction cannot include any other transaction resource managers, making transactions spanning a messaging entity and a database not possible. When sending messages, the send operations can participate in a transaction allowing multiple messages to be sent within a transactional scope. This allows for “all or nothing” delivery of a series of messages to a single queue or topic. When receiving messages, messages that are received in the peek-lock receive mode can be completed, deadlettered or deferred within a transactional scope. In the current release the Abandon method will not participate in a transaction. 
    The same restriction of only one top-level messaging entity applies here, so the Complete method can be called transactionally on messages received from the same queue, or on messages received from one or more subscriptions in the same topic.

    Sending Multiple Messages in a Transaction
    A transactional scope can be used to send multiple messages to a queue or topic. This will ensure that all the messages are enqueued or, if the transaction fails to commit, no messages are enqueued. An example of the code used to send 10 messages to a queue as a single transaction from a console application is shown below.

        QueueClient queueClient = messagingFactory.CreateQueueClient(Queue1);

        Console.Write("Sending");

        // Create a transaction scope.
        using (TransactionScope scope = new TransactionScope())
        {
            for (int i = 0; i < 10; i++)
            {
                // Send a message
                BrokeredMessage msg = new BrokeredMessage("Message: " + i);
                queueClient.Send(msg);
                Console.Write(".");
            }
            Console.WriteLine("Done!");
            Console.WriteLine();

            // Should we commit the transaction?
            Console.WriteLine("Commit send 10 messages? (yes or no)");
            string reply = Console.ReadLine();
            if (reply.ToLower().Equals("yes"))
            {
                // Commit the transaction.
                scope.Complete();
            }
        }
        Console.WriteLine();
        messagingFactory.Close();

    The transaction scope is used to wrap the sending of 10 messages. Once the messages have been sent the user has the option to either commit or abandon the transaction. If the user enters “yes”, the Complete method is called on the scope, which will commit the transaction and result in the messages being enqueued. If the user enters anything other than “yes”, the transaction will not commit, and the messages will not be enqueued.

    Receiving Multiple Messages in a Transaction
    Receiving multiple messages is another scenario where the use of transactions can improve reliability. When receiving a group of related messages, perhaps in the same message session, it is possible to receive the messages in the peek-lock receive mode, and then complete, defer, or deadletter the messages in one transaction. (In the current version of Service Bus, abandon is not transactional.) The following code shows how this can be achieved.

        using (TransactionScope scope = new TransactionScope())
        {
            while (true)
            {
                // Receive a message.
                BrokeredMessage msg = q1Client.Receive(TimeSpan.FromSeconds(1));
                if (msg != null)
                {
                    // Write the message body and complete the message.
                    string text = msg.GetBody<string>();
                    Console.WriteLine("Received: " + text);
                    msg.Complete();
                }
                else
                {
                    break;
                }
            }
            Console.WriteLine();

            // Should we commit?
            Console.WriteLine("Commit receive? (yes or no)");
            string reply = Console.ReadLine();
            if (reply.ToLower().Equals("yes"))
            {
                // Commit the transaction.
                scope.Complete();
            }
            Console.WriteLine();
        }

    Note that if there is a large number of messages to be received, there is a chance that the transaction may time out before it can be committed. It is possible to specify a longer timeout when the transaction is created, but it may be better to receive and commit smaller batches of messages within the transaction.
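    As an aside on the timeout point above, a longer client-side timeout can be requested when the scope is created. The snippet below is a sketch based on the standard TransactionScope constructor rather than on the original article, and the service-side transaction timeout (one minute, according to the error text shown later in this post) still caps how long the transaction can actually live:

        // Request a 50-second timeout for the scope (illustrative value, not a recommendation).
        using (TransactionScope scope = new TransactionScope(
            TransactionScopeOption.Required,
            TimeSpan.FromSeconds(50)))
        {
            // Receive and complete a batch of messages here, as in the example above.
            scope.Complete();
        }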
    It is also possible to complete, defer, or deadletter messages received from more than one subscription, as long as all the subscriptions are contained in the same topic. As subscriptions are not top-level messaging entities this scenario will work. The following code shows how this can be achieved.

        try
        {
            using (TransactionScope scope = new TransactionScope())
            {
                // Receive one message from each subscription.
                BrokeredMessage msg1 = subscriptionClient1.Receive();
                BrokeredMessage msg2 = subscriptionClient2.Receive();

                // Complete the message receives.
                msg1.Complete();
                msg2.Complete();

                Console.WriteLine("Msg1: " + msg1.GetBody<string>());
                Console.WriteLine("Msg2: " + msg2.GetBody<string>());

                // Commit the transaction.
                scope.Complete();
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
        }

    Unsupported Scenarios
    The restriction that only one top-level messaging entity can participate in a transaction leaves some useful scenarios unsupported. As the Windows Azure Service Bus is under continuous development and new releases are expected to be frequent, it is possible that this restriction may be lifted in future releases. The first is the scenario where messages are to be routed to two different systems. The following code attempts to do this.

        try
        {
            // Create a transaction scope.
            using (TransactionScope scope = new TransactionScope())
            {
                BrokeredMessage msg1 = new BrokeredMessage("Message1");
                BrokeredMessage msg2 = new BrokeredMessage("Message2");

                // Send a message to Queue1
                Console.WriteLine("Sending Message1");
                queue1Client.Send(msg1);

                // Send a message to Queue2
                Console.WriteLine("Sending Message2");
                queue2Client.Send(msg2);

                // Commit the transaction.
                Console.WriteLine("Committing transaction...");
                scope.Complete();
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
        }

    When the code attempts to send a message to the second queue, the following exception is thrown: No active Transaction was found for ID '35ad2495-ee8a-4956-bbad-eb4fedf4a96e:1'. The Transaction may have timed out or attempted to span multiple top-level entities such as Queue or Topic. The server Transaction timeout is: 00:01:00..TrackingId:947b8c4b-7754-4044-b91b-4a959c3f9192_3_3,TimeStamp:3/29/2012 7:47:32 AM.

    Another scenario where transactional support could be useful is when forwarding messages from one queue to another queue. This would also involve more than one top-level messaging entity, and is therefore not supported. Another scenario that developers may wish to implement is performing transactions across messaging entities and other transactional systems, such as an on-premises database. In the current release this is not supported.

    Workarounds for Unsupported Scenarios
    There are some techniques that developers can use to work around the one top-level entity limitation of transactions. When sending two messages to two systems, topics and subscriptions can be used. If the same message is to be sent to two destinations, the two subscriptions can use the default (match-everything) filter and the client only sends one message. If two different messages are to be sent, then filters on the subscriptions can route the messages to the appropriate destination. The client can then send the two messages to the topic in the same transaction.
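    To make that routing workaround concrete, a hypothetical setup could look like the following. The topic, subscription and property names are invented, and the client objects are assumed to be created from the same messagingFactory used earlier; the point is that both sends target a single topic, so the transaction involves only one top-level entity while the subscription filters fan the messages out to two destinations:

        // One-time setup: a topic with two filtered subscriptions (names are assumptions).
        NamespaceManager ns = NamespaceManager.CreateFromConnectionString(connectionString);
        if (!ns.TopicExists("routingtopic"))
        {
            ns.CreateTopic("routingtopic");
            ns.CreateSubscription("routingtopic", "systemA", new SqlFilter("Destination = 'A'"));
            ns.CreateSubscription("routingtopic", "systemB", new SqlFilter("Destination = 'B'"));
        }

        TopicClient topicClient = messagingFactory.CreateTopicClient("routingtopic");

        // Both messages go to the same topic, so they can share one transaction.
        using (TransactionScope scope = new TransactionScope())
        {
            BrokeredMessage msg1 = new BrokeredMessage("Message1");
            msg1.Properties["Destination"] = "A";
            BrokeredMessage msg2 = new BrokeredMessage("Message2");
            msg2.Properties["Destination"] = "B";

            topicClient.Send(msg1);
            topicClient.Send(msg2);

            scope.Complete();
        }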
    In scenarios where a message needs to be received and then forwarded to another system within the same transaction, topics and subscriptions can also be used. A message can be received from a subscription, and then sent to a topic within the same transaction. As a topic is a top-level messaging entity, and a subscription is not, this scenario will work.
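    A rough sketch of that receive-and-forward pattern is shown below. The subscriptionClient and topicClient names are assumptions, created elsewhere against the same topic; keeping both operations on one topic is what keeps the transaction inside a single top-level entity:

        using (TransactionScope scope = new TransactionScope())
        {
            // Receive from a subscription of the topic in the peek-lock mode.
            BrokeredMessage received = subscriptionClient.Receive();
            if (received != null)
            {
                // Forward a new message to the same topic and complete the original
                // receive, both inside the one transaction.
                topicClient.Send(new BrokeredMessage("Forwarded: " + received.GetBody<string>()));
                received.Complete();
            }
            scope.Complete();
        }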

    Read the article

  • Fixing a bug while working on a different part of the code base

    - by imgx64
    This happened at least once to me. I'm working on some part of the code base and find a small bug in a different part, and the bug stops me from completing what I'm currently trying to do. Fixing the bug could be as simple as changing a single statement. What do you do in that situation?

    1. Fix the bug and commit it together with your current work.
    2. Save your current work elsewhere, fix the bug in a separate commit, then continue your work. [1]
    3. Continue what you're supposed to do, commit the code (even if it breaks the build or fails some tests), then fix the bug (and make the build/tests pass) in a separate commit.

    [1] In practice, this would mean: clone the original repository elsewhere, fix the bug, commit/push the changes, pull the commit into the repository you're working on, merge the changes, and continue your work. (A lighter-weight git version of this option is sketched below.)

    Edit: I changed number three to reflect what I really meant.
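    For option two, a rough sketch of what "save your current work elsewhere" can look like with git's stash (the file path and commit message here are invented for illustration, not taken from the question):

        git stash push -m "wip: feature work"   # shelve the unfinished feature changes
        # ...edit the file containing the small bug...
        git add src/widget.c                    # hypothetical path
        git commit -m "Fix off-by-one in widget lookup"
        git stash pop                           # restore the feature work and continue

    With a clean working tree between stash and pop, the bug fix lands as its own commit and the unfinished feature work never has to leave your machine.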

    Read the article

  • How do you get Eclipse/Mylyn to fill out your commit messages for you?

    - by Sam Hasler
    I've set up the following:

    - Installed Mylyn in Eclipse
    - Installed the Bugzilla connector
    - Installed Subversive SVN Integration for the Mylyn Project

    I've gone to Window - Preferences - Tasks - Team, clicked Change Set Management, and left it with the default Commit Comment Template: ${task.status} - ${connector.task.prefix} ${task.key}: ${task.description} ${task.url}

    However, if I activate a Bugzilla bug in the Task List and then edit a file, the commit message isn't filled in when I commit the changes. Also, in the Synchronisation perspective there isn't a change set for the task I'm working on. I've tried following the instructions on the Eclipse wiki's Mylyn FAQ for "Why does task change set not appear when I modify files?", but the bullet point "Verify that the configured Synchronize View is configured for change sets" points to a section that is no longer in the document. I have a Show Change Sets button, but clicking it only shows me incoming change sets; there aren't any outgoing change sets. What am I missing?
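    For reference, when the task-scoped change set is picked up correctly, the default template above should expand into a commit message along these lines (the task values here are invented purely for illustration):

        ASSIGNED - bug 12345: Widget screen shows inactive widgets
        https://bugs.example.com/show_bug.cgi?id=12345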

    Read the article

  • git hooks - regenerate a file and add it to each commit ?

    - by egarcia
    I'd like to automatically generate a file and add it to a commit if it has changed. Is it possible, and if so, which hooks should I use? Context: I'm programming a CSS library. It has several CSS files, and at the end I want to produce a compacted and minimized version. Right now my workflow is:

    1. Modify the css files x.css and y.css
    2. git add x.css y.css
    3. Execute minimize.sh, which parses all the css files in my lib, minimizes them and produces a min.css file
    4. git add min.css
    5. git commit -m 'modified x and y doing foo and bar'

    I would like to have steps 3 and 4 done automatically via a git hook. Is that possible? I've never used git hooks before. After reading the man page, I think I need to use the pre-commit hook. But can I invoke git add min.css, or will I break the internet?
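    If the hook route fits, a minimal pre-commit sketch could look like the following. minimize.sh and min.css come from the question above; everything else, including the assumption that the script runs from the repository root, is illustrative:

        #!/bin/sh
        # .git/hooks/pre-commit
        ./minimize.sh || exit 1   # abort the commit if minimization fails
        git add min.css           # content staged here becomes part of this commit
        exit 0

    One caveat worth noting: files staged from a pre-commit hook are picked up by a plain "git commit", but invocations that build a temporary index (for example "git commit -a" or "git commit <paths>") may not behave the same way.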

    Read the article

  • How can I use git to stage only one line in a file for commit, all from a script?

    - by Sandy
    I'm writing a simple pre-commit git hook that updates the year in copyright headers for files that are staged for commit. After modifying the line with the copyright, I would like the hook to stage that line so that it is part of the commit. It can't just git add the whole file, because there may be other pre-existing changes in there that shouldn't be staged. I don't see any options in the git add manual that let you stage specific lines. I figure I could git stash save --keep-index, apply my change, git add the file, and then git stash pop, but that seems rather crude. Any better approaches?
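    One scriptable alternative to the stash dance is to rewrite only the staged copy of the file and leave the rest of the working tree alone. A sketch follows; the file name and sed expression are invented, and it assumes the copyright line is the only change that should be staged:

        FILE="src/widget.c"
        FIX='s/Copyright (c) 2011/Copyright (c) 2012/'
        # 1. Apply the fix to the working tree copy so the file on disk matches the commit.
        sed -i "$FIX" "$FILE"
        # 2. Apply the same fix to the staged content only: read the index version of the
        #    file, edit it, write it back as a new blob, and point the index at that blob.
        #    Other unstaged hunks in the file stay unstaged.
        BLOB=$(git show ":$FILE" | sed "$FIX" | git hash-object -w --stdin)
        git update-index --cacheinfo 100644 "$BLOB" "$FILE"

    This keeps the hook from touching any pre-existing unstaged changes, at the cost of hard-coding the 100644 file mode in the sketch.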

    Read the article

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspx

    Many people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with an easy-to-use API through the .NET SDK and HTTP REST. For example, we can store JavaScript files, images and documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure, or we can store our VHD files in blob storage and mount them as hard drives in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blob and block blob. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunk read and write, which is the more common usage. Since we can upload a block blob in blocks through BlockBlob.PutBlock, and then commit them as a whole blob by invoking BlockBlob.PutBlockList, it is very powerful for uploading large files, as we can upload blocks in parallel and provide a pause-resume feature. There are many documents, articles and blog posts that describe how to upload a block blob. Most of them focus on the server side, which means once you have received a big file, stream or binaries, how to upload them into blob storage in blocks through the .NET SDK. But the problem is: how can we upload these large files from the client side, for example a browser? This question came to me when I was working with a Chinese customer to help them build a network disk product on top of Azure. The end users upload their files from the web portal, and then the files are stored in blob storage from the Web Role. My goal is to find the best way to transfer the file from the client (end user’s machine) to the server (Web Role) through the browser. In this post I will demonstrate and describe what I did to upload large files in chunks with high speed, and save them as blocks into Windows Azure Blob Storage.

    Traditional Upload, Works with Limitation
    The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button.

        @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" }))
        {
            <input type="file" name="file" />
            <input type="submit" value="upload" />
        }

    And then in the backend controller, we retrieve the whole content of this file and upload it into blob storage through the .NET SDK. We can split the file into blocks, upload them in parallel and commit. The code has been well blogged in the community.
1: [HttpPost] 2: public ActionResult About(HttpPostedFileBase file) 3: { 4: var container = _client.GetContainerReference("test"); 5: container.CreateIfNotExists(); 6: var blob = container.GetBlockBlobReference(file.FileName); 7: var blockDataList = new Dictionary<string, byte[]>(); 8: using (var stream = file.InputStream) 9: { 10: var blockSizeInKB = 1024; 11: var offset = 0; 12: var index = 0; 13: while (offset < stream.Length) 14: { 15: var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset); 16: var blockData = new byte[readLength]; 17: offset += stream.Read(blockData, 0, readLength); 18: blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData); 19:  20: index++; 21: } 22: } 23:  24: Parallel.ForEach(blockDataList, (bi) => 25: { 26: blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null); 27: }); 28: blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray()); 29:  30: return RedirectToAction("About"); 31: } This works perfect if we selected an image, a music or a small video to upload. But if I selected a large file, let’s say a 6GB HD-movie, after upload for about few minutes the page will be shown as below and the upload will be terminated. In ASP.NET there is a limitation of request length and the maximized request length is defined in the web.config file. It’s a number which less than about 4GB. So if we want to upload a really big file, we cannot simply implement in this way. Also, in Windows Azure, a cloud service network load balancer will terminate the connection if exceed the timeout period. From my test the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitation mentioned above, the simple HTML file upload cannot provide rich upload experience such as chunk upload, pause and pause-resume. So we need to find a better way to upload large file from the client to the server.   Upload in Chunks through HTML5 and JavaScript In order to break those limitation mentioned above we will try to upload the large file in chunks. This takes some benefit to us such as - No request size limitation: Since we upload in chunks, we can define the request size for each chunks regardless how big the entire file is. - No timeout problem: The size of chunks are controlled by us, which means we should be able to make sure request for each chunk upload will not exceed the timeout period of both ASP.NET and Windows Azure load balancer. It was a big challenge to upload big file in chunks until we have HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.   In HTML5, the File interface had been improved with a new method called “slice”. It can be used to read part of the file by specifying the start byte index and the end byte index. For example if the entire file was 1024 bytes, file.slice(512, 768) will read the part of this file from the 512nd byte to 768th byte, and return a new object of interface called "Blob”, which you can treat as an array of bytes. In fact,  a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about the Blob please refer here. File and Blob is very useful to implement the chunk upload. 
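    Before moving on to the chunked implementation, it is worth noting where the ASP.NET request length ceiling mentioned above lives. The web.config sketch below uses illustrative values, not values from the original post; the exact limits depend on the IIS/ASP.NET version, and raising them does nothing about the load balancer timeout, which is why chunking is still needed:

        <!-- web.config: illustrative limits, not taken from the original post -->
        <system.web>
          <httpRuntime maxRequestLength="1048576" executionTimeout="3600" /> <!-- value in KB -->
        </system.web>
        <system.webServer>
          <security>
            <requestFiltering>
              <requestLimits maxAllowedContentLength="1073741824" /> <!-- value in bytes -->
            </requestFiltering>
          </security>
        </system.webServer>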
We will use File interface to represent the file the user selected from the browser and then use File.slice to read the file in chunks in the size we wanted. For example, if we wanted to upload a 10MB file with 512KB chunks, then we can read it in 512KB blobs by using File.slice in a loop.   Assuming we have a web page as below. User can select a file, an input box to specify the block size in KB and a button to start upload. 1: <div> 2: <input type="file" id="upload_files" name="files[]" /><br /> 3: Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br /> 4: <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" /> 5: </div> Then we can have the JavaScript function to upload the file in chunks when user clicked the button. 1: <script type="text/javascript"> 1: 2: $(function () { 3: $("#upload_button_blob").click(function () { 4: }); 5: });</script> Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke the File, Blob and FormData from the “window” object. If any of them is “undefined” the condition result will be “false” which means your browser doesn’t support these premium feature and it’s time for you to get your browser updated. FormData is another new feature we are going to use in the future. It could generate a temporary form for us. We will use this interface to create a form with chunk and associated metadata when invoked the service through ajax. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: if (window.File && window.Blob && window.FormData) { 4: alert("Your brwoser is awesome, let's rock!"); 5: } 6: else { 7: alert("Oh man plz update to a modern browser before try is cool stuff out."); 8: return; 9: } 10: }); Each browser supports these interfaces by their own implementation and currently the Blob, File and File.slice are supported by Chrome 21, FireFox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we worked on the files the user selected one by one since in HTML5, user can select multiple files in one file input box. 1: var files = $("#upload_files")[0].files; 2: for (var i = 0; i < files.length; i++) { 3: var file = files[i]; 4: var fileSize = file.size; 5: var fileName = file.name; 6: } Next, we calculated the start index and end index for each chunks based on the size the user specified from the browser. We put them into an array with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks since we need to specify the target blob name and the block index. At the same time we will store the list of all indexes into another variant which will be used to commit blocks into blob in Azure Storage once all chunks had been uploaded successfully. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 
4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10:  11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: var blockSizeInKB = $("#block_size").val(); 14: var blockSize = blockSizeInKB * 1024; 15: var blocks = []; 16: var offset = 0; 17: var index = 0; 18: var list = ""; 19: while (offset < fileSize) { 20: var start = offset; 21: var end = Math.min(offset + blockSize, fileSize); 22:  23: blocks.push({ 24: name: fileName, 25: index: index, 26: start: start, 27: end: end 28: }); 29: list += index + ","; 30:  31: offset = end; 32: index++; 33: } 34: } 35: }); Now we have all chunks’ information ready. The next step should be upload them one by one to the server side, and at the server side when received a chunk it will upload as a block into Blob Storage, and finally commit them with the index list through BlockBlobClient.PutBlockList. But since all these invokes are ajax calling, which means not synchronized call. So we need to introduce a new JavaScript library to help us coordinate the asynchronize operation, which named “async.js”. You can download this JavaScript library here, and you can find the document here. I will not explain this library too much in this post. We will put all procedures we want to execute as a function array, and pass into the proper function defined in async.js to let it help us to control the execution sequence, in series or in parallel. Hence we will define an array and put the function for chunk upload into this array. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4:  5: // start to upload each files in chunks 6: var files = $("#upload_files")[0].files; 7: for (var i = 0; i < files.length; i++) { 8: var file = files[i]; 9: var fileSize = file.size; 10: var fileName = file.name; 11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: ... ... 14:  15: // define the function array and push all chunk upload operation into this array 16: blocks.forEach(function (block) { 17: putBlocks.push(function (callback) { 18: }); 19: }); 20: } 21: }); 22: }); As you can see, I used File.slice method to read each chunks based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then post this form to the backend server through jQuery.ajax. This is the key part of our solution. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 
13: // define the function array and push all chunk upload operation into this array 14: blocks.forEach(function (block) { 15: putBlocks.push(function (callback) { 16: // load blob based on the start and end index for each chunks 17: var blob = file.slice(block.start, block.end); 18: // put the file name, index and blob into a temporary from 19: var fd = new FormData(); 20: fd.append("name", block.name); 21: fd.append("index", block.index); 22: fd.append("file", blob); 23: // post the form to backend service (asp.net mvc controller action) 24: $.ajax({ 25: url: "/Home/UploadInFormData", 26: data: fd, 27: processData: false, 28: contentType: "multipart/form-data", 29: type: "POST", 30: success: function (result) { 31: if (!result.success) { 32: alert(result.error); 33: } 34: callback(null, block.index); 35: } 36: }); 37: }); 38: }); 39: } 40: }); Then we will invoke these functions one by one by using the async.js. And once all functions had been executed successfully I invoked another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.series(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); That’s all in the client side. The outline of our logic would be - Calculate the start and end byte index for each chunks based on the block size. - Defined the functions of reading the chunk form file and upload the content to the backend service through ajax. - Execute the functions defined in previous step with “async.js”. - Commit the chunks by invoking the backend service in Windows Azure Storage finally.   Save Chunks as Blocks into Blob Storage In above we finished the client size JavaScript code. It uploaded the file in chunks to the backend service which we are going to implement in this step. We will use ASP.NET MVC as our backend service, and it will receive the chunks, upload into Windows Azure Bob Storage in blocks, then finally commit as one blob. As in the client side we uploaded chunks by invoking the ajax call to the URL "/Home/UploadInFormData", I created a new action under the Index controller and it only accepts HTTP POST request. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: } 8: catch (Exception e) 9: { 10: error = e.ToString(); 11: } 12:  13: return new JsonResult() 14: { 15: Data = new 16: { 17: success = string.IsNullOrWhiteSpace(error), 18: error = error 19: } 20: }; 21: } Then I retrieved the file name, index and the chunk content from the Request.Form object, which was passed from our client side. 
And then, used the Windows Azure SDK to create a blob container (in this case we will use the container named “test”.) and create a blob reference with the blob name (same as the file name). Then uploaded the chunk as a block of this blob with the index, since in Blob Storage each block must have an index (ID) associated with so that finally we can put all blocks as one blob by specifying their block ID list. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var index = int.Parse(Request.Form["index"]); 9: var file = Request.Files[0]; 10: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 11:  12: var container = _client.GetContainerReference("test"); 13: container.CreateIfNotExists(); 14: var blob = container.GetBlockBlobReference(name); 15: blob.PutBlock(id, file.InputStream, null); 16: } 17: catch (Exception e) 18: { 19: error = e.ToString(); 20: } 21:  22: return new JsonResult() 23: { 24: Data = new 25: { 26: success = string.IsNullOrWhiteSpace(error), 27: error = error 28: } 29: }; 30: } Next, I created another action to commit the blocks into blob once all chunks had been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunks ID list, which is the block ID list from the Request.Form in a string format, split them as a list, then invoked the BlockBlob.PutBlockList method. After that our blob will be shown in the container and ready to be download. 1: [HttpPost] 2: public JsonResult Commit() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var list = Request.Form["list"]; 9: var ids = list 10: .Split(',') 11: .Where(id => !string.IsNullOrWhiteSpace(id)) 12: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 13: .ToArray(); 14:  15: var container = _client.GetContainerReference("test"); 16: container.CreateIfNotExists(); 17: var blob = container.GetBlockBlobReference(name); 18: blob.PutBlockList(ids); 19: } 20: catch (Exception e) 21: { 22: error = e.ToString(); 23: } 24:  25: return new JsonResult() 26: { 27: Data = new 28: { 29: success = string.IsNullOrWhiteSpace(error), 30: error = error 31: } 32: }; 33: } Now we finished all code we need. The whole process of uploading would be like this below. Below is the full client side JavaScript code. 
1: <script type="text/javascript" src="~/Scripts/async.js"></script> 2: <script type="text/javascript"> 3: $(function () { 4: $("#upload_button_blob").click(function () { 5: // assert the browser support html5 6: if (window.File && window.Blob && window.FormData) { 7: alert("Your brwoser is awesome, let's rock!"); 8: } 9: else { 10: alert("Oh man plz update to a modern browser before try is cool stuff out."); 11: return; 12: } 13:  14: // start to upload each files in chunks 15: var files = $("#upload_files")[0].files; 16: for (var i = 0; i < files.length; i++) { 17: var file = files[i]; 18: var fileSize = file.size; 19: var fileName = file.name; 20:  21: // calculate the start and end byte index for each blocks(chunks) 22: // with the index, file name and index list for future using 23: var blockSizeInKB = $("#block_size").val(); 24: var blockSize = blockSizeInKB * 1024; 25: var blocks = []; 26: var offset = 0; 27: var index = 0; 28: var list = ""; 29: while (offset < fileSize) { 30: var start = offset; 31: var end = Math.min(offset + blockSize, fileSize); 32:  33: blocks.push({ 34: name: fileName, 35: index: index, 36: start: start, 37: end: end 38: }); 39: list += index + ","; 40:  41: offset = end; 42: index++; 43: } 44:  45: // define the function array and push all chunk upload operation into this array 46: var putBlocks = []; 47: blocks.forEach(function (block) { 48: putBlocks.push(function (callback) { 49: // load blob based on the start and end index for each chunks 50: var blob = file.slice(block.start, block.end); 51: // put the file name, index and blob into a temporary from 52: var fd = new FormData(); 53: fd.append("name", block.name); 54: fd.append("index", block.index); 55: fd.append("file", blob); 56: // post the form to backend service (asp.net mvc controller action) 57: $.ajax({ 58: url: "/Home/UploadInFormData", 59: data: fd, 60: processData: false, 61: contentType: "multipart/form-data", 62: type: "POST", 63: success: function (result) { 64: if (!result.success) { 65: alert(result.error); 66: } 67: callback(null, block.index); 68: } 69: }); 70: }); 71: }); 72:  73: // invoke the functions one by one 74: // then invoke the commit ajax call to put blocks into blob in azure storage 75: async.series(putBlocks, function (error, result) { 76: var data = { 77: name: fileName, 78: list: list 79: }; 80: $.post("/Home/Commit", data, function (result) { 81: if (!result.success) { 82: alert(result.error); 83: } 84: else { 85: alert("done!"); 86: } 87: }); 88: }); 89: } 90: }); 91: }); 92: </script> And below is the full ASP.NET MVC controller code. 
1: public class HomeController : Controller 2: { 3: private CloudStorageAccount _account; 4: private CloudBlobClient _client; 5:  6: public HomeController() 7: : base() 8: { 9: _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString")); 10: _client = _account.CreateCloudBlobClient(); 11: } 12:  13: public ActionResult Index() 14: { 15: ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application."; 16:  17: return View(); 18: } 19:  20: [HttpPost] 21: public JsonResult UploadInFormData() 22: { 23: var error = string.Empty; 24: try 25: { 26: var name = Request.Form["name"]; 27: var index = int.Parse(Request.Form["index"]); 28: var file = Request.Files[0]; 29: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 30:  31: var container = _client.GetContainerReference("test"); 32: container.CreateIfNotExists(); 33: var blob = container.GetBlockBlobReference(name); 34: blob.PutBlock(id, file.InputStream, null); 35: } 36: catch (Exception e) 37: { 38: error = e.ToString(); 39: } 40:  41: return new JsonResult() 42: { 43: Data = new 44: { 45: success = string.IsNullOrWhiteSpace(error), 46: error = error 47: } 48: }; 49: } 50:  51: [HttpPost] 52: public JsonResult Commit() 53: { 54: var error = string.Empty; 55: try 56: { 57: var name = Request.Form["name"]; 58: var list = Request.Form["list"]; 59: var ids = list 60: .Split(',') 61: .Where(id => !string.IsNullOrWhiteSpace(id)) 62: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 63: .ToArray(); 64:  65: var container = _client.GetContainerReference("test"); 66: container.CreateIfNotExists(); 67: var blob = container.GetBlockBlobReference(name); 68: blob.PutBlockList(ids); 69: } 70: catch (Exception e) 71: { 72: error = e.ToString(); 73: } 74:  75: return new JsonResult() 76: { 77: Data = new 78: { 79: success = string.IsNullOrWhiteSpace(error), 80: error = error 81: } 82: }; 83: } 84: } And if we selected a file from the browser we will see our application will upload chunks in the size we specified to the server through ajax call in background, and then commit all chunks in one blob. Then we can find the blob in our Windows Azure Blob Storage.   Optimized by Parallel Upload In previous example we just uploaded our file in chunks. This solved the problem that ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout. But it might introduce the performance problem since we uploaded chunks in sequence. In order to improve the upload performance we could modify our client side code a bit to make the upload operation invoked in parallel. The good news is that, “async.js” library provides the parallel execution function. If you remembered the code we invoke the service to upload chunks, it utilized “async.series” which means all functions will be executed in sequence. Now we will change this code to “async.parallel”. This will invoke all functions in parallel. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 
15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallel(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); In this way all chunks will be uploaded to the server side at the same time to maximize the bandwidth usage. This should work if the file was not very large and the chunk size was not very small. But for large file this might introduce another problem that too many ajax calls are sent to the server at the same time. So the best solution should be, upload the chunks in parallel with maximum concurrency limitation. The code below specified the concurrency limitation to 4, which means at the most only 4 ajax calls could be invoked at the same time. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallelLimit(putBlocks, 4, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: });   Summary In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leverage three new feature introduced in HTML 5 which are - File.slice: Read part of the file by specifying the start and end byte index. - Blob: File-like interface which contains the part of the file content. - FormData: Temporary form element that we can pass the chunk alone with some metadata to the backend service. Then we discussed the performance consideration of chunk uploading. Sequence upload cannot provide maximized upload speed, but the unlimited parallel upload might crash the browser and server if too many chunks. So we finally came up with the solution to upload chunks in parallel with the concurrency limitation. We also demonstrated how to utilize “async.js” JavaScript library to help us control the asynchronize call and the parallel limitation.   Regarding the chunk size and the parallel limitation value there is no “best” value. You need to test vary composition and find out the best one for your particular scenario. It depends on the local bandwidth, client machine cores and the server side (Windows Azure Cloud Service Virtual Machine) cores, memory and bandwidth. Below is one of my performance test result. The client machine was Windows 8 IE 10 with 4 cores. I was using Microsoft Cooperation Network. The web site was hosted on Windows Azure China North data center (in Beijing) with one small web role (1.7GB 1 core CPU, 1.75GB memory with 100Mbps bandwidth). 
    The test cases were:
    - Chunk size: 512KB, 1MB, 2MB, 4MB.
    - Upload mode: sequence, parallel (unlimited), parallel with limit (4 threads, 8 threads).
    - Chunk format: base64 string, binaries.
    - Target file: 100MB.
    - Each case was tested 3 times.

    Some thoughts from the test results, but not guidance or best practice:
    - Parallel upload gets better performance than sequential upload.
    - There is no significant performance difference between parallel with 4 threads and 8 threads.
    - Transferring chunks as binaries performs better than base64 strings.
    - In all cases, a chunk size of 1MB - 2MB gives the better performance.

    Hope this helps,
    Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Useful SVN and Git commands – Cheatsheet

    - by Madhan ayyasamy
    The following snippets will be helpful to anyone who uses version control systems like Git and SVN.

    svn checkout/co checkout-url – used to pull an SVN tree from the server.
    svn update/up – used to update the local copy with the changes made in the repository.
    svn commit/ci -m "message" filename – used to commit the changes in a file to the repository with a message.
    svn diff filename – shows the differences between your current file and what is in the repository.
    svn revert filename – overwrites the local file with the one in the repository.
    svn add filename – adds a file to the repository; you must commit your changes before it is reflected in the repository.
    svn delete filename – deletes a file from the repository; you must commit your changes before it is reflected in the repository.
    svn move source destination – moves a file from one directory to another or renames a file. It will affect your local copy immediately, and the repository after committing.

    git config – sets configuration values for your user name, email, file formats and more.
    git init – initializes a git repository – creates the initial '.git' directory in a new or existing project.
    git clone – makes a copy of a Git repository from a remote source. Also adds the original location as a remote so you can fetch from it again and push to it if you have permissions.
    git add – adds file changes in your working directory to your index.
    git rm – removes files from your index and your working directory so they will not be tracked.
    git commit – takes all of the changes written in the index, creates a new commit object pointing to it and sets the branch to point to that new commit.
    git status – shows you the status of files in the index versus the working directory.
    git branch – lists existing branches, including remote branches if '-a' is provided. Creates a new branch if a branch name is provided.
    git checkout – checks out a different branch – switches branches by updating the index, working tree, and HEAD to reflect the chosen branch.
    git merge – merges one or more branches into your current branch and automatically creates a new commit if there are no conflicts.
    git reset – resets your index and working directory to the state of your last commit.
    git tag – tags a specific commit with a simple, human readable handle that never moves.
    git pull – fetches the files from the remote repository and merges them with your local one.
    git push – pushes all the modified local objects to the remote repository and advances its branches.
    git remote – shows all the remote versions of your repository.
    git log – shows a listing of commits on a branch including the corresponding details.
    git show – shows information about a git object.
    git diff – generates patch files or statistics of differences between paths or files in your git repository, or your index or your working directory.
    gitk – graphical Tcl/Tk based interface to a local Git repository.
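    As a quick illustration of how a few of these commands chain together in practice (the repository URL, branch and file names below are invented):

        git clone https://example.com/project.git   # copy the remote repository locally
        cd project
        git checkout -b fix-typo                    # create and switch to a new branch
        # ...edit README...
        git add README
        git commit -m "Fix typo in README"
        git checkout master
        git merge fix-typo                          # merge the branch back
        git push origin master                      # publish the result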

    Read the article

  • How can I mark a group of changes/changesets in SVN, Hg, or Git

    - by sylvanaar
    I would like to mark an arbitrary group of commits/changesets with a label.

        Commit 1  *Mark 1
        Commit 2  *Mark 2
        Commit 3
        Commit 4  *Mark 1
        Commit 5  *Mark 2

    The goal is to easily locate all the changes for a specific mark, and to have that grouping persisted in the VCS directly, as opposed to some outside system like a bug tracking system. The location and ordering of the marks need to be arbitrary, and marking should work with both committed/uncommitted and pushed/unpushed changes. In SVN the best way I know is to just edit the commit notes and add some sort of special text that you can search for, e.g. "**Mark 1", or to make a fake edit and commit it and use its commit note to list all the included revisions. Is there a better solution for SVN? Are there equivalent or better solutions for Hg or Git?
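    For Git, one low-ceremony way to approximate such marks is a convention-based token in the commit message, searched with git log --grep; marks can also be attached or changed after the fact with git notes, which are stored in the repository but kept outside the commit objects. The mark names and commit text below are invented for illustration:

        # Mark a commit at commit time (the second -m adds a separate paragraph):
        git commit -m "Commit 4" -m "Mark: mark-1"

        # List every commit carrying that mark, across all branches:
        git log --all --grep="Mark: mark-1"

        # Add or change a mark later without rewriting history:
        git notes add -m "Mark: mark-2" <commit-sha>
        # Notes are printed by git log, so finding them means grepping that output:
        git log --notes | grep -B 8 "Mark: mark-2"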

    Read the article

< Previous Page | 10 11 12 13 14 15 16 17 18 19 20 21  | Next Page >