Search Results

Search found 7222 results on 289 pages for 'storage cells'.


  • First and last UITableViewCell keep changing while scrolling.

    - by W Dyson
    I have a tableView with cells containing one UITextField as a subview for each cell. My problem is that when I scroll down, the text in the first cell is duplicated in the last cell. I can't for the life of me figure out why. I have tried loading the cells from different nibs and having the textFields as ivars. The UITextFields don't seem to be the problem; I'm thinking it has something to do with the tableView reusing the cells. The textFields all have a data source that keeps track of the text within the textField, and the text is reset each time the cell is shown. Any ideas? UPDATE: Thanks guys, here's a sample:

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            NSLog(@"Section %i, Row %i", indexPath.section, indexPath.row);
            static NSString *JournalCellIdentifier = @"JournalCellIdentifier";
            UITableViewCell *cell = (UITableViewCell *)[self.tableView dequeueReusableCellWithIdentifier:JournalCellIdentifier];
            if (cell == nil) {
                cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:JournalCellIdentifier] autorelease];
                cell.selectionStyle = UITableViewCellSelectionStyleNone;
                cell.accessoryType = UITableViewCellAccessoryNone;
            }
            if (indexPath.section == 0) {
                UITextField *textField = (UITextField *)[self.authorCell viewWithTag:1];
                [cell addSubview:textField];
                self.authorTextField = textField;
                self.authorTextField.text = [self.textFieldDictionary objectForKey:@"author"];
                NSLog(@"Reading Author:%@", [self.textFieldDictionary objectForKey:@"author"]);
            } else if (indexPath.section == 1) {
                UITextField *textField = (UITextField *)[self.yearCell viewWithTag:1];
                [cell addSubview:textField];
                self.yearTextField = textField;
                self.yearTextField.text = [self.textFieldDictionary objectForKey:@"year"];
                NSLog(@"Reading Year:%@", [self.textFieldDictionary objectForKey:@"year"]);
            } else if (indexPath.section == 2) {
                UITextField *textField = (UITextField *)[self.volumeCell viewWithTag:1];
                [cell addSubview:textField];
                self.volumeTextField = textField;
                self.volumeTextField.text = [self.textFieldDictionary objectForKey:@"volume"];
                NSLog(@"Reading Volume:%@", [self.textFieldDictionary objectForKey:@"volume"]);
            }
            return cell;
        }

  • How to have an excel addin read rows from a worksheet until no more data?

    - by user169867
    I've started writing a COM add-in for Excel 2003 using C#. I'm looking for a code example showing how to read in cell data from the active worksheet. I've seen that you can write code like this to grab a range of cells:

        Excel.Range firstCell = ws.get_Range("A1", Type.Missing);
        Excel.Range lastCell = ws.get_Range("A10", Type.Missing);
        Excel.Range worksheetCells = ws.get_Range(firstCell, lastCell);

    What I could use help with is how to read the cell data when you don't know how many rows of data there are. I may be able to determine the starting row that the data will begin at, but there will be an unknown number of rows of data to read. Could someone provide me with an example of how to read rows from the worksheet until you come across a row of empty cells? Also, does anyone know how to grab the range of cells the user has selected? Any help would be greatly appreciated. This seems like a powerful dev tool, but I'm having trouble finding detailed documentation to help me learn it :)
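
    A minimal sketch of one way to do this with the Excel interop API, assuming ws is the active Worksheet, that the data sits in one column starting at a known row, and that app is the Excel.Application object; the class, method, and variable names here are placeholders, not part of the original question:

        using System.Collections.Generic;
        using Excel = Microsoft.Office.Interop.Excel;

        static class SheetReader
        {
            // Reads down one column from startRow until the first empty cell is hit.
            public static List<string> ReadColumnUntilEmpty(Excel.Worksheet ws, int startRow, int column)
            {
                var values = new List<string>();
                int row = startRow;
                while (true)
                {
                    Excel.Range cell = (Excel.Range)ws.Cells[row, column];
                    object value = cell.Value2;
                    if (value == null || value.ToString().Trim().Length == 0)
                        break;                          // a blank cell ends the data block
                    values.Add(value.ToString());
                    row++;
                }
                return values;
            }
        }

        // The range the user currently has selected:
        // Excel.Range selection = (Excel.Range)app.Selection;

    ws.UsedRange is another way to bound the scan if the sheet has no stray cells below the data.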

  • Connecting to MySQL from Excel: ODBC driver does not support the requested properties

    - by every_answer_gets_a_point
    I am trying to add data to MySQL from Excel. I am getting the above error on this line: rs.Open strSQL, oConn, adOpenDynamic, adLockOptimistic. Here is my code:

        Dim oConn As ADODB.Connection

        Private Sub ConnectDB()
            Set oConn = New ADODB.Connection
            oConn.Open "DRIVER={MySQL ODBC 5.1 Driver};" & _
                       "SERVER=localhost;" & _
                       "DATABASE=employees;" & _
                       "USER=root;" & _
                       "PASSWORD=some_pass;" & _
                       "Option=3"
        End Sub

        Function esc(txt As String)
            esc = Trim(Replace(txt, "'", "\'"))
        End Function

        Private Sub InsertData()
            Dim rs As ADODB.Recordset
            Set rs = New ADODB.Recordset
            ConnectDB
            With wsBooks
                For rowCursor = 2 To 11
                    strSQL = "INSERT INTO tutorial (author, title, price) " & _
                             "VALUES ('" & esc(.Cells(rowCursor, 1)) & "', " & _
                             "'" & esc(.Cells(rowCursor, 2)) & "', " & _
                             esc(.Cells(rowCursor, 3)) & ")"
                    rs.Open strSQL, oConn, adOpenDynamic, adLockOptimistic
                Next
            End With
        End Sub

    What's wrong with rs.Open strSQL, oConn, adOpenDynamic, adLockOptimistic? Why am I getting the ODBC error?

  • NSString , EXC_BAD_ACCESS and stringByAppendingString

    - by amnon
    Hi, I wrote the following code:

        NSString *table = [[NSString alloc] initWithString:@"<table>"];
        for (NSDictionary *row in rows) {
            table = [table stringByAppendingString:@"<tr>"];
            NSArray *cells = [row objectForKey:@"cells"];
            for (NSString *cell in cells) {
                table = [table stringByAppendingString:@"<td>"];
                table = [table stringByAppendingString:cell];
                table = [table stringByAppendingString:@"</td>"];
            }
            table = [table stringByAppendingString:@"</tr>"];
            index++;
        }
        table = [table stringByAppendingString:@"</table>"];
        return [table autorelease];

    The rows are a JSON response parsed by jsonlib. This string is returned to a view controller that loads it into a webview, but I keep getting EXC_BAD_ACCESS for this code on completion of the method that calls it. I don't even have to load it into the webview; just calling this method and not using the NSString causes the error on completion of the calling method. Any help will be appreciated, thanks.

  • Differentiate Between UITableView Editing States?

    - by Josh Kahane
    I have been looking at trying to differentiate between editing states in my UITableView. I need to call a method only when in editing mode after tapping the edit button, so when you get your cell slide in and you see the little circular delete icons, but NOT when the user swipes to delete. Is there any way I can differentiate between the two? Thanks. EDIT: Solution, thanks to Rodrigo. Both each cell and the entire table view have an 'editing' BOOL value, so I loop through all the cells: if more than one of them is editing, we know the whole table is (the user tapped the edit button); if only one is editing, we know the user has swiped a cell, editing that individual one. This lets me deal with each editing state individually!

        - (void)setEditing:(BOOL)editing animated:(BOOL)animated {
            [super setEditing:editing animated:animated];
            int i = 0;
            // When editing, loop through cells and hide the status image so it doesn't block
            // the delete controls. Fade back in when done editing.
            for (customGuestCell *cell in self.tableView.visibleCells) {
                if (cell.isEditing) {
                    i += 1;
                }
            }
            if (i > 1) {
                for (customGuestCell *cell in self.tableView.visibleCells) {
                    if (editing) {
                        // loop through the visible cells and animate their imageViews
                        [UIView beginAnimations:nil context:nil];
                        [UIView setAnimationDuration:0.4];
                        cell.statusImg.alpha = 0;
                        [UIView commitAnimations];
                    }
                }
            } else if (!editing) {
                for (customGuestCell *cell in self.tableView.visibleCells) {
                    [UIView beginAnimations:nil context:nil];
                    [UIView setAnimationDuration:0.4];
                    cell.statusImg.alpha = 1.0;
                    [UIView commitAnimations];
                }
            }
        }

  • Is there a way to update the height of a single UITableViewCell, without recalculating the height for every cell?

    - by Chris Vasselli
    I have a UITableView with a few different sections. One section contains cells that will resize as a user types text into a UITextView. Another section contains cells that render HTML content, for which calculating the height is relatively expensive. Right now when the user types into the UITextView, in order to get the table view to update the height of the cell, I call [self.tableView beginUpdates]; [self.tableView endUpdates]; However, this causes the table to recalculate the height of every cell in the table, when I really only need to update the single cell that was typed into. Not only that, but instead of recalculating the estimated height using tableView:estimatedHeightForRowAtIndexPath:, it calls tableView:heightForRowAtIndexPath: for every cell, even those not being displayed. Is there any way to ask the table view to update just the height of a single cell, without doing all of this unnecessary work? UPDATE: I'm still looking for a solution to this. As suggested, I've tried using reloadRowsAtIndexPaths:, but it doesn't look like this will work. Calling reloadRowsAtIndexPaths: with even a single row will still cause heightForRowAtIndexPath: to be called for every row, even though cellForRowAtIndexPath: will only be called for the row you requested. In fact, it looks like any time a row is inserted, deleted, or reloaded, heightForRowAtIndexPath: is called for every row in the table. I've also tried putting code in willDisplayCell:forRowAtIndexPath: to calculate the height just before a cell is going to appear. In order for this to work, I would need to force the table view to re-request the height for the row after I do the calculation. Unfortunately, calling [self.tableView beginUpdates]; [self.tableView endUpdates]; from willDisplayCell:forRowAtIndexPath: causes an index out of bounds exception deep in UITableView's internal code. I guess they don't expect us to do this. I can't help but feel like it's a bug in the SDK that in response to [self.tableView endUpdates] it doesn't call estimatedHeightForRowAtIndexPath: for cells that aren't visible, but I'm still trying to find some kind of workaround. Any help is appreciated.

  • External USB drive is failing

    - by dma_k
    I have an external USB 2.0 drive WD My Book Mirror Edition, running in RAID 1 (mirroring) mode. A while ago the hard drive started to fail: it stops responding (directories are not listed returning an error after a big timeout). Sometimes it works for weeks before a failure, sometimes – few hours. Small write operations (like removing few files or editing a small file) do not harm, but when copying large files to the drive over the network, or creating the archive locally, the kernel dumps. Also interesting to note that once kernel has failed, Linux does not want to reboot normally (reboot hangs); when Linux box is shutdown with power button, WD drive does not go to sleep mode (as it usually does): leds continue to run, pressing and holding the "shutdown" button on drive's back panel does not do anything; only unplugging the power cord helps. Here goes the boot log: Aug 16 00:32:21 kernel: [ 1.514106] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver Aug 16 00:32:21 kernel: [ 1.657738] ehci_hcd 0000:00:1d.7: PCI INT A -> GSI 23 (level, low) -> IRQ 23 Aug 16 00:32:21 kernel: [ 1.673747] ehci_hcd 0000:00:1d.7: setting latency timer to 64 Aug 16 00:32:21 kernel: [ 1.673751] ehci_hcd 0000:00:1d.7: EHCI Host Controller Aug 16 00:32:21 kernel: [ 1.725224] ehci_hcd 0000:00:1d.7: new USB bus registered, assigned bus number 1 Aug 16 00:32:21 kernel: [ 1.741647] ehci_hcd 0000:00:1d.7: using broken periodic workaround Aug 16 00:32:21 kernel: [ 1.761790] ehci_hcd 0000:00:1d.7: cache line size of 32 is not supported Aug 16 00:32:21 kernel: [ 1.761873] ehci_hcd 0000:00:1d.7: irq 23, io mem 0xfdfff000 Aug 16 00:32:21 kernel: [ 1.796043] ehci_hcd 0000:00:1d.7: USB 2.0 started, EHCI 1.00 Aug 16 00:32:21 kernel: [ 1.879069] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002 Aug 16 00:32:21 kernel: [ 1.895446] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1 Aug 16 00:32:21 kernel: [ 1.911796] usb usb1: Product: EHCI Host Controller Aug 16 00:32:21 kernel: [ 1.928015] usb usb1: Manufacturer: Linux 2.6.32-5-686 ehci_hcd Aug 16 00:32:21 kernel: [ 1.944331] usb usb1: SerialNumber: 0000:00:1d.7 Aug 16 00:32:21 kernel: [ 1.961285] usb usb1: configuration #1 chosen from 1 choice Aug 16 00:32:21 kernel: [ 1.994412] hub 1-0:1.0: USB hub found Aug 16 00:32:21 kernel: [ 2.010864] hub 1-0:1.0: 8 ports detected Aug 16 00:32:21 kernel: [ 2.085939] uhci_hcd: USB Universal Host Controller Interface driver Aug 16 00:32:21 kernel: [ 2.191945] uhci_hcd 0000:00:1d.0: PCI INT A -> GSI 23 (level, low) -> IRQ 23 Aug 16 00:32:21 kernel: [ 2.226029] uhci_hcd 0000:00:1d.0: setting latency timer to 64 Aug 16 00:32:21 kernel: [ 2.226034] uhci_hcd 0000:00:1d.0: UHCI Host Controller Aug 16 00:32:21 kernel: [ 2.243237] uhci_hcd 0000:00:1d.0: new USB bus registered, assigned bus number 2 Aug 16 00:32:21 kernel: [ 2.260390] uhci_hcd 0000:00:1d.0: irq 23, io base 0x0000fe00 Aug 16 00:32:21 kernel: [ 2.277517] usb usb2: New USB device found, idVendor=1d6b, idProduct=0001 Aug 16 00:32:21 kernel: [ 2.294815] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1 Aug 16 00:32:21 kernel: [ 2.312173] usb usb2: Product: UHCI Host Controller Aug 16 00:32:21 kernel: [ 2.329534] usb usb2: Manufacturer: Linux 2.6.32-5-686 uhci_hcd Aug 16 00:32:21 kernel: [ 2.346828] usb usb2: SerialNumber: 0000:00:1d.0 Aug 16 00:32:21 kernel: [ 2.412989] usb usb2: configuration #1 chosen from 1 choice Aug 16 00:32:21 kernel: [ 2.430651] usb 1-2: new high speed USB device using ehci_hcd and address 2 Aug 16 00:32:21 
kernel: [ 2.449046] hub 2-0:1.0: USB hub found Aug 16 00:32:21 kernel: [ 2.466514] hub 2-0:1.0: 2 ports detected Aug 16 00:32:21 kernel: [ 2.484639] uhci_hcd 0000:00:1d.1: PCI INT B -> GSI 19 (level, low) -> IRQ 19 Aug 16 00:32:21 kernel: [ 2.537750] uhci_hcd 0000:00:1d.1: setting latency timer to 64 Aug 16 00:32:21 kernel: [ 2.537756] uhci_hcd 0000:00:1d.1: UHCI Host Controller Aug 16 00:32:21 kernel: [ 2.555085] uhci_hcd 0000:00:1d.1: new USB bus registered, assigned bus number 3 Aug 16 00:32:21 kernel: [ 2.572231] uhci_hcd 0000:00:1d.1: irq 19, io base 0x0000fd00 Aug 16 00:32:21 kernel: [ 2.589593] usb usb3: New USB device found, idVendor=1d6b, idProduct=0001 Aug 16 00:32:21 kernel: [ 2.606869] usb usb3: New USB device strings: Mfr=3, Product=2, SerialNumber=1 Aug 16 00:32:21 kernel: [ 2.624134] usb usb3: Product: UHCI Host Controller Aug 16 00:32:21 kernel: [ 2.641329] usb usb3: Manufacturer: Linux 2.6.32-5-686 uhci_hcd Aug 16 00:32:21 kernel: [ 2.658505] usb usb3: SerialNumber: 0000:00:1d.1 Aug 16 00:32:21 kernel: [ 2.675843] usb usb3: configuration #1 chosen from 1 choice Aug 16 00:32:21 kernel: [ 2.692864] hub 3-0:1.0: USB hub found Aug 16 00:32:21 kernel: [ 2.709651] hub 3-0:1.0: 2 ports detected Aug 16 00:32:21 kernel: [ 2.727378] uhci_hcd 0000:00:1d.2: PCI INT C -> GSI 18 (level, low) -> IRQ 18 Aug 16 00:32:21 kernel: [ 2.768252] uhci_hcd 0000:00:1d.2: setting latency timer to 64 Aug 16 00:32:21 kernel: [ 2.768258] uhci_hcd 0000:00:1d.2: UHCI Host Controller Aug 16 00:32:21 kernel: [ 2.806679] uhci_hcd 0000:00:1d.2: new USB bus registered, assigned bus number 4 Aug 16 00:32:21 kernel: [ 2.824117] uhci_hcd 0000:00:1d.2: irq 18, io base 0x0000fc00 Aug 16 00:32:21 kernel: [ 2.841405] usb 1-2: New USB device found, idVendor=1058, idProduct=1104 Aug 16 00:32:21 kernel: [ 2.858448] usb 1-2: New USB device strings: Mfr=1, Product=2, SerialNumber=3 Aug 16 00:32:21 kernel: [ 2.875347] usb 1-2: Product: My Book Aug 16 00:32:21 kernel: [ 2.892113] usb 1-2: Manufacturer: Western Digital Aug 16 00:32:21 kernel: [ 2.908915] usb 1-2: SerialNumber: 575532553130303530353538 Aug 16 00:32:21 kernel: [ 2.943242] usb usb4: New USB device found, idVendor=1d6b, idProduct=0001 Aug 16 00:32:21 kernel: [ 2.960405] usb usb4: New USB device strings: Mfr=3, Product=2, SerialNumber=1 Aug 16 00:32:21 kernel: [ 2.977615] usb usb4: Product: UHCI Host Controller Aug 16 00:32:21 kernel: [ 2.994687] usb usb4: Manufacturer: Linux 2.6.32-5-686 uhci_hcd Aug 16 00:32:21 kernel: [ 3.011711] usb usb4: SerialNumber: 0000:00:1d.2 Aug 16 00:32:21 kernel: [ 3.029589] usb usb4: configuration #1 chosen from 1 choice Aug 16 00:32:21 kernel: [ 3.082027] sd 2:0:0:0: [sda] Attached SCSI disk Aug 16 00:32:21 kernel: [ 3.103953] usb 1-2: configuration #1 chosen from 1 choice Aug 16 00:32:21 kernel: [ 3.122625] hub 4-0:1.0: USB hub found Aug 16 00:32:21 kernel: [ 3.140484] hub 4-0:1.0: 2 ports detected Aug 16 00:32:21 kernel: [ 3.161680] uhci_hcd 0000:00:1d.3: PCI INT D -> GSI 16 (level, low) -> IRQ 16 Aug 16 00:32:21 kernel: [ 3.181257] uhci_hcd 0000:00:1d.3: setting latency timer to 64 Aug 16 00:32:21 kernel: [ 3.181263] uhci_hcd 0000:00:1d.3: UHCI Host Controller Aug 16 00:32:21 kernel: [ 3.198614] uhci_hcd 0000:00:1d.3: new USB bus registered, assigned bus number 5 Aug 16 00:32:21 kernel: [ 3.216012] uhci_hcd 0000:00:1d.3: irq 16, io base 0x0000fb00 Aug 16 00:32:21 kernel: [ 3.249877] Uniform CD-ROM driver Revision: 3.20 Aug 16 00:32:21 kernel: [ 3.267765] usb usb5: New USB device found, idVendor=1d6b, idProduct=0001 Aug 16 
00:32:21 kernel: [ 3.284947] usb usb5: New USB device strings: Mfr=3, Product=2, SerialNumber=1 Aug 16 00:32:21 kernel: [ 3.302023] usb usb5: Product: UHCI Host Controller Aug 16 00:32:21 kernel: [ 3.319215] usb usb5: Manufacturer: Linux 2.6.32-5-686 uhci_hcd Aug 16 00:32:21 kernel: [ 3.336298] usb usb5: SerialNumber: 0000:00:1d.3 Aug 16 00:32:21 kernel: [ 3.368377] Initializing USB Mass Storage driver... Aug 16 00:32:21 kernel: [ 3.390652] usbcore: registered new interface driver hiddev Aug 16 00:32:21 kernel: [ 3.408109] scsi4 : SCSI emulation for USB Mass Storage devices Aug 16 00:32:21 kernel: [ 3.425281] sr 0:0:1:0: Attached scsi CD-ROM sr0 Aug 16 00:32:21 kernel: [ 3.438978] sr 0:0:1:0: Attached scsi generic sg0 type 5 Aug 16 00:32:21 kernel: [ 3.456328] usbcore: registered new interface driver usb-storage Aug 16 00:32:21 kernel: [ 3.474564] usb-storage: device found at 2 Aug 16 00:32:21 kernel: [ 3.474567] usb-storage: waiting for device to settle before scanning Aug 16 00:32:21 kernel: [ 3.475320] sd 2:0:0:0: Attached scsi generic sg1 type 0 Aug 16 00:32:21 kernel: [ 3.492587] USB Mass Storage support registered. Aug 16 00:32:21 kernel: [ 3.510930] usb usb5: configuration #1 chosen from 1 choice Aug 16 00:32:21 kernel: [ 3.531076] hub 5-0:1.0: USB hub found Aug 16 00:32:21 kernel: [ 3.548399] hub 5-0:1.0: 2 ports detected Aug 16 00:32:21 kernel: [ 3.591743] input: Western Digital My Book as /devices/pci0000:00/0000:00:1d.7/usb1/1-2/1-2:1.1/input/input2 Aug 16 00:32:21 kernel: [ 3.609515] generic-usb 0003:1058:1104.0001: input,hidraw0: USB HID v1.11 Device [Western Digital My Book] on usb-0000:00:1d.7-2/input1 Aug 16 00:32:21 kernel: [ 3.627466] usbcore: registered new interface driver usbhid Aug 16 00:32:21 kernel: [ 8.581664] usb-storage: device scan complete Aug 16 00:32:21 kernel: [ 8.624270] scsi 4:0:0:0: Direct-Access WD My Book 1008 PQ: 0 ANSI: 4 Aug 16 00:32:21 kernel: [ 8.655135] scsi 4:0:0:1: Enclosure WD My Book Device 1008 PQ: 0 ANSI: 4 Aug 16 00:32:21 kernel: [ 8.675393] sd 4:0:0:0: Attached scsi generic sg2 type 0 Aug 16 00:32:21 kernel: [ 8.698669] scsi 4:0:0:1: Attached scsi generic sg3 type 13 Aug 16 00:32:21 kernel: [ 8.723370] sd 4:0:0:0: [sdb] 1953513472 512-byte logical blocks: (1.00 TB/931 GiB) Aug 16 00:32:21 kernel: [ 8.750477] sd 4:0:0:0: [sdb] Write Protect is off Aug 16 00:32:21 kernel: [ 8.769411] sd 4:0:0:0: [sdb] Mode Sense: 10 00 00 00 Aug 16 00:32:21 kernel: [ 8.769414] sd 4:0:0:0: [sdb] Assuming drive cache: write through Aug 16 00:32:21 kernel: [ 8.822971] sd 4:0:0:0: [sdb] Assuming drive cache: write through Aug 16 00:32:21 kernel: [ 8.841978] sdb: sdb1 Aug 16 00:32:21 kernel: [ 8.905580] sd 4:0:0:0: [sdb] Assuming drive cache: write through Aug 16 00:32:21 kernel: [ 8.924173] sd 4:0:0:0: [sdb] Attached SCSI disk Aug 16 00:32:21 kernel: [ 11.600492] XFS mounting filesystem sdb1 Aug 16 00:32:21 kernel: [ 12.222948] Ending clean XFS mount for filesystem: sdb1 After a while the following appears in a log: Aug 16 09:30:56 kernel: [32359.112029] usb 1-2: reset high speed USB device using ehci_hcd and address 2 Aug 16 09:31:59 kernel: [32422.112035] usb 1-2: reset high speed USB device using ehci_hcd and address 2 Aug 16 09:33:00 kernel: [32483.112029] usb 1-2: reset high speed USB device using ehci_hcd and address 2 And then it is followed by few kernel dumps, which I think, are not good: Aug 16 09:33:40 kernel: [32520.428027] INFO: task xfssyncd:1002 blocked for more than 120 seconds. 
Aug 16 09:33:40 kernel: [32520.462689] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. Aug 16 09:33:40 kernel: [32520.497422] xfssyncd D c3d84a60 0 1002 2 0x00000000 Aug 16 09:33:40 kernel: [32520.532117] f6c9aa80 00000046 c1132742 c3d84a60 00000286 c1418100 c1418100 00000000 Aug 16 09:33:40 kernel: [32520.566867] f6c9ac3c c2808100 00000000 f653b18b 00001d76 00000001 f6c9aa80 c3c3f0e0 Aug 16 09:33:40 kernel: [32520.601343] 08e59242 f6c9ac3c 2e41392b 00000000 08e59242 00000000 c3f7fb48 0067385a Aug 16 09:33:40 kernel: [32520.635533] Call Trace: Aug 16 09:33:40 kernel: [32520.668991] [<c1132742>] ? cfq_set_request+0x0/0x290 Aug 16 09:33:40 kernel: [32520.702804] [<c126b532>] ? io_schedule+0x5f/0x98 Aug 16 09:33:40 kernel: [32520.736555] [<c1128be0>] ? get_request_wait+0xcb/0x146 Aug 16 09:33:40 kernel: [32520.770360] [<c10437ba>] ? autoremove_wake_function+0x0/0x2d Aug 16 09:33:40 kernel: [32520.804110] [<c112907c>] ? __make_request+0x2cc/0x3d9 Aug 16 09:33:40 kernel: [32520.837713] [<c1128230>] ? blk_peek_request+0x135/0x143 Aug 16 09:33:40 kernel: [32520.871265] [<f8582987>] ? scsi_dispatch_cmd+0x185/0x1e5 [scsi_mod] Aug 16 09:33:40 kernel: [32520.904407] [<c1127cf1>] ? generic_make_request+0x266/0x2b4 Aug 16 09:33:40 kernel: [32520.937007] [<c10cf821>] ? bvec_alloc_bs+0x95/0xaf Aug 16 09:33:40 kernel: [32520.969033] [<c1127dfb>] ? submit_bio+0xbc/0xd6 Aug 16 09:33:40 kernel: [32521.000485] [<c10cffd1>] ? bio_add_page+0x28/0x2e Aug 16 09:33:40 kernel: [32521.031403] [<f8918d38>] ? _xfs_buf_ioapply+0x206/0x22b [xfs] Aug 16 09:33:40 kernel: [32521.061888] [<f89197bd>] ? xfs_buf_iorequest+0x38/0x60 [xfs] Aug 16 09:33:40 kernel: [32521.091845] [<f8907230>] ? xlog_bdstrat_cb+0x16/0x3d [xfs] Aug 16 09:33:40 kernel: [32521.121222] [<f8905781>] ? XFS_bwrite+0x32/0x64 [xfs] Aug 16 09:33:40 kernel: [32521.150007] [<f89059be>] ? xlog_sync+0x20b/0x311 [xfs] Aug 16 09:33:40 kernel: [32521.178214] [<f89112fc>] ? xfs_trans_ail_tail+0x12/0x27 [xfs] Aug 16 09:33:40 kernel: [32521.205914] [<f8906261>] ? xlog_state_sync_all+0xa2/0x141 [xfs] Aug 16 09:33:40 kernel: [32521.233074] [<f8906611>] ? _xfs_log_force+0x51/0x68 [xfs] Aug 16 09:33:40 kernel: [32521.259664] [<c103abaf>] ? process_timeout+0x0/0x5 Aug 16 09:33:40 kernel: [32521.285662] [<f8906636>] ? xfs_log_force+0xe/0x27 [xfs] Aug 16 09:33:40 kernel: [32521.311171] [<f89202df>] ? xfs_sync_worker+0x17/0x5c [xfs] Aug 16 09:33:40 kernel: [32521.336117] [<f891fbb7>] ? xfssyncd+0x134/0x17d [xfs] Aug 16 09:33:40 kernel: [32521.360498] [<f891fa83>] ? xfssyncd+0x0/0x17d [xfs] Aug 16 09:33:40 kernel: [32521.384211] [<c1043588>] ? kthread+0x61/0x66 Aug 16 09:33:40 kernel: [32521.407890] [<c1043527>] ? kthread+0x0/0x66 Aug 16 09:33:40 kernel: [32521.430876] [<c1003d47>] ? kernel_thread_helper+0x7/0x10 Aug 16 09:33:40 kernel: [32521.453394] INFO: task flush-8:16:12945 blocked for more than 120 seconds. Aug 16 09:33:40 kernel: [32521.476116] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. Aug 16 09:33:40 kernel: [32521.498579] flush-8:16 D 00000000 0 12945 2 0x00000000 Aug 16 09:33:40 kernel: [32521.520649] f4e4d540 00000046 e412e940 00000000 00000002 c1418100 c1418100 c14136ac Aug 16 09:33:40 kernel: [32521.542426] f4e4d6fc c2808100 00000000 00000000 000008b4 00000001 f4e4d540 c3c3f0e0 Aug 16 09:33:40 kernel: [32521.563745] 02e905a8 f4e4d6fc 007a5399 00000000 02e905a8 00000000 f4e2db48 00670b98 Aug 16 09:33:40 kernel: [32521.585077] Call Trace: Aug 16 09:33:40 kernel: [32521.605790] [<c126b532>] ? 
io_schedule+0x5f/0x98 Aug 16 09:33:40 kernel: [32521.626184] [<c1128be0>] ? get_request_wait+0xcb/0x146 Aug 16 09:33:40 kernel: [32521.646133] [<c10437ba>] ? autoremove_wake_function+0x0/0x2d Aug 16 09:33:40 kernel: [32521.665659] [<c112907c>] ? __make_request+0x2cc/0x3d9 Aug 16 09:33:40 kernel: [32521.684716] [<f891796e>] ? xfs_convert_page+0x30a/0x331 [xfs] Aug 16 09:33:40 kernel: [32521.703366] [<c1127cf1>] ? generic_make_request+0x266/0x2b4 Aug 16 09:33:40 kernel: [32521.721644] [<c10cf821>] ? bvec_alloc_bs+0x95/0xaf Aug 16 09:33:40 kernel: [32521.739465] [<c1127dfb>] ? submit_bio+0xbc/0xd6 Aug 16 09:33:40 kernel: [32521.756896] [<c10cfa45>] ? bio_alloc_bioset+0x7b/0xba Aug 16 09:33:40 kernel: [32521.774046] [<f8917af0>] ? xfs_submit_ioend_bio+0x3b/0x44 [xfs] Aug 16 09:33:40 kernel: [32521.790694] [<f8917ba3>] ? xfs_submit_ioend+0xaa/0xc4 [xfs] Aug 16 09:33:40 kernel: [32521.806736] [<f891817d>] ? xfs_page_state_convert+0x5c0/0x61c [xfs] Aug 16 09:33:40 kernel: [32521.822859] [<c113705b>] ? __lookup_tag+0x8e/0xee Aug 16 09:33:40 kernel: [32521.838958] [<f891840d>] ? xfs_vm_writepage+0x91/0xc4 [xfs] Aug 16 09:33:40 kernel: [32521.855039] [<c108bbcc>] ? __writepage+0x8/0x22 Aug 16 09:33:40 kernel: [32521.871067] [<c108c17b>] ? write_cache_pages+0x1af/0x29f Aug 16 09:33:40 kernel: [32521.886616] [<c108bbc4>] ? __writepage+0x0/0x22 Aug 16 09:33:40 kernel: [32521.901593] [<c108c285>] ? generic_writepages+0x1a/0x21 Aug 16 09:33:40 kernel: [32521.916455] [<f8918338>] ? xfs_vm_writepages+0x0/0x38 [xfs] Aug 16 09:33:40 kernel: [32521.931484] [<c108c2a5>] ? do_writepages+0x19/0x25 Aug 16 09:33:40 kernel: [32521.946648] [<c10c80d9>] ? writeback_single_inode+0xc7/0x273 Aug 16 09:33:40 kernel: [32521.961675] [<c10c8c44>] ? writeback_inodes_wb+0x3dd/0x49c Aug 16 09:33:40 kernel: [32521.976831] [<c10c8e18>] ? wb_writeback+0x115/0x178 Aug 16 09:33:40 kernel: [32521.991778] [<c10c901f>] ? wb_do_writeback+0x121/0x131 Aug 16 09:33:40 kernel: [32522.006538] [<c103abaf>] ? process_timeout+0x0/0x5 Aug 16 09:33:40 kernel: [32522.021091] [<c10c9050>] ? bdi_writeback_task+0x21/0x89 Aug 16 09:33:40 kernel: [32522.035493] [<c10979e5>] ? bdi_start_fn+0x59/0xa4 Aug 16 09:33:40 kernel: [32522.049765] [<c109798c>] ? bdi_start_fn+0x0/0xa4 Aug 16 09:33:40 kernel: [32522.063792] [<c1043588>] ? kthread+0x61/0x66 Aug 16 09:33:40 kernel: [32522.077612] [<c1043527>] ? kthread+0x0/0x66 Aug 16 09:33:40 kernel: [32522.091260] [<c1003d47>] ? kernel_thread_helper+0x7/0x10 Aug 16 09:33:40 kernel: [32522.104966] INFO: task smartctl:13098 blocked for more than 120 seconds. Aug 16 09:33:40 kernel: [32522.118883] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. Aug 16 09:33:40 kernel: [32522.133012] smartctl D 00000020 0 13098 13097 0x00000000 Aug 16 09:33:40 kernel: [32522.147221] e50b9540 00000086 c11d28a8 00000020 00000770 c1418100 c1418100 c14136ac Aug 16 09:33:40 kernel: [32522.161720] e50b96fc c2808100 00000000 e53e8800 00000000 00000020 c3cec000 c13886c0 Aug 16 09:33:40 kernel: [32522.176217] f99dab68 e50b96fc 007a4f1e 00000001 c4082f24 c4082ed8 00000001 c3c3f0e0 Aug 16 09:33:40 kernel: [32522.190737] Call Trace: Aug 16 09:33:40 kernel: [32522.205038] [<c11d28a8>] ? __netdev_alloc_skb+0x14/0x2d Aug 16 09:33:40 kernel: [32522.219605] [<c126b799>] ? schedule_timeout+0x20/0xb0 Aug 16 09:33:40 kernel: [32522.234144] [<c112820d>] ? blk_peek_request+0x112/0x143 Aug 16 09:33:40 kernel: [32522.248649] [<f85873b6>] ? scsi_request_fn+0x3c1/0x47a [scsi_mod] Aug 16 09:33:40 kernel: [32522.263233] [<c103aba8>] ? 
del_timer+0x55/0x5c Aug 16 09:33:40 kernel: [32522.277773] [<c126b6a2>] ? wait_for_common+0xa4/0x100 Aug 16 09:33:40 kernel: [32522.292342] [<c102cd8d>] ? default_wake_function+0x0/0x8 Aug 16 09:33:40 kernel: [32522.306958] [<c112b3d1>] ? blk_execute_rq+0x8b/0xb2 Aug 16 09:33:40 kernel: [32522.321569] [<c112b2ac>] ? blk_end_sync_rq+0x0/0x23 Aug 16 09:33:40 kernel: [32522.336070] [<c112b58b>] ? blk_recount_segments+0x13/0x20 Aug 16 09:33:40 kernel: [32522.350583] [<c1127307>] ? blk_rq_bio_prep+0x44/0x74 Aug 16 09:33:40 kernel: [32522.365059] [<c112b0b2>] ? blk_rq_map_kern+0xc5/0xee Aug 16 09:33:40 kernel: [32522.379439] [<c112e2a5>] ? sg_scsi_ioctl+0x221/0x2aa Aug 16 09:33:40 kernel: [32522.393801] [<c112e672>] ? scsi_cmd_ioctl+0x344/0x39a Aug 16 09:33:40 kernel: [32522.408140] [<c1024c87>] ? update_curr+0x106/0x1b3 Aug 16 09:33:40 kernel: [32522.422566] [<c1024c87>] ? update_curr+0x106/0x1b3 Aug 16 09:33:40 kernel: [32522.436832] [<f87676aa>] ? sd_ioctl+0x90/0xb5 [sd_mod] Aug 16 09:33:40 kernel: [32522.451228] [<c112c35f>] ? __blkdev_driver_ioctl+0x53/0x63 Aug 16 09:33:40 kernel: [32522.465689] [<c112cbbf>] ? blkdev_ioctl+0x850/0x891 Aug 16 09:33:40 kernel: [32522.479982] [<c1020474>] ? __wake_up_common+0x34/0x59 Aug 16 09:33:40 kernel: [32522.494138] [<c10244cd>] ? complete+0x28/0x36 Aug 16 09:33:40 kernel: [32522.507986] [<c1086c64>] ? find_get_page+0x1f/0x81 Aug 16 09:33:40 kernel: [32522.521671] [<c10abed5>] ? add_partial+0xe/0x40 Aug 16 09:33:40 kernel: [32522.535285] [<c1086e68>] ? lock_page+0x8/0x1d Aug 16 09:33:40 kernel: [32522.548797] [<c1087432>] ? filemap_fault+0xb5/0x2e6 Aug 16 09:33:40 kernel: [32522.562141] [<c109941c>] ? __do_fault+0x381/0x3b1 Aug 16 09:33:40 kernel: [32522.575441] [<c10d0c30>] ? block_ioctl+0x27/0x2c Aug 16 09:33:40 kernel: [32522.588708] [<c10d0c09>] ? block_ioctl+0x0/0x2c Aug 16 09:33:40 kernel: [32522.601858] [<c10bcd78>] ? vfs_ioctl+0x1c/0x5f Aug 16 09:33:40 kernel: [32522.614917] [<c10bd30c>] ? do_vfs_ioctl+0x4aa/0x4e5 Aug 16 09:33:40 kernel: [32522.627961] [<c10350db>] ? __do_softirq+0x115/0x151 Aug 16 09:33:40 kernel: [32522.640901] [<c126e270>] ? do_page_fault+0x2f1/0x307 Aug 16 09:33:40 kernel: [32522.653803] [<c10bd388>] ? sys_ioctl+0x41/0x58 Aug 16 09:33:40 kernel: [32522.666674] [<c10030fb>] ? sysenter_do_call+0x12/0x28 Then again few messages reset high speed USB device using ehci_hcd and address 2. I have browsed and read similar error reports here and there and I tried: I have upgraded the kernel from v2.6.26-2 to 2.6.32-5, which has not solved the problem. They say, this might a cable problem. I have tried to replace the USB-to-miniUSB cable (that connects external drive with computer) with another one. No changes. Somebody suggests to try another USB port. I have only 4 external USB ports, tried another one with no success. They say to try uhci_hcd. I have unmounted the device, unloaded ehci_hcd and mounted again. The difference was that now in log I get reset full speed USB device using uhci_hcd and address 2 and similar kernel dumps after a while. They say to echo 128 > /sys/block/sdb/device/max_sectors. I tried it with ehci_hcd with no success (note: I have issued this command after the drive was mounted but before using it actively). I have lauched smartmond and checking periodically the output of smartctl: drive temperature is OK, number of bad sectors and uncorrectable errors is 0. Nothing suspicious is reported by S.M.A.R.T. 
except maybe the following: Aug 16 12:40:12 kernel: [43715.314566] program smartctl is using a deprecated SCSI ioctl, please convert it to SG_IO Aug 16 12:40:13 kernel: [43715.705622] program smartctl is using a deprecated SCSI ioctl, please convert it to SG_IO Of course, I have not tried all combinations of the above. But unfortunately, I have run out of ideas. If anybody can advise something specific about the problem, you are very welcome.

  • Releasing an OLE IStorage file handle in C#

    - by Bernard Darnton
    I'm trying to embed a PDF file into a Word document using the OLE technique described here: http://blogs.msdn.com/brian_jones/archive/2009/07/21/embedding-any-file-type-like-pdf-in-an-open-xml-file.aspx I've tried to implement to C++ code provided in C# so that the whole project's in one place and am almost there except for one roadblock. When I try to feed the generated OLE object binary data into the Word document I get an IOException. IOException: The process cannot access the file 'C:\Wherever\Whatever.pdf.bin' because it is being used by another process. There is a file handle open the .bin file and I don't know how to get rid of it. I don't know a huge amount about COM - I'm winging it here - and I don't know where the file handle is or how to release it. Here's what my C#-ised code looks like. What am I missing? public void ExportOleFile(string oleOutputFileName, string emfOutputFileName) { OLE32.IStorage storage; var result = OLE32.StgCreateStorageEx( oleOutputFileName, OLE32.STGM.STGM_READWRITE | OLE32.STGM.STGM_SHARE_EXCLUSIVE | OLE32.STGM.STGM_CREATE | OLE32.STGM.STGM_TRANSACTED, OLE32.STGFMT.STGFMT_DOCFILE, 0, IntPtr.Zero, IntPtr.Zero, ref OLE32.IID_IStorage, out storage ); var CLSID_NULL = Guid.Empty; OLE32.IOleObject pOle; result = OLE32.OleCreateFromFile( ref CLSID_NULL, _inputFileName, ref OLE32.IID_IOleObject, OLE32.OLERENDER.OLERENDER_NONE, IntPtr.Zero, null, storage, out pOle ); result = OLE32.OleRun(pOle); IntPtr unknownFromOle = Marshal.GetIUnknownForObject(pOle); IntPtr unknownForDataObj; Marshal.QueryInterface(unknownFromOle, ref OLE32.IID_IDataObject, out unknownForDataObj); var pdo = Marshal.GetObjectForIUnknown(unknownForDataObj) as IDataObject; var fetc = new FORMATETC(); fetc.cfFormat = (short)OLE32.CLIPFORMAT.CF_ENHMETAFILE; fetc.dwAspect = DVASPECT.DVASPECT_CONTENT; fetc.lindex = -1; fetc.ptd = IntPtr.Zero; fetc.tymed = TYMED.TYMED_ENHMF; var stgm = new STGMEDIUM(); stgm.unionmember = IntPtr.Zero; stgm.tymed = TYMED.TYMED_ENHMF; pdo.GetData(ref fetc, out stgm); var hemf = GDI32.CopyEnhMetaFile(stgm.unionmember, emfOutputFileName); storage.Commit((int)OLE32.STGC.STGC_DEFAULT); pOle.Close(0); GDI32.DeleteEnhMetaFile(stgm.unionmember); GDI32.DeleteEnhMetaFile(hemf); }
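
    One thing that commonly keeps the temporary .bin storage file locked in code like this is the set of COM wrappers and raw IUnknown pointers that are never released. Below is a hedged sketch of the cleanup that usually belongs at the end of such a method, reusing the variable names from the snippet above; it is the standard Marshal release pattern, not necessarily the author's eventual fix:

        // (Marshal lives in System.Runtime.InteropServices.)
        // Release the raw IUnknown pointers obtained via GetIUnknownForObject/QueryInterface.
        if (unknownForDataObj != IntPtr.Zero)
            Marshal.Release(unknownForDataObj);
        if (unknownFromOle != IntPtr.Zero)
            Marshal.Release(unknownFromOle);

        // Drop the runtime callable wrappers so the underlying COM objects (and their
        // file handles) can actually be released.
        if (pdo != null)
            Marshal.ReleaseComObject(pdo);
        if (pOle != null)
            Marshal.ReleaseComObject(pOle);
        if (storage != null)
            Marshal.ReleaseComObject(storage);

        // Optionally make any remaining RCW finalizers run before the .bin file is reopened.
        GC.Collect();
        GC.WaitForPendingFinalizers();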

  • Cube project doesn't work because of permissions

    - by sms
    I'm doing "Multidimensional Project" with MS SQL Server 2012 (Server Data Tools - Visual Studio 2010 Shell). I can't run (debug) it. If the data source's impersonation information is set to "use the service account", this error occures: Error 2 Internal error: The operation terminated unsuccessfully. 0 0 Error 3 OLE DB error: OLE DB or ODBC error: Login failed for user 'NT Service\MSSQLServerOLAPService'.; 28000. 0 0 Error 4 Errors in the high-level relational engine. A connection could not be made to the data source with the DataSourceID of 'Data Warehouse', Name of 'Data Warehouse'. 0 0 Error 5 Errors in the OLAP storage engine: An error occurred while the dimension, with the ID of 'Items', Name of 'Items' was being processed. 0 0 Error 6 Errors in the OLAP storage engine: An error occurred while the 'Id' attribute of the 'Items' dimension from the 'Warehouse_MultidimensionalProject_Cube' database was being processed. 0 0 Error 7 Server: The current operation was cancelled because another operation in the transaction failed. 0 0 I guessed that this account has no premissions but (1) I coudn't even add this account (it seems that it doesn't exist) and (2) how is that even possible for it to not have built-it poremissions? When I'm setting impersonation to "use the credentials of current user" (which is the owner of the data source, btw.), another error occures: Error 2 Internal error: The operation terminated unsuccessfully. 0 0 Error 3 The datasource, 'Data Warehouse', contains an ImpersonationMode that is not supported for processing operations. 0 0 Error 4 Errors in the high-level relational engine. A connection could not be made to the data source with the DataSourceID of 'Data Warehouse', Name of 'Data Warehouse'. 0 0 Error 5 Errors in the OLAP storage engine: An error occurred while the dimension, with the ID of 'Items', Name of 'Items' was being processed. 0 0 Error 6 Errors in the OLAP storage engine: An error occurred while the 'Id' attribute of the 'Items' dimension from the 'Warehouse_MultidimensionalProject_Cube' database was being processed. 0 0 Error 7 Server: The current operation was cancelled because another operation in the transaction failed. 0 0 Any help?

  • Paperclip: delete attachment and "can't convert nil into String" error

    - by snitko
    I'm using Paperclip and here's what I do in the model to delete attachments: def before_save self.avatar = nil if @delete_avatar == 1.to_s end Works fine unless @delete_avatar flag is set when the user is actually uploading the image (so the model receives both params[:user][:avatar] and params[:user][:delete_avatar]. This results in the following error: TypeError: can't convert nil into String from /Work/project/src/vendor/plugins/paperclip/lib/paperclip/storage.rb:40:in `dirname' from /Work/project/src/vendor/plugins/paperclip/lib/paperclip/storage.rb:40:in `flush_writes' from /Work/project/src/vendor/plugins/paperclip/lib/paperclip/storage.rb:38:in `each' from /Work/project/src/vendor/plugins/paperclip/lib/paperclip/storage.rb:38:in `flush_writes' from /Work/project/src/vendor/plugins/paperclip/lib/paperclip/attachment.rb:144:in `save' from /Work/project/src/vendor/plugins/paperclip/lib/paperclip/attachment.rb:162:in `destroy' from /Work/project/src/app/models/user.rb:72:in `before_save' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/callbacks.rb:347:in `send' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/callbacks.rb:347:in `callback' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/callbacks.rb:249:in `create_or_update' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/base.rb:2538:in `save_without_validation' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/validations.rb:1078:in `save_without_dirty' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/dirty.rb:79:in `save_without_transactions' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/transactions.rb:229:in `send' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/transactions.rb:229:in `with_transaction_returning_status' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/database_statements.rb:136:in `transaction' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/transactions.rb:182:in `transaction' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/transactions.rb:228:in `with_transaction_returning_status' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/transactions.rb:196:in `save' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/transactions.rb:208:in `rollback_active_record_state!' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/transactions.rb:196:in `save' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/base.rb:723:in `create' I assume it has something to do with the avatar.dirty? value because when it certainly is true when this happens. The question is, how do I totally reset the thing if there are changes to be saved and abort avatar upload when the flag is set?

  • jets3t and Downloading Files from AmazonS3 with Different Name

    - by Gregg
    We're using Amazon S3 for file storage and recently found out that we need to keep some sort of directory structure. Since S3 doesn't allow that, we know we can name the files according to their structure for storage. For example... abc/123/draft.doc. What I want to know is: if I want to provide a public link to this particular file, is there any way that the file can simply be draft.doc instead of abc/123/draft.doc?

  • ASP.NET MVC CookieTempDataProvider: any experience?

    - by Igor Brejc
    UPDATE: it looks like I've misunderstood what TempData is for and what it isn't. It definitely shouldn't be used to "keep certain session-wide data" as I asked initially (see ASP.NET MVC TempData Is Really RedirectData for why). I've modified the question accordingly. Has anyone used CookieTempDataProvider for TempData storage? Are there any caveats to watch out for (apart from keeping the session storage small)? Any issues with using it on Web farms?
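
    For reference, plugging a cookie-backed provider into a controller is normally just an override of CreateTempDataProvider. A minimal sketch, assuming a CookieTempDataProvider class implementing ITempDataProvider is available (for example from the MVC Futures assembly or your own code); the exact constructor shape depends on the implementation you use:

        using System.Web.Mvc;

        // Base controller that swaps the default session-backed TempData for a cookie-backed one.
        public abstract class CookieTempDataControllerBase : Controller
        {
            protected override ITempDataProvider CreateTempDataProvider()
            {
                // Ctor signature depends on the CookieTempDataProvider implementation you pull in.
                return new CookieTempDataProvider(HttpContext);
            }
        }

    On a Web farm the main caveat is that whatever encryption or signing the provider applies to the cookie has to use keys shared by every server (e.g. a common machineKey), or TempData written on one node won't be readable on another.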

  • How to make a per-HTTP-request cache in ASP.NET 3.5

    - by Artem
    We are using ASP.NET 3.5 (controls-based approach) and need to have storage specific to one HTTP request only. A thread-specific cache with keys derived from the session id won't work because threads are pooled, so I could end up with data from some previous request in the cache, which is undesirable in my case. I always need brand-new storage for each request, available throughout the whole request. Any ideas how to do it in ASP.NET 3.5?
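
    A minimal sketch of the usual approach: HttpContext.Current.Items is a dictionary that lives exactly as long as one request, so it gives per-request storage without involving session state. The PerRequestCache wrapper and the sample keys below are just for illustration:

        using System.Web;

        // Thin wrapper over HttpContext.Items: a fresh dictionary for every HTTP request.
        public static class PerRequestCache
        {
            public static void Set(string key, object value)
            {
                HttpContext.Current.Items[key] = value;
            }

            public static T Get<T>(string key) where T : class
            {
                return HttpContext.Current.Items[key] as T;
            }
        }

        // Usage anywhere while the request is being processed:
        // PerRequestCache.Set("pricing", pricingData);
        // var pricingData = PerRequestCache.Get<PricingData>("pricing");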

  • Windows Azure role: stateful or not?

    - by taimoor-haider
    According to MSDN, an Azure service can contain any number of worker roles. As I understand it, a worker role can be recycled at any time by the Windows Azure Fabric. If that is true, then either: the worker role should be stateless, OR the worker role should persist its state to the Windows Azure storage services. But I want to make a service which contains client data, and I do not want to use the Azure storage service. How can I accomplish this?

  • How to bind to localStorage change event using jQuery for all browsers?

    - by David Glenn
    How do I bind a function to the HTML5 localStorage change event using jQuery? $(function () { $(window).bind('storage', function (e) { alert('storage changed'); }); localStorage.setItem('a', 'test'); }); I've tried the above but the alert is not showing. Update: It works in Firefox 3.6 but it doesn't work in Chrome 8 or IE 8 so the question should be more 'How to bind to localStorage change event using jQuery for all browsers?'

  • Azure Diagnostics wrt Custom Logs and honoring scheduledTransferPeriod

    - by kjsteuer
    I have implemented my own TraceListener similar to http://blogs.technet.com/b/meamcs/archive/2013/05/23/diagnostics-of-cloud-services-custom-trace-listener.aspx . One thing I noticed is that that logs show up immediately in My Azure Table Storage. I wonder if this is expected with Custom Trace Listeners or because I am in a development environment. My diagnosics.wadcfg <?xml version="1.0" encoding="utf-8"?> <DiagnosticMonitorConfiguration configurationChangePollInterval="PT1M""overallQuotaInMB="4096" xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration"> <DiagnosticInfrastructureLogs scheduledTransferLogLevelFilter="Information" /> <Directories scheduledTransferPeriod="PT1M"> <IISLogs container="wad-iis-logfiles" /> <CrashDumps container="wad-crash-dumps" /> </Directories> <Logs bufferQuotaInMB="0" scheduledTransferPeriod="PT30M" scheduledTransferLogLevelFilter="Information" /> </DiagnosticMonitorConfiguration> I have changed my approach a bit. Now I am defining in the web config of my webrole. I notice when I set autoflush to true in the webconfig, every thing works but scheduledTransferPeriod is not honored because the flush method pushes to the table storage. I would like to have scheduleTransferPeriod trigger the flush or trigger flush after a certain number of log entries like the buffer is full. Then I can also flush on server shutdown. Is there any method or event on the CustomTraceListener where I can listen to the scheduleTransferPeriod? <system.diagnostics> <!--http://msdn.microsoft.com/en-us/library/sk36c28t(v=vs.110).aspx By default autoflush is false. By default useGlobalLock is true. While we try to be threadsafe, we keep this default for now. Later if we would like to increase performance we can remove this. see http://msdn.microsoft.com/en-us/library/system.diagnostics.trace.usegloballock(v=vs.110).aspx --> <trace> <listeners> <add name="TableTraceListener" type="Pos.Services.Implementation.TableTraceListener, Pos.Services.Implementation" /> <remove name="Default" /> </listeners> </trace> </system.diagnostics> I have modified the custom trace listener to the following: namespace Pos.Services.Implementation { class TableTraceListener : TraceListener { #region Fields //connection string for azure storage readonly string _connectionString; //Custom sql storage table for logs. 
//TODO put in config readonly string _diagnosticsTable; [ThreadStatic] static StringBuilder _messageBuffer; readonly object _initializationSection = new object(); bool _isInitialized; CloudTableClient _tableStorage; readonly object _traceLogAccess = new object(); readonly List<LogEntry> _traceLog = new List<LogEntry>(); #endregion #region Constructors public TableTraceListener() : base("TableTraceListener") { _connectionString = RoleEnvironment.GetConfigurationSettingValue("DiagConnection"); _diagnosticsTable = RoleEnvironment.GetConfigurationSettingValue("DiagTableName"); } #endregion #region Methods /// <summary> /// Flushes the entries to the storage table /// </summary> public override void Flush() { if (!_isInitialized) { lock (_initializationSection) { if (!_isInitialized) { Initialize(); } } } var context = _tableStorage.GetTableServiceContext(); context.MergeOption = MergeOption.AppendOnly; lock (_traceLogAccess) { _traceLog.ForEach(entry => context.AddObject(_diagnosticsTable, entry)); _traceLog.Clear(); } if (context.Entities.Count > 0) { context.BeginSaveChangesWithRetries(SaveChangesOptions.None, (ar) => context.EndSaveChangesWithRetries(ar), null); } } /// <summary> /// Creates the storage table object. This class does not need to be locked because the caller is locked. /// </summary> private void Initialize() { var account = CloudStorageAccount.Parse(_connectionString); _tableStorage = account.CreateCloudTableClient(); _tableStorage.GetTableReference(_diagnosticsTable).CreateIfNotExists(); _isInitialized = true; } public override bool IsThreadSafe { get { return true; } } #region Trace and Write Methods /// <summary> /// Writes the message to a string buffer /// </summary> /// <param name="message">the Message</param> public override void Write(string message) { if (_messageBuffer == null) _messageBuffer = new StringBuilder(); _messageBuffer.Append(message); } /// <summary> /// Writes the message with a line breaker to a string buffer /// </summary> /// <param name="message"></param> public override void WriteLine(string message) { if (_messageBuffer == null) _messageBuffer = new StringBuilder(); _messageBuffer.AppendLine(message); } /// <summary> /// Appends the trace information and message /// </summary> /// <param name="eventCache">the Event Cache</param> /// <param name="source">the Source</param> /// <param name="eventType">the Event Type</param> /// <param name="id">the Id</param> /// <param name="message">the Message</param> public override void TraceEvent(TraceEventCache eventCache, string source, TraceEventType eventType, int id, string message) { base.TraceEvent(eventCache, source, eventType, id, message); AppendEntry(id, eventType, eventCache); } /// <summary> /// Adds the trace information to a collection of LogEntry objects /// </summary> /// <param name="id">the Id</param> /// <param name="eventType">the Event Type</param> /// <param name="eventCache">the EventCache</param> private void AppendEntry(int id, TraceEventType eventType, TraceEventCache eventCache) { if (_messageBuffer == null) _messageBuffer = new StringBuilder(); var message = _messageBuffer.ToString(); _messageBuffer.Length = 0; if (message.EndsWith(Environment.NewLine)) message = message.Substring(0, message.Length - Environment.NewLine.Length); if (message.Length == 0) return; var entry = new LogEntry() { PartitionKey = string.Format("{0:D10}", eventCache.Timestamp >> 30), RowKey = string.Format("{0:D19}", eventCache.Timestamp), EventTickCount = eventCache.Timestamp, Level = (int)eventType, 
EventId = id, Pid = eventCache.ProcessId, Tid = eventCache.ThreadId, Message = message }; lock (_traceLogAccess) _traceLog.Add(entry); } #endregion #endregion } }
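
    There is no callback on TraceListener for the diagnostics scheduledTransferPeriod; that schedule only applies to the buffers the Azure diagnostics agent transfers, not to a listener that writes to table storage itself (which is also why entries appear immediately). One workaround is to let the listener flush itself on a timer and on Close(). A hedged sketch of members that could be added inside TableTraceListener, where 'DiagFlushPeriodSeconds' is an assumed configuration setting, not part of the original code:

        // Added inside TableTraceListener (RoleEnvironment comes from
        // Microsoft.WindowsAzure.ServiceRuntime, Timer from System.Threading).
        private Timer _flushTimer;

        private void StartFlushTimer()
        {
            var period = TimeSpan.FromSeconds(
                int.Parse(RoleEnvironment.GetConfigurationSettingValue("DiagFlushPeriodSeconds")));
            _flushTimer = new Timer(_ => Flush(), null, period, period);
        }

        public override void Close()
        {
            if (_flushTimer != null)
                _flushTimer.Dispose();
            Flush();              // push whatever is still buffered on shutdown
            base.Close();
        }

    Calling StartFlushTimer() from the constructor would then batch writes roughly the way the scheduled transfer period does for the built-in listener, while autoflush stays off.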

  • Translate Linq Expression to any existing Query structure?

    - by fredlegrain
    I have some kind of "data engine" between multiple "data consumer" processes and multiple "data storage" sources. I'd like to provide Linq capabilities to the "data consumer" and forward the query to the "data storage". The forwarded query should be some structured query (like, let's say, NHibernate Criteria). Is there any existing structured query library that could allow me to "just" translate a Linq Expression to such a structured query?
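
    One common building block for this is System.Linq.Expressions.ExpressionVisitor (public from .NET 4 onwards): the "data engine" walks the LINQ expression tree itself and emits whatever structured query objects it forwards to storage. A minimal sketch that only handles simple equality comparisons, with Criterion standing in as a hypothetical target structure:

        using System;
        using System.Linq.Expressions;

        // Hypothetical structured-query node the data engine would forward to a data storage.
        public class Criterion
        {
            public string Property { get; set; }
            public object Value { get; set; }
        }

        // Translates expressions of the form  x => x.SomeProperty == constant  into a Criterion.
        public class EqualityVisitor : ExpressionVisitor
        {
            public Criterion Result { get; private set; }

            protected override Expression VisitBinary(BinaryExpression node)
            {
                if (node.NodeType == ExpressionType.Equal)
                {
                    var member = node.Left as MemberExpression;
                    var constant = node.Right as ConstantExpression;
                    if (member != null && constant != null)
                        Result = new Criterion { Property = member.Member.Name, Value = constant.Value };
                }
                return base.VisitBinary(node);
            }
        }

        // Usage (Consumer is a hypothetical consumer-side type):
        // Expression<Func<Consumer, bool>> filter = c => c.Name == "abc";
        // var visitor = new EqualityVisitor();
        // visitor.Visit(filter.Body);
        // visitor.Result now holds Property = "Name", Value = "abc".

    A real translator would also have to handle AndAlso/OrElse nodes, member-to-member comparisons, and captured variables (which show up as field accesses rather than ConstantExpression), which is exactly the heavy lifting existing LINQ providers do.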

  • How do I know if a MySQL table is using the MyISAM or InnoDB engine

    - by kamal
    I want to confirm whether the statement below is indeed true: "There is no way to specify a storage engine for a certain database, only for single tables. You can, however, specify a storage engine to be used during one session with SET storage_engine=InnoDB; so you don't have to specify it for each table." How do I confirm whether all the tables are indeed using InnoDB? All the tables were using MyISAM.
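
    The engine is recorded per table and is exposed through SHOW TABLE STATUS or the information_schema.TABLES view. A small sketch that runs that query from C#, assuming the MySQL Connector/NET client; the connection string and schema name are placeholders:

        using System;
        using MySql.Data.MySqlClient;

        static class EngineCheck
        {
            // Prints every table in the schema together with its storage engine.
            public static void PrintEngines(string connectionString, string schema)
            {
                using (var conn = new MySqlConnection(connectionString))
                {
                    conn.Open();
                    var cmd = new MySqlCommand(
                        "SELECT TABLE_NAME, ENGINE FROM information_schema.TABLES " +
                        "WHERE TABLE_SCHEMA = @schema", conn);
                    cmd.Parameters.AddWithValue("@schema", schema);
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                            Console.WriteLine("{0}: {1}", reader.GetString(0), reader.GetString(1));
                    }
                }
            }
        }

    Any table reporting MyISAM there can be switched with ALTER TABLE some_table ENGINE=InnoDB.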

  • empty base class optimization

    - by FredOverflow
    Two quotes from the C++ standard, §1.8: An object is a region of storage. Base class subobjects may have zero size. I don't think a region of storage can be of size zero. That would mean that some base class subobjects aren't actually objects. Opinions?

  • Laptop battery lifetime from Dell specs?

    - by user26535
    Question: When I buy a Dell laptop, I get the following choice of battery (lithium-ion main battery with X cells and Y Wh, included in the price or at an additional charge):

    Lithium-ion main battery with 4 cells and 24 Wh [included in the price]
    Lithium-ion main battery with 9 cells and 85 Wh [plus CHF 120.01]
    Lithium-ion main battery with 6 cells and 46 Wh [plus CHF 30.00]

    I figured that I can calculate that the 85 Wh pack offers +254% of the 24 Wh lifetime, but... Is there any way to calculate how much battery time this amounts to in hours? I mean, how many hours will the 24 Wh last (at normal operation, e.g. writing a document, not watching video)? Otherwise the +254% is a pretty useless number... Also, does anybody know whether 4 cells means 4 times 24 Wh, or 24 Wh in total?
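
    A rough way to turn those Wh figures into hours: runtime ≈ capacity in Wh divided by the machine's average draw in W. The draw figure here is an assumption, not something from the Dell spec; a small laptop doing light work often sits somewhere around 10-15 W, so 24 Wh gives roughly 24 / 12 ≈ 2 hours and 85 Wh roughly 85 / 12 ≈ 7 hours. The Wh rating is the capacity of the whole pack, not per cell, which is why the 9-cell option is listed as 85 Wh rather than 9 × 24 Wh.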

  • Upgraded Ubuntu, all drives in one zpool marked unavailable

    - by Matt Sieker
    I just upgraded Ubuntu 14.04, and I had two ZFS pools on the server. There was some minor issue with me fighting with the ZFS driver and the kernel version, but that's worked out now. One pool came online, and mounted fine. The other didn't. The main difference between the tool is one was just a pool of disks (video/music storage), and the other was a raidz set (documents, etc) I've already attempted exporting and re-importing the pool, to no avail, attempting to import gets me this: root@kyou:/home/matt# zpool import -fFX -d /dev/disk/by-id/ pool: storage id: 15855792916570596778 state: UNAVAIL status: One or more devices contains corrupted data. action: The pool cannot be imported due to damaged devices or data. see: http://zfsonlinux.org/msg/ZFS-8000-5E config: storage UNAVAIL insufficient replicas raidz1-0 UNAVAIL insufficient replicas ata-SAMSUNG_HD103SJ_S246J90B134910 UNAVAIL ata-WDC_WD10EARS-00Y5B1_WD-WMAV51422523 UNAVAIL ata-WDC_WD10EARS-00Y5B1_WD-WMAV51535969 UNAVAIL The symlinks for those in /dev/disk/by-id also exist: root@kyou:/home/matt# ls -l /dev/disk/by-id/ata-SAMSUNG_HD103SJ_S246J90B134910* /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51* lrwxrwxrwx 1 root root 9 May 27 19:31 /dev/disk/by-id/ata-SAMSUNG_HD103SJ_S246J90B134910 -> ../../sdb lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-SAMSUNG_HD103SJ_S246J90B134910-part1 -> ../../sdb1 lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-SAMSUNG_HD103SJ_S246J90B134910-part9 -> ../../sdb9 lrwxrwxrwx 1 root root 9 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51422523 -> ../../sdd lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51422523-part1 -> ../../sdd1 lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51422523-part9 -> ../../sdd9 lrwxrwxrwx 1 root root 9 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51535969 -> ../../sde lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51535969-part1 -> ../../sde1 lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51535969-part9 -> ../../sde9 Inspecting the various /dev/sd* devices listed, they appear to be the correct ones (The 3 1TB drives that were in a raidz array). I've run zdb -l on each drive, dumping it to a file, and running a diff. The only difference on the 3 are the guid fields (Which I assume is expected). All 3 labels on each one are basically identical, and are as follows: version: 5000 name: 'storage' state: 0 txg: 4 pool_guid: 15855792916570596778 hostname: 'kyou' top_guid: 1683909657511667860 guid: 8815283814047599968 vdev_children: 1 vdev_tree: type: 'raidz' id: 0 guid: 1683909657511667860 nparity: 1 metaslab_array: 33 metaslab_shift: 34 ashift: 9 asize: 3000569954304 is_log: 0 create_txg: 4 children[0]: type: 'disk' id: 0 guid: 8815283814047599968 path: '/dev/disk/by-id/ata-SAMSUNG_HD103SJ_S246J90B134910-part1' whole_disk: 1 create_txg: 4 children[1]: type: 'disk' id: 1 guid: 18036424618735999728 path: '/dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51422523-part1' whole_disk: 1 create_txg: 4 children[2]: type: 'disk' id: 2 guid: 10307555127976192266 path: '/dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51535969-part1' whole_disk: 1 create_txg: 4 features_for_read: Stupidly, I do not have a recent backup of this pool. 
However, the pool was fine before reboot, and Linux sees the disks fine (I have smartctl running now to double check) So, in summary: I upgraded Ubuntu, and lost access to one of my two zpools. The difference between the pools is the one that came up was JBOD, the other was zraid. All drives in the unmountable zpool are marked UNAVAIL, with no notes for corrupted data The pools were both created with disks referenced from /dev/disk/by-id/. Symlinks from /dev/disk/by-id to the various /dev/sd devices seems to be correct zdb can read the labels from the drives. Pool has already been attempted to be exported/imported, and isn't able to import again. Is there some sort of black magic I can invoke via zpool/zfs to bring these disks back into a reasonable array? Can I run zpool create zraid ... without losing my data? Is my data gone anyhow?

  • conditional formatting for subsequent rows or columns

    - by Trailokya Saikia
    I have data in a range of cells (say six columns and one hundred rows). The first four columns contain data and the sixth column has a limiting value. For the data in every row the limiting value is different; I have one hundred such rows. I am successfully using conditional formatting for the first row (e.g. cells containing data less than the limiting value in the first five columns are made red). But how do I copy this conditional formatting so that it applies to all hundred rows with their respective limiting values? I tried the format painter, but it retains the same source cell (here, the limiting value) for the conditional formatting in the second and subsequent rows. So now I am required to set up conditional formatting for each row separately.
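
    The usual fix is to write the rule as a formula whose row reference is relative (no $ before the row number) and apply that one rule to the whole data range, so each row compares against its own limiting value. The sketch below shows the same rule being created through the Excel interop API from C#, only because that makes the formula explicit; the A2:E101 range and the limit sitting in column F are assumptions based on the description above, not details from the question:

        using System;
        using System.Drawing;
        using Excel = Microsoft.Office.Interop.Excel;

        static class RowWiseFormatting
        {
            public static void ApplyRowWiseLimit(Excel.Worksheet ws)
            {
                // Data assumed to sit in A2:E101, with each row's limiting value in column F.
                Excel.Range data = ws.get_Range("A2", "E101");

                // A2 is fully relative and $F2 locks only the column, so every cell in row N
                // is compared against its own limit in F<N>.
                Excel.FormatCondition condition = (Excel.FormatCondition)data.FormatConditions.Add(
                    Excel.XlFormatConditionType.xlExpression,
                    Type.Missing,
                    "=A2<$F2");

                condition.Interior.Color = ColorTranslator.ToOle(Color.Red);
            }
        }

    In the Excel UI the equivalent is a "Use a formula to determine which cells to format" rule with the formula =A2<$F2, applied to A2:E101.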
