Search Results

Search found 607 results on 25 pages for 'tb'.

Page 3/25

  • How to use the FindControl function to find a dynamically generated control?

    - by Abe Miessler
    I have a PlaceHolder control inside a ListView that I am using to render controls from my code-behind. The code below adds the controls: TextBox tb = new TextBox(); tb.Text = quest.Value; tb.ID = quest.ShortName.Replace(" ", ""); ((PlaceHolder)e.Item.FindControl("ph_QuestionInput")).Controls.Add(tb); I am using the following code to retrieve the values that have been entered into the TextBox: foreach (ListViewDataItem di in lv_Questions.Items) { int QuestionId = Convert.ToInt32(((HiddenField)di.FindControl("hf_QuestionId")).Value); Question quest = dc.Questions.Single(q => q.QuestionId == QuestionId); TextBox tb = ((TextBox)di.FindControl(quest.ShortName.Replace(" ",""))); //tb is always null! } But it never finds the control. I've looked at the source code for the page and the control I want has the ID: ctl00_cphContentMiddle_lv_Questions_ctrl0_Numberofacres For some reason when I look at the controls in the ListViewDataItem it has the ClientID: ctl00_cphContentMiddle_lv_Questions_ctrl0_ctl00 Why would it be changing Numberofacres to ctl00? Is there any way to work around this? UPDATE: Just to clarify, I am databinding my ListView in the Page_Init event. I then create the controls in the ItemDataBound event for my ListView. But based on what @Womp and MSDN are saying, the controls won't actually be created until after the Load event (which is after the Page_Init event) and therefore are not in ViewState? Does this sound correct? If so, am I just SOL when it comes to retrieving the values in my dynamic controls from my OnClick event?
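    A minimal sketch of the usual workaround, reusing lv_Questions, ph_QuestionInput and the Question type from the question; GetQuestionForItem is a hypothetical helper (on postback it would have to re-derive the question, e.g. from hf_QuestionId). The key point is that dynamic controls keep their IDs and posted values only if they are re-created on every request, before Load, with the same IDs they had when the page was rendered:

      // Hedged sketch, not the poster's code: re-create the TextBox in ItemCreated
      // so it exists with a stable ID on postback as well as on the initial request.
      protected void lv_Questions_ItemCreated(object sender, ListViewItemEventArgs e)
      {
          if (e.Item.ItemType != ListViewItemType.DataItem) return;

          Question quest = GetQuestionForItem((ListViewDataItem)e.Item); // hypothetical helper
          var tb = new TextBox { ID = quest.ShortName.Replace(" ", "") };
          ((PlaceHolder)e.Item.FindControl("ph_QuestionInput")).Controls.Add(tb);

          // Set the initial value only on the first request; on postback ASP.NET
          // restores the posted text once the control exists under the same ID.
          if (!IsPostBack)
          {
              tb.Text = quest.Value;
          }
      }

    With the control re-created this way it should be present in the item's control tree on postback, so di.FindControl(quest.ShortName.Replace(" ", "")) can locate it by that ID.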

    Read the article

  • How can I rebuild the index in Thunderbird 3.0?

    - by Martin
    Hello, I have just deleted a lot of old messages (about 3,000) from my Thunderbird 3.0 profile. When I now use the new search feature (search all messages), TB still finds the deleted ones. I deleted them this way: I moved the messages to a separate "archive" folder (not the built-in archive feature). Then I stopped TB and moved the archive files and folders to a different place on my file system. Then I restarted TB. I have archived my messages this way for years. So, it seems that Thunderbird does not notice the deletion of my messages, and thus the index is not updated. How can I tell TB to instantly rebuild the index?

    Read the article

  • How to resolve a degraded virtual disk in Windows Server 2012

    - by harrydev
    I am using the new Storage Spaces feature in Windows Server 2012. I have the following disks: FriendlyName CanPool OperationalStatus HealthStatus Usage Size ------------ ------- ----------------- ------------ ----- ---- PhysicalDisk2 False OK Healthy Auto-Select 2.73 TB PhysicalDisk3 False OK Healthy Auto-Select 2.73 TB PhysicalDisk4 False OK Healthy Auto-Select 2.73 TB PhysicalDisk5 False OK Healthy Auto-Select 2.73 TB There is also a separate OS disk. The above disks are part of a single storage pool: FriendlyName OperationalStatus HealthStatus IsPrimordial IsReadOnly ------------ ----------------- ------------ ------------ ---------- Pool OK Healthy False False Within this storage pool some virtual disks are defined, see below: FriendlyName ResiliencySettingName OperationalStatus HealthStatus IsManualAttach Size ------------ --------------------- ----------------- ------------ -------------- ---- Docs Mirror OK Healthy False 500 GB Data Mirror Degraded Warning False 500 GB Work Mirror Degraded Warning False 2 TB The virtual disks are all running as normal 2-way mirrors, but two of them are degraded. This is probably because one of the physical disks was offline for a short period of time. However, now the virtual disks cannot be repaired, even though all physical disks are healthy and there is plenty of available space in the storage pool. I cannot understand this, so I was hoping for some help on how to resolve it. Below I have listed the full output from the Get-VirtualDisk cmdlet for the "Work" disk: ObjectId : {XXXXXXXX} PassThroughClass : PassThroughIds : PassThroughNamespace : PassThroughServer : UniqueId : XXXXXXXX Access : Read/Write AllocatedSize : 412316860416 DetachedReason : None FootprintOnPool : 824633720832 FriendlyName : Work HealthStatus : Warning Interleave : 262144 IsDeduplicationEnabled : False IsEnclosureAware : False IsManualAttach : False IsSnapshot : False LogicalSectorSize : 512 Name : NameFormat : NumberOfAvailableCopies : 0 NumberOfColumns : 2 NumberOfDataCopies : 2 OperationalStatus : Degraded OtherOperationalStatusDescription : OtherUsageDescription : Disk for data being worked on (not backed up) ParityLayout : PhysicalDiskRedundancy : 1 PhysicalSectorSize : 4096 ProvisioningType : Thin RequestNoSinglePointOfFailure : True ResiliencySettingName : Mirror Size : 2199023255552 UniqueIdFormat : Vendor Specific UniqueIdFormatDescription : Usage : Other PSComputerName :

    Read the article

  • Dead drive in LVM/XFS configuration

    - by Freddie Witherden
    I had three drives in LVM: a 2 TB drive and two 1 TB drives (added later). One of the 1 TB drives -- I believe the third one -- has died. Spanning all three drives was an XFS partition. Reading: http://www.novell.com/coolsolutions/appnote/19386.html I see that one way of handling this is to replace the dead drive and copy the metadata over. However, I am currently not in possession of a 1 TB drive and can not readily acquire one. Given this, what are my options? There was nothing important on the drives (if there was I would have them in RAID 1) but I would not mind attempting a recovery. Is there a simple way of forcing LVM to go with just two drives and NUL out anything else? (So that fsck can do its thing.)

    Read the article

  • Problems using Evolution Contacts with a DavMail LDAP proxy for an Exchange server

    - by WegDamit
    I have a DavMail proxy set up for accessing an Exchange 200x server. Email works fine in Thunderbird and Evolution (IMAP...). The LDAP contacts/address book works in TB, but not in Evolution. It seems that Evolution does not try the given credentials; the entered LDAP auth is never sent to the DavMail proxy: anonymous access to ou=people forbidden davmail.ui.tray.DavGatewayTray.displayMessage(DavGatewayTray.java:96) It is the same config for TB and Evolution, so it looks like an issue with Evolution to me. Does it need a different config than TB for the credentials? Has anybody got this config working and can give me some hints? Thanks, WegDamit

    Read the article

  • Rotating text using CSS

    - by Renso
    Goal: Rotating text using CSS only. How: Surprisingly, IE supports this feature rather well. You could use property filters in IE, but since these are only supported in IE browsers, I would not recommend it. CSS3, still in proposal state, has a "writing-mode" property for doing this. It has been part of IE's browser engine since IE5.5, and now that it is part of the CSS3 draft specification it would be the best way to implement this going forward. Other browsers (Firefox 3.5+, Safari, Chrome, Opera 11 and IE9) implement rotation differently, through the transform property. Without using third-party JavaScript or CSS properties, we can use the CSS3 "writing-mode" property, supported from IE5.5 up to IE8, the latter adding additional formatting options through -ms extensions. <style type="text/css"> .rightToLeft{ writing-mode: tb-rl; } </style> <p class="rightToLeft">This is my text</p> This will rotate the text 90 degrees, starting from the right to the left. Here are all the options: · lr-tb – Default value; left to right, top to bottom · rl-tb – Right to left, top to bottom · tb-rl – Vertically; top to bottom, right to left · bt-rl – Vertically; bottom to top, right to left · tb-lr – Available in IE8+ as -ms-writing-mode; top to bottom, left to right · bt-lr – Bottom to top, left to right · lr-bt – Left to right, bottom to top What about Firefox, Safari, etc.? The following technique is needed for non-IE browsers such as Firefox 3.5+, Google Chrome, Safari and Opera 11, and for IE9: these browsers rotate through the transform property with their proprietary vendor prefixes -moz-, -webkit-, -o- and -ms-. -webkit-transform: rotate(90deg); -moz-transform: rotate(90deg); -ms-transform: rotate(90deg); -o-transform: rotate(90deg); transform: rotate(90deg);

    Read the article

  • Programmatically creating a toolbar in WPF

    - by rwallace
    I'm trying to create a simple toolbar in WPF, but the toolbar shows up with no corresponding buttons on it, just a very thin blank white strip. Any idea what I'm doing wrong, or what the recommended procedure is? Relevant code fragments so far: var tb = new ToolBar(); var b = new Button(); b.Command = comback; Image myImage = new Image(); myImage.Source = new BitmapImage(new Uri("back.png", UriKind.Relative)); b.Content = myImage; tb.Items.Add(b); var p = new DockPanel(); //DockPanel.SetDock(mainmenu, Dock.Top); DockPanel.SetDock(tb, Dock.Top); DockPanel.SetDock(sb, Dock.Bottom); //p.Children.Add(mainmenu); p.Children.Add(tb); p.Children.Add(sb); Content = p;
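    For comparison, a hedged, self-contained sketch that normally produces a visible toolbar button. The pack URI (which assumes back.png is compiled into the project as a Resource), the explicit image size and the window wiring are assumptions, not the poster's setup. Two things worth checking in the original fragment are whether the relative "back.png" Uri actually resolves at runtime (a failed image leaves the button with nothing visible to size itself by) and whether the comback command's CanExecute leaves the button disabled:

      using System;
      using System.Windows;
      using System.Windows.Controls;
      using System.Windows.Media.Imaging;

      public class ToolbarWindow : Window
      {
          public ToolbarWindow()
          {
              var tb = new ToolBar();

              var b = new Button();
              // Pack URI assumes back.png is included in the project as a Resource.
              var img = new Image
              {
                  Source = new BitmapImage(new Uri("pack://application:,,,/back.png")),
                  Width = 16,
                  Height = 16
              };
              b.Content = img;
              tb.Items.Add(b);

              var panel = new DockPanel { LastChildFill = true };
              DockPanel.SetDock(tb, Dock.Top);
              panel.Children.Add(tb);
              panel.Children.Add(new TextBlock { Text = "content area" }); // fills the rest
              Content = panel;
          }
      }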

    Read the article

  • How to convert this code-based WPF tooltip to Silverlight?

    - by Edward Tanguay
    The following ToolTip code works in WPF. I'm trying to get it to work in Silverlight. But it gives me these errors: TextBlock does not contain a definition for ToolTip. Cursors does not contain a definition for Help. ToolTipService does not contain a definition for SetInitialShowDelay. How can I get this to work in Silverlight? using System.Windows; using System.Windows.Controls; using System.Windows.Input; using System.Windows.Media; namespace TestHover29282 { public partial class Window1 : Window { public Window1() { InitializeComponent(); AddCustomer("Jim Smith"); AddCustomer("Joe Jones"); AddCustomer("Angie Jones"); AddCustomer("Josh Smith"); } void AddCustomer(string name) { TextBlock tb = new TextBlock(); tb.Text = name; ToolTip tt = new ToolTip(); tt.Content = "This is some info on " + name + "."; tb.ToolTip = tt; tt.Cursor = Cursors.Help; ToolTipService.SetInitialShowDelay(tb, 0); MainStackPanel.Children.Add(tb); } } }
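    A minimal sketch of a Silverlight-flavoured equivalent, assuming the same MainStackPanel layout as the question: it attaches the tooltip through the attached ToolTipService.ToolTip property and simply drops the Help cursor and the show-delay tweak, which have no direct Silverlight counterparts.

      using System.Windows.Controls;

      void AddCustomer(string name)
      {
          var tb = new TextBlock { Text = name };

          var tt = new ToolTip { Content = "This is some info on " + name + "." };

          // Silverlight has no TextBlock.ToolTip property; the attached
          // ToolTipService.ToolTip property is used instead.
          ToolTipService.SetToolTip(tb, tt);

          MainStackPanel.Children.Add(tb);
      }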

    Read the article

  • Chain of DataBinding

    - by Neir0
    Hello, I am trying to set up the following data-binding chain: Property -> DependencyProperty -> Property. But I am having trouble. For example, we have a simple class with two properties that implements INotifyPropertyChanged: public class MyClass : INotifyPropertyChanged { private string _num1; public string Num1 { get { return _num1; } set { _num1 = value; OnPropertyChanged("Num1"); } } private string _num2; public string Num2 { get { return _num2; } set { _num2 = value; OnPropertyChanged("Num2"); } } public event PropertyChangedEventHandler PropertyChanged; public void OnPropertyChanged(string e) { PropertyChangedEventHandler handler = PropertyChanged; if (handler != null) handler(this, new PropertyChangedEventArgs(e)); } } And a TextBlock declared in XAML: <TextBlock Name="tb" FontSize="20" Foreground="Red" Text="qwerqwerwqer" /> Now let's try to bind Num1 to tb.Text: private MyClass _myClass = new MyClass(); public MainWindow() { InitializeComponent(); Binding binding1 = new Binding("Num1") { Source = _myClass, Mode = BindingMode.OneWay }; Binding binding2 = new Binding("Num2") { Source = _myClass, Mode = BindingMode.TwoWay }; tb.SetBinding(TextBlock.TextProperty, binding1); //tb.SetBinding(TextBlock.TextProperty, binding2); var timer = new Timer(500) {Enabled = true,}; timer.Elapsed += (sender, args) => _myClass.Num1 += "a"; timer.Start(); } It works well. But if we uncomment the line tb.SetBinding(TextBlock.TextProperty, binding2); then the TextBlock displays nothing. The data binding doesn't work! How can I do what I want?
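    Only one binding can be set on a given dependency property at a time, so the second SetBinding call simply replaces the first, which is why the TextBlock goes blank. A hedged sketch of one way to still get the Property -> DependencyProperty -> Property chain, reusing _myClass and tb from the question; the manual forwarding shown here is an assumption, not the only option:

      // Keep the OneWay binding from Num1 on TextBlock.Text.
      var binding1 = new Binding("Num1") { Source = _myClass, Mode = BindingMode.OneWay };
      tb.SetBinding(TextBlock.TextProperty, binding1);

      // Forward every change of the dependency property into Num2 by hand.
      // DependencyPropertyDescriptor lives in System.ComponentModel (WPF only)
      // and holds a strong reference to tb, so treat this as a sketch.
      var dpd = DependencyPropertyDescriptor.FromProperty(TextBlock.TextProperty, typeof(TextBlock));
      dpd.AddValueChanged(tb, (s, e) => _myClass.Num2 = tb.Text);

    A MultiBinding or an intermediate view-model property would give the same chain without touching the dependency property system directly.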

    Read the article

  • Getting filtered results with subquery

    - by josepv
    I have a table with something like the following: ID Name Color 1 Bob Blue 2 John Yellow 1 Bob Green 3 Sara Red 3 Sara Green What I would like to do is return a filtered list of results whereby the following data is returned: ID Name Color 1 Bob Blue 2 John Yellow 3 Sara Red i.e. I would like to return 1 row per user. (I do not mind which row is returned for the particular user - I just need the [ID] to be unique.) I already have something that works but is really slow: I create a temp table with all the IDs and then use an OUTER APPLY to select the top 1 from the same table, i.e. CREATE TABLE #tb ( [ID] [int] ) INSERT INTO #tb select distinct [ID] from MyTable select T1.[ID], V2.[Name], V2.Color from #tb T1 OUTER APPLY ( SELECT TOP 1 * FROM MyTable T2 WHERE T2.[ID] = T1.[ID] ) AS V2 DROP TABLE #tb Can somebody suggest how I may improve it? Thanks

    Read the article

  • JSONP context problem

    - by PoweRoy
    I'm using a JavaScript autocomplete in a Greasemonkey script. By itself it works correctly, but I want to add JSONP because I want the data from another domain. The code (snippet): function autosuggest(url) { this.suggest_url = url; this.keywords = []; return this.construct(); }; autosuggest.prototype = { construct: function() { return this; }, preSuggest: function() { this.CreateJSONPRequest(this.suggest_url + "foo"); }, CreateJSONPRequest: function(url) { var headID = document.getElementsByTagName("head")[0]; var newScript = document.createElement('script'); newScript.type = 'text/javascript'; newScript.src = url +'&callback=autosuggest.prototype.JSONCallback'; //newScript.async = true; newScript.onload = newScript.onreadystatechange = function() { if (newScript.readyState === "loaded" || newScript.readyState === "complete") { //remove it again newScript.onload = newScript.onreadystatechange = null; if (newScript && newScript.parentNode) { newScript.parentNode.removeChild(newScript); } } } headID.appendChild(newScript); }, JSONCallback: function(data) { if(data) { this.keywords = data; this.suggest(); } }, suggest: function() { //use this.keywords } }; //Add suggestion box to textboxes window.opera.addEventListener('AfterEvent.load', function (e) { var textboxes = document.getElementsByTagName('input'); for (var i = 0; i < textboxes.length; i++) { var tb = textboxes[i]; if (tb.type == 'text') { if (tb.autocomplete == undefined || tb.autocomplete == '' || tb.autocomplete == 'on') { //we handle autosuggestion tb.setAttribute('autocomplete','off'); var obj1 = new autosuggest("http://test.php?q="); } } } }, false); I removed the irrelevant code. Now when 'preSuggest' is called, it adds a script to the header and circumvents the cross-domain problem. When the data is received back, 'JSONCallback' is called. I can use the data there, but when 'suggest' is called I can't use the this.keywords array or this.suggest_url. I think this is because 'JSONCallback' and 'suggest' are called in a different context. How can I get this working?

    Read the article

  • Complex Join - involving date ranges and sum...

    - by calumbrodie
    I have two tables that I need to join... I want to join table1 and table2 on 'id' - however in table two id is not unique. I only want one value returned for table two, and this value represents the sum of a column called 'total_sold' within a specified date range (say one month). SELECT ta.id, tb.total_sold as total_sold_this_week FROM table_a as ta LEFT JOIN table_b as tb ON ta.id=tb.id AND tb.date_sold BETWEEN ADDDATE(NOW(),INTERVAL -3 WEEK) AND NOW() This works but does not SUM the rows - it only returns one row for each id. How do I get the sum from table b instead of only one row? Please criticise if the format of the question could use more work - I can rewrite and provide sample data if required; this is a trivialised version of a much larger problem. -Thanks

    Read the article

  • How to make a tooltip appear immediately in Silverlight?

    - by Edward Tanguay
    In WPF, I get a tooltip to appear immediately like this: TextBlock tb = new TextBlock(); tb.Text = name; ToolTip tt = new ToolTip(); tt.Content = "This is some info on " + name + "."; tb.ToolTip = tt; tt.Cursor = Cursors.Help; ToolTipService.SetInitialShowDelay(tb, 0); This makes the user experience better since if the user wants to look at the tooltips of five items on the page, he doesn't have to wait that long second for each one. But since Silverlight does not have SetInitialShowDelay, what is a workaround to make the tooltip appear immediately?
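    One workaround people use, sketched here under the assumption that the instant "tooltip" does not have to be a real ToolTip at all: attach a Popup to the element and open it on MouseEnter, which bypasses the built-in delay entirely. MainStackPanel comes from the question; the Border styling and the 20-pixel offset are made up for the sketch.

      using System.Windows;
      using System.Windows.Controls;
      using System.Windows.Controls.Primitives;
      using System.Windows.Media;

      void AddCustomer(string name)
      {
          var tb = new TextBlock { Text = name };

          var popup = new Popup
          {
              Child = new Border
              {
                  Background = new SolidColorBrush(Colors.Yellow),
                  Child = new TextBlock { Text = "This is some info on " + name + "." }
              }
          };

          tb.MouseEnter += (s, e) =>
          {
              // Place the popup just below the pointer and show it with no delay.
              var pos = e.GetPosition(null);
              popup.HorizontalOffset = pos.X;
              popup.VerticalOffset = pos.Y + 20;
              popup.IsOpen = true;
          };
          tb.MouseLeave += (s, e) => popup.IsOpen = false;

          MainStackPanel.Children.Add(tb);
      }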

    Read the article

  • Possible to fire asp.net validation from jQuery?

    - by Abe Miessler
    I have a form with several text boxes on it. I only want to accept floats, but it is likely that users will enter a dollar sign. I'm using the following code to remove dollar signs and validate the content: jQuery: $("#<%= tb.ClientID %>").change(function() { var ctrl = $("#<%= tb.ClientID %>"); ctrl.val(ctrl.val().replace('$','')) }); asp.net validation: <asp:CompareValidator ID="CompareValidator4" runat="server" Type="Double" ControlToValidate="tb" Operator="DataTypeCheck" ValidationGroup="vld_Page" ErrorMessage="Some error" /> My problem is that when someone enters a dollar sign in the TextBox "tb" and changes focus the validation happens first and THEN the jQuery removes the dollar sign. Is it possible to have the jQuery run first or to force the validation to run again after the jQuery executes?

    Read the article

  • Seriousness of a "SMART" disk error. How long will it last?

    - by Workshop Alex
    I have a 1 TB data disk and the BIOS and Windows are reporting a "SMART" error. At least, I get a SMART event, but it doesn't indicate how serious the failure could be. My system is about 6 months old, including the disk, so the warranty will cover the damage. Unfortunately, I lack a second disk of 1 TB in size which I can use to make a full backup. The most important data on this disk is safe, but there's a lot of work data which can be regenerated, but this would cost a lot of time. So I ordered a USB disk of 1 TB which will arrive in three days. By then I can make a full backup of the data and afterwards, it can crash. But will the disk live that long? (Well, I won't use the PC as long as I can't make a backup.) How serious is such a SMART event? I know it's serious enough to have the disk replaced, but will it live for another week or could it die at any moment? Update: I purchased a 1 TB external disk and spent most of the day making a backup of the 1 TB disk. It survived that. I then received a new disk, since it was still under warranty, and replaced the hard disk. Then I had to spend most of a day again to put back the backup. I need to send back the faulty disk and now have an additional external disk, which could always be practical. :-) The SMART error report did not lead to any failures on the original disk. I won't advise ignoring these warnings, but the disk still has enough life in it to last a few more days. (Just make sure you have a good backup.) And oh, the horror of having to make a complete backup of such a huge disk. :-) If your data is important, make sure you have something that supports incremental backups and lots of space. (In my case, the data wasn't very important, just practical to have on-disk together.)

    Read the article

  • Oracle’s Sun Server X4-8 with Built-in Elastic Computing

    - by kgee
    We are excited to announce the release of Oracle's new 8-socket server, Sun Server X4-8. It’s the most flexible 8-socket x86 server Oracle has ever designed, and also the most powerful. Not only does it use the fastest Intel® Xeon® E7 v2 processors, but its memory, I/O and storage subsystems are also designed for maximum performance and throughput. Like its predecessor, the Sun Server X4-8 uses a “glueless” design that allows for maximum performance for Oracle Database, while also reducing power consumption and improving reliability. The specs are pretty impressive. Sun Server X4-8 supports 120 cores (or 240 threads), 6 TB memory, 9.6 TB HDD capacity or 3.2 TB SSD capacity, contains 16 PCIe Gen 3 I/O expansion slots, and allows for up to 6.4 TB of Sun Flash Accelerator F80 PCIe Cards. The Sun Server X4-8 is also the most dense x86 server with its 5U chassis, allowing 60% higher rack-level core and DIMM slot density than the competition. There has been a lot of innovation in Oracle’s x86 product line, but the latest and most significant is a capability called elastic computing. This new capability is built into each Sun Server X4-8. Elastic computing starts with the Intel processor. While Intel provides a wide range of processors, each with a fixed combination of core count, operational frequency, and power consumption, customers have been forced to make tradeoffs when they select a particular processor. They have had to make educated guesses on which particular processor (core count/frequency/cache size) will be best suited for the workload they intend to execute on the server. Oracle and Intel worked jointly to define a new processor, the Intel Xeon E7-8895 v2 for the Sun Server X4-8, that has unique characteristics and effectively combines the capabilities of three different Xeon processors into a single processor. Oracle system design engineers worked closely with Oracle’s operating system development teams to achieve the ability to vary the core count and operating frequency of the Xeon E7-8895 v2 processor dynamically, without the need for a system-level reboot. Along with the new processor, enhancements have been made to the system BIOS, Oracle Solaris, and Oracle Linux, which allow the processors in the system to dynamically clock up to faster speeds as cores are disabled and to reach higher maximum turbo frequencies for the remaining active cores. One customer, a stock market trading company, will take advantage of the elastic computing capability of Sun Server X4-8 by repurposing servers between daytime stock trading activity and nighttime stock portfolio processing, daily, to achieve maximum performance of each workload. To learn more about Sun Server X4-8, you can find more details, including the data sheet and white papers, here. Josh Rosen is a Principal Product Manager for Oracle’s x86 servers, focusing on Oracle’s operating systems and software. He previously spent more than a decade as a developer and architect of system management software. Josh has worked on system management for many of Oracle's hardware products ranging from the earliest blade systems to the latest Oracle x86 servers.

    Read the article

  • Cannot reactivate RAID-5 volume: The size of the plex member is invalid

    - by Ian Boyd
    We had a 3-drive Windows Server 2008 R2 RAID-5 fail (operating in redundancy mode): WDC 1 TB WDC 1 TB WDC 1 TB We removed the failed hard drive, and put a WDC 1 TB drive (that we had standing by) into the machine. When launched, Disk Manager asked permission to "initialize" the disk as either: Master Boot Record (MBR) GUID Partition Table (GPT) We initialized the disk as GPT, converted it to dynamic, and tried to use the Repair Volume command - except it was greyed out (which is a terrifying thing on a failed production server hosting 3 virtual servers). I tried from the diskpart command-line tool. First we look for our RAID-5 volume that is in Failed Rd mode: DISKPART> list volume Volume ### Ltr Label Fs Type Size Status Info ---------- --- ----------- ----- ---------- ------- --------- -------- Volume 0 E VMs (Raid5) NTFS RAID-5 1863 GB Failed Rd Volume 1 D DVD-ROM 0 B No Media Volume 2 System Rese NTFS Partition 100 MB Healthy System Volume 3 C NTFS Partition 1862 GB Healthy Boot There, Volume 0. Make that our active context: DISKPART> select volume 0 Volume 0 is the selected volume. Now we need to find the disk we will be repairing the volume with: DISKPART> list disk Disk ### Status Size Free Dyn Gpt -------- ------------- ------- ------- --- --- Disk 0 Online 931 GB 0 B * Disk 1 Online 931 GB 931 GB * Disk 2 Online 1863 GB 0 B Disk 3 Online 931 GB 0 B * Disk M0 Missing 0 B 0 B * The disk with 931 GB free, Disk 1. Now we just need to repair the volume: DISKPART> repair disk=1 Virtual Disk Service error: The size of the plex member is invalid.

    Read the article

  • Insufficient storage available to create shadow copy

    - by Bob.at.SBS
    I have used the "Windows 7 File Recovery" tool under Windows 8 to create system image backups to an external USB hard drive. I built a new Windows 8.1 machine, and I want to create my first system image backup of that machine to the same USB hard drive. The "Windows 7 File Recovery" tool is gone in Windows 8.1, but wbAdmin is alive and well: wbAdmin start backup -backupTarget:\\?\Volume{2a2b...994f} -allCritical -quiet fails with this text displayed: wbadmin 1.0 - Backup command-line tool (C) Copyright 2013 Microsoft Corporation. All rights reserved. Retrieving volume information... This will back up (EFI System Partition),(C:),Recovery (300.00 MB) to \?\Volume {2a2b1255-3a86-11e3-be86-b8ca3a83994f}. The backup operation to F: is starting. Creating a shadow copy of the volumes specified for backup... Summary of the backup operation: The backup operation stopped before completing. The backup operation stopped before completing. Detailed error: ERROR - A Volume Shadow Copy Service operation error has occurred: (0x8004231f) Insufficient storage available to create either the shadow copy storage file or other shadow copy data. The EFI System Partition is 100 MB The Recovery Partition is 300 MB The C partition is 1.72 TB, NTFS, 218 GB used, 1.51 TB free The destination drive is 1.81 TB, NTFS, 678 GB used, 1.15 TB free I've fiddled with vssadmin resize shadowstorage, with no change in the error. vssadmin list shadowstorage displays: Shadow Copy Storage association For volume: (C:)\?\Volume{37a0...263}\ Shadow Copy Storage volume: (C:)\?\Volume{37a0...263}\ Used Shadow Copy Storage space: 2.39 GB (0%) Allocated Shadow Copy Storage space: 2.81 GB (0%) Maximum Shadow Copy Storage space: 531 GB (30%) Shadow Copy Storage association For volume: (F:)\?\Volume{2a2...94f}\ Shadow Copy Storage volume: (F:)\?\Volume{2a2...94f}\ Used Shadow Copy Storage space: 334 GB (17%) Allocated Shadow Copy Storage space: 337 GB (18%) Maximum Shadow Copy Storage space: UNBOUNDED (922154758%) (Yeah, the "percent calculation" for UNBOUNDED is seriously bogus.) I've run SFC /verifyonly and it seems happy. I've verified that the new `Volume Shadow Copy" service starts when I start the backup operation. Any suggestions?

    Read the article

  • How do I improve my incremental-backup performance?

    - by Alistair Bell
    I'm currently using the traditional rsync+cp -al method to create incremental/snapshot backups of our server tree. The backups are going onto a pair of eight-disk towers connected to the backup machine (a Sandy Bridge machine with 16 GB of RAM, running CentOS 5.5) via four eSATA connections (four disks per connection). Each disk is a regular 2 TB disk, so we have 32 TB of disk space connected to the backup machine. We're backing up about 20 TB of data on the servers with this. The problem is that each daily backup is taking more than 24 hours, and the real time-killer isn't the actual rsync, but the time it takes to perform a cp -al of the tree locally on the backup machine. It's taking more than 12 hours just to make the shadow copy of the tree, and as far as I can tell the performance bottleneck is the disk (top shows the cp using a lot of RAM but not a lot of CPU and mostly in uninterruptible-sleep state). We have the server data split into four major volumes (and a few minor ones), and each of these backups runs in parallel (with some offsets in the cron to try to get some disks' cp done first). There are two volumes on the backup drive, both striped LVM volumes of 16 TB each. So obviously I need to improve the performance because it's unusable as it stands. The first question is: when CentOS 6 comes out, with support for btrfs, will making snapshots of subvolumes with btrfs substantially increase this performance? The second is: is there a way, with ext3 or something else supported in CentOS 5 or 6, to 'encourage' it to put the directories/inodes in one part of a volume (which could happen to be the part that's on an SSD, via LVM) and the files in another? That would presumably solve the problem, but I don't know of ways to hint ext3 like that.

    Read the article

  • OpenPGP does not work in my Thunderbird installation

    - by zerozero
    Hello community, as mentioned above, I have encountered serious trouble. Here are the versions of the related software: SuSE 11.2, Thunderbird v3.1.6 (released October 27, 2010), Firefox v3.6.12. I created an installation with its own partition both for the user and for the TB mails. For a new installation on other hardware I took these partitions. When I wanted to read the emails, I got an error message like this: The GPG agent for your GnuPG version 2.0.12 couldn't get started Further, I got an error message about access to enigmail services: The file jar:file:///usr/lib/mozilla/extensions/{3550f703-e582-4d05-9a08-453d09bdfdc6}/{847b3a00-7ab1-11d4-8f02-006008948af5}/chrome/enigmail.jar!/locale/de-DE/enigmail/help/initError.html couldn't be found. I found out that this path comes neither from TB nor from Firefox, but from enigmail. I installed several (un)packing programs; the only effect was that the OpenPGP entry appeared in the TB menu. The errors described above repeat on every attempt to read an email. I deleted and re-installed enigmail, but the errors don't disappear. What can I do to get rid of these error messages? Thanks in advance

    Read the article

  • External storage for 2 TB of backups and 4 TB of data: which RAID level? Hardware vs. software?

    - by Jerry Mayers
    I have a Mac Mini set up as a media center/file server. Currently I just have a hodgepodge mess of external drives for storage. I'm maxed out, and I have some new laptops on the way with much larger drives, so I need to work out a good storage solution for backing them up, as well as for storing media on the server. I need around 2 TB of storage for the Time Machine backups from my various systems and around 2 TB more for media. I would like to build this to handle around 6 TB total so I have some growing room. Since I'm using a Mac Mini as the server I need to use external enclosure(s) that support USB 2 or FireWire 800 (preferred) or gigabit Ethernet. Performance of the system isn't a huge concern since the majority of the access from other computers is done over 802.11n. I plan on using 2 TB drives for the final version, but initially I'll try to use my existing two 1 TB drives plus some new 2 TB drives, and swap the 1 TB ones out as I fill up. As to the actual questions: Should I use hardware RAID in some enclosure? Because if the enclosure dies I have to find an identical one to get to my data, right? Wouldn't a software RAID be better, as I can use any method of connecting the drives to the system? Remember, OS X Server is my OS. If I had to reinstall OS X, could I restore the software RAID easily? What RAID version should I use? For the 2 TB used for the Time Machine disk I don't see why I need RAID, just a single 2 TB drive, since it's already the backup; but for the remaining 4 TB it would be the only copy of the data, so I should build in some redundancy. I had a RAID 5 setup years ago using a cheap RAID PCI card with a 2 TB array, and when a drive died it wanted 48 hours to rebuild. Is this crazy slow for a setup of this size or is this to be expected? Any suggestions as to drive enclosures?

    Read the article

  • GRUB Installation Failed: Fatal Error ... now what do I do?

    - by eklavya
    I know there are some threads that touch this, but I feel I have done something uniquely stupid, hence the post and plea for help. I am a beginner at Linux. I have a PC with an HDD (hard disk drive) and an SSD (solid state drive). It was running Linux Mint: /dev/sda1 - HDD Partition 1 - 2 TB (mounted as /home) /dev/sda2 - HDD Partition 2 - 1 TB (separate backup drive, I was backing up files to this) /dev/sdb1 - SSD Partition 1 - 100 GB (OS) /dev/sdb2 - SSD Partition 2 - 20 GB (swap) The operating system was Linux Mint and was installed on /dev/sdb1, i.e. the solid state drive. I had partitioned sda into 2 TB and 1 TB and presented the 2 TB as /home to the OS. Anyway, last night I decided to make a return to Ubuntu via the path of Elementary OS. Everything went fine with the install until it stated that the GRUB installation failed and that this was a fatal error (no kidding, I said). Now I am stuck. I have definitely done something wrong and don't know what it is... My biggest pain is the files on /dev/sda2. I want to save these before I try something drastic like wiping /dev/sda completely. So I have the following questions... Can I use a live CD/USB to save these files? I can see /dev/sda2 but was unable to access the files in the live CD. Last but not least... how do I fix the main issue here? Why could the OS not install GRUB? And why is my SSD /dev/sdb and not /dev/sda? Does that have something to do with the fact that my master boot record sits on the HDD /dev/sda and not /dev/sdb?

    Read the article

  • Windows Service suddenly doing nothing

    - by TB
    Hi, My Windows service uses a Thread (not a timer) which loops continuously and sleeps for 1 second every loop using: evet.WaitOne(interval); When I start the service it works fine, and I can see in the task manager that it is running, consuming and releasing memory, consuming processor time, etc. That is all normal, but after a while (a random amount of time) the service simply stops! It is still there in the task manager, but it is no longer consuming any processor time and its memory consumption is not changing. It simply died, but is still there in the task manager like a zombie. I know that many exceptions might happen while running the service (it is really doing many things), but all those exceptions are handled in try/catch blocks, so why does my "always looping" thread stop? This thread also logs every time it loops; when it freezes in this way it does not log anything (of course).
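    For comparison, a minimal sketch of a worker loop that is hard to kill silently; DoWork and Log are hypothetical placeholders, and the stop event mirrors the WaitOne(interval) pattern from the question. The point is that the catch sits inside the loop and the exit paths are logged, so a dying thread at least leaves a trace:

      using System;
      using System.Threading;

      class Worker
      {
          private readonly ManualResetEvent _stopEvent = new ManualResetEvent(false);
          private readonly TimeSpan _interval = TimeSpan.FromSeconds(1);

          public void Run()
          {
              try
              {
                  // WaitOne returns true when the stop event is signalled.
                  while (!_stopEvent.WaitOne(_interval))
                  {
                      try
                      {
                          DoWork();          // hypothetical payload
                      }
                      catch (Exception ex)
                      {
                          Log("Iteration failed: " + ex);   // swallow and keep looping
                      }
                  }
                  Log("Worker asked to stop.");
              }
              catch (Exception ex)
              {
                  // Anything reaching this point would otherwise end the thread silently.
                  Log("Worker thread died: " + ex);
              }
          }

          private void DoWork() { /* real work goes here */ }
          private void Log(string message) { /* write to the service log */ }
      }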

    Read the article

  • Gmail Zend IMAP - latency when fetching message IDs

    - by T.B Ygg
    I have this code to fetch emails from Gmail using IMAP with the Zend Framework. I go back 2 days in my search (as I do not want all messages). It all works well, but it takes forever to load the messages, and I need to do this for 5+ users. It seems like the search goes through the entire Gmail message archive to get the newest ones. My code looks like this: $dato = date('j-F-Y', strtotime($Date. ' - 2 days')); $dato = "SINCE ".$dato; $messageids = $imap->search(array($dato)); Any ideas on how to make Zend work faster?

    Read the article
