Search Results

Search found 9923 results on 397 pages for 'state'.

Page 111/397 | < Previous Page | 107 108 109 110 111 112 113 114 115 116 117 118  | Next Page >

  • "Initializing - Busy - Stopping" LOOP issue in Azure deployement

    - by Kushal Waikar
    Hi folks, I am trying to deploy an Azure cloud application on Windows Azure. The application's specifications: it has one web role, an ASP.NET MVC application (the ASP.NET charting control is used in it); it has no worker role; third-party references (the MVC, ASP charting control and ASP provider DLLs) are set with Copy Local = true; there is no DiagnosticsConnectionString in the service configuration file; and it uses the ASP provider for session state management. The application runs successfully on the local dev fabric, but when I try to deploy it on Windows Azure it gets stuck in a loop, with the status cycling between the Initializing, Busy and Stopping states. It never reaches the READY state. There seem to be no ERROR logs conveying the deployment issues to the user. So is there any way to diagnose deployment issues? Is there any way to get deployment ERROR logs? Any kind of help will be appreciated. Thanks, Kushal
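
    A common first step for this class of stuck deployment was to wire up Windows Azure Diagnostics so that role-startup errors land in storage instead of disappearing. A minimal sketch, assuming the Azure SDK 1.x API that was current at the time (the setting name below is the conventional one and must also be declared in ServiceDefinition.csdef and given a value in ServiceConfiguration.cscfg):

      // WebRole.cs
      using Microsoft.WindowsAzure.Diagnostics;
      using Microsoft.WindowsAzure.ServiceRuntime;

      public class WebRole : RoleEntryPoint
      {
          public override bool OnStart()
          {
              // Starts the diagnostics agent against the storage account named by the
              // "DiagnosticsConnectionString" setting; transfer schedules can then be
              // configured so trace output survives role recycles.
              DiagnosticMonitor.Start("DiagnosticsConnectionString");
              return base.OnStart();
          }
      }

    With some diagnostics sink in place, an Initializing/Busy/Stopping loop usually turns out to be a missing dependency or a startup exception that now shows up in the persisted logs.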

    Read the article

  • Two versions of same asp.net app using same server as stateserver - bad?

    - by MGOwen
    We have 2 production web servers for our web app, load balanced to handle lots of traffic. We also have a similar setup for testing. Test pool: [TEST 1]---[TEST 2] Prod pool: [PROD 1]---[PROD 2] When comparing the Web.config of the app versions (test vs. live) I discovered something surprising: both pools have the same value for stateConnectionString. If I understand right, this means they are using the same state server: <sessionState mode="StateServer" stateConnectionString="tcpip=123.123.123.123:42424" cookieless="false" timeout="30"/> Is this a problem? (How does the state server keep the two pools from getting mixed up?) I was seeing odd, intermittent slowdowns/errors on the test server, which is why I was looking at this in the first place, but the prod pool runs fine...
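
    If the shared endpoint does turn out to be the culprit, the simplest separation is to give each pool its own ASP.NET State Service endpoint (a different machine, or a different port on the same machine). A sketch with placeholder addresses, not the real ones:

      <!-- Web.config on PROD 1 / PROD 2 -->
      <sessionState mode="StateServer"
                    stateConnectionString="tcpip=10.0.0.10:42424"
                    cookieless="false" timeout="30" />

      <!-- Web.config on TEST 1 / TEST 2 -->
      <sessionState mode="StateServer"
                    stateConnectionString="tcpip=10.0.0.20:42424"
                    cookieless="false" timeout="30" />

    Keeping test traffic off the production state server also removes one variable when chasing the intermittent test-pool slowdowns.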

    Read the article

  • How do people handle working with Code Names for their projects?

    - by Mark
    Hi all, recently we started using code names for several different types of prototype applications, all following a theme. This made things a little more fun and was a great idea. The problem is that I'm not too sure how people deal with migrating a codebase from its "codename" state into a version 1.0 state, which may have a proper name... not something that a client really should see :) We are using Visual Studio at the moment, and I can see that you can change the assembly name, but there are references to the namespaces, etc. that would really be a large change to make. Do people bother changing things like namespaces before the v1.0 release?

    Read the article

  • try catch finally

    - by gligom
    Maybe this is simple for you, but for me it is not. I have this code:

      private int InsertData()
      {
          int rezultat = 0;
          try
          {
              if (sqlconn.State != ConnectionState.Open)
              {
                  sqlconn.Open();
              }
              rezultat = (int)cmd.ExecuteScalar();
          }
          catch (Exception ex)
          {
              lblMesaje.Text = "Eroare: " + ex.Message.ToString();
          }
          finally
          {
              if (sqlconn.State != ConnectionState.Closed)
              {
                  sqlconn.Close();
              }
          }
          return rezultat;
      }

    It is just for inserting a new record in a table. Even though "rezultat = (int)cmd.ExecuteScalar();" throws an error ("Specified cast is not valid."), the row is still inserted in the database and execution continues. Why does it continue? Maybe I don't understand try/catch/finally yet :) Thank you!
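
    For what it's worth, this is exactly the behaviour try/catch/finally promises: the INSERT has already executed by the time the cast fails, the catch block swallows the exception, the finally block closes the connection, and the method falls through to return rezultat (still 0). A common cause of "Specified cast is not valid." here is the command selecting SCOPE_IDENTITY(), which comes back boxed as a decimal, so a direct (int) unbox fails; that diagnosis is an assumption, since the SQL isn't shown. A hedged sketch of a more defensive version:

      private int InsertData()
      {
          int rezultat = 0;
          try
          {
              if (sqlconn.State != ConnectionState.Open)
                  sqlconn.Open();

              // ExecuteScalar returns object; Convert.ToInt32 copes with the value
              // arriving as decimal, int, or another numeric type.
              object scalar = cmd.ExecuteScalar();
              rezultat = Convert.ToInt32(scalar);
          }
          catch (Exception ex)
          {
              lblMesaje.Text = "Eroare: " + ex.Message;
              throw; // or return a sentinel, if callers must not treat this as success
          }
          finally
          {
              if (sqlconn.State != ConnectionState.Closed)
                  sqlconn.Close();
          }
          return rezultat;
      }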

    Read the article

  • MVC3 Razor DropDownListFor Enums

    - by jordan.baucke
    Trying to get my project updated to MVC3, there is something I just can't find. I have a simple enum datatype:

      public enum States { AL, AK, AZ, ... WY }

    which I want to use as a DropDown/SelectList in my view of a model that contains this datatype:

      public class FormModel
      {
          public States State { get; set; }
      }

    Pretty straightforward: when I use the auto-generated view for this partial class, it ignores this type. I need a simple select list that sets the value of the enum as the selected item when I hit submit and process via my AJAX/JSON POST method. And then the view (???!):

      <div class="editor-field">
          @Html.DropDownListFor(model => model.State, model => model.States)
      </div>

    Thanks in advance for the advice!
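
    One MVC3-era approach, as a sketch (MVC3 has no built-in enum helper; "StateList" is an illustrative ViewBag name): build a SelectList from the enum values in the controller and bind it in the view.

      // Controller action (needs using System; using System.Linq; using System.Web.Mvc;)
      ViewBag.StateList = new SelectList(Enum.GetValues(typeof(States)).Cast<States>());

      @* Razor view *@
      @Html.DropDownListFor(model => model.State, (SelectList)ViewBag.StateList)

    On postback the default model binder converts the posted text ("AL", "AK", ...) back into the States enum, so the AJAX/JSON handler receives a normal FormModel.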

    Read the article

  • problem in saving drag&drop object in database

    - by Mac Taylor
    Hey guys, I made a script, inspired by the WordPress widgets page, to drag & drop blocks of my site, but the problem is saving the positions after dropping. This is the jQuery code I used to do the above:

      <script type="text/javascript">
      $(function(){
          $('.widget')
              .each(function(){
                  $(this).hover(function(){
                      $(this).find('h4').addClass('collapse');
                  }, function(){
                      $(this).find('h4').removeClass('collapse');
                  })
                  .find('h4').hover(function(){
                      $(this).find('.in-widget-title').css('visibility', 'visible');
                  }, function(){
                      $(this).find('.in-widget-title').css('visibility', 'hidden');
                  })
                  .click(function(){
                      $(this).siblings('.widget-inside').toggle();
                      //Save state on change of collapse state of panel
                      updateWidgetData();
                  })
                  .end()
                  .find('.in-widget-title').css('visibility', 'hidden');
              });

          $('.column').sortable({
              connectWith: '.column',
              handle: 'h4',
              cursor: 'move',
              placeholder: 'placeholder',
              forcePlaceholderSize: true,
              opacity: 0.4,
              start: function(event, ui){
                  //Firefox, Safari/Chrome fire click event after drag is complete, fix for that
                  if($.browser.mozilla || $.browser.safari)
                      $(ui.item).find('.widget-inside').toggle();
              },
              stop: function(event, ui){
                  ui.item.css({'top':'0','left':'0'}); //Opera fix
                  if(!$.browser.mozilla && !$.browser.safari)
                      updateWidgetData();
              }
          })
          .disableSelection();
      });

      function updateWidgetData(){
          var items=[];
          $('.column').each(function(){
              var columnId=$(this).attr('id');
              $('.widget', this).each(function(i){
                  var collapsed=0;
                  if($(this).find('.widget-inside').css('display')=="none")
                      collapsed=1;
                  //Create Item object for current panel
                  var item={
                      id: $(this).attr('id'),
                      collapsed: collapsed,
                      order: i,
                      column: columnId
                  };
                  //Push item object into items array
                  items.push(item);
              });
          });
          //Assign items array to sortorder JSON variable
          var sortorder={ items: items };
          //Pass sortorder variable to server using ajax to save state
          $.post('updatePanels.php', 'data='+$.toJSON(sortorder), function(response){
              if(response=="success")
                  $("#console").html('<div class="success">Saved</div>').hide().fadeIn(1000);
              setTimeout(function(){ $('#console').fadeOut(1000); }, 2000);
          });
      }
      </script>

    There is also a simple PHP file, but the problem is that the data is not being sent to the target PHP file. Is there anything wrong with my code?
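
    One thing worth checking, sketched below: $.toJSON comes from the jquery-json plugin, and if that plugin isn't loaded, updateWidgetData() throws before $.post ever runs, which matches "nothing reaches the PHP file". Using the native JSON.stringify (and posting a named field so PHP sees $_POST['data']) avoids the dependency; 'updatePanels.php' and the markup classes are the ones from the question.

      function updateWidgetData(){
          var items = [];
          $('.column').each(function(){
              var columnId = $(this).attr('id');
              $('.widget', this).each(function(i){
                  items.push({
                      id: $(this).attr('id'),
                      collapsed: ($(this).find('.widget-inside').css('display') == "none") ? 1 : 0,
                      order: i,
                      column: columnId
                  });
              });
          });
          // jQuery url-encodes the object, so PHP reads it as $_POST['data']
          $.post('updatePanels.php', { data: JSON.stringify({ items: items }) }, function(response){
              if(response == "success")
                  $("#console").html('<div class="success">Saved</div>').hide().fadeIn(1000);
              setTimeout(function(){ $('#console').fadeOut(1000); }, 2000);
          });
      }

    Watching the request in Firebug's Net panel (or the browser's network tab) quickly confirms whether the POST is actually leaving the page.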

    Read the article

  • Dynamic context menus for a BlackBerry CLDC Application

    - by Jacob Tabak
    In my custom field in a BlackBerry CLDC Application, I want to display a specific context menu based on the current state of the field. My original idea was to do something like this: protected void makeContextMenu(ContextMenu contextMenu) { if (isPaused()) contextMenu.addItem(resumeMenuItem); else contextMenu.addItem(pauseMenuItem); } However, the state of the field doesn't seem to be affecting the items on the context menu. I'm assuming that the context menu only gets "made" once during the life of the field. Is there a way to do what I'm trying to do?

    Read the article

  • question about book example - Java Concurrency in Practice, Listing 4.12

    - by mike
    Hi, I am working through an example in Java Concurrency in Practice and am not understanding why a concurrency-safe container is necessary in the following code. I'm not seeing how the container "locations"'s state could be modified after construction; since it is published through an unmodifiableMap wrapper, it appears to me that an ordinary HashMap would suffice. E.g., it is accessed concurrently, but the state of the map is only accessed by readers, no writers. The value fields in the map are synchronized via delegation to the SafePoint class, so while the points are mutable, the keys for the hash, and their associated values (references to SafePoint instances) in the map, never change. I think my confusion is based on what precisely the state of the collection is in this problem. Thanks!! -Mike

    Listing 4.12, Java Concurrency in Practice (this listing is available as .java here, and also in chapter form via Google):

      @ThreadSafe
      public class PublishingVehicleTracker {
          private final Map<String, SafePoint> locations;
          private final Map<String, SafePoint> unmodifiableMap;

          public PublishingVehicleTracker(Map<String, SafePoint> locations) {
              this.locations = new ConcurrentHashMap<String, SafePoint>(locations);
              this.unmodifiableMap = Collections.unmodifiableMap(this.locations);
          }

          public Map<String, SafePoint> getLocations() {
              return unmodifiableMap;
          }

          public SafePoint getLocation(String id) {
              return locations.get(id);
          }

          public void setLocation(String id, int x, int y) {
              if (!locations.containsKey(id))
                  throw new IllegalArgumentException("invalid vehicle name: " + id);
              locations.get(id).set(x, y);
          }
      }

      // monitor-protected helper class
      @ThreadSafe
      public class SafePoint {
          @GuardedBy("this") private int x, y;

          private SafePoint(int[] a) { this(a[0], a[1]); }

          public SafePoint(SafePoint p) { this(p.get()); }

          public SafePoint(int x, int y) {
              this.x = x;
              this.y = y;
          }

          public synchronized int[] get() {
              return new int[] { x, y };
          }

          public synchronized void set(int x, int y) {
              this.x = x;
              this.y = y;
          }
      }

    Read the article

  • Python: Converting a tuple to a string with 'err'

    - by skylarking
    Given this:

      import os
      import subprocess

      def check_server():
          cl = subprocess.Popen(["nmap","10.7.1.71"], stdout=subprocess.PIPE)
          result = cl.communicate()
          print result

      check_server()

    check_server() returns this tuple:

      ('\nStarting Nmap 4.53 ( http://insecure.org ) at 2010-04-07 07:26 EDT\nInteresting ports on 10.7.1.71:\nNot shown: 1711 closed ports\nPORT STATE SERVICE\n21/tcp open ftp\n22/tcp open ssh\n80/tcp open http\n\nNmap done: 1 IP address (1 host up) scanned in 0.293 seconds\n', None)

    Changing the second line in the method to result, err = cl.communicate() results in check_server() returning:

      Starting Nmap 4.53 ( http://insecure.org ) at 2010-04-07 07:27 EDT
      Interesting ports on 10.7.1.71:
      Not shown: 1711 closed ports
      PORT STATE SERVICE
      21/tcp open ftp
      22/tcp open ssh
      80/tcp open http

      Nmap done: 1 IP address (1 host up) scanned in 0.319 seconds

    Looks to be the case that the tuple is converted to a string, and the \n's are being stripped... but how? What is 'err' and what exactly is it doing?
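
    A short sketch of what is actually happening: Popen.communicate() always returns the pair (stdout_data, stderr_data). Printing that pair shows its repr(), which is why the newlines appear as literal \n characters; after unpacking, print result prints just the stdout string, so the terminal renders the newlines. Nothing is converted or stripped, and err is simply the second element, None here because stderr was never piped.

      import subprocess

      def check_server(host="10.7.1.71"):                 # host value taken from the question
          cl = subprocess.Popen(["nmap", host], stdout=subprocess.PIPE)
          out, err = cl.communicate()                      # (stdout_data, stderr_data)
          return out, err

      out, err = check_server()
      print repr(out)    # the whole output as one string, \n shown literally
      print out          # same string, newlines rendered by the terminal
      print err          # None -- stderr=subprocess.PIPE was not requested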

    Read the article

  • What's a reasonable number of rows and tables to be able to join in MySQL?

    - by Philip Brocoum
    I have one table that maps locations to postal codes. For example, New York State has about 2000 postal codes. I have another table that maps mail to the postal codes it was sent to, but this table has about 5 million rows. I want to find all the mail that was sent to New York State, which seems simple enough, but the query is unbelievably slow. I haven't been able to even wait long enough for it to finish. Is the problem that there are 5 million rows? I can't help but think that 5 million shouldn't be such a large number for a computer these days... Oh, and everything is indexed. Is SQL just not designed to handle such large joins?
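
    For scale, 5 million rows is routine for MySQL; what usually makes a join like this crawl is a missing or unusable index on the join/filter columns (or an index that exists but isn't being chosen). A sketch of the query shape and indexes that normally make it fast; the table and column names are invented here because the schema isn't shown:

      -- one row per (postal_code, state, ...) location
      ALTER TABLE location ADD INDEX idx_location_state (state, postal_code);
      -- one row per piece of mail, keyed by destination postal code
      ALTER TABLE mail     ADD INDEX idx_mail_postal    (postal_code);

      SELECT m.*
      FROM location AS l
      JOIN mail     AS m ON m.postal_code = l.postal_code
      WHERE l.state = 'NY';

      -- EXPLAIN SELECT ... shows whether both indexes are actually being used.

    If EXPLAIN still reports a full scan on the mail table, the usual suspects are mismatched column types or collations between the two postal_code columns.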

    Read the article

  • Why won't this SQL CAST work?

    - by Kev
    I have an nvarchar(50) column in a SQL Server 2000 table defined as follows: TaskID nvarchar(50) NULL. I need to fill this column with some random SQL unique identifiers (I am unable to change the column type to uniqueidentifier). I tried this:

      UPDATE TaskData SET TaskID = CAST(NEWID() AS nvarchar)

    but I got the following error:

      Msg 8115, Level 16, State 2, Line 1
      Arithmetic overflow error converting expression to data type nvarchar.

    I also tried:

      UPDATE TaskData SET TaskID = CAST(NEWID() AS nvarchar(50))

    but then got this error:

      Msg 8152, Level 16, State 6, Line 1
      String or binary data would be truncated.

    I don't understand why this doesn't work when this does:

      DECLARE @TaskID nvarchar(50)
      SET @TaskID = CAST(NEWID() AS nvarchar(50))

    I also tried CONVERT(nvarchar, NEWID()) and CONVERT(nvarchar(50), NEWID()) but got the same errors.
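
    A diagnostic sketch that may help isolate this: the cast itself is fine (a uniqueidentifier renders as 36 characters, comfortably inside nvarchar(50)), which the throwaway-table test below shows. When the identical expression fails only against the real table, the message is usually coming from something else touched by the UPDATE, a trigger on TaskData being the classic suspect; that reading is an assumption, since only the one column is visible in the question.

      CREATE TABLE #t (TaskID nvarchar(50) NULL)
      INSERT INTO #t (TaskID) VALUES (NULL)

      UPDATE #t SET TaskID = CONVERT(nvarchar(50), NEWID())
      SELECT TaskID, LEN(TaskID) AS chars FROM #t   -- 36 characters, no overflow, no truncation

      DROP TABLE #t

    Running sp_helptrigger 'TaskData' (and checking for computed columns or indexed views over the table) is a quick way to see what else fires when the real UPDATE runs.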

    Read the article

  • Stream classes ... design, pattern for creating views over streams

    - by ToxicAvenger
    A question regarding the design of stream classes: I need a pattern for creating independent views over a single stream instance (in my case for reading). A view would be a consecutive part of the stream. The problem I have with the stream classes is that the state (the current read/write position) is coupled with the underlying data/storage. So if I need to partition a stream into different segments (whether the segments overlap or not doesn't matter), I cannot easily create views over the stream; the views would store start and end positions. Reading from a view - which would translate to reading from the underlying stream, adjusted by the view's start/end positions - would change the state of the underlying stream instance. So what I could do is take a read on a view instance, adjust the Position of the stream, and read the chunks I need. But I cannot do that concurrently. Why is it designed this way, and what kind of pattern could I implement to create independent views over a single stream instance that allow reading/writing independently (and concurrently)?
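
    A minimal sketch of the usual workaround in .NET (the type and member names are illustrative, not an existing framework class): each view keeps its own logical position, and every read seeks the shared seekable stream under a lock shared by all views, so no view ever observes another view's cursor.

      using System;
      using System.IO;

      public sealed class StreamView
      {
          private readonly Stream _source;   // must be seekable
          private readonly object _gate;     // one lock object shared by all views of _source
          private readonly long _start;
          private readonly long _length;
          private long _position;            // position local to this view

          public StreamView(Stream source, object gate, long start, long length)
          {
              _source = source; _gate = gate; _start = start; _length = length;
          }

          public int Read(byte[] buffer, int offset, int count)
          {
              lock (_gate)
              {
                  long remaining = _length - _position;
                  if (remaining <= 0) return 0;
                  _source.Position = _start + _position;
                  int read = _source.Read(buffer, offset, (int)Math.Min(count, remaining));
                  _position += read;
                  return read;
              }
          }
      }

    Reads on different views are serialized by the lock rather than truly parallel; genuinely concurrent access needs either one stream handle per view or an OS-level positional read, which the ordinary Stream class does not expose.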

    Read the article

  • Android bluetooth socket error

    - by ashwini
    I am using the backport Bluetooth API on Android 1.6, with the Google Bluetooth Chat sample app for testing. The app works fine in normal scenarios. But when I try to connect to a paired device that is powered off, I get the following error:

      01-04 09:00:11.629: ERROR/BluetoothEventLoop.cpp(84): onGetRemoteServiceChannelResult: D-Bus error: org.bluez.Error.ConnectionAttemptFailed (Host is down)
      01-04 09:00:11.729: DEBUG/dalvikvm(128): GC freed 4535 objects / 256008 bytes in 296ms
      01-04 09:00:21.880: ERROR/bluetooth_RfcommSocket.cpp(1433): connect error: Host is down (112)

    Yet the app sets its state to connected, and it is unable to catch the exception. Why does this happen? Or is it specific to the backport API? Any help is appreciated, as I am struggling a lot to get things to run properly.
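
    A sketch of a connect pattern that avoids reporting success prematurely. It is written against the standard android.bluetooth names (treating the backport as a mirror of them is an assumption here), and the helpers setState/manageConnectedSocket are illustrative stand-ins for whatever the chat sample uses:

      private void connectTo(BluetoothDevice device, UUID uuid) {
          BluetoothSocket socket = null;
          try {
              socket = device.createRfcommSocketToServiceRecord(uuid);
              socket.connect();              // blocks; throws IOException when the host is down
              setState(STATE_CONNECTED);     // only reached after connect() succeeds
              manageConnectedSocket(socket);
          } catch (IOException e) {
              Log.e(TAG, "unable to connect", e);
              setState(STATE_NONE);          // report the failure instead of "connected"
              if (socket != null) {
                  try { socket.close(); } catch (IOException ignored) { }
              }
          }
      }

    If the backport's connect() reports the failure asynchronously rather than by throwing, the same idea applies: defer the connected-state transition until the success callback actually fires.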

    Read the article

  • Multiple Progress Bar States in WPF

    - by kbo206
    I'm currently developing a WPF application in C# and I want a progress bar control that can be in a "paused" state as well as an "error" state, much like this: http://wyday.com/windows-7-progress-bar/ Unfortunately, that's a Windows Forms control, and hosting it via a WindowsFormsHost proved to be incompatible. My question is, how can I achieve a similar effect in WPF? Is it possible to give a progress bar multiple "states"? Is this kind of thing built into WPF and I'm just overlooking it? I'm mainly talking about the progress bar itself here; I'm pretty sure I know how to achieve this in the taskbar. All help is appreciated, especially code examples. Thanks!
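
    One way to fake paused/error states on the WPF ProgressBar itself, sketched below: bind a view-model property (State, an enum with Normal/Paused/Error values; the property name and colors here are illustrative) and recolor the bar with DataTriggers. The stock templates generally use Foreground as the fill brush, so yellow and red read the same way the Windows 7 bar does.

      <ProgressBar Value="{Binding Progress}" Minimum="0" Maximum="100" Height="20">
          <ProgressBar.Style>
              <Style TargetType="ProgressBar">
                  <!-- normal state: the stock green -->
                  <Setter Property="Foreground" Value="#FF06B025"/>
                  <Style.Triggers>
                      <DataTrigger Binding="{Binding State}" Value="Paused">
                          <Setter Property="Foreground" Value="#FFDACB26"/>
                      </DataTrigger>
                      <DataTrigger Binding="{Binding State}" Value="Error">
                          <Setter Property="Foreground" Value="#FFE81123"/>
                      </DataTrigger>
                  </Style.Triggers>
              </Style>
          </ProgressBar.Style>
      </ProgressBar>

    A full Windows 7 look (gradient, the glow animation freezing on pause) means retemplating the control, but the trigger approach covers the color-coded states with very little code.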

    Read the article

  • Changing coding style due to Android GC performance, how far is too far?

    - by Benju
    I keep hearing that Android applications should try to limit the number of objects created in order to reduce the workload on the garbage collector. It makes sense that you may not want to create massive numbers of objects to track in a limited memory footprint; on a traditional server application, for example, creating 100,000 objects within a few seconds would not be unheard of. The problem is how far I should take this. I've seen tons of examples of Android applications relying on static state in order to supposedly "speed things up". Does increasing the number of instances that need to be garbage collected from dozens to hundreds really make that big of a difference? I can understand not creating hundreds of thousands of objects the way you might on a full-blown Java EE server, but relying on a bunch of static state to (supposedly) reduce the number of objects to be garbage collected seems odd. How much is it really necessary to change your coding style in order to create performant Android apps?

    Read the article

  • Android Apps: What is the recommended targetSdk for broadest appeal?

    - by johnrock
    I have an Android app that only needs internet access, and I would like to target API level 3 (1.5) to reach the broadest handset base. However, it appears that targeting API level 3 implicitly requires two additional permissions that are visible to users: modify SD card, and read phone state (see http://stackoverflow.com/questions/1747178/android-permissions-phone-calls-read-phone-state-and-identity). So the conundrum: do I target API level 4 and turn away users running 1.5, or do I target API level 3 and turn away users who are upset that my app is requesting permissions it shouldn't need? What is the smartest thing to do here? Are there really a lot of users still limited to API level 3? I appreciate any wisdom offered! Thanks!
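
    For reference, a sketch of the manifest piece this hinges on. The implicit grants are tied to the declared SDK levels, so setting targetSdkVersion to 4 while keeping minSdkVersion at 3 is commonly cited as the way to keep 1.5 users and drop the implied WRITE_EXTERNAL_STORAGE / READ_PHONE_STATE entries; treat the exact behaviour as something to verify against your own build, since it depends on how the platform back-fills legacy permissions.

      <uses-sdk android:minSdkVersion="3" android:targetSdkVersion="4" />
      <uses-permission android:name="android.permission.INTERNET" />

    Older platforms simply ignore the targetSdkVersion attribute they don't recognize, so the declaration itself does not shrink the audience.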

    Read the article

  • Alternate way to create a clone of a UNIX System

    - by Spirit
    THE STORY: (If you don't like to read much, down below is the question :) ) Where I work we have two HP RP2470 servers same hardware same number of hard drives same everything :). One of them is a production server and runs HP-UX 11.00. The poor ba***rd hasn't been turned off for years and now I have to make a clone of it on the other server - just in case, for redundancy. The problem is simple (or not simple) as I have to make the the other server exactly the same. However the old version of OS (UX 11.00 is a history now) and the old software running on it, have made my task almost impossible. On the production server there is also a cloning/recover utility Ignite-UX. I tried many times to create a recovery tape with it. Then when I load the tape on the backup server, it succeeds with the loading of the tape (no errors no warnings) but on the next restart it fails to load the OS :S and drops into HP`s ISL prompt. --- THE QUESTION: Is there an alternate way to create a clone of the Unix System? The environment is: 1. 2x HP RP2470 Servers (non-Intel), same hardware, same number od HDDs (two each of them) same everything. 2. OS running: HP-UX 11.00 The production server has to be cloned without downtime - sadly :( as I hope that they will reconsider on this one For example (like on Windows platforms), if you try to copy an entire HDD with Windows inside on another HDD, and then put that HDD on another PC it will still work, as long as the hardware is the same. Can I do something like that with a Unix system? Can I somehow COPY the contents of the entire HDD, put those on another HDD, and then just load the HDD into the other server? (If you haven't read the story the servers are exactly the same) Will it work? Can it be done with ordinary commands like cp or dump or something like that? Does any one have a similar experience? --- UPDATE: 26.01.2012 NOTE: The update is related to "The Story". If you haven't read that part then you can skip this update. This is just a short update on the recover logs from the Ignite Tape.. someone with more exp. might notice something.. ... --- READING CONTENTS OF THE IGNITE TAPE --- --- OUTPUT OMITED --- ... ... x ./configure3, 413696 bytes, 808 tape blocks x ./monitor_bpr, 20480 bytes, 40 tape blocks * Download_mini-system: Complete * Loading_software: Begin * Installing boot area on disk. * Enabling swap areas. * Backing up LVM configuration for "vg00". * Processing the archive source (Recovery Archive). * Wed Jan 25 15:27:32 EST 2012: Starting archive load of the source (Recovery Archive). * Positioning the tape (/dev/rmt/0mn). * Archive extraction from tape is beginning. Please wait. * Wed Jan 25 15:39:52 EST 2012: Completed archive load of the source (Recovery Archive). * Executing user specified script: "/opt/ignite/data/scripts/os_arch_post_l". * Running in recovery mode (os_arch_post_l). * Running the ioinit command ("/sbin/ioinit -c") * Creating device files via the insf command. 
insf: Installing special files for sdisk instance 0 address 0/0/1/1.15.0 insf: Installing special files for sdisk instance 1 address 0/0/2/0.1.0 insf: Installing special files for sdisk instance 2 address 0/0/2/1.15.0 insf: Installing special files for stape instance 0 address 0/0/1/0.3.0 insf: Installing special files for btlan instance 0 address 0/0/0/0 insf: Installing special files for btlan instance 1 address 0/2/0/0 insf: Installing special files for pseudo driver dlpi insf: Installing special files for pseudo driver kepd insf: Installing special files for pseudo driver framebuf insf: Installing special files for pseudo driver sad * Running "/opt/upgrade/bin/tlinstall -v" and correcting transition link permissions. * Constructing the bootconf file. * Setting primary boot path to "0/0/1/1.15.0". * Executing: "/var/adm/sw/products/PHSS_20146/pfiles/iux_postload". * Executing: "/var/adm/sw/products/PHSS_25982/pfiles/iux_postload". NOTE: tlinstall is searching filesystem - please be patient NOTE: Successfully completed * Loading_software: Complete * Build_Kernel: Begin NOTE: Since the /stand/vmunix kernel is already in place, the kernel will not be re-built. Note that no mod_kernel directives will be processed. * Build_Kernel: Complete * Boot_From_Client_Disk: Begin * Rebooting machine as expected. NOTE: Rebooting system. sync'ing disks (0 buffers to flush): 0 buffers not flushed 0 buffers still dirty Closing open logical volumes... Done Console reset done. Boot device reset done. ********** VIRTUAL FRONT PANEL ********** System Boot detected ***************************************** LEDs: RUN ATTENTION FAULT REMOTE POWER FLASH OFF OFF ON ON LED State: Running non-OS code. (i.e. Boot or Diagnostics) ... ... ... --- SERVER IS PERFORMING POST SEQUENCE HERE --- --- OUTPUT OMITED --- ... ... ... ***************************************** ************ EARLY BOOT VFP ************* End of early boot detected ***************************************** Firmware Version 43.50 Duplex Console IO Dependent Code (IODC) revision 1 ------------------------------------------------------------------------------ (c) Copyright 1995-2002, Hewlett-Packard Company, All rights reserved ------------------------------------------------------------------------------ Processor Speed State CoProcessor State Cache Size Number State Inst Data --------- -------- --------------------- ----------------- ------------ 0 650 MHz Active Functional 750 KB 1.5 MB 1 650 MHz Idle Functional 750 KB 1.5 MB Central Bus Speed (in MHz) : 120 Available Memory : 2097152 KB Good Memory Required : 16140 KB Primary boot path: 0/0/1/1.15 Alternate boot path: 0/0/2/1.15 Console path: 0/0/4/1.643 Keyboard path: 0/0/4/0.0 Processor is starting autoboot process. To discontinue, press any key within 10 seconds. 10 seconds expired. Proceeding... Trying Primary Boot Path ------------------------ Booting... Boot IO Dependent Code (IODC) revision 1 HARD Booted. ISL Revision A.00.38 OCT 26, 1994 ISL booting hpux ISL>
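
    For the "copy the whole disk" idea in the question, a heavily hedged sketch of the brute-force approach on HP-UX follows; the device paths are illustrative and must match your hardware, the target disk must be at least as large, and copying a live, mounted root disk can only ever give a crash-consistent image, so it is best done from single-user mode or with applications quiesced:

      # raw block copy of the boot disk onto the disk destined for the second server
      dd if=/dev/rdsk/c1t15d0 of=/dev/rdsk/c2t15d0 bs=1024k

      # the copy still needs valid boot structures on PA-RISC; roughly:
      mkboot -l /dev/rdsk/c2t15d0                   # -l = treat the target as an LVM disk
      setboot -p <hardware path of the boot disk>   # run on the second server

    Since both RP2470s are identical, a disk cloned this way generally boots on the other chassis, but LVM identifiers travel with the copy, so never present the clone and the original to the same running system. Given the no-downtime constraint, it is also worth revisiting Ignite-UX (make_tape_recovery / make_net_recovery) with its logs in hand, since that is the supported path for exactly this scenario.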

    Read the article

  • Video Streaming

    - by Josephe
    First I need to state that I am not the developer, so I will state the issue in business terms, but please respond with a technical solution, as I will be passing on your posts to the developer. We are building a web-based application in Silverlight and WPF that consists of communication between a customer service rep and a customer. The application will include a webcam that gives the customer the ability to see the customer service rep; the rep will also be able to see an image of themselves, but will not be able to see the customer. It is similar functionality to a live web chat with a video component. There could potentially be bandwidth issues, and we do not want the customer to have to download anything. Any recommendations on how to do this?

    Read the article

  • return false for parent javascript function

    - by jeerose
    $("#purchaser_contact").live('submit', function(){ $.ajax({ type: "POST", url: 'ajax/contactSearch.php', data: ({ fname: $("#fname").val(), lname: $("#lname").val(), city: $("#city").val(), state: $("#state").val() }), success: function(d) { var obj = JSON.parse( d ); if(obj.result != 0){ $("#contactSearch").remove(); $("#button-wrapper").before('<div id="contactSearch">' + obj.result + '</div>'); $("#contactSearch").slideToggle('slow'); //return false from submit!! }); }); I know there are other posts on this but I couldn't figure out how to apply them properly to my situation without making it messy. How do I return false on the submit event to prevent the form from submitting if I'm within the $.ajax and success functions? Thanks.

    Read the article

  • How should I deal with persisting an Android WebView if my activity gets destroyed while showing ano

    - by Shaun Crampton
    Hi, I'm building an Android application that's based around an enhanced WebView (based on PhoneGap). I've enhanced the WebView so that from JavaScript running inside you can invoke the native contact picker to choose a phone number (which may be supplied by Facebook for example). The problem I have is that the native contact picker runs in an activity in another process and the Android docs say that while another activity is open my activity may get destroyed due to memory constraints. I haven't actually seen this happen in my application but if it did then I'm guessing my WebView's state would be destroyed and the code that was waiting for the picked contact would be terminated. It seems a bit crazy that the activity requesting a contact could be destroyed while the contact picker is open. Does anyone know if that does indeed happen? Is there a way to persist the state of the WebView if it does? Thanks, -Shaun
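
    The standard hook for the "my activity may be destroyed behind the picker" case is onSaveInstanceState, sketched below; mWebView stands in for the PhoneGap-enhanced WebView field, and note that saveState persists mainly the back/forward list, so in-page JavaScript state (such as the pending pick callback) still needs to be rebuilt or re-requested after restore:

      @Override
      protected void onSaveInstanceState(Bundle outState) {
          super.onSaveInstanceState(outState);
          mWebView.saveState(outState);
      }

      @Override
      protected void onRestoreInstanceState(Bundle savedInstanceState) {
          super.onRestoreInstanceState(savedInstanceState);
          mWebView.restoreState(savedInstanceState);
      }

    The contact the user picked still arrives via onActivityResult in the recreated activity, so the usual pattern is to stash enough information in the Bundle to know which JavaScript callback the result belongs to.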

    Read the article

  • GAE more than 3 attributes to filter?

    - by Vik
    Hi, I am using GAE JDOQL and wrote a query like this:

      Query query = pm.newQuery(BloodDonor.class);
      query.setFilter(" state == :stateName && district == :distName &&" +
                      " city == :cityName && bloodGroup == :blood");

      @SuppressWarnings("unchecked")
      List<BloodDonor> donors = (List<BloodDonor>) query.execute(
          state.toLowerCase(), district.toLowerCase(),
          city.toLowerCase(), bloodGroup.toLowerCase());

    This doesn't work, as the execute method does not support more than 3 parameters. So how do I pass more than 3?
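
    A sketch using the JDO escape hatch for exactly this limit: execute() has overloads for at most three positional arguments, but executeWithMap() (or executeWithArray()) accepts any number of named parameters. Variable names match the question:

      Query query = pm.newQuery(BloodDonor.class);
      query.setFilter("state == :stateName && district == :distName && " +
                      "city == :cityName && bloodGroup == :blood");

      Map<String, Object> args = new HashMap<String, Object>();
      args.put("stateName", state.toLowerCase());
      args.put("distName",  district.toLowerCase());
      args.put("cityName",  city.toLowerCase());
      args.put("blood",     bloodGroup.toLowerCase());

      @SuppressWarnings("unchecked")
      List<BloodDonor> donors = (List<BloodDonor>) query.executeWithMap(args);

    Note that a filter over four properties may also need a composite datastore index; App Engine reports that separately if one is missing.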

    Read the article

  • mdadm: Win7-install created a boot partition on one of my RAID6 drives. How to rebuild?

    - by EXIT_FAILURE
    My problem happened when I attempted to install Windows 7 on it's own SSD. The Linux OS I used which has knowledge of the software RAID system is on a SSD that I disconnected prior to the install. This was so that windows (or I) wouldn't inadvertently mess it up. However, and in retrospect, foolishly, I left the RAID disks connected, thinking that windows wouldn't be so ridiculous as to mess with a HDD that it sees as just unallocated space. Boy was I wrong! After copying over the installation files to the SSD (as expected and desired), it also created an ntfs partition on one of the RAID disks. Both unexpected and totally undesired! . I changed out the SSDs again, and booted up in linux. mdadm didn't seem to have any problem assembling the array as before, but if I tried to mount the array, I got the error message: mount: wrong fs type, bad option, bad superblock on /dev/md0, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so dmesg: EXT4-fs (md0): ext4_check_descriptors: Block bitmap for group 0 not in group (block 1318081259)! EXT4-fs (md0): group descriptors corrupted! I then used qparted to delete the newly created ntfs partition on /dev/sdd so that it matched the other three /dev/sd{b,c,e}, and requested a resync of my array with echo repair > /sys/block/md0/md/sync_action This took around 4 hours, and upon completion, dmesg reports: md: md0: requested-resync done. A bit brief after a 4-hour task, though I'm unsure as to where other log files exist (I also seem to have messed up my sendmail configuration). In any case: No change reported according to mdadm, everything checks out. mdadm -D /dev/md0 still reports: Version : 1.2 Creation Time : Wed May 23 22:18:45 2012 Raid Level : raid6 Array Size : 3907026848 (3726.03 GiB 4000.80 GB) Used Dev Size : 1953513424 (1863.02 GiB 2000.40 GB) Raid Devices : 4 Total Devices : 4 Persistence : Superblock is persistent Update Time : Mon May 26 12:41:58 2014 State : clean Active Devices : 4 Working Devices : 4 Failed Devices : 0 Spare Devices : 0 Layout : left-symmetric Chunk Size : 4K Name : okamilinkun:0 UUID : 0c97ebf3:098864d8:126f44e3:e4337102 Events : 423 Number Major Minor RaidDevice State 0 8 16 0 active sync /dev/sdb 1 8 32 1 active sync /dev/sdc 2 8 48 2 active sync /dev/sdd 3 8 64 3 active sync /dev/sde Trying to mount it still reports: mount: wrong fs type, bad option, bad superblock on /dev/md0, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so and dmesg: EXT4-fs (md0): ext4_check_descriptors: Block bitmap for group 0 not in group (block 1318081259)! EXT4-fs (md0): group descriptors corrupted! I'm a bit unsure where to proceed from here, and trying stuff "to see if it works" is a bit too risky for me. This is what I suggest I should attempt to do: Tell mdadm that /dev/sdd (the one that windows wrote into) isn't reliable anymore, pretend it is newly re-introduced to the array, and reconstruct its content based on the other three drives. I also could be totally wrong in my assumptions, that the creation of the ntfs partition on /dev/sdd and subsequent deletion has changed something that cannot be fixed this way. My question: Help, what should I do? If I should do what I suggested , how do I do that? 
From reading documentation, etc, I would think maybe: mdadm --manage /dev/md0 --set-faulty /dev/sdd mdadm --manage /dev/md0 --remove /dev/sdd mdadm --manage /dev/md0 --re-add /dev/sdd However, the documentation examples suggest /dev/sdd1, which seems strange to me, as there is no partition there as far as linux is concerned, just unallocated space. Maybe these commands won't work without. Maybe it makes sense to mirror the partition table of one of the other raid devices that weren't touched, before --re-add. Something like: sfdisk -d /dev/sdb | sfdisk /dev/sdd Bonus question: Why would the Windows 7 installation do something so st...potentially dangerous? Update I went ahead and marked /dev/sdd as faulty, and removed it (not physically) from the array: # mdadm --manage /dev/md0 --set-faulty /dev/sdd # mdadm --manage /dev/md0 --remove /dev/sdd However, attempting to --re-add was disallowed: # mdadm --manage /dev/md0 --re-add /dev/sdd mdadm: --re-add for /dev/sdd to /dev/md0 is not possible --add, was fine. # mdadm --manage /dev/md0 --add /dev/sdd mdadm -D /dev/md0 now reports the state as clean, degraded, recovering, and /dev/sdd as spare rebuilding. /proc/mdstat shows the recovery progress: md0 : active raid6 sdd[4] sdc[1] sde[3] sdb[0] 3907026848 blocks super 1.2 level 6, 4k chunk, algorithm 2 [4/3] [UU_U] [>....................] recovery = 2.1% (42887780/1953513424) finish=348.7min speed=91297K/sec nmon also shows expected output: ¦sdb 0% 87.3 0.0| > |¦ ¦sdc 71% 109.1 0.0|RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR > |¦ ¦sdd 40% 0.0 87.3|WWWWWWWWWWWWWWWWWWWW > |¦ ¦sde 0% 87.3 0.0|> || It looks good so far. Crossing my fingers for another five+ hours :) Update 2 The recovery of /dev/sdd finished, with dmesg output: [44972.599552] md: md0: recovery done. [44972.682811] RAID conf printout: [44972.682815] --- level:6 rd:4 wd:4 [44972.682817] disk 0, o:1, dev:sdb [44972.682819] disk 1, o:1, dev:sdc [44972.682820] disk 2, o:1, dev:sdd [44972.682821] disk 3, o:1, dev:sde Attempting mount /dev/md0 reports: mount: wrong fs type, bad option, bad superblock on /dev/md0, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so And on dmesg: [44984.159908] EXT4-fs (md0): ext4_check_descriptors: Block bitmap for group 0 not in group (block 1318081259)! [44984.159912] EXT4-fs (md0): group descriptors corrupted! I'm not sure what do do now. Suggestions? 
Output of dumpe2fs /dev/md0: dumpe2fs 1.42.8 (20-Jun-2013) Filesystem volume name: Atlas Last mounted on: /mnt/atlas Filesystem UUID: e7bfb6a4-c907-4aa0-9b55-9528817bfd70 Filesystem magic number: 0xEF53 Filesystem revision #: 1 (dynamic) Filesystem features: has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize Filesystem flags: signed_directory_hash Default mount options: user_xattr acl Filesystem state: clean Errors behavior: Continue Filesystem OS type: Linux Inode count: 244195328 Block count: 976756712 Reserved block count: 48837835 Free blocks: 92000180 Free inodes: 243414877 First block: 0 Block size: 4096 Fragment size: 4096 Reserved GDT blocks: 791 Blocks per group: 32768 Fragments per group: 32768 Inodes per group: 8192 Inode blocks per group: 512 RAID stripe width: 2 Flex block group size: 16 Filesystem created: Thu May 24 07:22:41 2012 Last mount time: Sun May 25 23:44:38 2014 Last write time: Sun May 25 23:46:42 2014 Mount count: 341 Maximum mount count: -1 Last checked: Thu May 24 07:22:41 2012 Check interval: 0 (<none>) Lifetime writes: 4357 GB Reserved blocks uid: 0 (user root) Reserved blocks gid: 0 (group root) First inode: 11 Inode size: 256 Required extra isize: 28 Desired extra isize: 28 Journal inode: 8 Default directory hash: half_md4 Directory Hash Seed: e177a374-0b90-4eaa-b78f-d734aae13051 Journal backup: inode blocks dumpe2fs: Corrupt extent header while reading journal super block
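
    Since the array itself now reports clean while ext4 still complains about corrupted group descriptors, the next diagnostic steps usually involve fsck, sketched below with the device name from the question. This is offered cautiously: run it only with the filesystem unmounted, start with the report-only pass, and have a way back (an image of the array, or trust in the RAID redundancy) before letting e2fsck write anything, because a filesystem partially overwritten by another OS can lose data during repair.

      fsck.ext4 -n /dev/md0              # report-only pass; changes nothing

      mke2fs -n /dev/md0                 # -n prints where the backup superblocks would live
                                         # for these parameters, without creating anything

      e2fsck -b 32768 -B 4096 /dev/md0   # retry the check from one of those backups

    The dumpe2fs output ending in "Corrupt extent header while reading journal super block" suggests the journal area took some of the damage, so e2fsck may also offer to clear or recreate the journal; that is another step best taken only after the read-only passes have shown how widespread the corruption is.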

    Read the article
