Search Results

Search found 28880 results on 1156 pages for 'check disk'.

  • VB.net httpwebrequest For loop hangs every 10 or so iterations

    - by user574632
    Hello, I am trying to loop through an array and perform an HttpWebRequest in each iteration. The code seems to work, but it pauses for a while (a lot longer than the set timeout; I tried setting the timeout to 100 to check, and it still pauses) after every 10 or so iterations, then carries on working. Here is what I have so far:

    For i As Integer = 0 To numberOfProxies - 1
        Try
            'create request to a proxy-judge PHP page using the proxy
            Dim request As HttpWebRequest = HttpWebRequest.Create("http://www.pr0.net/deny/azenv.php")
            request.Proxy = New Net.WebProxy(proxies(i)) 'select the current proxy from the proxies array
            request.Timeout = 10000 'set timeout
            Dim response As HttpWebResponse = request.GetResponse()
            Dim sr As StreamReader = New StreamReader(response.GetResponseStream())
            Dim pageSourceCode As String = sr.ReadToEnd()
            'check the downloaded source for certain phrases; each identifies a type of proxy
            'HTTP_X_FORWARDED_FOR identifies a transparent proxy
            If pageSourceCode.Contains("HTTP_X_FORWARDED_FOR") Then
                'delegate method for cross-thread safety
                UpdateListbox(ListBox3, proxies(i))
            ElseIf pageSourceCode.Contains("HTTP_VIA") Then
                UpdateListbox(ListBox2, proxies(i))
            Else
                UpdateListbox(ListBox1, proxies(i))
            End If
        Catch ex As Exception
            'MessageBox.Show(ex.ToString) used in testing
            UpdateListbox(ListBox4, proxies(i))
        End Try
        completedProxyCheck += 1
        lblTotalProxiesChecked.CustomInvoke(Sub(l) l.Text = completedProxyCheck)
    Next

    I have searched all over this site and via Google, and most responses to this type of question say the response must be closed. I have tried a Using block, e.g.:

    Using response As HttpWebResponse = request.GetResponse()
        Using sr As StreamReader = New StreamReader(response.GetResponseStream())
            Dim pageSourceCode As String = sr.ReadToEnd()
            'check the downloaded source for certain phrases; each identifies a type of proxy
            'HTTP_X_FORWARDED_FOR identifies a transparent proxy
            If pageSourceCode.Contains("HTTP_X_FORWARDED_FOR") Then
                'delegate method for cross-thread safety
                UpdateListbox(ListBox3, proxies(i))
            ElseIf pageSourceCode.Contains("HTTP_VIA") Then
                UpdateListbox(ListBox2, proxies(i))
            Else
                UpdateListbox(ListBox1, proxies(i))
            End If
        End Using
    End Using

    And it makes no difference (though I may have implemented it wrong). As you can tell I'm very new to VB or any OOP, so it's probably a simple problem, but I can't work it out. Any suggestions, or just tips on how to diagnose these types of problems, would be really appreciated.
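
    The usual suspects for this pattern (offered as hedged suggestions, not a confirmed diagnosis) are responses that are not closed on every code path and the default HTTP limit of two concurrent connections per endpoint, which makes the loop stall until earlier connections time out. A minimal VB.NET sketch of what addressing both might look like; the limit of 100 is an arbitrary illustration:

    'Raise the connection limit once, e.g. at form load (the default is 2 per endpoint)
    System.Net.ServicePointManager.DefaultConnectionLimit = 100

    'Inside the loop: make sure the response is always closed, even on the failure path
    Using response As HttpWebResponse = CType(request.GetResponse(), HttpWebResponse)
        Using sr As New StreamReader(response.GetResponseStream())
            Dim pageSourceCode As String = sr.ReadToEnd()
            '... classify the proxy as before ...
        End Using
    End Using 'disposing here returns the connection to the pool immediately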

  • php mysql parallel array checkboxes

    - by gramware
    I have an array of checkboxes that I edit all at once to set a 'tinyint' field. The problem comes in when I uncheck a checkbox and post the values to MySQL. Since the form posts an array of checkboxes and another, parallel array of values to edit, unchecking a checkbox results in the 0 value being ignored by $_POST; the checkbox array is therefore shorter by the number of unchecked values in the form, while the array to be edited has all the records in the form. Here is the form code:

    while($row=mysql_fetch_array($result)) {
        $checked = ($row[active]==1) ? 'checked="checked"' : '';
        ...
        echo "<input type='hidden' name='TrID[]' value='$TrID'>";
        echo "<input type='checkbox' name='active1[]' value='$row[active]''$checked' >";
        ...

    and the processing PHP script:

    $userid = ($_POST['TrID']);
    $checked= ($_POST['active']);
    $i=0;
    foreach ($userid as $usid) {
        if ($checked[$i]==1){
            $check = 1;
        }
        else{
            $check = 0;
        }
        $qry1 ="UPDATE `epapers`.`clientelle` SET `active` = '$check' WHERE `clientelle`.`user_id` = '$usid' ";
        $result = mysql_query($qry1);
        $i++;
    }
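
    A common way around the missing-unchecked-value problem (a sketch only, reusing the table and field names from the question) is to key the checkbox name by the record ID instead of relying on parallel positional arrays, and then test for the key with isset():

    // Form: key the checkbox by the record ID instead of a positional array
    echo "<input type='checkbox' name='active[{$TrID}]' value='1' $checked>";

    // Processing: a missing key simply means the box was unchecked
    foreach ($_POST['TrID'] as $usid) {
        $check = isset($_POST['active'][$usid]) ? 1 : 0;
        $usid  = mysql_real_escape_string($usid);
        $qry1  = "UPDATE `epapers`.`clientelle` SET `active` = '$check'
                  WHERE `clientelle`.`user_id` = '$usid'";
        mysql_query($qry1);
    }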

  • JavaFX: JNLP file error

    - by jeff porter
    Hello everyone, I'm getting the following error when I upload a JavaFX app to a website, but I don't get it locally. I'm presuming that I'm missing something like the 'codebase' tag, but I'm not sure where it goes. Can anyone help me out please?

    Java Console error:

    exception: JNLP file error: iShout_Foxpro_browser.jnlp. Please make sure the file exists and check if "codebase" and "href" in the JNLP file are correct.
    java.io.FileNotFoundException: JNLP file error: iShout_Foxpro_browser.jnlp. Please make sure the file exists and check if "codebase" and "href" in the JNLP file are correct.
        at sun.plugin2.applet.JNLP2Manager.loadJarFiles(Unknown Source)
        at sun.plugin2.applet.Plugin2Manager$AppletExecutionRunnable.run(Unknown Source)
        at java.lang.Thread.run(Unknown Source)
    Exception: java.io.FileNotFoundException: JNLP file error: iShout_Foxpro_browser.jnlp. Please make sure the file exists and check if "codebase" and "href" in the JNLP file are correct.

    HTML file source:

    <html>
    <head>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
        <title>app_one</title>
    </head>
    <body>
        <script src="http://dl.javafx.com/1.3/dtfx.js"></script>
        <script>
            javafx(
                {
                    archive: "app_one.jar",
                    draggable: true,
                    width: 480,
                    height: 320,
                    code: "app.Main",
                    name: "app_one"
                }
            );
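
    For reference, the codebase and href attributes the error message refers to live on the root <jnlp> element of the JNLP file. A generic skeleton for illustration only; the URL below is a placeholder, not taken from the question:

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- codebase: absolute URL of the folder the .jnlp file is served from;
         href: the file name relative to that codebase -->
    <jnlp spec="1.0+"
          codebase="http://www.example.com/ishout/"
          href="iShout_Foxpro_browser.jnlp">
      <!-- information, resources and applet-desc elements as generated by the JavaFX packager -->
    </jnlp>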

  • Map the physical file path in asp.net mvc

    - by rmassart
    Hi, I am trying to read an XSLT file from disk in my ASP.NET MVC controller. What I am doing is the following:

    string filepath = HttpContext.Request.PhysicalApplicationPath;
    filepath += "/Content/Xsl/pubmed.xslt";
    string xsl = System.IO.File.ReadAllText(filepath);

    However, half way down this thread on forums.asp.net there is the following quote: "HttpContext.Current is evil and if you use it anywhere in your mvc app you are doing something wrong because you do not need it." Whilst I am not using "Current", I am wondering what is the best way to determine the absolute physical path of a file in MVC? For some reason (I don't know why!) HttpContext doesn't feel right for me. Is there a better (or recommended/best-practice) way of reading files from disk in ASP.Net MVC? Thanks for your help, Robin
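
    A commonly suggested alternative, offered here as a hedged sketch rather than an accepted answer, is to map the app-relative virtual path with Server.MapPath on the controller, or with HostingEnvironment.MapPath when no request context is to hand:

    // Inside a controller action: Server.MapPath resolves "~/..." to a physical path
    string filepath = Server.MapPath("~/Content/Xsl/pubmed.xslt");
    string xsl = System.IO.File.ReadAllText(filepath);

    // Outside a controller (e.g. in a service class), HostingEnvironment works without HttpContext
    string filepath2 = System.Web.Hosting.HostingEnvironment.MapPath("~/Content/Xsl/pubmed.xslt");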

  • use hg to synchronize my project between my two computer

    - by hguser
    Hi: I have two computers, the desktop at my company and the laptop at home, and I want to use hg to synchronize the project between them using a USB removable disk. I wonder how to implement this. The project on my desktop is D:\work\mypro, and I initialize it with:

    hg init

    Then I connect the USB disk, whose volume label is "H", and make a clone using:

    cd H:
    hg init
    hg clone D:\work\mypro mypro-usb

    And on my laptop I use:

    cd D:
    hg clone H:\mypro-usb mypro-home

    However, I do not know what to do if I modify some files (remove, add, or modify) in mypro-home: how do I propagate those changes to mypro-usb, and then to mypro on my desktop? How do I do it?
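
    A sketch of the usual round trip with Mercurial's commit/push/pull commands, assuming the three clones and paths listed in the question (the home clone is assumed to sit at D:\mypro-home):

    rem On the laptop: commit local work and push it to the USB clone
    cd D:\mypro-home
    hg commit -m "work done at home"
    hg push H:\mypro-usb

    rem On the desktop: pull the changes from the USB clone and update the working copy
    cd D:\work\mypro
    hg pull H:\mypro-usb
    hg update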

  • Passing parameters to eventListener function

    - by bryan sammon
    I have this function check(e), and I'd like to be able to pass parameters to it from test() when I add it to the event listener. Is this possible? For example, to pass the mainlink variable through the parameters. Is this even a good thing to do? I put the JavaScript below; I also have it on jsbin: http://jsbin.com/ujahe3/9/edit

    function test() {
        if (!document.getElementById('myid')) {
            var mainlink = document.getElementById('mainlink');
            var newElem = document.createElement('span');
            mainlink.appendChild(newElem);
            var linkElemAttrib = document.createAttribute('id');
            linkElemAttrib.value = "myid";
            newElem.setAttributeNode(linkElemAttrib);
            var linkElem = document.createElement('a');
            newElem.appendChild(linkElem);
            var linkElemAttrib = document.createAttribute('href');
            linkElemAttrib.value = "jsbin.com";
            linkElem.setAttributeNode(linkElemAttrib);
            var linkElemText = document.createTextNode('new click me');
            linkElem.appendChild(linkElemText);
            if (document.addEventListener) {
                document.addEventListener('click', check/*(WOULD LIKE TO PASS PARAMETERS HERE)*/, false);
            };
        };
    };

    function check(e) {
        if (document.getElementById('myid')) {
            if (document.getElementById('myid').parentNode === document.getElementById('mainlink')) {
                var target = (e && e.target) || (event && event.srcElement);
                var obj = document.getElementById('mainlink');
                if (target != obj) {
                    obj.removeChild(obj.lastChild);
                };
            };
        };
    };
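
    One common way to do this (a sketch, not the poster's final code) is to register an anonymous wrapper that closes over the variables you want and forwards them to check along with the event:

    // Wrap check in a closure so extra arguments can be passed along with the event
    if (document.addEventListener) {
        document.addEventListener('click', function (e) {
            check(e, mainlink);           // mainlink is captured from test()'s scope
        }, false);
    }

    // check then accepts the extra parameter
    function check(e, mainlink) {
        /* ... use mainlink instead of looking it up again ... */
    }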

  • How do I merge a local branch into TFS

    - by Johnny
    Hi, I did a stupid thing and branched my project on my local disk instead of doing it in TFS. So now I have two projects on my disk: the old one, which has TFS bindings, and the new one, which doesn't. I want to merge those changes back into the TFS project. How would I go about doing that? I can't do a Compare because my local branch has no TFS bindings. There should be some way to compare the differences between the two projects locally, merge the differences into the old project, and check in, but I can't find an easy way of doing that. Any other solutions?

  • How to scale MongoDB

    - by terence410
    I know that MongoDB can scale vertically, but what if I am running out of disk? I am currently using EC2 with EBS. As you know, an EBS volume has to be assigned a fixed size. What if MongoDB grows bigger than the EBS size? Do I have to create a larger EBS volume and copy the files over? Or should I start more MongoDB instances, each connected to a different EBS disk? In that case, I could connect to a different instance for different databases.

  • OpenCV to use in memory buffers or file pointers

    - by The Unknown
    The two OpenCV functions cvLoadImage and cvSaveImage accept file paths as arguments. For example, when saving an image it's cvSaveImage("/tmp/output.jpg", dstIpl), and it writes to the disk. Is there any way to feed these a buffer that is already in memory, so that instead of a disk write the output image stays in memory? I would like to know this for both cvSaveImage and cvLoadImage (read and write to memory buffers). Thanks! My goal is to store the encoded (JPEG) version of the file in memory. The same goes for cvLoadImage: I want to load a JPEG that's in memory into the IplImage format.
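
    A sketch of one possible direction, assuming an OpenCV 2.x build whose C API exposes cvEncodeImage/cvDecodeImage (check your version's headers, as names moved around between releases); these are the in-memory counterparts of cvSaveImage/cvLoadImage:

    #include <opencv/highgui.h>

    // Encode: the JPEG bytes end up in a CvMat buffer instead of on disk
    CvMat* jpegBuf = cvEncodeImage(".jpg", dstIpl);

    // Decode: rebuild an IplImage from an in-memory JPEG buffer
    IplImage* img = cvDecodeImage(jpegBuf, CV_LOAD_IMAGE_COLOR);

    cvReleaseMat(&jpegBuf);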

  • NTFS-compressing Virtual PC disks (on host and/or guest)

    - by nlawalker
    I'm hoping someone here can answer these definitively: Does putting a VHD file in an NTFS-compressed folder on the host improve performance of the virtual machine, diminish performance, or neither? What about using NTFS compression within the guest? Does using compresssion on either the host or the guest lead to any problems like read or write errors? If I were to put a VHD in a compressed folder on the host, would I benefit from compacting it? I've seen references to using NTFS compression on quite a few VPC "tips and tricks" blog posts, and it seems like half of them say to never do it and the other half say that not only does it save disk space but it actually can improve performance if you have a fast CPU and your primary performance bottleneck is the disk.

  • Timer Service in ejb 3.1 - schedule calling timeout problem

    - by Greg
    Hi guys, I have created a simple example with the @Singleton, @Schedule and @Timeout annotations to see if they would solve my problem. The scenario is this: the EJB calls a 'check' function every 5 seconds, and if certain conditions are met it creates a single-action timer that invokes some long-running process in an asynchronous fashion (it's a sort of queue implementation). It then continues to check, but as long as the long-running process is there it won't start another one. Below is the code I came up with, but this solution does not work, because it looks like the asynchronous call I'm making is in fact blocking my @Schedule method.

    @Singleton
    @Startup
    public class GenerationQueue {
        private Logger logger = Logger.getLogger(GenerationQueue.class.getName());
        private List<String> queue = new ArrayList<String>();
        private boolean available = true;

        @Resource
        TimerService timerService;

        @Schedule(persistent=true, minute="*", second="*/5", hour="*")
        public void checkQueueState() {
            logger.log(Level.INFO,"Queue state check: "+available+" size: "+queue.size()+", "+new Date());
            if (available) {
                timerService.createSingleActionTimer(new Date(), new TimerConfig(null, false));
            }
        }

        @Timeout
        private void generateReport(Timer timer) {
            logger.info("!!--timeout invoked here "+new Date());
            available = false;
            try {
                Thread.sleep(1000*60*2); // something that lasts for a bit
            } catch (Exception e) {}
            available = true;
            logger.info("New report generation complete");
        }
    }

    What am I missing here, or should I try a different approach? Any ideas most welcome :) I forgot to mention that I'm testing with GlassFish 3.0.1, latest build.
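
    One explanation consistent with the symptoms (offered as a hedged suggestion, not a confirmed diagnosis): a @Singleton uses container-managed concurrency with @Lock(WRITE) by default, so while the @Timeout method sleeps it holds the write lock and the @Schedule callback has to wait. Marking the singleton @Lock(LockType.READ) is one option; another is to move the long-running work into an @Asynchronous method on a separate bean, sketched below for EJB 3.1. ReportGenerator is a hypothetical helper bean, not part of the original code:

    import javax.ejb.*;

    @Stateless
    public class ReportGenerator {

        @Asynchronous   // returns to the caller immediately; the work continues on another thread
        public void generateReport() {
            // long-running work goes here instead of inside the singleton's @Timeout
        }
    }

    // In GenerationQueue: inject the helper and call it; the singleton's lock is released at once
    @EJB
    ReportGenerator reportGenerator;

    @Schedule(persistent = true, minute = "*", second = "*/5", hour = "*")
    public void checkQueueState() {
        if (available) {
            available = false;
            reportGenerator.generateReport();
        }
    }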

  • Why my events aren't registered after postback?

    - by lucian.jp
    I have a problem with a page containing a checkbox:

    <asp:CheckBox runat="server" ID="chkSelectAll" OnCheckedChanged="chkSelectAll_CheckedChanged" EnableViewState="true" Text='<%# CommonFunctions.GetText(Page, "CreateTask", "chkSelectAll.text") %>' AutoPostBack="true" />

    First I had the problem that when the event was raised, the checkbox wasn't keeping the Checked property value. This is still strange, because nowhere in the application do we set EnableViewState to false, and by default it should be true. Well, this behaviour was fixed by putting EnableViewState="true" on the control. So now I have a checkbox that works correctly when I first load the page: I check the box, the event is raised, the Checked property value is good, and the method is executed. The postback completes and renders the page. I then check the box back to false, but the event is not triggered, the page is reloaded with the previous data, and the Checked property stays at true. So after the first postback I seem to lose both the event behaviour and the ViewState. We suspect the fact that we move the controls dynamically in the page (see answer here), but this suspicion doesn't explain why everything works perfectly after the first load and stops working after the first postback.

    EDIT - CODE SAMPLE: I created a test project where you can experience my problem. At load, if you check any checkbox, both raise their respective events. After postback, only the second raises both events. I seem to have a problem when I register the events after moving a control.

  • Autocomplete server-side implementation

    - by toluju
    What is a fast and efficient way to implement the server-side component for an autocomplete feature in an html input box? I am writing a service to autocomplete user queries in our web interface's main search box, and the completions are displayed in an ajax-powered dropdown. The data we are running queries against is simply a large table of concepts our system knows about, which matches roughly with the set of wikipedia page titles. For this service obviously speed is of utmost importance, as responsiveness of the web page is important to the user experience. The current implementation simply loads all concepts into memory in a sorted set, and performs a simple log(n) lookup on a user keystroke. The tailset is then used to provide additional matches beyond the closest match. The problem with this solution is that it does not scale. It currently is running up against the VM heap space limit (I've set -Xmx2g, which is about the most we can push on our 32 bit machines), and this prevents us from expanding our concept table or adding more functionality. Switching to 64-bit VMs on machines with more memory isn't an immediate option. I've been hesitant to start working on a disk-based solution as I am concerned that disk seek time will kill performance. Are there possible solutions that will let me scale better, either entirely in memory or with some fast disk-backed implementations? Edits: @Gandalf: For our use case it is important the the autocompletion is comprehensive and isn't just extra help for the user. As for what we are completing, it is a list of concept-type pairs. For example, possible entries are [("Microsoft", "Software Company"), ("Jeff Atwood", "Programmer"), ("StackOverflow.com", "Website")]. We are using Lucene for the full search once a user selects an item from the autocomplete list, but I am not yet sure Lucene would work well for the autocomplete itself. @Glen: No databases are being used here. When I'm talking about a table I just mean the structured representation of my data. @Jason Day: My original implementation to this problem was to use a Trie, but the memory bloat with that was actually worse than the sorted set due to needing a large number of object references. I'll read on the ternary search trees to see if it could be of use.

  • What's wrong with this code to un-camel-case a string?

    - by omair iqbal
    Here is my attempt to solve the About.com Delphi challenge to un-camel-case a string.

    unit challenge1;

    interface

    uses
      Windows, Messages, SysUtils, Variants, Classes, Graphics, Controls, Forms, Dialogs, StdCtrls;

    type
      check = 65..90;
      TForm1 = class(TForm)
        Edit1: TEdit;
        Button1: TButton;
        procedure Button1Click(Sender: TObject);
      private
        { Private declarations }
      public
        { Public declarations }
      end;

    var
      Form1: TForm1;

    var
      s1, s2: string;
      int: integer;

    implementation

    {$R *.dfm}

    procedure TForm1.Button1Click(Sender: TObject);
    var
      i: Integer;
      checks: set of check;
    begin
      s1 := edit1.Text;
      for i := 1 to 20 do
      begin
        int := ord(s1[i]);
        if int in checks then
          insert(' ', s1, i - 1);
      end;
      showmessage(s1);
    end;

    end.

    check is meant to be a set covering the capital letters, so whenever a capital letter is encountered, the Insert call should add a space before it (inside the s1 string). But my code does nothing: ShowMessage just shows the text as it was entered in Edit1. What have I done wrong?
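
    For reference, a hedged sketch of one way the loop could be corrected: the checks set is never assigned any values, the loop runs over a fixed 20 characters rather than Length(s1), and inserting while walking forwards shifts the positions still to be checked. This is an illustration, not the challenge's official solution:

    procedure TForm1.Button1Click(Sender: TObject);
    var
      i: Integer;
    begin
      s1 := Edit1.Text;
      // Walk backwards so inserting a space does not shift the positions still to be checked
      for i := Length(s1) downto 2 do
        if s1[i] in ['A'..'Z'] then
          Insert(' ', s1, i); // Insert places the space before position i
      ShowMessage(s1);
    end;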

  • How to serve a View as CSV in ASP.NET Web Forms

    - by ChessWhiz
    Hi, I have a MS SQL view that I want to make available as a CSV download in my ASPNET Web Forms app. I am using Entity Framework for other views and tables in the project. What's the best way to enable this download? I could add a HyperLink whose click handler iterates over the view, writes its CSV form to the disk, and then serves that file. However, I'd prefer not to write to the disk if it can be avoided, and that involves iteration code that may be avoided with some other solution. Any ideas?
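
    One disk-free approach (a sketch under the assumption that the rows come from the Entity Framework model mentioned in the question; the context, entity and property names below are invented for illustration) is to stream the CSV straight into the response from the click handler:

    // Code-behind click handler: build the CSV in memory and write it to the response
    protected void ExportCsv_Click(object sender, EventArgs e)
    {
        Response.Clear();
        Response.ContentType = "text/csv";
        Response.AddHeader("Content-Disposition", "attachment; filename=export.csv");

        using (var db = new MyEntities())            // hypothetical EF context
        {
            Response.Write("Id,Name\r\n");           // header row
            foreach (var row in db.MyView)           // hypothetical entity set mapped to the view
            {
                Response.Write(row.Id + "," + row.Name.Replace(",", " ") + "\r\n");
            }
        }
        Response.End();
    }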

  • How do I create a selection list in ASP.NET MVC?

    - by Gary McGill
    I have a database table that records what publications a user is allowed to access. The table is very simple - it simply stores user ID/publication ID pairs: CREATE TABLE UserPublication (UserId INTEGER, PublicationID INTEGER) The presence of a record for a given user & publication means that the user has access; absence of a record implies no access. I want to present my admin users with a simple screen that allows them to configure which publications a user can access. I would like to show one checkbox for each of the possible publications, and check the ones that the user can currently access. The admin user can then check or un-check any number of publications and submit the form. There are various publication types, and I want to group the similarly-typed publications together - so I do need control over how the publications are presented (I don't want to just have a flat list). My view model obviously needs to have a list of all the publications (since I need to display them all regardless of the current selection), and I also need a list of the publications that the user currently has access to. (I'm not sure whether I'd be better off with a single list where each item includes the publication ID and a yes/no field?). But that's as far as I've got. I've really no idea how to go about binding this to some checkboxes. Where do I start?

  • Red Hat cluster: Failure of one of two services sharing the same virtual IP tears down IP

    - by js.
    I'm creating a 2+1 failover cluster under Red Hat 5.5 with 4 services of which 2 have to run on the same node, sharing the same virtual IP address. One of the services on each node needs a (SAN) disk, the other doesn't. I'm using HA-LVM. When I shut down (via ifdown) the two interfaces connected to the SAN to simulate SAN failure, the service needing the disk is disabled, the other keeps running, as expected. Surprisingly (and unfortunately), the virtual IP address shared by the two services on the same machine is also removed, rendering the still-running service useless. How can I configure the cluster to keep the IP address up?

  • Fully automated SQL Server Restore

    - by hasen j
    I'm not very fluent with SQL Server commands. I need a script to restore a database from a .bak file and move the logical data and log files to a specific path. I can do:

    restore filelistonly from disk='D:\backups\my_backup.bak'

    This gives me a result set with a LogicalName column. Next, I need to use the logical names from that result set in the restore command:

    restore database my_db_name from disk='d:\backups\my_backups.bak'
    with file=1,
    move 'logical_data_file' to 'd:\data\mydb.mdf',
    move 'logical_log_file' to 'd:\data\mylog.ldf'

    How do I capture the logical names from the first result set into variables that can be supplied to the "move" command? I think the solution might be trivial, but I'm pretty new to SQL Server.
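
    One common pattern, sketched here as an outline rather than a ready-made script: capture the RESTORE FILELISTONLY output with INSERT ... EXEC and read the names out of it. The temporary table's column list must match exactly what your SQL Server version returns, and if your version does not accept variables in the MOVE clause, the final RESTORE has to be built as dynamic SQL instead.

    -- Capture RESTORE FILELISTONLY output into a table, then read the logical names
    DECLARE @data sysname, @log sysname;

    CREATE TABLE #filelist (
        LogicalName nvarchar(128),
        PhysicalName nvarchar(260),
        [Type] char(1),
        FileGroupName nvarchar(128)
        -- ... remaining columns exactly as returned by your server version ...
    );

    INSERT INTO #filelist
    EXEC('restore filelistonly from disk=''D:\backups\my_backup.bak''');

    SELECT @data = LogicalName FROM #filelist WHERE [Type] = 'D';
    SELECT @log  = LogicalName FROM #filelist WHERE [Type] = 'L';

    RESTORE DATABASE my_db_name FROM DISK = 'D:\backups\my_backup.bak'
    WITH FILE = 1,
         MOVE @data TO 'd:\data\mydb.mdf',
         MOVE @log  TO 'd:\data\mylog.ldf';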

  • F# and statically checked union cases

    - by Johan Jonasson
    Soon my brother-in-arms Joel and I will release version 0.9 of Wing Beats, an internal DSL written in F# with which you can generate XHTML. One of the sources of inspiration has been the XHTML.M module of the Ocsigen framework. I'm not used to the OCaml syntax, but I understand that XHTML.M somehow statically checks whether the attributes and children of an element are of valid types. We have not been able to statically check the same thing in F#, and now I wonder if someone has any idea of how to do it.

    My first naive approach was to represent each element type in XHTML as a union case. But unfortunately you cannot statically restrict which cases are valid as parameter values, as in XHTML.M. Then I tried to use interfaces (each element type implements an interface for each valid parent) and type constraints, but I didn't manage to make it work without explicit casting, in a way that made the solution cumbersome to use, and it didn't feel like an elegant solution anyway. Today I've been looking at Code Contracts, but it seems to be incompatible with F# Interactive: when I hit Alt+Enter it freezes.

    Just to make my question clearer, here is a super simple artificial example of the same problem:

    type Letter =
        | Vowel of string
        | Consonant of string

    let writeVowel = function
        | Vowel str -> sprintf "%s is a vowel" str

    I want writeVowel to only accept Vowels statically, and not, as above, check it at runtime. How can we accomplish this? Does anyone have any idea? There must be a clever way of doing it. If not with union cases, maybe with interfaces? I've struggled with this, but am trapped in the box and can't think outside of it.
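
    One direction, sketched here for the artificial example only and not as a full answer to the DSL design problem: give each case its own single-case union type, so the compiler can require a Vowel at the call site, and keep a wrapper union for the places where either kind is acceptable.

    type Vowel = Vowel of string
    type Consonant = Consonant of string

    // Wrapper for the places where either kind is acceptable
    type Letter =
        | V of Vowel
        | C of Consonant

    // Only a Vowel can ever be passed in; the check is now static
    let writeVowel (Vowel str) = sprintf "%s is a vowel" str

    let greeting = writeVowel (Vowel "a")      // compiles
    // let bad = writeVowel (Consonant "b")    // does not compile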

  • Delphi Performance: Case Versus If

    - by Andreas Rejbrand
    I guess there might be some overlap with previous SO questions, but I could not find a Delphi-specific question on this topic. Suppose you want to check whether an unsigned 32-bit integer variable MyAction is equal to any of the constants ACTION1, ACTION2, ..., ACTIONn, where n is, say, 1000. I guess that, besides being more elegant,

    case MyAction of
      ACTION1: {code};
      ACTION2: {code};
      ...
      ACTIONn: {code};
    end;

    is much faster than

    if MyAction = ACTION1 then
      // code
    else if MyAction = ACTION2 then
      // code
    ...
    else if MyAction = ACTIONn then
      // code;

    I guess that the if variant takes time O(n) to complete (i.e. to find the right action) if the right action ACTIONi has a high value of i, whereas the case variant takes a lot less time (O(1)?). Am I correct that case is much faster? Am I correct that the time required to find the right action in the case statement is actually independent of n? I.e., is it true that it does not really take any longer to check a million cases than to check 10 cases? How, exactly, does this work?

  • ADO.NET Multiple simultaneous reads from an open database.

    - by Deverill
    Answer not needed - my logic was wrong and this question is invalid. Charles helped me see where I went off-tracks. Thanks I have a utility that moves data from one source to another. In the process of writing the record I check to see if it exists and do an update/insert as necessary. The difficulty I have is that as I'm writing the main record info there is a 2nd table for "custom data" that I have to check to see if it exists and do an update/insert for that as well. Example: I may be loading a pencil sharpener that may or may not exist. While I'm writing it into destination it has characteristics such as style, color, etc. and each of them may or may not exist. As written I seem to need to have 2 DataReaders open, one for the sharpener and one to check for and update color. I am new to ADO.NET, but not to programming and it's more complicated than I listed but for sanity's sake I didn't put all the details. My question is: What am I missing? You can't have 2 readers open at the same time on a connection, yet I can't close the first if I'm stepping through all products. It seems inefficient to have 2 connections, readers, etc. for this. Is there a feature of ADO.NET DBs that I'm missing? How would you do it? Thanks!

  • mdadm: Win7-install created a boot partition on one of my RAID6 drives. How to rebuild?

    - by EXIT_FAILURE
    My problem happened when I attempted to install Windows 7 on it's own SSD. The Linux OS I used which has knowledge of the software RAID system is on a SSD that I disconnected prior to the install. This was so that windows (or I) wouldn't inadvertently mess it up. However, and in retrospect, foolishly, I left the RAID disks connected, thinking that windows wouldn't be so ridiculous as to mess with a HDD that it sees as just unallocated space. Boy was I wrong! After copying over the installation files to the SSD (as expected and desired), it also created an ntfs partition on one of the RAID disks. Both unexpected and totally undesired! . I changed out the SSDs again, and booted up in linux. mdadm didn't seem to have any problem assembling the array as before, but if I tried to mount the array, I got the error message: mount: wrong fs type, bad option, bad superblock on /dev/md0, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so dmesg: EXT4-fs (md0): ext4_check_descriptors: Block bitmap for group 0 not in group (block 1318081259)! EXT4-fs (md0): group descriptors corrupted! I then used qparted to delete the newly created ntfs partition on /dev/sdd so that it matched the other three /dev/sd{b,c,e}, and requested a resync of my array with echo repair > /sys/block/md0/md/sync_action This took around 4 hours, and upon completion, dmesg reports: md: md0: requested-resync done. A bit brief after a 4-hour task, though I'm unsure as to where other log files exist (I also seem to have messed up my sendmail configuration). In any case: No change reported according to mdadm, everything checks out. mdadm -D /dev/md0 still reports: Version : 1.2 Creation Time : Wed May 23 22:18:45 2012 Raid Level : raid6 Array Size : 3907026848 (3726.03 GiB 4000.80 GB) Used Dev Size : 1953513424 (1863.02 GiB 2000.40 GB) Raid Devices : 4 Total Devices : 4 Persistence : Superblock is persistent Update Time : Mon May 26 12:41:58 2014 State : clean Active Devices : 4 Working Devices : 4 Failed Devices : 0 Spare Devices : 0 Layout : left-symmetric Chunk Size : 4K Name : okamilinkun:0 UUID : 0c97ebf3:098864d8:126f44e3:e4337102 Events : 423 Number Major Minor RaidDevice State 0 8 16 0 active sync /dev/sdb 1 8 32 1 active sync /dev/sdc 2 8 48 2 active sync /dev/sdd 3 8 64 3 active sync /dev/sde Trying to mount it still reports: mount: wrong fs type, bad option, bad superblock on /dev/md0, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so and dmesg: EXT4-fs (md0): ext4_check_descriptors: Block bitmap for group 0 not in group (block 1318081259)! EXT4-fs (md0): group descriptors corrupted! I'm a bit unsure where to proceed from here, and trying stuff "to see if it works" is a bit too risky for me. This is what I suggest I should attempt to do: Tell mdadm that /dev/sdd (the one that windows wrote into) isn't reliable anymore, pretend it is newly re-introduced to the array, and reconstruct its content based on the other three drives. I also could be totally wrong in my assumptions, that the creation of the ntfs partition on /dev/sdd and subsequent deletion has changed something that cannot be fixed this way. My question: Help, what should I do? If I should do what I suggested , how do I do that? 
From reading documentation, etc, I would think maybe: mdadm --manage /dev/md0 --set-faulty /dev/sdd mdadm --manage /dev/md0 --remove /dev/sdd mdadm --manage /dev/md0 --re-add /dev/sdd However, the documentation examples suggest /dev/sdd1, which seems strange to me, as there is no partition there as far as linux is concerned, just unallocated space. Maybe these commands won't work without. Maybe it makes sense to mirror the partition table of one of the other raid devices that weren't touched, before --re-add. Something like: sfdisk -d /dev/sdb | sfdisk /dev/sdd Bonus question: Why would the Windows 7 installation do something so st...potentially dangerous? Update I went ahead and marked /dev/sdd as faulty, and removed it (not physically) from the array: # mdadm --manage /dev/md0 --set-faulty /dev/sdd # mdadm --manage /dev/md0 --remove /dev/sdd However, attempting to --re-add was disallowed: # mdadm --manage /dev/md0 --re-add /dev/sdd mdadm: --re-add for /dev/sdd to /dev/md0 is not possible --add, was fine. # mdadm --manage /dev/md0 --add /dev/sdd mdadm -D /dev/md0 now reports the state as clean, degraded, recovering, and /dev/sdd as spare rebuilding. /proc/mdstat shows the recovery progress: md0 : active raid6 sdd[4] sdc[1] sde[3] sdb[0] 3907026848 blocks super 1.2 level 6, 4k chunk, algorithm 2 [4/3] [UU_U] [>....................] recovery = 2.1% (42887780/1953513424) finish=348.7min speed=91297K/sec nmon also shows expected output: ¦sdb 0% 87.3 0.0| > |¦ ¦sdc 71% 109.1 0.0|RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR > |¦ ¦sdd 40% 0.0 87.3|WWWWWWWWWWWWWWWWWWWW > |¦ ¦sde 0% 87.3 0.0|> || It looks good so far. Crossing my fingers for another five+ hours :) Update 2 The recovery of /dev/sdd finished, with dmesg output: [44972.599552] md: md0: recovery done. [44972.682811] RAID conf printout: [44972.682815] --- level:6 rd:4 wd:4 [44972.682817] disk 0, o:1, dev:sdb [44972.682819] disk 1, o:1, dev:sdc [44972.682820] disk 2, o:1, dev:sdd [44972.682821] disk 3, o:1, dev:sde Attempting mount /dev/md0 reports: mount: wrong fs type, bad option, bad superblock on /dev/md0, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so And on dmesg: [44984.159908] EXT4-fs (md0): ext4_check_descriptors: Block bitmap for group 0 not in group (block 1318081259)! [44984.159912] EXT4-fs (md0): group descriptors corrupted! I'm not sure what do do now. Suggestions? 
Output of dumpe2fs /dev/md0: dumpe2fs 1.42.8 (20-Jun-2013) Filesystem volume name: Atlas Last mounted on: /mnt/atlas Filesystem UUID: e7bfb6a4-c907-4aa0-9b55-9528817bfd70 Filesystem magic number: 0xEF53 Filesystem revision #: 1 (dynamic) Filesystem features: has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize Filesystem flags: signed_directory_hash Default mount options: user_xattr acl Filesystem state: clean Errors behavior: Continue Filesystem OS type: Linux Inode count: 244195328 Block count: 976756712 Reserved block count: 48837835 Free blocks: 92000180 Free inodes: 243414877 First block: 0 Block size: 4096 Fragment size: 4096 Reserved GDT blocks: 791 Blocks per group: 32768 Fragments per group: 32768 Inodes per group: 8192 Inode blocks per group: 512 RAID stripe width: 2 Flex block group size: 16 Filesystem created: Thu May 24 07:22:41 2012 Last mount time: Sun May 25 23:44:38 2014 Last write time: Sun May 25 23:46:42 2014 Mount count: 341 Maximum mount count: -1 Last checked: Thu May 24 07:22:41 2012 Check interval: 0 (<none>) Lifetime writes: 4357 GB Reserved blocks uid: 0 (user root) Reserved blocks gid: 0 (group root) First inode: 11 Inode size: 256 Required extra isize: 28 Desired extra isize: 28 Journal inode: 8 Default directory hash: half_md4 Directory Hash Seed: e177a374-0b90-4eaa-b78f-d734aae13051 Journal backup: inode blocks dumpe2fs: Corrupt extent header while reading journal super block

  • Nested hyperlinks in XHTML 1.1 document

    - by Nazgulled
    Hi, I'm doing a simple widget for WordPress that fetches the most recent tweets from the RSS feed provided by Twitter. This widget parses any link posted on a tweet, it also parses mentions (ie: @username) and trending topics (ie: #nowplaying). For these 3 situations, it creates links pointing to some Twitter feature. For instance: "Hi @UserA, check out the song Foo from FooBar that I'm listening, it's awesome. #nowplaying" And it will parse into this: Hi <a href="http://twitter.com/UserA">@UserA</a>, check out the song Foo from FooBar that I'm listening, it's awesome. <a href="http://twitter.com/#search?q=nowplaying">#nowplaying</a> Now now I need to add a global link to the whole message, like this: <a href="http://twitter.com/UserA/statuses/1234567890"> Hi <a href="http://twitter.com/UserA">@UserA</a>, check out the song Foo from FooBar that I'm listening, it's awesome. <a href="http://twitter.com/#search?q=nowplaying">#nowplaying</a> </a> But this code does not validate and it doesn't work anyways (the browsers don't really seem to know what to do with it). Any suggestions how could I fix this?

  • 2D Platformer Collision Problems With Both Axes

    - by AusGat
    I'm working on a little 2D platformer/fighting game with C++ and SDL, and I'm having quite a bit of trouble with the collision detection. The levels are made up of an array of tiles, and I use a for loop to go through each one (I know it may not be the best way to do it, and I may need help with that too). For each side of the character, I move it one pixel in that direction and check for a collision (I also check to see if the character is moving in that direction). If there is a collision, I set the velocity to 0 and move the player to the edge of the tile. My problem is that if I check for horizontal collisions first, and the player moves vertically at more than one pixel per frame, it handles the horizontal collision and moves the character to the side of the tile even if the tile is below (or above) the character. If I handle vertical collision first, it does the same, except it does it for the horizontal axis. How can I handle collisions on both axes without having those problems? Is there any better way to handle collision than how I'm doing it?
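
    A common way out of this, sketched below in the C++/SDL spirit of the question, is to integrate and resolve one axis at a time: move along X and resolve, then move along Y and resolve. A floor tile below the player then only ever produces a vertical correction. The Player, Tile and Overlaps definitions here are minimal placeholders, not the poster's actual classes:

    #include <vector>

    // Minimal placeholder types for illustration
    struct Tile   { float left, top, right, bottom; };
    struct Player { float x, y, w, h, vx, vy; };

    static bool Overlaps(const Player &p, const Tile &t)
    {
        return p.x < t.right && p.x + p.w > t.left &&
               p.y < t.bottom && p.y + p.h > t.top;
    }

    // Move and resolve one axis at a time: X first, then Y (the order is a design choice)
    void MovePlayer(Player &p, const std::vector<Tile> &tiles, float dt)
    {
        p.x += p.vx * dt;                                   // horizontal pass
        for (const Tile &t : tiles)
            if (Overlaps(p, t)) {
                p.x = (p.vx > 0) ? t.left - p.w : t.right;  // snap to the tile's side
                p.vx = 0;
            }

        p.y += p.vy * dt;                                   // vertical pass
        for (const Tile &t : tiles)
            if (Overlaps(p, t)) {
                p.y = (p.vy > 0) ? t.top - p.h : t.bottom;  // land on top or bump from below
                p.vy = 0;
            }
    }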

  • Script to install and compile Python, Django, Virtualenv, Mercurial, Git, LessCSS, etc... on Dreamho

    - by tmslnz
    The Story: After cleaning up my Dreamhost shared server's home folder from all the cruft accumulated over time, I decided to start afresh and compile/reinstall Python. All tutorials and snippets I found seemed overly simplistic, assuming (or ignoring) a bunch of dependencies needed by Python to compile all modules correctly. So, starting from http://andrew.io/weblog/2010/02/installing-python-2-6-virtualenv-and-virtualenvwrapper-on-dreamhost/ (so far the best guide I found), I decided to write a set-and-forget Bash script to automate this painful process, including along the way a bunch of other things I am planning to use.

    The Script: I am hosting the script on http://bitbucket.org/tmslnz/python-dreamhost-batch/src/

    The TODOs: So far it runs fine, and does all it needs to do in about 900 seconds, giving me at the end of the process a fully functional Python / Mercurial / etc... setup without even needing to log out and back in. I thought this might be of use for others too, but there are a few things that I think it's missing and I am not quite sure how to go about them, what's the best way to do it, or if this just doesn't make any sense at all:

    - Check for errors and break
    - Check for minor version bumps of the packages and give warnings
    - Check for known dependencies
    - Use arguments to install only some of the packages instead of commenting out lines
    - Organise the code in a manner that's easy to update
    - Optionally make the installers and compiling silent, with error logging to file
    - Failproof .bashrc modification to prevent breaking ssh logins and having to log back in via FTP to fix it

    EDIT: The implied question is: can anyone, more bashful than me, offer general advice on the worthiness of the above points or highlight any problems they see with this approach? (see my answer to Ry4an's comment below)

    The Gist: I am no UNIX or Bash or compiler expert, and this has been built iteratively, by trial and error. It is somehow going towards apt-get (well, 1% of it...), but since Dreamhost and others obviously cannot give root access on shared servers, this looks to me like a potentially very useful workaround; particularly so with some community work involved.
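
    For the first TODO and the error-logging one, a hedged Bash sketch of the kind of scaffolding that could be added to such a script; the function and file names are illustrative, not taken from the repository:

    #!/bin/bash
    # Abort on the first failing command, on unset variables, and on failures inside pipes
    set -euo pipefail

    LOGFILE="$HOME/install.log"

    # Run a step, log its output, and report which step failed before exiting
    run_step () {
        local description="$1"; shift
        echo "==> ${description}"
        if ! "$@" >>"$LOGFILE" 2>&1; then
            echo "FAILED: ${description} (see ${LOGFILE})" >&2
            exit 1
        fi
    }

    # Example usage:
    # run_step "Compiling Python" make -j2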
