Search Results

Search found 19125 results on 765 pages for 'hard disk'.

  • Splitting a raidctl mirror safely

    - by milkfilk
    I have a Sun T5220 server with the onboard LSI card and two disks that were in a RAID 1 mirror. The data is not important right now, but we had a failed disk and are trying to understand how to do this for real in case we ever have to recover from a failure. The initial situation looked like this:

        # raidctl -l c1t0d0
        Volume                  Size    Stripe  Status   Cache  RAID
                Sub                     Size                    Level
                        Disk
        ----------------------------------------------------------------
        c1t0d0                  136.6G  N/A     DEGRADED OFF    RAID1
                        0.1.0   136.6G          GOOD
                        N/A     136.6G          FAILED

    Green light on the 0.0.0 disk. Find / lights up the 0.1.0 disk. So I know I have a bad drive and which one it is. The server still boots, obviously.

    First, we tried putting a new disk in. This disk came from an unknown source. format would not see it, cfgadm -al would not see it, so raidctl -l would not see it. I figure it's bad. We tried another disk from another spare server:

        # raidctl -c c1t1d0 c1t0d0      (where t1 is my good disk - 0.1.0)
        Disk has occupied space.

    The different syntax options don't change anything:

        # raidctl -C "0.1.0 0.0.0" -r 1 1
        Disk has occupied space.

        # raidctl -C "0.1.0 0.0.0" 1
        Disk has occupied space.

    Ok, maybe this is because the disk from the spare server had a RAID 1 on it already. Aha, I can see another volume in raidctl:

        # raidctl -l
        Controller: 1
        Volume:c1t1d0      (this is my server's root mirror)
        Volume:c1t132d0    (this is the foreign root mirror)
        Disk: 0.0.0
        Disk: 0.1.0
        ...

    No problem, I don't care about the data, I'll just delete the foreign mirror:

        # raidctl -d c1t132d0
        (warning about data deletion, but it works)

    At this point, the /usr/bin binaries freak out. By that I mean ls -l /usr/bin/which shows 1.4k, but cat /usr/bin/which gives me a newline. Great, did I just blow away the binaries (binaries already in memory still work)? I bounce the box. It all comes back fine. WTF. Anyway, back to recreating my mirror:

        # raidctl -l
        Controller: 1
        Volume:c1t1d0      (this is my server's root mirror)
        Disk: 0.0.0
        Disk: 0.1.0
        ...

    The man page says you can delete a mirror and it will split it. Ok, I'll delete the root mirror:

        # raidctl -d c1t0d0
        Array in use.      (this might not be the exact error)

    I googled this and found that, of course, you can't do this (even with -f) while booted off the mirror. Ok. I boot cdrom -s and delete the volume. Now I have one disk with a type of "LSI-Logical-Volume" on c1t1d0 (where my data is) and a brand-new "Hitachi 146GB" on c1t0d0 (what I'm trying to mirror to). Booted off the CD:

        # raidctl -c c1t1d0 c1t0d0      (man says it's source destination for mirroring)
        Illegal Array Layout.

        # raidctl -C "0.1.0 0.0.0" -r 1 1    (alt syntax per man)
        Illegal Array Layout.

        # raidctl -C "0.1.0 0.0.0" 1    (assumes RAID 1, no help)
        Illegal Array Layout.

    Same-size disks, same manufacturer, but I did delete the volume instead of throwing in a blank disk and waiting for it to resync; maybe that was a critical error. I tried selecting the type in format for my good disk to be a plain 146GB disk, but that resets the partition table, which I'm pretty sure would wipe the data (bad if this were production). Am I boned? Does anyone have experience with breaking and resyncing a mirror? There's nothing on Google about "Illegal Array Layout", so here's my contribution to the search gods.

  • 3 dimensional bin packing algorithms

    - by BuschnicK
    I'm faced with a 3-dimensional bin packing problem and am currently conducting some preliminary research into which algorithms/heuristics are currently yielding the best results. Since the problem is NP-hard, I do not expect to find the optimal solution in every case, but I was wondering: 1) What are the best exact solvers? Branch and bound? What problem instance sizes can I expect to solve with reasonable computing resources? 2) What are the best heuristic solvers? 3) What off-the-shelf solutions exist to conduct some experiments with?
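
    A hedged baseline for such experiments, sketched in C++: first-fit decreasing applied to item volumes only. This is deliberately not a geometric 3D packer (it ignores shape, orientation, and placement entirely), just a quick volume-relaxation sketch to compare real solvers against.

        #include <algorithm>
        #include <functional>
        #include <iostream>
        #include <vector>

        // First-fit decreasing on volume: sort items largest-first, then put
        // each one into the first bin with enough remaining capacity.
        // NOTE: a volume relaxation only; real 3D packing also needs a
        // placement/orientation feasibility check per bin.
        int ffdBinCount(std::vector<double> volumes, double binVolume) {
            std::sort(volumes.begin(), volumes.end(), std::greater<double>());
            std::vector<double> freeSpace;  // remaining capacity per open bin
            for (size_t i = 0; i < volumes.size(); ++i) {
                bool placed = false;
                for (size_t b = 0; b < freeSpace.size(); ++b) {
                    if (freeSpace[b] >= volumes[i]) {
                        freeSpace[b] -= volumes[i];
                        placed = true;
                        break;
                    }
                }
                if (!placed)
                    freeSpace.push_back(binVolume - volumes[i]);  // open a new bin
            }
            return static_cast<int>(freeSpace.size());
        }

        int main() {
            double items[] = { 0.6, 0.5, 0.4, 0.3, 0.2 };
            std::vector<double> v(items, items + 5);
            std::cout << ffdBinCount(v, 1.0) << " bins\n";  // prints "2 bins"
        }

    Because the geometry is ignored, the result is only a rough baseline, not a valid bound on the true 3D optimum.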

  • Using STL/Boost to initialize a hard-coded set<vector<int> >

    - by Hooked
    Like this question already asked, I'd like to initialize a container using STL where the elements are hard-coded in the cleanest manner possible. In this case, the elements are a doubly nested container:

        set<vector<int> > A;

    And I'd like (for example) to put the following values in:

        A = [[0,0,1],[0,1,0],[1,0,0],[0,0,0]];

    C++0x is fine; I'm using g++ 4.4.1. STL is preferable, as I don't use Boost in any other parts of the code (though I wouldn't mind an example with it!).
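
    For the record, a minimal sketch of the list-initialization route; GCC 4.4 accepts this with -std=c++0x, assuming its libstdc++ provides the initializer-list constructors:

        #include <iostream>
        #include <set>
        #include <vector>

        int main() {
            // One brace-initializer fills the whole nested container (C++0x).
            std::set<std::vector<int> > A = {
                { 0, 0, 1 }, { 0, 1, 0 }, { 1, 0, 0 }, { 0, 0, 0 }
            };
            std::cout << A.size() << " vectors in the set\n";  // prints 4
            return 0;
        }

    Without C++0x, the usual fallback is building each vector from a static array (e.g. vector<int>(arr, arr + 3)) and calling A.insert in a loop.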

  • Passing sql results to views hard-codes views to database column names

    - by Galen
    I just realized that I may not be following best practices with regard to the MVC pattern. My issue is that my views "know" information about my database. Here's my situation in pseudocode...

    My controller invokes a method from my model and passes the result directly to the view:

        view.records = tableGateway.getRecords()
        view.display()

    In my view:

        each records as record
            print record.name
            print record.address
            ...

    In my view I have record.name and record.address, info that's hard-coded to my database column names. Is this bad? What other ways around it are there, other than iterating over everything in the controller and basically rewriting the records collection? That just seems silly. Thanks

  • How hard is FizzBuzz? [closed]

    - by Josh K
    After reading various blog entries, I took it upon myself to code a FizzBuzz program in PHP.

        class FizzBuzz {
            function __construct() { }
            function go() {
                for ($i = 1; $i < 101; $i++) {
                    if ($i % 3 == 0 and $i % 5 == 0) {
                        echo("FizzBuzz\n");
                        continue;
                    } else if ($i % 3 == 0) {
                        echo("Fizz\n");
                        continue;
                    } else if ($i % 5 == 0) {
                        echo("Buzz\n");
                        continue;
                    } else {
                        echo($i . "\n");
                    }
                }
            }
        }
        $FB = new FizzBuzz();
        $FB->go();

    I created the FizzBuzz object just because I could; I completed this in under five minutes. Is it really that hard to do?

  • Recover data from hard disk

    - by Hitesh Solanki
    Hi, I have formatted my C: drive and Windows XP installed successfully, but I am not able to access the D: drive. When I double-click on the D: drive, the following message is displayed:

        The disk in drive D: is not formatted. Do you want to format it now?

    When I try to access it from the command prompt, the following message is displayed:

        The volume does not contain a recognized file system. Please make sure that all required file system drivers are loaded and that the volume is not corrupted.

    So please help me... Thanks in advance...

  • Hard to append a table with many records into another without generating duplicates

    - by Bill Mudry
    I may seem a bit wordy at first, but the hope is that it will be easier for all of you to understand what I am doing in the first place. I have an uncommon but enjoyable activity of collecting as many species of wood from around the world as I can (over 2,900 so far). Ok, that is the real world. Meanwhile, I have spent over 8 years compiling over 5.8 MB of text data on all the woods of the world. That got so large that learning some basic PHP and MySQL was most welcome, so I could build a new database-driven home for all this research. I am still slow at it but getting there.

    The original premise was to find evidence of as many species of wood in the world as I can. The more names identified, the more successful the project. I have named the project TAXA for ease of conversation (short for Taxonomy). You are most welcome to take a look at what I have so far at www.prowebcanada.com/taxa. It is 95% dynamically driven. So far I am reporting about 6,500 botanical wood names and, as said above, the more I can report, the more successful the project.

    I have a file of all the woods in the second-largest wood collection in the world, the Tervuren wood collection in Belgium, with over 11,300 wood names even after cleaning out all duplicates. That is almost twice the number I am reporting now, so porting all the new wood names from Tervuren to the 'species' table, where I keep the reported data, would be a major advance for the project. At one point I was able to add all the Tervuren records to the species table, but over 3,000 duplicates also formed. They were not in the Tervuren file in the first place but represent the same wood names common to both files. It is common sense that there would be woods common to both that, when merged, would create new duplicates.

    At one point, and with the help of others from another forum, I may very well have finally got the proper SQL statement. When I ran it, though, the system said (semi-amusingly at first) that it had gone away! After looking up on the Net what could have done this, one likely reason is that the MySQL timeout lapses, probably because of the large size of the files I am running. I am running this on a rented account on GoDaddy, so I cannot go about adjusting any config file.

    For safety, I copied the tervuren.sql file as tervuren_target.sql and the species.sql file as species_master.sql to use as working files, just to make sure I protect the original files from destruction or damage. Later I can rename species_master back to just species.sql once I am happy all worked well.

    The species table has about 18 columns in it, but only 5 columns match the columns in the Tervuren file (name for name, and collation also). The rest of the columns are just along for the ride, so to speak. The common key in both is the 'species_name' column. I am not sure it is proper to call one a primary key and the other a foreign key, since there really is no relational connection between them. One is just more data for the other and can disappear afterwards, never to be referred to by the working code in the application.

    I have been very surprised and flabbergasted at how hard it can be to append records from one large table into another (with the same column names, plus others) without generating NEW duplicates in the process. Watch out thinking that a SELECT DISTINCT statement may do the job, because absolutely NO records in the species table must get destroyed in the process, and there is no way (well, that I know of) to tell the DISTINCT command this. Yes, the original species table has duplicates in it even before all this but, trust me, they have to be removed the long hard way, manually, record by record, or I will lose precious information. It is more important to just make sure no NEW duplicates form through bringing new names from tervuren_target.species_name into species.species_name.

    I am hoping and thinking that a straight SQL solution should work, except for that nasty timeout. How do I get past that? Could it mean that I may have to turn to a PHP-plus-SQL method? Or would I have to break up the Tervuren file into a few smaller ones and run them independently (hope not...)? So far, what seems like it should be easy has proven unexpectedly tricky. I appreciate any help you can give, but start from the assumption that this may be harder to do right than it may seem on the surface.

    By the way, I am running a quad 64-bit system with Windows 7, so at least I have some fairly hefty power on the client end, and a direct Ethernet cable feeding a cable connection to the Internet. Once I get an algorithm and code working for this, I also have many other lists to process that could make the species table grow even more. It could be equivalent to (ahem) lighting a rocket under my project, especially compared to doing this record by record manually!

    This is my first time in this forum, so I do not know how I will receive replies. Do I have to come back here periodically, or are replies emailed out also? It would be great if you CC'd copies to me at billmudry at rogers.com :-)

    Much thanks for your patience and help,
    Bill Mudry
    Mississauga, Ontario, Canada (next to Toronto)

  • Hard-coded 8191 10485 values in JavaScript rounding function

    - by Matthew Hegarty
    I've seen the following (bizarre) JavaScript rounding function in some legacy code. After googling for it, I can see that it crops up in a number of places online. However, I can't work out why the hard-coded values 8191 and 10485 are present. Does anyone know if there's any sensible reason why these values are included? If not, hopefully we can kill off the meme!

        function roundNumber(num, dec) {
            var newnumber = 0;
            if (num > 8191 && num < 10485) {
                num = num - 5000;
                newnumber = Math.round(num * Math.pow(10, dec)) / Math.pow(10, dec);
                newnumber = newnumber + 5000;
            } else {
                newnumber = Math.round(num * Math.pow(10, dec)) / Math.pow(10, dec);
            }
            return newnumber;
        }

  • PlaceHolderMain controlling td width of hard-coded values

    - by Linda
    In my custom .master page I have the following code:

        <asp:ContentPlaceHolder id="PlaceHolderMain" runat="server" Visible="true" />

    This prints out the main content of my page. It contains this structure:

        <table ID="OuterZoneTable" width="100%">
            <tr>...</tr>
            <tr id="OuterRow">
                <td width="80%" id="OuterLeftCell">...</td>
                <td width="180" id="OuterRightCell">...</td>
            </tr>
            ...
        </table>

    I want to control the widths of #OuterLeftCell and #OuterRightCell, but they are hard-coded in the HTML that is returned. How would I change these values?

  • Disk-based caching of dynamic images in IIS 7

    - by Daniel Schierbeck
    I'm writing an image server that needs to handle a relatively large number of concurrent requests (~5,000). The images being served are dynamically scaled down and cropped based on per-image specifications, which are queried from a database. The number of images is rather large, so an in-memory cache isn't viable (thrashing would most definitely occur). I'm using native caching in IIS 7 to avoid hitting the ASP.NET app which generates the images on the fly. I've looked around, but I couldn't find a simple way to configure IIS to store the cache on disk: is there such an option, or would I need to roll my own? I'd rather avoid placing the generated images in a public folder so they can be served statically, since I would prefer to invalidate the cache entries using a query parameter (last-edit time from the database), which doesn't seem possible to reconcile with static caching. I would love to get some feedback on this!

  • How should I protect against hard link attacks?

    - by Thomas
    I want to append data to a file in /tmp. If the file doesn't exist, I want to create it. I don't care if someone else owns the file; the data is not secret. I do not want someone to be able to race-condition this into writing somewhere else, or to another file. What is the best way to do this? Here's my thought:

        #include <fcntl.h>
        #include <sys/stat.h>

        int fd = open("/tmp/some-benchmark-data.txt",
                      O_APPEND | O_CREAT | O_NOFOLLOW | O_WRONLY, 0644);
        struct stat st;
        fstat(fd, &st);
        if (st.st_nlink != 1) { /* HARD LINK ATTACK! */ }

    What's the right way, besides not using a world-writable directory?

  • SQL Server 2008, not enough disk space

    - by snorlaks
    Hello, I'm executing an SQL query against my database. I have SQL Server 2008 installed on my D: drive, which has 55 GB of free space; my C: drive has something like 150 MB free (right now). While executing that query on quite a big table (16 GB), I get an error:

        An error occurred while executing batch. Error message is: Not enough disk space.

    I would like to know whether there is any way to make SQL Server use the D: drive instead of C:, or whether there is some other problem with what I'm doing. Thanks for help.

  • Reject (Hard 404) ASP.NET MVC-style URLs

    - by James D
    Hi, I have an ASP.NET MVC web app that exposes "friendly" URLs:

        http://somesite.com/friendlyurl

    ...which are rewritten (not redirected) to ASP.NET MVC-style URLs under the hood:

        http://somesite.com/Controller/Action

    The user never actually sees any ASP.NET MVC-style URLs; if he requests one, we hard-404 it. ASP.NET MVC is (in this app) an implementation detail, not a fundamental interface. My question: how do you examine an arbitrary incoming URL and determine whether or not that URL matches a defined ASP.NET MVC route? For extra credit: how do you do it from inside an ASP.NET-style IHttpModule, where you're getting invoked upstream of the ASP.NET MVC runtime? Thanks!

  • Chrome and its Spellcheck -- How Hard Would This Be To Implement

    - by bobber205
    From a programmer's perspective: the dictionary in Chrome, Google's own browser, is not the same as the dictionary behind their search engine. Countless times I have right-clicked on a badly misspelled word only to have no correct spelling appear. I Google the word, and almost 100% of the time it knows what I was trying to type. :) I realize there are probably very good reasons for this, but why can't Chrome simply Google a word when you right-click on it and it can't find a correct spelling? I am sometimes a bad speller, and it would be really nice if the spell check in Chrome at least utilized Google's landmark product to provide more accurate spelling lookups. How hard would this be to implement, or are there other reasons you think they have not done it?

  • How can I read JSON file from disk and store to array in Swift

    - by Ezekiel Elin
    I want to read a file from disk in Swift. It can be a relative or an absolute path; that doesn't matter. How can I do that? I've been playing with something like this:

        let classesData = NSData.dataWithContentsOfMappedFile("path/to/classes.json")

    It finds the file (i.e. it doesn't return nil), but I don't know how to manipulate the returned data and convert it to JSON. It isn't in a string format, and String() isn't working on it.

  • What is the fastest way to copy content of DVD to hard disc using Linux

    - by Ritesh
    I have gone through some links which talk about the fastest way of copying files on Windows using FILE_FLAG_NO_BUFFERING and FILE_FLAG_OVERLAPPED. They also talk about how read and write requests with buffer sizes of 256 KB and 128 KB are faster than 1 MB. The link for that is: Explanation for tiny reads (overlapped, buffered) outperforming large contiguous reads? I am looking for a similar method on Linux which allows me to copy the contents of my DVD to hard disk quickly. So I wanted to know: are there file-operation flags on Linux which would give me the best result, or which way of copying on Linux is best? My code is all in C++.
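
    Not a definitive answer, but the closest Linux analogue to FILE_FLAG_SEQUENTIAL_SCAN is posix_fadvise(2) with POSIX_FADV_SEQUENTIAL (O_DIRECT being the rough counterpart of FILE_FLAG_NO_BUFFERING). A minimal C++ sketch of a buffered sequential copy with that hint; the paths are placeholders and error handling is trimmed:

        #include <fcntl.h>
        #include <unistd.h>
        #include <vector>

        // Copy src to dst sequentially with a 256 KB buffer, telling the
        // kernel the reads are sequential so it can read ahead aggressively.
        bool copySequential(const char* src, const char* dst) {
            int in = open(src, O_RDONLY);
            if (in < 0) return false;
            posix_fadvise(in, 0, 0, POSIX_FADV_SEQUENTIAL);
            int out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0644);
            if (out < 0) { close(in); return false; }
            std::vector<char> buf(256 * 1024);
            ssize_t n;
            while ((n = read(in, &buf[0], buf.size())) > 0) {
                if (write(out, &buf[0], n) != n) { n = -1; break; }
            }
            close(in);
            close(out);
            return n == 0;  // 0 means we hit a clean EOF
        }

    For a whole DVD it may also be simpler to read the block device in one stream (e.g. from /dev/sr0 into an ISO image) than to copy file by file.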

  • Saving a movie generated with Jython/JES on local disk

    - by Golgauth
    I made an auto-generated movie clip using JES (Jython Environment for Students). I can play it without any problem using playMovie(), but I can't figure out how to save it physically to disk. The full script is located here.

        ...
        movie = synthesizeFrameAndCreateMovie("D:\\FOLDER")
        print movie
        writeQuicktime(movie, "D:\\FOLDER\\movie.mov", 30)   # [LINE 35]
        #playMovie(movie)

    I get this error when calling the function writeQuicktime():

        >>> ======= Loading Program =======
        Movie, frames: 60
        The error was: Index: 0, Size: 0
        I wasn't able to do what you wanted.
        The error java.lang.IndexOutOfBoundsException has occurred
        Please check line 35

    Note: I also tried the function writeAVI(), with the exact same result. This error sounds like a Java bug in the Jython/JES library. I am running JES under Windows 7 and have the common QuickTime and AVI codecs installed, as well as the QTJava library in my JRE... Any brilliant ideas?

    EDIT: Also tried the Linux version, with the same result for both QuickTime and AVI...

  • Get File Size of Modified Image Before Writing to Disk

    - by Otaku
    I'm doing a conversion from .jpg to .png in System.Drawing, and one thing I've found is that the conversion tends to make the resulting .png much larger than the .jpg original, sometimes more than 10x larger. Given that this seems to always be the case (unless you know of a way around it), is there any way to determine the file size of the .png before it is saved to disk? For example, maybe write it to a stream first and then get that stream's size? How would I go about doing this?

  • Python Textwrap - forcing 'hard' breaks

    - by Tom Werner
    I am trying to use textwrap to format an import file that is quite particular in how it is formatted. Basically, it is as follows (line length shortened for simplicity):

        abcdef          <- OK line
        abcdef
         ghijk          <- note the leading space indicating a wrapped line
         lm

    Now, I have got code to work as follows:

        wrapper = TextWrapper(width=80, subsequent_indent=' ',
                              break_long_words=True, break_on_hyphens=False)
        for l in lines:
            wrapline = wrapper.wrap(l)

    This works nearly perfectly; however, the text-wrapping code doesn't do a hard break at the 80-character mark. It tries to be smart and break on a space (at approx 20 chars in). I have got round this by replacing all spaces in the string list with a unique character (#), wrapping them, and then removing the character, but surely there must be a cleaner way? N.B. Any possible answers need to work on Python 2.4, sorry!

  • Dynamic Button Layout without hard coding positions

    - by mmc
    I would like to implement a view (it will actually go into the footer of a UITableView) that contains 1-3 buttons. I would like the layout of these buttons (and the size of the buttons themselves) to change depending on which ones are showing. An example of almost exactly the behavior I want is at the bottom of the Info screen in the Contacts application when you are looking at a specific contact ("Add to Favorites" disappears when it is clicked, and the other buttons expand to take up the available space in a nice animation). As I run through the different layout scenarios, I'm hard-coding all these values, and I just keep thinking, "this is NOT the way I'm supposed to be doing this." It even APPEARS that Interface Builder has some features that may do this for me in the ruler tab, but I'll be darned if I can figure out how they work (if they work) on the phone. Does anyone have any suggestions, or a tutorial online that I could look at to point me in the right direction? Thanks much.

  • Save NSCache Contents to Disk

    - by Cory Imdieke
    I'm writing an app that needs to keep an in-memory cache of a bunch of objects, but one that doesn't get out of hand, so I'm planning on using NSCache to store it all. It looks like it will take care of purging and such for me, which is fantastic. I'd also like to persist the cache between launches, so I need to write the cache data to disk. Is there an easy way to save the NSCache contents to a plist or something? Are there perhaps better ways to accomplish this using something other than NSCache? This app will be on the iPhone, so I'll need only classes that are available in iOS 4+, not just OS X. Thanks!

  • Writing photos (and other things) to disk and getting them later in iphone

    - by ebabchick
    I'm trying to write an image to disk:

        NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
        NSString *documentsPath = [paths objectAtIndex:0];
        NSString *savePath = [documentsPath stringByAppendingPathComponent:@"photo"];
        BOOL result = [data writeToFile:savePath atomically:YES];
        if (result) NSLog(@"Save of photo success!");

    Then, later, I try to retrieve it for a table view cell's image:

        NSData *getPhoto = [NSData dataWithContentsOfFile:[leaf content]];
        UIImage *myImage = [UIImage imageWithData:getPhoto];
        cell.imageView.image = myImage;

    Yes, I checked to make sure that [leaf content] returns the same NSString as savePath. What else could be the problem? [NSData dataWithContentsOfFile:[leaf content]] returns nil... Thanks guys

  • Improve disk read performance (multiple files) with threading

    - by pablo
    I need to find a method to read a big number of small files (about 300k files) as fast as possible. Reading them sequentially using FileStream, reading each entire file in a single call, takes between 170 and 208 seconds (you know how it is: re-run it, the disk cache plays its role, and the time varies). Then I tried using P/Invoke with CreateFile/ReadFile and FILE_FLAG_SEQUENTIAL_SCAN, but I didn't see any change. I tried with several threads (dividing the big set into chunks and having every thread read its own part), and this way I was able to improve speed just a little bit (not even 5% with every new thread, up to 4). Any ideas on how to find the most effective way to do this?
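
    The question is .NET code, but for comparison, here is how the chunking idea might look as a C++11 sketch: the file list is striped across N threads so that per-file latency overlaps. This is a sketch under stated assumptions (file names and thread count are whatever the caller passes in), not a tuned implementation:

        #include <fstream>
        #include <functional>
        #include <string>
        #include <thread>
        #include <vector>

        // Each worker reads every stride-th file; with ~300k small files the
        // win, if any, comes from overlapping per-file latency, not bandwidth.
        void readStripe(const std::vector<std::string>& files, size_t offset,
                        size_t stride, size_t& bytesRead) {
            std::vector<char> buf(64 * 1024);
            for (size_t i = offset; i < files.size(); i += stride) {
                std::ifstream in(files[i].c_str(), std::ios::binary);
                while (in.read(&buf[0], static_cast<std::streamsize>(buf.size()))
                       || in.gcount() > 0)
                    bytesRead += static_cast<size_t>(in.gcount());
            }
        }

        size_t readAll(const std::vector<std::string>& files, size_t threads) {
            std::vector<size_t> counts(threads, 0);
            std::vector<std::thread> pool;
            for (size_t t = 0; t < threads; ++t)
                pool.push_back(std::thread(readStripe, std::cref(files),
                                           t, threads, std::ref(counts[t])));
            size_t total = 0;
            for (size_t t = 0; t < threads; ++t) {
                pool[t].join();
                total += counts[t];
            }
            return total;
        }

    On a single spinning disk, extra threads mostly just reorder seeks, which matches the "not even 5% per thread" observation; an SSD or a striped array is where this approach tends to pay off.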

  • writing structs and classes to disk

    - by Phenom
    The following function writes a struct to a file:

        int btwrite(short rrn, BTPAGE *page_ptr)
        {
            long addr;

            addr = (long) rrn * (long) PAGESIZE + HEADERSIZE;
            lseek(btfd, addr, SEEK_SET);
            return write(btfd, page_ptr, PAGESIZE);
        }

    The following is the struct:

        typedef struct {
            short keycount;           /* number of keys in page */
            int   key[MAXKEYS];       /* the actual keys */
            int   value[MAXKEYS];     /* the actual values */
            short child[MAXKEYS+1];   /* ptrs to rrns of descendants */
        } BTPAGE;

    What would happen if I changed the struct to a class? Would it still work the same? If I added class functions, would the size it takes up on disk increase?
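
    On the "would the size increase" part, a small self-contained C++ sketch (with MAXKEYS shrunk to keep it short): non-virtual member functions live outside the object and leave sizeof unchanged, while a virtual function adds a hidden vtable pointer, so the bytes written to disk would no longer be plain field data.

        #include <iostream>

        const int MAXKEYS = 4;  // stand-in for the real constant

        struct Plain {            // mirror of the C struct
            short keycount;
            int key[MAXKEYS];
            int value[MAXKEYS];
            short child[MAXKEYS + 1];
        };

        struct WithMethods {      // same fields plus an ordinary method
            short keycount;
            int key[MAXKEYS];
            int value[MAXKEYS];
            short child[MAXKEYS + 1];
            int firstKey() const { return key[0]; }
        };

        struct WithVtable {       // a virtual method changes the layout
            short keycount;
            int key[MAXKEYS];
            int value[MAXKEYS];
            short child[MAXKEYS + 1];
            virtual int firstKey() const { return key[0]; }
        };

        int main() {
            std::cout << sizeof(Plain) << ' '        // these two match...
                      << sizeof(WithMethods) << ' '  // ...on typical ABIs
                      << sizeof(WithVtable) << '\n'; // larger: hidden vptr
        }

    So writing a class this way keeps working as long as the type stays plain-old-data; once anything virtual appears, raw write/read of the object is no longer safe.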

  • hp dl580 g5 diag error

    - by maruti
    Server disk access is slow, and running Insight Diagnostics reports this error:

        Error: 640003 DST Error
        Error: 640006: The Read and/or Write HARD error rate is above threshold.
        This drive has experienced/recorded error conditions reported by diagnosis and requires replacement.
