Search Results

Search found 10860 results on 435 pages for 'bad blocks'.


  • Configuring Novell iPrint client on Ubuntu 13.10

    - by Mahdi Sadeghi
    Recently I have struggled a lot to make the Novell iPrint client work on my laptop. I need it for the Follow Me printers at our university (you can collect your printout from any printer). Using this tutorial from Novell, I tried to convert the rpm package and install it on Ubuntu 13.04 & 13.10. The post-install script of the generated deb package had a typo, which I saw in the post-install messages and fixed. Now I have the client running. To see the client UI I installed the Cinnamon desktop (because Unity does not have a system tray, and the old tricks for whitelisting the Novell client didn't work). I have the iPrint plugin installed in Firefox as well (I copied the shared object files to the plugin directories). I try installing printers from the provided ipp URL (which lists the available printers on the server), with no success; clicking a printer name only gives me various errors. Formerly, Firefox used to ask for my network username/password when installing the SSL printer, but now it returns: "iPrint Printer - The printer is currently not available." I can install the non-SSL version, but the printer location is either empty or points to file:///dev/null, and even if I change it to the exact address I see on working machines, it still prints nothing. I have also tried printing with Novell's command line tool, iprntcmd, which is installed at /opt/novell/iprint/bin/:

      msadeghi@werkstatt:/opt/novell/iprint/bin$ ./iprntcmd --addprinter ipp://iprint.rz.hs-offenburg.de/ipp/Follow-me\ -\ IPP
      iprntcmd v05.04.00
      Adding printer ipp://iprint.rz.hs-offenburg.de/ipp/Follow-me - IPP.
      Added printer ipp://iprint.rz.hs-offenburg.de/ipp/Follow-me - IPP successfully.

    It adds the printer with an empty location, and again nothing prints. What I found interesting is the log file at ~/.iprint/errors.txt, with strange errors which I hope somebody here can understand. When I try to install the SSL printer I get these entries (note that the HP is my local printer and has nothing to do with iPrint):

      Thu Oct 31 11:02:03 2013
        Trace Info: iprint.c, line 6690
        Group Info: IPRINT-lib
        Error Code: 4096 (0x1000)
        User ID: 1000
        Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:).
        Debug Msg: IPRINTInterpretURI for file:///dev/null - Unknown Port Type - file

      Thu Oct 31 11:02:03 2013
        Trace Info: iprint.c, line 6800
        Group Info: IPRINT-lib
        Error Code: 4096 (0x1000)
        User ID: 1000
        Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:).
        Debug Msg: IPRINTInterpretURI for hp:/usb/HP_LaserJet_1018?serial=KP103A1 - No Port type specified

      (this same pair of entries repeats at 11:02:05, 11:02:06 and 11:02:08)

      Thu Oct 31 11:02:06 2013
        Trace Info: mydoreq.c, line 676
        Group Info: CLIB
        Error Code: 0 (0x0)
        User ID: 1000
        Error Msg: Success
        Debug Msg: MyCupsDoFileRequest - httpReconnect failed (0)

      Thu Oct 31 11:02:06 2013
        Trace Info: mydoreq.c, line 1293
        Group Info: CUPS-IPP
        Error Code: 1282 (0x502)
        User ID: 1000
        Error Msg: iPrint Printer - The printer is currently not available.
        Debug Msg: MyCupsDoFileRequest - IPP SERVICE UNAVAILABLE

    I should say that a friend of mine can print using the same instructions on CrunchBang with no trouble, and another on 12.04 LTS with a bit more struggle. It also worked for me on Linux Mint Maya with my old laptop. Is there anybody out there who can help me solve these problems? I am really disappointed with Novell and our university support. PS: I had the same problem with 13.04, and it makes no difference whether I am inside the network or connected over VPN.

    Read the article

  • EntLib for Windows Azure

    - by kaleidoscope
    Enterprise Library, popularly known as EntLib, is a collection of Application Blocks targeted at managing oft-needed redundant tasks in enterprise development, like logging, caching, validation, cryptography etc. EntLib currently exposes 9 application blocks:

      Caching Application Block
      Cryptography Application Block
      Data Access Application Block
      Exception Handling Application Block
      Logging Application Block
      Policy Injection Application Block
      Security Application Block
      Validation Application Block
      Unity Dependency Injection and Interception Mechanism

    Ever since the honeymoon period of PoCs and tryouts ended and Azure went mainstream, and more precisely started to go "Enterprise", Azure developers have been demanding EntLib for Azure. The demands seem to have finally been heard, and the powers that be have bestowed on us the current beta release, EntLib 5.0, which supports Windows Azure. The application blocks tailored for Azure are:

      Data Access Application Block (think SQL Azure)
      Exception Handling Application Block (Windows Azure Diagnostics)
      Logging Application Block (Windows Azure Diagnostics)
      Validation Application Block
      Unity Dependency Injection Mechanism

    The EntLib 5.0 beta is now available for download.

    Read the article

  • What am I (a beginner) losing by choosing Cherokee over Apache for serving dynamic content?

    - by Bad Learner
    I am a complete beginner and am planning to set up a photo sharing site. This is the setup I am planning, basically for a start: Cherokee (instead of Apache) for serving dynamic content (a Python-based application), and Nginx for serving static files. Since I am a beginner, what do I have to lose? Can someone, from your experience, please tell me what I'd lose by choosing Cherokee over Apache for serving dynamic content in PHP/Python/whatever? Anything other than the fact that there's a lot of documentation, many people who can help when there's an issue, and so on, because Apache is well established and the most popular web server? Again, my intention is not to start a flame war here. I just want to know whether Cherokee would be better than Apache in terms of performance, reliability and speed when it matters (peak load times). Also, I heard it's a lot faster than Apache at serving dynamic content; is that true?

    Read the article

  • Writing to the UI with MonoDroid using RunOnUIThread

    - by Wallym
    I've been pulling my hair out over the past day or so trying to update the UI in my test app. I was having problem after problem, and I finally got down to my base problem: I could not write out to my TextView. WTF could be causing that? I can write to my UI in other parts of my app. This is pure craziness. I thought long and hard and nothing was coming to me. Wait, the light bulb went on: I am in the wrong thread. Great, so how do I write in the correct thread? MonoDroid supports the entire AsyncTask set of objects, but this seemed like overkill. I kept reading and came across RunOnUiThread().......Bing..........The lightbulb has been invented...BlueStar Airlines (oh wait, wrong context). Anyway, here is what I needed:

      this.RunOnUiThread(() => TextViewControl.Text = "Hello World");

    Enjoy!!!!!! Remember kiddies, running off-device operations on the main UI thread is bad. Not as bad as crossing-the-streams bad, but bad as in trying-to-drive-on-a-flat-tire bad: it won't kill you, but it does keep you from getting anywhere.

    Read the article

  • Self-Documenting Code vs. Commented Code

    - by Phill
    I had a search but didn't find what I was looking for; please feel free to link me if this question has already been asked. Earlier this month this post was made: http://net.tutsplus.com/tutorials/php/why-youre-a-bad-php-programmer/ Basically, to sum it up, you're a bad programmer if you don't write comments. My personal opinion is that code should be descriptive and mostly not require comments, unless it cannot be self-describing. In the example given

      // Get the extension off the image filename
      $pieces = explode('.', $image_name);
      $extension = array_pop($pieces);

    the author said this code should be given a comment. My personal opinion is that the code should be a descriptive function call:

      $extension = GetFileExtension($image_filename);

    However, in the comments someone actually made just that suggestion: http://net.tutsplus.com/tutorials/php/why-youre-a-bad-php-programmer/comment-page-2/#comment-357130 The author responded by saying the commenter was "one of those people", i.e. a bad programmer. What are everyone else's views on self-documenting code vs. commented code?

    Read the article

  • I have a bad install of Windows on another hard drive and it won't let me install a fresh copy. How do I fix it in Ubuntu 12.04?

    - by Dana LaBerge
    Basically, there was a security issue in the drivers for my graphics card. It was a 64-bit card and I installed 32-bit Windows. Apparently, before SP1 (which fixed that issue) was available, 6 trojan horses got in, and they stopped SP1 from installing. After going through the wringer several times, I finally talked to a person who knew the problem. It was something about how the drivers tried to transfer between the 32-bit OS and the 64-bit card that left me open. Ever since, my computer has been slow and has had weird issues: tinypic would never load, and certain programs wouldn't install. So I eventually talked to the guy who knew the problem, and he took the reins and ran some diagnostics. He told me that to fix it I have to format the hard drive and do a fresh install. I'm okay with that, because I was planning on it anyway to upgrade to the 64-bit version. The problem is, how do I do that? I have the disk to install the new copy, but when I go to install it, it tells me I can't and to check the log file. However, I don't know where that log file is, and the attempt wiped out my install of Windows. How do I find the file, and, as a different route to the same goal, how do I zero out the drive from Ubuntu 12.04? (I installed the 64-bit version just the other day.)
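
    A minimal sketch of the zeroing route from Ubuntu, with /dev/sdX standing in for the Windows disk; identify the device first with lsblk, because dd pointed at the wrong device will destroy that device:

      # Identify the target disk by size and model before touching anything
      sudo lsblk -o NAME,SIZE,MODEL

      # Wipe the partition table and boot sector (usually enough for the installer)
      sudo dd if=/dev/zero of=/dev/sdX bs=1M count=1

      # Or zero the entire drive (slow, but leaves no trace of the old install)
      sudo dd if=/dev/zero of=/dev/sdX bs=4M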

    Read the article

  • MPEG-2 playback inconsistent

    - by DustByte
    Many years ago I gave up on Linux because video playback was choppy. Now I'm back, and video playback is still playing up... I have two MPEG files: good.mpg and bad.mpg. (I gathered information about both files with avprobe; the screenshot of its output is not reproduced here.) My machine is an Intel Core 2 Duo E8400 @ 3.00GHz x 2, 64-bit. I do not know what graphics card I have. I run Ubuntu 12.04. So far I have had no problems with YouTube or with playback of various video files, including the file good.mpg. However, the file bad.mpg gives me a headache! It was produced by a respectable "old video tapes to DVD" company; I converted over 10 Video-8 tapes to MPEG through them, and today I collected my hard drive containing the MPEG files. Unfortunately I have problems watching them! Here are some details:

    Using Totem Movie Player 3.0.1, it works well for several seconds, then it gets choppy and the playback is not at all smooth. The player also freezes for a while when trying to jump to another position in the file. Most strangely, though, the total time is shown as 0:42 (42 seconds) instead of the true 00:39:11.

    VLC media player does a better job. It shows the correct total length, but as soon as I jump to a new position in the video, it stalls. Playback also stalls after 30 seconds if I press play and just leave it.

    Using Handbrake and choosing bad.mpg as the source, there is only one title to choose, and it is 6 min 53 seconds; I would have expected the full 39 minutes of the video to show.

    Lastly, putting the file bad.mpg in Dropbox and viewing it on my iPad with the Dropbox app seems fine (disregard the lack of easy jumping forward, due to real-time encoding when streaming it).

    My question is simple: what is going on? Why do I have problems playing the MPEG-2 files I just paid good money for (the issue with bad.mpg applies to all the files I had encoded)? Is it an issue with my particular Linux machine? The graphics card? But then why has everything worked fine so far, and why does the good.mpg file not cause any problems?
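
    Two low-risk experiments one could run here (a sketch, assuming the libav tools that ship with Ubuntu 12.04; the equivalent ffprobe/ffmpeg commands are identical). A wrongly reported duration usually points at broken timestamps or index data in the container rather than at the video itself, and a lossless remux often repairs exactly that:

      # Dump container, duration and stream info as pasteable text
      avprobe bad.mpg

      # Remux without re-encoding; the streams are copied bit for bit
      avconv -i bad.mpg -c copy bad-remuxed.mpg

    If bad-remuxed.mpg reports the right duration and seeks cleanly, the players were fine and the encoding house delivered files with broken timing data.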

    Read the article

  • Windows 7 64bit will not register a 32bit DLL

    - by Bad Neighbor
    I'm trying to install a 32-bit Oracle instant client onto several Windows 7 PCs. This version is the one required by the customer's software. I have successfully installed it on about a dozen PCs using the same installer, but two machines refuse to register a DLL. The two PCs are of different make and model, and I have been able to install this software on these models in the past. The installer throws up an error dialog about registering the DLL (screenshot not reproduced here). The file does get copied to the location referenced in that dialog. If I choose to ignore the error and manually register the DLL later, registration fails with another error (screenshot not reproduced here). This error is returned whether I use the 32-bit (syswow64) or 64-bit version of regsvr32. Command Prompt is run as admin, and the ID with which I'm logged into the PC is an admin. I've tried copying the file into the syswow64 folder, but I get the same error. This same installer works on other PCs. To further complicate the issue, one of the two PCs also will not register an OCX file from a different 32-bit installer. Both PCs are relatively new and have standard software installed. We use MS Forefront for security, but disabling that didn't change the behavior. What am I missing?

    Read the article

  • Is Cherokee (probably) the best static content server for beginner sysadmins?

    - by Bad Learner
    I have read the pros and cons of most of the popular web servers and have come to the conclusion that Apache would (probably) be the best web server for serving dynamic content; no wonder YouTube, Flickr and Facebook, among many others, use it. I do not know whether the C10K problem applies to Apache even when serving dynamic content only, but I think any web server used for dynamic content needs some good tweaking for optimized performance, and since nothing beats Apache when it comes to documentation, resources and support on the web, I will go with Apache for dynamic content. That apart, the confusion begins when it comes to choosing a web server for static content (including streaming videos). I see that Nginx, Cherokee and Lighttpd are among the best (I am not considering non-open-source or non-Linux options here). So, which to choose? I know one cannot go wrong with any of the three, though Lighttpd's development has evidently slowed compared to a while ago. The documentation is pretty good for all three, and hopefully so are the resources (knowledge among the users of the Stack Overflow/Server Fault sites, the web, etc.). Given all that, if I am not wrong, I should go with either Nginx or Cherokee. I would love to see someone clarify the following. First: is Cherokee just as fast (MB/s), performant (connections/s) and reliable (think downtime/restarting the server) as Nginx for serving static content and load balancing, for small, medium, large (and really large) websites and applications? (Think the size of YouTube, Apache or Facebook.) Second: if the answer to that is a big "hell, yes!", then I should probably prefer Cherokee, right? Because, since I am a beginner, it would be a lot easier to set up Cherokee, as it has a graphical admin interface plus really good documentation. Yes? I could be wrong, I could be right; I have put down what I know so that you can offer the most relevant advice. Pardon me if anything I've said is offensive.

    Read the article

  • iPod not mounting

    - by rls
    Tried to connect my iPod, but got this message:

      Error mounting: mount: wrong fs type, bad option, bad superblock on /dev/sdb2,
      missing codepage or helper program, or other error
      In some cases useful info is found in syslog - try dmesg | tail or so

    I have seen links to this here, but being rather green, I don't understand much: https://bugs.launchpad.net/ubuntu/+source/util-linux/+bug/734883 What do I do now? dmesg | tail says:

      [ 2819.709437] sd 6:0:0:0: [sdb] 3901376 4096-byte logical blocks: (15.9 GB/14.8 GiB)
      [ 2819.710161] sd 6:0:0:0: [sdb] Assuming drive cache: write through
      [ 2819.735294] sdb: [mac] sdb1 sdb2
      [ 2819.738060] sd 6:0:0:0: [sdb] 3901376 4096-byte logical blocks: (15.9 GB/14.8 GiB)
      [ 2819.738671] sd 6:0:0:0: [sdb] Assuming drive cache: write through
      [ 2819.738688] sd 6:0:0:0: [sdb] Attached SCSI removable disk
      [ 2820.420130] sd 6:0:0:0: [sdb] Bad block number requested
      [ 2820.420167] hfs: unable to find HFS+ superblock
      [ 2820.612140] sd 6:0:0:0: [sdb] Bad block number requested
      [ 2820.612191] hfs: unable to find HFS+ superblock
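
    A couple of things one might try from a terminal (a sketch, not a guaranteed fix; the kernel log above shows an HFS+ volume on /dev/sdb2, and fsck.hfsplus comes from the hfsprogs package):

      # Try an explicit read-only HFS+ mount instead of relying on autodetection
      sudo mkdir -p /mnt/ipod
      sudo mount -t hfsplus -o ro /dev/sdb2 /mnt/ipod

      # If that also fails, check the volume for damage
      sudo apt-get install hfsprogs
      sudo fsck.hfsplus /dev/sdb2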

    Read the article

  • My boss decided to add a "person to blame" field to every bug report. How can I convince him that it's a bad idea?

    - by MK_Dev
    In one of the latest "WTF" moves, my boss decided that adding a "person to blame" field to our bug tracking template will increase accountability (although we already have a way of tying bugs to features/stories). My arguments that this will decrease morale, increase finger-pointing, and would not account for missing/misunderstood features reported as bugs have gone unheard. What are some other strong arguments against this practice that I can use? Is there any writing on this topic that I can share with the team and the boss? I find this sort of culture unacceptable to work in, but want to try to change it before jumping ship. Any input is appreciated.

    Read the article

  • Alternatives to OAuth?

    - by sdolgy
    The Web industry is shifting, or has shifted, towards using OAuth when extending API services to external consumers & developers. There is some elegance in simplicity... and, well, the 3-step OAuth process isn't too bad; I just find it the best of a bad bunch of options. Are there alternatives out there that could be better, and more secure? The security reference is derived from the following URLs: http://www.infoq.com/news/2010/09/oauth2-bad-for-web http://hueniverse.com/2010/09/oauth-2-0-without-signatures-is-bad-for-the-web/

    Read the article

  • Game 30% done in HTML5. Maybe it was a bad idea. Should I switch to Unity3d? [on hold]

    - by Dokkat
    I'm creating a 3D game in HTML5. It's 30% complete and the hard part is already coded; the server runs on Node.js. Now I'm realizing that maybe it was not a wise choice, because: Three.js still has many bugs, and I don't see the same thing on every machine (each browser and OS can give different results), so I'm afraid my clients will have a hard time getting my game to run properly. I also have tons of sprites and models in my game, and I wonder whether my clients will have to load them all again every time they want to play. Finally, I wonder whether a Node.js server will be fast enough to handle it, and I'm afraid it won't be scalable. What would you advise me? Should I continue and finish the game in HTML5, or is it better to remake it in something else, like Unity3d for the client and (what?) for the server?

    Read the article

  • A more reliable and more flexible sp_MSforeachdb

    - by AaronBertrand
    I've complained about sp_MSforeachdb before. In part of my "Bad Habits to Kick" series in 2009-10, I described how I worked around its sporadic inability to actually process all of the databases on an instance: http://sqlblog.com/blogs/aaron_bertrand/archive/2010/02/08/bad-habits-to-kick-relying-on-undocumented-behavior.aspx I lumped this into the "bad habit" category of relying on undocumented behavior, since, while the procedure sees rampant use, it is in fact both undocumented and unsupported... (read more)

    Read the article

  • Quit job for another but current employer doesn't want to lose me. Would it be a bad idea to stay?

    - by Confused
    So I've handed in my notice at my current job, as I've been offered a job at another company. However, my current employer doesn't want to lose me, and they want to know what it would take for me to stay. I mostly enjoy working there, so I'd be open to negotiation; the new job was an unexpected opportunity that presented itself. The things I'd be looking for are: better computers for developers; the opportunity to work from home occasionally; improved internet access (e.g. being able to download software, no keyword blocking); the chance to work on technologies other than my primary one (we do have projects on other technologies); and a pay increase (though this isn't my primary motivation). I found out that some of these were already in progress when I handed in my notice :( Is it ever a good idea to remain at a company after you've resigned? What if they meet all my conditions and alter my contract accordingly? Will I burn my bridges at the new company (I've already told them I'd accept their offer)? Update: Thanks for the answers; quite a mixed bag, which was interesting. Anyway, just so you know, I've chosen to stay at my current company. So far it definitely feels like the right decision, though I guess I won't know for a few months whether it was.

    Read the article

  • Multiple Kickstarter campaigns. Good? Bad? Ugly?

    - by BerickCook
    I've been toying with the idea of doing a Kickstarter campaign for my game, to help fund some good artists to replace the placeholder graphics I currently have. Just a small goal of $2k or so. Regardless of whether the campaign is successful this time, would it be considered a faux pas to do another, larger Kickstarter once the game is looking better? Would the rewards need to be the same, or could I offer better rewards at lower donation levels in the first one as an "early adopter" bonus?

    Read the article

  • Invalid command 'SSLRequireSSL'

    - by Bad Programmer
    An SVN server that I manage crashed. The server is up and running again, but I can't get SVN working anymore. I followed the instructions listed here: http://mark.koli.ch/2010/03/howto-setting-up-your-own-svn-server-using-apache-and-mod-dav-svn.html Yet when I try to start Apache using /etc/init.d/httpd start, I get a [FAILED] message. There is no content in the error logs. Any suggestions?
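
    For what it's worth, "Invalid command 'SSLRequireSSL'" normally means Apache hit an SSL directive in the config while mod_ssl was not loaded. A quick check, assuming the RHEL/CentOS-style layout implied by /etc/init.d/httpd:

      # Does the config parse, and is the SSL module loaded?
      sudo httpd -t
      sudo httpd -M | grep ssl

      # If ssl_module is missing, install it and try again
      sudo yum install mod_ssl
      sudo /etc/init.d/httpd start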

    Read the article

  • How to solve programming problems using logic? [closed]

    - by md nth
    I know these principles:

      1. Define the constraints and operations. The constraints are the rules you cannot break plus what you want, determined by the end goal; the operations are the actions you can take, the "choices".
      2. Buy some time by solving an easy, solvable piece first.
      3. Halve the difficulty by dividing the project into small goals and blocks; the more blocks you create, the more hinges you have.
      4. Use analogies: reuse other code blocks, yours or from other programmers, that solve a problem similar to the current one.
      5. Experiment instead of guessing, by writing "predicted end" code; in other words, form a hypothesis about what will happen if you do this or that.
      6. Use your tools first; don't begin with unknown code.
      7. Make small goals so you don't get frustrated, and start from the smallest problem.

    Are there other principles?

    Read the article

  • Where do I learn about IP blocks and subnets? Or is there just a calculator that does it all for me?

    - by cwd
    Amazon's Elastic Compute Cloud tools (among others) require the IP block format for their commands:

      ec2-authorize websrv -P tcp -p 80 -s 205.192.0.0/16

    I may be doing this wrong, but as far as I can tell I need to use the block format even for a single IP address. 1) So, how would I do that for this IP: 71.75.232.132? Several years ago I took a CCNA class, and I remember going over IPs and subnets, masks, broadcast addresses, class A/B/C networks, etc. However, a lot seems to have changed since then; for example, I don't think you can tell what "class" a network is in just by looking at it anymore, and sometimes an address could seem to belong to multiple classes. 2) Anyhow, my second question is: where do I go to get a refresher on all these things? 3) Or should I just be using ipcalc or an online calculator to do it all for me, and if so, which one?
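
    On the first question: a single host is just a block containing one address, which in CIDR notation is a /32 (all 32 bits of the address are fixed by the mask). A sketch, reusing the ec2-authorize syntax from the question and the Debian-style ipcalc utility:

      # Authorize exactly one host: /32 means "this address only"
      ec2-authorize websrv -P tcp -p 80 -s 71.75.232.132/32

      # ipcalc expands any CIDR block into mask, network and broadcast
      ipcalc 71.75.232.132/32
      ipcalc 205.192.0.0/16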

    Read the article

  • How to create location blocks in nginx for a single file but have it follow the rules of another location block in addition to its own?

    - by Ryan Detzel
    I have a location block for / that does all of my fastcgi stuff, and it has a normal timeout of 10s. I want to be able to have different timeouts for certain locations (/admin, /sitemap.xml). Is there an easy way to do this without copying the entire location block for each location?

      location /admin {
          fastcgi_read_timeout 5m;
          # also use the location info below
      }
      location /sitemap.xml {
          fastcgi_read_timeout 5m;
          # also use the location info below
      }
      location / {
          fastcgi_pass 127.0.0.1:8014;
          fastcgi_param PATH_INFO $fastcgi_script_name;
          fastcgi_param REQUEST_METHOD $request_method;
          fastcgi_param QUERY_STRING $query_string;
          fastcgi_param CONTENT_TYPE $content_type;
          fastcgi_param CONTENT_LENGTH $content_length;
          fastcgi_pass_header Authorization;
          fastcgi_intercept_errors off;
          fastcgi_param SERVER_ADDR $server_addr;
          fastcgi_param SERVER_PORT $server_port;
          fastcgi_param SERVER_NAME $server_name;
          fastcgi_param SERVER_PROTOCOL $server_protocol;
          fastcgi_param REMOTE_ADDR $remote_addr;
          fastcgi_param REMOTE_PORT $remote_port;
          fastcgi_param HTTP_X_FORWARDED_FOR $http_x_forwarded_for;
      }
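
    One common way to avoid the duplication (a sketch, not from the original thread; the file name is made up): move the shared fastcgi_* directives into a separate file and include it in each location, so only the timeout differs per block:

      # /etc/nginx/fastcgi_app.conf -- all the shared directives from "location /",
      # i.e. the fastcgi_pass, fastcgi_param and related lines above
      fastcgi_pass 127.0.0.1:8014;
      fastcgi_param PATH_INFO $fastcgi_script_name;
      # ... remaining fastcgi_param / fastcgi_* lines ...

    and then:

      location /admin {
          include /etc/nginx/fastcgi_app.conf;
          fastcgi_read_timeout 5m;
      }
      location / {
          include /etc/nginx/fastcgi_app.conf;
          fastcgi_read_timeout 10s;
      }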

    Read the article

  • Does a 3ware "ECC-ERROR" matter on a JBOD when I have ZFS?

    - by Stefan Lasiewski
    I have a FreeBSD 8.x machine running ZFS, with a 3ware 9690SA controller. The 3ware controller shows an ECC-ERROR on one of the disks:

      //host> /c0 show
      VPort Status     Unit Size      Type Phy Encl-Slot Model
      ------------------------------------------------------------------------------
      p0    OK         u0   279.39 GB SAS  0   -         SEAGATE ST3300657SS
      p1    OK         u0   279.39 GB SAS  1   -         SEAGATE ST3300657SS
      p2    OK         u1   931.51 GB SAS  2   -         SEAGATE ST31000640SS
      p3    ECC-ERROR  u2   931.51 GB SAS  3   -         SEAGATE ST31000640SS
      p4    OK         u3   931.51 GB SAS  4   -         SEAGATE ST31000640SS

    /c0 show events shows no ECC errors in its recent history. ZFS does not currently detect any errors; zpool status says "No known data errors". My question: is this ECC-ERROR something that I need to be concerned about? According to the 3ware CLI 9.5.2 manual, an ECC-ERROR means that the 3ware controller caught a read error for one or more sectors on this drive. This sometimes occurs when a RAID array is recovering from a failed disk. I believe ECC-ERRORs can also be detected when the 3ware controller verifies each disk. None of the drives have failed, and thus there was no drive rebuild, so I assume that 3ware discovered a bad sector when it ran its weekly auto-verify scan of the disks. Is this a safe assumption? According to our logs, ZFS has not detected any bad sectors on this drive. ZFS can work around read errors: if ZFS detects a bad sector on the drive, it will simply mark that sector as bad and never use it again. From the ZFS perspective one bad sector isn't a big deal, although it might indicate that the drive is starting to go bad.
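
    One way to get a second opinion from the ZFS side (a sketch; "tank" stands in for the real pool name): a scrub makes ZFS read and checksum every allocated block in the pool, so if the sector the controller flagged holds live data, the scrub will either read it cleanly or surface the error in the per-device counters:

      zpool scrub tank
      # ...once it completes, look at the READ/WRITE/CKSUM counters per device
      zpool status -v tank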

    Read the article

  • std::map for storing static const Objects

    - by Sean M.
    I am making a game similar to Minecraft, and I am trying to find a way to keep a map of Block objects sorted by their id. This is almost identical to the way Minecraft does it: they declare a bunch of static final Block objects and initialize them, and the constructor of each Block puts a reference to that Block into whatever the Java equivalent of a std::map is, so there is a central place to look up ids and the Blocks with those ids. The problem is that I am making my game in C++ and trying to do the exact same thing. In Block.h I declare the Blocks like so:

      // Block.h
      public:
          static const Block Vacuum;
          static const Block Test;

    And in Block.cpp I initialize them like so:

      // Block.cpp
      const Block Block::Vacuum = Block("Vacuum", 0, 0);
      const Block Block::Test = Block("Test", 1, 0);

    The Block constructor looks like this:

      Block::Block(std::string name, uint16 id, uint8 tex)
      {
          // Check for repeat ids
          if (IdInUse(id))
          {
              fprintf(stderr, "Block id %u is already in use!", (uint32)id);
              throw std::runtime_error("You cannot reuse block ids!");
          }
          _id = id;
          // Check for repeat names
          if (NameInUse(name))
          {
              fprintf(stderr, "Block name %s is already in use!", name.c_str());
              throw std::runtime_error("You cannot reuse block names!");
          }
          _name = name;
          _tex = tex;
          //fprintf(stdout, "Using texture %u\n", _tex);
          _transparent = false;
          _solidity = 1.0f;
          idMap[id] = this;
          nameMap[name] = this;
      }

    And finally, the maps I'm using to store references to Blocks by id and name are declared as such:

      std::map<uint16, Block*> Block::idMap = std::map<uint16, Block*>();             // The map of block ids
      std::map<std::string, Block*> Block::nameMap = std::map<std::string, Block*>(); // The map of block names

    The problem comes when I try to get a Block from the map using a method const Block* GetBlock(uint16 id), whose last line is return idMap.at(id);. This line returns a Block with completely random values, like _visibility = 0xcccc, as I found out through debugging. So my question is: is there something wrong with the Blocks being declared as const objects, stored as pointers, and accessed later on? The reason I can't store them as Block& is that a copy of the Block is made when it is entered, so the Block wouldn't have any of the attributes that could be set afterwards in the constructor of a child class, which is why I think I need to store them as pointers. Any help is greatly appreciated, as I don't fully understand pointers yet. Just ask if you need to see any other parts of the code.

    Read the article

  • LVM disappeared after disk replacement on RAID10

    - by user142295
    Here is my problem: I am running Ubuntu 12.04 on a RAID10 (4 disks), on top of which I installed an LVM with two volume groups (one for /, one for /home). The layout of the disks is as follows:

      Disk /dev/sda: 1500.3 GB, 1500301910016 bytes
      255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x0003f3b6

         Device Boot      Start         End      Blocks   Id  System
      /dev/sda1   *          63      481949      240943+  83  Linux
      /dev/sda2          481950  2910640634  1455079342+  fd  Linux raid autodetect
      /dev/sda3      2910640635  2930272064     9815715   82  Linux swap / Solaris

      Disk /dev/sdb: 1500.3 GB, 1500301910016 bytes
      255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00069785

         Device Boot      Start         End      Blocks   Id  System
      /dev/sdb1              63  2910158684  1455079311   fd  Linux raid autodetect
      /dev/sdb2      2910158685  2930272064    10056690   82  Linux swap / Solaris

      Disk /dev/sdc: 1500.3 GB, 1500301910016 bytes
      255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00000000

         Device Boot      Start         End      Blocks   Id  System
      /dev/sdc1              63  2910158684  1455079311   fd  Linux raid autodetect
      /dev/sdc2      2910158685  2930272064    10056690   82  Linux swap / Solaris

      Disk /dev/sdd: 1500.3 GB, 1500301910016 bytes
      255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x000f14de

         Device Boot      Start         End      Blocks   Id  System
      /dev/sdd1              63  2910158684  1455079311   fd  Linux raid autodetect
      /dev/sdd2      2910158685  2930272064    10056690   82  Linux swap / Solaris

    The first disk (/dev/sda) contains the /boot partition on /dev/sda1, and I use grub2 to boot the system off this partition. On top of this RAID10 I installed the two volume groups, one for /, one for /home. This system worked well; I even exchanged two disks during the last two years, and it always worked. But not this time. For the first time, /dev/sda broke. I do not know if that matters; I know I would have struggled anyway with /boot installed on that disk and grub2 installed on the MBR of /dev/sda. Anyway, I did what I always do:

      1. Start Knoppix.
      2. Scan for the raid: sudo mdadm --examine --scan, which returns
         ARRAY /dev/md127 UUID=0dbf4558:1a943464:132783e8:19cdff95
      3. Start it up: sudo mdadm --assemble /dev/md127
      4. Fail the failing disk (SMART event): sudo mdadm /dev/md127 --fail /dev/sda2
      5. Remove the failing disk: sudo mdadm /dev/md127 --remove /dev/sda2
      6. Stop the raid: sudo mdadm -S /dev/md127
      7. Take out the disk and replace it with a new one.
      8. Create the same partitions as on the failing one.
      9. Re-assemble and add the new partition: sudo mdadm --assemble /dev/md127, then sudo mdadm /dev/md127 --add /dev/sda2
      10. Wait 4 hours.

    All looks fine; cat /proc/mdstat returns:

      Personalities : [raid10]
      md127 : active raid10 sda2[0] sdd1[3] sdc1[2] sdb1[1]
            2910158464 blocks 64K chunks 2 near-copies [4/4] [UUUU]

      unused devices: <none>

    and sudo mdadm --detail /dev/md127 returns:

      /dev/md127:
              Version : 0.90
        Creation Time : Wed Jun 10 13:08:46 2009
           Raid Level : raid10
           Array Size : 2910158464 (2775.34 GiB 2980.00 GB)
        Used Dev Size : 1455079232 (1387.67 GiB 1490.00 GB)
         Raid Devices : 4
        Total Devices : 4
      Preferred Minor : 127
          Persistence : Superblock is persistent

          Update Time : Thu Mar 21 16:27:40 2013
                State : clean
       Active Devices : 4
      Working Devices : 4
       Failed Devices : 0
        Spare Devices : 0

               Layout : near=2
           Chunk Size : 64K

                 UUID : 0dbf4558:1a943464:132783e8:19cdff95 (local to host Microknoppix)
               Events : 0.4824680

          Number   Major   Minor   RaidDevice State
             0       8        2        0      active sync   /dev/sda2
             1       8       17        1      active sync   /dev/sdb1
             2       8       33        2      active sync   /dev/sdc1
             3       8       49        3      active sync   /dev/sdd1

    However, there is no trace of the volume groups. Rebooting into Knoppix does not help. Restarting the old system (I actually replugged and re-added the failing disk for that; the system begins to start, but then fails to see the / partition, which is no wonder if the volume group is gone) does not help either. sudo vgscan, sudo vgdisplay, sudo lvs, sudo lvdisplay and sudo vgscan --mknodes all return "No volume groups found". I am completely at a loss. Can anyone tell me if and how I can recover my data? Thanks in advance!
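
    A couple of read-only checks one could run from Knoppix before anything more drastic (a sketch; vg_root stands in for the real volume group name). The LVM physical-volume label lives in the first sectors of /dev/md127, and LVM keeps text backups of the VG metadata under /etc/lvm/backup and /etc/lvm/archive on the installed root, so the questions are whether the label survived and whether a metadata backup is reachable:

      # Is there still an LVM PV label on the assembled array?
      sudo pvscan
      sudo pvck /dev/md127

      # If the label is gone, a metadata backup allows recreating it
      # (the backup sits on the / that currently cannot be mounted, so it
      # may have to come from regular backups instead):
      #   sudo pvcreate --uuid <old-PV-UUID> --restorefile <backup-file> /dev/md127
      #   sudo vgcfgrestore -f <backup-file> vg_root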

    Read the article

  • mdadm: Win7-install created a boot partition on one of my RAID6 drives. How to rebuild?

    - by EXIT_FAILURE
    My problem happened when I attempted to install Windows 7 on its own SSD. The Linux OS I use, which has knowledge of the software RAID system, is on an SSD that I disconnected prior to the install, so that Windows (or I) wouldn't inadvertently mess it up. However, and in retrospect foolishly, I left the RAID disks connected, thinking that Windows wouldn't be so ridiculous as to mess with an HDD that it sees as just unallocated space. Boy was I wrong! After copying the installation files to the SSD (as expected and desired), it also created an ntfs partition on one of the RAID disks. Both unexpected and totally undesired! I changed out the SSDs again and booted up in Linux. mdadm didn't seem to have any problem assembling the array as before, but if I tried to mount the array, I got the error message:

      mount: wrong fs type, bad option, bad superblock on /dev/md0,
      missing codepage or helper program, or other error
      In some cases useful info is found in syslog - try dmesg | tail or so

    dmesg:

      EXT4-fs (md0): ext4_check_descriptors: Block bitmap for group 0 not in group (block 1318081259)!
      EXT4-fs (md0): group descriptors corrupted!

    I then used qparted to delete the newly created ntfs partition on /dev/sdd so that it matched the other three /dev/sd{b,c,e}, and requested a resync of my array with

      echo repair > /sys/block/md0/md/sync_action

    This took around 4 hours, and upon completion dmesg reports:

      md: md0: requested-resync done.

    A bit brief after a 4-hour task, though I'm unsure where other log files exist (I also seem to have messed up my sendmail configuration). In any case: no change reported according to mdadm, everything checks out. mdadm -D /dev/md0 still reports:

              Version : 1.2
        Creation Time : Wed May 23 22:18:45 2012
           Raid Level : raid6
           Array Size : 3907026848 (3726.03 GiB 4000.80 GB)
        Used Dev Size : 1953513424 (1863.02 GiB 2000.40 GB)
         Raid Devices : 4
        Total Devices : 4
          Persistence : Superblock is persistent

          Update Time : Mon May 26 12:41:58 2014
                State : clean
       Active Devices : 4
      Working Devices : 4
       Failed Devices : 0
        Spare Devices : 0

               Layout : left-symmetric
           Chunk Size : 4K

                 Name : okamilinkun:0
                 UUID : 0c97ebf3:098864d8:126f44e3:e4337102
               Events : 423

          Number   Major   Minor   RaidDevice State
             0       8       16        0      active sync   /dev/sdb
             1       8       32        1      active sync   /dev/sdc
             2       8       48        2      active sync   /dev/sdd
             3       8       64        3      active sync   /dev/sde

    Trying to mount it still reports the same error as above, with the same two dmesg lines. I'm a bit unsure where to proceed from here, and trying stuff "to see if it works" is a bit too risky for me. This is what I suggest I should attempt: tell mdadm that /dev/sdd (the one that Windows wrote into) isn't reliable anymore, pretend it is newly re-introduced to the array, and reconstruct its content based on the other three drives. I also could be totally wrong in my assumptions, and the creation of the ntfs partition on /dev/sdd and its subsequent deletion may have changed something that cannot be fixed this way. My question: help, what should I do? If I should do what I suggested, how do I do that? From reading the documentation, etc., I would think maybe:

      mdadm --manage /dev/md0 --set-faulty /dev/sdd
      mdadm --manage /dev/md0 --remove /dev/sdd
      mdadm --manage /dev/md0 --re-add /dev/sdd

    However, the documentation examples suggest /dev/sdd1, which seems strange to me, as there is no partition there as far as Linux is concerned, just unallocated space; maybe these commands won't work without one. Maybe it makes sense to mirror the partition table of one of the other raid devices that weren't touched, before --re-add, something like:

      sfdisk -d /dev/sdb | sfdisk /dev/sdd

    Bonus question: why would the Windows 7 installation do something so st...potentially dangerous?

    Update: I went ahead and marked /dev/sdd as faulty, and removed it (not physically) from the array:

      # mdadm --manage /dev/md0 --set-faulty /dev/sdd
      # mdadm --manage /dev/md0 --remove /dev/sdd

    However, attempting to --re-add was disallowed:

      # mdadm --manage /dev/md0 --re-add /dev/sdd
      mdadm: --re-add for /dev/sdd to /dev/md0 is not possible

    --add was fine:

      # mdadm --manage /dev/md0 --add /dev/sdd

    mdadm -D /dev/md0 now reports the state as clean, degraded, recovering, and /dev/sdd as spare rebuilding. /proc/mdstat shows the recovery progress:

      md0 : active raid6 sdd[4] sdc[1] sde[3] sdb[0]
            3907026848 blocks super 1.2 level 6, 4k chunk, algorithm 2 [4/3] [UU_U]
            [>....................]  recovery =  2.1% (42887780/1953513424) finish=348.7min speed=91297K/sec

    nmon also shows expected output:

      ¦sdb    0%    87.3    0.0| >                                                  |¦
      ¦sdc   71%   109.1    0.0|RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR             > |¦
      ¦sdd   40%     0.0   87.3|WWWWWWWWWWWWWWWWWWWW                              > |¦
      ¦sde    0%    87.3    0.0|>                                                  ||

    It looks good so far. Crossing my fingers for another five+ hours :)

    Update 2: The recovery of /dev/sdd finished, with dmesg output:

      [44972.599552] md: md0: recovery done.
      [44972.682811] RAID conf printout:
      [44972.682815]  --- level:6 rd:4 wd:4
      [44972.682817]  disk 0, o:1, dev:sdb
      [44972.682819]  disk 1, o:1, dev:sdc
      [44972.682820]  disk 2, o:1, dev:sdd
      [44972.682821]  disk 3, o:1, dev:sde

    Attempting mount /dev/md0 still reports:

      mount: wrong fs type, bad option, bad superblock on /dev/md0,
      missing codepage or helper program, or other error

    and on dmesg:

      [44984.159908] EXT4-fs (md0): ext4_check_descriptors: Block bitmap for group 0 not in group (block 1318081259)!
      [44984.159912] EXT4-fs (md0): group descriptors corrupted!

    I'm not sure what to do now. Suggestions? Output of dumpe2fs /dev/md0:

      dumpe2fs 1.42.8 (20-Jun-2013)
      Filesystem volume name:   Atlas
      Last mounted on:          /mnt/atlas
      Filesystem UUID:          e7bfb6a4-c907-4aa0-9b55-9528817bfd70
      Filesystem magic number:  0xEF53
      Filesystem revision #:    1 (dynamic)
      Filesystem features:      has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
      Filesystem flags:         signed_directory_hash
      Default mount options:    user_xattr acl
      Filesystem state:         clean
      Errors behavior:          Continue
      Filesystem OS type:       Linux
      Inode count:              244195328
      Block count:              976756712
      Reserved block count:     48837835
      Free blocks:              92000180
      Free inodes:              243414877
      First block:              0
      Block size:               4096
      Fragment size:            4096
      Reserved GDT blocks:      791
      Blocks per group:         32768
      Fragments per group:      32768
      Inodes per group:         8192
      Inode blocks per group:   512
      RAID stripe width:        2
      Flex block group size:    16
      Filesystem created:       Thu May 24 07:22:41 2012
      Last mount time:          Sun May 25 23:44:38 2014
      Last write time:          Sun May 25 23:46:42 2014
      Mount count:              341
      Maximum mount count:      -1
      Last checked:             Thu May 24 07:22:41 2012
      Check interval:           0 (<none>)
      Lifetime writes:          4357 GB
      Reserved blocks uid:      0 (user root)
      Reserved blocks gid:      0 (group root)
      First inode:              11
      Inode size:               256
      Required extra isize:     28
      Desired extra isize:      28
      Journal inode:            8
      Default directory hash:   half_md4
      Directory Hash Seed:      e177a374-0b90-4eaa-b78f-d734aae13051
      Journal backup:           inode blocks
      dumpe2fs: Corrupt extent header while reading journal super block

    Read the article
