Search Results

Search found 3761 results on 151 pages for 'revision history'.

  • Using smartctl to get vendor specific Attributes from ssd drive behind a SmartArray P410 controller

    - by Lairsdragon
    Recently I deployed some HP servers with SSDs behind a SmartArray P410 controller. While not officially supported by HP, the servers have worked well so far. Now I would like to get wear-level information, error statistics, etc. from the drives. The SA P410 supports a passthrough of SMART commands to a single drive in the array, but I was not able to get the interesting values from the drive that way. In this case the wear-level indicator (attribute ID 233) is of particular interest to me, but it is only present when the drive is attached directly to a SATA controller.

    smartctl on a directly connected SSD:

        # smartctl -A /dev/sda
        smartctl version 5.38 [x86_64-unknown-linux-gnu] Copyright (C) 2002-8 Bruce Allen
        Home page is http://smartmontools.sourceforge.net/

        === START OF READ SMART DATA SECTION ===
        SMART Attributes Data Structure revision number: 5
        Vendor Specific SMART Attributes with Thresholds:
        ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
          3 Spin_Up_Time            0x0000 100   000   000    Old_age  Offline In_the_past 0
          4 Start_Stop_Count        0x0000 100   000   000    Old_age  Offline In_the_past 0
          5 Reallocated_Sector_Ct   0x0002 100   100   000    Old_age  Always  -           0
          9 Power_On_Hours          0x0002 100   100   000    Old_age  Always  -           8561
         12 Power_Cycle_Count       0x0002 100   100   000    Old_age  Always  -           55
        192 Power-Off_Retract_Count 0x0002 100   100   000    Old_age  Always  -           29
        232 Unknown_Attribute       0x0003 100   100   010    Pre-fail Always  -           0
        233 Unknown_Attribute       0x0002 088   088   000    Old_age  Always  -           0
        225 Load_Cycle_Count        0x0000 198   198   000    Old_age  Offline -           508509
        226 Load-in_Time            0x0002 255   000   000    Old_age  Always  In_the_past 0
        227 Torq-amp_Count          0x0002 000   000   000    Old_age  Always  FAILING_NOW 0
        228 Power-off_Retract_Count 0x0002 000   000   000    Old_age  Always  FAILING_NOW 0

    smartctl on a P410-connected SSD:

        # ./smartctl -A -d cciss,0 /dev/cciss/c1d0
        smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (local build)
        Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

    (Right, it is completely empty.)

    smartctl on a P410-connected HDD:

        # ./smartctl -A -d cciss,0 /dev/cciss/c0d0
        smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (local build)
        Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

        Current Drive Temperature:     27 C
        Drive Trip Temperature:        68 C
        Vendor (Seagate) cache information
          Blocks sent to initiator = 1871654030
          Blocks received from initiator = 1360012929
          Blocks read from cache and sent to initiator = 2178203797
          Number of read and write commands whose size <= segment size = 46052239
          Number of read and write commands whose size > segment size = 0
        Vendor (Seagate/Hitachi) factory information
          number of hours powered up = 3363.25
          number of minutes until next internal SMART test = 12

    Am I hunting a bug here, or is this a limitation of the P410's SMART command passthrough?
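
    A hedged aside beyond the original question: later smartmontools releases document a sat+cciss device type that wraps ATA SMART commands in SAT passthrough for SATA drives behind cciss controllers. Assuming a build new enough to support it (an assumption to verify against your smartctl man page), something like this might reach the SSD's vendor attributes:

        # Assumes smartmontools with sat+cciss support; the device path and
        # drive index are taken from the post above.
        ./smartctl -A -d sat+cciss,0 /dev/cciss/c1d0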

  • NUnit vs Visual Studio 2010's MSTest?

    - by David White
    I realise that there are many older questions addressing the general question of NUnit vs MSTest for versions of Visual Studio up to 2008 (such as this one). Microsoft have a history of getting things right in their third version. For MSTest, that is VS2010. Have they done so with MSTest? Would you use it in a new project in preference to NUnit? My specific concerns:

    - speed
    - running tests within CruiseControl.NET (either command line or MSBuild task)
    - code coverage reports from CC.NET
    - can you run MSTest tests in debug mode?

    (We use ReSharper, so test runners are not an issue for us. We have used NUnit for the last few years. We do not have TFS.)

  • Running git-svn with cron results in garbage in .git

    - by Paul
    I've set up a git-svn repo with cron to fetch from the svn repo daily. I have a script to do the fetching, and this is what is invoked by cron. Everything is fine with the repo, and the script works fine when executed manually. However, when it runs under cron, empty files get dropped into the .git directory. The files have names that look like base64 output, e.g. juTrvjP6m8 and kcKf3hu3b4. Two of these files show up for every cron run. I thought these might be commit hashes, but they're not; git-show says it's an unknown revision.

    I set up the repo as follows:

        git svn init http://svn.ip.addr/repo
        git svn fetch svn-remote

    My script looks like this:

        cd /gitsvn/dir
        git svn fetch svn-remote
        git svn push pub

    The last line pushes the repo to a separate (bare) public repo from which others can clone. I'm piping the output from the cron job to a file, which looks like this:

        fatal: unable to run 'git-svn'
        Counting objects: 21, done.
        Delta compression using up to 2 threads.
        Compressing objects: 100% (10/10), done.
        Writing objects: 100% (11/11), 59.08 KiB, done.
        Total 11 (delta 8), reused 0 (delta 0)
        To /gitpub/repo.git
           360faf5..a153b0d  trunk -> trunk

    The line "fatal: unable to run 'git-svn'" is alarming, but the fetch seems to go ahead anyway. Any suggestions? Where are these empty garbage files coming from, and how do I stop them? Am I in for bigger problems in the future? BTW, I'm using git 1.6.3.3.
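
    A hedged guess that goes beyond the question: "fatal: unable to run 'git-svn'" under cron is frequently a PATH problem, because cron starts scripts with a minimal environment and git cannot find its git-svn helper. A sketch of a defensive script header, with paths that are assumptions to check against `git --exec-path` on the machine:

        #!/bin/sh
        # Assumed locations; adjust for this box. The git exec dir is often
        # /usr/lib/git-core or /usr/libexec/git-core depending on the distro.
        PATH=/usr/local/bin:/usr/bin:/bin:/usr/lib/git-core
        export PATH

        cd /gitsvn/dir || exit 1
        git svn fetch svn-remote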

  • Spring Webflow 2 - Double Submit

    - by John W.
    Hello all, I am investigating a possible issue with double submits, specifically the possibility of a double submit from a webflow execution. I have read many times that webflow handles double submits; there are plenty of references to that. However, I then came across a forum response on the SpringSource forums contradicting what I had read:

        SWF synchronizes on the conversation. Only one request will be processed
        at a time per conversation. Take note that if you're using snapshots,
        then it's possible repeatedly clicking on the submit button will generate
        a second request. I would recommend setting history to invalidate or
        discard in the transition from your view-state.

    We do have snapshots enabled, but the book notes that using snapshots actually helps solve double submits. Does anyone have any insight on this? Thanks.
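
    For reference, and hedged as my own reading rather than something from the thread: the setting the response describes is the history attribute on a transition, which in Web Flow 2 accepts discard or invalidate. A hypothetical flow snippet (state and event names are made up for illustration):

        <view-state id="confirmOrder">
            <!-- Invalidate history snapshots when leaving this view, so the
                 back button cannot restore and re-submit it. -->
            <transition on="submit" to="orderPlaced" history="invalidate"/>
        </view-state>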

  • Installing a source control without admin rights

    - by Simon T.
    I'm forced to use SourceSafe at my job. There is no way this is going to change. I would like to use another source control system in parallel for my own needs. I want to be able to keep a history of my modifications, branch easily, and merge. I can install any application that doesn't require admin rights. I cannot install Python or anything that integrates into File Explorer. I'm not much of a command-line guy, so a GUI is a must. I managed to install Mercurial but not TortoiseHG. There is a chance msysgit would install, but the GUI isn't very good. Any suggestions?

  • How to persist a very abstract data type between sessions: PHP

    - by Greelmo
    I have an abstract data type that behaves much like a stack. It represents a history of "graph objects" made by a particular user. Each "graph object" holds one or more "lines", a date range, keys, and a title. Each "line" holds a SQL generator configured for a particular subset of data in my DB. I would like these "histories" to be available to users between their sessions, in the form of a tab that reads something like "most recent graphs". What do you believe to be the best way to persist this type of data between sessions? This application could get rather large, so efficiency is a concern. Thanks in advance.
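
    One common pattern, offered as a hedged sketch rather than a known answer: serialize the structure and key it by user in a database table. The table and column names here are invented, and REPLACE INTO is MySQL-flavored:

        <?php
        // Hypothetical schema: user_histories(user_id INT PRIMARY KEY, history BLOB)
        function save_history(PDO $db, $userId, $history) {
            $stmt = $db->prepare(
                'REPLACE INTO user_histories (user_id, history) VALUES (?, ?)');
            $stmt->execute(array($userId, serialize($history)));
        }

        function load_history(PDO $db, $userId) {
            // Note: unserialize() needs the class definitions loaded first.
            $stmt = $db->prepare(
                'SELECT history FROM user_histories WHERE user_id = ?');
            $stmt->execute(array($userId));
            $blob = $stmt->fetchColumn();
            return $blob === false ? null : unserialize($blob);
        }

    Since the "lines" hold SQL generators rather than raw result sets, the serialized form should stay small, which helps with the stated efficiency concern.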

  • Create signed urls for CloudFront with Ruby

    - by wiseleyb
    History:

    1. I created a key and pem file on Amazon.
    2. I created a private bucket.
    3. I created a public distribution and used the origin ID to connect to the private bucket: works.
    4. I created a private distribution and connected it the same as #3 - now I get access denied: expected.

    I'm having a really hard time generating a URL that will work. I've been trying to follow the directions described here: http://docs.amazonwebservices.com/AmazonCloudFront/latest/DeveloperGuide/index.html?PrivateContent.html

    This is what I've got so far... it doesn't work, though; I'm still getting access denied:

        def url_safe(s)
          s.gsub('+','-').gsub('=','_').gsub('/','~').gsub(/\n/,'').gsub(' ','')
        end

        def policy_for_resource(resource, expires = Time.now + 1.hour)
          %({"Statement":[{"Resource":"#{resource}","Condition":{"DateLessThan":{"AWS:EpochTime":#{expires.to_i}}}}]})
        end

        def signature_for_resource(resource, key_id, private_key_file_name, expires = Time.now + 1.hour)
          policy = url_safe(policy_for_resource(resource, expires))
          key = OpenSSL::PKey::RSA.new(File.readlines(private_key_file_name).join(""))
          url_safe(Base64.encode64(key.sign(OpenSSL::Digest::SHA1.new, (policy))))
        end

        def expiring_url_for_private_resource(resource, key_id, private_key_file_name, expires = Time.now + 1.hour)
          sig = signature_for_resource(resource, key_id, private_key_file_name, expires)
          "#{resource}?Expires=#{expires.to_i}&Signature=#{sig}&Key-Pair-Id=#{key_id}"
        end

        resource = "http://d27ss180g8tp83.cloudfront.net/iwantu.jpeg"
        key_id = "APKAIS6OBYQ253QOURZA"
        pk_file = "doc/pk-APKAIS6OBYQ253QOURZA.pem"

        puts expiring_url_for_private_resource(resource, key_id, pk_file)

    Can anyone tell me what I'm doing wrong here?
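
    An aside that is my own reading of the code above, not something from the question: signature_for_resource runs the policy through url_safe before signing, and url_safe rewrites the / characters inside the resource URL, so the policy that gets signed no longer matches the one CloudFront reconstructs. A hedged fix is to sign the raw policy and URL-safe only the Base64 signature:

        def signature_for_resource(resource, key_id, private_key_file_name, expires = Time.now + 1.hour)
          policy = policy_for_resource(resource, expires)  # sign the policy untouched
          key = OpenSSL::PKey::RSA.new(File.read(private_key_file_name))
          url_safe(Base64.encode64(key.sign(OpenSSL::Digest::SHA1.new, policy)))
        end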

  • Modify cmd.exe properties using the command prompt

    - by CodexArcanum
    Isn't that nicely recursive? I've got a portable command prompt on my external drive, and it has a nice .bat file to configure some initial settings, but I'd like more! Here's what I know how to set from a .bat:

    - Colors: color XY, where X and Y are hex digits for the predefined colors
    - Prompt: prompt $p$g sets the prompt to the default "C:\etc\etc>"
    - Title: title "text" sets the window title to "text"
    - Screen size: mode con: cols=XX lines=YY sets the column and line counts of the window
    - Path: SET PATH=%~d0\bin;%PATH% sets up a local path to my tools and appends the computer's path

    So that's all great. But there are a few settings I can't seem to set from the .bat. How would I set these up without using the Properties dialogue:

    - Buffer: not the screen size, but the buffer
    - Options like QuickEdit mode and autocomplete
    - Popup colors
    - Font (and can you use a font on the portable drive, or must it be installed to work?)
    - Command history options
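
    A hedged pointer beyond the original question: most of those remaining settings live in the registry under HKCU\Console, which reg.exe can edit without admin rights; they apply to newly opened console windows. The value names below are my best recollection and worth verifying in regedit:

        @echo off
        rem QuickEdit mode (1 = on)
        reg add HKCU\Console /v QuickEdit /t REG_DWORD /d 1 /f
        rem Screen buffer: high word = lines, low word = columns (here 3000x120)
        reg add HKCU\Console /v ScreenBufferSize /t REG_DWORD /d 0x0bb80078 /f
        rem Commands kept in the history buffer
        reg add HKCU\Console /v HistoryBufferSize /t REG_DWORD /d 50 /f
        rem Font face; fonts generally must be installed, not just on the drive
        reg add HKCU\Console /v FaceName /t REG_SZ /d "Lucida Console" /f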

  • Dataset field DBNull -> int?

    - by BobClegg
    SQL Server int field. Value sometimes null. The DataAdapter fills the dataset OK and can display the data in a DataGridView OK. When trying to retrieve the data programmatically from the dataset, the field retrieval code throws a StrongTypingException:

        [global::System.Diagnostics.DebuggerNonUserCodeAttribute()]
        public int curr_reading {
            get {
                try {
                    return ((int)(this[this.tableHistory.curr_readingColumn]));
                }
                catch (global::System.InvalidCastException e) {
                    throw new global::System.Data.StrongTypingException(
                        "The value for column \'curr_reading\' in table \'History\' is DBNull.", e);
                }
            }
        }

    I got past this by checking for DBNull in the get accessor and returning null, but... when the dataset structure is modified (still developing), my changes (unsurprisingly) are gone. What is the best way to handle this situation? It seems I am stuck with dealing with it at the dataset level. Is there some sort of attribute that can tell the auto code generator to leave the changes in place?
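
    A hedged idea rather than a definitive answer: typed datasets are generated as partial classes, so an accessor added in a separate file survives regeneration. The dataset class name below is invented; the row class and Is...Null() method follow the usual typed-dataset naming for a table called History:

        // In a separate file the designer never regenerates, e.g. HistoryRowExtensions.cs
        public partial class MyDataSet {
            public partial class HistoryRow {
                // Nullable view of curr_reading that tolerates DBNull.
                public int? CurrReadingOrNull {
                    get {
                        return this.Iscurr_readingNull() ? (int?)null : this.curr_reading;
                    }
                }
            }
        }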

  • Difference between Document-oriented-DB and Bigtable clones

    - by chen
    We are looking for a suitable storage engine for our weblog history data. We looked at Bigtable's paper and understand that it suits us well. However, I also understand that document-oriented DBs such as MongoDB seem to provide somewhat more powerful schemas; i.e., they can model our data as well. I wonder how people choose a scalable NoSQL DB nowadays. I have read plenty of articles along the lines of "we looked at A, B, and C, and we decided to use C", but I'd like to see some benchmark numbers. What I am asking is: if MongoDB and the like can provide the same level of performance as the Bigtable clones, why don't web companies choose them (and be prepared to deal with potentially more complex data problems)? Thanks. By the way, I read an article (which convinced me at the time) saying Cassandra does not fit the M/R operation; any comments?

  • Interrupted system call during "hg convert"

    - by Aaron Digulla
    When I run "hg convert" to convert a Subversion repository to Mercurial, I get this error:

        fetching revision log for "/trunk" from 1538 to 0
        run hg sink post-conversion action
        Traceback (most recent call last):
          File "/usr/lib/pymodules/python2.6/mercurial/dispatch.py", line 46, in _runcatch
            return _dispatch(ui, args)
          File "/usr/lib/pymodules/python2.6/mercurial/dispatch.py", line 454, in _dispatch
            return runcommand(lui, repo, cmd, fullargs, ui, options, d)
          File "/usr/lib/pymodules/python2.6/mercurial/dispatch.py", line 324, in runcommand
            ret = _runcommand(ui, options, cmd, d)
          File "/usr/lib/pymodules/python2.6/mercurial/dispatch.py", line 505, in _runcommand
            return checkargs()
          File "/usr/lib/pymodules/python2.6/mercurial/dispatch.py", line 459, in checkargs
            return cmdfunc()
          File "/usr/lib/pymodules/python2.6/mercurial/dispatch.py", line 453, in <lambda>
            d = lambda: util.checksignature(func)(ui, *args, **cmdoptions)
          File "/usr/lib/pymodules/python2.6/mercurial/util.py", line 386, in check
            return func(*args, **kwargs)
          File "/usr/lib/pymodules/python2.6/hgext/convert/__init__.py", line 229, in convert
            return convcmd.convert(ui, src, dest, revmapfile, **opts)
          File "/usr/lib/pymodules/python2.6/hgext/convert/convcmd.py", line 398, in convert
            c.convert(sortmode)
          File "/usr/lib/pymodules/python2.6/hgext/convert/convcmd.py", line 312, in convert
            parents = self.walktree(heads)
          File "/usr/lib/pymodules/python2.6/hgext/convert/convcmd.py", line 109, in walktree
            commit = self.cachecommit(n)
          File "/usr/lib/pymodules/python2.6/hgext/convert/convcmd.py", line 267, in cachecommit
            commit = self.source.getcommit(rev)
          File "/usr/lib/pymodules/python2.6/hgext/convert/subversion.py", line 433, in getcommit
            self._fetch_revisions(revnum, stop)
          File "/usr/lib/pymodules/python2.6/hgext/convert/subversion.py", line 814, in _fetch_revisions
            for entry in stream:
          File "/usr/lib/pymodules/python2.6/hgext/convert/subversion.py", line 122, in __iter__
            entry = pickle.load(self._stdout)
        IOError: [Errno 4] Interrupted system call
        abort: Interrupted system call

    Apparently it is possible to restart a read on EINTR, but how would I do that with pickle.load()? Also, I wonder where that signal comes from. I suspect it's SIGCHLD, but shouldn't popen() handle that?
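
    For what it's worth, a hedged, generic sketch of the usual EINTR-retry idiom in Python 2 wrapped around pickle.load; this is not Mercurial's own code, and note the caveat in the comment:

        import errno
        import pickle

        def load_retrying(stream):
            """Retry pickle.load() when the read is interrupted by a signal (EINTR)."""
            while True:
                try:
                    return pickle.load(stream)
                except IOError, e:
                    if e.errno != errno.EINTR:
                        raise
                    # Retrying is only safe if the interruption happened before
                    # any bytes were consumed; if it hit mid-record, the stream
                    # is out of sync and a retry will misparse.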

  • Rebuild Apple RAID set

    - by Clinton Blackmore
    We have a Mac Pro tower with an Apple RAID card in it, using third-party drives. When one drive failed, we replaced it, and the RAID 5 set was nearly done rebuilding when the computer was rebooted. It did not come back up. We are now booting off a different internal volume, and have three (third-party) drives of identical spec (including revision and firmware) in the box. One of the drives is a global spare; the other two are recognized as belonging to a RAID set but are in "Roaming" mode. The intention is to recreate the three-drive RAID set using the data on the two good drives. When we tell the system to create a RAID 5 using the three drives, it tells us that it'll create a RAID set but everything will be lost. There are no obvious options in Apple's RAID Utility to rebuild a RAID using the two good drives while incorporating the third, and we've looked through the options for the raidutil command. Fortunately, all important data is backed up and we can rebuild from scratch, but is there any way to make the RAID set work again?

  • How would you audit ASP.NET Membership tables, while recording what user made the changes?

    - by Pete
    Using a trigger-based approach to audit logging, I am recording the history of changes made to tables in the database. The approach I'm using (with a static SQL Server login) to record which user made the change involves running a stored procedure at the outset of each database connection. The triggers use this username when recording the audit rows. (The triggers are provided by the product OmniAudit.) However, the ASP.NET Membership tables are accessed primarily through the Membership API. I need to pass in the current user's identity when the Membership API opens its database connection. I tried subclassing MembershipProvider, but I cannot access the underlying database connection. It seems like this would be a common problem. Does anyone know of any hooks we can use when ASP.NET Membership makes its database connection?

  • Subversion roadmap

    - by gbjbaanb
    Recently there was a post to the Subversion dev mailing list suggesting a vision and roadmap for the future of Subversion. As a result, I'm posting this to elicit some suggestions and contributions from the users of Subversion. Any comments are welcome, and I shall feed a synopsis back to the dev mailing list with a link to this question. In the post, several ideas were suggested as being "very nice to have" and are offered as the starting point of a future roadmap:

    - Obliterate
    - Shelve/Checkpoint
    - Repository-dictated Configuration
    - Rename Tracking
    - Improved Merging
    - Improved Tree Conflict Handling
    - Enterprise Authentication Mechanisms
    - Forward History Searching
    - Log Message Templates

    So, given all of the above, what features in Subversion, or missing from Subversion, do you think could be improved or added?

  • Skip Corrupt Revisions During SvnAdmin Load

    - by cisellis
    I have a dump file that I am generating from VSS with the use of the VSS2SVN script. I've tested the generated dump file before, and some of the revisions are corrupt for one reason or another (binary data or long path strings seem to be the main culprits). This is fine. In the past I have used svndumpfilter to split the dump file, remove the corrupt revisions, and continue to load the repository. It worked, but it took a lot of manual effort to start the load, hit the bad revision, split the dump file, continue loading the repo, and so on. This dump file is pretty large (~5 GB) and takes several hours to load. I think I know the answer to this, but is there any way to simply tell svnadmin load to keep going and skip corrupt revisions? I know how to verify, back up, etc. the dump file and don't need any of that. I don't care about recovering corrupt revisions. I just want to start the load, walk away, and not worry about checking it every few hours to manually remove the corrupt revisions. Is that possible? Thanks.

  • How to migrate project from RCS to CVS?

    - by Norman Ramsey
    I have a 20-year-old project that I would like to migrate from RCS to git, without losing the history. All web pages suggest that the One True Path is through CVS. But after an hour of Googling and trying different scripts, I have yet to find anything that successfully converts my RCS project tree to CVS. I'm hoping the good people at Stack Overflow will know what actually works, as opposed to what is claimed to work and doesn't. (I searched Stack Overflow using both the native SO search and a Google search, but if there's a helpful answer in the database, I missed it.) Things that don't work that I still remember:

    - The rcs-to-cvs script that ships in the contrib directory of the CVS sources
    - The rcs-fast-export tool at http://git.oblomov.eu/rcs-fast-export
    - The rcs2cvs script found in a document called "CVS-RCS- HOW-TO Document for Linux"
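
    One hedged observation, not from the question: CVS stores its history in the same ,v files RCS uses, so a crude migration is often just copying the RCS tree into a module directory under an existing CVSROOT, after which git cvsimport can take over. A sketch with made-up paths:

        # Assumes project sources with RCS/ subdirectories under ~/project and
        # an initialized CVS repository at /var/cvsroot (cvs -d /var/cvsroot init).
        CVSROOT=/var/cvsroot
        mkdir -p "$CVSROOT/project"

        cd ~/project
        # Copy every ,v file out of its RCS/ directory, preserving the tree shape.
        find . -path '*/RCS/*,v' | while read f; do
            dest="$CVSROOT/project/$(echo "$f" | sed 's|/RCS/|/|')"
            mkdir -p "$(dirname "$dest")"
            cp "$f" "$dest"
        done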

  • How to update a child grid in ASP.NET using LINQ

    - by Raj Kumar
    Hi, I have an ASP.NET page where I am using a LinqDataSource to bind a grid. Now, whenever someone changes something in the grid, I want to update a history table, which is also shown as a child grid for each row. Let's say I have a grid with two columns, Name and Age, and a child grid with the columns Field and DateTime. Whenever someone changes something in the Name or Age column and saves it, a new row should be inserted into the child grid with the name of the field that changed and the date and time when it was changed.
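
    A hedged sketch of one way to hook this, assuming LINQ to SQL behind the LinqDataSource; the entity and history-table names are invented for illustration:

        // Code-behind: wired to the LinqDataSource's Updating event, which fires
        // before the update is applied and exposes the old and new values.
        protected void PeopleDataSource_Updating(object sender,
                                                 LinqDataSourceUpdateEventArgs e)
        {
            var oldRow = (Person)e.OriginalObject;
            var newRow = (Person)e.NewObject;

            using (var db = new MyDataContext())
            {
                if (oldRow.Name != newRow.Name)
                    db.PersonHistories.InsertOnSubmit(new PersonHistory {
                        PersonId = oldRow.Id, Field = "Name", ChangedAt = DateTime.Now });
                if (oldRow.Age != newRow.Age)
                    db.PersonHistories.InsertOnSubmit(new PersonHistory {
                        PersonId = oldRow.Id, Field = "Age", ChangedAt = DateTime.Now });
                db.SubmitChanges();
            }
        }

    Rebinding the child grid after the update would then show the new history row.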

  • Mac OS X bash prompt bug?

    - by Memo
    I am trying to set my bash prompt to display the time and current directory in bold:

        export PS1="\[\e[1m\][\A] \w \$ \[\e[0m\]"

    This does apparently work, but when I use the history search (Ctrl-R), after finding the command I was searching for and pressing enter, the line is not displayed correctly. Here is an example:

        [21:58] ~/Wyona/svn-repos/zwischengas $
        (reverse-i-search)`ta': tail -F logs/log4j-cnode1.log

    becomes, after pressing enter:

        [21:58] ~/Wyona/svn-repos/zwischengas $ -F logs/log4j-cnode1.log

    Of course, this is not "really" a problem, since the command does work correctly, but it is still annoying. Does anybody know why this happens? And, more importantly, how to prevent/fix it? Thanks, Memo

  • Home server hard drive: 186k start-stop cycles in 325 days?

    - by j-g-faustus
    I set up a home server about a year ago, using Ubuntu Server (10.04 LTS at the moment), four disks in RAID 5 for storage (WD Green 1.5 TB) and a laptop drive for the OS. Today the output of smartctl, a command-line utility for checking the SMART attributes of a hard drive, tells me that the primary OS drive has had no less than 186,000 start-stop cycles in 325 days and may be nearing the end of its lifespan. The smartctl output is in "normalized values", in this case a number between 200 and 000, where 200 is "brand new" and 000 means "worn out". My disk gets 001. So I wonder what happened: 186k start/stop cycles in 7820 hours is about one start/stop per 2.5 minutes around the clock. This seems somewhat excessive for a computer that sees actual use once or twice per day. (The RAID disks are normal, averaging one start/stop per day, as expected.) Does anyone have similar experiences, or pointers to what might be the issue here? Specifically I'd like to know:

    - Why the massive start/stop count? Do I have some sort of configuration issue?
    - Could there be a background service that is causing trouble?
    - Could having a laptop disk as the OS drive be part of the problem? Can anyone confirm or deny this?

    Here is the /etc/hdparm.conf configuration:

        /dev/sda {
            apm = 127
            spindown_time = 120
        }

    And the most relevant parts of smartctl --attributes /dev/sda:

        smartctl version 5.38 [x86_64-unknown-linux-gnu] Copyright (C) 2002-8 Bruce Allen

        === START OF READ SMART DATA SECTION ===
        SMART Attributes Data Structure revision number: 16
        Vendor Specific SMART Attributes with Thresholds:
        ID# ATTRIBUTE_NAME        FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
          1 Raw_Read_Error_Rate   0x002f 200   200   051    Pre-fail Always  -           0
          4 Start_Stop_Count      0x0032 001   001   000    Old_age  Always  -           185875
          9 Power_On_Hours        0x0032 090   090   000    Old_age  Always  -           7820
         12 Power_Cycle_Count     0x0032 100   100   000    Old_age  Always  -           109
        193 Load_Cycle_Count      0x0032 118   118   000    Old_age  Always  -           246833
        194 Temperature_Celsius   0x0022 107   098   000    Old_age  Always  -           36

    As I generally prefer my drives to last more than a year, any advice is appreciated.
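
    A hedged thought beyond the question: the high Load_Cycle_Count next to apm = 127 looks like the aggressive head parking that laptop drives are known for, and the usual experiment is to disable APM or raise it near the maximum. The values below are assumptions to verify against the drive:

        # 254 = highest performance without power management; some drives also
        # accept 255 to switch APM off entirely. Takes effect immediately.
        sudo hdparm -B 254 /dev/sda

        # Re-check the counters after a day or so:
        sudo smartctl --attributes /dev/sda | grep -Ei 'start_stop|load_cycle'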

  • Copy from CDROM is very slow in Ubuntu

    - by ???
    I'm using this command to copy a CD-ROM image:

        # dd if=/dev/sr0 of=./maverick.iso

    But it's very slow, at about 350 KB/s. I've searched Google and tried the command:

        # hdparm -vi /dev/sr0
        /dev/sr0:
         HDIO_DRIVE_CMD(identify) failed: Bad address
         IO_support    =  1 (32-bit)
         readonly      =  0 (off)
         readahead     = 256 (on)
         HDIO_GETGEO failed: Inappropriate ioctl for device

         Model=DVD-ROM UJDA775, FwRev=DA03, SerialNo=
         Config={ Fixed Removeable DTR<=5Mbs DTR>10Mbs nonMagnetic }
         RawCHS=0/0/0, TrkSize=0, SectSize=0, ECCbytes=0
         BuffType=unknown, BuffSize=unknown, MaxMultSect=0
         (maybe): CurCHS=0/0/0, CurSects=0, LBA=yes, LBAsects=0
         IORDY=yes, tPIO={min:180,w/IORDY:120}, tDMA={min:120,rec:120}
         PIO modes:  pio0 pio1 pio2 pio3 pio4
         DMA modes:  sdma0 sdma1 sdma2 mdma0 mdma1 mdma2
         UDMA modes: udma0 udma1 *udma2
         AdvancedPM=no
         Drive conforms to: ATA/ATAPI-5 T13 1321D revision 3: ATA/ATAPI-1,2,3,4,5

         * signifies the current active mode

    Seems like DMA is already on. And a device test gives:

        # hdparm -t /dev/sr0
        /dev/sr0:
         Timing buffered disk reads:   2 MB in  6.58 seconds = 311.10 kB/sec

        # sudo hdparm -tT /dev/sr0
        /dev/sr0:
         Timing cached reads:          2 MB in  2.69 seconds = 760.96 kB/sec
         Timing buffered disk reads:   4 MB in  5.19 seconds = 789.09 kB/sec

    The CD-ROM device and disc should be okay, because I can copy it very fast in Windows using the UltraISO utility. So I guess there is something not configured right in Ubuntu - what is it?
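
    A hedged experiment, not from the post: dd defaults to 512-byte reads, and a larger block size sometimes makes a real difference on optical drives. 2048 bytes is the CD sector size; anything bigger is a guess worth benchmarking:

        # Read in 1 MiB chunks instead of the 512-byte default.
        dd if=/dev/sr0 of=./maverick.iso bs=1M

        # Or match the CD-ROM sector size exactly:
        dd if=/dev/sr0 of=./maverick.iso bs=2048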

  • In Clojure - How do I access keys in a vector of structs

    - by Nick
    I have the following vector of structs:

        (defstruct #^{:doc "Basic structure for book information."}
          book :title :authors :price)

        (def #^{:doc "The top ten Amazon best sellers on 16 Mar 2010."}
          best-sellers
          [(struct book "The Big Short" ["Michael Lewis"] 15.09)
           (struct book "The Help" ["Kathryn Stockett"] 9.50)
           (struct book "Change Your Prain, Change Your Body" ["Daniel G. Amen M.D."] 14.29)
           (struct book "Food Rules" ["Michael Pollan"] 5.00)
           (struct book "Courage and Consequence" ["Karl Rove"] 16.50)
           (struct book "A Patriot's History of the United States" ["Larry Schweikart","Michael Allen"] 12.00)
           (struct book "The 48 Laws of Power" ["Robert Greene"] 11.00)
           (struct book "The Five Thousand Year Leap" ["W. Cleon Skousen","James Michael Pratt","Carlos L Packard","Evan Frederickson"] 10.97)
           (struct book "Chelsea Chelsea Bang Bang" ["Chelsea Handler"] 14.03)
           (struct book "The Kind Diet" ["Alicia Silverstone","Neal D. Barnard M.D."] 16.00)])

    I would like to sum the prices of all the books in the vector. What I have is the following:

        (defn get-price
          "Same as print-book but handling multiple authors on a single book"
          [{:keys [title authors price]}]
          price)

    Then I:

        (reduce + (map get-price best-sellers))

    Is there a way of doing this without mapping the get-price function over the vector? Or is there an idiomatic way of approaching this problem?
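
    For what it's worth, a hedged note: struct instances behave like ordinary maps, and keywords act as lookup functions, so the accessor can usually be dropped entirely:

        ;; Keywords are functions of maps, so :price can replace get-price.
        (reduce + (map :price best-sellers))

        ;; Equivalent, folding the lookup into the reducing function:
        (reduce (fn [sum b] (+ sum (:price b))) 0 best-sellers)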

  • Extend RAID 1 (HP SmartArray P410i) running Linux

    - by Oliver
    I took over a fairly simple server setup with the following RAID 1 config running Ubuntu 11.10 (kernel 3.0.0-12-server x86_64):

        => ctrl all show config

        Smart Array P410i in Slot 0 (Embedded)    (sn: removed)

           array A (SAS, Unused Space: 1335535 MB)

              logicaldrive 1 (279.4 GB, RAID 1, OK)

              physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 1 TB, OK)
              physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 1 TB, OK)

    Initially there were two 300 GB disks that got replaced by 1 TB disks, and I now have to extend the logical volume to use that extra space. However, when trying to do so I get the following warning:

        => ctrl slot=0 ld 1 modify size=max

        Warning: Extension may not be supported on certain operating systems.
        Performing extension on these operating systems can cause data to
        become inaccessible. See ACU documentation for details.
        Continue? (y/n)

    Is it safe to say yes, or am I at risk of corrupting the file system / losing data? Rearranging and extending the file system afterwards shouldn't be an issue, as I can take the server offline and boot from a GParted live disk. Here's the config of the RAID controller in use:

        => ctrl all show detail

        Smart Array P410i in Slot 0 (Embedded)
           Bus Interface: PCI
           Slot: 0
           Serial Number: removed
           RAID 6 (ADG) Status: Disabled
           Controller Status: OK
           Hardware Revision: Rev C
           Firmware Version: 5.12
           Rebuild Priority: Medium
           Expand Priority: Medium
           Surface Scan Delay: 15 secs
           Surface Scan Mode: Idle
           Wait for Cache Room: Disabled
           Surface Analysis Inconsistency Notification: Disabled
           Post Prompt Timeout: 0 secs
           Cache Board Present: False
           Drive Write Cache: Disabled
           SATA NCQ Supported: True

    And the partition table:

        Number  Start   End     Size    Type      File system     Flags
         1      1049kB  274GB   274GB   primary   ext4            boot
         2      274GB   300GB   25.8GB  extended
         5      274GB   300GB   25.8GB  logical   linux-swap(v1)

  • What are the Worst Software Project Failures Ever?

    - by Warren P
    Is there a good list of the "worst software project failures ever" in the history of software development? For example, in Canada a "gun registry" project spent around two billion dollars (http://en.wikipedia.org/wiki/Gun_registry). This is, of course, insane, even if the final product "sort of worked". I have heard of an FBI case-file system for which there have been several attempts at a rewrite, all of them failures so far. There is a book on the subject (Software Runaways). There doesn't seem to be a software "boondoggle" list or "fiasco" list on Wikipedia that I can see. (Update: Therac-25 would be the 'winner' of this question, except that I was internally thinking more of software projects that had as their deliverable mainly software, as opposed to firmware projects like Therac-25, where the hardware and firmware together are capable of killing people. In terms of pure software monetary debacles, which was my intended question, there are several contenders.)

  • Can someone explain the ivy.xml dependency's conf attribute?

    - by tieTYT
    I can't find any thorough explanation of the Ivy dependency tag's conf attribute:

        <dependency org="hibernate" name="hibernate" rev="3.1.3"
                    conf="runtime, standalone -> runtime(*)"/>

    See that conf attribute? I can't find any explanation (that I can understand) of the right-hand side of the -> symbol. PLEASE keep in mind that I don't know the first thing about Maven, so please explain this attribute with that in mind. Yes, I've already looked at this: http://ant.apache.org/ivy/history/latest-release/ivyfile/dependency.html Thanks, Dan
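
    For context, hedged as my own reading of the Ivy docs rather than the page above: in a conf mapping, the left side of -> names configurations of the module declaring the dependency, the right side names configurations of the dependency itself, and parentheses give a fallback conf used when the named conf does not exist in the dependency. So the attribute above reads roughly as: my runtime and standalone confs each pull in hibernate's runtime conf, falling back to all of its confs (*) if hibernate declares no runtime conf. A hypothetical spelled-out equivalent:

        <!-- Same mapping written as two explicit entries separated by ";". -->
        <dependency org="hibernate" name="hibernate" rev="3.1.3"
                    conf="runtime->runtime(*);standalone->runtime(*)"/>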

  • FileInputStream negative skip

    - by Peter Štibraný
    I'm trying to find out more about the history of the FileInputStream.skip(negative) operation. According to the InputStream documentation:

        If n is negative, no bytes are skipped.

    It seems that the implementation of FileInputStream from Sun used to throw an IOException instead, which is now also documented in the Javadoc:

        If n is negative, an IOException is thrown, even though the skip method
        of the InputStream superclass does nothing in this case.

    I just tried that, and found that FileInputStream.skip(-10) did in fact return -10! It didn't throw an exception; it didn't even return 0; it returned -10. (I tried with Java 1.5.0_22 from Sun and Java 1.6.0_18 from Sun.) Is this a known bug? Why hasn't it been fixed, or why is the documentation kept the way it is? Can someone point me to some discussion about this issue? I can't find anything.
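
    A minimal reproduction sketch of the behaviour described, under the stated JDK versions; the file name is arbitrary, and the result printed is what the post reports rather than a guarantee:

        import java.io.File;
        import java.io.FileInputStream;
        import java.io.FileOutputStream;
        import java.io.IOException;

        public class NegativeSkip {
            public static void main(String[] args) throws IOException {
                File f = File.createTempFile("skiptest", ".bin");
                f.deleteOnExit();
                FileOutputStream out = new FileOutputStream(f);
                out.write(new byte[100]); // some content so skipping is meaningful
                out.close();

                FileInputStream in = new FileInputStream(f);
                in.read(new byte[50]); // move forward so a backward seek is possible
                long skipped = in.skip(-10); // per the post: returns -10, no exception
                System.out.println("skip(-10) returned: " + skipped);
                in.close();
            }
        }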
