Search Results

Search found 8770 results on 351 pages for 'sun zfs storage 7000 appl'.


  • Opensolaris, ZFS shares

    - by Random
    I've recently upgraded to OpenSolaris 2010.05 from 2009.06. It's currently set up to share one of the ZFS zpools to the other machines on my network. However, only a single account is able to access these shares; my other accounts are refused when attempting to connect. File permissions are all correct: when logging into the machine directly with the other logins, I can see and work with the files and directories as normal. It's just the share that gets refused. The interesting thing is, I could access the shares prior to upgrading, and nothing was changed other than the straight upgrade itself. I am pretty sure there is no user-specific sharing going on. What could be preventing the other accounts from accessing this zpool?
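    A few diagnostics worth running on the server, sketched under the assumption that the in-kernel CIFS service (not Samba) is doing the sharing; the dataset name tank/share is a placeholder:

        # confirm the SMB service survived the upgrade
        svcs network/smb/server
        # check how the dataset is shared and with what options
        zfs get sharesmb tank/share
        sharemgr show -vp

    One known upgrade gotcha with the CIFS service: SMB password hashes are only generated when pam_smb_passwd is present in /etc/pam.conf at the moment a user sets a password, so if that line was lost in the upgrade, the affected accounts may need to re-run passwd(1) before they can authenticate to the share again.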

    Read the article

  • How to make Web Storage persistent in Cordova using JS?

    - by ett
    I have a small quiz app, a cross-platform mobile app that I plan to run on Android, iOS, and WP8. I want to store a local highscore that keeps track of how many points the user had and, if he/she does better than the stored highscore, updates it. I want the highscore to be persistent: every time the user opens the app, the last highscore should be present and compared with the new quiz score. In other words, I don't want the highscore to be deleted each time the app is closed. I also want the highscore to be set to 0 the first time the app is run, so the first time the user finishes the quiz they obviously get a new highscore. I have this code so far:

    scoredb.js

        // Wait for device API libraries to load
        document.addEventListener("deviceready", initScoreDB, false);

        // Device APIs are available
        function initScoreDB() {
            window.localStorage.setItem("score", 0);
            highscore = window.localStorage.getItem("score");
        }

    main.js

        var correct = 0;
        var highscore;

        // In between I have some code that keeps incrementing
        // the correct variable for each correct answer.

        if (correct > highscore) {
            window.localStorage.setItem("score", correct);
            highscore = correct;
        }

    It does seem to work okay once the app is started; I did the quiz three times in the simulator and it keeps the score as it should. But each time I open the app, the highscore is reset to 0. I guess this is because I call initScoreDB when the device is ready, and I initialize the score to 0 there and assign that value to highscore. Can someone help me initialize the score to 0 only the first time the app is run, and on every other run keep the latest highscore as the current one and compare it with the score achieved when the quiz is finished? If someone can help me, I would be glad.
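    A minimal fix, sketched under the assumption that localStorage itself persists on the target platforms: only seed the stored score when the key has never been written. Note that localStorage stores strings, hence the parseInt.

        function initScoreDB() {
            // getItem() returns null when the key has never been set,
            // so this branch only runs on the very first launch
            if (window.localStorage.getItem("score") === null) {
                window.localStorage.setItem("score", 0);
            }
            highscore = parseInt(window.localStorage.getItem("score"), 10);
        }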

    Read the article

  • Can I use Veritas Storage Manager to provide HA storage using server-local storage?

    - by Paul
    I have a need to provide a high-availability ftp/http file repository. Uploads will happen to one server, but the uploaded file must be immediately visible on all other servers. I can handle the failover of the servers themselves using load balancers, but in the event of failure of one server, the other servers must see the same contents of the repository. Normally I'd use a SAN for this, but in this case the data centre standards do not allow SAN/external storage; all storage will be local to the servers. Can I use Veritas Storage Manager (or any other product) to mirror the contents between servers in this way? Or does that require a SAN? I couldn't tell either way from a quick look at the data sheets etc.
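    Replicated local storage is also the core use case for tools like DRBD and GlusterFS, so they may be worth comparing against the Veritas products. A hedged GlusterFS sketch (hostnames and brick paths are placeholders):

        # on server1, with /data/brick on local disk on both machines
        gluster peer probe server2
        gluster volume create repo replica 2 server1:/data/brick server2:/data/brick
        gluster volume start repo
        # mount the replicated volume wherever the ftp/http root lives
        mount -t glusterfs server1:/repo /var/www/repo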

    Read the article

  • Oracle Desktop Virtualization at HIMSS 2011

    - by chris.kawalek(at)oracle.com
    The HIMSS Conference is an extremely important industry trade show put on by the Healthcare Information and Management Systems Society. It's being held in Florida starting this Sunday, February 20th. Their slogan, "Linking people, potential, and progress", could be true of Oracle desktop virtualization as well! The Oracle desktop virtualization group has worked very closely with the Oracle healthcare business unit to have a large presence at this show, and I wanted to tell you a bit about what we're doing:

    - All Oracle demos are being done on Sun Ray Clients. That's right, every demo pod in the large Oracle booth will have a Sun Ray Client with each demo tied to a smart card. Too many people at your demo station? Pop your card out and go to a different one. We'll also be demoing Oracle desktop virtualization at a dedicated demo station, too. This is great stuff! Find Oracle at booth #1651, and see Oracle's page about HIMSS.

    - Focus Group: Caregiver Mobility with Oracle Sun Ray Clients and Desktop Virtualization, Feb 22, 3:15-4:15 PM. This focus group will be for customers interested in Oracle desktop virtualization. It's invitation only, but you can comment on this blog post and we can give you info on how to attend (your comment won't be made public).

    - Solution Session: Fast, Secure, Workflow Optimized: Inexpensive Access to Care Information is Possible Inside and Outside of the Hospital, Feb 23, 4:15 PM, Booth #685, Wireless and Mobility Theatre. Oracle's Adam Workman will cover caregiver mobility and the benefits of Oracle desktop virtualization to healthcare organizations.

    - New healthcare solutions page on oracle.com. We've created a page dedicated to content involving desktop virtualization and healthcare. This will be your one-stop shop when looking for desktop virtualization and healthcare information.

    - New desktop virtualization and healthcare solution data sheet. This document outlines how we define "Caregiver Mobility" and how Oracle products are used to facilitate quicker, more secure access to patient data.

    We'll have some more updates from the show next week. It looks like it's going to be an exciting event! -Chris

    Read the article

  • ZFS - zpool ARC cache plus L2ARC benchmarking

    - by jemmille
    I have been doing lots of I/O testing on a ZFS system I will eventually use to serve virtual machines. I thought I would try adding SSDs for use as cache to see how much faster I can get the read speed. I also have 24GB of RAM in the machine that acts as ARC. vol0 is 6.4TB and the cache disks are 60GB SSDs. The zpool is as follows:

          pool: vol0
         state: ONLINE
         scrub: none requested
        config:

                NAME                       STATE     READ WRITE CKSUM
                vol0                       ONLINE       0     0     0
                  c1t8d0                   ONLINE       0     0     0
                cache
                  c3t5001517958D80533d0    ONLINE       0     0     0
                  c3t5001517959092566d0    ONLINE       0     0     0

    The issue is I'm not seeing any difference with the SSDs installed. I've tried bonnie++ benchmarks and some simple dd commands to write a file then read the file, and I have run the benchmarks before and after adding the SSDs. I've ensured the file sizes are at least double my RAM so there is no way it can all get cached locally. Am I missing something here? When am I going to see the benefits of having all that cache? Am I simply not under the right circumstances? Are the benchmark programs not good for testing the effect of cache because of the way (and what) they write and read?
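    One thing worth checking before drawing conclusions is whether the L2ARC has actually warmed up: it fills from evicted ARC buffers at a deliberately throttled rate (historically on the order of 8 MB/s per device by default), so a single benchmark pass over a fresh pool may barely touch it. A hedged way to watch it on (Open)Solaris:

        # watch per-device traffic, including the cache devices, during the run
        zpool iostat -v vol0 5
        # L2ARC fill and hit/miss counters from the ARC kstats
        kstat -p zfs:0:arcstats:l2_size
        kstat -p zfs:0:arcstats:l2_hits zfs:0:arcstats:l2_misses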

    Read the article

  • ZFS with L2ARC (SSD) slower for random seeks than without L2ARC

    - by Florian Kruse
    I am currently testing ZFS (OpenSolaris 2009.06) on an older fileserver to evaluate its use for our needs. Our current setup is as follows:

    - Dual core (2.4 GHz) with 4 GB RAM
    - 3x SATA controller with 11 HDDs (250 GB) and one SSD (OCZ Vertex 2 100 GB)

    We want to evaluate the use of a L2ARC, so the current zpool is:

        $ zpool status
          pool: tank
         state: ONLINE
         scrub: none requested
        config:

                NAME          STATE     READ WRITE CKSUM
                afstank       ONLINE       0     0     0
                  raidz1      ONLINE       0     0     0
                    c11t0d0   ONLINE       0     0     0
                    c11t1d0   ONLINE       0     0     0
                    c11t2d0   ONLINE       0     0     0
                    c11t3d0   ONLINE       0     0     0
                  raidz1      ONLINE       0     0     0
                    c13t0d0   ONLINE       0     0     0
                    c13t1d0   ONLINE       0     0     0
                    c13t2d0   ONLINE       0     0     0
                    c13t3d0   ONLINE       0     0     0
                cache
                  c14t3d0     ONLINE       0     0     0

    where c14t3d0 is the SSD (of course). We run IO tests with bonnie++ 1.03d, size is set to 200 GB (-s 200g) so that the test sample will never be completely in ARC/L2ARC. The results without SSD are (average values over several runs, which show no differences):

        write_chr     write_blk     rewrite      read_chr     read_blk      random seeks
        101.998 kB/s  214.258 kB/s  96.673 kB/s  77.702 kB/s  254.695 kB/s  900 /s

    With SSD it becomes interesting. My assumption was that the results should in the worst case be at least the same. While the write/read/rewrite rates are not different, the random seek rate differs significantly between individual bonnie++ runs (between 188 /s and 1333 /s so far); the average is 548 +- 200 /s, so below the value without the SSD. So, my questions are mainly:

    1. Why do the random seek rates differ so much? If the seeks are really random, they should not differ much (my assumption), so even if the SSD is impairing the performance, it should do so equally in each bonnie++ run.
    2. Why is the random seek performance worse in most of the bonnie++ runs? I would assume that some part of the bonnie++ data is in the L2ARC and random seeks on this data would perform better, while random seeks on other data would perform similarly to before.
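    If the variance comes from the L2ARC holding a different fraction of the working set on each run, it can help to pin down what the cache device is allowed to hold while benchmarking. A hedged sketch using the per-dataset cache policy properties (the dataset name is a placeholder):

        # compare runs with the L2ARC restricted to metadata, everything, or nothing
        zfs set secondarycache=metadata afstank/bench
        zfs get primarycache,secondarycache afstank/bench

    Re-running bonnie++ under each setting should show whether the seek-rate swings track L2ARC involvement or something else entirely.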

    Read the article

  • How do I add a second disk to my zfs root pool

    - by ankimal
    I am trying to add a new disk to my ZFS root pool. Here is my current config:

        zpool status
          pool: rpool
         state: ONLINE
         scrub: none requested
        config:

                NAME      STATE     READ WRITE CKSUM
                rpool     ONLINE       0     0     0
                  c0d0s0  ONLINE       0     0     0

        errors: No known data errors

        bash-3.00# df -h
        Filesystem                      Size  Used Avail Use% Mounted on
        rpool/ROOT/s10x_u7wos_08        311G   18G  293G   6% /
        swap                             14G  384K   14G   1% /etc/svc/volatile
        /usr/lib/libc/libc_hwcap1.so.1  311G   18G  293G   6% /lib/libc.so.1
        swap                             14G   52K   14G   1% /tmp
        swap                             14G   40K   14G   1% /var/run
        rpool/export                    293G   19K  293G   1% /export
        rpool/export/home               430G  138G  293G  32% /export/home
        rpool                           293G   36K  293G   1% /rpool

        # format
        Searching for disks...done

        AVAILABLE DISK SELECTIONS:
               0. c0d0 <DEFAULT cyl 60797 alt 2 hd 255 sec 63>
                  /pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0
               1. c2d0 <Hitachi- JK1181YAHL0YK-0001-16777216.>
                  /pci@0,0/pci-ide@1f,5/ide@1/cmdk@0,0

    Disk 1 above is the new disk I need to attach to expand my root pool (to give /export/home some extra space). If I try to attach my new disk to the pool:

        # zpool attach -f rpool c0d0s0 c2d0s0
        cannot attach c2d0s0 to c0d0s0: new device must be a single disk

        # uname -a
        SunOS dsol1 5.10 Generic_139556-08 i86pc i386 i86pc Solaris

    Any ideas?
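    Two caveats apply here, hedged from the usual Solaris 10 root-pool rules: rpool can only be a single disk or an n-way mirror, so a successful attach would mirror the existing disk rather than add capacity; and the device being attached has to be an SMI-labeled slice, not a whole EFI-labeled disk, which is a common cause of that attach error. The usual mirroring procedure looks like this:

        # copy the partition table from the existing root disk to the new one
        prtvtoc /dev/rdsk/c0d0s2 | fmthard -s - /dev/rdsk/c2d0s2
        # attach the slice (this creates a mirror; it does not grow rpool)
        zpool attach -f rpool c0d0s0 c2d0s0
        # make the second disk bootable on x86
        installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c2d0s0

    If the actual goal is more space for /export/home, a simpler route may be a second pool on the new disk (zpool create, then migrate the home datasets to it).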

    Read the article

  • Why are all of my ZFS snapshot directories empty?

    - by growse
    I'm running an Oracle 11 box as a ZFS storage appliance, and I'm taking regular snapshots of the ZFS filesystems via cron. In the past, I know that if I wanted to grab a particular file from a snapshot, a read-only copy was kept in .zfs/snapshot/{name}/ and I could just navigate there and pull the file out. This is documented on Oracle's website. However, I went to do this the other day and noticed that the ZFS directories within the snapshot directories are all empty. zfs list -t snapshot correctly shows the list of snapshots that should be present, and .zfs/snapshot correctly contains a directory for each snapshot, and in each snapshot there is a directory present for each ZFS filesystem. However, these directories appear to be empty. I just tested a restore by touching a file in a little-used share and rolling back to the latest hourly snapshot, and this appears to have worked fine, so the rollback functionality is there. Did Oracle change how snapshots are done? Or is something seriously wrong here?
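    One detail that often explains exactly this symptom, offered as a hedged guess: every ZFS filesystem exposes its own .zfs directory, so browsing the pool root's .zfs/snapshot only shows the (often empty) top-level filesystem, with child filesystems appearing as empty stub directories. Checking from inside the child filesystem that actually holds the data may be all that's needed (names below are placeholders):

        # snapshots of the filesystem that actually holds the files
        zfs list -t snapshot -r tank/share
        # browse that filesystem's own snapshot directory
        ls /tank/share/.zfs/snapshot/hourly-latest/
        # optionally make .zfs visible to plain ls
        zfs set snapdir=visible tank/share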

    Read the article

  • ZFS pool error: how to determine which drive failed in the past

    - by Kendrick
    I had been copying data from my pool so that I could rebuild it with a different version, to move away from Solaris 11 to one that is portable between FreeBSD/OpenIndiana etc. It was copying at 20MB/sec the other day, which is about all my desktop drive can handle writing from the network. Suddenly, last night, it went down to 1.4MB/sec. I ran zpool status today and got this:

          pool: store
         state: ONLINE
        status: One or more devices has experienced an unrecoverable error. An
                attempt was made to correct the error. Applications are unaffected.
        action: Determine if the device needs to be replaced, and clear the errors
                using 'zpool clear' or replace the device with 'zpool replace'.
           see: http://www.sun.com/msg/ZFS-8000-9P
          scan: none requested
        config:

                NAME          STATE     READ WRITE CKSUM
                store         ONLINE       0     0     0
                  raidz1-0    ONLINE       0     0     0
                    c8t3d0p0  ONLINE       0     0     2
                    c8t4d0p0  ONLINE       0     0    10
                    c8t2d0p0  ONLINE       0     0     0

    It is currently a 3 x 1TB drive array. What tools would best be used to determine what the error was and which drive is failing? Per the admin doc:

        The second section of the configuration output displays error statistics.
        These errors are divided into three categories:
        READ  - I/O errors occurred while issuing a read request.
        WRITE - I/O errors occurred while issuing a write request.
        CKSUM - Checksum errors. The device returned corrupted data as the result
                of a read request.

    It said low counts could be anything from a power flux to a disk event, but gave no suggestions as to what tools to check and determine this with.
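    On Solaris, a couple of commands can usually tie those checksum counters back to a physical drive; a hedged starting set:

        # per-device soft/hard/transport error counters, plus vendor/model/serial
        iostat -En
        # the FMA error log keeps the underlying ereports behind those counters
        fmdump -eV | more
        # map c8t4d0 to a controller/slot before pulling anything
        cfgadm -al

    Checksum errors with zero read/write errors often point at cabling or a drive silently returning bad data, so the serial number from iostat -En is worth noting before deciding between reseating the cable and a zpool replace.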

    Read the article

  • Does JEE 5 define the handling of commit errors when using bean-managed transactions?

    - by marabol
    I'm using GlassFish 2.1 and 2.1.1. I have a bean method annotated with @TransactionAttribute(value = TransactionAttributeType.REQUIRES_NEW). After doing some JPA work, the commit fails in the afterCompletion phase of JTS. GlassFish only logs this failure, and the caller of the bean method has no chance to know that something went wrong. So I wonder whether there is any definition of how a JEE 5 server has to handle exceptions thrown while committing. I would expect a runtime exception. I'm using stateless beans. With SessionSynchronization I could get the commit failure if I used stateful beans. Is it possible to intercept this, so I can throw an exception that I've declared in my interface? This is the whole exception stacktrace:

        [#|2010-05-06T12:15:54.840+0000|WARNING|sun-appserver2.1|oracle.toplink.essentials.session.file:/C:/glassfish/domains/domain1/applications/j2ee-apps/my-ear-1.0.0-SNAPSHOT/my-jar-1.1.8_jar/-myPu.transaction|_ThreadID=25;_ThreadName=p: thread-pool-1; w: 15;_RequestID=67a475a1-25c3-4416-abea-0d159f715373;|
        java.lang.RuntimeException: Got exception during XAResource.end: oracle.jdbc.xa.OracleXAException
            at com.sun.enterprise.distributedtx.J2EETransactionManagerOpt.delistResource(J2EETransactionManagerOpt.java:224)
            at com.sun.enterprise.resource.ResourceManagerImpl.unregisterResource(ResourceManagerImpl.java:265)
            at com.sun.enterprise.resource.ResourceManagerImpl.delistResource(ResourceManagerImpl.java:223)
            at com.sun.enterprise.resource.PoolManagerImpl.resourceClosed(PoolManagerImpl.java:400)
            at com.sun.enterprise.resource.ConnectorAllocator$ConnectionListenerImpl.connectionClosed(ConnectorAllocator.java:72)
            at com.sun.gjc.spi.ManagedConnection.connectionClosed(ManagedConnection.java:639)
            at com.sun.gjc.spi.base.ConnectionHolder.close(ConnectionHolder.java:201)
            at com.sun.gjc.spi.jdbc40.ConnectionHolder40.close(ConnectionHolder40.java:519)
            at oracle.toplink.essentials.internal.databaseaccess.DatabaseAccessor.closeDatasourceConnection(DatabaseAccessor.java:394)
            at oracle.toplink.essentials.internal.databaseaccess.DatasourceAccessor.closeConnection(DatasourceAccessor.java:382)
            at oracle.toplink.essentials.internal.databaseaccess.DatabaseAccessor.closeConnection(DatabaseAccessor.java:417)
            at oracle.toplink.essentials.internal.databaseaccess.DatasourceAccessor.afterJTSTransaction(DatasourceAccessor.java:115)
            at oracle.toplink.essentials.threetier.ClientSession.afterTransaction(ClientSession.java:119)
            at oracle.toplink.essentials.internal.sessions.UnitOfWorkImpl.afterTransaction(UnitOfWorkImpl.java:1841)
            at oracle.toplink.essentials.transaction.AbstractSynchronizationListener.afterCompletion(AbstractSynchronizationListener.java:170)
            at oracle.toplink.essentials.transaction.JTASynchronizationListener.afterCompletion(JTASynchronizationListener.java:102)
            at com.sun.jts.jta.SynchronizationImpl.after_completion(SynchronizationImpl.java:154)
            at com.sun.jts.CosTransactions.RegisteredSyncs.distributeAfter(RegisteredSyncs.java:210)
            at com.sun.jts.CosTransactions.TopCoordinator.afterCompletion(TopCoordinator.java:2585)
            at com.sun.jts.CosTransactions.CoordinatorTerm.commit(CoordinatorTerm.java:433)
            at com.sun.jts.CosTransactions.TerminatorImpl.commit(TerminatorImpl.java:250)
            at com.sun.jts.CosTransactions.CurrentImpl.commit(CurrentImpl.java:623)
            at com.sun.jts.jta.TransactionManagerImpl.commit(TransactionManagerImpl.java:309)
            at com.sun.enterprise.distributedtx.J2EETransactionManagerImpl.commit(J2EETransactionManagerImpl.java:1029)
            at com.sun.enterprise.distributedtx.J2EETransactionManagerOpt.commit(J2EETransactionManagerOpt.java:398)
            at com.sun.ejb.containers.BaseContainer.completeNewTx(BaseContainer.java:3817)
            at com.sun.ejb.containers.BaseContainer.postInvokeTx(BaseContainer.java:3610)
            at com.sun.ejb.containers.BaseContainer.postInvoke(BaseContainer.java:1379)
            at com.sun.ejb.containers.BaseContainer.postInvoke(BaseContainer.java:1316)
            at com.sun.ejb.containers.EJBLocalObjectInvocationHandler.invoke(EJBLocalObjectInvocationHandler.java:205)
            at com.sun.ejb.containers.EJBLocalObjectInvocationHandlerDelegate.invoke(EJBLocalObjectInvocationHandlerDelegate.java:127)
            at $Proxy127.myNewTxMethod(Unknown Source)
            at mypackage.MyBean2.myMethod(MyBean2.java:197)
            at mypackage.MyBean2.myMethod2(MyBean2.java:166)
            at mypackage.MyBean2.myMethod3(MyBean2.java:105)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at com.sun.enterprise.security.application.EJBSecurityManager.runMethod(EJBSecurityManager.java:1011)
            at com.sun.enterprise.security.SecurityUtil.invoke(SecurityUtil.java:175)
            at com.sun.ejb.containers.BaseContainer.invokeTargetBeanMethod(BaseContainer.java:2920)
            at com.sun.ejb.containers.BaseContainer.intercept(BaseContainer.java:4011)
            at com.sun.ejb.containers.EJBLocalObjectInvocationHandler.invoke(EJBLocalObjectInvocationHandler.java:197)
            at com.sun.ejb.containers.EJBLocalObjectInvocationHandlerDelegate.invoke(EJBLocalObjectInvocationHandlerDelegate.java:127)
            at $Proxy158.myMethod3(Unknown Source)
            at mypackage.MyBean3.myMethod4(MyBean3.java:94)
            at mypackage.MyBean3.onMessage(MyBean3.java:85)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at com.sun.enterprise.security.SecurityUtil$2.run(SecurityUtil.java:181)
            at java.security.AccessController.doPrivileged(Native Method)
            at com.sun.enterprise.security.application.EJBSecurityManager.doAsPrivileged(EJBSecurityManager.java:985)
            at com.sun.enterprise.security.SecurityUtil.invoke(SecurityUtil.java:186)
            at com.sun.ejb.containers.BaseContainer.invokeTargetBeanMethod(BaseContainer.java:2920)
            at com.sun.ejb.containers.BaseContainer.intercept(BaseContainer.java:4011)
            at com.sun.ejb.containers.MessageBeanContainer.deliverMessage(MessageBeanContainer.java:1111)
            at com.sun.ejb.containers.MessageBeanListenerImpl.deliverMessage(MessageBeanListenerImpl.java:74)
            at com.sun.enterprise.connectors.inflow.MessageEndpointInvocationHandler.invoke(MessageEndpointInvocationHandler.java:179)
            at $Proxy192.onMessage(Unknown Source)
            at com.sun.messaging.jms.ra.OnMessageRunner.run(OnMessageRunner.java:258)
            at com.sun.enterprise.connectors.work.OneWork.doWork(OneWork.java:76)
            at com.sun.corba.ee.impl.orbutil.threadpool.ThreadPoolImpl$WorkerThread.run(ThreadPoolImpl.java:555)
        |#]

    Read the article

  • Installing sun-java6-jdk with apt-get on Ubuntu 10.04

    - by Adam S
    I have followed the instructions on numerous pages, such as this, which say to run the following commands:

        sudo add-apt-repository "deb http://archive.canonical.com/ lucid partner"
        sudo apt-get update
        sudo apt-get install sun-java6-jdk

    However, when I do this I still get the following error:

        me@mycomputer:~$ sudo apt-get install sun-java6-jdk
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Package sun-java6-jdk is not available, but is referred to by another package.
        This may mean that the package is missing, has been obsoleted, or
        is only available from another source
        E: Package sun-java6-jdk has no installation candidate

    I realize Java is available from many other sources, but for reasons that I can't get into here I must use this specific version. What can I do to get this installed?
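    A couple of hedged checks: the partner repository line is more commonly written with the /ubuntu path component, and apt can report which repositories it actually consulted for the package.

        sudo add-apt-repository "deb http://archive.canonical.com/ubuntu lucid partner"
        sudo apt-get update
        # shows candidate versions and the repositories apt knows about
        apt-cache policy sun-java6-jdk
        # confirm the partner entry was actually written somewhere
        grep -r partner /etc/apt/sources.list /etc/apt/sources.list.d/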

    Read the article

  • Sun-JRE on CentOS-4.8 RPM error: post-install scriptlet failed, exit status 5

    - by Emyr
    I have a server with CentOS 4.8 installed. The provider is rubbish, but there's only a few months left on the contract, and they're busy being sued by Chase bank, so I doubt I can get CentOS 5. I wiped the server clean using Virtuozzo, and found that the default image is VERY empty; I even had to install yum myself. I've reached the point where I want to install Tomcat. I downloaded the Sun JRE as a .rpm.bin file, did chmod a+x and ran it. That produced a .rpm file, which I tried installing:

        [root@host java]# rpm -Uvh jre-6u20-linux-i586.rpm
        Preparing...             ########################################### [100%]
           1:jre                 ########################################### [100%]
        Unpacking JAR files...
                rt.jar...
                jsse.jar...
                charsets.jar...
                localedata.jar...
                plugin.jar...
                javaws.jar...
                deploy.jar...
        error: %post(jre-1.6.0_20-fcs.i586) scriptlet failed, exit status 5

        [root@host java]# rpm -qi jre
        Name        : jre                       Relocations: /usr/java
        Version     : 1.6.0_20                       Vendor: Sun Microsystems, Inc.
        Release     : fcs                        Build Date: Mon Apr 12 19:34:13 2010
        Install Date: Thu May 6 06:36:17 2010    Build Host: jdk-lin-1586
        Group       : Development/Tools          Source RPM: jre-1.6.0_20-fcs.src.rpm
        Size        : 50708634                      License: Sun Microsystems Binary Code License (BCL)
        Signature   : (none)
        Packager    : Java Software <jre-comments@java.sun.com>
        URL         : http://java.sun.com/
        Summary     : Java(TM) Platform Standard Edition Runtime Environment
        Description :
        The Java Platform Standard Edition Runtime Environment (JRE) contains
        everything necessary to run applets and applications designed for the
        Java platform. This includes the Java virtual machine, plus the Java
        platform classes and supporting files. The JRE is freely redistributable,
        per the terms of the included license.
        [root@host java]#

    I couldn't find any results on Google for any part of that error message, and I have very little experience with rpm (I usually use Debian). Is this a broken package, or am I missing something or some setting?
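    A hedged way to find out why the %post script returns status 5 is to read it and run the failing parts by hand:

        # dump the package's install scriptlets
        rpm -qp --scripts jre-6u20-linux-i586.rpm | less
        # if the failure turns out to be in a non-essential step, the package
        # can be installed without running scriptlets at all
        rpm -Uvh --noscripts jre-6u20-linux-i586.rpm

    Note that the JAR unpacking already ran before the failure, so the files are likely in place; the question is just which command in the scriptlet exits non-zero. On an image this stripped down, a missing helper such as update-alternatives would be my first guess, though that's speculation.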

    Read the article

  • Sun T5120: Memory issue with 8GB DIMMs, "Unsupported memory configuration"

    - by watain
    I'm trying to upgrade the RAM in a Sun T5120 server, replacing 2GB DIMMs (Sun P/N: 501-7953-01) with 8GB DIMMs (Sun P/N: 511-1262-01). When bringing up the host system, I get the following errors on the ILOM:

        -> show faulty
        Target              | Property               | Value
        --------------------+------------------------+---------------------------------
        /SP/faultmgmt/0     | fru                    | /SYS/MB
        /SP/faultmgmt/0/    | timestamp              | Dec 14 15:29:42
         faults/0           |                        |
        /SP/faultmgmt/0/    | sp_detected_fault      | /SYS/MB/CMP0/MCU3 Forced fail
         faults/0           |                        | (IBIST)
        /SP/faultmgmt/0/    | timestamp              | Dec 14 15:29:28
         faults/1           |                        |
        /SP/faultmgmt/0/    | sp_detected_fault      | /SYS/MB/CMP0/MCU2 Forced fail
         faults/1           |                        | (IBIST)
        /SP/faultmgmt/0/    | timestamp              | Dec 14 15:29:13
         faults/2           |                        |
        /SP/faultmgmt/0/    | sp_detected_fault      | /SYS/MB/CMP0/MCU1 Forced fail
         faults/2           |                        | (IBIST)
        /SP/faultmgmt/0/    | timestamp              | Dec 14 15:28:59
         faults/3           |                        |
        /SP/faultmgmt/0/    | sp_detected_fault      | /SYS/MB/CMP0/MCU0 Forced fail
         faults/3           |                        | (IBIST)
        /SP/faultmgmt/1     | fru                    | /SYS
        /SP/faultmgmt/1/    | timestamp              | Dec 14 15:29:42
         faults/0           |                        |
        /SP/faultmgmt/1/    | sp_detected_fault      | Dec 14 15:29:42 ERROR:
         faults/0           |                        | Unsupported memory
                            |                        | configuration

    As you can see, the only error message I get is "Unsupported memory configuration". Note that I'm absolutely sure that I placed the DIMMs in the correct slots. Might this issue be related to the voltage of the DIMMs? Any suggestions on how to troubleshoot this issue? This issue seems to be similar to the one explained at "Inserted disabled" while upgrading Sun Sparc T5120 memory. However, the given link http://docs.sun.com/source/820-4445-10/chapter1.html seems to point to a nonexistent page...

    Read the article

  • Using a KVM with a Sun Ultra 45

    - by Kit Peters
    I have a friend with two machines: a Mac and a Sun Ultra 45. He wants to use a single monitor (connected via DVI) and a USB keyboard/mouse with both machines. Try as I might, the Sun box will not bring up a console if the KVM is connected. However, if I connect a monitor, keyboard, and mouse directly to the Sun box, it will bring up a console that I can then (IIRC) connect to the KVM. Is there any way I can make the Sun box always bring up a console, regardless of whether it "sees" a monitor, keyboard, and mouse?
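    On SPARC hardware the console placement is decided by OpenBoot NVRAM variables, so one hedged thing to try is pinning them rather than letting OBP probe for devices at power-on; from a running Solaris shell:

        # force the console onto the graphics card and USB keyboard even if
        # the power-on probe is confused by the KVM
        eeprom output-device=screen
        eeprom input-device=keyboard

    The same variables can be set from the ok prompt with setenv. Whether this defeats the KVM is worth testing; some KVMs simply don't present the EDID/HID signatures the Ultra 45 expects at probe time.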

    Read the article

  • Is Linear Tape File System (LTFS) Best For Transportable Storage?

    - by rickramsey
    Those of us in tape storage engineering take a lot of pride in what we do, but understand that tape is the right answer to a storage problem only some of the time. And, unfortunately for a storage medium with such a long history, it has built up a few preconceived notions that are no longer valid. When I hear customers debate whether to implement tape vs. disk, one of the common strikes against tape is its perceived lack of usability.

    If you could go back a few generations of corporate acquisitions, you would discover that StorageTek engineers recognized this problem and started developing a solution where a tape drive could look just like a memory stick to a user. The goal was to not have to care about where files were on the cartridge, but to simply see the list of files that were on the tape, and click on them to open them up. Eventually, our friends in tape over at IBM built upon our work at StorageTek and Sun Microsystems and released the Linear Tape File System (LTFS) feature for the current LTO5 generation of tape drives as an open specification.

    LTFS is really a wonderful feature and we're proud to have taken part in its beginnings and, as you'll soon read, its future. Today we offer LTFS-Open Edition, which is free for you to use in your Oracle Enterprise Linux 5.5 environment - not only on your LTO5 drives, but also on your Oracle StorageTek T10000C drives. You can download it free from Oracle and try it out.

    LTFS does exactly what its forefathers imagined. Now you can see immediately which files are on a cartridge. LTFS does this by splitting a cartridge into two partitions. The first holds all of the necessary metadata to create a directory structure for you to easily view the contents of the cartridge. The second partition holds all of the files themselves. When tape media is loaded onto a drive, a complete file system image is presented to the user. Adding files to a cartridge can be as simple as a drag-and-drop, just as you do today on your laptop when transferring files from your hard drive to a thumb drive, or with standard POSIX file operations.

    You may be thinking all of this sounds nice, but asking, "when will I actually use it?" As I mentioned at the beginning, tape is not the right solution all of the time. However, if you ever need to physically move data between locations, tape storage with LTFS should be your most cost-effective and reliable answer. I will give you a few examples of use cases where LTFS can be utilized.

    Media and Entertainment (M&E), Oil and Gas (O&G), and other industries have a strong need for their storage to be transportable. For example, an O&G company hunting for new oil deposits in remote locations takes very large underground seismic images which need to be shipped back to a central data center. M&E operations conduct similar activities when shooting video for productions. M&E companies also often transfer files to third parties for editing and other activities. These companies have three highly flawed options for transporting data: electronic transfer, disk storage transport, or tape storage transport. The first option, electronic transfer, is impractical because of the expense of the bandwidth required to transfer multi-terabyte files reliably and efficiently. If there's one place that has bandwidth, it's your local post office, so many companies revert to physically shipping storage media. Typically, M&E companies rely on transporting disk storage between sites even though it, too, is expensive.
    Tape storage should be the preferred format because, as IDC points out, "Tape is more suitable for physical transportation of large amounts of data as it is less vulnerable to mechanical damage during transportation compared with disk" (see note 1, below). However, tape storage has not been used in the past because of the restrictions created by proprietary formats. A tape may only be readable if both the sender and receiver have the same proprietary application used to write the file. In addition, the workflows may be slowed by the need to read the entire tape cartridge during recall. LTFS solves both of these problems, clearing the way for tape to become the standard platform for transferring large files. LTFS is open and, as long as you've downloaded the free reader from our website or that of anyone in the LTO consortium, you can read the data. So if a movie studio ships a scene to a third-party partner to add, for example, sound effects or a music score, it doesn't have to care what technology the third party has. If it's written back to an LTFS-formatted tape cartridge, it can be read.

    Some tape vendors like to claim LTFS is a "standard," but beauty is in the eye of the beholder. It's a specification at this point, not a standard. That said, we're already seeing application vendors create functionality to write in an LTFS format based on the specification. And it's my belief that both customers and the tape storage industry will see the most benefit if we all follow the same path. As such, we have volunteered to lead the way in making LTFS a standard, first with the Storage Networking Industry Association (SNIA), and eventually through to standards bodies such as the American National Standards Institute (ANSI). Expect to hear good news soon about our efforts.

    So, if storage transportability is one of your requirements, I recommend giving LTFS a look. It makes tape much more user-friendly and it's free, which allows tape to maintain all of its cost advantages over disk!

    Note 1 - IDC Report. April, 2011. "IDC's Archival Storage Solutions Taxonomy, 2011"

    - Brian Zents
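    For the curious, with the LTFS-Open Edition tools on Linux the workflow looks roughly like this; a sketch, assuming the mkltfs/ltfs utilities from the LTFS package, a tape drive at /dev/st0, and an example file name:

        # format the cartridge with the two LTFS partitions (index + data)
        mkltfs -d /dev/st0
        # mount the cartridge like any other file system
        mkdir -p /mnt/ltfs
        ltfs -o devname=/dev/st0 /mnt/ltfs
        # ordinary file operations now work against the tape
        cp seismic-survey-042.segy /mnt/ltfs/
        ls /mnt/ltfs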

    Read the article

  • Ask the Readers: Backing Your Files Up – Local Storage versus the Cloud

    - by Asian Angel
    Backing up important files is something that all of us should do on a regular basis, but may not have given as much thought to as we should. This week we would like to know if you use local storage, cloud storage, or a combination of both to back your files up. Photo by camknows.

    For some people, local storage media may be the most convenient and/or affordable way to back up their files. Having those files stored on media under your control can also provide a sense of security and peace of mind. But storing your files locally may also have drawbacks if something happens to your storage media. So how do you know whether the benefits outweigh the disadvantages or not? Here are some possible pros and cons that may affect your decision to use local storage to back up your files:

    Local Storage

    Pros:
    - You are in control of your data
    - Your files are portable and can go with you when needed if using external or flash drives
    - Files are accessible without an internet connection
    - You can easily add more storage capacity as needed (additional drives, etc.)

    Cons:
    - You need to arrange room for your storage media (if you have multiple external drives, etc.)
    - Possible hardware failure
    - No access to your files if you forget to bring your storage media with you or it is too bulky to bring along
    - Theft and/or loss of home with all contents due to circumstances like fire

    If you are someone who is always on the go and needs to travel as lightly as possible, cloud storage may be the perfect way for you to back up and access your files. Perhaps your laptop has a hard-drive failure or gets stolen... unhappy events to be sure, but you will still have a copy of your files available. Perhaps a company wants to make sure their records, files, and other information are backed up off site in case of a major hardware or system failure... expensive and/or frustrating to fix if it happens, but once again there is a nice backup ready to go once things are fixed. As with local storage, here are some possible pros and cons that may influence your choice of cloud storage to back up your files:

    Cloud Storage

    Pros:
    - No need to carry around flash or bulky external drives
    - All of your files are accessible wherever there is an internet connection
    - No need to deal with local storage media (or its upkeep)
    - Your files are still safe if your home is broken into or other unfortunate circumstances occur

    Cons:
    - Your files and data are not 100% under your control
    - Possible hardware failure or loss of files on the part of your cloud storage provider (this could include a disgruntled employee wreaking havoc)
    - No access to your files if you do not have an internet connection
    - The cloud storage provider may eventually shut down due to financial hardship or other unforeseen circumstances
    - The possibility of your files and data being stolen by hackers due to a security breach on the part of your cloud storage provider

    You may also prefer to try and cover all of the possibilities by using both local and cloud storage to back up your files. If something happens to one, you always have the other to fall back on. Need access to those files at or away from home? As long as you have access to either your storage media or an internet connection, you are good to go. Maybe you are getting ready to choose a backup solution but are not sure which one would work better for you. Here is your chance to ask your fellow HTG readers which one they would recommend. Got a great backup solution already in place? Then be sure to share it with your fellow readers!
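    If you end up with the combination approach, the local half can be as small as one hedged rsync line run on a schedule (paths are placeholders):

        # mirror the documents folder to an external drive,
        # removing files that were deleted from the source
        rsync -a --delete ~/Documents/ /media/backup/Documents/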

    Read the article

  • Disabling RAID feature on HP Smart Array P400

    - by Arie K
    I'm planning to use ZFS on my system (HP ML370 G5, Smart Array P400, 8 SAS disks). I want ZFS to manage all disks individually, so it can utilize better scheduling (i.e. I want to use the software RAID features in ZFS). The problem is, I can't find a way to disable the RAID feature on the RAID controller. Right now, the controller aggregates all of the disks into one big RAID-5 volume, so ZFS can't see individual disks. Is there any way to accomplish this setup?
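    As far as I know, the P400 has no true pass-through/JBOD mode, so the usual workaround is one single-drive RAID-0 logical drive per physical disk; a hedged hpacucli sketch (the slot and port:box:bay numbers are examples for a typical setup):

        # delete the existing RAID-5 logical drive first (this destroys its data!)
        hpacucli ctrl slot=0 ld 1 delete
        # create one single-disk RAID-0 logical drive per physical drive
        hpacucli ctrl slot=0 create type=ld drives=1I:1:1 raid=0
        hpacucli ctrl slot=0 create type=ld drives=1I:1:2 raid=0
        # ...repeat for the remaining bays, then verify
        hpacucli ctrl slot=0 ld all show

    ZFS then sees eight separate logical drives. The controller cache still sits between ZFS and the disks, which is not ideal for ZFS's cache-flush assumptions, so testing with the array write cache disabled is worth considering.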

    Read the article

  • What does the crash mean? Why is my Ubuntu box crashing, and how can I check more deeply?

    - by YumYumYum
    My system had been running for about 6 hours. Twice I lost remote access and the machine stopped functioning (the IP was gone, etc.), and last shows "crash" three times, but I have no idea what happened or why. How can I find out what went wrong?

        $ last
        sun      pts/0        d51a429c9.access Mon Mar 19 13:44   still logged in
        sun      tty7         :0               Mon Mar 19 12:17   still logged in
        reboot   system boot  2.6.38-8-generic Mon Mar 19 12:17 - 13:49  (01:31)
        sun      pts/0        d51a429c9.access Mon Mar 19 10:05 - crash  (02:12)
        sun      tty7         :0               Mon Mar 19 10:00 - crash  (02:16)
        reboot   system boot  2.6.38-8-generic Mon Mar 19 10:00 - 13:49  (03:48)
        sun      pts/0        d51a429c9.access Mon Mar 19 09:24 - down   (00:35)
        sun      tty7         :0               Mon Mar 19 09:20 - down   (00:39)
        reboot   system boot  2.6.38-8-generic Mon Mar 19 09:20 - 10:00  (00:39)
        sun      pts/2        d51a429c9.access Sun Mar 18 18:04 - down   (01:14)
        sun      pts/1        d51a429c9.access Sun Mar 18 17:43 - down   (01:35)
        sun      pts/0        d51a429c9.access Sun Mar 18 15:07 - 18:47  (03:40)
        sun      pts/1        d51a429c9.access Sun Mar 18 12:58 - 17:42  (04:43)
        sun      pts/0        d51a429c9.access Sun Mar 18 10:21 - 15:06  (04:44)
        sun      tty7         :0               Sun Mar 18 08:56 - down   (10:22)
        reboot   system boot  2.6.38-8-generic Sun Mar 18 08:56 - 19:19  (10:22)
        sun      tty7         :0               Sat Mar 17 18:03 - down   (14:51)
        reboot   system boot  2.6.38-8-generic Sat Mar 17 18:03 - 08:55  (14:51)
        sun      tty7         :0               Sat Mar 17 15:00 - down   (01:38)
        reboot   system boot  2.6.38-8-generic Sat Mar 17 15:00 - 16:39  (01:38)
        sun      pts/0        d51a4297d.access Sat Mar 17 10:45 - 14:32  (03:46)
        sun      tty7         :0               Fri Mar 16 18:46 - crash  (20:14)
        reboot   system boot  2.6.38-8-generic Fri Mar 16 18:46 - 16:39  (21:53)

        $ sensors
        acpitz-virtual-0
        Adapter: Virtual device
        temp1:       +27.8°C  (crit = +100.0°C)
        temp2:       +29.8°C  (crit = +100.0°C)
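    For what it's worth, "crash" in last output simply means the system went down without recording a clean shutdown, so the next stop is the kernel and system logs from the affected boots; some hedged starting points on Ubuntu:

        # kernel messages, including oopses and hardware errors
        less /var/log/kern.log
        grep -iE 'panic|oops|segfault|mce' /var/log/syslog
        # out-of-memory kills are a common cause of sudden lockups
        grep -i 'killed process' /var/log/kern.log

    If nothing was flushed to disk before the hang, catching it in the act may require a console-visible logger, or memtest86+ if flaky RAM is suspected.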

    Read the article

  • Claiming the provisioned storage

    - by gita
    I created an Ubuntu server VM with 64GB of provisioned storage. I remember that I specified 30GB to be used for the VM. When I do df -h, I get:

        Filesystem                      Size  Used Avail Use% Mounted on
        /dev/mapper/analysis--db-root    28G   25G  904M  97% /
        udev                            2.0G  4.0K  2.0G   1% /dev
        tmpfs                           793M  228K  793M   1% /run
        none                            5.0M     0  5.0M   0% /run/lock
        none                            2.0G     0  2.0G   0% /run/shm
        /dev/sda1                       228M   45M  171M  21% /boot

    The disk is almost full. How can I use the other 30GB from the provisioned storage?
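    Judging by the device-mapper name, the root filesystem lives on LVM (volume group analysis-db, logical volume root), so the unused space can usually be claimed by adding a partition from the free area of the virtual disk to the volume group, then growing the LV and the filesystem. A hedged sketch, assuming the free space is unpartitioned on /dev/sda and /dev/sda3 is the partition created there:

        # see what the volume group currently contains
        sudo pvs; sudo vgs; sudo lvs
        # after creating /dev/sda3 with fdisk and re-reading the partition table:
        sudo pvcreate /dev/sda3
        sudo vgextend analysis-db /dev/sda3
        # grow the root LV into all free extents, then resize ext4 online
        sudo lvextend -l +100%FREE /dev/analysis-db/root
        sudo resize2fs /dev/analysis-db/root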

    Read the article

  • USB modeswitch to mass storage device

    - by Andrew
    I'm trying to install a Sierra AirCard 320U (branded as "Telstra USB 4G") into a VirtualBox Windows XP machine, on an Ubuntu 10.10 host. usb_modeswitch offers me a way to disable the "mass storage device" option, but I can't see a way to permanently re-enable it so it's usable by VirtualBox (device filters briefly detect it, but then it disappears again). lsusb shows the device as:

        0f3d:68aa Airprime, Incorporated

    When I first insert the device, it shows as:

        1199:0fff Sierra Wireless, Inc.

    Is there any way to re-enable the storage device so VirtualBox can see it?
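    One hedged workaround: since the device re-enumerates with a different vendor:product ID after the mode switch, give the VM a permanent USB filter for each identity so VirtualBox captures it in whichever state it appears (the VM name "WinXP" is a placeholder):

        # filter for the initial, mass-storage-capable identity
        VBoxManage usbfilter add 0 --target "WinXP" --name aircard-storage --vendorid 1199 --productid 0fff
        # filter for the post-modeswitch modem identity
        VBoxManage usbfilter add 1 --target "WinXP" --name aircard-modem --vendorid 0f3d --productid 68aa

    With both filters in place, the guest should grab the device before usb_modeswitch on the host flips it, letting the Windows Sierra driver handle the switching itself.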

    Read the article

  • Apache Tomcat Server failure

    - by Kenneth Ordona
    I'm trying to set up Apache Tomcat 6 with SSL, and once I edited the server.xml file to include the following definition, the server started to fail as soon as I ran startup.bat:

        <-- Define a SSL Coyote HTTP/1.1 Connector on port 8443 --
        < Connector protocol="org.apache.coyote.http11.Http11Protocol"
                    port="8445" maxThreads="200"
                    scheme="https" secure="true" SSLEnabled="true"
                    keystoreFile="${user.home}/.tomcat" keystorePass="pnnlpw"
                    clientAuth="false" sslProtocol="TLS"/

    The logs that I have are as follows:

        Jul 05, 2012 1:52:15 PM org.apache.catalina.core.AprLifecycleListener init
        INFO: The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: C:\Program Files\Java\jdk1.7.0_05\bin;C:\WINDOWS\Sun\Java\bin;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;.
        Jul 05, 2012 1:52:15 PM org.apache.tomcat.util.digester.Digester fatalError
        SEVERE: Parse Fatal Error at line 91 column 2: The content of elements must consist of well-formed character data or markup.
        org.xml.sax.SAXParseException; systemId: file://C/tomcat6/conf/server.xml; lineNumber: 91; columnNumber: 2; The content of elements must consist of well-formed character data or markup.
            at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:198)
            at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.fatalError(ErrorHandlerWrapper.java:177)
            at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:441)
            at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:368)
            at com.sun.org.apache.xerces.internal.impl.XMLScanner.reportFatalError(XMLScanner.java:1388)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.startOfMarkup(XMLDocumentFragmentScannerImpl.java:2565)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2663)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:607)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:488)
            at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:835)
            at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:764)
            at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:123)
            at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1210)
            at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:568)
            at org.apache.tomcat.util.digester.Digester.parse(Digester.java:1642)
            at org.apache.catalina.startup.Catalina.load(Catalina.java:524)
            at org.apache.catalina.startup.Catalina.load(Catalina.java:562)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:601)
            at org.apache.catalina.startup.Bootstrap.load(Bootstrap.java:261)
            at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:413)
        Jul 05, 2012 1:52:15 PM org.apache.catalina.startup.Catalina load
        WARNING: Catalina.start using conf/server.xml: org.xml.sax.SAXParseException; systemId: file://C/tomcat6/conf/server.xml; lineNumber: 91; columnNumber: 2; The content of elements must consist of well-formed character data or markup.
            at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1236)
            at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:568)
            at org.apache.tomcat.util.digester.Digester.parse(Digester.java:1642)
            at org.apache.catalina.startup.Catalina.load(Catalina.java:524)
            at org.apache.catalina.startup.Catalina.load(Catalina.java:562)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:601)
            at org.apache.catalina.startup.Bootstrap.load(Bootstrap.java:261)
            at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:413)
        Jul 05, 2012 1:52:15 PM org.apache.tomcat.util.digester.Digester fatalError
        SEVERE: Parse Fatal Error at line 91 column 2: The content of elements must consist of well-formed character data or markup.
        org.xml.sax.SAXParseException; systemId: file://C/tomcat6/conf/server.xml; lineNumber: 91; columnNumber: 2; The content of elements must consist of well-formed character data or markup.
            at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:198)
            at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.fatalError(ErrorHandlerWrapper.java:177)
            at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:441)
            at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:368)
            at com.sun.org.apache.xerces.internal.impl.XMLScanner.reportFatalError(XMLScanner.java:1388)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.startOfMarkup(XMLDocumentFragmentScannerImpl.java:2565)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2663)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:607)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:488)
            at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:835)
            at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:764)
            at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:123)
            at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1210)
            at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:568)
            at org.apache.tomcat.util.digester.Digester.parse(Digester.java:1642)
            at org.apache.catalina.startup.Catalina.load(Catalina.java:524)
            at org.apache.catalina.startup.Catalina.start(Catalina.java:582)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:601)
            at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289)
            at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414)
        Jul 05, 2012 1:52:15 PM org.apache.catalina.startup.Catalina load
        WARNING: Catalina.start using conf/server.xml: org.xml.sax.SAXParseException; systemId: file://C/tomcat6/conf/server.xml; lineNumber: 91; columnNumber: 2; The content of elements must consist of well-formed character data or markup.
            at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1236)
            at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:568)
            at org.apache.tomcat.util.digester.Digester.parse(Digester.java:1642)
            at org.apache.catalina.startup.Catalina.load(Catalina.java:524)
            at org.apache.catalina.startup.Catalina.start(Catalina.java:582)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:601)
            at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289)
            at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414)
        Jul 05, 2012 1:52:15 PM org.apache.catalina.startup.Catalina start
        SEVERE: Cannot start server. Server instance is not configured.

    Does anyone have an idea why this is happening? I believe it has to do with the configuration of my connector. I'm pretty new to this, so any help would be much appreciated.
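    The SAXParseException at line 91, column 2 says Tomcat cannot parse server.xml itself, which matches the snippet above: the comment is missing its "!" and the connector element is written as "< Connector ... /" rather than a well-formed tag. A corrected version of that block (hedged; keystore path, password, and port kept from the question) would be:

        <!-- Define a SSL Coyote HTTP/1.1 Connector on port 8445 -->
        <Connector protocol="org.apache.coyote.http11.Http11Protocol"
                   port="8445" maxThreads="200"
                   scheme="https" secure="true" SSLEnabled="true"
                   keystoreFile="${user.home}/.tomcat" keystorePass="pnnlpw"
                   clientAuth="false" sslProtocol="TLS" />

    XML comments must open with <!-- and close with -->, and there can be no space between < and the element name.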

    Read the article

  • SAS Expanders vs Direct Attached (SAS)?

    - by jemmille
    I have a storage unit with 2 backplanes. One backplane holds 24 disks, one backplane holds 12 disks. Each backplane is independently connected to an SFF-8087 port (4 channels/12Gbit) on the RAID card. Here is where my question really comes in: can a backplane be overloaded, and how easily? All the disks in the machine are WD RE4 WD1003FBYX (black) drives that average 115MB/sec on writes and 125MB/sec on reads.

    I know things would vary based on the RAID level or filesystem we put on top of that, but it seems that a 24-disk backplane with only one SFF-8087 connector should be able to saturate the bus to a point that might actually slow it down. Based on my math, if I had a RAID-0 across all 24 disks and asked for a large file, I should in theory get 24 x 115MB/sec, which translates to 22.08 Gbit/sec of total throughput. Either I'm confused or this backplane is horribly designed, at least in a performance environment. I'm looking at switching to a model where each drive has its own channel from the backplane (and new HBAs or RAID card).

    EDIT: more details

    We have used pure Linux (CentOS), OpenSolaris, software RAID, hardware RAID, EXT3/4, and ZFS. Here are some examples using bonnie++:

    4 Disk RAID-0, ZFS

        WRITE      CPU  RE-WRITE  CPU  READ     CPU  RND-SEEKS
        194MB/s    19%  92MB/s    11%  200MB/s   8%  310/sec
        194MB/s    19%  93MB/s    11%  201MB/s   8%  312/sec
        ---------  ---  --------  ---  -------  ---  ---------
        389MB/s    19%  186MB/s   11%  402MB/s   8%  311/sec

    8 Disk RAID-0, ZFS

        WRITE      CPU  RE-WRITE  CPU  READ     CPU  RND-SEEKS
        324MB/s    32%  164MB/s   19%  346MB/s  13%  466/sec
        324MB/s    32%  164MB/s   19%  348MB/s  14%  465/sec
        ---------  ---  --------  ---  -------  ---  ---------
        648MB/s    32%  328MB/s   19%  694MB/s  13%  465/sec

    12 Disk RAID-0, ZFS

        WRITE      CPU  RE-WRITE  CPU  READ     CPU  RND-SEEKS
        377MB/s    38%  191MB/s   22%  429MB/s  17%  537/sec
        376MB/s    38%  191MB/s   22%  427MB/s  17%  546/sec
        ---------  ---  --------  ---  -------  ---  ---------
        753MB/s    38%  382MB/s   22%  857MB/s  17%  541/sec

    Now 16 Disk RAID-0, where it gets interesting

        WRITE      CPU  RE-WRITE  CPU  READ     CPU  RND-SEEKS
        359MB/s    34%  186MB/s   22%  407MB/s  18%  1397/sec
        358MB/s    33%  186MB/s   22%  407MB/s  18%  1340/sec
        ---------  ---  --------  ---  -------  ---  ---------
        717MB/s    33%  373MB/s   22%  814MB/s  18%  1368/sec

    20 Disk RAID-0, ZFS

        WRITE      CPU  RE-WRITE  CPU  READ     CPU  RND-SEEKS
        371MB/s    37%  188MB/s   22%  450MB/s  19%  775/sec
        370MB/s    37%  188MB/s   22%  447MB/s  19%  797/sec
        ---------  ---  --------  ---  -------  ---  ---------
        741MB/s    37%  376MB/s   22%  898MB/s  19%  786/sec

    24 Disk RAID-1, ZFS

        WRITE      CPU  RE-WRITE  CPU  READ     CPU  RND-SEEKS
        347MB/s    34%  193MB/s   22%  447MB/s  19%  907/sec
        347MB/s    34%  192MB/s   23%  446MB/s  19%  933/sec
        ---------  ---  --------  ---  -------  ---  ---------
        694MB/s    34%  386MB/s   22%  894MB/s  19%  920/sec

    28 Disk RAID-0, ZFS
    32 Disk RAID-0, ZFS
    36 Disk RAID-0, ZFS

    More details: here is the exact unit: http://www.supermicro.com/products/chassis/4U/847/SC847E1-R1400U.cfm
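    For anyone reproducing numbers like these, the bonnie++ invocations behind them typically look something like this (a sketch; the mount point is a placeholder, and -s should be at least 2x RAM so the ARC cannot absorb the whole working set):

        # 64GB working set, skip the small-file creation tests, run as root
        bonnie++ -d /store/bench -s 64g -n 0 -u root

    As a sanity check on the arithmetic above: 24 drives x 115MB/s = 2760MB/s = 22.08 Gbit/s, against roughly 12 Gbit/s raw (about 1.2GB/s after 8b/10b encoding) for a 4-lane 3Gbit SFF-8087 link, so a sequential workload across all 24 disks could indeed be expander-limited by roughly a factor of two.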

    Read the article

  • Two Sun Certification Exams To Retire August 1, 2010

    - by Paul Sorensen
    Effective August 1, 2010, Exam CX-310-400 ("Sun Certified Integrator for Identity Manager 7.1"), currently part of the "Sun Certified Integrator for Identity Manager 7.1" certification track, will be retired. We will also retire Exam CX-310-502 ("Sun Certified Java CAPS Integrator"), currently within the "Sun Certified Java CAPS Integrator" certification track. Both exams will remain available for registration and testing at Prometric Testing Centers through July 31, 2010.

    CREDENTIAL VALIDITY
    Please note that these credentials remain valid indefinitely for those holding the certifications. These retirements therefore have no effect on those who complete the certification requirements before August 1, 2010.

    QUICK LINKS
    Retiring Exams:
    - Exam CX-310-400 "Sun Certified Integrator for Identity Manager 7.1"
    - Exam CX-310-502 "Sun Certified Java CAPS Integrator"
    Certification Tracks:
    - Sun Certified Integrator for Identity Manager 7.1
    - Sun Certified Java CAPS Integrator
    Learn more: Oracle Certification Retirements

    Read the article

  • Recover NTFS data from a ZFS pool that was exposed as an iSCSI target

    - by David
    This was me being stupid, and the data is by no means critical; it's a learning experience first, time saver second. I set up a 100GB iSCSI target via the bare-bones instructions in napp-it. It's a volume LU. I then had my Windows 7 machine connect to the iSCSI target, formatted it to NTFS, and tested its performance with some large iso file transfers. I then unmapped the drive, reconnected to the target, and was forced to format to NTFS again. It was then I realized the files I had transferred only existed on the iSCSI target. I threw a little fit and then went about my business. When I was cleaning up my experiment I noticed in this screen: http://imgur.com/1xlcu.jpg that my experimental target tank/iSCSI still has a lot of data in it. Assuming my isos are still in this pool, how would I go about recovering them? While writing this I used GetDataBack for NTFS from www.runtime.org, and while it found two previous NTFS partitions, there was no data.
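    Before reaching for more in-guest recovery tools, the ZFS layer is worth a look, since both quick NTFS formats likely only rewrote filesystem metadata inside the zvol; a hedged sketch (the snapshot name is a placeholder, the volume name comes from the screenshot):

        # any snapshot of the zvol taken before the reformat is a free undo
        zfs list -t snapshot -r tank/iSCSI
        # if one exists, clone it and expose the clone as a new LU
        zfs clone tank/iSCSI@before-format tank/iSCSI-recovered

    If no snapshot exists, re-exposing the original LU to Windows (read-only if possible) and pointing a recovery tool at that raw disk is the next bet: the iso data blocks are probably still present, but ZFS knows nothing about NTFS structure, so recovery has to happen with NTFS-aware tools against the LU.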

    Read the article
