Search Results

Search found 1246 results on 50 pages for 'compression'.

Page 39 of 50

  • MPMoviePlayerController iPhone 3G & .mov file format

    - by Matt
    Just posting a question to the world... My app is currently set up to play video files. It plays .mov files, and when tested on a 3GS, iPhone 4, and an iPad it works great. But on an iPhone 3G the file does not play. Is this down to compression standards that the 3G can't handle? I liked the .mov extension as it compressed nicely for streaming to the device. I have now converted the video files to .m4v so they will play on the 3G, but each file is now three times the size. Thanks in advance for any answers I get!!
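
    If the 3G's limitation is the H.264 profile (3G-era hardware only decodes Baseline profile), a re-encode along these lines is one way to keep file sizes closer to the original .mov. This is a rough sketch assuming ffmpeg is available; filenames and bitrates are illustrative.

      # Illustrative re-encode targeting iPhone 3G playback (H.264 Baseline, level 3.0).
      # Older ffmpeg builds may need "-strict experimental" for the built-in aac encoder.
      ffmpeg -i input.mov \
             -c:v libx264 -profile:v baseline -level 3.0 -b:v 800k \
             -c:a aac -b:a 128k \
             -movflags +faststart \
             output.m4v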

    Read the article

  • DB2 increase bufferpool size and compressed tables not equal better performance. Why?

    - by Mestika
    Hi, I’m working on tuning and increasing the performance of my IBM DB2 version 9.7 database. I’ve been searching the net for the last couple of days and learned that if I created my tables in COMPRESS mode, created one more bufferpool, and gave both bufferpools 1024 MB, then the performance of my queries should increase because of fewer I/Os to the disks. However, when I ran my time analysis, the performance decreased. I added the new additions to my regular database with the indexes I’ve used all along. Every time I search Google I come up with the same statement: an increased bufferpool size, several bufferpools, AND table compression SHOULD give better performance. I’m very puzzled by this totally unexpected result. Are there some tuning mechanisms I’ve forgotten, or does anyone have an explanation for this odd behavior? Sincerely Mestika
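
    One thing worth ruling out: after switching a table to COMPRESS, the compression dictionary and the optimizer statistics have to be rebuilt, otherwise the optimizer may pick worse plans than before. A hedged sketch using the DB2 9.7 command line; database, schema and table names are illustrative.

      # Rebuild the compression dictionary, then refresh optimizer statistics
      db2 connect to MYDB
      db2 "ALTER TABLE myschema.mytable COMPRESS YES"
      db2 "REORG TABLE myschema.mytable RESETDICTIONARY"
      db2 "RUNSTATS ON TABLE myschema.mytable WITH DISTRIBUTION AND DETAILED INDEXES ALL"
      # Compare logical vs. physical reads per bufferpool to see whether the new pool is being hit
      db2 "SELECT BP_NAME, POOL_DATA_L_READS, POOL_DATA_P_READS FROM TABLE(MON_GET_BUFFERPOOL('', -2))"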

    Read the article

  • Sharepoint Web performance optimization

    - by hertzel
    We are running on SSL on the following server topology: 1 ISA (SSL termination/cache/proxy + AD authentication), 1 SharePoint, 1 IBM DB2 database as the enterprise/corporate DB, 1 MS SQL Server as the local DB. We have recently optimized caching, compression, minification, and other ASP.NET best practices such as viewstate and cookie sizes, minimizing round trips, parallel connections/domain sharding and a lot more... Now we are not convinced that we are in an optimized position, as the network resources, i.e. bandwidth and especially latency, are out of our control!! The client/browser-to-server/SharePoint traffic is trans-Atlantic, i.e. (ASIA, USA, EUROPE). As far as I understand, the only ways to improve the network (latency) are: TCP/SSL optimization - hardware/software? CDNs - cloud or our own? Your opinion and insights would be much appreciated. Best regards, Hertzel

    Read the article

  • Why does my git push hang after successfully pushing?

    - by John
    On a newly set up ssh git repo, whenever I push, I get normal output like this:
      $ git push
      Counting objects: 15, done.
      Delta compression using up to 4 threads.
      Compressing objects: 100% (9/9), done.
      Writing objects: 100% (9/9), 989 bytes, done.
      Total 9 (delta 7), reused 0 (delta 0)
    It happens very quickly, and the changes are immediately available on the server repo. But the output hangs there for about a minute, and then finishes with:
      To [email protected]:baz.git
      c8c391c..1de5e80  branch_name -> branch_name
    If I control-c before it finishes, everything seems to continue to be normal and healthy, locally and remotely. What is it doing while hanging? Is something configured incorrectly on the server side?
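
    A cheap way to see which phase is stalling (a slow post-receive/post-update hook or a reverse-DNS timeout on the server are common culprits) is to trace the push. Nothing below is specific to this repo; the server-side path is illustrative.

      # Verbose push plus git's own tracing of each phase
      GIT_TRACE=1 git push -v origin branch_name

      # On the server, check whether any hooks run after the objects are received
      ls -l /path/to/baz.git/hooks/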

    Read the article

  • How Can I Determine if HTTP Requests/Responses are compressed in IE7?

    - by DTS
    I'm trying to use Fiddler (v2.2.2.0) to see if HTTP traffic through IE7 is being compressed. I'm not seeing Accept-Encoding or Content-Encoding request/response headers being sent/returned and I do not need to decode the response data once it's arrived, which leads me to believe that the responses are NOT coming back compressed. However, when making the same requests using FireFox 3.5.7, I could see through FireBug that FF was sending Accept-Encoding and YSlow at least thought my data was coming back compressed. A comment in this question: http://stackoverflow.com/questions/897989/using-fiddler-to-check-iis-compression suggested that a proxy server may be to blame for stripping out headers and decompressing the content for security reasons. I am using Verizon FIOS for my broadband at home and am now wondering if Verizon is proxying my HTTP traffic? In short, how can I positively confirm/deny that responses are coming back compressed through IE? Thanks.
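
    Independent of the browser and Fiddler, you can ask the server directly whether it compresses a given URL; if curl gets gzip back but the IE7 session shows no Content-Encoding, something in between (a proxy, or a debugger decoding for you) is the likely cause. A sketch with an illustrative URL:

      # With Accept-Encoding: expect "Content-Encoding: gzip" and a smaller Content-Length
      curl -s -o /dev/null -D - -H "Accept-Encoding: gzip,deflate" http://www.example.com/page.aspx | grep -i -E "content-(encoding|length)"
      # Without Accept-Encoding: the uncompressed baseline for comparison
      curl -s -o /dev/null -D - http://www.example.com/page.aspx | grep -i "content-length"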

    Read the article

  • SQL Server – Learning SQL Server Performance: Indexing Basics – Video

    - by pinaldave
    Today I am reminded of one of my older cartoons, created years ago for Indexing and Performance. Every single time Performance is discussed, Indexes are mentioned along with it. In recent times, data and application complexity has been growing continuously. Organizations are demanding faster query response, performance, and scalability, and developers and DBAs now need to write efficient code to achieve this.
    DBA and Developers: A DBA’s role is critical, because a production environment has to run 24×7; hence maintenance, troubleshooting, and quick resolutions are the need of the hour. The first baby step in any performance tuning exercise in SQL Server involves creating, analysing, and maintaining indexes. Though we learnt indexing concepts in our college days, indexing implementation inside SQL Server can vary. Understanding this behaviour and designing our applications appropriately will make sure the application performs to its highest potential.
    Video Learning: Vinod Kumar and I have often thought about this and realized that a practical understanding of indexes is very important. One cannot master every single aspect of indexes, but there is some minimum expertise one should gain if performance is a concern. We decided to build a course that addresses just the practical aspects of performance. In this course, we explore some of these indexing fundamentals and elaborate on how SQL Server goes about using indexes. At the end of this course you will know the basic structure of indexes, practical insights into implementation, and maintenance tips and tricks revolving around indexes. Finally, we introduce SQL Server 2012 columnstore indexes. We have refrained from discussing the internal storage structure of indexes and have instead taken a more practical, demo-oriented approach to explain these core concepts.
    Course Outline: Here are the salient topics of the course. We have explained every single concept along with a practical demonstration, and additionally shared our personal scripts along with the same.
      - Introduction
      - Fundamentals of Indexing: Index Fundamentals; Index Fundamentals – Visual Representation
      - Practical Indexing Implementation Techniques: Primary Key; Over Indexing; Duplicate Index; Clustered Index; Unique Index; Included Columns; Filtered Index; Disabled Index; Index Maintenance and Defragmentation; Introduction to Columnstore Index
      - Indexing Practical Performance Tips and Tricks: Index and Page Types; Index and Non-Deterministic Columns; Index and SET Values; Importance of Clustered Index; Effect of Compression and Fillfactor; Index and Functions; Dynamic Management Views (DMV) – Fillfactor; Table Scan, Index Scan and Index Seek; Index and Order of Columns
      - Final Checklist: Index and Performance
    Well, we believe we have done our part; now we are waiting for your comments and feedback. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology, Video

    Read the article

  • SQL SERVER – What the Business Says Is Not What the Business Wants

    - by pinaldave
    This blog post is written in response to T-SQL Tuesday hosted by Steve Jones. Steve raised a very interesting question; every DBA and Database Developer has already faced this situation. When I read the topic, I felt that I could write several different examples here. Today, I will cover this scenario, which seems quite amusing.
    Shrinking Database: Earlier this year, I was working on a SQL Server performance tuning consultancy when I faced a very interesting situation. No matter how much I attempted to reduce the fragmentation, I always ended up with heavy fragmentation on the server. After careful research, I figured out that one of the jobs was continuously Shrinking the Database – which is a very bad practice. I have blogged about my experience over here: SQL SERVER – SHRINKDATABASE For Every Database in the SQL Server. I removed the incorrect shrinking process right away; once it was removed, everything continued working as it should. After a couple of days, I learned that one of their DBAs had put back the same DBCC process. I asked the Senior DBA to find out what was going on, and he came up with the following reason: “Business Requirement.” I could not believe this! Now it was time for me to go deep into the subject; moreover, it had become necessary to understand the need. After talking to the people concerned, I understood what they needed. Please read the exact business need in their own language.
    The Shrinking “Business Need”: “We shrink the database because if we take a backup after shrinking the database, the size of the backup is smaller. Once we take the backup, we have to send it to our remote location site. Our business requirement is that we always need to make sure that the file is smallest when we transfer it to the remote server.”
    The backup is not affected in any way by whether you shrink the database or not; the size of the backup will be the same. After a couple of tests, they agreed with me. Shrinking, on the other hand, will create performance issues, as it introduces heavy fragmentation into the database.
    The Real Solution: The real business need was the smallest possible backup file. We finally implemented a quick solution which they are still using to date: compressed backup. I wrote about this subject in detail a few years ago in SQL SERVER – 2008 – Introduction to New Feature of Backup Compression. A compressed backup not only produces a smaller file but also speeds up the backup operation. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
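
    For reference, the compressed-backup approach comes down to a single option on the BACKUP statement (available from SQL Server 2008 onward). A sketch run through sqlcmd; server, database and path names are purely illustrative.

      sqlcmd -S MYSERVER -E -Q "BACKUP DATABASE [ProductionDB] TO DISK = N'D:\Backups\ProductionDB.bak' WITH COMPRESSION, INIT, STATS = 10"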

    Read the article

  • Don’t Forget! In-Memory Databases are Hot

    - by andrewbrust
    If you’re left scratching your head over SAP’s intention to acquire Sybase for almost $6 billion, you’re not alone.  Despite Sybase’s 1990s reign as the supreme database standard in certain sectors (including Wall Street), the company’s flagship product has certainly fallen from grace.  Why would SAP pay a greater than 50% premium over Sybase’s closing price on the day of the announcement just to acquire a relational database which is firmly stuck in maintenance mode?
    Well, there’s more to Sybase than the relational database product.  Take, for example, its mobile application platform.  It hit Gartner’s “Leaders’ Quadrant” in January of last year, and SAP needs a good mobile play.  Beyond the platform itself, Sybase has a slew of mobile services; click this link to look them over.
    There’s a second major asset that Sybase has, though, and I wonder if it figured prominently into SAP’s bid: Sybase IQ.  Sybase IQ is a columnar database.  Columnar databases place values from a given database column contiguously, unlike conventional relational databases, which store all of a row’s data in close proximity.  Storing column values together works well in aggregation reporting scenarios, because the figures to be aggregated can be scanned in one efficient step.  It also makes for high rates of compression, because values from a single column tend to be close to each other in magnitude and may contain long sequences of repeating values.  Highly compressible databases use much less disk storage and can be largely or wholly loaded into memory, resulting in lightning-fast query performance.  For an ERP company like SAP, with its own legacy BI platform (SAP BW) and the entire range of Business Objects and Crystal Reports BI products (which it acquired in 2007), query performance is extremely important.
    And it’s a competitive necessity too.  QlikTech has built an entire company on a columnar, in-memory BI product (QlikView).  So too has startup company Vertica.  IBM’s TM1 product has been doing in-memory OLAP for years.  And guess who else has the in-memory religion?  Microsoft does, in the form of its new PowerPivot product.  I expect the technology in PowerPivot to become strategic to the full-blown SQL Server Analysis Services product and the entire Microsoft BI stack.  I sure don’t blame SAP for jumping on the in-memory bandwagon, if indeed the Sybase acquisition is, at least in part, motivated by that.
    It will be interesting to watch and see what SAP does with Sybase’s product line-up (assuming the acquisition closes), including the core database, the mobile platform, IQ, and even tools like PowerBuilder.  It is also fascinating to watch columnar’s encroachment on relational.  Perhaps this acquisition will be columnar’s tipping point and people will no longer see it as a fad.  Are you listening, Larry Ellison?

    Read the article

  • Getting Started With nServiceBus on VAN Mar 31

    - by van
    Topic: nServiceBus is a mature and powerful open source framework that enables developers to design robust, scalable, message-based, service-oriented architectures. The latest improvements in the configuration API enable developers to quickly get started and build a simple working system that uses messaging infrastructure. The goal of this session is to give a jump start with the framework, introduce basic concepts such as message handlers, Sagas, Pub/Sub and the Generic Host, and also create a working demo application that uses publish/subscribe messaging. The session is addressed to developers who are interested in learning how to get started with nServiceBus in order to design and build distributed systems. Bio: Bernard Kowalski is currently a Software Developer at Microdesk, one of Autodesk's leading partners in providing a variety of Geospatial and Computer-Aided Design solutions. Bernard has experience developing .NET framework-based applications utilizing Windows Forms, Windows Services, ASP.NET MVC, and Web services. In a recent project, Bernard architected and implemented a distributed system based on SOA principles using an open source implementation of an Enterprise Service Bus. Bernard develops software with Agile patterns and practices using Domain Driven Design combined with TDD (Test Driven Development). He is familiar with all of the following APIs: Autodesk Vault/Product Stream API, AutoCAD ActiveX/VBA/.NET API, AutoCAD Mechanical API, Autodesk Inventor API, Autodesk MapGuide Enterprise. Prior to joining Microdesk, Bernard worked as a researcher and teacher at the University of Science and Technology in Krakow, Poland, where he was awarded a PhD in Computer Methods in Materials Science. He also participated in research projects where he developed applications for the analysis of hot compression test results using advanced optimization techniques, and developed Finite Element Method-based programs for thermal and stress analysis using C++ and FORTRAN. Bernard is a member of the Domain Driven Design and ALT.NET user groups in NYC. Virtual ALT.NET (VAN) is the online gathering place of the ALT.NET community. Through conversations, presentations, pair programming and dojos, we strive to improve, explore, and challenge the way we create software. Using net conferencing technology such as Skype and LiveMeeting, we hold regular meetings, open to anyone, usually taking the form of a presentation or an Open Space Technology-style conversation. Please see the Calendar (http://www.virtualaltnet.com/Home/Calendar) to find a VAN group that meets at a time convenient to you, and feel welcome to join a meeting. Past sessions can be found on the Recording page. To stay informed about VAN activities, you can subscribe to the Virtual ALT.NET Google Group and follow the Virtual ALT.NET blog. Times below are Central Standard Time. Start Time: Wed, Mar 31, 2010 8:00 PM UTC/GMT -5 hours. End Time: Wed, Mar 31, 2010 10:00 PM UTC/GMT -5 hours. Attendee URL: http://www.virtualaltnet.com/van Zach Young http://www.virtualaltnet.com

    Read the article

  • Have you really fixed that problem?

    - by DavidWimbush
    The day before yesterday I saw our main live server's CPU go up to a constant 100%, with just the occasional short drop to a lower level. The exact opposite of what you'd want to see. We're log shipping every 15 minutes, and part of that involves calling WinRAR to compress the log backups before copying them over. (We're on SQL 2005 so there's no native compression, and we have bandwidth issues with the connection to our remote site.) I realised the log shipping jobs were taking about 10 minutes and that most of that was spent shipping a 'live' reporting database that is completely rebuilt every 20 minutes. (I'm just trying to keep this stuff alive until I can improve it.) We can rebuild this database in minutes if we have to fail over, so I disabled log shipping of that database. The log shipping went down to less than 2 minutes and I went off to the SQL Social evening in London feeling quite pleased with myself. It was a great evening - fun, educational and thought-provoking. Thanks to Simon Sabin & co for laying that on, and thanks too to the guests for making the effort when they must have been pretty worn out after doing DevWeek all day first. The next morning I came down to earth with a bump: CPU still at 100%. WTF? I looked in the activity monitor, but it was confusing because some sessions had been running for a long time, so it's not a good guide to what's using the CPU now. I tried the standard reports showing queries by CPU (average and total), but they only show the top 10, so they just showed my big overnight archiving and data cleaning stuff. But the Profiler showed it was four queries used by our new website usage tracking system. Four simple indexes later the CPU was back where it should be: about 20% with occasional short spikes. So the moral is: even when you're convinced you've found the cause and fixed the problem, you HAVE to go back and confirm that the problem has gone. And, yes, I have checked the CPU again today and it's still looking sweet.
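
    For the "what is using the CPU right now" question, the plan-cache DMVs go past the top-10 cutoff of the standard reports. A hedged sketch (SQL Server 2005 and later), run here through sqlcmd with an illustrative server name:

      sqlcmd -S MYSERVER -E -Q "
      SELECT TOP 20
             qs.total_worker_time / 1000 AS total_cpu_ms,
             qs.execution_count,
             SUBSTRING(st.text, (qs.statement_start_offset/2) + 1,
                       ((CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(st.text)
                         ELSE qs.statement_end_offset END - qs.statement_start_offset)/2) + 1) AS statement_text
      FROM sys.dm_exec_query_stats AS qs
      CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
      ORDER BY qs.total_worker_time DESC;"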

    Read the article

  • Oracle Optimized Solutions at Oracle OpenWorld 2012

    - by ferhatSF
    Have you registered for Oracle OpenWorld 2012 in San Francisco from September 30 to October 4? Visit the Oracle OpenWorld 2012 site today for registration and more information. Come join us to hear how Oracle Optimized Solutions can help you save money, reduce integration risks, and improve user productivity. Oracle Optimized Solutions are designed, pre-tested, tuned and fully documented architectures for optimal performance and availability. They provide written guidelines to help size, configure, purchase and deploy enterprise solutions that address common IT problems. Built with flexibility in mind, Oracle Optimized Solutions can be deployed as complete solutions or easily tailored to meet your specific needs - they are proven to save money, reduce integration risks and improve user productivity. Here is a preview of the planned Oracle OpenWorld sessions(*) on Oracle Optimized Solutions.
    October 1, 2012 (Monday):
      12:15 PM - CON7916 - Accelerate Oracle E-Business Suite Deployment with SPARC SuperCluster - Moscone West - 2001
      03:15 PM - GEN9691 - General Session: Accelerate Your Business with the Oracle Hardware Advantage - Moscone North - Hall D
      04:45 PM - CON4821 - Building a Flexible Enterprise Cloud Infrastructure on Oracle SPARC Systems - Moscone West - 2001
    October 2, 2012 (Tuesday):
      10:15 AM - CON4561 - Backup-and-Recovery Best Practices with Oracle Engineered Systems Products - Moscone South - 252
      11:45 AM - CON3851 - Optimizing JD Edwards EnterpriseOne on SPARC T4 Servers for Best Performance - Moscone West - 2000
      01:15 PM - GEN11472 - General Session: Breakthrough Efficiency in Private Cloud Infrastructure - Moscone West - 3014
      01:15 PM - CON4600 - Extreme Storage Scale and Efficiency: Lessons from a 100,000-Person Organization - Moscone South - 252
      05:00 PM - CON9465 - Next-Generation Directory: Oracle Unified Directory - Moscone West - 3008
      05:00 PM - CON4088 - Accelerate Your SAP Landscape with the Oracle SPARC SuperCluster - Moscone West - 2001
      05:00 PM - CON7743 - High-Performance Security for Oracle Applications Using SPARC T4 Systems - Moscone West - 2000
      05:00 PM - CON3857 - Archive Strategies for 100 Percent Data Availability - Moscone South - 270
    October 3, 2012 (Wednesday):
      10:15 AM - CON6528 - Configure Oracle Hybrid Columnar Compression to Optimize Query Database Performance up to 10x - Moscone South - 252
      11:45 AM - CON2590 - Breakthrough in Private Cloud Management on SPARC T-Series Servers - Moscone South - 270
      01:15 PM - CON4289 - Oracle Optimized Solution for Siebel CRM at ACCOR - Moscone West - 2000
      05:00 PM - CON7570 - Improve PeopleSoft HCM Performance and Reliability with SPARC SuperCluster - Moscone South - 252
    * Schedule subject to change
    In addition, there will be Oracle Optimized Solutions Hands-On-Labs sessions planned. Please enroll ahead of time as space is limited. All labs take place in Marriott Marquis - Salon 14/15:
      Monday, October 1, 2012, 01:45 PM - HOL9868 - Enterprise Cloud Infrastructure for SPARC with Oracle Enterprise Manager Ops Center 12c
      Monday, October 1, 2012, 03:15 PM - HOL9907 - Oracle Virtual Desktop Infrastructure Performance and Tablet Mobility
      Wednesday, October 3, 2012, 05:00 PM - HOL9870 - x86 Enterprise Cloud Infrastructure with Oracle VM 3.x and Sun ZFS Storage Appliance
      Thursday, October 4, 2012, 11:15 AM - HOL9869 - 0 to Database Backup and Recovery in 60 Minutes
    Oracle Optimized Solutions executives and experts will also be at hand for discussions and follow ups.
And don’t forget to catch live demonstrations of our complete Oracle Optimized Solutions while at Oracle OpenWorld 2012 in San Francisco. We recommend the use of the Schedule Builder tool to plan your visit to the conference and for pre-enrollment in sessions of your interest. We hope to see you there!

    Read the article

  • Using CTAS & Exchange Partition Replace IAS for Copying Partition on Exadata

    - by Bandari Huang
    Usage Scenario: Copy data and indexes from one partition to another partition in a partitioned table.
    Solution:
      - Create the partition definition
      - Copy data from one partition to another partition by 'Insert as select (IAS)'
      - Create a nonpartitioned table by 'Create table as select (CTAS)'
      - Convert the nonpartitioned table into a partition of the partitioned table by exchanging their data segments
      - Rebuild any unusable indexes
    Exchange Partition Conversions:
      - Mutual conversion between a partition (or subpartition) and a nonpartitioned table
      - Mutual conversion between a hash-partitioned table and a partition of a composite *-hash partitioned table
      - Mutual conversion between a [range | list]-partitioned table and a partition of a composite *-[range | list] partitioned table
    Exchange Partition Usage Scenarios:
      - High-speed data loading of new, incremental data into an existing partitioned table in a DW environment
      - Exchanging old data partitions out of a partitioned table; the data is purged from the partitioned table without actually being deleted and can be archived separately
    Exchange Partition Syntax:
      ALTER TABLE schema.table
        EXCHANGE [PARTITION|SUBPARTITION] [partition|subpartition]
        WITH TABLE schema.table
        [INCLUDING|EXCLUDING] INDEXES
        [WITH|WITHOUT] VALIDATION
        UPDATE [INDEXES|GLOBAL INDEXES]
    INCLUDING | EXCLUDING INDEXES: Specify INCLUDING INDEXES if you want local index partitions or subpartitions to be exchanged with the corresponding table index (for a nonpartitioned table) or local indexes (for a hash-partitioned table). Specify EXCLUDING INDEXES if you want all index partitions or subpartitions corresponding to the partition, and all the regular indexes and index partitions on the exchanged table, to be marked UNUSABLE. If you omit this clause, then the default is EXCLUDING INDEXES.
    WITH | WITHOUT VALIDATION: Specify WITH VALIDATION if you want Oracle Database to return an error if any rows in the exchanged table do not map into the partitions or subpartitions being exchanged. Specify WITHOUT VALIDATION if you do not want Oracle Database to check the proper mapping of rows in the exchanged table. If you omit this clause, then the default is WITH VALIDATION.
    UPDATE INDEXES | GLOBAL INDEXES: Unless you specify UPDATE INDEXES, the database marks UNUSABLE the global indexes or all global index partitions on the table whose partition is being exchanged. Global indexes or global index partitions on the table being exchanged remain invalidated. (You cannot use UPDATE INDEXES for index-organized tables. Use UPDATE GLOBAL INDEXES instead.)
    Notes on exchanging partitions and subpartitions:
      - Both tables involved in the exchange must have the same primary key, and no validated foreign keys can be referencing either of the tables unless the referenced table is empty.
      - When exchanging partitioned index-organized tables: the source and target table or partition must have their primary key set on the same columns, in the same order; if key compression is enabled, it must be enabled for both the source and the target, with the same prefix length; both the source and target must be index organized; both must have overflow segments, or neither can have overflow segments; likewise, both must have mapping tables, or neither can have a mapping table; and both must have identical storage attributes for any LOB columns.
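
    Putting the pieces above together, a minimal end-to-end sketch of the CTAS-plus-exchange flow. The connection string, table, index and partition names are all illustrative, and WITHOUT VALIDATION assumes you have already verified the rows belong in the target partition.

      sqlplus -s scott/tiger@EXADB <<'SQL'
      -- 1. Stage the rows to be moved into a nonpartitioned table (CTAS)
      CREATE TABLE sales_stage NOLOGGING PARALLEL 8
        AS SELECT * FROM sales PARTITION (p_2012_q1);

      -- 2. Swap the staging table's segment in as the target partition
      ALTER TABLE sales
        EXCHANGE PARTITION p_2012_q2 WITH TABLE sales_stage
        INCLUDING INDEXES WITHOUT VALIDATION
        UPDATE GLOBAL INDEXES;

      -- 3. Rebuild any local index partition left UNUSABLE
      ALTER INDEX sales_loc_idx REBUILD PARTITION p_2012_q2;
      SQL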

    Read the article

  • After 10 Years, MySQL Still the Right Choice for ScienceLogic's "Best Network Monitoring System on the Planet"

    - by Rebecca Hansen
    ScienceLogic has a pretty fantastic network monitoring appliance.  So good, in fact, that InfoWorld gave it their "2013 Best Network Monitoring System on the Planet" award.  Inside their "ultraflexible, ultrascalable, carrier-grade" enterprise appliance, ScienceLogic relies on MySQL and has since their start in 2003.  Check out some of the things they've been able to do with MySQL, and their reasons for continuing to use MySQL, in these highlights from our new MySQL ScienceLogic case study.
    ScienceLogic's larger customers use their appliance to monitor and manage 20,000+ devices, each of which generates a steady stream of data and a workload that is 85% write. On a large system, the MySQL database:
      - Averages 8,000 queries every second, or about 1 billion queries a day
      - Can reach 175,000 tables and up to 20 million rows in a single table
      - Is 2 terabytes on average and up to 6 terabytes
    "We told our customers they could add more and more devices. With MySQL, we haven't had any problems. When our customers have problems, we get calls. Not getting calls is a huge benefit." - Matt Luebke, ScienceLogic Chief Software Architect.
    ScienceLogic was approached by a number of Big Data / NoSQL vendors, but decided against using a NoSQL-only solution. Said Matt, "There are times when you really need SQL. NoSQL can't show me the top 10 users of CPU, or show me the bottom ten consumers of hard disk. That's why we weren't interested in changing and why we are very interested in MySQL 5.6. It's great that it can do relational and key-value using memcached."
    The ScienceLogic team is very cautious about putting only very stable technology into their product, and according to Matt, MySQL has been very stable: "We've been using MySQL for 10 years and we have never had any reliability problems. Ever."
    ScienceLogic now uses SSDs for their write-intensive appliance, and that change alone has helped them achieve a 5x performance increase.
    Learn more:
      - ScienceLogic MySQL Case Study
      - MySQL 5.6 InnoDB Compression options for better SSD performance
      - Tuning MySQL 5.6 for Great Product Performance - on demand webinar
      - Developer and DBA Guide to MySQL 5.6 white paper
      - Guide to MySQL and NoSQL: The Best of Both Worlds white paper

    Read the article

  • Reading a ZFS USB drive with Mac OS X Mountain Lion

    - by Karim Berrah
    The problem: I'm using a MacBook, mainly with Solaris 11, but sometimes with Mac OS X (Mountain Lion). The only missing thing is that Mac OS X can't read my external ZFS-based USB drive, where I store all my data. So I decided to look for a solution.
    Possible solution: Use VirtualBox with a Solaris 11 VM as a passthrough to my data. Here are the required steps.
    Installing a Solaris 11 VM:
      - Install VirtualBox on your Mac OS X and add the extension pack (needed for USB).
      - Plug your ZFS-based USB drive into your Mac; ignore it when asked to initialize it.
      - Create a VM for Solaris (bridged network), and before installing it, create a USB filter (in the settings of your VirtualBox VM, go to Ports, then USB, then add a new USB filter from the attached device - the grey USB-connector logo with the green plus sign).
      - Install the Solaris 11 VM, boot it, and install the Guest Additions.
      - Check the IP address of your Solaris VM with "ifconfig -a".
    Creating a path to your ZFS USB drive:
      - In Mac OS X, use Disk Utility to unmount the USB-attached drive, then unplug the USB device.
      - Switch back to VirtualBox and select the window where your Solaris 11 VM is running.
      - Plug in your ZFS USB drive; select "Ignore" if Mac OS invites you to initialize the disk.
      - In the VirtualBox VM menu, go to "Devices", then "USB Devices", and select your USB device from the drop-down menu.
    Connecting your Solaris VM to the USB drive:
      - Inside Solaris, check that the device is visible with the "format" CLI command; if not, repeat the previous steps.
      - With root privileges, force an import with zpool import -f myusbdevicepoolname, because this pool was created on another system.
      - Check that you see your new pool with "zpool status".
      - Share your pool with NFS: share -F NFS /myusbdevicepoolname
    Accessing the USB ZFS drive from Mac OS X: This is the easiest step - accessing an NFS share from Mac OS.
      - Create a "ZFSdrive" folder on your Mac OS desktop.
      - From a terminal under Mac OS: mount -t nfs IPaddressofMySolarisVM:/myusbdevicepoolname /Users/yourusername/Desktop/ZFSdrive
    Et voila! You can access your data, on a ZFS USB drive, directly from your Mountain Lion desktop. You can play with the share rights in order to alter read/write rights as needed, and you can activate compression and encryption inside the Solaris 11 VM.
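
    For convenience, the Solaris-side and Mac-side commands from the steps above collected in one place; the pool name is the one used above, while the VM IP address and user name are illustrative.

      # Inside the Solaris 11 VM, after the USB device has been captured by the VM
      zpool import -f myusbdevicepoolname     # -f because the pool was created on another host
      zpool status myusbdevicepoolname
      share -F nfs /myusbdevicepoolname       # publish the pool's mountpoint over NFS

      # On the Mac OS X side
      mkdir -p ~/Desktop/ZFSdrive
      sudo mount -t nfs 192.168.56.101:/myusbdevicepoolname ~/Desktop/ZFSdrive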

    Read the article

  • Building a Solaris 11 repository without network connection

    - by user12611852
    Solaris 11 has been released and is a fantastic new iteration of Oracle's rock solid, enterprise operating system.  One of the great new features is the repository-based Image Packaging System.  IPS not only introduces new cloud based package installation services, it is also integrated with our zones, boot environment and ZFS file systems to provide a safe, easy and fast way to perform system updates.
    My customers typically don't have network access and, in fact, can't connect to any network until they have "Authority to connect."  It's useful, however, to build up a Solaris 11 system with additional software using the new Image Packaging System and a locally stored repository. The Solaris 11 documentation describes how to create a locally stored repository with full explanations of what the commands do; I'm simply providing the quick and dirty steps.
    The easiest way is to download the ISO image, burn it to a DVD and insert it into your DVD drive.  Then, as root:
      pkg set-publisher -G '*' -g file:///cdrom/sol11repo_full/repo solaris
    Now you can install software using the GUI package manager or the pkg commands.  If you would like something more permanent (or don't have a DVD drive), however, it takes a little more work. After installing Solaris 11:
      - Download (on another system perhaps) the two files that make up the Solaris 11 repository from our download site.
      - Sneaker-net the files to your Solaris 11 system.
      - Unzip and cat the two files together to create one large ISO image. The file is about 6.9 GB in size.
      zfs create rpool/export/repoSolaris11
      zfs set atime=off rpool/export/repoSolaris11
      zfs set compression=on rpool/export/repoSolaris11    (save some space)
      lofiadm -a sol-11-1111-repo-full.iso /dev/lofi/1
      mount -F hsfs /dev/lofi/1 /mnt
    You could stop here and set the publisher to point to the /mnt/repo location; however, this mount will not be persistent across reboots. Copy the repository from the mounted ISO image to a permanent, on-disk location:
      rsync -aP /mnt/repo /export/repoSolaris11
      pkgrepo -s /export/repoSolaris11 refresh
      pkg set-publisher -G '*' -g /export/repoSolaris11/repo solaris
    You now have a locally installed repository for adding additional software packages to Solaris 11.  The documentation also takes you through publishing your repository on the network so that others can access it.
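
    A couple of quick checks to confirm the publisher is wired up correctly afterwards, using the same paths as above:

      pkgrepo info -s /export/repoSolaris11    # repository status and package count
      pkg publisher                            # should list the "solaris" publisher with the local origin
      pkg list -a entire                       # a lookup that must resolve against the local repository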

    Read the article

  • ClearTrace Performance on 170GB of Trace Files

    - by Bill Graziano
    I’ve always worked to make ClearTrace perform well.  That’s probably because I spend so much time watching it work.  I’m often going through two or three gigabytes of trace files but I rarely get the chance to run it on a really large set of files. One of my clients wanted to run a full trace for a week and then analyze the results.  At the end of that week we had 847 200MB trace files for a total of nearly 170GB. I regularly use 200MB trace files when I monitor production systems.  I usually get around 300,000 statements in a file that size if it’s mostly stored procedures.  So those 847 trace files contained roughly 250 million statements.  (That’s 730 bytes per statement if you’re keeping track.  Newer trace files have some compression in them but I’m not exactly sure what they’re doing.)  On a system running 1,000 statements per second I get a new file every five minutes or so. It took 27 hours to process these files on an older development box.  That works out to 1.77MB/second.  That means ClearTrace processed about 2,654 statements per second. You can query the data while you’re loading it but I’ve found it works better to use a second instance of ClearTrace to do this.  I’m not sure why yet but I think there’s still some dependency between the two processes. ClearTrace is almost always CPU bound.  It’s really just a huge, ugly collection of regular expressions.  It only writes a summary to its database at the end of each trace file so that usually isn’t a bottleneck.  At the end of this process, the executable was using roughly 435MB of RAM.  Certainly more than when it started but I think that’s acceptable. The database where all this is stored started out at 100MB.  After processing 170GB of trace files the database had grown to 203MB.  The space savings are due to the “datawarehouse-ish” design and only storing a summary of each trace file. You can download ClearTrace for SQL Server 2008 or test out the beta version for SQL Server 2012.  Happy Tuning!

    Read the article

  • How can I improve the battery life under 12.04 on my Inspiron 14z? [duplicate]

    - by cfogelberg
    This question already has an answer here: Tips to extend battery life for laptops and notebooks (24 answers).
    How do I improve the battery life of my Inspiron 14z under Ubuntu 12.04? This laptop gets 4-5 hours of battery life using Windows (e.g. here). I've removed Windows and installed Ubuntu 12.04, and the initial battery life was only 2 hours. With some tweaks (described below) it's still only ~2.5 hours. For reference, the laptop is the latest model of the 14z:
      - i5-3337U processor
      - 32GB MSATA SSD, 500GB HDD (5400rpm)
      - AMD Radeon HD7570M graphics card
    I have put ext4 partitions on both the SSD and the HDD, and have mounted / on the SSD and /home on the HDD. I also put a 24GB Linux swap partition at the start of the HDD, though I figure this won't be used all that much (the laptop has 8GB of RAM). After googling around and reading Ask Ubuntu and other sites extensively, I have done the following steps, and they have improved the battery life by roughly 30 minutes (the exact improvement isn't clear, but battery life is still nowhere near 4-5 hours):
      - Installed Jupiter (and set Performance to "Power Saving")
      - Installed laptop-mode-tools; cat /proc/sys/vm/laptop_mode now outputs 5 (previously it output 0), but it's not clear that this will help: AskUbuntu question
      - Turned down the brightness of my screen from full to 1/3
    Other things I have heard about but have not tried, for fear of frying the laptop or my Linux install:
      - Add "pcie_aspm=force" at the end of the line with "quiet splash" in /boot/grub/grub.cfg
      - Enable ALPM (but it may already be enabled in 12.04?)
      - Enable i915 framebuffer compression
      - Use a proprietary driver for the graphics card?
      - Turn off the graphics card? (What would happen if I relied on the internal Intel bridge?)
      - Use TLP?
      - Spin down the HDD more aggressively (howto, but I think laptop-mode-tools does this already)
    The only other thing I've noticed is that the plastic just above the F5, F6 and F7 keys gets really hot. According to Jupiter my CPU temperature is only 69 celsius and the System Monitor shows CPU load at 7%, so I don't think it's the CPU. Maybe it's the graphics card? Also, I've set up MongoDB and LAMP on the machine as well. When I run powertop, MongoDB is high in the list, but I'm not sure if that's relevant to battery life because I'm not actually doing anything with MongoDB most of the time.
    Edit - Additional info as requested:
      $ lspci -nnk | grep -iEA3 "(graphics|vga)"
      00:02.0 VGA compatible controller [0300]: Intel Corporation Ivy Bridge Graphics Controller [8086:0166] (rev 09)
              Subsystem: Dell Device [1028:057f]
              Kernel driver in use: i915
              Kernel modules: i915
      --
      02:00.0 VGA compatible controller [0300]: Advanced Micro Devices [AMD] nee ATI Thames [Radeon 7500M/7600M Series] [1002:6841]
              Subsystem: Dell Device [1028:057f]
              Kernel driver in use: radeon
              Kernel modules: radeon
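
    Of the untried items above, TLP and a look at the current ASPM policy are the least risky experiments. A sketch; the PPA and package names below are the commonly used ones, but worth double-checking for your release.

      # See which PCIe power-management policy the kernel is currently using
      cat /sys/module/pcie_aspm/parameters/policy

      # TLP: install, start, and let its battery defaults apply
      sudo add-apt-repository ppa:linrunner/tlp
      sudo apt-get update
      sudo apt-get install tlp tlp-rdw
      sudo tlp start

      # Then re-check the biggest remaining power consumers
      sudo powertop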

    Read the article

  • SSL23_WRITE:ssl handshake failure:s23_lib.c:177

    - by Armin
    When attempting to connect to an xmpp server over SSL, openssl fails with the following error:
      3071833836:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:s23_lib.c:177
    I believe that the server uses the RC4-MD5 cipher; here is the full output:
      [root@localhost ~]# openssl s_client -connect 184.106.52.124:5222 -cipher RC4-MD5
      CONNECTED(00000003)
      >>> SSL 2.0 [length 0032], CLIENT-HELLO
          01 03 03 00 09 00 00 00 20 00 00 04 01 00 80 00
          00 ff b0 c9 c2 3f 0b 0e 98 df b4 dc fe b7 e8 8f
          17 9a 34 b5 9b 17 1b 2b ac 01 dc bd 2b a9 2d 18
          44 0c
      3071866604:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:s23_lib.c:177:
      ---
      no peer certificate available
      ---
      No client certificate CA names sent
      ---
      SSL handshake has read 0 bytes and written 52 bytes
      ---
      New, (NONE), Cipher is (NONE)
      Secure Renegotiation IS NOT supported
      Compression: NONE
      Expansion: NONE
      ---
    Using gnutls-cli:
      [root@localhost ~]# gnutls-cli 184.106.52.124 -p 5222
      Resolving '184.106.52.124'...
      Connecting to '184.106.52.124:5222'...
      *** Fatal error: A TLS packet with unexpected length was received.
      *** Handshake has failed
      GNUTLS ERROR: A TLS packet with unexpected length was received.
    Connecting to the same server on port 5223 works fine. Using OpenSSL 1.0.1e-fips on CentOS 6.5 and OpenSSL 1.0.1f on Ubuntu 14.04.1. Any tips on how to troubleshoot this? Thanks in advance.
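
    Given that port 5223 (legacy direct-SSL) works, the most likely explanation is that 5222 speaks plain XMPP first and only switches to TLS after STARTTLS; s_client can be told to perform that negotiation itself (same host and port as above):

      openssl s_client -connect 184.106.52.124:5222 -starttls xmpp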

    Read the article

  • How can I forward an application with X11 in grayscale

    - by ??????? ???????????
    I am trying to run a graphical application at home and display it on a laptop which is located about six routing hops away. The problem is that the connection is so slow (or rather there is so much GOOEY being transferred) that the mouse is unresponsive and it takes a "long time" to redraw the window, even at a resolution of 800x600 pixels. The connection speeds are 10 Mbit up at home and about 1 Mbit down on the laptop, which I think should be sufficient for looking at some GUI in (almost) real time. Since this traffic is sent over a secure shell, I have enabled Compression with the highest CompressionLevel, along with Ciphers set to blowfish-cbc. This has substantially improved the responsiveness of the application, making it nearly usable. However, my goal is to improve the performance even further by sacrificing colors and even frame rate. The application to be displayed is a Qemu SDL window with a graphically-oriented OS in it. This is not strictly relevant, but perhaps there are options to tweak the SDL output which I am not aware of. A possible workaround would be to run the application in a "hidden" X server and enable TigerVNC on that X server. This would automatically give me the benefits of an optimized VNC viewport, but the goal is to do without that (to reduce complexity). The question I'm asking is: what are my options for reducing the data rate generated on the server in order to make the graphical application more usable on the client? As mentioned, colors are not important and I could probably work with 5-16 fps. Both machines are running Gentoo, with the software in question being:
      workstation: X.Org X Server 1.10.4; OpenSSH_5.8p1-hpn13v10, OpenSSL 1.0.0e; QEMU emulator version 0.15.1 (qemu-kvm-0.15.1)
      laptop: X.Org X Server 1.12.2; OpenSSH_5.8p1-hpn13v10lpk, OpenSSL 1.0.0j
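
    For what it's worth, the ssh-side settings mentioned above can be pinned in ~/.ssh/config so every session picks them up; the host alias and hostname are illustrative, and arcfour is another low-overhead cipher these OpenSSH builds support.

      # ~/.ssh/config on the laptop
      Host home-x11
          HostName myhome.example.org
          Compression yes
          ForwardX11 yes
          ForwardX11Trusted yes
          Ciphers blowfish-cbc,arcfour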

    Read the article

  • Scripting an automated SQLServer 2008 DR move

    - by ItsAMystery
    Hi all. We use the built-in log shipping in SQL Server to log-ship to our DR site, but once a month we do a DR test which requires us to move back and forth between our live and backup servers. We run multiple (30) databases on the system, so manually backing up the final logs and disabling the jobs is too much work and takes too long. I thought "no problem, I will script it", but have run into trouble with it always complaining that the final log shipment is too early to apply, even though I don't export the final log until putting the database into norecovery mode. Firstly, does anyone know a simple and reliable way of doing this? I have looked at some 3rd party software (Red Gate SQL Backup, I think it was), but that didn't make it easy in this situation either. What I want to be able to do is basically run a script (a series of stored procedures) to get me to DR and run another to get me back, with no data loss. My scripts are very simplistic at the moment, but here they are.
    There are 2 servers: primary PARIS, secondary PARIST. The StartAgentJobAndWait procedure was written by someone else (ta) and just checks that the jobs have finished, or quits them if they never end. At the moment I am just using a test database called BOB2, but if I can get it working I will pass in the database and job names.
    From PARIS:
      /* Disable backup job */
      exec msdb..sp_update_job @job_name = 'LSBackup_BOB2', @enabled = 0
      exec PARIST.msdb..sp_update_job @job_name = 'LSCopy_PARIS_BOB2', @enabled = 0
      exec PARIST.msdb..sp_update_job @job_name = 'LSRestore_PARIS_BOB2', @enabled = 0
      exec PARIST.master.dbo.DRStage2
    PARIST, DRStage2:
      DECLARE @RetValue varchar (10)
      EXEC @RetValue = StartAgentJobAndWait LSCopy_PARIS_BOB2, 2
      SELECT ReturnValue = @RetValue
      if @RetValue = 1
      begin
          print 'The Copy Task completed successfully'
      END
      ELSE
          print 'The Copy task failed. This may or may not be a problem; check the restore state of the database'
      SELECT @RetValue = 0
      EXEC @RetValue = StartAgentJobAndWait LSRestore_PARIS_BOB2, 2
      SELECT ReturnValue = @RetValue
      if @RetValue = 1
      begin
          print 'The Restore Task completed successfully'
      END
      ELSE
          print 'The Restore task failed. This may or may not be a problem; check the restore state of the database'
      exec PARIS.master.dbo.DRStage3
    PARIS, DRStage3:
      /* Do the last logship and move it to Trumpington */
      BACKUP log "BOB2" to disk = 'c:\drlogshipping\BOB2.bak' with compression, norecovery
      EXEC xp_cmdshell 'copy c:\drlogshipping \\192.168.7.11\drlogshipping'
      EXEC PARIST.master.dbo.DRTransferFinish
    PARIST, DRTransferFinish:
      restore database "BOB2" from disk = 'c:\drlogshipping\bob2.bak' with recovery

    Read the article

  • ZFS - destroying deduplicated zvol or data set stalls the server. How to recover?

    - by ewwhite
    I'm using Nexentastor on a secondary storage server running on an HP ProLiant DL180 G6 with 12 Midline (7200 RPM) SAS drives. The system has an E5620 CPU and 8GB RAM. There is no ZIL or L2ARC device. Last week, I created a 750GB sparse zvol with dedup and compression enabled to share via iSCSI to a VMWare ESX host. I then created a Windows 2008 file server image and copied ~300GB of user data to the VM. Once happy with the system, I moved the virtual machine to an NFS store on the same pool. Once up and running with my VMs on the NFS datastore, I decided to remove the original 750GB zvol. Doing so stalled the system. Access to the Nexenta web interface and NMC halted. I was eventually able to get to a raw shell. Most OS operations were fine, but the system was hanging on the zfs destroy -r vol1/filesystem command. Ugly. I found the following two OpenSolaris bugzilla entries and now understand that the machine will be bricked for an unknown period of time. It's been 14 hours, so I need a plan to be able to regain access to the server. http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6924390 and http://bugs.opensolaris.org/bugdatabase/view_bug.do;jsessionid=593704962bcbe0743d82aa339988?bug_id=6924824 In the future, I'll probably take the advice given in one of the bugzilla workarounds: "Do not use dedupe, and do not attempt to destroy zvols that had dedupe enabled." Update: I had to force the system to power off. Upon reboot, the system stalls at "Importing zfs filesystems". It's been that way for 2 hours now.
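
    For anyone in a similar position, the size of the dedup table (and therefore a rough idea of how painful a destroy or re-import will be) can be inspected before pulling the trigger. A sketch against the pool name used above:

      zdb -DD vol1             # DDT histogram: entry counts and on-disk/in-core size per entry
      zpool list vol1          # includes the current dedup ratio
      zpool status -D vol1     # pool health plus dedup table entry counts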

    Read the article

  • Improving Performance of RDP Over LAN

    - by Jared Brown
    Architecture: A deployment of 6 new HP thin clients (Windows XP Embedded) with TCP/IP access to several new HP servers (Windows 2003 Server). Each thin client is connected over fiber optic to a Gigabit Cisco switch, which the servers are connected to. There are 10/100 Ethernet to fiber converter boxes on each end of the fiber cables. Problem: Noticeable lag over RDP while using the Unigraphics CAD package. 3D models take .5 to 1 second to respond to mouse actions. Other Details: Network throughput on each thin client's RDP session is 7288 kbps. RDP connection settings - color setting: 15k, all themes, etc. turned off. Local and remote system performance stats are well within norms (CPU, Memory, and Network). Question: Are there newer versions of terminal services or RDP I can use on my existing OSes? Are there compression algorithms, etc. that are well suited for a high-bandwidth LAN? Are there valid alternatives that will yield higher performance (i.e. UltraVNC with drivers installed)? Are there TCP/IP tuning options I can exploit?

    Read the article

  • Is there a tool that can test what SSL/TLS cipher suites a particular website offers?

    - by Jeremy Powell
    Is there a tool that can test what SSL/TLS cipher suites a particular website offers? I've tried openssl, but if you examine the output: $ echo -n | openssl s_client -connect www.google.com:443 CONNECTED(00000003) depth=1 /C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA verify error:num=20:unable to get local issuer certificate verify return:0 --- Certificate chain 0 s:/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com i:/C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA 1 s:/C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA i:/C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority --- Server certificate -----BEGIN CERTIFICATE----- MIIDITCCAoqgAwIBAgIQL9+89q6RUm0PmqPfQDQ+mjANBgkqhkiG9w0BAQUFADBM MQswCQYDVQQGEwJaQTElMCMGA1UEChMcVGhhd3RlIENvbnN1bHRpbmcgKFB0eSkg THRkLjEWMBQGA1UEAxMNVGhhd3RlIFNHQyBDQTAeFw0wOTEyMTgwMDAwMDBaFw0x MTEyMTgyMzU5NTlaMGgxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlh MRYwFAYDVQQHFA1Nb3VudGFpbiBWaWV3MRMwEQYDVQQKFApHb29nbGUgSW5jMRcw FQYDVQQDFA53d3cuZ29vZ2xlLmNvbTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkC gYEA6PmGD5D6htffvXImttdEAoN4c9kCKO+IRTn7EOh8rqk41XXGOOsKFQebg+jN gtXj9xVoRaELGYW84u+E593y17iYwqG7tcFR39SDAqc9BkJb4SLD3muFXxzW2k6L 05vuuWciKh0R73mkszeK9P4Y/bz5RiNQl/Os/CRGK1w7t0UCAwEAAaOB5zCB5DAM BgNVHRMBAf8EAjAAMDYGA1UdHwQvMC0wK6ApoCeGJWh0dHA6Ly9jcmwudGhhd3Rl LmNvbS9UaGF3dGVTR0NDQS5jcmwwKAYDVR0lBCEwHwYIKwYBBQUHAwEGCCsGAQUF BwMCBglghkgBhvhCBAEwcgYIKwYBBQUHAQEEZjBkMCIGCCsGAQUFBzABhhZodHRw Oi8vb2NzcC50aGF3dGUuY29tMD4GCCsGAQUFBzAChjJodHRwOi8vd3d3LnRoYXd0 ZS5jb20vcmVwb3NpdG9yeS9UaGF3dGVfU0dDX0NBLmNydDANBgkqhkiG9w0BAQUF AAOBgQCfQ89bxFApsb/isJr/aiEdLRLDLE5a+RLizrmCUi3nHX4adpaQedEkUjh5 u2ONgJd8IyAPkU0Wueru9G2Jysa9zCRo1kNbzipYvzwY4OA8Ys+WAi0oR1A04Se6 z5nRUP8pJcA2NhUzUnC+MY+f6H/nEQyNv4SgQhqAibAxWEEHXw== -----END CERTIFICATE----- subject=/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com issuer=/C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA --- No client certificate CA names sent --- SSL handshake has read 1777 bytes and written 316 bytes --- New, TLSv1/SSLv3, Cipher is AES256-SHA Server public key is 1024 bit Compression: NONE Expansion: NONE SSL-Session: Protocol : TLSv1 Cipher : AES256-SHA Session-ID: 748E2B5FEFF9EA065DA2F04A06FBF456502F3E64DF1B4FF054F54817C473270C Session-ID-ctx: Master-Key: C4284AE7D76421F782A822B3780FA9677A726A25E1258160CA30D346D65C5F4049DA3D10A41F3FA4816DD9606197FAE5 Key-Arg : None Start Time: 1266259321 Timeout : 300 (sec) Verify return code: 20 (unable to get local issuer certificate) --- it just shows that the cipher suite is something with AES256-SHA. I know I could grep through the hex dump of the conversation, but I was hoping for something a little more elegant. I would prefer Linux tools, but Windows (or other) would be fine. This question is motivated by the security testing I do for PCI and general penetration testing. Update: GregS points out below that the SSL server picks from the cipher suites of the client. So it seems I would need to test all cipher suites one at a time. I think I can hack something together, but is there a tool that does particularly this?
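
    The "hack something together" version is a loop over every cipher name the local OpenSSL build knows, retrying the handshake with each one (host taken from the question; it can only report suites your own OpenSSL build supports):

      HOST=www.google.com:443
      for cipher in $(openssl ciphers 'ALL:eNULL' | tr ':' ' '); do
        if echo -n | openssl s_client -connect "$HOST" -cipher "$cipher" >/dev/null 2>&1; then
          echo "Accepted: $cipher"
        else
          echo "Rejected: $cipher"
        fi
      done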

    Read the article
