Search Results

Search found 6317 results on 253 pages for 'persistent storage'.


  • Intel Matrix Storage Manager not showing on Asus P5W DH Deluxe?

    - by Leon
    Under "Main" > "IDE Configuration" I have set "Configure SATA as" to "RAID" and "Onboard Serial-ATA BootRom" to "Enabled", but on POST I still do not see the Intel Matrix Storage Manager screen where I can press Ctrl+I to set up my RAID. I have the latest BIOS version. EDIT: Although I had set the ROM to "Enabled", I went and reset the BIOS settings to default. I then set the BootRom setting to "Disabled", restarted, and then set it to "Enabled" again, and that seemed to work.

    Read the article

  • How to find out which app locks a given USB storage device?

    - by Dimitri C.
    I'd like to unplug my USB mass storage device, but I get the following error message from Windows: Windows can't stop your 'XYZ' device because it is in use. Close any programs or windows that might be using the device, and then try again later. How can I find out which application locks the device, so I can safely unplug it without having to close all running applications first? Note: I'm working on a Windows Vista system.
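
    One way to investigate (a sketch, not a guaranteed fix): the Sysinternals Handle utility can list which processes hold open handles on the volume. The drive letter E: below is an assumption; run it from an elevated command prompt.

        rem List processes with open handles that reference the E: volume.
        rem (handle.exe is part of the Sysinternals suite; E: is an assumption.)
        handle.exe e:\

    Once the owning process is known, closing just that program (or the file it has open) should let the device stop cleanly.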

    Read the article

  • Migrate old bootable WD hard drive to a new 500 GB HD which was just used for storage and has stored programs on it; I'd like the 500 GB to be bootable

    - by tom
    I'd appreciate any help in this matter. This is a laptop with two hard-drive bays. I have a WD Scorpio that has become too small; it is bootable, with Windows 7. I also have a pretty new (3 months old) Hitachi 500 GB drive I was using for storage/backup of programs. It is not bootable, and I'd like to make it bootable without losing the backed-up info saved on it. Is there any way to do this? I'd like to keep only the 500 GB bootable hard drive due to the weight of the laptop - it's a 10 lb desktop-replacement Gateway P-7805u. Thanks in advance for your help in this matter, Tommy

    Read the article

  • Expanding the Partner Ecosystem with Third-Party Plug-ins

    - by Joe Diemer
    Oracle Enterprise Manager's extensibility capabilities are designed to allow customers and partners to adapt Enterprise Manager for management of heterogeneous environments with plug-ins and connectors. Third-party developers continue to take advantage of Oracle Enterprise Manager's Extensibility Development Kit (EDK) to build plug-ins to Enterprise Manager 12c, such as F5's BIG-IP plug-in and Entuity's Eye of the Storm Network Management plug-in. Partners can also validate their plug-ins through the Oracle Validated Integration (OVI) program, which assures customers that the plug-in has been tested and is functionally and technically sound, is designed in a reliable and standardized manner, and operates and performs as documented. Two very recent examples of partners with beta versions of their plug-ins are Blue Medora's VMware vSphere plug-in and the NetApp Storage plug-in.

    VMware vSphere Plug-in by Blue Medora
    Blue Medora, an Oracle Partner Network (OPN) Gold member, just announced that it is now signing up customers to try a beta version of its new VMware vSphere plug-in for Enterprise Manager 12c. According to Blue Medora, the vSphere plug-in monitors critical VMware metrics (CPU, memory, disk, network, etc.) at the Host, VM, Cluster and Resource Pool levels. It has minimal performance impact via an "agentless" approach that requires no installation directly on VMware servers. It has discovery capabilities for VMware Datacenters, ESX Hosts, Clusters, Virtual Machines, and Datastores. It offers integration of native VMware events into Enterprise Manager, and it provides over 300 VMware-related health, availability, performance, and configuration metrics. It comes with more than 30 out-of-the-box pre-defined thresholds and can manage VMware via a series of jobs split between cluster, host and VM target types. The company reports that the Enterprise Manager 12c plug-in supports vSphere versions 4.0, 4.5 and 5.0. Platforms supported include Linux 64-bit, Windows, AIX and Solaris SPARC and x86. Information about the plug-in, including how to sign up for the beta, is available at http://bluemedora.com under the "Products" tab.

    NetApp Storage Plug-in
    NetApp believes the combination of storage system monitoring with comprehensive management of Oracle systems through Enterprise Manager will help customers reduce the cost and complexity of managing applications that rely on NetApp storage and Oracle technologies. So NetApp built a plug-in, and reports that it provides comprehensive availability and performance information for NetApp storage systems. Using the plug-in, Oracle Enterprise Manager customers with NetApp storage solutions can track the association between databases and storage components and thereby respond quickly to faults and I/O performance bottlenecks. With the latest configuration management capabilities, one can also perform drift analysis to make sure all storage systems are configured according to established gold standards. The company is also now signing up beta customers, which can be done at the NetApp Communities site at https://communities.netapp.com/groups/netapp-storage-system-plug-in-for-oem12c-beta.

    Learn More about Enterprise Manager Extensibility
    More plug-ins from other partners are coming soon, and I'll be reporting on them here. To learn more about Enterprise Manager and how customers and partners can build plug-ins using the EDK to manage a multi-vendor data center, go to http://oracle.com/enterprisemanager in the Heterogeneous Management solution area. The site also lists the plug-ins available, with information on how to obtain them. More information about the Oracle Validated Integration program can be found at the OPN Enterprise Manager Knowledge Zone in the "Develop" tab.

    Read the article

  • Anyone had any issues getting a disk to start on a Walrus storage system?

    - by Peter NUnn
    Hi folks, I'm trying to get a Eucalyptus system up and running. I have the cloud controller and node controller running fine, with an instance running in the cloud, but without any persistent storage. When I try to create a volume I get:

        euca-create-volume -s 10 -z cluster1
        VOLUME vol-5F5D0659 10 creating 2010-05-31T09:10:11.408Z

    but when I try to look at the volume I get:

        euca-describe-volumes
        VOLUME vol-5F5D0659 10 cluster1 failed 2010-05-31T09:10:11.408Z
        VOLUME vol-5FE9065E 10 cluster1 failed 2010-05-31T09:02:56.721Z

    I've dug all over the place, but can't seem to turn up a reason the creation would fail, or where to start looking to see what the issue might be. Anyone have any ideas where to even start looking for the answer to this? Ta, Peter.
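
    A first step worth sketching (paths and commands below are assumptions that vary by Eucalyptus version and distribution): the storage controller's log files usually record why a volume creation failed, and the failed entries can be cleaned up once diagnosed.

        # On the storage controller host: search the logs around the creation time.
        # (/var/log/eucalyptus is the typical location, but an assumption here.)
        grep -i 'vol-5F5D0659' /var/log/eucalyptus/*.log
        # Remove the failed volumes once the root cause is understood.
        euca-delete-volume vol-5F5D0659

    Culprits commonly reported in that era include insufficient free space in the storage controller's volume group and loop-device exhaustion, so the log lines are worth reading closely.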

    Read the article

  • Virtualized data centre – Part three: Architecture

    - by marc dekeyser
    Having the basics (as discussed in the previous articles) is all good and well, but how do we get started on this?! It can be quite daunting after all! From my own point of view I can absolutely confirm your worries and concerns, but also tell you that it is not as hard as it seems! Deciding on what kind of motherboard to buy, which processor and how much memory is an activity you will spend quite some time researching. And that is not even mentioning storage! All in all it comes down to setting your expectations and your budget. Probably adjusting your expectations according to your budget :).

    Processors: As a rule of thumb you want VT-d (virtualization) technology built into the processor, allowing you to have 64-bit machines running on your host.

    Memory: The more the better! If you are building a home lab, don't bother with ECC unless you are going to run machines that absolutely should be on all the time and your comfort depends on it!

    Motherboard: Depends on what you are going to do with storage. If you are going the NAS way then the number of SATA ports/RAID capabilities does not really matter. If you decide to have a single server with lots of dedicated storage it obviously matters how many SATA ports you will have; alternatively you could use a RAID controller (but these set you back a pretty penny if you want one - Dell 6i's are usually available for a good bargain if you can find one!). Easiest is to get one with a built-in (on-board) graphics card, as a discrete card just adds more heat, power usage and possible points of failure.

    Networking: Just like your choice of motherboard, the networking side tends to depend on how you want to go. A single virtualization host with local storage can usually get away with having a single network card; a cluster or server which uses iSCSI storage tends to have more than one, teamed up :).

    Storage: The dreaded beast from the dark! The horror which lives in the forest! The most difficult decision you are going to make in the building of your lab. Why, you might ask? Simple, my friend: the right choice of storage can make or break your virtualization solution. The performance of your storage choice will have an important impact on the responsiveness of your virtual machines and the deployment of new machines. It also makes a run at your budget! If you decide to go the NAS route you will be dropping a lot more money than if you just had a bunch of disks sitting in a server and manually distributed the virtual machines over the disks.

    Platform: I'm a Microsoftee, so Hyper-V is a dead giveaway for me. If you are interested in using VMware I won't stop you, but the rest of my posts will be oriented on Server 2012 Hyper-V (aka 3.0)!

    What did I use? Before someone asks me this in the comments, I'll give you a quick rundown of what I am using:
    - Intel 2.4 GHz quad core processors (i something something)
    - 24 GB DDR3 memory
    - Single disk in each server (might revisit this as I move the servers to 2012)
    - Synology DS1812+ NAS
    - 3 network interfaces where possible
    - HP 1800 ProCurve managed switch

    I decided to spring for the NAS as I will also be using it for backups and media storage (which is working out quite nicely with my Xbox 360, I must say). At the time of building my two boxes (over a year and a half ago) these set me back about 900 euros each, so I imagine you can build the same or better for a lower price. The next article will diagram what I want to achieve and start the build of the Hyper-V 3.0 cluster!

    Read the article

  • What's hot in Oracle Premier Support News for Solaris, Storage and Systems - How to Patch!

    - by user12244613
    Struggling to locate patches for Sun products? Can't find your Oracle system drivers? This question has been raised many times by customers and was the source of a short video in the Oracle System Support Newsletter in February 2012. The transition from SunSolve to My Oracle Support changes how you think about the type of patch you're looking for. For example, in SunSolve you might have typed "e1000g" if looking for an Ethernet driver, but entering "e1000g" will not find anything in the My Oracle Support Patches and Updates menu. You need to use the Product (Advanced) search, which is driven off the product name: type "Ethernet" and select the Ethernet product you are looking for to locate its patches. Just to recap that video, if you are looking for the e1000g Ethernet driver:
    1. Log into My Oracle Support, select Patches and Updates, then select Product or Family (Advanced Search).
    2. In the product line enter "Ethernet" and select the product name from the menu.
    3. Check "remove superseded patches" - that ensures you only get relevant, current patches in the results.
    4. Select Search and the results are displayed.
    Now you have more options, including the platform (Solaris, Linux etc.), if you want to narrow the search further. Need more information? Log into My Oracle Support and watch the short 90-second video I put together, or view the 7-minute video using Firefox/Chrome - it shows searching for individual patches, Solaris, firmware etc. If you are not receiving the Oracle System Support Newsletter, you have two options:
    Option (a): Within My Oracle Support, make document ID 1363390.1 a favourite and revisit it on the 2nd of each month for the latest content.
    Option (b): By default the newsletter is sent to all customers who have logged a Service Request on an Oracle Systems hardware product during the last 12 months, unless you have opted out of receiving Oracle communications in your profile on http://oracle.com

    Read the article

  • What is the most reliable session storage in PHP: Memcache, database or files?

    - by user1179459
    What is the best and safest way to handle PHP sessions? Is the best way to store sessions in:
    - Database (more reliable, but a potential bottleneck - slow, and not good for sites that already make heavy database use)?
    - Memcache (super fast, but distributed, so more security concerns; chances of losing data when a server is restarted or when the cache is full)?
    - Files (the default option; I guess slow, since it reads and writes with file I/O; less security, etc.)?
    Which method is the best? What are the problems and good things of each of those approaches?
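
    For concreteness, a minimal sketch of switching PHP's session backend (the save path and the memcache host/port are assumptions; the memcached variant requires the memcached extension):

        <?php
        // Default: file-based sessions in an explicit directory (path is an assumption).
        ini_set('session.save_handler', 'files');
        ini_set('session.save_path', '/var/lib/php/sessions');

        // Alternative: memcached-backed sessions (requires the memcached extension;
        // with the older memcache extension the handler name is 'memcache' and the
        // path takes the form 'tcp://localhost:11211').
        // ini_set('session.save_handler', 'memcached');
        // ini_set('session.save_path', 'localhost:11211');

        session_start();
        $_SESSION['counter'] = isset($_SESSION['counter']) ? $_SESSION['counter'] + 1 : 1;

    Whichever backend is chosen, the calling code stays the same - only the handler configuration changes - which makes it cheap to benchmark the options against each other.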

    Read the article

  • Is there a simple, flat, XML-based query-able data storage solution? [closed]

    - by alex gray
    I have been in long pursuit of an XML-based, query-able data store, and despite continued searches and evaluations, I have yet to find a solution that meets my needs, which include:
    - Data is wholly contained within XML nodes, in flat text files.
    - There is a "native" - or at least unobtrusive - method with which to perform Create/Read/Update/Delete (CRUD) operations on the "schema". I would consider access via HTTP, XHR, JavaScript, PHP, Bash, or Perl to be unobtrusive, depending on the complexity of the set of dependencies.
    - Server-side file-system reads and writes.
    - A client-side interface element, accessible in any browser without a plug-in.
    Some extra, preferred (but optional) requirements include:
    - Responds to simple SQL, or similarly syntaxed, queries.
    - Serves the data on a bare-bones HTTPS server, with no "extra stuff", either via XMLHttpRequest, HTTP proper, or JSON.
    A few thoughts: What I'm looking for may be possible via some Java server implementations, but for the sake of this question, please do not suggest that - unless it meets ALL the requirements. Java, especially on the client side, is not really an option, nor is it appealing from a development viewpoint. I know walking the filesystem is a stretch, and I've heard it's possible with XPath or XSLT, but as far as I know, that's not ready for prime time, nor even yet a recommendation. However, the ability to recursively traverse the filesystem is needed for such a system to be of useful facility. At this point, I have basically implemented what I described via, of all things, CGI and Bash, but there has to be an easier way. Thoughts?
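
    On the XPath angle, a minimal sketch of querying flat XML files from the shell with libxml2's xmllint (the directory layout, file names and node names are assumptions):

        # Query every record whose id attribute is "42" across a tree of flat
        # XML files. Requires a reasonably recent libxml2 (xmllint --xpath).
        find /data/store -name '*.xml' \
          -exec xmllint --xpath '//record[@id="42"]' {} \; 2>/dev/null

    Wrapped in CGI, this gives roughly the Bash-based setup described above; it walks the filesystem recursively, but it re-parses every file per query, which is the main cost any real solution would need to avoid.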

    Read the article

  • Any suggestions to improve my PDO connection class?

    - by Scarface
    Hey guys, I am pretty new to PDO, so I basically just put together a simple connection class using information out of the introductory book I was reading. Is this connection efficient? If anyone has any informative suggestions, I would really appreciate it.

        class PDOConnectionFactory{
            public $con = null;
            // switch database?
            public $dbType = "mysql";
            // connection parameters
            public $host = "localhost";
            public $user = "user";
            public $senha = "password"; // "senha" is Portuguese for "password"
            public $db = "database";
            public $persistent = false;

            // new PDOConnectionFactory( true )  <--- persistent connection
            // new PDOConnectionFactory()        <--- no persistent connection
            // (__construct replaces the old PHP 4-style constructor name)
            public function __construct( $persistent = false ){
                // it verifies the persistence of the connection
                if( $persistent != false ){
                    $this->persistent = true;
                }
            }

            public function getConnection(){
                try{
                    $this->con = new PDO(
                        $this->dbType . ":host=" . $this->host . ";dbname=" . $this->db,
                        $this->user,
                        $this->senha,
                        array( PDO::ATTR_PERSISTENT => $this->persistent )
                    );
                    // carried through successfully, return the connection
                    return $this->con;
                // in case an error occurs, report it
                }catch( PDOException $ex ){
                    echo "We are currently experiencing technical difficulties. We have a bunch of monkeys working really hard to fix the problem. Check back soon: " . $ex->getMessage();
                }
            }

            // close connection
            public function Close(){
                if( $this->con != null )
                    $this->con = null;
            }
        }
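
    A quick usage sketch of the class as written above:

        $factory = new PDOConnectionFactory( true ); // true = persistent connection
        $db = $factory->getConnection();
        $stmt = $db->query( "SELECT 1" );
        $factory->Close();

    One design note: returning null on failure (after echoing) forces every caller to check the return value; rethrowing the PDOException is often the cleaner contract.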

    Read the article

  • How to Achieve Real-Time Data Protection and Availability... For Real

    - by JoeMeeks
    There is a class of business and mission critical applications where downtime or data loss have substantial negative impact on revenue, customer service, reputation, cost, etc. Because the Oracle Database is used extensively to provide reliable performance and availability for this class of application, it also provides an integrated set of capabilities for real-time data protection and availability. Active Data Guard, depicted in the figure below, is the cornerstone for accomplishing these objectives because it provides the absolute best real-time data protection and availability for the Oracle Database. This is a bold statement, but it is supported by the facts. It isn't so much that alternative solutions are bad, it's just that their architectures prevent them from achieving the same levels of data protection, availability, simplicity, and asset utilization provided by Active Data Guard. Let's explore further.

    Backups are the most popular method used to protect data and are an essential best practice for every database. Not surprisingly, Oracle Recovery Manager (RMAN) is one of the most commonly used features of the Oracle Database. But comparing Active Data Guard to backups is like comparing apples to motorcycles. Active Data Guard uses a hot (open read-only), synchronized copy of the production database to provide real-time data protection and HA. In contrast, a restore from backup takes time and often has many moving parts - people, processes, software and systems - that can create a level of uncertainty during an outage that critical applications can't afford. This is why backups play a secondary role for your most critical databases, complementing real-time solutions that can provide both data protection and availability.

    Before Data Guard, enterprises used storage remote-mirroring for real-time data protection and availability. Remote-mirroring is a sophisticated storage technology promoted as a generic infrastructure solution that makes a simple promise - whatever is written to a primary volume will also be written to the mirrored volume at a remote site. Keeping this promise is also what causes data loss and downtime when the data written to primary volumes is corrupt - the same corruption is faithfully mirrored to the remote volume, making both copies unusable. This happens because remote-mirroring is a generic process. It has no intrinsic knowledge of Oracle data structures to enable advanced protection, nor can it perform independent Oracle validation BEFORE changes are applied to the remote copy. There is also nothing to prevent human error (e.g. a storage admin accidentally deleting critical files) from also impacting the remote mirrored copy.

    Remote-mirroring tricks users by creating a false impression that there are two separate copies of the Oracle Database. In truth, while remote-mirroring maintains two copies of the data on different volumes, both are part of a single closely coupled system. Not only will remote-mirroring propagate corruptions and administrative errors, but the changes applied to the mirrored volume are a result of the same Oracle code path that applied the change to the source volume. There is no isolation, either from a storage mirroring perspective or from an Oracle software perspective. Bottom line, storage remote-mirroring lacks both the smarts and the isolation level necessary to provide true data protection.
    Active Data Guard offers much more than storage remote-mirroring when your objective is protecting your enterprise from downtime and data loss. Like remote-mirroring, an Active Data Guard replica is an exact block-for-block copy of the primary. Unlike remote-mirroring, an Active Data Guard replica is NOT a tightly coupled copy of the source volumes - it is a completely independent Oracle Database. Active Data Guard's inherent knowledge of Oracle data block and redo structures enables a separate Oracle Database, using a different Oracle code path than the primary, to use the full complement of Oracle data validation methods before changes are applied to the synchronized copy. These include: physical checksum, logical intra-block checking, lost write validation, and automatic block repair. The figure below illustrates the stark difference between the knowledge that remote-mirroring can discern from an Oracle data block and what Active Data Guard can discern.

    An Active Data Guard standby also provides a range of additional services enabled by the fact that it is a running Oracle Database - not just a mirrored copy of data files. An Active Data Guard standby database can be open read-only while it is synchronizing with the primary. This enables read-only workloads to be offloaded from the primary system and run on the active standby - boosting performance by utilizing all assets. An Active Data Guard standby can also be used to implement many types of system and database maintenance in rolling fashion. Maintenance and upgrades are first implemented on the standby while production runs unaffected at the primary. After the primary and standby are synchronized and all changes have been validated, the production workload is quickly switched to the standby. The only downtime is the time required for user connections to transfer from one system to the next. These capabilities further expand the expectations of availability offered by a data protection solution beyond what is possible using storage remote-mirroring.

    So don't be fooled by appearances. Storage remote-mirroring and Active Data Guard replication may look similar on the surface - but the devil is in the details. Only Active Data Guard has the smarts, the isolation, and the simplicity to provide the best data protection and availability for the Oracle Database. Stay tuned for future blog posts that dive into the many differences between storage remote-mirroring and Active Data Guard along the dimensions of data protection, data availability, cost, asset utilization and return on investment. For additional information on Active Data Guard, see:
    - Active Data Guard Technical White Paper
    - Active Data Guard vs Storage Remote-Mirroring
    - Active Data Guard Home Page on the Oracle Technology Network

    Read the article

  • Oracle Virtual Networking Partner Sales Playbook Now Available

    - by Cinzia Mascanzoni
    The Oracle Virtual Networking Partner Sales Playbook is now available to partners registered in the OPN Server and Storage Systems Knowledge Zones. It equips you to sell, identify and qualify opportunities, pursue specific sales plays, and deliver competitive differentiation. Find out where you should plan to focus your resources, and how to broaden your offerings by leveraging the OPN Specialized enablement available to your organization. The playbook is accessible to member partners through the following Knowledge Zones: Sun x86 Servers, Sun Blade Servers, SPARC T-Series Servers, SPARC Enterprise High-End M-Series Servers, SPARC Enterprise Entry-Level and Midrange M-Series Servers, Oracle Desktop Virtualization, NAS Storage, SAN Storage, Sun Flash Storage, StorageTek Tape Storage.

    Read the article

  • New DataCenter Options for Windows Azure

    - by ScottKlein
    Effective immediately, new compute and storage resource options are now available when selecting data center options in the Windows Azure Portal. "West US" and "East US" options are now available for Compute and Storage. SQL Azure options for these two data centers will be available in the next few months. The official announcement can be found here.
    In terms of geo-replication:
    - US East and West are paired together for Windows Azure Storage geo-replication
    - US North and South are paired together for Windows Azure Storage geo-replication
    These two new data centers are now visible in the Windows Azure Management Portal effective immediately. Compute and Storage pricing remains the same across all data centers. Get started with Windows Azure through the free 90-day trial.

    Read the article

  • Getting Windows Azure SDK 1.1 To Talk To A Local DB

    - by Richard Jones
    Just found this, if you're using Azure 1.1, which you probably will be if you've moved to Visual Studio 2010. To change the default database to something other than SQLEXPRESS for Development Storage, look at http://msdn.microsoft.com/en-us/library/dd203058.aspx. At the bottom it states:
    Using Development Storage with SQL Server Express 2008: By default the local Windows group BUILTIN\Administrator is not included in the SQL Server sysadmin server role on new SQL Server Express 2008 installations. Add yourself to the sysadmin role in order to use the Development Storage services on SQL Server Express 2008. See SQL Server 2008 Security Changes for more information.
    Changing the SQL Server instance used by Development Storage: By default, Development Storage will use the SQL Express instance. This can be changed by calling "DSInit.exe /sqlinstance:<SQL Server instance>" from the Windows Azure SDK command prompt.
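
    For example, switching Development Storage to a named instance might look like this (the instance name "MyInstance" is hypothetical - substitute your own):

        rem Run from the Windows Azure SDK command prompt.
        rem "MyInstance" is a placeholder for your SQL Server instance name.
        DSInit.exe /sqlinstance:MyInstance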

    Read the article

  • Deduping your redundancies

    - by nospam(at)example.com (Joerg Moellenkamp)
    Robin Harris of StorageMojo pointed to an interesting article on ACM Queue about deduplication and its impact on the resiliency of your data against data corruption.

    The problem in short: a considerable number of filesystems store important metadata at multiple locations. For example, the ZFS rootblock is copied to three locations. Other filesystems have similar provisions to protect their metadata. However, you can easily prove that the rootblock pointer in the uberblock of ZFS, for example, is pointing to blocks with absolutely equal content in all three locations (with zdb -uu and zdb -R). It has to be that way, because they are protected by the same checksum. A number of devices offer block-level dedup, either as an option or as part of their inner workings. However, when you store three identical blocks on them and the device does block-level dedup internally, the device may just deduplicate your redundant metadata to a block stored just once on the non-volatile storage. When this block is corrupted, you have essentially three corrupted copies. Three hit with one bullet.

    This is indeed an interesting problem: a device doing deduplication doesn't know if a block is important or just a datablock. This is the reason why I like deduplication as it's done in ZFS. It's an integrated part, so important parts don't get deduplicated away. A disk accessed by a block-level interface doesn't know anything about the importance of a block. A metadata block is nothing different to its inner mechanism than a normal data block, because there is no way to tell that this one is important and that those redundancies aren't allowed to fall prey to some clever deduplication mechanism.

    Robin talks about this in regard to the SandForce disk controllers, which use a kind of dedup to reduce some of the nasty effects of writing data to flash, but the problem is much broader. It is relevant whenever you are using a device with block-level deduplication. For most implementations you have to activate it explicitly, whereas certain devices do this by default or by design and you don't know about it. I'm not perfectly sure about that, though - given that storage administration and server administration are often different groups with different business objectives, I would ask your storage guys if they have activated dedup on their boxes without telling anybody, in order to speak less often with the storage sales rep.

    The problem is even more interesting with ZFS. You may use ditto blocks to store multiple copies of important data in the pool to increase redundancy, even when your pool consists of just one disk or a striped set of disks. However, when your device is doing dedup internally it may remove your redundancy before it hits the non-volatile storage. You've won nothing. You've just spent your disk quota on the LUNs in the SAN and made your disk admin happy because of the good dedup ratio. However, you can only fall into this specific "deduped ditto block" trap when your pool consists of a single device, because ZFS writes ditto blocks to different disks when there is more than one disk. Yet another reason why you should spend some extra thought when putting your zpool on a single LUN, especially when the LUN is sliced and diced out of a large heap of storage devices by a storage controller.

    However, I have one problem with the article and its specific mention of ZFS: you can only be hit by this problem when you are using the deduplicating device for the pool itself, and in the specifically mentioned case of SSDs this isn't the use case. Most deployments of SSDs in conjunction with ZFS are hybrid storage pools, where rotating-rust disks form the pool and SSDs are used as L2ARC/sZIL. And there it simply doesn't matter: when you really have to resort to the sZIL (your system went down), it doesn't matter whether one block or several blocks are corrupt - you have to fall back to the last known good transaction group on the device. On the other side, when a block in L2ARC is corrupt, you simply read it from the pool, and in HSP implementations that is the already-mentioned rust. In conjunction with ZFS this is more interesting when using a storage array that is capable of dedup and where you use LUNs for your pool. However, as mentioned before, on those devices it's a user-made decision to do so, and so it's less probable that you are deduplicating your redundancies. Other filesystems lacking a capability similar to hybrid storage pools are more "haunted" by this problem of SSDs using dedup-like mechanisms internally, because those filesystems really store the data on the SSD instead of using it just as an accelerating device.

    At the end, though, Robin is correct: it's yet another reason why protecting your data by creating redundancies, by dispersing it over several disks (by mirror or parity RAIDs), is really important. No dedup mechanism inside a device can dedup away your redundancy when you write it to a totally different and independent device.
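
    For reference, a quick sketch of the ZFS commands touched on above (the pool and dataset names are assumptions):

        # Dump the uberblocks of a pool ("tank" is an assumption) - this is how
        # you can verify the redundant rootblock copies described above.
        zdb -uu tank
        # Keep two copies of every block in a dataset (ditto blocks). Note the
        # caveat from the article: on a single deduplicating LUN this extra
        # redundancy may quietly be deduped away beneath ZFS.
        zfs set copies=2 tank/important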

    Read the article

  • HTML5 offline storage for Windows Mobile 6.1, or an alternative?

    - by SimonNet
    I understand that there are currently no browsers which support offline storage for Windows Mobile 6.1. I am trying to find a web-form-based solution that avoids losing data when my device has no connectivity. I have ruled out Gears, and would like to avoid a WinForms application, as the forms change so often. Are there any other approaches I should look at which are viable in C#? Are there any estimated dates for when we might see a browser for Mobile 6.1 which offers offline storage? Thanks

    Read the article

  • What's the simplest configuration of SVN on a Windows Server to avoid plain text password storage?

    - by detly
    I have an SVN 1.6 server running on a Windows Server 2003 machine, served via CollabNet's svnserve running as a service (using the svn protocol). I would like to avoid storing passwords in plain text on the server. Unfortunately, the default configuration and SASL with DIGEST-MD5 both require plain text password storage. What is the simplest possible way to avoid storing passwords in plain text? My constraints are: Path-based access control to the SVN repository needs to be possible (currently I can use an authz file). As far as I know, this is more-or-less independent of the authentication method. Active directory is available, but it's not just domain-connected windows machines that need to authenticate: workgroup PCs, Linux PCs and software that uses PySVN to perform SVN operations all need to be able to access the repositories. Upgrading the SVN server is feasible, as is installing additional software.
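
    One direction worth sketching (an outline under stated assumptions, not a verified recipe): svnserve built with Cyrus SASL support can authenticate against a sasldb, which stores a hashed DIGEST-MD5 secret rather than the literal password, and path-based authz continues to work independently of the authentication method.

        # In each repository's conf/svnserve.conf:
        #   [sasl]
        #   use-sasl = true
        # The realm in [general] must match the SASL realm. Create users with
        # saslpasswd2; "example.realm" and "alice" below are assumptions.
        saslpasswd2 -c -f /etc/sasldb2 -u example.realm alice

    The caveat: a DIGEST-MD5 secret is plaintext-equivalent if the sasldb file is stolen, so the file still needs tight filesystem permissions - but it does avoid literal plain-text password storage.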

    Read the article

  • Is it possible to achieve SATA II's 3Gbps (375MBps) between home network storage and home computers?

    - by techaddict
    Gigabit Ethernet only runs at 1Gbps (125MBps), whereas SATA II has up to three times that rate. Is it possible to achieve three times the Gigabit Ethernet rate - i.e. SATA II speeds - between home network storage and laptops or desktops with SATA II hard drives? If so, how? Or is the Gigabit Ethernet port on the laptops the limiting factor, making 125MBps the fastest practical transfer speed possible? (Practical, meaning without taking apart the laptop and doing physical modifications like branching a SATA II transfer cable, etc.) -- I just realized: wouldn't a USB 3.0 cable do the trick, since USB 3.0 can reach up to 625MBps (5Gbps)?

    Read the article

  • Are there any data remanence issues with flash storage devices?

    - by matt
    I am under the impression that, unlike magnetic storage, once data has been deleted from a flash drive it is gone for good, but I'm looking to confirm this. This actually relates to my smartphone, not my computer, but I figured it would be the same for any flash-type memory. Basically, I have done a "Factory Reset" on the phone, which wipes the flash ROM clean, but I'm wondering: is it really clean, or is the next person that has my phone, if they are savvy enough, going to be able to get all my passwords and whatnot? And yes, I am wearing my tinfoil hat so the CIA satellites can't read my thoughts, so I'm covered there.

    Read the article

  • How to install the TRIM RAID update for the Intel storage controller?

    - by Mike Pateras
    I just found this article, which says that Intel now supports TRIM for SSDs when the Intel storage controller is in RAID mode. It links to this download page. I'm pretty excited about that, but I'm a little confused. There seem to be two sets of drivers: an executable and something that's bootable. I ran the executable. Is that just to apply the drivers to my system now, and are the bootable drivers so that if I re-format I won't have to re-run everything? Do I need to do both? And is there a way to check if it worked? I'm running an i7 in Windows 7 (ASUS P6T Deluxe motherboard) with RAID 0, if that's significant.
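
    On the "is there a way to check if it worked" part, one hedged data point: Windows 7 can report whether it is issuing TRIM commands at all (this confirms the OS side only, not that the RAID driver passes them through to the drive):

        rem DisableDeleteNotify = 0 means Windows sends TRIM commands;
        rem 1 means TRIM is disabled at the OS level.
        fsutil behavior query DisableDeleteNotify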

    Read the article

  • What parameters to mdadm to re-create an md device whose payload starts at position 0x22000 on the backing storage?

    - by Adam Ryczkowski
    I am trying to recover from an mdadm RAID disaster which happened when moving from Ubuntu Server 10.04 to 12.04. I know the correct order of devices from the dmesg log, but even given this information I still cannot access the data. The superblocks look messy; the mdadm --examine output for each disk is on this question on askubuntu. By inspecting the raw contents of the backing storage, I found the beginning of my data (the LUKS container, in my case) at position 0x22000 relative to the beginning of the first partition in the RAID. Question: what combination of options should be issued to "mdadm --create" to re-create the array with the data starting at the given offset? What bitmap size? PS. The relevant information from syslog, from when the system was healthy, is pasted here.
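
    The shape of the invocation in question, as a heavily hedged sketch: 0x22000 bytes is 136 KiB, and newer mdadm releases let you pin that down with --data-offset. Every value below (level, device count, order, chunk size, metadata version) is an assumption to be replaced with the values recovered from the dmesg log.

        # DANGER: only run with --assume-clean and the correct device order;
        # a wrong guess overwrites what is left of the metadata. Work on
        # overlays or image copies first. All values here are assumptions.
        mdadm --create /dev/md0 --assume-clean --verbose \
              --level=5 --raid-devices=3 --metadata=1.2 --chunk=64 \
              --data-offset=136K --bitmap=none \
              /dev/sda1 /dev/sdb1 /dev/sdc1

    --bitmap=none sidesteps the bitmap-size question entirely, which is usually the safer choice when reconstructing an array only to recover data from it.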

    Read the article
