Search Results

Search found 11512 results on 461 pages for 'usb mass storage'.


  • Securing the internal hard drive when users boot from an external USB drive

    - by eshriek
    I supervise the use of a 'community' desktop computer. I would like to allow one specific individual to use the desktop by booting from an external drive. How do I secure the internal hard drive so that no access to it is possible while the external drive is in use? Primarily I want to avoid accidental modification of the internal drive. The desktop runs Vista; the external drive runs Ubuntu.
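
    One approach that avoids accidental mounting is to hide the internal disk from Ubuntu's automounter altogether, so it never shows up on the desktop when the external drive is booted. A minimal sketch follows; it assumes the internal drive appears as /dev/sda (check with sudo fdisk -l first), the rule file name is arbitrary, and the two environment hints cover both the older udisks and the newer udisks2 automounters. Note this only prevents casual or automatic mounting, not a determined root user.

        # /etc/udev/rules.d/99-hide-internal.rules  (hypothetical file name, created on the Ubuntu external drive)
        # Hide the internal disk and all of its partitions from the desktop automounter.
        KERNEL=="sda*", ENV{UDISKS_IGNORE}="1", ENV{UDISKS_PRESENTATION_HIDE}="1"

        # reload the rules (or simply reboot the external system)
        sudo udevadm control --reload-rules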

    Read the article

  • How do I completely wipe my USB thumb stick?

    - by Roi
    I own a Kingston thumb stick. I installed a Google Chrome OS image onto it so I could try the OS out. The installer wiped the stick completely and created 3 partitions (I had backed everything up first). Now I want to get it back to a single partition, but Disk Management will not let me "extend" the partition. I have tried different partitioning tools, but they didn't recognise the thumb stick, and DiskPart was also of no avail. Is there any software that will completely reformat the stick so that I end up with just 1 partition?
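
    For reference, the diskpart sequence below is the usual way to wipe every partition from a removable drive and recreate a single FAT32 volume. The disk number 5 is only an example; pick the entry whose size matches the stick in the list disk output, because clean destroys everything on the selected disk.

        C:\> diskpart
        DISKPART> list disk                  (identify the thumb stick by its size)
        DISKPART> select disk 5              (example number; use the stick's actual disk number)
        DISKPART> clean                      (removes every partition on the selected disk)
        DISKPART> create partition primary
        DISKPART> format fs=fat32 quick
        DISKPART> assign
        DISKPART> exit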

    Read the article

  • How can I fix slow, fake-capacity USB flash drives?

    - by madyotto
    Hi all. I have recently purchased some so-called 256 GB Kingston drives that turned out to be fake. I used a program called Check Flash and a few others to test for dead sectors and to measure read and write speed. The results: they read at 1.35 MB/s at best, read and wrote at 0.95 MB/s, and had a huge number of dead sectors. So what I'm asking is: Is there a way to drop the size to improve speed? Is there any other way to improve speed? How can I stop the device from addressing the dead sectors? Any help will be much appreciated. All the thanks (madyotto)
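
    On the "drop the size" question: the usual remedy for fake-capacity sticks is to measure how much real flash is present and then repartition so that only that region is ever addressed, which also stops writes wrapping onto the fake area. One way to do this on Linux is the f3 tool set; the sketch below assumes the stick is /dev/sdb and the last-usable-sector value is whatever f3probe reports (the number shown is a made-up example).

        # destructively test the drive and report its real size
        sudo f3probe --destructive --time-ops /dev/sdb
        # create a partition covering only the genuine flash, using the
        # last usable sector printed by f3probe (example value below)
        sudo f3fix --last-sec=16477878 /dev/sdb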

    Read the article

  • Fixed or consistent mount location for a USB hard drive

    - by Journeyman Geek
    I store my music on an external hard drive and play it with foobar2k. However, the drive letter changes, which usually means I need to rebuild a fairly large playlist every so often. I'm wondering if there's a way to reserve a drive letter for a specific external device (or type of device) by device ID or volume name, or if I'm better off using an NTFS mount point and re-mounting the drive to a folder each time. I'm using either a Windows XP or 7 system, and the external drive is NTFS.
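
    Both approaches can be set up from the command line; the sketch below assumes the letter M: and the folder C:\MusicDrive are free and that the external volume is number 4 in list volume, all of which are example values. A letter assigned this way is normally remembered for that particular volume across reconnects, and mountvol provides the folder-mount alternative.

        :: give the volume a fixed letter (persists for this specific volume)
        C:\> diskpart
        DISKPART> list volume                (find the external music volume)
        DISKPART> select volume 4            (example number)
        DISKPART> assign letter=M
        DISKPART> exit

        :: or mount it under an empty NTFS folder instead of a letter
        :: (run mountvol with no arguments to list the volume GUIDs)
        C:\> mountvol C:\MusicDrive \\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\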

    Read the article

  • USB Permission - Write protection

    - by dekhadmai
    I have an external hard disk and a friend has asked to borrow it. The thing is, I don't trust his anti-virus software. Is there any way to make some folders (space on the disk I set aside for him) writable while everything else stays read-only? Or is there software that can do this? It would also be great if I could keep full access for my own computer ONLY (perhaps with some specific software on my PC) and without having to modify anything else. I'm not asking for disk encryption, since I only want to limit the writable area (and still let my friend read through all my data); later I can scan just that area for viruses, because scanning an entire 500 GB disk per friend is no fun at all! Sorry if this doesn't seem like a programming question. Any help would be appreciated, thank you.
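
    If the disk is NTFS, plain file-system permissions can do most of this without extra software. The sketch below assumes the external disk is E:, that E:\ForFriend is the folder set aside for him, and that YOURPC\You is your own account; all three names are examples. It strips inherited rights, makes the whole drive read-only for Everyone, keeps the shared folder writable, and grants full control to your own account (a SID that exists only on your machine, so it gives the friend nothing). Run it from an elevated prompt; a virus running with admin rights on his machine could still take ownership, so this guards against accidental writes rather than a determined attacker.

        :: read-only for everyone across the whole drive, full control for your own account
        icacls E:\ /inheritance:r /grant Everyone:(OI)(CI)RX /grant YOURPC\You:(OI)(CI)F
        :: writable area just for the borrowed-space folder
        icacls E:\ForFriend /grant Everyone:(OI)(CI)M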

    Read the article

  • Using an LDAP server with the ZFS Storage Appliance

    - by user13138569
    I connected a ZFS Storage Appliance to OpenLDAP. First the LDAP server itself: I set up OpenLDAP on Solaris 11 and created a user called user01 with the slapd.conf and ldif shown below.

    slapd.conf:

        #
        # See slapd.conf(5) for details on configuration options.
        # This file should NOT be world readable.
        #
        include         /etc/openldap/schema/core.schema
        include         /etc/openldap/schema/cosine.schema
        include         /etc/openldap/schema/nis.schema

        # Define global ACLs to disable default read access.

        # Do not enable referrals until AFTER you have a working directory
        # service AND an understanding of referrals.
        #referral       ldap://root.openldap.org

        pidfile         /var/openldap/run/slapd.pid
        argsfile        /var/openldap/run/slapd.args

        # Load dynamic backend modules:
        modulepath      /usr/lib/openldap
        moduleload      back_bdb.la
        # moduleload    back_hdb.la
        # moduleload    back_ldap.la

        # Sample security restrictions
        #       Require integrity protection (prevent hijacking)
        #       Require 112-bit (3DES or better) encryption for updates
        #       Require 63-bit encryption for simple bind
        # security ssf=1 update_ssf=112 simple_bind=64

        # Sample access control policy:
        #       Root DSE: allow anyone to read it
        #       Subschema (sub)entry DSE: allow anyone to read it
        #       Other DSEs:
        #               Allow self write access
        #               Allow authenticated users read access
        #               Allow anonymous users to authenticate
        #       Directives needed to implement policy:
        # access to dn.base="" by * read
        # access to dn.base="cn=Subschema" by * read
        # access to *
        #       by self write
        #       by users read
        #       by anonymous auth
        #
        # if no access controls are present, the default policy
        # allows anyone and everyone to read anything but restricts
        # updates to rootdn.  (e.g., "access to * by * read")
        #
        # rootdn can always read and write EVERYTHING!

        #######################################################################
        # BDB database definitions
        #######################################################################

        database        bdb
        suffix          "dc=oracle,dc=com"
        rootdn          "cn=Manager,dc=oracle,dc=com"
        # Cleartext passwords, especially for the rootdn, should
        # be avoid. See slappasswd(8) and slapd.conf(5) for details.
        # Use of strong authentication encouraged.
        rootpw          secret
        # The database directory MUST exist prior to running slapd AND
        # should only be accessible by the slapd and slap tools.
        # Mode 700 recommended.
        directory       /var/openldap/openldap-data
        # Indices to maintain
        index   objectClass     eq

    The ldif used to create the initial entries, including user01:

        dn: dc=oracle,dc=com
        objectClass: dcObject
        objectClass: organization
        dc: oracle
        o: oracle

        dn: cn=Manager,dc=oracle,dc=com
        objectClass: organizationalRole
        cn: Manager

        dn: ou=People,dc=oracle,dc=com
        objectClass: organizationalUnit
        ou: People

        dn: ou=Group,dc=oracle,dc=com
        objectClass: organizationalUnit
        ou: Group

        dn: uid=user01,ou=People,dc=oracle,dc=com
        uid: user01
        objectClass: top
        objectClass: account
        objectClass: posixAccount
        objectClass: shadowAccount
        cn: user01
        uidNumber: 10001
        gidNumber: 10000
        homeDirectory: /home/user01
        userPassword: secret
        loginShell: /bin/bash
        shadowLastChange: 10000
        shadowMin: 0
        shadowMax: 99999
        shadowWarning: 14
        shadowInactive: 99999
        shadowExpire: -1

    With the LDAP server running, I pointed the ZFS Storage Appliance at it: under Configuration > SERVICES > LDAP I entered the Base search DN and the address of the LDAP server. At first, looking up the LDAP user user01 from the appliance only produced "Unknown or invalid user". To rule out a problem on the LDAP server side, I configured a Solaris 11 host as an LDAP client and checked with getent that the user could be resolved:

        # svcadm enable svc:/network/nis/domain:default
        # svcadm enable ldap/client
        # ldapclient manual -a authenticationMethod=none -a defaultSearchBase=dc=oracle,dc=com -a defaultServerList=192.168.56.201
        System successfully configured
        # getent passwd user01
        user01:x:10001:10000::/home/user01:/bin/bash

    The user resolves correctly. Finally I mounted a share exported by the appliance over NFS and confirmed that files created as user01 get the right ownership:

        # mount -F nfs -o vers=3 192.168.56.101:/export/user01 /mnt
        # su user01
        bash-4.1$ cd /mnt
        bash-4.1$ touch aaa
        bash-4.1$ ls -l
        total 1
        -rw-r--r--   1 user01   10000          0 May 31 04:32 aaa

    With that, accounts from the LDAP server can be used on the ZFS Storage Appliance.
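
    For completeness, the ldif above would typically be loaded into the running directory with a command along these lines (the file name init.ldif is just an example):

        # ldapadd -x -D "cn=Manager,dc=oracle,dc=com" -w secret -f init.ldif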

    Read the article

  • Implications of using many USB web cameras

    - by Martin
    I'm looking into connecting multiple low-resolution USB webcams to a single computer. What implications might this have on performance? How do, for example, four 320x240 cameras fare against a single 640x480 camera? I'm not well versed in the architecture of the USB interface; what are the performance caveats? By performance I mean how it would affect the time to read the image data from multiple cameras compared to a single one.
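
    A rough back-of-envelope figure helps frame this (assuming uncompressed YUY2 video at 2 bytes per pixel and 30 fps, which is typical of cheap webcams; MJPEG cameras need far less):

        one 320x240 camera : 320 x 240 x 2 B x 30 fps ≈  4.6 MB/s
        four such cameras  : 4 x 4.6 MB/s             ≈ 18.4 MB/s
        one 640x480 camera : 640 x 480 x 2 B x 30 fps ≈ 18.4 MB/s

    So the raw bandwidth is the same either way. The practical caveat is that USB 2.0's usable throughput (roughly 30-35 MB/s) is shared per host controller, and webcams reserve isochronous bandwidth when they start streaming, so four cameras hanging off one controller can refuse to stream simultaneously even when the averages look fine; spreading them across separate controllers or root hubs avoids that.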

    Read the article

  • Detecting USB drive insertion and removal using a Windows service and C#

    - by Kb
    I'm looking into the possibility of making a USB-distributed application that will autostart when a USB stick is inserted and shut down when the stick is removed. It will use .NET and C#. I'm looking for suggestions on how to approach this using C#. Update: two possible ways of implementing this as a service: override WndProc, or use a WMI query with ManagementEventWatcher.
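
    A minimal sketch of the ManagementEventWatcher route, which needs no window and therefore also works from a service. It watches Win32_VolumeChangeEvent, where EventType 2 is arrival and 3 is removal; the query and event plumbing are standard System.Management usage, but treat the drive-handling code as a placeholder to replace with your own start/stop logic.

        using System;
        using System.Management;   // add a reference to System.Management.dll

        class UsbWatcher
        {
            static void Main()
            {
                // EventType 2 = volume arrival, 3 = volume removal
                var query = new WqlEventQuery(
                    "SELECT * FROM Win32_VolumeChangeEvent WHERE EventType = 2 OR EventType = 3");

                using (var watcher = new ManagementEventWatcher(query))
                {
                    watcher.EventArrived += (sender, e) =>
                    {
                        int eventType = Convert.ToInt32(e.NewEvent["EventType"]);
                        string drive = (string)e.NewEvent["DriveName"];   // e.g. "F:"
                        Console.WriteLine(eventType == 2 ? "Inserted: " + drive
                                                         : "Removed: " + drive);
                    };

                    watcher.Start();            // events are delivered asynchronously from here on
                    Console.ReadLine();         // keep the process alive; a service would simply keep running
                    watcher.Stop();
                }
            }
        }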

    Read the article

  • Android USB Host Communication

    - by Kip Russell
    I'm working on a project that utilizes the USB host capabilities in Android 3.2. I'm suffering from a deplorable lack of knowledge and talent regarding USB/serial communication in general, and I'm unable to find any good example code for what I need to do. I need to read from a USB communication device. For example, when I connect via PuTTY on my PC and enter GO, the device starts spewing out data for me (pitch/roll/temperature/checksum):

        $R1.217P-0.986T26.3*60
        $R1.217P-0.986T26.3*60
        $R1.217P-0.987T26.3*61
        $R1.217P-0.986T26.3*60
        $R1.217P-0.985T26.3*63

    I can send the initial 'GO' command from the Android device, at which time I receive an echo of 'GO', then nothing else on any subsequent reads. How can I: 1) send the 'GO' command, and 2) read the stream of data that results? The USB device I'm working with has the following interfaces and endpoints:

        Device Class: Communication Device (0x2)
        Interface #0  Class: Communication Device (0x2)
            Endpoint #0  Direction: Inbound (0x80)  Type: Interrupt (0x3)  Poll Interval: 255  Max Packet Size: 32  Attributes: 000000011
        Interface #1  Class: Communication Device Class (CDC) (0xa)
            Endpoint #0  Address: 129  Number: 1  Direction: Inbound (0x80)   Type: Bulk (0x2)  Poll Interval: 0  Max Packet Size: 32  Attributes: 000000010
            Endpoint #1  Address: 2    Number: 2  Direction: Outbound (0x0)   Type: Bulk (0x2)  Poll Interval: 0  Max Packet Size: 32  Attributes: 000000010

    I'm able to deal with permissions, connect to the device, find the correct interface and assign the endpoints. I'm just having trouble figuring out which technique to use to send the initial command and read the ensuing data. I've tried different combinations of bulkTransfer and controlTransfer with no luck. Thanks. I'm using interface #1, as seen below:

        public AcmDevice(UsbDeviceConnection usbDeviceConnection, UsbInterface usbInterface) {
            Preconditions.checkState(usbDeviceConnection.claimInterface(usbInterface, true));
            this.usbDeviceConnection = usbDeviceConnection;
            UsbEndpoint epOut = null;
            UsbEndpoint epIn = null;
            // look for our bulk endpoints
            for (int i = 0; i < usbInterface.getEndpointCount(); i++) {
                UsbEndpoint ep = usbInterface.getEndpoint(i);
                Log.d(TAG, "EP " + i + ": " + ep.getType());
                if (ep.getType() == UsbConstants.USB_ENDPOINT_XFER_BULK) {
                    if (ep.getDirection() == UsbConstants.USB_DIR_OUT) {
                        epOut = ep;
                    } else if (ep.getDirection() == UsbConstants.USB_DIR_IN) {
                        epIn = ep;
                    }
                }
            }
            if (epOut == null || epIn == null) {
                throw new IllegalArgumentException("Not all endpoints found.");
            }
            AcmReader acmReader = new AcmReader(usbDeviceConnection, epIn);
            AcmWriter acmWriter = new AcmWriter(usbDeviceConnection, epOut);
            reader = new BufferedReader(acmReader);
            writer = new BufferedWriter(acmWriter);
        }
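
    Since this is a CDC-ACM style device (data class 0x0a with one bulk IN and one bulk OUT endpoint), the bulk pipes usually stay silent until the host has issued the ACM control requests on endpoint 0. The sketch below shows one plausible sequence rather than a verified fix: the SET_LINE_CODING (0x20) and SET_CONTROL_LINE_STATE (0x22) requests follow the USB CDC spec, while the 9600-8-N-1 line coding, the 1000 ms timeout and the trailing CR/LF on the command are assumptions to adjust for the real hardware. It is meant to slot into a class like AcmDevice above, after the endpoints have been found.

        private static final int TIMEOUT_MS = 1000;   // assumption; tune for the device

        void startStreaming(UsbDeviceConnection conn, UsbEndpoint epIn, UsbEndpoint epOut) {
            // CDC SET_LINE_CODING: 9600 baud (little-endian dwDTERate), 1 stop bit, no parity, 8 data bits
            byte[] lineCoding = { (byte) 0x80, 0x25, 0x00, 0x00, 0x00, 0x00, 0x08 };
            conn.controlTransfer(0x21, 0x20, 0, 0, lineCoding, lineCoding.length, TIMEOUT_MS);
            // CDC SET_CONTROL_LINE_STATE: assert DTR and RTS so the device starts talking
            conn.controlTransfer(0x21, 0x22, 0x3, 0, null, 0, TIMEOUT_MS);

            // send the command over the bulk OUT endpoint
            byte[] cmd = "GO\r\n".getBytes();
            conn.bulkTransfer(epOut, cmd, cmd.length, TIMEOUT_MS);

            // keep polling the bulk IN endpoint; each call returns at most one packet (32 bytes here)
            byte[] buf = new byte[epIn.getMaxPacketSize()];
            while (true) {
                int len = conn.bulkTransfer(epIn, buf, buf.length, TIMEOUT_MS);
                if (len > 0) {
                    Log.d("ACM", new String(buf, 0, len));   // e.g. "$R1.217P-0.986T26.3*60"
                }
            }
        }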

    Read the article

  • Using USB PS2 hand controller in C#.NET

    - by rross
    I'm attempting to create a program that takes input from a USB PS2 hand controller and translates and passes the info to an RS-232 device. I already have everything working for the RS-232 side. The problem is interfacing with the USB controller. There doesn't seem to be any good documentation out there, and on top of that .NET 3.0/3.5 doesn't have any libraries to help you out. How would one even get started?
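
    One way to sidestep raw USB entirely: most USB "PS2-style" pads enumerate as ordinary HID game controllers, so they can be polled through DirectInput or, with nothing more than P/Invoke, through the old winmm joystick API that ships with Windows. The sketch below assumes the pad shows up as joystick ID 0; the struct layout follows the Win32 JOYINFOEX definition and is worth double-checking against the SDK headers.

        using System;
        using System.Runtime.InteropServices;

        class JoyPoll
        {
            [StructLayout(LayoutKind.Sequential)]
            struct JOYINFOEX
            {
                public int dwSize;
                public int dwFlags;
                public int dwXpos, dwYpos, dwZpos, dwRpos, dwUpos, dwVpos;
                public int dwButtons, dwButtonNumber, dwPOV, dwReserved1, dwReserved2;
            }

            const int JOY_RETURNALL = 0x00FF;   // X|Y|Z|R|U|V|POV|BUTTONS

            [DllImport("winmm.dll")]
            static extern int joyGetPosEx(int uJoyID, ref JOYINFOEX pji);

            static void Main()
            {
                var info = new JOYINFOEX();
                info.dwSize = Marshal.SizeOf(typeof(JOYINFOEX));
                info.dwFlags = JOY_RETURNALL;

                // Joystick ID 0 is the first controller Windows enumerated (an assumption).
                if (joyGetPosEx(0, ref info) == 0)
                    Console.WriteLine("X={0} Y={1} buttons=0x{2:X}",
                                      info.dwXpos, info.dwYpos, info.dwButtons);
                else
                    Console.WriteLine("No joystick found on ID 0.");
            }
        }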

    Read the article

  • CE 5.0 with a 3G USB stick

    - by johncatfish
    I'm new to CE programming and I have a Marvell PXA270 device with Windows CE 5.0 installed. The device has one USB port. I wonder if there's ANYTHING I can try to get a 3G HSDPA USB stick working with it. When plugged in, the stick is only recognised as a pen drive (its folders show up), but there is no Internet. Thanks.

    Read the article

  • Receiving data from a USB device in C or C++

    - by Midas
    It starts to bore me that I always have to ask questions about trifles. I simply need a list of all plugged-in USB devices, have the user select one, and let the console application receive any data the USB device sends. I can then start playing around with the data in my program. I don't want to use libraries, only standard C++ functions, and the program should work on Windows 98.
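
    For what it's worth, standard C++ alone cannot talk to USB hardware; some operating-system call is always involved. The lightest-weight case on Windows is a device that exposes itself as a virtual COM port, which can then be read with nothing but Win32 calls that exist back to Windows 98. The sketch below assumes such a device on COM3, which is a made-up example; a HID or vendor-specific device would instead need SetupDi/DeviceIoControl plumbing or a library.

        /* Minimal sketch: dump whatever a USB-to-serial style device sends on COM3. */
        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            char  buf[64];
            DWORD got;
            HANDLE h = CreateFile("\\\\.\\COM3", GENERIC_READ, 0, NULL,
                                  OPEN_EXISTING, 0, NULL);
            if (h == INVALID_HANDLE_VALUE) {
                printf("open failed: %lu\n", (unsigned long) GetLastError());
                return 1;
            }
            while (ReadFile(h, buf, sizeof buf, &got, NULL) && got > 0)
                fwrite(buf, 1, got, stdout);   /* pass the bytes straight through */
            CloseHandle(h);
            return 0;
        }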

    Read the article

  • What is the best storage server infrastructure? DAS/NAS/SAN, or installing GlusterFS/Lustre/HDFS/DRBD?

    - by TORr0t
    I am trying to design an infrastructure for the project I am working on. It would be a file-sharing/downloading project (like RapidShare); I need large storage capacity and good scalability, and I will add new storage nodes as the project grows. I have come up with options built on Lustre, GlusterFS, HDFS or DRBD. To start, I would have two servers: one running the GlusterFS client plus the web server, the DB server and a streaming server, and the other acting as a GlusterFS storage node. (After some time I would add more node servers and client servers; I don't yet know how many.) So I am leaning towards GlusterFS. But I really wonder whether I need high-performance servers with large storage, or whether average/slow servers with large storage are enough. Or are NAS/DAS/SAN solutions better as GlusterFS storage nodes? I might buy a NAS and install GlusterFS onto it. I would be happy to hear your recommendations for the server specifications (for both clients and nodes); I really don't know whether the nodes need a lot of RAM and fast CPUs (I'm sure the client servers do). The files will be streamed as well, so automatic file replication is important: the system should work like a cloud, where under high traffic the storage nodes copy the most-demanded files so they can be streamed from several places, which would get rid of the scalability problems and let visitors stream/download those files. I am also open to your experiences and thoughts about any good solution; Lustre, HDFS and DRBD are the other options, and I would be happy to hear your thoughts on them too. I would be very happy to hear back from anyone about anything I have written here. Thanks
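
    On the replication point: in GlusterFS the replica count is fixed when the volume is created, and the clients write to all replicas themselves, so the storage nodes mostly need network and disk throughput rather than big CPUs. A minimal sketch, assuming two nodes called node1 and node2 with a brick directory at /data/brick1 (all example names):

        # on node1, after both servers are installed and reachable
        gluster peer probe node2
        gluster volume create sharevol replica 2 node1:/data/brick1 node2:/data/brick1
        gluster volume start sharevol
        # on the client / web server
        mount -t glusterfs node1:/sharevol /var/www/files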

    Read the article

  • What is the fastest way to move 1 petabyte from one storage system to a new one?

    - by marc.riera
    First of all, thanks for reading, and sorry for asking something related to my job. I understand that this is something I should solve by myself, but as you will see it is a bit difficult. A small description:

        Now:   1 PB of storage using DDN S2A9900 for the OSTs, 4 OSS, 10GigE network (Lustre 1.6);
               100 compute nodes with 2x InfiniBand; 1 InfiniBand switch with 36 ports.
        After: the previous storage plus another 1 PB using DDN S2A9900 or LSI E5400 (still to decide), Lustre 2.0;
               8 OSS, 10GigE network; 100 compute nodes with 2x InfiniBand.

    Previous experience: we transferred 120 TB in less than 3 days using the following command:

        tar -C /old --record-size 2048 -b 2048 -cf - dir | tar -C /new --record-size 2048 -b 2048 -xvf - 2>&1 | tee /tmp/dir.log

    So, the big problem: doing the big mathematical equations, I conclude that we are going to need one month to transfer the data from one side to the other. During this time the researchers would have to step back, and I'm personally not happy with that. I mention the InfiniBand connections because I think there may be a chance to use 18 compute nodes (18 x 2 IB = 36 ports) to transfer the data from one storage system to the other. I'm trying to figure out whether the IB switch will handle all the traffic, but even if it ends up saturated it should still be faster than 10GigE. Also, having Lustre 1.6 and 2.0 agents on the same server works quite well, so there is no need to go through 1.8 and upgrade the metadata servers in two steps. Any ideas? Many thanks.

    Note 1: Zoredache, we can divide it into two blocks, (A) 600 TB and (B) 400 TB. The idea is to move (A) to the new storage, which is formatted with Lustre 2.0, then reformat the space where (A) was with Lustre 2.0, move (B) onto that block, and extend it with the space where (B) was. This way we end up with (A) and (B) on separate filesystems, with 1 PB each.
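
    Since the tar pipe is already proven, one low-risk way to use those 18 nodes is simply to run several tar pipes in parallel, one per top-level directory, each from a different compute node, so that no single host or link is the bottleneck. A rough sketch (dir01/dir02/dir03 and the node names are placeholders, and both filesystems are assumed to be mounted on every node):

        # launched from an admin host; each node streams one directory tree
        ssh node01 'tar -C /old -cf - dir01 | tar -C /new -xf -' &
        ssh node02 'tar -C /old -cf - dir02 | tar -C /new -xf -' &
        ssh node03 'tar -C /old -cf - dir03 | tar -C /new -xf -' &
        wait   # extend up to the 18 nodes, balancing directory sizes by hand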

    Read the article

  • What are the typical methods used to scale up/out email storage servers?

    - by nareshov
    Hi. What I've tried: I have two email storage architectures, an old one and a new one.

    Old: courier-imapd on several (18+) servers with 1 TB of storage each. If one of them shows signs of running out of disk space, we migrate a few email accounts to another server. The servers don't have replicas, and there are no backups either.

    New: dovecot2 on a single huge server with 16 TB of (SATA) storage and a few SSDs. We store fresh mail on the SSDs and run doveadm purge to move mail older than a day onto the SATA disks. There is an identical server which holds an rsync backup, at most 15 minutes old, of the primary. The higher-ups/management wanted to pack as much storage as possible into each server in order to minimise the cost of SSDs per server. The rsync'ing is done because GlusterFS wasn't replicating well under that kind of heavy small/random IO. Scaling out was expected to be done by provisioning another pair of such huge servers; on facing disk-crunch issues like in the old architecture, accounts would again be moved manually.

    Concerns/doubts: I'm not convinced that the synchronously-replicated-filesystem idea works well for heavy random/small IO. GlusterFS isn't working for us yet, and I'm not sure whether there's another filesystem out there for this use case. The idea was to keep identical pairs and use DNS round-robin for email delivery and IMAP/POP3 access; if one of the servers went down for whatever reason (planned or unplanned), we'd move its IP to the other server in the pair. With filesystems like Lustre, I get the advantage of a single namespace, whereby I do not have to worry about manually migrating accounts around and updating MAILHOME paths and other metadata/data.

    Questions: What are the typical methods used to scale up/out with the traditional software (courier-imapd / dovecot)? Does traditional software that stores mail on a locally mounted filesystem pose a roadblock to scaling out with minimal problems? Does one have to re-write (parts of) it to work with an object store of some sort, such as OpenStack object storage?
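
    For the identical-pair setup specifically, dovecot 2.x has its own mail-level replication (dsync), which replicates per mailbox over TCP between the two peers and so avoids both the rsync window and a shared filesystem. A rough sketch of the dovecot.conf fragment, assuming the partner host is called mx-backup.example.com and that the replication plugin is available in the installed version (the replicator/aggregator service and doveadm listener settings it also needs are version-dependent, so check the dovecot documentation):

        # enable the replication machinery
        mail_plugins = $mail_plugins notify replication

        # where the copies go: the other half of the pair
        plugin {
          mail_replica = tcp:mx-backup.example.com
        }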

    Read the article

  • How to restore your production database without needing additional storage

    - by David Atkinson
    Production databases can get very large. This in itself is to be expected, but when a copy of the database is needed the database must be restored, requiring additional and costly storage.  For example, if you want to give each developer a full copy of your production server, you'll need n times the storage cost for your n-developer team. The same is true for any test databases that are created during the course of your project lifecycle. If you've read my previous blog posts, you'll be aware that I've been focusing on the database continuous integration theme. In my CI setup I create a "production"-equivalent database directly from its source control representation, and use this to test my upgrade scripts. Despite this being a perfectly valid and practical thing to do as part of a CI setup, it's not the exact equivalent to running the upgrade script on a copy of the actual production database. So why shouldn't I instead simply restore the most recent production backup as part of my CI process? There are two reasons why this would be impractical. 1. My CI environment isn't an exact copy of my production environment. Indeed, this would be the case in a perfect world, and it is strongly recommended as a good practice if you follow Jez Humble and David Farley's "Continuous Delivery" teachings, but in practical terms this might not always be possible, especially where storage is concerned. It may just not be possible to restore a huge production database on the environment you've been allotted. 2. It's not just about the storage requirements, it's also the time it takes to do the restore. The whole point of continuous integration is that you are alerted as early as possible whether the build (yes, the database upgrade script counts!) is broken. If I have to run an hour-long restore each time I commit a change to source control I'm just not going to get the feedback quickly enough to react. So what's the solution? Red Gate has a technology, SQL Virtual Restore, that is able to restore a database without using up additional storage. Although this sounds too good to be true, the explanation is quite simple (although I'm sure the technical implementation details under the hood are quite complex!) Instead of restoring the backup in the conventional sense, SQL Virtual Restore will effectively mount the backup using its HyperBac technology. It creates a data and log file, .vmdf, and .vldf, that becomes the delta between the .bak file and the virtual database. This means that both read and write operations are permitted on a virtual database as from SQL Server's point of view it is no different from a conventional database. Instead of doubling the storage requirements upon a restore, there is no 'duplicate' storage requirements, other than the trivially small virtual log and data files (see illustration below). The benefit is magnified the more databases you mount to the same backup file. This technique could be used to provide a large development team a full development instance of a large production database. It is also incredibly easy to set up. Once SQL Virtual Restore is installed, you simply run a conventional RESTORE command to create the virtual database. This is what I have running as part of a nightly "release test" process triggered by my CI tool. 
    RESTORE DATABASE WidgetProduction_virtual
    FROM DISK=N'C:\WidgetWF\ProdBackup\WidgetProduction.bak'
    WITH
      MOVE N'WidgetProduction' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_WidgetProduction_Virtual.vmdf',
      MOVE N'WidgetProduction_log' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_log_WidgetProduction_Virtual.vldf',
      NORECOVERY, STATS=1, REPLACE
    GO
    RESTORE DATABASE WidgetProduction_virtual WITH RECOVERY

    Read the article

  • Simple Scripting for your Exalogic Storage

    - by Trond Strømme
    As part of my job in Oracle ACS (Advanced Customer Services) I handle lots of different systems and customers. Among the systems I have worked with recently are Oracle's Exalogic engineered systems. One of the things I had never had much exposure to as a system developer/architect/middleware guy/Java dude was storage, outside of consuming it for my photography needs. Well, I'm always ready for a new challenge... I downloaded the 7000 series storage simulator when it was released in the good old Sun days, and found it fun and instructive to play around with, but as I never touched storage in any way (besides consuming it) I forgot about it. A couple of years ago, when I started working with Exalogic engineered systems, it came to light again as an invaluable learning and testing tool for the storage embedded in an Exalogic: Oracle's Sun ZFS Storage 7320 Appliance. Anyway... I have been "booted" into a part-time role as the interim storage/system admin/middleware/Java guy for a client, and found I needed to create the occasional report or summary of what is using the storage in the 7320 (as default-configured for an Exalogic: 40 TB of disk in a mirrored configuration, yielding 18 TB of actual space). Reading the nice documentation and some articles on the Oracle Technology Network, I saw great possibilities in the embedded ECMAScript3/JavaScript engine in the 7000 series. In my personal opinion, anyone who is dealing with Exalogic administration, or is exposed to any of the 7000 series of storage appliances and servers that Oracle offers, should have a VirtualBox instance of it kicking around. For development and testing it's a fantastic tool. (It can save you from having to explain to your management (most of) the embarrassing FAILs you can cause by testing something on a production system...) So download and install it. A small sidestep: if, after firing up the 7000 series simulator in VirtualBox, you have forgotten what its IP address is, logging in directly via the running VirtualBox VM will sort you out. In my case I can ssh to 192.168.56.101, or point a browser to https://192.168.56.101:215, to log into the storage appliance. One simple way of executing a script on the 7320 is to ssh to the device and redirect a file containing the script into ssh:

        ssh root@192.168.56.101 < myscript.js

    One question I got from my client and the people who will take over the systems was: "how can we see the quotas and allocations for all projects/shares in one easy go, so we don't have to go navigating around in the BUI for all the hundreds of shares the 7320 is hosting just to check if anything is running dry?" Easy! JavaScript time, VirtualBox and emacs!

        // NOTE! This script is available 'as is'. It has been run on a couple of 7320s
        // (running 2010.08.17.3.0,1-1.25 and 2011.04.24.1.0,1-1.8), a 7420 and the VB image,
        // but I personally offer no guarantee whatsoever that it won't make your server topple,
        // catch fire or in any way go pear shaped; run at your own risk, or learn from my code
        // and/or mistakes.
        script
        run('cd /');
        run('shares');
        // get all projects:
        proj = list();

        function spaceToGig(bytes) {
            return bytes / 1073741824;   // convert bytes to GB
        }

        function fullInPercent(quota, space_data) {
            tmp = (space_data / quota) * 100;
            return tmp;
        }

        // print header, slightly good looking
        printf(" %s/%-15s %8s(GB) %7s(GB) %5s(GB) %7s(GB) %3s\n", "Project", "Share", "Quota", "Ref", "Snap", "Total", "%full");
        printf("-------------------------------------------------------------------------------\n");

        // for each project, get all shares. check for quota and calculate percentage and human readable figures..
        for (i = 0; i < proj.length; i++) {
            run('select ' + proj[i]);
            // get all shares for a project
            var pshares = list();
            // for each share get quota properties
            for (j = 0; j < pshares.length; j++) {
                run('select ' + pshares[j]);
                quota = get('quota');           // properties associated with a share or inherited from a project
                spaceData = get('space_data');
                spaceSnap = get('space_snapshots');
                spaceTotal = get('space_total');
                if (quota > 0) {                // has quota
                    printf(" %s/%-15s \t%4.2fGB\t%.2fGB\t%.2fGB\t%.2fGB\t%5.2f%%\n", proj[i], pshares[j], spaceToGig(quota), spaceToGig(spaceData), spaceToGig(spaceSnap), spaceToGig(spaceTotal), fullInPercent(quota, spaceTotal));
                } else {                        // no quota
                    printf(" %s/%-15s \t%8s\t%.2fGB\t%.2fGB\t%.2fGB\t%s\n", proj[i], pshares[j], "N/A", spaceToGig(spaceData), spaceToGig(spaceSnap), spaceToGig(spaceTotal), "N/A");
                }
                run('cd ..');
            }
            run('done');
        }

    The resulting output should look something like this:

        Project/Share                Quota(GB)   Ref(GB)  Snap(GB) Total(GB)   %full
        -------------------------------------------------------------------------------
        ACSExalogicSystem/domains          N/A    0.04GB    0.00GB    0.04GB     N/A
        ACSExalogicSystem/logs             N/A    0.01GB    0.00GB    0.01GB     N/A
        ACSExalogicSystem/nodemgrs         N/A    0.00GB    0.00GB    0.00GB     N/A
        ACSExalogicSystem/stores           N/A    0.04GB    0.00GB    0.04GB     N/A
        ***_dev/FMW_***_1                133GB    4.24GB    0.01GB    4.25GB    3.19%
        ***_dev/FMW_***_2                  N/A    4.25GB    0.01GB    4.26GB     N/A
        ***_dev/applications              10GB    0.00GB    0.00GB    0.00GB    0.00%
        ***_dev/domains                   50GB   10.75GB    3.55GB   14.30GB   28.61%
        ***_dev/logs                      20GB    0.32GB    0.01GB    0.33GB    1.66%
        ***_dev/softwaredepot             20GB    4.15GB    0.00GB    4.15GB   20.73%
        ***_dev/stores                    20GB    0.01GB    0.00GB    0.01GB    0.05%
        ###_dev/FMW_###_1                400GB   17.63GB    0.12GB   17.75GB    4.44%
        ###_dev/applications               N/A    0.00GB    0.00GB    0.00GB     N/A
        ###_dev/domains                  120GB   14.21GB    5.53GB   19.74GB   16.45%
        ###_dev/logs                      15GB    0.00GB    0.00GB    0.00GB    0.00%
        ###_dev/softwaredepot            250GB   73.55GB    0.02GB   73.57GB   29.43%
        …snip

    My apologies if the output is a bit mis-aligned here and there; I only bothered making it look good, not perfect :/ I also removed some of the project names (*, #).

    Read the article

  • How do I 'see' an external USB drive connected directly to my Broadband Router?

    - by The Cougar Kid
    This is a very frustrating problem! I have a small home network with several dual-boot Ubuntu/Windows computers. I recently upgraded my broadband connection, and the new router permits the direct attachment of an external USB drive which can back up all of the household's computers. There are no problems when booted under Windows, and there were no problems with older versions of Ubuntu, but since upgrading to 11.10 I can no longer "see" the drive. I used to find it via Network / Windows Network / Home / name of router, but under 11.10 the same method yields an error message: "Unable to mount location: Failed to retrieve share list from server". Can anyone help please?

        Starting Nmap 5.21 ( http://nmap.org ) at 2011-12-21 10:06 GMT
        Stats: 0:02:02 elapsed; 0 hosts completed (1 up), 1 undergoing Service Scan
        Service scan Timing: About 50.00% done; ETC: 10:10 (0:01:56 remaining)
        Nmap scan report for 192.168.1.254
        Host is up (0.0097s latency).
        Not shown: 998 filtered ports
        PORT     STATE SERVICE     VERSION
        554/tcp  open  rtsp?
        7070/tcp open  realserver?
        Service detection performed. Please report any incorrect results at http://nmap.org/submit/ .
        Nmap done: 1 IP address (1 host up) scanned in 152.38 seconds

        sudo tail -n 30 /var/log/syslog
        [sudo] password for alaric:
        Dec 21 10:05:42 UPSTAIRS2U wpa_supplicant[882]: WPA: Group rekeying completed with 00:01:3b:8b:63:1a [GTK=TKIP]
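
    One way to narrow this down is to see whether the share can be listed and mounted explicitly with the CIFS tools, bypassing the file manager's network browsing. A sketch, assuming the router exports a share called usbdisk (a made-up name; smbclient will print the real ones):

        sudo apt-get install smbclient cifs-utils
        smbclient -L //192.168.1.254 -N              # list the shares the router actually offers
        sudo mkdir -p /mnt/routerdisk
        sudo mount -t cifs //192.168.1.254/usbdisk /mnt/routerdisk -o guest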

    Read the article

  • Trying to find a USB device on iPhone with IOKit.framework

    - by HuGeek
    Hi all, i'm working on a project were i need the usb port to communicate with a external device. I have been looking for exemple on the net (Apple and /developer/IOKit/usb exemple) and trying some other but i can't even find the device. In my code i blocking at the place where the fucntion looks for a next iterator (pointer in fact) with the function getNextIterator but never returns a good value so the code is blocking. By the way i am using toolchain and added IOKit.framework in my project. All i what right now is the communicate or do like a ping to someone on the USB bus!! I blocking in the 'FindDevice'....i can't manage to enter in the while because the variable usbDevice is always = to 0....i have tested my code in a small mac program and it works... Thanks Here is my code : IOReturn ConfigureDevice(IOUSBDeviceInterface **dev) { UInt8 numConfig; IOReturn result; IOUSBConfigurationDescriptorPtr configDesc; //Get the number of configurations result = (*dev)->GetNumberOfConfigurations(dev, &numConfig); if (!numConfig) { return -1; } // Get the configuration descriptor result = (*dev)->GetConfigurationDescriptorPtr(dev, 0, &configDesc); if (result) { NSLog(@"Couldn't get configuration descriptior for index %d (err=%08x)\n", 0, result); return -1; } ifdef OSX_DEBUG NSLog(@"Number of Configurations: %d\n", numConfig); endif // Configure the device result = (*dev)->SetConfiguration(dev, configDesc->bConfigurationValue); if (result) { NSLog(@"Unable to set configuration to value %d (err=%08x)\n", 0, result); return -1; } return kIOReturnSuccess; } IOReturn FindInterfaces(IOUSBDeviceInterface *dev, IOUSBInterfaceInterface **itf) { IOReturn kr; IOUSBFindInterfaceRequest request; io_iterator_t iterator; io_service_t usbInterface; IOUSBInterfaceInterface **intf = NULL; IOCFPlugInInterface **plugInInterface = NULL; HRESULT res; SInt32 score; UInt8 intfClass; UInt8 intfSubClass; UInt8 intfNumEndpoints; int pipeRef; CFRunLoopSourceRef runLoopSource; NSLog(@"Debut FindInterfaces \n"); request.bInterfaceClass = kIOUSBFindInterfaceDontCare; request.bInterfaceSubClass = kIOUSBFindInterfaceDontCare; request.bInterfaceProtocol = kIOUSBFindInterfaceDontCare; request.bAlternateSetting = kIOUSBFindInterfaceDontCare; kr = (*dev)->CreateInterfaceIterator(dev, &request, &iterator); usbInterface = IOIteratorNext(iterator); IOObjectRelease(iterator); NSLog(@"Interface found.\n"); kr = IOCreatePlugInInterfaceForService(usbInterface, kIOUSBInterfaceUserClientTypeID, kIOCFPlugInInterfaceID, &plugInInterface, &score); kr = IOObjectRelease(usbInterface); // done with the usbInterface object now that I have the plugin if ((kIOReturnSuccess != kr) || !plugInInterface) { NSLog(@"unable to create a plugin (%08x)\n", kr); return -1; } // I have the interface plugin. I need the interface interface res = (*plugInInterface)->QueryInterface(plugInInterface, CFUUIDGetUUIDBytes(kIOUSBInterfaceInterfaceID), (LPVOID*) &intf); (*plugInInterface)->Release(plugInInterface); // done with this if (res || !intf) { NSLog(@"couldn't create an IOUSBInterfaceInterface (%08x)\n", (int) res); return -1; } // Now open the interface. This will cause the pipes to be instantiated that are // associated with the endpoints defined in the interface descriptor. 
kr = (*intf)->USBInterfaceOpen(intf); if (kIOReturnSuccess != kr) { NSLog(@"unable to open interface (%08x)\n", kr); (void) (*intf)->Release(intf); return -1; } kr = (*intf)->CreateInterfaceAsyncEventSource(intf, &runLoopSource); if (kIOReturnSuccess != kr) { NSLog(@"unable to create async event source (%08x)\n", kr); (void) (*intf)->USBInterfaceClose(intf); (void) (*intf)->Release(intf); return -1; } CFRunLoopAddSource(CFRunLoopGetCurrent(), runLoopSource, kCFRunLoopDefaultMode); if (!intf) { NSLog(@"Interface is NULL!\n"); } else { *itf = intf; } NSLog(@"End of FindInterface \n \n"); return kr; } unsigned int FindDevice(void *refCon, io_iterator_t iterator) { kern_return_t kr; io_service_t usbDevice; IOCFPlugInInterface **plugInInterface = NULL; HRESULT result; SInt32 score; UInt16 vendor; UInt16 product; UInt16 release; unsigned int count = 0; NSLog(@"Searching Device....\n"); while (usbDevice = IOIteratorNext(iterator)) { // create intermediate plug-in NSLog(@"Found a device!\n"); kr = IOCreatePlugInInterfaceForService(usbDevice, kIOUSBDeviceUserClientTypeID, kIOCFPlugInInterfaceID, &plugInInterface, &score); kr = IOObjectRelease(usbDevice); if ((kIOReturnSuccess != kr) || !plugInInterface) { NSLog(@"Unable to create a plug-in (%08x)\n", kr); continue; } // Now create the device interface result = (*plugInInterface)->QueryInterface(plugInInterface, CFUUIDGetUUIDBytes(kIOUSBDeviceInterfaceID), (LPVOID)&dev); // Don't need intermediate Plug-In Interface (*plugInInterface)->Release(plugInInterface); if (result || !dev) { NSLog(@"Couldn't create a device interface (%08x)\n", (int)result); continue; } // check these values for confirmation kr = (*dev)->GetDeviceVendor(dev, &vendor); kr = (*dev)->GetDeviceProduct(dev, &product); //kr = (*dev)->GetDeviceReleaseNumber(dev, &release); //if ((vendor != LegoUSBVendorID) || (product != LegoUSBProductID) || (release != LegoUSBRelease)) { if ((vendor != LegoUSBVendorID) || (product != LegoUSBProductID)) { NSLog(@"Found unwanted device (vendor = %d != %d, product = %d != %d, release = %d)\n", vendor, kUSBVendorID, product, LegoUSBProductID, release); (void) (*dev)-Release(dev); continue; } // Open the device to change its state kr = (*dev)->USBDeviceOpen(dev); if (kr == kIOReturnSuccess) { count++; } else { NSLog(@"Unable to open device: %08x\n", kr); (void) (*dev)->Release(dev); continue; } // Configure device kr = ConfigureDevice(dev); if (kr != kIOReturnSuccess) { NSLog(@"Unable to configure device: %08x\n", kr); (void) (*dev)->USBDeviceClose(dev); (void) (*dev)->Release(dev); continue; } break; } return count; } // USB rcx Init IOUSBInterfaceInterface** osx_usb_rcx_init (void) { CFMutableDictionaryRef matchingDict; kern_return_t result; IOUSBInterfaceInterface **intf = NULL; unsigned int device_count = 0; // Create master handler result = IOMasterPort(MACH_PORT_NULL, &gMasterPort); if (result || !gMasterPort) { NSLog(@"ERR: Couldn't create master I/O Kit port(%08x)\n", result); return NULL; } else { NSLog(@"Created Master Port.\n"); NSLog(@"Master port 0x:08X \n \n", gMasterPort); } // Set up the matching dictionary for class IOUSBDevice and its subclasses matchingDict = IOServiceMatching(kIOUSBDeviceClassName); if (!matchingDict) { NSLog(@"Couldn't create a USB matching dictionary \n"); mach_port_deallocate(mach_task_self(), gMasterPort); return NULL; } else { NSLog(@"USB matching dictionary : %08X \n", matchingDict); } CFDictionarySetValue(matchingDict, CFSTR(kUSBVendorID), CFNumberCreate(kCFAllocatorDefault, kCFNumberShortType, 
&LegoUSBVendorID)); CFDictionarySetValue(matchingDict, CFSTR(kUSBProductID), CFNumberCreate(kCFAllocatorDefault, kCFNumberShortType, &LegoUSBProductID)); result = IOServiceGetMatchingServices(gMasterPort, matchingDict, &gRawAddedIter); matchingDict = 0; // this was consumed by the above call // Iterate over matching devices to access already present devices NSLog(@"RawAddedIter : 0x:%08X \n", &gRawAddedIter); device_count = FindDevice(NULL, gRawAddedIter); if (device_count == 1) { result = FindInterfaces(dev, &intf); if (kIOReturnSuccess != result) { NSLog(@"unable to find interfaces on device: %08x\n", result); (*dev)-USBDeviceClose(dev); (*dev)-Release(dev); return NULL; } // osx_usb_rcx_wakeup(intf); return intf; } else if (device_count 1) { NSLog(@"too many matching devices (%d) !\n", device_count); } else { NSLog(@"no matching devices found\n"); } return NULL; } int main(int argc, char *argv[]) { int returnCode; NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init]; NSLog(@"Debut du programme \n \n"); osx_usb_rcx_init(); NSLog(@"Fin du programme \n \n"); return 0; // returnCode = UIApplicationMain(argc, argv, @"Untitled1App", @"Untitled1App"); // [pool release]; // return returnCode; }

    Read the article

  • Windows 8.1: Insufficient storage available to create shadow copy

    - by Bob.at.SBS
    [Note: After I entered the problem statement, I found this question, which is apparently the same problem. Maybe one of us will get a good answer...] I have used the "Windows 7 File Recovery" tool under Windows 8 to create system image backups to an external USB hard drive. I built a new Windows 8.1 machine, and I want to create my first system image backup of that machine to the same USB hard drive. The "Windows 7 File Recovery" tool is gone in Windows 8.1, but wbAdmin is alive and well:

        wbAdmin start backup -backupTarget:\\?\Volume{2a2b...994f} -allCritical -quiet

    fails with this text displayed:

        wbadmin 1.0 - Backup command-line tool
        (C) Copyright 2013 Microsoft Corporation. All rights reserved.
        Retrieving volume information...
        This will back up (EFI System Partition),(C:),Recovery (300.00 MB) to \\?\Volume{2a2b1255-3a86-11e3-be86-b8ca3a83994f}.
        The backup operation to F: is starting.
        Creating a shadow copy of the volumes specified for backup...
        Summary of the backup operation:
        The backup operation stopped before completing.
        The backup operation stopped before completing.
        Detailed error: ERROR - A Volume Shadow Copy Service operation error has occurred: (0x8004231f)
        Insufficient storage available to create either the shadow copy storage file or other shadow copy data.

    The EFI System Partition is 100 MB, the Recovery Partition is 300 MB, the C: partition is 1.72 TB NTFS (218 GB used, 1.51 TB free), and the destination drive is 1.81 TB NTFS (678 GB used, 1.15 TB free). I've fiddled with vssadmin resize shadowstorage, with no change in the error. vssadmin list shadowstorage displays:

        Shadow Copy Storage association
           For volume: (C:)\\?\Volume{37a0...263}\
           Shadow Copy Storage volume: (C:)\\?\Volume{37a0...263}\
           Used Shadow Copy Storage space: 2.39 GB (0%)
           Allocated Shadow Copy Storage space: 2.81 GB (0%)
           Maximum Shadow Copy Storage space: 531 GB (30%)

        Shadow Copy Storage association
           For volume: (F:)\\?\Volume{2a2...94f}\
           Shadow Copy Storage volume: (F:)\\?\Volume{2a2...94f}\
           Used Shadow Copy Storage space: 334 GB (17%)
           Allocated Shadow Copy Storage space: 337 GB (18%)
           Maximum Shadow Copy Storage space: UNBOUNDED (922154758%)

    (Yeah, the "percent calculation" for UNBOUNDED is seriously bogus.) I've run SFC /verifyonly and it seems happy. I've verified that the new "Volume Shadow Copy" service starts when I start the backup operation. Any suggestions?

    Read the article
