Search Results

Search found 11084 results on 444 pages for 'storage media'.

  • How to restore your production database without needing additional storage

    - by David Atkinson
    Production databases can get very large. This in itself is to be expected, but when a copy of the database is needed, the database must be restored, requiring additional and costly storage. For example, if you want to give each developer a full copy of your production server, you’ll need n times the storage cost for your n-developer team. The same is true for any test databases created during the course of your project lifecycle.

    If you’ve read my previous blog posts, you’ll be aware that I’ve been focusing on the database continuous integration theme. In my CI setup I create a “production”-equivalent database directly from its source control representation, and use this to test my upgrade scripts. Although this is a perfectly valid and practical thing to do as part of a CI setup, it’s not the exact equivalent of running the upgrade script on a copy of the actual production database. So why shouldn’t I instead simply restore the most recent production backup as part of my CI process? There are two reasons why this would be impractical.

    1. My CI environment isn’t an exact copy of my production environment. In a perfect world it would be, and that is strongly recommended as good practice if you follow Jez Humble and David Farley’s “Continuous Delivery” teachings, but in practical terms this might not always be possible, especially where storage is concerned. It may just not be possible to restore a huge production database on the environment you’ve been allotted.

    2. It’s not just about the storage requirements; it’s also the time it takes to do the restore. The whole point of continuous integration is that you are alerted as early as possible when the build (yes, the database upgrade script counts!) is broken. If I have to run an hour-long restore each time I commit a change to source control, I’m just not going to get the feedback quickly enough to react.

    So what’s the solution? Red Gate has a technology, SQL Virtual Restore, that is able to restore a database without using up additional storage. Although this sounds too good to be true, the explanation is quite simple (although I’m sure the technical implementation details under the hood are quite complex!). Instead of restoring the backup in the conventional sense, SQL Virtual Restore effectively mounts the backup using its HyperBac technology. It creates a data file and a log file (.vmdf and .vldf) that become the delta between the .bak file and the virtual database. This means that both read and write operations are permitted on the virtual database because, from SQL Server’s point of view, it is no different from a conventional database. Instead of doubling the storage requirements upon a restore, there are no ‘duplicate’ storage requirements, other than the trivially small virtual log and data files (illustrated in the original post). The benefit is magnified the more databases you mount from the same backup file. This technique could be used to provide a large development team with a full development instance of a large production database.

    It is also incredibly easy to set up. Once SQL Virtual Restore is installed, you simply run a conventional RESTORE command to create the virtual database. This is what I have running as part of a nightly “release test” process triggered by my CI tool.
        RESTORE DATABASE WidgetProduction_Virtual
        FROM DISK = N'D:\VirtualDatabase\WidgetProduction.bak'
        WITH
            MOVE N'WidgetProduction' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_WidgetProduction_Virtual.vmdf',
            MOVE N'WidgetProduction_log' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_log_WidgetProduction_Virtual.vldf',
            NORECOVERY, STATS = 1, REPLACE
        GO
        RESTORE DATABASE WidgetProduction_Virtual WITH RECOVERY

    Note that the only change from what you would do normally is the naming of the .vmdf and .vldf files. SQL Virtual Restore intercepts the restore by monitoring those file extensions and applies its magic, ensuring the ‘virtual’ restore happens rather than the conventional storage-heavy restore. My automated release test then applies the upgrade scripts to the virtual production database and runs some validation tests, giving me confidence that were I to run this on production for real, all would go smoothly.

    For illustration, the original post shows my 8 GB production database, its corresponding backup file, and the .vldf and .vmdf files, which represent the only additional storage used by the new database following the virtual restore.

    The beauty of this product is its simplicity. Once it is installed, the interaction with the backup and virtual database is exactly the same as before, as the clever stuff is being done at a lower level. SQL Virtual Restore can be downloaded as a fully functional 14-day trial.

    Technorati Tags: SQL Server

    Read the article

  • Simple Scripting for your Exalogic Storage

    - by Trond Strømme
    As part of my job in Oracle ACS (Advanced Customer Services) I'm handling lots of different systems and customers. Among the recent systems I worked with have been Oracle's Exalogic engineered systems. One of the things I'd never had much exposure to as a system developer/architect/middleware guy/Java dude has been storage; outside of consuming it for my photography needs.. Well, I'm always ready for a new challenge... I'd downloaded the 7000 series storage simulator when it was released in the good old Sun days, found it fun and instructive to play around with, but as I never touched storage in any way (besides consuming it..) I forgot about it. A couple of years ago when I started working with Exalogic engineered systems it again came into light as an invaluable learning and testing tool for the embedded storage in an Exalogic;  Oracle's Sun ZFS Storage 7320 Appliance.  aaaanyway... I've been "booted" into a part-time role as the interim storage/system admin/middleware/Java guy for a client and found I needed to create the occasional report or summary or whatever.. of what's using the storage in the 7320 (as default configured for an Exalogic, 40T of disk in a mirrored configuration, yielding 18T of actual space.) Reading the nice documentation and some articles on the Oracle Technology Network I saw great possibilities with the embedded ECMAScript3/JavaScript engine in the 7000 series.  In my personal opinion anyone who's dealing with Exalogic administration, or exposed to any of the 7000 series of storage appliances and servers that Oracle offers should have a VirtualBox instance of it kicking around. For development and testing it's a fantastic tool. (It can save you from explaining (most) of the embarrassing FAILS you can do if you test something in a production system to your management...) So download, and install.  A small sidestep, if after firing up the 7000 series simulator in VirtualBox you've forgotten what it's IP address is, the following will sort you out if you log in directly via the running VirtualBox VM. So in my case I can ssh to 192.168.56.101 or point a browser to https://192.168.56.101:215 to log into the storage appliance. One simple way of executing a script on the 7320 is to ssh to the device and redirecting a file with the script in it to ssh. ssh [email protected] < myscript.js One question I got from my client and the people who will take over the systems was: "how can we see the quotas and allocations for all projects/shares in one easy go so we don't have to go navigating around in the BUI for all the hundreds of shares the 7320 is hosting just to check if anything is running dry?" Easy! JavaScript time, VirtualBox and emacs! //NOTE! this script is available 'as is' It has ben run on a couple of 7320's, (running 2010.08.17.3.0,1-1.25 & // 2011.04.24.1.0,1-1.8) a 7420 and the VB image, but I personally //offer no guarantee whatsoever that it won't make your server topple, catch fire or in any way go pear shaped.. //run at your own risk or learn from my code and or mistakes.. 
script run('cd /'); run('shares'); //get all projects: proj = list(); function spaceToGig(bytes){ return bytes/1073741824; //convert bytes to GB } function fullInPercent(quota, space_data){ tmp = (space_data/quota)*100; return tmp; } //print header, slightly good looking printf(" %s/%-15s %8s(GB) %7s(GB) %5s(GB) %7s(GB) %3s\n","Project", "Share","Quota","Ref", "Snap", "Total","%full"); printf("-------------------------------------------------------------------------------\n") //for each project, get all shares. check for quota and calculate percentage and human readable figures.. for (i=0;i<proj.length;i++){ run('select ' + proj[i]); //get all shares for a project var pshares = list(); //for each share get quota properties for (j=0;j<pshares.length;j++){ run('select ' + pshares[j]); quota = get('quota'); //properties associated with a share or inherited from a project spaceData = get('space_data'); spaceSnap = get('space_snapshots'); spaceTotal = get('space_total'); if(quota>0){ //has quota printf(" %s/%-15s \t%4.2fGB\t%.2fGB\t%.2fGB\t%.2fGB\t%5.2f%%\n",proj[i], pshares[j],spaceToGig(quota),spaceToGig(spaceData),spaceToGig(spaceSnap),spaceToGig(spaceTotal),fullInPercent(quota,spaceTotal)); }else{ //no quota printf(" %s/%-15s \t%8s\t%.2fGB\t%.2fGB\t%.2fGB\t%s\n",proj[i],pshares[j], "N/A", spaceToGig(spaceData),spaceToGig(spaceSnap),spaceToGig(spaceTotal),"N/A"); } run('cd ..'); } run('done'); } The resulting output should look something like this: Project/Share Quota(GB) Ref(GB) Snap(GB) Total(GB) %full ------------------------------------------------------------------------------- ACSExalogicSystem/domains N/A 0.04GB 0.00GB 0.04GB N/A ACSExalogicSystem/logs N/A 0.01GB 0.00GB 0.01GB N/A ACSExalogicSystem/nodemgrs N/A 0.00GB 0.00GB 0.00GB N/A ACSExalogicSystem/stores N/A 0.04GB 0.00GB 0.04GB N/A ***_dev/FMW_***_1 133GB 4.24GB 0.01GB 4.25GB 3.19% ***_dev/FMW_***_2 N/A 4.25GB 0.01GB 4.26GB N/A ***_dev/applications 10GB 0.00GB 0.00GB 0.00GB 0.00% ***_dev/domains 50GB 10.75GB 3.55GB 14.30GB 28.61% ***_dev/logs 20GB 0.32GB 0.01GB 0.33GB 1.66% ***_dev/softwaredepot 20GB 4.15GB 0.00GB 4.15GB 20.73% ***_dev/stores 20GB 0.01GB 0.00GB 0.01GB 0.05% ###_dev/FMW_###_1 400GB 17.63GB 0.12GB 17.75GB 4.44% ###_dev/applications N/A 0.00GB 0.00GB 0.00GB N/A ###_dev/domains 120GB 14.21GB 5.53GB 19.74GB 16.45% ###_dev/logs 15GB 0.00GB 0.00GB 0.00GB 0.00% ###_dev/softwaredepot 250GB 73.55GB 0.02GB 73.57GB 29.43% …snip My apologies if the output is a bit mis-aligned here and there, I only bothered making it look good, not perfect :/ I also removed some of the project names (*,#)

    Read the article

  • Putting altered social media logo icons on my website, can I get sued?

    - by Håkan Bylund
    I would say most websites with a somewhat thought-through graphical design use social media icons (e.g. Twitter, Facebook, YouTube, etc.) that are altered to fit the theme and design of the site. Now, my boss insists we only use the ones provided by, say, Facebook or Twitter themselves (for fear of getting sued or losing credibility), but sometimes it just doesn't look very good on the site. What is the common practice for these things? What do you risk by using an altered logo? What should I tell my boss? I'll provide a few examples: what would happen if I put any of these on a site?

    Read the article

  • How do I prevent directories mounted with 'bind' from appearing on 'Devices' on nautilus?

    - by Can
    I have these lines in the fstab:

        # binds
        /media/DataNtfs/Music      /home/can/Music      none  rw,bind
        /media/DataNtfs/Pictures   /home/can/Pictures   none  rw,bind
        /media/DataNtfs/Downloads  /home/can/Downloads  none  rw,bind
        /media/DataNtfs/Documents  /home/can/Documents  none  rw,bind
        /media/DataNtfs/Backups    /home/can/Backups    none  rw,bind
        /media/DataNtfs/Notes      /home/can/Notes      none  rw,bind
        /media/DataNtfs/Other      /home/can/Other      none  rw,bind
        /media/DataNtfs/Packages   /home/can/Packages   none  rw,bind
        /media/DataNtfs/Photos     /home/can/Photos     none  rw,bind
        /media/DataNtfs/Videos     /home/can/Videos     none  rw,bind
        /media/DataNtfs/WorkSpace  /home/can/WorkSpace  none  rw,bind

    Read the article

  • Windows 8.1 Insufficient storage available to create shadow copy

    - by Bob.at.SBS
    [Note: After I entered the problem statement, I found this question, which is apparently the same problem. Maybe one of us will get a good answer...]

    I have used the "Windows 7 File Recovery" tool under Windows 8 to create system image backups to an external USB hard drive. I built a new Windows 8.1 machine, and I want to create my first system image backup of that machine to the same USB hard drive. The "Windows 7 File Recovery" tool is gone in Windows 8.1, but wbAdmin is alive and well:

        wbAdmin start backup -backupTarget:\\?\Volume{2a2b...994f} -allCritical -quiet

    fails with this text displayed:

        wbadmin 1.0 - Backup command-line tool
        (C) Copyright 2013 Microsoft Corporation. All rights reserved.

        Retrieving volume information...
        This will back up (EFI System Partition),(C:),Recovery (300.00 MB) to \\?\Volume{2a2b1255-3a86-11e3-be86-b8ca3a83994f}.
        The backup operation to F: is starting.
        Creating a shadow copy of the volumes specified for backup...
        Summary of the backup operation:
        The backup operation stopped before completing.
        The backup operation stopped before completing.
        Detailed error: ERROR - A Volume Shadow Copy Service operation error has occurred: (0x8004231f)
        Insufficient storage available to create either the shadow copy storage file or other shadow copy data.

    The EFI System Partition is 100 MB, the Recovery Partition is 300 MB, and the C partition is 1.72 TB, NTFS, 218 GB used, 1.51 TB free. The destination drive is 1.81 TB, NTFS, 678 GB used, 1.15 TB free.

    I've fiddled with vssadmin resize shadowstorage, with no change in the error. vssadmin list shadowstorage displays:

        Shadow Copy Storage association
           For volume: (C:)\\?\Volume{37a0...263}\
           Shadow Copy Storage volume: (C:)\\?\Volume{37a0...263}\
           Used Shadow Copy Storage space: 2.39 GB (0%)
           Allocated Shadow Copy Storage space: 2.81 GB (0%)
           Maximum Shadow Copy Storage space: 531 GB (30%)

        Shadow Copy Storage association
           For volume: (F:)\\?\Volume{2a2...94f}\
           Shadow Copy Storage volume: (F:)\\?\Volume{2a2...94f}\
           Used Shadow Copy Storage space: 334 GB (17%)
           Allocated Shadow Copy Storage space: 337 GB (18%)
           Maximum Shadow Copy Storage space: UNBOUNDED (922154758%)

    (Yeah, the "percent calculation" for UNBOUNDED is seriously bogus.)

    I've run SFC /verifyonly and it seems happy. I've verified that the new "Volume Shadow Copy" service starts when I start the backup operation. Any suggestions?
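    For reference, the resize command mentioned above takes the following general form; the drive letter and size here are only an illustration of the syntax, not values known to clear this particular 0x8004231f error:

        vssadmin Resize ShadowStorage /For=F: /On=F: /MaxSize=UNBOUNDED

    MaxSize also accepts an absolute size such as 400GB, and /For and /On can name different volumes if the shadow copy storage area lives somewhere other than the volume being shadowed.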

    Read the article

  • Using Javascript to call the Azure Blob Storage REST API

    - by user350829
    I'm developing a Flash app that saves files to the Azure Blob Storage API. I've learned that you should use the REST API directly rather than a go-between WCF service, as this is the most efficient approach (using a web role as an intermediary is a bottleneck). The problem is that Flash can't issue PUT or DELETE methods over HTTP and has to use external JavaScript. This is not an area that I'm familiar with, and I need some advice or links to examples of using JavaScript to work with the Storage API (I've obviously Googled this to no avail). Is this even possible? The JavaScript would be hosted in a web role on the same domain. Many thanks, Ed
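    For what it's worth, the HTTP side of a blob PUT from JavaScript is mechanically simple; the hard parts are authorization and cross-domain policy, since the blob endpoint is a different host from the web role. A minimal sketch, assuming the URL already carries a Shared Access Signature (the sasUrl value and putBlob name are illustrative, not part of any library) and assuming the endpoint will accept a cross-domain request from the page at all:

        // Hedged sketch: PUT a block blob via XMLHttpRequest.
        // sasUrl is assumed to be a blob URL pre-signed with a Shared Access
        // Signature; computing a SharedKey signature in the browser would
        // expose the account key and is not shown here.
        function putBlob(sasUrl, data, onDone) {
            var xhr = new XMLHttpRequest();
            xhr.open("PUT", sasUrl, true);
            // The Put Blob operation needs to know the blob type.
            xhr.setRequestHeader("x-ms-blob-type", "BlockBlob");
            xhr.onreadystatechange = function () {
                if (xhr.readyState === 4) {
                    onDone(xhr.status === 201); // 201 Created on success
                }
            };
            xhr.send(data);
        }

    DELETE works the same way with xhr.open("DELETE", ...) and no request body, again subject to the same authorization and cross-domain caveats.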

    Read the article

  • How to control the volume of a media player using jquery / javascript

    - by Geetha
    Hi all, I am using the following code for a media player in an ASP.NET page.

    Code:

        <object id="mediaPlayer"
                classid="clsid:22D6F312-B0F6-11D0-94AB-0080C74C7E95"
                codebase="http://activex.microsoft.com/activex/controls/mplayer/en/nsmp2inf.cab#Version=5,1,52,701"
                height="1" width="1"
                standby="Loading Microsoft Windows Media Player components..."
                type="application/x-oleobject">
            <param name="fileName" value="" />
            <param name="animationatStart" value="true" />
            <param name="transparentatStart" value="true" />
            <param name="autoStart" value="true" />
            <param name="showControls" value="true" />
            <param name="volume" value="70" />
        </object>

    Problems:
    1. How do I set the default value for the volume? (I have set it to 70 but it shows 40.)
    2. If the volume is zero, the system volume also becomes zero.
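    Not from the original thread, just a sketch of how script usually drives the embedded control: the Windows Media Player 7+ object model exposes settings.volume on a 0-100 scale, whereas the legacy 6.4 control referenced by the classid above handles volume differently, so whether this works depends on which control the browser actually loads.

        // Hedged sketch: assumes the <object> with id "mediaPlayer" loads a WMP
        // control that exposes the settings object (WMP 7+ object model); the
        // legacy 6.4 control in the markup above may not.
        function setPlayerVolume(level) {
            var player = document.getElementById("mediaPlayer");
            if (player && player.settings) {
                player.settings.volume = level; // 0-100
            }
        }

        // With jQuery on the page (the question's title implies it is):
        $(function () {
            setPlayerVolume(70);
        });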

    Read the article

  • Azure storage: Uploaded files with size zero bytes

    - by Fabio Milheiro
    When I upload an image file to a blob, the image is uploaded apparently successfully (no errors). When I go to Cloud Storage Studio, the file is there, but with a size of 0 (zero) bytes. The following is the code that I am using:

        // These two methods belong to the ContentService class used to upload
        // files in the storage.
        public void SetContent(HttpPostedFileBase file, string filename, bool overwrite)
        {
            CloudBlobContainer blobContainer = GetContainer();
            var blob = blobContainer.GetBlobReference(filename);
            if (file != null)
            {
                blob.Properties.ContentType = file.ContentType;
                blob.UploadFromStream(file.InputStream);
            }
            else
            {
                blob.Properties.ContentType = "application/octet-stream";
                blob.UploadByteArray(new byte[1]);
            }
        }

        public string UploadFile(HttpPostedFileBase file, string uploadPath)
        {
            if (file.ContentLength == 0)
            {
                return null;
            }

            string filename;
            int indexBar = file.FileName.LastIndexOf('\\');
            if (indexBar > -1)
            {
                filename = DateTime.UtcNow.Ticks + file.FileName.Substring(indexBar + 1);
            }
            else
            {
                filename = DateTime.UtcNow.Ticks + file.FileName;
            }
            ContentService.Instance.SetContent(file, Helper.CombinePath(uploadPath, filename), true);
            return filename;
        }

        // The above code is called by this code.
        HttpPostedFileBase newFile = Request.Files["newFile"] as HttpPostedFileBase;
        ContentService service = new ContentService();
        blog.Image = service.UploadFile(newFile, string.Format("{0}{1}", Constants.Paths.BlogImages, blog.RowKey));

    Before the image file is uploaded to the storage, the InputStream property of the HttpPostedFileBase appears to be fine (the size of the image corresponds to what is expected, and no exceptions are thrown). And the really strange thing is that this works perfectly in other cases (uploading PowerPoints or even other images from the worker role). The code that calls the SetContent method seems to be exactly the same, and the file seems to be correct, since a new file with zero bytes is created at the correct location. Does anyone have any suggestions? I debugged this code dozens of times and I cannot see the problem. Any suggestions are welcome! Thanks

    Read the article

  • Silverlight isolated storage: what identifies an "application"?

    - by Joe White
    The docs for Silverlight's IsolatedStorageFile.GetUserStoreForApplication just say that the isolated storage is specific to the "application", and that each different application will have its own storage independent of all other "applications" (but with one quota for the entire domain). That's great, but I haven't found anything yet that explains just what "application" is supposed to mean (either in the Silverlight docs or the regular .NET Framework docs). What information does Silverlight, in particular, use to decide that "this is application A" and "this is application B"? Does it just go off the URI to the .xap file, or what?

    Read the article

  • Best Design pattern for social media file transfer

    - by Onema
    Our system would like our clients to link their accounts with different social media sites like YouTube, Vimeo, Facebook, MySpace and so on. One of the benefits we would like to give to the user is the ability to transfer, update and delete files they have uploaded to our sites and transfer them to the social media sites mentioned above. These files could be videos, images or audio. We started thinking about using a strategy pattern, as all of these sites share a common process (authentication, connection, using the API to transfer/edit/delete the file), but we soon realized that it may not work, as we may want to use some of the extended functionality that is specific to each service (e.g. associate a YouTube video with a channel, or upload images to a specific album on Facebook, and much, much more...). My question is: what would be the best structural design pattern to use for this scenario?
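    One way to sketch the tension described above (not an authoritative answer): keep the strategy interface for the shared flow and let each concrete strategy expose its service-specific extras behind a capability check, so callers only reach for the YouTube channel or Facebook album features when the strategy actually has them. Every name in this JavaScript sketch is illustrative; none of it is a real service API.

        // Hedged sketch: strategy with optional capabilities. All names are hypothetical.
        function YouTubeStrategy(credentials) {
            this.credentials = credentials;
        }
        // Shared flow every strategy implements.
        YouTubeStrategy.prototype.authenticate = function () { /* OAuth handshake would go here */ };
        YouTubeStrategy.prototype.transfer = function (file) { /* upload, then */ return "remote-id-stub"; };
        YouTubeStrategy.prototype.remove = function (remoteId) { /* delete on the remote service */ };
        // Service-specific extra, kept out of the shared interface.
        YouTubeStrategy.prototype.assignToChannel = function (remoteId, channelId) { /* ... */ };

        // Caller uses the shared flow and probes for extras instead of widening the interface.
        function publish(strategy, file, options) {
            strategy.authenticate();
            var remoteId = strategy.transfer(file);
            if (options.channelId && typeof strategy.assignToChannel === "function") {
                strategy.assignToChannel(remoteId, options.channelId);
            }
            return remoteId;
        }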

    Read the article

  • Python: wxpython wx.media.MediaCtrl - millisecond seek capability

    - by PPTim
    I've been searching for a media player that can display sub-second resolution in videos. Some pointed me to the frame-stepping functionality in MPC, but I'd like even more than that. I know from previous experience with wxPython that wx.media.MediaCtrl both displays and stops the video with millisecond precision (as fast as I can click with the mouse, anyway). The code is here, and it runs with no problem with Python and the wxPython module. Has anyone come across other video players that handle this functionality, or seen a more robust/developed video player written with wxPython that allows for this level of precision? This is possibly a one-off task, so I'd like to use existing solutions if possible. Thanks.

    Read the article

  • Java portable media detection

    - by quosoo
    I would like to write a piece of Java code that synchronizes files between the local hard drive and a USB storage device. I would like to have a different synchronization configuration depending on which USB storage device is plugged in, and I would like the appropriate configuration to be selected automatically rather than chosen by the user. I've just read the JSR-80 and jUSB documentation as well as a bunch of articles and SO posts, but all of those are very old, and it seems that since that time (around 2005) all the efforts have been abandoned, especially for the Windows platform, while OS independence is quite important to me (at least Windows and Linux need to be supported). Do I really need to use any of the USB APIs to recognize external drives that are connected to the system? I need something that is more unique than a file path, drive letter or drive label... And if yes, which one would you recommend (unless I missed something, jUnit is actually the only one for which Windows support exists)?

    Read the article

  • Access block level storage via kernel

    - by N 1.1
    How can I access block-level storage from the kernel (without using SCSI libraries)? My intent is to implement a block-level storage protocol over the network for learning purposes, working much the same way SCSI does. Requests will be generated by an initiator and sent to a target (both userspace programs), which makes a call to a kernel module and returns the data to the initiator over TCP. So far I have managed to build a simple "Hello" module and run it (I am new to kernel programming), but I am unable to proceed with block access. After searching a lot, I found struct buffer_head * bread(int dev, int block) in linux/fs.h, but the compiler throws an error:

        error: implicit declaration of function ‘bread’

    Please help, and also feel free to advise on getting started with kernel programming. Thank you!

    Read the article

  • Storage for large gridded datasets

    - by nullglob
    I am looking for a good storage format for large, gridded datasets. The application is meteorology, and we would prefer a format that is common within this field (to help exchange data with others). I don't need to deal with special data structures, and there should be a Fortran API. I am currently considering HDF5, GRIB2 and NetCDF4. How do these formats compare in terms of data compression? What are their main limitations? How steep is the learning curve? Are there any other storage formats worth investigating? I have not found a great deal of material outlining the differences and pros/cons of these formats (there is one relevant SO thread, and a presentation comparing GRIB and NetCDF).

    Read the article

  • Controlling Windows Media Player through PHP / batch files

    - by Duroth
    I'm currently writing a tiny webapp for my HTPC (actually, a PC serving as both a media player and web- / fileserver) that will allow me to remotely control the playing of audio, without having to turn on my TV just to switch songs. I'm using Windows Media Player as my audio player of choice, and I thought I could control it through PHP's COM Class. Unfortunately, I've not been able to find any documentation or examples on controlling WMP through this interface. Can anyone point me in the right direction here? A second (and much less preferable) solution would be to use PHP's exec() call to start batch files that, in turn, control WMP.

    Read the article

  • Different ways to use browser and system media resources

    - by utype
    Examples:

        <img src="system://media/icons/logo.png" alt="OS">

        <style>
          img.browserIcon {
            background-image: url(browser://media/icons/logo.png);
            width: 16px;
            height: 16px;
          }
        </style>

    On Firefox you can access some resources like this:

        <style>
          .button {
            background: transparent url(chrome://global/skin/button/startcap.png) no-repeat scroll left top;
          }
        </style>

    Is it possible to access rasterized font data or system sounds? Where can I get a transparent 1px GIF?

    Read the article

  • mobile device - different background - media query not working

    - by AMC
    Live site. This is my first attempt at utilizing a different CSS for mobile devices vs. regular screens. To do this, I'm using:

        @media only screen and (min-device-width : 320px) and (max-device-width : 480px) {
            background: url('img/background.jpg') no-repeat center center fixed;
            background-size: cover;
            height: 100%;
        }

    However, it doesn't seem to be working (I can only test on iPhones). Any ideas as to why that may be? I've also tried @media all to no avail.

    Read the article

  • Are there any home/soho NAS devices that will backup/sync to the cloud?

    - by 3rdparty
    Looking for a home office (SOHO) priced network hard drive (NAS) that will sync some or all of its content to a cloud-based backup service. The only option I've been able to find so far is NetGear's ReadyNAS Vault, however from what I've read it's not as secure as it could be, and the service is quite expensive ($200/yr for 50GB of cloud storage); it's 'powered' by ElephantDrive. Ideally I would love to see something like Wuala integrated into a LaCie network HDD; conveniently, I suspect this is in the works, as LaCie recently acquired Wuala, however nothing has come of it yet. I know there are options to use rsync with a customizable NAS (such as the very versatile and hackable D-Link DNS-323), but the easier this is to set up and maintain, the better. Thanks!

    PS. I had many links posted within this question, but was limited to posting only one due to anti-spam restrictions - gotta get my 'reputation' higher!

    Read the article

  • MySQL unstable behavior with external storage

    - by onurozcelik
    In our project we are using a MySQL 5.0.90 (InnoDB engine) server with an external storage unit, and we store the MySQL data files on that external storage. When the external storage goes down for some reason, we see unstable behaviour, so we ran some tests.

    On Windows Server 2008: We powered off the external storage unit physically. The MySQL service stopped and we could not reach the server. Then we powered the storage unit back on, and we could not start the service until we reconfigured MySQL. We also took the storage unit offline from the operating system. After 3-4 minutes and some insert attempts (some of which succeeded), the MySQL service stopped and we could not reach the server. Then we brought the storage unit back online and we could start the service (though not automatically).

    On Windows Server 2003: We took the storage unit offline. After 3-4 minutes and some insert attempts (some of which succeeded), the MySQL service stopped and we could not reach the server. Then we brought the storage unit back online, but we could not start the service until we reinstalled MySQL. Before reinstalling, we tried to reconfigure, but it did not work. We also powered off the external storage unit physically. The MySQL service stopped and we could not reach the server. After that we powered the storage unit back on and we could start the service (though not automatically).

    We expect the service to start automatically after the storage unit is back online, but these tests show unstable behaviour. Is there any solution to this?

    Read the article

  • TV video constantly skips 1/2 second, plays 1 second; green on bottom

    - by Robert
    I just got DirecTV. It worked for a day, but now the TV video constantly skips 1/2 second, then plays 1 second. Also, the bottom 5th of the screen is solid green. The audio does not skip. I tried to do "Set up TV signal" (in Media Center) - but I get an error. See the post I just made here titled "Error - “IR Hardware not detected” - but it’s installed/working." Thanks for your help.

    Read the article

  • Windows 7 MCE client server HTPC

    - by Dan Hook
    My HTPC is downstairs connected to my large screen. I would like to use my desktop upstairs to record over the air HDTV and stream it to the HTPC. I have Windows 7 Professional installed on the desktop. I currently have XP on the HTPC, but I'm going to upgrade it to Windows 7. Is there a particular flavor of Win 7 I should use? Is it possible to record on the desktop and use Windows MCE on the HTPC to watch the recorded content? What about live TV? The consensus I've seen is that Windows 7 cannot be configured as a Windows Media Center Extender. Is that the case? If so, what's the cheapest solution for an extender?

    Read the article

  • Flexible cloud file storage for a web.py app?

    - by benwad
    I'm creating a web app using web.py (although I may later rewrite it for Tornado) which involves a lot of file manipulation. One example, the app will have a git-style 'commit' operation, in which some files are sent to the server and placed in a new folder along with the unchanged files from the last version. This will involve copying the old folder to the new folder, replacing/adding/deleting the files in the commit to the new folder, then deleting all unchanged files in the old folder (as they are now in the new folder). I've decided on Heroku for the app hosting environment, and I am currently looking at cloud storage options that are built with these kinds of operations in mind. I was thinking of Amazon S3, however I'm not sure if that lets you carry out these kinds of file operations in-place. I was thinking I may have to load these files into the server's RAM and then re-insert them into the bucket, costing me a fortune. I was also thinking of Progstr Filer (http://filer.progstr.com/index.html) but that seems to only integrate with Rails apps. Can anyone help with this? Basically I want file operations to be as cheap as possible.

    Read the article
