Search Results

Search found 11084 results on 444 pages for 'storage media'.

  • XNA Easy Storage XBOX 360 High Scores

    - by user1003211
    To follow up from a previous query, I need some help with my EasyStorage high-scores implementation, which is producing errors on the Xbox. I get the prompt screen, a SaveDevice is selected, and the file is created. However, the file remains empty (I've tried pre-populating it, but I still get errors). The full portions of the scoring code can be found here: http://pastebin.com/74v897Yt The current issue in particular is in LoadHighScores() - "There is an error in XML document (0, 0)." is thrown on the line data = (HighScoreData)serializer.Deserialize(stream); I'm not sure whether this line is correct either: HighScoreData data = new HighScoreData();

        public static HighScoreData LoadHighScores(string container, string filename)
        {
            HighScoreData data = new HighScoreData();
            if (Global.SaveDevice.FileExists(container, filename))
            {
                Global.SaveDevice.Load(container, filename, stream =>
                {
                    File.Open(Global.fileName_options, FileMode.OpenOrCreate, FileAccess.Read);
                    try
                    {
                        // Read the data from the file
                        XmlSerializer serializer = new XmlSerializer(typeof(HighScoreData));
                        data = (HighScoreData)serializer.Deserialize(stream);
                    }
                    finally
                    {
                        // Close the file
                        stream.Close();
                        // stream.Dispose();
                    }
                });
            }
            return (data);
        }

    I call PromptMe(); when the Start button is pressed at the beginning. I call if (Global.SaveDevice.IsReady) { entries = LoadHighScores(HighScoresContainer, HighScoresFilename); } during the menu screen to try to display the high-score screen. I call SaveHighScore(); when the game ends. I've tried altering the struct code to a class, but still no luck. Any help greatly appreciated.
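
    The "(0, 0)" position in that XmlSerializer error typically means Deserialize was handed an empty stream, which matches the empty save file described above. One suspicious line is the stray File.Open(Global.fileName_options, ...) inside the Load callback: it opens an unrelated file, discards the handle, and contributes nothing to the deserialization. A minimal cleaned-up sketch of the load path, assuming (as the question's own code suggests) that EasyStorage's SaveDevice.Load passes the opened save-file stream to the callback and disposes it afterwards, and that Global.SaveDevice and HighScoreData are as defined in the question:

        using System.Xml.Serialization;

        public static HighScoreData LoadHighScores(string container, string filename)
        {
            HighScoreData data = new HighScoreData();
            if (Global.SaveDevice.FileExists(container, filename))
            {
                Global.SaveDevice.Load(container, filename, stream =>
                {
                    // Guard against the empty-file case that produces the (0, 0) error.
                    if (stream.Length == 0)
                        return;

                    // Deserialize directly from the stream EasyStorage provides;
                    // no extra File.Open call is needed here.
                    XmlSerializer serializer = new XmlSerializer(typeof(HighScoreData));
                    data = (HighScoreData)serializer.Deserialize(stream);
                });
            }
            return data;
        }

    Even with this guard, an empty file means the save side never wrote anything, so it is worth verifying that SaveHighScore() actually serializes the data into the stream it is given before debugging the load path further.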

  • SIMD Extensions for the Database Storage Engine

    - by jchang
    For the last 15 years, Intel and AMD have been progressively adding special-purpose extensions to their processor architectures. The extensions mostly pertain to vector operations based on the Single Instruction, Multiple Data (SIMD) concept. The motivation was that achieving significant performance improvement over each successive generation for the general-purpose elements had become extraordinarily difficult. On the other hand, SIMD performance could be significantly improved with special-purpose registers...(read more)

  • Off-site Cardholder Data Storage

    - by LinuxGnut
    Is there a service or site out there that will store cardholder data for me? I don't need any kind of transaction processing or recurring billing... I just need somewhere I can store data until someone in my company is able to look at it. The specific need is allowing customers to input data that will be used for credit checks: name, address, credit card(s), and the like. Google Checkout, PayPal, NetSuite, and Authorize.net seem to be what everyone suggests to me, but they don't offer what I need -- they're just payment gateways.

  • iOS persistent storage with update function

    - by jernej
    I'm developing a game which has different levels, and I need to store all levels and their elements (position, image, sounds, ...) in a file/database. The levels will be updated, so I need a function that checks online for an update and downloads a database dump and additional files. I was planning to store all the persistent data in a SQLite database, but I'm not quite sure how to do the update part - how to pack the database dump and the files together (in a .zip or with an XML manifest). Can this be done any other way (as securely as possible)? Thanks!

  • update manager - insufficient storage space (false alarm)

    - by itsols
    I'm trying to run Update Manager, but it keeps reporting that there's not enough space. Here's the screenshot: I ran sudo apt-get update && sudo apt-get upgrade from the terminal, but Update Manager still says that there are updates, and I cannot seem to get past this message. I have even removed many programs from my system, and there is supposed to be at least 6GB of disk space free. What can I do?

  • USB Storage Device Automount

    - by matto1990
    Under Ubuntu 10.04, one of the problems that appeared is that USB devices no longer automatically mount when plugged in. Normally I would get a pop-up message asking what application I wanted to open the newly plugged-in device with; now that doesn't happen. This happens regardless of the way the device is formatted (NTFS or FAT32), and all other USB devices (printer, keyboard and mouse) work perfectly. My current solution is to mount them manually using sudo mount /dev/... /media/..., but to be honest I'm just getting tired of having to do this. I'm happy to post any extra information you are likely to need. I know there will be lots of places I could look to find out what's going wrong, but I have no idea where to start really.

  • Windows Phone 7 development: Using isolated storage

    - by DigiMortal
    In my previous posting about Windows Phone 7 development I showed how to use the WebBrowser control in Windows Phone 7. In this posting I make some other improvements to my blog reader application, and I will show you how to use isolated storage to store information on the phone.

    Why isolated storage? Isolated storage is the place where your application can save its data and settings. The image on the right (that I stole from the MSDN library) shows you how the application data store is organized. You have no option to keep your files anywhere besides isolated storage, because Windows Phone 7 does not allow you to save data directly to other file system locations. From MSDN: "Isolated storage enables managed applications to create and maintain local storage. The mobile architecture is similar to the Silverlight-based applications on Windows. All I/O operations are restricted to isolated storage and do not have direct access to the underlying operating system file system. Ultimately, this helps to provide security and prevents unauthorized access and data corruption."

    Saving files from the web to isolated storage: I updated my RSS reader so it reads the RSS from the web only if there is no local file with the RSS. The user can update the RSS file by clicking a button, and the file is also created when the application starts and no RSS file exists yet. Why am I doing this? I want my application to be able to work offline too. As my code needs some more refactoring, I will provide it in some upcoming postings about Windows Phone 7; if you want it sooner, please leave me a comment here. Here is the code for my RSS downloader, which downloads the RSS feed and saves it to an isolated storage file called rss.xml.

        public class RssDownloader
        {
            private string _url;
            private string _fileName;

            public delegate void DownloadCompleteDelegate();
            public event DownloadCompleteDelegate DownloadComplete;

            public RssDownloader(string url, string fileName)
            {
                _url = url;
                _fileName = fileName;
            }

            public void Download()
            {
                var request = (HttpWebRequest)WebRequest.Create(_url);
                var result = (IAsyncResult)request.BeginGetResponse(ResponseCallback, request);
            }

            private void ResponseCallback(IAsyncResult result)
            {
                var request = (HttpWebRequest)result.AsyncState;
                var response = request.EndGetResponse(result);

                using (var stream = response.GetResponseStream())
                using (var reader = new StreamReader(stream))
                using (var appStorage = IsolatedStorageFile.GetUserStoreForApplication())
                using (var file = appStorage.OpenFile("rss.xml", FileMode.OpenOrCreate))
                using (var writer = new StreamWriter(file))
                {
                    writer.Write(reader.ReadToEnd());
                }

                if (DownloadComplete != null)
                    DownloadComplete();
            }
        }

    Of course, I modified the RSS source for my application to use the rss.xml file from isolated storage. As isolated storage files are also based on streams, we can use them everywhere streams are expected.

    Reading isolated storage files: as isolated storage files are opened as streams, you can read them like usual files in your usual applications. The next code fragment shows you how to open a file from isolated storage and read it using XmlReader. Previously I used the response stream in the same place.

        using (var appStorage = IsolatedStorageFile.GetUserStoreForApplication())
        using (var file = appStorage.OpenFile("rss.xml", FileMode.Open))
        {
            var reader = XmlReader.Create(file);
            // more code
        }

    As you can see, there is nothing complex here. If you have worked with the System.IO namespace objects, you will find the isolated storage classes and methods very similar. Note also that the application storage and isolated storage files must be disposed when you are no longer using them.
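
    For small key/value items (as opposed to whole files like rss.xml above), Windows Phone 7 also offers IsolatedStorageSettings. A minimal sketch, assuming a Silverlight-based WP7 project; the "LastUpdated" key is just an illustrative name:

        using System;
        using System.IO.IsolatedStorage;

        // Persist a simple value between application runs.
        private void SaveTimestamp()
        {
            var settings = IsolatedStorageSettings.ApplicationSettings;
            settings["LastUpdated"] = DateTime.UtcNow; // adds or overwrites the key
            settings.Save();                           // flush to isolated storage
        }

        // Read it back, guarding against the first run when nothing is stored yet.
        private DateTime? LoadTimestamp()
        {
            DateTime value;
            if (IsolatedStorageSettings.ApplicationSettings.TryGetValue("LastUpdated", out value))
                return value;
            return null;
        }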

  • Looking for advice on Hyper-v storage replication

    - by Notre1
    I am designing a 2-host Hyper-V R2 cluster with 6-10 guests stored on an SMB iSCSI SAN device (probably Promise VessRAID). I will be getting at least two of the SAN devices and need to eliminate the storage as a single point of failure. Ideally, that would involve real-time failover for the storage, like Windows failover clustering does for the hosts. This design will be used at around six of our sites, and I would like to allow for us to eventually set up a cluster at a colocation site and replicate each site's VMs there for DR. (Ideally a live multi-site cluster, but a manual import of the VMs would be fine for this sort of DR.) The tools that come with enterprise SANs, like EMC and NetApp, seem to be the most commonly used options for a Hyper-V cluster, but I can't afford their prices with my budget. Outside of them, the two tools that seem to be most common for Hyper-V storage replication are SteelEye (now SIOS) DataKeeper Cluster Edition and Double-Take Availability. Originally, I was planning on using Clustered Shared Volumes (CSVs), but it seems like replication support for these is either not available or brand new in both these products. It looks like CSVs are supported in Double-Take 5.22 (see this discussion), but I don't think I want to run something that new in production. Right now, it seems like the best option for me is not to implement CSVs, implement some sort of storage replication, and upgrade to CSVs at a later date once replicating them is more mature. I would love to have live migration, and CSVs are not required for live migration if you are using one LUN per VM, so I guess this is what I'll do. I would prefer to stick to using the Microsoft Windows Server and Hyper-V tools and features as much as possible. From that standpoint, SteelEye looks more appealing than Double-Take because they make the DataKeeper volume(s) available to the Failover Cluster Manager, and then failover clustering is all configured and managed through the native Microsoft tools. Double-Take says that "clustered Hyper-V hosts are not supported," and Double-Take Availability itself seems to be what is used for the actual clustering and failover. Does anyone know if any of these replication tools work with more than two hosts in the cluster? All the information I can find on the web only uses two hosts in its examples. Are there any better tools than SteelEye and Double-Take for doing what I am trying to do, which is eliminate the storage as a single point of failure? Neverfail, AppAssure, and DataCore all seem to offer similar functionality, but they don't seem to be as popular as SteelEye and Double-Take. I have seen a number of people suggest using StarWind iSCSI SAN software for the shared storage, which includes replication (and CSV replication at that). There are a couple of reasons I have not seriously considered this route: 1) The company I work for is exclusively a Dell shop, and Dell does not have any servers that I can pack with more than six 3.5" SATA drives. 2) In the future, it could be advantageous for us not to be locked into a particular brand or type of storage, and third-party replication software all allows replication to heterogeneous storage devices. I am pretty new to iSCSI and clustering, so please let me know if it looks like I am planning something that goes against best practices or am overlooking/missing something.

  • Can I use Ubuntu as a wireless media server which performs all decoding/processing server-side?

    - by AthloX
    I want to set up an Ubuntu 12.04 desktop as a home media server. I have a Windows 7 netbook, an Ubuntu 12.04 LTS laptop, and a Samsung Galaxy Note tablet (Android), plus two desktops in another room that dual-boot Windows 7 and Ubuntu, and a Sharp AQUOS plasma TV with Wi-Fi connected. I want to install Ubuntu as a media server to stream audio/video files over Wi-Fi. Beyond that, I want this media server to use its own processing power to decode and stream, so that the remote end can play the file without using its own resources. Is it possible to use Ubuntu as a media server that streams files without making the remote end use its own resources? Only the Wi-Fi bandwidth and the media center's hardware resources should be used; the remote gadget should contribute only its speakers and screen, not its processing power. Any suggestions on whether it is possible to do so?

  • Oracle Database Machine and Exadata Storage Server

    - by jean-marc.gaudron(at)oracle.com
    Master Note for Oracle Database Machine and Exadata Storage Server (Doc ID 1187674.1). This Master Note is intended to provide an index and references to the most frequently used My Oracle Support Notes for Oracle Exadata and Oracle Database Machine environments. It is subdivided into categories to allow easy access and reference to the notes applicable to your area of interest, including the following:
    • Database Machine and Exadata Storage Server Concepts and Overview
    • Database Machine and Exadata Storage Server Configuration and Administration
    • Database Machine and Exadata Storage Server Troubleshooting and Debugging
    • Database Machine and Exadata Storage Server Best Practices
    • Database Machine and Exadata Storage Server Patching
    • Database Machine and Exadata Storage Server Documentation and References
    • Database Machine and Exadata Storage Server Known Problems
    • ASM and RAC Documentation
    • Using My Oracle Support Effectively

  • Azure Storage Explorer

    - by kaleidoscope
    Azure Storage Explorer - another way to work with the data behind your cloud services. Azure Storage Explorer is a useful GUI tool for inspecting and altering the data in your Azure cloud storage projects, including the logs of your cloud-hosted applications. All three types of cloud storage can be viewed: blobs, queues, and tables. You can also create or delete blob/queue/table containers and items. Text blobs can be edited, and all data types can be imported/exported between the cloud and local files; table records can be imported/exported between the cloud and spreadsheet CSV files. Why Azure Storage Explorer? It is a licensed CodePlex project provided by Neudesic, a Microsoft partner. It is a simple UI that requires you to input your blob storage name, access key, and endpoints in the Storage Settings dialog. For more details, please refer to: http://azurestorageexplorer.codeplex.com/Release/ProjectReleases.aspx?ReleaseId=35189 Anish, S
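
    If you would rather inspect the same data programmatically, the storage client library can enumerate containers and blobs. A minimal sketch, assuming the WindowsAzure.Storage client library; the connection string is a placeholder you must replace with your own account name and key:

        using System;
        using Microsoft.WindowsAzure.Storage;
        using Microsoft.WindowsAzure.Storage.Blob;

        class ListBlobs
        {
            static void Main()
            {
                // Placeholder credentials: substitute your storage account values.
                var account = CloudStorageAccount.Parse(
                    "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=mykey");
                var client = account.CreateCloudBlobClient();

                // Walk every container, then every blob inside it (flat listing).
                foreach (var container in client.ListContainers())
                {
                    Console.WriteLine(container.Name);
                    foreach (var item in container.ListBlobs(null, true))
                        Console.WriteLine("  " + item.Uri);
                }
            }
        }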

  • Why is business-class storage so expensive?

    - by Mark Henderson
    This is a Canonical Question about the cost of enterprise storage. See also the following question: What's the best way to explain storage issues to developers and other users. It covers general questions like:
    • Why do I have to pay 50 bucks a month per extra gigabyte of storage?
    • Our file server is always running out of space; why doesn't our sysadmin just throw an extra 1TB drive in there?
    • Why is SAN equipment so expensive?
    The Answers here will attempt to provide a better understanding of how enterprise-level storage works and what influences the price. If you can expand on the Question or provide insight as to the Answer, please post.

  • cannot load windows media within firefox browser

    - by pstanton
    Hi all, I recently re-installed Windows and everything on it, and besides this one issue everything is working much more smoothly again... the only problem is that I cannot load Windows Media content (for example, *.asx) within my Firefox browser. If I go to a page which embeds such streaming media, the media player never loads, and if I type the full URL of the .asx file in the address bar, nothing loads. I have "Windows Media Player Plug-in Dynamic Link Library 3.0.2.629" in my plugin list. What might I have broken? Cheers, p.

  • TP-Link storage server accessing from Mac OS

    - by coure2011
    I have a storage/print server by TP-Link: http://www.tp-link.com/en/products/details/?categoryid=232&model=TL-PS310U I just connect USB storage to the print server, and on Windows I can access that USB device from my Windows computers. But how can I access that device from a Mac? I found an article on adding a USB printer to a Mac using this device, but I was not able to find out how to access the storage device. Please help me out!

  • Windows API BackupRead failing with error 50 on Windows Storage Server 2008 R2

    - by Jason F
    We have a backup application that uses the Windows API BackupRead. It works correctly on Windows Server 2003, 2008, 2008 R2. It does not work on Storage Server 2008 R2. It always fails with error 50 - The request is not supported. The documentation for BackupRead gives no indication that it will not work with Storage Server 2008 R2. Anyone else have any experience using this API on Storage Server 2008 R2?
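
    For anyone trying to reproduce this outside the backup application, here is a minimal P/Invoke sketch that calls BackupRead on a single file and prints the Win32 error on failure. The file path is a placeholder; the declaration follows the documented Win32 signature, and a complete implementation would loop until BackupRead returns zero bytes and finish with a final bAbort = true call to release the context:

        using System;
        using System.ComponentModel;
        using System.IO;
        using System.Runtime.InteropServices;
        using Microsoft.Win32.SafeHandles;

        class BackupReadProbe
        {
            [DllImport("kernel32.dll", SetLastError = true)]
            static extern bool BackupRead(SafeFileHandle hFile, byte[] lpBuffer,
                uint nNumberOfBytesToRead, out uint lpNumberOfBytesRead,
                bool bAbort, bool bProcessSecurity, ref IntPtr lpContext);

            static void Main()
            {
                // Placeholder path: point this at a file on the affected server.
                using (var fs = new FileStream(@"C:\probe.txt", FileMode.Open,
                    FileAccess.Read, FileShare.Read))
                {
                    var buffer = new byte[64 * 1024];
                    uint bytesRead;
                    IntPtr context = IntPtr.Zero;

                    // On the affected Storage Server 2008 R2 machines this is
                    // where error 50 (ERROR_NOT_SUPPORTED) would surface.
                    if (!BackupRead(fs.SafeFileHandle, buffer, (uint)buffer.Length,
                        out bytesRead, false, true, ref context))
                    {
                        var error = Marshal.GetLastWin32Error();
                        Console.WriteLine("BackupRead failed: " +
                            new Win32Exception(error).Message + " (" + error + ")");
                    }
                    else
                    {
                        Console.WriteLine("Read " + bytesRead + " bytes of backup stream data.");
                    }
                }
            }
        }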

  • Agile social media analysis and implementation

    - by blunders
    Are there any books/platforms for social media campaign planning and implementation that define a completely agile approach to engaging audiences on platforms such as Facebook, LinkedIn, Twitter, etc.? UPDATE: Posted a bounty on the question, since the current answer is really not about agile approaches to social media campaign planning and implementation. UPDATE 2: The question is asking for an agile social media approach, or a social media platform that has an agile social media approach baked in. If the question were about an agile approach to software development, Scrum would be the most likely answer (70 percent of agile software developers say they practice some form of Scrum), and Pivotal Tracker might be one of many agile platforms suggested; as a generalization, Pivotal Tracker might be called a project management platform. On the flip side, suggesting just a social media platform would be the equivalent of suggesting a project management platform and telling me to see if Scrum works on it. The problem is that if you haven't suggested an agile social media approach to try on that social media platform, then you haven't provided an answer to the question.

  • Convert windows media video stream to mp3 audio?

    - by Jared
    I'd like to take a Windows Media video stream and convert it to MP3 audio. Using mencoder I can convert from Windows Media video to AVI, but not directly from Windows Media to MP3. The following is the command I use to convert to AVI; I'd like to avoid having to do two conversions by first creating the AVI file and then converting it to MP3.

        mencoder mms://wmslive.media.hinet.net/Weblive_Bloomberg_600 -ovc lavc -oac mp3lame -o output.avi

    Note: if there are other tools that can do this, I will use them; I have access to both Linux and Windows.

  • The Best Articles for Playing, Customizing, and Organizing Your Media

    - by Lori Kaufman
    Computers today are used for much more than generating documents, writing and receiving email, and surfing the web. We also use them to listen to music, watch movies and TV shows, and transfer media to and from mobile devices. Below are links to many articles we have published on various media topics, such as streaming media, managing and organizing your media, converting media formats, obtaining album art, preparing media for transfer to mobile devices, and some general information about working with audio and video. You'll also find links to articles about specific media tools, such as Audacity, XBMC, Windows Media Player, VLC, and iTunes.

  • When is the default storage rule not really the default storage rule?

    - by Kevin Smith
    In 11g, WebCenter Content (WCC) introduced dispersion rules in the vault and weblayout directory paths to better distribute content across the directories. The dispersion rule was based on dRevClassID. The only problem with this is that dRevClassID did not remain the same when you copied content from one WCC instance to another using Archiver, as in a contribution-consumption scenario. This could cause problems because the web-viewable path would not be the same between the contribution and consumption instances. In the PS5 (11.1.1.6.0) release of WCC this was addressed by configuring the File Store Provider (FSP) so that all new content would use a storage rule with a dispersion rule based on dDocName, which stays the same when content is copied to another WCC instance. To support migration from older versions of WCC they left the default storage rule unchanged and created a new storage rule called DispByContentId, making that the default storage rule for all new content. I only stumbled upon this a while back when I was trying to change the FSP configuration so that all content used a webless storage rule. I changed the default storage rule, restarted WCC, and checked in a new content item. To my surprise, the new content was not created as webless. I struggled with this for a while until I noticed there were multiple storage rules defined in the FSP configuration. When I looked at the default value for the xStorageRule field in Configuration Manager, sure enough, it was no longer default, but was now DispByContentId. Once I updated the DispByContentId storage rule to webless and restarted WCC, all my new content was created using the webless storage rule, just as I wanted. I noticed when I was creating this blog post that the default storage rule is also listed on the File Store Provider Information page, but I guess I didn't see that when I originally did this.

  • Windows 7 - media sharing extremely flakey (XBox 360)

    - by Nathan Ridley
    I have constant headaches with Windows Media Library sharing, which I'm using to share my video library (a whole lot of AVI files on my computer's hard drive) with the Xbox 360. I have the whole setup working, so setup and basic configuration are not the issue. The issue is that Windows Media Library frequently fails to notice additions and modifications to the "watched" folders. Not only that, but often after I've added something new to the library, it fails to transmit the new library information to the 360: when I navigate to my video library on the 360, the new folders don't show up in the list. Often the only way to get the 360 to see the changes is to exit out of "Videos" on the 360, remove all videos from Windows Media Library, go back into Videos on the 360 to confirm that nothing is shared, then re-add a few videos (but not too many) in Windows Media Library, and then go back into Videos on the 360. The whole thing is a real pain in the bum, and I really wish I could just add all my videos to Windows Media Library and have it keep watch without me having to kick it, and have the 360 pick up changes without me having to constantly rebuild the library. Any ideas?

  • Media center consumes all available memory when attempting to play music off of a server

    - by RCIX
    I have Windows 7 Ultimate, and recently, when I try to play a song off of my Twonky Media Server/Windows Media Connect share (based on an HP WHS with an Atom), it plays choppily. When I open Resource Monitor, it shows that after ordering the music to play, memory usage rapidly spikes to consume most, if not all, of the available memory on my system (excluding a couple hundred megabytes in standby). Why does it do this, and is there anything I can do to stop it? Edit: it happens when I attempt to browse the server's music, not just when I play music. Edit 2: the "ehshell" process is what consumes the memory; it appears to be something specific to Media Center. Moreover, the ehshell process doesn't die in this case. Edit 3: it only happens when browsing my Twonky library, not my Windows Media Connect library.

  • Shared storage solution for our sql server backups

    - by Gokhan
    We have 3 clustered SQL Servers and 5+ multi-terabyte databases whose backup files (compressed using Quest LiteSpeed) are hitting over 600GB each. We are required to keep at least a week or two (if we can) of weekly full backups, plus 6 days of differential backups and a week or two of log backups, locally. We are currently limited to 2TB volumes from our SAN team; we can have multiple volumes, but they are expensive ($200 per raw TB per month), and having to deal with many backup volumes instead of a single big volume is difficult. I think it would be good if we could have shared network storage of 20TB+ (RAID 10 or so) for all our servers to keep the backups on; another department would then copy them to tape from the network storage and delete files according to the retention period. Ideally this box would be a self-contained file storage system with a built-in operating system (even Unix). What do you guys think? Does this make sense to you, and is there any manufacturer that sells a storage product like that which works in a clustered environment? Thank you

  • Server 2012 Storage Pools, Raid Controller... can the Storage Pool deal with it?

    - by TomTom
    Before trying it out - I can't find any documentation. Given that Storage Pools have serious performance problems with parity, and do not currently rebalance data when you add disks, my preferred way to use them would be as thin-provisioned iSCSI targets, with every pool running against one RAID volume that comes from a RAID controller (which also provides SSD read and write caching - another thing missing from Storage Pools). The main question is: how does a Storage Pool handle changes in the underlying disk? I am mostly talking about OCE (Online Capacity Expansion), where a disk suddenly reports a larger size after an expansion. Standard Windows allows you to use this additional space (and expand the partitions). How does a Storage Pool handle it?
