Search Results

Search found 6881 results on 276 pages for 'storage spaces'.


  • Azure storage: Uploaded files with size zero bytes

    - by Fabio Milheiro
    When I upload an image file to a blob, the image is uploaded apparently successfully (no errors). When I go to Cloud Storage Studio, the file is there, but with a size of 0 (zero) bytes. The following is the code that I am using:

        // These two methods belong to the ContentService class used to upload
        // files in the storage.
        public void SetContent(HttpPostedFileBase file, string filename, bool overwrite)
        {
            CloudBlobContainer blobContainer = GetContainer();
            var blob = blobContainer.GetBlobReference(filename);
            if (file != null)
            {
                blob.Properties.ContentType = file.ContentType;
                blob.UploadFromStream(file.InputStream);
            }
            else
            {
                blob.Properties.ContentType = "application/octet-stream";
                blob.UploadByteArray(new byte[1]);
            }
        }

        public string UploadFile(HttpPostedFileBase file, string uploadPath)
        {
            if (file.ContentLength == 0)
            {
                return null;
            }

            string filename;
            int indexBar = file.FileName.LastIndexOf('\\');
            if (indexBar > -1)
            {
                filename = DateTime.UtcNow.Ticks + file.FileName.Substring(indexBar + 1);
            }
            else
            {
                filename = DateTime.UtcNow.Ticks + file.FileName;
            }
            ContentService.Instance.SetContent(file, Helper.CombinePath(uploadPath, filename), true);
            return filename;
        }

        // The above code is called by this code.
        HttpPostedFileBase newFile = Request.Files["newFile"] as HttpPostedFileBase;
        ContentService service = new ContentService();
        blog.Image = service.UploadFile(newFile, string.Format("{0}{1}", Constants.Paths.BlogImages, blog.RowKey));

    Before the image file is uploaded to the storage, the InputStream property of the HttpPostedFileBase appears to be fine (the size of the image corresponds to what is expected, and no exceptions are thrown). And the really strange thing is that this works perfectly in other cases (uploading PowerPoint files, or even other images from the worker role). The code that calls the SetContent method seems to be exactly the same, and the filename seems to be correct, since a new file with zero bytes is created at the correct location. Does anyone have any suggestions? I have debugged this code dozens of times and I cannot see the problem. Any suggestions are welcome! Thanks
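    One cause that often produces exactly this symptom (not something the question itself confirms) is that file.InputStream has already been read once before SetContent runs - for a size check, a hash, or an earlier upload attempt - so its position is sitting at the end of the stream and UploadFromStream writes nothing. If the stream is seekable, rewinding it just before the upload (for example file.InputStream.Position = 0, or Seek(0, SeekOrigin.Begin)) is the usual fix. The snippet below is only a language-neutral illustration of that stream-position behaviour, sketched in Python with an in-memory stream; none of it is the original application's code:

        import io

        buf = io.BytesIO(b"pretend this is the uploaded image payload")

        # Something earlier in the pipeline reads the stream (a size check, logging, ...),
        # leaving the read position at the end of the data.
        _ = buf.read()

        # An upload that simply reads from the current position now sees nothing,
        # which is exactly how a zero-byte blob gets created.
        print(len(buf.read()))   # -> 0

        # Rewinding the stream first restores the full payload for the upload to consume.
        buf.seek(0)
        print(len(buf.read()))   # -> full length again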

    Read the article

  • Silverlight isolated storage: what identifies an "application"?

    - by Joe White
    The docs for Silverlight's IsolatedStorageFile.GetUserStoreForApplication just say that the isolated storage is specific to the "application", and that each different application will have its own storage independent of all other "applications" (but with one quota for the entire domain). That's great, but I haven't found anything yet that explains just what "application" is supposed to mean (either in the Silverlight docs or the regular .NET Framework docs). What information does Silverlight, in particular, use to decide that "this is application A" and "this is application B"? Does it just go off the URI to the .xap file, or what?

    Read the article

  • Access block level storage via kernel

    - by N 1.1
    How can I access block-level storage from the kernel (without using SCSI libraries)? My intent is to implement a block-level storage protocol over the network for learning purposes, working almost the same way SCSI does. Requests will be generated by an initiator and sent to a target (both userspace programs); the target makes a call into a kernel module and returns the data to the initiator over TCP. So far I have managed to build and run a simple "Hello" module (I am new to kernel programming), but I am unable to proceed with block access. After searching a lot, I found struct buffer_head *bread(int dev, int block) in linux/fs.h, but the compiler throws an error: error: implicit declaration of function ‘bread’. Please help, and also feel free to advise on getting started with kernel programming. Thank you!

    Read the article

  • Storage for large gridded datasets

    - by nullglob
    I am looking for a good storage format for large, gridded datasets. The application is meteorology, and we would prefer a format that is common within this field (to help exchange data with others). I don't need to deal with special data structures, and there should be a Fortran API. I am currently considering HDF5, GRIB2 and NetCDF4. How do these formats compare in terms of data compression? What are their main limitations? How steep is the learning curve? Are there any other storage formats worth investigating? I have not found a great deal of material outlining the differences and pros/cons of these formats (there is one relevant SO thread, and a presentation comparing GRIB and NetCDF).
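    To give a feel for the API surface, here is a minimal sketch using the netCDF4 Python bindings (the Fortran API exposes the same concepts under similar names); the file name, variable name, and grid sizes are invented, and the zlib/complevel arguments are where NetCDF4's built-in HDF5-based deflate compression is switched on:

        import numpy as np
        from netCDF4 import Dataset

        ds = Dataset("forecast.nc", "w", format="NETCDF4")

        # Grid dimensions: an unlimited time axis plus a 2.5-degree lat/lon grid.
        ds.createDimension("time", None)
        ds.createDimension("lat", 73)
        ds.createDimension("lon", 144)

        # A compressed 3-D variable: deflate level 4, chunking handled automatically.
        t2m = ds.createVariable("t2m", "f4", ("time", "lat", "lon"),
                                zlib=True, complevel=4)
        t2m.units = "K"

        # Write one time step of synthetic 2-metre temperatures.
        t2m[0, :, :] = 280.0 + 10.0 * np.random.rand(73, 144)
        ds.close()

    GRIB2, by contrast, is a record-oriented message format rather than a random-access container, which is a large part of the practical trade-off among the three candidates.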

    Read the article

  • Are there any home/SOHO NAS devices that will back up/sync to the cloud?

    - by 3rdparty
    Looking for a home-office (SOHO) priced network hard drive (NAS) that will sync some or all of its content to a cloud-based backup service. The only option I've been able to find so far is NetGear's ReadyNAS Vault, however from what I've read it's not as secure as it could be, and the service is quite expensive ($200/yr for 50GB of cloud storage) - it's 'powered' by ElephantDrive. Ideally I would love to see something like Wuala integrated into a LaCie network HDD - conveniently, I suspect this is in the works, as LaCie recently acquired Wuala, however nothing has come of it yet. I know there are options to use rsync with a customizable NAS (such as the very versatile and hackable D-Link DNS-323), but the easier this is to set up and maintain, the better. Thanks! ps. I had many links posted within this question, but was limited to posting only one due to anti-spam restrictions - gotta get my 'reputation' higher!

    Read the article

  • MySQL unstable behavior with external storage

    - by onurozcelik
    In our project we are using a MySQL 5.0.90 (InnoDB engine) server with external storage: the MySQL data files live on an external storage unit. When the external storage goes down for some reason, we see unstable behavior, so we ran some tests.

    On Windows Server 2008: We powered off the external storage unit. The MySQL service stopped and we could not reach the server. Then we powered the storage unit back on, and we could not start the service until we reconfigured MySQL. We also took the storage unit offline from the operating system. After 3-4 minutes and some insert attempts (some of which succeeded), the MySQL service stopped and we could not reach the server. Then we brought the storage unit back online and we could start the service (though not automatically).

    On Windows Server 2003: We took the storage unit offline. After 3-4 minutes and some insert attempts (some of which succeeded), the MySQL service stopped and we could not reach the server. Then we brought the storage unit back online, and we could not start the service until we reinstalled MySQL; before reinstalling we tried to reconfigure it, but that did not work. We also powered off the external storage unit. The MySQL service stopped and we could not reach the server. After that we powered the storage unit back on and we could start the service (though not automatically).

    We expect the service to start automatically once the storage unit is back online/open, but these tests show unstable behavior. Are there any solutions to this?

    Read the article

  • Flexible cloud file storage for a web.py app?

    - by benwad
    I'm creating a web app using web.py (although I may later rewrite it for Tornado) which involves a lot of file manipulation. For example, the app will have a git-style 'commit' operation, in which some files are sent to the server and placed in a new folder along with the unchanged files from the last version. This will involve copying the old folder to the new folder, replacing/adding/deleting the committed files in the new folder, then deleting all unchanged files in the old folder (as they are now in the new folder). I've decided on Heroku for the app hosting environment, and I am currently looking at cloud storage options that are built with these kinds of operations in mind. I was thinking of Amazon S3, however I'm not sure if it lets you carry out these kinds of file operations in place. I was thinking I may have to load these files into the server's RAM and then re-insert them into the bucket, costing me a fortune. I was also thinking of Progstr Filer (http://filer.progstr.com/index.html) but that seems to only integrate with Rails apps. Can anyone help with this? Basically I want file operations to be as cheap as possible.
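    On the S3 option specifically: S3 has no true in-place rename or move, but it does support server-side copies, so a commit-style operation can copy the unchanged objects from the old version's prefix to the new one without pulling the bytes through the web process at all. A rough sketch with boto3 - the bucket name, prefixes, and helper are all invented for illustration:

        import boto3

        s3 = boto3.client("s3")
        BUCKET = "my-app-files"   # hypothetical bucket name

        def commit(old_prefix, new_prefix, changed):
            """Copy the previous version's objects to a new prefix, then overlay the changed files.

            `changed` maps a relative path to its new content (bytes).
            """
            paginator = s3.get_paginator("list_objects_v2")
            for page in paginator.paginate(Bucket=BUCKET, Prefix=old_prefix):
                for obj in page.get("Contents", []):
                    rel = obj["Key"][len(old_prefix):]
                    if rel in changed:
                        continue  # replaced below, no need to copy the old version
                    # Server-side copy: the data never leaves S3.
                    s3.copy_object(Bucket=BUCKET,
                                   CopySource={"Bucket": BUCKET, "Key": obj["Key"]},
                                   Key=new_prefix + rel)
            for rel, body in changed.items():
                s3.put_object(Bucket=BUCKET, Key=new_prefix + rel, Body=body)

    Note that a single copy_object call handles objects up to 5 GB (larger objects need a multipart copy), and cleaning up the old prefix afterwards is a separate batched delete.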

    Read the article

  • XenServer 5.5 local storage problem

    - by Jason Nerer
    Hi community, I have the following problem with a Citrix XenServer 5.5. I had to physically move the host, so I shut down all machines via the console:

        xe vm-shutdown force=true vm=my-machine-uuis-s

    After that I shut down the machine itself by issuing:

        halt

    After the reboot today, the local storage repository is unplugged. I was trying to repair it via XenCenter, but I don't trust that one. So I tried:

        [root@xenserver ~]# xe pbd-list
        uuid ( RO)               : ef6e2f3b-5825-393c-23e1-391d105c87ec
        host-uuid ( RO)          : c4bcf09c-2e52-448f-8210-df5d13bd33a9
        sr-uuid ( RO)            : 2fb3be9c-075c-53ed-acb6-42f0c4ad0614
        device-config (MRO)      : device: /dev/disk/by-id/scsi-SATA_WDC_WD5001ABYS-_WD-WCAS83698154,/dev/disk/by-id/scsi-SATA_WDC_WD5001ABYS-_WD-WCAS83694262
        currently-attached ( RO) : false

    To reattach the storage I issued:

        xe pbd-plug uuid=ef6e2f3b-5825-393c-23e1-391d105c87ec

    That has been running for a while now, but it isn't telling me anything. The local repo is around 1TB. Should I wait, or are there any other options to reattach the local repo? What could have caused this problem? Any ideas? Thx. J

    Read the article

  • Large scale storage for incrementally-appended documents?

    - by Ben Dilts
    I need to store hundreds of thousands (right now, potentially many millions) of documents that start out empty and are appended to frequently, but are never otherwise updated or deleted. These documents are not interrelated in any way, and just need to be accessed by some unique ID. Read accesses are some subset of the document, which almost always starts midway through at some indexed location (e.g. "document #4324319, save #53 to the end"). These documents start very small, at several KB. They typically reach a final size around 500KB, but many reach 10MB or more. I'm currently using MySQL (InnoDB) to store these documents. Each of the incremental saves is just dumped into one big table with the document ID it belongs to, so reading part of a document looks like "select * from saves where document_id=14 and save_id >= 53 order by save_id", then manually concatenating it all together in code. Ideally, I'd like the storage solution to be easily horizontally scalable, with redundancy across servers (e.g. each document stored on at least 3 nodes) and easy recovery of crashed servers. I've looked at CouchDB and MongoDB as possible replacements for MySQL, but I'm not sure that either of them makes a whole lot of sense for this particular application, though I'm open to being convinced. Any input on a good storage solution?
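    Since MongoDB is already on the candidate list, here is a rough sketch of how the existing one-row-per-save layout might map onto it; the collection and field names are invented, and the redundancy requirement (each document on at least 3 nodes) would come from running a replica set, which this snippet does not show:

        from pymongo import ASCENDING, MongoClient

        client = MongoClient("mongodb://localhost:27017")
        saves = client.docstore.saves

        # One small record per incremental save; written once, never updated.
        saves.create_index([("document_id", ASCENDING), ("save_id", ASCENDING)])
        saves.insert_one({"document_id": 4324319, "save_id": 53, "payload": b"...chunk..."})

        # "document #4324319, save #53 to the end"
        cursor = (saves.find({"document_id": 4324319, "save_id": {"$gte": 53}})
                       .sort("save_id", ASCENDING))
        body = b"".join(doc["payload"] for doc in cursor)

    Keeping the per-save layout also means each record stays well under MongoDB's 16 MB document limit even when the concatenated document does not.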

    Read the article

  • Temporary storage for keeping data between program iterations?

    - by mr.b
    I am working on an application that works like this:

    1. It fetches data from many sources, resulting in a pool of about 500,000-1,500,000 records (depending on time/day).
    2. The data is parsed.
    3. Part of the data is processed by comparing it to pre-existing data (read from the database); calculations are made, and the results are stored in the database. The resulting dataset that has to be stored in the database is, however, much smaller (compared to the original data set), and ranges from 5,000-50,000 records. This process almost always updates existing data, perhaps adding a few more records.

    Then, the data from step 2 should be kept somehow, somewhere, so that the next time data is fetched there is a data set which can be used to perform calculations without touching the pre-existing data in the database. I should point out that this data can be lost; it's not irreplaceable (the key information can be read from the database if needed), but keeping it would speed up the process next time. Application components can (and will) be run on different computers (in the same network), so the storage has to be reachable from multiple hosts. I have considered using memcached, but I'm not quite sure I should, because one record is usually no smaller than 200 bytes, and if I have 1,500,000 records, I guess that it would amount to over 300 MB of memcached cache... But that doesn't seem scalable to me - what if the data were 5x that amount? What if it were to consume 1-2 GB of cache only to keep data between iterations (which could easily happen)? So, the question is: which temporary storage mechanism would be most suitable for this kind of processing? I haven't considered using MySQL temporary tables, as I'm not sure they can persist between sessions and be used by other hosts on the network... Any other suggestions? Something I should consider?
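    One mechanism that fits this profile (reachable from several hosts, disposable if lost, cheaper than re-deriving everything) is an out-of-process key-value store such as Redis; memcached would look almost identical but evicts entries under memory pressure, whereas Redis can also persist to disk if the dataset grows. A minimal sketch with the redis-py client - the host name, key layout, and TTL are invented:

        import json
        import redis

        r = redis.Redis(host="cache-host.local", port=6379)   # hypothetical shared host

        RUN_KEY = "import:last_run"

        def save_iteration(records):
            """Keep this run's processed records in one hash: record id -> JSON blob."""
            pipe = r.pipeline()
            pipe.delete(RUN_KEY)
            for rec in records:
                pipe.hset(RUN_KEY, rec["id"], json.dumps(rec))
            pipe.expire(RUN_KEY, 48 * 3600)   # disposable data, so let it age out
            pipe.execute()

        def load_iteration():
            raw = r.hgetall(RUN_KEY)          # empty dict if the data was lost
            return {key.decode(): json.loads(value) for key, value in raw.items()}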

    Read the article

  • Sharing storage between servers

    - by El Yobo
    I have a PHP-based web application which is currently only using one webserver but will shortly be scaling up to another. In most regards this is pretty straightforward, but the application also stores a lot of files on the filesystem. It seems that there are many approaches to sharing the files between the two servers, from the very simple to the reasonably complex. These are the options that I'm aware of: simple network storage (NFS, SMB/CIFS); clustered filesystems (Lustre, GFS/GFS2, GlusterFS); Hadoop DFS; MogileFS. What I want is for a file uploaded via one webserver to be immediately available if accessed through the other. The data is extremely important and absolutely cannot be lost, so whatever is implemented needs to a) never lose data and b) have very high availability (as good as, or better than, a local filesystem). It seems like the clustered filesystems will also provide faster data access than local storage (for large files), but that isn't of vital importance at the moment. What would you recommend? Do you have any suggestions to add, or anything specifically to look out for with the above options? Any suggestions on how to manage backup of data on the clustered filesystems?

    Read the article

  • High Performance Storage Systems for SQL Server

    Rod Colledge turns his pessimistic mindset to storage systems, and describes the best way to configure the storage systems of SQL Servers for both performance and reliability. Even Rod gets a glint in his eye when he then goes on to describe the dazzling speed of solid-state storage, though he is quick to identify the risks.

    Read the article

  • Digital Storage for Airline Entertainment

    - by Bill Evjen
    by Thomas Coughlin

    Common flash memory cards
      • The most common flash memory products currently in use are SD cards and derivative products (e.g. mini and micro-SD cards)
      • Some CompactFlash is used for professional applications (such as DSLR cameras)

    Evolution of leading flash formats
      • Standardization -> market expansion
      • Market expansion -> volume
      • iNAND -> focus is on enabling embedded X3
      • iSSD -> ideal for thin form-factor devices

    Flash memory applications
      • Phones are the #1 user of flash memory
      • Flash memory is used as embedded and removable storage in many mobile applications
      • Flash memory is being used in computers as USB sticks and SSDs
      • Possible use of flash memory in computers combined with HDDs (hybrid HDDs and paired or dual-storage computers)
      • It can be a removable card or an embedded card
      • These devices can only handle a specific number of writes
      • Flash memory reads considerably quicker than hard drives

    Hybrid and dual storage in computers
      • SSDs can provide fast performance but they are expensive
      • HDDs can provide cheap storage but they are relatively slow
      • Combining some flash memory with an HDD can provide costs close to those of HDDs and performance close to flash memory
      • Seagate Momentus XT hybrid HDD
      • Various dual-storage offerings putting flash memory with HDDs

    Other common flash memory devices
      • USB sticks: all forms and colors, used for moving files around, some sold with content on them (Sony movies on USB sticks)
      • Solid State Drives (SSDs)

    Floating-gate flash memory cell
      • When a bit is programmed, electrons are stored upon the floating gate
      • This has the effect of offsetting the charge on the control gate of the transistor
      • If there is no charge upon the floating gate, then the control gate's charge determines whether or not a current flows through the channel
      • A strong charge on the control gate means that no current flows; a weak charge will allow a strong current to flow through

    Similar to HDDs, flash memory must provide:
      • Bit error correction
      • Bad block management

    NAND and NOR memories are treated differently when it comes to managing wear
      • In many NOR-based systems no wear management is used at all, since the NOR is simply used to store code, and data is stored in other devices. In this case it would take a near-infinite amount of time for wear to become an issue, since the only time the chip sees an erase/write cycle is when the code in the system is being upgraded, which rarely if ever happens over the life of a typical system.
      • NAND is usually found in very different applications than NOR

    Flash memory wears out
      • This is expected to get worse over time

    Retention: disappearing data
      • Bits fade away
      • Retention decreases with increasing reads/writes
      • Bits may change when adjacent bits are read
      • Time and traffic are concerns
      • Controllers typically groom read-disturb errors (like DRAM refresh), which increases erase/write frequency

    Application characteristics
      • Music - reads high / writes very low
      • Video - reads high / writes very low
      • Internet cache - reads high / writes low

    On airplanes
      • Many consumers now have their own content-viewing devices - do they need the airlines?
      • Is there a way to offer more to consumers, especially with their own viewers? Additional special content, tie-in to the airplane network, access to electrical power, internet
      • Should there be fixed embedded or removable storage for on-board airline entertainment?
      • Is there a way to leverage personal and airline viewers and content in new and entertaining ways?

    Read the article

  • Google I/O 2012 - Powering Your Application's Data using Google Cloud Storage

    Speakers: Navneet Joneja, Nathan Herring. Since opening its doors to all developers at Google I/O last year, the Google Cloud Storage team has shipped several features that let you use Google Cloud Storage for a variety of advanced use cases. This session will open with a quick introduction to the product, and quickly shift focus to implementing a variety of advanced applications using new features in Google Cloud Storage. For all I/O 2012 sessions, go to developers.google.com. From: GoogleDevelopers | Time: 58:32

    Read the article

  • GDD-BR 2010 [2F] Storage, Bigquery and Prediction APIs

    Speaker: Patrick Chanezon | Track: Cloud Computing | Time slot: F [15:30 - 16:15] | Room: 2 | Level: 101. Google is expanding our storage products by introducing Google Storage for Developers. It offers a RESTful API for storing and accessing data at Google. Developers can take advantage of the performance and reliability of Google's storage infrastructure, as well as the advanced security and sharing capabilities. We will demonstrate key functionality of the product as well as customer use cases. Google relies heavily on data analysis and has developed many tools to understand large datasets. Two of these tools are now available on a limited sign-up basis to developers: (1) BigQuery: interactive analysis of very large data sets, and (2) Prediction API: make informed predictions from your data. We will demonstrate their use and give instructions on how to get access. From: GoogleDevelopers | Time: 39:27

    Read the article

  • How to fix Failed to initialize Windows Azure storage emulator error

    - by ybbest
    When you press F5 to start debugging an Azure project, you might get an exception. If you go to the Output window, you will see the detailed error message below:

        Windows Azure Tools: Failed to initialize Windows Azure storage emulator. Unable to start Development Storage.
        Failed to start Development Storage: the SQL Server instance 'localhost\SQLExpress' could not be found.
        Please configure the SQL Server instance for Development Storage using the 'DSInit' utility in the Windows Azure SDK.

    This happens because, by default, Azure uses SQL Express to start Development Storage. To fix it, open a command prompt and navigate to C:\Program Files\Windows Azure SDK\v1.4\bin\devstore (depending on your Azure SDK version, the path differs slightly). Next, run DSInit /sqlInstance:. (the "." means SQL Server uses the default instance; if you have a named instance, change the "." to the name of that instance). After a short while, you should see a window showing that the configuration succeeded. You can download a batch file here. References: http://msdn.microsoft.com/en-us/library/gg433132.aspx

    Read the article

  • How to escape spaces in .desktop files Exec line

    - by nh2
    I want to make a .desktop file as described here.

        [Desktop Entry]
        Name=Sublime Text 2
        GenericName=Sublime Text 2
        Comment=Edit text files
        Exec=/home/user/opt/sublime/Sublime Text 2/sublime_text %U

    However, running that from Nautilus's context menu using "Open with" gives me:

        Could not find '/home/user/opt/sublime/Sublime'

    So I tried:

        Exec="/home/user/opt/sublime/Sublime Text 2/sublime_text" %U

    and got:

        Text ended before matching quote was found for ". (The text was '"/home/user/opt/sublime/Sublime')

    What is the correct way to escape spaces in the Exec line of .desktop files?

    Read the article

  • Changing the default shortcut for Spaces on a MacBook Pro with Snow Leopard?

    - by Mahmoud Abdelkader
    I'm primarily a Linux user, but my main home PC is a MacBook Pro with Snow Leopard. The concept of "Spaces" on a Mac is extremely important to how I use my development environments, and I use this feature on Linux all the time. I use Compiz Manager on Ubuntu, which allows me to switch desktops and navigate to a particular desktop using the Ctrl+Alt+Arrow keys. On a Mac, the process is similar; however, you're only given 3 non-customizable shortcuts: Option+Arrow, Ctrl+Arrow, and Alt+Arrow. How can I change the keyboard shortcut to Ctrl+Option? Those keys would fit my working style, and it won't feel awkward having to remember two sets of keyboard shortcuts. Thanks! Mahmoud

    Read the article

  • Even More Storage Options in VDI 3.4.1

    - by mprove
    Oracle Virtual Desktop Infrastructure 3.4.1 has been released to complete the storage matrix below.

        Storage Type               | VirtualBox on Solaris | VirtualBox on Enterprise Linux
        ---------------------------+-----------------------+-------------------------------
        Sun ZFS                    | yes                   | yes
        Sun ZFS (pool on Solaris)  | yes                   | yes
        iSCSI                      | -                     | new in VDI 3.4
        Network File System        | new in VDI 3.4.1      | new in VDI 3.4
        Local Storage              | new in VDI 3.4.1      | new in VDI 3.4

    Read the article

  • MySQL Exotic Storage Engines

    MySQL has an interesting architecture that allows you to plug in different modules to handle storage. What that means is that it's quite flexible, offering an interesting array of different storage engines with different features, strengths, and tradeoffs. Sean Hull presents some of the newest and more exotic storage engines, and even some that are still in development.

    Read the article
