Search Results

Search found 5528 results on 222 pages for 'offsite storage'.

Page 23/222

  • New Marketing Kits for Hardware

    - by A&C Redaktion
    To support sales, Oracle Marketing Kits are now also available in German for the following hardware solutions: Server & Storage: Improve Database Capacity Management with Oracle Storage and Hybrid Columnar Compression Server & Storage: Accelerating Database Test & Development with Sun ZFS Storage Appliance Server & Storage: Upgrade SAN Storage to Oracle Pillar Axiom Server & Storage: SPARC Refresh with Oracle Solaris Operating System Server & Storage: SPARC Server Refresh: The Next Level of Datacenter Performance with Oracle’s New SPARC Servers Server & Storage: Oracle Server Virtualization Server & Storage: Oracle Desktop Virtualization

    Read the article

  • New Marketing Assets Available

    - by swalker
    NEW translated demand generation materials available for the following Oracle Marketing Kits, designed to help partners generate sales around Oracle's solutions: Server & Storage: Improve Database Capacity Management with Oracle Storage and Hybrid Columnar Compression Server & Storage: Accelerating Database Test & Development with Sun ZFS Storage Appliance Server & Storage: Upgrade SAN Storage to Oracle Pillar Axiom Server & Storage: SPARC Refresh with Oracle Solaris Operating System Server & Storage: SPARC Server Refresh: The Next Level of Datacenter Performance with Oracle’s New SPARC Servers Server & Storage: Oracle Server Virtualization Server & Storage: Oracle Desktop Virtualization

    Read the article

  • Windows Azure Table Storage Error when in the cloud

    - by Dan Jones
    Hey, I downloaded the simple table storage sample from the Cloudy in Seattle blog. It works perfectly when aimed at local storage, but when I change it to point at Azure storage I get the following error. Screenshot (on SkyDrive): http://cid-00341536d0f91b53.skydrive.live.com/self.aspx/.Public/error.png Has anyone seen this before? Thanks, guys.
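    The screenshot isn't reproduced here, so the exact error is unknown, but a common first check with this sample is the storage connection string: it ships pointing at the local development storage emulator and has to be switched to real account credentials (in ServiceConfiguration.cscfg) before it will work against cloud storage. A minimal sketch using the classic StorageClient API follows; the account name, key and table name are placeholders, not values from the original post.

    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    class ConnectionCheck
    {
        static void Main()
        {
            // What the sample uses out of the box: the local development storage emulator.
            CloudStorageAccount devAccount = CloudStorageAccount.DevelopmentStorageAccount;

            // Real cloud storage: replace the placeholder account name and key.
            CloudStorageAccount cloudAccount = CloudStorageAccount.Parse(
                "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=mykey");

            // Table names must be 3-63 alphanumeric characters and start with a letter,
            // or cloud storage will reject the request.
            CloudTableClient tableClient = cloudAccount.CreateCloudTableClient();
            tableClient.CreateTableIfNotExist("SampleTable");
        }
    }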

    Read the article

  • New Marketing Kits for Hardware

    - by A&C Redaktion
    The Oracle Marketing Kits are a popular sales-support tool. Continuously expanded, they contain draft copy for an emailing, a landing page, and a telemarketing script. Brand-new kits are now available, in German among other languages, for the following hardware solutions: Server & Storage: Improve Database Capacity Management with Oracle Storage and Hybrid Columnar Compression Server & Storage: Accelerating Database Test & Development with Sun ZFS Storage Appliance Server & Storage: Upgrade SAN Storage to Oracle Pillar Axiom Server & Storage: SPARC Refresh with Oracle Solaris Operating System Server & Storage: SPARC Server Refresh: The Next Level of Datacenter Performance with Oracle’s New SPARC Servers Server & Storage: Oracle Server Virtualization Server & Storage: Oracle Desktop Virtualization

    Read the article

  • Create a public folder in internal storage

    - by user2516977
    Sorry for my English, but I want to write in this forum because in my opinion it is the best. Now my problem: I want to create a folder in internal storage to share between two applications. In my app, I download an APK from my server and run it. Before, I used external storage and everything worked. Now I want to use internal storage for users that don't have external storage. I use this: String folderPath = getFilesDir() + "Dir" but when I try to run the APK it doesn't work, and I can't find this folder on my phone. Thank you.

    Read the article

  • Windows Azure Error: Failed to start Storage Emulator: the SQL Server instance ‘localhost\SQLExpress’ could not be found

    - by DigiMortal
    When running some of your Windows Azure applications while the storage emulator is not configured, you may get the following error: "Windows Azure Tools: Failed to initialize Windows Azure storage emulator. Unable to start Development Storage. Failed to start Storage Emulator: the SQL Server instance ‘localhost\SQLExpress’ could not be found. Please configure the SQL Server instance for Storage Emulator using the ‘DSInit’ utility in the Windows Azure SDK." Here’s how to solve this problem. You need to run the DSInit utility to create the database. For Windows Azure SDK 1.6 the location of the DSInit utility is: C:\Program Files\Windows Azure Emulator\emulator\devstore By default DSInit expects your database server to be (local)\SQLEXPRESS, but you can change that easily. If you have an MSSQL instance called SQLEXPRESS, it is enough to just run DSInit. If you need some other instance, run the following command: DSInit /sqlinstance:<instance name> For the default instance use “.” as the instance name: DSInit /sqlinstance:. You can find more information about sqlinstance and other switches in the DSInit documentation. If the database was created correctly you should see a dialog like this: When the storage database is ready you can run your application.

    Read the article

  • 1 Million IOPS

    - by GrumpyOldDBA
    As a keen follower of storage performance I couldn't help but be drawn to this article in The Register http://www.theregister.co.uk/2010/04/14/lsi_million_iops/ this morning. I gave my 5-year-old laptop a new lease of life with an SSD and, in combination with the old drive made external, managed to reduce the time of a demo query from 50-odd minutes down to 6 minutes. I also have 4 Silicon Power 32GB SSDs set up as RAID 0 on my home server, an overblown PC. http://www.futurestorage.co.uk/index.asp?selmanuf...(read more)

    Read the article

  • How many disks should I use to meet the capacity and IOPS needs?

    - by facebook-100005613813158
    An application needs 1.6 TB of storage capacity and performs 1000 IOPS. How many disks are required to meet the application requirements and offer acceptable response time? The disk specifications are as follows: drive capacity = 100 GB; 15K RPM; each disk can perform 50 IOPS. There are 4 candidates: 10, 12, 16, and 20. Which one is the most likely answer? In my opinion, 16 disks can only meet the capacity need but cannot meet the IOPS need, so the right answer should be 20 disks, right?
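    For reference, a quick back-of-the-envelope check of the two constraints, using only the figures stated in the question:

    \text{disks for capacity} = \lceil 1.6\,\text{TB} / 100\,\text{GB} \rceil = 16,\quad
    \text{disks for IOPS} = \lceil 1000 / 50 \rceil = 20,\quad
    \text{disks required} = \max(16, 20) = 20

    So the IOPS requirement, not capacity, is the binding constraint.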

    Read the article

  • Pysdm has disabled my ability to write to my storage partition

    - by Atlas
    I have a dual boot setup with Windows 7 and Mint 13 Cinnamon. As well as their respective partitions I also have a large one (NTFS) for storing all my music, videos, documents etc. I downloaded pysdm as I was told it would enable me to configure Linux to auto-mount my storage partition. It has indeed been helpful in auto-mounting my storage. However, since installing it I can no longer write to the partition which makes 500GB of my hard drive utterly useless! I've tried to unselect the "Mount file system in read only mode" option, but the program keeps re-checking it after I close that window (and even when I click apply). Why is it doing this and how can I get it to recognise that I need to read AND write on that partition?

    Read the article

  • Mountable online storage (no syncing)

    - by Sam
    I have a Linux VPS that I would like to turn into a media server. Like most cheap VPS's, it has a fairly small storage capacity. What I would like to do is attach the box to an online backup system such as SpiderOak or DropBox, where the files would reside and be directly accessible to either a webserver or media server software. Since the VPS hdd is small, I do not want the files to be synced to it. I would like a storage system that is online only. Ideally mountable like a network drive. Are there any services that suit my needs, or workarounds for services such as SpiderOak that do not require syncing?

    Read the article

  • Bacula Director and Storage in LAN

    - by B14D3
    I have two networks, LAN and DMZ. Machines in the DMZ are accessible from the internet (only over HTTP). In the LAN I have servers that see all LAN and all DMZ machines, but machines in the DMZ don't see any LAN servers. Machines in the LAN have access only to the LAN and DMZ, with no direct access to the internet and no access from the internet. DMZ <------ LAN DMZ ----X--->LAN I'm planning to configure Bacula as the major backup system. My plan is to install the Bacula Director and Storage daemon on the same server in the LAN for safety reasons. So my question is: will this configuration work? Is it possible for the Bacula Director and Storage daemon installed on a server in the LAN to back up servers that are in my DMZ? Or in this network configuration should Bacula be in the DMZ? (If yes, will I be able to back up servers in the LAN with it?)

    Read the article

  • Have both domain and non-domain users use NetApp CIFS storage

    - by zladuric
    My specific use case is that I have a NetApp CIFS storage that's in the domain, say, intranet. But I also have one Hyper-V host that's not in the domain. I can't allow it into the domain, but I need to create guests whose VHDs are on the storage. How can I achieve this? When I simply connect to a CIFS share and create a VHD there, it works, but when I try to add that VHD to the virtual machine, I get a "Failed to set folder permissions" error - probably due to the folder ownership belonging to the domain admin. Is there a way around this, so that both domain and non-domain users can use the same directory?

    Read the article

  • Intel Rapid Storage Technology service always crashes

    - by Massimo
    I'm running Windows 7 x64 on a system based on an Asus Z87-Deluxe motherboard; the storage is configured for RAID mode; there is a single SSD drive for the O.S. and two 4-TB disks in a RAID 1 setup for the data. I've installed the latest version of Intel's Rapid Storage Technology drivers, 12.8.0.1016. The program complains about its service not being running, and the service is actually stopped; if I try to start it, it crashes. I've already tried reinstalling the package, but nothing changed. All the disks work correctly, but the RST program is unusable. How can I fix this?

    Read the article

  • OS X server large scale storage and backup

    - by user135217
    I really hope this question doesn't come across as trolling or asking for buying advice. It's not intended. I've just started working for a small ad agency (40 employees). I actually quit being a system administrator a few years ago (too stressful!), but the company we're currently outsourcing our IT stuff to is doing such a bad job that I've felt compelled to get involved and do what I can to improve things. At the moment, all the company's data is stored on an 8TB external FireWire drive attached to a Mac Mini running OS X Server 10.6, which provides filesharing (using AFP) for the whole company. There is a single backup drive, which is actually a caddy containing two 3TB hard drives arranged in RAID 0 (arrggghhhh!), which someone brings in as and when and copies over all the data using Carbon Copy Cloner. That's the entirety of the infrastructure, and the whole backup and restore strategy. I've been having sleepless nights. I've just started augmenting the backup process with FreeBSD, ZFS, sparse bundles and snapshot sends to get everything offsite. I think this is a workable behind the scenes solution, but for people's day to day use I'm struggling. Given the quantity and importance of the data, I think we should really be looking towards enterprise level storage solutions, high availability and so on, but the whole company is all Mac all the time, and I cannot find equipment that will do what we need. No more Xserve; no rack storage; no large scale storage at all apart from that Pegasus R6 that doesn't seem all that great; the Mac Pro has Fibre Channel, but it's not a real server and it's ludicrously expensive; Xsan looks like it's on the way out; things like heartbeatd and failoverd have apparently been removed from Lion Server; the new Mac Mini only has Thunderbolt, which severely limits our choices; the list goes on and on. I'm really, really not trying to troll here. I love Macs, but I just genuinely don't know where I'm supposed to look for server stuff. I have considered Linux or FreeBSD and netatalk for serving files with all the server-y goodness those OSes bring, but some of the things I've read make me wonder if it's really the way to go. Also, in my own (admittedly quite cursory) experiments with it, I've struggled to get decent transfer speeds. I guess there's also the possibility of switching everyone off AFP and making them use SMB or NFS, but I understand that this can cause big problems with resource forks and file locks. I figure there must be plenty of all-Mac companies out there. If you're the sysadmin at one, what do you use? Any suggestions very gratefully received.

    Read the article

  • The new Auto Scaling Service in Windows Azure

    - by shiju
    One of the key features of the Cloud is on-demand scalability, which lets cloud application developers scale up or scale down the number of compute resources hosted in the Cloud. Auto Scaling provides the capability to dynamically scale your compute resources up and down based on user-defined policies, Key Performance Indicators (KPIs), health status checks, and schedules, without any manual intervention. Auto Scaling is an important feature to consider when designing and architecting cloud based solutions; it can unleash the real power of the Cloud for apps by providing truly on-demand scalability, and it can also guard the organizational budget for cloud based application deployments. In the past, you had to leverage the Microsoft Enterprise Library Autoscaling Application Block (WASABi) or a service like MetricsHub to implement automatic scaling for your cloud apps hosted on Windows Azure. WASABi required you to host the auto scaling block in a Windows Azure Worker Role to effectively apply the auto scaling behaviour to your Windows Azure apps. The newly announced Auto Scaling service in Windows Azure lets you add automatic scaling capability to your Windows Azure Compute Services such as Cloud Services, Web Sites and Virtual Machines. Unlike WASABi hosted in a Worker Role, you don't need to host any monitoring service to use the new Auto Scaling service, and the Auto Scaling service is available to individual Windows Azure Compute Services as part of scaling. Configure Auto Scaling for a Windows Azure Cloud Service Currently the Auto Scaling service supports Cloud Services, Web Sites and Virtual Machines. In this demo, I will use a Cloud Services app with a Web Role and a Worker Role. To enable Auto Scaling, select your Windows Azure app in the Windows Azure management portal and choose the "SCALE" tab. The Scale tab shows all the information regarding Auto Scaling. The image below shows that we currently have the AutoScale service disabled. To enable Auto Scaling, you need to choose either CPU or QUEUE. The QUEUE option is not available for Web Sites. The image below demonstrates how to configure Auto Scaling for a Web Role based on CPU utilization. We have configured the web role app to run with 1 to 5 Virtual Machine instances based on CPU utilization, with a target range of 50 to 80%. If aggregate utilization rises above 80% it will scale up instances, and it will scale down instances when utilization falls below 50%. The image below demonstrates how to configure Auto Scaling for a Worker Role app based on the messages added to a Windows Azure storage Queue. We configured the worker role app to run with 1 to 3 Virtual Machine instances based on the messages added to the Windows Azure storage Queue. Here we have specified the target number of messages per machine as 2000. The image below shows the summary of Auto Scaling for the Cloud Service after configuring the auto scaling service. Summary Auto Scaling is an extremely important behaviour of Cloud applications, providing on-demand scalability without any manual intervention. Windows Azure provides strong support for enabling Auto Scaling for apps deployed on the Windows Azure cloud platform. The new Auto Scaling service in Windows Azure lets you add automatic scaling capability to your Windows Azure Compute Services such as Cloud Services, Web Sites and Virtual Machines. In the new Auto Scaling service, you don't have to host any monitoring service as you had to with the WASABi block. The Auto Scaling service is an excellent alternative to manually hosting the WASABi block in a Worker Role app.

    Read the article

  • Looking for a suitable backup solution: Mac OS X to offsite CentOS 6 server, 1TB of working data

    - by Brady
    I'll start by saying what we have in place currently: An on-site file server (Mac OS X Server) that is used by GFX designers, who have a working set of 1TB of data. An offsite server with 2TB of available storage (CentOS 6). The Mac OS X server rsyncs data to the offsite server every 6 hours (rsync -avz --delete --progress -e ssh ...). The Mac OS X server does a full data backup to LTO-4 tape on a 10-day recycle (Mon-Fri for 2 weeks). rsync pushes about 60GB of file changes a day. The problem: The onsite tape backup is failing, as 1TB of graphics files doesn't compress well enough to fit onto an 800GB LTO-4 tape. Backup is incredibly slow when doing a full backup. It's a pain in the backside getting people to remember to change the tape, which often gets forgotten, etc. The quick solution: Buy an LTO-5 drive and tapes. However, this has been turned down because of the cost... What I would like: Something that works in the same way rsync works: only changed data is sent over the wire, and it can be scheduled to run multiple times during the day. Data that is sent is compressed and sent over SSH. Something that keeps a 14-day retention but doesn't keep duplicate data. So, as an example, if I have 1TB of working data and 60GB of changes are made each day, then I expect around 1.84TB of data to be stored on the offsite server. It must work with the Mac OS X server and the CentOS 6 server, and it must not cost an arm and a leg - it must be a cheaper solution than buying an LTO-5 drive with tapes (around £1500). It should be able to be set up to run autonomously, and have some sort of control panel that will allow an admin to easily restore a file/folder. Any recommendations?

    Read the article

  • How is thread local storage (__thread) implemented on Linux?

    - by anon
    __thread Foo foo; How is "foo" actually resolved? Does the compiler silently replace every instance of "foo" with a function call? Is "foo" stored somewhere relative to the bottom of the stack, and the compiler stores this as "hey, for each thread, have this space near the bottom of the stack, and foo is stored as "offset x from bottom of stack"" ? Insights please.

    Read the article

  • How should I ethically approach user password storage for later plaintext retrieval?

    - by Shane
    As I continue to build more and more websites and web applications I am often asked to store users' passwords in a way that they can be retrieved if/when the user has an issue (either to email a forgotten password link, walk them through over the phone, etc.) When I can I fight bitterly against this practice and I do a lot of ‘extra’ programming to make password resets and administrative assistance possible without storing their actual password. When I can’t fight it (or can’t win) then I always encode the password in some way so that it at least isn’t stored as plaintext in the database—though I am aware that if my DB gets hacked it won’t take much for the culprit to crack the passwords as well—so that makes me uncomfortable. In a perfect world folks would update passwords frequently and not duplicate them across many different sites—unfortunately I know MANY people that have the same work/home/email/bank password, and have even freely given it to me when they need assistance. I don’t want to be the one responsible for their financial demise if my DB security procedures fail for some reason. Morally and ethically I feel responsible for protecting what can be, for some users, their livelihood even if they are treating it with much less respect. I am certain that there are many avenues to approach and arguments to be made for salting hashes and different encoding options, but is there a single ‘best practice’ when you have to store them? In almost all cases I am using PHP and MySQL if that makes any difference in the way I should handle the specifics. Additional Information for Bounty I want to clarify that I know this is not something you want to have to do and that in most cases refusal to do so is best. I am, however, not looking for a lecture on the merits of taking this approach; I am looking for the best steps to take if you do take this approach. In a note below I made the point that websites geared largely toward the elderly, mentally challenged, or very young can become confusing for people when they are asked to perform a secure password recovery routine. Though we may find it simple and mundane, in those cases some users need the extra assistance of either having a service tech help them into the system or having it emailed/displayed directly to them. In such systems the attrition rate from these demographics could hobble the application if users were not given this level of access assistance, so please answer with such a setup in mind. Thanks to Everyone This has been a fun question with lots of debate and I have enjoyed it. In the end I selected an answer that both retains password security (I will not have to keep plain text or recoverable passwords), but also makes it possible for the user base I specified to log into a system without the major drawbacks I have found from normal password recovery. As always there were about 5 answers that I would like to have marked correct for different reasons, but I had to choose the best one--all the rest got a +1. Thanks everyone!
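    On the "at least don't store plaintext" baseline mentioned above, a minimal salted-hash sketch follows. It is written in C# purely for illustration (the asker's stack is PHP/MySQL, where password_hash()/password_verify() fill the same role); the class name and parameter values are assumptions for the sketch, not recommendations from the original post.

    using System;
    using System.Security.Cryptography;

    static class PasswordHasher
    {
        // Illustrative parameters; pick an iteration count that is acceptably slow on your hardware.
        const int SaltSize = 16, HashSize = 32, Iterations = 10000;

        // Returns "iterations:salt:hash"; all three parts are safe to store in the database.
        public static string HashPassword(string password)
        {
            using (var kdf = new Rfc2898DeriveBytes(password, SaltSize, Iterations))
            {
                return Iterations + ":" +
                       Convert.ToBase64String(kdf.Salt) + ":" +
                       Convert.ToBase64String(kdf.GetBytes(HashSize));
            }
        }

        // Re-derives the hash from the supplied password and compares it to the stored record.
        public static bool Verify(string password, string stored)
        {
            string[] parts = stored.Split(':');
            byte[] salt = Convert.FromBase64String(parts[1]);
            byte[] expected = Convert.FromBase64String(parts[2]);
            using (var kdf = new Rfc2898DeriveBytes(password, salt, int.Parse(parts[0])))
            {
                byte[] actual = kdf.GetBytes(expected.Length);
                int diff = 0;
                for (int i = 0; i < expected.Length; i++) // constant-time comparison
                    diff |= expected[i] ^ actual[i];
                return diff == 0;
            }
        }
    }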

    Read the article

  • SQL SERVER – Size of Index Table for Each Index – Solution 2

    - by pinaldave
    Earlier I ran a puzzle where I asked a question regarding the size of the index table for each index in a database, over here: SQL SERVER – Size of Index Table – A Puzzle to Find Index Size for Each Index on Table. I received a good number of answers and blogged about them here: SQL SERVER – Size of Index Table for Each Index – Solution. As a comment on that blog I received another very interesting comment, which provides a near-accurate answer to the original question. Many thanks to Rama Mathanmohan for providing a wonderful solution.
    SELECT OBJECT_NAME(i.OBJECT_ID) AS TableName,
        i.name AS IndexName,
        i.index_id AS IndexID,
        8 * SUM(a.used_pages) AS 'Indexsize(KB)'
    FROM sys.indexes AS i
    JOIN sys.partitions AS p ON p.OBJECT_ID = i.OBJECT_ID AND p.index_id = i.index_id
    JOIN sys.allocation_units AS a ON a.container_id = p.partition_id
    GROUP BY i.OBJECT_ID, i.index_id, i.name
    ORDER BY OBJECT_NAME(i.OBJECT_ID), i.index_id
    Let me know if you have any better script for the same. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, Readers Contribution, SQL, SQL Authority, SQL Data Storage, SQL Index, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • New Reference Configuration: Accelerate Deployment of Virtual Infrastructure

    - by monica.kumar
    Today, Oracle announced the availability of Oracle VM blade cluster reference configuration based on Sun servers, storage and Oracle VM software. Assembling and integrating software and hardware systems from different vendors can be a huge barrier to deploying virtualized infrastructures as it is often a complicated, time-consuming, risky and expensive process. Using this tested configuration can help reduce the time to configure and deploy a virtual infrastructure by up to 98% as compared to putting together multi-vendor configurations. Once ready, the infrastructure can be used to easily deploy enterprise applications in a matter of minutes to hours as opposed to days/weeks, by using Oracle VM Templates. Find out more: Press Release Business whitepaper Technical whitepaper

    Read the article

  • Introduction to Oracle’s New StorageTek SL150 Modular Tape Library

    - by Cinzia Mascanzoni
    Join the product announcement webcast on Thursday, July 12, 2012 at 3pm CET (2pm GMT). This webcast will help you understand Oracle's new StorageTek SL150 Modular Tape Library, which is the first scalable tape library designed for small and midsized companies that are experiencing high growth. Built from Oracle software and StorageTek library technology, it delivers a cost-effective combination of ease of use and scalability, resulting in overall TCO savings. During the webcast Cindy McCurley, from Tape Product Management, will introduce you to the latest addition to the Oracle Tape Storage product portfolio, the SL150 Modular Tape Library. This 60-minute webcast will cover the product's features, positioning, unique selling points and a competitive overview of StorageTek. You can submit your questions via WebEx chat and there will be a live Q&A session at the end of the webcast. Register NOW!

    Read the article

  • Data, Log and Temp file placement

    - by jchang
    First, especially for all the people with SAN storage: drive letters are of no consequence. What matters is the actual physical disk layout. Forget capacity; pay attention to the number of spindles supporting each RAID group. If the RAID group is shared with other applications, make sure the SLA guarantees read and write latency. One very large company conducted a stress test in the QA environment. The SAN admin carved the LUNs from the same pool of disks as production, but thought he had a really...(read more)

    Read the article

  • How to use Azure storage for uploading and displaying pictures.

    - by Magnus Karlsson
    Basic set up of Azure storage for local development and production. This is a somewhat completion of the following guide from http://www.windowsazure.com/en-us/develop/net/how-to-guides/blob-storage/ that also involves a practical example that I believe is commonly used, i.e. upload and present an image from a user.   First we set up for local storage and then we configure for them to work on a web role. Steps: 1. Configure connection string locally. 2. Configure model, controllers and razor views.   1. Setup connectionsstring 1.1 Right click your web role and choose “Properties”. 1.2 Click Settings. 1.3 Add setting. 1.4 Name your setting. This will be the name of the connectionstring. 1.5 Click the ellipsis to the right. (the ellipsis appear when you mark the area. 1.6 The following window appears- Select “Windows Azure storage emulator” and click ok.   Now we have a connection string to use. To be able to use it we need to make sure we have windows azure tools for storage. 2.1 Click Tools –> Library Package manager –> Manage Nuget packages for solution. 2.2 This is what it looks like after it has been added.   Now on to what the code should look like. 3.1 First we need a view which collects images to upload. Here Index.cshtml. 1: @model List<string> 2:  3: @{ 4: ViewBag.Title = "Index"; 5: } 6:  7: <h2>Index</h2> 8: <form action="@Url.Action("Upload")" method="post" enctype="multipart/form-data"> 9:  10: <label for="file">Filename:</label> 11: <input type="file" name="file" id="file1" /> 12: <br /> 13: <label for="file">Filename:</label> 14: <input type="file" name="file" id="file2" /> 15: <br /> 16: <label for="file">Filename:</label> 17: <input type="file" name="file" id="file3" /> 18: <br /> 19: <label for="file">Filename:</label> 20: <input type="file" name="file" id="file4" /> 21: <br /> 22: <input type="submit" value="Submit" /> 23: 24: </form> 25:  26: @foreach (var item in Model) { 27:  28: <img src="@item" alt="Alternate text"/> 29: } 3.2 We need a controller to receive the post. Notice the “containername” string I send to the blobhandler. I use this as a folder for the pictures for each user. If this is not a requirement you could just call it container or anything with small characters directly when creating the container. 1: public ActionResult Upload(IEnumerable<HttpPostedFileBase> file) 2: { 3: BlobHandler bh = new BlobHandler("containername"); 4: bh.Upload(file); 5: var blobUris=bh.GetBlobs(); 6: 7: return RedirectToAction("Index",blobUris); 8: } 3.3 The handler model. I’ll let the comments speak for themselves. 1: public class BlobHandler 2: { 3: // Retrieve storage account from connection string. 4: CloudStorageAccount storageAccount = CloudStorageAccount.Parse( 5: CloudConfigurationManager.GetSetting("StorageConnectionString")); 6: 7: private string imageDirecoryUrl; 8: 9: /// <summary> 10: /// Receives the users Id for where the pictures are and creates 11: /// a blob storage with that name if it does not exist. 12: /// </summary> 13: /// <param name="imageDirecoryUrl"></param> 14: public BlobHandler(string imageDirecoryUrl) 15: { 16: this.imageDirecoryUrl = imageDirecoryUrl; 17: // Create the blob client. 18: CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient(); 19: 20: // Retrieve a reference to a container. 21: CloudBlobContainer container = blobClient.GetContainerReference(imageDirecoryUrl); 22: 23: // Create the container if it doesn't already exist. 
24: container.CreateIfNotExists(); 25: 26: //Make available to everyone 27: container.SetPermissions( 28: new BlobContainerPermissions 29: { 30: PublicAccess = BlobContainerPublicAccessType.Blob 31: }); 32: } 33: 34: public void Upload(IEnumerable<HttpPostedFileBase> file) 35: { 36: // Create the blob client. 37: CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient(); 38: 39: // Retrieve a reference to a container. 40: CloudBlobContainer container = blobClient.GetContainerReference(imageDirecoryUrl); 41: 42: if (file != null) 43: { 44: foreach (var f in file) 45: { 46: if (f != null) 47: { 48: CloudBlockBlob blockBlob = container.GetBlockBlobReference(f.FileName); 49: blockBlob.UploadFromStream(f.InputStream); 50: } 51: } 52: } 53: } 54: 55: public List<string> GetBlobs() 56: { 57: // Create the blob client. 58: CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient(); 59: 60: // Retrieve reference to a previously created container. 61: CloudBlobContainer container = blobClient.GetContainerReference(imageDirecoryUrl); 62: 63: List<string> blobs = new List<string>(); 64: 65: // Loop over blobs within the container and output the URI to each of them 66: foreach (var blobItem in container.ListBlobs()) 67: blobs.Add(blobItem.Uri.ToString()); 68: 69: return blobs; 70: } 71: } 3.4 So, when the files have been uploaded we will get them to present them to out user in the index page. Pretty straight forward. In this example we only present the image by sending the Uri’s to the view. A better way would be to save them up in a view model containing URI, metadata, alternate text, and other relevant information but for this example this is all we need.   4. Now press F5 in your solution to try it out. You can see the storage emulator UI here:     4.1 If you get any exceptions or errors I suggest to first check if the service Is running correctly. I had problem with this and they seemed related to the installation and a reboot fixed my problems.     5. Set up for Cloud storage. To do this we need to add configuration for cloud just as we did for local in step one. 5.1 We need our keys to do this. Go to the windows Azure menagement portal, select storage icon to the right and click “Manage keys”. (Image from a different blog post though).   5.2 Do as in step 1.but replace step 1.6 with: 1.6 Choose “Manually entered credentials”. Enter your account name. 1.7 Paste your Account Key from step 5.1. and click ok.   5.3. Save, publish and run! Please feel free to ask any questions using the comments form at the bottom of this page. I will get back to you to help you solve any questions. Our consultancy agency also provides services in the Nordic regions if you would like any further support.

    Read the article
