Search Results

Search found 5933 results on 238 pages for 'mass storage'.

Page 26 of 238

  • Advantages of SQL Backup Pro

    - by Grant Fritchey
    Getting backups of your databases in place is a fundamental issue for protecting the business. Yes, I said business: not data, not databases, but business. Because of a lack of good, tested backups, companies have gone completely out of business or suffered traumatic financial loss. That’s just a simple fact (outlined with a few examples here). So you want to get backups right. That’s a big part of why we make Red Gate SQL Backup Pro work the way it does. Yes, you could just use native backups, but you’d be missing a few advantages that we provide over and above what you get out of the box from Microsoft. Let’s talk about them.

    Guidance. If you’re a hard-core DBA with 20+ years of experience on every version of SQL Server and several other data platforms besides, you may already know what you need in order to get a set of tested backups in place. If you’re not, a little help is a good thing. To set up backups for your servers, we supply a wizard that steps you through the entire process and guides you down good paths. For example, if your databases are in the Full recovery model, you should set up transaction log backups to run on a regular basis. When you choose a transaction log backup from the Backup Type list, only those databases that are in Full recovery are listed. This makes it very easy to be sure you have a log backup set up for all the databases that need one, and for none of the databases where it isn’t possible. There are other examples of guidance throughout the product. If you have the responsibility of managing backups but very little knowledge or time, we can help you out: throughout the software you’ll notice little green question marks, and clicking one opens a window with additional information about the topic in question, which should help guide you through some of the tougher decisions you may have to make while setting up your backup jobs.

    Backup Copies. As part of the wizard you can choose to make a copy of your backup on your network. This process runs as part of the Red Gate SQL Backup engine: after the backup completes (so it doesn’t cause any additional blocking or resource use within the backup process), the engine copies the backup to the network location you define. Creating a copy acts as a mechanism of protection for your backups: you can then back up that copy, or do other things with it, without affecting the original backup file. Achieving the same thing within the native Microsoft backup engine requires either an additional backup or additional scripting.

    Offsite Storage. Red Gate also offers the ability to copy your backup to the cloud as a further, off-site protection of your backups. It’s a service we provide and expose through the Backup wizard. Your backup completes first, just as with the network backup copy, and then an asynchronous process copies that backup to cloud storage. Again, this is built right into the wizard (and even the command-line calls to SQL Backup), so it’s part of a single process within your system. With native backup you would need to write additional scripts, possibly outside of T-SQL, to make this happen. Before you can use this with your backups you’ll need to do a little setup, but that too is built into the product: you’ll be directed to the web site for our hosted storage, where you can set up an account.
    Compression. If you have SQL Server 2008 Enterprise, or you’re on SQL Server 2008 R2 or greater with a Standard or Enterprise license, then you have native backup compression. It’s built right in and works well. But if you need even more compression, you might want to consider Red Gate SQL Backup Pro. We offer four levels of compression within the product, which means you can get a little compression quickly, or sacrifice some CPU time and get even more compression. You decide. As a simple example, I backed up AdventureWorks2012 using both methods of compression. The resulting file from native compression was 53 MB; our file was 33 MB. That’s a file roughly 38% smaller, which is not a small number once we start talking gigabytes. We even provide guidance to help you determine which level of compression is right for you and your system. In this test, if you wanted maximum compression with minimum CPU use, you’d probably go with Level 2, which gets you almost as much compression as Level 3 but uses fewer resources, and is still better than native compression by 10%.

    Restore Testing. Backups are vital, but a backup is just a file until you restore it. How do you know that you can restore that backup? Of course, you’ll use CHECKSUM to validate that what was read from disk during the backup process is what gets written to the backup file, and you’ll use VERIFYONLY to check that the backup header and the checksums on the backup file are valid. But this isn’t a complete test of the backup; the only complete test is a restore. So what you really need is a process that tests your backups. This is something you’ll have to schedule separately from your backups, but we provide a couple of mechanisms to help you out. First, when you create a backup schedule (all done through our wizard, which gives you as much guidance as you get when running backups), you get the option of creating a reminder to create a job to test your restores. You can enable or disable this as you choose when creating your scheduled backups. Once you’re ready to schedule test restores for your databases, we have a wizard for this as well. After you choose the databases and restores you want to test (all configurable for automation), you decide whether to restore to a specified copy or to the original database. If you’re doing your tests on a new server (probably the best choice), you can just overwrite the original database if it’s there; if not, you may want to create a new database each time you test your restores. Another part of validating your backups is ensuring that they can pass consistency checks, so we have DBCC built right into the process. You can even decide how you want DBCC run: which error messages to include, and whether to limit or add to the checks being run. With this you could offload some DBCC checks from your production system, running only the physical checks on your production box but the full check against this backup. That makes backup testing not just a general safety process, but a performance enhancer as well. Finally, you can have the database deleted if the tests pass, leave it in place, or delete it regardless of whether the tests pass. All of this is automated and scheduled through SQL Agent jobs on your servers. Running your databases through this process ensures that you don’t just have backups; you have tested backups.
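    For readers comparing with the native tooling, the verification steps described above map onto T-SQL commands along these lines (a minimal sketch: the database name, paths and logical file names are placeholders; list the real logical names with RESTORE FILELISTONLY before restoring):

        BACKUP DATABASE AdventureWorks2012
        TO DISK = N'D:\Backup\AdventureWorks2012.bak'
        WITH COMPRESSION, CHECKSUM, INIT;

        -- Validate the backup header and page checksums without restoring:
        RESTORE VERIFYONLY
        FROM DISK = N'D:\Backup\AdventureWorks2012.bak'
        WITH CHECKSUM;

        -- The only complete test is an actual restore, ideally on a test server:
        RESTORE DATABASE AdventureWorks2012_Test
        FROM DISK = N'D:\Backup\AdventureWorks2012.bak'
        WITH MOVE N'AdventureWorks2012_Data' TO N'D:\TestData\AW2012_Test.mdf',
             MOVE N'AdventureWorks2012_Log' TO N'D:\TestData\AW2012_Test.ldf';

        -- Followed by a consistency check of the restored copy:
        DBCC CHECKDB (AdventureWorks2012_Test) WITH NO_INFOMSGS;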
    Single Point of Management. If you have more than one server to maintain, getting backups set up can be a tedious process. With Red Gate SQL Backup Pro you can connect to multiple servers and then manage all your databases’ and servers’ backups from a single location. You’ll be able to see what is scheduled, what has run successfully and what has failed, all from a single interface, without having to connect to each server.

    Log Shipping Wizard. If you want to set up log shipping as part of a disaster recovery process, it can frequently be a pain to configure correctly. We supply a wizard that walks you through every step of the process, including setting up alerts so you’ll know should your log shipping fail.

    Summary. You want to get your backups right. As outlined above, Red Gate SQL Backup Pro will absolutely help you there. We supply a number of processes and capabilities above and beyond what you get with native SQL Server. Plus, with our guidance, hints and reminders, you will get your backups set up in a way that protects your business.

    Read the article

  • The new Auto Scaling Service in Windows Azure

    - by shiju
    One of the key features of the cloud is on-demand scalability, which lets cloud application developers scale the number of compute resources hosted on the cloud up or down. Auto Scaling provides the capability to dynamically scale your compute resources up and down based on user-defined policies, Key Performance Indicators (KPIs), health status checks, and schedules, without any manual intervention. Auto Scaling is an important feature to consider when designing and architecting cloud-based solutions: it can unleash the real power of the cloud for your apps by providing truly on-demand scalability, and it can also guard the organizational budget for cloud-based application deployments. In the past, you had to leverage the Microsoft Enterprise Library Autoscaling Application Block (WASABi) or a service like MetricsHub to implement automatic scaling for your cloud apps hosted on Windows Azure. WASABi required you to host the autoscaling block in a Windows Azure Worker Role to apply autoscaling behaviour to your Windows Azure apps. The newly announced Auto Scaling service in Windows Azure lets you add automatic scaling capability to your Windows Azure compute services such as Cloud Services, Web Sites and Virtual Machines. Unlike WASABi hosted in a Worker Role, you don’t need to host any monitoring service to use the new Auto Scaling service; it is available to the individual Windows Azure compute services as part of their Scale settings.

    Configure Auto Scaling for a Windows Azure Cloud Service. Currently the Auto Scaling service supports Cloud Services, Web Sites and Virtual Machines. In this demo I will be using a Cloud Services app with a Web Role and a Worker Role. To enable Auto Scaling, select your Windows Azure app in the Windows Azure management portal and choose the “SCALE” tab, which shows all the information regarding Auto Scaling; initially the AutoScale setting is disabled. To enable Auto Scaling, you choose either CPU or QUEUE (the QUEUE option is not available for Web Sites). For the Web Role, we configure Auto Scaling based on CPU utilization: the app runs with 1 to 5 virtual machine instances based on a target CPU utilization range of 50 to 80%. If the aggregate utilization rises above 80%, instances are scaled up; when utilization drops below 50%, instances are scaled down. For the Worker Role, we configure Auto Scaling based on the messages added to a Windows Azure storage queue: the app runs with 1 to 3 virtual machine instances depending on queue depth, with a target of 2,000 messages per machine. The portal then shows a summary of the Auto Scaling configuration for the Cloud Service.

    Summary. Auto Scaling is an extremely important behaviour of cloud applications, providing on-demand scalability without any manual intervention, and Windows Azure now provides strong support for enabling it for apps deployed on the platform. The new Auto Scaling service lets you add automatic scaling capability to your Windows Azure compute services such as Cloud Services, Web Sites and Virtual Machines. With the new service you don’t have to host a monitoring service the way you did with the WASABi block, which makes it an excellent alternative to manually hosting the WASABi block in a Worker Role app.

    Read the article

  • Reasons to disable game save during combat (e.g. Mass Effect 2)

    - by Steve V.
    So I've been playing Mass Effect 2 (PC) and one of the things I've noticed is that you can only save your game when you're not engaged in combat. As soon as the first enemy shows up on your radar, the save button is disabled. Once combat is over, save functionality reappears. It seems reasonable to assume that Mass Effect 2 is a state machine, and therefore, the internal state of the program at any moment can be captured and reloaded later. This is basically a solved problem - games have been designed this way since the Half-Life era. It also seems reasonable to assume that BioWare knew what they were doing when they made the decision not to follow this model - it's a tried and true system; BioWare wouldn't have done it the way they did without some good reason. What reasons are there to disable game save functionality during combat?

    Read the article

  • Mass saving xls as csv

    - by korki
    Here's the situation: I've got to convert about 300 files from .xls to .csv, and I wrote a simple macro to do it. Here's the code:

        Dim wb As Workbook
        For Each wb In Application.Workbooks
            wb.Activate
            Rows("1:1").Select
            Selection.Delete Shift:=xlUp
            ActiveWorkbook.SaveAs Filename:= _
                "C:\samplepath\CBM Cennik " & ActiveWorkbook.Name & " 2010-04-02.csv", _
                FileFormat:=xlCSV, CreateBackup:=False
        Next wb

    But it doesn't do exactly what I want: it saves the file "example.xls" as "example.xls 2010-04-02.csv", while what I need is "example 2010-04-02.csv". Need support, guys ;)
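    A likely fix (a sketch, assuming every workbook name ends in a file extension) is to strip the extension before composing the target name; the Select/Delete pair is also collapsed into a single Delete:

        Dim wb As Workbook
        Dim baseName As String
        For Each wb In Application.Workbooks
            wb.Activate
            Rows("1:1").Delete Shift:=xlUp
            ' Drop everything from the last "." onward before building the new name
            baseName = Left(wb.Name, InStrRev(wb.Name, ".") - 1)
            ActiveWorkbook.SaveAs Filename:= _
                "C:\samplepath\CBM Cennik " & baseName & " 2010-04-02.csv", _
                FileFormat:=xlCSV, CreateBackup:=False
        Next wb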

    Read the article

  • Using regular expressions to do mass replace in Notepad++

    - by user638820
    I've been trying to replace (and translate) this text, and I don't know what pattern I should use for the thousands of place entries I need to translate to Spanish. I want to use regular expressions in Notepad++. I give four variations below; the lower-case word after the name of the place is its type, and it must not be confused with a name like Agency Village, where "Village" is part of the name itself.

    Input:

        Missouri 5,988,927
        Adrian City city 1,677
        Advance city 1,347
        Affton CDP 20,307
        Agency Village village 684
        Airport Drive village 698

    Desired output:

        | [[Adrian City (Misuri)|Adrian City]] || ciudad || 1677
        |-
        | [[Advance (Misuri)|Advance]] || ciudad || 1347
        |-
        | [[Afton (Misuri)|Afton]] || CDP || 20307
        |-
        | [[Agency Village (Misuri)|Agency Village]] || villa || 684
        |-
        | [[Airport Drive (Misuri)|Airport Drive]] || villa || 698
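    One approach that should come close, in a recent Notepad++ (Search > Replace, Search Mode set to "Regular expression", Match case enabled; this assumes each entry sits on its own line and the type word is always the last word before the population figure):

        Pass 1, build the table rows:
          Find what:    ^(.+) (city|village|CDP) ([\d,]+)$
          Replace with: | [[\1 (Misuri)|\1]] || \2 || \3\n|-

        Pass 2, translate the type words:
          Find what:    \|\| city \|\|      Replace with: || ciudad ||
          Find what:    \|\| village \|\|   Replace with: || villa ||

        Pass 3, strip the thousands separators (click Replace All until no more matches):
          Find what:    (\d),(\d)
          Replace with: \1\2

    Because (.+) is greedy, a name such as "Agency Village" keeps its capitalized "Village" and only the final lower-case type word is captured; lines without a type word (like the "Missouri 5,988,927" total) are left untouched.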

    Read the article

  • Fastest way to do mass update

    - by user356004
    Let’s say you have a table with about 5 million records and an nvarchar(max) column populated with large text data. You want to set this column to NULL where SomeOtherColumn = 1, in the fastest possible way. The brute-force UPDATE does not work very well here because it creates one large implicit transaction and takes forever. Doing the updates in small batches of 50K records at a time works, but it’s still taking 47 hours to complete on a beefy 32-core/64 GB server. Is there any way to do this update faster? Are there any magic query hints or table options that sacrifice something else (like concurrency) in exchange for speed? NOTE: Creating a temp table or temp column is not an option because this nvarchar(max) column involves lots of data and so consumes lots of space!
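    For reference, the batching pattern the question describes usually takes roughly this shape (a sketch; table and column names are placeholders, and the extra predicate assumes already-processed rows can be recognized by their NULL value, so the loop can be restarted safely):

        DECLARE @rows INT = 1;
        WHILE @rows > 0
        BEGIN
            UPDATE TOP (50000) dbo.BigTable
            SET BigTextColumn = NULL
            WHERE SomeOtherColumn = 1
              AND BigTextColumn IS NOT NULL;

            SET @rows = @@ROWCOUNT;
            CHECKPOINT; -- under the SIMPLE recovery model, lets log space be reused between batches
        END

    An index on SomeOtherColumn (ideally filtered to the unprocessed rows) keeps each batch from rescanning the whole table.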

    Read the article

  • JSF : able to do mass update but unable to update a single row in a datatable

    - by nash
    I have a simple data object: Car. I am showing the properties of Car objects in a JSF datatable. If I display the properties using inputText tags, I am able to get the modified values in the managed bean. However, I want only a single row to be editable, so I have placed an edit button in a separate column, and an inputText and an outputText for every property of Car; the edit button just toggles the rendering of the inputText and outputText. I also placed an update button in a separate column, which is used to save the updated values. However, on clicking the update button, I still get the old values instead of the modified values. Here is the complete code:

        public class Car {
            int id;
            String brand;
            String color;

            public Car(int id, String brand, String color) {
                this.id = id;
                this.brand = brand;
                this.color = color;
            }

            // getters and setters for id, brand, color
        }

    Here is the managed bean:

        import java.util.ArrayList;
        import java.util.List;
        import javax.faces.bean.ManagedBean;
        import javax.faces.bean.RequestScoped;
        import javax.faces.component.UIData;

        @ManagedBean(name = "CarTree")
        @RequestScoped
        public class CarTree {

            int editableRowId;
            List<Car> carList;
            private UIData myTable;

            public CarTree() {
                carList = new ArrayList<Car>();
                carList.add(new Car(1, "jaguar", "grey"));
                carList.add(new Car(2, "ferari", "red"));
                carList.add(new Car(3, "camri", "steel"));
            }

            public String update() {
                System.out.println("updating...");
                // the statements below print the old values; I was expecting the modified values
                System.out.println("new car brand is:" + ((Car) myTable.getRowData()).brand);
                System.out.println("new car color is:" + ((Car) myTable.getRowData()).color);
                // how do I get the modified row values in this method??
                return null;
            }

            public int getEditableRowId() { return editableRowId; }
            public void setEditableRowId(int editableRowId) { this.editableRowId = editableRowId; }
            public UIData getMyTable() { return myTable; }
            public void setMyTable(UIData myTable) { this.myTable = myTable; }
            public List<Car> getCars() { return carList; }
            public void setCars(List<Car> carList) { this.carList = carList; }
        }

    And here is the JSF 2 page:

        <?xml version='1.0' encoding='UTF-8' ?>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
                  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml"
              xmlns:h="http://java.sun.com/jsf/html"
              xmlns:f="http://java.sun.com/jsf/core">
            <h:head>
                <title>Facelet Title</title>
            </h:head>
            <h:body>
                <h:form id="carForm" prependId="false">
                    <h:dataTable id="dt" binding="#{CarTree.myTable}" value="#{CarTree.cars}" var="car">
                        <h:column>
                            <h:outputText value="#{car.id}" />
                        </h:column>
                        <h:column>
                            <h:outputText value="#{car.brand}" rendered="#{CarTree.editableRowId != car.id}"/>
                            <h:inputText value="#{car.brand}" rendered="#{CarTree.editableRowId == car.id}"/>
                        </h:column>
                        <h:column>
                            <h:outputText value="#{car.color}" rendered="#{CarTree.editableRowId != car.id}"/>
                            <h:inputText value="#{car.color}" rendered="#{CarTree.editableRowId == car.id}"/>
                        </h:column>
                        <h:column>
                            <h:commandButton value="edit">
                                <f:setPropertyActionListener target="#{CarTree.editableRowId}" value="#{car.id}" />
                            </h:commandButton>
                        </h:column>
                        <h:column>
                            <h:commandButton value="update" action="#{CarTree.update}"/>
                        </h:column>
                    </h:dataTable>
                </h:form>
            </h:body>
        </html>

    However, if I just keep the inputText tags and remove the rendered attributes, I do get the modified values in the update method. How can I get the modified values with the single-row edit?

    Read the article

  • Efficient mass string search problem.

    - by Monomer
    The problem: A large static list of strings is provided, along with a pattern string comprised of data and wildcard elements (* and ?). The idea is to return all the strings that match the pattern - simple enough. Current solution: I'm currently using a linear approach of scanning the large list and globbing each entry against the pattern. My question: Are there any suitable data structures I can store the large list in, such that the search's complexity is less than O(n)? Perhaps something akin to a suffix trie? I've also considered using bi- and tri-grams in a hashtable, but the logic required to evaluate a match based on a merge of the lists of words returned and the pattern is a nightmare, and I'm not convinced it's the correct approach.
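    For concreteness, the linear baseline described above looks like this in Python (a sketch; fnmatchcase implements * and ? globbing, matching case-sensitively regardless of operating system):

        from fnmatch import fnmatchcase

        def search(strings, pattern):
            # O(n) scan: glob every entry in the list against the pattern
            return [s for s in strings if fnmatchcase(s, pattern)]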

    Read the article

  • Track mass email campaigns

    - by daeliur
    Litmus released an email analytics service last month (May 2010). See here: http://litmusapp.com/email-analytics They boast a very cool "read rate" tracking: they can track normal reads, skims, and glanced/deleted. How can they track skims and glanced/deleted? This seems impossible to me :) They also track forwards and prints. Prints are easy (they include a CSS @media print query with a background image). But forwards? I think this might be a combination of subsequent opens and different IPs/referring URLs. However, this means that if I open my mail and re-read it from another computer, it counts as a forward. Any ideas on this one? To summarize: Litmus Email Analytics says they can track email reads, skims, glanced/deleted, prints and forwards. How do they do it (skims, glanced/deleted and forwards)?

    Read the article

  • How is thread local storage (__thread) implemented on Linux?

    - by anon
    __thread Foo foo; How is foo actually resolved? Does the compiler silently replace every instance of foo with a function call? Or is foo stored somewhere relative to the bottom of the stack, with the compiler recording it as "for each thread, reserve this space near the bottom of the stack, and store foo at offset x from the bottom"? Insights please.

    Read the article

  • Excel Hyperlink mass update

    - by IMHO
    I have a spreadsheet with thousands of rows. Each row contains a hyperlink with a path. The path is no longer valid, but it is easily fixable by replacing its first part with the correct value. Example: current hyperlink: F:\Help\index.html Needed: P:\SystemHelp\index.html The problem is that the standard Find/Replace does not "see" the content of hyperlinks. Is the only way to write a macro, or is there another way to do it?
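    If a macro turns out to be the way to go, a sketch like the following illustrates the idea (assumes the links are real hyperlink objects on the active sheet rather than HYPERLINK() formulas; the paths are the ones from the example):

        Sub FixHyperlinkPaths()
            Dim h As Hyperlink
            For Each h In ActiveSheet.Hyperlinks
                ' Swap the leading part of the stored address
                If Left(h.Address, 8) = "F:\Help\" Then
                    h.Address = "P:\SystemHelp\" & Mid(h.Address, 9)
                End If
            Next h
        End Sub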

    Read the article

  • Can't mass-assign protected attributes: user

    - by Ben Aluan
    I'm working on a simple app that requires me to submit a form. I created two models.

    user.rb:

        class User < ActiveRecord::Base
          attr_accessible :email
          has_many :item
        end

    item.rb:

        class Item < ActiveRecord::Base
          attr_accessible :user_id
          belongs_to :user
        end

    Instead of creating a user using the user form view, I'm trying to create the user using the item form view.

    items/_form.html.haml:

        = nested_form_for @item do |form|
          = form.fields_for :user do |builder|
            = builder.text_field :email
          = form.submit "Save"

    Did I miss something here? I'm using nested_form_for btw. Thank you.

    Read the article

  • mass add time in a file

    - by atinder
    I have a file in which a time appears nearly a hundred times, like:

        00:01:32
        00:01:33
        00:01:36
        .......................

    How can I add 2 seconds or 2 minutes to all the times in the file, so that I get:

        00:01:34
        00:01:35
        00:01:38
        ..................
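    One way to do this is with a small script (a Python sketch; it assumes every HH:MM:SS token in the file is a time to shift, that the results stay within 24 hours, and that "times.txt" stands in for the actual file name):

        import re

        DELTA_SECONDS = 2  # use 120 to add 2 minutes instead

        def shift(match):
            # Parse HH:MM:SS, add the offset, and re-format, wrapping at midnight
            h, m, s = map(int, match.group(0).split(":"))
            total = (h * 3600 + m * 60 + s + DELTA_SECONDS) % 86400
            return "%02d:%02d:%02d" % (total // 3600, (total % 3600) // 60, total % 60)

        with open("times.txt") as f:
            text = f.read()

        with open("times.txt", "w") as f:
            f.write(re.sub(r"\b\d{2}:\d{2}:\d{2}\b", shift, text))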

    Read the article

  • Mass 301 redirects mvc

    - by Massive Boisson
    Hi (this is a common problem with a few twists). We are on IIS 6, without ISAPI_Rewrite, with a .NET 4 and MVC 2 app. We would like to 301-forward a bunch of old URLs to matching new URLs. (We are deploying a new site; the Google index will still point to the old URLs, and we need to map those URLs to the new ones.) a) Is there a way to do this through IIS (but without ISAPI_Rewrite), and how? b) Is there a way to do this through the application code, and how? All answers really appreciated. Thanks a lot --MB

    Read the article

  • How should I ethically approach user password storage for later plaintext retrieval?

    - by Shane
    As I continue to build more and more websites and web applications, I am often asked to store users’ passwords in a way that they can be retrieved if/when the user has an issue (either to email a forgotten-password link, walk them through over the phone, etc.). When I can, I fight bitterly against this practice, and I do a lot of ‘extra’ programming to make password resets and administrative assistance possible without storing their actual password. When I can’t fight it (or can’t win), then I always encode the password in some way so that it at least isn’t stored as plaintext in the database—though I am aware that if my DB gets hacked it won’t take much for the culprit to crack the passwords as well—so that makes me uncomfortable. In a perfect world folks would update passwords frequently and not duplicate them across many different sites—unfortunately I know MANY people that have the same work/home/email/bank password, and have even freely given it to me when they need assistance. I don’t want to be the one responsible for their financial demise if my DB security procedures fail for some reason. Morally and ethically I feel responsible for protecting what can be, for some users, their livelihood, even if they are treating it with much less respect. I am certain that there are many avenues to approach and arguments to be made for salting hashes and different encoding options, but is there a single ‘best practice’ when you have to store them? In almost all cases I am using PHP and MySQL, if that makes any difference in the way I should handle the specifics.

    Additional Information for Bounty: I want to clarify that I know this is not something you want to have to do, and that in most cases refusal to do so is best. I am, however, not looking for a lecture on the merits of taking this approach; I am looking for the best steps to take if you do take it. In a note below I made the point that websites geared largely toward the elderly, mentally challenged, or very young can become confusing for people when they are asked to perform a secure password-recovery routine. Though we may find it simple and mundane, in those cases some users need the extra assistance of either having a service tech help them into the system or having the password emailed/displayed directly to them. In such systems the attrition rate from these demographics could hobble the application if users were not given this level of access assistance, so please answer with such a setup in mind.

    Thanks to Everyone: This has been a fun question with lots of debate, and I have enjoyed it. In the end I selected an answer that both retains password security (I will not have to keep plain-text or recoverable passwords) and makes it possible for the user base I specified to log into a system without the major drawbacks I have found in normal password recovery. As always, there were about 5 answers that I would like to have marked correct for different reasons, but I had to choose the best one--all the rest got a +1. Thanks everyone!

    Read the article

  • Pi help with PHP (mass looping)

    - by Pieman
    My primary question is: is this a lot of loops?

        while ($decimals < 50000 and $remainder != "0") {
            $number = floor($remainder / $currentdivider); // always round down: 10/3 = 3, 10/7 = 1
            $remainder = $remainder % $currentdivider;     // 10 % 3 = 1
            $thisnumber = $thisnumber . $number;           // append the next digit
            $remainder = $remainder . 0;                   // bring down a zero, as in long division
            $decimals += 1;
        }

    Or could I fit more into it without the server crashing/lagging? Also, is there a more efficient way of doing the above (e.g. finding out that 1/3 = 0.3... to 50,000 decimals)? Finally: I'm doing this for a pi formula, the (1 - 1/3 + 1/5 - 1/7 ...) one, and I'm wondering if there is a better one (in PHP). I have found one that finds pi to 2000 digits in 4 seconds, but that's not what I want. I want an infinite series that converges closer to pi, so that on every refresh users can view it getting closer... see: http://zombiewrath.com/pi.php (old one) and 'zombiewrath.com/superpi.php' (newer one). But obviously converging using the above formula takes A LONG time. Are there any other 'loop'-like pi formulas (workable in PHP) that converge faster? Thanks a lot...

    Read the article

  • SQL SERVER – Size of Index Table for Each Index – Solution 2

    - by pinaldave
    Earlier I ran a puzzle where I asked a question regarding the size of the index table for each index in a database, over here: SQL SERVER – Size of Index Table – A Puzzle to Find Index Size for Each Index on Table. I received a good number of answers, and I blogged about them here: SQL SERVER – Size of Index Table for Each Index – Solution. As a comment on that blog I received another very interesting comment, one that provides a near-accurate answer to the original question. Many thanks to Rama Mathanmohan for providing a wonderful solution.

        SELECT OBJECT_NAME(i.OBJECT_ID) AS TableName,
               i.name AS IndexName,
               i.index_id AS IndexID,
               8 * SUM(a.used_pages) AS 'Indexsize(KB)'
        FROM sys.indexes AS i
        JOIN sys.partitions AS p
            ON p.OBJECT_ID = i.OBJECT_ID AND p.index_id = i.index_id
        JOIN sys.allocation_units AS a
            ON a.container_id = p.partition_id
        GROUP BY i.OBJECT_ID, i.index_id, i.name
        ORDER BY OBJECT_NAME(i.OBJECT_ID), i.index_id

    Let me know if you have a better script for the same. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, Readers Contribution, SQL, SQL Authority, SQL Data Storage, SQL Index, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • New Reference Configuration: Accelerate Deployment of Virtual Infrastructure

    - by monica.kumar
    Today, Oracle announced the availability of Oracle VM blade cluster reference configuration based on Sun servers, storage and Oracle VM software. Assembling and integrating software and hardware systems from different vendors can be a huge barrier to deploying virtualized infrastructures as it is often a complicated, time-consuming, risky and expensive process. Using this tested configuration can help reduce the time to configure and deploy a virtual infrastructure by up to 98% as compared to putting together multi-vendor configurations. Once ready, the infrastructure can be used to easily deploy enterprise applications in a matter of minutes to hours as opposed to days/weeks, by using Oracle VM Templates. Find out more: Press Release Business whitepaper Technical whitepaper

    Read the article

  • Introduction to Oracle’s New StorageTek SL150 Modular Tape Library

    - by Cinzia Mascanzoni
    Join the product announcement webcast on Thursday, July 12, 2012 at 3pm CET (2pm GMT). This webcast will help you understand Oracle's new StorageTek SL150 modular tape library, the first scalable tape library designed for small and midsized companies that are experiencing high growth. Built from Oracle software and StorageTek library technology, it delivers a cost-effective combination of ease of use and scalability, resulting in overall TCO savings. During the webcast, Cindy McCurley from Tape Product Management will introduce you to the latest addition to the Oracle tape storage product portfolio, the SL150 Modular Tape Library. This 60-minute webcast will cover the product's features, positioning, unique selling points and a competitive overview of StorageTek. You can submit your questions via WebEx chat, and there will be a live Q&A session at the end of the webcast. Register NOW!

    Read the article

  • Data, Log and Temp file placement

    - by jchang
    First, especially for all the people with SAN storage: drive letters are of no consequence. What matters is the actual physical disk layout. Forget capacity; pay attention to the number of spindles supporting each RAID group. If the RAID group is shared with other applications, make sure the SLA guarantees read and write latency. One very large company conducted a stress test in the QA environment. The SAN admin carved the LUNs from the same pool of disks as production, but thought he had a really...(read more)

    Read the article

  • How to use Azure storage for uploading and displaying pictures.

    - by Magnus Karlsson
    Basic setup of Azure storage for local development and production. This more or less completes the following guide from http://www.windowsazure.com/en-us/develop/net/how-to-guides/blob-storage/ by adding a practical example of something I believe is commonly done: uploading and presenting an image from a user. First we set up local storage, and then we configure it to work in a web role.

    Steps: 1. Configure the connection string locally. 2. Configure the model, controllers and razor views.

    1. Set up the connection string.
    1.1 Right-click your web role and choose "Properties".
    1.2 Click Settings.
    1.3 Add a setting.
    1.4 Name your setting. This will be the name of the connection string.
    1.5 Click the ellipsis button to the right (it appears when you select the field).
    1.6 In the window that appears, select "Windows Azure storage emulator" and click OK.

    Now we have a connection string to use. To be able to use it we need to make sure we have the Windows Azure tools for storage.
    2.1 Click Tools -> Library Package Manager -> Manage NuGet Packages for Solution.
    2.2 After the package has been added, it shows up in the list of installed packages.

    Now on to what the code should look like.

    3.1 First we need a view which collects images to upload. Here, Index.cshtml:

        @model List<string>

        @{
            ViewBag.Title = "Index";
        }

        <h2>Index</h2>
        <form action="@Url.Action("Upload")" method="post" enctype="multipart/form-data">
            <label for="file">Filename:</label>
            <input type="file" name="file" id="file1" />
            <br />
            <label for="file">Filename:</label>
            <input type="file" name="file" id="file2" />
            <br />
            <label for="file">Filename:</label>
            <input type="file" name="file" id="file3" />
            <br />
            <label for="file">Filename:</label>
            <input type="file" name="file" id="file4" />
            <br />
            <input type="submit" value="Submit" />
        </form>

        @foreach (var item in Model) {
            <img src="@item" alt="Alternate text"/>
        }

    3.2 We need a controller to receive the post. Notice the "containername" string I send to the BlobHandler; I use this as a folder for the pictures for each user. If this is not a requirement, you could just call it "container", or any other all-lower-case name, directly when creating the container.

        public ActionResult Upload(IEnumerable<HttpPostedFileBase> file)
        {
            BlobHandler bh = new BlobHandler("containername");
            bh.Upload(file);
            var blobUris = bh.GetBlobs();

            return RedirectToAction("Index", blobUris);
        }

    3.3 The handler model. I'll let the comments speak for themselves.

        public class BlobHandler
        {
            // Retrieve storage account from connection string.
            CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
                CloudConfigurationManager.GetSetting("StorageConnectionString"));

            private string imageDirecoryUrl;

            /// <summary>
            /// Receives the user's Id for where the pictures are and creates
            /// a blob container with that name if it does not exist.
            /// </summary>
            /// <param name="imageDirecoryUrl"></param>
            public BlobHandler(string imageDirecoryUrl)
            {
                this.imageDirecoryUrl = imageDirecoryUrl;

                // Create the blob client.
                CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();

                // Retrieve a reference to a container.
                CloudBlobContainer container = blobClient.GetContainerReference(imageDirecoryUrl);

                // Create the container if it doesn't already exist.
                container.CreateIfNotExists();

                // Make it available to everyone.
                container.SetPermissions(
                    new BlobContainerPermissions
                    {
                        PublicAccess = BlobContainerPublicAccessType.Blob
                    });
            }

            public void Upload(IEnumerable<HttpPostedFileBase> file)
            {
                // Create the blob client.
                CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();

                // Retrieve a reference to a container.
                CloudBlobContainer container = blobClient.GetContainerReference(imageDirecoryUrl);

                if (file != null)
                {
                    foreach (var f in file)
                    {
                        if (f != null)
                        {
                            CloudBlockBlob blockBlob = container.GetBlockBlobReference(f.FileName);
                            blockBlob.UploadFromStream(f.InputStream);
                        }
                    }
                }
            }

            public List<string> GetBlobs()
            {
                // Create the blob client.
                CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();

                // Retrieve reference to a previously created container.
                CloudBlobContainer container = blobClient.GetContainerReference(imageDirecoryUrl);

                List<string> blobs = new List<string>();

                // Loop over blobs within the container and output the URI to each of them.
                foreach (var blobItem in container.ListBlobs())
                    blobs.Add(blobItem.Uri.ToString());

                return blobs;
            }
        }

    3.4 When the files have been uploaded, we present them to our user on the index page. Pretty straightforward. In this example we only present the images by sending the URIs to the view. A better way would be to wrap them in a view model containing URI, metadata, alternate text and other relevant information, but for this example this is all we need.

    4. Now press F5 in your solution to try it out; you can follow the results in the storage emulator UI.
    4.1 If you get any exceptions or errors, I suggest first checking whether the emulator service is running correctly. I had problems with this that seemed related to the installation, and a reboot fixed them.

    5. Set up for cloud storage. To do this we need to add configuration for the cloud just as we did for local storage in step one.
    5.1 We need our keys to do this. Go to the Windows Azure management portal, select the storage icon to the right, and click "Manage keys".
    5.2 Do as in step 1, but replace step 1.6 with:
      1.6 Choose "Manually entered credentials". Enter your account name.
      1.7 Paste your Account Key from step 5.1 and click OK.
    5.3 Save, publish and run!

    Please feel free to ask any questions using the comments form at the bottom of this page. I will get back to you to help you solve them. Our consultancy agency also provides services in the Nordic regions if you would like any further support.

    Read the article
