Search Results

Search found 6770 results on 271 pages for 'azure storage'.

Page 23 of 271

  • Data storage solutions for rapidly running out of space

    - by Grimlockz
    I have 2 web servers (1 live and the other backup), and the issue I have is that our storage is rapidly running out. All the data on the server is used by our customers and new documents are uploaded to the server daily, so nothing can be deleted as it's always in use. We use a flat file structure with no database. I'm seeking solutions or ideas for the best place to move our data to. The data has to be secure and needs to run on a Linux environment. Not sure where to start - clusters, VMware, or are there such solutions for huge file servers?

    Read the article

  • StorageWorks MSA60 and other storage related questions

    - by Mejmo
    Hi, I do not have deep knowledge of the storage area, so sorry for asking evidently stupid questions :) We are thinking about getting an HP StorageWorks MSA60 for storing our VMs. Do we need another DL server with a controller so that we could use iSCSI? Do we need to get a P800 controller for doing that? I cannot imagine how it is connected together, actually ... MSA60 - DL server with P800 controller - and servers that are running VMs connected with iSCSI to this DL server? Or does the MSA60 directly support iSCSI, so the DL server is not necessary? What is inside this MSA60? Is it possible to install an OS on it? Thank you.

    Read the article

  • Linqpad with Table Storage

    - by kaleidoscope
    LINQPad, as we all know, has been a wonderful tool for running ad-hoc queries. With Azure Table storage in the picture, LINQPad fell out of use and we shifted focus to Cloud Storage Studio, only to realize the limited and strange querying capabilities of CSS. With some tweaking to LINQPad we can get the comfortable old shoe of ad-hoc queries back for Azure Table storage. Steps:
    1. Start LINQPad.
    2. Right-click in the query window and select “Query Properties”.
    3. In the Additional References, add references to Microsoft.WindowsAzure.StorageClient, System.Data.Services.Client.dll and the assembly containing the implementation of the DataServiceContext class tied to the Azure table storage.
    4. In the Additional Namespace Imports, import the same three namespaces mentioned above.
    5. Then we need to provide the following details:
       a. Table storage account name and shared key.
       b. The DataServiceContext-implementing class in your code.
       c. A LINQ query. For example:
          var storageAccountName = "myStorageAccount";  // Enter valid storage account name
          var storageSharedKey = "mysharedKey";  // Enter valid storage account shared key
          var uri = new System.Uri("http://table.core.windows.net/");
          var storageAccountInfo = new CloudStorageAccount(new StorageCredentialsAccountKey(storageAccountName, storageSharedKey), false);
          var serviceContext = new TweetPollDataServiceContext(storageAccountInfo);  // Specify the DataServiceContext implementation
          // The query
          var query = from row in serviceContext.Table
                      select row;
          query.Dump();
    Sarang, K
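    As a follow-up to the example above, a filtered ad-hoc query follows the same shape. The sketch below is illustrative only: it reuses the hypothetical TweetPollDataServiceContext and its Table property from the steps above and assumes the entities expose the standard PartitionKey property.
        // Illustrative only: narrow the ad-hoc query before dumping results in LINQPad.
        var recentRows =
            (from row in serviceContext.Table
             where row.PartitionKey == "2010-06"   // server-side filter on the partition key
             select row)
            .Take(20);                             // fetch just a small page of results

        recentRows.Dump();                         // LINQPad renders the results as a grid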

    Read the article

  • PARTNER WEBCAST (June 4): Enhance Customer experience with Nimble Storage SmartStack for Oracle with Cisco

    - by Zeynep Koch
    Live Webcast: Enhance Customer Experience with Nimble Storage SmartStack for Oracle with Cisco - a webcast for resellers who sell Oracle workloads to customers. Wednesday, June 4, 2014, 8:00 AM PDT / 11 AM EDT. Register today. Nimble Storage SmartStack™ for Oracle provides a pre-validated reference architecture that speeds deployments and minimizes risk. IT and Oracle administrators and architects realize the importance of the underlying operating system, virtualization software, and storage in maintaining service levels and staying in budget. In this webinar, you will learn how Nimble Storage SmartStack for Oracle provides a converged infrastructure for Oracle database online transaction processing (OLTP) and online analytical processing (OLAP) environments with Oracle Linux and Oracle VM. SmartStack delivers the performance and reliability needed for deploying Oracle on a single symmetric multiprocessing (SMP) server or for running Oracle Real Application Clusters (RAC) on multiple nodes. Nimble Storage SmartStack for Oracle with Cisco can help you provide:
    - Improved Oracle performance
    - Stress-free data protection and DR of your Oracle database
    - Higher availability and uptime
    - Accelerated Oracle development and improved testing
    - All for dramatically less than what you’re paying now
    Presenters:
    - Doan Nguyen, Senior Principal Product Marketing Director, Oracle
    - Vanessa Scott, Business Development Manager, Cisco
    - Ibrahim “Ibby” Rahmani, Product and Solutions Marketing, Nimble Storage
    Join this event to learn from our Nimble Storage and Oracle experts how to optimize your customers' Oracle environments. Register today to learn more!

    Read the article

  • Big AdventureWorks2012

    - by jamiet
    Last week I launched AdventureWorks on Azure, an initiative to make SQL Azure accessible to anyone, in my blog post AdventureWorks2012 now available for all on SQL Azure. Since then I think it's fair to say that the reaction has been lukewarm, with 31 insertions into the [dbo].[SqlFamily] table and only 8 donations via PayPal to support it; on the other hand, those 8 donors have been incredibly generous and we nearly have enough in the bank to cover a full year’s worth of availability. It was always my intention to try and make this offering more appealing, and to that end I have used an adapted version of Adam Machanic’s make_big_adventure.sql script to massively increase the amount of data in the database and give the community more scope to really push SQL Azure and see what it is capable of. There are now two new tables in the database:
    - [dbo].[bigProduct] with 25200 rows
    - [dbo].[bigTransactionHistory] with 7827579 rows
    The credentials to log in and use AdventureWorks on Azure are as they were before:
    - Server: mhknbn2kdz.database.windows.net
    - Database: AdventureWorks2012
    - User: sqlfamily
    - Password: sqlf@m1ly
    Remember, if you want to support AdventureWorks on Azure, simply click here to launch a pre-populated PayPal Send Money form - all you have to do is log in, fill in an amount, and click Send. We need more donations to keep this up and running, so if you think this is useful and worth supporting, please please donate.
    I mentioned that I had to adapt Adam’s script, the main reasons being:
    - Cross-database queries are not yet supported in SQL Azure, so I had to create a local copy of [dbo].[spt_values] rather than reference the one in [master]
    - SELECT…INTO is not supported in SQL Azure
    - The 1GB limit of the SQL Azure Web edition meant that there would not be enough space to store all the data generated by Adam’s script, so I had to decrease the total number of rows
    The amended script is available on my SkyDrive at https://skydrive.live.com/redir.aspx?cid=550f681dad532637&resid=550F681DAD532637!16756&parid=550F681DAD532637!16755
    @Jamiet
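    For anyone who prefers to poke at the new tables from code rather than SSMS, a minimal ADO.NET sketch along these lines should work with the credentials listed above. This is not from the original post; the Encrypt setting and the exact user id format are assumptions (some clients expect the form sqlfamily@mhknbn2kdz).
        using System;
        using System.Data.SqlClient;

        class BigAdventureWorksProbe
        {
            static void Main()
            {
                var builder = new SqlConnectionStringBuilder
                {
                    DataSource = "mhknbn2kdz.database.windows.net",
                    InitialCatalog = "AdventureWorks2012",
                    UserID = "sqlfamily",          // some clients require the user@server form
                    Password = "sqlf@m1ly",
                    Encrypt = true                 // SQL Azure connections should be encrypted
                };

                using (var connection = new SqlConnection(builder.ConnectionString))
                using (var command = new SqlCommand(
                    "SELECT COUNT(*) FROM [dbo].[bigTransactionHistory]", connection))
                {
                    connection.Open();
                    Console.WriteLine("bigTransactionHistory rows: {0}", command.ExecuteScalar());
                }
            }
        }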

    Read the article

  • Squid on an Azure VM

    - by LantisGaius
    I can't get it to work. Here's exactly what I did:
    1. Create a new Azure VM, Windows Server 2012.
    2. RDP to the new VM.
    3. Download & extract Squid for Windows (2.7.STABLE8).
    4. Rename the conf files (squid, mime & cachemgr).
    5. Add the following lines at the end of squid.conf:
       auth_param basic program c:/squid/libexec/ncsa_auth.exe c:/squid/etc/passwd.txt
       auth_param basic children 5
       auth_param basic realm Welcome to http://abcde.fg Squid Proxy!
       auth_param basic credentialsttl 12 hours
       auth_param basic casesensitive off
       acl ncsa_users proxy_auth REQUIRED
       http_access allow ncsa_users
    6. Use http://www.htaccesstools.com/htpasswd-generator-windows/ to create passwd.txt.
    7. Test passwd.txt via c:/squid/libexec/ncsa_auth.exe c:/squid/etc/passwd.txt (success).
    8. squid -z
    9. squid -i
    10. net start squid (no errors so far).
    11. Go to https://manage.windowsazure.com, Virtual Machines - myVM - Endpoints.
    12. Add Endpoint: Name: Squid, Protocol: TCP, Public Port: 80, Private Port: 3128.
    That's it. Unfortunately, it doesn't work. I think I screwed something up at the endpoint? I'm not sure.. help? EDIT: I'm testing it via Firefox - Options - Advanced - Network, and the exact error is "The Proxy Server is refusing connections." I'm using my DNS name "abcdef.cloudapp.net" as the proxy server and port 80 (since that's my public endpoint).

    Read the article

  • Azure load-balancing strategy

    - by growse
    I'm currently building out a small web deployment using VM instances on MS Azure. The main problem I'm facing at the moment is trying to figure out how to get the load balancing to detect if a particular VM has failed and not route traffic to that VM. As far as I can tell, there are only two load-balancing options:
    1. Have multiple VMs (web01, web02, web03, etc.) within the same 'cloud service' behind a single VIP, and configure the endpoints to be load balanced.
    2. Create multiple 'cloud services', put a single web VM in each, and create a traffic manager service across all these services.
    It appears that (1) is extremely simplistic and doesn't attempt to do any host failure detection. (2) appears to be much more capable, but requires me to put all my web servers in their own individual cloud services. Traffic Manager appears to be aimed much more at a geographic failover scenario, where you have multiple cloud services across different regions. This approach also has the disadvantage that my web servers won't be able to communicate with my databases on internal IP addresses, unlike scenario (1). What's the best approach here?

    Read the article

  • Azure's Ubuntu 12.04 fails to install PHP5

    - by Alex Kennberg
    Similar to this article from Azure themselves: http://www.windowsazure.com/en-us/manage/linux/common-tasks/install-lamp-stack/ I am trying to install PHP5 on an Ubuntu 12.04 virtual machine. However, it fails when installing ssl-cert.
    $ sudo apt-get install php5
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    php5 is already the newest version.
    0 upgraded, 0 newly installed, 0 to remove and 49 not upgraded.
    1 not fully installed or removed.
    After this operation, 0 B of additional disk space will be used.
    Do you want to continue [Y/n]? y
    Setting up ssl-cert (1.0.28) ...
    Could not create certificate. Openssl output was:
    Generating a 2048 bit RSA private key
    ............................+++
    ...................................................................................................................+++
    writing new private key to '/etc/ssl/private/ssl-cert-snakeoil.key'
    -----
    problems making Certificate Request
    140320238503584:error:0D07A097:asn1 encoding routines:ASN1_mbstring_ncopy:string too long:a_mbstr.c:154:maxsize=64
    dpkg: error processing ssl-cert (--configure):
    subprocess installed post-installation script returned error exit status 1
    Errors were encountered while processing:
    ssl-cert
    E: Sub-process /usr/bin/dpkg returned an error code (1)
    Any tips appreciated.

    Read the article

  • Q&A: Can you develop for the Windows Azure Platform using Windows XP?

    - by Eric Nelson
    This question has come up several times recently as we take several hundred UK developers through 6 Weeks of Windows Azure training (sorry – we are full).
    Short answer: in the main, yes.
    Longer answer: the question is sparked by the requirements as stated on the Windows Azure SDK download page. Namely, Supported Operating Systems: Windows 7; Windows Vista; Windows Vista 64-bit Editions Service Pack 1; Windows Vista Business; Windows Vista Business 64-bit edition; Windows Vista Enterprise; Windows Vista Enterprise 64-bit edition; Windows Vista Home Premium; Windows Vista Home Premium 64-bit edition; Windows Vista Service Pack 1; Windows Vista Service Pack 2; Windows Vista Ultimate; Windows Vista Ultimate 64-bit edition. Notice there is no mention of Windows XP. However, things are not quite that simple. The Windows Azure Platform consists of three released technologies:
    - Windows Azure
    - SQL Azure
    - Windows Azure platform AppFabric
    The Windows Azure SDK is only for one of the three technologies, Windows Azure. What about SQL Azure and AppFabric? Well, it turns out that you can develop for both of these technologies just fine with Windows XP:
    - SQL Azure development is really just SQL Server development with a few gotchas – and for local development you can simply use SQL Server 2008 R2 Express (other versions will also work).
    - AppFabric also has no local simulation environment, and its SDK will install fine on Windows XP (SDK download).
    Actually, it is also possible to do Windows Azure development on Windows XP if you are willing to always work directly against the real Azure cloud running in Microsoft datacentres. However, in practice this would be painful and time consuming, which is why the Windows Azure SDK installs a local simulation environment. Therefore, if you want to develop for Windows Azure I would recommend you either upgrade from Windows XP to Windows 7 or… use a virtual machine running Windows 7. If this is a temporary requirement, then you could consider building a virtual machine using the Windows 7 Enterprise 90 day eval. Or you could download a pre-configured VHD – but I can’t quite find the link for a Windows 7 VHD. Pointers welcomed. Thanks.

    Read the article

  • USB drives not recognized all of a sudden

    - by Siddharth
    I have tried most of the advice on askubuntu and other sites, from enabling usb_storage to fdisk -l, but I am unable to find steps to get it working again.
    lsusb results:
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 004 Device 002: ID 413c:3012 Dell Computer Corp. Optical Wheel Mouse
    Bus 005 Device 002: ID 413c:2105 Dell Computer Corp. Model L100 Keyboard
    Bus 001 Device 005: ID 8564:1000
    dmesg | tail reports:
    [ 69.567948] usb 1-4: USB disconnect, device number 4
    [ 74.084041] usb 1-6: new high-speed USB device number 5 using ehci_hcd
    [ 74.240484] Initializing USB Mass Storage driver...
    [ 74.256033] scsi5 : usb-storage 1-6:1.0
    [ 74.256145] usbcore: registered new interface driver usb-storage
    [ 74.256147] USB Mass Storage support registered.
    [ 74.257290] usbcore: deregistering interface driver usb-storage
    fdisk -l reports:
    Device Boot Start End Blocks Id System
    /dev/sda1 * 2048 972656639 486327296 83 Linux
    /dev/sda2 972658686 976771071 2056193 5 Extended
    /dev/sda5 972658688 976771071 2056192 82 Linux swap / Solaris
    I think I need steps to install and get the usb_storage module working.
    Edit: I tried sudo modprobe -v usb-storage, which reports:
    insmod /lib/modules/3.2.0-48-generic-pae/kernel/drivers/usb/storage/usb-storage.ko
    Still no usb driver mounted, nor does a device show up in /dev. Any step-by-step process to debug and fix this will be really helpful. Thanks.

    Read the article

  • New SQL Azure Development Accelerator Core promotional offer announced

    - by Eric Nelson
    This is (almost) a straight copy and paste but represents an important announcement worthy of a little more “exposure” :-)
    Starting August 1, 2010, we will release a new SQL Azure Development Accelerator Core promotional offer. This new offer will give you the flexibility to purchase commitment quantities of SQL Azure Business Edition databases independent of other Windows Azure platform services at a deeply discounted monthly price. The offer is valid only for a six month term. You may purchase in 10 GB increments the amount of our Business Edition relational database that you require (each Business Edition database is capable of storing up to 50 GB). The offer price will be $74.95 per 10 GB per month. This promotional offer represents 25% off of our normal consumption rates. Monthly Business Edition relational database usage exceeding the purchased commitment amount and usage for other Windows Azure platform services for this offer will be charged at our normal consumption rates. Please click here for full details of our new SQL Azure Development Accelerator Core offer.
    Related Links:
    - Details of 5GB and 50GB databases have been released
    - http://ukazure.ning.com UK community site
    - Getting started with the Windows Azure Platform

    Read the article

  • New partnership allows auto-transposition of client/server application to Windows Azure

    - by Webgui
    The economics of IT is changing rapidly, and organizations are searching to widen and secure the availability of their systems while at the same time lowering costs, which is exactly what the cloud is meant to do. Running your systems on Microsoft’s Windows Azure cloud, for example, would improve and secure the availability, accessibility and scalability (both up and down) of your systems and support the new IT economics. However, in order to take advantage of the cloud's promise of lower cost of ownership, applications must be built or adjusted to work on that platform, and in most cases this is not a simple task. Even existing web applications cannot always be transferred to Azure without some changes, and for client/server applications the task is far more challenging, even to the point where it seems impossible. The reason is the gap between client/server desktop technology and the cloud's. For that reason, most of the known methodologies to migrate existing client/server applications actually involve a rewrite of the desktop systems for the cloud. A unique approach is introduced by Visual WebGui: it creates a virtualization layer atop the ASP.NET web server, moves the transformed or generated .NET code to that layer, and then, using a patent-pending protocol, renders a user interface within a plain browser. The end result is pure .NET code that serves as the base code for a rich web application, and now, due to a collaboration with Microsoft Windows Azure, Visual WebGui provides the shortest path from client/server to the Azure cloud, handling close to 95% of the transformation to the cloud platform automatically. Application migration to Azure without migraines. More information about the Instant CloudMove Azure solution here.

    Read the article

  • Windows Azure Event

    - by Blog Author
    Get cloud ready with Windows Azure The cloud is everywhere and here at Microsoft we’re flying high with our cloud computing release, Windows Azure. As most of you saw at the Professional Developers Conference, the reaction to Windows Azure has been nothing short of “wow” – and based on your feedback, we’ve organized this special, all-day Windows Azure Firestarter event to help you take full advantage of the cloud. Maybe you've already watched a webcast, attended a recent MSDN Event on the topic, or done your own digging on Azure. Well, here's your chance to go even deeper. This one-of-a-kind event will focus on helping developers get ‘cloud ready’ with concrete details and hands-on tactics. We’ll start by revealing Microsoft’s strategic vision for the cloud, and then offer an end-to-end view of the Windows Azure platform from a developer’s perspective. We’ll also talk about migrating your data and existing applications (regardless of platform) onto the cloud. We’ll finish up with an open panel and lots of time to ask questions. Following this event, please join us for an engaging conversation about any and all Cloud Computing topics. This FREE event is hosted by Northwest Cloud, the cloud agnostic community group, and sponsored by Microsoft. http://www.nwcloud.org/redmond/2010-04-06

    Read the article

  • Feature Updates to the Windows Azure Portal

    - by Clint Edmonson
    Lots of activity over at the Windows Azure portal this weekend, including some exciting new features and major improvements to existing features. Here are the highlights:
    - Support for Managing Co-administrators
      - Set up account co-administrators to allow others to share service management duties for each Azure subscription
    - Import/Export support for SQL Databases
      - Export existing SQL Azure databases to blob storage using SQL Server 2012’s BACPAC format
      - Create a new SQL Azure database from an existing BACPAC stored in blob storage
    - Storage Container Management and Access Control
      - Create blob storage containers directly within the portal
      - Edit their public/private access settings
      - Drill into storage containers and see the blobs contained within them
    - Improved Cloud Service Status Notifications
      - Detailed health status information about cloud services and roles as they transition between states
    - Virtual Machine Experience Enhancements
      - Option to automatically delete corresponding VHD files from blob storage when deleting VM disks
    - Service Bus Management and Monitoring
      - Ability to create and manage service bus Namespaces, Queues, Topics, Relays and Subscriptions
      - Rich monitoring of Topics, Queues, and Subscriptions with detailed and customizable dashboard metrics
      - Entity status (Topic, Queue, or Subscription) can be changed interactively via dashboard
      - Direct links to the Access Control Services (ACS) namespaces when working with service bus access keys
    - Media Services Monitoring Support
      - Monitor encoding jobs that are queued for processing as well as active, failed and queued tasks for encoding jobs
    The above features are all now live in production and available to use immediately. If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using them today. Stay tuned to my twitter feed for Windows Azure announcements, updates, and links: @clinted
    Reference ID: P7VVJCM38V8R

    Read the article

  • MSBuild / PowerShell: Copy SQL Server 2012 database to SQL Azure via BACPAC (for Continuous Integration)

    - by giveme5minutes
    I'm creating a continuous integration MSBuild script which copies a database in on-premise SQL Server 2012 to SQL Azure. Easy, right?
    Methods
    After a fair bit of research I've come across the following methods:
    1. Use PowerShell to access the DAC library directly, then use the MSBuild PowerShell extension to wrap the script. This would require installing PowerShell 3 and working out how to make the MSBuild PowerShell extension work with it, as apparently MS moved the DAC API to a different namespace in the latest version of the library. PowerShell would give direct access to the API, but may require quite a bit of boilerplate.
    2. Use the sample DAC Framework Client Side Tools, which requires compiling them myself, as the downloads available from Codeplex only include the Hosted version. It would also require fixing them to use DAC 3.0 classes, as they appear to currently use an earlier version of DAC. I could then call these tools from an <Exec Command="" /> in the MSBuild script. Less boilerplate, and if I hit any bumps in the road I can just make changes to the source.
    Processes
    Using whichever method, the process could be either:
    1. Export from on-premise SQL Server 2012 to a local BACPAC
    2. Upload the BACPAC to blob storage
    3. Import the BACPAC to SQL Azure via the Hosted DAC
    Or:
    1. Export from on-premise SQL Server 2012 to a local BACPAC
    2. Import the BACPAC to SQL Azure via the Client DAC
    Question
    All of the above seems to be quite a lot of effort for something that seems to be a standard feature... so before I start reinventing the wheel and documenting the results for all to see, is there something really obvious that I've missed here? Is there a pre-written script that MS has released that I have not yet uncovered? There's a command in the GUI of SQL Server Management Studio 2012 that does EXACTLY what I'm trying to do (right-click on the local database, click "Tasks", click "Deploy Database to SQL Azure"). Surely if it's a few clicks in the GUI it must be a single command on the command line somewhere??
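    For what it's worth, the Client DAC route can also be driven from a small C# console step instead of PowerShell. The sketch below uses the DacFx client API (Microsoft.SqlServer.Dac); the connection strings, database names and file path are placeholders rather than anything from the question, and it assumes the DAC 3.0 assemblies are installed on the build machine.
        // Hedged sketch: export a local SQL Server 2012 database to a BACPAC, then import it
        // into SQL Azure via the client-side DacFx API. All names/paths below are placeholders.
        using Microsoft.SqlServer.Dac;

        class BacpacCopy
        {
            static void Main()
            {
                const string bacpacPath = @"C:\ci\MyDatabase.bacpac";

                // Export from the on-premise server.
                var local = new DacServices("Data Source=localhost;Integrated Security=True");
                local.ExportBacpac(bacpacPath, "MyDatabase");

                // Import into SQL Azure (the Client DAC path, no blob storage hop needed).
                var azure = new DacServices(
                    "Data Source=myserver.database.windows.net;User ID=ciuser@myserver;Password=<password>;Encrypt=True");
                using (var package = BacPackage.Load(bacpacPath))
                {
                    azure.ImportBacpac(package, "MyDatabase");
                }
            }
        }
    Such a tool could then be invoked from the MSBuild script with a plain <Exec Command="..." /> task, keeping the retry and boilerplate concerns out of MSBuild itself.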

    Read the article

  • Windows Azure v1.7 Spring Release Today – New Management Dashboard

    - by ToStringTheory
    Today, Microsoft will be publicly releasing a new version of Azure for public consumption. The web conference, at http://www.meetwindowsazure.com, will be airing at 1 PM PST. They have already released an update to the Service Dashboard that can be accessed by going to http://manage.windowsazure.com. I have some images of the new dashboard here that I have gathered and removed any PII from. Let me know what you think!
    Images: you should be able to click any of the images for a full resolution image.
    - Tutorial: the first thing you get after signing in is the tutorial.
    - Landing: after the tutorial completes, you get a screen with services that are active on your account on the left, and a list of ALL services (db/blob/SQL Azure) on the right. I like the quick access to services across any of my subscriptions.
    - Service Information: these are images from a running web site with several roles. I love how easy they have made many of the features.
    - SQL Azure: they have given some great quick functionality for looking at your DB information.
    - Storage: here is the basic information that they give you for any storage accounts you have.
    - Adding Services: super quick and easy to add services with the new UI.
    Conclusion: I am EXCITED! As you may have seen in the left side of my blog, I am an MCPD in Azure Development, and I must say that I am excited to see Microsoft moving forward with the technology and not letting it stagnate. After as much as I have fought the other Azure dashboard, I like the friendliness and fluidity of this one. The important thing to note about ALL of the images above: this is HTML, not Silverlight. The responsiveness is FAST on all of the actions I completed, and I believe that this is a big step forward for Azure… So, what do you think?

    Read the article

  • Intel Rapid Storage Technology (pre-OS) driver installation

    - by Nero theZero
    My desktop machine is built on a Gigabyte GA-Z87-UD3H board, and Gigabyte provides the latest driver for Intel Rapid Storage Technology (IRST), which I installed after installing the OS. The same goes for my Lenovo Thinkpad T420. On both machines, checking the controller device under the IDE ATA/ATAPI Controllers section in Device Manager, I see the driver has been updated to the latest version. I set the SATA controller to AHCI from the BIOS. On the desktop machine I have one WD 2TB Black and one WD 3TB Green. I don’t use RAID, with no chance of using it in the near future, but according to Intel, IRST improves performance in single-disk scenarios too. Now I have the following questions:
    1. What is the actual purpose of the IRST (pre-OS install) driver that doesn’t get served by the post-OS driver that I installed? There must be some difference, otherwise there wouldn’t be a pre-OS version of the driver. Right?
    2. In the pre-OS procedure (loading the drivers at OS-installation time), after successfully completing the OS installation, do I need that post-OS driver? Because after installing from that one I got a quick-launch icon that runs the IRST configuration application. Where do I get that after installing the pre-OS driver?
    3. As it is “pre-OS”, when I load it at OS-installation time, does it update anything at the BIOS level or anywhere other than the HDD? I ask because I’m going to dual boot Windows 7 with Windows 8.1, and after installing Windows 7, when I install Windows 8.1 and load the IRST driver for that, is there any chance of any “overwriting” or OS incompatibility? In short, is there anything specific to follow while installing the second OS?

    Read the article

  • Is basing storage requirements on IOPS sufficient?

    - by Boden
    The current system in question is running SBS 2003, and is going to be migrated on new hardware to SBS 2008. Currently I'm seeing on average 200-300 disk transfers per second total across all the arrays in the system. The array seeing the bulk of activity is a 6 disk 7200RPM RAID 6 and it struggles to keep up during high traffic times (idle time often only 10-20%; response times peaking 20-50+ ms). Based on some rough calculations this makes sense (avg ~245 IOPS on this array at 70/30 read to write ratio). I'm considering using a much simpler disk configuration using a single RAID 10 array of 10K disks. Using the same parameters for my calculations above, I'm getting 583 average random IOPS / sec. Granted SBS 2008 is not the same beast as 2003, but I'd like to make the assumption that it'll be similar in terms of disk performance, if not better (Exchange 2007 is easier on the disk and there's no ISA server). Am I correct in believing that the proposed system will be sufficient in terms of performance, or am I missing something? I've read so much about recommended disk configurations for various products like Exchange, and they often mention things like dedicating spindles to logs, etc. I understand the reasoning behind this, but if I've got more than enough random I/O overhead, does it really matter? I've always at the very least had separate spindles for the OS, but I could really reduce cost and complexity if I just had a single, good performing array. So as not to make you guys do my job for me, the generic version of this question is: if I have a projected IOPS figure for a new system, is it sufficient to use this value alone to spec the storage, ignoring "best practice" configurations? (given similar technology, not going from DAS to SAN or anything)
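    For readers following the arithmetic, the quoted figures line up with the usual back-of-the-envelope formula: raw spindle IOPS discounted by the RAID write penalty at the given read/write split. The sketch below reproduces that calculation; the per-disk IOPS values and the six-spindle count assumed for the proposed RAID 10 array are rules of thumb, not numbers from the question.
        using System;

        static class IopsEstimate
        {
            // Raw spindle IOPS discounted by the RAID write penalty applied to the write fraction.
            static double Effective(int disks, double iopsPerDisk, double readRatio, int writePenalty)
            {
                return disks * iopsPerDisk / (readRatio + (1 - readRatio) * writePenalty);
            }

            static void Main()
            {
                // 6 x 7200 RPM (~100 IOPS each) in RAID 6 (write penalty 6), 70/30 read/write:
                Console.WriteLine(Effective(6, 100, 0.7, 6));   // ~240, near the ~245 quoted above

                // 6 x 10K RPM (~126 IOPS each) in RAID 10 (write penalty 2), same workload:
                Console.WriteLine(Effective(6, 126, 0.7, 2));   // ~582, near the 583 quoted above
            }
        }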

    Read the article

  • Enterprise class storage best practices

    - by churnd
    One thing that has always perplexed me is storage best practices. Filesystems brag about how they can be petabytes or exabytes in size. Yet, I do not know many sysadmins who are willing to let a single volume grow over several terabytes. I do know the primary reason behind this is how long it would take to rebuild the array should a drive fail. The more drives in a single LUN, the longer this takes and the greater your risk of losing another drive while the rebuild is taking place. Then there are usage reasons. Admins will carve out a LUN based on how much space they think needs to be allocated to the project. It seems more practical to me for the LUN to be one large array and to use quotas. I understand this wouldn't satisfy every requirement (iSCSI), but I see a lot of NAS systems (NFS) managed this way. I also understand that the underlying volumes can be grown/shrunk as needed quite easily, but wouldn't it be less "risky" to use quotas rather than manipulating volumes and bringing possible data loss into the equation? There may be some other reasons I'm missing, so please enlighten me. Can we not expect filesystems to ever be so large? Are we waiting for the hardware to get faster to cut down on rebuild times?

    Read the article

  • How to diagnose storage system scaling problems?

    - by Unknown
    We are currently testing the maximum sequential read throughput of a storage system (48 disks total behind two HP P2000 arrays) connected to an HP DL580 G7 running RHEL 5 with 128 GB of memory. Initial testing has been mainly done by running dd commands like this, in parallel for each disk:
    dd if=/dev/mapper/mpath1 of=/dev/null bs=1M count=3000
    However, we have been unable to scale the results from one array (maximum throughput of 1.3 GB/s) to two (almost the same throughput). Each array is connected to a dedicated host bus adapter, so they should not be the bottleneck. The disks are currently in a JBOD configuration, so each disk can be addressed directly. I have two questions:
    1. Is running multiple dd commands in parallel really a good way to test maximum read throughput? We have noticed very high SWAPIN-% numbers in iotop, which I find hard to explain because the target is /dev/null.
    2. How should we proceed in trying to find the reason for the scaling problem? Do you think the server itself is the bottleneck here, or could there be some Linux parameters that we have overlooked?

    Read the article

  • Windows Azure Recipe: Big Data

    - by Clint Edmonson
    As the name implies, what we’re talking about here is the explosion of electronic data that comes from huge volumes of transactions, devices, and sensors being captured by businesses today. This data often comes in unstructured formats and/or too fast for us to effectively process in real time. Collectively, we call these the 4 big data V’s: Volume, Velocity, Variety, and Variability. These qualities make this type of data best managed by NoSQL systems like Hadoop, rather than by a conventional Relational Database Management System (RDBMS). We know that there are patterns hidden inside this data that might provide competitive insight into market trends. The key is knowing when and how to leverage these “NoSQL” tools combined with traditional tools such as SQL-based relational databases, warehouses, and other business intelligence tools.
    Drivers
    - Petabyte scale data collection and storage
    - Business intelligence and insight
    Solution
    The sketch below shows one of many big data solutions using Hadoop’s unique highly scalable storage and parallel processing capabilities combined with Microsoft Office’s Business Intelligence Components to access the data in the cluster.
    Ingredients
    - Hadoop – this big data industry heavyweight provides both large scale data storage infrastructure and a highly parallelized map-reduce processing engine to crunch through the data efficiently. Here are the key pieces of the environment:
      - Pig – a platform for analyzing large data sets that consists of a high-level language for expressing data analysis programs, coupled with infrastructure for evaluating these programs.
      - Mahout – a machine learning library with algorithms for clustering, classification and batch based collaborative filtering that are implemented on top of Apache Hadoop using the map/reduce paradigm.
      - Hive – data warehouse software built on top of Apache Hadoop that facilitates querying and managing large datasets residing in distributed storage. Directly accessible to Microsoft Office and other consumers via add-ins and the Hive ODBC data driver.
      - Pegasus – a peta-scale graph mining system that runs in a parallel, distributed manner on top of Hadoop and that provides algorithms for important graph mining tasks such as Degree, PageRank, Random Walk with Restart (RWR), Radius, and Connected Components.
      - Sqoop – a tool designed for efficiently transferring bulk data between Apache Hadoop and structured data stores such as relational databases.
      - Flume – a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data to HDFS.
    - Database – directly accessible to Hadoop via the Sqoop-based Microsoft SQL Server Connector for Apache Hadoop, so data can be efficiently transferred to traditional relational data stores for replication, reporting, or other needs.
    - Reporting – provides easily consumable reporting when combined with a database being fed from the Hadoop environment.
    Training
    These links point to online Windows Azure training labs where you can learn more about the individual ingredients described above.
    - Hadoop Learning Resources (20+ tutorials and labs) – a huge collection of resources for learning about all aspects of Apache Hadoop-based development on Windows Azure and the Hadoop and Windows Azure ecosystems.
    - SQL Azure (7 labs) – Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending the SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data.
See my Windows Azure Resource Guide for more guidance on how to get started, including links web portals, training kits, samples, and blogs related to Windows Azure.

    Read the article

  • Windows Azure Platform, latest version?

    - by Vimvq1987
    I searched through the internet but found nothing. The whitepapers of the Windows Azure Platform say things like:
    "In its first release, the maximum size of a single database in SQL Azure Database is 10 gigabytes."
    "A few things are omitted in the technology’s first release, however, such as the SQL Common Language Runtime (CLR) and support for spatial data. (Microsoft says that both will be available in a future version.)"
    I want to know whether Microsoft has updated the Windows Azure Platform and removed these limits or not. I decided to post this question here instead of Serverfault.com because it's more related to programming than administration. Thank you.

    Read the article

  • Windows Azure: need to know the data processing time

    - by veda
    I have stored some files in the form of blobs on Azure and I have written an application that accesses these blobs. When I host this application as a web role on Azure, it works perfectly and I am happy with that. But now, I want to know: what is the query time taken to access each blob file? I searched through the Microsoft Azure Storage SLA and found that for the GetBlob request type, the maximum processing time should be within 2 seconds multiplied by the number of MBs transferred in processing the request. I am still unclear. What is the actual processing time of my data query? How can I measure it? Will I be able to speed up the processing time? I understand that the processing time depends on internet speed, the location of the data center where my data is being stored, and the location of the data center where my application is being hosted. But still, will I be able to speed up my query?
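    One practical way to see the real end-to-end time, as opposed to the SLA ceiling, is simply to time the call from the web role. The rough sketch below uses the v1.x StorageClient library; the connection string, container and blob names are placeholders, not values from the post.
        using System;
        using System.Diagnostics;
        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.StorageClient;

        class BlobTiming
        {
            static void Main()
            {
                // Substitute your real storage account connection string here.
                var account = CloudStorageAccount.Parse("UseDevelopmentStorage=true");
                var blob = account.CreateCloudBlobClient()
                                  .GetContainerReference("mycontainer")
                                  .GetBlobReference("myfile.dat");

                var timer = Stopwatch.StartNew();
                blob.DownloadToFile(@"C:\temp\myfile.dat");   // the GetBlob request being measured
                timer.Stop();

                // This measures the whole round trip (latency + transfer), which is what the
                // application actually experiences; running it inside the web role keeps the
                // network path within the datacenter, assuming role and storage share a region.
                Console.WriteLine("Download took {0} ms", timer.ElapsedMilliseconds);
            }
        }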

    Read the article

  • Easiest way to retrofit retry logic on LINQ to SQL migration to SQL Azure

    - by Pat James
    I have a couple of existing ASP .NET web forms and MVC applications that currently use LINQ to SQL with a SQL Server 2008 Express database on a Windows VPS: one VPS for both IIS and SQL. I am starting to outgrow the VPS's ability to effectively host both SQL and IIS and am getting ready to split them up. I am considering migrating the database to SQL Azure and keeping IIS on the VPS. After doing initial research it sounds like implementing retry logic in the data access layer is a must-do when adopting SQL Azure. I suspect this is even more critical to implement in my situation where IIS will be on a VPS outside of the Azure infrastructure. I am looking for pointers on how to do this with the least effort and impact on my existing code base. Is there a good retry pattern that can be applied once at the LINQ to SQL data access layer, as opposed to having to wrap all of my LINQ to SQL operations in try/catch/wait/retry logic?
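    For context, the single choke-point wrapper being asked about usually looks something like the sketch below: a small helper that retries transient SqlExceptions, through which the data access layer funnels its LINQ to SQL calls. The error numbers and delays are illustrative assumptions; Microsoft's Transient Fault Handling guidance has the authoritative list.
        // Illustrative retry helper for transient SQL Azure failures (not an official list).
        using System;
        using System.Data.SqlClient;
        using System.Threading;

        static class SqlAzureRetry
        {
            public static T Execute<T>(Func<T> operation, int maxAttempts = 3)
            {
                for (int attempt = 1; ; attempt++)
                {
                    try
                    {
                        return operation();
                    }
                    catch (SqlException ex)
                    {
                        if (attempt >= maxAttempts || !IsTransient(ex))
                            throw;
                        // Back off a little longer on each attempt before retrying.
                        Thread.Sleep(TimeSpan.FromSeconds(attempt * 2));
                    }
                }
            }

            static bool IsTransient(SqlException ex)
            {
                // Example transient error numbers: 40501 (service busy), 40197 (service error),
                // 10053/10054 (connection dropped). Extend per the official guidance.
                return ex.Number == 40501 || ex.Number == 40197
                    || ex.Number == 10053 || ex.Number == 10054;
            }
        }
    Usage from the data access layer would then be a one-liner around each query, e.g. SqlAzureRetry.Execute(() => dataContext.Orders.Where(o => o.CustomerId == id).ToList()), so existing LINQ to SQL code only changes at the call sites rather than being rewritten.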

    Read the article
