Search Results

Search found 6770 results on 271 pages for 'azure storage'.


  • Server 2012 Storage Pools, Raid Controller... can the Storage Pool deal with it?

    - by TomTom
    Before trying it out: I can't find any documentation. Given that Storage Pools have serious performance problems with parity, and do not rebalance data when you add disks, my preferred way to use them would be as thin-provisioned iSCSI target space, with every "pool" running against one RAID volume presented by a RAID controller (which also provides SSD read and write caching, another thing missing from Storage Pools). The main question is: how does a Storage Pool handle changes in the underlying disk? I am mostly talking about OCE (Online Capacity Expansion), where a disk suddenly reports a larger capacity after an expansion. Standard Windows allows you to use this additional space (and expand the partitions). How does a Storage Pool handle it?

    Read the article

  • Why is hosted storage so expensive?

    - by Mark Henderson
    There are many questions on Server Fault asking why server storage is so expensive, e.g. "Why do I have to pay 50 bucks a month per extra gigabyte of storage?" or "Our file server is always running out of space, why doesn't our sysadmin just throw an extra 1TB drive in there?" These questions usually come from people who lack an understanding of how enterprise-level storage works and what influences the price. This question is designed to be the "question to end all questions" regarding the price of enterprise storage.

    Read the article

  • Microsoft updates Windows Azure Web Sites and Azure Active Directory for identity management and Web hosting in the cloud

    Microsoft updates Windows Azure Web Sites and Azure Active Directory for Web hosting and identity management in the cloud. Microsoft, through Scott Guthrie, Vice President of the Server and Business Tools division, has just announced an update to Windows Azure Web Sites as well as to the Windows Azure Active Directory (WAAD) service. Azure Web Sites is a platform for hosting websites and Web applications in the Azure cloud. The goal of the Azure Web Sites infrastructure is to make hosting available both in the cloud and locally on serv...

    Read the article

  • Unclear pricing of Windows Azure

    - by Dirk
    How do you people think about the Windows Azure pricing model and the way it is presented to the user? I just found out that Azure keeps charging hours for STOPPED instances. I just received a bill of more than 100 euros for 3 STOPPED instances (not) running "HelloAzure". In the past I also played around with Amazon Web Services; Amazon doesn't charge for stopped instances. I was wondering: "Should I have known this beforehand, or is Microsoft doing a bad job of communicating the pricing model clearly?" Quote from http://www.microsoft.com/windowsazure/pricing/ : "Compute time, measured in service hours: Windows Azure compute hours are charged only for when your application is deployed. When developing and testing your application, developers will want to remove the compute instances that are not being used to minimize compute hour billing. Partial compute hours are billed as full hours." I read this, so I stopped all instances after a few hours of playing around. Now it seems I should have deleted them, not just stopped them. Strictly speaking, it all depends on the definition of the word "deployed". If you upload an application but it is not running, can it still be regarded as "deployed"? Maybe, but when you read this for the first time, with AWS experience in mind, I don't think it's 100% clear what this means. Technically speaking, an uploaded application only needs a few MB of hard drive space. It doesn't require any CPU time. If Azure wants to reserve CPUs for instances that aren't running... well, that's Azure's choice, not mine. I don't want to start a hate campaign at all, but I do want to know how people think about this subject. Should Microsoft be more clear about their pricing model, or do you think it's clear enough? Second question: did anyone get a refund for a similar case? Thanks in advance!
    UPDATE 27-01-2011: I sent an email to customer support a few days ago, but I guess that didn't reach any human being because I haven't heard anything back. So, I made a telephone call today to a Dutch customer support representative (I live in Holland). She totally understood the problem and is trying to get a refund for me. However, she mentioned that "usually these refund requests are denied", but she's going to try. She also mentioned that I'm not the first one with this (or a similar) problem.
    UPDATE 28-01-2011: I just received a phone call from Microsoft support. The lady told me some good news: the money will be refunded. However, the invoice has not been made yet, and my credit card will first be charged, after which it will be refunded. But hey, that's no problem for me! I'm glad with the way it was solved. Thanks everybody!

    Read the article

  • Windows Azure Recipe: Consumer Portal

    - by Clint Edmonson
    Nearly every company on the internet has a web presence. Many are merely using theirs for informational purposes. More sophisticated portals allow customers to register their contact information and provide some level of interaction or customer support. But as our understanding of how consumers use the web increases, the more progressive companies are taking advantage of the social web and rich media delivery to connect at a deeper level with the consumers of their goods and services.
    Drivers: cost reduction, scalability, global distribution, time to market.
    Solution: here's a sketch of how a Windows Azure Consumer Portal might be built out, using these ingredients:
    - Web Role – this will host the core of the solution. Each web role is a virtual machine hosting an application written in ASP.NET (or optionally PHP or node.js). The number of web roles can be scaled up or down as needed to handle peak and non-peak traffic loads.
    - Database – every modern web application needs to store data. SQL Azure databases look and act exactly like their on-premise siblings but are fault tolerant and have data redundancy built in.
    - Access Control (optional) – if identity needs to be tracked within the solution, the Access Control service combined with the Windows Identity Foundation framework provides out-of-the-box support for several social media platforms, including Windows LiveID, Google, Yahoo!, and Facebook. It also has a provider model to allow integration with other platforms as well.
    - Caching (optional) – for sites with high traffic and lots of read-only data and lists, the distributed in-memory caching service can be used to cache and serve up static data at higher scale and speed than direct database requests. It can also be used to manage user session state.
    - Blob Storage (optional) – for sites that serve up unstructured data such as documents, video, audio, device drivers, and more. The data is highly available and stored redundantly across data centers. Each entry in blob storage is provided with its own unique URL for direct access by the browser (a minimal code sketch follows at the end of this entry).
    - Content Delivery Network (CDN) (optional) – for sites that serve users around the globe, the CDN is an extension to blob storage that, when enabled, will automatically cache frequently accessed blobs and static site content at edge data centers around the world. The data can be delivered statically or streamed in the case of rich media content.
    Training Labs: these links point to online Windows Azure training labs where you can learn more about the individual ingredients described above. (Note: the entire Windows Azure Training Kit can also be downloaded for offline use.)
    - Windows Azure (16 labs): Windows Azure is an internet-scale cloud computing and services platform hosted in Microsoft data centers, which provides an operating system and a set of developer services that can be used individually or together. It gives developers the choice to build web applications; applications running on connected devices, PCs, or servers; or hybrid solutions offering the best of both worlds. New or enhanced applications can be built using existing skills with the Visual Studio development environment and the .NET Framework. With its standards-based and interoperable approach, the services platform supports multiple internet protocols, including HTTP, REST, SOAP, and plain XML.
    - SQL Azure (7 labs): Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data.
    - Windows Azure Services (9 labs): as applications collaborate across organizational boundaries, ensuring secure transactions across disparate security domains is crucial but difficult to implement. Windows Azure Services provides hosted authentication and access control using powerful, secure, standards-based infrastructure.
    See my Windows Azure Resource Guide for more guidance on how to get started, including links to web portals, training kits, samples, and blogs related to Windows Azure.
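    As an illustration of the blob storage ingredient above, here is a minimal sketch using the classic Microsoft.WindowsAzure.StorageClient library; the account name, key, container, and file names are placeholders, not values from the article:

        using System;
        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.StorageClient;

        class BlobUrlSketch
        {
            static void Main()
            {
                // Placeholder credentials - substitute your storage account's name and key.
                var account = CloudStorageAccount.Parse(
                    "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=mykey");

                CloudBlobContainer container =
                    account.CreateCloudBlobClient().GetContainerReference("media");
                container.CreateIfNotExist();

                // Allow anonymous reads so browsers can fetch blobs directly by URL.
                container.SetPermissions(new BlobContainerPermissions
                {
                    PublicAccess = BlobContainerPublicAccessType.Blob
                });

                CloudBlob blob = container.GetBlobReference("intro-video.wmv");
                blob.UploadFile(@"C:\media\intro-video.wmv");

                // Each blob gets its own unique URL for direct browser access.
                Console.WriteLine(blob.Uri);
            }
        }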

    Read the article

  • TortoiseSVN hangs in Windows Server 2012 Azure VM

    - by ZaijiaN
    Following @shanselman's article on remoting into an Azure VM for development, I spun up my own VS 2013 VM; that image runs on WS 2012. Once I was able to remote in, I started installing all my dev tools, including TortoiseSVN 1.8.3 64-bit. Things went south once I started attempting to check out code from my personal SVN server. It would hang and freeze often, although sometimes it would work: I was able to partially check out projects, but I would get frequent connection timeout errors. My personal SVN server (VisualSVN 2.7.2) runs at home on a Windows 7 machine, and I have a DynDNS URL pointing to it. I have also configured my router to pass all 443 traffic through to the appropriate port on the server. I self-signed a cert and made sure it was imported into the VM cert store under trusted root authorities. I have no problems connecting to my SVN server from 4-5 other computers and locations. From the Azure VM, in both IE and Chrome, I can access the repository web browser with no issues. There are no outbound firewall restrictions. I have installed other SVN add-ons for Visual Studio (AnkhSVN, VisualSVN) and attempted to connect to my SVN server, with largely the same results: random and persistent connection issues (hangs/timeouts). I spun up a completely fresh WS 2008 Azure VM, installed TortoiseSVN, and had the same results. So I'm at a loss as to what the problem is and how to fix it. Web searches on TortoiseSVN and Windows Server issues don't yield any current or relevant information. At this point, I'm guessing that maybe some setting or configuration in MS Azure VM images is the culprit, although I should probably attempt to spin up my own local WS VM to rule out a Windows Server issue. Any thoughts? I hope I'm just missing something really obvious!

    Read the article

  • accessing a blob without using a webrole?

    - by Egon
    I wanted to know if there is a way we can upload/download a blob, and add, remove, or view metadata, without using a web role? If my application has a lot of GUI, should there be multiple web roles? Everywhere I look, the web role's file default.aspx.cs does everything with the blob based on an event, which is perfectly fine, but what if my GUI is more complicated?
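    For what it's worth, blob access doesn't require a web role at all: any process holding the storage credentials can call the storage client library (or the REST API) directly. Below is a minimal sketch of upload, download, and metadata handling from a plain console application, assuming the classic Microsoft.WindowsAzure.StorageClient library and placeholder account, container, and blob names:

        using System;
        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.StorageClient;

        class BlobFromConsoleApp
        {
            static void Main()
            {
                // Placeholder connection string - no web role involved anywhere here.
                var account = CloudStorageAccount.Parse(
                    "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=mykey");
                CloudBlobContainer container =
                    account.CreateCloudBlobClient().GetContainerReference("documents");
                container.CreateIfNotExist();

                CloudBlob blob = container.GetBlobReference("report.txt");
                blob.UploadText("hello from a console app");    // upload

                blob.Metadata["author"] = "egon";               // add metadata
                blob.SetMetadata();

                blob.FetchAttributes();                         // view metadata
                Console.WriteLine(blob.Metadata["author"]);

                blob.Metadata.Remove("author");                 // remove metadata
                blob.SetMetadata();

                Console.WriteLine(blob.DownloadText());         // download
            }
        }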

    Read the article

  • Backup Azure Tables, schedule Azure scripts… and more

    - by Herve Roggero
    Well – months of effort are now officially over… or should I say it's just the beginning? Enzo Cloud Backup 2.0 (beta) is now officially out! This tool will let you do the following:
    - Backup SQL Database (and SQL Server, to a limited extent)
    - Backup Azure Tables
    - Restore SQL backups into another SQL environment
    - Restore Azure Tables in Azure Storage or a SQL environment
    - Manage and schedule database maintenance scripts
    - Drop database schema containers (with preview) for SaaS environments
    - Receive alerts (SMTP) when operations complete or fail
    That's it at a high level… but you need to see the flexibility around these features. For example, you can select a specific backup strategy for Azure Tables, allowing faster backup operations when partition keys use GUIDs. You can also call custom stored procedures during the restore operation of Azure Tables, allowing you to transform the data along the way. You can also set a performance threshold during Azure Table backup operations to help you control possible throttling conditions in your storage account. Regarding database scripts, you can now define T-SQL scripts and schedule them for execution in a specific order. You can also tell Enzo to execute a pre- and post-script during Azure Table restore operations against a SQL environment. The backup operation now supports backing up to multiple devices at the same time, so you can execute a backup request to both a local file and a blob at the same time, guaranteeing that both will contain the exact same data. And due to the level of options that are available, you can save backup definitions for later reuse. The screenshot below backs up Azure Tables to two devices (a blob and a SQL Database). You can also manage your database schemas for SaaS environments that use schema containers to separate customer data. This new edition allows you to see how many objects you have in each schema, back up specific schemas, and even drop all objects in a given schema. For example, the screenshot below shows that the EnzoLog database has 4 user-defined schemas, and the AFA schema has 5 tables and 1 module (stored proc, function, view…). Selecting the AFA schema and trying to delete it will prompt another screen to show which objects will be deleted. As you can see, Enzo Cloud Backup provides amazing capabilities that can help you safeguard your data in SQL Database and Azure Tables, and gives you advanced management functions for your Azure environment. Download a free trial today at http://www.bluesyntax.net.
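    Not part of the product announcement above, but for context: backing up an Azure Table ultimately comes down to enumerating its entities. A minimal sketch of that core step with the classic Microsoft.WindowsAzure.StorageClient library follows (the table name and entity shape are hypothetical; Enzo's actual implementation is not shown here):

        using System;
        using System.Linq;
        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.StorageClient;

        // Hypothetical entity shape; a real backup tool reads properties generically.
        public class LogEntry : TableServiceEntity
        {
            public string Message { get; set; }
        }

        class TableDump
        {
            static void Main()
            {
                var account = CloudStorageAccount.Parse(
                    "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=mykey");
                TableServiceContext ctx =
                    account.CreateCloudTableClient().GetDataServiceContext();

                // AsTableServiceQuery follows continuation tokens, so large
                // tables stream through in pages rather than one huge result.
                var query = ctx.CreateQuery<LogEntry>("LogEntries").AsTableServiceQuery();
                foreach (LogEntry e in query)
                    Console.WriteLine("{0} / {1}: {2}", e.PartitionKey, e.RowKey, e.Message);
            }
        }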

    Read the article

  • Book Review (Book 11) - Applied Architecture Patterns on the Microsoft Platform

    - by BuckWoody
    This is a continuation of the books I challenged myself to read to help my career: one a month, for a year. You can read my first book review here, and the entire list is here. The book I chose for April 2012 was: Applied Architecture Patterns on the Microsoft Platform. I was traveling at the end of last month, so I'm a bit late posting this review here. Why I chose this book: I actually know a few of the authors of this book, so when they told me about it I wanted to check it out. The premise of the book is exactly as it states in the title: to learn how to solve a problem using products from Microsoft. What I learned: I liked the book - a lot. They've arranged the content in a "Solution Decision Framework" that presents a few elements to help you identify a need, propose alternative solutions to solve it, and document the rationale for the choice. But the payoff is that the authors then walk through the solution they implemented and what they ran into doing it. I really liked this approach. It's not a huge book, but it's one I've referred to again since I read it. It's fairly comprehensive, and includes server-oriented products, not things like Microsoft Office or other client-side tools. In fact, I would LOVE to have a work like this for Open Source and other vendors as well; it would make for a great library for a systems architect. This one is unashamedly aimed at Microsoft products, and even if I didn't work here, I'd be fine with that. As I said, it would be interesting to see some books on other platforms like this, but I haven't run across something that presents other systems in quite this way. And that brings up an interesting point: this book is aimed at folks who create solutions within an organization. It's not aimed at administrators, DBAs, developers, or the like, although I think all of those audiences could benefit from reading it. The solutions are made up, and not developed to a huge level of depth, nor should they be. It's a great exercise in thinking these kinds of things through in a structured way. The information is a bit dated, especially for Windows and SQL Azure. While the general concepts hold, the cloud platform from Microsoft is evolving so quickly that any printed book finds it hard to keep up with the improvements. I do have one quibble with the text: the chapters are a bit uneven. This is always a danger with multiple authors, but it shows up in a couple of chapters. I winced at one of the chapters that tried to take a more conversational, humorous style. This kind of academic work doesn't lend itself to that style. I recommend you get the book - and use it. I hope they keep it updated; I'll be a frequent customer. :)

    Read the article

  • how to enable remote access to a MySQL server on an Azure virtual machine

    - by Rees
    I have an Azure virtual machine running Ubuntu 13.04 with a MySQL server installed on it. I am trying to connect remotely to the MySQL server, but I get the simple error "Can't connect to MySQL server on {IP}". I have already done the following:
    - commented out the bind-address within /etc/mysql/my.cnf
    - commented out skip-external-locking within the same my.cnf
    - "ufw allow mysql"
    - "iptables -A INPUT -i eth0 -p tcp -m tcp --dport 3306 -j ACCEPT"
    - set up an Azure endpoint for MySQL
    - "sudo netstat -lpn | grep 3306" does indeed show mysql LISTENING
    - "GRANT ALL ON *.* TO remote@'%' IDENTIFIED BY 'password';"
    - "GRANT ALL ON *.* TO remote@'localhost' IDENTIFIED BY 'password';"
    - "/etc/init.d/mysql restart"
    - I can connect via SSH tunneling, but not without it
    - I have spun up an identical Ubuntu 13.04 server on Rackspace and SUCCESSFULLY connected using the same procedures outlined here.
    None of the above works on my Azure server, however. I thought the creation of an endpoint would work, but no luck. Any help please? Is there something I'm missing entirely?

    Read the article

  • Is it possible to add an existing Azure VM to an Azure Virtual Network?

    - by Dan Harris
    Didn't think this was directly related to programming, so I thought Super User would be better than Stack Overflow. Is it possible to add an existing Azure VM to an Azure Virtual Network if you didn't add it to the virtual network at the time of creation? I can't see an option to change which virtual network the VM is connected to. Do you just have to do it at the time you create the VM, and if you don't, do you need to create a new VM and delete the existing one? Example of the scenario:
    - No VMs or virtual networks exist.
    - I create a VM (VM1); there is no virtual network, so it isn't added to one.
    - Later, I create a virtual network in Azure (Network1).
    - It is possible to create another VM (VM2) and connect it to the virtual network (Network1), but can I connect VM1 to Network1, or must I delete VM1 and re-create it to get it connected to Network1?

    Read the article

  • WebsitePanel 2 totally NOT working on Windows Server 2012 on Azure

    - by Carmine Giangregorio
    I'm having a lot of trouble installing WebSitePanel on an Azure virtual machine with Windows Server 2012. I followed the steps in http://www.websitepanel.net/documentation/deployment-guide/server-configuration/preparing-windows-server-2008-r2-for-websitepanel-installation/ and installed everything I needed. Then I installed the WebSitePanel Standalone Server package with the installer and opened the endpoint for port 9002 on Windows Azure (note: in Azure you don't have a static IP address; instead you have a hostname like [hostname].cloudapp.net). So I pointed my browser at myhostname.cloudapp.net, but loading myhostname.cloudapp.net:9002 fails, and every browser shows something like "Unable to load page". Notice: if I try to load the WebSitePanel portal directly on the server, I get an HTTP 400 Bad Request error. How come? IIS works perfectly on the server; in fact, the default website runs without problems on port 80.

    Read the article

  • Can you have a staging and production slot in Azure Websites

    - by Barry King
    I'm looking at hosting 3 websites (they will all use the same linked database resource, but I think I have to use 3 websites within Azure for this): www.website.com, provider.website.com and admin.website.com. Using Windows Azure Websites, can you have staging and production slots? I think this feature is only available in Azure Cloud Services, but there is little documentation on this. If it's not possible, other than spinning up 3 more sites to act as the staging sites, is there another way? I want the ability to "swap" from staging to production.

    Read the article

  • Creating a Corporate Data Hub

    - by BuckWoody
    The Windows Azure Marketplace has a rich assortment of data and software offerings for you to use: a type of Software as a Service (SaaS) for IT workers, not necessarily for end-users. Among those offerings is the "Data Hub", a codename for a project that ironically actually does what the codename says. In many of our organizations, we have multiple data quality issues. Finding data is one problem, but finding it just once is often a bigger problem. Lots of departments and even individuals have stored the same data more than once, and in some cases, made changes to one of the copies. It's difficult to know which location or version of the data is authoritative. Then there's the problem of accessing the data. It's fairly straightforward to publish a database, share, or other location internally to store the data. But then you have to figure out who owns it, how it is controlled, and pass out the various connection strings to those who want to use it. And then you need to figure out how to let folks access the internal data externally, bringing up all kinds of security issues. Finally, in many cases our user community wants us to combine data from the internal sources with external data, bringing up the security, connection-string, and exploration issues all over again. Enter the Data Hub. This is an online offering, where you assign an administrator and data stewards. You import the data into the service, and it's available to you - and only you and your organization, if you wish. The basic steps for this service are to set up the portal for your company, assign administrators and permissions, and then assign data areas and import data into them. From there you make them discoverable, and then you have multiple options for how you or your users access that data. You're then able, if you wish, to combine that data with other data in one location. So how does all that work? What about security? Is it really that easy? And can you really move the data definition off to the Subject Matter Experts (SMEs) who know the particular data stack better than the IT team does? Well, nothing good is easy, but using the Data Hub is actually pretty simple. I'll give you a link in a moment where you can sign up and try this yourself. Once you sign up, you assign an administrator. From there you'll create data areas, and then use a simple interface to bring the data in. All of this is done in a portal interface: nothing to install, configure, update, or manage. After the data is entered in, and you've assigned meta-data to describe it, your users have multiple options to access it. They can simply use the portal, which has powerful visualizations you can use on any platform, even mobile phones or tablets. Your users can also hit the data with Excel, which gives them ultimate flexibility for display, all while using an authoritative, single reference for the data. Since the service is online, they can do this wherever they are, given the proper authentication and permissions. You can also hit the service with simple API calls, like this one from C#: http://msdn.microsoft.com/en-us/library/hh921924 You can make HTTP calls instead of code, and the data can even be exposed as an OData feed. As you can see, there are a lot of options. You can check out the offering here: http://www.microsoft.com/en-us/sqlazurelabs/labs/data-hub.aspx and you can read the documentation here: http://msdn.microsoft.com/en-us/library/hh921938
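    Since the post mentions hitting the service over plain HTTP and OData, a sketch of what such a call could look like follows; the feed URL, credentials, and query options are placeholders, not the Data Hub's documented endpoint (see the MSDN links above for that):

        using System;
        using System.IO;
        using System.Net;

        class DataFeedCall
        {
            static void Main()
            {
                // Hypothetical OData feed URL; $top is a standard OData query option.
                var request = (HttpWebRequest)WebRequest.Create(
                    "https://example.cloudapp.net/data/Sales?$top=10");
                request.Credentials = new NetworkCredential("user", "accountKey");

                using (var response = (HttpWebResponse)request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                {
                    // An OData feed returns an Atom (XML) or JSON document.
                    Console.WriteLine(reader.ReadToEnd());
                }
            }
        }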

    Read the article

  • networked storage for a research group, 10-100 TB

    - by Marc
    This is related to this post: http://serverfault.com/questions/80854/scalable-24-tb-nas-for-research-department but perhaps a little more general. Background: we're a research lab of around 10 people who do a lot of experiments that involve taking pictures at one of several lab setups and then analyzing them on one of several lab computers. Each experiment may produce 2 or 3 GB of data, and we are generating data at the rate of about 10 TB/year. Right now, we are storing the data on a 6-bay Netgear ReadyNAS Pro, but even with 2 TB drives, this only gives us 10 TB of storage. Also, right now we are not backing up at all. Our short-term backup plan is to get a second ReadyNAS, put it in a different building, and mirror one onto the other. Obviously, this is somewhat non-ideal. Our options:
    1) We can pay our university $400/TB/year for "backed up" online storage (a rough cost sketch follows below). We trust them more than we trust ourselves, but not a whole lot.
    2) We can continue to buy small NASes and mirror them between offices. One limit, although stupid, is that we don't have an unlimited number of ethernet jacks.
    3) We can try to implement our own data storage solution, which is why I'm asking you guys.
    One thing to consider is that we're a very transient population and none of us are network administration experts. I will probably be here only another year or so, and graduate students, who are here the longest, have a 5-6 year time scale. So nothing can require expert oversight. Our data transfer rates are low - most of the data will just sit on the server waiting for someone to look at it once or twice - so we don't need a really high-speed system. Given these constraints, can someone recommend a fairly low-cost, scalable, more or less turnkey shared data storage system with backup in a separate physical location? Does such a thing exist, or should we just pay the university to take care of it for us? As a second question, our professor just got tenure and is putting together a budget. Here the goal is to ask for as much as you can and hope you get a fraction of it. So, the same question, minus the low cost: without budget constraints, can you recommend a scalable, turnkey, backed-up storage system? Thanks
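    A rough worked estimate of option 1 under the stated assumptions (10 TB/year growth, $400/TB/year, nothing ever deleted); the five-year figure is illustrative only, not from the original question:

        Storage held in year n:    S_n = 10n TB
        Cost in year n:            C_n = 400 * S_n = 4000n dollars
        Cumulative cost, N years:  sum(n = 1..N) of 4000n = 2000 * N * (N + 1)
        Example, N = 5:            2000 * 5 * 6 = $60,000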

    Read the article

  • What makes a project suitable for Azure/the cloud?

    - by dotnetdev
    Hi, I have read about Windows Azure but to get deeper into this technology, I need to (obviously) use it. I have a small ASP.NET site which gets little traffic and I am thinking that hosting this on Azure would save me money. Other than this, what other factors would contribute to a project being suitable for the cloud? Thanks

    Read the article

  • SPARC T4-2 Produces World Record Oracle Essbase Aggregate Storage Benchmark Result

    - by Brian
    Significance of Results: Oracle's SPARC T4-2 server configured with a Sun Storage F5100 Flash Array and running Oracle Solaris 10 with Oracle Database 11g has achieved exceptional performance for the Oracle Essbase Aggregate Storage Option benchmark. The benchmark has upwards of 1 billion records, 15 dimensions, and millions of members. Oracle Essbase is a multi-dimensional online analytical processing (OLAP) server and is well suited to SPARC T4 servers.
    - The SPARC T4-2 server (2 CPUs) running Oracle Essbase 11.1.2.2.100 outperformed the previously published results on Oracle's SPARC Enterprise M5000 server (4 CPUs) with Oracle Essbase 11.1.1.3 on Oracle Solaris 10 by 80%, 32%, and 2x on Data Loading, Default Aggregation, and Usage Based Aggregation, respectively.
    - The SPARC T4-2 server with the Sun Storage F5100 Flash Array and Oracle Essbase running on Oracle Solaris 10 achieves sub-second query response times for 20,000 users in a 15-dimension database.
    - The SPARC T4-2 server configured with Oracle Essbase was able to aggregate and store values in the database for a 15-dimension cube in 398 minutes with 16 threads and in 484 minutes with 8 threads.
    - The Sun Storage F5100 Flash Array provides more than a 20% improvement out of the box compared to a mid-size fiber channel disk array for default aggregation and user-based aggregation.
    - The Sun Storage F5100 Flash Array with Oracle Essbase provides the best combination for large Oracle Essbase databases, leveraging Oracle Solaris ZFS and taking advantage of high bandwidth for faster load and aggregation.
    Oracle Fusion Middleware provides a family of complete, integrated, hot-pluggable, best-of-breed products known for enabling enterprise customers to create and run agile and intelligent business applications. Oracle Essbase's performance demonstrates why so many customers rely on Oracle Fusion Middleware as their foundation for innovation.

    Performance Landscape:
    System                                Data Size (millions)  Database Load (min)  Default Aggregation (min)  Usage Based Aggregation (min)
    SPARC T4-2, 2 x SPARC T4 2.85 GHz     1000                  149                  398*                       55
    Sun M5000, 4 x SPARC64 VII 2.53 GHz   1000                  269                  526                        115
    Sun M5000, 4 x SPARC64 VII 2.4 GHz    400                   120                  448                        18
    * 398 minutes with CALCPARALLEL set to 16; 484 minutes with CALCPARALLEL set to 8.

    Configuration Summary. Hardware: 1 x SPARC T4-2 with 2 x 2.85 GHz SPARC T4 processors, 128 GB memory, and 2 x 300 GB 10,000 RPM SAS internal disks. Storage: 1 x Sun Storage F5100 Flash Array with 40 x 24 GB flash modules and a SAS HBA with 2 SAS channels; data storage scheme striped (RAID 0) with Oracle Solaris ZFS. Software: Oracle Solaris 10 8/11, Oracle Essbase v11.1.2.2.100 (installer, client, and server), Oracle Essbase Administration Services 64-bit, Oracle Database 11g Release 2 (11.2.0.3), and HP's Mercury Interactive QuickTest Professional 9.5.0.

    Benchmark Description: the objective of the Oracle Essbase Aggregate Storage Option benchmark is to showcase the ability of Oracle Essbase to scale in terms of user population and data volume for large enterprise deployments. Typical administrative and end-user operations for OLAP applications were simulated to produce benchmark results. The benchmark test results include:
    - Database Load: time elapsed to build a database, including outline and data load.
    - Default Aggregation: time elapsed to build aggregation.
    - Usage Based Aggregation: time elapsed to build the aggregate views proposed as a result of tracked retrieval queries.
    Summary of the data used for this benchmark: 40 flat files, each 1.2 GB in size (49.4 GB in total); 10 million rows per file, 1 billion rows total; 28 columns of data per row; a database outline with 15 dimensions (five of them attribute dimensions); a Customer dimension with 13.3 million members; and 3 rule files.

    Key Points and Best Practices: the Sun Storage F5100 Flash Array was used to accelerate application performance. Setting the data load threads (DLTHREADSPREPARE) to 64 and the load buffer to 6 improved data loading by about 9%. Factors influencing aggregation materialization performance are the aggregate storage cache and the number of threads (CALCPARALLEL) for parallel view materialization. The optimal values for this workload on the SPARC T4-2 server were an aggregate storage cache of 32 GB and CALCPARALLEL of 16.

    See Also: Oracle Essbase Aggregate Storage Option Benchmark on Oracle's SPARC T4-2 Server (oracle.com); Oracle Essbase (oracle.com, OTN); SPARC T4-2 Server (oracle.com, OTN); Oracle Solaris (oracle.com, OTN); Oracle Database 11g Release 2 Enterprise Edition (oracle.com, OTN).

    Disclosure Statement: Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 28 August 2012.

    Read the article

  • Windows Azure Recipe: Software as a Service (SaaS)

    - by Clint Edmonson
    The cloud was tailor-built for aspiring companies to create innovative internet-based applications and solutions. Whether you're a garage startup with very little capital or a Fortune 1000 company, the ability to quickly set up, deliver, and iterate on new products is key to capturing market and mind share. And if you can capture that share and go viral, having resiliency and infinite scale at your fingertips is great peace of mind.
    Drivers: cost avoidance, time to market, scalability.
    Solution: here's a sketch of how a basic Software as a Service solution might be built out, using these ingredients:
    - Web Role – this hosts the core web application. Each web role will host an instance of the software, and as the user base grows, additional roles can be spun up to meet demand.
    - Access Control – this service is essential to managing user identity. It's backed by a full-blown implementation of Active Directory and allows the definition and management of users, groups, and roles. A pre-built ASP.NET membership provider is included in the training kit to leverage this capability, but it's also flexible enough to be combined with external identity providers including Windows LiveID, Google, Yahoo!, and Facebook. The provider model provides extensibility to hook into other industry-specific identity providers as well.
    - Databases – nearly every modern SaaS application is backed by a relational database for its core operational data. If the solution is sold to organizations, there's a good chance multi-tenancy will be needed. An emerging best practice for SaaS applications is to stand up separate SQL Azure database instances for each tenant's proprietary data to ensure isolation from other tenants (a code sketch follows at the end of this entry).
    - Worker Role – this is the best place to handle autonomous background processing such as data aggregation, billing through external services, and other specialized tasks that can be performed asynchronously. Placing these tasks in a worker role frees the web roles to focus completely on user interaction and data input, and provides finer-grained control over the system's scalability and throughput.
    - Caching (optional) – as a web site's traffic grows, caching can be leveraged to keep frequently used read-only, user-specific, and application resource data in a high-speed distributed in-memory cache for faster response times and ultimately higher scalability without spinning up more web and worker roles. It includes a token-based security model that works alongside the Access Control service.
    - Blobs (optional) – depending on the nature of the software, users may be creating or uploading large volumes of heterogeneous data such as documents or rich media. Blob storage provides a scalable, resilient way to store terabytes of user data. The storage facilities can also integrate with the Access Control service to ensure users' data is delivered securely.
    Training & Examples: these links point to online Windows Azure training labs and examples where you can learn more about the individual ingredients described above. (Note: the entire Windows Azure Training Kit can also be downloaded for offline use.)
    - Windows Azure (16 labs): Windows Azure is an internet-scale cloud computing and services platform hosted in Microsoft data centers, which provides an operating system and a set of developer services that can be used individually or together. It gives developers the choice to build web applications; applications running on connected devices, PCs, or servers; or hybrid solutions offering the best of both worlds. New or enhanced applications can be built using existing skills with the Visual Studio development environment and the .NET Framework. With its standards-based and interoperable approach, the services platform supports multiple internet protocols, including HTTP, REST, SOAP, and plain XML.
    - SQL Azure (7 labs): Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data.
    - Windows Azure Services (9 labs): as applications collaborate across organizational boundaries, ensuring secure transactions across disparate security domains is crucial but difficult to implement. Windows Azure Services provides hosted authentication and access control using powerful, secure, standards-based infrastructure.
    - Developing Applications for the Cloud, 2nd Edition (eBook): this book demonstrates how you can create from scratch a multi-tenant, Software as a Service (SaaS) application to run in the cloud using the latest versions of the Windows Azure Platform and tools. The book is intended for any architect, developer, or information technology (IT) professional who designs, builds, or operates applications and services that run on or interact with the cloud.
    - Fabrikam Shipping (SaaS reference application): this is a full end-to-end sample scenario which demonstrates how to use the Windows Azure platform for exposing an application as a service. We developed this demo just as you would: we had an existing on-premises sample, Fabrikam Shipping, and we wanted to see what it would take to transform it into a full subscription-based solution. The demo you find here is the result of that investigation.
    See my Windows Azure Resource Guide for more guidance on how to get started, including more links to web portals, training kits, samples, and blogs related to Windows Azure.
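    To make the per-tenant database idea concrete, here is a minimal sketch of resolving a tenant to its own SQL Azure connection string; the tenant names, server, and catalog naming scheme are invented for illustration, not a prescribed pattern:

        using System;
        using System.Collections.Generic;
        using System.Data.SqlClient;

        class TenantConnectionResolver
        {
            // Hypothetical mapping; a real system would keep this in a catalog database.
            static readonly Dictionary<string, string> TenantDatabases =
                new Dictionary<string, string>
                {
                    { "contoso",  "contoso_db"  },
                    { "fabrikam", "fabrikam_db" },
                };

            public static string GetConnectionString(string tenant)
            {
                var builder = new SqlConnectionStringBuilder
                {
                    DataSource = "myserver.database.windows.net",  // placeholder server
                    InitialCatalog = TenantDatabases[tenant],      // one database per tenant
                    UserID = "appuser@myserver",
                    Password = "placeholder",
                    Encrypt = true
                };
                return builder.ConnectionString;
            }

            static void Main()
            {
                using (var conn = new SqlConnection(GetConnectionString("contoso")))
                {
                    conn.Open();   // opens against contoso's dedicated database
                    Console.WriteLine("Connected to: " + conn.Database);
                }
            }
        }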

    Read the article

  • SD card bigger than 2 GB is not recognized in Ubuntu 12.04

    - by dex1
    When I insert a card of up to 2 GB it is immediately seen by the system, but a bigger one is not seen. I presume the issue is not with the card reader itself, since it reads all cards under Windows 7, but with the Linux driver. I could see some people having similar issues but no solution. Any help appreciated. GParted doesn't see cards bigger than 2 GB.
    After inserting the small card:
    ubuntu@ubuntu:~$ dmesg
    [10169.384481] mmc0: new SD card at address a95c
    [10169.384870] mmcblk0: mmc0:a95c SD016 14.0 MiB
    [10169.386715] mmcblk0: p1
    Everything worked fine. Then I removed the small card, put in the 8 GB one, and waited for 2 minutes:
    [10295.736422] mmc0: card a95c removed
    [10362.448383] sdhci: Switching to 1.8V signalling voltage failed, retrying with S18R set to 0
    [10372.480076] mmc0: Timeout waiting for hardware interrupt.
    [10382.496146] mmc0: Timeout waiting for hardware interrupt.
    [10392.512149] mmc0: Timeout waiting for hardware interrupt.
    [10402.528145] mmc0: Timeout waiting for hardware interrupt.
    [10402.529267] mmc0: error -110 whilst initialising SD card
    [10402.748807] sdhci: Switching to 1.8V signalling voltage failed, retrying with S18R set to 0
    [10412.768063] mmc0: Timeout waiting for hardware interrupt.
    [10422.784051] mmc0: Timeout waiting for hardware interrupt.
    [10432.800076] mmc0: Timeout waiting for hardware interrupt.
    [10442.816067] mmc0: Timeout waiting for hardware interrupt.
    [10442.817165] mmc0: error -110 whilst initialising SD card
    [10443.040805] sdhci: Switching to 1.8V signalling voltage failed, retrying with S18R set to 0
    [10453.056145] mmc0: Timeout waiting for hardware interrupt.
    [10463.072139] mmc0: Timeout waiting for hardware interrupt.
    [10473.088050] mmc0: Timeout waiting for hardware interrupt.
    [10483.104046] mmc0: Timeout waiting for hardware interrupt.
    [10483.104107] mmc0: error -110 whilst initialising SD card
    [10483.328960] sdhci: Switching to 1.8V signalling voltage failed, retrying with S18R set to 0
    [10493.344144] mmc0: Timeout waiting for hardware interrupt.
    ubuntu@ubuntu:~$ lspci
    00:00.0 Host bridge: Intel Corporation Mobile PM965/GM965/GL960 Memory Controller Hub (rev 03)
    00:02.0 VGA compatible controller: Intel Corporation Mobile GM965/GL960 Integrated Graphics Controller (primary) (rev 03)
    00:02.1 Display controller: Intel Corporation Mobile GM965/GL960 Integrated Graphics Controller (secondary) (rev 03)
    00:1a.0 USB controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #4 (rev 03)
    00:1a.1 USB controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #5 (rev 03)
    00:1a.7 USB controller: Intel Corporation 82801H (ICH8 Family) USB2 EHCI Controller #2 (rev 03)
    00:1b.0 Audio device: Intel Corporation 82801H (ICH8 Family) HD Audio Controller (rev 03)
    00:1c.0 PCI bridge: Intel Corporation 82801H (ICH8 Family) PCI Express Port 1 (rev 03)
    00:1c.3 PCI bridge: Intel Corporation 82801H (ICH8 Family) PCI Express Port 4 (rev 03)
    00:1c.4 PCI bridge: Intel Corporation 82801H (ICH8 Family) PCI Express Port 5 (rev 03)
    00:1c.5 PCI bridge: Intel Corporation 82801H (ICH8 Family) PCI Express Port 6 (rev 03)
    00:1d.0 USB controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #1 (rev 03)
    00:1d.1 USB controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #2 (rev 03)
    00:1d.2 USB controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #3 (rev 03)
    00:1d.7 USB controller: Intel Corporation 82801H (ICH8 Family) USB2 EHCI Controller #1 (rev 03)
    00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev f3)
    00:1f.0 ISA bridge: Intel Corporation 82801HM (ICH8M) LPC Interface Controller (rev 03)
    00:1f.1 IDE interface: Intel Corporation 82801HM/HEM (ICH8M/ICH8M-E) IDE Controller (rev 03)
    00:1f.2 SATA controller: Intel Corporation 82801HM/HEM (ICH8M/ICH8M-E) SATA Controller [AHCI mode] (rev 03)
    00:1f.3 SMBus: Intel Corporation 82801H (ICH8 Family) SMBus Controller (rev 03)
    07:00.0 Ethernet controller: Marvell Technology Group Ltd. 88E8072 PCI-E Gigabit Ethernet Controller (rev 16)
    0a:01.0 FireWire (IEEE 1394): O2 Micro, Inc. Firewire (IEEE 1394) (rev 02)
    0a:01.2 SD Host controller: O2 Micro, Inc. Integrated MMC/SD Controller (rev 02)
    0a:01.3 Mass storage controller: O2 Micro, Inc. Integrated MS/xD Controller (rev 01)
    The same cards on the same machine (same reader), only under a different OS (Windows 7), work flawlessly. Some interesting reading I came across, though it is Chinese to me: http://www.mail-archive.com/[email protected]/msg14598.html and another bit: http://article.gmane.org/gmane.linux.kernel.mmc/11973/match=sd+card+not+recognized

    Read the article

  • Configuration for a two machine ESXi cluster using VSA to present local storage to VMs

    - by MDMarra
    I'm designing a little vSphere 5 cluster for one of our remote sites. We have some IBM x3650s that have 6x 300GB 10K RPM drives in them, along with dual quad core CPUs and 24GB RAM. Because we use HP P4500 G2s at our primary site, we have licenses available for HP P4000 VSAs. I thought that this would be the perfect opportunity to use them. Below is a basic drawing of what I want to accomplish: I want to run a P4000 VSA on each server and run them in a Network RAID-10 (Lefthand speak for network mirroring, think of it as RAID 1 across nodes or as an active/active storage cluster). I will then present this storage to guests that will run on this mini-cluster. It will be managed by a vCenter Server on our main site. All connections will be GbE with two dedicated to storage. Management and Data will share a pair of connections, since I don't expect there to be high load. These servers are just there to provide directory services, dhcp, printing, etc. Does anyone see anything potentially wrong with this approach? Is this the best way to do this without adding additional dedicated storage heads? Are there any pitfalls in this design, besides the lack of dedicated Data/Mgmt interfaces?

    Read the article

  • Sizing Switches for Storage and Production

    - by Untalented
    A couple of questions. Should you always completely separate the storage network switches from the production switches, or are VLANs fine to segment this traffic? Is there a golden rule here? How do you properly size a switch for your environment based on the specifications the manufacturer provides (throughput, forwarding throughput, stacking throughput, max MAC)? If you have two switch options and one has a maximum MAC address table of 8,000 vs. another with 16,000, what does this really mean to me? How do I make sure one vs. another is sized properly for me? Besides VLAN and jumbo frame support, are there any other must-haves for a virtual environment's production or storage networks? There is a wealth of knowledge on sizing SANs and such, but this seems equally important, and it's quite challenging to find as much information. -- Just to add some tidbits of information about the environment: the setup above refers to the data centers, which support two different locations with about 100 users in total between them. The storage traffic will be iSCSI, and there will be 3 ESXi hosts and one SAN housing about 2.7 TB of data. Since there is currently no storage network in place (no SAN), I'm having a hard time with #2: really determining what backplane throughput and switch specifications will be sufficient.

    Read the article

  • SQL Azure Security: DoS

    - by Herve Roggero
    Since I decided to understand in more depth how SQL Azure works, I started to dig into its performance characteristics. I decided to write an application that allows me to put SQL Azure to the test and compare results with a local SQL Server database. One of the options I added is the ability to issue the same command on multiple threads to get certain performance metrics. That's when I stumbled on an interesting security feature of SQL Azure: its Denial of Service (DoS) detection engine. What this security feature does is check the number of connections being established, and if the rate of connections is too high, SQL Azure blocks all communication from that machine. I am still trying to learn more about this specific feature, but it appears that going to the SQL Azure portal and testing the connection from the portal "resets" the feature and you are allowed to connect again... until you reach the login threshold. In the specific test I was performing, all the logins were successful. I haven't tried to log in with an invalid account or password... that will be for next time. On my LinkedIn group (SQL Server and SQL Azure Security: http://www.linkedin.com/groups?gid=2569994&trk=hb_side_g) Chip Andrews (www.sqlsecurity.com) pointed out that this feature in itself could present an internal threat. In theory, a rogue application could issue many login requests from a NATed network, which could potentially prevent any production system on the same network from connecting to SQL Azure. My initial response was that this could indeed be the case. However, while the TCP traffic carries the NATed IP address of a machine (which masks the origin of the machine making the SQL request), the TDS protocol itself contains the IP address of the machine making the initial request; so technically there would be a way for SQL Azure to block only the internal IP address making the rogue requests. So this warrants further investigation... stay tuned...
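    The multi-threaded probe the author describes could look something like the minimal sketch below; the server, database, credentials, and the count of 100 connections are placeholders, and (per the article) running a burst like this against SQL Azure may trip the DoS detection and temporarily block your machine, so point it only at a disposable test database:

        using System;
        using System.Data.SqlClient;
        using System.Threading.Tasks;

        class ConnectionRateProbe
        {
            static void Main()
            {
                // Placeholder connection string - use a disposable test database.
                const string connStr =
                    "Server=tcp:myserver.database.windows.net;Database=testdb;" +
                    "User ID=user@myserver;Password=placeholder;Encrypt=true;";

                // Open many connections concurrently and run the same trivial command;
                // a rapid connection burst is what can trigger the throttling described.
                Parallel.For(0, 100, i =>
                {
                    try
                    {
                        using (var conn = new SqlConnection(connStr))
                        using (var cmd = new SqlCommand("SELECT 1", conn))
                        {
                            conn.Open();
                            cmd.ExecuteScalar();
                        }
                    }
                    catch (SqlException ex)
                    {
                        Console.WriteLine("Attempt {0} failed: {1}", i, ex.Message);
                    }
                });
            }
        }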

    Read the article
