Search Results

Search found 14028 results on 562 pages for 'online storage'.


  • Tom Kyte Seminar in Budapest, April 19-20, 2010

    - by Fekete Zoltán
    You can still register for Tom Kyte's 2010 Budapest seminar via the links found here: a classroom seminar held in Budapest on April 19-20, 2010. Topics: The top 10 - no, 11 - new features of Oracle Database 11g; All about binding; Materialized Views, Caching; Effective Indexing; Storage Techniques; Reorganizing objects - when and how. ASK TOM!

    Read the article

  • SQL – Quick Start with Explorer Sections of NuoDB – Query NuoDB Database

    - by Pinal Dave
    This is the third post in the series of blog posts I am writing about NuoDB. NuoDB is a very innovative and easy-to-use product. I can clearly see how one can scale out NuoDB with so much ease and confidence. In my very first blog post we discussed how we can install NuoDB (link), and in my second post I discussed how we can manage the NuoDB database transaction engines and storage managers with a few clicks (link). Note: You can download NuoDB from here. In this post, we will learn how we can use the Explorer feature of NuoDB to do various SQL operations. NuoDB has a browser-based Explorer, which is very powerful and has many of the features any IDE would normally have. Let us see how it works in the following step-by-step tutorial. Let us go to the NuoDB Console by typing the following URL in your browser: http://localhost:8080/ It will bring you to the QuickStart screen. Make sure that you have created the sample database. If you have not created the sample database, click on Create Database and create it successfully. Now go to the NuoDB Explorer by clicking on the main tab, and it will ask you for your domain username and password. Enter the username domain and the password bird. Alternatively, you can enter quickstart as both the username and the password. Once you enter the password you will be able to see the databases. In our example we have installed the sample database, hence you will see the Test database in our Database Hierarchy screen. When you click on the database it will ask for the database login. Note that the database login is different from the domain login, and you will have to enter your database login over here. In our case the database username is dba and the password is goalie. Once you enter a valid username and password it will display your database. Further expand your database and you will notice various objects in your database. Once you have explored the various objects, select any table and click on Open. When you click on Execute, it will display the SQL script to select the data from the table. The autogenerated script displays the entire result set from the table. The NuoDB Explorer is very powerful and makes the life of developers very easy. If you click on List SQL Statements it will list all the available SQL statements right away in the Query Editor. You can see the popup window in the following image. Here is the cool thing for geeks: you can even click on Query Plan and it will display a text-based query plan as well. In the case of a simple SELECT the query plan will be quite simple; however, when we write complex queries it will be very interesting. We can use the Query Plan tab for performance tuning of the database. Here is another feature: when we click on List Tables in NuoDB Explorer, it lists all the available tables in the query editor. This is very helpful when we are writing a long complex query. Here is a relatively complex example I have built using INNER JOIN syntax (a sketch of such a join appears below). Right below it I have displayed the Query Plan. The query plan displays all the little details related to the query. Well, we just wrote a multi-table query and executed it against the NuoDB database. You can use the NuoDB Admin section and do various analyses of the query and its performance. NuoDB is a distributed database built on a patented emergent architecture with full support for SQL and ACID guarantees. It allows you to add Transaction Engine processes to a running system to improve the performance of your system.
    You can also add a second Storage Engine to your running system for redundancy purposes. Conversely, you can shut down processes when you don't need the extra database resources. NuoDB also provides developers and administrators with a single intuitive interface for centrally monitoring deployments. If you have read my blog posts and have not tried out NuoDB, I strongly suggest that you download it today and catch up on the learning with me. Trust me, though the product is very powerful, it is extremely easy to learn and use. Reference: Pinal Dave (http://blog.sqlauthority.com)   Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB
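    For illustration, here is a minimal sketch of the kind of multi-table INNER JOIN built in the post; the table and column names are hypothetical, invented for this example, and are not NuoDB's actual sample schema:

        -- Hypothetical two-table join of the sort shown in the post;
        -- TEAMS/PLAYERS and their columns are illustrative only.
        SELECT t.TEAM_NAME, p.PLAYER_NAME, p.POSITION
        FROM TEAMS t
        INNER JOIN PLAYERS p ON p.TEAM_ID = t.TEAM_ID
        WHERE t.SEASON_YEAR = 2012
        ORDER BY t.TEAM_NAME, p.PLAYER_NAME;

    Running something like this in the Explorer's query editor and then switching to the Query Plan tab is the quickest way to see the join-order detail the post describes.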

    Read the article

  • ISO Files to USB – The Cheap and Easy Way

    - by RonGarlit
    (DISCLAIMER: Yes, there is lots of more elegant ISO software besides the free Microsoft tool I'm about to show. But free is free, and it has been tested and works for me for making advanced bootable USB drives. That is another story. Look up Windows 8 Developer Preview for that one on BING.) Those of us who work with new technology all the time accumulate a lot of ISO files and have to burn them to CDs/DVDs quite often. But we now have machines without burners in the corporate environment. We personally have netbooks and lightweight, highly mobile laptops that do not have DVD burners. USB ports are all the rage, and now we have USB 3.0, which is way faster than the 2.0 we are used to. Just looking at the technology, the space savings and the cost issues alone is a reason to buy these answers to the DVD. So what is special about USB 2.0 and USB 3.0? USB 2.0 has a maximum speed of 480 Mbps... (That is Megabits per SECOND!!) Now look at the storage that we have with USB thumb drives that are now up to 64 GB in size, and cell phones and PDAs that have lots of internal storage built in, well above the 16 gig range. At the maximum USB 2.0 speed of 480 Mbps, a full transfer of data between devices can take a long time. Time is money, right? Ever back up an iPhone? Don't get me started. So at least the engineers have been planning ahead with USB 3.0, which offers a maximum transfer speed of 4.8 Gbps... (That is Gigabits per SECOND!!) That speed is almost 10 times faster than USB 2.0…. We don't need to do the math on that one, do we? But for now I'm thrilled with USB 2.0 and the fact I can get these little 4 gig USB drives for $4.00 each at Staples on sale. Well, that is a no-brainer, don't you think? But what can you do with them to replace that DVD? Simply and cheaply put………. THIS! First let's get an ISO file, like the Visual Studio 2010 Ultimate DVD ISO from MSDN, to demonstrate with. I develop on several computers, so this is a good choice for me. So we download the ISO file and put it in a folder somewhere, like this. Next we go to the Windows 7 USB/DVD Download Tool site and read about the tool: http://www.microsoftstore.com/store/msstore/html/pbPage.Help_Win7_usbdvd_dwnTool And click this link to get the tool and install it. Once it is installed you go to the Start, Programs menu, Windows 7 USB DVD Download Tool folder. And then click the tool to open it up. As you will see, it is a sweet, simple tool that was originally designed to put the ISO for Windows 7, which is designed to be bootable, on a USB or DVD for us geeks to play with. It is now being used for the Windows 8 Developer Preview by many developers for the same purpose it was built for in the past. But for now we will use it to put a NON-bootable ISO on a USB drive. Hey, it does the job, and I'm reusing a leftover program. Why buy the fancy one, or a free trial, and clutter up my machine? We will click the BROWSE button and navigate to where we put the ISO file we want to put on the USB drive. Obviously we are going to click NEXT and continue to select a USB Device (you can guess what the DVD button is for). Next we select the USB drive that we have plugged into one of our laptop's USB ports. Then we click the BEGIN COPYING button, and the first thing the program does is format our USB drive. Then it starts copying files out of the ISO and constructing the USB drive as if it were a DVD. So now that the files are copying to the drive, I'm going to warn you. We will error out here. This program was designed for bootable ISOs, which this one is NOT.
    No problem, because what fails is the writing of the bootable data to the drive, data that isn't there in the first place. No biggie…. Forget the STARTOVER button is even there; click the dialog's CLOSE button and exit the program. Now go to Windows Explorer and navigate to the USB device. You can now access everything and even add stuff to the drive. But for me, I want to keep this drive for one purpose, and that is to install VS2010 on various machines. So the only stuff I'll add to it is a folder of notes on Visual Studio that I might want to put on the other machines I'm installing VS2010 onto. So that is it. Have a nice day! The Ron

    Read the article

  • NoSQL Memcached API for MySQL: Latest Updates

    - by Mat Keep
    With data volumes exploding, it is vital to be able to ingest and query data at high speed. For this reason, MySQL has implemented NoSQL interfaces directly to the InnoDB and MySQL Cluster (NDB) storage engines, which bypass the SQL layer completely. Without SQL parsing and optimization, Key-Value data can be written directly to MySQL tables up to 9x faster, while maintaining ACID guarantees. In addition, users can continue to run complex queries with SQL across the same data set, providing real-time analytics to the business or anonymizing sensitive data before loading to big data platforms such as Hadoop, while still maintaining all of the advantages of their existing relational database infrastructure. This and more is discussed in the latest Guide to MySQL and NoSQL where you can learn more about using the APIs to scale new generations of web, cloud, mobile and social applications on the world's most widely deployed open source database. The native Memcached API is part of the MySQL 5.6 Release Candidate, and is already available in the GA release of MySQL Cluster. By using the ubiquitous Memcached API for writing and reading data, developers can preserve their investments in Memcached infrastructure by re-using existing Memcached clients, while also eliminating the need for application changes. Speed, when combined with flexibility, is essential in the world of growing data volumes and variability. Complementing NoSQL access, support for on-line DDL (Data Definition Language) operations in MySQL 5.6 and MySQL Cluster enables DevOps teams to dynamically update their database schema to accommodate rapidly changing requirements, such as the need to capture additional data generated by their applications. These changes can be made without database downtime. Using the Memcached interface, developers do not need to define a schema at all when using MySQL Cluster. Let's look a little more closely at the Memcached implementations for both InnoDB and MySQL Cluster. Memcached Implementation for InnoDB: The Memcached API for InnoDB is previewed as part of the MySQL 5.6 Release Candidate. As illustrated in the following figure, Memcached for InnoDB is implemented via a Memcached daemon plug-in to the mysqld process, with the Memcached protocol mapped to the native InnoDB API. Figure 1: Memcached API Implementation for InnoDB. With the Memcached daemon running in the same process space, users get very low latency access to their data while also leveraging the scalability enhancements delivered with InnoDB and a simple deployment and management model. Multiple web / application servers can remotely access the Memcached / InnoDB server to get direct access to a shared data set. With simultaneous SQL access, users can maintain all the advanced functionality offered by InnoDB, including support for Foreign Keys, XA transactions and complex JOIN operations. Benchmarks demonstrate that the NoSQL Memcached API for InnoDB delivers up to 9x higher performance than the SQL interface when inserting new key/value pairs, with a single low-end commodity server supporting nearly 70,000 Transactions per Second. Figure 2: Over 9x Faster INSERT Operations. The delivered performance demonstrates MySQL with the native Memcached NoSQL interface is well suited for high-speed inserts with the added assurance of transactional guarantees.
    You can check out the latest Memcached / InnoDB developments and benchmarks here. You can learn how to configure the Memcached API for InnoDB here. Memcached Implementation for MySQL Cluster: Memcached API support for MySQL Cluster was introduced with General Availability (GA) of the 7.2 release, and joins an extensive range of NoSQL interfaces that are already available for MySQL Cluster. Like Memcached, MySQL Cluster provides a distributed hash table with in-memory performance. MySQL Cluster extends Memcached functionality by adding support for write-intensive workloads, a full relational model with ACID compliance (including persistence), rich query support, auto-sharding and 99.999% availability, with extensive management and monitoring capabilities. All writes are committed directly to MySQL Cluster, eliminating cache invalidation and the overhead of data consistency checking, to ensure complete synchronization between the database and cache. Figure 3: Memcached API Implementation with MySQL Cluster. Implementation is simple: 1. The application sends reads and writes to the Memcached process (using the standard Memcached API). 2. This invokes the Memcached Driver for NDB (which is part of the same process). 3. The NDB API is called, providing for very quick access to the data held in MySQL Cluster's data nodes. The solution has been designed to be very flexible, allowing the application architect to find a configuration that best fits their needs. It is possible to co-locate the Memcached API in either the data nodes or application nodes, or alternatively within a dedicated Memcached layer. The benefit of this flexible approach to deployment is that users can configure behavior on a per-key-prefix basis (through tables in MySQL Cluster) and the application doesn't have to care – it just uses the Memcached API and relies on the software to store data in the right place(s) and to keep everything synchronized. Using Memcached for Schema-less Data: By default, every Key / Value is written to the same table, with each Key / Value pair stored in a single row – thus allowing schema-less data storage. Alternatively, the developer can define a key-prefix so that each value is linked to a pre-defined column in a specific table. Of course, if the application needs to access the same data through SQL, then developers can map key prefixes to existing table columns, enabling Memcached access to schema-structured data already stored in MySQL Cluster (a sketch of such a mapping appears below). Conclusion: Download the Guide to MySQL and NoSQL to learn more about NoSQL APIs and how you can use them to scale new generations of web, cloud, mobile and social applications on the world's most widely deployed open source database. See how to build a social app with MySQL Cluster and the Memcached API from our on-demand webinar, or take a look at the docs. Don't hesitate to use the comments section below for any questions you may have.
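    To make that key-prefix mapping concrete, here is a minimal sketch using the innodb_memcache.containers configuration table that ships with the InnoDB Memcached plug-in; the values mirror the demo_test sample from the MySQL 5.6 documentation, so verify the exact column list against your version:

        -- Minimal sketch: route Memcached keys prefixed with @@aaa to the
        -- existing InnoDB table test.demo_test (values per the 5.6 demo).
        INSERT INTO innodb_memcache.containers
          (name, db_schema, db_table, key_columns, value_columns,
           flags, cas_column, expire_time_column, unique_idx_name_on_key)
        VALUES
          ('aaa', 'test', 'demo_test', 'c1', 'c2', 'c3', 'c4', 'c5', 'PRIMARY');

    After restarting the Memcached plug-in, a client-side request such as get @@aaa.somekey should read straight from that table through the native InnoDB API.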

    Read the article

  • IOUG and Oracle Enterprise Manager User Community Twitter Chat and Sessions at OpenWorld

    - by Anand Akela
    As in many past years, we will have the annual Oracle Users Forum on Sunday, September 30th, 2012 at Moscone West, Levels 2 & 3. It will be open to all registered attendees of Oracle Open World and conferences running from September 29 to October 5, 2012. This will be a great opportunity to meet with colleagues, peers, and subject matter experts to share best practices, tips, and techniques around Oracle technologies. You could sit in on a special interest group (SIG) meeting or session and learn how to get more out of Oracle technologies and applications. IOUG and the Oracle Enterprise Manager team invite you to join a Twitter Chat on Sunday, Sep. 30th from 11:30 AM to 12:30 PM. IOUG leaders, Enterprise Manager SIG contributors and many Oracle Users Forum speakers will answer questions related to their experience with Oracle Enterprise Manager and the activities and resources available for Enterprise Manager SIG members. You can participate in the chat using hash tag #em12c on Twitter.com or by going to tweetchat.com/room/em12c (requires Twitter credentials to participate). Feel free to join IOUG and Enterprise Manager team members at the User Group Pavilion on the 2nd Floor, Moscone West. Here is the complete list of Oracle Enterprise Manager sessions during the Oracle Users Forum:

      8:00AM - 8:45AM | UGF4569 - Oracle RAC Migration with Oracle Automatic Storage Management and Oracle Enterprise Manager 12c | Vinod Emmanuel - Database Engineering, Dell, Inc.; Wendy Chen - Sr. Systems Engineer, Dell, Inc. | Moscone West - 2011
      8:00AM - 8:45AM | UGF10389 - Monitoring Storage Systems for Oracle Enterprise Manager 12c | Anand Ranganathan - Product Manager, NetApp | Moscone West - 2016
      9:00AM - 10:00AM | UGF2571 - Make Oracle Enterprise Manager Sing and Dance with the Command-Line Interface | Ray Smith - Senior Database Administrator, Portland General Electric | Moscone West - 2011
      10:30AM - 11:30AM | UGF2850 - Optimal Support: Oracle Enterprise Manager 12c Cloud Control, My Oracle Support, and More | April Sims - DBA, Southern Utah University | Moscone West - 2011
      11:30AM - 12:30PM | IOUG and Oracle Enterprise Manager Joint Tweet Chat - Join IOUG leaders, IOUG's Enterprise Manager SIG contributors and speakers on Twitter and ask questions related to practitioners' experience with Oracle Enterprise Manager and the new IOUG Enterprise Manager SIG. To attend and participate in the chat, please use hash tag #em12c on twitter.com or your favorite Twitter client. You can also go to tweetchat.com/room/em12c to watch the conversation or log in with your Twitter credentials to ask questions. | User Group Pavilion, 2nd Floor, Moscone West
      12:30PM - 2:00PM | UGF5131 - Migrating from Oracle Enterprise Manager 10g Grid Control to 12c Cloud Control | Leighton Nelson - Database Administrator, Mercy | Moscone West - 2011
      2:15PM - 3:15PM | UGF6511 - Database Performance Tuning: Get the Best out of Oracle Enterprise Manager 12c Cloud Control | Mike Ault - Oracle Guru, Texas Memory Systems Inc.; Tariq Farooq - CEO/Founder, BrainSurface | Moscone West - 2011
      3:30PM - 4:30PM | UGF4556 - Will It Blend? Verifying Capacity in Server and Database Consolidations | Jeremiah Wilton - Database Technology, Blue Gecko / DatAvail | Moscone West - 2018
      3:30PM - 4:30PM | UGF10400 - Oracle Enterprise Manager 12c: Monitoring, Metric Extensions, and Configuration Best Practices | Kellyn Pot'Vin - Sr. Technical Consultant, Enkitec | Moscone West - 2011

    Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • Big Data – Role of Cloud Computing in Big Data – Day 11 of 21

    - by Pinal Dave
    In yesterday's blog post we learned the importance of NewSQL. In this article we will understand the role of the Cloud in the Big Data story. What is Cloud? Cloud has been the biggest buzzword of the last few years. Everyone knows about the Cloud and it is extremely well defined online. In this article we will discuss cloud in the context of Big Data. Cloud computing is a method of providing shared computing resources to applications which require dynamic resources. These resources include applications, computing, storage, networking, development and various deployment platforms. The fundamental idea of cloud computing is that it shares pretty much all the resources and delivers them to end users as a service. Examples of Cloud Computing and Big Data are Google and Amazon.com. Both have fantastic Big Data offerings with the help of the cloud. We will discuss this later in this blog post. There are two different Cloud Deployment Models: 1) the Public Cloud and 2) the Private Cloud. Public Cloud: The Public Cloud is cloud infrastructure built by commercial providers (Amazon, Rackspace etc.) who create a highly scalable data center that hides the complex infrastructure from the consumer and provides various services. Private Cloud: The Private Cloud is cloud infrastructure built by a single organization that manages a highly scalable data center internally. Here is a quick comparison between Public Cloud and Private Cloud from Wikipedia:

                          Public Cloud                        Private Cloud
      Initial cost        Typically zero                      Typically high
      Running cost        Unpredictable                       Unpredictable
      Customization       Impossible                          Possible
      Privacy             No (host has access to the data)    Yes
      Single sign-on      Impossible                          Possible
      Scaling up          Easy while within defined limits    Laborious but no limits

    Hybrid Cloud: The Hybrid Cloud is cloud infrastructure built from the composition of two or more clouds, like a public and a private cloud. The hybrid cloud gives the best of both worlds as it combines multiple cloud deployment models together. Cloud and Big Data – Common Characteristics: There are many characteristics of Cloud Architecture and Cloud Computing which are also essentially important for Big Data. They highly overlap, and in many places it just makes sense to use the power of both architectures and build a highly scalable framework. Here is the list of the characteristics of cloud computing important in Big Data: Scalability; Elasticity; Ad-hoc Resource Pooling; Low Cost to Setup Infrastructure; Pay on Use or Pay as You Go; Highly Available. Leading Big Data Cloud Providers: There are many players in the Big Data Cloud, but we will list a few of the known players here. Amazon: Amazon is arguably the most popular Infrastructure as a Service (IaaS) provider. The history of how Amazon started in this business is very interesting. They started out with a massive infrastructure to support their own business. Gradually they figured out that their own resources were underutilized most of the time. They decided to get the maximum out of the resources they had, and hence they launched their Amazon Elastic Compute Cloud (Amazon EC2) service in 2006. Their products have evolved a lot recently and it is now one of their primary businesses besides their retail selling. Amazon also offers Big Data services under Amazon Web Services.
    Here is the list of the included services: Amazon Elastic MapReduce – processes very high volumes of data; Amazon DynamoDB – a fully managed NoSQL (Not Only SQL) database service; Amazon Simple Storage Service (S3) – a web-scale service designed to store and accommodate any amount of data; Amazon High Performance Computing – provides low-latency, tuned high performance computing clusters; Amazon Redshift – a petabyte-scale data warehousing service. Google: Though Google is known for its Search Engine, we all know that it is much more than that. Google Compute Engine – offers secure, flexible computing from energy-efficient data centers; Google BigQuery – allows SQL-like queries to run against large datasets (a sketch of such a query appears below); Google Prediction API – a cloud-based machine learning tool. Other Players: Besides Amazon and Google we also have other players in the Big Data market. Microsoft is also attempting Big Data with the cloud with Microsoft Azure. Additionally, Rackspace and NASA together have initiated OpenStack. The goal of OpenStack is to provide a massively scaled, multitenant cloud that can run on any hardware. Things to Watch: Cloud-based solutions integrate very well with the Big Data story, and they are very economical to implement; however, there are a few things one should be careful about when deploying Big Data on cloud solutions. Here is a list of a few things to watch: Data Integrity; Initial Cost; Recurring Cost; Performance; Data Access; Security; Location; Compliance. Every company has a different approach to Big Data and different rules and regulations. Based on various factors, one can implement their own custom Big Data solution on a cloud. Tomorrow: In tomorrow's blog post we will discuss various Operational Databases supporting Big Data. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
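    As a flavor of the "SQL-like queries against large datasets" that Google BigQuery allows, here is a hypothetical aggregation; the dataset, table, and column names are invented for illustration and are not from any real service:

        -- Hypothetical BigQuery-style aggregation over a large web-log
        -- dataset; weblogs.page_views and its columns are illustrative.
        SELECT country, COUNT(*) AS visits
        FROM weblogs.page_views
        WHERE event_date >= '2013-01-01'
        GROUP BY country
        ORDER BY visits DESC
        LIMIT 10;

    The point of the sketch is the Big Data promise described above: the same familiar SQL shape, executed by the provider's infrastructure over volumes far beyond a single machine.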

    Read the article

  • Management and Monitoring Tools for Windows Azure

    - by BuckWoody
    With such a large platform, Windows Azure has a lot of moving parts. We’ve done our best to keep the interface as simple as possible, while giving you the most control and visibility we can. However, as with most Microsoft products, there are multiple ways to do something – and I’ve always found that to be a great strength. Depending on the situation, I might want a graphical interface, a command-line interface, or just an API so I can incorporate the management into my own tools, or have third-party companies write other tools. While by no means exhaustive, I thought I might put together a quick list of a few tools you can use to manage and monitor Windows Azure components, from our IaaS, SaaS and PaaS offerings. Some of the products focus on one area more than another, but all are available today. I’ll try and maintain this list to keep it current, but make sure you check the date of this post’s update – if it’s more than six months old, it’s most likely out of date. Things move fast in the cloud. The Windows Azure Management Portal The primary tool for managing Windows Azure is our portal – most everything you need is there, from creating new services to querying a database. There are two versions as of this writing – a Silverlight client version, and a newer HTML5 version. The latter is being updated constantly to be in parity with the Silverlight client. There’s a balance in this portal between simplicity and power – we’re following the “less is more” approach, with increasing levels of detail as you work through the portal rather than overwhelming you with a single, long “more is more” page. You can find the Portal here: http://windowsazure.com (then click “Log In” and then “Portal”) Windows Azure Management API You can also use programming tools to either write your own interface, or simply provide management functions directly within your solution. You have two options – you can use the more universal REST APIs, which are a bit more complex but work with any system that can write to them, or the more approachable .NET API calls in code. You can find the reference for the APIs here: http://msdn.microsoft.com/en-us/library/windowsazure/ee460799.aspx  All Class Libraries, for each part of Windows Azure: http://msdn.microsoft.com/en-us/library/ee393295.aspx  PowerShell Command-lets PowerShell is one of the most powerful scripting languages I’ve used with Windows – and it’s baked into all of our products. When you need to work with multiple servers, scripting is really the only way to go, and the Windows Azure PowerShell Command-Lets allow you to work across most any part of the platform – and can even be used within the services themselves. You can do everything with them from creating a new IaaS, PaaS or SaaS service, to controlling them and even working with security and more. You can find more about the Command-Lets here: http://wappowershell.codeplex.com/documentation (older link, still works, will point you to the new ones as well) We have command-line utilities for other operating systems as well: https://www.windowsazure.com/en-us/manage/downloads/  Video walkthrough of using the Command-Lets: http://channel9.msdn.com/Events/BUILD/BUILD2011/SAC-859T  System Center System Center is actually a suite of graphical tools you can use to manage, deploy, control, monitor and tune software from Microsoft and even other platforms.
    This will be the primary tool we’ll recommend for managing a hybrid or contiguous management process – and as time goes on you’ll see more and more features put into System Center for the entire Windows Azure suite of products. You can find the Management Pack and README for it here: http://www.microsoft.com/en-us/download/details.aspx?id=11324  SQL Server Management Studio / Data Tools / Visual Studio SQL Server has built-in management and development tools, and since version 2008 R2 you can use them to manage Windows Azure Databases (see the sketch after this list). Visual Studio also lets you connect to and manage portions of Windows Azure as well as Windows Azure Databases. You can read more about Visual Studio here: http://msdn.microsoft.com/en-us/library/windowsazure/ee405484  You can read more about the SQL tools here: http://msdn.microsoft.com/en-us/library/windowsazure/ee621784.aspx  Vendor-Provided Tools Microsoft does not suggest or endorse a specific third-party product. We do, however, use them, and see lots of other customers use them. You can browse to these sites to learn more, and chat with their folks directly on how they support Windows Azure. Cerebrata: Tools for managing from the command-line, graphical diagnostics, graphical storage management - http://www.cerebrata.com/  Quest Cloud Tools: Monitoring, Storage Management, and costing tools - http://communities.quest.com/community/cloud-tools  Paraleap: Monitoring tool - http://www.paraleap.com/AzureWatch  Cloudgraphs: Monitoring tool - http://www.cloudgraphs.com/  Opstera: Monitoring for Windows Azure and a Scale-out pattern manager - http://www.opstera.com/products/Azureops/  Compuware: SaaS performance monitoring, load testing - http://www.compuware.com/application-performance-management/gomez-apm-products.html  SOASTA: Penetration and Security Testing - http://www.soasta.com/cloudtest/enterprise/  LoadStorm: Load-testing tool - http://loadstorm.com/windows-azure  Open-Source Tools This is probably the most specific set of tools, and the list I’ll have to maintain most often. Smaller projects have a way of coming and going, so I’ll try and make sure this list is current. Windows Azure MMC: (I actually use this one a lot) http://wapmmc.codeplex.com/  Windows Azure Diagnostics Monitor: http://archive.msdn.microsoft.com/wazdmon  Azure Application Monitor: http://azuremonitor.codeplex.com/  Azure Web Log: http://www.xentrik.net/software/azure_web_log.html  Cloud Ninja: Multi-Tenant billing and performance monitor - http://cnmb.codeplex.com/  Cloud Samurai: Multi-Tenant Management - http://cloudsamurai.codeplex.com/    If you have additions to this list, please post them as a comment and I’ll research and then add them. Thanks!
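    As a small taste of managing Windows Azure SQL Databases from the SQL tools above, here is a minimal sketch of provisioning a database with T-SQL from Management Studio; the database name is hypothetical, and the EDITION/MAXSIZE options reflect the service at the time of writing, so verify them against current documentation:

        -- Minimal sketch: create a Windows Azure SQL Database from SSMS,
        -- connected to the server's master database. The name is
        -- hypothetical; options may vary by service version.
        CREATE DATABASE SalesReporting (EDITION = 'web', MAXSIZE = 1 GB);

    The same connection can then run ordinary T-SQL against the new database, which is what makes the existing tools such a natural fit.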

    Read the article

  • Have Fun with Mozilla Firefox’s Spark and Ember Paper Toys [Geek Project]

    - by Asian Angel
    Are you looking for a fun paper-craft project to work on while showing your support for Mozilla Firefox? Then you will enjoy the Spark and Ember paper toys that the folks from the Mozilla Indonesia community have put together. Each comes as an easy-to-print, three-page PDF file for easy storage and sharing with friends. Photo courtesy of Mozilla Blog Indonesia. Download the Spark Paper Toy Download the Ember Paper Toy [via Mozilla Blog Indonesia]

    Read the article

  • World Record Oracle Business Intelligence Benchmark on SPARC T4-4

    - by Brian
    Oracle's SPARC T4-4 server configured with four SPARC T4 3.0 GHz processors delivered the first and best performance of 25,000 concurrent users on the Oracle Business Intelligence Enterprise Edition (BI EE) 11g benchmark, using Oracle Database 11g Release 2 running on Oracle Solaris 10. A SPARC T4-4 server running Oracle Business Intelligence Enterprise Edition 11g achieved 25,000 concurrent users with an average response time of 0.36 seconds with the Oracle BI server cache set to ON. The benchmark data clearly shows that the underlying hardware, the SPARC T4 server, and the Oracle BI EE 11g (11.1.1.6.0 64-bit) platform scale within a single system, supporting 25,000 concurrent users while executing 415 transactions/sec. The benchmark demonstrated the scalability of Oracle Business Intelligence Enterprise Edition 11g 11.1.1.6.0, which was deployed in a vertical scale-out fashion on a single SPARC T4-4 server. Oracle Internet Directory configured on the SPARC T4 server provided authentication for the 25,000 Oracle BI EE users with sub-second response time. A SPARC T4-4 with internal Solid State Drives (SSDs) using the ZFS file system showed significant I/O performance improvement over traditional disk for the Web Catalog activity. In addition, ZFS helped get past the UFS limitation of 32767 sub-directories in a Web Catalog directory. The multi-threaded 64-bit Oracle Business Intelligence Enterprise Edition 11g and the SPARC T4-4 server proved to be a successful combination by providing sub-second response times for the end-user transactions, consuming only half of the available CPU resources at 25,000 concurrent users, leaving plenty of head room for increased load. The Oracle Business Intelligence on SPARC T4-4 server benchmark results demonstrate that comprehensive BI functionality built on a unified infrastructure with a unified business model yields best-in-class scalability, reliability and performance. Oracle BI EE 11g is a newer version of the Business Intelligence Suite with richer and superior functionality. Results produced with the Oracle BI EE 11g benchmark are not comparable to results with the Oracle BI EE 10g benchmark. Oracle BI EE 11g is a more difficult benchmark to run, exercising more features of Oracle BI.

    Performance Landscape. Results for the Oracle BI EE 11g version of the benchmark (not comparable to the 10g version):

      System                                     Number of Users   Response Time (sec)
      1 x SPARC T4-4 (4 x SPARC T4 3.0 GHz)      25,000            0.36

    Results for the Oracle BI EE 10g version of the benchmark (not comparable to the 11g version):

      System                                     Number of Users
      2 x SPARC T5440 (4 x SPARC T2+ 1.6 GHz)    50,000
      1 x SPARC T5440 (4 x SPARC T2+ 1.6 GHz)    28,000

    Configuration Summary. Hardware Configuration: SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz; 128 GB memory; 4 x 300 GB internal SSDs. Storage Configuration: Sun ZFS Storage 7120 with 16 x 146 GB disks. Software Configuration: Oracle Solaris 10 8/11; Oracle Solaris Studio 12.1; Oracle Business Intelligence Enterprise Edition 11g (11.1.1.6.0); Oracle WebLogic Server 10.3.5; Oracle Internet Directory 11.1.1.6.0; Oracle Database 11g Release 2. Benchmark Description: Oracle Business Intelligence Enterprise Edition (Oracle BI EE) delivers a robust set of reporting, ad-hoc query and analysis, OLAP, dashboard, and scorecard functionality with a rich end-user experience that includes visualization, collaboration, and more.
    The Oracle BI EE benchmark test used five different business user roles - Marketing Executive, Sales Representative, Sales Manager, Sales Vice-President, and Service Manager. These roles included a maximum of 5 different pre-built dashboards. Each dashboard page had an average of 5 reports in the form of a mix of charts, tables and pivot tables, returning anywhere from 50 rows to approximately 500 rows of aggregated data. The test scenario also included drill-down into multiple levels from a table or chart within a dashboard. The benchmark test scenario uses a typical business user sequence of dashboard navigation, report viewing, and drill down. For example, a Service Manager logs into the system and navigates to his own set of dashboards using the Service Manager role. The BI user selects the Service Effectiveness dashboard, which shows him four distinct reports - Service Request Trend, First Time Fix Rate, Activity Problem Areas, and Cost Per Completed Service Call - spanning 2002 to 2005. The user then proceeds to view the Customer Satisfaction dashboard, which also contains a set of 4 related reports, and drills down on some of the reports to see the detail data. The BI user continues to view more dashboards - Customer Satisfaction and Service Request Overview, for example. After navigating through those dashboards, the user logs out of the application. The benchmark test is executed against a full production version of the Oracle Business Intelligence 11g Applications with a fully populated underlying database schema. The business processes in the test scenario closely represent a real-world customer scenario. See Also: SPARC T4-4 Server (oracle.com, OTN); Oracle Business Intelligence (oracle.com, OTN); Oracle Database 11g Release 2 Enterprise Edition (oracle.com, OTN); WebLogic Suite (oracle.com, OTN); Oracle Solaris (oracle.com, OTN). Disclosure Statement: Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 30 September 2012.

    Read the article

  • ASP.NET Podcast Show #144 - Windows Azure Part II - Worker Roles

    - by Wallym
    Original Url: http://aspnetpodcast.com/CS11/blogs/asp.net_podcast/archive/2010/10/28/asp-net-podcast-show-144-windows-azure-part-ii-worker-roles.aspx This show is on Web & Worker Roles in Azure, Blob Storage, and the Visual Studio 2010 Azure tools. Subscribe to everything. Subscribe to WMV. Subscribe to M4V for iPhone/iPad. Subscribe to MP3. Download WMV. Download MOV. Download M4V for iPhone/iPad. Download MP3.

    Read the article

  • Oracle Enterprise Manager Cloud Control 12c: Contributing to emerging Cloud standards

    - by Anand Akela
    Contributed by Tony Di Cenzo, Director for Standards Strategy and Architecture, and Mark Carlson, Principal Cloud Architect, for Oracle's Systems Management and Storage Products Groups. As one would expect of an industry leader, Oracle's participation in industry standards bodies is extensive. We participate in dozens of organizations that produce open standards which apply to our products, and our commitment to the success of these organizations is manifest in several ways - we support them financially through our memberships; our senior engineers are active participants, often serving in leadership positions on boards, technical working groups and committees; and when it makes good business sense we contribute our intellectual property. We believe supporting the development of open standards is fundamental to Oracle meeting customer demands for product choice, seamless interoperability, and lowering the cost of ownership. Nowhere is this truer than in the area of cloud standards, and for the most recent release of our flagship management product, Oracle Enterprise Manager Cloud Control 12c (EM Cloud Control 12c). There is a fundamental rule that standards follow architecture. This was true of distributed computing, it was true of service-oriented architecture (SOA), and it's true of cloud. If you are familiar with Enterprise Manager it is likely to be no surprise that EM Cloud Control 12c is a source of technology that can be considered for adoption within cloud management standards. The reason, quite simply, is that the Oracle integrated stack architecture aligns with the cloud architecture models being adopted by the industry, and EM Cloud Control 12c has been developed to manage this architecture. EM Cloud Control 12c has facilities for managing the various underlying capabilities of the integrated stack in IaaS, PaaS, and SaaS clouds, and enables essential characteristics such as on-demand self-service provisioning, centralized policy-based resource management, integrated chargeback and capacity planning, and complete visibility of the physical and virtual environment from applications to disk. Our most recent contribution in support of cloud management standards to come out of the EM Cloud Control 12c work was the Oracle Cloud Elemental Resource Model API. Oracle contributed the Elemental Resource Model API to the Distributed Management Task Force (DMTF) in 2011, where it was assigned to DMTF's Cloud Management Working Group (CMWG). The CMWG is considering the Oracle specification and those of several other vendors in their effort to produce a best practices specification for managing IaaS clouds.
    DMTF's Cloud Infrastructure Management Interface specification, called CIMI for short, is currently out for public review and expected to be released by DMTF later this year. We are proud to be playing an important role in the development of what is expected to become a major cloud standard. You can find more information on DMTF CIMI at http://dmtf.org/standards/cloud. You can find the work-in-progress release of CIMI at http://dmtf.org/content/cimi-work-progress-specifications-now-available-public-comment. The Oracle Cloud API specification is available on the Oracle Technology Network. You can find more information about the Oracle Cloud Elemental Resource Model API on the Oracle Technology Network (OTN), including a webcast featuring the API engineering manager Jack Yu (see TechCast Live: Inside the Oracle Cloud Resource Model API). If you have not seen this video we recommend you take the time to view it. Simply hover your cursor over the webcast title and control+click to follow the embedded link. If you have a question about the Oracle Cloud API or want to learn more about Oracle's participation in cloud management standards efforts, drop us a line. We'd love to hear from you. The Enterprise Manager Standards Blogs are written by Tony Di Cenzo, Director for Standards Strategy and Architecture, and Mark Carlson, Principal Cloud Architect, for Oracle's Systems Management and Storage Products Groups. They can be reached at Tony.DiCenzo at Oracle.com and Mark.Carlson at Oracle.com respectively. Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • The countdown for ‘In Touch’ has begun!

    - by Julien Haye
    The Oracle 'In Touch' PartnerCast is just a week away from going live, so if you haven't registered yet, what are you waiting for?! Registration is quick and easy, so click here to register and ensure you stay informed with the latest from the Oracle PartnerNetwork. 'In Touch' relies on your input, so let David Callaghan, Senior Vice President EMEA Alliances and Channels, know your thoughts and comments via the player console, by emailing [email protected] or on Twitter using the hashtag #DCpickme. The cast will go live on Tuesday 29th October from 10:30am UK / 11:30am CET with studio guests Will O'Brien, VP Alliances & Channels, UK & Ireland, and Markus Reischl, Senior Director and Sales Leader EMEA Strategic Alliances, answering your questions on Oracle Storage and Business Intelligence. To find out more information about the cast, including the full line-up, please visit the 'In Touch' webpage here.

    Read the article

  • Big Data – What is Big Data – 3 Vs of Big Data – Volume, Velocity and Variety – Day 2 of 21

    - by Pinal Dave
    Data is forever. Think about it – it is indeed true. Are you using any application as-is that was built 10 years ago? Are you using any piece of hardware that was built 10 years ago? The answer is most certainly no. However, if I ask you – are you using any data which was captured 50 years ago – the answer is most certainly yes. For example, look at the history of our nation. I am from India and we have documented history which goes back over thousands of years. Well, just look at our birthday data – at least we are using it to this day. Data never gets old and it is going to stay there forever. Applications which interpret and analyze data have changed, but the data has remained in its purest format in most cases. As organizations have grown, the data associated with them has also grown exponentially, and today there is a lot of complexity to their data. Most of the big organizations have data in multiple applications and in different formats. The data is also spread out so much that it is hard to categorize with a single algorithm or logic. The mobile revolution which we are experiencing right now has completely changed how we capture data and build intelligent systems. Big organizations are indeed facing challenges to keep all the data on a platform which gives them a single consistent view of their data. This unique challenge of making sense of all the data coming in from different sources and deriving useful actionable information out of it is the revolution the Big Data world is facing. Defining Big Data: The 3Vs that define Big Data are Variety, Velocity and Volume. Volume: We currently see exponential growth in data storage, as data is now more than just text. We can find data in the format of videos, music and large images on our social media channels. It is very common for enterprises to have terabytes and petabytes of storage. As the database grows, the applications and architecture built to support the data need to be reevaluated quite often. Sometimes the same data is re-evaluated from multiple angles, and even though the original data is the same, the newfound intelligence creates an explosion of the data. This big volume indeed represents Big Data. Velocity: The data growth and social media explosion have changed how we look at data. There was a time when we used to believe that the data of yesterday is recent. As a matter of fact, newspapers still follow that logic. However, news channels and radio have changed how fast we receive the news. Today, people rely on social media to update them with the latest happenings. On social media, sometimes a message that is a few seconds old (a tweet, a status update, etc.) is no longer something that interests users. They often discard old messages and pay attention to recent updates. The data movement is now almost real time and the update window has reduced to fractions of a second. This high velocity data represents Big Data. Variety: Data can be stored in multiple formats. For example: a database, Excel, CSV, Access or, for that matter, a simple text file. Sometimes the data is not even in the traditional format we assume; it may be in the form of video, SMS, PDF or something we might not have thought about. It is the need of the organization to arrange it and make it meaningful. It would be easy to do so if we had data in the same format, however that is not the case most of the time. The real world has data in many different formats and that is the challenge we need to overcome with Big Data.
    This variety of data indeed represents Big Data. Big Data in Simple Words: Big Data is not just about lots of data; it is actually a concept providing an opportunity to find new insight into your existing data, as well as guidelines to capture and analyze your future data. It makes any business more agile and robust, so it can adapt and overcome business challenges. Tomorrow: In tomorrow's blog post we will discuss the evolution of Big Data. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • SVN repository host for pet projects.

    - by cbrandolino
    Hi! I need a subversion repository for a couple of projects I'm working on with friends/colleagues, in particular one that:
    Is cheap;
    Has a high data storage/transfer limit;
    Does not have unlimited users (they would be ~10);
    Does not have public anonymous checkout capabilities (I don't mind them, but usually they have an influence on the cost).
    Why don't you just install an SVN server on a machine of yours? Because the remote ones host stuff that is vital to clients, while those we have at home are a mess already. Distributed versioning systems? They're cool and everything, but everybody (in the subset "people-I-would-work-with") knows how to use Subversion - and really, the easier the better.

    Read the article

  • T-SQL Tuesday #53-Matt's Making Me Do This!

    - by Most Valuable Yak (Rob Volk)
    Hello everyone! It's that time again, time for T-SQL Tuesday, the wonderful blog series started by Adam Machanic (b|t). This month we are hosted by Matt Velic (b|t) who asks the question, "Why So Serious?", in celebration of April Fool's Day. He asks the contributors for their dirty tricks. And for some reason that escapes me, he and Jeff Verheul (b|t) seem to think I might be able to write about those. Shocked, I am! Nah, not really. They're absolutely right, this one is gonna be fun! I took some inspiration from Matt's suggestions, namely Resource Governor and Login Triggers.  I've done some interesting login trigger stuff for a presentation, but nothing yet with Resource Governor. Best way to learn it! One of my oldest pet peeves is abuse of the sa login. Don't get me wrong, I use it too, but typically only as SQL Agent job owner. It's been a while since I've been stuck with it, but back when I started using SQL Server, EVERY application needed sa to function. It was hard-coded and couldn't be changed. (welllllll, that is if you didn't use a hex editor on the EXE file, but who would do such a thing?) My standard warning applies: don't run anything on this page in production. In fact, back up whatever server you're testing this on, including the master database. Snapshotting a VM is a good idea. Also make sure you have other sysadmin level logins on that server. So here's a standard template for a logon trigger to address those pesky sa users: CREATE TRIGGER SA_LOGIN_PRIORITY ON ALL SERVER WITH ENCRYPTION, EXECUTE AS N'sa' AFTER LOGON AS IF ORIGINAL_LOGIN()<>N'sa' OR APP_NAME() LIKE N'SQL Agent%' RETURN; -- interesting stuff goes here GO   What can you do for "interesting stuff"? Books Online limits itself to merely rolling back the logon, which will throw an error (and alert the person that the logon trigger fired).  That's a good use for logon triggers, but really not tricky enough for this blog.  Some of my suggestions are below: WAITFOR DELAY '23:59:59';   Or: EXEC sp_MSforeach_db 'EXEC sp_detach_db ''?'';'   Or: EXEC msdb.dbo.sp_add_job @job_name=N'`', @enabled=1, @start_step_id=1, @notify_level_eventlog=0, @delete_level=3; EXEC msdb.dbo.sp_add_jobserver @job_name=N'`', @server_name=@@SERVERNAME; EXEC msdb.dbo.sp_add_jobstep @job_name=N'`', @step_id=1, @step_name=N'`', @command=N'SHUTDOWN;'; EXEC msdb.dbo.sp_start_job @job_name=N'`';   Really, I don't want to spoil your own exploration, try it yourself!  The thing I really like about these is it lets me promote the idea that "sa is SLOW, sa is BUGGY, don't use sa!".  Before we get into Resource Governor, make sure to drop or disable that logon trigger. They don't work well in combination. (Had to redo all the following code when SSMS locked up) Resource Governor is a feature that lets you control how many resources a single session can consume. The main goal is to limit the damage from a runaway query. But we're not here to read about its main goal or normal usage! I'm trying to make people stop using sa BECAUSE IT'S SLOW! 
Here's how RG can do that: USE master; GO CREATE FUNCTION dbo.SA_LOGIN_PRIORITY() RETURNS sysname WITH SCHEMABINDING, ENCRYPTION AS BEGIN RETURN CASE WHEN ORIGINAL_LOGIN()=N'sa' AND APP_NAME() NOT LIKE N'SQL Agent%' THEN N'SA_LOGIN_PRIORITY' ELSE N'default' END END GO CREATE RESOURCE POOL SA_LOGIN_PRIORITY WITH ( MIN_CPU_PERCENT = 0 ,MAX_CPU_PERCENT = 1 ,CAP_CPU_PERCENT = 1 ,AFFINITY SCHEDULER = (0) ,MIN_MEMORY_PERCENT = 0 ,MAX_MEMORY_PERCENT = 1 -- ,MIN_IOPS_PER_VOLUME = 1 ,MAX_IOPS_PER_VOLUME = 1 -- uncomment for SQL Server 2014 ); CREATE WORKLOAD GROUP SA_LOGIN_PRIORITY WITH ( IMPORTANCE = LOW ,REQUEST_MAX_MEMORY_GRANT_PERCENT = 1 ,REQUEST_MAX_CPU_TIME_SEC = 1 ,REQUEST_MEMORY_GRANT_TIMEOUT_SEC = 1 ,MAX_DOP = 1 ,GROUP_MAX_REQUESTS = 1 ) USING SA_LOGIN_PRIORITY; ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION=dbo.SA_LOGIN_PRIORITY); ALTER RESOURCE GOVERNOR RECONFIGURE;   From top to bottom: Create a classifier function to determine which pool the session should go to. More info on classifier functions. Create the pool and provide a generous helping of resources for the sa login. Create the workload group and further prioritize those resources for the sa login. Apply the classifier function and reconfigure RG to use it. I have to say this one is a bit sneakier than the logon trigger, not least because you don't get any error messages.  I heartily recommend testing it in Management Studio, and click around the UI a lot, there's some fun behavior there. And DEFINITELY try it on SQL 2014 with the IO settings included!  You'll notice I made allowances for SQL Agent jobs owned by sa, they'll go into the default workload group.  You can add your own overrides to the classifier function if needed. Some interesting ideas I didn't have time for but expect you to get to before me: Set up different pools/workgroups with different settings and randomize which one the classifier chooses. Do the same but base it on time of day (Books Online example covers this)... Or, which workstation it connects from. This can be modified for certain special people in your office who either don't listen, or are attracted (and attractive) to you. And if things go wrong you can always use the following from another sysadmin or Dedicated Admin connection: ALTER RESOURCE GOVERNOR DISABLE;   That will let you go in and either fix (or drop) the pools, workgroups and classifier function. So now that you know these types of things are possible, and if you are tired of your team using sa when they shouldn't, I expect you'll enjoy playing with these quite a bit! Unfortunately, the aforementioned Dedicated Admin Connection kinda poops on the party here.  Books Online for both topics will tell you that the DAC will not fire either feature. So if you have a crafty user who does their research, they can still sneak in with sa and do their bidding without being hampered. Of course, you can still detect their login via various methods, like a server trace, SQL Server Audit, extended events, and enabling "Audit Successful Logins" on the server.  These all have their downsides: traces take resources, extended events and SQL Audit can't fire off actions, and enabling successful logins will bloat your error log very quickly.  SQL Audit is also limited unless you have Enterprise Edition, and Resource Governor is Enterprise-only.  And WORST OF ALL, these features are all available and visible through the SSMS UI, so even a doofus developer or manager could find them. Fortunately there are Event Notifications!
Event notifications are becoming one of my favorite features of SQL Server (keep an eye out for more blogs from me about them). They are practically unknown and heinously underutilized.  They are also a great gateway drug to using Service Broker, another great but underutilized feature. Hopefully this will get you to start using them, or at least your enemies in the office will once they read this, and then you'll have to learn them in order to fix things. So here's the setup: USE msdb; GO CREATE PROCEDURE dbo.SA_LOGIN_PRIORITY_act WITH ENCRYPTION AS DECLARE @x XML, @message nvarchar(max); RECEIVE @x=CAST(message_body AS XML) FROM SA_LOGIN_PRIORITY_q; IF @x.value('(//LoginName)[1]','sysname')=N'sa' AND @x.value('(//ApplicationName)[1]','sysname') NOT LIKE N'SQL Agent%' BEGIN -- interesting activation procedure stuff goes here END GO CREATE QUEUE SA_LOGIN_PRIORITY_q WITH STATUS=ON, RETENTION=OFF, ACTIVATION (PROCEDURE_NAME=dbo.SA_LOGIN_PRIORITY_act, MAX_QUEUE_READERS=1, EXECUTE AS OWNER); CREATE SERVICE SA_LOGIN_PRIORITY_s ON QUEUE SA_LOGIN_PRIORITY_q([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]); CREATE EVENT NOTIFICATION SA_LOGIN_PRIORITY_en ON SERVER WITH FAN_IN FOR AUDIT_LOGIN TO SERVICE N'SA_LOGIN_PRIORITY_s', N'current database' GO   From top to bottom: Create activation procedure for event notification queue. Create queue to accept messages from event notification, and activate the procedure to process those messages when received. Create service to send messages to that queue. Create event notification on AUDIT_LOGIN events that fire the service. I placed this in msdb as it is an available system database and already has Service Broker enabled by default. You should change this to another database if you can guarantee it won't get dropped. So what to put in place for "interesting activation procedure code"?  Hmmm, so far I haven't addressed Matt's suggestion of writing a lengthy script to send an annoying message: SET @message = @x.value('(//HostName)[1]','sysname') + N' tried to log in to server ' + @x.value('(//ServerName)[1]','sysname') + N' as SA at ' + @x.value('(//StartTime)[1]','sysname') + N' using the ' + @x.value('(//ApplicationName)[1]','sysname') + N' program. That''s why you''re getting this message and the attached pornography which' + N' is bloating your inbox and violating company policy, among other things. If you know' + N' this person you can go to their desk and hit them, or use the following SQL to end their session: KILL ' + @x.value('(//SPID)[1]','sysname') + N'; Hopefully they''re in the middle of a huge query that they need to finish right away.' EXEC msdb.dbo.sp_send_dbmail @recipients=N'[email protected]', @subject=N'SA Login Alert', @query_result_width=32767, @body=@message, @query=N'EXEC sp_readerrorlog;', @attach_query_result_as_file=1, @query_attachment_filename=N'UtterlyGrossPorn_SeriouslyDontOpenIt.jpg' I'm not sure I'd call that a lengthy script, but the attachment should get pretty big, and I'm sure the email admins will love storing multiple copies of it.  The nice thing is that this also fires on Dedicated Admin connections! You can even identify DAC connections from the event data returned, I leave that as an exercise for you. You can use that info to change the action taken by the activation procedure, and since it's a stored procedure, it can pretty much do anything! Except KILL the SPID, or SHUTDOWN the server directly.  I'm still working on those.

    Read the article

  • Egy DBA napja - Utópia, vagy sem? (A DBA's Day - Utopia or Not?)

    - by lsarecz
    This morning I gave a presentation at the HOUG BI/DW and DB professional day about how a DBA ought to work these days. The gist was that, to serve the business best, it makes sense to approach the question from the side of the users and the application: monitor user satisfaction, and when something is off, drill down into the lower layers (MW, DB, OS, HW, Storage, Network) and continue the diagnostics there. To my surprise, after the talk the head of the DB section commented that this is still utopian. Then this afternoon an operations director who had been sitting in the audience emailed to say he disagrees as well, because they already operate this way, even if they don't use Grid Control for every element of it. And nothing proves better that this operations model is not a utopia than the fact that Oracle Enterprise Manager 11g provides complete support for all of it. If the presentation interests you after all this, you can download it here: part 1, part 2.

    Read the article

  • Youtube videos suddenly all report "Invalid parameters" and won't play

    - by Pointy
    Running 11.10, with up-to-date Flash (11.1.102.55, which the Flash site agrees is the current version). Every YouTube video shows a blank screen with the words "Invalid parameters". I've tried fiddling with the player's "hardware acceleration" setting, and that has no effect. Similarly, I've clicked, un-clicked, and re-clicked the "storage" options in the global settings panel, also to no effect. I haven't seen this in any Google searches (though there are other problems described involving the "Invalid parameters" message), which makes me think it's a recent phenomenon. edit: hmm, fails in Firefox but works in Chrome ...

    Read the article

  • O'Reilly Deal of the Day 26/Jun/2012 - Developer's Guide to Collections in Microsoft® .NET

    - by TATWORTH
    Today's 50% off Deal of the Day at http://shop.oreilly.com/product/0790145317193.do?code=MSDEAL is Developer's Guide to Collections in Microsoft® .NET: "Put .NET collections to work—and manage issues with GUI data binding, threading, data querying, and storage. Led by a data collection expert, you'll gain task-oriented guidance, exercises, and extensive code samples to tackle common problems and improve application performance. This one-stop reference is designed for experienced Microsoft Visual Basic® and C# developers—whether you're already using collections or just starting out." I am reviewing this book. Here are my initial comments: The code is well illustrated by diagrams, and the approach is practical. The code is well commented; however, the C# code samples would be better had they been fully StyleCop compliant. I recommend this book to all C# and VB.NET development teams. I concur with the author, who states that the book is not for learning C# or VB.NET; rather, it is an excellent book for C# or VB.NET developers to extend their knowledge of the .NET Framework.

    Read the article

  • MSDN Simulcast Event: Take Your Applications Sky-High with Cloud Computing and the Windows Azure Platform

    Join your local MSDN Events team as we take a deep dive into Microsoft Windows Azure. We'll start with a developer-focused overview of this brave new platform and the cloud computing services that can be used to build amazing applications. As the day unfolds, we'll explore data storage, Microsoft SQL Azure, and the basics of deployment with Windows Azure.

    Read the article

  • Megjelent a MySQL 5.5 (MySQL 5.5 Released)

    - by Lajos Sárecz
    The new MySQL 5.5 was completed in record time, and Oracle announced it today. This is further proof that Oracle is seriously developing MySQL as well, and strives to delight MySQL users with innovative solutions too. If you have a sense of déjà vu, it's no accident: the MySQL 5.5 RC (Release Candidate) was announced at the OpenWorld conference in September, which hwsw.hu, among others, reported on. In the new version Oracle focused primarily on performance and scalability. For example, the InnoDB storage engine now ships as the MySQL default, so the database engine executes ACID (atomicity, consistency, isolation, durability) transactions out of the box (not exactly a minor detail...). Other new features include semi-synchronous replication, improved index and table partitioning, and, on the diagnostics front, the new PERFORMANCE_SCHEMA, which improves MySQL's manageability. Tests run with the RC version showed significant speedups over MySQL 5.1, so the upgrade is worth considering. (A short sketch of how to check these defaults follows below.)
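    (A minimal sketch for verifying the 5.5 defaults yourself; the table name is just an illustration, not from the announcement:)

    -- InnoDB is the default storage engine as of MySQL 5.5
    SHOW VARIABLES LIKE 'storage_engine';

    -- so a plain CREATE TABLE now gets InnoDB, and with it ACID transactions
    CREATE TABLE t_example (id INT PRIMARY KEY);
    SHOW CREATE TABLE t_example;

    -- the new diagnostics schema introduced in 5.5
    SHOW TABLES FROM performance_schema;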

    Read the article

  • Local Events | Azure Bootcamp

    - by Jeff Julian
    Coming to Kansas City April 8th and 9th is the Microsoft Azure Bootcamp. This event looks very promising for developers who are looking into Azure for themselves or their companies. It covers the wide range of topics required to understand what Azure really is and is not. Space is limited, so if you are considering Azure, register for this event today.

    Agenda:
    Module 1: Introduction to cloud computing and Azure - How it works; Key scenarios; The development environment and SDK
    Module 2: Using Web Roles - Basic ASP.NET; Basic configuration
    Module 3: Blobs - File storage in the cloud
    Module 4: Tables - Scalable hierarchical storage
    Module 5: Queues - Decoupling your systems
    Module 6: Basic Worker Roles - Executing backend processes; Consuming a queue; Leveraging local storage
    Module 7: Advanced Worker Roles - External endpoints; Inter-role communication
    Module 8: Building a business with Azure - Using Azure as an ISV or a partner; Advantages to delivering value; BPOS; Pricing
    Module 9: SQL Azure - Setting it up; SQL Azure firewall; Remote management; Migrating data
    Module 10: AppFabric - Service Bus; Access Control System; Identity in the cloud
    Module 11: Cloud Scenarios - App migration strategies; Disposable computing; Dynamic scale; Shunting; Prototyping; Multitenant applications

    (This is my second attempt at this post after MacJournal decided to crash and not save my work. Authoring tools all need auto-save features by now; that is a requirement set in stone by Microsoft Word 97.) Related Tags: Azure, Microsoft, Kansas City

    Read the article

  • Windows Azure Platform Training Kit - June Update

    Microsoft released an update to its Azure training kit. Here is what is new in the kit:
    Introduction to Windows Azure - VS2010 version
    Introduction to SQL Azure - VS2010 version
    Introduction to the Windows Azure Platform AppFabric Service Bus - VS2010 version
    Introduction to Dallas - VS2010 version
    Introduction to the Windows Azure Platform AppFabric Access Control Service - VS2010 version
    Web Services and Identity in the Cloud
    Exploring Windows Azure Storage - VS2010 version...

    Read the article

  • Verizon Wireless Supports its Mission-Critical Employee Portal with MySQL

    - by Bertrand Matthelié
    Verizon Wireless, the #1 mobile carrier in the United States, operates the nation's largest 3G and 4G LTE network, with the most subscribers (109 million) and the highest revenue ($70.2 billion in 2011). Verizon Wireless built the first wide-area wireless broadband network and delivered the first wireless consumer 3G multimedia service in the US, and offers global voice and data services in more than 200 destinations around the world. To support 4.2 million daily wireless transactions and 493,000 call and email transactions produced by 94.2 million retail customers, Verizon Wireless employs over 78,000 people, with area headquarters across the United States.

    The Business Challenge

    Seeing the stupendous rise of social media, video streaming, live broadcasting, and the like, which redefined the scope of technology, Verizon Wireless, as a technology-savvy company, wanted to provide a platform where its employees could network socially, view and host microsites, stream live video, blog, and read the latest news. The IT team at Verizon Wireless had abundant experience with various technology platforms supporting the huge number of applications in the company. However, open-source products weren't yet widely used in the organization, and the team had the ambition to adopt such technologies and see if the architecture could meet Verizon Wireless' rigid requirements. After evaluating a few solutions, the IT team decided to use the LAMP stack for Vzweb, its mission-critical, 24x7 employee portal, with Drupal as the front end and MySQL on Linux as the backend, and for a few other internal websites also on MySQL.

    The MySQL Solution

    Verizon Wireless started to support its employee portal, Vzweb, its online streaming website, Vztube, and its internal wiki pages, Vzwiki, with MySQL 5.1 in 2010. Vzweb is the main internal communication channel for Verizon Wireless, while Vztube regularly hosts important company-wide webcasts for executive-level announcements, so both channels have to be live and accessible all the time for the 78,000 employees across the United States. However, during the initial deployment of the MySQL-based intranet, the application experienced performance issues. High connection spikes occurred, causing slow user response times, and the IT team applied workarounds to keep the service running. A number of key performance indicators (KPIs) for the infrastructure were identified, and the operational framework was redesigned to support a more robust website and conform to the 99.985% uptime SLA (Service-Level Agreement).
    The MySQL DBA team made a series of upgrades to MySQL:
    Step 1: Moved from the MyISAM to the InnoDB storage engine in 2010
    Step 2: Upgraded to the latest MySQL 5.1.54 release in 2010
    Step 3: Upgraded from MySQL 5.1 to the latest GA release, MySQL 5.5, in 2011, leveraging the MySQL Thread Pool, part of MySQL Enterprise Edition, to scale better
    (A sketch of what these changes look like in practice follows at the end of this article.)

    After making those changes, the team saw much better response times during high-concurrency use cases and achieved an amazing performance improvement of 1400%! In January 2011, Verizon CEO Ivan Seidenberg announced the iPhone launch during the opening keynote at the Consumer Electronics Show (CES) in Las Vegas, and that presentation was streamed live to the company's 78,000 employees. The event was broadcast flawlessly with MySQL as the database. Later in 2011, Hurricane Irene struck the East Coast of the United States and caused major loss of life and financial damage. During the hurricane, the team directed more traffic to the west coast data center to avoid potential infrastructure damage on the East Coast. The transition was executed smoothly, and even though the geographical distance became longer for East Coast users, there was no impact on the performance of Vzweb and Vztube, and the SLA goal was met.

    "MySQL is the key component of Verizon Wireless' mission-critical employee portal application," said Shivinder Singh, senior DBA at Verizon Wireless. "We achieved 1400% performance improvement by moving from the MyISAM storage engine to InnoDB, upgrading to the latest GA release MySQL 5.5, and using the MySQL Thread Pool to support high concurrent user connections. MySQL has become part of our IT infrastructure, on which potentially more future applications will be built." To learn more about MySQL Enterprise Edition, get our Product Guide.
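    (For illustration only, here is roughly what the storage engine conversion and thread pool setup look like. This is a sketch; the table name and pool size are my assumptions, not Verizon's actual configuration:)

    -- Step 1: convert an existing MyISAM table to InnoDB
    -- (hypothetical table name, for illustration)
    ALTER TABLE portal_sessions ENGINE=InnoDB;

    -- verify the engine after conversion
    SHOW TABLE STATUS LIKE 'portal_sessions';

    -- Step 3: enable the MySQL Enterprise Thread Pool plugin via my.cnf:
    --   [mysqld]
    --   plugin-load=thread_pool.so
    --   thread_pool_size=32   (illustrative value; tune to the core count)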

    Read the article

  • Date and Time Support in SQL Server 2008

    - by Aamir Hasan
      Using the New Date and Time Data Types

    1. The new date and time data types in SQL Server 2008 offer increased range and precision and are ANSI SQL compatible.
    2. Separate date and time data types minimize storage space requirements for applications that need only date or time information. Moreover, the variable precision of the new time data type increases storage savings in exchange for reduced accuracy.
    3. The new data types are mostly compatible with the original date and time data types and use the same Transact-SQL functions.
    4. The datetimeoffset data type allows you to handle date and time information in global applications that use data originating from different time zones.

    T-SQL
    SELECT c.name, p.*
    FROM politics p
    JOIN country c ON p.country = c.code
    WHERE YEAR(Independence) < 1753
    ORDER BY Independence
    GO

    8. Highlight the SELECT statement and click Execute to show the use of some of the date functions.

    T-SQL
    SELECT c.name AS [Country Name],
           CONVERT(VARCHAR(12), p.Independence, 107) AS [Independence Date],
           DATEDIFF(YEAR, p.Independence, GETDATE()) AS [Years Independent (approx)],
           p.Government
    FROM politics p
    JOIN country c ON p.country = c.code
    WHERE YEAR(Independence) < 1753
    ORDER BY Independence
    GO

    10. Select the SET DATEFORMAT statement and click Execute to change the DATEFORMAT to day-month-year.

    T-SQL
    SET DATEFORMAT dmy
    GO

    11. Select the DECLARE and SELECT statements and click Execute to show how the datetime and datetime2 data types interpret a date literal.

    T-SQL
    SET DATEFORMAT dmy
    DECLARE @dt datetime = '2008-12-05'
    DECLARE @dt2 datetime2 = '2008-12-05'
    SELECT MONTH(@dt) AS [Month-Datetime], DAY(@dt) AS [Day-Datetime]
    SELECT MONTH(@dt2) AS [Month-Datetime2], DAY(@dt2) AS [Day-Datetime2]
    GO

    12. Highlight the DECLARE and SELECT statements and click Execute to use integer arithmetic on a datetime variable.

    T-SQL
    DECLARE @dt datetime = '2008-12-05'
    SELECT @dt + 1
    GO

    13. Highlight the DECLARE and SELECT statements and click Execute to show how integer arithmetic is not allowed for datetime2 variables.

    T-SQL
    DECLARE @dt2 datetime2 = '2008-12-05'
    SELECT @dt2 + 1
    GO

    14. Highlight the DECLARE and SELECT statements and click Execute to show how to use date functions to do simple arithmetic on datetime2 variables.

    T-SQL
    DECLARE @dt2 datetime2(7) = '2008-12-05'
    SELECT DATEADD(d, 1, @dt2)
    GO

    15. Highlight the DECLARE and SELECT statements and click Execute to show how the GETDATE function can be used with both datetime and datetime2 data types.

    T-SQL
    DECLARE @dt datetime = GETDATE();
    DECLARE @dt2 datetime2(7) = GETDATE();
    SELECT @dt AS [GetDate-DateTime], @dt2 AS [GetDate-DateTime2]
    GO

    16. Draw attention to the values returned for both columns and how they are equal.

    17. Highlight the DECLARE and SELECT statements and click Execute to show how the SYSDATETIME function can be used with both datetime and datetime2 data types.

    T-SQL
    DECLARE @dt datetime = SYSDATETIME();
    DECLARE @dt2 datetime2(7) = SYSDATETIME();
    SELECT @dt AS [Sysdatetime-DateTime], @dt2 AS [Sysdatetime-DateTime2]
    GO

    18. Draw attention to the values returned for both columns and how they are different.

    Programming Global Applications with DateTimeOffset

    2. If you have not previously created the SQLTrainingKitDB database while completing another demo in this training kit, highlight the CREATE DATABASE statement and click Execute to do so now.

    T-SQL
    CREATE DATABASE SQLTrainingKitDB
    GO

    3. Select the USE and CREATE TABLE statements and click Execute to create table datetest in the SQLTrainingKitDB database. (A short usage sketch for this table follows after the reference link below.)

    T-SQL
    USE SQLTrainingKitDB
    GO
    CREATE TABLE datetest (
      id integer IDENTITY PRIMARY KEY,
      datetimecol datetimeoffset,
      EnteredTZ varchar(40)
    );

    Reference: http://www.microsoft.com/downloads/details.aspx?FamilyID=E9C68E1B-1E0E-4299-B498-6AB3CA72A6D7&displaylang=en
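    (To round out the datetimeoffset demo, here is a minimal sketch of populating and querying datetest. The sample values and the SWITCHOFFSET call are my illustration and are not part of the original training kit steps:)

    T-SQL
    USE SQLTrainingKitDB
    GO
    -- the same wall-clock time entered from two different time zones
    INSERT INTO datetest (datetimecol, EnteredTZ)
    VALUES ('2008-12-05 10:00:00 -05:00', 'US Eastern'),
           ('2008-12-05 10:00:00 +01:00', 'Central Europe');

    -- normalize both rows to UTC to compare them
    SELECT id,
           datetimecol,
           SWITCHOFFSET(datetimecol, '+00:00') AS utc_time,
           EnteredTZ
    FROM datetest;
    GO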

    Read the article

  • Oracle Enterprise Manager Ops Center 12c : Enterprise Controller High Availability (EC HA)

    - by Anand Akela
    Contributed by Mahesh Sharma, Oracle Enterprise Manager Ops Center team. In Oracle Enterprise Manager Ops Center 12c we introduced a new feature to make Enterprise Controllers highly available. With EC HA, if the hardware crashes, or if the Enterprise Controller services and/or the remote database stop responding, the enterprise services are immediately restarted on the standby Enterprise Controller without administrative intervention. In today's post, I'll briefly describe EC HA, look at some of the prerequisites, and then show some screenshots of how the Enterprise Controller is represented in the BUI. In my next post, I'll show you how to install the EC in an HA environment and some of the new commands.

    What is EC HA? Enterprise Controller High Availability (EC HA) provides an active/standby fail-over solution for two or more Ops Center Enterprise Controllers, all within an Oracle Clusterware framework. This allows EC resources to relocate to a standby if the hardware crashes, or if certain services fail. It is also possible to manually relocate the services if maintenance on the active EC is required. When the EC services are relocated to the standby, EC services are interrupted only for the period it takes for the EC services to stop on the active node and start back up on a standby node.

    What are the prerequisites? To install EC in an HA framework, an understanding of the prerequisites is required. There are many possibilities for how these prerequisites can be installed and configured; we will not discuss those in this post. However, best practices should be applied when installing and configuring, and I would suggest that you get expert help if you are not familiar with them. Let's briefly look at each of these prerequisites in turn:

    Hardware: Servers are required to host the active and standby node(s). As the nodes will be in a clustered environment, they need to be the same model and configured identically. The nodes should have the same processor class, number of cores, memory, and network cards, for example.
    Operating System: Solaris 10 9/10 or higher, Solaris 11, or OEL 5.5 or higher, on x86 or SPARC.
    Network: There are a number of requirements for network cards in Clusterware, and cables should be networked identically on all the nodes. We must also consider IP allocation for public, private, and virtual IPs (VIPs).
    Storage: Shared storage is required for the cluster voting disks, the Oracle Cluster Registry (OCR), and the EC's libraries.
    Clusterware: Oracle Clusterware version 11.2.0.3 or later is required. It can be downloaded from: http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html
    Remote Database: Oracle RDBMS 11.1.0.x or later is required. It can be downloaded from: http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html

    For detailed information on how to install EC HA, please read: http://docs.oracle.com/cd/E27363_01/doc.121/e25140/install_config-shared.htm#OPCSO242
    For detailed instructions on installing Oracle Clusterware, please read: http://docs.oracle.com/cd/E11882_01/install.112/e17214/chklist.htm#BHACBGII
    For detailed instructions on installing the remote Oracle database, have a read of: http://www.oracle.com/technetwork/database/enterprise-edition/documentation/index.html

    The schematic diagram below gives a visual view of how the prerequisites are connected. When a fail-over occurs, the Enterprise Controller resources and the VIP are relocated to one of the standby nodes.
The standby node then becomes active and all Ops Center services are resumed.

Connecting to the Enterprise Controller from your favourite browser: let's presume we have installed and configured all the prerequisites, and installed Ops Center on the active and standby nodes. We can now connect to the active node from a browser, i.e. http://<active_node1>/; this will redirect us to the virtual IP address (VIP). The VIP is the IP address that moves with the Enterprise Controller resource. Once you log on and view the assets, you will see some new symbols indicating that the nodes are cluster members, with one being an active member and the other a standby member in this case. If you connect to the standby node, the browser will redirect you to a splash page indicating that you have connected to the standby node.

Hope you find this topic interesting. Next time I will post about how to install the Enterprise Controller in the HA framework.

    Read the article
