Search Results

Search found 6770 results on 271 pages for 'azure storage'.


  • New Hands-On Labs For Oracle VM

    - by rickramsey
    I just spent some time walking through the labs that Christophe Pauliat and Olivier Canonge prepared to help you become familiar with Oracle VM. They are terrific. We will offer them for the first time at Oracle Open World. Because they require some pre-work and 16 GB of memory, we are supplying the laptops for the participants.

    Lab 1: Deploying Infrastructure as a Service with Oracle VM
    Session ID: HOL9558
    Tuesday, October 2nd, 2012, 10:15am – 11:15am, Marriott Marquis - Salon 14/15
    Planning and deployment of an infrastructure as a service (IaaS) environment with Oracle VM as the foundation: storage capacity planning, LUN creation, network bandwidth planning, and best practices for designing and streamlining the environment so that it's easy to manage.

    Lab 2: Virtualize and Deploy Oracle Applications Using Oracle VM Templates
    Session ID: HOL9559
    Tuesday, October 2nd, 2012, 11:45am – 12:45pm, Marriott Marquis - Salon 14/15
    How to deploy Oracle applications in minutes with Oracle VM Templates. This step-by-step lab is proctored by field-experienced engineers and product experts. It covers:
    - What Oracle VM Templates are and how they work
    - Deploying an actual Oracle VM Template for an Oracle application
    - Planning your deployment to streamline ongoing updates and upgrades

    Lab 3: x86 Enterprise Cloud Infrastructure with Oracle VM 3.x and Sun ZFS Storage Appliance
    Session ID: HOL9870
    Wednesday, October 3rd, 2012, 5:00pm – 6:00pm, Marriott Marquis - Salon 14/15
    This hands-on lab demonstrates what Oracle's enterprise cloud infrastructure for x86 can do and how it works with Oracle VM 3.x. It covers:
    - How to create VMs
    - How to migrate VMs
    - How to deploy Oracle applications quickly and easily with Oracle VM Templates
    - How to use the Storage Connect plug-in for the Sun ZFS Storage Appliance

    Additional virtualization resources for sysadmins: technical articles about virtualization, other resources about Oracle virtualization technologies, and more information about Oracle Open World.

    - Rick

    Read the article

  • Key announcements from Oracle OpenWorld - Video series

    - by Javier Puerta
    If you missed Oracle OpenWorld, you now have the opportunity to watch a series of four 15-minute webcasts covering the key announcements, explained by EMEA key executives.

    - Oracle OpenWorld I (OMN Part 1): Oracle's Cloud, an interview with Alan Hartwell. Gaye Hudson and Steve Walker, EMEA Corporate Communications, take a look at Oracle's announcements leading up to Oracle OpenWorld and talk to Alan Hartwell, VP Sales, Engineered Solutions, Exadata, Exalogic, about Oracle's cloud offering.
    - Oracle OpenWorld II (OMN Part 2): Engineered Systems with Alan Hartwell. Gaye Hudson, VP Corporate Communications, EMEA, talks to Alan Hartwell, VP Sales, Engineered Solutions, Exadata, Exalogic, about Oracle's Engineered Systems, parallel hardware and software: Exalytics, Big Data Appliance and Enterprise Manager.
    - Oracle OpenWorld III (OMN Part 3): Hardware with John Abel, Storage with Luc Gheysens. Gaye Hudson and Steve Walker talk to John Abel, Chief Technology Architect, Oracle Server and Storage, EMEA, about SPARC SuperCluster and T4, and to Luc Gheysens, Senior Director, Storage Sales Specialist, EMEA, about ZFS Storage and Pillar Axiom 600.
    - Oracle OpenWorld IV (OMN Part 4): Oracle Fusion Applications with Noel Coloe. Gaye Hudson, VP Corporate Communications, EMEA, talks to Noel Coloe, Head of Western Europe Applications Sales Development, about Oracle Fusion Applications, a new paradigm in enterprise applications.

    Read the article

  • What are Information Centers?

    - by user12244613
    Information Centers are similar to the product pages in the Oracle Sun System Handbook. Many customers like the Oracle Sun System Handbook concept of a home page with all the product attributes, troubleshooting, and more accessible from a single place. This concept is now available for a range of Oracle Solaris, Systems, and Storage products. The Information Center for each product covers areas such as Overview, Hot Topics, and Patching and Maintenance. The Information Center pages are dynamically generated each night to ensure the latest content is available to you. Here are the top Solaris, Systems, and Storage Information Centers:
    - Oracle Explorer Data Collector
    - Oracle Solaris 10 Live Upgrade
    - Oracle Solaris 11 Booting Information Center
    - Oracle Solaris 11 Desktop and Graphics Information Center
    - Oracle Solaris 11 Image Packaging System (IPS) Information Center
    - Oracle Solaris 11 Installation Information Center
    - Oracle Solaris 11 Product Information Center
    - Oracle Solaris 11 Security Information Center
    - Oracle Solaris 11 System Administration Information Center
    - Oracle Solaris 11 Zones Information Center
    - Oracle Solaris Crash Analysis Tool (SCAT) Information Center
    - Oracle Solaris Cluster Information Center
    - Oracle Solaris Internet Protocol Multipathing (IPMP) Information Center
    - Oracle Solaris Live Upgrade Information Center
    - Oracle Solaris ZFS Information Center
    - Oracle Solaris Zones Information Center
    - CMT T1000/T2000 and Netra T2000
    - CMT T5120/T5140/T5220/T5240/T5440 Systems
    - M3000/M4000/M5000/M8000/M9000-32/M9000-64
    - Management and Diagnostic Tools for Oracle Sun Systems
    - Netra CT410/810 and Netra CT900
    - Network-Attached Storage (NAS)
    - Oracle Explorer Data Collector
    - Oracle VM Server for SPARC (LDoms)
    - Pillar Axiom 600
    - SL3000 Tape Library
    - Sun Disk Storage Patching and Updates
    - Sun Fire 3800/4800/4810/6800/E2900/E4900/E6900/V1280 - Netra 1280/1290
    - Sun Fire 12K/15K/E20K/E25K
    - Sun Fire X4270 M2 Server
    - Sun x86 Servers
    - T3 and T4 Systems
    - Tape Domain Firmware
    - V210/V240/V440/V215/V245/V445 Servers
    - VSM (VTSS/VLE/VTCS)

    Read the article

  • Product Update Bulletin: Oracle Solaris Cluster October 2013

    - by uwes
    Announcing new qualifications and general news for the Oracle Solaris Cluster product.

    Hardware Qualifications
    - Sun Server X4-2 and X4-2L servers, Sun Blade X4-2B server module with Oracle Solaris Cluster 3.3
    - Sun Storage 16 Gb Fibre Channel ExpressModule Universal HBA, Emulex
    - Oracle Dual Port QDR InfiniBand Adapter M3

    Software Qualifications
    - Oracle Database 12c Real Application Clusters with Oracle Solaris Cluster 4.1
    - Oracle Database 11.2.0.4 single instance and RAC with Oracle Solaris Cluster 4.1
    - Oracle VM Server for SPARC 3.1
    - SAP NetWeaver with new kernel versions
    - ZFS Storage Appliance Kit versions 2011.1.7.0 and 2013.1.0.0
    - Application monitoring in Oracle VM Server for SPARC failover guest domains

    Storage Partner Update
    - Oracle Solaris Cluster 3.3 3/13 with the HDS Enterprise Storage arrays
    - EMC SRDF for Oracle Database 12c RAC in an Oracle Solaris Cluster 4.1 geo cluster configuration

    Oracle Solaris Cluster References
    - Korea Enterprise Data, HDFC Securities, Dealis Fund Operations

    Web Updates
    - New blog entry: Oracle Solaris 10 Brand Zone cluster
    - The Solaris Application Engineering website now includes Oracle Solaris Cluster application support information

    Please read the Oracle Solaris Cluster Product Update Bulletin on Oracle HW TRC for more details. (If you are not registered on Oracle HW TRC, click here and follow the instructions.)

    For More Information Go To:
    - Oracle.com Oracle Solaris Cluster page
    - Oracle Technology Network Oracle Solaris Cluster page
    - Oracle Solaris Cluster MOS community
    - Partner web Oracle Solaris Cluster page
    - Oracle Solaris Cluster Blog
    - Solaris.us.oracle.com page

    Read the article

  • SQL SERVER – What is the Maximum Relational Database Size Supported by Single Instance?

    - by Pinal Dave
    I often get asked the following question: "How much data can SQL Server handle?" Every single time I get this question, I ask one back: "How much data can your storage system handle?" The reason I turn the question around is that, for enterprise systems, storage is in reality no longer the limitation that it used to be. As a matter of fact, most databases nowadays are limited by the size of the storage system, not by the database engine. SQL Server is an enterprise system and a very mature product. Even so, if you still want to know the actual limit, here is the answer: SQL Server 2008 R2, 2012 and 2014 have a maximum database size of 524 PB (petabytes) in the Enterprise, BI and Standard editions. SQL Server Express has a limitation of 10 GB due to its nature. I guess that now, when you look back at my question, it will make sense that it all depends on the size of your storage system. I personally believe that at this point in time 524 PB is a huge amount of data, but we never know; in 10 years, when we read this blog post again, we may all wonder what I was thinking. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
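
    For a sense of how far a real instance is from those ceilings, here is a minimal T-SQL sketch (not taken from the original article) that reports the edition and the current data and log footprint of each database; it assumes you have permission to read sys.master_files.

        -- Which edition am I on? (Express is capped at 10 GB per database.)
        SELECT SERVERPROPERTY('Edition') AS edition;

        -- Current size of every database, split into data (ROWS) and log (LOG) files.
        -- sys.master_files stores size in 8 KB pages, so size * 8 / 1024 gives megabytes.
        SELECT DB_NAME(mf.database_id)                 AS database_name,
               mf.type_desc                            AS file_type,
               SUM(CAST(mf.size AS BIGINT)) * 8 / 1024 AS size_mb
        FROM   sys.master_files AS mf
        GROUP  BY mf.database_id, mf.type_desc
        ORDER  BY database_name, file_type;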

    Read the article

  • How can I make an unmounted / unmountable NTFS disk not show up in the nautilus devices area?

    - by Dennis
    I have an idea that my /etc/fstab is a real mish-mash, and I don't remember how it got that way. First of all, it looks like this:

        UUID=9EB80807B807DD21 /media/Storage ntfs-3g users 0 0
        UUID=a60397fd-964a-45b1-ad35-53c8a4bee010 / ext4 defaults 0 1
        UUID=1764825d-b8ba-4620-b3b0-e979b6f4f5c4 swap swap sw 0 0
        UUID=255DA1E406E29DBC /media/sda2 ntfs-3g defaults 0 0
        UUID=2CCCF161CCF1262C /mnt/sda1 ntfs-3g umask=000 0 0
        /dev/fd0 /media/floppy0 vfat noauto 0 0

    I started with an old XP install on disk /dev/sda that I don't use anymore but didn't want to delete, so I shrunk the XP partition, added an NTFS partition that would be common to both systems (labeled "Common" in XP), then installed Lucid on an extended ext4 partition. On this disk the ext4 system partition of course comes up as /, the go-between partition auto-mounts on /media/sda1 but shows up in Nautilus as COMMON, while the XP system disk does not show up in Nautilus at all, although I can get to it by navigating to /mnt/sda1. A second hard drive (/dev/sdb) that I stuck in was already formatted NTFS with a bunch of stuff and labeled "Storage". It auto-mounts to /media/Storage, but another, un-mounted disk also shows up in the Nautilus devices area called Storage, and it can't be mounted (here and in "Places" are the only times it appears). I would primarily like this non-existent (or already-mounted, depending on how you look at it) disk to not show up, but I wouldn't mind an explanation of why one labeled partition auto-mounts to a /media mount point but shows up by label, one does not show up as mounted at all but mounts to a /mnt mount point and is there for navigation, and one is mounted to a directory with the same name as its label. I would love to have some consistency and direction on what is proper in this circumstance. No doubt I caused this with the fstab, but I really don't remember what my rationale was, if I edited it manually.

    Read the article

  • Oracle DB 11gR2: Reducing Storage and Operations Costs with New 11g R2 Features

    - by Yuichi.Hayashi
    Storage-related costs make up a large share of overall system cost (the post cites a 40-60% figure), and this post looks at how Oracle Database 11g R2 can help reduce that TCO (Total Cost of Ownership). Oracle's Grid approach pools servers and storage instead of sizing each system one by one: Oracle Real Application Clusters (RAC) on the server side and Oracle Automatic Storage Management (ASM) on the storage side. Three 11g R2 capabilities are covered:

    SCAN: Single Client Access Name (SCAN) was introduced with Oracle Real Application Clusters (RAC) 11g R2. Clients connect to the cluster through the SCAN rather than through each node's VIP, so nodes can be added or removed without reconfiguring client connection settings, which lowers administration cost and TCO. For details, see the whitepapers "Oracle Database 11gR2 Real Application Clusters" and "Oracle Real Application Clusters 11g Release 2 SCAN".

    ACFS: ASM Cluster File System (ACFS) was introduced with Automatic Storage Management (ASM) 11g R2. ASM follows the S.A.M.E. (Stripe And Mirror Everything) principle and, since 10g, has managed database files directly on pooled disks. With 11g R2, ACFS extends ASM beyond database files to general-purpose files, so the same pooled, mirrored storage can be used across the environment. For details, see "Oracle Database 11gR2 Automatic Storage Management" and "Oracle Database 11g Release 2 Automatic Storage Management".

    Resource Manager: Oracle Database Resource Manager in 11g R2 helps when multiple databases or Oracle RAC instances are consolidated on a single server. By setting the CPU_COUNT initialization parameter per instance (instance caging), the CPU used by each instance can be capped, so one workload cannot starve the others. For details, see the Oracle Database 11gR2 Resource Manager documentation.

    Read the article

  • Oracle Open World 2012 Preview: Mark Your Calendars

    - by Eric Bezille
    With now less than a month to go before Oracle's major event, held as every year in San Francisco from late September to early October, speculation is running high about the announcements that will be unveiled there... Without lifting the veil, I encourage you to look at the topics of the keynotes that will be given by Larry Ellison, Mark Hurd, Thomas Kurian (head of software development) and John Fowler (head of systems development) to get a foretaste.

    Oracle strategy and roadmaps

    Of course, beyond the general sessions, which will give you a precise view of the strategy, and for those who will be on site, I encourage you not to miss the deep-dive sessions taking place during the week. Here is a selection:
    - "Accelerate your Business with the Oracle Hardware Advantage", with John Fowler, Monday, October 1, 3:15pm-4:15pm
    - "Why Oracle Software Runs Best on Oracle Hardware", with Bradley Carlile, head of benchmarks, Monday, October 1, 12:15pm-1:15pm
    - "Engineered Systems - from Vision to Game-changing Results", with Robert Shimp, Monday, October 1, 1:45pm-2:45pm
    - "Database and Application Consolidation on SPARC Supercluster", with Hugo Rivero, a manager in the hardware and software integration teams, Monday, October 1, 4:45pm-5:45pm
    - "Oracle's SPARC Server Strategy Update", with Masood Heydari, head of SPARC server development, Tuesday, October 2, 10:15am-11:15am
    - "Oracle Solaris 11 Strategy, Engineering Insights, and Roadmap", with Markus Flierl, head of Solaris development, Wednesday, October 3, 10:15am-11:15am
    - "Oracle Virtualization Strategy and Roadmap", with Wim Coekaerts, head of Oracle VM and Oracle Linux development, Monday, October 1, 12:15pm-1:15pm
    - "Big Data: The Big Story", with Jean-Pierre Dijcks, head of Big Data product development, Monday, October 1, 3:15pm-4:15pm
    - "Scaling with the Cloud: Strategies for Storage in Cloud Deployments", with Christine Rogers, Principal Product Manager, and Chris Wood, Senior Product Specialist, Storage, Monday, October 1, 10:45am-11:45am

    Customer experiences and testimonials

    While Oracle Open World is an opportunity to talk directly with Oracle's development teams, it is also a chance to exchange with customers and experts who have implemented our technologies and to benefit from their feedback, for example:
    - "Oracle Optimized Solution for Siebel CRM at ACCOR", with testimonials from Eric Wyttynck, IT Director, Multichannel & CRM, and Pascal Massenet, VP Loyalty & CRM Systems, on the benefits not only for the business but also for the project and IT, Wednesday, October 3, 1:15pm-2:15pm
    - "Tips from AT&T: Oracle E-Business Suite, Oracle Database, and SPARC Enterprise", with feedback from Oracle experts, Tuesday, October 2, 11:45am-12:45pm
    - "Creating a Maximum Availability Architecture with SPARC SuperCluster", with a testimonial from Carte Wright, Database Engineer at CKI, Wednesday, October 3, 11:45am-12:45pm
    - "Multitenancy: Everybody Talks It, Oracle Walks It with Pillar Axiom Storage", with a testimonial from Stephen Schleiger, Manager Systems Engineering at Navis, Monday, October 1, 1:45pm-2:45pm
    - "Oracle Exadata for Database Consolidation: Best Practices", with feedback from the Oracle experts who took part in an implementation for a major banking customer, Monday, October 1, 4:45pm-5:45pm
    - "Oracle Exadata Customer Panel: Packaged Applications with Oracle Exadata", moderated by Tim Shetler, VP Product Management, Tuesday, October 2, 1:15pm-2:15pm
    - "Big Data: Improving Nearline Data Throughput with the StorageTek SL8500 Modular Library System", with a testimonial from Alan Powers, CTO of CSC, Thursday, October 4, 12:45pm-1:45pm
    - "Building an IaaS Platform with SPARC, Oracle Solaris 11, and Oracle VM Server for SPARC", with testimonials from Syed Qadri, Lead DBA, and Michael Arnold, System Architect at US Cellular, Tuesday, October 2, 10:15am-11:15am
    - "Transform Data Center TCO with Oracle Optimized Servers: A Customer Panel", with testimonials from AT&T and Liberty Global, among others, Tuesday, October 2, 11:45am-12:45pm
    - "Data Warehouse and Big Data Customers' View of the Future", with The Nielsen Company US, Turkcell, GE Retail Finance, and Allianz Managed Operations and Services SE, Monday, October 1, 4:45pm-5:45pm
    - "Extreme Storage Scale and Efficiency: Lessons from a 100,000-Person Organization", the story from Oracle's internal IT of the transformation and migration of our entire storage infrastructure, Tuesday, October 2, 1:15pm-2:15pm

    Exchanges with user groups and Oracle development teams

    If you plan to arrive early enough, you can also meet the user groups as early as Sunday, or the Oracle development teams every evening, on topics such as:
    - "To Exalogic or Not to Exalogic: An Architectural Journey", with Todd Sheetz, Manager of DBA and Enterprise Architecture, Veolia Environmental Services, Sunday, September 30, 2:30pm-3:30pm
    - "Oracle Exalytics and Oracle TimesTen for Exalytics Best Practices", with Mark Rittman of Rittman Mead Consulting Ltd, Sunday, September 30, 10:30am-11:30am
    - "Introduction of Oracle Exadata at Telenet: Bringing BI to Warp Speed", with Rudy Verlinden and Eric Bartholomeus, IT infrastructure managers at Telenet, Sunday, September 30, 1:15pm-2:00pm
    - "The Perfect Marriage: Sun ZFS Storage Appliance with Oracle Exadata", with Melanie Polston, Director, Data Management, at Novation, and Charles Kim, Managing Director at Viscosity, Sunday, September 30, 9:00am-10:00am
    - "Oracle's Big Data Solutions: NoSQL, Connectors, R, and Appliance Technologies", with Jean-Pierre Dijcks and the Oracle development teams, Monday, October 1, 6:15pm-7:00pm

    Test and evaluate the solutions

    And finally, you can even try the technologies yourself at the Oracle DemoGrounds (1133 Moscone South for the Oracle Systems, OS and virtualization area) and in the hands-on labs, such as:
    - "Deploying an IaaS Environment with Oracle VM", Tuesday, October 2, 10:15am-11:15am
    - "Virtualize and Deploy Oracle Applications in Minutes with Oracle VM: Hands-on Lab", Tuesday, October 2, 11:45am-12:45pm (it is strongly recommended to take the previous hands-on lab before doing this one)
    - "x86 Enterprise Cloud Infrastructure with Oracle VM 3.x and Sun ZFS Storage Appliance", Wednesday, October 3, 5:00pm-6:00pm
    - "StorageTek Tape Analytics: Managing Tape Has Never Been So Simple", Wednesday, October 3, 1:15pm-2:15pm
    - "Oracle's Pillar Axiom 600 Storage System: Power and Ease", Monday, October 1, 12:15pm-1:15pm
    - "Enterprise Cloud Infrastructure for SPARC with Oracle Enterprise Manager Ops Center 12c", Monday, October 1, 1:45pm-2:45pm
    - "Managing Storage in the Cloud", Tuesday, October 2, 5:00pm-6:00pm
    - "Learn How to Write MapReduce on Oracle's Big Data Platform", Monday, October 1, 12:15pm-1:15pm
    - "Oracle Big Data Analytics and R", Tuesday, October 2, 1:15pm-2:15pm
    - "Reduce Risk with Oracle Solaris Access Control to Restrain Users and Isolate Applications", Monday, October 1, 10:45am-11:45am
    - "Managing Your Data with Built-In Oracle Solaris ZFS Data Services in Release 11", Monday, October 1, 4:45pm-5:45pm
    - "Virtualizing Your Oracle Solaris 11 Environment", Tuesday, October 2, 1:15pm-2:15pm
    - "Large-Scale Installation and Deployment of Oracle Solaris 11", Wednesday, October 3, 3:30pm-4:30pm

    In conclusion, a very rich week ahead, one that will let you cover all the topics at the heart of your concerns, from strategy to implementation... It is a week that needs preparing, so you can tailor your agenda from the more than 2,000 sessions of which I have given you only an extract, and all of which you can find online.

    Read the article

  • TechDays 2011 Sweden Videos

    - by Your DisplayName here!
    All the videos from the excellent Örebro event are now online:
    - Dominick Baier: A Technical Introduction to the Windows Identity Foundation (watch)
    - Dominick Baier & Christian Weyer: Securing REST-Services and Web APIs on the Windows Azure Platform (watch)
    - Christian Weyer: Real World Azure - Elasticity from on-premise to the cloud and back (watch)
    - Our interview with Robert (watch)

    Read the article

  • Types of Blobs

    - by kaleidoscope
    With the release of the Windows Azure November 2009 CTP, we now have two types of blobs:
    - Block Blob: this blob type has been in place since PDC 2008 and is optimized for streaming workloads. [Max size allowed: 200 GB]
    - Page Blob: the November 2009 CTP release adds a new blob type, optimized for random reads/writes, called a Page Blob. [Max size allowed: 1 TB]
    More details can be found at: http://geekswithblogs.net/IUnknown/archive/2009/11/16/azure-november-ctp-announced.aspx
    Amit, S

    Read the article

  • User Group Meeting Summary - April 2010

    - by Michael Stephenson
    Thanks to everyone who could make it to what turned out to be an excellent SBUG event. First, some thanks to:
    Speakers: Anthony Ross and Elton Stoneman
    Host: the various people at Hitachi who helped to organise and arrange the venue.

    Session 1 - Getting up and running with Windows Mobile and the Windows Azure Service Bus
    In this session Anthony discussed some considerations for using Windows Mobile and the Windows Azure Service Bus, drawn from a real-world project which Hitachi have been working on with EasyJet. Anthony also walked through a simplified demo of the concepts which applied on the project. In addition to the slides and demo, it was also very interesting to talk with the guys involved in this project and hear about their real experiences developing with the Azure Service Bus and some of the limitations they have had to work around in Windows Mobile's ability to interact with the service bus. On the back of this session we will look to do some further activities around this topic, and the guys offered to share their wish list of features for both Windows Mobile and Windows Azure, which we will look to share for user group discussion. Another interesting point was the cost aspect of using the ISB, which was very low.

    Session 2 - The Enterprise Cache
    In the second session Elton used a few slides based around one of his customer scenarios, where they are looking into the concept of an Enterprise Cache within the organisation. Elton discussed this concept and also a CodePlex project he is putting together which allows you to take advantage of a cache with various providers such as Memcached, AppFabric Caching and NCache. Following the presentation it was interesting to hear people's thoughts on various aspects, such as the enterprise cache versus an out-of-process application cache. There was also interesting discussion around how people would like to search the cache in the future. We will again look to put together some follow-up activity on this.

    Meeting Summary
    Following the meeting, all slide decks are saved in the SkyDrive location where we keep content from all meetings: http://cid-40015ea59a1307c8.skydrive.live.com/browse.aspx/.Public/SBUG/SBUG%20Meetings/2010%20April
    Remember that the details of all previous events are on the following page: http://uksoabpm.org/Events.aspx

    Competition
    We had three copies of the Windows Identity Foundation Patterns and Practices book that were raffled on the night; it would be great to hear any feedback on the book from those who won it.

    Recording
    The user group meeting was recorded and we will look to make this available online sometime soon.

    UG Business
    The following things were discussed as general UG topics:
    We will change the name of the user group to the UK Connected Systems User Group so that we are more in line with other user groups who cover similar topics; we believe this will help us to attract more members. The content and focus of the user group is not expected to change.
    The next meeting is 26th May and can be registered for at the following link: http://sbugmay2010.eventbrite.com/

    Read the article

  • Cloud Computing = Elasticity * Availability

    - by Herve Roggero
    What is cloud computing? Is hosting the same thing as cloud computing? Are you running a cloud if you already use virtual machines? What is the difference between Infrastructure as a Service (IaaS) and a cloud provider? And the list goes on… these questions keep coming up and all try to fundamentally explain what "cloud" means relative to other concepts. At the risk of oversimplification, answering these questions becomes simpler once you understand the primary foundations of cloud computing: Elasticity and Availability.

    Elasticity
    The basic value proposition of cloud computing is to pay as you go, and to pay for what you use. This implies that an application can expand and contract on demand, across all its tiers (presentation layer, services, database, security…). This also implies that application components can grow independently of each other. So if you need more storage for your database, you should be able to grow that tier without affecting, reconfiguring or changing the other tiers. Basically, cloud applications behave like a sponge; when you add water to a sponge, it grows in size, and in the application world, the more customers you add, the more it grows. Pure IaaS providers will provide certain benefits, specifically in terms of operating costs, but an IaaS provider will not help you in making your applications elastic; neither will virtual machines. The smallest elasticity unit of an IaaS provider and a virtual machine environment is a server (physical or virtual). While adding servers in a datacenter helps in achieving scale, it is hardly enough. The application has yet to use this hardware. If the process of adding computing resources is not transparent to the application, the application is not elastic. As you can see from the above description, designing for the cloud is not about more servers; it is about designing an application for elasticity regardless of the underlying server farm.

    Availability
    The fact of the matter is that making applications highly available is hard. It requires highly specialized tools and trained staff. On top of it, it's expensive. Many companies are required to run multiple data centers due to high availability requirements. In some organizations, some data centers are simply on standby, waiting to be used in case of a failover. Other organizations are able to achieve a certain level of success with active/active data centers, in which all available data centers serve incoming user requests. While achieving high availability for services is relatively simple, establishing a highly available database farm is far more complex. In fact it is so complex that many companies establish yearly tests to validate failover procedures. To a certain degree, certain IaaS providers can assist with complex disaster recovery planning and setting up data centers that can achieve successful failover. However, the burden is still on the corporation to manage and maintain such an environment, including regular hardware and software upgrades. Cloud computing, on the other hand, removes most of the disaster recovery requirements by hiding many of the underlying complexities.

    Cloud Providers
    A cloud provider is an infrastructure provider offering additional tools to achieve application elasticity and availability that are not usually available on-premise. For example, Microsoft Azure provides a simple configuration screen that makes it possible to run 1 or 100 web sites by clicking a button or two on a screen (simplifying provisioning), and soon SQL Azure will offer Data Federation to allow database sharding (which allows you to scale the database tier seamlessly and automatically). Other cloud providers offer certain features that are not available on-premise as well, such as Amazon S3 (Simple Storage Service), which gives you virtually unlimited storage capabilities for simple data stores and is somewhat equivalent to the Microsoft Azure Table offering (a server-independent data storage model). Unlike IaaS providers, cloud providers give you the necessary tools to adopt elasticity as part of your application architecture. Some cloud providers offer built-in high availability that gets you out of the business of configuring clustered solutions or running multiple data centers. Some cloud providers will give you more control (which puts some of that burden back on the customers' shoulders) and others will tend to make high availability totally transparent. For example, SQL Azure provides high availability automatically, which would be very difficult to achieve (and very costly) on premise. Keep in mind that each cloud provider has its strengths and weaknesses; some are better at achieving transparent scalability and server independence than others.

    Not for Everyone
    Note, however, that it is up to you to leverage the elasticity capabilities of a cloud provider, as discussed previously; if you build a website that does not need to scale, for which elasticity is not important, then you can use a traditional host provider unless you also need high availability. Leveraging the technologies of cloud providers can be difficult and can become a journey for companies that build their solutions in a scale-up fashion. Cloud computing promises to address cost containment and scalability of applications with built-in high availability. If your application does not need to scale or you do not need high availability, then cloud computing may not be for you. In fact, you may pay a premium to run your applications with cloud providers due to the underlying technologies built specifically for scalability and availability requirements. And as such, the cloud is not for everyone.

    Consistent Customer Experience, Predictable Cost
    With all its complexities, buzz and foggy definition, cloud computing boils down to a simple objective: a consistent customer experience at a predictable cost. The objective of a cloud solution is to provide the same user experience to your last customer as to the first, while keeping your operating costs directly proportional to the number of customers you have. Making your applications elastic and highly available across all their tiers, with as much automation as possible, achieves the first objective of a consistent customer experience. And the ability to expand and contract the infrastructure footprint of your application dynamically achieves the cost containment objective.

    Herve Roggero is a SQL Azure MVP and co-author of Pro SQL Azure (Apress). He is the co-founder of Blue Syntax Consulting (www.bluesyntax.net), a company focusing on cloud computing technologies and helping customers understand and adopt them. For more information contact herve at hroggero @ bluesyntax.net .

    Read the article

  • May 2010 Chicago Architects Group Meeting

    - by Tim Murphy
    The Chicago Architects Group will be holding its next meeting on May 18th. Please come and join us and get involved in our architect community. Register
    Presenter: Scott Seely
    Topic: Azure For Architects
    Location: TechNexus, 200 S. Wacker Dr., Suite 1500, Room A/B, Chicago, IL 60606
    Time: 5:30 - Doors open at 5:00
    del.icio.us Tags: Chicago Architects Group, Azure, Scott Seely

    Read the article

  • Cloud Computing: Start with the problem

    - by BuckWoody
    At one point in my life I would build my own computing system for home use. I wanted a particular video card, a certain set of drives, and a lot of memory. Not only could I not find those things in a vendor’s pre-built computer, but those were more expensive – by a lot. As time moved on and the computing industry matured, I actually find that I can buy a vendor’s system as cheaply – and in some cases far more cheaply – than I can build it myself.   This paradigm holds true for almost any product, even clothing and furniture. And it’s also held true for software… Mostly. If you need an office productivity package, you simply buy one or use open-sourced software for that. There’s really no need to write your own Word Processor – it’s kind of been done a thousand times over. Even if you need a full system for customer relationship management or other needs, you simply buy one. But there is no “cloud solution in a box”.  Sure, if you’re after “Software as a Service” – type solutions, like being able to process video (Windows Azure Media Services) or running a Pig or Hive job in Hadoop (Hadoop on Windows Azure) you can simply use one of those, or if you just want to deploy a Virtual Machine (Windows Azure Virtual Machines) you can get that, but if you’re looking for a solution to a problem your organization has, you may need to mix Software, Infrastructure, and perhaps even Platforms (such as Windows Azure Computing) to solve the issue. It’s all about starting from the problem-end first. We’ve become so accustomed to looking for a box of software that will solve the problem, that we often start with the solution and try to fit it to the problem, rather than the other way around.  When I talk with my fellow architects at other companies, one of the hardest things to get them to do is to ignore the technology for a moment and describe what the issues are. It’s interesting to monitor the conversation and watch how many times we deviate from the problem into the solution. So, in your work today, try a little experiment: watch how many times you go after a problem by starting with the solution. Tomorrow, make a conscious effort to reverse that. You might be surprised at the results.

    Read the article

  • Windows Cloud Services Aren’t Exclusive to Microsoft

    - by Ken Cox [MVP]
    The Windows Azure brand has captured mindshare for the buzzword-du-jour, ‘cloud computing’. However, Microsoft certainly isn’t the only option for cranking up virtual machines to meet unexpected or peak demands. For example, I see that OrcsWeb has released its Windows Cloud Servers product , starting at $99.99 a month*.  Competition is a good thing - and make sure you do some cost comparisons when researching cloud resources. Some of us were unpleasantly surprised by Azure’s pricing structure...(read more)

    Read the article

  • Let's keep informed with "Data Explorer"

    - by Luca Zavarella
    At PASS Summit 2011 a new project was announced. It is a Microsoft SQL Azure Lab and its codename is Microsoft "Data Explorer". According to the official blog (http://blogs.msdn.com/b/dataexplorer/), this new tool provides an innovative way to acquire new knowledge from the data that interests you. In a nutshell, Data Explorer allows you to combine data from multiple sources, and to publish and share the result. In addition, you can generate data streams in the RESTful open format (Open Data Protocol), and they can then be used by other applications. We can still use Excel or PowerPivot to analyze the results, of course. Sources can be varied: Excel spreadsheets, text files, databases, Windows Azure Marketplace, etc. For those who are not familiar with this resource, I strongly suggest you keep an eye on the data services available in the Marketplace: https://datamarket.azure.com/browse/Data
    To tell the truth, as I read the above blog post, I was tempted to think of Data Explorer as an "SSIS on Azure" aimed at the Power User. In fact, reading the response from Tim Mallalieu (Group Program Manager of Data Explorer) to the comment made on his post, I found support for my first impression: "…we originally thinking of ourselves as Self-Service ETL. As we talked to more folks and started partnering with other teams we realized that would be an area that we can add value but that there were more opportunities emerging."
    The typical operations of the ETL phase (processing and organization of data in different formats) can be carried out with Data Explorer Mashup. The flexibility in manipulating information comes from the Data Explorer Formula Language, a formula-based, Excel-style language. Anyone wishing to know more can check the project page in addition to the aforementioned blog: http://www.microsoft.com/en-us/sqlazurelabs/labs/dataexplorer.aspx
    In light of this new project, there is no doubt about Microsoft's intention to get closer and closer to the Power User, providing flexible and very easy-to-use tools for data analysis. The prime example of this is PowerPivot. The question that remains is always the same: having more Power Users in a company will implicitly mean having different data models representing the same reality. But this would inevitably lead to anarchic data management... What do you think about that?

    Read the article

  • REPLACE Multiple Spaces with One

    Replacing multiple spaces with a single space is an old problem that people usually attack with loops, functions, and/or Tally tables. Here's a set-based method from MVP Jeff Moden.
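
    For readers who want the flavor of the technique before clicking through, here is a minimal sketch of the marker-character pattern that the set-based approach relies on; it is an illustration in the same spirit as the article, not necessarily its exact code, and it assumes the marker characters '<' and '>' do not already appear in the data (an unlikely character such as CHAR(7) can be substituted when they do).

        -- Collapse runs of spaces into a single space with three nested REPLACEs.
        -- Step 1: every space becomes the pair '<>'.
        -- Step 2: deleting '><' removes the pairs that only occur inside runs of spaces.
        -- Step 3: each surviving '<>' becomes a single space.
        DECLARE @s VARCHAR(100) = 'This   string    has     irregular      spacing';

        SELECT REPLACE(REPLACE(REPLACE(@s, ' ', '<>'), '><', ''), '<>', ' ') AS collapsed;
        -- Returns: 'This string has irregular spacing'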

    Read the article

  • Microsoft Patch Tuesday Promises Critical Updates

    Microsoft revealed some plans last week for its upcoming Patch Tuesday release that should keep IT professionals busy. The latest Patch Tuesday falls on June 14, and it will bring with it 16 bulletins from Microsoft focused on fixing 34 vulnerabilities that stretch across several of the company's products...

    Read the article

  • Ruby on Rails: url_for :back leads to NoMethodError for back_url

    - by Platinum Azure
    Hi all, I'm trying to use url_for(:back) to create a redirect leading back to a previous page upon a user's logging in. I've had it working successfully for when the user just goes to the login page on his or her own. However, when the user is redirected to the login page due to accessing a page requiring that the user be authenticated, the redirect sends the user back to the page before the one s/he had tried to access with insufficient permissions. I'm trying to modify my login controller action to deal with the redirect properly. My plan is to have a query string parameter "redirect" that is used when a forced redirect occurs. In the controller, if that parameter exists that URL is used; otherwise, url_for(:back) is used, or if that doesn't work (due to lack of HTTP_REFERER), then the user is redirected to the site's home page. Here is the code snippet which is supposed to implement this logic: if params[:redirect] @url = params[:redirect] else @url = url_for :back @url ||= url_for :controller => "home", :action => "index" end The error I get is: NoMethodError in UsersController#login undefined method `back_url' for # RAILS_ROOT: [obscured] Application Trace | Framework Trace | Full Trace vendor/rails/actionpack/lib/action_controller/polymorphic_routes.rb:112:in `__send__' vendor/rails/actionpack/lib/action_controller/polymorphic_routes.rb:112:in `polymorphic_url' vendor/rails/actionpack/lib/action_controller/base.rb:628:in `url_for' app/controllers/users_controller.rb:16:in `login' /var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel/rails.rb:76:in `process' /var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel/rails.rb:74:in `synchronize' /var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel/rails.rb:74:in `process' /var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb:159:in `process_client' /var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb:158:in `each' /var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb:158:in `process_client' /var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb:285:in `run' /var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb:285:in `initialize' /var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb:285:in `new' /var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb:285:in `run' /var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb:268:in `initialize' /var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb:268:in `new' /var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb:268:in `run' /var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel/configurator.rb:282:in `run' /var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel/configurator.rb:281:in `each' /var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel/configurator.rb:281:in `run' /var/lib/gems/1.8/gems/mongrel-1.1.5/bin/mongrel_rails:128:in `run' /var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel/command.rb:212:in `run' /var/lib/gems/1.8/gems/mongrel-1.1.5/bin/mongrel_rails:281 vendor/rails/actionpack/lib/action_controller/polymorphic_routes.rb:112:in `__send__' vendor/rails/actionpack/lib/action_controller/polymorphic_routes.rb:112:in `polymorphic_url' vendor/rails/actionpack/lib/action_controller/base.rb:628:in `url_for' vendor/rails/actionpack/lib/action_controller/base.rb:1256:in `send' vendor/rails/actionpack/lib/action_controller/base.rb:1256:in `perform_action_without_filters' vendor/rails/actionpack/lib/action_controller/filters.rb:617:in `call_filters' vendor/rails/actionpack/lib/action_controller/filters.rb:610:in `perform_action_without_benchmark' vendor/rails/actionpack/lib/action_controller/benchmarking.rb:68:in `perform_action_without_rescue' 
/usr/lib/ruby/1.8/benchmark.rb:293:in `measure' vendor/rails/actionpack/lib/action_controller/benchmarking.rb:68:in `perform_action_without_rescue' vendor/rails/actionpack/lib/action_controller/rescue.rb:136:in `perform_action_without_caching' vendor/rails/actionpack/lib/action_controller/caching/sql_cache.rb:13:in `perform_action' vendor/rails/activerecord/lib/active_record/connection_adapters/abstract/query_cache.rb:34:in `cache' vendor/rails/activerecord/lib/active_record/query_cache.rb:8:in `cache' vendor/rails/actionpack/lib/action_controller/caching/sql_cache.rb:12:in `perform_action' vendor/rails/actionpack/lib/action_controller/base.rb:524:in `send' vendor/rails/actionpack/lib/action_controller/base.rb:524:in `process_without_filters' vendor/rails/actionpack/lib/action_controller/filters.rb:606:in `process_without_session_management_support' vendor/rails/actionpack/lib/action_controller/session_management.rb:134:in `process' vendor/rails/actionpack/lib/action_controller/base.rb:392:in `process' vendor/rails/actionpack/lib/action_controller/dispatcher.rb:184:in `handle_request' vendor/rails/actionpack/lib/action_controller/dispatcher.rb:112:in `dispatch_unlocked' vendor/rails/actionpack/lib/action_controller/dispatcher.rb:125:in `dispatch' vendor/rails/actionpack/lib/action_controller/dispatcher.rb:124:in `synchronize' vendor/rails/actionpack/lib/action_controller/dispatcher.rb:124:in `dispatch' vendor/rails/actionpack/lib/action_controller/dispatcher.rb:134:in `dispatch_cgi' vendor/rails/actionpack/lib/action_controller/dispatcher.rb:41:in `dispatch' vendor/rails/activesupport/lib/active_support/dependencies.rb:142:in `load_without_new_constant_marking' vendor/rails/activesupport/lib/active_support/dependencies.rb:142:in `load' vendor/rails/activesupport/lib/active_support/dependencies.rb:521:in `new_constants_in' vendor/rails/activesupport/lib/active_support/dependencies.rb:142:in `load' vendor/rails/railties/lib/commands/servers/mongrel.rb:64 /usr/lib/ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require' /usr/lib/ruby/1.8/rubygems/custom_require.rb:31:in `require' vendor/rails/activesupport/lib/active_support/dependencies.rb:153:in `require' vendor/rails/activesupport/lib/active_support/dependencies.rb:521:in `new_constants_in' vendor/rails/activesupport/lib/active_support/dependencies.rb:153:in `require' vendor/rails/railties/lib/commands/server.rb:49 /usr/lib/ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require' /usr/lib/ruby/1.8/rubygems/custom_require.rb:31:in `require' script/server:3 vendor/rails/actionpack/lib/action_controller/polymorphic_routes.rb:112:in `__send__' vendor/rails/actionpack/lib/action_controller/polymorphic_routes.rb:112:in `polymorphic_url' vendor/rails/actionpack/lib/action_controller/base.rb:628:in `url_for' app/controllers/users_controller.rb:16:in `login' vendor/rails/actionpack/lib/action_controller/base.rb:1256:in `send' vendor/rails/actionpack/lib/action_controller/base.rb:1256:in `perform_action_without_filters' vendor/rails/actionpack/lib/action_controller/filters.rb:617:in `call_filters' vendor/rails/actionpack/lib/action_controller/filters.rb:610:in `perform_action_without_benchmark' vendor/rails/actionpack/lib/action_controller/benchmarking.rb:68:in `perform_action_without_rescue' /usr/lib/ruby/1.8/benchmark.rb:293:in `measure' vendor/rails/actionpack/lib/action_controller/benchmarking.rb:68:in `perform_action_without_rescue' vendor/rails/actionpack/lib/action_controller/rescue.rb:136:in `perform_action_without_caching' 
vendor/rails/actionpack/lib/action_controller/caching/sql_cache.rb:13:in `perform_action' vendor/rails/activerecord/lib/active_record/connection_adapters/abstract/query_cache.rb:34:in `cache' vendor/rails/activerecord/lib/active_record/query_cache.rb:8:in `cache' vendor/rails/actionpack/lib/action_controller/caching/sql_cache.rb:12:in `perform_action' vendor/rails/actionpack/lib/action_controller/base.rb:524:in `send' vendor/rails/actionpack/lib/action_controller/base.rb:524:in `process_without_filters' vendor/rails/actionpack/lib/action_controller/filters.rb:606:in `process_without_session_management_support' vendor/rails/actionpack/lib/action_controller/session_management.rb:134:in `process' vendor/rails/actionpack/lib/action_controller/base.rb:392:in `process' vendor/rails/actionpack/lib/action_controller/dispatcher.rb:184:in `handle_request' vendor/rails/actionpack/lib/action_controller/dispatcher.rb:112:in `dispatch_unlocked' vendor/rails/actionpack/lib/action_controller/dispatcher.rb:125:in `dispatch' vendor/rails/actionpack/lib/action_controller/dispatcher.rb:124:in `synchronize' vendor/rails/actionpack/lib/action_controller/dispatcher.rb:124:in `dispatch' vendor/rails/actionpack/lib/action_controller/dispatcher.rb:134:in `dispatch_cgi' vendor/rails/actionpack/lib/action_controller/dispatcher.rb:41:in `dispatch' /var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel/rails.rb:76:in `process' /var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel/rails.rb:74:in `synchronize' /var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel/rails.rb:74:in `process' /var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb:159:in `process_client' /var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb:158:in `each' /var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb:158:in `process_client' /var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb:285:in `run' /var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb:285:in `initialize' /var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb:285:in `new' /var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb:285:in `run' /var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb:268:in `initialize' /var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb:268:in `new' /var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb:268:in `run' /var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel/configurator.rb:282:in `run' /var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel/configurator.rb:281:in `each' /var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel/configurator.rb:281:in `run' /var/lib/gems/1.8/gems/mongrel-1.1.5/bin/mongrel_rails:128:in `run' /var/lib/gems/1.8/gems/mongrel-1.1.5/lib/mongrel/command.rb:212:in `run' /var/lib/gems/1.8/gems/mongrel-1.1.5/bin/mongrel_rails:281 vendor/rails/activesupport/lib/active_support/dependencies.rb:142:in `load_without_new_constant_marking' vendor/rails/activesupport/lib/active_support/dependencies.rb:142:in `load' vendor/rails/activesupport/lib/active_support/dependencies.rb:521:in `new_constants_in' vendor/rails/activesupport/lib/active_support/dependencies.rb:142:in `load' vendor/rails/railties/lib/commands/servers/mongrel.rb:64 /usr/lib/ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require' /usr/lib/ruby/1.8/rubygems/custom_require.rb:31:in `require' vendor/rails/activesupport/lib/active_support/dependencies.rb:153:in `require' vendor/rails/activesupport/lib/active_support/dependencies.rb:521:in `new_constants_in' vendor/rails/activesupport/lib/active_support/dependencies.rb:153:in `require' vendor/rails/railties/lib/commands/server.rb:49 
/usr/lib/ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require' /usr/lib/ruby/1.8/rubygems/custom_require.rb:31:in `require' script/server:3 Request Parameters: None Show session dump --- :user: :csrf_id: 2927cca61bbbe97218362b5bcdb74c0f flash: !map:ActionController::Flash::FlashHash {} Response Headers: {"Content-Type"="", "cookie"=[], "Cache-Control"="no-cache"} Bear in mind that I had it working earlier-- url_for(:back) knew how to operate properly before I added this logic. Thanks in advance for any ideas!

    Read the article

  • SQL SERVER – Beginning of SQL Server Architecture – Terminology – Guest Post

    - by pinaldave
    SQL Server Architecture is a very deep subject, and covering it in a single post is an almost impossible task. However, it is a very popular topic among beginners and advanced users alike. I have asked my friend Anil Kumar, who is an expert in the SQL domain, to help me write a simple post about the beginnings of SQL Server architecture. As stated earlier, this is a very deep subject; in this first article of the series he covers the basic terminology, and in future articles he will explore the subject further. Anil Kumar Yadav is a Trainer, SQL Domain, at Koenig Solutions. Koenig is a premier IT training firm that provides several IT certifications, such as Oracle 11g, Server+, RHCA, SQL Server Training, Prince2 Foundation, etc.

    In this article we will discuss the MS SQL Server architecture. The major components of SQL Server are:
    - Relational Engine
    - Storage Engine
    - SQL OS
    Now we will discuss and understand each one of them.

    1) Relational Engine: Also called the query processor, the Relational Engine includes the components of SQL Server that determine exactly what your query needs to do and the best way to do it. It manages the execution of queries as it requests data from the storage engine and processes the results returned. Tasks of the Relational Engine include:
    - Query Processing
    - Memory Management
    - Thread and Task Management
    - Buffer Management
    - Distributed Query Processing

    2) Storage Engine: The Storage Engine is responsible for storage and retrieval of the data on the storage system (disk, SAN, etc.). When we talk about any database in SQL Server, there are two types of files that are created at the disk level: data files and log files. Data files physically store the data in data pages. Log files, also known as write-ahead logs, are used for storing the transactions performed on the database. Let's understand data files and log files in more detail.

    Data File: A data file stores data in the form of data pages (8 KB each), and these data pages are logically organized into extents.

    Extents: Extents are logical units in the database. They are a combination of 8 data pages, i.e. 64 KB forms an extent. Extents can be of two types, mixed and uniform. Mixed extents hold different types of pages, such as index, system and object data pages. Uniform extents, on the other hand, are dedicated to only one type.

    Pages: These are some of the page types SQL Server can store:
    - Data Page: holds the data entered by the user, except data of type text, ntext, nvarchar(max), varchar(max), varbinary(max), image and xml.
    - Index: stores the index entries.
    - Text/Image: stores LOB (Large Object) data such as text, ntext, varchar(max), nvarchar(max), varbinary(max), image and xml data.
    - GAM & SGAM (Global Allocation Map & Shared Global Allocation Map): used for saving information related to the allocation of extents.
    - PFS (Page Free Space): information related to page allocation and unused space available on pages.
    - IAM (Index Allocation Map): information pertaining to the extents used by a table or index, per allocation unit.
    - BCM (Bulk Changed Map): keeps information about the extents changed in a bulk operation.
    - DCM (Differential Change Map): information about the extents that have been modified since the last BACKUP DATABASE statement, per allocation unit.

    Log File: Also known as the write-ahead log, it stores modifications to the database (DML and DDL). Sufficient information is logged to be able to:
    - Roll back transactions if requested
    - Recover the database in case of failure
    Write-ahead logging is used to create log entries. Transaction logs are written in chronological order in a circular way, and the truncation policy for logs is based on the recovery model.

    3) SQL OS: This lies between the host machine (the Windows OS) and SQL Server. All the activities performed by the database engine are taken care of by SQL OS. It is a highly configurable operating system layer with a powerful API (application programming interface), enabling automatic locality and advanced parallelism. SQL OS provides various operating system services, such as memory management, which deals with the buffer pool, the log buffer and deadlock detection using the blocking and locking structure. Other services include exception handling and hosting for external components such as the Common Language Runtime (CLR).

    I guess this brief article gives you an idea of the various terminology related to SQL Server architecture. In future articles we will explore it further.

    Guest Author: The author of the article is Anil Kumar Yadav, Trainer, SQL Domain, Koenig Solutions. Koenig is a premier IT training firm that provides several IT certifications, such as Oracle 11g, Server+, RHCA, SQL Server Training, Prince2 Foundation, etc.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, SQL Training, T SQL, Technology
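
    As a small, concrete companion to the data file and log file discussion above, here is a hedged T-SQL sketch (not part of the guest post) that lists the files behind the database you are connected to; it only assumes read access to the catalog views.

        -- List the data (ROWS) and log (LOG) files of the current database.
        -- sys.database_files reports size in 8 KB pages, so size * 8 / 1024 gives megabytes.
        SELECT file_id,
               name,
               type_desc,              -- ROWS = data file, LOG = transaction log file
               physical_name,
               size * 8 / 1024 AS size_mb
        FROM   sys.database_files
        ORDER  BY type_desc, file_id;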

    Read the article

  • Ruby libraries for parsing .doc files?

    - by Platinum Azure
    Hi all, I was just wondering if anyone knew of any good libraries for parsing .doc files (and similar formats, like .odt) to extract text, while also keeping formatting information where possible for display on a website. The ability to do the same for PDFs would be a bonus, but that's not my main focus. This is for a Rails project, if that helps at all. Thanks in advance!

    Read the article

  • SQLAuthority News – Storing Data and Files in Cloud – Dropbox – Personal Technology Tip

    - by pinaldave
    I thought long and hard about doing a Personal Technology Tips series for this blog.  I have so many tips I’d like to share.  I am on my computer almost all day, every day, so I have a treasure trove of interesting tidbits I like to share if given the chance.  The only thing holding me back – which tip to share first?  The first tip obviously has the weight of seeming like the most important, but this would mean choosing amongst my favorite tricks and shortcuts.  This is a hard task.

(Image: My Dropbox. Source: Dropbox.com)

I have finally decided, though, and have determined that the first Personal Technology Tip may not be the most secret or even the trickiest to master – in fact, it is probably the easiest.  Today’s Personal Technology Tip is Dropbox. I hope that all of you are nodding along in recognition right now.  If you do not use Dropbox, or have not even heard of it before, get on the internet and find their site.  You won’t be disappointed.  A quick recap for those in the dark: Dropbox is an online storage site with a lot of additional syncing and cloud-computing capabilities.  Now that we’ve covered the basics, let’s explore some of my favorite options in Dropbox.

Collaborate with All: The first thing I love about Dropbox is the ability it gives you to collaborate with others.  You can share files easily with other Dropbox users, and they can alter them and share them with you, all while keeping track of different versions in one easy place.  I’d like to see anyone try to accomplish that key idea – “easily” – using e-mailed versions and multiple computers.  It’s even difficult to accomplish using a shared network.

Afraid that this kind of ease looks too good to be true?  Afraid that maybe there isn’t enough storage space, or that the user interface is confusing?  Think again.  There is plenty of space – you can get 2 GB with just a free account, and upgrades are inexpensive and go up to 100 GB of storage.  And the user interface is so easy that anyone can learn to use it.

What I use Dropbox for: I love Dropbox because I give a lot of presentations, and often they are far from home.  I can keep my presentations on Dropbox and have easy access to them anywhere, without needing to have my whole computer with me.  This is just one small way that you can use Dropbox. You can sync your entire hard drive, or hard drives if you have multiple computers (home, work, office, shared), and you can set Dropbox to automatically sync files on a certain timeline, or whenever Dropbox notices that they’ve been changed.

Why I love Dropbox: Dropbox has plenty of storage, but 2 GB still has a hard time competing with the average desktop’s storage space.  So what if you want to sync most of your files – only the ones you use the most and share between work and home – and not all your files (especially large files like pictures and videos)?  You can use selective sync to choose which files to sync.

Above all, my favorite feature is LAN Sync.  Dropbox will search your Local Area Network (LAN) for new files and sync them to Dropbox, as well as downloading the new version of shared files across the network.  That means that if you move around on different computers at work or at home, you will have the same version of the file every time.  Other users on the LAN will also have access to the new version, which makes collaboration extremely easy. (Ref: rzfeeser.com)

Dropbox has so many other features that I feel I could create a Personal Technology Tips series devoted entirely to Dropbox.
I’m going to create a bullet list here to make things shorter, but I strongly encourage you to look further into these options if any of them sound like something you would use:
- Theft recovery
- Home security
- File hosting and sharing
- Portable Dropbox
- Sync your iCal calendar
- Password storage

What is your favorite tool and why? I could go on and on, but I will end here.  In summary – I strongly encourage everyone to investigate Dropbox to see if it’s something they would find useful.  If you use Dropbox and know of a great feature I failed to mention, please share it with me – I’d love to hear how everyone uses this program.

Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology Tagged: Personal Technology

    Read the article

  • Item 2, Scott Meyers Effective C++ question

    - by user619818
    In Item 2 on page 16 (Prefer consts, enums, and inlines to #defines), Scott says: 'Also, though good compilers won't set aside storage for const objects of integer types'. I don't understand this. If I define a const object, e.g. const int myval = 5;, then surely the compiler must set aside some memory (of int size) to store the value 5? Or is const data stored in some special way? This is more a question of computer storage, I suppose. Basically, how does the computer store const objects so that no storage is set aside?

    Read the article

  • MySQL Connect Only 10 Days Away - Focus on InnoDB Sessions

    - by Bertrand Matthelié
    Time flies and MySQL Connect is only 10 days away! You can check out the full program here as well as in the September edition of the MySQL newsletter. Mat recently blogged about the MySQL Cluster sessions you’ll have the opportunity to attend, and below are those focused on InnoDB. Remember, you can plan your schedule with Schedule Builder.

Saturday, 1.00 pm, Room Golden Gate 3: 10 Things You Should Know About InnoDB—Calvin Sun, Oracle
InnoDB is the default storage engine for Oracle’s MySQL as of MySQL Release 5.5. It provides the standard ACID-compliant transactions, row-level locking, multiversion concurrency control, and referential integrity. InnoDB also implements several innovative technologies to improve its performance and reliability. This presentation gives a brief history of InnoDB; its main features; and some recent enhancements for better performance, scalability, and availability.

Saturday, 5.30 pm, Room Golden Gate 4: Demystified MySQL/InnoDB Performance Tuning—Dimitri Kravtchuk, Oracle
This session covers performance tuning with MySQL and the InnoDB storage engine for MySQL and explains the main improvements made in MySQL Release 5.5 and Release 5.6. Which setting for which workload? Which value will be better for my system? How can I avoid potential bottlenecks from the beginning? Do I need a purge thread? Is it true that InnoDB doesn’t need thread concurrency anymore? These and many other questions are asked by DBAs and developers. Things are changing quickly and constantly, and there is no “silver bullet.” But understanding the configuration setting’s impact is already a huge step in performance improvement. Bring your ideas and problems to share them with others—the discussion is open, just moderated by a speaker.

Sunday, 10.15 am, Room Golden Gate 4: Better Availability with InnoDB Online Operations—Calvin Sun, Oracle
Many top Web properties rely on Oracle’s MySQL as a critical piece of infrastructure for serving millions of users. Database availability has become increasingly important. One way to enhance availability is to give users full access to the database during data definition language (DDL) operations. The online DDL operations in recent MySQL releases offer users the flexibility to perform schema changes while having full access to the database—that is, with minimal delay of operations on a table and without rebuilding the entire table. These enhancements provide better responsiveness and availability in busy production environments. This session covers these improvements in the InnoDB storage engine for MySQL for online DDL operations such as add index, drop foreign key, and rename column (a short syntax sketch follows after this listing).

Sunday, 11.45 am, Room Golden Gate 7: Developing High-Throughput Services with NoSQL APIs to InnoDB and MySQL Cluster—Andrew Morgan and John Duncan, Oracle
Ever-increasing performance demands of Web-based services have generated significant interest in providing NoSQL access methods to MySQL (MySQL Cluster and the InnoDB storage engine of MySQL), enabling users to maintain all the advantages of their existing relational databases while providing blazing-fast performance for simple queries. Get the best of both worlds: persistence; consistency; rich SQL queries; high availability; scalability; and simple, flexible APIs and schemas for agile development. This session describes the memcached connectors and examines some use cases for how MySQL and memcached fit together in application architectures. It does the same for the newest MySQL Cluster native connector, an easy-to-use, fully asynchronous connector for Node.js.

Sunday, 1.15 pm, Room Golden Gate 4: InnoDB Performance Tuning—Inaam Rana, Oracle
The InnoDB storage engine has always been highly efficient and includes many unique architectural elements to ensure high performance and scalability. In MySQL 5.5 and MySQL 5.6, InnoDB includes many new features that take better advantage of recent advances in operating systems and hardware platforms than previous releases did. This session describes unique InnoDB architectural elements for performance, new features, and how to tune InnoDB to achieve better performance.

Sunday, 4.15 pm, Room Golden Gate 3: InnoDB Compression for OLTP—Nizameddin Ordulu, Facebook and Inaam Rana, Oracle
Data compression is an important capability of the InnoDB storage engine for Oracle’s MySQL. Compressed tables reduce the size of the database on disk, resulting in fewer reads and writes and better throughput by reducing the I/O workload. Facebook pushes the limit of InnoDB compression and has made several enhancements to InnoDB, making this technology ready for online transaction processing (OLTP). In this session, you will learn the fundamentals of InnoDB compression. You will also learn about the enhancements the Facebook team has made to improve InnoDB compression, such as reducing compression failures, not logging compressed page images, and allowing changes of compression level.

Not registered yet? You can still save US$ 300 over the on-site fee – Register Now!
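To make the online DDL and compression topics above a little more concrete, here is a hedged syntax sketch (MySQL 5.6 syntax; the table, column, and index names are invented for illustration and are not taken from the sessions themselves):

        -- Online DDL: add a secondary index without blocking reads or writes.
        ALTER TABLE orders
          ADD INDEX idx_customer (customer_id),
          ALGORITHM=INPLACE, LOCK=NONE;

        -- InnoDB table compression; on MySQL 5.5/5.6 this assumes
        -- innodb_file_per_table=1 and innodb_file_format=Barracuda.
        CREATE TABLE order_archive (
          id BIGINT NOT NULL PRIMARY KEY,
          payload TEXT
        ) ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;

A useful property of requesting ALGORITHM=INPLACE explicitly is that the server returns an error, rather than silently falling back to a table copy, if the requested change cannot be performed online.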

    Read the article

< Previous Page | 96 97 98 99 100 101 102 103 104 105 106 107  | Next Page >