Search Results

Search found 27852 results on 1115 pages for 'oracle openworld blog team'.


  • How to configure a WordPress blog to save uploaded images to a certain directory?

    - by kacalapy
    I have a WordPress blog installed on my server, but when I upload an image it goes into one directory while the blog front end is looking for the images in another directory, so the images show as broken. I have been manually copying the uploaded images from the upload location to the one WP looks for them in, and that works. How can I set the WP blog to upload to the same directory on the filesystem that the WP front end is trying to pull from?
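    One possible fix, sketched below as SQL against the WordPress database (assumptions: the default wp_ table prefix, and that the front end serves images from wp-content/uploads): WordPress keeps the upload directory in the upload_path row of its options table, so pointing that option at the directory the front end reads from should make new uploads land in the right place.

        -- Inspect the current upload settings:
        SELECT option_name, option_value
        FROM wp_options
        WHERE option_name IN ('upload_path', 'upload_url_path');

        -- Point uploads at the directory the front end reads from
        -- ('wp-content/uploads' is a placeholder for that directory):
        UPDATE wp_options
        SET option_value = 'wp-content/uploads'
        WHERE option_name = 'upload_path';

    The same setting can usually be changed from the WordPress admin screens without touching the database, which is the safer route.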

    Read the article

  • Oracle: TABLE ACCESS FULL with Primary key?

    - by tim
    There is a table:

        CREATE TABLE temp
        (
          IDR decimal(9) NOT NULL,
          IDS decimal(9) NOT NULL,
          DT date NOT NULL,
          VAL decimal(10) NOT NULL,
          AFFID decimal(9),
          CONSTRAINT PKtemp PRIMARY KEY (IDR, IDS, DT)
        );

        SQL> explain plan for select * from temp;
        Explained.

        SQL> select plan_table_output from table(dbms_xplan.display('plan_table', null, 'serial'));

        PLAN_TABLE_OUTPUT
        ---------------------------------------------------------------
        | Id | Operation         | Name | Rows | Bytes | Cost (%CPU)|
        ---------------------------------------------------------------
        |  0 | SELECT STATEMENT  |      |    1 |    61 |     2   (0)|
        |  1 |  TABLE ACCESS FULL| TEMP |    1 |    61 |     2   (0)|
        ---------------------------------------------------------------

        Note
        -----
        - 'PLAN_TABLE' is old version

        11 rows selected.

    SQL Server 2008 shows a clustered index scan in the same situation. What is the reason?
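    The full scan itself is expected: with no WHERE clause Oracle must read every row, and in a heap-organized table the primary key index is a separate structure that adds nothing to a read-everything query. SQL Server reports a clustered index scan only because a table with a primary key is, by default, physically stored as the clustered index there. A hedged sketch of the closest Oracle equivalent, an index-organized table (temp_iot and pk_temp_iot are illustrative names):

        -- An index-organized table stores the rows inside the primary
        -- key index itself, much like a SQL Server clustered index:
        CREATE TABLE temp_iot
        (
          IDR decimal(9) NOT NULL,
          IDS decimal(9) NOT NULL,
          DT date NOT NULL,
          VAL decimal(10) NOT NULL,
          AFFID decimal(9),
          CONSTRAINT pk_temp_iot PRIMARY KEY (IDR, IDS, DT)
        )
        ORGANIZATION INDEX;

    A full read of such a table would typically show an index full or fast full scan of PK_TEMP_IOT rather than TABLE ACCESS FULL.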

    Read the article

  • How to join two queries in SQL (Oracle)

    - by MAHESH A SONI
    How can I join these two queries?

        SELECT RCTDT, SUM(RCTAMOUNT), COUNT(RCTAMOUNT)
        FROM RECEIPTS4
        WHERE RCTDT BETWEEN '01-nov-2009' AND '30-nov-2009'
          AND RCTTYPE = 'CA'
          AND RCTAMOUNT > 0
        GROUP BY RCTDT

        SELECT RCTDT, SUM(RCTAMOUNT), COUNT(RCTAMOUNT)
        FROM RECEIPTS4
        WHERE RCTDT BETWEEN '01-nov-2009' AND '30-nov-2009'
          AND RCTTYPE = 'CQ'
          AND RCTAMOUNT > 0
        GROUP BY RCTDT
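    One way to combine them into a single pass over the table (a sketch, assuming the goal is one row per date with the CA and CQ figures side by side) is conditional aggregation:

        SELECT RCTDT,
               SUM(CASE WHEN RCTTYPE = 'CA' THEN RCTAMOUNT END)   AS CA_AMOUNT,
               COUNT(CASE WHEN RCTTYPE = 'CA' THEN RCTAMOUNT END) AS CA_COUNT,
               SUM(CASE WHEN RCTTYPE = 'CQ' THEN RCTAMOUNT END)   AS CQ_AMOUNT,
               COUNT(CASE WHEN RCTTYPE = 'CQ' THEN RCTAMOUNT END) AS CQ_COUNT
        FROM RECEIPTS4
        WHERE RCTDT BETWEEN '01-nov-2009' AND '30-nov-2009'
          AND RCTTYPE IN ('CA', 'CQ')
          AND RCTAMOUNT > 0
        GROUP BY RCTDT;

    If two stacked result sets are wanted instead, a plain UNION ALL of the two original queries (each with RCTTYPE added to the select list to tell the rows apart) also works.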

    Read the article

  • Query scope within a table trigger in an Oracle database

    - by sisslack
    I've been trying to write a table trigger that queries another table outside the schema where the trigger will reside. Is this possible? I have no problem querying tables in my own schema, but I get: Error: ORA-00942: table or view does not exist when trying to query tables outside my schema. The documentation seems to allude to this restriction, but it's not 100% clear to me.
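    One common cause of exactly this symptom, with a hedged sketch of the fix: inside a definer-rights trigger (or any stored PL/SQL), privileges acquired through a role are ignored, so the other schema's table must be granted directly to the schema that owns the trigger. other_schema.other_table and trigger_owner below are placeholder names:

        -- Run as the owner of the referenced table (or as a DBA).
        -- A grant received via a role is not visible inside stored
        -- PL/SQL; the grant must be made directly to the trigger owner:
        GRANT SELECT ON other_schema.other_table TO trigger_owner;

    After the direct grant, referencing the table with its schema-qualified name from the trigger body should compile and run.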

    Read the article

  • How to add a Facebook Like button to multiple blog posts, such as on your index page, in WordPress?

    - by Kleigh Heart Garcia
    I would like to add a Facebook Like button to my WordPress blog, but this time on my index page, i.e. across multiple blog posts. The problem is that when I copy-paste what Facebook generates for me, it only "likes" my domain name or my index file, not the individual blog post that is supposed to be liked. What should I do to get the Like button working correctly? Thank you very much!

    Read the article

  • Oracle join issue

    - by acadia
    Hello, I have 3 tables and I am joining them as follows:

        SELECT EMP.FNAME, EMP.LNAME, EMP.AGE, EMPD.TQ, EMPD.TA, CTY.CITY_NAME
        FROM EMPLOYEE EMP, EMPLOYEE_DETAIL EMPD, CITY CTY
        WHERE EMP.EMP_ID = EMPD.EMP_ID
          AND EMPD.CITY_ID = CTY.CITY_ID

    I want to display records even if there is no matching record in the CITY table. For example, if there is no row for CITY_ID 10 in CITY but there is an employee detail record with CITY_ID 10, it should display CITY_NAME as null instead of not displaying the record at all. Appreciate your help.
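    What is being described is an outer join to CITY. A sketch in both syntaxes (the behavior is the same: employee detail rows without a matching city come back with a NULL CITY_NAME):

        -- ANSI join syntax:
        SELECT EMP.FNAME, EMP.LNAME, EMP.AGE, EMPD.TQ, EMPD.TA, CTY.CITY_NAME
        FROM EMPLOYEE EMP
        JOIN EMPLOYEE_DETAIL EMPD ON EMP.EMP_ID = EMPD.EMP_ID
        LEFT OUTER JOIN CITY CTY ON EMPD.CITY_ID = CTY.CITY_ID;

        -- Equivalent old-style Oracle syntax, using the (+) operator:
        SELECT EMP.FNAME, EMP.LNAME, EMP.AGE, EMPD.TQ, EMPD.TA, CTY.CITY_NAME
        FROM EMPLOYEE EMP, EMPLOYEE_DETAIL EMPD, CITY CTY
        WHERE EMP.EMP_ID = EMPD.EMP_ID
          AND EMPD.CITY_ID = CTY.CITY_ID (+);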

    Read the article

  • Oracle 10g multiple DELETE statements

    - by bmw0128
    I'm building a DML file that first deletes records that may be in the table, then inserts records. Example:

        DELETE from foo where field1='bar';
        DELETE from foo where field1='bazz';

        INSERT ALL
          INTO foo(field1, field2) values ('bar', 'x')
          INTO foo(field1, field2) values ('bazz', 'y')
        SELECT * from DUAL;

    When I run the insert statement by itself, it runs fine. When I run the deletes, only the last delete runs. Also, it seems to be necessary to end the multi-row insert with the SELECT; is that so, and if so, why is it necessary? In the past, when using MySQL, I could just list multiple delete and insert statements, each ending with a semicolon, and the script would run fine.
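    On the second question: INSERT ALL is Oracle's multi-table insert, and its grammar requires a subquery as the row source, so the closing SELECT * FROM DUAL (one dummy row, driving one pass through the INTO clauses) is indeed mandatory. For the deletes, whether every statement in a script runs depends on how the client executes it; one way to guarantee the whole unit runs as a single call whatever the client does (a sketch) is an anonymous PL/SQL block:

        BEGIN
          -- Each statement executes in order within one call:
          DELETE FROM foo WHERE field1 = 'bar';
          DELETE FROM foo WHERE field1 = 'bazz';

          INSERT ALL
            INTO foo (field1, field2) VALUES ('bar', 'x')
            INTO foo (field1, field2) VALUES ('bazz', 'y')
          SELECT * FROM dual;

          COMMIT;
        END;
        /

    The trailing slash is the run command for SQL*Plus-style clients; other tools may submit the block differently.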

    Read the article

  • Replace the first letter in a field in Oracle

    - by Stanley
    Hi Guys, I have a table in which I need to replace the first letter of ACCT_NAME with the first name from ACCT_SHORT_NAME. Records like the highlighted one (RAFFMAN) should not be changed. I have tried:

        select acct_name, ACCT_SHORT_NAME,
               replace(acct_name, substr(acct_name, 1, 1), ACCT_SHORT_NAME)
        from tbaadm.gam
        where schm_type = 'TDA'
          and rcre_user_id = 'SYSTEM'
          and substr(acct_name, 2, 1) = ' '

    but this picks up the whole value of ACCT_SHORT_NAME. What is the best way to do what I'm trying to do?
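    The problem with REPLACE here is that it substitutes every occurrence of that first letter and splices in all of ACCT_SHORT_NAME each time. A sketch of an alternative (assuming the names in ACCT_SHORT_NAME are space-separated, so its first word is the first name): concatenate the first word of ACCT_SHORT_NAME onto everything after the first character of ACCT_NAME.

        select acct_name,
               acct_short_name,
               -- first word of ACCT_SHORT_NAME ...
               substr(acct_short_name, 1,
                      instr(acct_short_name || ' ', ' ') - 1)
               -- ... followed by ACCT_NAME minus its first character:
               || substr(acct_name, 2) as new_acct_name
        from tbaadm.gam
        where schm_type = 'TDA'
          and rcre_user_id = 'SYSTEM'
          and substr(acct_name, 2, 1) = ' ';

    Records like RAFFMAN never match the substr(acct_name, 2, 1) = ' ' filter, so they are left alone, as required.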

    Read the article

  • Import utility in Oracle

    - by Abhiram
    I export a tablespace. This tablespace contains table1 with two rows, say rowa and rowb. Now I delete rowb and insert a new rowc into this tablespace, and then I import the tablespace again. After importing, I see that table1 contains rowa, rowb and rowc, but it was supposed to contain only rowa and rowb. Can anybody tell me why this is happening?
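    A likely explanation: import does not clear a table first; it inserts the exported rows on top of whatever is already there (and, where a unique key exists, duplicates are rejected rather than merged). To make the table match the export exactly, one approach (a sketch; table1 stands in for each affected table) is to empty it before importing:

        -- Clear the current contents so the imported rows are not
        -- appended to what is already in the table:
        TRUNCATE TABLE table1;
        -- Then re-run the import, e.g. with the classic imp utility:
        --   imp user/password tables=table1 ignore=y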

    Read the article

  • Rewriting URLs in ASP.NET (Simple question)

    - by Tom Gullen
    On my master page I have: <link rel="stylesheet" href="css/default.css" /> I also have a page "blog.aspx" in the root directory, and the rewrite rule: <rewrite url="~/blog/blog.aspx" to="~/blog.aspx" /> My questions are: 1) Am I now meant to make all the links in my site point to blog/blog.aspx instead of blog.aspx, where the page is physically located? 2) What is the best way to cope with the stylesheet paths being messed up because the rewritten URL is one directory deeper?

    Read the article

  • May 20th Links: ASP.NET MVC, ASP.NET, .NET 4, VS 2010, Silverlight

    - by ScottGu
    Here is the latest in my link-listing series. Also check out my VS 2010 and .NET 4 series and ASP.NET MVC 2 series for other ongoing blog series I’m working on. [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu] ASP.NET MVC How to Localize an ASP.NET MVC Application: Michael Ceranski has a good blog post that describes how to localize ASP.NET MVC 2 applications. ASP.NET MVC with jTemplates Part 1 and Part 2: Steve Gentile has a nice two-part set of blog posts that demonstrate how to use the jTemplate and DataTable jQuery libraries to implement client-side data binding with ASP.NET MVC. CascadingDropDown jQuery Plugin for ASP.NET MVC: Raj Kaimal has a nice blog post that demonstrates how to implement a dynamically constructed cascading dropdownlist on the client using jQuery and ASP.NET MVC. How to Configure VS 2010 Code Coverage for ASP.NET MVC Unit Tests: Visual Studio enables you to calculate the “code coverage” of your unit tests. This measures the percentage of code within your application that is exercised by your tests – and can give you a sense of how much test coverage you have. Gunnar Peipman demonstrates how to configure this for ASP.NET MVC projects. Shrinkr URL Shortening Service Sample: A nice open source application and code sample built by Kazi Manzur that demonstrates how to implement a URL shortening service (like bit.ly) using ASP.NET MVC 2 and EF4. More details here. Creating RSS Feeds in ASP.NET MVC: Damien Guard has a nice post that describes a cool new “FeedResult” class he created that makes it easy to publish and expose RSS feeds from within ASP.NET MVC sites. NoSQL with MongoDB, NoRM and ASP.NET MVC Part 1 and Part 2: Nice two-part blog series by Shiju Varghese on how to use MongoDB (a document database) with ASP.NET MVC. If you are interested in document databases also make sure to check out the Raven DB project from Ayende. Using the FCKEditor with ASP.NET MVC: Quick blog post that describes how to use FCKEditor – an open source HTML Text Editor – with ASP.NET MVC. ASP.NET Replace Html.Encode Calls with the New HTML Encoding Syntax: Phil Haack has a good blog post that describes a useful way to quickly update your ASP.NET pages and ASP.NET MVC views to use the new <%: %> encoding syntax in ASP.NET 4. I blogged about the new <%: %> syntax – it provides an easy and concise way to HTML encode content. Integrating Twitter into an ASP.NET Website using OAuth: Scott Mitchell has a nice article that describes how to take advantage of Twitter within an ASP.NET Website using the OAuth protocol – which is a simple, secure protocol for granting API access. Creating an ASP.NET report using VS 2010 Part 1, Part 2, and Part 3: Raj Kaimal has a nice three-part set of blog posts that detail how to use SQL Server Reporting Services, ASP.NET 4 and VS 2010 to create a dynamic reporting solution. Three Hidden Extensibility Gems in ASP.NET 4: Phil Haack blogs about three obscure but useful extensibility points enabled with ASP.NET 4. .NET 4 Entity Framework 4 Video Series: Julie Lerman has a nice, free, 7-part video series on MSDN that walks through how to use the new EF4 capabilities with VS 2010 and .NET 4. I’ll be covering EF4 in a blog series that I’m going to start shortly as well. 
Getting Lazy with System.Lazy: System.Lazy and System.Lazy<T> are new features in .NET 4 that provide a way to create objects that may need to perform time consuming operations and defer the execution of the operation until it is needed.  Derik Whittaker has a nice write-up that describes how to use it. LINQ to Twitter: Nifty open source library on Codeplex that enables you to use LINQ syntax to query Twitter. Visual Studio 2010 Using Intellitrace in VS 2010: Chris Koenig has a nice 10 minute video that demonstrates how to use the new Intellitrace features of VS 2010 to enable DVR playback of your debug sessions. Make the VS 2010 IDE Colors look like VS 2008: Scott Hanselman has a nice blog post that covers the Visual Studio Color Theme Editor extension – which allows you to customize the VS 2010 IDE however you want. How to understand your code using Dependency Graphs, Sequence Diagrams, and the Architecture Explorer: Jennifer Marsman has a nice blog post describes how to take advantage of some of the new architecture features within VS 2010 to quickly analyze applications and legacy code-bases. How to maintain control of your code using Layer Diagrams: Another great blog post by Jennifer Marsman that demonstrates how to setup a “layer diagram” within VS 2010 to enforce clean layering within your applications.  This enables you to enforce a compiler error if someone inadvertently violates a layer design rule. Collapse Selection in Solution Explorer Extension: Useful VS 2010 extension that enables you to quickly collapse “child nodes” within the Visual Studio Solution Explorer.  If you have deeply nested project structures this extension is useful. Silverlight and Windows Phone 7 Building a Simple Windows Phone 7 Application: A nice tutorial blog post that demonstrates how to take advantage of Expression Blend to create an animated Windows Phone 7 application. If you haven’t checked out my Windows Phone 7 Twitter Tutorial I also recommend reading that. Hope this helps, Scott P.S. If you haven’t already, check out this month’s "Find a Hoster” page on the www.asp.net website to learn about great (and very inexpensive) ASP.NET hosting offers.

    Read the article

  • March 21st Links: ASP.NET, ASP.NET MVC, AJAX, Visual Studio, Silverlight

    - by ScottGu
    Here is the latest in my link-listing series. If you haven’t already, check out this month’s “Find a Hoster” page on the www.asp.net website to learn about great (and very inexpensive) ASP.NET hosting offers. [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu] ASP.NET URL Routing in ASP.NET 4: Scott Mitchell has a nice article that talks about the new URL routing features coming to Web Forms applications with ASP.NET 4. Also check out my previous blog post on this topic. Control of Web Control ClientID Values in ASP.NET 4: Scott Mitchell has a nice article that describes how it is now easy to control the client “id” value emitted by server controls with ASP.NET 4. Web Deployment Made Awesome: Very nice MIX10 talk by Scott Hanselman on the new web deployment features coming with VS 2010, MSDeploy, and .NET 4. Makes deploying web applications much, much easier. ASP.NET 4’s Browser Capabilities Support: Nice blog post by Stephen Walther that talks about the new browser definition capabilities support coming with ASP.NET 4. Integrating Twitter into an ASP.NET Website: Nice article by Scott Mitchell that demonstrates how to call and integrate Twitter from within your ASP.NET applications. Improving CSS with .LESS: Nice article by Scott Mitchell that describes how to optimize CSS using .LESS – a free, open source library. ASP.NET MVC Upgrading ASP.NET MVC 1 applications to ASP.NET MVC 2: Eilon Lipton from the ASP.NET team has a nice post that describes how to easily upgrade your ASP.NET MVC 1 applications to ASP.NET MVC 2. He has an automated tool that makes this easy. Note that automated MVC upgrade support is also built into VS 2010. Use the tool in this blog post for updating existing MVC projects using VS 2008. Advanced ASP.NET MVC 2: Nice video talk by Brad Wilson of the ASP.NET MVC team. In it he describes some of the more advanced features in ASP.NET MVC 2 and how to maximize your productivity with them. Dynamic Select Lists with ASP.NET MVC and jQuery: Michael Ceranski has a nice blog post that describes how to dynamically populate dropdownlists on the client using AJAX. AJAX Microsoft AJAX Minifier: We recently shipped an updated minifier utility that allows you to shrink/minify both JavaScript and CSS files – which can improve the performance of your web applications. You can run this either manually as a command-line tool or now automatically integrate it using a Visual Studio build task. You can download it for free here. Visual Studio VS 2010 Tip: Quickly Closing Documents: Nice blog post that describes some techniques for optimizing how windows are closed with the new VS 2010 IDE. Collapse to Definitions with Outlining: Nice tip from Zain on how to collapse your code editor to outline mode using Ctrl + M, Ctrl + O. Also check out his post on copy/paste with outlining here. $299 VS 2010 Upgrade Offer for VS 2005/2008 Standard Users: Soma blogs about a nice VS 2010 upgrade offer you can take advantage of if you have VS 2005 or VS 2008 Standard editions. For $299 you can upgrade to VS 2010 Professional edition. Dependency Graphics: Jason Zander (who runs the VS team) has a nice blog post that covers the new dependency graph support within VS 2010. This makes it easier to visualize the dependencies within your application. Also check out this video here. Layer Validation: Jason Zander has a nice blog post that talks about the new layer validation features in VS 2010.  
This enables you to enforce cleaner layering within your projects and solutions.  VS 2010 Profiler Blog: The VS 2010 Profiler Team has their own blog and on it you can find a bunch of nice posts from the last few months that talk about a lot of the new features coming with VS 2010’s Profiler support.  Some really nice features coming. Silverlight Silverlight 4 Training Course: Nice free set of training courses from Microsoft that can help bring you up to speed on all of the new Silverlight 4 features and how to build applications with them.  Updated and current with the recently released Silverlight 4 RC build and tools. Getting Started with Silverlight and Windows Phone 7 Development: Nice blog post by Tim Heuer that summarizes how to get started building Windows Phone 7 applications using Silverlight.  Also check out my blog post from last week on how to build a Windows Phone 7 Twitter application using Silverlight. A Guide to What Has Changed with the Silverlight 4 RC: Nice summary post by Tim Heuer that describes all of the things that have changed between the Silverlight 4 Beta and the Silverlight 4 RC. Path Based Layout - Part 1 and Part 2: Christian Schormann has a nice blog post about a really cool new feature in Expression Blend 4 and Silverlight 4 called Path Layout. Also check out Andy Beaulieu’s blog post on this. Hope this helps, Scott

    Read the article

  • ODI and OBIEE 11g Integration

    - by David Allan
    Here we will see some of the connectivity options to OBIEE 11g using the JDBC driver. You’ll see, based upon some connection properties, how the physical or presentation layers can be utilized. In the integrator’s guide for OBIEE 11g you will find a brief statement indicating that there actually is a JDBC driver for OBIEE. In OBIEE 11g it’s now possible to connect directly to the physical layer; Venkat has an informative post here on this topic. In ODI 11g the Oracle BI technology is shipped with the product, along with KMs for reverse engineering and for using OBIEE models as a data source. When you install OBIEE 11g a lightweight demonstration application is preinstalled on the server; when you open this in the BI Administration tool you see the regular 3-panel view within the administration tool. To interrogate this system via JDBC (just like ODI does using the KMs) we need a couple of things: the JDBC driver from OBIEE 11g, a Java client program and the credentials. In my Java client program I want to connect to the OBIEE system; when I connect I can interrogate what the JDBC driver presents for the metadata. The metadata projected via the JDBC connection’s DatabaseMetadata changes depending on whether the property NQ_SESSION.SELECTPHYSICAL is set when the Java client connects. Let’s use the sample app to illustrate. I have a Java client program here that will print out the tables in the DatabaseMetadata; it will also output the catalog and schema. For example, if I execute without any special JDBC properties as follows:

        java -classpath .;%BIHOMEDIR%\clients\bijdbc.jar meta_jdbc oracle.bi.jdbc.AnaJdbcDriver jdbc:oraclebi://localhost:9703/ weblogic mypass

    then I get the following returned, representing the presentation layer (the sample I used is XML, and has no schema):

        Catalog               Schema   Table
        Sample Sales Lite     null     Base Facts
        Sample Sales Lite     null     Calculated Facts
        ...
        Sample Targets Lite   null     Base Facts
        ...

    Now if I execute with the only difference being the JDBC property NQ_SESSION.SELECTPHYSICAL with the value Yes, then I see a different set of values, representing the physical layer in OBIEE:

        java -classpath .;%BIHOMEDIR%\clients\bijdbc.jar meta_jdbc oracle.bi.jdbc.AnaJdbcDriver jdbc:oraclebi://localhost:9703/ weblogic mypass NQ_SESSION.SELECTPHYSICAL=Yes

    The following is returned:

        Catalog                Schema   Table
        Sample App Lite Data   null     D01 Time Day Grain
        Sample App Lite Data   null     F10 Revenue Facts (Order grain)
        ...
        System DB (Update me)
        ...

    If this were a database system such as Oracle, the catalog value would be the OBIEE database name and the schema would be the Oracle database schema. Other systems which have a real catalog structure, such as SQL Server, would use their catalog value. It’s this ‘Catalog’ and ‘Schema’ value that is important when integrating OBIEE with ODI. For the demonstration application in OBIEE 11g, the following illustration shows how the information from OBIEE is related via the JDBC driver through to ODI. In the XML example above, within ODI’s physical schema definition on the right, we leave the schema blank since the XML data source has no schema. When I did this at first, I left the default value that ODI places in the Schema field, which was ‘<Undefined>’ (like the image below), but this string is actually used in the RKM, so I ended up not finding any tables in this schema! Entering an empty string resolved this. Below we see a regular Oracle database example that has the database, schema and physical table structure, and how this is defined in ODI.
    Remember the switch between the physical and presentation layers that we made by passing the special property? To do this in ODI, the data server has a panel for properties where you can define key/value pairs. So if you want to select physical objects from the OBIEE server, you must set this property. An additional change in ODI 11g is the OBIEE connection pool support, which has been implemented via a ‘Connection Pool’ flex field for the Oracle BI data server. Here you set the name of the connection pool from the OBIEE system that you specifically want to use; it is used by the Oracle BI to Oracle (DBLINK) LKM, so if you are using that LKM you must set this flex field. Hopefully a useful insight into some of the mechanics of how this hangs together.

    Read the article

  • Troubleshooting Application Timeouts in SQL Server

    - by Tara Kizer
    I recently received the following email from a blog reader: "We have an OLTP database instance using SQL Server 2005 with light to moderate traffic (10-20 requests/min). There are also bulk imports that occur at regular intervals in this DB, with import durations ranging from 10 secs to 1 min depending on the data size. Intermittently (2-3 times a week), we face an issue where queries get timed out (default of 30 secs set in the application). On analyzing, we found two stored procedures, with multiple table joins inside them, taking a long time (5-10 mins) to execute, when ideally the execution duration ranges between 5-10 secs. Their execution plans showed a Clustered Index Scan happening instead of a Clustered Index Seek. All required indexes are present and index fragmentation is minimal, as we rebuild indexes regularly along with updating statistics. With no other options occurring to us, we restarted SQL Server, and thereafter the performance was back on track. But it was still giving timeout errors for some hits, so we also restarted IIS and that has stopped the problem for now." Rather than respond directly to the blog reader, I thought it would be more interesting to share my thoughts on this issue in a blog. There are a few things that I can think of that could cause abnormal timeouts:

    - Blocking
    - Bad plan in cache
    - Outdated statistics
    - Hardware bottleneck

    To determine if blocking is the issue, we can easily run sp_who/sp_who2 or a query directly on sysprocesses (select * from master..sysprocesses where blocked <> 0).  If blocking is present and consistent, then you'll need to determine whether or not to kill the parent blocking process.  Killing a process will cause the transaction to roll back, so you need to proceed with caution.  Killing the parent blocking process is only a temporary solution, so you'll need to do more thorough analysis to figure out why the blocking was present.  You should look into missing indexes and perhaps consider changing the database's isolation level to READ_COMMITTED_SNAPSHOT. The blog reader mentions that the execution plan shows a clustered index scan when a clustered index seek is normal for the stored procedure.  A clustered index scan might have been chosen either because that is what is in cache already or because of out-of-date statistics.  The blog reader mentions that bulk imports occur at regular intervals, so outdated statistics are definitely something that could cause this issue.  The blog reader may need to update statistics after imports are done if the imports are changing a lot of data (greater than 10%).  If the statistics are good, then the query optimizer might have chosen to scan rather than seek in a previous execution because the scan was determined to be less costly due to the value of an input parameter.  If this parameter value is rare, then its execution plan in cache is what we call a bad plan.  You want the best plan in cache for the most frequent parameter values.  If a bad plan is a recurring problem on your system, then you should consider rewriting the stored procedure.  You might want to break up the code into multiple stored procedures so that each can have a different execution plan in cache. To remove a bad plan from cache, you can recompile the stored procedure.  An alternative method is to run DBCC FREEPROCCACHE, which drops the procedure cache.  
    It is better to recompile stored procedures rather than dropping the procedure cache, as dropping the procedure cache affects all plans in cache rather than just the ones that were bad, so there will be a temporary performance penalty until the plans are loaded into cache again. To determine if there is a hardware bottleneck occurring, such as slow I/O or high CPU utilization, you will need to run Performance Monitor on the database server.  Hopefully you already have a baseline of the server so you know what is normal and what is not.  Be on the lookout for I/O requests taking longer than 12 milliseconds and CPU utilization over 90%.  The servers that I support typically run under 30% CPU utilization, but your baseline could be higher and be within a normal range. If restarting the SQL Server service fixes the problem, then the problem was most likely due to blocking or a bad plan in the procedure cache.  Rather than restarting the SQL Server service, which causes downtime, the blog reader should instead analyze the above-mentioned things.  Proceed with caution when restarting the SQL Server service, as all transactions that have not completed will be rolled back at startup.  This crash recovery process could take longer than normal if there was a long-running transaction running when the service was stopped.  Until the crash recovery process is completed on the database, it is unavailable to your applications. If restarting IIS fixes the problem, then the problem might not have been inside SQL Server.  Prior to taking this step, you should do analysis of the above-mentioned things. If you can think of other reasons why the blog reader is facing this issue a few times a week, I'd love to hear your thoughts via a blog comment.
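    For reference, a short T-SQL sketch of the checks and the targeted fix described above (dbo.MyProc and dbo.MyTable are placeholder names):

        -- Is anything blocked right now?
        SELECT spid, blocked, waittime, lastwaittype, cmd
        FROM master..sysprocesses
        WHERE blocked <> 0;

        -- Refresh statistics on a table that the bulk imports churn:
        UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;

        -- Evict one procedure's plan instead of dropping the whole cache:
        EXEC sp_recompile 'dbo.MyProc';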

    Read the article

  • MySQL Connect Only 10 Days Away - Focus on InnoDB Sessions

    - by Bertrand Matthelié
    Time flies and MySQL Connect is only 10 days away! You can check out the full program here as well as in the September edition of the MySQL newsletter. Mat recently blogged about the MySQL Cluster sessions you’ll have the opportunity to attend, and below are those focused on InnoDB. Remember you can plan your schedule with Schedule Builder. Saturday, 1.00 pm, Room Golden Gate 3: 10 Things You Should Know About InnoDB—Calvin Sun, Oracle InnoDB is the default storage engine for Oracle’s MySQL as of MySQL Release 5.5. It provides the standard ACID-compliant transactions, row-level locking, multiversion concurrency control, and referential integrity. InnoDB also implements several innovative technologies to improve its performance and reliability. This presentation gives a brief history of InnoDB; its main features; and some recent enhancements for better performance, scalability, and availability. Saturday, 5.30 pm, Room Golden Gate 4: Demystified MySQL/InnoDB Performance Tuning—Dimitri Kravtchuk, Oracle This session covers performance tuning with MySQL and the InnoDB storage engine for MySQL and explains the main improvements made in MySQL Release 5.5 and Release 5.6. Which setting for which workload? Which value will be better for my system? How can I avoid potential bottlenecks from the beginning? Do I need a purge thread? Is it true that InnoDB doesn't need thread concurrency anymore? These and many other questions are asked by DBAs and developers. Things are changing quickly and constantly, and there is no “silver bullet.” But understanding the configuration setting’s impact is already a huge step in performance improvement. Bring your ideas and problems to share them with others—the discussion is open, just moderated by a speaker. Sunday, 10.15 am, Room Golden Gate 4: Better Availability with InnoDB Online Operations—Calvin Sun, Oracle Many top Web properties rely on Oracle’s MySQL as a critical piece of infrastructure for serving millions of users. Database availability has become increasingly important. One way to enhance availability is to give users full access to the database during data definition language (DDL) operations. The online DDL operations in recent MySQL releases offer users the flexibility to perform schema changes while having full access to the database—that is, with minimal delay of operations on a table and without rebuilding the entire table. These enhancements provide better responsiveness and availability in busy production environments. This session covers these improvements in the InnoDB storage engine for MySQL for online DDL operations such as add index, drop foreign key, and rename column. Sunday, 11.45 am, Room Golden Gate 7: Developing High-Throughput Services with NoSQL APIs to InnoDB and MySQL Cluster—Andrew Morgan and John Duncan, Oracle Ever-increasing performance demands of Web-based services have generated significant interest in providing NoSQL access methods to MySQL (MySQL Cluster and the InnoDB storage engine of MySQL), enabling users to maintain all the advantages of their existing relational databases while providing blazing-fast performance for simple queries. Get the best of both worlds: persistence; consistency; rich SQL queries; high availability; scalability; and simple, flexible APIs and schemas for agile development. This session describes the memcached connectors and examines some use cases for how MySQL and memcached fit together in application architectures. 
It does the same for the newest MySQL Cluster native connector, an easy-to-use, fully asynchronous connector for Node.js. Sunday, 1.15 pm, Room Golden Gate 4: InnoDB Performance Tuning—Inaam Rana, Oracle The InnoDB storage engine has always been highly efficient and includes many unique architectural elements to ensure high performance and scalability. In MySQL 5.5 and MySQL 5.6, InnoDB includes many new features that take better advantage of recent advances in operating systems and hardware platforms than previous releases did. This session describes unique InnoDB architectural elements for performance, new features, and how to tune InnoDB to achieve better performance. Sunday, 4.15 pm, Room Golden Gate 3: InnoDB Compression for OLTP—Nizameddin Ordulu, Facebook and Inaam Rana, Oracle Data compression is an important capability of the InnoDB storage engine for Oracle’s MySQL. Compressed tables reduce the size of the database on disk, resulting in fewer reads and writes and better throughput by reducing the I/O workload. Facebook pushes the limit of InnoDB compression and has made several enhancements to InnoDB, making this technology ready for online transaction processing (OLTP). In this session, you will learn the fundamentals of InnoDB compression. You will also learn the enhancements the Facebook team has made to improve InnoDB compression, such as reducing compression failures, not logging compressed page images, and allowing changes of compression level. Not registered yet? You can still save US$ 300 over the on-site fee – Register Now!

    Read the article

  • TFS API Add Favorites programmatically

    - by Tarun Arora
    01 – What are we trying to achieve? In this blog post I’ll be showing you how to add work item queries as favorites; the same technique can also be used to add build definitions as favorites. Once a shared query or build definition has been added as a favorite it will show up in Team Web Access. In the absence of a proper API, I’ll be showing you a workaround for adding queries to team favorites. 02 – Disclaimer There is no official API for adding favorites programmatically. In the workaround below I am using the Identity service to store this data in a property bag which is used when favorites are displayed on the team web site. This relies on an internal data structure that could change over time; there is no guarantee about the key names or the content of the values. What is shown below is a workaround for a missing API. 03 – Concept There is no direct API support for favorites, but you can work around it using the identity service in TFS. Favorites are stored in the property bag associated with the TeamFoundationIdentity (either the ‘team’ identity or the user’s identity, depending on whether these are ‘team’ or ‘my’ favorites). The data is stored as JSON in the property bag of the identity, the key being prefixed by ‘Microsoft.TeamFoundation.Framework.Server.IdentityFavorites’. References - Microsoft.TeamFoundation.WorkItemTracking.Client - using Microsoft.TeamFoundation.Client; - using Microsoft.TeamFoundation.Framework.Client; - using Microsoft.TeamFoundation.Framework.Common; - using Microsoft.TeamFoundation.ProcessConfiguration.Client; - using Microsoft.TeamFoundation.Server; - using Microsoft.TeamFoundation.WorkItemTracking.Client; Services - IIdentityManagementService2 - TfsTeamService - WorkItemStore 04 – Solution Let’s start by connecting to TFS programmatically:

        // Create an instance of the services to be used during the program
        private static TfsTeamProjectCollection _tfs;
        private static ProjectInfo _selectedTeamProject;
        private static WorkItemStore _wis;
        private static TfsTeamService _tts;
        private static TeamSettingsConfigurationService _teamConfig;
        private static IIdentityManagementService2 _ids;

        // Connect to TFS programmatically
        public static bool ConnectToTfs()
        {
            var isSelected = false;
            var tfsPp = new TeamProjectPicker(TeamProjectPickerMode.SingleProject, false);
            tfsPp.ShowDialog();
            _tfs = tfsPp.SelectedTeamProjectCollection;
            if (tfsPp.SelectedProjects.Any())
            {
                _selectedTeamProject = tfsPp.SelectedProjects[0];
                isSelected = true;
            }
            return isSelected;
        }

    Let’s get all the work item queries from the selected team project:

        static readonly Dictionary<string, string> QueryAndGuid = new Dictionary<string, string>();

        // Get all queries and query guids in the selected team project
        private static void GetQueryGuidList(IEnumerable<QueryItem> query)
        {
            foreach (QueryItem subQuery in query)
            {
                if (subQuery.GetType() == typeof(QueryFolder))
                    GetQueryGuidList((QueryFolder)subQuery);
                else
                {
                    QueryAndGuid.Add(subQuery.Name, subQuery.Id.ToString());
                }
            }
        }
    Pass the name of a valid team in your team project and the name of a valid query in your team project. The team details will be extracted using the team name, and the query GUID will be extracted using the query name. These details are used to construct the key and value that are passed to the SetProperty method in the Identity service.

    Key:

        Microsoft.TeamFoundation.Framework.Server.IdentityFavorites..<TeamProjectURI>.<TeamId>.WorkItemTracking.Queries.<newGuid1>

    Value:

        {"data":"<QueryGuid>","id":"<NewGuid1>","name":"<QueryKey>","type":"Microsoft.TeamFoundation.WorkItemTracking.QueryItem"}

        // Configure a work item query as a favorite for the given team
        private static void ConfigureTeamFavorites(string teamName, string queryName)
        {
            _ids = _tfs.GetService<IIdentityManagementService2>();
            var g = Guid.NewGuid();
            var guid = string.Empty;
            var teamDetail = _tts.QueryTeams(_selectedTeamProject.Uri).FirstOrDefault(t => t.Name == teamName);
            foreach (var q in QueryAndGuid.Where(q => q.Key == queryName))
            {
                guid = q.Value;
            }
            if (guid == string.Empty)
            {
                Console.WriteLine("Query '{0}' - Not found!", queryName);
                return;
            }
            var key = string.Format(
                "Microsoft.TeamFoundation.Framework.Server.IdentityFavorites..{0}.{1}.WorkItemTracking.Queries{2}",
                new Uri(_selectedTeamProject.Uri).Segments.LastOrDefault(),
                teamDetail.Identity.TeamFoundationId,
                g);
            var value = string.Format(
                @"{0}""data"":""{1}"",""id"":""{2}"",""name"":""{3}"",""type"":""Microsoft.TeamFoundation.WorkItemTracking.QueryItem""{4}",
                "{", guid, g, QueryAndGuid.FirstOrDefault(q => q.Value == guid).Key, "}");
            teamDetail.Identity.SetProperty(IdentityPropertyScope.Local, key, value);
            _ids.UpdateExtendedProperties(teamDetail.Identity);
            Console.WriteLine("{0}Added Query '{1}' as Favorite", Environment.NewLine, queryName);
        }

    If you have any questions or suggestions, leave a comment. Enjoy!

    Read the article

  • Coherence Warnings in WLS

    - by john.graves(at)oracle.com
    With 11g (10.3.4 WLS), Coherence is now built into many applications. I’ve been noticing errors in my OSB logs like these:

    ####<10/03/2011 10:45:40 AM EST> <Warning> <Coherence> <osb-jeos> <osb_server1> <Logger@324239121 3.6.0.4> <<anonymous>> <> <583c10bfdbd326ba:-8c38159:12e9d02c829:-8000-0000000000000003> <1299714340643> <BEA-000000> <Oracle Coherence 3.6.0.4 (member=n/a): UnicastUdpSocket failed to set receive buffer size to 714 packets (1023KB); actual size is 12%, 89 packets (127KB). Consult your OS documentation regarding increasing the maximum socket buffer size. Proceeding with the actual value may cause sub-optimal performance.>

    ####<10/03/2011 10:45:40 AM EST> <Warning> <Coherence> <osb-jeos> <osb_server1> <Logger@324239121 3.6.0.4> <<anonymous>> <> <583c10bfdbd326ba:-8c38159:12e9d02c829:-8000-0000000000000003> <1299714340650> <BEA-000000> <Oracle Coherence 3.6.0.4 (member=n/a): PreferredUnicastUdpSocket failed to set receive buffer size to 1428 packets (1.99MB); actual size is 6%, 89 packets (127KB). Consult your OS documentation regarding increasing the maximum socket buffer size. Proceeding with the actual value may cause sub-optimal performance.>

    ####<10/03/2011 10:45:40 AM EST> <Warning> <Coherence> <osb-jeos> <osb_server1> <Logger@324239121 3.6.0.4> <<anonymous>> <> <583c10bfdbd326ba:-8c38159:12e9d02c829:-8000-0000000000000003> <1299714340659> <BEA-000000> <Oracle Coherence 3.6.0.4 (member=n/a): MulticastUdpSocket failed to set receive buffer size to 714 packets (1023KB); actual size is 12%, 89 packets (127KB). Consult your OS documentation regarding increasing the maximum socket buffer size. Proceeding with the actual value may cause sub-optimal performance.>

    I was able to “fix” this on my Ubuntu system by adding the following lines to the /etc/sysctl.conf file:

        # Setup networking for coherence
        # maximum receive socket buffer size, default 131071
        net.core.rmem_max = 2000000
        # maximum send socket buffer size, default 131071
        net.core.wmem_max = 1000000
        # default receive socket buffer size, default 65535
        net.core.rmem_default = 2524287
        # default send socket buffer size, default 65535
        net.core.wmem_default = 2524287

    Read the article

  • #OOW 2012 : IaaS, Private Cloud, Multitenant Database, and X3H2M2

    - by Eric Bezille
    The title of this post is a summary of the 4 announcements made by Larry Ellison today, during the opening session of Oracle OpenWorld 2012... To know what's behind X3H2M2, you will have to wait a little, as I will go in order, beginning with the IaaS - Infrastructure as a Service - announcement.

    Oracle IaaS goes Public... and Private... Starting in 2004 with Fusion development, Oracle Cloud was launched last year to provide not only SaaS applications, based on standard development, but also the underlying PaaS required to build the specifics and the required interconnections between applications, in and outside of the Cloud. Still, to cover the end-to-end Cloud Services spectrum, we had to provide an Infrastructure as a Service, leveraging our servers, storage, OS, and virtualization technologies, all "Engineered Together". This Cloud Infrastructure was already available for our customers to rapidly build their own Private Cloud, either on SPARC/Solaris or x86/Linux... The second announcement made today takes that proposition a big step further: for cautious customers (like banks, or sensitive industries) who would like to benefit from the Cloud value of "as a Service" but don't want their data out in the Cloud, we propose to operate the same systems that provide our Public Cloud Infrastructure - Exadata, Exalogic & SuperCluster - behind their firewall, in a Private Cloud model.

    Oracle 12c Multitenant Database This is also a major announcement made today about what's coming with Oracle Database 12c: the ability to consolidate multiple databases at no additional cost, especially in terms of the memory needed on the server node, which is often THE limiting factor for consolidation. The principle can be compared to Solaris Zones: you have a Database Container, which "owns" the memory and database background processes, and "Pluggable" Databases inside this Database Container. This particular feature is a strong compelling event to evaluate Oracle Database 12c rapidly once it becomes available, as it is a major step forward in true database consolidation, with multitenancy on a shared (optimized) infrastructure.

    X3H2M2, enabling the new Exadata X3 in-Memory Database Here we are: X3H2M2 stands for X3 (the new version of Exadata, also announced today) Heuristic Hierarchical Mass Memory, providing the capability to keep most if not all of the data in the memory cache hierarchy. Of course, this is the major software enhancement of the new X3 Exadata machine, but as it is software, our current customers will be able to benefit from it on their existing systems by upgrading to the new release. But that's not the only thing we did with X3; at the same time we upgraded everything: the CPUs, adding more cores per server node (16 vs. 12, with the arrival of Intel E5 / Sandy Bridge); the memory, with 512GB per node; and the new Flash Fire card, bringing now up to 22 TB of Flash cache. All of this - 4TB of RAM plus 22TB of Flash - is used cleverly by the X3H2M2 algorithm, not only for reads but also for writes, making a very big difference compared to a traditional storage flash extension. And what do those extra performances bring you on an already very efficient system? Double the performance compared to the fastest storage array on the market today (including flash), while dividing your storage price by 10 at the same time... Something to consider closely these days... 
    Especially since we also announced the availability of a new Exadata X3-2 eighth rack: a good starting point. As you have seen, a major opening for this year again, with true innovation. But that was not all we saw today: before Larry's talk, Fujitsu introduced in more depth the upcoming new SPARC processor that they are co-developing with us, and Andrew Mendelsohn - Senior Vice President, Database Server Technologies - came on stage to explain that the next step after I/O optimization for the database with Exadata was to accelerate the database at the execution level, by bringing functions into the SPARC processor silicon. All in all, to process more and more data... The big theme of the day... and of the Oracle User Group conferences that were also happening today, where I had the opportunity to attend some interesting sessions on practical use cases of Big Data: one on finance and fraud profiling, and the other on practical deployment of Oracle Exalytics for data analytics. In conclusion, one picture to try to size Oracle OpenWorld... and you can understand why, with such rich content... and this is only the first day!

    Read the article

  • Adaptive Connections For ADFBC

    - by Duncan Mills
    Some time ago I wrote an article on Adaptive Bindings showing how the pageDef for an ADF UI does not have to be wedded to a fixed data control or collection / View Object. That article has proved pretty popular, so as a follow-up I wanted to cover another "adaptive" feature of your ADF applications: the ability to make multiple different connections from an Application Module, at runtime. Now, I'm sure you'll be aware that if you define your application to use a data source rather than a hard-coded JDBC connection string, then you have the ability to change the target of that data source after deployment to point to a different database. So that's great, but the reality is that this single connection is effectively fixed within the application, right?  Well no, it turns out this is a common misconception. To be clear, yes, a single instance of an ADF Application Module is associated with a single connection, but there is nothing to stop you from creating multiple instances of the same Application Module within the application, all pointing at different connections.  In fact this has been possible for a long time using a custom extension point, with code that extends oracle.jbo.http.HttpSessionCookieFactory. This approach, however, involves writing code, and no-one likes to write any more code than they need to, so is there an easier way? Yes indeed.  It is in fact a little-publicized feature that's available in all versions of 11g: the ELEnvInfoProvider. What Does it Do?  The ELEnvInfoProvider is a pre-existing class (the full path is oracle.jbo.client.ELEnvInfoProvider) which you can plug into your Application Module configuration using the jbo.envinfoprovider property. Visually you can set this in the editor, or you can also set it directly in the bc4j.xcfg (see below for an example). Once you have plugged in this envinfoprovider, here's the fun bit: rather than defining the hard-coded name of a datasource, you can plug in an EL expression for the connection to use.  So what's the benefit of that? Well, it allows you to defer the selection of a connection until the point in time that you instantiate the AM. To define the expression itself you'll need to do a couple of things: First of all you'll need a managed bean of some sort – e.g. a sessionScoped bean defined in your ViewController project. This will need a getter method that returns the name of the connection. Now this connection itself needs to be defined in your Application Server, and can be managed through Enterprise Manager, WLST or through MBeans. (You may need to read the documentation [http://docs.oracle.com/cd/E28280_01/web.1111/b31974/deployment_topics.htm#CHDJGBDD] here on how to configure connections at runtime if you're not familiar with this)   The EL expression (e.g. ${connectionManager.connection}) is then defined in the configuration by editing the bc4j.xcfg file (there is a hyperlink directly to this file on the configuration editing screen in the Application Module editor). You simply replace the hardcoded JDBCName value with the expression.  
    So your cfg file would end up looking something like this (notice the reference to the ELEnvInfoProvider that I talked about earlier):

        <BC4JConfig version="11.1" xmlns="http://xmlns.oracle.com/bc4j/configuration">
          <AppModuleConfigBag ApplicationName="oracle.demo.model.TargetAppModule">
            <AppModuleConfig DeployPlatform="LOCAL"
                             JDBCName="${connectionManager.connection}"
                             jbo.project="oracle.demo.model.Model"
                             name="TargetAppModuleLocal"
                             ApplicationName="oracle.demo.model.TargetAppModule">
              <AM-Pooling jbo.doconnectionpooling="true"/>
              <Database jbo.locking.mode="optimistic"/>
              <Security AppModuleJndiName="oracle.demo.model.TargetAppModule"/>
              <Custom jbo.envinfoprovider="oracle.jbo.client.ELEnvInfoProvider"/>
            </AppModuleConfig>
          </AppModuleConfigBag>
        </BC4JConfig>

    Still Don't Quite Get It? So far you might be thinking, well that's fine, but what difference does it make if the connection is resolved "just in time" rather than up front and changed as required through Enterprise Manager? Well, a trivial example would be where you have a single application deployed to your application server, but for different users you want to connect to different databases. Because the evaluation of the connection is deferred until you first reference the AM, you have a decision point that can take the user identity into account. However, think about it for a second.  Under what circumstances does a new AM get instantiated? Well, at the first reference of the AM within the application, yes, but also whenever a Task Flow is entered - if the data control scope for the Task Flow is ISOLATED.  So the reality is that on a single screen you can embed multiple Task Flows, all of which are pointing at different database connections concurrently. Hopefully you'll find this feature useful; let me know... 

    Read the article

  • SPARC T4 Overview (Japanese-language post)

    - by user13138700
    [The Japanese text of this post was garbled in transcription; the technical details that survive are summarized below.]

    - The SPARC T4 CPU was announced in September 2011, with SPARC T4 servers following in October 2011.
    - Oracle OpenWorld Tokyo 2012 ran over three days (4/4, 4/5, 4/6) and featured SPARC T4 and Oracle Solaris sessions. Program: http://www.oracle.com/openworld/jp-ja/index.html (session search code 7264); exhibits: http://www.oracle.com/openworld/jp-ja/exhibit/index.html
    - The T4 replaces the S2 core used across the SPARC T1/T2/T3 line with the new S3 core. T4 highlights from the original bullet list: the S3 core brings x5 and x2 improvements (the qualifiers did not survive), crypto (encryption) acceleration, and support for 1-, 2- and 4-socket configurations.
    - T4 chip: 8 SPARC S3 cores (64 threads per chip), 4 MB shared L3 cache, an 8x9 crossbar, 4 DDR3 memory controllers @ 6.4 Gbps, 6 coherence links @ 9.6 Gbps, 2x8 PCIe 2.0 (5 GT/s), and 2x 10Gb XAUI Ethernet.
    - S3 core units: ALU (Arithmetic Logic Unit), BRU (Branch Logic Unit), FGU (Floating-point Graphics Unit), IRF (Integer Register File), FRF (Floating-point Register File), WRF (Working Register File), MMU (Memory Management Unit), LSU (Load Store Unit), Crypto/SPU (Streaming Processing Unit), TRU (Trap Logic Unit).
    - S3 core pipeline: 8 threads per core with out-of-order execution (the T3 pipeline picked and committed in order), a 64-entry ITLB and 128-entry DTLB, 64 KB 4-way L1 caches, and a 128 KB 8-way L2 cache.
    - The post closed with a T4 vs. T1/T2/T3 comparison of web and DB workload throughput (details not recoverable).

    Read the article

  • SQL Server and Hyper-V Dynamic Memory - Part 1

    - by SQLOS Team
    SQL and Dynamic Memory Blog Post Series   Hyper-V Dynamic Memory is a new feature in Windows Server 2008 R2 SP1 that allows the memory assigned to guest virtual machines to vary according to demand. Using this feature with SQL Server is supported, but how well does it work in an environment where available memory can vary dynamically, especially since SQL Server likes memory, and is not very eager to let go of it? The next three posts will look at this question in detail. In Part 1 Serdar Sutay, a program manager in the Windows Hyper-V team, introduces Dynamic Memory with an overview of the basic architecture, configuration and monitoring concepts. In subsequent parts we will look at SQL Server memory handling, and develop some guidelines on using SQL Server with Dynamic Memory.   Part 1: Dynamic Memory Introduction   In virtualized environments memory is often the bottleneck for reaching higher VM densities. In Windows Server 2008 R2 SP1 Hyper-V introduced a new feature “Dynamic Memory” to improve VM densities on Hyper-V hosts. Dynamic Memory increases the memory utilization in virtualized environments by enabling VM memory to be changed dynamically when the VM is running.   This brings up the question of how to utilize this feature with SQL Server VMs as SQL Server performance is very sensitive to the memory being used. In the next three posts we’ll discuss the internals of Dynamic Memory, SQL Server Memory Management and how to use Dynamic Memory with SQL Server VMs.   Memory Utilization Efficiency in Virtualized Environments   The primary reason memory is usually the bottleneck for higher VM densities is that users tend to be generous when assigning memory to their VMs. Here are some memory sizing practices we’ve heard from customers:   ·         I assign 4 GB of memory to my VMs. I don’t know if all of it is being used by the applications but no one complains. ·         I take the minimum system requirements and add 50% more. ·         I go with the recommendations provided by my software vendor.   In reality correctly sizing a virtual machine requires significant effort to monitor the memory usage of the applications. Since this is not done in most environments, VMs are usually over-provisioned in terms of memory. In other words, a SQL Server VM that is assigned 4 GB of memory may not need to use 4 GB.   How does Dynamic Memory help?   Dynamic Memory improves the memory utilization by removing the requirement to determine the memory need for an application. Hyper-V determines the memory needed by applications in the VM by evaluating the memory usage information in the guest with Dynamic Memory. VMs can start with a small amount of memory and they can be assigned more memory dynamically based on the workload of applications running inside.   Overview of Dynamic Memory Concepts   ·         Startup Memory: Startup Memory is the starting amount of memory when Dynamic Memory is enabled for a VM. Dynamic Memory will make sure that this amount of memory is always assigned to the VMs by default.   ·         Maximum Memory: Maximum Memory specifies the maximum amount of memory that a VM can grow to with Dynamic Memory. ·         Memory Demand: Memory Demand is the amount determined by Dynamic Memory as the memory needed by the applications in the VM. In Windows Server 2008 R2 SP1, this is equal to the total amount of committed memory of the VM. 
·         Memory Buffer: Memory Buffer is the amount of memory assigned to the VMs in addition to their memory demand to satisfy immediate memory requirements and file cache needs.   Once Dynamic Memory is enabled for a VM, it will start with the “Startup Memory”. After the boot process Dynamic Memory will determine the “Memory Demand” of the VM. Based on this memory demand it will determine the amount of “Memory Buffer” that needs to be assigned to the VM. Dynamic Memory will assign the total of “Memory Demand” and “Memory Buffer” to the VM as long as this value is less than “Maximum Memory” and as long as physical memory is available on the host.   What happens when there is not enough physical memory available on the host?   Once there is not enough physical memory on the host to satisfy VM needs, Dynamic Memory will assign less than needed amount of memory to the VMs based on their importance. A concept known as “Memory Weight” is used to determine how much VMs should be penalized based on their needed amount of memory. “Memory Weight” is a configuration setting on the VM. It can be configured to be higher for the VMs with high performance requirements. Under high memory pressure on the host, the “Memory Weight” of the VMs are evaluated in a relative manner and the VMs with lower relative “Memory Weight” will be penalized more than the ones with higher “Memory Weight”.   Dynamic Memory Configuration   Based on these concepts “Startup Memory”, “Maximum Memory”, “Memory Buffer” and “Memory Weight” can be configured as shown below in Windows Server 2008 R2 SP1 Hyper-V Manager. Memory Demand is automatically calculated by Dynamic Memory once VMs start running.     Dynamic Memory Monitoring    In Windows Server 2008 R2 SP1, Hyper-V Manager displays the memory status of VMs in the following three columns:         ·         Assigned Memory represents the current physical memory assigned to the VM. In regular conditions this will be equal to the sum of “Memory Demand” and “Memory Buffer” assigned to the VM. When there is not enough memory on the host, this value can go below the Memory Demand determined for the VM. ·         Memory Demand displays the current “Memory Demand” determined for the VM. ·         Memory Status displays the current memory status of the VM. This column can represent three values for a VM: o   OK: In this condition the VM is assigned the total of Memory Demand and Memory Buffer it needs. o   Low: In this condition the VM is assigned all the Memory Demand and a certain percentage of the Memory Buffer it needs. o   Warning: In this condition the VM is assigned a lower memory than its Memory Demand. When VMs are running in this condition, it’s likely that they will exhibit performance problems due to internal paging happening in the VM.    So far so good! But how does it work with SQL Server?   SQL Server is aggressive in terms of memory usage for good reasons. This raises the question: How do SQL Server and Dynamic Memory work together? To understand the full story, we’ll first need to understand how SQL Server Memory Management works. This will be covered in our second post in “SQL and Dynamic Memory” series. 
    Meanwhile, if you want to dive deeper into Dynamic Memory, you can check these posts from the Windows Virtualization Team Blog:

    http://blogs.technet.com/virtualization/archive/2010/03/18/dynamic-memory-coming-to-hyper-v.aspx
    http://blogs.technet.com/virtualization/archive/2010/03/25/dynamic-memory-coming-to-hyper-v-part-2.aspx
    http://blogs.technet.com/virtualization/archive/2010/04/07/dynamic-memory-coming-to-hyper-v-part-3.aspx
    http://blogs.technet.com/b/virtualization/archive/2010/04/21/dynamic-memory-coming-to-hyper-v-part-4.aspx
    http://blogs.technet.com/b/virtualization/archive/2010/05/20/dynamic-memory-coming-to-hyper-v-part-5.aspx
    http://blogs.technet.com/b/virtualization/archive/2010/07/12/dynamic-memory-coming-to-hyper-v-part-6.aspx

    - Serdar Sutay

    Originally posted at http://blogs.msdn.com/b/sqlosteam/

    Read the article

  • Call for Customer Examples and Stories--PeopleTools 8.50

    - by PeopleTools Strategy Team
    PeopleTools 8.50 was a big release for us, and one that we think will provide a lot of value for customers. We've been having some interesting conversations with customers about this release at conferences, advisory board meetings, and technical group meetings. However, we would like to solicit some examples and success stories from you, our broad customer base. Do you have examples of how you are using PeopleTools 8.50 and Enterprise Portal 9.1 that you would be willing to share? We would like to see some screen shots and perhaps a short blurb describing how you are using the Tools and Portal features, as well as any benefits accrued. Do you have a compelling success story? We are particularly interested in hearing about quantifiable improvements in user productivity, performance, cost savings, etc. You should be aware that these screen shots and stories will be public, and could appear in a conference presentation at some point. You will not be asked to serve as a formal reference, however. If you have stories and examples you'd be willing to share with us, please send them to this email address for the PeopleTools team: [email protected]

    Read the article

  • New SQLOS features in SQL Server 2012

    - by SQLOS Team
    Here's a quick summary of SQLOS feature enhancements going into SQL Server 2012. Most of these are already in the CTP3 pre-release, except for the Resource Governor enhancements, which will be in the release candidate. We've blogged about a couple of these items before, and I plan to add detail. Let me know which ones you'd like to see more on:

    - Memory Manager redesign: predictable sizing and governing of SQL memory consumption:
      - sp_configure 'max server memory' now limits all memory committed by SQL Server
      - Resource Governor governs all SQL memory consumption (other than special cases like the buffer pool)
      - Improved scalability of complex queries and operations that make >8K allocations
      - Improved CPU and NUMA locality for memory accesses
      - A single memory manager that handles page allocations of all sizes
      - Consistent out-of-memory handling and management across different internal components
    - Optimized memory broker for columnstore indexes (Project Apollo)
    - Resource Governor (see the sketch below):
      - Support larger-scale multi-tenancy by increasing the maximum number of resource pools from 20 to 64 (for 64-bit)
      - Enable predictable chargeback and isolation by adding a hard cap on CPU usage
      - Enable vertical isolation of machine resources: resource pools can be affinitized to individual schedulers, groups of schedulers, or NUMA nodes
      - A new DMV for resource pool affinity
    - CLR 4 support, adding the advantages of .NET Framework 4
    - sp_server_diagnostics:
      - Captures diagnostic data and health information about SQL Server to detect potential failures
      - Analyzes internal system state
      - Reliable when nothing else is working
    - New SQLOS DMVs (in 2008 R2 SP1), also queried in the sketch below:
      - SQL Server related configuration: the new DMV sys.dm_server_services
      - OS-related resource configuration: the new DMVs sys.dm_os_volume_stats, sys.dm_os_windows_info and sys.dm_server_registry
      - XEvents for SQL and OS related Perfmon counters
      - An extended sys.dm_os_sys_info
      - See previous blog posts here and here.
    - Scale / mission critical:
      - Increased scalability: support for Windows 8 maximum memory and logical processors
      - Dynamic Memory support in Standard Edition: Hot-Add Memory is enabled when virtualized
      - Various Tier 1 performance improvements, including reduced instructions for superlatches.

    Originally posted at http://blogs.msdn.com/b/sqlosteam/
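    To make two of these items concrete, here is a hedged T-SQL sketch. The first statement uses the new SQL Server 2012 CAP_CPU_PERCENT and AFFINITY options on a resource pool (the pool name "ReportingPool" is made up for the example); the queries that follow use DMVs that shipped in 2008 R2 SP1:

    -- Hard CPU cap plus scheduler affinity on a resource pool
    -- (new CREATE RESOURCE POOL options in SQL Server 2012;
    -- "ReportingPool" is a hypothetical name for this example)
    CREATE RESOURCE POOL ReportingPool
    WITH (CAP_CPU_PERCENT = 30,
          AFFINITY SCHEDULER = (0 TO 3));
    ALTER RESOURCE GOVERNOR RECONFIGURE;

    -- Volume-level free space for every database file, via the new
    -- sys.dm_os_volume_stats(database_id, file_id) function
    SELECT DB_NAME(f.database_id)       AS database_name,
           f.name                       AS logical_file_name,
           vs.volume_mount_point,
           vs.available_bytes / 1048576 AS available_mb,
           vs.total_bytes / 1048576     AS total_mb
    FROM sys.master_files AS f
    CROSS APPLY sys.dm_os_volume_stats(f.database_id, f.file_id) AS vs;

    -- Service account and startup details for the SQL Server services
    SELECT servicename, startup_type_desc, status_desc, service_account
    FROM sys.dm_server_services;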

    Read the article
