Search Results

Search found 19017 results on 761 pages for 'purchase order'.


  • SQL SERVER – Data Pages in Buffer Pool – Data Stored in Memory Cache

    - by pinaldave
    Have you ever wondered what types of data are there in your cache? During SQL Server trainings, I am usually asked if there is any way one can know how much data from a table is stored in the memory cache. The more detailed question I usually get is: if there are multiple indexes on a table (and they are used in a query), is the data of that single table stored multiple times in the memory cache, or only once? Here is a query you can run to figure out what kind of data is stored in the cache.

        USE AdventureWorks
        GO
        SELECT COUNT(*) AS cached_pages_count,
               name AS BaseTableName, IndexName, IndexTypeDesc
        FROM sys.dm_os_buffer_descriptors AS bd
        INNER JOIN (
            SELECT s_obj.name, s_obj.index_id, s_obj.allocation_unit_id,
                   s_obj.OBJECT_ID, i.name IndexName, i.type_desc IndexTypeDesc
            FROM (
                SELECT OBJECT_NAME(OBJECT_ID) AS name, index_id, allocation_unit_id, OBJECT_ID
                FROM sys.allocation_units AS au
                INNER JOIN sys.partitions AS p
                    ON au.container_id = p.hobt_id AND (au.type = 1 OR au.type = 3)
                UNION ALL
                SELECT OBJECT_NAME(OBJECT_ID) AS name, index_id, allocation_unit_id, OBJECT_ID
                FROM sys.allocation_units AS au
                INNER JOIN sys.partitions AS p
                    ON au.container_id = p.partition_id AND au.type = 2
            ) AS s_obj
            LEFT JOIN sys.indexes i
                ON i.index_id = s_obj.index_id AND i.OBJECT_ID = s_obj.OBJECT_ID
        ) AS obj
            ON bd.allocation_unit_id = obj.allocation_unit_id
        WHERE database_id = DB_ID()
        GROUP BY name, index_id, IndexName, IndexTypeDesc
        ORDER BY cached_pages_count DESC;
        GO

    Now let us run the query above and observe its output. There are four columns: Cached_Pages_Count lists the pages cached in memory; BaseTableName lists the base table from which the data pages are cached; IndexName lists the name of the index from which pages are cached; IndexTypeDesc lists the type of index. Now, let us do one more experiment. Please note that you should not run this test on a production server, as it can severely reduce the performance of the database.

        DBCC DROPCLEANBUFFERS

    This will drop all the clean buffers so we will be able to start again from a cold cache. Now run the following script and check its execution plan.

        USE AdventureWorks
        GO
        SELECT UnitPrice, ModifiedDate
        FROM Sales.SalesOrderDetail
        WHERE SalesOrderDetailID BETWEEN 1 AND 100
        GO

    The execution plan shows the usage of two different indexes. Now, let us run the script that checks the pages cached in SQL Server. It gives us the following output. It is clear from the result set that when more than one index is used, the data pages related to each of those indexes are stored in the memory cache separately. Let me know what you think of this article. It was a great pleasure to write on this subject, which I like the most. In the next article, we will see exactly which data are cached and which are not, using a few undocumented commands. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: DMV, Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL DMV
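    As a footnote to the technique above (my own sketch, not part of the original article): since every SQL Server data page is 8 KB, 128 cached pages equal 1 MB, so the page counts returned by the query can be converted into a rough memory footprint.

        -- Sketch: approximate buffer-pool usage of the current database in MB
        -- (8 KB per page, so 128 pages = 1 MB)
        SELECT COUNT(*) AS cached_pages_count,
               COUNT(*) / 128.0 AS cached_mb
        FROM sys.dm_os_buffer_descriptors
        WHERE database_id = DB_ID();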

    Read the article

  • Best WordPress Shopping Cart & Ecommerce Plugins

    - by Edward
    A versatile WordPress shopping cart plugin can help you create a feature-rich online store on your WordPress-powered website or blog. Some are so advanced that you can get your store up and running in minutes; others let you take ecommerce to the next level with their high-end customization tools. Here is a list of the best WP shopping cart plugins available:

    Cart66 - One of the best WordPress plugins, with lots of features, great quality and ease of use. It accepts several payment gateways, such as PayPal Website Payments Standard, PayPal Website Payments Professional, PayPal Express Checkout, eProcessing Network, etc. It has flexible design options; recurring payments for subscriptions, memberships, and payment plans; and easy PCI compliance (safe and secure). It is fast and efficient, you can sell digital and physical products, and support is good. Price: Standard $49 & Professional $99. Details | Download

    StorePress - StorePress is a fully coded WordPress theme. It comes with scripts that can change a WordPress blog into a veritable e-commerce virtual store. With this premium WordPress theme, you can start affiliate stores or promote affiliate products. Price: Single $59.99 & Developer License $119.99. Details | Download

    WordPress eStore Plugin - This shopping cart plugin comes with easy checkout, ease of design and use, automatic instant digital product delivery, NextGen gallery integration, autoresponder integration, etc. It is a lightweight shopping cart and allows a multi-site license. This plugin offers an amazingly comprehensive toolkit that will ensure your online shop is almost plug-and-play. Price: $49.99. Details | Download

    Shoppers Press - Shoppers Press is a premium cart for WordPress that comes with 20+ themes to choose from and 20+ built-in payment gateways. It features one-click setups, personalized user accounts, easy management tools, detailed sales tracking, promotional options, a variety of product import tools, and many more features. Price: $79. Details | Download

    WordPress Shopping Cart plugin - The WordPress Shopping Cart plugin by Tribulant quickly and seamlessly integrates an online shop, with a fully functional shopping cart interface, into any WordPress website. It has an easy-to-use interface that lets you set up multiple products and organize them into multiple product categories. It also has many more attractive features. Price: $49.99. Details | Download

    WP e-Commerce - WP e-Commerce is a free, full-featured shopping cart plugin for WordPress that boasts easy checkout. It offers a wide range of features, including SSL compatibility, customization and merchandising, integrated payment processing solutions (manual payment, Google Checkout and PayPal Payments), and email marketing. It integrates with WordPress and social networking, and is customizable through PHP template tags, WordPress shortcodes and widgets. Details | Download

    YAK for WordPress - YAK is an open source shopping cart plugin for WordPress. It associates products with weblog entries (in other words, posts), so the post ID also becomes the product code. It supports both pages and posts as products and handles different types of product through categories. YAK supports downloadable products, so any e-books, plugins, or zip files you're marketing can be easily purchased and downloaded. Details | Download

    MarketPress - Another shopping cart full of features: assign categories and tags to products to make them easy to find, stock tracking with alerts, order management/alerts, fully customizable email messages, full support for most major currencies, fully customizable store URLs/slugs, checkout without being a site user, etc. Expensive, but a good option for those who can afford it. Price: $17.42/month. Details | Download

    Shopp - An excellent shopping cart plugin for WordPress. This plugin is extremely easy to install and use, and it has a clean interface. The customer support is good, and you can easily customize the look of the cart using its amazing features. Price: $55. Details | Download

    Related posts: 8 PHP Shopping Cart Software for Reliable Ecommerce Solution, Shopping Cart SEO, 8 Free Open Source Shopping Carts

    Read the article

  • What makes them click?

    - by Piet
    The other day (well, actually some weeks ago, while relaxing at the beach in Kos) I read 'Neuro Web Design - What Makes Them Click?' by Susan Weinschenk (http://neurowebbook.com). The book is a fast and easy read (no unnecessary filler) and a good introduction to how your site's visitors can be steered in the direction you want them to go.

    The Obvious

    The book handles some of the better-known, proven techniques - for example, that ratings and testimonials from other people can help sell your product or service. Another well-known technique it covers is inducing a sense of scarcity or urgency in the visitor: Only 2 seats left! Buy now and get 33% off! Being well known doesn't mean these techniques stop working. Luckily, two thirds of the book handles less obvious techniques, otherwise it wouldn't be worth buying.

    The Not So Obvious

    A less-known influencing technique is reciprocity. By that I don't mean swapping links with another website, but the fact that someone is more likely to do something for you after you did something for them first. The book cites some studies (I always love the facts and figures) and gives some actual examples of how to implement this in your site's design, which is less obvious when you think about it. Want to know more? Buy the book!

    Other interesting sources

    For a more general introduction to the same principles, I'd suggest 'Yes! 50 Secrets from the Science of Persuasion'. 'Yes!...' cites some of the same studies (it seems there's a rather limited pool of studies covering this subject), but of course doesn't show how to implement these techniques in your site's design. I read 'Yes!...' last year, which made 'Neuro Web Design' just a little bit less interesting. Always make sure you're able to measure your changes. If you haven't yet, check out the advanced segmentation in Google Analytics (don't be afraid because it says 'beta', it works just fine) and Google Website Optimizer.

    Worth Buying?

    Can I recommend it? Sure, why not. I think it can be useful for anyone who has ever had to think about the design or content of a site. You don't have to be a marketing guy to want a site you're involved with to be successful. The content-to-filler ratio is excellent too: you don't need to wade through dozens of pages to filter out the interesting bits (unlike 'The Design of Sites', which contains too much useless info and, because it's in dead-tree format, can't be googled). If you like it, you might also check out 'Yes! 50 Secrets from the Science of Persuasion'. Tip for people living in Europe: check Amazon UK for your book-buying needs. Because of the low UK pound exchange rate, it's usually considerably cheaper and faster to get a book delivered to your doorstep by Amazon UK compared to ordering it at the local book store or web shop.

    Read the article

  • The “Customer” Experience Revolution is Here

    - by Natalia Rachelson
    A guest post by Anthony Lye, SVP, Oracle Development

    The Experience Revolution is here, and we are going to explore and celebrate our new customer experience ventures and strategy in an extraordinary way. In true Oracle fashion, we are hosting an exceptional event, bringing together customer experience advocates, visionaries and practitioners to discover and define Oracle's Customer Experience vision. The Experience Revolution is best described as today's era of the empowered consumer. For those of us who work with customers on a daily basis, we know that the modern consumer demands fast, accurate, consistent information across all communication channels - and if they don't like the service they receive, they can easily take to social channels to voice their disapproval. For this reason, organizations today operate in an environment where traditional methods of differentiation are less effective and customer experience has become the primary driver of business value. Here's some food for thought: according to the 2011 Customer Experience Impact (CEI) Report, a full 89 percent of consumers will switch brands for a better customer experience. In short, in today's era of the empowered consumer, delivering excellent customer experiences is what is defining, and will define, the next great brands. At The Experience Revolution, Oracle President Mark Hurd will detail the vision of where customer experience is going and how Oracle will help you get there. He will introduce for the first time Oracle Customer Experience, a cross-stack suite of customer experience products that enable organizations to:

    • Engage customers with a consistent, connected and personalized brand experience across all channels and devices
    • Deliver exceptional cross-channel order fulfillment and customer service through web, call centers and social networks
    • Connect and analyze data from all interactions to better personalize experiences and identify hidden opportunities

    The Experience Revolution will also include an interactive gallery of customer experience interactions, featuring videos, touch screens and near field communication technology that will guide each attendee through an individualized event experience. We hope you will join us for an incredible evening on June 25, from 6:00 - 9:00 p.m. at Gotham Hall in New York City. You can register for The Experience Revolution here. And if you haven't already joined the conversation on Twitter, please do: #OracleCX, #ExperienceRevolution

    Read the article

  • Silverlight Firestarter thoughts, and thanks to one and all!

    - by Dave Campbell
    A few metrics that of course got out of hand, but some may find interesting:

    1/2 - My share of the MVP of the Year award in February of 2009, with Laurent Bugnion
    2 - Number of degrees I hold: B.S., M.S. Electrical Engineering
    3 - Number of years in the U.S. Army
    3.5 - Number of years SilverlightCream has been posted
    4 - Number of times awarded MVP
    6 - Number of professional positions I've worked: Antenna Rigger, Boilermaker, Musician, Electronic Technician, Hardware Engineer, Software Engineer
    16 - Number of companies I've worked for during my career as an Engineer
    19 - Age at which I typed my first line of code
    28 - Age at which I hit the workforce as an Engineer
    33 - Number of years working as an Engineer
    43 - Number of years writing code
    62 - Number of years since instantiation
    116 - Number of tags to search SilverlightCream with
    645 - Number of blogs I view to find articles (at this moment)
    664 - Number of articles tagged wp7dev at SilverlightCream right now
    700 - Number of Twitter followers for WynApse
    981 - Number of individual bloggers in the SilverlightCream database
    1002 - Number of SilverlightCream blog posts
    1100 - Number of people live in Redmond for the Firestarter (I think)
    1428 - Number of total blog posts at GeeksWithBlogs (not counting this one)
    4200 - Number of Feedburner subscribers (approximately)
    6500 - Number of Twitter followers for SilverlightNews (approximately)
    7087 - Number of posts tagged and aggregated at SilverlightCream right now
    13000 - Number of people registered to watch the Firestarter online (I think)

    The overwhelming feeling I have returning from the Silverlight Firestarter: priceless.

    There is absolutely no way that I could personally thank everyone who, over the last few years, has held their hand out and offered me a step up to get to the point that Scott Guthrie called me out in his keynote. So I'm just going to hit the highlights here...

    Scott Guthrie - Thanks for not only being at the level you are at Microsoft, but for being so approachable, easy to talk to, willing to help everyone, and above all knowledgeable. My first-level manager at my last position asked if Visual Studio was a graphics program... and you step up to a laptop at a conference and type "File->New Program"... 'nuff said... oh yeah, thanks for the shoutout!

    John Papa - Thanks for being a good friend, ramrodding the Firestarter, being a great guy to be around, and for the poster... holy crap is that cool.

    Tim Heuer - Thanks for all you did as a great DE in Phoenix, for helping out so many of us, for being a great guy, and for the poster as well... I think you and John shared that task.

    In no order at all: my buddy Michael Washington, Laurent Bugnion (the other half of the first Silverlight MVP of the Year), Tim Sneath, Mike Harsh, Chad Campbell and Bryant Likes (from back in the day), Adam Kinney, Jesse Liberty, Jeff Paries, Pete Brown, András Velvárt, David Kelly, Michael Palermo, Scott Cate, Erik Mork, and on and on... don't feel bad if your name didn't appear, I simply have too many supporters to name.

    Silverlight Firestarter - Indeed. All the people mentioned here, and all the MVPs, knew Silverlight was NOT dead, but because of a very unfortunate circumstance, the popular media opinion became that it was. Consequently the Firestarter exploded from a laid-back event into a global conference. People worked their tails off getting bits ready, and presentations using those bits, all to stem the flow of misinformation. All involved, please accept my personal thanks for an absolutely awesome job. I had the privilege of watching the 'prep' on Wednesday afternoon, and was blown away the first time I saw the 3D demo... and have been blown away every time I've seen it since. Not to mention all the other goodness in Silverlight 5. Yes, I hit 1000 on my blog, but more importantly, all of you are blogging and using Silverlight, and Microsoft hit one completely out of the park... no... they knocked it out of the neighborhood with the Firestarter. It was amazing to be there for it, and it will be awesome to use the new bits as we get them. Keep reading, there's tons more to come with Silverlight, and SilverlightCream following along behind. As usual, this old hacker is humbled to be allowed to play with all the cool kids... Thanks one and all for everything, and Stay in the 'Light

    Read the article

  • Take, Skip and Reverse Operators in LINQ

    - by Jalpesh P. Vadgama
    I have found three more operators in LINQ which are useful in day-to-day programming: Take, Skip and Reverse. Here is an explanation of how each operator works.

    Take operator: returns the first N elements of the sequence.
    Skip operator: skips the first N elements of the sequence and returns the remaining elements as the result.
    Reverse operator: as the name suggests, reverses the order of the elements in the sequence.

    Here is an example of the operators, using a simple string array to demonstrate them.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        namespace ConsoleApplication1
        {
            class Program
            {
                static void Main(string[] args)
                {
                    string[] a = { "a", "b", "c", "d" };

                    Console.WriteLine("Take Example");
                    var TkResult = a.Take(2);
                    foreach (string s in TkResult)
                    {
                        Console.WriteLine(s);
                    }

                    Console.WriteLine("Skip Example");
                    var SkResult = a.Skip(2);
                    foreach (string s in SkResult)
                    {
                        Console.WriteLine(s);
                    }

                    Console.WriteLine("Reverse Example");
                    var RvResult = a.Reverse();
                    foreach (string s in RvResult)
                    {
                        Console.WriteLine(s);
                    }
                }
            }
        }

    The output is as expected. Hope this helps you. Technorati Tags: Linq,Linq-To-Sql,ASP.NET,C#.NET
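    A common way to combine Skip and Take is paging through a sequence. Here is a small sketch of my own, not from the post, reusing the same array (the page size and index are illustrative):

        // Paging sketch: fetch the second page (zero-based index 1) of size 2
        string[] a = { "a", "b", "c", "d" };
        int pageSize = 2, pageIndex = 1;
        var page = a.Skip(pageIndex * pageSize).Take(pageSize);
        foreach (string s in page)
        {
            Console.WriteLine(s); // prints "c" then "d"
        }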

    Read the article

  • Adding Blog to Your Orchard Website

    - by hajan
    One of the common features in today's content management systems is the ability to create your own blog on your website, and having a blog is one of the most frequently needed features for various types of websites. Out of the box, Orchard gives you this, so you can create your own blog on your Orchard website in a pretty easy way. Besides that, Orchard also gives you some extra blogging-related features, such as connecting third-party client applications (e.g. Windows Live Writer) to your blog so that you can publish blog posts remotely. You can already find all the information in this blog post on the http://orchardproject.net website, but I thought it would be nice to summarize it in one blog post. I assume you have already installed Orchard and are familiar with its environment and administration dashboard. If you haven't, please read this blog post first.

    CREATE YOUR BLOG

    First of all, go to the Orchard Administration Dashboard and click on Blog in the left menu. Once you are there, you will see the following screen. Fill the form with all the needed data, as in the following example, and click Save. Right after, you should see the following screen. Click New Post and add your first post. After that, go to the homepage (click Your Site in the top-left corner) and you should see the Blog link in your menu. After clicking on Blog, you will be directed to the following page. Once you click on My First Post, you will see that your blog already supports commenting - adding a comment, submitting it, and seeing it appear (you can enable/disable this from the Administration Dashboard in your blog settings). So, by following these steps, you have already set up your blog on your Orchard website.

    CONNECT YOUR BLOG WITH WINDOWS LIVE WRITER

    Since many bloggers prepare their blog posts using third-party client applications like Windows Live Writer, it is very useful if your blog engine can work with these third-party applications and let them post and publish remotely. The client applications use the XmlRpc interface in order to manage and publish blogs remotely. What is great about Orchard is that it gives you the XmlRpc and Remote Publishing modules out of the box. All you need to do is enable these features from the Modules section in your Orchard Administration Dashboard. So, let's go through the steps of enabling your previously created blog to work with third-party blogging clients.

    1. Go to the Administration Dashboard and click Modules. After clicking Modules, you will see the following page. As you can see, you already have the Remote Blog Publishing and XmlRpc features for content publishing, but both are disabled by default. If you click Enable only on Remote Blog Publishing, you will see both of them enabled at once, since they are dependent features. After you click Enable, if everything is OK, the following message should be displayed. So, now we have the features enabled and ready...

    2. Next, open Windows Live Writer. In your Blog Accounts, click on Add blog account. In the next window, choose Other services. After that, click on the Blog link in your Orchard website and copy the URL; my URL (on the localhost development server) is http://localhost:8191/blog. Then add the login credentials you use to log in to Orchard and click Next. After that, if you have set up everything successfully, Windows Live Writer will do the rest. Once it finishes, you will get a window where you can specify the name of the blog you have just connected Windows Live Writer to... Then you are done. You can see Windows Live Writer has detected the Orchard theme I am using. After you finish the blog post, click Publish and refresh the Blog page in your Orchard website. You see, we have the blog post published directly from Windows Live Writer to my Orchard blog.

    I hope this was a useful blog post. Regards, Hajan

    Reference and other useful posts: Build incredible content-driven websites using Orchard CMS | Create blog on your site with Orchard CMS | Blogging using Windows Live Writer in your Orchard CMS Blog | Orchard Website

    Read the article

  • Fusion Concepts: Fusion Database Schemas

    - by Vik Kumar
    You often read about the FUSION and FUSION_RUNTIME users while dealing with Fusion Applications. There is one more, called FUSION_DYNAMIC. Here are some details on the differences between these three and the purpose of each type of schema.

    FUSION: Can be considered the administrator of Fusion Applications, with all the corresponding rights and powers, such as owning tables and objects and providing grants to FUSION_RUNTIME. It is used for patching and has grants to many internal DBMS functions.

    FUSION_RUNTIME: Used to run the applications. Contains no DB objects.

    FUSION_DYNAMIC: This schema owns the objects that are created dynamically through ADM_DDL. ADM_DDL is a package that acts as a wrapper around DDL statements and supports operations like truncating a table, creating an index, etc.

    As the above statements indicate, FUSION owns the tables and objects, including the FND tables, so using FUSION to run the applications would be insecure. It would be possible to modify security policies and other key information in the base tables (like FND) to break Fusion Applications security via SQL injection, etc. Another possibility would be to write a logon DB trigger and steal credentials. Thus, to make Fusion Applications secure, FUSION_RUNTIME is granted privileges to execute DML only on APPS tables. Another benefit of having separate users is achieving Separation of Duties (SoD) at the schema level, which is required by auditors.

    Below are the roles and privileges assigned to the FUSION, FUSION_RUNTIME and FUSION_DYNAMIC schemas:

    FUSION has the following privileges:
    - CREATE SESSION
    - All types of DDL on objects owned by FUSION; additionally, some specific privileges on other schemas are also granted to FUSION
    - EXECUTE ON various EDN_PUBLISH_EVENT
    It has the following roles:
    - CTXAPP, for managing Oracle Text objects
    - AQ_USER_ROLE and AQ_ADMINISTRATOR_ROLE, for managing Advanced Queues (AQ)

    FUSION_RUNTIME has the following privileges:
    - CREATE SESSION
    - CHANGE NOTIFICATION
    - EXECUTE ON various EDN_PUBLISH_EVENT
    It has the following roles:
    - FUSION_APPS_READ_WRITE, for performing DML (SELECT, INSERT, DELETE) on Fusion Apps tables
    - FUSION_APPS_EXECUTE, for executing objects such as procedures, functions, packages, etc.
    - AQ_USER_ROLE and AQ_ADMINISTRATOR_ROLE, for managing Advanced Queues (AQ)

    FUSION_DYNAMIC has the following privileges:
    - CREATE SESSION, PROCEDURE, TABLE, SEQUENCE, SYNONYM, VIEW
    - UNLIMITED TABLESPACE
    - ANALYZE ANY
    - CREATE MINING MODEL
    - EXECUTE on specific procedures, functions or packages, and SELECT on specific tables; this depends on the objects identified by product teams that ADM_DDL needs to access in order to perform dynamic DDL statements

    There is one more role, FUSION_APPS_READ_ONLY, which is not attached to any user and has only the SELECT privilege on all the Fusion objects. FUSION_RUNTIME does not have any synonyms defined to access objects owned by the FUSION schema; instead, a logon trigger defined on FUSION_RUNTIME sets the current schema to FUSION and eliminates the need for synonyms (a generic sketch of this pattern appears just below).

    What it means for developers: Fusion Applications developers should be using FUSION_RUNTIME for testing and running the Fusion Applications UI and BC, and for connecting through any SQL front end like SQL*Plus, SQL*Loader, etc.
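    To make the logon-trigger mechanism mentioned above concrete, here is an illustrative sketch of the general Oracle pattern (my own, not the actual Fusion trigger; it assumes a DBA account with the privileges to create triggers on another schema):

        -- Illustrative only: switch the current schema whenever FUSION_RUNTIME
        -- logs on, so no synonyms for FUSION-owned objects are needed
        CREATE OR REPLACE TRIGGER fusion_runtime_logon
        AFTER LOGON ON FUSION_RUNTIME.SCHEMA
        BEGIN
          EXECUTE IMMEDIATE 'ALTER SESSION SET CURRENT_SCHEMA = FUSION';
        END;
        /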
    When testing ADF BC using the AM tester while connected as FUSION_RUNTIME, you may hit the following error:

        oracle.jbo.JboException: JBO-29000: Unexpected exception caught: java.sql.SQLException, msg=invalid name pattern: FUSION.FND_TABLE_OF_VARCHAR2_255

    The fix is to add the JVM parameter below to the Run/Debug client property in the Model project properties:

        -Doracle.jdbc.createDescriptorUseCurrentSchemaForSchemaName=true

    More details are discussed in this forum thread.

    Read the article

  • SQLAuthority News – Public Training Classes In Hyderabad 12-14 May – Microsoft SQL Server 2005/2008

    - by pinaldave
    After successfully delivering many corporate trainings as well as private trainings, Solid Quality Mentors India is launching public training in Hyderabad for SQL Server 2008 and SharePoint 2010. This is going to be one of the most unique, one-of-a-kind events in India where Solid Quality Mentors is offering public classes. I will be leading the training on Microsoft SQL Server 2005/2008 Query Optimization & Performance Tuning. This intensive, 3-day course gives attendees an in-depth look at query optimization and performance tuning in SQL Server 2005 and 2008. Designed to prepare SQL Server developers and administrators for a transition into SQL Server 2005 or 2008, the course covers best practices for a variety of essential tasks in order to maximize performance. Each day ends with a discussion of your real-world problems, to find appropriate solutions. Note: Scroll down for course fees, discount, dates and location, and do not forget to take advantage of the discount code 'SQLAuthority'.

    The training premises are very well-equipped, with one computer per attendee. Every participant will be provided with printed course materials. I will pick up your entire lunch tab, and we will have lots of SQL talk together. The best participant will receive a special gift at the end of the course. Even though the quality of the material delivered in the course is of an extremely high standard, the course fees are set at a very moderate rate: INR 14,000 per person for the whole 3-day course. At a rate of 1 USD = 44 INR, this converts to roughly USD 318, so it is entirely possible to fly in from anywhere in the world, take the training, and still save handsome pocket money. Better still, if you register using the discount code "SQLAuthority", you will instantly get an INR 3,000 discount, reducing the total cost of the training to INR 11,000 per person (about USD 250) for the whole 3-day course. This is a one-time offer and will not be available in the future. Please note that there will be a 10.3% service tax on course fees. To register, either send an email to [email protected] or call +91 95940 43399. Feel free to drop me an email at [email protected] for any additional information and clarification.

    Training date and time: May 12-14, 2010, 10 AM - 6 PM.
    Training venue: Abridge Solutions, #90/B/C/3/1, Ganesh GHR & MSY Plaza, Vittalrao Nagar, Near Image Hospital, Madhapur, Hyderabad - 500 081.

    The details of the course are listed below.

    Day 1: Strengthen the basics, along with SQL Server 2005/2008 new features
    - Module 01: Subqueries, Ranking Functions, Joins and Set Operations
    - Module 02: Table Expressions
    - Module 03: TOP and APPLY
    - Module 04: SQL Server 2008 Enhancements

    Day 2: Query Optimization & Performance Tuning 1
    - Module 05: Logical Query Processing
    - Module 06: Query Tuning
    - Module 07: Introduction to the Query Processor
    - Module 08: Review of common query coding that causes poor performance

    Day 3: Query Optimization & Performance Tuning 2
    - Module 09: SQL Server Indexing and Index Maintenance
    - Module 10: Plan Guides, Query Hints, UDFs, and Computed Columns
    - Module 11: Understanding SQL Server Execution Plans
    - Module 12: Real-World Index and Optimization Tips

    Download the complete PDF brochure. We are also going to have SharePoint 2010 training by Joy Rathnayake on 10-11 May; all the discount details apply to it as well.
    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, SQLAuthority News, T SQL, Technology

    Read the article

  • What's new in ASP.Net 4.5 and VS 2012 - part 2

    - by nikolaosk
    This is the second post in a series titled "What's new in ASP.NET 4.5 and VS 2012". You can have a look at the first post in this series here, and please find all my posts regarding VS 2012 here. In this post I will be looking into the various new features available in ASP.NET 4.5 and VS 2012 - specifically, the enhancements in the HTML editor, CSS editor and JavaScript editor. In order to follow along with this post you must have Visual Studio 2012 and .NET Framework 4.5 installed on your machine. Download and install VS 2012 using this link. My machine runs Windows 8 and Visual Studio 2012 works just fine. It will work fine on Windows 7 as well, so do not worry if you do not have the latest Microsoft operating system.

    1) Launch VS 2012 and create a new Web Forms application by going to File -> New Web Site -> ASP.NET Web Forms Site.

    2) Choose an appropriate name for your web site.

    3) First, I would like to point out the new enhancements in the CSS editor in VS 2012. In the Solution Explorer, open Site.css in the Content folder. When I try to change the background-color property of the html element, I get a brand new, handy color picker. Have a look at the picture below. Please note that the color picker shows all the colors that have been used in this website. You can expand the color picker by clicking on the arrows. Opacity is also supported. Have a look at the picture below.

    4) There are also mobile styles in Site.css. These are based on media queries; please have a look at another post of mine on CSS3 media queries, and at the picture below. In this case, when the width of the screen is 850px or less, a new layout is dictated by these rules (a small illustrative media-query rule is sketched at the end of this post). CSS snippets are also supported. Have a look at the picture below: I am writing a new CSS rule for an image element. I write the property transform and hit Tab, and then I have cross-browser CSS covering all of the major vendors. Then I simply add the value rotate, and it is applied to all the cross-browser variants. Have a look at the picture below. I am sure you realise how productive you can become with all these CSS snippets.

    5) Now let's have a look at the new HTML editor enhancements in VS 2012. You can drag and drop a GridView web server control from the Toolbox into the Site.master file. You will see a smart tag (previously available only in Design View) that you can expand to add fields to, and format, the web server control. Have a look at the picture below.

    6) We also have code snippets available. I type <video and then press Tab twice; by doing that, I have the rest of the HTML5 markup completed. Have a look at the picture below.

    7) There is new support for the input tag, including all the HTML5 types and all the new accessibility features. Have a look at the picture below.

    8) Another interesting feature is the new IntelliSense capabilities. When I change the DocType to 4.01 and then type the <audio> or <video> HTML5 tags, IntelliSense does not recognise them and adds squiggly lines. Have a look at the picture below. All these features support ASP.NET Web Forms, ASP.NET MVC applications and Web Pages.

    9) Finally, I would like to show you the enhanced support we have for JavaScript in VS 2012: full IntelliSense support and code snippet support. I create a sample JavaScript file. I type if and press Tab. I type while and press Tab. I type for and press Tab. In all three cases code snippet support kicks in and completes the code block. Have a look at the picture below. We also have full IntelliSense support; have a look at the picture below. I am creating a simple function and then type some XML-like comments for the input parameters. Have a look at the picture below. Then, when I call this function, IntelliSense picks up the XML comments and shows the variables' data types. Have a look at the picture below.

    Hope it helps!!!
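    Returning to the media queries mentioned in step 4, here is a minimal sketch of the kind of rule involved (the 850px breakpoint comes from the post; the selector and properties are my own placeholders, not the actual stylesheet):

        /* Illustrative: at 850px or narrower, collapse the main content
           into a single fluid column */
        @media only screen and (max-width: 850px) {
            .main-content {
                width: 100%;
                float: none;
            }
        }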

    Read the article

  • Oracle Enterprise Manager 11g is Here!

    - by chung.wu
    We hope that you enjoyed the launch event. If you missed it, you may still watch it via our on-demand webcast, which is being produced and will be posted very shortly. 11gR1 is a major release of Oracle Enterprise Manager, and as one would expect from a big release, there are many new capabilities that appeal to a broad audience. Before going into the laundry list of new features, let's talk about the key themes for this release to put things in perspective.

    First, this release is about Business Driven Application Management. The traditional paradigm of component-centric systems management simply cannot satisfy the management needs of modern distributed applications, as it does not provide adequate visibility into whether these applications are truly meeting the service level expectations of the business users. Business Driven Application Management helps IT manage applications according to the needs of the business users, so that valuable IT resources can be better focused on delivering better business results. To support Business Driven Application Management, 11gR1 builds on the work that we started in 10g to provide better support for user experience management. This capability helps IT better understand how users use applications and the experience that the applications provide, so that IT can take actions to help end users get their work done more effectively. In addition, this release also delivers improved business transaction management capabilities to make it faster and easier to understand and troubleshoot transaction problems that impact end user experience.

    Second, this release includes strengthened Integrated Application-to-Disk Management. Every component of an application environment, from the application logic to the application server, to the database, host machines and storage devices, can affect end user experience. After user experience improvement needs are identified, IT needs tools that can be used to do deep-dive diagnostics on each application environment component, analyze configurations and deploy changes. Enterprise Manager 11gR1 extends coverage of key application environment components to include full support for Oracle Database 11gR2, Exadata V2, and Fusion Middleware 11g. For composite and Java application management, two key pieces of technology, JVM Diagnostics and the Composite Application Monitor and Modeler, are now fully integrated into Enterprise Manager, so there is no need to install and maintain separate tools. In addition, we have delivered the first set of integrations between Enterprise Manager Grid Control and Enterprise Manager Ops Center, so that hardware-level events can be centrally monitored via Grid Control.

    Finally, this release delivers Integrated Systems Management and Support for customers of Oracle technologies. Traditionally, systems management tools and tech support were separate silos. When problems occurred, administrators used internally deployed tools to try to solve the problems themselves. If they couldn't fix the problems, they would use a support website to get help from the vendor's support staff. Oracle Enterprise Manager 11g integrates the problem diagnostic and remediation workflow. Administrators can use Oracle Enterprise Manager's various diagnostic tools to begin the troubleshooting process. They can also use the integrated access to My Oracle Support to look up solutions and download software patches. If further help is needed, administrators can open service requests from right within Oracle Enterprise Manager and track status updates. Oracle's support staff, using Enterprise Manager's configuration management capabilities, can collect important configuration information about customer environments in order to expedite problem resolution. This tight integration between Oracle Enterprise Manager and My Oracle Support helps Oracle customers achieve a superior ownership experience for their Oracle products.

    So there you have it. This is a brief 50,000-foot overview of Oracle Enterprise Manager 11g. We know you are hungry for the details, and we are going to write about them in the coming days and weeks. For those of you who absolutely can't wait to find out more, you may download our software and try it out today. In fact, for the first time ever, the initial release of Oracle Enterprise Manager is available for both 32- and 64-bit Linux. Additional O/S ports will arrive in the coming weeks. Please stay tuned on the Oracle Enterprise Manager blog for additional updates.

    Read the article

  • OBIEE 11.1.1 - User Interface (UI) Performance Is Slow With Internet Explorer 8

    - by Ahmed A
    The OBIEE 11g UI performance is slow in IE 8 and faster in Firefox. For VPN or WAN users, it takes a long time to display links on dashboards via IE 8. The cause is that IE 8 generates many HTTP 304 return calls, and this makes the 11g UI slower when compared to the Mozilla Firefox browser. To resolve this issue, you can implement HTTP compression and caching. This is a best practice.

    Why use web server compression/caching for OBIEE?

    - Bandwidth savings: Enabling HTTP compression can have a dramatic improvement on the latency of responses. By compressing static files and dynamic application responses, it significantly reduces the remote (high latency) user response time.
    - Improved request/response latency: Caching makes it possible to suppress the payload of the HTTP reply using the 304 status code. Minimizing round trips over the web to re-validate cached items can make a huge difference in browser page load times.

    This screen shot depicts the flow and where the compression and decompression occur.

    Solution: How to enable HTTP caching/compression in Oracle HTTP Server (OHS) 11.1.1.x

    1. To implement HTTP compression/caching, install and configure Oracle HTTP Server (OHS) 11.1.1.x for the bi_serverN Managed Servers (refer to the "OBIEE Enterprise Deployment Guide for Oracle Business Intelligence" document for details).

    2. On the OHS machine, open the HTTP Server configuration file (httpd.conf) for editing. This file is located in the OHS installation directory, for example:
       ORACLE_HOME/Oracle_WT1/instances/instance1/config/OHS/ohs1

    3. In the httpd.conf file, verify that the following directives are included and not commented out:

       LoadModule expires_module "${ORACLE_HOME}/ohs/modules/mod_expires.so"
       LoadModule deflate_module "${ORACLE_HOME}/ohs/modules/mod_deflate.so"

    4. Add the following lines to the httpd.conf file below the LoadModule section and restart OHS. Note: for the Windows platform, you will need to enclose any paths in double quotes ("), for example:

       Alias "/analytics" "ORACLE_HOME/bifoundation/web/app"
       <Directory "ORACLE_HOME/bifoundation/web/app">

       Alias /analytics ORACLE_HOME/bifoundation/web/app
       # Please replace ORACLE_HOME with your actual BI ORACLE_HOME path
       <Directory ORACLE_HOME/bifoundation/web/app>
       # We don't generate proper cross-server ETags, so disable them
       FileETag none
       SetOutputFilter DEFLATE
       # Don't compress images
       SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png)$ no-gzip dont-vary
       <FilesMatch "\.(gif|jpeg|png|js|x-javascript|javascript|css)$">
       # Enable future expiry of static files
       ExpiresActive on
       ExpiresDefault "access plus 1 week"
       # 1 week; this stops the HTTP 304 calls generated by IE 8
       Header set Cache-Control "max-age=604800"
       </FilesMatch>
       DirectoryIndex default.jsp
       </Directory>
       # Restrict access to WEB-INF
       <Location /analytics/WEB-INF>
       Order Allow,Deny
       Deny from all
       </Location>

    Note: Make sure you replace the placeholder "ORACLE_HOME" with the correct path for your BI ORACLE_HOME. For example, my BI Oracle Home path is /Oracle/BIEE11g/Oracle_BI1/bifoundation/web/app

    Important notes:
    - The caching rules above are restricted to the static files found inside the /analytics directory (/web/app). This approach is safer than setting static file caching globally.
    - In some customer environments you may not get 100% performance gains in the IE 8.0 browser. In that case you need to extend the caching rules to other directories with static file content.
    - If OHS is installed on a separate dedicated machine, make sure the static files in your BI ORACLE_HOME (../Oracle_BI1/bifoundation/web/app) are accessible to the OHS instance.

    The following screen shot summarizes the before and after results and the improvements after enabling compression and caching:
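    One quick way to verify the new behavior after restarting OHS is to request a static file and inspect the response headers (a sketch only; the host, port and file name are placeholders for your own environment):

        # Expect "Content-Encoding: gzip" and "Cache-Control: max-age=604800"
        # in the dumped headers
        curl -s -o /dev/null -D - -H "Accept-Encoding: gzip" \
          http://ohs-host.company.com:7777/analytics/res/common.js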

    Read the article

  • Grouping data in LINQ with the help of group keyword

    - by vik20000in
    When working with any kind of advanced query, grouping is a very important factor. Grouping helps in executing aggregate functions like Sum, Max, and Average on certain groups of data inside the result set. Grouping is done with the help of the group clause. Below is an example of the basic group functionality.

        int[] numbers = { 5, 4, 1, 3, 9, 8, 6, 7, 2, 0 };

        var numberGroups =
            from num in numbers
            group num by num % 5 into numGroup
            select new { Remainder = numGroup.Key, Numbers = numGroup };

    In the above example we have grouped the values based on the remainder left over when divided by 5. First we group the values, by the remainder when divided by 5, into the numGroup variable. numGroup.Key gives the value of the key on which the grouping has been applied, and numGroup itself contains all the records that are contained in that group. Below is another example to explain the same.

        string[] words = { "blueberry", "abacus", "banana", "apple", "cheese" };

        var wordGroups =
            from num in words
            group num by num[0] into grp
            select new { FirstLetter = grp.Key, Words = grp };

    In the above example we are grouping the values by the first character of the string (num[0]). Just like the ordering operators, the group by clause also allows us to supply our own logic for the equality comparison (meaning that we can, for instance, group items ignoring case by writing our own implementation). For this we need to pass an object that implements the IEqualityComparer<string> interface. Below is an example.

        public class AnagramEqualityComparer : IEqualityComparer<string>
        {
            public bool Equals(string x, string y)
            {
                return getCanonicalString(x) == getCanonicalString(y);
            }

            public int GetHashCode(string obj)
            {
                return getCanonicalString(obj).GetHashCode();
            }

            // Sort the characters of the word so that anagrams share one canonical form
            private string getCanonicalString(string word)
            {
                char[] wordChars = word.ToCharArray();
                Array.Sort<char>(wordChars);
                return new string(wordChars);
            }
        }

        string[] anagrams = { "from   ", " salt", " earn", "  last   ", " near " };
        var orderGroups = anagrams.GroupBy(w => w.Trim(), new AnagramEqualityComparer());

    Vikram
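    Groups pair naturally with aggregates. Here is a short sketch of my own (not from the post) that extends the first example above by computing a count and a sum per group:

        int[] numbers = { 5, 4, 1, 3, 9, 8, 6, 7, 2, 0 };

        var stats =
            from num in numbers
            group num by num % 5 into numGroup
            select new
            {
                Remainder = numGroup.Key,
                Count = numGroup.Count(),   // how many numbers share this remainder
                Sum = numGroup.Sum()        // their total
            };

        foreach (var g in stats)
        {
            Console.WriteLine("Remainder {0}: {1} numbers, sum {2}",
                g.Remainder, g.Count, g.Sum);
        }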

    Read the article

  • Regular Expressions Cookbook Is in The Money—Win a Copy

    - by Jan Goyvaerts
    You may have heard some people say that most book authors never get any royalties. That's not true, because most authors get an advance royalty that is paid before the book is published. That's the author's main incentive for writing the book, at least as far as money is concerned. (If money is your main concern, don't write books.) What is true is that most authors never see any money beyond the advance royalty. Royalty rates are very low. A 10% royalty on the publisher's price is considered normal, and the publisher's price is usually 45% of the retail price. So if you pay full price in a bookstore, the author gets 4.5% of your money. If there's more than one author, they split the royalty. It doesn't take a math degree to figure out that a book needs to sell quite a few copies for the royalty to add up to a meaningful amount of money.

    But Steven and I must have done something right. Regular Expressions Cookbook is in the money. My royalty statement for the 3rd quarter of 2009, which is the 2nd quarter that the book was on the market, came with a check. I actually received it last month but didn't get around to blogging about it. The amount of the check is insignificant. The point is that the balance is no longer negative. I'm taking this opportunity to pat myself and my co-author on the back.

    To celebrate the occasion, O'Reilly has offered to sponsor a give-away of five (5) copies of Regular Expressions Cookbook. These are the rules of the game:

    - You must post a comment to this blog article including your actual name and actual email address. Names are published, email addresses are not.
    - Comments are moderated by myself (Jan Goyvaerts). If I consider a comment to be offensive or spam, it will not be published and will not be eligible for any prize.
    - If you don't know what to say in the comment, just wish me a happy 100000nd birthday, so I don't have to feel so bad about entering the 6-bit era.
    - Each person commenting has only one chance to win, regardless of the number of comments posted.
    - O'Reilly will be provided with the names and email addresses of the winners (and those email addresses only) in order to arrange delivery.
    - Each winner can choose to receive a printed copy or an ebook (DRM-free PDF). If you choose the printed book, O'Reilly pays for shipping anywhere in the world, but not for any duties or taxes your country may impose on books imported from the USA. If you choose the ebook, you'll need to create an O'Reilly account that is then granted access to the PDF download. You can make your choice after you've won, so it doesn't influence your chance of winning.
    - The contest ends 28 February 2010, GMT+7 (Thai time).

    Chosen by five calls to Random(78)+1 in Delphi 2010, the winners are:

    48: Xiaozu
    45: David Chisholm
    19: Miquel Burns
    33: Aaron Rice
    17: David Laing

    Thanks to everybody who participated. The winners have been notified by email on how to collect their prize.

    Read the article

  • Setting up SharePoint without Active Directory

    - by eJugnoo
    In order to set up SharePoint without AD, you need to run the following PowerShell command in the Management Shell after installing SharePoint on your server, but before running the Config Wizard (we don't want to run this SP farm in stand-alone mode!).

    1. New-SPConfigurationDatabase

    SYNOPSIS
        Creates a new configuration database.

    SYNTAX
        New-SPConfigurationDatabase [-DatabaseName] <String> [-DatabaseServer] <String>
        [[-DirectoryDomain] <String>] [[-DirectoryOrganizationUnit] <String>]
        [[-AdministrationContentDatabaseName] <String>] [[-DatabaseCredentials] <PSCredential>]
        [-FarmCredentials] <PSCredential> [-Passphrase] <SecureString>
        [-AssignmentCollection <SPAssignmentCollection>] [<CommonParameters>]

    DESCRIPTION
        The New-SPConfigurationDatabase cmdlet creates a new configuration database on the
        specified database server. This is the central database for a new SharePoint farm.
        For permissions and the most current information about Windows PowerShell for
        SharePoint Products, see the online documentation
        (http://go.microsoft.com/fwlink/?LinkId=163185).

    RELATED LINKS
        Backup-SPConfigurationDatabase
        Disconnect-SPConfigurationDatabase
        Connect-SPConfigurationDatabase
        Remove-SPConfigurationDatabase

    REMARKS
        To see the examples, type: "get-help New-SPConfigurationDatabase -examples".
        For more information, type: "get-help New-SPConfigurationDatabase -detailed".
        For technical information, type: "get-help New-SPConfigurationDatabase -full".

    NOTE: Use the -AdministrationContentDatabaseName switch to pass the name of the admin database you want, instead of the GUID-based name the cmdlet creates automatically. Hence, one can easily control the Admin, Config, and Content database names at the time of farm creation. If creating a new farm, you can also delete and re-provision any automatically created service databases from the UI, to decide what database names you want.

    2. Run the SharePoint Configuration Wizard, and you will see the server already added to the farm. Select "Do not disconnect from this server farm" and proceed... Select the port and the authentication method (NTLM in my case). Click Next, and the wizard will complete the remaining provisioning steps, including creation of the Central Admin web app on the desired port. Once successful, it will open the Central Admin site and ask you to run the Farm Config Wizard. I chose to skip it and do things manually, to remain in control of what is happening on the farm: creating the web app for site collections, creating the very first site collection, and any other service applications.

    I needed this to create a public-facing installation of SharePoint Foundation RTM on a server which didn't have AD. Now I am going to set up FBA, and possibly Live ID auth as well. I will also be setting up RBS and multi-tenancy on this farm, and will post any notes and findings here...

    --Sharad
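    For reference, an invocation matching the syntax above might look like this (a sketch only; the database, server, and passphrase values are placeholders to replace with your own):

        # Placeholder names; -FarmCredentials prompts for the farm account
        New-SPConfigurationDatabase -DatabaseName "SP_Config" `
            -DatabaseServer "SQL01" `
            -AdministrationContentDatabaseName "SP_AdminContent" `
            -Passphrase (ConvertTo-SecureString "Replace-With-Strong-Passphrase1!" -AsPlainText -Force) `
            -FarmCredentials (Get-Credential)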

    Read the article

  • Extracting the Date from a DateTime in Entity Framework 4 and LINQ

    - by Ken Cox [MVP]
    In my current ASP.NET 4 project, I'm displaying dates in a GridDateTimeColumn of Telerik's ASP.NET RadGrid control. I don't care about the time portion, so my DataFormatString shows only the date bits:

        <telerik:GridDateTimeColumn FilterControlWidth="100px"
            DataField="DateCreated" HeaderText="Created"
            SortExpression="DateCreated" ReadOnly="True"
            UniqueName="DateCreated" PickerType="DatePicker"
            DataFormatString="{0:dd MMM yy}">

    My problem was that I couldn't get the built-in column filtering (it uses Telerik's DatePicker control) to behave. The DatePicker assumes that the time is 00:00:00, but the data would have times like 09:22:21. So, when you select a date and apply the EqualTo filter, you get no results. You would get results if all the time portions were 00:00:00. In essence, I wanted my Entity Framework query to give the DatePicker what it wanted... a date without the time portion. Fortunately, EF4 provides the TruncateTime function. After you include

        Imports System.Data.Objects.EntityFunctions

    you'll find that your EF queries accept the TruncateTime function. Here's my routine:

        Protected Sub RadGrid1_NeedDataSource _
            (ByVal source As Object, _
             ByVal e As Telerik.Web.UI.GridNeedDataSourceEventArgs) _
            Handles RadGrid1.NeedDataSource

            Dim ent As New OfficeBookDBEntities1
            Dim TopBOMs = From t In ent.TopBom, i In ent.Items _
                          Where t.BusActivityID = busActivityID _
                          And i.BusActivityID = busActivityID And t.ItemID = i.RecordID _
                          Order By t.DateUpdated Descending _
                          Select New With {.TopBomID = t.TopBomID, .ItemID = t.ItemID, _
                                           .PartNumber = i.PartNumber, _
                                           .Description = i.Description, .Notes = t.Notes, _
                                           .DateCreated = TruncateTime(t.DateCreated), _
                                           .DateUpdated = TruncateTime(t.DateUpdated)}
            RadGrid1.DataSource = TopBOMs
        End Sub

    Now when I select March 14, 2011 in the DatePicker, the filter doesn't stumble on time values that don't make sense.

    Full Disclosure: Telerik gives me (and other developer MVPs) free copies of their suite.
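    For readers working in C#, here is a minimal method-syntax sketch of the same call (my own illustration; the context and entity names are hypothetical):

        using System.Data.Objects; // EF4: home of EntityFunctions

        // Inside a query against an EF4 model, TruncateTime translates to SQL
        var rows = ctx.TopBom
            .Select(t => new
            {
                t.TopBomID,
                DateCreated = EntityFunctions.TruncateTime(t.DateCreated)
            })
            .ToList();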

    Read the article

  • How do NTP Servers Manage to Stay so Accurate?

    - by Akemi Iwaya
    Many of us have had the occasional problem with our computers and other devices retaining accurate time settings, but a quick sync with an NTP server makes all well again. But if our own devices can lose accuracy, how do NTP servers manage to stay so accurate? Today's Question & Answer session comes to us courtesy of SuperUser, a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. Photo courtesy of LEOL30 (Flickr).

    The Question

    SuperUser reader Frank Thornton wants to know how NTP servers are able to remain so accurate: I have noticed that on my servers and other machines, the clocks always drift, so that they have to sync up to remain accurate. How do the NTP server clocks keep from drifting and always remain so accurate?

    The Answer

    SuperUser contributor Michael Kjorling has the answer for us: NTP servers rely on highly accurate clocks for precision timekeeping. A common time source for central NTP servers is atomic clocks or GPS receivers (remember that GPS satellites have atomic clocks onboard). These clocks are defined as accurate since they provide a highly exact time reference. There is nothing magical about GPS or atomic clocks that makes them tell you exactly what time it is. Because of how atomic clocks work, they are simply very good at, having once been told what time it is, keeping accurate time (since the second is defined in terms of atomic effects). In fact, it is worth noting that GPS time is distinct from the UTC that we are more used to seeing. These atomic clocks are in turn synchronized against International Atomic Time (TAI) in order to not only accurately tell the passage of time, but also the time itself.

    Once you have an exact time on one system connected to a network like the Internet, it is a matter of protocol engineering to enable the transfer of precise times between hosts over an unreliable network. In this regard, a Stratum 2 (or farther from the actual time source) NTP server is no different from your desktop system syncing against a set of NTP servers. Once you have a few accurate times (as obtained from NTP servers or elsewhere) and know the rate of advancement of your local clock (which is easy to determine), you can calculate your local clock's drift rate relative to the "believed accurate" passage of time. Once locked in, this value can then be used to continuously adjust the local clock to make it report values very close to the accurate passage of time, even if the local real-time clock itself is highly inaccurate. As long as your local clock is not highly erratic, this should allow keeping accurate time for a while even if your upstream time source becomes unavailable for any reason. Some NTP client implementations (probably most ntpd daemon or system service implementations) do this, and others (like ntpd's companion ntpdate, which simply sets the clock once) do not. This is commonly referred to as a drift file, because it persistently stores a measure of clock drift, but strictly speaking it does not have to be stored as a specific file on disk.

    In NTP, Stratum 0 is by definition an accurate time source. Stratum 1 is a system that uses a Stratum 0 time source as its time source (and is thus slightly less accurate than the Stratum 0 time source). Stratum 2, again, is slightly less accurate than Stratum 1 because it is syncing its time against the Stratum 1 source, and so on.
In practice, this loss of accuracy is so small that it is completely negligible in all but the most extreme of cases. Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.
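    To make the drift-correction idea from the answer concrete, here is a minimal sketch — assumed numbers, not ntpd’s actual algorithm — of deriving a drift rate from two reference fixes and using it to correct later local readings:

    # Two (local clock reading, reference time) observations taken roughly an hour apart
    t0_local, t0_ref = 1000.0, 1000.0
    t1_local, t1_ref = 4600.0, 4599.5   # the local clock gained 0.5 s over 3599.5 s of real time

    # Drift rate: how fast the local clock advances relative to true time (> 1.0 means fast)
    rate = (t1_local - t0_local) / (t1_ref - t0_ref)

    def corrected(local_now: float) -> float:
        """Map a raw local clock reading onto the believed-accurate timeline."""
        return t1_ref + (local_now - t1_local) / rate

    print(corrected(8200.0))  # stays close to true time even with no reachable NTP server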

    Read the article

  • SQL SERVER – DMV – sys.dm_os_wait_stats Explanation – Wait Type – Day 3 of 28

    - by pinaldave
    The key Dynamic Management View (DMV) that helps us understand wait stats is sys.dm_os_wait_stats; this DMV gives us all the information we need to know regarding wait stats. However, the interpretation is left to us, which is a challenge, as understanding wait stats can often be quite tricky. We will cover a few individual wait stats in future articles; today we will go over the basic understanding of the DMV. The official Books Online reference for the DMV is here: sys.dm_os_wait_stats. I suggest you refer to it for complete accuracy. Following is a statement from Books Online: “Specific types of wait times during query execution can indicate bottlenecks or stall points within the query. Similarly, high wait times, or wait counts server wide can indicate bottlenecks or hot spots in interaction query interactions within the server instance.” This is the statement that inspired me to write this series. Let us first run the following statement against the DMV. SELECT * FROM sys.dm_os_wait_stats ORDER BY wait_time_ms DESC GO The statement above returns several columns. Here is a quick explanation of each column. wait_type – the name of the wait type. There are three different kinds of wait types – resource, queue and external. waiting_tasks_count – this incremental counter is a good indication of how frequently the wait is happening. If this number is very high, it is a good signal for us to investigate that particular wait type. It is quite possible that the wait time is considerably low but the frequency of the wait is quite high. wait_time_ms – the total wait time accumulated for the wait type. This total includes signal_wait_time_ms. max_wait_time_ms – the maximum wait time ever recorded for that particular wait type. Using this, one can estimate the intensity of the wait type in the past. Again, it is not necessary that this max wait time will occur every time, so do not over-invest yourself here. signal_wait_time_ms – the time spent between a thread being marked as runnable and it actually getting to the running state. If the runnable queue is very long, you will find that this wait time becomes high. Additionally, please note that this DMV does not show the current wait type or wait stats; it is a cumulative view of all the wait stats since the server (instance) was restarted or the wait stats were cleared. In future blog posts, we will also cover two more DMVs that can be helpful in identifying wait-related issues: sys.dm_os_waiting_tasks and sys.dm_exec_requests. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL DMV, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
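    If you want to slice these columns programmatically rather than in Management Studio, here is a rough sketch using Python and pyodbc (the DSN is hypothetical; any SQL Server connection string works). It splits each wait into its resource and signal components, as described above:

    import pyodbc  # assumes a SQL Server ODBC driver and this (hypothetical) DSN exist

    conn = pyodbc.connect("DSN=MySqlServer;Trusted_Connection=yes")
    sql = ("SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms "
           "FROM sys.dm_os_wait_stats ORDER BY wait_time_ms DESC")

    for wait_type, tasks, wait_ms, signal_ms in conn.execute(sql).fetchmany(10):
        resource_ms = wait_ms - signal_ms          # time spent waiting on the resource itself
        avg_ms = wait_ms / tasks if tasks else 0   # average wait per occurrence
        print(f"{wait_type:<45} total={wait_ms} resource={resource_ms} avg={avg_ms:.1f} ms")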

    Read the article

  • Hello World Pagelet

    - by astemkov
    Introduction The goal of this exercise is to give you a basic feel for how you can use Pagelet Producer to proxy a web page. We will proxy a simple static Hello World web page, cut one section out of that page, and present it as a pagelet that you can later insert on your own application page or on a portal page such as a WebCenter Portal space or a WebCenter Interaction community page. Hello World sample app This is the static web page we will work with: Let's assume the following: The Hello World web page is running on server http://appserver.company.com:1234/ The Hello World web page path is: http://appserver.company.com:1234/helloworld/ Initial Pagelet Producer setup Let's assume that the Pagelet Producer server is running on http://pageletserver.company.com:8889/pagelets/ First let's check that Pagelet Producer is up and running. In order to do that we just need to access the following URL: http://pageletserver.company.com:8889/pagelets/ And this is what should be returned: Now you can access the Pagelet Producer administration screens using this URL: http://pageletserver.company.com:8889/pagelets/admin This is how the UI looks: If you connect to the internet via a proxy server, you need to configure the proxy in Pagelet Producer settings. In the Navigator pane: Jump To - Settings Click on "Proxy" Enter your proxy server configuration: Creating a resource The first thing you need to do is create a resource for your web page. This tells Pagelet Producer that all sub-paths of the web page should be proxied. It also allows you to set up common rules for how your web page should be proxied and serves as a container for your pagelets. In the Navigator pane: Jump To - Resources Click on any existing resource (ex. welcome_resource) Click on the "Create selected type" toolbar button at the top of the Navigator pane Select "Web" in the "Select Producer Type" dialog box and click "OK" Now that the resource is created, let's click on the "General" sub-item and specify the following values: Name = AppServer Source URL = http://appserver.company.com:1234/ Destination URL = /appserver/ Click on the "Save" toolbar button at the top of the Navigator pane After the resource is created, our web page becomes accessible at the URL: http://pageletserver.company.com:8889/pagelets/appserver/helloworld/ So in the original web page address, the Source URL is replaced with the Pagelet Producer URL (http://pageletserver.company.com:8889/pagelets) + Destination URL Creating a pagelet Now let's create the "Hello World" pagelet. Under the resource node, activate the Pagelets subnode Click on the "Create selected type" toolbar button at the top of the Navigator pane Click on the "General" sub-node of the newly created pagelet and specify the following values: Name = Hello_World Library = MyLib Library is used for logical grouping. The portals use the "Library" value to group pagelets in their respective UIs. For example, when adding pagelets to a WebCenter Portal space you would see the individual pagelets listed under the "Library" name. URL Suffix = helloworld/index.html This is where the Hello World page HTML is served from. Click on the "Save" toolbar button at the top of the Navigator pane The Library name can be anything you want; it doesn't have to match the resource name at all. It is used as a logical grouping of pagelets, and you can include pagelets from multiple resources in the same library or create a new library for each pagelet.
After you save the pagelet you can access it here: http://pageletserver.company.com:8889/pagelets/inject/v2/pagelet/MyLib/Hello_World which is: http://pageletserver.company.com:8889/pagelets/inject/v2/pagelet/ + [Library] + [Name] Or, to test the injection of the pagelet into an iframe, you can click on the pagelet's "Documentation" sub-node and use the "Access Pagelet using REST" URL: This is what we will see: Clipping The pagelet that we just created covers the whole web page, but we want just the "Hello World" segment of it, so let's clip it. Under the Hello_World pagelet node, activate the Clipper sub-node Click on the "Create selected type" toolbar button at the top of the Navigator pane Specify a Name for the newly created clipper. For example: "c1" Click on the "Content" sub-node of the clipper Click on the "Launch Clipper" button A new browser window will open By moving the mouse pointer over the web page, select the area you want to clip: Click the left mouse button - the browser window will disappear and you will see that the Clipping Path was automatically generated Now let's save and access the link from the "Documentation" page again Here's our pagelet, nicely clipped and ready to be used in your WebCenter Portal space.
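    Since pagelet injection URLs always follow the base + /inject/v2/pagelet/ + [Library] + [Name] pattern shown above, a small helper can compose them — a purely illustrative Python sketch, not part of the product:

    def pagelet_inject_url(base: str, library: str, name: str) -> str:
        """Compose the REST injection URL: base + /inject/v2/pagelet/ + [Library] + / + [Name]."""
        return f"{base.rstrip('/')}/inject/v2/pagelet/{library}/{name}"

    print(pagelet_inject_url("http://pageletserver.company.com:8889/pagelets",
                             "MyLib", "Hello_World"))
    # -> http://pageletserver.company.com:8889/pagelets/inject/v2/pagelet/MyLib/Hello_World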

    Read the article

  • Last GUID used up - new ScottGuID unique ID to replace it

    - by Eilon
    You might have heard in recent news that the last-ever GUID was used up. The GUID {FFFFFFFF-FFFF-FFFF-FFFF-FFFFFFFFFFFF} was just consumed by a soon-to-be-released project at Microsoft. Immediately after the GUID's creation, word spread across the Microsoft campuses around the globe. Microsoft's approximately 100,000 worldwide employees then started blogging, tweeting, and facebooking about the dubious "achievement." The following screenshot shows GUIDGEN (the Windows tool for creating GUIDs) with the last-ever GUID. All GUIDs created by projects at Microsoft must be registered in a central repository for record keeping. This allows quick-fix engineers, security engineers, anti-malware developers, and testers to do a quick lookup of an unknown GUID and find out if it belongs to Microsoft. The following screenshot shows the Microsoft GUID Tracker internal application and the last few GUIDs being used up by various Microsoft projects. What is perhaps more interesting than the news about the GUID is the project that used that last GUID. The recent announcements regarding the development experience for the Windows Phone 7 Series (WP7S) all involve free editions of Visual Studio 2010. One of the lesser-known developer tools is based on a resurrected project that many of you are probably familiar with, but have never used. The tool is in fact Microsoft Bob 7 Series (MB7S). MB7S is an agent-based approach for mobile phone app development. The UI incorporates both natural language interfaces and motion gesture behaviors, similar to the Windows Phone 7 Series “Metro” interface. If it works, it will help to expand the breadth of mobile app developers. After the GUID: The ScottGuID It came as no big surprise that eventually the last GUID would be used up. Knowing this, a group of engineers at Microsoft has designed, implemented, and tested a replacement for the GUID: The ScottGuID. There are several core principles of the ScottGuID: 1. The concepts used in ScottGuIDs must be easily understood by a developer who is already familiar with GUIDs 2. There must exist a compatibility layer between ScottGuIDs and GUIDs 3. A ScottGuID must be usable in a practical manner in non-computing environments 4. There must exist ScottGuID APIs for all common platforms: Win32/Win64/WinCE, .NET (incl. Silverlight), Linux, FreeBSD, MacOS (incl. iPhone OS), Symbian, RIM BlackBerry, Google Android, etc. 5. ScottGuIDs must never run out ScottGuID use cases One of the more subtle principles of the ScottGuID is principle #3. While technically a GUID could be used in any environment, it was not practical to do so in terms of data entry and error detection. In order to have the ScottGuID be a true universal ID it must be usable in non-computing environments. Prior to the announcement of the ScottGuID there have been a number of until-now confidential projects. One of the tools that will soon become public is ScottGuIDGen, which is in essence an updated version of GUIDGEN that can create ScottGuIDs. The following screenshot shows a sample ScottGuID. To demonstrate the various applications of the ScottGuID there were test deployments around the globe. The following examples are a small showcase of the applications that have already been prototyped. Log in to Hotmail: Pay for gas: Sign in to Twitter: Dispense cat food: Conclusion I hope that this brief introduction to the ScottGuID shows how technology can continue to move forward, even when it appears there is a point that cannot be passed.
With a small number of principles, a team of smart engineers, and a passion for "getting it right," the ScottGuID should last well past our lifetimes. In the coming months, expect further announcements regarding additional developer tools, samples, whitepapers, podcasts, and videos. Please leave a comment on this post if you have any questions about the ScottGuID or what you would like to see us do with it. With ScottGuID, the possibilities are nearly endless and we want to stretch their reach as far as possible.

    Read the article

  • Sun Fire X4800 M2 Posts World Record x86 SPECjEnterprise2010 Result

    - by Brian
    Oracle's Sun Fire X4800 M2, using the Intel Xeon E7-8870 processor, and Sun Fire X4470 M2, using the Intel Xeon E7-4870 processor, produced a world-record single application server SPECjEnterprise2010 benchmark result of 27,150.05 SPECjEnterprise2010 EjOPS. The Sun Fire X4800 M2 server ran the application tier and the Sun Fire X4470 M2 server was used for the database tier. The Sun Fire X4800 M2 server demonstrated 63% better performance compared to the IBM Power 780 server result of 16,646.34 SPECjEnterprise2010 EjOPS. The Sun Fire X4800 M2 server demonstrated 4% better performance than the Cisco UCS B440 M2 result; both results used the same number of processors. This result used Oracle WebLogic Server 12c, Java HotSpot(TM) 64-Bit Server 1.7.0_02, and Oracle Database 11g. This result was produced using Oracle Linux. Performance Landscape Complete benchmark results are at the SPEC website, SPECjEnterprise2010 Results. The table below compares against the best results from IBM and Cisco. SPECjEnterprise2010 Performance Chart as of 3/12/2012
    Submitter | EjOPS* | Application Server | Database Server
    Oracle | 27,150.05 | 1x Sun Fire X4800 M2 (8x 2.4 GHz Intel Xeon E7-8870), Oracle WebLogic 12c | 1x Sun Fire X4470 M2 (4x 2.4 GHz Intel Xeon E7-4870), Oracle Database 11g (11.2.0.2)
    Cisco | 26,118.67 | 2x UCS B440 M2 Blade Server (4x 2.4 GHz Intel Xeon E7-4870), Oracle WebLogic 11g (10.3.5) | 1x UCS C460 M2 Blade Server (4x 2.4 GHz Intel Xeon E7-4870), Oracle Database 11g (11.2.0.2)
    IBM | 16,646.34 | 1x IBM Power 780 (8x 3.86 GHz POWER7), WebSphere Application Server V7 | 1x IBM Power 750 Express (4x 3.55 GHz POWER7), IBM DB2 9.7 Workgroup Server Edition FP3a
    * SPECjEnterprise2010 EjOPS, bigger is better. Configuration Summary Application Server: 1 x Sun Fire X4800 M2, 8 x 2.4 GHz Intel Xeon processor E7-8870, 256 GB memory, 4 x 10 GbE NIC, 2 x FC HBA, Oracle Linux 5 Update 6, Oracle WebLogic Server 12c, Java HotSpot(TM) 64-Bit Server VM on Linux, version 1.7.0_02 (Java SE 7 Update 2) Database Server: 1 x Sun Fire X4470 M2, 4 x 2.4 GHz Intel Xeon E7-4870, 512 GB memory, 4 x 10 GbE NIC, 2 x FC HBA, 2 x Sun StorageTek 2540 M2, 4 x Sun Fire X4270 M2, 4 x Sun Storage F5100 Flash Array, Oracle Linux 5 Update 6, Oracle Database 11g Enterprise Edition Release 11.2.0.2 Benchmark Description SPECjEnterprise2010 is the third generation of the SPEC organization's J2EE end-to-end industry standard benchmark application. The SPECjEnterprise2010 benchmark has been designed and developed to cover the Java EE 5 specification's significantly expanded and simplified programming model, highlighting the major features used by developers in the industry today. This provides a real-world workload driving the application server's implementation of the Java EE specification to its maximum potential and allowing maximum stressing of the underlying hardware and software systems. The workload consists of an end-to-end web-based order processing domain, an RMI- and Web Services-driven manufacturing domain, and a supply chain model utilizing document-based Web Services. The application is a collection of Java classes, Java Servlets, Java Server Pages, Enterprise Java Beans, Java Persistence Entities (POJOs) and Message Driven Beans. The SPECjEnterprise2010 benchmark heavily exercises all parts of the underlying infrastructure that make up the application environment, including hardware, JVM software, database software, JDBC drivers, and the system network.
The primary metric of the SPECjEnterprise2010 benchmark is jEnterprise Operations Per Second ("SPECjEnterprise2010 EjOPS"). This metric is calculated by adding the metrics of the Dealership Management Application in the Dealer Domain and the Manufacturing Application in the Manufacturing Domain. There is no price/performance metric in this benchmark. Key Points and Best Practices Sixteen Oracle WebLogic server instances were started using numactl, binding 2 instances per chip. Eight Oracle database listener processes were started, binding 2 instances per chip using taskset. Additional tuning information is in the report at http://spec.org. See Also Oracle Press Release -- SPECjEnterprise2010 Results Page Sun Fire X4800 M2 Server oracle.com OTN Sun Fire X4270 M2 Server oracle.com OTN Sun Storage 2540-M2 Array oracle.com OTN Oracle Linux oracle.com OTN Oracle Database 11g Release 2 Enterprise Edition oracle.com OTN WebLogic Suite oracle.com OTN Disclosure Statement SPEC and the benchmark name SPECjEnterprise are registered trademarks of the Standard Performance Evaluation Corporation. Sun Fire X4800 M2, 27,150.05 SPECjEnterprise2010 EjOPS; IBM Power 780, 16,646.34 SPECjEnterprise2010 EjOPS; Cisco UCS B440 M2, 26,118.67 SPECjEnterprise2010 EjOPS. Results from www.spec.org as of 3/27/2012.
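    As a quick sanity check, the quoted performance deltas follow directly from the published EjOPS numbers:

    oracle, ibm, cisco = 27150.05, 16646.34, 26118.67
    print(f"vs IBM Power 780:  {oracle / ibm - 1:.0%}")    # -> 63%
    print(f"vs Cisco B440 M2:  {oracle / cisco - 1:.1%}")  # -> 3.9%, quoted as 4%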

    Read the article

  • Big Data: Size isn’t everything

    - by Simon Elliston Ball
    Big Data has a big problem; it’s the word “Big”. These days, a quick Google search will uncover terabytes of negative opinion about the futility of relying on huge volumes of data to produce magical, meaningful insight. There are also many clichéd but correct assertions about the difficulties of correlation versus causation, in massive data sets. In reading some of these pieces, I begin to understand how climatologists must feel when people complain ironically about “global warming” during snowfall. Big Data has a name problem. There is a lot more to it than size. Shape, Speed, and…err…Veracity are also key elements (now I understand why Gartner and the gang went with V’s instead of S’s). The need to handle data of different shapes (Variety) is not new. Data developers have always had to mold strange-shaped data into our reporting systems, integrating with semi-structured sources, and even straying into full-text searching. However, what we lacked was an easy way to add semi-structured and unstructured data to our arsenal. New “Big Data” tools such as MongoDB, and other NoSQL (Not Only SQL) databases, or a graph database like Neo4J, fill this gap. Still, to many, they simply introduce noise to the clean signal that is their sensibly normalized data structures. What about speed (Velocity)? It’s not just high frequency trading that generates data faster than a single system can handle. Many other applications need to make trade-offs that traditional databases won’t, in order to cope with high data insert speeds, or to extract quickly the required information from data streams. Unfortunately, many people equate Big Data with the Hadoop platform, whose batch driven queries and job processing queues have little to do with “velocity”. StreamInsight, Esper and Tibco BusinessEvents are examples of Big Data tools designed to handle high-velocity data streams. Again, the name doesn’t do the discipline of Big Data any favors. Ultimately, though, does analyzing fast moving data produce insights as useful as the ones we get through a more considered approach, enabled by traditional BI? Finally, we have Veracity and Value. In many ways, these additions to the classic Volume, Velocity and Variety trio acknowledge the criticism that without high-quality data and genuinely valuable outputs then data, big or otherwise, is worthless. As a discipline, Big Data has recognized this, and data quality and cleaning tools are starting to appear to support it. Rather than simply decrying the irrelevance of Volume, we need as a profession to focus how to improve Veracity and Value. Perhaps we should just declare the ‘Big’ silent, embrace these new data tools and help develop better practices for their use, just as we did the good old RDBMS? What does Big Data mean to you? Which V gives your business the most pain, or the most value? Do you see these new tools as a useful addition to the BI toolbox, or are they just enabling a dangerous trend to find ghosts in the noise?

    Read the article

  • WCF REST Service Activation Errors when AspNetCompatibility is enabled

    - by Rick Strahl
    I’m struggling with an interesting problem with WCF REST since last night and I haven’t been able to track this down. I have a WCF REST Service set up and when accessing the .SVC file it crashes with a version mismatch for System.ServiceModel: Server Error in '/AspNetClient' Application. Could not load type 'System.ServiceModel.Activation.HttpHandler' from assembly 'System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.TypeLoadException: Could not load type 'System.ServiceModel.Activation.HttpHandler' from assembly 'System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'. Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [TypeLoadException: Could not load type 'System.ServiceModel.Activation.HttpHandler' from assembly 'System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.] System.RuntimeTypeHandle.GetTypeByName(String name, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMarkHandle stackMark, Boolean loadTypeFromPartialName, ObjectHandleOnStack type) +0 System.RuntimeTypeHandle.GetTypeByName(String name, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark, Boolean loadTypeFromPartialName) +95 System.RuntimeType.GetType(String typeName, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark) +54 System.Type.GetType(String typeName, Boolean throwOnError, Boolean ignoreCase) +65 System.Web.Compilation.BuildManager.GetType(String typeName, Boolean throwOnError, Boolean ignoreCase) +69 System.Web.Configuration.HandlerFactoryCache.GetTypeWithAssert(String type) +38 System.Web.Configuration.HandlerFactoryCache.GetHandlerType(String type) +13 System.Web.Configuration.HandlerFactoryCache..ctor(String type) +19 System.Web.HttpApplication.GetFactory(String type) +81 System.Web.MaterializeHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +223 System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +184 Version Information: Microsoft .NET Framework Version: 4.0.30319; ASP.NET Version: 4.0.30319.1 What’s really odd about this is that it crashes only if it runs inside of IIS (it works fine in Cassini) and only if ASP.NET Compatibility is enabled in web.config: <serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true" /> Arrrgh!!!!! After some experimenting and some help from Glenn Block and his teammates I was able to track down the problem in ApplicationHost.config. Specifically, the problem was that there were multiple *.svc mappings in the ApplicationHost.config file and the older 2.0-runtime-specific versions weren’t marked for the proper runtime. Because these handlers show up at the top of the list, they execute first, resulting in assembly load errors for the wrong assembly version. To fix this problem I ended up making a couple of changes in applicationhost.config.
On the machine-level root’s handler mappings I had an entry that looked like this: <add name="svc-Integrated" path="*.svc" verb="*" type="System.ServiceModel.Activation.HttpHandler, System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" preCondition="integratedMode" /> and it needs to be changed to this: <add name="svc-Integrated" path="*.svc" verb="*" type="System.ServiceModel.Activation.HttpHandler, System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" preCondition="integratedMode,runtimeVersionv2.0" /> Notice the explicit runtime version assignment in the preCondition attribute, which is key to keeping ASP.NET 4.0 from executing that handler. The key here is that the runtime version needs to be set explicitly so that the various *.svc handlers don’t simply fire in the order defined, which in the case of a .NET 4.0 app with the original setting would result in an incompatible version of System.ServiceModel being loaded. What made this really hard to track down is that even when looking in the debugger while launching the web app, the AppDomain assembly loads showed System.ServiceModel V4.0 starting up just fine. Apparently the ASP.NET runtime load occurs at a different point, and that’s when things break. So how did this break? According to the Microsoft folks it’s some older tools that got installed that change the default service handlers. There’s a blog entry that points at this problem with more detail: http://blogs.iis.net/webtopics/archive/2010/04/28/system-typeloadexception-for-system-servicemodel-activation-httpmodule-in-asp-net-4.aspx Note that I tried running aspnet_regiis and that did not fix the problem for me. I had to manually change the entries in applicationhost.config. © Rick Strahl, West Wind Technologies, 2005-2011 Posted in AJAX  ASP.NET  WCF
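To audit a box for this problem, one option is to scan applicationhost.config for *.svc handler mappings that do not pin a runtime version — a hypothetical diagnostic sketch in Python (not something from the original post; reading that file typically requires elevation):

    # Hypothetical diagnostic: list *.svc handler mappings whose preCondition
    # attribute does not pin a runtimeVersion, i.e. candidates for this bug.
    import xml.etree.ElementTree as ET

    path = r"C:\Windows\System32\inetsrv\config\applicationHost.config"
    for add in ET.parse(path).getroot().iter("add"):
        if add.get("path", "").endswith(".svc") and "runtimeVersion" not in add.get("preCondition", ""):
            print("unpinned handler:", add.get("name"), "preCondition=", add.get("preCondition"))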

    Read the article

  • What’s New from the Oracle Marketing Cloud at Oracle OpenWorld 2014

    - by Kathryn Perry
    A Guest Post by Laura Vogel, Director, Oracle Marketing Cloud Events Marketing—CX Central is your hub for all things Marketing related at OpenWorld in San Francisco, September 28-October 2, 2014. Learn how to personalize the modern marketing journey to improve customer loyalty. We’re hosting more than 60 breakout sessions, half of which will highlight customer success stories from marquee brands including Bizo, Comcast, Dell, Epson, John Deere, Lane Bryant, ReadyTalk and Shutterfly. Moscone West, Levels 2 and 3 To learn more about how modern marketing works, visit Moscone West, levels 2 and 3, for exciting demos of each of the Oracle Marketing Cloud solutions (BlueKai, Compendium, Eloqua, Push I/O, and Responsys). You also can check out our stations for Vertical Marketing Best Practices, the Markie Awards, and more! CX Spotlight Sessions “Accelerating Big Profits in Big Data,” Jeff Tanner, Baylor University “Using Content Marketing to Impact Every Stage of the Buyer’s Journey,” Jennifer Agustin, Bizo “Expanding Your Marketing with Proven Testing and Optimization,” Brian Border, Shutterfly and Matthew Balthazor, Epson “Modern Marketing: The New Digital Dialogue,” Cory Treffiletti, Oracle A Special Marquee Session Dell’s Hayden Mugford will speak on "The Digital Ecosystem: Driving Experience Through Contact Engagement.” She will highlight how the organization built a digital ecosystem that supports a behaviorally driven, multivehicle nurturing campaign. The Dell 1:1 Global Marketing team worked with multiple partners to innovate integrations with Oracle Eloqua, Oracle Real-Time Decisions for real-time decision logic, and a content management system (CMS) that enables 100 percent customized e-mails. The program doubled average order values for nurtured contacts versus non-nurtured and tripled open and click-through rates versus push e-mail. It Wouldn’t Be an Oracle Marketing Cloud Event Without a Party! We’re hosting CX Central Fest: a unique customer experience specifically designed for attendees of CX Central. It will include a chance to rock out at a private concert featuring Los Angeles indie electronic pop group, Capital Cities! Join us Tuesday, September 30 from 7-9 p.m. Other Oracle Marketing Cloud Session Highlights Thought leadership by role Exploring the benefits of moving to the Cloud Product line roadmaps and innovations in Marketing Technical deep dives for product lines within Marketing Best practices and impactful business measurements Solutions that are integrated across CX Target Audience Session content is geared toward professionals in Marketing, Marketing Operations, Marketing Demand Generation, Social: Chief Marketing Officers, Vice Presidents, Directors and Managers.
Outcomes Customers attending Marketing—CX Central @ OpenWorld will be able to: Gain insight into delivering consistent cross-channel marketing Discover how to provide the right information to the right customer at the right time and with the right channel Get answers to burning questions and advice on business challenges Hear from other Oracle customers about recommended best practices to help their organization move forward Network and share ideas to help create a strategy for connecting with customers in better ways Resources At a Glance Register Now Track Site—View Marketing Sessions Focus on Session Doc Downloadable Justification Email OpenWorld is a fabulous way for you to see all that Oracle Marketing Cloud has to offer. Register today.

    Read the article

  • Unable to build my C++ code with g++ 4.6.3

    - by Mriganka
    I am facing multiple issues with building my C++ code on Ubuntu 12.04. This code was building and running fine on RH Enterprise. I am using g++ 4.6.3. Here's the output of g++ -v: g++ -v Using built-in specs. COLLECT_GCC=g++ COLLECT_LTO_WRAPPER=/usr/lib/gcc/i686-linux-gnu/4.6/lto-wrapper Target: i686-linux-gnu Configured with: ../src/configure -v --with-pkgversion='Ubuntu/Linaro 4.6.3-1ubuntu5' --with-bugurl=file:///usr/share/doc/gcc-4.6/README.Bugs --enable-languages=c,c++,fortran,objc,obj-c++ --prefix=/usr --program-suffix=-4.6 --enable-shared --enable-linker-build-id --with-system-zlib --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --with-gxx-include-dir=/usr/include/c++/4.6 --libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-gnu-unique-object --enable-plugin --enable-objc-gc --enable-targets=all --disable-werror --with-arch-32=i686 --with-tune=generic --enable-checking=release --build=i686-linux-gnu --host=i686-linux-gnu --target=i686-linux-gnu Thread model: posix gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) Here's a sample of my code: #include "Word.h" #include <string> using namespace std; pthread_mutex_t Word::_lock = PTHREAD_MUTEX_INITIALIZER; Word::Word(): _occurrences(1) { memset(_buf, 0, 25); } Word::Word(char *str): _occurrences(1) { memset(_buf, 0, 25); if (str != NULL) { strncpy(_buf, str, strlen(str)); } } None of g++ -c -ansi, g++ -c -std=c++98, or g++ -c -std=c++03 is able to build the code correctly. I get the following compilation errors: mriganka@ubuntu:~/WordCount$ make g++ -c -g -ansi Word.cpp -o Word.o Word.cpp: In constructor ‘Word::Word()’: Word.cpp:10:21: error: ‘memset’ was not declared in this scope Word.cpp: In constructor ‘Word::Word(char*)’: Word.cpp:16:21: error: ‘memset’ was not declared in this scope Word.cpp:19:34: error: ‘strlen’ was not declared in this scope Word.cpp:19:35: error: ‘strncpy’ was not declared in this scope Word.cpp: In member function ‘void Word::operator=(const Word&)’: Word.cpp:37:42: error: ‘strlen’ was not declared in this scope Word.cpp:37:43: error: ‘strncpy’ was not declared in this scope Word.cpp: In copy constructor ‘Word::Word(const Word&)’: Word.cpp:44:21: error: ‘memset’ was not declared in this scope Word.cpp:45:52: error: ‘strlen’ was not declared in this scope Word.cpp:45:53: error: ‘strncpy’ was not declared in this scope So basically g++ 4.6.3 on Ubuntu 12.04 is not able to recognize the standard C++ headers, and I am not finding a way out of this situation. Second problem: In order to make progress, I included <string.h> instead of <string>. But now I am facing linking errors with my message queue and pthread library functions.
Here's the error that I am getting: mriganka@ubuntu:~/WordCount$ make g++ -c -g -ansi Word.cpp -o Word.o g++ -lrt -I/usr/lib/i386-linux-gnu Word.o HashMap.o main.o -o word_count main.o: In function `main': /home/mriganka/WordCount/main.cpp:75: undefined reference to `pthread_create' /home/mriganka/WordCount/main.cpp:90: undefined reference to `mq_open' /home/mriganka/WordCount/main.cpp:93: undefined reference to `mq_getattr' /home/mriganka/WordCount/main.cpp:113: undefined reference to `mq_send' /home/mriganka/WordCount/main.cpp:123: undefined reference to `pthread_join' /home/mriganka/WordCount/main.cpp:129: undefined reference to `mq_close' /home/mriganka/WordCount/main.cpp:130: undefined reference to `mq_unlink' main.o: In function `count_words(void*)': /home/mriganka/WordCount/main.cpp:151: undefined reference to `mq_open' /home/mriganka/WordCount/main.cpp:154: undefined reference to `mq_getattr' /home/mriganka/WordCount/main.cpp:162: undefined reference to `mq_timedreceive' collect2: ld returned 1 exit status Here's my makefile: CC=g++ CFLAGS=-c -g -ansi LDFLAGS=-lrt INC=-I/usr/lib/i386-linux-gnu SOURCES=Word.cpp HashMap.cpp main.cpp OBJECTS=$(SOURCES:.cpp=.o) EXECUTABLE=word_count all: $(SOURCES) $(EXECUTABLE) $(EXECUTABLE): $(OBJECTS) $(CC) $(LDFLAGS) $(INC) -pthread $(OBJECTS) -o $@ .cpp.o: $(CC) $(CFLAGS) $< -o $@ clean: rm -f *.o word_count Please help me resolve both issues. I have searched online relentlessly for a solution to these problems, but no one seems to have encountered them.

    Read the article

< Previous Page | 665 666 667 668 669 670 671 672 673 674 675 676  | Next Page >