Search Results

Search found 48586 results on 1944 pages for 'page performance'.

  • KVM vs Hyper-V: which hypervisor is best for Windows guests?

    - by user198851
    I am currently testing OpenStack for Windows guests (XP and 7). I have deployed OpenStack "all in one" on a system with the following specs:

    Processor: Core i5 (4 physical cores, 8 threads with Hyper-Threading)
    RAM: 8 GB
    HD: 500 GB

    I have created 4 Windows XP guests, each with 512 MB RAM and 1 vCPU. On each Windows guest I have installed only Visual Studio 2008. In nova.conf the CPU over-commit ratio is 2 for better performance (as mentioned in the OpenStack operations guide). I am using KVM as the hypervisor. I have observed poor performance when simultaneously using Visual Studio in all four Windows instances. How can I improve performance? Should I use KVM or Hyper-V, or is there any other suggestion?
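    For context, the over-commit knobs the poster refers to live in nova.conf and are applied by the compute scheduler. A minimal sketch with illustrative values for this 4-core/8-thread, 8 GB host (the RAM line is my own assumption, not something the poster mentioned setting):

      # nova.conf -- illustrative values only
      cpu_allocation_ratio = 2.0    # 2 virtual CPUs schedulable per physical core (as in the question)
      ram_allocation_ratio = 1.0    # hypothetical: avoid RAM overcommit on an 8 GB host

    Note that with only 4 physical cores, four guests running Visual Studio at once will contend for CPU regardless of which hypervisor is used, so reducing vCPU overcommit or staggering the heavy workloads may help more than switching from KVM.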

    Read the article

  • OS Isolation: Virtualization or Dual-Boot Duplication, a General How To?

    - by Mr_CryptoPrime
    I want to isolate my Windows 7 operating system, and I have looked into virtualization. This works for Linux; however, I still want a way to run Windows 7 securely but without significant performance loss, which rules out virtualization for that case. I know that dual booting is possible because I currently do so with my XP/Linux system. Is there a way I can duplicate my Windows 7 system so I can select one at boot? This way I can ensure that each OS is isolated without worrying about performance loss. However, I am having a lot of trouble finding a solid method for OS duplication. Is this even possible, or must I buy two copies of Windows 7 and somehow install them separately? Any information regarding this would be helpful, thanks! Essentially I want:

    Two instances of Windows 7 (not necessarily running simultaneously)
    Each isolated from the other, so that a security breach in one doesn't affect the other
    No performance loss in either from doing so

    Read the article

  • How to Remove Headers in LaTeX

    - by Tim
    Hi, I would like to not show the name of each chapter in the header of that chapter's pages. I would also like to have nothing in the headers for the abstract, acknowledgements, table of contents, list of figures and list of tables. But currently I have a header on each page. Here is my code specifying these parts in my tex file:

      \begin{abstract}
      ...
      \end{abstract}
      \begin{acknowledgement}
      ...
      \end{acknowledgement}
      % generate table of contents
      \tableofcontents
      % generate list of tables
      \listoftables
      % generate list of figures
      \listoffigures
      \chapter{Introduction}
      \label{chp1}
      %% REFERENCES
      \appendix
      \input{appendiximages.tex}
      \bibliographystyle{plain}
      %%\bibliographystyle{abbrvnat}
      \bibliography{thesis}

    Here is what I believe to be the definition of the commands. More details can be found in these two files, jhu12.clo and thesis.cls.

      % \chapter:
      \def\chaptername{Chapter}

      % ABSTRACT
      % MODIFIED to include section name in headers
      \def\abstract{
        \newpage
        \dsp
        \chapter*{\abstractname\@mkboth{\uppercase{\abstractname}}{\uppercase{\abstractname}}}
        \fmfont
        \vspace{8pt}
        \addcontentsline{toc}{chapter}{\abstractname}
      }
      \def\endabstract{\par\vfil\null}

      % DEDICATION
      % Modified to make dedication its own section
      %\newenvironment{dedication}
      %{\begin{alwayssingle}}
      %{\end{alwayssingle}}
      \def\dedication{
        \newpage
        \dsp
        \chapter*{Dedication\@mkboth{DEDICATION}{DEDICATION}}
        \fmfont}
      \def\endacknowledgement{\par\vfil\null}

      % ACKNOWLEDGEMENTS
      % MODIFIED to include section name in headers
      \def\acknowledgement{
        \newpage
        \dsp
        \chapter*{\acknowledgename\@mkboth{\uppercase{\acknowledgename}}{\uppercase{\acknowledgename}}}
        \fmfont
        \addcontentsline{toc}{chapter}{\acknowledgename}
      }
      \def\endacknowledgement{\par\vfil\null}

      \def\thechapter {\arabic{chapter}}

      % \@chapapp is initially defined to be '\chaptername'. The \appendix
      % command redefines it to be '\appendixname'.
      %
      \def\@chapapp{\chaptername}

      \def\chapter{
        \clearpage
        \thispagestyle{plain}
        \if@twocolumn            % IF two-column style
          \onecolumn             % THEN \onecolumn
          \@tempswatrue          % @tempswa := true
        \else
          \@tempswafalse         % ELSE @tempswa := false
        \fi
        \dsp                     % double spacing
        \secdef\@chapter\@schapter}

      % TABLEOFCONTENTS
      % In ucthesis style, \tableofcontents, \listoffigures, etc. are always
      % set in single-column style. @restonecol
      \def\tableofcontents{\@restonecolfalse
        \if@twocolumn\@restonecoltrue\onecolumn\fi
        %%%%% take care of getting page number in right spot %%%%%
        \clearpage                % starts new page
        \thispagestyle{botcenter} % Page style of frontmatter is botcenter
        \global\@topnum\z@        % Prevents figures from going at top of page
        %%%%%
        \@schapter{\contentsname
          \@mkboth{\uppercase{\contentsname}}{\uppercase{\contentsname}}}%
        {\ssp\@starttoc{toc}}\if@restonecol\twocolumn\fi}

      \def\l@part#1#2{\addpenalty{-\@highpenalty}%
        \addvspace{2.25em plus\p@}% space above part line
        \begingroup
        \@tempdima 3em                          % width of box holding part number, used by
        \parindent \z@ \rightskip \@pnumwidth   %% \numberline
        \parfillskip -\@pnumwidth
        {\large \bfseries      % set line in \large boldface
        \leavevmode            % TeX command to enter horizontal mode.
        #1\hfil \hbox to\@pnumwidth{\hss #2}}\par
        \nobreak               % Never break after part entry
        \global\@nobreaktrue   %% Added 24 May 89 as
        \everypar{\global\@nobreakfalse\everypar{}}%% suggested by
                               %% Jerry Leichter
        \endgroup}

      % LIST OF FIGURES
      %
      % Single-space list of figures, add it to the table of contents.
      \def\listoffigures{\@restonecolfalse
        \if@twocolumn\@restonecoltrue\onecolumn\fi
        %%%%% take care of getting page number in right spot %%%%%
        \clearpage
        \thispagestyle{botcenter} % Page style of frontmatter is botcenter
        \global\@topnum\z@        % Prevents figures from going at top of page.
        \@schapter{\listfigurename\@mkboth{\uppercase{\listfigurename}}%
          {\uppercase{\listfigurename}}}
        \addcontentsline{toc}{chapter}{\listfigurename}
        {\ssp\@starttoc{lof}}\if@restonecol\twocolumn\fi}

      \def\l@figure{\@dottedtocline{1}{1.5em}{2.3em}}

      % bibliography
      \def\thebibliography#1{\chapter*{\bibname\@mkboth
        {\uppercase{\bibname}}{\uppercase{\bibname}}}
        \addcontentsline{toc}{chapter}{\bibname}
        \list{\@biblabel{\arabic{enumiv}}}{\settowidth\labelwidth{\@biblabel{#1}}%
          \leftmargin\labelwidth
          \advance\leftmargin\labelsep
          \usecounter{enumiv}%
          \let\p@enumiv\@empty
          \def\theenumiv{\arabic{enumiv}}}%
        \def\newblock{\hskip .11em plus.33em minus.07em}%
        \sloppy\clubpenalty4000\widowpenalty4000
        \sfcode`\.=\@m}

    Thanks and regards!

    EDIT: I just replaced \thispagestyle{botcenter} with \thispagestyle{plain}. The latter is said to clear the header (http://en.wikibooks.org/wiki/LaTeX/Page_Layout), but it does not. What should I do? Thanks!

    Read the article

  • A couple of pattern matching issues in Lua

    - by Josh
    I'm fairly new to Lua programming, but I'm also a quick study. I've been working on a weather forecaster for a program that I use, and it's working well, for the most part. Here is what I have so far. (Pay no attention to the zs. stuff; that is program specific and has no bearing on the Lua coding.)

      if not http then http = require("socket.http") end
      local locale = string.gsub(zs.params(1), "%s+", "%%20")
      local page = http.request("http://www.wunderground.com/cgi-bin/findweather/getForecast?query=" .. locale .. "&wuSelect=WEATHER")
      local location = string.match(page, 'title="([%w%s,]+) RSS"')
      --print("Gathering weather information for " .. location .. ".")
      --local windspeed = string.match(page, '<span class="nobr"><span class="b">([%d.]+)</span>&nbsp;mph</span>')
      --print(windspeed)
      -- NOTE: in Lua patterns '-' is a (lazy) quantifier, not a literal character;
      -- the unescaped hyphen in "icons-ecast" is a likely reason the two patterns
      -- below never match. Writing it as "icons%-ecast" treats the hyphen literally.
      local condition = string.match(page, '<td class="vaM taC"><img src="http://icons-ecast.wxug.com/i/c/a/[%w_]+.gif" width="42" height="42" alt="[%w%s]+" class="condIcon" />')
      --local image = string.match(page, '<img src="http://icons-ecast.wxug.com/i/c/a/(.+).gif" width="42" height="42" alt="[%w%s]+" class="condIcon" />')
      local temperature = string.match(page, 'pwsvariable="tempf" english="&deg;F" metric="&deg;C" value="([%d.]+)">')
      local humidity = string.match(page, 'pwsvariable="humidity" english="" metric="" value="(%d+)"')
      zs.say(location)
      --zs.say("image ./Images/" .. image .. ".gif")
      zs.say("<color limegreen>Condition:</color> <color white>" .. condition .. "</color>")
      zs.say("<color limegreen>Temperature: </color><color white>" .. temperature .. "F</color>")
      zs.say("<color limegreen>Humidity: </color><color white>" .. humidity .. "%</color>")

    My main issue is this: I changed the 'condition' and added the 'image' variables to what they are now. Even though the line each is supposed to match comes directly from the webpage, they fail to match at all. So I'm wondering what I'm missing that's preventing this code from working. If I take out the

      <td class="vaM taC"><img src="http://icons-ecast.wxug.com/i/c/a/[%w_]+.gif

    it'll match condition flawlessly. Can anyone point out what is wrong with it? Aside from the pattern matching, I assure you the line is verbatim from the webpage.

    Another question I had concerns the ability to match across line breaks. Is there any possible way to do this? The reason I ask is that on that same page, a few of the things I need to match are broken up onto separate lines, and since the actual pattern I want to match shows up in other places on the page, I need to be able to match across line breaks to get the exact pattern. I appreciate any help in this matter!

    Read the article

  • curl multipart/form-data help

    - by user253530
    Hi, I am trying to post some data to a website with cURL. The posting process has 3 steps:

    1. Enter a URL, submit, and arrive at step 2 with some fields already completed.
    2. Submit again after entering some more data, and preview the form.
    3. Submit the final data.

    The problem is that after the second step, the form data looks like this:

      POSTDATA = -----------------------------12249266671528
      Content-Disposition: form-data; name="title"

      Filme 2010, filme 2009, filme noi, programe TV, program cinema, premiere cinema, trailere filme - CineMagia.ro
      -----------------------------12249266671528
      Content-Disposition: form-data; name="category"

      3
      -----------------------------12249266671528
      Content-Disposition: form-data; name="tags"

      filme, programe tv, program cinema
      -----------------------------12249266671528
      Content-Disposition: form-data; name="bodytext"

      Filme 2010, filme 2009, filme noi, programe TV, program cinema, premiere cinema, trailere filme
      -----------------------------12249266671528
      Content-Disposition: form-data; name="trackback"

      -----------------------------12249266671528
      Content-Disposition: form-data; name="url"

      http://cinemagia.ro
      -----------------------------12249266671528
      Content-Disposition: form-data; name="phase"

      2
      -----------------------------12249266671528
      Content-Disposition: form-data; name="randkey"

      9510520
      -----------------------------12249266671528
      Content-Disposition: form-data; name="id"

      17753
      -----------------------------12249266671528--

    I am stuck trying to devise an algorithm that will generate this kind of POST data for the second step. Just to mention, the URL of the form never changes; it is always http://www.xxx.com/submit. There is only a hidden input called "phase" that changes according to the step I am currently on (phase = 1, phase = 2, phase = 3). Any help, be it code, pseudo-code or just guidance, would be greatly appreciated. My code so far:

      // $curl is the poster's own cURL wrapper object
      function postBlvsocialbookmarkingcom($curl, $vars)
      {
          extract($vars);
          $baseUrl = "http://www.blv-socialbookmarking.com/";

          // step 1: login
          $curl->setRedirect();
          $page = $curl->post($baseUrl.'login.php?return=/index.php',
              array('username' => $username, 'password' => $password,
                    'processlogin' => '1', 'return' => '/index.php'));
          if ($err = $curl->getError()) { return $err; }

          // post step 1: get the random key from the submit form
          $page = $curl->post($baseUrl.'/submit', array());
          $randomKey = explode('<input type="hidden" name="randkey" value="', $page);
          $randKey = explode('"', $randomKey[1]);

          $page = $curl->post($baseUrl.'/submit',
              array('url' => $address, 'phase' => '1',
                    'randkey' => $randKey[0], 'id' => 'c_1'));
          if ($err = $curl->getError()) { return $err; }
          //echo $page;

          // post step 2
          // NOTE: the captured step-2 POSTDATA above also carries url, randkey
          // and id fields, which this request does not send.
          $page = $curl->post($baseUrl.'/submit',
              array('title' => $title, 'category' => '1', 'tags' => $tags,
                    'bodytext' => $description, 'phase' => '2'));
          if ($err = $curl->getError()) { return $err; }
          echo $page;

          // post step 3
          $page = $curl->post($baseUrl.'/submit', array('phase' => '3'));
          if ($err = $curl->getError()) { return $err; }
          echo $page;
      }

    Read the article

  • Dynamic 'twitter style' URLs with ASP.NET

    - by Desiny
    I am looking to produce an MVC site which has complete control of the URL structure using routing. The specific requirements are:

    www.mysite.com/ = homepage (home controller)
    www.mysite.com/common/about = content page (common controller)
    www.mysite.com/common/contact = content page (common controller)
    www.mysite.com/john = twitter style user page (dynamic controller)
    www.mysite.com/sarah = twitter style user page (dynamic controller)
    www.mysite.com/me = premium style user page (premium controller)
    www.mysite.com/oldpage.html = 301 redirect to new page
    www.mysite.com/oldpage.asp?id=3333 = 301 redirect to new page

    My routes look as follows:

      routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

      routes.MapRoute(
          "Common",
          "common/{action}/{id}",
          new { controller = "common", action = "Index", id = "" }
      );

      routes.MapRoute(
          "Home",
          "",
          new { controller = "Home", action = "Index", id = "" }
      );

      routes.MapRoute(
          "Dynamic",
          "{id}",
          new { controller = "dynamic", action = "Index", id = "" }
      );

    In order to handle the 301 redirect, I have a database defining the old pages and their new page URLs, and a stored procedure to handle the lookup. The code (handler) looks like this:

      public class AspxCatchHandler : IHttpHandler, IRequiresSessionState
      {
          #region IHttpHandler Members

          public bool IsReusable
          {
              get { return true; }
          }

          public void ProcessRequest(HttpContext context)
          {
              if (context.Request.Url.AbsolutePath.Contains("aspx") &&
                  !context.Request.Url.AbsolutePath.ToLower().Contains("default.aspx"))
              {
                  string strurl = context.Request.Url.PathAndQuery.ToString();
                  string chrAction = "";
                  string chrDest = "";
                  try
                  {
                      DataTable dtRedirect = SqlFactory.Execute(
                          ConfigurationManager.ConnectionStrings["emptum"].ConnectionString,
                          "spGetRedirectAction",
                          new SqlParameter[] { new SqlParameter("@chrURL", strurl) },
                          true);
                      chrAction = dtRedirect.Rows[0]["chrAction"].ToString();
                      chrDest = dtRedirect.Rows[0]["chrDest"].ToString();
                      chrDest = context.Request.Url.Host.ToString() + "/" + chrDest;
                      chrDest = "http://" + chrDest;
                      if (string.IsNullOrEmpty(strurl))
                          context.Response.Redirect("~/");
                  }
                  catch
                  {
                      chrDest = "/"; // context.Request.Url.Host.ToString();
                  }
                  context.Response.Clear();
                  context.Response.Status = "301 Moved Permanently";
                  context.Response.AddHeader("Location", chrDest);
                  context.Response.End();
              }
              else
              {
                  string originalPath = context.Request.Path;
                  HttpContext.Current.RewritePath("/", false);
                  IHttpHandler httpHandler = new MvcHttpHandler();
                  httpHandler.ProcessRequest(HttpContext.Current);
                  HttpContext.Current.RewritePath(originalPath, false);
              }
          }

          #endregion
      }

    It is very simple to look up a user, and in fact the above code does this. My problem is in the dynamic/premium part. I am trying to do the following:

    1) In the dynamic controller, look up the username.
    2) If the username is in the user list (database), show the Index ActionResult of the Dynamic controller.
    3) If the username is not found, look up the username in the premium list.
    4) If the username is found in the premium list (database), then show the Index ActionResult of the Premium controller.
    5) If all else fails, jump to the 404 page (which will ask the user to sign up).

    Is this possible? Is looking up the user twice a bad idea for performance? How do I do this without redirecting?
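    On the no-redirect requirement: the dynamic controller can try both lists itself and render the premium view in place, so the URL never changes. A minimal sketch, not the poster's code; the repository calls are hypothetical stand-ins for the two database lookups:

      using System.Web.Mvc;

      public class DynamicController : Controller
      {
          public ActionResult Index(string id)
          {
              // Hypothetical lookup returning null when the name is not a regular user.
              var user = UserRepository.FindByName(id);
              if (user != null)
                  return View(user);

              // Fall back to the premium list; rendering a view from the Premium
              // folder keeps the URL unchanged, so no redirect is issued.
              var premium = PremiumRepository.FindByName(id);
              if (premium != null)
                  return View("~/Views/Premium/Index.aspx", premium);

              // Neither list matched: serve the 404/sign-up page.
              Response.StatusCode = 404;
              return View("NotFound");
          }
      }

    If the two-query cost is a concern, a single stored procedure that returns the row together with a user/premium flag answers both membership questions in one round trip.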

    Read the article

  • Is it possible to reference a LinkButton outside an UpdatePanel as the update trigger?

    - by Selase
    I have a page based on a master page, and as such I can only see the content placeholders I used in the master page showing up in the aspx pages based on it. The source code is shown below:

      <%@ Page Title="" Language="C#" MasterPageFile="~/Site.Master" AutoEventWireup="true" CodeBehind="CaseAdmin.aspx.cs" Inherits="Prototype4.CaseAdmin" %>
      <%@ PreviousPageType VirtualPath="~/Account/Login.aspx" %>

      <asp:Content ID="Content1" ContentPlaceHolderID="HeadContent" runat="server">
      </asp:Content>

      <asp:Content ID="CaseRightNews" ContentPlaceHolderID="RightNewsItem" runat="server">
      </asp:Content>

      <asp:Content ID="CaseLeftNav" ContentPlaceHolderID="LeftNavigation" runat="server">
        <div style="margin-top:20px; margin-bottom:20px;">
          <p class="actionButton">
            <asp:LinkButton ID="OpenCaseLinkButton" runat="server" onclick="OpenCaseLinkButton_Click">Open Case</asp:LinkButton>
          </p>
          <p class="actionButton">
            <asp:LinkButton ID="RegisterExhibitLinkButton" runat="server" onclick="RegisterExhibitLinkButton_Click">Register Exhibit</asp:LinkButton>
          </p>
        </div>
      </asp:Content>

      <asp:Content ID="CaseMainContnt" ContentPlaceHolderID="MainContent" runat="server">
        <asp:ScriptManager ID="ScriptManager" runat="server" />
        <asp:UpdatePanel ID="CaseMainCntntUpdatePanel" UpdateMode="Conditional" runat="server">
          <Triggers>
            <asp:AsyncPostBackTrigger ControlID="" EventName="Click" />
          </Triggers>
          <ContentTemplate>
            <%-- Some text here to inform the user to click the Open Case button to display the open case form --%>
          </ContentTemplate>
        </asp:UpdatePanel>
        <asp:UpdatePanel runat="server" ID="UpdatePanel1" UpdateMode="Conditional">
          <Triggers>
            <asp:AsyncPostBackTrigger ControlID="" EventName="Click" />
          </Triggers>
          <ContentTemplate>
            <%-- Some text here to inform users to click the Add Exhibit button to display the add exhibit form --%>
          </ContentTemplate>
        </asp:UpdatePanel>
      </asp:Content>

    The section of the entire page I wish to change upon update is the MainContent placeholder (this is the main content of the page). For this reason I placed the UpdatePanel inside that content placeholder, since it can't sit outside one. However, the buttons to which I wish to attach the trigger that fires the update are in another content placeholder (LeftNavigation). How can I get those buttons to act as the trigger while changing only what appears in the main content area?

    Also, I tried getting the UpdatePanel to work just so I could see if it updates well, but it turned out really badly. I added some LinkButtons in the ContentTemplate area and used them as the triggers for testing purposes. I tested, and the changes took over the entire page instead of appearing only in the content area. I actually just want to load a form that is created in another .aspx page into the main content area... I seriously need help with this. Every little bit of help, detail and information is dearly appreciated; thanks so much in advance.
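    A declarative AsyncPostBackTrigger's ControlID is resolved relative to the UpdatePanel's naming container, so it typically cannot see controls placed in a different ContentPlaceHolder. One common workaround is to register the buttons programmatically in the content page's code-behind; a minimal sketch against the markup above (the empty declarative <asp:AsyncPostBackTrigger> entries would be removed, since a blank ControlID fails at runtime):

      using System;
      using System.Web.UI;

      namespace Prototype4
      {
          public partial class CaseAdmin : Page
          {
              protected void Page_Load(object sender, EventArgs e)
              {
                  // Lets buttons living in another ContentPlaceHolder act as
                  // async postback triggers for this page's UpdatePanels.
                  ScriptManager sm = ScriptManager.GetCurrent(Page);
                  sm.RegisterAsyncPostBackControl(OpenCaseLinkButton);
                  sm.RegisterAsyncPostBackControl(RegisterExhibitLinkButton);
              }

              protected void OpenCaseLinkButton_Click(object sender, EventArgs e)
              {
                  // Populate the "open case" content here, then refresh only
                  // the conditional panel in the MainContent area.
                  CaseMainCntntUpdatePanel.Update();
              }
          }
      }

    With UpdateMode="Conditional", only the panel whose Update() you call re-renders, which should keep the change confined to the main content area rather than the whole page.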

    Read the article

  • Dot Net Nuke module works in "Edit" mode but not for "View": cache problem?

    - by Godeke
    I have a DNN module that simply runs some JavaScript to compute a price based on a few input fields. This module works fine on our production site, but we had a company do a skin for us to improve the look of the site, and the module fails under this new system. (DNN 05.06.00 (459), although it was 5.5 prior... I updated in a futile hope that it was a bug in the old revision.)

    What is incredibly odd about this is that the module works fine when I'm logged in to DNN and using Edit mode as an administrator. In this case the small snippet of JavaScript loads fine and filling the fields results in a price. On the other hand, if I click "View" (or, more importantly, if I'm not logged in at all) the page loads a cached copy. Even odder, I have found that the cache files in \Portals\2\Cache\Pages are generated and then only the cached data is used. When the cached copy is loaded, the JavaScript doesn't appear (it is normally created via a Page.ClientScript.RegisterClientScriptBlock()). Additionally, the button which posts the data to the server doesn't execute any of the server side code (confirmed with a debugger) but instead just reloads the cached copy. If I manually delete the files in \Portals\2\Cache\Pages then everything works properly, but I have to do so after every page load: failing to do so simply loads the page as it was last generated, repeatedly. Resetting the application (either via the UI or editing web.config) doesn't change this, and clearing the cache from the Host Settings page doesn't actually clear these cached pages.

    I'm guessing that Edit mode bypasses the cache in some way, but I have gone as far as turning off all caching on the site (which is horrible for performance) and the cached version is still loaded. Has anyone seen anything like this? Shouldn't clearing the cache clear the files (I'm using the File provider for caching)? Shouldn't even a cached page go back to the server if the user posts back?

    EDIT: I should point out that permissions don't appear to be a problem on the cache directory... other pages' cached output is deleted from this folder; just this page has this issue.

    EDIT 2: Clarifying some settings and conditions which I didn't provide. First, this module works fine in production under DNN 5.6.0. In our test environment with the consulting company's changes it fails (the changes are skin and page layout only in theory: the module source itself verifies as unchanged). All cache settings and the like have been verified to be the same between the two, and we only resorted to setting the module cache to 0 and -1 (and disabling the test site's cache entirely) when we couldn't find another cause for the problem. I have watched the cache work correctly on many other pages in test: there is something about this page that is causing the problem. We have punted and are creating an installable skin based on the consultant's work, as I suspect they have somehow corrupted the DNN install (database side, I think).

    Read the article

  • Oracle Announces Oracle Big Data Appliance X3-2 and Enhanced Oracle Big Data Connectors

    - by jgelhaus
    Enables Customers to Easily Harness the Business Value of Big Data at Lower Cost

    Engineered System Simplifies Big Data for the Enterprise

    Oracle Big Data Appliance X3-2 hardware features the latest 8-core Intel® Xeon E5-2600 series of processors, and compared with the previous generation, the 18 compute and storage servers with 648 TB raw storage now offer:

    33 percent more processing power with 288 CPU cores;
    33 percent more memory per node with 1.1 TB of main memory; and
    up to a 30 percent reduction in power and cooling.

    Oracle Big Data Appliance X3-2 further simplifies implementation and management of big data by integrating all the hardware and software required to acquire, organize and analyze big data. It includes:

    Support for CDH4.1, including software upgrades developed collaboratively with Cloudera to simplify NameNode High Availability in Hadoop, eliminating the single point of failure in a Hadoop cluster;
    Oracle NoSQL Database Community Edition 2.0, the latest version that brings better Hadoop integration, elastic scaling and new APIs, including JSON and C support;
    The Oracle Enterprise Manager plug-in for Big Data Appliance that complements Cloudera Manager to enable users to more easily manage a Hadoop cluster;
    Updated distributions of Oracle Linux and the Oracle Java Development Kit;
    An updated distribution of open source R, optimized to work with high performance multi-threaded math libraries.

    Read More
    Data sheet: Oracle Big Data Appliance X3-2
    Oracle Big Data Appliance: Datacenter Network Integration
    Big Data and Natural Language: Extracting Insight From Text
    Thomson Reuters Discusses Oracle's Big Data Platform

    Connectors Integrate Hadoop with Oracle Big Data Ecosystem

    Oracle Big Data Connectors is a suite of software built by Oracle to integrate Apache Hadoop with Oracle Database, Oracle Data Integrator, and Oracle R Distribution. Enhancements to Oracle Big Data Connectors extend these data integration capabilities. With updates to every connector, this release includes:

    Oracle SQL Connector for Hadoop Distributed File System, for high performance SQL queries on Hadoop data from Oracle Database, enhanced with increased automation and querying of Hive tables, and now supported within the Oracle Data Integrator Application Adapter for Hadoop;
    Transparent access to the Hive Query Language from R and the introduction of new analytic techniques executing natively in Hadoop, enabling R developers to be more productive by increasing access to Hadoop in the R environment.

    Read More
    Data sheet: Oracle Big Data Connectors
    High Performance Connectors for Load and Access of Data from Hadoop to Oracle Database

    Read the article

  • .NET Weak Event Handlers – Part II

    - by João Angelo
    On the first part of this article I showed two possible ways to create weak event handlers: one using reflection and the other using a delegate. For this performance analysis we will further differentiate between creating a delegate by providing the type of the listener at compile time (Explicit Delegate) and creating the delegate with the type of the listener obtained only at runtime (Implicit Delegate). As expected, the performance between reflection and delegates differs significantly. With the reflection-based approach, creating a weak event handler is just storing a MethodInfo reference, while with the delegate-based approach there is the need to create the delegate which will be invoked later. So, at creating the weak event handler, reflection clearly wins; but what about when the handler is invoked? No surprises there: performing a call through reflection every time a handler is invoked is costly. In conclusion, if you want good performance when creating handlers that only sporadically get triggered, use reflection; otherwise use the delegate-based approach. The explicit delegate approach always wins against the implicit delegate, but I find the syntax for the latter much more intuitive.

      // Implicit delegate - the listener type is inferred at runtime from the handler parameter
      public static EventHandler WrapInDelegateCall(EventHandler handler);
      public static EventHandler<TArgs> WrapInDelegateCall<TArgs>(EventHandler<TArgs> handler) where TArgs : EventArgs;

      // Explicit delegate - TListener is the type that defines the handler
      public static EventHandler WrapInDelegateCall<TListener>(EventHandler handler);
      public static EventHandler<TArgs> WrapInDelegateCall<TArgs, TListener>(EventHandler<TArgs> handler) where TArgs : EventArgs;
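    As a refresher for readers who missed part I, the explicit-delegate variant can be sketched roughly as follows. This is my own illustration, not the article's actual code; it uses an open instance delegate so the wrapper keeps no strong reference to the listener:

      using System;

      public static class WeakEventSketch
      {
          // Wraps an instance EventHandler so the subscriber is reachable only
          // through a WeakReference. TListener is the class declaring the method.
          // (Static handlers would need a separate path; omitted for brevity.)
          public static EventHandler WrapInDelegateCall<TListener>(EventHandler handler)
              where TListener : class
          {
              var weakTarget = new WeakReference(handler.Target);

              // Open instance delegate: the listener is supplied as the first
              // argument at invocation time, so the delegate itself does not
              // root the listener object.
              var open = (Action<TListener, object, EventArgs>)Delegate.CreateDelegate(
                  typeof(Action<TListener, object, EventArgs>), handler.Method);

              return (sender, args) =>
              {
                  var target = (TListener)weakTarget.Target;
                  if (target != null)          // listener still alive?
                      open(target, sender, args);
              };
          }
      }

    The implicit variant would do the same, except it discovers the listener type from handler.Method.DeclaringType and constructs the open delegate type via reflection at runtime.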

    Read the article

  • SQL SERVER – Guest Post – Jonathan Kehayias – Wait Type – Day 16 of 28

    - by pinaldave
    Jonathan Kehayias (Blog | Twitter) is an MCITP Database Administrator and Developer, who got started in SQL Server in 2004 as a database developer and report writer in the natural gas industry. After spending two and a half years working in TSQL, in late 2006, he transitioned to the role of SQL Database Administrator. His primary passion is performance tuning, where he frequently rewrites queries for better performance and performs in-depth analysis of index implementation and usage. Jonathan blogs regularly on SQLBlog, and was a coauthor of Professional SQL Server 2008 Internals and Troubleshooting.

    On a personal note, I think Jonathan is an extremely positive person. In every conversation with him I have found that he is always eager to help and encourage. Every time he finds something that needs to be improved, he has contacted me without hesitation and guided me to improve, change and learn. Through all of this, he has not lost his focus on helping the larger community. I am honored that he has agreed to provide his views on the complex subject of Wait Types and Queues. Currently I am reading his series on Extended Events. Here is the guest blog post by Jonathan:

    SQL Server troubleshooting is all about correlating related pieces of information to identify where exactly the root cause of a problem lies. In my daily work as a DBA, I generally get phone calls like, "So and so application is slow, what's wrong with the SQL Server." One of the funny things about the letters DBA is that they go so well with Default Blame Acceptor, and I really wish that I knew exactly who the first person was that pointed that out to me, because it really fits at times. A lot of times when I get this call, the problem isn't related to SQL Server at all, but every now and then in my initial quick checks, something pops up that makes me start looking at things further.

    The SQL Server is slow; we see a number of tasks waiting on ASYNC_IO_COMPLETION, IO_COMPLETION, or PAGEIOLATCH_* waits in sys.dm_exec_requests and sys.dm_exec_waiting_tasks. These are also some of the highest wait types in sys.dm_os_wait_stats for the server, so it would appear that we have a disk I/O bottleneck on the machine. A quick check of sys.dm_io_virtual_file_stats() and tempdb shows a high write stall rate, while our user databases show high read stall rates on the data files. A quick check of some performance counters shows Page Life Expectancy on the server bouncing up and down in the 50-150 range, the Free Pages counter consistently hitting zero, and the Free List Stalls/sec counter repeatedly jumping over 10, yet Buffer Cache Hit Ratio is 98-99%. Where exactly is the problem?

    In this case, which happens to be based on a real scenario I faced a few years back, the problem may not be a disk bottleneck at all; it may very well be a memory pressure issue on the server. A quick check of the system specs: it is a dual dual-core server with 8GB RAM running SQL Server 2005 SP1 x64 on Windows Server 2003 R2 x64. Max server memory is configured at 6GB and we think that this should be enough to handle the workload; or is it? This is a unique scenario because there are a couple of things happening inside this system, and they all relate to what the root cause of the performance problem is on the system.
    If we were to query sys.dm_exec_query_stats for the TOP 10 queries, by max_physical_reads, max_logical_reads, and max_worker_time, we may be able to find some queries that were using excessive I/O and possibly CPU against the system in their worst single execution. We can also CROSS APPLY to sys.dm_exec_sql_text() and see the statement text, and also CROSS APPLY sys.dm_exec_query_plan() to get the execution plan stored in cache. Ok, quick check: the plans are pretty big, I see some large index seeks that estimate 2.8GB of data movement between operators, but everything looks like it is optimized the best it can be. Nothing really stands out in the code, the indexing looks correct, and I should have enough memory to handle this in cache, so it must be a disk I/O problem, right? Not exactly!

    If we were to look at how much memory the plan cache is taking, by querying sys.dm_os_memory_clerks for the CACHESTORE_SQLCP and CACHESTORE_OBJCP clerks, we might be surprised at what we find. In SQL Server 2005 RTM and SP1, the plan cache was allowed to take up to 75% of the memory under 8GB. I'll give you a second to go back and read that again. Yes, you read it correctly: it says 75% of the memory under 8GB, but you don't have to take my word for it; you can validate this by reading Changes in Caching Behavior between SQL Server 2000, SQL Server 2005 RTM and SQL Server 2005 SP2. In this scenario the application uses an entirely adhoc workload against SQL Server, and this leads to plan cache bloat: up to 4.5GB of our 6GB of memory for SQL can be consumed by the plan cache in SQL Server 2005 SP1. This in turn reduces the size of the buffer cache to just 1.5GB, causing our 2.8GB of data movement in this expensive plan to completely flush the buffer cache, not just once initially, but another time during the query's execution, resulting in excessive physical I/O from disk. Keep in mind that this is not the only query executing at the time this occurs. Remember the output of sys.dm_io_virtual_file_stats() showed high read stalls on the data files for our user databases versus higher write stalls for tempdb? The memory pressure is also forcing heavier use of tempdb to handle sorting and hashing in the environment as well.

    The real clue here is the memory counters for the instance: Page Life Expectancy, Free List Pages, and Free List Stalls/sec. The fact that Page Life Expectancy is constantly fluctuating between 50 and 150 is a sign that the buffer cache is experiencing constant churn of data, once every minute to two and a half minutes. If you add to the Page Life Expectancy counter the consistent bottoming out of Free List Pages along with Free List Stalls/sec consistently spiking over 10, you have the perfect memory pressure scenario. All of a sudden it may not be that our disk subsystem is the problem; it is instead an innocent bystander and victim.

    Side Note: The Page Life Expectancy counter dropping briefly and then returning to normal operating values intermittently is not necessarily a sign that the server is under memory pressure. The Books Online and a number of other references will tell you that this counter should remain on average above 300, which is the time in seconds a page will remain in cache before being flushed or aged out. This number, which equates to just five minutes, is incredibly low for modern systems, and most published documents pre-date the predominance of 64-bit computing and the easy availability of larger amounts of memory in SQL Servers.
    As food for thought, consider that my personal laptop has more memory in it than most SQL Servers did at the time those numbers were posted. I would argue that today, a system churning the buffer cache every five minutes is in need of some serious tuning or a hardware upgrade.

    Back to our problem and its investigation: there are two things really wrong with this server. First, the plan cache is consuming an excessive amount of memory and is bloated in size, and we need to look at that; second, we need to evaluate upgrading the memory to accommodate the workload being performed. In the case of the server I was working on, there were a lot of single-use plans found in sys.dm_exec_cached_plans (where usecounts=1). Single-use plans waste space in the plan cache, especially when they are adhoc plans for statements with concatenated filter criteria that are not likely to reoccur with any frequency. SQL Server 2005 doesn't natively have a way to evict a single plan from cache like SQL Server 2008 does, but MVP Kalen Delaney showed a hack to evict a single plan by creating a plan guide for the statement and then dropping that plan guide in her blog post Geek City: Clearing a Single Plan from Cache. We could put that hack in place in a job to automate cleaning out all the single-use plans periodically, minimizing the size of the plan cache, but a better solution would be to fix the application so that it uses proper parameterized calls to the database. You didn't write the app, and you can't change its design? Ok, well, you could try to force parameterization to occur by creating and keeping plan guides in place, or we can try forcing parameterization at the database level by using ALTER DATABASE <dbname> SET PARAMETERIZATION FORCED, and that might help. If neither of these helps, we could periodically dump the plan cache for that database, an approach discussed as problematic in Kalen's blog post referenced above; not an ideal scenario.

    The other option is to increase the memory on the server to 16GB or 32GB, if the hardware allows it, which will increase the size of the plan cache as well as the buffer cache. In SQL Server 2005 SP1, on a system with 16GB of memory, if we set max server memory to 14GB the plan cache could use at most 9GB [(8GB*.75)+(6GB*.5)=(6+3)=9GB], leaving 5GB for the buffer cache. If we went to 32GB of memory and set max server memory to 28GB, the plan cache could use at most 16GB [(8*.75)+(20*.5)=(6+10)=16GB], leaving 12GB for the buffer cache. Thankfully we have SQL Server 2005 Service Packs 2, 3, and 4 these days, which include the changes in plan cache sizing discussed in the Changes to Caching Behavior between SQL Server 2000, SQL Server 2005 RTM and SQL Server 2005 SP2 blog post.

    In real life, when I was troubleshooting this problem, I spent a week trying to chase down the cause of the disk I/O bottleneck with our Server Admin and SAN Admin, and there wasn't much that could be done immediately there, so I finally asked if we could increase the memory on the server to 16GB, which did fix the problem. It wasn't until I had this same problem occur on another system that I actually figured out how to really troubleshoot this down to the root cause. I couldn't believe the size of the plan cache on the server with 16GB of memory when I actually learned about this and went back to look at it. SQL Server is constantly telling a story to anyone that will listen.
    As the DBA, you have to sit back and listen to all that it's telling you, and then evaluate the big picture and how all the data you can gather from SQL about performance relates together. One of the greatest tools out there is actually free, in the form of the Diagnostic Scripts for SQL Server 2005 and 2008 created by MVP Glenn Alan Berry. Glenn's scripts collect a majority of the information that SQL has to offer for rapid troubleshooting of problems, and he includes a lot of notes about what the outputs of each individual query might be telling you.

    When I read Pinal's blog post SQL SERVER – ASYNC_IO_COMPLETION – Wait Type – Day 11 of 28, I noticed that he referenced Checking Memory Related Performance Counters in his post, but there was no real explanation about why checking memory counters is so important when looking at an I/O related wait type. I thought I'd chat with him briefly on Google Talk/Twitter DM and point this out, and offer a couple of other points I noted, so that he could add the information to his blog post if he found it useful. Instead, he asked that I write a guest blog for this. I am honored to be a guest blogger, and to be able to share this kind of information with the community. The information contained in this blog post is a glimpse at how I do troubleshooting almost every day of the week in my own environment. SQL Server provides us with a lot of information about how it is running, and where it may be having problems; it is up to us to play detective and find out how all that information comes together to tell us what's really the problem.

    This blog post is written by Jonathan Kehayias (Blog | Twitter).

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: MVP, Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • The C++ Standard Template Library as a BDB Database (part 1)

    - by Gregory Burd
    If you've used C++ you have undoubtedly used the Standard Template Library (STL). Designed for in-memory management of data and collections of data, it is a core aspect of most C++ programs. Berkeley DB is a database library with a variety of APIs designed to ease development; one of those APIs extends and makes use of the STL for persistent, transactional data storage. dbstl is an STL-compatible API for Berkeley DB: you can use Berkeley DB through this API as if you were using C++ STL classes, while still making full use of Berkeley DB features. Being an STL library backed by a database, dbstl can provide some important and useful features that the standard C++ STL can't. The following are a few typical use cases for the dbstl extensions to the C++ STL for data storage.

    When data exceeds available physical memory. Berkeley DB dbstl can vastly improve performance when managing a dataset which is larger than available memory. Performance suffers when the data can't reside in memory because the OS is forced to use virtual memory and swap pages of memory to disk. Switching to BDB's dbstl improves performance while allowing you to keep using STL containers.

    When you need concurrent access to C++ STL containers. Few existing C++ STL implementations support concurrent access (create/read/update/delete) within a container; at best you'll find support for accessing different containers of the same type concurrently. With the Berkeley DB dbstl implementation you can concurrently access your data from multiple threads or processes with confidence in the outcome.

    When your objects are your database. You want object persistence in your application: store objects in a database and use them across different runs of your application without having to translate them to/from SQL. dbstl is capable of storing complicated objects, even those not located in a contiguous chunk of memory, directly to disk without any unnecessary overhead.

    These are a few reasons why you should consider using Berkeley DB's C++ STL support for your embedded database application. In the next few blog posts I'll show you a few examples of this approach; it's easy to use and easy to learn.

    Read the article

  • CodePlex Daily Summary for Thursday, December 09, 2010

    Popular Releases:

    AutoLoL: AutoLoL v1.4.3: AutoLoL now supports importing the build pages from Mobafire.com as well! Just insert the url to the build and voila. (For example: http://www.mobafire.com/league-of-legends/build/unforgivens-guide-how-to-build-a-successful-mordekaiser-24061) Stable release of AutoChat (it is still recommended to use it with caution and to read the documentation). It is now possible to associate *.lolm files with AutoLoL to quickly open them. The selected spells are now displayed in the masteries tab for qu...

    SubtitleTools: SubtitleTools 1.2: Added auto insertion of the RLE (RIGHT-TO-LEFT EMBEDDING) Unicode character for RTL languages. Fixed delete rows issue.

    PHP Manager for IIS: PHP Manager 1.1 for IIS 7: This is a final stable release of PHP Manager 1.1 for IIS 7. This is a minor incremental release that contains all the functionality available in 53121 plus the additional features listed below: Improved detection logic for existing PHP installations; PHP Manager now detects the location of the php.ini file in accordance with the PHP specifications. Configuring date.timezone; PHP Manager can automatically set the date.timezone directive, which is required to be set starting from PHP 5.3. Ability to ...

    Algorithmia: Algorithmia 1.1: Algorithmia v1.1, released on December 8th, 2010.

    SuperSocket, an extensible socket application framework: SuperSocket 1.0 SP1: Fixed bugs: fixed a potential bug where the running state hadn't been updated after the socket server stopped; fixed a synchronization issue when clearing timeout sessions; fixed a bug in ArraySegmentList; fixed a bug in getting a configuration value.

    CslaGenFork: CslaGenFork 4.0 CTP 2: The version is 4.0.1 CTP2, released 2010 December 7, and includes the following files: CslaGenFork 4.0.1-2010-12-07 Setup.msi and Templates-2010-10-07.zip. For getting started instructions, refer to the How to section. Overview of the changes: since CTP1 there were 53 work items closed (28 features, 24 issues and 1 task). During these 60 days a lot of work has been done in several areas. First the stereotypes: EditableRoot is OK, EditableChild is OK, EditableRootCollection is OK, Editable...

    Windows Workflow Foundation on Codeplex: WF AppFabric Caching Activity Pack 0.1: This release includes a set of AppFabric Caching Activities that allow you to use Windows Server AppFabric Caching with WF4. Video: endpoint.tv - New WF4 Caching Activities for Windows Server AppFabric. Activities: DataCacheAdd, DataCacheGet, DataCachePut, DataCacheRemove, WaitForCacheBulkNotification, WaitForCacheNotification, WaitForFailureNotification, WaitForItemNotification, WaitForRegionNotification. Unit tests are included in the source. Be sure to star...

    My Web Pages Starter Kit: 1.3.1 Production Release (Security HOTFIX): Due to a critical security issue, it's strongly advised to update the My Web Pages Starter Kit to this version. Possible attackers could misuse the image upload to transmit any type of file to the website. If you already have a running version of My Web Pages Starter Kit 1.3.0, you can just replace the ftb.imagegallery.aspx file in the root directory with the one attached to this release.

    EnhSim: EnhSim 2.2.0 ALPHA: This release adds in the changes for 4.03a at level 85. To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Updated En...

    ASP.NET MVC Project Awesome (jQuery Ajax helpers): 1.4: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager. New stuff: popup, WhiteSpaceFilterAttribute; tested on Mozilla, Safari, Chrome, Opera, IE 9b/8/7/6.

    nopCommerce, ASP.NET open source shopping cart: nopCommerce 1.90: To see the full list of fixes and changes please visit the release notes page (http://www.nopCommerce.com/releasenotes.aspx).

    TweetSharp: TweetSharp v2.0.0.0 - Preview 4: Documentation for this release may be found at http://tweetsharp.codeplex.com/wikipage?title=UserGuide&referringTitle=Documentation. Note: This code is currently preview quality. Preview 4 changes: reintroduced fluent interface support via a satellite assembly; added entities support, entity segmentation, and ITweetable/ITweeter interfaces for client development; numerous fixes reported by preview users. Preview 3 changes: numerous fixes and improvements to the core engine; Twitter API coverage: a...

    Aura: Aura Preview 1: Rewritten from scratch. This release supports getting the color only from the icon of the foreground window.

    MBG Extensions Library: MBG.Extensions_v1.3: Collections.CollectionExtensions: AddIfNew, RemoveRange (moved from ListExtensions to here, where it should have been). Collections.EnumerableExtensions: ToCommaSeparatedList has been replaced by Join() and ToValueSeparatedList. Join is for a single line of values; ToValueSeparatedList is generally for a collection and will separate each entity in the collection by a new line character. Also: ToQueue, ToStack. Core.ByteExtensions: TripleDESDecrypt. Core.DateTimeExtension...

    myCollections: Version 1.2: New in version 1.2: big performance improvement; new design (added Outlook-style view, new detail view, new Group By...); added sort by media; added manage movie studio; zoom preference is now saved; media names are now editable; added Portuguese version; you can now hide the details panel; added support for FLAC tags; you can now import books from a BibTex XML file; bug fixing.

    mytrip.mvc (CMS & e-Commerce): mytrip.mvc 1.0.49.0 beta: mytrip.mvc 1.0.49.0 beta web (for installed hosting). System requirements: .NET 4.0, MSSQL 2008 or MySql (auto creation of tables in the database); if .\SQLEXPRESS, auto creation of the database (App_Data folder). mytrip.mvc 1.0.49.0 beta src. System requirements: Visual Studio 2010 or Web Developer 2010, MSSQL 2008 or MySql, Connector/Net 6.3.4, MVC3 RC. WARNING: to run and debug mytrip.mvc 1.0.49.0 beta src, download and ...

    Menu and Context Menu for Silverlight 4.0: Silverlight Menu and Context Menu v2.3 Beta: Added keyboard navigation support with access keys. Shortcuts like Ctrl-Alt-A are now supported (where the browser permits it). The PopupMenuSeparator is now completely based on the PopupMenuItem class. Moved item manipulation code to a partial class in PopupMenuItemsControl.cs. Moved menu management and keyboard navigation code to the new PopupMenuManager class. Simplified the layout by removing the RootGrid element (all content is now placed in OverlayCanvas and is accessed by the new ...

    MiniTwitter: 1.62: (release notes lost to text encoding in this copy)

    Phalanger - The PHP Language Compiler for the .NET Framework: 2.0 (December 2010): The release is targeted for stable daily use. With improved performance and enhanced compatibility with several of the latest PHP open source applications, this release makes a perfect replacement for your old PHP runtime. Changes made within this release include the following and much more: performance improvements based on real-world application experience; we determined the biggest bottlenecks and found and removed overheads causing performance problems in many PHP applications. Reimplemented nat...

    Chronos WPF: Chronos v2.0 Beta 3: Release notes: updated introduction document; updated Visual Studio 2010 Extension (vsix) package; added horizontal scrolling to the main window TaskBar; added new styles for ListView, ListViewItem, GridViewColumnHeader, ...; added a new WindowViewModel class (allowing data fetching); added a new Navigate method (with several overloads) to the NavigationViewModel class (protected); reimplemented Task usage for the WorkspaceViewModel.OnDelete method; removed the reflection effect...

    New Projects:

    WinK: WinK Project
    1000 bornes: This project is the adaptation of the famous French card game 1000 bornes (http://en.wikipedia.org/wiki/Mille_Bornes). There will be 3 types of clients: a Windows application (WPF), an Internet application (ASP.NET, Ajax), and a Silverlight application. It's developed in C#.
    AutomaTones: BDSA Project 2010. Team Anders is developing an application that uses automatons to generate music.
    EIRENE: Unknown
    Final: TDD-driven analysis of available TDD frameworks, etc.
    HomeGrown Database Project tools: A set of tools that can be used to deploy Visual Studio SQL Database and Server Projects. Developed using Visual Basic .NET 4.0.
    mcssolution: no summary
    Mobile-enabled ASP.NET Web Forms / MVC application samples: Code samples for the whitepaper "Add mobile pages to your ASP.NET Web Forms / MVC application" linked from http://asp.net/mobile
    MSDI Projects: www.msdi.cn
    Object TreeView Visualizer: This is a helper library for easily visualizing an object in a TreeView. A nice feature for displaying data if an error happens.
    One Place To Rule Them All: Desktop system to manage basic system functions in a 3D environment. Optra also provides community support and easy transfer of data and setups between various devices using the XMPP protocol and an OpenFire jabber server.
    Performance Data Suite: The Performance Data Suite will help you to monitor, analyze and optimize your server infrastructure. There will be predefined sets of data collections (e.g. MySQL, Apache, IIS) but it will also help you to create collections of your own.
    Secure Group Communication in AdHoc Networks: Secure Group Communication in AdHoc Networks
    simweb: simweb is a research project owned by GCR, and all its copyright belongs to GCR. You can download the code for reference only; it may not be used commercially without a fee.
    starLiGHT.Engine: starLiGHT.Engine is a set of libraries for indie game developers using XNA. It has been in development for some years as a closed source project. Now I will release some (most) parts as Open Source (dual licensing).
    UMC .NET helper library (Junil Um): (description was not in English and is garbled in this copy)
    University of Ottawa tour for WP7: This is a Windows Phone 7 tour guide app for the University of Ottawa.
    vutpp for VS2010: C++ UnitTest GUI Addin

    Read the article

  • SQL SERVER – 2008 – Missing Index Script – Download

    - by pinaldave
    Download Missing Index Script with Unused Index Script

    Performance tuning is quite interesting, and indexes play a vital role in it. A proper index can improve performance and a bad index can hamper performance. Here is the script from my script bank which I use to identify missing indexes on any database. Please note that you should not create all the missing indexes this script suggests; this is just for guidance. You should not create more than 5-10 indexes per table. Additionally, this script sometimes does not give accurate information, so use your common sense. Anyway, the script is a good starting point. You should pay attention to Avg_Estimated_Impact when you are going to create an index. The index creation script is also provided in the last column.

    Download Missing Index Script with Unused Index Script

      -- Missing Index Script
      -- Original Author: Pinal Dave (C) 2011
      SELECT TOP 25
          dm_mid.database_id AS DatabaseID,
          dm_migs.avg_user_impact*(dm_migs.user_seeks+dm_migs.user_scans) Avg_Estimated_Impact,
          dm_migs.last_user_seek AS Last_User_Seek,
          OBJECT_NAME(dm_mid.OBJECT_ID, dm_mid.database_id) AS [TableName],
          'CREATE INDEX [IX_' + OBJECT_NAME(dm_mid.OBJECT_ID, dm_mid.database_id) + '_'
              + REPLACE(REPLACE(REPLACE(ISNULL(dm_mid.equality_columns,''),', ','_'),'[',''),']','')
              + CASE
                  WHEN dm_mid.equality_columns IS NOT NULL
                       AND dm_mid.inequality_columns IS NOT NULL THEN '_'
                  ELSE ''
                END
              + REPLACE(REPLACE(REPLACE(ISNULL(dm_mid.inequality_columns,''),', ','_'),'[',''),']','')
              + ']'
              + ' ON ' + dm_mid.statement
              + ' (' + ISNULL(dm_mid.equality_columns,'')
              + CASE
                  WHEN dm_mid.equality_columns IS NOT NULL
                       AND dm_mid.inequality_columns IS NOT NULL THEN ','
                  ELSE ''
                END
              + ISNULL(dm_mid.inequality_columns, '') + ')'
              + ISNULL(' INCLUDE (' + dm_mid.included_columns + ')', '') AS Create_Statement
      FROM sys.dm_db_missing_index_groups dm_mig
      INNER JOIN sys.dm_db_missing_index_group_stats dm_migs
          ON dm_migs.group_handle = dm_mig.index_group_handle
      INNER JOIN sys.dm_db_missing_index_details dm_mid
          ON dm_mig.index_handle = dm_mid.index_handle
      WHERE dm_mid.database_ID = DB_ID()
      ORDER BY Avg_Estimated_Impact DESC
      GO

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Download, SQL Index, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Optimize SUMMARIZE with ADDCOLUMNS in Dax #ssas #tabular #dax #powerpivot

    - by Marco Russo (SQLBI)
If you started using DAX as a query language, you might have encountered performance issues when using SUMMARIZE. The problem is related to the calculations you put in the SUMMARIZE by adding what are called extension columns, which compute their value within a filter context defined by the rows considered in the group that SUMMARIZE uses to produce each row in the output. Most of the time, for simple table expressions used in the first parameter of SUMMARIZE, you can optimize performance by removing the extension columns from the SUMMARIZE and adding them with an ADDCOLUMNS function. In practice, instead of writing SUMMARIZE( <table>, <group_by_column>, <column_name>, <expression> ) you can write:
ADDCOLUMNS(
    SUMMARIZE( <table>, <group_by_column> ),
    <column_name>, CALCULATE( <expression> )
)
The performance difference can be huge (orders of magnitude), but this optimization can also produce different semantics, in which case it should not be used. A longer discussion of this topic is included in my Best Practices Using SUMMARIZE and ADDCOLUMNS article on SQLBI, which also includes several details about the DAX syntax with extended columns. For example, did you know that you can create an extended column in SUMMARIZE and ADDCOLUMNS with the same name as an existing measure? It is *not* a good thing to do, and by reading the article you will discover why. Enjoy DAX!
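To make the pattern concrete, here is a minimal sketch as a DAX query; the Sales table, the Product[Color] column, and the Quantity column are hypothetical names chosen for illustration, not something from the article:

-- Extension column computed inside SUMMARIZE (often slower)
EVALUATE
SUMMARIZE(
    Sales,
    Product[Color],
    "Total Quantity", SUM( Sales[Quantity] )
)

-- Equivalent rewrite with ADDCOLUMNS (often much faster)
EVALUATE
ADDCOLUMNS(
    SUMMARIZE( Sales, Product[Color] ),
    "Total Quantity", CALCULATE( SUM( Sales[Quantity] ) )
)

Note the CALCULATE wrapper in the rewrite: it triggers the context transition that SUMMARIZE performs implicitly for its extension columns; without it, the SUM would ignore the current group and return the grand total on every row.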

    Read the article

  • Great Blogs About Oracle Solaris 11

    - by Markus Weber
Now that Oracle Solaris 11 has been released, why not blog about blogs. There is of course a tremendous amount of resource and information available, but valuable insights directly from people actually building the product are priceless. Here's a list of such great blogs. NOTE: If you think we missed some good ones, please let us know in the comments section!
Top 11 Things: My 11 favourite Solaris 11 features (Darren Moffat)
Top 11 Things: These are 11 of my favorite things! (Mike Gerdts)
Top 11 Things: 11 reason to love Solaris 11 (Jim Laurent)
SysAdmin Resources: Solaris 11 Resources for System Administrators (Rick Ramsey)
Overview: Oracle Solaris 11: The First Cloud OS (Larry Wake)
Overview: What's a "Cloud Operating System"? (Harry Foxwell)
Overview: What's New in Oracle Solaris 11 (Jeff Victor)
Try it!: Virtually the fastest way to try Solaris 11 (and Solaris 10 zones) (Dave Miner)
Upgrade: Upgrading Solaris 11 Express b151a with support to Solaris 11 (Alan Hargreaves)
IPS: The IPS System Repository (Tim Foster)
IPS: Building a Solaris 11 repository without network connection (Jim Laurent)
IPS: IPS Self-assembly – Part 1: overlays (Tim Foster)
IPS: Self assembly – Part 2: multiple packages delivering configuration (Tim Foster)
Security: Immutable Zones on Encrypted ZFS (Darren Moffat)
Security: User home directory encryption with ZFS (Darren Moffat)
Security: Password (PAM) caching for Solaris su - "a la sudo" (Darren Moffat)
Security: Completely disabling root logins on Solaris 11 (Darren Moffat)
Security: OpenSSL Version in Solaris (Darren Moffat)
Security: Exciting Crypto Advances with the T4 processor and Oracle Solaris 11 (Valerie Fenwick)
Performance: Critical Threads Optimization (Rafael Vanoni)
Performance: SPARC T4-2 Delivers World Record SPECjvm2008 Result with Oracle Solaris 11 (BestPerf Blog)
Performance: Recent Benchmarks Using Oracle Solaris 11 (BestPerf Blog)
Predictive Self Healing: Introducing SMF Layers (Sean Wilcox)
Predictive Self Healing: Oracle Solaris 11 - New Fault Management Features (Gavin Maltby)
Desktop: What's new on the Solaris 11 Desktop? (Calum Benson)
Desktop: S11 X11: ye olde window system in today's new operating system (Alan Coopersmith)
Desktop: Accessible Oracle Solaris 11 - released! (Peter Korn)

    Read the article

  • links for 2010-04-13

    - by Bob Rhubart
Frederic Michiar: Manage a flexible and elastic Data Center with Oracle VM Manager
Frederic Michiar shares a list of Oracle VM resources. (tags: otn oracle virtualization)

Mona Rakibe: BAM Data Control in multiple ADF Faces Components
"When two or more ADF Faces components must display the same data, and are bound to the same Oracle BAM data control definition, we have to make sure that we wrap each ADF Faces component in an ADF task flow, and set the Data Control Scope to isolated." Mona Rakibe shows you how. (tags: oracle otn soa bam adf)

Martin Widlake: Performance Tipping Points
Martin Widlake offers "a nice example of a performance tipping point. This is where Everything is OK until you reach a point where it all quickly cascades to Not OK." (tags: oracle otn database architecture performance)

Steve Chan: EBS Techstack Sessions at OAUG/Collaborate 2010
Steve Chan shares a list of Collaborate 2010 sessions featuring Oracle E-Business Suite Applications Technology Group staffers. (tags: oracle otn collaborate2010 ebs)

@ORACLENERD: Developing in APEX
Oracle ACE Chet Justice counts the ways... (tags: otn oracle oracleace apex)

@bex: Almost Time For IOUG Collaborate 2010
Oracle ACE Director Bex Huff shares details on his Collaborate 2010 presentation, "The Top 10 Things Oracle UCM Customers Need To Know About WebLogic." (tags: oracle otn oracleace collaborate2010 weblogic ucm enterprise2.0)

    Read the article

  • MySQL Connect Content Catalog Live

    - by Bertrand Matthelié
The MySQL Connect Content Catalog is now live, and you can check out the great program the content committee put together for you. We received a lot of very good submissions during the call for papers and we'd like to thank you all again for those; it was a very difficult job to choose. Over its two days, MySQL Connect will include:
Keynotes, with speakers such as Oracle Chief Corporate Architect Edward Screven and Vice President of MySQL Engineering Tomas Ulin
66 conference sessions, enabling you to hear from: Oracle engineers on MySQL 5.6 new features, InnoDB, performance and scalability, security, NoSQL, MySQL Cluster…and more; MySQL users and customers including Facebook, Twitter, PayPal, Yahoo, Ticketmaster, and CERN; internationally recognized MySQL community members and partners on topics such as performance, security or high availability
6 Birds-of-a-Feather sessions, in which you'll be able to engage in passionate discussions about replication, backup and other subjects, and help influence the MySQL roadmap
8 Hands-On Labs designed to give you hands-on experience with MySQL replication, MySQL Cluster, the MySQL Performance Schema…and more
Demo pods about MySQL Workbench, MySQL Cluster, MySQL Enterprise Edition and other technologies and services
We'll also have networking receptions on both Saturday and Sunday evening, enabling you to talk with the Oracle engineers developing and supporting the MySQL products, as well as with other users and customers. Additionally, you'll have the opportunity to meet and learn from our partners in the exhibition hall. Some of the MySQL Connect speakers, such as Henrik Ingo and Andrew Morgan, have already blogged about their presence at MySQL Connect, and you can find more information about their sessions and their thoughts about the conference in their blogs. We also published an interview with Tomas Ulin a few weeks ago. In summary, don't miss MySQL Connect! You only have about 3 weeks left to register with the early bird discount and save US$500. Don't wait, Register Now! Interested in sponsorship and exhibit opportunities? You will find more information here.

    Read the article

  • Making a Case For The Command Line

    - by Jesse Taber
Originally posted on: http://geekswithblogs.net/GruffCode/archive/2013/06/30/making-a-case-for-the-command-line.aspx

I have had an idea percolating in the back of my mind for over a year now that I've just recently started to implement. This idea relates to building out "internal tools" to ease the maintenance and on-going support of a software system. The system that I currently work on is (mostly) web-based, so traditionally we have built these internal tools in the form of pages within the app that are only accessible by our developers and support personnel. These pages allow us to perform tasks within the system that, for one reason or another, we don't want to let our end users perform (e.g. mass create/update/delete operations on data, flipping switches that turn paid modules of the system on or off, etc). When we try to build new tools like this we often struggle with the level of effort required to build them.

Effort Required
Creating a whole new page in an existing web application can be a fairly large undertaking. You need to create the page and ensure it will have a layout that is consistent with the other pages in the app. You need to decide what types of input controls need to go onto the page. You need to ensure that everything uses the same style as the rest of the site. You need to figure out what the text on the page should say. Then, when you figure out that you forgot about an input that should really be present, you might have to go back and re-work the entire thing. Oh, and in addition to all of that, you still have to, you know, write the code that actually performs the task. Everything other than the code that performs the task at hand is just overhead. We don't need a fancy date picker control in a nicely styled page for the vast majority of our internal tools. We don't even really need a page, for that matter. We just need a way to issue a command to the application and have it, in turn, execute the code that we've written to accomplish a given task. All we really need is a simple console application!

Plumbing Problems
A former co-worker of mine, John Sonmez, always advocated the Unix philosophy for building internal tools: start with something that runs at the command line, and then build a UI on top of that if you need to. John's idea has a lot of merit, and we tried building out some internal tools as simple Console applications. Unfortunately, this was often easier said than done. Doing a "File –> New Project" to build out a tool for a mature system can be pretty daunting because that new project is totally empty. In our case, the web application code had a lot of "plumbing" built in: it managed authentication and authorization, it handled database connection management for our multi-tenanted architecture, and it managed all of the context that needs to follow a user around the application, such as their timezone and regional/language settings. In addition, the configuration file for the web application (a web.config in our case, because this is an ASP.NET application) is large and would need to be reproduced into a similar configuration file for a Console application. While most of these problems could be solved pretty easily with some refactoring of the codebase, building Console applications for internal tools still potentially suffers from one pretty big drawback: you'd have to execute them on a machine with network access to all of the needed resources.
Obviously, our web servers can easily communicate with the database servers and can publish messages to our service bus, but the same is not true for all of our developer and support personnel workstations. We could have everyone run these tools remotely via RDP or SSH, but that's a bit cumbersome and certainly a lot less convenient than having the tools built into the web application that is so easily accessible.

Mix and Match
So we need a way to build tools that are easily accessible via the web application but also don't require the overhead of creating a user interface. This is where my idea comes into play: why not just build a command line interface into the web application? If it's part of the web application we get all of the plumbing that comes along with that code, and we're executing everything on the web servers, which means we'll have access to any external resources that we might need. Rather than having to incur the overhead of creating a brand new page for each tool that we want to build, we can create one new page that simply accepts a command in text form and executes it as a request on the web server. In this way, we can focus on writing the code to accomplish the task. If the tool ends up being heavily used, then (and only then) should we consider spending the time to build a better user experience around it. To be clear, I'm not trying to downplay the importance of building great user experiences into your system; we should all strive to provide the best UX possible to our end users. I'm only advocating this sort of bare-bones interface for internal consumption by the technical staff that builds and supports the software. This command line interface should be the "back end" to a highly polished and eye-pleasing public face.

Implementation
As I mentioned at the beginning of this post, this is an idea that I've had for a while but have only recently started building out. I've outlined some general guidelines and design goals for this effort as follows:
Text in, text out: In the interest of keeping things as simple as possible, I want this interface to be purely text-based. Users will submit commands as plain text, and the application will provide responses in plain text. Obviously this text will be "wrapped" within the context of HTTP requests and responses, but I don't want to have to think about HTML or CSS when taking input from the user or displaying responses back to the user.
Task-oriented code only: After building the initial "harness" for this interface, the only code that should need to be written to create a new internal tool should be code that is expressly needed to accomplish the task that the tool is intended to support. If we want to encourage and enable ourselves to build good tooling, we need to lower the barriers to entry as much as possible.
Built-in documentation: One of the great things about most command line utilities is the 'help' switch that provides usage guidelines and details about the arguments that the utility accepts. Our web-based command line utility should allow us to build the documentation for these tools directly into the code of the tools themselves.
I finally started trying to implement this idea when I heard about a fantastic open-source library called CLAP (Command Line Auto Parser) that lets me meet the guidelines outlined above. CLAP lets you define classes with public methods that can be easily invoked from the command line.
Here’s a quick example of the code that would be needed to create a new tool to do something within your system:

public class CustomerTools
{
    [Verb]
    public void UpdateName(int customerId, string firstName, string lastName)
    {
        // invoke internal services/domain objects/whatever to perform the update
    }
}

This is just a regular class with a single public method (though you could have as many methods as you want). The method is decorated with the 'Verb' attribute that tells the CLAP library that it is a method that can be invoked from the command line. Here is how you would invoke that code:

Parser.Run(args, new CustomerTools());

Note that 'args' is just a string[] that would normally be passed in from the static Main method of a Console application. Also, CLAP allows you to pass in multiple classes that define [Verb] methods, so you can opt to organize the code that CLAP will invoke in any way that you like. You can invoke this code from a command line application like this:

SomeExe UpdateName -customerId:123 -firstName:Jesse -lastName:Taber

'SomeExe' in this example just represents the name of the .exe that would be created from our Console application. CLAP then interprets the arguments passed in to find the method that should be invoked and automatically parses out the parameters that need to be passed in. After a quick spike, I've found that invoking the 'Parser' class can be done from within the context of a web application just as easily as it can from within the 'Main' method entry point of a Console application. There are, however, a few sticking points that I'm working around:
Splitting arguments into the 'args' array like the command line: When you invoke a standard .NET console application you get the arguments that were passed in by the user split into a handy array (this is the 'args' parameter referenced above). Generally speaking they get split by whitespace, but it's also clever enough to handle things like ignoring whitespace in a phrase that is surrounded by quotes. We'll need to re-create this logic within our web application so that we can give the 'args' value to CLAP just like a console application would (a rough sketch follows at the end of this list).
Providing a response to the user: If you were writing a console application, you might just use Console.WriteLine to provide responses to the user as to the progress and eventual outcome of the command. We can't use Console.WriteLine within a web application, so I'll need to find another way to provide feedback to the user. Preferably this approach would allow me to use the same handler classes from both a Console application and a web application, so some kind of strategy pattern will likely emerge from this effort.
Submitting files: Often an internal tool needs to support doing some kind of operation in bulk, and the easiest way to submit the data needed to support the bulk operation is in a file. Getting the file uploaded and available to the CLAP handler classes will take a little bit of effort.
Mimicking the console experience: This isn't really a requirement so much as a "nice to have". To start out, the command-line interface in the web application will probably be a single 'textarea' control with a button to submit the contents to a handler that will pass it along to CLAP to be parsed and run. I think it would be interesting to use some javascript and CSS trickery to change that page into something with more of a "shell" interface look and feel.
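As a first stab at the argument-splitting problem above, here is a rough sketch of a quote-aware splitter; this is illustrative code of my own, not part of CLAP, and the class and method names are hypothetical:

using System;
using System.Collections.Generic;
using System.Text;

public static class CommandTextSplitter
{
    // Splits a raw command string into args, honoring double-quoted phrases:
    //   UpdateName -customerId:123 -lastName:"Taber Jr"
    // becomes ["UpdateName", "-customerId:123", "-lastName:Taber Jr"]
    public static string[] Split(string commandText)
    {
        var args = new List<string>();
        var current = new StringBuilder();
        bool inQuotes = false;

        foreach (char c in commandText ?? string.Empty)
        {
            if (c == '"')
            {
                inQuotes = !inQuotes; // toggle quoted mode; drop the quote itself
            }
            else if (char.IsWhiteSpace(c) && !inQuotes)
            {
                if (current.Length > 0) // whitespace outside quotes ends the current arg
                {
                    args.Add(current.ToString());
                    current.Clear();
                }
            }
            else
            {
                current.Append(c);
            }
        }

        if (current.Length > 0)
            args.Add(current.ToString());

        return args.ToArray();
    }
}

The resulting array can then be handed straight to CLAP from the web handler, e.g. Parser.Run(CommandTextSplitter.Split(commandText), new CustomerTools());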
I’ll be blogging more about this effort in the future and will include some code snippets (or maybe even a full blown example app) as I progress. I also think that I’ll probably end up either submitting some pull requests to the CLAP project or possibly forking/wrapping it into a more web-friendly package and open sourcing that.

    Read the article

  • Java, the Cloud, and Oracle at QCon San Francisco 2011

    - by Bob Rhubart
If you're part of the lucky bunch attending this week's sold-out QCon San Francisco conference at the Westin San Francisco Market Street, I'd like to bring several sessions to your attention. On Wednesday Nov 16, Alex Buckley, specification lead for the Java Language and the Java Virtual Machine at Oracle, will present Java 7 and 8: Where We've Been, Where We're Going, part of the Why is Java still sexy? track. The session begins at 10:35 a.m. in the Olympic room. On Thursday Nov 17, Tyler Jewell, VP Product Management for Oracle's Platform as a Service, will participate in the Performance and Scalability Panel moderated by InfoQ founder and QCon SF Program Committee Member Floyd Marinescu. That panel, part of the Performance and Scalability Solutions track, begins at 10:35 a.m. in the Olympic room. Following that panel discussion, Tyler will fly solo with a presentation on Java EE 7: Developing for the Cloud, also part of the Performance and Scalability Solutions track. That session kicks off at 12:05 p.m., also in the Olympic room. On Friday Nov 18 Tyler will jump tracks, so to speak, when he presents The Architecture of Oracle's Public Cloud, part of the Architecture Case Studies: Cloud track. That session begins at 4:50 p.m. in the Stanford room. Of course, QCon also offers ample meet-and-greet opportunities. One such opportunity happens in the hospitality suite hosted by the Java Community Process Executive Committee. That shindig gets in gear at 5:50 p.m. on Thursday. Throughout the QCon San Francisco conference, members of the OTN team (including yours truly) and members of the Oracle Fusion Middleware team will be on hand at the OTN booth in the conference lobby. Stop by to say hello, score some swag, and catch a demo or two.

    Read the article

  • You Need BRM When You have EBS – and Even When You Don’t!

    - by bwalstra
Here is a list of criteria for evaluating your business systems (Oracle E-Business Suite, EBS, or otherwise) that support your lines of digital business; if you score low, you need Oracle Billing and Revenue Management (BRM).

Functions: Scalability; High Availability (99.999%); Performance; Extensibility (e.g. APIs, Tools); Upgradability; Maintenance; Security; Standards Compliance; Regulatory Compliance (e.g. SOX); User Experience; Implementation Complexity
Features: Customer Management; Real-Time Service Authorization; Pricing/Promotions Flexibility; Subscriptions; Usage Rating and Pricing; Real-Time Balance Mgmt.; Non-Currency Resources; Billing & Invoicing; A/R & G/L; Payments & Collections; Revenue Assurance
Integration with Key Enterprise Applications: Reporting; Business Intelligence; Order & Service Mgmt (OSM); Siebel CRM; E-Business Suite; On-/Off-line Mediation; Payment Processing; Taxation; Royalties & Settlements
Operations Management: Disaster Recovery
Overall Evaluation: Implementation; Configuration; Extensibility; Maintenance; Upgradability; Functional Richness; Feature Richness; Usability; OOB Integrations; Operations Management; Leveraging Oracle Technology; Overall Fit for Purpose

You need Oracle BRM:
Built for high-volume transaction processing
Monetizes any service or event based on any metric
Supports high-volume usage rating, pricing and promotions
Provides real-time charging, service authorization and balance management
Supports any account structure (e.g. corporate hierarchies etc.)
Scales from low volumes to extremely high volumes of transactions (e.g. billions of transactions per hour)
Exposes every single function via APIs (e.g. Java, C/C++, PERL, COM, Web Services, JCA)

Immediate Business Benefits of BRM:
Improved business agility and performance: supports the flexibility, innovation, and customer-centricity required for current and future business models
Faster time to market for new products and services: supports a 360 view of the customer in real time; products can be launched to targeted customers at a record-breaking pace
Streamlined deployment and operation: productized integrations, standards-based APIs, and OOB enablement lower deployment and maintenance costs
Extensible and scalable solution: minimizes risk; the initial phase is deployed rapidly, and the solution is extended and scaled seamlessly per business requirements

Key Considerations:
Productized integration with key Oracle applications: lower integration risks and cost; efficient order-to-cash process
Engineered solution: certification on the Exa platform; Exadata tested at PayPal in the re-platforming project; optimal performance of Oracle assets on Oracle hardware
Productized solution in Rapid Offer Design and Order Delivery: fast offer design and implementation; significantly shorter order cycle time
Productized integration with Oracle Enterprise Manager: visibility into system operability for optimal uptime

    Read the article

  • CodePlex Daily Summary for Sunday, March 06, 2011

CodePlex Daily Summary for Sunday, March 06, 2011

Popular Releases
IIS Tuner: IIS Tuner 1.0: IIS and ASP.NET performance optimization tool.
Minemapper: Minemapper v0.1.6: Once again supports biomes, thanks to an updated Minecraft Biome Extractor, which added support for the new Minecraft beta v1.3 map format. Updated mcmap to support the new biome format.
CRM 2011 OData Query Designer: CRM 2011 OData Query Designer: The CRM 2011 OData Query Designer is a Silverlight 4 application that is packaged as a Managed CRM 2011 Solution. This tool allows you to build OData queries by selecting filter criteria, select attributes and order-by attributes. The tool also allows you to execute the query and view the ATOM and JSON data returned. The look and feel of this component will improve and new functionality will be added in the near future, so please provide feedback on your experience. Import this solution int...
AutoLoL: AutoLoL v1.6.4: It is now possible to run the clicker anyway when it can't detect the Masteries Window; fixed a critical bug in the open file dialog; removed the resize button; some UI changes; 3D camera movement is now more intuitive (trackball rotation); when an error occurs on the clicker it will attempt to focus AutoLoL.
YAF.NET (aka Yet Another Forum.NET): v1.9.5.5 RTW: YAF v1.9.5.5 RTM (Date: 3/4/2011 Rev: 4742). Official discussion thread here: http://forum.yetanotherforum.net/yaf_postsm47149_v1-9-5-5-RTW--Date-3-4-2011-Rev-4742.aspx Changes in v1.9.5.5: Rev. #4661 - Added "Copy" function to forum administration -- now instead of having to manually re-enter all the access masks, etc., you can just duplicate an existing forum and modify after the fact. Rev. #4642 - New setting to enable/disable Last Unread posts links. Rev. #4641 - Added Arabic language t...
Snippet Designer: Snippet Designer 1.3.1: Snippet Designer 1.3.1 for Visual Studio 2010. This is a bug fix release. Change log: fixed bug where Snippet Designer would fail if you had the most recent Productivity Power Tools installed; fixed bug where "Export as Snippet" was failing in non-English locales; fixed bug where opening a new .snippet file would fail in non-English locales.
Chiave File Encryption: Chiave 1.0: Final release for Chiave 1.0 Stable: application for file encryption and decryption using the 512-bit Rijndael encryption algorithm, with a simple-to-use UI. It's written in C# and compiled for .NET 3.5. It incorporates features of Windows 7 like Jump Lists, taskbar progress and Aero Glass. Now with added support for Windows XP! Change log from 0.9.2 to 1.0: Added icon overlay for the Windows 7 taskbar icon. Added thumbnail toolbar buttons to make the navigation easier...
DotNetNuke® Community Edition: 05.06.02 Beta: Major highlights: fixed issue where "My Folder" was not available in the URL control and the Telerik HTML Editor; fixed issue where HTML Editor dialogs were not displaying correctly in alternate languages; fixed issue with the regex for email validation; fixed race condition in the core scheduler; fixed issue where editing Host page settings would result in a broken host menu; fixed issue where the "Apply to All Modules" setting was not propagating settings correctly; fixed issue where browser lan...
DirectQ: Release 1.8.7 (RC1): Release candidate 1 of 1.8.7.
GoogleTrail: TrailMap Beta 1: TrailMap beta 1 release: now we have an updated custom map builder, a complete gpx file editor, and an elevation data update service for any gpx file (currently supports Google only).
Chirpy - VS Add In For Handling Js, Css, DotLess, and T4 Files: Margogype Chirpy (ver 2.0): Chirpy loves Americans. Chirpy hates Americanos.
ASP.NET: Sprite and Image Optimization Preview 3: The ASP.NET Sprite and Image Optimization framework is designed to decrease the amount of time required to request and display a page from a web server by performing a variety of optimizations on the page's images. This is the third preview of the feature and works with ASP.NET Web Forms 4, ASP.NET MVC 3, and ASP.NET Web Pages (Razor) projects. The binaries are also available via NuGet: AspNetSprites-Core, AspNetSprites-WebFormsControl, AspNetSprites-MvcAndRazorHelper. It includes the foll...
Sandcastle Help File Builder: SHFB v1.9.2.0 Release: This release supports the Sandcastle June 2010 Release (v2.6.10621.1). It includes full support for generating, installing, and removing MS Help Viewer files. This new release is compiled under .NET 4.0, supports Visual Studio 2010 solutions and projects as documentation sources, and adds support for projects targeting the Silverlight Framework. NOTE: The included help file and the online help have not been completely updated to reflect all changes in this release. A refresh will be issue...
Network Monitor Open Source Parsers: Microsoft Network Monitor Parsers 3.4.2554: The Network Monitor Parsers packages contain parsers for more than 400 network protocols, including RFC-based public protocols and protocols for Microsoft products defined in the Microsoft Open Specifications for Windows and SQL Server. NetworkMonitor_Parsers.msi is the base parser package which defines parsers for commonly used public protocols and protocols for Microsoft Windows. In this release, we have added 4 new protocol parsers and updated 79 existing parsers in the NetworkMonitor_Pa...
Image Resizer for Windows: Image Resizer 3 Preview 1: Prepare to have your minds blown. This is the first preview of what will eventually become 39613. There are still a lot of rough edges and plenty of areas still under construction, but for your basic needs, it should be relatively stable. Note: you will need the .NET Framework 4 installed to use this version. Below is a status report of where this release is in terms of the overall goal for version 3. If you're feeling a bit technically ambitious and want to check out some of the features th...
JSON Toolkit: JSON Toolkit 1.1: Updated the GetAllJsonObjects() and GetAllProperties() methods to JsonObject and Properties properties.
Facebook Graph Toolkit: Facebook Graph Toolkit 1.0: Refer to http://computerbeacon.net for documentation and tutorial. New features: added FQL support; added Expires property to Api object; added support for publishing to a user's friend / Facebook Page; added support for posting and removing comments on posts; added support for adding and removing likes on posts and comments; added static methods for Page class; added support for Iframe Application Tab of Facebook Page; added support for obtaining the user's country, locale and age in If...
ASP.NET MVC Project Awesome, jQuery Ajax helpers (controls): 1.7.1: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager. Small improvements for some helpers, and AjaxDropdown now has Data like the Lookup, except its value gets reset and the list refilled if any element from the data gets changed.
Managed Extensibility Framework: MEF 2 Preview 3: This release targets .NET 4.0 and Silverlight 4.0. Accordingly, there are two solution files. The assemblies are named System.ComponentModel.Composition.Codeplex.dll as a way to avoid clashing with the version shipped with the 4th version of the framework. Introduced CompositionOptions to container instantiation; CompositionOptions.DisableSilentRejection makes MEF throw an exception on composition errors, useful for diagnostics; support for open generics; support for attribute-less registr...
PHPExcel: PHPExcel 1.7.6 Production: Donations: donate via PayPal. If you want to, we can also add your name / company on our Donation Acknowledgements page. PEAR channel: we now also have a full PEAR channel! Here's how to use it. New installation: pear channel-discover pear.pearplex.net, then pear install pearplex/PHPExcel. Or if you've already installed PHPExcel before: pear upgrade pearplex/PHPExcel. The official page can be found at http://pearplex.net. Want to contribute? Please refer to the Contribute page.

New Projects
asp.net mvc 3 simple cms: ASP.NET MVC 3 CMS for learning purposes.
C++ Mini Framework: C++ Mini Framework is a simple and easy-to-use class library in source format to quickly do things you commonly need to do in native projects, with the purpose of getting you started; it specifically targets new C++ developers, hoping you will be productive from the very start.
Community Megaphone Helpers: Community Megaphone Helpers is a project intended as a means of sharing and accepting contributions for reusable Razor helper modules for functionality used in the Community Megaphone events web application, including Bing maps and more. Supports Microsoft WebMatrix & MVC 3.
CRM 2011 OData Query Designer: The CRM 2011 OData Query Designer is a Silverlight 4 application that is packaged as a Managed CRM 2011 Solution. The tool allows you to build OData REST queries by selecting filter criteria, select attributes and order-by attributes. The tool also allows you to execute the query.
Faceted Search: Implementation of faceted (advanced) search with a composite client-side UI. Abstraction interfaces for integration with different server-side technologies; implementation for ASP.NET MVC.
FileRenamePro: FileRenamePro makes it easier for users to rename files using advanced rules and regular expressions. It's developed in C#.
GT5 Mobile: Gran Turismo Remote Racing mobile site wrapper.
KAT: KAT - Knowledge Assessment Tool is a solution from IndERP to automate performance and process management for technology/job-oriented companies.
monopoly game: Monopoly is an open source project for educational purposes. The project will incorporate XNA, Silverlight, and WCF technologies. The project will also show good design pattern considerations, and integration into a Facebook App. The project will be written in C#.
MuDB: MuDB is an embedded object-oriented database for the .NET Micro Framework which provides a simple yet useful interface for managing data.
NDataStructure: A library providing a handful of useful data structures omitted from the .NET framework.
Set NuGet version number: A simple command line tool that makes it easy to set the version number within a NuGet .nuspec package configuration file. This is useful for when you want to automatically update and publish a NuGet package from your build system.
Sightreader: Small application to aid in the rote learning of basic musical notation.
SSAS Operations Templates: SSAS Operations Templates includes SSIS packages, scripts and code samples for automating maintenance of SSAS in a production environment. Includes operations such as backing up the current state of cube designs in production, scripting partition creation, etc.
Team Run Log: Team Run Log.
TEDHelper: Downloads subtitles for TED talks.
testprojectit339: project339.
Ultimate Resume Repository: A class library and application for storing resumes of multiple people with the ability to export a targeted resume in various, configurable formats. Further additions may include cover letters, browser add-ons to populate applications, job search engine integration, etc.
Umbraco: Inspired DataTypes: New datatypes that are not in the default install, giving Umbraco some new controls such as Content/Media Treeview and Content/Media Drop Down List with Treeview. Controls have options to restrict DocumentTypes or MediaType and the start location to retrieve from.
Using different schemas in the same Orchestration Receive Port: Using different schemas in the same Orchestration Receive Port.
WF4Host: Examples in re-hosting the Workflow 4 designer.
WMP Hotkeys: WMP Hotkeys is a Windows Media Player plugin that enables users to use VLC-player-like keyboard shortcuts (e.g. SPACE to play/pause) in Windows Media Player.

    Read the article

  • Why is HTML/Javascript minification beneficial

    - by Channel72
Why is HTML/Javascript minification beneficial when the HTTP protocol already supports gzip data compression? I realize that Javascript/HTML minification has the potential to significantly reduce the size of Javascript/HTML files by removing unnecessary whitespace, and perhaps renaming variables to a few letters each, but doesn't a dictionary-based compression algorithm like the DEFLATE algorithm gzip uses do especially well when there are many repeated characters (e.g. lots of whitespace)? I realize that some Javascript minification tools do more than just reduce size. Google's Closure Compiler, for example, also tries to improve code performance by inlining functions and doing other analyses. But the primary purpose of Javascript minification is usually to reduce file size. I also realize there are other reasons you might want to minify aside from performance, such as code obfuscation. But again, that reason is not usually emphasized as much as performance gain and file size reduction. For example, Closure Compiler is not advertised as an obfuscation tool, but as a code size reducer and download-speed enhancer. So, how much performance do you really gain from Javascript/HTML minification when you're already significantly reducing file size with gzip compression?

    Read the article

  • Announcement: DTrace for Oracle Linux General Availability

    - by Zeynep Koch
Today we are announcing the general availability of DTrace for Oracle Linux. It is available to download from ULN for Oracle Linux Support customers. DTrace is a comprehensive dynamic tracing framework that was initially developed for the Oracle Solaris operating system and is now available to Oracle Linux customers. DTrace is designed to give operational insights that allow users to tune and troubleshoot the operating system. DTrace provides Oracle Linux developers with a tool to analyze performance and increase observability into the systems they own, to see how they work. DTrace enables higher quality application development, reduced downtime, lower cost, and greater utilization of existing resources. Key benefits and features of DTrace on Oracle Linux include: • Designed to help find performance bottlenecks • Dynamically enables the kernel with a number of probe points, improving the ability to service software • Enables maximum resource utilization and application performance • Fast and easy to use, even on complex systems with multiple layers of software If you already have Oracle Linux support, you can download DTrace from the ULN channel. We have a dedicated forum for DTrace on Oracle Linux where you can discuss your experiences and questions.
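To give a flavor of what dynamic tracing looks like in practice, here is a classic DTrace one-liner that counts system calls by process name; this example is mine rather than the announcement's, and it assumes the syscall provider is available in the Linux port:

# count system calls by process name until Ctrl-C, then print the aggregation
dtrace -n 'syscall:::entry { @counts[execname] = count(); }'

When the trace is stopped, DTrace prints the @counts aggregation, showing at a glance which processes are issuing the most system calls.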

    Read the article

< Previous Page | 448 449 450 451 452 453 454 455 456 457 458 459  | Next Page >