Search Results

Search found 13249 results on 530 pages for 'virtualized performance'.


  • SQL SERVER – Index Created on View not Used Often – Observation of the View – Part 2

    - by pinaldave
    Earlier, I wrote an article about SQL SERVER – Index Created on View not Used Often – Observation of the View. I received an email from one of the readers, asking if there would be any problem when we create the Index on the base table. Well, we need to discuss this situation in two different cases. Before proceeding to the discussion, I strongly suggest you read my earlier articles. To avoid duplication, I am not going to repeat the code and explanation here. In all the earlier cases, I have explained in detail how the Index created on the View is not utilized.
    SQL SERVER – Index Created on View not Used Often – Limitation of the View 12
    SQL SERVER – Index Created on View not Used Often – Observation of the View
    SQL SERVER – Indexed View always Use Index on Table
    As per the earlier blog posts, so far we have done the following:
    Create a Table
    Create a View
    Create Index On View
    Write SELECT with ORDER BY on View
    However, the blog reader who emailed me suggests an extension of the said logic, which is as follows:
    Create a Table
    Create a View
    Create Index On View
    Write SELECT with ORDER BY on View
    Create Index on the Base Table
    Write SELECT with ORDER BY on View
    After doing the last two steps, the question is "Will the query on the View utilize the Index on the View, or will it still use the Index of the base table?" Let us first run the Create example.
    USE tempdb
    GO
    IF EXISTS (SELECT * FROM sys.views WHERE OBJECT_ID = OBJECT_ID(N'[dbo].[SampleView]'))
        DROP VIEW [dbo].[SampleView]
    GO
    IF EXISTS (SELECT * FROM sys.objects WHERE OBJECT_ID = OBJECT_ID(N'[dbo].[mySampleTable]') AND TYPE IN (N'U'))
        DROP TABLE [dbo].[mySampleTable]
    GO
    -- Create SampleTable
    CREATE TABLE mySampleTable (ID1 INT, ID2 INT, SomeData VARCHAR(100))
    INSERT INTO mySampleTable (ID1,ID2,SomeData)
    SELECT TOP 100000 ROW_NUMBER() OVER (ORDER BY o1.name),
           ROW_NUMBER() OVER (ORDER BY o2.name),
           o2.name
    FROM sys.all_objects o1
    CROSS JOIN sys.all_objects o2
    GO
    -- Create View
    CREATE VIEW SampleView WITH SCHEMABINDING
    AS
    SELECT ID1,ID2,SomeData
    FROM dbo.mySampleTable
    GO
    -- Create Index on View
    CREATE UNIQUE CLUSTERED INDEX [IX_ViewSample] ON [dbo].[SampleView]
    (
        ID2 ASC
    )
    GO
    -- Select from view
    SELECT ID1,ID2,SomeData
    FROM SampleView
    ORDER BY ID2
    GO
    -- Create Index on Original Table
    -- On Column ID1
    CREATE UNIQUE CLUSTERED INDEX [IX_OriginalTable] ON mySampleTable
    (
        ID1 ASC
    )
    GO
    -- On Column ID2
    CREATE UNIQUE NONCLUSTERED INDEX [IX_OriginalTable_ID2] ON mySampleTable
    (
        ID2
    )
    GO
    -- Select from view
    SELECT ID1,ID2,SomeData
    FROM SampleView
    ORDER BY ID2
    GO
    Now let us see the execution plans for both of the SELECT statements.
    Before Index on Base Table (with Index on View): [execution plan screenshot]
    After Index on Base Table (with Index on View): [execution plan screenshot]
    Looking at both execution plans, it is very clear that, with or without the Index on the base table, the View is using Indexes. Alright, I have written 11 disadvantages of Views. Now I have written one case where the View is using Indexes. Anybody who says that I am being harsh on Views can now say that I found one place where an Index on a View can be helpful. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL View, SQLServer, T SQL, Technology
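    Not part of the original post, but if you want to verify for yourself which of the indexes the two SELECT statements actually touched, here is a quick sketch against sys.dm_db_index_usage_stats (index names match the script above; the DMV only reports activity since the last restart):
    SELECT OBJECT_NAME(s.[object_id]) AS object_name,
           i.name AS index_name,
           s.user_seeks, s.user_scans, s.user_lookups
    FROM sys.dm_db_index_usage_stats s
    INNER JOIN sys.indexes i
        ON i.[object_id] = s.[object_id] AND i.index_id = s.index_id
    WHERE s.database_id = DB_ID('tempdb')
      AND i.name IN ('IX_ViewSample', 'IX_OriginalTable', 'IX_OriginalTable_ID2');
    GO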

    Read the article

  • SQL SERVER – Index Created on View not Used Often – Observation of the View

    - by pinaldave
    I always enjoy writing about concepts on Views. Views are a frequently used concept, and so it's not surprising that I have seen so many misconceptions about this subject. To clear such misconceptions, I have previously written the article SQL SERVER – The Limitations of the Views – Eleven and more…. I also wrote a follow-up article wherein I demonstrated that, without even creating an index on the base table, the query on the View will not use the Index created on the View. You can read about this demonstration over here: SQL SERVER – Index Created on View not Used Often – Limitation of the View 12. I promised in that post that I would also write an article where I would demonstrate the condition where the Index will be used. I got many responses suggesting that I can do that by using NOEXPAND; I agree, and I have already written about this in my original summary article. Here is a way for you to see how the Index created on a View can be utilized. We will do the following steps in this exercise:
    Create a Table
    Create a View
    Create Index On View
    Write SELECT with ORDER BY on View
    USE tempdb
    GO
    IF EXISTS (SELECT * FROM sys.views WHERE OBJECT_ID = OBJECT_ID(N'[dbo].[SampleView]'))
        DROP VIEW [dbo].[SampleView]
    GO
    IF EXISTS (SELECT * FROM sys.objects WHERE OBJECT_ID = OBJECT_ID(N'[dbo].[mySampleTable]') AND TYPE IN (N'U'))
        DROP TABLE [dbo].[mySampleTable]
    GO
    -- Create SampleTable
    CREATE TABLE mySampleTable (ID1 INT, ID2 INT, SomeData VARCHAR(100))
    INSERT INTO mySampleTable (ID1,ID2,SomeData)
    SELECT TOP 100000 ROW_NUMBER() OVER (ORDER BY o1.name),
           ROW_NUMBER() OVER (ORDER BY o2.name),
           o2.name
    FROM sys.all_objects o1
    CROSS JOIN sys.all_objects o2
    GO
    -- Create View
    CREATE VIEW SampleView WITH SCHEMABINDING
    AS
    SELECT ID1,ID2,SomeData
    FROM dbo.mySampleTable
    GO
    -- Create Index on View
    CREATE UNIQUE CLUSTERED INDEX [IX_ViewSample] ON [dbo].[SampleView]
    (
        ID2 ASC
    )
    GO
    -- Select from view
    SELECT ID1,ID2,SomeData
    FROM SampleView
    ORDER BY ID2
    GO
    When we check the execution plan for this, we can clearly see that the Index created on the View is utilized: the ORDER BY clause uses the Index created on the View. I hope this makes the puzzle of how the Index is used on the View simpler. Again, I strongly recommend reading my earlier series about the limitations of the Views found here: SQL SERVER – The Limitations of the Views – Eleven and more…. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL View, T SQL, Technology
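    Since NOEXPAND comes up in the paragraph above, here is a minimal sketch (my addition, not from the original article) of what the hint looks like; on editions that do not automatically match indexed views, this is the usual way to make the optimizer use the view's index directly:
    -- Force the optimizer to use the indexed view itself
    SELECT ID1, ID2, SomeData
    FROM dbo.SampleView WITH (NOEXPAND)
    ORDER BY ID2
    GO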

    Read the article

  • Web Site Performance and Assembly Versioning – Part 3 Versioning Combined Files Using Mercurial

    - by capgpilk
    Minification and Concatenation of JavaScript and CSS Files
    Versioning Combined Files Using Subversion
    Versioning Combined Files Using Mercurial – this post
    I have worked on a project recently where there was a need to version the system (library dll, css and javascript files) by date and Mercurial revision number. This was in the format 0.12.524.407, i.e. {major}.{year}.{month}{date}.{mercurial revision}. Each time there is an internal build using the CI server, it would label the files using this format. When it came time to do a major release, it became v1.{year}.{month}{date}.{mercurial revision}, with each public release having a major version increment. Also as a requirement, each assembly had to have a new GUID on each build. So, like in previous posts, we need to edit the csproj file and add a couple of default targets.
    <?xml version="1.0" encoding="utf-8"?>
    <Project ToolsVersion="4.0" DefaultTargets="Hg-Revision;AssemblyInfo;Build"
             xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
      <PropertyGroup>
    Right below the closing tag of the entire project we add our two targets; the first is to get the Mercurial revision number. We first need to import the tasks for MSBuild, which can be downloaded from http://msbuildhg.codeplex.com/
    <Import Project="..\Tools\MSBuild.Mercurial\MSBuild.Mercurial.Tasks" />

    <Target Name="Hg-Revision">
      <HgVersion LocalPath="$(MSBuildProjectDirectory)" Timeout="5000"
                 LibraryLocation="C:\TortoiseHg\">
        <Output TaskParameter="Revision" PropertyName="Revision" />
      </HgVersion>
      <Message Text="Last revision from HG: $(Revision)" />
    </Target>
    with the main Mercurial files being located at c:\TortoiseHg. To get a valid GUID we need to escape from the csproj markup and call some C# code, which we put in a property group for later reference.
    <PropertyGroup>
      <GuidGenFunction>
        <![CDATA[
          public static string ScriptMain() {
            return System.Guid.NewGuid().ToString().ToUpper();
          }
        ]]>
      </GuidGenFunction>
    </PropertyGroup>
    Now we add in our target for generating the GUID.
    <Target Name="AssemblyInfo">
      <Script Language="C#" Code="$(GuidGenFunction)">
        <Output TaskParameter="ReturnValue" PropertyName="NewGuid" />
      </Script>
      <Time Format="yy">
        <Output TaskParameter="FormattedTime" PropertyName="year" />
      </Time>
      <Time Format="Mdd">
        <Output TaskParameter="FormattedTime" PropertyName="daymonth" />
      </Time>
      <AssemblyInfo CodeLanguage="CS" OutputFile="Properties\AssemblyInfo.cs"
                    AssemblyTitle="name" AssemblyDescription="description"
                    AssemblyCompany="none" AssemblyProduct="product"
                    AssemblyCopyright="Copyright ©"
                    ComVisible="false" CLSCompliant="true" Guid="$(NewGuid)"
                    AssemblyVersion="$(Major).$(year).$(daymonth).$(Revision)"
                    AssemblyFileVersion="$(Major).$(year).$(daymonth).$(Revision)" />
    </Target>
    So this will give us an AssemblyInfo.cs file like this just prior to calling the Build task:
    using System;
    using System.Reflection;
    using System.Runtime.CompilerServices;
    using System.Runtime.InteropServices;

    [assembly: AssemblyTitle("name")]
    [assembly: AssemblyDescription("description")]
    [assembly: AssemblyCompany("none")]
    [assembly: AssemblyProduct("product")]
    [assembly: AssemblyCopyright("Copyright ©")]
    [assembly: ComVisible(false)]
    [assembly: CLSCompliant(true)]
    [assembly: Guid("9C2C130E-40EF-4A20-B7AC-A23BA4B5F2B7")]
    [assembly: AssemblyVersion("0.12.524.407")]
    [assembly: AssemblyFileVersion("0.12.524.407")]
    therefore giving us the correct version for the assembly. This can be referenced within your project, whether web or Windows based, like this:
    public static string AppVersion()
    {
        return Assembly.GetExecutingAssembly().GetName().Version.ToString();
    }
    As mentioned in previous posts in this series, you can label css and javascript files using this version number and the GetAssemblyIdentity task from the main MSBuild task library built into the .NET framework.
    <GetAssemblyIdentity AssemblyFiles="bin\TheAssemblyFile.dll">
      <Output TaskParameter="Assemblies" ItemName="MyAssemblyIdentities" />
    </GetAssemblyIdentity>
    Then use this to write out the files:
    <WriteLinesToFile
      File="Client\site-style-%(MyAssemblyIdentities.Version).combined.min.css"
      Lines="@(CSSLinesSite)" Overwrite="true" />
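    If you also want to reference the versioned file from a page at runtime, one possible sketch (my addition; the VersionedCss helper and the path are assumptions, not part of the original series) that reuses the AppVersion() idea above:
    // Sketch only: assumes the combined CSS file was produced by the WriteLinesToFile step above.
    public static string VersionedCss()
    {
        string version = Assembly.GetExecutingAssembly().GetName().Version.ToString();
        return string.Format("/Client/site-style-{0}.combined.min.css", version);
    }
    // e.g. in a page: <link rel="stylesheet" href="<%= VersionedCss() %>" />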

    Read the article

  • Performance Gains using Indexed Views and Computed Columns

    - by NeilHambly
    Hello. This is a quick follow-up blog to the presentation I gave last night @ the London UG Meeting (17th March 2010). It was a great evening and we had a big full house (over 120 registered for this event). Due to the time constraints we had, I was unable to spend enough time on this topic to really do it justice, or to answer the myriad of questions that arose from the session. I will be gathering all my material and putting together a comprehensive BLOG entry on this topic in the next couple of days.. In the meantime, here are the slides from last night if you want to review them again, or if you were not @ the meeting. If you wish to contact me then please feel free to send me emails @ [email protected] Finally - a quick thanks to Tony Rogerson for allowing me to be a presenter last night (so we know who we can blame!) and to all the other presenters for their support. Watch this space folks, more to follow soon..

    Read the article

  • The JRockit Performance Counters

    - by Marcus Hirt
    Every now and then I get a question regarding what the attributes in the PerfCounters dynamic MBean represent. Now, all the MBeans under the oracle.jrockit.management (bea.jrockit.management pre R28) domain are part of what we call JMXMAPI (the JRockit JMX based Management API), which is unsupported. Therefore there is no official documentation for the API. I did however write a bit about JMXMAPI in my recent JRockit book, Oracle JRockit: The Definitive Guide. The information in the table below is from that book:
    Counter – Description
    java.cls.loadedClasses – The number of classes loaded since the start of the JVM.
    java.cls.unloadedClasses – The number of classes unloaded since the start of the JVM.
    java.property.java.class.path – The class path of the JVM.
    java.property.java.endorsed.dirs – The endorsed dirs. See the Endorsed Standards Override Mechanism.
    java.property.java.ext.dirs – The ext dirs, which are searched for jars that should be automatically put on the classpath. See the Java documentation for java.ext.dirs.
    java.property.java.home – The root of the JDK or JRE installation.
    java.property.java.library.path – The library path used to find user libraries.
    java.property.java.vm.version – The JRockit version.
    java.rt.vmArgs – The list of VM arguments.
    java.threads.daemon – The number of running daemon threads.
    java.threads.live – The total number of running threads.
    java.threads.livePeak – The peak number of threads that has been running since JRockit was started.
    java.threads.nonDaemon – The number of non-daemon threads running.
    java.threads.started – The total number of threads started since the start of JRockit.
    jrockit.gc.latest.heapSize – The current heap size in bytes.
    jrockit.gc.latest.nurserySize – The current nursery size in bytes.
    jrockit.gc.latest.oc.compaction.time – How long, in ticks, the last compaction lasted. Reset to 0 if compaction is skipped.
    jrockit.gc.latest.oc.heapUsedAfter – Used heap at the end of the last OC, in bytes.
    jrockit.gc.latest.oc.heapUsedBefore – Used heap at the start of the last OC, in bytes.
    jrockit.gc.latest.oc.number – The number of OCs that have occurred so far.
    jrockit.gc.latest.oc.sumOfPauses – The paused time for the last OC, in ticks.
    jrockit.gc.latest.oc.time – The time the last OC took, in ticks.
    jrockit.gc.latest.yc.sumOfPauses – The paused time for the last YC, in ticks.
    jrockit.gc.latest.yc.time – The time the last YC took, in ticks.
    jrockit.gc.max.oc.individualPause – The longest OC pause so far, in ticks.
    jrockit.gc.max.yc.individualPause – The longest YC pause so far, in ticks.
    jrockit.gc.total.oc.compaction.externalAborted – Number of aborted external compactions so far.
    jrockit.gc.total.oc.compaction.internalAborted – Number of aborted internal compactions so far.
    jrockit.gc.total.oc.compaction.internalSkipped – Number of skipped internal compactions so far.
    jrockit.gc.total.oc.compaction.time – The total time spent doing compaction so far, in ticks.
    jrockit.gc.total.oc.ompaction.externalSkipped – Number of skipped external compactions so far.
    jrockit.gc.total.oc.pauseTime – The sum of all OC pause times so far, in ticks.
    jrockit.gc.total.oc.time – The total time spent doing OC so far, in ticks.
    jrockit.gc.total.pageFaults – The number of page faults that have occurred during GC so far.
    jrockit.gc.total.yc.pauseTime – The sum of all YC pause times, in ticks.
    jrockit.gc.total.yc.promotedObjects – The number of objects that all YCs have promoted.
    jrockit.gc.total.yc.promotedSize – The total number of bytes that all YCs have promoted, in bytes.
    jrockit.gc.total.yc.time – The total time spent doing YC, in ticks.
    oracle.ci.jit.count – The number of methods JIT compiled.
    oracle.ci.jit.timeTotal – The total time spent JIT compiling, in ticks.
    oracle.ci.opt.count – The number of methods optimized.
    oracle.ci.opt.timeTotal – The total time spent optimizing, in ticks.
    oracle.rt.counterFrequency – Used to convert ticks values to seconds.
    Note that many of these counters are excellent choices for attributes to plot in the Management Console. Also note that many values are in ticks – to convert them to seconds, divide by the value in the oracle.rt.counterFrequency counter.
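    As a small companion sketch (my addition, not from the book): these attributes can also be read programmatically with the standard javax.management API. The ObjectName oracle.jrockit.management:type=PerfCounters and the service URL below are assumptions to adjust to your own setup (for example, a JRockit started with external JMX management enabled).
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class PerfCounterSample {
        public static void main(String[] args) throws Exception {
            JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:7091/jmxrmi");
            try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
                MBeanServerConnection connection = connector.getMBeanServerConnection();
                ObjectName perfCounters =
                    new ObjectName("oracle.jrockit.management:type=PerfCounters");

                Number liveThreads = (Number) connection.getAttribute(perfCounters, "java.threads.live");
                Number ocTimeTicks = (Number) connection.getAttribute(perfCounters, "jrockit.gc.total.oc.time");
                Number frequency   = (Number) connection.getAttribute(perfCounters, "oracle.rt.counterFrequency");

                // As noted above, tick values are converted to seconds by dividing by the counter frequency.
                System.out.println("Live threads: " + liveThreads);
                System.out.println("Total OC time (s): " + ocTimeTicks.doubleValue() / frequency.doubleValue());
            }
        }
    }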

    Read the article

  • ASP.NET/mono performance on Linux

    - by Quandary
    Does anybody know how ASP.NET/Mono performance is on Linux? I mean, which server gives you the best performance/delivery time (Apache/Apache2, xsp2, lighttpd, nginx, other)? Since all ASP.NET goes via xsp2, I'd say xsp2 would certainly be fastest, but it's probably missing a lot of features which lighttpd offers (e.g. mod_dosevasive, URL rewriting, etc.).
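    Not a benchmark answer, but for context: a common setup at the time was nginx proxying to fastcgi-mono-server over FastCGI. The sketch below is an assumption (paths, port and server name are made up) rather than a recommended configuration:
    # /etc/nginx/sites-available/monoapp (sketch)
    server {
        listen 80;
        server_name example.com;
        root /srv/www/monoapp;
        index index.aspx;

        location / {
            # started separately, e.g.:
            # fastcgi-mono-server4 /applications=/:/srv/www/monoapp /socket=tcp:127.0.0.1:9000 &
            fastcgi_pass 127.0.0.1:9000;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param PATH_INFO "";
        }
    }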

    Read the article

  • SQL SERVER – SQL Server High Availability Options – Notes from the Field #032

    - by Pinal Dave
    [Notes from Pinal]: When it is about High Availability or Disaster Recovery, I often see people getting confused. There are so many options available that when the user has to select the most optimal solution for their organization, they are often confused. Most people even know the salient features of the various options, but when they have to figure out one single option to use, they are often not sure which one to pick. I like to ask my dear friend Tim all these kinds of complicated questions. He has a skill for making a complex subject very simple and easy to understand. Linchpin People are database coaches and wellness experts for a data driven world. In this 26th episode of the Notes from the Field series, database expert Tim Radney (partner at Linchpin People) explains in very simple words the best High Availability option for your SQL Server. Working with SQL Server, a common challenge we are faced with is providing the maximum uptime possible. To meet these demands we have to design a solution to provide High Availability (HA). Microsoft SQL Server, depending on your edition, provides you with several options. This could be database mirroring, log shipping, failover clusters, availability groups or replication. Each possible solution comes with pros and cons. No one solution fits all scenarios, so understanding which solution meets which need is important. As with anything IT related, you need to fully understand your requirements before trying to solve the problem. When it comes to building an HA solution, you need to understand the risk your organization needs to mitigate the most. I have found that most are concerned about hardware failure and OS failures. Other common concerns are data corruption or storage issues. For data corruption or storage issues you can mitigate those concerns by having a second copy of the databases. That can be accomplished with database mirroring, log shipping, replication or availability groups with a secondary replica. Failover clustering and virtualization with shared storage do not provide redundancy of the data. I recently created a chart outlining some pros and cons of each of the technologies, which I posted on my blog. I like to use this chart to help illustrate how each technology provides a certain number of benefits. Each of these solutions carries with it some level of cost and complexity. As database professionals we should all be familiar with these technologies so we can make the best possible choice for our organization. If you want me to take a look at your server and its settings, or if your server is facing any issue, we can Fix Your SQL Server. Note: Tim has also written an excellent book on SQL Backup and Recovery, a must have for everyone. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Shrinking Database
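    As a small, hedged companion to the options Tim lists (my addition, not from the article): a quick way to see which of them your instance can even offer is to check a few server properties. Availability of IsHadrEnabled assumes SQL Server 2012 or later.
    SELECT SERVERPROPERTY('Edition')        AS edition,
           SERVERPROPERTY('ProductVersion') AS product_version,
           SERVERPROPERTY('IsClustered')    AS is_failover_clustered,
           SERVERPROPERTY('IsHadrEnabled')  AS is_alwayson_enabled;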

    Read the article

  • Event system architecture for networking when performance is concerned

    - by Vandell
    How should I design a system for an action game (think Golden Axe) where events can happen remotely? I'm using TCP for this because the client is in Flash. There are so many options. I could build a binary protocol (I don't like this idea, I found it to be too hard to maintain), but I was also thinking that passing JSON between clients and the server can be slow (is that an exaggerated concern?). What about the internal architecture for the server? And for the client? I'm really lost. If this question is too big, please point me to some material so I can formulate a better question next time.

    Read the article

  • OBIEE 11.1.1 - How to configure HTTP compression / caching on Oracle BI Mobile app

    - by Ahmed Awan
    Applies to: OBIEE 11.1.1.5
    Supported Physical Devices and OS: The Oracle BI Mobile application with the HTTP compression / caching configuration has been tested on the following devices: iPhone 4S, 4, 3GS; iPad 2 and 1. Note that these devices must be running the latest iOS version, i.e. iOS 4.2.1; iOS 5 is also supported.
    Configuring Pre-requisites: Prior to configuration, the Oracle Web Tier software must be installed on the server, as described in the product documentation, i.e. the Enterprise Deployment Guide for Oracle Business Intelligence, Section 3.2, "Installing Oracle HTTP Server." The steps for configuring compression and caching on Oracle HTTP Server are described in this PA blog at http://blogs.oracle.com/pa/entry/obiee_11g_user_interface_ui and in support Doc ID 1312299.1.
    Configuration Steps in Oracle BI Mobile application:
    1. Download the BI Mobile app from the Apple iTunes App Store. The link is http://itunes.apple.com/us/app/oracle-business-intelligence/id434559909?mt=8 .
    2. Add the server, for example http://pew801.us.oracle.com:7777/analytics/ ; here is how your "Server Setting" screen should look on your OBI Mobile app: [screenshot]
    Performance Gain Test (using Oracle® HTTP Server with OBIEE): The test with/without HTTP compression / caching was conducted on iPhone 4S / iPad 2 to measure the throughput (i.e. total bytes received) for Oracle® Business Intelligence Enterprise Edition. The table below shows the throughput comparison before and after using HTTP compression / caching for SampleApp, using the "QuickStart" dashboard and accessing the Overview, Details, Published Reporting and Scorecard reports. Testing shows that total bytes received were reduced from 2.3 MB to 723 KB.
    a. Test Results > Without HTTP Compression / Caching settings - Total Throughput (in Bytes) captured below: [Total Bytes Statistics screenshot]
    b. Test Results > With HTTP Compression / Caching settings - Total Throughput (in Bytes) captured below: [Total Bytes Statistics screenshot]
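    The exact, supported steps are in the PA blog post and Doc ID 1312299.1 referenced above; purely as an illustration (my sketch, not the documented configuration), compression and caching on an Apache-based Oracle HTTP Server generally come down to httpd.conf directives of this shape:
    # Sketch only - follow the referenced documentation for the supported settings
    LoadModule deflate_module modules/mod_deflate.so
    LoadModule expires_module modules/mod_expires.so

    AddOutputFilterByType DEFLATE text/html text/css application/javascript
    ExpiresActive On
    ExpiresByType text/css  "access plus 7 days"
    ExpiresByType image/png "access plus 7 days"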

    Read the article

  • Oracle Coherence 3.5 : Create Internet-scale applications using Oracle's high-performance data grid

    - by frederic.michiara
    Oracle Coherence
    Coherence provides replicated and distributed (partitioned) data management and caching services on top of a reliable, highly scalable peer-to-peer clustering protocol. Coherence has no single points of failure; it automatically and transparently fails over and redistributes its clustered data management services when a server becomes inoperative or is disconnected from the network. When a new server is added, or when a failed server is restarted, it automatically joins the cluster and Coherence fails back services to it, transparently redistributing the cluster load. Coherence includes network-level fault tolerance features and transparent soft re-start capability to enable servers to self-heal.
    For those looking for an easy read and a good first approach to Oracle Coherence, I would recommend the following book: Overview of Oracle Coherence 3.5
    Build scalable web sites and Enterprise applications using a market-leading data grid product
    Design and implement your domain objects to work most effectively with Coherence and apply Domain Driven Design (DDD) to Coherence applications
    Leverage Coherence events and continuous queries to provide real-time updates to client applications
    Successfully integrate various persistence technologies, such as JDBC, Hibernate, or TopLink, with Coherence
    Filled with numerous examples that provide best practice guidance, and a number of classes you can readily reuse within your own applications
    This book is targeted at Architects and developers, and as our team is more about Solutions Architects than developers, I found this book interesting as it helps to better understand Oracle Coherence and its value. The only point where I may not agree with the authors is that Oracle Coherence is not an alternative to Oracle RAC in providing High Availability; rather, combining both Oracle RAC and Oracle Coherence will help Architects and Customers reach a higher level of service and high availability.
    This book is available on https://www.packtpub.com/oracle-coherence-3-5/book
    Table of contents: https://www.packtpub.com/toc/oracle-coherence-35-table-contents
    Discover a sample chapter: https://www.packtpub.com/sites/default/files/6125_Oracle%20Coherence_SampleChapter.pdf
    Read also articles from the Authors on http://www.packtpub.com/ :
    Working with Aggregators in Oracle Coherence 3.5
    Working with Value Extractors and Simplifying Queries in Oracle Coherence 3.5
    Querying the Data Grid in Coherence 3.5: Obtaining Query Results and Using Indexes
    Installing Coherence 3.5 and Accessing the Data Grid: Part 1
    Installing Coherence 3.5 and Accessing the Data Grid: Part 2
    For more information on Oracle Coherence:
    What Oracle Coherence Can Do for You...: http://www.oracle.com/technology/products/coherence/coherencedatagrid/coherence_solutions.html
    Oracle Coherence on OTN: http://www.oracle.com/technology/products/coherence/index.html
    Oracle Coherence Knowledge Base: http://coherence.oracle.com/display/COH/Oracle+Coherence+Knowledge+Base+Home

    Read the article

  • Delegate performance of Roslyn Sept 2012 CTP is impressive

    - by dotneteer
    I wanted to dynamically compile some delegates using Roslyn. I came across this article by Piotr Sowa. The article shows that the delegate compiled with the Roslyn CTP was not very fast. Since the article was written using the Roslyn June 2012 CTP, I decided to give the Sept 2012 CTP a try. There are significant changes in the Roslyn Sept 2012 CTP in both the C# syntax supported and the API. I found Anoop Madhisidanan's article that has an example of the new API. With that, I was able to put together a comparison. In my test, the Roslyn-compiled delegate is as fast as the C# (VS 2012) compiled delegate. See the source code below and give it a try.
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.Diagnostics;
    using Roslyn.Compilers;
    using Roslyn.Scripting.CSharp;
    using Roslyn.Scripting;

    namespace RoslynTest
    {
        class Program
        {
            public Func<int, int, int> del;

            static void Main(string[] args)
            {
                Stopwatch stopWatch = new Stopwatch();
                Program p = new Program();
                p.SetupDel(); //Comment out this line and uncomment the next line to compare
                //p.SetupScript();
                stopWatch.Start();
                int result = DoWork(p.del);
                stopWatch.Stop();
                Console.WriteLine(result);
                Console.WriteLine("Time elapsed {0}", stopWatch.ElapsedMilliseconds);
                Console.Read();
            }

            private void SetupDel()
            {
                del = (s, i) => ++s;
            }

            private void SetupScript()
            {
                //Create the script engine
                //Script engine constructor parameters got changed
                var engine = new ScriptEngine();

                //Let us use engine's AddReference for adding the required assemblies
                new[]
                {
                    typeof (Console).Assembly,
                    typeof (Program).Assembly,
                    typeof (IEnumerable<>).Assembly,
                    typeof (IQueryable).Assembly
                }.ToList().ForEach(asm => engine.AddReference(asm));

                new[]
                {
                    "System", "System.Linq",
                    "System.Collections",
                    "System.Collections.Generic"
                }.ToList().ForEach(ns => engine.ImportNamespace(ns));

                //Now, you need to create a session using engine's CreateSession method,
                //which can be seeded with a host object
                var session = engine.CreateSession();
                var submission = session.CompileSubmission<Func<int, int, int>>("new Func<int, int, int>((s, i) => ++s)");
                del = submission.Execute();
                //- See more at: http://www.amazedsaint.com/2012/09/roslyn-september-ctp-2012-overview-api.html#sthash.1VutrWiW.dpuf
            }

            private static int DoWork(Func<int, int, int> del)
            {
                int result = Enumerable.Range(1, 1000000).Aggregate(del);
                return result;
            }
        }
    }
    Since the Roslyn Sept 2012 CTP is already over a year old, I cannot wait to see a new version coming out.

    Read the article

  • New Release of Oracle EPM (Enterprise Performance Management)

    - by Theresa Hickman
    I'm a huge fan of Hyperion products and consider Hyperion to be one of the best acquisitions Oracle has made in terms of applications. So I am really excited to talk about their latest release, Release 11.1.2 of the Oracle EPM System. This is EPM's largest release in 2 years, and it's jam-packed with new modules and features. In terms of brand new products, there are three: 1. Public Sector Planning and Budgeting meets the needs of public sector agencies, higher education, governments, etc. that have complex budget requirements. It supports position or employee-based budgeting and integrates with MS Office and your ERP ledgers to perform commitment control. 2. Hyperion Financial Close Management is a complete financial close solution that orchestrates the entire close process from subledgers and general ledger to financial reporting and disclosure submissions. And of course, it is integrated with GL systems and consolidation systems. I saw a demo of this and it looked pretty slick. They have this unified close calendar that looks like a regular calendar that gives each person participating in the close process a task list. It comes with a Gantt chart that shows the relationships and dependencies among closing tasks. There are dashboards to allow you to track the close progress and completion of tasks as well as perform trend analysis and see how much time is being spent on different activities in the close process. This gives you visibility that you never had before to understand where the bottlenecks are and where improvements could be made. I think what I liked best about this product was that it provides a central place for all participants to communicate their progress. When I worked as an Accountant, we used ad hoc tools, such as spreadsheets, Word documents, emails, and phone calls during the close process. I like the idea of having a central system to track the overall progress as well as automate the entire financial close process. Who knows, maybe Accountants won't have to revolve their lives around the month end close anymore with a tool like this. Those periodic fire drills can become predictable, well managed processes. 3. Disclosure Management is an out-of-the-box, pre-packaged XBRL solution to meet statutory reporting requirements. This product is really going to help companies improve the timeliness of producing financial reports. Reports can be authored using MS Word and Excel and then XBRL instance documents can be produced with its embedded XBRL tags. It even supports footnotes and disclosures of non-financial information. With a product like this, companies no longer have to outsource their XBRL filing; they can bring it back in house to save costs and time. In terms of other enhancements, they have ERP Integrator that provides integration and drill downs from Hyperion products to source systems, such as Oracle E-Business Suite, PeopleSoft, and SAP. No other vendor offers this level of integration. There's also a new product that links Oracle Essbase directly to Hyperion Financial Management for internal financial reporting, and new integrations between Hyperion Financial Management and Oracle's GRC products. They also improved the usability of Oracle Hyperion Planning. They made it much easier for end users to use the system via the web or via MS Excel when submitting plans and budgets. It is also integrated with intelligent approval workflows that are data-driven, user-configurable, and scenario-specific to efficiently streamline the budgeting process. 
Here's the press release from April 7, 2010. Here's the pre-recorded web cast where you can see the demos. Just register and watch the hour long presentation. And finally, here's the newsletter

    Read the article

  • Roo gem .xlsx files performance problems [closed]

    - by alvaritogf
    I am getting unacceptable performance when reading a file with the roo gem's XLSX or XLS library. Can someone suggest an alternative way to parse an .xlsx file?
    parsed_file = Excel.new(filename, false, :ignore)  if (file_format.upcase == "XLS")
    parsed_file = Excelx.new(filename, false, :ignore) if (file_format.upcase == "XLSX")
    raise t "#{filename} is not an Excel file!" if (!parsed_file)

    parsed_file.default_sheet = parsed_file.sheets[0] #'Sheet2'#oo.sheets[1]
    first_row = parsed_file.first_row
    last_row = parsed_file.last_row
    first_column = parsed_file.first_column
    last_column = parsed_file.last_column
    #logger.info "#### Total Rows:#{last_row}, first_row:#{first_row}, last_row:#{last_row}, first_column:#{first_column}, last_column:#{last_column}"

    first_row.upto(last_row) do |current_line|
      # Stuff ....
    end
    Thanks
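    One possible alternative sketch (my addition, not a benchmark claim): the rubyXL gem reads .xlsx files directly; the variable names below are assumptions to adapt to your own columns.
    require 'rubyXL'

    workbook  = RubyXL::Parser.parse(filename)   # .xlsx only
    worksheet = workbook[0]                      # first sheet

    worksheet.each do |row|
      next if row.nil?
      values = row.cells.map { |cell| cell && cell.value }
      # Stuff ....
    end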

    Read the article

  • SQL SERVER – Three Puzzling Questions – Need Your Answer

    - by pinaldave
    Last week I had asked three questions on my blog. I got a very good response to the questions. I am planning to write a summary post for each of the three questions next week. Before I write the summary posts and give credit to all the valid answers, I was wondering if I could bring them to the notice of all of you this week. Why SELECT * throws an error but SELECT COUNT(*) does not This is indeed a very interesting question, as not many realize that SQL Server demonstrates this kind of behavior out of the box. Once you run both the code and read the explanation it totally makes sense why SQL Server is behaving the way it is. There is also a Connect item associated with it. Also read the very first comment by Rob Farley; it shares very interesting detail. Statistics are not Updated but are Created Once This puzzle has multiple right answers. I am glad to see many of the correct answers as comments on this blog post. Statistics are very important and really help the SQL Server engine come up with an optimal execution plan. DBAs quite often ignore statistics, thinking they do not need to be updated, as they are automatically maintained if the proper database settings are configured (auto update and auto create). Well, in this question, we have a scenario where, even though auto create and auto update statistics are ON, statistics are not updated. There are multiple solutions, but what will be your solution in this case? When to use Function and When to use Stored Procedure This question is rather open ended – there is no right or wrong answer. Every developer has used functions and stored procedures. Here is the chance to justify when to use Stored Procedures and when to use Functions. I want to acknowledge that they can be used interchangeably, but there are a few cases when one should not do that, and a few cases when one is better than the other. Let us discuss this here. Your opinion matters. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Performance, SQL Puzzle, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLAuthority News, SQLServer, T SQL, Technology
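    For readers who have not yet opened the first puzzle, a minimal sketch of the behavior it most likely refers to (my reading of the question title, not code quoted from the original post):
    SELECT *;        -- fails: Msg 263, "Must specify table to select from."
    SELECT COUNT(*); -- succeeds and returns 1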

    Read the article

  • SPARC M7 Chip - 32 cores - Mind Blowing performance

    - by CarylTakvorian-Oracle
    Now that we've just announced our Next Generation Processor at the HotChips HC26 conference , my colleague Angelo Rajadurai has a great write-up on what was announced and what this could mean for our ISV partners, covering in particular the SPARC M7 new Software-in-Silicon features such as Application Data Integrity and the In-Memory Query Accelerator. During the same presentation we also introduced the SPARC Accelerated Program to provide our partners and third party developers access to all the goodness of the M7's SPARC Application Acceleration features. Please get in touch with us if you are interested in knowing more about this program.

    Read the article

  • Today’s Performance Tip: Views are for Convenience, Not Performance!

    - by Jonathan Kehayias
    I tweeted this last week on Twitter and got a lot of retweets, so I thought that I'd blog the story behind the tweet. Most vendor databases have views in them, and when people want to retrieve data from a database, it seems like the most common first stop they make is the vendor-supplied Views.  This post is in no way a bash against the usage or creation of Views in a SQL Server Database; I have created them before to simplify code and compartmentalize commonly required queries so that there...(read more)

    Read the article

  • Performance of pixel shaders vs. SpriteBatch: XNA

    - by ashes999
    Precondition: I read this question/answer about using shaders, or spritebatch, to render and mark a sprite. I need to do something like that. I also have a 2D lighting PoC which I need to write. The way it will work will basically be something like: Draw all the sprites Draw lighting gradients to create a lighting texture Multiply/add the lighting texture to achieve different effects (I use multiple passes of add/multiply the lighting texture to achieve different effects.) My question is really about a generalization: can I say with certainty that pixel shaders are always faster than adding/multiplying textures to the SpriteBatch? Or that adding/multiplying is always faster? Or if it's not generalizable, how do I decide which approach to take, given that I can probably code either of them? (If it matters, I'm using MonoGame 3.0 beta for Windows games)
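    For the SpriteBatch route specifically, here is a minimal sketch (my addition; lightMap and the blend choices are assumptions) of a multiply pass over the lighting texture in XNA 4.0 / MonoGame:
    // Multiply the already-drawn scene by the light map: dest = dest * src
    var multiplyBlend = new BlendState
    {
        ColorSourceBlend      = Blend.DestinationColor,
        ColorDestinationBlend = Blend.Zero,
        AlphaSourceBlend      = Blend.DestinationAlpha,
        AlphaDestinationBlend = Blend.Zero
    };

    spriteBatch.Begin(SpriteSortMode.Deferred, multiplyBlend);
    spriteBatch.Draw(lightMap, Vector2.Zero, Color.White);
    spriteBatch.End();
    Profiling both this and an equivalent pixel shader on your target hardware is still the only way to answer the "which is faster" question for your case.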

    Read the article

  • Performance of Google Desktop Search on Windows 7

    - by RexE
    I have read that installing Google Desktop Search on Vista can slow down the computer, because Vista already has a search indexing feature and adding Google's separate indexing feature results in a performance hit. (Google hints at this in their FAQ here.) Does this problem also exist in Windows 7? Is there a workaround that improves performance?

    Read the article

  • Improving performance of fuzzy string matching against a dictionary [closed]

    - by Nathan Harmston
    Hi, I'm currently working with SecondString for fuzzy string matching, where I have a large dictionary to compare against (each entry in the dictionary has an associated non-unique identifier). I am currently using a HashMap to store this dictionary. When I want to do fuzzy string matching, I first check to see if the string is in the HashMap, and then I iterate through all of the other potential keys, calculating the string similarity and storing the k,v pair/s with the highest similarity. Depending on which dictionary I am using, this can take a long time (12330 - 1800035 entries). Is there any way to speed this up or make it faster? I am currently writing a memoization function/table as a way of speeding this up, but can anyone else think of a better way to improve the speed? Maybe a different structure or something else I'm missing. Many thanks in advance, Nathan
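    One cheap pre-filter sketch (my addition, independent of SecondString itself): bucket the dictionary by string length and only score candidates whose length is close to the query's, so the expensive similarity call runs on far fewer pairs. similarity() here is a placeholder for whichever SecondString distance you use.
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class LengthBucketIndex {
        private final Map<Integer, List<String>> byLength = new HashMap<>();

        public void add(String entry) {
            byLength.computeIfAbsent(entry.length(), k -> new ArrayList<>()).add(entry);
        }

        /** Candidates whose length differs from the query's by at most maxDelta. */
        public List<String> candidates(String query, int maxDelta) {
            List<String> result = new ArrayList<>();
            for (int len = query.length() - maxDelta; len <= query.length() + maxDelta; len++) {
                List<String> bucket = byLength.get(len);
                if (bucket != null) result.addAll(bucket);
            }
            return result; // score only these with similarity(query, candidate)
        }
    }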

    Read the article

  • Database design and performance impact

    - by Craige
    I have a database design issue that I'm not quite sure how to approach, nor whether the benefits outweigh the costs. I'm hoping some P.SE members can give some feedback on my suggested design, as well as any similar experiences they may have come across. As it goes, I am building an application that has large reporting demands. Speed is an important issue, as there will be peak usages throughout the year. This application/database has a multiple-level, many-to-many relationship, e.g.:
    object a
    object b
    object c
    object d
    object b has relationship to object a
    object c has relationship to object b, a
    object d has relationship to object c, b, a
    Theoretically, this could go on for unlimited levels, though logic dictates it could only go so far. My idea here, to speed up reporting, would be to create a syndicate table that acts as a global many-to-many join table. In this table (with the given example), one might see:
    +----------+-----------+---------+
    | child_id | parent_id | type_id |
    +----------+-----------+---------+
    | b        | a         | 1       |
    | c        | b         | 2       |
    | c        | a         | 3       |
    | d        | c         | 4       |
    | d        | b         | 5       |
    | d        | a         | 6       |
    +----------+-----------+---------+
    where a, b, c and d would translate to their respective IDs in their respective tables. So, for ease of reporting all of a which exist on object d, one could query
    SELECT * FROM `syndicates` ... JOINS TO child and parent tables ... WHERE parent_id=a and type_id=6;
    rather than having a query with a join to each level up the chain.
    The Problem: This table grows exponentially, and in a given year could easily grow past 20,000 records for one client. Given multiple clients over multiple years, this table will VERY quickly explode to millions of records and beyond. Now, the database will, in time, be partitioned across multiple servers, but I would like (as most would) to keep the number of servers as low as possible while still offering flexibility. Also, writes and updates would be exponentially longer (though possibly not noticeable to the end user) as there would be multiple inserts/updates/scans on this table to keep it in sync. Am I going in the right direction here, or am I way off track? What would you do in a similar situation? This solution seems overly complex, but allows the greatest flexibility and fastest read operations. See the sketch after the sidenotes for the write-side maintenance this implies.
    Sidenote 1 - This structure allows me to add new levels to the tree easily.
    Sidenote 2 - The database querying for this database is done through an ORM framework.
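    If you do go the syndicate (closure-table) route, the write-side cost per new relationship is at least bounded and expressible in a single statement; a sketch below, where the table and column names follow the example above, the parameter syntax is SQL Server style, and type_id handling is omitted:
    -- Register @new_child under @new_parent: copy all of the parent's ancestors,
    -- then add the direct link itself.
    INSERT INTO syndicates (child_id, parent_id)
    SELECT @new_child, parent_id
    FROM syndicates
    WHERE child_id = @new_parent
    UNION ALL
    SELECT @new_child, @new_parent;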

    Read the article

  • ASP.Net 4.5 Garbage Collection Improvement

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2013/06/24/asp.net-4.5-garbage-collection-improvement.aspx
    I just read Five Great .NET Framework 4.5 Features on CodeProject by Shivprasad koirala. Feature 5 in his article mentions the GC background cleanup and has a good explanation of the work the GC has to do for ASP.NET on the server. "Garbage collector is one real heavy task in a .NET application. And it becomes heavier when it is an ASP.NET application. ASP.NET applications run on the server and a lot of clients send requests to the server thus creating loads of objects, making the GC really work hard for cleaning up unwanted objects." "To overcome the above problem, server GC was introduced. In server GC there is one more thread created which runs in the background. This thread works in the background and keeps cleaning…objects thus minimizing the load on the main GC thread. Due to double GC threads running, the main application threads are less suspended, thus increasing application throughput. To enable server GC, we need to use the gcServer XML tag and enable it to true."
    <configuration>
      <runtime>
        <gcServer enabled="true"/>
      </runtime>
    </configuration>
    This is not done by default. The MSDN information page says "There are only two garbage collection options, workstation or server. For single-processor computers, the default workstation garbage collection should be the fastest option. Either workstation or server can be used for two-processor computers. Server garbage collection should be the fastest option for more than two processors. Use the GCSettings.IsServerGC property to determine if server garbage collection is enabled." "In the .NET Framework 4 and earlier versions, concurrent garbage collection is not available when server garbage collection is enabled. Starting with the .NET Framework 4.5, server garbage collection is concurrent. To use non-concurrent server garbage collection, set the <gcServer> element to true and the <gcConcurrent> element to false." So if you're using ASP.NET 4.5 and have a multi-core server, you should try turning on Server Garbage Collection and do some profiling to see if it improves the performance of your site.
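    A quick way to verify the setting actually took effect after deployment (a small sketch, not from the article):
    using System;
    using System.Runtime;

    class GcModeCheck
    {
        static void Main()
        {
            // True when <gcServer enabled="true"/> (or the host) enabled server GC.
            Console.WriteLine("Server GC: " + GCSettings.IsServerGC);
            Console.WriteLine("Latency mode: " + GCSettings.LatencyMode);
        }
    }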

    Read the article

  • Voxel Face Crawling (Mesh simplification, possibly using greedy)

    - by Tim Winter
    This is in regards to a Minecraft-like terrain engine. I store blocks in chunks (16x256x16 blocks in a chunk). When I generate a chunk, I use multiple procedural techniques to set the terrain and to place objects. While generating, I keep one 1D array for the full chunk (solid or not) and a separate 1D array of solid blocks. After generation, I iterate through the solid blocks checking their neighbors so I only generate block faces that don't have solid neighbors. I store which faces to generate in their own list (that's 6 lists, one per possible face). When rendering a chunk, I render all lists in the camera's current chunk and only the lists facing the camera in all other chunks. Using a 2D atlas with this little shader trick Andrew Russell suggested, I want to merge similar faces together completely. That is, if they are in the same list (same normal), are adjacent to each other, have the same light level, etc. My assumption would be to have each of the 6 lists sorted by the axis they rest on, then by the other two axes (the list for the top of a block would be sorted by it's Y value, then X, then Z). With this alone, I could quite easily merge strips of faces, but I'm looking to merge more than just strips together when possible. I've read up on this greedy meshing algorithm, but I am having a lot of trouble understanding it. To even use it, I would think I'd need to perform a type of flood-fill per sorted list to get the groups of merge-able faces. Then, per group, perform the greedy algorithm. It all sounds awfully expensive if I would ever want dynamic terrain/lighting after initial generation. So, my question: To perform merging of faces as described (ignoring whether it's a bad idea for dynamic terrain/lighting), is there perhaps an algorithm that is simpler to implement? I would also quite happily accept an answer that walks me through the greedy algorithm in a much simpler way (a link or explanation). I don't mind a slight performance decrease if it's easier to implement or even if it's only a little better than just doing strips. I worry that most algorithms focus on triangles rather than quads and using a 2D atlas the way I am, I don't know that I could implement something triangle based with my current skills. PS: I already frustum cull per chunk and as described, I also cull faces between solid blocks. I don't occlusion cull yet and may never.
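    To make the greedy step less abstract, here is a minimal 2D sketch (my addition, not from the linked article) of merging one face direction on a single slice. mask marks faces that exist and are mergeable (same texture, light level, etc.), and EmitQuad stands in for adding the merged w*h face to your vertex list.
    // Greedy rectangle merge over one slice of one face list.
    void MergeSlice(bool[,] mask, int sizeX, int sizeY, Action<int, int, int, int> EmitQuad)
    {
        for (int y = 0; y < sizeY; y++)
        {
            for (int x = 0; x < sizeX; x++)
            {
                if (!mask[x, y]) continue;

                // 1. Grow a run along X as far as possible.
                int w = 1;
                while (x + w < sizeX && mask[x + w, y]) w++;

                // 2. Grow along Y while every cell in the next row of width w is set.
                int h = 1;
                bool grow = true;
                while (grow && y + h < sizeY)
                {
                    for (int k = 0; k < w; k++)
                    {
                        if (!mask[x + k, y + h]) { grow = false; break; }
                    }
                    if (grow) h++;
                }

                // 3. Consume the rectangle and emit a single quad for it.
                for (int dy = 0; dy < h; dy++)
                    for (int dx = 0; dx < w; dx++)
                        mask[x + dx, y + dy] = false;

                EmitQuad(x, y, w, h);
            }
        }
    }
    This degenerates to plain strip merging when h never grows past 1, so it is strictly no worse than the strips you already plan to generate.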

    Read the article

  • Blazing fast performance with RadGridView for WPF 4.0 and Entity Framework 4.0

    Just before our upcoming release of Q1 2010 SP1 (early next week), I've decided to check how RadGridView for WPF will handle a complex Entity Framework 4.0 query with almost 2 million records:
    public class MyDataContext
    {
        IQueryable _Data;
        public IQueryable Data
        {
            get
            {
                if (_Data == null)
                {
                    var northwindEntities = new NorthwindEntities();
                    var queryable = from o in northwindEntities.Orders
                                    from od in northwindEntities.Order_Details
                                    select new
                                    {
                                        od.OrderID,
                                        od.ProductID,
                                        od.UnitPrice,
                                        od.Quantity,
                                        od.Discount,
                                        o.CustomerID,
                                        o.EmployeeID,
                                        o.OrderDate
                                    };
                    _Data = queryable.OrderBy(i => i.OrderID);
                }
                return _Data;
            }
        }
    }
    The grid is bound completely codeless in XAML using RadDataPager with PageSize set to 50:
    <Window x:Class="WpfApplication1.MainWindow"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:telerik="http://schemas.telerik.com/2008/xaml/presentation"
            Title="MainWindow" mc...

    Read the article
