Search Results

Search found 48586 results on 1944 pages for 'page performance'.


  • Windows HPC Server links

    I've already described how to set up a Windows HPC Server for development. Before you dive into developing for the cluster, if you are new to this it is probably a good idea to learn the basics by reading some overview material. Below is a list of links.
    Direct links to Windows HPC content
    1. Windows HPC Server 2008 Overview Datasheet (4 page pdf).
    2. Windows HPC Server 2008 Technical Overview (32 page doc).
    3. Windows HPC Server 2008 Getting Started Guide (26 page doc), which is also available online as part of the TechNet technical library section on Windows HPC Server 2008 and includes much more useful material.
    4. Windows HPC Server 2008 Job Scheduler (38 page doc).
    5. Windows HPC Server 2008 Job Templates (56 page doc).
    6. Developing for the Windows HPC Server 2008 Platform (16 page doc or pdf version).
    Windows HPC sites
    7. Windows HPC Forums.
    8. HPC Developer Resources.
    9. Windows HPC Server 2008 Resource Kit - Developer.
    10. Windows HPC Server 2008 - TechNet.
    11. The Windows HPC Team Blog.
    HPC courses
    12. High-Performance Computing Fundamentals Course (Pluralsight).
    13. Classic HPC Development using Visual C++ (course slides and materials in a ZIP). Author's blog post.
    14. From sequential to parallel code (course slides and materials in a ZIP). Author's blog post.
    Next time I will post resources specific to the most popular programming models for the cluster today: MPI and Cluster SOA - until then, happy reading! Comments about this post are welcome at the original blog.

    Read the article

  • Consolidating and Virtualizing with Oracle’s Network Fabric

    - by Ferhat Hatay
    Server, storage and operating system virtualization technologies are already widely deployed within datacenters, and are considered an integral component to drive cost savings and agility. These technologies are now being combined with network virtualization to usher in a new era of cloud computing. Oracle provides a networking fabric that delivers cloud-ready network services based on Ethernet or InfiniBand fabrics that are tightly integrated with application infrastructure. Oracle’s network fabric provides the performance and manageability required for any Oracle application environment or private cloud infrastructure.
    [Figure: Logical architecture of Oracle’s network fabric.]
    Oracle’s unique ability to deliver extreme performance and scale by tightly integrating network services across application infrastructure is demonstrated in the Oracle Exalogic Elastic Cloud and the Oracle Exadata Database Machine. These engineered solutions offer up to 5X and 10X performance gains respectively compared to traditional multivendor architectures where the offerings are not engineered to work together. By integrating advanced networking capabilities across the entire hardware and software stack, Oracle’s network fabric can help maximize application performance and scale, reduce the number of network components, and simplify datacenter operations through integrated network management and orchestration. The resulting business benefits are:
    - Reduced acquisition costs
    - Lower power and cooling costs
    - Reduced management costs
    - Faster deployment
    - Greater agility in meeting changing business needs
    For more information see the whitepaper: Consolidating and Virtualizing Datacenter Networks with Oracle's Network Fabric.

    Read the article

  • Webfarm and IIS configuration tips/tricks

    - by steve schofield
    I was recently talking with some good friends about tips for performance and what an IIS administrator could do on the server side. I also see this question from time to time in the forums @ http://forums.iis.net. Of course, you should test individual settings in a controlled environment while performing load testing before just implementing them on your production farm.
    - Enable IIS compression (both static and dynamic if possible, set it to 9). If you are running IIS 6, check out this article by Scott Forsyth.
    - Run FRT (Failed Request Tracing) for long-running pages.
    - Use SQL connection pooling in code.
    - Look at using PAL with performance counters (http://blogs.iis.net/ganekar/archive/2009/08/12/pal-performance-analyzer-with-iis.aspx).
    - Look at load testing using the Visual Studio load testing tools.
    - Use Log Parser to find long-running pages (a sample query is sketched below).
    - Look at CPU, memory and disk counters. Make sure the server has enough resources.
    - Use the same machineKey and account across all nodes in the farm.
    - Localized content vs. using UNC-based content on a single server (my UNC tag has great posts).
    - Content expiration; keep ETags the same across all servers in the web farm.
    - Disable the Scalable Networking Pack.
    - Use YSlow or the developer tools in Chrome to help measure client experience improvements.
    Additionally, some basic counters for measuring applications are: Concurrent Connections, Requests/sec, and Requests Queued. I would also recommend checking out Chapter 17 in the IIS 7 Resource Kit; it was one of the chapters I authored. :) I strongly suggest testing one change at a time to see how it helps improve your performance. Hopefully this post provides a few options to review in your environment.
    Cheers, Steve Schofield
    Microsoft MVP - IIS
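    As a hedged illustration of the Log Parser tip above (the original post's examples did not survive aggregation), a query along these lines surfaces slow pages from IIS W3C logs. The log file pattern, fields and thresholds are assumptions for a default W3C logging setup; adjust them for your site and run it with LogParser.exe using -i:IISW3C.

        -- Sketch: top 20 URLs by average response time, slowest first
        -- (ex*.log is the IIS 6 naming convention; IIS 7 typically uses u_ex*.log)
        SELECT TOP 20
            cs-uri-stem,
            COUNT(*)        AS Hits,
            AVG(time-taken) AS AvgTimeMs,
            MAX(time-taken) AS MaxTimeMs
        FROM ex*.log
        GROUP BY cs-uri-stem
        ORDER BY AvgTimeMs DESC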

    Read the article

  • Option Trading: Getting the most out of the event session options

    - by extended_events
    You can control different aspects of how an event session behaves by setting the event session options as part of the CREATE EVENT SESSION DDL. The default settings for the event session options are designed to handle most of the common event collection situations so I generally recommend that you just use the defaults. Like everything in the real world though, there are going to be a handful of “special cases” that require something different. This post focuses on identifying the special cases and the correct use of the options to accommodate those cases. There is a reason it’s called Default The default session options specify a total event buffer size of 4 MB with a 30 second latency. Translating this into human terms; this means that our default behavior is that the system will start processing events from the event buffer when we reach about 1.3 MB of events or after 30 seconds, which ever comes first. Aside: What’s up with the 1.3 MB, I thought you said the buffer was 4 MB?The Extended Events engine takes the total buffer size specified by MAX_MEMORY (4MB by default) and divides it into 3 equally sized buffers. This is done so that a session can be publishing events to one buffer while other buffers are being processed. There are always at least three buffers; how to get more than three is covered later. Using this configuration, the Extended Events engine can “keep up” with most event sessions on standard workloads. Why is this? The fact is that most events are small, really small; on the order of a couple hundred bytes. Even when you start considering events that carry dynamically sized data (eg. binary, text, etc.) or adding actions that collect additional data, the total size of the event is still likely to be pretty small. This means that each buffer can likely hold thousands of events before it has to be processed. When the event buffers are finally processed there is an economy of scale achieved since most targets support bulk processing of the events so they are processed at the buffer level rather than the individual event level. When all this is working together it’s more likely that a full buffer will be processed and put back into the ready queue before the remaining buffers (remember, there are at least three) are full. I know what you’re going to say: “My server is exceptional! My workload is so massive it defies categorization!” OK, maybe you weren’t going to say that exactly, but you were probably thinking it. The point is that there are situations that won’t be covered by the Default, but that’s a good place to start and this post assumes you’ve started there so that you have something to look at in order to determine if you do have a special case that needs different settings. So let’s get to the special cases… What event just fired?! How about now?! Now?! If you believe the commercial adage from Heinz Ketchup (Heinz Slow Good Ketchup ad on You Tube), some things are worth the wait. This is not a belief held by most DBAs, particularly DBAs who are looking for an answer to a troubleshooting question fast. If you’re one of these anxious DBAs, or maybe just a Program Manager doing a demo, then 30 seconds might be longer than you’re comfortable waiting. If you find yourself in this situation then consider changing the MAX_DISPATCH_LATENCY option for your event session. This option will force the event buffers to be processed based on your time schedule. 
    This option only makes sense for the asynchronous targets since those are the ones where we allow events to build up in the event buffer – if you’re using one of the synchronous targets this option isn’t relevant.
    Avoid forgotten events by increasing your memory
    Have you ever had one of those days where you keep forgetting things? That can happen in Extended Events too; we call it dropped events. In order to optimize for server performance and help ensure that Extended Events doesn’t block the server, the engine will drop events that can’t be published to a buffer because the buffer is full. You can determine if events are being dropped from a session by querying the dm_xe_sessions DMV and looking at the dropped_event_count field.
    Aside: Should you care if you’re dropping events? Maybe not – think about why you’re collecting data in the first place and whether you’re really going to miss a few dropped events. For example, if you’re collecting query duration stats over thousands of executions of a query it won’t make a huge difference to miss a couple of executions. Use your best judgment.
    If you find that your session is dropping events it means that the event buffer is not large enough to handle the volume of events that are being published. There are two ways to address this problem. First, you could collect fewer events – examine your session to see if you are over collecting. Do you need all the actions you’ve specified? Could you apply a predicate to be more specific about when you fire the event? Assuming the session is defined correctly, the next option is to change the MAX_MEMORY option to a larger number. Picking the right event buffer size might take some trial and error, but a good place to start is with the number of dropped events compared to the number you’ve collected.
    Aside: There are three different behaviors for dropping events that you specify using the EVENT_RETENTION_MODE option. The default is to allow single event loss and you should stick with this setting since it is the best choice for keeping the impact on server performance low. You’ll be tempted to use the setting to not lose any events (NO_EVENT_LOSS) – resist this urge since it can result in blocking on the server. If you’re worried that you’re losing events you should be increasing your event buffer memory as described in this section.
    Some events are too big to fail
    A less common reason for dropping an event is when an event is so large that it can’t fit into the event buffer. Even though most events are going to be small, you might find a condition that occasionally generates a very large event. You can determine if your session is dropping large events by looking at the dm_xe_sessions DMV once again, this time checking the largest_event_dropped_size. If this value is larger than the size of your event buffer [remember, the size of your event buffer, by default, is max_memory / 3] then you need a large event buffer. To specify a large event buffer you set the MAX_EVENT_SIZE option to a value large enough to fit the largest event dropped based on data from the DMV. When you set this option the Extended Events engine will create two buffers of this size to accommodate these large events. As an added bonus (no extra charge) the large event buffer will also be used to store normal events in the cases where the normal event buffers are all full and waiting to be processed. (Note: This is just a side-effect, not the intended use. If you’re dropping many normal events then you should increase your normal event buffer size.)
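    A minimal sketch of the dropped-event check described above, against the DMV the post names (the session name is a placeholder):

        -- Is the session dropping events, and how big was the largest event it dropped?
        -- ('my_session' is a placeholder session name)
        SELECT s.name,
               s.dropped_event_count,         -- events lost because all buffers were full
               s.largest_event_dropped_size   -- compare this against MAX_MEMORY / 3
        FROM sys.dm_xe_sessions AS s
        WHERE s.name = 'my_session';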
    Partitioning: moving your events to a sub-division
    Earlier I alluded to the fact that you can configure your event session to use more than the standard three event buffers – this is called partitioning and is controlled by the MEMORY_PARTITION_MODE option. The result of setting this option is fairly easy to explain, but knowing when to use it is a bit more art than science. First the science…
    You can configure partitioning in three ways: None, Per NUMA Node & Per CPU. This specifies the location where sets of event buffers are created, with fairly obvious implications. There are rules we follow for sub-dividing the total memory (specified by MAX_MEMORY) between all the event buffers that are specific to the mode used:
    None: 3 buffers (fixed)
    Node: 3 * number_of_nodes
    CPU: 2.5 * number_of_cpus
    Here are some examples of what this means for different Node/CPU counts:
    Configuration    | None      | Node       | CPU
    2 CPUs, 1 Node   | 3 buffers | 3 buffers  | 5 buffers
    6 CPUs, 2 Nodes  | 3 buffers | 6 buffers  | 15 buffers
    40 CPUs, 5 Nodes | 3 buffers | 15 buffers | 100 buffers
    Aside: Buffer size on multi-processor computers. As the number of Nodes or CPUs increases, the size of the event buffer gets smaller because the total memory is sub-divided into more pieces. The defaults will hold up to this for a while since each buffer set is holding events only from the Node or CPU that it is associated with, but at some point the buffers will get too small and you’ll either see events being dropped or you’ll get an error when you create your session because you’re below the minimum buffer size. Increase the MAX_MEMORY setting to an appropriate number for the configuration.
    The most likely reason to start partitioning is going to be related to performance. If you notice that running an event session is impacting the performance of your server beyond a reasonably expected level [Yes, there is a reasonably expected level of work required to collect events.] then partitioning might be an answer. Before you partition you might want to check a few other things:
    Is your event retention set to NO_EVENT_LOSS and causing blocking? (I told you not to do this.) Consider changing your event loss mode or increasing memory.
    Are you over collecting and causing more work than necessary? Consider adding predicates to events or removing unnecessary events and actions from your session.
    Are you writing the file target to the same slow disk that you use for TempDB and your other high activity databases? <kidding> <not really> It’s always worth considering the end to end picture – if you’re writing events to a file you can be impacted by I/O, network; all the usual stuff.
    Assuming you’ve ruled out the obvious (and not so obvious) issues, there are performance conditions that will be addressed by partitioning. For example, it’s possible to have a successful event session (eg. no dropped events) but still see a performance impact because you have many CPUs all attempting to write to the same free buffer and having to wait in line to finish their work. This is a case where partitioning would relieve the contention between the different CPUs and likely reduce the performance impact caused by the event session. There is no DMV you can check to find these conditions – sorry – that’s where the art comes in. This is largely a matter of experimentation. On the bright side you probably won’t need to worry about this level of detail all that often. The performance impact of Extended Events is significantly lower than what you may be used to with SQL Trace.
    You will likely only care about the impact if you are trying to set up a long running event session that will be part of your everyday workload – sessions used for short term troubleshooting will likely fall into the “reasonably expected impact” category.
    Hey buddy – I think you forgot something
    OK, there are two options I didn’t cover: STARTUP_STATE & TRACK_CAUSALITY. If you want your event sessions to start automatically when the server starts, set the STARTUP_STATE option to ON. (Now there is only one option I didn’t cover.) I’m going to leave causality for another post since it’s not really related to session behavior, it’s more about event analysis. - Mike
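    To pull the options discussed in this post together, here is a minimal sketch of a session definition. The event, action, predicate, target and file path are illustrative placeholders and the option values are examples rather than recommendations; note that on SQL Server 2012 and later the file target is named package0.event_file instead of package0.asynchronous_file_target.

        -- Sketch only: event, action, predicate, target and path are placeholders
        CREATE EVENT SESSION [query_duration_demo] ON SERVER
        ADD EVENT sqlserver.sql_statement_completed
            (ACTION (sqlserver.sql_text)
             WHERE (duration > 1000000))                      -- a predicate keeps collection focused
        ADD TARGET package0.asynchronous_file_target          -- asynchronous target, so latency applies
            (SET filename = N'C:\xe\query_duration_demo.xel')
        WITH (
            MAX_MEMORY = 8 MB,                                -- total buffer space, split into buffers
            EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS,   -- the default; avoid NO_EVENT_LOSS
            MAX_DISPATCH_LATENCY = 5 SECONDS,                 -- flush sooner than the 30 second default
            MAX_EVENT_SIZE = 0 KB,                            -- raise only if large events are dropped
            MEMORY_PARTITION_MODE = NONE,                     -- NONE | PER_NODE | PER_CPU
            TRACK_CAUSALITY = OFF,
            STARTUP_STATE = ON                                -- start automatically with the server
        );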

    Read the article

  • The Social Business Thought Leaders - John Hagel

    - by kellsey.ruppel
    While many European economies are on the brink of a recession between increasing taxation and mounting loss of jobs and bankruptcy filing rates, there's an understandable risk of losing sight of the deeper forces at play. Yet instead of surrendering to uncertainty and trying to survive in the short term, many organizations are feeling the urge to be better prepared to thrive in these complex times by developing a more articulated long term understanding of both the opportunities / challenges ahead. For example: What long-term economic, technological and societal changes are rolling out? Which foundational dynamics will affect our companies' performance, productivity, competition, and innovative potential in the upcoming decades? How will digital infrastructure change our business landscape? What kind of capabilities will be key to compete in a market shaped by growing turbulence, unpredictability and volatility? Breaking out from a strictly cyclical thinking, studies such as the Shift Index by John Hagel, Co-Chairman of the Center for the Edge at Deloitte & Touche (See Measuring the forces of long-term change - The 2009 Shift Index), depict a worrying performance challenge that affected every industry in the entire US economy over the last 45 years. Amidst a more than doubled competitive intensity of the market, and even with an improved labor productivity, the actual performance of US firms has consistently fallen to 25% of what it was in 1965. Most of this reported value is shifting from institutions and organizations to individuals, whether they are customers or young creative talent. To thrive in the digital economy and reverse declining performance trends, companies will have to fundamentally rethink their management approach by moving from knowledge stocks to knowledge flows, from scalable efficiency to scalable learning, from push organizations to pull organizations. Based on the outcomes of the Shift Index and on the book The Power of Pull, the first episode of the Social Business Thought-Leaders features John Hagel to provide strategic insights on how companies will succeed in the 21st century.

    Read the article

  • Flash Technology Can Revolutionize your IT Infrastructure

    - by kimberly.billings
    A recent article in the Data Center Journal written by Mark Teter outlines how flash is becoming a disruptive technology in the data center and how it will soon replace HDDs in the storage hierarchy. As Teter explains, the drivers behind this trend are lower cost/performance and power savings; flash is over 100x faster for reads than the fastest HDD, and while it is expensive, it can produce dramatic reductions in the cost of performance as measured in Input/Outputs per second (IOPS). What's more, flash consumes 1/5th the power of HDD, so it's faster AND greener. Teter writes, "when appropriately used, flash turns the current economics of IT performance on its head. That's disruptive." Exadata Smart Flash Cache in the Sun Oracle Database Machine makes intelligent use of flash storage to deliver extreme performance for OLTP and mixed workloads. It intelligently caches data from the Oracle Database, replacing slow mechanical I/O operations to disk with very rapid flash memory operations. Exadata Smart Flash Cache is the fundamental technology of the Sun Oracle Database Machine that enables the processing of up to 1 million random I/O operations per second (IOPS), and the scanning of data within Exadata storage at up to 50 GB/second. Are you incorporating flash into your storage strategy? Let us know! Read more: "Flash technology can revolutionize your IT infrastructure", The Data Center Journal, March 30, 2010. Exadata Smart Flash Cache and the Sun Oracle Database Machine white paper

    Read the article

  • EM12c Release 4: Cloud Control to Major Tom...

    - by abulloch
    With the latest release of Enterprise Manager 12c, Release 4 (12.1.0.4) the EM development team has added new functionality to assist the EM Administrator to monitor the health of the EM infrastructure.   Taking feedback delivered from customers directly and through customer advisory boards some nice enhancements have been made to the “Manage Cloud Control” sections of the UI, commonly known in the EM community as “the MTM pages” (MTM stands for Monitor the Monitor).  This part of the EM Cloud Control UI is viewed by many as the mission control for EM Administrators. In this post we’ll highlight some of the new information that’s on display in these redesigned pages and explain how the information they present can help EM administrators identify potential bottlenecks or issues with the EM infrastructure. The first page we’ll take a look at is the newly designed Repository information page.  You can get to this from the main Setup menu, through Manage Cloud Control, then Repository.  Once this page loads you’ll see the new layout that includes 3 tabs containing more drill-down information. The Repository Tab The first tab, Repository, gives you a series of 6 panels or regions on screen that display key information that the EM Administrator needs to review from time to time to ensure that their infrastructure is in good health. Rather than go through every panel let’s call out a few and let you explore the others later yourself on your own EM site.  Firstly, we have the Repository Details panel. At a glance the EM Administrator can see the current version of the EM repository database and more critically, three important elements of information relating to availability and reliability :- Is the database in Archive Log mode ? Is the database using Flashback ? When was the last database backup taken ? In this test environment above the answers are not too worrying, however, Production environments should have at least Archivelog mode enabled, Flashback is a nice feature to enable prior to upgrades (for fast rollback) and all Production sites should have a backup.  In this case the backup information in the Control file indicates there’s been no recorded backups taken. The next region of interest to note on this page shows key information around the Repository configuration, specifically, the initialisation parameters (from the spfile). If you’re storing your EM Repository in a Cluster Database you can view the parameters on each individual instance using the Instance Name drop-down selector in the top right of the region. Additionally, you’ll note there is now a check performed on the active configuration to ensure that you’re using, at the very least, Oracle minimum recommended values.  Should the values in your EM Repository not meet these requirements it will be flagged in this table with a red X for non-compliance.  You can of-course change these values within EM by selecting the Database target and modifying the parameters in the spfile (and optionally, the run-time values if the parameter allows dynamic changes). The last region to call out on this page before moving on is the new look Repository Scheduler Job Status region. This region is an update of a similar region seen on previous releases of the MTM pages in Cloud Control but there’s some important new functionality that’s been added that customers have requested. First-up - Restarting Repository Jobs.  
As you can see from the graphic, you can now optionally select a job (by selecting the row in the UI table element) and click on the Restart Job button to take care of any jobs which have stopped or stalled for any reason.  Previously this needed to be done at the command line using EMDIAG or through a PL/SQL package invocation.  You can now take care of this directly from within the UI. Next, you’ll see that a feature has been added to allow the EM administrator to customise the run-time for some of the background jobs that run in the Repository.  We heard from some customers that ensuring these jobs don’t clash with Production backups, etc is a key requirement.  This new functionality allows you to select the pencil icon to edit the schedule time for these more resource intensive background jobs and modify the schedule to avoid clashes like this. Moving onto the next tab, let’s select the Metrics tab. The Metrics Tab There’s some big changes here, this page contains new information regions that help the Administrator understand the direct impact the in-bound metric flows are having on the EM Repository.  Many customers have provided feedback that they are in the dark about the impact of adding new targets or large numbers of new hosts or new target types into EM and the impact this has on the Repository.  This page helps the EM Administrator get to grips with this.  Let’s take a quick look at two regions on this page. First-up there’s a bubble chart showing a comprehensive view of the top resource consumers of metric data, over the last 30 days, charted as the number of rows loaded against the number of collections for the metric.  The size of the bubble indicates a relative volume.  You can see from this example above that a quick glance shows that Host metrics are the largest inbound flow into the repository when measured by number of rows.  Closely following behind this though are a large number of collections for Oracle Weblogic Server and Application Deployment.  Taken together the Host Collections is around 0.7Mb of data.  The total information collection for Weblogic Server and Application Deployments is 0.38Mb and 0.37Mb respectively. If you want to get this information breakdown on the volume of data collected simply hover over the bubble in the chart and you’ll get a floating tooltip showing the information. Clicking on any bubble in the chart takes you one level deeper into a drill-down of the Metric collection. Doing this reveals the individual metric elements for these target types and again shows a representation of the relative cost - in terms of Number of Rows, Number of Collections and Storage cost of data for each Metric type. Looking at another panel on this page we can see a different view on this data. This view shows a view of the Top N metrics (the drop down allows you to select 10, 15 or 20) and sort them by volume of data.  In the case above we can see the largest metric collection (by volume) in this case (over the last 30 days) is the information about OS Registered Software on a Host target. Taken together, these two regions provide a powerful tool for the EM Administrator to understand the potential impact of any new targets that have been discovered and promoted into management by EM12c.  It’s a great tool for identifying the cause of a sudden increase in Repository storage consumption or Redo log and Archive log generation. 
Using the information on this page EM Administrators can take action to mitigate any load impact by deploying monitoring templates to the targets causing most load if appropriate.   The last tab we’ll look at on this page is the Schema tab. The Schema Tab Selecting this tab brings up a window onto the SYSMAN schema with a focus on Space usage in the EM Repository.  Understanding what tablespaces are growing, at what rate, is essential information for the EM Administrator to stay on top of managing space allocations for the EM Repository so that it works as efficiently as possible and performs well for the users.  Not least because ensuring storage is managed well ensures continued availability of EM for monitoring purposes. The first region to highlight here shows the trend of space usage for the tablespaces in the EM Repository over time.  You can see the upward trend here showing that storage in the EM Repository is being consumed on an upward trend over the last few days here. This is normal as this EM being used here is brand new with Agents being added daily to bring targets into monitoring.  If your Enterprise Manager configuration has reached a steady state over a period of time where the number of new inbound targets is relatively small, the metric collection settings are fairly uniform and standardised (using Templates and Template Collections) you’re likely to see a trend of space allocation that plateau’s. The table below the trend chart shows the Top 20 Tables/Indexes sorted descending by order of space consumed.  You can switch the trend view chart and corresponding detail table by choosing a different tablespace in the EM Repository using the drop-down picker on the top right of this region. The last region to highlight on this page is the region showing information about the Purge policies in effect in the EM Repository. This information is useful to illustrate to EM Administrators the default purge policies in effect for the different categories of information available in the EM Repository.  Of course, it’s also been a long requested feature to have the ability to modify these default retention periods.  You can also do this using this screen.  As there are interdependencies between some data elements you can’t modify retention policies on a feature by feature basis.  Instead, retention policies take categories of information and bundles them together in Groups.  Retention policies are modified at the Group Level.  Understanding the impact of this really deserves a blog post all on it’s own as modifying these can have a significant impact on both the EM Repository’s storage footprint and it’s performance.  For now, we’re just highlighting the features visibility on these new pages. As a user of EM12c we hope the new features you see here address some of the feedback that’s been given on these pages over the past few releases.  We’ll look out for any comments or feedback you have on these pages ! 

    Read the article

  • My View on ASP.NET Web Forms versus MVC

    - by Ricardo Peres
    Introduction A lot has been said on Web Forms and MVC, but since I was recently asked about my opinion on the subject, here it is. First, I have to say that I really like both technologies and I don’t think any is going away – just remember SharePoint, which is built on top of Web Forms. I see them as complementary, targeting different needs and leveraging different skills. Let’s go through some of their differences. Rapid Application Development Rapid Application Development (RAD) is the development process by which you have an Integrated Development Environment (IDE), a visual design surface and a toolbox, and you drag components from the toolbox to the design surface and set their properties through a property inspector. It was introduced with some of the earliest Windows graphical IDEs such as Visual Basic and Delphi. With Web Forms you have RAD out of the box. Visual Studio offers a generally good (and extensible) designer for the layout of pages and web user controls. Designing a page may simply be about dragging controls from the toolbox, setting their properties and wiring up some events to event handlers, which are implemented in code behind .NET classes. Most people will be familiar with this kind of development and enjoy it. You can see what you are doing from the beginning. MVC also has designable pages – called views in MVC terminology – the problem is that they can be built using different technologies, some of which, at the moment (MVC 4) do not support RAD – Razor, for example. I believe it is just a matter of time for that to be implemented in Visual Studio, but it will mostly consist on HTML editing, and until that day comes, you have to live with source editing. Development Model Web Forms features the same development model that you are used to from Windows Forms and other similar technologies: events fired by controls and automatic persistence of their properties between postbacks. For that, it uses concepts such as view state, which some may love and others may hate, because it may be misused quite easily, but otherwise does its job well. Another fundamental concept is data binding, by which a collection of data can be fed to a control and have it render that data somehow – just thing of the GridView control. The focus is on the page, that’s where it all starts, and you can place everything in the same code behind class: data access, business logic, layout, etc. The controls take care of generating a great part of the HTML and JavaScript for you. With MVC there is no free lunch when it comes to data persistence between requests, you have to implement it yourself. As for event handling, that is at the core of MVC, in the form of controllers and action methods, you just don’t think of them as event handlers. In MVC you need to think more in HTTP terms, so action methods such as POST and GET are relevant to you, and may write actions to handle one or the other. Also of crucial importance is model binding: the way by which MVC converts your posted data into a .NET class. This is something that ASP.NET 4.5 Web Forms has introduced as well, but it is a cornerstone in MVC. MVC also has built-in validation of these .NET classes, which out of the box uses the Data Annotations API. You have full control of the generated HTML - except for that coming from the helper methods, usually small fragments - which requires a greater familiarity with the specifications. 
You normally rely much more on JavaScript APIs, they are even included in the Visual Studio template, that is because much less is done for you. Reuse It is difficult to accept a professional company/project that does not employ reuse. It can save a lot of time thus cutting costs significantly. Code reused in several projects matures as time goes by and helps developers learn from past experiences. ASP.NET Web Forms was built with reuse in mind, in the form of controls. Controls encapsulate functionality and are generally portable from project to project (with the notable exception of web user controls, those with an associated .ASCX markup file). ASP.NET has dozens of controls and it is very easy to develop new ones, so I believe this is a great advantage. A control can inject JavaScript code and external references as well as generate HTML an CSS. MVC on the other hand does not use controls – it is possible to use them, with some view engines like ASPX, but it is just not advisable because it breaks the flow – where do Init, Load, PreRender, etc, fit? The most similar to controls is extension methods, or helpers. They serve the same purpose – generating HTML, CSS or JavaScript – and can be reused between different projects. What differentiates them from controls is that there is no inheritance and no context – an extension method is just a static method which doesn’t know where it is being called. You also have partial views, which you can reuse in the same project, but there is no inheritance as well. This, in my view, is a weakness of MVC. Architecture Both technologies are highly extensible. I have writtenstarted writing a series of posts on ASP.NET Web Forms extensibility and will probably write another series on MVC extensibility as well. A number of scenarios are covered in any of these models, and some extensibility points apply to both, because, of course both stand upon ASP.NET. With Web Forms, if you’re like me, you start by defining you master pages, pages and controls, with some helper classes to glue everything. You may as well throw in some JavaScript, but probably you’re main work will be with plain old .NET code. The controls you define have the chance to inject JavaScript code and references, through either the ScriptManager or the page’s ClientScript object, as well as generating HTML and CSS code. The master page and page model with code behind classes offer a number of “hooks” by which you can change the normal way of things, for example, in a page you can access any control on the master page, add script or stylesheet references to its head and even change the page’s title. Also, with Web Forms, you typically have URLs in the form “/SomePath/SomePage.aspx?SomeParameter=SomeValue”, which isn’t really SEO friendly, no to mention the HTML that some controls produce, far from standards, optimization and best practices. In MVC, you also normally start by defining the master page (or layout) and views, which are the visible parts, and then define controllers on separate files. These controllers do not know anything about the views, except the names and types of the parameters that will be passed to and from them. The controller will be responsible for the data access and business logic, eventually relying on additional classes for this purpose. On a controller you only receive parameters and return a result, which may be a request for the rendering of a view, a redirection to another URL or a JSON object, to name just a few. 
The controller class does not know anything about the web, so you can effectively reuse it in a non-web project. This separation and the lack of programmatic access to the UI elements, makes it very difficult to implement, for example, something like SharePoint with MVC. OK, I know about Orchard, but it isn’t really a general purpose development framework, but instead, a CMS that happens to use MVC. Not having controls render HTML for you gives you in turn much more control over it – it is your responsibility to create it, which you can either consider a blessing or a curse, in the later case, you probably shouldn’t be using MVC at all. Also MVC URLs tend to be much more SEO-oriented, if you design your controllers and actions properly. Testing In a well defined architecture, you should separate business logic, data access logic and presentation logic, because these are all different things and it might even be the need to switch one implementation for another: for example, you might design a system which includes a data access layer, a business logic layer and two presentation layers, one on top of ASP.NET and the other with WPF; and the data access layer might be implemented first using NHibernate and later on switched for Entity Framework Code First. These changes are not that rare, so care should be taken in designing the system to make them possible. Web Forms are difficult to test, because it relies on event handlers which are only fired in web contexts, when a form is submitted or a page is requested. You can call them with reflection, but you have to set up a number of mocking objects first, HttpContext.Current first coming to my mind. MVC, on the other hand, makes testing controllers a breeze, so much that it even includes a template option for generating boilerplate unit test classes up from start. A well designed – from the unit test point of view - controller will receive everything it needs to work as parameters to its action methods, so you can pass whatever values you need very easily. That doesn’t mean, of course, that everything can be tested: views, for instance, are difficult to test without actually accessing the site, but MVC offers the possibility to compile views at build time, so that, at least, you know you don’t have syntax errors beforehand. 
Myths Some popular but unfounded myths around MVC include: You cannot use controls in MVC: not true, actually, you can, at least with the Web Forms (ASPX) view engine; the declaration and usage is exactly the same as with Web Forms; You cannot specify a base class for a view: with the ASPX view engine you can use the Inherits Page directive, with this and all the others you can use the pageBaseType and userControlBaseType attributes of the <page> element; MVC shields you from doing “bad things” on your views: well, you can place any code on a code block, at least with the ASPX view engine (you may be starting to see a pattern here), even data access code; The model is the entity model, tied to an O/RM: the model is actually any class that you use to pass values to a view, including (but generally not recommended) an entity model; Unit tests come with no cost: unit tests generally don’t cover the UI, although there are frameworks just for that (see WatiN, for example); also, for some tests, you will have to mock or replace either the HttpContext.Current property or the HttpContextBase class yourself; Everything is testable: views aren’t, without accessing the site; MVC relies on HTML5/some_cool_new_javascript_framework: there is no relation whatsoever, MVC renders whatever you want it to render and does not require any framework to be present. The thing is, the subsequent releases of MVC happened in a time when Microsoft has become much more involved in standards, so the files and technologies included in the Visual Studio templates reflect this, and it just happens to work well with jQuery, for example. Conclusion Well, this is how I see it. Some folks may think that I am being too rude on MVC, probably because I don’t like it, but that’s not true: like I said, I do like MVC and I am starting my new projects with it. I just don’t want to go along with that those that say that MVC is much superior to Web Forms, in fact, some things you can do much more easily with Web Forms than with MVC. I will be more than happy to hear what you think on this!

    Read the article

  • New Whitepaper: Best Practices for Gathering EBS Database Statistics

    - by Elke Phelps (Oracle Development)
    Most Oracle Applications DBAs and E-Business Suite users understand the importance of accurate database statistics.  Missing, stale or skewed statistics can adversely affect performance.  Oracle E-Business Suite statistics should only be gathered using FND_STATS or the Gather Statistics concurrent request. Gathering statistics with DBMS_STATS or the desupported ANALYZE command may result in suboptimal execution plans for E-Business Suite. Our E-Business Suite Performance Team has been busy implementing and testing new features for gathering statistics using FND_STATS in Oracle E-Business Suite databases.  The new features and guidelines for when and how to gather statistics are published in the following whitepaper:
    Best Practices for Gathering Statistics with Oracle E-Business Suite (Note 1586374.1)
    The new white paper details the following options for gathering statistics using FND_STATS and the Gather Statistics concurrent request:
    - History Mode - back up existing statistics prior to gathering new statistics
    - GATHER_AUTO Option - gather statistics for tables based upon % change
    - Histograms - collect statistics for histograms
    - AUTO Sampling - use the new FND_STATS feature that supports the AUTO option for using AUTO sample size
    - Extended Statistics - use the new FND_STATS feature that supports the creation of column groups and automatic statistics collection on the column groups when table statistics are gathered
    - Incremental Statistics - gather incremental statistics for partitioned tables
    The new white paper also includes examples and performance test cases for Extended Optimizer Statistics, Incremental Statistics Gathering, and Concurrent Statistics Gathering, along with details about the standalone Oracle E-Business Suite Release 11i and 12 patches that are required to take advantage of this new functionality.
    Your feedback is welcome
    We would be very interested in hearing about your experiences with these new options for gathering statistics.  Please feel free to post your comments here or drop us a line privately.
    Related Oracle OpenWorld 2013 Session: Getting Optimal Performance from Oracle E-Business Suite (CON8485)
    Related My Oracle Support Notes: Collecting Statistics with Oracle EBS 11i and R12 (Note 368252.1)
    Non-EBS Related Blogs, White Papers and My Oracle Support Notes: Oracle Optimizer Blog; Understanding Optimizer Statistics (white paper); Fixed Objects Statistics (GATHER_FIXED_OBJECTS_STATS) Considerations (Note 798257.1)
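    As a minimal, hedged sketch of the FND_STATS route the post recommends over DBMS_STATS: a call along these lines gathers statistics for a single application schema from SQL*Plus. The schema name and sample percentage are placeholders and the positional parameters shown are an assumption; the white paper (Note 1586374.1) and the FND_STATS package specification are the authoritative references, and the Gather Statistics concurrent request remains the preferred front end.

        -- Sketch: gather statistics for one schema with FND_STATS
        -- ('AR' and the 10 percent estimate are placeholders; verify against Note 1586374.1)
        BEGIN
          fnd_stats.gather_schema_stats('AR', 10);
        END;
        /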

    Read the article

  • Discover the MySQL Connect Content Catalog!

    - by Bertrand Matthelié
    The MySQL Connect content catalog is now live! MySQL Connect offers you a unique opportunity to attend:
    - Keynotes including: "The State of the Dolphin", by Oracle's Chief Corporate Architect Edward Screven and VP of MySQL Engineering Tomas Ulin, and an exciting panel on "Current MySQL Usage Models and Future Developments" with Davi Arnaud from LinkedIn, Daniel Austin from PayPal, Mark Callaghan from Facebook and Calvin Sun from Twitter.
    - Over 65 conference sessions enabling you to hear from: Oracle MySQL engineers on MySQL 5.6, InnoDB, replication, performance tuning, security, NoSQL, MySQL Cluster, Big Data...and more; MySQL customers including the US Census Bureau, Big Fish Games, Booking.com, Ticketmaster, and Tumblr; and internationally recognized MySQL community members and partners on topics such as performance, MySQL 5.6, backup, MySQL in the Cloud, OpenStack and Hadoop.
    - 6 Birds-of-a-Feather sessions about sharding, replication, backup, and other subjects.
    - 8 Hands-On Labs designed to give you hands-on experience with MySQL replication, the MySQL Performance Schema, MySQL Cluster...and more.
    - 6 Tutorials providing you in-depth knowledge about MySQL performance tuning best practices, enhancing productivity with MySQL 5.6 new features, or the essentials to get started with MySQL (tutorials are available as an add-on package to MySQL Connect registrants).
    - Demo pods and exhibitors, to learn more about partners' and Oracle's offerings.
    - Receptions on both Saturday and Sunday nights, enabling you to ask all your questions of Oracle's MySQL engineers and to network with some of the world's best MySQL professionals.
    Check out the MySQL Connect content catalog and find out about the amazing sessions you have the opportunity to attend.
    Reminder: The early bird discount is running until July 19. Register now to save US$500! Plan to attend Oracle OpenWorld or JavaOne? Add the MySQL Connect event to your Oracle OpenWorld or JavaOne registration for only US$100. Exhibit/sponsorship opportunities are also available. We look forward to seeing you at MySQL Connect!

    Read the article

  • C#/.NET – Finding an Item’s Index in IEnumerable&lt;T&gt;

    - by James Michael Hare
    Sorry for the long blogging hiatus.  First it was, of course, the holidays hustle and bustle, then my brother and his wife gave birth to their son, so I’ve been away from my blogging for two weeks. Background: Finding an item’s index in List<T> is easy… Many times in our day to day programming activities, we want to find the index of an item in a collection.  Now, if we have a List<T> and we’re looking for the item itself this is trivial: 1: // assume have a list of ints: 2: var list = new List<int> { 1, 13, 42, 64, 121, 77, 5, 99, 132 }; 3:  4: // can find the exact item using IndexOf() 5: var pos = list.IndexOf(64); This will return the position of the item if it’s found, or –1 if not.  It’s easy to see how this works for primitive types where equality is well defined.  For complex types, however, it will attempt to compare them using EqualityComparer<T>.Default which, in a nutshell, relies on the object’s Equals() method. So what if we want to search for a condition instead of equality?  That’s also easy in a List<T> with the FindIndex() method: 1: // assume have a list of ints: 2: var list = new List<int> { 1, 13, 42, 64, 121, 77, 5, 99, 132 }; 3:  4: // finds index of first even number or -1 if not found. 5: var pos = list.FindIndex(i => i % 2 == 0);   Problem: Finding an item’s index in IEnumerable<T> is not so easy... This is all well and good for lists, but what if we want to do the same thing for IEnumerable<T>?  A collection of IEnumerable<T> has no indexing, so there’s no direct method to find an item’s index.  LINQ, as powerful as it is, gives us many tools to get us this information, but not in one step.  As with almost any problem involving collections, there are several ways to accomplish the same goal.  And once again as with almost any problem involving collections, the choice of the solution somewhat depends on the situation. So let’s look at a few possible alternatives.  I’m going to express each of these as extension methods for simplicity and consistency. Solution: The TakeWhile() and Count() combo One of the things you can do is to perform a TakeWhile() on the list as long as your find condition is not true, and then do a Count() of the items it took.  The only downside to this method is that if the item is not in the list, the index will be the full Count() of items, and not –1.  So if you don’t know the size of the list beforehand, this can be confusing. 1: // a collection of extra extension methods off IEnumerable<T> 2: public static class EnumerableExtensions 3: { 4: // Finds an item in the collection, similar to List<T>.FindIndex() 5: public static int FindIndex<T>(this IEnumerable<T> list, Predicate<T> finder) 6: { 7: // note if item not found, result is length and not -1! 8: return list.TakeWhile(i => !finder(i)).Count(); 9: } 10: } Personally, I don’t like switching the paradigm of not found away from –1, so this is one of my least favorites.  Solution: Select with index Many people don’t realize that there is an alternative form of the LINQ Select() method that will provide you an index of the item being selected: 1: list.Select( (item,index) => do something here with the item and/or index... ) This can come in handy, but must be treated with care.  This is because the index provided is only as pertains to the result of previous operations (if any).  
For example: 1: // assume have a list of ints: 2: var list = new List<int> { 1, 13, 42, 64, 121, 77, 5, 99, 132 }; 3:  4: // you'd hope this would give you the indexes of the even numbers 5: // which would be 2, 3, 8, but in reality it gives you 0, 1, 2 6: list.Where(item => item % 2 == 0).Select((item,index) => index); The reason the example gives you the collection { 0, 1, 2 } is because the where clause passes over any items that are odd, and therefore only the even items are given to the select and only they are given indexes. Conversely, we can’t select the index and then test the item in a Where() clause, because then the Where() clause would be operating on the index and not the item! So, what we have to do is to select the item and index and put them together in an anonymous type.  It looks ugly, but it works: 1: // extensions defined on IEnumerable<T> 2: public static class EnumerableExtensions 3: { 4: // finds an item in a collection, similar to List<T>.FindIndex() 5: public static int FindIndex<T>(this IEnumerable<T> list, Predicate<T> finder) 6: { 7: // if you don't name the anonymous properties they are the variable names 8: return list.Select((item, index) => new { item, index }) 9: .Where(p => finder(p.item)) 10: .Select(p => p.index + 1) 11: .FirstOrDefault() - 1; 12: } 13: }     So let’s look at this, because i know it’s convoluted: First Select() joins the items and their indexes into an anonymous type. Where() filters that list to only the ones matching the predicate. Second Select() picks the index of the matches and adds 1 – this is to distinguish between not found and first item. FirstOrDefault() returns the first item found from the previous clauses or default (zero) if not found. Subtract one so that not found (zero) will be –1, and first item (one) will be zero. The bad thing is, this is ugly as hell and creates anonymous objects for each item tested until it finds the match.  This concerns me a bit but we’ll defer judgment until compare the relative performances below. Solution: Convert ToList() and use FindIndex() This solution is easy enough.  We know any IEnumerable<T> can be converted to List<T> using the LINQ extension method ToList(), so we can easily convert the collection to a list and then just use the FindIndex() method baked into List<T>. 1: // a collection of extension methods for IEnumerable<T> 2: public static class EnumerableExtensions 3: { 4: // find the index of an item in the collection similar to List<T>.FindIndex() 5: public static int FindIndex<T>(this IEnumerable<T> list, Predicate<T> finder) 6: { 7: return list.ToList().FindIndex(finder); 8: } 9: } This solution is simplicity itself!  It is very concise and elegant and you need not worry about anyone misinterpreting what it’s trying to do (as opposed to the more convoluted LINQ methods above). But the main thing I’m concerned about here is the performance hit to allocate the List<T> in the ToList() call, but once again we’ll explore that in a second. Solution: Roll your own FindIndex() for IEnumerable<T> Of course, you can always roll your own FindIndex() method for IEnumerable<T>.  It would be a very simple for loop which scans for the item and counts as it goes.  
There’s many ways to do this, but one such way might look like: 1: // extension methods for IEnumerable<T> 2: public static class EnumerableExtensions 3: { 4: // Finds an item matching a predicate in the enumeration, much like List<T>.FindIndex() 5: public static int FindIndex<T>(this IEnumerable<T> list, Predicate<T> finder) 6: { 7: int index = 0; 8: foreach (var item in list) 9: { 10: if (finder(item)) 11: { 12: return index; 13: } 14:  15: index++; 16: } 17:  18: return -1; 19: } 20: } Well, it’s not quite simplicity, and those less familiar with LINQ may prefer it since it doesn’t include all of the lambdas and behind the scenes iterators that come with deferred execution.  But does having this long, blown out method really gain us much in performance? Comparison of Proposed Solutions So we’ve now seen four solutions, let’s analyze their collective performance.  I took each of the four methods described above and run them over 100,000 iterations of lists of size 10, 100, 1000, and 10000 and here’s the performance results.  Then I looked for targets at the begining of the list (best case), middle of the list (the average case) and not in the list (worst case as must scan all of the list). Each of the times below is the average time in milliseconds for one execution as computer over the 100,000 iterations: Searches Matching First Item (Best Case)   10 100 1000 10000 TakeWhile 0.0003 0.0003 0.0003 0.0003 Select 0.0005 0.0005 0.0005 0.0005 ToList 0.0002 0.0003 0.0013 0.0121 Manual 0.0001 0.0001 0.0001 0.0001   Searches Matching Middle Item (Average Case)   10 100 1000 10000 TakeWhile 0.0004 0.0020 0.0191 0.1889 Select 0.0008 0.0042 0.0387 0.3802 ToList 0.0002 0.0007 0.0057 0.0562 Manual 0.0002 0.0013 0.0129 0.1255   Searches Where Not Found (Worst Case)   10 100 1000 10000 TakeWhile 0.0006 0.0039 0.0381 0.3770 Select 0.0012 0.0081 0.0758 0.7583 ToList 0.0002 0.0012 0.0100 0.0996 Manual 0.0003 0.0026 0.0253 0.2514   Notice something interesting here, you’d think the “roll your own” loop would be the most efficient, but it only wins when the item is first (or very close to it) regardless of list size.  In almost all other cases though and in particular the average case and worst case, the ToList()/FindIndex() combo wins for performance, even though it is creating some temporary memory to hold the List<T>.  If you examine the algorithm, the reason why is most likely because once it’s in a ToList() form, internally FindIndex() scans the internal array which is much more efficient to iterate over.  Thus, it takes a one time performance hit (not including any GC impact) to create the List<T> but after that the performance is much better. Summary If you’re concerned about too many throw-away objects, you can always roll your own FindIndex() method, but for sheer simplicity and overall performance, using the ToList()/FindIndex() combo performs best on nearly all list sizes in the average and worst cases.    Technorati Tags: C#,.NET,Litte Wonders,BlackRabbitCoder,Software,LINQ,List

    Read the article

  • Partner Webcast - Is your Application Ready? Prove it with the Oracle Exastack Program

    - by Thanos
At Oracle we design Engineered Systems that are pre-integrated to reduce the cost and complexity of IT infrastructures while increasing productivity and performance. Oracle innovates and optimizes performance at every IT layer to simplify business operations, drive down costs and accelerate business innovation. As the Engineered System foundation platforms, Oracle Exadata and Oracle Exalogic run all of Oracle Cloud’s services across a range of global data centers, delivering extreme performance, massive scalability, and fault tolerance with no single point of failure. The Oracle Exastack Program enables you as an ISV to leverage Oracle’s scalable, integrated infrastructure to test, tune and optimize your applications for high performance. By getting Exastack Ready and Exastack Optimized, your applications get formal recognition from Oracle and additional visibility, while you as an ISV receive an additional set of OPN benefits. Don’t miss this opportunity to learn how to optimize your applications to run faster and more reliably on Oracle Exastack, and to become more competitive by letting everybody know you are ready.
Agenda:
- Oracle Engineered Systems Strategy
- OPN Exastack Program Benefits & Objectives
- Value for You
- Oracle is resourced for your success
- How to Apply – Demo
- Next Steps & Useful Contacts
Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. Thursday 06 December 2012, 10.00 CET (GMT+1). Duration: 1 hour. Register Now! For any questions please contact us at [email protected], visit our ISV Migration Center blog, or follow us @oracleimc to learn more about Oracle technologies, upcoming partner webcasts and events. Existing content is available on YouTube - SlideShare - Oracle Mix.

    Read the article

  • OpenWorld 2012—Is Almost Here!

    - by Scott McNeil
With OpenWorld fast approaching, I thought I would take this opportunity to look at some of the “must see” database manageability activities and sessions happening this year. Here's a quick rundown:
Oracle Database Manageability: Download all the details for sessions, hands-on labs, and demos (PDF)
Keynotes:
- Sunday, September 30 – Hardware and Software, Engineered to Work Together: Why It’s A Different Approach – Larry Ellison, CEO, Oracle
- Monday, October 1 – Shift Complexity – Hosted by Mark Hurd, President, Oracle, and Andrew Mendelsohn, Senior Vice President, Database Server Technologies, Oracle
IOUG SIG Sunday: Database Performance Tuning: Getting the Best out of Oracle Enterprise Manager Cloud Control 12c (session ID# CON6511)
Oracle DEMOgrounds (floor plan – Moscone South):
- Automatic Application and SQL Tuning
- Automatic Performance Diagnostics
- Complete Database Lifecycle Management
- Data Masking and Data Subsetting
- Database Testing with Oracle Real Application Testing
- Oracle Enterprise Manager Cloud Control 12c Overview
- Oracle Exadata Management
Hands-on Labs:
- Database Performance Testing, Data Masking, and Subsetting (session ID# HOL10720)
- Database Performance Tuning Hands-on Lab (session ID# HOL10393)
Sessions:
- What’s Next for Oracle Database? (session ID# GEN8259)
- Building and Managing a Private Oracle Database Cloud (session ID# GEN11421)
- Using Oracle Enterprise Manager to Manage Your Own Private Cloud (session ID# GEN11423)
- Extreme Database Management with the Latest Generation of Database Technology (session ID# CON9547)
Oracle OpenWorld Music Festival: New this year is Oracle’s first annual Oracle OpenWorld Music Festival, featuring some of today's breakthrough musicians from around the country and the world. It's five nights of back-to-back performances in the heart of San Francisco—free to registered attendees. See the lineup
Not Heading to OpenWorld—Watch it Live!
Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter
Download the Oracle Enterprise Manager Cloud Control 12c Mobile app

    Read the article

  • Ajax Control Toolkit May 2012 Release

    - by Stephen.Walther
    I’m happy to announce the May 2012 release of the Ajax Control Toolkit. This newest release of the Ajax Control Toolkit includes a new file upload control which displays file upload progress. We’ve also added several significant enhancements to the existing HtmlEditorExtender control such as support for uploading images and Source View. You can download and start using the newest version of the Ajax Control Toolkit by entering the following command in the Library Package Manager console in Visual Studio: Install-Package AjaxControlToolkit Alternatively, you can download the latest version of the Ajax Control Toolkit from CodePlex: http://AjaxControlToolkit.CodePlex.com The New Ajax File Upload Control The most requested new feature for the Ajax Control Toolkit (according to the CodePlex Issue Tracker) has been support for file upload with progress. We worked hard over the last few months to create an entirely new file upload control which displays upload progress. Here is a sample which illustrates how you can use the new AjaxFileUpload control: <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="01_FileUpload.aspx.cs" Inherits="WebApplication1._01_FileUpload" %> <html> <head runat="server"> <title>Simple File Upload</title> </head> <body> <form id="form1" runat="server"> <div> <ajaxToolkit:ToolkitScriptManager runat="server" /> <ajaxToolkit:AjaxFileUpload id="ajaxUpload1" OnUploadComplete="ajaxUpload1_OnUploadComplete" runat="server" /> </div> </form> </body> </html> The page above includes a ToolkitScriptManager control. This control is required to use any of the controls in the Ajax Control Toolkit because this control is responsible for loading all of the scripts required by a control. The page also contains an AjaxFileUpload control. The UploadComplete event is handled in the code-behind for the page: namespace WebApplication1 { public partial class _01_FileUpload : System.Web.UI.Page { protected void ajaxUpload1_OnUploadComplete(object sender, AjaxControlToolkit.AjaxFileUploadEventArgs e) { // Generate file path string filePath = "~/Images/" + e.FileName; // Save upload file to the file system ajaxUpload1.SaveAs(MapPath(filePath)); } } } The UploadComplete handler saves each uploaded file by calling the AjaxFileUpload control’s SaveAs() method with a full file path. Here’s a video which illustrates the process of uploading a file: Warning: in order to write to the Images folder on a production IIS server, you need Write permissions on the Images folder. You need to provide permissions for the IIS Application Pool account to write to the Images folder. To learn more, see: http://learn.iis.net/page.aspx/624/application-pool-identities/ Showing File Upload Progress The new AjaxFileUpload control takes advantage of HTML5 upload progress events (described in the XMLHttpRequest Level 2 standard). This standard is supported by Firefox 8+, Chrome 16+, Safari 5+, and Internet Explorer 10+. In other words, the standard is supported by the most recent versions of all browsers except for Internet Explorer which will support the standard with the release of Internet Explorer 10. The AjaxFileUpload control works with all browsers, even browsers which do not support the new XMLHttpRequest Level 2 standard. If you use the AjaxFileUpload control with a downlevel browser – such as Internet Explorer 9 — then you get a simple throbber image during a file upload instead of a progress indicator. 
Here’s how you specify a throbber image when declaring the AjaxFileUpload control: <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="02_FileUpload.aspx.cs" Inherits="WebApplication1._02_FileUpload" %> <html> <head id="Head1" runat="server"> <title>File Upload with Throbber</title> </head> <body> <form id="form1" runat="server"> <div> <ajaxToolkit:ToolkitScriptManager ID="ToolkitScriptManager1" runat="server" /> <ajaxToolkit:AjaxFileUpload id="ajaxUpload1" OnUploadComplete="ajaxUpload1_OnUploadComplete" ThrobberID="MyThrobber" runat="server" /> <asp:Image id="MyThrobber" ImageUrl="ajax-loader.gif" Style="display:None" runat="server" /> </div> </form> </body> </html> Notice that the page above includes an image with the Id MyThrobber. This image is displayed while files are being uploaded. I use the website http://AjaxLoad.info to generate animated busy wait images. Drag-And-Drop File Upload If you are using an uplevel browser then you can drag-and-drop the files which you want to upload onto the AjaxFileUpload control. The following video illustrates how drag-and-drop works: Remember that drag-and-drop will not work on Internet Explorer 9 or older. Accepting Multiple Files By default, the AjaxFileUpload control enables you to upload multiple files at a time. When you open the file dialog, use the CTRL or SHIFT key to select multiple files. If you want to restrict the number of files that can be uploaded then use the MaximumNumberOfFiles property like this: <ajaxToolkit:AjaxFileUpload id="ajaxUpload1" OnUploadComplete="ajaxUpload1_OnUploadComplete" ThrobberID="throbber" MaximumNumberOfFiles="1" runat="server" /> In the code above, the maximum number of files which can be uploaded is restricted to a single file. Restricting Uploaded File Types You might want to allow only certain types of files to be uploaded. For example, you might want to accept only image uploads. In that case, you can use the AllowedFileTypes property to provide a list of allowed file types like this: <ajaxToolkit:AjaxFileUpload id="ajaxUpload1" OnUploadComplete="ajaxUpload1_OnUploadComplete" ThrobberID="throbber" AllowedFileTypes="jpg,jpeg,gif,png" runat="server" /> The code above prevents any files except jpeg, gif, and png files from being uploaded. Enhancements to the HTMLEditorExtender Over the past months, we spent a considerable amount of time making bug fixes and feature enhancements to the existing HtmlEditorExtender control. I want to focus on two of the most significant enhancements that we made to the control: support for Source View and support for uploading images. Adding Source View Support to the HtmlEditorExtender When you click the Source View tag, the HtmlEditorExtender changes modes and displays the HTML source of the contents contained in the TextBox being extended. You can use Source View to make fine-grain changes to HTML before submitting the HTML to the server. For reasons of backwards compatibility, the Source View tab is disabled by default. 
To enable Source View, you need to declare your HtmlEditorExtender with the DisplaySourceTab property like this: <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="05_SourceView.aspx.cs" Inherits="WebApplication1._05_SourceView" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html> <head id="Head1" runat="server"> <title>HtmlEditorExtender with Source View</title> </head> <body> <form id="form1" runat="server"> <div> <ajaxToolkit:ToolkitScriptManager ID="ToolkitScriptManager1" runat="server" /> <asp:TextBox id="txtComments" TextMode="MultiLine" Columns="60" Rows="10" Runat="server" /> <ajaxToolkit:HtmlEditorExtender id="HEE1" TargetControlID="txtComments" DisplaySourceTab="true" runat="server" /> </div> </form> </body> </html> The page above includes a ToolkitScriptManager, TextBox, and HtmlEditorExtender control. The HtmlEditorExtender extends the TextBox so that it supports rich text editing. Notice that the HtmlEditorExtender includes a DisplaySourceTab property. This property causes a button to appear at the bottom of the HtmlEditorExtender which enables you to switch to Source View: Note: when using the HtmlEditorExtender, we recommend that you set the DOCTYPE for the document. Otherwise, you can encounter weird formatting issues. Accepting Image Uploads We also enhanced the HtmlEditorExtender to support image uploads (another very highly requested feature at CodePlex). The following video illustrates the experience of adding an image to the editor: Once again, for backwards compatibility reasons, support for image uploads is disabled by default. Here’s how you can declare the HtmlEditorExtender so that it supports image uploads: <ajaxToolkit:HtmlEditorExtender id="MyHtmlEditorExtender" TargetControlID="txtComments" OnImageUploadComplete="MyHtmlEditorExtender_ImageUploadComplete" DisplaySourceTab="true" runat="server" > <Toolbar> <ajaxToolkit:Bold /> <ajaxToolkit:Italic /> <ajaxToolkit:Underline /> <ajaxToolkit:InsertImage /> </Toolbar> </ajaxToolkit:HtmlEditorExtender> There are two things that you should notice about the code above. First, notice that an InsertImage toolbar button is added to the HtmlEditorExtender toolbar. This HtmlEditorExtender will render toolbar buttons for bold, italic, underline, and insert image. Second, notice that the HtmlEditorExtender includes an event handler for the ImageUploadComplete event. The code for this event handler is below: using System.Web.UI; using AjaxControlToolkit; namespace WebApplication1 { public partial class _06_ImageUpload : System.Web.UI.Page { protected void MyHtmlEditorExtender_ImageUploadComplete(object sender, AjaxFileUploadEventArgs e) { // Generate file path string filePath = "~/Images/" + e.FileName; // Save uploaded file to the file system var ajaxFileUpload = (AjaxFileUpload)sender; ajaxFileUpload.SaveAs(MapPath(filePath)); // Update client with saved image path e.PostedUrl = Page.ResolveUrl(filePath); } } } Within the ImageUploadComplete event handler, you need to do two things: 1) Save the uploaded image (for example, to the file system, a database, or Azure storage) 2) Provide the URL to the saved image so the image can be displayed within the HtmlEditorExtender In the code above, the uploaded image is saved to the ~/Images folder. The path of the saved image is returned to the client by setting the AjaxFileUploadEventArgs PostedUrl property. Not surprisingly, under the covers, the HtmlEditorExtender uses the AjaxFileUpload. 
You can get a direct reference to the AjaxFileUpload control used by an HtmlEditorExtender by using the following code: void Page_Load() { var ajaxFileUpload = MyHtmlEditorExtender.AjaxFileUpload; ajaxFileUpload.AllowedFileTypes = "jpg,jpeg"; } The code above illustrates how you can restrict the types of images that can be uploaded to the HtmlEditorExtender. This code prevents anything but jpeg images from being uploaded. Summary This was the most difficult release of the Ajax Control Toolkit to date. We iterated through several designs for the AjaxFileUpload control – with each iteration, the goal was to make the AjaxFileUpload control easier for developers to use. My hope is that we were able to create a control which Web Forms developers will find very intuitive. I want to thank the developers on the Superexpert.com team for their hard work on this release.
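One practical note on the SaveAs() pattern used throughout these samples: e.FileName comes from the client, so before saving you may want to keep only its extension and generate a unique server-side name. The handler below is a hedged sketch of that idea rather than part of the toolkit release; the Images folder and the GUID-based renaming scheme are assumptions.

    using System;
    using System.IO;
    using AjaxControlToolkit;

    public partial class UploadPage : System.Web.UI.Page
    {
        protected void ajaxUpload1_OnUploadComplete(object sender, AjaxFileUploadEventArgs e)
        {
            // Keep only the extension from the client-supplied name and generate a unique name,
            // so uploads cannot overwrite each other or escape the target folder.
            string extension = Path.GetExtension(e.FileName);
            string safeName = Guid.NewGuid().ToString("N") + extension;

            var upload = (AjaxFileUpload)sender;
            upload.SaveAs(MapPath("~/Images/" + safeName));
        }
    }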

    Read the article

  • Large invoice database structure and rendering

    - by user132624
Our client has an MS SQL database that holds 1 million customer invoice records. Using the database, our client wants its customers to be able to log into a frontend web site and then be able to view, modify and download their company’s invoices. Given the size of the database and the large number of customers who may log into the web site at any time, we are concerned about database engine performance and web page invoice rendering performance. The 1 million invoice database covers just 90 days’ sales, so we will remove invoices over 90 days old from the database. Most of the invoices have multiple line items. We can easily convert our invoices into various data formats; for example, it is easy for us to convert between SQL and XML with a related schema and XSLT. Any data conversion would be done on another server so as not to burden the web interface server. We have tentatively decided to run the web site on a .NET Framework IIS web server using MS SQL on MS Azure. How would you suggest we structure our database for best performance? For example, should we put all the invoices of all customers located within the same 5-digit or 6-digit zip codes into the same table? Or could we set up a separate home directory for each customer on IIS and place each customer’s invoices in that customer’s home directory in XML format? And secondly, what would you suggest is the best method to render customer invoices on a web page and allow customers to modify them, for best performance? The ADO.NET XML DataSet looks intriguing to us as a method, but we have never used it.
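If it helps to picture the ADO.NET DataSet approach mentioned at the end of the question, here is a minimal, hypothetical sketch of loading one customer’s invoice XML into a DataSet and walking the rows for rendering; the file names, table name and column names are assumptions, not details from the question.

    using System;
    using System.Data;

    class InvoiceXmlSketch
    {
        static void Main()
        {
            // Hypothetical per-customer invoice file produced by the XML export step.
            var invoices = new DataSet("Invoices");
            invoices.ReadXml("customer-1234.xml");

            // Each invoice row could be bound to a grid or rendered directly on the page.
            foreach (DataRow row in invoices.Tables["Invoice"].Rows)
            {
                Console.WriteLine("{0}  {1}", row["InvoiceNumber"], row["Total"]);
            }

            // Edits made in the UI can be written back out as XML for later reconciliation.
            invoices.WriteXml("customer-1234-modified.xml");
        }
    }

Whether this beats querying SQL Server directly will depend mostly on how the data is partitioned and cached, so it is worth measuring both paths with realistic invoice volumes.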

    Read the article

  • Oracle Linux Partner Pavilion Spotlight

    - by Ted Davis
With the first day of Oracle OpenWorld starting in less than a week, we wanted to showcase some of our premier partners exhibiting in the Oracle Linux Partner Pavilion (Booth #1033) this year. We have Independent Hardware Vendors, Independent Software Vendors and Systems Integrators that show the breadth of support in the Oracle Linux and Oracle VM ecosystem. We'll be highlighting partners all week, so feel free to come back and check us out. Centrify delivers integrated software and cloud-based solutions that centrally control, secure and audit access to cross-platform systems, mobile devices and applications by leveraging the infrastructure organizations already own. From the data center and into the cloud, more than 4,500 organizations, including 40 percent of the Fortune 50 and more than 60 Federal agencies, rely on Centrify's identity consolidation and privilege management solutions to reduce IT expenses, strengthen security and meet compliance requirements. Visit Centrify at Oracle OpenWorld 2012 for a look at Centrify Suite and see how you can streamline security management on Oracle Linux:
- Unify identities across the enterprise and remove the pain and security issues associated with managing local user accounts by leveraging Active Directory
- Implement a least-privilege security model with flexible, role-based controls that protect privileged operations while still granting users the privileges they need to perform their job
- Get a central, global view of audited user sessions across your Oracle Linux environment
"Data Intensity's cloud infrastructure leverages Oracle VM and Oracle Linux to provide highly available enterprise application management solutions. Engineers will be available to answer questions about and demonstrate the technology, including management tools, configuration do's and don'ts, high availability, live migration, integrating the technology with Oracle software, and how the integrated support process works."
Mellanox’s end-to-end InfiniBand and Ethernet server and storage interconnect solutions deliver the highest performance, efficiency and scalability for enterprise, high-performance cloud and web 2.0 applications. Mellanox’s interconnect solutions accelerate Oracle RAC query throughput to reach 50Gb/s, compared to TCP/IP-based competing solutions that cap off at less than 12Gb/s. Mellanox solutions help Oracle’s Exadata deliver a 10X performance boost at 50% of the hardware cost, making it the world’s leading database appliance. Thanks for reviewing today's partner spotlight. We will highlight new partners each day this week leading up to Oracle OpenWorld.

    Read the article

  • SQLAuthority News – Scaling Up Your Data Warehouse with SQL Server 2008 R2

    - by pinaldave
Data warehouses are supposed to contain huge amounts of data from the beginning. However, there are cases when too big is not enough. Every data warehouse admin will agree that they have faced situations where they needed to scale up their data warehouse. Microsoft has released a white paper discussing exactly this. Here is the abstract from the official Microsoft site: SQL Server 2008 introduced many new functional and performance improvements for data warehousing, and SQL Server 2008 R2 includes all these and more. This paper discusses how to use SQL Server 2008 R2 to get great performance as your data warehouse scales up. We present lessons learned during extensive internal data warehouse testing on a 64-core HP Integrity Superdome during the development of the SQL Server 2008 release, and via production experience with large-scale SQL Server customers. Our testing indicates that many customers can expect their performance to nearly double on the same hardware they are currently using, merely by upgrading to SQL Server 2008 R2 from SQL Server 2005 or earlier, and compressing their fact tables. We cover techniques to improve manageability and performance at high-scale, encompassing data loading (extract, transform, load), query processing, partitioning, index maintenance, indexed view (aggregate) management, and backup and restore. Scaling Up Your Data Warehouse with SQL Server 2008 R2 Reference: Pinal Dave (http://blog.SQLAuthority.com)   Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Oracle's SPARC T4, 007 Style

    - by Kristin Rose
The name’s 4, T4, and this powerhouse travels hand in hand with its good friend SPARC. About 6 years ago, on-chip encryption acceleration was first shipped in a commercial system, the SPARC T1. Thanks to Oracle’s innovative SPARC leadership, on-chip acceleration of complex cryptographic computations was born and has since rapidly evolved. Customers can now have security with performance because we, my friend, are in the Age of Big Data. If you need some high-speed action in your life, listen here. The SPARC T4 systems offer customers much more value for applications than just increased performance, through their cross-sell opportunity. This is done by enabling partners to integrate their own applications with Oracle’s SPARC T4 servers for cloud deployments, providing direct business benefits that supersede the commodity approach to data center computing, such as security, performance and optimization. As companies continue down this complex path of big data, eCommerce, and mobility, the need to provide better and more in-depth security is more prominent than ever. Oracle’s SPARC T4 processor allows customers to deliver the highest levels of application security, as well as the necessary level of performance, without added cost and complexity. To learn more about the value of SPARC T4, check out a more in-depth blog here. For more on the SPARC T4 family of products, click here. Encryption Lives Another Day, The OPN Communications Team

    Read the article

  • SQLAuthority News – Presenting at Tech-Ed On Road – Ahmedabad – June 11, 2011 – Wait Types and Queues

    - by pinaldave
I will be presenting in person on the subject of SQL Server Wait Types and Queues in Ahmedabad on June 11, 2011. Here is a quick summary of the session. SQL Server Waits and Queues – Your Gateway to Perf. Troubleshooting. Time: 11:15am – 12:15pm, June 11, 2011. Just like a horoscope, SQL Server Waits and Queues can reveal your past, explain your present and predict your future. SQL Server performance tuning uses Waits and Queues as a proven method to identify the best opportunities to improve performance. A glance at wait types can tell you where there is a bottleneck. Learn how to identify bottlenecks and potential resolutions in this fast-paced, advanced performance tuning session. This session is based on my performance tuning Wait Types and Queues series: SQL SERVER – Summary of Month – Wait Type – Day 28 of 28. During the session there will be a quiz, and those who get the right answers will receive very interesting gifts from me. Do not miss a single minute of the event. We are also going to have two rock star speakers – Harish Vaidyanathan and Jacob Sebastian. Here are the details for the event: SQLAuthority News – Community Tech Days – TechEd on The Road – Ahmedabad – June 11, 2011 Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, T SQL, Technology

    Read the article

  • Business Analyst role in development process

    - by Ryan
    I work as a business analyst and I currently oversee much of the development efforts of an internal project. I'm responsible for the requirements, specs, and overall testing. I work closely with the developers (onshore and offshore). The offshore team produces all of the reports. Version 1.0 had a 9 month development cycle and I had about 4-5 months to test all the reports. There was the usual back and forth to get the implementation right. Version 2.0 had a much shorter development cycle (3 months). I received the first version of the reports about 3 weeks ago and noticed a lot of things wrong with it. Many of the requirements were wrong and the performance of the queries was horrendous at 5x - 6x longer than it should have been. The onshore lead developer was out and did not supervise the offshore development team in generating the reports. Without consulting management, I took a look at the SQL in the reports and was able to improve performance greatly (by a factor of 6x) which is acceptable for this version. I sent the updated queries as guidelines to the offshore team and told them they should look at doing X instead of Y to improve performance and also to fix some specific logic issues. I then spoke to my managers about this because it doesn't feel right that I was developing SQL queries, but given our time crunch I saw no other way. We were able to fix the issue quite fast which I'm happy with. Current situation: the onshore managers aren't too pleased that the offshore team did not code for performance. I know there are some things I could have done better throughout this process and I do not in any way consider myself a programmer. My question is, if an offshore team that works apart from the onshore project resources fails to deliver an acceptable release, is it appropriate to clean up their work to meet a deadline? What kind of problems could this create in the future?

    Read the article

  • What's New in Business Analytics at Oracle?

    - by jmorourke
Business Analytics, which includes Business Intelligence and Enterprise Performance Management, is a top priority for IT and finance executives in 2012. Some of the hot market trends and topics include managing big data, mobile information access, in-memory computing, advanced analytics, predictive modeling, leveraging unstructured data, and risk and performance management. Find out what Oracle is doing about all of this, and what’s new from the market leader in Business Analytics, by attending our live webcast event on April 4th titled “Introducing Oracle’s Business Analytics Strategy”. At this event, you’ll hear about Oracle’s strategy for Business Analytics from Mark Hurd, Oracle President, and you can learn about the latest advancements in Oracle’s Business Analytics solutions from Balaji Yelamanchili, SVP of Analytics and Performance Management. The keynote session from Mark and Balaji will be followed by breakout sessions that provide a more in-depth look at what’s new in specific product areas, including the latest release of Oracle’s Hyperion Enterprise Performance Management suite, Oracle Business Intelligence Applications and the Exalytics In-Memory Machine, Oracle Endeca Information Discovery, and Big Data and Advanced Analytics solutions. This event will provide a great opportunity to hear about what’s new in Business Analytics at Oracle, and for attendees to pose questions to Oracle experts during live chat sessions. Here’s a link to the registration page and more details about the April 4th event. We hope to see you (virtually) there! http://www.oracle.com/us/corporate/events/business-analytics/index.html Also, use the following hashtag to follow along on Twitter and share comments during the webcast and Q&A sessions:  #oracleanalytics

    Read the article

  • DB2 on SPARC T3 Tuning Tips

    - by cherry.shu(at)oracle.com
With the new self-tuning feature in DB2 V9.x, a lot of database parameters are set to AUTOMATIC by default in DB2 V9.7 so that DB2 can adjust their values as needed. Most should work fine without manual tweaks, but for transaction workloads on SPARC T3 systems, two parameters need to be adjusted manually to achieve optimal performance. DATABASE_MEMORY: When this parameter is set to AUTOMATIC and SELF_TUNING_MEM is set to ON, DB2 allocates the small page size (64KB) for all memory allocation, and expands and shrinks the memory as needed. In order to take advantage of the large page sizes (up to 256MB) supported by the SPARC T3, we need to manually set the size of DATABASE_MEMORY so that DB2 can use the 256MB page size for its buffer pools, which are implemented as ISM segments. I know this sounds strange, as it seems that you turn one switch and it ends up controlling another function. pmap(1M) output can verify the page sizes used by the DB2 db2sysc process. NUM_IOCLEANERS: This parameter defines the number of page cleaners. The default value of this parameter is AUTOMATIC, which is calculated based on the number of available CPUs and the number of logical partitions. On a SPARC T3 system, where there are over a hundred virtual CPUs and a single DB2 partition, DB2 would set it to #CPUs - 1. That leads to too many page cleaners competing to flush to disk and causes aio mutex lock contention, so we need to decrease the value. A good practice is to set the value to the number of physical devices used by the database table space containers.

    Read the article

  • Multiple vulnerabilities in Oracle Java Web Console

    - by RitwikGhoshal
Component: Apache Tomcat. Product and resolution: Solaris 10 (SPARC: 147673-04, X86: 147674-04). CVE, description and CVSSv2 base score for each fixed vulnerability:
- CVE-2007-5333 – Information Exposure vulnerability – 5.0
- CVE-2007-5342 – Permissions, Privileges, and Access Controls vulnerability – 6.4
- CVE-2007-6286 – Request handling vulnerability – 4.3
- CVE-2008-0002 – Information disclosure vulnerability – 5.8
- CVE-2008-1232 – Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting') vulnerability – 4.3
- CVE-2008-1947 – Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting') vulnerability – 4.3
- CVE-2008-2370 – Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal') vulnerability – 5.0
- CVE-2008-2938 – Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal') vulnerability – 4.3
- CVE-2008-5515 – Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal') vulnerability – 5.0
- CVE-2009-0033 – Improper Input Validation vulnerability – 5.0
- CVE-2009-0580 – Information Exposure vulnerability – 4.3
- CVE-2009-0781 – Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting') vulnerability – 4.3
- CVE-2009-0783 – Information Exposure vulnerability – 4.6
- CVE-2009-2693 – Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal') vulnerability – 5.8
- CVE-2009-2901 – Permissions, Privileges, and Access Controls vulnerability – 4.3
- CVE-2009-2902 – Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal') vulnerability – 4.3
- CVE-2009-3548 – Credentials Management vulnerability – 7.5
- CVE-2010-1157 – Information Exposure vulnerability – 2.6
- CVE-2010-2227 – Improper Restriction of Operations within the Bounds of a Memory Buffer vulnerability – 6.4
- CVE-2010-3718 – Directory traversal vulnerability – 1.2
- CVE-2010-4172 – Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting') vulnerability – 4.3
- CVE-2010-4312 – Configuration vulnerability – 6.4
- CVE-2011-0013 – Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting') vulnerability – 4.3
- CVE-2011-0534 – Resource Management Errors vulnerability – 5.0
- CVE-2011-1184 – Permissions, Privileges, and Access Controls vulnerability – 5.0
- CVE-2011-2204 – Information Exposure vulnerability – 1.9
- CVE-2011-2526 – Improper Input Validation vulnerability – 4.4
- CVE-2011-3190 – Permissions, Privileges, and Access Controls vulnerability – 7.5
- CVE-2011-4858 – Resource Management Errors vulnerability – 5.0
- CVE-2011-5062 – Permissions, Privileges, and Access Controls vulnerability – 5.0
- CVE-2011-5063 – Improper Authentication vulnerability – 4.3
- CVE-2011-5064 – Cryptographic Issues vulnerability – 4.3
- CVE-2012-0022 – Numeric Errors vulnerability – 5.0
This notification describes vulnerabilities fixed in third-party components that are included in Oracle's product distributions. Information about vulnerabilities affecting Oracle products can be found on the Oracle Critical Patch Updates and Security Alerts page.

    Read the article

  • Introducing the Industry's First Analytics Machine, Oracle Exalytics

    - by Manan Goel
Analytics is all about gaining insights from the data for better decision making. The business press is abuzz with examples of leading organizations across the world using data-driven insights for strategic, financial and operational excellence. A recent study on “data-driven decision making” conducted by researchers at MIT and Wharton provides empirical evidence that “firms that adopt data-driven decision making have output and productivity that is 5-6% higher than the competition”. The potential payoff for firms can range from higher shareholder value to a market leadership position. However, the vision of delivering fast, interactive, insightful analytics has remained elusive for most organizations. Most enterprise IT organizations continue to struggle to deliver actionable analytics due to time-sensitive, sprawling requirements and ever-tightening budgets. The issue is further exacerbated by the fact that most enterprise analytics solutions require dealing with a number of hardware, software, storage and networking vendors, and precious resources are wasted integrating the hardware and software components to deliver a complete analytical solution. Oracle Exalytics In-Memory Machine is the world’s first engineered system specifically designed to deliver high-performance analysis, modeling and planning. Built using industry-standard hardware, market-leading business intelligence software and in-memory database technology, Oracle Exalytics is an optimized system that delivers answers to all your business questions with unmatched speed, intelligence, simplicity and manageability. Oracle Exalytics’ unmatched speed, visualizations and scalability deliver extreme performance for existing analytical and enterprise performance management applications and enable a new class of intelligent applications such as Yield Management, Revenue Management, Demand Forecasting, Inventory Management, Pricing Optimization, Profitability Management, Rolling Forecast and Virtual Close. Requiring no application redesign, Oracle Exalytics can be deployed in existing IT environments by itself or in conjunction with Oracle Exadata and/or Oracle Exalogic to enable extreme performance and a best-in-class user experience. Based on proven hardware, software and in-memory technology, Oracle Exalytics lowers the total cost of ownership, reduces operational risk and provides unprecedented analytical capability for workgroup, departmental and enterprise-wide deployments. Click here to learn more about Oracle Exalytics.

    Read the article

  • SQL SERVER – Using MAXDOP 1 for Single Processor Query – SQL in Sixty Seconds #008 – Video

    - by pinaldave
Today’s SQL in Sixty Seconds video is inspired by my presentation at TechEd India 2012 on Speed Up! – Parallel Processes and Unparalleled Performance. There are always special cases when it comes to SQL Server: there are always a few queries which give optimal performance when executed on a single processor, and there are always queries which give optimal performance when executed on multiple processors. I will be presenting how to identify such queries, as well as the best practices related to them. In this quick video I demonstrate how, when a query gives optimal performance running on a single CPU, one can restrict it to a single CPU by using the hint OPTION (MAXDOP 1). More on Errors: Difference Temp Table and Table Variable – Effect of Transaction; Effect of TRANSACTION on Local Variable – After ROLLBACK and After COMMIT; Debate – Table Variables vs Temporary Tables – Quiz – Puzzle – 13 of 31. I encourage you to submit your ideas for SQL in Sixty Seconds. We will try to accommodate as many as we can. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Video
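For reference, the hint goes at the end of the statement. The snippet below is a rough illustration of issuing such a query from application code (it is not taken from the video); the connection string and sample table are placeholders.

    using System;
    using System.Data.SqlClient;

    class MaxDopExample
    {
        static void Main()
        {
            // Placeholder connection string; adjust for your environment.
            const string connectionString = "Server=.;Database=AdventureWorks;Integrated Security=true";

            // OPTION (MAXDOP 1) asks SQL Server to run this statement without parallelism.
            const string sql = @"SELECT COUNT(*)
                                 FROM Sales.SalesOrderDetail
                                 OPTION (MAXDOP 1);";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(sql, connection))
            {
                connection.Open();
                Console.WriteLine(command.ExecuteScalar());
            }
        }
    }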

    Read the article

< Previous Page | 417 418 419 420 421 422 423 424 425 426 427 428  | Next Page >