Search Results

Search found 9017 results on 361 pages for 'efficient storage'.

  • Test Environment for Microsoft SQL Cluster

    - by user195643
    I have the following test environment: 1 - Windows 2008 (DC Edition), role: Active Directory; 2 - Windows 2008 (Enterprise) servers. I would like to create an MSSQL cluster in this test environment. I am using a desktop PC, so how can I configure a second network card for the MSSQL cluster, and how can I use shared storage without using any external drives?

    Read the article

  • Managing servers over ssh with PermitRootLogin=no

    - by rickC
    If I can't ssh as root to each of my servers, how can I make modifications in an efficient way? I am not allowed to set up ssh keys or to add NOPASSWD entries to the sudoers file. I can't install Puppet or Spacewalk. Sometimes when I include a sudo command in a script I get the error "no tty present." Has anyone worked in an environment like this?

    Read the article

  • Quick method to determine SSD drive health?

    - by ewwhite
    I have an Intel X-25M drive that was marked "failed" twice in a ZFS storage array, as noted here. However, after removing the drive, it seems to mount, read, and write in other computers (Mac, PC, USB enclosure, etc.). Is there a good way to determine the drive's present health? I feel that the previous failure in the ZFS solution was a convergence of bugs, bad error reporting, and hardware issues. It seems like this drive may have some life in it, though.
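
    One quick, hedged check on Windows is to read the storage driver's SMART failure-prediction flag over WMI. This is only a sketch (it assumes the drive and driver expose the MSStorageDriver_FailurePredictStatus class and typically needs administrator rights); smartctl or the vendor's SSD toolbox will give far more detail.

      using System;
      using System.Management;   // add a reference to System.Management.dll

      class SmartStatusCheck
      {
          static void Main()
          {
              // Query the storage driver's SMART failure-prediction flag for each disk.
              var searcher = new ManagementObjectSearcher(
                  @"\\.\root\wmi",
                  "SELECT InstanceName, PredictFailure FROM MSStorageDriver_FailurePredictStatus");

              foreach (ManagementObject disk in searcher.Get())
              {
                  // PredictFailure == true means the drive firmware itself expects failure.
                  Console.WriteLine("{0}: PredictFailure = {1}",
                      disk["InstanceName"], disk["PredictFailure"]);
              }
          }
      }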

    Read the article

  • Determining Azure SQL Database requirements

    - by Gerald
    I'm looking into moving a SQL Server database project to the cloud using Azure SQL Database. I'm wondering what metrics I can gather from SQL Server to help determine what my needs will be on Azure. The database is around 150 GB, so I understand my needs in terms of storage; I'm just not sure which metrics translate my database usage into the DTU-based metrics that the Azure SQL Database service tiers use.
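
    The usual approach is to capture, over a representative workload, the handful of performance counters the DTU calculator expects (processor time, disk reads/writes per second, and log bytes flushed per second). A minimal sketch of sampling them from C# follows; treat the counter names as assumptions to verify on your server, and note that the log counter's category differs for named instances (e.g. MSSQL$NAME:Databases instead of SQLServer:Databases).

      using System;
      using System.Diagnostics;
      using System.Threading;

      class DtuCounterSampler
      {
          static void Main()
          {
              // Counters typically fed to the Azure DTU calculator (default SQL instance assumed).
              var counters = new[]
              {
                  new PerformanceCounter("Processor", "% Processor Time", "_Total"),
                  new PerformanceCounter("LogicalDisk", "Disk Reads/sec", "_Total"),
                  new PerformanceCounter("LogicalDisk", "Disk Writes/sec", "_Total"),
                  new PerformanceCounter("SQLServer:Databases", "Log Bytes Flushed/sec", "_Total")
              };

              // Sample once per second; in practice you would log to a file for an hour or more.
              for (int i = 0; i < 60; i++)
              {
                  foreach (var counter in counters)
                  {
                      Console.WriteLine("{0}\\{1}: {2:F2}",
                          counter.CategoryName, counter.CounterName, counter.NextValue());
                  }
                  Thread.Sleep(1000);
              }
          }
      }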

    Read the article

  • Small Business Setup SSO LDAP VPN [closed]

    - by outsmartin
    We are not sure how to set up an efficient network. What we have so far: a Linux server (probably Debian), 3 desktops plus some laptops (Windows/Linux), a NAS, and ~10 people, roughly 50/50 devs and non-devs :). What we want to achieve: working from home should be easy (VPN and firewall); a single username/password for everybody; Windows/Linux desktops with automatically synced home folders, preferably from the NAS; and automated hostnames for apps, so others can access them like http://john.dev_app from anywhere in the VPN. We need a starting point and documentation for setting this up with open-source tools like OpenVPN and OpenLDAP. Any recommendations or links to further literature are welcome.

    Read the article

  • NServiceBus pipeline with Distributors

    - by David
    I'm building a processing pipeline with NServiceBus, but I'm having trouble with the configuration of the distributors needed to make each step in the process scalable. Here's some info:

    The pipeline will have a master process that says "OK, time to start" for a WorkItem, which will then start a process like a flowchart. Each step in the flowchart may be computationally expensive, so I want the ability to scale out each step; this tells me that each step needs a Distributor. I want to be able to hook additional activities onto events later, which tells me I need to Publish() messages when a step is done, not Send() them. A process may need to branch based on a condition, so a process must be able to publish more than one type of message. A process may need to join forks; I imagine I should use Sagas for this. Hopefully these assumptions are good, otherwise I'm in more trouble than I thought.

    For the sake of simplicity, let's forget about forking or joining and consider a simple pipeline, with Step A followed by Step B, and ending with Step C. Each step gets its own distributor and can have many nodes processing messages. NodeA workers contain an IHandleMessages processor and publish EventA; NodeB workers contain an IHandleMessages processor and publish EventB; NodeC workers contain an IHandleMessages processor, and then the pipeline is complete.

    Here are the relevant parts of the config files, where # denotes the number of the worker (i.e. there are input queues NodeA.1 and NodeA.2):

    NodeA:
      <MsmqTransportConfig InputQueue="NodeA.#" ErrorQueue="error" NumberOfWorkerThreads="1" MaxRetries="5" />
      <UnicastBusConfig DistributorControlAddress="NodeA.Distrib.Control" DistributorDataAddress="NodeA.Distrib.Data">
        <MessageEndpointMappings>
        </MessageEndpointMappings>
      </UnicastBusConfig>

    NodeB:
      <MsmqTransportConfig InputQueue="NodeB.#" ErrorQueue="error" NumberOfWorkerThreads="1" MaxRetries="5" />
      <UnicastBusConfig DistributorControlAddress="NodeB.Distrib.Control" DistributorDataAddress="NodeB.Distrib.Data">
        <MessageEndpointMappings>
          <add Messages="Messages.EventA, Messages" Endpoint="NodeA.Distrib.Data" />
        </MessageEndpointMappings>
      </UnicastBusConfig>

    NodeC:
      <MsmqTransportConfig InputQueue="NodeC.#" ErrorQueue="error" NumberOfWorkerThreads="1" MaxRetries="5" />
      <UnicastBusConfig DistributorControlAddress="NodeC.Distrib.Control" DistributorDataAddress="NodeC.Distrib.Data">
        <MessageEndpointMappings>
          <add Messages="Messages.EventB, Messages" Endpoint="NodeB.Distrib.Data" />
        </MessageEndpointMappings>
      </UnicastBusConfig>

    And here are the relevant parts of the distributor configs:

    Distributor A:
      <add key="DataInputQueue" value="NodeA.Distrib.Data"/>
      <add key="ControlInputQueue" value="NodeA.Distrib.Control"/>
      <add key="StorageQueue" value="NodeA.Distrib.Storage"/>

    Distributor B:
      <add key="DataInputQueue" value="NodeB.Distrib.Data"/>
      <add key="ControlInputQueue" value="NodeB.Distrib.Control"/>
      <add key="StorageQueue" value="NodeB.Distrib.Storage"/>

    Distributor C:
      <add key="DataInputQueue" value="NodeC.Distrib.Data"/>
      <add key="ControlInputQueue" value="NodeC.Distrib.Control"/>
      <add key="StorageQueue" value="NodeC.Distrib.Storage"/>

    I'm testing with 2 instances of each node, and the problem seems to come up in the middle, at Node B. There are basically two things that might happen:
    1. Both instances of Node B report that they are subscribing to EventA, and also that NodeC.Distrib.Data@MYCOMPUTER is subscribing to the EventB that Node B publishes. In this case, everything works great.
    2. Both instances of Node B report that they are subscribing to EventA; however, one worker says NodeC.Distrib.Data@MYCOMPUTER is subscribing TWICE, while the other worker does not mention it.

    In the second case, which seems to be controlled only by the way the distributor routes the subscription messages, if the "overachiever" node processes an EventA, all is well. If the "underachiever" processes EventA, then the publish of EventB has no subscribers and the workflow dies.

    So, my questions: Is this kind of setup possible? Is the configuration correct? It's hard to find any examples of configuration with distributors beyond a simple one-level publisher/2-worker setup. Would it make more sense to have one central broker process that does all the non-computationally-intensive traffic-cop operations, and only sends messages to processes behind distributors when the task is long-running and must be load balanced? Then the load-balanced nodes could simply reply back to the central broker, which seems easier. On the other hand, that seems at odds with the decentralization that is NServiceBus's strength. And if this is the answer, and the long-running process's "done" event is a reply, how do you keep the Publish that enables later extensibility on published events?
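
    For context, in this kind of pipeline each worker's handler is typically just an IHandleMessages<T> implementation that does its step's work and then publishes the next event. A minimal sketch, assuming NServiceBus 2.x-era APIs; the message types and their WorkItemId property are hypothetical stand-ins for whatever lives in the Messages assembly:

      using System;
      using NServiceBus;

      // Hypothetical message types; in the question these live in the "Messages" assembly.
      public class EventA : IMessage { public Guid WorkItemId { get; set; } }
      public class EventB : IMessage { public Guid WorkItemId { get; set; } }

      // Runs on every NodeB worker behind Distributor B.
      public class EventAHandler : IHandleMessages<EventA>
      {
          // Injected by NServiceBus at runtime.
          public IBus Bus { get; set; }

          public void Handle(EventA message)
          {
              // ... computationally expensive Step B work would go here ...

              // Publish (not Send) so additional activities can subscribe later.
              Bus.Publish(new EventB { WorkItemId = message.WorkItemId });
          }
      }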

    Read the article

  • C++0x rvalue references - lvalues-rvalue binding

    - by Doug
    This is a follow-up question to http://stackoverflow.com/questions/2748866/c0x-rvalue-references-and-temporaries In the previous question, I asked how this code should work:

      void f(const std::string &);  // less efficient
      void f(std::string &&);       // more efficient

      void g(const char * arg) { f(arg); }

    It seems that the move overload should probably be called because of the implicit temporary, and this happens in GCC but not MSVC (or the EDG front-end used in MSVC's IntelliSense). What about this code?

      void f(std::string &&);  // NB: no const string & overload supplied

      void g1(const char * arg)        { f(arg); }
      void g2(const std::string & arg) { f(arg); }

    It seems, based on the answers to my previous question, that function g1 is legal (and is accepted by GCC 4.3-4.5, but not by MSVC). However, GCC and MSVC both reject g2 because of clause 13.3.3.1.4/3, which prohibits lvalues from binding to rvalue reference arguments. I understand the rationale behind this - it is explained in N2831, "Fixing a safety problem with rvalue references". I also think that GCC is probably implementing this clause as intended by the authors of that paper, because the original patch to GCC was written by one of the authors (Doug Gregor). However, I don't think this is quite intuitive. To me, (a) a const string & is conceptually closer to a string && than a const char * is, and (b) the compiler could create a temporary string in g2, as if it were written like this:

      void g2(const std::string & arg) { f(std::string(arg)); }

    Indeed, sometimes the copy constructor is considered to be an implicit conversion operator. Syntactically, this is suggested by the form of a copy constructor, and the standard even mentions this specifically in clause 13.3.3.1.2/4, where the copy constructor for derived-to-base conversions is given a higher conversion rank than other implicit conversions: "A conversion of an expression of class type to the same class type is given Exact Match rank, and a conversion of an expression of class type to a base class of that type is given Conversion rank, in spite of the fact that a copy/move constructor (i.e., a user-defined conversion function) is called for those cases." (I assume this is used when passing a derived class to a function like void h(Base), which takes a base class by value.)

    Motivation
    My motivation for asking this is something like the question asked in http://stackoverflow.com/questions/2696156/how-to-reduce-redundant-code-when-adding-new-c0x-rvalue-reference-operator-over ("How to reduce redundant code when adding new C++0x rvalue reference operator overloads"). If you have a function that accepts a number of potentially-movable arguments and would move them if it can (e.g. a factory function/constructor such as Object create_object(string, vector<string>, string) or the like), and you want to move or copy each argument as appropriate, you quickly start writing a lot of code. If the argument types are movable, then one could just write one version that accepts the arguments by value, as above. But if the arguments are (legacy) non-movable-but-swappable classes a la C++03, and you can't change them, then writing rvalue reference overloads is more efficient.
    So if lvalues did bind to rvalues via an implicit copy, then you could write just one overload like create_object(legacy_string &&, legacy_vector<legacy_string> &&, legacy_string &&) and it would more or less work like providing all the combinations of rvalue/lvalue reference overloads - actual arguments that were lvalues would get copied and then bound to the parameters, and actual arguments that were rvalues would get bound directly.

    Questions
    My questions are then: Is this a valid interpretation of the standard? It seems that it's not the conventional or intended one, at any rate. Does it make intuitive sense? Is there a problem with this idea that I'm not seeing? It seems like you could get copies being quietly created when that's not exactly expected, but that's the status quo in places in C++03 anyway. Also, it would make some overloads viable when they're currently not, but I don't see it being a problem in practice. Is this a significant enough improvement that it would be worth making e.g. an experimental patch for GCC?

    Read the article

  • Problem in working with async and await?

    - by Vicky
    I am trying to upload files to Azure Blob Storage and, after each successful upload, add the file name to a list for a further operation. When I do this synchronously it works fine, but when I do it asynchronously I get the error: "Collection was modified; enumeration operation may not execute."

      foreach (var file in files)
      {
          // .....
          await blockBlob.UploadFromStreamAsync(fs);
          listOfMovedLabelFiles.Add(fileName);
      }
      if (listOfMovedLabelFiles.Count > 0) // error point
      {
          // my code for further operation
      }

    Is there any way to wait until all the async operations have completed?
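
    That error means a collection was modified while it was being enumerated; with async code it typically happens because another continuation runs during an await and touches either files or the result list. One hedged way around it is to start every upload first and only touch the result list after Task.WhenAll completes. A minimal sketch, assuming the WindowsAzure.Storage client library and hypothetical names:

      using System.Collections.Generic;
      using System.IO;
      using System.Linq;
      using System.Threading.Tasks;
      using Microsoft.WindowsAzure.Storage.Blob;

      public static class BlobUploader
      {
          // Hypothetical helper: uploads every file, then returns the names in one go,
          // so no list is modified while another piece of code may be enumerating it.
          public static async Task<List<string>> UploadAllAsync(
              CloudBlobContainer container, IReadOnlyList<string> filePaths)
          {
              var uploads = filePaths.Select(async path =>
              {
                  CloudBlockBlob blob = container.GetBlockBlobReference(Path.GetFileName(path));
                  using (FileStream fs = File.OpenRead(path))
                  {
                      await blob.UploadFromStreamAsync(fs);
                  }
                  return Path.GetFileName(path);
              }).ToList();

              string[] uploaded = await Task.WhenAll(uploads);   // wait for every upload
              return uploaded.ToList();
          }
      }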

    Read the article

  • OBIEE 11.1.1 - OBIEE 11g Full Sample App on VMware Player 4

    - by user809526
    The Full Sample App is designed to run on Virtual Box. Let's describe how to run it on VMware Player 4. Open Virtualization Format Tool http://communities.vmware.com/community/vmtn/server/vsphere/automationtools/ovf VMware Player Documentation https://www.vmware.com/support/pubs/player_pubs.html Full Sample App Deployment Guide sampleapp107-vbimage-deployguide-453583.pdf INSTALL VMplayer 4.0.0 as root LINUX # sh VMware-Player-4.0.0-471780.x86_64.bundle (A new VM is not needed and can be deleted later after that installation is completed. "I will install OS later" - blank hard disk Guest: linux, Red Hat Enterprise Linux 5-64bits => rename to RHEL target: eg /a/root/vmware/ Max disk size: 5 GB (will be deleted) Disk: Single file Dummy RHEL.vmk, RHEL.vmdk is generated. "Delete VM from Disk" in VM Player.) Copy Full Sample App files to target /a/root/vmware/ WARNING: Select a target eg /a/root/vmware/ with lots of free space, 95 GB. Check checksums (md5sum). Please do it! ff85c7eacf7fb8c382e98da875e879e1  Sampleapp_v107_GA-disk1.vmdk 973258cb3c7d64ab03ae853278cf2233  Sampleapp_v107_GA-disk2.vmdk e576be16e36d810479736bfb15d050f5  Sampleapp_v107_GA-disk3.vmdk 3455df77279e53e07d5fee6712f1597d  Sampleapp_v107_GA-disk4.vmdk OVF FILE   Sampleapp_v107_GA.ovf CONVERSION $ cd /a/root/vmware/ LINUX $ /usr/bin/ovftool -tt=ovf --compress=1 -dm=monolithicSparse Sampleapp_v107_GA.ovf .  [dot] Opening OVF source: Sampleapp_v107_GA.ovf Warning: No manifest file Opening OVF target: . Writing OVF package: Sampleapp_v107_GA/Sampleapp_v107_GA.ovf Disk Transfer Completed                   Completed successfully WINDOWS CYGWIN $ /cygdrive/c/VMwarePlayer/OVFTool/ovftool.exe -tt=ovf --compress=1 -dm=monolithicSparse Sampleapp_v107_GA.ovf .  [dot] Opening OVF source: Sampleapp_v107_GA.ovf Warning: No manifest file Opening OVF target: . Writing OVF package: Sampleapp_v107_GA\Sampleapp_v107_GA.ovf Disk Transfer Completed Completed successfully /a/root/vmware$ du -sk 49095328    .   [50 GB already occupied] IMPORT - First start of VM Player 4: /usr/bin/vmplayer "Open a Virtual Machine" Browse to /a/root/vmware/Sampleapp_v107_GA/Sampleapp_v107_GA.ovf [the new generated .ovf] "Import Virtual Machine" dialog Name: Sampleapp_v107_GA Location: /a/root/vmware/Sampleapp_v107_GA/storage [was /home/tdubois/vmware/Sampleapp_v107_GA] "Import" "The import failed because /a/root/vmware/Sampleapp_v107_GA/Sampleapp_v107_GA.ovf did not pass OVF specification conformance or virtual hardware compliance checks. Click Retry to relax OVF specification..." "Retry" ; Long import /a/root/vmware/Sampleapp_v107_GA/storage/Sampleapp_v107_GA.vmx and new .vmdk files are created. /a/root/vmware$ du -sk 95551384    .   [95 GB occupied] Full Sample App GUEST SETUP "Edit VM settings" min 3GB, 2+ processors, network bridged. For OBIEE + Essbase testing use 8 GB RAM hardware. At first time lauch of Full Sample App, leave OEL booting for several minutes undisturbed. Problem with X display server may occur [/usr/bin/Xorg ; man Xorg]. "Failed to start the X server.... Would you like to view the X server output to diagnose the problem?" "No" [tab key] "Would you like to try to configure the X server? Note that you will need the root password for this." "Yes" [oracle] X Display Settings 800x600 saved in /etc/X11/xorg.conf "Trying to restart the X server" Login as root/oracle in guest OEL. In guest OEL, Virtual Machine > Install VMware Tools... 
Extract archive VMwareTools-8.8.0-471268.tar.gz all files in writable local directory eg /root In Terminal run Perl script # cd /root/vmware-tools-distrib ; ./vmware-install.pl [keep all default answers] Set keyboard layout System > Preferences > Keyboard > Layouts Restart X server eg System > Log Out root... , relogin Modify X resolution System > Preferences > Screen Resolution Full Sample App OEL login: oracle/oracle ; root/oracle [default US keyboard layout] Credentials are described in the 'sampleapp107-vbimage-deployguide-453583.pdf' The large files in /a/root/vmware/ /a/root/vmware/Sampleapp_v107_GA/ may be removed. FAILURE REMARK: Adding the 4 original Sampleapp_v107_GA-disks[1234].vmdk to VM Player does NOT work as described below. "Edit VM settings" "Remove" "Hard Disk" "Edit VM settings" "Add" "Hard Disk" "Next" "Use an existing virtual disk" "Browse" "Finish" "Keep existing format" "Ok" for each 4 disks settings one by one. Start VM Player 4. "You do not have write access to a partition" Allow all Sampleapp_v107 OEL linux launches. OEL stalls silently after 'Checking filesystems'.

    Read the article

  • Announcing SonicAgile – An Agile Project Management Solution

    - by Stephen.Walther
    I’m happy to announce the public release of SonicAgile – an online tool for managing software projects. You can register for SonicAgile at www.SonicAgile.com and start using it with your team today. SonicAgile is an agile project management solution which is designed to help teams of developers coordinate their work on software projects. SonicAgile supports creating backlogs, scrumboards, and burndown charts. It includes support for acceptance criteria, story estimation, calculating team velocity, and email integration. In short, SonicAgile includes all of the tools that you need to coordinate work on a software project, get stuff done, and build great software. Let me discuss each of the features of SonicAgile in more detail. SonicAgile Backlog You use the backlog to create a prioritized list of user stories such as features, bugs, and change requests. Basically, all future work planned for a product should be captured in the backlog. We focused our attention on designing the user interface for the backlog. Because the main function of the backlog is to prioritize stories, we made it easy to prioritize a story by just drag and dropping the story from one location to another. We also wanted to make it easy to add stories from the product backlog to a sprint backlog. A sprint backlog contains the stories that you plan to complete during a particular sprint. To add a story to a sprint, you just drag the story from the product backlog to the sprint backlog. Finally, we made it easy to track team velocity — the average amount of work that your team completes in each sprint. Your team’s average velocity is displayed in the backlog. When you add too many stories to a sprint – in other words, you attempt to take on too much work – you are warned automatically: SonicAgile Scrumboard Every workday, your team meets to have their daily scrum. During the daily scrum, you can use the SonicAgile Scrumboard to see (at a glance) what everyone on the team is working on. For example, the following scrumboard shows that Stephen is working on the Fix Gravatar Bug story and Pete and Jane have finished working on the Product Details Page story: Every story can be broken into tasks. For example, to create the Product Details Page, you might need to create database objects, do page design, and create an MVC controller. You can use the Scrumboard to track the state of each task. A story can have acceptance criteria which clarify the requirements for the story to be done. For example, here is how you can specify the acceptance criteria for the Product Details Page story: You cannot close a story — and remove the story from the list of active stories on the scrumboard — until all tasks and acceptance criteria associated with the story are done. SonicAgile Burndown Charts You can use Burndown charts to track your team’s progress. SonicAgile supports Release Burndown, Sprint Burndown by Task Estimates, and Sprint Burndown by Story Points charts. For example, here’s a sample of a Sprint Burndown by Story Points chart: The downward slope shows the progress of the team when closing stories. The vertical axis represents story points and the horizontal axis represents time. Email Integration SonicAgile was designed to improve your team’s communication and collaboration. Most stories and tasks require discussion to nail down exactly what work needs to be done. The most natural way to discuss stories and tasks is through email. However, you don’t want these discussions to get lost. 
When you use SonicAgile, all email discussions concerning a story or a task (including all email attachments) are captured automatically. At any time in the future, you can view all of the email discussion concerning a story or a task by opening the Story Details dialog: Why We Built SonicAgile We built SonicAgile because we needed it for our team. Our consulting company, Superexpert, builds websites for financial services, startups, and large corporations. We have multiple teams working on multiple projects. Keeping on top of all of the work that needs to be done to complete a software project is challenging. You need a good sense of what needs to be done, who is doing it, and when the work will be done. We built SonicAgile because we wanted a lightweight project management tool which we could use to coordinate the work that our team performs on software projects. How We Built SonicAgile We wanted SonicAgile to be easy to use, highly scalable, and have a highly interactive client interface. SonicAgile is very close to being a pure Ajax application. We built SonicAgile using ASP.NET MVC 3, jQuery, and Knockout. We would not have been able to build such a complex Ajax application without these technologies. Almost all of our MVC controller actions return JSON results (While developing SonicAgile, I would have given my left arm to be able to use the new ASP.NET Web API). The controller actions are invoked from jQuery Ajax calls from the browser. We built SonicAgile on Windows Azure. We are taking advantage of SQL Azure, Table Storage, and Blob Storage. Windows Azure enables us to scale very quickly to handle whatever demand is thrown at us. Summary I hope that you will try SonicAgile. You can register at www.SonicAgile.com (there’s a free 30-day trial). The goal of SonicAgile is to make it easier for teams to get more stuff done, work better together, and build amazing software. Let us know what you think!
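
    As a small illustration of the architecture described above (not SonicAgile's actual code; the names are hypothetical), an ASP.NET MVC 3 action that returns JSON for a jQuery/Knockout client looks roughly like this:

      using System.Web.Mvc;

      public class StoriesController : Controller
      {
          // Returns one story as JSON; a client-side Knockout view model binds to it.
          public JsonResult Details(int id)
          {
              var story = new { Id = id, Title = "Product Details Page", StoryPoints = 5 };
              return Json(story, JsonRequestBehavior.AllowGet);
          }
      }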

    Read the article

  • SQL Pre-Con…at the Beach

    - by Argenis
      Building upon the success of SQL Rally 2012 (where we packed a room full of DBAs), my friend Robert Davis [Twitter|Blog] and yours truly will be again delivering our day-long Pre-Conference “Demystifying Database Administration Best Practices” this Friday (6/8/2012) – right before SQLSaturday #132 in Pensacola, FL. If you are in the vicinity of Pensacola, come join us! We had tons of fun at Rally. Robert and I love sharing tips and stories that will help you on your day to day duties as a DBA. Some of the topics that we’ll touch on (this is by no means a comprehensive list) Active Directory configuration for SQL Server Deployments Windows Server Deployments Storage and I/O High Availability / Disaster Recovery / Business Continuity Replication Day-To-Day Operations Maintenance TempDB Code Reviews Other Database and Server Settings   Follow this link to sign up for the Pre-Con at Pensacola: http://demystifyingdba.eventbrite.com/ Here’s a blog post that Robert made on the subject of Best Practices.  Hope to see you there!

    Read the article

  • Migration from Exchange to BPOS - Microsoft Assessment and Planning (MAP) Toolkit Link

    - by Harish Pavithran
    The Microsoft Assessment and Planning (MAP) Toolkit is an agentless toolkit that finds computers on a network and performs a detailed inventory of the computers using Windows Management Instrumentation (WMI) and the Remote Registry Service. The data and analysis provided by this toolkit can significantly simplify the planning process for migrating to Windows® 7, Windows Vista®, Microsoft Office 2007, Windows Server® 2008 R2, Windows Server 2008, Hyper-V, Microsoft Application Virtualization, Microsoft SQL Server 2008, and Forefront® Client Security and Network Access Protection. Assessments for Windows Server 2008 R2, Windows Server 2008, Windows 7, and Windows Vista include device driver availability as well as recommendations for hardware upgrades. If you are interested in server virtualization planning, MAP provides the ability to gather performance metrics from computers you are considering for virtualization and a feature to model a library of potential host hardware and storage configurations. This information can be used to quickly perform "what-if" analysis using Hyper-V and Microsoft Virtual Server 2005 R2 as virtualization platforms. http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=67240b76-3148-4e49-943d-4d9ea7f77730
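
    For a sense of what agentless WMI inventory involves, here is a minimal C# sketch of the same kind of query, run against the local machine; this is not MAP itself, which collects far more and does so remotely across the network:

      using System;
      using System.Management;   // add a reference to System.Management.dll

      class WmiInventorySample
      {
          static void Main()
          {
              // A simple operating-system inventory query over WMI.
              var searcher = new ManagementObjectSearcher(
                  @"\\.\root\cimv2",
                  "SELECT Caption, Version, TotalVisibleMemorySize FROM Win32_OperatingSystem");

              foreach (ManagementObject os in searcher.Get())
              {
                  Console.WriteLine("{0} {1}, {2} KB RAM",
                      os["Caption"], os["Version"], os["TotalVisibleMemorySize"]);
              }
          }
      }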

    Read the article

  • Bye Bye Year of the Dragon, Hello BPM

    - by Michelle Kimihira
    As CNN asks you to vote for the most intriguing person of the year, what technologies do you think were most intriguing in 2012? Was it Social, Mobile, or BPM, or were you most captivated by Customer Experience? Well, we too observed these technology trends on the upswing and foresee that they will remain in the limelight for 2013. What if we told you that there is a solution that brings these technologies together and helps not only to create efficient business processes but also to deliver an engaging customer experience? As we transition into 2013, let’s take a look at some of the top trending topics in BPM. Ajay Khanna discusses these trends on the Oracle BPM blog in Bye Bye Year of the Dragon, Hello BPM. Additional information: product information on Oracle.com (Oracle Fusion Middleware); follow us on Twitter, Facebook, and YouTube; subscribe to our regular Fusion Middleware Newsletter.

    Read the article

  • Silverlight 5 Hosting :: Features in Silverlight 5 and Release Date

    - by mbridge
    Silverlight 5 is finally announced in the Silverlight FireStarter Event on the 2nd December, 2010. This new version of Silverlight which was earlier labeled as 'Future of Microsoft Silverlight' has now come much closer to go live as the first Silverlight 5 Beta version is expected to be shipped during the early months of 2011. However for the full fledged and the final release of Silverlight 5, we have to wait many more months as the same is likely to be made available within the Q3 2011. As would have been usually expected, this latest edition would feature many new capabilities thereby extending the developer productivity to a whole new dimension of premium media experience and feature-rich business applications. It comes along with many new feature updates as well as the inclusion of new technologies to improve the standard of the Silverlight applications which are now fine-tuned to produce next generation business and media solutions that is capable to meet the requirements of the advanced web-based app development. The Silverlight 5 is all set to replace the previous fourth version which now includes more than forty new features while also dropping various deprecated elements that was prevalent earlier. It has brought around some major performance enhancements and also included better support for various other tools and technologies. Following are some of the changes that are registered to be available under the Silverlight 5 Beta edition which is scheduled to be launched during the Q1 2011. Silverlight 5 : Premium Media Experiences The media features of Silverlight 5 has seen some major enhancements with a lot of optimizations being made to deliver richer solutions. It's capability has now been extended to make things easier, faster and capable of performing the desired tasks in the most efficient manner. The Silverlight media solutions has already been a part of many companies in the recent days where various on-demand Silverlight services were featured but with the arrival of the next generation premium media solution of Silverlight 5, it is expected to register new heights of success and global user acclamation for using it with many esteemed web-based projects and media solutions. - The most happening element in the new Silverlight 5 will be its support for utilizing the GPU based hardware acceleration which is intended to lower down the CPU load to a significant extent and thereby allowing faster rendering of media contents without consuming much resources. This feature is believed to be particularly helpful for low configured machines to run full HD media content without any lagging caused due to processor load. It will hence be one great feature to revolutionize the new generation high quality media contents to be available within the web in a more efficient manner with its hardware decoded video playback capabilities. - With the inclusion of hardware video decoding to minimize the processor load, the Silverlight 5 also comes with another optimization enhancement to also reduce the power consumption level by making new methods to deal with the power-saver settings. With this optimization in effect, the computer would be automatically allowed to switch to sleep mode while no video playback is in progress and also to prevent any screensavers to popup and cause annoyances during any video playback. There would also be other power saver options which will be made available to best suit the users requirements and purpose. 
- The Silverlight trickplay feature is another great way to tweak any silverlight powered media content as is used for many video tutorial sites or for dealing with any sort of presentations. This feature enables the user to modify the playback speed to either slowdown or speedup during the playback durations based on the requirements without compromising on the quality of output. Normally such manipulations always makes the content's audio to go off-pitch, but the same will not be the case with TrickPlay and the audio would seamlessly progress with the video without skipping any of its part. - In addition to all of the above, the new Silverlight 5 will be featuring wireless control of all the media contents by making use of remote controllers. With the use of such remote devices, it will be easier to handle the various media playback controls thereby providing more freedom while experiencing the premium media services. Silverlight 5 : Business Application Development The application development standard has been extended with more possibilities by bringing forth new and useful technologies and also reviving the existing methods to work better than what it was used to. From the UI improvements to advanced technical aspects, the Silverlight 5 scores high on all grounds to produce great next generation business delivered applications by putting in more creativity and resourceful touch to all the apps being produced with it. - The WPF feature of Silverlight is made more effective by introducing new standards of Databinding which is intended to improve the productivity standards of the Silverlight application developer. It brings in a lot of convenience in debugging the databinding components or expressions and hence making things work in a flawless manner. Some additional features related to databinding includes that of Ancestor RelativeSource, Implicit DataTemplates and Model View ViewModel (MVVM) support with DataContextChanged event and many other new features relating it. - It now comes with a refined text and printing service which facilitates better clarity of the text rendering and also many positive changes which are being applied to the layout pattern. New supports has been added to include OpenType font, multi-column text, linked-text containers and character leading support to name a few among the available features.This also includes some important printing aspects like that of Postscript Vector Printing API which allows to program our printing tasks in a user defined way and Pivot functionality for visualization concerns of informations. - The Graphics support is the key improvements being incorporated which now enables to utilize three dimensional graphics pattern using GPU acceleration. It can manage to provide some really cool visualizations being curved to provide media contents within the business apps with also the support for full HD contents at 1080p quality. - Silverlight 5 includes the support for 64-bit operating systems and relevant browsers and is also optimized to provide better performance. It can support the background thread for the networking which can reduce the latency of the network to a considerable extent. The Out-of-Browser functionality adds the support for utilizing various libraries and also the Win32 API. It also comes with testing support with VS 2010 which is mostly an automated procedure and has also enabled increased security aspects of all the Silverlight 5 developed applications by using the improved version of group policy support.

    Read the article

  • Best/Bad practices for code sharing?

    - by sunpech
    The more I explore GitHub, the more I like it. I really enjoy how coding is becoming more social. I'm curious whether there are any bad practices that programmers should avoid when sharing their code with each other, and, alongside the bad practices, what the best practices for code sharing are. For example: is it a bad practice to keep multiple scripts/projects in a single repo named 'MiscProjects', where the repo, as the name suggests, is a collection of miscellaneous small scripts and projects? This may resemble how a programmer organizes projects on his or her local storage, but it's possibly not optimal for code sharing. Maybe it would be better with a good README and documentation? Or, as long as it's well documented, anything goes?

    Read the article

  • CodePlex Daily Summary for Sunday, March 07, 2010

    CodePlex Daily Summary for Sunday, March 07, 2010New ProjectsAlgorithminator: Universal .NET algorithm visualizer, which helps you to illustrate any algorithm, written in any .NET language. Still in development.ALToolkit: Contains a set of handy .NET components/classes. Currently it contains: * A Numeric Text Box (an Extended NumericUpDown) * A Splash Screen base fo...Automaton Home: Automaton is a home automation software built with a n-Tier, MVVM pattern utilzing WCF, EF, WPF, Silverlight and XBAP.Developer Controls: Developer Controls contains various controls to help build applications that can script/write code.Dynamic Reference Manager: Dynamic Reference Manager is a set (more like a small group) of classes and attributes written in C# that allows any .NET program to reference othe...indiologic: Utilities of an IndioNeural Cryptography in F#: This project is my magistracy resulting work. It is intended to be an example of using neural networks in cryptography. Hashing functions are chose...Particle Filter Visualization: Particle Filter Visualization Program for the Intel Science and Engineering FairPólya: Efficient, immutable, polymorphic collections. .Net lacks them, we provide them*. * By we, we mean I; and by efficient, I mean hopefully so.project euler solutions from mhinze: mhinze project euler solutionsSilverlight 4 and WCF multi layer: Silverlight 4 and WCF multi layersqwarea: Project for a browser-based, minimalistic, massively multiplayer strategy game. Part of the "Génie logiciel et Cloud Computing" course of the ENS (...SuperSocket: SuperSocket, a socket application framework can build FTP/SMTP/POP server easilyToast (for ASP.NET MVC): Dynamic, developer & designer friendly content injection, compression and optimization for ASP.NET MVCNew ReleasesALToolkit: ALToolkit 1.0: Binary release of the libraries containing: NumericTextBox SplashScreen Based on the VB.NET code, but that doesn't really matter.Blacklist of Providers: 1.0-Milestone 1: Blacklist of Providers.Milestone 1In this development release implemented - Main interface (Work Item #5453) - Database (Work Item #5523)C# Linear Hash Table: Linear Hash Table b2: Now includes a default constructor, and will throw an exception if capacity is not set to a power of 2 or loadToMaintain is below 1.Composure: CassiniDev-Trunk-40745-VS2010.rc1.NET4: A simple port of the CassiniDev portable web server project for Visual Studio 2010 RC1 built against .NET 4.0. The WCF tests currently fail unless...Developer Controls: DevControls: These are the version 1.0 releases of these controls. Download the individually or all together (in a .zip file). More releases coming soon!Dynamic Reference Manager: DRM Alpha1: This is the first release. I'm calling it Alpha because I intend implementing other functions, but I do not intend changing the way current functio...ESB Toolkit Extensions: Tellago SOA ESB Extenstions v0.3: Windows Installer file that installs Library on a BizTalk ESB 2.0 system. 
This Install automatically configures the esb.config to use the new compo...GKO Libraries: GKO Libraries 0.1 Alpha: 0.1 AlphaHome Access Plus+: v3.0.3.0: Version 3.0.3.0 Release Change Log: Added Announcement Box Removed script files that aren't needed Fixed & issue in directory path Stylesheet...Icarus Scene Engine: Icarus Scene Engine 1.10.306.840: Icarus Professional, Icarus Player, the supporting software for Icarus Scene Engine, with some included samples, and the start of a tutorial (with ...mavjuz WndLpt: wndlpt-0.2.5: New: Response to 5 LPT inputs "test i 1" New: Reaction to 12 LPT outputs "test q 8" New: Reaction to all LPT pins "test pin 15" New: Syntax: ...Neural Cryptography in F#: Neural Cryptography 0.0.1: The most simple version of this project. It has a neural network that works just like logical AND and a possibility to recreate neural network from...Password Provider: 1.0.3: This release fixes a bug which caused the program to crash when double clicking on a generic item.RoTwee: RoTwee 6.2.0.0: New feature is as next. 16649 Add hashtag for tweet of tune.Now you can tweet your playing tune with hashtag.Visual Studio DSite: Picture Viewer (Visual C++ 2008): This example source code allows you to view any picture you want in the click of a button. All you got to do is click the button and browser via th...WatchersNET CKEditor™ Provider for DotNetNuke: CKEditor Provider 1.8.00: Whats New File Browser: Folders & Files View reworked File Browser: Folders & Files View reworked File Browser: Folders are displayed as TreeVi...WSDLGenerator: WSDLGenerator 0.0.0.4: - replaced CommonLibrary.dll by CommandLineParser.dll - added better support for custom complex typesMost Popular ProjectsMetaSharpSilverlight ToolkitASP.NET Ajax LibraryAll-In-One Code FrameworkWindows 7 USB/DVD Download Toolニコ生アラートWindows Double ExplorerVirtual Router - Wifi Hot Spot for Windows 7 / 2008 R2Caliburn: An Application Framework for WPF and SilverlightArkSwitchMost Active ProjectsUmbraco CMSRawrSDS: Scientific DataSet library and toolsBlogEngine.NETjQuery Library for SharePoint Web Servicespatterns & practices – Enterprise LibraryIonics Isapi Rewrite FilterFarseer Physics EngineFasterflect - A Fast and Simple Reflection APIFluent Assertions

    Read the article

  • Management and Monitoring Tools for Windows Azure

    - by BuckWoody
    With such a large platform, Windows Azure has a lot of moving parts. We’ve done our best to keep the interface as simple as possible, while giving you the most control and visibility we can. However, as with most Microsoft products, there are multiple ways to do something – and I’ve always found that to be a good strength. Depending on the situation, I might want a graphical interface, a command-line interface, or just an API so I can incorporate the management into my own tools, or have third-party companies write other tools. While by no means exhaustive, I thought I might put together a quick list of a few tools you can use to manage and monitor Windows Azure components, from our IaaS, SaaS and PaaS offerings. Some of the products focus on one area more than another, but all are available today. I’ll try and maintain this list to keep it current, but make sure you check the date of this post’s update – if it’s more than six months old, it’s most likely out of date. Things move fast in the cloud. The Windows Azure Management Portal The primary tool for managing Windows Azure is our portal – most everything you need is there, from creating new services to querying a database. There are two versions as of this writing – a Silverlight client version, and a newer HTML5 version. The latter is being updated constantly to be in parity with the Silverlight client. There’s a balance in this portal between simplicity and power – we’re following the “less is more” approach, with increasing levels of detail as you work through the portal rather than overwhelming you with a single, long “more is more” page. You can find the Portal here: http://windowsazure.com (then click “Log In” and then “Portal”) Windows Azure Management API You can also use programming tools to either write your own interface, or simply provide management functions directly within your solution. You have two options – you can use the more universal REST API’s, which area bit more complex but work with any system that can write to them, or the more approachable .NET API calls in code. You can find the reference for the API’s here: http://msdn.microsoft.com/en-us/library/windowsazure/ee460799.aspx  All Class Libraries, for each part of Windows Azure: http://msdn.microsoft.com/en-us/library/ee393295.aspx  PowerShell Command-lets PowerShell is one of the most powerful scripting languages I’ve used with Windows – and it’s baked into all of our products. When you need to work with multiple servers, scripting is really the only way to go, and the Windows Azure PowerShell Command-Lets allow you to work across most any part of the platform – and can even be used within the services themselves. You can do everything with them from creating a new IaaS, PaaS or SaaS service, to controlling them and even working with security and more. You can find more about the Command-Lets here: http://wappowershell.codeplex.com/documentation (older link, still works, will point you to the new ones as well) We have command-line utilities for other operating systems as well: https://www.windowsazure.com/en-us/manage/downloads/  Video walkthrough of using the Command-Lets: http://channel9.msdn.com/Events/BUILD/BUILD2011/SAC-859T  System Center System Center is actually a suite of graphical tools you can use to manage, deploy, control, monitor and tune software from Microsoft and even other platforms. 
This will be the primary tool we’ll recommend for managing a hybrid or contiguous management process – and as time goes on you’ll see more and more features put into System Center for the entire Windows Azure suite of products. You can find the Management Pack and README for it here: http://www.microsoft.com/en-us/download/details.aspx?id=11324  SQL Server Management Studio / Data Tools / Visual Studio SQL Server has two built-in management and development, and since Version 2008 R2, you can use them to manage Windows Azure Databases. Visual Studio also lets you connect to and manage portions of Windows Azure as well as Windows Azure Databases. You can read more about Visual Studio here: http://msdn.microsoft.com/en-us/library/windowsazure/ee405484  You can read more about the SQL tools here: http://msdn.microsoft.com/en-us/library/windowsazure/ee621784.aspx  Vendor-Provided Tools Microsoft does not suggest or endorse a specific third-party product. We do, however, use them, and see lots of other customers use them. You can browse to these sites to learn more, and chat with their folks directly on how they support Windows Azure. Cerebrata: Tools for managing from the command-line, graphical diagnostics, graphical storage management - http://www.cerebrata.com/  Quest Cloud Tools: Monitoring, Storage Management, and costing tools - http://communities.quest.com/community/cloud-tools  Paraleap: Monitoring tool - http://www.paraleap.com/AzureWatch  Cloudgraphs: Monitoring too -  http://www.cloudgraphs.com/  Opstera: Monitoring for Windows Azure and a Scale-out pattern manager - http://www.opstera.com/products/Azureops/  Compuware: SaaS performance monitoring, load testing -  http://www.compuware.com/application-performance-management/gomez-apm-products.html  SOASTA: Penetration and Security Testing - http://www.soasta.com/cloudtest/enterprise/  LoadStorm: Load-testing tool - http://loadstorm.com/windows-azure  Open-Source Tools This is probably the most specific set of tools, and the list I’ll have to maintain most often. Smaller projects have a way of coming and going, so I’ll try and make sure this list is current. Windows Azure MMC: (I actually use this one a lot) http://wapmmc.codeplex.com/  Windows Azure Diagnostics Monitor: http://archive.msdn.microsoft.com/wazdmon  Azure Application Monitor: http://azuremonitor.codeplex.com/  Azure Web Log: http://www.xentrik.net/software/azure_web_log.html  Cloud Ninja:Multi-Tennant billing and performance monitor -  http://cnmb.codeplex.com/  Cloud Samurai: Multi-Tennant Management- http://cloudsamurai.codeplex.com/    If you have additions to this list, please post them as a comment and I’ll research and then add them. Thanks!
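
    As a tiny illustration of the REST route mentioned above, here is a hedged sketch of listing hosted services through the Service Management API; the subscription ID and certificate thumbprint are placeholders, and the x-ms-version value is one of the published API versions:

      using System;
      using System.IO;
      using System.Net;
      using System.Security.Cryptography.X509Certificates;

      class ListHostedServices
      {
          static void Main()
          {
              // Placeholders: use your own subscription ID and an uploaded management certificate.
              const string subscriptionId = "00000000-0000-0000-0000-000000000000";
              const string thumbprint = "YOUR-MANAGEMENT-CERT-THUMBPRINT";

              var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
              store.Open(OpenFlags.ReadOnly);
              X509Certificate2 cert = store.Certificates
                  .Find(X509FindType.FindByThumbprint, thumbprint, false)[0];

              var request = (HttpWebRequest)WebRequest.Create(
                  "https://management.core.windows.net/" + subscriptionId + "/services/hostedservices");
              request.Headers.Add("x-ms-version", "2012-03-01");   // required API version header
              request.ClientCertificates.Add(cert);                 // authenticate with the management cert

              using (var response = (HttpWebResponse)request.GetResponse())
              using (var reader = new StreamReader(response.GetResponseStream()))
              {
                  Console.WriteLine(reader.ReadToEnd());   // XML list of hosted services
              }
          }
      }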

    Read the article

  • How to find and fix performance problems in ORM powered applications

    - by FransBouma
    Once in a while we get requests about how to fix performance problems with our framework. As it comes down to following the same steps and looking into the same things every single time, I decided to write a blogpost about it instead, so more people can learn from this and solve performance problems in their O/R mapper powered applications. In some parts it's focused on LLBLGen Pro but it's also usable for other O/R mapping frameworks, as the vast majority of performance problems in O/R mapper powered applications are not specific for a certain O/R mapper framework. Too often, the developer looks at the wrong part of the application, trying to fix what isn't a problem in that part, and getting frustrated that 'things are so slow with <insert your favorite framework X here>'. I'm in the O/R mapper business for a long time now (almost 10 years, full time) and as it's a small world, we O/R mapper developers know almost all tricks to pull off by now: we all know what to do to make task ABC faster and what compromises (because there are almost always compromises) to deal with if we decide to make ABC faster that way. Some O/R mapper frameworks are faster in X, others in Y, but you can be sure the difference is mainly a result of a compromise some developers are willing to deal with and others aren't. That's why the O/R mapper frameworks on the market today are different in many ways, even though they all fetch and save entities from and to a database. I'm not suggesting there's no room for improvement in today's O/R mapper frameworks, there always is, but it's not a matter of 'the slowness of the application is caused by the O/R mapper' anymore. Perhaps query generation can be optimized a bit here, row materialization can be optimized a bit there, but it's mainly coming down to milliseconds. Still worth it if you're a framework developer, but it's not much compared to the time spend inside databases and in user code: if a complete fetch takes 40ms or 50ms (from call to entity object collection), it won't make a difference for your application as that 10ms difference won't be noticed. That's why it's very important to find the real locations of the problems so developers can fix them properly and don't get frustrated because their quest to get a fast, performing application failed. Performance tuning basics and rules Finding and fixing performance problems in any application is a strict procedure with four prescribed steps: isolate, analyze, interpret and fix, in that order. It's key that you don't skip a step nor make assumptions: these steps help you find the reason of a problem which seems to be there, and how to fix it or leave it as-is. Skipping a step, or when you assume things will be bad/slow without doing analysis will lead to the path of premature optimization and won't actually solve your problems, only create new ones. The most important rule of finding and fixing performance problems in software is that you have to understand what 'performance problem' actually means. Most developers will say "when a piece of software / code is slow, you have a performance problem". But is that actually the case? If I write a Linq query which will aggregate, group and sort 5 million rows from several tables to produce a resultset of 10 rows, it might take more than a couple of milliseconds before that resultset is ready to be consumed by other logic. 
If I solely look at the Linq query, the code consuming the resultset of the 10 rows and then look at the time it takes to complete the whole procedure, it will appear to me to be slow: all that time taken to produce and consume 10 rows? But if you look closer, if you analyze and interpret the situation, you'll see it does a tremendous amount of work, and in that light it might even be extremely fast. With every performance problem you encounter, always do realize that what you're trying to solve is perhaps not a technical problem at all, but a perception problem. The second most important rule you have to understand is based on the old saying "Penny wise, Pound Foolish": the part which takes e.g. 5% of the total time T for a given task isn't worth optimizing if you have another part which takes a much larger part of the total time T for that same given task. Optimizing parts which are relatively insignificant for the total time taken is not going to bring you better results overall, even if you totally optimize that part away. This is the core reason why analysis of the complete set of application parts which participate in a given task is key to being successful in solving performance problems: No analysis -> no problem -> no solution. One warning up front: hunting for performance will always include making compromises. Fast software can be made maintainable, but if you want to squeeze as much performance out of your software, you will inevitably be faced with the dilemma of compromising one or more from the group {readability, maintainability, features} for the extra performance you think you'll gain. It's then up to you to decide whether it's worth it. In almost all cases it's not. The reason for this is simple: the vast majority of performance problems can be solved by implementing the proper algorithms, the ones with proven Big O-characteristics so you know the performance you'll get plus you know the algorithm will work. The time taken by the algorithm implementing code is inevitable: you already implemented the best algorithm. You might find some optimizations on the technical level but in general these are minor. Let's look at the four steps to see how they guide us through the quest to find and fix performance problems. Isolate The first thing you need to do is to isolate the areas in your application which are assumed to be slow. For example, if your application is a web application and a given page is taking several seconds or even minutes to load, it's a good candidate to check out. It's important to start with the isolate step because it allows you to focus on a single code path per area with a clear begin and end and ignore the rest. The rest of the steps are taken per identified problematic area. Keep in mind that isolation focuses on tasks in an application, not code snippets. A task is something that's started in your application by either another task or the user, or another program, and has a beginning and an end. You can see a task as a piece of functionality offered by your application.  Analyze Once you've determined the problem areas, you have to perform analysis on the code paths of each area, to see where the performance problems occur and which areas are not the problem. 
This is a multi-layered effort: an application which uses an O/R mapper typically consists of multiple parts: there's likely some kind of interface (web, webservice, windows etc.), a part which controls the interface and business logic, the O/R mapper part and the RDBMS, all connected with either a network or inter-process connections provided by the OS or other means. Each of these parts, including the connectivity plumbing, eat up a part of the total time it takes to complete a task, e.g. load a webpage with all orders of a given customer X. To understand which parts participate in the task / area we're investigating and how much they contribute to the total time taken to complete the task, analysis of each participating task is essential. Start with the code you wrote which starts the task, analyze the code and track the path it follows through your application. What does the code do along the way, verify whether it's correct or not. Analyze whether you have implemented the right algorithms in your code for this particular area. Remember we're looking at one area at a time, which means we're ignoring all other code paths, just the code path of the current problematic area, from begin to end and back. Don't dig in and start optimizing at the code level just yet. We're just analyzing. If your analysis reveals big architectural stupidity, it's perhaps a good idea to rethink the architecture at this point. For the rest, we're analyzing which means we collect data about what could be wrong, for each participating part of the complete application. Reviewing the code you wrote is a good tool to get deeper understanding of what is going on for a given task but ultimately it lacks precision and overview what really happens: humans aren't good code interpreters, computers are. We therefore need to utilize tools to get deeper understanding about which parts contribute how much time to the total task, triggered by which other parts and for example how many times are they called. There are two different kind of tools which are necessary: .NET profilers and O/R mapper / RDBMS profilers. .NET profiling .NET profilers (e.g. dotTrace by JetBrains or Ants by Red Gate software) show exactly which pieces of code are called, how many times they're called, and the time it took to run that piece of code, at the method level and sometimes even at the line level. The .NET profilers are essential tools for understanding whether the time taken to complete a given task / area in your application is consumed by .NET code, where exactly in your code, the path to that code, how many times that code was called by other code and thus reveals where hotspots are located: the areas where a solution can be found. Importantly, they also reveal which areas can be left alone: remember our penny wise pound foolish saying: if a profiler reveals that a group of methods are fast, or don't contribute much to the total time taken for a given task, ignore them. Even if the code in them is perhaps complex and looks like a candidate for optimization: you can work all day on that, it won't matter.  As we're focusing on a single area of the application, it's best to start profiling right before you actually activate the task/area. Most .NET profilers support this by starting the application without starting the profiling procedure just yet. 
You navigate to the particular part which is slow, start profiling in the profiler, perform the actions in your application which are considered slow, and afterwards take a snapshot in the profiler. The snapshot contains the data collected by the profiler during the slow action, so most of the data is produced by code in the area to investigate. This is important, because it allows you to stay focused on a single area.

O/R mapper and RDBMS profiling

.NET profilers give you good insight into the .NET side of things, but not into the RDBMS side of the application. As this article is about O/R mapper powered applications, we're also looking at databases, and the software making it possible to consume the database in your application: the O/R mapper. To understand which parts of the O/R mapper and database contribute how much to the total time taken for task T, we need different tools. There are two kinds of tools focusing on O/R mapper and database performance profiling: O/R mapper profilers and RDBMS profilers.

For O/R mapper profilers, you can look at LLBLGen Prof by Hibernating Rhinos or the Linq to Sql/LLBLGen Pro profiler by Huagati. Hibernating Rhinos also has profilers for other O/R mappers like NHibernate (NHProf) and Entity Framework (EFProf), which work the same way as LLBLGen Prof. For RDBMS profilers, check whether the RDBMS vendor offers one. For example, for SQL Server the profiler is shipped with SQL Server, and for Oracle it's built into the RDBMS; there are also third-party tools. Which tool you're using isn't really important; what's important is that you get insight into which queries are executed during the task / area we're currently focused on and how long they took. Here the O/R mapper profilers have an advantage, as they collect the time it took to execute the query from the application's perspective, so they also collect the time it took to transport data across the network. This is important because a query which returns a massive resultset, or a resultset with large blob/clob/ntext/image fields, takes more time to get transported across the network than a small resultset, and a database profiler doesn't take this into account most of the time.

Another tool to use in this case, which is more low-level and not supported by all O/R mappers (though LLBLGen Pro and NHibernate do support it), is tracing: most O/R mappers offer some form of tracing or logging system which you can use to collect the SQL generated and executed, and often also other activity behind the scenes. While tracing can produce a tremendous amount of data in some cases, it also gives insight into what's going on.

Interpret

After we've completed the analysis step, it's time to look at the data we've collected. We've done code reviews to see whether we've done anything stupid, which parts actually participate, and whether the proper algorithms have been implemented. We've done .NET profiling to see which parts are choke points and how much time they contribute to the total time taken to complete the task we're investigating. We've performed O/R mapper profiling and RDBMS profiling to see which queries were executed during the task, how many queries were generated and executed, and how long they took to complete, including network transportation. All this data reveals two things: which parts are big contributors to the total time taken and which parts are irrelevant. Both aspects are very important.
The parts which are irrelevant (i.e. which don't contribute significantly to the total time taken) can be ignored from now on; we won't look at them. The parts which contribute a lot to the total time taken are important to look at. We now first have to look at the .NET profiler results, to see whether the time taken is consumed in our own code, in .NET framework code, in the O/R mapper itself or somewhere else. For example, if most of the time is consumed by DbCommand.ExecuteReader, the time it took to complete the task depends on the time it takes to fetch the data from the database. If there was just 1 query executed, according to tracing or the O/R mapper / RDBMS profilers, check whether that query is optimal, uses indexes or has to deal with a lot of data.

Interpret means that you follow the path from beginning to end through the data collected and determine where, along the path, the most time is contributed. It also means that you have to check whether this was expected or is totally unexpected. My previous example of the 10-row resultset of a query which groups millions of rows will likely reveal that a long time is spent inside the database and almost no time is spent in the .NET code, meaning the RDBMS part contributes the most to the total time taken; the rest is, compared to that time, irrelevant. Considering the vastness of the source data set, it's expected this will take some time. However, does it need tweaking? Perhaps all possible tweaks are already in place. In the interpret step you then have to decide whether further action in this area is necessary or not, based on what the analysis results show: if the analysis results were unexpected and there is room for improvement in the area which contributes the most to the total time taken, action should be taken. If not, you can only accept the situation and move on. In all cases, document your decision together with the analysis you've done. If you decide that the perceived performance problem is actually expected due to the nature of the task performed, it's essential that in the future, when someone else looks at the application and starts asking questions, you can answer them properly, and new analysis is only necessary if the situation has changed.

Fix

After interpreting the analysis results, you've concluded that some areas need adjustment. This is the fix step: you're actively correcting the performance problem with proper action targeted at the real cause. In many cases related to O/R mapper powered applications it means you'll use different features of the O/R mapper to achieve the same goal, or apply optimizations at the RDBMS level. It could also mean you apply caching inside your application (compromising memory consumption for performance) to avoid unnecessarily re-querying data and re-consuming the results. After applying a change, it's key that you redo the analysis and interpretation steps: compare the results and expectations with what you had before, to see whether your actions had any effect or whether they moved the problem to a different part of the application. Don't fall into the trap of doing a partial analysis: do the full analysis again, .NET profiling and O/R mapper / RDBMS profiling. It might very well be that the changes you've made make one part faster but another part significantly slower, in such a way that the overall problem hasn't changed at all.
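As an illustration of the caching compromise mentioned above, here is a minimal sketch using MemoryCache from System.Runtime.Caching, trading memory consumption for fewer database round trips. The cache key format and the fetch delegate are assumptions for the example, not part of any specific O/R mapper API:

    using System;
    using System.Collections.Generic;
    using System.Runtime.Caching;

    public static class OrderCache
    {
        // Returns a customer's orders, re-using a cached copy for 5 minutes
        // instead of re-querying the database on every call.
        public static IList<string> GetCustomerOrders(int customerId, Func<int, IList<string>> fetchFromDb)
        {
            var cache = MemoryCache.Default;
            string key = "orders:" + customerId;

            var cached = cache.Get(key) as IList<string>;
            if (cached != null)
            {
                return cached;
            }

            IList<string> orders = fetchFromDb(customerId);
            cache.Set(key, orders, DateTimeOffset.UtcNow.AddMinutes(5));
            return orders;
        }
    }

After adding a cache like this, redo the full analysis as described above: the cache consumes memory and can hide, rather than fix, the underlying query problem.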
Performance tuning is dealing with compromises and making choices: to use one feature over another, to accept a higher memory footprint, to leave the strict-OO path and execute queries directly on the RDBMS. These are choices and compromises which will cross your path if you want to fix performance problems with respect to O/R mappers or data access and databases in general. In most cases it's not a big issue: the alternatives are often good choices too and the compromises aren't that hard to deal with. What is important is that you document why you made a choice or a compromise: which analysis data and which interpretation led you to the choice made. This is key for good maintainability in the years to come.

Most common performance problems with O/R mappers

Below is an incomplete list of common performance problems related to data-access / O/R mapper / RDBMS code. It will help you with fixing the hotspots you found in the interpretation step.

- SELECT N+1 (lazy-loading specific): performance bottlenecks triggered by lazy loading. Consider a list of Orders bound to a grid. You have a field mapped onto a related field in Order, Customer.CompanyName. Showing this column in the grid will make the grid fetch (indirectly) the Customer row for each Order row. This means you'll get for the single list not 1 query (for the orders) but 1 + (the number of orders shown) queries. To solve this, use eager loading with a prefetch path to fetch the customers together with the orders (a minimal sketch follows after this list). SELECT N+1 is easy to spot with an O/R mapper profiler or RDBMS profiler: if you see a lot of identical queries executed at once, you have this problem.

- Prefetch paths using many path nodes, sorting or limiting: an eager-loading problem. Prefetch paths can help with performance, but as 1 query is executed per node, the amount of data fetched in a child node can be bigger than you think. Also consider that the data in every node is merged on the client into the parent. This is fast, but it can still take some time if you fetch massive amounts of entities. If you keep fetches small, you can use tuning parameters like the ParameterizedPrefetchPathThreshold setting to get more optimal queries.

- Deep inheritance hierarchies of type Target Per Entity/Type: if you use inheritance of type Target per Entity/Type (each type in the inheritance hierarchy is mapped onto its own table/view), fetches will join subtype and supertype tables in many cases, which can lead to a lot of performance problems if the hierarchy has many types. If you run into this, keep inheritance to a minimum if possible, or switch to a hierarchy of type Target Per Hierarchy, which means all entities in the inheritance hierarchy are mapped onto the same table/view. Of course this has its own set of drawbacks, but it's a compromise you might want to take.

- Fetching massive amounts of data by fetching large lists of entities: LLBLGen Pro supports paging (and limiting the number of rows returned), which is often key to processing large sets of data. Use paging on the RDBMS if possible (so a query is executed which returns only the rows in the page requested). When using paging in a web application, be sure to switch on server-side paging on the datasource control used. In this case, paging on the grid alone is not enough: that can lead to fetching a lot of data which is then loaded into the grid and paged there.
  Note that analyzing queries for paging could lead to the false assumption that paging doesn't occur, e.g. when the query contains a field of type ntext/image/clob/blob and DISTINCT can't be applied while it should have been (e.g. due to a join): the datareader will do DISTINCT filtering on the client. This is a little slower, but it still performs paging on the datareader, so it won't fetch all rows even if the query suggests it does.

- Fetching massive amounts of data because blob/clob/ntext/image fields aren't excluded: LLBLGen Pro supports field exclusion for queries. You can exclude fields (also in prefetch paths) per query to avoid fetching all fields of an entity, e.g. when you don't need them for the logic consuming the resultset. Excluding fields can greatly reduce the amount of time spent on data transport across the network. Use this optimization if you see a big difference between the query execution time on the RDBMS and the time reported by the .NET profiler for the ExecuteReader method call.

- Doing client-side aggregates/scalar calculations by consuming a lot of data: if possible, try to formulate a scalar query or group-by query using the projection system or the GetScalar functionality of LLBLGen Pro, so the data consumption happens on the RDBMS server. It's far more efficient to process data on the RDBMS server than to first load it all into memory and then traverse the data in memory to calculate a value.

- Using .ToList() constructs inside Linq queries: it might be that you use .ToList() somewhere in a Linq query, which makes the query run partially in memory. Example: var q = from c in metaData.Customers.ToList() where c.Country=="Norway" select c; This will actually fetch all customers into memory and filter them in memory, as the Linq query is defined on an IEnumerable<T> and not on the IQueryable<T> (a corrected version follows after this list). Linq is nice, but it can often be a bit unclear where some parts of a Linq query might run.

- Fetching all entities to delete into memory first: to delete a set of entities it's rather inefficient to first fetch them all into memory and then delete them one by one. It's more efficient to execute a DELETE FROM ... WHERE query on the database directly to delete the entities in one go. LLBLGen Pro supports this feature, and so do some other O/R mappers. It's not always possible to do this operation in the context of an O/R mapper however: if an O/R mapper relies on a cache, these kinds of operations are likely not supported because they make it impossible to track whether an entity is actually removed from the DB and thus can be removed from the cache.

- Fetching all entities to update with an expression into memory first: similar to the previous point, it is more efficient to update a set of entities directly with a single UPDATE query using an expression instead of fetching the entities into memory first, updating them in a loop and saving them afterwards. It might, however, be a compromise you don't want to make, as it works around the idea of having an object graph in memory which is manipulated, and instead makes the code fully aware there's an RDBMS somewhere.
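To make two of the fixes above concrete, here are minimal sketches. The first assumes LLBLGen Pro's adapter API with generated OrderEntity/CustomerEntity classes; the exact type and member names (PrefetchPath2, EntityType, PrefetchPathCustomer, DataAccessAdapter, FetchEntityCollection) depend on the runtime version and the generated code, so treat them as illustrative rather than authoritative.

    // Eager loading with a prefetch path: orders and their customers are fetched
    // in 2 queries instead of 1 + N.
    var orders = new EntityCollection<OrderEntity>();
    var path = new PrefetchPath2((int)EntityType.OrderEntity);
    path.Add(OrderEntity.PrefetchPathCustomer);

    using (var adapter = new DataAccessAdapter())
    {
        adapter.FetchEntityCollection(orders, null, path);
    }

The second is the corrected form of the .ToList() example: keep the query on IQueryable<T> so the filter is translated to SQL and executed inside the database; only when the result is enumerated does a single filtered SELECT run.

    var q = from c in metaData.Customers
            where c.Country == "Norway"
            select c;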
Conclusion

Performance tuning is almost always about compromises and making choices. It's also about knowing where to look and how the systems in play behave and should behave. The four steps I provided should help you stay focused on the real problem and lead you towards the solution. Knowing how to optimally use the systems participating in your own code (.NET framework, O/R mapper, RDBMS, network/services) is key for success, as is knowing what's going on inside the application you built.

I hope you'll find this guide useful in tracking down performance problems and dealing with them effectively.

    Read the article

  • Permanently redirect your asp.net pages in ASP.Net 4.0

    - by nikolaosk
    Hello all, In this post, I would like to talk about a new method of the Response object that comes with ASP.Net 4.0. The name of the method is RedirectPermanent. Let's talk a bit about 301 redirection and permanent redirection. A 301 redirect is the most efficient and search-engine-friendly method for webpage redirection. Let's imagine a very common scenario: we have redesigned the site and moved folders for some pages that have high search engine rankings. We do not want to...(read more)
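    For illustration, a minimal sketch of the method in a page's code-behind; the target path is a hypothetical example:

        protected void Page_Load(object sender, EventArgs e)
        {
            // Sends an HTTP 301 (Moved Permanently) response pointing to the new
            // location, instead of the 302 issued by Response.Redirect.
            Response.RedirectPermanent("~/products/new-catalog.aspx");
        }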

    Read the article

  • Performance considerations for common SQL queries

    - by Jim Giercyk
    Originally posted on: http://geekswithblogs.net/NibblesAndBits/archive/2013/10/16/performance-considerations-for-common-sql-queries.aspx

    SQL offers many different methods to produce the same results.  There is a never-ending debate between SQL developers as to the “best way” or the “most efficient way” to render a result set.  Sometimes these disputes even come to blows….well, I am a lover, not a fighter, so I decided to collect some data that will prove which way is the best and most efficient.  For the queries below, I downloaded the test database from SQLSkills:  http://www.sqlskills.com/sql-server-resources/sql-server-demos/.  There isn’t a lot of data, but enough to prove my point: dbo.member has 10,000 records, and dbo.payment has 15,554.  Our result set contains 6,706 records.

    The following queries produce an identical result set; the result set contains aggregate payment information for each member who has made more than 1 payment from the dbo.payment table, and the first and last name of the member from the dbo.member table.

    /*************/
    /* Sub Query */
    /*************/
    SELECT  a.[Member Number] ,
            m.lastname ,
            m.firstname ,
            a.[Number Of Payments] ,
            a.[Average Payment] ,
            a.[Total Paid]
    FROM    ( SELECT  member_no 'Member Number' ,
                      AVG(payment_amt) 'Average Payment' ,
                      SUM(payment_amt) 'Total Paid' ,
                      COUNT(Payment_No) 'Number Of Payments'
              FROM    dbo.payment
              GROUP BY member_no
              HAVING  COUNT(Payment_No) > 1
            ) a
            JOIN dbo.member m ON a.[Member Number] = m.member_no

    /***************/
    /* Cross Apply */
    /***************/
    SELECT  ca.[Member Number] ,
            m.lastname ,
            m.firstname ,
            ca.[Number Of Payments] ,
            ca.[Average Payment] ,
            ca.[Total Paid]
    FROM    dbo.member m
            CROSS APPLY ( SELECT  member_no 'Member Number' ,
                                  AVG(payment_amt) 'Average Payment' ,
                                  SUM(payment_amt) 'Total Paid' ,
                                  COUNT(Payment_No) 'Number Of Payments'
                          FROM    dbo.payment
                          WHERE   member_no = m.member_no
                          GROUP BY member_no
                          HAVING  COUNT(Payment_No) > 1
                        ) ca

    /********/
    /* CTEs */
    /********/
    ; WITH Payments
        AS ( SELECT  member_no 'Member Number' ,
                     AVG(payment_amt) 'Average Payment' ,
                     SUM(payment_amt) 'Total Paid' ,
                     COUNT(Payment_No) 'Number Of Payments'
             FROM    dbo.payment
             GROUP BY member_no
             HAVING  COUNT(Payment_No) > 1
           ),
      MemberInfo
        AS ( SELECT  p.[Member Number] ,
                     m.lastname ,
                     m.firstname ,
                     p.[Number Of Payments] ,
                     p.[Average Payment] ,
                     p.[Total Paid]
             FROM    dbo.member m
                     JOIN Payments p ON m.member_no = p.[Member Number]
           )
    SELECT  *
    FROM    MemberInfo

    /*************************/
    /* SELECT with Grouping  */
    /*************************/
    SELECT  p.member_no 'Member Number' ,
            m.lastname ,
            m.firstname ,
            COUNT(Payment_No) 'Number Of Payments' ,
            AVG(payment_amt) 'Average Payment' ,
            SUM(payment_amt) 'Total Paid'
    FROM    dbo.payment p
            JOIN dbo.member m ON m.member_no = p.member_no
    GROUP BY p.member_no ,
            m.lastname ,
            m.firstname
    HAVING  COUNT(Payment_No) > 1

    We can see what is going on in SQL’s brain by looking at the execution plan.  The Execution Plan will demonstrate which steps and in what order SQL executes those steps, and what percentage of batch time each query takes.  SO….if I execute all 4 of these queries in a single batch, I will get an idea of the relative time SQL takes to execute them, and how it renders the Execution Plan.  We can settle this once and for all.  Here is what SQL did with these queries:

    Not only did the queries take the same amount of time to execute, SQL generated the same Execution Plan for each of them.  Everybody is right…..I guess we can all finally go to lunch together!  But wait a second, I may not be a fighter, but I AM an instigator.  Let’s see how a table variable stacks up.  Here is the code I executed:

    /********************/
    /*  Table Variable  */
    /********************/
    DECLARE @AggregateTable TABLE
        (
          member_no INT ,
          AveragePayment MONEY ,
          TotalPaid MONEY ,
          NumberOfPayments MONEY
        )

    INSERT  @AggregateTable
            SELECT  member_no 'Member Number' ,
                    AVG(payment_amt) 'Average Payment' ,
                    SUM(payment_amt) 'Total Paid' ,
                    COUNT(Payment_No) 'Number Of Payments'
            FROM    dbo.payment
            GROUP BY member_no
            HAVING  COUNT(Payment_No) > 1

    SELECT  at.member_no 'Member Number' ,
            m.lastname ,
            m.firstname ,
            at.NumberOfPayments 'Number Of Payments' ,
            at.AveragePayment 'Average Payment' ,
            at.TotalPaid 'Total Paid'
    FROM    @AggregateTable at
            JOIN dbo.member m ON m.member_no = at.member_no

    In the interest of keeping things in groupings of 4, I removed the last query from the previous batch and added the table variable query.  Here’s what I got:

    Since we first insert into the table variable, then we read from it, the Execution Plan renders 2 steps.  BUT, the combination of the 2 steps is only 22% of the batch.  It is actually faster than the other methods even though it is treated as 2 separate queries in the Execution Plan.  The argument I often hear against Table Variables is that SQL only estimates 1 row for the table size in the Execution Plan.  While this is true, the estimate does not come into play until you read from the table variable.  In this case, the table variable had 6,706 rows, but it still outperformed the other queries.  People argue that table variables should only be used for hash or lookup tables.  The fact is, you have control of what you put INTO the variable, so as long as you keep it within reason, these results suggest that a table variable is a viable alternative to sub-queries.

    If anyone does volume testing on this theory, I would be interested in the results.  My suspicion is that there is a breaking point where efficiency goes down the tubes immediately, and it would be interesting to see where the threshold is.

    Coding SQL is a matter of style.  If you’ve been around since they introduced DB2, you were probably taught a little differently than a recent computer science graduate.  If you have a company standard, I strongly recommend you follow it.  If you do not have a standard, generally speaking, there is no right or wrong answer when talking about the efficiency of these types of queries, and certainly no hard-and-fast rule.
Volume and infrastructure will dictate a lot when it comes to performance, so your results may vary in your environment.  Download the database and try it!

    Read the article

  • Windows Workflow Foundation in .NET4

    Windows Workflow Foundation (WF4) in .NET 4 is designed to make it easier for new developers to learn, addresses a wider range of customer scenarios, and is more efficient.  WF is a programming model for composing application logic and coordinating execution, allowing developers to abstract complicated code while leveraging a set of runtime services.  Activities are the building blocks that are composed together to build workflows.  The runtime provides the ability to save the state...
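    As a minimal illustration of composing activities into a workflow (a sketch using the built-in Sequence, WriteLine and Delay activities from System.Activities):

        using System;
        using System.Activities;
        using System.Activities.Statements;

        class Program
        {
            static void Main()
            {
                // Application logic composed from activities, then handed to the WF runtime.
                Activity workflow = new Sequence
                {
                    Activities =
                    {
                        new WriteLine { Text = "Starting work" },
                        new Delay { Duration = TimeSpan.FromSeconds(1) },
                        new WriteLine { Text = "Done" }
                    }
                };

                WorkflowInvoker.Invoke(workflow);
            }
        }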

    Read the article

  • MySQL 5.5

    - by trond-arne.undheim
    New performance and scalability enhancements, continued investment in MySQL (see press release). "The latest release of MySQL further exemplifies Oracle's commitment to the MySQL community and investment in delivering rapid innovation and enhancements to the MySQL platform," said Edward Screven, Oracle's Chief Corporate Architect. MySQL is integral to Oracle's complete, open and integrated strategy. The MySQL 5.5 Community Edition, which is licensed under the GNU General Public License (GPL) and is available for free download, includes InnoDB as the default storage engine. We cannot stress the importance of using open standards enough, whether in the context of open source or non-open source software. For more on Oracle's open source offering, see Oracle.com/opensource or oss.oracle.com (for developers).

    Read the article

< Previous Page | 147 148 149 150 151 152 153 154 155 156 157 158  | Next Page >