Search Results

Search found 18661 results on 747 pages for 'linq to mysql'.


  • LAMP Server without single failure point + Global Server Load Balancing?

    - by José Nobile
    I want to implement a LAMP server (Linux, Apache, MySQL, PHP) without a single point of failure and with Global Server Load Balancing. I have a server in Cali, Colombia, and another server will be installed in Melbourne, Australia. Users in the Americas should use the Cali server, while users in Europe, Asia, Africa or Oceania should use the Melbourne server. If either server fails (or its load is excessively high), the remaining server must answer all requests. The MySQL data, the PHP files, and all configuration on both servers must stay in sync. I have read about Google's DNS servers 8.8.8.8 and 8.8.4.4 and about anycast, and also about MySQL semisynchronous replication and MySQL Cluster, but what about other things, such as crontabs and the configuration on each server? The solution can't depend on APNIC or BGP, only open-source software running on Linux.
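
    One concrete piece of this puzzle: for either server to keep answering when the other is down, the MySQL layer is often set up as master-master (active/active) replication. A minimal sketch of the relevant my.cnf settings, with illustrative values (semisynchronous replication, which the question mentions, is layered on top via the rpl_semi_sync plugins):

        # /etc/mysql/my.cnf on the Cali server (values illustrative)
        [mysqld]
        server-id                = 1
        log_bin                  = mysql-bin
        auto_increment_increment = 2    # two writable masters
        auto_increment_offset    = 1    # Melbourne uses server-id = 2, offset = 2

    Each server is then pointed at the other with CHANGE MASTER TO ... followed by START SLAVE. Files outside MySQL (PHP code, crontabs, configuration) are usually synchronized separately, e.g. with rsync or csync2 on a schedule.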

    Read the article

  • How can I transfer a Zabbix item between hosts and keep its statistics?

    - by Stepchik
    There are two servers (srv1 and srv2), each with a MySQL server installed. The MySQL instance on srv1 contains a database (db1). The Zabbix server collects statistics through a configured agent user parameter (https://www.zabbix.com/documentation/2.0/manual/config/items/userparameters). Yesterday I copied the database db1 from MySQL on srv1 to MySQL on srv2. I can clone the Zabbix server item (https://www.zabbix.com/documentation/2.0/manual/config/items) to srv2, but that loses all the srv1 db1 statistics. Can you advise how to keep them?

    Read the article

  • Building PHP For MacOS

    - by Eray
    I was using XAMPP and decided to uninstall it and use macOS's built-in Apache and PHP modules. But while uninstalling XAMPP I accidentally deleted /usr/bin/php and other PHP CLI files. I decided to install the newest version of PHP (5.5.12) instead of rebuilding the current version (5.4.24). I downloaded and unzipped it, then executed these commands as mentioned in this guide:

        ./configure '--with-apxs2=/usr/sbin/apxs' '--enable-cli' '--with-config-file-path=/etc' '--with-zlib=/usr' '--enable-bcmath' '--with-bz2=/usr' '--enable-calendar' '--disable-cgi' '--with-curl=/usr' '--enable-dba' '--enable-ndbm=/usr' '--enable-exif' '--enable-fpm' '--enable-ftp' '--with-gd' '--enable-gd-native-ttf' '--enable-mbregex' '--with-mysql=mysqlnd' '--with-mysqli=mysqlnd' '--with-pear' '--with-pdo-mysql=mysqlnd' '--with-mysql-sock=/var/mysql/mysql.sock' '--with-tidy' '--enable-wddx' '--with-xmlrpc' '--enable-zip'
        make
        make install

    When I check phpinfo(), it still reports version 5.4.24. This line from my httpd.conf:

        LoadModule php5_module libexec/apache2/libphp5.so

    points at /usr/libexec/apache2/libphp5.so, which comes from the old version, and I couldn't find a libphp5.so for the new version; there is no libphp5.so file inside the modules dir. How can I use the new PHP build with Apache?

    UPDATE: Results of the php -v command:

        PHP 5.5.12 (cli) (built: May 27 2014 05:17:21)
        Copyright (c) 1997-2014 The PHP Group
        Zend Engine v2.5.0, Copyright (c) 1998-2014 Zend Technologies
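
    Since ./configure was run with --with-apxs2, make install should have produced and installed a new libphp5.so; a quick way to find out where it actually went and what Apache loads (standard commands, paths are the usual macOS defaults, so treat them as assumptions):

        # find every libphp5.so on the system
        sudo find / -name libphp5.so 2>/dev/null

        # see which module file httpd is configured to load
        grep -n php5_module /etc/apache2/httpd.conf

        # after pointing LoadModule at the newly built .so, restart Apache
        sudo apachectl restart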

    Read the article

  • Add directory to $PATH if it's not already there

    - by Doug Harris
    Has anybody written a bash function to add a directory to $PATH only if it's not already there? I typically add to PATH using something like: export PATH=/usr/local/mysql/bin:$PATH If I construct my PATH in .bash_profile, then it's not read unless the session I'm in is a login session -- which isn't always true. If I construct my PATH in .bashrc, then it runs with each subshell. So if I launch a Terminal window and then run screen and then run a shell script, I get: $ echo $PATH /usr/local/mysql/bin:/usr/local/mysql/bin:/usr/local/mysql/bin:.... I'm going to try building a bash function called add_to_path() which only adds the directory if it's not there. But, if anybody has already written (or found) such a thing, I won't spend the time on it.
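
    For what it's worth, a minimal version of the add_to_path() function the question proposes might look like this (the function name comes from the question itself; the implementation is just one common approach):

        # Prepend a directory to PATH only if PATH doesn't already contain it
        add_to_path() {
            case ":$PATH:" in
                *":$1:"*) ;;                    # already present, do nothing
                *) export PATH="$1:$PATH" ;;    # otherwise prepend it
            esac
        }

        # usage, e.g. in .bashrc
        add_to_path /usr/local/mysql/bin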

    Read the article

  • Cannot destroy ZFS snapshot: dataset already exists

    - by Morven
    I have a server (T5220, though I doubt it matters) running Solaris 10 8/07, and I have a ZFS pool, "mysql", on internal disk. Within it I have a filesystem "mysql/data/4.1.12", which I snapshot hourly with a script from cron. I have one snapshot, created as one of those hourly snaps, that cannot be destroyed. I have renamed it out of sequence to be "mysql/data/4.1.12@wibble" so that my script will not try (and fail) to destroy it, but it was originally within the sequence, though I doubt that matters. It renames successfully. The snapshot can be successfully navigated and read through the .zfs/snapshots directory. It has no clones based on it. Trying to destroy it does this:

        (265) root@web-mysql4:/# zfs destroy mysql/data/4.1.12@wibble
        cannot destroy 'mysql/data/4.1.12@wibble': dataset already exists
        (266) root@web-mysql4:/#

    which is apparently nonsensical: of course it already exists, that's the point! Has anyone seen anything like this before? Web searches show nothing obviously similar. I can provide the list of installed patches if necessary.
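
    One thing worth re-checking, since this error is typically raised when something still depends on the snapshot: a clone records the snapshot as its origin, so scanning the origin property of every dataset in the pool can surface a clone that isn't obvious (the commands are standard ZFS; the clone name below is hypothetical):

        # any dataset whose origin is the stuck snapshot is a clone blocking it
        zfs list -r -t filesystem,volume -o name,origin mysql

        # if one turns up, promote or destroy it, then retry
        zfs promote mysql/data/someclone
        zfs destroy mysql/data/4.1.12@wibble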

    Read the article

  • Linux: how to restore config file using apt-get/aptitude?

    - by o_O Tync
    I've accidentally lost my config file /etc/mysql/my.cnf and want to restore it. The file belongs to the package mysql-common, which is needed for some vital functionality, so I can't just purge && install it: the dependencies would also be uninstalled (or, if I ignore them temporarily, they won't be working). Is there a way to restore the config file from a package without un-ar-ing the package file? dpkg-reconfigure mysql-common did not restore it.
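
    For what it's worth, on Debian/Ubuntu a reinstall can be told to recreate missing conffiles without touching the package's dependents; the commonly cited invocation is:

        # re-unpack mysql-common and restore any missing config files;
        # --reinstall keeps the package installed, so dependencies are untouched
        sudo apt-get -o Dpkg::Options::="--force-confmiss" --reinstall install mysql-common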

    Read the article

  • How to limit network usage for a specific application running in Linux?

    - by B14D3
    I'm looking for something like nice for CPU, but for network usage: something that will limit an application's network consumption to a configured level. I have problems with xapian-replicate-server, which is consuming 80% of my network. This causes MySQL connection problems (the MySQL server is working on this machine too). I can't move xapian or MySQL to another machine, so I need to limit xapian's network usage to a decent level. Is there any tool that will help me do this?
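
    Two tools often used for this, sketched below: trickle (userspace, per-process, only works for dynamically linked programs) and tc (kernel traffic shaping). The rates and the port number are illustrative assumptions:

        # Option 1: trickle - cap this process at 500 KB/s down, 200 KB/s up
        trickle -d 500 -u 200 xapian-replicate-server

        # Option 2: tc - shape traffic leaving eth0 from xapian's port (assumed 7010)
        tc qdisc add dev eth0 root handle 1: htb default 20
        tc class add dev eth0 parent 1: classid 1:10 htb rate 10mbit
        tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
           match ip sport 7010 0xffff flowid 1:10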

    Read the article

  • Character encoding problem

    - by out_sider
    I have a file named index.php which fetches a simple username from a MySQL server. The MySQL server is running on CentOS, and I have two different systems running Apache as web servers: one is my own Windows PC using a WAMP solution which uses the MySQL server referred to before, and the other is the CentOS server itself. I use this so I can develop on my laptop and run the final version on the CentOS box. The problem is this. Accessing the CentOS box (hxxp://centos) I get:

        out_sider 1lu?s 2oi

    Using WAMP on Windows (hxxp://localhost) I get:

        out_sider 1luís 2oi

    The MySQL database is configured correctly, seeing that both use the same one, and I used an SVN repository to move files from Windows to CentOS, so the file is the same. Does anyone have any suggestions? Thanks in advance
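
    The usual suspect when the same database shows 'í' on one client and '?' on another is the connection character set, which each PHP/Apache host negotiates separately. A hedged sketch of forcing it explicitly (assuming mysqli, a UTF-8 database, and illustrative credentials and table names):

        <?php
        // Force the connection charset so both hosts decode 'í' the same way
        $db = new mysqli('dbhost', 'user', 'pass', 'mydb'); // illustrative credentials
        $db->set_charset('utf8');                           // same effect as SET NAMES utf8

        $result = $db->query('SELECT username FROM users'); // hypothetical table
        while ($row = $result->fetch_assoc()) {
            echo $row['username'], "\n";
        }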

    Read the article

  • BizTalk 2009 - The Community ODBC Adapter: Installation

    - by Stuart Brierley
    I have previously detailed the installation of MySQL, the configuration of MySQL and the installation of the ODBC Data Connector for MySQL. The reason I needed to install and configure these was to provide a test environment for a BizTalk Server 2009 solution I am working on, where BizTalk will be querying and populating a MySQL database. To do this I then needed to install and add the Community ODBC adapter from TwoConnect: "The Community BizTalk Adapter for ODBC is based on the code that was first made available on GotDotNet a few years ago. TwoConnect has refreshed this code, added an installer, and tested it against the latest BizTalk editions. We are releasing the updates back to the BizTalk developer, user and partner community as part of our ongoing community initiatives. This is the second adapter package that TwoConnect makes available to the community, after the very successful release of the BizTalk WSE 3 adapter a couple of years ago. This adapter is useful in all ODBC based integration scenarios. The following are the new features added and fixes made to the old code base on GotDotNet." Detailed below are the installation instructions for this adapter. Downloading and running the installer will load up the splash screen. Next you need to select the installation location for the adapter. You then need to confirm the installation, following which you will be shown the installation progress. Assuming all has gone well, you should see the installation complete screen. Once the installation has completed successfully you will then need to add the adapter to your BizTalk Server. To do this, open the BizTalk Administration console, expand Platform Settings, right-click on Adapters, then select New\Adapter. You should then be able to select the ODBC adapter and choose the display name for the adapter. This adapter will then be shown in the BizTalk Administration console. Next I will be looking at using the ODBC Adapter when generating schemas, creating a receive port, and creating a send port.

    Read the article

  • New database profiling support in ANTS Performance Profiler

    - by Ben Emmett
    In May last year, the ANTS Performance Profiler team added the ability to profile database requests your application makes to SQL Server or Oracle. The really cool thing is that you’re shown those requests in the application’s call tree, so you can see what .NET code caused those queries to run. It’s particularly helpful if you’re using an ORM which automagically generates and runs queries for you, but which doesn’t necessarily do it in the most efficient way possible. Now, by popular demand, we’ve added support for profiling MySQL (or MariaDB) and PostgreSQL, so you can see queries run against those databases too. Some of you have also said that you’re using the Devart dotConnect data providers instead of the native .NET ones, so we’ve added support for those drivers too. Hope it helps! For the record, here’s a list of supported connectors (the MySQL/MariaDB and PostgreSQL entries are the new ones):

    - SQL Server: .NET Framework Data Provider; Devart dotConnect for SQL Server
    - Oracle: .NET Framework Data Provider; Oracle Data Provider for .NET; Devart dotConnect for Oracle
    - MySQL / MariaDB: MySQL Connector/Net; Devart dotConnect for MySQL
    - PostgreSQL: Npgsql .NET Data Provider for PostgreSQL; Devart dotConnect for PostgreSQL
    - SQL Server Compact Edition: .NET Framework Data Provider for SQL Server Compact Edition; Devart dotConnect for SQL Server Pro

    Have we missed a connector or database which you’d find useful? Tell us about it in the comments or by emailing [email protected]. Ben

    Read the article

  • Today's Links (6/28/2011)

    - by Bob Rhubart
    - Connecting People, Processes, and Content: An Online Event | Brian Dirking. Dirking shares information on an Oracle Online Forum coming up on July 19.
    - Social Relationships don't count until they count | Steve Jones. "It's actually the interactions that matter to back up the social experience rather than the existence of a social link," says Jones.
    - ORACLENERD: KScope 11: Cary Millsap. Commenting on Cary Millsap's KScope presentation on Agile, Oracle ACE Chet Justice says, "I fight with methodology on a daily basis, mostly resulting in me hitting my head against the closest wall."
    - The Sage Kings of Antiquity | Richard Veryard. "Given that the empirical evidence for enterprise architecture is fairly weak, anecdotal and inconclusive, we are still more dependent than we might like on the authority of experts," says Veryard, "whether this be semi-anonymous committees (such as TOGAF) or famous consultants (such as Zachman)."
    - Oracle Business Intelligence Blog: New BI Mobile Demos. "These are short videos that showcase some of the capabilities in our mobile app," says Abhinav Agarwal. "One focuses on the Oracle BI platform, while the other showcases what is possible with the mobile app accessing Oracle Business Intelligence Applications, like Financial Analytics."
    - MySQL HA Events in the UK, Germany & France | Oracle's MySQL Blog. Oracle is running MySQL High Availability breakfast seminars in London (June 29), Düsseldorf (July 13) and Paris (September 7). "During these free seminars, we will review the various options and technologies at your disposal to implement highly available and highly scalable MySQL infrastructures, as well as best practices in terms of architectures," says Bertrand Matthelié.
    - VENNSTER BLOG: User Experience in Fusion apps. "When I heard about the Fusion Applications User Experience efforts, I was skeptical," says Oracle ACE Director Lonneke Dikmans of Vennster. "My view of Oracle and User Experience has changed drastically today."
    - Power Your Cloud with Oracle Fusion Middleware. Running in over 50 cities across the globe, this event is aimed at Architects, IT Managers, and technical leaders like you who are using Fusion Middleware or trying to learn more about middleware in the context of Cloud computing.

    Read the article

  • How to find and fix performance problems in ORM powered applications

    - by FransBouma
    Once in a while we get requests about how to fix performance problems with our framework. As it comes down to following the same steps and looking into the same things every single time, I decided to write a blogpost about it instead, so more people can learn from this and solve performance problems in their O/R mapper powered applications. In some parts it's focused on LLBLGen Pro but it's also usable for other O/R mapping frameworks, as the vast majority of performance problems in O/R mapper powered applications are not specific for a certain O/R mapper framework. Too often, the developer looks at the wrong part of the application, trying to fix what isn't a problem in that part, and getting frustrated that 'things are so slow with <insert your favorite framework X here>'. I'm in the O/R mapper business for a long time now (almost 10 years, full time) and as it's a small world, we O/R mapper developers know almost all tricks to pull off by now: we all know what to do to make task ABC faster and what compromises (because there are almost always compromises) to deal with if we decide to make ABC faster that way. Some O/R mapper frameworks are faster in X, others in Y, but you can be sure the difference is mainly a result of a compromise some developers are willing to deal with and others aren't. That's why the O/R mapper frameworks on the market today are different in many ways, even though they all fetch and save entities from and to a database. I'm not suggesting there's no room for improvement in today's O/R mapper frameworks, there always is, but it's not a matter of 'the slowness of the application is caused by the O/R mapper' anymore. Perhaps query generation can be optimized a bit here, row materialization can be optimized a bit there, but it's mainly coming down to milliseconds. Still worth it if you're a framework developer, but it's not much compared to the time spend inside databases and in user code: if a complete fetch takes 40ms or 50ms (from call to entity object collection), it won't make a difference for your application as that 10ms difference won't be noticed. That's why it's very important to find the real locations of the problems so developers can fix them properly and don't get frustrated because their quest to get a fast, performing application failed. Performance tuning basics and rules Finding and fixing performance problems in any application is a strict procedure with four prescribed steps: isolate, analyze, interpret and fix, in that order. It's key that you don't skip a step nor make assumptions: these steps help you find the reason of a problem which seems to be there, and how to fix it or leave it as-is. Skipping a step, or when you assume things will be bad/slow without doing analysis will lead to the path of premature optimization and won't actually solve your problems, only create new ones. The most important rule of finding and fixing performance problems in software is that you have to understand what 'performance problem' actually means. Most developers will say "when a piece of software / code is slow, you have a performance problem". But is that actually the case? If I write a Linq query which will aggregate, group and sort 5 million rows from several tables to produce a resultset of 10 rows, it might take more than a couple of milliseconds before that resultset is ready to be consumed by other logic. 
If I solely look at the Linq query, the code consuming the resultset of the 10 rows and then look at the time it takes to complete the whole procedure, it will appear to me to be slow: all that time taken to produce and consume 10 rows? But if you look closer, if you analyze and interpret the situation, you'll see it does a tremendous amount of work, and in that light it might even be extremely fast. With every performance problem you encounter, always do realize that what you're trying to solve is perhaps not a technical problem at all, but a perception problem. The second most important rule you have to understand is based on the old saying "Penny wise, Pound Foolish": the part which takes e.g. 5% of the total time T for a given task isn't worth optimizing if you have another part which takes a much larger part of the total time T for that same given task. Optimizing parts which are relatively insignificant for the total time taken is not going to bring you better results overall, even if you totally optimize that part away. This is the core reason why analysis of the complete set of application parts which participate in a given task is key to being successful in solving performance problems: No analysis -> no problem -> no solution. One warning up front: hunting for performance will always include making compromises. Fast software can be made maintainable, but if you want to squeeze as much performance out of your software, you will inevitably be faced with the dilemma of compromising one or more from the group {readability, maintainability, features} for the extra performance you think you'll gain. It's then up to you to decide whether it's worth it. In almost all cases it's not. The reason for this is simple: the vast majority of performance problems can be solved by implementing the proper algorithms, the ones with proven Big O-characteristics so you know the performance you'll get plus you know the algorithm will work. The time taken by the algorithm implementing code is inevitable: you already implemented the best algorithm. You might find some optimizations on the technical level but in general these are minor. Let's look at the four steps to see how they guide us through the quest to find and fix performance problems. Isolate The first thing you need to do is to isolate the areas in your application which are assumed to be slow. For example, if your application is a web application and a given page is taking several seconds or even minutes to load, it's a good candidate to check out. It's important to start with the isolate step because it allows you to focus on a single code path per area with a clear begin and end and ignore the rest. The rest of the steps are taken per identified problematic area. Keep in mind that isolation focuses on tasks in an application, not code snippets. A task is something that's started in your application by either another task or the user, or another program, and has a beginning and an end. You can see a task as a piece of functionality offered by your application.  Analyze Once you've determined the problem areas, you have to perform analysis on the code paths of each area, to see where the performance problems occur and which areas are not the problem. 
This is a multi-layered effort: an application which uses an O/R mapper typically consists of multiple parts: there's likely some kind of interface (web, webservice, windows etc.), a part which controls the interface and business logic, the O/R mapper part and the RDBMS, all connected with either a network or inter-process connections provided by the OS or other means. Each of these parts, including the connectivity plumbing, eat up a part of the total time it takes to complete a task, e.g. load a webpage with all orders of a given customer X. To understand which parts participate in the task / area we're investigating and how much they contribute to the total time taken to complete the task, analysis of each participating task is essential. Start with the code you wrote which starts the task, analyze the code and track the path it follows through your application. What does the code do along the way, verify whether it's correct or not. Analyze whether you have implemented the right algorithms in your code for this particular area. Remember we're looking at one area at a time, which means we're ignoring all other code paths, just the code path of the current problematic area, from begin to end and back. Don't dig in and start optimizing at the code level just yet. We're just analyzing. If your analysis reveals big architectural stupidity, it's perhaps a good idea to rethink the architecture at this point. For the rest, we're analyzing which means we collect data about what could be wrong, for each participating part of the complete application. Reviewing the code you wrote is a good tool to get deeper understanding of what is going on for a given task but ultimately it lacks precision and overview what really happens: humans aren't good code interpreters, computers are. We therefore need to utilize tools to get deeper understanding about which parts contribute how much time to the total task, triggered by which other parts and for example how many times are they called. There are two different kind of tools which are necessary: .NET profilers and O/R mapper / RDBMS profilers. .NET profiling .NET profilers (e.g. dotTrace by JetBrains or Ants by Red Gate software) show exactly which pieces of code are called, how many times they're called, and the time it took to run that piece of code, at the method level and sometimes even at the line level. The .NET profilers are essential tools for understanding whether the time taken to complete a given task / area in your application is consumed by .NET code, where exactly in your code, the path to that code, how many times that code was called by other code and thus reveals where hotspots are located: the areas where a solution can be found. Importantly, they also reveal which areas can be left alone: remember our penny wise pound foolish saying: if a profiler reveals that a group of methods are fast, or don't contribute much to the total time taken for a given task, ignore them. Even if the code in them is perhaps complex and looks like a candidate for optimization: you can work all day on that, it won't matter.  As we're focusing on a single area of the application, it's best to start profiling right before you actually activate the task/area. Most .NET profilers support this by starting the application without starting the profiling procedure just yet. 
You navigate to the particular part which is slow, start profiling in the profiler, in your application you perform the actions which are considered slow, and afterwards you get a snapshot in the profiler. The snapshot contains the data collected by the profiler during the slow action, so most data is produced by code in the area to investigate. This is important, because it allows you to stay focused on a single area. O/R mapper and RDBMS profiling .NET profilers give you a good insight in the .NET side of things, but not in the RDBMS side of the application. As this article is about O/R mapper powered applications, we're also looking at databases, and the software making it possible to consume the database in your application: the O/R mapper. To understand which parts of the O/R mapper and database participate how much to the total time taken for task T, we need different tools. There are two kind of tools focusing on O/R mappers and database performance profiling: O/R mapper profilers and RDBMS profilers. For O/R mapper profilers, you can look at LLBLGen Prof by hibernating rhinos or the Linq to Sql/LLBLGen Pro profiler by Huagati. Hibernating rhinos also have profilers for other O/R mappers like NHibernate (NHProf) and Entity Framework (EFProf) and work the same as LLBLGen Prof. For RDBMS profilers, you have to look whether the RDBMS vendor has a profiler. For example for SQL Server, the profiler is shipped with SQL Server, for Oracle it's build into the RDBMS, however there are also 3rd party tools. Which tool you're using isn't really important, what's important is that you get insight in which queries are executed during the task / area we're currently focused on and how long they took. Here, the O/R mapper profilers have an advantage as they collect the time it took to execute the query from the application's perspective so they also collect the time it took to transport data across the network. This is important because a query which returns a massive resultset or a resultset with large blob/clob/ntext/image fields takes more time to get transported across the network than a small resultset and a database profiler doesn't take this into account most of the time. Another tool to use in this case, which is more low level and not all O/R mappers support it (though LLBLGen Pro and NHibernate as well do) is tracing: most O/R mappers offer some form of tracing or logging system which you can use to collect the SQL generated and executed and often also other activity behind the scenes. While tracing can produce a tremendous amount of data in some cases, it also gives insight in what's going on. Interpret After we've completed the analysis step it's time to look at the data we've collected. We've done code reviews to see whether we've done anything stupid and which parts actually take place and if the proper algorithms have been implemented. We've done .NET profiling to see which parts are choke points and how much time they contribute to the total time taken to complete the task we're investigating. We've performed O/R mapper profiling and RDBMS profiling to see which queries were executed during the task, how many queries were generated and executed and how long they took to complete, including network transportation. All this data reveals two things: which parts are big contributors to the total time taken and which parts are irrelevant. Both aspects are very important. The parts which are irrelevant (i.e. 
don't contribute significantly to the total time taken) can be ignored from now on, we won't look at them. The parts which contribute a lot to the total time taken are important to look at. We now have to first look at the .NET profiler results, to see whether the time taken is consumed in our own code, in .NET framework code, in the O/R mapper itself or somewhere else. For example if most of the time is consumed by DbCommand.ExecuteReader, the time it took to complete the task is depending on the time the data is fetched from the database. If there was just 1 query executed, according to tracing or O/R mapper profilers / RDBMS profilers, check whether that query is optimal, uses indexes or has to deal with a lot of data. Interpret means that you follow the path from begin to end through the data collected and determine where, along the path, the most time is contributed. It also means that you have to check whether this was expected or is totally unexpected. My previous example of the 10 row resultset of a query which groups millions of rows will likely reveal that a long time is spend inside the database and almost no time is spend in the .NET code, meaning the RDBMS part contributes the most to the total time taken, the rest is compared to that time, irrelevant. Considering the vastness of the source data set, it's expected this will take some time. However, does it need tweaking? Perhaps all possible tweaks are already in place. In the interpret step you then have to decide that further action in this area is necessary or not, based on what the analysis results show: if the analysis results were unexpected and in the area where the most time is contributed to the total time taken is room for improvement, action should be taken. If not, you can only accept the situation and move on. In all cases, document your decision together with the analysis you've done. If you decide that the perceived performance problem is actually expected due to the nature of the task performed, it's essential that in the future when someone else looks at the application and starts asking questions you can answer them properly and new analysis is only necessary if situations changed. Fix After interpreting the analysis results you've concluded that some areas need adjustment. This is the fix step: you're actively correcting the performance problem with proper action targeted at the real cause. In many cases related to O/R mapper powered applications it means you'll use different features of the O/R mapper to achieve the same goal, or apply optimizations at the RDBMS level. It could also mean you apply caching inside your application (compromise memory consumption over performance) to avoid unnecessary re-querying data and re-consuming the results. After applying a change, it's key you re-do the analysis and interpretation steps: compare the results and expectations with what you had before, to see whether your actions had any effect or whether it moved the problem to a different part of the application. Don't fall into the trap to do partly analysis: do the full analysis again: .NET profiling and O/R mapper / RDBMS profiling. It might very well be that the changes you've made make one part faster but another part significantly slower, in such a way that the overall problem hasn't changed at all. 
    Performance tuning is dealing with compromises and making choices: to use one feature over the other, to accept a higher memory footprint, to go away from the strict-OO path and execute queries directly against the RDBMS. These are choices and compromises which will cross your path if you want to fix performance problems with respect to O/R mappers or data access and databases in general. In most cases it's not a big issue: the alternatives are often good choices too and the compromises aren't that hard to deal with. What is important is that you document why you made a choice or a compromise: which analysis data and which interpretation led you to the choice made. This is key for good maintainability in the years to come.

    Most common performance problems with O/R mappers

    Below is an incomplete list of common performance problems related to data-access / O/R mapper / RDBMS code. It will help you with fixing the hotspots you found in the interpretation step.

    - SELECT N+1 (lazy-loading specific): performance bottlenecks triggered by lazy loading. Consider a list of Orders bound to a grid. You have a field mapped onto a related field in Order, Customer.CompanyName. Showing this column in the grid will make the grid fetch (indirectly) the Customer row for each row. This means you'll get for the single list not 1 query (for the orders) but 1 + (the number of orders shown) queries. To solve this, use eager loading with a prefetch path to fetch the customers with the orders. SELECT N+1 is easy to spot with an O/R mapper profiler or RDBMS profiler: if you see a lot of identical queries executed at once, you have this problem.

    - Prefetch paths using many path nodes, or sorting, or limiting: an eager-loading problem. Prefetch paths can help with performance, but as 1 query is fetched per node, the amount of data fetched in a child node can be bigger than you think. Also consider that data in every node is merged on the client within the parent. This is fast, but it can still take some time if you fetch massive amounts of entities. If you keep fetches small, you can use tuning parameters like the ParameterizedPrefetchPathThreshold setting to get more optimal queries.

    - Deep inheritance hierarchies of type Target Per Entity/Type: if you use inheritance of type Target per Entity/Type (each type in the inheritance hierarchy is mapped onto its own table/view), fetches will join subtype and supertype tables in many cases, which can lead to a lot of performance problems if the hierarchy has many types. With this problem, keep inheritance to a minimum if possible, or switch to a hierarchy of type Target Per Hierarchy, which means all entities in the inheritance hierarchy are mapped onto the same table/view. Of course this has its own set of drawbacks, but it's a compromise you might want to take.

    - Fetching massive amounts of data by fetching large lists of entities: LLBLGen Pro supports paging (and limiting the number of rows returned), which is often key to processing large sets of data. Use paging on the RDBMS if possible (so a query is executed which returns only the rows in the page requested). When using paging in a web application, be sure that you switch server-side paging on in the datasource control used. In this case, paging on the grid alone is not enough: that can lead to fetching a lot of data which is then loaded into the grid and paged there. Keep in mind that analyzing queries for paging could lead to the false assumption that paging doesn't occur, e.g. when the query contains a field of type ntext/image/clob/blob and DISTINCT can't be applied while it should have been (e.g. due to a join): the datareader will do DISTINCT filtering on the client. This is a little slower, but it does perform paging on the data reader, so it won't fetch all rows even if the query suggests it does.

    - Fetching massive amounts of data because blob/clob/ntext/image fields aren't excluded: LLBLGen Pro supports field exclusion for queries. You can exclude fields (also in prefetch paths) per query to avoid fetching all fields of an entity, e.g. when you don't need them for the logic consuming the resultset. Excluding fields can greatly reduce the amount of time spent on data transport across the network. Use this optimization if you see that there's a big difference between query execution time on the RDBMS and the time reported by the .NET profiler for the ExecuteReader method call.

    - Doing client-side aggregates/scalar calculations by consuming a lot of data: if possible, try to formulate a scalar query or group-by query using the projection system or the GetScalar functionality of LLBLGen Pro to do the data consumption on the RDBMS server. It's far more efficient to process data on the RDBMS server than to first load it all into memory and then traverse the data in memory to calculate a value.

    - Using .ToList() constructs inside Linq queries: you might use .ToList() somewhere in a Linq query which makes the query run partially in memory. Example: var q = from c in metaData.Customers.ToList() where c.Country=="Norway" select c; This will actually fetch all customers in memory and do in-memory filtering, as the Linq query is defined on an IEnumerable<T> and not on the IQueryable<T>. Linq is nice, but it can often be a bit unclear where some parts of a Linq query might run.

    - Fetching all entities to delete into memory first: to delete a set of entities it's rather inefficient to first fetch them all into memory and then delete them one by one. It's more efficient to execute a DELETE FROM ... WHERE query on the database directly to delete the entities in one go. LLBLGen Pro supports this feature, and so do some other O/R mappers. It's not always possible to do this operation in the context of an O/R mapper however: if an O/R mapper relies on a cache, these kinds of operations are likely not supported, because they make it impossible to track whether an entity is actually removed from the DB and thus can be removed from the cache.

    - Fetching all entities to update with an expression into memory first: similar to the previous point, it is more efficient to update a set of entities directly with a single UPDATE query using an expression instead of fetching the entities into memory first, updating them in a loop, and afterwards saving them. It might however be a compromise you don't want to take, as it works around the idea of having an object graph in memory which is manipulated, and instead makes the code fully aware there's an RDBMS somewhere.

    Conclusion

    Performance tuning is almost always about compromises and making choices. It's also about knowing where to look and how the systems in play behave and should behave. The four steps I provided should help you stay focused on the real problem and lead you towards the solution. Knowing how to optimally use the systems participating in your own code (.NET framework, O/R mapper, RDBMS, network/services) is key for success, as well as knowing what's going on inside the application you built.
I hope you'll find this guide useful in tracking down performance problems and dealing with them in a useful way.  
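
    To make the .ToList() pitfall above concrete, here is a minimal, self-contained C# sketch (plain LINQ, no O/R mapper required; the Customer data is invented) showing how the placement of ToList() decides whether the filter can run in the database or runs in memory:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Customer { public string Name; public string Country; }

        class Program
        {
            static void Main()
            {
                // stand-in for an O/R mapper's IQueryable source
                IQueryable<Customer> customers = new List<Customer>
                {
                    new Customer { Name = "Ola",  Country = "Norway" },
                    new Customer { Name = "Sven", Country = "Sweden" }
                }.AsQueryable();

                // ToList() first: the whole set is materialized, then Where()
                // filters an IEnumerable<T> in memory.
                var inMemory = customers.ToList().Where(c => c.Country == "Norway");

                // Where() on the IQueryable<T>: a LINQ provider can translate
                // the filter into SQL and fetch only the matching rows.
                var inDatabase = customers.Where(c => c.Country == "Norway");

                Console.WriteLine(inDatabase.Count()); // 1
            }
        }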

    Read the article

  • SharePoint 2010 Data Retrieval Techniques

    - by Jayant Sharma
    In SharePoint, we have two options for performing CRUD operations:

    1. using server-side code
    2. using client-side code

    Using server-side code, we have:

    1. CAML
    2. LINQ

    Using client-side code, we have:

    1. Client Object Model
       1.1. Managed Client Object Model
       1.2. Silverlight Client Object Model
       1.3. ECMA Client Object Model
    2. SharePoint Web Services
    3. ADO.NET Data Services (based on REST web services)
    4. RPC calls (owssvr.dll)

    Which of these options is used, and when, depends on requirements; every option has certain advantages and disadvantages. So, before starting development of any new SharePoint project, it is important to understand the limitations of the different methods. The server object model is used when our application is hosted on the same server on which SharePoint is installed, while client-side code is used to access SharePoint from a client system. In SharePoint 2010 especially, the Client Object Model (COM) was introduced to perform SharePoint operations from a client system.

    Advantages of CAML:
    - It is fast.
    - It can be used from all kinds of technology, like Silverlight or jQuery.
    - You can use the U2U CAML Query Builder to generate CAML queries.

    Disadvantages of CAML:
    - Error-prone, as we can detect errors only at runtime.

    Advantages of LINQ:
    - Object-oriented technique (object-relational model).
    - The LINQ to SharePoint provider works with strongly typed list item objects, so IntelliSense is available at design time.
    - No knowledge of CAML needed.
    - Less error-prone, as it uses C# syntax.
    - You can compare two fields of a SharePoint list.

    Disadvantages of LINQ:
    - List attachments are not supported by the SPMetal tool.
    - Created By, Created, Modified and Modified By fields are not created by the SPMetal tool.
    - Custom fields are not created by the SPMetal tool.
    - External lists are not supported.
    - Since LINQ generates a CAML query behind the scenes, it is slower than using CAML directly in code.

    Advantages of the Client Object Model:
    - Used to access SharePoint from a client system.
    - No web server is required at the client end.
    - Can use Silverlight and JavaScript to build a better, faster user experience.

    Disadvantages of the Client Object Model:
    - You cannot use RunWithElevatedPrivileges.
    - Cross-site-collection queries are not possible.
    - Fewer APIs are available.

    ADO.NET Data Services:
    - Only list-based operations are possible; other types of operations are not.

    SharePoint Web Services and RPC calls:
    - These were used in SharePoint 2007, but after the introduction of the Client Object Model, Microsoft recommends not using the web services to fetch data from SharePoint. In SharePoint 2010 they are available only for backward compatibility.

    Ref: http://msdn.microsoft.com/en-us/library/ee539764

    Jayant Sharma
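
    As a concrete illustration of the server-side CAML option, a minimal sketch using the server object model (the site URL, list name, and field names are invented for the example):

        using System;
        using Microsoft.SharePoint;

        class CamlDemo
        {
            static void Main()
            {
                // server object model: this must run on the SharePoint box itself
                using (SPSite site = new SPSite("http://server/sites/demo")) // illustrative URL
                using (SPWeb web = site.OpenWeb())
                {
                    SPList list = web.Lists["Tasks"]; // hypothetical list name
                    SPQuery query = new SPQuery
                    {
                        // CAML: items whose Status field equals 'Completed'
                        Query = "<Where><Eq><FieldRef Name='Status'/>" +
                                "<Value Type='Choice'>Completed</Value></Eq></Where>"
                    };
                    foreach (SPListItem item in list.GetItems(query))
                    {
                        Console.WriteLine(item.Title);
                    }
                }
            }
        }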

    Read the article

  • First ASP.NET WebForms application completed, should I jump into MVC now?

    - by farhad
    I just finished my first ASP.NET intranet application using WebForms, and now I am considering learning MVC. My questions are:

    - I mainly use LINQ for CRUD purposes instead of raw SQL; should I also learn hand-coded SQL, or just stick to LINQ/EF?
    - Is it a good idea to start learning MVC now and use it on all my future projects, or is it too early for me?
    - Do employers favour MVC over WebForms when recruiting junior developers?

    Read the article

  • Useful utilities - LINQPad

    - by TATWORTH
    Recently I came across LINQPad at http://www.linqpad.net/, a free utility by Joseph Albahari. This is an excellent tool for developing and testing LINQ queries before you incorporate them into your C# programs. If you get stuck, as I did at one point recently, there is the MSDN LINQ forum at http://forums.microsoft.com/MSDN/ShowForum.aspx?siteid=1&ForumID=123 where you can ask for help.
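
    The kind of thing LINQPad is handy for: paste in a standalone query and run it immediately, with no project scaffolding. For example (any C# expression works in its "C# Expression" mode; this one just squares the even numbers):

        // runs as-is in LINQPad's "C# Expression" mode
        from n in Enumerable.Range(1, 10)
        where n % 2 == 0
        select n * n   // 4, 16, 36, 64, 100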

    Read the article

  • The QueryExtender web server control

    - by nikolaosk
    In this post I am going to present a hands-on example of how to use the QueryExtender web server control. It is built into ASP.NET 4.0 and is available from the Toolbox in VS 2010. Before we move on with our example, let me explain what this control does and why we need it. Its goal is to extend the filtering functionality of the LINQ to Entities and LINQ to SQL data sources. Most of the time, when we have data coming out of a datasource, we want some sort of filtering. We achieve that by using a Where...(read more)
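
    For orientation, declarative QueryExtender markup typically looks like the sketch below; the data context, table, and field names are invented for the example:

        <asp:TextBox ID="SearchBox" runat="server" />

        <asp:LinqDataSource ID="ProductsSource" runat="server"
            ContextTypeName="ShopDataContext" TableName="Products" />

        <asp:QueryExtender ID="ProductsFilter" runat="server"
            TargetControlID="ProductsSource">
          <!-- keep only rows whose ProductName starts with the TextBox value -->
          <asp:SearchExpression DataFields="ProductName" SearchType="StartsWith">
            <asp:ControlParameter ControlID="SearchBox" />
          </asp:SearchExpression>
        </asp:QueryExtender>

        <asp:GridView ID="ProductsGrid" runat="server" DataSourceID="ProductsSource" />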

    Read the article

  • Lambdas for .NET made easy…

    - by mbcrump
    The purpose of my blog is to explain things for a beginner to intermediate C# programmer. I've seen several blog posts that use lambda expressions, always assuming the audience is familiar with them. The purpose of this post is to make them simple and easily understood. Let's begin with a definition. A lambda expression is an anonymous function that can contain expressions and statements, and can be used to create delegates or expression tree types. So anonymous function… delegates or expression tree types? I don't get it??? Confused yet? Let's break this into a few definitions and jump right into the code.

    - anonymous function: an "inline" statement or expression that can be used wherever a delegate type is expected.
    - delegate: a type that references a method. Once a delegate is assigned a method, it behaves exactly like that method. The delegate method can be used like any other method, with parameters and a return value.
    - expression trees: represent code in a tree-like data structure, where each node is an expression, for example a method call or a binary operation such as x < y.

    Don't worry if this still sounds confusing, let's jump right into the code with a simple 3-line program. We are going to use a Func delegate (all you need to remember is that this delegate returns a value). Lambda expressions are used most commonly with the Func and Action delegates, so you will see an example of both of these.

    Lambda expression, 3 lines:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        namespace ConsoleApplication7
        {
            class Program
            {
                static void Main(string[] args)
                {
                    Func<int, int> myfunc = x => x * x;
                    Console.WriteLine(myfunc(6).ToString());
                    Console.ReadLine();
                }
            }
        }

    Is equivalent to the old way of doing it:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        namespace ConsoleApplication7
        {
            class Program
            {
                static void Main(string[] args)
                {
                    Console.WriteLine(myFunc(6).ToString());
                    Console.ReadLine();
                }

                static int myFunc(int x)
                {
                    return x * x;
                }
            }
        }

    In the example, there is a single parameter, x, and the expression is x*x. I'm going to stop here to make sure you are still with me. A lambda expression is an unnamed method written in place of a delegate instance. In other words, the compiler converts the lambda expression to either:

    - a delegate instance
    - an expression tree

    All lambdas have the following form: (parameters) => expression or statement block. Now look back at the ones we have created. It should start to sink in. Don't get stuck on the => form; use it as an identifier of a lambda. A lambda expression can also be written in the following form:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        namespace ConsoleApplication7
        {
            class Program
            {
                static void Main(string[] args)
                {
                    Func<int, int> myFunc = x =>
                    {
                        return x * x;
                    };

                    Console.WriteLine(myFunc(6).ToString());
                    Console.ReadLine();
                }
            }
        }

    This form may be easier to read but consumes more space. Let's try an Action delegate next; this delegate does not return a value. Action delegate example:
        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        namespace ConsoleApplication7
        {
            class Program
            {
                static void Main(string[] args)
                {
                    Action<string> myAction = (string x) => { Console.WriteLine(x); };
                    myAction("michael has made this so easy");

                    Console.ReadLine();
                }
            }
        }

    Lambdas can also capture outer variables, as in the example below. A lambda expression can reference the local variables and parameters of the method in which it's defined. Outer variables referenced by a lambda expression are called captured variables.

    Capturing outer variables:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        namespace ConsoleApplication7
        {
            class Program
            {
                static void Main(string[] args)
                {
                    string mike = "Michael";
                    Action<string> myAction = (string x) =>
                    {
                        Console.WriteLine("{0}{1}", mike, x);
                    };
                    myAction(" has made this so easy");

                    Console.ReadLine();
                }
            }
        }

    Lambdas can also be used with a strongly typed list to loop through a collection.

    Used with a strongly typed list:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        namespace ConsoleApplication7
        {
            class Program
            {
                static void Main(string[] args)
                {
                    List<string> list = new List<string>() { "1", "2", "3", "4" };
                    list.ForEach(s => Console.WriteLine(s));
                    Console.ReadLine();
                }
            }
        }

    Outputs: 1 2 3 4

    I think this will get you started with lambdas; as always, consult the MSDN documentation for more information. Still confused? Hopefully you are not.

    Read the article

  • Good PHP books for starters, any recommendations?

    - by Goma
    I started reading some PHP books. Most of them say in their introduction that, unlike other books, they follow good habits and practices. Now, I do not know which book tells the truth, and which writer is the most experienced in PHP. These are the books whose first chapters I had a quick look at:

    - PHP and MySQL Web Development (Developer's Library) by Luke Welling and Laura Thomson
    - Build Your Own Database Driven Web Site Using PHP & MySQL by Kevin Yank
    - PHP and MySQL for Dummies by Janet Valade

    Now it's your turn to advise me: please tell me, from your experience, about an excellent one that follows best practices. (It could be any other book!) Regards,

    Read the article

  • Setup CRON weekly backup

    - by sadmicrowave
    I want to make a backup of my /var/lib/mysql and /var/www folders and save them as tar.gz files to my mounted network file server (uslons001). Here is my bash script, located at /etc/cron.weekly/mysqlbackup.sh:

        #!/bin/bash
        mkdir ~/uslons001/`date +%d%m%y`
        tar -czf ~/uslons001/`date +%d%m%y`/mysql.tar.gz /var/lib/mysql
        tar -czf ~/uslons001/`date +%d%m%y`/www.tar.gz /var/www
        tar -czf ~/uslons001/`date +%d%m%y`.tar.gz ~/uslons001/`date +%d%m%y`
        echo Backup Completed `date` >> ~/backuplog

    It works PERFECTLY fine when I execute it in a command shell, but when I set up the cron job it never runs, so I'm not setting the cron job up properly. My cron job looks like this:

        30 7 * * fri /etc/cron.weekly/mysqlbackup.sh

    which should execute at 7:30 AM every Friday. What am I doing wrong?

    UPDATE 1: changed the cron job line to the following:

        44 8 * * 5 /etc/cron.weekly/mysqlbackup.sh

    with still no luck... is there a cron error log file that I can read to help pinpoint where the problem is?
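
    A few likely culprits, given that the script runs fine interactively (a hedged checklist, not a confirmed diagnosis): cron jobs run with a minimal environment, so ~ may not expand to the HOME you expect; entries in /etc/crontab need a user field while crontab -e entries must not have one; and the script must be executable. Setting the environment explicitly in root's crontab and capturing output makes failures visible:

        # in root's crontab (sudo crontab -e): fix the environment and log everything
        HOME=/root
        PATH=/usr/sbin:/usr/bin:/sbin:/bin
        44 8 * * 5 /etc/cron.weekly/mysqlbackup.sh >> /var/log/mysqlbackup.log 2>&1

    As for a log: cron itself reports to syslog, so /var/log/syslog (Debian/Ubuntu) or /var/log/cron (Red Hat-style systems) is the place to look.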

    Read the article

  • Is it better to use a Database or a data structure for network stack?

    - by poly
    I've built a multi-threaded messaging application in C, and I'm currently using a MySQL MEMORY table to save session IDs, but I'm not sure whether this was a good decision or not. It works like this: the application sends a message and saves the source session ID in the MySQL table. When the application gets a success response it removes the session ID from the MySQL table; if it receives an error response, it keeps the ID to be retried later. I built it this way so that I don't need to care about building a data structure myself, and the database provides flexibility when it comes to querying it. Do you think this is appropriate, or do I need to use something else? Please note that the application is expected to handle a large number of transactions per second.
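
    For scale comparison, the in-process alternative the question alludes to can be quite small; a hedged sketch of a mutex-protected hash table keyed by session ID (the bucket count and session struct are invented for illustration):

        #include <pthread.h>
        #include <stdint.h>
        #include <stdlib.h>

        #define NBUCKETS 4096

        typedef struct session {
            uint64_t id;            /* source session ID */
            struct session *next;   /* chaining for collisions */
        } session_t;

        static session_t *buckets[NBUCKETS];
        static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

        /* store a pending session ID (on send) */
        void session_add(uint64_t id)
        {
            session_t *s = malloc(sizeof *s);
            s->id = id;
            pthread_mutex_lock(&lock);
            size_t b = id % NBUCKETS;
            s->next = buckets[b];
            buckets[b] = s;
            pthread_mutex_unlock(&lock);
        }

        /* drop a session ID (on success response); returns 1 if found */
        int session_remove(uint64_t id)
        {
            pthread_mutex_lock(&lock);
            session_t **pp = &buckets[id % NBUCKETS];
            for (; *pp; pp = &(*pp)->next) {
                if ((*pp)->id == id) {
                    session_t *dead = *pp;
                    *pp = dead->next;
                    pthread_mutex_unlock(&lock);
                    free(dead);
                    return 1;
                }
            }
            pthread_mutex_unlock(&lock);
            return 0;
        }

    A single global mutex is the simplest correct choice; per-bucket locks are the usual next step if contention shows up at high transaction rates.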

    Read the article

  • CodePlex Daily Summary for Thursday, April 01, 2010

    CodePlex Daily Summary for Thursday, April 01, 2010

    New Projects

    ASP.NET Bing Maps: Extensible and easy to use, this is an ASP.NET Bing Maps control. Drag and drop, and it is ready to go. You can configure map mode and map style, add a PushPin...
    Bricks' Bane: Bricks' Bane is a brick-breaker game developed using XNA and published on Xbox Live Indie Games. The source code includes a C# library useful for game d...
    cURL for dotnet: Another .NET binding for libcurl; see http://curl.haxx.se for more info about cURL/libcurl.
    Custom Functoid que acessa o banco de dados SQL: A functoid for BizTalk Server 2006 that uses data from SQL Server 2005.
    FEI STU Pharmacy e-shop: An electronic shop for medicines. Create a simple client-server application that implements an electronic shop for medicines. Modules: 1. e-shop f...
    Flavours of Wix: Investigating building DSLs to create installers based on WiX.
    Fulcrum: Fulcrum is a code generation framework built on top of the T4 technology in Visual Studio.
    GreviousAngel: New team project.
    Habanero Inferno: Habanero Inferno, coming soon.
    Kawo Pounga !: A useless game!
    LetsXNA!!: A project created by members of the LinkedIn group Lets XNA!! to build an XNA game and have fun in the process. The goal is to build a simple ...
    Linq To Naver, Custom Linq Provider for Naver search engine OpenAPI: Linq to Naver, written in C#.
    LocoSync: LocoSync is a file synchronization/backup/archiver program which is easily extendable. It is easy to add new synchronization methods using C# code.
    Natural Language Processing: Natural language processing.
    Nop Commerce Azure: This project lets you set up your e-commerce website quickly and simply while benefiting from all the advantages of the platf...
    Nwinsock: Nwinsock is a networking component for object transfer and packet compression, supporting the TCP and UDP protocols, thread-based.
    OnTime: OnTime is a simple program based on that matching game from back in the day, just to highlight programming techniques. It's developed in C#.
    OpenGL ES 2.0 Compact Framework Wrapper: An OpenGL ES 2.0 wrapper for the .NET Compact Framework. Developed on an HTC HD2 device, but it should run on any Windows Mobile device that has the correct lib...
    ortaknokta: This project is being prepared to allow several people to come together and hold discussions on topics of their choice.
    P-Data: P-Data is a tool that obtains information from data files (data profiling) through SQL queries, automating...
    PowerAuras: An addon for World of Warcraft that displays effects on screen under different conditions.
    PowerShell ToodleDo Module: A PowerShell module for interacting with the toodledo.com online to-do list site.
    RSS Reader for Windows Phone 7: An RSS reader application for Windows Phone 7.
    Streamlet Containers: My implementation of STL-style containers, including a dynamic array, a doubly-linked list and a red-black tree. Just for practice. Please feel free...
    Troav: A social encyclopedia built using C# and the Orchard framework.
    Umbraco App_Code/Usercontrol Editor: A package for Umbraco that adds App_Code and user-control editing to the Developer section of the Umbraco administration system. Will support GeSHi editi...
    Vczh Reactive Programming Library: A reactive programming library that provides a stream or state-machine view of .NET events.
    WhoIs XML API: The project uses the public WhoIs XML API service (http://www.whoisxmlapi.com/) to obtain detailed WHOIS information. The project is written in C# and serial...
    WPF FlowDocument Examples for VS2008 and VS2010: WPF text samples (especially FlowDocument) covering various possible effects: sub- and superscript, ruby (a.k.a. furigana), and various others...
    You are here (for Windows Mobile): This sample shows you how to play a *.wav file on your device with Compact Framework 2.0. There is better support for playing music on Compact F...

    New Releases

    ( λunula ): Lunula 0.4.0: Changelog: implemented a virtual machine; implemented a compiler for the virtual machine; added first-class continuations (call/cc); removed co...
    Alter gear SQL index Management: Setup 1.0.1: Changes: test connection now shows a success message; connection string timeout property added; setup project added to the project source code; possible issu...
    ASP.NET Bing Maps: ASP.NET Bing Maps 0.1b.
    ASP.NET MVC Validation Library: ASP.NET MVC Validation Library 1.3: Changes since 1.2: support for remote validation; support for custom server-side validation; improved design of the validation attribute. Note: test...
    BigDays 2010: HelfenHelfen - v1: PLEASE NOTE: This project is published under the Microsoft Public License (Ms-PL). http://bigdays10.codeplex.com/license IT IS A DEMO SOLUTION FOR...
    Caps - Manage your collection!: Caps Console 0.1.4.0 Alpha: This is a preview release (alpha quality). It contains only a limited number of fixes and new features from the user's point of view. Major focus f...
    CSharpQuery: Version 1.0: This version is stable. Please report any possible bugs. The next release will include a sample project and index management tools. Until then, pl...
    Custom Functoid que acessa o banco de dados SQL: Custom Functoid SQL Server: Visual Studio solution with the source code and SQL script for the BizTalk functoid.
    Dawf: Dual Audio Workflow: Beta 3: Suppose two good audio events overlap in time with a video event of interest. (This can only happen if PluralEyes isn't used on everything.) Befo...
    Dirac codec user interface: Dirac User Interface (checkin 37132): Same as the 36795 version, but built with the latest source code.
    DotNetNuke® Blog: 04.00.00 RC 3: PLEASE NOTE: You may upgrade an RC 2 install, but please do not upgrade previous beta releases - please start from RC 2 or 03.05.0...
    DotNetNuke® Skinning Extensions: SimpleTitle Skin Object: This is an example skin object that only renders the "page name" if used in a skin and the "module title" if used in a container. No extra spans, c...
    Fulcrum: Fulcrum v0.9: Initial release of Fulcrum.
    HelloTipi Photos Uploader: Version 2010.03.31: Very small fixes: fixed the Send button; fixed being unable to interact with the application during an upload.
    kdar: KDAR 0.0.18: KDAR - Kernel Debugger Anti Rootkit - dispatch table signature bases updated (many drivers); scripts refactored; some bugs fixed.
    Legend: Legend Libraries: The latest release.
    Linq To Naver, Custom Linq Provider for Naver search engine OpenAPI: Linq to Naver.
    Live at Education Meta Web-Service: Live at Education Meta Web Service v. 1.0: We're happy to publish the final version of the Live at Education Meta Web Service (LAEMWS). In this release: a huge list of Windows Live ID enabled servic...
    Live@edu SSO WebPart for MOSS 2007: WebPart 2.0: This release is based on the Live@edu Meta Web Service (laemws - http://laemws.codeplex.com). It is highly recommended to use the laemws version of the web part,...
    LocoSync: LocoSync v0.1r2010.03.31 installer: This is the first public release. Unzip and run setup. Or, if you have the .NET 3.5 runtime available, download the executable and try...
    Natural Language Processing: test1: test.
    Nop Commerce Azure: Nop Commerce Azure: Nop Commerce full sources with additional Azure projects.
    Nwinsock: NWinsock: Nwinsock version 1.0 is here.
    Open NFe: DANFe v1.9.8: CST correction.
    OpenGL ES 2.0 Compact Framework Wrapper: v0.1 Sources: First rough release. It has a working sample application which renders a triangle with rotation. Don't expect anything great - just a very early ...
    patterns & practices - Windows Azure Guidance: Code Drop 3: Second iteration of a-Expense on Azure. This release builds on the previous one and mainly focuses on replacing SQL Azure with Table Storage. We hav...
    Posh4DNN: Posh4DNN Scripts 2.0: This release greatly increases the speed of installation and incorporates the use of IIS and SQL Server snap-ins for managing those services. Inst...
    Process Enactment Tool Framework: PET 1.1: PET Core: a new intermediate model with arbitrary "clean" relations among objects and several updates to the object fields (see DependencyInterfacesA...
    Project Tru Tiên: Elements-test V1-fix (v1): Elements-test V1 with the following issues fixed: fixed a display bug with the Hổ Kỳ Lân mount; the Chinese-language tab is now displayed in Vietnamese; fixed the displ...
    Sentinel - Log Viewer: Sentinel 0.8.1 (nLog support): Build of the 0.8.1 code (svn revision 36823), which includes the support for both nLog and log4net that has been in SVN for a while but didn't have a bi...
    sgMotion Animation Library: SgMotion v1.1 (For Sunburn 1.3.1): This release includes both a Windows and an Xbox sample. The sample defaults to forward rendering, but can e...
    sTASKedit: sTASKedit 44538 (Developer Alpha): Nearly all fields are viewable in this release, for task verification and identification of unknowns.
    Troav: Troav20100331 Source Pre-Alpha: Some experiments with implementing custom modules with Microsoft's Orchard framework. This is very preliminary and subject to change.
    Weather Report WebControls: WebWeatherReport: Source code of the main files.
    WhoIs XML API: Initial Release.
    You are here (for Windows Mobile): CAB file and Source Code: You can find more controls and samples for Windows Mobile developers at http://www.beemobile4.net

    Most Popular Projects: hmrEngine; Rawr; WBFS Manager; ASP.NET Ajax Library; Microsoft SQL Server Product Samples: Database; Silverlight Toolkit; AJAX Control Toolkit; Windows Presentation Foundation (WPF); ASP.NET; LiveUpload to Facebook

    Most Active Projects: Rawr; Graffiti CMS; Base Class Libraries; jQuery Library for SharePoint Web Services; BlogEngine.NET; Microsoft Biology Foundation; N2 CMS; LINQ to Twitter; Managed Extensibility Framework; Farseer Physics Engine

    Read the article

  • Difference between Factory Method and Abstract Factory design patterns using C#.Net

    - by nijhawan.saurabh
    First of all, I'll put both patterns in context and describe their intent as given in the GoF book:

    Factory Method: Define an interface for creating an object, but let subclasses decide which class to instantiate. Factory Method lets a class defer instantiation to subclasses.

    Abstract Factory: Provide an interface for creating families of related or dependent objects without specifying their concrete classes.

    Points to note:

    The Abstract Factory pattern adds a layer of abstraction to the Factory Method pattern. The type of factory is not known to the client at compile time; this information is passed to the client at runtime (how it is passed depends on the system - you may store it in configuration files, and the client can read it on execution). When implementing the Abstract Factory pattern, the factory classes can have multiple factory methods. In Abstract Factory, a factory is capable of creating more than one type of product (similar products are grouped together in a factory).

    Sample implementation of the Factory Method pattern

    Let's see the class diagram first. [Class diagram: ProductFactory with subclasses ProductAFactory and ProductBFactory; Product with subclasses ProductA and ProductB.]

    ProductFactory.cs

    namespace FactoryMethod
    {
        /// <summary>
        /// Declares the factory method that concrete factories override.
        /// </summary>
        public abstract class ProductFactory
        {
            public abstract Product CreateProductInstance();
        }
    }

    ProductAFactory.cs

    namespace FactoryMethod
    {
        /// <summary>
        /// Concrete factory that creates ProductA instances.
        /// </summary>
        public class ProductAFactory : ProductFactory
        {
            public override Product CreateProductInstance()
            {
                return new ProductA();
            }
        }
    }

    ProductBFactory.cs

    namespace FactoryMethod
    {
        /// <summary>
        /// Concrete factory that creates ProductB instances.
        /// </summary>
        public class ProductBFactory : ProductFactory
        {
            public override Product CreateProductInstance()
            {
                return new ProductB();
            }
        }
    }

    Product.cs

    namespace FactoryMethod
    {
        /// <summary>
        /// Abstract product returned by the factories.
        /// </summary>
        public abstract class Product
        {
            public abstract string Name { get; set; }
        }
    }

    ProductA.cs

    namespace FactoryMethod
    {
        public class ProductA : Product
        {
            public ProductA()
            {
                Name = "ProductA";
            }

            public override string Name { get; set; }
        }
    }

    ProductB.cs

    namespace FactoryMethod
    {
        public class ProductB : Product
        {
            public ProductB()
            {
                Name = "ProductB";
            }

            public override string Name { get; set; }
        }
    }

    Program.cs

    using System;

    namespace FactoryMethod
    {
        class Program
        {
            static void Main(string[] args)
            {
                // The client works against the abstract factory type;
                // the concrete factory decides which product is built.
                ProductFactory pf = new ProductAFactory();

                Product product = pf.CreateProductInstance();
                Console.WriteLine(product.Name); // prints "ProductA"
            }
        }
    }
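
    The article stops short of showing Abstract Factory code, so here is a minimal sketch of how the points above translate into C#. The widget-family names (WidgetFactory, WinButton, MacCheckbox and so on) and the dictionary-based factory lookup are illustrative assumptions, not part of the original sample: the key idea is that one factory groups several factory methods, and the concrete factory is chosen at runtime.

    using System;
    using System.Collections.Generic;

    namespace AbstractFactorySketch
    {
        // Abstract products: one per product type in the family.
        public abstract class Button   { public abstract string Render(); }
        public abstract class Checkbox { public abstract string Render(); }

        // Abstract factory: multiple factory methods, one per product.
        public abstract class WidgetFactory
        {
            public abstract Button CreateButton();
            public abstract Checkbox CreateCheckbox();
        }

        // One concrete factory per family; similar products are grouped together.
        public class WinWidgetFactory : WidgetFactory
        {
            public override Button CreateButton()     { return new WinButton(); }
            public override Checkbox CreateCheckbox() { return new WinCheckbox(); }
        }

        public class MacWidgetFactory : WidgetFactory
        {
            public override Button CreateButton()     { return new MacButton(); }
            public override Checkbox CreateCheckbox() { return new MacCheckbox(); }
        }

        public class WinButton : Button     { public override string Render() { return "Win button"; } }
        public class WinCheckbox : Checkbox { public override string Render() { return "Win checkbox"; } }
        public class MacButton : Button     { public override string Render() { return "Mac button"; } }
        public class MacCheckbox : Checkbox { public override string Render() { return "Mac checkbox"; } }

        class Program
        {
            static void Main(string[] args)
            {
                // The concrete factory is not known at compile time; here it is
                // picked by a key that could come from a configuration file.
                var factories = new Dictionary<string, WidgetFactory>
                {
                    { "win", new WinWidgetFactory() },
                    { "mac", new MacWidgetFactory() }
                };

                string configuredFamily = "mac"; // stand-in for a config read
                WidgetFactory factory = factories[configuredFamily];

                Console.WriteLine(factory.CreateButton().Render());   // Mac button
                Console.WriteLine(factory.CreateCheckbox().Render()); // Mac checkbox
            }
        }
    }

    The client code touches only WidgetFactory, Button and Checkbox; swapping the configured key swaps the whole product family at once, which is the extra layer of abstraction the notes describe on top of Factory Method.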

    Read the article

< Previous Page | 526 527 528 529 530 531 532 533 534 535 536 537  | Next Page >