Search Results

Search found 441 results on 18 pages for 'duplication'.


  • How to avoid index.php in Zend Framework?

    - by henriquev
    I'm using the Zend Router, so things like (/ and /index.php) or (/about and /index.php/about) end up as the same page here. However, /index.php/whatever should not exist, since it is exactly the same resource as /whatever, so the duplication doesn't make sense. How do I avoid this? Even http://zendframework.com/manual/en/zend.controller.router.html and http://zendframework.com/index.php/manual/en/zend.controller.router.html both exist. It doesn't make any sense at all...

    Read the article

  • De-normalization for the sake of reports - Good or Bad?

    - by Travis
    What are the pros/cons of de-normalizing an enterprise application database because it will make writing reports easier?
    Pro - designing reports in SSRS will probably be "easier" since no joins will be necessary.
    Con - developing/maintaining the app to handle de-normalized data will become more difficult due to duplication of data and synchronization.
    Others?

    Read the article

  • jQuery Templates vs Partial Views in ASP.NET MVC

    - by Jaco Pretorius
    I'm taking a look at jQuery templates. They look really interesting - easy syntax, easy to use, very clean. However, I can't really see why it's better to use jQuery templates instead of simply fetching partial views via AJAX. It seems like the partial views would be much easier to maintain and would help to avoid duplication of code. I want to use jQuery templates, but when would they be better than partial views?

    Read the article

  • Java replacement for C macros

    - by thkala
    Recently I refactored the code of a 3rd party hash function from C++ to C. The process was relatively painless, with only a few changes of note. Now I want to write the same function in Java, and I came upon a slight issue. In the C/C++ code there is a C preprocessor macro that takes a few integer variable names as arguments and performs a bunch of bitwise operations with their contents and a few constants. That macro is used in several different places, so its presence avoids a fair bit of code duplication. In Java, however, there is no equivalent of the C preprocessor. There is also no way to affect any basic type passed as an argument to a method - even autoboxing produces immutable objects. Coupled with the fact that Java methods return a single value, I can't seem to find a simple way to rewrite the macro. Avenues that I considered:
    - Expand the macro by hand everywhere: it would work, but the code duplication could make things interesting in the long run.
    - Write a method that returns an array: this would also work, but it would repeatedly result in code like this: long tmp[] = bitops(k, l, m, x, y, z); k = tmp[0]; l = tmp[1]; m = tmp[2]; x = tmp[3]; y = tmp[4]; z = tmp[5];
    - Write a method that takes an array as an argument: this would mean that all variable names would be reduced to array element references - it would be rather hard to keep track of which index corresponds to which variable.
    - Create a separate class, e.g. State, with public fields of the appropriate type and use that as an argument to a method: this is my current solution. It allows the method to alter the variables while still keeping their names. It has the disadvantage, however, that the State class will get more and more complex as more macros and variables are added, in order to avoid copying values back and forth among different State objects.
    How would you rewrite such a C macro in Java? Is there a more appropriate way to deal with this, using the facilities provided by the standard Java 6 Development Kit (i.e. without 3rd party libraries or a separate preprocessor)?
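    A minimal sketch of the State-object approach the poster describes (the fourth avenue) might look like the following; the class and method names and the particular bitwise operations are illustrative assumptions, not the actual hash function:

        // A mutable holder so one method can update several "variables" at once,
        // standing in for the names the C macro received as arguments.
        final class State {
            long k, l, m, x, y, z;

            State(long k, long l, long m, long x, long y, long z) {
                this.k = k; this.l = l; this.m = m;
                this.x = x; this.y = y; this.z = z;
            }
        }

        final class BitOps {
            private BitOps() {}

            // Plays the role of the macro: mutates the fields in place so call
            // sites keep meaningful names instead of array indices. The
            // operations below are placeholders, not the real hash rounds.
            static void mix(State s) {
                s.k ^= s.x << 13;
                s.l += s.y >>> 7;
                s.m ^= s.z | 0x9E3779B9L;
                s.x ^= s.k;
                s.y += s.l;
                s.z ^= s.m;
            }
        }

    Call sites then read BitOps.mix(state) wherever the macro was expanded in the C code; the trade-off, as noted in the question, is that State tends to accumulate fields as more macros are ported.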

    Read the article

  • Using SQL Server Views with NHibernate

    - by colinramsay
    I have a site that sells cars. On the frontend, I want to only show cars that are published, and on the backend I want to show all cars. Whether a car is published or not depends on a number of factors, so I wanted to create a view to simplify this. My question is: can I reduce duplication by dynamically telling NHibernate to sometimes use the "PublishedCar" view and sometimes use the "AllCar" view when querying/fetching Car entities?

    Read the article

  • What's wrong with foreign keys?

    - by kronoz
    I remember hearing Joel mention in the podcast that he'd barely ever used a foreign key (if I remember correctly). However, to me they seem pretty vital for avoiding duplication and subsequent data integrity problems throughout your database. Do people have some solid reasons as to why (phrased this way to avoid an open-ended discussion, in line with SO principles)? Edit: "I've yet to have a reason to create a foreign key, so this might be my first reason to actually set up one."

    Read the article

  • Repository Pattern : Add Item

    - by No Body
    Just need to clarify this one. If I have the interface below:

        public interface IRepository<T>
        {
            T Add(T entity);
        }

    when implementing it, is checking for duplication (i.e. whether the entity already exists before persisting it) still the job of the repository, or should that be handled somewhere else?
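    One common arrangement is to keep the repository focused on persistence and put the uniqueness rule in a domain or application service that uses it. A minimal sketch in Java (the question's interface is C#; the findByNaturalKey lookup, the Customer type and the DuplicateEntityException are hypothetical names for the example):

        import java.util.Optional;

        interface Repository<T, K> {
            T add(T entity);
            Optional<T> findByNaturalKey(K key); // hypothetical lookup used for the check
        }

        // Hypothetical unchecked exception signalling a duplicate.
        class DuplicateEntityException extends RuntimeException {
            DuplicateEntityException(String message) { super(message); }
        }

        record Customer(String email, String name) {}

        class CustomerService {
            private final Repository<Customer, String> customers;

            CustomerService(Repository<Customer, String> customers) {
                this.customers = customers;
            }

            // The duplication rule lives here, not inside the repository implementation.
            Customer register(Customer candidate) {
                if (customers.findByNaturalKey(candidate.email()).isPresent()) {
                    throw new DuplicateEntityException("Already exists: " + candidate.email());
                }
                return customers.add(candidate);
            }
        }

    A database unique constraint is still worth keeping as the final guard, since a check-then-add sequence can race with concurrent writers whichever layer performs it.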

    Read the article

  • Refactoring Part 1 : Intuitive Investments

    - by Wes McClure
    Fear, it's what turns maintaining applications into a nightmare. Technology moves on, teams move on, someone is left to operate the application, and what was green is now perceived brown. Eventually the business will evolve and changes will need to be made. The approach to those changes often dictates the long term viability of the application. Fear of change, lack of passion and a lack of interest in understanding the domain often lead to a paranoid reluctance to do anything that doesn't involve duct tape and baling twine. Don't get me wrong, those have a place in the short term viability of a project, but they don't have a place in the long term. Add to that "us versus them" attitudes between the original team and those that maintain the application, internal politics and other factors, and you have a recipe for disaster. This results in code that quickly becomes unmanageable. Even the most clever of designs will eventually become sub-optimal, and debt will mount that exponentially makes changes difficult. This is where refactoring comes in, and it's something I'm very passionate about. Refactoring is about improving the process whereby we make change; it's an exponential investment in the process of change. Without it we will incur exponential complexity that halts productivity.

    Investments, especially in the long term, require intuition and reflection. How can we tackle new development effectively by evolving the original design and paying off debt that has been incurred? The longer we wait to ask and answer this question, the more it will cost us. Small requests don't warrant big changes, but realizing when changes now will pay off in the long term, and especially in the short term, is valuable. I have done my fair share of maintaining applications and continuously refactoring as needed, but recently I've begun work on a project that hasn't had much debt, if any, paid down in years. This is the first in a series of blog posts to try to capture the process, which is largely driven by intuition from smaller refactorings on other projects.

    Signs that refactoring could help:

    Testability
    How can decreasing test time not pay dividends? One of the first things I found was that a very important piece often takes 30+ minutes to test. I can only imagine how much time this has cost historically, but more importantly the time it might cost in the coming weeks: I estimate at least 10-20 hours per person! This is simply unacceptable for almost any situation. As it turns out, after about 6 hours of working with this part of the application I was able to cut the time down to under 30 seconds! In less than the lost time of one week, I was able to fix the problem for all future weeks! If we can't test fast then we can't change fast, nor with confidence. Code is used by end users and it's also used by developers, so consider your own needs in terms of the code base. Adding logic to enable/disable features during testing can help decouple parts of an application and lead to massive improvements. What exactly is so wrong about test code in real code? Often, these become features for operators and sometimes end users. If you cannot run an integration test within a test runner in your IDE, it's time to refactor.

    Readability
    Are variables named meaningfully via a ubiquitous language? Is the code segmented functionally or behaviorally so as to minimize the complexity of any one area? Are aspects properly segmented to avoid confusion (security, logging, transactions, translations, dependency management, etc.)? Is the code declarative (what) or imperative (how)? The what matters, not the how. LINQ is a great abstraction of the what, not how, of collection manipulation; the Reactive framework is a great example of the what, not how, of managing streams of data (a small illustration of the same idea follows this excerpt). Are constants abstracted and named, or are they just inline? Do people constantly bitch about the code/design? If the code is hard to understand, it will be hard to change with confidence. It's a large undertaking if the original designers didn't pay much attention to readability, and as such it will never be done to "completion." Make sure not to go overboard; instead, apply this as you change an application, not in lieu of changes (like with testability).

    Complexity
    Simplicity will never be achieved; it's highly subjective. That said, a lot of code can be significantly simplified, so tidy it up as you go. Refactoring will often converge upon a simplification step after enough time, so keep an eye out for this.

    Understandability
    In the process of changing code, one often gains a better understanding of it. Refactoring code is a good way to learn how it works. However, it's usually best in combination with other reasons, in effect killing two birds with one stone. Often this is done when readability is poor, in which case understandability is usually poor as well. In the large undertaking we are making with this legacy application, we will be replacing it. Therefore, understanding all of its features is important, and this refactoring technique will come in very handy.

    Unused code
    How can deleting things not help? This is a freebie in refactoring; it's very easy to detect with modern tools, especially in statically typed languages. We have VCS for a reason: if in doubt, delete it out (ok, that was cheesy)! If you don't know where to start when refactoring, this is an excellent starting point!

    Duplication
    Do not pray and sacrifice to the anti-duplication gods; there are excellent examples where consolidated code is a horrible idea, usually with divergent domains. That said, mediocre developers live by copy/paste. Other times features converge and aren't combined. Tools for finding similar code are great for the copy/paste problems. Knowledge of the domain helps identify convergent concepts that often lead to convergent solutions and will give intuition for where to look for conceptual repetition.

    80/20 and the Boy Scouts
    It's often said that 80% of the time, 20% of the application is used most. These tend to be the parts that are changed. There are also parts of the code where 80% of the time is spent changing 20% (probably due to all the refactoring smells above). I focus on these areas any time I make a change and follow the philosophy of the Boy Scout in cleaning up more than I messed up. If I spend 2 hours changing an application, in that 20%, I'll always spend at least 15 minutes cleaning it or nearby areas. This gives a huge productivity edge over developers that don't. Ironically, after a short period of time the 20% shrinks enough that we don't have to spend 80% of our time there and can move on to other areas.

    Refactoring is highly subjective; never attempt to refactor to completion! Learn to be comfortable with leaving one part of the application in a better state than others. It's an evolution, not a revolution.
    These are some simple areas to look into when making changes and can help get one started in the process. I've often found that refactoring is a convergent process towards simplicity that sometimes spans a few hours but often leads to massive simplifications over the timespan of weeks and months of regular development.
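    To make the declarative-versus-imperative point concrete outside of LINQ, here is a minimal Java sketch (the data and names are invented for the illustration): the first loop spells out how to build the result, while the Stream pipeline states what the result is.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.stream.Collectors;

        public class DeclarativeVsImperative {
            public static void main(String[] args) {
                List<String> names = List.of("Ada", "Grace", "Alan", "Linus", "Barbara");

                // Imperative: describe *how* - create a list, loop, test, add.
                List<String> imperative = new ArrayList<>();
                for (String name : names) {
                    if (name.startsWith("A")) {
                        imperative.add(name.toUpperCase());
                    }
                }

                // Declarative: describe *what* - names starting with "A", upper-cased.
                List<String> declarative = names.stream()
                        .filter(n -> n.startsWith("A"))
                        .map(String::toUpperCase)
                        .collect(Collectors.toList());

                System.out.println(imperative);  // [ADA, ALAN]
                System.out.println(declarative); // [ADA, ALAN]
            }
        }

    The behavior is identical; the declarative form simply pushes the mechanics of iteration into the library, which is the readability quality the post is pointing at.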

    Read the article

  • General Policies and Procedures for Maintaining the Value of Data Assets

    Here is a general list of policies and procedures for maintaining the value of data assets.

    Data Backup Policies and Procedures
    Backups are very important when dealing with data because there is always the chance of losing data due to faulty hardware or user activity, so a strategic backup system should be mandatory for all companies. That being said, in the real world some companies that I have worked for do not really have a good data backup plan, and when companies take this kind of approach to data backups the data is usually not really recoverable. Unfortunately, when companies do not regularly test their backup plans they get a false sense of security because they think that they are covered. However, I can tell you from personal and professional experience that a backup plan/system is never fully implemented until it is regularly tested prior to the time when it actually needs to be used.

    Disaster Recovery Plan
    Expanding on backup policies and procedures, a company also needs a disaster recovery plan in order to protect its data in case of a catastrophic disaster. Disaster recovery plans typically cover how to restore all of a company's data and infrastructure back to an operational status, and most also include time estimates for how long each step of the plan should take to execute. It is important to note that disaster recovery plans, just like backup plans, are never fully implemented until they have been tested. They should be tested regularly so that the business can be confident that little or no data will be lost in a catastrophic disaster.

    Firewall Policies and Content Filters
    One way companies can protect their data is by using a firewall to separate their internal network from the outside. Firewalls allow network access to be enabled or disabled as data passes through them by applying various defined restrictions, and they can likewise be used to prevent access from the internal network to the outside based on the same factors.

    Common firewall restrictions:
    - Destination/sender IP address
    - Destination/sender host names
    - Domain names
    - Network ports

    Companies may also want to restrict what their network users view on the internet through things like content filters. Content filters allow a company to track which webpages a person has accessed and can also restrict users' access based on rules set up in the content filter. This device and/or software can block access to domains or specific URLs based on a few factors.

    Common content filter criteria:
    - Known malicious sites
    - Specific page content
    - Page content theme

    Anti-Virus/Malware Policies
    Fortunately, most companies utilize antivirus programs on all computers and servers, and for good reason: viruses have been known to corrupt or invalidate data, destroy data, and steal data. Anti-virus applications are a great way to prevent malicious applications from gaining access to a company's data. However, anti-virus programs must be constantly updated because new viruses are always being created, and the anti-virus vendors need to distribute updates to their applications so that they can catch and remove them.

    Data Validation Policies and Procedures
    Data validation is very important to ensure that only accurate information is stored. The existence of invalid data can cause major problems when businesses attempt to use data for knowledge-based decisions and for performance reporting.

    Data Scrubbing Policies and Procedures
    Data scrubbing is valuable to companies in two ways. First, it can be used to clean data prior to analysis for report generation. Second, it allows companies to remove things like personally identifiable information from their data prior to transmitting it between environments or sending it to an external location. An example of this can be seen with medical records, in regards to HIPAA laws that prohibit the storage of specific personal and medical information. Additionally, I have professionally run into a scenario where the Canadian government does not allow any Canadian's personal information to be stored on a server not located in Canada.

    Encryption Practices
    The use of encryption is very valuable when a company needs to store any personal information. It allows users with the appropriate access levels to view or confirm the existence or accuracy of data within a system, by either decrypting the information or encrypting a piece of data and comparing it to the stored version (a small sketch of this comparison approach follows this article). Additionally, if for some unforeseen reason the data got into the wrong hands, it would first have to be decrypted before it could even be read. Encryption simply adds an additional layer of protection around the data itself.

    Standard Normalization Practices
    The use of standard data normalization practices is very important when dealing with data because it can prevent a lot of potential issues by eliminating unnecessary data duplication. Issues caused by data duplication include excess use of data storage, an increased chance of invalid data, and overuse of data processing.

    Network and Database Security/Access Policies
    Every company has some form of network/data access policy, even if that policy is simply not having one. These policies help keep data from being seen by the wrong users and prevent it from being improperly updated or deleted. In addition, without a good security policy there is a large potential for data to be corrupted by unwitting users, or even stolen.

    Data Storage Policies
    Data storage policies are very important, especially when a company tries to use them in conjunction with other policies like data backups. I have worked at companies where all network user folders are constantly backed up, so if a user wanted to ensure the existence of a piece of data in the form of a file, they had to store that file in their network folder. Conversely, I have also worked in places where a user's entire profile is backed up whenever they log on or off of the network.

    Training Policies
    One of the biggest ways to prevent data loss, and to ensure that data will remain a company asset, is training. Properly training employees on how to work within the systems that access data is crucial to ensuring that a company's data remains an asset. Users need to be trained on how to manipulate the company's data in order to perform their tasks, which reduces the chances of invalidating data.
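    As a small, hedged illustration of the "encrypt and compare to the stored version" idea above, the Java sketch below keeps only a SHA-256 digest and verifies candidate values against it; the digest choice and names are assumptions for the example rather than a recommended production design.

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;
        import java.security.NoSuchAlgorithmException;
        import java.util.HexFormat;

        public class DigestCheck {
            // Hash the sensitive value; only this digest would be persisted.
            static String sha256Hex(String value) throws NoSuchAlgorithmException {
                MessageDigest md = MessageDigest.getInstance("SHA-256");
                byte[] digest = md.digest(value.getBytes(StandardCharsets.UTF_8));
                return HexFormat.of().formatHex(digest);
            }

            // Confirm accuracy/existence by comparing digests, never plaintext.
            static boolean matchesStoredDigest(String candidate, String storedHex)
                    throws NoSuchAlgorithmException {
                return MessageDigest.isEqual(
                        sha256Hex(candidate).getBytes(StandardCharsets.UTF_8),
                        storedHex.getBytes(StandardCharsets.UTF_8));
            }

            public static void main(String[] args) throws NoSuchAlgorithmException {
                String stored = sha256Hex("123-45-6789");                       // digest kept on file
                System.out.println(matchesStoredDigest("123-45-6789", stored)); // true
                System.out.println(matchesStoredDigest("000-00-0000", stored)); // false
            }
        }

    For real personal data one would normally reach for a salted, slow hash or proper encryption with key management; the sketch only demonstrates the compare-against-the-stored-version pattern.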

    Read the article

  • SQL Server backup

    - by zzz777
    I have:
    1. Full-Backup-A
    2. Transaction-Log-Backup-A
    3. Transaction-Log-Backup-B (*) - I have to restore to this point
    4. Full-Backup-B
    How do I do it? It seems that the only way is:
    1. Full-Backup-A
    2. Transaction-Log-Backup-A
    3. Transaction-Log-Backup-B
    4. Shut off client access
    5. Transaction-Log-Backup-C
    6. Full-Backup-B
    7. Allow client access
    Are there any other ways to guarantee that nothing happened to the database between the last transaction log backup and the next full backup? I was thinking about:
    a. Starting the transaction log backup simultaneously with the full backup.
    b. Using differential backups while clients are connected and making full backups during the maintenance window only.
    c. Running replication and backing up the replica, stopping and restoring duplication services in steps 4 and 7.
    I feel that it is actually hopeless.

    Read the article

  • What is the best backup solution for VMware Infrastructure system that hosts a wide variety of VMs?

    - by SBWorks
    In a situation where you are running:
    - VMware Infrastructure 4.x with multiple hosts
    - Over 150 VMs with a wide variety of operating systems (Linux in a half dozen distros, Solaris, every MS version, etc.) in multiple languages, with almost every mix of installed software (luckily, no Exchange mail servers)
    - An EMC fiber channel SAN
    - About 2 terabytes of data (total) on the VMs that need to be backed up
    - A goal of keeping backups for about 3 months
    At this rough scale, what backup solutions have worked well for you? And, as an add-on question, did any of them have de-duplication that you thought was effective and useful?

    Read the article

  • Eliminating Duplicate Printers on Print Server 2008 R2

    - by user123247
    I added a print server role to our new 2008 R2 server and started adding printers to it that will be available to Remote Desktop sessions. When I added the Remote Desktop services role, I specified printer redirection, thinking that would be a good thing. On the PCs where I am testing all this, I added the network printers locally so that they would have the printer available for local use. When I logon to the 2008 R2 server, I notice that the printers I added are out there twice... once on the 2008 R2 server and an additional time redirected from my PC. Is there some way to eliminate this duplication w/o eliminating redirection?

    Read the article

  • How to change the $PATH in Mac OSX

    - by Samuel Elgozi
    I've installed git via the installer, not via terminal commands, and my $PATH changed: the path to the 'local' git was added to the end of the variable, so my $PATH became this:
    /usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/local/git/bin
    However, that doesn't help me, because the path to Xcode's git comes first. So what I did was add this line to my '.bash_profile':
    export PATH="/usr/local/git/bin:$PATH"
    and now my path is:
    /usr/local/git/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/local/git/bin
    I wanted to know how to remove the duplication from the end of the path so I end up with:
    /usr/local/git/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin
    Thanks ahead! And sorry if my English is too bad.

    Read the article

  • Alternative to Windows Home Server (WHS) backups

    - by Adam Tegen
    Since Microsoft announced the end of life for WHS, are there any alternatives? Specifically, I am interested in the way WHS recovers from a catastrophic disk failure. For example, this is my ideal scenario when a desktop hard drive fails (has a bad virus, etc.):
    1. Install a disk of the same size or greater
    2. Boot the desktop with the recovery disc
    3. Point the recovery application at the WHS
    4. Pick the machine, the drive(s) and the date of the backup
    5. Have a couple beers
    6. Reboot to a working machine as if nothing happened
    I would need to slap multiple disks in the machine without RAID; it sounds like LVM will work here. It would be nice, but not required, to have de-duplication of files when multiple machines are backed up (Single Instance Storage).

    Read the article

  • .htaccess https redirect best method

    - by Douglas Cottrell
    I have searched through all the redirects posted by others and can't quite find the answer to my problem. I have a website with over 3000 pages and we are getting duplication issues within Google. We want to keep everything in the parent directory on http except our contact.php and login.php pages. We then have 3 folders that must be secured: admin, clients, customers. I have tried using the following code in separate .htaccess files for each folder, but I keep getting a conflict when I try it, and I am still trying to find a good solution for the home directory.

        RewriteEngine On
        RewriteCond %{SERVER_PORT} 80
        RewriteCond %{REQUEST_URI} admin
        RewriteRule ^(.*)$ https://www.website.com/$1 [R,L]

    Any help would be greatly appreciated.

    Read the article

  • Feedback on available mid-to-enterprise level desktop backup solutions [closed]

    - by user85610
    I am involved in the creation of a new backup solution to replace our current Retrospect setup, which has become a significant time sink to administer. We have almost 200 desktop and some laptop clients, both Windows and OS X. We're only interested in products oriented around disk-to-disk, and they would need to integrate well with our current set of nine NAS devices as target storage. I'd just like some feedback from anyone out there, as it's sometimes difficult otherwise to find objective reviews of software at this level. Both data and time are important enough that we need a reliable solution which won't be prone to self-destruction as often as Retrospect. Bonus points for de-duplication, which might help squeeze more service time out of our NAS setup in terms of capacity. Currently considering Commvault and Netbackup. Many other products I've seen don't have an OS X client. Any thoughts?

    Read the article

  • Two SSL certs for a domain in DirectAdmin

    - by Bart van Heukelom
    If I were to get 2 SSL certificates, one for example.com and one for www.example.com, is there a way to install them both on the site example.com in DirectAdmin? The default interface only allows installing one for both versions. If not, can I separate the 2 domains into 2 sites? One of them would only be a redirection, so there wouldn't be any duplication of site files. (Please don't answer with "one certificate should work for both". It doesn't always. This is a DirectAdmin question)

    Read the article

  • HDMI and DVI connected monitors not in dual view on Windows 7 (Intel onboard Gigabyte)

    - by Nux
    I have an HDMI to DVI cable connected to one monitor (the HDMI end is on the PC side) and a DVI to DVI cable connected to the other monitor. This seems to work up until the system is fully loaded (the startup logo animation is shown on both screens). When and after I log in, only one monitor is active (the HDMI-DVI one), and I'm unable to detect the other monitor (DVI-DVI). Any ideas what might be wrong with the setup? Also, when I uninstall the Intel graphics driver both screens are on, but only duplication (mirroring) works. GA-Z68A-D3H-B3, Intel Z68 chipset, with a fresh VGA driver (15.26.1.64.2618).

    Read the article

  • ZFS, dedupe and PST files

    - by Unreason
    I am interested to know what the expected maximum dedupe ratio would be for a set of PST files. I have ~40G of PST files from ~15 users, with a high level of duplication of attachments. I am running tests to see if I can get significant space savings by storing the data on ZFS with dedupe. For this purpose I have installed a test setup of Nexenta, but was wondering if someone here has already done this and what level of deduplication I might expect (or, in other words, how sensitive are PST files to block alignment, and what parameters can influence the ratio?). Initial tests show a very low dedupe ratio, and I did find an explanation that block-level dedupe would not be efficient here and that byte-level dedupe would be much better (and that it should be performed by an application that is aware of the internal organization), so I am just double-checking here in case someone has more input. Otherwise I will probably be converting the PST files to IMAP.

    Read the article

  • MS Windows issue - "Filename or extension is too long"

    - by Daniel
    I run Microsoft Windows on a few of my machines. I don't know if many people know about this issue in the OS, but you can't have very long filenames; from what I know, Linux can have longer names, and I have never run into this issue on my Linux machines. Anyway, I run into issues whenever I copy folders and files to backup drives. I manually back up my data, finding and changing the names of files, and this is very, very tedious. Is there a software tool to shorten folder or file names that are found to be too long on Windows? I have drive image duplication software which does the job, but in a way that I don't like, plus moving files can become a hassle at times if the names are too long to copy.

    Read the article

  • rsync to ONLY keep files in destination that have been removed from source

    - by David Corley
    We use rsync to copy filesystem contents from one machine to another as a backup. We first run a MACHINE-X to MACHINE-Y rsync for a straight backup, with the --delete and --delete-excluded switches. We also run an internal rsync between the MACHINE-Y destination and another folder on MACHINE-Y, without either of the delete flags. This maintains a non-destructive copy in the event someone inadvertently deletes a file on MACHINE-X. However, it also has the overhead of being a complete copy of what has already been synchronized. Ideally I want to be able to run the non-destructive rsync in such a way that the destination ONLY receives the deleted files, and so avoid unnecessary duplication. Is there any way to do this?

    Read the article

  • How can I best share Ant targets between projects?

    - by Rob Hruska
    Is there a well-established way to share Ant targets between projects? I have a solution currently, but it's a bit inelegant. Here's what I'm doing so far. I've got a file called ivy-tasks.xml hosted on a server on our network. This file contains, among other targets, boilerplate tasks for managing project dependencies with Ivy. For example:

        <project name="ant-ivy-tasks" default="init-ivy" xmlns:ivy="antlib:org.apache.ivy.ant">
            ...
            <target name="ivy-download" unless="skip.ivy.download">
                <mkdir dir="${ivy.jar.dir}"/>
                <echo message="Installing ivy..."/>
                <get src="http://repo1.maven.org/maven2/org/apache/ivy/ivy/${ivy.install.version}/ivy-${ivy.install.version}.jar"
                     dest="${ivy.jar.file}" usetimestamp="true"/>
            </target>
            <target name="ivy-init" depends="ivy-download"
                    description="-> Defines ivy tasks and loads global settings">
                <path id="ivy.lib.path">
                    <fileset dir="${ivy.jar.dir}" includes="*.jar"/>
                </path>
                <taskdef resource="org/apache/ivy/ant/antlib.xml"
                         uri="antlib:org.apache.ivy.ant"
                         classpathref="ivy.lib.path"/>
                <ivy:settings url="http://myserver/ivy/settings/ivysettings-user.xml"/>
            </target>
            ...
        </project>

    The reason this file is hosted is because I don't want to:
    - Check the file into every project that needs it - this will result in duplication, making maintaining the targets harder.
    - Have my build.xml depend on checking out a project from source control - this will make the build have more XML at the top-level just to access the file.
    What I do with this file in my projects' build.xmls is along the lines of:

        <property name="download.dir" location="download"/>
        <mkdir dir="${download.dir}"/>
        <echo message="Downloading import files to ${download.dir}"/>
        <get src="http://myserver/ivy/ivy-tasks.xml"
             dest="${download.dir}/ivy-tasks.xml"
             usetimestamp="true"/>
        <import file="${download.dir}/ivy-tasks.xml"/>

    The "dirty" part about this is that I have to do the above steps outside of a target, because the import task must be at the top-level. Plus, I still have to include this XML in all of the build.xml files that need it (i.e. there's still some amount of duplication). On top of that, there might be additional situations where I might have common (non-Ivy) tasks that I'd like imported. If I were to provide these tasks using Ivy's dependency management I'd still have problems, since by the time I'd have resolved the dependencies I would have to be inside of a target in my build.xml, and unable to import (due to the constraint mentioned above). Is there a better solution for what I'm trying to accomplish?

    Read the article

  • Razor – Hiding a Section in a Layout

    - by João Angelo
    Layouts in Razor allow you to define placeholders named sections where content pages may insert custom content, much like the ContentPlaceHolder available in ASPX master pages. When you define a section in a Razor layout it's possible to specify whether the section must be defined in every content page using the layout, or whether its definition is optional, allowing a page not to provide any content for that section. For the latter case, it's also possible, using the IsSectionDefined method, to render default content when a page does not define the section. However, if you ever need to hide a given section from all pages based on some runtime condition, you might be tempted to conditionally define it in the layout, much like in the following code snippet:

        if(condition)
        {
            @RenderSection("ConditionalSection", false)
        }

    With this code you'll hit an error as soon as any content page provides content for the section, which makes sense since if a page inherits a layout then it should only define sections that are also defined in it. To work around this scenario you have a couple of options. You can make the given section optional and move the condition that enables or disables it to every content page; this leads to code duplication, and future pages may forget to only define the section based on that same condition. The other option is to conditionally define the section in the layout page using the following hack:

        @{
            if(condition)
            {
                @RenderSection("ConditionalSection", false)
            }
            else
            {
                RenderSection("ConditionalSection", false).WriteTo(TextWriter.Null);
            }
        }

    Hack inspired by a recent stackoverflow question.

    Read the article
