Search Results

Search found 11759 results on 471 pages for 'isolation level'.

Page 117/471 | < Previous Page | 113 114 115 116 117 118 119 120 121 122 123 124  | Next Page >

  • Key Techniques For Search Engine Optimization in 2010

    The art of creating web pages that rank highly in search engine results is called Search Engine Optimization, or SEO. It is accomplished by optimizing certain elements or sections in the HTML code of each page, which the search engines specifically read. The level of optimization helps determine the amount of free referral traffic a page receives.
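
    To make the idea concrete, below is a minimal sketch of the kind of HTML elements search engines typically read; the keyword and text are placeholders, and this illustrates the general technique rather than guidance taken from the article itself:

        <!-- Illustrative only: elements commonly tuned for SEO -->
        <head>
          <title>Primary Keyword - Brand Name</title>
          <meta name="description" content="Concise summary shown in search results.">
        </head>
        <body>
          <h1>Primary Keyword in the Main Heading</h1>
        </body>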

    Read the article

  • Backup Azure Tables, schedule Azure scripts… and more

    - by Herve Roggero
    Well – months of effort are now officially over… or should I say it's just the beginning? Enzo Cloud Backup 2.0 (beta) is now officially out! This tool will let you do the following:

    * Backup SQL Database (and SQL Server, to a limited extent)
    * Backup Azure Tables
    * Restore SQL backups into another SQL environment
    * Restore Azure Tables to Azure Storage or a SQL environment
    * Manage and schedule database maintenance scripts
    * Drop database schema containers (with preview) for SaaS environments
    * Receive alerts (SMTP) when operations complete or fail

    That's it at a high level… but you need to see the flexibility around these features. For example, you can select a specific backup strategy for Azure Tables, allowing faster backup operations when partition keys use GUIDs. You can also call custom stored procedures during the restore operation of Azure Tables, allowing you to transform the data along the way. You can also set a performance threshold during Azure Table backup operations to help you control possible throttling conditions in your Storage Account. Regarding database scripts, you can now define T-SQL scripts and schedule them for execution in a specific order. You can also tell Enzo to execute a pre and post script during Azure Table restore operations against a SQL environment. The backup operation now supports backing up to multiple devices at the same time, so you can execute a backup request to both a local file and a blob simultaneously, guaranteeing that both will contain the exact same data. And given the number of options available, you can save backup definitions for later reuse. The screenshot below backs up Azure Tables to two devices (a blob and a SQL Database). You can also manage your database schemas for SaaS environments that use schema containers to separate customer data. This new edition allows you to see how many objects you have in each schema, back up specific schemas, and even drop all objects in a given schema. For example, the screenshot below shows that the EnzoLog database has 4 user-defined schemas, and the AFA schema has 5 tables and 1 module (stored proc, function, view…). Selecting the AFA schema and trying to delete it will prompt another screen to show which objects will be deleted. As you can see, Enzo Cloud Backup provides amazing capabilities that can help you safeguard your data in SQL Database and Azure Tables, and gives you advanced management functions for your Azure environment. Download a free trial today at http://www.bluesyntax.net.

    Read the article

  • Friday Fun: Shape Fold

    - by Asian Angel
    This week’s game comes with lots of puzzle solving, brain-teasing goodness to keep you busy. On each level you will need to rotate, twist, and/or move the hinged puzzle pieces into their proper shape. Do you have the patience and skill to solve all the puzzles or will you be forced to admit defeat?

    Read the article

  • GDD-BR 2010 [1D] Tim Bray - Android Ecosystem and What's New

    GDD-BR 2010 [1D] Tim Bray - Android Ecosystem and What's New. Speaker: Tim Bray. Track: Android. Time slot: D [13:50 - 14:35]. Room: 1. Level: 101. This talk combines an introduction to the Android ecosystem with a description of what's new in it. The ecosystem includes the technology, developer community, Android Market, and of course the huge population of Android users.

    Read the article

  • Making sense of the Game State manager tutorial?

    - by Johnny Quest
    I have come across the Game State Management tutorial at http://create.msdn.com/en-US/education/catalog/sample/game_state_management because I thought this would be a good place to start a game. I have added a new screen, but I am still a bit lost on how everything works. When I make my game, do I only need one additional screen just for gameplay, or should I have a different screen for each level?
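
    For orientation, here is a rough sketch of the usual pattern; it assumes the sample's GameScreen base class and ScreenManager, and the level-loading helper named here is hypothetical. The common answer is one gameplay screen parameterized by level, rather than one screen class per level:

        using System;
        using Microsoft.Xna.Framework;
        using GameStateManagement; // namespace of the sample's screen classes

        // One screen for all gameplay; the level is just data passed in.
        class GameplayScreen : GameScreen
        {
            private readonly int levelIndex;

            public GameplayScreen(int levelIndex)
            {
                this.levelIndex = levelIndex;
                TransitionOnTime = TimeSpan.FromSeconds(1.5);
                TransitionOffTime = TimeSpan.FromSeconds(0.5);
            }

            public override void LoadContent()
            {
                // Hypothetical helper: pull in the assets/layout for this level.
                // LoadLevelData(levelIndex);
            }

            public override void Update(GameTime gameTime, bool otherScreenHasFocus,
                                        bool coveredByOtherScreen)
            {
                base.Update(gameTime, otherScreenHasFocus, coveredByOtherScreen);
                // When the level is complete, swap in the next one:
                // ScreenManager.AddScreen(new GameplayScreen(levelIndex + 1), ControllingPlayer);
            }
        }

    Menus, pause overlays, and the like do get their own screen classes in the sample, but levels are normally content, not screens.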

    Read the article

  • SQL Server v.Next (Denali) : More on contained databases and "contained users"

    - by AaronBertrand
    One of the reasons for contained databases (see my previous post) is to allow for a more seamless transition when moving a database from one server to another. One of the biggest complications in doing so is making sure that all of the logins are in place on the new server. Contained databases help solve this issue by creating a new type of user: a database-level user with a password. I want to stress that this is not the same concept as a user without a login, which serves a completely different...(read more)
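
    As a hedged sketch of what that looks like in practice (the user name, password, and connection details below are illustrative, not taken from Aaron's post), the user is created inside the database itself, and clients then authenticate directly against that database:

        using System.Data.SqlClient;

        class ContainedUserDemo
        {
            static void Main()
            {
                // Run once by an administrator, inside the contained database:
                // the user lives entirely in the database -- no server login.
                // CREATE USER AppUser WITH PASSWORD = 'Str0ng!Passw0rd';

                // Clients must name the database explicitly, because
                // authentication now happens at the database level.
                const string connStr =
                    "Server=tcp:myserver;Database=MyContainedDb;" +
                    "User ID=AppUser;Password=Str0ng!Passw0rd;";

                using (var conn = new SqlConnection(connStr))
                {
                    conn.Open(); // authenticated by the database, not the server
                }
            }
        }

    Moving the database then moves the user with it, which is exactly the seamless-transition benefit the post describes.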

    Read the article

  • When to favor ASP.NET WebForms over MVC

    - by P.Brian.Mackey
    I know Microsoft has said "ASP.NET MVC is not a replacement for WebForms". Some developers say WebForms is faster to develop with than MVC, but I believe this all comes down to comfort level with the technology, so I don't want any answers in that direction. Given that ASP.NET MVC gives developers more control over their applications, why is WebForms not considered obsolete? When should I favor WebForms over MVC for new development?

    Read the article

  • How To Use Regular Expressions for Data Validation and Cleanup

    You need to provide data validation at the server level for complex strings like phone numbers, email addresses, etc. You may also need to do data cleanup/standardization before moving data from source to target. Although SQL Server provides a fair number of string functions, the code developed with these built-in functions can become complex and hard to maintain or reuse.
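
    The usual way to bring full regular expressions into SQL Server is a SQLCLR scalar function wrapping .NET's Regex; the sketch below shows that common pattern (it is an assumption that the article takes this approach -- the excerpt does not say), and the function name is a placeholder:

        using System.Data.SqlTypes;
        using System.Text.RegularExpressions;
        using Microsoft.SqlServer.Server;

        public partial class UserDefinedFunctions
        {
            // Deployed with CREATE ASSEMBLY / CREATE FUNCTION, then usable
            // in CHECK constraints, WHERE clauses, etc.
            [SqlFunction(IsDeterministic = true, IsPrecise = true)]
            public static SqlBoolean RegexIsMatch(SqlString input, SqlString pattern)
            {
                if (input.IsNull || pattern.IsNull)
                    return SqlBoolean.Null;

                return Regex.IsMatch(input.Value, pattern.Value,
                                     RegexOptions.CultureInvariant);
            }
        }

    A validation call then looks like any scalar UDF, e.g. dbo.RegexIsMatch(Phone, '^\+?[0-9]{10,15}$').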

    Read the article

  • Is hiring a "chief intern" a good idea?

    - by dukeofgaming
    I'm starting an internship program for our software department and I was wondering about creating a position ("chief intern", intern supervisor, or whatever one should call it) with the following responsibilities:

    * Train interns
    * Coach interns
    * Manage projects and tasks for interns
    * Supervise interns' work in terms of rhythm and quality
    * Act as a liaison between the main team's needs and interns' performance/aspirations
    * Evaluate and facilitate interns' progress when they want to grab a higher-level domain-specific task (at this point, a main dev team member can do mentoring)
    * Get freely involved in the main team's software development tasks so that he himself can grow, with full mentorship from the main dev team

    I'm thinking that an apprentice-level engineer (below Jr., or Jr.; but a graduate working full-time) can handle this for a while (he will be trained by the main dev team first), until one of two things happens:

    * He/she decides to move on to the main dev team, recommending an appropriate replacement (or I find another one as a new hire)
    * He/she keeps leading the interns while still being able to grow to Jr. Eng., Eng., Sr. Eng.

    I know the notion of a "chief intern" is common within the medical world, but I don't really know about it in the software world (I was a freelancer for most of my university years). A side intention is that, if this organically becomes a high-rotation position because the intern supervisor wants to join the main dev team, it could help interns who aspire to this position emerge as leaders. My main intention, though, is removing distractions from the main team without making the interns suffer a lack of attention, which could lead to boredom and poor intern retention. Is this "chief intern" idea common (or at least good)? Are there any obvious risks to it that I might not be seeing?

    Edit: I have a draft plan for the kind of work the interns would be doing: Are R&D mini-projects a good activity for interns?

    Edit #2: My intention is not keeping them isolated, but having someone focus on giving them attention when we cannot.

    Edit #3: I'm now convinced it is a good idea, but I will take the organic approach to hiring someone for this position: do it myself until I cannot. This way I'll know better what to expect from a person I hire for this role in the future, as well as what works and what doesn't with interns.

    Read the article

  • Engagement: Don’t Forget Your Employees!

    - by Kellsey Ruppel
    By Mark Brown, Sr. Director, Oracle WebCenter. This week we want to focus on employee engagement and how it is critical to your business. Today we hear and read a great deal about "customer engagement" – and rightly so; it is those customers, whether they be traditional paying customers, citizens, students, club members, or whomever, that are "paying the bills". A more engaged customer is more likely to make it easier to pay those bills by buying more, giving good reviews, or spreading the word of how wonderful their experience was. But what about those who provide those services and those who design and make those goods; why are they all too often left out of conversations concerning engagement? In fact, it is critical that we consider our employees as customers, since they use the internal systems that run your organization the same way customers use external systems.

    Studies have shown that an organization in which employees feel "engaged", are better able to make decisions and do their jobs, and are connected to their peers delivers better returns to its stakeholders (shareholders). On the surface this seems obvious: happy employees are more productive employees. But it leads to the question: how many of our existing policies, systems, and processes are actually reducing that level of engagement?

    Let's look at a couple of examples. If posting new information that may be of great value to everyone in the larger organization is hard to do because we use an antiquated system, then we're making it hard to share and increasing the potential for duplicate work. If it is not trivially obvious how to create and publish a post, then chances are very high that I'll put it at the bottom of my queue. And when critical information is spread across various systems, intranet sites, workgroups, and people's inboxes, it is very hard to learn and grow from that information. These may sound trivial, but how often do we push things off not because they are intellectually challenging (we may have the answer at our fingertips) but because it is hard to make that information readily available?

    If an engaged employee is a productive employee, then what can we do to increase their level of engagement? We can start by looking for opportunities to provide self-documenting, self-service solutions. Our newer employees grew up using simplified web interfaces every day, and they loathe calling a help desk unless it is the last resort. Sadly, many of our enterprise applications have not kept pace, and we all still have processes that are based on sending an email, like discount approvals, vacation requests, or even offer-letter approvals. My suggestion is to pick one highly visible, high-impact process where employees are either reticent to execute the process or openly complain about how cumbersome it is, and look at the mechanism for that process. If there are better ways, streamlined steps, or better UIs that could be used, then you have a candidate to reconfigure that process and make it more engaging. Looking to better engage your employees? Start here!

    Read the article

  • Generate DROP statements for all extended properties

    - by jamiet
    This evening I have been attempting to migrate an existing on-premise database to SQL Azure using the wizard that is built in to SQL Server Management Studio (SSMS). When I did so I received the following error:

    The following objects are not supported = [MS_Description] = Extended Property

    Evidently databases containing extended properties cannot be migrated using this particular wizard, so I set about removing all of the extended properties. Unfortunately there were over a thousand of them, so I needed a better way than simply deleting each and every one of them manually. I found a couple of resources online that went some way toward this:

    * Drop all extended properties in a MSSQL database by Angelo Hongens
    * Modifying and deleting extended properties by Adam Aspin

    Unfortunately neither provided a script that exactly suited my needs. Angelo's covered extended properties on tables and columns, however I had other objects that had extended properties on them. Adam's looked more complete, but when I ran it I got an error:

    Msg 468, Level 16, State 9, Line 78
    Cannot resolve the collation conflict between "Latin1_General_100_CS_AS" and "Latin1_General_CI_AS" in the equal to operation.

    So, both great resources, but I wasn't able to use either on their own to get rid of all of my extended properties. Hence, I combined the excellent work that Angelo and Adam had provided in order to manufacture my own script, which did successfully manage to generate calls to sp_dropextendedproperty for all of my extended properties. If you think you might be able to make use of such a script then feel free to download it from https://skydrive.live.com/redir.aspx?cid=550f681dad532637&resid=550F681DAD532637!16707&parid=550F681DAD532637!16706&authkey=!APxPIQCatzC7BQ8.

    This script will remove extended properties on tables, columns, check constraints, default constraints, views, sprocs, foreign keys, primary keys, table triggers, UDF parameters, sproc parameters, databases, schemas, database files and filegroups. If you have any object types with extended properties on them that are not in that list then consult Adam's aforementioned article – it should prove very useful. I repeat here the message that I have placed at the top of the script:

    /*
    This script will generate calls to sp_dropextendedproperty for every extended property that exists in your database. Actually, a caveat: I don't promise that it will catch each and every extended property that exists, but I'm confident it will catch most of them! It is based on this: http://blog.hongens.nl/2010/02/25/drop-all-extended-properties-in-a-mssql-database/ by Angelo Hongens. Also had lots of help from this: http://www.sqlservercentral.com/articles/Metadata/72609/ by Adam Aspin. Adam actually provides a script at that link to do something very similar but when I ran it I got an error:
    Msg 468, Level 16, State 9, Line 78
    Cannot resolve the collation conflict between "Latin1_General_100_CS_AS" and "Latin1_General_CI_AS" in the equal to operation.
    So I put together this version instead. Use at your own risk.
    Jamie Thomson 2012-03-25
    */

    Hope this is useful to someone!
    @Jamiet
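
    For a flavor of the approach, here is a simplified sketch in the spirit of Angelo's original table/column script (it is not Jamie's more complete version, and the connection string is a placeholder):

        using System;
        using System.Data.SqlClient;

        class DropExtendedPropertyGenerator
        {
            static void Main()
            {
                const string connStr =
                    "Server=.;Database=MyDb;Integrated Security=true;";

                // class = 1 covers objects and columns; minor_id = 0 means
                // the property sits on the table itself, not on a column.
                const string sql = @"
                    SELECT ep.name, s.name, t.name, c.name
                    FROM sys.extended_properties AS ep
                    JOIN sys.tables  AS t ON ep.major_id = t.object_id
                    JOIN sys.schemas AS s ON t.schema_id = s.schema_id
                    LEFT JOIN sys.columns AS c
                           ON ep.major_id = c.object_id AND ep.minor_id = c.column_id
                    WHERE ep.class = 1;";

                using (var conn = new SqlConnection(connStr))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    conn.Open();
                    using (var rdr = cmd.ExecuteReader())
                    {
                        while (rdr.Read())
                        {
                            string drop =
                                "EXEC sp_dropextendedproperty @name = N'" + rdr.GetString(0) + "'" +
                                ", @level0type = N'SCHEMA', @level0name = N'" + rdr.GetString(1) + "'" +
                                ", @level1type = N'TABLE', @level1name = N'" + rdr.GetString(2) + "'";
                            if (!rdr.IsDBNull(3)) // column-level property
                                drop += ", @level2type = N'COLUMN', @level2name = N'" + rdr.GetString(3) + "'";
                            Console.WriteLine(drop + ";");
                        }
                    }
                }
            }
        }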

    Read the article

  • Is there a good resource for learning Rails in depth? [closed]

    - by Kocheez
    I've been developing Rails applications for about 6 months now (I was originally a Java developer) and I'm getting familiar enough with building applications that I want to take my Rails knowledge to the next level. The majority of books and learning materials I've found deal mostly with "how to use Rails" rather than "how it works". I was wondering if there are any good resources for getting a really in-depth understanding of the framework, such as how modules and classes are loaded, the underlying architecture, how servers interact, etc. Any tips on learning more would be greatly appreciated.

    Read the article

  • BizTalk ESB Toolkit: Core Components and Examples

    - by Rajesh Charagandla
    The BizTalk ESB Toolkit 2.0 provides a stable and powerful platform for services that can change as fast as your business needs. The main purpose of an enterprise service bus (ESB) is to provide a common mediation layer (the "bus") through which all services connect. By doing so, not only can many of the problems of point-to-point service connectivity be resolved, but a new level of agile service delivery can be achieved. Author: Jon Flanders. This document can be downloaded from here.

    Read the article

  • Thread placement policies on NUMA systems - update

    - by Dave
    In a prior blog entry I noted that Solaris used a "maximum dispersal" placement policy to assign nascent threads to their initial processors. The general idea is that threads should be placed as far away from each other as possible in the resource topology in order to reduce resource contention between concurrently running threads. This policy assumes that resource contention -- pipelines, memory channel contention, destructive interference in the shared caches, etc. -- will likely outweigh (a) any potential communication benefits we might achieve by packing our threads more densely onto a subset of the NUMA nodes, and (b) benefits of NUMA affinity between memory allocated by one thread and accessed by other threads. We want our threads spread widely over the system and not packed together. Conceptually, when placing a new thread, the kernel picks the least loaded NUMA node (the node with the lowest aggregate load average), and then the least loaded core on that node, etc. Furthermore, the kernel places threads onto resources -- sockets, cores, pipelines, etc. -- without regard to the thread's process membership. That is, initial placement is process-agnostic. Keep reading, though: this description is incorrect.

    On Solaris 10 on a SPARC T5440 with 4 x T2+ NUMA nodes, if the system is otherwise unloaded and we launch a process that creates 20 compute-bound concurrent threads, then typically we'll see a perfect balance with 5 threads on each node. We see similar behavior on an 8-node x86 x4800 system, where each node has 8 cores and each core is 2-way hyperthreaded. So far so good; this behavior seems in agreement with the policy I described in the first paragraph.

    I recently tried the same experiment on a 4-node T4-4 running Solaris 11. Both the T5440 and T4-4 are 4-node systems that expose 256 logical thread contexts. To my surprise, all 20 threads were placed onto just one NUMA node while the other 3 nodes remained completely idle. I checked the usual suspects, such as processor sets inadvertently left around by colleagues, processors left offline, and power management policies, but the system was configured normally. I then launched multiple concurrent instances of the process and, interestingly, all the threads from the 1st process landed on one node, all the threads from the 2nd process landed on another node, and so on. This happened even if I interleaved thread creation between the processes, so I was relatively sure the effect didn't relate to thread creation time, but rather that placement was a function of process membership.

    At this point I consulted the Solaris sources and talked with folks in the Solaris group. The new Solaris 11 behavior is intentional. The kernel is no longer using a simple maximum dispersal policy, and thread placement is process membership-aware. Now, even if other nodes are completely unloaded, the kernel will still try to pack new threads onto the home lgroup (socket) of the primordial thread until the load average of that node reaches 50%, after which it will pick the next least loaded node as the process's new favorite node for placement. On the T4-4 we have 64 logical thread contexts (strands) per socket (lgroup), so if we launch 48 concurrent threads we will find 32 placed on one node and 16 on some other node. If we launch 64 threads we'll find 32 and 32. That means we can end up with our threads clustered on a small subset of the nodes in a way that's quite different from what we've seen on Solaris 10.

    So we have a policy that allows process-aware packing but reverts to spreading threads onto other nodes if a node becomes too saturated. It turns out this policy was enabled in Solaris 10, but certain bugs suppressed the mixed packing/spreading behavior.

    There are configuration variables in /etc/system that allow us to dial the affinity between nascent threads and their primordial thread up and down: see lgrp_expand_proc_thresh, specifically. In the OpenSolaris source code the key routine is mpo_update_tunables(). This method reads the /etc/system variables and sets up some global variables that will subsequently be used by the dispatcher, which calls lgrp_choose() in lgrp.c to place nascent threads. lgrp_expand_proc_thresh controls how loaded an lgroup must be before we'll consider homing a process's threads to another lgroup. Tune this value lower to have it spread your process's threads out more.

    To recap, the 'new' policy is as follows. Threads from the same process are packed onto a subset of the strands of a socket (50% for T-series). Once that socket reaches the 50% threshold the kernel then picks another preferred socket for that process. Threads from unrelated processes are spread across sockets. More precisely, different processes may have different preferred sockets (lgroups). Beware that I've simplified and elided details for the purposes of explication; the truth is in the code.

    Remarks:

    * It's worth noting that initial thread placement is just that. If there's a gross imbalance between the load on different nodes then the kernel will migrate threads to achieve a better and more even distribution over the set of available nodes. Once a thread runs and gains some affinity for a node, however, it becomes "stickier" under the assumption that the thread has residual cache residency on that node, and that memory allocated by that thread resides on that node given the default "first-touch" page-level NUMA allocation policy. Exactly how the various policies interact, and which have precedence under what circumstances, could be the topic of a future blog entry.

    * The scheduler is work-conserving.

    * The x4800 mentioned above is an interesting system. Each of the 8 sockets houses an Intel 7500-series processor. Each processor has 3 coherent QPI links and the system is arranged as a glueless 8-socket twisted ladder "mobius" topology. Nodes are either 1 or 2 hops distant over the QPI links. As an aside, the mapping of logical CPUIDs to physical resources is rather interesting on Solaris/x4800. On SPARC/Solaris the CPUID layout is strictly geographic, with the highest-order bits identifying the socket, the next lower bits identifying the core within that socket, followed by the pipeline (if present) and finally the logical thread context ("strand") on the core. But on Solaris on the x4800 the CPUID layout is as follows: bit [6:6] identifies the hyperthread on a core; bits [5:3] identify the socket, or package in Intel terminology; bits [2:0] identify the core within a socket. Such low-level details should be of interest only if you're binding threads -- a bad idea, the kernel typically handles placement best -- or if you're writing NUMA-aware code that's aware of the ambient placement and makes decisions accordingly.

    * Solaris introduced the so-called critical-threads mechanism, which is expressed by putting a thread into the FX scheduling class at priority 60. The critical-threads mechanism applies to placement on cores, not on sockets, however. That is, it's an intra-socket policy, not an inter-socket policy.

    * Solaris 11 introduces the Power Aware Dispatcher (PAD), which packs threads instead of spreading them out in an attempt to keep sockets or cores at lower power levels. Maximum dispersal may be good for performance but is anathema to power management. PAD is off by default, but power management policies constitute yet another confounding factor with respect to scheduling and dispatching.

    * If your threads communicate heavily -- one thread reads cache lines last written by some other thread -- then the new dense packing policy may improve performance by reducing traffic on the coherent interconnect. On the other hand, if the threads in your process communicate rarely, then it's possible the new packing policy might result in contention on shared computing resources. Unfortunately there's no simple litmus test that says whether packing or spreading is optimal in a given situation. The answer varies by system load, application, number of threads, and platform hardware characteristics. Currently we don't have the necessary tools and sensoria to decide at runtime, so we're reduced to an empirical approach where we run trials and try to decide on a placement policy. The situation is quite frustrating. Relatedly, it's often hard to determine just the right level of concurrency to optimize throughput. (Understanding constructive vs. destructive interference in the shared caches would be a good start. We could augment the lines with a small tag field indicating which strand last installed or accessed a line. Given that, we could augment the CPU with performance counters for misses where a thread evicts a line it installed vs. misses where a thread displaces a line installed by some other thread.)
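
    For reference, Solaris tunables of this kind are set in /etc/system and take effect at the next boot. A minimal sketch follows; the value is purely illustrative, and you should consult the tunable's documentation for its real units and default before changing anything:

        * /etc/system fragment -- illustrative value only
        set lgrp_expand_proc_thresh = 0x100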

    Read the article

  • .BasePermissions enumerations options in Microsoft.SharePoint.SPRoleDefinition

    - by steve schofield
    I was trying to add a new permission level to SharePoint 2007 and needed to know all the enumeration options. Here is the list: "Specify one of the following enumeration values and try again. The possible enumeration values are EmptyMask, ViewListItems, AddListItems, EditListItems, DeleteListItems, ApproveItems, OpenItems, ViewVersions, DeleteVersions, CancelCheckout, ManagePersonalViews, ManageLists, ViewFormPages, Open, ViewPages, AddAndCustomizePages, ApplyThemeAndBorder, ApplyStyleSheets, ViewUsageData, CreateSSCSite, ManageSubwebs, CreateGroups, ManagePermissions, BrowseDirectories, BrowseUserInfo, AddDelPrivateWebParts, UpdatePersonalWebParts, ManageWeb, UseClientIntegration, UseRemoteAPIs, ManageAlerts, CreateAlerts, EditMyUserInfo, EnumeratePermissions, FullMask."
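
    As a quick sketch of where these values get used (standard SharePoint 2007 object model; the site URL, role name, and the particular flags chosen below are placeholders), a custom permission level combines SPBasePermissions flags on an SPRoleDefinition:

        using Microsoft.SharePoint;

        class AddPermissionLevel
        {
            static void Main()
            {
                using (SPSite site = new SPSite("http://server/sites/demo"))
                using (SPWeb web = site.RootWeb)
                {
                    SPRoleDefinition roleDef = new SPRoleDefinition();
                    roleDef.Name = "View Only Custom";
                    roleDef.Description = "Read items without customization rights";

                    // Combine any of the enumeration values listed above.
                    roleDef.BasePermissions = SPBasePermissions.ViewListItems
                                            | SPBasePermissions.OpenItems
                                            | SPBasePermissions.ViewPages
                                            | SPBasePermissions.Open;

                    web.RoleDefinitions.Add(roleDef);
                }
            }
        }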

    Read the article

  • SQL University: Parallelism Week - Part 3, Settings and Options

    - by Adam Machanic
    Congratulations! You've made it back for the third and final installment of Parallelism Week here at SQL University. So far we've covered the fundamentals of multitasking vs. parallel processing and delved into how parallel query plans actually work. Today we'll take a look at the settings and options that influence intra-query parallelism and discuss how best to set things up in various situations. Instance-Level Configuration: Your database server probably has more than one logical processor....(read more)
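
    The excerpt cuts off just as the configuration discussion begins, but for context (a neutral illustration, not content from Adam's article), the two instance-level settings usually meant here are visible in sys.configurations:

        using System;
        using System.Data.SqlClient;

        class ParallelismSettings
        {
            static void Main()
            {
                const string connStr = "Server=.;Integrated Security=true;"; // adjust

                // The two instance-level knobs governing intra-query parallelism.
                const string sql = @"
                    SELECT name, value_in_use
                    FROM sys.configurations
                    WHERE name IN (N'max degree of parallelism',
                                   N'cost threshold for parallelism');";

                using (var conn = new SqlConnection(connStr))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    conn.Open();
                    using (var rdr = cmd.ExecuteReader())
                        while (rdr.Read())
                            Console.WriteLine("{0} = {1}", rdr["name"], rdr["value_in_use"]);
                }
            }
        }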

    Read the article

  • Difference between Windows and Linux development environments?

    - by Ryan
    I have an interview coming up soon for a Business Analyst position, and the recruiter mentioned some feedback from a prior candidate, who said the interviewers asked him what the difference between a Windows and a Linux development environment was. Are there some high-level things I need to be aware of, from a business point of view, when working with a development team or designing an application on Windows vs. Linux?

    Read the article

  • Can LittleBigPlanet2's engine be used for other kinds of games?

    - by Bill
    LittleBigPlanet2 just came out. I've worked with the original LBP level editor a bit and really enjoyed it. I've read that LBP2's in-game feature set is much richer; is it possible to use these advanced features to create different sorts of games other than just a regular platformer? I imagine that something along the lines of a Breakout clone would definitely be manageable, but I'm interested in hearing more about the capabilities of the platform.

    Read the article

  • Friday Fun: Hexep

    - by Asian Angel
    This week’s game starts off simple enough, but will quickly challenge your problem solving skills as you work to fill in the hex chains with color on each level. Do you have the patience and skill to succeed at this wicked brain-teaser or will you end up screaming in frustration and defeat?

    Read the article

  • IntelliSense for Razor Hosting in non-Web Applications

    - by Rick Strahl
    When I posted my Razor Hosting article a couple of weeks ago I got a number of questions on how to get IntelliSense to work inside of Visual Studio while editing your templates. The answer to this question is mainly dependent on how Visual Studio recognizes assemblies, so a little background is required. If you open a template on its own as a standalone file, by clicking on it say in Explorer, Visual Studio will open up with the template in the editor, but you won't get any IntelliSense on any of the related assemblies that you might be using. It'll give IntelliSense on the base System namespace, but not on your imported assembly types. This makes sense: Visual Studio has no idea what the assembly associations for the single file are. There are two options available to you to make IntelliSense work for templates:

    * Add the templates as included files to your non-Web project
    * Add a BIN folder to your template's folder and add all assemblies required there

    Including Templates in your Host Project

    By including templates in your Razor hosting project, Visual Studio will pick up the project's assembly references and make IntelliSense available for any of the custom types in your project and in your templates. To see this work I moved the \Templates folder from the samples from the Debug\Bin folder into the project root and included the templates in the WinForm sample project. Here's what this looks like in Visual Studio after the templates have been included.

    Notice that I take my original example and type cast the Context object to the specific type that it actually represents – namely CustomContext – by using a simple code block:

        @{
            CustomContext Model = Context as CustomContext;
        }

    After that assignment my Model local variable is in scope and IntelliSense works as expected. Note that you also will need to add any namespaces with the using command in this case:

        @using RazorHostingWinForm

    which has to be defined at the very top of a Razor document. BTW, while you can only pass in a single Context 'parameter' to the template with the default template I've provided, realize that you can also assign a complex object to Context. For example, you could have a container object that references a variety of other objects which you can then cast to the appropriate types as needed:

        @{
            ContextContainer container = Context as ContextContainer;
            CustomContext Model = container.Model;
            CustomDAO DAO = container.DAO;
        }

    and so forth.

    IntelliSense for your Custom Template

    Notice also that you can get IntelliSense for the top-level template by specifying an inherits tag at the top of the document:

        @inherits RazorHosting.RazorTemplateFolderHost

    By specifying the above you can then get IntelliSense on your base template's properties. For example, in my base template there are Request and Response objects. This is very useful, especially if you end up creating custom templates that include your custom business objects, as you can effectively see full IntelliSense from the 'page' level down. For Html Help Builder, for example, I'd have a Help object on the page, and assuming I have the references available I can see all the way into that Help object without even having to do anything fancy.

    Note that the @inherits key is a GREAT and easy way to override the base template you normally specify as the default template. It allows you to create a custom template, and as long as it inherits from the base template it'll work properly. Since the last post I've also made some changes in the base template that allow hooking up some simple initialization logic, so it gets much easier to create custom templates and hook up custom objects with an InitializeTemplate() hook function that gets called with the Context and a Configuration object. These are objects you can pass in at runtime from your host application and then assign to custom properties on your template. For example, the default implementation for RazorTemplateFolderHost does this:

        public override void InitializeTemplate(object context, object configurationData)
        {
            // Pick up configuration data and stuff into Request object
            RazorFolderHostTemplateConfiguration config =
                configurationData as RazorFolderHostTemplateConfiguration;

            this.Request.TemplatePath = config.TemplatePath;
            this.Request.TemplateRelativePath = config.TemplateRelativePath;

            // Just use the entire ConfigData as the model, but in theory
            // configData could contain many objects or values to set on
            // template properties
            this.Model = config.ConfigData as TModel;
        }

    to set up a strongly typed Model and the Request object. You can do much more complex hookups here, of course, and create complex base template pages that contain all the objects that you need in your code with strong typing.

    Adding a Bin folder to your Template's Root Path

    Including templates in your host project works if you own the project and you're the only one modifying the templates. However, if you are distributing the Razor engine as a templating/scripting solution as part of your application or development tool, the original project is likely not available and so that approach is not practical. Another option you have is to add a Bin folder and add all the related assemblies into it. You can also add a web.config file with assembly references for any GAC'd assembly references that need to be associated with the templates. Between the web.config and bin folder, Visual Studio can figure out how to provide IntelliSense. The Bin folder should contain:

    * The RazorHosting.dll
    * Your host project's EXE or DLL – renamed to .dll if it's an .exe
    * Any external (bin folder) dependent assemblies

    Note that you most likely also want a reference to the host project if it contains references that are going to be used in templates. Visual Studio doesn't recognize an EXE reference, so you have to rename the EXE to DLL to make it work. Apparently the binary signatures of EXE and DLL files are identical and it just works – learn something new every day…

    For GAC assembly references you can add a web.config file to your template root. The web.config file then should contain full assembly references for any GAC components:

        <configuration>
          <system.web>
            <compilation debug="true">
              <assemblies>
                <add assembly="System.Web.Mvc, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
                <add assembly="System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
                <add assembly="System.Web.Extensions, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
              </assemblies>
            </compilation>
          </system.web>
        </configuration>

    And with that you should get full IntelliSense. Note that if you add a BIN folder and you also have the templates in your Visual Studio project, Visual Studio will complain about reference conflicts, as it's effectively seeing both the project references and the ones in the bin folder. So it's probably a good idea to use one or the other but not both at the same time :-)

    Seeing IntelliSense in your Razor templates is a big help for users of your templates. If you're shipping an application-level scripting solution especially, it'll be really useful for your template consumers/users to be able to get some quick help on creating customized templates – after all, that's what templates are all about: easy customization. Making sure that everything is referenced in your bin folder and web.config is a good idea, and it's great to see that Visual Studio (and presumably WebMatrix/Visual Web Developer as well) will be able to pick up your custom IntelliSense in Razor templates.

    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in Razor.

    Read the article

  • GDD-BR 2010 [2G] So What's A Web App? Introduction to the Chrome Web Store

    GDD-BR 2010 [2G] So What's A Web App? Introduction to the Chrome Web Store. Speaker: Eric Bidelman. Track: Chrome and HTML5. Time slot: G [16:30 - 17:15]. Room: 2. Level: 151. What does it mean to install a web app? This session will give an overview of how to build a beautiful application for the Chrome Web Store, monetize it, and distribute it to 70 million users!

    Read the article

  • Query Tuning Mastery at PASS Summit 2012: The Demos

    - by Adam Machanic
    For the second year in a row, I was asked to deliver a 500-level "Query Tuning Mastery" talk in room 6E of the Washington State Convention Center for the PASS Summit. (Here's some information about last year's talk, on workspace memory.) And for the second year in a row, I had to deliver said talk at 10:15 in the morning, in a room used as overflow for the keynote, following a keynote speaker that didn't stop speaking on time. Frustrating! Last Thursday, after very, very quickly setting up and...(read more)

    Read the article

< Previous Page | 113 114 115 116 117 118 119 120 121 122 123 124  | Next Page >