Search Results


  • Library Organization in .NET

    - by Greg Ros
    I've written a .NET bitwise operations library as part of my projects (stuff ranging from getting the MSB set to some more complicated bitwise transformations) and I mean to release it as free software. I'm a bit confused about a design aspect of the library, though. Many of the methods/transformations in the library come with different endianness. A simple example is a getBitAt method that regards index 0 as the least significant bit, or the most significant bit, depending on the version used. In practice, I've found that using separate functions for different endianness results in much more comprehensible and reusable code than assuming all operations are little-endian or something. I'm really stumped regarding how best to package the library. Should I have methods that have LE and BE versions take an enum parameter in their signature, e.g. Endianness.Little, Endianness.Big? Should I have different static classes with identically named methods, such as MSB.GetBit and LSB.GetBit? On a much wider note, is there a standard I could use in cases like this? Some guide? Is my design issue trivial? I have a perfectionist bent, and I sometimes get stuck on tricky design issues like this... Note: I've sort of realized I'm using endianness somewhat colloquially to refer to the order/place value of digital component parts (be they bits, bytes, or words) in a larger whole, in any setting. I'm not talking about machine-level endianness or serial transmission endianness. Just about place-value semantics in general. So there isn't a context of targeting different machines/transmission techniques or something.
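
    To make the two options concrete, here is a minimal sketch of both API shapes (the names Endianness, BitOps, MSB and LSB are hypothetical placeholders, not from an existing library):

    // Option 1: a single method taking an enum parameter
    public enum Endianness { Little, Big }

    public static class BitOps
    {
        // Index 0 is the LSB or the MSB depending on the endianness argument.
        public static bool GetBit(uint value, int index, Endianness endianness)
        {
            int shift = (endianness == Endianness.Little) ? index : 31 - index;
            return ((value >> shift) & 1u) != 0;
        }
    }

    // Option 2: two static classes with identically named methods
    public static class LSB
    {
        // Index 0 is the least significant bit.
        public static bool GetBit(uint value, int index)
        {
            return ((value >> index) & 1u) != 0;
        }
    }

    public static class MSB
    {
        // Index 0 is the most significant bit.
        public static bool GetBit(uint value, int index)
        {
            return ((value >> (31 - index)) & 1u) != 0;
        }
    }

    The second shape reads better at call sites (MSB.GetBit(x, 0)), while the first makes the endianness choice a runtime value that can be passed around.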

    Read the article

  • Overwhelmed by complex C#/ASP.NET project in Visual Studio 2008

    - by Darren Cook
    I have been hired as a junior programmer to work on projects that extend existing functionality in a very large, complex solution. The code base consists of C#, ASP.NET, jQuery, javascript, html and xml. I have some knowledge of all these in addition to fair knowledge of object-oriented programming and its fundamental concepts of inheritance, abstraction, polymorphism and encapsulation. I can follow code up through its base classes, interfaces, abstract classes and understand a large part of the code that I read while doing this. However, this solution is so humongous and so many things get tied together whenever I navigate through the code that I feel absolutely overwhelmed. I often find myself unable to fully follow everything that is going on with objects being serialized, large amounts of C# and javascript operating on the same pages and methods being called from template files that consist mainly of markup. I love learning about code, but trying to deal with this really stresses me out. Additionally, I do know that a significant amount of unit testing has been done but I know nothing about unit testing or how to utilize it. Any advice anyone could offer me regarding dealing with a large code base while using Visual Studio 2008 would be greatly appreciated. Are there tools that I can use to help get a handle on what is going on? Perhaps there are things even in Visual Studio that I am not aware of. How can I follow the code to low level functionality in order to get a better grasp of what is going on at a high level?

    Read the article

  • .NET Dependency Management Systems

    - by StriplingWarrior
    I have some .NET projects that are starting to get large enough to merit looking into Dependency Management solutions, so we don't have to copy binaries from one project to another. Here's what I've found so far: NPanday is based on a port of Maven. I can't tell how recently it was worked on, but the last release was in May 2011. NuGet seems to be under active development, and it appears to have support directly from Microsoft. Some people complained that it "only addresses dependency resolution," but I don't know what else it should address, or whether it has added more features since that point. It does appear to have recently added the ability to import binaries as part of the build process so we don't have to commit them to our repositories. Refix appears to still be in Beta, after having received no attention since Sept 2011. Would somebody with recent experience using any of these dependency management tools (or any others that work well) share their experience? Is NuGet mature enough to use for dependency management? If not, what does it lack?
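
    For orientation, this is roughly what the NuGet workflow looks like: a package is installed from the Package Manager Console and recorded in packages.config, so the binaries themselves no longer have to be committed (the package name and version here are only examples):

    PM> Install-Package Newtonsoft.Json

    <?xml version="1.0" encoding="utf-8"?>
    <packages>
      <package id="Newtonsoft.Json" version="4.0.5" />
    </packages>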

    Read the article

  • Best Architecture for ASP.NET WebForms Application

    - by stack man
    I have written an ASP.NET WebForms portal for a client. The project has kind of evolved rather than being properly planned and structured from the beginning. Consequently, all the code is mashed together within the same project and without any layers. The client is now happy with the functionality, so I would like to refactor the code such that I will be confident about releasing the project. As there seems to be many differing ways to design the architecture, I would like some opinions about the best approach to take. FUNCTIONALITY The portal allows administrators to configure HTML templates. Other associated "partners" will be able to display these templates by adding IFrame code to their site. Within these templates, customers can register and purchase products. An API has been implemented using WCF allowing external companies to interface with the system also. An Admin section allows Administrators to configure various functionality and view reports for each partner. The system sends out invoices and email notifications to customers. CURRENT ARCHITECTURE It is currently using EF4 to read/write to the database. The EF objects are used directly within the aspx files. This has facilitated rapid development while I have been writing the site but it is probably unacceptable to keep it like that as it is tightly coupling the db with the UI. Specific business logic has been added to partial classes of the EF objects. QUESTIONS The goal of refactoring will be to make the site scalable, easily maintainable and secure. 1) What kind of architecture would be best for this? Please describe what should be in each layer, whether I should use DTO's / POCO / Active Record pattern etc. 2) Is there a robust way to auto-generate DTO's / BOs so that any future enhancements will be simple to implement despite the extra layers? 3) Would it be beneficial to convert the project from WebForms to MVC?
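
    As a rough illustration of the layering being asked about, here is a minimal sketch of a service layer that maps EF entities to DTOs so the aspx pages never touch the context directly (all names here are hypothetical, not from the actual project):

    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical DTO: a plain object handed to the UI instead of the EF entity
    public class ProductDto
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public decimal Price { get; set; }
    }

    // Hypothetical business-layer service; pages call this instead of using EF directly
    public class ProductService
    {
        public IList<ProductDto> GetProducts()
        {
            // PortalEntities is an assumed EF4 context name
            using (var db = new PortalEntities())
            {
                return db.Products
                         .Select(p => new ProductDto { Id = p.Id, Name = p.Name, Price = p.Price })
                         .ToList();
            }
        }
    }

    With something like this in place the code-behind binds a GridView to ProductService.GetProducts() rather than to the context, which removes the tight coupling between the db and the UI described above.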

    Read the article

  • ASP.NET MVVM Handling multiple Data Transfer Objects on a single page

    - by meffect
    I have an asp.net mvc "edit" page which allows the user to make edits to the parent entity, and then also "create" child entities on the same page. Note: I'm making these data transfer objects up.

    public class CustomerViewModel
    {
        public int Id { get; set; }
        public Byte[] Timestamp { get; set; }
        public string CustomerName { get; set; }
        public etc..
        public CustomerOrderCreateViewModel CustomerOrderCreateViewModel { get; set; }
    }

    In my view I have two html forms: one for Customer "edit" HTTP posts, and the other for CustomerOrder "create" HTTP posts. In the view page, I load the CustomerOrder "create" form using:

    <div id="CustomerOrderCreate">
        @Html.Partial("Vendor/_CustomerOrderCreatePartial", Model.CustomerOrderCreateViewModel)
    </div>

    The CustomerOrder html form action posts to a different controller HttpPost ActionResult than the Customer "edit" ActionResult. My concern is this: on the CustomerOrder controller, in the HttpPost ActionResult

    [HttpPost]
    public ActionResult Create(CustomerOrderCreateViewModel vm)
    {
        if (!ModelState.IsValid)
        {
            return [What Do I Return Here]
        }
        ...[Persist to database code]...
    }

    I don't know what to return if the model state isn't valid. Right now it's not a problem, because jQuery unobtrusive validation handles validation on the client. But what if I need more complex validation (ie: the server needs to handle the validation)?
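
    For reference, a sketch of one common pattern for the invalid-ModelState case (one option, not necessarily the right fit here): re-render the same partial with the posted model so the server-side validation messages appear, assuming the create form is posted via AJAX:

    [HttpPost]
    public ActionResult Create(CustomerOrderCreateViewModel vm)
    {
        if (!ModelState.IsValid)
        {
            // Re-render the create form with the posted values and
            // the server-side validation errors attached.
            return PartialView("Vendor/_CustomerOrderCreatePartial", vm);
        }
        // ...persist to database...
        return RedirectToAction("Edit", "Customer");
    }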

    Read the article

  • Selectively Including files in C#.net web application [migrated]

    - by segnosaur
    I am attempting to modify an application with the following characteristics:
    - Written in C#.net
    - Using Visual Studio 2010
    The application uses a Master sheet to maintain commonality. The Master sheet has the following:

    <%@ Master Language="C#" AutoEventWireup="true" CodeFile="mysheet.master.cs" Inherits="master_mysheet" %>

    Now, currently, the master sheet has an include file that brings in a common footer:

    #include file="inc/my-footer.inc"

    Here's what I want to do: I would like to modify the master sheet to be able to read in a footer based on the value contained in a session variable... i.e. (not real code, but just something to give an idea of what I want):

    if session("x") = "a" then
        #include file="inc/my-footer1.inc"
    else
        #include file="inc/my-footer2.inc"

    My first instinct was to go with some vbscript:

    <script type="text/vbscript" language="vbscript">
        document.write("vbscript example.")
    </script>

    However, it doesn't run the vbscript code automatically on page load. Does anyone know:
    - The syntax I need to actually get this to work? i.e. to get the vbscript to run automatically on page load, AND to do the page include?
    - Or, is there a better way to go about this? (perhaps by doing some coding in C#)
    Note: I am experienced in C#; however, I haven't done any vbscript since the days of ASP classic, so my knowledge there is out of date.
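
    Since the master page already has a C# code-behind (mysheet.master.cs), one alternative to VBScript is to load the footer conditionally at Page_Load. A rough sketch, assuming the footers are converted to .ascx user controls and the master contains a PlaceHolder named FooterPlaceHolder (both of which are assumptions, not existing code):

    // In mysheet.master.cs
    protected void Page_Load(object sender, EventArgs e)
    {
        // Pick the footer based on the session variable, as in the pseudo-code above.
        string footerPath = "a".Equals(Session["x"])
            ? "~/inc/MyFooter1.ascx"
            : "~/inc/MyFooter2.ascx";

        // LoadControl runs the user control through the normal page life cycle,
        // unlike a server-side #include.
        Control footer = LoadControl(footerPath);
        FooterPlaceHolder.Controls.Add(footer);
    }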

    Read the article

  • Oracle Database 12c: Information Lifecycle Management (ILM) Storage Enhancements

    - by Liu Maclean
    Oracle Database 12c introduces a set of storage enhancements for Information Lifecycle Management (ILM), centered on Automatic Data Placement (ADP) and Automatic Data Optimization (ADO) driven by Heat Map activity tracking. 12c also adds Online Move Datafile, which relocates a datafile while it remains online, with no outage for the sessions using it.

    The initial release (12.1.0.1) places the following restrictions on Automatic Data Optimization and Heat Map:
    - A multitenant container database (CDB) does not support Automatic Data Optimization and Heat Map.
    - Row-level policies for ADO are not supported for Temporal Validity. Partition-level ADO and compression are supported if partitioned on the end-time columns.
    - Row-level policies for ADO are not supported for in-database archiving. Partition-level ADO and compression are supported if partitioned on the ORA_ARCHIVE_STATE column.
    - Custom policies (user-defined functions) for ADO are not supported if the policies default at the tablespace level.
    - ADO does not perform checks for storage space in a target tablespace when using storage tiering.
    - ADO is not supported on tables with object types or materialized views.
    - ADO concurrency (the number of simultaneous policy jobs for ADO) depends on the concurrency of the Oracle scheduler.
    - If a policy job for ADO fails more than two times, then the job is marked disabled and the job must be manually enabled later.
    - Policies for ADO are only run in the Oracle Scheduler maintenance windows. Outside of the maintenance windows all policies are stopped. The only exceptions are those jobs for rebuilding indexes in ADO offline mode.
    - ADO has restrictions related to moving tables and table partitions.

    ADO policies are defined at the row or segment level and are attached with CREATE TABLE or ALTER TABLE. A policy pairs an action (compression, or movement to another storage tier) with a condition (a period of no access, no modification, or time since creation), and can later be changed or removed with ALTER TABLE. For example:

    CREATE TABLE sales_ado
    (PROD_ID NUMBER NOT NULL,
     CUST_ID NUMBER NOT NULL,
     TIME_ID DATE NOT NULL,
     CHANNEL_ID NUMBER NOT NULL,
     PROMO_ID NUMBER NOT NULL,
     QUANTITY_SOLD NUMBER(10,2) NOT NULL,
     AMOUNT_SOLD NUMBER(10,2) NOT NULL)
    ILM ADD POLICY COMPRESS FOR ARCHIVE HIGH SEGMENT
    AFTER 6 MONTHS OF NO ACCESS;

    SQL> SELECT SUBSTR(policy_name,1,24) AS POLICY_NAME, policy_type, enabled
      2  FROM USER_ILMPOLICIES;

    POLICY_NAME          POLICY_TYPE                ENABLED
    -------------------- -------------------------- -------
    P41                  DATA MOVEMENT              YES

    A policy can also be added to an individual partition:

    ALTER TABLE sales MODIFY PARTITION sales_1995
    ILM ADD POLICY COMPRESS FOR ARCHIVE HIGH SEGMENT
    AFTER 6 MONTHS OF NO ACCESS;

    SELECT SUBSTR(policy_name,1,24) AS POLICY_NAME, policy_type, enabled
    FROM USER_ILMPOLICIES;

    POLICY_NAME              POLICY_TYPE   ENABLE
    ------------------------ ------------- ------
    P1                       DATA MOVEMENT YES
    P2                       DATA MOVEMENT YES

    /* You can disable an ADO policy with the following */
    ALTER TABLE sales_ado ILM DISABLE POLICY P1;
    /* You can delete an ADO policy with the following */
    ALTER TABLE sales_ado ILM DELETE POLICY P1;
    /* You can disable all ADO policies with the following */
    ALTER TABLE sales_ado ILM DISABLE_ALL;
    /* You can delete all ADO policies with the following */
    ALTER TABLE sales_ado ILM DELETE_ALL;
    /* You can disable an ADO policy in a partition with the following */
    ALTER TABLE sales MODIFY PARTITION sales_1995 ILM DISABLE POLICY P2;
    /* You can delete an ADO policy in a partition with the following */
    ALTER TABLE sales MODIFY PARTITION sales_1995 ILM DELETE POLICY P2;

    To drive ILM/ADP decisions the database needs activity tracking, which comes at two granularities: SEGMENT-LEVEL tracking records access times per segment, while ROW-LEVEL tracking records modification times for rows. Examples:

    1. Enable segment-level activity tracking (segment access) on INTERVAL_SALES:
    ALTER TABLE interval_sales ILM ENABLE ACTIVITY TRACKING SEGMENT ACCESS;
    2. Track creation and write times:
    ALTER TABLE emp ILM ENABLE ACTIVITY TRACKING (CREATE TIME, WRITE TIME);
    3. Track read times:
    ALTER TABLE emp ILM ENABLE ACTIVITY TRACKING (READ TIME);

    In 12.1.0.1 the tracking itself is controlled by the HEAT_MAP parameter, which can be set at the system or session level. To enable it database-wide:

    ALTER SYSTEM SET HEAT_MAP = ON;

    With Heat Map enabled, access information is tracked for the whole database; objects in the SYSTEM and SYSAUX tablespaces are excluded. To disable the tracking:

    ALTER SYSTEM SET HEAT_MAP = OFF;

    The HEAT_MAP parameter also governs Automatic Data Optimization (ADO): when ADO is used, Heat Map supplies the access statistics that the policies are evaluated against. Real-time segment access information is exposed through V$HEAT_MAP_SEGMENT:

    SQL> select * from v$heat_map_segment;
    no rows selected

    SQL> alter session set heat_map=on;
    Session altered.

    SQL> select * from scott.emp;
    EMPNO ENAME      JOB        MGR  HIREDATE   SAL  COMM DEPTNO
    ----- ---------- ---------- ---- ---------- ---- ---- ------
    7369  SMITH      CLERK      7902 17-DEC-80   800        20
    7499  ALLEN      SALESMAN   7698 20-FEB-81  1600  300   30
    7521  WARD       SALESMAN   7698 22-FEB-81  1250  500   30
    7566  JONES      MANAGER    7839 02-APR-81  2975        20
    7654  MARTIN     SALESMAN   7698 28-SEP-81  1250 1400   30
    7698  BLAKE      MANAGER    7839 01-MAY-81  2850        30
    7782  CLARK      MANAGER    7839 09-JUN-81  2450        10
    7788  SCOTT      ANALYST    7566 19-APR-87  3000        20
    7839  KING       PRESIDENT       17-NOV-81  5000        10
    7844  TURNER     SALESMAN   7698 08-SEP-81  1500    0   30
    7876  ADAMS      CLERK      7788 23-MAY-87  1100        20
    7900  JAMES      CLERK      7698 03-DEC-81   950        30
    7902  FORD       ANALYST    7566 03-DEC-81  3000        20
    7934  MILLER     CLERK      7782 23-JAN-82  1300        10
    14 rows selected.

    SQL> select * from v$heat_map_segment;
    OBJECT_NAME          SUBOBJECT_NAME       OBJ#  DATAOBJ# TRACK_TIM SEG SEG FUL LOO CON_ID
    -------------------- -------------------- ----- -------- --------- --- --- --- --- ------
    EMP                                       92997 92997    23-JUL-13 NO  NO  YES NO  0

    V$HEAT_MAP_SEGMENT is built on the underlying fixed table X$HEATMAPSEGMENT and displays real-time segment access information. Its columns:

    OBJECT_NAME    VARCHAR2(128) Name of the object
    SUBOBJECT_NAME VARCHAR2(128) Name of the subobject
    OBJ#           NUMBER        Object number
    DATAOBJ#       NUMBER        Data object number
    TRACK_TIME     DATE          Timestamp of current activity tracking
    SEGMENT_WRITE  VARCHAR2(3)   Indicates whether the segment has write access (YES or NO)
    SEGMENT_READ   VARCHAR2(3)   Indicates whether the segment has read access (YES or NO)
    FULL_SCAN      VARCHAR2(3)   Indicates whether the segment has full table scan (YES or NO)
    LOOKUP_SCAN    VARCHAR2(3)   Indicates whether the segment has lookup scan (YES or NO)
    CON_ID         NUMBER        The ID of the container to which the data pertains. Possible values: 0 for rows containing data that pertain to the entire CDB (also used for rows in non-CDBs), 1 for rows containing data that pertain to only the root, n where n is the applicable container ID for the rows containing data. The Heat Map feature is not supported in CDBs in Oracle Database 12c, so the value in this column can be ignored.

    Historical Heat Map information can be queried through DBA_HEAT_MAP_SEGMENT; the persisted data lives in the HEAT_MAP_STAT$ base table.

    A worked Automatic Data Optimization example (use case 1):

    SQL> alter system set heat_map=on;
    System altered.

    The demo uses the standard SCOTT schema (script at http://www.askmaclean.com/archives/scott-schema-script.html):

    SQL> grant all on dbms_lock to scott;
    Grant succeeded.
    SQL> grant dba to scott;
    Grant succeeded.

    @ilm_setup_basic C:\APP\XIANGBLI\ORADATA\MACLEAN\ilm.dbf
    @tktgilm_demo_env_setup

    SQL> connect scott/tiger
    Connected.

    SQL> select count(*) from scott.employee;
      COUNT(*)
    ----------
          3072
    1 row selected.

    SQL> set serveroutput on
    SQL> exec print_compression_stats('SCOTT','EMPLOYEE');
    Compression Stats
    ------------------
    Uncmpressed : 3072
    Adv/basic compressed : 0
    Others : 0
    PL/SQL procedure successfully completed.

    All 3072 rows start out uncompressed. Next, add a row-level ADO compression policy:

    alter table employee ilm add policy
    row store compress advanced row
    after 3 days of no modification
    /

    SQL> set serveroutput on
    SQL> execute list_ilm_policies;
    --------------------------------------------------
    Policies defined for SCOTT
    --------------------------------------------------
    Object Name------ : EMPLOYEE
    Subobject Name--- :
    Object Type------ : TABLE
    Inherited from--- : POLICY NOT INHERITED
    Policy Name------ : P1
    Action Type------ : COMPRESSION
    Scope------------ : ROW
    Compression level : ADVANCED
    Tier Tablespace-- :
    Condition type--- : LAST MODIFICATION TIME
    Condition days--- : 3
    Enabled---------- : YES
    --------------------------------------------------
    PL/SQL procedure successfully completed.

    SQL> select sysdate from dual;
    SYSDATE
    --------------
    29-JUL-13

    SQL> execute set_back_chktime(get_policy_name('EMPLOYEE',null,'COMPRESSION','ROW','ADVANCED',3,null,null),'EMPLOYEE',null,6);
    Object check time reset ...
    --------------------------------------
    Object Name : EMPLOYEE
    Object Number : 93123
    D.Object Numbr : 93123
    Policy Number : 1
    Object chktime : 23-JUL-13 08.13.42.000000 AM
    Distnt chktime : 0
    --------------------------------------
    PL/SQL procedure successfully completed.

    set_back_chktime moves the policy check time for the object back 6 days, simulating the passage of time so the demo does not have to wait out the 3-day condition. Then flush the caches:

    alter system flush buffer_cache;
    alter system flush buffer_cache;
    alter system flush shared_pool;
    alter system flush shared_pool;

    SQL> execute set_window('MONDAY_WINDOW','OPEN');
    Set Maint. Window OPEN
    -----------------------------
    Window Name : MONDAY_WINDOW
    Enabled?    : TRUE
    Active?     : TRUE
    -----------------------------
    PL/SQL procedure successfully completed.

    SQL> exec dbms_lock.sleep(60);
    PL/SQL procedure successfully completed.

    SQL> exec print_compression_stats('SCOTT', 'EMPLOYEE');
    Compression Stats
    ------------------
    Uncmpressed : 338
    Adv/basic compressed : 2734
    Others : 0
    PL/SQL procedure successfully completed.

    Once the maintenance window opens, the policy job runs and 2734 rows are compressed.

    SQL> col object_name for a20
    SQL> select object_id,object_name from dba_objects where object_name='EMPLOYEE';
     OBJECT_ID OBJECT_NAME
    ---------- --------------------
         93123 EMPLOYEE

    SQL> execute list_ilm_policy_executions;
    --------------------------------------------------
    Policies execution details for SCOTT
    --------------------------------------------------
    Policy Name------ : P22
    Job Name--------- : ILMJOB48
    Start time------- : 29-JUL-13 08.37.45.061000 AM
    End time--------- : 29-JUL-13 08.37.48.629000 AM
    -----------------
    Object Name------ : EMPLOYEE
    Sub_obj Name----- :
    Obj Type--------- : TABLE
    -----------------
    Exec-state------- : SELECTED FOR EXECUTION
    Job state-------- : COMPLETED SUCCESSFULLY
    Exec comments---- :
    Results comments- : ---
    --------------------------------------------------
    PL/SQL procedure successfully completed.

    ILMJOB48 is the scheduler job that executed the policy; in 12.1.0.1 these run as J00x processes, while MMON slave (M00x) processes evaluate the policies roughly every 15 minutes:

    select sample_time,program,module,action
    from v$active_session_history
    where action = 'KDILM background EXEcution'
    order by sample_time;

    29-JUL-13 08.16.38.369000000 AM ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution
    29-JUL-13 08.17.38.388000000 AM ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution
    29-JUL-13 08.17.39.390000000 AM ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution
    29-JUL-13 08.23.38.681000000 AM ORACLE.EXE (M002) MMON_SLAVE KDILM background EXEcution
    29-JUL-13 08.32.38.968000000 AM ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution
    29-JUL-13 08.33.39.993000000 AM ORACLE.EXE (M003) MMON_SLAVE KDILM background EXEcution
    29-JUL-13 08.33.40.993000000 AM ORACLE.EXE (M003) MMON_SLAVE KDILM background EXEcution
    29-JUL-13 08.36.40.066000000 AM ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution
    29-JUL-13 08.37.42.258000000 AM ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution
    29-JUL-13 08.37.43.258000000 AM ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution
    29-JUL-13 08.37.44.258000000 AM ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution
    29-JUL-13 08.38.42.386000000 AM ORACLE.EXE (M001) MMON_SLAVE KDILM background EXEcution

    select distinct action from v$active_session_history where action like 'KDILM%';
    KDILM background CLeaNup
    KDILM background EXEcution

    SQL> execute set_window('MONDAY_WINDOW','CLOSE');
    Set Maint. Window CLOSE
    -----------------------------
    Window Name : MONDAY_WINDOW
    Enabled?    : TRUE
    Active?     : FALSE
    -----------------------------
    PL/SQL procedure successfully completed.

    SQL> drop table employee purge;
    Table dropped.

    Finally, clean up the demo environment:

    spool ilm_usecase_1_cleanup.lst
    @ilm_demo_cleanup
    spool off

    Read the article

  • Are .NET versions backwards compatible?

    - by Boden
    Over the years various versions of .NET have been deployed to my client machines via WSUS. Now it seems that on many machines these installations have hosed each other, and certain .NET security updates are failing. I verified that I can run the .NET cleanup tool to get rid of all the .NET installations on a client, and I can then push out .NET 3.5 via WSUS. This seems to have solved the problems I'm having on the machine I tried it on. So the question is: if I've got .NET 3.5, is there any reason to also have previous versions installed?

    Read the article

  • Loading jQuery Consistently in a .NET Web App

    - by Rick Strahl
    One thing that frequently comes up in discussions when using jQuery is how to best load the jQuery library (as well as other commonly used and updated libraries) in a Web application. Specifically the issue is one of versioning and making sure that you can easily update and switch versions of script files with application wide settings in one place and having your script usage reflect those settings in the entire application on all pages that use the script. Although I use jQuery as an example here, the same concepts can be applied to any script library - for example in my Web libraries I use the same approach for jQuery.ui and my own internal jQuery support library. The concepts used here can be applied both in WebForms and MVC.

    Loading jQuery Properly From CDN

    Before we look at a generic way to load jQuery via some server logic, let me first point out my preferred way to embed jQuery into the page. I use the Google CDN to load jQuery and then use a fallback URL to handle the offline or no Internet connection scenario. Why use a CDN? CDN links tend to be loaded more quickly since they are very likely to be cached in users' browsers already, as the jQuery CDN is used by many, many sites on the Web. Using a CDN also removes load from your Web server and puts the load bearing on the CDN provider - in this case Google - rather than on your Web site. On the downside, CDN links give the provider (Google, Microsoft) yet another way to track users through their Web usage. Here's how I use the jQuery CDN plus a fallback link on my WebLog for example:

    <!DOCTYPE HTML>
    <html>
    <head>
        <script src="//ajax.googleapis.com/ajax/libs/jquery/1.6.4/jquery.min.js"></script>
        <script>
            if (typeof (jQuery) == 'undefined')
                document.write(unescape("%3Cscript " +
                    "src='/Weblog/wwSC.axd?r=Westwind.Web.Controls.Resources.jquery.js' %3E%3C/script%3E"));
        </script>
        <title>Rick Strahl's Web Log</title>
        ...
    </head>

    You can see that the CDN is referenced first, followed by a small script block that checks to see whether jQuery was loaded (the jQuery object exists). If it didn't load, another script reference is added to the document dynamically pointing to a backup URL. In this case my backup URL points at a WebResource in my Westwind.Web assembly, but the URL can also be local script like src="/scripts/jquery.min.js".

    Important: Use the proper Protocol/Scheme for CDN Urls

    [updated based on comments] If you're using a CDN to load an external script resource you should always make sure that the script is loaded with the same protocol as the parent page to avoid mixed content warnings by the browser. You don't want to load a script link to an http:// resource when you're on an https:// page. The easiest way to do this is by using a protocol relative URL:

    <script src="//ajax.googleapis.com/ajax/libs/jquery/1.6.4/jquery.min.js"></script>

    which is an easy way to load resources from other domains. This URL syntax will automatically use the parent page's protocol (or more correctly scheme). As long as the remote domains support both http:// and https:// access this should work. BTW this also works in CSS (with some limitations) and links. BTW, I didn't know about this until it was pointed out in the comments. This is a very useful feature for many things - ah the benefits of my blog to myself :-)

    Version Numbers

    When you use a CDN you notice that you have to reference a specific version of jQuery.
    When using local files you may not have to do this as you can rename your private copy of jQuery.js, but for CDN the references are always versioned. The version number is of course very important to ensure you're getting the version you have tested with, but it's also important to the provider because it ensures that cached content is always correct. If an existing file was updated the updates might take a very long time to get past the locally cached content and won't refresh properly. The version number ensures you get the right version and not some cached content that has been changed but not updated in your cache. On the other hand version numbers also mean that once you decide to use a new version of the script you now have to change all your script references in your pages. Depending on whether you use some sort of master/layout page or not this may or may not be easy in your application. Even if you do use master/layout pages, chances are that you probably have a few of them and at the very least all of those have to be updated for the scripts. If you use individual pages for all content this issue then spreads to all of your pages. Search and Replace in Files will do the trick, but it's still something that's easy to forget and worry about. Personally I think it makes sense to have a single place where you can specify common script libraries that you want to load and, more importantly, which versions thereof and where they are loaded from.

    Loading Scripts via Server Code

    Script loading has always been important to me and as long as I can remember I've always built some custom script loading routines into my Web frameworks. WebForms makes this fairly easy because it has a reasonably useful script manager (ClientScriptManager and the ScriptManager) which allow injecting script into the page easily from anywhere in the Page cycle. What's nice about these components is that they allow scripts to be injected by controls so components can wrap up complex script/resource dependencies more easily without having to require long lists of CSS/Scripts/Image includes. In MVC or pure script driven applications like Razor WebPages the process is more raw, requiring you to embed script references in the right place. But it's also more immediate - it lets you know exactly which versions of scripts to use because you have to manually embed them. In WebForms with different controls loading resources this often can get confusing because it's quite possible to load multiple versions of the same script library into a page, the results of which are less than optimal… In this post I look at a simple routine that embeds jQuery into the page based on a few application wide configuration settings. It returns only a string of the script tags that can be manually embedded into a Page template. It's a small function that returns merely the string of script tags shown at the beginning of this post, along with some options on how that string is composed. You'll be able to specify in one place which version loads and then all places where the helper function is used will automatically reflect this selection. Options allow specification of the jQuery CDN Url, the fallback Url and where jQuery should be loaded from (script folder, Resource or CDN in my case). While this is specific to jQuery you can apply this to other resources as well. For example I use a similar approach with jQuery.ui as well using practically the same semantics.
    Providing Resources in ControlResources

    In my Westwind.Web Web utility library I have a class called ControlResources which is responsible for holding resource Urls, resource IDs and string constants that reference those resource IDs. The library also provides a few helper methods for loading common scripts into a Web page. There are specific versions for WebForms which use the ClientScriptManager/ScriptManager and script link methods that can be used in any .NET technology that can embed an expression into the output template (or code for that matter). The ControlResources class contains mostly static content - references to resources mostly. But it also contains a few static properties that configure script loading:
    - A Script LoadMode (CDN, Resource, or script url)
    - A default CDN Url
    - A fallback url
    They are static properties in the ControlResources class:

    public class ControlResources
    {
        /// <summary>
        /// Determines what location jQuery is loaded from
        /// </summary>
        public static JQueryLoadModes jQueryLoadMode = JQueryLoadModes.ContentDeliveryNetwork;

        /// <summary>
        /// jQuery CDN Url on Google
        /// </summary>
        public static string jQueryCdnUrl = "//ajax.googleapis.com/ajax/libs/jquery/1.6.4/jquery.min.js";

        /// <summary>
        /// jQuery UI CDN Url on Google
        /// </summary>
        public static string jQueryUiCdnUrl = "//ajax.googleapis.com/ajax/libs/jqueryui/1.8.16/jquery-ui.min.js";

        /// <summary>
        /// jQuery UI fallback Url if CDN is unavailable or WebResource is used
        /// Note: The file needs to exist and hold the minimized version of jQuery ui
        /// </summary>
        public static string jQueryUiLocalFallbackUrl = "~/scripts/jquery-ui.min.js";
    }

    These static properties are fixed values that can be changed at application startup to reflect your preferences. Since they're static they are application wide settings and respected across the entire Web application running. It's best to set these defaults in Application_Init or similar startup code if you need to change them for your application:

    protected void Application_Start(object sender, EventArgs e)
    {
        // Force jQuery to be loaded off Google Content Network
        ControlResources.jQueryLoadMode = JQueryLoadModes.ContentDeliveryNetwork;

        // Allow overriding of the Cdn url
        ControlResources.jQueryCdnUrl = "http://ajax.googleapis.com/ajax/libs/jquery/1.6.2/jquery.min.js";

        // Route to our own internal handler
        App.OnApplicationStart();
    }

    With these basic settings in place you can then embed expressions into a page easily. In WebForms use:

    <!DOCTYPE html>
    <html>
    <head runat="server">
        <%= ControlResources.jQueryLink() %>
        <script src="scripts/ww.jquery.min.js"></script>
    </head>

    In Razor use:

    <!DOCTYPE html>
    <html>
    <head>
        @Html.Raw(ControlResources.jQueryLink())
        <script src="scripts/ww.jquery.min.js"></script>
    </head>

    Note that in Razor you need to use @Html.Raw() to force the string NOT to escape. Razor by default escapes string results and this ensures that the HTML content is properly expanded as raw HTML text.
    Both the WebForms and Razor output produce:

    <!DOCTYPE html>
    <html>
    <head>
        <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.6.2/jquery.min.js" type="text/javascript"></script>
        <script type="text/javascript">
            if (typeof (jQuery) == 'undefined')
                document.write(unescape("%3Cscript src='/WestWindWebToolkitWeb/WebResource.axd?d=-b6oWzgbpGb8uTaHDrCMv59VSmGhilZP5_T_B8anpGx7X-PmW_1eu1KoHDvox-XHqA1EEb-Tl2YAP3bBeebGN65tv-7-yAimtG4ZnoWH633pExpJor8Qp1aKbk-KQWSoNfRC7rQJHXVP4tC0reYzVw2&t=634535391996872492' type='text/javascript'%3E%3C/script%3E"));
        </script>
        <script src="scripts/ww.jquery.min.js"></script>
    </head>

    which produces the desired effect for both CDN load and fallback URL. The implementation of jQueryLink is pretty basic of course:

    /// <summary>
    /// Inserts a script link to load jQuery into the page based on the jQueryLoadModes settings
    /// of this class. Default load is by CDN plus WebResource fallback
    /// </summary>
    /// <param name="url">
    /// An optional explicit URL to load jQuery from. Url is resolved.
    /// When specified no fallback is applied
    /// </param>
    /// <returns>full script tag and fallback script for jQuery to load</returns>
    public static string jQueryLink(JQueryLoadModes jQueryLoadMode = JQueryLoadModes.Default, string url = null)
    {
        string jQueryUrl = string.Empty;
        string fallbackScript = string.Empty;

        if (jQueryLoadMode == JQueryLoadModes.Default)
            jQueryLoadMode = ControlResources.jQueryLoadMode;

        if (!string.IsNullOrEmpty(url))
            jQueryUrl = WebUtils.ResolveUrl(url);
        else if (jQueryLoadMode == JQueryLoadModes.WebResource)
        {
            Page page = new Page();
            jQueryUrl = page.ClientScript.GetWebResourceUrl(typeof(ControlResources), ControlResources.JQUERY_SCRIPT_RESOURCE);
        }
        else if (jQueryLoadMode == JQueryLoadModes.ContentDeliveryNetwork)
        {
            jQueryUrl = ControlResources.jQueryCdnUrl;
            if (!string.IsNullOrEmpty(jQueryCdnUrl))
            {
                // check if jquery loaded - if it didn't we're not online and use WebResource
                fallbackScript = @"<script type=""text/javascript"">if (typeof(jQuery) == 'undefined') document.write(unescape(""%3Cscript src='{0}' type='text/javascript'%3E%3C/script%3E""));</script>";
                fallbackScript = string.Format(fallbackScript, WebUtils.ResolveUrl(ControlResources.jQueryCdnFallbackUrl));
            }
        }

        string output = "<script src=\"" + jQueryUrl + "\" type=\"text/javascript\"></script>";

        // add in the CDN fallback script code
        if (!string.IsNullOrEmpty(fallbackScript))
            output += "\r\n" + fallbackScript + "\r\n";

        return output;
    }

    There's one dependency here on WebUtils.ResolveUrl() which resolves Urls without access to a Page/Control (another one of those features that should be in the runtime, not in the WebForms or MVC engine). You can see there's only a little bit of logic in this code that deals with potentially different load modes. I can load scripts from a Url, WebResources or - my preferred way - from CDN. Based on the static settings the scripts to embed are composed to be returned as simple string <script> tag(s). I find this extremely useful especially when I'm not connected to the internet so that I can quickly swap in a local jQuery resource instead of loading from CDN. While CDN loading with the fallback works it can be a bit slow as the CDN is probed first before the fallback kicks in. Switching quickly in one place makes this trivial. It also makes it very easy once a new version of jQuery rolls around to move up to the new version and ensure that all pages are using the new version immediately.
    I'm not trying to make this out as 'the' definitive way to load your resources, but rather provide it here as a pointer so you can maybe apply your own logic to determine where scripts come from and how they load. You could even automate this some more by using configuration settings or reading the locations/preferences out of some sort of data/metadata store that can be updated dynamically instead of requiring recompilation. FWIW, I use a very similar approach for loading jQuery UI and my own ww.jquery library - the same concept can be applied to any kind of script you might be loading from different locations. Hopefully some of you find this a useful addition to your toolset.

    Resources
    - Google CDN for jQuery
    - Full ControlResources Source Code
    - ControlResource Documentation
    - Westwind.Web NuGet

    This method is part of the Westwind.Web library of the West Wind Web Toolkit, or you can grab the Web library from NuGet and add it to your Visual Studio project. This package includes a host of Web related utilities and script support features.

    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in ASP.NET, jQuery

    Read the article

  • Goal => Microsoft Certification Exams for .NET 4

    - by Raghuraman Kanchi
    Microsoft Learning has announced the availability dates for the .NET 4.0 exams, from 2nd July 2010 onwards. Being an MCSD.NET for 2005, my friend and I decided to skip the 2008 certification exams aiming towards MCTS/MCPD because we felt they were a mere layer on top of .NET 2.0. But not so for .NET 4.0. We see .NET 4.0 as a major overhaul with the best of the .NET releases in hand. The following exams and their links have been posted below for direct reference.
    - Exam 71-511, TS: Windows Applications Development with Microsoft .NET Framework 4
    - Exam 71-515, TS: Web Applications Development with Microsoft .NET Framework 4
    - Exam 71-513, TS: Windows Communication Foundation Development with Microsoft .NET Framework 4
    - Exam 71-516, TS: Accessing Data with Microsoft .NET Framework 4
    - Exam 71-518, Pro: Designing and Developing Windows Applications Using Microsoft .NET Framework 4
    - Exam 71-519, Pro: Designing and Developing Web Applications Using Microsoft .NET Framework 4
    I am planning to prepare for each of these exams alongside my .NET 4.0 work. Ever since the release of VS 2010 beta 1, I have been working mostly on .NET 4.0, including Silverlight & MEF. Now it is time to read documentation, prepare notes, understand the material, do labs, write programs and applications, and pass the exams.

    Read the article

  • DataView.RowFilter Vs DataTable.Select() vs DataTable.Rows.Find()

    - by Aseem Gautam
    Considering the code below:

    DataView someView = new DataView(sometable);
    someView.RowFilter = someFilter;
    if (someView.Count > 0)
    {
        ...
    }

    Quite a number of articles say DataTable.Select() is better than using DataViews, but these are prior to VS2008: "Solved: The Mystery of DataView's Poor Performance with Large Recordsets" and "Array of DataRecord vs. DataView: A Dramatic Difference in Performance". Googling on this topic I found some articles/forum topics which mention DataTable.Select() itself is quite buggy (not sure on this) and underperforms in various scenarios. On this topic (Best Practices ADO.NET) on MSDN it is suggested that if there is a primary key defined on a DataTable, the FindRows() or Find() methods should be used instead of DataTable.Select(). This article here (.NET 1.1) benchmarks all three approaches plus a couple more, but it is for version 1.1, so I'm not sure if the results are still valid now. According to it, DataRowCollection.Find() outperforms all approaches and DataTable.Select() outperforms DataView.RowFilter. So I am quite confused about what might be the best approach for finding rows in a DataTable. Or is there no single good way to do this, and do multiple solutions exist depending upon the scenario?
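
    For reference, a minimal, self-contained sketch of the three lookup approaches being compared (table and column names are illustrative only):

    using System;
    using System.Data;

    class Program
    {
        static void Main()
        {
            // Build a small table with a primary key so Rows.Find() can be used.
            var table = new DataTable("People");
            table.Columns.Add("Id", typeof(int));
            table.Columns.Add("Name", typeof(string));
            table.PrimaryKey = new[] { table.Columns["Id"] };
            table.Rows.Add(1, "Alice");
            table.Rows.Add(2, "Bob");

            // 1. DataRowCollection.Find(): keyed lookup, requires a primary key.
            DataRow byKey = table.Rows.Find(2);

            // 2. DataTable.Select(): arbitrary filter expression, returns an array.
            DataRow[] byFilter = table.Select("Name = 'Alice'");

            // 3. DataView.RowFilter: a filtered, index-backed view over the table.
            var view = new DataView(table) { RowFilter = "Id = 1" };

            Console.WriteLine("{0} {1} {2}", byKey["Name"], byFilter.Length, view.Count);
        }
    }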

    Read the article

  • C# exception when calling stored procedure: ORA-01460 - unimplemented or unreasonable conversion requested

    - by Taylor L
    I'm trying to call a stored procedure using ADO.NET and I'm getting the following error:

    ORA-01460 - unimplemented or unreasonable conversion requested

    The stored procedure I'm trying to call has the following parameters:

    param1 IN VARCHAR2,
    param2 IN NUMBER,
    param3 IN VARCHAR2,
    param4 OUT NUMBER,
    param5 OUT NUMBER,
    param6 OUT NUMBER,
    param7 OUT VARCHAR2

    Below is the C# code I'm using to call the stored procedure:

    OracleCommand command = connection.CreateCommand();
    command.CommandType = CommandType.StoredProcedure;
    command.CommandText = "MY_PROC";

    OracleParameter param1 = new OracleParameter() { ParameterName = "param1", Direction = ParameterDirection.Input, Value = p1, OracleDbType = OracleDbType.Varchar2, Size = p1.Length };
    OracleParameter param2 = new OracleParameter() { ParameterName = "param2", Direction = ParameterDirection.Input, Value = p2, OracleDbType = OracleDbType.Decimal };
    OracleParameter param3 = new OracleParameter() { ParameterName = "param3", Direction = ParameterDirection.Input, Value = p3, OracleDbType = OracleDbType.Varchar2, Size = p3.Length };
    OracleParameter param4 = new OracleParameter() { ParameterName = "param4", Direction = ParameterDirection.Output, OracleDbType = OracleDbType.Decimal };
    OracleParameter param5 = new OracleParameter() { ParameterName = "param5", Direction = ParameterDirection.Output, OracleDbType = OracleDbType.Decimal };
    OracleParameter param6 = new OracleParameter() { ParameterName = "param6", Direction = ParameterDirection.Output, OracleDbType = OracleDbType.Decimal };
    OracleParameter param7 = new OracleParameter() { ParameterName = "param7", Direction = ParameterDirection.Output, OracleDbType = OracleDbType.Varchar2, Size = 32767 };

    command.Parameters.Add(param1);
    command.Parameters.Add(param2);
    command.Parameters.Add(param3);
    command.Parameters.Add(param4);
    command.Parameters.Add(param5);
    command.Parameters.Add(param6);
    command.Parameters.Add(param7);
    command.ExecuteNonQuery();

    Any ideas what I'm doing wrong?

    Read the article

  • Tutorials for .NET database app

    - by ChrisC
    My earlier question and comments are at "What is ADO.NET". This shows my level of knowledge about c# database apps. Can someone point me to tutorials and/or primers that can help me proceed from where I am (also stated in other question)? I've looked and all I've found are tutorials that talk about general db basics (ie, not helping with VS/C#), or talk about connecting to existing SQL db's. I need help setting one up and configuring it, as well as help on how to create and use queries to test the db schema.

    Read the article

  • How to estimate size of data to transfer when using DbCommand.ExecuteXXX?

    - by Yadyn
    I want to show the user detailed progress information when performing potentially lengthy database operations. Specifically, when inserting/updating data that may be on the order of hundreds of KB or MB. Currently, I'm using in-memory DataTables and DataRows which are then synced with the database via TableAdapter.Update calls. This works fine and dandy, but the single call leaves little opportunity to glean any kind of progress info to show to the user. I have no idea how much data is passing through the network to the remote DB or its progress. Basically, all I know is when Update returns, and it is assumed complete (barring any errors or exceptions). But this means all I can show is 0% and then a pause and then 100%. I can count the number of rows, even going so far as to count how many are actually Modified or Added, and I could even maybe calculate per DataRow its estimated size based on the datatype of each column, using sizeof for value types like int and checking length for things like strings or byte arrays. With that, I could probably determine, before updating, an estimated total transfer size, but I'm still stuck without any progress info once Update is called on the TableAdapter. Am I stuck just using an indeterminate progress bar or mouse waiting cursor? Would I need to radically change our data access layer to be able to hook into this kind of information? Even if I can't get it down to the precise KB transferred (like a web browser file download progress bar), could I at least know when each DataRow/DataTable finishes or something? How do you best show this kind of progress info using ADO.NET?
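
    One possible direction (a sketch under the assumption that the underlying DbDataAdapter is accessible, not a drop-in replacement for a typed TableAdapter) is to push the changed rows through in batches and report progress per batch, using the DbDataAdapter.Update(DataRow[]) overload:

    using System;
    using System.Data;
    using System.Data.Common;
    using System.Linq;

    static class AdapterProgress
    {
        // Pushes pending changes through the adapter in batches so progress
        // can be reported after each batch completes.
        public static void UpdateWithProgress(DbDataAdapter adapter, DataTable table,
                                              int batchSize, Action<int, int> reportProgress)
        {
            // Only rows that are Added/Modified/Deleted actually go over the wire.
            DataRow[] changed = table.Rows.Cast<DataRow>()
                .Where(r => r.RowState != DataRowState.Unchanged)
                .ToArray();

            for (int i = 0; i < changed.Length; i += batchSize)
            {
                DataRow[] batch = changed.Skip(i).Take(batchSize).ToArray();
                adapter.Update(batch); // Update(DataRow[]) syncs just these rows
                reportProgress(Math.Min(i + batchSize, changed.Length), changed.Length);
            }
        }
    }

    This still gives only per-row rather than per-KB granularity, but it turns the single opaque Update call into checkpoints that a progress bar can follow.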

    Read the article

  • Is this an OleDbDataAdapter bug?

    - by ????
    It doesn't look to me like OleDbDataAdapter should throw an exception on trying to fill a DataSet for a db table column of type decimal(28,3). The message is "The numerical value is too large to fit into a 96 bit decimal". Could you just check this? I have no significant experience with ADO.NET and OLE DB components. The VB.NET code we have in the application is this:

    Dim dbDataSet As New DataSet
    Dim dbDataAdapter As OleDbDataAdapter
    Dim dbCommand As OleDbCommand
    Dim conn As OleDbConnection
    Dim connectionString As String
    'parts where connectionString is set
    conn = New OleDbConnection(connectionString)
    'part where sqlQuery is set but it ends up being "SELECT Price As 'Price' From PricebookView" - Price is of type decimal(28,3)
    dbCommand = New OleDbCommand(sqlQuery, conn)
    dbCommand.CommandTimeout = cmdTimeout
    dbDataAdapter = New OleDbDataAdapter(dbCommand)
    dbDataAdapter.Fill(dbDataSet)

    The last line is where the exception is thrown and the top of the stack trace is:

    at System.Data.ProviderBase.DbBuffer.ReadNumeric(Int32 offset)
    at System.Data.OleDb.ColumnBinding.Value_NUMERIC()
    at System.Data.OleDb.ColumnBinding.Value()
    at System.Data.OleDb.OleDbDataReader.GetValues(Object[] values)
    at System.Data.ProviderBase.DataReaderContainer.CommonLanguageSubsetDataReader.GetValues(Object[] values)
    at System.Data.ProviderBase.SchemaMapping.LoadDataRow()
    at System.Data.Common.DataAdapter.FillLoadDataRow(SchemaMapping mapping)
    at System.Data.Common.DataAdapter.FillFromReader(DataSet dataset, DataTable datatable, String srcTable, DataReaderContainer dataReader, Int32 startRecord, Int32 maxRecords, DataColumn parentChapterColumn, Object parentChapterValue)
    at System.Data.Common.DataAdapter.Fill(DataSet dataSet, String srcTable, IDataReader dataReader, Int32 startRecord, Int32 maxRecords)
    at System.Data.Common.DbDataAdapter.FillInternal(DataSet dataset, DataTable[] datatables, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior)
    at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior)
    at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet)
    ...

    I am not sure why it tries to set the value to Int32. Thank you for the time!

    Read the article

  • Problem using SQLDataReader with Sybase ASE

    - by John K.
    We're developing a reporting application that uses asp.net-mvc (.net 4). We connect through DDTEK.Sybase middleware to a Sybase ASE 12.5 database. We're having a problem pulling data into a datareader (from a stored procedure). The stored procedure computes values (approximately 50 columns) by doing sums, counts, and calling other stored procedures. The problem we're experiencing is... certain (maybe 5% of the columns) come back with NULL or 0. If we debug and copy the SQL statement being used for the datareader and run it inside another SQL tool we get all valid values for all columns.

    conn = new SybaseConnection
    {
        ConnectionString = ConfigurationManager.ConnectionStrings[ConnectStringName].ToString()
    };
    conn.Open();
    cmd = new SybaseCommand
    {
        CommandTimeout = cmdTimeout,
        Connection = conn,
        CommandText = mainSql
    };
    reader = cmd.ExecuteReader();
    // AT THIS POINT IMMEDIATELY AFTER THE EXECUTEREADER COMMAND
    // THE READER CONTAINS THE BAD (NULL OR 0) DATA FOR THESE COLUMNS.
    DataTable schemaTable = reader.GetSchemaTable();
    // AT THIS POINT WE CAN VIEW THE DATATABLE FOR THE SCHEMA AND IT APPEARS CORRECT
    // THE COLUMNS THAT DON'T WORK HAVE SPECIFICATIONS IDENTICAL TO THE COLUMNS THAT DO WORK

    Has anyone had problems like this using Sybase and ADO? Thanks, John K.

    Read the article

  • How Can I Find What's Causing My Transaction to Get Promoted?

    - by Damian Powell
    I have a web site which serves web services (a mixture of .asmx and WCF) and mostly uses LINQ to SQL and System.Transactions. Occasionally we see the transaction get promoted to a distributed transaction, which causes problems because our web servers are isolated from our databases in such a way that it is not possible for us to use MSDTC. I have configured tracing for System.Transactions by adding the following to my web.config:

    <system.diagnostics>
      <sources>
        <source name="System.Transactions" switchValue="Information">
          <listeners>
            <add name="tx" type="System.Diagnostics.XmlWriterTraceListener" initializeData="tx.log" />
          </listeners>
        </source>
      </sources>
    </system.diagnostics>

    It's very interesting and shows me when the transaction is promoted, but I find that it doesn't really help me discover why. Is there an equivalent tracing mechanism for ADO.NET that will show me when connections are created, including the variables that affect pooling (user, cnn string, transaction scope)?

    Read the article

  • Finding a Relative Path in .NET

    - by Rick Strahl
    Here's a nice and simple path utility that I've needed in a number of applications: I need to find a relative path based on a base path. So if I'm working in a folder called c:\temp\templates\ and I want to find a relative path for c:\temp\templates\subdir\test.txt, I want to receive back subdir\test.txt. Or if I pass c:\ I want to get back ..\..\ - in other words, always return a non-hardcoded path based on some other known directory. I've had a routine in my library that does this via some lengthy string parsing routines, but ran into some Uri processing today that made me realize that this code could be greatly simplified by using the System.Uri class instead. Here's the simple static method:

    /// <summary>
    /// Returns a relative path string from a full path based on a base path
    /// provided.
    /// </summary>
    /// <param name="fullPath">The path to convert. Can be either a file or a directory</param>
    /// <param name="basePath">The base path on which relative processing is based. Should be a directory.</param>
    /// <returns>
    /// String of the relative path.
    ///
    /// Examples of returned values:
    /// test.txt, ..\test.txt, ..\..\..\test.txt, ., .., subdir\test.txt
    /// </returns>
    public static string GetRelativePath(string fullPath, string basePath)
    {
        // Force basePath into a directory path ending in a backslash
        if (!basePath.EndsWith("\\"))
            basePath += "\\";

        Uri baseUri = new Uri(basePath);
        Uri fullUri = new Uri(fullPath);

        Uri relativeUri = baseUri.MakeRelativeUri(fullUri);

        // Uris use forward slashes so convert back to backward slashes
        return relativeUri.ToString().Replace("/", "\\");
    }

    You can then call it like this:

    string relPath = FileUtils.GetRelativePath(@"c:\temp\templates", @"c:\temp\templates\subdir\test.txt");

    It's not exactly rocket science but it's useful in many scenarios where you're working with files based on an application base directory. Right now I'm working on a templating solution (using the Razor Engine) where templates live in a base directory and are supplied as relative paths to that base directory. Resolving these relative paths both ways is important in order to properly check for existence of files and their change status in this case. Not the kind of thing you use every day, but useful to remember.

    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in .NET, CSharp

    Read the article

  • Announcement: Employee Info Starter Kit (v5.0) is Released

    - by Mohammad Ashraful Alam
    Ever wanted to have a simple jQuery menu bound to an ASP.NET web site map file? Ever wanted to have cool css design stuff implemented on your ASP.NET data bound controls? Ever wanted to let Visual Studio generate logical layers for you, which can be easily tested, customized and bound with ASP.NET data controls? If your answers to the above questions are 'yes', then you will probably be happy to try out the latest release (v5.0) of the Employee Info Starter Kit, which is intended to address different types of real world challenges faced by web application developers when performing common CRUD operations. Using a single database table 'Employee', the current release illustrates how to utilize Microsoft ASP.NET 4.0 Web Form Data Controls, Entity Framework 4.0 and Visual Studio 2010 effectively in that context. Employee Info Starter Kit is an open source ASP.NET project template that is highly influenced by the 'Pareto Principle' or 80-20 rule, where it is targeted to enable a web developer to gain 80% productivity with 20% of effort with respect to learning curve and production. The project template, titled "Employee Info Starter Kit", was initially hosted on Microsoft Code Gallery and has been downloaded 150,000+ times since then. The latest version of this starter kit is hosted on Codeplex.

    Release Highlights

    User End Functional Specification
    The user end functionalities of this starter kit are pretty simple and straightforward, focused on performing CRUD operations on employee records as described below:
    - Creating a new employee record
    - Reading existing employee records
    - Updating an existing employee record
    - Deleting existing employee records

    Architectural Overview
    - Simple 3 layer architecture (presentation, business logic and data access layer)
    - ASP.NET web form based user interface
    - Built-in code generators for logical layers, implemented in the Visual Studio default template engine (T4)
    - Built-in Entity Framework entities as business entities (aka: data containers)
    - Data Mapper design pattern based Data Access Layer, implemented in C# and Entity Framework
    - Domain Model design pattern based Business Logic Layer, implemented in C#
    - Object Model for Cross Cutting Concerns (such as validation, logging, exception management)

    Minimum System Requirements
    - Visual Studio 2010 (Web Developer Express Edition) or higher
    - Sql Server 2005 (Express Edition) or higher

    Technology Utilized
    Programming Languages/Scripts: browser side: JavaScript; web server side: C#; code generation template: T-4 Template
    Frameworks: .NET Framework 4.0; JavaScript Framework: jQuery 1.5.1; CSS Framework: 960 grid system
    .NET Framework Components: .NET Entity Framework, .NET Optional/Named Parameters (new in .net 4.0), .NET Tuple (new in .net 4.0), .NET Extension Method, .NET Lambda Expressions, .NET Anonymous Type, .NET Query Expressions, .NET Automatically Implemented Properties, .NET LINQ, .NET Partial Classes and Methods, .NET Generic Type, .NET Nullable Type, ASP.NET Meta Description and Keyword Support (new in .net 4.0), ASP.NET Routing (new in .net 4.0), ASP.NET Grid View (CSS support for sorting - new in .net 4.0), ASP.NET Repeater, ASP.NET Form View, ASP.NET Login View, ASP.NET Site Map Path, ASP.NET Skin, ASP.NET Theme, ASP.NET Master Page, ASP.NET Object Data Source, ASP.NET Role Based Security

    Getting Started Guide
    Seeing Employee Info Starter Kit in action is pretty easy!
    1. Download the latest version.
    2. Extract the file.
    3. From the extracted folder, click the C# project file (Eisk.Web.csproj) to open it in Visual Studio 2010.
    4. Hit Ctrl+F5!
The current release (v5.0) of Employee Info Starter Kit is properly packaged, fully documented and well tested. If you want to learn more about it in details, just check the following links: Release Home Page Installation Walkthrough Hand on Coding Walkthrough Technical Reference Enjoy!

    Read the article

  • Best practices for logging and tracing in .NET

    - by Levidad
    I've been reading a lot about tracing and logging, trying to find some golden rule for best practices in the matter, but there isn't any. People say that good programmers produce good tracing, but put it that way and it has to come from experience. I've also read similar questions in here and through the internet, and they are not really the same thing I am asking or do not have a satisfying answer, maybe because the questions lack some detail. So, folks say that tracing should sort of replicate the experience of debugging the application in cases where you can't attach a debugger. It should provide enough context so that you can see which path is taken at each control point in the application. Going deeper, you can even distinguish between tracing and event logging, in that "event logging is different from tracing in that it captures major states rather than detailed flow of control". Now, say I want to do my tracing and logging using only the standard .NET classes, those in the System.Diagnostics namespace. I figured that the TraceSource class is better for the job than the static Trace class, because I want to differentiate among the trace levels, and using the TraceSource class I can pass in a parameter informing the event type, while using the Trace class I must use Trace.WriteLineIf and then verify things like SourceSwitch.TraceInformation and SourceSwitch.TraceErrors, and it doesn't even have properties like TraceVerbose or TraceStart. With all that in mind, would you consider it good practice to do as follows?
    - Trace a "Start" event when beginning a method, which should represent a single logical operation or a pipeline, along with a string representation of the parameter values passed in to the method.
    - Trace an "Information" event when inserting an item into the database.
    - Trace an "Information" event when taking one path or another in an important if/else statement.
    - Trace a "Critical" or "Error" event in a catch block, depending on whether this is a recoverable error.
    - Trace a "Stop" event when finishing the execution of the method.
    And also, please clarify when it is best to trace Verbose and Warning event types. If you have examples of code with nice trace/logging and are willing to share, that would be excellent. Note: I've found some good information here, but still not what I am looking for: http://msdn.microsoft.com/en-us/magazine/ff714589.aspx Thanks in advance!
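
    A minimal sketch of the Start/Information/Error/Stop pattern described in the list above, using TraceSource (the source name and messages are illustrative only):

    using System;
    using System.Diagnostics;

    class OrderProcessor
    {
        static readonly TraceSource Trace = new TraceSource("MyApp.Orders");

        public void PlaceOrder(int customerId, int productId)
        {
            Trace.TraceEvent(TraceEventType.Start, 0,
                "PlaceOrder(customerId={0}, productId={1})", customerId, productId);
            try
            {
                // ... insert the order into the database ...
                Trace.TraceEvent(TraceEventType.Information, 0,
                    "Order row inserted for customer {0}", customerId);
            }
            catch (Exception ex)
            {
                Trace.TraceEvent(TraceEventType.Error, 0, "PlaceOrder failed: {0}", ex);
                throw;
            }
            finally
            {
                Trace.TraceEvent(TraceEventType.Stop, 0, "PlaceOrder");
            }
        }
    }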

    Read the article

  • Design pattern for an ASP.NET project using Entity Framework

    - by MPelletier
    I'm building a website in ASP.NET (Web Forms) on top of an engine with business rules (which basically resides in a separate DLL), connected to a database mapped with Entity Framework (in a 3rd, separate project). I designed the engine first, which has an Entity Framework context, and then went on to work on the website, which presents various reports. I believe I made a terrible design mistake in that the website has its own context (which sounded normal at first). I present this mockup of the engine and a report page's code-behind.

    The engine (in a separate DLL):

        public class Engine
        {
            DatabaseEntities _engineContext;

            public Engine()
            {
                // Connection string and procedure managed in DB layer
                _engineContext = DatabaseEntities.Connect();
            }

            public void ChangeSomeEntity(SomeEntity someEntity, int newValue)
            {
                // Suppose there's some validation too, non-trivial stuff
                someEntity.Value = newValue;
                _engineContext.SaveChanges();
            }
        }

    And the report:

        public partial class MyReport : Page
        {
            Engine _engine;
            DatabaseEntities _webpageContext;

            public MyReport()
            {
                _engine = new Engine();
                _webpageContext = DatabaseEntities.Connect();
            }

            public void ChangeSomeEntityButton_Clicked(object sender, EventArgs e)
            {
                SomeEntity someEntity;

                // Wrong way:
                // Get the entity from the webpage context
                someEntity = _webpageContext.SomeEntities.Single(s => s.Id == SomeEntityId);
                // Send the entity from _webpageContext to the engine
                _engine.ChangeSomeEntity(someEntity, SomeEntityNewValue); // <- oops, conflict of contexts

                // Right(?) way:
                // Get the entity from the engine context
                someEntity = _engine.GetSomeEntity(SomeEntityId); // undefined above
                // Send the entity from the engine's context to the engine
                _engine.ChangeSomeEntity(someEntity, SomeEntityNewValue); // <- ok, same context
            }
        }

    Because the webpage has its own context, giving the engine an entity from a different context will cause an error. I happen to know not to do that, to only give the engine entities from its own context. But this is a very error-prone design. I see the error of my ways now; I just don't know the right path. I'm considering:
    - Creating the connection in the engine and passing it off to the webpage: always instantiate an Engine and share its context through a property. Possible problems: other conflicts? Slow? Concurrency issues if I want to expand to AJAX?
    - Creating the connection in the webpage and passing it off to the engine (I believe that's dependency injection?), as sketched below.
    - Only talking through IDs. That creates redundancy, is not always practical, and sounds archaic. But at the same time, I already retrieve things from the page as IDs that I need to fetch anyway.
    What would be the best compromise here for safety, ease of use and understanding, stability, and speed?
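    For reference, a minimal sketch of the second option (constructor-injecting a shared context) might look like this; the names follow the mockup above and the change is purely illustrative, not a definitive answer:

        public class Engine
        {
            private readonly DatabaseEntities _engineContext;

            // The caller decides which context (and therefore which
            // unit of work) the engine operates on.
            public Engine(DatabaseEntities context)
            {
                _engineContext = context;
            }

            public SomeEntity GetSomeEntity(int id)
            {
                return _engineContext.SomeEntities.Single(s => s.Id == id);
            }

            public void ChangeSomeEntity(SomeEntity someEntity, int newValue)
            {
                someEntity.Value = newValue;
                _engineContext.SaveChanges();
            }
        }

        // In the page: one context per request, shared by page and engine,
        // so entities never cross context boundaries.
        //
        //   var context = DatabaseEntities.Connect();
        //   var engine = new Engine(context);
        //   var entity = engine.GetSomeEntity(someEntityId);
        //   engine.ChangeSomeEntity(entity, someEntityNewValue);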

    Read the article

  • My .NET Technology picks for 2011

    - by shiju
    My Technology Predictions for 2011
    Cloud computing and mobile application development will be the hottest trends of 2011. I expect Windows Azure to be very hot in 2011 and a lot of cloud computing adoption to happen on Windows Azure. Web application scalability will be the big challenge for architects in the next year, and architectural approaches like CQRS will get some attention. Architects will look at different options for web application scalability, and adoption of NoSQL and document databases will grow in 2011. The following are my technology picks for the .NET stack.

    Windows Azure
    Windows Azure will be one of the hottest technologies of 2011, and adoption of the cloud and of Windows Azure will get big attention next year. The Windows Azure platform is a flexible cloud-computing platform that lets you focus on solving business problems and addressing customer needs: no upfront investment in expensive infrastructure, you pay only for what you use, scale up when you need capacity and pull it back when you don't, and patches and maintenance are handled for you in a secure environment with over 99.9% uptime.

    Silverlight 5
    Silverlight is becoming a common technology across a variety of development platforms. You can develop Silverlight applications for the web, the desktop and Windows Phone. The new Silverlight 5 beta will be available in the first quarter of next year with lots of new features and capabilities. Silverlight 5 will be a powerful development platform for both web-based business apps and rich media solutions. We can expect the final version of Silverlight 5 at the end of 2011.

    Windows Phone 7 Development Tools
    Mobile application development will be very hot in 2011, and Windows Phone 7 will be one of the hottest technologies of next year. You can get an introduction to the Windows Phone 7 Development Tools from somasegar's blog post, and MSDN documentation is available from here.

    EF Code First
    I am a big fan of Entity Framework's Code First approach and hope it will attract more people to Entity Framework 4. EF Code First lets you focus on the domain model, which enables Domain-Driven Development for applications; I hope DDD fans will love the Code First approach. Entity Framework 4 now supports three approaches, and these will attract different kinds of developer audiences.

    ASP.NET MVC 3
    ASP.NET MVC 3 will be the hottest technology in the Microsoft web stack next year, and ASP.NET developers will widely move to the ASP.NET MVC framework from WebForms development. The new Razor view engine is great, and it will increase the adoption of ASP.NET MVC 3 by improving productivity when working with ASP.NET MVC 3 views. You can build great web applications using ASP.NET MVC 3 and jQuery with better maintainability, clean generated HTML and even better performance. In my opinion, the best technology stack for web development is ASP.NET MVC 3 with Entity Framework 4 Code First as the ORM. In the coming year, you can expect more articles from my blog on ASP.NET MVC 3 and Entity Framework 4 Code First.

    RavenDB
    NoSQL and document databases will get more attention in the coming year, and RavenDB will be the most notable document database in the .NET stack. RavenDB is an open source (with a commercial option) document database for the .NET/Windows platform developed by Ayende Rahien. It is a .NET-focused document database that comes with a fully functional .NET client API and supports LINQ. I have written a few articles on RavenDB, and you can read them from here.

    Managed Extensibility Framework (MEF)
    Many people haven't realized the power of MEF. MEF lets you create extensible applications and provides a great solution to the runtime extensibility problem. I hope .NET developers will adopt MEF more widely next year for their .NET applications. You can get an excellent introduction to MEF from Anoop Madhusudanan's blog post "MEF or Managed Extensibility Framework – Creating a Zoo and Animals".
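    As a tiny illustration of the Code First approach mentioned above, here is a minimal sketch assuming the DbContext/DbSet API from the EF Code First CTP releases; the class names are hypothetical:

        using System.Data.Entity; // EF Code First CTP-style API

        // A plain domain class; no designer, no generated code.
        public class Employee
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        // The context is written by hand from the domain model,
        // rather than generated from an existing database.
        public class CompanyContext : DbContext
        {
            public DbSet<Employee> Employees { get; set; }
        }

        class Program
        {
            static void Main()
            {
                using (var db = new CompanyContext())
                {
                    // The schema is inferred from the classes above by convention.
                    db.Employees.Add(new Employee { Name = "Ada" });
                    db.SaveChanges();
                }
            }
        }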

    Read the article

  • net.tcp Listener Adapter and net.tcp Port Sharing Service not starting on reboot

    - by Peter K.
    I am using the net.tcp protocol for various web services. When I reboot my Windows 7 Ultimate (64-bit) MacBook Pro, the services never restart automatically, even though that is how they are set. The only relevant events I can see are in the System event log:
    - Error, 6/9/2011 19:47, Service Control Manager, event 7001: "The Net.Tcp Listener Adapter service depends on the Net.Tcp Port Sharing Service service which failed to start because of the following error: The service did not respond to the start or control request in a timely fashion."
    - Error, 6/9/2011 19:47, Service Control Manager, event 7000: "The Net.Tcp Port Sharing Service service failed to start due to the following error: The service did not respond to the start or control request in a timely fashion."
    - Error, 6/9/2011 19:47, Service Control Manager, event 7009: "A timeout was reached (30000 milliseconds) while waiting for the Net.Tcp Port Sharing Service service to connect."
    This post suggests that something else is blocking the port (in the post it's the SCCM 2007 R3 client, which I don't use). What else could be the problem? If it's something else blocking the port, how do I figure out what? When I manually start the services, they start correctly. The dependency chain is: Net.Tcp Port Sharing Service, then Net.Tcp Listener Adapter.
    Update: still no luck, but I think the problem might be that my network connection takes too long to come up. I put a custom view in the event log, and the first item in the series says: "A timeout was reached (30000 milliseconds) while waiting for the Net.Tcp Port Sharing Service service to connect."
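    Not part of the original question, but if the services are simply racing the network stack at boot, one common workaround is to switch both to delayed automatic start. A sketch using the stock sc.exe tool, assuming the standard short service names (NetTcpPortSharing for the Net.Tcp Port Sharing Service, NetTcpActivator for the Net.Tcp Listener Adapter):

        rem The space after "start=" is required by sc.exe syntax.
        sc config NetTcpPortSharing start= delayed-auto
        sc config NetTcpActivator start= delayed-auto

    Delayed-auto services start a couple of minutes after the other automatic services, which usually gives the network time to come up. Another option, if delaying the start is not acceptable, is raising the service start timeout above the default 30 seconds via the ServicesPipeTimeout DWORD value (milliseconds) under HKLM\SYSTEM\CurrentControlSet\Control.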

    Read the article

  • Uncovering Compiler Errors in ASP.NET MVC Views

    - by Ben Griswold
    ASPX and ASCX files are compiled on the fly when they are requested on the web server. This means it's possible that you aren't catching compile errors associated with your views when you build your ASP.NET MVC project in Visual Studio. Unless you're willing to click through your entire application, rendering each view looking for errors, your application is left a little vulnerable to user issues. Fortunately, there's a workaround. Open up your MVC project file in Notepad or within the Visual Studio IDE by unloading the project and then editing the .csproj file (both actions are available by right-clicking the project node in Solution Explorer). Notice the MvcBuildViews option. It's probably set to false. Flip the value to true and you'll magically start compiling your views when you build your application.

        <MvcBuildViews>false</MvcBuildViews>

    Taking this action will slow down your builds a bit, but if you're a hack like me, it'll probably save your day in the long run. Now you're probably thinking, "Neat trick – how's it work?" Scroll down toward the bottom of your .csproj file and you will notice the AfterBuild target triggers the AspNetCompiler task if the MvcBuildViews option is set to true.

        <Target Name="AfterBuild" Condition="'$(MvcBuildViews)'=='true'">
          <AspNetCompiler VirtualPath="temp"
                          PhysicalPath="$(ProjectDir)\..\$(ProjectName)" />
        </Target>

    Great. One more thing. Let's say you don't want to slow down all of your builds, but you absolutely want to know if there are any compiler issues with your views before you commit your code to version control or deploy or whatever. Here's what you can do: change the AfterBuild condition to run if your configuration is set to Release mode.

        <Target Name="AfterBuild" Condition="'$(Configuration)'=='Release'">
          <!-- Always pre-compile ASPX and ASCX in release mode -->
          <AspNetCompiler VirtualPath="temp"
                          PhysicalPath="$(ProjectDir)\..\$(ProjectName)" />
        </Target>

    Now your debug-mode builds will continue to be as fast as ever, and you can quickly validate your views by building in release mode when you so choose. There's one little catch: this setup won't consider the MvcBuildViews option whatsoever. So if you decide to go with this configuration, you might want to add a comment near the MvcBuildViews option letting other developers know they can change it as much as they'd like but it's not going to affect the AfterBuild action. Or don't include the comment and let your team members figure it out for themselves…
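    If you want the best of both, one possible tweak (my sketch, not from the original article) is to make the target honor either switch, so MvcBuildViews still works in debug builds while release builds always compile views:

        <Target Name="AfterBuild" Condition="'$(MvcBuildViews)'=='true' Or '$(Configuration)'=='Release'">
          <!-- Pre-compile views when explicitly enabled, and always in release builds -->
          <AspNetCompiler VirtualPath="temp"
                          PhysicalPath="$(ProjectDir)\..\$(ProjectName)" />
        </Target>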

    Read the article

< Previous Page | 127 128 129 130 131 132 133 134 135 136 137 138  | Next Page >