Search Results

Search found 11093 results on 444 pages for 'issues'.


  • Lua metatable objects cannot be purged from memory?

    - by Prometheus3k
    Hi there, I'm using a proprietary platform that reported memory usage in realtime on screen. I decided to use a Class.lua I found on http://lua-users.org/wiki/SimpleLuaClasses However, I noticed memory issues when purging object created by this using a simple Account class. Specifically, I would start with say 146k of memory used, create 1000 objects of a class that just holds an integer instance variable and store each object into a table. The memory used is now 300k I would then exit, iterating through the table and setting each element in the table to nil. But would never get back the 146k, usually after this I am left using 210k or something similar. If I run the load sequence again during the same session, it does not exceed 300k so it is not a memory leak. I have tried creating 1000 integers in a table and setting these to nil, which does give me back 146k. In addition I've tried a simpler class file (Account2.lua) that doesn't rely on a class.lua. This still incurs memory fragmentation but not as much as the one that uses Class.lua Can anybody explain what is going on here? How can I purge these objects and get back the memory? here is the code --------Class.lua------ -- class.lua -- Compatible with Lua 5.1 (not 5.0). --http://lua-users.org/wiki/SimpleLuaClasses function class(base,ctor) local c = {} -- a new class instance if not ctor and type(base) == 'function' then ctor = base base = nil elseif type(base) == 'table' then -- our new class is a shallow copy of the base class! for i,v in pairs(base) do c[i] = v end c._base = base end -- the class will be the metatable for all its objects, -- and they will look up their methods in it. c.__index = c -- expose a ctor which can be called by () local mt = {} mt.__call = function(class_tbl,...) local obj = {} setmetatable(obj,c) if ctor then ctor(obj,...) else -- make sure that any stuff from the base class is initialized! if base and base.init then base.init(obj,...) end end return obj end c.init = ctor c.instanceOf = function(self,klass) local m = getmetatable(self) while m do if m == klass then return true end m = m._base end return false end setmetatable(c,mt) return c end --------Account.lua------ --Import Class template require 'class' local classname = "Account" --Declare class Constructor Account = class(function(acc,balance) --Instance variables declared here. if(balance ~= nil)then acc.balance = balance else --default value acc.balance = 2097 end acc.classname = classname end) --------Account2.lua------ local account2 = {} account2.classname = "unnamed" account2.balance = 2097 -----------Constructor 1 do local metatable = { __index = account2; } function Account2() return setmetatable({}, metatable); end end --------Main.lua------ require 'Account' require 'Account2' MAX_OBJ = 5000; test_value = 1000; Obj_Table = {}; MODE_ACC0 = 0 --integers MODE_ACC1 = 1 --Account MODE_ACC2 = 2 --Account2 TEST_MODE = MODE_ACC0; Lua_mem = ""; print("##1) collectgarbage('count'): " .. collectgarbage('count')); function Load() for i=1, MAX_OBJ do if(TEST_MODE == MODE_ACC0 )then table.insert(Obj_Table, test_value); elseif(TEST_MODE == MODE_ACC1 )then table.insert(Obj_Table, Account(test_value)); --Account.lua elseif(TEST_MODE == MODE_ACC2 )then table.insert(Obj_Table, Account2()); --Account2.lua Obj_Table[i].balance = test_value; end end print("##2) collectgarbage('count'): " .. 
collectgarbage('count')); end function Purge() --metatable purge if(TEST_MODE ~= MODE_ACC0)then --purge stage 0: print("set each elements metatable to nil") for i=1, MAX_OBJ do setmetatable(Obj_Table[i], nil); end end --purge stage 1: print("set table element to nil") for i=1, MAX_OBJ do Obj_Table[i] = nil; end --purge stage 2: print("start table.remove..."); for i=1, MAX_OBJ do table.remove(Obj_Table, i); end print("...end table.remove"); --purge stage 3: print("create new object_table {}"); Obj_Table= {}; --purge stage 4: print("collectgarbage('collect')"); collectgarbage('collect'); print("##3) collectgarbage('count'): " .. collectgarbage('count')); end --Loop callback function OnUpdate() collectgarbage('collect'); Lua_mem = collectgarbage('count'); end ------------------- --NOTE: --On start of game runs Load(), another runs Purge() --Update I've updated the code with suggestions from comments below, and will post my findings later today.
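    A minimal sketch (separate from the code above; plain tables stand in for Account objects) of how to measure collectable memory reliably in Lua 5.1: drop every reference, then run a full collection twice, because the first pass only finalizes dead objects and the second actually frees them.

        local function measure()
            collectgarbage("collect")   -- first pass finalizes dead objects
            collectgarbage("collect")   -- second pass frees them
            return collectgarbage("count")
        end

        local baseline = measure()
        local objs = {}
        for i = 1, 5000 do
            objs[i] = setmetatable({ balance = 1000 }, {})  -- stand-in for Account(1000)
        end
        print("after load:", measure())

        objs = nil   -- dropping the single reference releases every element
        print("after purge:", measure(), "baseline:", baseline)

    Also worth knowing: Lua's allocator keeps freed pages around for reuse, so collectgarbage('count') rarely falls all the way back to the baseline even when nothing leaks; a memory ceiling that stays flat across repeated load/purge cycles, exactly as observed above, is the usual sign the objects really are being collected.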

    Read the article

  • (C++) Linking with namespaces causes duplicate symbol error

    - by user577072
    Hello. For the past few days, I have been trying to figure out how to link the files for a CLI gaming project I have been working on. There are two halves of the project, the Client and the Server code. The client needs two libraries I've made. The first is a general purpose game board. This is split between GameEngine.h and GameEngine.cpp. The header file looks something like this namespace gfdGaming { // struct sqr_size { // Index x; // Index y; // }; typedef struct { Index x, y; } sqr_size; const sqr_size sPos = {1, 1}; sqr_size sqr(Index x, Index y); sqr_size ePos; class board { // Prototypes / declarations for the class } } And the CPP file is just giving everything content #include "GameEngine.h" type gfdGaming::board::functions The client also has game-specific code (in this case, TicTacToe) split into declarations and definitions (TTT.h, Client.cpp). TTT.h is basically #include "GameEngine.h" #define TTTtar "localhost" #define TTTport 2886 using namespace gfdGaming; void* turnHandler(void*); namespace nsTicTacToe { GFDCON gfd; const char X = 'X'; const char O = 'O'; string MPhostname, mySID; board TTTboard; bool PlayerIsX = true, isMyTurn; char Player = X, Player2 = O; int recon(string* datHolder = NULL, bool force = false); void initMP(bool create = false, string hn = TTTtar); void init(); bool isTie(); int turnPlayer(Index loc, char lSym = Player); bool checkWin(char sym = Player); int mainloop(); int mainloopMP(); }; // NS I made the decision to put this in a namespace to group it instead of a class because there are some parts that would not work well in OOP, and it's much easier to implement later on. I have had trouble linking the client in the past, but this setup seems to work. My server is also split into two files, Server.h and Server.cpp. 
Server.h contains exactly: #include "../TicTacToe/TTT.h" // Server needs a full copy of TicTacToe code class TTTserv; struct TTTachievement_requirement { Index id; Index loc; bool inUse; }; struct TTTachievement_t { Index id; bool achieved; bool AND, inSameGame; bool inUse; bool (*lHandler)(TTTserv*); char mustBeSym; int mustBePlayer; string name, description; TTTachievement_requirement steps[safearray(8*8)]; }; class achievement_core_t : public GfdOogleTech { public: // May be shifted to private TTTachievement_t list[safearray(8*8)]; public: achievement_core_t(); int insert(string name, string d, bool samegame, bool lAnd, int lSteps[8*8], int mbP=0, char mbS=0); }; struct TTTplayer_t { Index id; bool inUse; string ip, sessionID; char sym; int desc; TTTachievement_t Ding[8*8]; }; struct TTTgame_t { TTTplayer_t Player[safearray(2)]; TTTplayer_t Spectator; achievement_core_t achievement_core; Index cTurn, players; port_t roomLoc; bool inGame, Xused, Oused, newEvent; }; class TTTserv : public gSserver { TTTgame_t Game; TTTplayer_t *cPlayer; port_t conPort; public: achievement_core_t *achiev; thread threads[8]; int parseit(string tDat, string tsIP); Index conCount; int parseit(string tDat, int tlUser, TTTplayer_t** retval); private: int parseProto(string dat, string sIP); int parseProto(string dat, int lUser); int cycleTurn(); void setup(port_t lPort = 0, bool complete = false); public: int newEvent; TTTserv(port_t tlPort = TTTport, bool tcomplete = true); TTTplayer_t* userDC(Index id, Index force = false); int sendToPlayers(string dat, bool asMSG = false); int mainLoop(volatile bool *play); }; // Other void* userHandler(void*); void* handleUser(void*); And in the CPP file I include Server.h and provide main() and the contents of all functions previously declared. Now to the problem at hand I am having issues when linking my server. More specifically, I get a duplicate symbol error for every variable in nsTicTacToe (and possibly in gfdGaming as well). Since I need the TicTacToe functions, I link Client.cpp ( without main() ) when building the server ld: duplicate symbol nsTicTacToe::PlayerIsX in Client.o and Server.o collect2: ld returned 1 exit status Command /Developer/usr/bin/g++-4.2 failed with exit code 1 It stops once a problem is encountered, but if PlayerIsX is removed / changed temporarily than another variable causes an error Essentially, I am looking for any advice on how to better organize my code to hopefully fix these errors. Disclaimers: -I apologize in advance if I provided too much or too little information, as it is my first time posting -I have tried using static and extern to fix these problems, but apparently those are not what I need Thank you to anyone who takes the time to read all of this and respond =)
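    A sketch of the conventional fix (file layout assumed): a namespace-scope variable defined in a header is emitted into every translation unit that includes it, which is precisely the duplicate-symbol collision ld reports for nsTicTacToe::PlayerIsX. Declare the shared state extern in the header and define each variable in exactly one .cpp:

        // TTT.h -- declarations only; safe to include from Client.cpp and Server.cpp
        #include <string>
        namespace nsTicTacToe {
            extern bool PlayerIsX, isMyTurn;
            extern char Player, Player2;
            extern std::string MPhostname, mySID;
        }

        // TTT.cpp -- the one and only definition of each variable
        #include "TTT.h"
        namespace nsTicTacToe {
            bool PlayerIsX = true, isMyTurn = false;
            char Player = 'X', Player2 = 'O';
            std::string MPhostname, mySID;
        }

    The detail that usually goes wrong when "extern didn't work": leaving an initializer on the header declaration (extern bool PlayerIsX = true; is still a definition despite the keyword), or forgetting the matching definitions in a single .cpp. The const variables X and O are safe to leave in the header, since namespace-scope consts have internal linkage in C++.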

    Read the article

  • Issue configuring JPA with Spring 3 in JBoss 4.2.2 server

    - by KVMKReddy
    Hi, I am facing issues in configuring JPA with Spring 3 in JBoss 4.2.2 server. Please find the below file of persistence.xml. <?xml version="1.0" encoding="UTF-8"?> <persistence xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd" version="1.0"> <persistence-unit name="TestPU"> <provider>org.hibernate.ejb.HibernatePersistence</provider> <jta-data-source>java:/TestDS</jta-data-source> <properties> <property name="hibernate.dialect" value="org.hibernate.dialect.Oracle10gDialect"/> <property name="hibernate.show_sql" value="true"/> </properties> </persistence-unit> </persistence> My spring-beans.xml is as below <bean id="MyAdvise" class=".......Aspect"> <property name="persister"> <bean id="dbPersister" class="..............DataBasePersister"> </bean> </property> </bean> <bean id="localContainerEntityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"> <property name="jpaVendorAdapter"> <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter"> <property name="showSql" value="true"/> <property name="database" value="ORACLE"/> </bean> </property> <property name="jpaProperties"> <props> <prop key="hibernate.transaction.manager_lookup_class">org.hibernate.transaction.JBossTransactionManagerLookup</prop> </props> </property> </bean> <bean id="myTxManager" class="org.springframework.orm.jpa.JpaTransactionManager"> <property name="entityManagerFactory" ref="localContainerEntityManagerFactory"/> </bean> <tx:annotation-driven transaction-manager="myTxManager" /> My persister bean is as follows. public class DataBasePersister implements IPersister { private static Logger log = Logger.getLogger(DataBasePersister.class); // The Entity Manager @PersistenceContext protected EntityManager entityManager; @Transactional(readOnly = false) public void persist(Object data) { log.info("IN persist() call. 
Is the data can castable to MethodStats -->:"+(data instanceof MethodStats)); log.info("Entity Manager instance -->:"+(entityManager)); ---------------------- ---------------------- ---------------------- } } I am getting the following exception when the spring container creating my persister bean org.springframework.transaction.CannotCreateTransactionException: Could not open JPA EntityManager for transaction; nested exception is java.lang.IllegalStateException: JTA EntityManager cannot access a transactions at org.springframework.orm.jpa.JpaTransactionManager.doBegin(JpaTransactionManager.java:382) at org.springframework.transaction.support.AbstractPlatformTransactionManager.getTransaction(AbstractPlatformTransactionManager.java:371) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:585) at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:309) at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:166) at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:89) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172) at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:202) at $Proxy147.getTransaction(Unknown Source) at org.springframework.transaction.interceptor.TransactionAspectSupport.createTransactionIfNecessary(TransactionAspectSupport.java:335) at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:105) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172) at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:89) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172) at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:202) at $Proxy141.persist(Unknown Source) at com.adp.sbs.aop.aspectj.SBSMethodStatsCollectorAspect.doAround(SBSMethodStatsCollectorAspect.java:63) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:585) at org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethodWithGivenArgs(AbstractAspectJAdvice.java:621) at org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethod(AbstractAspectJAdvice.java:610) at org.springframework.aop.aspectj.AspectJAroundAdvice.invoke(AspectJAroundAdvice.java:65) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:161) at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:89) at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172) at org.springframework.aop.framework.Cglib2AopProxy$DynamicAdvisedInterceptor.intercept(Cglib2AopProxy.java:621) at com.adp.sbs.aop.test.TestMethodLevelAnnotationStats$$EnhancerByCGLIB$$efbc78a8.MethodWithOneParamsAndReturnTypeAsString(<generated>) at com.adp.sbs.aop.test.SimpleTestServlet.testMethodAnnotations(SimpleTestServlet.java:46) at com.adp.sbs.aop.test.SimpleTestServlet.doPost(SimpleTestServlet.java:40) at com.adp.sbs.aop.test.SimpleTestServlet.doGet(SimpleTestServlet.java:33) at javax.servlet.http.HttpServlet.service(HttpServlet.java:690) at javax.servlet.http.HttpServlet.service(HttpServlet.java:803) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:96) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:230) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175) at org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:179) at org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:84) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.jboss.web.tomcat.service.jca.CachedConnectionValve.invoke(CachedConnectionValve.java:157) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:262) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:844) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:446) at java.lang.Thread.run(Thread.java:595) Caused by: java.lang.IllegalStateException: JTA EntityManager cannot access a transactions at org.hibernate.ejb.AbstractEntityManagerImpl.getTransaction(AbstractEntityManagerImpl.java:316) at org.springframework.orm.jpa.DefaultJpaDialect.beginTransaction(DefaultJpaDialect.java:70) at org.springframework.orm.jpa.vendor.HibernateJpaDialect.beginTransaction(HibernateJpaDialect.java:57) at org.springframework.orm.jpa.JpaTransactionManager.doBegin(JpaTransactionManager.java:332) Can you somebody please suggest me how to resolve this.
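    One common resolution, sketched with the stock JBoss 4.x JNDI name assumed: the persistence unit declares a <jta-data-source>, so the container owns transactions, and JpaTransactionManager fails because it calls EntityManager.getTransaction(), which a JTA entity manager forbids (hence "JTA EntityManager cannot access a transaction"). Delegating to Spring's JTA manager avoids that call entirely:

        <bean id="myTxManager"
              class="org.springframework.transaction.jta.JtaTransactionManager">
            <!-- java:/TransactionManager is the default location in JBoss 4.2 -->
            <property name="transactionManagerName" value="java:/TransactionManager"/>
        </bean>
        <tx:annotation-driven transaction-manager="myTxManager"/>

    The alternative is to keep JpaTransactionManager but switch the persistence unit to a <non-jta-data-source> with RESOURCE_LOCAL transactions, taking JBoss's transaction manager out of the picture.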

    Read the article

  • Delete old map markers and load new ones?

    - by pufAmuf
    I'm trying to delete the old markers and load new ones. Here is the code I have that loads certain markers on page load - no issues here: (function() { var customIcons = { 1: { icon: 'redmarker.png', shadow: 'markershadow.png' }, 2: { icon: 'purplemarker.png', shadow: 'markershadow.png' }, 3: { icon: 'silvermarker.png', shadow: 'markershadow.png' }, 4: { icon: 'goldmarker.png', shadow: 'markershadow.png' } }; window.onload = function(){ var MY_MAPTYPE_ID = 'custom'; var stylez = [ { "stylers": [ { "hue": "#00ccff" }, { "saturation": -100 }, { "lightness": 5 } ] },{ } ]; var latlng = new google.maps.LatLng(10, 10); var options = { zoom: 16, center: latlng, panControl: false, zoomControl: false, scaleControl: true, mapTypeControlOptions: { mapTypeIds: [MY_MAPTYPE_ID,google.maps.MapTypeId.SATELLITE] }, mapTypeId: MY_MAPTYPE_ID }; var map = new google.maps.Map(document.getElementById('map'), options); var styledMapOptions = { name: 'Map' }; var jayzMapType = new google.maps.StyledMapType(stylez, styledMapOptions); map.mapTypes.set(MY_MAPTYPE_ID, jayzMapType); var infoWindow = new google.maps.InfoWindow; // Change this depending on the name of your PHP file downloadUrl("getxml.php", function(data) { var xml = data.responseXML; var markers = xml.documentElement.getElementsByTagName("marker"); for (var i = 0; i < markers.length; i++) { var name = markers[i].getAttribute("id"); var address = markers[i].getAttribute("id"); var type = markers[i].getAttribute("venue_type"); var point = new google.maps.LatLng( parseFloat(markers[i].getAttribute("lat")), parseFloat(markers[i].getAttribute("lng"))); var html = "<b>" + name + "</b> <br/>" + address; var icon = customIcons[type] || {}; var marker = new google.maps.Marker({ map: map, position: point, icon: icon.icon, shadow: icon.shadow }); bindInfoWindow(marker, map, infoWindow, html); } }); //BUTTON SWITCHING //////////////////////////////////////////////////////////// //BUTTON SWITCHING //////////////////////////////////////////////////////////// //BUTTON SWITCHING //////////////////////////////////////////////////////////// //BUTTON SWITCHING //////////////////////////////////////////////////////////// //BUTTON SWITCHING //////////////////////////////////////////////////////////// //BUTTON SWITCHING //////////////////////////////////////////////////////////// //BUTTON SWITCHING //////////////////////////////////////////////////////////// jQuery(document).delegate(".topCanBeActive", "click", function( e ) { e.preventDefault(); jQuery(".topCanBeActive").removeClass("topActive"); jQuery(this).addClass("topActive"); switch( this.id ){ case 'all_activity_button': alert("search"); break; case 'events_button': downloadUrl("getxml2.php", function(data) { var xml = data.responseXML; var markers = xml.documentElement.getElementsByTagName("marker"); for (var i = 0; i < markers.length; i++) { var name = markers[i].getAttribute("id"); var address = markers[i].getAttribute("id"); var type = markers[i].getAttribute("venue_type"); var point = new google.maps.LatLng( parseFloat(markers[i].getAttribute("lat")), parseFloat(markers[i].getAttribute("lng"))); var html = "<b>" + name + "</b> <br/>" + address; var icon = customIcons[type] || {}; var marker = new google.maps.Marker({ map: map, position: point, icon: icon.icon, shadow: icon.shadow }); bindInfoWindow(marker, map, infoWindow, html); } }); break; case 'venues_button': alert("venues"); break; case 'search_button': alert("search"); break; } }); //END //////////////////////////////////////////////////////////// 
//END //////////////////////////////////////////////////////////// //END //////////////////////////////////////////////////////////// //END //////////////////////////////////////////////////////////// //END //////////////////////////////////////////////////////////// //END //////////////////////////////////////////////////////////// //END //////////////////////////////////////////////////////////// //END //////////////////////////////////////////////////////////// } function bindInfoWindow(marker, map, infoWindow, html) { google.maps.event.addListener(marker, 'click', function() { infoWindow.setContent(html); infoWindow.open(map, marker); }); } function downloadUrl(url, callback) { var request = window.ActiveXObject ? new ActiveXObject('Microsoft.XMLHTTP') : new XMLHttpRequest; request.onreadystatechange = function() { if (request.readyState == 4) { request.onreadystatechange = doNothing; callback(request, request.status); } }; request.open('GET', url, true); request.send(null); } function doNothing() {} })(); Now, I created a button section where if you press one button, a different xml file is loaded. Notice the section with the ////////////////////// However, upon clicking the button, nothing happens. The xml file itself is okay and loads the desired data. I also receive no errors in firebug. Any ideas why this happens? Thanks!
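    On the question as asked, note that the 'events_button' handler only adds the getxml2.php markers on top of whatever is already drawn; nothing removes the first set. A minimal sketch (variable names assumed) of the usual pattern is to remember every marker in an array so it can be cleared before the new XML is rendered:

        var currentMarkers = [];

        function clearMarkers() {
          for (var i = 0; i < currentMarkers.length; i++) {
            currentMarkers[i].setMap(null);   // detach the marker from the map
          }
          currentMarkers = [];
        }

        // In each downloadUrl callback, after building a marker:
        //     currentMarkers.push(marker);
        // At the top of the 'events_button' case, before fetching getxml2.php:
        //     clearMarkers();

    If genuinely nothing happens on the click (no new markers either), checking the network panel for the getxml2.php request and logging markers.length inside the callback will show whether the XML is actually being fetched and parsed.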

    Read the article

  • Replacing objects, handling clones, dealing with write logs

    - by Alix
    Hi everyone, I'm dealing with a problem I can't figure out how to solve, and I'd love to hear some suggestions. [NOTE: I realise I'm asking several questions; however, answers need to take into account all of the issues, so I cannot split this into several questions] Here's the deal: I'm implementing a system that underlies user applications and that protect shared objects from concurrent accesses. The application programmer (whose application will run on top of my system) defines such shared objects like this: public class MyAtomicObject { // These are just examples of fields you may want to have in your class. public virtual int x { get; set; } public virtual List<int> list { get; set; } public virtual MyClassA objA { get; set; } public virtual MyClassB objB { get; set; } } As you can see they declare the fields of their class as auto-generated properties (auto-generated means they don't need to implement get and set). This is so that I can go in and extend their class and implement each get and set myself in order to handle possible concurrent accesses, etc. This is all well and good, but now it starts to get ugly: the application threads run transactions, like this: The thread signals it's starting a transaction. This means we now need to monitor its accesses to the fields of the atomic objects. The thread runs its code, possibly accessing fields for reading or writing. If there are accesses for writing, we'll hide them from the other transactions (other threads), and only make them visible in step 3. This is because the transaction may fail and have to roll back (undo) its updates, and in that case we don't want other threads to see its "dirty" data. The thread signals it wants to commit the transaction. If the commit is successful, the updates it made will now become visible to everyone else. Otherwise, the transaction will abort, the updates will remain invisible, and no one will ever know the transaction was there. So basically the concept of transaction is a series of accesses that appear to have happened atomically, that is, all at the same time, in the same instant, which would be the moment of successful commit. (This is as opposed to its updates becoming visible as it makes them) In order to hide the write accesses in step 2, I clone the accessed field (let's say it's the field list) and put it in the transaction's write log. After that, any time the transaction accesses list, it will actually be accessing the clone in its write log, and not the global copy everyone else sees. Like this, any changes it makes will be done to the (invisible) clone, not to the global copy. If in step 3 the commit is successful, the transaction should replace the global copy with the updated list it has in its write log, and then the changes become visible for everyone else at once. It would be something like this: myAtomicObject.list = updatedCloneOfListInTheWriteLog; Problem #1: possible references to the list. Let's say someone puts a reference to the global list in a dictionary. When I do... myAtomicObject.list = updatedCloneOfListInTheWriteLog; ...I'm just replacing the reference in the field list, but not the real object (I'm not overwriting the data), so in the dictionary we'll still have a reference to the old version of the list. A possible solution would be to overwrite the data (in the case of a list, empty the global list and add all the elements of the clone). More generically, I would need to copy the fields of one list to the other. 
I can do this with reflection, but that's not very pretty. Is there any other way to do it? Problem #2: even if problem #1 is solved, I still have a similar problem with the clone: the application programmer doesn't know I'm giving him a clone and not the global copy. What if he puts the clone in a dictionary? Then at commit there will be some references to the global copy and some to the clone, when in truth they should all point to the same object. I thought about providing a wrapper object that contains both the cloned list and a pointer to the global copy, but the programmer doesn't know about this wrapper, so they're not going to use the pointer at all. The wrapper would be like this: public class Wrapper<T> : T { // This would be the pointer to the global copy. The local data is contained in whatever fields the wrapper inherits from T. private T thisPtr; } I do need this wrapper for comparisons: if I have a dictionary that has an entry with the global copy as key, if I look it up with the clone, like this: dictionary[updatedCloneOfListInTheWriteLog] I need it to return the entry, that is, to think that updatedCloneOfListInTheWriteLog and the global copy are the same thing. For this, I can just override Equals, GetHashCode, operator== and operator!=, no problem. However I still don't know how to solve the case in which the programmer unknowingly inserts a reference to the clone in a dictionary. Problem #3: the wrapper must extend the class of the object it wraps (if it's wrapping MyClassA, it must extend MyClassA) so that it's accepted wherever an object of that class (MyClass) would be accepted. However, that class (MyClassA) may be final. This is pretty horrible :$. Any suggestions? I don't need to use a wrapper, anything you can think of is fine. What I cannot change is the write log (I need to have a write log) and the fact that the programmer doesn't know about the clone. I hope I've made some sense. Feel free to ask for more info if something needs some clearing up. Thanks so much!
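    For problem #1, a sketch of the in-place copy the author alludes to (a hypothetical helper, not part of the described system): overwriting the global object's own state keeps every outstanding reference valid, which swapping the field reference cannot do.

        using System.Reflection;

        static class CommitHelper
        {
            // Shallow-copies every instance field from the committed clone onto
            // the global copy, so dictionaries holding the global reference see
            // the new state without the reference itself ever changing.
            public static void CopyFields<T>(T source, T target) where T : class
            {
                const BindingFlags flags = BindingFlags.Instance
                                         | BindingFlags.Public
                                         | BindingFlags.NonPublic;
                foreach (FieldInfo field in typeof(T).GetFields(flags))
                    field.SetValue(target, field.GetValue(source));
            }
        }

        // At commit time, something like:
        //     CommitHelper.CopyFields(updatedCloneOfListInTheWriteLog, globalList);

    Two caveats worth stating: the copy is shallow (referenced objects are shared, which is usually what a commit wants), and GetFields on typeof(T) misses private fields declared on base classes, so a complete solution walks the inheritance chain.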

    Read the article

  • jQuery Mobile page structure

    - by Die 20
    I built a jquery mobile site a while back and I have recently been expanding on it and noticing performance issues. I believe it is because I constructed the site using a multi-page set up where a single php file houses the following pages: **ALL_PAGES.PHP** <html> <head> /* external css and js files */ </head> <body> <div date-role="page" id="main"> <div class="page_link"> page 1 </div> <div class="page_link"> page 2 </div> <div class="page_link"> page 3 </div> </div> <div date-role="page" id="page 1"> <div class="page_link"> main </div> <div class="page_link"> page 2 </div> <div class="page_link"> page 3 </div> </div> <div date-role="page" id="page 2"> <div class="page_link"> main </div> <div class="page_link"> page 1 </div> <div class="page_link"> page 3 </div> </div> <div date-role="page" id="page 3"> <div class="page_link"> main </div> <div class="page_link"> page 1 </div> <div class="page_link"> page 2 </div> </div> </body> </html> **end ALL_PAGES.PHP ** I want to break away from this multi-page setup on one php file, and move to a setup where each page is a separate php file. To accomplish this I took the html from each page and moved it to its own php page. Then I added href links in replace of the mobile.change() functions I used for the "page_link" classes. **MAIN.PHP** <html> <head> external css and js files </head> <body> <div date-role="page" id="page 1"> <a href="/main.php"> main </a> <a href="/page_2.php"> page 2 </a> <a href="/page_3.php"> page 3 </a> </div> </body> </html> **end MAIN.PHP** **PAGE_1.PHP** <div date-role="page" id="page 1"> <a href="/main.php"> main </a> <a href="/page_2.php"> page 2 </a> <a href="/page_3.php"> page 3 </a> </div> **end PAGE_1.PHP** **PAGE_2.PHP** <div date-role="page" id="page 2"> <a href="/main.php"> main </a> <a href="/page_1.php"> page 1 </a> <a href="/page_3.php"> page 3 </a> </div> **end PAGE_2.PHP** **PAGE_3.PHP** <div date-role="page" id="page 3"> <a href="/main.php"> main </a> <a href="/page_1.php"> page 1 </a> <a href="/page_2.php"> page 2 </a> </div> **end PAGE_3.PHP** The site works fine except when the user hits the refresh button in the browser. When that happens each page loses access to any external css and js files located on the main page. I am fairly new to JQM so any advice would be helpful.
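    A sketch of the usual fix: jQuery Mobile's AJAX navigation injects only the data-role="page" div from a fetched file, so the head assets loaded with the first page keep working while you navigate; but a browser refresh loads the current file directly, and a file without its own head has no CSS or JS at all. Giving every physical file a complete document shell (asset paths assumed) solves the refresh case:

        <!-- page_1.php -- a complete shell, not just the page div -->
        <html>
        <head>
          <link rel="stylesheet" href="/css/jquery.mobile.min.css"/>
          <script src="/js/jquery.min.js"></script>
          <script src="/js/jquery.mobile.min.js"></script>
        </head>
        <body>
          <div data-role="page" id="page_1">
            <a href="/main.php">main</a>
            <a href="/page_2.php">page 2</a>
            <a href="/page_3.php">page 3</a>
          </div>
        </body>
        </html>

    Two details in the posted markup are also worth fixing: the attribute is data-role (the question's date-role is silently ignored by jQuery Mobile), and ids such as "page 1" contain spaces, which is invalid HTML and breaks page-id navigation.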

    Read the article

  • Access query questions

    - by kralco626
    It was suggested that I repost this question as I didn't do a very good job describing my issue the first time (http://stackoverflow.com/questions/2921286/access-question).

    THE SITUATION: I have inspections from many months of many years. Sometimes there is more than one inspection in a month, sometimes there is no inspection. However, the report the clients want requires that I have EXACTLY ONE record per month for the time frame they request. They understand the data issues and have stated that if there is more than one inspection in a month, take the latest one; if there is no inspection for that month, go back in time until you find one and use that one. A sample of the data follows (I am including many records because I was told I did not include enough data on my last try):

    equip_id  month  year  runtime  date
    1         5      2008  400      5/10/2008 12:34 PM
    1         7      2008  500      7/12/2008 1:45 PM
    1         8      2008  600      8/20/2008 1:12 PM
    1         8      2008  605      8/30/2008 8:00 AM
    1         1      2010  2000     1/12/2010 2:00 PM
    1         3      2010  2200     3/24/2010 10:00 AM
    2         7      2009  1000     7/20/2009 8:00 AM
    2         10     2009  1400     10/14/2009 9:00 AM
    2         1      2010  1600     1/15/2010 1:00 PM
    2         1      2010  1610     1/30/2010 4:00 PM
    2         3      2010  1800     3/15/2010 1:00 PM

    After all the transformations to the data are done, it should look like this:

    equip_id  month  year  runtime  date
    1         5      2008  400      5/10/2008 12:34 PM
    1         6      2008  400      5/10/2008 12:34 PM
    1         7      2008  500      7/12/2008 1:45 PM
    1         8      2008  605      8/30/2008 8:00 AM
    1         9      2008  605      8/30/2008 8:00 AM
    1         10     2008  605      8/30/2008 8:00 AM
    1         11     2008  605      8/30/2008 8:00 AM
    1         12     2008  605      8/30/2008 8:00 AM
    1         1      2009  605      8/30/2008 8:00 AM
    1         2      2009  605      8/30/2008 8:00 AM
    1         3      2009  605      8/30/2008 8:00 AM
    1         4      2009  605      8/30/2008 8:00 AM
    1         5      2009  605      8/30/2008 8:00 AM
    1         6      2009  605      8/30/2008 8:00 AM
    1         7      2009  605      8/30/2008 8:00 AM
    1         8      2009  605      8/30/2008 8:00 AM
    1         9      2009  605      8/30/2008 8:00 AM
    1         10     2009  605      8/30/2008 8:00 AM
    1         11     2009  605      8/30/2008 8:00 AM
    1         12     2009  605      8/30/2008 8:00 AM
    1         1      2010  2000     1/12/2010 2:00 PM
    1         2      2010  2000     1/12/2010 2:00 PM
    1         3      2010  2200     3/24/2010 10:00 AM
    2         7      2009  1000     7/20/2009 8:00 AM
    2         8      2009  1000     7/20/2009 8:00 AM
    2         9      2009  1000     7/20/2009 8:00 AM
    2         10     2009  1400     10/14/2009 9:00 AM
    2         11     2009  1400     10/14/2009 9:00 AM
    2         12     2009  1400     10/14/2009 9:00 AM
    2         1      2010  1610     1/30/2010 4:00 PM
    2         2      2010  1610     1/30/2010 4:00 PM
    2         3      2010  1800     3/15/2010 1:00 PM

    I think that is the most accurate depiction of the problem I can give. I will now say what I have tried, although if someone else has a better approach I am perfectly willing to throw away what I have done and do it differently...

    STEP 1: Create a query that removes the duplicates from the data, i.e. only one record per equip_id for each month/year, keeping the latest one. (Done successfully.)

    STEP 2: Create a table of the date ranges the client wants the report for. (This is done dynamically at runtime.) This table has two fields, Month and Year, so if the client wants a report from Feb 2008 to March 2010 the table would look like:

    Month  Year
    2      2008
    3      2008
    .      .
    12     2008
    1      2009
    .      .
    12     2009
    1      2010
    2      2010
    3      2010

    I then left-joined this table with my query from step 1. So now I have a record for every month and every year they want the report for, with nulls (or blanks), or sometimes 0s (not sure why; Access is weird, sometimes they are nulls and sometimes they are 0s), for the runtimes that are not available. I don't particularly like this solution, but I'll do it if I have to. (This is also done successfully.)

    STEP 3: Fill in the missing runtime values. This I have NOT done successfully. Note that if the requested range is Feb 2008 to March 2010 and the oldest record for a particular equip_id is, say, June 2008, it is OK for the runtimes to be null (or zero) for Feb-May 2008. I am working with the following query for this step:

    SELECT equip_id AS e_id, year, month,
           (SELECT TOP 1 runhours FROM qry_1_c_One_Record_per_Month a
            WHERE a.equip_id = e_id ORDER BY year, month)
    FROM qry_1_c_One_Record_per_Month
    WHERE runhours IS NULL OR runhours = 0
    UNION
    SELECT equip_id, year, month, runhours
    FROM qry_1_c_One_Record_per_Month
    WHERE runhours IS NOT NULL AND runhours <> 0;

    However, I clearly can't check a.equip_id = e_id, so I don't have any way to make sure I'm looking at the correct equip_id. SUMMARY: Like I said, I'm willing to throw away any part, or all, of what I tried; I'm just trying to give everyone a complete picture. I REALLY appreciate ANY help! Thanks so much in advance!
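    A sketch of a correlated version of that subquery (table and field names taken from the question, logic assumed): aliasing the outer table lets the inner SELECT match on equip_id and walk backwards in time to the latest non-empty reading at or before each month.

        SELECT m.equip_id, m.year, m.month,
               (SELECT TOP 1 q.runhours
                FROM   qry_1_c_One_Record_per_Month AS q
                WHERE  q.equip_id = m.equip_id
                  AND  q.runhours > 0
                  AND  (q.year < m.year
                        OR (q.year = m.year AND q.month <= m.month))
                ORDER BY q.year DESC, q.month DESC) AS runhours
        FROM qry_1_c_One_Record_per_Month AS m;

    Because Access allows a correlated subquery in the SELECT list, the UNION of null and non-null rows becomes unnecessary, and months older than an equipment's first inspection simply come back Null, which matches the stated requirement.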

    Read the article

  • Ruby on Rails has_many :through relationship

    - by BennyB
    Hi i'm having a little trouble with a has_many through relationship for my app and was hoping to find some help. So i've got Users & Lectures. Lectures are created by one user but then other users can then "join" the Lectures that have been created. Users have their own profile feed of the Lectures they have created & also have a feed of Lectures friends have created. This question however is not about creating a lecture but rather "Joining" a lecture that has been created already. I've created a "lecturerelationships" model & controller to handle this relationship between Lectures & the Users who have Joined (which i call "actives"). Users also then MUST "Exit" the Lecture (either by clicking "Exit" or navigating to one of the header navigation links). I'm grateful if anyone can work through some of this with me... I've got: Users.rb model Lectures.rb model Users_controller Lectures_controller then the following model lecturerelationship.rb class lecturerelationship < ActiveRecord::Base attr_accessible :active_id, :joinedlecture_id belongs_to :active, :class_name => "User" belongs_to :joinedlecture, :class_name => "Lecture" validates :active_id, :presence => true validates :joinedlecture_id, :presence => true end lecturerelationships_controller.rb class LecturerelationshipsController < ApplicationController before_filter :signed_in_user def create @lecture = Lecture.find(params[:lecturerelationship][:joinedlecture_id]) current_user.join!(@lecture) redirect_to @lecture end def destroy @lecture = Lecturerelationship.find(params[:id]).joinedlecture current_user.exit!(@user) redirect_to @user end end Lectures that have been created (by friends) show up on a users feed in the following file _activity_item.html.erb <li id="<%= activity_item.id %>"> <%= link_to gravatar_for(activity_item.user, :size => 200), activity_item.user %><br clear="all"> <%= render :partial => 'shared/join', :locals => {:activity_item => activity_item} %> <span class="title"><%= link_to activity_item.title, lecture_url(activity_item) %></span><br clear="all"> <span class="user"> Joined by <%= link_to activity_item.user.name, activity_item.user %> </span><br clear="all"> <span class="timestamp"> <%= time_ago_in_words(activity_item.created_at) %> ago. </span> <% if current_user?(activity_item.user) %> <%= link_to "delete", activity_item, :method => :delete, :confirm => "Are you sure?", :title => activity_item.content %> <% end %> </li> Then you see I link to the the 'shared/join' partial above which can be seen in the file below _join.html.erb <%= form_for(current_user.lecturerelationships.build(:joinedlecture_id => activity_item.id)) do |f| %> <div> <%= f.hidden_field :joinedlecture_id %> </div> <%= f.submit "Join", :class => "btn btn-large btn-info" %> <% end %> Some more files that might be needed: config/routes.rb SampleApp::Application.routes.draw do resources :users do member do get :following, :followers, :joined_lectures end end resources :sessions, :only => [:new, :create, :destroy] resources :lectures, :only => [:create, :destroy, :show] resources :relationships, :only => [:create, :destroy] #for users following each other resources :lecturerelationships, :only => [:create, :destroy] #users joining existing lectures So what happens is the lecture comes in my activity_feed with a Join button option at the bottom...which should create a lecturerelationship of an "active" & "joinedlecture" (which obviously are supposed to be coming from the user & lecture classes. 
But the error i get when i click the join button is as follows: ActiveRecord::StatementInvalid in LecturerelationshipsController#create SQLite3::ConstraintException: constraint failed: INSERT INTO "lecturerelationships" ("active_id", "created_at", "joinedlecture_id", "updated_at") VALUES (?, ?, ?, ?) Also i've included my user model (seems the error is referring to it) user.rb class User < ActiveRecord::Base attr_accessible :email, :name, :password, :password_confirmation has_secure_password has_many :lectures, :dependent => :destroy has_many :lecturerelationships, :foreign_key => "active_id", :dependent => :destroy has_many :joined_lectures, :through => :lecturerelationships, :source => :joinedlecture before_save { |user| user.email = email.downcase } before_save :create_remember_token validates :name, :presence => true, :length => { :maximum => 50 } VALID_EMAIL_REGEX = /\A[\w+\-.]+@[a-z\d\-.]+\.[a-z]+\z/i validates :email, :presence => true, :format => { :with => VALID_EMAIL_REGEX }, :uniqueness => { :case_sensitive => false } validates :password, :presence => true, :length => { :minimum => 6 } validates :password_confirmation, :presence => true def activity # This feed is for "My Activity" - basically lectures i've started Lecture.where("user_id = ?", id) end def friendactivity Lecture.from_users_followed_by(self) end # lECTURE TO USER (JOINING) RELATIONSHIPS def joined?(selected_lecture) lecturerelationships.find_by_joinedlecture_id(selected_lecture.id) end def join!(selected_lecture) lecturerelationships.create!(:joinedlecture_id => selected_lecture.id) end def exit!(selected_lecture) lecturerelationships.find_by_joinedlecture_id(selected_lecture.id).destroy end end Thanks for any and all help - i'll be on here for a while so as mentioned i'd GREATLY appreciate someone who may have the time to work through my issues with me...
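    A sketch of the first things to check (the migration is assumed; it is not shown in the question): SQLite3::ConstraintException is raised at the database level, before any model validation runs, so the usual causes are a NOT NULL column receiving nil or a unique index being hit. A migration along these lines makes the intent explicit:

        class CreateLecturerelationships < ActiveRecord::Migration
          def change
            create_table :lecturerelationships do |t|
              t.integer :active_id,        :null => false
              t.integer :joinedlecture_id, :null => false
              t.timestamps
            end
            # one join row per (user, lecture); duplicates fail at the db level
            add_index :lecturerelationships, [:active_id, :joinedlecture_id],
                      :unique => true
          end
        end

    With a unique index like that, clicking Join twice for the same lecture produces exactly this error, so guard the create (unless current_user.joined?(@lecture)) or rescue ActiveRecord::RecordNotUnique. Separately, the posted destroy action calls current_user.exit!(@user) and redirect_to @user with @user never assigned; it presumably means current_user.exit!(@lecture) and redirect_to @lecture.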

    Read the article

  • Django + dbxml + Apache = problems. Any solutions?

    - by Jason
    I'm trying to set up a Django application using WSGI. That works fine. However, I am having some issues with part of my Django app that uses BDB XML. My Apache config is as follows: Listen 8000 WSGISocketPrefix /tmp/wsgi <VirtualHost *:8000> ServerName <server name> DocumentRoot <path to doc root> LogLevel info WSGIScriptAlias / <path to wsgi> WSGIApplicationGroup %{GLOBAL} WSGIDaemonProcess debug threads=1 WSGIProcessGroup debug </VirtualHost> However, I'm still getting the following error: DB_ENV->repmgr_stat interface requires an environment configured for the replication subsystem [error] child died with signal 11 My environment is opened as: environment = DBEnv() environment.open( <absolute db env path>, DB_CREATE|DB_INIT_LOCK|DB_INIT_LOG|DB_INIT_MPOOL, 0 ) I am using: python 2.6.2 apache 2.2 ubuntu 9.04 dbxml 2.5.13 compiled from source (so libdb-4.8, bsddb3, all that jazz) I see Apache seems to link to libdb-4.6. Is this a problem? ldd /usr/sbin/apache2 | grep libdb libdb-4.6.so => /usr/lib/libdb-4.6.so (0xb7c01000) Updated Program received signal SIGSEGV, Segmentation fault. [Switching to Thread 0xb5a48b90 (LWP 12700)] 0x00000000 in ?? () (gdb) thread apply all bt Thread 4 (Thread 0xb6a67b90 (LWP 12698)): #0 0xb7f11422 in __kernel_vsyscall () #1 0xb7de07b1 in select () from /lib/tls/i686/cmov/libc.so.6 #2 0xb7ea5bcf in apr_sleep () from /usr/lib/libapr-1.so.0 #3 0xb6d7afee in ?? () from /usr/lib/apache2/modules/mod_wsgi.so #4 0xb7ea38ec in ?? () from /usr/lib/libapr-1.so.0 #5 0xb7e6d4ff in start_thread () from /lib/tls/i686/cmov/libpthread.so.0 #6 0xb7de849e in clone () from /lib/tls/i686/cmov/libc.so.6 Thread 3 (Thread 0xb6249b90 (LWP 12699)): #0 0xb7f11422 in __kernel_vsyscall () #1 0xb7de07b1 in select () from /lib/tls/i686/cmov/libc.so.6 #2 0xb7ea5bcf in apr_sleep () from /usr/lib/libapr-1.so.0 #3 0xb6d7ab39 in ?? () from /usr/lib/apache2/modules/mod_wsgi.so #4 0xb7ea38ec in ?? () from /usr/lib/libapr-1.so.0 #5 0xb7e6d4ff in start_thread () from /lib/tls/i686/cmov/libpthread.so.0 #6 0xb7de849e in clone () from /lib/tls/i686/cmov/libc.so.6 Thread 2 (Thread 0xb5a48b90 (LWP 12700)): #0 0x00000000 in ?? () #1 0xb4f03b5e in DbXml::XmlManager::XmlManager () from /home/jason/dbxml-2.5.13/install/lib/libdbxml-2.5.so #2 0xb501b29b in _wrap_new_XmlManager (self=0x0, args=0xac66fcc) at dbxml_python_wrap.cpp:5183 #3 0xb6b77aed in PyCFunction_Call () from /usr/lib/libpython2.6.so.1.0 #4 0xb6b3198c in PyObject_Call () from /usr/lib/libpython2.6.so.1.0 #5 0xb6bd70b5 in PyEval_EvalFrameEx () from /usr/lib/libpython2.6.so.1.0 #6 0xb6bdb910 in PyEval_EvalCodeEx () from /usr/lib/libpython2.6.so.1.0 #7 0xb6b6187a in ?? () from /usr/lib/libpython2.6.so.1.0 #8 0xb6b3198c in PyObject_Call () from /usr/lib/libpython2.6.so.1.0 #9 0xb6b427a8 in ?? () from /usr/lib/libpython2.6.so.1.0 #10 0xb6b3198c in PyObject_Call () from /usr/lib/libpython2.6.so.1.0 #11 0xb6b9ae03 in ?? () from /usr/lib/libpython2.6.so.1.0 #12 0xb6b90f55 in ?? () from /usr/lib/libpython2.6.so.1.0 #13 0xb6b3198c in PyObject_Call () from /usr/lib/libpython2.6.so.1.0 #14 0xb6bd7618 in PyEval_EvalFrameEx () from /usr/lib/libpython2.6.so.1.0 #15 0xb6bdb910 in PyEval_EvalCodeEx () from /usr/lib/libpython2.6.so.1.0 #16 0xb6b6187a in ?? () from /usr/lib/libpython2.6.so.1.0 #17 0xb6b3198c in PyObject_Call () from /usr/lib/libpython2.6.so.1.0 #18 0xb6b427a8 in ?? 
() from /usr/lib/libpython2.6.so.1.0 #19 0xb6b3198c in PyObject_Call () from /usr/lib/libpython2.6.so.1.0 #20 0xb6bd3a34 in PyEval_CallObjectWithKeywords () from /usr/lib/libpython2.6.so.1.0 #21 0xb6b44a7d in PyInstance_New () from /usr/lib/libpython2.6.so.1.0 #22 0xb6b3198c in PyObject_Call () from /usr/lib/libpython2.6.so.1.0 #23 0xb6bd7618 in PyEval_EvalFrameEx () from /usr/lib/libpython2.6.so.1.0 #24 0xb6bdb910 in PyEval_EvalCodeEx () from /usr/lib/libpython2.6.so.1.0 #25 0xb6b61969 in ?? () from /usr/lib/libpython2.6.so.1.0 #26 0xb6b3198c in PyObject_Call () from /usr/lib/libpython2.6.so.1.0 #27 0xb6bd70b5 in PyEval_EvalFrameEx () from /usr/lib/libpython2.6.so.1.0 #28 0xb6bdb910 in PyEval_EvalCodeEx () from /usr/lib/libpython2.6.so.1.0 #29 0xb6b61969 in ?? () from /usr/lib/libpython2.6.so.1.0 #30 0xb6b3198c in PyObject_Call () from /usr/lib/libpython2.6.so.1.0 #31 0xb6b427a8 in ?? () from /usr/lib/libpython2.6.so.1.0 #32 0xb6b3198c in PyObject_Call () from /usr/lib/libpython2.6.so.1.0 #33 0xb6b9b483 in ?? () from /usr/lib/libpython2.6.so.1.0 #34 0xb6b3198c in PyObject_Call () from /usr/lib/libpython2.6.so.1.0 #35 0xb6bd70b5 in PyEval_EvalFrameEx () from /usr/lib/libpython2.6.so.1.0 #36 0xb6bdab4f in PyEval_EvalFrameEx () from /usr/lib/libpython2.6.so.1.0 #37 0xb6bdb910 in PyEval_EvalCodeEx () from /usr/lib/libpython2.6.so.1.0 #38 0xb6b6187a in ?? () from /usr/lib/libpython2.6.so.1.0 #39 0xb6b3198c in PyObject_Call () from /usr/lib/libpython2.6.so.1.0 #40 0xb6b427a8 in ?? () from /usr/lib/libpython2.6.so.1.0 #41 0xb6b3198c in PyObject_Call () from /usr/lib/libpython2.6.so.1.0 #42 0xb6b9b483 in ?? () from /usr/lib/libpython2.6.so.1.0 #43 0xb6b3198c in PyObject_Call () from /usr/lib/libpython2.6.so.1.0 #44 0xb6bd3a34 in PyEval_CallObjectWithKeywords () from /usr/lib/libpython2.6.so.1.0 #45 0xb6d7172d in ?? () from /usr/lib/apache2/modules/mod_wsgi.so #46 0xb6d7539f in ?? () from /usr/lib/apache2/modules/mod_wsgi.so #47 0xb6d7e1d8 in ?? () from /usr/lib/apache2/modules/mod_wsgi.so #48 0xb6d7a42c in ?? () from /usr/lib/apache2/modules/mod_wsgi.so #49 0xb6d7a8bd in ?? () from /usr/lib/apache2/modules/mod_wsgi.so #50 0xb6d7a9c5 in ?? () from /usr/lib/apache2/modules/mod_wsgi.so #51 0xb7ea38ec in ?? () from /usr/lib/libapr-1.so.0 #52 0xb7e6d4ff in start_thread () from /lib/tls/i686/cmov/libpthread.so.0 #53 0xb7de849e in clone () from /lib/tls/i686/cmov/libc.so.6 Thread 1 (Thread 0xb7460b00 (LWP 12697)): #0 0xb7f11422 in __kernel_vsyscall () #1 0xb7e75300 in sigwait () from /lib/tls/i686/cmov/libpthread.so.0 #2 0xb7ea3f3b in apr_signal_thread () from /usr/lib/libapr-1.so.0 #3 0xb6d7b48d in ?? () from /usr/lib/apache2/modules/mod_wsgi.so #4 0xb6d7bc98 in ?? () from /usr/lib/apache2/modules/mod_wsgi.so #5 0xb6d79632 in ?? () from /usr/lib/apache2/modules/mod_wsgi.so #6 0xb7e9a2c9 in apr_proc_other_child_alert () from /usr/lib/libapr-1.so.0 #7 0x08092202 in ap_mpm_run () #8 0x080673c8 in main () #0 0x00000000 in ?? ()
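    The ldd output already points at the likely answer: the Apache process hosting mod_wsgi has libdb-4.6 loaded (pulled in by Apache/APR), while dbxml 2.5.13 and bsddb3 were built against libdb-4.8, and two Berkeley DB versions resolving symbols inside one process commonly segfaults exactly where the backtrace shows, in the XmlManager constructor. A quick confirmation (paths assumed from the question):

        # which Berkeley DB does Apache drag into the process?
        ldd /usr/sbin/apache2 | grep libdb
        # ...and which one does the dbxml build expect?
        ldd /home/jason/dbxml-2.5.13/install/lib/libdbxml-2.5.so | grep libdb

    The usual ways out are rebuilding Apache/APR against the same libdb as dbxml, or keeping the dbxml work out of the Apache process entirely, e.g. a small standalone daemon the Django app talks to over a socket.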

    Read the article

  • How do you override a method that is called by another class via parent::method?

    - by dan.codes
    I am trying to extend Mage_Catalog_Block_Product_View I have it setup in my local directory as its own module and everything works fine, I wasn't getting the results that I wanted. I then saw that another class extended that class as well. The method I am trying to override is the protected function _prepareLayout() This is the function class Mage_Review_Block_Product_View extends Mage_Catalog_Block_Product_View protected function _prepareLayout() { $this-&gt;getLayout()-&gt;createBlock('catalog/breadcrumbs'); $headBlock = $this-&gt;getLayout()-&gt;getBlock('head'); if ($headBlock) { $title = $this-&gt;getProduct()-&gt;getMetaTitle(); if ($title) { $headBlock-&gt;setTitle($title); } $keyword = $this-&gt;getProduct()-&gt;getMetaKeyword(); $currentCategory = Mage::registry('current_category'); if ($keyword) { $headBlock-&gt;setKeywords($keyword); } elseif($currentCategory) { $headBlock-&gt;setKeywords($this-&gt;getProduct()-&gt;getName()); } $description = $this-&gt;getProduct()-&gt;getMetaDescription(); if ($description) { $headBlock-&gt;setDescription( ($description) ); } else { $headBlock-&gt;setDescription( $this-&gt;getProduct()-&gt;getDescription() ); } } return parent::_prepareLayout(); } I am trying to modify it just a bit with the following, keep in mind I know there is a title prefix and suffix but I needed it only for the product page and also I needed to add text to the description. class MyCompany_Catalog_Block_Product_View extends Mage_Catalog_Block_Product_View protected function _prepareLayout() { $storeId = Mage::app()-&gt;getStore()-&gt;getId(); $this-&gt;getLayout()-&gt;createBlock('catalog/breadcrumbs'); $headBlock = $this-&gt;getLayout()-&gt;getBlock('head'); if ($headBlock) { $title = $this-&gt;getProduct()-&gt;getMetaTitle(); if ($title) { if($storeId == 2){ $title = "Pool Supplies Fast - " .$title; $headBlock-&gt;setTitle($title); } $headBlock-&gt;setTitle($title); }else{ $path = Mage::helper('catalog')-&gt;getBreadcrumbPath(); foreach ($path as $name =&gt; $breadcrumb) { $title[] = $breadcrumb['label']; } $newTitle = "Pool Supplies Fast - " . join($this-&gt;getTitleSeparator(), array_reverse($title)); $headBlock-&gt;setTitle($newTitle); } $keyword = $this-&gt;getProduct()-&gt;getMetaKeyword(); $currentCategory = Mage::registry('current_category'); if ($keyword) { $headBlock-&gt;setKeywords($keyword); } elseif($currentCategory) { $headBlock-&gt;setKeywords($this-&gt;getProduct()-&gt;getName()); } $description = $this-&gt;getProduct()-&gt;getMetaDescription(); if ($description) { if($storeId == 2){ $description = "Pool Supplies Fast - ". $this-&gt;getProduct()-&gt;getName() . " - " . $description; $headBlock-&gt;setDescription( ($description) ); }else{ $headBlock-&gt;setDescription( ($description) ); } } else { if($storeId == 2){ $description = "Pool Supplies Fast: ". $this-&gt;getProduct()-&gt;getName() . " - " . $this-&gt;getProduct()-&gt;getDescription(); $headBlock-&gt;setDescription( ($description) ); }else{ $headBlock-&gt;setDescription( $this-&gt;getProduct()-&gt;getDescription() ); } } } return Mage_Catalog_Block_Product_Abstract::_prepareLayout(); } This executs fine but then I notice that the following class Mage_Review_Block_Product_View_List extends which extends Mage_Review_Block_Product_View and that extends Mage_Catalog_Block_Product_View as well. 
Inside this class they call the _prepareLayout as well and call the parent with parent::_prepareLayout() class Mage_Review_Block_Product_View_List extends Mage_Review_Block_Product_View protected function _prepareLayout() { parent::_prepareLayout(); if ($toolbar = $this-&gt;getLayout()-&gt;getBlock('product_review_list.toolbar')) { $toolbar-&gt;setCollection($this-&gt;getReviewsCollection()); $this-&gt;setChild('toolbar', $toolbar); } return $this; } So obviously this just calls the same class I am extending and runs the function I am overiding but it doesn't get to my class because it is not in my class hierarchy and since it gets called after my class all the stuff in the parent class override what I have set. I'm not sure about the best way to extend this type of class and method, there has to be a good way to do this, I keep finding I am running into issues when trying to overide these prepare methods that are derived from the abstract classes, there seems to be so many classes overriding them and calling parent::method. What is the best way to override these functions?
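    A sketch of the standard Magento answer (module names assumed): parent:: is resolved against the literal class hierarchy, so subclassing Mage_Catalog_Block_Product_View can never change what Mage_Review_Block_Product_View inherits. What substitutes your class is the factory rewrite in a module's config.xml, and the review block needs its own rewrite if its behaviour should change too:

        <global>
          <blocks>
            <catalog>
              <rewrite>
                <product_view>MyCompany_Catalog_Block_Product_View</product_view>
              </rewrite>
            </catalog>
            <review>
              <rewrite>
                <!-- hypothetical subclass of Mage_Review_Block_Product_View that
                     applies the same title/description changes -->
                <product_view>MyCompany_Review_Block_Product_View</product_view>
              </rewrite>
            </review>
          </blocks>
        </global>

    To avoid duplicating the title and description logic across two subclasses, it can live in a helper that both call, or the meta tweaks can move into an observer on a layout or controller event instead of _prepareLayout overrides.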

    Read the article

  • Writing an optimised and efficient search engine with MySQL and ColdFusion

    - by Mel
    I have a search page with the following scenarios listed below. I was told there was a better way to do it, but not how, and that I am using too many if statements, and that it's prone to causing an error through url manipulation: Search.cfm will processes a search made from a search bar present on all pages, with one search input (titleName). If search.cfm is accessed manually (through URL not through using the simple search bar on all pages) it displays an advanced search form with three inputs (titleName, genreID, platformID) or it evaluates searchResponse variable and decides what to do. If simple search query is blank, has no results, or less than 3 characters it displays an error If advanced search query is blank, has no results, or less than 3 characters it displays an error If any successful search returns results, they come back normally. The top-of-page logic is as follows: <!---SET DEFAULT VARIABLE---> <cfparam name="variables.searchResponse" default=""> <!---CHECK TO SEE IF SIMPLE SEARCH A FORM WAS SUBMITTED AND EXECUTE SEARCH IF IT WAS---> <cfif IsDefined("Form.simpleSearch") AND Len(Trim(Form.titleName)) LTE 2> <cfset variables.searchResponse = "invalidString"> <cfelseif IsDefined("Form.simpleSearch") AND Len(Trim(Form.titleName)) GTE 3> <!---EXECUTE METHOD AND GET DATA---> <cfinvoke component="myComponent" method="simpleSearch" searchString="#Form.titleName#" returnvariable="simpleSearchResult"> <cfset variables.searchResponse = "simpleSearchResult"> </cfif> <!---CHECK IF ANY RECORDS WERE FOUND---> <cfif IsDefined("variables.simpleSearchResult") AND simpleSearchResult.RecordCount IS 0> <cfset variables.searchResponse = "noResult"> </cfif> <!---CHECK IF ADVANCED SEARCH FORM WAS SUBMITTED---> <cfif IsDefined("Form.AdvancedSearch") AND Len(Trim(Form.titleName)) LTE 2> <cfset variables.searchResponse = "invalidString"> <cfelseif IsDefined("Form.advancedSearch") AND Len(Trim(Form.titleName)) GTE 2> <!---EXECUTE METHOD AND GET DATA---> <cfinvoke component="myComponent" method="advancedSearch" returnvariable="advancedSearchResult" titleName="#Form.titleName#" genreID="#Form.genreID#" platformID="#Form.platformID#"> <cfset variables.searchResponse = "advancedSearchResult"> </cfif> <!---CHECK IF ANY RECORDS WERE FOUND---> <cfif IsDefined("variables.advancedSearchResult") AND advancedSearchResult.RecordCount IS 0> <cfset variables.searchResponse = "noResult"> </cfif> I'm using the searchResponse variable to decide what the the page displays, based on the following scenarios: <!---ALWAYS DISPLAY SIMPLE SEARCH BAR AS IT'S PART OF THE HEADER---> <form name="simpleSearch" action="search.cfm" method="post"> <input type="hidden" name="simpleSearch" /> <input type="text" name="titleName" /> <input type="button" value="Search" onclick="form.submit()" /> </form> <!---IF NO SEARCH WAS SUBMITTED DISPLAY DEFAULT FORM---> <cfif searchResponse IS ""> <h1>Advanced Search</h1> <!---DISPLAY FORM---> <form name="advancedSearch" action="search.cfm" method="post"> <input type="hidden" name="advancedSearch" /> <input type="text" name="titleName" /> <input type="text" name="genreID" /> <input type="text" name="platformID" /> <input type="button" value="Search" onclick="form.submit()" /> </form> </cfif> <!---IF SEARCH IS BLANK OR LESS THAN 3 CHARACTERS DISPLAY ERROR MESSAGE---> <cfif searchResponse IS "invalidString"> <cfoutput> <h1>INVALID SEARCH</h1> </cfoutput> </cfif> <!---IF SEARCH WAS MADE BUT NO RESULTS WERE FOUND---> <cfif searchResponse IS "noResult"> <cfoutput> <h1>NO RESULT FOUND</h1> 
</cfoutput> </cfif> <!---IF SIMPLE SEARCH WAS MADE A RESULT WAS FOUND---> <cfif searchResponse IS "simpleSearchResult"> <cfoutput> <h1>Search Results</h1> </cfoutput> <cfoutput query="simpleSearchResult"> <!---DISPLAY QUERY DATA---> </cfoutput> </cfif> <!---IF ADVANCED SEARCH WAS MADE A RESULT WAS FOUND---> <cfif searchResponse IS "advancedSearchResult"> <cfoutput> <h1>Search Results</h1> <p>Your search for "#Form.titleName#" returned #advancedSearchResult.RecordCount# result(s).</p> </cfoutput> <cfoutput query="advancedSearchResult"> <!---DISPLAY QUERY DATA---> </cfoutput> </cfif> Is my logic a) not efficient because my if statements/is there a better way to do this? And b) Can you see any scenarios where my code can break? I've tested it but I have not been able to find any issues with it. And I have no way of measuring performance. Any thoughts and ideas would be greatly appreciated. Many thanks
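    One way to flatten the branching (a sketch reusing the page's own variable and method names): validate once, dispatch on which form was posted, and let a single cfswitch drive the display. Anything unrecognised falls through to the default form, which also closes the URL-manipulation hole.

        <cfparam name="form.titleName" default="">
        <cfset variables.searchResponse = "">

        <cfif IsDefined("form.simpleSearch") OR IsDefined("form.advancedSearch")>
            <cfif Len(Trim(form.titleName)) LT 3>
                <cfset variables.searchResponse = "invalidString">
            <cfelseif IsDefined("form.simpleSearch")>
                <cfinvoke component="myComponent" method="simpleSearch"
                          searchString="#form.titleName#" returnvariable="searchResult">
                <cfset variables.searchResponse = "searchResult">
            <cfelse>
                <cfinvoke component="myComponent" method="advancedSearch"
                          titleName="#form.titleName#" genreID="#form.genreID#"
                          platformID="#form.platformID#" returnvariable="searchResult">
                <cfset variables.searchResponse = "searchResult">
            </cfif>
            <cfif variables.searchResponse EQ "searchResult"
                  AND searchResult.RecordCount EQ 0>
                <cfset variables.searchResponse = "noResult">
            </cfif>
        </cfif>

        <cfswitch expression="#variables.searchResponse#">
            <cfcase value="invalidString"><h1>INVALID SEARCH</h1></cfcase>
            <cfcase value="noResult"><h1>NO RESULT FOUND</h1></cfcase>
            <cfcase value="searchResult"><!--- output the searchResult query ---></cfcase>
            <cfdefaultcase><!--- render the advanced search form ---></cfdefaultcase>
        </cfswitch>

    Folding both result sets into one searchResponse value removes the duplicated RecordCount checks, and because every branch assigns searchResponse exactly once, a manufactured URL cannot reach a display state that was never explicitly set.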

    Read the article

  • Specifying $(this).val() for two different elements (by class)

    - by Chaya Cooper
    What would be the correct syntax for specifying 2 elements by class to evaluate their values when they have different classes? This snippet adds and/or removes values from "increase_priority1" when the user has selected both a complaint and ranked the importance level of the issue, however when I added the 2nd element (".complaint select") to the If Statement it stopped working properly and only accepts input from the 2nd item (Waist) after the 1rst item (Shoulders) has been selected. I believe the problem is that it's unable to determine which ".complaint select" to use, but I haven't been able to figure out how to use (this) for the 2nd element, or another similar approach. The complete function includes apx. 20 similar pairs of If Statements to set & unset other elements, and a working example with 4 pairs is available at http://jsfiddle.net/chayacooper/vWLEn/168/ I'm trying to avoid using ID's because that becomes pretty cumbersome to do for 25 Select elements and each of their values (ID's were also problematic because it stopped working when I added a 2nd element to the If Statement for removing/unsetting a value) Javascript var $increase_priority1 = $(".increase_priority1"); // If value is "Shoulders", complaint was Shoulders too small (select name=Shoulders value=Too_small) and it was assigned a level 1 priority (select class="ranking shoulders" value=1). // var's for other issues and priority levels go here $(".ranking, .complaint select").change(function() { var name = $(this).data("name"); //get priority name if ($(".complaint select").val() === "Too_small" && $(this).val() == 1 && !$(this).data("increase_priority1")) { //rank is 1, and not yet added to priority list $("<option>", {text:name, val:name}).appendTo($increase_priority1); $(this).data("increase_priority1", true); //flag as a priority item } if ($(this).data("increase_priority1") && ($(this).val() != 1 || $(".complaint select").val() != "Too_small")) { //is in priority list, but now demoted $("option[value=" + name + "]", $increase_priority1).remove(); $(this).removeData("increase_priority1"); //no longer a priority item } // Similar If Statements to set & unset other elements go here }); HTML <div> <label>To Increase - 1rst Priority <select name="modify_increase_1rst[]" class="increase_priority1" multiple> </select></label> </div> <div> <span class="complaint"> <label>Shoulders</label> <select name=Shoulders> <option value="null">Select</option><option value="Too_small">Too Small</option><option value="Too_big">Too Big</option><option value="Too_expensive">Too expensive</option> </select> </span> <label>Ranking</label> <select name="importance_bother_shoulders" class="ranking shoulders" data-name="Shoulders"> <option value="Null">Select</option><option value="1">1</option><option value="2">2</option><option value="3">3</option><option value="4">4</option> <option value="5">5</option> </select> </div> <div> <span class="complaint"> <label>Waist</label> <select name=Waist> <option value="null">Select</option><option value="Too_small">Too Small</option><option value="Too_big">Too Big</option><option value="Too_expensive">Too expensive</option> </select> </span> <label>Ranking</label> <select name="importance_bother_waist" class="ranking waist" data-name="Waist"> <option value="Null">Select</option><option value="1">1</option><option value="2">2</option><option value="3">3</option><option value="4">4</option> <option value="5">5</option> </select> </div>
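    The usual repair, sketched against the markup in the question: $(".complaint select") always resolves to the first match in the whole document, so the complaint select must be found relative to whichever control fired the event, through their shared wrapper div:

        $(".ranking, .complaint select").change(function () {
          var $row      = $(this).closest("div");            // the pair's wrapper
          var complaint = $row.find(".complaint select").val();
          var $ranking  = $row.find(".ranking");
          var name      = $ranking.data("name");

          if (complaint === "Too_small" && $ranking.val() == 1) {
            // promote: append an <option> for `name` to .increase_priority1
          } else {
            // demote: remove the option whose value is `name`
          }
        });

    Scoping through closest("div") works from either select, because both the ranking control and the span.complaint select sit inside the same wrapper, so each Shoulders/Waist row is evaluated independently without needing per-row ids.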

    Read the article

  • z-index background issue in IE

    - by Michael
    I have a jQuery Tools scroller set up with controls managing two separate divs of info - one holds images, the other holds related text that needs to sit over the top of the images with a transparent background image. I am using z-indexing to achieve this and am aware of IE's issues with z-index, but I am unable to sort it out (tested in IE6-8). An image of the issue is here: http://test.shakingpaper.com.au/not_working.png It seems that the overlaid div is taking on the container's white. Try as I might, I can't resolve this. HTML/CSS code below:

    <div id="content">
      <div id="nav"></div>
      <div class="s4 slideshow">
        <div><img src="<?php bloginfo('stylesheet_directory'); ?>/images/hero_1_white.jpg" width="770" height="367" /></div>
        <div><img src="<?php bloginfo('stylesheet_directory'); ?>/images/hero_1_white.jpg" width="770" height="367" /></div>
        <div><img src="<?php bloginfo('stylesheet_directory'); ?>/images/hero_1_white.jpg" width="770" height="367" /></div>
      </div>
      <div id="overlay_bg"></div>
      <div class="s4 information">
        <div>
          <h1>Support</h1>
          <p>Quisque lacus quam, egestas ac tincidunt a, lacinia vel velit. Aenean facilisis nulla vitae.</p>
          <p><a href="#">Support Us</a></p>
        </div>
        <div>
          <h1>Events</h1>
          <p>Quisque lacegestas ac tincidunt a, lacinia vel velit. Aenean facilisis nulla vitae.</p>
          <p><a href="#">Read More</a></p>
        </div>
        <div>
          <h1>Regional</h1>
          <p>Quisque lacus quam, egestas ac tincidunt a, lacinia vel velit. Aenean facilisis nulla vitae.</p>
          <p><a href="#">Support Us</a></p>
        </div>
      </div>
    </div> <!-- end of content -->

    #content { height: auto; min-height: 300px !important; overflow: hidden; position: relative; margin-left: 27px; width: 770px; padding-bottom: 43px; }
    #nav { width: 60px; z-index: 10000; position: absolute; top: 340px; left: 28px; }
    .s4 { width: 770px; height: 370px; overflow: hidden; }
    #nav a { background-color: transparent; background-image: url(images/transition.png); background-position: 0 0; text-indent: -1000em; width: 10px; height: 10px; display: block; float: left; margin-right: 5px; }
    #nav a.activeSlide { background-position: 0 -10px; }
    #overlay_bg { background: url(images/soild_block.png) no-repeat; width: 318px; height: 339px; z-index: 5000; position: absolute; top: 28px; }
    .information { position: absolute; top: 60px; left: 28px; z-index: 16000; width: 290px; height: 260px; color: #FFF; }
    .information h1 { font-size: 50px; font-style: italic; text-transform: uppercase; }
    .information p { font-size: 17px; line-height: 27px; margin-top: 37px; }
    .information a { font-size: 13px; padding-bottom: 2px; border-bottom: 1px solid; color: #FFF; text-transform: uppercase; font-style: italic; }
    .information a:hover { color: #000; }

    Any help would be greatly appreciated.
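    Two things worth checking (a sketch, not a confirmed fix): IE6/7 only honor z-index on positioned elements and compare it within the nearest positioned ancestor's stacking context, so every layer should be explicitly positioned with its own z-index:

    /* Position every layer explicitly so IE6/7 compare their z-index
       values within the same stacking context (#content). */
    .slideshow   { position: relative; z-index: 1000; }
    #overlay_bg  { position: absolute; z-index: 5000; }
    .information { position: absolute; z-index: 16000; }

    If the white persists, it may instead be IE6's missing PNG alpha support for the soild_block.png background, which would need the AlphaImageLoader filter or a PNG-fix script.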

    Read the article

  • How to optimize methods that track metrics in a 3rd party application?

    - by WulfgarPro
    Hi, I have two listboxes that keep updated lists of various objects roaming in a 3rd party application. When a user selects an object from a listbox, an event handler fires, calling a method that gathers various metrics belonging to that object from the 3rd party application for display in a set of textboxes. This is slow! I am not sure how to optimize this functionality to achieve greater speed.

    private void lsbUavs_SelectedIndexChanged(object sender, EventArgs e)
    {
        if (_ourSelectedUavFromListBox != null)
        {
            UtilStkScenario.ChangeUavColourOnScenario(_ourSelectedUavFromListBox.UavName, false);
        }
        if (lsbUavs.SelectedItem != null)
        {
            Uav ourUav = UtilStkScenario.FindUavFromScenarioBasedOnName(lsbUavs.SelectedItem.ToString());
            hsbThrottle.Value = (int)ourUav.ThrottleValue;
            UtilStkScenario.ChangeUavColourOnScenario(ourUav.UavName, true);
            _ourSelectedUavFromListBox = ourUav;
            // we don't want this thread spawning many times
            if (tUpdateMetricInformationInTabControl != null)
            {
                if (tUpdateMetricInformationInTabControl.IsAlive)
                {
                    tUpdateMetricInformationInTabControl.Abort();
                }
            }
            tUpdateMetricInformationInTabControl = new Thread(UpdateMetricInformationInTabControl);
            tUpdateMetricInformationInTabControl.Name = "UpdateMetricInformationInTabControlUavs";
            tUpdateMetricInformationInTabControl.IsBackground = true;
            tUpdateMetricInformationInTabControl.Start(lsbUavs);
        }
    }

    delegate string GetNameOfListItem(ListBox listboxId);
    delegate void SetTextBoxValue(TextBox textBoxId, string valueToSet);

    private void UpdateMetricInformationInTabControl(object listBoxToUpdate)
    {
        ListBox theListBoxToUpdate = (ListBox)listBoxToUpdate;
        GetNameOfListItem dGetNameOfListItem = new GetNameOfListItem(GetNameOfSelectedListItem);
        SetTextBoxValue dSetTextBoxValue = new SetTextBoxValue(SetNamedTextBoxValue);
        try
        {
            foreach (KeyValuePair<string, IAgStkObject> entity in UtilStkScenario._totalListOfAllStkObjects)
            {
                if (entity.Key.ToString() == (string)theListBoxToUpdate.Invoke(dGetNameOfListItem, theListBoxToUpdate))
                {
                    while ((string)theListBoxToUpdate.Invoke(dGetNameOfListItem, theListBoxToUpdate) == entity.Key.ToString())
                    {
                        if (theListBoxToUpdate.Name == "lsbEntities")
                        {
                            double[] latLonAndAltOfEntity = UtilStkScenario.FindMetricsOfStkObjectOnScenario(UtilStkScenario._stkObjectRoot.CurrentTime, entity.Value);
                            SetEntityOrUavMetricValuesInTextBoxes(dSetTextBoxValue, "Entity", entity.Key, "", "", "", "", latLonAndAltOfEntity[4].ToString(), latLonAndAltOfEntity[3].ToString());
                        }
                        else if (theListBoxToUpdate.Name == "lsbUavs")
                        {
                            double[] latLonAndAltOfEntity = UtilStkScenario.FindMetricsOfStkObjectOnScenario(UtilStkScenario._stkObjectRoot.CurrentTime, entity.Value);
                            SetEntityOrUavMetricValuesInTextBoxes(dSetTextBoxValue, "UAV", entity.Key, entity.Value.ClassName.ToString(), latLonAndAltOfEntity[0].ToString(), latLonAndAltOfEntity[1].ToString(), latLonAndAltOfEntity[2].ToString(), latLonAndAltOfEntity[4].ToString(), latLonAndAltOfEntity[3].ToString());
                        }
                    }
                }
            }
        }
        catch (Exception e)
        {
            // selected entity was deleted (end-of-life) in STK - remove LLA information from GUI
            if (theListBoxToUpdate.Name == "lsbEntities")
            {
                SetEntityOrUavMetricValuesInTextBoxes(dSetTextBoxValue, "Entity", "", "", "", "", "", "", "");
                UtilLog.Log(e.Message.ToString(), e.GetType().ToString(), "UpdateMetricInformationInTabControl", UtilLog.logWriter);
            }
            else if (theListBoxToUpdate.Name == "lsbUavs")
            {
                SetEntityOrUavMetricValuesInTextBoxes(dSetTextBoxValue, "UAV", "", "", "", "", "", "", "");
                UtilLog.Log(e.Message.ToString(), e.GetType().ToString(), "UpdateMetricInformationInTabControl", UtilLog.logWriter);
            }
        }
    }

    internal static double[] FindMetricsOfStkObjectOnScenario(object timeToFindMetricState, IAgStkObject stkObject)
    {
        double[] stkObjectMetrics = null;
        try
        {
            stkObjectMetrics = new double[5];
            object latOfStkObject, lonOfStkObject;
            double altOfStkObject, headingOfStkObject, velocityOfStkObject;
            IAgProvideSpatialInfo spatial = stkObject as IAgProvideSpatialInfo;
            IAgVeSpatialInfo spatialInfo = spatial.GetSpatialInfo(false);
            IAgSpatialState spatialState = spatialInfo.GetState(timeToFindMetricState);
            spatialState.FixedPosition.QueryPlanetodetic(out latOfStkObject, out lonOfStkObject, out altOfStkObject);
            double[] stkObjectheadingAndVelocity = FindHeadingAndVelocityOfStkObjectFromScenario(stkObject.InstanceName);
            headingOfStkObject = stkObjectheadingAndVelocity[0];
            velocityOfStkObject = stkObjectheadingAndVelocity[1];
            stkObjectMetrics[0] = (double)latOfStkObject;
            stkObjectMetrics[1] = (double)lonOfStkObject;
            stkObjectMetrics[2] = altOfStkObject;
            stkObjectMetrics[3] = headingOfStkObject;
            stkObjectMetrics[4] = velocityOfStkObject;
        }
        catch (Exception e)
        {
            UtilLog.Log(e.Message.ToString(), e.GetType().ToString(), "FindMetricsOfStkObjectOnScenario", UtilLog.logWriter);
        }
        return stkObjectMetrics;
    }

    private static double[] FindHeadingAndVelocityOfStkObjectFromScenario(string stkObjectName)
    {
        double[] stkObjectHeadingAndVelocity = new double[2];
        IAgStkObject stkUavObject = null;
        try
        {
            string typeOfObject = CheckIfStkObjectIsEntityOrUav(stkObjectName);
            if (typeOfObject == "UAV")
            {
                stkUavObject = _stkObjectRootToIsolateForUavs.CurrentScenario.Children[stkObjectName];
                IAgDataProviderGroup group = (IAgDataProviderGroup)stkUavObject.DataProviders["Heading"];
                IAgDataProvider provider = (IAgDataProvider)group.Group["Fixed"];
                IAgDrResult result = ((IAgDataPrvTimeVar)provider).ExecSingle(_stkObjectRootToIsolateForUavs.CurrentTime);
                stkObjectHeadingAndVelocity[0] = (double)result.DataSets[1].GetValues().GetValue(0);
                stkObjectHeadingAndVelocity[1] = (double)result.DataSets[4].GetValues().GetValue(0);
            }
            else if (typeOfObject == "Entity")
            {
                IAgStkObject stkEntityObject = _stkObjectRootToIsolateForEntities.CurrentScenario.Children[stkObjectName];
                IAgDataProviderGroup group = (IAgDataProviderGroup)stkEntityObject.DataProviders["Heading"];
                IAgDataProvider provider = (IAgDataProvider)group.Group["Fixed"];
                IAgDrResult result = ((IAgDataPrvTimeVar)provider).ExecSingle(_stkObjectRootToIsolateForEntities.CurrentTime);
                stkObjectHeadingAndVelocity[0] = (double)result.DataSets[1].GetValues().GetValue(0);
                stkObjectHeadingAndVelocity[1] = (double)result.DataSets[4].GetValues().GetValue(0);
            }
        }
        catch (Exception e)
        {
            UtilLog.Log(e.Message.ToString(), e.GetType().ToString(), "FindHeadingAndVelocityOfStkObjectFromScenario", UtilLog.logWriter);
        }
        return stkObjectHeadingAndVelocity;
    }

    Any help would be really appreciated. From my knowledge, I can't really see any issues with the C#. Maybe it has to do with the methodology I'm using - maybe some kind of caching mechanism is required, since this is not natively available. WulfgarPro

    Read the article

  • .NET WebRequest.PreAuthenticate not quite what it sounds like

    - by Rick Strahl
    I’ve run into this problem a few times now: how do you pre-authenticate .NET WebRequest calls making an HTTP call to the server – essentially send authentication credentials on the very first request instead of waiting for a server challenge first? At first glance this sounds like it should be easy: The .NET WebRequest object has a PreAuthenticate property which sounds like it should force authentication credentials to be sent on the first request. Looking at the MSDN example certainly makes it look like it does: http://msdn.microsoft.com/en-us/library/system.net.webrequest.preauthenticate.aspx Unfortunately the MSDN sample is wrong. As is the text of the Help topic, which incorrectly leads you to believe that PreAuthenticate… wait for it - pre-authenticates. But it doesn’t allow you to set credentials that are sent on the first request. What this property actually does is quite different. It doesn’t send credentials on the first request but rather caches the credentials ONCE you have already authenticated once. HTTP authentication is based on a challenge-response mechanism, typically where the client sends a request and the server responds with a 401 header requesting authentication. So the client sends a request like this:

    GET /wconnect/admin/wc.wc?_maintain~ShowStatus HTTP/1.1
    Host: rasnote
    User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)
    Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
    Accept-Language: en,de;q=0.7,en-us;q=0.3
    Accept-Encoding: gzip,deflate
    Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
    Keep-Alive: 300
    Connection: keep-alive

    and the server responds with:

    HTTP/1.1 401 Unauthorized
    Cache-Control: private
    Content-Type: text/html; charset=utf-8
    Server: Microsoft-IIS/7.5
    WWW-Authenticate: Negotiate
    WWW-Authenticate: NTLM
    WWW-Authenticate: Basic realm="rasnote"
    X-AspNet-Version: 2.0.50727
    X-Powered-By: ASP.NET
    Date: Tue, 27 Oct 2009 00:58:20 GMT
    Content-Length: 5163

    plus the actual error message body. The client then is responsible for re-sending the current request with the authentication token information provided (in this case Basic Auth):

    GET /wconnect/admin/wc.wc?_maintain~ShowStatus HTTP/1.1
    Host: rasnote
    User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)
    Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
    Accept-Language: en,de;q=0.7,en-us;q=0.3
    Accept-Encoding: gzip,deflate
    Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
    Keep-Alive: 300
    Connection: keep-alive
    Cookie: TimeTrakker=2HJ1998WH06696; WebLogCommentUser=Rick Strahl|http://www.west-wind.com/|[email protected]; WebStoreUser=b8bd0ed9
    Authorization: Basic cgsf12aDpkc2ZhZG1zMA==

    Once the authorization info is sent the server responds with the actual page result. Now if you use WebRequest (or WebClient) the default behavior is to re-authenticate on every request that requires authorization. This means if you look in Fiddler or some other HTTP proxy that captures requests, you’ll see that each request re-authenticates: for two requests fired back to back, the trace shows the 401 challenge and the 200 response for both requests. If you watch this same conversation between a browser and a server you’ll notice that the first 401 is there, but the subsequent 401 responses are not present.
    WebRequest.PreAuthenticate

    And this is precisely what the WebRequest.PreAuthenticate property does: It’s a caching mechanism that caches the connection credentials for a given domain in the active process and resends them on subsequent requests. It does not send credentials on the first request, but it will cache credentials on subsequent requests after authentication has succeeded:

    string url = "http://rasnote/wconnect/admin/wc.wc?_maintain~ShowStatus";
    HttpWebRequest req = HttpWebRequest.Create(url) as HttpWebRequest;
    req.PreAuthenticate = true;
    req.Credentials = new NetworkCredential("rick", "secret", "rasnote");
    req.AuthenticationLevel = System.Net.Security.AuthenticationLevel.MutualAuthRequested;
    req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)";
    WebResponse resp = req.GetResponse();
    resp.Close();

    req = HttpWebRequest.Create(url) as HttpWebRequest;
    req.PreAuthenticate = true;
    req.Credentials = new NetworkCredential("rstrahl", "secret", "rasnote");
    req.AuthenticationLevel = System.Net.Security.AuthenticationLevel.MutualAuthRequested;
    req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)";
    resp = req.GetResponse();

    which results in the desired sequence, where only the first request doesn’t send credentials. This is quite useful as it saves quite a few round trips to the server – basically it saves one auth request for every authenticated request you make. In most scenarios I think you’d want to send credentials this way, but one downside is that there’s no way to log out the client. Since the client always sends the credentials once authenticated, only an explicit operation ON THE SERVER can undo the credentials by forcing another login explicitly (ie. re-challenging with a forced 401 request).

    Forcing Basic Authentication Credentials on the first Request

    On a few occasions I’ve needed to send credentials on a first request – mainly to some oddball third party Web Services (why you’d want to use Basic Auth on a Web Service is beyond me – don’t ask – but it’s not uncommon in my experience). This is true of certain services that are using Basic Authentication (especially some Apache based Web Services) and REQUIRE that the authentication is sent right from the first request. No challenge first. Ugly, but there it is. Now the following works only with Basic Authentication, because it’s pretty straightforward to create the Basic Authorization ‘token’ in code since it’s just an unencrypted encoding of the user name and password into base64. As you might guess this is totally insecure and should only be used over HTTPS/SSL connections (I’m not using SSL in this example so I can capture the Fiddler trace, and my local machine doesn’t have a cert installed, but for production apps ALWAYS use SSL with basic auth).
    The idea is that you simply add the required Authorization header to the request yourself, along with the authorization string that encodes the username and password:

    string url = "http://rasnote/wconnect/admin/wc.wc?_maintain~ShowStatus";
    HttpWebRequest req = HttpWebRequest.Create(url) as HttpWebRequest;

    string user = "rick";
    string pwd = "secret";
    string domain = "www.west-wind.com";

    string auth = "Basic " + Convert.ToBase64String(System.Text.Encoding.Default.GetBytes(user + ":" + pwd));
    req.PreAuthenticate = true;
    req.AuthenticationLevel = System.Net.Security.AuthenticationLevel.MutualAuthRequested;
    req.Headers.Add("Authorization", auth);
    req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)";
    WebResponse resp = req.GetResponse();
    resp.Close();

    This works and causes the request to immediately send auth information to the server. However, this only works with Basic Auth, because you can actually create the authentication credentials easily on the client - it's essentially clear text. The same doesn’t work for Windows or Digest authentication, since you can’t easily create the authentication token on the client and send it to the server. Another issue with this approach is that PreAuthenticate has no effect when you manually force the authentication. As far as WebRequest is concerned it never sent the authentication information, so it’s not actually caching the value any longer. If you run 3 requests in a row like this:

    string url = "http://rasnote/wconnect/admin/wc.wc?_maintain~ShowStatus";
    HttpWebRequest req = HttpWebRequest.Create(url) as HttpWebRequest;

    string user = "ricks";
    string pwd = "secret";
    string domain = "www.west-wind.com";

    string auth = "Basic " + Convert.ToBase64String(System.Text.Encoding.Default.GetBytes(user + ":" + pwd));
    req.PreAuthenticate = true;
    req.Headers.Add("Authorization", auth);
    req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)";
    WebResponse resp = req.GetResponse();
    resp.Close();

    req = HttpWebRequest.Create(url) as HttpWebRequest;
    req.PreAuthenticate = true;
    req.Credentials = new NetworkCredential(user, pwd, domain);
    req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)";
    resp = req.GetResponse();
    resp.Close();

    req = HttpWebRequest.Create(url) as HttpWebRequest;
    req.PreAuthenticate = true;
    req.Credentials = new NetworkCredential(user, pwd, domain);
    req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)";
    resp = req.GetResponse();

    you’ll find a trace where the first request (the one we explicitly add the header to) authenticates, the second gets the 401 challenge, and any subsequent ones use the PreAuthenticate credential caching. In effect you’ll end up with one extra 401 request in this scenario, which is still better than 401 challenges on each request.

    Getting Access to WebRequest in Classic .NET Web Service Clients

    If you’re running a classic .NET Web Service client (non-WCF), one issue with the above is: how do you get access to the WebRequest to actually add the custom headers that do the custom authentication described above?
    One easy way is to implement a partial class that allows you to add headers, with something like this:

    public partial class TaxService
    {
        protected NameValueCollection Headers = new NameValueCollection();

        public void AddHttpHeader(string key, string value)
        {
            this.Headers.Add(key, value);
        }

        public void ClearHttpHeaders()
        {
            this.Headers.Clear();
        }

        protected override WebRequest GetWebRequest(Uri uri)
        {
            HttpWebRequest request = (HttpWebRequest)base.GetWebRequest(uri);
            request.Headers.Add(this.Headers);
            return request;
        }
    }

    where TaxService is the name of the .NET generated proxy class. In code you can then call AddHttpHeader() anywhere to add additional headers, which are sent as part of the GetWebRequest override. Nice and simple once you know where to hook it. For WCF there’s a bit more work involved, by creating a message extension as described here: http://weblogs.asp.net/avnerk/archive/2006/04/26/Adding-custom-headers-to-every-WCF-call-_2D00_-a-solution.aspx FWIW, I think that HTTP header manipulation should be readily available on any HTTP based Web Service client DIRECTLY, without having to subclass or implement a special interface hook. But alas a little extra work is required in .NET to make this happen.

    Not a Common Problem, but when it happens…

    This has been one of those issues that is really rare, but it’s bitten me on several occasions when dealing with oddball Web services – a couple of times in my own work interacting with various Web Services and a few times on customer projects that required interaction with credentials-first services. Since the servers determine the protocol, we don’t have a choice but to follow the protocol. Lovely following standards that implementers decide to ignore, isn’t it? :-}

    © Rick Strahl, West Wind Technologies, 2005-2010
    Posted in .NET  CSharp  Web Services
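    For reuse, the manual header from the examples above can be wrapped in a small extension method - a minimal sketch, with illustrative names, using UTF-8 rather than the Encoding.Default seen in the article:

    using System;
    using System.Net;
    using System.Text;

    public static class WebRequestExtensions
    {
        // Adds a Basic Authorization header so credentials go out on the
        // very first request, without waiting for a 401 challenge.
        public static void ForceBasicAuth(this HttpWebRequest req, string user, string password)
        {
            string token = Convert.ToBase64String(Encoding.UTF8.GetBytes(user + ":" + password));
            req.Headers.Add("Authorization", "Basic " + token);
        }
    }

    Usage would then be a one-liner per request: req.ForceBasicAuth("rick", "secret");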

    Read the article

  • Cannot update, apt-get cannot fetch index files

    - by Evan
    I have a fresh install of Ubuntu 11.10 from the iso 'ubuntu-11.10-desktop-amd64.iso'. I installed this in VMWare Fusion 4.1.1 running on OSX 10.7.3. When setting up the VM, I allowed easy install to take care of creating my user and installing VMWare tools. No problems during installation, everything seems to be working great. The problem is that apt-get will NOT update, so I can't do software updates or install any software with apt-get install. I have been searching high and low, and have found several threads covering similar issues. How to fix a ruined package catalog? is one, Update manager generates 404 error while attempting update. Will not update is another, Ubuntu 11.10 Update issue (failed to fetch...) is a third I have tried changing my software source download location to "Main Server" rather than "Server for United States", to no avail. Same errors. Tried sudo apt-get clean, sudo apt-get autoclean, Have done a sudo rm /var/lib/apt/lists/*, still having the exact same problem. As I said, this is a brand new installation as of yesterday evening. Since I know it will be needed, here is my output from a sudo apt-get update: evan@ubuntu:~$ sudo apt-get update Ign http://archive.ubuntu.com oneiric InRelease Ign http://archive.ubuntu.com oneiric-updates InRelease Ign http://archive.ubuntu.com oneiric-backports InRelease Ign http://archive.ubuntu.com oneiric-security InRelease Ign http://archive.ubuntu.com oneiric Release.gpg Ign http://archive.ubuntu.com oneiric-updates Release.gpg Ign http://archive.ubuntu.com oneiric-backports Release.gpg Ign http://archive.ubuntu.com oneiric-security Release.gpg Ign http://archive.ubuntu.com oneiric Release Ign http://archive.ubuntu.com oneiric-updates Release Ign http://archive.ubuntu.com oneiric-backports Release Ign http://archive.ubuntu.com oneiric-security Release Ign http://archive.ubuntu.com oneiric/main TranslationIndex Ign http://archive.ubuntu.com oneiric/multiverse TranslationIndex Ign http://archive.ubuntu.com oneiric/restricted TranslationIndex Ign http://archive.ubuntu.com oneiric/universe TranslationIndex Ign http://archive.ubuntu.com oneiric-updates/main TranslationIndex Ign http://archive.ubuntu.com oneiric-updates/multiverse TranslationIndex Ign http://archive.ubuntu.com oneiric-updates/restricted TranslationIndex Ign http://archive.ubuntu.com oneiric-updates/universe TranslationIndex Ign http://archive.ubuntu.com oneiric-backports/main TranslationIndex Ign http://archive.ubuntu.com oneiric-backports/multiverse TranslationIndex Ign http://archive.ubuntu.com oneiric-backports/restricted TranslationIndex Ign http://archive.ubuntu.com oneiric-backports/universe TranslationIndex Ign http://archive.ubuntu.com oneiric-security/main TranslationIndex Ign http://archive.ubuntu.com oneiric-security/multiverse TranslationIndex Ign http://archive.ubuntu.com oneiric-security/restricted TranslationIndex Ign http://archive.ubuntu.com oneiric-security/universe TranslationIndex Err http://archive.ubuntu.com oneiric/main Sources 404 Not Found Err http://archive.ubuntu.com oneiric/restricted Sources 404 Not Found Err http://archive.ubuntu.com oneiric/universe Sources 404 Not Found Err http://archive.ubuntu.com oneiric/multiverse Sources 404 Not Found Err http://archive.ubuntu.com oneiric/main amd64 Packages 404 Not Found Err http://archive.ubuntu.com oneiric/restricted amd64 Packages 404 Not Found Err http://archive.ubuntu.com oneiric/universe amd64 Packages 404 Not Found Err http://archive.ubuntu.com oneiric/multiverse amd64 Packages 404 Not 
Found Err http://archive.ubuntu.com oneiric/main i386 Packages 404 Not Found Err http://archive.ubuntu.com oneiric/restricted i386 Packages 404 Not Found Err http://archive.ubuntu.com oneiric/universe i386 Packages 404 Not Found Err http://archive.ubuntu.com oneiric/multiverse i386 Packages 404 Not Found Err http://archive.ubuntu.com oneiric-updates/main Sources 404 Not Found Err http://archive.ubuntu.com oneiric-updates/restricted Sources 404 Not Found Err http://archive.ubuntu.com oneiric-updates/universe Sources 404 Not Found Err http://archive.ubuntu.com oneiric-updates/multiverse Sources 404 Not Found Err http://archive.ubuntu.com oneiric-updates/main amd64 Packages 404 Not Found Err http://archive.ubuntu.com oneiric-updates/restricted amd64 Packages 404 Not Found Err http://archive.ubuntu.com oneiric-updates/universe amd64 Packages 404 Not Found Err http://archive.ubuntu.com oneiric-updates/multiverse amd64 Packages 404 Not Found Err http://archive.ubuntu.com oneiric-updates/main i386 Packages 404 Not Found Err http://archive.ubuntu.com oneiric-updates/restricted i386 Packages 404 Not Found Err http://archive.ubuntu.com oneiric-updates/universe i386 Packages 404 Not Found Err http://archive.ubuntu.com oneiric-updates/multiverse i386 Packages 404 Not Found Ign http://extras.ubuntu.com oneiric InRelease Err http://archive.ubuntu.com oneiric-backports/main Sources 404 Not Found Err http://archive.ubuntu.com oneiric-backports/restricted Sources 404 Not Found Err http://archive.ubuntu.com oneiric-backports/universe Sources 404 Not Found Err http://archive.ubuntu.com oneiric-backports/multiverse Sources 404 Not Found Err http://archive.ubuntu.com oneiric-backports/main amd64 Packages 404 Not Found Err http://archive.ubuntu.com oneiric-backports/restricted amd64 Packages 404 Not Found Err http://archive.ubuntu.com oneiric-backports/universe amd64 Packages 404 Not Found Err http://archive.ubuntu.com oneiric-backports/multiverse amd64 Packages 404 Not Found Err http://archive.ubuntu.com oneiric-backports/main i386 Packages 404 Not Found Err http://archive.ubuntu.com oneiric-backports/restricted i386 Packages 404 Not Found Err http://archive.ubuntu.com oneiric-backports/universe i386 Packages 404 Not Found Err http://archive.ubuntu.com oneiric-backports/multiverse i386 Packages 404 Not Found Err http://archive.ubuntu.com oneiric-security/main Sources 404 Not Found Err http://archive.ubuntu.com oneiric-security/restricted Sources 404 Not Found Err http://archive.ubuntu.com oneiric-security/universe Sources 404 Not Found Err http://archive.ubuntu.com oneiric-security/multiverse Sources 404 Not Found Err http://archive.ubuntu.com oneiric-security/main amd64 Packages 404 Not Found Err http://archive.ubuntu.com oneiric-security/restricted amd64 Packages 404 Not Found Err http://archive.ubuntu.com oneiric-security/universe amd64 Packages 404 Not Found Err http://archive.ubuntu.com oneiric-security/multiverse amd64 Packages 404 Not Found Err http://archive.ubuntu.com oneiric-security/main i386 Packages 404 Not Found Err http://archive.ubuntu.com oneiric-security/restricted i386 Packages 404 Not Found Err http://archive.ubuntu.com oneiric-security/universe i386 Packages 404 Not Found Err http://archive.ubuntu.com oneiric-security/multiverse i386 Packages 404 Not Found Ign http://archive.ubuntu.com oneiric/main Translation-en_US Ign http://archive.ubuntu.com oneiric/main Translation-en Ign http://archive.ubuntu.com oneiric/multiverse Translation-en_US Ign http://archive.ubuntu.com oneiric/multiverse 
Translation-en Ign http://archive.ubuntu.com oneiric/restricted Translation-en_US Ign http://archive.ubuntu.com oneiric/restricted Translation-en Ign http://archive.ubuntu.com oneiric/universe Translation-en_US Ign http://archive.ubuntu.com oneiric/universe Translation-en Ign http://archive.ubuntu.com oneiric-updates/main Translation-en_US Ign http://archive.ubuntu.com oneiric-updates/main Translation-en Ign http://archive.ubuntu.com oneiric-updates/multiverse Translation-en_US Ign http://archive.ubuntu.com oneiric-updates/multiverse Translation-en Ign http://archive.ubuntu.com oneiric-updates/restricted Translation-en_US Ign http://archive.ubuntu.com oneiric-updates/restricted Translation-en Ign http://archive.ubuntu.com oneiric-updates/universe Translation-en_US Ign http://archive.ubuntu.com oneiric-updates/universe Translation-en Ign http://archive.ubuntu.com oneiric-backports/main Translation-en_US Ign http://archive.ubuntu.com oneiric-backports/main Translation-en Ign http://archive.ubuntu.com oneiric-backports/multiverse Translation-en_US Ign http://archive.ubuntu.com oneiric-backports/multiverse Translation-en Ign http://archive.ubuntu.com oneiric-backports/restricted Translation-en_US Ign http://archive.ubuntu.com oneiric-backports/restricted Translation-en Ign http://archive.ubuntu.com oneiric-backports/universe Translation-en_US Ign http://archive.ubuntu.com oneiric-backports/universe Translation-en Ign http://archive.ubuntu.com oneiric-security/main Translation-en_US Ign http://archive.ubuntu.com oneiric-security/main Translation-en Ign http://archive.ubuntu.com oneiric-security/multiverse Translation-en_US Ign http://archive.ubuntu.com oneiric-security/multiverse Translation-en Ign http://archive.ubuntu.com oneiric-security/restricted Translation-en_US Ign http://archive.ubuntu.com oneiric-security/restricted Translation-en Ign http://archive.ubuntu.com oneiric-security/universe Translation-en_US Ign http://archive.ubuntu.com oneiric-security/universe Translation-en Hit http://extras.ubuntu.com oneiric Release.gpg Hit http://extras.ubuntu.com oneiric Release Hit http://extras.ubuntu.com oneiric/main Sources Hit http://extras.ubuntu.com oneiric/main amd64 Packages Hit http://extras.ubuntu.com oneiric/main i386 Packages Ign http://extras.ubuntu.com oneiric/main TranslationIndex Ign http://extras.ubuntu.com oneiric/main Translation-en_US Ign http://extras.ubuntu.com oneiric/main Translation-en W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric/main/source/Sources 404 Not Found W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric/restricted/source/Sources 404 Not Found W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric/universe/source/Sources 404 Not Found W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric/multiverse/source/Sources 404 Not Found W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric/main/binary-amd64/Packages 404 Not Found W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric/restricted/binary-amd64/Packages 404 Not Found W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric/universe/binary-amd64/Packages 404 Not Found W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric/multiverse/binary-amd64/Packages 404 Not Found W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric/main/binary-i386/Packages 404 Not Found W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric/restricted/binary-i386/Packages 404 Not Found W: Failed to fetch 
http://archive.ubuntu.com/ubuntu/dists/oneiric/universe/binary-i386/Packages 404 Not Found W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric/multiverse/binary-i386/Packages 404 Not Found W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric-updates/main/source/Sources 404 Not Found W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric-updates/restricted/source/Sources 404 Not Found W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric-updates/universe/source/Sources 404 Not Found W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric-updates/multiverse/source/Sources 404 Not Found W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric-updates/main/binary-amd64/Packages 404 Not Found W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric-updates/restricted/binary-amd64/Packages 404 Not Found W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric-updates/universe/binary-amd64/Packages 404 Not Found W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric-updates/multiverse/binary-amd64/Packages 404 Not Found W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric-updates/main/binary-i386/Packages 404 Not Found W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric-updates/restricted/binary-i386/Packages 404 Not Found W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric-updates/universe/binary-i386/Packages 404 Not Found W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric-updates/multiverse/binary-i386/Packages 404 Not Found W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric-backports/main/source/Sources 404 Not Found W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric-backports/restricted/source/Sources 404 Not Found W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric-backports/universe/source/Sources 404 Not Found W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric-backports/multiverse/source/Sources 404 Not Found W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric-backports/main/binary-amd64/Packages 404 Not Found W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric-backports/restricted/binary-amd64/Packages 404 Not Found W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric-backports/universe/binary-amd64/Packages 404 Not Found W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric-backports/multiverse/binary-amd64/Packages 404 Not Found W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric-backports/main/binary-i386/Packages 404 Not Found W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric-backports/restricted/binary-i386/Packages 404 Not Found W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric-backports/universe/binary-i386/Packages 404 Not Found W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric-backports/multiverse/binary-i386/Packages 404 Not Found W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric-security/main/source/Sources 404 Not Found W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric-security/restricted/source/Sources 404 Not Found W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric-security/universe/source/Sources 404 Not Found W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric-security/multiverse/source/Sources 404 Not Found W: Failed to fetch 
    http://archive.ubuntu.com/ubuntu/dists/oneiric-security/main/binary-amd64/Packages 404 Not Found W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric-security/restricted/binary-amd64/Packages 404 Not Found W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric-security/universe/binary-amd64/Packages 404 Not Found W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric-security/multiverse/binary-amd64/Packages 404 Not Found W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric-security/main/binary-i386/Packages 404 Not Found W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric-security/restricted/binary-i386/Packages 404 Not Found W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric-security/universe/binary-i386/Packages 404 Not Found W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric-security/multiverse/binary-i386/Packages 404 Not Found E: Some index files failed to download. They have been ignored, or old ones used instead.

    Here is my /etc/apt/sources.list:

    evan@ubuntu:~$ cat /etc/apt/sources.list
    # deb cdrom:[Ubuntu 11.10 _Oneiric Ocelot_ - Release amd64 (20111012)]/ dists/oneiric/main/binary-i386/
    # deb cdrom:[Ubuntu 11.10 _Oneiric Ocelot_ - Release amd64 (20111012)]/ oneiric main restricted
    # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
    # newer versions of the distribution.
    deb http://archive.ubuntu.com/ubuntu oneiric main restricted
    deb-src http://archive.ubuntu.com/ubuntu oneiric main restricted
    ## Major bug fix updates produced after the final release of the
    ## distribution.
    deb http://archive.ubuntu.com/ubuntu oneiric-updates main restricted
    deb-src http://archive.ubuntu.com/ubuntu oneiric-updates main restricted
    ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
    ## team. Also, please note that software in universe WILL NOT receive any
    ## review or updates from the Ubuntu security team.
    deb http://archive.ubuntu.com/ubuntu oneiric universe
    deb-src http://archive.ubuntu.com/ubuntu oneiric universe
    deb http://archive.ubuntu.com/ubuntu oneiric-updates universe
    deb-src http://archive.ubuntu.com/ubuntu oneiric-updates universe
    ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
    ## team, and may not be under a free licence. Please satisfy yourself as to
    ## your rights to use the software. Also, please note that software in
    ## multiverse WILL NOT receive any review or updates from the Ubuntu
    ## security team.
    deb http://archive.ubuntu.com/ubuntu oneiric multiverse
    deb-src http://archive.ubuntu.com/ubuntu oneiric multiverse
    deb http://archive.ubuntu.com/ubuntu oneiric-updates multiverse
    deb-src http://archive.ubuntu.com/ubuntu oneiric-updates multiverse
    ## N.B. software from this repository may not have been tested as
    ## extensively as that contained in the main release, although it includes
    ## newer versions of some applications which may provide useful features.
    ## Also, please note that software in backports WILL NOT receive any review
    ## or updates from the Ubuntu security team.
    deb http://archive.ubuntu.com/ubuntu oneiric-backports main restricted universe multiverse
    deb-src http://archive.ubuntu.com/ubuntu oneiric-backports main restricted universe multiverse
    deb http://archive.ubuntu.com/ubuntu oneiric-security main restricted
    deb-src http://archive.ubuntu.com/ubuntu oneiric-security main restricted
    deb http://archive.ubuntu.com/ubuntu oneiric-security universe
    deb-src http://archive.ubuntu.com/ubuntu oneiric-security universe
    deb http://archive.ubuntu.com/ubuntu oneiric-security multiverse
    deb-src http://archive.ubuntu.com/ubuntu oneiric-security multiverse
    ## Uncomment the following two lines to add software from Canonical's
    ## 'partner' repository.
    ## This software is not part of Ubuntu, but is offered by Canonical and the
    ## respective vendors as a service to Ubuntu users.
    # deb http://archive.canonical.com/ubuntu oneiric partner
    # deb-src http://archive.canonical.com/ubuntu oneiric partner
    ## This software is not part of Ubuntu, but is offered by third-party
    ## developers who want to ship their latest software.
    deb http://extras.ubuntu.com/ubuntu oneiric main
    deb-src http://extras.ubuntu.com/ubuntu oneiric main

    And here is my output from lsb_release -a:

    evan@ubuntu:~$ lsb_release -a
    No LSB modules are available.
    Distributor ID: Ubuntu
    Description:    Ubuntu 11.10
    Release:        11.10
    Codename:       oneiric

    If anyone could help me out here, that would be wonderful!
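    One avenue worth checking, offered as a guess rather than a confirmed diagnosis: this wall of 404s is exactly what apt prints when a release's index files are no longer at the requested mirror path. If oneiric has reached end of life and been moved off the main archive, repointing sources.list at old-releases.ubuntu.com usually restores apt-get update:

    # Back up the current sources, then rewrite every archive.ubuntu.com entry
    # to the old-releases mirror (assumes EOL is the cause of the 404s).
    sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak
    sudo sed -i 's|archive.ubuntu.com/ubuntu|old-releases.ubuntu.com/ubuntu|g' /etc/apt/sources.list
    sudo apt-get update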

    Read the article

  • Using an alternate JSON Serializer in ASP.NET Web API

    - by Rick Strahl
    The new ASP.NET Web API that Microsoft released alongside MVC 4.0 Beta last week is a great framework for building REST and AJAX APIs. I've been working with it for quite a while now and I really like the way it works and the complete set of features it provides 'in the box'. It's about time that Microsoft got a decent API for building generic HTTP endpoints into the framework.

    DataContractJsonSerializer sucks

    As nice as Web API's overall design is, one thing still sucks: the built-in JSON serialization uses the DataContractJsonSerializer, which is just too limiting for many scenarios. The biggest issues I have with it are:

    - No support for untyped values (object, dynamic, anonymous types)
    - MS AJAX style date formatting
    - Ugly serialization formats for types like dictionaries

    To me the most serious issue is dealing with serialization of untyped objects. I have a number of applications with AJAX front ends that dynamically reformat data from business objects to fit a specific message format that certain UI components require. The most common scenario I have there is IEnumerable query results from a database with fields from the result set rearranged to fit the sometimes unconventional formats required for the UI components (like jqGrid for example). Creating custom types to fit these messages seems like overkill, and projections using Linq make this much easier to code up. Alas, DataContractJsonSerializer doesn't support it. Neither does DataContractSerializer for XML output, for that matter. What this means is that you can't do stuff like this in Web API out of the box:

    public object GetAnonymousType()
    {
        return new { name = "Rick", company = "West Wind", entered = DateTime.Now };
    }

    Basically, anything that doesn't have an explicit type DataContractJsonSerializer will not let you return. FWIW, the same is true for XmlSerializer, which also doesn't work with non-typed values for serialization. The example above is obviously contrived with a hardcoded object graph, but it's not uncommon to get dynamic values returned from queries that have anonymous types for their result projections. Apparently there's a good possibility that Microsoft will ship Json.NET as part of the Web API RTM release. Scott Hanselman confirmed this as a footnote in his JSON Dates post a few days ago. I've heard several other people from Microsoft confirm that Json.NET will be included and be the default JSON serializer, but there are no details yet on what capacity it will show up in. Let's hope it ends up as the default in the box. Meanwhile this post will show you how you can use it today with the beta and get JSON that matches what you should see in the RTM version.

    What about JsonValue?

    To be fair, Web API DOES include the new JsonValue/JsonObject/JsonArray types that allow you to address some of these scenarios. JsonValue is a new type in the System.Json assembly that can be used to build up an object graph based on a dictionary. It's actually a really cool implementation of a dynamic type that allows you to create an object graph and spit it out to JSON without having to create a .NET type first. JsonValue can also receive a JSON string and parse it without having to actually load it into a .NET type (which is something that's been missing in the core framework). This is really useful if you get a JSON result from an arbitrary service and you don't want to explicitly create a mapping type for the data returned.
    For serialization you can create an object structure on the fly and pass it back as part of a Web API action method like this:

    public JsonValue GetJsonValue()
    {
        dynamic json = new JsonObject();
        json.name = "Rick";
        json.company = "West Wind";
        json.entered = DateTime.Now;

        dynamic address = new JsonObject();
        address.street = "32 Kaiea";
        address.zip = "96779";
        json.address = address;

        dynamic phones = new JsonArray();
        json.phoneNumbers = phones;

        dynamic phone = new JsonObject();
        phone.type = "Home";
        phone.number = "808 123-1233";
        phones.Add(phone);

        phone = new JsonObject();
        phone.type = "Mobile";
        phone.number = "808 123-1234";
        phones.Add(phone);

        //var jsonString = json.ToString();
        return json;
    }

    which produces the following output (formatted here for easier reading):

    {
        name: "rick",
        company: "West Wind",
        entered: "2012-03-08T15:33:19.673-10:00",
        address: {
            street: "32 Kaiea",
            zip: "96779"
        },
        phoneNumbers: [
            { type: "Home", number: "808 123-1233" },
            { type: "Mobile", number: "808 123-1234" }
        ]
    }

    If you need to build a simple JSON type on the fly these types work great. But if you have an existing type - or worse, a query result/list that's already formatted - JsonValue et al. become a pain to work with. As far as I can see there's no way to just throw an object instance at JsonValue and have it convert into a JsonValue dictionary. It's a manual process.

    Using alternate Serializers in Web API

    So, currently the default serializer in Web API is DataContractJsonSerializer and I don't like it. You may not either, but luckily you can swap the serializer fairly easily. If you'd rather use the JavaScriptSerializer built into System.Web.Extensions, or Json.NET today, it's not too difficult to create a custom MediaTypeFormatter that uses these serializers and can replace or partially replace the native serializer.
    Here's a MediaTypeFormatter implementation using the ASP.NET JavaScriptSerializer:

    using System;
    using System.Net.Http.Formatting;
    using System.Threading.Tasks;
    using System.Web.Script.Serialization;
    using System.Json;
    using System.IO;

    namespace Westwind.Web.WebApi
    {
        public class JavaScriptSerializerFormatter : MediaTypeFormatter
        {
            public JavaScriptSerializerFormatter()
            {
                SupportedMediaTypes.Add(new System.Net.Http.Headers.MediaTypeHeaderValue("application/json"));
            }

            protected override bool CanWriteType(Type type)
            {
                // don't serialize JsonValue structures - use the default formatter for those
                if (type == typeof(JsonValue) || type == typeof(JsonObject) || type == typeof(JsonArray))
                    return false;
                return true;
            }

            protected override bool CanReadType(Type type)
            {
                if (type == typeof(IKeyValueModel))
                    return false;
                return true;
            }

            protected override Task<object> OnReadFromStreamAsync(Type type, Stream stream, System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext)
            {
                var task = Task<object>.Factory.StartNew(() =>
                {
                    var ser = new JavaScriptSerializer();
                    string json;
                    using (var sr = new StreamReader(stream))
                    {
                        json = sr.ReadToEnd();
                        sr.Close();
                    }
                    object val = ser.Deserialize(json, type);
                    return val;
                });
                return task;
            }

            protected override Task OnWriteToStreamAsync(Type type, object value, Stream stream, System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext, System.Net.TransportContext transportContext)
            {
                var task = Task.Factory.StartNew(() =>
                {
                    var ser = new JavaScriptSerializer();
                    var json = ser.Serialize(value);
                    byte[] buf = System.Text.Encoding.Default.GetBytes(json);
                    stream.Write(buf, 0, buf.Length);
                    stream.Flush();
                });
                return task;
            }
        }
    }

    The formatter implementation is pretty simple: you override 4 methods to say which types you can handle, and then handle the input or output streams to create/parse the JSON data. Note that when creating output you want to take care to still allow the JsonValue/JsonObject/JsonArray types to be handled by the default serializer so those objects serialize properly - if you let either JavaScriptSerializer or JSON.NET handle them, they'd try to render the dictionaries, which is very undesirable.
    If you'd rather use Json.NET, here's the JSON.NET version of the formatter:

    // this code requires a reference to JSON.NET in your project
    #if true
    using System;
    using System.Net.Http.Formatting;
    using System.Threading.Tasks;
    using System.Json;
    using Newtonsoft.Json;
    using Newtonsoft.Json.Converters;
    using System.IO;

    namespace Westwind.Web.WebApi
    {
        public class JsonNetFormatter : MediaTypeFormatter
        {
            public JsonNetFormatter()
            {
                SupportedMediaTypes.Add(new System.Net.Http.Headers.MediaTypeHeaderValue("application/json"));
            }

            protected override bool CanWriteType(Type type)
            {
                // don't serialize JsonValue structures - use the default formatter for those
                if (type == typeof(JsonValue) || type == typeof(JsonObject) || type == typeof(JsonArray))
                    return false;
                return true;
            }

            protected override bool CanReadType(Type type)
            {
                if (type == typeof(IKeyValueModel))
                    return false;
                return true;
            }

            protected override Task<object> OnReadFromStreamAsync(Type type, Stream stream, System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext)
            {
                var task = Task<object>.Factory.StartNew(() =>
                {
                    var settings = new JsonSerializerSettings()
                    {
                        NullValueHandling = NullValueHandling.Ignore,
                    };
                    var sr = new StreamReader(stream);
                    var jreader = new JsonTextReader(sr);
                    var ser = new JsonSerializer();
                    ser.Converters.Add(new IsoDateTimeConverter());
                    object val = ser.Deserialize(jreader, type);
                    return val;
                });
                return task;
            }

            protected override Task OnWriteToStreamAsync(Type type, object value, Stream stream, System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext, System.Net.TransportContext transportContext)
            {
                var task = Task.Factory.StartNew(() =>
                {
                    var settings = new JsonSerializerSettings()
                    {
                        NullValueHandling = NullValueHandling.Ignore,
                    };
                    string json = JsonConvert.SerializeObject(value, Formatting.Indented,
                                      new JsonConverter[1] { new IsoDateTimeConverter() });
                    byte[] buf = System.Text.Encoding.Default.GetBytes(json);
                    stream.Write(buf, 0, buf.Length);
                    stream.Flush();
                });
                return task;
            }
        }
    }
    #endif

    One advantage of the Json.NET serializer is that you can specify a few options for how things are formatted and handled. You get null value handling, and you can plug in the IsoDateTimeConverter, which is nice to produce the proper ISO dates that I would expect any JSON serializer to output these days.

    Hooking up the Formatters

    Once you've created the custom formatters you need to enable them for your Web API application. To do this, use the GlobalConfiguration.Configuration object and add the formatter to the Formatters collection. Here's what this looks like hooked up from Application_Start in a Web project:

    protected void Application_Start(object sender, EventArgs e)
    {
        // Action based routing (used for RPC calls)
        RouteTable.Routes.MapHttpRoute(
            name: "StockApi",
            routeTemplate: "stocks/{action}/{symbol}",
            defaults: new { symbol = RouteParameter.Optional, controller = "StockApi" }
        );

        // WebApi Configuration to hook up formatters and message handlers (optional)
        RegisterApis(GlobalConfiguration.Configuration);
    }

    public static void RegisterApis(HttpConfiguration config)
    {
        // Add JavaScriptSerializer formatter instead - add at top to make default
        //config.Formatters.Insert(0, new JavaScriptSerializerFormatter());

        // Add Json.net formatter - add at the top so it fires first!
        // This leaves the old one in place so JsonValue/JsonObject/JsonArray are still handled
        config.Formatters.Insert(0, new JsonNetFormatter());
    }

    One thing to remember here is the GlobalConfiguration object, which is Web API's static configuration instance. I think this thing is seriously misnamed, given that GlobalConfiguration could stand for anything and so is hard to discover if you don't know what you're looking for. How about WebApiConfiguration or something more descriptive? Anyway, once you know what it is you can use the Formatters collection to insert your custom formatter. Note that I insert my formatter at the top of the list so it takes precedence over the default formatter. I am also not removing the old formatter, because I still want JsonValue/JsonObject/JsonArray to be handled by the default serialization mechanism. Since the formatters process in sequence and I exclude processing for these types, JsonValue et al. still get properly serialized/deserialized.

    Summary

    Currently DataContractJsonSerializer in Web API is a pain, but at least we have the ability, with relatively limited effort, to replace the MediaTypeFormatter and plug in our own JSON serializer. This is useful for many scenarios - if you have existing client applications that used MVC JsonResult or ASP.NET AJAX results from ASMX AJAX services, you can plug in the JavaScriptSerializer and get exactly the same serializer you used in the past, so your results will be the same and won't potentially break clients. JSON serializers do vary a bit in how they serialize some of the more complex types (like dictionaries and dates, for example), so if you're migrating it might be helpful to ensure your client code doesn't break when you switch to ASP.NET Web API. Going forward it looks like Microsoft is planning on plugging Json.NET into Web API and making that the default. I think that's an awesome choice, since Json.NET has been around forever, is fast and easy to use, and provides a ton of functionality as part of this great library. I just wish Microsoft had figured this out sooner instead of integrating it at the last minute, especially given that Json.NET has a similar set of lower-level JSON objects (JsonValue/JsonObject etc.) which now will end up being duplicated by the native System.Json stuff. It's not like we don't already have enough confusion regarding which JSON serializer to use (JavaScriptSerializer, DataContractJsonSerializer, JsonValue/JsonObject/JsonArray and now Json.NET). For years I've been using my own JSON serializer because the built-in choices are so limited. However, with an official endorsement of Json.NET I'm happily moving on to use that in my applications. Let's see and hope Microsoft gets this right before ASP.NET Web API goes gold.

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in Web Api  AJAX  ASP.NET
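    To close the loop with the anonymous-type example from the top of the article: with JsonNetFormatter registered, an untyped action result serializes cleanly. A quick sketch (the controller name matches the StockApi route above; the action and return shape are illustrative, not from the article):

    using System;
    using System.Web.Http;

    public class StockApiController : ApiController
    {
        // GET /stocks/Summary/{symbol} via the "StockApi" route registered above.
        // DataContractJsonSerializer would reject this anonymous type;
        // the Json.NET formatter serializes it without a typed contract.
        public object Summary(string symbol)
        {
            return new { symbol = symbol, lastTrade = DateTime.UtcNow, price = 42.15m };
        }
    }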

    Read the article

  • Using jQuery to Insert a New Database Record

    - by Stephen Walther
    The goal of this blog entry is to explore the easiest way of inserting a new record into a database using jQuery and .NET. I'm going to explore two approaches: using generic handlers and using a WCF service (in a future blog entry I'll take a look at OData and WCF Data Services).

    Create the ASP.NET Project

    I'll start by creating a new empty ASP.NET application with Visual Studio 2010. Select the menu option File, New Project and select the ASP.NET Empty Web Application project template.

    Setup the Database and Data Model

    I'll use my standard MoviesDB.mdf movies database. This database contains one table named Movies (screenshot omitted). I'll use the ADO.NET Entity Framework to represent my database data:

    - Select the menu option Project, Add New Item and select the ADO.NET Entity Data Model project item. Name the data model MoviesDB.edmx and click the Add button.
    - In the Choose Model Contents step, select Generate from database and click the Next button.
    - In the Choose Your Data Connection step, leave all of the defaults and click the Next button.
    - In the Choose Your Data Objects step, select the Movies table and click the Finish button.

    Unfortunately, Visual Studio 2010 cannot spell movie correctly :) You need to click on Movy and change the name of the class to Movie. In the Properties window, change the Entity Set Name to Movies.

    Using a Generic Handler

    In this section, we'll use jQuery with an ASP.NET generic handler to insert a new record into the database. A generic handler is similar to an ASP.NET page, but it does not have any of the overhead. It consists of one method named ProcessRequest(). Select the menu option Project, Add New Item and select the Generic Handler project item. Name your new generic handler InsertMovie.ashx and click the Add button. Modify your handler so it looks like Listing 1:

    Listing 1 – InsertMovie.ashx

    using System.Web;

    namespace WebApplication1
    {
        /// <summary>
        /// Inserts a new movie into the database
        /// </summary>
        public class InsertMovie : IHttpHandler
        {
            private MoviesDBEntities _dataContext = new MoviesDBEntities();

            public void ProcessRequest(HttpContext context)
            {
                context.Response.ContentType = "text/plain";

                // Extract form fields
                var title = context.Request["title"];
                var director = context.Request["director"];

                // Create movie to insert
                var movieToInsert = new Movie { Title = title, Director = director };

                // Save new movie to DB
                _dataContext.AddToMovies(movieToInsert);
                _dataContext.SaveChanges();

                // Return success
                context.Response.Write("success");
            }

            public bool IsReusable
            {
                get { return true; }
            }
        }
    }

    In Listing 1, the ProcessRequest() method is used to retrieve a title and director from form parameters. Next, a new Movie is created with the form values. Finally, the new movie is saved to the database and the string "success" is returned.

    Using jQuery with the Generic Handler

    We can call the InsertMovie.ashx generic handler from jQuery by using the standard jQuery post() method.
The following HTML page illustrates how you can retrieve form field values and post the values to the generic handler:

Listing 2 – Default.htm

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title>Add Movie</title>
    <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script>
</head>
<body>
    <form>
        <label>Title:</label>
        <input name="title" />
        <br />
        <label>Director:</label>
        <input name="director" />
    </form>
    <button id="btnAdd">Add Movie</button>
    <script type="text/javascript">
        $("#btnAdd").click(function () {
            $.post("InsertMovie.ashx", $("form").serialize(), insertCallback);
        });

        function insertCallback(result) {
            if (result == "success") {
                alert("Movie added!");
            } else {
                alert("Could not add movie!");
            }
        }
    </script>
</body>
</html>

When you open the page in Listing 2 in a web browser, you get a simple HTML form. Notice that the page in Listing 2 includes the jQuery library, which is included with the following SCRIPT tag:

<script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script>

The jQuery library is hosted on the Microsoft Ajax CDN, so you can always easily include it in your applications. You can learn more about the CDN at this website: http://www.asp.net/ajaxLibrary/cdn.ashx

When you click the Add Movie button, the jQuery post() method is called to post the form data to the InsertMovie.ashx generic handler. Notice that the form values are serialized into a URL-encoded string by calling the jQuery serialize() method. The serialize() method uses the name attribute of form fields and not the id attribute.

Notes on this Approach

This is a very low-level approach to interacting with .NET through jQuery – but it is simple and it works! And, you don't need to use any JavaScript libraries in addition to the jQuery library to use this approach. The signature for the jQuery post() callback method looks like this:

callback(data, textStatus, XmlHttpRequest)

The second parameter, textStatus, returns the status of the response from the server. I tried returning different status codes from the generic handler with an eye towards implementing server validation by returning a status code such as 400 Bad Request when validation fails (see http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html). I finally figured out that the callback is not invoked when the textStatus has any value other than "success".

Using a WCF Service

As an alternative to posting to a generic handler, you can create a WCF service. You create a new WCF service by selecting the menu option Project, Add New Item and selecting the Ajax-enabled WCF Service project item. Name your WCF service MovieService.svc and click the Add button.
Modify the WCF service so that it looks like Listing 3:

Listing 3 – MovieService.svc

using System.ServiceModel;
using System.ServiceModel.Activation;

namespace WebApplication1
{
    [ServiceBehavior(IncludeExceptionDetailInFaults = true)]
    [ServiceContract(Namespace = "")]
    [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
    public class MovieService
    {
        private MoviesDBEntities _dataContext = new MoviesDBEntities();

        [OperationContract]
        public bool Insert(string title, string director)
        {
            // Create movie to insert
            var movieToInsert = new Movie { Title = title, Director = director };

            // Save new movie to DB
            _dataContext.AddToMovies(movieToInsert);
            _dataContext.SaveChanges();

            // Return success
            return true;
        }
    }
}

The WCF service in Listing 3 uses the Entity Framework to insert a record into the Movies database table. The service always returns the value true. Notice that the service in Listing 3 includes the following attribute:

[ServiceBehavior(IncludeExceptionDetailInFaults = true)]

You need to include this attribute if you want detailed error information returned to the client. Include it while you are building the application, but remove it for security reasons when you are ready to release the application.

Using jQuery with the WCF Service

Calling a WCF service from jQuery requires a little more work than calling a generic handler from jQuery. Here are some good blog posts on some of the issues with using jQuery with WCF:

http://encosia.com/2008/06/05/3-mistakes-to-avoid-when-using-jquery-with-aspnet-ajax/
http://encosia.com/2008/03/27/using-jquery-to-consume-aspnet-json-web-services/
http://weblogs.asp.net/scottgu/archive/2007/04/04/json-hijacking-and-how-asp-net-ajax-1-0-mitigates-these-attacks.aspx
http://www.west-wind.com/Weblog/posts/896411.aspx
http://www.west-wind.com/weblog/posts/324917.aspx
http://professionalaspnet.com/archive/tags/WCF/default.aspx

The primary requirement when calling WCF from jQuery is that the request use JSON:

- The request must include a content-type: application/json header.
- Any parameters included with the request must be JSON encoded.

Unfortunately, jQuery does not include a method for serializing JSON (although, oddly, jQuery does include a parseJSON() method for deserializing JSON). Therefore, we need to use an additional library to handle the JSON serialization. The page in Listing 4 illustrates how you can call a WCF service from jQuery.
Listing 4 – Default2.aspx

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title>Add Movie</title>
    <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script>
    <script src="Scripts/json2.js" type="text/javascript"></script>
</head>
<body>
    <form>
        <label>Title:</label>
        <input id="title" />
        <br />
        <label>Director:</label>
        <input id="director" />
    </form>
    <button id="btnAdd">Add Movie</button>
    <script type="text/javascript">
        $("#btnAdd").click(function () {
            // Convert the form into an object
            var data = { title: $("#title").val(), director: $("#director").val() };

            // JSONify the data
            data = JSON.stringify(data);

            // Post it
            $.ajax({
                type: "POST",
                contentType: "application/json; charset=utf-8",
                url: "MovieService.svc/Insert",
                data: data,
                dataType: "json",
                success: insertCallback
            });
        });

        function insertCallback(result) {
            // unwrap result
            result = result["d"];
            if (result === true) {
                alert("Movie added!");
            } else {
                alert("Could not add movie!");
            }
        }
    </script>
</body>
</html>

There are several things to notice about Listing 4. First, notice that the page includes both the jQuery library and Douglas Crockford's JSON2 library:

<script src="Scripts/json2.js" type="text/javascript"></script>

You need to include the JSON2 library to serialize the form values into JSON. You can download the JSON2 library from the following location: http://www.json.org/js.html

When you click the button to submit the form, the form data is converted into a JavaScript object:

// Convert the form into an object
var data = { title: $("#title").val(), director: $("#director").val() };

Next, the data is serialized into JSON using the JSON2 library:

// JSONify the data
data = JSON.stringify(data);

Finally, the form data is posted to the WCF service by calling the jQuery ajax() method:

// Post it
$.ajax({
    type: "POST",
    contentType: "application/json; charset=utf-8",
    url: "MovieService.svc/Insert",
    data: data,
    dataType: "json",
    success: insertCallback
});

You can't use the standard jQuery post() method because you must set the content-type of the request to application/json. Otherwise, the WCF service will reject the request for security reasons. For details, see this Scott Guthrie blog post: http://weblogs.asp.net/scottgu/archive/2007/04/04/json-hijacking-and-how-asp-net-ajax-1-0-mitigates-these-attacks.aspx

The insertCallback() method is called when the WCF service returns a response. This method looks like this:

function insertCallback(result) {
    // unwrap result
    result = result["d"];
    if (result === true) {
        alert("Movie added!");
    } else {
        alert("Could not add movie!");
    }
}

When we called the jQuery ajax() method, we set the dataType to JSON. That causes the jQuery ajax() method to automatically deserialize the response from the WCF service from JSON into a JavaScript object. The following value is passed to the insertCallback method:

{"d":true}

For security reasons, a WCF service always returns a response with a "d" wrapper.
The following line of code removes the "d" wrapper:

// unwrap result
result = result["d"];

To learn more about the "d" wrapper, I recommend that you read the following blog posts:

http://encosia.com/2009/02/10/a-breaking-change-between-versions-of-aspnet-ajax/
http://encosia.com/2009/06/29/never-worry-about-asp-net-ajaxs-d-again/

Summary

In this blog entry, I explored two methods of inserting a database record using jQuery and .NET. First, we created a generic handler and called the handler from jQuery. This is a very low-level approach, but it is a simple approach that works. Next, we looked at how you can call a WCF service using jQuery. This approach required a little more work because you need to serialize objects into JSON. We used the JSON2 library to perform the serialization. In the next blog post, I want to explore how you can use jQuery with OData and WCF Data Services.
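A side note on Listing 1, not from the original article: the handler caches a single MoviesDBEntities instance in an instance field while IsReusable returns true, so a reused handler instance can end up sharing one ObjectContext across requests, and Entity Framework contexts are not thread-safe. A more defensive sketch creates and disposes the context per request:

using System.Web;

public class InsertMovie : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/plain";

        // Create (and dispose) a context per request instead of
        // sharing one instance across reused handler instances.
        using (var dataContext = new MoviesDBEntities())
        {
            var movieToInsert = new Movie
            {
                Title = context.Request["title"],
                Director = context.Request["director"]
            };

            dataContext.AddToMovies(movieToInsert);
            dataContext.SaveChanges();
        }

        context.Response.Write("success");
    }

    public bool IsReusable
    {
        // Safe to reuse now that no state is shared between requests
        get { return true; }
    }
}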

    Read the article

  • Where does ASP.NET Web API Fit?

    - by Rick Strahl
With the pending release of ASP.NET MVC 4 and the new ASP.NET Web API, there has been a lot of discussion of where the new Web API technology fits in the ASP.NET Web stack. There are a lot of choices for building HTTP based applications available now on the stack - we've come a long way from when WebForms and Http Handlers/Modules were the only real options. Today we have WebForms, MVC, ASP.NET Web Pages, ASP.NET AJAX, WCF REST and now Web API, as well as the core ASP.NET runtime, to choose from to build HTTP content. Web API squarely addresses the 'API' aspect - building consumable services - rather than HTML content, but even to that end there are a lot of choices available today. So where does Web API fit, and when doesn't it? But before we get into that discussion, let's talk about what a Web API is and why we should care.

What's a Web API?

HTTP 'APIs' (Microsoft's new terminology for a service, I guess) are becoming increasingly important with the rise of the many devices in use today. Most mobile devices like phones and tablets run apps that use data retrieved from the Web over HTTP. Desktop applications are also moving in this direction, with more and more online content and syncing moving into even traditional desktop applications. The pending Windows 8 release promises an app-like platform for both the desktop and other devices that also emphasizes consuming data from the cloud. Likewise, many Web browser hosted applications these days rely on rich client functionality to create and manipulate the browser user interface, using AJAX rather than server generated HTML to load up the user interface with data. These mobile or rich Web applications use their HTTP connection to return data rather than HTML markup, typically in the form of JSON or XML. But an API can also serve other kinds of data, like images or other binary files, or even text data and HTML (although that's less common). A Web API is what feeds rich applications with data. ASP.NET Web API aims to service this particular segment of Web development by providing easy semantics to route and handle incoming requests, and an easy to use platform to serve HTTP data in just about any content format you choose to create and serve from the server.

But .NET already has various HTTP Platforms

The .NET stack already includes a number of technologies that provide the ability to create HTTP service back ends, and it has done so since the very beginnings of the .NET platform. From raw HTTP Handlers and Modules in the core ASP.NET runtime, to high level platforms like ASP.NET MVC, Web Forms, ASP.NET AJAX and the WCF REST engine (which technically is not ASP.NET, but can integrate with it), you've always been able to handle just about any kind of HTTP request and response with ASP.NET. The beauty of the raw ASP.NET platform is that it provides you everything you need to build just about any type of HTTP application you can dream up, from low level APIs/custom engines to high level HTML generation engines. ASP.NET as a core platform has clearly stood the test of time 10+ years later, and all other frameworks like Web API are built on top of this ASP.NET core. However, although it's possible to create Web APIs / Services using any of the existing out-of-box .NET technologies, none of them have been a really nice fit for building arbitrary HTTP based APIs.
Sure, you can use an HttpHandler to create just about anything, but you have to build a lot of plumbing to create something more complex, like a comprehensive API that serves a variety of requests, handles multiple output formats and can easily pass data up to the server in a variety of ways. Likewise, you can use ASP.NET MVC to handle routing and create content in various formats fairly easily, but it doesn't provide a great way to automatically negotiate content types and serve various content formats directly (it's possible with some plumbing code of your own, but it's not built in). Prior to Web API, Microsoft's main push for HTTP services was WCF REST, which was always an awkward technology with a severe personality conflict: it was never clear whether it wanted to be part of WCF or a purely separate technology. In the end it did neither WCF compatibility nor WCF-agnostic pure HTTP operation very well, which made for a very developer-unfriendly environment. Personally I didn't like any of the implementations at the time - so much so that I ended up building my own HTTP service engine (as part of the West Wind Web Toolkit), as did a few third parties whose tools provided much better integration and ease of use. With the release of Web API, for the first time I feel that I can use the tools in the box and not have to worry about creating and maintaining my own toolkit, as Web API addresses just about all the features I implemented on my own, and much more.

ASP.NET Web API provides a better HTTP Experience

ASP.NET Web API differentiates itself from the previous Microsoft in-box HTTP service solutions in that it was built from the ground up around the HTTP protocol and its messaging semantics. Unlike WCF REST or ASP.NET AJAX with ASMX, it's a brand new platform rather than a bolted-on technology that is supposed to work in the context of an existing framework. The strength of the new ASP.NET Web API is that it combines the best features of the platforms that came before it to provide a comprehensive and very usable HTTP platform. Because it's based on ASP.NET and borrows a lot of concepts from ASP.NET MVC, Web API should be immediately familiar and comfortable to most ASP.NET developers. Here are some of the features that Web API provides that I like:

- Strong support for URL routing to produce clean URLs, using familiar MVC-style routing semantics
- Content negotiation based on Accept headers for request and response serialization
- Support for a host of output formats including JSON, XML, ATOM
- Strong default support for REST semantics, although they are optional
- Easily extensible formatter support to add new input/output types
- Deep support for more advanced HTTP features via the HttpResponseMessage and HttpRequestMessage classes and strongly typed enums that describe many HTTP operations
- Convention based design that drives you into doing the right thing for HTTP services
- Very extensible, based on an MVC-like extensibility model of formatters and filters
- Self-hostable in non-Web applications
- Testable, using testing concepts similar to MVC

Web API is meant to handle any kind of HTTP input and produce output and status codes using the full spectrum of HTTP functionality available, in a straightforward and flexible manner. Looking at the list above you can see that a lot of functionality is very similar to ASP.NET MVC, so many ASP.NET developers should feel quite comfortable with the concepts of Web API.
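To make the MVC resemblance concrete, here is a rough sketch of a minimal Web API controller as it looked in the beta-era bits (the Product type and the in-memory list are placeholders invented for illustration; exact base-class details shifted between preview releases):

using System.Collections.Generic;
using System.Linq;
using System.Web.Http;   // ApiController lives here in the beta bits

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class ProductsController : ApiController
{
    // In-memory stand-in for a real data store
    private static readonly List<Product> Products = new List<Product>
    {
        new Product { Id = 1, Name = "Widget" }
    };

    // GET api/products
    // The result is serialized as JSON, XML, etc. based on the
    // request's Accept header (content negotiation).
    public IEnumerable<Product> GetAllProducts()
    {
        return Products;
    }

    // GET api/products/5
    public Product GetProduct(int id)
    {
        return Products.FirstOrDefault(p => p.Id == id);
    }
}

With a route registered along the lines of routes.MapHttpRoute("DefaultApi", "api/{controller}/{id}", new { id = RouteParameter.Optional }), the methods above are matched by HTTP verb and method-name convention, much like MVC's action selection.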
The routing and core infrastructure of Web API are very similar to how MVC works, providing many of the benefits of MVC, but with a focus on HTTP access and manipulation in controller methods rather than HTML generation. There's much improved support for content negotiation based on HTTP Accept headers, with the framework capable of automatically detecting what content the client is sending and requesting, and serving the appropriate data format in return. This seems like such a little and obvious thing, but it's really important. Today's service backends are often used by multiple clients/applications, and being able to choose the right data format for what fits the client best is very important. While previous solutions were able to accomplish this using a variety of mixed features of WCF and ASP.NET, Web API combines all this functionality into a single robust server side HTTP framework that intrinsically understands the HTTP semantics and subtly drives you in the right direction for most operations. And when you need to customize or do something that is not built in, there are lots of hooks and overrides for most behaviors, and even many low level hook points that allow you to plug in custom functionality with relatively little effort.

No Brainers for Web API

There are a few scenarios that are a slam dunk for Web API. If the primary focus of an application, or even a part of an application, is some sort of API, then Web API makes great sense.

HTTP Services: If you're building a comprehensive HTTP API that is to be consumed over the Web, Web API is a perfect fit. You can isolate the logic in Web API and build your application as a service, breaking out the logic into controllers as needed. Because the primary interface is the service, there's no confusion over what should go where (MVC or API). Perfect fit.

Primary AJAX Backends: If you're building rich client Web applications that rely heavily on AJAX callbacks to serve their data, Web API is also a slam dunk. Again, because much if not most of the business logic will probably end up in your Web API service logic, there's no confusion over where logic should go and there's no duplication. In Single Page Applications (SPA), typically there's very little HTML based logic served other than bringing up a shell UI, with the data then filled in from the server via AJAX, which means the business logic required for data retrieval, data acceptance and validation also lives in the Web API. Perfect fit.

Generic HTTP Endpoints: Another good fit are generic HTTP endpoints that serve data or handle 'utility' type functionality in typical Web applications. If you need to implement an image server or an upload handler, in the past I'd implement that as an HTTP handler. With Web API you now have a well defined place where you can implement these types of generic 'services', in a location that can easily grow new endpoints (via controller methods) or be separated out as more full featured APIs. Granted, this could be done with MVC as well, but Web API seems a clearer and more well defined place to store generic application services. This is one thing I used to do a lot of in my own libraries, and Web API addresses it nicely. Great fit.

Mixed HTML and AJAX Applications: Not a clear Choice

For all the commonality that Web API and MVC share, they are fundamentally different platforms that are independent of each other. A lot of people have asked when it makes sense to use MVC vs.
Web API when you're dealing with a typical Web application that creates HTML and also uses AJAX functionality for rich behavior. While it's easy to say that all 'service'/AJAX logic should go into a Web API and all HTML related generation into MVC, that can often result in a lot of code duplication. Also, MVC supports JSON and XML result data fairly easily as well, so there's some confusion about where that 'trigger point' is - when should you switch to Web API rather than just implementing functionality as part of MVC controllers? Ultimately there's a tradeoff between isolation of functionality and duplication. A good rule of thumb, I think, is that if a large chunk of the application's functionality serves data, Web API is a good choice; but if you have a couple of small AJAX requests that serve data to a grid or autocomplete box, it'd be overkill to separate that logic out into a Web API controller. Web API does add overhead to your application (it's yet another framework that sits on top of core ASP.NET), so it should be worth it. Keep in mind that MVC can generate HTML and JSON/XML and just about any other content easily, and that functionality is not going away - so just because Web API is there doesn't mean you have to use it. Web API is not a full replacement for MVC either, since there's not the same level of support for feeding HTML from Web API controllers (although you can host a RazorEngine easily enough if you really want to go that route). So if your HTML is part of your API or application in general, MVC is still a better choice, either alone or in combination with Web API. I suspect (and hope) that in the future Web API's functionality will merge even closer with MVC, so that you might even be able to mix functionality of both into single controllers and not have to make any trade-offs; but at the moment that's not the case.

Some Issues To think about

Web API is similar to MVC but not the Same: Although Web API looks a lot like MVC, it's not the same, and some common functionality of MVC behaves differently in Web API. For example, the way single POST variables are handled is different from MVC and doesn't lend itself particularly well to some AJAX scenarios with POST data.

Code Duplication: I already touched on this in the Mixed HTML and Web API section, but if you build an MVC application that also exposes a Web API, it's quite likely that you will end up duplicating a bunch of code and - potentially - infrastructure. You may have to create authentication logic both for an HTML application and for the Web API, which might need something different altogether. More often than not the same logic is used, and there's no easy way to share it. If you implement an MVC ActionFilter and you want that same functionality in your Web API, you'll end up creating the filter twice.

AJAX Data or AJAX HTML: On a recent post's comments, David made some really good points regarding the commonality of MVC and Web API and their place. One comment that caught my eye was a little more generic, regarding data services vs. HTML services. David says:

I see a lot of merit in the combination of Knockout.js, client side templates and view models, calling Web API for a responsive UI, but sometimes late at night that still leaves me wondering why I would no longer be using some of the nice tooling and features that have evolved in MVC ;-)

You know what - I can totally relate to that.
On the last Web based mobile app I worked on, we decided to serve HTML partials to the client via AJAX for many (but not all!) things, rather than sending down raw data to inject into the DOM on the client via templating or direct manipulation. While there are definitely more bytes on the wire with this approach, the overhead ended up being fairly small if you keep the 'data' requests small and atomic, and the cost was often made up by the lack of client side rendering of HTML. Server rendered HTML for AJAX templating gives so much better infrastructure support, without having to screw around with 20 mismatched client libraries. Especially with MVC and partials it's pretty easy to break out your HTML logic into very small, atomic chunks, so it's actually easy to create small rendering islands that can be used via composition on the server, or via AJAX calls to small, tight partials that return HTML to the client. Although this is often frowned upon as too 'heavy', it worked really well in terms of developer effort as well as providing surprisingly good performance on devices. There's still plenty of jQuery and AJAX logic happening on the client, but it's more manageable in small doses rather than trying to do the entire UI composition with JavaScript and/or 'not-quite-there-yet' template engines that are very difficult to debug. This is not an issue directly related to Web API of course, but something to think about, especially for AJAX or SPA style applications.

Summary

Web API is a great new addition to the ASP.NET platform, and it addresses a serious need for consolidation of the many half-baked HTTP service API technologies that came before it. Web API feels 'right', and hits the right combination of usability and flexibility, at least for me, and it's a good fit for true API scenarios. However, just because a new platform is available doesn't mean that other tools or tech that came before it should be discarded, or that everything should be upgraded to the new platform. There's nothing wrong with continuing to use MVC controller methods to handle API tasks if that's what your app is running now - there's very little to be gained by upgrading to Web API just because. But going forward, Web API clearly is the way to go when building HTTP data interfaces, and it's good to see that Microsoft got this one right - it was sorely needed!

Resources

ASP.NET Web API
AspConf Ask the Experts Session (first 5 minutes)

© Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web Api.
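Sketching the hybrid approach described above (controller, view and helper names are invented for illustration): a plain MVC controller can serve both a small JSON callback and a server-rendered HTML island, with the client injecting the returned partial markup straight into the DOM.

using System.Web.Mvc;

public class EventsController : Controller
{
    // Stand-in for whatever data access the application uses
    private object UpcomingEventsQuery() { return new string[0]; }

    // Small AJAX data callback: a couple of these don't justify
    // breaking the logic out into a separate Web API controller.
    public ActionResult UpcomingJson()
    {
        return Json(UpcomingEventsQuery(), JsonRequestBehavior.AllowGet);
    }

    // AJAX HTML island: render a small, atomic partial on the server;
    // the client injects the returned markup into the page.
    public ActionResult UpcomingPanel()
    {
        return PartialView("_UpcomingEvents", UpcomingEventsQuery());
    }
}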

    Read the article

  • CodePlex Daily Summary for Wednesday, February 17, 2010

CodePlex Daily Summary for Wednesday, February 17, 2010

New Projects
- Academic Success Accounting System: The system is intended to use by school teacher to set marks to students and estimate their academic success and possibilities. The client applicat...
- Access.PowerTools: Access PowerTools is currently a sample MS Access add-in project to try & test features of Add-in Express™ 2009 for Microsoft® Office and .net (ht...
- AntoonCms: AntoonCms makes it easy to maintain a simple website with it's builtin administration pages. It's developed in C# on target Framework 2.0 The CMS...
- ASP.NET MVC Mehr Lib: Mehr Lib makes it easier for ASP.NET MVC developers to develop projects. It's developed in C#. This version currently include Ajax master detail...
- BCryptTool: Developer tool that calculates BCrypt hash codes for strings. BCrypt is an implementation of the Blowfish cipher and a computationally-expensive ha...
- Coronasoft Cryostasis scripting engine: A scripting engine that allows you to dynamically load plugins from just about any supported .NET language. Its written in C#. Languages supported ...
- Critical Point Search: Critical Point Search
- critical points: critical points
- Font Family Name Retrieval: This library helps developer to retrieve the font family name from the TTF, OTF and TTC font files, so that developer can display the font without ...
- jQuery Form Input Hints Plugin: Automatically display hints on input textboxes in your forms using this jQuery plugin. I wrote this code to be as simple and as easy to use as pos...
- Kojax: kojax project
- KronRetro: KronRetro! Making a Habbo Retro just got easier! Powered by PHP & MySQL you can make a Habbo Retro site fast!
- MVVM Wrapper Kit: MVVM Wrapper Kit makes it easier for View Model programmers to wrap their business objects and collections while preserving change notification and...
- ObjectCartographer: ObjectCartographer is an object to object mapper and object factory. It's developed in C#.
- PE-file Reader Writer API (PERWAPI): PERWAPI is a reader writer module for .NET program executables. It has been used as back-end for progamming language compilers such as Gardens Poi...
- Pinger: A simple Pinger, pings an address until you press a button
- QPV: 0.1: QPV, aka "Que pelicula", is an application for building a powerful database of movies, reviews and information in order to filter movi...
- SIMD Detector: This SIMD class helps developers to detect the types of SIMD instruction available on users' processor. It supports Intel and AMD CPUs. It is writt...
- StackOverflow Test Project: Following Andrew Siemer's StackOverflow Knowledge Exchange Project.
- WeBlog: A blogging platform built on the MVC framework. The project will showcase current technologies such as MVC 2, Silverlight 4 and jQuery 1.4. Data pro...
- Webmedia: this is my webmedia project
- Windows Azure RSS Reader: This is an online RSS reader based on the Windows Azure platform
- WordEditor. A Word Editor for Windows, and an extended RichTextBox control.: This is a word editor that can be used as a stand alone word processor, or added to an existing project.
- Домашняя Бухгалтерия: A program for keeping household financial accounts.

New Releases
- Access.PowerTools: Access PowerTools Add-In Community Edition v.0.0.1: Access PowerTools Add-In Community Edition v.0.0.1 is a sample MS Access add-in project to try & test Add-in Express™ 2009 for Microsoft® Office an...
- Active CSS: ActiveCSS-0.1.1: revision for version 0.1
- ASP.NET: Microsoft Ajax Minifier 4.0: The Microsoft Ajax Minifier enables you to improve the performance of your Ajax applications by reducing the size of your Cascading Style Sheet and...
- ASP.NET MVC Mehr Lib: V1.0: Mehr Lib V1.0. This version currently include ajax master detail combo facilities.
- ASP.net Ribbon: Version 1.2: New controls: Expandable gallery, Color Picker, Multi color, File Menu. Some JS modifications. Some CSS modifications. Includes some functionna...
- ASP.NET Web Forms Model-View-Presenter (MVP) Contrib: WebForms MVP Contrib CTP6: This is a release of the WebForms MVP Contrib project for WebForms MVP CTP6. Release includes: WebForms MVP Contrib framework, Ninject IoC container
- AwesomiumDotNet: AwesomiumDotNet 1.2.1: Added Awesomium 1.5 features: URL filtering, header rewrite rules, SetOpensExternalLinksInCallingFrame. Numerous fixes and improvements.
- BCryptTool: BCryptTool v0.1: The Microsoft .NET Framework 3.5 (SP1) is needed to run this program.
- Buzz Dot Net: Buzz Dot Net v.1.10216: Features: Parse Google Buzz feed to Objects, Partial MVVM Implementation, Partial Optimizations
- Canvas VSDOC Intellisense: v1.0.0.0a: canvas-vsdoc.js and canvas-utils.js JavaScript intellisense for HTML5 Canvas element.
- CheckHeader: CheckHeader v0.8.5: The Microsoft .NET Framework 3.5 (SP1) is needed to run this program.
- Claymore MVP: Claymore 1.0.2.0: Changelog: Added ASP.NET WebForm support via ClaymoreHttpModule class. Added xsd schema for Visual Studio Intellisense within App.config and Web....
- Dam Gd - URL Shortner: Dam.gd Version 1.1: This is the latest instalment in our URL shortner. It uses The Easy API http://theeasyapi.com to access data that is used for the back-end analyti...
- D-AMPS: D-AMPS 0.9.1: Initial version.
- easySMS: easySMS 1.0 Source code: easySMS 1.0 Source code
- Font Family Name Retrieval: 1st Release: Version 1.0.0
- Free Silverlight & WPF Chart Control - Visifire: Visifire Now Supports DataBinding: Today we are releasing the much awaited DataBinding feature in Visifire 3.0.3 beta 3. Now you can Bind any DataSource at the Series level so t...
- GenerateTypedBamApi: Version 2.0: Changes in this release: NEW: Export functionality no longer requires Excel to be installed (uses OLE DB vs. Excel Automation; also enables usage i...
- Gmail Notifier 2: GmailNotifier2 1.2.1: Fixes issues #9652, #9653
- iTuner - The iTunes Companion: iTuner 1.1.3699: This includes the first pass of the iTuner Librarian, including management of dead tracks, duplicates, and empty directories... While I promised a ...
- jQuery Form Input Hints Plugin: jQuery.InputHints v1.0: jQuery.InputHints v1.0 includes standard & minified source, a demo HTML file, and a VS2008 solution
- LibWowArmory: LibWowArmory 0.2.3 beta: This release of the LibWowArmory source code matches the WoW Armory as of version 3.3.2. Changes since version 0.2.2: Update...
- Managed Extensibility Framework: MEF Preview 9: We have merged the .net 3.5 and Silverlight 3 builds into a single zip. The bin folder contains the binaries for .net 3.5 whereas bin\SL contains the bina...
- MDX Parser,Builder,DOM and OLAP visual controls with Writeback for Silverlight: Ranet.UILibrary.Olap-1.3.3.0-6571.msi: February 16, 2010. MdxDesigner: Fix for the issue where, when an element is clicked, the mouse wheel stops working until the cursor leaves and r...
- MEFGeneric: MEFGeneric Preview 9: MEFGeneric Preview 9 release.
- Mesopotamia Experiment: Mesopotamia 1.2.26: Bug fixes: mud map, progress window, recycle app domains on robotics engine crashes (in command prompt and visual, major work), fixed rooomba h...
- Microsoft Solution Framework for Business Intelligence in Media: Release 1.0: This is the public release of the Microsoft Solution Framework for Business Intelligence in Media (Release 1.0).
- MVVM Wrapper Kit: MVVM Wrapper Beta: A simple test project is included to get you up and running, and wrapping those business objects.
- nBayes - Bayesian Filtering in C#: nBayes v0.2: nBayes' indexing system is factored in such a way that you can easily replace the index with a custom implementation. This release introduces an ad...
- NetSqlAzMan - .NET SQL Authorization Manager: 3.6.0.5: 3.6.0.5, 16-February-2010. Fix: SqlAzManSid Class. "Equals" matches object signature instead of IAzManSid signature. When a real null object is pas...
- ObjectCartographer: ObjectCartographer Code 1.0: This is the first release and contains code to help with object to object mapping (including mapping from one object to multiple objects), object f...
- Office Apps: 0.8.6: Bug fixes, added Calendar.
- OI - Open Internet: OI HTML and .XAP files (OI offline): this is the HTML code and the XAP file. Please right-click the app at http://bit.ly/openinternet and select "install openinternet application to th...
- PE-file Reader Writer API (PERWAPI): PERWAPI-1.1.3: Perwapi version 1.1.3 is the complete distribution package. It contains binary files, pdb files and xml files for the PERWAPI and SymbolRW compone...
- Pinger: Pinger 1.0.0.0 Binary: The latest binary
- RNA Comparative Analysis Software Tools: RNA Comparative Analysis Software Tools 2.0: RNA Comparative Analysis Software Tools Version 2.0. Note: The RNA Comparative Analysis Software Tools are provided as is, without any warranty. No...
- SAL - Self Artificial Learning: Artificial Learning working proof of concept: This is a working proof of concept. It includes the Dev version (in .zip format) and the consumer version (in .exe format)
- SharePoint Management PowerShell scripts: SharePoint 2010 PowerShell Scripts: All the SharePoint 2010 PowerShell scripts. The first file is an Excel 2010 file allowing you to find quickly and easily the new cmdlets available wi...
- SIMD Detector: 1st Release: Version 1.3. Supports MMX/MMX+, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, SSE4a, SSE5, 3DNow.
- Terminals: Terminals 1.9 Beta Release: This is a beta release so the new features being added to Terminals can be tested properly. The major change in this release is that Terminals has...
- Text Designer Outline Text Library: 9th minor release: Added the ability to select a brush, such as a gradient brush or texture brush, for the text body. Added CSharp library, TextDesignerCSLibrary. Manage...
- VivoSocial: VivoSocial 7.0.2: This release has several updated modules. See the Support Forums for more details. Since we update modules very often, we will be changing how we d...
- WatchersNET CKEditor™ Provider for DotNetNuke: CKEditor Provider 1.6.00: Changes: CKEditor upgrade to Version 3.2 SVN 5132; File Browser: after file upload, the file will be auto selected; File Browser: icons are corrected ...
- WordEditor. A Word Editor for Windows, and an extended RichTextBox control.: WordEditor Source Code: This contains the latest solution file, with all project files included.
- Домашняя Бухгалтерия: Alapha Realease: Suggestions regarding the program's design and functionality are welcome.

Most Popular Projects
- Rawr
- WBFS Manager
- AJAX Control Toolkit
- Microsoft SQL Server Product Samples: Database
- Silverlight Toolkit
- Windows Presentation Foundation (WPF)
- Image Resizer Powertoy Clone for Windows
- Microsoft SQL Server Community & Samples
- ASP.NET
- LiveUpload to Facebook

Most Active Projects
- DinnerNow.net
- Rawr
- BlogEngine.NET
- Simple Savant
- NB_Store - Free DotNetNuke Ecommerce Catalog Module
- patterns & practices – Enterprise Library
- PHPExcel
- Sharpy
- jQuery Library for SharePoint Web Services
- Fluent Validation for .NET

    Read the article

  • VS 2010 SP1 and SQL CE

    - by ScottGu
Last month we released the Beta of VS 2010 Service Pack 1 (SP1). You can learn more about the VS 2010 SP1 Beta from Jason Zander's two blog posts about it, and from Scott Hanselman's blog post that covers some of the new capabilities enabled with it. You can download and install the VS 2010 SP1 Beta here. Last week I blogged about the new Visual Studio support for IIS Express that we are adding with VS 2010 SP1. In today's post I'm going to talk about the new VS 2010 SP1 tooling support for SQL CE, and walk through some of the cool scenarios it enables.

SQL CE – What is it and why should you care?

SQL CE is a free, embedded database engine that enables easy database storage.

No Database Installation Required

SQL CE does not require you to run a setup or install a database server in order to use it. You can simply copy the SQL CE binaries into the \bin directory of your ASP.NET application, and then your web application can use it as a database engine. No setup or extra security permissions are required for it to run, and you do not need an administrator account on the machine. Just copy your web application onto any server and it will work. This is true even of medium-trust applications running in a web hosting environment. SQL CE runs in-memory within your ASP.NET application; it will start up when you first access a SQL CE database and automatically shut down when your application is unloaded. SQL CE databases are stored as files that live within the \App_Data folder of your ASP.NET applications.

Works with Existing Data APIs

SQL CE 4 works with existing .NET-based data APIs and supports a SQL Server compatible query syntax. This means you can use existing data APIs like ADO.NET, as well as higher-level ORMs like Entity Framework and NHibernate, with SQL CE. This enables you to use the same data programming skills and data APIs you know today.

Supports Development, Testing and Production Scenarios

SQL CE can be used for development scenarios, testing scenarios, and light production usage scenarios. With the SQL CE 4 release we've done the engineering work to ensure that SQL CE won't crash or deadlock when used in a multi-threaded server scenario (like ASP.NET). This is a big change from previous releases of SQL CE – which were designed for client-only scenarios and explicitly blocked running in web-server environments. Starting with SQL CE 4 you can use it in a web server as well. There are no license restrictions with SQL CE. It is also totally free.

Easy Migration to SQL Server

SQL CE is an embedded database – which makes it ideal for development, testing, and light-usage scenarios. For high-volume sites and applications you'll probably want to migrate your database to use SQL Server Express (which is free), SQL Server or SQL Azure. These servers enable much better scalability and more development features (including features like stored procedures – which aren't supported with SQL CE), as well as more advanced data management capabilities. We'll ship migration tools that enable you to optionally take SQL CE databases and easily upgrade them to use SQL Server Express, SQL Server, or SQL Azure. You will not need to change your code when upgrading a SQL CE database to SQL Server or SQL Azure. Our goal is to enable you to simply change the database connection string in your web.config file and have your application just work.
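As a quick illustration of the "existing data APIs" point (this sample is mine, not from the post): the same ADO.NET pattern you would use against SQL Server works against SQL CE via the System.Data.SqlServerCe provider; only the connection type and connection string change. The table and column names below are assumptions.

using System;
using System.Data.SqlServerCe;

class SqlCeSample
{
    static void Main()
    {
        // |DataDirectory| resolves to App_Data in an ASP.NET app;
        // in a plain .exe you could substitute a literal path to the .sdf file.
        const string connectionString = @"Data Source=|DataDirectory|\Store.sdf";

        using (var connection = new SqlCeConnection(connectionString))
        using (var command = new SqlCeCommand(
            "SELECT Id, ProductName FROM Products", connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0}: {1}",
                        reader.GetInt32(0), reader.GetString(1));
                }
            }
        }
    }
}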
New Tooling Support for SQL CE in VS 2010 SP1

VS 2010 SP1 includes much improved tooling support for SQL CE, and adds support for using SQL CE within ASP.NET projects for the first time. With VS 2010 SP1 you can now:

- Create new SQL CE databases
- Edit and modify SQL CE database schema and indexes
- Populate SQL CE databases with data
- Use the Entity Framework (EF) designer to create model layers against SQL CE databases
- Use EF Code First to define model layers in code, then create a SQL CE database from them, and optionally edit the DB with VS
- Deploy SQL CE databases to remote servers using Web Deploy, and optionally convert them to full SQL Server databases

You can take advantage of all of the above features from within both ASP.NET Web Forms and ASP.NET MVC based projects.

Download

You can enable SQL CE tooling support within VS 2010 by first installing VS 2010 SP1 (beta). Once SP1 is installed, you'll also need to install the SQL CE Tools for Visual Studio download. This is a separate download that enables the SQL CE tooling support for VS 2010 SP1.

Walkthrough of Two Scenarios

In this blog post I'm going to walk through how you can take advantage of SQL CE and VS 2010 SP1 using both an ASP.NET Web Forms and an ASP.NET MVC based application. Specifically, we'll walk through:

- How to create a SQL CE database using VS 2010 SP1, then use the EF4 visual designers in Visual Studio to construct a model layer from it, and then display and edit the data using an ASP.NET GridView control.
- How to use an EF Code First approach to define a model layer using POCO classes and then have EF Code First "auto-create" a SQL CE database for us based on our model classes. We'll then look at how we can use the new VS 2010 SP1 support for SQL CE to inspect the database that was created, populate it with data, and later make schema changes to it. We'll do all this within the context of an ASP.NET MVC based application.

You can follow the two walkthroughs below on your own machine by installing VS 2010 SP1 (beta) and then installing the SQL CE Tools for Visual Studio download (which is a separate download that enables SQL CE tooling support for VS 2010 SP1).

Walkthrough 1: Create a SQL CE Database, Create EF Model Classes, Edit the Data with a GridView

This first walkthrough will demonstrate how to create and define a SQL CE database within an ASP.NET Web Forms application. We'll then build an EF model layer for it and use that model layer to enable data editing scenarios with an <asp:GridView> control.

Step 1: Create a new ASP.NET Web Forms Project

We'll begin by using the File->New Project menu command within Visual Studio to create a new ASP.NET Web Forms project. We'll use the "ASP.NET Web Application" project template option so that it has a default UI skin implemented.

Step 2: Create a SQL CE Database

Right-click on the "App_Data" folder within the created project and choose the "Add->New Item" menu command. This will bring up the "Add Item" dialog box. Select the "SQL Server Compact 4.0 Local Database" item (new in VS 2010 SP1) and name the database file to create "Store.sdf". Note that SQL CE database files have a .sdf filename extension. Place them within the /App_Data folder of your ASP.NET application to enable easy deployment. When we click the "Add" button, a Store.sdf file is added to our project.

Step 3: Adding a "Products" Table

Double-clicking the "Store.sdf" database file will open it up within the Server Explorer tab.
Since it is a new database there are no tables within it. Right-click on the "Tables" icon and choose the "Create Table" menu command to create a new database table. We'll name the new table "Products" and add 4 columns to it. We'll mark the first column as a primary key (and make it an identity column so that its value will automatically increment with each new row). When we click "OK" our new Products table will be created in the SQL CE database.

Step 4: Populate with Data

Once our Products table is created it will show up within the Server Explorer. We can right-click it and choose the "Show Table Data" menu command to edit its data, and add a few sample rows of data to it.

Step 5: Create an EF Model Layer

We have a SQL CE database with some data in it – let's now create an EF model layer that will provide a way for us to easily query and update data within it. Right-click on the project and choose the "Add->New Item" menu command. This will bring up the "Add New Item" dialog – select the "ADO.NET Entity Data Model" item within it and name it "Store.edmx". This will add a new Store.edmx item to our solution explorer and launch a wizard that allows us to quickly create an EF model. Select the "Generate From Database" option and click next. Choose to use the Store.sdf SQL CE database we just created and then click next again. The wizard will then ask you what database objects you want to import into your model. Let's choose to import the "Products" table we created earlier. When we click the "Finish" button, Visual Studio will open up the EF designer. It will have a Product entity already on it that maps to the "Products" table within our SQL CE database. The VS 2010 SP1 EF designer works exactly the same with SQL CE as it does already with SQL Server and SQL Express. The Product entity will be persisted as a class (called "Product") that we can programmatically work against within our ASP.NET application.

Step 6: Compile the Project

Before using your model layer you'll need to build your project. Do a Ctrl+Shift+B to compile the project, or use the Build->Build Solution menu command.

Step 7: Create a Page that Uses our EF Model Layer

Let's now create a simple ASP.NET Web Form that contains a GridView control that we can use to display and edit the Products data (via the EF model layer we just created). Right-click on the project and choose the Add->New Item command. Select the "Web Form from Master Page" item template, and name the page "Products.aspx". Base the master page on the "Site.Master" template that is in the root of the project. Add an <h2>Products</h2> heading to the new page, and add an <asp:GridView> control within it. Then click the "Design" tab to switch into design view. Select the GridView control, and then click the top-right corner to display the GridView's "Smart Tasks" UI. Choose the "New data source…" drop-down option. This will bring up a dialog which allows you to pick your data source type. Select the "Entity" data source option – which will allow us to easily connect our GridView to the EF model layer we created earlier. This will bring up another dialog that allows us to pick our model layer. Select the "StoreEntities" option in the dropdown – which is the EF model layer we created earlier.
Then click next – which will allow us to pick which entity within it we want to bind to. Select the "Products" entity in the dialog – which indicates that we want to bind against the "Product" entity class we defined earlier. Then click the "Enable automatic updates" checkbox to ensure that we can both query and update Products. When you click "Finish", VS will wire up an <asp:EntityDataSource> to your <asp:GridView> control. The last two steps are to click the "Enable Editing" checkbox on the grid (which will cause the grid to display an "Edit" link on each row) and (optionally) use the Auto Format dialog to pick a UI template for the grid.

Step 8: Run the Application

Let's now run our application and browse to the /Products.aspx page that contains our GridView. When we do so we'll see a grid UI of the Products within our SQL CE database. Clicking the "Edit" link for any of the rows will allow us to edit their values. When we click "Update" the GridView will post back the values, persist them through our EF model layer, and ultimately save them within our SQL CE database.

Learn More about using EF with ASP.NET Web Forms

Read this tutorial series on the http://asp.net site to learn more about how to use EF with ASP.NET Web Forms. The tutorial series uses SQL Express as the database – but the nice thing is that all of the same steps/concepts can now also be done with SQL CE.

Walkthrough 2: Using EF Code-First with SQL CE and ASP.NET MVC 3

We used a database-first approach with the sample above – where we first created the database, and then used the EF designer to create model classes from the database. In addition to supporting a designer-based development workflow, EF also enables a more code-centric option which we call "code first development". Code-first development enables a pretty sweet development workflow. It enables you to:

- Define your model objects by simply writing "plain old classes" with no base classes or visual designer required
- Use a "convention over configuration" approach that enables database persistence without explicitly configuring anything
- Optionally override the convention-based persistence and use a fluent code API to fully customize the persistence mapping
- Optionally auto-create a database based on the model classes you define – allowing you to start from code first

I've done several blog posts about EF Code First in the past – I really think it is great. The good news is that it also works very well with SQL CE. The combination of SQL CE, EF Code First, and the new VS tooling support for SQL CE enables a pretty nice workflow. Below is a simple example of how you can use them to build a simple ASP.NET MVC 3 application.

Step 1: Create a new ASP.NET MVC 3 Project

We'll begin by using the File->New Project menu command within Visual Studio to create a new ASP.NET MVC 3 project. We'll use the "Internet Project" template so that it has a default UI skin implemented.

Step 2: Use NuGet to Install EFCodeFirst

Next we'll use the NuGet package manager (automatically installed by ASP.NET MVC 3) to add the EFCodeFirst library to our project. We'll use the Package Manager Console to do this. Bring up the console within Visual Studio by selecting the View->Other Windows->Package Manager Console menu command.
Then type:

install-package EFCodeFirst

within the Package Manager Console to download the EFCodeFirst library and have it added to our application.

Step 3: Build Some Model Classes

Using a "code first" based development workflow, we create our model classes first (even before we have a database). We create these model classes by writing code. For this sample, we will right-click on the "Models" folder of our project and add three classes: Dinner, RSVP and NerdDinners (the original post shows these as screenshots; a representative reconstruction appears below, after step 6). The "Dinner" and "RSVP" model classes are "plain old CLR objects" (aka POCO). They do not need to derive from any base classes or implement any interfaces, and the properties they expose are standard .NET data types. No data persistence attributes or data code has been added to them. The "NerdDinners" class derives from the DbContext class (which is supplied by EFCodeFirst) and handles the retrieval/persistence of our Dinner and RSVP instances from a database.

Step 4: Listing Dinners

We've written all of the code necessary to implement the model layer for this simple project. Let's now implement the URL /Dinners/Upcoming within our project. We'll use it to list upcoming dinners that happen in the future. We'll do this by right-clicking on our "Controllers" folder and selecting the "Add->Controller" menu command. We'll name the controller "DinnersController" and implement an "Upcoming" action method within it that lists upcoming dinners using our model layer, using a LINQ query to retrieve the data and pass it to a view to render. We'll then right-click within our Upcoming method and choose the "Add View" menu command to create an "Upcoming" view template that displays our dinners, using the "empty" template option within the "Add View" dialog and writing the view template with Razor.

Step 5: Configure our Project to use a SQL CE Database

We have finished writing all of our code – our last step will be to configure a database connection string to use. We will point our NerdDinners model class at a SQL CE database by adding a <connectionStrings> entry to the web.config file at the top of our project. EF Code First uses a default convention where context classes look for a connection string that matches the DbContext class name. Because we created a "NerdDinners" class earlier, we also name our connection string "NerdDinners". In it we configure SQL CE as the database and tell it that our SQL CE database file will live within the \App_Data directory of our ASP.NET project.

Step 6: Running our Application

Now that we've built our application, let's run it! We'll browse to the /Dinners/Upcoming URL – doing so will display an empty list of upcoming dinners. You might ask – but where did it query the dinners from? We didn't explicitly create a database?!? One of the cool features that EF Code First supports is the ability to automatically create a database (based on the schema of our model classes) when the database it points at doesn't exist. Above we configured EF Code First to point at a SQL CE database in the \App_Data\ directory of our project. When we ran our application, EF Code First saw that the SQL CE database didn't exist and automatically created it for us.
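The model classes, controller action and connection string from steps 3 through 5 appear as screenshots in the original post, so they are missing from this text-only excerpt. Here is a representative reconstruction; the property lists approximate the NerdDinner schema rather than reproduce the screenshots verbatim.

using System;
using System.Collections.Generic;
using System.Data.Entity;   // DbContext/DbSet come from the EFCodeFirst package
using System.Linq;
using System.Web.Mvc;

public class Dinner
{
    public int DinnerID { get; set; }
    public string Title { get; set; }
    public DateTime EventDate { get; set; }
    public string Address { get; set; }
    public string HostedBy { get; set; }
    public virtual ICollection<RSVP> RSVPs { get; set; }
}

public class RSVP
{
    public int RsvpID { get; set; }
    public int DinnerID { get; set; }
    public string AttendeeEmail { get; set; }
    public virtual Dinner Dinner { get; set; }
}

// Handles retrieval/persistence of Dinner and RSVP instances
public class NerdDinners : DbContext
{
    public DbSet<Dinner> Dinners { get; set; }
    public DbSet<RSVP> RSVPs { get; set; }
}

public class DinnersController : Controller
{
    // GET /Dinners/Upcoming
    public ActionResult Upcoming()
    {
        using (var db = new NerdDinners())
        {
            // LINQ query for dinners that happen in the future
            var dinners = db.Dinners
                            .Where(d => d.EventDate > DateTime.Now)
                            .OrderBy(d => d.EventDate)
                            .ToList();
            return View(dinners);
        }
    }
}

The matching connection string is the one the walkthrough describes: an entry named NerdDinners (matching the DbContext class name) whose providerName is System.Data.SqlServerCe.4.0 and whose Data Source points at |DataDirectory|\NerdDinners.sdf.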
Step 7: Using VS 2010 SP1 to Explore our newly created SQL CE Database

Click the "Show All Files" icon within the Solution Explorer and you'll see the "NerdDinners.sdf" SQL CE database file that was automatically created for us by EF Code First within the \App_Data\ folder. We can optionally right-click on the file and "Include in Project" to add it to our solution. We can also double-click the file (regardless of whether it is added to the project) and VS 2010 SP1 will open it as a database we can edit within the "Server Explorer" tab of the IDE. When we double-click our NerdDinners.sdf SQL CE file, we can drill in to see the schema of the Dinners and RSVPs tables in the tree explorer. Notice how two tables - Dinners and RSVPs - were automatically created for us within our SQL CE database. This was done by EF Code First when we accessed the NerdDinners class by running our application above. We can right-click on a table and use the "Show Table Data" command to enter some upcoming dinners in our database, using the built-in editor that VS 2010 SP1 supports to populate the table data. And now when we hit "refresh" on the /Dinners/Upcoming URL within our browser, we'll see some upcoming dinners show up.

Step 8: Changing our Model and Database Schema

Let's now modify the schema of our model layer and database, and walk through one way that the new VS 2010 SP1 tooling support for SQL CE can make this easier. With EF Code First you typically start making database changes by modifying the model classes. For example, let's add an additional string property called "UrlLink" to our "Dinner" class. We'll use this to point to a link with more information about the event. Now when we re-run our project and visit the /Dinners/Upcoming URL, we'll see an error thrown. We are seeing this error because EF Code First automatically created our database, and by default when it does this it adds a table that helps track whether the schema of our database is in sync with our model classes. EF Code First helpfully throws an error when they become out of sync – making it easier to track down issues at development time that you might otherwise only find (via obscure errors) at runtime. Note that if you do not want this feature, you can turn it off by changing the default conventions of your DbContext class (in this case our NerdDinners class) to not track the schema version. Our model classes and database schema are out of sync in the above example – so how do we fix this? There are two approaches you can use today:

- Delete the database and have EF Code First automatically re-create the database based on the new model class schema (losing the data within the existing DB)
- Modify the schema of the existing database to bring it in sync with the model classes (keeping/migrating the data within the existing DB)

There are a couple of ways you can do the second approach. Below I'm going to show how you can take advantage of the new VS 2010 SP1 tooling support for SQL CE to use a database schema tool to modify our database structure. We are also going to be supporting a "migrations" feature with EF in the future that will allow you to automate/script database schema migrations programmatically.

Step 9: Modify our SQL CE Database Schema using VS 2010 SP1

The new SQL CE tooling support within VS 2010 SP1 makes it easy to modify the schema of our existing SQL CE database.
To do this we’ll right-click on our “Dinners” table and choose the “Edit Table Schema” command: This will bring up the below “Edit Table” dialog.  We can rename, change or delete any of the existing columns in our table, or click at the bottom of the column listing and type to add a new column.  Below I’ve added a new “UrlLink” column of type “nvarchar” (since our property is a string): When we click OK our database will be updated to have the new column and our schema will now match our model classes. Because we are manually modifying our database schema, there is one additional step we need to take to let EF Code-First know that the database schema is in sync with our model classes.  As I mentioned earlier, when a database is automatically created by EF Code-First it adds an “EdmMetadata” table to the database to track schema versions (and hash our model classes against them to detect mismatches between our model classes and the database schema): Since we are manually updating and maintaining our database schema, we don’t need this table – and can just delete it: This will leave us with just the two tables that correspond to our model classes: And now when we re-run our /Dinners/Upcoming URL it will display the dinners correctly: One last touch would be to update our view to check for the new UrlLink property and render an <a> link to it if an event has one: And now when we refresh our /Dinners/Upcoming we will see hyperlinks for the events that have a UrlLink stored in the database:

Summary
SQL CE provides a free, embedded database engine that you can use to easily enable database storage.  With SQL CE 4 you can now take advantage of it within ASP.NET projects and applications (both Web Forms and MVC). VS 2010 SP1 provides tooling support that enables you to easily create, edit and modify SQL CE databases – as well as use the standard EF designer against them.  This allows you to re-use your existing skills and data knowledge while taking advantage of an embedded database option.  This is useful both for small applications (where you don’t need the scalability of a full SQL Server), as well as for development and testing scenarios – where you want to be able to rapidly develop/test your application without having a full database instance.  SQL CE makes it easy to later migrate your data to a full SQL Server or SQL Azure instance if you want to – without having to change any code in your application.  All we would need to change in the above two scenarios is the <connectionString> value within the web.config file in order to have our code run against a full SQL Server.  This provides the flexibility to start with a small embedded database solution and scale up your application as needed.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Implementing History Support using jQuery for AJAX websites built on asp.net AJAX

    - by anil.kasalanati
Problem Statement: Most modern-day websites use AJAX for page navigation, and gone are the days of complete HTTP redirection, so it is imperative that we support the back and forward buttons on the browser so that end users’ navigation is not broken. In this article we discuss solutions which are already available and the problems with them. Microsoft History Support: Post .NET 3.5 SP1, Microsoft’s ScriptManager supports history for websites using update panels. This is achieved by enabling the EnableHistory property of the ScriptManager and then handling the “Page_Browser_Navigate” event. So whenever the browser buttons are clicked the event is fired and the application can write code to do the navigation. The following articles provide good tutorials on how to do that: http://www.asp.net/aspnet-in-net-35-sp1/videos/introduction-to-aspnet-ajax-history http://www.codeproject.com/KB/aspnet/ajaxhistorymanagement.aspx The Microsoft API internally creates an IFrame and changes the bookmark of the URL. Unfortunately this has a bug: it does not work in IE6 and IE7, which are the major browsers, although it works in IE8 and Firefox. Microsoft has apparently fixed this bug in .NET 4.0; the following blog post describes it: http://weblogs.asp.net/joshclose/archive/2008/11/11/asp-net-ajax-addhistorypoint-bug.aspx For solutions which are still running on .NET 3.5 SP1 there is no fix which Microsoft offers, so there are two ways to solve this:

o   Disable the back button.
o   Develop a custom solution.

Disable back button
Even though this might look like a very simple thing to do, there are issues with it, because there is no event which can be manipulated from JavaScript. The browser does not provide an API to do this. So most of the technical solutions on the internet offer workarounds like doing a history.forward(1), so that even if the user clicks the back button the destination page redirects the user to the original page. This is not a good customer experience and does not work for an ASP.NET website where there are different views on the same page. There are other workarounds based on detecting the window unload events and writing code there. There are two events, onbeforeunload and onunload, and we can write code to show a confirmation message to the user. If we write code in onunload we can only show a message; it is too late to stop the navigation. And if we write it in onbeforeunload we can stop the navigation if the user clicks cancel, but this event would be triggered for all AJAX calls and hyperlinks where the href is anything other than #. We can do this, but the website has to be checked properly to ensure there are no links where the href is not #, otherwise the user would see a popup message saying “you are leaving the website”. Believe me, after doing a lot of research on disabling the back button, I found it easier to support it than to disable it. So I am going to discuss a solution which works using jQuery, with some tweaking. Custom Solution jQuery already provides an API to manage the history of an AJAX website - http://plugins.jquery.com/project/history We need to integrate this with the Microsoft page request manager so that both of them work in tandem. The page state is maintained in a cookie so that it can be passed to the server, and I used the jQuery cookie plugin for that – http://plugins.jquery.com/node/1386/release First, when the page loads, we need to hook up all the events on the page which should create browser history entries; the following is the code to do that.
jQuery(document).ready(function() {
    // Initialize history plugin.
    // The callback is called at once by present location.hash.
    jQuery.history.init(pageload);

    // set onclick event for buttons
    jQuery("a[@rel='history']").click(function() {
        var hash = this.page;
        hash = hash.replace(/^.*#/, '');
        isAsyncPostBack = true;
        // moves to a new page; pageload is called at once.
        jQuery.history.load(hash);
        return true;
    });
});

The above script gets all the DOM objects which have the attribute rel="history" and adds the click event. In our test page we have the link buttons below, which have the rel attribute set to history.

<asp:LinkButton ID="Previous" rel="history" runat="server" onclick="PreviousOnClick">Previous</asp:LinkButton>
<asp:LinkButton ID="AsyncPostBack" rel="history" runat="server" onclick="NextOnClick">Next</asp:LinkButton>
<asp:LinkButton ID="HistoryLinkButton" runat="server" style="display:none" onclick="HistoryOnClick"></asp:LinkButton>

And you can see that there is a hidden HistoryLinkButton which is used to send a server-side postback in case of browser back or forward buttons. Note that we need to use display:none and not Visible=false, because ASP.NET AJAX would disallow any postbacks if Visible=false. In general the pageload event gets executed on the client side when back or forward is pressed, and the function is shown below:

function pageload(hash) {
    if (hash) {
        if (!isAsyncPostBack) {
            jQuery.cookie("page", hash);
            __doPostBack("HistoryLinkButton", "");
        }
        isAsyncPostBack = false;
    } else {
        // start page
        jQuery("#load").empty();
    }
}

As you can see, in case there is a hash in the URL we basically do an ASP.NET AJAX postback using the following statement:

__doPostBack("HistoryLinkButton", "");

So whenever the user clicks back or forward the postback happens using the event statement we provide, and the Previous event code is invoked in the code-behind.  We need code that uses the pageId present in the URL to change the page content. And there is an important thing to note – because the hash is worked out using the pageId’s, there is a need to recalculate the hash after every AJAX postback, so the following code is plugged in:

function ReWorkHash() {
    jQuery("a[@rel='history']").unbind("click");
    jQuery("a[@rel='history']").click(function() {
        var hash = jQuery(this).attr("page");
        hash = hash.replace(/^.*#/, '');
        jQuery.cookie("page", hash);
        isAsyncPostBack = true;
        // moves to a new page; pageload is called at once.
        jQuery.history.load(hash);
        return true;
    });
}

This code is executed from the code-behind using ScriptManager.RegisterClientScriptBlock as shown below:

ScriptManager.RegisterClientScriptBlock(this, typeof(_Default), "Recalculater", "ReWorkHash();", true);

A sample application is available for download at the following location – http://techconsulting.vpscustomer.com/Source/HistoryTest.zip And a working sample is available at – http://techconsulting.vpscustomer.com/Samples/Default.aspx
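As a footnote to the “disable the back button” discussion earlier in the article: the onbeforeunload workaround it mentions typically looks something like the minimal sketch below (the message text is illustrative):

window.onbeforeunload = function (e) {
    // Fires before any navigation away from the page, including
    // plain hyperlinks whose href is anything other than "#".
    var message = "You are leaving the website.";
    (e || window.event).returnValue = message; // IE
    return message;                            // other browsers
};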

    Read the article

  • Top things web developers should know about the Visual Studio 2013 release

    - by Jon Galloway
ASP.NET and Web Tools for Visual Studio 2013 Release Notes

Summary for lazy readers:

- Visual Studio 2013 is now available for download (on the Visual Studio site and on MSDN subscriber downloads)
- Visual Studio 2013 installs side by side with Visual Studio 2012 and supports round-tripping between Visual Studio versions, so you can try it out without committing to a switch
- Visual Studio 2013 ships with the new version of ASP.NET, which includes ASP.NET MVC 5, ASP.NET Web API 2, Razor 3, Entity Framework 6 and SignalR 2.0
- The new ASP.NET release focuses on One ASP.NET, so core features and web tools work the same across the platform (e.g. adding ASP.NET MVC controllers to a Web Forms application)
- New core features include new templates based on Bootstrap, a new scaffolding system, and a new identity system
- Visual Studio 2013 is an incredible editor for web files, including HTML, CSS, JavaScript, Markdown, LESS, CoffeeScript, Handlebars, Angular, Ember, Knockout, etc.

Top links:

- Visual Studio 2013 content on the ASP.NET site is in the standard new releases area: http://www.asp.net/vnext
- ASP.NET and Web Tools for Visual Studio 2013 Release Notes
- Short intro videos on the new Visual Studio web editor features from Scott Hanselman and Mads Kristensen
- Announcing release of ASP.NET and Web Tools for Visual Studio 2013 post on the official .NET Web Development and Tools Blog
- Scott Guthrie's post: Announcing the Release of Visual Studio 2013 and Great Improvements to ASP.NET and Entity Framework

Okay, for those of you who are still with me, let's dig in a bit.

Quick web dev notes on downloading and installing Visual Studio 2013
I found Visual Studio 2013 to be a pretty fast install. According to Brian Harry's release post, installing over pre-release versions of Visual Studio is supported. I've installed the release version over pre-release versions, and it worked fine. If you're only going to be doing web development, you can speed up the install if you just select Web Developer tools. Of course, as a good Microsoft employee, I'll mention that you might also want to install some of those other features, like the Store apps for Windows 8 and the Windows Phone 8.0 SDK, but they do download and install a lot of other stuff (e.g. the Windows Phone SDK sets up Hyper-V and downloads several GB's of VM's). So if you're planning just to do web development for now, you can pick just the Web Developer Tools and install the other stuff later. If you've got a fast internet connection, I recommend using the web installer instead of downloading the ISO. The ISO includes all the features, whereas the web installer just downloads what you're installing.

Visual Studio 2013 development settings and color theme
When you start up Visual Studio, it'll prompt you to pick some defaults. These are totally up to you - whatever suits your development style - and you can change them later. As I said, these are completely up to you. I recommend either the Web Development or Web Development (Code Only) settings. The only real difference is that Code Only hides the toolbars, and you can switch between them using Tools / Import and Export Settings / Reset. Web Development settings Web Development (code only) settings Usually I've just gone with Web Development (code only) in the past because I just want to focus on the code, although the Standard toolbar does make it easier to switch default web browsers. More on that later.

Color theme
Sigh.
Okay, everyone's got their favorite colors. I alternate between Light and Dark depending on my mood, and I personally like how the low contrast on the window chrome in those themes puts the emphasis on my code rather than the tabs and toolbars. I know some people got pretty worked up over that, though, and wanted the blue theme back. I personally don't like it - it reminds me of ancient versions of Visual Studio that I don't want to think about anymore. So here's the thing: if you install Visual Studio Ultimate, it defaults to Blue. The other versions default to Light. If you use Blue, I won't criticize you - out loud, that is. You can change themes really easily - either Tools / Options / Environment / General, or the smart way: ctrl+q for quick launch, then type Theme and hit enter.

Signing in
During the first run, you'll be prompted to sign in. You don't have to - you can click the "Not now, maybe later" link at the bottom of that dialog. I recommend signing in, though. It's not hooked in with licensing or tracking the kind of code you write to sell you components. It is doing good things, like syncing your Visual Studio settings between computers. More about that here. So, you don't have to, but I sure do.

Overview of shiny new things in ASP.NET land
There are a lot of good new things in ASP.NET. I'll list some of my favorites here, but you can read more on the ASP.NET site.

One ASP.NET
You've heard us talk about this for a while. The idea is that options are good, but choice can be a burden. When you start a new ASP.NET project, why should you have to make a tough decision - with long-term consequences - about how your application will work? If you want to use ASP.NET Web Forms, but have the option of adding in ASP.NET MVC later, why should that be hard? It's all ASP.NET, right? Ideally, you'd just decide that you want to use ASP.NET to build sites and services, and you could use the appropriate tools (the green blocks below) as you needed them. So, here it is. When you create a new ASP.NET application, you just create an ASP.NET application. Next, you can pick from some templates to get you started... but these are different. They're not "painful decision" templates, they're just some starting pieces. And, most importantly, you can mix and match. I can pick a "mostly" Web Forms template, but include MVC and Web API folders and core references. If you've tried to mix and match in the past, you're probably aware that it was possible, but not pleasant. ASP.NET MVC project files contained special project type GUIDs, so you'd only get controller scaffolding support in a Web Forms project if you manually edited the csproj file. Features in one stack didn't work in others. Project templates were painful choices. That's no longer the case. Hooray! I just did a demo in a presentation last week where I created a new Web Forms + MVC + Web API site, built a model, scaffolded MVC and Web API controllers with EF Code First, added data in the MVC view, viewed it in Web API, then added a GridView to the Web Forms Default.aspx page and bound it to the Model. In about 5 minutes. Sure, it's a simple example, but it's great to be able to share code and features across the whole ASP.NET family.

Authentication
In the past, authentication was built into the templates. So, for instance, there was an ASP.NET MVC 4 Intranet Project template which created a new ASP.NET MVC 4 application that was preconfigured for Windows Authentication.
All of that authentication stuff was built into each template, so they varied between the stacks, and you couldn't reuse them. You didn't see a lot of changes to the authentication options, since they required big changes to a bunch of project templates. Now, the new project dialog includes a common authentication experience. When you hit the Change Authentication button, you get some common options that work the same way regardless of the template or reference settings you've made. These options work on all ASP.NET frameworks, and all hosting environments (IIS, IIS Express, or OWIN for self-host). The default is Individual User Accounts: This is the standard "create a local account, using username / password or OAuth" thing; however, it's all built on the new Identity system. More on that in a second. The one setting that has some configuration to it is Organizational Accounts, which lets you configure authentication using Active Directory, Windows Azure Active Directory, or Office 365.

Identity
There's a new identity system. We've taken the best parts of the previous ASP.NET Membership and Simple Identity systems, rolled in a lot of feedback and made big enhancements to support important developer concerns like unit testing and extensibility. I've written long posts about ASP.NET identity, and I'll do it again. Soon. This is not that post. The short version is that I think we've finally got just the right Identity system. Some of my favorite features:

- There are simple, sensible defaults that work well - you can File / New / Run / Register / Login, and everything works.
- It supports standard username / password as well as external authentication (OAuth, etc.).
- It's easy to customize without having to re-implement an entire provider. It's built using pluggable pieces, rather than one large monolithic system.
- It's built using interfaces like IUser and IRole that allow for unit testing, dependency injection, etc.
- You can easily add user profile data (e.g. URL, Twitter handle, birthday). You just add properties to your ApplicationUser model and they'll automatically be persisted.
- Complete control over how the identity data is persisted. By default, everything works with Entity Framework Code First, but it's built to support changes from small (modify the schema) to big (use another ORM, store your data in a document database or in the cloud or in XML or in the EXIF data of your desktop background or whatever).
- It's configured via OWIN. More on OWIN and Katana later, but the fact that it's built using OWIN means it's portable.

You can find out more in the Authentication and Identity section of the ASP.NET site (and lots more content will be going up there soon).

New Bootstrap based project templates
The new project templates are built using Bootstrap 3. Bootstrap (formerly Twitter Bootstrap) is a front-end framework that brings a lot of nice benefits: It's responsive, so your projects will automatically scale to device width using CSS media queries. For example, menus are full size on a desktop browser, but on narrower screens you automatically get a mobile-friendly menu. The built-in Bootstrap styles make your standard page elements (headers, footers, buttons, form inputs, tables etc.) look nice and modern. Bootstrap is themeable, so you can reskin your whole site by dropping in a new Bootstrap theme. Since Bootstrap is pretty popular across the web development community, this gives you a large and rapidly growing variety of templates (free and paid) to choose from.
Bootstrap also includes a lot of very useful things: components (like progress bars and badges), useful glyphicons, and some jQuery plugins for tooltips, dropdowns, carousels, etc. Here's a look at how the responsive part works. When the page is full screen, the menu and header are optimized for a wide screen display: When I shrink the page down (this is all based on page width, not useragent sniffing) the menu turns into a nice mobile-friendly dropdown: For a quick example, I grabbed a new free theme off bootswatch.com. For simple themes, you just need to download the bootstrap.css file and replace the /content/bootstrap.css file in your project. Now when I refresh the page, I've got a new theme:

Scaffolding
The big change in scaffolding is that it's one system that works across ASP.NET. You can create a new Empty Web project or Web Forms project and you'll get the Scaffold context menus. For release, we've got MVC 5 and Web API 2 controllers. We had a preview of Web Forms scaffolding in the preview releases, but they weren't fully baked for RTM. Look for them in a future update, expected pretty soon. This scaffolding system wasn't just changed to work across the ASP.NET frameworks, it's also built to enable future extensibility. That's not in this release, but should also hopefully be out soon.

Project Readme page
This is a small thing, but I really like it. When you create a new project, you get a Project_Readme.html page that's added to the root of your project and opens in the Visual Studio built-in browser. I love it. A long time ago, when you created a new project we just dumped it on you and left you scratching your head about what to do next. Not ideal. Then we started adding a bunch of Getting Started information to the new project templates. That told you what to do next, but you had to delete all of that stuff out of your website. It doesn't belong there. Not ideal. This is a simple HTML file that's not integrated into your project code at all. You can delete it if you want. But, it shows a lot of helpful links that are current for the project you just created. In the future, if we add new wacky project types, they can create readme docs with specific information on how to do appropriately wacky things. Side note: I really like that they used the internal browser in Visual Studio to show this content rather than popping open an HTML page in the default browser. I hate that. It's annoying. If you're doing that, I hope you'll stop. What if some unnamed person has 40 or 90 tabs saved in their browser session? When you pop open your "Thanks for installing my Visual Studio extension!" page, all eleventy billion tabs start up and I wish I'd never installed your thing. Be like these guys and pop Visual Studio specific HTML docs open in the Visual Studio browser.

ASP.NET MVC 5
The biggest change with ASP.NET MVC 5 is that it's no longer a separate project type. It integrates well with the rest of ASP.NET. In addition to that and the other common features we've already looked at (Bootstrap templates, Identity, authentication), here's what's new for ASP.NET MVC.

Attribute routing
ASP.NET MVC now supports attribute routing, thanks to a contribution by Tim McCall, the author of http://attributerouting.net. With attribute routing you can specify your routes by annotating your actions and controllers. This supports some pretty complex, customized routing scenarios, and it allows you to keep your route information right with your controller actions if you'd like.
Here's a controller that includes an action whose method name is Hiding, but I've used AttributeRouting to configure it to /spaghetti/with-nesting/where-is-waldo

public class SampleController : Controller
{
    [Route("spaghetti/with-nesting/where-is-waldo")]
    public string Hiding()
    {
        return "You found me!";
    }
}

I enable that in my RouteConfig.cs, and I can use that in conjunction with my other MVC routes like this:

public class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
        routes.MapMvcAttributeRoutes();
        routes.MapRoute(
            name: "Default",
            url: "{controller}/{action}/{id}",
            defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional }
        );
    }
}

You can read more about Attribute Routing in ASP.NET MVC 5 here.

Filter enhancements
There are two new additions to filters: Authentication Filters and Filter Overrides. Authentication filters are a new kind of filter in ASP.NET MVC that run prior to authorization filters in the ASP.NET MVC pipeline and allow you to specify authentication logic per-action, per-controller, or globally for all controllers. Authentication filters process credentials in the request and provide a corresponding principal. Authentication filters can also add authentication challenges in response to unauthorized requests. Override filters let you change which filters apply to a given action method or controller. Override filters specify a set of filter types that should not be run for a given scope (action or controller). This allows you to configure filters that apply globally but then exclude certain global filters from applying to specific actions or controllers.

ASP.NET Web API 2
ASP.NET Web API 2 includes a lot of new features.

Attribute Routing
ASP.NET Web API supports the same attribute routing system that's in ASP.NET MVC 5. You can read more about the Attribute Routing features in Web API in this article.

OAuth 2.0
ASP.NET Web API picks up OAuth 2.0 support, using security middleware running on OWIN (discussed below). This is great for features like authenticated Single Page Applications.

OData Improvements
ASP.NET Web API now has full OData support. That required adding in some of the most powerful operators: $select, $expand, $batch and $value. You can read more about OData operator support in this article by Mike Wasson.

Lots more
There's a huge list of other features, including CORS (cross-origin request sharing), IHttpActionResult, IHttpRequestContext, and more. I think the best overview is in the release notes.

OWIN and Katana
I've written about OWIN and Katana recently. I'm a big fan. OWIN is the Open Web Interfaces for .NET. It's a spec, like HTML or HTTP, so you can't install OWIN. The benefit of OWIN is that it's a community specification, so anyone who implements it can plug into the ASP.NET stack, either as middleware or as a host. Katana is the Microsoft implementation of OWIN. It leverages OWIN to wire up things like authentication, handlers, modules, IIS hosting, etc., so ASP.NET can host OWIN components and Katana components can run in someone else's OWIN implementation. Howard Dierking just wrote a cool article in MSDN magazine describing Katana in depth: Getting Started with the Katana Project. He had an interesting example showing an OWIN based pipeline which leveraged SignalR, ASP.NET Web API and NancyFx components in the same stack. If this kind of thing makes sense to you, that's great. If it doesn't, don't worry, but keep an eye on it.
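To make the middleware idea a bit more concrete, here is a minimal Katana-style sketch (a sketch only, assuming the Owin / Microsoft.Owin packages are installed; the logging body is illustrative):

using System;
using System.Threading.Tasks;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Inline middleware: runs for every request, does its work,
        // then hands off to the next component in the OWIN pipeline.
        app.Use(async (context, next) =>
        {
            System.Diagnostics.Debug.WriteLine("Request: " + context.Request.Path);
            await next();
        });
    }
}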
You're going to see some cool things happen as a result of ASP.NET becoming more and more pluggable.

Visual Studio Web Tools
Okay, this stuff's just crazy. Visual Studio has been adding some nice web dev features over the past few years, but they've really cranked it up for this release. Visual Studio is by far my favorite code editor for all web files: CSS, HTML, JavaScript, and lots of popular libraries. Stop thinking of Visual Studio as a big editor that you only use to write back-end code. Stop editing HTML and CSS in Notepad (or Sublime, Notepad++, etc.). Visual Studio starts up in under 2 seconds on a modern computer with an SSD. Misspelling HTML attributes or your CSS classes or jQuery or Angular syntax is stupid. It doesn't make you a better developer, it makes you a silly person who wastes time.

Browser Link
Browser Link is a real-time, two-way connection between Visual Studio and all connected browsers. It's only attached when you're running locally, in debug, but it applies to any and all connected browsers, including emulators. You may have seen demos that showed the browsers refreshing based on changes in the editor, and I'll agree that's pretty cool. But it's really just the start. It's a two-way connection, and it's built for extensibility. That means you can write extensions that push information from your running application (in IE, Chrome, a mobile emulator, etc.) back to Visual Studio. Mads and team have showed off some demonstrations where they enabled edit mode in the browser which updated the source HTML back in Visual Studio. It's also possible to look at how the rendered HTML performs, check for compatibility issues, watch for unused CSS classes, the sky's the limit.

New HTML editor
The previous HTML editor had a lot of old code that didn't allow for improvements. The team rewrote the HTML editor to take advantage of the new(ish) extensibility features in Visual Studio, which then allowed them to add in all kinds of features - things like CSS Class and ID IntelliSense (so you type style="" and get a list of classes and ID's for your project), smart indent based on how your document is formatted, JavaScript reference auto-sync, etc. Here's a 3 minute tour from Mads Kristensen.

Lots more Visual Studio web dev features
That's just a sampling - there's a ton of great features for JavaScript editing, CSS editing, publishing, and Page Inspector (which shows real-time rendering of your page inside Visual Studio). Here are some more short videos showing those features.

Lots, lots more
Okay, that's just a summary, and it's still quite a bit. Head on over to http://asp.net/vnext for more information, and download Visual Studio 2013 now to get started!

    Read the article

  • What’s new in ASP.NET 4.0: Core Features

    - by Rick Strahl
Microsoft released the .NET Runtime 4.0 and with it comes a brand spanking new version of ASP.NET – version 4.0 – which provides an incremental set of improvements to an already powerful platform. .NET 4.0 is a full release of the .NET Framework, unlike version 3.5, which was merely a set of library updates on top of the .NET Framework version 2.0. Because of this full framework revision, there has been a welcome bit of consolidation of assemblies and configuration settings. The full runtime version change to 4.0 also means that you have to explicitly pick version 4.0 of the runtime when you create a new Application Pool in IIS, unlike .NET 3.5, which actually requires version 2.0 of the runtime. In this first of two parts I'll take a look at some of the changes in the core ASP.NET runtime. In the next edition I'll go over improvements in Web Forms and Visual Studio.

Core Engine Features
Most of the high profile improvements in ASP.NET have to do with Web Forms, but there are a few gems in the core runtime that should make life easier for ASP.NET developers. The following list describes some of the things I've found useful among the new features.

Clean web.config Files Are Back!
If you've been using ASP.NET 3.5, you probably have noticed that the web.config file has turned into quite a mess of configuration settings between all the custom handler and module mappings for the various web server versions. Part of the reason for this mess is that .NET 3.5 is a collection of add-on components running on top of the .NET Runtime 2.0, and so almost all of the new features of .NET 3.5 were essentially introduced as custom modules and handlers that had to be explicitly configured in the config file. Because the core runtime didn't rev with 3.5, all those configuration options couldn't be moved up to other configuration files in the system chain. With version 4.0 a consolidation was possible, and the result is a much simpler web.config file by default. A default empty ASP.NET 4.0 Web Forms project looks like this:

<?xml version="1.0"?>
<configuration>
  <system.web>
    <compilation debug="true" targetFramework="4.0" />
  </system.web>
</configuration>

Need I say more?

Configuration Transformation Files to Manage Configurations and Application Packaging
ASP.NET 4.0 introduces the ability to create multi-target configuration files. This means it's possible to create a single configuration file that can be transformed based on relatively simple replacement rules using a Visual Studio and WebDeploy provided XSLT syntax. The idea is that you can create a 'master' configuration file and then create customized versions of this master configuration file by applying some relatively simplistic search and replace, add or remove logic to specific elements and attributes in the original file.
To give you an idea, here's the example code that Visual Studio creates for a default web.Release.config file, which replaces a connection string, removes the debug attribute and replaces the CustomErrors section:

<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <add name="MyDB"
         connectionString="Data Source=ReleaseSQLServer;Initial Catalog=MyReleaseDB;Integrated Security=True"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)"/>
  </connectionStrings>
  <system.web>
    <compilation xdt:Transform="RemoveAttributes(debug)" />
    <customErrors defaultRedirect="GenericError.htm" mode="RemoteOnly" xdt:Transform="Replace">
      <error statusCode="500" redirect="InternalError.htm"/>
    </customErrors>
  </system.web>
</configuration>

You can see the XSL transform syntax that drives this functionality. Basically, only the elements listed in the override file are matched and updated – all the rest of the original web.config file stays intact. Visual Studio 2010 supports this functionality directly in the project system so it's easy to create and maintain these customized configurations in the project tree. Once you're ready to publish your application, you can then use the Publish <yourWebApplication> option on the Build menu which allows publishing to disk, via FTP or to a Web Server using Web Deploy. You can also create a deployment package as a .zip file which can be used by the WebDeploy tool to configure and install the application. You can manually run the Web Deploy tool or use the IIS Manager to install the package on the server or other machine. You can find out more about WebDeploy and Packaging here: http://tinyurl.com/2anxcje.

Improved Routing
Routing provides a relatively simple way to create clean URLs with ASP.NET by associating a template URL path and routing it to a specific ASP.NET HttpHandler. Microsoft first introduced routing with ASP.NET MVC and then they integrated routing with a basic implementation in the core ASP.NET engine via a separate ASP.NET routing assembly. In ASP.NET 4.0, the process of using routing functionality gets a bit easier. First, routing is now rolled directly into System.Web, so no extra assembly reference is required in your projects to use routing. The RouteCollection class now includes a MapPageRoute() method that makes it easy to route any ASP.NET Page requests without first having to implement an IRouteHandler implementation. It would have been nice if this could have been extended to serve *any* handler implementation, but unfortunately for anything but Page-derived handlers you will still have to implement a custom IRouteHandler implementation. ASP.NET Pages now include a RouteData collection that will contain route information. Retrieving route data is now a lot easier by simply using this.RouteData.Values["routeKey"] where the routeKey is the value specified in the route template (i.e., "users/{userId}" would use Values["userId"]).
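For illustration, a minimal sketch of registering such a route (the page and route names are made up for the example):

using System;
using System.Web.Routing;

public class Global : System.Web.HttpApplication
{
    void Application_Start(object sender, EventArgs e)
    {
        // Routes /users/{userId} to a physical Web Forms page - no
        // custom IRouteHandler needed for Page-derived handlers.
        RouteTable.Routes.MapPageRoute("users", "users/{userId}", "~/UserDetail.aspx");
    }
}

// In UserDetail.aspx.cs the route value is then available as:
//   string userId = (string)RouteData.Values["userId"];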
The Page class also has a GetRouteUrl() method that you can use to create URLs with route data values rather than hardcoding the URL:

<%= this.GetRouteUrl("users", new { userId="ricks" }) %>

You can also use the new Expression syntax using <%$RouteUrl %> to accomplish something similar, which can be easier to embed into Page or MVC View code:

<a runat="server" href='<%$RouteUrl:RouteName=user, id=ricks %>'>Visit User</a>

Finally, the Response object also includes a new RedirectToRoute() method to build a route URL for redirection without hardcoding the URL:

Response.RedirectToRoute("users", new { userId = "ricks" });

All of these routines are helpers that have been integrated into the core ASP.NET engine to make it easier to create routes and retrieve route data, which hopefully will result in more people taking advantage of routing in ASP.NET. To find out more about the routing improvements you can check out Dan Maharry's blog which has a couple of nice blog entries on this subject: http://tinyurl.com/37trutj and http://tinyurl.com/39tt5w5.

Session State Improvements
Session state is an often used and abused feature in ASP.NET, and version 4.0 introduces a few enhancements geared towards making session state more efficient and minimizing at least some of the ill effects of overuse. The first improvement affects out of process session state, which is typically used in web farm environments or for sites that store application sensitive data that must survive AppDomain restarts (which in my opinion is just about any application). When using OutOfProc session state, ASP.NET serializes all the data in the session statebag into a blob that gets carried over the network and stored either in the State server or SQL Server via the Session provider. Version 4.0 provides some improvement in this serialization of the session data by offering a compressionEnabled option on the web.config <sessionState> section, which forces the serialized session state to be compressed. Depending on the type of data that is being serialized, this compression can reduce the size of the data travelling over the wire by as much as a third. It works best on string data, but can also reduce the size of binary data. In addition, ASP.NET 4.0 now offers a way to programmatically turn session state on or off as part of the request processing queue. In prior versions, the only way to specify whether session state is available was by implementing a marker interface on the HTTP handler implementation. In ASP.NET 4.0, you can now turn session state on and off programmatically via HttpContext.Current.SetSessionStateBehavior() as part of the ASP.NET module pipeline processing, as long as it occurs before the AcquireRequestState pipeline event.

Output Cache Provider
Output caching in ASP.NET has been a very useful but potentially memory intensive feature. The default OutputCache mechanism works through in-memory storage that persists generated output based on various lifetime related parameters. While this works well enough for many intended scenarios, it also can quickly cause runaway memory consumption as the cache fills up and serves many variations of pages on your site. ASP.NET 4.0 introduces a provider model for the OutputCache module so it becomes possible to plug in custom storage strategies for cached pages.
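To make the shape of such a provider concrete, here is a minimal in-memory sketch (the class name and storage choice are illustrative assumptions, not a production implementation; a real provider would typically store to disk, a database or a distributed cache, and would then be registered under <caching><outputCache> in web.config):

using System;
using System.Collections.Concurrent;
using System.Web.Caching;

// Minimal illustrative OutputCacheProvider: stores entries in-process.
public class SimpleMemoryOutputCacheProvider : OutputCacheProvider
{
    private readonly ConcurrentDictionary<string, Tuple<object, DateTime>> _cache =
        new ConcurrentDictionary<string, Tuple<object, DateTime>>();

    public override object Get(string key)
    {
        Tuple<object, DateTime> entry;
        if (_cache.TryGetValue(key, out entry) && entry.Item2 > DateTime.UtcNow)
            return entry.Item1;
        Remove(key); // expired or missing
        return null;
    }

    public override object Add(string key, object entry, DateTime utcExpiry)
    {
        // Per the base class contract, Add returns any existing entry for the key
        var existing = Get(key);
        if (existing != null)
            return existing;
        Set(key, entry, utcExpiry);
        return entry;
    }

    public override void Set(string key, object entry, DateTime utcExpiry)
    {
        _cache[key] = Tuple.Create(entry, utcExpiry);
    }

    public override void Remove(string key)
    {
        Tuple<object, DateTime> ignored;
        _cache.TryRemove(key, out ignored);
    }
}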
One of the goals also appears to be to consolidate some of the different cache storage mechanisms used in .NET in general to a generic Windows AppFabric framework in the future, so various different mechanisms like OutputCache, the non-Page specific ASP.NET cache and possibly even session state eventually can use the same caching engine for storage of persisted data both in memory and in out of process scenarios. For developers, the OutputCache provider feature means that you can now extend caching on your own by implementing a custom Cache provider based on the System.Web.Caching.OutputCacheProvider class. You can find more info on creating an Output Cache provider in Gunnar Peipman's blog at: http://tinyurl.com/2vt6g7l.

Response.RedirectPermanent
ASP.NET 4.0 includes features to issue a permanent redirect that issues an HTTP 301 Moved Permanently response rather than the standard 302 Redirect response. In pre-4.0 versions you had to manually create your permanent redirect by setting the Status and StatusCode properties – Response.RedirectPermanent() makes this operation more obvious and discoverable. There's also a Response.RedirectToRoutePermanent() which provides permanent redirection of route Urls.

Preloading of Applications
ASP.NET 4.0 provides a new feature to preload ASP.NET applications on startup, which is meant to provide a more consistent startup experience. If your application has a lengthy startup cycle it can appear very slow to serve data to clients while the application is warming up and loading initial resources. So rather than serve these startup requests slowly, in ASP.NET 4.0 you can force the application to initialize itself first before even accepting requests for processing. This feature works only on IIS 7.5 (Windows 7 and Windows Server 2008 R2) and works in combination with IIS. You can set up a worker process in IIS 7.5 to always be running, which starts the Application Pool worker process immediately. ASP.NET 4.0 then allows you to specify site-specific settings by setting the serviceAutoStartEnabled attribute on a particular site along with an optional serviceAutoStartProvider class that can be used to receive "startup events" when the application starts up. This event in turn can be used to configure the application and optionally pre-load cache data and other information required by the app on startup.  The configuration settings need to be made in applicationhost.config:

<sites>
  <site name="WebApplication2" id="1">
    <application path="/"
                 serviceAutoStartEnabled="true"
                 serviceAutoStartProvider="PreWarmup" />
  </site>
</sites>
<serviceAutoStartProviders>
  <add name="PreWarmup" type="PreWarmupProvider,MyAssembly" />
</serviceAutoStartProviders>

Hooking up a warm up provider is optional so you can omit the provider definition and reference. If you do define it, here's what it looks like:

public class PreWarmupProvider : System.Web.Hosting.IProcessHostPreloadClient
{
    public void Preload(string[] parameters)
    {
        // initialization for app
    }
}

While this code is running, ASP.NET/IIS will hold requests from hitting the pipeline. So until this code completes the application will not start taking requests. The idea is that you can perform any pre-loading of resources and cache values so that the first request will be ready to perform at an optimal performance level without lag.
Runtime Performance Improvements
According to Microsoft, there have also been a number of invisible performance improvements in the internals of the ASP.NET runtime that should make ASP.NET 4.0 applications run more efficiently and use fewer resources. These features come without any change requirements in applications and are virtually transparent, except that you get the benefits by updating to ASP.NET 4.0.

Summary
The core feature set changes are minimal, which continues a tradition of small incremental changes to the ASP.NET runtime. ASP.NET has been proven as a solid platform and I'm actually rather happy to see that most of the effort in this release went into stability, performance and usability improvements rather than a massive amount of new features. The new functionality added in 4.0 is minimal but very useful. A lot of people are still running pure .NET 2.0 applications these days and have stayed off of .NET 3.5 for some time now. I think that version 4.0, with its full .NET runtime rev and assembly and configuration consolidation, will make an attractive platform for developers to update to. If you're a Web Forms developer in particular, ASP.NET 4.0 includes a host of new features in the Web Forms engine that are significant enough to warrant a quick move to .NET 4.0. I'll cover those changes in my next column. Until then, I suggest you give ASP.NET 4.0 a spin and see for yourself how the new features can help you out.

© Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET

    Read the article

  • Guide to reduce TFS database growth using the Test Attachment Cleaner

    - by terje
Recently there have been several reports of TFS databases growing too fast and too big.  Notably, this has been observed when one starts to use more features of the testing system.  Also, TFS 2010 handles test results differently from TFS 2008, and this leads to more data stored in the TFS databases. As a consequence of this, some tools have been released to remove unneeded data in the database, along with some fixes to correct bugs which were found and corrected during this process.  Further, some preventive practices and maintenance rules should be adopted. A lot of people have blogged about this, among these are:

- Anu’s very important blog post here describes both the problem and solutions to handle it.  She describes both the Test Attachment Cleaner tool, and also some QFE/CU releases to fix some underlying bugs which prevented the tool from being fully effective.
- Brian Harry’s blog post here describes the problem too.
- This forum thread describes the problem with some solution hints.
- Ravi Shanker’s blog post here describes best practices on solving this (TBP).
- Grant Holliday’s blog post here describes strategies to use the Test Attachment Cleaner both to detect space problems and how to rectify them.

The problem can be divided into the following areas:

- Publishing of test results from builds
- Publishing of manual test results and their attachments in particular
- Publishing of deployment binaries for use during a test run
- Bugs in SQL server preventing total cleanup of data

(All the published data above is published into the TFS database as attachments.) The test results will include all data being collected during the run.  Some of this data can grow rather large, like IntelliTrace logs and video recordings.  Also the pushing of binaries, which happens for automated test runs – including tests run during a build using code coverage, which will include all the files in the deployment folder – contributes a lot to the size of the attached data.  In order to handle this systematically, I have set up a 3-stage process:

1. Find out if you have a database space issue
2. Set up your TFS server to minimize potential database issues
3. If you have the “problem”, clean up the database and otherwise keep it clean

Analyze the data
Are your database(s) growing?  Are unused test results growing out of proportion?  To find out about this you need to query your TFS database for some of the information, and use the Test Attachment Cleaner (TAC) to obtain some more detailed information. If you don’t have too many databases you can use the SQL Server reports from within the Management Studio to analyze the database and table sizes. Or, you can use a set of queries.  I find queries often faster to use because I can tweak them the way I want them.  But be aware that these queries are non-documented and non-supported and may change when the product team wants to change them. If you have multiple Project Collections, find out which might have problems: (Disclaimer: The queries below work on TFS 2010. They will not work on Dev-11, since the table structure has been changed.  I will try to update them for Dev-11 when it is released.) Open a SQL Management Studio session onto the SQL Server where you have your TFS databases. Use the query below to find the Project Collection databases and their sizes, in descending size order.
use master
select DB_NAME(database_id) AS DBName, (size/128) SizeInMB
FROM sys.master_files
where type=0
  and substring(db_name(database_id),1,4)='Tfs_'
  and DB_NAME(database_id)<>'Tfs_Configuration'
order by size desc

Doing this on one of our SQL servers gives the following results: It is pretty easy to see on which collection to start the work.

Find out which tables are possibly too large
Keep a special watch out for the Tfs_Attachment table. Use the script at the bottom of Grant’s blog to find the table sizes in descending size order. In our case we got this result: From Grant’s blog we learnt that the tbl_Content is in the Version Control category, so the only major issue we have here is the tbl_AttachmentContent.

Find out which team projects have possibly too large attachments
In order to use the TAC to find and eventually delete attachment data, we need to find out which team projects have these attachments. The team project is a required parameter to the TAC. Use the following query to find this, replacing the collection database name with whatever applies in your case:

use Tfs_DefaultCollection
select p.projectname, sum(a.compressedlength)/1024/1024 as sizeInMB
from dbo.tbl_Attachment as a
inner join tbl_testrun as tr on a.testrunid=tr.testrunid
inner join tbl_project as p on p.projectid=tr.projectid
group by p.projectname
order by sum(a.compressedlength) desc

In our case we got this result (had to remove some names), out of more than 100 team projects accumulated over quite some years: As can be seen here, it is pretty obvious that the “Byggtjeneste – Projects” team project is the main one to take care of, with the ones on lines 2-4 as the next ones.

Check which attachment types take up the most space
It can be nice to know which attachment types take up the space, so run the following query:

use Tfs_DefaultCollection
select a.attachmenttype, sum(a.compressedlength)/1024/1024 as sizeInMB
from dbo.tbl_Attachment as a
inner join tbl_testrun as tr on a.testrunid=tr.testrunid
inner join tbl_project as p on p.projectid=tr.projectid
group by a.attachmenttype
order by sum(a.compressedlength) desc

We then got this result: From this it is pretty obvious that the problem here is the binary files, as also mentioned in Anu’s blog.

Check which file types, by their extension, take up the most space
Run the following query:

use Tfs_DefaultCollection
select SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999) as Extension,
       sum(compressedlength)/1024 as SizeInKB
from tbl_Attachment
group by SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999)
order by sum(compressedlength) desc

This gives a result like this: Now you should have collected enough information to tell you what to do – whether you need to do something at all – and some of the information you need in order to set up your TAC settings file, both for a cleanup and for scheduled maintenance later.

Get your TFS server and environment properly set up
Whether or not you already have the problem, you should ensure the TFS server is set up so that the risk of getting into this problem is minimized.  To ensure this you should install the following set of updates and components. The assumption is that your TFS Server is at SP1 level. Install the QFE for KB2608743 – which also contains detailed instructions on its use, download from here. The QFE changes the default settings to not upload deployed binaries, which are used in automated test runs.
Binaries will still be uploaded if:

- Code coverage is enabled in the test settings.
- You change the UploadDeploymentItem to true in the testsettings file.  Be aware that this might be reset back to false by another user who hasn’t installed this QFE.

The hotfix should be installed to:

- The build servers (the build agents)
- The machine hosting the Test Controller
- Local development computers (Visual Studio)
- Local test computers (MTM)

It is not required to install it to the TFS Server, test agents or the build controller – it has no effect on these programs. If you use SQL Server 2008 R2 you should also install CU 10 (or later).  This CU fixes a potential problem of hanging “ghost” files.  This seems to happen only in certain trigger situations, but to ensure it doesn’t bite you, it is better to make sure this CU is installed. There is no such CU for SQL Server 2008 pre-R2. Workaround: If you suspect hanging ghost files, they can – with some mental effort – be deduced from the ghost counters using the following SQL query:

use master
SELECT DB_NAME(database_id) as 'database', OBJECT_NAME(object_id) as 'objectname',
       index_type_desc, ghost_record_count, version_ghost_record_count,
       record_count, avg_record_size_in_bytes
FROM sys.dm_db_index_physical_stats (DB_ID(N'<DatabaseName>'), OBJECT_ID(N'<TableName>'), NULL, NULL, 'DETAILED')

The problem is a stalled ghost cleanup process.  The fix is to stop all components that depend on the SQL server, like the TFS Server and SPS services – that is, all applications that connect to the SQL server – then restart the SQL server, and finally start up all dependent processes again.  (I would guess a complete server reboot would do the trick too.) After this the ghost cleanup process will run properly again. The fix will come in the next CU cycle for SQL Server R2 SP1.  The R2 pre-SP1 and R2 SP1 have separate maintenance cycles, and are maintained individually. Each has its own set of CUs. When it comes I will add the link here to that CU. The “hanging ghost file” issue came up after one had run the TAC and deleted enormous amounts of data.  The SQL Server can get into this hanging state (without the QFE) in certain cases due to this. And of course, install and set up the Test Attachment Cleaner command line power tool.  This should be done following some guidelines from Ravi Shanker: “When you run TAC, ensure that you are deleting small chunks of data at regular intervals (say run TAC every night at 3AM to delete data that is between age 730 to 731 days) – this will ensure that small amounts of data are being deleted and SQL ghosted record cleanup can catch up with the number of deletes performed.” This rule minimizes the risk of the ghosted hang problem occurring, and further makes it easier for the SQL server ghosting process to work smoothly. “Run DBCC SHRINKDB post the ghosted records are cleaned up to physically reclaim the space on the file system.” This is the last step in a 3-step process of removing SQL server data: first the records are logically deleted, then they are cleaned out by the ghosting process, and finally the space is reclaimed using the shrink command.

Cleaning out the attachments
The TAC is run from the command line using a set of parameters and controlled by a settings file.  The parameters point out a server URI including the team project collection, and also point at a specific team project. So in order to run this for multiple team projects regularly, one has to set up a script to run the TAC multiple times, once for each team project; a sketch of such a script is shown below.
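As a rough sketch of such a script, here is a plain batch file that loops over team projects using the tcmpt command-line syntax shown later in this article (the collection URL, project names and settings-file path are placeholders to replace with your own):

@echo off
rem Run the Test Attachment Cleaner once per team project (names are examples)
for %%P in ("Byggtjeneste - Projects" "Some Other Project") do (
    tcmpt attachmentcleanup /collection:http://yourtfs:8080/tfs/DefaultCollection /teamproject:"%%~P" /settingsfile:C:\TAC\nightly.xml /outputfile:"%TEMP%\%%~P.tcmpt.log" /mode:delete
)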
When you install the TAC there is a very useful readme file in the same directory. When the deployment binaries are published to the TFS server, ALL items are published up from the deployment folder. That often means many more files than you would assume are necessary. This is a brute force technique. It works, but you need to take care when cleaning up. Grant has shown how their settings file looks in his blog post, removing all attachments older than 180 days, as long as there are no active workitems connected to them. This setting can be useful to clean out all items, both in a one-time clean-up operation and in a general maintenance schedule. There are two scenarios we need to consider:

- Cleaning up an existing overgrown database
- Maintaining a server to avoid an overgrown database, using scheduled TAC runs

1. Cleaning up a database which has grown too big due to these attachments
This job is a “once” job.  We do this once and then move on to make sure it won’t happen again, by taking the actions in 2) below.  In this scenario you should only consider the large files. Your goal should be to simply reduce the size, and not bother about the smaller stuff. That can be left to a scheduled TAC cleanup (2 below). Here you can use a very general settings file and just remove the large attachments, or you can choose to remove any old items.  Grant’s settings file is an example of the latter.  A settings file to remove only large attachments could look like this:

<!-- Scenario : Remove large files -->
<DeletionCriteria>
  <TestRun />
  <Attachment>
    <SizeInMB GreaterThan="10" />
  </Attachment>
</DeletionCriteria>

Or like this: if you want to remove only dll’s and pdb’s above that size, add an Extensions section.  Without that section, all extensions will be deleted.

<!-- Scenario : Remove large files of type dll's and pdb's -->
<DeletionCriteria>
  <TestRun />
  <Attachment>
    <SizeInMB GreaterThan="10" />
    <Extensions>
      <Include value="dll" />
      <Include value="pdb" />
    </Extensions>
  </Attachment>
</DeletionCriteria>

Before you start up your scheduled maintenance, you should clear out all older items.

2. Scheduled maintenance using the TAC
Run a schedule every night that removes old items in small batches.  It is important to run this often, like every night, in order to keep the number of deleted items low; that way the SQL ghost process works better. One approach could be to delete all items older than some number of days, let’s say 180 days. This could be combined with restricting it to keep attachments with active or resolved bugs.  Doing this every night ensures that only small amounts of data are deleted.

<!-- Scenario : Remove old items except if they have active or resolved bugs -->
<DeletionCriteria>
  <TestRun>
    <AgeInDays OlderThan="180" />
  </TestRun>
  <Attachment />
  <LinkedBugs>
    <Exclude state="Active" />
    <Exclude state="Resolved" />
  </LinkedBugs>
</DeletionCriteria>

In my experience there are projects which are left with active or resolved workitems, although no further work is done.  It can be wise to have a cleanup process with no restrictions on linked bugs at all. Note that you then have to remove the whole LinkedBugs section. An approach which could work better here is a two-step approach: use the schedule above with no LinkedBugs as a sweeper task, taking away all data older than you could care about, and then have another scheduled TAC task to more specifically take out attachments that you are not likely to use.
This task could be much more specific, and based on your analysis clean out what you know is troublesome data.

<!-- Scenario : Remove specific files early -->
<DeletionCriteria>
  <TestRun>
    <AgeInDays OlderThan="30" />
  </TestRun>
  <Attachment>
    <SizeInMB GreaterThan="10" />
    <Extensions>
      <Include value="iTrace" />
      <Include value="dll" />
      <Include value="pdb" />
      <Include value="wmv" />
    </Extensions>
  </Attachment>
  <LinkedBugs>
    <Exclude state="Active" />
    <Exclude state="Resolved" />
  </LinkedBugs>
</DeletionCriteria>

The readme document for the TAC says that it recognizes “internal” extensions, but it does in fact recognize any extension. To run the tool, use the following command:

tcmpt attachmentcleanup /collection:your_tfs_collection_url /teamproject:your_team_project /settingsfile:path_to_settingsfile /outputfile:%temp%/teamproject.tcmpt.log /mode:delete

Shrinking the database
You could run a shrink database command after the TAC has run in cases where a lot of data has been deleted.  In this case you SHOULD do it, to free up all that space.  But after the shrink operation you should rebuild the indexes, since the shrink operation will leave the database in a very fragmented state, which will reduce performance. Note that you need to rebuild the indexes; reorganizing is not enough. For smaller amounts of data you should NOT shrink the database, since the data will be reused by the SQL server when it needs to add more records.  In fact, it is regarded as a bad practice to shrink the database regularly.  So on a daily maintenance schedule you should NOT shrink the database. To shrink the database you do a DBCC SHRINKDATABASE command, and then follow up with an index rebuild afterwards.  I find the easiest way to do this is to create a SQL Maintenance plan including the Shrink Database Task and the Rebuild Index Task, and just execute it when you need to do this.
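For those who prefer a script over a maintenance plan, here is a minimal T-SQL sketch of that shrink-then-rebuild sequence (the collection database name is a placeholder, and sp_MSforeachtable is an undocumented SQL Server helper; a maintenance plan or explicit ALTER INDEX statements per table are the supported alternatives):

use Tfs_DefaultCollection;  -- replace with your collection database
-- Step 1: reclaim the space freed by the TAC and the ghost cleanup
DBCC SHRINKDATABASE (Tfs_DefaultCollection);
-- Step 2: rebuild all indexes; shrinking leaves them heavily fragmented
EXEC sp_MSforeachtable 'ALTER INDEX ALL ON ? REBUILD';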

    Read the article

< Previous Page | 430 431 432 433 434 435 436 437 438 439 440 441  | Next Page >