Search Results

Search found 3459 results on 139 pages for 'if modified since'.


  • XmlPullParserException in kSOAP2

    - by aka47
    iam using KSOAP2 for web services. my client is BlackBerry Phone and Server is KeyRingLabs.com. i am using php page for connection...i have taken this code form a Forum.and modified it according to my requirements...but I am having XMLPULLPARSER EXCEPTION...can any body help??? here is my code.... import net.rim.device.api.ui.; import net.rim.device.api.ui.component.; import net.rim.device.api.ui.container.; import net.rim.device.api.system.; import java.util.; import org.ksoap2.; import org.ksoap2.serialization.; import org.ksoap2.transport.; import java.io.IOException; import org.ksoap2.SoapEnvelope; import org.ksoap2.SoapFault; import org.ksoap2.serialization.SoapObject; import org.ksoap2.serialization.SoapSerializationEnvelope; import org.ksoap2.transport.HttpTransport; import org.xmlpull.v1.XmlPullParserException; final class StockQuoteDemo extends UiApplication { public static void main (String[] args) { StockQuoteDemo theApp = new StockQuoteDemo (); theApp.enterEventDispatcher (); } public StockQuoteDemo () { pushScreen (new StockQuoteScreen ()); //doSOAP(); } final class StockQuoteScreen extends MainScreen { public static final String action = "http://keyringlabs.com/Login"; public static final String namespaceRoot = "bbpointofsale.com"; //public static final String webroot = "http://192.168.1.2/bbpointofsale.com/"; public static final String webroot = "http://192.168.0.35/"; //public static final String webroot = "http://www.bbpointofsale.com"; public String errorMessage; public String key; public String transactionID; private HttpTransport transport; private SoapSerializationEnvelope envelope; public StockQuoteScreen () { //transport = new HttpTransport(webroot + "bb/service/index.php"); transport = new HttpTransport(webroot+"Disk/rashid11/index4.php"); transport.debug = true; envelope = new SoapSerializationEnvelope(SoapEnvelope.VER12); key = null; envelope.encodingStyle = SoapSerializationEnvelope.XSD1999; ProcessLogin("[email protected]","123456"); //Dialog.alert("GEN 1"); //Dialog.alert("Warr Gai Vai!!!"); } public boolean onClose () { Dialog.alert ("Goodbye!"); System.exit (0); return true; } public boolean ProcessLogin(String email, String password) { System.err.println("Starting The Process"); errorMessage = ""; String namespace = "urn:" + namespaceRoot + ":login"; //System.err.println("LINK:"+namespace); // SoapObject message = new SoapObject(namespace, "login"); SoapObject message = new SoapObject(namespaceRoot, "login"); message.addProperty("email", email); message.addProperty("password", password); envelope.bodyOut = message; // System.err.println("KSOAP:"+ envelope.toString()); //String soapAction = namespace + "#login"; String soapAction = "http://bbpointofsale.com/login"; // System.err.println("Action : "+soapAction); try { //transport.setXmlVersionTag(""); transport.call(soapAction, envelope); } catch (IOException e) { e.printStackTrace(); System.out.println("error: "+e.getMessage()); errorMessage = e.getMessage(); System.out.println("response1: "+transport.responseDump); return false; } catch (XmlPullParserException e) { e.printStackTrace(); errorMessage = e.getMessage(); System.out.println("request2: "+transport.requestDump); System.out.println("response2: "+transport.responseDump); return false; } try { SoapObject result = (SoapObject) ((SoapObject)envelope.getResponse()).getProperty(0); key = hackToGetResponse("serviceToken", result.toString()); if (key.length() > 0) { System.out.println("KEY:" + key); return true; } else { } } catch (SoapFault e) { 
errorMessage = e.getMessage(); System.out.println("response3: "+transport.responseDump); return false; } catch (Exception e) { errorMessage = e.getMessage(); System.err.println("response4: "+transport.responseDump); return false; } return false; } public String hackToGetResponse(String key, String response) { System.out.println("hackToGetResponse:" + response); String start = "anyType{key=" + key + "; value="; String end = "; }"; if (response.indexOf(start) == -1 || response.indexOf(end) == -1) return ""; System.out.println("hackToGetResponse:" + "response.substring(0, " + response.indexOf(start) + ").substring(0, " + response.indexOf(end) + ");"); response = response.substring(response.indexOf(start) + start.length()); response = response.substring(0, response.indexOf(end)); if (response.indexOf("anyType{}") != -1) return ""; return response; } } } //******************PHP FILE************************ $server = new SoapServer(null, array('uri' = "urn:keyringlabs.com")); //$server = new SoapServer(null, array('uri' = "urn: bbpointofsale.com")); $server-addFunction("login"); //$email='[email protected]'; //$pass='123456'; function login($email, $pass) { if (strlen($email) == 0) { return Array('serviceToken' => ''); } elseif (strlen($pass) == 0) { return Array('serviceToken' => ''); } else { $objMerchant = Merchant::LoadByEmailPassword($email, $pass); if ($objMerchant == null || $objMerchant->Id &lt==1) { return Array('serviceToken' => ''); } else { $key = uniqid(); $objSess = new Merchantsessions(); $objSess->MerchantID = $objMerchant->Id; $objSess->ServiceToken = $key; $objSess->Save(); } } $result = Array('serviceToken' => $key); //print $result; return $result; } ? ///**************************************** is there any need of an XML page or something..to run it perfectly...please help thank you for your time!
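    For reference, a minimal kSOAP2 call that sidesteps the most common causes of an XmlPullParserException (a SOAP-version mismatch with PHP's SoapServer, which defaults to SOAP 1.1 rather than the VER12 used above, and a namespace/soapAction that doesn't match what the server expects) could look like the sketch below. This is only an illustration, not a verified fix: the endpoint URL and namespace are copied from the question and may need adjusting, and when the parser does fail, transport.responseDump usually reveals the real cause, because the exception typically means the server returned something other than well-formed SOAP (for example a PHP warning or an HTML error page).

      import org.ksoap2.SoapEnvelope;
      import org.ksoap2.serialization.SoapObject;
      import org.ksoap2.serialization.SoapSerializationEnvelope;
      import org.ksoap2.transport.HttpTransport;

      public final class LoginCall {
          // Endpoint and namespace are assumptions taken from the question;
          // they must match what the PHP SoapServer is actually configured with.
          private static final String ENDPOINT  = "http://192.168.0.35/Disk/rashid11/index4.php";
          private static final String NAMESPACE = "urn:keyringlabs.com";
          private static final String METHOD    = "login";

          public static String login(String email, String password) throws Exception {
              SoapObject request = new SoapObject(NAMESPACE, METHOD);
              request.addProperty("email", email);
              request.addProperty("password", password);

              // PHP's SoapServer speaks SOAP 1.1 by default, so VER11 is the
              // safer choice unless the server is explicitly set up for 1.2.
              SoapSerializationEnvelope envelope =
                      new SoapSerializationEnvelope(SoapEnvelope.VER11);
              envelope.setOutputSoapObject(request);

              HttpTransport transport = new HttpTransport(ENDPOINT);
              transport.debug = true; // fills requestDump/responseDump for inspection
              try {
                  transport.call(NAMESPACE + "#" + METHOD, envelope);
                  return envelope.getResponse().toString();
              } catch (org.xmlpull.v1.XmlPullParserException e) {
                  // The response was not well-formed SOAP; dump it to see why.
                  System.out.println("raw response: " + transport.responseDump);
                  throw e;
              }
          }
      }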

    Read the article

  • Anyone have ideas for solving the "n items remaining" problem on Internet Explorer?

    - by CMPalmer
    In my ASP.Net app, which is javascript and jQuery heavy, but also uses master pages and .Net Ajax pieces, I am consistently seeing on the status bar of IE 6 (and occasionally IE 7) the message "2 items remaining" or "15 items remaining" followed by "loading somegraphicsfile.png|gif ." This message never goes away and may or may not prevent some page functionality from running (it certainly seems to bog down, but I'm not positive). I can cause this to happen 99% of the time by just refreshing an .aspx age, but the number of items and, sometimes, the file it mentions varies. Usually it is 2, 3, 12, 13, or 15. I've Googled for answers and there are several suggestions or explanations. Some of them haven't worked for us, and others aren't practical for us to implement or try. Here are some of the ideas/theories: IE isn't caching images right, so it repeatedly asks for the same image if the image is repeated on the page and the server assumes that it should be cached locally since it's already served it in that page context. IE displays the images correctly, but sits and waits for a server response that never comes. Typically the file it says it is waiting on is repeated on the page. The page is using PNG graphics with transparency. Indeed it is, but they are jQuery-UI Themeroller generated graphics which, according to the jQuery-UI folks, are IE safe. The jQuery-UI components are the only things using PNGs. All of our PNG references are in CSS, if that helps. I've changed some of the graphics from PNG to GIF, but it is just as likely to say it's waiting for somegraphicsfile.png as it is for somegraphicsfile.gif Images are being specified in CSS and/or JavaScript but are on things that aren't currently being displayed (display: none items for example). This may be true, but if it is, then I would think preloading images would work, but so far, adding a preloader doesn't do any good. IIS's caching policy is confusing the browser. If this is true, it is only Microsoft server SW having problems with Microsoft's browser (which doesn't surprise me at all). Unfortunately, I don't have much control over the IIS configuration that will be hosting the app. Has anyone seen this and found a way to combat it? Particularly on ASP.Net apps with jQuery and jQuery-UI? UPDATE One other data point: on at least one of the pages, just commenting out the jQuery-UI Datepicker component setup causes the problem to go away, but I don't think (or at least I'm not sure) if that fixes all of the pages. If it does "fix" them, I'll have to swap out plug-ins because that functionality needs to be there. There doesn't seem to be any open issues against jQuery-UI on IE6/7 currently... UPDATE 2 I checked the IIS settings and "enable content expiration" was not set on any of my folders. Unchecking that setting was a common suggestion for fixing this problem. I have another, simpler, page that I can consistently create the error on. I'm using the jQuery-UI 1.6rc6 file (although I've also tried jQuery-UI 1.7.1 with the same results). The problem only occurs when I refresh the page that contains the jQuery-UI Datepicker. If I comment out the Datepicker setup, the problem goes away. Here are a few things I notice when I do this: This page always says "(1 item remaining) Downloading picture http:///images/Calendar_scheduleHS.gif", but only when reloading. When I look at HTTP logging, I see that it requests that image from the server every time it is dynamically turned on, without regard to caching. 
All of the requests for that graphic are complete and return the graphic correctly. None are marked code 200 or 304 (indicating that the server is telling IE to use the cached version). Why it says waiting on that graphic when all of the requests have completed I have no idea. There is a single other graphic on the page (one of the UI PNG files) that has a code 304 (Not Modified). On another page where I managed to log HTTP traffic with "2 items remaining", two different graphic files (both UI PNGs) had a 304 as well (but neither was the one listed as "Downloading". This error is not innocuous - the page is not fully responsive. For example, if I click on one of the buttons which should execute a client-side action, the page refreshes. Going away from the page and coming back does not produce the error. I have moved the script and script references to the bottom of the content and this doesn't affect this problem. The script is still running in the $(document).ready() though (it's too hairy to divide out unless I absolutely have to). FINAL UPDATE AND ANSWER There were a lot of good answers and suggestions below, but none of them were exactly our problem. The closest one (and the one that led me to the solution) was the one about long running JavaScript, so I awarded the bounty there (I guess I could have answered it myself, but I'd rather reward info that leads to solutions). Here was our solution: We had multiple jQueryUI datepickers that were created on the $(document).ready event in script included from the ASP.Net master page. On this client page, a local script's $(document).ready event had script that destroyed the datepickers under certain conditions. We had to use "destroy" because the previous version of datepicker had a problem with "disable". When we upgraded to the latest version of jQuery UI (1.7.1) and replaced the "destroy"s with "disable"s for the datepickers, the problem went away (or mostly went away - if you do things too fast while the page is loading, it is still possible to get the "n items remaining" status). My theory as to what was happening goes like this: The page content loads and has 12 or so text boxes with the datepicker class. The master page script creates datepickers on those text boxes. IE queues up requests for each calendar graphic independently because IE doesn't know how to properly cache dynamic image requests. Before the requests get processed, the client area script destroys those datepickers so the graphics are no longer needed. IE is left with some number of orphaned requests that it doesn't know what to do with.

    Read the article

  • Compiling OpenCV in Android NDK

    - by evident
    PLEASE SEE THE ADDITIONS AT THE BOTTOM! The first problem is solved in Linux, not under Windows and Cygwin yet, but there is a new problem. Please see below! I am currently trying to compile OpenCV for Android NDK so that I can use it in my apps. For this I tried to follow this guide: http://www.stanford.edu/~zxwang/android_opencv.html But when compiling the downloaded stuff with ndk-build I get this error: $ /cygdrive/u/flori/workspace/android-ndk-r5b/ndk-build Compile++ thumb : opencv <= cvjni.cpp Compile++ thumb : cxcore <= cxalloc.cpp Compile++ thumb : cxcore <= cxarithm.cpp Compile++ thumb : cxcore <= cxarray.cpp Compile++ thumb : cxcore <= cxcmp.cpp Compile++ thumb : cxcore <= cxconvert.cpp Compile++ thumb : cxcore <= cxcopy.cpp Compile++ thumb : cxcore <= cxdatastructs.cpp Compile++ thumb : cxcore <= cxdrawing.cpp Compile++ thumb : cxcore <= cxdxt.cpp Compile++ thumb : cxcore <= cxerror.cpp Compile++ thumb : cxcore <= cximage.cpp Compile++ thumb : cxcore <= cxjacobieigens.cpp Compile++ thumb : cxcore <= cxlogic.cpp Compile++ thumb : cxcore <= cxlut.cpp Compile++ thumb : cxcore <= cxmathfuncs.cpp Compile++ thumb : cxcore <= cxmatmul.cpp Compile++ thumb : cxcore <= cxmatrix.cpp Compile++ thumb : cxcore <= cxmean.cpp Compile++ thumb : cxcore <= cxmeansdv.cpp Compile++ thumb : cxcore <= cxminmaxloc.cpp Compile++ thumb : cxcore <= cxnorm.cpp Compile++ thumb : cxcore <= cxouttext.cpp Compile++ thumb : cxcore <= cxpersistence.cpp Compile++ thumb : cxcore <= cxprecomp.cpp Compile++ thumb : cxcore <= cxrand.cpp Compile++ thumb : cxcore <= cxsumpixels.cpp Compile++ thumb : cxcore <= cxsvd.cpp Compile++ thumb : cxcore <= cxswitcher.cpp Compile++ thumb : cxcore <= cxtables.cpp Compile++ thumb : cxcore <= cxutils.cpp StaticLibrary : libstdc++.a StaticLibrary : libcxcore.a Compile++ thumb : cv <= cvaccum.cpp Compile++ thumb : cv <= cvadapthresh.cpp Compile++ thumb : cv <= cvapprox.cpp Compile++ thumb : cv <= cvcalccontrasthistogram.cpp Compile++ thumb : cv <= cvcalcimagehomography.cpp Compile++ thumb : cv <= cvcalibinit.cpp Compile++ thumb : cv <= cvcalibration.cpp Compile++ thumb : cv <= cvcamshift.cpp Compile++ thumb : cv <= cvcanny.cpp Compile++ thumb : cv <= cvcolor.cpp Compile++ thumb : cv <= cvcondens.cpp Compile++ thumb : cv <= cvcontours.cpp Compile++ thumb : cv <= cvcontourtree.cpp Compile++ thumb : cv <= cvconvhull.cpp Compile++ thumb : cv <= cvcorner.cpp Compile++ thumb : cv <= cvcornersubpix.cpp Compile++ thumb : cv <= cvderiv.cpp Compile++ thumb : cv <= cvdistransform.cpp Compile++ thumb : cv <= cvdominants.cpp Compile++ thumb : cv <= cvemd.cpp Compile++ thumb : cv <= cvfeatureselect.cpp Compile++ thumb : cv <= cvfilter.cpp Compile++ thumb : cv <= cvfloodfill.cpp Compile++ thumb : cv <= cvfundam.cpp Compile++ thumb : cv <= cvgeometry.cpp Compile++ thumb : cv <= cvhaar.cpp Compile++ thumb : cv <= cvhistogram.cpp Compile++ thumb : cv <= cvhough.cpp Compile++ thumb : cv <= cvimgwarp.cpp Compile++ thumb : cv <= cvinpaint.cpp Compile++ thumb : cv <= cvkalman.cpp Compile++ thumb : cv <= cvlinefit.cpp Compile++ thumb : cv <= cvlkpyramid.cpp Compile++ thumb : cv <= cvmatchcontours.cpp Compile++ thumb : cv <= cvmoments.cpp Compile++ thumb : cv <= cvmorph.cpp Compile++ thumb : cv <= cvmotempl.cpp Compile++ thumb : cv <= cvoptflowbm.cpp Compile++ thumb : cv <= cvoptflowhs.cpp Compile++ thumb : cv <= cvoptflowlk.cpp Compile++ thumb : cv <= cvpgh.cpp Compile++ thumb : cv <= cvposit.cpp Compile++ thumb : cv <= cvprecomp.cpp Compile++ thumb : cv <= cvpyramids.cpp Compile++ thumb : cv <= 
cvpyrsegmentation.cpp Compile++ thumb : cv <= cvrotcalipers.cpp Compile++ thumb : cv <= cvsamplers.cpp Compile++ thumb : cv <= cvsegmentation.cpp Compile++ thumb : cv <= cvshapedescr.cpp Compile++ thumb : cv <= cvsmooth.cpp Compile++ thumb : cv <= cvsnakes.cpp Compile++ thumb : cv <= cvstereobm.cpp Compile++ thumb : cv <= cvstereogc.cpp Compile++ thumb : cv <= cvsubdivision2d.cpp Compile++ thumb : cv <= cvsumpixels.cpp Compile++ thumb : cv <= cvsurf.cpp Compile++ thumb : cv <= cvswitcher.cpp Compile++ thumb : cv <= cvtables.cpp Compile++ thumb : cv <= cvtemplmatch.cpp Compile++ thumb : cv <= cvthresh.cpp Compile++ thumb : cv <= cvundistort.cpp Compile++ thumb : cv <= cvutils.cpp StaticLibrary : libcv.a SharedLibrary : libopencv.so U:/flori/workspace/android-ndk-r5b/toolchains/arm-linux-androideabi-4.4.3/prebui lt/windows/bin/../lib/gcc/arm-linux-androideabi/4.4.3/../../../../arm-linux-andr oideabi/bin/ld.exe: cannot find -lcxcore collect2: ld returned 1 exit status make: *** [/cygdrive/u/flori/workspace/android/testOpenCV/obj/local/armeabi/libo pencv.so] Error 1 I am trying to compile it on a Windows system and with the newest NDK version... Does anybody have an idea what this linking error means and what I can to to have it work again? Would be great if anybody could help After getting the problem to work I found that there is another way of compiling OpenCV for Android, using the current version of OpenCV (instead of the 1.1 one from above) and the modified Android NDK from crystax, which supports STL and exceptions and therefore supports the newest OpenCV Version. All information on that can be found here: http://opencv.willowgarage.com/wiki/Android There it says to download the current svn trunk and the crystax-r4 android-ndk, as well as swig, which I did. I entered the folder, created the build directory, ran cmake and then built the static libs, which seemed to work. At least it successfully ran the make-command without errors. I now wanted to build the shared libraries so I entered the android-jni folder and ran 'make' again, but got this error: % make -j4 OPENCV_CONFIG = ../build/android-opencv.mk make clean-swig &&\ mkdir -p jni/gen &&\ mkdir -p src/com/opencv/jni &&\ swig -java -c++ -package "com.opencv.jni" \ -outdir src/com/opencv/jni \ -o jni/gen/android_cv_wrap.cpp jni/android-cv.i OPENCV_CONFIG = ../build/android-opencv.mk make[1]: Entering directory `/home/florian/android-opencv-willowgarage/android/android-jni' make[1]: warning: jobserver unavailable: using -j1. Add `+' to parent make rule. 
rm -f jni/gen/android_cv_wrap.cpp make[1]: Leaving directory `/home/florian/android-opencv-willowgarage/android/android-jni' /home/florian/android-ndk-r4-crystax/ndk-build OPENCV_CONFIG=../build/android-opencv.mk \ PROJECT_PATH= ARM_TARGETS="armeabi armeabi-v7a" V= /home/florian/android-ndk-r4-crystax/ndk-build OPENCV_CONFIG=../build/android-opencv.mk \ PROJECT_PATH= ARM_TARGETS="armeabi armeabi-v7a" V= make[1]: Entering directory `/home/florian/android-opencv-willowgarage/android/android-jni' /home/florian/android-opencv-willowgarage/android/android-jni/jni/Android.mk:10: ../build/android-opencv.mk: No such file or directory make[1]: Entering directory `/home/florian/android-opencv-willowgarage/android/android-jni' /home/florian/android-opencv-willowgarage/android/android-jni/jni/Android.mk:10: ../build/android-opencv.mk: No such file or directory /home/florian/android-opencv-willowgarage/android/android-jni/jni/Android.mk:10: ../build/android-opencv.mk: No such file or directory make[1]: warning: jobserver unavailable: using -j1. Add `+' to parent make rule. /home/florian/android-opencv-willowgarage/android/android-jni/jni/Android.mk:10: ../build/android-opencv.mk: No such file or directory make[1]: *** No rule to make target `../build/android-opencv.mk'. Stop. make[1]: Leaving directory `/home/florian/android-opencv-willowgarage/android/android-jni' make: *** [libs/armeabi/libandroid-opencv.so] Error 2 make: *** Waiting for unfinished jobs.... make[1]: warning: jobserver unavailable: using -j1. Add `+' to parent make rule. make[1]: *** No rule to make target `../build/android-opencv.mk'. Stop. make[1]: Leaving directory `/home/florian/android-opencv-willowgarage/android/android-jni' make: *** [libs/armeabi-v7a/libandroid-opencv.so] Error 2 Does anybody have an idea what this means and what I can do to build the shared libraries? ... Ok after having a look at the error message it came to me that it seems to have something missing in the build directory... but there wasn't even a build directory in the android folder so I created one, ran 'cmake' in there and 'make' again but get this error: Compile thumb : opencv_lapack <= /home/florian/android-opencv-willowgarage/3rdparty/lapack/sgetrf.c Compile thumb : opencv_lapack <= /home/florian/android-opencv-willowgarage/3rdparty/lapack/scopy.c Compile++ thumb: opencv_core <= /home/florian/android-opencv-willowgarage/modules/core/src/matrix.cpp cc1plus: error: /home/florian/android-opencv-willowgarage/android/../modules/index.rst/include: Not a directory make[3]: *** [/home/florian/android-opencv-willowgarage/android/build/obj/local/armeabi/objs/opencv_core/src/matrix.o] Error 1 make[3]: *** Waiting for unfinished jobs.... make[2]: *** [android-opencv] Error 2 make[1]: *** [CMakeFiles/ndk.dir/all] Error 2 make: *** [all] Error 2 Anybody know what this means?

    Read the article

  • OpenLDAP and SSL

    - by Stormshadow
    I am having trouble trying to connect to a secure OpenLDAP server which I have set up. On running my LDAP client code java -Djavax.net.debug=ssl LDAPConnector I get the following exception trace (java version 1.6.0_17) trigger seeding of SecureRandom done seeding SecureRandom %% No cached client session *** ClientHello, TLSv1 RandomCookie: GMT: 1256110124 bytes = { 224, 19, 193, 148, 45, 205, 108, 37, 101, 247, 112, 24, 157, 39, 111, 177, 43, 53, 206, 224, 68, 165, 55, 185, 54, 203, 43, 91 } Session ID: {} Cipher Suites: [SSL_RSA_WITH_RC4_128_MD5, SSL_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA, SSL_RSA_W ITH_3DES_EDE_CBC_SHA, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA, SSL_RSA_WITH_DES_CBC_SHA, SSL_DHE_RSA_WITH_DES_CBC_SHA, SSL_DHE_DSS_WITH_DES_CBC_SH A, SSL_RSA_EXPORT_WITH_RC4_40_MD5, SSL_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA] Compression Methods: { 0 } *** Thread-0, WRITE: TLSv1 Handshake, length = 73 Thread-0, WRITE: SSLv2 client hello message, length = 98 Thread-0, received EOFException: error Thread-0, handling exception: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake Thread-0, SEND TLSv1 ALERT: fatal, description = handshake_failure Thread-0, WRITE: TLSv1 Alert, length = 2 Thread-0, called closeSocket() main, handling exception: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake javax.naming.CommunicationException: simple bind failed: ldap.natraj.com:636 [Root exception is javax.net.ssl.SSLHandshakeException: Remote host closed connection during hands hake] at com.sun.jndi.ldap.LdapClient.authenticate(Unknown Source) at com.sun.jndi.ldap.LdapCtx.connect(Unknown Source) at com.sun.jndi.ldap.LdapCtx.<init>(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getUsingURL(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getUsingURLs(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxInstance(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getInitialContext(Unknown Source) at javax.naming.spi.NamingManager.getInitialContext(Unknown Source) at javax.naming.InitialContext.getDefaultInitCtx(Unknown Source) at javax.naming.InitialContext.init(Unknown Source) at javax.naming.InitialContext.<init>(Unknown Source) at javax.naming.directory.InitialDirContext.<init>(Unknown Source) at LDAPConnector.CallSecureLDAPServer(LDAPConnector.java:43) at LDAPConnector.main(LDAPConnector.java:237) Caused by: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(Unknown Source) at com.sun.net.ssl.internal.ssl.SSLSocketImpl.performInitialHandshake(Unknown Source) at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readDataRecord(Unknown Source) at com.sun.net.ssl.internal.ssl.AppInputStream.read(Unknown Source) at java.io.BufferedInputStream.fill(Unknown Source) at java.io.BufferedInputStream.read1(Unknown Source) at java.io.BufferedInputStream.read(Unknown Source) at com.sun.jndi.ldap.Connection.run(Unknown Source) at java.lang.Thread.run(Unknown Source) Caused by: java.io.EOFException: SSL peer shut down incorrectly at com.sun.net.ssl.internal.ssl.InputRecord.read(Unknown Source) ... 
9 more I am able to connect to the same secure LDAP server however if I use another version of java (1.6.0_14) I have created and installed the server certificates in the cacerts of both the JRE's as mentioned in this guide -- OpenLDAP with SSL When I run ldapsearch -x on the server I get # extended LDIF # # LDAPv3 # base <dc=localdomain> (default) with scope subtree # filter: (objectclass=*) # requesting: ALL # # localdomain dn: dc=localdomain objectClass: top objectClass: dcObject objectClass: organization o: localdomain dc: localdomain # admin, localdomain dn: cn=admin,dc=localdomain objectClass: simpleSecurityObject objectClass: organizationalRole cn: admin description: LDAP administrator # search result search: 2 result: 0 Success # numResponses: 3 # numEntries: 2 On running openssl s_client -connect ldap.natraj.com:636 -showcerts , I obtain the self signed certificate. My slapd.conf file is as follows ####################################################################### # Global Directives: # Features to permit #allow bind_v2 # Schema and objectClass definitions include /etc/ldap/schema/core.schema include /etc/ldap/schema/cosine.schema include /etc/ldap/schema/nis.schema include /etc/ldap/schema/inetorgperson.schema # Where the pid file is put. The init.d script # will not stop the server if you change this. pidfile /var/run/slapd/slapd.pid # List of arguments that were passed to the server argsfile /var/run/slapd/slapd.args # Read slapd.conf(5) for possible values loglevel none # Where the dynamically loaded modules are stored modulepath /usr/lib/ldap moduleload back_hdb # The maximum number of entries that is returned for a search operation sizelimit 500 # The tool-threads parameter sets the actual amount of cpu's that is used # for indexing. tool-threads 1 ####################################################################### # Specific Backend Directives for hdb: # Backend specific directives apply to this backend until another # 'backend' directive occurs backend hdb ####################################################################### # Specific Backend Directives for 'other': # Backend specific directives apply to this backend until another # 'backend' directive occurs #backend <other> ####################################################################### # Specific Directives for database #1, of type hdb: # Database specific directives apply to this databasse until another # 'database' directive occurs database hdb # The base of your directory in database #1 suffix "dc=localdomain" # rootdn directive for specifying a superuser on the database. This is needed # for syncrepl. rootdn "cn=admin,dc=localdomain" # Where the database file are physically stored for database #1 directory "/var/lib/ldap" # The dbconfig settings are used to generate a DB_CONFIG file the first # time slapd starts. They do NOT override existing an existing DB_CONFIG # file. You should therefore change these settings in DB_CONFIG directly # or remove DB_CONFIG and restart slapd for changes to take effect. # For the Debian package we use 2MB as default but be sure to update this # value if you have plenty of RAM dbconfig set_cachesize 0 2097152 0 # Sven Hartge reported that he had to set this value incredibly high # to get slapd running at all. See http://bugs.debian.org/303057 for more # information. # Number of objects that can be locked at the same time. 
dbconfig set_lk_max_objects 1500 # Number of locks (both requested and granted) dbconfig set_lk_max_locks 1500 # Number of lockers dbconfig set_lk_max_lockers 1500 # Indexing options for database #1 index objectClass eq # Save the time that the entry gets modified, for database #1 lastmod on # Checkpoint the BerkeleyDB database periodically in case of system # failure and to speed slapd shutdown. checkpoint 512 30 # Where to store the replica logs for database #1 # replogfile /var/lib/ldap/replog # The userPassword by default can be changed # by the entry owning it if they are authenticated. # Others should not be able to see it, except the # admin entry below # These access lines apply to database #1 only access to attrs=userPassword,shadowLastChange by dn="cn=admin,dc=localdomain" write by anonymous auth by self write by * none # Ensure read access to the base for things like # supportedSASLMechanisms. Without this you may # have problems with SASL not knowing what # mechanisms are available and the like. # Note that this is covered by the 'access to *' # ACL below too but if you change that as people # are wont to do you'll still need this if you # want SASL (and possible other things) to work # happily. access to dn.base="" by * read # The admin dn has full write access, everyone else # can read everything. access to * by dn="cn=admin,dc=localdomain" write by * read # For Netscape Roaming support, each user gets a roaming # profile for which they have write access to #access to dn=".*,ou=Roaming,o=morsnet" # by dn="cn=admin,dc=localdomain" write # by dnattr=owner write ####################################################################### # Specific Directives for database #2, of type 'other' (can be hdb too): # Database specific directives apply to this databasse until another # 'database' directive occurs #database <other> # The base of your directory for database #2 #suffix "dc=debian,dc=org" ####################################################################### # SSL: # Uncomment the following lines to enable SSL and use the default # snakeoil certificates. #TLSCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem #TLSCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key TLSCipherSuite TLS_RSA_AES_256_CBC_SHA TLSCACertificateFile /etc/ldap/ssl/server.pem TLSCertificateFile /etc/ldap/ssl/server.pem TLSCertificateKeyFile /etc/ldap/ssl/server.pem My ldap.conf file is # # LDAP Defaults # # See ldap.conf(5) for details # This file should be world readable but not world writable. HOST ldap.natraj.com PORT 636 BASE dc=localdomain URI ldaps://ldap.natraj.com TLS_CACERT /etc/ldap/ssl/server.pem TLS_REQCERT allow #SIZELIMIT 12 #TIMELIMIT 15 #DEREF never
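    Two hedged observations on the debug output above, offered as a sketch rather than a confirmed diagnosis: the ClientHello lists only 128-bit AES suites, while slapd.conf restricts the server to TLS_RSA_AES_256_CBC_SHA, so a handshake_failure right after the hello is consistent with the two sides sharing no cipher (on Sun JREs that usually means installing the unlimited-strength JCE policy files or relaxing TLSCipherSuite). Independently of that, a minimal JNDI client for an ldaps:// bind looks like the sketch below; the truststore path, its password, and the bind credentials are placeholders, not values from the question.

      import java.util.Hashtable;
      import javax.naming.Context;
      import javax.naming.NamingException;
      import javax.naming.directory.DirContext;
      import javax.naming.directory.InitialDirContext;

      public final class SecureLdapCheck {
          public static void main(String[] args) throws NamingException {
              // Trust store holding the self-signed server certificate.
              // Path and password are placeholders.
              System.setProperty("javax.net.ssl.trustStore", "/etc/ldap/ssl/client-truststore.jks");
              System.setProperty("javax.net.ssl.trustStorePassword", "changeit");

              Hashtable<String, String> env = new Hashtable<String, String>();
              env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
              env.put(Context.PROVIDER_URL, "ldaps://ldap.natraj.com:636");
              env.put(Context.SECURITY_AUTHENTICATION, "simple");
              env.put(Context.SECURITY_PRINCIPAL, "cn=admin,dc=localdomain");
              env.put(Context.SECURITY_CREDENTIALS, "secret"); // placeholder

              DirContext ctx = new InitialDirContext(env);      // performs the bind
              System.out.println("bound OK as " + ctx.getNameInNamespace());
              ctx.close();
          }
      }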

    Read the article

  • Converting a WPFToolkit DataGrid from 1D list to 2D matrix

    - by user61073
    Hello - I am wondering if anyone has attempted the following or has an idea of how to do it. I have a WPFToolkit DataGrid bound to an ObservableCollection of items. As such, the DataGrid shows as many rows as there are items in the ObservableCollection, and as many columns as I have defined for the DataGrid. That all is good. What I now need is another view of the same data, where the DataGrid instead shows one cell per item in the ObservableCollection. So let's say my ObservableCollection has 100 items in it. The original scenario showed the DataGrid with 100 rows and 1 column. In the modified scenario, I need to show it with 10 rows and 10 columns, where each cell shows the value that was in the original representation. In other words, I need to transform my 1D ObservableCollection into a 2D ObservableCollection and display it in the DataGrid. I know how to do that programmatically in the code behind, but can it be done in XAML? Let me simplify the problem a little, in case anybody can have a crack at this. The XAML below does the following: it defines an XmlDataProvider just for dummy data, and it creates a DataGrid with 10 columns, where each column is a DataGridTemplateColumn using the same CellTemplate, and the CellTemplate is a simple TextBlock bound to an XML element. If you run the XAML below, you will find that the DataGrid ends up with 5 rows, one for each book, and 10 columns with identical content (all showing the book titles). However, what I am trying to accomplish, albeit with a different data set, is that in this case I would end up with one row, with each book title appearing in a single cell in row 1, occupying cells 0-4, and nothing in cells 5-9. Then, if I added more data and had 12 books in my XML data source, I would get row 1 completely filled (cells covering the first 10 titles) and row 2 would get its first 2 cells filled. Can my scenario be accomplished primarily in XAML, or should I resign myself to working in the code behind? Any guidance would be greatly appreciated. Thanks so much!
<UserControl xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:custom="http://schemas.microsoft.com/wpf/2008/toolkit" mc:Ignorable="d" x:Name="UserControl" d:DesignWidth="600" d:DesignHeight="400" > <UserControl.Resources> <XmlDataProvider x:Key="InventoryData" XPath="Inventory/Books"> <x:XData> <Inventory xmlns=""> <Books> <Book ISBN="0-7356-0562-9" Stock="in" Number="9"> <Title>XML in Action</Title> <Summary>XML Web Technology</Summary> </Book> <Book ISBN="0-7356-1370-2" Stock="in" Number="8"> <Title>Programming Microsoft Windows With C#</Title> <Summary>C# Programming using the .NET Framework</Summary> </Book> <Book ISBN="0-7356-1288-9" Stock="out" Number="7"> <Title>Inside C#</Title> <Summary>C# Language Programming</Summary> </Book> <Book ISBN="0-7356-1377-X" Stock="in" Number="5"> <Title>Introducing Microsoft .NET</Title> <Summary>Overview of .NET Technology</Summary> </Book> <Book ISBN="0-7356-1448-2" Stock="out" Number="4"> <Title>Microsoft C# Language Specifications</Title> <Summary>The C# language definition</Summary> </Book> </Books> <CDs> <CD Stock="in" Number="3"> <Title>Classical Collection</Title> <Summary>Classical Music</Summary> </CD> <CD Stock="out" Number="9"> <Title>Jazz Collection</Title> <Summary>Jazz Music</Summary> </CD> </CDs> </Inventory> </x:XData> </XmlDataProvider> <DataTemplate x:Key="GridCellTemplate"> <TextBlock> <TextBlock.Text> <Binding XPath="Title"/> </TextBlock.Text> </TextBlock> </DataTemplate> </UserControl.Resources> <Grid x:Name="LayoutRoot"> <custom:DataGrid HorizontalAlignment="Stretch" VerticalAlignment="Stretch" IsSynchronizedWithCurrentItem="True" Background="{DynamicResource WindowBackgroundBrush}" HeadersVisibility="All" RowDetailsVisibilityMode="Collapsed" SelectionUnit="CellOrRowHeader" CanUserResizeRows="False" GridLinesVisibility="None" RowHeaderWidth="35" AutoGenerateColumns="False" CanUserReorderColumns="False" CanUserSortColumns="False"> <custom:DataGrid.Columns> <custom:DataGridTemplateColumn CellTemplate="{StaticResource GridCellTemplate}" Header="01" /> <custom:DataGridTemplateColumn CellTemplate="{StaticResource GridCellTemplate}" Header="02" /> <custom:DataGridTemplateColumn CellTemplate="{StaticResource GridCellTemplate}" Header="03" /> <custom:DataGridTemplateColumn CellTemplate="{StaticResource GridCellTemplate}" Header="04" /> <custom:DataGridTemplateColumn CellTemplate="{StaticResource GridCellTemplate}" Header="05" /> <custom:DataGridTemplateColumn CellTemplate="{StaticResource GridCellTemplate}" Header="06" /> <custom:DataGridTemplateColumn CellTemplate="{StaticResource GridCellTemplate}" Header="07" /> <custom:DataGridTemplateColumn CellTemplate="{StaticResource GridCellTemplate}" Header="08" /> <custom:DataGridTemplateColumn CellTemplate="{StaticResource GridCellTemplate}" Header="09" /> <custom:DataGridTemplateColumn CellTemplate="{StaticResource GridCellTemplate}" Header="10" /> </custom:DataGrid.Columns> <custom:DataGrid.ItemsSource> <Binding Source="{StaticResource InventoryData}" XPath="Book"/> </custom:DataGrid.ItemsSource> </custom:DataGrid> </Grid>
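    The question is ultimately about whether the reshaping can be expressed in XAML, but for clarity, the underlying 1D-to-2D transformation it describes (chunking a flat collection into rows of a fixed width) is shown below as a plain Java sketch. It is purely illustrative - the names are made up, and the real code-behind would be the C# equivalent operating on the ObservableCollection - but the logic is the same: 100 items with a width of 10 become 10 full rows, and 12 items become one full row plus a row with 2 cells.

      import java.util.ArrayList;
      import java.util.List;

      public final class GridReshape {
          // Splits a flat list into rows of at most `width` cells.
          public static <T> List<List<T>> toRows(List<T> source, int width) {
              if (width <= 0) {
                  throw new IllegalArgumentException("width must be positive");
              }
              List<List<T>> rows = new ArrayList<List<T>>();
              for (int start = 0; start < source.size(); start += width) {
                  int end = Math.min(start + width, source.size());
                  rows.add(new ArrayList<T>(source.subList(start, end)));
              }
              return rows;
          }
      }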

    Read the article

  • Snort problems generating alerts from the DARPA 1998 intrusion detection dataset

    - by manofseven2
    Hi. I'm working on the DARPA 1998 intrusion detection dataset. When I run Snort on this dataset (the outside.tcpdump file), Snort does not generate the complete list of alerts: it starts from the last few hours of the tcpdump file and only generates alerts for that section, while all of the packets from the first hours are ignored. Another problem is the timestamp of the generated alerts: when I run Snort on a specific day of the dataset, it inserts an incorrect timestamp for that alert. The configuration, command line, and other information about my setup are: Snort version: 2.8.6. Operating system: Windows XP. Rule version: snortrules-snapshot-2860_s.tar.gz. Command line: snort_2.8.6 -c D:\programs\Snort_2.8.6\snort\etc\snort.conf -r d:\users\amir\docs\darpa\training_data\week_3\monday\outside.tcpdump -l D:\users\amir\current-task\research\thesis\snort\890230 snort.conf: # Setup the network addresses you are protecting var HOME_NET any # Set up the external network addresses. Leave as "any" in most situations var EXTERNAL_NET any # List of DNS servers on your network var DNS_SERVERS $HOME_NET # List of SMTP servers on your network var SMTP_SERVERS $HOME_NET # List of web servers on your network var HTTP_SERVERS $HOME_NET # List of sql servers on your network var SQL_SERVERS $HOME_NET # List of telnet servers on your network var TELNET_SERVERS $HOME_NET # List of ssh servers on your network var SSH_SERVERS $HOME_NET # List of ports you run web servers on portvar HTTP_PORTS [80,1220,2301,3128,7777,7779,8000,8008,8028,8080,8180,8888,9999] # List of ports you want to look for SHELLCODE on.
portvar SHELLCODE_PORTS !80 # List of ports you might see oracle attacks on portvar ORACLE_PORTS 1024: # List of ports you want to look for SSH connections on: portvar SSH_PORTS 22 # other variables, these should not be modified var AIM_SERVERS [64.12.24.0/23,64.12.28.0/23,64.12.161.0/24,64.12.163.0/24,64.12.200.0/24,205.188.3.0/24,205.188.5.0/24,205.188.7.0/24,205.188.9.0/24,205.188.153.0/24,205.188.179.0/24,205.188.248.0/24] var RULE_PATH ../rules var SO_RULE_PATH ../so_rules var PREPROC_RULE_PATH ../preproc_rules # Stop generic decode events: config disable_decode_alerts # Stop Alerts on experimental TCP options config disable_tcpopt_experimental_alerts # Stop Alerts on obsolete TCP options config disable_tcpopt_obsolete_alerts # Stop Alerts on T/TCP alerts config disable_tcpopt_ttcp_alerts # Stop Alerts on all other TCPOption type events: config disable_tcpopt_alerts # Stop Alerts on invalid ip options config disable_ipopt_alerts # Alert if value in length field (IP, TCP, UDP) is greater th elength of the packet # config enable_decode_oversized_alerts # Same as above, but drop packet if in Inline mode (requires enable_decode_oversized_alerts) # config enable_decode_oversized_drops # Configure IP / TCP checksum mode config checksum_mode: all config pcre_match_limit: 1500 config pcre_match_limit_recursion: 1500 # Configure the detection engine See the Snort Manual, Configuring Snort - Includes - Config config detection: search-method ac-split search-optimize max-pattern-len 20 # Configure the event queue. For more information, see README.event_queue config event_queue: max_queue 8 log 3 order_events content_length dynamicpreprocessor directory D:\programs\Snort_2.8.6\snort\lib\snort_dynamicpreprocessor dynamicengine D:\programs\Snort_2.8.6\snort\lib\snort_dynamicengine\sf_engine.dll # path to dynamic rules libraries #dynamicdetection directory /usr/local/lib/snort_dynamicrules preprocessor frag3_global: max_frags 65536 preprocessor frag3_engine: policy windows detect_anomalies overlap_limit 10 min_fragment_length 100 timeout 180 preprocessor stream5_global: max_tcp 8192, track_tcp yes, track_udp yes, track_icmp no preprocessor stream5_tcp: policy windows, detect_anomalies, require_3whs 180, \ overlap_limit 10, small_segments 3 bytes 150, timeout 180, \ ports client 21 22 23 25 42 53 79 109 110 111 113 119 135 136 137 139 143 \ 161 445 513 514 587 593 691 1433 1521 2100 3306 6665 6666 6667 6668 6669 \ 7000 32770 32771 32772 32773 32774 32775 32776 32777 32778 32779, \ ports both 80 443 465 563 636 989 992 993 994 995 1220 2301 3128 6907 7702 7777 7779 7801 7900 7901 7902 7903 7904 7905 \ 7906 7908 7909 7910 7911 7912 7913 7914 7915 7916 7917 7918 7919 7920 8000 8008 8028 8080 8180 8888 9999 preprocessor stream5_udp: timeout 180 preprocessor http_inspect: global iis_unicode_map unicode.map 1252 compress_depth 20480 decompress_depth 20480 preprocessor http_inspect_server: server default \ chunk_length 500000 \ server_flow_depth 0 \ client_flow_depth 0 \ post_depth 65495 \ oversize_dir_length 500 \ max_header_length 750 \ max_headers 100 \ ports { 80 1220 2301 3128 7777 7779 8000 8008 8028 8080 8180 8888 9999 } \ non_rfc_char { 0x00 0x01 0x02 0x03 0x04 0x05 0x06 0x07 } \ enable_cookie \ extended_response_inspection \ inspect_gzip \ apache_whitespace no \ ascii no \ bare_byte no \ directory no \ double_decode no \ iis_backslash no \ iis_delimiter no \ iis_unicode no \ multi_slash no \ non_strict \ u_encode yes \ webroot no preprocessor rpc_decode: 111 32770 32771 32772 32773 32774 32775 32776 
32777 32778 32779 no_alert_multiple_requests no_alert_large_fragments no_alert_incomplete preprocessor bo preprocessor ftp_telnet: global inspection_type stateful encrypted_traffic no preprocessor ftp_telnet_protocol: telnet \ ayt_attack_thresh 20 \ normalize ports { 23 } \ detect_anomalies preprocessor ftp_telnet_protocol: ftp server default \ def_max_param_len 100 \ ports { 21 2100 3535 } \ telnet_cmds yes \ ignore_telnet_erase_cmds yes \ ftp_cmds { ABOR ACCT ADAT ALLO APPE AUTH CCC CDUP } \ ftp_cmds { CEL CLNT CMD CONF CWD DELE ENC EPRT } \ ftp_cmds { EPSV ESTA ESTP FEAT HELP LANG LIST LPRT } \ ftp_cmds { LPSV MACB MAIL MDTM MIC MKD MLSD MLST } \ ftp_cmds { MODE NLST NOOP OPTS PASS PASV PBSZ PORT } \ ftp_cmds { PROT PWD QUIT REIN REST RETR RMD RNFR } \ ftp_cmds { RNTO SDUP SITE SIZE SMNT STAT STOR STOU } \ ftp_cmds { STRU SYST TEST TYPE USER XCUP XCRC XCWD } \ ftp_cmds { XMAS XMD5 XMKD XPWD XRCP XRMD XRSQ XSEM } \ ftp_cmds { XSEN XSHA1 XSHA256 } \ alt_max_param_len 0 { ABOR CCC CDUP ESTA FEAT LPSV NOOP PASV PWD QUIT REIN STOU SYST XCUP XPWD } \ alt_max_param_len 200 { ALLO APPE CMD HELP NLST RETR RNFR STOR STOU XMKD } \ alt_max_param_len 256 { CWD RNTO } \ alt_max_param_len 400 { PORT } \ alt_max_param_len 512 { SIZE } \ chk_str_fmt { ACCT ADAT ALLO APPE AUTH CEL CLNT CMD } \ chk_str_fmt { CONF CWD DELE ENC EPRT EPSV ESTP HELP } \ chk_str_fmt { LANG LIST LPRT MACB MAIL MDTM MIC MKD } \ chk_str_fmt { MLSD MLST MODE NLST OPTS PASS PBSZ PORT } \ chk_str_fmt { PROT REST RETR RMD RNFR RNTO SDUP SITE } \ chk_str_fmt { SIZE SMNT STAT STOR STRU TEST TYPE USER } \ chk_str_fmt { XCRC XCWD XMAS XMD5 XMKD XRCP XRMD XRSQ } \ chk_str_fmt { XSEM XSEN XSHA1 XSHA256 } \ cmd_validity ALLO \ cmd_validity EPSV \ cmd_validity MACB \ cmd_validity MDTM \ cmd_validity MODE \ cmd_validity PORT \ cmd_validity PROT \ cmd_validity STRU \ cmd_validity TYPE preprocessor ftp_telnet_protocol: ftp client default \ max_resp_len 256 \ bounce yes \ ignore_telnet_erase_cmds yes \ telnet_cmds yes preprocessor smtp: ports { 25 465 587 691 } \ inspection_type stateful \ normalize cmds \ normalize_cmds { MAIL RCPT HELP HELO ETRN EHLO EXPN VRFY ATRN SIZE BDAT DEBUG EMAL ESAM ESND ESOM EVFY IDENT NOOP RSET SEND SAML SOML AUTH TURN DATA QUIT ONEX QUEU STARTTLS TICK TIME TURNME VERB X-EXPS X-LINK2STATE XADR XAUTH XCIR XEXCH50 XGEN XLICENSE XQUE XSTA XTRN XUSR } \ max_command_line_len 512 \ max_header_line_len 1000 \ max_response_line_len 512 \ alt_max_command_line_len 260 { MAIL } \ alt_max_command_line_len 300 { RCPT } \ alt_max_command_line_len 500 { HELP HELO ETRN EHLO } \ alt_max_command_line_len 255 { EXPN VRFY ATRN SIZE BDAT DEBUG EMAL ESAM ESND ESOM EVFY IDENT NOOP RSET } \ alt_max_command_line_len 246 { SEND SAML SOML AUTH TURN ETRN DATA RSET QUIT ONEX QUEU STARTTLS TICK TIME TURNME VERB X-EXPS X-LINK2STATE XADR XAUTH XCIR XEXCH50 XGEN XLICENSE XQUE XSTA XTRN XUSR } \ valid_cmds { MAIL RCPT HELP HELO ETRN EHLO EXPN VRFY ATRN SIZE BDAT DEBUG EMAL ESAM ESND ESOM EVFY IDENT NOOP RSET SEND SAML SOML AUTH TURN DATA QUIT ONEX QUEU STARTTLS TICK TIME TURNME VERB X-EXPS X-LINK2STATE XADR XAUTH XCIR XEXCH50 XGEN XLICENSE XQUE XSTA XTRN XUSR } \ xlink2state { enabled } preprocessor ssh: server_ports { 22 } \ autodetect \ max_client_bytes 19600 \ max_encrypted_packets 20 \ max_server_version_len 100 \ enable_respoverflow enable_ssh1crc32 \ enable_srvoverflow enable_protomismatch preprocessor dcerpc2: memcap 102400, events [co ] preprocessor dcerpc2_server: default, policy WinXP, \ detect [smb [139,445], tcp 135, udp 135, 
rpc-over-http-server 593], \ autodetect [tcp 1025:, udp 1025:, rpc-over-http-server 1025:], \ smb_max_chain 3 preprocessor dns: ports { 53 } enable_rdata_overflow preprocessor ssl: ports { 443 465 563 636 989 992 993 994 995 7801 7702 7900 7901 7902 7903 7904 7905 7906 6907 7908 7909 7910 7911 7912 7913 7914 7915 7916 7917 7918 7919 7920 }, trustservers, noinspect_encrypted # SDF sensitive data preprocessor. For more information see README.sensitive_data preprocessor sensitive_data: alert_threshold 25 output alert_full: alert.log output database: log, mysql, user=root password=123456 dbname=snort host=localhost include classification.config include reference.config include $RULE_PATH/local.rules include $RULE_PATH/attack-responses.rules include $RULE_PATH/backdoor.rules include $RULE_PATH/bad-traffic.rules include $RULE_PATH/chat.rules include $RULE_PATH/content-replace.rules include $RULE_PATH/ddos.rules include $RULE_PATH/dns.rules include $RULE_PATH/dos.rules include $RULE_PATH/exploit.rules include $RULE_PATH/finger.rules include $RULE_PATH/ftp.rules include $RULE_PATH/icmp.rules include $RULE_PATH/icmp-info.rules include $RULE_PATH/imap.rules include $RULE_PATH/info.rules include $RULE_PATH/misc.rules include $RULE_PATH/multimedia.rules include $RULE_PATH/mysql.rules include $RULE_PATH/netbios.rules include $RULE_PATH/nntp.rules include $RULE_PATH/oracle.rules include $RULE_PATH/other-ids.rules include $RULE_PATH/p2p.rules include $RULE_PATH/policy.rules include $RULE_PATH/pop2.rules include $RULE_PATH/pop3.rules include $RULE_PATH/rpc.rules include $RULE_PATH/rservices.rules include $RULE_PATH/scada.rules include $RULE_PATH/scan.rules include $RULE_PATH/shellcode.rules include $RULE_PATH/smtp.rules include $RULE_PATH/snmp.rules include $RULE_PATH/specific-threats.rules include $RULE_PATH/spyware-put.rules include $RULE_PATH/sql.rules include $RULE_PATH/telnet.rules include $RULE_PATH/tftp.rules include $RULE_PATH/virus.rules include $RULE_PATH/voip.rules include $RULE_PATH/web-activex.rules include $RULE_PATH/web-attacks.rules include $RULE_PATH/web-cgi.rules include $RULE_PATH/web-client.rules include $RULE_PATH/web-coldfusion.rules include $RULE_PATH/web-frontpage.rules include $RULE_PATH/web-iis.rules include $RULE_PATH/web-misc.rules include $RULE_PATH/web-php.rules include $RULE_PATH/x11.rules include threshold.conf -————————————————————————————- Can anyone help me to solve this problem? Thanks.

    Read the article

  • xVal 1.0 not generating the correct xVal.AttachValidator script in view

    - by bastijn
    I'm currently implementing xVal client-side validation. The server-side validation is working correctly at the moment. I have referenced xVall.dll (from xVal1.0.zip) in my project as well as the System.ComponentModel.DataAnnotations and System.web.mvc.DataAnnotations from the Data Annotations Model Binder Sample found at http://aspnet.codeplex.com/releases/view/24471. I have modified the method BindProperty in the DataAnnotationsModelBinder class since it returned a nullpointer exception telling me the modelState object was null. Some blogposts described to modify the method and I did according to this SO post. Next I put the following lines in my global.asax: protected void Application_Start() { // kept same and added following line RegisterModelBinders(ModelBinders.Binders); // Add this line } public void RegisterModelBinders(ModelBinderDictionary binders) // Add this whole method { binders.DefaultBinder = new Microsoft.Web.Mvc.DataAnnotations.DataAnnotationsModelBinder(); } Now, I have made a partial class and a metadata class since I use the entity framework and you cannot create partial declarations as of yet so I have: [MetadataType(typeof(PersonMetaData))] public partial class Persons { // .... } public class PersonMetaData { private const string EmailRegEx = @"^(([^<>()[\]\\.,;:\s@\""]+" + @"(\.[^<>()[\]\\.,;:\s@\""]+)*)|(\"".+\""))@" + @"((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}" + @"\.[0-9]{1,3}\])|(([a-zA-Z\-0-9]+\.)+" + @"[a-zA-Z]{2,}))$"; [Required] public string FirstName { get; set; } [Required] public string LastName { get; set; } [Required(ErrorMessage="Please fill in your email")] [RegularExpression(EmailRegEx,ErrorMessage="Please supply a valid email address")] public string Email { get; set; } } And in my controller I have the POST edit method which currently still use a FormCollection instead of a Persons object as input. I have to change this later on but due to time constraints and some strange bug this isnt done as of yet :). It shouldnt matter though. Below it is my view. // // POST: /Jobs/Edit/5 //[CustomAuthorize(Roles = "admin,moderator")] [AcceptVerbs(HttpVerbs.Post)] public ActionResult Edit([Bind(Exclude = "Id")]FormCollection form) { Persons person = this.GetLoggedInPerson(); person.UpdatedAt = DateTime.Now; // Update the updated time. TryUpdateModel(person, null, null, new string[]{"Id"}); if (ModelState.IsValid) { repository.SaveChanges(); return RedirectToAction("Index", "ControlPanel"); } return View(person); } #endregion My view contains a partial page containing the form. In my edit.aspx I have the following code: <div class="content"> <% Html.RenderPartial("PersonForm", Model); %> </div> </div> and in the .ascx partial page: <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<WerkStageNu.Persons>" %> <% if (!Model.AddressesReference.IsLoaded) { %> <% Model.AddressesReference.Load(); %> <% } %> <%= Html.ValidationSummary("Edit was unsuccessful. 
Please correct the errors and try again.") %> <% using (Html.BeginForm()) {%> <fieldset> <legend>General information</legend> <table> <tr> <td><label for="FirstName">FirstName:</label></td><td><%= Html.TextBox("FirstName", Model.FirstName)%><%= Html.ValidationMessage("FirstName", "*")%></td> </tr> <tr> <td><label for="LastName">LastName:</label></td><td><%= Html.TextBox("LastName", Model.LastName)%><%= Html.ValidationMessage("LastName", "*")%></td> </tr> <tr> <td><label for="Email">Email:</label></td><td><%= Html.TextBox("Email", Model.Email)%><%= Html.ValidationMessage("Email", "*")%></td> </tr> <tr> <td><label for="Telephone">Telephone:</label></td><td> <%= Html.TextBox("Telephone", Model.Telephone) %><%= Html.ValidationMessage("Telephone", "*") %></td> </tr> <tr> <td><label for="Fax">Fax:</label></td><td><%= Html.TextBox("Fax", Model.Fax) %><%= Html.ValidationMessage("Fax", "*") %></td> </tr> </table> <%--<p> <label for="GenderID"><%= Html.Encode(Resources.Forms.gender) %>:</label> <%= Html.DropDownList("GenderID", Model.Genders)%> </p> --%> </fieldset> <fieldset> <legend><%= Html.Encode(Resources.Forms.addressinformation) %></legend> <table> <tr> <td><label for="Addresses.City"><%= Html.Encode(Resources.Forms.city) %>:</label></td><td><%= Html.TextBox("Addresses.City", Model.Addresses.City)%></td> </tr> <tr> <td><label for="Addresses.Street"><%= Html.Encode(Resources.Forms.street) %>:</label></td><td><%= Html.TextBox("Addresses.Street", Model.Addresses.Street)%></td> </tr> <tr> <td><label for="Addresses.StreetNo"><%= Html.Encode(Resources.Forms.streetNumber) %>:</label></td><td><%= Html.TextBox("Addresses.StreetNo", Model.Addresses.StreetNo)%></td> </tr> <tr> <td><label for="Addresses.Country"><%= Html.Encode(Resources.Forms.county) %>:</label></td><td><%= Html.TextBox("Addresses.Country", Model.Addresses.Country)%></td> </tr> </table> </fieldset> <p> <input type="image" src="../../Content/images/save_btn.png" /> </p> <%= Html.ClientSideValidation(typeof(WerkStageNu.Persons)) %> <% } % Still nothing really stunning over here. In combination with the edited data annotation dlls this gives me server-side validation working (although i have to manually exclude the "id" property as done in the TryUpdateModel). The strange thing is that it still generates the following script in my View: xVal.AttachValidator(null, {"Fields":[{"FieldName":"ID","FieldRules": [{"RuleName":"DataType","RuleParameters":{"Type":"Integer"}}]}]}, {}) While all the found blogposts on this ( 1, 2 ) but all of those are old posts and all say it should be fixed from xVal 0.8 and up. The last thing I found was this post but I did not really understand. I referenced using Visual Studio - add reference -- browse - selected from my bin dir where I stored the external compiled dlls (copied to the bin dir of my project). Can anyone tell me where the problem originates from? EDIT Adding the reference from the .NET tab fixed the problem somehow. While earlier adding from this tab resulted in a nullpointer error since it used the standard DataAnnotations delivered with the MVC1 framework instead of the freshly build one. Is it because I dropped the .dll in my bin dir that it now picks the correct one? Or why?

    Read the article

  • xVal 1.0 not generating the correct xVal.AttachValidator

    - by bastijn
    I'm currently implementing xVal client-side validation. The server-side validation is working correctly at the moment. I have referenced xVall.dll (from xVal1.0.zip) in my project as well as the System.ComponentModel.DataAnnotations and System.web.mvc.DataAnnotations from the Data Annotations Model Binder Sample found at http://aspnet.codeplex.com/releases/view/24471. I have modified the method BindProperty in the DataAnnotationsModelBinder class since it returned a nullpointer exception telling me the modelState object was null. Some blogposts described to modify the method and I did according to this SO post. Next I put the following lines in my global.asax: protected void Application_Start() { // kept same and added following line RegisterModelBinders(ModelBinders.Binders); // Add this line } public void RegisterModelBinders(ModelBinderDictionary binders) // Add this whole method { binders.DefaultBinder = new Microsoft.Web.Mvc.DataAnnotations.DataAnnotationsModelBinder(); } Now, I have made a partial class and a metadata class since I use the entity framework and you cannot create partial declarations as of yet so I have: [MetadataType(typeof(PersonMetaData))] public partial class Persons { // .... } public class PersonMetaData { private const string EmailRegEx = @"^(([^<>()[\]\\.,;:\s@\""]+" + @"(\.[^<>()[\]\\.,;:\s@\""]+)*)|(\"".+\""))@" + @"((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}" + @"\.[0-9]{1,3}\])|(([a-zA-Z\-0-9]+\.)+" + @"[a-zA-Z]{2,}))$"; [Required] public string FirstName { get; set; } [Required] public string LastName { get; set; } [Required(ErrorMessage="Please fill in your email")] [RegularExpression(EmailRegEx,ErrorMessage="Please supply a valid email address")] public string Email { get; set; } } And in my controller I have the POST edit method which currently still use a FormCollection instead of a Persons object as input. I have to change this later on but due to time constraints and some strange bug this isnt done as of yet :). It shouldnt matter though. Below it is my view. // // POST: /Jobs/Edit/5 //[CustomAuthorize(Roles = "admin,moderator")] [AcceptVerbs(HttpVerbs.Post)] public ActionResult Edit([Bind(Exclude = "Id")]FormCollection form) { Persons person = this.GetLoggedInPerson(); person.UpdatedAt = DateTime.Now; // Update the updated time. TryUpdateModel(person, null, null, new string[]{"Id"}); if (ModelState.IsValid) { repository.SaveChanges(); return RedirectToAction("Index", "ControlPanel"); } return View(person); } #endregion My view contains a partial page containing the form. In my edit.aspx I have the following code: <div class="content"> <% Html.RenderPartial("PersonForm", Model); %> </div> </div> and in the .ascx partial page: <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<WerkStageNu.Persons>" %> <% if (!Model.AddressesReference.IsLoaded) { % <% Model.AddressesReference.Load(); % <% } % <%= Html.ValidationSummary("Edit was unsuccessful. 
Please correct the errors and try again.") % <% using (Html.BeginForm()) {%> <fieldset> <legend>General information</legend> <table> <tr> <td><label for="FirstName">FirstName:</label></td><td><%= Html.TextBox("FirstName", Model.FirstName)%><%= Html.ValidationMessage("FirstName", "*")%></td> </tr> <tr> <td><label for="LastName">LastName:</label></td><td><%= Html.TextBox("LastName", Model.LastName)%><%= Html.ValidationMessage("LastName", "*")%></td> </tr> <tr> <td><label for="Email">Email:</label></td><td><%= Html.TextBox("Email", Model.Email)%><%= Html.ValidationMessage("Email", "*")%></td> </tr> <tr> <td><label for="Telephone">Telephone:</label></td><td> <%= Html.TextBox("Telephone", Model.Telephone) %><%= Html.ValidationMessage("Telephone", "*") %></td> </tr> <tr> <td><label for="Fax">Fax:</label></td><td><%= Html.TextBox("Fax", Model.Fax) %><%= Html.ValidationMessage("Fax", "*") %></td> </tr> </table> <%--<p> <label for="GenderID"><%= Html.Encode(Resources.Forms.gender) %>:</label> <%= Html.DropDownList("GenderID", Model.Genders)%> </p> --%> </fieldset> <fieldset> <legend><%= Html.Encode(Resources.Forms.addressinformation) %></legend> <table> <tr> <td><label for="Addresses.City"><%= Html.Encode(Resources.Forms.city) %>:</label></td><td><%= Html.TextBox("Addresses.City", Model.Addresses.City)%></td> </tr> <tr> <td><label for="Addresses.Street"><%= Html.Encode(Resources.Forms.street) %>:</label></td><td><%= Html.TextBox("Addresses.Street", Model.Addresses.Street)%></td> </tr> <tr> <td><label for="Addresses.StreetNo"><%= Html.Encode(Resources.Forms.streetNumber) %>:</label></td><td><%= Html.TextBox("Addresses.StreetNo", Model.Addresses.StreetNo)%></td> </tr> <tr> <td><label for="Addresses.Country"><%= Html.Encode(Resources.Forms.county) %>:</label></td><td><%= Html.TextBox("Addresses.Country", Model.Addresses.Country)%></td> </tr> </table> </fieldset> <p> <input type="image" src="../../Content/images/save_btn.png" /> </p> <%= Html.ClientSideValidation(typeof(WerkStageNu.Persons)) %> <% } % Still nothing really stunning over here. In combination with the edited data annotation dlls this gives me server-side validation working (although i have to manually exclude the "id" property as done in the TryUpdateModel). The strange thing is that it still generates the following script in my View: xVal.AttachValidator(null, {"Fields":[{"FieldName":"ID","FieldRules": [{"RuleName":"DataType","RuleParameters":{"Type":"Integer"}}]}]}, {}) While all the found blogposts on this ( 1, 2 ) but all of those are old posts and all say it should be fixed from xVal 0.8 and up. The last thing I found was this post but I did not really understand. I referenced using Visual Studio - add reference -- browse - selected from my bin dir where I stored the external compiled dlls (copied to the bin dir of my project). Can anyone tell me where the problem originates from?
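
A quick way to see which DataAnnotations build the site actually loads at runtime (the browse-referenced copy in bin, or another copy such as the one MVC 1 ships with) is to reflect on one of the attribute types and print its assembly details. A minimal diagnostic sketch, not part of xVal itself:

    using System;
    using System.ComponentModel.DataAnnotations;

    class DataAnnotationsProbe
    {
        static void Main()
        {
            // RequiredAttribute lives in the DataAnnotations assembly that MVC
            // and xVal will reflect over, so its Assembly shows which copy won.
            var asm = typeof(RequiredAttribute).Assembly;
            Console.WriteLine(asm.FullName);   // version and public key token in use
            Console.WriteLine(asm.Location);   // disk path: project bin copy or elsewhere
        }
    }

If Location points somewhere other than the project's bin folder, the freshly built sample assembly is not the one being used, which would also explain why the ID field still gets a DataType rule attached.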

    Read the article

  • does red5 read tomcat-users.xml

    - by baba
    Hi, I have been busy creating an app for Red5. Imagine what was my surprise when I tried to configure basic/digest authentication and I couldn't. What struck me as strange is that I have a running tomcat instance that works and authenticates correctly with the following xmls: web.xml (part of) <security-constraint> <web-resource-collection> <web-resource-name>A Protected Page</web-resource-name> <url-pattern>/stats.jsp</url-pattern> </web-resource-collection> <auth-constraint> <description/> <role-name>tomcat</role-name> </auth-constraint> </security-constraint> <login-config> <auth-method>DIGEST</auth-method> <realm-name>BLAAAAAAAAAAAAAAAAA</realm-name> </login-config> <security-role> <description/> <role-name>tomcat</role-name> </security-role> and a tomcat-users.xml in /conf that looks kinda like this: <?xml version="1.0" encoding="UTF-8"?> <tomcat-users> <role rolename="tomcat"/> <user username="ide" password="bogus" roles="tomcat"/> </tomcat-users> The annoying thing is that configuration authenticates correctly when on tomcat's servlet container, but on the red5's modified one, it just keeps asking for authentication. Am I becoming mad or it should work like a charm? Red5 is version 0_9_1 The stats.jsp is accessible in both servlet containers, the only difference is that when you input the correct password and username in tomcat, you are logged in, and in red5 you are not, it just keeps asking you for the password. Any pointers? Am I missing something? Here is a stack trace of the error I receive AT the moment I try the login: Caused by: java.io.IOException: Unable to locate a login configuration at com.sun.security.auth.login.ConfigFile.init(ConfigFile.java:250) [na:1.6.0_22] at com.sun.security.auth.login.ConfigFile.<init>(ConfigFile.java:91) [na:1.6.0_22] ... 
27 common frames omitted [ERROR] [http-127.0.0.1-5080-1] org.apache.catalina.realm.JAASRealm - Cannot find message associated with key jaasRealm.unexpectedError java.lang.SecurityException: Unable to locate a login configuration at com.sun.security.auth.login.ConfigFile.<init>(ConfigFile.java:93) [na:1.6.0_22] at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) [na:1.6.0_22] at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) [na:1.6.0_22] at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) [na:1.6.0_22] at java.lang.reflect.Constructor.newInstance(Constructor.java:513) [na:1.6.0_22] at java.lang.Class.newInstance0(Class.java:355) [na:1.6.0_22] at java.lang.Class.newInstance(Class.java:308) [na:1.6.0_22] at javax.security.auth.login.Configuration$3.run(Configuration.java:247) [na:1.6.0_22] at java.security.AccessController.doPrivileged(Native Method) [na:1.6.0_22] at javax.security.auth.login.Configuration.getConfiguration(Configuration.java:242) [na:1.6.0_22] at javax.security.auth.login.LoginContext$1.run(LoginContext.java:237) [na:1.6.0_22] at java.security.AccessController.doPrivileged(Native Method) [na:1.6.0_22] at javax.security.auth.login.LoginContext.init(LoginContext.java:234) [na:1.6.0_22] at javax.security.auth.login.LoginContext.<init>(LoginContext.java:403) [na:1.6.0_22] at org.apache.catalina.realm.JAASRealm.authenticate(JAASRealm.java:394) [catalina-6.0.24.jar:na] at org.apache.catalina.realm.JAASRealm.authenticate(JAASRealm.java:357) [catalina-6.0.24.jar:na] at org.apache.catalina.authenticator.DigestAuthenticator.findPrincipal(DigestAuthenticator.java:283) [catalina-6.0.24.jar:na] at org.apache.catalina.authenticator.DigestAuthenticator.authenticate(DigestAuthenticator.java:176) [catalina-6.0.24.jar:na] at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:523) [catalina-6.0.24.jar:na] at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) [catalina-6.0.24.jar:na] at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) [catalina-6.0.24.jar:na] at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:555) [catalina-6.0.24.jar:na] at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) [catalina-6.0.24.jar:na] at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298) [catalina-6.0.24.jar:na] at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852) [tomcat-coyote-6.0.24.jar:na] at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588) [tomcat-coyote-6.0.24.jar:na] at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489) [tomcat-coyote-6.0.24.jar:na] at java.lang.Thread.run(Thread.java:662) [na:1.6.0_22] Caused by: java.io.IOException: Unable to locate a login configuration at com.sun.security.auth.login.ConfigFile.init(ConfigFile.java:250) [na:1.6.0_22] at com.sun.security.auth.login.ConfigFile.<init>(ConfigFile.java:91) [na:1.6.0_22] ... 
27 common frames omitted In addition, here is the configuration of red5-web.properties webapp.contextPath=/project Even futher information: Seems to me like it is using the right realm: MemoryRealm [INFO] [main] org.red5.server.tomcat.TomcatLoader - Setting connector: org.apache.catalina.connector.Connector [INFO] [main] org.red5.server.tomcat.TomcatLoader - Address to bind: /127.0.0.1:5080 [INFO] [main] org.red5.server.tomcat.TomcatLoader - Setting realm: org.apache.catalina.realm.MemoryRealm [INFO] [main] org.red5.server.tomcat.TomcatLoader - Loading tomcat context [INFO] [main] org.red5.server.tomcat.TomcatLoader - Server root: C:/Program Files/Red5 [INFO] [main] org.red5.server.tomcat.TomcatLoader - Config root: C:/Program Files/Red5/conf [INFO] [main] org.red5.server.tomcat.TomcatLoader - Application root: C:/Program Files/Red5/webapps [INFO] [main] org.red5.server.tomcat.TomcatLoader - Starting Tomcat servlet engine [INFO] [main] org.apache.catalina.startup.Embedded - Starting tomcat server [INFO] [main] org.apache.catalina.core.StandardEngine - Starting Servlet Engine: Apache Tomcat/6.0.26 However, immediately after bootstraping Tomcat, I am presented with the following error: Exception in thread "Launcher:/administration" org.springframework.beans.factory.BeanDefinitionStoreException: Could not resolve bean definition resource pattern [/WEB-INF/red5-*.xml]; nested exception is java.io.FileNotFoundException: ServletContext resource [/WEB-INF/] cannot be resolved to URL because it does not exist at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:190) at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:149) at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:124) at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:93) at org.springframework.context.support.AbstractRefreshableApplicationContext.refreshBeanFactory(AbstractRefreshableApplicationContext.java:130) at org.springframework.context.support.AbstractApplicationContext.obtainFreshBeanFactory(AbstractApplicationContext.java:458) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:388) at org.red5.server.tomcat.TomcatLoader$1.run(TomcatLoader.java:594) Caused by: java.io.FileNotFoundException: ServletContext resource [/WEB-INF/] cannot be resolved to URL because it does not exist at org.springframework.web.context.support.ServletContextResource.getURL(ServletContextResource.java:132) at org.springframework.core.io.support.PathMatchingResourcePatternResolver.isJarResource(PathMatchingResourcePatternResolver.java:414) at org.springframework.core.io.support.PathMatchingResourcePatternResolver.findPathMatchingResources(PathMatchingResourcePatternResolver.java:343) at org.springframework.core.io.support.PathMatchingResourcePatternResolver.getResources(PathMatchingResourcePatternResolver.java:282) at org.springframework.context.support.AbstractApplicationContext.getResources(AbstractApplicationContext.java:1156) at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:177) ... 
7 more This error is kinda strange, because after this it seems that /WEB-INF/ is found by the rest of the program by the following output: [INFO] [Launcher:/SOSample] org.springframework.beans.factory.config.PropertyPlaceholderConfigurer - Loading properties file from ServletContext resource [/WEB-INF/red5-web.properties] [INFO] [Launcher:/installer] org.springframework.beans.factory.config.PropertyPlaceholderConfigurer - Loading properties file from ServletContext resource [/WEB-INF/red5-web.properties] [INFO] [Launcher:/] org.springframework.beans.factory.config.PropertyPlaceholderConfigurer - Loading properties file from ServletContext resource [/WEB-INF/red5-web.properties] [INFO] [Launcher:/LiveMedia] org.springframework.beans.factory.config.PropertyPlaceholderConfigurer - Loading properties file from ServletContext resource [/WEB-INF/red5-web.properties] What really annoys me is that, as you can see in the output, when I try to login, I get a JAASRealm-related exception, but in the debug output when Tomcat is loading, it is clear to me that it expects a MemoryRealm. I was wondering where and how in red5.xml should I specify bean properties such that I force red5 to use MemoryRealm that is under /conf/tomcat-users.xml, because it certainly doesn't do so now. It seems like the biggest question I have posted so far, but I tried to explain it as fully as possible as to avoid confusion.
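
For comparison, what the question is ultimately after is the embedded Catalina instance being handed a MemoryRealm that reads conf/tomcat-users.xml, instead of the JAASRealm that fails for lack of a login configuration. A rough sketch of that wiring against the Tomcat 6 embedded API; red5's TomcatLoader and red5.xml bean names are not reproduced here, so treat the property names as assumptions to be mapped onto the Spring config:

    import org.apache.catalina.realm.MemoryRealm;
    import org.apache.catalina.startup.Embedded;

    public class RealmWiringSketch {
        public static void main(String[] args) {
            Embedded embedded = new Embedded();

            // Authenticate against conf/tomcat-users.xml rather than JAAS.
            MemoryRealm realm = new MemoryRealm();
            realm.setPathname("conf/tomcat-users.xml");
            embedded.setRealm(realm);

            // If JAASRealm must stay instead, "Unable to locate a login configuration"
            // usually means this JVM property was never set:
            // System.setProperty("java.security.auth.login.config", "conf/jaas.config");
        }
    }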

    Read the article

  • NoClassDefFoundError and Netty

    - by Dmytro Leonenko
    Hi. First to say I'm n00b in Java. I can understand most concepts but in my situation I want somebody to help me. I'm using JBoss Netty to handle simple http request and using MemCachedClient check existence of client ip in memcached. import org.jboss.netty.channel.ChannelHandler; import static org.jboss.netty.handler.codec.http.HttpHeaders.*; import static org.jboss.netty.handler.codec.http.HttpHeaders.Names.*; import static org.jboss.netty.handler.codec.http.HttpResponseStatus.*; import static org.jboss.netty.handler.codec.http.HttpVersion.*; import com.danga.MemCached.*; import java.util.List; import java.util.Map; import java.util.Map.Entry; import java.util.Set; import org.jboss.netty.buffer.ChannelBuffer; import org.jboss.netty.buffer.ChannelBuffers; import org.jboss.netty.channel.ChannelFuture; import org.jboss.netty.channel.ChannelFutureListener; import org.jboss.netty.channel.ChannelHandlerContext; import org.jboss.netty.channel.ExceptionEvent; import org.jboss.netty.channel.MessageEvent; import org.jboss.netty.channel.SimpleChannelUpstreamHandler; import org.jboss.netty.handler.codec.http.Cookie; import org.jboss.netty.handler.codec.http.CookieDecoder; import org.jboss.netty.handler.codec.http.CookieEncoder; import org.jboss.netty.handler.codec.http.DefaultHttpResponse; import org.jboss.netty.handler.codec.http.HttpChunk; import org.jboss.netty.handler.codec.http.HttpChunkTrailer; import org.jboss.netty.handler.codec.http.HttpRequest; import org.jboss.netty.handler.codec.http.HttpResponse; import org.jboss.netty.handler.codec.http.HttpResponseStatus; import org.jboss.netty.handler.codec.http.QueryStringDecoder; import org.jboss.netty.util.CharsetUtil; /** * @author <a href="http://www.jboss.org/netty/">The Netty Project</a> * @author Andy Taylor ([email protected]) * @author <a href="http://gleamynode.net/">Trustin Lee</a> * * @version $Rev: 2368 $, $Date: 2010-10-18 17:19:03 +0900 (Mon, 18 Oct 2010) $ */ @SuppressWarnings({"ALL"}) public class HttpRequestHandler extends SimpleChannelUpstreamHandler { private HttpRequest request; private boolean readingChunks; /** Buffer that stores the response content */ private final StringBuilder buf = new StringBuilder(); protected MemCachedClient mcc = new MemCachedClient(); private static SockIOPool poolInstance = null; static { // server list and weights String[] servers = { "lcalhost:11211" }; //Integer[] weights = { 3, 3, 2 }; Integer[] weights = {1}; // grab an instance of our connection pool SockIOPool pool = SockIOPool.getInstance(); // set the servers and the weights pool.setServers(servers); pool.setWeights(weights); // set some basic pool settings // 5 initial, 5 min, and 250 max conns // and set the max idle time for a conn // to 6 hours pool.setInitConn(5); pool.setMinConn(5); pool.setMaxConn(250); pool.setMaxIdle(21600000); //1000 * 60 * 60 * 6 // set the sleep for the maint thread // it will wake up every x seconds and // maintain the pool size pool.setMaintSleep(30); // set some TCP settings // disable nagle // set the read timeout to 3 secs // and don't set a connect timeout pool.setNagle(false); pool.setSocketTO(3000); pool.setSocketConnectTO(0); // initialize the connection pool pool.initialize(); // lets set some compression on for the client // compress anything larger than 64k //mcc.setCompressEnable(true); //mcc.setCompressThreshold(64 * 1024); } @Override public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception { HttpRequest request = this.request = (HttpRequest) e.getMessage(); 
if(mcc.get(request.getHeader("X-Real-Ip")) != null) { HttpResponse response = new DefaultHttpResponse(HTTP_1_1, OK); response.setHeader("X-Accel-Redirect", request.getUri()); ctx.getChannel().write(response).addListener(ChannelFutureListener.CLOSE); } else { sendError(ctx, NOT_FOUND); } } private void writeResponse(MessageEvent e) { // Decide whether to close the connection or not. boolean keepAlive = isKeepAlive(request); // Build the response object. HttpResponse response = new DefaultHttpResponse(HTTP_1_1, OK); response.setContent(ChannelBuffers.copiedBuffer(buf.toString(), CharsetUtil.UTF_8)); response.setHeader(CONTENT_TYPE, "text/plain; charset=UTF-8"); if (keepAlive) { // Add 'Content-Length' header only for a keep-alive connection. response.setHeader(CONTENT_LENGTH, response.getContent().readableBytes()); } // Encode the cookie. String cookieString = request.getHeader(COOKIE); if (cookieString != null) { CookieDecoder cookieDecoder = new CookieDecoder(); Set<Cookie> cookies = cookieDecoder.decode(cookieString); if(!cookies.isEmpty()) { // Reset the cookies if necessary. CookieEncoder cookieEncoder = new CookieEncoder(true); for (Cookie cookie : cookies) { cookieEncoder.addCookie(cookie); } response.addHeader(SET_COOKIE, cookieEncoder.encode()); } } // Write the response. ChannelFuture future = e.getChannel().write(response); // Close the non-keep-alive connection after the write operation is done. if (!keepAlive) { future.addListener(ChannelFutureListener.CLOSE); } } @Override public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) throws Exception { e.getCause().printStackTrace(); e.getChannel().close(); } private void sendError(ChannelHandlerContext ctx, HttpResponseStatus status) { HttpResponse response = new DefaultHttpResponse(HTTP_1_1, status); response.setHeader(CONTENT_TYPE, "text/plain; charset=UTF-8"); response.setContent(ChannelBuffers.copiedBuffer( "Failure: " + status.toString() + "\r\n", CharsetUtil.UTF_8)); // Close the connection as soon as the error message is sent. ctx.getChannel().write(response).addListener(ChannelFutureListener.CLOSE); } } When I try to send request like http://127.0.0.1:8090/1/2/3 I'm getting java.lang.NoClassDefFoundError: com/danga/MemCached/MemCachedClient at httpClientValidator.server.HttpRequestHandler.<clinit>(HttpRequestHandler.java:66) I believe it's not related to classpath. May be it's related to context in which mcc doesn't exist. Any help appreciated EDIT: Original code http://docs.jboss.org/netty/3.2/xref/org/jboss/netty/example/http/snoop/package-summary.html I've modified some parts to fit my needs.
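
A NoClassDefFoundError thrown from the handler's static initializer usually means one of two things: the memcached client jar is on the compile classpath but not on the runtime classpath, or the static block itself failed once (note the server list contains "lcalhost:11211") and later lookups are then reported as NoClassDefFoundError. A small sketch for checking the first case; the jar name in the comment is an assumption:

    public class ClasspathCheck {
        public static void main(String[] args) {
            try {
                // Load the class without running its static initializer.
                Class.forName("com.danga.MemCached.MemCachedClient", false,
                        ClasspathCheck.class.getClassLoader());
                System.out.println("MemCachedClient is on the runtime classpath");
            } catch (ClassNotFoundException e) {
                // e.g. the java_memcached client jar was not included when launching the server
                System.out.println("MemCachedClient is missing: " + e);
            }
        }
    }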

    Read the article

  • Autocorrelation returns random results with mic input (using a high pass filter)

    - by Niall
    Hello, Sorry to ask a similar question to the one i asked before (FFT Problem (Returns random results)), but i've looked up pitch detection and autocorrelation and have found some code for pitch detection using autocorrelation. Im trying to do pitch detection of a users singing. Problem is, it keeps returning random results. I've got some code from http://code.google.com/p/yaalp/ which i've converted to C++ and modified (below). My sample rate is 2048, and data size is 1024. I'm detecting pitch of both a sine wave and mic input. The frequency of the sine wave is 726.0, and its detecting it to be 722.950820 (which im ok with), but its detecting the pitch of the mic as a random number from around 100 to around 1050. I'm now using a High pass filter to remove the DC offset, but it's not working. Am i doing it right, and if so, what else can i do to fix it? Any help would be greatly appreciated! double* doHighPassFilter(short *buffer) { // Do FFT: int bufferLength = 1024; float *real = malloc(bufferLength*sizeof(float)); float *real2 = malloc(bufferLength*sizeof(float)); for(int x=0;x<bufferLength;x++) { real[x] = buffer[x]; } fft(real, bufferLength); for(int x=0;x<bufferLength;x+=2) { real2[x] = real[x]; } for (int i=0; i < 30; i++) //Set freqs lower than 30hz to zero to attenuate the low frequencies real2[i] = 0; // Do inverse FFT: inversefft(real2,bufferLength); double* real3 = (double*)real2; return real3; } double DetectPitch(short* data) { int sampleRate = 2048; //Create sine wave double *buffer = malloc(1024*sizeof(short)); double amplitude = 0.25 * 32768; //0.25 * max length of short double frequency = 726.0; for (int n = 0; n < 1024; n++) { buffer[n] = (short)(amplitude * sin((2 * 3.14159265 * n * frequency) / sampleRate)); } doHighPassFilter(data); printf("Pitch from sine wave: %f\n",detectPitchCalculation(buffer, 50.0, 1000.0, 1, 1)); printf("Pitch from mic: %f\n",detectPitchCalculation(data, 50.0, 1000.0, 1, 1)); return 0; } // These work by shifting the signal until it seems to correlate with itself. // In other words if the signal looks very similar to (signal shifted 200 data) than the fundamental period is probably 200 data // Note that the algorithm only works well when there's only one prominent fundamental. // This could be optimized by looking at the rate of change to determine a maximum without testing all periods. double detectPitchCalculation(double* data, double minHz, double maxHz, int nCandidates, int nResolution) { //-------------------------1-------------------------// // note that higher frequency means lower period int nLowPeriodInSamples = hzToPeriodInSamples(maxHz, 2048); int nHiPeriodInSamples = hzToPeriodInSamples(minHz, 2048); if (nHiPeriodInSamples <= nLowPeriodInSamples) printf("Bad range for pitch detection."); if (1024 < nHiPeriodInSamples) printf("Not enough data."); double *results = new double[nHiPeriodInSamples - nLowPeriodInSamples]; //-------------------------2-------------------------// for (int period = nLowPeriodInSamples; period < nHiPeriodInSamples; period += nResolution) { double sum = 0; // for each sample, find correlation. 
(If they are far apart, small) for (int i = 0; i < 1024 - period; i++) sum += data[i] * data[i + period]; double mean = sum / 1024.0; results[period - nLowPeriodInSamples] = mean; } //-------------------------3-------------------------// // find the best indices int *bestIndices = findBestCandidates(nCandidates, results, nHiPeriodInSamples - nLowPeriodInSamples - 1); //note findBestCandidates modifies parameter // convert back to Hz double *res = new double[nCandidates]; for (int i=0; i < nCandidates;i++) res[i] = periodInSamplesToHz(bestIndices[i]+nLowPeriodInSamples, 2048); double pitch2 = res[0]; free(res); free(results); return pitch2; } /// Finds n "best" values from an array. Returns the indices of the best parts. /// (One way to do this would be to sort the array, but that could take too long. /// Warning: Changes the contents of the array!!! Do not use result array afterwards. int* findBestCandidates(int n, double* inputs,int length) { //int length = inputs.Length; if (length < n) printf("Length of inputs is not long enough."); int *res = new int[n]; double minValue = 0; for (int c = 0; c < n; c++) { // find the highest. double fBestValue = minValue; int nBestIndex = -1; for (int i = 0; i < length; i++) { if (inputs[i] > fBestValue) { nBestIndex = i; fBestValue = inputs[i]; } } // record this highest value res[c] = nBestIndex; // now blank out that index. if(nBestIndex!=-1) inputs[nBestIndex] = minValue; } return res; } int hzToPeriodInSamples(double hz, int sampleRate) { return (int)(1 / (hz / (double)sampleRate)); } double periodInSamplesToHz(int period, int sampleRate) { return 1 / (period / (double)sampleRate); } Thanks, Niall. Edit: Changed the code to implement a high pass filter with a cutoff of 30hz (from What Are High-Pass and Low-Pass Filters?, can anyone tell me how to convert the low-pass filter using convolution to a high-pass one?) but it's still returning random results. Plugging it into a VST host and using VST plugins to compare spectrums isn't an option to me unfortunately.
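
Since the FFT round-trip above only exists to remove energy below roughly 30 Hz, a time-domain DC-blocking filter may be simpler to get right before the buffer reaches the autocorrelation: no FFT, no bin bookkeeping, and it runs straight over the 1024 samples. A minimal sketch; the coefficient 0.995 is a conventional choice, not taken from the original code:

    #include <cstddef>

    // One-pole DC-blocking (high-pass) filter: y[n] = x[n] - x[n-1] + R * y[n-1].
    // With R close to 1.0 the cutoff is very low, enough to strip a DC/mic offset.
    void dcBlock(const short* in, double* out, std::size_t n, double R = 0.995)
    {
        double prevIn = 0.0;
        double prevOut = 0.0;
        for (std::size_t i = 0; i < n; ++i) {
            const double x = static_cast<double>(in[i]);
            const double y = x - prevIn + R * prevOut;
            prevIn = x;
            prevOut = y;
            out[i] = y; // filtered sample, ready for detectPitchCalculation()
        }
    }

Separately, note that DetectPitch ignores the value returned by doHighPassFilter(data) and passes the raw buffer to detectPitchCalculation, so whichever filter is used, its output is what has to be handed on.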

    Read the article

  • Failure with LogonUser in MC++

    - by Alikar
    After fighting with this for a week I have not really gotten anywhere in why it constantly fails in my code, but not in other examples. My code, which while it compiles, will not log into a user that I know has the correct login information. Where it fails is the following line: wi = gcnew WindowsIdentity(token); It fails here because the token is zero, meaning that it was never set to a user token. Here is my full code: #ifndef UNCAPI_H #define UNCAPI_H #include <windows.h> #pragma once using namespace System; using namespace System::Runtime::InteropServices; using namespace System::Security::Principal; using namespace System::Security::Permissions; namespace UNCAPI { public ref class UNCAccess { public: //bool Logon(String ^_srUsername, String ^_srDomain, String ^_srPassword); [PermissionSetAttribute(SecurityAction::Demand, Name = "FullTrust")] bool Logon(String ^_srUsername, String ^_srDomain, String ^_srPassword) { bool bSuccess = false; token = IntPtr(0); bSuccess = LogonUser(_srUsername, _srDomain, _srPassword, 8, 0, &tokenHandle); if(bSuccess) { wi = gcnew WindowsIdentity(token); wic = wi->Impersonate(); } return bSuccess; } void UNCAccess::Logoff() { if (wic != nullptr ) { wic->Undo(); } CloseHandle((int*)token.ToPointer()); } private: [DllImport("advapi32.dll", SetLastError=true)]//[DllImport("advapi32.DLL", EntryPoint="LogonUserW", SetLastError=true, CharSet=CharSet::Unicode, ExactSpelling=true, CallingConvention=CallingConvention::StdCall)] bool static LogonUser(String ^lpszUsername, String ^lpszDomain, String ^lpszPassword, int dwLogonType, int dwLogonProvider, IntPtr *phToken); [DllImport("KERNEL32.DLL", EntryPoint="CloseHandle", SetLastError=true, CharSet=CharSet::Unicode, ExactSpelling=true, CallingConvention=CallingConvention::StdCall)] bool static CloseHandle(int *handle); IntPtr token; WindowsIdentity ^wi; WindowsImpersonationContext ^wic; };// End of Class UNCAccess }// End of Name Space #endif UNCAPI_H Now using this slightly modified example from Microsoft I was able to get a login and a token: #using <mscorlib.dll> #using <System.dll> using namespace System; using namespace System::Runtime::InteropServices; using namespace System::Security::Principal; using namespace System::Security::Permissions; [assembly:SecurityPermissionAttribute(SecurityAction::RequestMinimum, UnmanagedCode=true)] [assembly:PermissionSetAttribute(SecurityAction::RequestMinimum, Name = "FullTrust")]; [DllImport("advapi32.dll", SetLastError=true)] bool LogonUser(String^ lpszUsername, String^ lpszDomain, String^ lpszPassword, int dwLogonType, int dwLogonProvider, IntPtr* phToken); [DllImport("kernel32.dll", CharSet=System::Runtime::InteropServices::CharSet::Auto)] int FormatMessage(int dwFlags, IntPtr* lpSource, int dwMessageId, int dwLanguageId, String^ lpBuffer, int nSize, IntPtr *Arguments); [DllImport("kernel32.dll", CharSet=CharSet::Auto)] bool CloseHandle(IntPtr handle); [DllImport("advapi32.dll", CharSet=CharSet::Auto, SetLastError=true)] bool DuplicateToken(IntPtr ExistingTokenHandle, int SECURITY_IMPERSONATION_LEVEL, IntPtr* DuplicateTokenHandle); // GetErrorMessage formats and returns an error message // corresponding to the input errorCode. 
String^ GetErrorMessage(int errorCode) { int FORMAT_MESSAGE_ALLOCATE_BUFFER = 0x00000100; int FORMAT_MESSAGE_IGNORE_INSERTS = 0x00000200; int FORMAT_MESSAGE_FROM_SYSTEM = 0x00001000; //int errorCode = 0x5; //ERROR_ACCESS_DENIED //throw new System.ComponentModel.Win32Exception(errorCode); int messageSize = 255; String^ lpMsgBuf = ""; int dwFlags = FORMAT_MESSAGE_ALLOCATE_BUFFER | FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_IGNORE_INSERTS; IntPtr ptrlpSource = IntPtr::Zero; IntPtr prtArguments = IntPtr::Zero; int retVal = FormatMessage(dwFlags, &ptrlpSource, errorCode, 0, lpMsgBuf, messageSize, &prtArguments); if (0 == retVal) { throw gcnew Exception(String::Format( "Failed to format message for error code {0}. ", errorCode)); } return lpMsgBuf; } // Test harness. // If you incorporate this code into a DLL, be sure to demand FullTrust. [PermissionSetAttribute(SecurityAction::Demand, Name = "FullTrust")] int main() { IntPtr tokenHandle = IntPtr(0); IntPtr dupeTokenHandle = IntPtr(0); try { String^ userName; String^ domainName; // Get the user token for the specified user, domain, and password using the // unmanaged LogonUser method. // The local machine name can be used for the domain name to impersonate a user on this machine. Console::Write("Enter the name of the domain on which to log on: "); domainName = Console::ReadLine(); Console::Write("Enter the login of a user on {0} that you wish to impersonate: ", domainName); userName = Console::ReadLine(); Console::Write("Enter the password for {0}: ", userName); const int LOGON32_PROVIDER_DEFAULT = 0; //This parameter causes LogonUser to create a primary token. const int LOGON32_LOGON_INTERACTIVE = 2; const int SecurityImpersonation = 2; tokenHandle = IntPtr::Zero; dupeTokenHandle = IntPtr::Zero; // Call LogonUser to obtain a handle to an access token. bool returnValue = LogonUser(userName, domainName, Console::ReadLine(), LOGON32_LOGON_INTERACTIVE, LOGON32_PROVIDER_DEFAULT, &tokenHandle); Console::WriteLine("LogonUser called."); if (false == returnValue) { int ret = Marshal::GetLastWin32Error(); Console::WriteLine("LogonUser failed with error code : {0}", ret); Console::WriteLine("\nError: [{0}] {1}\n", ret, GetErrorMessage(ret)); int errorCode = 0x5; //ERROR_ACCESS_DENIED throw gcnew System::ComponentModel::Win32Exception(errorCode); } Console::WriteLine("Did LogonUser Succeed? {0}", (returnValue?"Yes":"No")); Console::WriteLine("Value of Windows NT token: {0}", tokenHandle); // Check the identity. Console::WriteLine("Before impersonation: {0}", WindowsIdentity::GetCurrent()->Name); bool retVal = DuplicateToken(tokenHandle, SecurityImpersonation, &dupeTokenHandle); if (false == retVal) { CloseHandle(tokenHandle); Console::WriteLine("Exception thrown in trying to duplicate token."); return -1; } // The token that is passed to the following constructor must // be a primary token in order to use it for impersonation. WindowsIdentity^ newId = gcnew WindowsIdentity(dupeTokenHandle); WindowsImpersonationContext^ impersonatedUser = newId->Impersonate(); // Check the identity. Console::WriteLine("After impersonation: {0}", WindowsIdentity::GetCurrent()->Name); // Stop impersonating the user. impersonatedUser->Undo(); // Check the identity. Console::WriteLine("After Undo: {0}", WindowsIdentity::GetCurrent()->Name); // Free the tokens. if (tokenHandle != IntPtr::Zero) CloseHandle(tokenHandle); if (dupeTokenHandle != IntPtr::Zero) CloseHandle(dupeTokenHandle); } catch(Exception^ ex) { Console::WriteLine("Exception occurred. 
{0}", ex->Message); } Console::ReadLine(); }// end of function Why should Microsoft's code succeed, where mine fails?

    Read the article

  • IOException: Unable To Delete Images Due To File Lock

    - by Arslan Pervaiz
    I am Unable To Delete Image File From My Server Path It Gaves Error That The Process Cannot Access The File "FileName" Because it is being Used By Another Process. I Tried Many Methods But Still All In Vain. Please Help me Out in This Issue. Here is My Code Snippet. using System; using System.Data; using System.Web; using System.Data.SqlClient; using System.Web.UI; using System.Web.UI.HtmlControls; using System.Globalization; using System.Web.Security; using System.Text; using System.DirectoryServices; using System.Collections; using System.IO; using System.Drawing; using System.Drawing.Imaging; using System.Drawing.Drawing2D; //============ Main Block ================= byte[] data = (byte[])ds.Tables[0].Rows[0][0]; MemoryStream ms = new MemoryStream(data); Image returnImage = Image.FromStream(ms); returnImage.Save(Server.MapPath(".\\TmpImages\\SavedImage.jpg"), System.Drawing.Imaging.ImageFormat.Jpeg); returnImage.Dispose(); \\ I Tried this Dispose Method To Unlock The File But Nothing Done. ms.Close(); \\ I Tried The Memory Stream Close Method Also But Its Also Not Worked For Me. watermark(); \\ Here is My Water Mark Method That Print Water Mark Image on My Saved Image (Image That is Converted From Byte Array) DeleteImages(); \\ Here is My Delete Method That I Call To Delete The Images //===== ==== My Delete Method To Delete Files================== public void DeleteImages() { try { File.Delete(Server.MapPath(".\\TmpImages\\WaterMark.jpg")); \\This Image Deleted Fine. File.Delete(Server.MapPath(".\\TmpImages\\SavedImage.jpg")); \\ Exception Thrown On Deleting of This Image. } catch (Exception ex) { LogManager.LogException(ex, "Error in Deleting Images."); Master.ShowMessage(ex.Message, true); } } \ ==== Method Declartion That Make Watermark of One Image On Another Image.======= public void watermark() { //create a image object containing the photograph to watermark Image imgPhoto = Image.FromFile(Server.MapPath(".\\TmpImages\\SavedImage.jpg")); int phWidth = imgPhoto.Width; int phHeight = imgPhoto.Height; //create a Bitmap the Size of the original photograph Bitmap bmPhoto = new Bitmap(phWidth, phHeight, PixelFormat.Format24bppRgb); bmPhoto.SetResolution(imgPhoto.HorizontalResolution, imgPhoto.VerticalResolution); //load the Bitmap into a Graphics object Graphics grPhoto = Graphics.FromImage(bmPhoto); //create a image object containing the watermark Image imgWatermark = new Bitmap(Server.MapPath(".\\TmpImages\\PrintasWatermark.jpg")); int wmWidth = imgWatermark.Width; int wmHeight = imgWatermark.Height; //Set the rendering quality for this Graphics object grPhoto.SmoothingMode = SmoothingMode.AntiAlias; //Draws the photo Image object at original size to the graphics object. grPhoto.DrawImage( imgPhoto, // Photo Image object new Rectangle(0, 0, phWidth, phHeight), // Rectangle structure 0, // x-coordinate of the portion of the source image to draw. 0, // y-coordinate of the portion of the source image to draw. phWidth, // Width of the portion of the source image to draw. phHeight, // Height of the portion of the source image to draw. 
GraphicsUnit.Pixel); // Units of measure //------------------------------------------------------- //to maximize the size of the Copyright message we will //test multiple Font sizes to determine the largest posible //font we can use for the width of the Photograph //define an array of point sizes you would like to consider as possiblities //------------------------------------------------------- //Define the text layout by setting the text alignment to centered StringFormat StrFormat = new StringFormat(); StrFormat.Alignment = StringAlignment.Center; //define a Brush which is semi trasparent black (Alpha set to 153) SolidBrush semiTransBrush2 = new SolidBrush(Color.FromArgb(153, 0, 0, 0)); //define a Brush which is semi trasparent white (Alpha set to 153) SolidBrush semiTransBrush = new SolidBrush(Color.FromArgb(153, 255, 255, 255)); //------------------------------------------------------------ //Step #2 - Insert Watermark image //------------------------------------------------------------ //Create a Bitmap based on the previously modified photograph Bitmap Bitmap bmWatermark = new Bitmap(bmPhoto); bmWatermark.SetResolution(imgPhoto.HorizontalResolution, imgPhoto.VerticalResolution); //Load this Bitmap into a new Graphic Object Graphics grWatermark = Graphics.FromImage(bmWatermark); //To achieve a transulcent watermark we will apply (2) color //manipulations by defineing a ImageAttributes object and //seting (2) of its properties. ImageAttributes imageAttributes = new ImageAttributes(); //The first step in manipulating the watermark image is to replace //the background color with one that is trasparent (Alpha=0, R=0, G=0, B=0) //to do this we will use a Colormap and use this to define a RemapTable ColorMap colorMap = new ColorMap(); //My watermark was defined with a background of 100% Green this will //be the color we search for and replace with transparency colorMap.OldColor = Color.FromArgb(255, 0, 255, 0); colorMap.NewColor = Color.FromArgb(0, 0, 0, 0); ColorMap[] remapTable = { colorMap }; imageAttributes.SetRemapTable(remapTable, ColorAdjustType.Bitmap); //The second color manipulation is used to change the opacity of the //watermark. This is done by applying a 5x5 matrix that contains the //coordinates for the RGBA space. By setting the 3rd row and 3rd column //to 0.3f we achive a level of opacity float[][] colorMatrixElements = { new float[] {1.0f, 0.0f, 0.0f, 0.0f, 0.0f}, new float[] {0.0f, 1.0f, 0.0f, 0.0f, 0.0f}, new float[] {0.0f, 0.0f, 1.0f, 0.0f, 0.0f}, new float[] {0.0f, 0.0f, 0.0f, 0.3f, 0.0f}, new float[] {0.0f, 0.0f, 0.0f, 0.0f, 1.0f}}; ColorMatrix wmColorMatrix = new ColorMatrix(colorMatrixElements); imageAttributes.SetColorMatrix(wmColorMatrix, ColorMatrixFlag.Default, ColorAdjustType.Bitmap); //For this example we will place the watermark in the upper right //hand corner of the photograph. offset down 10 pixels and to the //left 10 pixles int xPosOfWm = ((phWidth - wmWidth) - 10); int yPosOfWm = 10; grWatermark.DrawImage(imgWatermark, new Rectangle(xPosOfWm, yPosOfWm, wmWidth, wmHeight), //Set the detination Position 0, // x-coordinate of the portion of the source image to draw. 0, // y-coordinate of the portion of the source image to draw. wmWidth, // Watermark Width wmHeight, // Watermark Height GraphicsUnit.Pixel, // Unit of measurment imageAttributes); //ImageAttributes Object //Replace the original photgraphs bitmap with the new Bitmap imgPhoto = bmWatermark; grPhoto.Dispose(); grWatermark.Dispose(); //save new image to file system. 
imgPhoto.Save(Server.MapPath(".\\TmpImages\\WaterMark.jpg"), ImageFormat.Jpeg); imgPhoto.Dispose(); imgWatermark.Dispose(); }
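
The lock on SavedImage.jpg most likely comes from Image.FromFile inside watermark(), which keeps the source file open for the lifetime of the Image; that Image is later replaced by the reassignment imgPhoto = bmWatermark and never disposed, so the handle is still open when DeleteImages() runs. A small sketch of loading a detached copy so the file is released immediately (the helper name is made up for the example):

    using System.Drawing;

    static Image LoadDetached(string path)
    {
        // Image.FromFile keeps 'path' locked until the Image is disposed,
        // so copy the pixels into a memory-backed Bitmap and let the original go.
        using (Image fromDisk = Image.FromFile(path))
        {
            return new Bitmap(fromDisk);
        }
    }

Calling Image imgPhoto = LoadDetached(Server.MapPath(".\\TmpImages\\SavedImage.jpg")); at the top of watermark(), and disposing imgWatermark and the final bitmap after the save, should leave no handle behind for File.Delete to trip over.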

    Read the article

  • MVC 3 Client Validation works intermittently

    - by Gutek
    I have MVC 3 version, System.Web.Mvc product version is: 3.0.20105.0 modified on 5th of Jan 2011 - i think that's the latest. I’ve notice that client validation is not working as it suppose in the application that we are creating, so I’ve made a quick test. I’ve created basic MVC 3 Application using Internet Application template. I’ve added Test Controller: using System.Web.Mvc; using MvcApplication3.Models; namespace MvcApplication3.Controllers { public class TestController : Controller { public ActionResult Index() { return View(); } public ActionResult Create() { Sample model = new Sample(); return View(model); } [HttpPost] public ActionResult Create(Sample model) { if(!ModelState.IsValid) { return View(); } return RedirectToAction("Display"); } public ActionResult Display() { Sample model = new Sample(); model.Age = 10; model.CardNumber = "1324234"; model.Email = "[email protected]"; model.Title = "hahah"; return View(model); } } } Model: using System.ComponentModel.DataAnnotations; namespace MvcApplication3.Models { public class Sample { [Required] public string Title { get; set; } [Required] public string Email { get; set; } [Required] [Range(4, 120, ErrorMessage = "Oi! Common!")] public short Age { get; set; } [Required] public string CardNumber { get; set; } } } And 3 views: Create: @model MvcApplication3.Models.Sample @{ ViewBag.Title = "Create"; Layout = "~/Views/Shared/_Layout.cshtml"; } <h2>Create</h2> <script src="@Url.Content("~/Scripts/jquery.validate.min.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery.validate.unobtrusive.min.js")" type="text/javascript"></script> @*@{ Html.EnableClientValidation(); }*@ @using (Html.BeginForm()) { @Html.ValidationSummary(false) <fieldset> <legend>Sample</legend> <div class="editor-label"> @Html.LabelFor(model => model.Title) </div> <div class="editor-field"> @Html.EditorFor(model => model.Title) @Html.ValidationMessageFor(model => model.Title) </div> <div class="editor-label"> @Html.LabelFor(model => model.Email) </div> <div class="editor-field"> @Html.EditorFor(model => model.Email) @Html.ValidationMessageFor(model => model.Email) </div> <div class="editor-label"> @Html.LabelFor(model => model.Age) </div> <div class="editor-field"> @Html.EditorFor(model => model.Age) @Html.ValidationMessageFor(model => model.Age) </div> <div class="editor-label"> @Html.LabelFor(model => model.CardNumber) </div> <div class="editor-field"> @Html.EditorFor(model => model.CardNumber) @Html.ValidationMessageFor(model => model.CardNumber) </div> <p> <input type="submit" value="Create" /> </p> </fieldset> @*<fieldset> @Html.EditorForModel() <p> <input type="submit" value="Create" /> </p> </fieldset> *@ } <div> @Html.ActionLink("Back to List", "Index") </div> Display: @model MvcApplication3.Models.Sample @{ ViewBag.Title = "Display"; Layout = "~/Views/Shared/_Layout.cshtml"; } <h2>Display</h2> <fieldset> <legend>Sample</legend> <div class="display-label">Title</div> <div class="display-field">@Model.Title</div> <div class="display-label">Email</div> <div class="display-field">@Model.Email</div> <div class="display-label">Age</div> <div class="display-field">@Model.Age</div> <div class="display-label">CardNumber</div> <div class="display-field">@Model.CardNumber</div> </fieldset> <p> @Html.ActionLink("Back to List", "Index") </p> Index: @{ ViewBag.Title = "Index"; Layout = "~/Views/Shared/_Layout.cshtml"; } <h2>Index</h2> <p> @Html.ActionLink("Create", "Create") </p> <p> @Html.ActionLink("Display", "Display") </p> Everything 
is default here: Create controller, AddView from the controller action with the model specified, the proper scaffold template, and the layout provided by the sample application. When I go to /Test/Create, client validation in most cases works only for the Title and Age fields; after clicking Create it works for all fields (Create does not go to the server). However, in some cases (after a build) Title validation does not work but Email does, or only CardNumber does, or Title and CardNumber do but Email does not. Validation never works for all fields before clicking Create. I've tried creating the form with Html.EditorForModel as well as enforcing client validation just before BeginForm: @{ Html.EnableClientValidation(); } I'm providing the source code for this sample on Dropbox, in case our dev environment is broken. I've tested on IE 8 and Chrome 10 beta. Just in case, the validation scripts are enabled in web.config: <appSettings> <add key="ClientValidationEnabled" value="true"/> <add key="UnobtrusiveJavaScriptEnabled" value="true"/> </appSettings> So my questions are: Is there a way to ensure that client validation works as it is supposed to, rather than intermittently? Is this intended behavior, and am I missing something in the configuration or implementation?
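
One thing worth ruling out before anything else is script ordering: jquery.validate and jquery.validate.unobtrusive only attach correctly when jQuery itself is loaded once, and before them (usually in _Layout.cshtml). A hedged sketch of explicit, ordered references plus a manual re-parse; the jQuery file name is the one shipped with the default MVC 3 template:

    <script src="@Url.Content("~/Scripts/jquery-1.4.4.min.js")" type="text/javascript"></script>
    <script src="@Url.Content("~/Scripts/jquery.validate.min.js")" type="text/javascript"></script>
    <script src="@Url.Content("~/Scripts/jquery.validate.unobtrusive.min.js")" type="text/javascript"></script>
    <script type="text/javascript">
        // Re-parse the data-val-* attributes once the DOM is ready; harmless if the
        // automatic parse already ran, useful if it ran before the form was rendered.
        $(function () {
            $.validator.unobtrusive.parse("form");
        });
    </script>

If the behaviour still changes from build to build after that, comparing the rendered data-val-* attributes on the fields that stop validating is the quickest way to tell whether the problem is server-side metadata or client-side wiring.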

    Read the article

  • Warning: Cannot modify header information - headers already sent

    - by Pankaj Khurana
    Hi, I am using code available on http://www.forosdelweb.com/f18/zip-lib-php-archivo-zip-vacio-431133/ for creating zip file. First file-zip.lib.php <?php /* $Id: zip.lib.php,v 1.1 2004/02/14 15:21:18 anoncvs_tusedb Exp $ */ // vim: expandtab sw=4 ts=4 sts=4: /** * Zip file creation class. * Makes zip files. * * Last Modification and Extension By : * * Hasin Hayder * HomePage : www.hasinme.info * Email : [email protected] * IDE : PHP Designer 2005 * * * Originally Based on : * * http://www.zend.com/codex.php?id=535&single=1 * By Eric Mueller <[email protected]> * * http://www.zend.com/codex.php?id=470&single=1 * by Denis125 <[email protected]> * * a patch from Peter Listiak <[email protected]> for last modified * date and time of the compressed file * * Official ZIP file format: http://www.pkware.com/appnote.txt * * @access public */ class zipfile { /** * Array to store compressed data * * @var array $datasec */ var $datasec = array(); /** * Central directory * * @var array $ctrl_dir */ var $ctrl_dir = array(); /** * End of central directory record * * @var string $eof_ctrl_dir */ var $eof_ctrl_dir = "\x50\x4b\x05\x06\x00\x00\x00\x00"; /** * Last offset position * * @var integer $old_offset */ var $old_offset = 0; /** * Converts an Unix timestamp to a four byte DOS date and time format (date * in high two bytes, time in low two bytes allowing magnitude comparison). * * @param integer the current Unix timestamp * * @return integer the current date in a four byte DOS format * * @access private */ function unix2DosTime($unixtime = 0) { $timearray = ($unixtime == 0) ? getdate() : getdate($unixtime); if ($timearray['year'] < 1980) { $timearray['year'] = 1980; $timearray['mon'] = 1; $timearray['mday'] = 1; $timearray['hours'] = 0; $timearray['minutes'] = 0; $timearray['seconds'] = 0; } // end if return (($timearray['year'] - 1980) << 25) | ($timearray['mon'] << 21) | ($timearray['mday'] << 16) | ($timearray['hours'] << 11) | ($timearray['minutes'] << 5) | ($timearray['seconds'] >> 1); } // end of the 'unix2DosTime()' method /** * Adds "file" to archive * * @param string file contents * @param string name of the file in the archive (may contains the path) * @param integer the current timestamp * * @access public */ function addFile($data, $name, $time = 0) { $name = str_replace('', '/', $name); $dtime = dechex($this->unix2DosTime($time)); $hexdtime = 'x' . $dtime[6] . $dtime[7] . 'x' . $dtime[4] . $dtime[5] . 'x' . $dtime[2] . $dtime[3] . 'x' . $dtime[0] . $dtime[1]; eval('$hexdtime = "' . $hexdtime . 
'";'); $fr = "\x50\x4b\x03\x04"; $fr .= "\x14\x00"; // ver needed to extract $fr .= "\x00\x00"; // gen purpose bit flag $fr .= "\x08\x00"; // compression method $fr .= $hexdtime; // last mod time and date // "local file header" segment $unc_len = strlen($data); $crc = crc32($data); $zdata = gzcompress($data); $zdata = substr(substr($zdata, 0, strlen($zdata) - 4), 2); // fix crc bug $c_len = strlen($zdata); $fr .= pack('V', $crc); // crc32 $fr .= pack('V', $c_len); // compressed filesize $fr .= pack('V', $unc_len); // uncompressed filesize $fr .= pack('v', strlen($name)); // length of filename $fr .= pack('v', 0); // extra field length $fr .= $name; // "file data" segment $fr .= $zdata; // "data descriptor" segment (optional but necessary if archive is not // served as file) $fr .= pack('V', $crc); // crc32 $fr .= pack('V', $c_len); // compressed filesize $fr .= pack('V', $unc_len); // uncompressed filesize // add this entry to array $this -> datasec[] = $fr; // now add to central directory record $cdrec = "\x50\x4b\x01\x02"; $cdrec .= "\x00\x00"; // version made by $cdrec .= "\x14\x00"; // version needed to extract $cdrec .= "\x00\x00"; // gen purpose bit flag $cdrec .= "\x08\x00"; // compression method $cdrec .= $hexdtime; // last mod time & date $cdrec .= pack('V', $crc); // crc32 $cdrec .= pack('V', $c_len); // compressed filesize $cdrec .= pack('V', $unc_len); // uncompressed filesize $cdrec .= pack('v', strlen($name) ); // length of filename $cdrec .= pack('v', 0 ); // extra field length $cdrec .= pack('v', 0 ); // file comment length $cdrec .= pack('v', 0 ); // disk number start $cdrec .= pack('v', 0 ); // internal file attributes $cdrec .= pack('V', 32 ); // external file attributes - 'archive' bit set $cdrec .= pack('V', $this -> old_offset ); // relative offset of local header $this -> old_offset += strlen($fr); $cdrec .= $name; // optional extra field, file comment goes here // save to central directory $this -> ctrl_dir[] = $cdrec; } // end of the 'addFile()' method /** * Dumps out file * * @return string the zipped file * * @access public */ function file() { $data = implode('', $this -> datasec); $ctrldir = implode('', $this -> ctrl_dir); return $data . $ctrldir . $this -> eof_ctrl_dir . pack('v', sizeof($this -> ctrl_dir)) . // total # of entries "on this disk" pack('v', sizeof($this -> ctrl_dir)) . // total # of entries overall pack('V', strlen($ctrldir)) . // size of central dir pack('V', strlen($data)) . // offset to start of central dir "\x00\x00"; // .zip file comment length } // end of the 'file()' method /** * A Wrapper of original addFile Function * * Created By Hasin Hayder at 29th Jan, 1:29 AM * * @param array An Array of files with relative/absolute path to be added in Zip File * * @access public */ function addFiles($files /*Only Pass Array*/) { foreach($files as $file) { if (is_file($file)) //directory check { $data = implode("",file($file)); $this->addFile($data,$file); } } } /** * A Wrapper of original file Function * * Created By Hasin Hayder at 29th Jan, 1:29 AM * * @param string Output file name * * @access public */ function output($file) { $fp=fopen($file,"w"); fwrite($fp,$this->file()); fclose($fp); } } // end of the 'zipfile' class ?> My second file newzip.php <? 
include("zip.lib.php"); $ziper = new zipfile(); $ziper->addFiles(array("index.htm")); //array of files // the next three lines force an immediate download of the zip file: header("Content-type: application/octet-stream"); header("Content-disposition: attachment; filename=test.zip"); echo $ziper -> file(); ?> I am getting this warning while executing newzip.php Warning: Cannot modify header information - headers already sent by (output started at E:\xampp\htdocs\demo\zip.lib.php:233) in E:\xampp\htdocs\demo\newzip.php on line 6 I am unable to figure out the reason for the same. Please help me on this. Thanks

    Read the article

  • What arguments do I send a function being called by a button in python?

    - by Jared
    I have a UI, in that UI is 4 text fields and 1 int field, then I have a function that calls to another function based on what's inside of the text fields, this function has (self, *args). My function that is being called to takes five arguments and I don't know what to put in it to make it actually work with my UI because python button's send an argument of their own. I have tried self and *args, but it doesn't work. Here is my code, didn't include most of the UI code since it is self explanatory: def crBC(self, IKJoint, FKJoint, bindJoint, xQuan, switch): ''' You should have a controller with an attribute 'ikFkBlend' - The name can be changed after the script executes. Controller should contain an enum - FK/DYN(0), IK(1). Specify the IK joint, then either the dynamic or FK joint, then the bind joint. Then a quantity of joints to pass through and connect. Tested currently on 600 joints (200 x 3), executed in less than a second. Returns nothing. Please open your script editor for details. ''' import itertools # gets children joints of the selected joint chHipIK = cmds.listRelatives(IKJoint, ad = True, type = 'joint') chHipFK = cmds.listRelatives(FKJoint, ad = True, type = 'joint') chHipBind = cmds.listRelatives(bindJoint, ad = True, type = 'joint') # list is built backwards, this reverses the list chHipIK.reverse() chHipFK.reverse() chHipBind.reverse() # appends the initial joint to the list chHipIK.append(IKJoint) chHipFK.append(FKJoint) chHipBind.append(bindJoint) # puts the last joint at the start of the list because the initial joint # was added to the end chHipIK.insert(0, chHipIK.pop()) chHipFK.insert(0, chHipFK.pop()) chHipBind.insert(0, chHipBind.pop()) # pops off the remaining joints in the list the user does not wish to be blended chHipBind[xQuan:] = [] chHipIK[xQuan:] = [] chHipFK[xQuan:] = [] # goes through the bind joints, makes a blend colors for each one, connects # the switch to the blender for a, b, c in itertools.izip(chHipBind, chHipIK, chHipFK): rotBC = cmds.shadingNode('blendColors', asUtility = True, n = a + 'rotate_BC') tranBC = cmds.shadingNode('blendColors', asUtility = True, n = a + 'tran_BC') scaleBC = cmds.shadingNode('blendColors', asUtility = True, n = a + 'scale_BC') cmds.connectAttr(switch + '.ikFkSwitch', rotBC + '.blender') cmds.connectAttr(switch + '.ikFkSwitch', tranBC + '.blender') cmds.connectAttr(switch + '.ikFkSwitch', scaleBC + '.blender') # goes through the ik joints, connects to the blend colors cmds.connectAttr(b + '.rotate', rotBC + '.color1', force = True) cmds.connectAttr(b + '.translate', tranBC + '.color1', force = True) cmds.connectAttr(b + '.scale', scaleBC + '.color1', force = True) # connects FK joints to the blend colors cmds.connectAttr(c + '.rotate', rotBC + '.color2') cmds.connectAttr(c + '.translate', tranBC + '.color2') cmds.connectAttr(c + '.scale', scaleBC + '.color2') # connects blend colors to bind joints cmds.connectAttr(rotBC + '.output', a + '.rotate') cmds.connectAttr(tranBC + '.output', a + '.translate') cmds.connectAttr(scaleBC + '.output', a + '.scale') ------------------- def execCrBC(self, *args): g.crBC(cmds.textField(self.ikJBC, q = True, tx = True), cmds.textField(self.fkJBC, q = True, tx = True), cmds.textField(self.bindJBC, q = True, tx = True), cmds.intField(self.bQBC, q = True, v = True), cmds.textField(self.sCBC, q = True, tx = True)) ------------------- self.bQBC = cmds.intField() cmds.text(l = '') self.sCBC = cmds.textField() cmds.text(l = '') cmds.button(l = 'Help Docs', c = self.crBC.__doc__) 
cmds.setParent('..') cmds.button(l = 'Create', c = self.execCrBC) Here is the code causing the problem as requested: import maya.cmds as cmds import jtRigUI.createDummyRig as dum import jtRigUI.createSkeleton as sk import jtRigUI.generalUtilities as gu import jtRigUI.createLegRig as lr import jtRigUI.createArmRig as ar class RUI(dum.Dict, dum.Dummy, sk.Skel, sk.FiSkel, lr.LeanLocs, lr.LegRig, ar.ArmRig, gu.Gutils): def __init__(self, charNameUI, gScaleUI, fingButtonGrp, thumbCheckBox, spineButtonGrp, neckButtonGrp, ikJBC, fkJBC, bindJBC, bQBC, sCBC): rigUI = 'rigUI' if cmds.window(rigUI, exists = True): cmds.deleteUI(rigUI) rigUI = cmds.window(rigUI, t = 'JT Rigging UI', sizeable = False, tb = True, mnb = False, mxb = False, menuBar = True, tlb = True, nm = 5) form = cmds.formLayout() tabs = cmds.tabLayout(innerMarginWidth = 1, innerMarginHeight = 1) rigUIMenu = cmds.menu('Help', hm = True) aboutMenu = cmds.menuItem('about') cmds.popupMenu('about', button = 1) deleteUIMenu = cmds.menu('Delete', hm = True) cmds.menuItem('dummySkeleton') cmds.formLayout(form, edit = True, attachForm = ((tabs, 'top', 0), (tabs, 'left', 0), (tabs, 'bottom', 0), (tabs, 'right', 0)), w = 30) tab1 = cmds.rowColumnLayout('Dummy') #cmds.columnLayout(rowSpacing = 10) #cmds.setParent('..') cmds.frameLayout(l = 'A: Dummy Skeleton Setup', w = 400) self.charNameUI = cmds.textFieldGrp (label="Optional Character Name:", ann="Insert a name for the character or leave empty.", tx = '', w = 1) fingJUI = cmds.frameLayout(l = 'B: Number of Fingers', w = 10) cmds.text('\n', h = 5) self.fingButtonGrp = cmds.radioButtonGrp('fingRadio', p = fingJUI, l = 'Fingers: ', sl = 4, w = 1, numberOfRadioButtons = 4, labelArray4 = ['One', 'Two', 'Three', 'Four'], ct2 = ('left', 'left'), cw5 = [60,60,60,60,60]) self.thumbCheckBox = cmds.checkBoxGrp(l = 'Thumb: ', v1 = True) cmds.text('\n', h = 5) spineJUI = cmds.frameLayout(l = 'C: Number of Spine Joints') cmds.text('\n', h = 5) self.spineButtonGrp = cmds.radioButtonGrp('spineRadio', p = spineJUI, l = 'Spine Joints: ', sl = 2, w = 1, numberOfRadioButtons = 3, labelArray3 = ['Three', 'Five', 'Ten'], ct2 = ('left', 'left'), cw4 = [95,95,95,95]) cmds.text('\n', h = 5) neckJUI = cmds.frameLayout(l = 'D: Number of Neck Joints') cmds.text('\n', h = 5) self.neckButtonGrp = cmds.radioButtonGrp('neckRadio', p = neckJUI, l = 'Neck Joints: ', sl = 0, w = 1, numberOfRadioButtons = 3, labelArray3 = ['Two', 'Three', 'Four'], ct2 = ('left', 'left'), cw4 = [95,95,95,95]) cmds.text('\n', h = 5) cmds.setParent('..') cmds.setParent('..') cmds.setParent('..') cmds.frameLayout('E: Creation') cmds.text('SAVE FIRST: CAN NOT UNDO', bgc = (0.2,0.2,0.2)) cmds.button(l = '\nCreate Dummy Skeleton\n', c = self.build) # also have it make char name field grey cmds.text('Elbows and Knees must have bend.', bgc = (0.2,0.2,0.2)) cmds.columnLayout() cmds.setParent('..') cmds.setParent('..') cmds.setParent('..') cmds.setParent('..') tab2 = cmds.rowColumnLayout('Skeleton') cmds.columnLayout(columnAttach = ('both', 5), rowSpacing = 10, columnWidth = 150) cmds.setParent('..') cmds.frameLayout(l = 'A: Skeleton Setup') cmds.text('SAVE FIRST: CAN NOT UNDO', bgc = (0.2,0.2,0.2)) cmds.button(l = '\nConvert to Skeleton - Orient - Set LRA\n', c = self.buildSkel) self.gScaleUI = cmds.textFieldGrp (label="Scale Multiplier:", ann="Scale multipler of Character: basis for all further base controllers", tx = '1.0', w = 1, ed = False, en = False, visible = True) cmds.frameLayout('B: Manual Orientation') cmds.text('You must manually check finger, 
thumb, leg, foot orientation specifically.\nConfirm rest of joints.\nSpine: X aim, Y point backwards from spine, Z to the side.\nFingers: X is aim, Y points upwards, Z to the side - Spread on Y, curl on Z.\nFoot: Pivots on Y, rolls on Z, leans on X.') cmds.columnLayout() cmds.setParent('..') cmds.frameLayout('C: Finalize Creation of Skeleton') cmds.button(l = '\nFinalize Skeleton\n', c = self.finishS) cmds.setParent('..') cmds.setParent('..') cmds.setParent('..') cmds.setParent('..') tab3 = cmds.rowColumnLayout('Legs') cmds.columnLayout(columnAttach = ('both', 5), rowSpacing = 10, columnWidth = 150) cmds.setParent('..') cmds.frameLayout(l = 'A: Leg Rig Setup') cmds.button(l = '\nGenerate Foot Lean Locators\n', c = self.makeLean) cmds.text('Place on either side of the foot.\nDo not rotate: Automatic orientation in place.') cmds.frameLayout(l = 'B: Rig Legs') cmds.button(l = '\nRig Legs\n', c = self.makeLegs) cmds.setParent('..') cmds.setParent('..') cmds.setParent('..') tab4 = cmds.rowColumnLayout('Arms') cmds.columnLayout(columnAttach = ('both', 5), rowSpacing = 10, columnWidth = 150) cmds.setParent('..') cmds.frameLayout(l = 'A: Arm Rig Setup') cmds.button(l = '\nA: Rig Arms\n', c = self.makeArms) cmds.setParent('..') cmds.setParent('..') tab5 = cmds.rowColumnLayout('Spine and Head') cmds.columnLayout(columnAttach = ('both', 5), rowSpacing = 10, columnWidth = 150) cmds.setParent('..') cmds.frameLayout(l = 'Spine Rig Setup') cmds.setParent('..') cmds.setParent('..') tab6 = cmds.rowColumnLayout('Stretchy IK') cmds.columnLayout(columnAttach = ('both', 5), rowSpacing = 10, columnWidth = 150) cmds.setParent('..') cmds.frameLayout(l = 'Stretchy Setup') cmds.setParent('..') cmds.setParent('..') tab6 = cmds.rowColumnLayout('Extras') cmds.scrollLayout(saw = 600, sah = 600, cr = True) cmds.columnLayout(columnAttach = ('both', 5), rowSpacing = 10, columnWidth = 150) cmds.setParent('..') cmds.frameLayout(l = 'General Utitlities') cmds.text('\nHere are all my general utilities for various things') cmds.frameLayout(l = 'Automatic Blend Colors Creation and Connection') cmds.rowColumnLayout(nc = 5, w = 10) cmds.text('IK Joint:') cmds.text(l = '') cmds.text('FK/Dyn Joint:') cmds.text(l = '') cmds.text('Bind Joint:') self.ikJBC = cmds.textField() cmds.text(l = '') self.fkJBC = cmds.textField() cmds.text(l = '') self.bindJBC = cmds.textField() cmds.text(' \nBlend Quantity:') cmds.text(l = '') cmds.text(' \nSwitch Control:') cmds.text(l = '') cmds.text(l = '') self.bQBC = cmds.intField() cmds.text(l = '') self.sCBC = cmds.textField() cmds.text(l = '') cmds.button(l = 'Help Docs', c = self.crBC.__doc__) cmds.setParent('..') cmds.button(l = 'Create', c = self.execCrBC) cmds.text(l = '') cmds.setParent('..') cmds.frameLayout(l = 'Make Spline IK Curve Stretch And Squash') cmds.rowColumnLayout(nc = 5, w = 10) cmds.text('Curve Name:') cmds.text(l = '') cmds.text('Setup Name:') cmds.text(l = '') cmds.text('Joint Quantity:') self.ikJBC = cmds.textField() cmds.text(l = '') self.fkJBC = cmds.textField() cmds.text(l = '') self.bindJBC = cmds.textField() cmds.text(' \nSwitch Control:') cmds.text(l = '') cmds.text(' \nGlobal Control:') cmds.text(l = '') cmds.text(l = '') self.bQBC = cmds.intField() cmds.text(l = '') self.sCBC = cmds.textField() cmds.text(l = '') cmds.button(l = 'Help Docs', c = self.crBC.__doc__) cmds.setParent('..') cmds.button(l = 'Create', c = self.execCrBC) cmds.setParent('..') cmds.showWindow(rigUI) r = RUI('charNameUI', 'gScaleUI', 'fingButtonGrp', 'thumbCheckBox', 'spineButtonGrp', 
'neckButtonGrp', 'ikJBC', 'fkJBC', 'bindJBC', 'bQBC', 'sCBC') # last modified at 6.20 pm 29th June 2011


  • Azure Diagnostics wrt Custom Logs and honoring scheduledTransferPeriod

    - by kjsteuer
    I have implemented my own TraceListener similar to http://blogs.technet.com/b/meamcs/archive/2013/05/23/diagnostics-of-cloud-services-custom-trace-listener.aspx . One thing I noticed is that that logs show up immediately in My Azure Table Storage. I wonder if this is expected with Custom Trace Listeners or because I am in a development environment. My diagnosics.wadcfg <?xml version="1.0" encoding="utf-8"?> <DiagnosticMonitorConfiguration configurationChangePollInterval="PT1M""overallQuotaInMB="4096" xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration"> <DiagnosticInfrastructureLogs scheduledTransferLogLevelFilter="Information" /> <Directories scheduledTransferPeriod="PT1M"> <IISLogs container="wad-iis-logfiles" /> <CrashDumps container="wad-crash-dumps" /> </Directories> <Logs bufferQuotaInMB="0" scheduledTransferPeriod="PT30M" scheduledTransferLogLevelFilter="Information" /> </DiagnosticMonitorConfiguration> I have changed my approach a bit. Now I am defining in the web config of my webrole. I notice when I set autoflush to true in the webconfig, every thing works but scheduledTransferPeriod is not honored because the flush method pushes to the table storage. I would like to have scheduleTransferPeriod trigger the flush or trigger flush after a certain number of log entries like the buffer is full. Then I can also flush on server shutdown. Is there any method or event on the CustomTraceListener where I can listen to the scheduleTransferPeriod? <system.diagnostics> <!--http://msdn.microsoft.com/en-us/library/sk36c28t(v=vs.110).aspx By default autoflush is false. By default useGlobalLock is true. While we try to be threadsafe, we keep this default for now. Later if we would like to increase performance we can remove this. see http://msdn.microsoft.com/en-us/library/system.diagnostics.trace.usegloballock(v=vs.110).aspx --> <trace> <listeners> <add name="TableTraceListener" type="Pos.Services.Implementation.TableTraceListener, Pos.Services.Implementation" /> <remove name="Default" /> </listeners> </trace> </system.diagnostics> I have modified the custom trace listener to the following: namespace Pos.Services.Implementation { class TableTraceListener : TraceListener { #region Fields //connection string for azure storage readonly string _connectionString; //Custom sql storage table for logs. 
//TODO put in config readonly string _diagnosticsTable; [ThreadStatic] static StringBuilder _messageBuffer; readonly object _initializationSection = new object(); bool _isInitialized; CloudTableClient _tableStorage; readonly object _traceLogAccess = new object(); readonly List<LogEntry> _traceLog = new List<LogEntry>(); #endregion #region Constructors public TableTraceListener() : base("TableTraceListener") { _connectionString = RoleEnvironment.GetConfigurationSettingValue("DiagConnection"); _diagnosticsTable = RoleEnvironment.GetConfigurationSettingValue("DiagTableName"); } #endregion #region Methods /// <summary> /// Flushes the entries to the storage table /// </summary> public override void Flush() { if (!_isInitialized) { lock (_initializationSection) { if (!_isInitialized) { Initialize(); } } } var context = _tableStorage.GetTableServiceContext(); context.MergeOption = MergeOption.AppendOnly; lock (_traceLogAccess) { _traceLog.ForEach(entry => context.AddObject(_diagnosticsTable, entry)); _traceLog.Clear(); } if (context.Entities.Count > 0) { context.BeginSaveChangesWithRetries(SaveChangesOptions.None, (ar) => context.EndSaveChangesWithRetries(ar), null); } } /// <summary> /// Creates the storage table object. This class does not need to be locked because the caller is locked. /// </summary> private void Initialize() { var account = CloudStorageAccount.Parse(_connectionString); _tableStorage = account.CreateCloudTableClient(); _tableStorage.GetTableReference(_diagnosticsTable).CreateIfNotExists(); _isInitialized = true; } public override bool IsThreadSafe { get { return true; } } #region Trace and Write Methods /// <summary> /// Writes the message to a string buffer /// </summary> /// <param name="message">the Message</param> public override void Write(string message) { if (_messageBuffer == null) _messageBuffer = new StringBuilder(); _messageBuffer.Append(message); } /// <summary> /// Writes the message with a line breaker to a string buffer /// </summary> /// <param name="message"></param> public override void WriteLine(string message) { if (_messageBuffer == null) _messageBuffer = new StringBuilder(); _messageBuffer.AppendLine(message); } /// <summary> /// Appends the trace information and message /// </summary> /// <param name="eventCache">the Event Cache</param> /// <param name="source">the Source</param> /// <param name="eventType">the Event Type</param> /// <param name="id">the Id</param> /// <param name="message">the Message</param> public override void TraceEvent(TraceEventCache eventCache, string source, TraceEventType eventType, int id, string message) { base.TraceEvent(eventCache, source, eventType, id, message); AppendEntry(id, eventType, eventCache); } /// <summary> /// Adds the trace information to a collection of LogEntry objects /// </summary> /// <param name="id">the Id</param> /// <param name="eventType">the Event Type</param> /// <param name="eventCache">the EventCache</param> private void AppendEntry(int id, TraceEventType eventType, TraceEventCache eventCache) { if (_messageBuffer == null) _messageBuffer = new StringBuilder(); var message = _messageBuffer.ToString(); _messageBuffer.Length = 0; if (message.EndsWith(Environment.NewLine)) message = message.Substring(0, message.Length - Environment.NewLine.Length); if (message.Length == 0) return; var entry = new LogEntry() { PartitionKey = string.Format("{0:D10}", eventCache.Timestamp >> 30), RowKey = string.Format("{0:D19}", eventCache.Timestamp), EventTickCount = eventCache.Timestamp, Level = (int)eventType, 
EventId = id, Pid = eventCache.ProcessId, Tid = eventCache.ThreadId, Message = message }; lock (_traceLogAccess) _traceLog.Add(entry); } #endregion #endregion } }
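    As far as I can tell, scheduledTransferPeriod is applied by the Windows Azure Diagnostics agent when it moves the buffered trace log to storage; a listener that writes to table storage itself never goes through that pipeline, and TraceListener does not expose an event tied to the transfer schedule. A minimal sketch of one workaround — assuming the listener above stays registered in web.config with autoflush off, and that a 30-minute cadence (matching the .wadcfg) is acceptable — is a timer that calls Trace.Flush() on that interval and once more at shutdown:

    using System;
    using System.Diagnostics;
    using System.Threading;

    // Periodically flushes all registered trace listeners, including the custom
    // TableTraceListener above, so buffered entries reach table storage on a schedule.
    public static class TraceFlushScheduler
    {
        private static Timer _timer;

        // Call from WebRole.OnStart() or Application_Start(); the interval is an assumption.
        public static void Start(TimeSpan interval)
        {
            _timer = new Timer(_ => Trace.Flush(), null, interval, interval);
        }

        // Call from OnStop()/Application_End() so entries buffered since the last tick are not lost.
        public static void Stop()
        {
            if (_timer != null)
            {
                _timer.Dispose();
                _timer = null;
            }
            Trace.Flush();
        }
    }

    Starting it with TraceFlushScheduler.Start(TimeSpan.FromMinutes(30)) approximates the scheduled transfer; flushing on a log-count threshold could be added inside the listener's AppendEntry in the same spirit.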


  • Get the current location from the GPS? Only the default location is shown

    - by Gagandeep
    Need help Urgent!!!!! Did changes with help but still unsuccessful... I have to request location updates, but I am unsuccessful in implementing that... i modified the code but need help so that i can see the current location. PLEASE look through my code and help please.. I am learning this and new to this concept and android.. any help would be appreciated here is my code: package com.GoogleMaps; import java.util.List; import com.google.android.maps.GeoPoint; import com.google.android.maps.MapActivity; import com.google.android.maps.MapController; import com.google.android.maps.MapView; import com.google.android.maps.Overlay; import android.content.Context; import android.graphics.Bitmap; import android.graphics.BitmapFactory; import android.graphics.Canvas; import android.graphics.Paint; import android.graphics.Point; import android.graphics.drawable.Drawable; import android.location.Location; import android.location.LocationListener; import android.location.LocationManager; import android.os.Bundle; import android.widget.Toast; public class MapsActivity extends MapActivity { /** Called when the activity is first created. */ private MapView mapView; private LocationManager lm; private LocationListener ll; private MapController mc; GeoPoint p = null; Drawable defaultMarker = null; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); mapView = (MapView)findViewById(R.id.mapview); //show zoom in/out buttons mapView.setBuiltInZoomControls(true); //Standard view of the map(map/sat) mapView.setSatellite(false); // get zoom tool mapView.setBuiltInZoomControls(true); //get controller of the map for zooming in/out mc = mapView.getController(); // Zoom Level mc.setZoom(18); lm = (LocationManager)getSystemService(Context.LOCATION_SERVICE); ll = new MyLocationListener(); lm.requestLocationUpdates( LocationManager.GPS_PROVIDER, 0, 0, ll); //Get the current location in start-up lm = (LocationManager)getSystemService(Context.LOCATION_SERVICE); ll = new MyLocationListener(); lm.requestLocationUpdates( LocationManager.GPS_PROVIDER, 0, 0, ll); //Get the current location in start-up if (lm.getLastKnownLocation(LocationManager.GPS_PROVIDER) != null){ GeoPoint p = new GeoPoint( (int)(lm.getLastKnownLocation(LocationManager.GPS_PROVIDER).getLatitude()*1000000), (int)(lm.getLastKnownLocation(LocationManager.GPS_PROVIDER).getLongitude()*1000000)); mc.animateTo(p); } MyLocationOverlay myLocationOverlay = new MyLocationOverlay(); List<Overlay> list = mapView.getOverlays(); list.add(myLocationOverlay); } protected class MyLocationOverlay extends com.google.android.maps.Overlay { @Override public boolean draw(Canvas canvas, MapView mapView, boolean shadow, long when) { Paint paint = new Paint(); super.draw(canvas, mapView, shadow); GeoPoint p = null; // Converts lat/lng-Point to OUR coordinates on the screen. 
Point myScreenCoords = new Point(); mapView.getProjection().toPixels(p, myScreenCoords); paint.setStrokeWidth(1); paint.setARGB(255, 255, 255, 255); paint.setStyle(Paint.Style.STROKE); Bitmap bmp = BitmapFactory.decodeResource(getResources(), R.drawable.ic_launcher); canvas.drawBitmap(bmp, myScreenCoords.x, myScreenCoords.y, paint); canvas.drawText("I am here...", myScreenCoords.x, myScreenCoords.y, paint); return true; } } private class MyLocationListener implements LocationListener{ public void onLocationChanged(Location argLocation) { // TODO Auto-generated method stub p = new GeoPoint((int)(argLocation.getLatitude()*1000000), (int)(argLocation.getLongitude()*1000000)); Toast.makeText(getBaseContext(), "New location latitude [" +argLocation.getLatitude() + "] longitude [" + argLocation.getLongitude()+"]", Toast.LENGTH_SHORT).show(); mc.animateTo(p); mapView.invalidate(); // call this so UI of map was updated } public void onProviderDisabled(String provider) { // TODO Auto-generated method stub } public void onProviderEnabled(String provider) { // TODO Auto-generated method stub } public void onStatusChanged(String provider, int status, Bundle extras) { // TODO Auto-generated method stub } } protected boolean isRouteDisplayed() { return false; } } catlog: 11-29 17:40:42.699: D/dalvikvm(371): GC_FOR_MALLOC freed 6074 objects / 369952 bytes in 74ms 11-29 17:40:42.970: I/MapActivity(371): Handling network change notification:CONNECTED 11-29 17:40:42.980: E/MapActivity(371): Couldn't get connection factory client 11-29 17:40:43.190: D/AndroidRuntime(371): Shutting down VM 11-29 17:40:43.190: W/dalvikvm(371): threadid=1: thread exiting with uncaught exception (group=0x4001d800) 11-29 17:40:43.280: E/AndroidRuntime(371): FATAL EXCEPTION: main 11-29 17:40:43.280: E/AndroidRuntime(371): java.lang.NullPointerException 11-29 17:40:43.280: E/AndroidRuntime(371): at com.google.android.maps.PixelConverter.toPixels(PixelConverter.java:71) 11-29 17:40:43.280: E/AndroidRuntime(371): at com.google.android.maps.PixelConverter.toPixels(PixelConverter.java:61) 11-29 17:40:43.280: E/AndroidRuntime(371): at com.GoogleMaps.MapsActivity$MyLocationOverlay.draw(MapsActivity.java:106) 11-29 17:40:43.280: E/AndroidRuntime(371): at com.google.android.maps.OverlayBundle.draw(OverlayBundle.java:42) 11-29 17:40:43.280: E/AndroidRuntime(371): at com.google.android.maps.MapView.onDraw(MapView.java:494) 11-29 17:40:43.280: E/AndroidRuntime(371): at android.view.View.draw(View.java:6740) 11-29 17:40:43.280: E/AndroidRuntime(371): at android.view.ViewGroup.drawChild(ViewGroup.java:1640) 11-29 17:40:43.280: E/AndroidRuntime(371): at android.view.ViewGroup.dispatchDraw(ViewGroup.java:1367) 11-29 17:40:43.280: E/AndroidRuntime(371): at android.view.ViewGroup.drawChild(ViewGroup.java:1638) 11-29 17:40:43.280: E/AndroidRuntime(371): at android.view.ViewGroup.dispatchDraw(ViewGroup.java:1367) 11-29 17:40:43.280: E/AndroidRuntime(371): at android.view.View.draw(View.java:6743) 11-29 17:40:43.280: E/AndroidRuntime(371): at android.widget.FrameLayout.draw(FrameLayout.java:352) 11-29 17:40:43.280: E/AndroidRuntime(371): at android.view.ViewGroup.drawChild(ViewGroup.java:1640) 11-29 17:40:43.280: E/AndroidRuntime(371): at android.view.ViewGroup.dispatchDraw(ViewGroup.java:1367) 11-29 17:40:43.280: E/AndroidRuntime(371): at android.view.ViewGroup.drawChild(ViewGroup.java:1638) 11-29 17:40:43.280: E/AndroidRuntime(371): at android.view.ViewGroup.dispatchDraw(ViewGroup.java:1367) 11-29 17:40:43.280: E/AndroidRuntime(371): at 
android.view.View.draw(View.java:6743) 11-29 17:40:43.280: E/AndroidRuntime(371): at android.widget.FrameLayout.draw(FrameLayout.java:352) 11-29 17:40:43.280: E/AndroidRuntime(371): at com.android.internal.policy.impl.PhoneWindow$DecorView.draw(PhoneWindow.java:1842) 11-29 17:40:43.280: E/AndroidRuntime(371): at android.view.ViewRoot.draw(ViewRoot.java:1407) 11-29 17:40:43.280: E/AndroidRuntime(371): at android.view.ViewRoot.performTraversals(ViewRoot.java:1163) 11-29 17:40:43.280: E/AndroidRuntime(371): at android.view.ViewRoot.handleMessage(ViewRoot.java:1727) 11-29 17:40:43.280: E/AndroidRuntime(371): at android.os.Handler.dispatchMessage(Handler.java:99) 11-29 17:40:43.280: E/AndroidRuntime(371): at android.os.Looper.loop(Looper.java:123) 11-29 17:40:43.280: E/AndroidRuntime(371): at android.app.ActivityThread.main(ActivityThread.java:4627) 11-29 17:40:43.280: E/AndroidRuntime(371): at java.lang.reflect.Method.invokeNative(Native Method) 11-29 17:40:43.280: E/AndroidRuntime(371): at java.lang.reflect.Method.invoke(Method.java:521) 11-29 17:40:43.280: E/AndroidRuntime(371): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:868) 11-29 17:40:43.280: E/AndroidRuntime(371): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:626) 11-29 17:40:43.280: E/AndroidRuntime(371): at dalvik.system.NativeStart.main(Native Method) 11-29 17:40:45.779: D/dalvikvm(371): GC_FOR_MALLOC freed 5970 objects / 506624 bytes in 1179ms 11-29 17:40:45.779: I/dalvikvm-heap(371): Grow heap (frag case) to 3.147MB for 17858-byte allocation 11-29 17:40:45.870: D/dalvikvm(371): GC_FOR_MALLOC freed 56 objects / 2304 bytes in 92ms 11-29 17:40:45.960: D/dalvikvm(371): GC_EXPLICIT freed 3459 objects / 196432 bytes in 74ms 11-29 17:40:48.310: D/dalvikvm(371): GC_EXPLICIT freed 116 objects / 41448 bytes in 68ms 11-29 17:40:49.540: I/Process(371): Sending signal. PID: 371 SIG: 9


  • Where to find an xmoov port to C#? (for HTTP pseudo-streaming from a C# app)

    - by Ole Jak
    So I found this beautifull script for FLV video format Http Pseudo Streaming but in is in PHP ( found on http://stream.xmoov.com/ ) So does any one know opensource translations or can translate such PHP code into C#? <?php /* xmoov-php 1.0 Development version 0.9.3 beta by: Eric Lorenzo Benjamin jr. webmaster (AT) xmoov (DOT) com originally inspired by Stefan Richter at flashcomguru.com bandwidth limiting by Terry streamingflvcom (AT) dedicatedmanagers (DOT) com This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License. For more information, visit http://creativecommons.org/licenses/by-nc-sa/3.0/ For the full license, visit http://creativecommons.org/licenses/by-nc-sa/3.0/legalcode or send a letter to Creative Commons, 543 Howard Street, 5th Floor, San Francisco, California, 94105, USA. */ // SCRIPT CONFIGURATION //------------------------------------------------------------------------------------------ // MEDIA PATH // // you can configure these settings to point to video files outside the public html folder. //------------------------------------------------------------------------------------------ // points to server root define('XMOOV_PATH_ROOT', ''); // points to the folder containing the video files. define('XMOOV_PATH_FILES', 'video/'); //------------------------------------------------------------------------------------------ // SCRIPT BEHAVIOR //------------------------------------------------------------------------------------------ //set to TRUE to use bandwidth limiting. define('XMOOV_CONF_LIMIT_BANDWIDTH', TRUE); //set to FALSE to prohibit caching of video files. define('XMOOV_CONF_ALLOW_FILE_CACHE', FALSE); //------------------------------------------------------------------------------------------ // BANDWIDTH SETTINGS // // these settings are only needed when using bandwidth limiting. // // bandwidth is limited my sending a limited amount of video data(XMOOV_BW_PACKET_SIZE), // in specified time intervals(XMOOV_BW_PACKET_INTERVAL). // avoid time intervals over 1.5 seconds for best results. // // you can also control bandwidth limiting via http command using your video player. // the function getBandwidthLimit($part) holds three preconfigured presets(low, mid, high), // which can be changed to meet your needs //------------------------------------------------------------------------------------------ //set how many kilobytes will be sent per time interval define('XMOOV_BW_PACKET_SIZE', 90); //set the time interval in which data packets will be sent in seconds. define('XMOOV_BW_PACKET_INTERVAL', 0.3); //set to TRUE to control bandwidth externally via http. 
define('XMOOV_CONF_ALLOW_DYNAMIC_BANDWIDTH', TRUE); //------------------------------------------------------------------------------------------ // DYNAMIC BANDWIDTH CONTROL //------------------------------------------------------------------------------------------ function getBandwidthLimit($part) { switch($part) { case 'interval' : switch($_GET[XMOOV_GET_BANDWIDTH]) { case 'low' : return 1; break; case 'mid' : return 0.5; break; case 'high' : return 0.3; break; default : return XMOOV_BW_PACKET_INTERVAL; break; } break; case 'size' : switch($_GET[XMOOV_GET_BANDWIDTH]) { case 'low' : return 10; break; case 'mid' : return 40; break; case 'high' : return 90; break; default : return XMOOV_BW_PACKET_SIZE; break; } break; } } //------------------------------------------------------------------------------------------ // INCOMING GET VARIABLES CONFIGURATION // // use these settings to configure how video files, seek position and bandwidth settings are accessed by your player //------------------------------------------------------------------------------------------ define('XMOOV_GET_FILE', 'file'); define('XMOOV_GET_POSITION', 'position'); define('XMOOV_GET_AUTHENTICATION', 'key'); define('XMOOV_GET_BANDWIDTH', 'bw'); // END SCRIPT CONFIGURATION - do not change anything beyond this point if you do not know what you are doing //------------------------------------------------------------------------------------------ // PROCESS FILE REQUEST //------------------------------------------------------------------------------------------ if(isset($_GET[XMOOV_GET_FILE]) && isset($_GET[XMOOV_GET_POSITION])) { // PROCESS VARIABLES # get seek position $seekPos = intval($_GET[XMOOV_GET_POSITION]); # get file name $fileName = htmlspecialchars($_GET[XMOOV_GET_FILE]); # assemble file path $file = XMOOV_PATH_ROOT . XMOOV_PATH_FILES . $fileName; # assemble packet interval $packet_interval = (XMOOV_CONF_ALLOW_DYNAMIC_BANDWIDTH && isset($_GET[XMOOV_GET_BANDWIDTH])) ? getBandwidthLimit('interval') : XMOOV_BW_PACKET_INTERVAL; # assemble packet size $packet_size = ((XMOOV_CONF_ALLOW_DYNAMIC_BANDWIDTH && isset($_GET[XMOOV_GET_BANDWIDTH])) ? getBandwidthLimit('size') : XMOOV_BW_PACKET_SIZE) * 1042; # security improved by by TRUI www.trui.net if (!file_exists($file)) { print('<b>ERROR:</b> xmoov-php could not find (' . $fileName . ') please check your settings.'); exit(); } if(file_exists($file) && strrchr($fileName, '.') == '.flv' && strlen($fileName) > 2 && !eregi(basename($_SERVER['PHP_SELF']), $fileName) && ereg('^[^./][^/]*$', $fileName)) { # stay clean @ob_end_clean(); @set_time_limit(0); # keep binary data safe set_magic_quotes_runtime(0); $fh = fopen($file, 'rb') or die ('<b>ERROR:</b> xmoov-php could not open (' . $fileName . ')'); $fileSize = filesize($file) - (($seekPos > 0) ? $seekPos + 1 : 0); // SEND HEADERS if(!XMOOV_CONF_ALLOW_FILE_CACHE) { # prohibit caching (different methods for different clients) session_cache_limiter("nocache"); header("Expires: Thu, 19 Nov 1981 08:52:00 GMT"); header("Last-Modified: " . gmdate("D, d M Y H:i:s") . " GMT"); header("Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0"); header("Pragma: no-cache"); } # content headers header("Content-Type: video/x-flv"); header("Content-Disposition: attachment; filename=\"" . $fileName . "\""); header("Content-Length: " . 
$fileSize); # FLV file format header if($seekPos != 0) { print('FLV'); print(pack('C', 1)); print(pack('C', 1)); print(pack('N', 9)); print(pack('N', 9)); } # seek to requested file position fseek($fh, $seekPos); # output file while(!feof($fh)) { # use bandwidth limiting - by Terry if(XMOOV_CONF_LIMIT_BANDWIDTH) { # get start time list($usec, $sec) = explode(' ', microtime()); $time_start = ((float)$usec + (float)$sec); # output packet print(fread($fh, $packet_size)); # get end time list($usec, $sec) = explode(' ', microtime()); $time_stop = ((float)$usec + (float)$sec); # wait if output is slower than $packet_interval $time_difference = $time_stop - $time_start; # clean up @flush(); @ob_flush(); if($time_difference < (float)$packet_interval) { usleep((float)$packet_interval * 1000000 - (float)$time_difference * 1000000); } } else { # output file without bandwidth limiting print(fread($fh, filesize($file))); } } } } ?>
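    A minimal C# sketch of the core of the script, written as an ASP.NET generic handler (the class name, media folder and buffer size here are assumptions, and the bandwidth throttling and cache-control headers of the PHP original are left out for brevity). It reads the same file and position query-string parameters, re-emits the 13-byte FLV header when a seek position is given, and streams the remainder of the file:

    using System;
    using System.IO;
    using System.Web;

    // Minimal FLV pseudo-streaming handler mirroring the PHP script's behaviour.
    public class FlvStreamHandler : IHttpHandler
    {
        private const string VideoFolder = "~/video/";   // assumed media folder
        private const int BufferSize = 90 * 1024;         // assumed packet size

        public bool IsReusable { get { return true; } }

        public void ProcessRequest(HttpContext context)
        {
            // Strip any path information so only files inside the video folder are served.
            string fileName = Path.GetFileName(context.Request.QueryString["file"] ?? string.Empty);
            long seekPos;
            long.TryParse(context.Request.QueryString["position"], out seekPos);

            if (!fileName.EndsWith(".flv", StringComparison.OrdinalIgnoreCase))
            {
                context.Response.StatusCode = 400;
                return;
            }

            string path = context.Server.MapPath(VideoFolder + fileName);
            if (!File.Exists(path))
            {
                context.Response.StatusCode = 404;
                return;
            }

            context.Response.ContentType = "video/x-flv";
            context.Response.AddHeader("Content-Disposition", "attachment; filename=\"" + fileName + "\"");

            using (var fs = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read))
            {
                if (seekPos > 0)
                {
                    // Re-emit the 13-byte FLV header (as the PHP script does) so the
                    // player accepts data that starts in the middle of the file.
                    byte[] flvHeader = { 0x46, 0x4C, 0x56, 0x01, 0x01, 0x00, 0x00, 0x00, 0x09, 0x00, 0x00, 0x00, 0x09 };
                    context.Response.OutputStream.Write(flvHeader, 0, flvHeader.Length);
                    fs.Seek(seekPos, SeekOrigin.Begin);
                }

                byte[] buffer = new byte[BufferSize];
                int read;
                while ((read = fs.Read(buffer, 0, buffer.Length)) > 0 && context.Response.IsClientConnected)
                {
                    context.Response.OutputStream.Write(buffer, 0, read);
                    context.Response.Flush();
                }
            }
        }
    }

    A fuller port would add the Content-Length and no-cache headers and the timed packet loop that limits bandwidth in the PHP version.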


  • Committed JDO writes do not apply on local GAE HRD, or possibly reused transaction

    - by eeeeaaii
    I'm using JDO 2.3 on app engine. I was using the Master/Slave datastore for local testing and recently switched over to using the HRD datastore for local testing, and parts of my app are breaking (which is to be expected). One part of the app that's breaking is where it sends a lot of writes quickly - that is because of the 1-second limit thing, it's failing with a concurrent modification exception. Okay, so that's also to be expected, so I have the browser retry the writes again later when they fail (maybe not the best hack but I'm just trying to get it working quickly). But a weird thing is happening. Some of the writes which should be succeeding (the ones that DON'T get the concurrent modification exception) are also failing, even though the commit phase completes and the request returns my success code. I can see from the log that the retried requests are working okay, but these other requests that seem to have committed on the first try are, I guess, never "applied." But from what I read about the Apply phase, writing again to that same entity should force the apply... but it doesn't. Code follows. Some things to note: I am attempting to use automatic JDO caching. So this is where JDO uses memcache under the covers. This doesn't actually work unless you wrap everything in a transaction. all the requests are doing is reading a string out of an entity, modifying part of the string, and saving that string back to the entity. If these requests weren't in transactions, you'd of course have the "dirty read" problem. But with transactions, isolation is supposed to be at the level of "serializable" so I don't see what's happening here. the entity being modified is a root entity (not in a group) I have cross-group transactions enabled Another weird thing is happening. If the concurrent modification thing happens, and I subsequently edit more than 5 more entities (this is the max for cross-group transactions), then nothing happens right away, but when I stop and restart the server I get "IllegalArgumentException: operating on too many entity groups in a single transaction". Could it be possible that the PMF is returning the same PersistenceManager every time, or the PM is reusing the same transaction every time? I don't see how I could possibly get the above error otherwise. The code inside the transaction just edits one root entity. I can't think of any other way that GAE would give me the "too many entity groups" error. 
The relevant code (this is a simplified version) PersistenceManager pm = PMF.getManager(); Transaction tx = pm.currentTransaction(); String responsetext = ""; try { tx.begin(); // I have extra calls to "makePersistent" because I found that relying // on pm.close didn't always write the objects to cache, maybe that // was only a DataNucleus 1.x issue though Key userkey = obtainUserKeyFromCookie(); User u = pm.getObjectById(User.class, userkey); pm.makePersistent(u); // to make sure it gets cached for next time Key mapkey = obtainMapKeyFromQueryString(); // this is NOT a java.util.Map, just FYI Map currentmap = pm.getObjectById(Map.class, mapkey); Text mapData = currentmap.getMapData(); // mapData is JSON stored in the entity Text newMapData = parseModifyAndReturn(mapData); // transform the map currentmap.setMapData(newMapData); // mutate the Map object pm.makePersistent(currentmap); // make sure to persist so there is a cache hit tx.commit(); responsetext = "OK"; } catch (JDOCanRetryException jdoe) { // log jdoe responsetext = "RETRY"; } catch (Exception e) { // log e responsetext = "ERROR"; } finally { if (tx.isActive()) { tx.rollback(); } pm.close(); } resp.getWriter().println(responsetext); EDIT: so I have verified that it fails after exactly 5 transactions. Here's what I do: I create a Foo (root entity), do a bunch of concurrent operations on that Foo, and some fail and get retried, and some commit but don't apply (as described above). Then, I start creating more Foos, and do a few operations on those new Foos. If I only create four Foos, stopping and restarting app engine does NOT give me the IllegalArgumentException. However if I create five Foos (which is the limit for cross-group transactions), then when I stop and restart app engine, I do get the exception. So it seems that somehow these new Foos I am creating are counting toward the limit of 5 max entities per transaction, even though they are supposed to be handled by separate transactions. It's as if a transaction is still open and is being reused by the servlet when it handles the new requests for the 2nd through 5th Foos. EDIT2: it looks like the IllegalArgument thing is independent of the other bug. In other words, it always happens when I create five Foos, even if I don't get the concurrent modification exception. I don't know if it's a symptom of the same problem or if it's unrelated. EDIT3: I found out what was causing the (unrelated) IllegalArgumentException, it was a dumb mistake on my part. But the other issue is still happening. EDIT4: added pseudocode for the datastore access EDIT5: I am pretty sure I know why this is happening, but I will still award the bounty to anyone who can confirm it. Basically, I think the problem is that transactions are not really implemented in the local version of the datastore. References: https://groups.google.com/forum/?fromgroups=#!topic/google-appengine-java/gVMS1dFSpcU https://groups.google.com/forum/?fromgroups=#!topic/google-appengine-java/deGasFdIO-M https://groups.google.com/forum/?hl=en&fromgroups=#!msg/google-appengine-java/4YuNb6TVD6I/gSttMmHYwo0J Because transactions are not implemented, rollback is essentially a no-op. Therefore, I get a dirty read when two transactions try to modify the record at the same time. In other words, A reads the data and B reads the data at the same time. A attempts to modify the data, and B attempts to modify a different part of the data. A writes to the datastore, then B writes, obliterating A's changes. 
Then B is "rolled back" by app engine, but since rollbacks are a no-op when running on the local datastore, B's changes stay, and A's do not. Meanwhile, since B is the thread that threw the exception, the client retries B, but does not retry A (since A was supposedly the transaction that succeeded).


  • Logic error for Gauss elimination

    - by iwanttoprogram
    Logic error problem with the Gaussian Elimination code...This code was from my Numerical Methods text in 1990's. The code is typed in from the book- not producing correct output... Sample Run: SOLUTION OF SIMULTANEOUS LINEAR EQUATIONS USING GAUSSIAN ELIMINATION This program uses Gaussian Elimination to solve the system Ax = B, where A is the matrix of known coefficients, B is the vector of known constants and x is the column matrix of the unknowns. Number of equations: 3 Enter elements of matrix [A] A(1,1) = 0 A(1,2) = -6 A(1,3) = 9 A(2,1) = 7 A(2,2) = 0 A(2,3) = -5 A(3,1) = 5 A(3,2) = -8 A(3,3) = 6 Enter elements of [b] vector B(1) = -3 B(2) = 3 B(3) = -4 SOLUTION OF SIMULTANEOUS LINEAR EQUATIONS The solution is x(1) = 0.000000 x(2) = -1.#IND00 x(3) = -1.#IND00 Determinant = -1.#IND00 Press any key to continue . . . The code as copied from the text... //Modified Code from C Numerical Methods Text- June 2009 #include <stdio.h> #include <math.h> #define MAXSIZE 20 //function prototype int gauss (double a[][MAXSIZE], double b[], int n, double *det); int main(void) { double a[MAXSIZE][MAXSIZE], b[MAXSIZE], det; int i, j, n, retval; printf("\n \t SOLUTION OF SIMULTANEOUS LINEAR EQUATIONS"); printf("\n \t USING GAUSSIAN ELIMINATION \n"); printf("\n This program uses Gaussian Elimination to solve the"); printf("\n system Ax = B, where A is the matrix of known"); printf("\n coefficients, B is the vector of known constants"); printf("\n and x is the column matrix of the unknowns."); //get number of equations n = 0; while(n <= 0 || n > MAXSIZE) { printf("\n Number of equations: "); scanf ("%d", &n); } //read matrix A printf("\n Enter elements of matrix [A]\n"); for (i = 0; i < n; i++) for (j = 0; j < n; j++) { printf(" A(%d,%d) = ", i + 1, j + 1); scanf("%lf", &a[i][j]); } //read {B} vector printf("\n Enter elements of [b] vector\n"); for (i = 0; i < n; i++) { printf(" B(%d) = ", i + 1); scanf("%lf", &b[i]); } //call Gauss elimination function retval = gauss(a, b, n, &det); //print results if (retval == 0) { printf("\n\t SOLUTION OF SIMULTANEOUS LINEAR EQUATIONS\n"); printf("\n\t The solution is"); for (i = 0; i < n; i++) printf("\n \t x(%d) = %lf", i + 1, b[i]); printf("\n \t Determinant = %lf \n", det); } else printf("\n \t SINGULAR MATRIX \n"); return 0; } /* Solves the system of equations [A]{x} = {B} using */ /* the Gaussian elimination method with partial pivoting. 
*/ /* Parameters: */ /* n - number of equations */ /* a[n][n] - coefficient matrix */ /* b[n] - right-hand side vector */ /* *det - determinant of [A] */ int gauss (double a[][MAXSIZE], double b[], int n, double *det) { double tol, temp, mult; int npivot, i, j, l, k, flag; //initialization *det = 1.0; tol = 1e-30; //initial tolerance value npivot = 0; //mult = 0; //forward elimination for (k = 0; k < n; k++) { //search for max coefficient in pivot row- a[k][k] pivot element for (i = k + 1; i < n; i++) { if (fabs(a[i][k]) > fabs(a[k][k])) { //interchange row with maxium element with pivot row npivot++; for (l = 0; l < n; l++) { temp = a[i][l]; a[i][l] = a[k][l]; a[k][l] = temp; } temp = b[i]; b[i] = b[k]; b[k] = temp; } } //test for singularity if (fabs(a[k][k]) < tol) { //matrix is singular- terminate flag = 1; return flag; } //compute determinant- the product of the pivot elements *det = *det * a[k][k]; //eliminate the coefficients of X(I) for (i = k; i < n; i++) { mult = a[i][k] / a[k][k]; b[i] = b[i] - b[k] * mult; //compute constants for (j = k; j < n; j++) //compute coefficients a[i][j] = a[i][j] - a[k][j] * mult; } } //adjust the sign of the determinant if(npivot % 2 == 1) *det = *det * (-1.0); //backsubstitution b[n] = b[n] / a[n][n]; for(i = n - 1; i > 1; i--) { for(j = n; j > i + 1; j--) b[i] = b[i] - a[i][j] * b[j]; b[i] = b[i] / a[i - 1][i]; } flag = 0; return flag; } The solution should be: 1.058824, 1.823529, 0.882353 with det as -102.000000 Any insight is appreciated...
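    For reference, the back-substitution the posted gauss() appears to be aiming for is the standard one (0-based indexing, matching the C arrays):

        x_{n-1} = \frac{b_{n-1}}{a_{n-1,\,n-1}},
        \qquad
        x_i = \frac{1}{a_{ii}} \left( b_i - \sum_{j=i+1}^{n-1} a_{ij} x_j \right),
        \qquad i = n-2, \dots, 0

    The posted loop instead starts from b[n] (one past the last row), divides by a[i-1][i] rather than a[i][i], and the elimination loop for (i = k; i < n; i++) begins at the pivot row itself, which zeroes that row; those index slips are consistent with the x(1) = 0.000000 and -1.#IND (NaN) values in the sample run.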


  • What's New in ASP.NET 4

    - by Navaneeth
    The .NET Framework version 4 includes enhancements for ASP.NET 4 in targeted areas. Visual Studio 2010 and Microsoft Visual Web Developer Express also include enhancements and new features for improved Web development. This document provides an overview of many of the new features that are included in the upcoming release. This topic contains the following sections: ASP.NET Core Services ASP.NET Web Forms ASP.NET MVC Dynamic Data ASP.NET Chart Control Visual Web Developer Enhancements Web Application Deployment with Visual Studio 2010 Enhancements to ASP.NET Multi-Targeting ASP.NET Core Services ASP.NET 4 introduces many features that improve core ASP.NET services such as output caching and session state storage. Extensible Output Caching Since the time that ASP.NET 1.0 was released, output caching has enabled developers to store the generated output of pages, controls, and HTTP responses in memory. On subsequent Web requests, ASP.NET can serve content more quickly by retrieving the generated output from memory instead of regenerating the output from scratch. However, this approach has a limitation — generated content always has to be stored in memory. On servers that experience heavy traffic, the memory requirements for output caching can compete with memory requirements for other parts of a Web application. ASP.NET 4 adds extensibility to output caching that enables you to configure one or more custom output-cache providers. Output-cache providers can use any storage mechanism to persist HTML content. These storage options can include local or remote disks, cloud storage, and distributed cache engines. Output-cache provider extensibility in ASP.NET 4 lets you design more aggressive and more intelligent output-caching strategies for Web sites. For example, you can create an output-cache provider that caches the "Top 10" pages of a site in memory, while caching pages that get lower traffic on disk. Alternatively, you can cache every vary-by combination for a rendered page, but use a distributed cache so that the memory consumption is offloaded from front-end Web servers. You create a custom output-cache provider as a class that derives from the OutputCacheProvider type. You can then configure the provider in the Web.config file by using the new providers subsection of the outputCache element For more information and for examples that show how to configure the output cache, see outputCache Element for caching (ASP.NET Settings Schema). For more information about the classes that support caching, see the documentation for the OutputCache and OutputCacheProvider classes. By default, in ASP.NET 4, all HTTP responses, rendered pages, and controls use the in-memory output cache. The defaultProvider attribute for ASP.NET is AspNetInternalProvider. You can change the default output-cache provider used for a Web application by specifying a different provider name for defaultProvider attribute. In addition, you can select different output-cache providers for individual control and for individual requests and programmatically specify which provider to use. For more information, see the HttpApplication.GetOutputCacheProviderName(HttpContext) method. 
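    A minimal sketch of such a provider, using an in-process dictionary purely as a stand-in store (the class name and the storage choice are illustrative, not part of ASP.NET); the four overrides are the members the OutputCacheProvider base class requires:

    using System;
    using System.Collections.Concurrent;
    using System.Web.Caching;

    // Illustrative only: a provider backed by an in-process dictionary. A real provider
    // would target disk, a distributed cache, or cloud storage as described above.
    public class InMemoryOutputCacheProvider : OutputCacheProvider
    {
        private class CacheEntry { public object Value; public DateTime UtcExpiry; }

        private readonly ConcurrentDictionary<string, CacheEntry> _cache =
            new ConcurrentDictionary<string, CacheEntry>();

        public override object Get(string key)
        {
            CacheEntry entry;
            if (!_cache.TryGetValue(key, out entry))
                return null;
            if (entry.UtcExpiry <= DateTime.UtcNow)
            {
                _cache.TryRemove(key, out entry);
                return null;
            }
            return entry.Value;
        }

        public override object Add(string key, object entry, DateTime utcExpiry)
        {
            // Per the OutputCacheProvider contract, Add returns any entry already cached under the key.
            object existing = Get(key);
            if (existing != null)
                return existing;
            Set(key, entry, utcExpiry);
            return entry;
        }

        public override void Set(string key, object entry, DateTime utcExpiry)
        {
            _cache[key] = new CacheEntry { Value = entry, UtcExpiry = utcExpiry };
        }

        public override void Remove(string key)
        {
            CacheEntry removed;
            _cache.TryRemove(key, out removed);
        }
    }

    A provider like this is then registered in the providers subsection of the outputCache element and selected through defaultProvider or the per-control and per-request mechanisms described in this section.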
The easiest way to choose a different output-cache provider for different Web user controls is to do so declaratively by using the new providerName attribute in a page or control directive, as shown in the following example: <%@ OutputCache Duration="60" VaryByParam="None" providerName="DiskCache" %> Preloading Web Applications Some Web applications must load large amounts of data or must perform expensive initialization processing before serving the first request. In earlier versions of ASP.NET, for these situations you had to devise custom approaches to "wake up" an ASP.NET application and then run initialization code during the Application_Load method in the Global.asax file. To address this scenario, a new application preload manager (autostart feature) is available when ASP.NET 4 runs on IIS 7.5 on Windows Server 2008 R2. The preload feature provides a controlled approach for starting up an application pool, initializing an ASP.NET application, and then accepting HTTP requests. It lets you perform expensive application initialization prior to processing the first HTTP request. For example, you can use the application preload manager to initialize an application and then signal a load-balancer that the application was initialized and ready to accept HTTP traffic. To use the application preload manager, an IIS administrator sets an application pool in IIS 7.5 to be automatically started by using the following configuration in the applicationHost.config file: <applicationPools> <add name="MyApplicationPool" startMode="AlwaysRunning" /> </applicationPools> Because a single application pool can contain multiple applications, you specify individual applications to be automatically started by using the following configuration in the applicationHost.config file: <sites> <site name="MySite" id="1"> <application path="/" serviceAutoStartEnabled="true" serviceAutoStartProvider="PrewarmMyCache" > <!-- Additional content --> </application> </site> </sites> <!-- Additional content --> <serviceAutoStartProviders> <add name="PrewarmMyCache" type="MyNamespace.CustomInitialization, MyLibrary" /> </serviceAutoStartProviders> When an IIS 7.5 server is cold-started or when an individual application pool is recycled, IIS 7.5 uses the information in the applicationHost.config file to determine which Web applications have to be automatically started. For each application that is marked for preload, IIS7.5 sends a request to ASP.NET 4 to start the application in a state during which the application temporarily does not accept HTTP requests. When it is in this state, ASP.NET instantiates the type defined by the serviceAutoStartProvider attribute (as shown in the previous example) and calls into its public entry point. You create a managed preload type that has the required entry point by implementing the IProcessHostPreloadClient interface, as shown in the following example: public class CustomInitialization : System.Web.Hosting.IProcessHostPreloadClient { public void Preload(string[] parameters) { // Perform initialization. } } After your initialization code runs in the Preload method and after the method returns, the ASP.NET application is ready to process requests. Permanently Redirecting a Page Content in Web applications is often moved over the lifetime of the application. This can lead to links to be out of date, such as the links that are returned by search engines. In ASP.NET, developers have traditionally handled requests to old URLs by using the Redirect method to forward a request to the new URL. 
However, the Redirect method issues an HTTP 302 (Found) response (which is used for a temporary redirect). This results in an extra HTTP round trip. ASP.NET 4 adds a RedirectPermanent helper method that makes it easy to issue HTTP 301 (Moved Permanently) responses, as in the following example: RedirectPermanent("/newpath/foroldcontent.aspx"); Search engines and other user agents that recognize permanent redirects will store the new URL that is associated with the content, which eliminates the unnecessary round trip made by the browser for temporary redirects. Session State Compression By default, ASP.NET provides two options for storing session state across a Web farm. The first option is a session state provider that invokes an out-of-process session state server. The second option is a session state provider that stores data in a Microsoft SQL Server database. Because both options store state information outside a Web application's worker process, session state has to be serialized before it is sent to remote storage. If a large amount of data is saved in session state, the size of the serialized data can become very large. ASP.NET 4 introduces a new compression option for both kinds of out-of-process session state providers. By using this option, applications that have spare CPU cycles on Web servers can achieve substantial reductions in the size of serialized session state data. You can set this option using the new compressionEnabled attribute of the sessionState element in the configuration file. When the compressionEnabled configuration option is set to true, ASP.NET compresses (and decompresses) serialized session state by using the .NET Framework GZipStreamclass. The following example shows how to set this attribute. <sessionState mode="SqlServer" sqlConnectionString="data source=dbserver;Initial Catalog=aspnetstate" allowCustomSqlDatabase="true" compressionEnabled="true" /> ASP.NET Web Forms Web Forms has been a core feature in ASP.NET since the release of ASP.NET 1.0. Many enhancements have been in this area for ASP.NET 4, such as the following: The ability to set meta tags. More control over view state. Support for recently introduced browsers and devices. Easier ways to work with browser capabilities. Support for using ASP.NET routing with Web Forms. More control over generated IDs. The ability to persist selected rows in data controls. More control over rendered HTML in the FormView and ListView controls. Filtering support for data source controls. Enhanced support for Web standards and accessibility Setting Meta Tags with the Page.MetaKeywords and Page.MetaDescription Properties Two properties have been added to the Page class: MetaKeywords and MetaDescription. These two properties represent corresponding meta tags in the HTML rendered for a page, as shown in the following example: <head id="Head1" runat="server"> <title>Untitled Page</title> <meta name="keywords" content="keyword1, keyword2' /> <meta name="description" content="Description of my page" /> </head> These two properties work like the Title property does, and they can be set in the @ Page directive. For more information, see Page.MetaKeywords and Page.MetaDescription. Enabling View State for Individual Controls A new property has been added to the Control class: ViewStateMode. You can use this property to disable view state for all controls on a page except those for which you explicitly enable view state. 
View state data is included in a page's HTML and increases the amount of time it takes to send a page to the client and post it back. Storing more view state than is necessary can cause significant decrease in performance. In earlier versions of ASP.NET, you could reduce the impact of view state on a page's performance by disabling view state for specific controls. But sometimes it is easier to enable view state for a few controls that need it instead of disabling it for many that do not need it. For more information, see Control.ViewStateMode. Support for Recently Introduced Browsers and Devices ASP.NET includes a feature that is named browser capabilities that lets you determine the capabilities of the browser that a user is using. Browser capabilities are represented by the HttpBrowserCapabilities object which is stored in the HttpRequest.Browser property. Information about a particular browser's capabilities is defined by a browser definition file. In ASP.NET 4, these browser definition files have been updated to contain information about recently introduced browsers and devices such as Google Chrome, Research in Motion BlackBerry smart phones, and Apple iPhone. Existing browser definition files have also been updated. For more information, see How to: Upgrade an ASP.NET Web Application to ASP.NET 4 and ASP.NET Web Server Controls and Browser Capabilities. The browser definition files that are included with ASP.NET 4 are shown in the following list: •blackberry.browser •chrome.browser •Default.browser •firefox.browser •gateway.browser •generic.browser •ie.browser •iemobile.browser •iphone.browser •opera.browser •safari.browser A New Way to Define Browser Capabilities ASP.NET 4 includes a new feature referred to as browser capabilities providers. As the name suggests, this lets you build a provider that in turn lets you write custom code to determine browser capabilities. In ASP.NET version 3.5 Service Pack 1, you define browser capabilities in an XML file. This file resides in a machine-level folder or an application-level folder. Most developers do not need to customize these files, but for those who do, the provider approach can be easier than dealing with complex XML syntax. The provider approach makes it possible to simplify the process by implementing a common browser definition syntax, or a database that contains up-to-date browser definitions, or even a Web service for such a database. For more information about the new browser capabilities provider, see the What's New for ASP.NET 4 White Paper. Routing in ASP.NET 4 ASP.NET 4 adds built-in support for routing with Web Forms. Routing is a feature that was introduced with ASP.NET 3.5 SP1 and lets you configure an application to use URLs that are meaningful to users and to search engines because they do not have to specify physical file names. This can make your site more user-friendly and your site content more discoverable by search engines. For example, the URL for a page that displays product categories in your application might look like the following example: http://website/products.aspx?categoryid=12 By using routing, you can use the following URL to render the same information: http://website/products/software The second URL lets the user know what to expect and can result in significantly improved rankings in search engine results. the new features include the following: The PageRouteHandler class is a simple HTTP handler that you use when you define routes. You no longer have to write a custom route handler. 
The HttpRequest.RequestContext and Page.RouteData properties make it easier to access information that is passed in URL parameters. The RouteUrl expression provides a simple way to create a routed URL in markup. The RouteValue expression provides a simple way to extract URL parameter values in markup. The RouteParameter class makes it easier to pass URL parameter values to a query for a data source control (similar to FormParameter). You no longer have to change the Web.config file to enable routing. For more information about routing, see the following topics: ASP.NET Routing Walkthrough: Using ASP.NET Routing in a Web Forms Application How to: Define Routes for Web Forms Applications How to: Construct URLs from Routes How to: Access URL Parameters in a Routed Page Setting Client IDs The new ClientIDMode property makes it easier to write client script that references HTML elements rendered for server controls. Increasing use of Microsoft Ajax makes the need to do this more common. For example, you may have a data control that renders a long list of products with prices and you want to use client script to make a Web service call and update individual prices in the list as they change without refreshing the entire page. Typically you get a reference to an HTML element in client script by using the document.GetElementById method. You pass to this method the value of the id attribute of the HTML element you want to reference. In the case of elements that are rendered for ASP.NET server controls earlier versions of ASP.NET could make this difficult or impossible. You were not always able to predict what id values ASP.NET would generate, or ASP.NET could generate very long id values. The problem was especially difficult for data controls that would generate multiple rows for a single instance of the control in your markup. ASP.NET 4 adds two new algorithms for generating id attributes. These algorithms can generate id attributes that are easier to work with in client script because they are more predictable and that are easier to work with because they are simpler. For more information about how to use the new algorithms, see the following topics: ASP.NET Web Server Control Identification Walkthrough: Making Data-Bound Controls Easier to Access from JavaScript Walkthrough: Making Controls Located in Web User Controls Easier to Access from JavaScript How to: Access Controls from JavaScript by ID Persisting Row Selection in Data Controls The GridView and ListView controls enable users to select a row. In previous versions of ASP.NET, row selection was based on the row index on the page. For example, if you select the third item on page 1 and then move to page 2, the third item on page 2 is selected. In most cases, is more desirable not to select any rows on page 2. ASP.NET 4 supports Persisted Selection, a new feature that was initially supported only in Dynamic Data projects in the .NET Framework 3.5 SP1. When this feature is enabled, the selected item is based on the row data key. This means that if you select the third row on page 1 and move to page 2, nothing is selected on page 2. When you move back to page 1, the third row is still selected. This is a much more natural behavior than the behavior in earlier versions of ASP.NET. Persisted selection is now supported for the GridView and ListView controls in all projects. 
You can enable this feature in the GridView control, for example, by setting the EnablePersistedSelection property, as shown in the following example: <asp:GridView id="GridView2" runat="server" PersistedSelection="true"> </asp:GridView> FormView Control Enhancements The FormView control is enhanced to make it easier to style the content of the control with CSS. In previous versions of ASP.NET, the FormView control rendered it contents using an item template. This made styling more difficult in the markup because unexpected table row and table cell tags were rendered by the control. The FormView control supports RenderOuterTable, a property in ASP.NET 4. When this property is set to false, as show in the following example, the table tags are not rendered. This makes it easier to apply CSS style to the contents of the control. <asp:FormView ID="FormView1" runat="server" RenderTable="false"> For more information, see FormView Web Server Control Overview. ListView Control Enhancements The ListView control, which was introduced in ASP.NET 3.5, has all the functionality of the GridView control while giving you complete control over the output. This control has been made easier to use in ASP.NET 4. The earlier version of the control required that you specify a layout template that contained a server control with a known ID. The following markup shows a typical example of how to use the ListView control in ASP.NET 3.5. <asp:ListView ID="ListView1" runat="server"> <LayoutTemplate> <asp:PlaceHolder ID="ItemPlaceHolder" runat="server"></asp:PlaceHolder> </LayoutTemplate> <ItemTemplate> <% Eval("LastName")%> </ItemTemplate> </asp:ListView> In ASP.NET 4, the ListView control does not require a layout template. The markup shown in the previous example can be replaced with the following markup: <asp:ListView ID="ListView1" runat="server"> <ItemTemplate> <% Eval("LastName")%> </ItemTemplate> </asp:ListView> For more information, see ListView Web Server Control Overview. Filtering Data with the QueryExtender Control A very common task for developers who create data-driven Web pages is to filter data. This traditionally has been performed by building Where clauses in data source controls. This approach can be complicated, and in some cases the Where syntax does not let you take advantage of the full functionality of the underlying database. To make filtering easier, a new QueryExtender control has been added in ASP.NET 4. This control can be added to EntityDataSource or LinqDataSource controls in order to filter the data returned by these controls. Because the QueryExtender control relies on LINQ, but you do not to need to know how to write LINQ queries to use the query extender. The QueryExtender control supports a variety of filter options. The following lists QueryExtender filter options. Term Definition SearchExpression Searches a field or fields for string values and compares them to a specified string value. RangeExpression Searches a field or fields for values in a range specified by a pair of values. PropertyExpression Compares a specified value to a property value in a field. If the expression evaluates to true, the data that is being examined is returned. OrderByExpression Sorts data by a specified column and sort direction. CustomExpression Calls a function that defines custom filter in the page. For more information, see QueryExtenderQueryExtender Web Server Control Overview. 
Enhanced Support for Web Standards and Accessibility Earlier versions of ASP.NET controls sometimes render markup that does not conform to HTML, XHTML, or accessibility standards. ASP.NET 4 eliminates most of these exceptions. For details about how the HTML that is rendered by each control meets accessibility standards, see ASP.NET Controls and Accessibility. CSS for Controls that Can be Disabled In ASP.NET 3.5, when a control is disabled (see WebControl.Enabled), a disabled attribute is added to the rendered HTML element. For example, the following markup creates a Label control that is disabled: <asp:Label id="Label1" runat="server"   Text="Test" Enabled="false" /> In ASP.NET 3.5, the previous control settings generate the following HTML: <span id="Label1" disabled="disabled">Test</span> In HTML 4.01, the disabled attribute is not considered valid on span elements. It is valid only on input elements because it specifies that they cannot be accessed. On display-only elements such as span elements, browsers typically support rendering for a disabled appearance, but a Web page that relies on this non-standard behavior is not robust according to accessibility standards. For display-only elements, you should use CSS to indicate a disabled visual appearance. Therefore, by default ASP.NET 4 generates the following HTML for the control settings shown previously: <span id="Label1" class="aspNetDisabled">Test</span> You can change the value of the class attribute that is rendered by default when a control is disabled by setting the DisabledCssClass property. CSS for Validation Controls In ASP.NET 3.5, validation controls render a default color of red as an inline style. For example, the following markup creates a RequiredFieldValidator control: <asp:RequiredFieldValidator ID="RequiredFieldValidator1" runat="server"   ErrorMessage="Required Field" ControlToValidate="RadioButtonList1" /> ASP.NET 3.5 renders the following HTML for the validator control: <span id="RequiredFieldValidator1"   style="color:Red;visibility:hidden;">RequiredFieldValidator</span> By default, ASP.NET 4 does not render an inline style to set the color to red. An inline style is used only to hide or show the validator, as shown in the following example: <span id="RequiredFieldValidator1"   style"visibility:hidden;">RequiredFieldValidator</span> Therefore, ASP.NET 4 does not automatically show error messages in red. For information about how to use CSS to specify a visual style for a validation control, see Validating User Input in ASP.NET Web Pages. CSS for the Hidden Fields Div Element ASP.NET uses hidden fields to store state information such as view state and control state. These hidden fields are contained by a div element. In ASP.NET 3.5, this div element does not have a class attribute or an id attribute. Therefore, CSS rules that affect all div elements could unintentionally cause this div to be visible. To avoid this problem, ASP.NET 4 renders the div element for hidden fields with a CSS class that you can use to differentiate the hidden fields div from others. The new classvalue is shown in the following example: <div class="aspNetHidden"> CSS for the Table, Image, and ImageButton Controls By default, in ASP.NET 3.5, some controls set the border attribute of rendered HTML to zero (0). The following example shows HTML that is generated by the Table control in ASP.NET 3.5: <table id="Table2" border="0"> The Image control and the ImageButton control also do this. 
Because this is not necessary and provides visual formatting information that should be provided by using CSS, the attribute is not generated in ASP.NET 4. CSS for the UpdatePanel and UpdateProgress Controls In ASP.NET 3.5, the UpdatePanel and UpdateProgress controls do not support expando attributes. This makes it impossible to set a CSS class on the HTMLelements that they render. In ASP.NET 4 these controls have been changed to accept expando attributes, as shown in the following example: <asp:UpdatePanel runat="server" class="myStyle"> </asp:UpdatePanel> The following HTML is rendered for this markup: <div id="ctl00_MainContent_UpdatePanel1" class="expandoclass"> </div> Eliminating Unnecessary Outer Tables In ASP.NET 3.5, the HTML that is rendered for the following controls is wrapped in a table element whose purpose is to apply inline styles to the entire control: FormView Login PasswordRecovery ChangePassword If you use templates to customize the appearance of these controls, you can specify CSS styles in the markup that you provide in the templates. In that case, no extra outer table is required. In ASP.NET 4, you can prevent the table from being rendered by setting the new RenderOuterTable property to false. Layout Templates for Wizard Controls In ASP.NET 3.5, the Wizard and CreateUserWizard controls generate an HTML table element that is used for visual formatting. In ASP.NET 4 you can use a LayoutTemplate element to specify the layout. If you do this, the HTML table element is not generated. In the template, you create placeholder controls to indicate where items should be dynamically inserted into the control. (This is similar to how the template model for the ListView control works.) For more information, see the Wizard.LayoutTemplate property. New HTML Formatting Options for the CheckBoxList and RadioButtonList Controls ASP.NET 3.5 uses HTML table elements to format the output for the CheckBoxList and RadioButtonList controls. To provide an alternative that does not use tables for visual formatting, ASP.NET 4 adds two new options to the RepeatLayout enumeration: UnorderedList. This option causes the HTML output to be formatted by using ul and li elements instead of a table. OrderedList. This option causes the HTML output to be formatted by using ol and li elements instead of a table. For examples of HTML that is rendered for the new options, see the RepeatLayout enumeration. Header and Footer Elements for the Table Control In ASP.NET 3.5, the Table control can be configured to render thead and tfoot elements by setting the TableSection property of the TableHeaderRow class and the TableFooterRow class. In ASP.NET 4 these properties are set to the appropriate values by default. CSS and ARIA Support for the Menu Control In ASP.NET 3.5, the Menu control uses HTML table elements for visual formatting, and in some configurations it is not keyboard-accessible. ASP.NET 4 addresses these problems and improves accessibility in the following ways: The generated HTML is structured as an unordered list (ul and li elements). CSS is used for visual formatting. The menu behaves in accordance with ARIA standards for keyboard access. You can use arrow keys to navigate menu items. (For information about ARIA, see Accessibility in Visual Studio and ASP.NET.) ARIA role and property attributes are added to the generated HTML. (Attributes are added by using JavaScript instead of included in the HTML, to avoid generating HTML that would cause markup validation errors.) 
Styles for the Menu control are rendered in a style block at the top of the page, instead of inline with the rendered HTML elements. If you want to use a separate CSS file so that you can modify the menu styles, you can set the Menu control's new IncludeStyleBlock property to false, in which case the style block is not generated. Valid XHTML for the HtmlForm Control In ASP.NET 3.5, the HtmlForm control (which is created implicitly by the <form runat="server"> tag) renders an HTML form element that has both name and id attributes. The name attribute is deprecated in XHTML 1.1. Therefore, this control does not render the name attribute in ASP.NET 4. Maintaining Backward Compatibility in Control Rendering An existing ASP.NET Web site might have code in it that assumes that controls are rendering HTML the way they do in ASP.NET 3.5. To avoid causing backward compatibility problems when you upgrade the site to ASP.NET 4, you can have ASP.NET continue to generate HTML the way it does in ASP.NET 3.5 after you upgrade the site. To do so, you can set the controlRenderingCompatibilityVersion attribute of the pages element to "3.5" in the Web.config file of an ASP.NET 4 Web site, as shown in the following example: <system.web>   <pages controlRenderingCompatibilityVersion="3.5"/> </system.web> If this setting is omitted, the default value is the same as the version of ASP.NET that the Web site targets. (For information about multi-targeting in ASP.NET, see .NET Framework Multi-Targeting for ASP.NET Web Projects.) ASP.NET MVC ASP.NET MVC helps Web developers build compelling standards-based Web sites that are easy to maintain because it decreases the dependency among application layers by using the Model-View-Controller (MVC) pattern. MVC provides complete control over the page markup. It also improves testability by inherently supporting Test Driven Development (TDD). Web sites created using ASP.NET MVC have a modular architecture. This allows members of a team to work independently on the various modules and can be used to improve collaboration. For example, developers can work on the model and controller layers (data and logic), while the designer work on the view (presentation). For tutorials, walkthroughs, conceptual content, code samples, and a complete API reference, see ASP.NET MVC 2. Dynamic Data Dynamic Data was introduced in the .NET Framework 3.5 SP1 release in mid-2008. This feature provides many enhancements for creating data-driven applications, such as the following: A RAD experience for quickly building a data-driven Web site. Automatic validation that is based on constraints defined in the data model. The ability to easily change the markup that is generated for fields in the GridView and DetailsView controls by using field templates that are part of your Dynamic Data project. For ASP.NET 4, Dynamic Data has been enhanced to give developers even more power for quickly building data-driven Web sites. For more information, see ASP.NET Dynamic Data Content Map. Enabling Dynamic Data for Individual Data-Bound Controls in Existing Web Applications You can use Dynamic Data features in existing ASP.NET Web applications that do not use scaffolding by enabling Dynamic Data for individual data-bound controls. Dynamic Data provides the presentation and data layer support for rendering these controls. When you enable Dynamic Data for data-bound controls, you get the following benefits: Setting default values for data fields. 
Dynamic Data enables you to provide default values at run time for fields in a data control. Interacting with the database without creating and registering a data model. Automatically validating the data that is entered by the user without writing any code. For more information, see Walkthrough: Enabling Dynamic Data in ASP.NET Data-Bound Controls. New Field Templates for URLs and E-mail Addresses ASP.NET 4 introduces two new built-in field templates, EmailAddress.ascx and Url.ascx. These templates are used for fields that are marked as EmailAddress or Url using the DataTypeAttribute attribute. For EmailAddress objects, the field is displayed as a hyperlink that is created by using the mailto: protocol. When users click the link, it opens the user's e-mail client and creates a skeleton message. Objects typed as Url are displayed as ordinary hyperlinks. The following example shows how to mark fields. [DataType(DataType.EmailAddress)] public object HomeEmail { get; set; } [DataType(DataType.Url)] public object Website { get; set; } Creating Links with the DynamicHyperLink Control Dynamic Data uses the new routing feature that was added in the .NET Framework 3.5 SP1 to control the URLs that users see when they access the Web site. The new DynamicHyperLink control makes it easy to build links to pages in a Dynamic Data site. For information, see How to: Create Table Action Links in Dynamic Data Support for Inheritance in the Data Model Both the ADO.NET Entity Framework and LINQ to SQL support inheritance in their data models. An example of this might be a database that has an InsurancePolicy table. It might also contain CarPolicy and HousePolicy tables that have the same fields as InsurancePolicy and then add more fields. Dynamic Data has been modified to understand inherited objects in the data model and to support scaffolding for the inherited tables. For more information, see Walkthrough: Mapping Table-per-Hierarchy Inheritance in Dynamic Data. Support for Many-to-Many Relationships (Entity Framework Only) The Entity Framework has rich support for many-to-many relationships between tables, which is implemented by exposing the relationship as a collection on an Entity object. New field templates (ManyToMany.ascx and ManyToMany_Edit.ascx) have been added to provide support for displaying and editing data that is involved in many-to-many relationships. For more information, see Working with Many-to-Many Data Relationships in Dynamic Data. New Attributes to Control Display and Support Enumerations The DisplayAttribute has been added to give you additional control over how fields are displayed. The DisplayNameAttribute attribute in earlier versions of Dynamic Data enabled you to change the name that is used as a caption for a field. The new DisplayAttribute class lets you specify more options for displaying a field, such as the order in which a field is displayed and whether a field will be used as a filter. The attribute also provides independent control of the name that is used for the labels in a GridView control, the name that is used in a DetailsView control, the help text for the field, and the watermark used for the field (if the field accepts text input). The EnumDataTypeAttribute class has been added to let you map fields to enumerations. When you apply this attribute to a field, you specify an enumeration type. Dynamic Data uses the new Enumeration.ascx field template to create UI for displaying and editing enumeration values. 
The template maps the values from the database to the names in the enumeration. Enhanced Support for Filters Dynamic Data 1.0 had built-in filters for Boolean columns and foreign-key columns. The filters did not let you specify the order in which they were displayed. The new DisplayAttribute attribute addresses this by giving you control over whether a column appears as a filter and in what order it will be displayed. An additional enhancement is that filtering support has been rewritten to use the new QueryExtender feature of Web Forms. This lets you create filters without requiring knowledge of the data source control that the filters will be used with. Along with these extensions, filters have also been turned into template controls, which lets you add new ones. Finally, the DisplayAttribute class mentioned earlier allows the default filter to be overridden, in the same way that UIHint allows the default field template for a column to be overridden. For more information, see Walkthrough: Filtering Rows in Tables That Have a Parent-Child Relationship and QueryableFilterRepeater. ASP.NET Chart Control The ASP.NET chart server control enables you to create ASP.NET pages and applications that have simple, intuitive charts for complex statistical or financial analysis. The chart control supports the following features: Data series, chart areas, axes, legends, labels, titles, and more. Data binding. Data manipulation, such as copying, splitting, merging, alignment, grouping, sorting, searching, and filtering. Statistical formulas and financial formulas. Advanced chart appearance, such as 3-D, anti-aliasing, lighting, and perspective. Events and customizations. Interactivity and Microsoft Ajax. Support for the Ajax Content Delivery Network (CDN), which provides an optimized way for you to add Microsoft Ajax Library and jQuery scripts to your Web applications. For more information, see Chart Web Server Control Overview. Visual Web Developer Enhancements The following sections provide information about enhancements and new features in Visual Studio 2010 and Visual Web Developer Express. The Web page designer in Visual Studio 2010 has been enhanced for better CSS compatibility, includes additional support for HTML and ASP.NET markup snippets, and features a redesigned version of IntelliSense for JScript. Improved CSS Compatibility The Visual Web Developer designer in Visual Studio 2010 has been updated to improve CSS 2.1 standards compliance. The designer better preserves HTML source code and is more robust than in previous versions of Visual Studio. HTML and JScript Snippets In the HTML editor, IntelliSense auto-completes tag names. The IntelliSense Snippets feature auto-completes whole tags and more. In Visual Studio 2010, IntelliSense snippets are supported for JScript, alongside C# and Visual Basic, which were supported in earlier versions of Visual Studio. Visual Studio 2010 includes over 200 snippets that help you auto-complete common ASP.NET and HTML tags, including required attributes (such as runat="server") and common attributes specific to a tag (such as ID, DataSourceID, ControlToValidate, and Text). You can download additional snippets, or you can write your own snippets that encapsulate the blocks of markup that you or your team use for common tasks. For more information on HTML snippets, see Walkthrough: Using HTML Snippets. JScript IntelliSense Enhancements In Visual Studio 2010, JScript IntelliSense has been redesigned to provide an even richer editing experience. 
IntelliSense now recognizes objects that have been dynamically generated by methods such as registerNamespace and by similar techniques used by other JavaScript frameworks. Performance has been improved to analyze large libraries of script and to display IntelliSense with little or no processing delay. Compatibility has been significantly increased to support almost all third-party libraries and to support diverse coding styles. Documentation comments are now parsed as you type and are immediately leveraged by IntelliSense. Web Application Deployment with Visual Studio 2010 For Web application projects, Visual Studio now provides tools that work with the IIS Web Deployment Tool (Web Deploy) to automate many processes that had to be done manually in earlier versions of ASP.NET. For example, the following tasks can now be automated: Creating an IIS application on the destination computer and configuring IIS settings. Copying files to the destination computer. Changing Web.config settings that must be different in the destination environment. Propagating changes to data or data structures in SQL Server databases that are used by the Web application. For more information about Web application deployment, see ASP.NET Deployment Content Map. Enhancements to ASP.NET Multi-Targeting ASP.NET 4 adds new features to the multi-targeting feature to make it easier to work with projects that target earlier versions of the .NET Framework. Multi-targeting was introduced in ASP.NET 3.5 to enable you to use the latest version of Visual Studio without having to upgrade existing Web sites or Web services to the latest version of the .NET Framework. In Visual Studio 2008, when you work with a project targeted for an earlier version of the .NET Framework, most features of the development environment adapt to the targeted version. However, IntelliSense displays language features that are available in the current version, and property windows display properties available in the current version. In Visual Studio 2010, only language features and properties available in the targeted version of the .NET Framework are shown. For more information about multi-targeting, see the following topics: .NET Framework Multi-Targeting for ASP.NET Web Projects ASP.NET Side-by-Side Execution Overview How to: Host Web Applications That Use Different Versions of the .NET Framework on the Same Server How to: Deploy Web Site Projects Targeted for Earlier Versions of the .NET Framework
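To tie together several of the rendering options described in this article, the following is a minimal, hedged sketch of an ASP.NET 4 page that opts into the new markup behaviors from code. The page class and control IDs are hypothetical placeholders invented for this example; the properties used (WebControl.DisabledCssClass, FormView.RenderOuterTable, RepeatLayout.UnorderedList, and Menu.IncludeStyleBlock) are the ones discussed above.

using System;
using System.Web.UI;
using System.Web.UI.WebControls;

// Hypothetical page that creates a few controls in code and applies the
// ASP.NET 4 rendering options covered in this article. It assumes the
// page markup contains a <form runat="server"> element.
public class RenderingOptionsPage : Page
{
    protected override void OnInit(EventArgs e)
    {
        base.OnInit(e);

        // Change the CSS class rendered for disabled controls instead of the
        // legacy disabled="disabled" attribute (the default is "aspNetDisabled").
        WebControl.DisabledCssClass = "customDisabled";

        // Render the list as ul/li markup rather than an HTML table.
        var options = new CheckBoxList { ID = "CheckBoxList1", RepeatLayout = RepeatLayout.UnorderedList };
        options.Items.Add("Option A");
        options.Items.Add("Option B");

        // Skip the inline style block so the menu can be styled from a CSS file.
        var menu = new Menu { ID = "Menu1", IncludeStyleBlock = false };

        // Suppress the outer layout table around the templated control.
        var formView = new FormView { ID = "FormView1", RenderOuterTable = false };

        Form.Controls.Add(options);
        Form.Controls.Add(menu);
        Form.Controls.Add(formView);
    }
}

With these settings the list renders as ul/li elements, the menu emits no inline style block, and the FormView drops its wrapper table, matching the cleaner markup behavior described earlier.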

    Read the article

  • A free standing ASP.NET Pager Web Control

    - by Rick Strahl
Paging in ASP.NET has been relatively easy with stock controls supporting basic paging functionality. However, recently I built an MVC application and one of the things I ran into was that I HAD TO build manual paging support into a few of my pages. Dealing with list controls and rendering markup is easy enough, but doing paging is a little more involved. I ended up with a small but flexible component that can be dropped anywhere. As it turns out the task of creating a semi-generic Pager control for MVC was fairly easy. Now I'm back to working in Web Forms and thought to myself that the way I created the pager in MVC actually would also work in ASP.NET – in fact quite a bit easier since the whole thing can be conveniently wrapped up into an easily reusable control. A standalone pager would provide easier reuse in various pages and a more consistent pager display regardless of what kind of 'control' the pager is associated with. Why a Pager Control? At first blush it might sound silly to create a new pager control – after all Web Forms has pretty decent paging support, doesn't it? Well, sort of. Yes the GridView control has automatic paging built in and the ListView control has the related DataPager control. The built in ASP.NET paging has several issues though: Postback and JavaScript requirements If you look at paging links in ASP.NET they are always postback links with javascript:__doPostBack() calls that go back to the server. While that works fine and actually has some benefit – like the fact that paging saves changes to the page and posts them back – it's not very SEO friendly. Basically, if you use JavaScript-based navigation, no search engine will follow the paging links, which effectively cuts off list content on the first page. The DataPager control does support GET based links via the QueryStringParameter property, but the control is effectively tied to the ListView control (which is the only control that implements IPageableItemContainer). DataSource Controls required for Efficient Data Paging Retrieval The only way to get paging to work efficiently – where only the few records you display on the page are queried for and retrieved from the database – is to use a DataSource control, and only the Linq and Entity DataSource controls support this natively. While you can retrieve this data yourself manually, there's no way to just assign the page number and render the pager based on this custom subset. Other than that, default paging requires a full resultset for ASP.NET to filter the data and display only a subset, which can be very resource intensive and wasteful if you're dealing with largish resultsets (although I'm a firm believer in returning actually usable sets :-}). If you use your own business layer that doesn't fit an ObjectDataSource you're SOL. That's a real shame too because with LINQ based querying it's real easy to retrieve a subset of data that is just the data you want to display, but the native pager functionality doesn't support just setting properties to display just that subset AFAIK. DataPager is not Free Standing The DataPager control is the closest thing to a decent Pager implementation that ASP.NET has, but alas it's not a free standing component – it works off a related control and the only one that it effectively supports from the stock ASP.NET controls is the ListView control. This means you can't use the same data pager formatting for a grid and a list view or vice versa, and you're always tied to the control. 
Paging Events In order to handle paging you have to deal with paging events. The events fire at specific time instances in the page pipeline and because of this you often have to handle data binding in a way to work around the paging events or else end up double binding your data sources based on paging. Yuk. Styling The GridView pager is a royal pain to beat into submission for styled rendering. The DataPager control has many more options and template layout and it renders somewhat cleaner, but it too is not exactly easy to get a decent display for. Not a Generic Solution The problem with the ASP.NET controls too is that it’s not generic. GridView, DataGrid use their own internal paging, ListView can use a DataPager and if you want to manually create data layout – well you’re on your own. IOW, depending on what you use you likely have very different looking Paging experiences. So, I figured I’ve struggled with this once too many and finally sat down and built a Pager control. The Pager Control My goal was to create a totally free standing control that has no dependencies on other controls and certainly no requirements for using DataSource controls. The idea is that you should be able to use this pager control without any sort of data requirements at all – you should just be able to set properties and be able to display a pager. The Pager control I ended up with has the following features: Completely free standing Pager control – no control or data dependencies Complete manual control – Pager can render without any data dependency Easy to use: Only need to set PageSize, ActivePage and TotalItems Supports optional filtering of IQueryable for efficient queries and Pager rendering Supports optional full set filtering of IEnumerable<T> and DataTable Page links are plain HTTP GET href Links Control automatically picks up Page links on the URL and assigns them (automatic page detection no page index changing events to hookup) Full CSS Styling support On the downside there’s no templating support for the control so the layout of the pager is relatively fixed. All elements however are stylable and there are options to control the text, and layout options such as whether to display first and last pages and the previous/next buttons and so on. To give you an idea what the pager looks like, here are two differently styled examples (all via CSS):   The markup for these two pagers looks like this: <ww:Pager runat="server" id="ItemPager" PageSize="5" PageLinkCssClass="gridpagerbutton" SelectedPageCssClass="gridpagerbutton-selected" PagesTextCssClass="gridpagertext" CssClass="gridpager" RenderContainerDiv="true" ContainerDivCssClass="gridpagercontainer" MaxPagesToDisplay="6" PagesText="Item Pages:" NextText="next" PreviousText="previous" /> <ww:Pager runat="server" id="ItemPager2" PageSize="5" RenderContainerDiv="true" MaxPagesToDisplay="6" /> The latter example uses default style settings so it there’s not much to set. The first example on the other hand explicitly assigns custom styles and overrides a few of the formatting options. Styling The styling is based on a number of CSS classes of which the the main pager, pagerbutton and pagerbutton-selected classes are the important ones. Other styles like pagerbutton-next/prev/first/last are based on the pagerbutton style. 
The default styling shown for the red outlined pager looks like this: .pagercontainer { margin: 20px 0; background: whitesmoke; padding: 5px; } .pager { float: right; font-size: 10pt; text-align: left; } .pagerbutton,.pagerbutton-selected,.pagertext { display: block; float: left; text-align: center; border: solid 2px maroon; min-width: 18px; margin-left: 3px; text-decoration: none; padding: 4px; } .pagerbutton-selected { font-size: 130%; font-weight: bold; color: maroon; border-width: 0px; background: khaki; } .pagerbutton-first { margin-right: 12px; } .pagerbutton-last,.pagerbutton-prev { margin-left: 12px; } .pagertext { border: none; margin-left: 30px; font-weight: bold; } .pagerbutton a { text-decoration: none; } .pagerbutton:hover { background-color: maroon; color: cornsilk; } .pagerbutton-prev { background-image: url(images/prev.png); background-position: 2px center; background-repeat: no-repeat; width: 35px; padding-left: 20px; } .pagerbutton-next { background-image: url(images/next.png); background-position: 40px center; background-repeat: no-repeat; width: 35px; padding-right: 20px; margin-right: 0px; } Yup that's a lot of styling settings, although not all of them are required. The key ones are pagerbutton, pager and pagerbutton-selected. The others (which are implicitly created by the control based on the pagerbutton style) are for custom markup of the 'special' buttons. In my apps I tend to have two kinds of pages: those that are associated with typical 'grid' displays that display purely tabular data, and those that have a looser, list-like layout. The two pagers shown above represent these two views, and the pager and gridpager styles in my standard style sheet reflect these two styles. Configuring the Pager with Code Finally let's look at what it takes to hook up the pager. As mentioned in the highlights the Pager control is completely independent of other controls, so if you just want to display a pager on its own it's as simple as dropping the control and assigning the PageSize, ActivePage and either TotalPages or TotalItems. So for this markup: <ww:Pager runat="server" id="ItemPagerManual" PageSize="5" MaxPagesToDisplay="6" /> I can use code as simple as: ItemPagerManual.PageSize = 3; ItemPagerManual.ActivePage = 4; ItemPagerManual.TotalItems = 20; Note that ActivePage is not required - it will automatically use any Page=x query string value and assign it, although you can override it as I did above. TotalItems can be any value that you retrieve from a result set or manually assign as I did above. A more realistic scenario based on a LINQ to SQL IQueryable result is even easier. In this example, I have a UserControl that contains a ListView control that renders IQueryable data. I use a User Control here because there are different views the user can choose from, with each view being a different user control. This incidentally also highlights one of the nice features of the pager: because the pager is independent of the control I can put the pager on the host page instead of into each of the user controls. IOW, there's only one Pager control, but there are potentially many user controls/listviews that hold the actual display data. The following code demonstrates how to use the Pager with an IQueryable that loads only the records it displays: protected void Page_Load(object sender, EventArgs e) {     Category = Request.Params["Category"] ??
string.Empty;     IQueryable<wws_Item> ItemList = ItemRepository.GetItemsByCategory(Category);     // Update the page and filter the list down     ItemList = ItemPager.FilterIQueryable<wws_Item>(ItemList); // Render user control with a list view Control ulItemList = LoadControl("~/usercontrols/" + App.Configuration.ItemListType + ".ascx"); ((IInventoryItemListControl)ulItemList).InventoryItemList = ItemList; phItemList.Controls.Add(ulItemList); // placeholder } The code uses a business object to retrieve Items by category as an IQueryable which means that the result is only an expression tree that hasn’t execute SQL yet and can be further filtered. I then pass this IQueryable to the FilterIQueryable() helper method of the control which does two main things: Filters the IQueryable to retrieve only the data displayed on the active page Sets the Totaltems property and calculates TotalPages on the Pager and that’s it! When the Pager renders it uses those values, plus the PageSize and ActivePage properties to render the Pager. In addition to IQueryable there are also filter methods for IEnumerable<T> and DataTable, but these versions just filter the data by removing rows/items from the entire already retrieved data. Output Generated and Paging Links The output generated creates pager links as plain href links. Here’s what the output looks like: <div id="ItemPager" class="pagercontainer"> <div class="pager"> <span class="pagertext">Pages: </span><a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=1" class="pagerbutton" />1</a> <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=2" class="pagerbutton" />2</a> <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=3" class="pagerbutton" />3</a> <span class="pagerbutton-selected">4</span> <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=5" class="pagerbutton" />5</a> <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=6" class="pagerbutton" />6</a> <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=20" class="pagerbutton pagerbutton-last" />20</a>&nbsp;<a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=3" class="pagerbutton pagerbutton-prev" />Prev</a>&nbsp;<a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=5" class="pagerbutton pagerbutton-next" />Next</a></div> <br clear="all" /> </div> </div> The links point back to the current page and simply append a Page= page link into the page. When the page gets reloaded with the new page number the pager automatically detects the page number and automatically assigns the ActivePage property which results in the appropriate page to be displayed. The code shown in the previous section is all that’s needed to handle paging. Note that HTTP GET based paging is different than the Postback paging ASP.NET uses by default. Postback paging preserves modified page content when clicking on pager buttons, but this control will simply load a new page – no page preservation at this time. The advantage of not using Postback paging is that the URLs generated are plain HTML links that a search engine can follow where __doPostback() links are not. Pager with a Grid The pager also works in combination with grid controls so it’s easy to bypass the grid control’s paging features if desired. In the following example I use a gridView control and binds it to a DataTable result which is also filterable by the Pager control. 
The very basic plain vanilla ASP.NET grid markup looks like this: <div style="width: 600px; margin: 0 auto;padding: 20px; "> <asp:DataGrid runat="server" AutoGenerateColumns="True" ID="gdItems" CssClass="blackborder" style="width: 600px;"> <AlternatingItemStyle CssClass="gridalternate" /> <HeaderStyle CssClass="gridheader" /> </asp:DataGrid> <ww:Pager runat="server" ID="Pager" CssClass="gridpager" ContainerDivCssClass="gridpagercontainer" PageLinkCssClass="gridpagerbutton" SelectedPageCssClass="gridpagerbutton-selected" PageSize="8" RenderContainerDiv="true" MaxPagesToDisplay="6" /> </div> and looks like this when rendered: using custom set of CSS styles. The code behind for this code is also very simple: protected void Page_Load(object sender, EventArgs e) { string category = Request.Params["category"] ?? ""; busItem itemRep = WebStoreFactory.GetItem(); var items = itemRep.GetItemsByCategory(category) .Select(itm => new {Sku = itm.Sku, Description = itm.Description}); // run query into a DataTable for demonstration DataTable dt = itemRep.Converter.ToDataTable(items,"TItems"); // Remove all items not on the current page dt = Pager.FilterDataTable(dt,0); // bind and display gdItems.DataSource = dt; gdItems.DataBind(); } A little contrived I suppose since the list could already be bound from the list of elements, but this is to demonstrate that you can also bind against a DataTable if your business layer returns those. Unfortunately there’s no way to filter a DataReader as it’s a one way forward only reader and the reader is required by the DataSource to perform the bindings.  However, you can still use a DataReader as long as your business logic filters the data prior to rendering and provides a total item count (most likely as a second query). Control Creation The control itself is a pretty brute force ASP.NET control. Nothing clever about this other than some basic rendering logic and some simple calculations and update routines to determine which buttons need to be shown. You can take a look at the full code from the West Wind Web Toolkit’s Repository (note there are a few dependencies). 
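Before diving into the rendering code, here is a rough outline of the control's public surface as it is used throughout this post. This is inferred from the examples above rather than taken from the actual source, so names and the base class may differ slightly in the real Pager implementation; the method bodies are stubbed out because the real Render() and FilterIQueryable() implementations appear below.

using System;
using System.Data;
using System.Linq;
using System.Web.UI;
using System.Web.UI.WebControls;

// Approximate shape of the Pager control, reconstructed from its usage in
// this post. Only a subset of the members exercised by the samples is listed.
public class Pager : WebControl
{
    // Core paging state
    public int PageSize { get; set; }
    public int ActivePage { get; set; }
    public int TotalItems { get; set; }
    public int TotalPages { get; set; }
    public int MaxPagesToDisplay { get; set; }

    // Rendering and styling options
    public bool RenderContainerDiv { get; set; }
    public string ContainerDivCssClass { get; set; }
    public string PageLinkCssClass { get; set; }
    public string SelectedPageCssClass { get; set; }
    public string PagesTextCssClass { get; set; }
    public string PagesText { get; set; }
    public string PreviousText { get; set; }
    public string NextText { get; set; }

    // Overload implied by the single-argument call shown earlier;
    // passing 0 tells the control to use its own ActivePage setting.
    public IQueryable<T> FilterIQueryable<T>(IQueryable<T> query) where T : class, new()
    {
        return FilterIQueryable(query, 0);
    }

    // Helpers that filter a result set down to the active page and set
    // TotalItems/TotalPages as a side effect (implementations shown below).
    public IQueryable<T> FilterIQueryable<T>(IQueryable<T> query, int activePage) where T : class, new()
    {
        throw new NotImplementedException();
    }

    public DataTable FilterDataTable(DataTable dt, int activePage)
    {
        throw new NotImplementedException();
    }

    protected override void Render(HtmlTextWriter writer)
    {
        throw new NotImplementedException();
    }
}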
To give you an idea how the control works here is the Render() method: /// <summary> /// overridden to handle custom pager rendering for runtime and design time /// </summary> /// <param name="writer"></param> protected override void Render(HtmlTextWriter writer) { base.Render(writer); if (TotalPages == 0 && TotalItems > 0) TotalPages = CalculateTotalPagesFromTotalItems(); if (DesignMode) TotalPages = 10; // don't render pager if there's only one page if (TotalPages < 2) return; if (RenderContainerDiv) { if (!string.IsNullOrEmpty(ContainerDivCssClass)) writer.AddAttribute("class", ContainerDivCssClass); writer.RenderBeginTag("div"); } // main pager wrapper writer.WriteBeginTag("div"); writer.AddAttribute("id", this.ClientID); if (!string.IsNullOrEmpty(CssClass)) writer.WriteAttribute("class", this.CssClass); writer.Write(HtmlTextWriter.TagRightChar + "\r\n"); // Pages Text writer.WriteBeginTag("span"); if (!string.IsNullOrEmpty(PagesTextCssClass)) writer.WriteAttribute("class", PagesTextCssClass); writer.Write(HtmlTextWriter.TagRightChar); writer.Write(this.PagesText); writer.WriteEndTag("span"); // if the base url is empty use the current URL FixupBaseUrl(); // set _startPage and _endPage ConfigurePagesToRender(); // write out first page link if (ShowFirstAndLastPageLinks && _startPage != 1) { writer.WriteBeginTag("a"); string pageUrl = StringUtils.SetUrlEncodedKey(BaseUrl, QueryStringPageField, (1).ToString()); writer.WriteAttribute("href", pageUrl); if (!string.IsNullOrEmpty(PageLinkCssClass)) writer.WriteAttribute("class", PageLinkCssClass + " " + PageLinkCssClass + "-first"); writer.Write(HtmlTextWriter.SelfClosingTagEnd); writer.Write("1"); writer.WriteEndTag("a"); writer.Write("&nbsp;"); } // write out all the page links for (int i = _startPage; i < _endPage + 1; i++) { if (i == ActivePage) { writer.WriteBeginTag("span"); if (!string.IsNullOrEmpty(SelectedPageCssClass)) writer.WriteAttribute("class", SelectedPageCssClass); writer.Write(HtmlTextWriter.TagRightChar); writer.Write(i.ToString()); writer.WriteEndTag("span"); } else { writer.WriteBeginTag("a"); string pageUrl = StringUtils.SetUrlEncodedKey(BaseUrl, QueryStringPageField, i.ToString()).TrimEnd('&'); writer.WriteAttribute("href", pageUrl); if (!string.IsNullOrEmpty(PageLinkCssClass)) writer.WriteAttribute("class", PageLinkCssClass); writer.Write(HtmlTextWriter.SelfClosingTagEnd); writer.Write(i.ToString()); writer.WriteEndTag("a"); } writer.Write("\r\n"); } // write out last page link if (ShowFirstAndLastPageLinks && _endPage < TotalPages) { writer.WriteBeginTag("a"); string pageUrl = StringUtils.SetUrlEncodedKey(BaseUrl, QueryStringPageField, TotalPages.ToString()); writer.WriteAttribute("href", pageUrl); if (!string.IsNullOrEmpty(PageLinkCssClass)) writer.WriteAttribute("class", PageLinkCssClass + " " + PageLinkCssClass + "-last"); writer.Write(HtmlTextWriter.SelfClosingTagEnd); writer.Write(TotalPages.ToString()); writer.WriteEndTag("a"); } // Previous link if (ShowPreviousNextLinks && !string.IsNullOrEmpty(PreviousText) && ActivePage > 1) { writer.Write("&nbsp;"); writer.WriteBeginTag("a"); string pageUrl = StringUtils.SetUrlEncodedKey(BaseUrl, QueryStringPageField, (ActivePage - 1).ToString()); writer.WriteAttribute("href", pageUrl); if (!string.IsNullOrEmpty(PageLinkCssClass)) writer.WriteAttribute("class", PageLinkCssClass + " " + PageLinkCssClass + "-prev"); writer.Write(HtmlTextWriter.SelfClosingTagEnd); writer.Write(PreviousText); writer.WriteEndTag("a"); } // Next link if (ShowPreviousNextLinks && 
!string.IsNullOrEmpty(NextText) && ActivePage < TotalPages) { writer.Write("&nbsp;"); writer.WriteBeginTag("a"); string pageUrl = StringUtils.SetUrlEncodedKey(BaseUrl, QueryStringPageField, (ActivePage + 1).ToString()); writer.WriteAttribute("href", pageUrl); if (!string.IsNullOrEmpty(PageLinkCssClass)) writer.WriteAttribute("class", PageLinkCssClass + " " + PageLinkCssClass + "-next"); writer.Write(HtmlTextWriter.SelfClosingTagEnd); writer.Write(NextText); writer.WriteEndTag("a"); } writer.WriteEndTag("div"); if (RenderContainerDiv) { if (RenderContainerDivBreak) writer.Write("<br clear=\"all\" />\r\n"); writer.WriteEndTag("div"); } } As I said pretty much brute force rendering based on the control’s property settings of which there are quite a few: You can also see the pager in the designer above. unfortunately the VS designer (both 2010 and 2008) fails to render the float: left CSS styles properly and starts wrapping after margins are applied in the special buttons. Not a big deal since VS does at least respect the spacing (the floated elements overlay). Then again I’m not using the designer anyway :-}. Filtering Data What makes the Pager easy to use is the filter methods built into the control. While this functionality is clearly not the most politically correct design choice as it violates separation of concerns, it’s very useful for typical pager operation. While I actually have filter methods that do something similar in my business layer, having it exposed on the control makes the control a lot more useful for typical databinding scenarios. Of course these methods are optional – if you have a business layer that can provide filtered page queries for you can use that instead and assign the TotalItems property manually. There are three filter method types available for IQueryable, IEnumerable and for DataTable which tend to be the most common use cases in my apps old and new. The IQueryable version is pretty simple as it can simply rely on on .Skip() and .Take() with LINQ: /// <summary> /// <summary> /// Queries the database for the ActivePage applied manually /// or from the Request["page"] variable. This routine /// figures out and sets TotalPages, ActivePage and /// returns a filtered subset IQueryable that contains /// only the items from the ActivePage. /// </summary> /// <param name="query"></param> /// <param name="activePage"> /// The page you want to display. Sets the ActivePage property when passed. /// Pass 0 or smaller to use ActivePage setting. /// </param> /// <returns></returns> public IQueryable<T> FilterIQueryable<T>(IQueryable<T> query, int activePage) where T : class, new() { ActivePage = activePage < 1 ? ActivePage : activePage; if (ActivePage < 1) ActivePage = 1; TotalItems = query.Count(); if (TotalItems <= PageSize) { ActivePage = 1; TotalPages = 1; return query; } int skip = ActivePage - 1; if (skip > 0) query = query.Skip(skip * PageSize); _TotalPages = CalculateTotalPagesFromTotalItems(); return query.Take(PageSize); } The IEnumerable<T> version simply  converts the IEnumerable to an IQuerable and calls back into this method for filtering. The DataTable version requires a little more work to manually parse and filter records (I didn’t want to add the Linq DataSetExtensions assembly just for this): /// <summary> /// Filters a data table for an ActivePage. /// /// Note: Modifies the data set permanently by remove DataRows /// </summary> /// <param name="dt">Full result DataTable</param> /// <param name="activePage">Page to display. 
0 to use ActivePage property </param> /// <returns></returns> public DataTable FilterDataTable(DataTable dt, int activePage) { ActivePage = activePage < 1 ? ActivePage : activePage; if (ActivePage < 1) ActivePage = 1; TotalItems = dt.Rows.Count; if (TotalItems <= PageSize) { ActivePage = 1; TotalPages = 1; return dt; } int skip = ActivePage - 1; if (skip > 0) { for (int i = 0; i < skip * PageSize; i++ ) dt.Rows.RemoveAt(0); } while(dt.Rows.Count > PageSize) dt.Rows.RemoveAt(PageSize); return dt; } Using the Pager Control The pager as it is is a first cut I built a couple of weeks ago and since then have been tweaking a little as part of an internal project I’m working on. I’ve replaced a bunch of pagers on various older pages with this pager without any issues and have what now feels like a more consistent user interface where paging looks and feels the same across different controls. As a bonus I’m only loading the data from the database that I need to display a single page. With the preset class tags applied too adding a pager is now as easy as dropping the control and adding the style sheet for styling to be consistent – no fuss, no muss. Schweet. Hopefully some of you may find this as useful as I have or at least as a baseline to build ontop of… Resources The Pager is part of the West Wind Web & Ajax Toolkit Pager.cs Source Code (some toolkit dependencies) Westwind.css base stylesheet with .pager and .gridpager styles Pager Example Page © Rick Strahl, West Wind Technologies, 2005-2010Posted in ASP.NET  

    Read the article

  • Using the West Wind Web Toolkit to set up AJAX and REST Services

    - by Rick Strahl
    I frequently get questions about which option to use for creating AJAX and REST backends for ASP.NET applications. There are many solutions out there to do this actually, but when I have a choice - not surprisingly - I fall back to my own tools in the West Wind West Wind Web Toolkit. I've talked a bunch about the 'in-the-box' solutions in the past so for a change in this post I'll talk about the tools that I use in my own and customer applications to handle AJAX and REST based access to service resources using the West Wind West Wind Web Toolkit. Let me preface this by saying that I like things to be easy. Yes flexible is very important as well but not at the expense of over-complexity. The goal I've had with my tools is make it drop dead easy, with good performance while providing the core features that I'm after, which are: Easy AJAX/JSON Callbacks Ability to return any kind of non JSON content (string, stream, byte[], images) Ability to work with both XML and JSON interchangeably for input/output Access endpoints via POST data, RPC JSON calls, GET QueryString values or Routing interface Easy to use generic JavaScript client to make RPC calls (same syntax, just what you need) Ability to create clean URLS with Routing Ability to use standard ASP.NET HTTP Stack for HTTP semantics It's all about options! In this post I'll demonstrate most of these features (except XML) in a few simple and short samples which you can download. So let's take a look and see how you can build an AJAX callback solution with the West Wind Web Toolkit. Installing the Toolkit Assemblies The easiest and leanest way of using the Toolkit in your Web project is to grab it via NuGet: West Wind Web and AJAX Utilities (Westwind.Web) and drop it into the project by right clicking in your Project and choosing Manage NuGet Packages from anywhere in the Project.   When done you end up with your project looking like this: What just happened? Nuget added two assemblies - Westwind.Web and Westwind.Utilities and the client ww.jquery.js library. It also added a couple of references into web.config: The default namespaces so they can be accessed in pages/views and a ScriptCompressionModule that the toolkit optionally uses to compress script resources served from within the assembly (namely ww.jquery.js and optionally jquery.js). Creating a new Service The West Wind Web Toolkit supports several ways of creating and accessing AJAX services, but for this post I'll stick to the lower level approach that works from any plain HTML page or of course MVC, WebForms, WebPages. There's also a WebForms specific control that makes this even easier but I'll leave that for another post. So, to create a new standalone AJAX/REST service we can create a new HttpHandler in the new project either as a pure class based handler or as a generic .ASHX handler. Both work equally well, but generic handlers don't require any web.config configuration so I'll use that here. In the root of the project add a Generic Handler. I'm going to call this one StockService.ashx. Once the handler has been created, edit the code and remove all of the handler body code. Then change the base class to CallbackHandler and add methods that have a [CallbackMethod] attribute. 
Here's the modified base handler implementation now looks like with an added HelloWorld method: using System; using Westwind.Web; namespace WestWindWebAjax { /// <summary> /// Handler implements CallbackHandler to provide REST/AJAX services /// </summary> public class SampleService : CallbackHandler { [CallbackMethod] public string HelloWorld(string name) { return "Hello " + name + ". Time is: " + DateTime.Now.ToString(); } } } Notice that the class inherits from CallbackHandler and that the HelloWorld service method is marked up with [CallbackMethod]. We're done here. Services Urlbased Syntax Once you compile, the 'service' is live can respond to requests. All CallbackHandlers support input in GET and POST formats, and can return results as JSON or XML. To check our fancy HelloWorld method we can now access the service like this: http://localhost/WestWindWebAjax/StockService.ashx?Method=HelloWorld&name=Rick which produces a default JSON response - in this case a string (wrapped in quotes as it's JSON): (note by default JSON will be downloaded by most browsers not displayed - various options are available to view JSON right in the browser) If I want to return the same data as XML I can tack on a &format=xml at the end of the querystring which produces: <string>Hello Rick. Time is: 11/1/2011 12:11:13 PM</string> Cleaner URLs with Routing Syntax If you want cleaner URLs for each operation you can also configure custom routes on a per URL basis similar to the way that WCF REST does. To do this you need to add a new RouteHandler to your application's startup code in global.asax.cs one for each CallbackHandler based service you create: protected void Application_Start(object sender, EventArgs e) { CallbackHandlerRouteHandler.RegisterRoutes<StockService>(RouteTable.Routes); } With this code in place you can now add RouteUrl properties to any of your service methods. For the HelloWorld method that doesn't make a ton of sense but here is what a routed clean URL might look like in definition: [CallbackMethod(RouteUrl="stocks/HelloWorld/{name}")] public string HelloWorld(string name) { return "Hello " + name + ". Time is: " + DateTime.Now.ToString(); } The same URL I previously used now becomes a bit shorter and more readable with: http://localhost/WestWindWebAjax/HelloWorld/Rick It's an easy way to create cleaner URLs and still get the same functionality. Calling the Service with $.getJSON() Since the result produced is JSON you can now easily consume this data using jQuery's getJSON method. First we need a couple of scripts - jquery.js and ww.jquery.js in the page: <!DOCTYPE html> <html> <head> <link href="Css/Westwind.css" rel="stylesheet" type="text/css" /> <script src="scripts/jquery.min.js" type="text/javascript"></script> <script src="scripts/ww.jquery.min.js" type="text/javascript"></script> </head> <body> Next let's add a small HelloWorld example form (what else) that has a single textbox to type a name, a button and a div tag to receive the result: <fieldset> <legend>Hello World</legend> Please enter a name: <input type="text" name="txtHello" id="txtHello" value="" /> <input type="button" id="btnSayHello" value="Say Hello (POST)" /> <input type="button" id="btnSayHelloGet" value="Say Hello (GET)" /> <div id="divHelloMessage" class="errordisplay" style="display:none;width: 450px;" > </div> </fieldset> Then to call the HelloWorld method a little jQuery is used to hook the document startup and the button click followed by the $.getJSON call to retrieve the data from the server. 
<script type="text/javascript"> $(document).ready(function () { $("#btnSayHelloGet").click(function () { $.getJSON("SampleService.ashx", { Method: "HelloWorld", name: $("#txtHello").val() }, function (result) { $("#divHelloMessage") .text(result) .fadeIn(1000); }); });</script> .getJSON() expects a full URL to the endpoint of our service, which is the ASHX file. We can either provide a full URL (SampleService.ashx?Method=HelloWorld&name=Rick) or we can just provide the base URL and an object that encodes the query string parameters for us using an object map that has a property that matches each parameter for the server method. We can also use the clean URL routing syntax, but using the object parameter encoding actually is safer as the parameters will get properly encoded by jQuery. The result returned is whatever the result on the server method is - in this case a string. The string is applied to the divHelloMessage element and we're done. Obviously this is a trivial example, but it demonstrates the basics of getting a JSON response back to the browser. AJAX Post Syntax - using ajaxCallMethod() The previous example allows you basic control over the data that you send to the server via querystring parameters. This works OK for simple values like short strings, numbers and boolean values, but doesn't really work if you need to pass something more complex like an object or an array back up to the server. To handle traditional RPC type messaging where the idea is to map server side functions and results to a client side invokation, POST operations can be used. The easiest way to use this functionality is to use ww.jquery.js and the ajaxCallMethod() function. ww.jquery wraps jQuery's AJAX functions and knows implicitly how to call a CallbackServer method with parameters and parse the result. Let's look at another simple example that posts a simple value but returns something more interesting. Let's start with the service method: [CallbackMethod(RouteUrl="stocks/{symbol}")] public StockQuote GetStockQuote(string symbol) { Response.Cache.SetExpires(DateTime.UtcNow.Add(new TimeSpan(0, 2, 0))); StockServer server = new StockServer(); var quote = server.GetStockQuote(symbol); if (quote == null) throw new ApplicationException("Invalid Symbol passed."); return quote; } This sample utilizes a small StockServer helper class (included in the sample) that downloads a stock quote from Yahoo's financial site via plain HTTP GET requests and formats it into a StockQuote object. 
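The StockQuote class itself isn't listed in the post; judging from the JSON payload shown a little later, it presumably looks roughly like the sketch below. The property names come from that serialized output, but the numeric types and the way LastQuoteTimeString is produced are guesses, so the class in the actual sample download may differ.

using System;

// Guessed shape of the StockQuote DTO returned by GetStockQuote(),
// reconstructed from the JSON shown later in this post.
public class StockQuote
{
    public string Symbol { get; set; }
    public string Company { get; set; }
    public decimal OpenPrice { get; set; }
    public decimal LastPrice { get; set; }
    public decimal NetChange { get; set; }
    public DateTime LastQuoteTime { get; set; }

    // Pre-formatted date text carried alongside the real date, because JSON
    // parsers don't turn ISO date strings into Date objects on their own.
    public string LastQuoteTimeString { get; set; }
}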
Lets create a small HTML block that lets us query for the quote and display it: <fieldset> <legend>Single Stock Quote</legend> Please enter a stock symbol: <input type="text" name="txtSymbol" id="txtSymbol" value="msft" /> <input type="button" id="btnStockQuote" value="Get Quote" /> <div id="divStockDisplay" class="errordisplay" style="display:none; width: 450px;"> <div class="label-left">Company:</div> <div id="stockCompany"></div> <div class="label-left">Last Price:</div> <div id="stockLastPrice"></div> <div class="label-left">Quote Time:</div> <div id="stockQuoteTime"></div> </div> </fieldset> The final result looks something like this:   Let's hook up the button handler to fire the request and fill in the data as shown: $("#btnStockQuote").click(function () { ajaxCallMethod("SampleService.ashx", "GetStockQuote", [$("#txtSymbol").val()], function (quote) { $("#divStockDisplay").show().fadeIn(1000); $("#stockCompany").text(quote.Company + " (" + quote.Symbol + ")"); $("#stockLastPrice").text(quote.LastPrice); $("#stockQuoteTime").text(quote.LastQuoteTime.formatDate("MMM dd, HH:mm EST")); }, onPageError); }); So we point at SampleService.ashx and the GetStockQuote method, passing a single parameter of the input symbol value. Then there are two handlers for success and failure callbacks.  The success handler is the interesting part - it receives the stock quote as a result and assigns its values to various 'holes' in the stock display elements. The data that comes back over the wire is JSON and it looks like this: { "Symbol":"MSFT", "Company":"Microsoft Corpora", "OpenPrice":26.11, "LastPrice":26.01, "NetChange":0.02, "LastQuoteTime":"2011-11-03T02:00:00Z", "LastQuoteTimeString":"Nov. 11, 2011 4:20pm" } which is an object representation of the data. JavaScript can evaluate this JSON string back into an object easily and that's the reslut that gets passed to the success function. The quote data is then applied to existing page content by manually selecting items and applying them. There are other ways to do this more elegantly like using templates, but here we're only interested in seeing how the data is returned. The data in the object is typed - LastPrice is a number and QuoteTime is a date. Note about the date value: JavaScript doesn't have a date literal although the JSON embedded ISO string format used above  ("2011-11-03T02:00:00Z") is becoming fairly standard for JSON serializers. However, JSON parsers don't deserialize dates by default and return them by string. This is why the StockQuote actually returns a string value of LastQuoteTimeString for the same date. ajaxMethodCallback always converts dates properly into 'real' dates and the example above uses the real date value along with a .formatDate() data extension (also in ww.jquery.js) to display the raw date properly. Errors and Exceptions So what happens if your code fails? For example if I pass an invalid stock symbol to the GetStockQuote() method you notice that the code does this: if (quote == null) throw new ApplicationException("Invalid Symbol passed."); CallbackHandler automatically pushes the exception message back to the client so it's easy to pick up the error message. Regardless of what kind of error occurs: Server side, client side, protocol errors - any error will fire the failure handler with an error object parameter. The error is returned to the client via a JSON response in the error callback. 
In the previous examples I called onPageError which is a generic routine in ww.jquery that displays a status message on the bottom of the screen. But of course you can also take over the error handling yourself: $("#btnStockQuote").click(function () { ajaxCallMethod("SampleService.ashx", "GetStockQuote", [$("#txtSymbol").val()], function (quote) { $("#divStockDisplay").fadeIn(1000); $("#stockCompany").text(quote.Company + " (" + quote.Symbol + ")"); $("#stockLastPrice").text(quote.LastPrice); $("#stockQuoteTime").text(quote.LastQuoteTime.formatDate("MMM dd, hh:mmt")); }, function (error, xhr) { $("#divErrorDisplay").text(error.message).fadeIn(1000); }); }); The error object has a isCallbackError, message and  stackTrace properties, the latter of which is only populated when running in Debug mode, and this object is returned for all errors: Client side, transport and server side errors. Regardless of which type of error you get the same object passed (as well as the XHR instance optionally) which makes for a consistent error retrieval mechanism. Specifying HttpVerbs You can also specify HTTP Verbs that are allowed using the AllowedHttpVerbs option on the CallbackMethod attribute: [CallbackMethod(AllowedHttpVerbs=HttpVerbs.GET | HttpVerbs.POST)] public string HelloWorld(string name) { … } If you're building REST style API's this might be useful to force certain request semantics onto the client calling. For the above if call with a non-allowed HttpVerb the request returns a 405 error response along with a JSON (or XML) error object result. The default behavior is to allow all verbs access (HttpVerbs.All). Passing in object Parameters Up to now the parameters I passed were very simple. But what if you need to send something more complex like an object or an array? Let's look at another example now that passes an object from the client to the server. Keeping with the Stock theme here lets add a method called BuyOrder that lets us buy some shares for a stock. Consider the following service method that receives an StockBuyOrder object as a parameter: [CallbackMethod] public string BuyStock(StockBuyOrder buyOrder) { var server = new StockServer(); var quote = server.GetStockQuote(buyOrder.Symbol); if (quote == null) throw new ApplicationException("Invalid or missing stock symbol."); return string.Format("You're buying {0} shares of {1} ({2}) stock at {3} for a total of {4} on {5}.", buyOrder.Quantity, quote.Company, quote.Symbol, quote.LastPrice.ToString("c"), (quote.LastPrice * buyOrder.Quantity).ToString("c"), buyOrder.BuyOn.ToString("MMM d")); } public class StockBuyOrder { public string Symbol { get; set; } public int Quantity { get; set; } public DateTime BuyOn { get; set; } public StockBuyOrder() { BuyOn = DateTime.Now; } } This is a contrived do-nothing example that simply echoes back what was passed in, but it demonstrates how you can pass complex data to a callback method. 
On the client side we now have a very simple form that captures the three values on a form: <fieldset> <legend>Post a Stock Buy Order</legend> Enter a symbol: <input type="text" name="txtBuySymbol" id="txtBuySymbol" value="GLD" />&nbsp;&nbsp; Qty: <input type="text" name="txtBuyQty" id="txtBuyQty" value="10" style="width: 50px" />&nbsp;&nbsp; Buy on: <input type="text" name="txtBuyOn" id="txtBuyOn" value="<%= DateTime.Now.ToString("d") %>" style="width: 70px;" /> <input type="button" id="btnBuyStock" value="Buy Stock" /> <div id="divStockBuyMessage" class="errordisplay" style="display:none"></div> </fieldset> The completed form and demo then looks something like this:   The client side code that picks up the input values and assigns them to object properties and sends the AJAX request looks like this: $("#btnBuyStock").click(function () { // create an object map that matches StockBuyOrder signature var buyOrder = { Symbol: $("#txtBuySymbol").val(), Quantity: $("#txtBuyQty").val() * 1, // number Entered: new Date() } ajaxCallMethod("SampleService.ashx", "BuyStock", [buyOrder], function (result) { $("#divStockBuyMessage").text(result).fadeIn(1000); }, onPageError); }); The code creates an object and attaches the properties that match the server side object passed to the BuyStock method. Each property that you want to update needs to be included and the type must match (ie. string, number, date in this case). Any missing properties will not be set but also not cause any errors. Pass POST data instead of Objects In the last example I collected a bunch of values from form variables and stuffed them into object variables in JavaScript code. While that works, often times this isn't really helping - I end up converting my types on the client and then doing another conversion on the server. If lots of input controls are on a page and you just want to pick up the values on the server via plain POST variables - that can be done too - and it makes sense especially if you're creating and filling the client side object only to push data to the server. Let's add another method to the server that once again lets us buy a stock. But this time let's not accept a parameter but rather send POST data to the server. Here's the server method receiving POST data: [CallbackMethod] public string BuyStockPost() { StockBuyOrder buyOrder = new StockBuyOrder(); buyOrder.Symbol = Request.Form["txtBuySymbol"]; ; int qty; int.TryParse(Request.Form["txtBuyQuantity"], out qty); buyOrder.Quantity = qty; DateTime time; DateTime.TryParse(Request.Form["txtBuyBuyOn"], out time); buyOrder.BuyOn = time; // Or easier way yet //FormVariableBinder.Unbind(buyOrder,null,"txtBuy"); var server = new StockServer(); var quote = server.GetStockQuote(buyOrder.Symbol); if (quote == null) throw new ApplicationException("Invalid or missing stock symbol."); return string.Format("You're buying {0} shares of {1} ({2}) stock at {3} for a total of {4} on {5}.", buyOrder.Quantity, quote.Company, quote.Symbol, quote.LastPrice.ToString("c"), (quote.LastPrice * buyOrder.Quantity).ToString("c"), buyOrder.BuyOn.ToString("MMM d")); } Clearly we've made this server method take more code than it did with the object parameter. We've basically moved the parameter assignment logic from the client to the server. As a result the client code to call this method is now a bit shorter since there's no client side shuffling of values from the controls to an object. 
$("#btnBuyStockPost").click(function () { ajaxCallMethod("SampleService.ashx", "BuyStockPost", [], // Note: No parameters - function (result) { $("#divStockBuyMessage").text(result).fadeIn(1000); }, onPageError, // Force all page Form Variables to be posted { postbackMode: "Post" }); }); The client simply calls the BuyStockQuote method and pushes all the form variables from the page up to the server which parses them instead. The feature that makes this work is one of the options you can pass to the ajaxCallMethod() function: { postbackMode: "Post" }); which directs the function to include form variable POST data when making the service call. Other options include PostNoViewState (for WebForms to strip out WebForms crap vars), PostParametersOnly (default), None. If you pass parameters those are always posted to the server except when None is set. The above code can be simplified a bit by using the FormVariableBinder helper, which can unbind form variables directly into an object: FormVariableBinder.Unbind(buyOrder,null,"txtBuy"); which replaces the manual Request.Form[] reading code. It receives the object to unbind into, a string of properties to skip, and an optional prefix which is stripped off form variables to match property names. The component is similar to the MVC model binder but it's independent of MVC. Returning non-JSON Data CallbackHandler also supports returning non-JSON/XML data via special return types. You can return raw non-JSON encoded strings like this: [CallbackMethod(ReturnAsRawString=true,ContentType="text/plain")] public string HelloWorldNoJSON(string name) { return "Hello " + name + ". Time is: " + DateTime.Now.ToString(); } Calling this method results in just a plain string - no JSON encoding with quotes around the result. This can be useful if your server handling code needs to return a string or HTML result that doesn't fit well for a page or other UI component. Any string output can be returned. You can also return binary data. Stream, byte[] and Bitmap/Image results are automatically streamed back to the client. Notice that you should set the ContentType of the request either on the CallbackMethod attribute or using Response.ContentType. This ensures the Web Server knows how to display your binary response. Using a stream response makes it possible to return any of data. Streamed data can be pretty handy to return bitmap data from a method. The following is a method that returns a stock history graph for a particular stock over a provided number of years: [CallbackMethod(ContentType="image/png",RouteUrl="stocks/history/graph/{symbol}/{years}")] public Stream GetStockHistoryGraph(string symbol, int years = 2,int width = 500, int height=350) { if (width == 0) width = 500; if (height == 0) height = 350; StockServer server = new StockServer(); return server.GetStockHistoryGraph(symbol,"Stock History for " + symbol,width,height,years); } I can now hook this up into the JavaScript code when I get a stock quote. At the end of the process I can assign the URL to the service that returns the image into the src property and so force the image to display. 
Here's the changed code:

$("#btnStockQuote").click(function () {
    var symbol = $("#txtSymbol").val();

    ajaxCallMethod("SampleService.ashx", "GetStockQuote", [symbol],
        function (quote) {
            $("#divStockDisplay").fadeIn(1000);
            $("#stockCompany").text(quote.Company + " (" + quote.Symbol + ")");
            $("#stockLastPrice").text(quote.LastPrice);
            $("#stockQuoteTime").text(quote.LastQuoteTime.formatDate("MMM dd, hh:mmt"));

            // display a stock chart
            $("#imgStockHistory").attr("src", "stocks/history/graph/" + symbol + "/2");
        }, onPageError);
});

The resulting output then looks like this:

The charting code uses the new ASP.NET 4.0 Chart components via code to display a bar chart of the two-year stock data as part of the StockServer class, which you can find in the sample download. The ability to return arbitrary data from a service is useful, as you can see - in this case the chart is clearly associated with the service, and it's nice that the graph generation can happen off a handler rather than through a page. Images are common resources, but the output can also be PDF reports, zip files for download and so on, which are increasingly commonly returned from REST endpoints and other applications.

Why reinvent?

Obviously the examples I've shown here are pretty basic in terms of functionality, but I hope they demonstrate the core features of AJAX callbacks that you need in most applications: return data, send data back, and potentially retrieve data in various formats. While there are other solutions for making AJAX callbacks and servicing REST-like requests, I like the flexibility my home-grown solution provides. Simply put, it's still the easiest solution I've found that addresses my common use cases:

- AJAX JSON RPC style callbacks
- URL-based access
- XML and JSON output from a single method endpoint
- XML and JSON POST support, querystring input, routing parameter mapping
- UrlEncoded POST data support on callbacks
- Ability to return stream/raw string data
- Essentially the ability to return anything from the service and pass anything to it

All these features are available in various solutions, but not together in one place. I've been using this code base for over 4 years now in a number of projects, both for myself and for commercial work, and it's served me extremely well. Besides the AJAX functionality CallbackHandler provides, it's also an easy way to create any kind of output endpoint I need. Need a few simple routines that spit back some data, but don't want to create a Page, View or full-blown handler for them? Create a CallbackHandler and add a method or multiple methods, and you have your generic endpoints. It's a quick and easy way to add small code pieces that are pretty efficient, as they run through a pretty small handler implementation. I can have this up and running in a couple of minutes, literally, without any setup and returning just about any kind of data.

Resources

- Download the Sample
- NuGet: Westwind Web and AJAX Utilities (Westwind.Web)
- ajaxCallMethod() Documentation
- Using the AjaxMethodCallback WebForms Control
- West Wind Web Toolkit Home Page
- West Wind Web Toolkit Source Code

© Rick Strahl, West Wind Technologies, 2005-2011. Posted in ASP.NET, jQuery, AJAX


  • Introduction to the ASP.NET Web API

    - by Stephen.Walther
    I am a huge fan of Ajax. If you want to create a great experience for the users of your website – regardless of whether you are building an ASP.NET MVC or an ASP.NET Web Forms site — then you need to use Ajax. Otherwise, you are just being cruel to your customers. We use Ajax extensively in several of the ASP.NET applications that my company, Superexpert.com, builds. We expose data from the server as JSON and use jQuery to retrieve and update that data from the browser. One challenge, when building an ASP.NET website, is deciding on which technology to use to expose JSON data from the server. For example, how do you expose a list of products from the server as JSON so you can retrieve the list of products with jQuery? You have a number of options (too many options) including ASMX Web services, WCF Web Services, ASHX Generic Handlers, WCF Data Services, and MVC controller actions. Fortunately, the world has just been simplified. With the release of ASP.NET 4 Beta, Microsoft has introduced a new technology for exposing JSON from the server named the ASP.NET Web API. You can use the ASP.NET Web API with both ASP.NET MVC and ASP.NET Web Forms applications. The goal of this blog post is to provide you with a brief overview of the features of the new ASP.NET Web API. You learn how to use the ASP.NET Web API to retrieve, insert, update, and delete database records with jQuery. We also discuss how you can perform form validation when using the Web API and use OData when using the Web API. Creating an ASP.NET Web API Controller The ASP.NET Web API exposes JSON data through a new type of controller called an API controller. You can add an API controller to an existing ASP.NET MVC 4 project through the standard Add Controller dialog box. Right-click your Controllers folder and select Add, Controller. In the dialog box, name your controller MovieController and select the Empty API controller template: A brand new API controller looks like this: using System; using System.Collections.Generic; using System.Linq; using System.Net.Http; using System.Web.Http; namespace MyWebAPIApp.Controllers { public class MovieController : ApiController { } } An API controller, unlike a standard MVC controller, derives from the base ApiController class instead of the base Controller class. Using jQuery to Retrieve, Insert, Update, and Delete Data Let’s create an Ajaxified Movie Database application. We’ll retrieve, insert, update, and delete movies using jQuery with the MovieController which we just created. Our Movie model class looks like this: namespace MyWebAPIApp.Models { public class Movie { public int Id { get; set; } public string Title { get; set; } public string Director { get; set; } } } Our application will consist of a single HTML page named Movies.html. We’ll place all of our jQuery code in the Movies.html page. Getting a Single Record with the ASP.NET Web API To support retrieving a single movie from the server, we need to add a Get method to our API controller: using System; using System.Collections.Generic; using System.Linq; using System.Net; using System.Net.Http; using System.Web.Http; using MyWebAPIApp.Models; namespace MyWebAPIApp.Controllers { public class MovieController : ApiController { public Movie GetMovie(int id) { // Return movie by id if (id == 1) { return new Movie { Id = 1, Title = "Star Wars", Director = "Lucas" }; } // Otherwise, movie was not found throw new HttpResponseException(HttpStatusCode.NotFound); } } } In the code above, the GetMovie() method accepts the Id of a movie. 
If the Id has the value 1 then the method returns the movie Star Wars. Otherwise, the method throws an exception and returns 404 Not Found HTTP status code. After building your project, you can invoke the MovieController.GetMovie() method by entering the following URL in your web browser address bar: http://localhost:[port]/api/movie/1 (You’ll need to enter the correct randomly generated port). In the URL api/movie/1, the first “api” segment indicates that this is a Web API route. The “movie” segment indicates that the MovieController should be invoked. You do not specify the name of the action. Instead, the HTTP method used to make the request – GET, POST, PUT, DELETE — is used to identify the action to invoke. The ASP.NET Web API uses different routing conventions than normal ASP.NET MVC controllers. When you make an HTTP GET request then any API controller method with a name that starts with “GET” is invoked. So, we could have called our API controller action GetPopcorn() instead of GetMovie() and it would still be invoked by the URL api/movie/1. The default route for the Web API is defined in the Global.asax file and it looks like this: routes.MapHttpRoute( name: "DefaultApi", routeTemplate: "api/{controller}/{id}", defaults: new { id = RouteParameter.Optional } ); We can invoke our GetMovie() controller action with the jQuery code in the following HTML page: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Get Movie</title> </head> <body> <div> Title: <span id="title"></span> </div> <div> Director: <span id="director"></span> </div> <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script> <script type="text/javascript"> getMovie(1, function (movie) { $("#title").html(movie.Title); $("#director").html(movie.Director); }); function getMovie(id, callback) { $.ajax({ url: "/api/Movie", data: { id: id }, type: "GET", contentType: "application/json;charset=utf-8", statusCode: { 200: function (movie) { callback(movie); }, 404: function () { alert("Not Found!"); } } }); } </script> </body> </html> In the code above, the jQuery $.ajax() method is used to invoke the GetMovie() method. Notice that the Ajax call handles two HTTP response codes. When the GetMove() method successfully returns a movie, the method returns a 200 status code. In that case, the details of the movie are displayed in the HTML page. Otherwise, if the movie is not found, the GetMovie() method returns a 404 status code. In that case, the page simply displays an alert box indicating that the movie was not found (hopefully, you would implement something more graceful in an actual application). You can use your browser’s Developer Tools to see what is going on in the background when you open the HTML page (hit F12 in the most recent version of most browsers). For example, you can use the Network tab in Google Chrome to see the Ajax request which invokes the GetMovie() method: Getting a Set of Records with the ASP.NET Web API Let’s modify our Movie API controller so that it returns a collection of movies. 
The following Movie controller has a new ListMovies() method which returns a (hard-coded) collection of movies: using System; using System.Collections.Generic; using System.Linq; using System.Net; using System.Net.Http; using System.Web.Http; using MyWebAPIApp.Models; namespace MyWebAPIApp.Controllers { public class MovieController : ApiController { public IEnumerable<Movie> ListMovies() { return new List<Movie> { new Movie {Id=1, Title="Star Wars", Director="Lucas"}, new Movie {Id=1, Title="King Kong", Director="Jackson"}, new Movie {Id=1, Title="Memento", Director="Nolan"} }; } } } Because we named our action ListMovies(), the default Web API route will never match it. Therefore, we need to add the following custom route to our Global.asax file (at the top of the RegisterRoutes() method): routes.MapHttpRoute( name: "ActionApi", routeTemplate: "api/{controller}/{action}/{id}", defaults: new { id = RouteParameter.Optional } ); This route enables us to invoke the ListMovies() method with the URL /api/movie/listmovies. Now that we have exposed our collection of movies from the server, we can retrieve and display the list of movies using jQuery in our HTML page: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>List Movies</title> </head> <body> <div id="movies"></div> <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script> <script type="text/javascript"> listMovies(function (movies) { var strMovies=""; $.each(movies, function (index, movie) { strMovies += "<div>" + movie.Title + "</div>"; }); $("#movies").html(strMovies); }); function listMovies(callback) { $.ajax({ url: "/api/Movie/ListMovies", data: {}, type: "GET", contentType: "application/json;charset=utf-8", }).then(function(movies){ callback(movies); }); } </script> </body> </html>     Inserting a Record with the ASP.NET Web API Now let’s modify our Movie API controller so it supports creating new records: public HttpResponseMessage<Movie> PostMovie(Movie movieToCreate) { // Add movieToCreate to the database and update primary key movieToCreate.Id = 23; // Build a response that contains the location of the new movie var response = new HttpResponseMessage<Movie>(movieToCreate, HttpStatusCode.Created); var relativePath = "/api/movie/" + movieToCreate.Id; response.Headers.Location = new Uri(Request.RequestUri, relativePath); return response; } The PostMovie() method in the code above accepts a movieToCreate parameter. We don’t actually store the new movie anywhere. In real life, you will want to call a service method to store the new movie in a database. When you create a new resource, such as a new movie, you should return the location of the new resource. In the code above, the URL where the new movie can be retrieved is assigned to the Location header returned in the PostMovie() response. Because the name of our method starts with “Post”, we don’t need to create a custom route. The PostMovie() method can be invoked with the URL /Movie/PostMovie – just as long as the method is invoked within the context of a HTTP POST request. The following HTML page invokes the PostMovie() method. 
<!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Create Movie</title> </head> <body> <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script> <script type="text/javascript"> var movieToCreate = { title: "The Hobbit", director: "Jackson" }; createMovie(movieToCreate, function (newMovie) { alert("New movie created with an Id of " + newMovie.Id); }); function createMovie(movieToCreate, callback) { $.ajax({ url: "/api/Movie", data: JSON.stringify( movieToCreate ), type: "POST", contentType: "application/json;charset=utf-8", statusCode: { 201: function (newMovie) { callback(newMovie); } } }); } </script> </body> </html> This page creates a new movie (the Hobbit) by calling the createMovie() method. The page simply displays the Id of the new movie: The HTTP Post operation is performed with the following call to the jQuery $.ajax() method: $.ajax({ url: "/api/Movie", data: JSON.stringify( movieToCreate ), type: "POST", contentType: "application/json;charset=utf-8", statusCode: { 201: function (newMovie) { callback(newMovie); } } }); Notice that the type of Ajax request is a POST request. This is required to match the PostMovie() method. Notice, furthermore, that the new movie is converted into JSON using JSON.stringify(). The JSON.stringify() method takes a JavaScript object and converts it into a JSON string. Finally, notice that success is represented with a 201 status code. The HttpStatusCode.Created value returned from the PostMovie() method returns a 201 status code. Updating a Record with the ASP.NET Web API Here’s how we can modify the Movie API controller to support updating an existing record. In this case, we need to create a PUT method to handle an HTTP PUT request: public void PutMovie(Movie movieToUpdate) { if (movieToUpdate.Id == 1) { // Update the movie in the database return; } // If you can't find the movie to update throw new HttpResponseException(HttpStatusCode.NotFound); } Unlike our PostMovie() method, the PutMovie() method does not return a result. The action either updates the database or, if the movie cannot be found, returns an HTTP Status code of 404. The following HTML page illustrates how you can invoke the PutMovie() method: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Put Movie</title> </head> <body> <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script> <script type="text/javascript"> var movieToUpdate = { id: 1, title: "The Hobbit", director: "Jackson" }; updateMovie(movieToUpdate, function () { alert("Movie updated!"); }); function updateMovie(movieToUpdate, callback) { $.ajax({ url: "/api/Movie", data: JSON.stringify(movieToUpdate), type: "PUT", contentType: "application/json;charset=utf-8", statusCode: { 200: function () { callback(); }, 404: function () { alert("Movie not found!"); } } }); } </script> </body> </html> Deleting a Record with the ASP.NET Web API Here’s the code for deleting a movie: public HttpResponseMessage DeleteMovie(int id) { // Delete the movie from the database // Return status code return new HttpResponseMessage(HttpStatusCode.NoContent); } This method simply deletes the movie (well, not really, but pretend that it does) and returns a No Content status code (204). 
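In a real application the delete would of course hit a data store and report back when nothing matched the id. Here is a minimal sketch of what that might look like, assuming a hypothetical movie repository (the repository is not part of the Web API or this article):

// Sketch only - "_movieRepository" is an assumed data-access component,
// not something provided by the ASP.NET Web API or the article's sample.
public HttpResponseMessage DeleteMovie(int id)
{
    bool deleted = _movieRepository.Delete(id);   // hypothetical call

    if (!deleted)
    {
        // Nothing with that id exists - report 404 Not Found to the client
        throw new HttpResponseException(HttpStatusCode.NotFound);
    }

    // Deleted successfully - 204 No Content, no response body
    return new HttpResponseMessage(HttpStatusCode.NoContent);
}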
The following page illustrates how you can invoke the DeleteMovie() action: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Delete Movie</title> </head> <body> <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script> <script type="text/javascript"> deleteMovie(1, function () { alert("Movie deleted!"); }); function deleteMovie(id, callback) { $.ajax({ url: "/api/Movie", data: JSON.stringify({id:id}), type: "DELETE", contentType: "application/json;charset=utf-8", statusCode: { 204: function () { callback(); } } }); } </script> </body> </html> Performing Validation How do you perform form validation when using the ASP.NET Web API? Because validation in ASP.NET MVC is driven by the Default Model Binder, and because the Web API uses the Default Model Binder, you get validation for free. Let’s modify our Movie class so it includes some of the standard validation attributes: using System.ComponentModel.DataAnnotations; namespace MyWebAPIApp.Models { public class Movie { public int Id { get; set; } [Required(ErrorMessage="Title is required!")] [StringLength(5, ErrorMessage="Title cannot be more than 5 characters!")] public string Title { get; set; } [Required(ErrorMessage="Director is required!")] public string Director { get; set; } } } In the code above, the Required validation attribute is used to make both the Title and Director properties required. The StringLength attribute is used to require the length of the movie title to be no more than 5 characters. Now let’s modify our PostMovie() action to validate a movie before adding the movie to the database: public HttpResponseMessage PostMovie(Movie movieToCreate) { // Validate movie if (!ModelState.IsValid) { var errors = new JsonArray(); foreach (var prop in ModelState.Values) { if (prop.Errors.Any()) { errors.Add(prop.Errors.First().ErrorMessage); } } return new HttpResponseMessage<JsonValue>(errors, HttpStatusCode.BadRequest); } // Add movieToCreate to the database and update primary key movieToCreate.Id = 23; // Build a response that contains the location of the new movie var response = new HttpResponseMessage<Movie>(movieToCreate, HttpStatusCode.Created); var relativePath = "/api/movie/" + movieToCreate.Id; response.Headers.Location = new Uri(Request.RequestUri, relativePath); return response; } If ModelState.IsValid has the value false then the errors in model state are copied to a new JSON array. Each property – such as the Title and Director property — can have multiple errors. In the code above, only the first error message is copied over. The JSON array is returned with a Bad Request status code (400 status code). 
The following HTML page illustrates how you can invoke our modified PostMovie() action and display any error messages: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Create Movie</title> </head> <body> <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script> <script type="text/javascript"> var movieToCreate = { title: "The Hobbit", director: "" }; createMovie(movieToCreate, function (newMovie) { alert("New movie created with an Id of " + newMovie.Id); }, function (errors) { var strErrors = ""; $.each(errors, function(index, err) { strErrors += "*" + err + "\n"; }); alert(strErrors); } ); function createMovie(movieToCreate, success, fail) { $.ajax({ url: "/api/Movie", data: JSON.stringify(movieToCreate), type: "POST", contentType: "application/json;charset=utf-8", statusCode: { 201: function (newMovie) { success(newMovie); }, 400: function (xhr) { var errors = JSON.parse(xhr.responseText); fail(errors); } } }); } </script> </body> </html> The createMovie() function performs an Ajax request and handles either a 201 or a 400 status code from the response. If a 201 status code is returned then there were no validation errors and the new movie was created. If, on the other hand, a 400 status code is returned then there was a validation error. The validation errors are retrieved from the XmlHttpRequest responseText property. The error messages are displayed in an alert: (Please don’t use JavaScript alert dialogs to display validation errors, I just did it this way out of pure laziness) This validation code in our PostMovie() method is pretty generic. There is nothing specific about this code to the PostMovie() method. In the following video, Jon Galloway demonstrates how to create a global Validation filter which can be used with any API controller action: http://www.asp.net/web-api/overview/web-api-routing-and-actions/video-custom-validation His validation filter looks like this: using System.Json; using System.Linq; using System.Net; using System.Net.Http; using System.Web.Http.Controllers; using System.Web.Http.Filters; namespace MyWebAPIApp.Filters { public class ValidationActionFilter:ActionFilterAttribute { public override void OnActionExecuting(HttpActionContext actionContext) { var modelState = actionContext.ModelState; if (!modelState.IsValid) { dynamic errors = new JsonObject(); foreach (var key in modelState.Keys) { var state = modelState[key]; if (state.Errors.Any()) { errors[key] = state.Errors.First().ErrorMessage; } } actionContext.Response = new HttpResponseMessage<JsonValue>(errors, HttpStatusCode.BadRequest); } } } } And you can register the validation filter in the Application_Start() method in the Global.asax file like this: GlobalConfiguration.Configuration.Filters.Add(new ValidationActionFilter()); After you register the Validation filter, validation error messages are returned from any API controller action method automatically when validation fails. You don’t need to add any special logic to any of your API controller actions to take advantage of the filter. Querying using OData The OData protocol is an open protocol created by Microsoft which enables you to perform queries over the web. The official website for OData is located here: http://odata.org For example, here are some of the query options which you can use with OData: · $orderby – Enables you to retrieve results in a certain order. · $top – Enables you to retrieve a certain number of results. 
· $skip – Enables you to skip over a certain number of results (use with $top for paging). · $filter – Enables you to filter the results returned. The ASP.NET Web API supports a subset of the OData protocol. You can use all of the query options listed above when interacting with an API controller. The only requirement is that the API controller action returns its data as IQueryable. For example, the following Movie controller has an action named GetMovies() which returns an IQueryable of movies: public IQueryable<Movie> GetMovies() { return new List<Movie> { new Movie {Id=1, Title="Star Wars", Director="Lucas"}, new Movie {Id=2, Title="King Kong", Director="Jackson"}, new Movie {Id=3, Title="Willow", Director="Lucas"}, new Movie {Id=4, Title="Shrek", Director="Smith"}, new Movie {Id=5, Title="Memento", Director="Nolan"} }.AsQueryable(); } If you enter the following URL in your browser: /api/movie?$top=2&$orderby=Title Then you will limit the movies returned to the top 2 in order of the movie Title. You will get the following results: By using the $top option in combination with the $skip option, you can enable client-side paging. For example, you can use $top and $skip to page through thousands of products, 10 products at a time. The $filter query option is very powerful. You can use this option to filter the results from a query. Here are some examples: Return every movie directed by Lucas: /api/movie?$filter=Director eq ‘Lucas’ Return every movie which has a title which starts with ‘S’: /api/movie?$filter=startswith(Title,’S') Return every movie which has an Id greater than 2: /api/movie?$filter=Id gt 2 The complete documentation for the $filter option is located here: http://www.odata.org/developers/protocols/uri-conventions#FilterSystemQueryOption Summary The goal of this blog entry was to provide you with an overview of the new ASP.NET Web API introduced with the Beta release of ASP.NET 4. In this post, I discussed how you can retrieve, insert, update, and delete data by using jQuery with the Web API. I also discussed how you can use the standard validation attributes with the Web API. You learned how to return validation error messages to the client and display the error messages using jQuery. Finally, we briefly discussed how the ASP.NET Web API supports the OData protocol. For example, you learned how to filter records returned from an API controller action by using the $filter query option. I’m excited about the new Web API. This is a feature which I expect to use with almost every ASP.NET application which I build in the future.

