Search Results

Search found 10206 results on 409 pages for 'upk release'.

  • Thread Blocks During Call

    - by user578875
    I have a serious problem. I'm developing an application that measures on-call time during a call; the problem appears when, with the phone held to the ear, the thread that runs the timer blocks and no longer responds until I take the phone away from my ear. The following log shows the problem; note the jump from 16:14:24 to 16:18:05, after which all the queued ticks fire at once:

        01-11 16:14:19.607 14558 14566 I Estado : postDelayed Async Service
        01-11 16:14:20.607 14558 14566 I Estado : postDelayed Async Service
        01-11 16:14:21.607 14558 14566 I Estado : postDelayed Async Service
        01-11 16:14:22.597 14558 14566 I Estado : postDelayed Async Service
        01-11 16:14:23.608 14558 14566 I Estado : postDelayed Async Service
        01-11 16:14:24.017  1106  1106 D iddd   : select() < 0, Probably a handled signal: Interrupted system call
        01-11 16:14:24.607 14558 14566 I Estado : postDelayed Async Service
        01-11 16:18:05.500  1106  1106 D iddd   : select() < 0, Probably a handled signal: Interrupted system call
        01-11 16:18:06.026 14558 14566 I Estado : postDelayed Async Service
        01-11 16:18:06.026 14558 14566 I Estado : postDelayed Async Service
        01-11 16:18:06.026 14558 14566 I Estado : postDelayed Async Service
        01-11 16:18:06.026 14558 14566 I Estado : postDelayed Async Service
        01-11 16:18:06.026 14558 14566 I Estado : postDelayed Async Service
        01-11 16:18:06.026 14558 14566 I Estado : postDelayed Async Service

    I've been trying with Services, Timers, Threads and AsyncTasks, and they all present the same problem. My code:

        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            requestWindowFeature(Window.FEATURE_NO_TITLE);
            getWindow().setFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN,
                                 WindowManager.LayoutParams.FLAG_FULLSCREEN);
            setContentView(R.layout.main);
            HangUpService.setMainActivity(this);
            objHangUpService = new Intent(this, HangUpService.class);
            Runnable rAccion = new Runnable() {
                public void run() {
                    TelephonyManager tm = (TelephonyManager) getSystemService(TELEPHONY_SERVICE);
                    tm.listen(mPhoneListener, PhoneStateListener.LISTEN_CALL_STATE);
                    objVibrator = (Vibrator) getSystemService(getApplicationContext().VIBRATOR_SERVICE);
                    final ListView lstLlamadas = (ListView) findViewById(R.id.lstFavoritos);
                    final EditText txtMinutos = (EditText) findViewById(R.id.txtMinutos);
                    final EditText txtSegundos = (EditText) findViewById(R.id.txtSegundos);
                    ArrayList<Contacto> cContactos = new ArrayList<Contacto>();
                    ContactoAdapter caContactos = new ContactoAdapter(HangUp.this, R.layout.row, cContactos);
                    Cursor curContactos = getContentResolver().query(
                            ContactsContract.Contacts.CONTENT_URI, null, null, null,
                            ContactsContract.Contacts.TIMES_CONTACTED + " DESC");
                    while (curContactos.moveToNext()) {
                        String strNombre = curContactos.getString(curContactos.getColumnIndex(ContactsContract.Contacts.DISPLAY_NAME));
                        String strID = curContactos.getString(curContactos.getColumnIndex(ContactsContract.Contacts._ID));
                        String strHasPhone = curContactos.getString(curContactos.getColumnIndex(ContactsContract.Contacts.HAS_PHONE_NUMBER));
                        String strStarred = curContactos.getString(curContactos.getColumnIndex(ContactsContract.Contacts.STARRED));
                        if (Integer.parseInt(strHasPhone) > 0 && Integer.parseInt(strStarred) == 1) {
                            Cursor CursorTelefono = getContentResolver().query(
                                    ContactsContract.CommonDataKinds.Phone.CONTENT_URI, null,
                                    ContactsContract.CommonDataKinds.Phone.CONTACT_ID + " = " + strID,
                                    null, null);
                            while (CursorTelefono.moveToNext()) {
                                String strTipo = CursorTelefono.getString(CursorTelefono.getColumnIndex(ContactsContract.CommonDataKinds.Phone.TYPE));
                                String strTelefono = CursorTelefono.getString(CursorTelefono.getColumnIndex(ContactsContract.CommonDataKinds.Phone.NUMBER));
                                strNumero = strTelefono;
                                String args[] = new String[1];
                                args[0] = strNumero;
                                Cursor CursorCallLog = getContentResolver().query(
                                        android.provider.CallLog.Calls.CONTENT_URI, null,
                                        android.provider.CallLog.Calls.NUMBER + "=?", args,
                                        android.provider.CallLog.Calls.DATE + " DESC");
                                if (Integer.parseInt(strTipo) == 2) {
                                    caContactos.add(new Contacto(strNombre, strTelefono));
                                }
                            }
                            CursorTelefono.close();
                        }
                    }
                    curContactos.close();
                    lstLlamadas.setAdapter(caContactos);
                    lstLlamadas.setOnItemClickListener(new OnItemClickListener() {
                        @Override
                        public void onItemClick(AdapterView a, View v, int position, long id) {
                            Contacto mContacto = (Contacto) lstLlamadas.getItemAtPosition(position);
                            i = new Intent(HangUp.this, Llamada.class);
                            Log.i("Estado", "Declaro Intent");
                            Bundle bundle = new Bundle();
                            bundle.putString("telefono", mContacto.getTelefono());
                            i.putExtras(bundle);
                            startActivityForResult(i, SUB_ACTIVITY_ID);
                            Log.i("Estado", "Inicio Intent");
                            blActivo = true;
                            try {
                                String strMinutos = txtMinutos.getText().toString();
                                String strSegundos = txtSegundos.getText().toString();
                                if (!strMinutos.equals("") && !strSegundos.equals("")) {
                                    int Tiempo = ((Integer.parseInt(txtMinutos.getText().toString()) * 60)
                                            + Integer.parseInt(txtSegundos.getText().toString())) * 1000;
                                    handler.removeCallbacks(rVibrate);
                                    cTime = System.currentTimeMillis();
                                    cTime = cTime + Tiempo;
                                    objHangUpAsync = new HangUpAsync(cTime, objVibrator, objPowerManager, objKeyguardLock);
                                    objHangUpAsync.execute();
                                    objPowerManager.userActivity(Tiempo + 3000, true);
                                    objHangUpService.putExtra("cTime", cTime);
                                    //startService(objHangUpService);
                                }
                            } catch (Exception e) {
                                e.printStackTrace();
                            } finally {
                            }
                        }
                    });
                }
            };
        }

    AsyncTask:

        @Override
        protected String doInBackground(String... arg0) {
            blActivo = true;
            mWakeLock = objPowerManager.newWakeLock(PowerManager.FULL_WAKE_LOCK, "My Tag");
            objKeyguardLock.disableKeyguard();
            Log.i("Estado", "Entro a doInBackground");
            timer.scheduleAtFixedRate(new TimerTask() {
                public void run() {
                    if (blActivo) {
                        // The comparison here was lost in the original post's formatting;
                        // "time is up" is the only reading consistent with the surrounding logic.
                        if (cTime <= System.currentTimeMillis()) {
                            blActivo = false;
                            objVibrator.vibrate(1000);
                            Log.i("Estado", "Vibrar desde Async");
                            this.cancel();
                        } else {
                            try {
                                mWakeLock.acquire();
                                mWakeLock.release();
                                Log.i("Estado", "postDelayed Async Service");
                            } catch (Exception e) {
                                Log.i("Estado", "Error: " + e.getMessage());
                            }
                        }
                    }
                }
            }, 0, INTERVAL);
            return null;
        }
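
    The pattern in the log, a stalled timer followed by a burst of queued ticks once the phone comes off the ear, is consistent with the CPU going to sleep while the proximity sensor blanks the screen, rather than with a deadlock in any of the timer variants tried above. A hedged sketch of the usual mitigation: hold a PARTIAL_WAKE_LOCK, which keeps the CPU running without forcing the screen on, for the whole countdown instead of acquiring and releasing a FULL_WAKE_LOCK on every tick. Class and tag names here are illustrative, not from the post:

        import java.util.Timer;
        import java.util.TimerTask;
        import android.content.Context;
        import android.os.PowerManager;

        // Sketch: keep the CPU scheduling the timer while the screen is off.
        // Requires <uses-permission android:name="android.permission.WAKE_LOCK"/>.
        public class CallTimer {
            private final PowerManager.WakeLock wakeLock;
            private final Timer timer = new Timer();

            public CallTimer(Context context) {
                PowerManager pm = (PowerManager) context.getSystemService(Context.POWER_SERVICE);
                wakeLock = pm.newWakeLock(PowerManager.PARTIAL_WAKE_LOCK, "CallTimer");
            }

            public void start(final long endTimeMillis, final Runnable onTimeUp) {
                wakeLock.acquire();                    // held for the whole countdown
                timer.scheduleAtFixedRate(new TimerTask() {
                    @Override public void run() {
                        if (System.currentTimeMillis() >= endTimeMillis) {
                            cancel();                  // stop this TimerTask
                            wakeLock.release();        // release exactly once, when done
                            onTimeUp.run();            // e.g. vibrate
                        }
                    }
                }, 0, 1000);
            }
        }

    The acquire()/release() pair in the original doInBackground() only keeps the CPU awake for an instant per tick, so the device is free to sleep between ticks; holding the lock across the countdown removes that window.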

  • SUDS rendering a duplicate node and wrapping everything in it

    - by PylonsN00b
    Here is my code:

        # Make the SOAP connection
        url = "https://api.channeladvisor.com/ChannelAdvisorAPI/v1/InventoryService.asmx?WSDL"
        headers = {'Content-Type': 'text/xml; charset=utf-8'}
        ca_client_inventory = Client(url, location="https://api.channeladvisor.com/ChannelAdvisorAPI/v1/InventoryService.asmx", headers=headers)

        # Make the SOAP headers
        login = ca_client_inventory.factory.create('APICredentials')
        login.DeveloperKey = 'REMOVED'
        login.Password = 'REMOVED'

        # Attach the headers
        ca_client_inventory.set_options(soapheaders=login)

        synch_inventory_item_list = ca_client_inventory.factory.create('SynchInventoryItemList')
        synch_inventory_item_list.accountID = "REMOVED"
        array_of_inventory_item_submit = ca_client_inventory.factory.create('ArrayOfInventoryItemSubmit')

        for product in products:
            inventory_item_submit = ca_client_inventory.factory.create('InventoryItemSubmit')
            inventory_item_list = get_item_list(product)
            inventory_item_submit = [inventory_item_list]
            array_of_inventory_item_submit.InventoryItemSubmit.append(inventory_item_submit)

        synch_inventory_item_list.itemList = array_of_inventory_item_submit

        # Call that service baby!
        ca_client_inventory.service.SynchInventoryItemList(synch_inventory_item_list)

    Here is what it outputs:

        <?xml version="1.0" encoding="UTF-8"?>
        <SOAP-ENV:Envelope xmlns:ns0="http://api.channeladvisor.com/webservices/"
                           xmlns:ns1="http://schemas.xmlsoap.org/soap/envelope/"
                           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                           xmlns:tns="http://api.channeladvisor.com/webservices/"
                           xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
          <SOAP-ENV:Header>
            <tns:APICredentials>
              <tns:DeveloperKey>REMOVED</tns:DeveloperKey>
              <tns:Password>REMOVED</tns:Password>
            </tns:APICredentials>
          </SOAP-ENV:Header>
          <ns1:Body>
            <ns0:SynchInventoryItemList>
              <ns0:accountID>
                <ns0:accountID>REMOVED</ns0:accountID>
                <ns0:itemList>
                  <ns0:InventoryItemSubmit>
                    <ns0:Sku>1872</ns0:Sku>
                    <ns0:Title>The Big Book Of Crazy Quilt Stitches</ns0:Title>
                    <ns0:Subtitle></ns0:Subtitle>
                    <ns0:Description>Embellish the seams and patches of crazy quilt projects with over 75 embroidery stitches and floral motifs. You&apos;ll use this handy reference book again and again to dress up wall hangings, pillows, sachets, clothing, and other nostalgic creations.</ns0:Description>
                    <ns0:Weight>4</ns0:Weight>
                    <ns0:FlagStyle/>
                    <ns0:IsBlocked xsi:nil="true"/>
                    <ns0:ISBN></ns0:ISBN>
                    <ns0:UPC>028906018721</ns0:UPC>
                    <ns0:EAN></ns0:EAN>
                    <ns0:QuantityInfo>
                      <ns0:UpdateType>UnShipped</ns0:UpdateType>
                      <ns0:Total>0</ns0:Total>
                    </ns0:QuantityInfo>
                    <ns0:PriceInfo>
                      <ns0:Cost>0.575</ns0:Cost>
                      <ns0:RetailPrice xsi:nil="true"/>
                      <ns0:StartingPrice xsi:nil="true"/>
                      <ns0:ReservePrice xsi:nil="true"/>
                      <ns0:TakeItPrice>6.95</ns0:TakeItPrice>
                      <ns0:SecondChanceOfferPrice xsi:nil="true"/>
                      <ns0:StorePrice>6.95</ns0:StorePrice>
                    </ns0:PriceInfo>
                    <ns0:ClassificationInfo>
                      <ns0:Name>Books</ns0:Name>
                      <ns0:AttributeList>
                        <ns0:ClassificationAttributeInfo><ns0:Name>Designer/Author</ns0:Name><ns0:Value>Patricia Eaton</ns0:Value></ns0:ClassificationAttributeInfo>
                        <ns0:ClassificationAttributeInfo><ns0:Name>Trim Size</ns0:Name><ns0:Value></ns0:Value></ns0:ClassificationAttributeInfo>
                        <ns0:ClassificationAttributeInfo><ns0:Name>Binding</ns0:Name><ns0:Value>Leaflet</ns0:Value></ns0:ClassificationAttributeInfo>
                        <ns0:ClassificationAttributeInfo><ns0:Name>Release Date</ns0:Name><ns0:Value>11/1/1999 0:00:00</ns0:Value></ns0:ClassificationAttributeInfo>
                        <ns0:ClassificationAttributeInfo><ns0:Name>Skill Level</ns0:Name><ns0:Value></ns0:Value></ns0:ClassificationAttributeInfo>
                        <ns0:ClassificationAttributeInfo><ns0:Name>Pages</ns0:Name><ns0:Value>20</ns0:Value></ns0:ClassificationAttributeInfo>
                        <ns0:ClassificationAttributeInfo><ns0:Name>Projects</ns0:Name><ns0:Value></ns0:Value></ns0:ClassificationAttributeInfo>
                      </ns0:AttributeList>
                    </ns0:ClassificationInfo>
                    <ns0:ImageList>
                      <ns0:ImageInfoSubmit>
                        <ns0:PlacementName>ITEMIMAGEURL1</ns0:PlacementName>
                        <ns0:FilenameOrUrl>1872.jpg</ns0:FilenameOrUrl>
                      </ns0:ImageInfoSubmit>
                    </ns0:ImageList>
                  </ns0:InventoryItemSubmit>
                </ns0:itemList>
              </ns0:accountID>
            </ns0:SynchInventoryItemList>
          </ns1:Body>
        </SOAP-ENV:Envelope>

    See how it creates the accountID node twice and wraps the whole thing in it? WHY? How do I make it stop that?!

  • Yet another C# Deadlock Debugging Question

    - by Roo
    Hi All,

    I have a multi-threaded application built in C# using VS2010 Professional. It's quite a large application and we've experienced the classic GUI cross-threading and deadlock issues before, but in the past month we've noticed it appears to lock up when left idle for around 20-30 minutes. The application is unresponsive, and although it repaints itself when other windows are dragged in front of and over it, the GUI still appears to be locked. Interestingly (unlike when the GUI thread is tied up doing work for a considerable amount of time), the Close, Maximise and Minimise buttons are also unresponsive, and when they are clicked the little "(Not Responding...)" text is not displayed in the application's title bar, i.e. Windows still seems to think it's running fine.

    If I break/pause the application using the debugger and view the threads that are running, there are 3 threads of our managed code running, plus a few worker threads for which no source code can be displayed. The 3 threads that run are: the main/GUI thread, and two other threads that each loop indefinitely. If I step into threads 2 and 3, they appear to be looping correctly. They do not share locks (even with the main GUI thread) and they do not use the GUI thread at all. When stepping into the main/GUI thread, however, it's broken on Application.Run...

    This problem screams deadlock to me, but what I don't understand is: if it's deadlock, why can't I see the line of code the main/GUI thread is hanging on? Any help will be greatly appreciated! Let me know if you need more information...

    Cheers, Roo

    -----------------------------------------------------SOLUTION--------------------------------------------------

    Okay, so the problem is now solved. Thanks to everyone for their suggestions! Much appreciated! I've marked the answer that solved my initial problem of determining where on the main/UI thread the application hangs (I hadn't turned off the "Enable Just My Code" option). The overall issue I was experiencing was indeed deadlock, however.

    After obtaining the call stack and popping the top half of it into Google, I came across this, which explains exactly what I was experiencing... http://timl.net/ This references a lovely guide to debugging the issue... http://www.aaronlerch.com/blog/2008/12/15/debugging-ui/

    This identified a control I was constructing off the GUI thread. I did know this, and was marshalling calls correctly, but what I didn't realise was that behind the scenes this control subscribes to an event, or set of events, that are raised when e.g. a Windows session is unlocked or the screensaver exits. These calls are always made on the main/UI thread and were blocking because the control had been created on the incorrect thread. Kim explains it in more detail here... http://krgreenlee.blogspot.com/2007/09/onuserpreferencechanged-hang.html

    In the end I found an alternative solution which did not require this control off the main/UI thread. That appears to have solved the problem and the application no longer hangs. I hope this helps anyone who's confronted by a similar problem. Thanks again to everyone on here who helped! (and indirectly, the delightful bloggers I've referenced above!)

    Roo

    -----------------------------------------------------SOLUTION II--------------------------------------------------

    Aren't threading issues delightful... you think you've solved it, and a month down the line it pops back up again. I still believe the solution above resolved an issue that would cause similar behaviour, but we encountered the problem again. As we spent a while debugging it, I thought I'd update this question with our (hopefully) final solution:

    The problem appears to have been a bug in the Infragistics components in the WinForms 2010.1 release (no hot fixes), which we had been running since around the time the freeze issue appeared (though we had also added a bunch of other stuff at the same time). After upgrading to WinForms 2010.3, we've yet to reproduce the issue (deja vu). See my question here for a bit more information: 'http://stackoverflow.com/questions/4077822/net-4-0-and-the-dreaded-onuserpreferencechanged-hang'. Hans has given a nice summary of the general issue there.

    I hope this adds a little to the suggestions/information surrounding the notorious OnUserPreferenceChanged hang (or whatever you'd like to call it).

    Cheers, Roo

  • Compile error with initializer_list when trying to use it to initialize member value of class

    - by ilektron
    I am trying to make a class initializable from an initializer_list in a class constructor's member-initializer list. It works for a std::map, but not for my custom class, and I don't see any difference other than that std::map uses templates.

        #include <iostream>
        #include <initializer_list>
        #include <string>
        #include <sstream>
        #include <map>

        using std::string;

        class text_thing {
        private:
            string m_text;

        public:
            text_thing() { }
            text_thing(text_thing& other);
            text_thing(std::initializer_list< std::pair<const string, const string> >& il);
            text_thing& operator=(std::initializer_list< std::pair<const string, const string> >& il);
            operator string() { return m_text; }
        };

        class static_base {
        private:
            std::map<string, string> m_test_map;
            text_thing m_thing;
            static_base();

        public:
            static static_base& getInstance() {
                static static_base instance;
                return instance;
            }
            string getText() { return (string)m_thing; }
        };

        typedef std::pair<const string, const string> spair;

        text_thing::text_thing(text_thing& other) {
            m_text = other.m_text;
        }

        text_thing::text_thing(std::initializer_list< std::pair<const string, const string> >& il) {
            std::stringstream text_gen;
            for (auto& apair : il) {
                text_gen << "{" << apair.first << ", " << apair.second << "}" << std::endl;
            }
        }

        text_thing& text_thing::operator=(std::initializer_list< std::pair<const string, const string> >& il) {
            std::stringstream text_gen;
            for (auto& apair : il) {
                text_gen << "{" << apair.first << ", " << apair.second << "}" << std::endl;
            }
            return *this;
        }

        static_base::static_base()
            : m_test_map{{"test", "1"}, {"test2", "2"}},  // Compiler fine with this
              m_thing{{"test", "1"}, {"test2", "2"}}      // Compiler doesn't like this
        {
        }

        int main() {
            std::cout << "Starting the program" << std::endl;
            std::cout << "The text thing: " << std::endl << static_base::getInstance().getText();
        }

    I get this compiler output:

        g++ -O0 -g3 -Wall -c -fmessage-length=0 -std=c++11 -MMD -MP -MF"static_base.d" -MT"static_base.d" -o "static_base.o" "../static_base.cpp"
        Finished building: ../static_base.cpp

        Building file: ../test.cpp
        Invoking: GCC C++ Compiler
        g++ -O0 -g3 -Wall -c -fmessage-length=0 -std=c++11 -MMD -MP -MF"test.d" -MT"test.d" -o "test.o" "../test.cpp"
        ../test.cpp: In constructor ‘static_base::static_base()’:
        ../test.cpp:94:40: error: no matching function for call to ‘text_thing::text_thing(<brace-enclosed initializer list>)’
             m_thing{{"test", "1"}, {"test2", "2"}}
                                                  ^
        ../test.cpp:94:40: note: candidates are:
        ../test.cpp:72:1: note: text_thing::text_thing(std::initializer_list<std::pair<const std::basic_string<char>, const std::basic_string<char> > >&)
         text_thing::text_thing(std::initializer_list< std::pair<const string, const string> >& il)
         ^
        ../test.cpp:72:1: note: candidate expects 1 argument, 2 provided
        ../test.cpp:67:1: note: text_thing::text_thing(text_thing&)
         text_thing::text_thing(text_thing& other)
         ^
        ../test.cpp:67:1: note: candidate expects 1 argument, 2 provided
        ../test.cpp:23:2: note: text_thing::text_thing()
          text_thing()
          ^
        ../test.cpp:23:2: note: candidate expects 0 arguments, 2 provided
        make: *** [test.o] Error 1

    Output of gcc -v:

        Using built-in specs.
        COLLECT_GCC=gcc
        COLLECT_LTO_WRAPPER=/usr/lib/gcc/x86_64-linux-gnu/4.8/lto-wrapper
        Target: x86_64-linux-gnu
        Configured with: ../src/configure -v --with-pkgversion='Ubuntu 4.8.1-2ubuntu1~13.04' --with-bugurl=file:///usr/share/doc/gcc-4.8/README.Bugs --enable-languages=c,c++,java,go,d,fortran,objc,obj-c++ --prefix=/usr --program-suffix=-4.8 --enable-shared --enable-linker-build-id --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --with-gxx-include-dir=/usr/include/c++/4.8 --libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-gnu-unique-object --enable-plugin --with-system-zlib --disable-browser-plugin --enable-java-awt=gtk --enable-gtk-cairo --with-java-home=/usr/lib/jvm/java-1.5.0-gcj-4.8-amd64/jre --enable-java-home --with-jvm-root-dir=/usr/lib/jvm/java-1.5.0-gcj-4.8-amd64 --with-jvm-jar-dir=/usr/lib/jvm-exports/java-1.5.0-gcj-4.8-amd64 --with-arch-directory=amd64 --with-ecj-jar=/usr/share/java/eclipse-ecj.jar --enable-objc-gc --enable-multiarch --disable-werror --with-arch-32=i686 --with-abi=m64 --with-multilib-list=m32,m64,mx32 --with-tune=generic --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu
        Thread model: posix
        gcc version 4.8.1 (Ubuntu 4.8.1-2ubuntu1~13.04)

    It compiles fine with the std::map constructed this way, and if I modify static_base to return the strings from the map, all is fine and dandy. Please help me understand what is going on here.

  • Self-relation messes up contents in fetching

    - by holographix
    Hi folks, I'm dealing with an annoying problem in Core Data. I've got an entity named Character with a self-referencing to-many relation, charRel. I'm filling the table in various steps: 1) fill the attributes of the entity, 2) fill the Character relation (charRel). I'm feeding the contents by pulling the data from an XML file; the feeding code is this:

        curStr = [[NSMutableString stringWithString:[curStr stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]]] retain];
        NSLog(@"Parsing relation within these keys %@, in order to get'em associated", curStr);
        NSArray *chunks = [curStr componentsSeparatedByString:@","];
        for (NSString *relId in chunks) {
            NSLog(@"Associating %@ with id %@", [currentCharacter valueForKey:@"character_id"], relId);
            NSFetchRequest *request = [[NSFetchRequest alloc] init];
            NSPredicate *predicate = [NSPredicate predicateWithFormat:@"character_id == %@", relId];
            [request setEntity:[NSEntityDescription entityForName:@"Character" inManagedObjectContext:[self managedObjectContext]]];
            [request setPredicate:predicate];
            NSError *error = nil;
            NSArray *results = [[self managedObjectContext] executeFetchRequest:request error:&error];
            // error handling code
            if (error != nil) {
                NSLog(@"[SYMBOL CORRELATION]: retrieving correlated symbol error: %@", [error localizedDescription]);
            } else if ([results count] > 0) {
                Character *relatedChar = [results objectAtIndex:0]; // grab the first result in the stack, could be done better!
                [currentCharacter addCharRelObject:relatedChar];
                // VICE VERSA RELATIONS
                NSArray *charRels = [relatedChar valueForKey:@"charRel"];
                BOOL alreadyRelated = NO;
                for (Character *charRel in charRels) {
                    if ([[charRel valueForKey:@"character_id"] isEqual:[currentCharacter valueForKey:@"character_id"]]) {
                        alreadyRelated = YES;
                        break;
                    }
                }
                if (!alreadyRelated) {
                    NSLog(@"\n\t\trelating %@ with %@", [relatedChar valueForKey:@"character_id"], [currentCharacter valueForKey:@"character_id"]);
                    [relatedChar addCharRelObject:currentCharacter];
                }
            } else {
                NSLog(@"[SYMBOL CORRELATION]: related symbol was not found! ##SKIPPING-->");
            }
            [request release];
        }
        NSLog(@"\t\t### TOTAL OF REALTIONS FOR ID %@: %d\n%@", [currentCharacter valueForKey:@"character_id"], [[currentCharacter valueForKey:@"charRel"] count], currentCharacter);
        error = nil;
        /* SAVE THE CONTEXT */
        if (![managedObjectContext save:&error]) {
            NSLog(@"Whoops, couldn't save the symbol record: %@", [error localizedDescription]);
            NSArray* detailedErrors = [[error userInfo] objectForKey:NSDetailedErrorsKey];
            if (detailedErrors != nil && [detailedErrors count] > 0) {
                for (NSError* detailedError in detailedErrors) {
                    NSLog(@"\n################\t\tDetailedError: %@\n################", [detailedError userInfo]);
                }
            } else {
                NSLog(@" %@", [error userInfo]);
            }
        }

    At this point, when I print out the values of currentCharacter, everything looks perfect; every relation is in its place. For example, in this log we can clearly see that this element has got 3 items in charRel:

        <Character: 0x5593af0> (entity: Character; id: 0x55938c0 <x-coredata://67288D50-D349-4B19-B7CB-F7AC4671AD61/Character/p86> ; data: {
            catRel = "<relationship fault: 0x9a29db0 'catRel'>";
            charRel = (
                "0x9a1f870 <x-coredata://67288D50-D349-4B19-B7CB-F7AC4671AD61/Character/p74>",
                "0x9a14bd0 <x-coredata://67288D50-D349-4B19-B7CB-F7AC4671AD61/Character/p109>",
                "0x558ba00 <x-coredata://67288D50-D349-4B19-B7CB-F7AC4671AD61/Character/p5>"
            );
            "character_id" = 254;
            examplesRel = "<relationship fault: 0x9a29df0 'examplesRel'>";
            meaning = "\n Left";
            pinyin = "\n zu\U01d2";
            "pronunciation_it" = "\n zu\U01d2";
            strokenumber = 5;
            text = "\n \n <p>The most ancient form of this symbol";
            unicodevalue = "\n \U5de6";
        })

    Then, when I need to retrieve this item, I perform a fetch like this:

        // at first I get the single Character record
        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        NSError *error;
        NSPredicate *predicate = [NSPredicate predicateWithFormat:@"character_id == %@", self.char_id];
        [request setEntity:[NSEntityDescription entityForName:@"Character" inManagedObjectContext:_context]];
        [request setPredicate:predicate];
        NSArray *fetchedObjs = [_context executeFetchRequest:request error:&error];

    When, for instance, I then print out the contents of charRel:

        NSArray *correlations = [singleCharacter valueForKey:@"charRel"];
        NSLog(@"CHARACTER OBJECT \n%@", correlations);

    I get this:

        Relationship fault for (<NSRelationshipDescription: 0x5568520>), name charRel, isOptional 1, isTransient 0, entity Character,
        renamingIdentifier charRel, validation predicates (), warnings (), versionHashModifier (null), destination entity Character,
        inverseRelationship (null), minCount 1, maxCount 99 on 0x6937f00

    I hope I made myself clear. This thing is driving me insane; I've googled all over the world, but I couldn't find a solution (and this makes me think it's an issue related to bad coding somehow :P). Thank you in advance, guys. k

  • Access violation using LocalAlloc()

    - by PaulH
    I have a Visual Studio 2008 Windows Mobile 6 C++ application that is using an API that requires the use of LocalAlloc(). To make my life easier, I created an implementation of a standard allocator that uses LocalAlloc() internally:

        /// Standard library allocator implementation using LocalAlloc and LocalReAlloc
        /// to create a dynamically-sized array.
        /// Memory allocated by this allocator is never deallocated. That is up to the
        /// user.
        template< class T, int max_allocations >
        class LocalAllocator
        {
        public:
            typedef T value_type;
            typedef size_t size_type;
            typedef ptrdiff_t difference_type;
            typedef T* pointer;
            typedef const T* const_pointer;
            typedef T& reference;
            typedef const T& const_reference;

            pointer address( reference r ) const { return &r; };
            const_pointer address( const_reference r ) const { return &r; };

            LocalAllocator() throw() : c_( NULL ) { };

            /// Attempt to allocate a block of storage with enough space for n elements
            /// of type T. n>=1 && n<=max_allocations.
            /// If memory cannot be allocated, a std::bad_alloc() exception is thrown.
            pointer allocate( size_type n, const void* /*hint*/ = 0 )
            {
                if( NULL == c_ )
                {
                    c_ = LocalAlloc( LPTR, sizeof( T ) * n );
                }
                else
                {
                    HLOCAL c = LocalReAlloc( c_, sizeof( T ) * n, LHND );
                    if( NULL == c )
                        LocalFree( c_ );
                    c_ = c;
                }
                if( NULL == c_ )
                    throw std::bad_alloc();
                return reinterpret_cast< T* >( c_ );
            };

            /// Normally, this would release a block of previously allocated storage.
            /// Since that's not what we want, this function does nothing.
            void deallocate( pointer /*p*/, size_type /*n*/ )
            {
                // no deallocation is performed. that is up to the user.
            };

            /// maximum number of elements that can be allocated
            size_type max_size() const throw() { return max_allocations; };

        private:
            /// current allocation point
            HLOCAL c_;
        }; // class LocalAllocator

    My application is using that allocator implementation in a std::vector:

        #define MAX_DIRECTORY_LISTING 512

        std::vector< WIN32_FIND_DATA,
                     LocalAllocator< WIN32_FIND_DATA, MAX_DIRECTORY_LISTING > > file_list;

        WIN32_FIND_DATA find_data = { 0 };
        HANDLE find_file = ::FindFirstFile( folder.c_str(), &find_data );
        if( NULL != find_file )
        {
            do
            {
                // access violation here on the 257th item.
                file_list.push_back( find_data );
            } while ( ::FindNextFile( find_file, &find_data ) );
            ::FindClose( find_file );
        }

        // data submitted to the API that requires a LocalAlloc()'d array of
        // WIN32_FIND_DATA structures
        SubmitData( &file_list.front() );

    On the 257th item added to the vector, the application crashes with an access violation:

        Data Abort: Thread=8e1b0400 Proc=8031c1b0 'rapiclnt'
        AKY=00008001 PC=03f9e3c8(coredll.dll+0x000543c8) RA=03f9ff04(coredll.dll+0x00055f04) BVA=21ae0020 FSR=00000007
        First-chance exception at 0x03f9e3c8 in rapiclnt.exe: 0xC0000005: Access violation reading location 0x01ae0020.

    LocalAllocator::allocate is called with n = 512 and LocalReAlloc() succeeds. The actual access violation occurs within the std::vector code after the LocalAllocator::allocate call:

        0x03f9e3c8
        0x03f9ff04
        > MyLib.dll!stlp_std::priv::__copy_trivial(const void* __first = 0x01ae0020, const void* __last = 0x01b03020, void* __result = 0x01b10020)  Line: 224, Byte Offsets: 0x3c  C++
          MyLib.dll!stlp_std::vector<_WIN32_FIND_DATAW,LocalAllocator<_WIN32_FIND_DATAW,512> >::_M_insert_overflow(_WIN32_FIND_DATAW* __pos = 0x01b03020, _WIN32_FIND_DATAW& __x = {...}, stlp_std::__true_type& __formal = {...}, unsigned int __fill_len = 1, bool __atend = true)  Line: 112, Byte Offsets: 0x5c  C++
          MyLib.dll!stlp_std::vector<_WIN32_FIND_DATAW,LocalAllocator<_WIN32_FIND_DATAW,512> >::push_back(_WIN32_FIND_DATAW& __x = {...})  Line: 388, Byte Offsets: 0xa0  C++
          MyLib.dll!Foo(unsigned long int cbInput = 16, unsigned char* pInput = 0x01a45620, unsigned long int* pcbOutput = 0x1dabfbbc, unsigned char** ppOutput = 0x1dabfbc0, IRAPIStream* __formal = 0x00000000)  Line: 66, Byte Offsets: 0x1e4  C++

    If anybody can point out what I may be doing wrong, I would appreciate it.

    Thanks, PaulH

  • Collections not read from hibernate/ehcache second-level-cache

    - by Mark van Venrooij
    I'm trying to cache lazily loaded collections with ehcache/hibernate in a Spring project. When I execute a session.get(Parent.class, 123) and browse through the children multiple times, a query is executed every time to fetch the children. The parent is only queried the first time and is then resolved from the cache. Probably I'm missing something, but I can't find the solution. Please see the relevant code below. I'm using Spring (3.2.4.RELEASE), Hibernate (4.2.1.Final) and ehcache (2.6.6).

    The parent class:

        @Entity
        @Table(name = "PARENT")
        @Cacheable
        @Cache(usage = CacheConcurrencyStrategy.READ_WRITE, include = "all")
        public class Parent implements Serializable {

            /** The Id. */
            @Id
            @Column(name = "ID")
            private int id;

            @OneToMany(fetch = FetchType.LAZY, mappedBy = "parent")
            @Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
            private List<Child> children;

            public List<Child> getChildren() {
                return children;
            }

            public void setChildren(List<Child> children) {
                this.children = children;
            }

            @Override
            public boolean equals(Object o) {
                if (this == o) return true;
                if (o == null || getClass() != o.getClass()) return false;
                Parent that = (Parent) o;
                if (id != that.id) return false;
                return true;
            }

            @Override
            public int hashCode() {
                return id;
            }
        }

    The child class:

        @Entity
        @Table(name = "CHILD")
        @Cacheable
        @Cache(usage = CacheConcurrencyStrategy.READ_WRITE, include = "all")
        public class Child {

            @Id
            @Column(name = "ID")
            private int id;

            @ManyToOne(fetch = FetchType.LAZY, cascade = CascadeType.ALL)
            @JoinColumn(name = "PARENT_ID")
            @Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
            private Parent parent;

            public int getId() {
                return id;
            }

            public void setId(final int id) {
                this.id = id;
            }

            private Parent getParent() {
                return parent;
            }

            private void setParent(Parent parent) {
                this.parent = parent;
            }

            @Override
            public boolean equals(final Object o) {
                if (this == o) {
                    return true;
                }
                if (o == null || getClass() != o.getClass()) {
                    return false;
                }
                final Child that = (Child) o;
                return id == that.id;
            }

            @Override
            public int hashCode() {
                return id;
            }
        }

    The application context:

        <bean id="sessionFactory" class="org.springframework.orm.hibernate4.LocalSessionFactoryBean">
            <property name="dataSource" ref="dataSource" />
            <property name="annotatedClasses">
                <list>
                    <value>Parent</value>
                    <value>Child</value>
                </list>
            </property>
            <property name="hibernateProperties">
                <props>
                    <prop key="hibernate.dialect">org.hibernate.dialect.SQLServer2008Dialect</prop>
                    <prop key="hibernate.hbm2ddl.auto">validate</prop>
                    <prop key="hibernate.ejb.naming_strategy">org.hibernate.cfg.ImprovedNamingStrategy</prop>
                    <prop key="hibernate.connection.charSet">UTF-8</prop>
                    <prop key="hibernate.show_sql">true</prop>
                    <prop key="hibernate.format_sql">true</prop>
                    <prop key="hibernate.use_sql_comments">true</prop>
                    <!-- cache settings ehcache -->
                    <prop key="hibernate.cache.use_second_level_cache">true</prop>
                    <prop key="hibernate.cache.use_query_cache">true</prop>
                    <prop key="hibernate.cache.region.factory_class">
                        org.hibernate.cache.ehcache.SingletonEhCacheRegionFactory</prop>
                    <prop key="hibernate.generate_statistics">true</prop>
                    <prop key="hibernate.cache.use_structured_entries">true</prop>
                    <prop key="hibernate.cache.use_query_cache">true</prop>
                    <prop key="hibernate.transaction.factory_class">
                        org.hibernate.engine.transaction.internal.jta.JtaTransactionFactory</prop>
                    <prop key="hibernate.transaction.jta.platform">
                        org.hibernate.service.jta.platform.internal.JBossStandAloneJtaPlatform</prop>
                </props>
            </property>
        </bean>

    The testcase I'm running:

        @Test
        public void testGetParentFromCache() {
            for (int i = 0; i < 3; i++) {
                getEntity();
            }
        }

        private void getEntity() {
            Session sess = sessionFactory.openSession();
            sess.setCacheMode(CacheMode.NORMAL);
            Transaction t = sess.beginTransaction();
            Parent p = (Parent) sess.get(Parent.class, 123);
            Assert.assertNotNull(p);
            Assert.assertNotNull(p.getChildren().size());
            t.commit();
            sess.flush();
            sess.clear();
            sess.close();
        }

    In the logging I can see that, the first time, two queries are executed: one getting the parent and one getting the children. Furthermore, the logging shows that the child entities as well as the collection are stored in the second-level cache. However, when reading the collection, a query is still executed to fetch the children on the second and third attempts.
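
    Since the log shows the collection being put into the cache but apparently never read back, two places seem worth a hedged look. First, the configuration mixes JtaTransactionFactory and a JBoss JTA platform with a test that drives plain session.beginTransaction() local transactions; with READ_WRITE caching, Hibernate consults the cache through its transaction infrastructure, so a mismatch there can make every collection read bypass the cache. Second, hibernate.generate_statistics is already enabled above, so the region counters can confirm what is actually happening. A minimal diagnostic sketch; the region name is the fully qualified collection role, and com.example is a placeholder for your package:

        import org.hibernate.SessionFactory;
        import org.hibernate.stat.SecondLevelCacheStatistics;
        import org.hibernate.stat.Statistics;

        public class CacheDiagnostics {
            // Prints hit/miss/put counts for the collection cache region, so repeated
            // lookups reveal whether children are read from the cache or re-queried.
            public static void dump(SessionFactory sessionFactory) {
                Statistics stats = sessionFactory.getStatistics();
                // Default collection region name: <fully qualified entity>.<collection field>
                SecondLevelCacheStatistics region =
                        stats.getSecondLevelCacheStatistics("com.example.Parent.children");
                System.out.println("region puts=" + region.getPutCount()
                        + " hits=" + region.getHitCount()
                        + " misses=" + region.getMissCount()
                        + " in memory=" + region.getElementCountInMemory());
            }
        }

    If the puts grow on every call while the hits stay at zero, the region is being written but never consulted, which points at the transaction/session configuration rather than at the mappings.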

  • Record audio via MediaRecorder

    - by Isuru Madusanka
    I am trying to record audio with MediaRecorder, and I get an error. I have tried changing everything and nothing works. For the last two hours I've been trying to find the error; I used the Log class too, and I found that the error occurs when recorder.start() is called. What could be the problem?

        public class AudioRecorderActivity extends Activity {
            MediaRecorder recorder;
            File audioFile = null;
            private static final String TAG = "AudioRecorderActivity";
            private View startButton;
            private View stopButton;

            /** Called when the activity is first created. */
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                startButton = findViewById(R.id.start);
                stopButton = findViewById(R.id.stop);
                setContentView(R.layout.main);
            }

            public void startRecording(View view) throws IOException {
                startButton.setEnabled(false);
                stopButton.setEnabled(true);
                File sampleDir = Environment.getExternalStorageDirectory();
                try {
                    audioFile = File.createTempFile("sound", ".3gp", sampleDir);
                } catch (IOException e) {
                    Toast.makeText(getApplicationContext(), "SD Card Access Error", Toast.LENGTH_LONG).show();
                    Log.e(TAG, "Sdcard access error");
                    return;
                }
                recorder = new MediaRecorder();
                recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
                recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
                recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
                recorder.setAudioEncodingBitRate(16);
                recorder.setAudioSamplingRate(44100);
                recorder.setOutputFile(audioFile.getAbsolutePath());
                recorder.prepare();
                recorder.start();
            }

            public void stopRecording(View view) {
                startButton.setEnabled(true);
                stopButton.setEnabled(false);
                recorder.stop();
                recorder.release();
                addRecordingToMediaLibrary();
            }

            protected void addRecordingToMediaLibrary() {
                ContentValues values = new ContentValues(4);
                long current = System.currentTimeMillis();
                values.put(MediaStore.Audio.Media.TITLE, "audio" + audioFile.getName());
                values.put(MediaStore.Audio.Media.DATE_ADDED, (int) (current / 1000));
                values.put(MediaStore.Audio.Media.MIME_TYPE, "audio/3gpp");
                values.put(MediaStore.Audio.Media.DATA, audioFile.getAbsolutePath());
                ContentResolver contentResolver = getContentResolver();
                Uri base = MediaStore.Audio.Media.EXTERNAL_CONTENT_URI;
                Uri newUri = contentResolver.insert(base, values);
                sendBroadcast(new Intent(Intent.ACTION_MEDIA_SCANNER_SCAN_FILE, newUri));
                Toast.makeText(this, "Added File" + newUri, Toast.LENGTH_LONG).show();
            }
        }

    And here is the xml layout:

        <?xml version="1.0" encoding="utf-8"?>
        <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:id="@+id/RelativeLayout1"
            android:layout_width="fill_parent"
            android:layout_height="fill_parent"
            android:orientation="vertical" >

            <Button
                android:id="@+id/start"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:layout_alignParentTop="true"
                android:layout_centerHorizontal="true"
                android:layout_marginTop="146dp"
                android:onClick="startRecording"
                android:text="Start Recording" />

            <Button
                android:id="@+id/stop"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:layout_alignLeft="@+id/start"
                android:layout_below="@+id/start"
                android:layout_marginTop="41dp"
                android:enabled="false"
                android:onClick="stopRecording"
                android:text="Stop Recording" />

        </RelativeLayout>

    And I added the permissions to the AndroidManifest file:

        <?xml version="1.0" encoding="utf-8"?>
        <manifest xmlns:android="http://schemas.android.com/apk/res/android"
            package="in.isuru.audiorecorder"
            android:versionCode="1"
            android:versionName="1.0" >

            <uses-sdk android:minSdkVersion="8" />

            <application
                android:icon="@drawable/ic_launcher"
                android:label="@string/app_name" >
                <activity
                    android:name=".AudioRecorderActivity"
                    android:label="@string/app_name" >
                    <intent-filter>
                        <action android:name="android.intent.action.MAIN" />
                        <category android:name="android.intent.category.LAUNCHER" />
                    </intent-filter>
                </activity>
            </application>

            <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
            <uses-permission android:name="android.permission.RECORD_AUDIO" />

        </manifest>

    I need to record high quality audio. Thanks!
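
    Since recorder.start() is where it dies, two observations, offered as educated guesses rather than certain fixes. First, startButton and stopButton are assigned before setContentView(), so findViewById() returns null there. Second, the encoding parameters are suspect: setAudioEncodingBitRate() takes bits per second, so 16 requests a 16 bit/s stream, and AMR_NB is an 8 kHz codec, which makes the 44100 Hz sampling rate questionable too. A hedged sketch of a parameter set generally accepted for AMR_NB:

        import java.io.File;
        import java.io.IOException;
        import android.media.MediaRecorder;

        // Hedged sketch: AMR_NB at its native 8 kHz rate and 12.2 kbit/s mode.
        public static MediaRecorder startAmrRecording(File audioFile) throws IOException {
            MediaRecorder recorder = new MediaRecorder();
            recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
            recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
            recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
            recorder.setAudioEncodingBitRate(12200); // bits per second, not kbit/s
            recorder.setAudioSamplingRate(8000);     // AMR_NB is defined at 8 kHz
            recorder.setOutputFile(audioFile.getAbsolutePath());
            recorder.prepare();
            recorder.start();
            return recorder;
        }

    For genuinely high-quality audio, the usual route (API level permitting; the manifest above declares minSdkVersion 8, and AAC support in MediaRecorder arrived around API 10) is OutputFormat.MPEG_4 with AudioEncoder.AAC, a 44100 Hz sampling rate, and a bit rate in the 96000-128000 range.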

  • Optimizing python code performance when importing zipped csv to a mongo collection

    - by mark
    I need to import a zipped csv into a mongo collection, but there is a catch: every record contains a timestamp in Pacific Time, which must be converted to the local time corresponding to the (longitude, latitude) pair found in the same record. The code looks like so:

        def read_csv_zip(path, timezones):
            with ZipFile(path) as z, z.open(z.namelist()[0]) as input:
                csv_rows = csv.reader(input)
                header = csv_rows.next()
                check, converters = get_aux_stuff(header)
                for csv_row in csv_rows:
                    if check(csv_row):
                        row = {
                            converter[0]: converter[1](value)
                            for converter, value in zip(converters, csv_row)
                            if allow_field(converter)
                        }
                        ts = row['ts']
                        lng, lat = row['loc']
                        found_tz_entry = timezones.find_one(SON({'loc': {'$within': {'$box': [[lng - tz_lookup_radius, lat - tz_lookup_radius], [lng + tz_lookup_radius, lat + tz_lookup_radius]]}}}))
                        if found_tz_entry:
                            tz_name = found_tz_entry['tz']
                            local_ts = ts.astimezone(timezone(tz_name)).replace(tzinfo=None)
                            row['tz'] = tz_name
                        else:
                            local_ts = (ts.astimezone(utc) + timedelta(hours=int(lng / 15))).replace(tzinfo=None)
                        row['local_ts'] = local_ts
                        yield row

        def insert_documents(collection, source, batch_size):
            while True:
                items = list(itertools.islice(source, batch_size))
                if len(items) == 0:
                    break
                try:
                    collection.insert(items)
                except:
                    for item in items:
                        try:
                            collection.insert(item)
                        except Exception as exc:
                            print("Failed to insert record {0} - {1}".format(item['_id'], exc))

        def main(zip_path):
            with Connection() as connection:
                data = connection.mydb.data
                timezones = connection.timezones.data
                insert_documents(data, read_csv_zip(zip_path, timezones), 1000)

    The code proceeds as follows:

    1. Every record read from the csv is checked and converted to a dictionary, where some fields may be skipped, some titles renamed (from those appearing in the csv header), and some values converted (to datetime, to integers, to floats, etc.).
    2. For each record read from the csv, a lookup is made into the timezones collection to map the record location to the respective time zone.
    3. If the mapping is successful, that timezone is used to convert the record timestamp (Pacific time) to the respective local timestamp. If no mapping is found, a rough approximation is calculated.

    The timezones collection is appropriately indexed, of course; calling explain() confirms it. The process is slow. Naturally, having to query the timezones collection for every record kills the performance. I am looking for advice on how to improve it. Thanks.

    EDIT

    The timezones collection contains 8176040 records, each containing four values:

        > db.data.findOne()
        { "_id" : 3038814, "loc" : [ 1.48333, 42.5 ], "tz" : "Europe/Andorra" }

    EDIT2

    OK, I have compiled a release build of http://toblerity.github.com/rtree/ and configured the rtree package. Then I created an rtree dat/idx pair of files corresponding to my timezones collection, so instead of calling collection.find_one I call index.intersection. Surprisingly, not only is there no improvement, it works even more slowly now! Maybe rtree could be fine-tuned to load the entire dat/idx pair into RAM (704M), but I do not know how to do that. Until then, it is not an alternative. In general, I think the solution should involve parallelization of the task.

    EDIT3

    Profile output when using collection.find_one:

        >>> p.sort_stats('cumulative').print_stats(10)
        Tue Apr 10 14:28:39 2012    ImportDataIntoMongo.profile

        64549590 function calls (64549180 primitive calls) in 1231.257 seconds

        Ordered by: cumulative time
        List reduced from 730 to 10 due to restriction <10>

        ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
             1    0.012    0.012 1231.257 1231.257  ImportDataIntoMongo.py:1(<module>)
             1    0.001    0.001 1230.959 1230.959  ImportDataIntoMongo.py:187(main)
             1  853.558  853.558  853.558  853.558  {raw_input}
             1    0.598    0.598  370.510  370.510  ImportDataIntoMongo.py:165(insert_documents)
        343407    9.965    0.000  359.034    0.001  ImportDataIntoMongo.py:137(read_csv_zip)
        343408    2.927    0.000  287.035    0.001  c:\python27\lib\site-packages\pymongo\collection.py:489(find_one)
        343408    1.842    0.000  274.803    0.001  c:\python27\lib\site-packages\pymongo\cursor.py:699(next)
        343408    2.542    0.000  271.212    0.001  c:\python27\lib\site-packages\pymongo\cursor.py:644(_refresh)
        343408    4.512    0.000  253.673    0.001  c:\python27\lib\site-packages\pymongo\cursor.py:605(__send_message)
        343408    0.971    0.000  242.078    0.001  c:\python27\lib\site-packages\pymongo\connection.py:871(_send_message_with_response)

    Profile output when using index.intersection:

        >>> p.sort_stats('cumulative').print_stats(10)
        Wed Apr 11 16:21:31 2012    ImportDataIntoMongo.profile

        41542960 function calls (41542536 primitive calls) in 2889.164 seconds

        Ordered by: cumulative time
        List reduced from 778 to 10 due to restriction <10>

        ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
             1    0.028    0.028 2889.164 2889.164  ImportDataIntoMongo.py:1(<module>)
             1    0.017    0.017 2888.679 2888.679  ImportDataIntoMongo.py:202(main)
             1 2365.526 2365.526 2365.526 2365.526  {raw_input}
             1    0.766    0.766  502.817  502.817  ImportDataIntoMongo.py:180(insert_documents)
        343407    9.147    0.000  491.433    0.001  ImportDataIntoMongo.py:152(read_csv_zip)
        343406    0.571    0.000  391.394    0.001  c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:384(intersection)
        343406  379.957    0.001  390.824    0.001  c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:435(_intersection_obj)
        686513   22.616    0.000   38.705    0.000  c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:451(_get_objects)
        343406    6.134    0.000   33.326    0.000  ImportDataIntoMongo.py:162(<dictcomp>)
           346    0.396    0.001   30.665    0.089  c:\python27\lib\site-packages\pymongo\collection.py:240(insert)

    EDIT4

    I have parallelized the code, but the results are still not very encouraging. I am convinced it could be done better. See my own answer to this question for details.
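
    On the advice front: both profiles show the per-record spatial lookup dominating (find_one and the rtree intersection respectively), and consecutive records in this kind of feed usually fall in the same small area, so memoizing lookups on a rounded-coordinate grid key tends to eliminate most of them before any parallelization is attempted. The idea is language-agnostic; sketched here in Java for concreteness, with the cell size and the injected slow lookup as illustrative placeholders:

        import java.util.HashMap;
        import java.util.Map;
        import java.util.function.BiFunction;

        public class TimezoneMemo {
            // Cache timezone names per grid cell; with cells about the size of the
            // lookup radius, a cached answer covers every point in the same cell.
            // Caveat: points near a cell boundary can still map to the wrong zone;
            // shrink the cell size if that matters.
            private final double cellSizeDegrees;
            private final Map<Long, String> cache = new HashMap<>();
            private final BiFunction<Double, Double, String> slowLookup; // e.g. the DB query

            public TimezoneMemo(double cellSizeDegrees,
                                BiFunction<Double, Double, String> slowLookup) {
                this.cellSizeDegrees = cellSizeDegrees;
                this.slowLookup = slowLookup;
            }

            public String timezoneFor(double lng, double lat) {
                long cellX = (long) Math.floor(lng / cellSizeDegrees);
                long cellY = (long) Math.floor(lat / cellSizeDegrees);
                long key = (cellX << 32) ^ (cellY & 0xffffffffL); // pack cell coords into one key
                return cache.computeIfAbsent(key, k -> slowLookup.apply(lng, lat));
            }
        }

    With 343k records and far fewer distinct cells than records, the expensive query then runs roughly once per populated cell instead of once per record.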

  • Qt C++ signals and slots did not fire

    - by Xegara
    I have programmed Qt a couple of times already and I really like the signals and slots feature. But now I'm having a problem: when a signal is emitted from one thread, the corresponding slot in another thread is not fired. The connection was made in the main program. This is also my first time using Qt for ROS, which uses CMake. The signals fired by the QThreads triggered their corresponding slots, but the signal emitted by my class UserInput did not trigger the slot in TfCompleter where it was supposed to. I have tried everything I can. Any help? The code is provided below.

    Main.cpp:

        #include <QCoreApplication>
        #include <QThread>
        #include "userinput.h"
        #include "tfcompleter.h"

        int main(int argc, char** argv)
        {
            QCoreApplication app(argc, argv);

            QThread *thread1 = new QThread();
            QThread *thread2 = new QThread();
            UserInput *input1 = new UserInput();
            TfCompleter *completer = new TfCompleter();

            QObject::connect(input1, SIGNAL(togglePause2()), completer, SLOT(toggle()));
            QObject::connect(thread1, SIGNAL(started()), completer, SLOT(startCounting()));
            QObject::connect(thread2, SIGNAL(started()), input1, SLOT(start()));

            completer->moveToThread(thread1);
            input1->moveToThread(thread2);

            thread1->start();
            thread2->start();

            app.exec();
            return 0;
        }

    What I want to do is this: there are two separate threads. One thread is for the user input; when the user enters [space], that thread emits a signal to toggle the boolean member field of the other thread. The other thread's task is simply to continue its processing while the user wants it to run, and to pause otherwise. I wanted to let the user toggle the processing at any time, which is why I decided to put them in separate threads. The following code is the TfCompleter and UserInput.

    tfcompleter.h:

        #ifndef TFCOMPLETER_H
        #define TFCOMPLETER_H

        #include <QObject>
        #include <QtCore>

        class TfCompleter : public QObject
        {
            Q_OBJECT

        private:
            bool isCount;

        public Q_SLOTS:
            void toggle();
            void startCounting();
        };

        #endif

    tfcompleter.cpp:

        #include "tfcompleter.h"
        #include <iostream>

        void TfCompleter::startCounting()
        {
            static uint i = 0;
            while (true)
            {
                if (isCount)
                    std::cout << i++ << std::endl;
            }
        }

        void TfCompleter::toggle()
        {
            // isCount = ~isCount;
            std::cout << "isCount " << std::endl;
        }

    UserInput.h:

        #ifndef USERINPUT_H
        #define USERINPUT_H

        #include <QObject>
        #include <QtCore>

        class UserInput : public QObject
        {
            Q_OBJECT

        public Q_SLOTS:
            void start(); // Waits for the keypress from the user and emits the corresponding signal.

        public:
        Q_SIGNALS:
            void togglePause2();
        };

        #endif

    UserInput.cpp:

        #include "userinput.h"
        #include <iostream>
        #include <cstdio>

        // Implementation of getch
        #include <termios.h>
        #include <unistd.h>

        /* reads from keypress, doesn't echo */
        int getch(void)
        {
            struct termios oldattr, newattr;
            int ch;
            tcgetattr( STDIN_FILENO, &oldattr );
            newattr = oldattr;
            newattr.c_lflag &= ~( ICANON | ECHO );
            tcsetattr( STDIN_FILENO, TCSANOW, &newattr );
            ch = getchar();
            tcsetattr( STDIN_FILENO, TCSANOW, &oldattr );
            return ch;
        }

        void UserInput::start()
        {
            char c = 0;
            while (true)
            {
                c = getch();
                if (c == ' ')
                {
                    Q_EMIT togglePause2();
                    std::cout << "SPACE" << std::endl;
                }
                c = 0;
            }
        }

    Here is the CMakeLists.txt. I've included it here as well, since I don't know whether CMake is also a factor here.

    CMakeLists.txt:

        ##############################################################################
        # CMake
        ##############################################################################
        cmake_minimum_required(VERSION 2.4.6)

        ##############################################################################
        # Ros Initialisation
        ##############################################################################
        include($ENV{ROS_ROOT}/core/rosbuild/rosbuild.cmake)
        rosbuild_init()
        set(CMAKE_AUTOMOC ON)

        # set the default path for built executables to the "bin" directory
        set(EXECUTABLE_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/bin)
        # set the default path for built libraries to the "lib" directory
        set(LIBRARY_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/lib)

        # Set the build type. Options are:
        #   Coverage       : w/ debug symbols, w/o optimization, w/ code-coverage
        #   Debug          : w/ debug symbols, w/o optimization
        #   Release        : w/o debug symbols, w/ optimization
        #   RelWithDebInfo : w/ debug symbols, w/ optimization
        #   MinSizeRel     : w/o debug symbols, w/ optimization, stripped binaries
        # set(ROS_BUILD_TYPE Debug)

        ##############################################################################
        # Qt Environment
        ##############################################################################
        # Could use this, but qt-ros would need an updated deb, instead we'll move to catkin
        # rosbuild_include(qt_build qt-ros)
        rosbuild_find_ros_package(qt_build)
        include(${qt_build_PACKAGE_PATH}/qt-ros.cmake)
        rosbuild_prepare_qt4(QtCore) # Add the appropriate components to the component list here
        ADD_DEFINITIONS(-DQT_NO_KEYWORDS)

        ##############################################################################
        # Sections
        ##############################################################################
        #file(GLOB QT_FORMS RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} ui/*.ui)
        #file(GLOB QT_RESOURCES RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} resources/*.qrc)
        file(GLOB_RECURSE QT_MOC RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} FOLLOW_SYMLINKS include/rgbdslam_client/*.hpp)
        #QT4_ADD_RESOURCES(QT_RESOURCES_CPP ${QT_RESOURCES})
        #QT4_WRAP_UI(QT_FORMS_HPP ${QT_FORMS})
        QT4_WRAP_CPP(QT_MOC_HPP ${QT_MOC})

        ##############################################################################
        # Sources
        ##############################################################################
        file(GLOB_RECURSE QT_SOURCES RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} FOLLOW_SYMLINKS src/*.cpp)

        ##############################################################################
        # Binaries
        ##############################################################################
        rosbuild_add_executable(rgbdslam_client ${QT_SOURCES} ${QT_MOC_HPP})
        #rosbuild_add_executable(rgbdslam_client ${QT_SOURCES} ${QT_RESOURCES_CPP} ${QT_FORMS_HPP} ${QT_MOC_HPP})
        target_link_libraries(rgbdslam_client ${QT_LIBRARIES})

  • Find the set of largest contiguous rectangles to cover multiple areas

    - by joelpt
    I'm working on a tool called Quickfort for the game Dwarf Fortress. Quickfort turns spreadsheets in csv/xls format into a series of commands for Dwarf Fortress to carry out in order to plot a "blueprint" within the game. I am currently trying to optimally solve an area-plotting problem for the 2.0 release of this tool.

    Consider the following "blueprint", which defines plotting commands for a 2-dimensional grid. Each cell in the grid should either be dug out ("d"), channeled ("c"), or left unplotted ("."). Any number of distinct plotting commands might be present in actual usage.

        . d . d c c
        d d d d c c
        . d d d . c
        d d d d d c
        . d . d d c

    To minimize the number of instructions that need to be sent to Dwarf Fortress, I would like to find the set of largest contiguous rectangles that can be formed to completely cover, or "plot", all of the plottable cells. To be valid, all of a given rectangle's cells must contain the same command. This is a faster approach than Quickfort 1.0 took: plotting every cell individually as a 1x1 rectangle. This video shows the performance difference between the two versions.

    For the above blueprint, the solution looks like this:

        . 9 . 0 3 2
        8 1 1 1 3 2
        . 1 1 1 . 2
        7 1 1 1 4 2
        . 6 . 5 4 2

    Each same-numbered rectangle above denotes a contiguous rectangle. The largest rectangles take precedence over smaller rectangles that could also be formed in their areas. The order of the numbering/rectangles is unimportant.

    My current approach is iterative. In each iteration, I build a list of the largest rectangles that could be formed from each of the grid's plottable cells by extending in all 4 directions from the cell. After sorting the list largest first, I begin with the largest rectangle found, mark its underlying cells as "plotted", and record the rectangle in a list. Before plotting each rectangle, its underlying cells are checked to ensure they are not yet plotted (overlapping a previous plot). We then start again, finding the largest remaining rectangles that can be formed and plotting them until all cells have been plotted as part of some rectangle.

    I consider this approach slightly more optimized than a dumb brute-force search, but I am wasting a lot of cycles (re)calculating cells' largest rectangles and checking underlying cells' states. Currently, this rectangle-discovery routine takes the lion's share of the total runtime of the tool, especially for large blueprints. I have sacrificed some accuracy for the sake of speed by only considering rectangles from cells which appear to form a rectangle's corner (determined using some neighboring-cell heuristics which aren't always correct). As a result of this 'optimization', my current code doesn't actually generate the above solution correctly, but it's close enough.

    More broadly, I consider the goal of largest-rectangles-first to be a "good enough" approach for this application. However, I observe that if the goal is instead to find the minimum set (fewest number) of rectangles to completely cover multiple areas, the solution would look like this instead:

        . 3 . 5 6 8
        1 3 4 5 6 8
        . 3 4 5 . 8
        2 3 4 5 7 8
        . 3 . 5 7 8

    This second goal actually represents a more optimal solution to the problem, as fewer rectangles usually means fewer commands sent to Dwarf Fortress. However, this approach strikes me as closer to NP-hard, based on my limited math knowledge.

    Watch the video if you'd like to better understand the overall strategy; I have not addressed other aspects of Quickfort's process, such as finding the shortest cursor path that plots all rectangles. Possibly there is a solution to this problem that coherently combines these multiple strategies. Help of any form would be appreciated.
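
    A note on the expensive inner step: finding the single largest same-command rectangle remaining does not require extending from every cell. The classic largest-rectangle-under-a-histogram technique finds it in one O(rows x cols) pass per command character. A hedged sketch, using '.' for cells that are unplottable or already plotted; the grid encoding is illustrative:

        import java.util.ArrayDeque;
        import java.util.Deque;

        final class LargestRectangle {
            // Returns {area, top, left, bottom, right} of the largest rectangle
            // whose cells all equal cmd, or {0, -1, -1, -1, -1} if none exists.
            static int[] largest(char[][] grid, char cmd) {
                int rows = grid.length, cols = grid[0].length;
                int[] heights = new int[cols];       // consecutive matches above each column
                int[] best = new int[]{0, -1, -1, -1, -1};
                for (int r = 0; r < rows; r++) {
                    for (int c = 0; c < cols; c++) {
                        heights[c] = (grid[r][c] == cmd) ? heights[c] + 1 : 0;
                    }
                    // Largest rectangle in this row's histogram via a monotonic stack.
                    Deque<Integer> stack = new ArrayDeque<>();
                    for (int c = 0; c <= cols; c++) {
                        int h = (c == cols) ? 0 : heights[c];   // sentinel drains the stack
                        while (!stack.isEmpty() && heights[stack.peek()] >= h) {
                            int height = heights[stack.pop()];
                            int left = stack.isEmpty() ? 0 : stack.peek() + 1;
                            int area = height * (c - left);
                            if (area > best[0]) {
                                best = new int[]{area, r - height + 1, left, r, c - 1};
                            }
                        }
                        stack.push(c);
                    }
                }
                return best;
            }
        }

    Greedy plotting then repeats: take the best rectangle over all command characters, overwrite its cells with '.', and loop until nothing remains, the same largest-first order as above but without per-cell extension probes or re-checks of underlying cells. On the second goal, one hedged pointer: minimum rectangle partition of a rectilinear region is, perhaps surprisingly, known to be solvable in polynomial time via a matching argument over chords between concave corners, though that algorithm is considerably more involved than the greedy.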

  • Custom Section Name Crashing NSFetchedResultsController

    - by Mike H.
    I have a managed object with a dueDate attribute. Instead of using an ugly date string as the section header in my UITableView, I created a transient attribute called "category" and defined it like so:

- (NSString*)category {
    [self willAccessValueForKey:@"category"];
    NSString* categoryName;
    if ([self isOverdue]) {
        categoryName = @"Overdue";
    } else if (self.finishedDate != nil) {
        categoryName = @"Done";
    } else {
        categoryName = @"In Progress";
    }
    [self didAccessValueForKey:@"category"];
    return categoryName;
}

Here is the NSFetchedResultsController setup:

NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];
NSEntityDescription *entity = [NSEntityDescription entityForName:@"Task" inManagedObjectContext:managedObjectContext];
[fetchRequest setEntity:entity];
NSMutableArray* descriptors = [[NSMutableArray alloc] init];
NSSortDescriptor *dueDateDescriptor = [[NSSortDescriptor alloc] initWithKey:@"dueDate" ascending:YES];
[descriptors addObject:dueDateDescriptor];
[dueDateDescriptor release];
[fetchRequest setSortDescriptors:descriptors];
fetchedResultsController = [[NSFetchedResultsController alloc] initWithFetchRequest:fetchRequest managedObjectContext:managedObjectContext sectionNameKeyPath:@"category" cacheName:@"Root"];

The table initially displays fine, showing the unfinished items whose dueDate has not passed in a section titled "In Progress". Now, the user can tap a row in the table view, which pushes a new details view onto the navigation stack. In this new view the user can tap a button to indicate that the item is now "Done". Here is the handler for the button (self.task is the managed object):

- (void)taskDoneButtonTapped {
    self.task.finishedDate = [NSDate date];
}

As soon as the value of the finishedDate attribute changes, I'm hit with this exception:

2010-03-18 23:29:52.476 MyApp[1637:207] Serious application error. Exception was caught during Core Data change processing: no section named 'Done' found with userInfo (null)
2010-03-18 23:29:52.477 MyApp[1637:207] *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'no section named 'Done' found'

I've managed to figure out that the UITableView that is currently hidden by the new details view is trying to update its rows and sections because the NSFetchedResultsController was notified that something changed in the data set.

Here's my table update code (copied from either the Core Data Recipes sample or the CoreBooks sample -- I can't remember which):

- (void)controllerWillChangeContent:(NSFetchedResultsController *)controller {
    [self.tableView beginUpdates];
}

- (void)controller:(NSFetchedResultsController *)controller didChangeObject:(id)anObject atIndexPath:(NSIndexPath *)indexPath forChangeType:(NSFetchedResultsChangeType)type newIndexPath:(NSIndexPath *)newIndexPath {
    switch(type) {
        case NSFetchedResultsChangeInsert:
            [self.tableView insertRowsAtIndexPaths:[NSArray arrayWithObject:newIndexPath] withRowAnimation:UITableViewRowAnimationFade];
            break;
        case NSFetchedResultsChangeDelete:
            [self.tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath] withRowAnimation:UITableViewRowAnimationFade];
            break;
        case NSFetchedResultsChangeUpdate:
            [self configureCell:[self.tableView cellForRowAtIndexPath:indexPath] atIndexPath:indexPath];
            break;
        case NSFetchedResultsChangeMove:
            [self.tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath] withRowAnimation:UITableViewRowAnimationFade];
            // Reloading the section inserts a new row and ensures that titles are updated appropriately.
            [self.tableView reloadSections:[NSIndexSet indexSetWithIndex:newIndexPath.section] withRowAnimation:UITableViewRowAnimationFade];
            break;
    }
}

- (void)controller:(NSFetchedResultsController *)controller didChangeSection:(id <NSFetchedResultsSectionInfo>)sectionInfo atIndex:(NSUInteger)sectionIndex forChangeType:(NSFetchedResultsChangeType)type {
    switch(type) {
        case NSFetchedResultsChangeInsert:
            [self.tableView insertSections:[NSIndexSet indexSetWithIndex:sectionIndex] withRowAnimation:UITableViewRowAnimationFade];
            break;
        case NSFetchedResultsChangeDelete:
            [self.tableView deleteSections:[NSIndexSet indexSetWithIndex:sectionIndex] withRowAnimation:UITableViewRowAnimationFade];
            break;
    }
}

- (void)controllerDidChangeContent:(NSFetchedResultsController *)controller {
    [self.tableView endUpdates];
}

I put breakpoints in each of these methods and found that only controllerWillChangeContent: is called. The exception is thrown before either controller:didChangeObject:atIndexPath:forChangeType:newIndexPath: or controller:didChangeSection:atIndex:forChangeType: is called.

At this point I'm stuck. If I change my sectionNameKeyPath to just "dueDate" then everything works fine. I think that's because the dueDate attribute never changes, whereas the category will be different when read back after the finishedDate attribute changes. Please help!

UPDATE: Here is my UITableViewDataSource code:

- (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView {
    return [[self.fetchedResultsController sections] count];
}

- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
    id <NSFetchedResultsSectionInfo> sectionInfo = [[self.fetchedResultsController sections] objectAtIndex:section];
    return [sectionInfo numberOfObjects];
}

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
    static NSString *CellIdentifier = @"Cell";
    UITableViewCell *cell = [self.tableView dequeueReusableCellWithIdentifier:CellIdentifier];
    if (cell == nil) {
        cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease];
    }
    [self configureCell:cell atIndexPath:indexPath];
    return cell;
}

- (NSString *)tableView:(UITableView *)tableView titleForHeaderInSection:(NSInteger)section {
    id <NSFetchedResultsSectionInfo> sectionInfo = [[self.fetchedResultsController sections] objectAtIndex:section];
    return [sectionInfo name];
}
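
A direction worth trying before digging further, sketched here rather than taken from the question: this exception is commonly caused by the fetched-results cache, which was built when the only section name was "In Progress" and is not invalidated when a transient attribute produces a new name. Deleting the cache, or creating the controller without one, usually resolves it:

// Sketch of the common workaround (not from the original post): avoid a stale
// section cache when sectionNameKeyPath points at a transient attribute.
[NSFetchedResultsController deleteCacheWithName:@"Root"];

fetchedResultsController = [[NSFetchedResultsController alloc]
        initWithFetchRequest:fetchRequest
        managedObjectContext:managedObjectContext
        sectionNameKeyPath:@"category"
        cacheName:nil]; // nil disables caching so section info is recomputed on each change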

    Read the article

  • No Hibernate Session bound to thread exception

    - by Benchik
    I have Hibernate 3.6.0.Final and Spring 3.0.0.RELEASE. I get "No Hibernate Session bound to thread, and configuration does not allow creation of non-transactional one here". If I add the thread specification back in, I get "saveOrUpdate is not valid without active transaction". Any ideas?

The spring-config.xml:

<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
    <property name="driverClassName" value="org.hsqldb.jdbcDriver"/>
    <property name="url" value="jdbc:hsqldb:mem:jsf2demo"/>
    <property name="username" value="sa"/>
    <property name="password" value=""/>
</bean>

<bean id="sampleSessionFactory" class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
    <property name="dataSource" ref="sampleDataSource"/>
    <property name="annotatedClasses">
        <list>
            <value>com.maxheapsize.jsf2demo.Book</value>
        </list>
    </property>
    <property name="hibernateProperties">
        <props>
            <!-- prop key="hibernate.connection.pool_size">0</prop -->
            <prop key="hibernate.show_sql">true</prop>
            <prop key="hibernate.dialect">org.hibernate.dialect.HSQLDialect</prop>
            <!-- prop key="transaction.factory_class">org.hibernate.transaction.JDBCTransactionFactory</prop>
                 prop key="hibernate.current_session_context_class">thread</prop -->
            <prop key="hibernate.hbm2ddl.auto">create</prop>
        </props>
    </property>
</bean>

<bean id="sampleDataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
    <property name="driverClassName">
        <value>org.hsqldb.jdbcDriver</value>
    </property>
    <property name="url">
        <value>jdbc:hsqldb:file:/spring/db/springdb;SHUTDOWN=true</value>
    </property>
    <property name="username" value="sa"/>
    <property name="password" value=""/>
</bean>

<bean id="transactionManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager">
    <property name="sessionFactory" ref="sampleSessionFactory"/>
</bean>

<bean id="daoTxTemplate" abstract="true" class="org.springframework.transaction.interceptor.TransactionProxyFactoryBean">
    <property name="transactionManager" ref="transactionManager"/>
    <property name="transactionAttributes">
        <props>
            <prop key="create*">PROPAGATION_REQUIRED,ISOLATION_READ_COMMITTED</prop>
            <prop key="get*">PROPAGATION_REQUIRED,ISOLATION_READ_COMMITTED</prop>
        </props>
    </property>
</bean>

<bean name="openSessionInViewInterceptor" class="org.springframework.orm.hibernate3.support.OpenSessionInViewInterceptor">
    <property name="sessionFactory" ref="sampleSessionFactory"/>
    <property name="singleSession" value="true"/>
</bean>

<bean id="nameDao" parent="daoTxTemplate">
    <property name="target">
        <bean class="com.maxheapsize.dao.NameDao">
            <property name="sessionFactory" ref="sampleSessionFactory"/>
        </bean>
    </property>
</bean>

and the DAO:

public class NameDao {

    private SessionFactory sessionFactory;

    public void setSessionFactory(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    public SessionFactory getSessionFactory() {
        return sessionFactory;
    }

    @Transactional
    @SuppressWarnings("unchecked")
    public List<Name> getAll() {
        Session session = this.sessionFactory.getCurrentSession();
        List<Name> names = (List<Name>) session.createQuery("from Name").list();
        return names;
    }

    //@Transactional(propagation = Propagation.REQUIRED, readOnly = false)
    @Transactional
    public void save(Name name) {
        Session session = this.sessionFactory.getCurrentSession();
        session.saveOrUpdate(name);
        session.flush();
    }
}
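
One thing worth checking against the configuration above: nothing enables annotation-driven transactions, so the @Transactional annotations on NameDao are never processed, and the daoTxTemplate proxy only declares attributes for create* and get* methods, which save() does not match. A minimal sketch of the conventional fix, assuming the tx namespace (xmlns:tx="http://www.springframework.org/schema/tx" plus the matching schemaLocation entry) is declared on the beans root element; alternatively, a save* key could be added to the transactionAttributes props:

<!-- Sketch (not from the original question): enable processing of @Transactional
     so sessionFactory.getCurrentSession() runs inside a Spring-managed transaction. -->
<tx:annotation-driven transaction-manager="transactionManager"/>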

    Read the article

  • how to save state of dynamically created editTexts

    - by user922531
    I'm stuck on how to save the state of my EditTexts when the screen orientation changes. Currently, if text is entered into the EditTexts and the screen is rotated, the fields are wiped (as expected). I am already calling onSaveInstanceState and saving a String, but I have no clue how to save the EditTexts that are created in code, then retrieve their values and put them back into the EditTexts when the activity is redrawn. A snippet of my code -- my main activity is as follows:

public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.main);
    // get the multidim array
    b = getIntent().getBundleExtra("obj");
    m = (Methods) b.getSerializable("Methods");
    // method to draw the layout
    InitialiseUI();
    // Restore UI state from the savedInstanceState.
    if (savedInstanceState != null) {
        String strValue = savedInstanceState.getString("light");
        if (strValue != null) {
            FLight = strValue;
        }
    }
    try {
        mCamera = Camera.open();
        if (FLight.equals("true")) {
            flashLight();
        }
    } catch (Exception e) {
        Log.d(TAG, "Thrown exception onCreate() camera: " + e);
    }
} // end onCreate

/** Called when returning to this screen. */
@Override
public void onResume() {
    super.onResume();
    try {
        mCamera = Camera.open();
        if (FLight.equals("true")) {
            flashLight();
        }
    } catch (Exception e) {
        Log.d(TAG, "Thrown exception onResume() camera: " + e);
    }
} // end onResume

/** Saves data before leaving the screen. */
@Override
protected void onSaveInstanceState(Bundle outState) {
    super.onSaveInstanceState(outState);
    outState.putString("light", FLight);
}

/** Called when exiting / leaving the screen. */
@Override
protected void onPause() {
    super.onPause();
    Log.d(TAG, "onPause()");
    if (mCamera != null) {
        mCamera.stopPreview();
        mCamera.release();
        mCamera = null;
    }
}

/*
 * Set up the UI elements - add click listeners to buttons; used in
 * onCreate() and onConfigurationChanged().
 *
 * Sets the editText fields to show the previous readings as hints.
 */
public void InitialiseUI() {
    Log.d(TAG, "Start of InitialiseUI, Main activity");
    // get a reference to the TableLayout
    final TableLayout myTLreads = (TableLayout) findViewById(R.id.myTLreads);
    // create arrays to hold the TVs and ETs
    final TextView[] myTextViews = new TextView[m.getNoRows()]; // create an empty array
    final EditText[] myEditTexts = new EditText[m.getNoRows()]; // create an empty array
    for (int i = 0; i <= m.getNoRows() - 1; i++) {
        TableRow tr = new TableRow(this);
        tr.setLayoutParams(new LayoutParams(LayoutParams.MATCH_PARENT, LayoutParams.WRAP_CONTENT));
        // create a new textView / editText
        final TextView rowTextView = new TextView(this);
        final EditText rowEditText = new EditText(this);
        // setWidth is needed, otherwise my landscape layout is OFF
        rowEditText.setWidth(400);
        // this stops the keyboard taking up the whole screen in landscape layout
        rowEditText.setImeOptions(EditorInfo.IME_FLAG_NO_EXTRACT_UI);
        // add some padding to the right of the TV
        rowTextView.setPadding(0, 0, 10, 0);
        // set colors to white
        rowTextView.setTextColor(Color.parseColor("#FFFFFF"));
        rowEditText.setTextColor(Color.parseColor("#FFFFFF"));
        // if readings were already sent today, set color to yellow
        if (m.getTransmit(i + 1) == false) {
            rowEditText.setEnabled(false);
            rowEditText.setHintTextColor(Color.parseColor("#FFFF00"));
        }
        // set the text of the TV to the meter name
        rowTextView.setText(m.getMeterName(i + 1));
        // set the hint of the ET to the last submitted reading
        rowEditText.setHint(m.getLastReadString(i + 1));
        rowEditText.setInputType(InputType.TYPE_CLASS_PHONE); // InputType.TYPE_NUMBER_FLAG_DECIMAL
        // add the views to the table row
        tr.addView(rowTextView);
        tr.addView(rowEditText);
        myTLreads.addView(tr);
        // keep references to the views
        myTextViews[i] = rowTextView;
        myEditTexts[i] = rowEditText;
    }
    final Button submit = (Button) findViewById(R.id.submitReadings);
    // add a click listener to the button
    try {
        submit.setOnClickListener(new View.OnClickListener() {
            public void onClick(View v) {
                Log.d(TAG, "Submit button clicked, Main activity");
                preSubmitCheck(m.getAccNo(), m.getPostCode(), myEditTexts); // method to do HTML getting and sending
            }
        });
    } catch (Exception e) {
        Log.d(TAG, "Exceptions (submit button)" + e.toString());
    }
} // end of InitialiseUI

I don't need to do anything with these values until a button is clicked. Would it be easier if they were in a ListView? I'm guessing I would still have the problem of saving them and retrieving them on rotation. If it helps, I have an object m which is a String[][] that I could perhaps store the values in temporarily.
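
One approach that fits the code above, sketched under the assumption that myEditTexts is promoted from a local variable to a field (the "editTextValues" key is illustrative, not from the original code): dump each row's text into a String ArrayList in onSaveInstanceState, then re-apply the values after InitialiseUI() has rebuilt the rows.

private EditText[] myEditTexts; // promoted from a local in InitialiseUI()

@Override
protected void onSaveInstanceState(Bundle outState) {
    super.onSaveInstanceState(outState);
    outState.putString("light", FLight);
    // capture the current text of every dynamically created row
    java.util.ArrayList<String> values = new java.util.ArrayList<String>();
    for (EditText et : myEditTexts) {
        values.add(et.getText().toString());
    }
    outState.putStringArrayList("editTextValues", values);
}

// In onCreate(), after InitialiseUI() has recreated the rows:
if (savedInstanceState != null) {
    java.util.ArrayList<String> values = savedInstanceState.getStringArrayList("editTextValues");
    if (values != null) {
        for (int i = 0; i < values.size() && i < myEditTexts.length; i++) {
            myEditTexts[i].setText(values.get(i));
        }
    }
}

Alternatively, since Android only auto-restores view state for views that have IDs, giving each rowEditText a stable per-row ID via setId() inside InitialiseUI() may let the framework restore the text without any manual bundling.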

    Read the article

  • Objects in Java ArrayList don't get updated.

    - by Sbm007
    This is going to be a very long post; hopefully you can understand what I'm talking about, and I appreciate any help. Thanks.

Basically, I've created a personal, non-commercial project (which I don't plan to release) that can read ZIP and RAR files. It can only read the contents of an archive: the folders inside, the files inside the folders, and their properties (such as last modified date, last modified time, CRC checksum, uncompressed size, compressed size and file name). It can't extract files either, so it's really a ZIP/RAR viewer, if you will. Anyway, that's slightly irrelevant to my problem, but I thought I'd give you some background info.

Now for my problem: I can successfully list all the folders and files inside a ZIP archive, so now I want to take that raw input and link it together in some useful way. I made 2 classes: ArchiveFile (represents a file inside a ZIP) and ArchiveFolder (represents a folder inside a ZIP). They both have some useful methods such as getLastModifiedDate, getName, getPath and so on. The difference is that an ArchiveFolder can hold an ArrayList of ArchiveFile's and additional ArchiveFolder's (think of this as files and folders inside a folder).

Now I want to populate my raw input into one root ArchiveFolder, which will have all the files in the root dir of the ZIP in its ArchiveFile ArrayList and any additional folders in the root dir of the ZIP in its ArchiveFolder ArrayList (and this process can continue on like a chain reaction: more files/folders in each ArchiveFolder, and so on). So I came up with the following code:

while (archive.hasMore()) {
    String path = "";
    ArchiveFolder current = root;
    String[] contents = archive.getName().split("/");
    for (int x = 0; x < contents.length; ++x) {
        if (x == (contents.length - 1) && !archive.getName().endsWith("/")) { // if on last item and item is a file
            path += contents[x]; // update final path
            ArchiveFile file = new ArchiveFile(path, contents[x], archive.getUncompressedSize(),
                    archive.getCompressedSize(), archive.getModifiedTime(), archive.getModifiedDate(), archive.getCRC());
            current.addFile(file); // create and add the file to the current ArchiveFolder
        } else if (x == (contents.length - 1)) { // else if we are on the last item and it is a folder
            path += contents[x] + "/"; // update final path
            ArchiveFolder folder = new ArchiveFolder(path, contents[x], archive.getModifiedTime(), archive.getModifiedDate());
            current.addFolder(folder); // create and add this folder to the current ArchiveFolder
        } else { // else we are still traversing through the path
            path += contents[x] + "/"; // update path
            ArchiveFolder folder = new ArchiveFolder(path, contents[x]);
            current.addFolder(folder); // all we know is the path, so we can deduce the name only
            current = folder; // update current ArchiveFolder for the next iteration of the for loop
        }
    }
    archive.getNext();
}

Assume that root is the root ArchiveFolder (initially empty), and that archive.getName() returns the name of the current file OR folder as a relative path from the root of the ZIP archive, in the following fashion: file.txt or folder1/file2.txt or folder4/folder2/ (this last one is an empty folder). Please read through the comments in the above code to familiarize yourself with it.

Also assume that the addFolder method of an ArchiveFolder only adds the folder if it doesn't already exist (so there are no duplicate folders), and that it updates the time and date of an existing folder if they are blank (i.e. it was an intermediate folder we only knew the name of, but now we know its details). The code for addFolder is (pretty self-explanatory):

public void addFolder(ArchiveFolder folder) {
    int loc = folders.indexOf(folder); // folders is the ArrayList containing ArchiveFolder's
    if (loc == -1) {
        folders.add(folder);
    } else {
        ArchiveFolder real = folders.get(loc);
        if (real.time == null) {
            real.setTime(folder.getTime());
            real.setDate(folder.getDate());
        }
    }
}

So I can't see anything wrong with the code. It works, and after finishing, the root ArchiveFolder contains all the files in the root of the ZIP as I want it to, and it contains all the directories in the root folder as I want it to. So you'd think it works as expected -- but no: the ArchiveFolder's in the root folder don't contain the data inside those 'child' folders; each is just a blank folder with no additional files and folders (while it really does contain more files/folders when viewed in WinZip).

After debugging in Eclipse, the for loop does iterate through all the files (even those not included above), so this led me to believe that there is a problem with this line of code:

current = folder;

What it does is update the current folder (used as an intermediate by the loop) to the newly added folder. I thought Java passed objects by reference and thus all new operations and additions to future ArchiveFile's and ArchiveFolder's would automatically be reflected, with parent ArchiveFolder's updated accordingly. But that does not appear to be the case?

I know this is a very long post and I really hope someone can help me out with this. Thanks in advance.
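
The question's own suspicion is right, and the reason is that Java passes object references by value: when the path's intermediate folder already exists, addFolder keeps the instance that is already stored in the tree and silently discards the argument, yet current = folder still points at that discarded duplicate, so all subsequent children get attached to an object that is not in the tree. A minimal sketch of a fix (the return-value change is mine, not from the original code):

// Sketch of the likely fix: return the instance that actually lives in the tree,
// so the caller keeps adding children to the right object.
public ArchiveFolder addFolder(ArchiveFolder folder) {
    int loc = folders.indexOf(folder);
    if (loc == -1) {
        folders.add(folder);
        return folder;
    }
    ArchiveFolder real = folders.get(loc);
    if (real.time == null) {
        real.setTime(folder.getTime());
        real.setDate(folder.getDate());
    }
    return real; // the canonical instance, not the throwaway duplicate
}

// ...and in the traversal loop, replace the two lines
//     current.addFolder(folder);
//     current = folder;
// with:
//     current = current.addFolder(folder);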

    Read the article

  • Microsoft and jQuery

    - by Rick Strahl
    The jQuery JavaScript library has been steadily getting more popular and, with recent developments from Microsoft, is getting ever more exposure on the ASP.NET platform, now including directly from Microsoft. jQuery is a lightweight, open source DOM manipulation library for JavaScript that has changed how many developers think about JavaScript. You can download it and find more information on jQuery at www.jquery.com. For me, jQuery has had a huge impact on how I develop Web applications and was probably the main reason I went from dreading JavaScript development to actually looking forward to implementing client side JavaScript functionality. It has also had a profound impact on my JavaScript skill level, simply by letting me see how the library accomplishes things (and often reviewing the terse but excellent source code). jQuery made an uncomfortable development platform (JavaScript + DOM) a joy to work on. Although jQuery is by no means the only JavaScript library out there, its ease of use, small size, huge community of plug-ins and pure usefulness has made it easily the most popular JavaScript library available today. As a long time jQuery user, I've been excited to see the developments from Microsoft that are bringing jQuery to more ASP.NET developers and providing more integration with jQuery for ASP.NET's core features rather than relying on the ASP.NET AJAX library.

Microsoft and jQuery – making Friends

jQuery is an open source project, but in the last couple of years Microsoft has really thrown its weight behind it as a supported component on the Microsoft platform. When I say supported I literally mean supported: Microsoft now offers actual tech support for jQuery as part of their Product Support Services (PSS), as jQuery integration has become part of several of the ASP.NET toolkits and ships in several of the default Web project templates in Visual Studio 2010. The ASP.NET MVC 3 framework (still in Beta) also uses jQuery for a variety of client side support features, including client side validation, and we can look forward to more integration of client side functionality via jQuery in both MVC and WebForms in the future. In other words, jQuery is becoming an optional but included component of the ASP.NET platform. PSS support means that support staff will answer jQuery related support questions as part of any support incidents related to ASP.NET, which provides some peace of mind to corporate development shops that require end to end support from Microsoft. In addition to including and supporting jQuery, Microsoft has also been getting involved in providing development resources for extending jQuery's functionality via plug-ins. Microsoft's last version of the Microsoft Ajax Library – the successor to the native ASP.NET AJAX Library – included some really cool functionality for client templates, databinding and localization. As it turns out, Microsoft has rebuilt most of that functionality using jQuery as the base API and provided jQuery plug-ins of these components. Very recently these three plug-ins were submitted to and approved for the official jQuery plug-in repository, and they have been taken over by the jQuery team for further improvements and maintenance.
Even more surprising: the jQuery-templates component has actually been approved for inclusion in the next major update of the jQuery core in jQuery V1.5, which means it will become a native feature that doesn't require additional script files to be loaded. Imagine this – an open source contribution from Microsoft that has been accepted into a major open source project for a core feature improvement. Microsoft has come a long way indeed!

What the Microsoft Involvement with jQuery means to you

For Microsoft, jQuery support is a strategic decision that affects their direction in client side development, but nothing stopped you from using jQuery in your applications prior to Microsoft's official backing, and in fact a large chunk of developers did so readily prior to Microsoft's announcement. Official support from Microsoft brings a few benefits to developers however. jQuery support in Visual Studio 2010 means built-in support for jQuery IntelliSense, automatically added jQuery scripts in many project types and a common base for client side functionality that actually uses what most developers are already using. If you have already been using jQuery and were worried about straying from the Microsoft line and their internal Microsoft Ajax Library – worry no more. With official support and the change in direction towards jQuery, Microsoft is now following along with what most in the ASP.NET community had already been doing by using jQuery, which is likely the reason for Microsoft's shift in direction in the first place. ASP.NET AJAX and the Microsoft AJAX Library weren't bad technology – there was tons of useful functionality buried in these libraries. However, these libraries never got off the ground, mainly because early incarnations were squarely aimed at control/component developers rather than application developers. For all the functionality that these controls provided for control developers, they lacked the useful and easily usable application developer functionality that matters in day to day client side development. The result was that even though Microsoft shipped support for these tools in the box (in .NET 3.5 and 4.0), other than the internal support in ASP.NET for things like the UpdatePanel and the ASP.NET AJAX Control Toolkit, as well as some third party vendors, the Microsoft client libraries were largely ignored by the developer community, opening the door for other client side solutions. Microsoft seems to be acknowledging developer choice in this case: many more developers were going down the jQuery path rather than using the Microsoft built libraries, and there seems to be little sense in continuing development of a technology that largely goes unused by the majority of developers. Kudos to Microsoft for recognizing this and gracefully changing directions. Note that even though there will be no further development in the Microsoft client libraries, they will continue to be supported, so if you're using them in your applications there's no reason to start running for the exit in a panic and start re-writing everything with jQuery. Although that might be a reasonable choice in some cases, jQuery and the Microsoft libraries work well side by side, so you can leave existing solutions untouched even as you enhance them with jQuery.
The Microsoft jQuery Plug-ins – Solid Core Features

One of the most interesting developments in Microsoft's embracing of jQuery is that Microsoft has started contributing to jQuery via the standard mechanism established for jQuery developers: by submitting plug-ins. Microsoft took some of the nicest new features of the unpublished Microsoft Ajax Client Library, re-wrote these components for jQuery and then submitted them as plug-ins to the jQuery plug-in repository. Accepted plug-ins get taken over by the jQuery team, and that's exactly what happened with the three plug-ins submitted by Microsoft, with the templating plug-in even getting slated to be published as part of the jQuery core in the next major release (1.5). The following plug-ins are provided by Microsoft:

- jQuery Templates – a client side template rendering engine
- jQuery Data Link – a client side databinder that can synchronize changes without code
- jQuery Globalization – provides formatting and conversion features for dates and numbers

The first two are ports of functionality that was slated for the Microsoft Ajax Library, while the globalization plug-in provides functionality that was already found in the original ASP.NET AJAX library. To me all three plug-ins address a pressing need in client side applications and provide functionality I've previously used in other incarnations, but with more complete implementations. Let's take a close look at these plug-ins.

jQuery Templates
http://api.jquery.com/category/plugins/templates/

Client side templating is a key component for building rich JavaScript applications in the browser. Templating on the client lets you avoid manually creating markup by creating DOM nodes and injecting them individually into the document via code. Rather, you can create markup templates – similar to the way you create classic ASP server markup – and merge data into these templates to render HTML, which you can then inject into the document or replace existing content with. Output from templates is rendered as a jQuery matched set and can then be easily inserted into the document as needed. Templating is key to minimizing client side code and reducing repeated code for rendering logic. Instead, a single template can be used in many places for updating and adding content to existing pages. Further, if you build pure AJAX interfaces that rely entirely on client rendering of the initial page content, templates allow you to use a single markup template to handle all rendering of each specific HTML section/element. I've used a number of different client rendering template engines with jQuery in the past, including jTemplates (a PHP style templating engine) and a modified version of John Resig's MicroTemplating engine which I built into my own set of libraries because it's such a commonly used feature in my client side applications. jQuery Templates adds a much richer templating model that allows for sub-templates and access to the data items. Like John Resig's original Micro Template engine, the core basics of the templating engine create JavaScript code, which means that templates can include JavaScript code. To give you a basic idea of how templates work, imagine I have an application that downloads a set of stock quotes based on a symbol list then displays them in the document.
To do this you can create an 'item' template that describes how each of the quotes is rendered as a template inside of the document:

<script id="stockTemplate" type="text/x-jquery-tmpl">
    <div id="divStockQuote" class="errordisplay" style="width: 500px;">
        <div class="label">Company:</div><div><b>${Company}(${Symbol})</b></div>
        <div class="label">Last Price:</div><div>${LastPrice}</div>
        <div class="label">Net Change:</div><div>
            {{if NetChange > 0}}
                <b style="color:green" >${NetChange}</b>
            {{else}}
                <b style="color:red" >${NetChange}</b>
            {{/if}}
        </div>
        <div class="label">Last Update:</div><div>${LastQuoteTimeString}</div>
    </div>
</script>

The 'template' is little more than HTML with some markup expressions inside of it that define the template language. Notice the embedded ${} expressions, which reference data from the quote objects returned from an AJAX call on the server. You can embed any JavaScript or value expression in these template expressions. There are also a number of structural commands like {{if}} and {{each}} that provide for rudimentary logic inside of your templates, as well as commands ({{tmpl}} and {{wrap}}) for nesting templates. You can find more about the full set of markup expressions available in the documentation. To load up this data you can use code like the following:

<script type="text/javascript">
    //var Proxy = new ServiceProxy("../PageMethods/PageMethodsService.asmx/");
    $(document).ready(function () {
        $("#btnGetQuotes").click(GetQuotes);
    });

    function GetQuotes() {
        var symbols = $("#txtSymbols").val().split(",");
        $.ajax({
            url: "../PageMethods/PageMethodsService.asmx/GetStockQuotes",
            data: JSON.stringify({ symbols: symbols }), // parameter map
            type: "POST", // data has to be POSTed
            contentType: "application/json",
            timeout: 10000,
            dataType: "json",
            success: function (result) {
                var quotes = result.d;
                var jEl = $("#stockTemplate").tmpl(quotes);
                $("#quoteDisplay").empty().append(jEl);
            },
            error: function (xhr, status) {
                alert(status + "\r\n" + xhr.responseText);
            }
        });
    };
</script>

In this case an ASMX AJAX service is called to retrieve the stock quotes. The service returns an array of quote objects. The result is returned as an object with the .d property (in Microsoft service style) that holds the actual array of quotes. The template is applied with:

var jEl = $("#stockTemplate").tmpl(quotes);

which selects the template script tag and uses the .tmpl() function to apply the data to it. The result is a jQuery matched set of elements that can then be appended to the quote display element in the page. The template is merged against an array in this example. When the result is an array, the template is automatically applied to each array item. If you pass a single data item – like say a stock quote – the template works exactly the same way but is applied only once. Templates also have access to a $data item, which provides the current data item and information about the template that is currently executing. This makes it possible to keep context within the template itself and also to pass context from a parent template to a child template, which is very powerful. Templates can be evaluated by using the template selector and calling the .tmpl() function on the jQuery matched set as shown above, or you can use the static $.tmpl() function to provide a template as a string. This allows you to dynamically create templates in code or – more likely – to load templates from the server via AJAX calls.
In short, there are options. The above shows off some of the basics, but there's much more functionality available in the template engine. Check the documentation link for more information and links to additional examples. The plug-in download also comes with a number of examples that demonstrate functionality. jQuery Templates will become a native component in jQuery Core 1.5, so it's definitely worthwhile checking out the engine today and getting familiar with this interface. As much as I'm stoked about templating becoming part of the jQuery core, because it's such an integral part of many applications, there are also a couple of shortcomings in the current incarnation:

Lack of Error Handling
Currently if you embed an expression that is invalid, it's simply not rendered. There's no error rendered into the template, nor do the various template functions throw errors, which leaves finding bugs as a runtime exercise. I would like some mechanism – optional if possible – to be able to get error info on what is failing in a template when it's rendered.

No String Output
Templates are always rendered into a jQuery matched set, and there's no way that I can see to directly render to a string. String output can be useful for debugging as well as opening up templating for creating non-HTML string output.

Limited JavaScript Access
Unlike John Resig's original MicroTemplating Engine, which was entirely based on JavaScript code generation, these templates are limited to a few structured commands that can 'execute'. There's no code execution inside of script code, which means you're limited to calling expressions available in global objects or the data item passed in. This may or may not be a big deal depending on the complexity of your template logic.

Error handling has been discussed quite a bit, and it's likely there will be some solution to that particular issue by the time jQuery Templates ships. The others are relatively minor issues, but something to think about anyway.

jQuery Data Link
http://api.jquery.com/category/plugins/data-link/

jQuery Data Link provides the ability to do two-way data binding between input controls and an underlying object's properties. The typical scenario is linking a textbox to a property of an object, having the object updated when the text in the textbox changes, and having the textbox updated when the value in the object or the entire object changes. The plug-in also supports converter functions that can be applied to provide the conversion logic from string to some other value, typically necessary for mapping something like textbox string input to, say, a number property, and potentially applying additional formatting and calculations. In theory this sounds great; in reality, however, this plug-in has some serious usability issues. Using the plug-in you can do things like the following to bind data:

person = { firstName: "rick", lastName: "strahl"};

$(document).ready( function() {
    // provide for two-way linking of inputs
    $("form").link(person);

    // bind to non-input elements explicitly
    $("#objFirst").link(person, {
        firstName: {
            name: "objFirst",
            convertBack: function (value, source, target) {
                $(target).text(value);
            }
        }
    });
    $("#objLast").link(person, {
        lastName: {
            name: "objLast",
            convertBack: function (value, source, target) {
                $(target).text(value);
            }
        }
    });
});

This code hooks up two-way linking between a couple of textboxes on the page and the person object.
The first line in the .ready() handler maps the form's input fields to object properties with the same names. Note that .link() does NOT bind values into the textboxes when you call .link() – changes are mapped only when values change and you move out of the field. Strike one. The two following commands allow manual binding of values to specific DOM elements, which is effectively a one-way bind. You specify the object and then an explicit mapping where name is an ID in the document. The converter is required to explicitly assign the value to the element. Strike two. You can also detect changes to the underlying object and cause updates to the bound input elements. Unfortunately, the syntax to do this is not very natural, as you have to rely on the jQuery data object. Updating an object's property so you get change notification looks like this:

function updateFirstName() {
    $(person).data("firstName", person.firstName + " (code updated)");
}

This works fine in causing any linked fields to be updated. In the bindings above, both the firstName input field and the objFirst DOM element get updated. But the syntax requires you to use a jQuery .data() call for each property change to ensure that the changes are tracked properly. Really? Sure, you're binding through multiple layers of abstraction now, but how is that better than just manually assigning values? The code savings (if any) are going to be minimal.

As much as I would like to have a WPF/Silverlight/Observable-like binding mechanism in client script, this plug-in doesn't help much towards that goal in its current incarnation. While you can bind values, the 'binder' is too limited to be really useful. If initial values can't be assigned from the mappings, you're going to end up duplicating work loading the data using some other mechanism. There's no easy way to re-bind data with a different object altogether, since updates trigger only through the .data members. Finally, any non-input elements have to be bound via code that's fairly verbose and frankly may be more voluminous than what you might write by hand for manual binding and unbinding. Two way binding can be very useful, but it has to be easy and, most importantly, natural. If it's more work to hook up a binding than writing a couple of lines to do binding/unbinding, this sort of thing helps very little in most scenarios. In talking to some of the developers, the feature set for Data Link is not complete and they are still soliciting input for features and functionality. If you have ideas on how you want this feature to be more useful, get involved and post your recommendations. As it stands, it looks to me like this component needs a lot of love to become useful. For this component to really provide value, bindings need to be able to be refreshed easily and work at the object level, not just the property level. It seems to me we would be much better served by a model binder object that can perform these binding/unbinding tasks in bulk rather than a tool where each link has to be mapped first. I also find the choice of creating a jQuery plug-in questionable – it seems a standalone object, albeit one that relies on the jQuery library, would provide a more intuitive interface than the current forcing of options onto a plug-in style interface. Out of the three Microsoft created components this is by far the least useful and least polished implementation at this point.
jQuery Globalization
http://github.com/jquery/jquery-global

Globalization in JavaScript applications often gets short shrift, and part of the reason for this is that natively in JavaScript there's little support for formatting and parsing of numbers and dates. There are a number of JavaScript libraries out there that provide some support for globalization, but most are limited to a particular portion of globalization. As .NET developers we're fairly spoiled by the richness of the APIs provided in the framework, and when dealing with client development one really notices the lack of these features. Even if you don't need to localize your application, the globalization plug-in helps with a basic task for non-localized applications: dealing with formatting and parsing of dates and time values. Dates in particular are problematic in JavaScript, as there are no formatters whatsoever except the .toString() method, which outputs a verbose and next to useless long string. With the globalization plug-in you get a good chunk of the formatting and parsing functionality that the .NET framework provides on the server. You can write code like the following, for example, to format numbers and dates:

var date = new Date();
var output = $.format(date, "MMM. dd, yy") + "\r\n" +
             $.format(date, "d") + "\r\n" +          // 10/25/2010
             $.format(1222.32213, "N2") + "\r\n" +
             $.format(1222.33, "c") + "\r\n";
alert(output);

This becomes even more useful if you combine it with templates, which can also include any JavaScript expressions. Assuming the globalization plug-in is loaded, you can create template expressions that use the $.format function. Here's the template I used earlier for the stock quote again, with a couple of formats applied:

<script id="stockTemplate" type="text/x-jquery-tmpl">
    <div id="divStockQuote" class="errordisplay" style="width: 500px;">
        <div class="label">Company:</div><div><b>${Company}(${Symbol})</b></div>
        <div class="label">Last Price:</div>
        <div>${$.format(LastPrice,"N2")}</div>
        <div class="label">Net Change:</div><div>
            {{if NetChange > 0}}
                <b style="color:green" >${NetChange}</b>
            {{else}}
                <b style="color:red" >${NetChange}</b>
            {{/if}}
        </div>
        <div class="label">Last Update:</div>
        <div>${$.format(LastQuoteTime,"MMM dd, yyyy")}</div>
    </div>
</script>

There are also parsing methods that can parse dates and numbers from strings easily:

alert($.parseDate("25.10.2010"));
alert($.parseInt("12.222")); // de-DE uses . for thousands separators

As you can see, culture specific options are taken into account when parsing. The globalization plug-in provides rich support for a variety of locales:

- Get a list of all available cultures
- Query cultures for culture items (like currency symbol, separators etc.)
- Localized string names for all calendar related items (days of week, months)
- Generated off of .NET's supported locales

In short, you get much of the same functionality that you already might be using in .NET on the server side. The plug-in includes a huge number of locales and a Globalization.all.min.js file that contains the text defaults for each of these locales, as well as small locale specific script files that define each of the locale specific settings. It's highly recommended that you NOT use the huge globalization file that includes all locales, but rather add script references to only those languages you explicitly care about. Overall this plug-in is a welcome helper.
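
For the de-DE parsing example above to behave as described, the current culture has to be switched first. A sketch of how that looked in the plug-in builds of this era; the culture script path and the $.preferCulture call are from memory, so verify both against the version you download:

<script src="Scripts/jQuery.glob.de-DE.js" type="text/javascript"></script>
<script type="text/javascript">
    // Assumed API: select the active culture before parsing/formatting.
    $.preferCulture("de-DE");

    alert($.parseInt("12.222"));       // -> 12222 ("." is the German thousands separator)
    alert($.format(1222.32213, "N2")); // -> "1.222,32" in German formatting
</script>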
Even if you use it with a single locale (like en-US) and do no other localization, you'll gain solid support for number and date formatting, which is a vital feature of many applications.

Changes for Microsoft

It's good to see Microsoft coming out of its shell and away from the 'not-built-here' mentality that has been so pervasive in the past. It's especially good to see it applied to jQuery – a technology that has stood in drastic contrast to Microsoft's own internal efforts in terms of design, usage model and… popularity. It's great to see that Microsoft is paying attention to what customers prefer to use and supporting the customer sentiment – even if it meant drastically changing course of policy and moving into a more open and sharing environment in the process. The additional jQuery support that has been introduced in the last two years certainly has made lives easier for many developers on the ASP.NET platform. It's also nice to see Microsoft submitting proposals through the standard jQuery plug-in process and getting accepted for various very useful projects. Certainly the jQuery Templates plug-in is going to be very useful to many, especially since it will be baked into the jQuery core in jQuery 1.5. I hope we see more of this type of involvement from Microsoft in the future. Kudos!

© Rick Strahl, West Wind Technologies, 2005-2010
Posted in jQuery, ASP.NET

    Read the article

  • ASP.Net MVC 2 Auto Complete Textbox With Custom View Model Attribute & EditorTemplate

    - by SeanMcAlinden
    In this post I'm going to show how to create a generic, AJAX driven Auto Complete text box using the new MVC 2 Templates and the jQuery UI library. The template will be automatically displayed when a property is decorated with a custom attribute within the view model.

The first thing to do is visit my previous blog post to put the custom model metadata provider in place; this is necessary when using custom attributes on the view model. http://weblogs.asp.net/seanmcalinden/archive/2010/06/11/custom-asp-net-mvc-2-modelmetadataprovider-for-using-custom-view-model-attributes.aspx

Once this is in place, make sure you visit the jQuery UI site and download the latest stable release – in this example I'm using version 1.8.2. You can download it here. Add the jQuery scripts and css theme to your project and add references to them in your master page. It should look something like the following:

Site.Master

<head runat="server">
    <title><asp:ContentPlaceHolder ID="TitleContent" runat="server" /></title>
    <link href="../../Content/Site.css" rel="stylesheet" type="text/css" />
    <link href="../../css/ui-lightness/jquery-ui-1.8.2.custom.css" rel="stylesheet" type="text/css" />
    <script src="../../Scripts/jquery-1.4.2.min.js" type="text/javascript"></script>
    <script src="../../Scripts/jquery-ui-1.8.2.custom.min.js" type="text/javascript"></script>
</head>

Once this is in place we can get started.

Creating the AutoComplete Custom Attribute

The auto complete attribute will derive from the abstract MetadataAttribute created in my previous post. It will look like the following:

AutoCompleteAttribute

using System.Collections.Generic;
using System.Web.Mvc;
using System.Web.Routing;

namespace Mvc2Templates.Attributes
{
    public class AutoCompleteAttribute : MetadataAttribute
    {
        public RouteValueDictionary RouteValueDictionary;

        public AutoCompleteAttribute(string controller, string action, string parameterName)
        {
            this.RouteValueDictionary = new RouteValueDictionary();
            this.RouteValueDictionary.Add("Controller", controller);
            this.RouteValueDictionary.Add("Action", action);
            this.RouteValueDictionary.Add(parameterName, string.Empty);
        }

        public override void Process(ModelMetadata modelMetaData)
        {
            modelMetaData.AdditionalValues.Add("AutoCompleteUrlData", this.RouteValueDictionary);
            modelMetaData.TemplateHint = "AutoComplete";
        }
    }
}

As you can see, the constructor takes in strings for the controller, action and parameter name. The parameter name will be used for passing the search text within the auto complete text box. The constructor then creates a new RouteValueDictionary, which we will use later to construct the url for getting the auto complete results via AJAX. The main interesting method is the override called Process. Within the Process method, the route value dictionary is added to the modelMetaData AdditionalValues collection. The TemplateHint is also set to AutoComplete; this means that when the view model is parsed for display, the MVC 2 framework will look for a view user control template called AutoComplete, and if it finds one, it uses that template to display the property.

The View Model

To show you how the attribute will look, this is the view model I have used in my example, which can be downloaded at the end of this post.
View Model using System.ComponentModel; using Mvc2Templates.Attributes; namespace Mvc2Templates.Models {     public class TemplateDemoViewModel     {         [AutoComplete("Home", "AutoCompleteResult", "searchText")]         [DisplayName("European Country Search")]         public string SearchText { get; set; }     } } As you can see, the auto complete attribute is called with the controller name, action name and the name of the action parameter that the search text will be passed into. The AutoComplete Template Now all of this is in place, it’s time to create the AutoComplete template. Create a ViewUserControl called AutoComplete.ascx at the following location within your application – Views/Shared/EditorTemplates/AutoComplete.ascx Add the following code: AutoComplete.ascx <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl" %> <%     var propertyName = ViewData.ModelMetadata.PropertyName;     var propertyValue = ViewData.ModelMetadata.Model;     var id = Guid.NewGuid().ToString();     RouteValueDictionary urlData =         (RouteValueDictionary)ViewData.ModelMetadata.AdditionalValues.Where(x => x.Key == "AutoCompleteUrlData").Single().Value;     var url = Mvc2Templates.Views.Shared.Helpers.RouteHelper.GetUrl(this.ViewContext.RequestContext, urlData); %> <input type="text" name="<%= propertyName %>" value="<%= propertyValue %>" id="<%= id %>" class="autoComplete" /> <script type="text/javascript">     $(function () {         $("#<%= id %>").autocomplete({             source: function (request, response) {                 $.ajax({                     url: "<%= url %>" + request.term,                     dataType: "json",                     success: function (data) {                         response(data);                     }                 });             },             minLength: 2         });     }); </script> There is a lot going on in here but when you break it down it’s quite simple. Firstly, the property name and property value are retrieved through the model meta data. These are required to ensure that the text box input has the correct name and data to allow for model binding. If you look at line 14 you can see them being used in the text box input creation. The interesting bit is on line 8 and 9, this is the code to retrieve the route value dictionary we added into the model metada via the custom attribute. Line 11 is used to create the url, in order to do this I created a quick helper class which looks like the code below titled RouteHelper. The last bit of script is the code to initialise the jQuery UI AutoComplete control with the correct url for calling back to our controller action. 
RouteHelper using System.Web.Mvc; using System.Web.Routing; namespace Mvc2Templates.Views.Shared.Helpers {     public static class RouteHelper     {         const string Controller = "Controller";         const string Action = "Action";         const string ReplaceFormatString = "REPLACE{0}";         public static string GetUrl(RequestContext requestContext, RouteValueDictionary routeValueDictionary)         {             RouteValueDictionary urlData = new RouteValueDictionary();             UrlHelper urlHelper = new UrlHelper(requestContext);                          int i = 0;             foreach(var item in routeValueDictionary)             {                 if (item.Value == string.Empty)                 {                     i++;                     urlData.Add(item.Key, string.Format(ReplaceFormatString, i.ToString()));                 }                 else                 {                     urlData.Add(item.Key, item.Value);                 }             }             var url = urlHelper.RouteUrl(urlData);             for (int index = 1; index <= i; index++)             {                 url = url.Replace(string.Format(ReplaceFormatString, index.ToString()), string.Empty);             }             return url;         }     } } See it in action All you need to do to see it in action is pass a view model from your controller with the new AutoComplete attribute attached and call the following within your view: <%= this.Html.EditorForModel() %> NOTE: The jQuery UI auto complete control expects a JSON string returned from your controller action method… as you can’t use the JsonResult to perform GET requests, use a normal action result, convert your data into json and return it as a string via a ContentResult. If you download the solution it will be very clear how to handle the controller and action for this demo. The full source code for this post can be downloaded here. It has been developed using MVC 2 and Visual Studio 2010. As always, I hope this has been interesting/useful. Kind Regards, Sean McAlinden.
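
For the NOTE above, here is a minimal sketch of the kind of action the [AutoComplete("Home", "AutoCompleteResult", "searchText")] attribute shown earlier would point at; the country data and the JavaScriptSerializer usage are illustrative choices, not from the original post (requires using System.Linq; and System.Web.Script.Serialization from System.Web.Extensions):

public ContentResult AutoCompleteResult(string searchText)
{
    // Illustrative data source - the demo searches European countries.
    var countries = new[] { "Austria", "Belgium", "Denmark", "France", "Germany", "Spain" };

    var matches = countries
        .Where(c => c.StartsWith(searchText ?? string.Empty, StringComparison.OrdinalIgnoreCase))
        .ToArray();

    // The jQuery UI autocomplete widget expects a JSON array back from its GET
    // request, so serialize manually and return it as plain content.
    string json = new JavaScriptSerializer().Serialize(matches);
    return Content(json, "application/json");
}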

    Read the article

  • Using JSON.NET for dynamic JSON parsing

    - by Rick Strahl
    With the release of ASP.NET Web API as part of .NET 4.5 and MVC 4.0, JSON.NET has effectively pushed out the .NET native serializers to become the default serializer for Web API. JSON.NET is vastly more flexible than the built in DataContractJsonSerializer or the older JavaScript serializer. The DataContractSerializer in particular has been very problematic in the past because it can't deal with untyped objects for serialization - like values of type object, or anonymous types, which are quite common these days. The JavaScript Serializer that came before it actually does support non-typed objects for serialization, but it can't do anything with untyped data coming in from JavaScript, and its overall model of extensibility was pretty limited (the JavaScript Serializer is what MVC uses for JSON responses). JSON.NET provides a robust JSON serializer that has both high level and low level components, supports binary JSON, JSON contracts, XML to JSON conversion, LINQ to JSON and many, many more features than either of the built in serializers. ASP.NET Web API now uses JSON.NET as its default serializer, and it is pulled in as a NuGet dependency into Web API projects, which is great.

Dynamic JSON Parsing
One of the features that I think is getting ever more important is the ability to serialize and deserialize arbitrary JSON content dynamically - that is, without mapping the captured JSON directly into a .NET type as the DataContractSerializer or the JavaScript Serializers do. Sometimes it isn't possible to map types due to the differences in languages (think collections, dictionaries etc.), and other times you simply don't have the structures in place, or don't want to create them, just to import the data. If this topic sounds familiar - you're right! I wrote about dynamic JSON parsing a few months back, before JSON.NET was added to Web API and when Web API and the System.Net HttpClient libraries included the System.Json classes like JsonObject and JsonArray. With the inclusion of JSON.NET in Web API these classes are now obsolete and didn't ship with Web API or the client libraries. I re-linked my original post to this one. In this post I'll discuss JToken, JObject and JArray, which are the dynamic JSON objects that make it very easy to create and retrieve JSON content on the fly without underlying types.

Why Dynamic JSON?
So, why dynamic JSON parsing rather than strongly typed parsing? Since applications are interacting more and more with third party services, it becomes ever more important to have easy access to those services with easy JSON parsing. Sometimes it just makes a lot of sense to pull just a small amount of data out of a large JSON document received from a service, because the third party service isn't directly related to your application's logic most of the time - and it makes little sense to map the entire service structure in your application. For example, recently I worked with the Google Maps Places API to return information about businesses close to my (or rather the app's) location. The Google API returns a ton of information that my application had no interest in - all I needed was a few values out of the data. Dynamic JSON parsing makes it possible to map this data without having to map the entire API to a C# data structure. Instead I could pull out the three or four values I needed from the API and directly store them on the business entities that needed to receive the data - no need to map the entire Maps API structure.
Getting JSON.NET The easiest way to use JSON.NET is to grab it via NuGet and add it as a reference to your project. You can add it to your project with: PM> Install-Package Newtonsoft.Json From the Package Manager Console or by using Manage NuGet Packages in your project References. As mentioned if you're using ASP.NET Web API or MVC 4 JSON.NET will be automatically added to your project. Alternately you can also go to the CodePlex site and download the latest version including source code: http://json.codeplex.com/ Creating JSON on the fly with JObject and JArray Let's start with creating some JSON on the fly. It's super easy to create a dynamic object structure with any of the JToken derived JSON.NET objects. The most common JToken derived classes you are likely to use are JObject and JArray. JToken implements IDynamicMetaProvider and so uses the dynamic  keyword extensively to make it intuitive to create object structures and turn them into JSON via dynamic object syntax. Here's an example of creating a music album structure with child songs using JObject for the base object and songs and JArray for the actual collection of songs:[TestMethod] public void JObjectOutputTest() { // strong typed instance var jsonObject = new JObject(); // you can explicitly add values here using class interface jsonObject.Add("Entered", DateTime.Now); // or cast to dynamic to dynamically add/read properties dynamic album = jsonObject; album.AlbumName = "Dirty Deeds Done Dirt Cheap"; album.Artist = "AC/DC"; album.YearReleased = 1976; album.Songs = new JArray() as dynamic; dynamic song = new JObject(); song.SongName = "Dirty Deeds Done Dirt Cheap"; song.SongLength = "4:11"; album.Songs.Add(song); song = new JObject(); song.SongName = "Love at First Feel"; song.SongLength = "3:10"; album.Songs.Add(song); Console.WriteLine(album.ToString()); } This produces a complete JSON structure: { "Entered": "2012-08-18T13:26:37.7137482-10:00", "AlbumName": "Dirty Deeds Done Dirt Cheap", "Artist": "AC/DC", "YearReleased": 1976, "Songs": [ { "SongName": "Dirty Deeds Done Dirt Cheap", "SongLength": "4:11" }, { "SongName": "Love at First Feel", "SongLength": "3:10" } ] } Notice that JSON.NET does a nice job formatting the JSON, so it's easy to read and paste into blog posts :-). JSON.NET includes a bunch of configuration options that control how JSON is generated. Typically the defaults are just fine, but you can override with the JsonSettings object for most operations. The important thing about this code is that there's no explicit type used for holding the values to serialize to JSON. Rather the JSON.NET objects are the containers that receive the data as I build up my JSON structure dynamically, simply by adding properties. This means this code can be entirely driven at runtime without compile time restraints of structure for the JSON output. Here I use JObject to create a album 'object' and immediately cast it to dynamic. JObject() is kind of similar in behavior to ExpandoObject in that it allows you to add properties by simply assigning to them. Internally, JObject values are stored in pseudo collections of key value pairs that are exposed as properties through the IDynamicMetaObject interface exposed in JSON.NET's JToken base class. For objects the syntax is very clean - you add simple typed values as properties. For objects and arrays you have to explicitly create new JObject or JArray, cast them to dynamic and then add properties and items to them. 
Always remember, though, that these values are dynamic - which means no IntelliSense and no compiler type checking. It's up to you to ensure that the names and values you create are accessed consistently and without typos in your code. Note that you can also access the JObject instance directly (not as dynamic) and get access to the underlying JObject type. This means you can assign properties by string, which can be useful for fully data driven JSON generation from other structures. Below you can see both styles of access next to each other:

// strong type instance
var jsonObject = new JObject();

// you can explicitly add values here
jsonObject.Add("Entered", DateTime.Now);

// expando style instance - you can just 'use' properties
dynamic album = jsonObject;
album.AlbumName = "Dirty Deeds Done Dirt Cheap";

JContainer (the base class for JObject and JArray) is a collection, so you can also iterate over the properties at runtime easily:

foreach (var item in jsonObject)
{
    Console.WriteLine(item.Key + " " + item.Value.ToString());
}

The functionality of the JSON objects is very similar to .NET's ExpandoObject, and if you have used that before, you're already familiar with how the dynamic interfaces to the JSON objects work.

Importing JSON with JObject.Parse() and JArray.Parse()
The JValue structure supports importing JSON via the Parse() and Load() methods, which can read JSON data from a string or various streams respectively. Essentially JValue includes the core JSON parsing to turn a JSON string into a collection of JToken objects that can then be referenced using familiar dynamic object syntax. Here's a simple example:

public void JValueParsingTest()
{
    var jsonString = @"{""Name"":""Rick"",""Company"":""West Wind"",
                        ""Entered"":""2012-03-16T00:03:33.245-10:00""}";

    dynamic json = JValue.Parse(jsonString);

    // values require casting
    string name = json.Name;
    string company = json.Company;
    DateTime entered = json.Entered;

    Assert.AreEqual(name, "Rick");
    Assert.AreEqual(company, "West Wind");
}

The JSON string represents an object with three properties, which is parsed into a JObject class and cast to dynamic. Once cast to dynamic I can then go ahead and access the object using familiar object syntax. Note that the actual values - json.Name, json.Company, json.Entered - are actually of type JToken, and I have to cast them to their appropriate types first before I can do type comparisons, as in the Asserts at the end of the test method. This is required because of the way that dynamic types work, which can't determine the type based on the method signature of the Assert.AreEqual(object,object) method. I have to either assign the dynamic value to a variable as I did above, or explicitly cast ((string) json.Name) in the actual method call. The JSON structure can be much more complex than this simple example.
Here's another example of an array of albums serialized to JSON and then parsed through with JArray.Parse():

[TestMethod]
public void JsonArrayParsingTest()
{
    var jsonString = @"[
    {
        ""Id"": ""b3ec4e5c"",
        ""AlbumName"": ""Dirty Deeds Done Dirt Cheap"",
        ""Artist"": ""AC/DC"",
        ""YearReleased"": 1976,
        ""Entered"": ""2012-03-16T00:13:12.2810521-10:00"",
        ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/61kTaH-uZBL._AA115_.jpg"",
        ""AmazonUrl"": ""http://www.amazon.com/gp/product/…ASIN=B00008BXJ4"",
        ""Songs"": [
            { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Dirty Deeds Done Dirt Cheap"", ""SongLength"": ""4:11"" },
            { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Love at First Feel"", ""SongLength"": ""3:10"" },
            { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Big Balls"", ""SongLength"": ""2:38"" }
        ]
    },
    {
        ""Id"": ""7b919432"",
        ""AlbumName"": ""End of the Silence"",
        ""Artist"": ""Henry Rollins Band"",
        ""YearReleased"": 1992,
        ""Entered"": ""2012-03-16T00:13:12.2800521-10:00"",
        ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/51FO3rb1tuL._SL160_AA160_.jpg"",
        ""AmazonUrl"": ""http://www.amazon.com/End-Silence-Rollins-Band/dp/B0000040OX/ref=sr_1_5?ie=UTF8&qid=1302232195&sr=8-5"",
        ""Songs"": [
            { ""AlbumId"": ""7b919432"", ""SongName"": ""Low Self Opinion"", ""SongLength"": ""5:24"" },
            { ""AlbumId"": ""7b919432"", ""SongName"": ""Grip"", ""SongLength"": ""4:51"" }
        ]
    }
    ]";

    JArray albums = JArray.Parse(jsonString);

    foreach (dynamic album in albums)
    {
        Console.WriteLine(album.AlbumName + " (" + album.YearReleased.ToString() + ")");
        foreach (dynamic song in album.Songs)
        {
            Console.WriteLine("\t" + song.SongName);
        }
    }

    Console.WriteLine(albums[0].AlbumName);
    Console.WriteLine(albums[0].Songs[1].SongName);
}

JObject and JArray in ASP.NET Web API

Of course these types also work in ASP.NET Web API controller methods. If you want, you can accept parameters using these objects or return them back to the client. The following contrived example receives dynamic JSON input, and then creates a new dynamic JSON object and returns it based on data from the first:

[HttpPost]
public JObject PostAlbumJObject(JObject jAlbum)
{
    // dynamic input from inbound JSON
    dynamic album = jAlbum;

    // create a new JSON object to write out
    dynamic newAlbum = new JObject();

    // Create properties on the new instance
    // with values from the first
    newAlbum.AlbumName = album.AlbumName + " New";
    newAlbum.NewProperty = "something new";
    newAlbum.Songs = new JArray();

    foreach (dynamic song in album.Songs)
    {
        song.SongName = song.SongName + " New";
        newAlbum.Songs.Add(song);
    }

    return newAlbum;
}

The raw POST request to the server looks something like this:

POST http://localhost/aspnetwebapi/samples/PostAlbumJObject HTTP/1.1
User-Agent: Fiddler
Content-type: application/json
Host: localhost
Content-Length: 88

{AlbumName: "Dirty Deeds",Songs:[ { SongName: "Problem Child"},{ SongName: "Squealer"}]}

and the output that comes back looks like this:

{
  "AlbumName": "Dirty Deeds New",
  "NewProperty": "something new",
  "Songs": [
    {
      "SongName": "Problem Child New"
    },
    {
      "SongName": "Squealer New"
    }
  ]
}

The original values are echoed back with something extra appended to demonstrate that we're working with a new object.
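To round this out, here's a minimal client-side sketch (mine, not from the original post) that posts raw JSON to an action like the one above using System.Net.Http's HttpClient; the URL matches the Fiddler example and would vary per application:

using System;
using System.Net.Http;
using System.Text;

class Client
{
    static void Main()
    {
        var client = new HttpClient();
        var json = @"{AlbumName: ""Dirty Deeds"", Songs:[{SongName: ""Problem Child""},{SongName: ""Squealer""}]}";

        // the application/json content type is what routes the body to the JObject parameter
        var content = new StringContent(json, Encoding.UTF8, "application/json");
        var response = client.PostAsync("http://localhost/aspnetwebapi/samples/PostAlbumJObject", content).Result;

        // prints the new JSON object echoed back by the controller
        Console.WriteLine(response.Content.ReadAsStringAsync().Result);
    }
}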
When you receive or return a JObject, JValue, JToken or JArray instance in a Web API method, Web API ignores normal content negotiation and assumes your content is going to be received and returned as JSON, so effectively the parameter and result types explicitly determine the input and output format, which is nice.

Dynamic to Strong Type Mapping

You can also map JObject and JArray instances to a strongly typed object, so you can mix dynamic and static typing in the same piece of code. Using the two-album jsonString shown earlier, the code below takes an array of albums, picks out a single album, and casts that album to a static Album instance.

[TestMethod]
public void JsonParseToStrongTypeTest()
{
    JArray albums = JArray.Parse(jsonString);

    // pick out one album
    JObject jalbum = albums[0] as JObject;

    // Copy to a static Album instance
    Album album = jalbum.ToObject<Album>();

    Assert.IsNotNull(album);
    Assert.AreEqual(album.AlbumName, jalbum.Value<string>("AlbumName"));
    Assert.IsTrue(album.Songs.Count > 0);
}

This is pretty damn useful for the scenario I mentioned earlier - you can read a large chunk of JSON and dynamically walk the property hierarchy down to the item you want to access, and then either access the specific item dynamically (as shown earlier) or map a part of the JSON to a strongly typed object. That's very powerful if you think about it - it leaves you in total control to decide what's dynamic and what's static.

Strongly typed JSON Parsing

With all this talk of dynamic, let's not forget that JSON.NET of course also does strongly typed serialization, which is drop dead easy. Here's a simple example of how to serialize and deserialize an object with JSON.NET:

[TestMethod]
public void StronglyTypedSerializationTest()
{
    // Demonstrate deserialization from a raw string
    var album = new Album()
    {
        AlbumName = "Dirty Deeds Done Dirt Cheap",
        Artist = "AC/DC",
        Entered = DateTime.Now,
        YearReleased = 1976,
        Songs = new List<Song>()
        {
            new Song() { SongName = "Dirty Deeds Done Dirt Cheap", SongLength = "4:11" },
            new Song() { SongName = "Love at First Feel", SongLength = "3:10" }
        }
    };

    // serialize to string
    string json2 = JsonConvert.SerializeObject(album, Formatting.Indented);
    Console.WriteLine(json2);

    // make sure we can deserialize back
    var album2 = JsonConvert.DeserializeObject<Album>(json2);

    Assert.IsNotNull(album2);
    Assert.IsTrue(album2.AlbumName == "Dirty Deeds Done Dirt Cheap");
    Assert.IsTrue(album2.Songs.Count == 2);
}

JsonConvert is a high-level static class that wraps lower-level functionality, but you can also use the JsonSerializer class, which allows you to serialize/parse to and from streams. It's a little more work, but gives you a bit more control. The functionality available is easy to discover with Intellisense, and that's good because there's not a lot in the way of documentation that's actually useful.

Summary

JSON.NET is a pretty complete JSON implementation with lots of different choices for JSON parsing, from dynamic parsing to static serialization, to complex querying of JSON objects using LINQ. It's good to see this open source library getting integrated into .NET, and pushing out the old and tired stock .NET parsers so that we finally have a bit more flexibility - and extensibility - in our JSON parsing. Good to go!
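As a final code note on the stream-based JsonSerializer approach mentioned above - a minimal sketch of mine, assuming the Album type from the test and a writable file path:

using System.IO;
using Newtonsoft.Json;

// serialize directly to a stream via JsonSerializer
var serializer = new JsonSerializer { Formatting = Formatting.Indented };

using (var writer = new StreamWriter("album.json"))
using (var jsonWriter = new JsonTextWriter(writer))
{
    serializer.Serialize(jsonWriter, album);
}

// and read it back from the stream
using (var reader = new StreamReader("album.json"))
using (var jsonReader = new JsonTextReader(reader))
{
    var album2 = serializer.Deserialize<Album>(jsonReader);
}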
Resources

Sample Test Project
http://json.codeplex.com/

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in .NET  Web Api  AJAX

    Read the article

  • Using an alternate JSON Serializer in ASP.NET Web API

    - by Rick Strahl
The new ASP.NET Web API that Microsoft released alongside MVC 4.0 Beta last week is a great framework for building REST and AJAX APIs. I've been working with it for quite a while now and I really like the way it works and the complete set of features it provides 'in the box'. It's about time that Microsoft gets a decent API for building generic HTTP endpoints into the framework.

DataContractJsonSerializer sucks

As nice as Web API's overall design is, one thing still sucks: the built-in JSON serialization uses the DataContractJsonSerializer, which is just too limiting for many scenarios. The biggest issues I have with it are:

- No support for untyped values (object, dynamic, anonymous types)
- MS AJAX style date formatting
- Ugly serialization formats for types like Dictionaries

To me the most serious issue is dealing with serialization of untyped objects. I have a number of applications with AJAX front ends that dynamically reformat data from business objects to fit a specific message format that certain UI components require. The most common scenario I have is IEnumerable query results from a database, with fields from the result set rearranged to fit the sometimes unconventional formats required by the UI components (like jqGrid, for example). Creating custom types to fit these messages seems like overkill, and projections using LINQ make this much easier to code up. Alas, DataContractJsonSerializer doesn't support it. Neither does DataContractSerializer for XML output, for that matter. What this means is that you can't do stuff like this in Web API out of the box:

public object GetAnonymousType()
{
    return new
    {
        name = "Rick",
        company = "West Wind",
        entered = DateTime.Now
    };
}

Basically, DataContractJsonSerializer will not let you return anything that doesn't have an explicit type. FWIW, the same is true for XmlSerializer, which also doesn't work with non-typed values for serialization. The example above is obviously contrived with a hardcoded object graph, but it's not uncommon to get dynamic values returned from queries that have anonymous types for their result projections.

Apparently there's a good possibility that Microsoft will ship Json.NET as part of the Web API RTM release. Scott Hanselman confirmed this as a footnote in his JSON Dates post a few days ago. I've heard several other people from Microsoft confirm that Json.NET will be included and be the default JSON serializer, but no details yet on what capacity it will show up in. Let's hope it ends up as the default in the box. Meanwhile this post will show you how you can use it today with the beta and get JSON that matches what you should see in the RTM version.

What about JsonValue?

To be fair, Web API DOES include the new JsonValue/JsonObject/JsonArray types that allow you to address some of these scenarios. JsonValue is a new type in the System.Json assembly that can be used to build up an object graph based on a dictionary. It's actually a really cool implementation of a dynamic type that allows you to create an object graph and spit it out to JSON without having to create a .NET type first. JsonValue can also receive a JSON string and parse it without having to actually load it into a .NET type (which is something that's been missing in the core framework). This is really useful if you get a JSON result from an arbitrary service and you don't want to explicitly create a mapping type for the data returned.
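For example - a minimal sketch of mine, assuming the dynamic JsonValue behavior shown in the next listing and a payload with name/company properties - parsing without a mapping type looks something like this:

using System;
using System.Json;

class JsonValueParseExample
{
    static void Main()
    {
        // parse arbitrary JSON without creating a .NET mapping type first
        var jsonString = @"{""name"":""Rick"",""company"":""West Wind""}";

        dynamic json = JsonValue.Parse(jsonString);

        // dynamic member access returns JsonValue instances;
        // cast to pull out the primitive values
        string name = (string)json.name;
        string company = (string)json.company;

        Console.WriteLine(name + " - " + company);
    }
}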
For serialization you can create an object structure on the fly and pass it back as part of a Web API action method like this:

public JsonValue GetJsonValue()
{
    dynamic json = new JsonObject();
    json.name = "Rick";
    json.company = "West Wind";
    json.entered = DateTime.Now;

    dynamic address = new JsonObject();
    address.street = "32 Kaiea";
    address.zip = "96779";
    json.address = address;

    dynamic phones = new JsonArray();
    json.phoneNumbers = phones;

    dynamic phone = new JsonObject();
    phone.type = "Home";
    phone.number = "808 123-1233";
    phones.Add(phone);

    phone = new JsonObject();
    phone.type = "Mobile";
    phone.number = "808 123-1234";
    phones.Add(phone);

    //var jsonString = json.ToString();

    return json;
}

which produces the following output (formatted here for easier reading):

{
    name: "Rick",
    company: "West Wind",
    entered: "2012-03-08T15:33:19.673-10:00",
    address: {
        street: "32 Kaiea",
        zip: "96779"
    },
    phoneNumbers: [
        { type: "Home", number: "808 123-1233" },
        { type: "Mobile", number: "808 123-1234" }
    ]
}

If you need to build a simple JSON type on the fly, these types work great. But if you have an existing type - or worse, a query result/list that's already formatted - JsonValue et al. become a pain to work with. As far as I can see, there's no way to just throw an object instance at JsonValue and have it convert into a JsonValue dictionary. It's a manual process.

Using alternate Serializers in Web API

So, currently the default serializer in Web API is DataContractJsonSerializer, and I don't like it. You may not either, but luckily you can swap the serializer fairly easily. If you'd rather use the JavaScriptSerializer built into System.Web.Extensions, or Json.NET today, it's not too difficult to create a custom MediaTypeFormatter that uses these serializers and can replace or partially replace the native serializer.
Here's a MediaTypeFormatter implementation using the ASP.NET JavaScriptSerializer:

using System;
using System.Net.Http.Formatting;
using System.Threading.Tasks;
using System.Web.Script.Serialization;
using System.Json;
using System.IO;

namespace Westwind.Web.WebApi
{
    public class JavaScriptSerializerFormatter : MediaTypeFormatter
    {
        public JavaScriptSerializerFormatter()
        {
            SupportedMediaTypes.Add(new System.Net.Http.Headers.MediaTypeHeaderValue("application/json"));
        }

        protected override bool CanWriteType(Type type)
        {
            // don't serialize JsonValue structures - use the default formatter for those
            if (type == typeof(JsonValue) || type == typeof(JsonObject) || type == typeof(JsonArray))
                return false;

            return true;
        }

        protected override bool CanReadType(Type type)
        {
            if (type == typeof(IKeyValueModel))
                return false;

            return true;
        }

        protected override System.Threading.Tasks.Task<object> OnReadFromStreamAsync(Type type, System.IO.Stream stream, System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext)
        {
            var task = Task<object>.Factory.StartNew(() =>
            {
                var ser = new JavaScriptSerializer();
                string json;

                using (var sr = new StreamReader(stream))
                {
                    json = sr.ReadToEnd();
                    sr.Close();
                }

                object val = ser.Deserialize(json, type);
                return val;
            });

            return task;
        }

        protected override System.Threading.Tasks.Task OnWriteToStreamAsync(Type type, object value, System.IO.Stream stream, System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext, System.Net.TransportContext transportContext)
        {
            var task = Task.Factory.StartNew(() =>
            {
                var ser = new JavaScriptSerializer();
                var json = ser.Serialize(value);

                byte[] buf = System.Text.Encoding.Default.GetBytes(json);
                stream.Write(buf, 0, buf.Length);
                stream.Flush();
            });

            return task;
        }
    }
}

Formatter implementation is pretty simple: you override 4 methods to tell Web API which types you can handle, and then handle the input or output streams to create/parse the JSON data. Note that when creating output you want to take care to still allow the JsonValue/JsonObject/JsonArray types to be handled by the default serializer so those objects serialize properly - if you let either JavaScriptSerializer or JSON.NET handle them, they'd try to render the dictionaries, which is very undesirable.
If you'd rather use Json.NET, here's the JSON.NET version of the formatter:

// this code requires a reference to JSON.NET in your project
#if true
using System;
using System.Net.Http.Formatting;
using System.Threading.Tasks;
using System.Web.Script.Serialization;
using System.Json;
using Newtonsoft.Json;
using System.IO;
using Newtonsoft.Json.Converters;

namespace Westwind.Web.WebApi
{
    public class JsonNetFormatter : MediaTypeFormatter
    {
        public JsonNetFormatter()
        {
            SupportedMediaTypes.Add(new System.Net.Http.Headers.MediaTypeHeaderValue("application/json"));
        }

        protected override bool CanWriteType(Type type)
        {
            // don't serialize JsonValue structures - use the default formatter for those
            if (type == typeof(JsonValue) || type == typeof(JsonObject) || type == typeof(JsonArray))
                return false;

            return true;
        }

        protected override bool CanReadType(Type type)
        {
            if (type == typeof(IKeyValueModel))
                return false;

            return true;
        }

        protected override System.Threading.Tasks.Task<object> OnReadFromStreamAsync(Type type, System.IO.Stream stream, System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext)
        {
            var task = Task<object>.Factory.StartNew(() =>
            {
                var settings = new JsonSerializerSettings()
                {
                    NullValueHandling = NullValueHandling.Ignore,
                };

                var sr = new StreamReader(stream);
                var jreader = new JsonTextReader(sr);

                var ser = new JsonSerializer();
                ser.Converters.Add(new IsoDateTimeConverter());

                object val = ser.Deserialize(jreader, type);
                return val;
            });

            return task;
        }

        protected override System.Threading.Tasks.Task OnWriteToStreamAsync(Type type, object value, System.IO.Stream stream, System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext, System.Net.TransportContext transportContext)
        {
            var task = Task.Factory.StartNew(() =>
            {
                var settings = new JsonSerializerSettings()
                {
                    NullValueHandling = NullValueHandling.Ignore,
                };

                string json = JsonConvert.SerializeObject(value, Formatting.Indented,
                                                          new JsonConverter[1] { new IsoDateTimeConverter() });

                byte[] buf = System.Text.Encoding.Default.GetBytes(json);
                stream.Write(buf, 0, buf.Length);
                stream.Flush();
            });

            return task;
        }
    }
}
#endif

One advantage of the Json.NET serializer is that you can specify a few options on how things are formatted and handled. You get null value handling, and you can plug in the IsoDateTimeConverter, which is nice to produce the proper ISO dates that I would expect any JSON serializer to output these days.

Hooking up the Formatters

Once you've created the custom formatters, you need to enable them for your Web API application. To do this, use the GlobalConfiguration.Configuration object and add the formatter to the Formatters collection. Here's what this looks like hooked up from Application_Start in a Web project:

protected void Application_Start(object sender, EventArgs e)
{
    // Action based routing (used for RPC calls)
    RouteTable.Routes.MapHttpRoute(
        name: "StockApi",
        routeTemplate: "stocks/{action}/{symbol}",
        defaults: new
        {
            symbol = RouteParameter.Optional,
            controller = "StockApi"
        });

    // WebApi Configuration to hook up formatters and message handlers - optional
    RegisterApis(GlobalConfiguration.Configuration);
}

public static void RegisterApis(HttpConfiguration config)
{
    // Add JavaScriptSerializer formatter instead - add at top to make default
    //config.Formatters.Insert(0, new JavaScriptSerializerFormatter());

    // Add Json.net formatter - add at the top so it fires first!
    // This leaves the old one in place so JsonValue/JsonObject/JsonArray are still handled
    config.Formatters.Insert(0, new JsonNetFormatter());
}

One thing to remember here is the GlobalConfiguration object, which is Web API's static configuration instance. I think this thing is seriously misnamed, given that GlobalConfiguration could stand for anything and so is hard to discover if you don't know what you're looking for. How about WebApiConfiguration or something more descriptive?

Anyway, once you know what it is, you can use the Formatters collection to insert your custom formatter. Note that I insert my formatter at the top of the list so it takes precedence over the default formatter. I'm also not removing the old formatter, because I still want JsonValue/JsonObject/JsonArray to be handled by the default serialization mechanism. Since the formatters process in sequence and I exclude processing for these types, JsonValue et al. still get properly serialized/deserialized.

Summary

Currently DataContractJsonSerializer in Web API is a pain, but at least we have the ability, with relatively limited effort, to replace the MediaTypeFormatter and plug in our own JSON serializer. This is useful for many scenarios - if you have existing client applications that used MVC JsonResult or ASP.NET AJAX results from ASMX AJAX services, you can plug in the JavaScriptSerializer and get exactly the same serializer you used in the past, so your results will be the same and won't potentially break clients. JSON serializers do vary a bit in how they serialize some of the more complex types (like Dictionaries and dates, for example), so if you're migrating, it might be helpful to ensure your client code doesn't break when you switch to ASP.NET Web API.

Going forward, it looks like Microsoft is planning on plugging Json.NET into Web API and making it the default. I think that's an awesome choice, since Json.NET has been around forever, is fast and easy to use, and provides a ton of functionality as part of this great library. I just wish Microsoft had figured this out sooner instead of integrating it at the last minute, especially given that Json.NET has a similar set of lower-level JSON objects (JObject/JArray etc.), which now end up being duplicated by the native System.Json stuff. It's not like we don't already have enough confusion regarding which JSON serializer to use (JavaScriptSerializer, DataContractJsonSerializer, JsonValue/JsonObject/JsonArray, and now Json.NET). For years I've been using my own JSON serializer because the built-in choices are so limited. However, with an official endorsement of Json.NET, I'm happily moving on to use that in my applications. Let's hope Microsoft gets this right before ASP.NET Web API goes gold.

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in Web Api  AJAX  ASP.NET

    Read the article

  • Using jQuery to Insert a New Database Record

    - by Stephen Walther
The goal of this blog entry is to explore the easiest way of inserting a new record into a database using jQuery and .NET. I’m going to explore two approaches: using Generic Handlers and using a WCF service (in a future blog entry I’ll take a look at OData and WCF Data Services).

Create the ASP.NET Project

I’ll start by creating a new empty ASP.NET application with Visual Studio 2010. Select the menu option File, New Project and select the ASP.NET Empty Web Application project template.

Setup the Database and Data Model

I’ll use my standard MoviesDB.mdf movies database. This database contains one table named Movies that looks like this:

I’ll use the ADO.NET Entity Framework to represent my database data: Select the menu option Project, Add New Item and select the ADO.NET Entity Data Model project item. Name the data model MoviesDB.edmx and click the Add button. In the Choose Model Contents step, select Generate from database and click the Next button. In the Choose Your Data Connection step, leave all of the defaults and click the Next button. In the Choose Your Data Objects step, select the Movies table and click the Finish button.

Unfortunately, Visual Studio 2010 cannot spell movie correctly :) You need to click on Movy and change the name of the class to Movie. In the Properties window, change the Entity Set Name to Movies.

Using a Generic Handler

In this section, we’ll use jQuery with an ASP.NET generic handler to insert a new record into the database. A generic handler is similar to an ASP.NET page, but it does not have any of the overhead. It consists of one method named ProcessRequest().

Select the menu option Project, Add New Item and select the Generic Handler project item. Name your new generic handler InsertMovie.ashx and click the Add button. Modify your handler so it looks like Listing 1:

Listing 1 – InsertMovie.ashx

using System.Web;

namespace WebApplication1
{
    /// <summary>
    /// Inserts a new movie into the database
    /// </summary>
    public class InsertMovie : IHttpHandler
    {
        private MoviesDBEntities _dataContext = new MoviesDBEntities();

        public void ProcessRequest(HttpContext context)
        {
            context.Response.ContentType = "text/plain";

            // Extract form fields
            var title = context.Request["title"];
            var director = context.Request["director"];

            // Create movie to insert
            var movieToInsert = new Movie
            {
                Title = title,
                Director = director
            };

            // Save new movie to DB
            _dataContext.AddToMovies(movieToInsert);
            _dataContext.SaveChanges();

            // Return success
            context.Response.Write("success");
        }

        public bool IsReusable
        {
            get { return true; }
        }
    }
}

In Listing 1, the ProcessRequest() method is used to retrieve a title and director from form parameters. Next, a new Movie is created with the form values. Finally, the new movie is saved to the database and the string “success” is returned.

Using jQuery with the Generic Handler

We can call the InsertMovie.ashx generic handler from jQuery by using the standard jQuery post() method.
The following HTML page illustrates how you can retrieve form field values and post the values to the generic handler:

Listing 2 – Default.htm

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title>Add Movie</title>
    <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script>
</head>
<body>
    <form>
        <label>Title:</label>
        <input name="title" />
        <br />
        <label>Director:</label>
        <input name="director" />
    </form>
    <button id="btnAdd">Add Movie</button>

    <script type="text/javascript">
        $("#btnAdd").click(function () {
            $.post("InsertMovie.ashx", $("form").serialize(), insertCallback);
        });

        function insertCallback(result) {
            if (result == "success") {
                alert("Movie added!");
            }
            else {
                alert("Could not add movie!");
            }
        }
    </script>
</body>
</html>

When you open the page in Listing 2 in a web browser, you get a simple HTML form.

Notice that the page in Listing 2 includes the jQuery library. The jQuery library is included with the following SCRIPT tag:

<script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script>

The jQuery library is included on the Microsoft Ajax CDN so you can always easily include the jQuery library in your applications. You can learn more about the CDN at this website: http://www.asp.net/ajaxLibrary/cdn.ashx

When you click the Add Movie button, the jQuery post() method is called to post the form data to the InsertMovie.ashx generic handler. Notice that the form values are serialized into a URL-encoded string by calling the jQuery serialize() method. The serialize() method uses the name attribute of form fields and not the id attribute.

Notes on this Approach

This is a very low-level approach to interacting with .NET through jQuery – but it is simple and it works! And, you don’t need to use any JavaScript libraries in addition to the jQuery library to use this approach.

The signature for the jQuery post() callback method looks like this:

callback(data, textStatus, XmlHttpRequest)

The second parameter, textStatus, returns the HTTP status of the response from the server. I tried returning different status codes from the generic handler with an eye towards implementing server validation by returning a status code such as 400 Bad Request when validation fails (see http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html ). I finally figured out that the callback is not invoked when the textStatus has any value other than “success”.

Using a WCF Service

As an alternative to posting to a generic handler, you can create a WCF service. You create a new WCF service by selecting the menu option Project, Add New Item and selecting the Ajax-enabled WCF Service project item. Name your WCF service MovieService.svc and click the Add button.
Modify the WCF service so that it looks like Listing 3:

Listing 3 – MovieService.svc

using System.ServiceModel;
using System.ServiceModel.Activation;

namespace WebApplication1
{
    [ServiceBehavior(IncludeExceptionDetailInFaults=true)]
    [ServiceContract(Namespace = "")]
    [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
    public class MovieService
    {
        private MoviesDBEntities _dataContext = new MoviesDBEntities();

        [OperationContract]
        public bool Insert(string title, string director)
        {
            // Create movie to insert
            var movieToInsert = new Movie
            {
                Title = title,
                Director = director
            };

            // Save new movie to DB
            _dataContext.AddToMovies(movieToInsert);
            _dataContext.SaveChanges();

            // Return success
            return true;
        }
    }
}

The WCF service in Listing 3 uses the Entity Framework to insert a record into the Movies database table. The service always returns the value true.

Notice that the service in Listing 3 includes the following attribute:

[ServiceBehavior(IncludeExceptionDetailInFaults=true)]

You need to include this attribute if you want to get detailed error information back to the client. When you are building an application, you should always include this attribute. When you are ready to release your application, you should remove this attribute for security reasons.

Using jQuery with the WCF Service

Calling a WCF service from jQuery requires a little more work than calling a generic handler from jQuery. Here are some good blog posts on some of the issues with using jQuery with WCF:

http://encosia.com/2008/06/05/3-mistakes-to-avoid-when-using-jquery-with-aspnet-ajax/
http://encosia.com/2008/03/27/using-jquery-to-consume-aspnet-json-web-services/
http://weblogs.asp.net/scottgu/archive/2007/04/04/json-hijacking-and-how-asp-net-ajax-1-0-mitigates-these-attacks.aspx
http://www.west-wind.com/Weblog/posts/896411.aspx
http://www.west-wind.com/weblog/posts/324917.aspx
http://professionalaspnet.com/archive/tags/WCF/default.aspx

The primary requirement when calling WCF from jQuery is that the request use JSON: the request must include a content-type:application/json header, and any parameters included with the request must be JSON encoded. Unfortunately, jQuery does not include a method for serializing JSON (although, oddly, jQuery does include a parseJSON() method for deserializing JSON). Therefore, we need to use an additional library to handle the JSON serialization.

The page in Listing 4 illustrates how you can call a WCF service from jQuery.
Listing 4 – Default2.aspx

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title>Add Movie</title>
    <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script>
    <script src="Scripts/json2.js" type="text/javascript"></script>
</head>
<body>
    <form>
        <label>Title:</label>
        <input id="title" />
        <br />
        <label>Director:</label>
        <input id="director" />
    </form>
    <button id="btnAdd">Add Movie</button>

    <script type="text/javascript">
        $("#btnAdd").click(function () {
            // Convert the form into an object
            var data = {
                title: $("#title").val(),
                director: $("#director").val()
            };

            // JSONify the data
            data = JSON.stringify(data);

            // Post it
            $.ajax({
                type: "POST",
                contentType: "application/json; charset=utf-8",
                url: "MovieService.svc/Insert",
                data: data,
                dataType: "json",
                success: insertCallback
            });
        });

        function insertCallback(result) {
            // unwrap result
            result = result["d"];
            if (result === true) {
                alert("Movie added!");
            }
            else {
                alert("Could not add movie!");
            }
        }
    </script>
</body>
</html>

There are several things to notice about Listing 4. First, notice that the page includes both the jQuery library and Douglas Crockford’s JSON2 library:

<script src="Scripts/json2.js" type="text/javascript"></script>

You need to include the JSON2 library to serialize the form values into JSON. You can download the JSON2 library from the following location: http://www.json.org/js.html

When you click the button to submit the form, the form data is converted into a JavaScript object:

// Convert the form into an object
var data = {
    title: $("#title").val(),
    director: $("#director").val()
};

Next, the data is serialized into JSON using the JSON2 library:

// JSONify the data
data = JSON.stringify(data);

Finally, the form data is posted to the WCF service by calling the jQuery ajax() method:

// Post it
$.ajax({
    type: "POST",
    contentType: "application/json; charset=utf-8",
    url: "MovieService.svc/Insert",
    data: data,
    dataType: "json",
    success: insertCallback
});

You can’t use the standard jQuery post() method because you must set the content-type of the request to be application/json. Otherwise, the WCF service will reject the request for security reasons. For details, see this Scott Guthrie blog post: http://weblogs.asp.net/scottgu/archive/2007/04/04/json-hijacking-and-how-asp-net-ajax-1-0-mitigates-these-attacks.aspx

The insertCallback() method is called when the WCF service returns a response. This method looks like this:

function insertCallback(result) {
    // unwrap result
    result = result["d"];
    if (result === true) {
        alert("Movie added!");
    }
    else {
        alert("Could not add movie!");
    }
}

When we called the jQuery ajax() method, we set the dataType to JSON. That causes the jQuery ajax() method to deserialize the response from the WCF service from JSON into a JavaScript object automatically. The following value is passed to the insertCallback method:

{"d":true}

For security reasons, a WCF service always returns a response with a “d” wrapper.
The following line of code removes the “d” wrapper:

// unwrap result
result = result["d"];

To learn more about the “d” wrapper, I recommend that you read the following blog posts:

http://encosia.com/2009/02/10/a-breaking-change-between-versions-of-aspnet-ajax/
http://encosia.com/2009/06/29/never-worry-about-asp-net-ajaxs-d-again/

Summary

In this blog entry, I explored two methods of inserting a database record using jQuery and .NET. First, we created a generic handler and called the handler from jQuery. This is a very low-level approach. However, it is a simple approach that works.

Next, we looked at how you can call a WCF service using jQuery. This approach required a little more work because you need to serialize objects into JSON. We used the JSON2 library to perform the serialization.

In the next blog post, I want to explore how you can use jQuery with OData and WCF Data Services.

    Read the article

  • Differences Between NHibernate and Entity Framework

    - by Ricardo Peres
Introduction

NHibernate and Entity Framework are two of the most popular O/RM frameworks in the .NET world. Although they share some functionality, there are some aspects on which they are quite different. This post will describe these differences and will hopefully help you get started with the one you know less. Mind you, this is a personal selection of features to compare; it is by no means an exhaustive list.

History

First, a bit of history. NHibernate is an open-source project that was first ported from Java’s venerable Hibernate framework, one of the first O/RM frameworks, but nowadays it is not tied to it: it has .NET-specific features, for example, and has evolved in different ways from its Java counterpart. The current version is 3.3, with 3.4 on the horizon. It currently targets .NET 3.5, but can be used as well in .NET 4; it just makes no use of any of its specific functionality. You can find its home page at NHForge.

Entity Framework 1 came out with .NET 3.5 and is now on its second major version, despite being version 4. Code First sits on top of it but came separately, and will also continue to be released out of band with major .NET distributions. It is currently on version 4.3.1, and version 5 will be released together with .NET Framework 4.5. All versions will target the current version of .NET at the time of their release. Its home is located at MSDN.

Architecture

In NHibernate, there is a separation between the Unit of Work and the configuration and model instances. You start off by creating a Configuration object, where you specify all global NHibernate settings such as the database and dialect to use, the batch sizes, the mappings, etc. Then you build an ISessionFactory from it. The ISessionFactory holds model and metadata that is tied to a particular database and to the settings that came from the Configuration object, and there will typically be only one instance of each in a process. Finally, you create instances of ISession from the ISessionFactory, which is the NHibernate representation of the Unit of Work and Identity Map. This is a lightweight object; it basically opens and closes a database connection as required and keeps track of the entities associated with it. ISession objects are cheap to create and dispose, because all of the model complexity is stored in the ISessionFactory and Configuration objects.

As for Entity Framework, the ObjectContext/DbContext holds the configuration and model and acts as the Unit of Work, holding references to all of the known entity instances. This class is therefore not as lightweight as its NHibernate counterpart, and it is not uncommon to see examples where an instance is cached on a field.

Mappings

Both NHibernate and Entity Framework (Code First) support the use of POCOs to represent entities; no base classes are required (or even possible, in the case of NHibernate). As for mapping to and from the database, NHibernate supports three types of mappings:

- XML-based, which have the advantage of not tying the entity classes to a particular O/RM; the XML files can be deployed as files on the file system or as embedded resources in an assembly;
- Attribute-based, for keeping both the entities and database details in the same place, at the expense of polluting the entity classes with NHibernate-specific attributes;
- Strongly-typed code-based, which allows dynamic creation of the model and strongly types it, so that if, for example, a property name changes, the mapping will also be updated.
Entity Framework can use:

- Attribute-based mappings (although attributes cannot express all of the available possibilities – for example, cascading);
- Strongly-typed code mappings.

Database Support

With NHibernate you can use mostly any database you want, including: SQL Server; SQL Server Compact; SQL Server Azure; Oracle; DB2; PostgreSQL; MySQL; Sybase Adaptive Server/SQL Anywhere; Firebird; SQLite; Informix; any through OLE DB; any through ODBC.

Out of the box, Entity Framework only supports SQL Server, but a number of providers exist, both free and commercial, for some of the most used databases, such as Oracle and MySQL. See a list here.

Inheritance Strategies

Both NHibernate and Entity Framework support the three canonical inheritance strategies: Table Per Type Hierarchy (Single Table Inheritance), Table Per Type (Class Table Inheritance) and Table Per Concrete Type (Concrete Table Inheritance).

Associations

Regarding associations, both support one to one, one to many and many to many. However, NHibernate offers far more collection types:

- Bags of entities or values: unordered, possibly with duplicates;
- Lists of entities or values: ordered, indexed by a number column;
- Maps of entities or values: indexed by either an entity or any value;
- Sets of entities or values: unordered, no duplicates;
- Arrays of entities or values: indexed, immutable.

Querying

NHibernate exposes several querying APIs:

- LINQ is probably the most used nowadays, and really does not need to be introduced;
- Hibernate Query Language (HQL) is a database-agnostic, object-oriented, SQL-like language that has existed since NHibernate’s creation and still offers the most advanced querying possibilities; it is well suited for dynamic queries, even if using string concatenation;
- The Criteria API is an implementation of the Query Object pattern, where you create a semi-abstract conceptual representation of the query you wish to execute by means of a class model; also a good choice for dynamic querying;
- Query Over offers a similar API to Criteria, but uses strongly-typed LINQ expressions instead of strings; for this reason, although more refactor-friendly than Criteria, it is also less suited for dynamic queries;
- SQL, including stored procedures, can also be used;
- Integration with the Lucene.NET indexer is available.

As for Entity Framework:

- LINQ to Entities is fully supported, and its implementation is considered very complete; it is the API of choice for most developers;
- Entity-SQL, HQL’s counterpart, is also an object-oriented, database-independent querying language that can be used for dynamic queries;
- SQL, of course, is also supported.

Caching

Both NHibernate and Entity Framework, of course, feature a first-level cache. NHibernate also supports a second-level cache, which can be used among multiple ISessionFactory instances, even in different processes/machines: Hashtable (in-memory); SysCache (uses ASP.NET as the cache provider); SysCache2 (same as above but with support for SQL Server SQL Dependencies); Prevalence; SharedCache; Memcached; Redis; NCache; AppFabric Caching.

Out of the box, Entity Framework does not have any second-level cache mechanism; however, there are some public samples that show how we can add this.
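To make the querying comparison above concrete, here is a minimal sketch of mine (not the author's) showing the same query in three of NHibernate's APIs, assuming an open ISession and a mapped Product entity with a Price property - all names are hypothetical:

using System;
using System.Linq;
using NHibernate;
using NHibernate.Linq;   // provides the Query<T>() LINQ extension (NHibernate 3+)

// LINQ: the most commonly used API today
var cheap = session.Query<Product>()
                   .Where(p => p.Price < 10m)
                   .ToList();

// HQL: database-agnostic, object-oriented query language; good for dynamic queries
var cheapHql = session.CreateQuery("from Product p where p.Price < :max")
                      .SetParameter("max", 10m)
                      .List<Product>();

// QueryOver: Criteria-style API with strongly typed lambdas
var cheapQueryOver = session.QueryOver<Product>()
                            .Where(p => p.Price < 10m)
                            .List();

All three produce equivalent SQL; the choice mostly comes down to how dynamic the query needs to be and how much compile-time checking you want.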
ID Generators

NHibernate supports different ID generation strategies, coming from the database and otherwise:

- Identity (for SQL Server, MySQL, and databases that support identity columns);
- Sequence (for Oracle, PostgreSQL, and others that support sequences);
- Trigger-based;
- HiLo;
- Sequence HiLo (for databases that support sequences);
- Several GUID flavors, both in GUID as well as in string format;
- Increment (for single-user uses);
- Assigned (you must know what you’re doing);
- Sequence-style (either uses an actual sequence or a single-column table);
- Table of ids;
- Pooled (similar to HiLo but stores high values in a table);
- Native (uses whatever mechanism the current database supports, identity or sequence).

Entity Framework only supports: Identity generation; GUIDs; Assigned values.

Properties

NHibernate supports properties of entity types (one to one or many to one), collections (one to many or many to many) as well as scalars and enumerations. It offers a mechanism for having complex property types generated from the database, which even includes support for querying. It also supports properties originating from SQL formulas.

Entity Framework only supports scalars, entity types and collections. Enumeration support will come in the next version.

Events and Interception

NHibernate has a very rich event model that exposes more than 20 events, either for synchronous pre-execution or asynchronous post-execution, including: Pre/Post-Load; Pre/Post-Delete; Pre/Post-Insert; Pre/Post-Update; Pre/Post-Flush. It also features interception of class instancing and SQL generation.

As for Entity Framework, only two events exist: ObjectMaterialized (after loading an entity from the database); SavingChanges (before saving changes, which includes deleting, inserting and updating).

Tracking Changes

For NHibernate as well as Entity Framework, all changes are tracked by their respective Unit of Work implementation. Entities can be attached to and detached from it. Entity Framework does, however, also support self-tracking entities.

Optimistic Concurrency Control

NHibernate supports all of the imaginable scenarios: SQL Server’s ROWVERSION; Oracle’s ORA_ROWSCN; a column containing date and time; a column containing a version number; all/dirty columns comparison.

Entity Framework is more focused on SQL Server, so it only supports: SQL Server’s ROWVERSION; comparing all/some columns.

Batching

NHibernate has full support for insertion batching, but only if the ID generator in use is not database-based (for example, it cannot be used with Identity), whereas Entity Framework has no batching at all.

Cascading

Both support cascading for collections and associations: when an entity is deleted, its conceptual children are also deleted. NHibernate also offers the possibility to set the foreign key column on children to NULL instead of removing them.

Flushing Changes

NHibernate’s ISession has a FlushMode property that can have the following values:

- Auto: changes are sent to the database when necessary, for example, if there are dirty instances of an entity type and a query is performed against this entity type, or if the ISession is being disposed;
- Commit: changes are sent when committing the current transaction;
- Never: changes are only sent when explicitly calling Flush().

As for Entity Framework, changes have to be explicitly sent through a call to AcceptAllChanges()/SaveChanges().
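As a rough sketch of how the NHibernate Unit of Work pieces described earlier fit together with the flushing behavior just covered - the entity name is hypothetical, and this assumes mappings configured in hibernate.cfg.xml:

using NHibernate;
using NHibernate.Cfg;

// one-time, expensive setup: Configuration -> ISessionFactory
var cfg = new Configuration();
cfg.Configure();                               // reads hibernate.cfg.xml
ISessionFactory factory = cfg.BuildSessionFactory();

// cheap, per-unit-of-work object: ISession
using (ISession session = factory.OpenSession())
{
    session.FlushMode = FlushMode.Commit;      // only flush when the transaction commits

    using (ITransaction tx = session.BeginTransaction())
    {
        session.Save(new Customer { Name = "Rick" }); // hypothetical mapped entity
        tx.Commit();                           // changes are flushed to the database here
    }
}

// the Entity Framework counterpart: the context is the unit of work,
// and changes are only sent on SaveChanges():
//   using (var context = new MyContext()) { ...; context.SaveChanges(); }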
Lazy Loading

NHibernate supports lazy loading for: associated entities (one to one, many to one); collections (one to many, many to many); scalar properties (think of BLOBs or CLOBs).

Entity Framework only supports lazy loading for: associated entities; collections.

Generating and Updating the Database

Both NHibernate and Entity Framework Code First (with the Migrations API) allow creating the database model from the mapping, and updating it if the mapping changes.

Extensibility

As you can guess, NHibernate is far more extensible than Entity Framework. Basically, everything can be extended, from ID generation, to LINQ-to-SQL translation, HQL, native SQL support, custom column types, custom association collections, SQL generation, supported databases, etc. With Entity Framework your options are more limited, not least because practically no information exists as to what can be extended/changed. It features a provider model that can be extended to support any database.

Integration With Other Microsoft APIs and Tools

When it comes to integration with Microsoft technologies, it will come as no surprise that Entity Framework offers the best support. For example, the following technologies are fully supported: ASP.NET (through the EntityDataSource); ASP.NET Dynamic Data; WCF Data Services; WCF RIA Services; Visual Studio (through the integrated designer).

Documentation

This is another point where Entity Framework is superior: NHibernate lacks, for starters, an up-to-date API reference synchronized with its current version. It does have a community mailing list, blogs and wikis, although these are not much used. Entity Framework has a number of resources on MSDN and, of course, several forums and discussion groups exist.

Conclusion

Like I said, this is a personal list. It may come as a surprise to some that Entity Framework is so far behind NHibernate in so many aspects, but it is true that NHibernate is much older and, due to its open-source nature, is not tied to product-specific timeframes and can thus evolve much more rapidly. I do like both, and I choose whichever is best for the job at hand. I am looking forward to the changes in EF5, which will add significant value to an already interesting product. So, what do you think? Did I forget anything important, or is there anything else worth talking about? Looking forward to your comments!

    Read the article

  • VS 2010 SP1 and SQL CE

    - by ScottGu
Last month we released the Beta of VS 2010 Service Pack 1 (SP1). You can learn more about the VS 2010 SP1 Beta from Jason Zander’s two blog posts about it, and from Scott Hanselman’s blog post that covers some of the new capabilities enabled with it. You can download and install the VS 2010 SP1 Beta here.

Last week I blogged about the new Visual Studio support for IIS Express that we are adding with VS 2010 SP1. In today’s post I’m going to talk about the new VS 2010 SP1 tooling support for SQL CE, and walk through some of the cool scenarios it enables.

SQL CE – What is it and why should you care?

SQL CE is a free, embedded database engine that enables easy database storage.

No Database Installation Required

SQL CE does not require you to run a setup or install a database server in order to use it. You can simply copy the SQL CE binaries into the \bin directory of your ASP.NET application, and then your web application can use it as a database engine. No setup or extra security permissions are required for it to run. You do not need to have an administrator account on the machine. Just copy your web application onto any server and it will work. This is true even of medium-trust applications running in a web hosting environment.

SQL CE runs in memory within your ASP.NET application; it will start up when you first access a SQL CE database, and will automatically shut down when your application is unloaded. SQL CE databases are stored as files that live within the \App_Data folder of your ASP.NET applications.

Works with Existing Data APIs

SQL CE 4 works with existing .NET-based data APIs, and supports a SQL Server compatible query syntax. This means you can use existing data APIs like ADO.NET, as well as use higher-level ORMs like Entity Framework and NHibernate, with SQL CE. This enables you to use the same data programming skills and data APIs you know today.

Supports Development, Testing and Production Scenarios

SQL CE can be used for development scenarios, testing scenarios, and light production usage scenarios. With the SQL CE 4 release we’ve done the engineering work to ensure that SQL CE won’t crash or deadlock when used in a multi-threaded server scenario (like ASP.NET). This is a big change from previous releases of SQL CE – which were designed for client-only scenarios and which explicitly blocked running in web-server environments. Starting with SQL CE 4 you can use it in a web server as well.

There are no license restrictions with SQL CE. It is also totally free.

Easy Migration to SQL Server

SQL CE is an embedded database – which makes it ideal for development, testing, and light-usage scenarios. For high-volume sites and applications you’ll probably want to migrate your database to use SQL Server Express (which is free), SQL Server or SQL Azure. These servers enable much better scalability, more development features (including features like Stored Procedures – which aren’t supported with SQL CE), as well as more advanced data management capabilities.

We’ll ship migration tools that enable you to optionally take SQL CE databases and easily upgrade them to use SQL Server Express, SQL Server, or SQL Azure. You will not need to change your code when upgrading a SQL CE database to SQL Server or SQL Azure. Our goal is to enable you to be able to simply change the database connection string in your web.config file and have your application just work.
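As a quick illustration of the "existing data APIs" point above - a minimal sketch, assuming the Store.sdf database and a Products table like the ones created later in this post (the ProductName column is an assumption, not from the walkthrough):

using System;
using System.Data.SqlServerCe;  // ships with SQL CE 4

class Program
{
    static void Main()
    {
        // in an ASP.NET app you'd typically use |DataDirectory|\Store.sdf,
        // which resolves to the \App_Data folder
        var connectionString = @"Data Source=Store.sdf";

        using (var conn = new SqlCeConnection(connectionString))
        using (var cmd = new SqlCeCommand("SELECT ProductName FROM Products", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine(reader["ProductName"]);
            }
        }
    }
}

The point is that nothing here is SQL CE specific beyond the provider classes and connection string - the same ADO.NET patterns you already use with SQL Server apply unchanged.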
New Tooling Support for SQL CE in VS 2010 SP1

VS 2010 SP1 includes much improved tooling support for SQL CE, and adds support for using SQL CE within ASP.NET projects for the first time. With VS 2010 SP1 you can now:

- Create new SQL CE databases
- Edit and modify SQL CE database schema and indexes
- Populate SQL CE databases with data
- Use the Entity Framework (EF) designer to create model layers against SQL CE databases
- Use EF Code First to define model layers in code, then create a SQL CE database from them, and optionally edit the DB with VS
- Deploy SQL CE databases to remote servers using Web Deploy and optionally convert them to full SQL Server databases

You can take advantage of all of the above features from within both ASP.NET Web Forms and ASP.NET MVC based projects.

Download

You can enable SQL CE tooling support within VS 2010 by first installing VS 2010 SP1 (beta). Once SP1 is installed, you’ll then need to install the SQL CE Tools for Visual Studio download. This is a separate download that enables the SQL CE tooling support for VS 2010 SP1.

Walkthrough of Two Scenarios

In this blog post I’m going to walk through how you can take advantage of SQL CE and VS 2010 SP1 using both an ASP.NET Web Forms and an ASP.NET MVC based application. Specifically, we’ll walk through:

- How to create a SQL CE database using VS 2010 SP1, then use the EF4 visual designers in Visual Studio to construct a model layer from it, and then display and edit the data using an ASP.NET GridView control.
- How to use an EF Code First approach to define a model layer using POCO classes, and then have EF Code First “auto-create” a SQL CE database for us based on our model classes. We’ll then look at how we can use the new VS 2010 SP1 support for SQL CE to inspect the database that was created, populate it with data, and later make schema changes to it. We’ll do all this within the context of an ASP.NET MVC based application.

You can follow the two walkthroughs below on your own machine by installing VS 2010 SP1 (beta) and then installing the SQL CE Tools for Visual Studio download (which is a separate download that enables SQL CE tooling support for VS 2010 SP1).

Walkthrough 1: Create a SQL CE Database, Create EF Model Classes, Edit the Data with a GridView

This first walkthrough will demonstrate how to create and define a SQL CE database within an ASP.NET Web Forms application. We’ll then build an EF model layer for it and use that model layer to enable data editing scenarios with an <asp:GridView> control.

Step 1: Create a new ASP.NET Web Forms Project

We’ll begin by using the File->New Project menu command within Visual Studio to create a new ASP.NET Web Forms project. We’ll use the “ASP.NET Web Application” project template option so that it has a default UI skin implemented:

Step 2: Create a SQL CE Database

Right-click on the “App_Data” folder within the created project and choose the “Add->New Item” menu command: This will bring up the “Add Item” dialog box. Select the “SQL Server Compact 4.0 Local Database” item (new in VS 2010 SP1) and name the database file to create “Store.sdf”:

Note that SQL CE database files have a .sdf filename extension. Place them within the /App_Data folder of your ASP.NET application to enable easy deployment. When we click the “Add” button above, a Store.sdf file is added to our project:

Step 3: Adding a “Products” Table

Double-clicking the “Store.sdf” database file will open it up within the Server Explorer tab.
Since it is a new database, there are no tables within it: Right-click on the “Tables” icon and choose the “Create Table” menu command to create a new database table. We’ll name the new table “Products” and add 4 columns to it. We’ll mark the first column as a primary key (and make it an identity column so that its value will automatically increment with each new row):

When we click “OK”, our new Products table will be created in the SQL CE database.

Step 4: Populate with Data

Once our Products table is created it will show up within the Server Explorer. We can right-click it and choose the “Show Table Data” menu command to edit its data: Let’s add a few sample rows of data to it:

Step 5: Create an EF Model Layer

We have a SQL CE database with some data in it – let’s now create an EF model layer that will provide a way for us to easily query and update data within it.

Let’s right-click on our project and choose the “Add->New Item” menu command. This will bring up the “Add New Item” dialog – select the “ADO.NET Entity Data Model” item within it and name it “Store.edmx”. This will add a new Store.edmx item to our Solution Explorer and launch a wizard that allows us to quickly create an EF model:

Select the “Generate From Database” option above and click Next. Choose to use the Store.sdf SQL CE database we just created and then click Next again. The wizard will then ask you which database objects you want to import into your model. Let’s choose to import the “Products” table we created earlier:

When we click the “Finish” button, Visual Studio will open up the EF designer. It will have a Product entity already on it that maps to the “Products” table within our SQL CE database:

The VS 2010 SP1 EF designer works exactly the same with SQL CE as it already does with SQL Server and SQL Express. The Product entity above will be persisted as a class (called “Product”) that we can programmatically work against within our ASP.NET application.

Step 6: Compile the Project

Before using your model layer you’ll need to build your project. Do a Ctrl+Shift+B to compile the project, or use the Build->Build Solution menu command.

Step 7: Create a Page that Uses our EF Model Layer

Let’s now create a simple ASP.NET Web Form that contains a GridView control that we can use to display and edit our Products data (via the EF model layer we just created).

Right-click on the project and choose the Add->New Item command. Select the “Web Form using Master Page” item template, and name the page you create “Products.aspx”. Base the master page on the “Site.Master” template that is in the root of the project.

Add an <h2>Products</h2> heading to the new page, and add an <asp:gridview> control within it: Then click the “Design” tab to switch into design view. Select the GridView control, and then click the top-right corner to display the GridView’s “Smart Tasks” UI:
The last two steps we'll do are to click the "Enable Editing" checkbox on the grid (which will cause the grid to display an "Edit" link on each row) and (optionally) use the Auto Format dialog to pick a UI template for the grid.

Step 8: Run the Application

Let's now run our application and browse to the /Products.aspx page that contains our GridView. When we do so we'll see a grid UI of the products within our SQL CE database. Clicking the "Edit" link for any of the rows will allow us to edit their values. When we click "Update" the GridView will post back the values, persist them through our EF model layer, and ultimately save them within our SQL CE database.

Learn More about using EF with ASP.NET Web Forms

Read this tutorial series on the http://asp.net site to learn more about how to use EF with ASP.NET Web Forms. The tutorial series uses SQL Express as the database – but the nice thing is that all of the same steps/concepts can now also be done with SQL CE.

Walkthrough 2: Using EF Code First with SQL CE and ASP.NET MVC 3

We used a database-first approach with the sample above – where we first created the database, and then used the EF designer to create model classes from the database. In addition to supporting a designer-based development workflow, EF also enables a more code-centric option which we call "code-first development". Code-first development enables a pretty sweet workflow. It enables you to:

- Define your model objects by simply writing "plain old classes" with no base classes or visual designer required
- Use a "convention over configuration" approach that enables database persistence without explicitly configuring anything
- Optionally override the convention-based persistence and use a fluent code API to fully customize the persistence mapping
- Optionally auto-create a database based on the model classes you define – allowing you to start from code first

I've done several blog posts about EF Code First in the past – I really think it is great. The good news is that it also works very well with SQL CE. The combination of SQL CE, EF Code First, and the new VS tooling support for SQL CE enables a pretty nice workflow. Below is a simple example of how you can use them to build a simple ASP.NET MVC 3 application.

Step 1: Create a new ASP.NET MVC 3 Project

We'll begin by using the File->New Project menu command within Visual Studio to create a new ASP.NET MVC 3 project. We'll use the "Internet Application" project template so that it has a default UI skin implemented.

Step 2: Use NuGet to Install EFCodeFirst

Next we'll use the NuGet package manager (automatically installed by ASP.NET MVC 3) to add the EFCodeFirst library to our project. We'll use the Package Manager Console to do this. Bring up the console within Visual Studio by selecting the View->Other Windows->Package Manager Console menu command.
Then type "install-package EFCodeFirst" within the Package Manager Console. The EFCodeFirst library will be downloaded and added to our application.

Step 3: Build Some Model Classes

Using a code-first development workflow, we create our model classes first (even before we have a database). We create these model classes by writing code.

For this sample, we will right-click on the "Models" folder of our project and add three classes to it: "Dinner", "RSVP" and "NerdDinners". The "Dinner" and "RSVP" model classes are "plain old CLR objects" (aka POCO). They do not need to derive from any base classes or implement any interfaces, and the properties they expose are standard .NET data types. No data-persistence attributes or data-access code has been added to them. The "NerdDinners" class derives from the DbContext class (which is supplied by EFCodeFirst) and handles the retrieval/persistence of our Dinner and RSVP instances from a database. (The original post shows these classes as screenshots; a reconstructed sketch appears a little further below.)

Step 4: Listing Dinners

We've written all of the code necessary to implement our model layer for this simple project. Let's now expose and implement the URL /Dinners/Upcoming within our project. We'll use it to list upcoming dinners that happen in the future.

We'll do this by right-clicking on our "Controllers" folder and selecting the "Add->Controller" menu command. We'll name the controller we want to create "DinnersController". We'll then implement an "Upcoming" action method within it that lists upcoming dinners using our model layer above, using a LINQ query to retrieve the data and pass it to a view to render. We'll then right-click within our Upcoming method and choose the "Add->View" menu command to create an "Upcoming" view template that displays our dinners. We'll use the "empty" template option within the "Add View" dialog and write the view template using Razor.

Step 5: Configure our Project to use a SQL CE Database

We have finished writing all of our code – our last step will be to configure a database connection string to use. We will point our NerdDinners model class to a SQL CE database by adding a <connectionString> entry to the web.config file at the top of our project (a reconstruction appears in the sketch below). EF Code First uses a default convention where context classes look for a connection string that matches the DbContext class name. Because we created a "NerdDinners" class earlier, we've also named our connection string "NerdDinners". We configure the connection string to use SQL CE as the database, and tell it that our SQL CE database file will live within the \App_Data directory of our ASP.NET project.

Step 6: Running our Application

Now that we've built our application, let's run it! We'll browse to the /Dinners/Upcoming URL – doing so will display an empty list of upcoming dinners. You might ask – but where did it query to get the dinners from? We didn't explicitly create a database?!

One of the cool features that EF Code First supports is the ability to automatically create a database (based on the schema of our model classes) when the database it points at doesn't exist. Above we configured EF Code First to point at a SQL CE database in the \App_Data directory of our project. When we ran our application, EF Code First saw that the SQL CE database didn't exist and automatically created it for us.
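As noted above, the original post shows the model classes, controller action and connection string only as screenshots. The sketch below reconstructs roughly what they could look like – the Dinner/RSVP/NerdDinners/DinnersController names come from the walkthrough itself, while the individual property names (Title, EventDate, Address, AttendeeEmail) are illustrative assumptions:

    using System;
    using System.Data.Entity;  // DbContext/DbSet, supplied by the EFCodeFirst package
    using System.Linq;
    using System.Web.Mvc;

    // Plain POCO model classes – no base classes, interfaces or attributes required.
    public class Dinner
    {
        public int DinnerID { get; set; }
        public string Title { get; set; }
        public DateTime EventDate { get; set; }
        public string Address { get; set; }
    }

    public class RSVP
    {
        public int RsvpID { get; set; }
        public int DinnerID { get; set; }
        public string AttendeeEmail { get; set; }
    }

    // Handles retrieval/persistence of Dinner and RSVP instances from the database.
    public class NerdDinners : DbContext
    {
        public DbSet<Dinner> Dinners { get; set; }
        public DbSet<RSVP> RSVPs { get; set; }
    }

    // Lists upcoming dinners that happen in the future via a LINQ query.
    public class DinnersController : Controller
    {
        public ActionResult Upcoming()
        {
            using (var db = new NerdDinners())
            {
                var upcoming = db.Dinners.Where(d => d.EventDate > DateTime.Now).ToList();
                return View(upcoming);
            }
        }
    }

And the web.config connection string is likely along these lines (the |DataDirectory| token resolves to the \App_Data folder):

    <connectionStrings>
      <add name="NerdDinners"
           connectionString="Data Source=|DataDirectory|NerdDinners.sdf"
           providerName="System.Data.SqlServerCe.4.0"/>
    </connectionStrings>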
Step 7: Using VS 2010 SP1 to Explore our Newly Created SQL CE Database

Click the "Show All Files" icon within the Solution Explorer and you'll see the "NerdDinners.sdf" SQL CE database file that was automatically created for us by EF Code First within the \App_Data folder. We can optionally right-click on the file and choose "Include in Project" to add it to our solution. We can also double-click the file (regardless of whether it is added to the project) and VS 2010 SP1 will open it as a database we can edit within the "Server Explorer" tab of the IDE.

When we double-click our NerdDinners.sdf SQL CE file we can drill in to see the schema of the Dinners and RSVPs tables in the tree explorer. Notice how two tables – Dinners and RSVPs – were automatically created for us within our SQL CE database. This was done by EF Code First when we accessed the NerdDinners class by running our application above.

We can right-click on a table and use the "Show Table Data" command to enter some upcoming dinners in our database, using the built-in editor that VS 2010 SP1 supports to populate the table data. And now when we hit "refresh" on the /Dinners/Upcoming URL within our browser we'll see some upcoming dinners show up.

Step 8: Changing our Model and Database Schema

Let's now modify the schema of our model layer and database, and walk through one way that the new VS 2010 SP1 tooling support for SQL CE can make this easier. With EF Code First you typically start making database changes by modifying the model classes. For example, let's add an additional string property called "UrlLink" to our "Dinner" class. We'll use this to point to a link for more information about the event:
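The post shows this change as a screenshot; with the property names assumed in the earlier sketch, the updated Dinner class would simply gain one line:

    public class Dinner
    {
        public int DinnerID { get; set; }
        public string Title { get; set; }
        public DateTime EventDate { get; set; }
        public string Address { get; set; }
        public string UrlLink { get; set; }  // new: link with more information about the event
    }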
Now when we re-run our project and visit the /Dinners/Upcoming URL we'll see an error thrown. We are seeing this error because EF Code First automatically created our database, and by default when it does this it adds a table that helps track whether the schema of our database is in sync with our model classes. EF Code First helpfully throws an error when they become out of sync – making it easier to track down issues at development time that you might otherwise only find (via obscure errors) at runtime. Note that if you do not want this feature you can turn it off by changing the default conventions of your DbContext class (in this case our NerdDinners class) to not track the schema version.

Our model classes and database schema are out of sync in the above example – so how do we fix this? There are two approaches you can use today:

- Delete the database and have EF Code First automatically re-create it based on the new model class schema (losing the data within the existing DB)
- Modify the schema of the existing database to bring it in sync with the model classes (keeping/migrating the data within the existing DB)

There are a couple of ways you can do the second approach. Below I'm going to show how you can take advantage of the new VS 2010 SP1 tooling support for SQL CE to use a database schema tool to modify our database structure. We are also going to be supporting a "migrations" feature with EF in the future that will allow you to automate/script database schema migrations programmatically.

Step 9: Modify our SQL CE Database Schema using VS 2010 SP1

The new SQL CE tooling support within VS 2010 SP1 makes it easy to modify the schema of our existing SQL CE database. To do this we'll right-click on our "Dinners" table and choose the "Edit Table Schema" command. This will bring up the "Edit Table" dialog. We can rename, change or delete any of the existing columns in our table, or click at the bottom of the column listing and type to add a new column. Here we add a new "UrlLink" column of type "nvarchar" (since our property is a string). When we click OK our database will be updated to have the new column and our schema will now match our model classes.

Because we are manually modifying our database schema, there is one additional step we need to take to let EF Code First know that the database schema is in sync with our model classes. As I mentioned earlier, when a database is automatically created by EF Code First it adds an "EdmMetadata" table to the database to track schema versions (and hash our model classes against them to detect mismatches between our model classes and the database schema). Since we are manually updating and maintaining our database schema, we don't need this table – and can just delete it. This leaves us with just the two tables that correspond to our model classes. And now when we re-run our /Dinners/Upcoming URL it will display the dinners correctly.

One last touch we could do would be to update our view to check for the new UrlLink property and render an <a> link to it if an event has one. And now when we refresh /Dinners/Upcoming we will see hyperlinks for the events that have a UrlLink stored in the database.

Summary

SQL CE provides a free, embedded database engine that you can use to easily enable database storage. With SQL CE 4 you can now take advantage of it within ASP.NET projects and applications (both Web Forms and MVC).

VS 2010 SP1 provides tooling support that enables you to easily create, edit and modify SQL CE databases – as well as use the standard EF designer against them. This allows you to re-use your existing skills and data knowledge while taking advantage of an embedded database option. This is useful both for small applications (where you don't need the scalability of a full SQL Server), as well as for development and testing scenarios – where you want to be able to rapidly develop/test your application without having a full database instance. SQL CE makes it easy to later migrate your data to a full SQL Server or SQL Azure instance if you want to – without having to change any code in your application. All we would need to change in the above two scenarios is the <connectionString> value within the web.config file in order to have our code run against a full SQL Server. This provides the flexibility to start with a small embedded database solution and scale up as needed.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu


  • Implementing History Support using jQuery for AJAX websites built on asp.net AJAX

    - by anil.kasalanati
Problem Statement: Most modern-day websites use AJAX for page navigation, and gone are the days of complete HTTP redirection, so it is imperative that we support the browser's back and forward buttons so that the end user's navigation is not broken. In this article we discuss the solutions which are already available and the problems with them.

Microsoft History Support: As of .NET 3.5 SP1, Microsoft's ScriptManager supports history for websites using UpdatePanels. This is achieved by enabling the "EnableHistory" property for the ScriptManager and then handling the "Page_Browser_Navigate" event. Whenever the browser buttons are clicked the event is fired, and the application can write code there to do the navigation. The following articles provide good tutorials on how to do that:

http://www.asp.net/aspnet-in-net-35-sp1/videos/introduction-to-aspnet-ajax-history
http://www.codeproject.com/KB/aspnet/ajaxhistorymanagement.aspx

Microsoft's API internally creates an IFrame and changes the bookmark of the URL. Unfortunately this has a bug: it does not work in IE6 and IE7 (which are the major browsers), though it works in IE8 and Firefox. Microsoft has apparently fixed this bug in .NET 4.0; see the following blog post:

http://weblogs.asp.net/joshclose/archive/2008/11/11/asp-net-ajax-addhistorypoint-bug.aspx

For solutions which are still running on .NET 3.5 SP1 there is no fix that Microsoft offers, so there are two ways to solve this:

- Disable the back button.
- Develop a custom solution.

Disable back button

Even though this might look like a very simple thing to do, there are issues around doing it, because there is no event which can be manipulated from JavaScript – the browser does not provide an API to do this. Most of the technical solutions on the internet offer workarounds such as calling history.forward(1), so that even if the user clicks the back button the destination page redirects the user to the original page. This is not a good customer experience, and it does not work for an ASP.NET website where there are different views on the same page. There are other ways around this based on detecting the window unload events and writing code there. There are two events, onbeforeunload and onunload, and we can write code to show a confirmation message to the user. If we write code in onunload then we can only show a message – it is too late to stop the navigation. If we write it in onbeforeunload we can stop the navigation if the user clicks Cancel, but this event is triggered for all AJAX calls and hyperlinks where the href is anything other than #. We can still do this, but the website has to be checked properly to ensure there are no links whose href is not #; otherwise the user will see a popup message saying "you are leaving the website". Believe me, after doing a lot of research on disabling the back button, I found it easier to support history rather than to disable the button. So I am going to discuss a solution which works using jQuery, with some tweaking.

Custom Solution

jQuery already provides an API to manage the history of an AJAX website – http://plugins.jquery.com/project/history. We need to integrate this with the Microsoft page request manager so that both of them work in tandem. The page state is maintained in a cookie so that it can be passed to the server; I used the jQuery cookie plugin for that – http://plugins.jquery.com/node/1386/release. First, when the page loads, we need to hook up all the events on the page which should create browser history entries; the following code does that.
    jQuery(document).ready(function() {
        // Initialize the history plugin.
        // The callback is called at once with the present location.hash.
        jQuery.history.init(pageload);

        // Set the onclick event for anchors marked with rel="history".
        jQuery("a[@rel='history']").click(function() {
            var hash = jQuery(this).attr("page");
            hash = hash.replace(/^.*#/, '');
            isAsyncPostBack = true;
            // Moves to a new page; pageload is called at once.
            jQuery.history.load(hash);
            return true;
        });
    });

The above script basically gets all the DOM elements which have the attribute rel="history" and adds the click event. In our test page we have link buttons which have the rel attribute set to "history":

    <asp:LinkButton ID="Previous" rel="history" runat="server" onclick="PreviousOnClick">Previous</asp:LinkButton>
    <asp:LinkButton ID="AsyncPostBack" rel="history" runat="server" onclick="NextOnClick">Next</asp:LinkButton>
    <asp:LinkButton ID="HistoryLinkButton" runat="server" style="display:none" onclick="HistoryOnClick"></asp:LinkButton>

You can see that there is a hidden HistoryLinkButton which is used to send a server-side postback when the browser back or forward buttons are pressed. Note that we need to use display:none and not Visible="false", because ASP.NET AJAX would disallow any postbacks if Visible="false". In general the pageload function gets executed on the client side when a back or forward button is pressed; it is shown below:

    function pageload(hash) {
        if (hash) {
            if (!isAsyncPostBack) {
                jQuery.cookie("page", hash);
                __doPostBack("HistoryLinkButton", "");
            }
            isAsyncPostBack = false;
        } else {
            // start page
            jQuery("#load").empty();
        }
    }

As you can see, in case there is a hash in the URL we basically do an ASP.NET AJAX postback using the following statement:

    __doPostBack("HistoryLinkButton", "");

So whenever the user clicks back or forward, the postback happens using the event statement we provide, and the HistoryLinkButton's click handler is invoked in the code-behind. That handler needs to use the pageId present in the URL to change the page content. And there is an important thing to note – because the hash is worked out using the pageIds, there is a need to recalculate the hash after every AJAX postback, so the following code is plugged in:

    function ReWorkHash() {
        jQuery("a[@rel='history']").unbind("click");
        jQuery("a[@rel='history']").click(function() {
            var hash = jQuery(this).attr("page");
            hash = hash.replace(/^.*#/, '');
            jQuery.cookie("page", hash);
            isAsyncPostBack = true;
            // Moves to a new page; pageload is called at once.
            jQuery.history.load(hash);
            return true;
        });
    }

This code is executed from the code-behind using ScriptManager.RegisterClientScriptBlock, as shown below:

    ScriptManager.RegisterClientScriptBlock(this, typeof(_Default), "Recalculater", "ReWorkHash();", true);
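The article wires HistoryOnClick to the hidden HistoryLinkButton but does not show its body. A minimal code-behind sketch could look like the following – note that SwitchToPage is a hypothetical helper standing in for whatever view-switching logic your page needs, and the cookie name "page" matches the one set by the pageload script above:

    protected void HistoryOnClick(object sender, EventArgs e)
    {
        // The pageload script stores the page id in the "page" cookie
        // before triggering this postback, so read it back here.
        HttpCookie pageCookie = Request.Cookies["page"];
        string pageId = (pageCookie != null) ? pageCookie.Value : string.Empty;

        // Hypothetical helper: restore whichever view corresponds to this page id.
        SwitchToPage(pageId);

        // Re-register the client script so hashes are recalculated after this
        // async postback, as the article does after every postback.
        ScriptManager.RegisterClientScriptBlock(this, typeof(_Default), "Recalculater", "ReWorkHash();", true);
    }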
A sample application is available for download at http://techconsulting.vpscustomer.com/Source/HistoryTest.zip, and a working sample is available at http://techconsulting.vpscustomer.com/Samples/Default.aspx.


  • SQL SERVER – Securing TRUNCATE Permissions in SQL Server

    - by pinaldave
Download the script for this article from here. On December 11, 2010, Vinod Kumar, a Databases & BI technology evangelist from Microsoft Corporation, graced Ahmedabad by spending some time with the community during the Community Tech Days (CTD) event. As he was running through a few demos, Vinod asked the audience one of the most fundamental and common interview questions – "What is the difference between a DELETE and a TRUNCATE?"

Ahmedabad SQL Server User Group expert Nakul Vachhrajani has come up with an excellent solution to the same. I must congratulate Nakul for this excellent solution, and as an encouragement to User Group members, I am publishing his article here.

Nakul Vachhrajani is a Software Specialist and systems development professional with Patni Computer Systems Limited. He has functional experience spanning legacy code deprecation, system design, documentation, development, implementation, testing, maintenance and support of complex systems, providing business intelligence solutions, database administration, performance tuning, optimization, product management, release engineering, process definition and implementation. He has a comprehensive grasp of database administration, development and implementation with MS SQL Server and C, C++, Visual C++/C#. He has about 6 years of total experience in information technology. Nakul is a member of the Ahmedabad and Gandhinagar SQL Server User Groups, and actively contributes to the community by participating in multiple forums and websites like SQLAuthority.com, BeyondRelational.com, SQLServerCentral.com and many others. Please note: the opinions expressed herein are Nakul's own personal opinions and do not represent his employer's view in any way.

All data everywhere on Earth goes through a series of four distinct operations, identified by the words CREATE, READ, UPDATE and DELETE – or simply, CRUD. Put in Microsoft SQL Server terms, the process goes like this: INSERT, SELECT, UPDATE and DELETE/TRUNCATE. Quite a few interesting responses were received and evaluated live during the session. To summarize them, the most important similarity that came out was that both DELETE and TRUNCATE participate in transactions. The major differences (not all) that came out of the exercise were:

DELETE:
- DELETE supports a WHERE clause
- DELETE removes rows from a table row-by-row
- Because DELETE moves row-by-row, it acquires a row-level lock
- Depending upon the recovery model of the database, DELETE is a fully-logged operation
- Because DELETE moves row-by-row, it can fire off triggers

TRUNCATE:
- TRUNCATE does not support a WHERE clause
- TRUNCATE works by directly removing the individual data pages of a table
- TRUNCATE directly acquires a table-level lock (because a lock is acquired, and because TRUNCATE can also participate in a transaction, it has to be a logged operation)
- TRUNCATE is, therefore, a minimally-logged operation; again, this depends upon the recovery model of the database
- Triggers are not fired when TRUNCATE is used (because individual row deletions are not logged)

(A short script illustrating these differences appears just below.)

Finally, Vinod popped the big homework question that must be critically analyzed: "We know that we can restrict a DELETE operation to a particular user, but how can we restrict the TRUNCATE operation to a particular user?" After returning home and having a nice cup of coffee, I noticed that my gray cells immediately started to work. Below is the result of my research. As is always said, the devil is in the details.
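As promised above, here is a quick illustration of the DELETE vs. TRUNCATE differences (this is not part of Nakul's original scripts; the table name is a throwaway):

    -- Illustrative only: contrasting DELETE and TRUNCATE behavior.
    CREATE TABLE dbo.DemoCities (Id INT IDENTITY(1,1), Name NVARCHAR(50));
    INSERT INTO dbo.DemoCities VALUES (N'Delhi'), (N'Mumbai'), (N'Ahmedabad');

    -- DELETE supports WHERE, is logged row-by-row, and fires DELETE triggers.
    DELETE FROM dbo.DemoCities WHERE Name = N'Delhi';

    -- The identity value keeps counting after a DELETE: this row gets Id = 4.
    INSERT INTO dbo.DemoCities VALUES (N'Pune');

    -- TRUNCATE allows no WHERE clause and deallocates whole data pages instead.
    TRUNCATE TABLE dbo.DemoCities;

    -- TRUNCATE also reseeds the identity: this row gets Id = 1 again.
    INSERT INTO dbo.DemoCities VALUES (N'Surat');

    DROP TABLE dbo.DemoCities;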
Upon looking at the Permissions section for the TRUNCATE statement in Books Online, the following jumps right out: "The minimum permission required is ALTER on table_name. TRUNCATE TABLE permissions default to the table owner, members of the sysadmin fixed server role, and the db_owner and db_ddladmin fixed database roles, and are not transferable. However, you can incorporate the TRUNCATE TABLE statement within a module, such as a stored procedure, and grant appropriate permissions to the module using the EXECUTE AS clause."

Now, what does this mean? Unlike DELETE, one cannot directly assign permissions to a user (or set of users) allowing or revoking TRUNCATE rights. However, there is a way to circumvent this. It is important to recall that in Microsoft SQL Server, database engine security surrounds the concept of a "securable", which is any object like a table, stored procedure, trigger, etc. Rights are assigned to a principal on a securable.

SETTING UP THE ENVIRONMENT – (01A_Truncate Table Permissions.sql; script provided at the end of the article)

By the end of this demo, one user will be able to do all the CRUD operations except TRUNCATE, and the other will only be able to execute the TRUNCATE. All you will need for this test is any edition of SQL Server 2008. (With minor changes, these scripts can be made to work with SQL 2005.) We begin by creating the following:

1. A test database
2. Two database roles, with associated logins and users
3. A test table in the test database, populated with some data (I am using row constructors, which are new to SQL 2008)

Creating the modules that will be used to enforce permissions:

1. We have already created one of the modules that we will be assigning permissions to. That module is the table: TruncatePermissionsTest
2. We will now create two stored procedures; one is for the DELETE operation and the other for the TRUNCATE operation. Please note that for all practical purposes, the end result is the same – all data from the table TruncatePermissionsTest is removed

Assigning the permissions: Now comes the most important part of the demonstration – assigning permissions. A permissions matrix can be worked out as in the SECURITY MATRIX comment within the script below. To apply the security rights, we use the GRANT and DENY clauses. That's it! We are now ready for our big test!

THE TEST (01B_Truncate Table Test Queries.sql; script provided at the end of the article)

I will now need two separate SSMS connections, one with the login AllowedTruncate and the other with the login RestrictedTruncate. Running the test is simple; all that's required is to run through the script 01B_Truncate Table Test Queries.sql. What the original article demonstrates via screen-shots is the behavior of SQL Server when logged in as the AllowedTruncate user. There are a few other combinations than what are highlighted here; I will leave the reader the right to explore the behavior of the RestrictedTruncate user and these additional scenarios as a form of self-study.

1. Testing SELECT permissions
2. Testing TRUNCATE permissions (Remember, "deny by default"?)
3. Trying to circumvent security by trying to TRUNCATE the table using the stored procedure

Hence, we have now proved that a user can indeed be granted permission to specifically perform a TRUNCATE. I also hope that the above has sparked curiosity towards putting some security around the potentially "destructive" operations of DELETE and TRUNCATE. I would like to wish each and every one of the readers a very happy and secure time with Microsoft SQL Server.

(Please find below the scripts – 01A_Truncate Table Permissions.sql and 01B_Truncate Table Test Queries.sql – that have been used in this demonstration. Please note that these scripts contain purely test-level code. They must not, at any cost, be used in the reader's production environments.)

01A_Truncate Table Permissions.sql

    /* ***************************************************************************
    Developed By  : Nakul Vachhrajani
    Functionality : This demo is focused on how to allow only TRUNCATE permissions
                    to a particular user
    How to Use    : 1. Run step-by-step through the sequence till Step 08 to
                       create a test database
                    2. Switch over to "Truncate Table Test Queries.sql" and
                       execute it step-by-step in two different SSMS windows,
                       one logged in as 'RestrictedTruncate', the other as
                       'AllowedTruncate'
                    3. Come back to "Truncate Table Permissions.sql"
                    4. Execute Step 10 to clean up!
    Modifications : December 13, 2010 - NAV - Updated to add a security matrix and
                    improve code readability when applying security
                    December 12, 2010 - NAV - Created
    *************************************************************************** */

    -- Step 01: Create a new test database
    CREATE DATABASE TruncateTestDB
    GO
    USE TruncateTestDB
    GO

    -- Step 02: Add roles and users to demonstrate the security of the TRUNCATE operation
    -- 2a. Create the new roles
    CREATE ROLE AllowedTruncateRole;
    GO
    CREATE ROLE RestrictedTruncateRole;
    GO

    -- 2b. Create new logins
    CREATE LOGIN AllowedTruncate WITH PASSWORD = 'truncate@2010', CHECK_POLICY = ON
    GO
    CREATE LOGIN RestrictedTruncate WITH PASSWORD = 'truncate@2010', CHECK_POLICY = ON
    GO

    -- 2c. Create new users using the roles and logins created above
    CREATE USER TruncateUser FOR LOGIN AllowedTruncate WITH DEFAULT_SCHEMA = dbo
    GO
    CREATE USER NoTruncateUser FOR LOGIN RestrictedTruncate WITH DEFAULT_SCHEMA = dbo
    GO

    -- 2d. Add the newly created logins to the newly created roles
    sp_addrolemember 'AllowedTruncateRole','TruncateUser'
    GO
    sp_addrolemember 'RestrictedTruncateRole','NoTruncateUser'
    GO

    -- Step 03: Change over to the test database
    USE TruncateTestDB
    GO

    -- Step 04: Create a test table within the test database
    CREATE TABLE TruncatePermissionsTest (Id INT IDENTITY(1,1), Name NVARCHAR(50))
    GO

    -- Step 05: Populate the required data
    INSERT INTO TruncatePermissionsTest VALUES (N'Delhi'), (N'Mumbai'), (N'Ahmedabad')
    GO

    -- Step 06: Encapsulate the DELETE within another module
    CREATE PROCEDURE proc_DeleteMyTable
    WITH EXECUTE AS SELF
    AS
    DELETE FROM TruncateTestDB..TruncatePermissionsTest
    GO

    -- Step 07: Encapsulate the TRUNCATE within another module
    CREATE PROCEDURE proc_TruncateMyTable
    WITH EXECUTE AS SELF
    AS
    TRUNCATE TABLE TruncateTestDB..TruncatePermissionsTest
    GO

    -- Step 08: Apply security
    /* ***************************SECURITY MATRIX*******************************
    Object                  | Permissions |              Login
                            |             | RestrictedTruncate | AllowedTruncate
                            |             | User:NoTruncateUser| User:TruncateUser
    ========================+=============+====================+=================
    TruncatePermissionsTest | SELECT,     |       GRANT        |    (Default)
                            | INSERT,     |                    |
                            | UPDATE,     |                    |
                            | DELETE      |                    |
    ------------------------+-------------+--------------------+-----------------
    TruncatePermissionsTest | ALTER       |       DENY         |    (Default)
    ------------------------+-------------+--------------------+-----------------
    proc_DeleteMyTable      | EXECUTE     |       GRANT        |      DENY
    ------------------------+-------------+--------------------+-----------------
    proc_TruncateMyTable    | EXECUTE     |       DENY         |      GRANT
    ***************************SECURITY MATRIX******************************* */

    /* Table: TruncatePermissionsTest */
    GRANT SELECT, INSERT, UPDATE, DELETE ON TruncateTestDB..TruncatePermissionsTest TO NoTruncateUser
    GO
    DENY ALTER ON TruncateTestDB..TruncatePermissionsTest TO NoTruncateUser
    GO

    /* Procedure: proc_DeleteMyTable */
    GRANT EXECUTE ON TruncateTestDB..proc_DeleteMyTable TO NoTruncateUser
    GO
    DENY EXECUTE ON TruncateTestDB..proc_DeleteMyTable TO TruncateUser
    GO

    /* Procedure: proc_TruncateMyTable */
    DENY EXECUTE ON TruncateTestDB..proc_TruncateMyTable TO NoTruncateUser
    GO
    GRANT EXECUTE ON TruncateTestDB..proc_TruncateMyTable TO TruncateUser
    GO

    -- Step 09: Test
    -- Switch over to "Truncate Table Test Queries.sql" and execute it
    -- step-by-step in two different SSMS windows:
    --   1. one where you have logged in as 'RestrictedTruncate', and
    --   2. the other as 'AllowedTruncate'

    -- Step 10: Cleanup
    sp_droprolemember 'AllowedTruncateRole','TruncateUser'
    GO
    sp_droprolemember 'RestrictedTruncateRole','NoTruncateUser'
    GO
    DROP USER TruncateUser
    GO
    DROP USER NoTruncateUser
    GO
    DROP LOGIN AllowedTruncate
    GO
    DROP LOGIN RestrictedTruncate
    GO
    DROP ROLE AllowedTruncateRole
    GO
    DROP ROLE RestrictedTruncateRole
    GO
    USE MASTER
    GO
    DROP DATABASE TruncateTestDB
    GO

01B_Truncate Table Test Queries.sql

    /* ***************************************************************************
    Developed By  : Nakul Vachhrajani
    Functionality : This demo is focused on how to allow only TRUNCATE permissions
                    to a particular user
    How to Use    : 1. Switch over to this from "Truncate Table Permissions.sql",
                       Step #09
                    2. Execute this step-by-step in two different SSMS windows:
                       a. one where you have logged in as 'RestrictedTruncate', and
                       b. the other as 'AllowedTruncate'
                    3. Return to "Truncate Table Permissions.sql"
                    4. Execute Step 10 to clean up!
    Modifications : December 12, 2010 - NAV - Created
    *************************************************************************** */

    -- Step 09A: Switch to the test database
    USE TruncateTestDB
    GO

    -- Step 09B: Ensure that we have valid data
    SELECT * FROM TruncatePermissionsTest
    GO
    -- (Expected: the following error will occur if logged in as "AllowedTruncate")
    -- Msg 229, Level 14, State 5, Line 1
    -- The SELECT permission was denied on the object 'TruncatePermissionsTest', database 'TruncateTestDB', schema 'dbo'.

    -- Step 09C: Attempt to truncate data from the table without using the stored procedure
    TRUNCATE TABLE TruncatePermissionsTest
    GO
    -- (Expected: the following error will occur)
    -- Msg 1088, Level 16, State 7, Line 2
    -- Cannot find the object "TruncatePermissionsTest" because it does not exist or you do not have permissions.

    -- Step 09D: Regenerate test data
    INSERT INTO TruncatePermissionsTest VALUES (N'London'), (N'Paris'), (N'Berlin')
    GO
    -- (Expected: the following error will occur if logged in as "AllowedTruncate")
    -- Msg 229, Level 14, State 5, Line 1
    -- The INSERT permission was denied on the object 'TruncatePermissionsTest', database 'TruncateTestDB', schema 'dbo'.

    -- Step 09E: Attempt to truncate data from the table using the stored procedure
    EXEC proc_TruncateMyTable
    GO
    -- (Expected: will execute successfully with the 'AllowedTruncate' user;
    --  will error out as under with 'RestrictedTruncate')
    -- Msg 229, Level 14, State 5, Procedure proc_TruncateMyTable, Line 1
    -- The EXECUTE permission was denied on the object 'proc_TruncateMyTable', database 'TruncateTestDB', schema 'dbo'.

    -- Step 09F: Regenerate test data
    INSERT INTO TruncatePermissionsTest VALUES (N'Madrid'), (N'Rome'), (N'Athens')
    GO

    -- Step 09G: Attempt to delete data from the table without using the stored procedure
    DELETE FROM TruncatePermissionsTest
    GO
    -- (Expected: the following error will occur if logged in as "AllowedTruncate")
    -- Msg 229, Level 14, State 5, Line 2
    -- The DELETE permission was denied on the object 'TruncatePermissionsTest', database 'TruncateTestDB', schema 'dbo'.

    -- Step 09H: Regenerate test data
    INSERT INTO TruncatePermissionsTest VALUES (N'Spain'), (N'Italy'), (N'Greece')
    GO

    -- Step 09I: Attempt to delete data from the table using the stored procedure
    EXEC proc_DeleteMyTable
    GO
    -- (Expected: the following error will occur if logged in as "AllowedTruncate")
    -- Msg 229, Level 14, State 5, Procedure proc_DeleteMyTable, Line 1
    -- The EXECUTE permission was denied on the object 'proc_DeleteMyTable', database 'TruncateTestDB', schema 'dbo'.

    -- Step 09J: Close this SSMS window and return to "Truncate Table Permissions.sql"

Thank you, Nakul, for taking up the challenge and proving that the Ahmedabad and Gandhinagar SQL Server User Group has the talent to solve difficult problems.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

Filed under: Best Practices, Pinal Dave, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Security, SQL Server, SQL Tips and Tricks, T SQL, Technology


  • Q1 2010 New Feature: Paging with RadGridView for Silverlight and WPF

We are glad to announce that the Q1 2010 release has added another weapon to RadGridView's growing arsenal of features. This is the brand new RadDataPager control, which provides the user interface for paging through a collection of data. The good news is that RadDataPager can be used to page any collection. It does not depend on RadGridView in any way, so you will be free to use it with the rest of your ItemsControls if you choose to do so. Before you read on, you might want to download the samples solution that I have attached. It contains a sample project for every scenario that I will discuss later on. Looking at the code while reading will make things much easier for you. There is something for everyone among the 10 Visual Studio projects that are included in the solution. So go and grab it.

I. Paging essentials

The single most important piece of software concerning paging in Silverlight is the System.ComponentModel.IPagedCollectionView interface. Those of you who are on the WPF front need not worry though. As you might already know, Telerik's Silverlight and WPF controls share the same code base. Since WPF does not contain a similar interface, Telerik has provided its own Telerik.Windows.Data.IPagedCollectionView. The IPagedCollectionView interface contains several important members which are used by RadGridView to perform the actual paging (a sketch of the interface follows below). Silverlight provides a default implementation of this interface which, naturally, is called PagedCollectionView. You should definitely take a look at its source code in case you are interested in what is going on under the hood, but this is not a prerequisite for our discussion. The WPF default implementation of the interface is Telerik's QueryableCollectionView which, among many other interfaces, implements IPagedCollectionView.
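For reference, here is an abbreviated sketch of that paging contract (trimmed member list; consult the Silverlight SDK and Telerik documentation for the authoritative definitions):

    // Sketch of System.ComponentModel.IPagedCollectionView (Silverlight);
    // Telerik.Windows.Data.IPagedCollectionView mirrors it for WPF.
    public interface IPagedCollectionView
    {
        bool CanChangePage { get; }
        bool IsPageChanging { get; }
        int  ItemCount { get; }           // items known to be in the source collection
        int  PageIndex { get; }           // zero-based index of the current page
        int  PageSize { get; set; }
        int  TotalItemCount { get; set; } // -1 when the total count is unknown

        bool MoveToFirstPage();
        bool MoveToLastPage();
        bool MoveToNextPage();
        bool MoveToPage(int pageIndex);
        bool MoveToPreviousPage();

        event EventHandler<EventArgs> PageChanged;
        event EventHandler<PageChangingEventArgs> PageChanging;
    }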
II. No Paging

In order to gradually build up my case, I will start with a very simple example that lacks paging whatsoever. It might sound stupid, but this will help us build on top of this paging-devoid example. Let us imagine that we have the simplest possible scenario: a simple IEnumerable and an ItemsControl that shows its contents. This will look like this:

No paging (code-behind):

    IEnumerable itemsSource = Enumerable.Range(0, 1000);
    this.itemsControl.ItemsSource = itemsSource;

XAML:

    <Border Grid.Row="0" BorderBrush="Black" BorderThickness="1" Margin="5">
        <ListBox Name="itemsControl"/>
    </Border>
    <Border Grid.Row="1" BorderBrush="Black" BorderThickness="1" Margin="5">
        <TextBlock Text="No Paging"/>
    </Border>

Nothing special for now – just some data displayed in a ListBox. The two sample projects in the solution that I have attached are:

- NoPaging_WPF
- NoPaging_SL3

With every next sample those two projects will evolve in some way or another.

III. Paging simple collections

The single most important property of RadDataPager is its Source property. This is where you pass in your collection of data for paging. More often than not your collection will not be an IPagedCollectionView. It will either be a simple List<T>, or an ObservableCollection<T>, or anything that is simply IEnumerable. Unless you had paging in mind when you designed your project, it is almost certain that your data source will not be pageable out of the box. So what are the options?

III.1. Wrapping the simple collection in an IPagedCollectionView

If you look at the constructors of PagedCollectionView and QueryableCollectionView you will notice that you can pass in a simple IEnumerable as a parameter. Those two classes will wrap it and provide paging capabilities over your original data. In fact, this is what RadGridView does internally. It wraps your original collection in a QueryableCollectionView in order to easily perform many useful tasks such as filtering, sorting, and others – but in our case the most important one is paging. So let us start our series of examples with the most simplistic one. Imagine that you have a simple IEnumerable which is the source for an ItemsControl. Here is how to wrap it in order to enable paging:

Silverlight:

    IEnumerable itemsSource = Enumerable.Range(0, 1000);
    var pagedSource = new PagedCollectionView(itemsSource);
    this.radDataPager.Source = pagedSource;
    this.itemsControl.ItemsSource = pagedSource;

WPF:

    IEnumerable itemsSource = Enumerable.Range(0, 1000);
    var pagedSource = new QueryableCollectionView(itemsSource);
    this.radDataPager.Source = pagedSource;
    this.itemsControl.ItemsSource = pagedSource;

XAML:

    <Border Grid.Row="0" BorderBrush="Black" BorderThickness="1" Margin="5">
        <ListBox Name="itemsControl"/>
    </Border>
    <Border Grid.Row="1" BorderBrush="Black" BorderThickness="1" Margin="5">
        <telerikGrid:RadDataPager Name="radDataPager"
                                  PageSize="10"
                                  IsTotalItemCountFixed="True"
                                  DisplayMode="All"/>
    </Border>

This will do the trick. It is quite simple, isn't it? The two sample projects in the solution that I have attached are:

- PagingSimpleCollectionWithWrapping_WPF
- PagingSimpleCollectionWithWrapping_SL3

III.2. Binding to RadDataPager.PagedSource

In case you do not like this approach there is a better one. When you assign an IEnumerable as the Source of a RadDataPager, it will automatically wrap it in a QueryableCollectionView and expose it through its PagedSource property. From then on, you can attach any number of ItemsControls to the PagedSource and they will be automatically paged. Here is how to do this entirely in XAML:

    <Border Grid.Row="0" BorderBrush="Black" BorderThickness="1" Margin="5">
        <ListBox Name="itemsControl"
                 ItemsSource="{Binding PagedSource, ElementName=radDataPager}"/>
    </Border>
    <Border Grid.Row="1" BorderBrush="Black" BorderThickness="1" Margin="5">
        <telerikGrid:RadDataPager Name="radDataPager"
                                  Source="{Binding ItemsSource}"
                                  PageSize="10"
                                  IsTotalItemCountFixed="True"
                                  DisplayMode="All"/>
    </Border>

The two sample projects in the solution that I have attached are:

- PagingSimpleCollectionWithPagedSource_WPF
- PagingSimpleCollectionWithPagedSource_SL3

IV. Paging collections implementing IPagedCollectionView

Those of you who are using WCF RIA Services should feel very lucky. After a quick look with Reflector or the debugger we can see that the DomainDataSource.Data property is in fact an instance of the DomainDataSourceView class. This class implements a handful of useful interfaces:

- ICollectionView
- IEnumerable
- INotifyCollectionChanged
- IEditableCollectionView
- IPagedCollectionView
- INotifyPropertyChanged

Luckily, IPagedCollectionView is among them, which lets you do the whole paging on the server. So let's do this. We will add a DomainDataSource control to our page/window and connect the items control and the pager to it.
Here is how to do this:

    <riaControls:DomainDataSource x:Name="invoicesDataSource"
                                  AutoLoad="True"
                                  QueryName="GetInvoicesQuery">
        <riaControls:DomainDataSource.DomainContext>
            <services:ChinookDomainContext/>
        </riaControls:DomainDataSource.DomainContext>
    </riaControls:DomainDataSource>
    <Border Grid.Row="0" BorderBrush="Black" BorderThickness="1" Margin="5">
        <ListBox Name="itemsControl"
                 ItemsSource="{Binding Data, ElementName=invoicesDataSource}"/>
    </Border>
    <Border Grid.Row="1" BorderBrush="Black" BorderThickness="1" Margin="5">
        <telerikGrid:RadDataPager Name="radDataPager"
                                  Source="{Binding Data, ElementName=invoicesDataSource}"
                                  PageSize="10"
                                  IsTotalItemCountFixed="True"
                                  DisplayMode="All"/>
    </Border>

By the way, you can replace the ListBox from the above code snippet with any other ItemsControl. It can be RadGridView, it can be the MS DataGrid, you name it. Essentially, RadDataPager is sending paging commands to the DomainDataSource.Data. It does not care who, what, or how many different controls are bound to this same Data property of the DomainDataSource control. So if you would like to experiment with this, you can throw in any number of other ItemsControls next to the ListBox, bind them in the same manner, and all of them will be paged by our single RadDataPager. Furthermore, you can throw in any number of RadDataPagers and bind them to the same property; paging with any one of them will then automatically update all of the rest. The whole picture is simply beautiful, and we can do all of this thanks to WCF RIA Services. The two sample projects (Silverlight only) in the solution that I have attached are:

- PagingIPagedCollectionView
- PagingIPagedCollectionView.Web

V. Paging RadGridView

While you can replace the ListBox in any of the above examples with a RadGridView, RadGridView offers something extra. Similar to the DomainDataSource.Data property, the RadGridView.Items collection implements the IPagedCollectionView interface. So you are already thinking: "Then why not bind the Source property of RadDataPager to RadGridView.Items?" Well, that's exactly what you can do, and you will start paging RadGridView out of the box. It is as simple as that – no code-behind is involved:

    <Border Grid.Row="0" BorderBrush="Black" BorderThickness="1" Margin="5">
        <telerikGrid:RadGridView Name="radGridView"
                                 ItemsSource="{Binding ItemsSource}"/>
    </Border>
    <Border Grid.Row="1" BorderBrush="Black" BorderThickness="1" Margin="5">
        <telerikGrid:RadDataPager Name="radDataPager"
                                  Source="{Binding Items, ElementName=radGridView}"
                                  PageSize="10"
                                  IsTotalItemCountFixed="True"
                                  DisplayMode="All"/>
    </Border>

The two sample projects in the solution that I have attached are:

- PagingRadGridView_SL3
- PagingRadGridView_WPF

With this last example I think I have covered every possible paging combination. In case you would like to see an example of something that I have not covered, please let me know.
Also, make sure you check out those great online examples:

- WCF RIA Services with DomainDataSource
- Paging Configurator
- Endless Paging
- Paging Any Collection
- Paging RadGridView

Happy Paging!

Download Full Source Code

