Search Results

Search found 5770 results on 231 pages for 'sense hofstede'.


  • AudioTrack lag: obtainBuffer timed out

    - by BTR
    I'm playing WAVs on my Android phone by loading the file and feeding the bytes into AudioTrack.write() via the FileInputStream BufferedInputStream DataInputStream method. The audio plays fine and when it is, I can easily adjust sample rate, volume, etc on the fly with nice performance. However, it's taking about two full seconds for a track to start playing. I know AudioTrack has an inescapable delay, but this is ridiculous. Every time I play a track, I get this: 03-13 14:55:57.100: WARN/AudioTrack(3454): obtainBuffer timed out (is the CPU pegged?) 0x2e9348 user=00000960, server=00000000 03-13 14:55:57.340: WARN/AudioFlinger(72): write blocked for 233 msecs, 9 delayed writes, thread 0xba28 I've noticed that the delayed write count increases by one every time I play a track -- even across multiple sessions -- from the time the phone has been turned on. The block time is always 230 - 240ms, which makes sense considering a minimum buffer size of 9600 on this device (9600 / 44100). I've seen this message in countless searches on the Internet, but it usually seems to be related to not playing audio at all or skipping audio. In my case, it's just a delayed start. I'm running all my code in a high priority thread. Here's a truncated-yet-functional version of what I'm doing. This is the thread callback in my playback class. Again, this works (only playing 16-bit, 44.1kHz, stereo files right now), it just takes forever to start and has that obtainBuffer/delayed write message every time. public void run() { // Load file FileInputStream mFileInputStream; try { // mFile is instance of custom file class -- this is correct, // so don't sweat this line mFileInputStream = new FileInputStream(mFile.path()); } catch (FileNotFoundException e) {} BufferedInputStream mBufferedInputStream = new BufferedInputStream(mFileInputStream, mBufferLength); DataInputStream mDataInputStream = new DataInputStream(mBufferedInputStream); // Skip header try { if (mDataInputStream.available() > 44) mDataInputStream.skipBytes(44); } catch (IOException e) {} // Initialize device mAudioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, ConfigManager.SAMPLE_RATE, AudioFormat.CHANNEL_CONFIGURATION_STEREO, AudioFormat.ENCODING_PCM_16BIT, ConfigManager.AUDIO_BUFFER_LENGTH, AudioTrack.MODE_STREAM); mAudioTrack.play(); // Initialize buffer byte[] mByteArray = new byte[mBufferLength]; int mBytesToWrite = 0; int mBytesWritten = 0; // Loop to keep thread running while (mRun) { // This flag is turned on when the user presses "play" while (mPlaying) { try { // Check if data is available if (mDataInputStream.available() > 0) { // Read data from file and write to audio device mBytesToWrite = mDataInputStream.read(mByteArray, 0, mBufferLength); mBytesWritten += mAudioTrack.write(mByteArray, 0, mBytesToWrite); } } catch (IOException e) { } } } } If I can get past the artificially long lag, I can easily deal with the inherit latency by starting my write at a later, predictable position (ie, skip past the minimum buffer length when I start playing a file).
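    A frequently suggested mitigation for slow AudioTrack start-up is to queue roughly one minimum-buffer's worth of PCM data before calling play(), so the track does not begin in an underrun state waiting for its first write. The sketch below is illustrative rather than the poster's code: the 44.1 kHz / stereo / 16-bit format matches the files described above, and pcmStream stands in for the already-opened DataInputStream.

    import java.io.DataInputStream;
    import java.io.IOException;

    import android.media.AudioFormat;
    import android.media.AudioManager;
    import android.media.AudioTrack;

    final class PrimedPlayback {
        // Queue one minimum-buffer of PCM before play() so playback does not start in underrun.
        static AudioTrack startPrimed(DataInputStream pcmStream) throws IOException {
            int minBuf = AudioTrack.getMinBufferSize(44100,
                    AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);

            AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
                    AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
                    minBuf, AudioTrack.MODE_STREAM);

            byte[] prime = new byte[minBuf];
            int read = pcmStream.read(prime, 0, prime.length);
            if (read > 0) {
                track.write(prime, 0, read); // data is queued while the track is still stopped
            }
            track.play();                    // start only once the internal buffer has data to consume
            return track;
        }
    }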


  • How to calculate where bullet hits

    - by lkjoel
    I have been trying to write an FPS in C/X11/OpenGL, but the issue that I have encountered is with calculating where the bullet hits. I have used a horrible technique, and it only sometimes works: pos size, p; size.x = 0.1; size.z = 0.1; // Since the game is technically top-down (but in a 3D perspective) // Positions are in X/Z, no Y float f; // Counter float d = FIRE_MAX + 1 /* Shortest Distance */, d1 /* Distance being calculated */; x = 0; // Index of object to hit for (f = 0.0; f < FIRE_MAX; f += .01) { // Go forwards p.x = player->pos.x + f * sin(toRadians(player->rot.x)); p.z = player->pos.z - f * cos(toRadians(player->rot.x)); // Get all objects that collide with the current position of the bullet short* objs = _colDetectGetObjects(p, size, objects); for (i = 0; i < MAX_OBJECTS; i++) { if (objs[i] == -1) { continue; } // Check the distance between the object and the player d1 = sqrt( pow((objects[i].pos.x - player->pos.x), 2) + pow((objects[i].pos.z - player->pos.z), 2)); // If it's closer, set it as the object to hit if (d1 < d) { x = i; d = d1; } } // If there was an object, hit it if (x > 0) { hit(&objects[x], FIRE_DAMAGE, explosions, currtime); break; } } It just works by making a for-loop and calculating any objects that might collide with where the bullet currently is. This, of course, is very slow, and sometimes doesn't even work. What would be the preferred way to calculate where the bullet hits? I have thought of making a line and seeing if any objects collide with that line, but I have no idea how to do that kind of collision detection. EDIT: I guess my question is this: How do I calculate the nearest object colliding in a line (that might not be a straight 45/90 degree angle)? Or are there any simpler methods of calculating where the bullet hits? The bullet is sort of like a laser, in the sense that gravity does not affect it (writing an old-school game, so I don't want it to be too realistic)
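    The usual answer here is a ray cast: treat the bullet as a ray from the player and each object as a bounding circle in the X/Z plane, then keep the closest intersection. The sketch below is not taken from the question's codebase; the vec2 type, field names and radius are placeholders, and dir is assumed to be the normalized facing vector built from player->rot.x.

    /* Illustrative ray-vs-circle test in the X/Z plane. */
    typedef struct { float x, z; } vec2;

    /* Returns the distance along the ray to the hit, or -1.0f for a miss.
     * origin = shooter position, dir = normalized facing vector, center/r = object circle. */
    float ray_circle_hit(vec2 origin, vec2 dir, vec2 center, float r)
    {
        vec2 oc = { center.x - origin.x, center.z - origin.z };
        float t = oc.x * dir.x + oc.z * dir.z;        /* projection of the object onto the ray */
        if (t < 0.0f)
            return -1.0f;                             /* object is behind the shooter */
        float px = origin.x + t * dir.x;              /* closest point on the ray to the center */
        float pz = origin.z + t * dir.z;
        float dx = center.x - px, dz = center.z - pz;
        if (dx * dx + dz * dz > r * r)
            return -1.0f;                             /* ray passes outside the circle */
        return t;                                     /* hit; caller keeps the smallest t */
    }

    One pass over all objects, keeping the smallest non-negative return value, replaces the 0.01-step march entirely and cannot step over thin objects.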


  • C# recursive programming with lists

    - by David Torrey
    I am working on a program where each item can hold an array of items (i'm making a menu, which has a tree-like structure) currently i have the items as a list, instead of an array, but I don't feel like I'm using it to its full potential to simplify code. I chose a list over a standard array because the interface (.add, .remove, etc...) makes a lot of sense. I have code to search through the structure and return the path of the name (i.e. Item.subitem.subsubitem.subsubsubitem). Below is my code: public class Item { //public Item[] subitem; <-- Array of Items public List<Item> subitem; // <-- List of Items public Color itemColor = Color.FromArgb(50,50,200); public Rectangle itemSize = new Rectangle(0,0,64,64); public Bitmap itemBitmap = null; public string itemName; public string LocateItem(string searchName) { string tItemName = null; //if the item name matches the search parameter, send it up) if (itemName == searchName) { return itemName; } if (subitem != null) { //spiral down a level foreach (Item tSearchItem in subitem) { tItemName = tSearchItem.LocateItem(searchName); if (tItemName != null) break; //exit for if item was found } } //do name logic (use index numbers) //if LocateItem of the subitems returned nothing and the current item is not a match, return null (not found) if (tItemName == null && itemName != searchName) { return null; } //if it's not the item being searched for and the search item was found, change the string and return it up if (tItemName != null && itemName != searchName) { tItemName.Insert(0, itemName + "."); //insert the parent name on the left --> TopItem.SubItem.SubSubItem.SubSubSubItem return tItemName; } //default not found return null; } } My question is if there is an easier way to do this with lists? I've been going back and forth in my head as to whether I should use lists or just an array. The only reason I have a list is so that I don't have to make code to resize the array each time I add or remove an item.
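    One detail worth flagging for anyone reusing the snippet above: System.String is immutable, so tItemName.Insert(0, itemName + ".") returns a new string that is immediately discarded, and the parent name never actually gets prepended to the path. A minimal correction of that branch, keeping the original names:

    // If a descendant matched, prepend this item's name and pass the path upward.
    if (tItemName != null && itemName != searchName)
    {
        // String.Insert does not modify in place; build and return the new value instead.
        return itemName + "." + tItemName;   // e.g. TopItem.SubItem.SubSubItem
    }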


  • Parameter pack argument consumption

    - by yuri kilochek
    It is possible to get the first element of the parameter pack like this template <typename... Elements> struct type_list { }; template <typename TypeList> struct type_list_first_element { }; template <typename FirstElement, typename... OtherElements> struct type_list_first_element<type_list<FirstElement, OtherElements...>> { typedef FirstElement type; }; int main() { typedef type_list<int, float, char> list; typedef type_list_first_element<list>::type element; return 0; } but not possible to similary get the last element like this template <typename... Elements> struct type_list { }; template <typename TypeList> struct type_list_last_element { }; template <typename LastElement, typename... OtherElements> struct type_list_last_element<type_list<OtherElements..., LastElement>> { typedef LastElement type; }; int main() { typedef type_list<int, float, char> list; typedef type_list_last_element<list>::type element; return 0; } with gcc 4.7.1 complaining: error: 'type' in 'struct type_list_last_element<type_list<int, float, char>>' does not name a type What paragraps from the standard describe this behaviour? It seems to me that template parameter packs are greedy in a sense that they consume all matching arguments, which in this case means that OtherElements consumes all three arguments (int, float and char) and then there is nothing left for LastElement so the compilation fails. Am i correct in the assumption? EDIT: To clarify: I am not asking how to extract the last element from the parameter pack, i know how to do that. What i actually want is to pick the pack apart from the back as opposed to the front, and as such recursing all the way to the back for each element would be ineffective. Apparentely reversing the sequence beforehand is the most sensible choice.
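    The assumption in the question is essentially right: under C++11's [temp.deduct.type] rules, a pack expansion that is not the last template argument makes the whole argument list a non-deduced context, so type_list<OtherElements..., LastElement> can never be matched and that partial specialization is simply never used. A recursive formulation that peels elements off the front until one remains sidesteps the problem; a self-contained sketch:

    template <typename... Elements>
    struct type_list { };

    template <typename TypeList>
    struct type_list_last_element;              // primary template left undefined

    // Base case: one element left, so that is the last one.
    template <typename OnlyElement>
    struct type_list_last_element<type_list<OnlyElement>> {
        typedef OnlyElement type;
    };

    // Recursive case: discard the first element and recurse on the rest.
    template <typename FirstElement, typename... OtherElements>
    struct type_list_last_element<type_list<FirstElement, OtherElements...>> {
        typedef typename type_list_last_element<type_list<OtherElements...>>::type type;
    };

    int main() {
        typedef type_list<int, float, char> list;
        typedef type_list_last_element<list>::type element;  // element is char
        return 0;
    }

    The recursion is linear in the pack size, which is why reversing the list first, as the poster suggests, is the usual alternative when many elements have to be taken from the back.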


  • C++ - Error: expected unqualified-id before ‘using’

    - by Francisco P.
    Hello, everyone. I am having some trouble on a project I'm working on. Here's the header file for the calor class: #ifndef _CALOR_ #define _CALOR_ #include "gradiente.h" using namespace std; class Calor : public Gradiente { public: Calor(); Calor(int a); ~Calor(); int getTemp(); int getMinTemp(); void setTemp(int a); void setMinTemp(int a); void mostraSensor(); }; #endif When I try to compile it: calor.h|6|error: expected unqualified-id before ‘using’| Why does this happen? I've been searching online and learned this error occurs mostly due to corrupted included files. Makes no sense to me, though. This class inherits from gradiente: #ifndef _GRADIENTE_ #define _GRADIENTE_ #include "sensor.h" using namespace std; class Gradiente : public Sensor { protected: int vActual, vMin; public: Gradiente(); ~Gradiente(); } #endif Which in turn inherits from sensor #ifndef _SENSOR_ #define _SENSOR_ #include <iostream> #include <fstream> #include <string> #include "definicoes.h" using namespace std; class Sensor { protected: int tipo; int IDsensor; bool estadoAlerta; bool estadoActivo; static int numSensores; public: Sensor(/*PARAMETROS*/); Sensor(ifstream &); ~Sensor(); int getIDsensor(); bool getEstadoAlerta(); bool getEstadoActivo(); void setEstadoAlerta(int a); void setEstadoActivo(int a); virtual void guardaSensor(ofstream &); virtual void mostraSensor(); // FUNÇÃO COMUM /* virtual int funcaoComum() = 0; virtual int funcaoComum(){return 0;};*/ }; #endif For completeness' sake, here's definicoes.h #ifndef _DEFINICOES_ #define _DEFINICOES_ const unsigned int SENSOR_MOVIMENTO = 0; const unsigned int SENSOR_SOM = 1; const unsigned int SENSOR_PRESSAO = 2; const unsigned int SENSOR_CALOR = 3; const unsigned int SENSOR_CONTACTO = 4; const unsigned int MIN_MOVIMENTO = 10; const unsigned int MIN_SOM = 10; const unsigned int MIN_PRESSAO = 10; const unsigned int MIN_CALOR = 35; #endif Any help'd be much appreciated. Thank you for your time. Thanks for your time!
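    For anyone hitting the same message: the gradiente.h quoted above ends the class body with } but no trailing semicolon, so the compiler only complains at the next token it reads, which is the using line of the including file calor.h; that is why the reported location looks unrelated. The fix is simply:

    // gradiente.h -- as quoted above, but with the terminating semicolon the class needs.
    #ifndef _GRADIENTE_
    #define _GRADIENTE_

    #include "sensor.h"

    using namespace std;

    class Gradiente : public Sensor {
    protected:
        int vActual, vMin;

    public:
        Gradiente();
        ~Gradiente();
    };   // <-- this semicolon is missing in the original header

    #endif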


  • A Security (encryption) Dilemma

    - by TravisPUK
    I have an internal WPF client application that accesses a database. The application is a central resource for a Support team and as such includes Remote Access/Login information for clients. At the moment this database is not available via a web interface etc, but one day is likely to. The remote access information includes the username and passwords for the client's networks so that our client's software applications can be remotely supported by us. I need to store the usernames and passwords in the database and provide the support consultants access to them so that they can login to the client's system and then provide support. Hope this is making sense. So the dilemma is that I don't want to store the usernames and passwords in cleartext on the database to ensure that if the DB was ever compromised, I am not then providing access to our client's networks to whomever gets the database. I have looked at two-way encryption of the passwords, but as they say, two-way is not much different to cleartext as if you can decrypt it, so can an attacker... eventually. The problem here is that I have setup a method to use a salt and a passcode that are stored in the application, I have used a salt that is stored in the db, but all have their weaknesses, ie if the app was reflected it exposes the salts etc. How can I secure the usernames and passwords in my database, and yet still provide the ability for my support consultants to view the information in the application so they can use it to login? This is obviously different to storing user's passwords as these are one way because I don't need to know what they are. But I do need to know what the client's remote access passwords are as we need to enter them in at the time of remoting to them. Anybody have some theories on what would be the best approach here? update The function I am trying to build is for our CRM application that will store the remote access details for the client. The CRM system provides call/issue tracking functionality and during the course of investigating the issue, the support consultant will need to remote in. They will then view the client's remote access details and make the connection
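    There is no scheme that keeps the passwords both recoverable and safe from someone who holds everything the application holds; the practical compromise most answers converge on is symmetric encryption with a key that is never stored in the database (a certificate store, DPAPI, an HSM, or a passphrase entered when the service starts). A rough sketch of the encryption half, with all names illustrative and the decryption side mirroring it:

    using System.IO;
    using System.Security.Cryptography;
    using System.Text;

    static class CredentialProtector
    {
        // 'key' must come from somewhere an attacker who only has the database cannot reach:
        // DPAPI, a certificate store, an HSM, or operator input at application start.
        public static byte[] Encrypt(string plainText, byte[] key)
        {
            using (var aes = Aes.Create())
            {
                aes.Key = key;
                aes.GenerateIV();
                using (var ms = new MemoryStream())
                {
                    ms.Write(aes.IV, 0, aes.IV.Length);   // store the IV alongside the ciphertext
                    using (var cs = new CryptoStream(ms, aes.CreateEncryptor(), CryptoStreamMode.Write))
                    {
                        byte[] data = Encoding.UTF8.GetBytes(plainText);
                        cs.Write(data, 0, data.Length);
                    }
                    return ms.ToArray();
                }
            }
        }
    }

    This only shifts the problem to protecting the key, but it does mean a copied database is useless on its own, which is the specific risk described above.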


  • Is there a way to apply a CSS class from within a style?

    - by zashu
    I'm trying to be more modular in my CSS style sheets and was wondering if there is some feature like an include or apply that allows the author to apply a set of styles dynamically. Since I am having a hard time wording the question, perhaps an example will make more sense. Let's say, for example, I have the following CSS: .red {color:#e00b0b} #footer a {font-size:0.8em} h2 {font-size:1.4em; font-weight:bold;} In my page, let's say that I want both the footer links and h2 elements to use the special red color (there may be other locations I would like to use it as well). Ideally, I would like to do something like the following: .red {color:#e00b0b} #footer a {font-size:0.8em; apply-class:".red";} h2 {font-size:1.4em; font-weight:bold; apply-class:".red";} To me, this feels "modular" in a way because I can make modifications to the .red class without having to worry so much about where it is used, and other locations can use the styles in that class without worrying about, specifically, what they are. I understand that I have the following options and have included why, in my fairly inexperienced opinion, they are less-than-perfect: Add the color property to every element I want to be that color. Not ideal because, if I change the color, I have to update every rule to match the new color. Add the red class to every element I want to be red. Not ideal because it means that my HTML is dictating presentation. Create an additional rule that selects every element I want to be red and apply the color property to that. Not ideal because it is harder to find all of the rules that style a specific element, making maintenance more of a challenge Maybe I'm just being an ass and the following options are the only options and I should stick with them. I'm wondering, however, if the "ideal" (well, my ideal) method exists and, if so, what is the proper syntax? If it doesn't exist, option 3 above seems like my best bet. However, I would like to get confirmation.
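    Plain CSS has no apply-class or mixin mechanism (preprocessors such as Sass and LESS add exactly that, via @extend and mixins), so option 3 above, grouping the selectors that share the rule, is the closest native equivalent. Written out against the example styles:

    /* Option 3: one grouped rule owns the colour, the others stay unchanged. */
    .red, #footer a, h2 { color: #e00b0b; }

    #footer a { font-size: 0.8em; }
    h2        { font-size: 1.4em; font-weight: bold; }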


  • Users and roles in context

    - by Eric W.
    I'm trying to get a sense of how to implement the user/role relationships for an application I'm writing. The persistence layer is Google App Engine's datastore, which places some interesting (but generally beneficial) constraints on what can be done. Any thoughts are appreciated. It might be helpful to keep things very concrete. I would like there to be organizations, users, test content and test administrations (records of tests that have been taken). A user can have the role of participant (test-taker), contributor of test material or both. A user can also be a member of zero or more organizations. In the role of participant, the user can see the previous administrations of tests he or she has taken. The user can also see a test administration of another participant if that participant has given the user authorization. The user can see test material that has been made public, and he or she can see restricted content as a participant during a specific administration of a test for which that user has been authorized by an organization. As a member of an organization, the user can see restricted content in the role of contributor, and he or she might or might not also be able to edit the content. Each organization should have one or more administrators that can determine whether a member can see and edit content and determine who has admin privileges. There should also be one or more application-wide superusers that can troubleshoot and solve problems. Members of organizations can see the administrations of tests that the participants concerned have authorized them to see, and they can see anonymous data if no authorization has been given. A user cannot see the test results of another user in any other circumstances. Since there are no joins in the App Engine datastore, it might be necessary to have things less normalized than usual for the typical SQL database in order to ensure that queries that check permissions are fast (e.g., ones that determine whether a link is to be displayed). My questions are: How do I move forward on this? Should I spend a lot of time up front in order to get the model right, or can I iterate several times and gradually roll in additional complexity? Does anyone have some general ideas about how to break things up in this instance? Are there any GAE libraries that handle roles in a way that is compatible with this arrangement?


  • jQuery center multiple dynamic images in different sized containers

    - by JVK Design
    So I've found similar questions but none that answer all the questions I have and I know there must be a simple jQuery answer to this. I've got multiple images that are being dynamically placed in their own containing div that have overflow:hidden, they need to fill their containing divs and be centered(horizontally and vertically) also. The containing divs will be different sizes as well. So in short: multiple different sized images fill and center in containing div. containing divs will be different sizes. will be used multiple times on a page. Hopefully this image helps explain what I'm after. Click here to view the image. HTML I'm using but can be changed <div class="imageHolder"> <div class="first SlideImage"> <img src="..." alt="..."/> </div> <div class="second SlideImage"> <img src="..." alt="..."/> </div> <div class="third SlideImage"> <img src="..." alt="..."/> </div> </div> And the CSS .imageHandler{ float:left; width:764px; height:70px; margin:1px 0px 0px; } .imageHolder .SlideImage{ float:left; position:relative; overflow:hidden; } .imageHolder .SlideImage img{ position:absolute; } .imageHolder .first.SlideImage{ width:381px; height:339px; margin-right:1px; } .imageHolder .second.SlideImage{margin-bottom:1px;} .imageHolder .second.SlideImage, .imageHolder .third.SlideImage { width: 382px; height: 169px; } Ask me anything if this doesn't make sense, thanks in advance
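    A common way to get the centring part, assuming the markup and CSS above (an absolutely positioned img inside an overflow:hidden container), is to offset each image by half the difference between the container and image sizes once the images have loaded. A sketch rather than a finished slider:

    // Centre each image inside its .SlideImage container, horizontally and vertically.
    // Run after the images have loaded so width()/height() report real sizes.
    $(window).on('load', function () {
        $('.imageHolder .SlideImage').each(function () {
            var $box = $(this),
                $img = $box.find('img');

            $img.css({
                left: ($box.width()  - $img.width())  / 2,   // negative values crop evenly
                top:  ($box.height() - $img.height()) / 2
            });
        });
    });

    Scaling each image to fill its box first (compare the aspect ratios, then set either width or height to 100%) can be added in the same loop before the offsets are computed.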


  • Storing an Image with PHP?

    - by Chris
    I'm trying to store an Image in my website so I can use it easily but I found this php code from here and I can't quite make much sense of it.. I'm just starting php and I dont quite know what to change and what to keep.. I'd greatly appreciate it if you could explain this a little better for me, thanks. <?php $allowedExts = array("jpg", "jpeg", "gif", "png"); $extension = end(explode(".", $_FILES["file"]["name"])); if ((($_FILES["file"]["type"] == "image/gif") || ($_FILES["file"]["type"] == "image/jpeg") || ($_FILES["file"]["type"] == "image/png") || ($_FILES["file"]["type"] == "image/pjpeg")) && ($_FILES["file"]["size"] < 20000) && in_array($extension, $allowedExts)) { if ($_FILES["file"]["error"] > 0) { echo "Return Code: " . $_FILES["file"]["error"] . "<br />"; } else { echo "Upload: " . $_FILES["file"]["name"] . "<br />"; echo "Type: " . $_FILES["file"]["type"] . "<br />"; echo "Size: " . ($_FILES["file"]["size"] / 1024) . " Kb<br />"; echo "Temp file: " . $_FILES["file"]["tmp_name"] . "<br />"; if (file_exists("upload/" . $_FILES["file"]["name"])) { echo $_FILES["file"]["name"] . " already exists. "; } else { move_uploaded_file($_FILES["file"]["tmp_name"], "upload/" . $_FILES["file"]["name"]); echo "Stored in: " . "upload/" . $_FILES["file"]["name"]; } } } else { echo "Invalid file"; } ?>
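    In short, the snippet checks the reported MIME type, the extension and the size, then moves the uploaded temp file into an upload/ directory; the $_FILES["file"] entries exist because an HTML form posted a file input named "file" with enctype="multipart/form-data". A condensed, commented restatement of the same flow (not a replacement for proper validation on a real site):

    <?php
    // Condensed sketch of the upload flow above; "file" must match the form field name.
    $allowed = array("jpg", "jpeg", "gif", "png");

    $name   = $_FILES["file"]["name"];
    $ext    = strtolower(pathinfo($name, PATHINFO_EXTENSION));   // extension check
    $okSize = $_FILES["file"]["size"] < 20000;                   // ~20 KB limit, as in the original
    $okErr  = $_FILES["file"]["error"] === UPLOAD_ERR_OK;        // the upload itself succeeded

    if ($okErr && $okSize && in_array($ext, $allowed)) {
        // PHP keeps the upload in a temp file; move it somewhere permanent.
        move_uploaded_file($_FILES["file"]["tmp_name"], "upload/" . basename($name));
        echo "Stored in: upload/" . basename($name);
    } else {
        echo "Invalid file";
    }
    ?>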


  • I have finally traded my Blackberry in for a Droid!

    - by Bob Porter
    Over the years I have used a number of different types of phones. Windows Mobile, Blackberry, Nokia, and now Android. Until the Blackberry, which was my last phone (and I still have one issued from my office) I had never found a phone that “just worked” especially with email and messaging. The Blackberry did, and does, excel at those functions. My last personal phone was a Storm 1 which was Blackberry’s first touch screen phone. The Storm 2 was an improved version that fixed some screen press detection issues from the first model and it added Wifi. Over the last few years I have watched others acquire and fall in love with their ‘Droid’s including a number of iPhone users which surprised me. Our office has until recently only supported Blackberry phones, adding iPhones within the last year or so. When I spoke with our internal telecom folks they confirmed they were evaluating Android phones, but felt they still were not secure enough out of the box for corporate use and SOX compliance. That being said, as a personal phone, the Droid Rocks! I am impressed with its speed, the number of apps available, and the overall design. It is not as “flashy” as an iPhone but it does everything that I care about and more. The model I bought is the Motorola Droid 2 Global from Verizon. It is currently running Android 2.2 for it’s OS, 2.3 is just around the corner. It has 8 gigs of internal flash memory and can handle up to a 32 gig SDCard. (I currently have 2 8 gig cards, one for backups, and have ordered a 16 gig card!) Being a geek at heart, I “rooted” the phone which means gained superuser access to the OS on the phone. And opens a number of doors for further modifications down the road. Also being a geek meant I have already setup a development environment and built and deployed the obligatory “Hello Droid” application. I will be writing of my development experiences with this new platform here often, to start off I thought I would share my current application list to give you an idea what I am using. 
Zedge: http://market.android.com/details?id=net.zedge.android XDA: http://market.android.com/details?id=com.quoord.tapatalkxda.activity WRAL.com: http://market.android.com/details?id=com.mylocaltv.wral Wireless Tether: http://market.android.com/details?id=android.tether Winamp: http://market.android.com/details?id=com.nullsoft.winamp Win7 Clock: http://market.android.com/details?id=com.androidapps.widget.toggles.win7 Wifi Analyzer: http://market.android.com/details?id=com.farproc.wifi.analyzer WeatherBug: http://market.android.com/details?id=com.aws.android Weather Widget Forecast Addon: http://market.android.com/details?id=com.androidapps.weather.forecastaddon Weather & Toggle Widgets: http://market.android.com/details?id=com.androidapps.widget.weather2 Vlingo: http://market.android.com/details?id=com.vlingo.client VirtualTENHO-G: http://market.android.com/details?id=jp.bustercurry.virtualtenho_g Twitter: http://market.android.com/details?id=com.twitter.android TweetDeck: http://market.android.com/details?id=com.thedeck.android.app Tricorder: http://market.android.com/details?id=org.hermit.tricorder Titanium Backup PRO: http://market.android.com/details?id=com.keramidas.TitaniumBackupPro Titanium Backup: http://market.android.com/details?id=com.keramidas.TitaniumBackup Terminal Emulator: http://market.android.com/details?id=jackpal.androidterm Talking Tom Free: http://market.android.com/details?id=com.outfit7.talkingtom Stock Blue: http://market.android.com/details?id=org.adw.theme.stockblue ST: Red Alert Free: http://market.android.com/details?id=com.oldplanets.redalertwallpaper ST: Red Alert: http://market.android.com/details?id=com.oldplanets.redalertwallpaperplus Solitaire: http://market.android.com/details?id=com.kmagic.solitaire Skype: http://market.android.com/details?id=com.skype.raider Silent Time Lite: http://market.android.com/details?id=com.QuiteHypnotic.SilentTime ShopSavvy: http://market.android.com/details?id=com.biggu.shopsavvy Shopper: http://market.android.com/details?id=com.google.android.apps.shopper Shiny clock: http://market.android.com/details?id=com.androidapps.clock.shiny ShareMyApps: http://market.android.com/details?id=com.mattlary.shareMyApps Sense Glass ADW Theme: http://market.android.com/details?id=com.dtanquary.senseglassadwtheme ROM Manager: http://market.android.com/details?id=com.koushikdutta.rommanager Roboform Bookmarklet Installer: http://market.android.com/details?id=roboformBookmarkletInstaller.android.com RealCalc: http://market.android.com/details?id=uk.co.nickfines.RealCalc Package Buddy: http://market.android.com/details?id=com.psyrus.packagebuddy Overstock: http://market.android.com/details?id=com.overstock OMGPOP Toggle: http://market.android.com/details?id=com.androidapps.widget.toggle.omgpop OI File Manager: http://market.android.com/details?id=org.openintents.filemanager nook: http://market.android.com/details?id=bn.ereader MyAtlas-Google Maps Navigation ext: http://market.android.com/details?id=com.adaptdroid.navbookfree3 MSN Droid: http://market.android.com/details?id=msn.droid.im Matrix Live Wallpaper: http://market.android.com/details?id=com.jarodyv.livewallpaper.matrix LogMeIn: http://market.android.com/details?id=com.logmein.ignitionpro.android Liveshare: http://market.android.com/details?id=com.cooliris.app.liveshare Kobo: http://market.android.com/details?id=com.kobobooks.android Instant Heart Rate: http://market.android.com/details?id=si.modula.android.instantheartrate IMDb: http://market.android.com/details?id=com.imdb.mobile Home 
Plus Weather: http://market.android.com/details?id=com.androidapps.widget.skin.weather.homeplus Handcent SMS: http://market.android.com/details?id=com.handcent.nextsms H7C Clock: http://market.android.com/details?id=com.androidapps.widget.clock.skin.h7c GTasks: http://market.android.com/details?id=org.dayup.gtask GPS Status: http://market.android.com/details?id=com.eclipsim.gpsstatus2 Google Voice: http://market.android.com/details?id=com.google.android.apps.googlevoice Google Sky Map: http://market.android.com/details?id=com.google.android.stardroid Google Reader: http://market.android.com/details?id=com.google.android.apps.reader GoMarks: http://market.android.com/details?id=com.androappsdev.gomarks Goggles: http://market.android.com/details?id=com.google.android.apps.unveil Glossy Black Weather: http://market.android.com/details?id=com.androidapps.widget.weather.skin.glossyblack Fox News: http://market.android.com/details?id=com.foxnews.android Foursquare: http://market.android.com/details?id=com.joelapenna.foursquared FBReader: http://market.android.com/details?id=org.geometerplus.zlibrary.ui.android Fandango: http://market.android.com/details?id=com.fandango Facebook: http://market.android.com/details?id=com.facebook.katana Extensive Notes Pro: http://market.android.com/details?id=com.flufflydelusions.app.extensive_notes_donate Expense Manager: http://market.android.com/details?id=com.expensemanager Espresso UI (LightShow w/ Slide): http://market.android.com/details?id=com.jaguirre.slide.lightshow Engadget: http://market.android.com/details?id=com.aol.mobile.engadget Earth: http://market.android.com/details?id=com.google.earth Drudge: http://market.android.com/details?id=com.iavian.dreport Dropbox: http://market.android.com/details?id=com.dropbox.android DroidForums: http://market.android.com/details?id=com.quoord.tapatalkdrodiforums.activity DroidArmor ADW: http://market.android.com/details?id=mobi.addesigns.droidarmorADW Droid Weather Icons: http://market.android.com/details?id=com.androidapps.widget.weather.skins.white Droid 2 Bootstrapper: http://market.android.com/details?id=com.koushikdutta.droid2.bootstrap doubleTwist: http://market.android.com/details?id=com.doubleTwist.androidPlayer Documents To Go: http://market.android.com/details?id=com.dataviz.docstogo Digital Clock Widget: http://market.android.com/details?id=com.maize.digitalClock Desk Home: http://market.android.com/details?id=com.cowbellsoftware.deskdock Default Clock: http://market.android.com/details?id=com.androidapps.widget.clock.skins.defaultclock Daily Expense Manager: http://market.android.com/details?id=com.techahead.ExpenseManager ConnectBot: http://market.android.com/details?id=org.connectbot Colorized Weather Icons: http://market.android.com/details?id=com.androidapps.widget.weather.colorized Chrome to Phone: http://market.android.com/details?id=com.google.android.apps.chrometophone CardStar: http://market.android.com/details?id=com.cardstar.android Books: http://market.android.com/details?id=com.google.android.apps.books Black Ipad Toggle: http://market.android.com/details?id=com.androidapps.toggle.widget.skin.blackipad Black Glass ADW Theme: http://market.android.com/details?id=com.dtanquary.blackglassadwtheme Bing: http://market.android.com/details?id=com.microsoft.mobileexperiences.bing BeyondPod Unlock Key: http://market.android.com/details?id=mobi.beyondpod.unlockkey BeyondPod: http://market.android.com/details?id=mobi.beyondpod BeejiveIM: http://market.android.com/details?id=com.beejive.im Beautiful 
Widgets Animations Addon: http://market.android.com/details?id=com.levelup.bw.forecast Beautiful Widgets: http://market.android.com/details?id=com.levelup.beautifulwidgets Beautiful Live Weather: http://market.android.com/details?id=com.levelup.beautifullive BBC News: http://market.android.com/details?id=net.jimblackler.newswidget Barnacle Wifi Tether: http://market.android.com/details?id=net.szym.barnacle Barcode Scanner: http://market.android.com/details?id=com.google.zxing.client.android ASTRO SMB Module: http://market.android.com/details?id=com.metago.astro.smb ASTRO Pro: http://market.android.com/details?id=com.metago.astro.pro ASTRO Bluetooth Module: http://market.android.com/details?id=com.metago.astro.network.bluetooth ASTRO: http://market.android.com/details?id=com.metago.astro AppBrain App Market: http://market.android.com/details?id=com.appspot.swisscodemonkeys.apps App Drawer Icon Pack: http://market.android.com/details?id=com.adwtheme.appdrawericonpack androidVNC: http://market.android.com/details?id=android.androidVNC AndroidGuys: http://market.android.com/details?id=com.handmark.mpp.AndroidGuys Android System Info: http://market.android.com/details?id=com.electricsheep.asi AndFTP: http://market.android.com/details?id=lysesoft.andftp ADWTheme Red: http://market.android.com/details?id=adw.theme.red ADWLauncher EX: http://market.android.com/details?id=org.adwfreak.launcher ADW.Theme.One: http://market.android.com/details?id=org.adw.theme.one ADW.Faded theme: http://market.android.com/details?id=com.xrcore.adwtheme.faded ADW Gingerbread: http://market.android.com/details?id=me.robertburns.android.adwtheme.gingerbread Advanced Task Killer Free: http://market.android.com/details?id=com.rechild.advancedtaskkiller Adobe Reader: http://market.android.com/details?id=com.adobe.reader Adobe Flash Player 10.1: http://market.android.com/details?id=com.adobe.flashplayer Adobe AIR: http://market.android.com/details?id=com.adobe.air 3G Auto OnOff: http://market.android.com/details?id=com.yuantuo --- Generated by ShareMyApps http://market.android.com/details?id=com.mattlary.shareMyApps Sent from my Droid


  • Android - creating a custom preferences activity screen

    - by Bill Osuch
    Android applications can maintain their own internal preferences (and allow them to be modified by users) with very little coding. In fact, you don't even need to write an code to explicitly save these preferences, it's all handled automatically! Create a new Android project, with an intial activity title Main. Create two more activities: ShowPrefs, which extends Activity Set Prefs, which extends PreferenceActivity Add these two to your AndroidManifest.xml file: <activity android:name=".SetPrefs"></activity> <activity android:name=".ShowPrefs"></activity> Now we'll work on fleshing out each activity. First, open up the main.xml layout file and add a couple of buttons to it: <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"    android:orientation="vertical"    android:layout_width="fill_parent"    android:layout_height="fill_parent"> <Button android:text="Edit Preferences"    android:id="@+id/prefButton"    android:layout_width="wrap_content"    android:layout_height="wrap_content"    android:layout_gravity="center_horizontal"/> <Button android:text="Show Preferences"    android:id="@+id/showButton"    android:layout_width="wrap_content"    android:layout_height="wrap_content"    android:layout_gravity="center_horizontal"/> </LinearLayout> Next, create a couple button listeners in Main.java to handle the clicks and start the other activities: Button editPrefs = (Button) findViewById(R.id.prefButton);       editPrefs.setOnClickListener(new View.OnClickListener() {              public void onClick(View view) {                  Intent myIntent = new Intent(view.getContext(), SetPrefs.class);                  startActivityForResult(myIntent, 0);              }      });           Button showPrefs = (Button) findViewById(R.id.showButton);      showPrefs.setOnClickListener(new View.OnClickListener() {              public void onClick(View view) {                  Intent myIntent = new Intent(view.getContext(), ShowPrefs.class);                  startActivityForResult(myIntent, 0);              }      }); Now, we'll create the actual preferences layout. You'll need to create a file called preferences.xml inside res/xml, and you'll likely have to create the xml directory as well. Add the following xml: <?xml version="1.0" encoding="utf-8"?> <PreferenceScreen xmlns:android="http://schemas.android.com/apk/res/android"> </PreferenceScreen> First we'll add a category, which is just a way to group similar preferences... sort of a horizontal bar. Add this inside the PreferenceScreen tags: <PreferenceCategory android:title="First Category"> </PreferenceCategory> Now add a Checkbox and an Edittext box (inside the PreferenceCategory tags): <CheckBoxPreference    android:key="checkboxPref"    android:title="Checkbox Preference"    android:summary="This preference can be true or false"    android:defaultValue="false"/> <EditTextPreference    android:key="editTextPref"    android:title="EditText Preference"    android:summary="This allows you to enter a string"    android:defaultValue="Nothing"/> The key is how you will refer to the preference in code, the title is the large text that will be displayed, and the summary is the smaller text (this will make sense when you see it). Let's say we've got a second group of preferences that apply to a different part of the app. 
Add a new category just below the first one: <PreferenceCategory android:title="Second Category"> </PreferenceCategory> In there we'll a list with radio buttons, so add: <ListPreference    android:key="listPref"    android:title="List Preference"    android:summary="This preference lets you select an item in a array"    android:entries="@array/listArray"    android:entryValues="@array/listValues" /> When complete, your full xml file should look like this: <?xml version="1.0" encoding="utf-8"?> <PreferenceScreen xmlns:android="http://schemas.android.com/apk/res/android">  <PreferenceCategory android:title="First Category"> <CheckBoxPreference    android:key="checkboxPref"    android:title="Checkbox Preference"    android:summary="This preference can be true or false"    android:defaultValue="false"/> <EditTextPreference    android:key="editTextPref"    android:title="EditText Preference"    android:summary="This allows you to enter a string"    android:defaultValue="Nothing"/>  </PreferenceCategory>  <PreferenceCategory android:title="Second Category">   <ListPreference    android:key="listPref"    android:title="List Preference"    android:summary="This preference lets you select an item in a array"    android:entries="@array/listArray"    android:entryValues="@array/listValues" />  </PreferenceCategory> </PreferenceScreen> However, when you try to save it, you'll get an error because you're missing your array definition. To fix this, add a file called arrays.xml in res/values, and paste in the following: <?xml version="1.0" encoding="utf-8"?> <resources>  <string-array name="listArray">      <item>Value 1</item>      <item>Value 2</item>      <item>Value 3</item>  </string-array>  <string-array name="listValues">      <item>1</item>      <item>2</item>      <item>3</item>  </string-array> </resources> Finally (for the preferences screen at least...) add the code that will display the preferences layout to the SetPrefs.java file:  @Override     public void onCreate(Bundle savedInstanceState) {      super.onCreate(savedInstanceState);      addPreferencesFromResource(R.xml.preferences);      } OK, so now we've got an activity that will set preferences, and save them without the need to write custom save code. Let's throw together an activity to work with the saved preferences. 
Create a new layout called showpreferences.xml and give it three Textviews: <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"     android:orientation="vertical"     android:layout_width="fill_parent"     android:layout_height="fill_parent"> <TextView   android:id="@+id/textview1"     android:layout_width="fill_parent"     android:layout_height="wrap_content"     android:text="textview1"/> <TextView   android:id="@+id/textview2"     android:layout_width="fill_parent"     android:layout_height="wrap_content"     android:text="textview2"/> <TextView   android:id="@+id/textview3"     android:layout_width="fill_parent"     android:layout_height="wrap_content"     android:text="textview3"/> </LinearLayout> Open up the ShowPrefs.java file and have it use that layout: setContentView(R.layout.showpreferences); Then add the following code to load the DefaultSharedPreferences and display them: SharedPreferences prefs = PreferenceManager.getDefaultSharedPreferences(this);    TextView text1 = (TextView)findViewById(R.id.textview1); TextView text2 = (TextView)findViewById(R.id.textview2); TextView text3 = (TextView)findViewById(R.id.textview3);    text1.setText(new Boolean(prefs.getBoolean("checkboxPref", false)).toString()); text2.setText(prefs.getString("editTextPref", "<unset>"));; text3.setText(prefs.getString("listPref", "<unset>")); Fire up the application in the emulator and click the Edit Preferences button. Set various things, click the back button, then the Edit Preferences button again. Notice that your choices have been saved.   Now click the Show Preferences button, and you should see the results of what you set:   There are two more preference types that I did not include here: RingtonePreference - shows a radioGroup that lists your ringtones PreferenceScreen - allows you to embed a second preference screen inside the first - it opens up a new set of preferences when clicked
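    For completeness, the two preference types mentioned at the end would drop straight into the same preferences.xml, inside the existing PreferenceScreen; a sketch with placeholder keys and titles:

    <!-- Illustrative only: a ringtone picker and a nested screen. -->
    <RingtonePreference
       android:key="ringtonePref"
       android:title="Ringtone Preference"
       android:summary="Pick a notification sound"
       android:ringtoneType="notification"
       android:showDefault="true"
       android:showSilent="true" />

    <PreferenceScreen
       android:key="nestedScreen"
       android:title="More settings">
       <CheckBoxPreference
          android:key="nestedCheckboxPref"
          android:title="A preference on the nested screen"
          android:defaultValue="false"/>
    </PreferenceScreen>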


  • Ajax Control Toolkit Now Supports jQuery

    - by Stephen.Walther
    I’m excited to announce the September 2013 release of the Ajax Control Toolkit, which now supports building new Ajax Control Toolkit controls with jQuery. You can download the latest release of the Ajax Control Toolkit from http://AjaxControlToolkit.CodePlex.com or you can install the Ajax Control Toolkit directly within Visual Studio by executing the following NuGet command: The New jQuery Extender Base Class This release of the Ajax Control Toolkit introduces a new jQueryExtender base class. This new base class enables you to create Ajax Control Toolkit controls with jQuery instead of the Microsoft Ajax Library. Currently, only one control in the Ajax Control Toolkit has been rewritten to use the new jQueryExtender base class (only one control has been jQueryized). The ToggleButton control is the first of the Ajax Control Toolkit controls to undergo this dramatic transformation. All of the other controls in the Ajax Control Toolkit are written using the Microsoft Ajax Library. We hope to gradually rewrite these controls as jQuery controls over time. You can view the new jQuery ToggleButton live at the Ajax Control Toolkit sample site: http://www.asp.net/ajaxLibrary/AjaxControlToolkitSampleSite/ToggleButton/ToggleButton.aspx Why are we rewriting Ajax Control Toolkits with jQuery? There are very few developers actively working with the Microsoft Ajax Library while there are thousands of developers actively working with jQuery. Because we want talented developers in the community to continue to contribute to the Ajax Control Toolkit, and because almost all JavaScript developers are familiar with jQuery, it makes sense to support jQuery with the Ajax Control Toolkit. Also, we believe that the Ajax Control Toolkit is a great framework for Web Forms developers who want to build new ASP.NET controls that use JavaScript. The Ajax Control Toolkit has great features such as automatic bundling, minification, caching, and compression. We want to make it easy for ASP.NET developers to build new controls that take advantage of these features. Instantiating Controls with data-* Attributes We took advantage of the new JQueryExtender base class to change the way that Ajax Control Toolkit controls are instantiated. In the past, adding an Ajax Control Toolkit to a page resulted in inline JavaScript being injected into the page. For example, adding the ToggleButton control to a page injected the following HTML and script: <input id="ctl00_SampleContent_CheckBox1" name="ctl00$SampleContent$CheckBox1" type="checkbox" checked="checked" /> <script type="text/javascript"> //<![CDATA[ Sys.Application.add_init(function() { $create(Sys.Extended.UI.ToggleButtonBehavior, {"CheckedImageAlternateText":"Check", "CheckedImageUrl":"ToggleButton_Checked.gif", "ImageHeight":19, "ImageWidth":19, "UncheckedImageAlternateText":"UnCheck", "UncheckedImageUrl":"ToggleButton_Unchecked.gif", "id":"ctl00_SampleContent_ToggleButtonExtender1"}, null, null, $get("ctl00_SampleContent_CheckBox1")); }); //]]> </script> Notice the call to the JavaScript $create() method at the bottom of the page. When using the Microsoft Ajax Library, this call to the $create() method is necessary to create the Ajax Control Toolkit control. This inline script looks pretty ugly to a modern JavaScript developer. Inline script! Horrible! 
The jQuery version of the ToggleButton injects the following HTML and script into the page: <input id="ctl00_SampleContent_CheckBox1" name="ctl00$SampleContent$CheckBox1" type="checkbox" checked="checked" data-act-togglebuttonextender="imageWidth:19, imageHeight:19, uncheckedImageUrl:'ToggleButton_Unchecked.gif', checkedImageUrl:'ToggleButton_Checked.gif', uncheckedImageAlternateText:'I don&#39;t understand why you don&#39;t like ASP.NET', checkedImageAlternateText:'It&#39;s really nice to hear from you that you like ASP.NET'" /> Notice that there is no script! There is no call to the $create() method. In fact, there is no inline JavaScript at all. The jQuery version of the ToggleButton uses an HTML5 data-* attribute instead of an inline script. The ToggleButton control is instantiated with a data-act-togglebuttonextender attribute. Using data-* attributes results in much cleaner markup (You don’t need to feel embarrassed when selecting View Source in your browser). Ajax Control Toolkit versus jQuery So in a jQuery world why is the Ajax Control Toolkit needed at all? Why not just use jQuery plugins instead of the Ajax Control Toolkit? For example, there are lots of jQuery ToggleButton plugins floating around the Internet. Why not just use one of these jQuery plugins instead of using the Ajax Control Toolkit ToggleButton control? There are three main reasons why the Ajax Control Toolkit continues to be valuable in a jQuery world: Ajax Control Toolkit controls run on both the server and client jQuery plugins are client only. A jQuery plugin does not include any server-side code. If you need to perform any work on the server – think of the AjaxFileUpload control – then you can’t use a pure jQuery solution. Ajax Control Toolkit controls provide a better Visual Studio experience You don’t get any design time experience when you use jQuery plugins within Visual Studio. Ajax Control Toolkit controls, on the other hand, are designed to work with Visual Studio. For example, you can use the Visual Studio Properties window to set Ajax Control Toolkit control properties. Ajax Control Toolkit controls shield you from working with JavaScript I like writing code in JavaScript. However, not all developers like JavaScript and some developers want to completely avoid writing any JavaScript code at all. The Ajax Control Toolkit enables you to take advantage of JavaScript (and the latest features of HTML5) in your ASP.NET Web Forms websites without writing a single line of JavaScript. Better ToolkitScriptManager Documentation With this release, we have added more detailed documentation for using the ToolkitScriptManager. In particular, we added documentation that describes how to take advantage of the new bundling, minification, compression, and caching features of the Ajax Control Toolkit. The ToolkitScriptManager documentation is part of the Ajax Control Toolkit sample site and it can be read here: http://www.asp.net/ajaxLibrary/AjaxControlToolkitSampleSite/ToolkitScriptManager/ToolkitScriptManager.aspx Other Fixes This release of the Ajax Control Toolkit includes several important bug fixes. For example, the Ajax Control Toolkit Twitter control was completely rewritten with this release. Twitter is in the process of retiring the first version of their API. You can read about their plans here: https://dev.twitter.com/blog/planning-for-api-v1-retirement We completely rewrote the Ajax Control Toolkit Twitter control to use the new Twitter API. 
To take advantage of the new Twitter API, you must get a key and access token from Twitter and add the key and token to your web.config file. Detailed instructions for using the new version of the Ajax Control Toolkit Twitter control can be found here: http://www.asp.net/ajaxLibrary/AjaxControlToolkitSampleSite/Twitter/Twitter.aspx   Summary We’ve made some really great changes to the Ajax Control Toolkit over the last two releases to modernize the toolkit. In the previous release, we updated the Ajax Control Toolkit to use a better bundling, minification, compression, and caching system. With this release, we updated the Ajax Control Toolkit to support jQuery. We also continue to update the Ajax Control Toolkit with important bug fixes. I hope you like these changes and I look forward to hearing your feedback.


  • Using Unity – Part 1

    - by nmarun
    I have been going through implementing some IoC pattern using Unity and so I decided to share my learnings (I know that’s not an English word, but you get the point). Ok, so I have an ASP.net project named ProductWeb and a class library called ProductModel. In the model library, I have a class called Product: 1: public class Product 2: { 3: public string Name { get; set; } 4: public string Description { get; set; } 5:  6: public Product() 7: { 8: Name = "iPad"; 9: Description = "Not just a reader!"; 10: } 11:  12: public string WriteProductDetails() 13: { 14: return string.Format("Name: {0} Description: {1}", Name, Description); 15: } 16: } In the Page_Load event of the default.aspx, I’ll need something like: 1: Product product = new Product(); 2: productDetailsLabel.Text = product.WriteProductDetails(); Now, let’s go ‘Unity’fy this application. I assume you have all the bits for the pattern. If not, get it from here. I found this schematic representation of Unity pattern from the above link. This image might not make much sense to you now, but as we proceed, things will get better. The first step to implement the Inversion of Control pattern is to create interfaces that your types will implement. An IProduct interface is added to the ProductModel project. 1: public interface IProduct 2: { 3: string WriteProductDetails(); 4: } Let’s make our Product class to implement the IProduct interface. The application will compile and run as before despite the changes made. Add the following references to your web project: Microsoft.Practices.Unity Microsoft.Practices.Unity.Configuration Microsoft.Practices.Unity.StaticFactory Microsoft.Practices.ObjectBuilder2 We need to add a few lines to the web.config file. The line below tells what version of Unity pattern we’ll be using. 1: <configSections> 2: <section name="unity" type="Microsoft.Practices.Unity.Configuration.UnityConfigurationSection, Microsoft.Practices.Unity.Configuration, Version=1.2.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/> 3: </configSections> Add another block with the same name as the section name declared above – ‘unity’. 1: <unity> 2: <typeAliases> 3: <!--Custom object types--> 4: <typeAlias alias="IProduct" type="ProductModel.IProduct, ProductModel"/> 5: <typeAlias alias="Product" type="ProductModel.Product, ProductModel"/> 6: </typeAliases> 7: <containers> 8: <container name="unityContainer"> 9: <types> 10: <type type="IProduct" mapTo="Product"/> 11: </types> 12: </container> 13: </containers> 14: </unity> From the Unity Configuration schematic shown above, you see that the ‘unity’ block has a ‘typeAliases’ and a ‘containers’ segment. The typeAlias element gives a ‘short-name’ for a type. This ‘short-name’ can be used to point to this type any where in the configuration file (web.config in our case, but all this information could be coming from an external xml file as well). The container element holds all the mapping information. This container is referenced through its name attribute in the code and you can have multiple of these container elements in the containers segment. The ‘type’ element in line 10 basically says: ‘When Unity requests to resolve the alias IProduct, return an instance of whatever the short-name of Product points to’. This is the most basic piece of Unity pattern and all of this is accomplished purely through configuration. 
So, in future you have a change in your model, all you need to do is - implement IProduct on the new model class and - either add a typeAlias for the new type and point the mapTo attribute to the new alias declared - or modify the mapTo attribute of the type element to point to the new alias (as the case may be). Now for the calling code. It’s a good idea to store your unity container details in the Application cache, as this is rarely bound to change and also adds for better performance. The Global.asax.cs file comes for our rescue: 1: protected void Application_Start(object sender, EventArgs e) 2: { 3: // create and populate a new Unity container from configuration 4: IUnityContainer unityContainer = new UnityContainer(); 5: UnityConfigurationSection section = (UnityConfigurationSection)ConfigurationManager.GetSection("unity"); 6: section.Containers["unityContainer"].Configure(unityContainer); 7: Application["UnityContainer"] = unityContainer; 8: } 9:  10: protected void Application_End(object sender, EventArgs e) 11: { 12: Application["UnityContainer"] = null; 13: } All this says is: create an instance of UnityContainer() and read the ‘unity’ section from the configSections segment of the web.config file. Then get the container named ‘unityContainer’ and store it in the Application cache. In my code-behind file, I’ll make use of this UnityContainer to create an instance of the Product type. 1: public partial class _Default : Page 2: { 3: private IUnityContainer unityContainer; 4: protected void Page_Load(object sender, EventArgs e) 5: { 6: unityContainer = Application["UnityContainer"] as IUnityContainer; 7: if (unityContainer == null) 8: { 9: productDetailsLabel.Text = "ERROR: Unity Container not populated in Global.asax.<p />"; 10: } 11: else 12: { 13: IProduct productInstance = unityContainer.Resolve<IProduct>(); 14: productDetailsLabel.Text = productInstance.WriteProductDetails(); 15: } 16: } 17: } Looking the ‘else’ block, I’m asking the unityContainer object to resolve the IProduct type. All this does, is to look at the matching type in the container, read its mapTo attribute value, get the full name from the alias and create an instance of the Product class. Fabulous!! I’ll go more in detail in the next blog. The code for this blog can be found here.
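    The same mapping can also be registered in code rather than in web.config, which is sometimes handier in unit tests or while debugging. A minimal equivalent using the Unity API (illustrative; it assumes the usual using Microsoft.Practices.Unity; directive):

    // Programmatic equivalent of the <type type="IProduct" mapTo="Product"/> mapping.
    IUnityContainer unityContainer = new UnityContainer();
    unityContainer.RegisterType<IProduct, Product>();

    IProduct productInstance = unityContainer.Resolve<IProduct>();
    string details = productInstance.WriteProductDetails();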


  • Multi-tenant ASP.NET MVC – Controllers

    - by zowens
    Part I – Introduction Part II – Foundation   The time has come to talk about controllers in a multi-tenant ASP.NET MVC architecture. This is actually the most critical design decision you will make when dealing with multi-tenancy with MVC. In my design, I took into account the design goals I mentioned in the introduction about inversion of control and what a tenant is to my design. Be aware that this is only one way to achieve multi-tenant controllers.   The Premise MvcEx (which is a sample written by Rob Ashton) utilizes dynamic controllers. Essentially a controller is “dynamic” in that multiple action results can be placed in different “controllers” with the same name. This approach is a bit too complicated for my design. I wanted to stick with plain old inheritance when dealing with controllers. The basic premise of my controller design is that my main host defines a set of universal controllers. It is the responsibility of the tenant to decide if the tenant would like to utilize these core controllers. This can be done either by straight usage of the controller or inheritance for extension of the functionality defined by the controller. The controller is resolved by a StructureMap container that is attached to the tenant, as discussed in Part II.   Controller Resolution I have been thinking about two different ways to resolve controllers with StructureMap. One way is to use named instances. This is a really easy way to simply pull the controller right out of the container without a lot of fuss. I ultimately chose not to use this approach. The reason for this decision is to ensure that the controllers are named properly. If a controller has a different named instance that the controller type, then the resolution has a significant disconnect and there are no guarantees. The final approach, the one utilized by the sample, is to simply pull all controller types and correlate the type with a controller name. This has a bit of a application start performance disadvantage, but is significantly more approachable for maintainability. For example, if I wanted to go back and add a “ControllerName” attribute, I would just have to change the ControllerFactory to suit my needs.   The Code The container factory that I have built is actually pretty simple. That’s really all we need. The most significant method is the GetControllersFor method. This method makes the model from the Container and determines all the concrete types for IController.  The thing you might notice is that this doesn’t depend on tenants, but rather containers. You could easily use this controller factory for an application that doesn’t utilize multi-tenancy. 
public class ContainerControllerFactory : IControllerFactory { private readonly ThreadSafeDictionary<IContainer, IDictionary<string, Type>> typeCache; public ContainerControllerFactory(IContainerResolver resolver) { Ensure.Argument.NotNull(resolver, "resolver"); this.ContainerResolver = resolver; this.typeCache = new ThreadSafeDictionary<IContainer, IDictionary<string, Type>>(); } public IContainerResolver ContainerResolver { get; private set; } public virtual IController CreateController(RequestContext requestContext, string controllerName) { var controllerType = this.GetControllerType(requestContext, controllerName); if (controllerType == null) return null; var controller = this.ContainerResolver.Resolve(requestContext).GetInstance(controllerType) as IController; // ensure the action invoker is a ContainerControllerActionInvoker if (controller != null && controller is Controller && !((controller as Controller).ActionInvoker is ContainerControllerActionInvoker)) (controller as Controller).ActionInvoker = new ContainerControllerActionInvoker(this.ContainerResolver); return controller; } public void ReleaseController(IController controller) { if (controller != null && controller is IDisposable) ((IDisposable)controller).Dispose(); } internal static IEnumerable<Type> GetControllersFor(IContainer container) { Ensure.Argument.NotNull(container); return container.Model.InstancesOf<IController>().Select(x => x.ConcreteType).Distinct(); } protected virtual Type GetControllerType(RequestContext requestContext, string controllerName) { Ensure.Argument.NotNull(requestContext, "requestContext"); Ensure.Argument.NotNullOrEmpty(controllerName, "controllerName"); var container = this.ContainerResolver.Resolve(requestContext); var typeDictionary = this.typeCache.GetOrAdd(container, () => GetControllersFor(container).ToDictionary(x => ControllerFriendlyName(x.Name))); Type found = null; if (typeDictionary.TryGetValue(ControllerFriendlyName(controllerName), out found)) return found; return null; } private static string ControllerFriendlyName(string value) { return (value ?? string.Empty).ToLowerInvariant().Without("controller"); } } One thing to note about my implementation is that we do not use namespaces that can be utilized in the default ASP.NET MVC controller factory. This is something that I don’t use and have no desire to implement and test. The reason I am not using namespaces in this situation is because each tenant has its own namespaces and the routing would not make sense in this case.   Because we are using IoC, dependencies are automatically injected into the constructor. For example, a tenant container could implement it’s own IRepository and a controller could be defined in the “main” project. The IRepository from the tenant would be injected into the main project’s controller. This is quite a useful feature.   Again, the source code is on GitHub here.   Up Next Up next is the view resolution. This is a complicated issue, so be prepared. I hope that you have found this series useful. If you have any questions about my implementation so far, send me an email or DM me on Twitter. I have had a lot of great conversations about multi-tenancy so far and I greatly appreciate the feedback!
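    Not shown in the post is where this factory gets plugged in; in a standard MVC application that is a one-liner at startup. A sketch, where containerResolver stands for whatever IContainerResolver implementation Part II of the series produced and BuildContainerResolver is a hypothetical helper:

    // Global.asax.cs (sketch) - have MVC build controllers through the tenant containers.
    protected void Application_Start()
    {
        AreaRegistration.RegisterAllAreas();
        RegisterRoutes(RouteTable.Routes);

        IContainerResolver containerResolver = BuildContainerResolver(); // hypothetical helper
        ControllerBuilder.Current.SetControllerFactory(
            new ContainerControllerFactory(containerResolver));
    }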

    Read the article

  • C# 4: The Curious ConcurrentDictionary

    - by James Michael Hare
    In my previous post (here) I did a comparison of the new ConcurrentQueue versus the old standard of a System.Collections.Generic Queue with simple locking.  The results were exactly what I would have hoped, that the ConcurrentQueue was faster with multi-threading for most all situations.  In addition, concurrent collections have the added benefit that you can enumerate them even if they're being modified. So I set out to see what the improvements would be for the ConcurrentDictionary, would it have the same performance benefits as the ConcurrentQueue did?  Well, after running some tests and multiple tweaks and tunes, I have good and bad news. But first, let's look at the tests.  Obviously there's many things we can do with a dictionary.  One of the most notable uses, of course, in a multi-threaded environment is for a small, local in-memory cache.  So I set about to do a very simple simulation of a cache where I would create a test class that I'll just call an Accessor.  This accessor will attempt to look up a key in the dictionary, and if the key exists, it stops (i.e. a cache "hit").  However, if the lookup fails, it will then try to add the key and value to the dictionary (i.e. a cache "miss").  So here's the Accessor that will run the tests: 1: internal class Accessor 2: { 3: public int Hits { get; set; } 4: public int Misses { get; set; } 5: public Func<int, string> GetDelegate { get; set; } 6: public Action<int, string> AddDelegate { get; set; } 7: public int Iterations { get; set; } 8: public int MaxRange { get; set; } 9: public int Seed { get; set; } 10:  11: public void Access() 12: { 13: var randomGenerator = new Random(Seed); 14:  15: for (int i=0; i<Iterations; i++) 16: { 17: // give a wide spread so will have some duplicates and some unique 18: var target = randomGenerator.Next(1, MaxRange); 19:  20: // attempt to grab the item from the cache 21: var result = GetDelegate(target); 22:  23: // if the item doesn't exist, add it 24: if(result == null) 25: { 26: AddDelegate(target, target.ToString()); 27: Misses++; 28: } 29: else 30: { 31: Hits++; 32: } 33: } 34: } 35: } Note that so I could test different implementations, I defined a GetDelegate and AddDelegate that will call the appropriate dictionary methods to add or retrieve items in the cache using various techniques. So let's examine the three techniques I decided to test: Dictionary with mutex - Just your standard generic Dictionary with a simple lock construct on an internal object. Dictionary with ReaderWriterLockSlim - Same Dictionary, but now using a lock designed to let multiple readers access simultaneously and then locked when a writer needs access. ConcurrentDictionary - The new ConcurrentDictionary from System.Collections.Concurrent that is supposed to be optimized to allow multiple threads to access safely. So the approach to each of these is also fairly straight-forward.  Let's look at the GetDelegate and AddDelegate implementations for the Dictionary with mutex lock: 1: var addDelegate = (key,val) => 2: { 3: lock (_mutex) 4: { 5: _dictionary[key] = val; 6: } 7: }; 8: var getDelegate = (key) => 9: { 10: lock (_mutex) 11: { 12: string val; 13: return _dictionary.TryGetValue(key, out val) ? val : null; 14: } 15: }; Nothing new or fancy here, just your basic lock on a private object and then query/insert into the Dictionary. 
Now, for the Dictionary with ReadWriteLockSlim it's a little more complex: 1: var addDelegate = (key,val) => 2: { 3: _readerWriterLock.EnterWriteLock(); 4: _dictionary[key] = val; 5: _readerWriterLock.ExitWriteLock(); 6: }; 7: var getDelegate = (key) => 8: { 9: string val; 10: _readerWriterLock.EnterReadLock(); 11: if(!_dictionary.TryGetValue(key, out val)) 12: { 13: val = null; 14: } 15: _readerWriterLock.ExitReadLock(); 16: return val; 17: }; And finally, the ConcurrentDictionary, which since it does all it's own concurrency control, is remarkably elegant and simple: 1: var addDelegate = (key,val) => 2: { 3: _concurrentDictionary[key] = val; 4: }; 5: var getDelegate = (key) => 6: { 7: string s; 8: return _concurrentDictionary.TryGetValue(key, out s) ? s : null; 9: };                    Then, I set up a test harness that would simply ask the user for the number of concurrent Accessors to attempt to Access the cache (as specified in Accessor.Access() above) and then let them fly and see how long it took them all to complete.  Each of these tests was run with 10,000,000 cache accesses divided among the available Accessor instances.  All times are in milliseconds. 1: Dictionary with Mutex Locking 2: --------------------------------------------------- 3: Accessors Mostly Misses Mostly Hits 4: 1 7916 3285 5: 10 8293 3481 6: 100 8799 3532 7: 1000 8815 3584 8:  9:  10: Dictionary with ReaderWriterLockSlim Locking 11: --------------------------------------------------- 12: Accessors Mostly Misses Mostly Hits 13: 1 8445 3624 14: 10 11002 4119 15: 100 11076 3992 16: 1000 14794 4861 17:  18:  19: Concurrent Dictionary 20: --------------------------------------------------- 21: Accessors Mostly Misses Mostly Hits 22: 1 17443 3726 23: 10 14181 1897 24: 100 15141 1994 25: 1000 17209 2128 The first test I did across the board is the Mostly Misses category.  The mostly misses (more adds because data requested was not in the dictionary) shows an interesting trend.  In both cases the Dictionary with the simple mutex lock is much faster, and the ConcurrentDictionary is the slowest solution.  But this got me thinking, and a little research seemed to confirm it, maybe the ConcurrentDictionary is more optimized to concurrent "gets" than "adds".  So since the ratio of misses to hits were 2 to 1, I decided to reverse that and see the results. So I tweaked the data so that the number of keys were much smaller than the number of iterations to give me about a 2 to 1 ration of hits to misses (twice as likely to already find the item in the cache than to need to add it).  And yes, indeed here we see that the ConcurrentDictionary is indeed faster than the standard Dictionary here.  I have a strong feeling that as the ration of hits-to-misses gets higher and higher these number gets even better as well.  This makes sense since the ConcurrentDictionary is read-optimized. Also note that I tried the tests with capacity and concurrency hints on the ConcurrentDictionary but saw very little improvement, I think this is largely because on the 10,000,000 hit test it quickly ramped up to the correct capacity and concurrency and thus the impact was limited to the first few milliseconds of the run. So what does this tell us?  Well, as in all things, ConcurrentDictionary is not a panacea.  It won't solve all your woes and it shouldn't be the only Dictionary you ever use.  So when should we use each? Use System.Collections.Generic.Dictionary when: You need a single-threaded Dictionary (no locking needed). 
You need a multi-threaded Dictionary that is loaded only once at creation and never modified (no locking needed). You need a multi-threaded Dictionary to store items where writes are far more prevalent than reads (locking needed). And use System.Collections.Concurrent.ConcurrentDictionary when: You need a multi-threaded Dictionary where the reads are far more prevalent than the writes. You need to be able to iterate over the collection without locking it even if it's being modified. Both Dictionaries have their strong suits; I have a feeling this is just one where you need to know from the design what you hope to use it for and make your decision based on those criteria.
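One footnote on the cache pattern itself: the look-up-then-add approach in the Accessor can let two threads miss on the same key and both add it. With ConcurrentDictionary the idiomatic way to fold the lookup and the insert into a single call is GetOrAdd — a minimal sketch, separate from the timed tests above:

using System.Collections.Concurrent;

internal static class CacheExample
{
    private static readonly ConcurrentDictionary<int, string> _cache =
        new ConcurrentDictionary<int, string>();

    public static string Get(int key)
    {
        // Returns the cached value if present, otherwise runs the factory and caches it.
        // The factory may run more than once under contention, but only one result is
        // ever stored, which is harmless for a cache like this one.
        return _cache.GetOrAdd(key, k => k.ToString());
    }
}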

    Read the article

  • Compiling examples for consuming the REST Endpoints for WCF Service using Agatha

    - by REA_ANDREW
    I recently made two contributions to the Agatha Project by Davy Brion over on Google Code, and one of the things I wanted to follow up with was a post showing examples and some, seemingly required tid bits.  The contributions which I made where: To support StructureMap To include REST (JSON and XML) support for the service contract The examples which I have made, I want to format them so they fit in with the current format of examples over on Agatha and hopefully create and submit a third patch which will include these examples to help others who wish to use these additions. Whilst building these examples for both XML and JSON I have learnt a couple of things which I feel are not really well documented, but are extremely good practice and once known make perfect sense.  I have chosen a real basic e-commerce context for my example Requests and Responses, and have also made use of the excellent tool AutoMapper, again on Google Code. Setting the scene I have followed the Pipes and Filters Pattern with the IQueryable interface on my Repository and exposed the following methods to query Products: IQueryable<Product> GetProducts(); IQueryable<Product> ByCategoryName(this IQueryable<Product> products, string categoryName) Product ByProductCode(this IQueryable<Product> products, String productCode) I have an interface for the IProductRepository but for the concrete implementation I have simply created a protected getter which populates a private List<Product> with 100 test products with random data.  Another good reason for following an interface based approach is that it will demonstrate usage of my first contribution which is the StructureMap support.  Finally the two Domain Objects I have made are Product and Category as shown below: public class Product { public String ProductCode { get; set; } public String Name { get; set; } public Decimal Price { get; set; } public Decimal Rrp { get; set; } public Category Category { get; set; } }   public class Category { public String Name { get; set; } }   Requirements for the REST Support One of the things which you will notice with Agatha is that you do not have to decorate your Request and Response objects with the WCF Service Model Attributes like DataContract, DataMember etc… Unfortunately from what I have seen, these are required if you want the same types to work with your REST endpoint.  I have not tried but I assume the same result can be achieved by simply decorating the same classes with the Serializable Attribute.  Without this the operation will fail. Another surprising thing I have found is that it did not work until I used the following Attribute parameters: Name Namespace e.g. [DataContract(Name = "GetProductsRequest", Namespace = "AgathaRestExample.Service.Requests")] public class GetProductsRequest : Request { }   Although I was surprised by this, things kind of explained themselves when I got round to figuring out the exact construct required for both the XML and the REST.  One of the things which you already know and are then reminded of is that each of your Requests and Responses ultimately inherit from an abstract base class respectively. This information needs to be represented in a way native to the format being used.  I have seen this in XML but I have not seen the format which is required for the JSON. JSON Consumer Example I have used JQuery to create the example and I simply want to make two requests to the server which as you will know with Agatha are transmitted inside an array to reduce the service calls.  
I have also used a tool called json2 which is again over at Google Code simply to convert my JSON expression into its string format for transmission.  You will notice that I specify the type of Request I am using and the relevant Namespace it belongs to.  Also notice that the second request has a parameter so each of these two object are representing an abstract Request and the parameters of the object describe it. <script type="text/javascript"> var bodyContent = $.ajax({ url: "http://localhost:50348/service.svc/json/processjsonrequests", global: false, contentType: "application/json; charset=utf-8", type: "POST", processData: true, data: JSON.stringify([ { __type: "GetProductsRequest:AgathaRestExample.Service.Requests" }, { __type: "GetProductsByCategoryRequest:AgathaRestExample.Service.Requests", CategoryName: "Category1" } ]), dataType: "json", success: function(msg) { alert(msg); } }).responseText; </script>   XML Consumer Example For the XML Consumer example I have chosen to use a simple Console Application and make a WebRequest to the service using the XML as a request.  I have made a crude static method which simply reads from an XML File, replaces some value with a parameter and returns the formatted XML.  I say crude but it simply shows how XML Templates for each type of Request could be made and then have a wrapper utility in whatever language you use to combine the requests which are required.  The following XML is the same Request array as shown above but simply in the XML Format. <?xml version="1.0" encoding="utf-8" ?> <ArrayOfRequest xmlns="http://schemas.datacontract.org/2004/07/Agatha.Common" xmlns:i="http://www.w3.org/2001/XMLSchema-instance"> <Request i:type="a:GetProductsRequest" xmlns:a="AgathaRestExample.Service.Requests"/> <Request i:type="a:GetProductsByCategoryRequest" xmlns:a="AgathaRestExample.Service.Requests"> <a:CategoryName>{CategoryName}</a:CategoryName> </Request> </ArrayOfRequest>   It is funny because I remember submitting a question to StackOverflow asking whether there was a REST Client Generation tool similar to what Microsoft used for their RestStarterKit but which could be applied to existing services which have REST endpoints attached.  I could not find any but this is now definitely something which I am going to build, as I think it is extremely useful to have but also it should not be too difficult based on the information I now know about the above.  Finally I thought that the Strategy Pattern would lend itself really well to this type of thing so it can accommodate for different languages. I think that is about it, I have included the code for the example Console app which I made below incase anyone wants to have a mooch at the code.  
As I said above I want to reformat these to fit in with the current examples over on the Agatha project, but also now thinking about it, make a Documentation Web method…{brain ticking} :-) Cheers for now and here is the final bit of code: static void Main(string[] args) { var request = WebRequest.Create("http://localhost:50348/service.svc/xml/processxmlrequests"); request.Method = "POST"; request.ContentType = "text/xml"; using(var writer = new StreamWriter(request.GetRequestStream())) { writer.WriteLine(GetExampleRequestsString("Category1")); } var response = request.GetResponse(); using(var reader = new StreamReader(response.GetResponseStream())) { Console.WriteLine(reader.ReadToEnd()); } Console.ReadLine(); } static string GetExampleRequestsString(string categoryName) { var data = File.ReadAllText(Path.Combine(Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location), "ExampleRequests.xml")); data = data.Replace("{CategoryName}", categoryName); return data; } }
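As a closing aside, the same GET-based endpoint could be exercised from .NET just as easily as from JQuery. This is only a sketch — the address and query string simply mirror the example URLs earlier in the post and do not point at a real service:

using System;
using System.IO;
using System.Net;

class JsonGetExample
{
    static void Main()
    {
        // GET against the proposed JSON endpoint; the URL mirrors the examples above.
        var request = WebRequest.Create(
            "http://localhost:50348/service.svc/json/process/getproductsbycategory/?categoryid=1");
        request.Method = "GET";

        using (var response = request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            // The response body is the JSON-serialized Response array.
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}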

    Read the article

  • Parallelism in .NET – Part 15, Making Tasks Run: The TaskScheduler

    - by Reed
    In my introduction to the Task class, I specifically made mention that the Task class does not directly provide its own execution.  In addition, I made a strong point that the Task class itself is not directly related to threads or multithreading.  Rather, the Task class is used to implement our decomposition of tasks.  Once we’ve implemented our tasks, we need to execute them.  In the Task Parallel Library, the execution of Tasks is handled via an instance of the TaskScheduler class. The TaskScheduler class is an abstract class which provides a single function: it schedules the tasks and executes them within an appropriate context.  This class is the class which actually runs individual Task instances.  The .NET Framework provides two (internal) implementations of the TaskScheduler class. Since a Task, based on our decomposition, should be a self-contained piece of code, parallel execution makes sense when executing tasks.  The default implementation of the TaskScheduler class, and the one most often used, is based on the ThreadPool.  This can be retrieved via the TaskScheduler.Default property, and is, by default, what is used when we just start a Task instance with Task.Start(). Normally, when a Task is started by the default TaskScheduler, the task will be treated as a single work item, and run on a ThreadPool thread.  This pools tasks, and provides Task instances with all of the advantages of the ThreadPool, including thread pooling for reduced resource usage, and an upper cap on the number of work items.  In addition, .NET 4 brings us a much improved thread pool, providing work stealing and reduced locking within the thread pool queues.  By using the default TaskScheduler, our Tasks are run asynchronously on the ThreadPool. There is one notable exception to my above statements when using the default TaskScheduler.  If a Task is created with the TaskCreationOptions set to TaskCreationOptions.LongRunning, the default TaskScheduler will generate a new thread for that Task, at least in the current implementation.  This is useful for Tasks which will persist for most of the lifetime of your application, since it prevents your Task from starving the ThreadPool of one of its work threads. The Task Parallel Library provides one other implementation of the TaskScheduler class.  In addition to providing a way to schedule tasks on the ThreadPool, the framework allows you to create a TaskScheduler which works within a specified SynchronizationContext.  This scheduler can be retrieved within a thread that provides a valid SynchronizationContext by calling the TaskScheduler.FromCurrentSynchronizationContext() method. This implementation of TaskScheduler is intended for use with user interface development.  Windows Forms and Windows Presentation Foundation both require any access to user interface controls to occur on the same thread that created the control.  For example, if you want to set the text within a Windows Forms TextBox, and you’re working on a background thread, that UI call must be marshaled back onto the UI thread.  The most common way this is handled depends on the framework being used.  In Windows Forms, Control.Invoke or Control.BeginInvoke is most often used.  In WPF, the equivalent calls are Dispatcher.Invoke or Dispatcher.BeginInvoke. As an example, say we’re working on a background thread, and we want to update a TextBlock in our user interface with a status label.  The code would typically look something like: // Within background thread work... 
string status = GetUpdatedStatus(); Dispatcher.BeginInvoke(DispatcherPriority.Normal, new Action( () => { statusLabel.Text = status; })); // Continue on in background method This works fine, but forces your method to take a dependency on WPF or Windows Forms.  There is an alternative option, however.  Both Windows Forms and WPF, when initialized, set up a SynchronizationContext in their thread, which is available on the UI thread via the SynchronizationContext.Current property.  This context is used by classes such as BackgroundWorker to marshal calls back onto the UI thread in a framework-agnostic manner. The Task Parallel Library provides the same functionality via the TaskScheduler.FromCurrentSynchronizationContext() method.  When setting up our Tasks, as long as we’re working on the UI thread, we can construct a TaskScheduler via: TaskScheduler uiScheduler = TaskScheduler.FromCurrentSynchronizationContext(); We then can use this scheduler on any thread to marshal data back onto the UI thread.  For example, our code above can then be rewritten as: string status = GetUpdatedStatus(); (new Task(() => { statusLabel.Text = status; })) .Start(uiScheduler); // Continue on in background method This is nice since it allows us to write code that isn’t tied to Windows Forms or WPF, but is still fully functional with those technologies.  I’ll discuss even more uses for the SynchronizationContext based TaskScheduler when I demonstrate task continuations, but even without continuations, this is a very useful construct. In addition to the two implementations provided by the Task Parallel Library, it is possible to implement your own TaskScheduler.  The ParallelExtensionsExtras project within the Samples for Parallel Programming provides nine sample TaskScheduler implementations.  These include schedulers which restrict the maximum number of concurrent tasks, run tasks on a single threaded apartment thread, use a new thread per task, and more.
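Pulling the pieces above together, here is a small sketch (not from the original article, and it previews ContinueWith, which is covered later in the series) of a long-running background task that hands its result back to the UI through the SynchronizationContext-based scheduler. GetUpdatedStatus and statusLabel are the same stand-ins used in the snippets above:

// Capture the UI scheduler while still on the UI thread.
TaskScheduler uiScheduler = TaskScheduler.FromCurrentSynchronizationContext();

// LongRunning hints the default scheduler to give this task its own thread
// rather than tying up a ThreadPool worker for its whole lifetime.
Task.Factory.StartNew(() => GetUpdatedStatus(),
        CancellationToken.None,
        TaskCreationOptions.LongRunning,
        TaskScheduler.Default)
    .ContinueWith(t => statusLabel.Text = t.Result, uiScheduler);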

    Read the article

  • AutoMapper MappingFunction from Source Type of NameValueCollection

    - by REA_ANDREW
    I have had a situation arise today where I need to construct a complex type from a source of a NameValueCollection.  A little while back I submitted a patch for the Agatha Project to include REST (JSON and XML) support for the service contract.  I realized today that as useful as it is, it did not actually support true REST conformance, as REST should support GET so that you can use JSONP from JavaScript directly, meaning you can query cross domain services.  My original implementation for POX and JSON used the POST method and this immediately rules out JSONP as, from reading, JSONP only works with GET Requests. This then raised another issue.  The current operation contract of Agatha and one of its main benefits is that you can supply an array of Request objects in a single request, limiting the number of server requests you need to make.  Now, at the present time I am thinking that this will not be the case for the REST implementation but will yield the benefits of the fact that: The same Request objects can be used for SOAP and REST (POX, JSON) The construct of the JavaScript functions will be simpler and more readable It will enable the use of JSONP for cross domain REST Services The current contract for the Agatha WcfRequestProcessor is at time of writing the following: [ServiceContract] public interface IWcfRequestProcessor { [OperationContract(Name = "ProcessRequests")] [ServiceKnownType("GetKnownTypes", typeof(KnownTypeProvider))] [TransactionFlow(TransactionFlowOption.Allowed)] Response[] Process(params Request[] requests); [OperationContract(Name = "ProcessOneWayRequests", IsOneWay = true)] [ServiceKnownType("GetKnownTypes", typeof(KnownTypeProvider))] void ProcessOneWayRequests(params OneWayRequest[] requests); }   My current proposed solution, at the very early stages of my concept, is as follows: [ServiceContract] public interface IWcfRestJsonRequestProcessor { [OperationContract(Name="process")] [ServiceKnownType("GetKnownTypes", typeof(KnownTypeProvider))] [TransactionFlow(TransactionFlowOption.Allowed)] [WebGet(UriTemplate = "process/{name}/{*parameters}", BodyStyle = WebMessageBodyStyle.WrappedResponse, ResponseFormat = WebMessageFormat.Json)] Response[] Process(string name, NameValueCollection parameters); [OperationContract(Name="processoneway",IsOneWay = true)] [ServiceKnownType("GetKnownTypes", typeof(KnownTypeProvider))] [WebGet(UriTemplate = "process-one-way/{name}/{*parameters}", BodyStyle = WebMessageBodyStyle.WrappedResponse, ResponseFormat = WebMessageFormat.Json)] void ProcessOneWayRequests(string name, NameValueCollection parameters); }   Now, this part I have not yet implemented; it is the preliminary step which I have developed that will allow me to take the name of the Request Type and the NameValueCollection, construct the complex type of the Request, and then supply it to a nested instance of the original IWcfRequestProcessor so it works as it normally would.  To give an example, some of the urls which I envisage with this method are: http://www.url.com/service.svc/json/process/getweather/?location=london http://www.url.com/service.svc/json/process/getproductsbycategory/?categoryid=1 http://www.url.om/service.svc/json/process/sayhello/?name=andy Another reason why my direction has gone to a single request for the REST implementation is because of restrictions which are imposed by browsers on the length of the url.  From what I have read this is on average 2000 characters. 
I think that this is a very acceptable usage limit in the context of using 1 request, but I do not think this is acceptable for accommodating multiple requests chained together.  I would love to be corrected on that one, I really would but unfortunately from what I have read I have come to the conclusion that this is not the case. The mapping function So, as I say this is just the first pass I have made at this, and I am not overly happy with the try catch for detecting types without default constructors.  I know there is a better way but for the minute, it escapes me.  I would also like to know the correct way for adding mapping functions and not using the anonymous way that I have used.  To achieve this I have used recursion which I am sure is what other mapping function use. As you do have to go as deep as the complex type is. public static object RecurseType(NameValueCollection collection, Type type, string prefix) { try { var returnObject = Activator.CreateInstance(type); foreach (var property in type.GetProperties()) { foreach (var key in collection.AllKeys) { if (String.IsNullOrEmpty(prefix) || key.Length > prefix.Length) { var propertyNameToMatch = String.IsNullOrEmpty(prefix) ? key : key.Substring(property.Name.IndexOf(prefix) + prefix.Length + 1); if (property.Name == propertyNameToMatch) { property.SetValue(returnObject, Convert.ChangeType(collection.Get(key), property.PropertyType), null); } else if(property.GetValue(returnObject,null) == null) { property.SetValue(returnObject, RecurseType(collection, property.PropertyType, String.Concat(prefix, property.PropertyType.Name)), null); } } } } return returnObject; } catch (MissingMethodException) { //Quite a blunt way of dealing with Types without default constructor return null; } }   Another thing is performance, I have not measured this in anyway, it is as I say the first pass, so I hope this can be the start of a more perfected implementation.  I tested this out with a complex type of three levels, there is no intended logical meaning to the properties, they are simply for the purposes of example.  You could call this a spiking session, as from here on in, now I know what I am building I would take a more TDD approach.  OK, purists, why did I not do this from the start, well I didn’t, this was a brain dump and now I know what I am building I can. The console test and how I used with AutoMapper is as follows: static void Main(string[] args) { var collection = new NameValueCollection(); collection.Add("Name", "Andrew Rea"); collection.Add("Number", "1"); collection.Add("AddressLine1", "123 Street"); collection.Add("AddressNumber", "2"); collection.Add("AddressPostCodeCountry", "United Kingdom"); collection.Add("AddressPostCodeNumber", "3"); AutoMapper.Mapper.CreateMap<NameValueCollection, Person>() .ConvertUsing(x => { return(Person) RecurseType(x, typeof(Person), null); }); var person = AutoMapper.Mapper.Map<NameValueCollection, Person>(collection); Console.WriteLine(person.Name); Console.WriteLine(person.Number); Console.WriteLine(person.Address.Line1); Console.WriteLine(person.Address.Number); Console.WriteLine(person.Address.PostCode.Country); Console.WriteLine(person.Address.PostCode.Number); Console.ReadLine(); }   Notice the convention that I am using and that this method requires you do use.  Each property is prefixed with the constructed name of its parents combined.  This is the convention used by AutoMapper and it makes sense. 
I can also think of other uses for this including using with ASP.NET MVC ModelBinders for creating a complex type from the QueryString which is itself is a NameValueCollection. Hope this is of some help to people and I would welcome any code reviews you could give me. References: Agatha : http://code.google.com/p/agatha-rrsl/ AutoMapper : http://automapper.codeplex.com/   Cheers for now, Andrew   P.S. I will have the proposed solution for a more complete REST implementation for AGATHA very soon. 
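Since the post closes by mentioning ASP.NET MVC ModelBinders as another use, here is a rough sketch of that idea — a custom binder that feeds the query string (itself a NameValueCollection) through the same RecurseType helper. The wiring is hypothetical and assumes RecurseType has been made accessible to the binder:

public class QueryStringComplexTypeBinder : IModelBinder
{
    public object BindModel(ControllerContext controllerContext,
                            ModelBindingContext bindingContext)
    {
        // The query string is a NameValueCollection, so the same recursive
        // mapping shown above can build the requested complex type.
        var queryString = controllerContext.HttpContext.Request.QueryString;
        return RecurseType(queryString, bindingContext.ModelType, null);
    }
}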

    Read the article

  • DropDownList and SelectListItem Array Item Updates in MVC

    - by Rick Strahl
    So I ran into an interesting behavior today as I deployed my first MVC 4 app tonight. I have a list form that has a filter drop down that allows selection of categories. This list is static and rarely changes so rather than loading these items from the database each time I load the items once and then cache the actual SelectListItem[] array in a static property. However, when we put the site online tonight we immediately noticed that the drop down list was coming up with pre-set values that randomly changed. Didn't take me long to trace this back to the cached list of SelectListItem[]. Clearly the list was getting updated - apparently through the model binding process in the selection postback. To clarify the scenario here's the drop down list definition in the Razor View:@Html.DropDownListFor(mod => mod.QueryParameters.Category, Model.CategoryList, "All Categories") where Model.CategoryList gets set with:[HttpPost] [CompressContent] public ActionResult List(MessageListViewModel model) { InitializeViewModel(model); busEntry entryBus = new busEntry(); var entries = entryBus.GetEntryList(model.QueryParameters); model.Entries = entries; model.DisplayMode = ApplicationDisplayModes.Standard; model.CategoryList = AppUtils.GetCachedCategoryList(); return View(model); } The AppUtils.GetCachedCategoryList() method gets the cached list or loads the list on the first access. The code to load up the list is housed in a Web utility class. The method looks like this:/// <summary> /// Returns a static category list that is cached /// </summary> /// <returns></returns> public static SelectListItem[] GetCachedCategoryList() { if (_CategoryList != null) return _CategoryList; lock (_SyncLock) { if (_CategoryList != null) return _CategoryList; var catBus = new busCategory(); var categories = catBus.GetCategories().ToList(); // Turn list into a SelectItem list var catList= categories .Select(cat => new SelectListItem() { Text = cat.Name, Value = cat.Id.ToString() }) .ToList(); catList.Insert(0, new SelectListItem() { Value = ((int)SpecialCategories.AllCategoriesButRealEstate).ToString(), Text = "All Categories except Real Estate" }); catList.Insert(1, new SelectListItem() { Value = "-1", Text = "--------------------------------" }); _CategoryList = catList.ToArray(); } return _CategoryList; } private static SelectListItem[] _CategoryList ; This seemed normal enough to me - I've been doing stuff like this forever caching smallish lists in memory to avoid an extra trip to the database. This list is used in various places throughout the application - for the list display and also when adding new items and setting up for notifications etc.. Watch that ModelBinder! However, it turns out that this code is clearly causing a problem. It appears that the model binder on the [HttpPost] method is actually updating the list that's bound to and changing the actual entry item in the list and setting its selected value. If you look at the code above I'm not setting the SelectListItem.Selected value anywhere - the only place this value can get set is through ModelBinding. Sure enough when stepping through the code I see that when an item is selected the actual model - model.CategoryList[x].Selected - reflects that. This is bad on several levels: First it's obviously affecting the application behavior - nobody wants to see their drop down list values jump all over the place randomly. 
But it's also a problem because the array is getting updated by multiple ASP.NET threads which likely would lead to odd crashes from time to time. Not good! In retrospect the modelbinding behavior makes perfect sense. The actual items and the Selected property is the ModelBinder's way of keeping track of one or more selected values. So while I assumed the list to be read-only, the ModelBinder is actually updating it on a post back producing the rather surprising results. Totally missed this during testing and is another one of those little - "Did you know?" moments. So, is there a way around this? Yes but it's maybe not quite obvious. I can't change the behavior of the ModelBinder, but I can certainly change the way that the list is generated. Rather than returning the cached list, I can return a brand new cloned list from the cached items like this:/// <summary> /// Returns a static category list that is cached /// </summary> /// <returns></returns> public static SelectListItem[] GetCachedCategoryList() { if (_CategoryList != null) { // Have to create new instances via projection // to avoid ModelBinding updates to affect this // globally return _CategoryList .Select(cat => new SelectListItem() { Value = cat.Value, Text = cat.Text }) .ToArray(); } …}  The key is that newly created instances of SelectListItems are returned not just filtered instances of the original list. The key here is 'new instances' so that the ModelBinding updates do not update the actual static instance. The code above uses LINQ and a projection into new SelectListItem instances to create this array of fresh instances. And this code works correctly - no more cross-talk between users. Unfortunately this code is also less efficient - it has to reselect the items and uses extra memory for the new array. Knowing what I know now I probably would have not cached the list and just take the hit to read from the database. If there is even a possibility of thread clashes I'm very wary of creating code like this. But since the method already exists and handles this load in one place this fix was easy enough to put in. Live and learn. It's little things like this that can cause some interesting head scratchers sometimes…© Rick Strahl, West Wind Technologies, 2005-2012Posted in MVC  ASP.NET  .NET   Tweet !function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0];if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src="//platform.twitter.com/widgets.js";fjs.parentNode.insertBefore(js,fjs);}}(document,"script","twitter-wjs"); (function() { var po = document.createElement('script'); po.type = 'text/javascript'; po.async = true; po.src = 'https://apis.google.com/js/plusone.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(po, s); })();

    Read the article

  • Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 1

    - by rajbk
    The Open Data Protocol, referred to as OData, is a new data-sharing standard that breaks down silos and fosters an interoperative ecosystem for data consumers (clients) and producers (services) that is far more powerful than currently possible. It enables more applications to make sense of a broader set of data, and helps every data service and client add value to the whole ecosystem. WCF Data Services (previously known as ADO.NET Data Services), then, was the first Microsoft technology to support the Open Data Protocol in Visual Studio 2008 SP1. It provides developers with client libraries for .NET, Silverlight, AJAX, PHP and Java. Microsoft now also supports OData in SQL Server 2008 R2, Windows Azure Storage, Excel 2010 (through PowerPivot), and SharePoint 2010. Many other other applications in the works. * This post walks you through how to create an OData feed, define a shape for the data and pre-filter the data using Visual Studio 2010, WCF Data Services and the Entity Framework. A sample project is attached at the bottom of Part 2 of this post. Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 2 Create the Web Application File –› New –› Project, Select “ASP.NET Empty Web Application” Add the Entity Data Model Right click on the Web Application in the Solution Explorer and select “Add New Item..” Select “ADO.NET Entity Data Model” under "Data”. Name the Model “Northwind” and click “Add”.   In the “Choose Model Contents”, select “Generate Model From Database” and click “Next”   Define a connection to your database containing the Northwind database in the next screen. We are going to expose the Products table through our OData feed. Select “Products” in the “Choose your Database Object” screen.   Click “Finish”. We are done creating our Entity Data Model. Save the Northwind.edmx file created. Add the WCF Data Service Right click on the Web Application in the Solution Explorer and select “Add New Item..” Select “WCF Data Service” from the list and call the service “DataService” (creative, huh?). Click “Add”.   Enable Access to the Data Service Open the DataService.svc.cs class. The class is well commented and instructs us on the next steps. public class DataService : DataService< /* TODO: put your data source class name here */ > { // This method is called only once to initialize service-wide policies. public static void InitializeService(DataServiceConfiguration config) { // TODO: set rules to indicate which entity sets and service operations are visible, updatable, etc. // Examples: // config.SetEntitySetAccessRule("MyEntityset", EntitySetRights.AllRead); // config.SetServiceOperationAccessRule("MyServiceOperation", ServiceOperationRights.All); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; } } Replace the comment that starts with “/* TODO:” with “NorthwindEntities” (the entity container name of the Model we created earlier).  WCF Data Services is initially locked down by default, FTW! No data is exposed without you explicitly setting it. You have explicitly specify which Entity sets you wish to expose and what rights are allowed by using the SetEntitySetAccessRule. The SetServiceOperationAccessRule on the other hand sets rules for a specified operation. Let us define an access rule to expose the Products Entity we created earlier. We use the EnititySetRights.AllRead since we want to give read only access. Our modified code is shown below. 
public class DataService : DataService<NorthwindEntities> { public static void InitializeService(DataServiceConfiguration config) { config.SetEntitySetAccessRule("Products", EntitySetRights.AllRead); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; } } We are done setting up our ODataFeed! Compile your project. Right click on DataService.svc and select “View in Browser” to see the OData feed. To view the feed in IE, you must make sure that "Feed Reading View" is turned off. You set this under Tools -› Internet Options -› Content tab.   If you navigate to “Products”, you should see the Products feed. Note also that URIs are case sensitive. ie. Products work but products doesn’t.   Filtering our data OData has a set of system query operations you can use to perform common operations against data exposed by the model. For example, to see only Products in CategoryID 2, we can use the following request: /DataService.svc/Products?$filter=CategoryID eq 2 At the time of this writing, supported operations are $orderby, $top, $skip, $filter, $expand, $format†, $select, $inlinecount. Pre-filtering our data using Query Interceptors The Product feed currently returns all Products. We want to change that so that it contains only Products that have not been discontinued. WCF introduces the concept of interceptors which allows us to inject custom validation/policy logic into the request/response pipeline of a WCF data service. We will use a QueryInterceptor to pre-filter the data so that it returns only Products that are not discontinued. To create a QueryInterceptor, write a method that returns an Expression<Func<T, bool>> and mark it with the QueryInterceptor attribute as shown below. [QueryInterceptor("Products")] public Expression<Func<Product, bool>> OnReadProducts() { return o => o.Discontinued == false; } Viewing the feed after compilation will only show products that have not been discontinued. We also confirm this by looking at the WHERE clause in the SQL generated by the entity framework. SELECT [Extent1].[ProductID] AS [ProductID], ... ... [Extent1].[Discontinued] AS [Discontinued] FROM [dbo].[Products] AS [Extent1] WHERE 0 = [Extent1].[Discontinued] Other examples of Query/Change interceptors can be seen here including an example to filter data based on the identity of the authenticated user. We are done pre-filtering our data. In the next part of this post, we will see how to shape our data. Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 2 Foot Notes * http://msdn.microsoft.com/en-us/data/aa937697.aspx † $format did not work for me. The way to get a Json response is to include the following in the  request header “Accept: application/json, text/javascript, */*” when making the request. This is easily done with most JavaScript libraries.
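Assuming the feed above is up and running, it can also be consumed from a .NET client through the WCF Data Services client library. A minimal sketch — the URI is a placeholder and Product is the client-side proxy class you would get from Add Service Reference:

using System;
using System.Data.Services.Client;
using System.Linq;

class ODataClientExample
{
    static void Main()
    {
        // Placeholder address for the service created above.
        var context = new DataServiceContext(new Uri("http://localhost:1234/DataService.svc"));

        // Translates to /Products?$filter=CategoryID eq 2 on the wire;
        // the query interceptor still strips discontinued products server-side.
        var products = context.CreateQuery<Product>("Products")
                              .Where(p => p.CategoryID == 2)
                              .ToList();

        foreach (var product in products)
            Console.WriteLine(product.ProductName);
    }
}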

    Read the article

  • 2 Days of Share &amp; Point

    - by Mark Rackley
    Groovy man… SharePoint Saturday Ozarks is back for 2010, bigger and better than before. Join us for a far out time and learn more about SharePoint in one day than you could in a year from the man… Yes! SharePoint Saturday Ozarks is back! SharePoint Saturday Ozarks is the largest SharePoint conference in Arkansas, Southern Missouri, and the very north east tip of Oklahoma. Last year we had a great turn out with 20 speakers, 5 MVPs, and attendees coming from Arkansas, Texas, Oklahoma, Missouri, Kansas, Nebraska, Indiana, Ohio, Alabama, Michigan, and Washington. Hey Man… what’s SharePoint Saturday anyway? Sounds like a conspiracy man… Not to worry, SharePoint Saturday is not an arm of the government bent on mind control or any attempt what-so-ever to bring you down man. SharePoint Saturday is grass roots effort started by Michael Lotter (http://www.sharepointsaturday.org/pages/about.aspx). It is a FREE one day event where the best SharePoint speakers gather to present their love, hatred, and frustrations of SharePoint to those lucky individuals who attend. Lessons are learned, contacts are made, prizes are won, food is eaten, assorted beverages are consumed until wee hours of the morning. SharePoint Saturday started with just a few sporadic one day events here and there. However, over the past year SharePoint Saturday has exploded and it’s hard to find a weekend where there is NOT a SharePoint Saturday event happing in some corner of the globe. There are even occasions where there are two SharePoint Saturdays on the same day! Many people are pleasantly surprised at the caliber of speakers at these SharePoint Saturday events. For the most part, these speakers are more eloquent, practiced, and practical than those speakers you find at the major multi-day conferences. These guys aren’t even paid to speak.. they do it out of love man… SharePoint Saturday Ozarks 2009 Alumni We had a star studded cast last year with many returning this year! Just check out the fun that they had… John Ferringer – Admin rockstar… I can still sense the awesomeness   SharePoint poster children Mike Watson & Laura Rogers     Lori Gowin spreading the SharePoint Love Eric Shupps is a little bit country and a little bit rock and roll       Cathy Dew, Sean McDonough, and JD Wade relaxing between gigs Actually, you can see real photos from last year’s SharePoint Saturday ozarks here:  picasaweb.google.com/mrackley/SharePointSaturdayOzarks#    What’s new for SharePoint Saturday Ozarks 2010 SharePoint Saturday Ozarks 2010 will totally blow your mind man. We’re getting the band back to together with many returning speakers and few new faces. Joel Oleson will be speaking this year, maybe he’ll grace us with his song stylings. Sadly, once again, Andrew Connell will not be able to attend SharePoint Saturday Ozarks, however he did feel the need to show his support in his own way. Prizes this year currently include books, software, a Zune HD, and much more! Wait Man… You said 2 days? I thought it was a one day event? Correct you are my herbal smelling friend… SharePoint Saturday Ozarks 2010 will spread the love an additional day this year. The first day will be all about the SharePoint love, on day 2 we will be taking a leisurely float down the Buffalo National River for those interested in a truly unique experience (no banjos allowed please).   Here are the details: WHAT 4 – 5 hour float down the Buffalo National River WHEN & WHERE Sunday June 13th. 
We will be leaving at 10am from the Parking Lot of: Gordon’s Motel & Canoe Rental Old Highway 7 Jasper, AR 72641 (870) 446-5252 Jasper is about 30 minutes south of Harrison, AR on Highway 7 South. You are responsible for bumming a ride to/from Gordon’s Motel, but they will be shuttling us to/from the river and providing canoes and a boxed lunch. WHAT ELSE? The float trip is dependent on the weather of course, we won’t be floating down the river in a thunderstorm, however I planned SPS Ozarks around a time of year ideal for floating. We aren’t talking class 5 rapids here, you don’t need any real skill, but you need to be okay with possibly tipping your canoe over once or twice. You can bring your own assorted beverages with you, but glass containers are not allowed on the river. I suggest a small cooler with extra snacks and drinks. Also bring clothing you can get wet in (these SharePoint people can get ornery). HOW DO I SIGN UP? When you register for SharePoint Saturday Ozarks, you will have the option to also sign up for the float trip. Seats are limited though! If you do not intend to go, please do not take someone else’s place.  The cost for the float trip will be about $35 dollars per person (which you are responsible for unless we find a sponsor). The price includes shuttle to/from river, canoe, life jackets, paddles, and boxed lunch. Far out man… how do I register??? You can register for SharePoint Saturday Ozarks by going to http://spsozarks.eventbrite.com/ We are limited to 200 people for the conference and 50 people for the float trip, so register today before we are sold out. Lodging for SharePoint Saturday Ozarks will once again take place at the Hotel Seville: Annex Suites are available for $103.20 This is So Groovy.. How can I help? I’m glad you asked! We are still looking for a few sponsors and one or two more speakers. If you are interested please let me know!  You can find out more information at http://www.sharepointsaturday.org/ozarks Hey… wait a minute…. what exactly IS SharePoint man??? Come to SharePoint Saturday Ozarks and find out!!  See you guys there!

    Read the article

  • Getting Started with ASP.NET Membership, Profile and RoleManager

    - by Ben Griswold
    A new ASP.NET MVC project includes preconfigured Membership, Profile and RoleManager providers right out of the box.  Try it yourself – create a ASP.NET MVC application, crack open the web.config file and have a look.  First, you’ll find the ApplicationServices database connection: <connectionStrings>   <add name="ApplicationServices"        connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|aspnetdb.mdf;User Instance=true"        providerName="System.Data.SqlClient"/> </connectionStrings>   Notice the connection string is referencing the aspnetdb.mdf database hosted by SQL Express and it’s using integrated security so it’ll just work for you without having to call out a specific database login or anything. Scroll down the file a bit and you’ll find each of the three noted sections: <membership>   <providers>     <clear/>     <add name="AspNetSqlMembershipProvider"          type="System.Web.Security.SqlMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"          connectionStringName="ApplicationServices"          enablePasswordRetrieval="false"          enablePasswordReset="true"          requiresQuestionAndAnswer="false"          requiresUniqueEmail="false"          passwordFormat="Hashed"          maxInvalidPasswordAttempts="5"          minRequiredPasswordLength="6"          minRequiredNonalphanumericCharacters="0"          passwordAttemptWindow="10"          passwordStrengthRegularExpression=""          applicationName="/"             />   </providers> </membership>   <profile>   <providers>     <clear/>     <add name="AspNetSqlProfileProvider"          type="System.Web.Profile.SqlProfileProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"          connectionStringName="ApplicationServices"          applicationName="/"             />   </providers> </profile>   <roleManager enabled="false">   <providers>     <clear />     <add connectionStringName="ApplicationServices" applicationName="/" name="AspNetSqlRoleProvider" type="System.Web.Security.SqlRoleProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />     <add applicationName="/" name="AspNetWindowsTokenRoleProvider" type="System.Web.Security.WindowsTokenRoleProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />   </providers> </roleManager> Really. It’s all there. Still don’t believe me.  Run the application, walk through the registration process and finally login and logout.  Completely functional – and you didn’t have to do a thing! What else?  Well, you can manage your users via the Configuration Manager which is hiding in Visual Studio behind Projects > ASP.NET Configuration. The ASP.NET Web Site Administration Tool isn’t MVC-specific (neither is the Membership, Profile or RoleManager stuff) but it’s neat and I hardly ever see anyone using it.  Here you can set up and edit users, roles, and set access permissions for your site. You can manage application settings, establish your SMTP settings, configure debugging and tracing, define default error page and even take your application offline.  The UI is rather plain-Jane but it works great. And here’s the best of all.  Let’s say you, like most of us, don’t want to run your application on top of the aspnetdb.mdf database.  Let’s suppose you want to use your own database and you’d like to add the membership stuff to it.  Well, that’s easy enough. 
Take a look inside your [drive:]\%windir%\Microsoft.Net\Framework\v2.0.50727\ folder.  Here you’ll find a bunch of files.  If you were to run the InstallCommon.sql, InstallMembership.sql, InstallRoles.sql and InstallProfile.sql files against the database of your choice, you’d be installing the same membership, profile and role artifacts found in aspnetdb.mdf into your own database.  Too much trouble?  Okay. Run [drive:]\%windir%\Microsoft.Net\Framework\v2.0.50727\aspnet_regsql.exe from the command line instead.  This will launch the ASP.NET SQL Server Setup Wizard which walks you through the installation of those same database objects into the new or existing database of your choice. You may not always have the luxury of using this tool on your destination server, but you should use it whenever you can.  Last tip: don’t forget to update the ApplicationServices connection string to point to your custom database after the setup is complete. At the risk of sounding like a smarty, everything I’ve mentioned in this post has been around for quite a while. The thing is that not everyone has had the opportunity to use it.  And it makes sense. I know I’ve worked on projects which used custom membership services.  Why bother with the out-of-the-box stuff, right?   And the .NET framework is so massive, who can know it all? Well, eventually you might have a chance to architect your own solution using any implementation you’d like or you will have the time to play around with another aspect of the framework.  When you do, think back to this post.
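Once the schema is installed and the ApplicationServices connection string points at your database, the providers behave just as they did against aspnetdb.mdf, so you can drive them from code as well as from the administration tool. A minimal sketch (the "Editors" role name is only an example, and roleManager must be enabled in web.config for the role calls to work):

using System.Web.Security;

public static class AccountSetup
{
    public static void CreateEditor(string userName, string password, string email)
    {
        // Uses whichever membership and role providers are configured in web.config.
        MembershipCreateStatus status;
        Membership.CreateUser(userName, password, email, null, null, true, out status);

        if (status != MembershipCreateStatus.Success)
            return;

        if (!Roles.RoleExists("Editors"))
            Roles.CreateRole("Editors");

        Roles.AddUserToRole(userName, "Editors");
    }
}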

    Read the article

  • Scott Guthrie in Glasgow

    - by Martin Hinshelwood
    Last week Scott Guthrie was in Glasgow for his new Guathon tour, which was a roaring success. Scott did talks on the new features in Visual Studio 2010, Silverlight 4, ASP.NET MVC 2 and Windows Phone 7. Scott talked from 10am till 4pm, so this can only contain what I remember and I am sure lots of things he discussed just went in one ear and out another, however I have tried to capture at least all of my Ohh’s and Ahh’s. Visual Studio 2010 Right now you can download and install Visual Studio 2010 Candidate Release, but soon we will have the final product in our hands. With it there are some amazing improvements, and not just in the IDE. New versions of VB and C# come out of the box as well as Silverlight 4 and SharePoint 2010 integration. The new Intellisense features allow inline support for Types and Dictionaries as well as being able to type just part of a name and have the list filter accordingly. Even better, and my personal favourite is one that Scott did not mention, and that is that it is not case sensitive so I can actually find things in C# with its reasonless case sensitivity (Scott, can we please have an option to turn that off.) Another nice feature is the Routing engine that was created for ASP.NET MVC is now available for WebForms which is good news for all those that just imported the MVC DLL’s to get at it anyway. Another fantastic feature that will need some exploring is the ability to add validation rules to your entities and have them validated automatically on the front end. This removes the need to add your own validators and means that you can control an objects validation rules from a single location, the object. A simple command “GridView.EnableDynamicData(gettype(product))“ will enable this feature on controls. What was not clear was wither there would be support for this in WPF and WinForms as well. If there is, we can write our validation rules once and use everywhere. I was disappointed to here that there would be no inbuilt support for the Dynamic Language Runtime (DLR) with VS2010, but I think it will be there for .vNext. Because I have been concentrating on the Visual Studio ALM enhancements to VS2010 I found this section invaluable as I now know at least some of what I missed. Silverlight 4 I am not a big fan of Silverlight. There I said it, and I will probably get lynched for it. My big problem with Silverlight is that most of the really useful things I leaned from WPF do not work. I am only going to mention one thing and that is “x:Type”. If you are a WPF developer you will know how much power these 6 little letters provide; the ability to target templates at object types being the the most magical and useful. But, and this is a massive but, if you are developing applications that MUST run on platforms other than windows then Silverlight is your only choice (well that and Flash, but lets just not go there). And Silverlight has a huge install base as well.. 60% of all internet connected devices have Silverlight. Can Adobe say that? Even though I am not a fan of it my current project is a Silverlight one. If you start your XAML experience with Silverlight you will not be disappointed and neither will the users of the applications you build. Scott showed us a fantastic application called “Silverface” that is a Silverlight 4 Out of Browser application. I have looked for a link and can’t find one, but true to form, here is a fantastic WPF version called Fish Bowl from Microsoft. 
ASP.NET MVC 2 ASP.NET MVC is something I have played with but never used in anger. It is definitely the way forward, but WebForms is not dead yet. there are still circumstances when WebForms are better. If you are starting from greenfield and you are using TDD, then MVC is ultimately the only way you can go. New in version 2 are Dynamic Scaffolding helpers that let you control how data is presented in the UI from the Entities. Adding validation rules and other options that make sense there can help improve the overall ease of developing the UI. Also the Microsoft team have heard the cries of help from the larger site builders and provided “Areas” which allow a level of categorisation to your Controllers and Views. These work just like add-ins and have their own folder, but also have sub Controllers and Views. Areas are totally pluggable and can be dropped onto existing sites giving the ability to have boxed products in MVC, although what you do with all of those views is anyone's guess. They have been listening to everyone again with the new option to encapsulate UI using the Html.Action or Html.ActionRender. This uses the existing  .ascx functionality in ASP.NET to render partial views to the screen in certain areas. While this was possible before, it makes the method official thereby opening it up to the masses and making it a standard. At the end of the session Scott pulled out some IIS goodies including the IIS SEO Toolkit which can be used to verify your own site is “good” for search engine consumption. Better yet he suggested that you run it against your friends sites and shame them with how bad they are. note: make sure you have fixed yours first. Windows Phone 7 Series I had already seen the new UI for WP7 and heard about the developer story, but Scott brought that home by building a twitter application in about 20 minutes using the emulator. Scott’s only mistake was loading @plip’s tweets into the app… And guess what, it was written in Silverlight. When Windows Phone 7 launches you will be able to use about 90% of the codebase of your existing Silverlight application and use it on the phone! There are two downsides to the new WP7 architecture: No, your existing application WILL NOT work without being converted to either a Silverlight or XNA UI. NO, you will not be able to get your applications onto the phone any other way but through the Marketplace. Do I think these are problems? No, not even slightly. This phone is aimed at consumers who have probably never tried to install an application directly onto a device. There will be support for enterprise apps in the future, but for now enterprises should stay on Windows Phone 6.5.x devices. Post Event drinks At the after event drinks gathering Scott was checking out my HTC HD2 (released to the US this month on T-Mobile) and liked the Windows Phone 6.5.5 build I have on it. We discussed why Microsoft were not going to allow Windows Phone 7 Series onto it with my understanding being that it had 5 buttons and not 3, while Scott was sure that there was more to it from a hardware standpoint. I think he is right, and although the HTC HD2 has a DX9 compatible processor, it was never built with WP7 in mind. However, as if by magic Saturday brought fantastic news for all those that have already bought an HD2: Yes, this appears to be Windows Phone 7 running on a HTC HD2. The HD2 itself won't be getting an official upgrade to Windows Phone 7 Series, so all eyes are on the ROM chefs at the moment. 
The rather massive photos have been posted by Tom Codon on HTCPedia, and they've apparently got WiFi, GPS, Bluetooth and other bits working. The ROM isn't online yet, but according to the post there's a beta version coming soon. Leigh Geary - http://www.coolsmartphone.com/news5648.html

What was Scott working on during his flight back to the US?

Technorati Tags: VS2010, MVC2, WP7S, WP7

Follow: @CAMURPHY, @ColinMackay, @plip and of course @ScottGu
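As a footnote to the ASP.NET MVC 2 notes above, here is a rough C# sketch of what an Area registration might look like in MVC 2. The "Shop" area name, the route values and the Menu/Navigation child action are assumptions made up for illustration; the pattern itself is an AreaRegistration subclass that MVC discovers when AreaRegistration.RegisterAllAreas() is called from Global.asax.

using System.Web.Mvc;

// Hypothetical "Shop" area. MVC 2 finds AreaRegistration subclasses at start-up
// (via AreaRegistration.RegisterAllAreas() in Global.asax) and lets each one
// register its own routes, which is what keeps an Area pluggable.
public class ShopAreaRegistration : AreaRegistration
{
    public override string AreaName
    {
        get { return "Shop"; }
    }

    public override void RegisterArea(AreaRegistrationContext context)
    {
        context.MapRoute(
            "Shop_default",
            "Shop/{controller}/{action}/{id}",
            new { action = "Index", id = UrlParameter.Optional });
    }
}

// In a view, an encapsulated piece of UI can then be pulled in as a child action,
// e.g. <% Html.RenderAction("Menu", "Navigation"); %> (controller and action assumed).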

    Read the article

  • Hey, Google: It’s Time to Add Multi-Window Multitasking To Android

    - by Chris Hoffman
In 2012, Google's Dianne Hackborn threatened to revoke CyanogenMod's access to the Android Market if they moved forward with adding "Cornerstone" multitasking to their custom ROM. Samsung has since created their own multi-window multitasking feature. Dianne Hackborn said this "is something that needs to be done at the mainline platform level" so apps wouldn't break. She was right: Android needs this as a standard feature, and it's time for Google to provide it.

Doesn't Android Have Multitasking?

Android originally stood out from Apple's iOS with its powerful multitasking. Applications can continue running in the background while you're using another application. This makes Android powerful: you can even have BitTorrent clients downloading files in the background while using another app. Android still kept the design of a single app on screen at a time. This made a lot of sense when Android only ran on smartphones with small screens. Today, Android runs on everything from smaller smartphones all the way up to huge "phablets" like the Galaxy Note. Android has gone beyond phones and runs on 12-inch tablets, convertibles with keyboard docks, laptops, and even Android desktops. Android isn't just a phone operating system.

Samsung's Multi-Window Isn't Good Enough

Samsung has tried to add value to Android by adding a multi-window feature. When you're using a high-end phone like the Galaxy Note or Galaxy S, or a Galaxy tablet, you have the ability to run certain apps side-by-side with each other. There are big problems here. This only works on Samsung devices, and only on specific Samsung devices. To add support for this feature in a way that doesn't break other apps, Samsung's multi-window feature also only works with specific apps: you can't just run any app in multi-window view, only the apps on the Multi Window bar Samsung provides. This prevents third-party apps from breaking, which is what Google was worried about with CyanogenMod's Cornerstone feature. A feature that only works with a handful of apps on specific devices from a single manufacturer isn't good enough. This feature needs to work on every Android device, or at least on ones with suitably large screens and powerful enough internals. It needs to be an Android platform feature so application developers can ensure their apps will work properly with it on every device. Android developers shouldn't have to add support for each manufacturer's own multi-window feature if other manufacturers decide to copy Samsung.

Floating Apps Are a Dirty Hack

Floating apps also enable real multitasking. Remember that Android allows apps to run in the background while you're using an app in the foreground. These apps can present interfaces that appear floating above the current app; think of it like using "always on top" to make a window always appear over every other app on a desktop operating system. You can install floating apps to browse the web, take notes, chat, and watch videos while using any app. Only apps specifically designed to run as floating apps will work, so you have to seek them out. Floating apps are also awkward to use because they float over the app you're using, blocking parts of its interface. Microsoft added floating-window support to Skype for Android: you can have a video conversation and the other person's face will always appear on your screen, even when you leave the Skype app. Microsoft is using more of Android's multi-window multitasking power than Google is.
Custom ROMs and Root-Only Tweaks Aren't Acceptable

Some custom ROMs are adding this feature to Android. Google threatened to revoke CyanogenMod's access to the Android Market (now known as Google Play) if they added this feature, because it could potentially break third-party apps. Today, other custom ROMs are working on split-screen multitasking, and Samsung added their own version to their own devices. You can also get this feature by using a root-only Xposed Framework tweak known as XMultiWindow. If you have root access, you can get multi-window multitasking for any app on your device. This shouldn't require rooting your device or installing a custom ROM. These third-party solutions often have awkward interfaces and bugs. We need an integrated, supported solution that works the same on every device.

Why Multi-Window is Important

Microsoft's Windows 8.1 stands out among tablet operating systems for its powerful multitasking support, allowing you to view several apps side-by-side at the same time. Apple is also reported to be working on adding side-by-side apps to the iPad with iOS 8. On every competitor's operating system, you'll be able to view a web page while you write an email, watch a video while you browse the web, or chat with someone while you do anything else. But Android has remained frozen in time. Despite all of Android's underlying power, and despite the way Android allows apps to adapt to different screen sizes, Google is resisting adding this feature. Large-screen Android tablets like the Nexus 10 (remember that tablet Google hasn't updated in over 18 months?) need this feature. So do huge phones, convertibles, laptops, and Android desktops. If tablets are the future of personal computing, we should be able to do more than one thing at a time on our tablets' big screens. Microsoft, Samsung, and even Apple are realizing this; now it's Google's turn.

Image Credit: Sergey Galyonkin on Flickr, Karlis Dambrans on Flickr

    Read the article
