Search Results

Search found 25579 results on 1024 pages for 'complex event processing'.


  • Which options do I have for Java process communication?

    - by Dmitriy Matveev
    We have a place in our code of the following form: void processParam(Object param) { wrapperForComplexNativeObject result = jniCallWhichMayCrash(param); processResult(result); } processParam is a method which is called with many different arguments. jniCallWhichMayCrash is a native method which does some complex processing of its parameter and creates a complex object; it can crash in some cases. wrapperForComplexNativeObject is a wrapper type generated by SWIG. processResult is a method written in pure Java which processes its parameter by creating several kinds (by "kinds" I don't mean classes, more like hierarchies) of objects: 1 - some non-unique objects which reference each other (from the same hierarchy); these objects can have duplicates created from invocations of processParam() with different parameter values, and since it's costly to keep all the duplicates it's necessary to cache them. 2 - some unique objects which reference each other (from the same hierarchy) and some of the objects of the 1st kind. After processParam has been executed for each argument in some set, the data created in processResult will be processed together. The problem is that the jniCallWhichMayCrash method may crash the entire JVM, which would be very bad. The crash may happen for one argument value and not for another. We've decided that it's better to ignore crashes and just skip some chunks of data when they occur. To do this we should run the processParam function inside a separate process and somehow (HOW? HOW?! This is the question) pass the result back to the main process; in case of a crash we would only lose part of the data (that's OK) without losing everything else. So for now the main problem is the implementation of transport between the processes. Which options do I have? I can think of serializing and transmitting binary data over streams, but serialization may not be very fast due to the objects' complexity. Do I have other options for implementing this?
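
    One way the "separate worker process" idea could be wired up, as a rough Java sketch rather than anything from the question (the class name, the port handling and the result type are illustrative): the main JVM serializes the parameter to a worker over a local socket and reads the result back, so a native crash in the worker shows up as a broken connection and only that one chunk of data is lost.

        import java.io.*;
        import java.net.*;

        // Main-process side of the "risky JNI call in a separate process" idea:
        // the parameter goes to the worker as a serialized object and the result
        // comes back the same way. A worker crash surfaces as an IOException.
        public class RiskyCallClient {
            private final int workerPort;

            public RiskyCallClient(int workerPort) { this.workerPort = workerPort; }

            /** Returns the worker's result, or null if the worker process died. */
            public Serializable callInWorker(Serializable param) {
                try (Socket socket = new Socket("localhost", workerPort)) {
                    // Create the output stream first and flush its header, otherwise
                    // both ends can block in the ObjectInputStream constructor.
                    ObjectOutputStream out = new ObjectOutputStream(socket.getOutputStream());
                    out.flush();
                    ObjectInputStream in = new ObjectInputStream(socket.getInputStream());
                    out.writeObject(param);
                    out.flush();
                    return (Serializable) in.readObject();
                } catch (IOException | ClassNotFoundException e) {
                    // Worker crashed or the connection broke: skip this parameter.
                    return null;
                }
            }
        }

    Java's built-in serialization keeps the sketch short; if it proves too slow for these object graphs, the same socket transport works just as well with a hand-rolled or third-party binary format.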

    Read the article

  • How to translate the fields of a database model?

    - by Tõnis M
    I have some tables/models in a web app that will have multilingual content. For example a university, with its description in a default language (English); if the user wants, he can see the same information in another language (if the object has its fields translated). If there were only a few languages I would just add fields like name_en, name_de and so on, but the number of languages isn't fixed, so that would create a mess. I could also just create a new object with the translated data, but then foreign keys wouldn't work, and since only some of the fields can be translated that would create duplicate data. Storing the translations in a file and using gettext or something similar is also not an option, since the object's fields can be translated by the website's users, not only by developers/admins. What would be the best way to design such a database? Searching the translated data should also not be too complex - it should not require complex joins that would make the queries slower. I'm using PostgreSQL and Ruby on Rails, but I'm not looking for a technical solution so much as a general idea of how to design it.

    Read the article

  • What's the *right* way to handle a POST in FP?

    - by Malvolio
    I'm just getting started with FP and I'm using Scala, which may not be the best way, since I can always fall back to an imperative style if the going gets tough. I'd just rather not. I've got a very specific question that points to a broader lacuna in my understanding of FP. When a web application is processing a GET request, the user wants information that already exists on the web site; the application only has to process and format the data in some way. The FP way is clear. When a web application is processing a POST request, the user wants to change the information held on the site. True, the information is not typically held in application variables, it's in a database or a flat file, but still, I get the feeling I'm not grokking FP properly. Is there a pattern for handling updates to static data in an FP language? My vague picture of this is that the application is handed the request and the then-current site state. The application does its thing and returns the new site state. If the current site state hasn't changed since the application started, the new state becomes the current state and the reply is sent back to the browser (this is my dim image of Clojure's style); if the current state has been changed (by another thread), well, something else happens ...
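
    The compare-and-swap loop sketched at the end of the question looks roughly like this in Java (a sketch of the general idea, not Scala or Clojure; the state type stands for whatever immutable value models the site's data):

        import java.util.concurrent.atomic.AtomicReference;
        import java.util.function.UnaryOperator;

        // A POST handler computes a new immutable state from the current one and
        // installs it only if nothing else changed the state in the meantime,
        // retrying otherwise.
        public class StateHolder<S> {
            private final AtomicReference<S> current;

            public StateHolder(S initial) { this.current = new AtomicReference<>(initial); }

            public S update(UnaryOperator<S> pureTransition) {
                while (true) {
                    S before = current.get();
                    S after = pureTransition.apply(before); // pure: no side effects
                    if (current.compareAndSet(before, after)) {
                        return after;                       // reply to the browser from this
                    }
                    // another request won the race; recompute from the newer state
                }
            }
        }

    Clojure's swap! on an atom and Java's AtomicReference.updateAndGet package exactly this retry loop; the handler itself stays a pure function from old state to new state, and persistence becomes a separate concern at the edge.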

    Read the article

  • How to repair Java in Ubuntu after trying to switch to Java 6 using update-java-alternatives

    - by Kau-Boy
    I tried to switch from Java 5 to Java 6 using the "update-java-alternatives" command as explained on this page: https://help.ubuntu.com/community/Java But afterwards I get the following error when I try to execute java: root@webserver:~# java Error occurred during initialization of VM Could not reserve enough space for object heap Could not create the Java virtual machine. I also tried to reinstall the Java binaries using "apt-get", but I didn't succeed in reinstalling them. I would like to post the "apt-get" errors, but unfortunately I don't know how to print the error messages in English rather than German. My system is an Ubuntu 8.04 root server. Here is the (Google-translated) English output from trying to install Java 6 again: root@server:~# apt-get install sun-java6-jdk Reading package lists ... Ready Dependency tree Reading state information ... Ready sun-java6-jdk is already the newest version. sun-java6-jdk set to manually installed. 0 upgraded, 0 newly installed, 0 to remove and 86 not upgraded. 1 not fully installed or removed. After this operation, 0B of additional disk space will be used. Set up a sun-java6-bin (6-03-0ubuntu2) ... Could not create the Java virtual machine. dpkg: error processing sun-java6-bin (- configure): Subprocess post-installation script returned error exit status 1 Errors were encountered while processing: sun-java6-bin E: Sub-process / usr / bin / dpkg returned an error code (1) I hope this helps you help me.

    Read the article

  • Should I *always* import my file references into the database in drupal?

    - by sprugman
    I have a cck type with an image field, and a unique_id text field. The file name of the image is based on the unique_id. All of the content, including the image itself is being generated automatically via another process, and I'm parsing what that generates into nodes. Rather than creating separate fields for the id and the image, and doing an official import of the image into the files table, I'm tempted to only create the id field and create the file reference in the theme layer. I can think of pros and cons: 1) Theme Layer Approach Pros: makes the import process much less complex don't have to worry about syncing the db with the file system as things change more flexible -- I can move my images around more easily if I want Cons: maybe not as much The Drupal Way™ not as pure -- I'll wind up with more logic on the theme side. 2) Import Approach Pros: import method is required if we ever wanted to make the files private (we won't.) safer? Maybe I'll know if there's a problem with the image at import time, rather than view time. Since I'll be bulk importing, that might make a difference. if I delete a node through the admin interface, drupal might be able to delete the file for me, as well. Con: more complex import and maintenance All else being equal, simpler is always better, so I'm leaning toward #1. Are there any other issues I'm missing? (Since this is an open ended question, I guess I'll make it a community wiki, whatever that means.)

    Read the article

  • Right Language for the Job

    - by Manoj
    Using the right language for the job is key - this is a comment I read on SO, and I also believe it's the right thing to do. Because of this we ended up using different languages for different parts of the project - Perl, VBA (Excel macros), C#, etc. We currently have three to four languages in use inside the project. Using the right language for the job has made it immensely easier to automate tasks, but of late people are complaining that any new person who takes over the project will have to learn many different languages to get started, and it is difficult to find such a person. Please note that at most one or two people work on the project at any given time. I would like to know whether the approach we are following is right, or whether we should converge on a single language and use it for every job even though another language might be better suited for some of them. Your experience related to this would also help. Languages used and their purpose: Perl for processing large text files (log files); C# with Silverlight for web-based reporting; LabVIEW for automation; Excel macros for processing data in Excel sheets, generating graphs and exporting to PowerPoint.

    Read the article

  • Optimizing near-duplicate value search

    - by GApple
    I'm trying to find near-duplicate values in a set of fields in order to allow an administrator to clean them up. There are two criteria that I am matching on: (1) one string is wholly contained within the other and is at least 1/4 of its length; (2) the strings have an edit distance of less than 5% of the combined length of the two strings. The pseudo-PHP code: foreach($values as $value){ foreach($values as $match){ if( ( $value['length'] < $match['length'] && $value['length'] * 4 > $match['length'] && stripos($match['value'], $value['value']) !== false ) || ( $match['length'] < $value['length'] && $match['length'] * 4 > $value['length'] && stripos($value['value'], $match['value']) !== false ) || ( abs($value['length'] - $match['length']) * 20 < ($value['length'] + $match['length']) && 0 < ($match['changes'] = levenshtein($value['value'], $match['value'])) && $match['changes'] * 20 <= ($value['length'] + $match['length']) ) ){ $matches[] = &$match; } } } I've tried to reduce calls to the comparatively expensive stripos and levenshtein functions where possible, which has reduced the execution time quite a bit. However, as an O(n^2) operation this just doesn't scale to the larger sets of values, and it seems that a significant amount of the processing time is spent simply iterating through the arrays. Some properties of a few of the sets of values being operated on:

        Total strings | Strings with matches | Matches/string: avg | median | max | Time (s)
        --------------+----------------------+---------------------+--------+-----+---------
                  844 |                  413 |                 1.8 |      1 |  58 |      140
                  593 |                  156 |                 1.2 |      1 |   5 |       62
                  272 |                  168 |                 3.2 |      2 |  26 |       10
                  157 |                   47 |                 1.5 |      1 |   4 |      3.2
                  106 |                   48 |                 1.8 |      1 |   8 |      1.3
                   62 |                   47 |                 2.9 |      2 |  16 |      0.4

    Are there any other things I can do to reduce the time it takes to check the criteria, and, more importantly, are there any ways to reduce the number of criteria checks required (for example, by pre-processing the input values), since selectivity is so low?
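
    One pre-processing idea along the lines the question asks about, sketched in Java rather than PHP (purely illustrative): sort the values by length once, then for each value only scan the window of longer candidates whose length could still satisfy either criterion, so the expensive containment and edit-distance checks are never attempted on the rest.

        import java.util.*;

        // Length-based pruning: containment needs the longer string to be under 4x
        // the shorter one, and the edit-distance rule needs the length difference to
        // be under 5% of the combined length. Both bounds are monotone in length, so
        // a sorted list lets each inner scan stop early.
        public class NearDuplicatePruning {
            public static List<int[]> candidatePairs(List<String> values) {
                List<String> sorted = new ArrayList<>(values);
                sorted.sort(Comparator.comparingInt(String::length));
                List<int[]> pairs = new ArrayList<>();
                for (int i = 0; i < sorted.size(); i++) {
                    int shortLen = sorted.get(i).length();
                    for (int j = i + 1; j < sorted.size(); j++) {
                        int longLen = sorted.get(j).length();
                        boolean containmentPossible = longLen < shortLen * 4;
                        boolean editDistancePossible =
                                (longLen - shortLen) * 20 < (longLen + shortLen);
                        if (!containmentPossible && !editDistancePossible) {
                            break; // lengths only grow from here on
                        }
                        pairs.add(new int[]{i, j}); // still needs the real checks
                    }
                }
                return pairs;
            }
        }

    The candidate pairs still have to go through the substring and edit-distance checks; the win is only in how many pairs reach them, which matters most for the larger, low-selectivity sets in the table above.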

    Read the article

  • How do I call Matlab in a script on Windows?

    - by Benjamin Oakes
    I'm working on a project that uses several languages: SQL for querying a database Perl/Ruby for quick-and-dirty processing of the data from the database and some other bookkeeping Matlab for matrix-oriented computations Various statistics languages (SAS/R/SPSS) for processing the Matlab output Each language fits its niche well and we already have a fair amount of code in each. Right now, there's a lot of manual work to run all these steps that would be much better scripted. I've already done this on Linux, and it works relatively well. On Linux: matlab -nosplash -nodesktop -r "command" or echo "command" | matlab -nosplash -nodesktop ...opens Matlab in a "command line" mode. (That is, no windows are created -- it just reads from STDIN, executes, and outputs to STDOUT/STDERR.) My problem is that on Windows (XP and 7), this same code opens up a window and doesn't read from / write to the command line. It just stares me blankly in the face, totally ignoring STDIN and STDOUT. How can I script running Matlab commands on Windows? I basically want something that will do: ruby database_query.rb perl legacy_code.pl ruby other_stuff.rb matlab processing_step_1.m matlab processing_step_2.m # etc, etc. I've found out that Matlab has an -automation flag on Windows to start an "automation server". That sounds like overkill for my purposes, and I'd like something that works on both platforms. What options do I have for automating Matlab in this workflow?

    Read the article

  • Template access of symbol in unnamed namespace

    - by Fred Larson
    We are upgrading our XL C/C++ compiler from V8.0 to V10.1 and found some code that is now giving us an error, even though it compiled under V8.0. Here's a minimal example: test.h: #include <iostream> #include <string> template <class T> void f() { std::cout << TEST << std::endl; } test.cpp: #include <string> #include "test.h" namespace { std::string TEST = "test"; } int main() { f<int>(); return 0; } Under V10.1, we get the following error: "test.h", line 7.16: 1540-0274 (S) The name lookup for "TEST" did not find a declaration. "test.cpp", line 6.15: 1540-1303 (I) "std::string TEST" is not visible. "test.h", line 5.6: 1540-0700 (I) The previous message was produced while processing "f<int>()". "test.cpp", line 11.3: 1540-0700 (I) The previous message was produced while processing "main()". We found a similar difference between g++ 3.3.2 and 4.3.2. I also found that in g++, if I move the #include "test.h" to be after the unnamed namespace declaration, the compile error goes away. So here's my question: what does the Standard say about this? When a template is instantiated, is that instance considered to be declared at the point where the template itself was declared, or is the standard not that clear on this point? I did some looking through the n2461.pdf draft, but didn't really come up with anything definitive.

    Read the article

  • How to construct objects based on XML code?

    - by the_drow
    I have XML files that are a representation of a portion of HTML code. Those XML files also have widget declarations. Example XML file: <message id="msg"> <p> <Widget name="foo" type="SomeComplexWidget" attribute="value"> inner text here, sets another attribute or inserts another widget into the tree if needed... </Widget> </p> </message> I have a main Widget class that all of my widgets inherit from. The question is how I should create the widgets. Here are my options: (1) Create a compile-time tool that parses the XML file and generates the code needed to bind the widgets to the required objects. Advantages: no extra run-time overhead imposed on the system; it's easy to bind setters. Disadvantages: adds another step to the build chain; hard to maintain, as every widget in the system has to be added to the parser; requires macros to bind the widgets; complex code. (2) Find a way to register all widgets into a factory automatically. Advantages: all of the binding is done completely automatically; easier to maintain than option 1, as every new widget only needs to call a WidgetFactory method that registers it. Disadvantages: no idea how to bind setters without introducing a maintainability nightmare; adds memory and run-time overhead; complex code. What do you think is better? Can you suggest a better solution?
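
    A minimal sketch of option 2 (the self-registering factory), written in Java here purely for illustration; the Widget base class, the registry and the registration call in SomeComplexWidget are assumed names rather than anything from the question:

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.function.Supplier;

        // Each widget class registers a constructor under the type name used in
        // the XML; the XML loader then looks names up instead of hard-coding classes.
        public final class WidgetFactory {
            private static final Map<String, Supplier<Widget>> REGISTRY = new ConcurrentHashMap<>();

            public static void register(String xmlTypeName, Supplier<Widget> ctor) {
                REGISTRY.put(xmlTypeName, ctor);
            }

            public static Widget create(String xmlTypeName) {
                Supplier<Widget> ctor = REGISTRY.get(xmlTypeName);
                if (ctor == null) throw new IllegalArgumentException("Unknown widget: " + xmlTypeName);
                return ctor.get();
            }
        }

        abstract class Widget {
            public abstract void setAttribute(String name, String value);
        }

        class SomeComplexWidget extends Widget {
            static { WidgetFactory.register("SomeComplexWidget", SomeComplexWidget::new); }
            @Override public void setAttribute(String name, String value) { /* set the field */ }
        }

    One caveat with this shape: the static registration block only runs once the class is actually loaded, so something still has to reference or scan for each widget class, which is where the maintenance cost of option 2 tends to creep back in.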

    Read the article

  • is there a simple timed lock algorithm avoiding deadlock on multiple mutexes?

    - by Vicente Botet Escriba
    The C++0x thread library and Boost.Thread define a non-member variadic template function that locks all the mutexes at once, which helps to avoid deadlock: template <class L1, class L2, class... L3> void lock(L1&, L2&, L3&...); The same idea can be applied to a non-member variadic template function try_lock_until, which locks all the mutexes before a given time is reached and, like lock(...), helps to avoid deadlock: template <class Clock, class Duration, class L1, class L2, class... L3> void try_lock_until( const chrono::time_point<Clock,Duration>& abs_time, L1&, L2&, L3&...); I have an implementation that follows the same design as the Boost function boost::lock(...), but it is quite complex. As I may be missing something evident, I wanted to know: is there a simple timed lock algorithm that avoids deadlock on multiple mutexes? If no simple implementation exists, could this justify a proposal to Boost? P.S. Please avoid posting complex solutions.
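
    Not the Boost/C++0x algorithm, just a sketch in Java of the simplest deadlock-free scheme: take the locks in one fixed global order, each with whatever time remains before the deadline, and release everything on failure. The consistent acquisition order is what rules out deadlock, and it only works if every caller passes the locks in that same order.

        import java.util.concurrent.TimeUnit;
        import java.util.concurrent.locks.Lock;

        public final class TimedMultiLock {
            /** Acquires all of 'locks' (in the given order) or none; returns success. */
            public static boolean tryLockAllUntil(long deadlineNanos, Lock... locks)
                    throws InterruptedException {
                int acquired = 0;
                try {
                    for (Lock lock : locks) {
                        long remaining = deadlineNanos - System.nanoTime();
                        if (remaining <= 0 || !lock.tryLock(remaining, TimeUnit.NANOSECONDS)) {
                            return false;              // timed out: give up cleanly
                        }
                        acquired++;
                    }
                    return true;
                } finally {
                    if (acquired < locks.length) {     // back off: release what we hold
                        for (int i = 0; i < acquired; i++) locks[i].unlock();
                    }
                }
            }
        }

    The try-and-back-off approach that boost::lock uses avoids the fixed-ordering requirement at the cost of the extra complexity the question mentions; the ordered version above sits at the "simple" end of that trade-off.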

    Read the article

  • Separating code logic from the actual data structures. Best practices?

    - by Patrick
    I have an application that loads lots of data into memory (it needs to perform mathematical simulations on big data sets). This data comes from several database tables that all refer to each other. The consistency rules on the data are rather complex, and looking up all the relevant data requires quite a few hash tables and other auxiliary data structures built on top of it. The problem is that this data may also be changed interactively by the user in a dialog. When the user presses the OK button, I want to perform all the checks to see that he didn't introduce inconsistencies into the data. In practice all the data needs to be checked at once, so I cannot update my data set incrementally and perform the checks one by one. However, all the checking code works on the actual data set loaded in memory and uses the hashes and other data structures. This means I have to do the following: take the user's changes from the dialog, apply them to the big data set, perform the checks on the big data set, and undo all the changes if the checks fail. I don't like this solution, since other threads are continuously using the data set and I don't want to halt them while performing the checks. Also, the undo means that the old situation needs to be set aside, which is not possible either. An alternative is to separate the checking code from the data set (and let it work on explicitly given data, e.g. coming from the dialog), but this means that the checking code cannot use the hashes and other additional data structures, because they only work on the big data set, making the checks much slower. What is a good practice for checking a user's changes against complex data before applying them to the application's data set?

    Read the article

  • How to ask questions to an obstructionist?

    - by Rob Wells
    This is not related to my other recently posted question about "working with a star developer". In a similar vein, how do you work with someone who will only answer the specific question that you ask. I worked with someone who, when you asked a question on a specific aspect of the system, would give you the answer just related to the specific bit you'd asked about. For example, when processing radar messages I'd ask about an aspect of message number RJ546 and he would answer just about that specific part of RJ546. He wouldn't mention anything about the other freaky parts of the message, or mention any related aspects of the other messages. Then you'd go off and work on the processing and all of a sudden all this other freakiness would pop up. What's a good technique when working with this type of person? BTW I later found out that the person who I'd come in to replace had quit because he got sick and tired of having these surprises pop up due to the lack of information provided by this person. Edit: I forgot to add that the person was deliberately obstructionist and believed that job security came from hoarded knowledge not being disseminated.

    Read the article

  • Need help with jquery json data transfer from php file

    - by Scarface
    Hey guys I am trying to return the latest 10 results of a query from a php file, through json format, to a jquery getjson function that prints results. I am getting weird problems though. For example I am only getting 8 entries returned, and some are disordered, and sometimes nothing is returned. I am not really sure what I am doing wrong, so if anyone has any ideas I would really appreciate it. This is my query ($res) SELECT time, user, message FROM comments WHERE topic_id='$topic_id' ORDER BY time DESC LIMIT 10 This is the processing of the results while($row = mysql_fetch_array($res)){ $message=$row['message']; $user=$row['user']; if($row['message'] AND $row['time'] > $_GET['time']) $data[] = $row; } $out = json_encode($data); print $out; And this is the retrieval where prepare is just a function that returns information into a div $.getJSON(files+"processing.php?action=load&time="+0+"&topic_id="+topic_id+"&t=" + (new Date()), function(json) { if(json.length) { for(i=0; i < 10; i++) { $('#comment-list').prepend(prepare(json[i])); $('#list-' + count).fadeIn(1500); } } }); function prepare(response) { count++; var string = '<li class="comment-list" id="list-'+count+'">' //organize info into a div +'</li>'; return string; }

    Read the article

  • MySQL - Exclude rows from Select based on duplication of two columns

    - by Carson C.
    I am attempting to narrow the results of an existing complex query based on conditional matches on multiple columns within the returned data set. I'll simplify the data as much as possible here. Assume that the following table represents the data that my existing complex query has already selected (here ordered by date):

        +----+-----------+------+------------+
        | id | remote_id | type | date       |
        +----+-----------+------+------------+
        |  1 |         1 | A    | 2011-01-01 |
        |  3 |         1 | A    | 2011-01-07 |
        |  5 |         1 | B    | 2011-01-07 |
        |  4 |         1 | A    | 2011-05-01 |
        +----+-----------+------+------------+

    I need to select from that data set based on the following criteria: if the pairing of remote_id and type is unique within the set, always return the row; if the pairing of remote_id and type is not unique, return only the single row of that group for which date is greatest while still less than or equal to now. So, if today is 2011-01-10, I'd like the returned data set to be:

        +----+-----------+------+------------+
        | id | remote_id | type | date       |
        +----+-----------+------+------------+
        |  3 |         1 | A    | 2011-01-07 |
        |  5 |         1 | B    | 2011-01-07 |
        +----+-----------+------+------------+

    For some reason I'm having no luck wrapping my head around this one. I suspect the answer lies in a good application of group_by, but I just can't grasp it. Any help is greatly appreciated!
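
    Not a MySQL answer, just a small restatement of the selection rule in Java (the Row record and its fields are illustrative), which can be handy for sanity-checking whatever GROUP BY query ends up being written:

        import java.time.LocalDate;
        import java.util.*;
        import java.util.stream.Collectors;

        record Row(int id, int remoteId, String type, LocalDate date) {}

        class LatestPerPair {
            // Unique (remote_id, type) pairs are always returned; for duplicated
            // pairs, only the row with the greatest date that is <= today survives.
            static List<Row> select(List<Row> rows, LocalDate today) {
                Map<String, List<Row>> byPair = rows.stream()
                        .collect(Collectors.groupingBy(r -> r.remoteId() + "/" + r.type()));
                List<Row> result = new ArrayList<>();
                for (List<Row> group : byPair.values()) {
                    if (group.size() == 1) {
                        result.add(group.get(0));
                    } else {
                        group.stream()
                             .filter(r -> !r.date().isAfter(today))
                             .max(Comparator.comparing(Row::date))
                             .ifPresent(result::add);
                    }
                }
                return result;
            }
        }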

    Read the article

  • How to open multiple socket connections and do callbacks in PHP

    - by Click Upvote
    I'm writing some code which processes a queue of items. The way it works is this: get the next item flagged as needing to be processed from the MySQL database; request some info from a Google API using cURL and wait until the info is returned; do the remainder of the processing based on the info returned; flag the item as processed in the db and move on to the next item. The problem is with step 2: Google sometimes takes 10-15 seconds to return the requested info, and during this time my script has to sit halted and wait. I'm wondering if I could change the code to do the following instead: get the next 5 items to be processed as usual; request info for items 1-5 from Google, one after the other; when the info for item 1 is returned, a 'callback' should fire which calls a function or otherwise runs some code that then does the remainder of the processing on items 1-5. Then the script starts over until all pending items in the db are marked processed. How can something like this be achieved?
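
    In PHP itself this is normally curl_multi territory (running several cURL handles concurrently and picking up each one as it finishes); the same "fire off several lookups, handle each answer as it arrives" pattern is sketched below in Java with an ExecutorService and a CompletionService, where fetchInfo and finishItem are placeholder names for the API call and the remaining per-item processing.

        import java.util.List;
        import java.util.concurrent.*;

        class ParallelLookups {
            static void process(List<String> items) throws InterruptedException, ExecutionException {
                ExecutorService pool = Executors.newFixedThreadPool(5);
                CompletionService<String> done = new ExecutorCompletionService<>(pool);
                for (String item : items) {
                    done.submit(() -> fetchInfo(item));   // all requests run concurrently
                }
                for (int i = 0; i < items.size(); i++) {
                    String info = done.take().get();      // whichever lookup finished first
                    finishItem(info);                     // the "callback": rest of the processing
                }
                pool.shutdown();
            }

            static String fetchInfo(String item) { return item; /* placeholder for the API call */ }
            static void finishItem(String info) { /* placeholder: finish processing, update the db */ }
        }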

    Read the article

  • What is wrong with my gtkrc file?

    - by PP
    I have written following gtkrc file from some other theme gtkrc file. This theme is normal theme with buttons using pixmap theme engine. I have also given background image to GtkEntry. Problem is that, When i use this theme my buttons doesn't show text one them and my entry box does not show cursor. Plus in engine "pixmap" tag I need to specify image name with it's path as I have already mentioned pixmap_path on the top of rc file but why I still need to specify the path in file = "xxx" # gtkrc file. pixmap_path "./backgrounds:./icons:./buttons:./emotions" gtk-button-images = 1 #Icon Sizes and color definitions gtk-icon-sizes = "gtk-small-toolbar=16,16:gtk-large-toolbar=24,24:gtk-button=16,16" gtk-toolbar-icon-size = GTK_ICON_SIZE_SMALL_TOOLBAR gtk_color_scheme = "fg_color:#000000\nbg_color:#848484\nbase_color:#000000\ntext_color:#000000\nselected_bg_color:#f39638\nselected_fg_color:#000000\ntooltip_bg_color:#634110\ntooltip_fg_color:#ffffff" style "theme-default" { xthickness = 10 ythickness = 10 GtkEntry::honors-transparent-bg-hint = 0 GtkMenuItem::arrow-spacing = 20 GtkMenuItem::horizontal-padding = 50 GtkMenuItem::toggle-spacing = 30 GtkOptionMenu::indicator-size = {11, 5} GtkOptionMenu::indicator-spacing = {6, 5, 4, 4} GtkTreeView::horizontal_separator = 5 GtkTreeView::odd_row_color = "#efefef" GtkTreeView::even_row_color = "#e3e3e3" GtkWidget::link-color = "#0062dc" # blue GtkWidget::visited-link-color = "#8c00dc" #purple GtkButton::default_border = { 0, 0, 0, 0 } GtkButton::child-displacement-x = 0 GtkButton::child-displacement-y = 1 GtkWidget::focus-padding = 0 GtkRange::trough-border = 0 GtkRange::slider-width = 19 GtkRange::stepper-size = 19 GtkScrollbar::min_slider_length = 36 GtkScrollbar::has-secondary-backward-stepper = 1 GtkPaned::handle_size = 8 GtkMenuBar::internal-padding = 0 GtkTreeView::expander_size = 13 #15 GtkExpander::expander_size = 13 #17 GtkScale::slider-length = 35 GtkScale::slider-width = 17 GtkScale::trough-border = 0 GtkWidget::link-color = "#0062dc" GtkWidget::visited-link-color = "#8c00dc" #purple WnckTasklist::fade-overlay-rect = 0 WnckTasklist::fade-loop-time = 5.0 # 5 seconds WnckTasklist::fade-opacity = 0.5 # final opacity #makes menu only overlap border GtkMenu::horizontal-offset = -1 #removes extra padding at top and bottom of menus. 
Makes menuitem overlap border GtkMenu::vertical-padding = 0 #set to the same as roundness, used for better hotspot selection of tabs GtkNotebook::tab-curvature = 2 GtkNotebook::tab-overlap = 4 GtkMenuItem::arrow-spacing = 10 GtkOptionMenu ::indicator-size = {11, 5} GtkCheckButton ::indicator-size = 16 GtkCheckButton ::indicator-spacing = 1 GtkRadioButton ::indicator-size = 16 GtkTreeView::horizontal_separator = 2 GtkTreeView::odd_row_color = "#efefef" GtkTreeView::even_row_color = "#e3e3e3" NautilusIconContainer::normal_icon_color = "#ff0000" GtkEntry::inner-border = {0, 0, 0, 0} GtkScrolledWindow::scrollbar-spacing = 0 GtkScrolledWindow::scrollbars-within-bevel = 1 fg[NORMAL] = @fg_color fg[ACTIVE] = @fg_color fg[PRELIGHT] = @fg_color fg[SELECTED] = @selected_fg_color fg[INSENSITIVE] = shade (3.0,@fg_color) bg[NORMAL] = @bg_color bg[ACTIVE] = shade (0.95,@bg_color) bg[PRELIGHT] = mix(0.92, shade (1.1,@bg_color), @selected_bg_color) bg[SELECTED] = @selected_bg_color bg[INSENSITIVE] = shade (1.06,@bg_color) base[NORMAL] = @base_color base[ACTIVE] = shade (0.65,@base_color) base[PRELIGHT] = @base_color base[SELECTED] = @selected_bg_color base[INSENSITIVE] = shade (1.025,@bg_color) text[NORMAL] = @text_color text[ACTIVE] = shade (0.95,@base_color) text[PRELIGHT] = @text_color text[SELECTED] = @selected_fg_color text[INSENSITIVE] = mix (0.675,shade (0.95,@bg_color),@fg_color) } style "theme-entry" { xthickness = 10 ythickness = 10 GtkEntry::inner-border = {10, 10, 10, 10} GtkEntry::progress-border = {10, 10, 10, 10} GtkEntry::icon-prelight = 1 GtkEntry::state-hintt = 1 #GtkEntry::honors-transparent-bg-hint = 1 text[NORMAL] = "#000000" text[ACTIVE] = "#787878" text[INSENSITIVE] = "#787878" text[SELECTED] = "#FFFFFF" engine "pixmap" { image { function = FLAT_BOX state = NORMAL recolorable = FALSE file = "./backgrounds/entry_background.png" border = { 0, 0, 0, 0 } stretch = TRUE } image { function = FLAT_BOX state = PRELIGHT recolorable = FALSE file = "./backgrounds/entry_background.png" border = { 0, 0, 0, 0 } stretch = TRUE } image { function = FLAT_BOX state = ACTIVE recolorable = FALSE file = "./backgrounds/entry_background.png" border = { 0, 0, 0, 0 } stretch = TRUE } } } #----------------------------------------------- #Chat Balloon Incoming background. style "theme-event-box-top-in" { xthickness = 1 ythickness = 1 GtkEventBox::inner-border = {0, 0, 0, 0} engine "pixmap" { image { function = FLAT_BOX state = NORMAL recolorable = TRUE file = "./backgrounds/chat_in_top.png" border = { 0, 0, 0, 0 } stretch = TRUE } } } style "theme-event-box-mid-in" { xthickness = 1 ythickness = 1 GtkEventBox::inner-border = {0, 0, 0, 0} engine "pixmap" { image { function = FLAT_BOX state = NORMAL recolorable = TRUE file = "./backgrounds/chat_in_mid.png" border = { 0, 0, 0, 0 } stretch = TRUE } } } style "theme-event-box-bot-in" { xthickness = 1 ythickness = 1 GtkEventBox::inner-border = {0, 0, 0, 0} engine "pixmap" { image { function = FLAT_BOX state = NORMAL recolorable = TRUE file = "./backgrounds/chat_in_bot.png" border = { 0, 0, 0, 0 } stretch = TRUE } } } #----------------------------------------------- #Chat Balloon Outgoing background. 
style "theme-event-box-top-out" { xthickness = 1 ythickness = 1 GtkEventBox::inner-border = {0, 0, 0, 0} engine "pixmap" { image { function = FLAT_BOX state = NORMAL recolorable = TRUE file = "./backgrounds/chat_out_top.png" border = { 0, 0, 0, 0 } stretch = TRUE } } } style "theme-event-box-mid-out" { xthickness = 1 ythickness = 1 GtkEventBox::inner-border = {0, 0, 0, 0} engine "pixmap" { image { function = FLAT_BOX state = NORMAL recolorable = TRUE file = "./backgrounds/chat_out_mid.png" border = { 0, 0, 0, 0 } stretch = TRUE } } } style "theme-event-box-bot-out" { xthickness = 1 ythickness = 1 GtkEventBox::inner-border = {0, 0, 0, 0} engine "pixmap" { image { function = FLAT_BOX state = NORMAL recolorable = TRUE file = "./backgrounds/chat_out_bot.png" border = { 0, 0, 0, 0 } stretch = TRUE } } } style "theme-wide" = "theme-default" { xthickness = 2 ythickness = 2 } style "theme-wider" = "theme-default" { xthickness = 3 ythickness = 3 } style "theme-button" { GtkButton::inner-border = {0, 0, 0, 0} GtkWidget::focus-line-width = 0 GtkWidget::focus-padding = 0 bg[NORMAL] = "#414143" bg[ACTIVE] = "#c19676" bg[PRELIGHT] = "#7f4426" bg[SELECTED] = "#ff0000" bg[INSENSITIVE] = "#434346" fg[NORMAL] = "#ffffff" fg[INSENSITIVE] = "#000000" fg[PRELIGHT] = "#ffffff" fg[SELECTED] = "#ffffff" fg[ACTIVE] = "#ffffff" text[NORMAL] = "#ff0000" text[INSENSITIVE] = "#ff0000" text[PRELIGHT] = "#ff0000" text[SELECTED] = "#ff0000" text[INSENSITIVE] = "#434346" text[ACTIVE] = "#ff0000" base[NORMAL] = "#ff0000" base[INSENSITIVE] = "#ff0000" base[PRELIGHT] = "#ff0000" base[SELECTED] = "#ff0000" base[INSENSITIVE] = "#ff0000" engine "pixmap" { image { function = BOX state = NORMAL recolorable = TRUE file = "./buttons/LightButtonAct.png" border = { 0, 0, 0, 0 } stretch = TRUE } image { function = BOX state = PRELIGHT recolorable = TRUE file = "./buttons/LightButtonRoll.png" border = { 0, 0, 0, 0 } stretch = TRUE } image { function = BOX state = ACTIVE recolorable = TRUE file = "./buttons/LightButtonClicked.png" border = { 0, 0, 0, 0 } stretch = TRUE } image { function = BOX state = INSENSITIVE recolorable = TRUE file = "./buttons/LightButtonInact.png" border = { 0, 0, 0, 0 } stretch = TRUE } } } style "theme-toolbar" { xthickness = 2 ythickness = 2 bg[NORMAL] = shade (1.078,@bg_color) } style "theme-handlebox" { bg[NORMAL] = shade (0.95,@bg_color) } style "theme-scale" { bg[NORMAL] = shade (1.06, @bg_color) bg[PRELIGHT] = mix(0.85, shade (1.1,@bg_color), @selected_bg_color) bg[SELECTED] = "#4d4d55" } style "theme-range" { bg[NORMAL] = shade (1.12,@bg_color) bg[ACTIVE] = @bg_color bg[PRELIGHT] = mix(0.95, shade (1.10,@bg_color), @selected_bg_color) #Arrows text[NORMAL] = shade (0.275,@selected_fg_color) text[PRELIGHT] = @selected_fg_color text[ACTIVE] = shade (0.10,@selected_fg_color) text[INSENSITIVE] = mix (0.80,shade (0.90,@bg_color),@fg_color) } style "theme-notebook" = "theme-wider" { xthickness = 4 ythickness = 4 GtkNotebook::tab-curvature = 5 GtkNotebook::tab-vborder = 1 GtkNotebook::tab-overlap = 1 GtkNotebook::tab-vborder = 1 bg[NORMAL] = "#d2d2d2" bg[ACTIVE] = "#e3e3e3" bg[PRELIGHT] = "#848484" bg[SELECTED] = "#848484" bg[INSENSITIVE] = "#848484" text[PRELIGHT] = @selected_fg_color text[NORMAL] = "#000000" text[ACTIVE] = "#737373" text[SELECTED] = "#000000" text[INSENSITIVE] = "#737373" fg[PRELIGHT] = @selected_fg_color fg[NORMAL] = "#000000" fg[ACTIVE] = "#737373" fg[SELECTED] = "#000000" fg[INSENSITIVE] = "#737373" } style "theme-paned" { bg[PRELIGHT] = shade (1.1,@bg_color) } style "theme-panel" { # Menu 
fg[PRELIGHT] = @selected_fg_color font_name = "Bold 9" text[PRELIGHT] = @selected_fg_color } style "theme-menu" { xthickness = 0 ythickness = 0 bg[NORMAL] = shade (1.16,@bg_color) bg[SELECTED] = "#ff9a00" text[PRELIGHT] = @selected_fg_color fg[PRELIGHT] = @selected_fg_color } style "theme-menu-item" = "theme-menu" { xthickness = 3 ythickness = 3 base[SELECTED] = "#ff9a00" base[NORMAL] = "#ff9a00" base[PRELIGHT] = "#ff9a00" base[INSENSITIVE] = "#ff9a00" base[ACTIVE] = "#ff9a00" bg[SELECTED] = "#ff9a00" bg[NORMAL] = shade (1.16,@bg_color) } style "theme-menubar" { #TODO } style "theme-menubar-item" = "theme-menu-item" { #TODO bg[SELECTED] = "#ff9a00" } style "theme-tree" { xthickness = 2 ythickness = 1 font_name = "Bold 9" GtkWidget::focus-padding = 0 bg[NORMAL] = "#5a595a" bg[PRELIGHT] = "#5a595a" bg[ACTIVE] = "#5a5a5a" fg[NORMAL] = "#ffffff" fg[ACTIVE] = "#ffffff" fg[SELECTED] = "#ff9a00" fg[PRELIGHT] = "#ffffff" bg[SELECTED] = "#ff9a00" base[SELECTED] = "#ff9a00" base[NORMAL] = "#ff9a00" base[PRELIGHT] = "#ff9a00" base[INSENSITIVE] = "#ff9a00" base[ACTIVE] = "#ff9a00" text[NORMAL] = "#000000" text[PRELIGHT] = "#ff9a00" text[ACTIVE] = "#ff9a00" text[SELECTED] = "#ff9a00" text[INSENSITIVE] = "#434346" } style "theme-tree-arrow" { bg[NORMAL] = mix(0.70, shade (0.60,@bg_color), shade (0.80,@selected_bg_color)) bg[PRELIGHT] = mix(0.80, @bg_color, @selected_bg_color) } style "theme-progressbar" { font_name = "Bold" bg[SELECTED] = @selected_bg_color fg[PRELIGHT] = @selected_fg_color bg[ACTIVE] = "#fe7e00" bg[NORMAL] = "#ffba00" } style "theme-tooltips" = "theme-wider" { font_name = "Liberation sans 10" bg[NORMAL] = @tooltip_bg_color fg[NORMAL] = @tooltip_fg_color text[NORMAL] = @tooltip_fg_color } style "theme-combo" = "theme-button" { xthickness = 4 ythickness = 4 text[NORMAL] = "#fd7d00" text[INSENSITIVE] = "#8a8a8a" base[NORMAL] = "#e0e0e0" base[INSENSITIVE] = "#aeaeae" } style "theme-combo-box" = "theme-button" { xthickness = 3 ythickness = 2 bg[NORMAL] = "#343539" bg[PRELIGHT] = "#343539" bg[ACTIVE] = "#26272b" bg[INSENSITIVE] = "#404145" } style "theme-entry-combo-box" { xthickness = 6 ythickness = 3 text[NORMAL] = "#000000" text[INSENSITIVE] = "#8a8a8a" base[NORMAL] = "#ffffff" base[INSENSITIVE] = "#aeaeae" } style "theme-combo-arrow" = "theme-button" { xthickness = 1 ythickness = 1 } style "theme-view" { xthickness = 0 ythickness = 0 } style "theme-check-radio-buttons" { GtkWidget::interior-focus = 0 GtkWidget::focus-padding = 1 text[NORMAL] = "#ff0000" base[NORMAL] = "#ff0000" text[SELECTED] = "#ffffff" text[INSENSITIVE] = shade (0.625,@bg_color) base[PRELIGHT] = mix(0.80, @base_color, @selected_bg_color) bg[NORMAL] = "#438FC6" bg[INSENSITIVE] = "#aeaeae" bg[SELECTED] = "#ff8a01" } style "theme-radio-buttons" = "theme-button" { GtkWidget::interior-focus = 0 GtkWidget::focus-padding = 1 text[SELECTED] = @selected_fg_color text[INSENSITIVE] = shade (0.625,@bg_color) base[PRELIGHT] = mix(0.80, @base_color, @selected_bg_color) bg[NORMAL] = "#ffffff" bg[INSENSITIVE] = "#dcdcdc" bg[SELECTED] = @selected_bg_color } style "theme-spin-button" { bg[NORMAL] = "#d2d2d2" bg[ACTIVE] = "#868686" bg[PRELIGHT] = "#7f4426" bg[SELECTED] = shade(1.10,@selected_bg_color) bg[INSENSITIVE] = "#dcdcdc" base[NORMAL] = "#ffffff" base[INSENSITIVE] = "#dcdcdc" text[NORMAL] = "#000000" text[INSENSITIVE] = "#aeaeae" } style "theme-calendar" { xthickness = 0 ythickness = 0 bg[NORMAL] = "#676767" bg[PRELIGHT] = shade(0.92,@bg_color) bg[ACTIVE] = "#ff0000" bg[INSENSITIVE] = "#ff0000" bg[SELECTED] = "#ff0000" 
text[PRELIGHT] = "#000000" text[NORMAL] = "#000000" text[INSENSITIVE]= "#000000" text[SELECTED] = "#ffffff" text[ACTIVE] = "#000000" fg[NORMAL] = "#ffffff" fg[PRELIGHT] = "#ffffff" fg[INSENSITIVE] = "#ffffff" fg[SELECTED] = "#ffffff" fg[ACTIVE] = "#ffffff" base[NORMAL] = "#ff0000" base[NORMAL] = "#aeaeae" base[INSENSITIVE] = "#00ff00" base[SELECTED] = "#f3720d" base[ACTIVE] = "#f3720d" } style "theme-separator-menu-item" { xthickness = 1 ythickness = 0 GtkSeparatorMenuItem::horizontal-padding = 2 # We are setting the desired height by using wide-separators # There is no other way to get the odd height ... GtkWidget::wide-separators = 1 GtkWidget::separator-width = 1 GtkWidget::separator-height = 5 } style "theme-frame" { xthickness = 10 ythickness = 0 GtkWidget::LABEL-SIDE-PAD = 14 GtkWidget::LABEL-PAD = 23 fg[NORMAL] = "#000000" fg[ACTIVE] = "#000000" fg[PRELIGHT] = "#000000" fg[SELECTED] = "#000000" fg[INSENSITIVE] = "#000000" bg[NORMAL] = "#e2e2e2" bg[ACTIVE] = "#000000" bg[PRELIGHT] = "#000000" bg[SELECTED] = "#000000" bg[INSENSITIVE] = "#000000" base[NORMAL] = "#000000" base[ACTIVE] = "#000000" base[PRELIGHT] = "#000000" base[SELECTED] = "#000000" base[INSENSITIVE]= "#000000" text[NORMAL] = "#000000" text[ACTIVE] = "#000000" text[PRELIGHT] = "#000000" text[SELECTED] = "#000000" text[INSENSITIVE]= "#000000" } style "theme-textview" { text[NORMAL] = "#000000" text[ACTIVE] = "#000000" text[PRELIGHT] = "#000000" text[SELECTED] = "#000000" text[INSENSITIVE] = "#434648" bg[NORMAL] = "#ffffff" bg[ACTIVE] = "#ffffff" bg[PRELIGHT] = "#ffffff" bg[SELECTED] = "#ffffff" bg[INSENSITIVE] = "#ffffff" fg[NORMAL] = "#ffffff" fg[ACTIVE] = "#ffffff" fg[PRELIGHT] = "#ffffff" fg[SELECTED] = "#ffffff" fg[INSENSITIVE] = "#ffffff" base[NORMAL] = "#ffffff" base[ACTIVE] = "#ffffff" base[PRELIGHT] = "#ffffff" base[SELECTED] = "#ff9a00" base[INSENSITIVE] = "#ffffff" } style "theme-clist" { text[NORMAL] = "#000000" text[ACTIVE] = "#000000" text[PRELIGHT] = "#000000" text[SELECTED] = "#000000" text[INSENSITIVE] = "#434648" bg[NORMAL] = "#353438" bg[ACTIVE] = "#ff9a00" bg[PRELIGHT] = "#ff9a00" bg[SELECTED] = "#ff9a00" bg[INSENSITIVE] = "#ffffff" fg[NORMAL] = "#000000" fg[ACTIVE] = "#ff9a00" fg[PRELIGHT] = "#ff9a00" fg[SELECTED] = "#fdff00" fg[INSENSITIVE] = "#757575" base[NORMAL] = "#ffffff" base[ACTIVE] = "#fdff00" base[PRELIGHT] = "#000000" base[SELECTED] = "#fdff00" base[INSENSITIVE] = "#757575" } style "theme-label" { bg[NORMAL] = "#414143" bg[ACTIVE] = "#c19676" bg[PRELIGHT] = "#7f4426" bg[SELECTED] = "#000000" bg[INSENSITIVE] = "#434346" fg[NORMAL] = "#000000" fg[INSENSITIVE] = "#434346" fg[PRELIGHT] = "#000000" fg[SELECTED] = "#000000" fg[ACTIVE] = "#000000" text[NORMAL] = "#ffffff" text[INSENSITIVE] = "#434346" text[PRELIGHT] = "#ffffff" text[SELECTED] = "#ffffff" text[ACTIVE] = "#ffffff" base[NORMAL] = "#000000" base[INSENSITIVE] = "#00ff00" base[PRELIGHT] = "#0000ff" base[ACTIVE] = "#f39638" } style "theme-button-label" { bg[NORMAL] = "#414143" bg[ACTIVE] = "#c19676" bg[PRELIGHT] = "#7f4426" bg[SELECTED] = "#000000" bg[INSENSITIVE] = "#434346" fg[NORMAL] = "#ffffff" fg[INSENSITIVE] = "#434346" fg[PRELIGHT] = "#ffffff" fg[SELECTED] = "#ffffff" fg[ACTIVE] = "#ffffff" text[NORMAL] = "#000000" text[INSENSITIVE] = "#434346" text[PRELIGHT] = "#000000" text[SELECTED] = "#000000" text[ACTIVE] = "#000000" base[NORMAL] = "#000000" base[INSENSITIVE] = "#00ff00" base[PRELIGHT] = "#0000ff" base[SELECTED] = "#ff00ff" base[ACTIVE] = "#ffff00" } style "theme-button-check-radio-label" { bg[NORMAL] = "#414143" bg[ACTIVE] = 
"#c19676" bg[PRELIGHT] = "#7f4426" bg[SELECTED] = "#000000" bg[INSENSITIVE] = "#434346" fg[NORMAL] = "#000000" fg[INSENSITIVE] = "#434346" fg[PRELIGHT] = "#000000" fg[SELECTED] = "#000000" fg[ACTIVE] = "#000000" text[NORMAL] = "#ffffff" text[INSENSITIVE] = "#434346" text[PRELIGHT] = "#ffffff" text[SELECTED] = "#000000" text[ACTIVE] = "#ffffff" base[NORMAL] = "#000000" base[INSENSITIVE] = "#00ff00" base[PRELIGHT] = "#0000ff" base[SELECTED] = "#ff00ff" base[ACTIVE] = "#ffff00" } style "theme-table" { bg[NORMAL] = "#848484" bg[ACTIVE] = "#c19676" bg[PRELIGHT] = "#7f4426" bg[SELECTED] = "#000000" bg[INSENSITIVE] = "#434346" } style "theme-iconview" { GtkWidget::focus-line-width=1 bg[NORMAL] = "#000000" bg[ACTIVE] = "#c19676" bg[PRELIGHT] = "#c19676" bg[SELECTED] = "#c19676" bg[INSENSITIVE] = "#969696" fg[NORMAL] = "#ffffff" fg[INSENSITIVE] = "#ffffff" fg[PRELIGHT] = "#ffffff" fg[SELECTED] = "#ffffff" fg[ACTIVE] = "#ffffff" text[NORMAL] = "#000000" text[INSENSITIVE] = "#434346" text[PRELIGHT] = "#000000" text[SELECTED] = "#000000" text[ACTIVE] = "#000000" base[NORMAL] = "#ffffff" base[INSENSITIVE] = "#434346" base[PRELIGHT] = "#FAD184" base[SELECTED] = "#FAD184" base[ACTIVE] = "#FAD184" } # Set Widget styles class "GtkWidget" style "theme-default" class "GtkScale" style "theme-scale" class "GtkRange" style "theme-range" class "GtkPaned" style "theme-paned" class "GtkFrame" style "theme-frame" class "GtkMenu" style "theme-menu" class "GtkMenuBar" style "theme-menubar" class "GtkEntry" style "theme-entry" class "GtkProgressBar" style "theme-progressbar" class "GtkToolbar" style "theme-toolbar" class "GtkSeparator" style "theme-wide" class "GtkCalendar" style "theme-calendar" class "GtkTable" style "theme-table" widget_class "*<GtkMenuItem>*" style "theme-menu-item" widget_class "*<GtkMenuBar>.<GtkMenuItem>*" style "theme-menubar-item" widget_class "*<GtkSeparatorMenuItem>*" style "theme-separator-menu-item" widget_class "*<GtkLabel>" style "theme-label" widget_class "*<GtkButton>" style "theme-button" widget_class "*<GtkButton>*<GtkLabel>*" style "theme-button-label" widget_class "*<GtkCheckButton>" style "theme-check-radio-buttons" widget_class "*<GtkToggleButton>.<GtkLabel>*" style "theme-button" widget_class "*<GtkCheckButton>.<GtkLabel>*" style "theme-button-check-radio-label" widget_class "*<GtkRadioButton>.<GtkLabel>*" style "theme-button-check-radio-label" widget_class "*<GtkTextView>" style "theme-textview" widget_class "*<GtkList>" style "theme-textview" widget_class "*<GtkCList>" style "theme-clist" widget_class "*<GtkIconView>" style "theme-iconview" widget_class "*<GtkHandleBox>" style "theme-handlebox" widget_class "*<GtkNotebook>" style "theme-notebook" widget_class "*<GtkNotebook>*<GtkEventBox>" style "theme-notebook" widget_class "*<GtkNotebook>*<GtkDrawingArea>" style "theme-notebook" widget_class "*<GtkNotebook>*<GtkLayout>" style "theme-notebook" widget_class "*<GtkNotebook>*<GtkViewport>" style "theme-notebook" widget_class "*<GtkNotebook>.<GtkLabel>*" style "theme-notebook" #for tabs # Combo Box Stuff widget_class "*<GtkCombo>*" style "theme-combo" widget_class "*<GtkComboBox>*<GtkButton>" style "theme-combo-box" widget_class "*<GtkComboBoxEntry>*" style "theme-entry-combo-box" widget_class "*<GtkSpinButton>*" style "theme-spin-button" widget_class "*<GtkSpinButton>*<GtkArrow>*" style:highest "theme-tree-arrow" # Tool Tips Stuff widget "gtk-tooltip*" style "theme-tooltips" # Tree View Stuff widget_class "*<GtkTreeView>.<GtkButton>*" style "theme-tree" widget_class 
"*<GtkCTree>.<GtkButton>*" style "theme-tree" widget_class "*<GtkList>.<GtkButton>*" style "theme-tree" widget_class "*<GtkCList>.<GtkButton>*" style "theme-tree" # For arrow bg widget_class "*<GtkTreeView>.<GtkButton>*<GtkArrow>" style "theme-tree-arrow" widget_class "*<GtkCTree>.<GtkButton>*<GtkArrow>" style "theme-tree-arrow" widget_class "*<GtkList>.<GtkButton>*<GtkArrow>" style "theme-tree-arrow" ####################################################### ## GNOME specific ####################################################### widget_class "*.ETree.ECanvas" style "theme-tree" widget_class "*.ETable.ECanvas" style "theme-tree" style "panelbuttons" = "theme-button" { # As buttons are draw lower this helps center text xthickness = 3 ythickness = 3 } widget_class "*Panel*<GtkButton>*" style "panelbuttons" style "murrine-fg-is-text-color-workaround" { text[NORMAL] = "#000000" text[ACTIVE] = "#fdff00" text[SELECTED] = "#fdff00" text[INSENSITIVE] = "#757575" bg[SELECTED] = "#b85e03" bg[ACTIVE] = "#b85e03" bg[SELECTED] = "#b85e03" fg[SELECTED] = "#ffffff" fg[NORMAL] = "#ffffff" fg[ACTIVE] = "#ffffff" fg[INSENSITIVE] = "#434348" fg[PRELIGHT] = "#ffffff" base[SELECTED] = "#ff9a00" base[NORMAL] = "#ffffff" base[ACTIVE] = "#ff9a00" base[INSENSITIVE] = "#434348" base[PRELIGHT] = "#ffffff" } widget_class "*.<GtkTreeView>*" style "murrine-fg-is-text-color-workaround" style "murrine-combobox-text-color-workaround" { text[NORMAL] = "#FFFFF" text[PRELIGHT] = "#FFFFF" text[SELECTED] = "#FFFFF" text[ACTIVE] = "#FFFFF" text[INSENSITIVE] = "#FFFFF" } widget_class "*.<GtkComboBox>.<GtkCellView>" style "murrine-combobox-text-color-workaround" style "murrine-menuitem-text-is-fg-color-workaround" { bg[NORMAL] = "#0000ff" text[NORMAL] = "#ffffff" text[PRELIGHT] = "#ffffff"#"#FD7D00" text[SELECTED] = "#ffffff"#"#ff0000"# @selected_fg_color text[ACTIVE] = "#ffffff"#"#ff0000"# "#FD7D00" text[INSENSITIVE] = "#ffffff"#ff0000"# "#414143" } widget "*.gtk-combobox-popup-menu.*" style "murrine-menuitem-text-is-fg-color-workaround"

    Read the article

  • What are good design practices when working with Entity Framework

    - by AD
    This will apply mostly for an asp.net application where the data is not accessed via soa. Meaning that you get access to the objects loaded from the framework, not Transfer Objects, although some recommendation still apply. This is a community post, so please add to it as you see fit. Applies to: Entity Framework 1.0 shipped with Visual Studio 2008 sp1. Why pick EF in the first place? Considering it is a young technology with plenty of problems (see below), it may be a hard sell to get on the EF bandwagon for your project. However, it is the technology Microsoft is pushing (at the expense of Linq2Sql, which is a subset of EF). In addition, you may not be satisfied with NHibernate or other solutions out there. Whatever the reasons, there are people out there (including me) working with EF and life is not bad.make you think. EF and inheritance The first big subject is inheritance. EF does support mapping for inherited classes that are persisted in 2 ways: table per class and table the hierarchy. The modeling is easy and there are no programming issues with that part. (The following applies to table per class model as I don't have experience with table per hierarchy, which is, anyway, limited.) The real problem comes when you are trying to run queries that include one or many objects that are part of an inheritance tree: the generated sql is incredibly awful, takes a long time to get parsed by the EF and takes a long time to execute as well. This is a real show stopper. Enough that EF should probably not be used with inheritance or as little as possible. Here is an example of how bad it was. My EF model had ~30 classes, ~10 of which were part of an inheritance tree. On running a query to get one item from the Base class, something as simple as Base.Get(id), the generated SQL was over 50,000 characters. Then when you are trying to return some Associations, it degenerates even more, going as far as throwing SQL exceptions about not being able to query more than 256 tables at once. Ok, this is bad, EF concept is to allow you to create your object structure without (or with as little as possible) consideration on the actual database implementation of your table. It completely fails at this. So, recommendations? Avoid inheritance if you can, the performance will be so much better. Use it sparingly where you have to. In my opinion, this makes EF a glorified sql-generation tool for querying, but there are still advantages to using it. And ways to implement mechanism that are similar to inheritance. Bypassing inheritance with Interfaces First thing to know with trying to get some kind of inheritance going with EF is that you cannot assign a non-EF-modeled class a base class. Don't even try it, it will get overwritten by the modeler. So what to do? You can use interfaces to enforce that classes implement some functionality. For example here is a IEntity interface that allow you to define Associations between EF entities where you don't know at design time what the type of the entity would be. public enum EntityTypes{ Unknown = -1, Dog = 0, Cat } public interface IEntity { int EntityID { get; } string Name { get; } Type EntityType { get; } } public partial class Dog : IEntity { // implement EntityID and Name which could actually be fields // from your EF model Type EntityType{ get{ return EntityTypes.Dog; } } } Using this IEntity, you can then work with undefined associations in other classes // lets take a class that you defined in your model. 
// that class has a mapping to the columns: PetID, PetType public partial class Person { public IEntity GetPet() { return IEntityController.Get(PetID,PetType); } } which makes use of some extension functions: public class IEntityController { static public IEntity Get(int id, EntityTypes type) { switch (type) { case EntityTypes.Dog: return Dog.Get(id); case EntityTypes.Cat: return Cat.Get(id); default: throw new Exception("Invalid EntityType"); } } } Not as neat as having plain inheritance, particularly considering you have to store the PetType in an extra database field, but considering the performance gains, I would not look back. It also cannot model one-to-many, many-to-many relationship, but with creative uses of 'Union' it could be made to work. Finally, it creates the side effet of loading data in a property/function of the object, which you need to be careful about. Using a clear naming convention like GetXYZ() helps in that regards. Compiled Queries Entity Framework performance is not as good as direct database access with ADO (obviously) or Linq2SQL. There are ways to improve it however, one of which is compiling your queries. The performance of a compiled query is similar to Linq2Sql. What is a compiled query? It is simply a query for which you tell the framework to keep the parsed tree in memory so it doesn't need to be regenerated the next time you run it. So the next run, you will save the time it takes to parse the tree. Do not discount that as it is a very costly operation that gets even worse with more complex queries. There are 2 ways to compile a query: creating an ObjectQuery with EntitySQL and using CompiledQuery.Compile() function. (Note that by using an EntityDataSource in your page, you will in fact be using ObjectQuery with EntitySQL, so that gets compiled and cached). An aside here in case you don't know what EntitySQL is. It is a string-based way of writing queries against the EF. Here is an example: "select value dog from Entities.DogSet as dog where dog.ID = @ID". The syntax is pretty similar to SQL syntax. You can also do pretty complex object manipulation, which is well explained [here][1]. Ok, so here is how to do it using ObjectQuery< string query = "select value dog " + "from Entities.DogSet as dog " + "where dog.ID = @ID"; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance)); oQuery.Parameters.Add(new ObjectParameter("ID", id)); oQuery.EnablePlanCaching = true; return oQuery.FirstOrDefault(); The first time you run this query, the framework will generate the expression tree and keep it in memory. So the next time it gets executed, you will save on that costly step. In that example EnablePlanCaching = true, which is unnecessary since that is the default option. The other way to compile a query for later use is the CompiledQuery.Compile method. This uses a delegate: static readonly Func<Entities, int, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, Dog>((ctx, id) => ctx.DogSet.FirstOrDefault(it => it.ID == id)); or using linq static readonly Func<Entities, int, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, Dog>((ctx, id) => (from dog in ctx.DogSet where dog.ID == id select dog).FirstOrDefault()); to call the query: query_GetDog.Invoke( YourContext, id ); The advantage of CompiledQuery is that the syntax of your query is checked at compile time, where as EntitySQL is not. However, there are other consideration... 
Includes Lets say you want to have the data for the dog owner to be returned by the query to avoid making 2 calls to the database. Easy to do, right? EntitySQL string query = "select value dog " + "from Entities.DogSet as dog " + "where dog.ID = @ID"; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance)).Include("Owner"); oQuery.Parameters.Add(new ObjectParameter("ID", id)); oQuery.EnablePlanCaching = true; return oQuery.FirstOrDefault(); CompiledQuery static readonly Func<Entities, int, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, Dog>((ctx, id) => (from dog in ctx.DogSet.Include("Owner") where dog.ID == id select dog).FirstOrDefault()); Now, what if you want to have the Include parametrized? What I mean is that you want to have a single Get() function that is called from different pages that care about different relationships for the dog. One cares about the Owner, another about his FavoriteFood, another about his FavotireToy and so on. Basicly, you want to tell the query which associations to load. It is easy to do with EntitySQL public Dog Get(int id, string include) { string query = "select value dog " + "from Entities.DogSet as dog " + "where dog.ID = @ID"; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance)) .IncludeMany(include); oQuery.Parameters.Add(new ObjectParameter("ID", id)); oQuery.EnablePlanCaching = true; return oQuery.FirstOrDefault(); } The include simply uses the passed string. Easy enough. Note that it is possible to improve on the Include(string) function (that accepts only a single path) with an IncludeMany(string) that will let you pass a string of comma-separated associations to load. Look further in the extension section for this function. If we try to do it with CompiledQuery however, we run into numerous problems: The obvious static readonly Func<Entities, int, string, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, string, Dog>((ctx, id, include) => (from dog in ctx.DogSet.Include(include) where dog.ID == id select dog).FirstOrDefault()); will choke when called with: query_GetDog.Invoke( YourContext, id, "Owner,FavoriteFood" ); Because, as mentionned above, Include() only wants to see a single path in the string and here we are giving it 2: "Owner" and "FavoriteFood" (which is not to be confused with "Owner.FavoriteFood"!). Then, let's use IncludeMany(), which is an extension function static readonly Func<Entities, int, string, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, string, Dog>((ctx, id, include) => (from dog in ctx.DogSet.IncludeMany(include) where dog.ID == id select dog).FirstOrDefault()); Wrong again, this time it is because the EF cannot parse IncludeMany because it is not part of the functions that is recognizes: it is an extension. Ok, so you want to pass an arbitrary number of paths to your function and Includes() only takes a single one. What to do? You could decide that you will never ever need more than, say 20 Includes, and pass each separated strings in a struct to CompiledQuery. But now the query looks like this: from dog in ctx.DogSet.Include(include1).Include(include2).Include(include3) .Include(include4).Include(include5).Include(include6) .[...].Include(include19).Include(include20) where dog.ID == id select dog which is awful as well. Ok, then, but wait a minute. Can't we return an ObjectQuery< with CompiledQuery? Then set the includes on that? 
Well, that what I would have thought so as well: static readonly Func<Entities, int, ObjectQuery<Dog>> query_GetDog = CompiledQuery.Compile<Entities, int, string, ObjectQuery<Dog>>((ctx, id) => (ObjectQuery<Dog>)(from dog in ctx.DogSet where dog.ID == id select dog)); public Dog GetDog( int id, string include ) { ObjectQuery<Dog> oQuery = query_GetDog(id); oQuery = oQuery.IncludeMany(include); return oQuery.FirstOrDefault; } That should have worked, except that when you call IncludeMany (or Include, Where, OrderBy...) you invalidate the cached compiled query because it is an entirely new one now! So, the expression tree needs to be reparsed and you get that performance hit again. So what is the solution? You simply cannot use CompiledQueries with parametrized Includes. Use EntitySQL instead. This doesn't mean that there aren't uses for CompiledQueries. It is great for localized queries that will always be called in the same context. Ideally CompiledQuery should always be used because the syntax is checked at compile time, but due to limitation, that's not possible. An example of use would be: you may want to have a page that queries which two dogs have the same favorite food, which is a bit narrow for a BusinessLayer function, so you put it in your page and know exactly what type of includes are required. Passing more than 3 parameters to a CompiledQuery Func is limited to 5 parameters, of which the last one is the return type and the first one is your Entities object from the model. So that leaves you with 3 parameters. A pitance, but it can be improved on very easily. public struct MyParams { public string param1; public int param2; public DateTime param3; } static readonly Func<Entities, MyParams, IEnumerable<Dog>> query_GetDog = CompiledQuery.Compile<Entities, MyParams, IEnumerable<Dog>>((ctx, myParams) => from dog in ctx.DogSet where dog.Age == myParams.param2 && dog.Name == myParams.param1 and dog.BirthDate > myParams.param3 select dog); public List<Dog> GetSomeDogs( int age, string Name, DateTime birthDate ) { MyParams myParams = new MyParams(); myParams.param1 = name; myParams.param2 = age; myParams.param3 = birthDate; return query_GetDog(YourContext,myParams).ToList(); } Return Types (this does not apply to EntitySQL queries as they aren't compiled at the same time during execution as the CompiledQuery method) Working with Linq, you usually don't force the execution of the query until the very last moment, in case some other functions downstream wants to change the query in some way: static readonly Func<Entities, int, string, IEnumerable<Dog>> query_GetDog = CompiledQuery.Compile<Entities, int, string, IEnumerable<Dog>>((ctx, age, name) => from dog in ctx.DogSet where dog.Age == age && dog.Name == name select dog); public IEnumerable<Dog> GetSomeDogs( int age, string name ) { return query_GetDog(YourContext,age,name); } public void DataBindStuff() { IEnumerable<Dog> dogs = GetSomeDogs(4,"Bud"); // but I want the dogs ordered by BirthDate gridView.DataSource = dogs.OrderBy( it => it.BirthDate ); } What is going to happen here? By still playing with the original ObjectQuery (that is the actual return type of the Linq statement, which implements IEnumerable), it will invalidate the compiled query and be force to re-parse. So, the rule of thumb is to return a List< of objects instead. 
static readonly Func<Entities, int, string, IEnumerable<Dog>> query_GetDog = CompiledQuery.Compile<Entities, int, string, IEnumerable<Dog>>((ctx, age, name) => from dog in ctx.DogSet where dog.Age == age && dog.Name == name select dog); public List<Dog> GetSomeDogs( int age, string name ) { return query_GetDog(YourContext,age,name).ToList(); //<== change here } public void DataBindStuff() { List<Dog> dogs = GetSomeDogs(4,"Bud"); // but I want the dogs ordered by BirthDate gridView.DataSource = dogs.OrderBy( it => it.BirthDate ); } When you call ToList(), the query gets executed as per the compiled query and then, later, the OrderBy is executed against the objects in memory. It may be a little bit slower, but I'm not even sure. One sure thing is that you have no worries about mis-handling the ObjectQuery and invalidating the compiled query plan. Once again, that is not a blanket statement. ToList() is a defensive programming trick, but if you have a valid reason not to use ToList(), go ahead. There are many cases in which you would want to refine the query before executing it. Performance What is the performance impact of compiling a query? It can actually be fairly large. A rule of thumb is that compiling and caching the query for reuse takes at least double the time of simply executing it without caching. For complex queries (read: inheritance), I have seen upwards of 10 seconds. So, the first time a pre-compiled query gets called, you get a performance hit. After that first hit, performance is noticeably better than the same non-pre-compiled query, practically the same as Linq2Sql. When you load a page with pre-compiled queries the first time you will get a hit. It will load in maybe 5-15 seconds (obviously more than one pre-compiled query will end up being called), while subsequent loads will take less than 300ms. Dramatic difference, and it is up to you to decide if it is ok for your first user to take a hit or you want a script to call your pages to force a compilation of the queries. Can this query be cached? Dog dog = (from dog in YourContext.DogSet where dog.ID == id select dog).FirstOrDefault(); No, ad-hoc Linq queries are not cached and you will incur the cost of generating the tree every single time you call it. Parametrized Queries Most search capabilities involve heavily parametrized queries. There are even libraries available that will let you build a parametrized query out of lambda expressions. The problem is that you cannot use pre-compiled queries with those. One way around that is to map out all the possible criteria in the query and flag which ones you want to use: public struct MyParams { public string name; public bool checkName; public int age; public bool checkAge; } static readonly Func<Entities, MyParams, IEnumerable<Dog>> query_GetDog = CompiledQuery.Compile<Entities, MyParams, IEnumerable<Dog>>((ctx, myParams) => from dog in ctx.DogSet where (!myParams.checkAge || dog.Age == myParams.age) && (!myParams.checkName || dog.Name == myParams.name) select dog); protected List<Dog> GetSomeDogs() { MyParams myParams = new MyParams(); myParams.name = "Bud"; myParams.checkName = true; myParams.age = 0; myParams.checkAge = false; return query_GetDog(YourContext,myParams).ToList(); } Note that each flag has to be combined with its criterion using "or" (skip the filter when the flag is off); combining them with "and" would make a disabled flag filter out every row. The advantage here is that you get all the benefits of a pre-compiled query.
The disadvantages are that you most likely will end up with a where clause that is pretty difficult to maintain, that you will incur a bigger penalty for pre-compiling the query and that each query you run is not as efficient as it could be (particularly with joins thrown in). Another way is to build an EntitySQL query piece by piece, like we all did with SQL: protected List<Dog> GetSomeDogs( string name, int age) { string query = "select value dog from Entities.DogSet where 1 = 1 "; if( !String.IsNullOrEmpty(name) ) query = query + " and dog.Name = @Name "; if( age > 0 ) query = query + " and dog.Age = @Age "; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>( query, YourContext ); if( !String.IsNullOrEmpty(name) ) oQuery.Parameters.Add( new ObjectParameter( "Name", name ) ); if( age > 0 ) oQuery.Parameters.Add( new ObjectParameter( "Age", age ) ); return oQuery.ToList(); } Here the problems are: - there is no syntax checking during compilation - each different combination of parameters generates a different query which will need to be pre-compiled when it is first run. In this case, there are only 4 different possible queries (no params, age-only, name-only and both params), but you can see that there can be way more with a normal real-world search. - No one likes to concatenate strings! Another option is to query a large subset of the data and then narrow it down in memory. This is particularly useful if you are working with a definite subset of the data, like all the dogs in a city. You know there are a lot but you also know there aren't that many... so your CityDog search page can load all the dogs for the city in memory, which is a single pre-compiled query, and then refine the results: protected List<Dog> GetSomeDogs( string name, int age, string city) { string query = "select value dog from Entities.DogSet where dog.Owner.Address.City = @City "; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>( query, YourContext ); oQuery.Parameters.Add( new ObjectParameter( "City", city ) ); List<Dog> dogs = oQuery.ToList(); if( !String.IsNullOrEmpty(name) ) dogs = dogs.Where( it => it.Name == name ).ToList(); if( age > 0 ) dogs = dogs.Where( it => it.Age == age ).ToList(); return dogs; } It is particularly useful when you start by displaying all the data and then allow for filtering. Problems: - could lead to serious data transfer if you are not careful about your subset. - you can only filter on the data that you returned. It means that if you don't return the Dog.Owner association, you will not be able to filter on Dog.Owner.Name. So what is the best solution? There isn't any. You need to pick the solution that works best for you and your problem: - Use lambda-based query building when you don't care about pre-compiling your queries. - Use fully-defined pre-compiled Linq queries when your object structure is not too complex. - Use EntitySQL/string concatenation when the structure could be complex and when the possible number of different resulting queries is small (which means fewer pre-compilation hits). - Use in-memory filtering when you are working with a smallish subset of the data or when you had to fetch all of the data at first anyway (if the performance is fine with all the data, then filtering in memory will not cause any extra time to be spent in the db).
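As an aside, the IncludeMany() helper used throughout the snippets above is referenced as coming from an extension section that is not part of this excerpt. A minimal sketch of what such a helper might look like, assuming it simply splits a comma-separated list of association paths and applies Include() once per path (the class name is illustrative):

using System;
using System.Data.Objects;

public static class ObjectQueryExtensions
{
    // Applies Include() once per comma-separated path,
    // e.g. "Owner,FavoriteFood" or "Owner.FavoriteFood".
    public static ObjectQuery<T> IncludeMany<T>(this ObjectQuery<T> query, string paths)
    {
        if (String.IsNullOrEmpty(paths))
            return query;

        foreach (string path in paths.Split(','))
        {
            string trimmed = path.Trim();
            if (trimmed.Length > 0)
                query = query.Include(trimmed); // each Include() returns a new ObjectQuery<T>
        }
        return query;
    }
}

With an extension like this in scope, the EntitySQL-based Get(id, include) shown earlier works as written, since the include string is only parsed at runtime.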
Singleton access The best way to deal with your context and entities across all your pages is to use the singleton pattern: public sealed class YourContext { private const string instanceKey = "On3GoModelKey"; YourContext(){} public static YourEntities Instance { get { HttpContext context = HttpContext.Current; if( context == null ) return Nested.instance; if (context.Items[instanceKey] == null) { YourEntities entity = new YourEntities(); context.Items[instanceKey] = entity; } return (YourEntities)context.Items[instanceKey]; } } class Nested { // Explicit static constructor to tell C# compiler // not to mark type as beforefieldinit static Nested() { } internal static readonly YourEntities instance = new YourEntities(); } } NoTracking, is it worth it? When executing a query, you can tell the framework to track the objects it will return or not. What does it mean? With tracking enabled (the default option), the framework will track what is going on with the object (has it been modified? Created? Deleted?) and will also link objects together when further queries are made from the database, which is what is of interest here. For example, let's assume that the Dog with ID == 2 has an owner whose ID == 10. Dog dog = (from dog in YourContext.DogSet where dog.ID == 2 select dog).FirstOrDefault(); //dog.OwnerReference.IsLoaded == false; Person owner = (from o in YourContext.PersonSet where o.ID == 10 select o).FirstOrDefault(); //dog.OwnerReference.IsLoaded == true; If we were to do the same with no tracking, the result would be different. ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>) (from dog in YourContext.DogSet where dog.ID == 2 select dog); oDogQuery.MergeOption = MergeOption.NoTracking; Dog dog = oDogQuery.FirstOrDefault(); //dog.OwnerReference.IsLoaded == false; ObjectQuery<Person> oPersonQuery = (ObjectQuery<Person>) (from o in YourContext.PersonSet where o.ID == 10 select o); oPersonQuery.MergeOption = MergeOption.NoTracking; Person owner = oPersonQuery.FirstOrDefault(); //dog.OwnerReference.IsLoaded == false; Tracking is very useful and in a perfect world without performance issues, it would always be on. But in this world, there is a price for it, in terms of performance. So, should you use NoTracking to speed things up? It depends on what you are planning to use the data for. Is there any chance that the data you query with NoTracking can be used to make updates/inserts/deletes in the database? If so, don't use NoTracking, because associations are not tracked and that will cause exceptions to be thrown. In a page where there are absolutely no updates to the database, you can use NoTracking. Mixing tracking and NoTracking is possible, but it requires you to be extra careful with updates/inserts/deletes. The problem is that if you mix them, you risk having the framework try to Attach() a NoTracking object to the context where another copy of the same object exists with tracking on. Basically, what I am saying is that Dog dog1 = (from dog in YourContext.DogSet where dog.ID == 2 select dog).FirstOrDefault(); ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>) (from dog in YourContext.DogSet where dog.ID == 2 select dog); oDogQuery.MergeOption = MergeOption.NoTracking; Dog dog2 = oDogQuery.FirstOrDefault(); dog1 and dog2 are 2 different objects, one tracked and one not. Using the detached object in an update/insert will force an Attach() that will say "Wait a minute, I already have an object here with the same database key. Fail".
And when you Attach() one object, all of its hierarchy gets attached as well, causing problems everywhere. Be extra careful. How much faster is it with NoTracking? It depends on the queries. Some are much more susceptible to tracking overhead than others. I don't have a fast and easy rule for it, but it helps. So I should use NoTracking everywhere then? Not exactly. There are some advantages to tracking objects. The first one is that the object is cached, so subsequent calls for that object will not hit the database. That cache is only valid for the lifetime of the YourEntities object, which, if you use the singleton code above, is the same as the page lifetime. One page request == one YourEntities object. So for multiple calls for the same object, it will load only once per page request. (Other caching mechanisms could extend that). What happens when you are using NoTracking and try to load the same object multiple times? The database will be queried each time, so there is an impact there. How often do/should you call for the same object during a single page request? As little as possible of course, but it does happen. Also remember the piece above about having the associations connected automatically for you? You don't have that with NoTracking, so if you load your data in multiple batches, you will not have a link between them: ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>)(from dog in YourContext.DogSet select dog); oDogQuery.MergeOption = MergeOption.NoTracking; List<Dog> dogs = oDogQuery.ToList(); ObjectQuery<Person> oPersonQuery = (ObjectQuery<Person>)(from o in YourContext.PersonSet select o); oPersonQuery.MergeOption = MergeOption.NoTracking; List<Person> owners = oPersonQuery.ToList(); In this case, no dog will have its .Owner property set. Some things to keep in mind when you are trying to optimize the performance. No lazy loading, what am I to do? This can be seen as a blessing in disguise. Of course it is annoying to load everything manually. However, it decreases the number of calls to the db and forces you to think about when you should load data. The more you can load in one database call the better. That was always true, but it is enforced now with this 'feature' of EF. Of course, you can call if( !ObjectReference.IsLoaded ) ObjectReference.Load(); if you want to, but a better practice is to force the framework to load the objects you know you will need in one shot. This is where the discussion about parametrized Includes begins to make sense. Let's say you have your Dog object: public class Dog { public Dog Get(int id) { return YourContext.DogSet.FirstOrDefault(it => it.ID == id ); } } This is the type of function you work with all the time. It gets called from all over the place and once you have that Dog object, you will do very different things to it in different functions. First, it should be pre-compiled, because you will call it very often. Second, each different page will want to have access to a different subset of the Dog data. Some will want the Owner, some the FavoriteToy, etc. Of course, you could call Load() for each reference you need anytime you need one. But that will generate a call to the database each time. Bad idea. So instead, each page will ask for the data it wants to see when it first requests the Dog object: static public Dog Get(int id) { return GetDog(entity,"");} static public Dog Get(int id, string includePath) { string query = "select value o " + " from YourEntities.DogSet as o " +
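The Get(id, includePath) snippet above is cut off in this excerpt. Purely as an illustration of where it is heading, here is a hedged sketch of how such a helper might continue, reusing the article's EntitySQL pattern and the IncludeMany() helper sketched earlier (the completion is a guess, not the author's original code):

static public Dog Get(int id, string includePath)
{
    string query = "select value o " +
                   " from YourEntities.DogSet as o " +
                   " where o.ID = @ID";

    // Parametrized include: each caller names only the associations it needs
    ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, YourContext.Instance)
        .IncludeMany(includePath);
    oQuery.Parameters.Add(new ObjectParameter("ID", id));
    oQuery.EnablePlanCaching = true;

    return oQuery.FirstOrDefault();
}

Each page can then pass exactly the association paths it needs ("Owner", "Owner,FavoriteToy", ...) while still benefiting from EntitySQL plan caching.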

    Read the article

  • Cannot see the variable In my own JQuery plugin's function.

    - by qinHaiXiang
    I am writing one of my own JQuery plugin. And I got some strange which make me confused. I am using JQuery UI datepicker with my plugin. ;(function($){ var newMW = 1, mwZIndex = 0; // IgtoMW contructor Igtomw = function(elem , options){ var activePanel, lastPanel, daysWithRecords, sliding; // used to check the animation below is executed to the end. // used to access the plugin's default configuration this.opts = $.extend({}, $.fn.igtomw.defaults, options); // intial the model window this.intialMW(); }; $.extend(Igtomw.prototype, { // intial model window intialMW : function(){ this.sliding = false; //this.daysWithRecords = []; this.igtoMW = $('<div />',{'id':'igto'+newMW,'class':'igtoMW',}) .css({'z-index':mwZIndex}) // make it in front of all exist model window; .appendTo('body') .draggable({ containment: 'parent' , handle: '.dragHandle' , distance: 5 }); //var igtoWrapper = igtoMW.append($('<div />',{'class':'igtoWrapper'})); this.igtoWrapper = $('<div />',{'class':'igtoWrapper'}).appendTo(this.igtoMW); this.igtoOpacityBody = $('<div />',{'class':'igtoOpacityBody'}).appendTo(this.igtoMW); //var igtoHeaderInfo = igtoWrapper.append($('<div />',{'class':'igtoHeaderInfo dragHandle'})); this.igtoHeaderInfo = $('<div />',{'class':'igtoHeaderInfo dragHandle'}) .appendTo(this.igtoWrapper); this.igtoQuickNavigation = $('<div />',{'class':'igtoQuickNavigation'}) .css({'color':'#fff'}) .appendTo(this.igtoWrapper); this.igtoContentSlider = $('<div />',{'class':'igtoContentSlider'}) .appendTo(this.igtoWrapper); this.igtoQuickMenu = $('<div />',{'class':'igtoQuickMenu'}) .appendTo(this.igtoWrapper); this.igtoFooter = $('<div />',{'class':'igtoFooter dragHandle'}) .appendTo(this.igtoWrapper); // append to igtoHeaderInfo this.headTitle = this.igtoHeaderInfo.append($('<div />',{'class':'headTitle'})); // append to igtoQuickNavigation this.igQuickNav = $('<div />', {'class':'igQuickNav'}) .html('??') .appendTo(this.igtoQuickNavigation); // append to igtoContentSlider this.igInnerPanelTopMenu = $('<div />',{'class':'igInnerPanelTopMenu'}) .appendTo(this.igtoContentSlider); this.igInnerPanelTopMenu.append('<div class="igInnerPanelButtonPreWrapper"><a href="" class="igInnerPanelButton Pre" action="" style="background-image:url(images/igto/igInnerPanelTopMenu.bt.bg.png);"></a></div>'); this.igInnerPanelTopMenu.append('<div class="igInnerPanelSearch"><input type="text" name="igInnerSearch" /><a href="" class="igInnerSearch">??</a></div>' ); this.igInnerPanelTopMenu.append('<div class="igInnerPanelButtonNextWrapper"><a href="" class="igInnerPanelButton Next" action="sm" style="background-image:url(images/igto/igInnerPanelTopMenu.bt.bg.png); background-position:-272px"></a></div>' ); this.igInnerPanelBottomMenu = $('<div />',{'class':'igInnerPanelBottomMenu'}) .appendTo(this.igtoContentSlider); this.icWrapper = $('<div />',{'class':'icWrapper','id':'igto'+newMW+'Panel'}) .appendTo(this.igtoContentSlider); this.icWrapperCotentPre = $('<div class="slider pre"></div>').appendTo(this.icWrapper); this.icWrapperCotentShow = $('<div class="slider firstShow "></div>').appendTo(this.icWrapper); this.icWrapperCotentnext = $('<div class="slider next"></div>').appendTo(this.icWrapper); this.initialPanel(); this.initialQuickMenus(); console.log(this.leftPad(9)); newMW++; mwZIndex++; this.igtoMW.bind('mousedown',function(){ var $this = $(this); //alert($this.css('z-index') + ' '+mwZIndex); if( parseInt($this.css('z-index')) === (mwZIndex-1) ) return; $this.css({'z-index':mwZIndex}); mwZIndex++; //alert(mwZIndex); }); }, 
initialPanel : function(){ this.defaultPanelNum = this.opts.initialPanel; this.activePanel = this.defaultPanelNum; this.lastPanel = this.defaultPanelNum; this.defaultPanel = this.loadPanelContents(this.defaultPanelNum); $(this.defaultPanel).appendTo(this.icWrapperCotentShow); }, initialQuickMenus : function(){ // store the current element var obj = this; var defaultQM = this.opts.initialQuickMenu; var strMenu = ''; var marginFirstEle = '8'; $.each(defaultQM,function(key,value){ //alert(key+':'+value); if(marginFirstEle === '8'){ strMenu += '<a href="" class="btPanel" panel="'+key+'" style="margin-left: 8px;" >'+value+'</a>'; marginFirstEle = '4'; } else{ strMenu += '<a href="" class="btPanel" panel="'+key+'" style="margin-left: 4px;" >'+value+'</a>'; } }); // append to igtoQuickMenu this.igtoQMenu = $(strMenu).appendTo(this.igtoQuickMenu); this.igtoQMenu.bind('click',function(event){ event.preventDefault(); var element = $(this); if(element.is('.active')){ return; } else{ $(obj.igtoQMenu).removeClass('active'); element.addClass('active'); } var d = new Date(); var year = d.getFullYear(); var month = obj.leftPad( d.getMonth() ); var inst = null; if( obj.sliding === false){ console.log(obj.lastPanel); var currentPanelNum = parseInt(element.attr('panel')); obj.checkAvailability(); obj.getDays(year,month,inst,currentPanelNum); obj.slidePanel(currentPanelNum); obj.activePanel = currentPanelNum; console.log(obj.activePanel); obj.lastPanel = obj.activePanel; obj.icWrapper.find('input').val(obj.activePanel); } }); }, initialLoginPanel : function(){ var obj = this; this.igPanelLogin = $('<div />',{'class':"igPanelLogin"}); this.igEnterName = $('<div />',{'class':"igEnterName"}).appendTo(this.igPanelLogin); this.igInput = $('<input type="text" name="name" value="???" 
/>').appendTo(this.igEnterName); this.igtoLoginBtWrap = $('<div />',{'class':"igButtons"}).appendTo(this.igPanelLogin); this.igtoLoginBt = $('<a href="" class="igtoLoginBt" action="OK" >??</a>\ <a href="" class="igtoLoginBt" action="CANCEL" >??</a>\ <a href="" class="igtoLoginBt" action="ADD" >????</a>').appendTo(this.igtoLoginBtWrap); this.igtoLoginBt.bind('click',function(event){ event.preventDefault(); var elem = $(this); var action = elem.attr('action'); var userName = obj.igInput.val(); obj.loadRootMenu(); }); return this.igPanelLogin; }, initialWatchHistory : function(){ var obj = this; // for thirt part plugin used if(this.sliding === false){ this.watchHistory = $('<div />',{'class':'igInnerPanelSlider'}).append($('<div />',{'class':'igInnerPanel_pre'}).addClass('igInnerPanel')) .append($('<div />',{'class':'igInnerPanel'}).datepicker({ dateFormat: 'yy-mm-dd',defaultDate: '2010-12-01' ,showWeek: true,firstDay: 1, //beforeShow:setDateStatistics(), onChangeMonthYear:function(year, month, inst) { var panelNum = 1; month = obj.leftPad(month); obj.getDays(year,month,inst,panelNum); } , beforeShowDay: obj.checkAvailability, onSelect: function(dateText, inst) { obj.checkAvailability(); } }).append($('<div />',{'class':'extraMenu'})) ) .append($('<div />',{'class':'igInnerPanel_next'}).addClass('igInnerPanel')); return this.watchHistory; } }, loadPanelContents : function(panelNum){ switch(panelNum){ case 1: alert('inside loadPanelContents') return this.initialWatchHistory(); break; case 2: return this.initialWatchHistory(); break; case 3: return this.initialWatchHistory(); break; case 4: return this.initialWatchHistory(); break; case 5: return this.initialLoginPanel(); break; } }, loadRootMenu : function(){ var obj = this; var mainMenuPanel = $('<div />',{'class':'igRootMenu'}); var currentMWId = this.igtoMW.attr('id'); this.activePanel = 0; $('#'+currentMWId+'Panel .pre'). queue(function(next){ $(this). html(mainMenuPanel). addClass('panelShow'). removeClass('pre'). attr('panelNum',0); next(); }). queue(function(next){ $('<div style="width:0;" class="slider pre"></div>'). prependTo('#'+currentMWId+'Panel').animate({width:348}, function(){ $('#'+currentMWId+'Panel .slider:last').remove() $('#'+currentMWId+'Panel .slider:last').replaceWith('<div class="slider next"></div>'); $('.btMenu').remove(); // remove bottom quick menu obj.sliding = false; $(this).removeAttr('style'); }); $('.igtoQuickMenu .active').removeClass('active'); next(); }); }, slidePanel : function(currentPanelNum){ var currentMWId = this.igtoMW.attr('id'); var obj = this; //alert(obj.loadPanelContents(currentPanelNum)); if( this.activePanel > currentPanelNum){ $('#'+currentMWId+'Panel .pre'). queue(function(next){ alert('inside slidePanel') //var initialDate = getPanelDateStatus(panelNum); //console.log('intial day in bigger panel '+initialDate) $(this). html(obj.loadPanelContents(currentPanelNum)). addClass('panelShow'). removeClass('pre'). attr('panelNum',currentPanelNum); $('#'+currentMWId+'Panel .next').remove(); next(); }). queue(function(next){ $('<div style="width:0;" class="slider pre"></div>'). 
prependTo('#'+currentMWId+'Panel').animate({width:348}, function(){ //$('#igto1Panel .slider:last').find(setPanel(currentPanelNum)).datepicker('destroy'); $('#'+currentMWId+'Panel .slider:last').empty().removeClass('panelShow').addClass('next').removeAttr('panelNum'); $('#'+currentMWId+'Panel .slider:last').replaceWith('<div class="slider next"></div>') obj.sliding = false;console.log('inuse inside animation: '+obj.sliding); $(this).removeAttr('style'); }); next(); }); } else{ ///// current panel num smaller than next $('#'+currentMWId+'Panel .next'). queue(function(next){ $(this). html(obj.loadPanelContents(currentPanelNum)). addClass('panelShow'). removeClass('next'). attr('panelNum',currentPanelNum); $('<div class="slider next">empty</div>').appendTo('#'+currentMWId+'Panel'); next(); }). queue(function(next){ $('#'+currentMWId+'Panel .pre').animate({width:0}, function(){ $(this).remove(); //$('#igto1Panel .slider:first').find(setPanel(currentPanelNum)).datepicker('destroy'); $('#'+currentMWId+'Panel .slider:first').empty().removeClass('panelShow').addClass('pre').removeAttr('panelNum').removeAttr('style'); $('#'+currentMWId+'Panel .slider:first').replaceWith('<div class="slider pre"></div>') obj.sliding = false; console.log('inuse inside animation: '+obj.sliding); }); next(); }); } }, getDays : function(year,month,inst,panelNum){ var obj = this; // depand on the mysql qurey condition var table_of_record = 'moviewh';//getTable(panelNum); var date_of_record = 'watching_date';//getTableDateCol(panelNum); var date_to_find = year+'-'+month; var node_of_xml_date_list = 'whDateRecords';//getXMLDateNode(panelNum); var user_id = '1';//getLoginUserId(); //var daysWithRecords = []; // empty array before asigning this.daysWithRecords.length = 0; $.ajax({ type: "GET", url: "include/get.date.list.process.php", data:({ table_of_record : table_of_record,date_of_record:date_of_record,date_to_find:date_to_find,user_id:user_id,node_of_xml_date_list:node_of_xml_date_list }), dataType: "json", cache: false, // force broser don't cache the xml file async: false, // using this option to prevent datepicker refresh ??NO success:function(data){ // had no date records if(data === null) return; obj.daysWithRecords = data; } }); //setPanelDateStatus(year,month,panelNum); console.log('call from getdays() ' + this.daysWithRecords); }, checkAvailability : function(availableDays) { // var i; var checkdate = $.datepicker.formatDate('yy-mm-dd', availableDays); //console.log( checkdate); // for(var i = 0; i < this.daysWithRecords.length; i++) { // // if(this.daysWithRecords[i] == checkdate){ // // return [true, "available"]; // } // } //console.log('inside check availablility '+ this.daysWithRecords); //return [true, "available"]; console.log(typeof this.daysWithRecords) for(i in this.daysWithRecords){ //if(this.daysWithRecords[i] == checkdate){ console.log(typeof this.daysWithRecords[i]); //return [true, "available"]; //} } return [true, "available"]; //return [false, ""]; }, leftPad : function(num) { return (num < 10) ? 
'0' + num : num; } }); $.fn.igtomw = function(options){ // Merge options passed in with global defaults var opt = $.extend({}, $.fn.igtomw.defaults , options); return this.each(function() { new Igtomw(this,opt); }); }; $.fn.igtomw.defaults = { // 0:mainMenu 1:whatchHistor 2:requestHistory 3:userManager // 4:shoppingCart 5:loginPanel initialPanel : 5, // default panel is LoginPanel initialQuickMenu : {'1':'whatchHIstory','2':'????','3':'????','4':'????'} // defalut quick menu }; })(jQuery); usage: $('.openMW').click(function(event){ event.preventDefault(); $('<div class="">').igtomw(); }) HTML code: <div id="taskBarAndStartMenu"> <div class="taskBarAndStartMenuM"> <a href="" class="openMW" >??IGTO</a> </div> <div class="taskBarAndStartMenuO"></div> </div> In my work flow: when I click the "whatchHistory" button, my plugin would load a panel with JQuery UI datepicker applied which days had been set to be availabled or not. I am using the function "getDays()" to get the available days list and stored the data inside daysWithRecords, and final the UI datepicker's function "beforeShowDay()" called the function "checkAvailability()" to set the days. the variable "daysWithRecords" was declared inside Igtomw = function(elem , options) and was initialized inside the function getDays() I am using the function "initialWatchHistory()" to initialization and render the JQuery UI datepicker in the web. My problem is the function "checkAvailability()" cannot see the variable "daysWithRecords".The firebug prompts me that "daysWithRecords" is "undefined". this is the first time I write my first plugin. So .... Thank you very much for any help!!

    Read the article

  • CodePlex Daily Summary for Friday, March 12, 2010

    CodePlex Daily Summary for Friday, March 12, 2010New Projects.NET DEPENDENCY INJECTION: Abel Perez Enterprise FrameworkAutodocs - WCF REST Automatic API Documentation Generator: Autodocs is an automatic API documentation generator for .NET applications that use Windows Communication Foundation (WCF) to establish REST API's.BlockBlock: Block Block is a free game. You know Lumines and you will like BlockBlock.C4F XNA ASCII Post-Processing: This is the source code for the Coding4Fun article "XNA Effects – ASCII Art in 3D"ChequePrinter: this is ChequePrinterCompiladores MSIL usando Phoenix (PLP 2008.1 - CIn/UFPE): Este projeto foi feito com o intuito de explorar a plataforma Microsoft Phoenix para a construção de compiladores para MSIL de duas linguagens de E...CRM External View: CRM External View enables more robust control over exposing Microsoft CRM data (in a form of views) for external parties. The solution uses web ser...CS Project2: This is for the projectDotNetNuke IM Module of Facebook Like Messenger: Help you integrate 123 Web Messenger into DotNetNuke, and add a powerful 1-to-1 IM Software named "Facebook Messenger Style Web Chat Bar" at the bo...DotNetNuke® RadPanelBar: DNNRadPanelBar makes it easy to add telerik RadPanelBar functionality to your module or skin. Licensing permits anyone to use the components (incl...DotNetNuke® Skin Blocks: A DotNetNuke Design Challenge skin package submitted to the "Modern Business" category by Armand Datema of Schwingsoft. This skin uses a bit of jQu...Drilltrough and filtering on SSAS-cubes in SSRS: We will describe a technique to create Reporting services (SSRS) reports that use Analysis services (SSAS) cubes as data sources, have a very intu...Ecosystem Diagnosis & Treatment: The Ecosystem DIagnosis & Treatment community provides tools, analyses and applications of the medical model to natural resource problems. EDT sof...ExIf 35: A utility for use by film photographers for keeping track of critical facts about images taken on a roll of film, just as digital cameras do automa...FabricadeTI: Desenvolvimento do framework FabricadeTI.Find and Replace word in the sentences: This program used Java Development Kid 6.0 and i were using HighLighter class. It was completed code with source code and then everybody can use in...Flash Nut: Flash Nut is a flash card program. You can build and review decks of flash cards. The project is a vs2008 wpf application.Free DotNetNuke Chat Module (Popup Mode): With this free DotNetNuke Chat Module (Popup Mode), master will assist to integrate DotNetNuke with 123 Flash Chat seamlessly, and add a popup mode...Free DotNetNuke IM of 123 Web Messenger -- Web-based Friend List: With this FREE application, you could integrate DNN website Database with 123 Web Messenger seamlessly and embed a web-based Friends List into anyw...Free DotNetNuke Live Help Module: With DotNetNuke Live Help Module, integrate 123 Live Help into DotNetNuke website and add Live Chat Button anywhere you like. Let visitors to chat ...G52GRP Videowall: NottinghamHappy Turtle Plugins for BVI :: Repository Based Versioning for Visual Studio: The Happy Turtle project creates plugins for the Build Version Increment Add-In for Visual Studio (BVI). The focus is to automatically version asse...Hasher: Hasher es capaz de generar el hash MD5 y SHA de textos de hasta 100.000 caracteres y ficheros. 
También te permitirá comprobar dos hash para verifi...Infragistics Silverlight Extended Controls: This project is a group of controls that extend or add functionality to the Infragistics Silverlight control suite. This control requires Infragis...Insert Video Jnr: This is a baby version of my Video plugin, it is intended for Hosted Wordpress blogs only and shouldn't be used with other blog providers.jccc .NET smart framework: jccc .NET smart framework allows the creation of fast connections to MSSQL or MYSQL databases, and the data manipulation by using of c# class's tha...LytScript: 函数式脚本语言Microsoft - DDD NLayerApp .NET 4.0 Example (Microsoft Spain): DDD NLayered App .NET 4.0 Example By Microsoft - Spain Domain Driven Design NLayered App .NET 4.0 Example Implementation Example of our local Arc...mimiKit: Lightweight ASP.NET MVC / Javascript Framework for creating mobile applications PHPWord: With PHPWord you can easily create a Word document with PHP. PHPWord creates docx Files that can include all major word functions like TextElements...Protocol Transition with BizTalk: An example solution the shows how todo Protocol Transition with BizTalk. This also shows you how to create a WCF extension to allow this to happen.Raid Runner: Raid Runner makes it easier to run and manage raid in World of Warcraft. It is a Silverlight application developed in c#SQL Server Authentication Troubleshooter: SQL Server Authentication Troubleshooter is a tool to help investigate a root cause of ‘Login Failed’ error in SQL Server. There could be number of...SuperviseObjects: SuperviseObjects consists of a collection which is derived from ObservableCollection<T>. This collection fires ItemPropertyChanging and ItemPropert...Viuto: Viuto.NET project aims to create a fully track and trace application. It is developed in: - Java & C: Firmware - C#: Parser - Asp.net: Tracki...Zealand IT MSBuild Tasks: Zealand IT MSBuild Tasks is a collection that you cannot do without if you are serious about continous integration. Ever wish you could specify an...New ReleasesASP.NET: ASP.NET MVC 2 RTM: This release contains the source code for ASP.NET MVC 2 RTM as well as the ASP.NET MVC Futures project. The futures project contains features that ...C#Mail: Higuchi.Mail.dll (2010.3.11 ver): Higuchi.Mail.dll at 2010-3-11 version.C#Mail: Higuchi.MailServer.dll (2010.3.11 ver): Higuchi.MailServer.dll at 2010.3.11 version.C4F XNA ASCII Post-Processing: XNA ASCII FPS v1 - Full Version: This is the full, complete example of the XNA ASCII FPS.C4F XNA ASCII Post-Processing: XNA ASCII FPS v1.0 - Base Project: This is the base project to be used by those who plan to follow along the Coding4Fun article.CRM External View: 1.0: Release 1.0DevTreks -social budgeting that improves lives and livelihoods: Social Budgeting Web Software, DevTreks alpha 3c: Alpha 3c upgrades custom/virtual uris (devpacks), temp uris, and zip packages. This is believed to be the first fully functional/performant release.DotNetNuke® RadPanelBar: DNNRadPanelBar 1.0.0: DNNRadPanelBar makes it easy to add telerik RadPanelBar functionality to your module or skin. 
Licensing permits anyone to use the components (inclu...Drilltrough and filtering on SSAS-cubes in SSRS: Release 1: Release 1ExIf 35: ExIf 35: Daily build of ExIf 35Family Tree Analyzer: Version 1.0.3.0: Version 1.0.3.0 Added options to check for updates on load and on help menu Disable use of US census for now until dealt with years being differen...Family Tree Analyzer: Version 1.0.4.0: Version 1.0.4.0 Added support for display of Ahnenfatel numbers Added filter to hide individuals from Lost Cousins report that have been flagged a...Flash Nut: Flash Nut 1.0 Setup: Flash Nut SetupFluent Validation for .NET: 1.2 RC: This is the release candidate for FluentValidation 1.2. If no bugs are found within the next couple of weeks, then this will become the 1.2 Final b...Free DotNetNuke Chat Module (Popup Mode): Download DNN Chat Module (Popup Mode)+Source Code: Feel free to download DotNetNuke Chat Module (Popup Mode), integrating DotNetNuke with 123 Flash Chat Software, and add a free popup mode flash cha...Free DotNetNuke Live Help Module: Download DNN Live Support Module and Source Code: In Readme file, there are detailed Installation and Integration Manual for you. This module is compatible with DotNetNuke v5.x.Happy Turtle Plugins for BVI :: Repository Based Versioning for Visual Studio: Happy Turtle 1.0.44927: This is the first release of the SVN based version incrementor. How To InstallMake sure that Build Version Increment v2.2.10065.1524 or newer is i...Hasher: 1.0: Versión inicial de la aplicación: Obtención de hash MD5 y SHA. Codificación en tiempo real de textos de hasta 100.000 caracteres. Codificación ...Jamolina: PhotosynthDemo: PhotosynthDemoMapWindow GIS: MapWindow 6.0 msi (March 11): This fixes an PixelToProj problem for the Extended Buffer case, as well as adding fixes to the WKBFeatureReader to fix an X,Y reversal and some ext...Math.NET Numerics: 2010.3.11.291 Build: Latest alpha buildMicrosoft - DDD NLayerApp .NET 4.0 Example (Microsoft Spain): V0.5 - N-Layer DDD Sample App: Required Software (Microsoft Base Software needed for Development environment) Unity Application Block 1.2 - October 2008 http://www.microsoft.com/...MiniTwitter: 1.09.2: MiniTwitter 1.09.2 更新内容 修正 タイムラインを削除すると落ちるバグを修正 稀にタイムラインのスクロールが出来ないバグを修正Nestoria.NET: Nestoria.NET 0.8: Provides access to the Nestoria API. Documentation contains a basic getting started guide. Please visit Darren Edge's blog for ongoing developmen...Pod Thrower: Version 1.0: Here is version 1.0. It has all the features I was looking to do in it. Please let me know if you use this and if you would like any changes.SharePoint Ad Rotator: SPAdRotator 2.0 Beta: This new release of the Ad Rotator contains many new features. One major new feature is that jQuery has been added to do image rotation without hav...SharePoint Objects: Democode Ton Stegeman: These download contains sample code for some SharePoint 2007 blog posts: TST.Themes_Build20100311.zip contains a feature receiver that registers Sh...SharePoint Taxonomy Extensions: SharePoint Taxonomy Extensions 1.2: Make Taxonomy Extensions useable in every list type. Not only in document libraries.SharePoint Video Player Web Part & SharePoint Video Library: Version 3.0.0: Absolutely killer feature - installing multiple players on a page without any loss of performance.SilverLight Interface for Mapserver: SLMapViewer v. 1.0: SLMapviewer sample application version 1.0. 
This new release includes the following enhancements: Silverlight 3.0 native Added a new init parame...Spark View Engine: Spark v1.1: Changes since RC1Built against ASP.NET MVC 2 RTMSPSS .NET interop library: 2.0: This new version supports SPSS 15, and includes spssio32.dll and other native .dll dependencies so that it works out of the box without SPSS being ...stefvanhooijdonk.com: SharePoint2010.ProfilePicturesLoader: So, with the help of Reflector, I wrote a small tool that would import all our profile pictures and update the user profiles. http://wp.me/pMnlQ-6G SuperviseObjects: SuperviseObjects 1.0: First releaseTortoiseSVN Addin for Visual Studio: TortoiseSVN Addin 1.0.5: Feature: Visual Studio/svn action synchronization on Item in Solution explorer like add, move, delete and rename. Note: Move action does not rememb...VCC: Latest build, v2.1.30311.0: Automatic drop of latest buildVivoSocial: VivoSocial 7.0.4: Business Management ■This release fixes a Could not load type error on the main view of the module. Groups ■Group requests were failing in some i...WikiPlex – a Regex Wiki Engine: WikiPlex 1.3: Info: Official Version: 1.3.0.215 | Full Release Notes Documentation - This new documentation includes Full Markup Guide with Examples Articles ...Zealand IT MSBuild Tasks: Zealand IT MSBuild Tasks: Initial beta release of Zealand IT MSBuild Tasks. Contains the following tasks: RunAs - Same as Exec task, but provides parameters for impersonat...ZoomBarPlus: V1 (Beta): This is the initial release. It should be considered a beta test version as it has not been tested for very long on my device.Most Popular ProjectsMetaSharpWBFS ManagerRawrAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)ASP.NET Ajax LibraryASP.NETMicrosoft SQL Server Community & SamplesMost Active ProjectsUmbraco CMSRawrN2 CMSBlogEngine.NETFasterflect - A Fast and Simple Reflection APIjQuery Library for SharePoint Web Servicespatterns & practices – Enterprise LibraryFarseer Physics EngineCaliburn: An Application Framework for WPF and SilverlightSharePoint Team-Mailer

    Read the article

  • Use IIS Application Initialization for keeping ASP.NET Apps alive

    - by Rick Strahl
    I've been working quite a bit with Windows Services in the recent months, and well, it turns out that Windows Services are quite a bear to debug, deploy, update and maintain. The process of getting services set up,  debugged and updated is a major chore that has to be extensively documented and or automated specifically. On most projects when a service is built, people end up scrambling for the right 'process' to use for administration. Web app deployment and maintenance on the other hand are common and well understood today, as we are constantly dealing with Web apps. There's plenty of infrastructure and tooling built into Web Tools like Visual Studio to facilitate the process. By comparison Windows Services or anything self-hosted for that matter seems convoluted.In fact, in a recent blog post I mentioned that on a recent project I'd been using self-hosting for SignalR inside of a Windows service, because the application is in fact a 'service' that also needs to send out lots of messages via SignalR. But the reality is that it could just as well be an IIS application with a service component that runs in the background. Either way you look at it, it's either a Windows Service with a built in Web Server, or an IIS application running a Service application, neither of which follows the standard Service or Web App template.Personally I much prefer Web applications. Running inside of IIS I get all the benefits of the IIS platform including service lifetime management (crash and restart), controlled shutdowns, the whole security infrastructure including easy certificate support, hot-swapping of code and the the ability to publish directly to IIS from within Visual Studio with ease.Because of these benefits we set out to move from the self hosted service into an ASP.NET Web app instead.The Missing Link for ASP.NET as a Service: Auto-LoadingI've had moments in the past where I wanted to run a 'service like' application in ASP.NET because when you think about it, it's so much easier to control a Web application remotely. Services are locked into start/stop operations, but if you host inside of a Web app you can write your own ticket and control it from anywhere. In fact nearly 10 years ago I built a background scheduling application that ran inside of ASP.NET and it worked great and it's still running doing its job today.The tricky part for running an app as a service inside of IIS then and now, is how to get IIS and ASP.NET launched so your 'service' stays alive even after an Application Pool reset. 7 years ago I faked it by using a web monitor (my own West Wind Web Monitor app) I was running anyway to monitor my various web sites for uptime, and having the monitor ping my 'service' every 20 seconds to effectively keep ASP.NET alive or fire it back up after a reload. I used a simple scheduler class that also includes some logic for 'self-reloading'. Hacky for sure, but it worked reliably.Luckily today it's much easier and more integrated to get IIS to launch ASP.NET as soon as an Application Pool is started by using the Application Initialization Module. The Application Initialization Module basically allows you to turn on Preloading on the Application Pool and the Site/IIS App, which essentially fires a request through the IIS pipeline as soon as the Application Pool has been launched. This means that effectively your ASP.NET app becomes active immediately, Application_Start is fired making sure your app stays up and running at all times. 
All the other features like Application Pool recycling and auto-shutdown after idle time still work, but IIS will then always immediately re-launch the application.Getting started with Application InitializationAs of IIS 8 Application Initialization is part of the IIS feature set. For IIS 7 and 7.5 there's a separate download available via Web Platform Installer. Using IIS 8 Application Initialization is an optional install component in Windows or the Windows Server Role Manager: This is an optional component so make sure you explicitly select it.IIS Configuration for Application InitializationInitialization needs to be applied on the Application Pool as well as the IIS Application level. As of IIS 8 these settings can be made through the IIS Administration console.Start with the Application Pool:Here you need to set both the Start Automatically which is always set, and the StartMode which should be set to AlwaysRunning. Both have to be set - the Start Automatically flag is set true by default and controls the starting of the application pool itself while Always Running flag is required in order to launch the application. Without the latter flag set the site settings have no effect.Now on the Site/Application level you can specify whether the site should pre load: Set the Preload Enabled flag to true.At this point ASP.NET apps should auto-load. This is all that's needed to pre-load the site if all you want is to get your site launched automatically.If you want a little more control over the load process you can add a few more settings to your web.config file that allow you to show a static page while the App is starting up. This can be useful if startup is really slow, so rather than displaying blank screen while the user is fiddling their thumbs you can display a static HTML page instead: <system.webServer> <applicationInitialization remapManagedRequestsTo="Startup.htm" skipManagedModules="true"> <add initializationPage="ping.ashx" /> </applicationInitialization> </system.webServer>This allows you to specify a page to execute in a dry run. IIS basically fakes request and pushes it directly into the IIS pipeline without hitting the network. You specify a page and IIS will fake a request to that page in this case ping.ashx which just returns a simple OK string - ie. a fast pipeline request. This request is run immediately after Application Pool restart, and while this request is running and your app is warming up, IIS can display an alternate static page - Startup.htm above. So instead of showing users an empty loading page when clicking a link on your site you can optionally show some sort of static status page that says, "we'll be right back".  I'm not sure if that's such a brilliant idea since this can be pretty disruptive in some cases. Personally I think I prefer letting people wait, but at least get the response they were supposed to get back rather than a random page. But it's there if you need it.Note that the web.config stuff is optional. If you don't provide it IIS hits the default site link (/) and even if there's no matching request at the end of that request it'll still fire the request through the IIS pipeline. 
Ideally though you want to make sure that an ASP.NET endpoint is hit either with your default page, or by specify the initializationPage to ensure ASP.NET actually gets hit since it's possible for IIS fire unmanaged requests only for static pages (depending how your pipeline is configured).What about AppDomain Restarts?In addition to full Worker Process recycles at the IIS level, ASP.NET also has to deal with AppDomain shutdowns which can occur for a variety of reasons:Files are updated in the BIN folderWeb Deploy to your siteweb.config is changedHard application crashThese operations don't cause the worker process to restart, but they do cause ASP.NET to unload the current AppDomain and start up a new one. Because the features above only apply to Application Pool restarts, AppDomain restarts could also cause your 'ASP.NET service' to stop processing in the background.In order to keep the app running on AppDomain recycles, you can resort to a simple ping in the Application_End event:protected void Application_End() { var client = new WebClient(); var url = App.AdminConfiguration.MonitorHostUrl + "ping.aspx"; client.DownloadString(url); Trace.WriteLine("Application Shut Down Ping: " + url); }which fires any ASP.NET url to the current site at the very end of the pipeline shutdown which in turn ensures that the site immediately starts back up.Manual Configuration in ApplicationHost.configThe above UI corresponds to the following ApplicationHost.config settings. If you're using IIS 7, there's no UI for these flags so you'll have to manually edit them.When you install the Application Initialization component into IIS it should auto-configure the module into ApplicationHost.config. Unfortunately for me, with Mr. Murphy in his best form for me, the module registration did not occur and I had to manually add it.<globalModules> <add name="ApplicationInitializationModule" image="%windir%\System32\inetsrv\warmup.dll" /> </globalModules>Most likely you won't need ever need to add this, but if things are not working it's worth to check if the module is actually registered.Next you need to configure the ApplicationPool and the Web site. The following are the two relevant entries in ApplicationHost.config.<system.applicationHost> <applicationPools> <add name="West Wind West Wind Web Connection" autoStart="true" startMode="AlwaysRunning" managedRuntimeVersion="v4.0" managedPipelineMode="Integrated"> <processModel identityType="LocalSystem" setProfileEnvironment="true" /> </add> </applicationPools> <sites> <site name="Default Web Site" id="1"> <application path="/MPress.Workflow.WebQueueMessageManager" applicationPool="West Wind West Wind Web Connection" preloadEnabled="true"> <virtualDirectory path="/" physicalPath="C:\Clients\…" /> </application> </site> </sites> </system.applicationHost>On the Application Pool make sure to set the autoStart and startMode flags to true and AlwaysRunning respectively. On the site make sure to set the preloadEnabled flag to true.And that's all you should need. You can still set the web.config settings described above as well.ASP.NET as a Service?In the particular application I'm working on currently, we have a queue manager that runs as standalone service that polls a database queue and picks out jobs and processes them on several threads. The service can spin up any number of threads and keep these threads alive in the background while IIS is running doing its own thing. These threads are newly created threads, so they sit completely outside of the IIS thread pool. 
In order for this service to work all it needs is a long running reference that keeps it alive for the life time of the application.In this particular app there are two components that run in the background on their own threads: A scheduler that runs various scheduled tasks and handles things like picking up emails to send out outside of IIS's scope and the QueueManager. Here's what this looks like in global.asax:public class Global : System.Web.HttpApplication { private static ApplicationScheduler scheduler; private static ServiceLauncher launcher; protected void Application_Start(object sender, EventArgs e) { // Pings the service and ensures it stays alive scheduler = new ApplicationScheduler() { CheckFrequency = 600000 }; scheduler.Start(); launcher = new ServiceLauncher(); launcher.Start(); // register so shutdown is controlled HostingEnvironment.RegisterObject(launcher); }}By keeping these objects around as static instances that are set only once on startup, they survive the lifetime of the application. The code in these classes is essentially unchanged from the Windows Service code except that I could remove the various overrides required for the Windows Service interface (OnStart,OnStop,OnResume etc.). Otherwise the behavior and operation is very similar.In this application ASP.NET serves two purposes: It acts as the host for SignalR and provides the administration interface which allows remote management of the 'service'. I can start and stop the service remotely by shutting down the ApplicationScheduler very easily. I can also very easily feed stats from the queue out directly via a couple of Web requests or (as we do now) through the SignalR service.Registering a Background Object with ASP.NETNotice also the use of the HostingEnvironment.RegisterObject(). This function registers an object with ASP.NET to let it know that it's a background task that should be notified if the AppDomain shuts down. RegisterObject() requires an interface with a Stop() method that's fired and allows your code to respond to a shutdown request. Here's what the IRegisteredObject::Stop() method looks like on the launcher:public void Stop(bool immediate = false) { LogManager.Current.LogInfo("QueueManager Controller Stopped."); Controller.StopProcessing(); Controller.Dispose(); Thread.Sleep(1500); // give background threads some time HostingEnvironment.UnregisterObject(this); }Implementing IRegisterObject should help with reliability on AppDomain shutdowns. Thanks to Justin Van Patten for pointing this out to me on Twitter.RegisterObject() is not required but I would highly recommend implementing it on whatever object controls your background processing to all clean shutdowns when the AppDomain shuts down.Testing it outI'm still in the testing phase with this particular service to see if there are any side effects. But so far it doesn't look like it. With about 50 lines of code I was able to replace the Windows service startup to Web start up - everything else just worked as is. An honorable mention goes to SignalR 2.0's oWin hosting, because with the new oWin based hosting no code changes at all were required, merely a couple of configuration file settings and an assembly directive needed, to point at the SignalR startup class. Sweet!It also seems like SignalR is noticeably faster running inside of IIS compared to self-host. 
Startup feels faster because of the preload.Starting and Stopping the 'Service'Because the application is running as a Web Server, it's easy to have a Web interface for starting and stopping the services running inside of the service. For our queue manager the SignalR service and front monitoring app has a play and stop button for toggling the queue.If you want more administrative control and have it work more like a Windows Service you can also stop the application pool explicitly from the command line which would be equivalent to stopping and restarting a service.To start and stop from the command line you can use the IIS appCmd tool. To stop:> %windir%\system32\inetsrv\appcmd stop apppool /apppool.name:"Weblog"and to start> %windir%\system32\inetsrv\appcmd start apppool /apppool.name:"Weblog"Note that when you explicitly force the AppPool to stop running either in the UI (on the ApplicationPools page use Start/Stop) or via command line tools, the application pool will not auto-restart immediately. You have to manually start it back up.What's not to like?There are certainly a lot of benefits to running a background service in IIS, but… ASP.NET applications do have more overhead in terms of memory footprint and startup time is a little slower, but generally for server applications this is not a big deal. If the application is stable the service should fire up and stay running indefinitely. A lot of times this kind of service interface can simply be attached to an existing Web application, or if scalability requires be offloaded to its own Web server.Easier to work withBut the ultimate benefit here is that it's much easier to work with a Web app as opposed to a service. While developing I can simply turn off the auto-launch features and launch the service on demand through IIS simply by hitting a page on the site. If I want to shut down an IISRESET -stop will shut down the service easily enough. I can then attach a debugger anywhere I want and this works like any other ASP.NET application. Yes you end up on a background thread for debugging but Visual Studio handles that just fine and if you stay on a single thread this is no different than debugging any other code.SummaryUsing ASP.NET to run background service operations is probably not a super common scenario, but it probably should be something that is considered carefully when building services. Many applications have service like features and with the auto-start functionality of the Application Initialization module, it's easy to build this functionality into ASP.NET. Especially when combined with the notification features of SignalR it becomes very, very easy to create rich services that can also communicate their status easily to the outside world.Whether it's existing applications that need some background processing for scheduling related tasks, or whether you just create a separate site altogether just to host your service it's easy to do and you can leverage the same tool chain you're already using for other Web projects. 
    If you have lots of service projects it's worth considering… give it some thought… © Rick Strahl, West Wind Technologies, 2005-2013. Posted in ASP.NET, SignalR, IIS
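    For reference, the ping endpoint referred to by the initializationPage setting and by the Application_End self-ping earlier can be as small as a one-method generic handler. A minimal sketch of such a ping.ashx (the class name is illustrative; any fast managed endpoint works):

    <%@ WebHandler Language="C#" Class="PingHandler" %>
    using System.Web;

    public class PingHandler : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            // Runs through the managed pipeline and returns immediately
            context.Response.ContentType = "text/plain";
            context.Response.Write("OK");
        }

        public bool IsReusable
        {
            get { return true; }
        }
    }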

    Read the article

  • What’s new in ASP.NET 4.0: Core Features

    - by Rick Strahl
    Microsoft released the .NET Runtime 4.0 and with it comes a brand spanking new version of ASP.NET – version 4.0 – which provides an incremental set of improvements to an already powerful platform. .NET 4.0 is a full release of the .NET Framework, unlike version 3.5, which was merely a set of library updates on top of the .NET Framework version 2.0. Because of this full framework revision, there has been a welcome bit of consolidation of assemblies and configuration settings. The full runtime version change to 4.0 also means that you have to explicitly pick version 4.0 of the runtime when you create a new Application Pool in IIS, unlike .NET 3.5, which actually requires version 2.0 of the runtime. In this first of two parts I'll take a look at some of the changes in the core ASP.NET runtime. In the next edition I'll go over improvements in Web Forms and Visual Studio. Core Engine Features Most of the high profile improvements in ASP.NET have to do with Web Forms, but there are a few gems in the core runtime that should make life easier for ASP.NET developers. The following list describes some of the things I've found useful among the new features. Clean web.config Files Are Back! If you've been using ASP.NET 3.5, you probably have noticed that the web.config file has turned into quite a mess of configuration settings between all the custom handler and module mappings for the various web server versions. Part of the reason for this mess is that .NET 3.5 is a collection of add-on components running on top of the .NET Runtime 2.0 and so almost all of the new features of .NET 3.5 where essentially introduced as custom modules and handlers that had to be explicitly configured in the config file. Because the core runtime didn't rev with 3.5, all those configuration options couldn't be moved up to other configuration files in the system chain. With version 4.0 a consolidation was possible, and the result is a much simpler web.config file by default. A default empty ASP.NET 4.0 Web Forms project looks like this: <?xml version="1.0"?> <configuration> <system.web> <compilation debug="true" targetFramework="4.0" /> </system.web> </configuration> Need I say more? Configuration Transformation Files to Manage Configurations and Application Packaging ASP.NET 4.0 introduces the ability to create multi-target configuration files. This means it's possible to create a single configuration file that can be transformed based on relatively simple replacement rules using a Visual Studio and WebDeploy provided XSLT syntax. The idea is that you can create a 'master' configuration file and then create customized versions of this master configuration file by applying some relatively simplistic search and replace, add or remove logic to specific elements and attributes in the original file. 
To give you an idea, here's the example code that Visual Studio creates for a default web.Release.config file, which replaces a connection string, removes the debug attribute and replaces the CustomErrors section: <?xml version="1.0"?> <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform"> <connectionStrings> <add name="MyDB" connectionString="Data Source=ReleaseSQLServer;Initial Catalog=MyReleaseDB;Integrated Security=True" xdt:Transform="SetAttributes" xdt:Locator="Match(name)"/> </connectionStrings> <system.web> <compilation xdt:Transform="RemoveAttributes(debug)" /> <customErrors defaultRedirect="GenericError.htm" mode="RemoteOnly" xdt:Transform="Replace"> <error statusCode="500" redirect="InternalError.htm"/> </customErrors> </system.web> </configuration> You can see the XSL transform syntax that drives this functionality. Basically, only the elements listed in the override file are matched and updated – all the rest of the original web.config file stays intact. Visual Studio 2010 supports this functionality directly in the project system so it's easy to create and maintain these customized configurations in the project tree. Once you're ready to publish your application, you can then use the Publish <yourWebApplication> option on the Build menu which allows publishing to disk, via FTP or to a Web Server using Web Deploy. You can also create a deployment package as a .zip file which can be used by the WebDeploy tool to configure and install the application. You can manually run the Web Deploy tool or use the IIS Manager to install the package on the server or other machine. You can find out more about WebDeploy and Packaging here: http://tinyurl.com/2anxcje. Improved Routing Routing provides a relatively simple way to create clean URLs with ASP.NET by associating a template URL path and routing it to a specific ASP.NET HttpHandler. Microsoft first introduced routing with ASP.NET MVC and then they integrated routing with a basic implementation in the core ASP.NET engine via a separate ASP.NET routing assembly. In ASP.NET 4.0, the process of using routing functionality gets a bit easier. First, routing is now rolled directly into System.Web, so no extra assembly reference is required in your projects to use routing. The RouteCollection class now includes a MapPageRoute() method that makes it easy to route to any ASP.NET Page requests without first having to implement an IRouteHandler implementation. It would have been nice if this could have been extended to serve *any* handler implementation, but unfortunately for anything but a Page derived handlers you still will have to implement a custom IRouteHandler implementation. ASP.NET Pages now include a RouteData collection that will contain route information. Retrieving route data is now a lot easier by simply using this.RouteData.Values["routeKey"] where the routeKey is the value specified in the route template (i.e., "users/{userId}" would use Values["userId"]). 
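To make the registration side of this concrete, a minimal sketch of wiring up a page route in Global.asax might look like the following; the route name, URL template and target page here are hypothetical examples rather than code from the article:

using System;
using System.Web.Routing;

public class Global : System.Web.HttpApplication
{
    void Application_Start(object sender, EventArgs e)
    {
        // Routes URLs like "users/ricks" to a physical Web Forms page.
        // The {userId} segment shows up in Page.RouteData.Values["userId"].
        RouteTable.Routes.MapPageRoute(
            "users",               // route name used by GetRouteUrl()/RedirectToRoute()
            "users/{userId}",      // URL template
            "~/UserDetails.aspx"); // page that handles the request
    }
}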
The Page class also has a GetRouteUrl() method that you can use to create URLs with route data values rather than hardcoding the URL: <%= this.GetRouteUrl("users",new { userId="ricks" }) %> You can also use the new Expression syntax using <%$RouteUrl %> to accomplish something similar, which can be easier to embed into Page or MVC View code: <a runat="server" href='<%$RouteUrl:RouteName=user, id=ricks %>'>Visit User</a> Finally, the Response object also includes a new RedirectToRoute() method to build a route url for redirection without hardcoding the URL. Response.RedirectToRoute("users", new { userId = "ricks" }); All of these routines are helpers that have been integrated into the core ASP.NET engine to make it easier to create routes and retrieve route data, which hopefully will result in more people taking advantage of routing in ASP.NET. To find out more about the routing improvements you can check out Dan Maharry's blog which has a couple of nice blog entries on this subject: http://tinyurl.com/37trutj and http://tinyurl.com/39tt5w5. Session State Improvements Session state is an often used and abused feature in ASP.NET and version 4.0 introduces a few enhancements geared towards making session state more efficient and to minimize at least some of the ill effects of overuse. The first improvement affects out of process session state, which is typically used in web farm environments or for sites that store application sensitive data that must survive AppDomain restarts (which in my opinion is just about any application). When using OutOfProc session state, ASP.NET serializes all the data in the session statebag into a blob that gets carried over the network and stored either in the State server or SQL Server via the Session provider. Version 4.0 provides some improvement in this serialization of the session data by offering an enableCompression option on the web.Config <Session> section, which forces the serialized session state to be compressed. Depending on the type of data that is being serialized, this compression can reduce the size of the data travelling over the wire by as much as a third. It works best on string data, but can also reduce the size of binary data. In addition, ASP.NET 4.0 now offers a way to programmatically turn session state on or off as part of the request processing queue. In prior versions, the only way to specify whether session state is available is by implementing a marker interface on the HTTP handler implementation. In ASP.NET 4.0, you can now turn session state on and off programmatically via HttpContext.Current.SetSessionStateBehavior() as part of the ASP.NET module pipeline processing as long as it occurs before the AquireRequestState pipeline event. Output Cache Provider Output caching in ASP.NET has been a very useful but potentially memory intensive feature. The default OutputCache mechanism works through in-memory storage that persists generated output based on various lifetime related parameters. While this works well enough for many intended scenarios, it also can quickly cause runaway memory consumption as the cache fills up and serves many variations of pages on your site. ASP.NET 4.0 introduces a provider model for the OutputCache module so it becomes possible to plug-in custom storage strategies for cached pages. 
One of the goals also appears to be to consolidate some of the different cache storage mechanisms used in .NET in general into a generic Windows AppFabric framework in the future, so that various mechanisms like OutputCache, the non-Page specific ASP.NET cache and possibly even session state can eventually use the same caching engine for storage of persisted data, both in memory and in out of process scenarios. For developers, the OutputCache provider feature means that you can now extend caching on your own by implementing a custom cache provider based on the System.Web.Caching.OutputCacheProvider class. You can find more info on creating an Output Cache provider in Gunnar Peipman's blog at: http://tinyurl.com/2vt6g7l.

Response.RedirectPermanent

ASP.NET 4.0 includes features to issue a permanent redirect as an HTTP 301 Moved Permanently response rather than the standard 302 Redirect response. In pre-4.0 versions you had to manually create your permanent redirect by setting the Status and StatusCode properties – Response.RedirectPermanent() makes this operation more obvious and discoverable. There's also a Response.RedirectToRoutePermanent() which provides permanent redirection of route Urls.

Preloading of Applications

ASP.NET 4.0 provides a new feature to preload ASP.NET applications on startup, which is meant to provide a more consistent startup experience. If your application has a lengthy startup cycle it can appear very slow to serve data to clients while the application is warming up and loading initial resources. So rather than serve these startup requests slowly, in ASP.NET 4.0 you can force the application to initialize itself first before even accepting requests for processing. This feature works only on IIS 7.5 (Windows 7 and Windows Server 2008 R2) and works in combination with IIS. You can set up a worker process in IIS 7.5 to always be running, which starts the Application Pool worker process immediately. ASP.NET 4.0 then allows you to specify site-specific settings by setting serviceAutoStartEnabled on a particular site, along with an optional serviceAutoStartProvider class that can be used to receive "startup events" when the application starts up. This event in turn can be used to configure the application and optionally pre-load cache data and other information required by the app on startup. The configuration settings need to be made in applicationhost.config:

<sites>
  <site name="WebApplication2" id="1">
    <application path="/" serviceAutoStartEnabled="true" serviceAutoStartProvider="PreWarmup" />
  </site>
</sites>
<serviceAutoStartProviders>
  <add name="PreWarmup" type="PreWarmupProvider,MyAssembly" />
</serviceAutoStartProviders>

Hooking up a warm up provider is optional, so you can omit the provider definition and reference. If you do define it, here's what it looks like:

public class PreWarmupProvider : System.Web.Hosting.IProcessHostPreloadClient
{
    public void Preload(string[] parameters)
    {
        // initialization for app
    }
}

While this code is running, ASP.NET/IIS will hold requests from hitting the pipeline, so until it completes the application will not start taking requests. The idea is that you can perform any pre-loading of resources and cache values so that the first request will be ready to perform at an optimal performance level without lag.
Runtime Performance Improvements

According to Microsoft, there have also been a number of invisible performance improvements in the internals of the ASP.NET runtime that should make ASP.NET 4.0 applications run more efficiently and use less resources. These features come without any change requirements in applications and are virtually transparent, except that you get the benefits by updating to ASP.NET 4.0.

Summary

The core feature set changes are minimal, which continues a tradition of small incremental changes to the ASP.NET runtime. ASP.NET has been proven as a solid platform and I'm actually rather happy to see that most of the effort in this release went into stability, performance and usability improvements rather than a massive amount of new features. The new functionality added in 4.0 is minimal but very useful. A lot of people are still running pure .NET 2.0 applications these days and have stayed off of .NET 3.5 for some time now. I think that version 4.0, with its full .NET runtime rev and assembly and configuration consolidation, will make an attractive platform for developers to update to. If you're a Web Forms developer in particular, ASP.NET 4.0 includes a host of new features in the Web Forms engine that are significant enough to warrant a quick move to .NET 4.0. I'll cover those changes in my next column. Until then, I suggest you give ASP.NET 4.0 a spin and see for yourself how the new features can help you out.

© Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET

    Read the article

  • Interview with Tomas Ulin at the MySQL Innovation Day

    - by Monica Kumar
MySQL Innovation Day held on June 5, 2012 was a great event for the MySQL engineers, users and customers to gather, share and network. I was able to get a few minutes with Tomas Ulin, Vice President of MySQL Engineering at Oracle, to ask him some questions. Here are the highlights of my interview with Tomas.

Monica: This was the first MySQL Innovation Day, correct? Why now, and what was the strategy behind hosting this kind of event?

Tomas: In the last year, we have rolled out an incredible number of MySQL events worldwide – some targeted at developers that are new to MySQL and others for the MySQL savvy. At the MySQL Innovation Day, our first event of this kind, we had a number of our key engineers presenting lightning talks delivering previews of key new features as well as discussing roadmap. Our goal is to keep an open dialogue with the MySQL community. In fact, we are hosting a two-day conference, another first, for the MySQL community called MySQL Connect on Sept. 29-30 in San Francisco. If you attended the MySQL Innovation Day and liked what we did, you are going to love MySQL Connect. We'll have a lot more of our engineers and many users and community members presenting hour-long sessions and hands-on labs. Our engineers will be presenting new MySQL features as well as offering previews of upcoming enhancements.

Monica: What's the big take-away from today's MySQL Innovation Day?

Tomas: I hope the most important takeaway for attendees was to see that Oracle has been driving, and continues to drive, MySQL innovation with a steady stream of great new GA and Development Milestone releases.

Monica: What were attendees most interested in? What feedback did they have?

Tomas: Feedback from attendees was incredibly positive and encouraging. In particular, they liked the interaction with the MySQL engineers and were also excited about the new early access features in MySQL 5.6 and MySQL Cluster 7.3. In addition, sessions delivered by MySQL users like Facebook, Pinterest and Twitter were very well received. For example, Pinterest talked about using MySQL to scale from 0 to billions of page views/month, Twitter talked about "Scaling twitter with MySQL" and Facebook discussed the many options to implement MySQL master failover solutions. The presentations are already available for download, while some of the session videos will be made available on the MySQL Innovation Day web page shortly.

Monica: How would you distinguish the use of MySQL vs. Oracle Database? What key factors should customers consider?

Tomas: MySQL and Oracle Database complement each other. They are very different products, best suited to different use cases. Customers can choose world-class solutions from Oracle to fulfill a variety of needs. MySQL is a great choice for enterprise web-based, custom and embedded apps. Oracle Database is the leading choice for enterprise packaged applications such as ERP and CRM, as well as high-end data warehousing and business intelligence applications.

Monica: What are the highlights of the current MySQL 5.6 Development Milestone Release and the early access features for MySQL Cluster 7.3?

Tomas: The MySQL 5.6 development milestone release builds on MySQL 5.5 by improving:
- the optimizer, for better performance and scalability
- the Performance Schema, for better instrumentation
- InnoDB, for better transactional throughput
- replication, for higher availability and data integrity
- NoSQL options, for more flexibility
We announced some new early access features in MySQL 5.6, including binary log group commit.
We also announced early access features in MySQL Cluster 7.3 including support for foreign key constraints. Monica: How do people get these releases? Tomas: You can access development milestone releases by going to: http://dev.mysql.com/downloads/mysqlThen select the “Development Release” tab. The MySQL Cluster 7.3 and other early access features can be downloaded at: http://labs.mysql.com Monica: What's coming up next for MySQL? Tomas: Our development team is working in overdrive, cranking out new features with community feedback. Don’t miss the MySQL Connect conference being held in San Francisco on Sept. 29 and 30th. My team and I will be there. I hope you can join us! Monica: Thank you for your time, Tomas. I look forward to seeing you at the MySQL Connect conference. To our followers, I hope you found this interview informative. I welcome your comments. Please stay tuned here for more updates on MySQL. Note: Monica Kumar is Senior Director of product marketing for Linux, Virtualization and MySQL at Oracle.

    Read the article

  • Load and Web Performance Testing using Visual Studio Ultimate 2010-Part 3

    - by Tarun Arora
Welcome back once again. In Part 1 of Load and Web Performance Testing using Visual Studio 2010 I talked about why performance testing the application is important, the test tools available in Visual Studio Ultimate 2010 and various test rig topologies. In Part 2 of Load and Web Performance Testing using Visual Studio 2010 I discussed the details of web performance & load tests as well as why it's important to follow a goal based pattern while performance testing your application. In Part 3 I'll be discussing Test Result Analysis, Test Result Drill Through, Test Report Generation, Test Run Comparison, the ASP.NET Profiler and some closing thoughts.

Test Results – I see some creepy worms!

In Part 2 we put together a web performance test and a load test; let's run the load test to see how the Web site responds to the load simulation. While the load test is running you will be able to see close to real time analysis in the Load Test Analyser window. You can use the Load Test Analyser to conduct load test analysis in three ways:

Monitor a running load test - A condensed set of the performance counter data is maintained in memory. To prevent the results memory requirements from growing unbounded, up to 200 samples for each performance counter are maintained. This includes 100 evenly spaced samples that span the current elapsed time of the run and the most recent 100 samples.

After the load test run is completed - The test controller spools all collected performance counter data to a database while the test is running. Additional data, such as timing details and error details, is loaded into the database when the test completes. The performance data for a completed test is loaded from the database and analysed by the Load Test Analyser. Below you can see a screen shot of the summary view; this provides key results in a format that is compact and easy to read. You can also print the load test summary; this is generated after the test has completed or been stopped.

Analyse the load test results of a previously run load test – We'll see this in the section where I discuss comparison between two test runs. The performance counters can be plotted on the graphs. You also have the option to highlight a selected part of the test and view details, or drill down to the user activity chart where you can hover over a point to see more details of the test run.

Generate Report => Test Run Comparisons

The level of reports you can generate using the Load Test Analyser is astonishing. You have the option to create Excel reports and conduct side by side analysis of two test results or to track trend analysis. The tool also allows you to export the graph data either to MS Excel or to a CSV file. You can view the ASP.NET Profiler report to conduct further analysis as well. View Data and Diagnostic Attachments opens the Choose Diagnostic Data Adapter Attachment dialog box to select an adapter to analyse the result type. For example, you can select an IntelliTrace adapter, click OK and open the IntelliTrace summary for the test agent that was used in the load test.

Compare results

This creates a set of reports that compares the data from two load test results using tables and bar charts. I have taken these screen shots from the MSDN documentation; I would highly recommend exploring the wealth of knowledge available on MSDN.
Leaving Thoughts

While load testing the application with an excessive load for a longer duration of time, I managed to bring IIS to its knees by piling up a huge queue of requests waiting to be processed. This clearly means that IIS had run out of threads, as all the threads were busy processing existing requests. One easy way of fixing this is to increase the default number of allocated threads, but this might just escalate the problem. The better suggestion is to try and drill down to the actual root cause of the problem.

Whenever the garbage collection runs it stops processing any pages, so all requests that come in during that period are queued up; realistically, though, the garbage collection completes in a fraction of a second. To understand this better, let's look at the .NET heap. It is divided into the large object heap and the small object heap; anything greater than 85 KB in size will be allocated on the large object heap. The large object heap is non-compacting, and remember large objects are expensive to move around, so if you are allocating something on the large object heap, make sure that you really need it! The small object heap on the other hand is divided into generations, so all objects that are supposed to be short-lived should live in Gen-0, and the long-living objects eventually move to Gen-2 as garbage collection goes through.

As you can see in the picture below, all objects smaller than 85 KB are first assigned to Gen-0. When Gen-0 fills up and a new object comes in and finds Gen-0 full, the garbage collection process is started; the process checks for all the dead objects, marks them as valid candidates for deletion to free up memory, and promotes all the remaining objects in Gen-0 to Gen-1. So in the future, whenever you clean up Gen-1 you have to clean up Gen-0 as well. When you fill up Gen-0 again, the dead objects in Gen-1 are collected and the rest are moved to Gen-2, and the Gen-0 objects are moved to Gen-1 to free up Gen-0; but by this time your garbage collection process has started to take much more time than it usually takes. Now, as I mentioned earlier, while garbage collection is being run all page requests that come in during that period are queued up. Does this explain why page requests might be getting queued up? Apart from this, it could also be the case that you are waiting for a long-running database process to complete.

Let's explore the heap a bit more… What is really a case of crisis is when objects live long enough to make it to Gen-2 and then die; this is definitely a high cost operation. But sometimes you need objects in memory, for example when you cache data you hold on to the objects because you need to use them right across the user session, which is acceptable. But if you want to see what extreme caching can do to your server, write a simple application that chucks a lot of data into the cache and run a load test over it for about 10-15 minutes, forcing a lot of data into memory and causing the heap to run out of memory. If you get to such a state where you start running out of memory, IIS restarts the worker process as a mode of recovery. That is a great way to free up all your memory in the heap, but it also clears the cache. The problem with this is that if the customer had 10 items in their shopping basket and that data was stored in the application cache, the user's basket will now be empty, forcing them either to get frustrated and go to a competitor website or, if the customer is really patient, to give it another try!
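To see the generation behaviour described above for yourself, a small sketch like the following can be dropped into any .NET console project; the 90,000-byte allocation is just an arbitrary size above the ~85 KB large object heap threshold, and forcing a collection is done here for demonstration only.

using System;

class GcGenerationsDemo
{
    static void Main()
    {
        byte[] small = new byte[1024];    // well under 85 KB: allocated in Gen-0
        byte[] large = new byte[90000];   // over 85 KB: goes straight to the large object heap

        Console.WriteLine(GC.GetGeneration(small));  // typically prints 0
        GC.Collect();                                // force a collection (demo only)
        Console.WriteLine(GC.GetGeneration(small));  // the surviving object has been promoted

        Console.WriteLine("Collections so far - Gen0: {0}, Gen1: {1}, Gen2: {2}",
            GC.CollectionCount(0), GC.CollectionCount(1), GC.CollectionCount(2));

        GC.KeepAlive(large);                         // keep the LOH allocation reachable
    }
}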
How can you address this? Well, there are two ways of addressing it.

1. Workaround – An x86 (32-bit) processor only allows a maximum of 4 GB of RAM, which means the machine effectively has around 3.4 GB of RAM available. The OS needs about 1.5 GB of RAM to run efficiently, and IIS and the .NET framework also need their share of memory, leaving you a heap of around 800 MB to play with. Because team builds by default build your application in 'Compile as Any CPU' mode, the application is built such that it will run in x86 mode on an x86 processor and in x64 mode on an x64 processor. The problem with this is that not all applications are really x64 compatible, especially if you are using COM objects or external libraries. So, as a quick win, if you compile your application in x86 mode by changing the 'compile as any' selection to 'compile as x86' in the team build, you will be able to run your application on an x64 machine in x86 mode (WOW64 – by running Windows on Windows), and what that means is you could use 8 GB+ worth of RAM. If you take away everything else, your application will roughly get a heap size of at least 4 GB to play with, which is immense. If you need a heap size of more than 4 GB you have either built software for NASA or there is something fundamentally wrong in your application.

2. Solution – Now that you have put a workaround in place, IIS will not restart the worker process that regularly, which means you can take a breather and start working to get to the root cause of this memory leak. But this begs a question: "How do I identify possible memory leaks in my application?" Well, I won't say that there is one single tool that can tell you where the memory leak is, but trust me, performance profiling is a great starting point; it definitely gets you moving in the right direction. Let's have a look at how.

Performance Wizard - Start the Performance Wizard and select Instrumentation; this lets you measure function call counts and timings. Before running the performance session, right click the performance session settings and choose Properties from the context menu to bring up the Performance session properties page and, as shown in the screen shot below, check the check boxes in the group '.NET memory profiling collection', namely 'Collect .NET object allocation information' and 'Also collect the .NET Object lifetime information'.

Now if you fire off the profiling session on your pages, you will notice that the results allow you to view 'Object Lifetime', which shows you the number of objects that made it to Gen-0, Gen-1, Gen-2, the large object heap, etc. Another great feature of the profiler is that if more than 5% of your application's objects die right after making it to Gen-2 storage, a threshold alert is generated to warn you. You also have the option to view the most expensive methods, and by capturing the IntelliTrace data you can drill in and narrow down to the line of code that is the root cause of the problem.

Well, now we have seen how crucial memory management is and how easy Visual Studio Ultimate 2010 makes it for us to identify and reproduce the problem with the best of breed tools in the product.

Caching

One of the main ways to improve performance is caching, which basically means you tell the web server that instead of going to the database for each request it should keep the data in the web server, and when the user asks for it, serve it from the web server itself. BUT that can have consequences!
Let's look at some code. Trust me, caching code is not very intuitive. I define a cache key for almost all searches made through the common search page and cache the results. The approach works fine: the first time I get the data from the database, and the second time the data is served from the cache, a significant performance improvement, EXCEPT when two users try to do the same operation and run into each other. It is easy to handle this by adding the lock, as you can see in the snippet below. So, as long as a user comes in and finds that the cache is empty, the user takes the lock and starts to build the cache, and there are no more concurrency issues. But let's say you are processing 10 requests per second. By the time I have locked the operation to get the results from the database, 9 other users came in and found that the cache key is null, so after I have come out and populated the cache they will still go on to fetch the results again. The application will still be faster, because the next set of 10 users, and so on, would continue to get data from the cache. BUT if we added another null check after taking the lock, before the actual call to the database, then the 9 users who follow me would not make the extra trip to the database at all, and that would really increase the performance. Didn't I say the code won't be very intuitive? You may want to leave a comment in the code, so that another developer doesn't come along, wonder why the cache key is being checked for null twice, and remove the second check!

The downside of caching is that you are storing the data outside of the database, and the cached data can be wrong because updates applied to the database make the data cached at the web server go out of sync. So, how do you invalidate the cache? Well, if you only had one way of updating the data, say a single entry point for data updates, you could write some logic to set the cache object to null every time new data is entered. But this approach will not work as soon as you have several ways of feeding data into the system, or your system is scaled out across a farm of web servers. The practical solution to this is micro caching, which means you cache the query results for a set time duration and invalidate the cache after that duration. The advantage is that every time the user queries for that data within the time span for which you have cached the results, there are no calls made to the database and the data is served right from the server, which makes the response immensely quick. Figuring out the appropriate time span for which you micro cache the query results really depends on the application. Let's say your website gets 10 requests per second: if you retain the cached results for even 1 minute, you will have immense performance gains, reducing hits to the database for searching by around 90%.

Ever wondered why, when you go to e-bookers.com or xpedia.com or yatra.com to book a flight and you click on the book button because the fare seems too exciting, you get an error message telling you that the fare is not valid any more? Yes, exactly: that is a cache failure! These travel sites and price comparison engines are not going to hit the database every time you hit the compare button; instead the results are served from the cache, because the query results are micro cached. It's a deliberate trade-off: by micro caching the results the site gains huge performance benefits, but every once in a while it annoys a customer because the fare has expired.
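For reference, here is a minimal sketch of the locking and micro-caching pattern described above, assuming ASP.NET's HttpRuntime.Cache; the SearchResult type, the 60-second window and GetResultsFromDatabase() are hypothetical placeholders rather than the article's actual code.

using System;
using System.Collections.Generic;
using System.Web;

public class SearchCache
{
    private static readonly object _searchLock = new object();

    public static IList<SearchResult> GetResults(string criteria)
    {
        string cacheKey = "search:" + criteria;

        // First check: most requests are served straight from the cache.
        var results = HttpRuntime.Cache[cacheKey] as IList<SearchResult>;
        if (results == null)
        {
            lock (_searchLock)
            {
                // Second check: requests that queued up behind the lock find the
                // cache already populated and skip the database call entirely.
                results = HttpRuntime.Cache[cacheKey] as IList<SearchResult>;
                if (results == null)
                {
                    results = GetResultsFromDatabase(criteria);

                    // Micro-caching: an absolute expiration keeps the cached data
                    // from drifting too far out of sync with the database.
                    HttpRuntime.Cache.Insert(cacheKey, results, null,
                        DateTime.UtcNow.AddSeconds(60),
                        System.Web.Caching.Cache.NoSlidingExpiration);
                }
            }
        }
        return results;
    }

    private static IList<SearchResult> GetResultsFromDatabase(string criteria)
    {
        // Placeholder for the real data access call.
        return new List<SearchResult>();
    }
}

public class SearchResult { /* fields omitted */ }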
Still, the trade-off works in favour of these sites: they are able to process 30+ page requests per second and keep up with the site traffic, while maybe losing one customer every once in a while to a competitor who is also using a similar caching technique. And what are the odds that the user will not come back to their site sooner or later?

Recap

Resources

Below are some key resources you might like to review. I would highly recommend the documentation, walkthroughs and videos available on MSDN. You can always make use of Fiddler to debug Web Performance Tests. Some community test extensions and plug-ins available on CodePlex might also be of interest to you.

The Road Ahead

Thank you for taking the time out to read this blog post; you may also want to read Part I and Part II if you haven't so far. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Questions, feedback, suggestions, etc.: please leave a comment. Next up is 'Load Testing in the Cloud', where I'll be exploring the possibilities of running the Test Controller and Agents in the cloud. See you on the other side! Thank you!

    Read the article

  • ICC Cricket World Cup 2011- Free Online Live Streaming, Mobile Apps, TV and Radio Guide

    - by Kavitha
    The ICC Cricket World Cup 2011 will be hosted jointly by Bangladesh, India and Sri Lanka. This 10th edition of World Cup is held between 19 February-2 April 2011. The World Cup drive will be starting in Dhaka on 19 February with the inaugural match between India and Bangladesh. The 43 days long ICC World Cup Cricket 2011 event will host 49 matches, day matches starting as early as 9.30am IST and day-night matches starting at 2.30pm IST. Here is our guide to follow 2011 ICC Cricket World Cup live on your computers, televisions,mobiles and radios Free Live Streaming On The Web (Official & Unofficial) http://espnstar.com will live stream all the matches of World Cup 2011 and they will be available in HD quality as they are the official broadcasters of World Cup 2011 cricket event. This is the first time ever a world cup cricket event is streamed online officially. If you are not able to access the official live streaming of Cricket World Cup due to regional restrictions, point your browser to any of the following unofficial live streams on the web. NOTE: MAKE SURE THAT YOUR ANTIVIRUS and ANTIMALWARE software are up and running before opening any of these sites. crictime.com - this site offers 6 live streaming servers that offer World Cup 2011 Cricket matches streams. Don’t mind the ads that are displayed left,right and center and just enjoy the cricket. Web pages dedicated for the world cup streaming are already live and you can bookmark them for your reference. cricfire.com/live-cricket: cricfire   gathers cricket live streams available around the web and provides them for easy access. Also they provide links for watching highlights and other post match analysis shows. Other sites that provide live streaming videos extracover.net webcric.com Searching for Unofficial Streams On Live Video Streaming Sites One of the best ways to find the unofficial streams is look for live streaming feeds on popular video streaming websites. We can be assured that these sites does not spread malware and spammy ads as they are well established. Here are the queries that you can use to search the popular sites FreedoCast  http://freedocast.com/search.aspx?go=cricket%20world%20cup Justin.tv      http://www.justin.tv/search?q=cricket+world+cup Ustream.tv  http://www.ustream.tv/discovery/live/all?q=cricket%20world%20cup TV Channels That Telecast Cricket World Cup Live Even though web is the place where we spend most of our time for entertainment, TVs are still popular for watching sports events. Mostly 90% of us are going to follow this cricket world cup matches on television sets. 
Here is the list of TV channels that paid whooping amounts of money for broadcasting rights and going to telecast live cricket Afghanistan – Ariana Television Network: Lemar TV Australia – Nine Network, Fox Sports Bangladesh – Bangladesh Television Canada – Asian Television Network China – ESPN Star Sports Europe (Except UK & Ireland) – Eurosport2 Fiji – Fiji TV India – ESPN Star Sports, Star Cricket, DD National (mostly India matches alone) Ireland – Zee Cafe Jamaica – Television Jamaica Middle East – Arab Radio and Television Network Nepal – ESPN Star Sports New Zealand – Sky Sport Pacific Islands – Sky Pacific Pakistan – GEO Super, Pakistan Television Corporation Pan-Africa – South African Broadcasting Corporation Singapore – Star Cricket South Africa – Supersport, Sabc3 Sport Sri Lanka – Sri Lanka Rupavahini Corporation United Kingdom – Sky Sports HD USA – Willow Cricket, DirecTV, Dish Network West Indies – Caribbean Media Corporation Radio Stations That Provide Live Commentary Don’t we listen to radio? Yes we still listen to radios, especially when we are on the go. Radios are part of our mobiles as well as music players like iPods. Here are the stations that you can tune into for catching live cricket commentary Australia – ABC Local Radio Bangladesh – Bangladesh Betar Canada , Central America – EchoStar India – All India Radio Pakistan, United Arab Emirates – Hum FM Sri Lanka – FM Derana United Kingdom, Ireland – BBC Radio West Indies – Caribbean Media Corporation Watch World Cup Cricket On Your Mobile This section is for Indian users. 3G rollout is happening at very high pace in all part of the India and most of the metros and towns are able to access 3G services. With 3G on your mobile you will be able to watch live ICC world cricket on your Reliance Mobiles and you can read more about it here. Top 10 Cricket Websites Check out our earlier post on top 10 cricket web sites for information. This article titled,ICC Cricket World Cup 2011- Free Online Live Streaming, Mobile Apps, TV and Radio Guide, was originally published at Tech Dreams. Grab our rss feed or fan us on Facebook to get updates from us.

    Read the article

< Previous Page | 362 363 364 365 366 367 368 369 370 371 372 373  | Next Page >