Search Results

Search found 8177 results on 328 pages for 'lost focus'.


  • PhpMyAdmin Hangs On MySQL Error

    - by user75228
    I'm currently running phpMyAdmin 4.0.10 (the last release line supporting PHP 5.2.x) on my Amazon EC2 instance, connecting to a MySQL database on RDS. Everything works fine except actions that return a MySQL error message. Whenever I perform any kind of action that produces a MySQL error, phpMyAdmin hangs with the yellow "Loading" box forever without displaying anything. For example, if I run the following command in the MySQL CLI:

        select * from 123;

    it instantly returns:

        ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '123' at line 1

    which is completely normal, because the table 123 doesn't exist. However, if I execute the exact same command in the SQL box in phpMyAdmin and click "Go", it displays "Loading" and stops there forever. Has anyone encountered this kind of issue with phpMyAdmin? Is this a bug, or is something wrong in my config.inc.php? Any help would be much appreciated. I also noticed this message, repeated several times, in my Apache error log:

        /opt/apache/bin/httpd: symbol lookup error: /opt/php/lib/php/extensions/no-debug-non-zts-20060613/iconv.so: undefined symbol: libiconv_open

    Below are my config.inc.php settings:

        <?php
        /* vim: set expandtab sw=4 ts=4 sts=4: */
        /**
         * phpMyAdmin sample configuration, you can use it as base for
         * manual configuration. For easier setup you can use setup/
         *
         * All directives are explained in documentation in the doc/ folder
         * or at <http://docs.phpmyadmin.net/>.
         *
         * @package PhpMyAdmin
         */

        /* This is needed for cookie based authentication to encrypt password in cookie */
        $cfg['blowfish_secret'] = 'something_random'; /* YOU MUST FILL IN THIS FOR COOKIE AUTH! */

        /* Servers configuration */
        $i = 0;

        /* First server */
        $i++;
        /* Authentication type */
        $cfg['Servers'][$i]['auth_type'] = 'cookie';
        /* Server parameters */
        $cfg['Servers'][$i]['host'] = '*.rds.amazonaws.com';
        $cfg['Servers'][$i]['connect_type'] = 'tcp';
        $cfg['Servers'][$i]['compress'] = true;
        /* Select mysql if your server does not have mysqli */
        $cfg['Servers'][$i]['extension'] = 'mysqli';
        $cfg['Servers'][$i]['AllowNoPassword'] = false;
        $cfg['LoginCookieValidity'] = '3600';

        /* phpMyAdmin configuration storage settings. */
        /* User used to manipulate with storage */
        $cfg['Servers'][$i]['controlhost'] = '*.rds.amazonaws.com';
        $cfg['Servers'][$i]['controluser'] = 'pma';
        $cfg['Servers'][$i]['controlpass'] = 'password';
        /* Storage database and tables */
        $cfg['Servers'][$i]['pmadb'] = 'phpmyadmin';
        $cfg['Servers'][$i]['bookmarktable'] = 'pma__bookmark';
        $cfg['Servers'][$i]['relation'] = 'pma__relation';
        $cfg['Servers'][$i]['table_info'] = 'pma__table_info';
        $cfg['Servers'][$i]['table_coords'] = 'pma__table_coords';
        $cfg['Servers'][$i]['pdf_pages'] = 'pma__pdf_pages';
        $cfg['Servers'][$i]['column_info'] = 'pma__column_info';
        $cfg['Servers'][$i]['history'] = 'pma__history';
        $cfg['Servers'][$i]['table_uiprefs'] = 'pma__table_uiprefs';
        $cfg['Servers'][$i]['tracking'] = 'pma__tracking';
        $cfg['Servers'][$i]['designer_coords'] = 'pma__designer_coords';
        $cfg['Servers'][$i]['userconfig'] = 'pma__userconfig';
        $cfg['Servers'][$i]['recent'] = 'pma__recent';
        /* Contrib / Swekey authentication */
        // $cfg['Servers'][$i]['auth_swekey_config'] = '/etc/swekey-pma.conf';

        /* End of servers configuration */

        /* Directories for saving/loading files from server */
        $cfg['UploadDir'] = '';
        $cfg['SaveDir'] = '';

        /* Defines whether a user should be displayed a "show all (records)"
           button in browse mode or not. default = false */
        //$cfg['ShowAll'] = true;

        /* Number of rows displayed when browsing a result set. If the result
           set contains more rows, "Previous" and "Next". default = 30 */
        $cfg['MaxRows'] = 50;

        /* disallow editing of binary fields
           valid values are:
             false    allow editing
             'blob'   allow editing except for BLOB fields
             'noblob' disallow editing except for BLOB fields
             'all'    disallow editing
           default = blob */
        //$cfg['ProtectBinary'] = 'false';

        /* Default language to use, if not browser-defined or user-defined
           (you find all languages in the locale folder)
           uncomment the desired line. default = 'en' */
        //$cfg['DefaultLang'] = 'en';
        //$cfg['DefaultLang'] = 'de';

        /* default display direction (horizontal|vertical|horizontalflipped) */
        //$cfg['DefaultDisplay'] = 'vertical';

        /* How many columns should be used for table display of a database?
           (a value larger than 1 results in some information being hidden)
           default = 1 */
        //$cfg['PropertiesNumColumns'] = 2;

        /* Set to true if you want DB-based query history. If false, this utilizes
           JS-routines to display query history (lost by window close).
           This requires configuration storage enabled, see above. default = false */
        //$cfg['QueryHistoryDB'] = true;

        /* When using DB-based query history, how many entries should be kept?
           default = 25 */
        //$cfg['QueryHistoryMax'] = 100;

        /* You can find more configuration options in the documentation
           in the doc/ folder or at <http://docs.phpmyadmin.net/>. */
        ?>
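    The repeated iconv.so symbol lookup error in the Apache log suggests PHP's iconv extension is failing at runtime, which could abort the AJAX response phpMyAdmin is waiting for. As a minimal sanity check (a sketch for diagnosis, not a fix; the path and versions come from the log above):

        <?php
        // If iconv is broken, phpMyAdmin's error page can die server-side while
        // the browser keeps showing "Loading". Run this standalone under Apache.
        var_dump(extension_loaded('iconv'));               // expect bool(true)
        echo iconv('UTF-8', 'ISO-8859-1//TRANSLIT', 'ok'); // fails if the symbol is missing
        ?>

    If this script fails the same way, rebuilding the iconv extension against the same libiconv that the PHP binary was linked with is the usual direction to investigate.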


  • Using resize to getScript if above x pixels (jQuery)

    - by user1065573
    So I have an issue with this script: it loads the script on every resize event. Here is the behaviour I want. If the user's window starts above 768px wide, getScript() runs (and the content is inserted by .load() inside that script). If the window is then resized below 768px (CSS hides the #div) and back above 768px, the script does not load again (CSS just shows the div). Great, that part works. Now, for some crazy reason, if the window starts below 768px (nothing runs, as intended) and the user resizes above 768px, getScript() runs and fetches the content (fine), but then every further resize above 768px runs getScript() AGAIN. I need it to load the script only once. I've used some code from "jQuery: How to detect window width on the fly?" and from "Problem with function within $(window).resize - using jQuery" (which describes the same load-once issue), and others I've lost the links to. When I first found this problem I was using something like the following; I then added allcheckWidth() to solve the problem, but it does not:

        var $window = $(window);

        function checkWidth() {
            var windowsize = $window.width();
            // LETS GET SOME THINGS IF IS DESKTOP
            var $desktop_load = 0;
            if (windowsize >= 768) {
                if (!$desktop_load) {
                    // GET DESKTOP SCRIPT TO KEEP THINGS QUICK
                    $.getScript("js/desktop.js", function() {
                        $desktop_load = 1;
                    });
                }
            }
            // LETS GET SOME THINGS IF IS MOBILE
            if (windowsize < 768) {
            }
        }
        checkWidth();

        function allcheckWidth() {
            var windowsize = $window.width();
            // IF WINDOW SIZE LOADS < 768 THEN CHANGES >= 768, LOAD ONCE
            var $desktop_load = 0;
            if (!$desktop_load) {
                if (windowsize < 768) {
                    if ($(window).width() >= 768) {
                        $.getScript("js/desktop.js", function() {
                            $desktop_load = 1;
                        });
                    }
                }
            }
        }
        $(window).resize(allcheckWidth);

    Now I'm using something like this, which makes more sense(?):

        $(function() {
            $(window).resize(function() {
                var $desktop_load = 0;
                // Desktop
                if (window.innerWidth >= 768) {
                    if (!$desktop_load) {
                        // GET DESKTOP SCRIPT TO KEEP THINGS QUICK
                        $.getScript("js/desktop.js", function() {
                            $desktop_load + 1;
                        });
                    }
                }
                if (window.innerWidth < 768) {
                    if (window.innerWidth >= 768) {
                        if (!$desktop_load) {
                            // GET DESKTOP SCRIPT TO KEEP THINGS QUICK
                            $.getScript("js/desktop.js", function() {
                                $desktop_load + 1;
                            });
                        }
                    }
                }
            }).resize(); // trigger resize event
        });

    Thanks for any future response. Edit: to give an example, desktop.js shows a gif before the "abc" content that gets inserted by .load(). On resize this gif shows up and the .load() in desktop.js fires again. So if getScript() were only being called once, desktop.js would not be doing anything again. Confusing?
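    The underlying bug in both attempts: $desktop_load is a local variable re-initialized to 0 on every call, so the "already loaded" state never survives from one resize event to the next (and `$desktop_load + 1` discards its result anyway). A minimal sketch with the flag hoisted outside the handler:

        var desktopLoaded = false; // persists across resize events

        function maybeLoadDesktop() {
            if (!desktopLoaded && $(window).width() >= 768) {
                desktopLoaded = true; // set immediately so rapid resizes can't double-load
                $.getScript("js/desktop.js");
            }
        }

        $(window).resize(maybeLoadDesktop);
        maybeLoadDesktop(); // also handle a window that starts out wide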


  • How to send a PHP function's result to the page via jQuery AJAX

    - by OpenCode
    I have a lot of code for user notifications; it does a lot of MySQL work, so it needs time. jQuery AJAX works for PHP files. How can I use jQuery to put a PHP function's result into the web page? Current code:

        <? echo db_cache("main_top_naver_cache", 300, "naver_popular('naver_popular', 4)") ?>

    Wanted code, but it shows errors:

        <div id='a'>
        <div id='b'>
        <script type="text/javascript">
        $("#test1").html( "<? echo htmlspecialchars(db_cache("main_top_naver_cache", 300, "naver_popular('naver_popular', 4)")) ?>" );
        </script>

    The IE debugger shows error SCRIPT1015 (unterminated string constant); the rendered source looks like this:

        <script type="text/javascript">
        $("#test1").html( "
        &lt;style&gt;
        /* http://html.nhncorp.com/uio_factory/ui_pattern/list/3 */
        .section_ol3{position:relative;border:1px solid #ddd;background:#fff;font-size:12px;font-family:Tahoma, Geneva, sans-serif;line-height:normal;*zoom:1}
        .section_ol3 a{color:#666;text-decoration:none}
        .section_ol3 a:hover, .section_ol3 a:active, .section_ol3 a:focus{text-decoration:underline}
        .section_ol3 em{font-style:normal}
        .section_ol3 h2{margin:0;padding:10px 0 8px 13px;border-bottom:1px solid #ddd;font-size:12px;color:#333}
        .section_ol3 h2 em{color:#cf3292}
        .section_ol3 ol{margin:13px;padding:0;list-style:none}
        .section_ol3 li{position:relative;margin:0 0 10px 0;*zoom:1}
        .section_ol3 li:after{display:block;clear:both;content:&quot;&quot;}
        .section_ol3 li .ranking{display:inline-block;width:14px;height:11px;margin:0 5px 0 0;border-top:1px solid #fff;border-bottom:1px solid #d1d1d1;background:#d1d1d1;text-align:center;vertical-align:top;font:bold 10px Tahoma;color:#fff}
        .section_ol3 li.best .ranking{border-bottom:1px solid #6e87a5;background:#6e87a5}
        .section_ol3 li.best a{color:#7189a7}
        .section_ol3 li .num{position:absolute;top:0;right:0;font-size:11px;color:#a8a8a8;white-space:nowrap}
        .section_ol3 li.best .num{font-weight:bold;color:#7189a7}
        .section_ol3 .more{position:absolute;top:10px;right:13px;font:11px Dotum, ??;text-decoration:none !important}
        .section_ol3 .more span{margin:0 2px 0 0;font-weight:bold;font-size:16px;color:#d76ea9;vertical-align:middle}
        &lt;/style&gt;
        &lt;div class=&quot;section_ol3&quot;&gt;
        &lt;ol style='text-align:left;'&gt;
        &lt;li class='best'&gt;&lt;span class='ranking'&gt;1&lt;/span&gt;&lt;a href='http://search.naver.com/search.naver?where=nexearch&amp;query=%B9%AB%C7%D1%B5%B5%C0%FC&amp;sm=top_lve' onfocus='this.blur()' title='????' target=new&gt;????&lt;/a&gt;&lt;span class='num'&gt;+42&lt;/span&gt;&lt;/li&gt;
        &lt;li class='best'&gt;&lt;span class='ranking'&gt;2&lt;/span&gt;&lt;a href='http://search.naver.com/search.naver?where=nexearch&amp;query=%B1%E8%C0%E7%BF%AC&amp;sm=top_lve' onfocus='this.blur()' title='???' target=new&gt;???&lt;/a&gt;&lt;span class='num'&gt;+123&lt;/span&gt;&lt;/li&gt;
        &lt;li class='best'&gt;&lt;span class='ranking'&gt;3&lt;/span&gt;&lt;a href='http://search.naver.com/search.naver?where=nexearch&amp;query=%C0%CC%C7%CF%C0%CC&amp;sm=top_lve' onfocus='this.blur()' title='???' target=new&gt;???&lt;/a&gt;&lt;span class='num'&gt;+90&lt;/span&gt;&lt;/li&gt;
        &lt;li &gt;&lt;span class='ranking'&gt;4&lt;/span&gt;&lt;a href='http://search.naver.com/search.naver?where=nexearch&amp;query=%BA%D2%C8%C4%C0%C7%B8%ED%B0%EE2&amp;sm=top_lve' onfocus='this.blur()' title='?????2' target=new&gt;?????2&lt;/a&gt;&lt;span class='num'&gt;+87&lt;/span&gt;&lt;/li&gt;
        &lt;/ol&gt;
        </div>
        " );
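    A sketch of one way to make the embedded string safe: let PHP produce a proper JavaScript string literal with json_encode(), which escapes the quotes and newlines that currently break the script (db_cache() and naver_popular() are the question's own functions):

        <div id="test1"></div>
        <script type="text/javascript">
        // json_encode() emits a valid, fully escaped JS string literal,
        // so raw newlines and quotes in the HTML can no longer break the script
        $("#test1").html(<?php echo json_encode(db_cache("main_top_naver_cache", 300, "naver_popular('naver_popular', 4)")); ?>);
        </script>

    Loading the fragment asynchronously, e.g. $("#test1").load('notifications.php') against a hypothetical endpoint that echoes the cached HTML, would also keep the slow MySQL work from blocking the page.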


  • hosting simple python scripts in a container to handle concurrency, configuration, caching, etc.

    - by Justin Grant
    My first real-world Python project is to write a simple framework (or re-use/adapt an existing one) which can wrap small Python scripts (used to gather custom data for a monitoring tool) in a "container" that handles boilerplate tasks like: fetching a script's configuration from a file (keeping that info up to date if the file changes, and handling decryption of sensitive config data); running multiple instances of the same script in different threads instead of spinning up a new process for each one; and exposing an API for caching expensive data and storing persistent state from one script invocation to the next.

    Today, script authors must handle the issues above themselves, which usually means most script authors don't handle them correctly, causing bugs and performance problems. Besides avoiding bugs, we want a solution that lowers the bar to creating and maintaining scripts, especially given that many script authors may not be trained programmers.

    Below are examples of the API I've been thinking of, and which I'm looking to get your feedback about. A script author would need to write a single method which takes (as input) the configuration that the script needs to do its job, and either returns a Python object or calls a method to stream back data in chunks. Optionally, a script author could supply methods to handle startup and/or shutdown tasks.

    HTTP-fetching script example (in pseudocode, omitting the actual data-fetching details to focus on the container's API):

        def run(config, context, cache):
            results = http_library_call(config.url, config.http_method, config.username, config.password, ...)
            return {html: results.html, status_code: results.status, headers: results.response_headers}

        def init(config, context, cache):
            config.max_threads = 20                  # up to 20 URLs at one time (per process)
            config.max_processes = 3                 # launch up to 3 concurrent processes
            config.keepalive = 1200                  # keep process alive for 10 mins without another call
            config.process_recycle.requests = 1000   # restart the process every 1000 requests (to avoid leaks)
            config.kill_timeout = 600                # kill the process if any call lasts longer than 10 minutes

    A database-fetching script example might look like this (in pseudocode):

        def run(config, context, cache):
            expensive = context.cache["something_expensive"]
            for record in db_library_call(expensive, context.checkpoint, config.connection_string):
                context.log(record, "logDate")  # log all properties; optionally name the timestamp property
                last_date = record["logDate"]
                context.checkpoint = last_date  # persistent checkpoint, used next time through

        def init(config, context, cache):
            cache["something_expensive"] = get_expensive_thing()

        def shutdown(config, context, cache):
            expensive = cache["something_expensive"]
            expensive.release_me()

    Is this API appropriately "Pythonic", or are there things I should do to make it more natural to the Python script author? (I'm more familiar with building C++/C#/Java APIs, so I suspect I'm missing useful Python idioms.) Specific questions: Is it natural to pass a "config" object into a method and ask the callee to set various configuration options, or is there another preferred way to do this? When a callee needs to stream data back to its caller, is a method like context.log() (see above) appropriate, or should I be using yield instead? (yield seems natural, but I worry it'd be over the head of most script authors; see the sketch below.) My approach requires scripts to define functions with predefined names (e.g. "run", "init", "shutdown"); is this a good way to do it, and if not, what other mechanism would be more natural? I'm passing the same config, context, cache parameters into every method; would it be better to use a single "context" parameter instead, or global variables? Finally, are there existing libraries you'd recommend to make this kind of simple "script-running container" easier to write?
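    On the yield question: a generator makes the streaming contract explicit and keeps logging and checkpoint persistence in the container. A minimal runnable sketch (the names mirror the question's hypothetical API; fetch_records stands in for db_library_call):

        def fetch_records(checkpoint):
            # stand-in for the question's db_library_call
            for i in range(checkpoint, checkpoint + 3):
                yield {"logDate": i, "value": i * 10}

        def run(config, context, cache):
            for record in fetch_records(context["checkpoint"]):
                context["checkpoint"] = record["logDate"]  # container persists this between runs
                yield record  # the container decides how to log/stream each record

        # The container drives the generator and owns logging/persistence:
        context = {"checkpoint": 0}
        for rec in run({}, context, {}):
            print(rec)

    Script authors who find generators intimidating never have to see the consuming side; they just write `yield record` where they would otherwise have called context.log().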


  • scroll position for div on timer tick event

    - by MeqDotNet
    I have a chat box that refreshes every 7 seconds via a timer. The problem is that the div scrolls to the top each time the timer ticks. I have tried adding the following to the timer's Tick handler, but it doesn't work:

        Page.ClientScript.RegisterClientScriptBlock(this.GetType(), "focus",
            "document.getElementById('divMessages').scrollTop = document.getElementById('divMessages').style.height;", true);

    Here is my code:

        <table border="0" cellpadding="0" cellspacing="0">
          <tr>
            <td style="width: 500px;">
              <div id="divMessages" style="background-color: White; border-color:Black;border-width:1px;border-style:solid;height:300px;width:592px;overflow-y:scroll; font-size: 11px; padding: 4px 4px 4px 4px;"
                   onload="SetScrollPosition();" onresize="SetScrollPosition()">
                <asp:Literal Id="litMessages" runat="server" />
              </div>
            </td>
            <td>&nbsp;</td>
            <td>
              <div id="divUsers" style="background-color: White; border-color:Black;border-width:1px;border-style:solid;height:300px;width:150px;overflow-y:scroll; font-size: 11px; padding: 4px 4px 4px 4px;">
                <asp:Literal Id="litUsers" runat="server" />
              </div>
            </td>
          </tr>
          <tr>
            <td colspan="3">
              <asp:TextBox Id="txtMessage" onkeyup="ReplaceChars()" onfocus="SetToEnd(this)" runat="server" MaxLength="100" Width="500px" />
              <asp:Button Id="btnSend" runat="server" Text="Send" OnClientClick="SetScrollPosition()" OnClick="BtnSend_Click" />
              &nbsp; <b>Color:</b>
              <asp:DropDownList Id="ddlColor" runat="server">
                <asp:ListItem Value="Black" Selected="true">Black</asp:ListItem>
                <asp:ListItem Value="Blue">Blue</asp:ListItem>
                <asp:ListItem Value="Navy">Navy</asp:ListItem>
                <asp:ListItem Value="Red">Red</asp:ListItem>
                <asp:ListItem Value="Orange">Orange</asp:ListItem>
                <asp:ListItem Value="#666666">Gray</asp:ListItem>
                <asp:ListItem Value="Green">Green</asp:ListItem>
                <asp:ListItem Value="#FF00FF">Pink</asp:ListItem>
              </asp:DropDownList>
              &nbsp;
              <asp:Button Id="btnLogOut" Text="Log Out" runat="server" OnClick="BtnLogOut_Click" />
            </td>
          </tr>
        </table>
        <script type="text/javascript">
          var div = document.getElementById('divMessages');
          div.scrollTop = div.style.height;
        </script>
        </ContentTemplate>
        </asp:UpdatePanel>
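    One likely culprit: style.height is the string "300px", not a number, so assigning it to scrollTop does nothing useful; scrollHeight is the numeric value that pins the div to the bottom. A sketch that re-applies the scroll after every UpdatePanel partial postback (i.e. every timer tick), using the ASP.NET AJAX client API:

        function scrollMessagesToBottom() {
            var div = document.getElementById('divMessages');
            div.scrollTop = div.scrollHeight; // full content height as a number, not "300px"
        }
        // Runs after every partial postback, including the timer's ticks:
        Sys.WebForms.PageRequestManager.getInstance().add_endRequest(scrollMessagesToBottom);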


  • How to select chosen columns from two different entities into one DTO using NHibernate?

    - by Pawel Krakowiak
    I have two classes (just recreating the problem):

        public class User
        {
            public virtual int Id { get; set; }
            public virtual string FirstName { get; set; }
            public virtual string LastName { get; set; }
            public virtual IList<OrgUnitMembership> OrgUnitMemberships { get; set; }
        }

        public class OrgUnitMembership
        {
            public virtual int UserId { get; set; }
            public virtual int OrgUnitId { get; set; }
            public virtual DateTime JoinDate { get; set; }
            public virtual DateTime LeaveDate { get; set; }
        }

    There's a Fluent NHibernate map for both, of course:

        public class UserMapping : ClassMap<User>
        {
            public UserMapping()
            {
                Table("Users");
                Id(e => e.Id).GeneratedBy.Identity();
                Map(e => e.FirstName);
                Map(e => e.LastName);
                HasMany(x => x.OrgUnitMemberships)
                    .KeyColumn(TypeReflector<OrgUnitMembership>
                        .GetPropertyName(p => p.UserId)).ReadOnly().Inverse();
            }
        }

        public class OrgUnitMembershipMapping : ClassMap<OrgUnitMembership>
        {
            public OrgUnitMembershipMapping()
            {
                Table("OrgUnitMembership");
                CompositeId()
                    .KeyProperty(x => x.UserId)
                    .KeyProperty(x => x.OrgUnitId);
                Map(x => x.JoinDate);
                Map(x => x.LeaveDate);
                References(oum => oum.OrgUnit)
                    .Column(TypeReflector<OrgUnitMembership>
                        .GetPropertyName(oum => oum.OrgUnitId)).ReadOnly();
                References(oum => oum.User)
                    .Column(TypeReflector<OrgUnitMembership>
                        .GetPropertyName(oum => oum.UserId)).ReadOnly();
            }
        }

    What I want to do is retrieve some users based on criteria, combining all columns from the Users table with some columns from the OrgUnitMemberships table, analogous to this SQL query:

        select u.*, m.JoinDate, m.LeaveDate
        from Users u
        inner join OrgUnitMemberships m on u.Id = m.UserId
        where m.OrgUnitId = :ouid

    I am totally lost; I have tried many different options. Using a plain SQL query almost works, but because there are some nullable enums in the User class, AliasToBean fails to transform; otherwise wrapping a SQL query would work like this:

        return Session
            .CreateSQLQuery(sql)
            .SetParameter("ouid", orgUnitId)
            .SetResultTransformer(Transformers.AliasToBean<UserDTO>())
            .List<UserDTO>()

    I tried the code below as a test (in a few different variants), but I'm not sure what I'm doing. It works partially: I get UserDTO instances back, and the properties coming from OrgUnitMembership (the dates) are filled, but all properties from User are null:

        User user = null;
        OrgUnitMembership membership = null;
        UserDTO dto = null;
        var users = Session.QueryOver(() => user)
            .SelectList(list => list
                .Select(() => user.Id)
                .Select(() => user.FirstName)
                .Select(() => user.LastName))
            .JoinAlias(u => u.OrgUnitMemberships, () => membership)
            //.JoinQueryOver<OrgUnitMembership>(u => u.OrgUnitMemberships)
            .SelectList(list => list
                .Select(() => membership.JoinDate).WithAlias(() => dto.JoinDate)
                .Select(() => membership.LeaveDate).WithAlias(() => dto.LeaveDate))
            .TransformUsing(Transformers.AliasToBean<UserDTO>())
            .List<UserDTO>();
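    The symptom points at the projections: AliasToBean only maps projected columns that carry a WithAlias matching a DTO property, and the User columns above are projected without aliases. It is also clearer to project everything in a single SelectList. A sketch, assuming UserDTO exposes Id, FirstName, LastName, JoinDate and LeaveDate:

        UserDTO dto = null;
        OrgUnitMembership membership = null;
        var users = Session.QueryOver<User>()
            .JoinAlias(u => u.OrgUnitMemberships, () => membership)
            .Where(() => membership.OrgUnitId == orgUnitId)
            .SelectList(list => list
                // every projected column gets an alias AliasToBean can map
                .Select(u => u.Id).WithAlias(() => dto.Id)
                .Select(u => u.FirstName).WithAlias(() => dto.FirstName)
                .Select(u => u.LastName).WithAlias(() => dto.LastName)
                .Select(() => membership.JoinDate).WithAlias(() => dto.JoinDate)
                .Select(() => membership.LeaveDate).WithAlias(() => dto.LeaveDate))
            .TransformUsing(Transformers.AliasToBean<UserDTO>())
            .List<UserDTO>();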


  • Flash Mouse.hide(): bringing the mouse back after cmd+tab (alt+tab)

    - by DickieBoy
    I'm having a bit of trouble with Mouse.show() when the SWF loses focus. This isn't my demo, but it shows the same bug: http://www.foundation-flash.com/tutorials/as3customcursors/ To reproduce it: mouse over the SWF, hit cmd+tab to switch to another window; the result is that the mouse pointer is not brought back and stays invisible (to get it back, go to the window bar at the top of the screen and click something). I have an area in which movement is detected, and an image. Things I have tried:

        package {
            import flash.display.Sprite;
            import flash.display.MovieClip;
            import com.greensock.*;
            import flash.events.*;
            import flash.utils.Timer;
            import flash.events.TimerEvent;
            import flash.utils.*;
            import flash.ui.Mouse;

            public class mousey_movey extends MovieClip {
                public var middle_of_the_flash;
                // pixels per second
                public var speeds = [0, 1, 3, 5, 10];
                public var speed;
                public var percentage_the_mouse_is_across_the_screen;
                public var mouse_over_scrollable_area:Boolean;
                public var image_move_interval;

                public function mousey_movey() {
                    middle_of_the_flash = stage.stageWidth / 2;
                    hot_area_for_movement.addEventListener(MouseEvent.MOUSE_OVER, mouseEnter);
                    hot_area_for_movement.addEventListener(MouseEvent.MOUSE_OUT, mouseLeave);
                    hot_area_for_movement.addEventListener(MouseEvent.MOUSE_MOVE, mouseMove);
                    stage.addEventListener(Event.MOUSE_LEAVE, show_mouse);
                    stage.addEventListener(MouseEvent.MOUSE_OUT, show_mouse);
                    stage.addEventListener(Event.DEACTIVATE, show_mouse);
                    hot_area_for_movement.alpha = 0;
                    hot_area_for_movement.x = 0;
                    hot_area_for_movement.y = 34;
                }

                public function show_mouse(e) {
                    trace(e.type);
                    trace('show_mouse');
                    Mouse.show();
                }

                public function onActivate(e) {
                    trace('activate');
                    Mouse.show();
                }

                public function onDeactivate(e) {
                    trace('deactivate');
                }

                public function get_speed(percantage_from_middle):int {
                    if (percantage_from_middle > 80) {
                        return speeds[4];
                    } else if (percantage_from_middle > 60) {
                        return speeds[3];
                    } else if (percantage_from_middle > 40) {
                        return speeds[2];
                    } else if (percantage_from_middle > 20) {
                        return speeds[1];
                    } else {
                        return 0;
                    }
                }

                public function mouseLeave(e:Event):void {
                    Mouse.show();
                    clearInterval(image_move_interval);
                }

                public function mouseEnter(e:Event):void {
                    Mouse.hide();
                    image_move_interval = setInterval(moveImage, 1);
                }

                public function mouseMove(e:Event):void {
                    percentage_the_mouse_is_across_the_screen = Math.round(((middle_of_the_flash - stage.mouseX) / middle_of_the_flash) * 100);
                    speed = get_speed(Math.abs(percentage_the_mouse_is_across_the_screen));
                    airplane_icon.x = stage.mouseX;
                    airplane_icon.y = stage.mouseY;
                }

                public function stageMouseMove(e:Event):void {
                    Mouse.show();
                }

                public function moveImage():void {
                    if (percentage_the_mouse_is_across_the_screen > 0) {
                        moving_image.x += speed;
                        airplane_icon.scaleX = -1;
                    } else {
                        moving_image.x -= speed;
                        airplane_icon.scaleX = 1;
                    }
                }
            }
        }

    Nothing too fancy: I'm scrolling an image left or right at a speed derived from how far the mouse is from the middle of the stage, and making an airplane movieclip follow the mouse. These listeners:

        stage.addEventListener(Event.MOUSE_LEAVE, show_mouse);
        stage.addEventListener(MouseEvent.MOUSE_OUT, show_mouse);
        stage.addEventListener(Event.DEACTIVATE, show_mouse);

    all fire and work correctly in the browser, though they seem a little buggy when running a test from Flash, which I expected, having experienced it before. The DEACTIVATE handler even runs when testing and cmd+tabbing, but shows no mouse. Any help on the matter is appreciated. Thanks, Dickie
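    Since DEACTIVATE fires but Mouse.show() has no visible effect while focus is elsewhere, one hedged workaround is to defer the re-hide rather than the re-show: leave the cursor visible after focus returns and only call Mouse.hide() again once a real MOUSE_MOVE arrives inside the hot area, so the cursor state always matches where the pointer actually is. A sketch:

        // On focus loss, restore the OS cursor (as the code above already does).
        stage.addEventListener(Event.DEACTIVATE, function(e:Event):void {
            Mouse.show();
        });
        // On focus gain, do NOT hide yet; some players appear to swallow calls
        // made while unfocused, so repeat the show() defensively.
        stage.addEventListener(Event.ACTIVATE, function(e:Event):void {
            Mouse.show();
        });
        // Only hide once the pointer is demonstrably over the SWF again.
        hot_area_for_movement.addEventListener(MouseEvent.MOUSE_MOVE, function(e:MouseEvent):void {
            Mouse.hide();
        });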


  • XSLT split text and preserve HTML tags

    - by Lycaon
    I have an XML document with a description node:

        <config>
          <desc>A <b>first</b> sentence here. The second sentence with some link <a href="myurl">The link</a>. The <u>third</u> one.</desc>
        </config>

    I am trying to split the sentences using the dot as a separator while keeping the eventual HTML tags in the HTML output. What I have so far is a template that splits the description, but the HTML tags are lost in the output because of the normalize-space and substring-before functions. My current template is given below:

        <xsl:template name="output-tokens">
          <xsl:param name="sourceText" />
          <!-- Force a . at the end -->
          <xsl:variable name="newlist" select="concat(normalize-space($sourceText), ' ')" />
          <!-- Check if we really have a point at the end -->
          <xsl:choose>
            <xsl:when test="contains($newlist, '.')">
              <!-- Find the first . in the string -->
              <xsl:variable name="first" select="substring-before($newlist, '.')" />
              <!-- Get the remaining text -->
              <xsl:variable name="remaining" select="substring-after($newlist, '.')" />
              <!-- Check that our string is not in fact a . or an empty string -->
              <xsl:if test="normalize-space($first)!='.' and normalize-space($first)!=''">
                <p><xsl:value-of select="normalize-space($first)" />.</p>
              </xsl:if>
              <!-- Recursively apply the template to the remaining text -->
              <xsl:if test="$remaining">
                <xsl:call-template name="output-tokens">
                  <xsl:with-param name="sourceText" select="$remaining" />
                </xsl:call-template>
              </xsl:if>
            </xsl:when>
            <!-- If no . was found -->
            <xsl:otherwise>
              <p>
                <!-- Display the text but avoid displaying empty strings -->
                <xsl:if test="normalize-space($sourceText)!=''">
                  <xsl:value-of select="normalize-space($sourceText)" />.
                </xsl:if>
              </p>
            </xsl:otherwise>
          </xsl:choose>
        </xsl:template>

    and I am using it in the following manner:

        <xsl:template match="config">
          <xsl:call-template name="output-tokens">
            <xsl:with-param name="sourceText" select="desc" />
          </xsl:call-template>
        </xsl:template>

    The expected output is:

        <p>A <b>first</b> sentence here.</p>
        <p>The second sentence with some link <a href="myurl">The link</a>.</p>
        <p>The <u>third</u> one.</p>
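    The core issue: select="desc" passes the element, but string functions such as substring-before() immediately reduce it to its text value, discarding child elements. Any fix has to walk desc's child nodes (text nodes and elements) and split only inside text nodes. A sketch of the tag-preserving half of the problem; it does not yet regroup sentences into separate p elements:

        <!-- Identity-style copy of mixed content: child elements survive,
             unlike with xsl:value-of, which flattens them to text -->
        <xsl:template match="desc">
          <p><xsl:copy-of select="node()"/></p>
        </xsl:template>

    From there, sentence splitting means iterating over the node() children, emitting element nodes verbatim and breaking text nodes on '.', starting a new p group whenever a text node contains a dot; in XSLT 1.0 that takes a recursive template over following-sibling::node() rather than over a string.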


  • What is the best free or low-cost Java reporting library (e.g. BIRT, JasperReports, etc.) for making

    - by Max3000
    I want to print, email and write to PDF very simple reports. The reports are basically a list of items, divided into various sections/columns; the sections are not necessarily identical. Think newspaper. I just wasted a solid two days of work trying to make this kind of report using JasperReports. I find that Jasper is great for outputting "normalized" data, the kind that would come out of a database, with each row neatly describing an item and each item printed on a line. I'm simplifying a bit, but that's the idea. However, given what I want to do, I always ended up completely lost: data not displayed for no apparent reason, text columns never the correct size, column positioning always ending up incorrect, pagination not sanely possible (I was never able to figure it out; the FAQ gives an obscure workaround), etc. I came to the conclusion that Jasper is really not built to make the kind of reports I want. Am I missing something? I'm ready to pay for a tool, as long as the price is reasonable; by reasonable I mean a few $100s. Thanks.

    EDIT: To answer cetus, here is more information about the report I made in Jasper. What I want is something like this:

        text text text text
        -------------------
        text | text
        text |----------
        text | text
        text | text
        --------| text
        text |----------
        text | text

    What I made in Jasper is this:

        (detail band)
        subreport | subreport
        ------------------------------------
        subreport | subreport
        ------------------------------------
        subreport | subreport

    The subreports are all the same actual report. This report has one field (called "field") and basically just prints this field in a detail band; hence, running a single subreport simply lists all items from the datasource. The datasource itself is a simple custom JRDatasource containing a collection of strings in the field "field", and it iterates over the collection until there are no more strings. Each subreport has its own datasource. I tried many different variations of the above, with all sorts of different properties for the report, subreports, etc. IMO, this is fairly simple stuff. However, the problems I encounter are as follows:

    Subreports starting from the 3rd don't show up when their position type is 'float'. They do show up when they have 'fix relative to top', but I don't want that, because the first two subreports can be of any length.

    I can't make each subreport stretch according to its own length. Instead, they either don't stretch at all (not desirable, because they have different lengths) or they all stretch according to the longest subreport, which certainly makes for a weird layout.

    Pagination doesn't happen: if some subreports fall outside the page, they simply don't show. One alternative is to increase the page height considerably and the detail band height accordingly, but then it's not really possible to know the total height in advance, so I'm stuck calculating/guessing it myself before the report is even generated. More importantly, long reports end up on one page, and this is not acceptable (the printout text is too small, and it's ugly/unprofessional to have different reports with different PDF page lengths).

    BTW, I used iReport, so it's possibly limitations of iReport I'm listing here and not of Jasper itself; that's one of the things I'm trying to find out by asking this question. One alternative would be to generate the jrxml myself with just static text, but I'm afraid I'd encounter the very same limitations. Anyway, I generally wasted so much time getting anything done with Jasper that I can't help thinking it's not the right tool for the job. (Not to say that Jasper doesn't excel at what it's good at.)


  • Big O notation runtime

    - by mrblippy
    Hi, I have been given some code to work out big-O runtimes for. Could someone tell me if I am on the right track or not?

        // program 1
        int i, count = 0, n = 20000;
        for (i = 0; i < n * n; i++) {
            count++;
        }

    Is that O(n^2)?

        // number 2
        int i, inner_count = 0, n = 2000000000;
        for (i = 0; i < n; i++) {
            inner_count++;
        }

    Is this one O(n)?

        // number 3
        for (i = 0; i < n; i++) {
            for (j = 0; j < n; j++) {
                count++;
            }
        }

    O(n^2)?

        // number 4
        for (i = 0; i < n; i++) {
            for (j = 0; j < i; j++) {
                for (k = 0; k < j; k++) {
                    inner_count++;
                }
            }
        }

    Is that O(n^3)?

        // number 5
        int i, j, inner_count = 0, n = 30000;
        for (i = 0; i < n; i++) {
            for (j = 0; j < i; j++) {
                inner_count++;
            }
        }

    Is that one O(n^3)?

        // number 6
        int i, j, k, l, pseudo_inner_count = 0, n = 25;
        for (i = 0; i < n; i++) {
            for (j = 0; j < i * i; j++) {
                for (k = 0; k < i * j; k++) {
                    pseudo_inner_count++;
                    for (l = 0; l < 10; l++);
                }
            }
        }

    Very confused about this one. O(n^3)?

        // number 7
        int i, j, k, pseudo_inner_count = 0, n = 16;
        for (i = n; i > 1; i /= 2) {
            for (j = 0; j < n; j++) {
                pseudo_inner_count++;
                for (k = 0; k < 50000000; k++);
            }
        }

    O(n)? (I get more lost as they get harder.)

        // number 8
        int i, j, pseudo_inner_count = 0, n = 1073741824;
        for (i = n; i > 1; i /= 2) {
            pseudo_inner_count++;
            for (j = 0; j < 50000000; j++);
        }

    O(n^2)? If anyone could clarify these and help me understand them better I would be very grateful. Cheers
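    For the triangular cases (numbers 4 and 5), counting the iterations directly is the reliable check. In number 5 the inner body runs

        inner_count = sum of i for i = 0 .. n-1 = n(n-1)/2

    which grows like n^2/2, so number 5 is O(n^2), not O(n^3). The same counting applied to number 4's triple loop gives a sum of triangular numbers, roughly n^3/6, so O(n^3) is right there. For numbers 7 and 8, the halving loop runs about log2(n) times and the innermost loops are fixed-size constants, which gives O(n log n) and O(log n) respectively.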


  • Most secure way to access my home Linux server while I am on the road? Specialized solution wanted

    - by Ace Paus
    I think many people may be in my situation: I travel on business with a laptop, and I need secure access to files from the office (which in my case is my home).

    The short version of my question: how can I make SSH/SFTP really secure when only one person needs to connect to the server from one laptop? In this situation, what special steps would make it almost impossible for anyone else to get online access to the server?

    A lot more details: I use Ubuntu Linux on both my laptop (KDE) and my home/office server. Connectivity is not a problem; I can tether to my phone's connection if needed. I need access to a large number of files (around 300 GB). I don't need all of them at once, but I don't know in advance which files I might need. These files contain confidential client information and personal data such as credit card numbers, so they must be secure. Given this, I don't want to store them on Dropbox, Amazon AWS, or similar; I couldn't justify the cost anyway (Dropbox doesn't even publish prices for plans above 100 GB, and security is a concern). However, I am willing to spend some money on a proper solution. A VPN service, for example, might be part of the solution? Or other commercial services? I've heard about PogoPlug, but I don't know whether a similar service might address my security concerns.

    I could copy all my files to my laptop, because it has the space. But then I have to sync between my home computer and my laptop, and I've found in the past that I'm not very good about doing this. And if my laptop were lost or stolen, my data would be on it. The laptop drive is an SSD, and encryption solutions for SSD drives are not good. Therefore, it seems best to keep all my data on my Linux file server (which is safe at home). Is that a reasonable conclusion, or is anything connected to the Internet such a risk that I should just copy the data to the laptop (and maybe replace the SSD with an HDD, which reduces battery life and performance)? I view the risk of losing the laptop as higher; I am not an obvious hacking target online, and my home cable broadband seems very reliable.

    So I want to know the best (reasonable) way to securely access my data from my laptop while on the road. I only need to access it from this one computer, although I may connect over my phone's 3G/4G, WiFi, or some client's broadband, so I won't know my IP address in advance. I am leaning toward a solution based on SSH and SFTP (or similar), which would provide about all the functionality I anticipate needing: SFTP and Dolphin to browse and download files, and SSH with the terminal for everything else. My Linux file server is set up with OpenSSH, and I think I have SSH relatively secured; I'm using DenyHosts too. But I want to go several steps further: I want the chance of anyone else getting into my server to be as close to zero as possible while still allowing me access from the road. I'm not a sysadmin or programmer or real "superuser"; I have to spend most of my time doing other things. I've heard about "port knocking", but I have never used it and don't know how to implement it (although I'm willing to learn). I have already read a number of articles with titles such as:

    Top 20 OpenSSH Server Best Security Practices
    20 Linux Server Hardening Security Tips
    Debian Linux Stop SSH User Hacking / Cracking Attacks with DenyHosts Software
    more...

    I have not implemented every single thing I've read about, and I probably can't. But maybe there is something even better I can do in my situation, because I only need access from a single laptop, as just one user; my server does not need to be accessible to the general public. Given all these facts, I'm hoping to get suggestions here that are within my capability to implement and that leverage these facts to create far better security than the general-purpose suggestions in the articles above.
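    Given one user on one laptop, key-only authentication plus a tightly scoped sshd does most of the work. A sketch of the relevant /etc/ssh/sshd_config directives (the username is a placeholder, and this is not a complete hardening guide):

        # Key-only, single-user SSH: a minimal sketch
        Protocol 2
        PasswordAuthentication no    # only the laptop's private key can get in
        PubkeyAuthentication yes
        PermitRootLogin no
        AllowUsers ace               # replace with your own account name
        MaxAuthTries 3
        LoginGraceTime 20

    Keep the private key on the laptop protected by a passphrase; with passwords disabled, the brute-force attempts DenyHosts watches for stop being a way in at all. Port knocking or a non-standard port can then be layered on, mainly to cut log noise rather than as real security.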


  • PHP Form - Empty input enter this text - Validation

    - by James Skelton
    No doubt this is a very simple question for someone with PHP knowledge. I have a form with a datepicker. All is fine when a user has selected a date; the email is sent with:

        Date: 2012 04 10

    But if the user skipped it and left it blank (I have not made it required), I would like the email to read:

        Date: Not Entered (or something similar)

    Instead, at the minute, it just reads:

        Date:

    Form input:

        <input type="text" class="form-control" id="datepicker" name="datepicker" size="50" value="Date Of Wedding" />

    This is the validator:

        $(document).ready(function(){
            // validate contact form
            $('#submit').click(function(event){
                event.preventDefault();
                var fname = $('#name').val();
                var validInput = new RegExp(/^[a-zA-Z0-9\s]+$/);
                var email = $('#email').val();
                var validEmail = new RegExp(/^([a-zA-Z0-9_\.\-])+\@(([a-zA-Z0-9\-])+\.)+([a-zA-Z0-9]{2,4})+$/);
                var message = $('#message').val();
                if (fname == '') {
                    showError('<div class="alert alert-danger">Please enter your name.</div>', $('#name'));
                    $('#name').addClass('required');
                    return;
                }
                if (!validInput.test(fname)) {
                    showError('<div class="alert alert-danger">Please enter a valid name.</div>', $('#name'));
                    $('#name').addClass('required');
                    return;
                }
                if (email == '') {
                    showError('<div class="alert alert-danger">Please enter an email address.</div>', $('#email'));
                    $('#email').addClass('required');
                    return;
                }
                if (!validEmail.test(email)) {
                    showError('<div class="alert alert-danger">Please enter a valid email.</div>', $('#email'));
                    $('#email').addClass('required');
                    return;
                }
                if (message == '') {
                    showError('<div class="alert alert-danger">Please enter a message.</div>', $('#message'));
                    $('#message').addClass('required');
                    return;
                }
                // setup some local variables
                var request;
                var form = $(this).closest('form');
                // serialize the data in the form
                var serializedData = form.serialize();
                // fire off the request to /contact.php
                request = $.ajax({
                    url: "contact.php",
                    type: "post",
                    data: serializedData
                });
                // callback handler that will be called on success
                request.done(function (response, textStatus, jqXHR){
                    $('.contactWrap').show('slow').fadeIn("slow").html(' <div class="alert alert-success centered"><h3>Thank you! Your message has been sent.</h3></div> ');
                });
                // callback handler that will be called on failure
                request.fail(function (jqXHR, textStatus, errorThrown){
                    // log the error to the console
                    console.error("The following error occured: " + textStatus, errorThrown);
                });
            });
            // remove 'required' class and hide error
            $('input, textarea').keyup(function(event){
                if ($(this).hasClass('required')) {
                    $(this).removeClass('required');
                    $('.error').hide("slow").fadeOut("slow");
                }
            });
            // show error
            showError = function (error, target){
                $('.error').removeClass('hidden').show("slow").fadeIn("slow").html(error);
                $('.error').data('target', target);
                $(target).focus();
                console.log(target);
                console.log(error);
                return;
            }
        });
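    On the PHP side (contact.php receives the serialized form), the substitution is a couple of lines. A sketch; note the form ships the placeholder "Date Of Wedding" as the field's value, so an untouched field should be treated the same as an empty one:

        <?php
        // "datepicker" is the field name from the form above
        $raw  = isset($_POST['datepicker']) ? trim($_POST['datepicker']) : '';
        $date = ($raw === '' || $raw === 'Date Of Wedding') ? 'Not Entered' : $raw;
        $body = "Date: " . $date . "\n";  // then append the rest of the message
        ?>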


  • Please clarify how create/update happens against child entities of an aggregate root

    - by christian
    After much reading and thinking as I begin to get my head wrapped around DDD, I am a bit confused about the best practices for dealing with complex hierarchies under an aggregate root. I think this is a FAQ, but after reading countless examples and discussions, no one is quite talking about the issue I'm seeing. If I am aligned with the DDD thinking, entities below the aggregate root should be immutable. This is the crux of my trouble, so if that isn't correct, that is why I'm lost.

    Here is a fabricated example; hopefully it holds enough water to discuss. Consider an automobile insurance policy (I'm not in insurance, but this matches the language I hear when on the phone with my insurance company). Policy is clearly an entity. Within the policy, let's say we have Auto. Auto, for the sake of this example, only exists within a policy (maybe you could transfer an Auto to another policy, so there is potential for it to be an aggregate as well, which would change Policy, but assume it's simpler than that for now). Since an Auto cannot exist without a Policy, I think it should be an entity but not a root; so Policy in this case is an aggregate root.

    Now, to create a Policy, let's assume it has to have at least one Auto. This is where I get frustrated. Assume Auto is fairly complex, including many fields and maybe a child for where it is garaged (a Location). If I understand correctly, a "create Policy" constructor/factory would have to take an Auto as input, or be restricted via a builder so that it cannot be created without one. And the Auto's creation, since it is an entity, can't be done beforehand (because it is immutable? maybe this is just an incorrect interpretation). So you don't get to say new Auto and then setX, setY, add(Z). If Auto is more than somewhat trivial, you end up having to build a huge hierarchy of builders and such to try to manage creating an Auto within the context of the Policy.

    One more twist: later, after the Policy is created, one wishes to add another Auto, or update an existing one. Clearly, the Policy controls this... fine... but Policy.addAuto() won't quite fly, because one can't just pass in a new Auto (right!?). Examples say things like Policy.addAuto(VIN, make, model, etc.) but are all so simple that that looks reasonable; if this factory-method approach falls apart with too many parameters (the entire Auto interface, conceivably), I need a solution. From that point in my thinking, I'm realizing that having a transient reference to an entity is OK. So maybe it is fine to create an entity outside of its parent aggregate in a transient environment, saying something like:

        auto = AutoFactory.createAuto();
        auto.setX;
        auto.setY;

    or, if sticking to immutability:

        AutoBuilder.new().setX().setY().build()

    and then have it get sorted out when you say Policy.addAuto(auto).

    This insurance example gets more interesting if you add Events, such as an Accident with its PolicyReports or RepairEstimates: some value objects, but mostly entities that are all really meaningless outside the policy, at least for my simple example. The lifecycle of a Policy, with its hierarchy growing over time, seems the fundamental picture I must draw before really starting to dig in, and it is the factory concept, or how the child entities get built and attached to an aggregate root, that I haven't seen a solid example of. I think I'm close. I hope this is clear and not just a repeat FAQ that has answers all over the place.
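    For the addAuto() concern, one common middle ground is to pass a small immutable value object describing the auto and let the aggregate root construct the entity internally, so an Auto never exists detached from its Policy. A sketch in C# (all type names hypothetical, not from any particular DDD framework):

        using System.Collections.Generic;

        // Immutable value object carrying the auto's data across the boundary
        public sealed class AutoSpec
        {
            public string Vin { get; }
            public string Make { get; }
            public string Model { get; }
            public AutoSpec(string vin, string make, string model)
            {
                Vin = vin; Make = make; Model = model;
            }
        }

        public class Auto
        {
            public string Vin { get; }
            internal Auto(AutoSpec spec) { Vin = spec.Vin; } // only the aggregate constructs it
        }

        public class Policy
        {
            private readonly List<Auto> autos = new List<Auto>();
            public void AddAuto(AutoSpec spec)
            {
                // the root enforces its invariants and is the sole creator of Autos
                autos.Add(new Auto(spec));
            }
        }

    The wide parameter list lives on the value object, which is safe to build up outside the aggregate precisely because it is a value, not an entity.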


  • Programmatically add UserControl with events

    - by schaermu
    Hi everybody. I need to add multiple user controls to a panel for further editing of the contained data. My user control contains some panels, dropdown lists and input elements, which are populated in the user control's Page_Load event:

        protected void Page_Load(object sender, EventArgs e)
        {
            // populate comparer ddl from enum
            string[] enumNames = Enum.GetNames(typeof(SearchComparision));
            var al = new ArrayList();
            for (int i = 0; i < enumNames.Length; i++)
                al.Add(new { Value = i, Name = enumNames[i] });
            scOperatorSelection.DataValueField = "Value";
            scOperatorSelection.DataTextField = "Name";
            ...

    The data to be displayed is added to the user control as a field, defined above Page_Load. The signature of the events is the following:

        public delegate void ControlStateChanged(object sender, SearchCriteriaEventArgs eventArgs);

        public event ControlStateChanged ItemUpdated;
        public event ControlStateChanged ItemRemoved;
        public event ControlStateChanged ItemAdded;

    The update button on the user control triggers the following method:

        protected void UpdateCriteria(object sender, EventArgs e)
        {
            var searchCritCtl = (SearchCriteria)sender;
            var scEArgs = new SearchCriteriaEventArgs
            {
                TargetCriteria = searchCritCtl.CurrentCriteria.CriteriaId,
                SearchComparision = ParseCurrentComparer(searchCritCtl.scOperatorSelection.SelectedValue),
                SearchField = searchCritCtl.scFieldSelection.SelectedValue,
                SearchValue = searchCritCtl.scFilterValue.Text,
                ClickTarget = SearchCriteriaClickTarget.Update
            };
            if (ItemUpdated != null)
                ItemUpdated(this, scEArgs);
        }

    The rendering page fetches the data objects from a storage backend and displays them in its Page_Load event. This is the point where it starts getting tricky: I connect to the custom events!

        int idIt = 0;
        foreach (var item in _currentSearch.Items)
        {
            SearchCriteria sc = (SearchCriteria)LoadControl("~/content/controls/SearchCriteria.ascx");
            sc.ID = "scDispCtl_" + idIt;
            sc.ControlMode = SearchCriteriaMode.Display;
            sc.CurrentCriteria = item;
            sc.ItemUpdated += CriteriaUpdated;
            sc.ItemRemoved += CriteriaRemoved;
            pnlDisplayCrit.Controls.Add(sc);
            idIt++;
        }

    When first rendering the page, everything is displayed fine. When I trigger an update event, the user control's event fires correctly, but all fields and controls of the user control are null. After a bit of research, I had to conclude that the event fires before the controls are initialized. Is there any way to prevent this behaviour, or to override the page lifecycle somehow? I cannot initialize the user controls in the page's Init event, because I have to access the session store (not initialized in Page_Init). Any advice is welcome.

    EDIT: Since we hold all criteria information in the storage backend (including the count of criteria), and that store uses the user id from the session, we cannot use Page_Init; just for clarification.

    EDIT #2: I managed to get past some of the problems. Since I'm now using simple types, I'm able to bind all the data declaratively (using a repeater with a simple ItemTemplate). It is bound to the control, and the controls are rendered correctly. On postback, all the data is rebound to the user control; the data is available in the OnDataBinding and OnLoad events, and everything looks fine. But as soon as it enters the real event (bound to the button control of the user control), all field values are lost somehow. Does anybody know how the page lifecycle continues to process the request after databinding/loading? I'm going crazy over this issue.
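    A pattern that often resolves this: dynamically loaded controls must be re-created on every request, early enough in the lifecycle (before the child controls' events fire) and with the same IDs, or their postback events run against fresh, never-populated instances. A sketch using the member names from the question:

        protected void Page_Load(object sender, EventArgs e)
        {
            // deliberately NOT wrapped in if (!IsPostBack): on postback the controls
            // must exist again, with identical IDs, before ASP.NET can route the
            // button click to them and restore their ViewState and field values
            int idIt = 0;
            foreach (var item in _currentSearch.Items)
            {
                var sc = (SearchCriteria)LoadControl("~/content/controls/SearchCriteria.ascx");
                sc.ID = "scDispCtl_" + idIt++;   // a stable ID across requests is essential
                sc.ItemUpdated += CriteriaUpdated;
                sc.ItemRemoved += CriteriaRemoved;
                pnlDisplayCrit.Controls.Add(sc);
            }
        }

    The page's Page_Load runs before the child controls' click events, so session data is available by then; the crucial part is that the re-creation happens unconditionally on each request, not only on the first render.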


  • Doubly linked lists

    - by user1642677
    I have an assignment that I am terribly lost on involving doubly linked lists (note: we are supposed to create them from scratch, not use the built-in APIs). The program is supposed to keep track of credit cards, basically, and my professor wants us to use doubly linked lists to accomplish this. The problem is that the book does not go into detail on the subject (it doesn't even show pseudocode involving doubly linked lists); it merely describes what a doubly linked list is and then talks, with pictures and no code, in a small paragraph. But anyway, I'm done complaining. I understand perfectly well how to create a node class and how it works. The problem is: how do I use the nodes to create the list? Here is what I have so far:

        public class CardInfo {
            private String name;
            private String cardVendor;
            private String dateOpened;
            private double lastBalance;
            private int accountStatus;

            private final int MAX_NAME_LENGTH = 25;
            private final int MAX_VENDOR_LENGTH = 15;

            CardInfo() {
            }

            CardInfo(String n, String v, String d, double b, int s) {
                setName(n);
                setCardVendor(v);
                setDateOpened(d);
                setLastBalance(b);
                setAccountStatus(s);
            }

            public String getName() { return name; }
            public String getCardVendor() { return cardVendor; }
            public String getDateOpened() { return dateOpened; }
            public double getLastBalance() { return lastBalance; }
            public int getAccountStatus() { return accountStatus; }

            public void setName(String n) {
                if (n.length() > MAX_NAME_LENGTH)
                    throw new IllegalArgumentException("Too Many Characters");
                else
                    name = n;
            }

            public void setCardVendor(String v) {
                if (v.length() > MAX_VENDOR_LENGTH)
                    throw new IllegalArgumentException("Too Many Characters");
                else
                    cardVendor = v;
            }

            public void setDateOpened(String d) { dateOpened = d; }
            public void setLastBalance(double b) { lastBalance = b; }
            public void setAccountStatus(int s) { accountStatus = s; }

            public String toString() {
                return String.format("%-25s %-15s $%-s %-s %-s",
                        name, cardVendor, lastBalance, dateOpened, accountStatus);
            }
        }

        public class CardInfoNode {
            CardInfo thisCard;
            CardInfoNode next;
            CardInfoNode prev;

            CardInfoNode() {
            }

            public void setCardInfo(CardInfo info) {
                thisCard.setName(info.getName());
                thisCard.setCardVendor(info.getCardVendor());
                thisCard.setLastBalance(info.getLastBalance());
                thisCard.setDateOpened(info.getDateOpened());
                thisCard.setAccountStatus(info.getAccountStatus());
            }

            public CardInfo getInfo() { return thisCard; }

            public void setNext(CardInfoNode node) { next = node; }
            public void setPrev(CardInfoNode node) { prev = node; }

            public CardInfoNode getNext() { return next; }
            public CardInfoNode getPrev() { return prev; }
        }

        public class CardList {
            CardInfoNode head;
            CardInfoNode current;
            CardInfoNode tail;

            CardList() {
                head = current = tail = null;
            }

            public void insertCardInfo(CardInfo info) {
                if (head == null) {
                    head = new CardInfoNode();
                    head.setCardInfo(info);
                    head.setNext(tail);
                    tail.setPrev(node); // here lies the problem: tail must be set to something
                                        // to make it doubly linked, but tail is null since it's
                                        // an end point of the list
                }
            }
        }

    Here is the assignment itself, if it helps clarify what is required and, more importantly, the parts I'm not understanding. Thanks. https://docs.google.com/open?id=0B3vVwsO0eQRaQlRSZG95eXlPcVE
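    A sketch of an insert that handles both the empty and non-empty cases. It also sidesteps the NullPointerException waiting in setCardInfo(): thisCard is never constructed, so copying into it dereferences null; storing the reference directly avoids that:

        public void insertCardInfo(CardInfo info) {
            CardInfoNode node = new CardInfoNode();
            node.thisCard = info;        // store the reference; no pre-existing object needed
            if (head == null) {          // empty list: the node is both head and tail
                head = tail = node;
            } else {                     // append after the current tail and advance it
                tail.setNext(node);
                node.setPrev(tail);
                tail = node;
            }
        }

    In an empty list there is nothing for prev or next to point at, so head and tail simply both reference the single node, with its prev and next left null.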


  • parsing xml with php, children

    - by moustafa
    Hello. I successfully created my parser, and everything works great except one thing: my XML is formatted a little differently, and I am totally lost on how to assign variables to the children of <photos>. XML portion:

        <item>
          <url />
          <name />
          <photos>
            <photo>1020944_0.jpg</photo>
            <photo>1020944_1.jpg</photo>
            <photo>1020944_2.jpg</photo>
          </photos>
          <user_id />
        </item>

    PHP code:

        <?
        global $insideitem, $tag, $name, $photos, $user_id;
        global $count, $db;

        $db = mysql_connect("localhost", "user", "pass");
        mysql_select_db("db_name", $db);
        $result = mysql_query("SELECT user_id FROM table", $db);
        while ($myrow = mysql_fetch_array($result)) {
            $uid = $myrow['user_id'];
            $UN_ID[$uid] = $uid;
        }
        $count = 1;
        $count2 = 1;

        // ************* START ELEMENT FUNCTION *********************
        function startElement($parser, $name, $attrs) {
            global $insideitem, $tag, $name, $photos, $user_id;
            if ($insideitem) {
                $tag = $name;
            } elseif ($name == "ITEM") {
                $insideitem = true;
            }
        }

        function endElement($parser, $name) {
            global $insideitem, $tag, $name, $photos, $user_id;
            global $count, $count2, $db, $UN_ID;
            if ($name == "ITEM") {
                if (!$UN_ID[$unique_id]) {
                    $name    = addslashes($name);
                    $photo1  = addslashes($photo);
                    $photo2  = addslashes($photo);
                    $photo3  = addslashes($photo);
                    $photo4  = addslashes($photo);
                    $user_id = addslashes($category);
                    $sql = "INSERT INTO table (name, photo1, photo2, photo3, photo4, user_id)
                            VALUES ('$name', '$photo', '$photo', '$photo', '$photo', '$user_id')";
                    $resultupdate = mysql_query($sql);
                }
                $name = '';
                $photos = '';
                $user_id = '';
            }
        }

        function characterData($parser, $data) {
            global $insideitem, $tag, $name, $photos, $user_id;
            if ($insideitem) {
                switch ($tag) {
                    case "NAME":
                        $name .= $data;
                        break;
                    case "PHOTOS":
                        $photos .= $data;
                        break;
                    case "USER_ID":
                        $user_id .= $data;
                        break;
                }
            }
        }

        $xml_parser = xml_parser_create();
        xml_set_element_handler($xml_parser, "startElement", "endElement");
        xml_set_character_data_handler($xml_parser, "characterData");
        $fp = fopen("../myfile.xml", "r") or die("Error reading RSS data.");
        while ($data = fread($fp, 4096))
            // Parse each 4KB chunk with the XML parser created above
            xml_parse($xml_parser, $data, feof($fp))
                // Handle errors in parsing
                or die(sprintf("XML error: %s at line %d",
                    xml_error_string(xml_get_error_code($xml_parser)),
                    xml_get_current_line_number($xml_parser)));
        fclose($fp);

        // *********************** FREE MEMORY **********************
        xml_parser_free($xml_parser);
        ?>

    The number of <photo> tags can range between 1 and 4. I have searched everywhere for information on how to do this and tried everything, but I just can't get it. After several days of headaches I really hope someone can enlighten me.
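    A sketch of the usual fix with this expat-style parser: track the photos in an array instead of one concatenated string. Start a new array slot when a <PHOTO> element opens, append character data to the current slot, and read $photos[0] through $photos[3] when the </ITEM> closes (the handler names match the question's code):

        function startElement($parser, $name, $attrs) {
            global $insideitem, $tag, $photos;
            if ($name == "ITEM") {
                $insideitem = true;
                $photos = array();              // reset per item
            } elseif ($insideitem) {
                $tag = $name;
                if ($name == "PHOTO") {
                    $photos[] = "";             // open a new slot for this photo
                }
            }
        }

        function characterData($parser, $data) {
            global $insideitem, $tag, $photos;
            if ($insideitem && $tag == "PHOTO") {
                $photos[count($photos) - 1] .= $data;  // data may arrive in chunks, so append
            }
        }

        // in endElement, when $name == "ITEM":
        // $photo1 = isset($photos[0]) ? $photos[0] : '';  ...and so on up to $photos[3]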


  • A view interface for large object/array dumps

    - by user685107
    I want to embed in a page a detailed structure report of my model objects, like print_r() or var_export() produce (right now I'm running var_export() on get_object_vars()). But what I actually want to see is, in most cases, only some of the properties; at the moment I have to Ctrl+F and seek the variable I want instead of just staring at it right after the page finishes loading. So I'm embedding buttons to show/hide large arrays etc., but I thought: what if the thing I'm building right now already exists? So, does it?

    Update: What would my ideal interface look like? First of all, dumped models fit on the first screen; all the properties can be seen at first glance (there are not many of them, around 10 per model, three models total, so it is possible). Small arrays can be shown unrolled too, with the size threshold for "small" definable. Ideally, the user can see the values of the properties without a single click, scroll or keystroke. There must be some improvements to representing the values: say, if an array is empty, show array 'My_big_array' is empty; and if a boolean variable starting with is_, has_ or had_ has false as its value, show the variable (take is_available for example) as is_NOT_available in red, and if it has true as the value, show is_available in green, without any value shown. The same goes for defined constants. That would be ideal; I want to focus on this kind of switches. Krumo seems useful, but since it always closes up the variable regardless of how large it is, I cannot use it as-is; maybe something similar will appear on GitHub soon :)

    Second update starts here: Any programmer who sees is_available = false will know what it means; no need to show more. On the color indication, I forgot one thing: these 'switches', let's call them so, may be important or not. Right now some of them show in green or red for something global, like caching: it is shown as Caching is… ON, with 'ON' in green (and 'OFF' in red when disabled), while the words describing what it is, i.e. 'Caching is… ', are in black. Less important ones, for example an undefined REVEAL_TIES, show as REVEAL_TIES is… not set, with 'not set' in gray and the descriptive words in black; if it were set, the whole phrase would be in black, since nothing important is going on: if this small utility for showing some undercover things is working, I will see some messages after it, and if it isn't, the site will work independently of its state. Dividing switches into important ones and not, with a corresponding color scheme, should improve readability, especially for users who are not programmers and just enabled debug mode because some guy from Bugzilla said to do that; for them it would help to understand what is important and what is not.
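    The is_/has_/had_ rendering rule described above is mechanical enough to sketch in a few lines of PHP (the function name is hypothetical):

        <?php
        // Render a boolean "switch" per the rule above: green name when true,
        // red name with NOT spliced in after the prefix when false.
        function render_flag($name, $value) {
            if (preg_match('/^(is|has|had)_/', $name)) {
                return $value
                    ? "<span style=\"color:green\">$name</span>"
                    : "<span style=\"color:red\">" . preg_replace('/_/', '_NOT_', $name, 1) . "</span>";
            }
            return htmlspecialchars($name . ' = ' . var_export($value, true));
        }
        echo render_flag('is_available', false);  // renders is_NOT_available in red
        ?>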


  • CodePlex Daily Summary for Saturday, March 06, 2010

    CodePlex Daily Summary for Saturday, March 06, 2010

    New Projects
    Agr.CQRS: Agr.CQRS is a C# framework for DDD applications that use the Command Query Responsibility Segregation pattern (CQRS) and Event Sourcing.
    BigDays 2010: Big>Days 2010
    BizTalk - Controlled Admin: Hi .NET folks, I am planning to start project on a Controlled BizTalk Admin tool. This tool will be useful for the organizations which have "Sh...
    Blacklist of Providers: Blacklist of Providers - the application for department of warehouse logistics (warehouse) at firms.
    Career Vector: A job board software.
    Chargify Demo: This is a sample website for Chargify
    Conceptual: Concept description and animation
    Eric Hexter: My publicly available source code and examples
    FluentNHibernate.Search: A Fluent NHibernate.Search mapping interface for NHibernate provider implementation of Lucene.NET.
    FreelancePlanner: FreelancePlanner is a project tracking tool for freelance translators.
    HTMLx - JavaScript on the Server for .NET: HTMLx is a set of libraries based on ASP.NET engine to provide JavaScript programmability on the server side. It allows Web developers to use JavaS...
    IronMSBuild: IronMSBuild is a custom MSBuild Task, which allows you to execute IronRuby scripts. // have to provide some examples
    LINQ To Blippr: LINQ to Blippr is an open source LINQ Provider for the micro-reviewing service Blippr. LINQ to Blippr makes it easier and more efficent for develo...
    Luk@sh's HTML Parser: library that simplifies parsing of the HTML documents, for .NET
    Meta Choons: Unsure as yet but will be a kind of discogs type site but different..
    NetWork2: NetWork2
    Regular Expression Chooser: Simple gui for choosing the regular expressions that have become more than simple.
    See.Sharper: Hopefully useful C# extensions.
    SharePoint 2010 Toggle User Interface: Toggle the SharePoint 2010 user interface between the new SharePoint 2010 user interface and SharePoint 2007 user interface.
    Silverlight DiscussionBoard for SharePoint: This is a sharepoint 3.0 webpart that uses a silverlight treeview to display metadata about sharepoint discussions and uses the html bridge to show...
    Simple Sales Tracking CRM API Wrapper: The Simple Sales Tracking API Wrapper, enables easy extention development and integration with the hosted service at http://www.simplesalestracking...
    Syntax4Word: A syntax addin for word 2007.
    TortoiseHg installer builder: TortoiseHg and Mercurial installer builder for Windows
    unbinder: Model un binding for route value dictionaries
    Windows Workflow Foundation on Codeplex: This site has previews of Workflow features which are released out of band for the purposes of adoption and feedback.
    XNA RSM Render State Manager: Render state management idea for XNA games. Enables isolation between draw calls whilst reducing DX9 SetRenderState calls to the minimum.

    New Releases
    Agr.CQRS: Sourcecode package: Agr.CQRS is a C# framework for DDD applications that use the Command Query Responsibility Segregation pattern (CQRS) and Event Sourcing. This dow...
    Book Cataloger: Preview 0.1.6a: New Features: Export to Word 2007 Bibliography format; Dictionary list editors for Binding, Condition. Improvements: Stability improved; Content ...
    Braintree Client Library: Braintree-1.1.2: Includes minor enhancements to CreditCard and ValidationErrors to support upcoming example application.
    CassiniDev - Cassini 3.5 Developers Edition: CassiniDev v3.5.0.5: For usage see Readme.htm in download. New in CassiniDev v3.5.0.5: Reintroduced the Lib project and signed all; Implemented the CassiniSqlFixture -...
    Composure: Calcium-64420-VS2010rc1.NET4.SL3: This is a simple conversion of Calcium (rev 64420) built in VS2010 RC1 against .NET4 and Silverlight 3. No source files were changed and ALL test...
    Composure: MS AJAX Library (46266) for VS2010 RC1 .NET4: This is a quick port of Microsoft's AJAX Library (rev 46266) for Visual Studio 2010 RC1 built against .NET 4.0. Since this conversion was thrown t...
    Composure: MS Web Test Lightweight for VS2010 RC1 .NET4: A simple conversion of Microsoft's Web Test Lightweight for Visual Studio 2010 RC1 .NET 4.0. This is part of a larger "special request" conversion...
    CoNatural Components: CoNatural Components 1.5: Supporting new data types: Added support for binary data types -> binary, varbinary, etc maps to byte[]. Now supporting SQL Server 2008 new types ...
    Extensia: Extensia 2010-03-05: Extensia is a very large list of extension methods and a few helper types. Some extension methods are not practical (e.g. slow) whilst others are....
    Fluent Assertions: Fluent Assertions release 1.1: In this release, we've worked hard to add some important missing features that we really needed, and also improve resiliance against illegal argume...
    Fluent Ribbon Control Suite: Fluent Ribbon Control Suite 1.0 RC: Fluent Ribbon Control Suite 1.0 (Release Candidate). Includes: Fluent.dll (with .pdb and .xml, debug and release version), Showcase Application Sa...
    FluentNHibernate.Search: 0.1 Beta: First beta version
    FolderSize: FolderSize.Win32.1.0.7.0: A simple utility intended to be used to scan harddrives for the folders that take most place and display this to the user...
    Free Silverlight & WPF Chart Control - Visifire: Silverlight and WPF Step Line Chart: Hi, With this release Visifire introduces Step Line Chart. This release also contains fix for the following issues: In WPF, if AnimatedUpd...
    Html to OpenXml: HtmlToOpenXml 1.0: The dll library to include in your project. The dll is signed for GAC support. Compiled with .Net 3.5, Dependencies on System.Drawing.dll and Docu...
    Line Counter: 1.5.1: The Line Counter is a tool to calculate lines of your code files. The tool was written in .NET 2.0. Line Counter 1.5.1: Added outline icons and lin...
    Lokad Cloud - .NET O/C mapper (object to cloud) for Windows Azure: Lokad.Cloud v1.0.662.1: You can get the most recent release directly from the build server at http://build.lokad.com/distrib/Lokad.Cloud/
    Lost in Translation: LostInTranslation v0.2: Alpha release: function complete but not UX complete.
    MDownloader: MDownloader-0.15.7.56349: Supported large file resumption. Fixed minor bugs.
    Mini C# Lab: Mini CSharp Lab Ver 1.4: The primary new feature of Ver 1.4 is batch mode! Now you can run Mini C# Lab program as a scheduled task, no UI interactivity is needed. Here ar...
    Mobile Store: First drop: First drop
    patterns & practices SharePoint Guidance: SPG2010 Drop6: SharePoint Guidance Drop Notes. Microsoft patterns and practices...
    Picasa Downloader: PicasaDownloader (41446): Changelog: Replaced some exception messages by a Summary dialog shown after downloading if there have been problems. Corrected the Portable vers...
    Pod Thrower: Version 1: This is the first release, I'm sure there are bugs, the tool is fully functional and I'm using it currently.
    PowerShell Provider BizTalk: BizTalkFactory PowerShell Provider - 1.1-snapshot: This release constitutes the latest development snapshot for the Provider. Please, leave feedback and use the Issue Tracker to help improve this pr...
    Resharper Settings Manager: RSM 1.2.1: This is a bug fix release. Changes: Fixed plug-in crash when shared settings file was modified externally.
    Reusable Library Demo: Reusable Library Demo v1.0.2: A demonstration of reusable abstractions for enterprise application developer
    SharePoint 2010 Toggle User Interface: SharePoint Toggle User Interface: Release 1.0.0.0
    Starter Kit Mytrip.Mvc.Entity: Mytrip.Mvc.Entity (net3.5 MySQL) 1.0 Beta: MySQL VS 2008 EF Membership UserManager FileManager Localization Captcha ClientValidation Theme CrossBrowser
    TortoiseHg: TortoiseHg 1.0: http://bitbucket.org/tortoisehg/stable/wiki/ReleaseNotes Please backup your user Mercurial.ini file and then uninstall any 0.9.X release before in...
    Visual Studio 2010 and Team Foundation Server 2010 VM Factory: Rangers Virtualization Guidance: Focused guidance on creating a Rangers base image manually and introduction of PowerShell scripts to automate many ...
    Visual Studio DSite: Advanced Email Program (Visual Basic 2008): This email program can send email to any one using your email username and email credentials. The email program can also attatch attactments to you...
    WPF ShaderEffect Generator: WPF ShaderEffect Generator 1.6: Several improvements and bug fixes have gone into the comment parsing code for the registers. The plug-in should now correctly pay attention to th...
    WSDLGenerator: WSDLGenerator 0.0.0.3: Fixed SharePoint generated *.wsdl.aspx file; Added commandline option -wsdl which does only generate the wsdl file.

    Most Popular Projects
    MetaSharp
    Rawr
    WBFS Manager
    AJAX Control Toolkit
    Microsoft SQL Server Product Samples: Database
    Silverlight Toolkit
    Windows Presentation Foundation (WPF)
    ASP.NET
    LiveUpload to Facebook
    Microsoft SQL Server Community & Samples

    Most Active Projects
    Umbraco CMS
    Rawr
    SDS: Scientific DataSet library and tools
    BlogEngine.NET
    jQuery Library for SharePoint Web Services
    patterns & practices – Enterprise Library
    Ionics Isapi Rewrite Filter
    Fluent Assertions
    Composure
    DiffPlex - a .NET Diff Generator

    Read the article

  • NoSQL with MongoDB, NoRM and ASP.NET MVC

    - by shiju
    In this post, I will give an introduction to working with NoSQL and document databases using MongoDB, NoRM and ASP.NET MVC 2.

    NoSQL and Document Databases
    The NoSQL movement is getting big attention this year, and people are widely talking about document databases and NoSQL in the context of web application scalability. According to Wikipedia, "NoSQL is a movement promoting a loosely defined class of non-relational data stores that break with a long history of relational databases. These data stores may not require fixed table schemas, usually avoid join operations and typically scale horizontally. Academics and papers typically refer to these databases as structured storage". Document databases are schema-free, so you can focus on the problem domain and don't have to worry about updating the schema as your domain evolves. This truly enables domain-driven development. One key pain point of relational databases is keeping the database schema synchronized with your domain entities while the domain is evolving. There are lots of NoSQL implementations available, and both CouchDB and MongoDB got my attention. While evaluating the two, I found that CouchDB can't perform dynamic queries, so I picked MongoDB over CouchDB. There are many .NET drivers available for the MongoDB document database.

    MongoDB
    MongoDB is an open source, scalable, high-performance, schema-free, document-oriented database written in the C++ programming language. It has been developed since October 2007 by 10gen. MongoDB stores your data in binary JSON (BSON) format. MongoDB has been getting a lot of attention, and you can see a list of production deployments at http://www.mongodb.org/display/DOCS/Production+Deployments

    NoRM - C# driver for MongoDB
    NoRM is a C# driver for MongoDB with LINQ support. The NoRM project is available on GitHub at http://github.com/atheken/NoRM.

    Demo with ASP.NET MVC
    I will show a simple demo with MongoDB, NoRM and ASP.NET MVC. To work with MongoDB and NoRM, do the following steps: Download the MongoDB database. For Windows 32 bit, download http://downloads.mongodb.org/win32/mongodb-win32-i386-1.4.1.zip, and for Windows 64 bit, download http://downloads.mongodb.org/win32/mongodb-win32-x86_64-1.4.1.zip. The zip contains mongod.exe to run the server and mongo.exe for the client. Download the NoRM driver for MongoDB at http://github.com/atheken/NoRM. Create a directory called C:\data\db; this is the default location of the MongoDB database (you can override this behavior). Run C:\Mongo\bin\mongod.exe; this will start the MongoDB server. Now I am going to demonstrate how to program with MongoDB and NoRM in an ASP.NET MVC application. Let's write a domain class:

    public class Category
    {
        [MongoIdentifier]
        public ObjectId Id { get; set; }

        [Required(ErrorMessage = "Name Required")]
        [StringLength(25, ErrorMessage = "Must be less than 25 characters")]
        public string Name { get; set; }

        public string Description { get; set; }
    }

    ObjectId is a NoRM type that represents a MongoDB ObjectId. NoRM will automatically update the Id because it is decorated with the MongoIdentifier attribute. The next step is to create a MongoSession class; this will handle all interaction with MongoDB.
    internal class MongoSession<TEntity> : IDisposable
    {
        private readonly MongoQueryProvider provider;

        public MongoSession()
        {
            this.provider = new MongoQueryProvider("Expense");
        }

        public IQueryable<TEntity> Queryable
        {
            get { return new MongoQuery<TEntity>(this.provider); }
        }

        public MongoQueryProvider Provider
        {
            get { return this.provider; }
        }

        public void Add<T>(T item) where T : class, new()
        {
            this.provider.DB.GetCollection<T>().Insert(item);
        }

        public void Dispose()
        {
            this.provider.Server.Dispose();
        }

        public void Delete<T>(T item) where T : class, new()
        {
            this.provider.DB.GetCollection<T>().Delete(item);
        }

        public void Drop<T>()
        {
            this.provider.DB.DropCollection(typeof(T).Name);
        }

        public void Save<T>(T item) where T : class, new()
        {
            this.provider.DB.GetCollection<T>().Save(item);
        }
    }

    The MongoSession constructor creates an instance of MongoQueryProvider, which supports LINQ expressions, and creates a database named "Expense". If the database already exists, it will be used; otherwise a new database named "Expense" is created. The Save method can be used for both insert and update operations: if the object is new, it creates a new record; otherwise it updates the document with the given ObjectId. Let's create ASP.NET MVC controller actions for CRUD operations on the domain class Category:

    public class CategoryController : Controller
    {
        // Index - get the category list
        public ActionResult Index()
        {
            using (var session = new MongoSession<Category>())
            {
                var categories = session.Queryable.AsEnumerable<Category>();
                return View(categories);
            }
        }

        // Edit a single category
        [HttpGet]
        public ActionResult Edit(ObjectId id)
        {
            using (var session = new MongoSession<Category>())
            {
                var category = session.Queryable
                    .Where(c => c.Id == id)
                    .FirstOrDefault();
                return View("Save", category);
            }
        }

        // GET: /Category/Create
        [HttpGet]
        public ActionResult Create()
        {
            var category = new Category();
            return View("Save", category);
        }

        // Insert or update a category
        [HttpPost]
        public ActionResult Save(Category category)
        {
            if (!ModelState.IsValid)
            {
                return View("Save", category);
            }
            using (var session = new MongoSession<Category>())
            {
                session.Save(category);
                return RedirectToAction("Index");
            }
        }

        // Delete a category
        [HttpPost]
        public ActionResult Delete(ObjectId Id)
        {
            using (var session = new MongoSession<Category>())
            {
                var category = session.Queryable
                    .Where(c => c.Id == Id)
                    .FirstOrDefault();
                session.Delete(category);
                var categories = session.Queryable.AsEnumerable<Category>();
                return PartialView("CategoryList", categories);
            }
        }
    }

    You can easily work with MongoDB through NoRM and use it in ASP.NET MVC applications. I have created a repository on CodePlex at http://mongomvc.codeplex.com and you can download the source code of the ASP.NET MVC application from here
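    Outside of MVC, the same session class can be exercised directly. The following is a minimal sketch of mine (not from the original post); it assumes the Category and MongoSession types above and a local mongod listening on the default port:

    class NoRMDemo
    {
        static void Main()
        {
            using (var session = new MongoSession<Category>())
            {
                var snacks = new Category { Name = "Snacks", Description = "Office snacks" };
                session.Save(snacks);                      // insert: a new ObjectId is assigned

                var found = session.Queryable
                    .Where(c => c.Name == "Snacks")
                    .FirstOrDefault();                     // LINQ query via MongoQueryProvider

                if (found != null)
                {
                    found.Description = "Snacks and drinks";
                    session.Save(found);                   // update: same ObjectId, document replaced
                }
            }
        }
    }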

    Read the article

  • Upgrading Team Foundation Server 2008 to 2010

    - by Martin Hinshelwood
    I am sure you will have seen my posts on upgrading our internal Team Foundation Server from TFS2008 to TFS2010 Beta 2, RC and RTM, but what about a fresh upgrade of TFS2008 to TFS2010 using the RTM version of TFS? One of our clients is taking the plunge with TFS2010, so I have the job of doing the upgrade. It is sometimes very useful to have a team member who starts work when most of the Sydney workers are heading home, as I can do the upgrade without impacting them. The downside is that if you hit any blockers you can be pretty sure that everyone who could deal with your problem is asleep. I am starting with an existing blank installation of TFS 2010, but Adam Cogan let slip that he was the one that did the install, so I thought it prudent to make sure that it was OK.

    Verifying Team Foundation Server 2010
    We need to check that TFS 2010 has been installed correctly. First, check the Admin console and have a root about for any errors.
    Figure: Even the SQL Setup looks good. I don't know how Adam did it!

    Backing up the Team Foundation Server 2008 Databases
    As we are moving from one server to another (the recommended method) we will be taking a backup of our TFS2008 databases and restoring them to the SQL Server for the new TFS2010 server. Do not just detach and reattach; this will cause problems with the version of the database. If you are running a test migration you just need to create a backup of the TFS 2008 databases, but if you are doing the live migration then you should stop IIS on the TFS 2008 server before you back up the databases. This will stop any inadvertent check-ins or changes to TFS 2008.
    Figure: Stop IIS before you take a backup to prevent any TFS 2008 changes being written to the database.
    It is good to leave a little time between taking the TFS 2008 server offline and commencing the upgrade, as there is always one developer who has not finished and starts screaming. This time it was John Liu who needed 10 more minutes to make his changes and check in, so I always give it 30 minutes and see if anyone screams.

    John Liu [SSW] said: are you doing something to TFS :-O
    MrHinsh [SSW UK][VS ALM MVP] said: I have stopped TFS 2008 as per my emails
    John Liu [SSW] said: haven't finished check in @_@ can we have it for 10mins? :)
    MrHinsh [SSW UK][VS ALM MVP] said: TFS 2008 has been started
    John Liu [SSW] said: I love you!
    -IM conversation at TFS Upgrade +25 minutes

    After John confirmed that he had everything done I turned IIS off again and made a cup of tea. There were no more screams, so the upgrade could continue.
    Figure: Backup all of the databases for TFS and include the Reporting Services, just in case.
    Figure: Check that all the backups have been taken.
    Once you have your backups, you need to copy them to your new TFS2010 server and restore them. This is a good way to proceed: if we hit any problems, or just plain run out of time, we simply turn the TFS 2008 server back on, and all we have lost is one upgrade day, not 10 developer days. As per the rules, you should record the number of files and the total number of areas and iterations before the upgrade so you have something to compare to:

    TFS2008 file count:
    Type 1: 1845
    Type 2: 15770
    Areas & Iterations: 139

    You can use this to verify that the upgrade was successful. It should however be noted that the numbers in TFS 2010 will be bigger. This is due to some of the sorting out that TFS does during the upgrade process.
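    The backups above are taken through Management Studio; if you prefer to script them, SQL Server Management Objects (SMO) can do the same job. This is a hedged sketch of mine, not part of the original walkthrough: the instance name, backup path and database list are assumptions you would adjust to your own TFS 2008 data tier.

    // Hedged sketch: full backups of the TFS 2008 databases via SMO.
    // Instance name, path and database list are illustrative assumptions.
    using Microsoft.SqlServer.Management.Smo;

    class TfsBackup
    {
        static void Main()
        {
            var server = new Server(@"TFS2008SQL");    // old TFS data tier
            string[] databases =
            {
                "TfsActivityLogging", "TfsBuild", "TfsIntegration",
                "TfsVersionControl", "TfsWorkItemTracking", "TfsWorkItemTrackingAttachments",
                "ReportServer", "ReportServerTempDB"   // include Reporting Services, just in case
            };

            foreach (var db in databases)
            {
                var backup = new Backup
                {
                    Action = BackupActionType.Database,
                    Database = db
                };
                backup.Devices.AddDevice(@"E:\Backups\" + db + ".bak", DeviceType.File);
                backup.SqlBackup(server);              // synchronous full backup
            }
        }
    }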
    Restore Team Foundation Server 2008 Databases
    Restoring the databases is much more time consuming than just attaching them, as you need to do them one at a time. But you may be taking a backup of an operational database and need to restore all your databases to a particular point in time instead of to the latest. I am doing latest unless I encounter any problems.
    Figure: Restore each of the databases to either the latest or a specific point in time.
    Figure: Restore all of the required databases.
    Now that all of your databases are restored, you need to upgrade them to Team Foundation Server 2010.

    Upgrade Team Foundation Server 2008 Databases
    This is probably the easiest part of the process. You need to call a fire-and-forget command that will go off to the database specified, find the TFS 2008 databases and upgrade them to 2010. During this process all 6 main TFS 2008 databases are merged into the TfsVersionControl database, upgraded, and then the database is renamed to TFS_[CollectionName]. The rename covers only the database and not the physical files, so it is worth going back and renaming the physical files as well. This keeps everything neat and tidy. If you plan to keep the old TFS 2008 server around, for example if you are doing a test migration first, then you will need to change the TFS GUID. This GUID is unique to each TFS instance and is preserved when you upgrade. It is used by the clients, and they can get a little confused if there are two servers with the same one. To kick off the upgrade you need to open a command prompt, change the path to "C:\Program Files\Microsoft Team Foundation Server 2010\Tools" and run the "import" command of "tfsconfig".

    TfsConfig import /sqlinstance:<Previous TFS Data Tier>
                     /collectionName:<Collection Name>
                     /confirmed

    Imports a TFS 2005 or 2008 data tier as a new project collection. Important: This command should only be executed after adequate backups have been performed. After you import, you will need to configure portal and reporting settings via the administration console.

    EXAMPLES
    --------
    TfsConfig import /sqlinstance:tfs2008sql /collectionName:imported /confirmed
    TfsConfig import /sqlinstance:tfs2008sql\Instance /collectionName:imported /confirmed

    OPTIONS
    -------
    sqlinstance      The SQL instance of the TFS 2005 or 2008 data tier. The TFS databases at that location will be modified directly and will no longer be usable as previous version databases. Ensure you have back-ups.
    collectionName   The name of the new Team Project Collection.
    confirmed        Confirm that you have backed-up databases before importing.

    This command will automatically look for the TfsIntegration database and verify that all the other required databases exist. In this case it took around 5 minutes to complete the upgrade, as the total database size was under 700MB. This was unlike the upgrade of SSW's production database, with over 17GB of data, which took a few hours. At the end of the process you should get no errors and no warnings:

    The Upgrade operation on the ApplicationTier feature has completed. There were 0 errors and 0 warnings.

    As this is a new server and not a pure upgrade, there should not be a problem with the GUID. If you think at any point you will be doing this more than once (for example doing a test migration, or merging many TFS 2008 instances into a single one), then you should go back and rename the physical TfsVersionControl.mdf file to the same as the new collection.
    This will avoid confusion later down the line. To do this, detach the new collection from the server and rename the physical files, then reattach and change the physical file locations to match the new name. You can follow http://www.mssqltips.com/tip.asp?tip=1122 for a more detailed explanation of how to do this.
    Figure: Stop the collection so TFS does not take a wobbly when we detach the database.
    When you try to start the new collection again you will get a conflict with project names, and you will be required to remove the Test Upgrade collection. This is fine; it just needs detached.
    Figure: Detaching the test upgrade from the new Team Foundation Server 2010 so we can start the new collection again.
    You will now be able to start the new upgraded collection, and you are ready for testing. Do you remember the stats we took off the TFS 2008 server?

    TFS2008 file count:
    Type 1: 1845
    Type 2: 15770
    Areas & Iterations: 139

    Well, now we need to compare them to the TFS 2010 stats, remembering that there will probably be more files under source control.

    TFS2010 file count:
    Type 1: 19288
    Areas & Iterations: 139

    Lovely: the number of areas and iterations is the same, and the number of files is bigger. Just what we were looking for.

    Testing the upgraded Team Foundation Server 2010 Project Collection
    Can we connect to the new collection and project?
    Figure: We can connect to the new collection and project.
    Figure: Make sure you can connect to the upgraded projects and that you can see all of the files.
    Figure: Team Web Access is there and working.
    Note that for Team Web Access you now use the same port and URL as for TFS 2010. So in this case, as I am running on the local box, you need to use http://localhost:8080/tfs, which will redirect you to http://localhost:8080/tfs/web for the web access. If you need to connect with a Visual Studio 2008 client you will need to use the full path of the new collection, http://[servername]/tfs/[collectionname], and this will work with all of your collections. With Visual Studio 2005 you will only be able to connect to the Default collection, and in both VS2008 and VS2005 you will need to install the forward compatibility updates:
    Visual Studio Team System 2005 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010
    Visual Studio Team System 2008 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010
    To make sure that you have everything up to date, run SSW Diagnostics and get all green ticks.

    Upgrade Done!
    At this point you can send out a notice to everyone that the upgrade is complete and give them the connection details. You need to remember that at this stage we have a 2008 project upgraded to run under TFS 2010, but it is still running under the same process template that it was running before. You can only "enable" 2010 features in a process template; you can't upgrade the template itself. So what to do? Well, you need to create a new project and migrate the things you want to keep across. Source code is easy, you can move or branch, but work items are more difficult, as you can't move them between projects. This instance is complicated further because the old project uses the Conchango/EMC Scrum for Team System template, and I will need to write a script/application to get the work items across with their attachments intact. That is my next task!
    Technorati Tags: TFS 2010, TFS 2008, VS ALM
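    For the before/after counts above, a small console app against the TFS 2010 client object model is one way to gather them without poking the databases directly. The sketch below is my illustration rather than Martin's actual script; the server URL and collection name are placeholders.

    // Hedged sketch: count version control items through the TFS 2010
    // client object model. URL and collection name are placeholders.
    using System;
    using Microsoft.TeamFoundation.Client;
    using Microsoft.TeamFoundation.VersionControl.Client;

    class UpgradeStats
    {
        static void Main()
        {
            var collection = new TfsTeamProjectCollection(
                new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
            var vcs = collection.GetService<VersionControlServer>();

            // Count every item under source control, folders and files alike.
            ItemSet items = vcs.GetItems("$/", RecursionType.Full);
            int folders = 0, files = 0;
            foreach (Item item in items.Items)
            {
                if (item.ItemType == ItemType.Folder) folders++;
                else files++;
            }
            Console.WriteLine("Folders: {0}, Files: {1}", folders, files);
            // Areas and iterations could be tallied similarly through the
            // common structure service on the same connection.
        }
    }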

    Read the article

  • jQuery Templates and Data Linking (and Microsoft contributing to jQuery)

    - by ScottGu
    The jQuery library has a passionate community of developers, and it is now the most widely used JavaScript library on the web. Two years ago I announced that Microsoft would begin offering product support for jQuery, and that we'd be including it in new versions of Visual Studio going forward. By default, when you create new ASP.NET Web Forms and ASP.NET MVC projects with VS 2010 you'll find jQuery automatically added to your project. A few weeks ago during my second keynote at the MIX 2010 conference I announced that Microsoft would also begin contributing to the jQuery project. During the talk, John Resig -- the creator of the jQuery library and leader of the jQuery developer team -- talked a little about our participation and discussed an early prototype of a new client templating API for jQuery. In this blog post, I'm going to talk a little about how my team is starting to contribute to the jQuery project, and discuss some of the specific features that we are working on such as client-side templating and data linking (data-binding).

    Contributing to jQuery
    jQuery has a fantastic developer community, and a very open way to propose suggestions and make contributions. Microsoft is following the same process to contribute to jQuery as any other member of the community. As an example, when working with the jQuery community to improve support for templating in jQuery, my team followed these steps:
    1. We created a proposal for templating and posted the proposal to the jQuery developer forum (http://forum.jquery.com/topic/jquery-templates-proposal and http://forum.jquery.com/topic/templating-syntax).
    2. After receiving feedback on the forums, the jQuery team created a prototype for templating and posted the prototype at the Github code repository (http://github.com/jquery/jquery-tmpl).
    3. We iterated on the prototype, creating a new fork on Github of the templating prototype, to suggest design improvements. Several other members of the community also provided design feedback by forking the templating code.
    There has been an amazing amount of participation by the jQuery community in response to the original templating proposal (over 100 posts in the jQuery forum), and the design of the templating proposal has evolved significantly based on community feedback. The jQuery team is the ultimate determiner of what happens with the templating proposal: they might include it in jQuery core, or make it an official plugin, or reject it entirely. My team is excited to be able to participate in the open source process, and make suggestions and contributions the same way as any other member of the community.

    jQuery Template Support
    Client-side templates enable jQuery developers to easily generate and render HTML UI on the client. Templates support a simple syntax that enables either developers or designers to declaratively specify the HTML they want to generate. Developers can then programmatically invoke the templates on the client, and pass JavaScript objects to them to make the content rendered completely data driven. These JavaScript objects can optionally be based on data retrieved from a server. Because the jQuery templating proposal is still evolving in response to community feedback, the final version might look very different from the version below.
    This blog post gives you a sense of how you can try out and use templating as it exists today (you can download the prototype by the jQuery core team at http://github.com/jquery/jquery-tmpl or the latest submission from my team at http://github.com/nje/jquery-tmpl).

    jQuery Client Templates
    You create client-side jQuery templates by embedding content within a <script type="text/html"> tag. For example, the HTML below contains a <div> template container, as well as a client-side jQuery "contactTemplate" template (within the <script type="text/html"> element) that can be used to dynamically display a list of contacts. The {{= name }} and {{= phone }} expressions are used within the contact template above to display the names and phone numbers of "contact" objects passed to the template. We can use the template to display either an array of JavaScript objects or a single object. The JavaScript code below demonstrates how you can render a JavaScript array of "contact" objects using the above template. The render() method renders the data into a string and appends the string to the "contactContainer" DIV element. When the page is loaded, the list of contacts is rendered by the template. All of this template rendering is happening on the client-side within the browser.

    Templating Commands and Conditional Display Logic
    The current templating proposal supports a small set of template commands, including if, else, and each statements. The number of template commands was deliberately kept small to encourage people to place more complicated logic outside of their templates. Even this small set of template commands is very useful though. Imagine, for example, that each contact can have zero or more phone numbers. The contacts could be represented by the JavaScript array below. The template below demonstrates how you can use the if and each template commands to conditionally display and loop the phone numbers for each contact. If a contact has one or more phone numbers then each of the phone numbers is displayed by iterating through the phone numbers with the each template command. The jQuery team designed the template commands so that they are extensible. If you have a need for a new template command then you can easily add new template commands to the default set of commands.

    Support for Client Data-Linking
    The ASP.NET team recently submitted another proposal and prototype to the jQuery forums (http://forum.jquery.com/topic/proposal-for-adding-data-linking-to-jquery). This proposal describes a new feature named data linking. Data linking enables you to link a property of one object to a property of another object, so that when one property changes the other property changes. Data linking enables you to easily keep your UI and data objects synchronized within a page. If you are familiar with the concept of data-binding then you will be familiar with data linking (in the proposal, we call the feature data linking because jQuery already includes a bind() method that has nothing to do with data-binding). Imagine, for example, that you have a page with the following HTML <input> elements. The following JavaScript code links the two INPUT elements above to the properties of a JavaScript "contact" object that has a "name" and "phone" property. When you execute this code, the value of the first INPUT element (#name) is set to the value of the contact name property, and the value of the second INPUT element (#phone) is set to the value of the contact phone property.
    The properties of the contact object and the properties of the INPUT elements are also linked, so that changes to one are also reflected in the other. Because the contact object is linked to the INPUT element, when you request the page, the values of the contact properties are displayed. More interesting, the values of the linked INPUT elements will change automatically whenever you update the properties of the contact object they are linked to. For example, we could programmatically modify the properties of the "contact" object using the jQuery attr() method. Because our two INPUT elements are linked to the "contact" object, the INPUT element values will be updated automatically (without us having to write any code to modify the UI elements). Note that we updated the contact object above using the jQuery attr() method. In order for data linking to work, you must use jQuery methods to modify the property values.

    Two Way Linking
    The linkBoth() method enables two-way data linking. The contact object and INPUT elements are linked in both directions. When you modify the value of the INPUT element, the contact object is also updated automatically. For example, the following code adds a client-side JavaScript click handler to an HTML button element. When you click the button, the property values of the contact object are displayed using an alert() dialog. The following demonstrates what happens when you change the value of the Name INPUT element and click the Save button. Notice that the name property of the "contact" object that the INPUT element was linked to was updated automatically. The above example is obviously trivially simple. Instead of displaying the new values of the contact object with a JavaScript alert, you can imagine instead calling a web-service to save the object to a database. The benefit of data linking is that it enables you to focus on your data and frees you from the mechanics of keeping your UI and data in sync.

    Converters
    The current data linking proposal also supports a feature called converters. A converter enables you to easily convert the value of a property during data linking. For example, imagine that you want to represent phone numbers in a standard way with the "contact" object phone property. In particular, you don't want to include special characters such as ()- in the phone number; instead you only want digits and nothing else. In that case, you can wire up a converter to convert the value of an INPUT element into this format. Notice how a converter function is passed to the linkFrom() method used to link the phone property of the "contact" object with the value of the phone INPUT element. This converter function strips any non-numeric characters from the INPUT element before updating the phone property. Now, if you enter the phone number (206) 555-9999 into the phone input field then the value 2065559999 is assigned to the phone property of the contact object. You can also use a converter in the opposite direction. For example, you can apply a standard phone format string when displaying a phone number from a phone property.

    Combining Templating and Data Linking
    Our goal in submitting these two proposals for templating and data linking is to make it easier to work with data when building websites and applications with jQuery. Templating makes it easier to display a list of database records retrieved from a database through an Ajax call.
    Data linking makes it easier to keep the data and user interface in sync for update scenarios. Currently, we are working on an extension of the data linking proposal to support declarative data linking. We want to make it easy to take advantage of data linking when using a template to display data. For example, imagine that you are using the following template to display an array of product objects. Notice the {{link name}} and {{link price}} expressions. These expressions enable declarative data linking between the SPAN elements and properties of the product objects. The current jQuery templating prototype supports extending its syntax with custom template commands; in this case, we are extending the default templating syntax with a custom template command named "link". The benefit of using data linking with the above template is that the SPAN elements will be automatically updated whenever the underlying "product" data is updated. Declarative data linking also makes it easier to create edit and insert forms. For example, you could create a form for editing a product by using declarative data linking. Whenever you change the value of the INPUT elements in a template that uses declarative data linking, the underlying JavaScript data object is automatically updated. Instead of needing to write code to scrape the HTML form to get updated values, you can instead work with the underlying data directly, making your client-side code much cleaner and simpler.

    Downloading Working Code Examples of the Above Scenarios
    You can download this .zip file to get working code examples of the above scenarios. The .zip file includes 4 static HTML pages:
    Listing1_Templating.htm - illustrates basic templating.
    Listing2_TemplatingConditionals.htm - illustrates templating with the use of the if and each template commands.
    Listing3_DataLinking.htm - illustrates data linking.
    Listing4_Converters.htm - illustrates using a converter with data linking.
    You can un-zip the file to the file-system and then run each page to see the concepts in action.

    Summary
    We are excited to be able to begin participating within the open-source jQuery project. We've received lots of encouraging feedback in response to our first two proposals, and we will continue to actively contribute going forward. These features will hopefully make it easier for all developers (including ASP.NET developers) to build great Ajax applications.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Bug Triage

    In this blog post brain dump, I'll attempt to describe the process my team tries to follow when dealing with new bug reports (specifically, code defect reports). This is not official Microsoft policy, just the way we do things… if you do things differently and want to share, you can do so at the bottom in the comments (or on your blog).

    Feature Triage Team
    A subset of the feature crew, the triage team (which has representation from the PM, Dev and QA disciplines), looks at all unassigned bugs at regular intervals. This can be weekly or daily (or some other frequency) depending on which part of the product cycle we are in and what the untriaged bug load looks like. They discuss each bug, considering the evidence, and decide whether the bug goes from Not Yet Assigned to Assigned (plus the name of the DEV to fix it) or whether it goes from Active to Resolved (which means it gets assigned back to the requestor for closure, or for further debate if they were not present at the triage meeting). Close to critical milestones, the feature triage team needs to further justify bugs they take to additional higher-level triage teams.

    Bug Opened = Not Yet Assigned
    Someone (typically an SDET from the QA team) creates the bug item (e.g. in TFS), ensuring they populate all the relevant fields including: Title, Description, Repro Steps (including the Actual Result at the end of the steps), attachments of code and/or screenshots, Build number that they observed the issue in, regression details if applicable, how it was found, whether a test case exists or needs to be created, etc. They also indicate their opinion on the Priority and Severity. The bug status is left as Not Yet Assigned.

    "Issue" versus "Fix for issue"
    The solution to some bugs is easy to determine, e.g. "bug: the column name is misspelled". Obviously the fix is to correct the spelling; still, the triage team should be explicit and enter the correct spelling in the bug's Description. Note that a bad bug name here would be "bug: fix the spelling of the column" (it describes the solution, rather than the problem). Other solutions are trickier to establish, e.g. "bug: the column header is not accessible (can only be clicked on with the mouse, not reached via keyboard)". What is the correct solution here? The last thing to do is leave this undetermined and just assign it to a developer. The solution has to be entered in the description. Behind this type of bug usually hides a spec defect or a new feature request. The person opening the bug should focus on describing the issue, rather than the solution. They indicate what the fix is in their opinion by stating the Expected Result (immediately after stating the Actual Result). If they have a complex suggested solution, that should be split out in a separate part, but the triage team has the final say before assigning it. If the solution is lengthy/complicated to describe, the bug can be assigned to the PM. Note: the strict interpretation suggests that any bug with no clear, obvious solution is always a hole in the spec and should always go to the PM. This also ensures the spec gets updated.

    Not Yet Assigned - Not Yet Assigned (on someone else's plate)
    If the bug is observed in our feature, but the cause is actually another team, we change the Area Path (which is the way we identify teams in TFS) and leave it as Not Yet Assigned. The triage team may add more comments as appropriate, including potentially changing the repro steps. In some cases, we may even resolve the bug in our area path and open a new bug in the area path of the other team. Even though there is no action on a dev on the team, the bug still needs to be tracked. One way of doing this is to implement some notification system that informs the team when the tracked bug changes status; another way is to occasionally run a global query (against all area paths) for bugs that have been opened by a member of the team and follow up with the current owners for stale bugs.

    Not Yet Assigned - Resolved
    This state transition can only be made by the Feature Triage Team.
    0. Sometimes the bug description is not clear, and in that case it gets Resolved as More Information Needed, so the original requestor can provide it.
    After understanding what the bug item is about, the first decision is to determine whether it needs to go to a dev.
    1. If it is a known bug, it gets resolved as "Duplicate" and linked to the existing bug.
    2. If it is "By Design" it gets resolved as such, indicating that the triage team does not think this is a bug.
    3. If the bug does not repro on latest bits, it is resolved as "No Repro".
    4. The most painful: if it is decided that we cannot fix it for this release, it gets resolved as "Postponed" or "Won't Fix". The former is typically due to resource and time constraints, while the latter is due to deciding that it is not important enough to consume our resources in any release (yes, not all bugs must be fixed!). For both cases, there are other factors that contribute to the decision, such as: existence of a reasonable workaround, frequency we expect users to encounter the issue, dependencies on other teams to offer a solution, whether it breaks a core scenario, whether it prohibits customer feedback on a major feature, whether it is a regression from a previous release, impact of the fix on other partner teams (e.g. User Education, User Experience, Localization/Globalization), whether this is the right fix, whether the fix impacts performance goals, and last but not least, severity of the bug (e.g. loss of customer data, security threat, crash, hang). The bar for fixing a bug goes up as the release date approaches. The triage team becomes hardnosed about which bugs to take, while the developers are busy resolving assigned bugs; thus everyone drives for Zero Bug Bounce (ZBB). ZBB is when you have 0 active bugs older than 48 hours.

    Not Yet Assigned - Assigned
    If the bug is something we decide to fix in this release and the solution is known, then it is assigned to a DEV. This is either the developer that will do the work, or a Lead that can further assign it to one of his developer team based on a load balancing algorithm of their choosing. Sometimes, the triage team needs the dev to do some investigation work before deciding whether to take the fix; similarly, the check-in for the fix may be gated on code review by the triage team. In these cases, these instructions are provided in the comments section of the bug, and when the developer is done they notify the triage team for the final decision. Additionally, a Priority and Severity (from 0 to 4) have to be entered; e.g. a P0 means "drop anything you are doing and fix this now", whereas a P4 is something you get to after all P0, P1, P2 and P3 bugs are fixed. From a testing perspective, if the bug was found through ad-hoc testing or by an external team, the decision is made whether test cases should be added to avoid future regressions. This is communicated to the QA team.

    Assigned - Resolved
    When the developer receives the bug (they should check daily for new bugs on their plate, looking at bugs in order of priority and from older to newer) they can send it back to triage if the information is not clear. Otherwise, they investigate the bug, setting the Sub Status to "Investigating"; if they cannot make progress, they set the Sub Status to "Blocked" and discuss this with triage or whoever else can help them get unblocked. Once they are unblocked, they set the Sub Status to "Working on Solution"; once they are code complete they send a code review request, setting the Sub Status to "Fix Available". After the iterative code review process is over and everyone is happy with the fix, the developer checks it in and changes the state of the bug from Active (and Assigned to them) to Resolved (and Assigned to someone else). The developer needs to ensure that when the status is changed to Resolved it is assigned to a QA person. For example, maybe the PM opened the bug, but it should be a QA person that will verify the fix; the developer needs to manually change the assignee in that case. Typically the QA person will send an email to the original requestor notifying them that the fix is verified.

    Resolved - ??
    In all cases above, note that the final state was Resolved. What happens after that? The final step should be Closed. The bug is closed once the QA person verifying the fix is happy with it. If the person is not happy, then they change the state from Resolved to Active, thus sending it back to the developer. If the developer and QA person cannot reach agreement, then triage can be brought into it. An easy way to do that is to change the status back to Not Yet Assigned with appropriate comments so the triage team can re-review. It is important to note that only QA can close a bug. That means that if the opener of the bug was a PM, when the bug gets resolved by the dev it may land on the PM's plate, and after a quick review the PM would re-assign it to an SDET, which is the only role that can close bugs. One exception to this is if the person that filed the bug is external: in that case, we leave it Resolved and assigned to them and also send them a notification that they need to verify the fix. Another exception is if specialized developer knowledge is needed for verifying the bug fix (e.g. it was a refactoring suggestion bug, typically not observable by the user), in which case it is fine to have a developer verify the fix, and ideally a different developer to the one that opened the bug.

    Other links on bug triage
    A quick search reveals that others have talked about this subject, e.g. here, here, here, here and here.

    Your take?
    If you have other best practices your team uses to deal with incoming bug reports, feel free to share in the comments below or on your blog. Comments about this post are welcome at the original blog.
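    The transitions above amount to a small state machine. As a reading aid only (this is my sketch, not Microsoft tooling or code from the post), here is the transition table in C#:

    using System;
    using System.Collections.Generic;

    enum BugState { NotYetAssigned, Assigned, Resolved, Closed }

    class BugWorkflow
    {
        // Allowed transitions as described above: triage moves bugs from
        // Not Yet Assigned to Assigned or Resolved; devs resolve Assigned bugs
        // (or bounce them back to triage); QA closes Resolved bugs, reactivates
        // them to Assigned, or sends them back to triage for re-review.
        static readonly Dictionary<BugState, BugState[]> Allowed =
            new Dictionary<BugState, BugState[]>
        {
            { BugState.NotYetAssigned, new[] { BugState.Assigned, BugState.Resolved } },
            { BugState.Assigned,       new[] { BugState.Resolved, BugState.NotYetAssigned } },
            { BugState.Resolved,       new[] { BugState.Closed, BugState.Assigned, BugState.NotYetAssigned } },
            { BugState.Closed,         new BugState[0] }   // terminal state
        };

        public static bool CanTransition(BugState from, BugState to)
        {
            return Array.IndexOf(Allowed[from], to) >= 0;
        }

        static void Main()
        {
            Console.WriteLine(CanTransition(BugState.NotYetAssigned, BugState.Closed)); // False
            Console.WriteLine(CanTransition(BugState.Resolved, BugState.Closed));       // True
        }
    }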

    Read the article

  • Visual Studio 2013 Static Code Analysis in depth: What? When and How?

    - by Hosam Kamel
    In this post I'll illustrate in detail the following points:
    What is static code analysis?
    When to use?
    Supported platforms
    Supported Visual Studio versions
    How to use:
    Run Code Analysis Manually
    Run Code Analysis Automatically
    Run Code Analysis while checking in source code to TFS version control (TFSVC)
    Run Code Analysis as part of Team Build
    Understand the Code Analysis results & learn how to fix them
    Create your custom rule set
    Q & A
    References

    What is static code analysis?
    The Static Code Analysis feature of Visual Studio performs static code analysis on code to help developers identify potential design, globalization, interoperability, performance, security, and many other categories of potential problems, according to Microsoft's rules that mainly target best practices in writing code. A large set of those rules is included with Visual Studio, grouped into different categories targeting specific coding issues like security, design, interoperability, globalization and others. Static here means analyzing the source code without executing it; this type of analysis can be performed through automated tools (like the Visual Studio 2013 Code Analysis Tool) or manually through Code Review, which is already supported in Visual Studio 2012 and 2013 (check the Using Code Review to Improve Quality video on Channel9). There is also dynamic analysis, which is performed on executing programs using software testing techniques such as Code Coverage.

    When to use?
    Running the code analysis tool at regular intervals during your development process can enhance the quality of your software; examining your code for a set of common defects and violations is always a good programming practice. In addition, code analysis can find defects in your code that are difficult to discover through testing, allowing you to achieve a first-level quality gate for your application during the development phase, before you release it to the testing team.

    Supported platforms
    .NET Framework, native (C and C++), database applications.

    Supported Visual Studio versions
    All versions of Visual Studio starting with Visual Studio 2013 (except Visual Studio Test Professional); check Feature comparisons. Creating and modifying a custom rule set requires Visual Studio Premium or Ultimate.

    How to use?
    Code Analysis can be run manually at any time from within the Visual Studio IDE, or even set up to run automatically as part of a Team Build or a check-in policy for Team Foundation Server.

    Run Code Analysis Manually
    To run code analysis manually on a project, on the Analyze menu, click Run Code Analysis on your project, or simply right-click the project name in Solution Explorer and choose Run Code Analysis from the context menu.

    Run Code Analysis Automatically
    To run code analysis each time that you build a project, select Enable Code Analysis on Build on the project's Property Page.

    Run Code Analysis while checking in source code to TFS version control (TFSVC)
    Team Foundation Version Control (TFVC) provides a way for organizations to enforce practices that lead to better code and more efficient group development through check-in policies, which are rules that are set at the team project level and enforced on developer computers before code is allowed to be checked in.
    (This is available only if you're using Team Foundation Server.) Required permissions on Team Foundation Server: you must have the Edit project-level information permission set to Allow; typically your account must be part of Project Administrators or Project Collection Administrators. For more information about Team Foundation permissions check http://msdn.microsoft.com/en-us/library/ms252587(v=vs.120).aspx
    In Team Explorer, right-click the team project name, point to Team Project Settings, and then click Source Control. In the Source Control dialog box, select the Check-in Policy tab. Click Add to create a new check-in policy, or double-click the existing Code Analysis item in the Policy Type list to change the policy. Check or uncheck the policy options based on the configuration you need, as illustrated below:
    Enforce check-in to only contain files that are part of current solution: code analysis can run only on files specified in solution and project configuration files. This policy guarantees that all code that is part of a solution is analyzed.
    Enforce C/C++ Code Analysis (/analyze): requires that all C or C++ projects be built with the /analyze compiler option to run code analysis before they can be checked in.
    Enforce Code Analysis for Managed Code: requires that all managed projects run code analysis and build before they can be checked in.
    Check the Code analysis rule set reference on MSDN.

    What is a Rule Set?
    A rule set is a group of code analysis rules, like the example below where Microsoft.Design is the rule set name and "Do not declare static members on generic types" is the code analysis rule. Once you have configured the analysis rules, the policy is enabled for all team members on this project; whenever a team member checks in any source code to TFSVC, the policy section will highlight the Code Analysis policy as below. TFS is a very extensible platform, so you can implement your own custom Code Analysis check-in policy; check this link for more details: http://msdn.microsoft.com/en-us/library/dd492668.aspx. But you also have to be aware of compatibility between different TFS versions; check http://msdn.microsoft.com/en-us/library/bb907157.aspx

    Run Code Analysis as part of Team Build
    With Team Foundation Build (TFBuild), you can create and manage build processes that automatically compile and test your applications, and perform other important functions. Code Analysis can be enabled in the Build Definition file by selecting the correct value for the build process parameter "Perform Code Analysis". Once configured, kick off your build definition to queue a new build; Code Analysis will run as part of the build workflow and you will be able to see code analysis warnings as part of the build report.

    Understand the Code Analysis results & learn how to fix them
    Now that we have gone through the Code Analysis configurations and the different ways of running it, we will go through the Code Analysis results: how to understand them and how to resolve them. The Code Analysis window in Visual Studio shows all the analysis results based on the rule sets you configured in the project file properties. Let's dig into what each result item contains:
    1. Check ID: the unique identifier for the rule. CheckId and Category are used for in-source suppression of a warning.
    2. Title: the title of the warning message.
    3. Description: a description of the problem or suggested fix.
    4. File Name: the file name and the line number of the code which violates the code analysis rule set.
    5. Category: the code analysis category for this error.
    6. Warning/Error: depends on how you configure it in the rule set; the default is Warning level.
    7. Action:
    Copy: copies the warning information to the clipboard.
    Create Work Item: if you're connected to Team Foundation Server you can create a work item (most probably a Task or Bug) and assign it to a developer to fix a certain code analysis warning.
    Suppress Message: there are times when you might decide not to fix a code analysis warning. You might decide that resolving the warning requires too much recoding in relation to the probability that the issue will arise in any real-world implementation of your code. Or you might believe that the analysis that is used in the warning is inappropriate for the particular context. You can suppress individual warnings so that they no longer appear in the Code Analysis window. Two options are available: In Source inserts a SuppressMessage attribute in the source file above the method that generated the warning; this makes the suppression more discoverable. In Suppression File adds a SuppressMessage attribute to the GlobalSuppressions.cs file of the project; this can make the management of suppressions easier. Note that the SuppressMessage attribute added to GlobalSuppressions.cs also targets the method that generated the warning. It does not suppress the warning globally.
    Visual Studio makes it very easy to fix a code analysis warning: if you are not sure how to fix it, click on the Check ID hyperlink and you'll be directed to MSDN (online or a local copy, based on the configuration you chose while installing Visual Studio), where you will find all the information about the warning, including how to fix it.

    Create a Custom Code Analysis Rule Set
    The Microsoft standard rule sets provide groups of rules that are organized by function and depth. For example, the Microsoft Basic Design Guidelines Rules and the Microsoft Extended Design Guidelines Rules contain rules that focus on usability and maintainability issues, with added emphasis on naming rules in the Extended rule set. You can create and modify a custom rule set to meet specific project needs associated with code analysis. To create a custom rule set, you open one or more standard rule sets in the rule set editor. Creating and modifying a custom rule set requires Visual Studio Premium or Ultimate. You can check How to: Create a Custom Rule Set on MSDN for more details: http://msdn.microsoft.com/en-us/library/dd264974.aspx

    Q & A
    Visual Studio static code analysis vs. FxCop vs. StyleCop: http://www.excella.com/blog/stylecop-vs-fxcop-difference-between-code-analysis-tools/
    Code Analysis for SharePoint Apps and SPDisposeCheck? This post lists some of the rule sets you can run specifically for SharePoint applications and how to integrate SPDisposeCheck as well.
    Code Analysis for SQL Server Database Projects? This post illustrates how to run static code analysis on T-SQL through SSDT.
    ReSharper 8 vs. Visual Studio 2013? This document lists some of the features that are provided by ReSharper 8 but are missing or not as fully implemented in Visual Studio 2013.
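    To make the Suppress Message options above concrete, here is a short example of in-source suppression. The class and justification text are invented for illustration, but the attribute (System.Diagnostics.CodeAnalysis.SuppressMessageAttribute), the category and the CheckId are the real ones for the "Do not declare static members on generic types" rule quoted earlier:

    using System.Diagnostics.CodeAnalysis;

    public class Cache<T>
    {
        // Suppresses CA1000 ("Do not declare static members on generic types")
        // for this member only; the Category and CheckId values come straight
        // from the Code Analysis result item.
        [SuppressMessage("Microsoft.Design",
            "CA1000:DoNotDeclareStaticMembersOnGenericTypes",
            Justification = "Factory method is the intended entry point.")]
        public static Cache<T> Create()
        {
            return new Cache<T>();
        }
    }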
    References
    A Few Billion Lines of Code Later: Using Static Analysis to Find Bugs in the Real World: http://cacm.acm.org/magazines/2010/2/69354-a-few-billion-lines-of-code-later/fulltext
    What is New in Code Analysis for Visual Studio 2013: http://blogs.msdn.com/b/visualstudioalm/archive/2013/07/03/what-is-new-in-code-analysis-for-visual-studio-2013.aspx
    Analyze the code quality of Windows Store apps using Visual Studio static code analysis: http://msdn.microsoft.com/en-us/library/windows/apps/hh441471.aspx
    [Hands-on-lab] Using Code Analysis with Visual Studio 2012 to Improve Code Quality: http://download.microsoft.com/download/A/9/2/A9253B14-5F23-4BC8-9C7E-F5199DB5F831/Using%20Code%20Analysis%20with%20Visual%20Studio%202012%20to%20Improve%20Code%20Quality.docx
    Originally posted at "Hosam Kamel | Developer & Platform Evangelist" http://blogs.msdn.com/hkamel

    Read the article

  • SQL Server and Hyper-V Dynamic Memory Part 2

    - by SQLOS Team
    Part 1 of this series was an introduction and overview of Hyper-V Dynamic Memory. This part looks at SQL Server memory management and how the SQL engine responds to changing OS memory conditions.

    Part 2: SQL Server Memory Management
    As with any Windows process, sqlservr.exe has a virtual address space (VAS) of 4GB on 32-bit and 8TB in 64-bit editions. Pages in its VAS are mapped to pages in physical memory when the memory is committed and referenced for the first time. The collection of VAS pages that have been recently referenced is known as the Working Set. How and when SQL Server allocates virtual memory and grows its working set depends on the memory model it uses. SQL Server supports three basic memory models:

    1. Conventional Memory Model
    The Conventional model is the default SQL Server memory model and has the following properties:
    - Dynamic: can grow or shrink its working set in response to load and external (operating system) memory conditions.
    - OS uses 4K pages (not to be confused with SQL Server "pages", which are 8K regions of committed memory).
    - Pageable: can be paged out to disk by the operating system.

    2. Locked Page Model
    The locked page memory model is set when SQL Server is started with the "Lock Pages in Memory" privilege*. It has the following characteristics:
    - Dynamic: can grow or shrink its working set in the same way as the Conventional model.
    - OS uses 4K pages.
    - Non-Pageable: when memory is committed it is locked in memory, meaning that it will remain backed by physical memory and will not be paged out by the operating system.
    A common misconception is to interpret "locked" as non-dynamic. A SQL Server instance using the locked page memory model will grow and shrink (allocate memory and release memory) in response to changing workload and OS memory conditions in the same way as it does with the conventional model. This is an important consideration when we look at Hyper-V Dynamic Memory: "locked" memory works perfectly well with "dynamic" memory.
    * Note: in "Denali" (Standard Edition and above), and in SQL 2008 R2 64-bit (Enterprise and above editions), the Lock Pages in Memory privilege is all that is required to set this model. In 2008 R2 64-bit Standard Edition it also requires trace flag 845 to be set; in 2008 R2 32-bit editions it requires sp_configure 'awe enabled' 1.

    3. Large Page Model
    The Large page model is set using trace flag 834 and potentially offers a small performance boost for systems that are configured with large pages. It is characterized by:
    - Static: memory is allocated at startup and does not change.
    - OS uses large (>2MB) pages.
    - Non-Pageable.
    The large page model is supported with Hyper-V Dynamic Memory (and Hyper-V also supports large pages), but you get no benefit from using Dynamic Memory with this model since SQL Server memory does not grow or shrink. The rest of this article will focus on the locked and conventional SQL Server memory models.

    When does SQL Server grow?
    For "dynamic" configurations (Conventional and Locked memory models), the sqlservr.exe process grows (allocates and commits memory from the OS) in response to a workload. As much memory is allocated as is required to optimally run the query and buffer data for future queries, subject to limitations imposed by:
    - SQL Server max server memory setting. If this configuration option is set, the buffer pool is not allowed to grow to more than this value.
    When does SQL Server shrink caches?

    SQL Server as a rule does not like to return memory to the OS, but it will shrink its caches in response to memory pressure. Memory pressure can be divided into "internal" and "external".

    - External memory pressure occurs when the operating system is running low on memory and low memory signals are set. The SQL Server Resource Monitor checks for low memory signals approximately every 5 seconds and will attempt to free memory until the signals stop. To free memory, SQL Server does the following:
      - Frees unused memory.
      - Notifies Memory Manager clients to release memory: caches free unreferenced cache objects, and the buffer pool releases buffers based on oldest access times.
      The freed memory is released back to the operating system, and the process continues until the low memory resource notifications stop.
    - Internal memory pressure occurs when the size of different caches and allocations increases but the SQL Server process needs to keep its total memory within a target value. For example, if max server memory is set and certain caches are growing large, SQL will free memory for re-use internally, but will not release memory back to the OS. If you lower the value of max server memory, you will generate internal memory pressure that causes SQL to release memory back to the OS.

    Memory pressure handling has not changed much since SQL 2005, and it was described in detail in a blog post by Slava Oks.

    Note that SQL Server Express is an exception to the above behavior. Unlike other editions it does not assume it is the most important process running on the system; it tries to be more "desktop" friendly and will empty its working set after a period of inactivity.
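    The memory state the engine reacts to can be watched from inside the instance. A hedged sketch using two DMVs available from SQL Server 2008 onward (columns trimmed for illustration):

        -- OS-level view: mirrors the memory resource notifications above.
        SELECT system_high_memory_signal_state,  -- OS has memory to give
               system_low_memory_signal_state,   -- OS is asking for memory back
               available_physical_memory_kb
        FROM   sys.dm_os_sys_memory;

        -- Process-level view: the sqlservr.exe working set and page locking.
        SELECT physical_memory_in_use_kb,        -- current working set
               locked_page_allocations_kb,       -- non-zero under the locked page model
               large_page_allocations_kb,        -- non-zero under the large page model
               page_fault_count
        FROM   sys.dm_os_process_memory;

    Watching these values while Hyper-V adds or removes guest memory is one way to observe the grow and shrink behavior this post describes.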
    How does SQL Server respond to changing OS memory?

    SQL Server 2005 introduced support for hot-add memory. This feature, available in Enterprise and above editions, allows the server to make use of any extra physical memory that is added after SQL Server starts. Being able to add physical memory while the system is running is limited to specialized hardware, but with the Hyper-V Dynamic Memory feature, new memory allocated to a guest virtual machine looks like hot-add physical memory to the guest. This means that, thanks to the hot-add memory feature, SQL Server 2005 and higher can dynamically grow if more "physical" memory is granted to a guest VM by Hyper-V Dynamic Memory. SQL Server checks OS memory every second and dynamically adjusts its "target" (based on available OS memory and max server memory) accordingly. In "Denali", Standard Edition will also have sqlservr.exe support for hot-add memory when running virtualized (i.e. detecting and acting on Hyper-V Dynamic Memory allocations).

    How does a SQL Server workload in a guest VM impact Hyper-V Dynamic Memory scheduling?

    When a SQL workload causes the sqlservr.exe process to grow its working set, the Hyper-V memory scheduler detects memory pressure in the guest VM and adds memory to it. SQL Server then detects the extra memory and grows according to workload demand. In our tests we have seen this feedback process cause a guest VM to grow quickly in response to SQL workload; we are still working on characterizing this ramp-up.

    How does SQL Server respond when Hyper-V removes memory from a guest VM through ballooning?

    If pressure from other VMs causes Hyper-V Dynamic Memory to take memory away from a VM through ballooning (allocating memory with a virtual device driver and returning it to the host OS), the Windows Memory Manager pages out unlocked portions of memory and signals low resource notification events. When SQL Server detects these events it shrinks memory until the low memory notifications stop (see the cache shrinking description above).

    This raises another question: can we make SQL Server release memory more readily, and hence behave more "dynamically", without compromising performance? In certain circumstances where the application workload is predictable, it may be possible to have a job that varies max server memory according to need, lowering it when the engine is inactive and raising it before a period of activity (a rough sketch of such a job step appears at the end of this post). This would have limited applicability, but it is something we're looking into.

    What memory management changes are there in SQL Server "Denali"?

    In SQL Server "Denali" (aka SQL11) the Memory Manager has been rewritten to be more efficient. The main changes are summarized in this post. An important change with respect to Hyper-V Dynamic Memory support is that the max server memory setting now includes allocations of any page size as well as managed CLR procedure allocations, so it is a much closer approximation to total sqlservr.exe memory usage. This makes it easier to calculate a value for max server memory, which becomes important when configuring virtual machines to work well with the Hyper-V Dynamic Memory Startup and Maximum RAM settings.

    Another important change is that there is no more AWE or hot-add support in the 32-bit edition. This means that if you run a 32-bit edition of "Denali" you are limited to a 4GB address space and cannot take advantage of dynamically added OS memory that wasn't present when SQL Server started (though Hyper-V Dynamic Memory is still a supported configuration).

    In part 3 we'll develop some best practices for configuring and using SQL Server with Dynamic Memory. Originally posted at http://blogs.msdn.com/b/sqlosteam/
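    As a postscript to the "job which varies max server memory" idea above, a hedged sketch of what such a job step might look like; the schedule and values are invented placeholders, not guidance from the original post:

        -- Illustrative job step: pick a memory cap by time of day. All numbers
        -- are placeholders. Lowering the cap creates internal memory pressure
        -- and releases memory to the OS; raising it lets the buffer pool grow.
        DECLARE @cap_mb INT =
            CASE WHEN DATEPART(HOUR, SYSDATETIME()) BETWEEN 8 AND 18
                 THEN 8192   -- busy hours
                 ELSE 2048   -- quiet hours
            END;
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)', @cap_mb;
        RECONFIGURE;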

    Read the article

  • CodePlex Daily Summary for Friday, March 05, 2010

    New Projects

    - .svn Folders Cleanup Tool: dotSVN Cleanup is a tool that allows you to remove the .svn folders. Just click, browse, say abracadabra ...and the magic is done. Have fun with...
    - Accord: The Accord framework creates an easy way to integrate any Dependency Injection framework into your project, while abstracting the details of your im...
    - Asp.net MVC Lab: Try asp.net mvc out
    - ASP.NET Themes management with Webforms: The provided source is an example of how to use themes in ASP.NET Webforms. This source is the "up to date" support for the article I wrote
    - B&W Port Scanner: B&W Port Scanner (formerly Net Inspector) is a fast TCP port scan utility. The main idea is support of customizable operations to be performed f...
    - BizTalk SWAT - Simple Web Activity Tracker: This is a web based version of BizTalk HAT. The concept is designed to be able to share and enable sharing of orchestration info easily. Some of th...
    - C# Linear Hash Table: A C# dictionary-like implementation of a linear hash table. It is more memory efficient than the .NET dictionary, and also almost as fast. NOTE: On...
    - DBF Import Export Wizard: DBF Import Export Wizard is a tool for anyone needing to import DBF files into SQL Server or to export SQL Server tables to a DBF file. This proje...
    - Domain as XML - Driven Development: Visual Studio Code Samples: Domain as XML - Driven Development: Visual Studio Code Samples
    - EasyDownload: This application allows you to manage downloads, handling a stack of files and several useful configurations
    - Eos2: .
    - FlightTickets: This application allows you to buy flight tickets
    - Fotofly PhotoViewer: A Silverlight control that uses the Fotofly metadata library to show the people in a photo (using Windows Live Photo Gallery People metadata) and a...
    - Fujiy source code: Source code examples
    - GameSet: This application allows you to play games with distributed users.
    - Injectivity (Dependency Injection): Injectivity is a dependency injection framework (written in C#) with a strong focus on ease of configuration and performance. Having been writt...
    - Inventory: Keep track of inputs, materials and sales
    - LoanTin.Com Source Code: LoanTin.Com, a social networking website similar to Tumblr.com, based on the source code of the Loantiner Project, allowing anyone to share anything to anyo...
    - mysln: my solutions.
    - NumTextBox: A rewrite of the TextBox control. Its main feature is to allow only numeric input, as String, Numeric, Currency, Decimal, Float, Double, Short, Int or Long. Adapted from: http://www.codeproject.com/KB/edit/num...
    - Quick Performance Monitor: This small utility helps to monitor performance counters without using the full blown perfmon tool from Windows. It supports a number of command li...
    - Runo: Runo Research
    - Sales: This application allows you to manage a hardware store
    - ScrewWiki Form Auth Provider: Enables your ASP.NET site to use Forms Authentication to integrate with your ScrewWiki. User management is performed on a parent site, and cookie i...
    - SDS: Scientific DataSet library and tools: The SDS library makes it easy for .Net developers to read, write and share scalars, vectors, matrices and multidimensional grids which are very com...
    - ShapeSweeper: Minesweeper-like game for the Zune HD. Each hidden object has three properties to discover (location, color, and shape) and all three must be corre...
    - SilverlightExcel: an Excel file viewer in Silverlight 4: SilverlightExcel is a Silverlight application allowing you to open and view Excel files and also create graphs.
    - sPWadmin: pwAdmin is a web interface based on JSP that uses the PW-Java API to control a PW-Server.
    - Video Player control in Silverlight: A control for playing video in Silverlight 4 with chapters on a timeline control. This player will be easily skinnable and customizable. More featur...
    - XNA Light Pre Pass Renderer: A demo/sample that shows how to write a light pre pass renderer in XNA.
    - Zimms: Collaboration site for friends, a code depot, and scratch pad

    New Releases

    - .svn Folders Cleanup Tool: dotSVN Cleanup Tool: dotSVN Cleanup Tool executable
    - Accord: Alpha: Initial build of the Accord framework.
    - AcPrac: AcPrac Ver 0.1: The first version of AcPrac. It is not fully functional, but rather a version to get the bugs out. Please report all bugs.
    - ASP.NET: ASP.NET Browser Definition Files: This download contains ASP.NET 4 Browser Definition Files -- you can use the new ASP.NET 4 browser definition files with earlier versions of ASP....
    - B&W Port Scanner: Black`n`White Port Scanner 1.0: B&W Port Scanner 1.0 Final. Release date: 03.03.2010
    - BizTalk SWAT - Simple Web Activity Tracker: BizTalk SWAT: This is a web based version of BizTalk HAT. The concept is designed to be able to share and enable sharing of orchestration info easily. It uses th...
    - BTP Tools: CSB+CUV+HCSB dict files 2010-03-04: 5. is now missing a space between the Strong's number and the count: >CSB Translation: 圣所 7, 至圣所G39+G394 it should be: CSB Translation: 圣所 7, 至圣所G...
    - C# Linear Hash Table: Linear Hash Table: First working version of the Linear Hash Table.
    - Cassiopeia: WinTools 1.0 beta: First release
    - Composure: Caliburn-44007-trunk-vs2010.net40: This is a very simple conversion of the Caliburn trunk (rev 44007) for use in Visual Studio 2010 RC1, built against .NET40. Because the conversion w...
    - Cover Creator: CoverCreator 1.3.0: English and Polish version. Functionality to add an image to the front page. Load / save covers.
    - DBF Import Export Wizard: DBF Import Export Wizard Source Code: Version 0.1.0.3
    - DBF Import Export Wizard: DBF_Import_Export_Wizard Setup 0.1.0.3: Zip file contains Setup.exe
    - ESB Toolkit Extensions: Tellago BizTalk ESB 2.0 Toolkit Extensions v0.2: Windows Installer file that installs the library on a BizTalk ESB 2.0 system. This install automatically configures the esb.config to use the new compo...
    - Fotofly PhotoViewer: Fotofly Photoview v0.1: The first public release. Based on a Silverlight application I have been using for over a year at www.tassography.com. This version uses Fotofly v0...
    - HPC with GPUs applied to CG: Cuda Soft Bodies simulation: CUDA source for soft bodies
    - HPC with GPUs applied to CG: Full Soft Bodies src: full source code for the soft bodies simulation
    - Injectivity (Dependency Injection): 2.8.166.2135: Release 2.8.166.2135 of the Injectivity dependency injection framework.
    - Line Counter: 1.5 (Code Outline Preview): This version contains a preview of the code outline feature; you can now view a C# code outline within Line Counter. Note that the code outline now onl...
    - Micajah Mindtouch Deki Wiki Copier: MicajahWikiCopier: You should use the following command line arguments: WikiCopier.exe "http://oldwikiwithdata.wik.is/@api/deki" "login" "password" "http://newwiki.somename.l...
    - ncontrols: Alpha 0.4.0.1: Added some examples to the Console project.
    - NumTextBox: NumTextBox initial version: A rewrite of the TextBox control that allows only numeric input (String, Numeric, Currency, Decimal, Float, Double, Short, Int, Long). This is the initial version.
    - PSCodeplex: PS CodePlex 0.2: PS CodePlex 0.2 has some breaking changes to the parameters. A few of the parameters are renamed and a few are made switch parameters. Add-Rele...
    - Quick Performance Monitor: QPerfMon First release - Version 1.0.0: The first release of the utility.
    - RapidWebDev - .NET Enterprise Software Development Infrastructure: ProductManagement Quick Sample 0.2: This is a sample product management application to demonstrate how to develop enterprise software in RapidWebDev. The glossary of the system are ro...
    - ScrewWiki Form Auth Provider: ScrewWiki Forms Authentication: Initial release
    - See.Sharper: See.Sharper.Docs-1.10.3.4: HTML documentation (including Doxygen project)
    - See.Sharper: See.Sharper-1.10.3.4: Solution (source files, debug and release binaries)
    - Solar.Generic: Solar.Generic 0.8.0.0 Beta (Revised, Renamed): Renamed project from Solar.Commons to Solar.Generic. The project solution file is now in the format of Visual ...
    - Solar.Security: Solar.Security 1.1.0.0: Performed several major refactorings of the code base. Stripped the in-memory implementation of the IConfiguration interface of transactional behavior due to...
    - sPWadmin: pwAdmin v0.7: -
    - Star System Simulator: Star System Simulator 2.3: Changes in this release: fixed several localisation issues. Features in this release: model star systems in 3D. Euler-Cromer method. Improved...
    - SysI: sysi, stable and ready: This time for sure.
    - TheWhiteAmbit: TheWhiteAmbit - Demo: Two little demos demonstrating fast realtime raytracing and generating bent normals for shading (CUDA capable GPU needed = nVidia GeForce >8x00)
    - VsTortoise - a TortoiseSVN add-in for Microsoft Visual Studio: VsTortoise Build 22 Beta: Build 22 (beta). New: Visual Studio 2010 RC support (VsTortoise for Visual Studio 2010 RC screenshots). New: VsTortoise integrates into Solution E...
    - WinMergeFS: WinMergeFS 0.1.42128alpha: WinMergeFS provides AuFS functionality for Windows. With WinMergeFS users can mount multiple directories into a virtual drive. Plugin based root se...
    - WSDLGenerator: WSDLGenerator 0.0.0.2: Bugs fixed; code refactored; added support for custom types
    - XNA Light Pre Pass Renderer: LightPrePassRendererXNA: Zipped source code for the light pre pass renderer made with XNA.

    Most Popular Projects

    - MetaSharp
    - Rawr
    - WBFS Manager
    - AJAX Control Toolkit
    - Microsoft SQL Server Product Samples: Database
    - Silverlight Toolkit
    - Windows Presentation Foundation (WPF)
    - LiveUpload to Facebook
    - ASP.NET
    - Microsoft SQL Server Community & Samples

    Most Active Projects

    - Umbraco CMS
    - Rawr
    - BlogEngine.NET
    - SDS: Scientific DataSet library and tools
    - MapWindow GIS
    - patterns & practices – Enterprise Library
    - jQuery Library for SharePoint Web Services
    - Ionics Isapi Rewrite Filter
    - MDT Web FrontEnd
    - DiffPlex - a .NET Diff Generator

    Read the article
