Search Results

Search found 20095 results on 804 pages for 'office 2010 beta'.

Page 425/804

  • Rails 3 memory issue

    - by Erik
    Hello! I'm developing a new site based on Ruby on Rails 3 beta. I knew this might be a bad idea considering it's just a beta, but I still thought it might work. Now, though, I'm having HUGE problems with Rails consuming huge amounts of memory. For my application today it consumes about 10 MB per request, and it doesn't seem to release it either. So I thought this might be because of bloat in my application, and thus I created a test app just to compare. For my test app I just generated a model with a scaffold and then created about 20 records on this model. I then went to the index page and hit refresh, and I could immediately see memory taking off! Less than my app, but still about 1-3 MB per request. I'm working on OS X Leopard, with Ruby 1.8.7, Rails 3.0.0.beta and an SQLite db for development. Does anyone recognize my problem? I would really appreciate some help here. :/ Thanks!

    Read the article

  • Unable to download .apk via webbrowser from drupal site

    - by ggrell
    I have a Drupal-based website where people can log in and see private discussion forums. This is where I want to have my beta testers for my Android application download the beta .apk files. I tested this thoroughly on my Android 1.6 based myTouch 3G, and was able to log in and download files attached to forum posts without problems. Now comes the interesting part: my testers on Droids and Nexus Ones (Android 2.0.1 and 2.1) were complaining that their downloads were failing. Since I don't have a 2.0 phone, I tried it out in a 2.0 emulator, and lo and behold, it didn't work. The download shows the indeterminate progress for a second or two, then shows "Download unsuccessful". Based on what I see in the logs, it is apparent that the server is returning a 404 for the download request from 2.0 browsers. I can download to my desktop and 1.6 phone no problem. The only reason I can think of that the server would return a 404 for a request is that for some reason the credentials or cookies aren't being passed by the download process. Logcat shows:

        http error 404 for download x

    Some background: I added the MIME type to my .htaccess like this:

        AddType application/vnd.android.package-archive apk

    I checked the server logs and see the following for failed downloads:

        xx.xx.xx.224 - - [28/Jan/2010:20:39:00 -0500] "GET /system/files/grandmajong-beta090.apk HTTP/1.1" 404 - "http://trickybits.com/forums/beta-testing/grandma-jong/latest-version-090-b1" "Mozilla/5.0 (Linux; U; Android 1.6; en-us; sdk Build/Donut) AppleWebKit/528.5+ (KHTML, like Gecko) Version/3.1.2 Mobile Safari/525.20.1"

    Read the article

  • Website often sticks, but clicking the link again everything loads fine

    - by Dave
    Hi, I have a website which normally is very fast; however, in the last week or two we've run into a problem whereby, randomly, if you click a link the browser will just sit there with the throbber spinning, but the page doesn't appear to load. If you click the link again it then responds straight away. This doesn't seem to be limited to Chrome, Firefox or IE, as we've tried them all, with all having the same problems. The site is built in ASP, connects to a MySQL database and runs on a dedicated Windows 2003 server. The firewalls in the data centre were recently changed to Cisco PIXes, but I've not been able to reproduce the problem outside of the office. Within the office, we have 11 people (only 7 today and still experiencing problems) connected to a 7Mb ADSL connection with Eclipse. I have made some changes to the network in the office, namely wiring 4 desks back to the main 24-port switch rather than to a small 5-port which then linked on to the 24-port switch... however, we were having problems before this was done, and I did this to try and rule out an issue with the switch. We have a backup ADSL connection, so I may try switching to that, or switching to a different router, but do you think this is a likely issue? Failing that, what else would you check? Thanks, sorry for the length! Dave

    Read the article

  • VS2010 express beta2 - no add reference dialog, no open file/project dialogs

    - by David
    Just installed VS2010 Express for Windows Phone last night. The install went smoothly. It creates a project, compiles, and deploys the app to the emulator. Here's the problem: when I try to "Add Reference" through the Project menu, I do not get the Add Reference dialog box. Same thing if I right-click References in the Solution Explorer and click Add Reference. That's not all: "File...Open" and "File...Open Project" also fail to throw up an Open File dialog box. When attempting any of these actions, the IDE quickly loses and regains focus. Even pressing a keyboard shortcut (Ctrl+O) causes the IDE to quickly lose and regain focus, but no Open File dialog box appears. This is what I have tried, not particularly in this order:

    1. Turned off UAC.
    2. Monitored file and registry access using Process Monitor during a File...Open operation. File activity showed mostly "SUCCESS" with a few "FAST IO DISALLOWED" and a few "INVALID DEVICE REQUEST" results. Registry activity showed mostly "SUCCESS" with some "NAME NOT FOUND" and a few "BUFFER OVERFLOW" results.
    3. Created a new, clean Windows account to run the IDE from.
    4. Forced a test project to add a reference to "System.Xml.Linq" by editing the ".csproj" project file. The project then failed to load in the IDE.

    I don't have these problems at all on 2 other Windows 7 computers with VS2010 C# Express Beta 2 installed. One machine is 32-bit and the other 64-bit, both Home Premium edition. My system: Windows 7 Home Premium, 64-bit. Other Visual Studio products installed: VS2008 C# Express, VS2008 C++ Express. One other thing to note: several months ago I installed the non-phone distribution of VS2010 C# Express Beta 2, and I had the same exact problems. Back then I chalked it up to being beta and went back to VS2008 C# Express, where I do not have these issues.

    Read the article

  • Get data on jquery ajax success

    - by jbatson
    Does anyone know how I would get everything from the opening <table> to the closing </table> in this data that is returned from AJAX with jQuery?

        // BEGIN Subsys_JsHttpRequest_Js
        Subsys_JsHttpRequest_Js.dataReady(
            '3599', // this ID is passed from JavaScript frontend
            '<table border=\"0\" width=\"100%\" cellspacing=\"0\" cellpadding=\"0\">\n <tr>\n <td>\n <script type=\"text/javascript\" language=\"javascript\" src=\"includes/ajax_sc.js\"></script>\n <div id=\"divShoppingCard\">\n\n <div class=\"infoBoxHeading\"><a href=\"shopping_cart.php\">Shopping Cart</a></div>\n\n <div>\n <div class=\"boxText\">\n<script language=\"javascript\" type=\"text/javascript\">\nfunction couponpopupWindow(url) {\n window.open(url,\"popupWindow\",\"toolbar=no,location=no,directories=no,status=no,menubar=no,scrollbars=yes,resizable=yes,copyhistory=no,width=450,height=280,screenX=150,screenY=150,top=150,left=150\")\n}\n//--></script><table width=\"100%\" cellspacing=\"0\" cellpadding=\"0\"><tr><td align=\"left\" valign=\"top\" class=\"infoBoxContents\"><span class=\"infoBoxContents\">1&nbsp;x&nbsp;</span></td><td valign=\"top\" class=\"infoBoxContents\"><a href=\"http://beta.vikingwholesale.com/catalog/eagle-vanguard-limited-edition-p-3769.html\"><span class=\"infoBoxContents\">192 Eagle Vanguard Limited Edition</span></a></td></tr><tr><td align=\"left\" valign=\"top\" class=\"infoBoxContents\"><span class=\"newItemInCart\">4&nbsp;x&nbsp;</span></td><td valign=\"top\" class=\"infoBoxContents\"><a href=\"http://beta.vikingwholesale.com/catalog/family-traditions-adrenaline-avid-black-p-3599.html\"><span class=\"newItemInCart\">085 Family Traditions Adrenaline - Avid, Black</span></a></td></tr><tr><td align=\"left\" valign=\"top\" class=\"infoBoxContents\"><span class=\"infoBoxContents\">1&nbsp;x&nbsp;</span></td><td valign=\"top\" class=\"infoBoxContents\"><a href=\"http://beta.vikingwholesale.com/catalog/painted-pony-paradigm-p-4022.html\"><span class=\"infoBoxContents\">336 Painted Pony Paradigm</span></a></td></tr></table></div>\n\n\n <div class=\"boxText\"><img src=\"images/pixel_black.gif\" width=\"100%\" height=\"1\" alt=\"\"/></div>\n\n\n <div class=\"boxText\">$940.00</div>\n\n\n</div>\n\n\n \n </div><!--end of divShoppingCard-->\n </td>\n </tr></table>',
            null
        )
        // END Subsys_JsHttpRequest_Js
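
    A sketch of one way to pull out just the table markup once the response string is in hand (the variable and selector names below are hypothetical; this assumes the payload contains one outermost <table>):

        // `response` holds the returned markup; a greedy match grabs everything
        // from the first <table> to the last </table>, nested tables included.
        var match = response.match(/<table[\s\S]*<\/table>/i);
        if (match) {
            $('#cart').html(match[0]);   // e.g. inject into a placeholder element
        }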

    Read the article

  • Generated HTML word document not displaying image correctly

    - by spiderdijon
    I'm trying to add an image to a generated HTML Word document that is embedded in a classic ASP page. The code looks something like this:

        <% Response.ContentType = "application/msword" %>
        <html xmlns:v="urn:schemas-microsoft-com:vml"
              xmlns:o="urn:schemas-microsoft-com:office:office"
              xmlns:w="urn:schemas-microsoft-com:office:word">
        ...
        <v:shape id="_x0000_s1030" type="#_x0000_t75" style='position:absolute;
          left:0;text-align:left;margin-left:0;margin-top:17.95pt;width:7in;height:116.85pt;
          z-index:2;mso-position-horizontal:center;mso-position-horizontal-relative:page;
          mso-position-vertical-relative:page'>
          <v:imagedata src="http://xxx/image001.gif" o:title="image001"/>
          <w:wrap anchorx="page" anchory="page"/>
          <w:anchorlock/>
        </v:shape><![endif]--><![if !vml]><span style='mso-ignore:vglayout;position:
          absolute;z-index:0;left:0px;margin-left:0px;margin-top:24px;width:672px;
          height:156px'><img width=672 height=156 src="http://xxx/image001.gif"
          v:shapes="_x0000_s1030"></span><![endif]>

    The image URL is correct and can be viewed through a browser; however, when the Word document opens, the image has a red X with the error message:

        The image cannot be displayed. Your computer may not have enough memory to open the image, or the image may be corrupted. Restart your computer, and then open the file again. If the red x still appears, you may have to delete the image and then insert it again.

    If I copy the HTML code and try to open the Word document on my local machine, it displays the image correctly. It just doesn't work when retrieving the document from the server. This happens for any images I try to add. Is there another way to add images to HTML-generated Word documents that can be output from an ASP page? Thanks.

    Read the article

  • Emptying the datastore in GAE

    - by colwilson
    I know what you're thinking, 'Oh, not that again!', but here we are, since Google has not yet provided a simpler method. I have been using a queue-based solution which worked fine:

        import datetime
        import logging

        from google.appengine.ext import db, deferred

        from models import *

        DELETABLE_MODELS = [Alpha, Beta, AlphaBeta]

        def initiate_purge():
            for e in DELETABLE_MODELS:
                deferred.defer(delete_entities, e, 'purging', _queue='purging')

        class NotEmptyException(Exception):
            pass

        def delete_entities(e, queue):
            try:
                q = e.all(keys_only=True)
                db.delete(q.fetch(200))
                ct = q.count(1)
                if ct > 0:
                    raise NotEmptyException('there are still entities to be deleted')
                else:
                    logging.info('processing %s completed' % queue)
            except Exception, err:
                deferred.defer(delete_entities, e, queue, _queue=queue)
                logging.info('processing %s deferred: %s' % (queue, err))

    All this does is queue a request to delete some data (once for each class), and then if the queued process either fails or knows there is still some stuff to delete, it re-queues itself. This beats the heck out of hitting refresh in a browser for 10 minutes. However, I'm having trouble deleting AlphaBeta entities; there are always a few left at the end. I think it's because that model contains reference properties:

        class AlphaBeta(db.Model):
            alpha = db.ReferenceProperty(Alpha, required=True, collection_name='betas')
            beta = db.ReferenceProperty(Beta, required=True, collection_name='alphas')

    I have tried deleting the indexes relating to these entity types, but that did not make any difference. Any advice would be appreciated, please.

    Read the article

  • Convert Excel Range to ADO.NET DataSet or DataTable, etc.

    - by htbrady
    I have an Excel spreadsheet that will sit out on a network share drive. It needs to be accessed by my WinForms C# 3.0 application (many users could be using the app and hitting this spreadsheet at the same time). There is a lot of data on one worksheet. This data is broken out into areas that I have named as ranges. I need to be able to access these ranges individually, return each range as a DataSet, and then bind it to a grid. I have found examples that use OLE and have got these to work. However, I have seen some warnings about using this method, plus at work we have been using Microsoft.Office.Interop.Excel as the standard thus far. I don't really want to stray from this unless I have to. Our users will be using Office 2003 on up as far as I know. I can get the range I need with the following code:

        MyDataRange = (Microsoft.Office.Interop.Excel.Range)
            MyWorkSheet.get_Range("MyExcelRange", Type.Missing);

    The OLE way was nice, as it would take my first row and turn those into columns. My ranges (12 total) are for the most part different from each other in number of columns. I don't know if this info would affect any recommendations. Is there any way to use Interop and get the returned range back into a DataSet?
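
    One approach - a hedged sketch rather than a definitive answer, and it assumes a using System.Data; directive - is to read the range's Value2 into an object[,] and build a DataTable from it, treating the first row as column headers the way the OLE provider did:

        // Sketch: convert an Interop Range to a DataTable, first row as headers.
        // Value2 returns a 1-based object[,] for multi-cell ranges (a lone cell
        // comes back as a scalar, which this sketch does not handle).
        static DataTable RangeToDataTable(Microsoft.Office.Interop.Excel.Range range)
        {
            object[,] values = (object[,])range.Value2;
            DataTable table = new DataTable();
            int rows = values.GetLength(0), cols = values.GetLength(1);

            for (int c = 1; c <= cols; c++)   // header row
            {
                object header = values[1, c];
                table.Columns.Add(header == null ? "Column" + c : header.ToString());
            }
            for (int r = 2; r <= rows; r++)   // data rows
            {
                DataRow row = table.NewRow();
                for (int c = 1; c <= cols; c++)
                    row[c - 1] = values[r, c] ?? DBNull.Value;
                table.Rows.Add(row);
            }
            return table;
        }

    Getting a DataSet from there is then just new DataSet() plus ds.Tables.Add(RangeToDataTable(MyDataRange)).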

    Read the article

  • Ribbon not showing up with dynamically generated ribbon XML from the database

    - by ydobonmai
    I am trying to form ribbon XML with data from the database, and the following is what I wrote:

        XNamespace xNameSpace = "http://schemas.microsoft.com/office/2006/01/customui";
        XDocument document = new XDocument();
        document.Add(
            new XElement(xNameSpace + "customUI",
                new XElement("ribbon",
                    new XElement("tabs"))));
        // more code to add the groups and the controls within the groups
        .......
        // code below to add ribbon XML to the document and to add the relationship
        RibbonExtensibilityPart ribbonExtensibilityPart = myDoc.AddNewPart<RibbonExtensibilityPart>();
        ribbonExtensibilityPart.CustomUI = new DocumentFormat.OpenXml.Office.CustomUI.CustomUI(ribbonXml.ToString());
        myDoc.CreateRelationshipToPart(ribbonExtensibilityPart);

    I don't see any error executing the above. However, when I open the changed document, I don't see my ribbon added. I see the following in CustomUI/CustomUI.xml inside the Word document:

        <?xml version="1.0" encoding="utf-8"?><customUI xmlns="http://schemas.microsoft.com/office/2006/01/customui">
          <ribbon xmlns="">
            <tabs>
            .....

    I am not sure how the "xmlns" attribute is getting added to the ribbon element. When I remove that attribute, the ribbon gets shown. Could anybody offer any idea on where I am going wrong?
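
    For what it's worth, the empty xmlns is standard LINQ to XML behavior rather than anything OpenXML-specific: an XElement built from a plain string name has no namespace, so the serializer writes xmlns="" on ribbon to undeclare the inherited default namespace. A sketch of the namespace-qualified construction, reusing the names from the snippet above:

        // Qualify every element with the customUI namespace, not just the root.
        XNamespace xNameSpace = "http://schemas.microsoft.com/office/2006/01/customui";
        XDocument document = new XDocument(
            new XElement(xNameSpace + "customUI",
                new XElement(xNameSpace + "ribbon",
                    new XElement(xNameSpace + "tabs"))));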

    Read the article

  • C# XML export for Excel table using XmlDocument

    - by Mark
    I am trying to write the following into an XML document:

        <head>
          <xml>
            <x:ExcelWorkbook>
              <x:ExcelWorksheets>
                <x:ExcelWorksheet>
                  other code here
                </x:ExcelWorksheet>
              </x:ExcelWorksheets>
            </x:ExcelWorkbook>
          </xml>
        </head>

    However, if I use the following code, it strips out the 'x:'.

        System.Xml.XmlDocument document = new System.Xml.XmlDocument();
        System.Xml.XmlElement htmlNode = document.CreateElement("html");
        htmlNode.SetAttribute("xmlns:o", "urn:schemas-microsoft-com:office:office");
        htmlNode.SetAttribute("xmlns:x", "urn:schemas-microsoft-com:office:excel");
        htmlNode.SetAttribute("xmlns", "http://www.w3.org/TR/REC-html40");
        document.AppendChild(htmlNode);
        System.Xml.XmlElement headNode = document.CreateElement("head");
        htmlNode.AppendChild(headNode);
        headNode.AppendChild(
            document.CreateElement("xml")).AppendChild(
            document.CreateElement("x:ExcelWorkbook")).AppendChild(
            document.CreateElement("x:ExcelWorksheets")).AppendChild(
            document.CreateElement("x:ExcelWorksheet")).InnerText = "other code here";

    How can I stop this from happening? Thanks!
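
    A hedged sketch of one way around it: with XmlDocument, a prefix only survives serialization when it is bound to a namespace URI, so the three-argument CreateElement(prefix, localName, namespaceURI) overload would be used for each x: element (same variable names as above):

        // Bind the "x" prefix to the Excel namespace so it is kept on output.
        const string xNs = "urn:schemas-microsoft-com:office:excel";
        System.Xml.XmlElement workbook   = document.CreateElement("x", "ExcelWorkbook",   xNs);
        System.Xml.XmlElement worksheets = document.CreateElement("x", "ExcelWorksheets", xNs);
        System.Xml.XmlElement worksheet  = document.CreateElement("x", "ExcelWorksheet",  xNs);
        worksheet.InnerText = "other code here";

        worksheets.AppendChild(worksheet);
        workbook.AppendChild(worksheets);
        headNode.AppendChild(document.CreateElement("xml")).AppendChild(workbook);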

    Read the article

  • R plotting multiple histograms on single plot to .pdf as a part of R batch script

    - by Bryce Thomas
    I am writing R scripts which play just a small role in a chain of commands I am executing from a terminal. Basically, I do much of my data manipulation in a Python script and then pipe the output to my R script for plotting. So, from the terminal I execute commands which look something like

        $ python whatever.py | R CMD BATCH do_some_plotting.R

    This workflow has been working well for me so far, though I have now reached a point where I want to overlay multiple histograms on the same plot, inspired by this answer to another user's question on Stack Overflow. Inside my R script, my plotting code looks like this:

        pdf("my_output.pdf")
        plot(hist(d$original, breaks="FD", prob=TRUE),
             col=rgb(0,0,1,1/4), xlim=c(0,4000),
             main="original - This plot is in beta")
        plot(hist(d$minus_thirty_minutes, breaks="FD", prob=TRUE),
             col=rgb(1,0,0,1/4), add=T, xlim=c(0,4000),
             main="minus_thirty_minutes - This plot is in beta")

    Notably, I am using add=T, which is presumably meant to specify that the second plot should be overlaid on top of the first. When my script has finished, the result I am getting is not two histograms overlaid on top of each other, but rather a 3-page PDF whose 3 individual plots contain the titles:

        i) Histogram of d$original
        ii) original - This plot is in beta
        iii) Histogram of d$minus_thirty_minutes

    So there are two points here I'm looking to clarify. Firstly, even if the plots weren't overlaid, I would expect just a 2-page PDF, not a 3-page PDF. Can someone explain why I am getting a 3-page PDF? Secondly, is there a correction I can make here somewhere to get just the two histograms plotted, and both of them on the same plot (i.e. a 1-page PDF)? The other Stack Overflow question/answer I linked to in the first paragraph did mention that alpha-blending isn't supported on all devices, and so I'm curious whether this has anything to do with it. Either way, it would be good to know if there is an R-based solution to my problem or whether I'm going to have to pipe my data into a different language/plotting engine.
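
    For what it's worth, the three pages can be accounted for by hist() itself drawing a plot each time it is called (in addition to returning the histogram object): the first line draws pages 1 and 2, the second line draws page 3 and then overlays it with add=T - which matches the three titles listed above exactly. A hedged sketch of one fix is to compute with plot = FALSE and draw exactly once:

        # Sketch: compute first, draw once, overlay the second histogram.
        pdf("my_output.pdf")
        h1 <- hist(d$original,             breaks = "FD", plot = FALSE)
        h2 <- hist(d$minus_thirty_minutes, breaks = "FD", plot = FALSE)
        plot(h1, freq = FALSE, col = rgb(0, 0, 1, 1/4), xlim = c(0, 4000),
             main = "original vs minus_thirty_minutes")
        plot(h2, freq = FALSE, col = rgb(1, 0, 0, 1/4), add = TRUE)
        dev.off()   # yields a single-page PDF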

    Read the article

  • How to access a web service behind a NAT?

    - by jr
    We have a product we are deploying to some small businesses. It is basically a RESTful API over SSL using Tomcat. This is installed on the server in the small business and is accessed via an iPhone or other portable device, so the devices connecting to the server could come from any number of IP addresses. The problem comes with the installation. When we install this service, setting up the port forwarding so the outside world can gain access to Tomcat always seems to become a problem. It seems most of the time the owner doesn't know the router password, etc. I am trying to research other ways we can accomplish this. I've come up with the following and would like to hear other thoughts on the topic.

    1. Set up an SSH tunnel from each client office to a central server (a command sketch follows below). Basically the remote devices would connect to that central server on a port, and that traffic would be tunneled back to Tomcat in the office. It seems kind of redundant to have SSH and then SSL, but really there is no other way to accomplish it, since end-to-end I need SSL (from device to office). I'm not sure of the performance implications here, but I know it would work. We would need to monitor the tunnel and bring it back up if it goes down, handle SSH key exchanges, etc.
    2. Set up UPnP to try and configure the hole for me. This would likely work most of the time, but UPnP isn't guaranteed to be turned on. It may be a good next step.
    3. Come up with some type of NAT traversal scheme. I'm just not familiar with these and uncertain of how exactly they work.

    We have access to a centralized server, which is required for the authentication, if that makes it any easier. What else should I be looking at to get this accomplished?
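
    For the SSH tunnel option (1) above, the shape of the setup would be a reverse port forward from each office server to the central server - a sketch with hypothetical hosts and ports:

        # Run from the office server: connections arriving at
        # central.example.com:8443 are relayed back to the local Tomcat.
        ssh -N -R 8443:localhost:8443 tunnel@central.example.com
        # Notes: sshd on the central server needs "GatewayPorts yes" for the
        # remote port to accept outside connections, and since the tunnel just
        # relays the TCP stream, SSL still terminates at Tomcat (end-to-end).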

    Read the article

  • How to solve linker error in matrix multiplication in c using lapack library?

    - by Malar
    I did matrix multiplication using the LAPACK library, and I am getting the error below. Can anyone help me?

        error LNK2019: unresolved external symbol "void __cdecl dgemm(char,char,int *,int *,int *,double *,double *,int *,double *,int *,double *,double *,int *)" (?dgemm@@YAXDDPAH00PAN1010110@Z) referenced in function _main
        1..\bin\matrixMultiplicationUsingLapack.exe : fatal error LNK1120: 1 unresolved externals

    I post my code below:

        #define matARowSize 2  // -- Matrix 1 number of rows
        #define matAColSize 2  // -- Matrix 1 number of cols
        #define matBRowSize 2  // -- Matrix 2 number of rows
        #define matBColSize 2  // -- Matrix 2 number of cols

        using namespace std;

        void dgemm(char, char, int *, int *, int *, double *, double *, int *,
                   double *, int *, double *, double *, int *);

        int main()
        {
            double iMatrixA[matARowSize*matAColSize];  // -- Input matrix 1 {m x n}
            double iMatrixB[matBRowSize*matBColSize];  // -- Input matrix 2 {n x k}
            double iMatrixC[matARowSize*matBColSize];  // -- Output matrix {m x n * n x k = m x k}
            double alpha = 1.0f;
            double beta = 0.0f;
            int n = 2;

            iMatrixA[0] = 1;
            iMatrixA[1] = 1;
            iMatrixA[2] = 1;
            iMatrixA[3] = 1;

            iMatrixB[0] = 1;
            iMatrixB[1] = 1;
            iMatrixB[2] = 1;
            iMatrixB[3] = 1;

            //dgemm('N','N',&n,&n,&n,&alpha,iMatrixA,&n,iMatrixB,&n,&beta,iMatrixC,&n);
            dgemm('N','N',&n,&n,&n,&alpha,iMatrixA,&n,iMatrixB,&n,&beta,iMatrixC,&n);

            std::cin.get();
            return 0;
        }
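
    For what it's worth, the LNK2019 is a name-mangling symptom: the prototype above is compiled as a C++ function, so the linker looks for a C++ symbol that no LAPACK/BLAS library exports. The usual shape of the fix is a C-linkage declaration of the Fortran routine - hedged, because the exported name (dgemm_, DGEMM, ...) varies with the platform and the BLAS build, and the import library must also be on the linker inputs:

        // Sketch: declare the Fortran BLAS routine with C linkage.
        // Note every argument is a pointer, including the transpose flags.
        extern "C" void dgemm_(const char* transa, const char* transb,
                               const int* m, const int* n, const int* k,
                               const double* alpha, const double* a, const int* lda,
                               const double* b, const int* ldb,
                               const double* beta, double* c, const int* ldc);

        // Call site for the 2x2 example above:
        char trans = 'N';
        dgemm_(&trans, &trans, &n, &n, &n, &alpha, iMatrixA, &n,
               iMatrixB, &n, &beta, iMatrixC, &n);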

    Read the article

  • How to do a partial database backup and restore?

    - by Workshop Alex
    Simple problem. I'm working on a single SQL Server database which is shared between several offices. Each office has its own schema inside this database, thus dividing the database into logical pieces (plus one schema that is shared between multiple offices). The database is stored on a dedicated server, and we use a single database to keep the backup/restore procedure easier. The problem, however, is that the Accounting office might be modifying a lot of data and then the Secretary office makes a mistake which requires restoration of a backup. Unfortunately, restoring the backup means that Accounting will lose their recently added data. So, the alternative solution is restoring the backup into a new database, removing the data from the old accounting schema and moving the data for accounting only from the backup to the original database. This is the current solution, and it's time-consuming and error-prone. So, is there a way to make backups of a single schema, possibly through code? And then to restore just that schema, probably through code too?
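
    The manual process described can at least be scripted. A hedged T-SQL sketch - database, file and table names here are all hypothetical - that restores the backup side by side and copies one schema's tables back into the live database:

        -- Restore the backup under a different name, next to the live database.
        RESTORE DATABASE OfficeDB_Restore
            FROM DISK = N'D:\Backups\OfficeDB.bak'
            WITH MOVE N'OfficeDB'     TO N'D:\Data\OfficeDB_Restore.mdf',
                 MOVE N'OfficeDB_log' TO N'D:\Data\OfficeDB_Restore.ldf';

        -- Repeat per table in the schema being rolled back (foreign keys and
        -- identity columns permitting; SET IDENTITY_INSERT may be needed).
        DELETE FROM OfficeDB.Secretary.Appointments;
        INSERT INTO OfficeDB.Secretary.Appointments
        SELECT * FROM OfficeDB_Restore.Secretary.Appointments;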

    Read the article

  • How to conduct an interview for a development position remotely?

    - by sharptooth
    Usually we run interviews in the office. We have a room with a table; the interviewee and one or two interviewers sit at the table, the interviewers ask questions, often accompanied with code snippets on paper, and the interviewee (hopefully) answers them, writing code snippets to illustrate his point. Usually it's something like this: an interviewer writes about five lines of C++ code and asks some specific question - just a little code. Now we need to do the same remotely. We will be in our office and the interviewee will be far away - we are asked to help hire a person for another office located abroad. Of course we can use some technology for voice calls, but I'm afraid that's the most we can count on. I see a whole set of obstacles here:

    - How do we write illustrative code snippets and exchange them efficiently?
    - What do we do to compensate for the fact that we're not native English speakers and the interviewees may or may not be native English speakers (I'm afraid this can make the conversation significantly harder)?

    Are there any best practices for this situation? How could we address the obstacles listed? What other things should we consider to run the interview most efficiently?

    Read the article

  • Is there any well-known paradigm for iterating enum values?

    - by SadSido
    I have some C++ code in which the following enum is declared:

        enum Some {
            Some_Alpha = 0,
            Some_Beta,
            Some_Gamma,
            Some_Total
        };

        int array[Some_Total];

    The values of Alpha, Beta and Gamma are sequential, and I gladly use the following cycle to iterate through them:

        for (int someNo = (int)Some_Alpha; someNo < (int)Some_Total; ++someNo) {}

    This cycle is OK until I decide to change the order of the declarations in the enum, say, making Beta the first value and Alpha the second one. That invalidates the cycle header, because now I have to iterate from Beta to Total. So, what are the best practices for iterating through an enum? I want to iterate through all the values without changing the cycle headers every time. I can think of one solution:

        enum Some {
            Some_Start = -1,
            Some_Alpha,
            ...
            Some_Total
        };

        int array[Some_Total];

    and iterate from (Start + 1) to Total, but it seems ugly and I have never seen anyone do it in code. Is there any well-known paradigm for iterating through an enum, or do I just have to fix the order of the enum values? (Let's pretend I really have some awesome reasons for changing the order of the enum values)...
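
    One common idiom - valid only while the enum stays auto-numbered from 0 with no gaps - is to iterate over the numeric range and cast, which survives any reordering of the names:

        // Sketch: Some_Total counts the values, and the first value is always 0
        // as long as no explicit values are assigned, whatever name comes first.
        for (int someNo = 0; someNo < Some_Total; ++someNo) {
            Some value = static_cast<Some>(someNo);
            // use value ...
        }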

    Read the article

  • Newbie jQuery question: Need slideshow to rotate automatically, not just when clicking navigation.

    - by Justin
    Hi everyone, this is my first post, so please forgive me if this question has been asked a million times. I'm a self-professed jQuery hack and I need a little guidance on taking this script I found and adapting it to my needs. What I'm making is an image slideshow with navigation. The script I found does this, but does not automatically cycle through the images. I'm using jQuery 1.3.2 and would rather stick with that than use the newer library. I would also prefer to edit what is already here rather than start from scratch. Anywho, here's the HTML:

        <div id="myslide">
          <div class="cover">
            <div class="mystuff">
              <img alt="&nbsp;" src="http://www.mfhc.com/wp/wp-content/uploads/2010/05/current_Denver-skyline.jpg" />
            </div>
            <div class="mystuff">
              <img alt="&nbsp;" src="http://www.mfhc.com/wp/wp-content/uploads/2010/05/pepsi_center-IS42RF-0D111C.jpg" />
            </div>
            <div class="mystuff">
              <img alt="&nbsp;" src="http://www.mfhc.com/wp/wp-content/uploads/2010/05/columbine-2689820469_D1104.jpg" />
            </div>
            <div class="mystuff">
              <img alt="&nbsp;" src="http://www.mfhc.com/wp/wp-content/uploads/2010/05/ist2_10460354-RedRocks.jpg" />
            </div>
          </div> <!-- end of div cover -->
        </div> <!-- end of div myslide -->

    And here's the jQuery:

        <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js"></script>
        <script type="text/javascript">
        $(document).ready(function (){
            $('#button a').click(function(){
                var integer = $(this).attr('rel');
                $('#myslide .cover').css({left:-820*(parseInt(integer)-1)}).hide().fadeIn();
                /*----- Width of div #mystuff (here 820) ------ */
                $('#button a').each(function(){
                    $(this).removeClass('active');
                    if($(this).hasClass('button'+integer)){
                        $(this).addClass('active')}
                });
            });
        });
        </script>

    Here's where I got the script: http://www.webdeveloperjuice.com/2010/04/07/create-lightweight-jquery-fade-manual-slideshow/ Again, if this question is too basic for this site, please let me know and possibly provide a reference link or two. Thanks a ton!
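
    As a sketch of the auto-rotate part - hedged, since it assumes the tutorial's #button navigation links with rel="1" through rel="4" are present in the page (they aren't shown in the markup above) - a timer can simply simulate the clicks the script already handles:

        // Auto-advance every 5 seconds by triggering the existing click handler.
        $(document).ready(function () {
            var current = 1;
            var total = $('#button a').length;   // number of navigation links
            setInterval(function () {
                current = (current % total) + 1; // 1, 2, 3, 4, 1, ...
                $('#button a[rel="' + current + '"]').click();
            }, 5000);
        });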

    Read the article

  • A basic T4 template for generating Model Metadata in ASP.NET MVC2

    - by rajbk
    I have been learning about T4 templates recently by looking at the awesome ADO.NET POCO entity generator. By using the POCO entity generator template as a base, I created a T4 template which generates metadata classes for a given Entity Data Model. This speeds coding by reducing the amount of typing required when creating a view specific model and its metadata. To use this template, download the template provided at the bottom and set two values in the template file. The first one should point to the EDM you wish to generate metadata for. The second is used to suffix the namespace and classes that get generated.

        string inputFile = @"Northwind.edmx";
        string suffix = "AutoMetadata";

    Add the template to your MVC 2 Visual Studio 2010 project. Once you add it, a number of classes will get added to your project based on the number of entities you have. One of these classes is shown below. Note that the DisplayName, Required and StringLength attributes have been added by the T4 template.

        //------------------------------------------------------------------------------
        // <auto-generated>
        //     This code was generated from a template.
        //
        //     Changes to this file may cause incorrect behavior and will be lost if
        //     the code is regenerated.
        // </auto-generated>
        //------------------------------------------------------------------------------

        using System;
        using System.ComponentModel;
        using System.ComponentModel.DataAnnotations;

        namespace NorthwindSales.ModelsAutoMetadata
        {
            public partial class CustomerAutoMetadata
            {
                [DisplayName("Customer ID")]
                [Required]
                [StringLength(5)]
                public string CustomerID { get; set; }

                [DisplayName("Company Name")]
                [Required]
                [StringLength(40)]
                public string CompanyName { get; set; }

                [DisplayName("Contact Name")]
                [StringLength(30)]
                public string ContactName { get; set; }

                [DisplayName("Contact Title")]
                [StringLength(30)]
                public string ContactTitle { get; set; }

                [DisplayName("Address")]
                [StringLength(60)]
                public string Address { get; set; }

                [DisplayName("City")]
                [StringLength(15)]
                public string City { get; set; }

                [DisplayName("Region")]
                [StringLength(15)]
                public string Region { get; set; }

                [DisplayName("Postal Code")]
                [StringLength(10)]
                public string PostalCode { get; set; }

                [DisplayName("Country")]
                [StringLength(15)]
                public string Country { get; set; }

                [DisplayName("Phone")]
                [StringLength(24)]
                public string Phone { get; set; }

                [DisplayName("Fax")]
                [StringLength(24)]
                public string Fax { get; set; }
            }
        }

    The generated class can be used from your project by creating a partial class with the entity name and setting the MetadataType attribute.

        namespace MyProject.Models
        {
            [MetadataType(typeof(CustomerAutoMetadata))]
            public partial class Customer
            {
            }
        }

    You can also copy the code in the generated metadata class and create your own ViewModel class. Note that the template is super basic and does not take into account complex properties. I have tested it with the Northwind database. This is a work in progress. Feel free to modify the template to suit your requirements. Standard disclaimer follows: Use At Your Own Risk; works on my machine running VS 2010 RTM/ASP.NET MVC 2.

    AutoMetaData.zip

    Mr. Incredible: Of course I have a secret identity. I don't know a single superhero who doesn't. Who wants the pressure of being super all the time?

    Read the article

  • Boot From a USB Drive Even if your BIOS Won’t Let You

    - by Trevor Bekolay
    You’ve always got a trusty bootable USB flash drive with you to solve computer problems, but what if a PC’s BIOS won’t let you boot from USB? We’ll show you how to make a CD or floppy disk that will let you boot from your USB drive. This boot menu, like many created before USB drives became cheap and commonplace, does not include an option to boot from a USB drive. A piece of freeware called PLoP Boot Manager solves this problem, offering an image that can be burned to a CD or put on a floppy disk, and enables you to boot to a variety of devices, including USB drives.

    Put PLoP on a CD

    PLoP comes as a zip file, which includes a variety of files. To put PLoP on a CD, you will need either plpbt.iso or plpbtnoemul.iso from that zip file. Either disc image should work on most computers, though if in doubt plpbtnoemul.iso should work “everywhere,” according to the readme included with PLoP Boot Manager. Burn plpbtnoemul.iso or plpbt.iso to a CD and then skip to the “Booting PLoP Boot Manager” section.

    Put PLoP on a Floppy Disk

    If your computer is old enough to still have a floppy drive, then you will need to put the contents of the plpbt.img image file found in PLoP’s zip file on a floppy disk. To do this, we’ll use a freeware utility called RawWrite for Windows. We aren’t fortunate enough to have a floppy drive installed, but if you do, it should be listed in the Floppy drive drop-down box. Select your floppy drive, then click on the “…” button and browse to plpbt.img. Press the Write button to write PLoP Boot Manager to your floppy disk.

    Booting PLoP Boot Manager

    To boot PLoP, you will need to have your CD or floppy drive boot with higher precedence than your hard drive. In many cases, especially with floppy disks, this is done by default. If the CD or floppy drive is not set to boot first, then you will need to access your BIOS’s boot menu, or the setup menu. The exact steps to do this vary depending on your BIOS - to get a detailed description of the process, search for your motherboard’s manual (or your laptop’s manual if you’re working with a laptop). In general, however, as the computer boots up, some important keyboard strokes are noted somewhere prominent on the screen. In our case, they are at the bottom of the screen. Press Escape to bring up the Boot Menu. Previously, we burned a CD with PLoP Boot Manager on it, so we will select the CD-ROM Drive option and hit Enter. If your BIOS does not have a Boot Menu, then you will need to access the Setup menu and change the boot order to give the floppy disk or CD-ROM Drive higher precedence than the hard drive. Usually this setting is found in the “Boot” or “Advanced” section of the Setup menu. If done correctly, PLoP Boot Manager will load up, giving a number of boot options. Highlight USB and press Enter. PLoP begins loading from the USB drive. Despite our BIOS not having the option, we’re now booting using the USB drive, which in our case holds an Ubuntu Live CD! This is a pretty geeky way to get your PC to boot from a USB… provided your computer still has a floppy drive. Of course, if your BIOS won’t boot from a USB it probably has one… or you really need to update it.
    Download PLoP Boot Manager
    Download RawWrite for Windows

    Read the article

  • The Beginner’s Guide to Greasemonkey User Scripts in Firefox

    - by Asian Angel
    Everybody knows that Firefox has add-ons for virtually everything, but if you don’t want to bloat your installation you’ve always got the option of Greasemonkey scripts instead. Here’s a quick primer on how to use them.

    Getting Started with User Scripts

    Once you have Greasemonkey installed, managing the extension is really easy. Left-click on the status bar icon to turn the extension on/off, and right-click to access the context menu shown here. Whether you use the Options button in the Add-ons Manager window or the context menu shown above, both will bring up the Manage User Scripts dialog. At the moment you have a nice clean slate to work with… time to get some scripts added in. The majority of user scripts can be found at two different sites, the first being appropriately named userscripts.org, where you can either browse by tag or search for a script. As you can see here, your search for a particular type of script can be quickly narrowed down based on category. There is definitely a lot to choose from. For our example we focused on the “textarea” tag. There were 62 scripts available, but we quickly found what we were looking for on the first page.

    Installing, Managing, & Using Your Scripts

    When you find a script that you want to install, visit the script’s homepage and click on the “Install” button. Note: link for this script provided below. Once you have clicked on the Install button, Greasemonkey will open up the following installation window. You will be able to view:

    - A summary of what the script does
    - A list of websites that the script is supposed to function on (our example is set for all)
    - The script source, if desired
    - A final decision on whether to install the script or cancel the process

    Right-clicking on our status bar icon shows our new script listed and active. Reopening the Manage User Scripts window shows:

    - Our new script listed in the column on the left
    - The websites/pages included
    - An option to disable the script (can also be done in the context menu)
    - The ability to edit the script
    - The ability to uninstall the script

    If you choose to edit the script, you will be asked to browse for and select a default text editor of your choice (first time only). Once you have selected a text editor you can make any changes desired to the script. We decided to test our new user script on the site. Going to the comment box at the bottom, we could easily resize the window as desired. The comment box definitely got a lot bigger.

    Conclusion

    If you prefer to keep the number of extensions to a minimum in your Firefox installation, then Greasemonkey and the Userscripts website can easily provide that extra functionality without the bloat. For added auto website script detection goodness, see our article on Greasefire. Note: See our article here for specialized How-To Geek User Style Scripts that can be added to Greasemonkey.
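
    If you’re curious what actually lives inside one of these scripts, a user script is plain JavaScript with a metadata header that tells Greasemonkey where to run it. A minimal, purely hypothetical skeleton:

        // ==UserScript==
        // @name          Example Hello Script
        // @namespace     http://example.com/scripts
        // @description   Hypothetical minimal script, for illustration only
        // @include       http://www.example.com/*
        // ==/UserScript==

        // Everything below the header runs on each page matched by @include.
        window.alert('Greasemonkey is running on this page!');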
    Links

    Download the Greasemonkey Extension (Mozilla Add-ons)
    Install the Textarea & Input Resize User Script
    Visit the Userscripts.org Website
    Visit the Userstyles.org Website

    Read the article

  • SharePoint.DesignFactory.ContentFiles–building WCM sites

    - by svdoever
    One of the use cases where we use the SharePoint.DesignFactory.ContentFiles tooling is in building SharePoint Publishing (WCM) solutions for SharePoint 2007, SharePoint 2010 and Office365. Publishing solutions are often solutions that have one instance, the publishing site (possibly with subsites), that in most cases needs to go through DTAP. If you dissect a publishing site, in most cases you have the following findings:

    - The publishing site spans a site collection
    - The branding of the site is specified in the root site, because:
      - Master pages live in the root site (/_catalogs/masterpage)
      - Page layouts live in the root site (/_catalogs/masterpage)
      - The style library lives in the root site (/Style Library) and contains images, css, javascript, xslt transformations for your CQWPs, …
      - Preconfigured web parts live in the root site (/_catalogs/wp)
    - The root site and subsites contain a document library called Pages (or your language-specific version of it) containing publishing pages using the page layouts and master pages
    - The site collection contains content types, fields and lists

    When using the SharePoint.DesignFactory.ContentFiles tooling it is very easy to create, test, package and deploy the artifacts that can be uploaded to the SharePoint content database. This can be done in a fast and simple way without the need to create and deploy WSP packages. If we look at the above list of artifacts, we can use SharePoint.DesignFactory.ContentFiles for master pages, page layouts, the style library, web part configurations, and initial publishing pages (these are normally made through the SharePoint web UI). Some artifacts in the above list, like content types, fields and lists, can NOT be handled by SharePoint.DesignFactory.ContentFiles, because they can't be uploaded to the SharePoint content database. The good thing is that these are the artifacts that don't change that much in the development of a SharePoint Publishing solution. There are, however, multiple ways to create these artifacts:

    1. Use paper script: create them manually in each of the environments based on documentation
    2. Automate the creation of the artifacts using (PowerShell) script
    3. Develop a WSP package to create these artifacts

    I'm not a big fan of the third option (see my blog post Thoughts on building deployable and updatable SharePoint solutions). It is a lot of work to create content types, fields and list definitions using all kinds of XML files, and it is not allowed to modify these artifacts when in use. I know… SharePoint 2010 has some content type upgrade possibilities, but I think it is just too cumbersome. The first option has the problem that content types and fields get IDs, and these IDs must be used by the metadata on, for example, page layouts. No problem for SharePoint.DesignFactory.ContentFiles, because it supports deploy-time resolving of these IDs using PowerShell. For example, consider the following metadata definition for the page layout, contactpage-wcm.aspx.properties.ps1:

        # This script must return a hashtable @{ name=value; ... } of field name-value pairs
        # for the content file that this script applies to.
        # On deployment to SharePoint, these values are written as fields in the corresponding list item (if any).
        # Note that fields must exist; they can be updated but not created or deleted.
        # This script is called right after the file is deployed to SharePoint.
        # You can use the script parameters and arbitrary PowerShell code to interact with SharePoint,
        # e.g. to calculate properties and values at deployment time.

        param([string]$SourcePath, [string]$RelativeUrl, $Context)

        @{
            "ContentTypeId" = $Context.GetContentTypeID('GeneralPage');
            "MasterPageDescription" = "Cloud Aviator Contact pagelayout (wcm - don't use)";
            "PublishingHidden" = "1";
            "PublishingAssociatedContentType" = $Context.GetAssociatedContentTypeInfo('GeneralPage')
        }

    The PowerShell functions GetContentTypeID and GetAssociatedContentTypeInfo can resolve the required information at deploy time from the server we are deploying to. I personally prefer the second option - automate creation through PowerShell - because there are PowerShell scripts available to export content types and fields. An example project structure for a typical SharePoint WCM site looks like this (screenshot in the original post). Note that this project uses DualLayout. So if you build Publishing sites using SharePoint, check out the completely free SharePoint.DesignFactory.ContentFiles tooling and start flying!

    Read the article

  • If You Could Cut Your Meeting Times in ½ Would You?

    - by [email protected]
    By Brian Dayton on April 22, 2010 2:02 PM

    I know it sounds like a big promise. And what I'm thinking about may not cut a :60 minute meeting into :30 minutes, but it could make meetings and interactions up to 2X more productive. How? Social Media for the Enterprise, Not Social Media In the Enterprise. Bear with me. I'm not talking about whether or not workers should or shouldn't have access to Facebook on corporate networks. That topic has been discussed at length. I'm also not talking about the direct benefits of social networking tools like Presence (the ability to see someone online and ask a question in real-time), blogs, RSS feeds or external tools like Twitter.

    The Un-Measurable Benefits

    Would you do something that you believe will have a positive effect - but can't be measured? It's impossible to quantify the effectiveness of a meeting. However, what I am talking about would be more of a byproduct of all of the social networking tools above. Here's the hypothesis: As I've gotten more and more busy with work, family, travel and kids - and the same has happened to my friends and family - I'm less and less connected. But by introducing Facebook to my life I've not only made connections with longtime friends whom I haven't spoken to in years - but I've increased the pace and quality of interactions, on and offline, with close friends who I see and speak to every week. In some cases it even enhances the connections and interactions with those I see or speak to every day. The same holds true in an organization. Especially a larger one with highly matrixed organizational structures. You work with people on a project, new people come in with each different project and a disproportionate amount of time is spent getting oriented and staying current. Going back to the initial value proposition - making meetings shorter/more effective - a large amount of time is spent:

    - At Project Kick-off: Meeting and understanding team members' histories, goals & roles
    - Ongoing: Summarizing events since the last meeting or update email

    In my personal, Facebook life today I know that:

    - My best friend from college has been stranded in India for 5 days because of the volcano in Iceland and is now only 250 miles from home
    - One of my co-workers started conference calls at 6:30 this morning
    - My wife wasn't terribly pleased with my painting skills in our new bathroom (disclosure: she told me this face to face too)

    Strengthening Weak Links

    A recent article in CIO Magazine, Three Dangerous Social Media Misconceptions (Kristen Burnham, March 12, 2010), calls out the #1 misconception as follows:

    1. "Face-to-face relationships are far more valuable than virtual ones." While some level of physical interaction will always add value to relationships, Gartner says that come 2020, most relationships and teams will be based on "weak links" - that is, you may not have personally met a contact, but you'll know of or may have interacted with him via social sites like Facebook, LinkedIn and Twitter. The sooner your enterprise adopts these tools, the sooner your employees will learn them, and the sooner you'll begin to cultivate these relationships-of-the-future.

    I personally believe that it's not an either/or choice between face-to-face and virtual interactions. In fact, I'll be as bold as saying it doesn't matter. I can point to two extremely valuable work relationships that I've had over the past 5 years:

    - I shared an office with one of them
    - I met the other person, face-to-face, only once

    Both relationships were very productive. The dynamics were similar. The communication tactics differed immensely. What does matter is the quality, frequency and relevance of interactions. Still sound like too much? An over-promise? Stay tuned for my next post, The Gap Between Facebook and LinkedIn. I'll also connect some of the dots with where Oracle Applications and technologies are headed.

    Read the article

  • IntelliTrace As a Learning Tool for MVC2 in a VS2010 Project

    - by Sam Abraham
    IntelliTrace is a new feature in Visual Studio 2010 Ultimate Edition. I see this valuable tool as a "Program Execution Recorder" that captures information about events and calls taking place as soon as we hit the VS2010 play (Start Debugging) button or the F5 key. Many online resources already discuss IntelliTrace and the benefit it brings to both developers and testers alike, so I see no value in just repeating this information. In this brief blog entry, I would like to share with you how I will be using IntelliTrace in my upcoming talk at the Ft Lauderdale ArcSig .Net User Group Meeting on April 20th 2010 (check http://www.fladotnet.com for more information), as a learning tool to demonstrate the internals of the lifecycle of an MVC2 application. I will also be providing some helpful links that cover IntelliTrace in more detail at the end of my article for reference.

    IntelliTrace is set up by default to only capture execution events. Microsoft did such a great job of optimizing its recording process that I haven't even felt the slightest performance hit with IntelliTrace running as I was debugging my solutions and projects. For my purposes here, however, I needed to capture more information beyond execution events, so I turned on the option for capturing calls in addition to events, as shown in Figures 1 and 2. Changing capture options will require us to stop our debugging session and start over for the new settings to take effect.

    Figure 1 – Access IntelliTrace options via the Tools->Options menu items
    Figure 2 – Change IntelliTrace options to capture call information as well as events

    Notice the warning with regards to potentially degrading performance when selecting to capture call information in addition to the default events-only setting. I have found this warning to be true. My subsequent tests showed slowness in page load times compared to rendering those same exact pages with the "events-only" option selected. Execution recording is auto-started along with the new debugging session of our project. At this point, we can simply interact with the application and continue executing normally until we decide to "play back" the code we have executed so far. For code replay, the first step is to "break" the current execution, as shown in Figure 3.

    Figure 3 – Break to replay recording

    A few tries later, I found a good process to quickly find and demonstrate the MVC2 page lifecycle. First off, we start with the event view, as shown in Figure 4, until we find an interesting event that needs further studying.

    Figure 4 – Going through IntelliTrace's events and picking a specific entry of interest

    We can now, for instance, study how the highlighted HTTP GET request is being handled by clicking on the "Calls View" for that particular event. Notice that IntelliTrace shows us all calls that took place in servicing that GET request. Double-clicking on any call takes us to a more granular view of the call stack within that clicked call, up until getting to a specific line of code where we can do a line-by-line replay of the execution from that point onwards using F10 or F11, just like our typical good old VS2008 debugging allowed us to do.

    Figure 5 – Switching to call view on an event of interest
    Figure 6 – Double-clicking on a call shows a more granular view of the call stack

    In conclusion, the introduction of IntelliTrace as a new addition to the VS developers' tool arsenal enhances the development and debugging experience and effectively tackles the "no-repro" problem. It will also hopefully enhance my audience's experience listening to me speak about the MVC2 page lifecycle, which I can now easily demonstrate visually, thereby improving the probability of keeping everybody awake a little longer.

    IntelliTrace references:
    http://msdn.microsoft.com/en-us/magazine/ee336126.aspx
    http://msdn.microsoft.com/en-us/library/dd264944(VS.100).aspx

    Read the article

  • Error Handling Examples(C#)

    “The purpose of reviewing the Error Handling code is to assure that the application fails safely under all possible error conditions, expected and unexpected. No sensitive information is presented to the user when an error occurs.” (OWASP, 2011)

    No Error Handling

    The absence of error handling is still a form of error handling. Based on the code in Figure 1, if an error occurred and was not handled within either the ReadXml or BuildRequest methods, the error would bubble up to the Search method. Since this method does not handle any exceptions, the error will then bubble up the stack trace. If this continues and the error is not handled within the application, then the environment in which the application is running will notify the user that an error occurred, in a manner that depends on the type of application.

    Figure 1: No Error Handling

        public DataSet Search(string searchTerm, int resultCount)
        {
            DataSet dt = new DataSet();
            dt.ReadXml(BuildRequest(searchTerm, resultCount));
            return dt;
        }

    Generic Error Handling

    One simple way to add error handling is to catch all errors by default. If you examine the code in Figure 2, you will see a try-catch block. On April 6th, 2010, Louis Lazaris clearly described a try-catch statement by defining both the try and catch aspects of the statement. “The try portion is where you would put any code that might throw an error. In other words, all significant code should go in the try section. The catch section will also hold code, but that section is not vital to the running of the application. So, if you removed the try-catch statement altogether, the section of code inside the try part would still be the same, but all the code inside the catch would be removed.” (Lazaris, 2010) He also states that any error occurring in the try section stops the execution of the try section and redirects all execution to the catch section. The catch section receives an object containing information about the error that occurred, so that the system can gracefully handle the error properly. When errors occur, they are commonly logged in some form. This could be an email, a database entry, a web service call, a log file, or just an error message displayed to the user. Depending on the error, sometimes applications can recover, while other errors force an application to close.

    Figure 2: Generic Error Handling

        public DataSet Search(string searchTerm, int resultCount)
        {
            DataSet dt = new DataSet();
            try
            {
                dt.ReadXml(BuildRequest(searchTerm, resultCount));
            }
            catch (Exception ex)
            {
                // Handle all exceptions
            }
            return dt;
        }

    Error Specific Error Handling

    Like generic error handling, error specific error handling allows for the catching of specific known errors that may occur. For example, wrapping a try-catch statement around a SOAP web service call would allow the application to handle any error that was generated by the SOAP web service. Now suppose the system wanted to send a message to the web service provider every time a SOAP error occurred, but did not want to notify them of any other type of error, like a network timeout issue. This would be very tedious to accomplish using the generic error handling methodology, which brings us to the use case for error specific error handling: it allows a try-catch statement to catch various types of errors and react to each differently. In Figure 3, the code attempts to handle TimeoutException differently compared to how it potentially handles all other errors. This allows for specific error handling for each type of known error, while still allowing for error handling of any unknown error that may occur during the execution of the try-catch statement.

    Figure 3: Error Specific Error Handling

        public DataSet Search(string searchTerm, int resultCount)
        {
            DataSet dt = new DataSet();
            try
            {
                dt.ReadXml(BuildRequest(searchTerm, resultCount));
            }
            catch (TimeoutException ex)
            {
                // Handle TimeoutException only
            }
            catch (Exception)
            {
                // Handle all other exceptions
            }
            return dt;
        }

    Read the article

  • HTML Presence Controls for Communications Server 14 CodePlex Project

    Showing Presence on the Web

    If you're running Office Communicator Server 2007 R2, you know that your only out-of-the-box option for showing presence on the web is to use the NameControl ActiveX control that ships as part of Office. Being an ActiveX control, this obviously means that you're limited to Internet Explorer. Also, nobody likes ActiveX controls. What if you want to show the presence of users in a pure ASP.NET or HTML application and can't assume that the user has Communicator installed? You need an ASP.NET or HTML presence control.

    HTML Presence Controls for Microsoft Communications Server 14

    We recently worked with the UC team at Microsoft on a keynote demo for TechEd 2010 in New Orleans. The demo was for a fictitious airline - Fabrikam Airlines - that wanted to show the presence of customer service and reservations agents on its website. Customers could also start an instant message conversation with the agents using a Silverlight web chat window that used WCF to communicate with the backend UCMA application. We built HTML Presence Controls that use AJAX to poll a REST-based WCF service running in IIS and hosting a UCMA 3.0 presence subscription application. Microsoft has graciously allowed us to publish these on CodePlex so that the development community can benefit from them: http://htmlpresencecontrols.codeplex.com/ We will be maintaining the CodePlex project as new builds of UCMA 3.0 become available. Check out the project home page on CodePlex for some more in-depth details on how the controls are implemented.

    ASP.NET Server Control Implementation

    We're providing an ASP.NET server control implementation that you can use stand-alone or in a GridView or Repeater (or other layout control). The control has properties that allow you to control its appearance, e.g. you can choose whether or not to show the contact's name or availability text. You can also use the server control in a layout control such as a GridView by putting it in a TemplateColumn and binding to the SIP URI in the data source.

    Disclaimer

    Once we started working on these, we realized why Microsoft hasn't shipped such controls as part of the product. There are some tradeoffs you have to be aware of when using these controls; here's the high level.

    Privacy

    The backend UCMA 3.0 application that subscribes to the presence of contacts runs as a trusted application and can thus retrieve the presence of any user in the organization. There's currently no good way in UCMA to apply any privacy rules to ensure that the consumer of the presence controls has permission to see the presence of the contacts that the controls are bound to. Just to be absolutely crystal clear: these controls provide a way to query the presence of any user in the organization, regardless of the privacy relationship between the person consuming the controls and the contacts whose presence is being displayed. We're exploring options for a design pattern that would allow you to inject some privacy controls. Keep in mind, though, that you would most likely be responsible for implementing this logic, as there is currently no functionality in UCMA that allows you to do that.

    Polling the WCF REST Service

    The controls poll the backend WCF service to retrieve the presence of contacts - you can control the refresh interval so that they poll less often. We implemented a caching layer so that the WCF service is always communicating with a presence cache; it never communicates directly with Communications Server.
    For example, if your web page is showing the presence of sip:[email protected] and 500 people have the page open, the presence cache only contains one instance of the subscription - Communications Server is not being polled 500 times for the presence of that contact. Once the presence of a contact changes, it is updated in the cache. There are some server-based push mechanisms that would work nicely here, such as the one that Outlook Web Access 2010 uses. Unfortunately we didn't have time to explore these options.

    Community Contribution

    Take a look at the project Issue Tracker; there are a couple of things we can use some help with. Shoot me a note if you're interested in contributing to the project.
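
    As a footnote to the server control section above, the GridView usage would look roughly like the sketch below. The tag prefix, control name and property names here are hypothetical illustrations only - check the CodePlex project for the control's actual API:

        <%-- Hypothetical markup: a presence control in a GridView template
             column, bound to a SIP URI column in the data source. --%>
        <asp:GridView ID="AgentsGrid" runat="server" AutoGenerateColumns="false">
          <Columns>
            <asp:TemplateField HeaderText="Agent">
              <ItemTemplate>
                <presence:PresenceControl runat="server"
                    SipUri='<%# Eval("SipUri") %>'
                    ShowDisplayName="true" />
              </ItemTemplate>
            </asp:TemplateField>
          </Columns>
        </asp:GridView>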

    Read the article
