Search Results

Search found 3683 results on 148 pages for 'lucky luck'.

  • Firefox and TinyMCE 3.4.9 won't play nice together

    - by Patricia
    I've submitted a bug to the tinyMCE people, but i'm hoping someone else has come across this and has a suitable workaround. Here's the situation: I've got a form with a tinyMCE control that i load into a jquery dialog. it works great the first time, then after they close it, and open a new one. any interaction with the tinyMCE control gives: "Node cannot be used in a document other than the one in which it was created" it also doesn't fill the control with the text it's supposed to be prepopulated with. In my jquery dialogs i have the following scripts in my beforeClose handler: if (typeof tinyMCE != 'undefined') { $(this).find(':tinymce').each(function () { var theMce = $(this); tinyMCE.execCommand('mceFocus', false, theMce.attr('id')); tinyMCE.execCommand('mceRemoveControl', false, theMce.attr('id')); $(this).remove(); }); } and here's my tinyMCE setup script: $('#' + controlId).tinymce({ script_url: v2ScriptPaths.TinyMCEPath, mode: "none", elements: controlId, theme: "advanced", plugins: "paste", paste_retain_style_properties: "*", theme_advanced_toolbar_location: "top", theme_advanced_buttons1: "bold, italic, underline, strikethrough, separator, justifyleft, justifycenter, justifyright, justifyfull, indent, outdent, separator, undo, redo, separator, numlist, bullist, hr, link, unlink,removeformat", theme_advanced_buttons2: "fontsizeselect, forecolor, backcolor, charmap, pastetext,pasteword,selectall, sub, sup", theme_advanced_buttons3: "", language: v2LocalizedSettings.languageCode, gecko_spellcheck : true, onchange_callback: function (editor) { tinyMCE.triggerSave(); }, setup: function (editor) { editor.onInit.add(function (editor, event) { if (typeof v2SetInitialContent == 'function') { v2SetInitialContent(); } }) } }); is there anything obvious here? i've got all that complicated removal stuff in the close b/c tinymce doesn't like having it's html removed without it being removed. the setInitialContent() stuff is just so i can pre-load it with their email signature if they have one. it's broken with or without that code so that's not the problem. i was forced to update to 3.4.9 because of this issue: http://www.tinymce.com/forum/viewtopic.php?id=28400 so if someone can solve that, it would help this situation to. I have tried the suggestion from aknosis with no luck. EDIT: I originally thought this only affected firefox 11, but i have downloaded 10, and it is also affected. EDIT: Ok, i've trimmed down all my convoluted dynamic forms and links that cause this into a fairly simple example. 
the code for the base page: <a href="<%=Url.Action(MVC.Temp.GetRTEDialogContent)%>" id="TestSendEmailDialog">Get Dialog</a> <div id="TestDialog"></div> <script type="text/javascript"> $('#TestSendEmailDialog').click(function (e) { e.preventDefault(); var theDialog = buildADialog('Test', 500, 800, 'TestDialog'); theDialog.dialog('open'); theDialog.empty().load($(this).attr('href'), function () { }); }); function buildADialog(title, height, width, dialogId, maxHeight) { var customDialog = $('#' + dialogId); customDialog.dialog('destory'); customDialog.dialog({ title: title, autoOpen: false, height: height, modal: true, width: width, close: function (ev, ui) { $(this).dialog("destroy"); $(this).empty(); }, beforeClose: function (event, ui) { if (typeof tinyMCE != 'undefined') { $(this).find(':tinymce').each(function () { var theMce = $(this); tinyMCE.execCommand('mceFocus', false, theMce.attr('id')); tinyMCE.execCommand('mceRemoveControl', false, theMce.attr('id')); $(this).remove(); }); } } }); return customDialog; } </script> and the code for the page that is loaded into the dialog: <form id="SendAnEmailForm" name="SendAnEmailForm" enctype="multipart/form-data" method="post"> <textarea cols="50" id="MessageBody" name="MessageBody" rows="10" style="width: 710px; height:200px;"></textarea> </form> <script type="text/javascript"> $(function () { $('#MessageBody').tinymce({ script_url: v2ScriptPaths.TinyMCEPath, mode: "exact", elements: 'MessageBody', theme: "advanced", plugins: "paste, preview", paste_retain_style_properties: "*", theme_advanced_toolbar_location: "top", theme_advanced_buttons1: "bold, italic, underline, strikethrough, separator, justifyleft, justifycenter, justifyright, justifyfull, indent, outdent, separator, undo, redo, separator, numlist, bullist, hr, link, unlink,removeformat", theme_advanced_buttons2: "fontsizeselect, forecolor, backcolor, charmap, pastetext,pasteword,selectall, sub, sup, preview", theme_advanced_buttons3: "", language: 'EN', gecko_spellcheck: true, onchange_callback: function (editor) { tinyMCE.triggerSave(); } }); }); </script> a very interesting thing i noticed, if i move the tinyMCE setup into the load callback, it seems to work. this isn't really a solution though, because when i'm loading dialogs, i don't know in the code ahead of time if it has tinyMCE controls or not!

  • JSF flash.keep and redirects do not seem to work

    - by user384706
    Hi, I am trying to use JSF 2.0 flash via redirects. I have a class named UserBean (ManagedScoped and RequestScoped). It has a method called getValuesFromFlash which essentially gets the values from the flash and sets the coresponding properties of the UserBean public void getValuesFromFlash() { Flash flash = FacesContext.getCurrentInstance().getExternalContext().getFlash(); firstName= (String) flash.get("firstName"); lastName = (String) flash.get("lastName"); } In the entry page the <h:inputText> are bound to the flash. Snippet follows: Entry.xhtml <h:form> <table> <tr> <td>First Name:</td> <td> <h:inputText id="fname" value="#{flash.firstName}" required="true"/> <h:message for="fname" /> </td> </tr> <tr> <td>Last Name:</td> <td> <h:inputText id="lname" value="#{flash.lastName}" required="true"/> <h:message for="lname" /> </td> </tr> </table <p><h:commandButton value="Submit" action="confirmation?faces-redirect=true" /></p> </h:form> In confirmation.xhtml these values are supposed to be displayed but they are not. Snippet follows: <table> <tr> <td>First Name:</td> <td> <h:outputText value="#{flash.keep.firstName}" /> </td> </tr> <tr> <td>Last Name:</td> <td> <h:outputText value="#{flash.keep.lastName}"/> </td> </tr> </table> <h:form> <p><h:commandButton value="Confirm" action="finished?faces-redirect=true" /></p> </h:form> In finished.xhtml I want all these to be displayed so: <h:body> <h:form> #{userBean.getValuesFromFlash( )} <table> <tr> <td>First Name:</td> <td> <h:outputText value="#{userBean.firstName}" /> </td> </tr> <tr> <td>Last Name:</td> <td> <h:outputText value="#{userBean.lastName}"/> </td> </tr> </table> </h:form> </h:body> Instead when I get redirected to confirmation.xhtml neither first nor last name is displayed, despite I used flash.keep Also in finished.xhtml I get org.apache.el.parser.ParseException: Encountered " "(" "( "" at line 1, column 30. Was expecting one of: "}" ... "." If I remove the parenthesis from the expression #{userBean.getValuesFromFlash( )} and make it = #{userBean.getValuesFromFlash} javax.el.ELException: /done.xhtml: Property 'getValuesFromFlash' not found on type com.UserBean But why no property??It is a method! Why is the method not visible? What I am doing wrong here please? I am using JSF2.0 (MyFaces), Eclipse Helios and Tomcat 6 (Have also tried in Tomcat 7 but no luck). Any help is appreciated. Thanks!
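A hedged note on the two EL errors reported above: the parse exception from org.apache.el is what the EL 2.1 parser bundled with Tomcat 6 produces for a parenthesised call such as #{userBean.getValuesFromFlash()}, since invoking bean methods that way generally requires EL 2.2; and without the parentheses EL treats the name as a property, so it looks for a getGetValuesFromFlash() accessor, which is why it reports "Property 'getValuesFromFlash' not found". One way to sidestep both, sketched below on the assumption that UserBean is annotation-configured, is to copy the flash values into the bean once, right after it is created, so the pages only read plain properties:

    import javax.annotation.PostConstruct;
    import javax.faces.bean.ManagedBean;
    import javax.faces.bean.RequestScoped;
    import javax.faces.context.FacesContext;
    import javax.faces.context.Flash;

    @ManagedBean
    @RequestScoped
    public class UserBean {
        private String firstName;
        private String lastName;

        // Runs once per request, after construction, so the flash is read before
        // the page evaluates #{userBean.firstName} and #{userBean.lastName}.
        @PostConstruct
        public void init() {
            Flash flash = FacesContext.getCurrentInstance().getExternalContext().getFlash();
            firstName = (String) flash.get("firstName");
            lastName = (String) flash.get("lastName");
        }

        public String getFirstName() { return firstName; }
        public String getLastName() { return lastName; }
    }

This is only a sketch of the wiring; whether the values survive the second redirect still depends on flash.keep being evaluated while confirmation.xhtml renders, as the question already does.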

  • Is there an equivalent to Java's ClassFileTransformer in .NET? (a way to replace a class)

    - by Alix
    I've been searching for this for quite a while with no luck so far. Is there an equivalent to Java's ClassFileTransformer in .NET? Basically, I want to create a class CustomClassFileTransformer (which in Java would implement the interface ClassFileTransformer) that gets called whenever a class is loaded, and is allowed to tweak it and replace it with the tweaked version. I know there are frameworks that do similar things, but I was looking for something more straightforward, like implementing my own ClassFileTransformer. Is it possible? EDIT #1. More details about why I need this: Basically, I have a C# application and I need to monitor the instructions it wants to run in order to detect read or write operations to fields (operations Ldfld and Stfld) and insert some instructions before the read/write takes place. I know how to do this (except for the part where I need to be invoked to replace the class): for every method whose code I want to monitor, I must: Get the method's MethodBody using MethodBase.GetMethodBody() Transform it to byte array with MethodBody.GetILAsByteArray(). The byte[] it returns contains the bytecode. Analyse the bytecode as explained here, possibly inserting new instructions or deleting/modifying existing ones by changing the contents of the array. Create a new method and use the new bytecode to create its body, with MethodBuilder.CreateMethodBody(byte[] il, int count), where il is the array with the bytecode. I put all these tweaked methods in a new class and use the new class to replace the one that was originally going to be loaded. An alternative to replacing classes would be somehow getting notified whenever a method is invoked. Then I'd replace the call to that method with a call to my own tweaked method, which I would tweak only the first time is invoked and then I'd put it in a dictionary for future uses, to reduce overhead (for future calls I'll just look up the method and invoke it; I won't need to analyse the bytecode again). I'm currently investigating ways to do this and LinFu looks pretty interesting, but if there was something like a ClassFileTransformer it would be much simpler: I just rewrite the class, replace it, and let the code run without monitoring anything. An additional note: the classes may be sealed. I want to be able to replace any kind of class, I cannot impose restrictions on their attributes. EDIT #2. Why I need to do this at runtime. I need to monitor everything that is going on so that I can detect every access to data. This applies to the code of library classes as well. However, I cannot know in advance which classes are going to be used, and even if I knew every possible class that may get loaded it would be a huge performance hit to tweak all of them instead of waiting to see whether they actually get invoked or not. POSSIBLE (BUT PRETTY HARDCORE) SOLUTION. In case anyone is interested (and I see the question has been faved, so I guess someone is), this is what I'm looking at right now. Basically I'd have to implement the profiling API and I'll register for the events that I'm interested in, in my case whenever a JIT compilation starts. An extract of the blogpost: In your ICorProfilerCallback2::ModuleLoadFinished callback, you call ICorProfilerInfo2::GetModuleMetadata to get a pointer to a metadata interface on that module. QI for the metadata interface you want. Search MSDN for "IMetaDataImport", and grope through the table of contents to find topics on the metadata interfaces. 
Once you're in metadata-land, you have access to all the types in the module, including their fields and function prototypes. You may need to parse metadata signatures and this signature parser may be of use to you. In your ICorProfilerCallback2::JITCompilationStarted callback, you may use ICorProfilerInfo2::GetILFunctionBody to inspect the original IL, and ICorProfilerInfo2::GetILFunctionBodyAllocator and then ICorProfilerInfo2::SetILFunctionBody to replace that IL with your own. The great news: I get notified when a JIT compilation starts and I can replace the bytecode right there, without having to worry about replacing the class, etc. The not-so-great news: you cannot invoke managed code from the API's callback methods, which makes sense but means I'm on my own parsing the IL code, etc., as opposed to being able to use Cecil, which would have been a breeze. I don't think there's a simpler way to do this without using AOP frameworks (such as PostSharp). If anyone has any other idea, please let me know. I'm not marking the question as answered yet.
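For readers who don't know the Java mechanism the question keeps referring to, a minimal sketch of a ClassFileTransformer may help frame what a .NET equivalent would have to do; the class name is the question's own, and registration through a -javaagent jar (whose manifest names the premain class) is assumed rather than shown:

    import java.lang.instrument.ClassFileTransformer;
    import java.lang.instrument.Instrumentation;
    import java.security.ProtectionDomain;

    public class CustomClassFileTransformer implements ClassFileTransformer {

        // The JVM calls this for every class as it is loaded; returning a new
        // byte[] replaces the class bytes, returning null leaves them untouched.
        @Override
        public byte[] transform(ClassLoader loader, String className,
                                Class<?> classBeingRedefined,
                                ProtectionDomain protectionDomain,
                                byte[] classfileBuffer) {
            return null; // inspect or rewrite classfileBuffer here
        }

        // Entry point named by the agent jar's Premain-Class manifest attribute.
        public static void premain(String agentArgs, Instrumentation inst) {
            inst.addTransformer(new CustomClassFileTransformer());
        }
    }

The profiler-API route described above (JITCompilationStarted plus SetILFunctionBody) is the closest unmanaged analogue to this hook that the question identifies.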

  • IE7 li ul bug on dropdown menu

    - by Berns
    hoping one of you guys can help me please. I have a basic list menu with two dropdowns. This all works fine on all browsers except IE6 and IE7. Please take a look at my markup. <nav> <ul id="topNav" ><li id="topNavFirst"><a href="../about/about.php" id="aboutNav">About Us</a></li ><li id="topNavSecond"><a href="../people/our-people.php" id="peopleNav">Our People</a ><ul id="subList1"><li><a href="../people/mike-hadfield.php">Mike Hadfield</a></li ><li><a href="../people/karen-sampson.php">Karen Sampson</a></li ><li><a href="../people/milhana-farook.php">Milhana Farook</a></li ><li><a href="../people/kim-crook.php">Kim Crook</a></li ><li><a href="../people/amanda-lynch.php">Amanda Lynch</a></li ><li><a href="../people/gideon-scott.php">Gideon Scott</a></li ><li><a href="../people/paul-fuller.php">Paul Fuller</a></li ><li><a href="../people/peter-chaplain.php">Peter Chaplain</a></li ><li><a href="../people/laura-hutley.php">Laura Hutley</a></li ></ul ></li ><li id="topNavThird"><a href="../services/our-services.php" id="servicesNav">Our Services</a ><ul id="subList2"><li><a href="../services/company-and-commercial.php">Company &amp; Commercial</a></li ><li><a href="../services/employment.php">Employment</a></li ><li><a href="../services/civil-litigation.php">Civil Litigation</a></li ><li><a href="../services/debt-recovery.php">Debt Recovery</a></li ><li><a href="../services/conveyancing.php">Conveyancing</a></li ><li><a href="../services/commercial-property.php">Commerical Property</a></li ><li><a href="../services/wills-and-probate.php">Wills &amp; Probate</a></li ><li><a href="../services/family.php">Matrimonial &amp; Family</a></li ></ul ></li ><li><a href="../news/news.php" id="newsNav">News</a></li ><li><a href="../careers/careers.php" id="careersNav">Careers</a></li ><li><a href="../contact/contact.php" id="contactNav">Contact</a></li ></ul><!-- /topNav --> </nav>? and the css a {text-decoration:none;} #topNav { float:right; height:30px; margin:0; font-size:12px; } #topNav li { display:inline; float:left; list-style:none; color:#666; border-left: 1px solid #666; padding: 0 3px 0 3px; position:relative; } #topNav ul a { white-space:nowrap; } #topNav li a:hover { border-bottom:2px solid #369; } #topNavSecond a:hover { border-bottom:2px solid transparent !important; } #topNavFirst { border-left: 1px solid transparent !important; } /*****OUR-PEOPLE DROPDOWN*****/ #topNav ul{ background:#fff; border:1px solid #666; border-top:0px solid transparent; border-bottom:2px solid #666; list-style:none; position:absolute; left:-9999px; width:100px; text-align:left; padding:5px 0 5px 0px; margin:0 0 0 -4px; z-index:10; -webkit-box-shadow: 1px 1px 1px #666; -moz-box-shadow: 1px 1px 1px #666; box-shadow: 1px 1px 1px #666; vertical-align: bottom; } #topNav ul li{ display:block; border-left:0px; margin-bottom: 0px; padding:0; vertical-align: bottom; } #topNav ul a{ padding:0 0 0 5px; } #topNav li:hover ul{ left:auto; } #topNav li:hover a { color:#369; } #topNav li:hover ul a{ text-decoration:none; color:#666; } #topNav li:hover ul li a:hover{ color:#fff;; width:100%; border-bottom:0px solid transparent !important; } #topNav ul li:hover { background:#369; display: block; } #topNav ul li a { display: block; padding:0 0 0 4px; } /************/ /*****OUR-SERVICES DROPDOWN*****/ #topNavThird a:hover { border-bottom:2px solid transparent !important; } #topNavThird ul{ /*background:#fff url(images/service-ul-bg.png) no-repeat;*/ width:135px !important; /*margin-left:120px !important;*/ }? 
Here it is working perfectly: http://jsfiddle.net/BcWd9/ And here is a screenshot of how it looks in IE7: hadfield.andymcnallydesign.co.uk/images/ie7-error.jpg As you can see, the ul appears to the right of the li rather than to the left, and it overlays the top list. I've tried removing whitespace, but no luck. Any ideas? If one of you can help, it would be much appreciated.

  • Logging with log4j on Tomcat jruby-rack for a Rails 3 application

    - by John
    I just spent the better part of 3 hours trying to get my Rails application logging with Log4j. I've finally got it working, but I'm not sure if what I did is correct. I tried various methods to no avail until my various last attempt. So I'm really looking for some validation here, perhaps some pointers and tips as well -- anything would be appreciated to be honest. I've summarized all my feeble methods into three attempts below. I'm hoping for some enlightenment on where I went wrong with each attempt -- even if it means I get ripped up. Thanks for the help in advance! System Specs Rails 3.0 Windows Server 2008 Log4j 1.2 Tomact 6.0.29 Java 6 Attempt 1 - Configured Tomcat to Use Log4J I basically followed the guide on the Apache Tomcat website here. The steps are: Create a log4j.properties file in $CATALINA_HOME/lib Download and copy the log4j-x.y.z.jar into $CATALINA_HOME/lib Replace $CATALINA_HOME/bin/tomcat-juli.jar with the tomcat-juli.jar from the Apache Tomcat Extras folder Copy tomcat-juli-adapters.jar from the Apache Tomcat Extras folder into $CATALINA_HOME/lib Delete $CATALINA_BASE/conf/logging.properties Start Tomcat (as a service) Expected Results According to the Guide I should have seen a tomcat.log file in my $CATALINA_BASE/logs folder. Actual Results No tomcat.log Saw three of the standard logs instead jakarta_service_20101231.log stderr_20101231.log stdout_20101231.log Question Shouldn't I have at least seen a tomcat.log file? Attempt 2 - Use default Tomcat logging (commons-logging) Reverted all the changes from the previous setup Modified $CATALINA_BASE/conf/logging.properties by doing the following: Adding a setting for my application in the handlers line: 5rails3.org.apache.juli.FileHandler Adding Handler specific properties 5rails3.org.apache.juli.FileHandler.level = FINE 5rails3.org.apache.juli.FileHandler.directory = ${catalina.base}/logs 5rails3.org.apache.juli.FileHandler.prefix = rails3. Adding Facility specific properties org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/rails3].level = INFO org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/rails3].handlers = 4host-manager.org.apache.juli.FileHandler Modified my web.xml by adding the following context parameter as per the Logging section of the jruby-rack readme (I also modified my warbler.rb accordingly, but I did opted to change the web.xml directly to test things faster). <context-param> <param-name>jruby.rack.logging</param-name> <param-value>commons_logging</param-value> </context-param> Restarted Tomcat Results A log file was created (rails3.log), however there was no log information in the file. Attempt 2A - Use Log4j with existing set up I decided to go Log4j another whirl with this new web.xml setting. Copied the log4j.jar into my WEB-INF/lib folder Created a log4j.properties file and put it into WEB-INF/classes log4j.rootLogger=INFO, R log4j.logger.javax.servlet=DEBUG log4j.appender.R=org.apache.log4j.RollingFileAppender log4j.appender.R.File=${catalina.base}/logs/rails3.log log4j.appender.R.MaxFileSize=5036KB log4j.appender.R.MaxBackupIndex=4 log4j.appender.R.layout=org.apache.log4j.PatternLayout log4j.appender.R.layout.ConversionPattern=%d{dd MMM yyyy HH:mm:ss} [%t] %-5p %c %x - %m%n Restarted Tomcat Results Same as Attempt 2 NOTE: I used log4j.logger.javax.servlet=DEBUG because I read in the jruby-rack README that all logging output is automatically redirected to the javax.servlet.ServletContext#log method. So I though this would capture it. I was obviously wrong. 
Question Why didn't this work? Isn't Log4J using the commons_logging API? Attempt 3 - Tried out slf4j (WORKED) A bit uncertain as to why Attempt 2A didn't work, I thought to myself, maybe I can't use commons_logging for the jruby.rack.logging parameter because it's probably not using commons_logging API... (but I was still not sure). I saw slf4j as an option. I have never heard of it and by stroke of luck, I decided to look up what it is. After reading briefly about what it does, I thought it was good of a shot as any and decided to try it out following the instructions here. Continuing from the setup of Attempt 2A: Copied slf4j-api-1.6.1.jar and slf4j-simple-1.6.1.jar into my WEB-INF/lib folder I also copied slf4j-log4j12-1.6.1.jar into my WEB-INF/lib folder Restarted Tomcat And VIOLA! I now have logging information going into my rails3.log file. So the big question is: WTF? Even though logging seems to be working now, I'm really not sure if I did this right. So like I said earlier, I'm really looking for some validation more or less. I'd also appreciate any pointers/tips/advice if you have any. Thanks!
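A brief note on why attempt 3 behaves the way it does: slf4j is only a facade, and the single binding jar it finds on the classpath decides where log statements actually go, so with slf4j-log4j12 deployed the calls are routed into the appenders configured in log4j.properties. If both slf4j-simple and slf4j-log4j12 end up in WEB-INF/lib, slf4j warns about multiple bindings and uses only one of them, so keeping just the log4j binding is tidier. A minimal sketch of what logging through the facade looks like from Java code (the class name is made up for illustration):

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class RackLoggingCheck {

        private static final Logger LOG = LoggerFactory.getLogger(RackLoggingCheck.class);

        public static void main(String[] args) {
            // With slf4j-api plus slf4j-log4j12 (and log4j.jar) on the classpath,
            // this call is handed to log4j and lands in the RollingFileAppender
            // configured as appender R in log4j.properties.
            LOG.info("slf4j facade is bound to log4j");
        }
    }

jruby-rack's slf4j option writes through the same facade, which is presumably why the binding jar is what finally made rails3.log fill up.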

  • Android draw using SurfaceView and Thread

    - by Morten Høgseth
    I am trying to draw a ball to my screen using 3 classes. I have read a little about this and I found a code snippet that works using the 3 classes on one page, Playing with graphics in Android I altered the code so that I have a ball that is moving and shifts direction when hitting the wall like the picture below (this is using the code in the link). Now I like to separate the classes into 3 different pages for not making everything so crowded, everything is set up the same way. Here are the 3 classes I have. BallActivity.java Ball.java BallThread.java package com.brick.breaker; import android.app.Activity; import android.os.Bundle; import android.view.Window; import android.view.WindowManager; public class BallActivity extends Activity { private Ball ball; @Override protected void onCreate(Bundle savedInstanceState) { // TODO Auto-generated method stub super.onCreate(savedInstanceState); requestWindowFeature(Window.FEATURE_NO_TITLE); getWindow().setFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN,WindowManager.LayoutParams.FLAG_FULLSCREEN); ball = new Ball(this); setContentView(ball); } @Override protected void onPause() { // TODO Auto-generated method stub super.onPause(); setContentView(null); ball = null; finish(); } } package com.brick.breaker; import android.content.Context; import android.graphics.Bitmap; import android.graphics.BitmapFactory; import android.graphics.Canvas; import android.view.SurfaceHolder; import android.view.SurfaceView; public class Ball extends SurfaceView implements SurfaceHolder.Callback { private BallThread ballThread = null; private Bitmap bitmap; private float x, y; private float vx, vy; public Ball(Context context) { super(context); // TODO Auto-generated constructor stub bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.ball); x = 50.0f; y = 50.0f; vx = 10.0f; vy = 10.0f; getHolder().addCallback(this); ballThread = new BallThread(getHolder(), this); } protected void onDraw(Canvas canvas) { update(canvas); canvas.drawBitmap(bitmap, x, y, null); } public void update(Canvas canvas) { checkCollisions(canvas); x += vx; y += vy; } public void checkCollisions(Canvas canvas) { if(x - vx < 0) { vx = Math.abs(vx); } else if(x + vx > canvas.getWidth() - getBitmapWidth()) { vx = -Math.abs(vx); } if(y - vy < 0) { vy = Math.abs(vy); } else if(y + vy > canvas.getHeight() - getBitmapHeight()) { vy = -Math.abs(vy); } } public int getBitmapWidth() { if(bitmap != null) { return bitmap.getWidth(); } else { return 0; } } public int getBitmapHeight() { if(bitmap != null) { return bitmap.getHeight(); } else { return 0; } } public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) { // TODO Auto-generated method stub } public void surfaceCreated(SurfaceHolder holder) { // TODO Auto-generated method stub ballThread.setRunnable(true); ballThread.start(); } public void surfaceDestroyed(SurfaceHolder holder) { // TODO Auto-generated method stub boolean retry = true; ballThread.setRunnable(false); while(retry) { try { ballThread.join(); retry = false; } catch(InterruptedException ie) { //Try again and again and again } break; } ballThread = null; } } package com.brick.breaker; import android.graphics.Canvas; import android.view.SurfaceHolder; public class BallThread extends Thread { private SurfaceHolder sh; private Ball ball; private Canvas canvas; private boolean run = false; public BallThread(SurfaceHolder _holder,Ball _ball) { sh = _holder; ball = _ball; } public void setRunnable(boolean _run) { run = _run; } public void run() { 
while(run) { canvas = null; try { canvas = sh.lockCanvas(null); synchronized(sh) { ball.onDraw(canvas); } } finally { if(canvas != null) { sh.unlockCanvasAndPost(canvas); } } } } public Canvas getCanvas() { if(canvas != null) { return canvas; } else { return null; } } } Here is a picture that shows the outcome of these classes. I've tried to figure this out, but since I am pretty new to Android development I thought I could ask for help. Does anyone know what is causing the ball to be drawn like that? The code is pretty much the same as the one in the link, and I have tried to experiment to find a solution but no luck. Thanks in advance for any help =)
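One detail that would explain the smeared trail of balls in the screenshot: the canvas handed back by SurfaceHolder.lockCanvas() is not cleared between frames, and nothing in Ball.onDraw clears it either, so every old ball position stays visible in whichever buffer comes back. A hedged sketch of the change, clearing the buffer at the top of onDraw before drawing the new frame (android.graphics.Color would need to be imported in Ball.java):

    @Override
    protected void onDraw(Canvas canvas) {
        // Wipe whatever an earlier frame left in this buffer; the SurfaceView
        // cycles through buffers whose contents are not preserved, so stale
        // ball positions persist unless each frame starts from a cleared canvas.
        canvas.drawColor(Color.BLACK);
        update(canvas);
        canvas.drawBitmap(bitmap, x, y, null);
    }

Whether update() belongs inside the draw call is a separate design question; the clear alone should stop the ghosting.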

  • How to set up custom DNS with Azure Websites Preview?

    - by husainnz
    I created a new Azure Website, using Umbraco as the CMS. I got a page up and going, and I already have a .co.nz domain with www.domains4less.com. There's a whole lot of stuff on the internet about pointing URLs to Azure, but that seems to be more of a redirection service than anything (i.e. my URLs still use azurewebsites.net once I land on my site). Has anybody had any luck getting it to go? Here's the error I get when I try adding the DNS entry to Azure (I'm in reserved mode, reemdairy is the name of the website): There was an error processing your request. Please try again in a few moments. Browser: 5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.56 Safari/536.5 User language: undefined Portal Version: 6.0.6002.18488 (rd_auxportal_stable.120609-0259) Subscriptions: 3aabe358-d178-4790-a97b-ffba902b2851 User email address: [email protected] Last 10 Requests message: Failure: Ajax call to: Websites/UpdateConfig. failed with status: error (500) in 2.57 seconds. x-ms-client-request-id was: 38834edf-c9f3-46bb-a1f7-b2839c692bcf-2012-06-12 22:25:14Z dateTime: Wed Jun 13 2012 10:25:17 GMT+1200 (New Zealand Standard Time) durationSeconds: 2.57 url: Websites/UpdateConfig status: 500 textStatus: error clientMsRequestId: 38834edf-c9f3-46bb-a1f7-b2839c692bcf-2012-06-12 22:25:14Z sessionId: 09c72263-6ce7-422b-84d7-4c21acded759 referrer: https://manage.windowsazure.com/#Workspaces/WebsiteExtension/Website/reemdairy/configure host: manage.windowsazure.com response: {"message":"Try again. Contact support if the problem persists.","ErrorMessage":"Try again. Contact support if the problem persists.","httpStatusCode":"InternalServerError","operationTrackingId":"","stackTrace":null} message: Complete: Ajax call to: Websites/GetConfig. completed with status: success (200) in 1.021 seconds. x-ms-client-request-id was: a0cdcced-13d0-44e2-866d-e0b061b9461b-2012-06-12 22:24:43Z dateTime: Wed Jun 13 2012 10:24:44 GMT+1200 (New Zealand Standard Time) durationSeconds: 1.021 url: Websites/GetConfig status: 200 textStatus: success clientMsRequestId: a0cdcced-13d0-44e2-866d-e0b061b9461b-2012-06-12 22:24:43Z sessionId: 09c72263-6ce7-422b-84d7-4c21acded759 referrer: https://manage.windowsazure.com/#Workspaces/WebsiteExtension/Website/reemdairy/configure host: manage.windowsazure.com message: Complete: Ajax call to: https://manage.windowsazure.com/Service/OperationTracking?subscriptionId=3aabe358-d178-4790-a97b-ffba902b2851. completed with status: success (200) in 1.887 seconds. x-ms-client-request-id was: a7689fe9-b9f9-4d6c-8926-734ec9a0b515-2012-06-12 22:24:40Z dateTime: Wed Jun 13 2012 10:24:42 GMT+1200 (New Zealand Standard Time) durationSeconds: 1.887 url: https://manage.windowsazure.com/Service/OperationTracking?subscriptionId=3aabe358-d178-4790-a97b-ffba902b2851 status: 200 textStatus: success clientMsRequestId: a7689fe9-b9f9-4d6c-8926-734ec9a0b515-2012-06-12 22:24:40Z sessionId: 09c72263-6ce7-422b-84d7-4c21acded759 referrer: https://manage.windowsazure.com/#Workspaces/WebsiteExtension/Website/reemdairy/configure host: manage.windowsazure.com message: Complete: Ajax call to: /Service/GetUserSettings. completed with status: success (200) in 0.941 seconds. 
x-ms-client-request-id was: 805e554d-1e2e-4214-afd5-be87c0f255d1-2012-06-12 22:24:40Z dateTime: Wed Jun 13 2012 10:24:40 GMT+1200 (New Zealand Standard Time) durationSeconds: 0.941 url: /Service/GetUserSettings status: 200 textStatus: success clientMsRequestId: 805e554d-1e2e-4214-afd5-be87c0f255d1-2012-06-12 22:24:40Z sessionId: 09c72263-6ce7-422b-84d7-4c21acded759 referrer: https://manage.windowsazure.com/#Workspaces/WebsiteExtension/Website/reemdairy/configure host: manage.windowsazure.com message: Complete: Ajax call to: Extensions/ApplicationsExtension/SqlAzure/ClusterSuffix. completed with status: success (200) in 0.483 seconds. x-ms-client-request-id was: 85157ceb-c538-40ca-8c1e-5cc07c57240f-2012-06-12 22:24:39Z dateTime: Wed Jun 13 2012 10:24:40 GMT+1200 (New Zealand Standard Time) durationSeconds: 0.483 url: Extensions/ApplicationsExtension/SqlAzure/ClusterSuffix status: 200 textStatus: success clientMsRequestId: 85157ceb-c538-40ca-8c1e-5cc07c57240f-2012-06-12 22:24:39Z sessionId: 09c72263-6ce7-422b-84d7-4c21acded759 referrer: https://manage.windowsazure.com/#Workspaces/WebsiteExtension/Website/reemdairy/configure host: manage.windowsazure.com message: Complete: Ajax call to: Extensions/ApplicationsExtension/SqlAzure/GetClientIp. completed with status: success (200) in 0.309 seconds. x-ms-client-request-id was: 2eb194b6-66ca-49e2-9016-e0f89164314c-2012-06-12 22:24:39Z dateTime: Wed Jun 13 2012 10:24:40 GMT+1200 (New Zealand Standard Time) durationSeconds: 0.309 url: Extensions/ApplicationsExtension/SqlAzure/GetClientIp status: 200 textStatus: success clientMsRequestId: 2eb194b6-66ca-49e2-9016-e0f89164314c-2012-06-12 22:24:39Z sessionId: 09c72263-6ce7-422b-84d7-4c21acded759 referrer: https://manage.windowsazure.com/#Workspaces/WebsiteExtension/Website/reemdairy/configure host: manage.windowsazure.com message: Complete: Ajax call to: Extensions/ApplicationsExtension/SqlAzure/DefaultServerLocation. completed with status: success (200) in 0.309 seconds. x-ms-client-request-id was: 1bc165ef-2081-48f2-baed-16c6edf8ea67-2012-06-12 22:24:39Z dateTime: Wed Jun 13 2012 10:24:40 GMT+1200 (New Zealand Standard Time) durationSeconds: 0.309 url: Extensions/ApplicationsExtension/SqlAzure/DefaultServerLocation status: 200 textStatus: success clientMsRequestId: 1bc165ef-2081-48f2-baed-16c6edf8ea67-2012-06-12 22:24:39Z sessionId: 09c72263-6ce7-422b-84d7-4c21acded759 referrer: https://manage.windowsazure.com/#Workspaces/WebsiteExtension/Website/reemdairy/configure host: manage.windowsazure.com message: Complete: Ajax call to: Extensions/ApplicationsExtension/SqlAzure/ServerLocations. completed with status: success (200) in 0.309 seconds. x-ms-client-request-id was: e1fba7df-6a12-47f8-9434-bf17ca7d93f4-2012-06-12 22:24:39Z dateTime: Wed Jun 13 2012 10:24:40 GMT+1200 (New Zealand Standard Time) durationSeconds: 0.309 url: Extensions/ApplicationsExtension/SqlAzure/ServerLocations status: 200 textStatus: success clientMsRequestId: e1fba7df-6a12-47f8-9434-bf17ca7d93f4-2012-06-12 22:24:39Z sessionId: 09c72263-6ce7-422b-84d7-4c21acded759 referrer: https://manage.windowsazure.com/#Workspaces/WebsiteExtension/Website/reemdairy/configure host: manage.windowsazure.com

  • Extended with advice: Moving block won't work in JavaScript

    - by Mack
    Hello Note: this is an extension of a question I just asked, i have made the edits & taken the advice but still no luck I am trying to make a webpage where when you click a link, the link moves diagonally every 100 milliseconds. So I have my Javascript, but right now when I click the link nothing happens. I have run my code through JSLint (therefore changed comaprisions to === not ==, thats weird in JS?). I get this error from JSLink though: Error: Implied global: self 15,38, document 31 What do you think I am doing wrong? <script LANGUAGE="JavaScript" type = "text/javascript"> <!-- var block = null; var clockStep = null; var index = 0; var maxIndex = 6; var x = 0; var y = 0; var timerInterval = 100; // milliseconds var xPos = null; var yPos = null; function moveBlock() { if ( index < 0 || index >= maxIndex || block === null || clockStep === null ) { self.clearInterval( clockStep ); return; } block.style.left = xPos[index] + "px"; block.style.top = yPos[index] + "px"; index++; } function onBlockClick( blockID ) { if ( clockStep !== null ) { return; } block = document.getElementById( blockID ); index = 0; x = number(block.style.left); // parseInt( block.style.left, 10 ); y = number(block.style.top); // parseInt( block.style.top, 10 ); xPos = new Array( x+10, x+20, x+30, x+40, x+50, x+60 ); yPos = new Array( y-10, y-20, y-30, y-40, y-50, y-60 ); clockStep = self.SetInterval( moveBlock(), timerInterval ); } --> </script> <style type="text/css" media="all"> <!-- @import url("styles.css"); #blockMenu { z-index: 0; width: 650px; height: 600px; background-color: blue; padding: 0; } #block1 { z-index: 30; position: relative; top: 10px; left: 10px; background-color: red; width: 200px; height: 200px; margin: 0; padding: 0; /* background-image: url("images/block1.png"); */ } #block2 { z-index: 30; position: relative; top: 50px; left: 220px; background-color: red; width: 200px; height: 200px; margin: 0; padding: 0; /* background-image: url("images/block1.png"); */ } #block3 { z-index: 30; position: relative; top: 50px; left: 440px; background-color: red; width: 200px; height: 200px; margin: 0; padding: 0; /* background-image: url("images/block1.png"); */ } #block4 { z-index: 30; position: relative; top: 0px; left: 600px; background-color: red; width: 200px; height: 200px; margin: 0; padding: 0; /* background-image: url("images/block1.png"); */ } #block1 a { display: block; width: 100%; height: 100%; } #block2 a { display: block; width: 100%; height: 100%; } #block3 a { display: block; width: 100%; height: 100%; } #block4 a { display: block; width: 100%; height: 100%; } #block1 a:hover { background-color: green; } #block2 a:hover { background-color: green; } #block3 a:hover { background-color: green; } #block4 a:hover { background-color: green; } #block1 a:active { background-color: yellow; } #block2 a:active { background-color: yellow; } #block3 a:active { background-color: yellow; } #block4 a:active { background-color: yellow; } --> </style>

  • When splitting MP4s with ffmpeg, how do I include metadata?

    - by Josh
    I have a few MP4s that i want to upload to my flickr account but they have a maximum size of 500mb as mine is only about 550 i was planing to simply split them in half then upload them, but i want to make sure all the meta data is included but it does not seem to be. I have tried each of the following with no luck, (at the end of this post i have the original and the new ffprobe outputs): ffmpeg -ss 00:00:00.00 -t 00:04:19.35 -i SANY0069.MP4 -acodec copy -vcodec copy -map_metadata 0:0 SANY0069A.MP4 ffmpeg -ss 00:00:00.00 -t 00:04:19.35 -i SANY0069.MP4 -acodec copy -vcodec copy -map_meta_data SANY0069.MP4:SANY0069A.MP4 SANY0069A.MP4 with the this one I manually produced the individual meta tags that i took from this command ffmpeg -i SANY0069A.MP4 -f ffmetadata meta.txt ffmpeg -ss 00:00:00.00 -t 00:04:19.35 -i SANY0069.MP4 -acodec copy -vcodec copy -metadata major_brand="mp42" -metadata minor_version="1" -metadata compatible_brands="mp42avc1" -metadata creation_time="2012-09-29 09:05:50" -metadata comment="SANYO DIGITAL CAMERA CA9" -metadata comment-eng="SANYO DIGITAL CAMERA CA9" SANY0069A.MP4 using the output of the former command i also tried this: ffmpeg -ss 00:00:00.00 -t 00:04:19.35 -i SANY0069.MP4 -acodec copy -vcodec copy -f ffmetadata -i meta.txt SANY0069A.MP4 Output: sample output from my first command: ffmpeg -ss 00:00:00.00 -t 00:04:19.35 -i SANY0069.MP4 -acodec copy -vcodec copy -map_metadata 0:0 SANY0069A.MP4 ffmpeg version 0.8.12, Copyright (c) 2000-2011 the FFmpeg developers built on Jun 13 2012 09:57:38 with gcc 4.6.3 20120306 (Red Hat 4.6.3-2) configuration: --prefix=/usr --bindir=/usr/bin --datadir=/usr/share/ffmpeg --incdir=/usr/include/ffmpeg --libdir=/usr/lib64 --mandir=/usr/share/man --arch=x86_64 --extra-cflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --enable-bzlib --enable-libcelt --enable-libdc1394 --enable-libdirac --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-libopenjpeg --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxvid --enable-x11grab --enable-avfilter --enable-postproc --enable-pthreads --disable-static --enable-shared --enable-gpl --disable-debug --disable-stripping --shlibdir=/usr/lib64 --enable-runtime-cpudetect libavutil 51. 9. 1 / 51. 9. 1 libavcodec 53. 8. 0 / 53. 8. 0 libavformat 53. 5. 0 / 53. 5. 0 libavdevice 53. 1. 1 / 53. 1. 1 libavfilter 2. 23. 0 / 2. 23. 0 libswscale 2. 0. 0 / 2. 0. 0 libpostproc 51. 2. 0 / 51. 2. 0 Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'SANY0069.MP4': Metadata: major_brand : mp42 minor_version : 1 compatible_brands: mp42avc1 creation_time : 2012-09-29 09:05:50 comment : SANYO DIGITAL CAMERA CA9 comment-eng : SANYO DIGITAL CAMERA CA9 Duration: 00:08:38.71, start: 0.000000, bitrate: 9142 kb/s Stream #0.0(eng): Video: h264 (Constrained Baseline), yuv420p, 1280x720 [PAR 1:1 DAR 16:9], 9007 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc Metadata: creation_time : 2012-09-29 09:05:50 Stream #0.1(eng): Audio: aac, 48000 Hz, stereo, s16, 127 kb/s Metadata: creation_time : 2012-09-29 09:05:50 File 'SANY0069A.MP4' already exists. Overwrite ? 
[y/N] y Output #0, mp4, to 'SANY0069A.MP4': Metadata: major_brand : mp42 minor_version : 1 compatible_brands: mp42avc1 creation_time : 2012-09-29 09:05:50 comment : SANYO DIGITAL CAMERA CA9 comment-eng : SANYO DIGITAL CAMERA CA9 encoder : Lavf53.5.0 Stream #0.0(eng): Video: libx264, yuv420p, 1280x720 [PAR 1:1 DAR 16:9], q=2-31, 9007 kb/s, 30k tbn, 29.97 tbc Metadata: creation_time : 2012-09-29 09:05:50 Stream #0.1(eng): Audio: aac, 48000 Hz, stereo, 127 kb/s Metadata: creation_time : 2012-09-29 09:05:50 Stream mapping: Stream #0.0 -> #0.0 Stream #0.1 -> #0.1 Press [q] to stop, [?] for help frame= 7773 fps=4644 q=-1.0 Lsize= 289607kB time=00:04:19.35 bitrate=9147.4kbits/s video:285416kB audio:4033kB global headers:0kB muxing overhead 0.054571% and finaly, when i compare the ffprobe of the original and the first split part i get the 2 following outputs: original ffprobe version 0.8.12, Copyright (c) 2007-2011 the FFmpeg developers built on Jun 13 2012 09:57:38 with gcc 4.6.3 20120306 (Red Hat 4.6.3-2) configuration: --prefix=/usr --bindir=/usr/bin --datadir=/usr/share/ffmpeg --incdir=/usr/include/ffmpeg --libdir=/usr/lib64 --mandir=/usr/share/man --arch=x86_64 --extra-cflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --enable-bzlib --enable-libcelt --enable-libdc1394 --enable-libdirac --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-libopenjpeg --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxvid --enable-x11grab --enable-avfilter --enable-postproc --enable-pthreads --disable-static --enable-shared --enable-gpl --disable-debug --disable-stripping --shlibdir=/usr/lib64 --enable-runtime-cpudetect libavutil 51. 9. 1 / 51. 9. 1 libavcodec 53. 8. 0 / 53. 8. 0 libavformat 53. 5. 0 / 53. 5. 0 libavdevice 53. 1. 1 / 53. 1. 1 libavfilter 2. 23. 0 / 2. 23. 0 libswscale 2. 0. 0 / 2. 0. 0 libpostproc 51. 2. 0 / 51. 2. 
0 Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'SANY0069.MP4': Metadata: major_brand : mp42 minor_version : 1 compatible_brands: mp42avc1 creation_time : 2012-09-29 09:05:50 comment : SANYO DIGITAL CAMERA CA9 comment-eng : SANYO DIGITAL CAMERA CA9 Duration: 00:08:38.71, start: 0.000000, bitrate: 9142 kb/s Stream #0.0(eng): Video: h264 (Constrained Baseline), yuv420p, 1280x720 [PAR 1:1 DAR 16:9], 9007 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc Metadata: creation_time : 2012-09-29 09:05:50 Stream #0.1(eng): Audio: aac, 48000 Hz, stereo, s16, 127 kb/s Metadata: creation_time : 2012-09-29 09:05:50 Split ffprobe version 0.8.12, Copyright (c) 2007-2011 the FFmpeg developers built on Jun 13 2012 09:57:38 with gcc 4.6.3 20120306 (Red Hat 4.6.3-2) configuration: --prefix=/usr --bindir=/usr/bin --datadir=/usr/share/ffmpeg --incdir=/usr/include/ffmpeg --libdir=/usr/lib64 --mandir=/usr/share/man --arch=x86_64 --extra-cflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --enable-bzlib --enable-libcelt --enable-libdc1394 --enable-libdirac --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-libopenjpeg --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxvid --enable-x11grab --enable-avfilter --enable-postproc --enable-pthreads --disable-static --enable-shared --enable-gpl --disable-debug --disable-stripping --shlibdir=/usr/lib64 --enable-runtime-cpudetect libavutil 51. 9. 1 / 51. 9. 1 libavcodec 53. 8. 0 / 53. 8. 0 libavformat 53. 5. 0 / 53. 5. 0 libavdevice 53. 1. 1 / 53. 1. 1 libavfilter 2. 23. 0 / 2. 23. 0 libswscale 2. 0. 0 / 2. 0. 0 libpostproc 51. 2. 0 / 51. 2. 0 Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'SANY0069A.MP4': Metadata: major_brand : isom minor_version : 512 compatible_brands: isomiso2avc1mp41 creation_time : 1970-01-01 00:00:00 encoder : Lavf53.5.0 comment : SANYO DIGITAL CAMERA CA9 Duration: 00:04:19.37, start: 0.000000, bitrate: 9146 kb/s Stream #0.0(eng): Video: h264 (Constrained Baseline), yuv420p, 1280x720 [PAR 1:1 DAR 16:9], 9015 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc Metadata: creation_time : 1970-01-01 00:00:00 Stream #0.1(eng): Audio: aac, 48000 Hz, stereo, s16, 127 kb/s Metadata: creation_time : 1970-01-01 00:00:00 I know this is incredibly long but its actually a quite simple question. I thought it would be best to provide as much detail as possible. any advice here would be great, Thanks

  • What other tool is using my hotkey?

    - by Sammy
    I use Greenshot for screenshots, and it's been nagging about some other software tool using the same hotkey. I started receiving this warning message about two days ago. It shows up each time I reboot and log on to Windows. The hotkey(s) "Ctrl + Shift + PrintScreen" could not be registered. This problem is probably caused by another tool claiming usage of the same hotkey(s)! You could either change your hotkey settings or deactivate/change the software making use of the hotkey(s). What's this all about? The only software I recently installed is CPU-Z Core Temp Speed Fan HD Tune Epson Print CD NetStress What I would like to know is how to find out what other tool is causing this conflict? Do I really have to uninstall each program, one by one, until there is no conflict anymore? I see no option for customizing any hotkeys in CPU-Z, and according to docs there are only a few keyboard shortcuts. These are F5 through F9, but they are no hotkeys. There is nothing in Core Temp, and from what I can see... nothing in Speed Fan. Is any of these programs known to use Ctrl + Shift + PrintScreen hotkey for screenshots? I am actually suspecting the Dropbox client. I think I saw a warning recently coming from Dropbox program, something to do with hotkeys or keyboard shortcuts. I see that it has an option for sharing screenshots under Preferences menu, but I see no option for hotkeys. Core Temp actually also has an option for taking screenshots (F9) but it's just that - a keyboard shortcut, not a hotkey. And again, there's no option actually for changing this setting in Options/Settings menu. How do you resolve this type of conflicts? Are there any general methods you can use to pinpoint the second conflicting software? Like... is there some Windows registry key that holds the hotkeys? Or is it just down to mere luck and trial and error? Addendum I forgot to mention, when I do use the Ctrl + Shift + PrintScreen hotkey, what happens is that the Greenshot context menu shows up, asking me where I want to save the screenshot. So it appears to be working. But I am still getting the darn warning every time I reboot and log on to Windows?! I actually tried changing the key bindings in Greenshot preferences, but after a reboot it seems to have returned back to the settings I had previously. Update I can't see any hotkey conflicts in the Widnows Hotkey Explorer. The aforementioned hotkey is reserved by Greenshot, and I don't see any other program using the same hotkey binding. But when I went into Greenshot preferences, this is what I discovered. As you can see it's the Greenshot itself that uses the same hotkey twice! I guess that's why no other program was listed above as using this hotkey. But how can Greenshot be so stupid to use the same hotkey more than once? I didn't do this! It's not my fault... I'm not that stupid. This is what it's set to right now: Capture full screen: Ctrl + Skift + Prntscrn Capture window: Alt + Prntscrn Capture region: Ctrl + Prntscrn Capture last region: Skift + Prntscrn Capture Internet Explorer: Ctrl + Skift + Prntscrn And this is my preferred setting: Capture full screen: Prntscrn Capture window: Alt + Prntscrn Capture region: Ctrl + Prntscrn Capture last region: Capture Internet Explorer: I don't use any hotkey for "last region" and IE. But when I set this to my liking, as listed here, Greenshot gives me the same warning message, even as I tab through the hotkey entry fields. Sometimes it even gives me the warning when I just click Cancel button. This is really crazy! 
On the side note... You might have noticed that I have "update check" set to 0 (zero). This is because, in my experience, Greenshot changes all or only some of my preferences back to default settings whenever it automatically updates to a new version. So I opted to stay off updates to get rid of the problem. It has done so for the past three updates or so. I hoped to receive a new update that would fix the issue, but I think it still reverts back to default settings after each update to a new version, including setting default hotkeys. Update 2 I'll give you just one example of how Greenshot behaves. This is the dialog I have in front of me right now. As you can see, I have removed the last two hotkeys and changed the first one to my own liking. While I was clicking in the fields and removing the two hotkeys I was getting the warning message. So let's say I click in the "capture last region" field. Then I get this: Note that none of the entries include "Ctrl + Shift + PrintScreen" that it's warning about. Now I will change all the hotkeys so I get something like this: So now I'm using QWERTY letters for binding, like Ctrl+Alt+Q, Ctrl+Alt+W and so on. As far as I know no Windows program is using these. While I was clicking through the different fields it was giving me the warning. Now when I try to click OK to save the changes, it once again gives me a warning about "ctrl + shift + printscreen". Update 3 After setting the above key bindings (QWERTY) and saving changes, and then rebooting, the conflict seems to have been resolved. I was then able to set following key bindings. Capture full screen: Prntscrn Capture window: Alt + Prntscrn Capture region: Ctrl + Prntscrn I was not prompted with the warning message this time. Perhaps changing key binding required a system reboot? Sounds far fetched but that appears to be the case. I'm still not sure what caused this conflict, but I know for sure that it started after installing aforementioned programs. It might just have to do with Greenshot itself, and not some other program. Like I said, I know from experience that Greenshot likes to mess with users' settings after each update. I wouldn't be surprised if it was actually silently updated, even though I have specified not to check for updates, then it changed the key bindings back to defaults and caused a conflict with the hotkeys that were registered with the operating system previously. I rarely reboot the system, so that could have added to the conflict. Next time if I see this I will run Hotkey Explorer immediately and see if there is another program causing the conflict.

  • OpenGL + cgFX Alpha Blending failure

    - by dopplex
    I have a shader that needs to additively blend to its output render target. While it had been fully implemented and working, I recently refactored and have done something that is causing the alpha blending to not work anymore. I'm pretty sure that the problem is somewhere in my calls to either OpenGL or cgfx - but I'm currently at a loss for where exactly the problem is, as everything looks like it is set up properly for alpha blending to occur. No OpenGL or cg framework errors are showing up, either. For some context, what I'm doing here is taking a buffer which contains screen position and luminance values for each pixel, copying it to a PBO, and using it as the vertex buffer for drawing GL_POINTS. Everything except for the alpha blending appears to be working as expected. I've confirmed both that the input vertex buffer has the correct values, and that my vertex and fragment shaders are outputting the points to the correct locations and with the correct luminance values. The way that I've arrived at the conclusion that the Alpha blending was broken is by making my vertex shader output every point to the same screen location and then setting the pixel shader to always output a value of float4(0.5) for that pixel. Invariably, the end color (dumped afterwards) ends up being float4(0.5). The confusing part is that as far as I can tell, everything is properly set for alpha blending to occur. The cgfx pass has the two following state assignments (among others - I'll put a full listing at the end): BlendEnable = true; BlendFunc = int2(One, One); This ought to be enough, since I am calling cgSetPassState() - and indeed, when I use glGets to check the values of GL_BLEND_SRC, GL_BLEND_DEST, GL_BLEND, and GL_BLEND_EQUATION they all look appropriate (GL_ONE, GL_ONE, GL_TRUE, and GL_FUNC_ADD). This check was done immediately after the draw call. I've been looking around to see if there's anything other than blending being enabled and the blending function being correctly set that would cause alpha blending not to occur, but without any luck. I considered that I could be doing something wrong with GL, but GL is telling me that blending is enabled. I doubt it's cgFX related (as otherwise the GL state wouldn't even be thinking it was enabled) but it still fails if I explicitly use GL calls to set the blend mode and enable it. 
Here's the trimmed down code for starting the cgfx pass and the draw call: CGtechnique renderTechnique = Filter->curTechnique; TEXUNITCHECK; CGpass pass = cgGetFirstPass(renderTechnique); TEXUNITCHECK; while (pass) { cgSetPassState(pass); cgUpdatePassParameters(pass); //drawFSPointQuadBuff((void*)PointQuad); drawFSPointQuadBuff((void*)LumPointBuffer); TEXUNITCHECK; cgResetPassState(pass); pass = cgGetNextPass(pass); }; and the function with the draw call: void drawFSPointQuadBuff(void* args) { PointBuffer* pointBuffer = (PointBuffer*)args; FBOERRCHECK; glClear(GL_COLOR_BUFFER_BIT); GLERRCHECK; glPointSize(1.0); GLERRCHECK; glEnableClientState(GL_VERTEX_ARRAY); GLERRCHECK; glEnable(GL_POINT_SMOOTH); if (pointBuffer->BufferObject) { glBindBufferARB(GL_ARRAY_BUFFER_ARB, (unsigned int)pointBuffer->BufData); glVertexPointer(pointBuffer->numComp, GL_FLOAT, 0, 0); } else { glVertexPointer(pointBuffer->numComp, GL_FLOAT, 0, pointBuffer->BufData); }; GLERRCHECK; glDrawArrays(GL_POINTS, 0, pointBuffer->numElem); GLboolean testBool; glGetBooleanv(GL_BLEND, &testBool); int iblendColor, iblendDest, iblendEquation, iblendSrc; glGetIntegerv(GL_BLEND_SRC, &iblendSrc); glGetIntegerv(GL_BLEND_DST, &iblendDest); glGetIntegerv(GL_BLEND_EQUATION, &iblendEquation); if (iblendEquation == GL_FUNC_ADD) { cerr << "Correct func" << endl; }; GLERRCHECK; if (pointBuffer->BufferObject) { glBindBufferARB(GL_ARRAY_BUFFER_ARB,0); } GLERRCHECK; glDisableClientState(GL_VERTEX_ARRAY); GLERRCHECK; }; Finally, here is the full state setting of the shader: AlphaTestEnable = false; DepthTestEnable = false; DepthMask = false; ColorMask = true; CullFaceEnable = false; BlendEnable = true; BlendFunc = int2(One, One); FragmentProgram = compile glslf std_PS(); VertexProgram = compile glslv bilatGridVS2();

  • TStringList and TThread that does not free all of its memory

    - by VanillaH
    Version used: Delphi 7. I'm working on a program that does a simple for loop on a Virtual ListView. The data is stored in the following record: type TList=record Item:Integer; SubItem1:String; SubItem2:String; end; Item is the index. SubItem1 the status of the operations (success or not). SubItem2 the path to the file. The for loop loads each file, does a few operations and then, save it. The operations take place in a TStringList. Files are about 2mb each. Now, if I do the operations on the main form, it works perfectly. Multi-threaded, there is a huge memory problem. Somehow, the TStringList doesn't seem to be freed completely. After 3-4k files, I get an EOutofMemory exception. Sometimes, the software is stuck to 500-600mb, sometimes not. In any case, the TStringList always return an EOutofMemory exception and no file can be loaded anymore. On computers with more memory, it takes longer to get the exception. The same thing happens with other components. For instance, if I use THTTPSend from Synapse, well, after a while, the software cannot create any new threads because the memory consumption is too high. It's around 500-600mb while it should be, max, 100mb. On the main form, everything works fine. I guess the mistake is on my side. Maybe I don't understand threads enough. I tried to free everything on the Destroy event. I tried FreeAndNil procedure. I tried with only one thread at a time. I tried freeing the thread manually (no FreeOnTerminate...) No luck. So here is the thread code. It's only the basic idea; not the full code with all the operations. If I remove the LoadFile prodecure, everything works good. A thread is created for each file, according to a thread pool. unit OperationsFiles; interface uses Classes, SysUtils, Windows; type TOperationFile = class(TThread) private Position : Integer; TPath, StatusMessage: String; FileStringList: TStringList; procedure UpdateStatus; procedure LoadFile; protected procedure Execute; override; public constructor Create(Path: String; LNumber: Integer); end; implementation uses Form1; procedure TOperationFile.LoadFile; begin try FileStringList.LoadFromFile(TPath); // Operations... StatusMessage := 'Success'; except on E : Exception do StatusMessage := E.ClassName; end; end; constructor TOperationFile.Create(Path : String; LNumber: Integer); begin inherited Create(False); TPath := Path; Position := LNumber; FreeOnTerminate := True; end; procedure TOperationFile.UpdateStatus; begin FileList[Position].SubItem1 := StatusMessage; Form1.ListView4.UpdateItems(Position,Position); end; procedure TOperationFile.Execute; begin FileStringList:= TStringList.Create; LoadFile; Synchronize(UpdateStatus); FileStringList.Free; end; end. What could be the problem? I thought at one point that, maybe, too many threads are created. If a user loads 1 million files, well, ultimately, 1 million threads is going to be created -- although, only 50 threads are created and running at the same time. Thanks for your input.

  • Trying to draw textured triangles on device fails, but the emulator works. Why?

    - by Dinedal
    I have a series of OpenGL-ES calls that properly render a triangle and texture it with alpha blending on the emulator (2.0.1). When I fire up the same code on an actual device (Droid 2.0.1), all I get are white squares. This suggests to me that the textures aren't loading, but I can't figure out why they aren't loading. All of my textures are 32-bit PNGs with alpha channels, under res/raw so they aren't optimized per the SDK docs. Here's how I am loading my textures: private void loadGLTexture(GL10 gl, Context context, int reasource_id, int texture_id) { //Get the texture from the Android resource directory Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), reasource_id, sBitmapOptions); //Generate one texture pointer... gl.glGenTextures(1, textures, texture_id); //...and bind it to our array gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[texture_id]); //Create Nearest Filtered Texture gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR); //Different possible texture parameters, e.g. GL10.GL_CLAMP_TO_EDGE gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_REPEAT); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_REPEAT); //Use the Android GLUtils to specify a two-dimensional texture image from our bitmap GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0); //Clean up bitmap.recycle(); } Here's how I am rendering the texture: //Clear gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT); //Enable vertex buffer gl.glEnableClientState(GL10.GL_VERTEX_ARRAY); gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY); gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer); gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer); //Push transformation matrix gl.glPushMatrix(); //Transformation matrices gl.glTranslatef(x, y, 0.0f); gl.glScalef(scalefactor, scalefactor, 0.0f); gl.glColor4f(1.0f,1.0f,1.0f,1.0f); //Bind the texture gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[textureid]); //Draw the vertices as triangles gl.glDrawElements(GL10.GL_TRIANGLES, indices.length, GL10.GL_UNSIGNED_BYTE, indexBuffer); //Pop the matrix back to where we left it gl.glPopMatrix(); //Disable the client state before leaving gl.glDisableClientState(GL10.GL_VERTEX_ARRAY); gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY); And here are the options I have enabled: gl.glShadeModel(GL10.GL_SMOOTH); //Enable Smooth Shading gl.glEnable(GL10.GL_DEPTH_TEST); //Enables Depth Testing gl.glDepthFunc(GL10.GL_LEQUAL); //The Type Of Depth Testing To Do gl.glEnable(GL10.GL_TEXTURE_2D); gl.glEnable(GL10.GL_BLEND); gl.glBlendFunc(GL10.GL_SRC_ALPHA,GL10.GL_ONE_MINUS_SRC_ALPHA); Edit: I just tried supplying a BitmapOptions to the BitmapFactory.decodeResource() call, but this doesn't seem to fix the issue, despite manually setting the same preferredconfig, density, and targetdensity. Edit2: As requested, here is a screenshot of the emulator working. The underlying triangles are shown with a circle texture rendered onto it, the transparency is working because you can see the black background.
Here is a shot of what the Droid does with the exact same code on it: Edit3: Here are my BitmapOptions, updated the call above with how I am now calling the BitmapFactory, still the same results as below: sBitmapOptions.inPreferredConfig = Bitmap.Config.RGB_565; sBitmapOptions.inDensity = 160; sBitmapOptions.inTargetDensity = 160; sBitmapOptions.inScreenDensity = 160; sBitmapOptions.inDither = false; sBitmapOptions.inSampleSize = 1; sBitmapOptions.inScaled = false; Here are my vertices, texture coords, and indices: /** The initial vertex definition */ private static final float vertices[] = { -1.0f, -1.0f, 0.0f, 1.0f, -1.0f, 0.0f, -1.0f, 1.0f, 0.0f, 1.0f, 1.0f, 0.0f }; /** The initial texture coordinates (u, v) */ private static final float texture[] = { //Mapping coordinates for the vertices 0.0f, 1.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f }; /** The initial indices definition */ private static final byte indices[] = { //Faces definition 0,1,3, 0,3,2 }; Is there any way to dump the contents of the texture once it's been loaded into OpenGL ES? Maybe I can compare the emulator's loaded texture with the actual device's loaded texture? I did try with a different texture (the default Android icon) and again, it works fine for the emulator but fails to render on the actual phone. Edit4: Tried switching around when I do texture loading. No luck. Tried using a constant offset of 0 to glGenTextures, no change. Is there something that I'm using that the emulator supports that the actual phone does not? Edit5: Per Ryan below, I resized my texture from 200x200 to 256x256, and the issue was NOT resolved. Edit: As requested, added the calls to glVertexPointer and glTexCoordPointer above. Also, here is the initialization of vertexBuffer, textureBuffer, and indexBuffer: ByteBuffer byteBuf = ByteBuffer.allocateDirect(vertices.length * 4); byteBuf.order(ByteOrder.nativeOrder()); vertexBuffer = byteBuf.asFloatBuffer(); vertexBuffer.put(vertices); vertexBuffer.position(0); byteBuf = ByteBuffer.allocateDirect(texture.length * 4); byteBuf.order(ByteOrder.nativeOrder()); textureBuffer = byteBuf.asFloatBuffer(); textureBuffer.put(texture); textureBuffer.position(0); indexBuffer = ByteBuffer.allocateDirect(indices.length); indexBuffer.put(indices); indexBuffer.position(0); loadGLTextures(gl, this.context);
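One way to narrow down a problem that only shows up on real hardware is to check for GL errors right after each texture call – the first error reported usually points at the call the device driver rejects. The helper below is only a debugging sketch (the GlDebug class name, the log tag, and the logGlErrors/maxTextureSize names are made up for illustration); it relies on the standard GL10.glGetError(), GLU.gluErrorString() and glGetIntegerv(GL_MAX_TEXTURE_SIZE) calls:

    import javax.microedition.khronos.opengles.GL10;
    import android.opengl.GLU;
    import android.util.Log;

    public final class GlDebug {
        private static final String TAG = "GlDebug"; // hypothetical log tag

        // Log and clear any pending GL errors; call this after each suspect GL call.
        public static void logGlErrors(GL10 gl, String where) {
            int error;
            while ((error = gl.glGetError()) != GL10.GL_NO_ERROR) {
                Log.e(TAG, where + ": glError 0x" + Integer.toHexString(error)
                        + " (" + GLU.gluErrorString(error) + ")");
            }
        }

        // Report the largest texture dimension the device claims to support.
        public static int maxTextureSize(GL10 gl) {
            int[] max = new int[1];
            gl.glGetIntegerv(GL10.GL_MAX_TEXTURE_SIZE, max, 0);
            return max[0];
        }
    }

Calling logGlErrors(gl, "texImage2D") immediately after GLUtils.texImage2D(...) and again after glDrawElements(...) should show whether the device is rejecting the texture upload itself or the draw call that uses it.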

    Read the article

  • Why does decorating a class break the descriptor protocol, thus preventing staticmethod objects from behaving as expected?

    - by Robru
    I need a little bit of help understanding the subtleties of the descriptor protocol in Python, as it relates specifically to the behavior of staticmethod objects. I'll start with a trivial example, and then iteratively expand it, examining its behavior at each step: class Stub: @staticmethod def do_things(): """Call this like Stub.do_things(), with no arguments or instance.""" print "Doing things!" At this point, this behaves as expected, but what's going on here is a bit subtle: When you call Stub.do_things(), you are not invoking do_things directly. Instead, Stub.do_things refers to a staticmethod instance, which has wrapped the function we want up inside its own descriptor protocol, such that you are actually invoking staticmethod.__get__, which first returns the function that we want; that function then gets called afterwards. >>> Stub <class __main__.Stub at 0x...> >>> Stub.do_things <function do_things at 0x...> >>> Stub.__dict__['do_things'] <staticmethod object at 0x...> >>> Stub.do_things() Doing things! So far so good. Next, I need to wrap the class in a decorator that will be used to customize class instantiation -- the decorator will determine whether to allow new instantiations or provide cached instances: def deco(cls): def factory(*args, **kwargs): # pretend there is some logic here determining # whether to make a new instance or not return cls(*args, **kwargs) return factory @deco class Stub: @staticmethod def do_things(): """Call this like Stub.do_things(), with no arguments or instance.""" print "Doing things!" Now, naturally this part as-is would be expected to break staticmethods, because the class is now hidden behind its decorator, i.e., Stub is not a class at all, but an instance of factory that is able to produce instances of Stub when you call it. Indeed: >>> Stub <function factory at 0x...> >>> Stub.do_things Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'function' object has no attribute 'do_things' >>> Stub() <__main__.Stub instance at 0x...> >>> Stub().do_things <function do_things at 0x...> >>> Stub().do_things() Doing things! So far I understand what's happening here. My goal is to restore the ability for staticmethods to function as you would expect them to, even though the class is wrapped. As luck would have it, the Python stdlib includes something called functools, which provides some tools just for this purpose, i.e., making functions behave more like other functions that they wrap. So I change my decorator to look like this: def deco(cls): @functools.wraps(cls) def factory(*args, **kwargs): # pretend there is some logic here determining # whether to make a new instance or not return cls(*args, **kwargs) return factory Now, things start to get interesting: >>> Stub <function Stub at 0x...> >>> Stub.do_things <staticmethod object at 0x...> >>> Stub.do_things() Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: 'staticmethod' object is not callable >>> Stub() <__main__.Stub instance at 0x...> >>> Stub().do_things <function do_things at 0x...> >>> Stub().do_things() Doing things! Wait.... what? functools copies the staticmethod over to the wrapping function, but it's not callable? Why not? What did I miss here? I was playing around with this for a bit and I actually came up with my own reimplementation of staticmethod that allows it to function in this situation, but I don't really understand why it was necessary or if this is even the best solution to this problem.
Here's the complete example: class staticmethod(object): """Make @staticmethods play nice with decorated classes.""" def __init__(self, func): self.func = func def __call__(self, *args, **kwargs): """Provide the expected behavior inside decorated classes.""" return self.func(*args, **kwargs) def __get__(self, obj, objtype=None): """Re-implement the standard behavior for undecorated classes.""" return self.func def deco(cls): @functools.wraps(cls) def factory(*args, **kwargs): # pretend there is some logic here determining # whether to make a new instance or not return cls(*args, **kwargs) return factory @deco class Stub: @staticmethod def do_things(): """Call this like Stub.do_things(), with no arguments or instance.""" print "Doing things!" Indeed it works exactly as expected: >>> Stub <function Stub at 0x...> >>> Stub.do_things <__main__.staticmethod object at 0x...> >>> Stub.do_things() Doing things! >>> Stub() <__main__.Stub instance at 0x...> >>> Stub().do_things <function do_things at 0x...> >>> Stub().do_things() Doing things! What approach would you take to make a staticmethod behave as expected inside a decorated class? Is this the best way? Why doesn't the built-in staticmethod implement __call__ on its own in order for this to just work without any fuss? Thanks.

    Read the article

  • SOA Suite 11g Native Format Builder Complex Format Example

    - by bob.webster
    This rather long posting details the steps required to process a grouping of fixed length records using Format Builder.   If it’s 10 pm and you’re feeling beat you might want to leave this until tomorrow.  But if it’s 10 pm and you need to get a Format Builder Complex template done, read on… The goal is to process individual orders from a file using the 11g File Adapter and Format Builder Sample Data =========== 001Square Widget            0245.98 102Triagular Widget         1120.00 403Circular Widget           0099.45 ORD8898302/01/2011 301Hexagon Widget         1150.98 ORD6735502/01/2011 The records are fixed length records representing a number of logical Order records. Each order record consists of a number of item records starting with a 3 digit number, followed by a single Summary Record which starts with the constant ORD. How can this file be processed so that the first poll returns the first order? 001Square Widget            0245.98 102Triagular Widget         1120.00 403Circular Widget           0099.45 ORD8898302/01/2011 And the second poll returns the second order? 301Hexagon Widget           1150.98 ORD6735502/01/2011 Note: if you need more than one order per poll, that’s also possible, see the “Multiple Messages” field in the “File Adapter Step 6 of 9” snapshot further down.   To follow along with this example you will need - Studio Edition Version 11.1.1.4.0    with the   - SOA Extension for JDeveloper 11.1.1.4.0 installed Both can be downloaded from here:  http://www.oracle.com/technetwork/middleware/soasuite/downloads/index.html You will not need a running WebLogic Server domain to complete the steps and Format Builder tests in this article.     Start with a SOA Composite containing a File Adapter The Format Builder is part of the File Adapter so start by creating a new SOA Project and Composite. Here is a quick summary for those not familiar with these steps - Start JDeveloper - From the Main Menu choose File->New - In the New Gallery window that opens Expand the “General” category and Select the Applications node.   Then choose SOA Application from the Items section on the right.  Finally press the OK button. - In Step 1 of the “Create SOA Application wizard” that appears enter an Application Name and an Directory of your     choice,   then press the Next button. - In Step 2 of the “Create SOA Application wizard”, press the Next button leaving all entries as defaulted. - In Step 3 of the “Create SOA Application wizard”, Enter a composite name of your choice and Press the Finish   Button These steps result in a new Application and SOA Project. The SOA Project contains a composite.xml file which is opened and shown below. For our example we have not defined a Mediator or a BPEL process to minimize the steps, but one or the other would eventually be needed to use the File Adapter we are about to create. Drag and drop the File Adapter icon from the Component Pallette onto either the LEFT side of the diagram under “Exposed Services” or the right side under “External References”.  (See the Green Circle in the image below).  Placing the adapter on the left side would indicate the file being processed is inbound to the composite, if the adapter is placed on the right side then the data is outbound to a file.     Note that the same Format Builder definition can be used in both directions.  
For example we could use the format with a File Adapter on the left side of the composite to parse fixed data into XML, modify the data in our Composite or BPEL process and then use the same Format Builder definition with a File adapter on the right side of the composite to write the data back out in the same fixed data format When the File Adapter is dropped on the Composite the File Adapter Wizard Appears. Skip Past the first page, Step 1 of 9 by pressing the Next button. In Step 2 enter a service name of your choice as shown below, then press Next   When the Native Format Builder appears, skip the welcome page by pressing next. Also press the Next button to accept the settings on Step 3 of 9 On Step 4, select Read File and press the Next button as shown below.   On Step 5 enter a directory that will contain a file with the input data, then  Press the Next button as shown below. In step 6, enter *.txt or another file format to select input files from the input directory mentioned in step 5. ALSO check the “Files contain Multiple Messages” checkbox and set the “Publish Messages in Batches of” field to 1.  The value can be set higher to increase the number of logical order group records returned on each poll of the file adapter.  In other words, it determines the number of Orders that will be sent to each instance of a Mediator or Composite processing using the File Adapter.   Skip Step 7 by pressing the Next button In Step 8 press the Gear Icon on the right side to load the Native Format Builder.       Native Format Builder  appears Before diving into the format, here is an overview of the process. Approach - Bottom up Assuming an Order is a grouping of item records and a summary record…. - Define a separate  Complex Type for each Record Type found in the group.    (One for itemRecord and one for summaryRecord) - Define a Complex Type to contain the Group of Record types defined above   (LogicalOrderRecord) - Define a top level element to represent an order.  (order)   The order element will be of type LogicalOrderRecord   Defining the Format In Step 1 select   “Create new”  and  “Complex Type” and “Next”   In Step two browse to and select a file containing the test data shown at the start of this article. A link is provided at the end of this article to download a file containing the test data. Press the Next button     In Step 3 Complex types must be define for each type of input record. Select the Root-Element and Click on the Add Complex Type icon This creates a new empty complex type definition shown below. The fastest way to create the definition is to highlight the first line of the Sample File data and drag the line onto the  <new_complex_type> Format Builder introspects the data and provides a grid to define additional fields. Change the “Complex Type Name” to  “itemRecord” Then click on the ruler to indicate the position of fixed columns.  Drag the red triangle icons to the exact columns if necessary. Double click on an existing red triangle to remove an unwanted entry. In the case below fields are define in columns 0-3, 4-28, 29-eol When the field definitions are correct, press the “Generate Fields” button. Field entries named C1, C2 and C3 will be created as shown below. Click on the field names and rename them from C1->itemNum, C2->itemDesc and C3->itemCost  When all the fields are correctly defined press OK to save the complex type.        Next, the process is repeated to define a Complex Type for the SummaryRecord. 
Select the Root-Element in the schema tree and press the new complex type icon. Then highlight and drag the Summary Record from the sample data onto the <new_complex_type>. Change the complex type name to "summaryRecord". Mark the fixed fields for Order Number and Order Date. Press the Generate Fields button and rename C1 and C2 to itemNum and orderDate respectively. The last complex type to be defined is a type to hold the group of items and the summary record. Select the Root-Element in the schema tree and click the new complex type icon. Select the "<new_complex_type>" entry and click the pencil icon. On the Complex Type Details page change the name and type of each input field. Change line 1 to be named item and set the Type to "itemRecord". Change line 2 to be named summary and set the Type to "summaryRecord". We also need to indicate that itemRecords repeat in the input file. Click the pencil icon at the right side of the item line. On the Edit Details page change the "Max Occurs" entry from 1 to UNBOUNDED. We also need to indicate how to identify an itemRecord. Since each item record has "." in column 32 we can use this fact to differentiate an item record from a summary record. Change the "Look Ahead" field to the value 32 and enter a period in the "Look For" field. Press the OK button to save the entry. Finally, it's time to create a top level element to represent an order. Select the "Root-Element" in the schema tree and press the New element icon. Click on the <new_element> and press the pencil icon. Set the Element Name to "order" and change the Data Type to "logicalOrderRecord". Press the OK button to save the element definition. The final definition should match the screenshot below. Press the Next Button to view the definition source. Press the Test Button to test the definition. Press the Green Triangle Icon to run the test. And we are presented with an unwelcome error. The error states that the processor ran out of data while working through the definition. The processor was unable to differentiate between itemRecords and summaryRecords and therefore treated the entire file as a list of itemRecords. At end of file, the "summary" portion of the logicalOrderRecord remained unprocessed but mandatory. The root cause of this error is the loss of our "lookAhead" definition used to identify itemRecords. This appears to be a bug in the Native Format Builder 11.1.1.4.0. Luckily, a simple workaround exists. Press the Cancel button and return to the "Step 4 of 4" Window. Manually add the nxsd:lookAhead="32" nxsd:lookFor="." attributes after the maxOccurs attribute of the item element, as shown in the highlighted text below. When the lookAhead and lookFor attributes have been added, press the Test button and on the Test page press the Green Triangle. The test is now successful: the first order in the file is returned by the File Adapter. Below is a complete listing of the Result XML from the right column of the screen above. Try running it. The downloaded input test file and completed schema file can be used for testing without following all the Native Format Builder steps in this example. Use the following link to download a file containing the sample data. Download Sample Input Data This is the best approach rather than cutting and pasting the input data at the top of the article. Since the data is fixed length it's very important to watch out for trailing spaces in the data and to ensure an eol character at the end of every line. 
The download file is correctly formatted. The final schema definition can be downloaded at the following link Download Completed Schema Definition   - Save the inputData.txt file to a known location like the xsd folder in your project. - Save the inputData_6.xsd file to the xsd folder in your project. - At step 1 in the Native Format Builder wizard  (as shown above) check the “Edit existing” radio button,    then browse and select the inputData_6.xsd file - At step 2 of the Format Builder configuration Wizard (as shown above) supply the path and filename for    the inputData.txt file. - You can then proceed to the test page and run a test. - Remember the wizard bug will drop the lookAhead and lookFor attributes,  you will need to manually add   nxsd:lookAhead="32" nxsd:lookFor="."    after the maxOccurs attribute of the item element in the   LogicalOrderRecord Complex Type.  (as shown above)   Good Luck with your Format Project

    Read the article

  • SQLAuthority News – SQLPASS Nov 8-11, 2010-Seattle – An Alternative Look at Experience

    - by pinaldave
    I recently attended most prestigious SQL Server event SQLPASS between Nov 8-11, 2010 at Seattle. I have only one expression for the event - Best Summit Ever This year the summit was at its best. Instead of writing about my usual routine or the event, I am going to write about the interesting things I did and how I felt about it! Best Summit Ever Trip to Seattle! This was my second trip to Seattle this year and the journey is always long. Here is the travel stats on how long it takes to get to Seattle: 24 hours official air time 36 hours total travel time (connection waits and airport commute) Every time I travel to USA I gain a day and when I travel back to home, I lose a day. However, the total traveling time is around 3 days. The journey is long and very exhausting. However, it is all worth it when you’re attending an event like SQLPASS. Here are few things I carry when I travel for a long journey: Dry Snack packs – I like to have some good Indian Dry Snacks along with me in my backpack so I can have my own snack when I want Amazon Kindle – Loaded with 80+ books A physical book – This is usually a very easy to read book I do not watch movies on the plane and usually spend my time reading something quick and easy. If I can go to sleep, I go for it. I prefer to not to spend time in conversation with the guy sitting next to me because usually I end up listening to their biography, which I cannot blog about. Sheraton Seattle SQLPASS In any case, I love to go to Seattle as the city is great and has everything a brilliant metropolis has to offer. The new Light Train is extremely convenient, and I can take it directly from the airport to the city center. My hotel, the Sheraton, was only few meters (in the USA people count in blocks – 3 blocks) away from the train station. This time I saved USD 40 each round trip due to the Light Train. Sessions I attended! Well, I really wanted to attend most of the sessions but there was great dilemma of which ones to choose. There were many, many sessions to be attended and at any given time there was more than one good session being presented. I had decided to attend sessions in area performance tuning and I attended quite a few sessions this year, compared to what I was able to do last year. Here are few names of the speakers whose sessions I attended (please note, following great speakers are not listed in any order. I loved them and I enjoyed their sessions): Conor Cunningham Rushabh Mehta Buck Woody Brent Ozar Jonathan Kehayias Chris Leonard Bob Ward Grant Fritchey I had great fun attending their sessions. The sessions were meaningful and enlightening. It is hard to rate any session but I have found that the insights learned in Conor Cunningham’s sessions are the highlight of the PASS Summit. Rushabh Mehta at Keynote SQLPASS   Bucky Woody and Brent Ozar I always like the sessions where the speaker is much closer to the audience and has real world experience. I think speakers who have worked in the real world deliver the best content and most useful information. Sessions I did not like! Indeed there were few sessions I did not like it and I am not going to name them here. However, there were strong reasons I did not like their sessions, and here is why: Sessions were all theory and had no real world connections. 
All technical questions ended with confusing answers (lots of “I will get back to you on it,” “it depends,” “let us take this offline” and many more…) “I am God” kind of attitude in the speakers For example, I attended a session of one very well known speaker who is a specialist for one particular area. I was bit late for the session and was surprised to see that in a room that could hold 350 people there were only 30 attendees. After sitting there for 15 minutes, I realized why lots of people left. Very soon I found I preferred to stare out the window instead of listening to that particular speaker. One on One Talk! Many times people ask me what I really like about PASS. I always say the experience of meeting SQL legends and spending time with them one on one and LEARNING! Here is the quick list of the people I met during this event and spent more than 30 minutes with each of them talking about various subjects: Pinal Dave and Brad Shulz Pinal Dave and Rushabh Mehta Michael Coles and Pinal Dave Rushabh Mehta – It is always pleasure to meet with him. He is a man with lots of energy and a passion for community. He recently told me that he really wanted to turn PASS into resource for learning for every SQL Server Developer and Administrator in the world. I had great in-depth discussion regarding how a single person can contribute to a community. Michael Coles – I consider him my best friend. It is always fun to meet him. He is funny and very knowledgeable. I think there are very few people who are as expert as he is in encryption and spatial databases. Worth meeting him every single time. Glenn Berry – A real friend of everybody. He is very a simple person and very true to his heart. I think there is not a single person in whole community who does not like him. He is a friends of all and everybody likes him very much. I once again had time to sit with him and learn so much from him. As he is known as Dr. DMV, I can be his nurse in the area of DMV. Brad Schulz – I always wanted to meet him but never got chance until today. I had great time meeting him in person and we have spent considerable amount of time together discussing various T-SQL tricks and tips. I do not know where he comes up with all the different ideas but I enjoy reading his blog and sharing his wisdom with me. Jonathan Kehayias – He is drill sergeant in US army. If you get the impression that he is a giant with very strong personality – you are wrong. He is very kind and soft spoken DBA with strong performance tuning skills. I asked him how he has kept his two jobs separate and I got very good answer – just work hard and have passion for what you do. I attended his sessions and his presentation style is very unique.  I feel like he is speaking in a language I understand. Louis Davidson – I had never had a chance to sit with him and talk about technology before. He has so much wisdom and he is very kind. During the dinner, I had talked with him for long time and without hesitation he started to draw a schema for me on the menu. It was a wonderful experience to learn from a master at the dinner table. He explained to me the real and practical differences between third normal form and forth normal form. Honestly I did not know earlier, but now I do. Erland Sommarskog – This man needs no introduction, he is very well known and very clear in conveying his ideas. I learned a lot from him during the course of year. Every time I meet him, I learn something new and this time was no exception. 
Joe Webb – Joey is all about community and people, we had interesting conversation about community, MVP and how one can be helpful to community without losing passion for long time. It is always pleasant to talk to him and of course, I had fun time. Ross Mistry – I call him my brother many times because he indeed looks like my cousin. He provided me lots of insight of how one can write book and how he keeps his books simple to appeal to all the readers. A wonderful person and great friend. Ola Hallgren - I did not know he was coming to the summit. I had great time meeting him and had a wonderful conversation with him regarding his scripts and future community activities. Blythe Morrow – She used to be integrated part of SQL Server Community and PASS HQ. It was wonderful to meet her again and re-connect. She is wonderful person and I had a great time talking to her. Solid Quality Mentors – It is difficult to decide who to mention here. Instead of writing all the names, I am going to include a photo of our meeting. I had great fun meeting various members of our global branches. This year I was sitting with my Spanish speaking friends and had great fun as Javier Loria from Solid Quality translated lots of things for me. Party, Party and Parties Every evening there were various parties. I did attend almost all of them. Every party had different theme but the goal of all the parties the same – networking. Here are the few parties where I had lots of fun: Dell Reception Party Exhibitor Party Solid Quality Fun Party Red Gate Friends Party MVP Dinner Microsoft Party MVP Dinner Quest Party Gameworks PASS Party Volunteer Party at Garage Solid Quality Mentors (10 Members out of 120) They were all great networking opportunities and lots of fun. I really had great time meeting people at the various parties. There were few people everywhere – well, I will say I am among them – who hopped parties. NDA – Not Decided Agenda During the event there were few meetings marked “NDA.” Someone asked me “why are these things NDA?”  My response was simple: because they are not sure themselves. NDA stands for Not Decided Agenda. Toys, Giveaways and Luggage I admit, I was like child in Gameworks and was playing to win soft toys. I was doing it for my daughter. I must thank all of the people who gave me their cards to try my luck. I won 4 soft-toys for my daughter and it was fun. Also, thanks to Angel who did a final toy swap with me to get the desired toy for my daughter. I also collected ducks from Idera, as my daughter really loves them. Solid Quality Booth Each of the exhibitors was giving away something and I got so much stuff that my luggage got quite a bit bigger when I returned. Best Exhibitor Idera had SQLDoctor (a real magician and fun guy) to promote their new tool SQLDoctor. I really had a great time participating in the magic myself. At one point, the magician made my watch disappear.  I have seen better magic before, but this time it caught me unexpectedly and I was taken by surprise. I won many ducks again. The Common Question I heard the following common questions: I have seen you somewhere – who are you? – I am Pinal Dave. I did not know that Pinal is your first name and Dave is your last name, how do you pronounce your last name again? – Da-way How old are you? – I am as old as I can be. Are you an Indian because you look like one? – I did not answer this one. Where are you from? This question was usually asked after looking at my badge which says India. So did you really fly from India? 
– Yes, because I have seasickness so I do not prefer the sea journey. How long was the journey? – 24/36/12 (air travel time/total travel time/time zone difference) Why do you write on SQLAuthority.com? – Because I want to. I remember your daughter looks like you. – Is this even a question? Of course, she is daddy’s little girl. There were so many other questions, I will have to write another blog post about it. SQLPASS Again, Best Summit Ever! Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: About Me, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, T SQL, Technology Tagged: SQLPASS

    Read the article

  • Synchronize Data between a Silverlight ListBox and a User Control

    - by psheriff
    One of the great things about XAML is the powerful data-binding capabilities. If you load up a list box with a collection of objects, you can display detail data about each object without writing any C# or VB.NET code. Take a look at Figure 1 that shows a collection of Product objects in a list box. When you click on a list box you bind the current Product object selected in the list box to a set of controls in a user control with just a very simple Binding statement in XAML.  Figure 1: Synchronizing a ListBox to a User Control is easy with Data Binding Product and Products Classes To illustrate this data binding feature I am going to just create some local data instead of using a WCF service. The code below shows a Product class that has three properties, namely, ProductId, ProductName and Price. This class also has a constructor that takes 3 parameters and allows us to set the 3 properties in an instance of our Product class. C#public class Product{  public Product(int productId, string productName, decimal price)  {    ProductId = productId;    ProductName = productName;    Price = price;  }   public int ProductId { get; set; }  public string ProductName { get; set; }  public decimal Price { get; set; }} VBPublic Class Product  Public Sub New(ByVal _productId As Integer, _                 ByVal _productName As String, _                 ByVal _price As Decimal)    ProductId = _productId    ProductName = _productName    Price = _price  End Sub   Private mProductId As Integer  Private mProductName As String  Private mPrice As Decimal   Public Property ProductId() As Integer    Get      Return mProductId    End Get    Set(ByVal value As Integer)      mProductId = value    End Set  End Property   Public Property ProductName() As String    Get      Return mProductName    End Get    Set(ByVal value As String)      mProductName = value    End Set  End Property   Public Property Price() As Decimal    Get      Return mPrice    End Get    Set(ByVal value As Decimal)      mPrice = value    End Set  End PropertyEnd Class To fill up a list box you need a collection class of Product objects. The code below creates a generic collection class of Product objects. In the constructor of the Products class I have hard-coded five product objects and added them to the collection. In a real-world application you would get your data through a call to service to fill the list box, but for simplicity and just to illustrate the data binding, I am going to just hard code the data. C#public class Products : List<Product>{  public Products()  {    this.Add(new Product(1, "Microsoft VS.NET 2008", 1000));    this.Add(new Product(2, "Microsoft VS.NET 2010", 1000));    this.Add(new Product(3, "Microsoft Silverlight 4", 1000));    this.Add(new Product(4, "Fundamentals of N-Tier eBook", 20));    this.Add(new Product(5, "ASP.NET Security eBook", 20));  }} VBPublic Class Products  Inherits List(Of Product)   Public Sub New()    Me.Add(New Product(1, "Microsoft VS.NET 2008", 1000))    Me.Add(New Product(2, "Microsoft VS.NET 2010", 1000))    Me.Add(New Product(3, "Microsoft Silverlight 4", 1000))    Me.Add(New Product(4, "Fundamentals of N-Tier eBook", 20))    Me.Add(New Product(5, "ASP.NET Security eBook", 20))  End SubEnd Class The Product Detail User Control Below is a user control (named ucProduct) that is used to display the product detail information seen in the bottom portion of Figure 1. 
This is very basic XAML that just creates a text block and a text box control for each of the three properties in the Product class. Notice the {Binding Path=[PropertyName]} on each of the text box controls. This means that if the DataContext property of this user control is set to an instance of a Product class, then the data in the properties of that Product object will be displayed in each of the text boxes. <UserControl x:Class="SL_SyncListBoxAndUserControl_CS.ucProduct"  xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"  xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"  HorizontalAlignment="Left"  VerticalAlignment="Top">  <Grid Margin="4">    <Grid.RowDefinitions>      <RowDefinition Height="Auto" />      <RowDefinition Height="Auto" />      <RowDefinition Height="Auto" />    </Grid.RowDefinitions>    <Grid.ColumnDefinitions>      <ColumnDefinition MinWidth="120" />      <ColumnDefinition />    </Grid.ColumnDefinitions>    <TextBlock Grid.Row="0"               Grid.Column="0"               Text="Product Id" />    <TextBox Grid.Row="0"             Grid.Column="1"             Text="{Binding Path=ProductId}" />    <TextBlock Grid.Row="1"               Grid.Column="0"               Text="Product Name" />    <TextBox Grid.Row="1"             Grid.Column="1"             Text="{Binding Path=ProductName}" />    <TextBlock Grid.Row="2"               Grid.Column="0"               Text="Price" />    <TextBox Grid.Row="2"             Grid.Column="1"             Text="{Binding Path=Price}" />  </Grid></UserControl> Synchronize ListBox with User Control You are now ready to fill the list box with the collection class of Product objects and then bind the SelectedItem of the list box to the Product detail user control. The XAML below is the complete code for Figure 1. <UserControl x:Class="SL_SyncListBoxAndUserControl_CS.MainPage"  xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"  xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"  xmlns:src="clr-namespace:SL_SyncListBoxAndUserControl_CS"  VerticalAlignment="Top"  HorizontalAlignment="Left">  <UserControl.Resources>    <src:Products x:Key="productCollection" />  </UserControl.Resources>  <Grid x:Name="LayoutRoot"        Margin="4"        Background="White">    <Grid.RowDefinitions>      <RowDefinition Height="Auto" />      <RowDefinition Height="*" />    </Grid.RowDefinitions>    <ListBox x:Name="lstData"             Grid.Row="0"             BorderBrush="Black"             BorderThickness="1"             ItemsSource="{Binding                   Source={StaticResource productCollection}}"             DisplayMemberPath="ProductName" />    <src:ucProduct x:Name="prodDetail"                   Grid.Row="1"                   DataContext="{Binding ElementName=lstData,                                          Path=SelectedItem}" />  </Grid></UserControl> The first step to making this happen is to reference the Silverlight project (SL_SyncListBoxAndUserControl_CS) where the Product and Products classes are located. I added this namespace and assigned it a namespace prefix of “src” as shown in the line below: xmlns:src="clr-namespace:SL_SyncListBoxAndUserControl_CS" Next, to use the data from an instance of the Products collection, you create a UserControl.Resources section in the XAML and add a tag that creates an instance of the Products class and assigns it a key of “productCollection”.   
<UserControl.Resources>    <src:Products x:Key="productCollection" />  </UserControl.Resources> Next, you bind the list box to this productCollection object using the ItemsSource property. You bind the ItemsSource of the list box to the static resource named productCollection. You can then set the DisplayMemberPath attribute of the list box to any property of the Product class that you want. In the XAML below I used the ProductName property. <ListBox x:Name="lstData"         ItemsSource="{Binding             Source={StaticResource productCollection}}"         DisplayMemberPath="ProductName" /> You now need to create an instance of the ucProduct user control below the list box. You do this by once again referencing the "src" namespace and typing in the name of the user control. You then set the DataContext property on this user control to a binding. The binding uses the ElementName attribute to bind to the list box name, in this case "lstData". The Path of the data is SelectedItem. These two attributes together tell Silverlight to bind the DataContext to the selected item of the list box. That selected item is a Product object. So, once this is bound, the bindings on each text box in the user control are updated and display the current product information. <src:ucProduct x:Name="prodDetail"               DataContext="{Binding ElementName=lstData,                                      Path=SelectedItem}" /> Summary Once you understand the basics of data binding in XAML, you eliminate a lot of code that is otherwise needed to move data into controls and out of controls back into an object. Connecting two controls together is easy by just binding using the ElementName and Path properties of the Binding markup extension. Another good tip out of this blog is to use user controls and set the DataContext of the user control to have all of the data on the user control update through the bindings. NOTE: You can download the complete sample code (in both VB and C#) at my website. http://www.pdsa.com/downloads. Choose Tips & Tricks, then "SL – Synchronize List Box Data with User Control" from the drop-down. Good Luck with your Coding, Paul Sheriff ** SPECIAL OFFER FOR MY BLOG READERS ** Visit http://www.pdsa.com/Event/Blog for a free eBook on "Fundamentals of N-Tier".

    Read the article

  • AngularJs ng-cloak Problems on large Pages

    - by Rick Strahl
    I’ve been working on a rather complex and large Angular page. Unlike a typical AngularJs SPA style ‘application’ this particular page is just that: a single page with a large amount of data on it that has to be visible all at once. The problem is that when this large page loads it flickers and displays template markup briefly before kicking into its actual content rendering. This is is what the Angular ng-cloak is supposed to address, but in this case I had no luck getting it to work properly. This application is a shop floor app where workers need to see all related information in one big screen view, so some of the benefits of Angular’s routing and view swapping features couldn’t be applied. Instead, we decided to have one very big view but lots of ng-controllers and directives to break out the logic for code separation. For code separation this works great – there are a number of small controllers that deal with their own individual and isolated application concerns. For HTML separation we used partial ASP.NET MVC Razor Views which made breaking out the HTML into manageable pieces super easy and made migration of this page from a previous server side Razor page much easier. We were also able to leverage most of our server side localization without a lot of  changes as a bonus. But as a result of this choice the initial HTML document that loads is rather large – even without any data loaded into it, resulting in a fairly large DOM tree that Angular must manage. Large Page and Angular Startup The problem on this particular page is that there’s quite a bit of markup – 35k’s worth of markup without any data loaded, in fact. It’s a large HTML page with a complex DOM tree. There are quite a lot of Angular {{ }} markup expressions in the document. Angular provides the ng-cloak directive to try and hide the element it cloaks so that you don’t see the flash of these markup expressions when the page initially loads before Angular has a chance to render the data into the markup expressions.<div id="mainContainer" class="mainContainer boxshadow" ng-app="app" ng-cloak> Note the ng-cloak attribute on this element, which here is an outer wrapper element of the most of this large page’s content. ng-cloak is supposed to prevent displaying the content below it, until Angular has taken control and is ready to render the data into the templates. Alas, with this large page the end result unfortunately is a brief flicker of un-rendered markup which looks like this: It’s brief, but plenty ugly – right?  And depending on the speed of the machine this flash gets more noticeable with slow machines that take longer to process the initial HTML DOM. ng-cloak Styles ng-cloak works by temporarily hiding the marked up element and it does this by essentially applying a style that does this:[ng\:cloak], [ng-cloak], [data-ng-cloak], [x-ng-cloak], .ng-cloak, .x-ng-cloak { display: none !important; } This style is inlined as part of AngularJs itself. 
If you looking at the angular.js source file you’ll find this at the very end of the file:!angular.$$csp() && angular.element(document) .find('head') .prepend('<style type="text/css">@charset "UTF-8";[ng\\:cloak],[ng-cloak],' + '[data-ng-cloak],[x-ng-cloak],.ng-cloak,.x-ng-cloak,' + '.ng-hide{display:none !important;}ng\\:form{display:block;}' '.ng-animate-block-transitions{transition:0s all!important;-webkit-transition:0s all!important;}' + '</style>'); This is is meant to initially hide any elements that contain the ng-cloak attribute or one of the other Angular directive permutation markup. Unfortunately on this particular web page ng-cloak had no effect – I still see the flicker. Why doesn’t ng-cloak work? The problem is of course – timing. The problem is that Angular actually needs to get control of the page before it ever starts doing anything like process even the ng-cloak attribute (or style etc). Because this page is rather large (about 35k of non-data HTML) it takes a while for the DOM to actually plow through the HTML. With the Angular <script> tag defined at the bottom of the page after the HTML DOM content there’s a slight delay which causes the flicker. For smaller pages the initial DOM load/parse cycle is so fast that the markup never shows, but with larger content pages it may show and become an annoying problem. Workarounds There a number of simple ways around this issue and some of them are hinted on in the Angular documentation. Load Angular Sooner One obvious thing that would help with this is to load Angular at the top of the page  BEFORE the DOM loads and that would give it much earlier control. The old ng-cloak documentation actually recommended putting the Angular.js script into the header of the page (apparently this was recently removed), but generally it’s not a good practice to load scripts in the header for page load performance. This is especially true if you load other libraries like jQuery which should be loaded prior to loading Angular so it can use jQuery rather than its own jqLite subset. This is not something I normally would like to do and also something that I’d likely forget in the future and end up right back here :-). Use ng-include for Child Content Angular supports nesting of child templates via the ng-include directive which essentially delay loads HTML content. This helps by removing a lot of the template content out of the main page and so getting control to Angular a lot sooner in order to hide the markup template content. In the application in question, I realize that in hindsight it might have been smarter to break this page out with client side ng-include directives instead of MVC Razor partial views we used to break up the page sections. Razor partial views give that nice separation as well, but in the end Razor puts humpty dumpty (ie. the HTML) back together into a whole single and rather large HTML document. Razor provides the logical separation, but still results in a large physical result document. But Razor also ended up being helpful to have a few security related blocks handled via server side template logic that simply excludes certain parts of the UI the user is not allowed to see – something that you can’t really do with client side exclusion like ng-hide/ng-show – client side content is always there whereas on the server side you can simply not send it to the client. Another reason I’m not a huge fan of ng-include is that it adds another HTTP hit to a request as templates are loaded from the server dynamically as needed. 
Given that this page was already heavy with resources adding another 10 separate ng-include directives wouldn’t be beneficial :-) ng-include is a valid option if you start from scratch and partition your logic. Of course if you don’t have complex pages, having completely separate views that are swapped in as they are accessed are even better, but we didn’t have this option due to the information having to be on screen all at once. Avoid using {{ }}  Expressions The biggest issue that ng-cloak attempts to address isn’t so much displaying the original content – it’s displaying empty {{ }} markup expression tags that get embedded into content. It gives you the dreaded “now you see it, now you don’t” effect where you sometimes see three separate rendering states: Markup junk, empty views, then views filled with data. If we can remove {{ }} expressions from the page you remove most of the perceived double draw effect as you would effectively start with a blank form and go straight to a filled form. To do this you can forego {{ }}  expressions and replace them with ng-bind directives on DOM elements. For example you can turn:<div class="list-item-name listViewOrderNo"> <a href='#'>{{lineItem.MpsOrderNo}}</a> </div>into:<div class="list-item-name listViewOrderNo"> <a href="#" ng-bind="lineItem.MpsOrderNo"></a> </div> to get identical results but because the {{ }}  expression has been removed there’s no double draw effect for this element. Again, not a great solution. The {{ }} syntax sure reads cleaner and is more fluent to type IMHO. In some cases you may also not have an outer element to attach ng-bind to which then requires you to artificially inject DOM elements into the page. This is especially painful if you have several consecutive values like {{Firstname}} {{Lastname}} for example. It’s an option though especially if you think of this issue up front and you don’t have a ton of expressions to deal with. Add the ng-cloak Styles manually You can also explicitly define the .css styles that Angular injects via code manually in your application’s style sheet. By doing so the styles become immediately available and so are applied right when the page loads – no flicker. I use the minimal:[ng-cloak] { display: none !important; } which works for:<div id="mainContainer" class="mainContainer dialog boxshadow" ng-app="app" ng-cloak> If you use one of the other combinations add the other CSS selectors as well or use the full style shown earlier. Angular will still load its version of the ng-cloak styling but it overrides those settings later, but this will do the trick of hiding the content before that CSS is injected into the page. Adding the CSS in your own style sheet works well, and is IMHO by far the best option. The nuclear option: Hiding the Content manually Using the explicit CSS is the best choice, so the following shouldn’t ever be necessary. But I’ll mention it here as it gives some insight how you can hide/show content manually on load for other frameworks or in your own markup based templates. Before I figured out that I could explicitly embed the CSS style into the page, I had tried to figure out why ng-cloak wasn’t doing its job. After wasting an hour getting nowhere I finally decided to just manually hide and show the container. The idea is simple – initially hide the container, then show it once Angular has done its initial processing and removal of the template markup from the page. You can manually hide the content and make it visible after Angular has gotten control. 
To do this I used: <div id="mainContainer" class="mainContainer boxshadow" ng-app="app" style="display:none"> Notice the display: none style that explicitly hides the element initially on the page. Then once Angular has run its initialization and effectively processed the template markup on the page you can show the content. For Angular this ‘ready’ event is the app.run() function: app.run( function ($rootScope, $location, cellService) { $("#mainContainer").show(); … }); This effectively removes the display:none style and the content displays. By the time app.run() fires the DOM is ready to be displayed with filled data or at least empty data – Angular has gotten control. Edge Case Clearly this is an edge case. In general the initial HTML pages tend to be reasonably sized and the load time for the HTML and Angular is fast enough that there's no flicker between the rendering times. This only becomes an issue as the initial pages get rather large. Regardless – if you have an Angular application it's probably a good idea to add the CSS style into your application's CSS (or a common shared one) just to make sure that content is always hidden. You never know how slow of a browser somebody might be running and while your super fast dev machine might not show any flicker, grandma's old XP box very well might… © Rick Strahl, West Wind Technologies, 2005-2014. Posted in Angular, JavaScript, CSS, HTML

    Read the article

  • SOA Community Newsletter: nouvelle lettre !

    - by mseika
    SOA PARTNER COMMUNITY NEWSLETTER AUGUST 2012 Dear SOA partner community member, Have you submitted your feedback on the SOA Partner Community Survey 2012? This is the last chance to participate in the survey. We recommend you complete the survey and help us to improve our SOA Community. Thanks to all attendees and trainers for their participation in the excellent Fusion Middleware Summer Camps held in Lisbon and Munich. I would also like to thank you for the great feedback and the nice reports provided by AMIS Technology Blog & Middleware by Link Consulting. Most of our courses have been overbooked; if you did not get a chance or missed it, we offer a wide range of online training and the course material. The key take-away from the advanced BPM course is to become an expert in ADF. Here is the course from Grant Ronald, Learn Advanced ADF online, available now. The Link Consulting Team became experts in SOA Governance with EAMS and Oracle Enterprise Repository! We always encourage our community members to share their best practices and are very keen to publish them. Please let us know if you want to share your best practices through this medium. We encourage you to make use of the Specialization benefits - this month we are giving an opportunity to Promote Your SOA & BPM Events. Jürgen Kress, Oracle SOA & BPM Partner Adoption EMEA NEW CONTENT Presentations & Training material OFM Summer Camps Promote Your SOA & BPM Events Advanced ADF Online, For Free By Grant BPM 11g Customer Stories & Solution Catalog & Process Accelerators Delivering SOA Governance with EAMS by Link Consulting Team WebLogic Server Provisioning and Patching News from our Partners & Community Updated material by Oracle Connect and Network SOA Blogs SOA on Facebook SOA on LinkedIn SOA on Twitter Mix SOA Forum SOA Workspace PRESENTATIONS & TRAINING MATERIAL OFM SUMMER CAMPS Thanks to all attendees who invested their time and utilized the opportunity to attend the Summer Camps! Due to the high demand for most of the trainings, we had a long waiting list with many more partners who were keen to attend. We would like to give our special thanks to all trainers, who delivered excellent workshops! Most of the presentations and course material have been posted on our SOA Community Workspace and WebLogic Community Workspace. You can access the content only if you are a registered community member. To register for the SOA Community please click here. You can register for the WebLogic Community here. To find out the first impressions of the event please visit our Facebook pages: www.facebook.com/WebLogicCommunity & www.facebook.com/soacommunity or the Picasa Album. Thanks for the excellent blog posts from AMIS Technology Blog & Middleware by Link Consulting. Let us know if you published a twitter blog on @soacommunity & @wlscommunity. We will be pleased to publish it in our Newsletters. BPM Course Quotes “Its always easy, if you know, what you are doing” - Torsten Winterberg, Opitz “The best ideas are the ideas from the best” - Filipe Sequeria, Primesoft “Best invest in the education in the last 12 months” - Richard Schaller, IPT “Practice best practice with the best instructor” - Graham Lamond Capgemini “If you have basic BPM knowledge, this is the course to really mater it” - Diogo Henriques Link Consulting “Very good trainers lot of work. 
Lot of fun as well” - Matthias Gris Workflow Factory “If you like to accelerate in Oracle come to the training to bring it all together” - Marcel van der Glind, Amis ADF Course Quotes "Excellent training, great opportunity to network!" - Frank Houweling, Amis "Lots of fun and good ideas" - Ana Santiago, GFI "Learn ADF, worth it Fusion Apps is the future" - Miguel Delgadillo, STO Consulting "The best way to learn Fusion Middleware from the #1" Alexandro Montantes, STO Consulting "Be advanced to to be the first” - Dimitar Petrov Fadata "Great opportunity to suck all the knowledge out of some very experienced product managers” - Wilfred von der Deijl, The Future Group WebLogic Course Quotes “Oracle trainings are the best” - Pedro Neto Novobas“ "Excellent training, well organized” - Pedro Antunh, Capgemini “This course dives you into Oracle WebLogic giving you a quick start on benefiting from Fusion Apps” - Leonardo Fernandes, Outsystems Additional Quotes “Thanks a lot again for organizing such a great and informative Summer Camp. Both training and networking were organized very professionally. I have gained tons of very useful Info, which will definitely help to increase quality of our future projects.” - Daniel Fasko fss-group.com I didn’t get the chance yesterday to thank you for a most enjoyable and thoroughly educational time I had in Munich over the last few days.” - Jeroen Bakker Ordina “Just to congratulate you on a great event, not only today but also in the previous days of training. As we know, a very good organization and, as a native Portuguese that knows Lisbon very good, a nice choice of places to visit. Looking forward to come again next year.” Pedro Miguel Neto, Novobase PROMOTE YOUR SOA & BPM EVENTS The Partner Event Publisher has just been made available to all SOA & BPM specialized partners in EMEA. Partners now have the opportunity to publish their events to theOracle.com/events site and spread the word on their upcoming live in-person and/or live webcast events. See the demo below and click here to read more information. ADVANCED ADF ONLINE, FOR FREE BY GRANT The second part of the advanced ADF online eCourse is Live now! This covers the advanced topics of region and region interaction as well as getting down and dirty with some of the layout features of ADF Faces, skinning and DVT components. The aim of this course is to give you a self-paced learning aid which covers the more advanced topics of ADF development. The content is developed by Product Management and our Curriculum development teams and is based on advanced training material we have been running internally for about 18 months. We will get started on the next chapter, but in the meantime, please have a look at chapters one and two. Back to top BPM 11G CUSTOMER STORIES & SOLUTION CATALOG & PROCESS ACCELERATORS Stories Everyone loves a good story on planning or implementing a BPM strategy. Everyone wants to hear how it was done before?, what worked?, what was achieved? If you have achieved success with BPM, we are very keen to hear your stories and examples of how your customers use it. We receive lots of requests from people who are thinking of using BPM to solve a specific problem or in combination with a specific technology to talk to someone who has done it before. These stories are invaluable. Drop down the details of anything you think is relevant with a bit of detail and we will follow up on it. 
As one good deed deserves another, we will do our best to give you stories if you need them, to show that where you are going, others have trodden before. Send your stories to us using this e-mail link and we will share them among other like-minded people.

Solution Catalogue This summer, Oracle is launching a solution catalogue specifically intended for partners. If you have delivered a successful implementation in BPM and think it could be reused and applied again in a similar scenario, in the same industry or in a similar environment, then we are keen to know about it and will add it to the solution catalogue. The solution catalogue will showcase successful BPM solutions both inside and outside Oracle. Get in touch with us on this e-mail link and we will make sure to add your solution.

Process Accelerators Finally, if there are specific processes that you are an expert on, that you have implemented at a customer, and that you want to work with us on getting productised, then we would love to know about it. The process accelerator programme is explained in the most recent SOA/BPM Community Newsletter, but again feel free to contact us if you want to get involved. Good luck with BPM and let us know how we can help. Barry O'Reilly, Director BPM, [email protected]

DELIVERING SOA GOVERNANCE WITH EAMS BY LINK CONSULTING TEAM Over the last 12 years Link Consulting has been building its presence in specific areas such as Governance and Architecture, both in terms of practices and methodologies and in terms of products, know-how and technological expertise. The Enterprise Architecture Management System - Oracle Enterprise Edition (EAMS - OER Edition) is the result of this experience. It combines the architecture management solution with OER in order to deliver a product specialized for SOA Governance, one that brings together the best of both worlds in a solution that enables SOA Governance projects, initiatives and programs.

Enterprise Architecture Management System Enterprise Architecture Management System (EAMS) is an automation-based solution that enables the efficient management of Enterprise Architectures. The solution uses configured enterprise repositories and takes advantage of their features to provide automation capabilities to the users. EAMS provides capabilities to create/customize/analyze repository data, architectural blueprints, reports and analytic charts.

Oracle Enterprise Repository Oracle Enterprise Repository (OER) is one of the major and central elements of the Oracle SOA Governance solution. Oracle Enterprise Repository provides the tools to manage and govern the metadata for any type of software asset, from business processes and services to patterns, frameworks, applications, components, and models. OER maps the relationships and inter-dependencies that connect those assets to improve impact analysis, promote and optimize their reuse, and measure their impact on the bottom line. It provides the visibility, feedback, controls, and analytics to keep your SOA on track to deliver business value. The intense focus on automation helps to overcome barriers to SOA adoption and streamline governance throughout the lifecycle. Core capabilities of OER include: Asset Management, Asset Lifecycle Management, Usage Tracking, Service Discovery, Version Management, Dependency Analysis, Portfolio Management.

EAMS - OER Edition The solution takes the advantages and features of both products and combines them in a symbiotic tool that enhances the quality of SOA Governance initiatives and programs. 
EAMS is able to produce a vast number of outputs by combining its analytical engine, SOA-specific configurations and the assets in OER and other related tools, catalogs and repositories. The configurations encompass not only the extendable parametrization of the metadata but also fully configurable blueprints, PowerPoint reports, charts and queries.

The SOA blueprints The solution comes with a set of predefined architectural representations that help the organization better perceive its SOA landscape. More blueprints can easily be created in order to accommodate the organization's needs in terms of detail, audience and metadata.

Charts & Dashboards The solution encompasses a set of predefined charts and dashboards that promote a more agile way to control and explore the assets.

Time Based Visualization All representations are time-bound, and with EAMS - OER you can truly govern SOA with a complete view of the past, present and future. The solution delivers gap analysis and a project-oriented approach while taking into consideration the As-Was, As-Is and To-Be. Time-based visualization differentiating factors: extensive automation and maintenance of architectural representations; an organization-wide solution; easy access and navigation to and between all architectural artifacts and representations; a flexible meta-model with customization and extensibility capabilities; lifecycle management and enforcement of the time dimension over all the repository content; profile-based customization; comprehensive visibility; architectural alignment; friendly and striking user interfaces. For more information on EAMS visit us here. For more information on SOA visit us here.

WEBLOGIC SERVER PROVISIONING AND PATCHING For access to the Oracle demo systems please visit OPN and talk to your Partner Expert. SOA Suite and BPM Suite run on WebLogic! We are pleased to announce the availability of a WebLogic Server Management demo that showcases some of the key provisioning and patching capabilities of WebLogic Server Management Pack Enterprise Edition (EE). To learn more about these features - as well as other features of the pack - please visit the pack's saleskit page.

Demo Highlights The demo showcases the following capabilities: Patching Oracle WebLogic Servers, Standardizing WebLogic Server Patch Rollouts, Creating a WebLogic Domain Provisioning Profile, Cloning a WebLogic Domain from a Provisioning Profile, Deploying a Java EE Application, Scaling Out an Oracle WebLogic Cluster.

Demo Instructions Go to the DSS website for Oracle Partners. On the Standard Demo Launchpad page, under the “Software Lifecycle Automation” section, click on the link “EM Cloud Control 12c WLS Provisioning and Patching” (tagged as “NEW”). The specific demo launchpad page contains a link to the detailed demo script with instructions on how to show the demo.

    Read the article

  • Read XML Files using LINQ to XML and Extension Methods

    - by psheriff
    In previous blog posts I have discussed how to use XML files to store data in your applications. I showed you how to read those XML files from your project and get XML from a WCF service. One of the problems with reading XML files is when elements or attributes are missing. If you try to read that missing data, then a null value is returned. This can cause a problem if you are trying to load that data into an object and a null is read. This blog post will show you how to create extension methods to detect null values and return valid values to load into your object. The XML Data An XML data file called Product.xml is located in the \Xml folder of the Silverlight sample project for this blog post. This XML file contains several rows of product data that will be used in each of the samples for this post. Each row has 4 attributes; namely ProductId, ProductName, IntroductionDate and Price. <Products>  <Product ProductId="1"           ProductName="Haystack Code Generator for .NET"           IntroductionDate="07/01/2010"  Price="799" />  <Product ProductId="2"           ProductName="ASP.Net Jumpstart Samples"           IntroductionDate="05/24/2005"  Price="0" />  ...  ...</Products> The Product Class Just as you create an Entity class to map each column in a table to a property in a class, you should do the same for an XML file too. In this case you will create a Product class with properties for each of the attributes in each element of product data. The following code listing shows the Product class. public class Product : CommonBase{  public const string XmlFile = @"Xml/Product.xml";   private string _ProductName;  private int _ProductId;  private DateTime _IntroductionDate;  private decimal _Price;   public string ProductName  {    get { return _ProductName; }    set {      if (_ProductName != value) {        _ProductName = value;        RaisePropertyChanged("ProductName");      }    }  }   public int ProductId  {    get { return _ProductId; }    set {      if (_ProductId != value) {        _ProductId = value;        RaisePropertyChanged("ProductId");      }    }  }   public DateTime IntroductionDate  {    get { return _IntroductionDate; }    set {      if (_IntroductionDate != value) {        _IntroductionDate = value;        RaisePropertyChanged("IntroductionDate");      }    }  }   public decimal Price  {    get { return _Price; }    set {      if (_Price != value) {        _Price = value;        RaisePropertyChanged("Price");      }    }  }} NOTE: The CommonBase class that the Product class inherits from simply implements the INotifyPropertyChanged event in order to inform your XAML UI of any property changes. You can see this class in the sample you download for this blog post. Reading Data When using LINQ to XML you call the Load method of the XElement class to load the XML file. Once the XML file has been loaded, you write a LINQ query to iterate over the “Product” Descendants in the XML file. The “select” portion of the LINQ query creates a new Product object for each row in the XML file. You retrieve each attribute by passing each attribute name to the Attribute() method and retrieving the data from the “Value” property. The Value property will return a null if there is no data, or will return the string value of the attribute. The Convert class is used to convert the value retrieved into the appropriate data type required by the Product class. 
private void LoadProducts(){  XElement xElem = null;   try  {    xElem = XElement.Load(Product.XmlFile);     // The following will NOT work if you have missing attributes    var products =         from elem in xElem.Descendants("Product")        orderby elem.Attribute("ProductName").Value        select new Product        {          ProductId = Convert.ToInt32(            elem.Attribute("ProductId").Value),          ProductName = Convert.ToString(            elem.Attribute("ProductName").Value),          IntroductionDate = Convert.ToDateTime(            elem.Attribute("IntroductionDate").Value),          Price = Convert.ToDecimal(elem.Attribute("Price").Value)        };     lstData.DataContext = products;  }  catch (Exception ex)  {    MessageBox.Show(ex.Message);  }} This is where the problem comes in. If you have any missing attributes in any of the rows in the XML file, or if the data in the ProductId or IntroductionDate is not of the appropriate type, then this code will fail! The reason? There is no built-in check to ensure that the correct type of data is contained in the XML file. This is where extension methods can come in real handy. Using Extension Methods Instead of using the Convert class to perform type conversions as you just saw, create a set of extension methods attached to the XAttribute class. These extension methods will perform null-checking and ensure that a valid value is passed back instead of an exception being thrown if there is invalid data in your XML file. private void LoadProducts(){  var xElem = XElement.Load(Product.XmlFile);   var products =       from elem in xElem.Descendants("Product")      orderby elem.Attribute("ProductName").Value      select new Product      {        ProductId = elem.Attribute("ProductId").GetAsInteger(),        ProductName = elem.Attribute("ProductName").GetAsString(),        IntroductionDate =            elem.Attribute("IntroductionDate").GetAsDateTime(),        Price = elem.Attribute("Price").GetAsDecimal()      };   lstData.DataContext = products;} Writing Extension Methods To create an extension method you will create a class with any name you like. In the code listing below is a class named XmlExtensionMethods. This listing just shows a couple of the available methods such as GetAsString and GetAsInteger. These methods are just like any other method you would write except when you pass in the parameter you prefix the type with the keyword “this”. This lets the compiler know that it should add this method to the class specified in the parameter. public static class XmlExtensionMethods{  public static string GetAsString(this XAttribute attr)  {    string ret = string.Empty;     if (attr != null && !string.IsNullOrEmpty(attr.Value))    {      ret = attr.Value;    }     return ret;  }   public static int GetAsInteger(this XAttribute attr)  {    int ret = 0;    int value = 0;     if (attr != null && !string.IsNullOrEmpty(attr.Value))    {      if(int.TryParse(attr.Value, out value))        ret = value;    }     return ret;  }   ...  ...} Each of the methods in the XmlExtensionMethods class should inspect the XAttribute to ensure it is not null and that the value in the attribute is not null. If the value is null, then a default value will be returned such as an empty string or a 0 for a numeric value. Summary Extension methods are a great way to simplify your code and provide protection to ensure problems do not occur when reading data. 
You will probably want to create more extension methods to handle XElement objects as well, for when you use element-based XML. Feel free to extend these extension methods to accept a parameter which would be the default value if a null value is detected, or any other parameters you wish. NOTE: You can download the complete sample code at my website. http://www.pdsa.com/downloads. Choose “Tips & Tricks”, then "Read XML Files using LINQ to XML and Extension Methods" from the drop-down. Good Luck with your Coding, Paul D. Sheriff  
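Following up on the article's own suggestions (default-value parameters and XElement-based variants), here is a minimal sketch of what such helpers might look like. This is illustrative code, not part of the author's downloadable sample; the method names (GetAsStringOrDefault and friends) and the chosen defaults are assumptions for this sketch, using distinct names so they do not collide with the GetAsString/GetAsInteger methods shown above.

using System;
using System.Xml.Linq;

public static class XmlExtensionMethodsWithDefaults
{
  // Returns the attribute's text, or the supplied default when the
  // attribute is missing or empty.
  public static string GetAsStringOrDefault(this XAttribute attr, string defaultValue = "")
  {
    return (attr != null && !string.IsNullOrEmpty(attr.Value)) ? attr.Value : defaultValue;
  }

  public static DateTime GetAsDateTimeOrDefault(this XAttribute attr, DateTime defaultValue = default(DateTime))
  {
    DateTime value;
    // The null check covers a missing attribute; TryParse covers bad data
    // (e.g. "n/a") without throwing an exception.
    if (attr != null && DateTime.TryParse(attr.Value, out value))
      return value;
    return defaultValue;
  }

  public static decimal GetAsDecimalOrDefault(this XAttribute attr, decimal defaultValue = 0m)
  {
    decimal value;
    if (attr != null && decimal.TryParse(attr.Value, out value))
      return value;
    return defaultValue;
  }

  // Element-based counterpart for element-centric XML,
  // e.g. <Product><Price>799</Price></Product>
  public static decimal GetAsDecimalOrDefault(this XElement elem, decimal defaultValue = 0m)
  {
    decimal value;
    if (elem != null && decimal.TryParse(elem.Value, out value))
      return value;
    return defaultValue;
  }
}

With helpers like these a caller can write, for example, Price = elem.Attribute("Price").GetAsDecimalOrDefault(-1m) and get -1 back whenever the attribute is absent or malformed, which makes missing data easy to spot in the bound list.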

    Read the article

  • Prepping a conference

    - by Laurent Bugnion
    I have had the chance to talk at many conferences these past few years, and came up with a way to prepare them which works really well for me. Most importantly, it would make it quite easy to overcome an emergency (for example if my laptop would suddenly lose data). The whole code as well as the slides and other documents are in the cloud. I also use source control for my demos, so that I always have the latest and the greatest, but also a history of changes I made to my demos. Finally I have a system of code snippets which works great, and I often had very positive remarks from the audience regarding that. Putting everything in the cloud The one thing I used to be the most scared of was a sudden crash of my laptop, and being unable to restore in time for a conference. Most conferences ask speakers to send slides a few days (or weeks…) in advance, but let's face it, we all have last minute changes to our talks and I always come in the conference with updated slides that I pass to the management team. The answer to that dilemma used to be working off memory sticks, and that worked not bad. However last year I started putting all the documents relating to a conference in a DropBox folder, and that works great too. Obviously DropBox works only if you have connectivity, so if I for instance update slides while on an international flight, I cannot save to the cloud. The obvious answer to that is to backup everything on a memory stick… but I have to admit, I have been trusting my luck and working off my laptop HD and then synching everything to the cloud after landing. Of course on some US national flights you get WiFi on board, so in that case it is even simpler :) Usually after the conference is done, I remove the files from DropBox and copy them to their "final destination". They are backed up from there to BackBlaze, the great online backup service I am using routinely (I currently have about 90GB of data in BackBlaze). Outlining the presentations I like to have a written outline of my presentations written somewhere. I keep it simple, just write the various sections of the presentation with timing. I guess it is a remnant of the time when I was a private pilot, and using checklists for flight preparation. For example: Demo about designability 15' (0:37) Switch to Blend Open MainPage.xaml Create a DataTemplate ... Here I can immediately see during the presentation if I am taking too much time for my demo (0:37 is where I need to be when I am done with this section of the presentation, and 15' is the time that this particular section takes). I keep these sections reasonable, I don't detail every step of the preparation. Typically I have one such section for every 10-15 minutes of my talks. Yes, I am timing my presentations. I keep adjusting these numbers when I rehearse, and this really helps to feel more confident during the presentations. This is especially important for presentations that are long, like my MIX11 demo which clocked at 57 minutes (I had a lot of stuff to show…). Such presentations are risky, because if anything goes wrong, you will have to cut stuff, so the answer to that is: Rehearse, rehearse and when you're done rehearsing, rehearse a little more. I also have a "Preparation" section where I outline what I need to do before a presentation. For instance: Preparation Reboot in VHD Make sure MSN and Twitter are not running. Open VS10 and load demo Open Blend and load demo Run the WP7 emulator ... 
I typically start preparing my laptop an hour before the talk, starting everything I need to start and then putting my laptop to sleep. Saving and printing the outline, Timing Printing is a real problem because it is really hard to find a printer at most conference venues, and also quite hard in hotels. To solve that, I simply write everything in OneNote (synched to the cloud, now you start to know what I like ;) and then I print it to a PDF (I use CutePDFWriter) that I save to my Kindle. During the presentation, I read the outline off the Kindle (I mostly just need a quick check to see how I am timing). For timing during the presentation, I use the free tool ChronoGPS on my Windows Phone 7, but of course any phone these days has a clock/chrono application. In some conferences, they even have timers that the presenters can see, but they tend to count down and I prefer to count up… so I just use my own :) Source control for demos For demos, I create a separate folder and use Mercurial as source control. Mercurial has the huge advantage (over SVN or TFS) to work offline too, so I can commit while on a plane, and all the history is saved. Then when I have connectivity I push everything to the cloud (I am using the fantastic Trunksapp.com for my private repositories). Here too the obvious downside is the risk of losing my last changes if my laptop crashes before I can push to the cloud, and here too the obvious answer would be to work from a memory stick… though I have to admit I didn't do that lately (except when I was writing Silverlight 4 Unleashed, where I was really paranoid…) And code snippets? I am one of these presenters who hates to type in front of an audience. I can type really fast (writing two books has this advantage, it really teaches you to touch type and be fast at it) but in the context of an audience, on a stage where it is often damn cold (an issue I had a lot in past conferences, air conditioning can freeze your fingers and make it really hard to type), it doesn't work as well. I don't know for you, but I really dislike seeing a presentation where the speaker uses the backspace key more often than others ;) To solve that, I like to have my code ready in snippets, and drag them to the screen. Then I can spend time explaining each code snippet, while highlighting portions of the code (always highlight what you talk about, the audience often doesn't even see the cursor and doesn't know where you are on the screen!) Over the years I have used various solutions for code snippets, and now I have one which works really well… if you take a few precautions! I use the Visual Studio Toolbox. Preparing the code snippets You can store code snippets in the Toolbox for anything, XAML, C# etc. I arrange the snippets in the order in which I need them, which is a great way to remember what comes next in the presentation. I also separate them by topic, to make it easier to find them, for example when I switch to the slides and then back to the code. Remember that no matter how experienced you are, you will feel more nervous on stage than while you are preparing, so any way to make it easier for you is going to be beneficial to the audience. To store a code snippet, I do the following: Open the final demo that you want to show to the audience in Visual Studio. In your code, select a snippet of code that you want to explain in particular. Make sure that the Visual Studio Toolbox is open (menu View, Toolbox or Ctrl-Alt-X). Drag the selected snippet from the code window to the toolbox. 
(if needed) drag the snippet to the correct location (for example between two other code snippets so that you can access it as you speak through the demo). Right click on the snippet and select Rename Item from the context menu. Select a meaningful name. For me I use the following conventions: If it is a method, I use the method's name. If it is not a whole method, I use a descriptive name. If it is the content of a method (i.e. the body only, without the method's signature), I use "-> MethodName". This reminds me during the presentation that this is only the body, and that I need to insert that into an existing signature. This is the case, for instance, when I use Visual Studio to automatically generate the members of an interface’s implementation; then I only need to insert my snippet inside the generated method body. Saving the snippets This is the most important!! It happened to me a few times that VS10 lost its settings. When that happens, the snippets are lost too! Yeah that really sucks, especially (as it happened once) when this is the case about an hour before a talk… Stress and sweat follows, not good conditions to start a talk in front of an audience believe me. Thankfully, saving snippets is really easy with the following steps: Select the menu Tools, Import and Export Settings. Select Export selected environment settings and press Next. Uncheck All Settings. Then expand General Settings and select Toolbox (only!). Press Next. Select your source control folder and save under a meaningful name (for instance Snippets.vssettings). Commit to source control and push to the cloud. By the way, this also has the advantage of applying source control to the snippets file (which is an XML file), so you get history for free on that file! Reimporting the snippets If VS loses its settings and you need to reimport the snippets, this can be done super easily and very fast. Make sure that the Toolbox is empty. When you import snippets, they are merged with existing ones, they do not replace the content of the Toolbox. Unless merging is really what you want, make sure that your Toolbox is clean before you import, it is really easier. Select the menu Tools, Import and Export Settings. Select Import selected environment settings and press Next. Select No, just import new settings and press Next. Press Browse and select the Snippets.vssettings file. Press Finish. Et voila, all your snippets appear again in the Toolbox. Whew, the worst was averted and you can start your demo without sweating! (I had to do that once literally 5 minutes before the start of a demo, while my laptop was already hooked to the projector, and it went just fine). What about special tools? When using special tools (for example beta versions of tools you have an early access to), or a special configuration of your laptop, things can get tricky because you cannot really be sure that you will get a laptop with the same tools and the same configuration at the conference. To solve that, I use the following precautions: I make my demos from a Virtual Hard Disk. The great John Papa made a very easy-to-follow web page where he explains how to create a VHD and install Win7 to it. This gives you the full power of your laptop (as fast as booting from the metal). For me, I have a basic configuration that I saved on a USB harddrive (Win7 plus drivers, basic settings for desktop, folder options, taskbar etc) and Visual Studio 2010 SP1 on it. When preparing, I start by copying this "basis VHD" to my laptop. 
I install additional tools and configurations. I save the VHD back to the USB harddrive in a different folder. This would allow me to reinstall my demo environment quite fast, for example in case of harddrive failure. Replace the harddrive, copy the VHD to it, configure the BCD and you can start. Unfortunately this only works if the laptop itself still works. In the worst case of total failure, my security is to back all the installers up: The installers I use are synched on all my laptops and backed up to BackBlaze. If the worst happens and my laptop is absolutely broken, I can download the installer from BackBlaze and install on another laptop. This of course takes some time, and if that happens 5 minutes before a presentation, well… I don't have an answer to that, except of course crossing my fingers. Still, all that gives me additional security. Conclusion Remember folks, talking to an audience, large or small, will make you nervous. Just ask Scott Hanselman :) The goal here is to create the best possible conditions for you, and to create an environment where everything is saved and easy to restore, where everything is well known and provides you with additional confidence. The cooler you feel before the presentation (and during ;)), the better your presentation will be. Here too, the goal is to provide the best user experience you can have, which in turn will make it more enjoyable for your audience! Happy presenting :) Laurent   Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn

    Read the article

  • Oracle Solaris: Zones on Shared Storage

    - by Jeff Victor
    Oracle Solaris 11.1 has several new features. At oracle.com you can find a detailed list. One of the significant new features, and the most significant new feature releated to Oracle Solaris Zones, is casually called "Zones on Shared Storage" or simply ZOSS (rhymes with "moss"). ZOSS offers much more flexibility because you can store Solaris Zones on shared storage (surprise!) so that you can perform quick and easy migration of a zone from one system to another. This blog entry describes and demonstrates the use of ZOSS. ZOSS provides complete support for a Solaris Zone that is stored on "shared storage." In this case, "shared storage" refers to fiber channel (FC) or iSCSI devices, although there is one lone exception that I will demonstrate soon. The primary intent is to enable you to store a zone on FC or iSCSI storage so that it can be migrated from one host computer to another much more easily and safely than in the past. With this blog entry, I wanted to make it easy for you to try this yourself. I couldn't assume that you have a SAN available - which is a good thing, because neither do I! What could I use, instead? [There he goes, foreshadowing again... -Ed.] Developing this entry reinforced the lesson that the solution to every lab problem is VirtualBox. Oracle VM VirtualBox (its formal name) helps here in a couple of important ways. It offers the ability to easily install multiple copies of Solaris as guests on top of any popular system (Microsoft Windows, MacOS, Solaris, Oracle Linux (and other Linuxes) etc.). It also offers the ability to create a separate virtual disk drive (VDI) that appears as a local hard disk to a guest. This virtual disk can be moved very easily from one guest to another. In other words, you can follow the steps below on a laptop or larger x86 system. Please note that the ability to use ZOSS to store a zone on a local disk is very useful for a lab environment, but not so useful for production. I do not suggest regularly moving disk drives among computers. In the method I describe below, that virtual hard disk will contain the zone that will be migrated among the (virtual) hosts. In production, you would use FC or iSCSI LUNs instead. The zonecfg(1M) man page details the syntax for each of the three types of devices. Why Migrate? Why is the migration of virtual servers important? Some of the most common reasons are: Moving a workload to a different computer so that the original computer can be turned off for extensive maintenance. Moving a workload to a larger system because the workload has outgrown its original system. If the workload runs in an environment (such as a Solaris Zone) that is stored on shared storage, you can restore the service of the workload on an alternate computer if the original computer has failed and will not reboot. You can simplify lifecycle management of a workload by developing it on a laptop, migrating it to a test platform when it's ready, and finally moving it to a production system. Concepts For ZOSS, the important new concept is named "rootzpool". You can read about it in the zonecfg(1M) man page, but here's the short version: it's the backing store (hard disk(s), or LUN(s)) that will be used to make a ZFS zpool - the zpool that will hold the zone. This zpool: contains the zone's Solaris content, i.e. 
the root file system does not contain any content not related to the zone can only be mounted by one Solaris instance at a time Method Overview Here is a brief list of the steps to create a zone on shared storage and migrate it. The next section shows the commands and output. You will need a host system with an x86 CPU (hopefully at least a couple of CPU cores), at least 2GB of RAM, and at least 25GB of free disk space. (The steps below will not actually use 25GB of disk space, but I don't want to lead you down a path that ends in a big sign that says "Your HDD is full. Good luck!") Configure the zone on both systems, specifying the rootzpool that both will use. The best way is to configure it on one system and then copy the output of "zonecfg export" to the other system to be used as input to zonecfg. This method reduces the chances of pilot error. (It is not necessary to configure the zone on both systems before creating it. You can configure this zone in multiple places, whenever you want, and migrate it to one of those places at any time - as long as those systems all have access to the shared storage.) Install the zone on one system, onto shared storage. Boot the zone. Provide system configuration information to the zone. (In the Real World(tm) you will usually automate this step.) Shutdown the zone. Detach the zone from the original system. Attach the zone to its new "home" system. Boot the zone. The zone can be used normally, and even migrated back, or to a different system. Details The rest of this shows the commands and output. The two hostnames are "sysA" and "sysB". Note that each Solaris guest might use a different device name for the VDI that they share. I used the device names shown below, but you must discover the device name(s) after booting each guest. In a production environment you would also discover the device name first and then configure the zone with that name. Fortunately, you can use the command "zpool import" or "format" to discover the device on the "new" host for the zone. The first steps create the VirtualBox guests and the shared disk drive. I describe the steps here without demonstrating them. Download VirtualBox and install it using a method normal for your host OS. You can read the complete instructions. Create two VirtualBox guests, each to run Solaris 11.1. Each will use its own VDI as its root disk. Install Solaris 11.1 in each guest.Install Solaris 11.1 in each guest. To install a Solaris 11.1 guest, you can either download a pre-built VirtualBox guest, and import it, or install Solaris 11.1 from the "text install" media. If you use the latter method, after booting you will not see a windowing system. To install the GUI and other important things, login and run "pkg install solaris-desktop" and take a break while it installs those important things. Life is usually easier if you install the VirtualBox Guest Additions because then you can copy and paste between the host and guests, etc. You can find the guest additions in the folder matching the version of VirtualBox you are using. You can also read the instructions for installing the guest additions. To create the zone's shared VDI in VirtualBox, you can open the storage configuration for one of the two guests, select the SATA controller, and click on the "Add Hard Disk" icon nearby. Choose "Create New Disk" and specify an appropriate path name for the file that will contain the VDI. The shared VDI must be at least 1.5 GB. Note that the guest must be stopped to do this. 
Add that VDI to the other guest - using its Storage configuration - so that each can access it while running. The steps start out the same, except that you choose "Choose Existing Disk" instead of "Create New Disk." Because the disk is configured on both of them, VirtualBox prevents you from running both guests at the same time. Identify device names of that VDI, in each of the guests. Solaris chooses the name based on existing devices. The names may be the same, or may be different from each other. This step is shown below as "Step 1." Assumptions In the example shown below, I make these assumptions. The guest that will own the zone at the beginning is named sysA. The guest that will own the zone after the first migration is named sysB. On sysA, the shared disk is named /dev/dsk/c7t2d0 On sysB, the shared disk is named /dev/dsk/c7t3d0 (Finally!) The Steps Step 1) Determine the name of the disk that will move back and forth between the systems. root@sysA:~# format Searching for disks...done AVAILABLE DISK SELECTIONS: 0. c7t0d0 /pci@0,0/pci8086,2829@d/disk@0,0 1. c7t2d0 /pci@0,0/pci8086,2829@d/disk@2,0 Specify disk (enter its number): ^D Step 2) The first thing to do is partition and label the disk. The magic needed to write an EFI label is not overly complicated. root@sysA:~# format -e c7t2d0 selecting c7t2d0 [disk formatted] FORMAT MENU: ... format fdisk No fdisk table exists. The default partition for the disk is: a 100% "SOLARIS System" partition Type "y" to accept the default partition, otherwise type "n" to edit the partition table. n SELECT ONE OF THE FOLLOWING: ... Enter Selection: 1 ... G=EFI_SYS 0=Exit? f SELECT ONE... ... 6 format label ... Specify Label type[1]: 1 Ready to label disk, continue? y format quit root@sysA:~# ls /dev/dsk/c7t2d0 /dev/dsk/c7t2d0 Step 3) Configure zone1 on sysA. root@sysA:~# zonecfg -z zone1 Use 'create' to begin configuring a new zone. zonecfg:zone1 create create: Using system default template 'SYSdefault' zonecfg:zone1 set zonename=zone1 zonecfg:zone1 set zonepath=/zones/zone1 zonecfg:zone1 add rootzpool zonecfg:zone1:rootzpool add storage dev:dsk/c7t2d0 zonecfg:zone1:rootzpool end zonecfg:zone1 exit root@sysA:~# oot@sysA:~# zonecfg -z zone1 info zonename: zone1 zonepath: /zones/zone1 brand: solaris autoboot: false bootargs: file-mac-profile: pool: limitpriv: scheduling-class: ip-type: exclusive hostid: fs-allowed: anet: ... rootzpool: storage: dev:dsk/c7t2d0 Step 4) Install the zone. This step takes the most time, but you can wander off for a snack or a few laps around the gym - or both! (Just not at the same time...) root@sysA:~# zoneadm -z zone1 install Created zone zpool: zone1_rpool Progress being logged to /var/log/zones/zoneadm.20121022T163634Z.zone1.install Image: Preparing at /zones/zone1/root. AI Manifest: /tmp/manifest.xml.RXaycg SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml Zonename: zone1 Installation: Starting ... Creating IPS image Startup linked: 1/1 done Installing packages from: solaris origin: http://pkg.us.oracle.com/support/ DOWNLOAD PKGS FILES XFER (MB) SPEED Completed 183/183 33556/33556 222.2/222.2 2.8M/s PHASE ITEMS Installing new actions 46825/46825 Updating package state database Done Updating image state Done Creating fast lookup database Done Installation: Succeeded Note: Man pages can be obtained by installing pkg:/system/manual done. Done: Installation completed in 1696.847 seconds. Next Steps: Boot the zone, then log into the zone console (zlogin -C) to complete the configuration process. 
Log saved in non-global zone as /zones/zone1/root/var/log/zones/zoneadm.20121022T163634Z.zone1.install Step 5) Boot the Zone. root@sysA:~# zoneadm -z zone1 boot Step 6) Login to zone's console to complete the specification of system information. root@sysA:~# zlogin -C zone1 Answer the usual questions and wait for a login prompt. Then you can end the console session with the usual "~." incantation. Step 7) Shutdown the zone so it can be "moved." root@sysA:~# zoneadm -z zone1 shutdown Step 8) Detach the zone so that the original global zone can't use it. root@sysA:~# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / solaris shared - zone1 installed /zones/zone1 solaris excl root@sysA:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 17.6G 11.2G 6.47G 63% 1.00x ONLINE - zone1_rpool 1.98G 484M 1.51G 23% 1.00x ONLINE - root@sysA:~# zoneadm -z zone1 detach Exported zone zpool: zone1_rpool Step 9) Review the result and shutdown sysA so that sysB can use the shared disk. root@sysA:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 17.6G 11.2G 6.47G 63% 1.00x ONLINE - root@sysA:~# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / solaris shared - zone1 configured /zones/zone1 solaris excl root@sysA:~# init 0 Step 10) Now boot sysB and configure a zone with the parameters shown above in Step 1. (Again, the safest method is to use "zonecfg ... export" on sysA as described in section "Method Overview" above.) The one difference is the name of the rootzpool storage device, which was shown in the list of assumptions, and which you must determine by booting sysB and using the "format" or "zpool import" command. When that is done, you should see the output shown next. (I used the same zonename - "zone1" - in this example, but you can choose any valid zonename you want.) root@sysB:~# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / solaris shared - zone1 configured /zones/zone1 solaris excl root@sysB:~# zonecfg -z zone1 info zonename: zone1 zonepath: /zones/zone1 brand: solaris autoboot: false bootargs: file-mac-profile: pool: limitpriv: scheduling-class: ip-type: exclusive hostid: fs-allowed: anet: linkname: net0 ... rootzpool: storage: dev:dsk/c7t3d0 Step 11) Attaching the zone automatically imports the zpool. root@sysB:~# zoneadm -z zone1 attach Imported zone zpool: zone1_rpool Progress being logged to /var/log/zones/zoneadm.20121022T184034Z.zone1.attach Installing: Using existing zone boot environment Zone BE root dataset: zone1_rpool/rpool/ROOT/solaris Cache: Using /var/pkg/publisher. Updating non-global zone: Linking to image /. Processing linked: 1/1 done Updating non-global zone: Auditing packages. No updates necessary for this image. Updating non-global zone: Zone updated. Result: Attach Succeeded. Log saved in non-global zone as /zones/zone1/root/var/log/zones/zoneadm.20121022T184034Z.zone1.attach root@sysB:~# zoneadm -z zone1 boot root@sysB:~# zlogin zone1 [Connected to zone 'zone1' pts/2] Oracle Corporation SunOS 5.11 11.1 September 2012 Step 12) Now let's migrate the zone back to sysA. Create a file in zone1 so we can verify it exists after we migrate the zone back, then begin migrating it back. 
root@zone1:~# ls /opt root@zone1:~# touch /opt/fileA root@zone1:~# ls -l /opt/fileA -rw-r--r-- 1 root root 0 Oct 22 14:47 /opt/fileA root@zone1:~# exit logout [Connection to zone 'zone1' pts/2 closed] root@sysB:~# zoneadm -z zone1 shutdown root@sysB:~# zoneadm -z zone1 detach Exported zone zpool: zone1_rpool root@sysB:~# init 0 Step 13) Back on sysA, check the status. Oracle Corporation SunOS 5.11 11.1 September 2012 root@sysA:~# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / solaris shared - zone1 configured /zones/zone1 solaris excl root@sysA:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 17.6G 11.2G 6.47G 63% 1.00x ONLINE - Step 14) Re-attach the zone back to sysA. root@sysA:~# zoneadm -z zone1 attach Imported zone zpool: zone1_rpool Progress being logged to /var/log/zones/zoneadm.20121022T190441Z.zone1.attach Installing: Using existing zone boot environment Zone BE root dataset: zone1_rpool/rpool/ROOT/solaris Cache: Using /var/pkg/publisher. Updating non-global zone: Linking to image /. Processing linked: 1/1 done Updating non-global zone: Auditing packages. No updates necessary for this image. Updating non-global zone: Zone updated. Result: Attach Succeeded. Log saved in non-global zone as /zones/zone1/root/var/log/zones/zoneadm.20121022T190441Z.zone1.attach root@sysA:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 17.6G 11.2G 6.47G 63% 1.00x ONLINE - zone1_rpool 1.98G 491M 1.51G 24% 1.00x ONLINE - root@sysA:~# zoneadm -z zone1 boot root@sysA:~# zlogin zone1 [Connected to zone 'zone1' pts/2] Oracle Corporation SunOS 5.11 11.1 September 2012 root@zone1:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 1.98G 538M 1.46G 26% 1.00x ONLINE - Step 15) Check for the file created on sysB, earlier. root@zone1:~# ls -l /opt total 1 -rw-r--r-- 1 root root 0 Oct 22 14:47 fileA Next Steps Here is a brief list of some of the fun things you can try next. Add space to the zone by adding a second storage device to the rootzpool. Make sure that you add it to the configurations of both zones! Create a new zone, specifying two disks in the rootzpool when you first configure the zone. When you install that zone, or clone it from another zone, zoneadm uses those two disks to create a mirrored pool. (Three disks will result in a three-way mirror, etc.) Conclusion Hopefully you have seen the ease with which you can now move Solaris Zones from one system to another.

    Read the article

  • Get XML from Server for Use on Windows Phone

    - by psheriff
    When working with mobile devices you always need to take into account bandwidth usage and power consumption. If you are constantly connecting to a server to retrieve data for an input screen, then you might think about moving some of that data down to the phone and cache the data on the phone. An example would be a static list of US State Codes that you are asking the user to select from. Since this is data that does not change very often, this is one set of data that would be great to cache on the phone. Since the Windows Phone does not have an embedded database, you can just use an XML string stored in Isolated Storage. Of course, then you need to figure out how to get data down to the phone. You can either ship it with the application, or connect and retrieve the data from your server one time and thereafter cache it and retrieve it from the cache. In this blog post you will see how to create a WCF service to retrieve data from a Product table in a database and send that data as XML to the phone and store it in Isolated Storage. You will then read that data from Isolated Storage using LINQ to XML and display it in a ListBox. Step 1: Create a Windows Phone Application The first step is to create a Windows Phone application called WP_GetXmlFromDataSet (or whatever you want to call it). On the MainPage.xaml add the following XAML within the “ContentPanel” grid: <StackPanel>  <Button Name="btnGetXml"          Content="Get XML"          Click="btnGetXml_Click" />  <Button Name="btnRead"          Content="Read XML"          IsEnabled="False"          Click="btnRead_Click" />  <ListBox Name="lstData"            Height="430"            ItemsSource="{Binding}"            DisplayMemberPath="ProductName" /></StackPanel> Now it is time to create the WCF Service Application that you will call to get the XML from a table in a SQL Server database. Step 2: Create a WCF Service Application Add a new project to your solution called WP_GetXmlFromDataSet.Services. Delete the IService1.* and Service1.* files and the App_Data folder, as you don’t generally need these items. Add a new WCF Service class called ProductService. In the IProductService class modify the void DoWork() method with the following code: [OperationContract]string GetProductXml(); Open the code behind in the ProductService.svc and create the GetProductXml() method. This method (shown below) will connect up to a database and retrieve data from a Product table. public string GetProductXml(){  string ret = string.Empty;  string sql = string.Empty;  SqlDataAdapter da;  DataSet ds = new DataSet();   sql = "SELECT ProductId, ProductName,";  sql += " IntroductionDate, Price";  sql += " FROM Product";   da = new SqlDataAdapter(sql,    ConfigurationManager.ConnectionStrings["Sandbox"].ConnectionString);   da.Fill(ds);   // Create Attribute based XML  foreach (DataColumn col in ds.Tables[0].Columns)  {    col.ColumnMapping = MappingType.Attribute;  }   ds.DataSetName = "Products";  ds.Tables[0].TableName = "Product";  ret = ds.GetXml();   return ret;} After retrieving the data from the Product table using a DataSet, you will want to set each column’s ColumnMapping property to Attribute. Using attribute based XML will make the data transferred across the wire a little smaller. You then set the DataSetName property to the top-level element name you want to assign to the XML. You then set the TableName property on the DataTable to the name you want each element to be in your XML. 
The last thing you need to do is to call the GetXml() method on the DataSet object which will return an XML string of the data in your DataSet object. This is the value that you will return from the service call. The XML that is returned from the above call looks like the following: <Products>  <Product ProductId="1"           ProductName="PDSA .NET Productivity Framework"           IntroductionDate="9/3/2010"           Price="5000" />  <Product ProductId="3"           ProductName="Haystack Code Generator for .NET"           IntroductionDate="7/1/2010"           Price="599.00" />  ...  ...  ... </Products> The GetProductXml() method uses a connection string from the Web.Config file, so add a <connectionStrings> element to the Web.Config file in your WCF Service application. Modify the settings shown below as needed for your server and database name. <connectionStrings>  <add name="Sandbox"        connectionString="Server=Localhost;Database=Sandbox;                         Integrated Security=Yes"/></connectionStrings> The Product Table You will need a Product table that you can read data from. I used the following structure for my product table. Add any data you want to this table after you create it in your database. CREATE TABLE Product(  ProductId int PRIMARY KEY IDENTITY(1,1) NOT NULL,  ProductName varchar(50) NOT NULL,  IntroductionDate datetime NULL,  Price money NULL) Step 3: Connect to WCF Service from Windows Phone Application Back in your Windows Phone application you will now need to add a Service Reference to the WCF Service application you just created. Right-mouse click on the Windows Phone Project and choose Add Service Reference… from the context menu. Click on the Discover button. In the Namespace text box enter “ProductServiceRefrence”, then click the OK button. If you entered everything correctly, Visual Studio will generate some code that allows you to connect to your Product service. On the MainPage.xaml designer window double click on the Get XML button to generate the Click event procedure for this button. In the Click event procedure make a call to a GetXmlFromServer() method. This method will also need a “Completed” event procedure to be written since all communication with a WCF Service from Windows Phone must be asynchronous.  Write these two methods as follows: private const string KEY_NAME = "ProductData"; private void GetXmlFromServer(){  ProductServiceClient client = new ProductServiceClient();   client.GetProductXmlCompleted += new     EventHandler<GetProductXmlCompletedEventArgs>      (client_GetProductXmlCompleted);   client.GetProductXmlAsync();  client.CloseAsync();} void client_GetProductXmlCompleted(object sender,                                   GetProductXmlCompletedEventArgs e){  // Store XML data in Isolated Storage  IsolatedStorageSettings.ApplicationSettings[KEY_NAME] = e.Result;   btnRead.IsEnabled = true;} As you can see, this is a fairly standard call to a WCF Service. In the Completed event you get the Result from the event argument, which is the XML, and store it into Isolated Storage using the IsolatedStorageSettings.ApplicationSettings class. Notice the constant that I added to specify the name of the key. You will use this constant later to read the data from Isolated Storage. Step 4: Create a Product Class Even though you stored XML data into Isolated Storage when you read that data out you will want to convert each element in the XML file into an actual Product object. 
    This means that you need to create a Product class in your Windows Phone application. Add a Product class to your project that looks like the code below: public class Product{  public string ProductName{ get; set; }  public int ProductId{ get; set; }  public DateTime IntroductionDate{ get; set; }  public decimal Price{ get; set; }} Step 5: Read Settings from Isolated Storage Now that you have the XML data stored in Isolated Storage, it is time to use it. Go back to the MainPage.xaml design view and double click on the Read XML button to generate the Click event procedure. From the Click event procedure call a method named ReadProductXml(). Create this method as shown below: private void ReadProductXml(){  XElement xElem = null;  if (IsolatedStorageSettings.ApplicationSettings.Contains(KEY_NAME))  {    xElem = XElement.Parse(     IsolatedStorageSettings.ApplicationSettings[KEY_NAME].ToString());    // Create a list of Product objects    var products =        from prod in xElem.Descendants("Product")        orderby prod.Attribute("ProductName").Value        select new Product        {          ProductId = Convert.ToInt32(prod.Attribute("ProductId").Value),          ProductName = prod.Attribute("ProductName").Value,          IntroductionDate =            Convert.ToDateTime(prod.Attribute("IntroductionDate").Value),          Price = Convert.ToDecimal(prod.Attribute("Price").Value)        };    lstData.DataContext = products;  }} The ReadProductXml() method checks to make sure that the key name that you saved your XML as exists in Isolated Storage prior to trying to open it. If the key name exists, then you retrieve the value as a string. Use the XElement’s Parse method to convert the XML string to an XElement object. LINQ to XML is used to iterate over each element in the XElement object and create a new Product object from each attribute in your XML file. The LINQ to XML code also orders the XML data by the ProductName. After the LINQ to XML code runs you end up with an IEnumerable collection of Product objects in the variable named “products”. You assign this collection of product data to the DataContext of the ListBox you created in XAML. The DisplayMemberPath property of the ListBox is set to “ProductName” so it will now display the product name for each row in your products collection. Summary In this article you learned how to retrieve an XML string from a table in a database, return that string across a WCF Service and store it into Isolated Storage on your Windows Phone. You then used LINQ to XML to create a collection of Product objects from the data stored and display that data in a Windows Phone list box. This same technique can be used in Silverlight or WPF applications too. NOTE: You can download the complete sample code at my website. http://www.pdsa.com/downloads. Choose Tips & Tricks, then "Get XML From Server for Use on Windows Phone" from the drop-down. Good Luck with your Coding, Paul Sheriff ** SPECIAL OFFER FOR MY BLOG READERS ** Visit http://www.pdsa.com/Event/Blog for a free video on Silverlight entitled Silverlight XAML for the Complete Novice - Part 1.  
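One small addition that follows naturally from the article's caching goal: only call the WCF service when the phone has no cached copy yet. The sketch below is not from the article's sample; it assumes it lives in the same MainPage class as the KEY_NAME constant and the GetXmlFromServer() and ReadProductXml() methods shown above, and the LoadProductData name is made up for this example.

// Reads from the Isolated Storage cache when possible and only calls the
// WCF service on first run (or after the cache has been cleared).
// Requires: using System.IO.IsolatedStorage;
private void LoadProductData()
{
  if (IsolatedStorageSettings.ApplicationSettings.Contains(KEY_NAME))
  {
    // A cached copy exists - no network traffic or battery cost.
    ReadProductXml();
  }
  else
  {
    // No cache yet - fetch from the server. The completed event stores
    // the XML, after which ReadProductXml() can bind the list.
    GetXmlFromServer();
  }
}

In a production app you would likely call something like this from the page's Loaded event rather than from two separate buttons, so the user simply sees the list fill in either way.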

    Read the article

  • 64-bit Archives Needed

    - by user9154181
    A little over a year ago, we received a question from someone who was trying to build software on Solaris. He was getting errors from the ar command when creating an archive. At that time, the ar command on Solaris was a 32-bit command. There was more than 2GB of data, and the ar command was hitting the file size limit for a 32-bit process that doesn't use the largefile APIs. Even in 2011, 2GB is a very large amount of code, so we had not heard this one before. Most of our toolchain was extended to handle 64-bit sized data back in the 1990's, but archives were not changed, presumably because there was no perceived need for it. Since then of course, programs have continued to get larger, and in 2010, the time had finally come to investigate the issue and find a way to provide for larger archives. As part of that process, I had to do a deep dive into the archive format, and also do some Unix archeology. I'm going to record what I learned here, to document what Solaris does, and in the hope that it might help someone else trying to solve the same problem for their platform. Archive Format Details Archives are hardly cutting edge technology. They are still used of course, but their basic form hasn't changed in decades. Other than to fix a bug, which is rare, we don't tend to touch that code much. The archive file format is described in /usr/include/ar.h, and I won't repeat the details here. Instead, here is a rough overview of the archive file format, implemented by System V Release 4 (SVR4) Unix systems such as Solaris: Every archive starts with a "magic number". This is a sequence of 8 characters: "!<arch>\n". The magic number is followed by 1 or more members. A member starts with a fixed header, defined by the ar_hdr structure in/usr/include/ar.h. Immediately following the header comes the data for the member. Members must be padded at the end with newline characters so that they have even length. The requirement to pad members to an even length is a dead giveaway as to the age of the archive format. It tells you that this format dates from the 1970's, and more specifically from the era of 16-bit systems such as the PDP-11 that Unix was originally developed on. A 32-bit system would have required 4 bytes, and 64-bit systems such as we use today would probably have required 8 bytes. 2 byte alignment is a poor choice for ELF object archive members. 32-bit objects require 4 byte alignment, and 64-bit objects require 64-bit alignment. The link-editor uses mmap() to process archives, and if the members have the wrong alignment, we have to slide (copy) them to the correct alignment before we can access the ELF data structures inside. The archive format requires 2 byte padding, but it doesn't prohibit more. The Solaris ar command takes advantage of this, and pads ELF object members to 8 byte boundaries. Anything else is padded to 2 as required by the format. The archive header (ar_hdr) represents all numeric values using an ASCII text representation rather than as binary integers. This means that an archive that contains only text members can be viewed using tools such as cat, more, or a text editor. The original designers of this format clearly thought that archives would be used for many file types, and not just for objects. Things didn't turn out that way of course — nearly all archives contain relocatable objects for a single operating system and machine, and are used primarily as input to the link-editor (ld). 
Archives can have special members that are created by the ar command rather than being supplied by the user. These special members are all distinguished by having a name that starts with the slash (/) character. This is an unambiguous marker that says that the user could not have supplied it. The reason for this is that regular archive members are given the plain name of the file that was inserted to create them, and any path components are stripped off. Slash is the delimiter character used by Unix to separate path components, and as such cannot occur within a plain file name. The ar command hides the special members from you when you list the contents of an archive, so most users don't know that they exist. There are only two possible special members: A symbol table that maps ELF symbols to the object archive member that provides it, and a string table used to hold member names that exceed 15 characters. The '/' convention for tagging special members provides room for adding more such members should the need arise. As I will discuss below, we took advantage of this fact to add an alternate 64-bit symbol table special member which is used in archives that are larger than 4GB. When an archive contains ELF object members, the ar command builds a special archive member known as the symbol table that maps all ELF symbols in the object to the archive member that provides it. The link-editor uses this symbol table to determine which symbols are provided by the objects in that archive. If an archive has a symbol table, it will always be the first member in the archive, immediately following the magic number. Unlike member headers, symbol tables do use binary integers to represent offsets. These integers are always stored in big-endian format, even on a little endian host such as x86. The archive header (ar_hdr) provides 15 characters for representing the member name. If any member has a name that is longer than this, then the real name is written into a special archive member called the string table, and the member's name field instead contains a slash (/) character followed by a decimal representation of the offset of the real name within the string table. The string table is required to precede all normal archive members, so it will be the second member if the archive contains a symbol table, and the first member otherwise. The archive format is not designed to make finding a given member easy. Such operations move through the archive from front to back examining each member in turn, and run in O(n) time. This would be bad if archives were commonly used in that manner, but in general, they are not. Typically, the ar command is used to build an new archive from scratch, inserting all the objects in one operation, and then the link-editor accesses the members in the archive in constant time by using the offsets provided by the symbol table. Both of these operations are reasonably efficient. However, listing the contents of a large archive with the ar command can be rather slow. Factors That Limit Solaris Archive Size As is often the case, there was more than one limiting factor preventing Solaris archives from growing beyond the 32-bit limits of 2GB (32-bit signed) and 4GB (32-bit unsigned). These limits are listed in the order they are hit as archive size grows, so the earlier ones mask those that follow. The original Solaris archive file format can handle sizes up to 4GB without issue. However, the ar command was delivered as a 32-bit executable that did not use the largefile APIs. 
As such, the ar command itself could not create a file larger than 2GB. One can solve this by building ar with the largefile APIs, which would allow it to reach 4GB, but a simpler and better answer is to deliver a 64-bit ar, which can scale well past 4GB.

Symbol table offsets are stored as 32-bit big-endian binary integers, which limits the maximum archive size to 4GB. Getting around this limit requires a different symbol table format, or an extension mechanism to the current one, similar in nature to the way member names longer than 15 characters are handled in member headers.

The size field in the archive member header (ar_hdr) is an ASCII string capable of representing a 32-bit unsigned value. This places a 4GB limit on the size of any individual member in an archive.

In considering format extensions to get past these limits, it is important to remember that very few archives will require the ability to scale past 4GB for many years. The old format, while no beauty, continues to be sufficient for its purpose. This argues for a backward compatible fix that allows newer versions of Solaris to produce archives that are compatible with older versions of the system unless the size of the archive exceeds 4GB.

Archive Format Differences Among Unix Variants

While considering how to extend Solaris archives to scale to 64-bits, I wanted to know how similar archives from other Unix systems are to those produced by Solaris, and whether they had already solved the 64-bit issue. I've moved archives between different Unix systems before with good luck, so I knew that there was some commonality. If it turned out that there was already a viable de facto standard for 64-bit archives, it would obviously be better to adopt that rather than invent something new.

The archive file format is not formally standardized. However, the ar command and archive format were part of the original Unix from Bell Labs. Other systems started with that format, extending it in various, often incompatible, ways, but usually with the same shared core. Most of these systems use the same magic number to identify their archives, despite the fact that their archives are not always fully compatible with each other. It is often true that archives can be copied between different Unix variants, and if the member names are short enough, the ar command from one system can often read archives produced on another.

In practice, it is rare to find an archive containing anything other than objects for a single operating system and machine type. Such an archive is only of use on the type of system that created it, and is only used on that system. This is probably why cross-platform compatibility of archives between Unix variants has never been an issue; otherwise, the use of the same magic number in archives with incompatible formats would be a problem.

I was able to find information for a number of Unix variants, described below. These can be divided roughly into three tribes: SVR4 Unix, BSD Unix, and IBM AIX. Solaris is an SVR4 Unix, and its archives are completely compatible with those from the other members of that group (GNU/Linux, HP-UX, and SGI IRIX).

AIX

AIX is an exception to the rule that Unix archive formats are all based on the original Bell Labs format. It appears that AIX supports 2 formats (small and big), both of which differ in fundamental ways from other Unix systems:

- These formats use a different magic number than the standard one used by Solaris and other Unix variants.
- They include support for removing archive members from a file without reallocating the file, marking dead areas as unused, and reusing them when new archive items are inserted.
- They have a special table of contents member (File Member Header) which lets you find out everything that's in the archive without having to traverse the entire file. Their symbol table members are quite similar to those from other systems, though.
- Their member headers are doubly linked, containing offsets to both the previous and next members.

Of the Unix systems described here, AIX has the only format I saw that will have reasonable insert/delete performance for really large archives. Everyone else has O(n) performance, and is going to be slow to use with large archives.

BSD

BSD has gone through 4 versions of archive format, which are described in their manpage. They use the same member header as SVR4, but their symbol table format is different, and their scheme for long member names puts the name directly after the member header rather than into a string table.

GNU/Linux

The GNU toolchain uses the SVR4 format, and is compatible with Solaris.

HP-UX

HP-UX seems to follow the SVR4 model, and is compatible with Solaris.

IRIX

IRIX has 32 and 64-bit archives. The 32-bit format is the standard SVR4 format, and is compatible with Solaris. The 64-bit format is the same, except that the symbol table uses 64-bit integers. IRIX assumes that an archive contains objects of a single ELFCLASS/MACHINE, and any archive containing ELFCLASS64 objects receives a 64-bit symbol table. Although they only use it for 64-bit objects, nothing in the archive format limits it to ELFCLASS64. It would be perfectly valid to produce a 64-bit symbol table in an archive containing 32-bit objects, text files, or anything else.

Tru64 Unix (Digital/Compaq/HP)

Tru64 Unix uses a format much like ours, but their symbol table is a hash table, making specific symbol lookup much faster. The Solaris link-editor uses archives by examining the entire symbol table looking for unsatisfied symbols for the link, not by looking up individual symbols, so there would be no benefit to Solaris from such a hash table. The Tru64 ld must use a different approach, one in which the hash table pays off for them.

Widening the existing SVR4 archive symbol tables rather than inventing something new is the simplest path forward. There is ample precedent for this approach in the ELF world: when ELF was extended to support 64-bit objects, the approach was largely to take the existing data structures and define 64-bit versions of them. We called the old set ELF32, and the new set ELF64. My guess is that there was no need to widen the archive format at that time, but had there been, it seems obvious that this is how it would have been done.

The Implementation of 64-bit Solaris Archives

As mentioned earlier, there was no desire to improve the fundamental nature of archives. They have always had O(n) insert/delete behavior, and for the most part it hasn't mattered. AIX made efforts to improve this, but those efforts did not find widespread adoption. For the purposes of link-editing, which is essentially the only thing archives are used for, the existing format is adequate, and issues of backward compatibility trump the desire to do something technically better. Widening the existing symbol table format to 64-bits is therefore the obvious way to proceed. For Solaris 11, I implemented that, and I also updated the ar command so that a 64-bit version is run by default.
This eliminates the 2 most significant limits to archive size, leaving only the limit on an individual archive member. We only generate a 64-bit symbol table if the archive exceeds 4GB, or when the new -S option to the ar command is used. This maximizes backward compatibility, as an archive produced by Solaris 11 is highly likely to be less than 4GB in size, and will therefore employ the same format understood by older versions of the system. The main reason for the existence of the -S option is to allow us to test the 64-bit format without having to construct huge archives to do so. I don't believe it will find much use outside of that.

Other than the new ability to create and use extremely large archives, this change is largely invisible to the end user. When reading an archive, the ar command will transparently accept either form of symbol table. Similarly, the ELF library (libelf) has been updated to understand either format. Users of libelf (such as the link-editor ld) do not need to be modified to use the new format, because these changes are encapsulated behind the existing functions provided by libelf.

As mentioned above, this work did not lift the limit on the maximum size of an individual archive member. That limit remains fixed at 4GB for now. This is not because we think objects will never get that large, for the history of computing says otherwise. Rather, it is based on an estimate that single relocatable objects of that size will not appear for a decade or two. A lot can change in that time, and it is better not to overengineer things by writing code that will sit and rot for years without being used. It is not too soon, however, to have a plan for that eventuality. When the time comes that this limit needs to be lifted, I believe there is a simple solution consistent with the existing format: the archive member header size field is an ASCII string, like the name, so the overflow scheme used for long names can also be used to handle the size. The size string would be placed into the archive string table, and its offset in the string table would then be written into the archive header size field using the same "/ddd" format used for overflowed names.
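To make the mechanics above concrete, here is a C sketch of a minimal reader that walks an SVR4 archive, honoring the even-length padding rule and resolving long member names through the string table. The member-naming conventions assumed here ("/" for the symbol table, "//" for the string table, ordinary names terminated with '/', and string table entries ending with "/\n") are the usual SVR4 behavior rather than anything quoted from the article, so treat this as an illustration, not a copy of the Solaris implementation.

    /* ar_walk.c: list the members of an SVR4 archive (illustrative sketch).
     * Resolves long names through the string table member and skips the
     * symbol table. A production reader would use fseeko/largefile APIs
     * so that members and archives past 2GB work on 32-bit builds. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct ar_hdr {                         /* same layout as in ar.h */
        char ar_name[16], ar_date[12], ar_uid[6], ar_gid[6];
        char ar_mode[8], ar_size[10], ar_fmag[2];
    };

    int main(int argc, char **argv)
    {
        FILE *f = fopen(argc > 1 ? argv[1] : "libdemo.a", "rb");
        char magic[8], *strtab = NULL;
        struct ar_hdr h;

        if (f == NULL || fread(magic, 1, 8, f) != 8 ||
            memcmp(magic, "!<arch>\n", 8) != 0) {
            fprintf(stderr, "not an archive\n");
            return 1;
        }
        while (fread(&h, sizeof h, 1, f) == 1) {
            char name[17], sizebuf[11];
            memcpy(name, h.ar_name, 16);    name[16] = '\0';
            memcpy(sizebuf, h.ar_size, 10); sizebuf[10] = '\0';
            unsigned long long size = strtoull(sizebuf, NULL, 10);

            if (memcmp(name, "// ", 3) == 0) {          /* string table member */
                strtab = malloc(size + 1);
                if (strtab == NULL || fread(strtab, 1, size, f) != size)
                    return 1;
                strtab[size] = '\0';
            } else {
                if (name[0] == '/' && name[1] >= '0' && name[1] <= '9' && strtab) {
                    /* long name: "/offset" points into the string table */
                    char *ln = strtab + strtoull(name + 1, NULL, 10);
                    ln[strcspn(ln, "/\n")] = '\0';      /* entries end with "/\n" */
                    printf("%-40s %llu\n", ln, size);
                } else if (name[0] != '/') {            /* ordinary member */
                    name[strcspn(name, "/ ")] = '\0';   /* drop '/' terminator */
                    printf("%-40s %llu\n", name, size);
                }                                       /* "/" symbol tables skipped */
                fseek(f, (long)size, SEEK_CUR);
            }
            if (size % 2 != 0)                          /* members padded to even length */
                fseek(f, 1, SEEK_CUR);
        }
        fclose(f);
        return 0;
    }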
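The symbol table member itself is just a count, a list of member offsets, and the symbol names. In the classic format the count and offsets are 4-byte big-endian words; the wider variant for huge archives uses 8-byte big-endian words. The decoder below is a sketch that handles either width once the member's contents are in memory; the layout (count, then offsets, then NUL-terminated names) is the common SVR4 arrangement, and the function name and the word-size parameter are conveniences invented for this example.

    /* ar_symtab.c: decode an SVR4 archive symbol table already read into
     * memory. Pass wordsize = 4 for the classic table and 8 for the
     * 64-bit variant discussed above. */
    #include <stdio.h>
    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    /* Symbol table integers are big-endian even on little-endian hosts. */
    static uint64_t get_be(const unsigned char *p, size_t wordsize)
    {
        uint64_t v = 0;
        for (size_t i = 0; i < wordsize; i++)
            v = (v << 8) | p[i];
        return v;
    }

    void print_armap(const unsigned char *buf, size_t len, size_t wordsize)
    {
        uint64_t nsyms = get_be(buf, wordsize);
        const unsigned char *offsets = buf + wordsize;
        const char *name = (const char *)(offsets + nsyms * wordsize);

        for (uint64_t i = 0; i < nsyms; i++) {
            /* each entry maps a symbol name to the file offset of the
             * header of the member that defines it */
            printf("%-32s member at offset %llu\n", name,
                   (unsigned long long)get_be(offsets + i * wordsize, wordsize));
            name += strlen(name) + 1;       /* names are NUL terminated, in order */
        }
        (void)len;  /* a real reader would bounds-check every access against len */
    }

A caller that had just pulled the symbol table member out of an archive would invoke print_armap(buf, size, 4) for a classic table and print_armap(buf, size, 8) for the wider one.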
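Finally, the closing proposal, reusing the long-name overflow trick on the size field, would be cheap to support in a reader. Nothing emits such archives today, so the helper below is purely hypothetical; it only shows how a parser could accept both the current decimal form and a future "/offset" form that points at a size string stored in the string table.

    /* Hypothetical parser for a future ar_size overflow scheme: a plain
     * decimal string is the current format, while a leading '/' would
     * name an offset into the string table where the full size string
     * lives. This mirrors the article's proposal; no ar implementation
     * is known to emit this form today. */
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    uint64_t member_size(const char size_field[10], const char *strtab)
    {
        char buf[11];
        memcpy(buf, size_field, 10);
        buf[10] = '\0';

        if (buf[0] == '/' && strtab != NULL)        /* overflow form: "/offset" */
            return strtoull(strtab + strtoull(buf + 1, NULL, 10), NULL, 10);
        return strtoull(buf, NULL, 10);             /* current ASCII decimal form */
    }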

    Read the article

  • CodePlex Daily Summary for Monday, January 17, 2011

    CodePlex Daily Summary for Monday, January 17, 2011

Popular Releases

- BloodSim: BloodSim - 1.3.3.1: Priority update to resolve a bug that was causing Boss damage to ignore Blood Shields entirely.
- fluentAOP - A Fluent AOP Library for .NET: fluentAOP.1.0.13.bin: fluentAOP.1.0.13.bin
- XsltDb - DotNetNuke Module Builder: 02.00.36: New features: Advanced database caching. XsltDb now supports SqlCacheDependency. This great feature allows minimizing database load for heavily loaded site pages such as the home page, latest news, etc. mdo:sql and mdo:xml support JSON output formats: $json converts each recordset to an array of JSON records; $jsonrow converts the first record of the first recordset to a JSON object. mdo:header section: you are now able to inject anything you need at the page header, top of the form, or top of the module. mdo...
- Rawr: Rawr 4.0.16 Beta: Rawr is now web-based. The link to use Rawr4 is: http://elitistjerks.com/rawr.php This is the Cataclysm Beta Release. More details can be found at the following link: http://rawr.codeplex.com/Thread/View.aspx?ThreadId=237262 As of this release, you can now also begin using the new downloadable WPF version of Rawr! This is a pre-alpha release of the WPF version, so there are likely to be a lot of issues. If you have a problem, please follow the Posting Guidelines and put it into the Issue Tracker. W...
- MvcContrib: an Outer Curve Foundation project: MVC 3 - 3.0.51.0: Please see the Change Log for a complete list of changes. MVC BootCamp. Description of the releases: MvcContrib.Release.zip (MvcContrib.dll, MvcContrib.TestHelper.dll); MvcContrib.Extras.Release.zip (T4MVC, the extra view engines / controller factories and other functionality which is in the project; this file includes the main MvcContrib assembly). Samples are included in the release. You do not need MvcContrib if you download the Extras.
- Yahoo! UI Library: YUI Compressor for .Net: Version 1.5.0.0 - Jalthi: Updated solution to VS2010. New: Work Item #4450 - Optional MSBuild task parameter: do not error if no files were found. Fixed: Work Item #5028 - Output file encoding is the same as the optional MSBuild task encoding argument. Fixed: Work Item #5824 - MSBuilds were slow after the first build due to the current thread being forced to en-gb on non-en-gb systems. Changed: Work Item #6873 - Project license changed from MS-PL to GPLv2. New: Added all the unit tests from the Java YU...
- N2 CMS: 2.1.1: N2 is a lightweight CMS framework for ASP.NET. It helps you build great web sites that anyone can update. 2.1.1 is a maintenance release. List of changes in 2.1, major changes: support for auto-implemented properties ({get;set;}, based on a contribution by And Poulsen), file manager improvements (multiple file upload, resize images to fit), new image gallery, infinite scroll paging on news, content templates. First time with N2? Try the demo site. Download one of the template packs (above) and open...
- TweetSharp: TweetSharp v2.0.0.0 - Preview 8: Documentation for this release may be found at http://tweetsharp.codeplex.com/wikipage?title=UserGuide&referringTitle=Documentation. Note: this code is currently preview quality. Preview 8 changes: upgraded library references to address reported regressions. Third party library versions: Hammock v1.1.6 (http://hammock.codeplex.com), Json.NET 4.0 Release 1 (http://json.codeplex.com).
- VidCoder: 0.8.1: Adds the ability to choose an arbitrary range (in seconds or frames) to encode. Adds the ability to override the title number in the output file name when enqueuing multiple titles. Updated presets: added iPhone 4 and Apple TV 2, fixed some existing presets that should have had weightp=0 or trellis=0 on them. Added the {parent} option to the auto-name format; use {parent:2} to refer to a folder 2 levels above the input file. Added the {title:2} option to the auto-name format, which adds leading zeroes to reach the sp...
- Microsoft SQL Server Product Samples: Database: AdventureWorks2008R2 without filestream: This download contains a version of the AdventureWorks2008R2 OLTP database without FILESTREAM properties. You do not need to have filestream enabled to attach this database. No additional schema or data changes have been made. To install the version of AdventureWorks2008R2 that includes filestream, use the SR1 installer available here. Prerequisites: Microsoft SQL Server 2008 R2 must be installed; Full-Text Search must be enabled. Installing the AdventureWorks2008R2 OLTP database: 1. Cl...
- NuGet: NuGet 1.0 RTM: NuGet is a free, open source, developer-focused package management system for the .NET platform, intent on simplifying the process of incorporating third party libraries into a .NET application during development. This release is a Visual Studio 2010 extension and contains the Package Manager Console and the Add Package Dialog.
- MVC Music Store: MVC Music Store v2.0: This is the 2.0 release of the MVC Music Store Tutorial. This tutorial is updated for ASP.NET MVC 3 and Entity Framework Code-First, and contains fixes and improvements based on feedback and common questions from previous releases. The main download, MvcMusicStore-v2.0.zip, contains everything you need to build the sample application, including a detailed tutorial document in PDF format and the assets you will need to build the project, including images, a stylesheet, and a pre-populated databas...
- Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.6.7 GA Released: Today we are releasing Visifire 3.6.7 GA with the following feature: the Inlines property has been implemented in Title. This release also contains fixes for the following bugs: in Column and Bar charts, DataPoint label properties were not working as expected at real-time if marker enabled was set to true; 3D Column and Bar charts were not rendered properly if the AxisMinimum property was set on the x-axis. You can download Visifire v3.6.7 here. Cheers, Team Visifire
- ASP.NET MVC Project Awesome, jQuery Ajax helpers (controls): 1.6: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager. New stuff: paging for the lookup, lookup with multiselect. Changes: the CSS classes used by the framework were renamed to be more standard; the lookup controller requires an item.ascx (no more ViewData["structure"]), and the LookupList action was renamed to Search; all the...
- ??????????: All-In-One Code Framework ??? 2011-01-12: 2011???????All-In-One Code Framework(??) 2011?1??????!! http://i3.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1code&DownloadId=128165 ?????release?,???????ASP.NET, AJAX, WinForm, Windows Shell????13?Sample Code。???,??????????sample code。 ?????: http://blog.csdn.net/sjb5201/archive/2011/01/13/6135037.aspx ??,??????MSDN????????????。 http://social.msdn.microsoft.com/Forums/zh-CN/codezhchs/threads ?????????????????,??Email ????
- patterns & practices – Enterprise Library: Enterprise Library 5.0 - Extensibility Labs: This is a preview release of the Hands-on Labs to help you learn and practice different ways the Enterprise Library can be extended. Learning map: custom exception handler (estimated time to complete: 1 hr 15 mins), custom logging trace listener (1 hr), custom configuration source (registry-based) (30 mins). System requirements: Enterprise Library 5.0 / Unity 2.0 installed, SQL Express 2008 installed, Visual Studio 2010 Pro (or better) installed. Authors: Chris Tavares, Microsoft Corporation ...
- Orchard Project: Orchard 1.0: Orchard Release Notes. Build: 1.0.20, published 1/12/2010. How to install Orchard: to install Orchard using Web PI, follow these instructions: http://www.orchardproject.net/docs/Installing-Orchard.ashx Web PI will detect your hardware environment and install the application. Alternatively, to install the release manually, download the Orchard.Web.1.0.20.zip file: http://orchardproject.net/docs/Manually-installing-Orchard-zip-file.ashx The zip contents are pre-built and ready-to-run...
- Umbraco CMS: Umbraco 4.6.1: The Umbraco 4.6.1 (codename JUNO) release contains many new features focusing on an improved installation experience, a number of robust developer features, and nearly 200 bug fixes since the 4.5.2 release. Getting started: a great place to start is with our Getting Started Guide: http://umbraco.codeplex.com/Project/Download/FileDownload.aspx?DownloadId=197051 Make sure to check the free foundation videos on how to get started building Umbraco sites. They're ...
- StyleCop for ReSharper: StyleCop for ReSharper 5.1.14986.000: A considerable amount of work has gone into this release. Features: huge focus on performance around the violation scanning subsystem: caching added to reduce IO operations around reading and merging of settings files, caching added to reduce creation of expensive objects. Users should notice a considerable perf boost and a decrease in memory usage. Bug fixes: StyleCop's new ObjectBasedEnvironment object does not resolve the StyleCop installation path, thus it does not return the ...
- Facebook C# SDK: 4.2.1: Authentication bug fixes. Updated Json.Net to version 4.0.0. BREAKING CHANGE: removed the cookieSupport config setting, now automatic. This download is also available on NuGet: Facebook, FacebookWeb, FacebookWebMvc.

New Projects

- Addition/Multiplication: It's a simple application developed in <programming language>. Use this to perform large but simple addition and multiplication.
- Custom Paging API: This dll contains a custom paging method. You can implement this custom paging easily in your project.
- DanubeSalsaConnection: DanubeSalsaConnection
- DNN Menu Provider: You think working with standard DotNetNuke menu providers is a pain? You don't want to study XSLT before skinning your menu? You are looking for a simple as well as powerful solution? Well, then you are exactly right here!
- Ejemplos WP7 Silverlight: Silverlight examples for WP7 that can serve as the basis for the most common applications. All examples are in C#.
- GPS Nerd: Technologies: ASP.NET MVC, MySQL, NHibernate, Ninject, Google Maps API. The GPS Nerd project is meant to be a platform to learn how to build a website by pulling in various technologies. More info: see the documentation. Live site: http://gpsnerd.com
- NetLirc: .NET LIRC server with a PIC16F628A IR sensor
- nForum For Umbraco: This is the beginning of a forum package for Umbraco. It is built purely using Linq2umbraco, so if you're an XSLT ninja you are out of luck at the moment. More details to come soon...
- Orchard Feeds Aggregator Module: Feeds Aggregator Module for Orchard CMS
- Silverlight for Windows Phone Performance Samples: A collection of Silverlight for Windows Phone code samples highlighting good coding practices.
- SilvyAcc - Silverlight Accordion Control: This is a custom-made Silverlight Accordion control.
- Twesh Ajax: TweshAjax is a client-side JavaScript library for making asynchronous calls to the server.
- WikiPageContentPopulator: This project allows SharePoint 2010 developers to add wiki page content during the installation of the solution via a Feature Receiver.
- WPaolo7: WPaolo7 programs
- WPFEdit: WPFEdit is a starter kit for applications that edit files. It contains easy-to-use starter controls and a document management system. WPFEdit is developed with C# / .NET 4.0 / WPF.

    Read the article
