Search Results

Search found 13093 results on 524 pages for 'android widget'.

  • Can't get description RSS tag data with JavaScript

    - by AdamB
    I'm currently making a widget to take and display items from a feed. I have this working for the most part, but for some reason the data within the description tag within the item comes back as empty, while I get the data in the title and link tags with no problem. feed is an xmlhttp.responseXML object.

        var items = feed.getElementsByTagName("item");
        for (var i = 0; i < 10; i++) {
            container = document.getElementById('list');
            new_element = document.createElement('li');
            title = items[i].getElementsByTagName("title")[0].firstChild.nodeValue;
            link = items[i].getElementsByTagName("link")[0].firstChild.nodeValue;
            alert(items[i].getElementsByTagName("description")[0].firstChild.nodeValue);
            new_element.innerHTML = "<a href=\"" + link + "\">" + title + "</a> ";
            container.insertBefore(new_element, container.firstChild);
        }

    I have no idea why it wouldn't be working for the description tag when it works for the other tags. Here is an example of the RSS feed it's trying to parse:

        <rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/">
        <channel>
        <title>A title</title>
        <link>http://linksomehwere</link>
        <description>The title of the feed</description>
        <language>en-us</language>
        <item>
        <pubDate>Fri, 10 Jul 2009 11:34:49 -0500</pubDate>
        <title>Awesome Title</title>
        <link>http://link/to/thing</link>
        <guid>http://link/to/thing</guid>
        <description>
        <![CDATA[ <p>some html crap</p> blah blah balh ]]>
        </description>
        </item>
        </channel>
        </rss>
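    A minimal sketch of one likely fix (inferred from the feed sample above, not stated in the post): the <description> element contains whitespace before its CDATA block, so firstChild is an empty text node. Collecting all child text and CDATA nodes sidesteps that:

        // Concatenate every text (nodeType 3) and CDATA (nodeType 4) child,
        // so leading whitespace nodes no longer hide the CDATA content.
        function getElementText(el) {
            var text = "";
            for (var node = el.firstChild; node; node = node.nextSibling) {
                if (node.nodeType === 3 || node.nodeType === 4) {
                    text += node.nodeValue;
                }
            }
            return text;
        }

        var description = getElementText(items[i].getElementsByTagName("description")[0]);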

    Read the article

  • What's wrong with this code? Values not saved to db

    - by Scott B
    Been trying to get this code to work for several days now to no avail. I'm at wit's end. I've managed, with the code below, to create a customized category picker widget that appears on the PAGE editor. However, for the life of me, I cannot get the checked categories to save.

        function my_post_options_box() {
            if ( function_exists('add_meta_box') ) {
                //add_meta_box( $id, $title, $callback, $page, $context, $priority );
                add_meta_box('categorydiv', __('Page Options'), 'post_categories_meta_box_modified', 'page', 'side', 'core');
            }
        }

        //adds the custom categories box
        function post_categories_meta_box_modified($post) {
            $noindexCat = get_cat_ID('noindex');
            $nofollowCat = get_cat_ID('nofollow');
            if(in_category("noindex")){ $noindexChecked = " checked='checked'"; } else { $noindexChecked = ""; }
            if(in_category("nofollow")){ $nofollowChecked = " checked='checked'"; } else { $noindexChecked = ""; }
            ?>
            <div id="categories-all" class="ui-tabs-panel">
                <ul id="categorychecklist" class="list:category categorychecklist form-no-clear">
                    <li id='category-<?php echo $noindexCat ?>' class="popular-category"><label class="selectit"><input value="<?php echo $noindexCat ?>" type="checkbox" name="post_category[]" id="in-category-<?php echo $noindexCat ?>"<?php echo $noindexChecked ?> /> noindex</label></li>
                    <li id='category-<?php echo $nofollowCat ?>' class="popular-category"><label class="selectit"><input value="<?php echo $noindexCat ?>" type="checkbox" name="post_category[]" id="in-category-<?php echo $nofollowCat ?>"<?php echo $nofollowChecked ?> /> nofollow</label></li>
                    <li id='category-1' class="popular-category"><label class="selectit"><input value="1" type="checkbox" name="post_category[]" id="in-category-1" checked="checked"/> Uncategorized</label></li>
                </ul>
            </div>
            <?php
        }
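    Two things stand out in the code as pasted, for what it's worth: the nofollow else branch resets $noindexChecked instead of $nofollowChecked, and the nofollow checkbox echoes $noindexCat as its value, so checking it submits the wrong category ID. A hedged sketch of the corrected lines (this may not be the whole story if WordPress discards post_category on page saves):

        // The else branch should reset the nofollow flag, not the noindex one
        if (in_category("nofollow")) { $nofollowChecked = " checked='checked'"; } else { $nofollowChecked = ""; }
        ?>
        <!-- The value must be the nofollow category ID, not the noindex one -->
        <input value="<?php echo $nofollowCat ?>" type="checkbox" name="post_category[]"
               id="in-category-<?php echo $nofollowCat ?>"<?php echo $nofollowChecked ?> /> nofollow
        <?php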

    Read the article

  • Insane CPU usage in Qt 5.0

    - by GravityScore
    I'm having trouble using the Qt framework, particularly with the paintEvent of QWidget. I have a QWidget set up, and am overriding its paintEvent. I need to render a bunch of rectangles (a grid system), 51 by 19, leading to 969 rectangles being drawn. This is done in a for loop. Then I also need to draw an image on each one of these grid cells. The QWidget is added to a QMainWindow, which is shown. This works nicely, but it's using up 47% of CPU per open window! And I want to allow the user to open multiple windows like this, likely having 3-4 open at a time, which puts the CPU close to 150%. Why does this happen? Here are the paintEvent contents. The JNI calls don't cause the CPU usage (commenting them out doesn't lower it), but commenting out the p.fillRect and Renderer::renderString (which draws the image) lowers the CPU to about 5%.

        // Background
        QPainter p(this);
        p.fillRect(0, 0, this->width(), this->height(), QBrush(QColor(0, 0, 0)));

        // Lines
        for (int y = 0; y < Global::terminalHeight; y++) {
            // Line and color method ID
            jmethodID lineid = Manager::jenv->GetMethodID(this->javaClass, "getLine", "(I)Ljava/lang/String;");
            error();
            jmethodID colorid = Manager::jenv->GetMethodID(this->javaClass, "getColorLine", "(I)Ljava/lang/String;");
            error();

            // Values
            jstring jl = (jstring) Manager::jenv->CallObjectMethod(this->javaObject, lineid, jint(y));
            error();
            jstring cjl = (jstring) Manager::jenv->CallObjectMethod(this->javaObject, colorid, jint(y));
            error();

            // Convert to C values
            const char *l = Manager::jenv->GetStringUTFChars(jl, 0);
            const char *cl = Manager::jenv->GetStringUTFChars(cjl, 0);
            QString line = QString(l);
            QString color = QString(cl);

            // Render line
            for (int x = 0; x < Global::terminalWidth; x++) {
                QColor bg = Renderer::colorForHex(color.mid(x + color.length() / 2, 1));

                // Cell location on widget
                int cellx = x * Global::cellWidth + Global::xoffset;
                int celly = y * Global::cellHeight + Global::yoffset;

                // Background
                p.fillRect(cellx, celly, Global::cellWidth, Global::cellHeight, QBrush(bg));

                // String
                // Renders the image to the grid
                Renderer::renderString(p, tc, text, cellx, celly);
            }

            // Release
            Manager::jenv->ReleaseStringUTFChars(jl, l);
            Manager::jenv->ReleaseStringUTFChars(cjl, cl);
        }
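    A hedged suggestion (assuming the terminal content changes far less often than paint events arrive): render the grid once into an off-screen QPixmap and have paintEvent just blit it, redrawing the pixmap only when the content actually changes. The names TerminalWidget, renderGrid, cache and dirty below are illustrative, not from the post:

        // Sketch: cache the expensive grid rendering in a member QPixmap.
        // "dirty" is a hypothetical flag the update path would set.
        void TerminalWidget::paintEvent(QPaintEvent *)
        {
            if (dirty) {
                cache = QPixmap(size());
                QPainter cp(&cache);
                renderGrid(cp);   // the existing fillRect/renderString loop goes here
                dirty = false;
            }
            QPainter p(this);
            p.drawPixmap(0, 0, cache);  // cheap blit instead of 969 fillRect calls
        }

    Moving the two GetMethodID lookups out of the per-line loop should also help a little, since the IDs never change between iterations.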

    Read the article

  • If we don't like it for the presentation layer, then why do we tolerate it for the behavior layer?

    - by greim
    Suppose CSS as we know it had never been invented, and the closest we could get was to do this:

        <script>
        // this is the page's stylesheet
        $(document).ready(function(){
            $('.error').css({'color':'red'});
            $('a[href]').css({'textDecoration':'none'});
            ...
        });
        </script>

    If this was how we were forced to write code, would we put up with it? Or would every developer on Earth scream at browser vendors until they standardized upon CSS, or at least some kind of declarative style language? Maybe CSS isn't perfect, but hopefully it's obvious how it's better than the "find things, do stuff" method shown above. So my question is this. We've seen and tasted of the glory of declarative binding with CSS, so why, when it comes to the behavioral/interactive layer, does the entire JavaScript community seem complacent about continuing to use the kludgy procedural method described above? Why, for example, is this considered by many to be the best possible way to do things:

        <script>
        $(document).ready(function(){
            $('.widget').append("<a class='button' href='#'>...</a>");
            $('a[href]').click(function(){...});
            ...
        });
        </script>

    Why isn't there a massive push to get XBL 2.0 or .htc files or some kind of declarative behavior syntax implemented in a standard way across browsers? Is this recognized as a need by other web development professionals? Is there anything on the horizon for HTML5? (Caveats, disclaimers, etc.: I realize that it's not a perfect world and that we're playing the hand we've been dealt. My point isn't to criticize the current way of doing things so much as to criticize the complacency that exists about the current way of doing things. Secondly, event delegation, especially at the root level, is a step closer to having a declarative behavior layer. It solves a subset of the problem, but it can't create UI elements, so the overall problem remains.)

    Read the article

  • How can I locate an element depending on its value?

    - by Nikolay Kulakov
    I have a table:

        Title 1        | Title 2        | Title 3        |
        --------------------------------------------------
        string value 1 | string value 2 | string value 3 |
        --------------------------------------------------
        string value 4 | string value 5 | string value 6 |

    and a step: "And click on its < name", where < name can be anything from the first column. So how can I find the needed element and click on it? Table code:

        <div id="grid" class="k-grid k-widget" data-role="grid" tabindex="0" style="">
          <div class="k-grid-header" style="padding-right: 17px;">
            <div class="k-grid-header-wrap">
              <table cellspacing="0">
                <colgroup>
                  <col style="width: 30px;">
                  <col style="width: 300px;">
                  <col>
                  <col style="width: 200px;">
                </colgroup>
                <thead>
                  <tr>
                    <th class="k-header" data-title="ID" data-field="id">ID</th>
                    <th class="k-header" data-title="????????????" data-field="name" data-role="sortable" data-dir="asc">
                      <a class="k-link" href="#"> 2nd part ????????????
                        <span class="k-icon k-arrow-up"></span>
                      </a>
                    </th>
                    <th class="k-header" data-title="????????" data-field="description">????????</th>
                    <th class="k-header" data-field="undefined" data-role="sortable">
                  </tr>
                </thead>
              </table>
            </div>
          </div>
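    This reads like a test step over a Kendo UI grid, so one hedged approach (the tbody selector is an assumption, since the snippet only shows the header) is to match the cell by its text and click it:

        // jQuery sketch: click the first first-column cell whose trimmed text equals `name`
        function clickRowByName(name) {
            $("#grid tbody td").filter(function () {
                return $.trim($(this).text()) === name;
            }).first().click();
        }

        // Equivalent XPath, if the test framework speaks XPath instead:
        // //div[@id='grid']//td[normalize-space(text())='string value 1']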

    Read the article

  • AJAX, Same-Origin Policy, and working XML requests

    - by Joern
    Hello guys, so, currently I develop widgets for smartphones and am going a bit more advanced into fields of data exchange between client and server applications. My problem is: for my current project I want my client file to request data from a PHP script with the help of an AJAX XmlHttpRequest and the POST method:

        function xmlRequestNotes() {
            var parameter = 'p=1234';
            xmlhttp = new XMLHttpRequest();
            xmlhttp.open("POST", url, true);
            // Http Header
            xmlhttp.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
            xmlhttp.setRequestHeader("Content-length", parameter.length);
            xmlhttp.setRequestHeader("Connection", "close");
            xmlhttp.onreadystatechange = function() {
                if (xmlhttp.readyState==4 && xmlhttp.status==200) {
                    json = JSON.parse(xmlhttp.responseText);
                    // Doing Stuff with the Response
                }
            };
            xmlhttp.send(parameter);
        }

    This works perfectly fine on my local server set up in XAMPP and the local widget emulator. But once it gets onto the device (also with access to the target network) I receive the 101 Network Error. And as far as I have read, this is due to the "Same-Origin Policy" of XmlHttpRequests? My problem is to really understand that. Although the idea of this policy is clear to me, I'm a bit confused by the fact that another XmlHttpRequest for a Yahoo Weather XML feed works fine. Now, could anyone be so helpful as to enlighten me? Here is the request that returns a city name from Yahoo's weather feed:

        function getCityName() {
            xmlhttp = new XMLHttpRequest();
            xmlhttp.open("GET", "http://weather.yahooapis.com/forecastrss?w=645458&u=c", true);
            xmlhttp.onreadystatechange = function() {
                if (xmlhttp.readyState==4 && xmlhttp.status==200) {
                    xmlhttp.responseXML;
                    var yweather = "http://xml.weather.yahoo.com/ns/rss/1.0";
                    alert(xmlhttp.responseXML.getElementsByTagNameNS(yweather, "location")[0].getAttribute("city"));
                }
            };
            xmlhttp.send(null);
        }

    Obvious differences are the POST and GET methods for once, but seeing that the Same-Origin Policy takes effect no matter what method, I can't really make much sense of it. Why does the latter request work but not the first? I would really appreciate some help here. Greetings and a merry Christmas to you guys!
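    A hedged guess, based on how W3C/Opera-style widget runtimes handle network access (not something stated in the post): the Yahoo request may work because that origin is already whitelisted in the widget package, while the widget's own server is not. Such runtimes typically declare allowed origins in config.xml, along these lines:

        <!-- Sketch of a W3C widget access request; the origin is illustrative -->
        <widget xmlns="http://www.w3.org/ns/widgets">
          <access origin="http://your-server.example.com" subdomains="true"/>
          <!-- some runtimes also accept a blanket grant: <access origin="*"/> -->
        </widget>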

    Read the article

  • Why does my json_encode output get corrupted?

    - by Cullen SUN
        $model = new XUploadForm;
        $model->file = CUploadedFile::getInstance( $model, 'file' );
        //We check that the file was successfully uploaded
        if( $model->file !== null ) {
            //Grab some data
            $model->mime_type = $model->file->getType( );
            $model->size = $model->file->getSize( );
            $model->name = $model->file->getName( );
            $file_extention = $model->file->getExtensionName( );
            //(optional) Generate a random name for our file
            $file_tem_name = md5(Yii::app( )->user->id.microtime( ).$model->name);
            $file_thumb_name = $file_tem_name.'_thumb.'.$file_extention;
            $file_image_name = $file_tem_name.".".$file_extention;
            if( $model->validate( ) ) {
                //Move our file to our temporary dir
                $model->file->saveAs( $path.$file_image_name );
                if(chmod($path.$file_image_name, 0777 )){
                    // Yii::import("ext.EPhpThumb.EPhpThumb");
                    // $thumb_=new EPhpThumb();
                    // $thumb_->init();
                    // $thumb_->create($path.$file_image_name)
                    //        ->resize(110,80)
                    //        ->save($path.$file_thumb_name);
                }
                //here you can also generate the image versions you need
                //using something like PHPThumb
                //Now we need to save this path to the user's session
                if( Yii::app( )->user->hasState( 'images' ) ) {
                    $userImages = Yii::app( )->user->getState( 'images' );
                } else {
                    $userImages = array();
                }
                $userImages[] = array(
                    "filename" => $file_image_name,
                    'size' => $model->size,
                    'mime' => $model->mime_type,
                    "path" => $path.$file_image_name,
                    // "thumb" => $path.$file_thumb_name,
                );
                Yii::app( )->user->setState('images', $userImages);
                //Now we need to tell our widget that the upload was succesfull
                //We do so, using the json structure defined in
                // https://github.com/blueimp/jQuery-File-Upload/wiki/Setup
                echo json_encode( array( array(
                    "type" => $model->mime_type,
                    "size" => $model->size,
                    "url" => $publicPath.$file_image_name,
                    //"thumbnail_url" => $publicPath.$file_thumb_name,
                    //"thumbnail_url" => $publicPath."thumbs/$filename",
                    "delete_url" => $this->createUrl( "upload", array( "_method" => "delete", "file" => $file_image_name ) ),
                    "delete_type" => "POST"
                ) ) );

    The code above gives me the correct response:

        [{"type":"image/jpeg","size":2266,"url":"/uploads/tmp/0b00cbaee07c6410241428c74aae1dca.jpeg","delete_url":"/api/imageUpload/upload?_method=delete&file=0b00cbaee07c6410241428c74aae1dca.jpeg","delete_type":"POST"}]

    But if I uncomment the following:

        Yii::import("ext.EPhpThumb.EPhpThumb");
        $thumb_=new EPhpThumb();
        $thumb_->init();
        $thumb_->create($path.$file_image_name)
               ->resize(110,80)
               ->save($path.$file_thumb_name);

    it gives me a corrupted response:

        Mac OS X 2??ATTR?dA??Y?Ycom.apple.quarantine0001;50655994;Google\x20Chrome.app;2599ECF9-69C5-4386-B3D9-9F5CC7E0EE1D|com.google.ChromeThis resource fork intentionally left blank ??[{"type":"image/jpeg","size":1941,"url":"/uploads/tmp/409c5921c6d20944e1a81f32b12fc380.jpeg","delete_url":"/api/imageUpload/upload?_method=delete&file=409c5921c6d20944e1a81f32b12fc380.jpeg","delete_type":"POST"}]
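    A hedged reading of that garbage: the prefix looks like an AppleDouble resource fork (a "._*" companion file created when the EPhpThumb extension was copied from a Mac), which gets emitted when the extension files are included. The real fix would be re-copying the extension without resource forks; as a workaround, discarding any buffered output before echoing keeps the JSON clean:

        // Drop anything the includes already sent, then emit only the JSON
        while (ob_get_level() > 0) {
            ob_end_clean();
        }
        header('Content-Type: application/json');
        echo json_encode($response);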

    Read the article

  • jQuery button icons for a dialog

    - by khinester
    I have this code:

        $(function() {
            var mainButtons = [
                { text: "Invite", 'class': 'invite-button', click: function() {
                    // get list of members
                    alert('Invite was clicked...');
                }} // end Invite button
                , { text: "Options", 'class': 'options-button', click: function() {
                    alert('Options...');
                }} // end Options button
            ] // end mainButtons
            , commentButtons = [
                { text: "Clear", 'class': 'clear-button', click: function() {
                    $('#comment').val('').focus().select();
                }} // end Clear button
                , { text: "Post comment", 'class': 'post-comment-button', click: function() {
                    alert('send comment...');
                }} // end Post comment
            ]; // end commentButtons

            $( "#form" ).dialog({
                autoOpen: false
                , height: 465
                , width: 700
                , modal: true
                , position: ['center', 35]
                , buttons: mainButtons
            });

            $( "#user-form" )
                .button()
                .click(function() {
                    $(this).effect("transfer", { to: $("#form") }, 1500);
                    $( "#form" ).dialog( "open" );
                    $( ".invite-button" ).button({ icons: {primary:'ui-icon-person',secondary:'ui-icon-triangle-1-s'} });
                    $( ".options-button" ).button({ icons: {primary:'ui-icon-gear'} });
                });

            // Add comment...
            $("#comment, .comment").click(function(){
                $('#comment').focus().select();
                $("#form").dialog({buttons: commentButtons});
                $( ".post-comment-button" ).button({ icons: {primary:'ui-icon-comment'} });
                $( ".clear-button" ).button({ icons: {primary:'ui-icon-refresh'} });
            }); //Add comment

            // Bind back the Invite, Options buttons
            $(".files, .email, .event, .map").click(function(){
                $("#form").dialog({buttons: mainButtons});
                $( ".invite-button" ).button({ icons: {primary:'ui-icon-person',secondary:'ui-icon-triangle-1-s'} });
                $( ".options-button" ).button({ icons: {primary:'ui-icon-gear'} });
            });

            // Tabs
            $( "#tabs" ).tabs();
            $( ".tabs-bottom .ui-tabs-nav, .ui-tabs-nav > *" )
                .removeClass( "ui-widget-header" )
                .addClass( "ui-corner-bottom" );
        });

    What is the right way to add the button icons? In my code I had to add them twice: once in the $( "#user-form" ).button().click(...) handler and again in the $(".files, .email, .event, .map").click(...) handler. Could this code be improved further? I also don't seem to be able to get the "transfer" effect to work correctly in a modal. I added:

        close: function() {
            $(this).effect("transfer", { to: $("#user-form") }, 1500);
        }

    to the $( "#form" ).dialog({ options. How would you go about getting the "transfer" to work nicely when you open and close the dialog box?
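    A minimal sketch of one way to avoid the duplication (an assumption about intent, not the only approach): centralize the icon setup in one function and run it from the dialog's "open" callback, so it applies no matter which code path (re)creates the buttons:

        function applyButtonIcons() {
            $(".invite-button").button({ icons: { primary: "ui-icon-person", secondary: "ui-icon-triangle-1-s" } });
            $(".options-button").button({ icons: { primary: "ui-icon-gear" } });
            $(".post-comment-button").button({ icons: { primary: "ui-icon-comment" } });
            $(".clear-button").button({ icons: { primary: "ui-icon-refresh" } });
        }

        $("#form").dialog({
            autoOpen: false,
            modal: true,
            open: applyButtonIcons   // runs every time the dialog opens
        });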

    Read the article

  • Dynamic positioning inside relative div

    - by ian
    I'm trying to get a color picker JavaScript widget working in a page with a bunch of "stuff" in it that I can't change. Some of the "stuff" is causing the color picker to appear well below the link when clicked. I've reduced it to a simple example below.

        <html>
        <head>
        <script type="text/javascript">
        function setPos(aname,dname) {
            var o=document.getElementById(aname);
            var ol=o.offsetLeft;
            while ((o=o.offsetParent) != null) { ol += o.offsetLeft; }
            o=document.getElementById(aname);
            var ot=o.offsetTop + 25;
            while((o=o.offsetParent) != null) { ot += o.offsetTop; }
            document.getElementById(dname).style.left = ol + "px";
            document.getElementById(dname).style.top = ot + "px";
        }
        </script>
        <style>
        h1 {height: 50px;}
        #divMain {position: relative;}
        </style>
        </head>
        <body>
        <h1></h1>
        <div id="divMain">
            <a href="#" onClick="setPos('link1','div1');return false;" name="link1" id="link1">link 1</a>
            <div id="div1" style="position:absolute;border-style:solid;left:200px;top:200px;">div 1</div>
        </div>
        </body>
        </html>

    What's supposed to happen is that when you click "link 1", "div1" should move directly below "link 1". What actually happens is that "div 1" appears well below "link 1". If you remove position: relative; from the CSS definition for divMain, "div 1" is positioned correctly. How can I position "div 1" directly beneath "link 1" without removing position: relative;?
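    A hedged explanation and sketch: with position:relative on #divMain, the absolutely positioned #div1 is placed relative to #divMain rather than the page, while setPos computes page coordinates, so the offsets get double-counted. Accumulating the link's offsets only up to that positioned ancestor lines the two coordinate spaces up:

        function setPos(aname, dname) {
            var link = document.getElementById(aname);
            var div = document.getElementById(dname);
            var parent = div.offsetParent; // #divMain, because it is position:relative

            var left = 0, top = 25; // 25px gap below the link, as in the original
            for (var o = link; o && o !== parent; o = o.offsetParent) {
                left += o.offsetLeft;
                top += o.offsetTop;
            }
            div.style.left = left + "px";
            div.style.top = top + "px";
        }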

    Read the article

  • How to store the result of drag and drop as an image

    - by Jimmy
    I want to take a screenshot of the result of a drag and drop, but I don't know how to do it. I found two JavaScript/HTML5 libraries, html2canvas and Canvas2Image, and I am combining them, but I still have problems with Canvas2Image. Please help me solve this problem if you have experience with it, thank you a lot. I've been stuck here for days. Drag and drop code:

        <script>
        $(function() {
            $( "#draggable" ).draggable();
            $( "#draggable2" ).draggable();
            $( "#droppable" ).droppable({
                hoverClass: "ui-state-active",
                drop: function( event, ui ) {
                    $( this )
                        .addClass( "ui-state-highlight" )
                        .find( "p" )
                        .html( "Dropped!" );
                }
            });
        });
        </script>

    Image generation code:

        <script>
        window.onload = function() {
            function convertCanvas(strType) {
                if (strType == "JPEG")
                    var oImg = Canvas2Image.saveAsJPEG(oCanvas, true);
                if (!oImg) {
                    alert("Sorry, this browser is not capable of saving " + strType + " files!");
                    return false;
                }
                oImg.id = "canvasimage";
                oImg.style.border = oCanvas.style.border;
                oCanvas.parentNode.replaceChild(oImg, oCanvas);
            }
            function convertHtml(strType) {
                $('body').html2canvas();
                var queue = html2canvas.Parse();
                var canvas = html2canvas.Renderer(queue,{elements:{length:1}});
                var img = canvas.toDataURL();
                convertCanvas(strType);
                window.open(img);
            }
            document.getElementById("html2canvasbtn").onclick = function() {
                convertHtml("JPEG");
            }
        }
        </script>

    HTML code:

        <body>
        <h3>Picture:</h3>
        <div id="draggable">
            <img src='http://1.gravatar.com/avatar/1ea64135b09e00ab80fa7596fafbd340?s=50&d=identicon&r=R'>
        </div>
        <div id="draggable2">
            <img src='http://0.gravatar.com/avatar/2647a7d4b4a7052d66d524701432273b?s=50&d=identicon&r=G'>
        </div>
        <div id="dCanvas">
            <canvas id="droppable" width="500" height="500" style="border: 2px solid gray" class="ui-widget-header" />
        </div>
        <input type="button" id="bGenImage" value="Generate Image" />
        <div id="dOutput"></div>
        </body>
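    A hedged sketch using html2canvas's callback-style API (as found in the older 0.x releases; newer versions use promises instead): render the drop area to a canvas, then read it out as a data URL, skipping Canvas2Image entirely:

        html2canvas(document.getElementById("dCanvas"), {
            onrendered: function (canvas) {
                // toDataURL is standard canvas API; "image/jpeg" picks the format
                var dataUrl = canvas.toDataURL("image/jpeg");
                window.open(dataUrl); // or assign it to an <img> src, or upload it
            }
        });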

    Read the article

  • Can I avoid a threaded UDP socket in Python dropping data?

    - by 666craig
    First off, I'm new to Python and learning on the job, so be gentle! I'm trying to write a threaded Python app for Windows that reads data from a UDP socket (thread-1), writes it to file (thread-2), and displays the live data (thread-3) to a widget (gtk.Image using a gtk.gdk.pixbuf). I'm using queues for communicating data between threads. My problem is that if I start only threads 1 and 3 (so skip the file writing for now), it seems that I lose some data after the first few samples. After this drop it looks fine. Even by letting thread 1 complete before running thread 3, this apparent drop is still there. Apologies for the length of the code snippet (I've removed the thread that writes to file), but I felt removing code would just prompt questions. Hope someone can shed some light :-)

        import socket
        import threading
        import Queue
        import numpy
        import gtk
        gtk.gdk.threads_init()
        import gtk.glade
        import pygtk

        class readFromUDPSocket(threading.Thread):
            def __init__(self, socketUDP, readDataQueue, packetSize, numScans):
                threading.Thread.__init__(self)
                self.socketUDP = socketUDP
                self.readDataQueue = readDataQueue
                self.packetSize = packetSize
                self.numScans = numScans

            def run(self):
                for scan in range(1, self.numScans + 1):
                    buffer = self.socketUDP.recv(self.packetSize)
                    self.readDataQueue.put(buffer)
                self.socketUDP.close()
                print 'myServer finished!'

        class displayWithGTK(threading.Thread):
            def __init__(self, displayDataQueue, image, viewArea):
                threading.Thread.__init__(self)
                self.displayDataQueue = displayDataQueue
                self.image = image
                self.viewWidth = viewArea[0]
                self.viewHeight = viewArea[1]
                self.displayData = numpy.zeros((self.viewHeight, self.viewWidth, 3), dtype=numpy.uint16)

            def run(self):
                scan = 0
                try:
                    while True:
                        if not scan % self.viewWidth: scan = 0
                        buffer = self.displayDataQueue.get(timeout=0.1)
                        self.displayData[:, scan, 0] = numpy.fromstring(buffer, dtype=numpy.uint16)
                        self.displayData[:, scan, 1] = numpy.fromstring(buffer, dtype=numpy.uint16)
                        self.displayData[:, scan, 2] = numpy.fromstring(buffer, dtype=numpy.uint16)
                        gtk.gdk.threads_enter()
                        self.myPixbuf = gtk.gdk.pixbuf_new_from_data(self.displayData.tostring(),
                            gtk.gdk.COLORSPACE_RGB, False, 8, self.viewWidth, self.viewHeight, self.viewWidth * 3)
                        self.image.set_from_pixbuf(self.myPixbuf)
                        self.image.show()
                        gtk.gdk.threads_leave()
                        scan += 1
                except Queue.Empty:
                    print 'myDisplay finished!'
                    pass

        def quitGUI(obj):
            print 'Currently active threads: %s' % threading.enumerate()
            gtk.main_quit()

        if __name__ == '__main__':
            # Create socket (IPv4 protocol, datagram (UDP)) and bind to address
            socketUDP = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            host = '192.168.1.5'
            port = 1024
            socketUDP.bind((host, port))

            # Data parameters
            samplesPerScan = 256
            packetsPerSecond = 1200
            packetSize = 512
            duration = 1  # For now, set a fixed duration to log data
            numScans = int(packetsPerSecond * duration)

            # Create array to store data
            data = numpy.zeros((samplesPerScan, numScans), dtype=numpy.uint16)

            # Create queue for displaying from
            readDataQueue = Queue.Queue(numScans)

            # Build GUI from Glade XML file
            builder = gtk.Builder()
            builder.add_from_file('GroundVue.glade')
            window = builder.get_object('mainwindow')
            window.connect('destroy', quitGUI)
            view = builder.get_object('viewport')
            image = gtk.Image()
            view.add(image)
            viewArea = (1200, samplesPerScan)

            # Instantiate & start threads
            myServer = readFromUDPSocket(socketUDP, readDataQueue, packetSize, numScans)
            myDisplay = displayWithGTK(readDataQueue, image, viewArea)
            myServer.start()
            myDisplay.start()

            gtk.gdk.threads_enter()
            gtk.main()
            gtk.gdk.threads_leave()
            print 'gtk.main finished!'
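    A hedged suggestion (a guess from the symptoms, not a confirmed diagnosis): a drop in the first few samples is consistent with the OS-level UDP receive buffer overflowing while the GTK thread holds the interpreter during pixbuf work. Enlarging the socket buffer before binding often hides that startup gap:

        # Ask the OS for a larger receive buffer (4 MB here, an arbitrary choice)
        socketUDP = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        socketUDP.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)
        socketUDP.bind((host, port))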

    Read the article

  • jQuery UI dialog + WebKit + HTML response with script

    - by Anthony Koval'
    Once again I am faced with a great problem! :) So, here is the stuff: on the client side, I have a link. By clicking on it, jQuery makes a request to the server, gets a response as HTML content, then pops up a UI dialog with that content. Here is the code of the request function:

        function preview(){
            $.ajax({
                url: "/api/builder/",
                type: "post",
                //dataType: "html",
                data: {"script_tpl": $("#widget_code").text(),
                       "widgets": $.toJSON(mwidgets),
                       "widx": "0"},
                success: function(data){
                    //console.log(data)
                    $("#previewArea").dialog({
                        bgiframe: true,
                        autoOpen: false,
                        height: 600,
                        width: 600,
                        modal: true,
                        buttons: {
                            "Cancel": function() {
                                $(this).dialog('destroy');
                            }
                        }
                    });
                    //console.log(data.toString());
                    $('#previewArea').attr("innerHTML", data.toString());
                    $("#previewArea").dialog("open");
                },
                error: function(){
                    console.log("shit happens");
                }
            })
        }

    The response (data) is:

        <html>
        <head>
        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
        <script type="text/javascript">var smakly_widget_sid = 0 ,widgets = [{"cols": "2","rows": "2","div_id": "smakly_widget","wid": "0","smakly_style": "small_image",}, ]</script>
        <script type="text/javascript" src="/media/js/smak/smakme.js"></script>
        </head>
        <body>
        preview
        <div id="smakly_widget" style="width:560px;height:550px"> </div>
        </body>
        </html>

    As you see, there is a script to load: smakme.js. Somehow it doesn't execute in WebKit-based browsers (I tried Safari and Chrome), but in Firefox, Internet Explorer and Opera it works as expected! Here is that script:

        String.prototype.format = function(){
            var pattern = /\{\d+\}/g;
            var args = arguments;
            return this.replace(pattern, function(capture){ return args[capture.match(/\d+/)]; });
        }

        var turl = "/widget"
        var widgetCtrl = new(function(){
            this.render_widget = function (w, content){
                $("#" + w.div_id).append(content);
            }
            this.build_widgets = function(){
                for (var widx in widgets){
                    var w = widgets[widx],
                        iurl = '{0}?sid={1}&wid={2}&w={3}&h={4}&referer=http://ya.ru&thrash={5}'.format(
                            turl, smakly_widget_sid, w.wid, w.cols, w.rows,
                            Math.floor(Math.random()*1000).toString()),
                        content = $('<iframe src="{0}" width="100%" height="100%"></iframe>'.format(iurl));
                    this.render_widget(w, content);
                }
            }
        })

        $(document).ready(function(){
            widgetCtrl.build_widgets();
        })

    Is that some security issue, or anything else?
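    A hedged pointer: <script> tags injected via raw innerHTML (which is what $('#previewArea').attr("innerHTML", ...) amounts to) are not executed by the parser, so any cross-browser behavior here is accidental. jQuery's .html() extracts embedded scripts and evaluates them itself, which tends to behave consistently:

        // Let jQuery handle script evaluation instead of writing innerHTML directly
        $('#previewArea').html(data.toString());
        $('#previewArea').dialog('open');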

    Read the article

  • JavaScript self-contained sandbox events and client-side stack

    - by amnon
    I'm in the process of moving a JSF-heavy web application to a REST and mainly JS module application. I've watched "Scalable JavaScript Application Architecture" by Nicholas Zakas on YUI Theater (excellent video) and implemented much of the talk with good success, but I have some questions:

    1. I found the lecture a little confusing in regards to the relationship between modules and sandboxes. On one hand, to my understanding, modules should not be affected by something happening outside of their sandbox, and this is why they publish events via the sandbox (and not via the core, though they do access the core for hiding the base library), but each module in the application gets a new sandbox? Shouldn't the sandbox limit events to the modules using it, or should events be published cross-page? E.g., if I have two editable tables, and I want to contain each one in a different sandbox and have its events affect only the modules inside that sandbox, something like a message box per table (which is a different module/widget), how can I do that with a sandbox per module? Of course I can prefix the events with the module id, but that creates coupling that I want to avoid, and I don't want to package modules together as one module per combination, as I already have 6-7 modules.

    2. While I can hide the base library for small things like the id selector etc., I would still like to use the base library for module dependencies and resource loading, using something like YUI Loader or dojo.require. So in fact I'm hiding the base library, but the modules themselves are defined and loaded by the base library, which seems a little strange to me.

    3. Libraries don't return simple JS objects but usually wrap them, e.g. you can do something like $$('.classname').each(..., which cleans the code a lot. It makes no sense to wrap the base and then, in the module, create a dependency on the base library by executing .each, but not using those features means a lot of extra code which is very bug-prone.

    4. Does anyone have any experience with building a front-side stack of this order? How easy is it to change the base library and/or have modules from different libraries, e.g. using a YUI DataTable but doing form validation with Dojo?

    5. Somewhat of a combination of 2 and 4: if you choose to do something like I said and load Dojo form validation widgets for inputs via YUI Loader, would that mean Dojo core is a module and the form module is dependent on it?

    Thanks.
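    On question 1, one hedged reading of the pattern (an interpretation, not Zakas's own code): nothing forces a one-to-one mapping between sandboxes and event scopes. A sketch where the event broker is passed into the sandbox, so two related modules can share an event scope without prefixed event names:

        function Broker() {
            var channels = {};
            return {
                publish: function (event, data) {
                    (channels[event] || []).forEach(function (fn) { fn(data); });
                },
                subscribe: function (event, fn) {
                    (channels[event] = channels[event] || []).push(fn);
                }
            };
        }

        function Sandbox(core, broker) {
            this.publish = broker.publish;
            this.subscribe = broker.subscribe;
            // ...DOM and utility helpers delegated to core...
        }

        // Usage: one broker per editable table, shared with its message-box module
        var tableABroker = Broker();
        var tableASandbox = new Sandbox(coreLib, tableABroker);
        var messageBoxASandbox = new Sandbox(coreLib, tableABroker);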

    Read the article

  • Is using a Singleton to pass credentials in a multi-tenant application a code smell?

    - by Hans Gruber
    Currently working on a multi-tenant application that employs a Shared DB/Shared Schema approach. IOW, we enforce tenant data segregation by defining a TenantID column on all tables. By convention, all SQL reads/writes must include a Where TenantID = '?' clause. Not an ideal solution, but hindsight is 20/20. Anyway, since virtually every page/workflow in our app must display tenant specific data, I made the (poor) decision at the project's outset to employ a Singleton to encapsulate the current user credentials (i.e. TenantID and UserID). My thinking at the time was that I didn't want to add a TenantID parameter to each and every method signature in my Data layer. Here's what the basic pseudo-code looks like:

        public class UserIdentity
        {
            public UserIdentity(int tenantID, int userID)
            {
                TenantID = tenantID;
                UserID = userID;
            }

            public int TenantID { get; private set; }
            public int UserID { get; private set; }
        }

        public class AuthenticationModule : IHttpModule
        {
            public void Init(HttpApplication context)
            {
                context.AuthenticateRequest += new EventHandler(context_AuthenticateRequest);
            }

            private void context_AuthenticateRequest(object sender, EventArgs e)
            {
                var userIdentity = _authenticationService.AuthenticateUser(sender);
                if (userIdentity == null)
                {
                    //authentication failed, so redirect to login page, etc
                }
                else
                {
                    //put the userIdentity into the HttpContext object so that
                    //its only valid for the lifetime of a single request
                    HttpContext.Current.Items["UserIdentity"] = userIdentity;
                }
            }
        }

        public static class CurrentUser
        {
            public static UserIdentity Instance
            {
                get { return HttpContext.Current.Items["UserIdentity"]; }
            }
        }

        public class WidgetRepository : IWidgetRepository
        {
            public IEnumerable<Widget> ListWidgets()
            {
                var tenantId = CurrentUser.Instance.TenantID;
                //call sproc with tenantId parameter
            }
        }

    As you can see, there are several code smells here. This is a singleton, so it's already not unit test friendly. On top of that you have a very tight coupling between CurrentUser and the HttpContext object. By extension, this also means that I have a reference to System.Web in my Data layer (shudder). I want to pay down some technical debt this sprint by getting rid of this singleton for the reasons mentioned above. I have a few thoughts on what a better implementation might be, but if anyone has any guidance or lessons learned they could share, I would be much obliged.
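    A hedged sketch of one common refactoring (names here are illustrative, not from the post): hide the lookup behind an interface and inject it, so the data layer depends on an abstraction and only one web-layer class touches HttpContext:

        public interface IUserContext
        {
            UserIdentity Current { get; }
        }

        // Web-layer implementation; the only class referencing System.Web
        public class HttpUserContext : IUserContext
        {
            public UserIdentity Current
            {
                get { return (UserIdentity)HttpContext.Current.Items["UserIdentity"]; }
            }
        }

        public class WidgetRepository : IWidgetRepository
        {
            private readonly IUserContext _userContext;

            public WidgetRepository(IUserContext userContext)
            {
                _userContext = userContext;
            }

            public IEnumerable<Widget> ListWidgets()
            {
                var tenantId = _userContext.Current.TenantID;
                //call sproc with tenantId parameter
            }
        }

    In tests, IUserContext can then be stubbed with a fixed tenant, which removes both the singleton and the System.Web reference from the data layer.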

    Read the article

  • Can I mix declarative and programmatic layout in GWT 2.0?

    - by stuff22
    I'm trying to redo an existing panel that I made before GWT 2.0 was released. The panel has a few text fields and a scrollable panel below in a VerticalPanel. What I'd like to do is to make the scrollable panel with UiBinder and then add that to a VerticalPanel. Below is an example I created to illustrate this:

        public class ScrollTablePanel extends ResizeComposite {
            interface Binder extends UiBinder<Widget, ScrollTablePanel> { }
            private static Binder uiBinder = GWT.create(Binder.class);

            @UiField FlexTable table1;
            @UiField FlexTable table2;

            public Test2() {
                initWidget(uiBinder.createAndBindUi(this));
                table1.setText(0, 0, "testing 1");
                table1.setText(0, 1, "testing 2");
                table1.setText(0, 2, "testing 3");
                table2.setText(0, 0, "testing 1");
                table2.setText(0, 1, "testing 2");
                table2.setText(0, 2, "testing 3");
                table2.setText(1, 0, "testing 4");
                table2.setText(1, 1, "testing 5");
                table2.setText(1, 2, "testing 6");
            }
        }

    then the xml:

        <ui:UiBinder xmlns:ui='urn:ui:com.google.gwt.uibinder'
            xmlns:g='urn:import:com.google.gwt.user.client.ui'
            xmlns:mail='urn:import:com.test.scrollpaneltest'>
            <g:DockLayoutPanel unit='EM'>
                <g:north size="2">
                    <g:FlexTable ui:field="table1"></g:FlexTable>
                </g:north>
                <g:center>
                    <g:ScrollPanel>
                        <g:FlexTable ui:field="table2"></g:FlexTable>
                    </g:ScrollPanel>
                </g:center>
            </g:DockLayoutPanel>
        </ui:UiBinder>

    Then do something like this in the EntryPoint:

        public void onModuleLoad() {
            VerticalPanel vp = new VerticalPanel();
            vp.add(new ScrollTablePanel());
            vp.add(new Label("dummy label text"));
            vp.setWidth("100%");
            RootLayoutPanel.get().add(vp);
        }

    But when I add the ScrollTablePanel to the VerticalPanel, only the first FlexTable (table1) is visible on the page, not the whole ScrollTablePanel. Is there a way to make this work where it is possible to mix declarative and programmatic layout in GWT 2.0?
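    (As an aside, the constructor is still named Test2 inside class ScrollTablePanel, which won't compile as pasted.) On the layout itself, a hedged sketch: DockLayoutPanel only sizes itself inside layout panels (RootLayoutPanel, or parents implementing ProvidesResize), so dropped into a VerticalPanel it gets zero height. Giving the composite an explicit size is one way to mix the two worlds:

        public void onModuleLoad() {
            ScrollTablePanel tablePanel = new ScrollTablePanel();
            // Explicit height so the internal DockLayoutPanel has room to lay out
            tablePanel.setSize("100%", "400px");

            VerticalPanel vp = new VerticalPanel();
            vp.add(tablePanel);
            vp.add(new Label("dummy label text"));
            vp.setWidth("100%");

            RootLayoutPanel.get().add(vp);
        }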

    Read the article

  • jQuery draggable + toggleClass problem

    - by vrynxzent
    The problem here is that the toggled class's top: 0px; left: 0px positioning will not take effect; only the width, height, and background-color change. It works if I don't drag the div (draggable), but once I start dragging the element, the toggled class positioning no longer has any effect. I don't know if there's a jQuery function to help with this.

        <html>
        <head>
        <title></title>
        <script type="text/javascript" src="jquery-1.4.2.js"></script>
        <script type="text/javascript" src="jquery.ui.core.js"></script>
        <script type="text/javascript" src="jquery.ui.widget.js"></script>
        <script type="text/javascript" src="jquery.ui.mouse.js"></script>
        <script type="text/javascript" src="jquery.ui.resizable.js"></script>
        <script type="text/javascript" src="jquery.ui.draggable.js"></script>
        <script type="text/javascript">
        $(document).ready(function(){
            $("#x").draggable().dblclick(function(){
                $(this).toggleClass("hi");
            });
        });
        </script>
        <style>
        .hello {
            background:red;
            width:200px;
            height:200px;
            position:relative;
            top:100px;
            left:100px;
        }
        .hi {
            background:yellow;
            position:relative;
            width:300px;
            height:300px;
            top:0px;
            left:0px;
        }
        </style>
        </head>
        <body>
        <div id="x" class="hello"> </div>
        </body>
        </html>
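    A hedged explanation and fix: draggable writes inline top/left styles while dragging, and inline styles beat anything a toggled class declares. Clearing them on double-click lets the class positioning take effect again:

        $("#x").draggable().dblclick(function () {
            // Empty strings remove the inline values draggable left behind
            $(this).toggleClass("hi").css({ top: "", left: "" });
        });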

    Read the article

  • wxWidgets: Show a window that was marked hidden in the XRC

    - by jdwieber
    I'm new to wxWidgets and DialogBlocks. I have a form that is created using DialogBlocks and saved as an XRC file. Part of the form has a vertical wxStaticBoxSizer into which are placed two wxScrolledWindow elements. I want to show only one at a time based on what data is to be shown to the user, so I have one marked hidden and left the other one visible. In code (C++), when I try to switch the display, showing the widget that was hidden in the XRC and hiding the one that was not, the one that I hide goes away fine, but the one that I want to show is not visible. If I resize the window, however, it appears. Once it has appeared, I can switch back and forth with no issues. I tried many combinations of showing, enabling, invalidating, getting the sizer and calling RecalcSizes, refresh, layout, and some others, in different orders too. Simply calling Show will let me toggle between the two, but only after I switch to the one that does not show initially and resize the window. From what I see in the docs, the issue is that wxSizer doesn't allocate space for hidden windows, but there is a flag that can be set to override that behavior. My problem is that DialogBlocks does not expose that feature, so if I manually edit the XRC file the modification will be lost when I, or one of the other devs, saves some changes. Is there a sequence of calls that I can make to tell the sizer to allocate space? The default OnResize handler does something to cause the sizer to allocate space, but I don't know what that is, or how to do it. This is the flag I found in the docs:

        wxRESERVE_SPACE_EVEN_IF_HIDDEN
        Normally wxSizers don't allocate space for hidden windows or other items.
        This flag overrides this behavior so that sufficient space is allocated for
        the window even if it isn't visible. This makes it possible to dynamically
        show and hide controls without resizing the parent dialog, for example.
        This function is new since wxWidgets version 2.8.8

    Thanks in advance.
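    A hedged sketch of the usual remedy: after toggling visibility, ask the parent to re-run its sizer layout; resizing the window does exactly this, which would explain why the widget appears after a manual resize. MyFrame and SwitchPanels below are illustrative names:

        void MyFrame::SwitchPanels(wxWindow* show, wxWindow* hide)
        {
            hide->Hide();
            show->Show();
            show->GetParent()->Layout();          // recompute sizer geometry now
            // or, for a deeply nested sizer:
            // show->GetContainingSizer()->Layout();
        }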

    Read the article

  • Writing OpenGL enabled GUI

    - by Jaen
    I am exploring a possibility to write a kind of a notebook analogue that would reproduce the look and feel of using a traditional notebook, but with the added benefit of customizing the page in ways you can't do on paper: ask the program to lay ruled paper here, grid paper there, paste an image, insert a recording from the built-in camera, try to do handwriting recognition on the tablet input, insert some LaTeX for neat formulas, and so on. I'm pretty interested in developing it just to see if writing notes on a computer can come anywhere close to the comfort plain paper + pencil offer (hard to do IMO), and I can always turn it in as a university C++ project, so double gain there. Coming from the type of project, there are certain requirements for the user interface:

    - the user will be able to zoom, move and rotate the notebook as he wishes, and I think it's pretty sensible to delegate that to OpenGL, so the prospective GUI needs to work well with OGL (preferably being rendered in it)
    - the interface should be navigable with as little keyboard input as the user wishes (incorporating some sort of gestures maybe), up to limiting the keyboard keys to modifiers for the pen movements and taps; this includes tablet and possible multitouch support
    - the interface should keep out of the way where not needed and come up where needed, and be easily layerable
    - the notebook sheet itself will be a container for objects representing the notebook blurbs, so it would be nice if the GUI were able to overlay frames over the exact parts of the OpenGL-drawn sheet to signify what can be done with a given part (like moving, rotating, deleting, copying, editing etc.) and its extents

    In terms of interface it's probably going to end up similar to Alias' SketchBook Pro (http://1.bp.blogspot.com/_GGxlzvZW-CY/SeKYA_oBdSI/AAAAAAAAErE/J6A0kyXiuqA/s400/Autodesk_Alias_SketchBook_Pro_2.jpg).

    As far as toolkits go I'm considering Qt and nui (http://www.libnui.net/), but I'm not really aware how well they would match the requirements and how well they would handle such an application. As far as I know you can somehow coerce Qt into doing widget drawing with OpenGL, but on the other hand I've heard that its signal-slot framework isn't exactly optimal and requires its own preprocessor, and I don't know how hard it would be to do all the custom widgets I would need (say color-wheel, ruler, blurb frames, blurb selection, tablet-targeted pop-up menu etc.) within the constraints of Qt. Also quite a few Qt programs I've had on my machine seemed really sluggish, but that may be attributed to me having an old PC, or to programmers using Qt suboptimally, rather than to the framework itself. As for nui, I know it's also cross-platform and has all of the basic things you would require of a GUI toolkit, and, the biggest plus, it is OpenGL-enabled from the start; but I don't know how it is with custom widgets and other facets, and it certainly has a smaller user base and less elaborate documentation than Qt.

    The question goes as this: does any of these toolkits fulfill (preferably all of) the requirements, or is there a well-fitting toolkit I haven't come across, or maybe I should just roll up my sleeves, get SFML (or maybe Clutter would be more suited to this?) and something like FastDelegates or libsigc++ and program the GUI framework from the ground up myself? I would be very glad if anyone has experience with a similar GUI project and can offer some comments on how well these toolkits hold up, or whether it is worthwhile to pursue my own GUI toolkit in this case. Sorry for long-windedness, duh.

    Read the article

  • Do email forms need to be sanitized before sending?

    - by levi
    I have a client that keeps getting reports from GoDaddy's "websiteprotection.com" stating how the website is insecure:

        Your website contains pages that do not properly sanitize visitor-provided input
        to make sure it contains no malicious content or scripts. Cross-site scripting
        vulnerabilities let malicious users execute arbitrary HTML or script code in
        another visitor's browser.

        Output: The request string used to detect this flaw was: /cross_site_scripting.?nasl.asp
        The output was:
        HTTP/1.1 404 Not Found\r Date: Wed, 21 Mar 2012 08:12:02 GMT\r Server: Apache\r
        X-Pingback: http://CLIENTSWEBSITE.com/xmlrpc.php\r Expires: Wed, 11 Jan 1984 05:00:00 GMT\r
        Cache-Control: no-cache, must-revalidate, max-age=0\r Pragma: no-cache\r
        Set-Cookie: PHPSESSID=1jsnhuflvd59nb4trtquston50; path=/\r
        Last-Modified: Wed, 21 Mar 2012 08:12:02 GMT\r Keep-Alive: timeout=15, max=100\r
        Connection: Keep-Alive\r Transfer-Encoding: chunked\r Content-Type: text/html; charset=UTF-8\r \r
        <div id="contact-form" class="widget"><form action="http://CLIENTSWEBSITE.com/<script>cross_site_scripting.nasl</script>.asp" id="contactForm" method="post">

    It looks like it has an issue with the contact form. All the contact form does is post an AJAX request to the same page, and then a PHP script mails the data (no database stuff). Are there any security issues here? Any ideas on how I can satisfy the security scanner? Here is the form and script:

        <form action="<?php echo $this->getCurrentUrl(); ?>" id="contactForm" method="post">
            <input type="text" name="Name" id="Name" value="" class="txt requiredField name" />
            <!-- Some more text inputs -->
            <input type="hidden" name="sendadd" id="sendadd" value="<?php echo $emailadd ; ?>" />
            <input type="hidden" name="submitted" id="submitted" value="true" />
            <input class="submit" type="submit" value="Send" />
        </form>

        // Some initial JS validation; if that passes, an AJAX post is made to the script below

        //If the form is submitted
        if(isset($_POST['submitted'])) {
            //Check captcha
            if (isset($_POST["captchaPrefix"])) {
                $capt = new ReallySimpleCaptcha();
                $correct = $capt->check( $_POST["captchaPrefix"], $_POST["Captcha"] );
                if( ! $correct ) {
                    echo false;
                    die();
                } else {
                    $capt->remove( $_POST["captchaPrefix"] );
                }
            }
            $dateon = $_POST["dateon"];
            $ToEmail = $_POST["sendadd"];
            $EmailSubject = 'Contact Form Submission from ' . get_bloginfo('title');
            $mailheader = "From: ".$_POST["Email"]."\r\n";
            $mailheader .= "Reply-To: ".$_POST["Email"]."\r\n";
            $mailheader .= "Content-type: text/html; charset=iso-8859-1\r\n";
            $MESSAGE_BODY = "Name: ".$_POST["Name"]."<br>";
            $MESSAGE_BODY .= "Email Address: ".$_POST["Email"]."<br>";
            $MESSAGE_BODY .= "Phone: ".$_POST["Phone"]."<br>";
            if ($dateon == "on") { $MESSAGE_BODY .= "Date: ".$_POST["Date"]."<br>"; }
            $MESSAGE_BODY .= "Message: ".$_POST["Comments"]."<br>";
            mail($ToEmail, $EmailSubject, $MESSAGE_BODY, $mailheader) or die ("Failure");
            echo true;
            die();
        }
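    For what it's worth, the scanner's output shows the request path being reflected into the form's action attribute, so escaping $this->getCurrentUrl() with htmlspecialchars() is likely what the finding is about. Beyond that, a hedged sketch of basic hardening for the mail handler: validate the email, strip newlines (header-injection vectors) from anything placed into mail headers, and escape user text before echoing it back into HTML:

        $email = filter_var($_POST['Email'], FILTER_VALIDATE_EMAIL);
        if ($email === false) {
            die('Invalid email address');
        }
        $email = str_replace(array("\r", "\n"), '', $email); // no CRLF in headers

        $name = htmlspecialchars($_POST['Name'], ENT_QUOTES, 'UTF-8');
        $comments = htmlspecialchars($_POST['Comments'], ENT_QUOTES, 'UTF-8');

        $mailheader = "From: " . $email . "\r\n";
        $mailheader .= "Reply-To: " . $email . "\r\n";

    Note also that $_POST["sendadd"] lets anyone choose the recipient address; moving the destination into server-side config would close that off.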

    Read the article

  • Windows Azure: Import/Export Hard Drives, VM ACLs, Web Sockets, Remote Debugging, Continuous Delivery, New Relic, Billing Alerts and More

    - by ScottGu
    Two weeks ago we released a giant set of improvements to Windows Azure, as well as a significant update of the Windows Azure SDK. This morning we released another massive set of enhancements to Windows Azure. Today's new capabilities include:

    - Storage: Import/Export Hard Disk Drives to your Storage Accounts
    - HDInsight: General Availability of our Hadoop Service in the cloud
    - Virtual Machines: New VM Gallery, ACL support for VIPs
    - Web Sites: WebSocket and Remote Debugging Support
    - Notification Hubs: Segmented customer push notification support with tag expressions
    - TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services
    - Developer Analytics: New Relic support for Web Sites + Mobile Services
    - Service Bus: Support for partitioned queues and topics
    - Billing: New Billing Alert Service that sends email notifications when your bill hits a threshold you define

    All of these improvements are now available to use immediately (note that some features are still in preview). Below are more details about them.

    Storage: Import/Export Hard Disk Drives to Windows Azure

    I am excited to announce the preview of our new Windows Azure Import/Export Service! The Windows Azure Import/Export Service enables you to move large amounts of on-premises data into and out of your Windows Azure Storage accounts. It does this by enabling you to securely ship hard disk drives directly to our Windows Azure data centers. Once we receive the drives we'll automatically transfer the data to or from your Windows Azure Storage account. This enables you to import or export massive amounts of data more quickly and cost effectively (and not be constrained by available network bandwidth).

    Encrypted Transport

    Our Import/Export service provides built-in support for BitLocker disk encryption, which enables you to securely encrypt data on the hard drives before you send it, and not have to worry about it being compromised even if the disk is lost/stolen in transit (since the content on the transported hard drives is completely encrypted and you are the only one who has the key to it). The drive preparation tool we are shipping today makes setting up BitLocker encryption on these hard drives easy.

    How to Import/Export your first Hard Drive of Data

    You can read our Getting Started Guide to learn more about how to begin using the import/export service. You can create import and export jobs via the Windows Azure Management Portal as well as programmatically using our Server Management APIs. It is really easy to create a new import or export job using the Windows Azure Management Portal. Simply navigate to a Windows Azure storage account, and then click the new Import/Export tab now available within it (note: if you don't have this tab make sure to sign-up for the Import/Export preview). Then click the "Create Import Job" or "Create Export Job" commands at the bottom of it. This will launch a wizard that easily walks you through the steps required.

    For more comprehensive information about Import/Export, refer to the Windows Azure Storage team blog. You can also send questions and comments to the [email protected] email address. We think you'll find this new service makes it much easier to move data into and out of Windows Azure, and it will dramatically cut down the network bandwidth required when working on large data migration projects. We hope you like it.

    HDInsight: 100% Compatible Hadoop Service in the Cloud

    Last week we announced the general availability release of Windows Azure HDInsight.
    HDInsight is a 100% compatible Hadoop service that allows you to easily provision and manage Hadoop clusters for big data processing in Windows Azure. This release is now live in production, backed by an enterprise SLA, supported 24x7 by Microsoft Support, and is ready to use for production scenarios. HDInsight allows you to use Apache Hadoop tools, such as Pig and Hive, to process large amounts of data in Windows Azure Blob Storage. Because data is stored in Windows Azure Blob Storage, you can choose to dynamically create Hadoop clusters only when you need them, and then shut them down when they are no longer required (since you pay only for the time the Hadoop cluster instances are running, this provides a super cost-effective way to use them). You can create Hadoop clusters using either the Windows Azure Management Portal or using our PowerShell and Cross Platform Command line tools.

    The import/export hard drive support that came out today is a perfect companion service to use with HDInsight: the combination allows you to easily ingest, process and optionally export a limitless amount of data. We've also integrated HDInsight with our Business Intelligence tools, so users can leverage familiar tools like Excel in order to analyze the output of jobs. You can find out more about how to get started with HDInsight here.

    Virtual Machines: VM Gallery Enhancements

    Today's update of Windows Azure brings with it a new Virtual Machine gallery that you can use to create new VMs in the cloud. You can launch the gallery by doing New->Compute->Virtual Machine->From Gallery within the Windows Azure Management Portal. The new Virtual Machine Gallery includes some nice enhancements that make it even easier to use:

    - Search: You can now easily search and filter images using the search box in the top-right of the dialog. For example, simply type "SQL" and we'll filter to show those images in the gallery that contain that substring.
    - Category Tree-view: Each month we add more built-in VM images to the gallery. You can continue to browse these using the "All" view within the VM Gallery, or now quickly filter them using the category tree-view on the left-hand side of the dialog. For example, by selecting "Oracle" in the tree-view you can now quickly filter to see the official Oracle supplied images.
    - MSDN and Supported checkboxes: With today's update we are also introducing filters that make it easy to filter out types of images that you may not be interested in. The first checkbox is MSDN: using this filter you can exclude any image that is not part of the Windows Azure benefits for MSDN subscribers (which have highly discounted pricing; you can learn more about the MSDN pricing here). The second checkbox is Supported: this filter will exclude any image that contains prerelease software, so you can feel confident that the software you choose to deploy is fully supported by Windows Azure and our partners.
    - Sort options: We sort gallery images by what we think customers are most interested in, but sometimes you might want to sort using different views. So we're providing some additional sort options, like "Newest," to customize the image list for what suits you best.
    - Pricing information: We now provide additional pricing information about images and options on how to cost effectively run them directly within the VM Gallery.

    The above improvements make it even easier to use the VM Gallery and quickly create, launch, and run Virtual Machines in the cloud.
    Virtual Machines: ACL Support for VIPs

    A few months ago we exposed the ability to configure Access Control Lists (ACLs) for Virtual Machines using Windows PowerShell cmdlets and our Service Management API. With today's release, you can now configure VM ACLs using the Windows Azure Management Portal as well. You can now do this by clicking the new Manage ACL command in the Endpoints tab of a virtual machine instance. This will enable you to configure an ordered list of permit and deny rules to scope the traffic that can access your VM's network endpoints. For example, if you were on a virtual network, you could limit RDP access to a Windows Azure virtual machine to only a few computers attached to your enterprise. Or if you weren't on a virtual network you could alternatively limit traffic from public IPs that can access your workloads. Here are the default behaviors for ACLs in Windows Azure:

    - By default (i.e. no rules specified), all traffic is permitted.
    - When using only Permit rules, all other traffic is denied.
    - When using only Deny rules, all other traffic is permitted.
    - When there is a combination of Permit and Deny rules, all other traffic is denied.

    Lastly, remember that configuring endpoints does not automatically configure them within the VM if it also has firewall rules enabled at the OS level. So if you create an endpoint using the Windows Azure Management Portal, Windows PowerShell, or REST API, be sure to also configure your guest VM firewall appropriately as well.

    Web Sites: Web Sockets Support

    With today's release you can now use Web Sockets with Windows Azure Web Sites. This feature enables you to easily integrate real-time communication scenarios within your web based applications, and is available at no extra charge (it even works with the free tier). Higher level programming libraries like SignalR and socket.io are also now supported with it. You can enable Web Sockets support on a web site by navigating to the Configure tab of a Web Site, and by toggling Web Sockets support to "on". Once Web Sockets is enabled you can start to integrate some really cool scenarios into your web applications. Check out the new SignalR documentation hub on www.asp.net to learn more about some of the awesome scenarios you can do with it.

    Web Sites: Remote Debugging Support

    The Windows Azure SDK 2.2 we released two weeks ago introduced remote debugging support for Windows Azure Cloud Services. With today's Windows Azure release we are extending this remote debugging support to also work with Windows Azure Web Sites. With live, remote debugging support inside of Visual Studio, you are able to have more visibility than ever before into how your code is operating live in Windows Azure. It is now super easy to attach the debugger and quickly see what is going on with your application in the cloud.

    Remote Debugging of a Windows Azure Web Site using VS 2013

    Enabling the remote debugging of a Windows Azure Web Site using VS 2013 is really easy. Start by opening up your web application's project within Visual Studio. Then navigate to the "Server Explorer" tab within Visual Studio, and click on the deployed web-site you want to debug that is running within Windows Azure, using the Windows Azure->Web Sites node in the Server Explorer. Then right-click and choose the "Attach Debugger" option on it. When you do this Visual Studio will remotely attach the debugger to the Web Site running within Windows Azure.
Web Sites: Remote Debugging Support

The Windows Azure SDK 2.2 we released two weeks ago introduced remote debugging support for Windows Azure Cloud Services. With today’s Windows Azure release we are extending this remote debugging support to also work with Windows Azure Web Sites.

With live, remote debugging support inside of Visual Studio, you are able to have more visibility than ever before into how your code is operating live in Windows Azure. It is now super easy to attach the debugger and quickly see what is going on with your application in the cloud.

Remote Debugging of a Windows Azure Web Site using VS 2013

Enabling the remote debugging of a Windows Azure Web Site using VS 2013 is really easy.  Start by opening up your web application’s project within Visual Studio. Then navigate to the “Server Explorer” tab within Visual Studio, and click on the deployed web-site you want to debug that is running within Windows Azure using the Windows Azure->Web Sites node in the Server Explorer.  Then right-click and choose the “Attach Debugger” option on it:

When you do this Visual Studio will remotely attach the debugger to the Web Site running within Windows Azure.  The debugger will then stop the web site’s execution when it hits any break points that you have set within your web application’s project inside Visual Studio.  For example, below I set a breakpoint on the “ViewBag.Message” assignment statement within the HomeController of the standard ASP.NET MVC project template.  When I hit refresh on the “About” page of the web site within the browser, the breakpoint was triggered and I am now able to debug the app remotely using Visual Studio:

Note above how we can debug variables (including autos/watchlist/etc), as well as use the Immediate and Command Windows. In the debug session above I used the Immediate Window to explore some of the request object state, as well as to dynamically change the ViewBag.Message property.  When we click the “Continue” button (or press F5) the app will continue execution and the Web Site will render the content back to the browser.  This makes it super easy to debug web apps remotely.

Tips for Better Debugging

To get the best experience while debugging, we recommend publishing your site using the Debug configuration within Visual Studio’s Web Publish dialog. This will ensure that debug symbol information is uploaded to the Web Site which will enable a richer debug experience within Visual Studio.  You can find this option on the Web Publish dialog on the Settings tab:

When you ultimately deploy/run the application in production we recommend using the “Release” configuration setting – the release configuration is memory optimized and will provide the best production performance.  To learn more about diagnosing and debugging Windows Azure Web Sites read our new Troubleshooting Windows Azure Web Sites in Visual Studio guide.

Notification Hubs: Segmented Push Notification support with tag expressions

In August we announced the General Availability of Windows Azure Notification Hubs - a powerful Mobile Push Notifications service that makes it easy to send high volume push notifications with low latency from any mobile app back-end.  Notification hubs can be used with any mobile app back-end (including ones built using our Mobile Services capability) and can also be used with back-ends that run in the cloud as well as on-premises.

Beginning with the initial release, Notification Hubs allowed developers to send personalized push notifications to both individual users as well as groups of users by interest, by associating their devices with tags representing the logical target of the notification. For example, by registering all devices of customers interested in a favorite MLB team with a corresponding tag, it is possible to broadcast one message to millions of Boston Red Sox fans and another message to millions of St. Louis Cardinals fans, each with a single API call.

New support for using tag expressions to enable advanced customer segmentation

With today’s release we are adding support for even more advanced customer targeting.  You can now identify customers that you want to send push notifications to by defining rich tag expressions. With tag expressions, you can now not only broadcast notifications to Boston Red Sox fans, but take that segmenting a step further and reach more granular segments. This opens up a variety of scenarios, for example:

- Offers based on multiple preferences—e.g. send a game day vegetarian special to users tagged as both a Boston Red Sox fan AND a vegetarian
- Push content to multiple segments in a single message—e.g. rain delay information only to users who are tagged as either a Boston Red Sox fan OR a St. Louis Cardinals fan
- Avoid presenting subsets of a segment with irrelevant content—e.g. season ticket availability reminder to users who are tagged as a Boston Red Sox fan but NOT also a season ticket holder

To illustrate with code, consider a restaurant chain app that sends an offer related to a Red Sox vs Cardinals game for users in Boston. Devices can be tagged by your app with location tags (e.g. “Loc:Boston”) and interest tags (e.g. “Follows:RedSox”, “Follows:Cardinals”), and then a notification can be sent by your back-end to “(Follows:RedSox || Follows:Cardinals) && Loc:Boston” in order to deliver an offer to all devices in Boston that follow either the RedSox or the Cardinals. This can be done directly in your server backend send logic using the code below:

var notification = new WindowsNotification(messagePayload);
hub.SendNotificationAsync(notification, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston");

In your expressions you can use all Boolean operators: AND (&&), OR (||), and NOT (!).  Some other cool use cases for tag expressions that are now supported include:

- Social: To “all my group except me” - group:id && !user:id
- Events: Touchdown event is sent to everybody following either team or any of the players involved in the action: Followteam:A || Followteam:B || followplayer:1 || followplayer:2 …
- Hours: Send notifications at specific times. E.g. Tag devices with time zone and when it is 12pm in Seattle send to: GMT8 && follows:thaifood
- Versions and platforms: Send a reminder to people still using your first version for Android - version:1.0 && platform:Android

For help on getting started with Notification Hubs, visit the Notification Hub documentation center.  Then download the latest NuGet package (or use the Notification Hubs REST APIs directly) to start sending push notifications using tag expressions.  They are really powerful and enable a bunch of great new scenarios.
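For tag expressions to have something to match against, each device registration needs to carry the tags that describe it. Here is a rough sketch of how a back-end might register a Windows Store app’s push channel with tags – this assumes the Microsoft.ServiceBus NuGet package, and the connection string and hub name are placeholders for your own values:

using System.Threading.Tasks;
using Microsoft.ServiceBus.Notifications;

class DeviceRegistration
{
    // Registers the device's channel URI together with the tags used above
    // (location and team preference), so that expressions like
    // "(Follows:RedSox || Follows:Cardinals) && Loc:Boston" can match it.
    static async Task RegisterAsync(string channelUri)
    {
        var hub = NotificationHubClient.CreateClientFromConnectionString(
            "Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=DefaultFullSharedAccessSignature;SharedAccessKey=<your key>",
            "myhub");

        await hub.CreateWindowsNativeRegistrationAsync(
            channelUri, new[] { "Loc:Boston", "Follows:RedSox" });
    }
}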
TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services

With today’s Windows Azure release we are making it really easy to enable continuous delivery support with Windows Azure and Team Foundation Services.  Team Foundation Services is a cloud-based offering from Microsoft that provides integrated source control (with both TFS and Git support), build server, test execution, collaboration tools, and agile planning support.  It makes it really easy to set up a team project (complete with automated builds and test runners) in the cloud, and it has really rich integration with Visual Studio.

With today’s Windows Azure release it is now really easy to enable continuous delivery support with both TFS and Git based repositories hosted using Team Foundation Services.  This enables a workflow where when code is checked in, built successfully on an automated build server, and all tests pass on it – I can automatically have the app deployed on Windows Azure with zero manual intervention or work required.

The below screen-shots demonstrate how to quickly set up a continuous delivery workflow to Windows Azure with a Git-based ASP.NET MVC project hosted using Team Foundation Services.

Enabling Continuous Delivery to Windows Azure with Team Foundation Services

The project I’m going to enable continuous delivery with is a simple ASP.NET MVC project whose source code I’m hosting using Team Foundation Services.
I did this by creating a “SimpleContinuousDeploymentTest” repository there using Git – and then used the new built-in Git tooling support within Visual Studio 2013 to push the source code to it.  Below is a screen-shot of the Git repository hosted within Team Foundation Services:

I can access the repository within Visual Studio 2013 and easily make commits with it (as well as branch, merge and do other tasks).  Using VS 2013 I can also set up automated builds to take place in the cloud using Team Foundation Services every time someone checks in code to the repository:

The cool thing about this is that I don’t have to buy or rent my own build server – Team Foundation Services automatically maintains its own build server farm and can automatically queue up a build for me (for free) every time someone checks in code using the above settings.  This build server (and automated testing) support now works with both TFS and Git based source control repositories.

Connecting a Team Foundation Services project to Windows Azure

Once I have a source repository hosted in Team Foundation Services with Automated Builds and Testing set up, I can then go even further and set it up so that it will be automatically deployed to Windows Azure when a source code commit is made to the repository (assuming the Build + Tests pass).  Enabling this is now really easy.

To set this up with a Windows Azure Web Site simply use the New->Compute->Web Site->Custom Create command inside the Windows Azure Management Portal.  This will create a dialog like below.  I gave the web site a name and then made sure the “Publish from source control” checkbox was selected:

When we click next we’ll be prompted for the location of the source repository.  We’ll select “Team Foundation Services”:

Once we do this we’ll be prompted for our Team Foundation Services account that our source repository is hosted under (in this case my TFS account is “scottguthrie”):

When we click the “Authorize Now” button we’ll be prompted to give Windows Azure permissions to connect to the Team Foundation Services account.  Once we do this we’ll be prompted to pick the source repository we want to connect to.  Starting with today’s Windows Azure release you can now connect to both TFS and Git based source repositories.  This new support allows me to connect to the “SimpleContinuousDeploymentTest” repository we created earlier:

Clicking the finish button will then create the Web Site with the continuous delivery hooks set up with Team Foundation Services.  Now every time someone pushes source code to the repository in Team Foundation Services, it will kick off an automated build, run all of the unit tests in the solution, and if they pass, the app will be automatically deployed to our Web Site in Windows Azure.  You can monitor the history and status of these automated deployments using the Deployments tab within the Web Site:

This enables a really slick continuous delivery workflow, and enables you to build and deploy apps in a really nice way.

Developer Analytics: New Relic support for Web Sites + Mobile Services

With today’s Windows Azure release we are making it really easy to enable Developer Analytics and Monitoring support with both Windows Azure Web Sites and Windows Azure Mobile Services.  We are partnering with New Relic, who provide a great dev analytics and app performance monitoring offering, to enable this - and we have updated the Windows Azure Management Portal to make it really easy to configure.
Enabling New Relic with a Windows Azure Web Site

Enabling New Relic support with a Windows Azure Web Site is now really easy.  Simply navigate to the Configure tab of a Web Site and scroll down to the “developer analytics” section that is now within it:

Clicking the “add-on” button will display some additional UI.  If you don’t already have a New Relic subscription, you can click the “view windows azure store” button to obtain a subscription (note: New Relic has a perpetually free tier so you can enable it even without paying anything):

Clicking the “view windows azure store” button will launch the integrated Windows Azure Store experience we have within the Windows Azure Management Portal.  You can use this to browse from a variety of great add-on services – including New Relic:

Select “New Relic” within the dialog above, then click the next button, and you’ll be able to choose which type of New Relic subscription you wish to purchase.  For this demo we’ll simply select the “Free Standard Version” – which does not cost anything and can be used forever:

Once we’ve signed up for our New Relic subscription and added it to our Windows Azure account, we can go back to the Web Site’s configuration tab and choose to use the New Relic add-on with our Windows Azure Web Site.  We can do this by simply selecting it from the “add-on” dropdown (it is automatically populated within it once we have a New Relic subscription in our account):

Clicking the “Save” button will then cause the Windows Azure Management Portal to automatically populate all of the needed New Relic configuration settings to our Web Site:

Deploying the New Relic Agent as part of a Web Site

The final step to enable developer analytics using New Relic is to add the New Relic runtime agent to our web app.  We can do this within Visual Studio by right-clicking on our web project and selecting the “Manage NuGet Packages” context menu:

This will bring up the NuGet package manager.  You can search for “New Relic” within it to find the New Relic agent.  Note that there is both a 32-bit and 64-bit edition of it – make sure to install the version that matches how your Web Site is running within Windows Azure (note: you can configure your Web Site to run in either 32-bit or 64-bit mode using the Web Site’s “Configuration” tab within the Windows Azure Management Portal):

Once we install the NuGet package we are all set to go.  We’ll simply re-publish the web site again to Windows Azure and New Relic will now automatically start monitoring the application.

Monitoring a Web Site using New Relic

Now that the application has developer analytics support with New Relic enabled, we can launch the New Relic monitoring portal to start monitoring the health of it.  We can do this by clicking on the “Add Ons” tab in the left-hand side of the Windows Azure Management Portal.  Then select the New Relic add-on we signed up for within it.  The Windows Azure Management Portal will provide some default information about the add-on when we do this.  Clicking the “Manage” button in the tray at the bottom will launch a new browser tab and single-sign us into the New Relic monitoring portal associated with our account:

When we do this a new browser tab will launch with the New Relic admin tool loaded within it:

We can now see insights into how our app is performing – without having to have written a single line of monitoring code.
The New Relic service provides a ton of great built-in monitoring features allowing us to quickly see:

- Performance times (including browser rendering speed) for the overall site and individual pages.  You can optionally set alert thresholds to trigger if the speed does not meet a threshold you specify.
- Information about where in the world your customers are hitting the site from (and how performance varies by region)
- Details on the latency performance of external services your web apps are using (for example: SQL, Storage, Twitter, etc)
- Error information including call stack details for exceptions that have occurred at runtime
- SQL Server profiling information – including which queries executed against your database and what their performance was
- And a whole bunch more…

The cool thing about New Relic is that you don’t need to write monitoring code within your application to get all of the above reports (plus a lot more).  The New Relic agent automatically enables the CLR profiler within applications and automatically captures the information necessary to produce these reports.  This makes it super easy to get started and immediately have a rich developer analytics view for your solutions with very little effort.

If you haven’t tried New Relic out yet with Windows Azure I recommend you do so – I think you’ll find it helps you build even better cloud applications.  Following the above steps will help you get started and deliver you a really good application monitoring solution in only minutes.

Service Bus: Support for partitioned queues and topics

With today’s release, we are enabling support within Service Bus for partitioned queues and topics. Enabling partitioning enables you to achieve a higher message throughput and better availability from your queues and topics.  Higher message throughput is achieved by implementing multiple message brokers for each partitioned queue and topic.  The multiple messaging stores also provide higher availability.

You can create a partitioned queue or topic by simply checking the Enable Partitioning option in the custom create wizard for a Queue or Topic:

Read this article to learn more about partitioned queues and topics and how to take advantage of them today.
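If you prefer to create partitioned entities from code rather than the portal, the Service Bus .NET client supports the same option. Here is a rough sketch, assuming the Windows Azure Service Bus NuGet package and a placeholder connection string and queue name:

using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

class PartitionedQueueSetup
{
    static void Main()
    {
        // Hypothetical connection string – use the one for your own namespace.
        var namespaceManager = NamespaceManager.CreateFromConnectionString(
            "Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=<your key>");

        // EnablePartitioning spreads the queue across multiple message brokers
        // and messaging stores, which is what provides the extra throughput
        // and availability described above.
        var description = new QueueDescription("orders") { EnablePartitioning = true };

        if (!namespaceManager.QueueExists("orders"))
        {
            namespaceManager.CreateQueue(description);
        }

        // Topics work the same way via TopicDescription.EnablePartitioning.
    }
}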
Billing: New Billing Alert Service

Today’s Windows Azure update enables a new Billing Alert Service Preview that enables you to get proactive email notifications when your Windows Azure bill goes above a certain monetary threshold that you configure.  This makes it easier to manage your bill and avoid potential surprises at the end of the month.

With the Billing Alert Service Preview, you can now create email alerts to monitor and manage your monetary credits or your current bill total.  To set up an alert, first sign up for the free Billing Alert Service Preview.  Then visit the account management page, click on a subscription you have set up, and then navigate to the new Alerts tab that is available:

The alerts tab allows you to set up email alerts that will be sent automatically once a certain threshold is hit.  For example, by clicking the “add alert” button above I can set up a rule to send myself email anytime my Windows Azure bill goes above $100 for the month:

The Billing Alert Service will evolve to support additional aspects of your bill as well as support multiple forms of alerts such as SMS.  Try out the new Billing Alert Service Preview today and give us feedback.

Summary

Today’s Windows Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier.

If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Simple way of converting server side objects into client side using JSON serialization for asp.net websites

    - by anil.kasalanati
Introduction:-

With the growth of Web 2.0 and the need for a faster user experience, the spotlight has shifted onto JavaScript-based applications built using the REST pattern or the ASP.NET AJAX PageRequestManager. And when we are working with JavaScript, wouldn’t it be much better if we could create objects in an OOAD way and easily push them to the client side? Following are the reasons why you would push server side objects onto the client side:

- Easy availability of the complex object.
- Use the C# compiler and rich IntelliSense to create and maintain the objects but use them in the JavaScript. You could run code analysis etc.
- Reduce the number of calls we make to the server side by loading data on the pageload.

I would like to explain the 3rd point because it proved to be highly beneficial to me when I was fixing the performance issues of a major website. There could be a scenario wherein you are making multiple AJAX-based WebRequestManager calls in order to get the same response in a single page. This happens in the case of a widget-based framework when all the widgets are independent but they need some common information available in the framework to load their data. So instead of making n multiple calls we could load the data needed during pageload. The above picture shows the scenario wherein all the widgets need the common information and then call the GetData webservice on the server side. Of course the result can be cached on the client side, but a better solution would be to avoid the call completely. In order to do that we need to JSON-serialize the content and send it in the DOM.

Example:-

I have developed a simple application to demonstrate the idea and I will explain it in detail here. The class called SimpleClass would be sent as serialized JSON to the client side, and it inherits from a base class which has the implementation of the GetJSONString method. You can create a single base class, and all the objects which need to be pushed to the client side can inherit from that class. The important thing to note is that the class should be annotated with the DataContract attribute and the members you want to send should have the DataMember attribute. This is needed by the .NET DataContractSerializer, which follows an opt-in model: if you want to send an attribute to the client side then you need to annotate it with the DataMember attribute. So if I didn’t want to send the Result I would simply remove its DataMember attribute. This is default WCF/.NET 3.5 stuff, but it provides the flexibility of having a full-fledged object on the server side while sending a smaller object to the client side. Sometimes you may hide some values due to security constraints. Another thing you will notice is that I have marked the class as Serializable so that it can be stored in the Session and used in webfarm deployment scenarios.
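The original post showed SimpleClass and its base class as screenshots, which don’t survive in this excerpt. Below is a rough reconstruction of what they might look like based on the description above – the base class name and the page wiring are assumptions, while ResultText, Result, and GetJSONString come from the article:

using System;
using System.IO;
using System.Runtime.Serialization;
using System.Runtime.Serialization.Json;
using System.Text;

[Serializable]
[DataContract]
public abstract class JsonSerializableBase
{
    // Serializes the current instance to a JSON string using the default
    // DataContractJsonSerializer (opt-in: only [DataMember] members are sent).
    public string GetJSONString()
    {
        var serializer = new DataContractJsonSerializer(GetType());
        using (var stream = new MemoryStream())
        {
            serializer.WriteObject(stream, this);
            return Encoding.UTF8.GetString(stream.ToArray());
        }
    }
}

[Serializable]
[DataContract]
public class SimpleClass : JsonSerializableBase
{
    [DataMember]
    public string ResultText { get; set; }

    // Remove this DataMember attribute to keep Result on the server side only.
    [DataMember]
    public int Result { get; set; }
}

// In the page code-behind, the serialized string is exposed to the markup:
public partial class JsonTest : System.Web.UI.Page
{
    protected string SimpleClassJSON;

    protected void Page_Load(object sender, EventArgs e)
    {
        var data = new SimpleClass { ResultText = "Hello", Result = 42 };
        SimpleClassJSON = data.GetJSONString();
    }
}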
The base class implementation uses the default DataContractJsonSerializer; for more information or customization refer to the following blogs –
http://softcero.blogspot.com/2010/03/optimizing-net-json-serializing-and-ii.html
http://weblogs.asp.net/gunnarpeipman/archive/2010/12/28/asp-net-serializing-and-deserializing-json-objects.aspx

The next part is pretty simple: I just need to inject this object into the aspx page. In the aspx markup I have the following line –

<script type="text/javascript">
var data = (<%=SimpleClassJSON %>);
alert(data.ResultText);
</script>

This will output the content as JSON into the variable data, and the script can be placed anywhere in the DOM. You can verify it by checking data in the Firebug console.

Design Considerations:-

- If you have a lot of JavaScript then you need to think about using Script#, which lets you write JavaScript in C#. Refer to Nikhil’s blog – http://projects.nikhilk.net/ScriptSharp
- Ensure that you take security into consideration while exposing server side objects to the client side. I have seen applications expose passwords and secret keys, and that is not a good practice.

The application can be tested using the following url – http://techconsulting.vpscustomer.com/Samples/JsonTest.aspx
The source code is available at http://techconsulting.vpscustomer.com/Source/HistoryTest.zip

    Read the article

  • CodePlex Daily Summary for Sunday, June 06, 2010

CodePlex Daily Summary for Sunday, June 06, 2010

New Projects

Active Worlds Dot Net Wrapper (Based on AwSdk): Active Worlds Dot Net Wrapper (Based on AwSdk)
Combina: Smart calculator for large combinatorial calculations.
Concurrent Cache: ConcurrentCache is a smart output cache library extending OutputCacheProvider. It consists of in memory, cache files and compressed files modes and...
Decay: Personal use. For learning
FazTalk: FazTalk is a suite of tools and products that are designed to improve collaboration and workflow interactions. FazTalk takes an innovative approach...
grouped: A peer to peer text editor, written in C# [update] I wrote this little thing a while back and even forgot about it, I stopped coding for more tha...
HitchARide MVC 2 Sample: An MVC 2 sample written as part of the Microsoft 2010 London Web Camp based on the wireframes at http://schematics.earthware.co.uk/hitcharide. Not...
Inspiration.Web: Description: A simple (but entertaining) ASP.NET MVC (C#) project to suggest random code names for projects. Intended audience: People who ne...
NetFileBrowser - TinyMCE: tinyMCE file plugin with asp.net
Oil Slick Live Feeds: All live feeds from BP's Remotely Operated Vehicles
Particle Lexer: Parser and Tokenizer library
Pdf Form Tool: Pdf Form Tool demonstrates how the iTextSharp library could be used to fill PDF forms. The input data is provided as a csv file. The application ...
Planning Poker Windows Mobile 7: This project is a Planning Poker application for Windows Mobile 7 (and later?).
RandomRat: RandomRat is a program for generating random sets that meet specific criteria
Science.NET: A scientific library written in managed code. It supports advanced mathematics (algebra system, sequences, statistics, combinatorics...), data stru...
Spider Compiler: Spider Compiler parses the input of a spider programming source file and compiles it (with help of csc.exe; the C#-Compiler) to an exe-file. This p...
Sununpro: sunun's project for study by team foundation server.
TFS Buddy: An application that manipulates your I-Buddy whenever something happens in your Team Foundation Server
ValveSoft: ValveSys
WiiMote Physics: WiiMote Physics is an application that allows you to retrieve data from your WiiMote or Balance Board and display it in real-time. It has a number...
WinGet: WinGet is a download manager for Windows. You can drag links onto the WinGet Widget and it will download a file on the selected folder. It is dev...
XProject.NET: A project management and team collaboration platform

New Releases

.NET DiscUtils: Version 0.9 Preview: This release is still under development. New features available in this release: Support for accessing short file names stored in WIM files Incr...
Active Worlds Dot Net Wrapper (Based on AwSdk): Active World Dot Net Wrapper (0.0.1.85): Based on AwSdk 85
AwSdk UnOfficial Wrapper Howto Use: C# using AwWrapper; VB.Net Import AwWrapper
AjaxControlToolkit additional extenders: ZhecheAjaxControls for .NET3.5: Used AJAX Control Toolkit Release Notes - April 12th 2010 Release Version 40412. Fixed deadlock in long operation canceling Some other fixes
AnyCAD: AnyCAD.v1.2.ENU.Install: http://www.anycad.net Parametric Modeling *3D: Sphere, Box, Cylinder, Cone •2D: Line, Rectangle, Arc, Arch, Circle, Spline, Polygon •Feature: Extr...
Community Forums NNTP bridge: Community Forums NNTP Bridge V29: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. This release has ad...
Concurrent Cache: 1.0: This is the first release for the ConcurrentCache library.
Configuration Section Designer: 2.0.0: This is the first Beta Release for VS 2010 support
Doxygen Browser Addin for VS: Doxygen Browser Addin - v0.1.4 Beta: Support for Visual Studio 2010 improved the logging of errors (Event Logs) Fixed some issues/bugs Hot key for navigation "Control + F1, Contr...
Folder Bookmarks: Folder Bookmarks 1.6.2: The latest version of Folder Bookmarks (1.6.2), with new UI changes. Once you have extracted the file, do not delete any files/folders. They are n...
HERB.IQ: Beta 0.1 Source code release 5: Beta 0.1 Source code release 5
Inspiration.Web: Initial release (deployment package): Initial release (deployment package)
NetFileBrowser - TinyMCE: Demo Project: Demo Project
NetFileBrowser - TinyMCE: NetFileBrowser: NetImageBrowser
NLog - Advanced .NET Logging: Nightly Build 2010.06.05.001: Changes since the last build: 2010-06-04 23:29:42 Jarek Kowalski Massive update to documentation generator. 2010-05-28 15:41:42 Jarek Kowalski upda...
Oil Slick Live Feeds: Oil Slick Live Feeds 0.1: A the first release, with feeds from the MS Skandi, Boa Deep C, Enterprise and Q4000. They are live streams from the ROV's monitoring the damaged...
Pcap.Net: Pcap.Net 0.7.0 (46671): Pcap.Net - June 2010 Release Pcap.Net is a .NET wrapper for WinPcap written in C++/CLI and C#. It Features almost all WinPcap features and includes...
sqwarea: Sqwarea 0.0.289.0 (alpha): API support
TFS Buddy: TFS Buddy First release (Beta 1): This is the first release of the TFS Buddy.
Visual Studio DSite: Looping Animation (Visual C++ 2008): A solider firing a bullet that loops and displays an explosion everytime it hits the edge of the form.
WiiMote Physics: WiiMote Physics v4.0: v4.0.0.1 Recovered from existing compiled assembly after hard drive failure Now requires .NET 4.0 (it seems to make it run faster) Added new c...
WinGet: Alpha 1: First Alpha of WinGet. It includes all the planned features but it contains many bugs. Packaged using 7-Zip and ClickOnce.

Most Popular Projects

WBFS Manager
Rawr
AJAX Control Toolkit
Microsoft SQL Server Product Samples: Database
Silverlight Toolkit
Windows Presentation Foundation (WPF)
PHPExcel
patterns & practices – Enterprise Library
Microsoft SQL Server Community & Samples
ASP.NET

Most Active Projects

Community Forums NNTP bridge
Rawr
patterns & practices – Enterprise Library
GMap.NET - Great Maps for Windows Forms & Presentation
N2 CMS
Ionics Isapi Rewrite Filter
StyleCop
smark C# Library
Farseer Physics Engine
patterns & practices: Composite WPF and Silverlight

    Read the article

  • Beginner’s Guide to Flock, the Social Media Browser

    - by Asian Angel
Are you wanting a browser that can work as a social hub from the first moment that you start it up? If you love the idea of a browser that is ready to go out of the box then join us as we look at Flock.

During the Install Process

When you are installing Flock there are two install windows that you should watch for. The first one lets you choose between the “Express Setup & Custom Setup”. We recommend the “Custom Setup”. Once you have selected the “Custom Setup” you can choose which of the following options will be enabled. Notice the “anonymous usage statistics” option at the bottom…you can choose to leave this enabled or disable it based on your comfort level.

The First Look

When you start Flock up for the first time it will open with three tabs. All three are of interest…especially if this is your first time using Flock. With the first tab you can jump right into “logging in/activating” favorite social services within Flock. This page is set to display each time that you open Flock unless you deselect the option in the lower left corner.

The second tab provides a very nice overview of Flock and its built-in social management power. The third and final page can be considered a “Personal Page”. You can make some changes to the content displayed for quick and easy access and/or monitoring “Twitter Search, Favorite Feeds, Favorite Media, Friend Activity, & Favorite Sites”. Use the “Widget Menu” in the upper left corner to select the “Personal Page Components” that you would like to use. In the upper right corner there is a built-in “Search Bar” and buttons for “Posting to Your Blog & Uploading Media”. To help personalize the “My World Page” just a bit more you can even change the text to your name or whatever best suits your needs.

The Flock Toolbar

The “Flock Toolbar” is full of social account management goodness. In order from left to right the buttons are: My World (Homepage), Open People Sidebar, Open Media Bar, Open Feeds Sidebar, Webmail, Open Favorites Sidebar, Open Accounts and Services Sidebar, Open Web Clipboard Sidebar, Open Blog Editor, & Open Photo Uploader. The buttons will be “highlighted” with a blue background to help indicate which area you are in.

The first area will display a listing of people that you are watching/following at the services shown here. Clicking on the “Media Bar Button” will display the following “Media Slider Bar” above your “Tab Bar”. Notice that there is a built-in “Search Bar” on the right side. Any photos, etc. clicked on will be opened in the currently focused tab below the “Media Bar”. Here is a listing of the “Media Streams” available for viewing.

By default Flock will come with a small selection of pre-subscribed RSS Feeds. You can easily unsubscribe, rearrange, add custom folders, or add non-categorized feeds as desired. RSS Feeds subscribed to here can be viewed combined together as a single feed (clickable links) in the “My World Page”, or can be viewed individually in a new tab. Very nice!

Next on the “Flock Toolbar” is the “Webmail Button”. You can set up access to your favorite “Yahoo!, Gmail, & AOL Mail” accounts from here. The “Favorites Sidebar” combines your “Browser History & Bookmarks” into one convenient location. The “Accounts and Services Sidebar” gives you quick and easy access to get logged into your favorite social accounts. Clicking on any of the links will open that particular service’s login page in a new tab. Want to store items such as photos, links, and text to add into a blog post or tweet later on?
Just drag and drop them into the “Web Clipboard Sidebar” for later access. Clicking on the “Blog Editor Button” will open up a separate blogging window to compose your posts in. If you have not logged into or set up an account yet in Flock you will see the following message window. The “Blogging Window”…nice, simple, and straightforward.

If you are not already logged into your photo account(s) then you will see the following message window when you click on the “Photo Uploader Button”. Clicking “OK” will open the “Accounts and Services Sidebar” with compatible photo services highlighted in a light yellow color. Log in to your favorite service to start uploading all those great images.

After Setting Up

Here is what our browser looked like after setting up some of our favorite services. The Twitter feed is certainly looking nice and easy to read through… Some tweaking in the “RSS Feeds Sidebar” makes for a perfect reading experience. Keeping up with our e-mail is certainly easy to do too. A look back at the “Accounts and Services Sidebar” shows that all of our accounts are actively logged in (green dot on the right side). Going back to our “My World Page” you can see how nice everything looks for monitoring our “Friend Activity & Favorite Feeds”. Moving on to regular browsing everything is looking very good… Flock is a perfect choice for anyone wanting a browser and social hub all built into a single app.

Conclusion

Anyone who loves keeping up with their favorite social services while browsing will find using Flock to be a wonderful experience. You literally get the best of both worlds with this browser.

Links

Download Flock
The Official Flock Extensions Homepage
The Official Flock Toolbar Homepage

    Read the article

  • RIF PRD: Presentation syntax issues

    - by Charles Young
Over Christmas I got to play a bit with the W3C RIF PRD and came across a few issues which I thought I would record for posterity. Specifically, I was working on a grammar for the presentation syntax using a GLR grammar parser tool (I was using the current CTP of ‘M’ (MGrammar) and Intellipad – I do so hope the MS guys don’t kill off M and Intellipad now they have dropped the other parts of SQL Server Modelling). I realise that the presentation syntax is non-normative and that any issues with it do not therefore compromise the standard. However, presentation syntax is useful in its own right, and it would be great to iron out any issues in a future revision of the standard.

The main issues are actually not to do with the grammar at all, but rather with the ‘running example’ in the RIF PRD recommendation. I started with the code provided in Example 9.1. There are several discrepancies when compared with the EBNF rules documented in the standard. Broadly the problems can be categorised as follows:

·      Parenthesis mismatch – the wrong number of parentheses are used in various places. For example, in GoldRule, the RHS of the rule (the ‘Then’) is nested in the LHS (the ‘If’). In NewCustomerAndWidgetRule, the RHS is orphaned from the LHS. Together with additional incorrect parentheses, this leads to the orphaning of UnknownStatusRule from the entire Document.

·      Invalid use of parentheses in ‘Forall’ constructs. Parentheses should not be used to enclose formulae. Removing the invalid parentheses gave me a feeling of inconsistency when comparing formulae in Forall to formulae in If. The use of parentheses is not actually inconsistent in these two contexts, but in an If construct it ‘feels’ as if you are enclosing formulae in parentheses in a LISP-like fashion. In reality, the parentheses are simply being used to group subordinate syntax elements. The fact that an If construct can contain only a single formula as an immediate child adds to this feeling of inconsistency.

·      Invalid representation of compact URIs (CURIEs) in the context of Frame productions. In several places the URIs are not qualified with a namespace prefix (‘ex1:’). This conflicts with the definition of CURIEs in the RIF Datatypes and Built-Ins 1.0 document. Here are the productions:

CURIE          ::= PNAME_LN
                 | PNAME_NS
PNAME_LN       ::= PNAME_NS PN_LOCAL
PNAME_NS       ::= PN_PREFIX? ':'
PN_LOCAL       ::= ( PN_CHARS_U | [0-9] ) ((PN_CHARS|'.')* PN_CHARS)?
PN_CHARS       ::= PN_CHARS_U
                 | '-' | [0-9] | #x00B7
                 | [#x0300-#x036F] | [#x203F-#x2040]
PN_CHARS_U     ::= PN_CHARS_BASE
                 | '_'
PN_CHARS_BASE ::= [A-Z] | [a-z] | [#x00C0-#x00D6] | [#x00D8-#x00F6]
                 | [#x00F8-#x02FF] | [#x0370-#x037D] | [#x037F-#x1FFF]
                 | [#x200C-#x200D] | [#x2070-#x218F] | [#x2C00-#x2FEF]
                 | [#x3001-#xD7FF] | [#xF900-#xFDCF] | [#xFDF0-#xFFFD]
                 | [#x10000-#xEFFFF]
PN_PREFIX      ::= PN_CHARS_BASE ((PN_CHARS|'.')* PN_CHARS)?

The more I look at CURIEs, the more my head hurts! The RIF specification allows prefixes and colons without local names, which surprised me. However, the CURIE Syntax 1.0 working group note specifically states that this form is supported…and then promptly provides a syntactic definition that seems to preclude it! On (much) deeper inspection, it appears that ‘ex1:’ (for example) is allowed, but would really represent a ‘fragment’ of the ‘reference’, rather than a prefix! Ouch!
This is so completely ambiguous that it surely calls into question the whole CURIE specification. In any case, RIF does not allow local names without a prefix.

·      Missing ‘External’ specifiers for built-in functions and predicates. The EBNF specification enforces this for terms within frames, but does not appear to enforce (what I believe is) the correct use of External on built-in predicates. In any case, the running example only specifies ‘External’ once, on the predicate in UnknownStatusRule. External() is required in several other places.

·      The List used on the LHS of UnknownStatusRule is comma-delimited. This is not supported by the EBNF definition. Similarly, the argument list of pred:list-contains is illegally comma-delimited.

·      Unnecessary use of conjunction around a single formula in DiscountRule. This is strictly legal in the EBNF, but redundant.

All the above issues concern the presentation syntax used in the running example. There are a few minor issues with the grammar itself. Note that Michael Kiefer stated in his paper “Rule Interchange Format: The Framework” that: “The presentation syntax of RIF … is an abstract syntax and, as such, it omits certain details that might be important for unambiguous parsing.”

·      The grammar cannot differentiate unambiguously between strategies and priorities on groups. A processor is forced to resolve this by detecting the use of IRIs and integers. This could easily be fixed in the grammar.

·      The grammar cannot unambiguously parse the ‘->’ operator in frames. Specifically, ‘-’ characters are allowed in PN_LOCAL names and hence a parser cannot determine if ‘status->’ is (‘status’ ‘->’) or (‘status-’ ‘>’). One way to fix this is to amend the PN_LOCAL production as follows:

PN_LOCAL ::= ( PN_CHARS_U | [0-9] ) ((PN_CHARS|'.')* ((PN_CHARS)-('-')))?

However, unilaterally changing the definition of this production, which is defined in the SPARQL Query Language for RDF specification, makes me uncomfortable.

·      I assume that the presentation syntax is case-sensitive. I couldn’t find this stated anywhere in the documentation, but function/predicate names do appear to be documented as being case-sensitive.

·      The EBNF does not specify whitespace handling. A couple of productions (RULE and ACTION_BLOCK) are crafted to enforce the use of whitespace. This is not necessary, seems inconsistent with the rest of the specification, and can cause parsing issues. In addition, the Const production exhibits whitespace issues. The intention may have been to disallow the use of whitespace around ‘^^’, but any direct implementation of the EBNF will probably allow whitespace between ‘^^’ and the SYMSPACE.

Of course, I am being a little nit-picking about all this. On the whole, the EBNF translated very smoothly and directly to ‘M’ (MGrammar) and proved to be fairly complete. I have encountered far worse issues when translating other EBNF specifications into usable grammars. I can’t imagine there would be any difficulty in implementing the same grammar in Antlr, COCO/R, gppg, XText, Bison, etc.

A general observation, which repeats a point made above, is that the use of parentheses in the presentation syntax can feel inconsistent and unintuitive. It isn’t actually inconsistent, but I think the presentation syntax could be improved by adopting braces, rather than parentheses, to delimit subordinate syntax elements in a similar way to so many programming languages.
The familiarity of braces would communicate the structure of the syntax more clearly to people like me.  If braces were adopted, parentheses could be retained around ‘var (frame | ‘new()’) constructs in action blocks. This use of parenthesis feels very LISP-like, and I think that this is my issue. It’s as if the presentation syntax represents the deformed love-child of LISP and C. In some places (specifically, action blocks), parenthesis is used in a LISP-like fashion. In other places it is used like braces in C. I find this quite confusing. Here is a corrected version of the running example (Example 9.1) in compliant presentation syntax: Document(    Prefix( ex1 <http://example.com/2009/prd2> )    (* ex1:CheckoutRuleset *)  Group rif:forwardChaining (     (* ex1:GoldRule *)    Group 10 (      Forall ?customer such that And(?customer # ex1:Customer                                     ?customer[ex1:status->"Silver"])        (Forall ?shoppingCart such that ?customer[ex1:shoppingCart->?shoppingCart]           (If Exists ?value (And(?shoppingCart[ex1:value->?value]                                  External(pred:numeric-greater-than-or-equal(?value 2000))))            Then Do(Modify(?customer[ex1:status->"Gold"])))))      (* ex1:DiscountRule *)    Group (      Forall ?customer such that ?customer # ex1:Customer        (If Or( ?customer[ex1:status->"Silver"]                ?customer[ex1:status->"Gold"])         Then Do ((?s ?customer[ex1:shoppingCart-> ?s])                  (?v ?s[ex1:value->?v])                  Modify(?s [ex1:value->External(func:numeric-multiply (?v 0.95))]))))      (* ex1:NewCustomerAndWidgetRule *)    Group (      Forall ?customer such that And(?customer # ex1:Customer                                     ?customer[ex1:status->"New"] )        (If Exists ?shoppingCart ?item                   (And(?customer[ex1:shoppingCart->?shoppingCart]                        ?shoppingCart[ex1:containsItem->?item]                        ?item # ex1:Widget ) )         Then Do( (?s ?customer[ex1:shoppingCart->?s])                  (?val ?s[ex1:value->?val])                  (?voucher ?customer[ex1:voucher->?voucher])                  Retract(?customer[ex1:voucher->?voucher])                  Retract(?voucher)                  Modify(?s[ex1:value->External(func:numeric-multiply(?val 0.90))]))))      (* ex1:UnknownStatusRule *)    Group (      Forall ?customer such that ?customer # ex1:Customer        (If Not(Exists ?status                       (And(?customer[ex1:status->?status]                            External(pred:list-contains(List("New" "Bronze" "Silver" "Gold") ?status)) )))         Then Do( Execute(act:print(External(func:concat("New customer: " ?customer))))                  Assert(?customer[ex1:status->"New"]))))  ) )   I hope that helps someone out there :-)

    Read the article

< Previous Page | 506 507 508 509 510 511 512 513 514 515 516 517  | Next Page >