Search Results

Search found 923 results on 37 pages for 'art'.

Page 14 of 37

  • The Need For Compelling and Optimized Content

    With the attention span of web surfers steadily decreasing, it pays to entice them within the first few seconds after they land on your website. You should learn the fine art of balancing keywords and readability so that your content attracts search engine spiders and human readers alike.

    Read the article

  • How to Make Your Website Appear First on Google

    There are simple ways to make your website come up on the first page of Google. In the next few paragraphs we are going to discuss those ways to show that this is not as hard as some people would have you believe. In fact, you can do it yourself if you practice what is presented here over and over again while giving yourself time to master the art.

    Read the article

  • SEO For Business

    The business world is firmly embracing the art of SEO like a long-lost, much-loved relative. Of course, SEO is another way of marketing a business. It is like traditional advertising, but at the same time it's totally different.

    Read the article

  • Popular Style Sheet Languages Of The Past And Present

    In the art of web design and development, style sheet languages such as CSS (Cascading Style Sheets) have become popular among many professionals. However, other than CSS, a number of style sheet langu... [Author: Margarette Mcbride - Web Design and Development - May 17, 2010]

    Read the article

  • Filtered Router Interface

    - by jviotti
    I'm having some problems with a Scientific-Atlanta DPR2320R2, specifically with the WiFi. A few months ago I changed its username and password, and now I can't remember them. So I tried cracking it with Hydra, but that made things worse: the web admin content rendered only partially and threw a lot of errors. I then reset the router. I found I was able to browse the web from an ethernet-connected PC. WiFi access is configured by registering each device's MAC address, and since the router was reset, the registered MAC addresses were lost. No device could connect to the WiFi; in fact, the devices do not even recognize the network. I tried pointing to 192.168.0.1 to re-establish the MACs, but I couldn't connect to the router's access point. I tried listing the hosts: $ nmap -sP 192.168.0.0/24 Starting Nmap 5.00 ( http://nmap.org ) at 2012-12-11 01:18 ART Host 192.168.0.1 is up (0.0018s latency). Host 192.168.0.11 is up (0.00025s latency). Nmap done: 256 IP addresses (2 hosts up) scanned in 59.62 seconds Then I checked that 192.168.0.1 was really up by sending pings. It responded to all of them. I quick-scanned the access point: $ nmap 192.168.0.1 Starting Nmap 5.00 ( http://nmap.org ) at 2012-12-11 01:08 ART Interesting ports on 192.168.0.1: Not shown: 999 closed ports PORT STATE SERVICE 80/tcp filtered http Nmap done: 1 IP address (1 host up) scanned in 6.73 seconds Look at the state of port 80: FILTERED. I'm pretty confused now. Any suggestion would be appreciated. Thanks in advance.

    Read the article

  • What to do with old laptop screens?

    - by Lord Torgamus
    This question is inspired by another SU question I came across earlier today: What to do with old hard drives? It made me think about two long-dead laptops I have with perfectly good screens still inside. One is a Dell Inspiron 5100 and the other is an Averatec E1200, but responses need not be geared towards those particular models' screens. Rules, based heavily on the original question's: Objectives and suggestions to keep in mind when you post an answer: Should showcase your geekiness, be plain ol' fun, serve a social purpose or benefit the community. Your answer need not be limited to only one screen. For a really good answer, I'll go out and buy additional leftover screens. Your answer need not be limited to one project per screen. If additional accessories need to be purchased, make sure they are common. Don't tell me to get a moon rock or something. The projects you suggest should serve a useful purpose; art is nice, but functional art is way better. Thanks in advance, folks. EDIT: Found another related question: Fun projects to do with an old 17" LCD monitor. EDIT 2: I, for one, am enjoying the new outpouring of creativity here. Best fifty bucks... I mean, rep points... I ever spent. EDIT 3: That does it. At the end of the week, there was a tie for most votes between the accepted answer and the game platform answer. The game platform answer was cooler, but less reasonable as a project to actually do; in other words, it was more moon rocky. Unfortunately, I think fencepost had the best comment on the topic, which is that displays on their own have no good interface. Thanks for playing, everyone!

    Read the article

  • is this correct use of jquery's document.ready?

    - by Haroldo
    The below file contains all the javascript for a page. Performance is the highest priority. Is this the most efficient way? Do all click/hover events need to to be inside the doc.ready? //DOCUMENT.READY EVENTS //--------------------------------------------------------------------------- $(function(){ // mark events as not loaded $('.event').data({ t1_loaded: false, t2_loaded: false, t3_loaded: false, art_req: false }); //mark no events have been clicked $('#wrap_right').data('first_click_made', false); // cal-block event click $('#cal_blocks div.event, #main_search div.event').live('click', function(){ var id = $(this).attr('id').split('e')[1]; event_click(id); }); // jq history $.historyInit(function(hash){ if(hash) { event_click(hash); } }); // search $('#search_input').typeWatch ({ callback: function(){ var q = $('#search_input').attr('value'); search(q); }, wait : 350, highlight : false, captureLength : 2 }); $('#search_input, #main_search div.close').live('click',function(){ $(this).attr("value",""); reset_srch_res(); }); $('#main_search').easydrag(); $('a.dialog').colorbox(); //TAB CLICK -> AJAX LOAD TAB $('#wrap_right .rs_tabs li').live('click', function(){ $this = $(this); var id = $('#wrap_right').data('curr_event'); var tab = parseInt($this.attr('rel')); //hide other tabs $('#rs_'+id+' .tab_body').hide(); //mark current(clicked) tab $('#rs_'+id+' .rs_tabs li').removeClass('curr_tab'); $this.addClass('curr_tab'); //is the tab already loaded and hidden? var loaded = $('#e'+id).data('t'+tab+'_loaded'); //console.log('id: '+id+', tab: '+tab+', loaded: '+loaded); if(loaded === true) { $('#rs_'+id+' .tab'+tab).show(); if (tab == 2) { art_requested(id); } } else { //ajax load in the tab $('#rs_'+id+' .tab'+tab).load('index_files/tab'+tab+'.php?id='+id, function(){ //after load callback if (tab == 1) { $('#rs_' + id + ' .frame').delay(600).fadeIn(600) }; if (tab == 2) { art_requested(id); } }); //mark tab as loaded $('#e'+id).data('t'+tab+'_loaded', true); //fade in current tab $('#rs_'+id+' .tab'+tab).show(); } }) }); // LOAD RS FUNCTIONS //--------------------------------------------------------------------------- function event_click(id){ window.location.hash = id; //mark current event $('#wrap_right').data('curr_event', id); //hide any other events if($('#wrap_right').data('first_click_made') === true) { $('#wrap_right .event_rs').hide(); } //frame loaded before? 
var loaded = $('#e'+id).data('t1_loaded'); if(loaded === true) { $('#rs_'+id).show(); } else { create_frame(id); } //open/load the first tab $('#rs_'+id+' .t1').click(); $('#wrap_right').data('first_click_made', true); $('#cal_blocks').scrollTo('#e'+id, 1000, {offset: {top:-220, left:0}}); } function create_frame(id){ var art = ents[id].art; var ven = ents[id].ven; var type = ents[id].gig_club; //select colours for tabs if(type == 1){ var label = 'gig';} else if(type == 2){ var label = 'club';} else if(type == 0){ var label = 'other';} //create rs container for this event var frame = '<div id="rs_'+id+'" class="event_rs">'; frame += '<div class="title_strip"></div>'; frame += '<div class="rs_tabs"><ul class="'+label+'"><li class="t1 nav_tab1 curr_tab hand" rel="1"></li>'; if(art == 1){frame += '<li class="t2 nav_tab2 hand" rel="2"></li>';} if(ven == 1){frame += '<li class="t3 nav_tab2 hand" rel="3"></li>';} frame += '</ul></div>'; frame += '<div id="rs_content"><div class="tab_body tab1" ></div>'; if(art == 1){frame += '<div class="tab_body tab2"></div>';} if(ven == 1){frame += '<div class="tab_body tab3"></div>';} frame += '</div>'; frame += '</div>'; $('#wrap_right').append(frame); //mark current event in cal-blocks $('#cal_blocks .event_sel').removeClass('event_sel'); $('#e'+id).addClass('event_sel'); if($('#wrap_right').data('first_click_made') === false) { $('#wrap_right').delay(500).slideDown(); $('#rs_'+id+' .rs_tabs').delay(800).fadeIn(); } }; // FUNCTIONS //--------------------------------------------------------------------------- //check to see if an artist has been requested function art_requested(id){ var art_req = $('#e'+id).data('art_req'); if(art_req !== false) { //alert(art_req); $('#art_'+art_req).click(); } } //scroll artist panes smoothly (scroll bars cause glitches otherwise) function before (){ if(!IE){$('#art_scrollable .bio_etc').css('overflow','-moz-scrollbars-none');} } function after (){ if(!IE){$('#art_scrollable .bio_etc').css('overflow','auto');} } function prep_media_carousel(){ //youtube and soundcloud player $("#rs_content .yt_scrollable a.yt, #rs_content .yt_scrollable a.sc").colorbox({ overlayClose : false, opacity : 0 }); $("#colorbox").easydrag(true); $('#cboxOverlay').remove(); } function make_carousel_scrollable(unique_id){ $('#scroll_'+unique_id).scrollable({ size:1, clickable:false, nextPage:'#r_'+unique_id, prevPage:'#l_'+unique_id }); } function check_l_r_arrows(total, counter, art_id){ //left arrow if(counter > 0) { $('#l_'+art_id).show(); $('#l_'+art_id+'_inactive').hide(); } else { $('#l_'+art_id).hide(); $('#l_'+art_id+'_inactive').show(); } //right arrow if(counter < total-3) { $('#r_'+art_id).show(); $('#r_'+art_id+'_inactive').hide(); } else { $('#r_'+art_id).hide(); $('#r_'+art_id+'_inactive').show(); } } function reset_srch_res(){ $('#main_search').fadeOut(400).children().remove(); } function search(q){ $.ajax({ type: 'GET', url: 'index_files/srch/search.php?q='+q, success: function(e) { $('#main_search').html(e).show(); } }); }
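
    A minimal sketch of the distinction the question is asking about, assuming jQuery 1.7+ (the .on() method); the selectors, the "e<id>" id convention and event_click() are taken from the snippet above, and the rest is illustrative rather than the poster's actual code. Handlers delegated to document can be bound before DOM ready, whereas code that touches existing elements (such as seeding .data() on the .event nodes) still belongs inside the ready callback:

        // Delegated binding: it attaches to document, so it does not need document.ready
        // (the original code achieves much the same thing with the older .live()).
        $(document).on('click', '#cal_blocks div.event, #main_search div.event', function () {
            var id = $(this).attr('id').split('e')[1]; // same "e<id>" convention as above
            event_click(id);                           // assumes the poster's event_click()
        });

        // Direct DOM work needs the elements to exist, so keep it in document.ready.
        $(function () {
            $('.event').data({ t1_loaded: false, t2_loaded: false, t3_loaded: false, art_req: false });
            $('#wrap_right').data('first_click_made', false);
        });

    Either way the difference is unlikely to be the performance bottleneck; the main practical benefit of delegation is that handlers do not have to be rebound after content is loaded via AJAX.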

    Read the article

  • Would anyone tell me how to fetch the media:thumb element's attribute from a json feed?

    - by ash
    I made a yahoo pipe that pulls up the atoms as json format; however, I can fetch and display all the elements in my html page except for the element's attribute. Would anyone tell me how to fetch the media:thumb element's attribute from a json feed? I am pasting the html page's code with javascript. If you save the html page and then view it in browser, you will see that all the necessary elements get output at html page except for the media:thumb as I cannot display the attribute of media:thumb when the feed is formatted as json. I am also pasting the some portion of the json feed so that you can have an idea what i am talking about. Please tell me how to retrieve attribute from media:thumb element of a json feed by using plain javascript but no server side code or javascript library. Thank you. function getFeed(feed){ var newScript = document.createElement('script'); newScript.type = 'text/javascript'; newScript.src = 'http://pipes.yahoo.com/pipes/pipe.run?_id=40616620df99780bceb3fe923cecd216&_render=json&_callback=piper'; document.getElementsByTagName("head")[0].appendChild(newScript); } function piper(feed){ var tmp=''; for (var i=0; i'; tmp+=feed.value.items[i].title+''; tmp+=feed.value.items[i].author.name+''; tmp+=feed.value.items[i].published+''; if (feed.value.items[i].description) { tmp+=feed.value.items[i].description+''; } tmp+='<hr>'; } document.getElementById('rssLayer').innerHTML=tmp; } </script> bchnbc .............................................................. Some portion of the json feed that gets generated by yahoo pipe .............................................................. piper({"count":2,"value":{"title":"myPipe","description":"Pipes Output","link":"http:\/\/pipes.yahoo.com\/pipes\/pipe.info?_id=f7f4175d493cf1171aecbd3268fea5ee","pubDate":"Fri, 02 Apr 2010 17:59:22 -0700","generator":"http:\/\/pipes.yahoo.com\/pipes\/","callback":"piper", "items": [{ "rights":"Attribution - Noncommercial - No Derivative Works", "link":"http:\/\/vodo.net\/mixtape1", "y:id":{"value":null,"permalink":"true"}, "content":{"content":"We're proud to be releasing this first VODO MIXTAPE. Actual tape might be a thing of the past, but before P2P, mixtapes were the most popular way of sharing popular culture the world had known -- and once called the 'most widely practiced American art form'. We want to resuscitate the spirit of the mixtape for this VODO MIXTAPE series: compilations of our favourite shorts, the weird, the wild and the wonky, all brought together in a temporary and uncomfortable company.","type":"text"}, "author": {"name":"Various"}, "description":"We're proud to be releasing this first VODO MIXTAPE. Actual tape might be a thing of the past, but before P2P, mixtapes were the most popular way of sharing popular culture the world had known -- and once called the 'most widely practiced American art form'. 
We want to resuscitate the spirit of the mixtape for this VODO MIXTAPE series: compilations of our favourite shorts, the weird, the wild and the wonky, all brought together in a temporary and uncomfortable company.", "media:thumbnail": { "url":"http:\/\/vodo.net\/\/thumbnails\/Mixtape1.jpg" }, "published":"2010-03-08-09:20:20 PM", "format": { "audio_bitrate":null, "width":"608", "xmlns":"http:\/\/xmlns.transmission.cc\/FileFormat", "channels":"2", "samplerate":"44100.0", "duration":"3092.36", "height":"352", "size":"733925376.0", "framerate":"25.0", "audio_codec":"mp3", "video_bitrate":"1898.0", "video_codec":"XVID", "pixel_aspect_ratio":"16:9" }, "y:title":"Mixtape #1: VODO's favourite short films", "title":"Mixtape #1: VODO's favourite short films", "id":null, "pubDate":"2010-03-08-09:20:20 PM", "y:published":{"hour":"3","timezone":"UTC","second":"0","month":"4","minute":"10","utime":"1270264200","day":"3","day_of_week":"6","year":"2010" }}, {"rights":"Attribution - Noncommercial - No Derivative Works","link":"http:\/\/vodo.net\/gilbert","y:id":{"value":"cd6584e06ea4ce7fcd34172f4bbd919e295f8680","permalink":"true"},"content":{"content":"A documentary short about Gilbert, the Beacon Hill \"town crier.\" For the last 9 years, since losing his job and becoming homeless, Gilbert has delivered the weather, sports, and breaking headlines from his spot on the Boston Common. Music (used with permission) in this piece is called \"Blue Bicycle\" by Dusseldorf-based pianist \/ composer Volker Bertelmann also known as Hauschka. Artistic Statement: This is the first in a series of profiles of people who I think are interesting, and who I see on almost a daily basis. I don't want to limit the series to people who live \"on the fringe,\" but it would be appropriate to say that most of the people I interview are eclectic, eccentric, and just a little bit unique. The art is in the viewing - but I hope to turn my lens on individuals that don't always color in the lines, whether they can help it or not.","type":"text"},"author":{"name":"Nathaniel Hansen"},"description":"A documentary short about Gilbert, the Beacon Hill \"town crier.\" For the last 9 years, since losing his job and becoming homeless, Gilbert has delivered the weather, sports, and breaking headlines from his spot on the Boston Common. Music (used with permission) in this piece is called \"Blue Bicycle\" by Dusseldorf-based pianist \/ composer Volker Bertelmann also known as Hauschka. Artistic Statement: This is the first in a series of profiles of people who I think are interesting, and who I see on almost a daily basis. I don't want to limit the series to people who live \"on the fringe,\" but it would be appropriate to say that most of the people I interview are eclectic, eccentric, and just a little bit unique. 
The art is in the viewing - but I hope to turn my lens on individuals that don't always color in the lines, whether they can help it or not.","media:thumbnail":{"url":"http:\/\/vodo.net\/\/thumbnails\/gilbert.jpeg"},"published":"2010-03-03-10:37:05 AM","format":{"audio_bitrate":null,"width":"624","xmlns":"http:\/\/xmlns.transmission.cc\/FileFormat","channels":"2","samplerate":null,"duration":"373.673","height":"352","size":"123321266.0","framerate":null,"audio_codec":"mp3","video_bitrate":null,"video_codec":"XVID","pixel_aspect_ratio":"16:9"},"y:title":"Gilbert","title":"Gilbert","id":"cd6584e06ea4ce7fcd34172f4bbd919e295f8680","pubDate":"2010-03-03-10:37:05 AM","y:published":{"hour":"3","timezone":"UTC","second":"0","month":"4","minute":"10","utime":"1270264200","day":"3","day_of_week":"6","year":"2010" }} ] }})
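
    A minimal sketch of how the thumbnail could be read inside the piper() callback, based on the feed structure shown above; because the property name "media:thumbnail" contains a colon it cannot be accessed with dot notation, so bracket notation is required, and the property should be checked for existence since not every item may carry one:

        function piper(feed) {
            var tmp = '';
            for (var i = 0; i < feed.value.items.length; i++) {
                var item = feed.value.items[i];
                // item.media:thumbnail is a syntax error; use bracket notation instead.
                var thumb = item['media:thumbnail'];
                if (thumb && thumb.url) {
                    tmp += '<img src="' + thumb.url + '" alt="">';
                }
                tmp += item.title + '<hr>';
            }
            document.getElementById('rssLayer').innerHTML = tmp;
        }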

    Read the article

  • CodePlex Daily Summary for Monday, November 04, 2013

    CodePlex Daily Summary for Monday, November 04, 2013Popular ReleasesDNN Blog: 06.00.01: 06.00.01 ReleaseThis is the first bugfix release of the new v6 blog module. These are the changes: Added some robustness in v5-v6 scripts to cater for some rare upgrade scenarios Changed the name of the module definition to avoid clash with Evoq Social Addition of sitemap providerStock Track: Version 1.2 Stable: Overhaul and re-think of the user interface in normal mode. Added stock history view in normal mode. Allows user to enter orders in normal mode. Allow advanced user to run database queries within the program. Improved sales statistics feature, able to calculate against a single category.VG-Ripper & PG-Ripper: VG-Ripper 2.9.50: changes NEW: Added Support for "ImageHostHQ.com" links NEW: Added Support for "ImgMoney.net" links NEW: Added Support for "ImgSavy.com" links NEW: Added Support for "PixTreat.com" links Bug fixesVidCoder: 1.5.11 Beta: Added Encode Details window. Exposes elapsed time, ETA, current and average FPS, running file size, current pass and pass progress. Open it by going to Windows -> Encode Details while an encode is running. Subtitle dialog now disables the "Burn In" checkbox when it's either unavailable or it's the only option. It also disables the "Forced Only" when the subtitle type doesn't support the "Forced" flag. Updated HandBrake core to SVN 5872. Fixed crash in the preview window when a source fil...Wsus Package Publisher: Release v1.3.1311.02: Add three new Actions in Custom Updates : Work with Files (Copy, Delete, Rename), Work with Folders (Add, Delete, Rename) and Work with Registry Keys (Add, Delete, Rename). Fix a bug, where after resigning an update, the display is not refresh. Modify the way WPP sort rows in 'Updates Detail Viewer' and 'Computer List Viewer' so that dates are correctly sorted. Add a Tab in the settings form to set Proxy settings when WPP needs to go on Internet. Fix a bug where 'Manage Catalogs Subsc...uComponents: uComponents v6.0.0: This release of uComponents will compile against and support the new API in Umbraco v6.1.0. What's new in uComponents v6.0.0? New DataTypesImage Point XML DropDownList XPath Templatable List New features / Resolved issuesThe following workitems have been implemented and/or resolved: 14781 14805 14808 14818 14854 14827 14868 14859 14790 14853 14790 DataType Grid 14788 14810 14873 14833 14864 14855 / 14860 14816 14823 Drag & Drop support for rows Su...SharePoint 2013 Global Metadata Navigation: SP 2013 Metadata Navigation Sandbox Solution: SharePoint 2013 Global Metadata Navigation sandbox solution version 1.0. Upload to your site collection solution store and activate. See the Documentation tab for detailed instructions.SmartStore.NET - Free ASP.NET MVC Ecommerce Shopping Cart Solution: SmartStore.NET 1.2.1: New FeaturesAdded option Limit to current basket subtotal to HadSpentAmount discount rule Items in product lists can be labelled as NEW for a configurable period of time Product templates can optionally display a discount sign when discounts were applied Added the ability to set multiple favicons depending on stores and/or themes Plugin management: multiple plugins can now be (un)installed in one go Added a field for the HTML body id to store entity (Developer) New property 'Extra...CodeGen Code Generator: CodeGen 4.3.2: Changes in this release include: Removed old tag tokens from several example templates. 
Fixed a bug which was causing the default author and company names not to be picked up from the registry under .NET. Added several additional tag loop expressions: <IF FIRST_TAG>, <IF LAST_TAG>, <IF MULTIPLE_TAGS> and<IF SINGLE_TAG>. Upgraded to Synergy/DE 10.1.1b, Visual Studio 2013 and Installshield Limited Edition 2013.Dynamics CRM 2013 Easy Solution Importer: Dynamics CRM 2013 Easy Solution Importer 1.0.0.0: First Version of Easy Solution Importer contains: - Entity to handle solutions - PBL to deactivate fields in form - Business Process Flow to launch the Solution Import - Plugin to import solutions - ChartSQL Power Doc: Version 1.0.2.2 BETA 1: Fixes for issues introduced with PowerShell 4.0 with serialization/deserialization. Fixed an issue with the max length of an Excel cell being exceeded by AD groups with a large number of members.Community Forums NNTP bridge: Community Forums NNTP Bridge V54 (LiveConnect): This is the first release which can be used with the new LiveConnect authentication. Fixes the problem that the authentication will not work after 1 hour. Also a logfile will now be stored in "%AppData%\Community\CommunityForumsNNTPServer". If you have any problems please feel free to sent me the file "LogFile.txt".AutoAudit: AutoAudit 3.20f: Here is a high level list of the things I have changed between AutoAudit 2.00h and 3.20f ... Note: 3.20f corrects a minor bug found in 3.20e in the _RowHistory UDF when using column names with spaces. 1. AutoAudit has been tested on SQL Server 2005, 2008, 2008R2 and 2012. 2. Added the capability for AutoAudit to handle primary keys with up to 5 columns. 3. Added the capability for AutoAudit to save changes for a subset of the columns in a table. 4. Normalized the Audit table and created...Aricie - Friendlier Url Provider: Aricie - Friendlier Url Provider Version 2.5.3: This is mainly a maintenance release to stabilize the new Url Group paradigm. As usual, don't forget to install the Aricie - Shared extension first Highlights Fixed: UI bugs Min Requirements: .Net 3.5+ DotNetNuke 4.8.1+ Aricie - Shared 1.7.7+Aricie Shared: Aricie.Shared Version 1.7.7: This is mainly a maintenance version. Fixes in Property Editor: list import/export Min Requirements: DotNetNuke 4.8.1+ .Net 3.5+WPF Extended DataGrid: WPF Extended DataGrid 2.0.0.9 binaries: Fixed issue with ICollectionView containg null values (AutoFilter issue)SuperSocket, an extensible socket server framework: SuperSocket 1.6 stable: Changes included in this release: Process level isolation SuperSocket ServerManager (include server and client) Connect to client from server side initiatively Client certificate validation New configuration attributes "textEncoding", "defaultCulture", and "storeLocation" (certificate node) Many bug fixes http://docs.supersocket.net/v1-6/en-US/New-Features-and-Breaking-ChangesBarbaTunnel: BarbaTunnel 8.1: Check Version History for more information about this release.NAudio: NAudio 1.7: full release notes available at http://mark-dot-net.blogspot.co.uk/2013/10/naudio-17-release-notes.htmlDirectX Tool Kit: October 2013: October 28, 2013 Updated for Visual Studio 2013 and Windows 8.1 SDK RTM Added DGSLEffect, DGSLEffectFactory, VertexPositionNormalTangentColorTexture, and VertexPositionNormalTangentColorTextureSkinning Model loading and effect factories support loading skinned models MakeSpriteFont now has a smooth vs. 
sharp antialiasing option: /sharp Model loading from CMOs now handles UV transforms for texture coordinates A number of small fixes for EffectFactory Minor code and project cleanup ...New ProjectsActive Directory User Home Directory and Home Drive Management: The script is great for migrations and overall user management. Questions please send an email to delagardecodeplex@hotmail.com.Computational Mathematics in TSQL: Combinatorial mathematics is easily expressed with computational assistance. The SQL Server engine is the canvas.Demo3: this is a demoEraDeiFessi: Un parser per un certo sito molto pesante e scomodo da navigareFinditbyme Local Search: Multi-lingual search engine that can index entities with multiple attributes.MVC Demo Project - 6: Demo project showing how to use partial views inside an ASP.NET MVC application.OLM to PST Converters: An Ideal Approach for Mac Users: OLM to PST converter helps you to access MAC Outlook emails with Windows platformOpen Source Grow Pack Ship: The Open Source Grow Pack Ship system is for the produce industry. It will allow companies with low funds and infrastructure to operate inside their own budget.Pauli: Pauli is a small .NET based password manager for home use. The Software helps a user to organize passwords, PIN codes, login accounts and notes.Refraction: Member element sorting. Contains reusable Visual Studio Extensibility code in the 'CodeManifold' project.Sensarium Cybernetic Art: Sensarium Cybernetic Art will use Kinect and brainwave technologies to create a true cybernetic art system. SharePoint - Tetris WebPart: Tetris webpart for SharePoint 2013Software Product Licensing: Dot License is very easy for your software product licensing. Product activation based on AES encryption, Processor ID and a single GUID key.VBS Class Framework: The 'VBS Class Framework' is an experimental project whcih aims at delivering 'Visual Basic Script Classes' for some commonly used Objects / Components (COM).VisualME7Logger: Graphical tools for logging real time performance statistics from VW and Audi automobilesVM Role Authoring Tool: VM ROLE Authoring Tool is used to author consistent VM Role gallery workloads for Windows Azure Pack and Windows Azure. wallet: Wallet Windows 8 Store Application: Windows 8 Store Application with XAML and C#Windows Azure Cache Extension Library: Windows Azure Cache Extension LibraryWindows Firewall Notifier: Windows Firewall Notifier (WFN) extends the default Windows embedded firewall behavior, allowing to visualize and handle incoming or outgoing connections.

    Read the article

  • What is "elegant" code?

    - by Breton
    I see a lot of lip service and talk about the most "elegant" way to do this or that. I think if you spend enough time programming you begin to obtain a sort of intuitive feel for what it is we call "elegance". But I'm curious. Even if we can look at a bit of code, and say instinctively "That's elegant", or "That's messy", I wonder if any of us really understands what that means. Is there a precise definition for this "elegance" we keep referring to? If there is, what is it? Now, what I mean by a precise definition is a series of statements which can be used to derive questions about a piece of code, or a program as a whole, and determine objectively, or as objectively as possible, whether that code is "elegant" or not. May I assert that perhaps no such definition exists, and it's all just personal preference. In this case, I ask you a slightly different question: Is there a better word for "elegance", or a better set of attributes to use for judging code quality that is perhaps more objective than merely appealing to individual intuition and taste? Perhaps code quality is a matter of taste, and the answer to both of my questions is "no". But I can't help but feel that we could be doing better than just expressing wishy-washy feelings about our code quality. For example, user interface design is something that to a broad range of people looks for all the world like a field of study that ought to be a 100% subjective matter of taste. But this is shockingly and brutally not the case, and there are in fact many objective measures that can be applied to a user interface to determine its quality. A series of tests could be written to give a definitive and repeatable score to user interface quality. (See GOMS, for instance.) Now, okay: is elegance simply "code quality", or is it something more? Is it something that can be measured? Or is it a matter of taste? Does our profession have room for taste? Maybe I'm asking the wrong questions altogether. Help me out here. Bonus Round: If there is such a thing as elegance in code, and that concept is useful, do you think that justifies classifying the field of programming as an "Art" with a capital A, or merely a "craft"? Or is it just an engineering field populated by a bunch of wishful thinking humans? Consider this question in the light of your thoughts about the elegance question. Please note that there is a distinction between code which is considered "art" in itself, and code that was written merely in the service of creating an artful program. When I ask this question, I ask if the code itself justifies calling programming an art. Bounty Note: I liked the answers to this question so much, I think I'd like to make a photographic essay book from it. Released as a free PDF, and published on some kind of on-demand printing service of course, such as "zazz" or "tiggle" or "printley" or something. I'd like some more answers, please!

    Read the article

  • Silverlight Cream for March 28, 2010 -- #823

    - by Dave Campbell
    In this Issue: Michael Washington, Andy Beaulieu, Bill Reiss, jocelyn, Shawn Wildermuth, Cameron Albert, Shawn Oster, Alex Yakhnin, ondrejsv, Giorgetti Alessandro, Jeff Handley, SilverLaw, deepm, and Kyle McClellan. Shoutouts: If I've listed this before, it's worth another... Introduction to Prototyping with SketchFlow (twelve video series) and on the same page is Creating a Beehive Game with Behaviors in Blend 3 (ten video series) Shawn Oster announced his Slides + Code + Video from ‘An Introduction to Developing Applications for Microsoft Silverlight’ from MIX10 Tim Heuer announced earlier this week: Silverlight Client for Facebook updated for Silverlight 4 RC Nikhil Kothari announced the availability of his MIX10 Talk - Slides and Code András Velvárt backed up his great MIX09 effort with MIX10.Zoomery.com... everything in one DZ effort... thanks András! Andy Beaulieu posted his material for his Code Camp 13 in Waltham: Windows Phone: Silverlight for Casual Games From SilverlightCream.com: Silverlight MVVM - The Revolution Has Begun Michael Washington did an awesome tutorial on MVVM and Silverlight creating a simple Silverlight File Manager. The post has a link to the tutorial at CodeProject... great tutorial. Windows Phone 7 + Silverlight Performance Andy Beaulieu has a post up we should all bookmark... getting a handle on the graphics performance of our app on WP7. Great examples, and external links. Space Rocks game step 6: Keyboard handling Bill Reiss has a post up about keyboard input for the WP7 game he's building ... this is Episode 6 ... you're working along with him, right? Panoramic Navigation on Windows Phone 7 with No Code! jocelyn at InnovativeSingapore (I found this by way of Shawn's post), has a Panoramic Navigation template out there for WP7 for all of us to grab... great post about it too. My First WP7 Application Shawn Wildermuth has been playing with WP7 development and has his XBOX Game library app up on the emulator... all with source of course Silverlight and Windows Phone 7 Game Cameron Albert built a web-based game called 'Shape Attack' and also did it for WP7 to compare the performance... check it out for yourself, but hey, it's game source for the phone... cool :) Changing the Onscreen Keyboard layout in Silverlight for Windows Phone using InputScope Shawn Oster has a cool post on changing the keyboard on WP7 to go along with what you're expecting the user to type... how cool is that?? Deep Zoom on WP7 Check out the quick work Alex Yakhnin made of putting DeepZoom on WP7... all source included. How to: Create a sketchy Siverlight GroupBox in Blend/SketchFlow ondrejsv has the xaml up to take Tim Greenfield's GroupBox control and insert it into SketchFlow. Silverlight / Castle Windsor – implementing a simple logging framework Giorgetti Alessandro posted about CastleWindsor for Silverlight, and a logging system inherited from LevelFilteredLogger in the absence of Log4Net. DomainDataSource in a ViewModel Jeff Handley responds to a common forum post about using DomainDataSource in a ViewModel. Read his comments on AutoLoad and ElementName Bindins. Digital Jugendstil TextEffect (Art Nouveau) - Silverlight 3 SilverLaw has a cool TagCloud demo and a UserControl he calls Art Nouveau up at the Expression Gallery... not for a business app, I don't think :) Configuring your DomainService for a Windows Phone 7 application deepm discusses RIA Services for WP7 and how to enable a WP7 app to communicate with a DomainService. 
Writing a Custom Filter or Parameter for DomainDataSource Kyle McClellan by way of Jeff Handley's blog, is discussing how to leverage the custom parameter types you defined in the previous version of RIA Services. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • How John Got 15x Improvement Without Really Trying

    - by rchrd
    The following article was published on a Sun Microsystems website a number of years ago by John Feo. It is still useful and worth preserving. So I'm republishing it here.  How I Got 15x Improvement Without Really Trying John Feo, Sun Microsystems Taking ten "personal" program codes used in scientific and engineering research, the author was able to get from 2 to 15 times performance improvement easily by applying some simple general optimization techniques. Introduction Scientific research based on computer simulation depends on the simulation for advancement. The research can advance only as fast as the computational codes can execute. The codes' efficiency determines both the rate and quality of results. In the same amount of time, a faster program can generate more results and can carry out a more detailed simulation of physical phenomena than a slower program. Highly optimized programs help science advance quickly and insure that monies supporting scientific research are used as effectively as possible. Scientific computer codes divide into three broad categories: ISV, community, and personal. ISV codes are large, mature production codes developed and sold commercially. The codes improve slowly over time both in methods and capabilities, and they are well tuned for most vendor platforms. Since the codes are mature and complex, there are few opportunities to improve their performance solely through code optimization. Improvements of 10% to 15% are typical. Examples of ISV codes are DYNA3D, Gaussian, and Nastran. Community codes are non-commercial production codes used by a particular research field. Generally, they are developed and distributed by a single academic or research institution with assistance from the community. Most users just run the codes, but some develop new methods and extensions that feed back into the general release. The codes are available on most vendor platforms. Since these codes are younger than ISV codes, there are more opportunities to optimize the source code. Improvements of 50% are not unusual. Examples of community codes are AMBER, CHARM, BLAST, and FASTA. Personal codes are those written by single users or small research groups for their own use. These codes are not distributed, but may be passed from professor-to-student or student-to-student over several years. They form the primordial ocean of applications from which community and ISV codes emerge. Government research grants pay for the development of most personal codes. This paper reports on the nature and performance of this class of codes. Over the last year, I have looked at over two dozen personal codes from more than a dozen research institutions. The codes cover a variety of scientific fields, including astronomy, atmospheric sciences, bioinformatics, biology, chemistry, geology, and physics. The sources range from a few hundred lines to more than ten thousand lines, and are written in Fortran, Fortran 90, C, and C++. For the most part, the codes are modular, documented, and written in a clear, straightforward manner. They do not use complex language features, advanced data structures, programming tricks, or libraries. I had little trouble understanding what the codes did or how data structures were used. Most came with a makefile. Surprisingly, only one of the applications is parallel. All developers have access to parallel machines, so availability is not an issue. Several tried to parallelize their applications, but stopped after encountering difficulties. 
Lack of education and a perception that parallelism is difficult prevented most from trying. I parallelized several of the codes using OpenMP, and did not judge any of the codes as difficult to parallelize. Even more surprising than the lack of parallelism is the inefficiency of the codes. I was able to get large improvements in performance in a matter of a few days applying simple optimization techniques. Table 1 lists ten representative codes [names and affiliation are omitted to preserve anonymity]. Improvements on one processor range from 2x to 15.5x with a simple average of 4.75x. I did not use sophisticated performance tools or drill deep into the program's execution character as one would do when tuning ISV or community codes. Using only a profiler and source line timers, I identified inefficient sections of code and improved their performance by inspection. The changes were at a high level. I am sure there is another factor of 2 or 3 in each code, and more if the codes are parallelized. The study’s results show that personal scientific codes are running many times slower than they should and that the problem is pervasive. Computational scientists are not sloppy programmers; however, few are trained in the art of computer programming or code optimization. I found that most have a working knowledge of some programming language and standard software engineering practices; but they do not know, or think about, how to make their programs run faster. They simply do not know the standard techniques used to make codes run faster. In fact, they do not even perceive that such techniques exist. The case studies described in this paper show that applying simple, well known techniques can significantly increase the performance of personal codes. It is important that the scientific community and the Government agencies that support scientific research find ways to better educate academic scientific programmers. The inefficiency of their codes is so bad that it is retarding both the quality and progress of scientific research. # cacheperformance redundantoperations loopstructures performanceimprovement 1 x x 15.5 2 x 2.8 3 x x 2.5 4 x 2.1 5 x x 2.0 6 x 5.0 7 x 5.8 8 x 6.3 9 2.2 10 x x 3.3 Table 1 — Area of improvement and performance gains of 10 codes The remainder of the paper is organized as follows: sections 2, 3, and 4 discuss the three most common sources of inefficiencies in the codes studied. These are cache performance, redundant operations, and loop structures. Each section includes several examples. The last section summaries the work and suggests a possible solution to the issues raised. Optimizing cache performance Commodity microprocessor systems use caches to increase memory bandwidth and reduce memory latencies. Typical latencies from processor to L1, L2, local, and remote memory are 3, 10, 50, and 200 cycles, respectively. Moreover, bandwidth falls off dramatically as memory distances increase. Programs that do not use cache effectively run many times slower than programs that do. When optimizing for cache, the biggest performance gains are achieved by accessing data in cache order and reusing data to amortize the overhead of cache misses. Secondary considerations are prefetching, associativity, and replacement; however, the understanding and analysis required to optimize for the latter are probably beyond the capabilities of the non-expert. Much can be gained simply by accessing data in the correct order and maximizing data reuse. 
6 out of the 10 codes studied here benefited from such high level optimizations. Array Accesses The most important cache optimization is the most basic: accessing Fortran array elements in column order and C array elements in row order. Four of the ten codes—1, 2, 4, and 10—got it wrong. Compilers will restructure nested loops to optimize cache performance, but may not do so if the loop structure is too complex, or the loop body includes conditionals, complex addressing, or function calls. In code 1, the compiler failed to invert a key loop because of complex addressing do I = 0, 1010, delta_x IM = I - delta_x IP = I + delta_x do J = 5, 995, delta_x JM = J - delta_x JP = J + delta_x T1 = CA1(IP, J) + CA1(I, JP) T2 = CA1(IM, J) + CA1(I, JM) S1 = T1 + T2 - 4 * CA1(I, J) CA(I, J) = CA1(I, J) + D * S1 end do end do In code 2, the culprit is conditionals do I = 1, N do J = 1, N If (IFLAG(I,J) .EQ. 0) then T1 = Value(I, J-1) T2 = Value(I-1, J) T3 = Value(I, J) T4 = Value(I+1, J) T5 = Value(I, J+1) Value(I,J) = 0.25 * (T1 + T2 + T5 + T4) Delta = ABS(T3 - Value(I,J)) If (Delta .GT. MaxDelta) MaxDelta = Delta endif enddo enddo I fixed both programs by inverting the loops by hand. Code 10 has three-dimensional arrays and triply nested loops. The structure of the most computationally intensive loops is too complex to invert automatically or by hand. The only practical solution is to transpose the arrays so that the dimension accessed by the innermost loop is in cache order. The arrays can be transposed at construction or prior to entering a computationally intensive section of code. The former requires all array references to be modified, while the latter is cost effective only if the cost of the transpose is amortized over many accesses. I used the second approach to optimize code 10. Code 5 has four-dimensional arrays and loops are nested four deep. For all of the reasons cited above the compiler is not able to restructure three key loops. Assume C arrays and let the four dimensions of the arrays be i, j, k, and l. In the original code, the index structure of the three loops is L1: for i L2: for i L3: for i for l for l for j for k for j for k for j for k for l So only L3 accesses array elements in cache order. L1 is a very complex loop—much too complex to invert. I brought the loop into cache alignment by transposing the second and fourth dimensions of the arrays. Since the code uses a macro to compute all array indexes, I effected the transpose at construction and changed the macro appropriately. The dimensions of the new arrays are now: i, l, k, and j. L3 is a simple loop and easily inverted. L2 has a loop-carried scalar dependence in k. By promoting the scalar name that carries the dependence to an array, I was able to invert the third and fourth subloops aligning the loop with cache. Code 5 is by far the most difficult of the four codes to optimize for array accesses; but the knowledge required to fix the problems is no more than that required for the other codes. I would judge this code at the limits of, but not beyond, the capabilities of appropriately trained computational scientists. Array Strides When a cache miss occurs, a line (64 bytes) rather than just one word is loaded into the cache. If data is accessed stride 1, than the cost of the miss is amortized over 8 words. Any stride other than one reduces the cost savings. Two of the ten codes studied suffered from non-unit strides. The codes represent two important classes of "strided" codes. 
Code 1 employs a multi-grid algorithm to reduce time to convergence. The grids are every tenth, fifth, second, and unit element. Since time to convergence is inversely proportional to the distance between elements, coarse grids converge quickly providing good starting values for finer grids. The better starting values further reduce the time to convergence. The downside is that grids of every nth element, n > 1, introduce non-unit strides into the computation. In the original code, much of the savings of the multi-grid algorithm were lost due to this problem. I eliminated the problem by compressing (copying) coarse grids into continuous memory, and rewriting the computation as a function of the compressed grid. On convergence, I copied the final values of the compressed grid back to the original grid. The savings gained from unit stride access of the compressed grid more than paid for the cost of copying. Using compressed grids, the loop from code 1 included in the previous section becomes do j = 1, GZ do i = 1, GZ T1 = CA(i+0, j-1) + CA(i-1, j+0) T4 = CA1(i+1, j+0) + CA1(i+0, j+1) S1 = T1 + T4 - 4 * CA1(i+0, j+0) CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1 enddo enddo where CA and CA1 are compressed arrays of size GZ. Code 7 traverses a list of objects selecting objects for later processing. The labels of the selected objects are stored in an array. The selection step has unit stride, but the processing steps have irregular stride. A fix is to save the parameters of the selected objects in temporary arrays as they are selected, and pass the temporary arrays to the processing functions. The fix is practical if the same parameters are used in selection as in processing, or if processing comprises a series of distinct steps which use overlapping subsets of the parameters. Both conditions are true for code 7, so I achieved significant improvement by copying parameters to temporary arrays during selection. Data reuse In the previous sections, we optimized for spatial locality. It is also important to optimize for temporal locality. Once read, a datum should be used as much as possible before it is forced from cache. Loop fusion and loop unrolling are two techniques that increase temporal locality. Unfortunately, both techniques increase register pressure—as loop bodies become larger, the number of registers required to hold temporary values grows. Once register spilling occurs, any gains evaporate quickly. For multiprocessors with small register sets or small caches, the sweet spot can be very small. In the ten codes presented here, I found no opportunities for loop fusion and only two opportunities for loop unrolling (codes 1 and 3). In code 1, unrolling the outer and inner loop one iteration increases the number of result values computed by the loop body from 1 to 4, do J = 1, GZ-2, 2 do I = 1, GZ-2, 2 T1 = CA1(i+0, j-1) + CA1(i-1, j+0) T2 = CA1(i+1, j-1) + CA1(i+0, j+0) T3 = CA1(i+0, j+0) + CA1(i-1, j+1) T4 = CA1(i+1, j+0) + CA1(i+0, j+1) T5 = CA1(i+2, j+0) + CA1(i+1, j+1) T6 = CA1(i+1, j+1) + CA1(i+0, j+2) T7 = CA1(i+2, j+1) + CA1(i+1, j+2) S1 = T1 + T4 - 4 * CA1(i+0, j+0) S2 = T2 + T5 - 4 * CA1(i+1, j+0) S3 = T3 + T6 - 4 * CA1(i+0, j+1) S4 = T4 + T7 - 4 * CA1(i+1, j+1) CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1 CA(i+1, j+0) = CA1(i+1, j+0) + DD * S2 CA(i+0, j+1) = CA1(i+0, j+1) + DD * S3 CA(i+1, j+1) = CA1(i+1, j+1) + DD * S4 enddo enddo The loop body executes 12 reads, whereas as the rolled loop shown in the previous section executes 20 reads to compute the same four values. 
In code 3, two loops are unrolled 8 times and one loop is unrolled 4 times. Here is the before for (k = 0; k < NK[u]; k++) { sum = 0.0; for (y = 0; y < NY; y++) { sum += W[y][u][k] * delta[y]; } backprop[i++]=sum; } and after code for (k = 0; k < KK - 8; k+=8) { sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0; sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0; for (y = 0; y < NY; y++) { sum0 += W[y][0][k+0] * delta[y]; sum1 += W[y][0][k+1] * delta[y]; sum2 += W[y][0][k+2] * delta[y]; sum3 += W[y][0][k+3] * delta[y]; sum4 += W[y][0][k+4] * delta[y]; sum5 += W[y][0][k+5] * delta[y]; sum6 += W[y][0][k+6] * delta[y]; sum7 += W[y][0][k+7] * delta[y]; } backprop[k+0] = sum0; backprop[k+1] = sum1; backprop[k+2] = sum2; backprop[k+3] = sum3; backprop[k+4] = sum4; backprop[k+5] = sum5; backprop[k+6] = sum6; backprop[k+7] = sum7; } for one of the loops unrolled 8 times. Optimizing for temporal locality is the most difficult optimization considered in this paper. The concepts are not difficult, but the sweet spot is small. Identifying where the program can benefit from loop unrolling or loop fusion is not trivial. Moreover, it takes some effort to get it right. Still, educating scientific programmers about temporal locality and teaching them how to optimize for it will pay dividends. Reducing instruction count Execution time is a function of instruction count. Reduce the count and you usually reduce the time. The best solution is to use a more efficient algorithm; that is, an algorithm whose order of complexity is smaller, that converges quicker, or is more accurate. Optimizing source code without changing the algorithm yields smaller, but still significant, gains. This paper considers only the latter because the intent is to study how much better codes can run if written by programmers schooled in basic code optimization techniques. The ten codes studied benefited from three types of "instruction reducing" optimizations. The two most prevalent were hoisting invariant memory and data operations out of inner loops. The third was eliminating unnecessary data copying. The nature of these inefficiencies is language dependent. Memory operations The semantics of C make it difficult for the compiler to determine all the invariant memory operations in a loop. The problem is particularly acute for loops in functions since the compiler may not know the values of the function's parameters at every call site when compiling the function. Most compilers support pragmas to help resolve ambiguities; however, these pragmas are not comprehensive and there is no standard syntax. To guarantee that invariant memory operations are not executed repetitively, the user has little choice but to hoist the operations by hand. The problem is not as severe in Fortran programs because in the absence of equivalence statements, it is a violation of the language's semantics for two names to share memory. Codes 3 and 5 are C programs. In both cases, the compiler did not hoist all invariant memory operations from inner loops. Consider the following loop from code 3 for (y = 0; y < NY; y++) { i = 0; for (u = 0; u < NU; u++) { for (k = 0; k < NK[u]; k++) { dW[y][u][k] += delta[y] * I1[i++]; } } } Since dW[y][u] can point to the same memory space as delta for one or more values of y and u, assignment to dW[y][u][k] may change the value of delta[y]. 
In reality, dW and delta do not overlap in memory, so I rewrote the loop as for (y = 0; y < NY; y++) { i = 0; Dy = delta[y]; for (u = 0; u < NU; u++) { for (k = 0; k < NK[u]; k++) { dW[y][u][k] += Dy * I1[i++]; } } } Failure to hoist invariant memory operations may be due to complex address calculations. If the compiler can not determine that the address calculation is invariant, then it can hoist neither the calculation nor the associated memory operations. As noted above, code 5 uses a macro to address four-dimensional arrays #define MAT4D(a,q,i,j,k) (double *)((a)->data + (q)*(a)->strides[0] + (i)*(a)->strides[3] + (j)*(a)->strides[2] + (k)*(a)->strides[1]) The macro is too complex for the compiler to understand and so, it does not identify any subexpressions as loop invariant. The simplest way to eliminate the address calculation from the innermost loop (over i) is to define a0 = MAT4D(a,q,0,j,k) before the loop and then replace all instances of *MAT4D(a,q,i,j,k) in the loop with a0[i] A similar problem appears in code 6, a Fortran program. The key loop in this program is do n1 = 1, nh nx1 = (n1 - 1) / nz + 1 nz1 = n1 - nz * (nx1 - 1) do n2 = 1, nh nx2 = (n2 - 1) / nz + 1 nz2 = n2 - nz * (nx2 - 1) ndx = nx2 - nx1 ndy = nz2 - nz1 gxx = grn(1,ndx,ndy) gyy = grn(2,ndx,ndy) gxy = grn(3,ndx,ndy) balance(n1,1) = balance(n1,1) + (force(n2,1) * gxx + force(n2,2) * gxy) * h1 balance(n1,2) = balance(n1,2) + (force(n2,1) * gxy + force(n2,2) * gyy)*h1 end do end do The programmer has written this loop well—there are no loop invariant operations with respect to n1 and n2. However, the loop resides within an iterative loop over time and the index calculations are independent with respect to time. Trading space for time, I precomputed the index values prior to the entering the time loop and stored the values in two arrays. I then replaced the index calculations with reads of the arrays. Data operations Ways to reduce data operations can appear in many forms. Implementing a more efficient algorithm produces the biggest gains. The closest I came to an algorithm change was in code 4. This code computes the inner product of K-vectors A(i) and B(j), 0 = i < N, 0 = j < M, for most values of i and j. Since the program computes most of the NM possible inner products, it is more efficient to compute all the inner products in one triply-nested loop rather than one at a time when needed. The savings accrue from reading A(i) once for all B(j) vectors and from loop unrolling. for (i = 0; i < N; i+=8) { for (j = 0; j < M; j++) { sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0; sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0; for (k = 0; k < K; k++) { sum0 += A[i+0][k] * B[j][k]; sum1 += A[i+1][k] * B[j][k]; sum2 += A[i+2][k] * B[j][k]; sum3 += A[i+3][k] * B[j][k]; sum4 += A[i+4][k] * B[j][k]; sum5 += A[i+5][k] * B[j][k]; sum6 += A[i+6][k] * B[j][k]; sum7 += A[i+7][k] * B[j][k]; } C[i+0][j] = sum0; C[i+1][j] = sum1; C[i+2][j] = sum2; C[i+3][j] = sum3; C[i+4][j] = sum4; C[i+5][j] = sum5; C[i+6][j] = sum6; C[i+7][j] = sum7; }} This change requires knowledge of a typical run; i.e., that most inner products are computed. The reasons for the change, however, derive from basic optimization concepts. It is the type of change easily made at development time by a knowledgeable programmer. In code 5, we have the data version of the index optimization in code 6. 
Here a very expensive computation is a function of the loop indices and so cannot be hoisted out of the loop; however, the computation is invariant with respect to an outer iterative loop over time. We can compute its value for each iteration of the computation loop prior to entering the time loop and save the values in an array. The increase in memory required to store the values is small in comparison to the large savings in time. The main loop in Code 8 is doubly nested. The inner loop includes a series of guarded computations; some are a function of the inner loop index but not the outer loop index while others are a function of the outer loop index but not the inner loop index for (j = 0; j < N; j++) { for (i = 0; i < M; i++) { r = i * hrmax; R = A[j]; temp = (PRM[3] == 0.0) ? 1.0 : pow(r, PRM[3]); high = temp * kcoeff * B[j] * PRM[2] * PRM[4]; low = high * PRM[6] * PRM[6] / (1.0 + pow(PRM[4] * PRM[6], 2.0)); kap = (R > PRM[6]) ? high * R * R / (1.0 + pow(PRM[4]*r, 2.0) : low * pow(R/PRM[6], PRM[5]); < rest of loop omitted > }} Note that the value of temp is invariant to j. Thus, we can hoist the computation for temp out of the loop and save its values in an array. for (i = 0; i < M; i++) { r = i * hrmax; TEMP[i] = pow(r, PRM[3]); } [N.B. – the case for PRM[3] = 0 is omitted and will be reintroduced later.] We now hoist out of the inner loop the computations invariant to i. Since the conditional guarding the value of kap is invariant to i, it behooves us to hoist the computation out of the inner loop, thereby executing the guard once rather than M times. The final version of the code is for (j = 0; j < N; j++) { R = rig[j] / 1000.; tmp1 = kcoeff * par[2] * beta[j] * par[4]; tmp2 = 1.0 + (par[4] * par[4] * par[6] * par[6]); tmp3 = 1.0 + (par[4] * par[4] * R * R); tmp4 = par[6] * par[6] / tmp2; tmp5 = R * R / tmp3; tmp6 = pow(R / par[6], par[5]); if ((par[3] == 0.0) && (R > par[6])) { for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * tmp5; } else if ((par[3] == 0.0) && (R <= par[6])) { for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * tmp4 * tmp6; } else if ((par[3] != 0.0) && (R > par[6])) { for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * TEMP[i] * tmp5; } else if ((par[3] != 0.0) && (R <= par[6])) { for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * TEMP[i] * tmp4 * tmp6; } for (i = 0; i < M; i++) { kap = KAP[i]; r = i * hrmax; < rest of loop omitted > } } Maybe not the prettiest piece of code, but certainly much more efficient than the original loop, Copy operations Several programs unnecessarily copy data from one data structure to another. This problem occurs in both Fortran and C programs, although it manifests itself differently in the two languages. Code 1 declares two arrays—one for old values and one for new values. At the end of each iteration, the array of new values is copied to the array of old values to reset the data structures for the next iteration. This problem occurs in Fortran programs not included in this study and in both Fortran 77 and Fortran 90 code. Introducing pointers to the arrays and swapping pointer values is an obvious way to eliminate the copying; but pointers is not a feature that many Fortran programmers know well or are comfortable using. An easy solution not involving pointers is to extend the dimension of the value array by 1 and use the last dimension to differentiate between arrays at different times. For example, if the data space is N x N, declare the array (N, N, 2). 
Copy operations
Several programs unnecessarily copy data from one data structure to another. This problem occurs in both Fortran and C programs, although it manifests itself differently in the two languages. Code 1 declares two arrays—one for old values and one for new values. At the end of each iteration, the array of new values is copied to the array of old values to reset the data structures for the next iteration. This problem occurs in Fortran programs not included in this study and in both Fortran 77 and Fortran 90 code. Introducing pointers to the arrays and swapping pointer values is an obvious way to eliminate the copying; but pointers are not a feature that many Fortran programmers know well or are comfortable using. An easy solution not involving pointers is to extend the dimension of the value array by 1 and use the last dimension to differentiate between arrays at different times. For example, if the data space is N x N, declare the array (N, N, 2). Then store the problem's initial values in (_, _, 2) and define the scalar names new = 2 and old = 1. At the start of each iteration, swap old and new to reset the arrays. The old–new copy problem did not appear in any C program. In programs that had new and old values, the code swapped pointers to reset data structures. Where unnecessary copying did occur was in structure assignment and parameter passing. Structures in C are handled much like scalars. Assignment causes the data space of the right-hand name to be copied to the data space of the left-hand name. Similarly, when a structure is passed to a function, the data space of the actual parameter is copied to the data space of the formal parameter. If the structure is large and the assignment or function call is in an inner loop, then copying costs can grow quite large. While none of the ten programs considered here manifested this problem, it did occur in programs not included in the study. A simple fix is always to refer to structures via pointers.
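The pointer swap mentioned above, the idiom the C programs already used for their old and new arrays, takes only a few lines. The grid size and the update routine in this sketch are invented for illustration:

#define N 512

/* Hypothetical update step: computes the next grid from the current one. */
void update(double next[N][N], double cur[N][N]);

void iterate(int nsteps)
{
    static double buf0[N][N], buf1[N][N];
    double (*cur)[N] = buf0;
    double (*nxt)[N] = buf1;
    int step;

    for (step = 0; step < nsteps; step++) {
        update(nxt, cur);
        /* Swap the roles of the two buffers; no N x N copy is performed. */
        double (*tmp)[N] = cur;
        cur = nxt;
        nxt = tmp;
    }
}

The Fortran alternative described above (extending the array by one slot and swapping the indices old and new) achieves the same effect without pointers.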
Optimizing loop structures
Since scientific programs spend almost all their time in loops, efficient loops are the key to good performance. Conditionals, function calls, little instruction level parallelism, and large numbers of temporary values make it difficult for the compiler to generate tightly packed, highly efficient code. Conditionals and function calls introduce jumps that disrupt code flow. Users should eliminate conditionals or isolate them in their own loops as much as possible. Often logical expressions can be substituted for if-then-else statements. For example, code 2 includes the following snippet: MaxDelta = 0.0 do J = 1, N do I = 1, M < code omitted > Delta = abs(OldValue - NewValue) if (Delta > MaxDelta) MaxDelta = Delta enddo enddo if (MaxDelta .gt. 0.001) goto 200 Since the only use of MaxDelta is to control the jump to 200 and all that matters is whether or not it is greater than 0.001, I made MaxDelta a boolean and rewrote the snippet as: MaxDelta = .false. do J = 1, N do I = 1, M < code omitted > Delta = abs(OldValue - NewValue) MaxDelta = MaxDelta .or. (Delta .gt. 0.001) enddo enddo if (MaxDelta) goto 200, thereby eliminating the conditional expression from the inner loop. A microprocessor can execute many instructions per instruction cycle. Typically, it can execute one or more memory, floating point, integer, and jump operations. To be executed simultaneously, the operations must be independent. Thick loops tend to have more instruction level parallelism than thin loops. Moreover, they reduce memory traffic by maximizing data reuse. Loop unrolling and loop fusion are two techniques to increase the size of loop bodies. Several of the codes studied benefitted from loop unrolling, but none benefitted from loop fusion. This observation is not too surprising since it is the general tendency of programmers to write thick loops. As loops become thicker, the number of temporary values grows, increasing register pressure. If registers spill, then memory traffic increases and code flow is disrupted. A thick loop with many temporary values may execute slower than an equivalent series of thin loops. The biggest gain will be achieved if the thick loop can be split into a series of independent loops, eliminating the need to write and read temporary arrays. I found such an occasion in code 10 where I split the loop do i = 1, n do j = 1, m A24(j,i)= S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i) B24(j,i)= S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i) A25(j,i)= S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i) B25(j,i)= S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i) C24(j,i)= S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i) D24(j,i)= S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i) C25(j,i)= S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i) D25(j,i)= S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i) end do end do into two disjoint loops do i = 1, n do j = 1, m A24(j,i)= S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i) B24(j,i)= S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i) A25(j,i)= S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i) B25(j,i)= S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i) end do end do do i = 1, n do j = 1, m C24(j,i)= S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i) D24(j,i)= S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i) C25(j,i)= S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i) D25(j,i)= S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i) end do end do
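Loop fission, as above, thins a loop whose temporaries overflow the register file; loop fusion, mentioned earlier but not shown, is the opposite transformation. A minimal sketch for contrast, with illustrative names only:

/* Fused form of two thin loops over the same range:
       for (i = 0; i < n; i++) c[i] = a[i] + b[i];
       for (i = 0; i < n; i++) d[i] = a[i] * b[i];
   Fusing them loads a[i] and b[i] once per iteration instead of twice
   and pays the loop overhead once. */
void fused(int n, const double *a, const double *b, double *c, double *d)
{
    int i;
    for (i = 0; i < n; i++) {
        c[i] = a[i] + b[i];
        d[i] = a[i] * b[i];
    }
}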
Conclusions
Over the course of the last year, I have had the opportunity to work with over two dozen academic scientific programmers at leading research universities. Their research interests span a broad range of scientific fields. Except for two programs that relied almost exclusively on library routines (matrix multiply and fast Fourier transform), I was able to improve significantly the single processor performance of all codes. Improvements range from 2x to 15.5x with a simple average of 4.75x. Changes to the source code were at a very high level. I did not use sophisticated techniques or programming tools to discover inefficiencies or effect the changes. Only one code was parallel despite the availability of parallel systems to all developers. Clearly, we have a problem—personal scientific research codes are highly inefficient and not running in parallel. The developers are unaware of simple optimization techniques to make programs run faster. They lack education in the art of code optimization and parallel programming. I do not believe we can fix the problem by publishing additional books or training manuals. To date, the developers in question have not studied the books or manuals available, and are unlikely to do so in the future. Short courses are a possible solution, but I believe they are too concentrated to be much use. The general concepts can be taught in a three- or four-day course, but that is not enough time for students to practice what they learn and acquire the experience to apply and extend the concepts to their codes. Practice is the key to becoming proficient at optimization. I recommend that graduate students be required to take a semester-length course in optimization and parallel programming. We would never give someone access to state-of-the-art scientific equipment costing hundreds of thousands of dollars without first requiring them to demonstrate that they know how to use the equipment. Yet the criterion for time on state-of-the-art supercomputers is at most an interesting project. Requestors are never asked to demonstrate that they know how to use the system, or can use the system effectively. A semester course would teach them the required skills. Government agencies that fund academic scientific research pay for most of the computer systems supporting scientific research as well as the development of most personal scientific codes. These agencies should require graduate schools to offer a course in optimization and parallel programming as a condition of funding.
About the Author
John Feo received his Ph.D. in Computer Science from The University of Texas at Austin in 1986. After graduate school, Dr. Feo worked at Lawrence Livermore National Laboratory where he was the Group Leader of the Computer Research Group and principal investigator of the Sisal Language Project. In 1997, Dr. Feo joined Tera Computer Company where he was project manager for the MTA, and oversaw the programming and evaluation of the MTA at the San Diego Supercomputer Center. In 2000, Dr. Feo joined Sun Microsystems as an HPC application specialist. He works with university research groups to optimize and parallelize scientific codes. Dr. Feo has published over two dozen research articles in the areas of parallel programming, parallel programming languages, and application performance.

    Read the article

  • Editor's Notebook - Social Aura: Insights from the Oracle Social Media Summit

    - by user462779
    [Photo caption: Panelists talk social marketing at the Oracle Social Media Summit]
On November 14, I traveled to Las Vegas for the first-ever Oracle Social Media Summit. The two-day event featured an impressive collection of social media luminaries including: David Kirkpatrick (founder and CEO of Techonomy Media and author of The Facebook Effect), John Yi (Head of Marketing Partnerships, Facebook), Matt Dickman (EVP of Social Business Innovation, Weber Shandwick), and Lyndsay Iorio (Social Media & Communications Manager, NBC Sports Group), among others. It was also a great opportunity to talk shop with some of our new Vitrue and Involver colleagues who have been returning great social media results even before their companies were acquired by Oracle. I was live tweeting the event from @OracleProfit, which was great for those who wanted to follow along with the proceedings from the comfort of their office or blackjack table. But I've also found over the years that live tweeting an event is a handy way to take notes: I can sift back through my record of what people said or thoughts I had at the time and organize the Twitter messages into some kind of summary account of the proceedings. I've had nearly a month to reflect on the presentations and conversations at the event and a few key topics have emerged: David Kirkpatrick's comment during the opening presentation really set the stage for the conversations that followed. Especially if you are a marketer or publisher, the idea that you are in a one-way broadcast relationship with your audience is a thing of the past. "Rising above the noise" does not mean reaching for a megaphone, ALL CAPS, or exclamation marks. Hype will not motivate social media denizens to do anything but unfollow and tune you out. But knowing your audience, creating quality content and/or offers for them, treating them with respect, and making an authentic effort to please them: that's what I believe is now necessary. And Kirkpatrick's comment early in the day really made the point. Later in the day, our friends @Vitrue demonstrated this point by elaborating on a comment by Facebook's John Yi. If a social strategy is comprised of nothing more than cutting/pasting the same message into different social media properties, you're missing the opportunity to have an actual conversation. That's not shouting at your audience, but it does feel like an empty gesture.
[Image caption: Walter Benjamin, perplexed by auraless Twitter messages]
Not to get too far afield, but 20th century cultural critic Walter Benjamin has a concept that is useful for understanding the dynamics of the empty social media gesture: Aura. In his work The Work of Art in the Age of Mechanical Reproduction, Benjamin struggled to understand the difference he perceived between the value of a hand-made art object (a painting, wood cutting, sculpture, etc.) and a photograph. For Benjamin, aura is similar to the "soul" of an artwork--the intangible essence that is created when an artist picks up a tool and puts creative energy and effort into a work. I'll defer to Wikipedia: "He argues that the "sphere of authenticity is outside the technical" so that the original artwork is independent of the copy, yet through the act of reproduction something is taken from the original by changing its context. He also introduces the idea of the "aura" of a work and its absence in a reproduction." So make sure you put aura into your social interactions. Don't just mechanically reproduce them. 
Keeping aura in your interactions requires the intervention of an actual human being. That's why @NoahHorton's comment about content curation struck me as incredibly important. Maybe it's just my own prejudice, being in the content curation business myself. And it's not to totally discount machine-aided content management systems, content recommendation engines, and other tech-driven tools for building an exceptional content experience. It's just that without that human interaction--that editor who reviews the analytics and responds to user feedback--interactions over social media feel a bit empty. It is SOCIAL media, right? (We'll leave the conversation about social machines for another day). At the end of the day, experimentation is key. Just like trying to find that right joke to tell at the beginning of your presentation or that good opening like at a cocktail party, social media messages and interactions can take some trial and error. Don't be afraid to try things, tinker with incomplete ideas, abandon things that don't work, and engage in the conversation. And make sure your heart is in it, otherwise your audience can tell. And finally:

    Read the article

  • Developer Training – 6 Online Courses to Learn SQL Server, MySQL and Technology

    - by Pinal Dave
    Video courses are the next big thing and I am so happy that I have so far authored 6 different video courses with Pluralsight. Here is the list of the courses. I have listed all of my video courses over here. Note: If you click on the courses and it does not open, you need to login to Pluralsight with a valid username and password or sign up for a FREE trial. Please leave a comment with your favorite course in the comment section. Random 10 winners will get surprise gift via email. Bonus: If you list your favorite module from the course site. SQL Server Performance: Introduction to Query Tuning SQL Server performance tuning is an in-depth topic, and an art to master. A key component of overall application performance tuning is query tuning. Writing queries in an efficient manner, and making sure they execute in the most optimal way possible, is always a challenge. The basics revolve around the details of how SQL Server carries out query execution, so the optimizations explored in this course follow along the same lines. Click to View Course SQL Server Performance: Indexing Basics Indexes are the most crucial objects of the database. They are the first stop for any DBA and Developer when it is about performance tuning. There is a good side as well evil side of the indexes. To master the art of performance tuning one has to understand the fundamentals of the indexes and the best practices associated with the same. This course is for every DBA and Developer who deals with performance tuning and wants to use indexes to improve the performance of the server. Click to View Course SQL Server Questions and Answers This course is designed to help you better understand how to use SQL Server effectively. The course presents many of the common misconceptions about SQL Server, and then carefully debunks those misconceptions with clear explanations and short but compelling demos, showing you how SQL Server really works. This course is for anyone working with SQL Server databases who wants to improve her knowledge and understanding of this complex platform. Click to View Course MySQL Fundamentals MySQL is a popular choice of database for use in web applications, and is a central component of the widely used LAMP open source web application software stack. This course covers the fundamentals of MySQL, including how to install MySQL as well as written basic data retrieval and data modification queries. Click to View Course Building a Successful Blog Expressing yourself is the most common behavior of humans. Blogging has made easy to express yourself. Just like a letter or book has a structure and formula, blogging also has structure and formula. In this introductory course on blogging we will go over a few of the basics of blogging and show the way to get started with blogging immediately. If you already have a blog, this course will be even more relevant as this will discuss many of the common questions and issue you face in your blogging routine. Click to View Course Introduction to ColdFusion ColdFusion is rapid web application development platform. In this course you will learn the basics of how to use ColdFusion platform and rapidly develop web sites. The course begins with learning basics of ColdFusion Markup Language and moves to common development language practices. From there we move to frequent database operations and advanced concepts of Forms, Sessions and Cookies. The last module sums up all the concepts covered in the course with sample application. 
Click to View Course Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, T SQL, Technology

    Read the article

  • Is a university education really worth it for a good programmer?

    - by Jon Purdy
    The title says it all, but here's the personal side of it: I've been doing design and programming for about as long as I can remember. If there's a programming problem, I can figure it out. (Though admittedly StackOverflow has allowed me to skip the figuring out and get straight to the doing in many instances.) I've made games, esoteric programming languages, and widgets and gizmos galore. I'm currently working on a general-purpose programming language. There's nothing I do better than programming. However, I'm just as passionate about design. Thus when I felt leaving high school that my design skills were lacking, I decided to attend university for New Media Design and Imaging, a digital design-related major. For a year, I diligently studied art and programmed in my free time. As the next year progressed, however, I was obligated to take fewer art and design classes and more technical classes. The trouble was of course that these classes were geared toward non-technical students, and were far beneath my skill level at the time. No amount of petitioning could overcome the institution's reluctance to allow me to test out of such classes, and the major offered no promise for any greater challenge in the future, so I took the extreme route: I switched into the technical equivalent of the major, New Media Interactive Development. A lot of my credits moved over into the new major, but many didn't. It would have been infeasible to switch to a more rigorous technical major such as Computer Science, and having tutored Computer Science students at every level here, I doubt I would be exposed to anything that I haven't already or won't eventually find out on my own, since I'm so involved in the field. I'm now on track to graduate perhaps a year later than I had planned, which puts a significant financial strain on my family and my future self. My schedule continues to be bogged down with classes that are wholly unnecessary for me to take. I'm being re-introduced to subjects that I've covered a thousand times over, simply because I've always been interested in it all. And though I succeed in avoiding the cynical and immature tactic of failing to complete work out of some undeserved sense of superiority, I'm becoming increasingly disillusioned by the lack of intellectual stimulation. Further, my school requires students to complete a number of quarters of co-op work experience proportional to their major. My original major required two quarters, but my current requires three, delaying my graduation even more. To top it all off, college is putting a severe strain on my relationship with my very close partner of a few years, so I've searched diligently for co-op jobs in my area, alas to no avail. I'm now in my third year, and approaching that point past which I can no longer handle this. Either I keep my head down, get a degree no matter what it takes, and try to get a job with a company that will pay me enough to do what I love that I can eventually pay off my loans; or I cut my losses now, move wherever there is work, and in six months start paying off what debt I've accumulated thus far. So the real question is: is a university education really more than just a formality? It's a big decision, and one I can't make lightly. I think this is the appropriate venue for this kind of question, and I hope it sticks around for the sake of others who might someday find themselves in similar situations. My heartfelt thanks for reading, and in advance for your help.

    Read the article

  • Join us on our Journey to be #1 in SaaS!

    - by jessica.ebbelaar(at)oracle.com
    WHY ORACLE? Oracle is a robust organization that has proven to maintain growth and innovation at all levels with a constant evolving attitude. The main ingredient of Oracles success is the 105.000 talented employees who constantly amaze each other in building a better and more innovative organization. Oracle is a company where YOU can make a difference. What is OD? Oracle Direct is a state-of-the-art, multi-channel EMEA sales operation bringing to life the benefits of Oracle’s complete technology stack. It offers you the unique opportunity to work with the most talented and like-minded sales professionals in the industry. You will have access to world class training and structured career development programmes allowing you to accelerate your Solution Sales career across a multitude of product lines and a choice of attractive locations. What positions are OD Hiring? Oracle is on a journey to be the #1 SaaS vendor in EMEA. Due to recent expansion and acquisitions within our Cloud Business, we are now growing our EMEA Cloud Applications Sales Group in Dublin. We have many exciting NEW opportunities across our CRM and HCM SaaS Sales teams. As a SaaS Sales Account Manager, you will proactively manage an assigned territory / vertical with responsibility for the full sales cycle. This role requires strong business development, solution selling, account management and closing skills. What is the Business Development Group (BDG) The Business Development Group is the key entry point in Oracle for the future Sales and Management talent of the organisation. We are the Demand Generation engine for Oracle in EMEA. We provide revenue generating, quality sales pipeline to our Inside and Field Sales professionals as well as to our Channel Partners. Our current focus is to provide an agile and flexible service offering to our customers and stakeholders to meet ever changing business needs, whilst constantly striving to improve the customer experience, quality of our pipeline, market coverage and penetration. As a SaaS Business Development Consultant (BDC) you will be the first touch point with new customers. 
Your goal is to proactively identify and qualify business opportunities leading to revenue for Oracle. You will work closely with your Inside Sales colleagues, who will progress your qualified pipeline and opportunities. Work for us: Work for the only multi-pillar SaaS vendor in the market. Be part of a FUN, fast-paced and truly international sales team. Develop your solution sales EXPERTISE. Drive your CAREER development within a structured and supportive environment. The Profile: You have a passion for selling cutting-edge technology. You thrive in a fast-paced and dynamic work environment where being the best is paramount. Your priority is always the customer. You live for a challenge and you love to win. Join us on our Journey to be #1 in SaaS and be part of our Cloud Success Story! You will find more information about open roles here.

    Read the article

  • Best approach to depth streaming via existing codec

    - by Kevin
    I'm working on a development system (and game) intended for games set mostly in static third-person views. We produce our scenery by CG and photographic techniques. Our background art is rendered off-line by a production-grade renderer. To allow the runtime imagery to properly interact with the background art, I wrote a program to convert from depth output by Mental Ray into a texture, and a pixel shader to draw a quad such that the Z data comes from the texture. This technique is working out very well, but now we've decided that some of the camera angle changes between scenes should be animated. The animation itself is straightforward to produce from our CG models. We intend to encode it to some HD video codec such as H.264. The problem is that in order to maintain our runtime imagery on the screen, the depth buffer will need to be loaded for each video frame. Due to the bandwidth, the video's depth data will need to be compressed efficiently. I've looked into methods for performing temporal compression of depth info and found an interesting research paper here: http://web4.cs.ucl.ac.uk/staff/j.kautz/publications/depth-streaming.pdf The method establishes a mapping between 16-bit depth values and YCbCr values. The mapping is tuned to the properties of existing video codecs in order to maximize precision of the decoded depths after the YCbCr has undergone video compression. It allows an existing, unmodified video codec to be used on the backend. I'm looking at how to pull this off with the least possible work. (This design change was unplanned.) Our game engine itself is native C++, presently for Win32 and DirectX, although we've worked hard to keep platform dependence segregated because we intend other ports. We don't have motion video facilities in the engine yet but will ultimately need that anyway for cinematics. I was planning on using some off-the-shelf motion video solution we can plug into our engine, and haven't chosen one yet. This new added requirement makes selecting one harder since, among other things, we'll now need to bypass colourspace conversion on one of the streams, and also will need to be playing two streams simultaneously in lockstep, on top of in some cases audio on one of them (for the cinematics). I'm also wondering if it's possible (or even useful) to do the conversion from YCbCr to depth in a pixel shader, or if it's better to just do it in CPU and separately load the resulting depth values into a locked tex. The conversion unfortunately does involve branching logic per-pixel. (There are more naive mappings that don't need branching, but they produce inferior results.) It could be reduced to a table lookup but the table would be 32MB. Programming is second-nature to me but I'm not that experienced with pix shaders and have zero knowledge of off-the-shelf video solutions. I'd therefore be interested in advice from others who may have dealt more with depth streaming, pixel shaders, and/or off-the-shelf codecs, regarding how feasible the proposed application is and what off-the-shelf video systems out there would best get along with this usage case.
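A note on the general shape of such an encoding may help readers who have not seen the technique. The sketch below is deliberately naive (it simply splits the 16 bits across luma and one chroma channel) and is not the mapping from the linked paper, which uses a smoother, periodic encoding precisely because a raw bit-split is wrecked by chroma subsampling and quantization; it only shows the kind of per-pixel transform involved, with the paper's mapping substituted for the function bodies in practice:

#include <stdint.h>

/* Naive 16-bit depth <-> YCbCr mapping, for illustration only. */
typedef struct { uint8_t y, cb, cr; } YCbCr;

static YCbCr depth_encode(uint16_t d)
{
    YCbCr p;
    p.y  = (uint8_t)(d >> 8);    /* high bits in luma       */
    p.cb = (uint8_t)(d & 0xFF);  /* low bits in one chroma  */
    p.cr = 128;                  /* unused, neutral chroma  */
    return p;
}

static uint16_t depth_decode(YCbCr p)
{
    return (uint16_t)((p.y << 8) | p.cb);
}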

    Read the article

  • How can we leverage NSAppearance?

    - by Brad Allred
    I was reading the Cocoa documentation and stumbled across some new features in the 10.9 API. From the docs I gather that the NSAppearance class and a related protocol NSAppearanceCustomization Appear to be a means of customizing the appearance of NSView and its descendants. An NSAppearance object represents a file that specifies a standard or custom appearance that applies to a subset of UI elements in an app. An app can contain multiple appearance files and—because NSAppearance conforms to NSCoding—you can use Interface Builder to assign UI elements to an appearance. Typically, you customize a window by using Xcode to create an appearance file that contains the views you want to customize and the custom art that should be applied to them. Xcode transforms the file’s art content into a runtime format that AppKit can draw when the specified views are displayed. Well that all sounds neat and promising, but nowhere in the documentation can I find what an appearance file is or how to make one. Google searches are coming up empty other than for the thin documentation I have already read. I do see that UIKit has a similar sounding UIAppearance class, but from what I can tell this is not a straight port of the UIKit class. Does anybody know how to make this magic "appearance file" and what exactly we can do with it?

    Read the article

  • Need help regarding one LALR(1) parsing.

    - by AppleGrew
    I am trying to parse a context-free language, called Context Free Art. I have created its parser in Javascript using a YACC-like JS LALR(1) parser generator JSCC. Take the example of following CFA (Context Free Art) code. This code is a valid CFA. startshape A rule A { CIRCLE { s 1} } Notice the A and s in above. s is a command to scale the CIRCLE, but A is just a name of this rule. In the language's grammar I have set s as token SCALE and A comes under token STRING (I have a regular expression to match string and it is at the bottom of of all tokens). This works fine, but in the below case it breaks. startshape s rule s { CIRCLE { s 1} } This too is a perfectly valid code, but since my parser marks s after rule as SCALE token so it errors out saying that it was expecting STRING. Now my question is, if there is any way to re-write the production rules of the parser to account for this? The related production rule is:- rule: RULE STRING '{' buncha_replacements '}' [* rule(%2, 1) *] | RULE STRING RATIONAL '{' buncha_replacements '}' [* rule(%2, 1*%3) *] ; One simple solution I can think of is create a copy of above rule with STRING replaced by SCALE, but this is just one of the many similar rules which would need such fixing. Furthermore there are many other terminals which can get matched to STRING. So that means way too many rules!

    Read the article

  • How would the conversion of a custom CMS using a text-file-based database to Drupal be tackled?

    - by James Morris
    Just today I've started using Drupal for a site I'm designing/developing. For my own site http://jwm-art.net I wrote a user-unfriendly CMS in PHP. My brief experience with Drupal is making me want to convert from the CMS I wrote. A CMS whose sole method (other than comments) of automatically publishing content is by logging in via SSH and using NANO to create a plain text file in a format like so*: head<<END_HEAD title = Audio keywords= open,source,audio,sequencing,sampling,synthesis descr = Music, noise, and audio, created by James W. Morris. parent = home END_HEAD main<<END_MAIN text<<END_TEXT Digital music, noise, and audio made exclusively with @=xlink=http://www.linux-sound.org@:Linux Audio Software@_=@. END_TEXT image=gfb@--@;Accompanying image for penonpaper-c@right ilink=audio_2008 br= ilink=audio_2007 br= ilink=audio_2006 END_MAIN info=text<<END_TEXT I've been making PC based music since the early nineties - fortunately most of it only exists as tape recordings. END_TEXT ( http://jwm-art.net/dark.php?p=audio - There's just over 400 pages on there. ) *The jounal-entry form which takes some of the work out of it, has mysteriously broken. And it still required SSH access to copy the file to the main dat dir and to check I had actually remembered the format correctly and the code hadn't mis-formatted anything (which it always does). I don't want to drop all the old content (just some), but how much work would be involved in converting it, factoring into account I've been using Drupal for a day, have not written any PHP for a couple of years, and have zero knowledge of SQL? How might a team of developers tackle this? How do-able is it for one guy in his spare time?

    Read the article

  • Handling one-to-many relationship with ZF partialLoop

    - by snaken
    Let's say I'm listing musical artists; each artist has basic information like Name, Age, etc. stored in an artist table. They also have entries in an Albums table (album name/album cover etc), referencing the artist table using the artist id as a foreign key. I have the Model_Artist (Artist.php) file: class Model_Artist extends Zend_Db_Table_Abstract { protected $_name = 'artist'; protected $_dependentTables = array('Model_ArtistAlbums'); public function fetchArtists() { $select = $this->select(); return $this->fetchAll($select); } } and the Model_ArtistAlbums (ArtistAlbums.php) file: class Model_ArtistAlbums extends Zend_Db_Table_Abstract { protected $_name = 'albums'; protected $_referenceMap = array( 'Artists' => array( 'columns' => 'alb_art_id', 'refTableClass' => 'Model_Artist', 'refColumns' => 'art_id' ) ); // etc } in my controller: public function indexAction() { /* THIS WORKS $art = new Model_Artist(); $artRowset = $art->find(1); $art1 = $artRowset->current(); $artalbums = $art1->findDependentRowset('Model_ArtistAlbums'); foreach($artalbums as $album){ echo $album->alb_title."<br>"; } */ $arts = new Model_Artist(); $this->view->artists = $arts->fetchArtists(); } in the view file: $this->partial()->setObjectKey('artist'); echo $this->partialLoop('admin/list-artists.phtml', $this->artists); but with this code in artists/list-artists.phtml: foreach($this->artist->findDependentRowset('albums') as $album): // other stuff endforeach; I get this error: Fatal error: Call to a member function findDependentRowset() on a non-object A var_dump of $this->artist gives NULL.

    Read the article

  • Linking AS code to symbols defined in an external SWC?

    - by Ender
    (apologies ahead of time, I only really know Flash; my Flex experience is basically nil. There may be a very standard and obvious workflow solution that Flex people know about) I have a number of UI elements that are graphically quite complex (they're not components, they're just Sprites). Since it takes a long time to compile them, I've been trying to move them into an external .swc. However, I want to associate some code with these classes, but I don't want to have to recompile the graphical assets every time I make a code change. At the moment I have it set up like this: UI elements are created in a separate FLA and exported to a SWC. In my primary FLA, I have actionscript classes that extend each of the graphical assets in the SWC. For example: external.swc: (some symbol defined in the Library and exported for actionscript in frame 1) class: com.foo.WidgetGraphic base: flash.display.Sprite main.fla: Widget.as: package com.foo { public class Widget extends WidgetGraphic { ... } } This works, but is time-consuming and prone to error. I'd rather be able to avoid having to inherit from each graphical asset, and just define them directly. Is there a better way to do what I'm trying to accomplish? Note: the main concern here is compile time. I don't have any movies or audio or fonts, just a lot of vector art assets that appear to be slowing down my compilation time significantly. When I'm debugging I'm only making code changes, and would rather not have to keep recompiling the art...

    Read the article

  • git: how to not delete files when rebasing commits with file deletion

    - by Benjol
    I have a branch that I would like to rebase onto the latest commit on my master. The problem is that one of the intervening commits on master was to delete and ignore a particular set of files (see this question). If I just do a straight rebase, those files will get deleted again. Is there any way of doing this, inside git, rather than copying all the files out by hand, then copying them back in again afterwards? Or should I do something like create a new branch off master, then merge in just the commits from the old branch? Attempted ASCII art: master branch | w work in progress on branch C | committed further changes on master | | B / committed delete/ignore files on master | 2 committed changes on branch | / A / committed changes on master which I now need to get branch working | 1 committed changes on branch 0___/ created branch (Doing the art, I realise that I could just rebase branch from A, then merge when I've finished, but I'd still like to know if there's a way to do this 'properly') UPDATE: Warning to anyone trying this. The solution proposed here is fine, but when you checkout master again, the B commit will be re-applied, and you lose all your files again :(

    Read the article

  • Turning off antialiasing in Löve2D

    - by cjanssen
    I'm using Löve2D for writing a small game. Löve2D is an open source game engine for Lua. The problem I'm encountering is that some antialias filter is automatically applied to your sprites when you draw it at non-integer positions. love.graphics.draw( sprite, x, y ) So when x or y is not round (for example, x=100.24), the sprite appears blurred. The same happens when the sprite size is not even, because (x,y) points to the center of the sprite. For example, a sprite which is 31x30 big will appear blurred again, because its pixels are painted in non-integer positions. Since I am using pixel art, I want to avoid this all the way, otherwise the art is destroyed by this effect. The workaround I am using so far is to force the coordinates to be round by littering the code with calls to math.floor(), and forcing all the sprites to have even sizes by adding a row or column of transparent pixels with the paint program, if needed. Is there some command to deactivate the antialiasing I can call at program startup?

    Read the article

< Previous Page | 10 11 12 13 14 15 16 17 18 19 20 21  | Next Page >