Search Results

Search found 24675 results on 987 pages for 'design tools'.


  • Including the functionality of a tool within another program?

    - by darren
    Hi there. I would like to write an application, for my own interest, that graphically visualizes some network concepts. Basically I would like to show the output from tools like ping, traceroute and nmap. The most obvious approach seems to be to use pipes to call out to these tools from my C program, and process the information they return. However, I would like to avoid this heavy-handed approach if possible. My question is: is it possible to somehow link against these tools, or are there APIs that can be used to gain programmatic access instead? If so, is this behavior available on a tool-by-tool basis only? One reason for wanting to do this is to keep everything in a single process / address space and to avoid dependence on these external tools. For example, if I wrote an iPhone application, I would not be able to spawn processes to call out to the external tools themselves. Thanks for any advice or suggestions.
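
    For reference, the pipe-based baseline the asker wants to avoid is only a few lines in C; anything beyond this (linking against the tools) is generally unsupported, since ping, traceroute and nmap do not ship stable library APIs. A minimal sketch using POSIX popen():

        #include <stdio.h>

        /* Run an external command and consume its output line by line.
           Returns 0 on success, -1 if the command could not be started. */
        static int capture_tool_output(const char *cmd) {
            FILE *pipe = popen(cmd, "r");   /* spawns a shell - the heavy-handed part */
            if (pipe == NULL)
                return -1;

            char line[512];
            while (fgets(line, sizeof line, pipe) != NULL) {
                /* parse/visualize here instead of printing */
                fputs(line, stdout);
            }
            pclose(pipe);
            return 0;
        }

        int main(void) {
            return capture_tool_output("ping -c 4 example.com");
        }

    To stay in a single address space (e.g., on iPhone), the realistic route is to re-implement the small piece you need - for ping, an ICMP echo over a raw or datagram socket - rather than linking against the tools themselves.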

  • Trying to override onbeforeunload in first step of form wizard

    - by Dirty Bird Design
    This pertains to a form wizard with 5 steps; I don't want the user to hit back and lose form data in any of steps 2-4. I have added a flag for the submit function and need to add one for the first step: if they got there by accident and try to leave, I don't want the confirm dialog popping up.

        <script type="text/javascript">
        $(document).ready(function () {
            var action_is_post = false;
            $("form").submit(function () {
                action_is_post = true;
            });
            // This is the trouble spot. On the first step the "navigation"
            // of the form has a class of current (on step one = current).
            $(this).ready(function () {
                if ($("#stepDesc0").is(".current")) {
                    action_is_post = true;
                }
            });
            window.onbeforeunload = confirmExit;
            function confirmExit() {
                if (!action_is_post)
                    return 'Using the browsers back, refresh or close button will cause you to lose all form data. Please use the Next and Back buttons on the form.';
            }
        });
        </script>
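
    A sketch of one way out of the trouble spot: rather than setting the flag once at load time, check which step is current at the moment the user tries to leave. Untested, and it assumes the wizard keeps the .current class on the active step's #stepDescN element:

        <script type="text/javascript">
        $(document).ready(function () {
            var action_is_post = false;
            $("form").submit(function () {
                action_is_post = true;
            });
            window.onbeforeunload = function () {
                // Don't prompt on a real submit, and don't prompt on step one.
                if (!action_is_post && !$("#stepDesc0").hasClass("current")) {
                    return 'Using the browser back, refresh or close button will cause ' +
                           'you to lose all form data. Please use the Next and Back ' +
                           'buttons on the form.';
                }
            };
        });
        </script>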

  • trouble adding success flag to onbeforeunload

    - by Dirty Bird Design
    Having issues with onbeforeunload. I have a long form broken into segments via a jQuery wizard plugin. I need to pop a confirm dialog if you hit back, refresh, close etc. on any step, but it must NOT POP the confirm dialog on click of the submit button. I had it working, or at least I thought so; it doesn't now.

        <script type="text/javascript">
        var okToSubmit = false;
        window.onbeforeunload = function () {
            document.getElementById('Register').onclick = function () {
                okToSubmit = true;
            };
            if (!okToSubmit)
                return "Using the browsers back button will cause you to lose all form data. Please use the Next and Back buttons on the form";
        };
        </script>

    'Register' is the submit button ID. Please help!
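
    The likely culprit: the click handler is being attached inside onbeforeunload, so the flag can never be set before the first prompt fires. A hedged rework (untested) that wires the handler up front:

        <script type="text/javascript">
        var okToSubmit = false;

        // Attach once, after the DOM is ready - not inside the unload handler.
        document.getElementById('Register').onclick = function () {
            okToSubmit = true;
        };

        window.onbeforeunload = function () {
            if (!okToSubmit) {
                return "Using the browser back button will cause you to lose all " +
                       "form data. Please use the Next and Back buttons on the form";
            }
        };
        </script>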

  • applying a var to an if statement jquery

    - by Dirty Bird Design
    Need to set a var when a condition is met. This syntax isn't throwing errors, but it's not working:

        <script type="text/javascript">
        $(document).ready(function () {
            var action_is_post = false;
            // stuff here
            $(this).ready(function () {
                if ($("#stepDesc0").is(".current")) {
                    action_is_post = true;
                }
            });
            // stuff here
        });
        </script>

    Should I use something other than .ready? Do I even need the $(this).ready(function () ... part? I need it to set the var when #stepDesc0 has the class current.
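
    A minimal sketch of the direct route: the nested $(this).ready(...) wrapper adds nothing, since the code is already inside the document-ready handler. Assuming #stepDesc0 carries the current class at load time:

        <script type="text/javascript">
        $(document).ready(function () {
            // Evaluate the condition once the DOM is ready - no inner .ready needed.
            var action_is_post = $("#stepDesc0").hasClass("current");
            // stuff here
        });
        </script>

    Note that this captures the state at page load; if the class changes later (e.g., as the wizard advances), the check has to be re-run at that point.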

  • Display a summary of form element values before submit

    - by Dirty Bird Design
    Using jquery formtowizard.js with an ajax submit. I want the last step of the form to display a summary of all form fields that were filled out. I can get it to work in isolated test cases, but not in full use. The form:

        <form id="Commission" method="post" action="PHP/CommissionsSubmit.php">
            <fieldset id="Initial">
                <legend>Enter Your Information</legend>
                <ul>
                    <li>
                        <label for="FName">First Name*</label><input type="text" name="FName" id="FName">
                    </li>
                    <!-- repeat many li's -->
                </ul>
            </fieldset>
            <fieldset>
                <legend>Second Step</legend>
                <!-- more li's -->
            </fieldset>
            <fieldset>
                <legend>Confirmation</legend>
                <span id="CFName"></span>
            </fieldset>
        </form>

    The jQuery to get the "#CFName" value:

        $('#FName').keyup(function() {
            $('#CFName').val($(this).val());
        });

    I can't get the value to appear in the span "#CFName"... Could this have to do with the "serialize" function or anything going on with my $.ajax submit function? It's happening before submit... Please help! I apologize, but I've gone back and forth with "#CFName" being a span and an input, using .val and .html respectively.
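
    One thing that stands out: .val() only works on form controls, so writing to a <span> with it silently does nothing. A hedged sketch that fills the summary when the user advances steps (populating on keyup works too, but doing it once on step change is less fragile):

        // Copy current form values into the confirmation spans.
        function fillSummary() {
            $('#CFName').text($('#FName').val());
            // ...repeat for the other summary fields
        }

        // If the wizard exposes no step-change hook, piggyback on the
        // Next buttons it generates (the .next class is an assumption -
        // adjust the selector to whatever the plugin actually renders).
        $(document).on('click', '.next', fillSummary);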

  • Facebook call back function with timeout

    - by Dirty Bird Design
    So I've been getting my ass kicked pretty good by Facebook's moving target of an API. I need to display some hidden content after a person clicks 'like' on a landing page. I can somewhat get this to work: when the user clicks 'like', the normal fb dialogue appears and then goes away immediately, and the content is displayed. I have 'achieved' this with the following js:

        <script>
        FB.Event.subscribe('edge.create', function(href, widget) {
            document.getElementById('goodies').style.display = "block";
            document.getElementById('fb-content').style.display = "block";
            document.getElementById('copy').style.display = "none";
        });
        </script>

    I cannot find any documentation about a callback event after someone hits "post to facebook" or after the dialogue closes, only after they hit like. How would I incorporate a setTimeout function into this to give people some time to fill out the fb dialogue? Thanks. If anyone has a better way to do this I'm all ears. This is for a business page, and I cannot seem to add an app to get an app ID anymore, so the API is pretty useless to me at this point. Also, if the URL to be liked is a fb page, the callbacks don't seem to fire. Other code used:

        <html xmlns:fb="http://ogp.me/ns/fb#">
        <div id="fb-root"></div>
        <script>(function(d, s, id) {
            var js, fjs = d.getElementsByTagName(s)[0];
            if (d.getElementById(id)) return;
            js = d.createElement(s); js.id = id;
            js.src = "//connect.facebook.net/en_US/all.js#xfbml=1";
            fjs.parentNode.insertBefore(js, fjs);
        }(document, 'script', 'facebook-jssdk'));
        </script>
        <fb:like href="onlynonfburl.com" send="false" layout="button_count" width="450" show_faces="false" font="arial"></fb:like>
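
    For the specific question - delaying the reveal - wrapping the DOM changes in a setTimeout inside the edge.create callback is enough. An untested sketch with an assumed five-second grace period:

        <script>
        FB.Event.subscribe('edge.create', function (href, widget) {
            // Give the user a moment to finish the share dialogue
            // before swapping the page content.
            setTimeout(function () {
                document.getElementById('goodies').style.display = "block";
                document.getElementById('fb-content').style.display = "block";
                document.getElementById('copy').style.display = "none";
            }, 5000); // 5000 ms is an arbitrary choice - tune to taste
        });
        </script>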

  • problem with live function

    - by Dirty Bird Design
    I had this working to spec, until the specs changed. This markup is now brought in via ajax .load. Easy enough to bring it in, and all my other functions on the loaded page work in the parent page except this one:

        $("#CME").hide();
        $(function() {
            $("#CME1, #CMEQL, #CBT1, #CBTQL, #NYM1, #CMX1").live("change", function() {
                var checkBoxes = $("#CME1, #CMEQL, #CBT1, #CBTQL, #NYM1, #CMX1").filter(":not(:checked)");
                if (checkBoxes.length == 0) {
                    $("#CME").slideDown("fast");
                } else {
                    $("#CME").slideUp("fast");
                }
            });
        }); //CME

    The div "#CME" is not hidden, and the .live('change', function () {...}) isn't working. I have other similar .live functions that are working and structured the same. How do I bind the initial $(function() with .live, and why isn't the .hide() working?
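
    A plausible explanation: .live() survives ajax replacement because it delegates, but the bare $("#CME").hide() runs once, before the ajax-loaded markup exists, so the new #CME is never hidden. A hedged sketch that re-hides in the .load() callback (the container selector and URL are placeholders):

        $('#container').load('fragment.html', function () {
            // The fragment exists only now - hide its #CME here,
            // not in code that ran at the original page load.
            $('#CME').hide();
        });

        // The delegated handler can stay where it is: .live() binds at the
        // document level, so it keeps working for re-loaded elements.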

  • A friend told me Python is garbage, I'm taking web design classes in the Spring and I have a textbook on C++. What should I do? [on hold]

    - by user107165
    I don't know if I should start digging into Python beforehand just to get acquainted with programming and "whet my appetite", or if I should work on the C++ book... Python definitely has more resources around town, and I like the beginner-friendly approach that seems to go along with every site devoted to it. Or should I just wait for my assignments that start in 4 months? Any tips for an aspiring programmer?

  • Alert user when they hit the browser back button - with good reason

    - by Dirty Bird Design
    I know this borders on the taboo here, and please don't reply with "you should never do this" etc. I have a very long form in a wizard; some users are so used to the browser's back and forward buttons that they use those instead of the "Back" and "Next" buttons on the form wizard. If they hit the browser's back button they lose all of their form data, which is a pain in the ass, since the form is so long. Is it possible to display an alert that will have a "take me out of here" and a "cancel" button, so that if they hit cancel it will cancel the back navigation? Thanks!
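
    The browser won't let a page veto the back button outright, but the standard beforeunload prompt gets close: returning a string makes the browser itself show a stay/leave dialog. A minimal sketch:

        window.onbeforeunload = function () {
            // Returning a string triggers the browser's built-in
            // "are you sure you want to leave?" confirmation.
            return 'You will lose all form data if you leave this page. ' +
                   'Please use the Next and Back buttons on the form.';
        };

    Note the two buttons are the browser's own; the page cannot customize their labels, only the message text.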

  • Calculate a div's height with jQuery - minus header and footer

    - by Dirty Bird Design
    I'm using a sticky footer (negative-margin solution) and it works fine. What I need to do is calculate the window's height, subtract the known height of the header and footer, then apply that number as the height of the main wrapper div. CSS solutions cause other issues; is there a good way to do this?

        var h = $(window).height(); // window.height() isn't a function - wrap window in $()
        var k = 300;                // header is 100px, footer is 200px
        $('#wrap').height(h - k);

    Rough idea, please help.
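
    Fleshed out a little - the calculation needs to re-run whenever the window is resized, or the wrapper goes stale. A sketch assuming the 100px/200px header and footer heights from the question:

        function sizeWrap() {
            var headerFooter = 100 + 200; // known header + footer heights
            $('#wrap').height($(window).height() - headerFooter);
        }

        $(document).ready(sizeWrap);  // size once on load
        $(window).resize(sizeWrap);   // and again on every resize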

  • IE 7 floated div auto-clearing next element ?

    - by schweb-design-llc
    Hello, I'm having trouble with the following example. Background: I first have a div (.images) floated to the right with an image output inside it. I then have a div (.product-body) with other content within it. In FF and IE 8, as expected, the .images div displays floated to the right, pushing the content within the .product-body div to the left nicely. The problem: when viewed in IE 7 compatibility mode, the .product-body div is cleared underneath the .images div, and thus the .images div does not fall nicely to the right as expected. It does this regardless of whether or not I have clear:none; on the .product-body div. Please see the example at www.hotelmarketingbudget.com - look at the source code there at the div element #content-body to see these divs. Feel free to use Firebug or IE Dev toolbar or whatnot to check this out. The relevant CSS:

        #content-body {
            width: auto;
            height: auto;
            position: relative;
            margin: 0 auto;
        }
        .product-group .images {
            float: right;
            width: auto;
            height: auto;
            position: relative;
            margin: 0 auto;
            margin-left: 15px;
        }
        .product-group .product-body {
            width: auto;
            height: auto;
            position: relative;
            margin: 0 auto;
        }

    I've spent hours already trying to figure out how to fix this - googling, reading other threads here on Stack Overflow - but alas I cannot find any solutions, and it's hard to know what words to even search with. I'm really hoping this is just some brain-fart on my part. Any advice or ideas or questions would be GREATLY appreciated!
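
    A hedged guess at a fix, based on IE 7's hasLayout quirk: auto-width non-floated blocks next to floats often misbehave until they "have layout". Triggering it with zoom is the usual low-risk experiment (untested against the live site):

        /* IE 7 and below only: give the content block hasLayout so it
           sits beside the float instead of clearing below it. */
        .product-group .product-body {
            *zoom: 1; /* star-property hack targets IE 7 and below */
        }

    If that's not it, the other usual suspect is source order: the floated .images div must appear in the markup before the content it should sit beside.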

  • Change $mailTo variable based on select input value (array)

    - by Dirty Bird Design
    I have the following select list:

        <form action="mail.php" method="POST">
            <select name="foo" id="foo">
                <option value="sales">Sales</option>
                <option value="salesAssist">Sales Assist</option>
                <option value="billing">Billing</option>
                <option value="billingAssist">Billing Assist</option>
            </select>
        </form>

    I need to route the $mailTo variable depending on which option they select: Sales and Sales Assist go to [email protected], while Billing and Billing Assist go to [email protected] PHP pseudo-code:

        <?php
        $_POST['foo']
        if inArray(sales, salesAssist) foo = "[email protected]";
        else if inArray(billing, billingAssist) foo = "[email protected]";
        $mailTo = "foo";
        ?>

    I know there is nothing correct about the above, but you can see what I am trying to do: change a variable's value based on the selected value. I don't want to do this with JS; I'd rather learn more PHP here. Thank you.
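
    A working version of the pseudo-code, sketched with in_array() and the redacted addresses left as placeholders:

        <?php
        // Whitelist which select values map to which inbox.
        $selected = isset($_POST['foo']) ? $_POST['foo'] : '';

        if (in_array($selected, array('sales', 'salesAssist'), true)) {
            $mailTo = '[email protected]';   // sales inbox
        } elseif (in_array($selected, array('billing', 'billingAssist'), true)) {
            $mailTo = '[email protected]';   // billing inbox
        } else {
            $mailTo = '[email protected]';   // fallback for unexpected input
        }

        // $mailTo can now be passed to mail() as the recipient.
        ?>

    An associative array keyed by option value ('sales' => '...', ...) works just as well and scales better as options are added.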

  • Asynchronous While Loop?

    - by o7th Web Design
    I have a pretty great SqlDataReader wrapper in which I can map the output into a strongly typed list. What I am finding now is that on larger datasets with larger numbers of columns, performance could probably be a bit better if I can optimize my mapping. In thinking about this, there is one section in particular that I am concerned about, as it seems to be the heaviest hitter:

        while (_Rdr.Read())
        {
            T newObject = new T();
            for (int i = 0; i <= _Rdr.FieldCount - 1; ++i)
            {
                PropertyInfo info = (PropertyInfo)_ht[_Rdr.GetName(i).ToUpper()];
                if ((info != null) && info.CanWrite)
                {
                    info.SetValue(newObject, (_Rdr.GetValue(i) is DBNull) ? default(T) : _Rdr.GetValue(i), null);
                }
            }
            _en.Add(newObject);
        }
        _Rdr.Close();

    What I would really like to know is whether there is a way to make this loop asynchronous. I feel that will make all the difference in the world with this beast :) Here is the entire Map method, in case anyone can see where I can make further improvements on it:

        // Map our datareader object to a strongly typed list
        private static IList<T> Map<T>(IDataReader _Rdr) where T : new()
        {
            try
            {
                Type _t = typeof(T);
                List<T> _en = new List<T>();
                Hashtable _ht = new Hashtable();
                PropertyInfo[] _props = _t.GetProperties();
                Parallel.ForEach(_props, info => { _ht[info.Name.ToUpper()] = info; });
                while (_Rdr.Read())
                {
                    T newObject = new T();
                    for (int i = 0; i <= _Rdr.FieldCount - 1; ++i)
                    {
                        PropertyInfo info = (PropertyInfo)_ht[_Rdr.GetName(i).ToUpper()];
                        if ((info != null) && info.CanWrite)
                        {
                            info.SetValue(newObject, (_Rdr.GetValue(i) is DBNull) ? default(T) : _Rdr.GetValue(i), null);
                        }
                    }
                    _en.Add(newObject);
                }
                _Rdr.Close();
                return _en;
            }
            catch (Exception ex)
            {
                _Msg += "Wrapper.Map Exception: " + ex.Message;
                ErrorReporting.WriteEm.WriteItem(ex, "o7th.Class.Library.Data.Wrapper.Map", _Msg);
                return default(IList<T>);
            }
        }
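
    The reader cannot be read from multiple threads, so the loop cannot be parallelized directly - but it can be made asynchronous. A hedged sketch (assumes .NET 4.5+, a DbDataReader rather than the bare IDataReader, and the usual System.Linq / System.Reflection / System.Threading.Tasks usings); note it also sidesteps the original's default(T) branch, which would try to assign a whole T to a single property for NULL columns:

        private static async Task<IList<T>> MapAsync<T>(DbDataReader rdr) where T : new()
        {
            // Case-insensitive property lookup, built once per call.
            var props = typeof(T).GetProperties()
                                 .Where(p => p.CanWrite)
                                 .ToDictionary(p => p.Name.ToUpperInvariant());
            var results = new List<T>();
            while (await rdr.ReadAsync().ConfigureAwait(false))
            {
                var item = new T();
                for (int i = 0; i < rdr.FieldCount; i++)
                {
                    PropertyInfo info;
                    if (props.TryGetValue(rdr.GetName(i).ToUpperInvariant(), out info)
                        && !rdr.IsDBNull(i))
                    {
                        info.SetValue(item, rdr.GetValue(i), null);
                    }
                }
                results.Add(item);
            }
            return results;
        }

    For raw per-row speed, the bigger win is usually replacing PropertyInfo.SetValue with compiled setter delegates (expression trees), since the reflection call dominates on wide rows.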

  • jQuery center multiple dynamic images in different sized containers

    - by JVK Design
    So I've found similar questions, but none that answer all the questions I have, and I know there must be a simple jQuery answer to this. I've got multiple images that are being dynamically placed in their own containing divs that have overflow:hidden; they need to fill their containing divs and be centered (horizontally and vertically) too. The containing divs will be different sizes as well. So in short:

    - multiple different-sized images fill and center in their containing divs
    - the containing divs will be different sizes
    - this will be used multiple times on a page

    The HTML I'm using (but it can be changed):

        <div class="imageHolder">
            <div class="first SlideImage">
                <img src="..." alt="..."/>
            </div>
            <div class="second SlideImage">
                <img src="..." alt="..."/>
            </div>
            <div class="third SlideImage">
                <img src="..." alt="..."/>
            </div>
        </div>

    And the CSS:

        .imageHolder {
            float: left;
            width: 764px;
            height: 70px;
            margin: 1px 0px 0px;
        }
        .imageHolder .SlideImage {
            float: left;
            position: relative;
            overflow: hidden;
        }
        .imageHolder .SlideImage img {
            position: absolute;
        }
        .imageHolder .first.SlideImage {
            width: 381px;
            height: 339px;
            margin-right: 1px;
        }
        .imageHolder .second.SlideImage { margin-bottom: 1px; }
        .imageHolder .second.SlideImage,
        .imageHolder .third.SlideImage {
            width: 382px;
            height: 169px;
        }

    Ask me anything if this doesn't make sense. Thanks in advance.
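
    A sketch of the usual jQuery recipe: wait for each image to load, scale it so it covers its container, then offset it by half the overflow in each direction. Untested, and it assumes the markup and classes above:

        $('.SlideImage img').each(function () {
            var $img = $(this);
            // Bind load so natural dimensions are known (cached images may
            // need the src re-triggered in older browsers).
            $img.load(function () {
                var $box = $img.parent(),
                    boxW = $box.width(),
                    boxH = $box.height(),
                    // scale factor so the image covers the container
                    ratio = Math.max(boxW / $img.width(), boxH / $img.height());

                var w = Math.ceil($img.width() * ratio),
                    h = Math.ceil($img.height() * ratio);

                $img.width(w).height(h).css({
                    left: (boxW - w) / 2 + 'px', // negative = crop equally on both sides
                    top:  (boxH - h) / 2 + 'px'
                });
            });
        });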

  • How can I call these urls in jquery to display content on one page?

    - by Thorbis Website Design
    OK, I figured out the jQuery part but not the parameters for them all - can anyone help figure out the parameters for each URL string? This is the jQuery I figured out (also, would this work better than the answer below?):

        $.get('adminajax.php', {'action': 'getUsers'}, function (data) {
            $('#users .users').html(data);
        });

    He sent me this in an email:

        You can specify a page by adding: p=[page #]
        You can specify a file and it will add a checkbox next to the user which
        will be checked if the user has permission to download: file=[file location]

        adminajax.php?action=createDirectory&directory=[new directory location]
        adminajax.php?action=setAvailability&user=[username]&file=[filelocation]&available=[true or false]

    I'm trying to get it to display in these HTML tags:

        <div id="files">
            <b>Files:</b>
            <ul class="files"></ul>
        </div>
        <div id="file_options">
            <b>Options:</b>
        </div>
        <div id="users">
            <b>Users:</b>
            <ul class="users"></ul>
        </div>
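
    Each query string maps one-to-one onto a $.get data object: every &key=value pair becomes a property. A hedged sketch for the two documented actions (the bracketed values and target containers are placeholders to fill in):

        // adminajax.php?action=createDirectory&directory=[new directory location]
        $.get('adminajax.php', {
            action: 'createDirectory',
            directory: '/path/to/new/dir'   // placeholder
        }, function (data) {
            $('#files .files').html(data);
        });

        // adminajax.php?action=setAvailability&user=[u]&file=[f]&available=[bool]
        $.get('adminajax.php', {
            action: 'setAvailability',
            user: 'someuser',               // placeholder
            file: 'docs/report.pdf',        // placeholder
            available: true
        }, function (data) {
            $('#users .users').html(data);
        });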

  • Silverlight 4 Released

    - by ScottGu
    The final release of Silverlight 4 is now available.

    What is in the Silverlight 4 Release

    Silverlight 4 contains a ton of new features and capabilities. In particular we focused on three scenarios with this release:

    - Further enhancing media support
    - Building great business applications
    - Enabling out of the browser experiences

    On Tuesday I gave a 60 minute keynote about Silverlight 4 which showed off many of the new features and capabilities now available. You can watch my keynote to learn more about Silverlight 4 and see a ton of great demos of it in action. Also check out these three great posts by Tim Heuer that talk about the new features and provide a guide to the new Silverlight 4 capabilities:

    - Silverlight 4 Beta – A Guide to the New Features
    - Silverlight 4 RC – What was updated
    - Silverlight 4 Released

    Also read David Anson's great Silverlight 4 Toolkit post to learn more about the new controls and functionality also available within the Silverlight Toolkit release we also made available today. Also visit this page to learn more about the new Pivot functionality in Silverlight 4 – which makes it really easy to visualize and interact with collections of images using Silverlight. Lastly – make sure to visit the www.silverlight.net web-site and visit the "Get Started" section to find free tutorials that you can use.

    Download and Install Silverlight 4 Tools for VS 2010

    To develop Silverlight 4 applications you should first download and install Visual Studio 2010 or download and install the free Visual Web Developer 2010 Express edition. Then install the Silverlight Tools RC2 for Visual Studio 2010. This setup includes the Silverlight 4 Developer Runtime, Silverlight 4 SDK, RIA Services, and VS 2010 tools support. Once installed you can do File->New Project and choose Silverlight Application to create your first Silverlight 4 project. You can then use the new WYSIWYG Silverlight designer in Visual Studio 2010 to design and build rich Silverlight 4 applications.

    Important: If you previously installed the Silverlight 4 Beta or RC build on your machine, please make sure to go into Add/Remove programs and uninstall the "Update for Visual Studio 2010 (KB976272)" package prior to installing the Silverlight Tools RC2 for Visual Studio 2010 setup. Note that while Silverlight 4 is released, the "Silverlight 4 Tools for VS 2010" is currently in "RC2" mode (meaning we are going to keep an eye out for any remaining issues before finally calling it done). We'll update the tools to be "final" in a few weeks once we verify that no last minute issues/bugs remain.

    Download and Install Expression Blend 4 Release Candidate

    You can also download and install the Expression Blend 4 RC to create and design great Silverlight 4 applications. Blend contains "Sketchflow" support – which makes it really easy to rapidly prototype ideas and applications. To learn more about Sketchflow watch this 90 second video of it in action.

    Summary

    Today's release is the fourth release of Silverlight that we've shipped in the last 2.5 years. The team has done a great job of advancing it quickly and staying focused. We think today's Silverlight 4 release opens up a ton of new opportunities to build great solutions for both consumers and business scenarios. We are looking forward to seeing what you build with it!

    Hope this helps,

    Scott

  • Code is not the best way to draw

    - by Bertrand Le Roy
    It should be quite obvious: drawing requires constant visual feedback. Why is it, then, that we still draw with code in so many situations? Of course it's because the low-level APIs always come first, and design tools are built after and on top of those. Existing design tools also don't typically include complex UI elements such as buttons.

    When we launched our Touch Display module for Netduino Go!, we naturally built APIs that made it easy to draw on the screen from code, but very soon we felt the limitations and tedium of drawing in code. In particular, any modification requires a modification of the code, followed by compilation and deployment. When trying to set up buttons at pixel precision, the process is not optimal.

    On the other hand, code is irreplaceable as a way to automate repetitive tasks. While tools like Illustrator have ways to repeat graphical elements, they do so in a way that is a little alien and counter-intuitive to my developer mind. From these reflections, I knew that I wanted a design tool that would be structurally code-centric but that would still enable immediate feedback and mouse adjustments. While thinking about the best way to achieve this goal, I saw a fantastic video by Bret Victor.

    The key to the magic in all these demos is permanent execution of the code being edited. Whenever a parameter is modified, everything is re-executed immediately so that the impact of the modification is instantaneously visible. If you do this all the time, the code and the result of its execution fuse in the mind of the user into dual representations of a single object. All mental barriers disappear. It's like magic.

    The tool I built, Nutshell, is just another implementation of this principle. It manipulates a list of graphical operations on the screen. Each operation has a nice editor, and translates into a bit of code. Any modification to the parameters of the operation will modify the bit of generated code and trigger a re-execution of the whole program. This happens so fast that it feels like the drawing reacts instantaneously to all changes. The order of the operations is also the order in which the code gets executed, so if you want to bring objects to the front, move them down in the list, and up if you want to move them to the back.

    Where it gets really fun is when you start applying code constructs such as loops to the design tool. The elements that you put inside a loop can use the loop counter in expressions, enabling crazy scenarios while retaining the real-time editing features. When you're done building, you can just deploy the code to the device and see it run in its native environment.

    This works thanks to two code generators. The first builds JavaScript that is executed in the browser to render the canvas view in the web page hosting the tool. The second builds the C# code that will run on the Netduino Go! microcontroller and drive the display module.

    The possibilities are fascinating, even if you don't care about driving small touch screens from microcontrollers: it is now possible, within a reasonable budget, to build specialized design tools for very vertical applications. Direct feedback is a powerful ally in many domains. Code generation driven by visual designers has become more approachable than ever thanks to extraordinary JavaScript libraries and to the powerful development platform that modern browsers provide.
I encourage you to tinker with Nutshell and let it open your eyes to new possibilities that you may not have considered before. It’s open source. And of course, my company, Nwazet, can help you develop your own custom browser-based direct feedback design tools. This is real visual programming…

  • InfiniBand Enabled Diskless PXE Boot

    - by Neeraj Gupta
    When you want to bring up a compute server in your environment and need InfiniBand connectivity, usually you go through various installation steps. This could involve operating systems like Linux, followed by a compatible InfiniBand software distribution, associated dependencies and configurations. What if you just want to run some InfiniBand diagnostics or troubleshooting tools from a test machine? What if something happened to your primary machine and, while recovering in rescue mode, you also need access to your InfiniBand network? Often times we use opensource, community-supported small Linux distributions, but they don't come with the required InfiniBand support and tools.

    In this weblog, I am going to provide instructions on how to add InfiniBand support to a specific Linux image - Parted Magic. This is a free-to-use opensource Linux distro often used to recover or rescue machines. The distribution itself will not be changed at all. Yes, you heard it right! I have built an InfiniBand add-on package that will be passed to the default kernel and initrd to get this all working.

    Pre-requisites

    You will need to have a PXE server ready on your ethernet-based network. The compute server you are trying to PXE boot should have a compatible IB HCA and must be connected to an active IB network.

    Required Downloads

    - Download the Parted Magic small distribution for PXE from the Parted Magic website.
    - Download the InfiniBand PXE Add-On package (right click and download from here). Do not extract the contents of this file. You need to use it as is.

    Prepare PXE Server

    Extract the contents of the downloaded pmagic distribution into a temporary directory. Inside the directory structure, you will see a pmagic directory containing two files - bzImage and initrd.img. Copy this directory into your TFTP server's root directory. This is usually /tftpboot, unless you have a different setup. For example:

        cp pmagic_pxe_2012_2_27_x86_64.zip /tmp
        cd /tmp
        unzip pmagic_pxe_2012_2_27_x86_64.zip
        cd pmagic_pxe_2012_2_27_x86_64
        ls -l
        total 12
        drwxr-xr-x  3 root root 4096 Feb 27 15:48 boot
        drwxr-xr-x  2 root root 4096 Mar 17 22:19 pmagic
        cp -r pmagic /tftpboot

    As I mentioned earlier, we don't change anything in the default pmagic distro. Simply provide the add-on package via PXE append options. If you are using a menu-based PXE server, add an entry to your menu. For example, /tftpboot/pxelinux.cfg/default can be appended with the following section (keep the line starting with "APPEND" as a single line):

        LABEL Diskless Boot With InfiniBand Support
        MENU LABEL Diskless Boot With InfiniBand Support
        KERNEL pmagic/bzImage
        APPEND initrd=pmagic/initrd.img,pmagic/ib-pxe-addon.cgz edd=off load_ramdisk=1 prompt_ramdisk=0 rw vga=normal loglevel=9 max_loop=256
        TEXT HELP
        * A Linux Image which can be used to PXE Boot w/ IB tools
        ENDTEXT

    If you use host-specific files in pxelinux.cfg, then you can use that specific file to add the above entry.

    Boot Computer over PXE

    Now boot your desired compute machine over PXE. This does not have to be over InfiniBand. Just use your standard ethernet interface and network. If using menus, pick the new entry that you created in the previous section.

    Enable IPoIB

    After a few minutes, you will be booted into the Parted Magic environment. Open a terminal session and see if InfiniBand is enabled. You can use commands like:

        ifconfig -a
        ibstat
        ibv_devices
        ibv_devinfo

    If you are connected to an InfiniBand network with an active Subnet Manager, your IB interfaces should have come online by now. You can proceed and assign IP addresses to them; this will enable you at the IPoIB layer.
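
    As a worked example of bringing IPoIB up once the interfaces appear, something like the following should do (the interface name and addressing are assumptions - substitute your own):

        # the first IPoIB interface typically shows up as ib0
        ifconfig ib0 192.168.100.10 netmask 255.255.255.0 up

        # confirm the interface is up and carries the address
        ifconfig ib0
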
    Example InfiniBand Diagnostic Tools

    I have added several InfiniBand diagnostic tools to this add-on. You can use any from the following list:

        ibstat, ibstatus, ibv_devinfo, ibv_devices
        perfquery, smpquery
        ibnetdiscover, iblinkinfo.pl
        ibhosts, ibswitches, ibnodes

    Wrap Up

    This concludes this weblog. Here we saw how to bring up a computer with IPoIB and InfiniBand diagnostic tools without installing anything on it. It's almost like running diskless!

  • My Take on Hadoop World 2011

    - by Jean-Pierre Dijcks
    I'm sure some of you have read pieces about Hadoop World, and I did see some headlines which were somewhat, shall we say, interesting? I thought the keynote by Larry Feinsmith of JP Morgan Chase & Co was one of the highlights of the conference for me. The reason was very simple: he addressed some real use cases outside of internet and ad platforms. The following are my notes; since the keynote was recorded, I presume you can go and look at Hadoopworld.com at some point.

    On the use cases that were mentioned:

    - ETL - how can I do complex data transformation at scale
    - Doing Basel III liquidity analysis
    - Private banking - transaction filtering to feed [relational] data marts
    - Common Data Platform - a place to keep data that is (or will be) valuable some day, to someone, somewhere
    - 360-degree view of customers - become pro-active and look at events across lines of business. For example, make sure the mortgage folks know about direct deposits being stopped into an account, and ensure the bank is pro-active in servicing the customer
    - Treasury and Security - Global Payment Hub [I think this is really consolidation of data to cross-reference activity across businesses and geographies]
    - Data mining
    - Bypass data engineering [I interpret this as running on the entire large data set rather than on samples]
    - Fraud prevention - work on event triggers, say a number of failed log-ins to the website. When they occur, grab web logs, firewall logs and rules, and start to figure out who is trying to log in. Is this me, who forgot his password, or is it someone in some other country trying to guess passwords?
    - Trade quality analysis - do a batch analysis of all trades done and run them through an analysis or comparison pipeline

    One of the key requests - if you can say it like that - was for vendors and entrepreneurs to make sure that new tools work with existing tools. JPMC has a large footprint of BI tools, and Big Data reporting and tools should work with those, rather than be separate. Security and entitlement - how to protect data within a large cluster from unwanted snooping - was another topic that came up. I thought his elephant-ears graph was interesting (couldn't actually read the points on it, but the concept certainly made some sense), and it was interesting - when asked to show hands - how the audience did not (!) think that RDBMS and Hadoop technology would overlap completely within a few years.

    Another interesting session was the one from Disney, discussing how Disney is building a DaaS (Data as a Service) platform and how Hadoop processing capabilities are mixed with database technologies. I thought this was one of the best sessions I have seen in a long time. It discussed a real use case: where problems existed, how they were solved and how Disney planned some of it. The planning focused on three things/phases:

    - Determine the strategy - design a platform and evangelize it within the organization
    - Focus on the people - hire key people, grow and train the staff (and do not overload what you have with new things on top of their day-to-day job), leverage a partner with experience
    - Work on execution of the strategy - implement the Hadoop platform next to the other technologies and work toward the DaaS platform

    This kind of fitted with some of the LinkedIn comments, best summarized in "Think Platform - Think Hadoop". In other words [my interpretation], step back and engineer a platform (like DaaS in the Disney example), then layer the rest of the solutions on top of this platform.
    One general observation: I got the impression that we have knowledge gaps left and right. On the one hand, people are looking for more information and details on the Hadoop tools and languages. On the other, I got the impression that the capabilities of today's relational databases are underestimated, mostly in terms of data volumes and parallel processing capabilities, or things like commodity-hardware scale-out models. All in all I liked this conference; it was great to chat with a wide range of people on Oracle big data, on big data, on use cases and all sorts of other stuff. Just hope they get a set of bigger rooms next time... and yes, I hope I'm going to be back next year!

  • The theory of evolution applied to software

    - by Michel Grootjans
    I recently realized the many parallels you can draw between the theory of evolution and evolving software. Evolution is not the proverbial million monkeys typing on a million typewriters, where one of them comes up with the complete works of Shakespeare. We would have noticed by now, since the proverbial monkeys are now blogging on the Internet ;-) One of the main ideas of the theory of evolution is the balance between random mutations and natural selection. Random mutations happen all the time: millions of mutations over millions of years. Most of them are totally useless. Some of them are beneficial to the evolved species. Natural selection favors the beneficially mutated species. Less beneficial mutations die off. The mutated rabbit doesn't have to be faster than the fox. It just has to be faster than the other rabbits.

    | Theory of evolution | Evolving software |
    | --- | --- |
    | Random mutations happen all the time. Most of these mutations are so bad, the new species dies off, or cannot reproduce. | Developers write new code all the time. New ideas come up during the act of writing software. The really bad ones don't get past the stage of idea. The bad ones don't get committed to source control. |
    | Natural selection favors the beneficially mutated species. | Good ideas and new code get discussed in group during informal peer review. Less-than-good code gets refactored. Enhanced code is more readable, maintainable... |
    | A good set of traits makes the species superior to others. It becomes widespread. | A good design tends to make it easier to add new features, easier to understand the current implementation, easier to optimize for performance... thus superior. The best designs get carried over from project to project. They appear in blogs, articles and books about principles, patterns and practices. |

    Of course the act of writing software is deliberate. This can hardly be called random mutations. Though it sometimes might seem that code evolves through a will of its own ;-)

    Does this mean that evolving software (evolution) is better than a big design up front (creationism)? Not necessarily. It's a false idea to think that a project starts from scratch and everything evolves from there. Everyone carries his experience of what works and what doesn't. Up-front design is necessary, but is best kept simple and minimal, just enough to get you started. Let the good experiences and ideas help drive the process, whether they come from you or from others, from past experience or from the most junior developer on your team. Once again, balance is the keyword. Balance design up front with evolution on a daily basis. How do you know what balance is right? Through your own experience of what worked and what didn't (here's evolution again).

    Notes: The evolution of software can quickly degenerate without discipline. TDD is a discipline that leaves little to chance on that part. Write your test to describe the new behavior. Write just enough code to make it behave as specified. Refactor to evolve the code to a higher standard. The responsibility of good design rests continuously on each developer's shoulders. Promiscuous pair programming helps quickly spread good design to the whole team.

  • Is a university education really worth it for a good programmer?

    - by Jon Purdy
    The title says it all, but here's the personal side of it: I've been doing design and programming for about as long as I can remember. If there's a programming problem, I can figure it out. (Though admittedly Stack Overflow has allowed me to skip the figuring out and get straight to the doing in many instances.) I've made games, esoteric programming languages, and widgets and gizmos galore. I'm currently working on a general-purpose programming language. There's nothing I do better than programming. However, I'm just as passionate about design. So when I left high school feeling that my design skills were lacking, I decided to attend university for New Media Design and Imaging, a digital-design-related major. For a year, I diligently studied art and programmed in my free time.

    As the next year progressed, however, I was obligated to take fewer art and design classes and more technical classes. The trouble was, of course, that these classes were geared toward non-technical students and were far beneath my skill level at the time. No amount of petitioning could overcome the institution's reluctance to let me test out of such classes, and the major offered no promise of any greater challenge in the future, so I took the extreme route: I switched into the technical equivalent of the major, New Media Interactive Development. A lot of my credits moved over into the new major, but many didn't. It would have been infeasible to switch to a more rigorous technical major such as Computer Science, and having tutored Computer Science students at every level here, I doubt I would be exposed to anything that I haven't already encountered or won't eventually find out on my own, since I'm so involved in the field.

    I'm now on track to graduate perhaps a year later than I had planned, which puts a significant financial strain on my family and my future self. My schedule continues to be bogged down with classes that are wholly unnecessary for me to take. I'm being re-introduced to subjects that I've covered a thousand times over, simply because I've always been interested in it all. And though I avoid the cynical and immature tactic of failing to complete work out of some undeserved sense of superiority, I'm becoming increasingly disillusioned by the lack of intellectual stimulation. Further, my school requires students to complete a number of quarters of co-op work experience proportional to their major. My original major required two quarters, but my current one requires three, delaying my graduation even more. To top it all off, college is putting a severe strain on my relationship with my very close partner of a few years, so I've searched diligently for co-op jobs in my area, alas to no avail.

    I'm now in my third year, and approaching the point past which I can no longer handle this. Either I keep my head down, get a degree no matter what it takes, and try to get a job with a company that will pay me enough to do what I love so that I can eventually pay off my loans; or I cut my losses now, move wherever there is work, and in six months start paying off what debt I've accumulated thus far. So the real question is: is a university education really more than just a formality? It's a big decision, and one I can't make lightly. I think this is the appropriate venue for this kind of question, and I hope it sticks around for the sake of others who might someday find themselves in similar situations. My heartfelt thanks for reading, and in advance for your help.

  • More Stuff less Fluff

    - by brendonpage
    Originally posted on: http://geekswithblogs.net/brendonpage/archive/2013/11/08/more-stuff-less-fluff.aspx

    YAGNI - "You Aren't Going To Need It". This is an acronym commonly used in software development to remind developers to only write what they need. This acronym exists because software developers have gotten into the habit of writing everything they need to solve a problem, and then everything they think they're going to possibly need in the future. Since we can't predict the future, this results in a large portion of the code that we write never being used. That extra code causes unnecessary complexity, which makes it harder to understand and harder to modify when we inevitably have to write something that we didn't think of.

    I've known about YAGNI for some time now, but I never really got it. The words made sense and the idea was clear, but the concept never sank in. I was one of those devs who'd happily write a ton of code in anticipation of future needs. In my mind this was an essential part of writing high-quality code. I didn't realise that in doing so I was actually writing low-quality code.

    If you are anything like me, you are probably thinking "Lies and propaganda! High quality code needs to be future proof." I agree! But what makes code future proof? If we could see into the future, the answer would be simple: code that allows for or meets all future requirements. Since we can't see the future, the best we can do is write code that can easily adapt to future requirements; this means writing flexible code. Flexible code is:

    - Fast to understand.
    - Fast to add to.
    - Fast to modify.

    To be flexible, code has to be simple; this means only making it as complex as it needs to be to meet those 3 criteria. That is high quality code. YAGNI! The art is in deciding where to place the seams (abstractions) that will give you flexibility without making decisions about future functionality. Robert C Martin explains it very nicely: he says a good architecture allows you to defer decisions, because if you can defer a decision then you have the flexibility to change it.

    I've recently had a YAGNI experience which brought this all into perspective. I was working on a new project which had multiple clients that connect to a server hosted in the cloud. I was tasked with adding a feature to the desktop client that would allow users to capture items that would then be saved to the cloud. My immediate thought was "Hey, we have multiple clients, so I should build a web service for these items; that way we can access them from other clients", so I went to work and designed exactly that.

    I stood back and gazed upon what I'd created with a warm fuzzy feeling. It was beautiful! Then the time came for the team to use the design I'd created for another feature with a new entity. Let's just say that they didn't get the same warm fuzzy feeling that I did when they looked at the design. After much discussion, they eventually got through to me that I'd bloated the design based on an assumption of future functionality. After much more discussion, we cut the design down considerably.

    The simpler design gives us future flexibility with no extra work; it is as complex as it needs to be. It has been a couple of months since this incident and we still haven't needed to access either of the entities from other clients. Using the simpler design allowed us to do more stuff with less stuff!

  • The Three-Legged Milk Stool - Why Oracle Fusion Incentive Compensation makes the difference!

    - by Richard Lefebvre
    During the London Olympics, we were exposed to dozens of athletes who worked with sports psychologists to maximize their performance. Executives often hire business psychologists to coach their teams to excellence. In the same vein, Fusion Incentive Compensation can be used to get people to change their sales behavior so we can make our numbers. But what about using incentive compensation solutions in a non-sales scenario to drive change?

    Recently, I was working an opportunity where a company was having a low user adoption rate for Salesforce.com, which was causing problems for them. I suggested they use Fusion Incentive Comp to change the reps' behavior. We tossed around the idea of tracking user adoption by creating a variable bonus for reps based on how well they forecasted revenues in the new system. Another thought was to reward the reps for how often they logged into the system, or for the percentage of leads that became opportunities and turned into revenue. A new twist on a great product.

    Fusion CRM's Sweet Spot

    I'm excited about the sales performance management (SPM) tools in Fusion CRM. This trio of Incentive Compensation, Territory Management, and Quota Management sets us apart from the competition, because Oracle is the only vendor that provides all three of these capabilities on a single tech stack, in a single application, and with a single look and feel. The niche vendors offer standalone territory or incentive compensation solutions, but then the customer has to custom-build the other tools and can end up with a Frankenstein-type environment. On average, companies overpay sales commissions by three to eight percent. Calculate that number for a company the size of Oracle for one quarter and it makes a pretty air-tight financial case for using SPM tools to figure accurate commissions. Plus, when sales reps get the right compensation, they can be out selling rather than spending precious time figuring out what they didn't get paid, or looking for another job.

    And one more thing... Oracle knows incentive comp. We have been a Gartner MarketScope leader in this space for the last five years. Our solution gets high marks because of its scalability and because of its interoperability with other technologies. And now that we're leading with Fusion, our incentive compensation offering includes the innovations that the Fusion team built, plus enhancements from the E-Business Suite Incentive Comp team. It's a case of making a good thing even better. (See product video.)

    The "Wedge" Apps

    In a number of accounts that I'm working on, there is a non-Oracle CRM system of record. That gives me the perfect opportunity to introduce the benefits of our SPM tools and to get the customer using Fusion. Then the door is wide open for the company to uptake more of Fusion CRM, especially since all the integrations they need are out of the box. I really believe that implementing this wedge of SPM tools is the ticket to taking market share away from other vendors. It allows us to insert ourselves in an environment where no other CRM solution in the market has the extending capabilities of Fusion.

    Not Just Your Usual Suspects

    Usually the stakeholders that I talk to for Territory Management are tightly aligned with the sales management team. When I sell the quota planning tool, I'm talking to finance people on the ERP side of the house who are measuring quotas and forecasting revenue.
    And then Incentive Comp is of most interest to the sales operations people, and generally these people roll up to either HR or the payroll department. I think of our Fusion SPM tools as a three-legged stool straddling an organization's Sales, Finance, and HR departments. So when you're prospecting for opportunities - yes, people with a CRM perspective will be very interested - but don't limit yourself to that constituency. You might find stakeholders in accounting, revenue planning, or HR compensation teams. You just might discover, as I did at United Airlines, that the HR organization is spearheading the CRM project because incentive compensation is what they need... and they're the ones with the budget.

    Jason Loh
    Global Solutions Manager, Fusion CRM Sales Planning
    Oracle Corporation

  • Using Content Analytics for More Effective Engagement

    - by Kellsey Ruppel
    Using Content Analytics for More Effective Engagement: Turning High-Volume Content into Templates for Success
    By Mitchell Palski, Oracle WebCenter Sales Consultant

    Many organizations use Oracle WebCenter Portal to develop these basic types of portals:

    - Intranet portals used for collaboration, employee self-service, and company communication
    - Extranet portals used by customers and partners for self-service and support
    - Team collaboration portals that allow users to share documents and content, track activity, and engage in discussions

    Portals are intended to provide a personalized, single point of interaction with web-based applications and information. The user experiences that a portal is capable of displaying should be relevant to an individual user or class of users (a group or role). The components of a portal that would vary based on a user's identity include:

    - Web content such as images, news articles, and on-screen instruction
    - Social tools such as threaded discussions, polls/surveys, and blogs
    - Document management tools to upload, download, and edit files
    - Web applications that present data visualizations and data entry modules

    These collections of content, tools, and applications make up valuable workspaces. The challenge a development team may have is defining which combinations are the most effective for its users. No one wants to create and manage a workspace that goes unused or (even worse) that is used but is ineffective. Oracle WebCenter Portal provides you with the capabilities to not only rapidly develop variations of portals, but also identify which portals are the most effective and should be reused throughout an enterprise.

    Capturing Portal Analytics

    Oracle WebCenter Portal provides an analytics service that allows administrators and business users to track and analyze portal usage. These analytics are captured in the form of:

    - Usage tracking metrics
    - Behavior tracking
    - User profile correlation

    The out-of-the-box reports that come with Oracle WebCenter Portal include: WebCenter Portal Traffic, Page Traffic, Login Metrics, Portlet Traffic, Portlet Response Time, Portlet Instance Traffic, Portlet Instance Response Time, Search Metrics, Document Metrics, Wiki Metrics, Blog Metrics, Discussion Metrics, Portal Traffic, and Portal Response Time.

    By determining the usage and behavior tracking metrics that are associated with specific user profiles (including groups and roles), your administrators will be able to identify the components of your solution that are the most valuable. Your first step as an administrator should be to identify the specific pages and/or components that are used most frequently. Next, determine the user(s) or user group(s) that are accessing those high-use elements of a portal. It is also important to look for patterns in high usage and see if they correlate to a specific schedule.

    One of the goals of any development team (especially those following Agile methodologies) should be to develop reusable web components to minimize redundant development. Oracle WebCenter Portal provides you the tools to capture the successful workspaces that have already been developed and identified so that they can be reused for similar user demographics.
    Re-using Successful Portals

    When creating a new portal in Oracle WebCenter, developers have the option to base that portal on a template that includes:

    - Pre-seeded data such as pages, tools, user roles, and look-and-feel assets
    - Specific sub-sets of page layouts, tools, and other resources to standardize what is added to a portal's pages
    - Any custom components that your team creates during development cycles

    Once you have identified a successful workspace and its most valuable components, leverage Oracle WebCenter's ability to turn that custom portal into a portal template. By creating a template from your already successful portal, you are empowering your enterprise by providing a starting point for future initiatives. Your new projects, new teams, and new web pages can benefit from lessons learned and adjustments that have already been made to optimize user experiences, instead of starting from scratch.

    For a complete explanation of how to work with portal templates, be sure to read the Fusion Middleware documentation available online.

  • Should I expose IObservable<T> on my interfaces?

    - by Alex
    My colleague and I have a dispute. We are writing a .NET application that processes massive amounts of data. It receives data elements, groups subsets of them into blocks according to some criterion, and processes those blocks. Let's say we have data items of type Foo arriving from some source (from the network, for example) one by one. We wish to gather subsets of related objects of type Foo, construct an object of type Bar from each such subset, and process objects of type Bar.

    One of us suggested the following design. Its main theme is exposing IObservable objects directly from the interfaces of our components.

        // ********* Interfaces **********
        interface IFooSource
        {
            // this is the event-stream of objects of type Foo
            IObservable<Foo> FooArrivals { get; }
        }

        interface IBarSource
        {
            // this is the event-stream of objects of type Bar
            IObservable<Bar> BarArrivals { get; }
        }

        // ********* Implementations *********
        class FooSource : IFooSource
        {
            // Here we put logic that receives Foo objects from the network
            // and publishes them to the FooArrivals event stream.
        }

        class FooSubsetsToBarConverter : IBarSource
        {
            IFooSource fooSource;

            IObservable<Bar> BarArrivals
            {
                get
                {
                    // Apply Rx operators to fooSource.FooArrivals, like Buffer,
                    // Window, Join and others, and return an IObservable<Bar>.
                }
            }
        }

        // this class will subscribe to the bar source and do processing
        class BarsProcessor
        {
            BarsProcessor(IBarSource barSource);
            void Subscribe();
        }

        // ******************* Main ************************
        class Program
        {
            public static void Main(string[] args)
            {
                var fooSource = FooSourceFactory.Create();
                // this will create FooSubsetToBarConverter and BarsProcessor
                var barsProcessor = BarsProcessorFactory.Create(fooSource);
                barsProcessor.Subscribe();
                // this enters a loop of listening for Foo objects from the
                // network and notifying about their arrival
                fooSource.Run();
            }
        }

    The other suggested another design whose main theme is using our own publisher/subscriber interfaces, and using Rx inside the implementations only when needed:

        // ********* Interfaces *********
        interface IPublisher<T>
        {
            void Subscribe(ISubscriber<T> subscriber);
        }

        interface ISubscriber<T>
        {
            Action<T> Callback { get; }
        }

        // ********* Implementations *********
        class FooSource : IPublisher<Foo>
        {
            public void Subscribe(ISubscriber<Foo> subscriber) { /* ... */ }
            // Here we put logic that receives Foo objects from some source
            // (the network?) and publishes them to the registered subscribers.
        }

        class FooSubsetsToBarConverter : ISubscriber<Foo>, IPublisher<Bar>
        {
            void Callback(Foo foo)
            {
                // Here we put logic that aggregates Foo objects and publishes
                // Bars when we have received a subset of Foos that match our
                // criteria. Maybe we use Rx here internally.
            }

            public void Subscribe(ISubscriber<Bar> subscriber) { /* ... */ }
        }

        class BarsProcessor : ISubscriber<Bar>
        {
            void Callback(Bar bar)
            {
                // here we put code that processes Bar objects
            }
        }

        // ********* Program *********
        class Program
        {
            public static void Main(string[] args)
            {
                var fooSource = fooSourceFactory.Create();
                // this will create BarsProcessor and perform all the
                // necessary subscriptions
                var barsProcessor = barsProcessorFactory.Create(fooSource);
                // this enters a loop of listening for Foo objects from the
                // network and notifying about their arrival
                fooSource.Run();
            }
        }

    Which one do you think is better? Exposing IObservable and making our components create new event streams from Rx operators, or defining our own publisher/subscriber interfaces and using Rx internally if needed?
    Here are some things to consider about the designs:

    - In the first design, the consumer of our interfaces has the whole power of Rx at his/her fingertips and can apply any Rx operators. One of us claims this is an advantage and the other claims that this is a drawback.
    - The second design allows us to use any publisher/subscriber architecture under the hood; the first design ties us to Rx.
    - If we wish to use the power of Rx, it requires more work in the second design, because we need to translate the custom publisher/subscriber implementation to Rx and back. It requires writing glue code for every class that wishes to do some event processing (see the sketch below).
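
    For a concrete sense of what the first design buys, here is a hedged sketch of the converter's core - grouping Foos into Bars with a single Rx pipeline (assumes using System.Reactive.Linq; the operator choice and Bar construction are placeholders for the real criterion):

        class FooSubsetsToBarConverter : IBarSource
        {
            private readonly IFooSource fooSource;

            public FooSubsetsToBarConverter(IFooSource fooSource)
            {
                this.fooSource = fooSource;
            }

            public IObservable<Bar> BarArrivals
            {
                get
                {
                    // Placeholder grouping rule: every 10 related Foos form one Bar.
                    return fooSource.FooArrivals
                                    .Buffer(10)
                                    .Select(foos => new Bar(foos));
                }
            }
        }

    In the second design the same logic lives behind Callback(Foo), with hand-rolled buffering or an internal Rx Subject bridging the two worlds - which is exactly the glue code the last bullet refers to.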
