Search Results

Search found 1527 results on 62 pages for 'breaks'.


  • Blank Mail from PHP application

    - by brettlwilliams
    Problem: Blank email from PHP web application. Confirmed: App works in Linux, has various problems in Windows server environment. Blank emails are the last remaining problem. PHP Version 5.2.6 on the server I'm a librarian implementing a PHP based web application to help students complete their assignments.I have installed this application before on a Linux based free web host and had no problems. Email is controlled by two files, email_functions.php and email.php. While email can be sent, all that is sent is a blank email. My IT department is an ASP only shop, so I can get little to no help there. I also cannot install additional libraries like PHPmail or Swiftmailer. You can see a functional copy at http://rpc.elm4you.org/ You can also download a copy from Sourceforge from the link there. Thanks in advance for any insight into this! email_functions.php <?php /********************************************************** Function: build_multipart_headers Author: Michael Berkowski Last Modified: September 2007 *********************************************************** Purpose: Creates email headers for a message of type multipart/mime This will include a plain text part and HTML. **********************************************************/ function build_multipart_headers($boundary_rand) { global $EMAIL_FROM_DISPLAY_NAME, $EMAIL_FROM_ADDRESS, $CALC_PATH, $CALC_TITLE, $SERVER_NAME; // Using \n instead of \r\n because qmail doubles up the \r and screws everything up! $crlf = "\n"; $message_date = date("r"); // Construct headers for multipart/mixed MIME email. It will have a plain text and HTML part $headers = "X-Calc-Name: $CALC_TITLE" . $crlf; $headers .= "X-Calc-Url: http://{$SERVER_NAME}/{$CALC_PATH}" . $crlf; $headers .= "MIME-Version: 1.0" . $crlf; $headers .= "Content-type: multipart/alternative;" . $crlf; $headers .= " boundary=__$boundary_rand" . $crlf; $headers .= "From: $EMAIL_FROM_DISPLAY_NAME <$EMAIL_FROM_ADDRESS>" . $crlf; $headers .= "Sender: $EMAIL_FROM_DISPLAY_NAME <$EMAIL_FROM_ADDRESS>" . $crlf; $headers .= "Reply-to: $EMAIL_FROM_DISPLAY_NAME <$EMAIL_FROM_ADDRESS>" . $crlf; $headers .= "Return-Path: $EMAIL_FROM_DISPLAY_NAME <$EMAIL_FROM_ADDRESS>" . $crlf; $headers .= "Date: $message_date" . $crlf; $headers .= "Message-Id: $boundary_rand@$SERVER_NAME" . $crlf; return $headers; } /********************************************************** Function: build_multipart_body Author: Michael Berkowski Last Modified: September 2007 *********************************************************** Purpose: Builds the email body content to go with the headers from build_multipart_headers() **********************************************************/ function build_multipart_body($plain_text_message, $html_message, $boundary_rand) { //$crlf = "\r\n"; $crlf = "\n"; $boundary = "__" . $boundary_rand; // Begin constructing the MIME multipart message $multipart_message = "This is a multipart message in MIME format." . $crlf . $crlf; $multipart_message .= "--{$boundary}{$crlf}Content-type: text/plain; charset=\"us-ascii\"{$crlf}Content-Transfer-Encoding: 7bit{$crlf}{$crlf}"; $multipart_message .= $plain_text_message . $crlf . $crlf; $multipart_message .= "--{$boundary}{$crlf}Content-type: text/html; charset=\"iso-8859-1\"{$crlf}Content-Transfer-Encoding: 7bit{$crlf}{$crlf}"; $multipart_message .= $html_message . $crlf . 
$crlf; $multipart_message .= "--{$boundary}--$crlf$crlf"; return $multipart_message; } /********************************************************** Function: build_step_email_body_text Author: Michael Berkowski Last Modified: September 2007 *********************************************************** Purpose: Returns a plain text version of the email body to be used for individually sent step reminders **********************************************************/ function build_step_email_body_text($stepnum, $arr_instructions, $dates, $query_string, $teacher_info ,$name, $class, $project_id) { global $CALC_PATH, $CALC_TITLE, $SERVER_NAME; $step_email_body =<<<BODY $CALC_TITLE Step $stepnum: {$arr_instructions["step$stepnum"]["title"]} Name: $name Class: $class BODY; $step_email_body .= build_text_single_step($stepnum, $arr_instructions, $dates, $query_string, $teacher_info); $step_email_body .= "\n\n"; $step_email_body .=<<<FOOTER The $CALC_TITLE offers suggestions, but be sure to check with your teacher to find out the best working schedule for your assignment! If you would like to stop receiving further reminders for this project, click the link below: http://$SERVER_NAME/$CALC_PATH/deleteproject.php?proj=$project_id FOOTER; // Wrap text to 78 chars per line // Convert any remaining HTML <br /> to \r\n // Strip out any remaining HTML tags. $step_email_body = strip_tags(linebreaks_html2text(wordwrap($step_email_body, 78, "\n"))); return $step_email_body; } /********************************************************** Function: build_step_email_body_html Author: Michael Berkowski Last Modified: September 2007 *********************************************************** Purpose: Same as above, but with HTML **********************************************************/ function build_step_email_body_html($stepnum, $arr_instructions, $dates, $query_string, $teacher_info, $name, $class, $project_id) { global $CALC_PATH, $CALC_TITLE, $SERVER_NAME; $styles = build_html_styles(); $step_email_body =<<<BODY <html> <head> <title> $CALC_TITLE </title> $styles </head> <body> <h1> $CALC_TITLE Schedule </h1> <strong>Name:</strong> $name <br /> <strong>Class:</strong> $class <br /> BODY; $step_email_body .= build_html_single_step($stepnum, $arr_instructions, $dates, $query_string, $teacher_info); $step_email_body .=<<<FOOTER <p> The $CALC_TITLE offers suggestions, but be sure to check with your teacher to find out the best working schedule for your assignment! 
</p> <p> If you would like to stop receiving further reminders for this project, <a href="http://{$SERVER_NAME}/$CALC_PATH/deleteproject.php?proj=$project_id">click this link.</a> </p> </body> </html> FOOTER; return $step_email_body; } /********************************************************** Function: build_html_styles Author: Michael Berkowski Last Modified: September 2007 *********************************************************** Purpose: Just returns a string of <style /> for the HTML message body **********************************************************/ function build_html_styles() { $styles =<<<STYLES <style type="text/css"> body { font-family: Arial, sans-serif; font-size: 85%; } h1 { font-size: 120%; } table { border: none; } tr { vertical-align: top; } img { display: none; } hr { border: 0; } </style> STYLES; return $styles; } /********************************************************** Function: linebreaks_html2text Author: Michael Berkowski Last Modified: October 2007 *********************************************************** Purpose: Convert <br /> html tags to \n line breaks **********************************************************/ function linebreaks_html2text($in_string) { $out_string = ""; $arr_br = array("<br>", "<br />", "<br/>"); $out_string = str_replace($arr_br, "\n", $in_string); return $out_string; } ?> email.php <?php require_once("include/config.php"); require_once("include/instructions.php"); require_once("dbase/dbfunctions.php"); require_once("include/email_functions.php"); ini_set("sendmail_from", "[email protected]"); ini_set("SMTP", "mail.qatar.net.qa"); // Verify that the email has not already been sent by checking for a cookie // whose value is generated each time the form is loaded freshly. if (!(isset($_COOKIE['rpc_transid']) && $_COOKIE['rpc_transid'] == $_POST['transid'])) { // Setup some preliminary variables for email. // The scanning of $_POST['email']already took place when this file was included... $to = $_POST['email']; $subject = $EMAIL_SUBJECT; $boundary_rand = md5(rand()); $mail_type = ""; switch ($_POST['reminder-type']) { case "progressive": $arr_dbase_dates = array(); $conn = rpc_connect(); if (!$conn) { $mail_success = FALSE; $mail_status_message = "Could not register address!"; break; } // Sanitize all the data that will be inserted into table... // We need to remove "CONTENT-TYPE:" from name/class to defang them. // Additionall, we can't allow any line-breaks in those fields to avoid // hacks to email headers. $ins_name = mysql_real_escape_string($name); $ins_name = eregi_replace("CONTENT-TYPE", "...Content_Type...", $ins_name); $ins_name = str_replace("\n", "", $ins_name); $ins_class = mysql_real_escape_string($class); $ins_class = eregi_replace("CONTENT-TYPE", "...Content_Type...", $ins_class); $ins_class = str_replace("\n", "", $ins_class); $ins_email = mysql_real_escape_string($email); $ins_teacher_info = $teacher_info ? "YES" : "NO"; switch ($format) { case "Slides": $ins_format = "SLIDES"; break; case "Video": $ins_format = "VIDEO"; break; case "Essay": default: $ins_format = "ESSAY"; break; } // The transid from the previous form will be used as a project identifier // Steps will be grouped by project identifier. $ins_project_id = mysql_real_escape_string($_POST['transid'] . md5(rand())); $arr_dbase_dates = dbase_dates($dates); $arr_past_dates = array(); // Iterate over the dates array and build a SQL statement for each one. 
$insert_success = TRUE; // $min_reminder_date = date("Ymd", mktime(0,0,0,date("m"),date("d")+$EMAIL_REMINDER_DAYS_AHEAD,date("Y"))); for ($date_index = 0; $date_index < sizeof($arr_dbase_dates); $date_index++) { // Make sure we're using the right keys... $ins_date_index = $date_index + 1; // The insert will only happen if the date of the event is in the future. // For dates today and earlier, no insert. // For dates today or after the reminder deadline, we'll send the email immediately after the inserts. if ($arr_dbase_dates[$date_index] > (int)$min_reminder_date) { $qry =<<<QRY INSERT INTO email_queue ( NOTIFICATION_ID, PROJECT_ID, EMAIL, NAME, CLASS, FORMAT, TEACHER_INFO, STEP, MESSAGE_DATE ) VALUES ( NULL, '$ins_project_id', '$ins_email', '$ins_name', '$ins_class', '$ins_format', '$ins_teacher_info', $ins_date_index, /*step number*/ {$arr_dbase_dates[$date_index]} /* Date in the integer format yyyymmdd */ ) QRY; // Attempt to do the insert... $result = mysql_query($qry); // If even one insert fails, bail out. if (!$result) { $mail_success = FALSE; $mail_status_message = "Could not register address!"; break; } } // For dates today or earlier, store the steps=>dates in an array so the mails can // be sent immediately. else { $arr_past_dates[$ins_date_index] = $arr_dbase_dates[$date_index]; } } // Close the connection resources. mysql_close($conn); // SEND OUT THE EMAILS THAT HAVE TO GO IMMEDIATELY... // This should only be step 1, but who knows... //var_dump($arr_past_dates); for ($stepnum=1; $stepnum<=sizeof($arr_past_dates); $stepnum++) { $email_teacher_info = ($teacher_info && $EMAIL_TEACHER_REMINDERS) ? TRUE : FALSE; $boundary = md5(rand()); $plain_text_body = build_step_email_body_text($stepnum, $arr_instructions, $dates, $query_string, $email_teacher_info ,$name, $class, $ins_project_id); $html_body = build_step_email_body_html($stepnum, $arr_instructions, $dates, $query_string, $email_teacher_info ,$name, $class, $ins_project_id); $multipart_headers = build_multipart_headers($boundary); $multipart_body = build_multipart_body($plain_text_body, $html_body, $boundary); mail($to, $subject . ": Step " . $stepnum, $multipart_body, $multipart_headers, "[email protected]"); } // Set appropriate flags and messages $mail_success = TRUE; $mail_status_message = "Email address registered!"; $mail_type = "progressive"; set_mail_success_cookie(); break; // Default to a single email message. case "single": default: // We don't want to send images in the message, so strip them out of the existing structure. // This big ugly regex strips the whole table cell containing the image out of the table. // Must find a better solution... 
//$email_table_html = eregi_replace("<td class=\"stepImageContainer\" width=\"161px\">[\s\r\n\t]*<img class=\"stepImage\" src=\"images/[_a-zA-Z0-9]*\.gif\" alt=\"Step [1-9]{1} logo\" />[\s\r\n\t]*</td>", "\n", $table_html); // Show more descriptive text based on the value of $format switch ($format) { case "Video": $format_display = "Video"; break; case "Slides": $format_display = "Presentation with electronic slides"; break; case "Essay": default: $format_display = "Essay"; break; } $days = (int)$days; $html_message = ""; $styles = build_html_styles(); $html_message =<<<HTMLMESSAGE <html> <head> <title> $CALC_TITLE </title> $styles </head> <body> <h1> $CALC_TITLE Schedule </h1> <strong>Name:</strong> $name <br /> <strong>Class:</strong> $class <br /> <strong>Email:</strong> $email <br /> <strong>Assignment type:</strong> $format_display <br /><br /> <strong>Starting on:</strong> $date1 <br /> <strong>Assignment due:</strong> $date2 <br /> <strong>You have $days days to finish.</strong><br /> <hr /> $email_table_html </body> </html> HTMLMESSAGE; // Create the plain text version of the message... $plain_text_message = strip_tags(linebreaks_html2text(build_text_all_steps($arr_instructions, $dates, $query_string, $teacher_info))); // Add the title, since it doesn't get built in by build_text_all_steps... $plain_text_message = $CALC_TITLE . " Schedule\n\n" . $plain_text_message; $plain_text_message = wordwrap($plain_text_message, 78, "\n"); $multipart_headers = build_multipart_headers($boundary_rand); $multipart_message = build_multipart_body($plain_text_message, $html_message, $boundary_rand); $mail_success = FALSE; if (mail($to, $subject, $multipart_message, $multipart_headers, "[email protected]")) { $mail_success = TRUE; $mail_status_message = "Email sent!"; $mail_type = "single"; set_mail_success_cookie(); } else { $mail_success = FALSE; $mail_status_message = "Could not send email!"; } break; } } function set_mail_success_cookie() { // Prevent the mail from being resent on page reload. Set a timestamp cookie. // Expires in 24 hours. setcookie("rpc_transid", $_POST['transid'], time() + 86400); } ?>

    Read the article

  • Looking For iPhone 4S Alternatives? Here Are 3 Smartphones You Should Consider

    - by Gopinath
    If you are going to buy the iPhone 4S on a two-year contract in the USA, Europe or Australia, you may not find it expensive. But if you are planning to buy it in any other part of the world, you will definitely feel the heat of the ridiculous iPhone 4S price. In India the iPhone 4S costs approximately $1000, which is 30% more than the price tag of an unlocked iPhone sold in the USA. Personally I love iPhones, as there is no match for the user experience provided by Apple as well as the wide range of really meaningful applications available for the iPhone. But it breaks my heart to spend $1000 on a phone, so I’m forced to look at the alternatives available in the market. Here are four iPhone 4S alternatives available in almost all the countries where the iPhone 4S is sold. Google Galaxy Nexus The Galaxy Nexus is Google’s own Android smartphone, manufactured by Samsung and sold under the Google Nexus brand. The Galaxy Nexus is a pure Android phone, without any bloatware or the custom user interfaces found on other Androids in the market. It is also the first Android phone to ship with the latest version of the Android OS, Ice Cream Sandwich. This phone is the benchmark for the rest of the Android phones that are going to enter the market soon. In Google’s own words, the phone is “Galaxy Nexus: Simple. Beautiful. Beyond Smart.” The BGR review summarizes the phone as: This is almost comical at this point, but the Samsung Galaxy Nexus is my favourite Android device in the world. Easily replacing the HTC Rezound, the Motorola DROID RAZR, and Samsung Galaxy S II, the Galaxy Nexus champions in a brand new version of Android that pushes itself further than almost any other mobile OS in the industry. Samsung Galaxy S II The one company that is able to sell more smartphones than Apple is Samsung. Samsung recently displaced Apple from the top smartphone seller spot and occupied it with loads of pride. Samsung’s Galaxy S II fits as one of the best alternatives to Apple’s iPhone 4S, with its beautiful design and remarkable performance. Engadget’s Samsung Galaxy S II review concludes: It’s the best Android smartphone yet, but more importantly, it might well be the best smartphone, period. Of course, a 4.3-inch screen size won’t suit everyone, no matter how stupendously thin the device that carries it may be, and we also can’t say for sure that the Galaxy S II would justify a long-term iOS user forsaking his investment into one ecosystem and making the leap to another. Nonetheless, if you’re asking us what smartphone to buy today, unconstrained by such externalities, the Galaxy S II would be the clear choice. Sometimes it’s just as simple as that. Nokia Lumia 800 Here comes an unexpected Windows Phone into the boxing ring. Maybe Windows Phones are not as great as the Androids available in the market today, but they are picking up very quickly. In particular, the Nokia Lumia 800 seems to be the first Windows Phone 7 device aimed at competing seriously with the Androids and iPhones in the market. There are reports that the Nokia Lumia 800 is outselling all Androids in the UK, and a few high-profile tech blogs are calling it the king of Windows Phone. Considering this phone while evaluating iPhone 4S alternatives will not disappoint you, we assure you. Droid RAZR Remember the Motorola Droid that swept up the Android market a couple of years ago? The first two versions of the Motorola Droid were the best in the market, and they outperformed almost every other Android phone of their day.
With the invasion of Samsung Androids, Motorola lost its charm. With the recent release of the Droid RAZR, Motorola seems headed in the right direction to reclaim that prestige. The Droid RAZR is the thinnest smartphone available in the market, and its beauty is not just skin deep. Here is a review of the phone from the Engadget blog: the RAZR’s beauty is not only skin deep. The LTE radio, 1.2GHz dual-core processor and 1GB of RAM make sure this sleek number is ready to run with the big boys. It kept pace with, and in some cases clearly outclassed its high-end competition. Despite its deficiencies in the display department and underwhelming battery life, the RAZR looks to be a perfectly viable alternative when considering the similarly-pricey Rezound and Galaxy Nexus Further Reading So we have seen four alternatives to the iPhone 4S available in the market, and I would personally buy a Samsung smartphone if I didn’t have the money for an iPhone 4S. If you are interested in digging deeper into the alternatives, here are a few links to help you do more research: Apple iPhone 4S vs. Samsung Galaxy Nexus vs. Motorola Droid RAZR: How Their Specs Compare by Huffington Post Nokia Lumia 800 vs. iPhone 4S vs. Nexus Galaxy: Spec Smackdown by PC World Browser Speed Test: Nokia Lumia 800 vs. iPhone 4S vs. Samsung Galaxy S II by Gizmodo iPhone 4S vs Samsung Galaxy S II by Pocket-lint Apple iPhone 4S vs. Samsung Galaxy S II by Techie Buzz This article, titled “Looking For iPhone 4S Alternatives? Here Are 3 Smartphones You Should Consider,” was originally published at Tech Dreams. Grab our RSS feed or fan us on Facebook to get updates from us.

    Read the article

  • How to Collect Debug Info for Oracle SQL Developer

    - by thatjeffsmith
    In a perfect world, there would be no software bugs. Developers would always test their code. QA would find any scenarios and bugs the developers hadn’t already thought of. Regression tests would be complete and flawless. But alas, we can only afford to pay mere humans here, so we will have bugs from time to time. Or sometimes you are trying to do something the software wasn’t designed for, or perhaps your machine has exhausted its resources trying to build the un-buildable. When you run into problems, you will need help. Developers need your help so they can help you. Surprisingly enough, feedback like this isn’t very helpful: Your program isn’t working. How can I make it work? When you are ready to work with us on the SQL Developer OTN forum, you will most likely be asked to run SQL Developer and capture the output from the command console. In case you need help with this, here’s a step-by-step process you can follow in Windows 7 (it should work in XP too). Open a Windows command window Start – Run – CMD Once it’s open, click on the window icon and select ‘Defaults.’ Change the default buffer size to be something bigger, much bigger. Set the CMD window default buffer size HIGHER Note: you only need to do this once. Navigate to your SQL Developer Installation Folder Instead of running the ‘sqldeveloper.exe’ file in the root directory, we are going to go several sub-directories down. Find the ‘bin’ sub-directory and run the ‘sqldeveloper.exe’ there. When you do this, a CMD window will open, and then you’ll see the SQL Developer application load. The SQL Developer bin directory - run the tool from here and get a logging window Use SQL Developer as normal, until it ‘breaks’ or ‘hangs’ Now, you are ready to grab the nitty-gritty information that MIGHT tell the developer what is going wrong or happening in your scenario. Click back into the CMD window Send a Ctrl+Break or a Ctrl+Pause. If you’re on a newer laptop that doesn’t have this key, be sure to check the ‘Fn’ subset of keys. If you need to map the BREAK or PAUSE buttons, this article might help. You can also try the on-screen keyboard in Windows – just type ‘OSK’ in your START – RUN prompt. Copy the logging information from the command window – all of it We need this information, help us get it! Open a case with Oracle Support or Start a Thread on the Forums Or email me. If you’re on my blog reading this, it’s the least I can do to help. Now, before you hit ‘Send’ or ‘Post’ or ‘Submit’ – be sure to add a brief description of what you were doing in the application when you ran into the problem. Even if you were doing ‘nothing,’ let us know how many connections you had open, what windows were active, etc. The more you can tell us, the better your odds of getting a quick fix or at least an answer as to what is happening. Also include the following information: the version of SQL Developer you are running, the version of the JDK you are using, the OS you are using, and the version of Oracle you are connected to. Now, don’t be surprised if you get asked to upgrade to a supported configuration, say ‘version 3.1 and the 1.6 JDK.’ Supporting older versions of software is fun, and while we enjoy a challenge, it may be easier for you to upgrade your way out of the problem at hand.

    Read the article

  • Using CMS for App Configuration - Part 1, Deploying Umbraco

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2014/06/04/using-cms-for-app-configurationndashpart-1-deploy-umbraco.aspxSince my last post on using CMS for semi-static API content, How about a new platform for your next API… a CMS?, I’ve been using the idea for centralized app configuration, and this post is the first in a series that will walk through how to do that, step-by-step. The approach gives you a platform-independent, easily configurable way to specify your application configuration for different environments, with a built-in approval workflow, change auditing and the ability to easily rollback to previous settings. It’s like Azure Web and Worker Roles where you can specify settings that change at runtime, but it's not specific to Azure - you can use it for any app that needs changeable config, provided it can access the Internet. The series breaks down into four posts: Deploying Umbraco – the CMS that will store your configurable settings and the current values; Publishing your config – create a document type that encapsulates your settings and a template to expose them as JSON; Consuming your config – in .NET, a simple client that uses dynamic objects to access settings; Config lifecycle management – how to publish, audit, and rollback settings. Let’s get started. Deploying Umbraco There’s an Umbraco package on Azure Websites, so deploying your own instance is easy – but there are a couple of things to watch out for, so this step-by-step will put you in a good place. Create From Gallery The easiest way to get started is with an Azure subscription, navigate to add a new Website and then Create From Gallery. Under CMS, you’ll see an Umbraco package (currently at version 7.1.3): Configure Your App For high availability and scale, you’ll want your CMS on separate kit from anything else you have in Azure, so in the configuration of Umbraco I’d create a new SQL Azure database – which Umbraco will use to store all its content: You can use the free 20mb database option if you don’t have demanding NFRs, or if you’re just experimenting. You’ll need to specify a password for a SQL Server account which the Umbraco service will use, and changing from the default username umbracouser is probably wise. Specify Database Settings You can create a new database on an existing server if you have one, or create new. If you create a new server *do not* use the same username for the database server login as you used for the Umbraco account. If you do, the deployment will fail later. Think of this as the SQL Admin account that you can use for managing the db, the previous account was the service account Umbraco uses to connect. Make Tea If you have a fast kettle. It takes about two minutes for Azure to create and provision the website and the database. Install Umbraco So far we’ve deployed an empty instance of Umbraco using the Azure package, and now we need to browse to the site and complete installation. My Website was called my-app-config, so to complete installation I browse to http://my-app-config.azurewebsites.net:   Enter the credentials you want to use to login – this account will have full admin rights to the Umbraco instance. Note that between deploying your new Umbraco instance and completing installation in this step, anyone can browse to your website and complete the installation themselves with their own credentials, if they know the URL. Remote possibility, but it’s there. From this page *do not* click the big green Install button. 
If you do, Umbraco will configure itself with a local SQL Server CE database (.sdf file on the Web server), and ignore the SQL Azure database you’ve carefully provisioned and may be paying for. Instead, click on the Customize link and: Configure Your Database You need to enter your SQL Azure database details here, so you’ll have to get the server name from the Azure Management Console. You don’t need to explicitly grant access to your Umbraco website for the database though. Click Continue and you’ll be offered a “starter” website to install: If you don’t know Umbraco at all (but you are familiar with ASP.NET MVC) then a starter website is worthwhile to see how it all hangs together. But after a while you’ll have a bunch of artifacts in your CMS that you don’t want and you’ll have to work out which you can safely delete. So I’d click “No thanks, I do not want to install a starter website” and give yourself a clean Umbraco install. When it completes, the installation will log you in to the welcome screen for managing Umbraco – which you can access from http://my-app-config.azurewebsites.net/umbraco: That’s It Easy. Umbraco is installed, using a dedicated SQL Azure instance that you can separately scale, sync and backup, and ready for your content. In the next post, we’ll define what our app config looks like, and publish some settings for the dev environment.
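Looking ahead to the consuming side (part 3 of the series, the .NET client that reads settings through dynamic objects), here is a rough sketch of the idea. To be clear, this is my own illustration and not code from the series: the URL, the use of Newtonsoft.Json, and the property names are all assumptions.

using System.Net.Http;
using Newtonsoft.Json;

public static class AppConfig
{
    // Illustrative URL - it would point at the Umbraco template that renders
    // the config document type as JSON (the subject of the next post).
    private const string ConfigUrl = "http://my-app-config.azurewebsites.net/config/dev";

    public static dynamic Load()
    {
        using (var client = new HttpClient())
        {
            string json = client.GetStringAsync(ConfigUrl).Result;

            // Deserializing to dynamic lets callers write config.SmtpHost,
            // config.RetryCount, etc. without a strongly typed settings class.
            return JsonConvert.DeserializeObject<dynamic>(json);
        }
    }
}

With something along these lines, an app can re-read its settings at runtime instead of being redeployed for every change, which is exactly the scenario the series is aiming at.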

    Read the article

  • Announcing the June 2012 Release of the Ajax Control Toolkit

    - by Stephen.Walther
    I’m excited to announce the June 2012 release of the Ajax Control Toolkit. You can download the new release by visiting http://AjaxControlToolkit.CodePlex.com or (better) download the new release with NuGet: Install-Package AjaxControlToolkit The Ajax Control Toolkit continues to be super popular. The previous release (May 2012) had over 87,000 downloads from CodePlex.com and over 16,000 downloads from NuGet. That’s over 100,000 downloads in less than 2 months. Security Improvements for the HtmlEditorExtender Unfortunately, in the previous release, we made the HtmlEditorExtender too secure! We upgraded the version of the Microsoft Anti-Cross Site Scripting Library included in the Ajax Control Toolkit to the latest version (version 4.2.1) and the latest version turned out to be way too aggressive about stripping HTML. It not only strips dangerous tags such as <script> tags, it also strips innocent tags such as <b> tags. When the latest version of the Microsoft Anti-Cross Site Scripting Library is used with the HtmlEditorExtender, the library strips all rich content from the HtmlEditorExtender control which defeats the purpose of using the control. Therefore, we had to find a replacement for the Microsoft Anti-Cross Site Scripting Library. In this release, we’ve created a new HTML sanitizer built on the HTML Agility Pack. If you were using the AntiXssSanitizerProvider then you will need to substitute the HtmlAgilityPackSanitizerProvider. In particular, you need to modify the sanitizer sections in your Web.config file like this: <configuration> <configSections> <sectionGroup name="system.web"> <section name="sanitizer" requirePermission="false" type="AjaxControlToolkit.Sanitizer.ProviderSanitizerSection, AjaxControlToolkit" /> </sectionGroup> </configSections> <system.web> <sanitizer defaultProvider="HtmlAgilityPackSanitizerProvider"> <providers> <add name="HtmlAgilityPackSanitizerProvider" type="AjaxControlToolkit.Sanitizer.HtmlAgilityPackSanitizerProvider"></add> </providers> </sanitizer> </system.web> </configuration> We made one other backwards-breaking change to improve the security of the HtmlEditorExtender. We want to make sure that users don’t accidently use the HtmlEditorExtender without an HTML sanitizer by accident. Therefore, if you don’t configure a HTML sanitizer provider in the web.config file then you’ll get the following error: If you really want to use the HtmlEditorExtender without using an HTML sanitizer – for example, you are using the HtmlEditorExtender for an Intranet application and you trust all of your fellow employees – then you can explicitly indicate that you don’t want to enable HTML sanitization by setting the EnableSanitization property to false like this: <ajaxToolkit:HtmlEditorExtender TargetControlID="txtComments" EnableSanitization="false" runat="server" /> Please don’t ever set the EnableSanitization property to false for a public website. If you disable HTML sanitization then you are making your website an easy target for Cross-Site Scripting attacks. Lots of Fixes for the ComboBox Control In the latest release, we also made several important bug fixes and feature enhancements to the ComboBox control. 
Here’s the list of issues that we fixed:

• 22930 — ComboBox doesn’t close its drop-down list when losing input focus to another ComboBox control
• 23140 — ComboBox issues with Delete, Backspace, and Period
• 23142 — ComboBox SelectedIndex = -1 does not clear text
• 24440 — ComboBox postback on Enter
• 25295 — ComboBox problems when container is hidden at page load
• 25469 — ComboBox – MaxLength ignored
• 26686 — Backspace and Delete exception when optionList is null
• 27148 — ComboBox breaks if ClientIDMode is static

Fixes to Other Controls In this release, we also made bug fixes and enhancements to the UpdatePanelAnimation, Tabs, and Seadragon controls:

• 21310 — OnUpdated animation starts before OnUpdating has finished
• 26690 — Seadragon Control’s openTileSource() method doesn’t work (with fix) Title is required

We also fixed an issue with the Tabs control which would result in an InvalidOperation exception. Summary I want to thank the Superexpert team for the hard work that they put into this release. In particular, I want to thank them for their effort in researching, building, and writing unit tests for the HtmlAgilityPack HTML sanitizer.
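For readers wondering what an HTML sanitizer built on the HTML Agility Pack looks like in broad strokes, here is a minimal sketch of the approach: parse the markup, drop script-capable elements, and strip event-handler attributes. This is only an illustration, not the toolkit’s actual HtmlAgilityPackSanitizerProvider, which is considerably more thorough.

using System;
using System.Linq;
using HtmlAgilityPack;

public static class SimpleHtmlSanitizer
{
    // Blacklist-style demo: remove script-capable elements and on*/javascript: attributes.
    // A real sanitizer should instead work from a whitelist of allowed tags and attributes.
    public static string Sanitize(string html)
    {
        var doc = new HtmlDocument();
        doc.LoadHtml(html);

        var dangerous = doc.DocumentNode.SelectNodes("//script|//iframe|//object|//embed");
        if (dangerous != null)
        {
            foreach (var node in dangerous.ToList())
                node.Remove();
        }

        foreach (var node in doc.DocumentNode.Descendants().ToList())
        {
            foreach (var attr in node.Attributes.ToList())
            {
                var value = (attr.Value ?? string.Empty).TrimStart();
                if (attr.Name.StartsWith("on", StringComparison.OrdinalIgnoreCase) ||
                    value.StartsWith("javascript:", StringComparison.OrdinalIgnoreCase))
                {
                    attr.Remove();
                }
            }
        }

        return doc.DocumentNode.OuterHtml;
    }
}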

    Read the article

  • Hack Extension Files to Make Them Version-Compatible for Firefox

    - by Asian Angel
    A well known drawback in using Firefox is the problem with extension compatibility when a new major version is released. Whether it is for a new extension that you are trying for the first time or an old favorite we have a way to get those extensions working for you again. There are multiple reasons why you might want to choose this method to fix a non-compatible extension: You are uncomfortable with tweaking the “about:config” settings You prefer to maintain the original “about:config” settings in a pristine state and like having compatibility checking active You are looking to gain some “geek cred” Keep in mind that most extensions will work perfectly well with a new version of Firefox and simply have the “version compatibility number” problem. But once in a while there may be one that needs to have some work done on it by the extension’s author. The Problem Here is a perfect example of everyone’s least favorite “extension message”. This is the last thing that you need when all that you want is for your favorite extension (or a new one) to work on a fresh clean install. Note: This works nicely to “replace” non-compatible extensions already present in your browser if you are simply upgrading. Hacking the XPI File For this procedure you will need to manually download the extension to your hard-drive (right click on the extension’s “Install Button” and select “Save As”). Once you have done that you are ready to start hacking the extension. For our example we chose the “GCal Popup Extension”. The best thing to do is place the extension in a new folder (i.e. the Desktop or other convenient location) then unzip it just the same way that you would with any regular zip file. Once it is unzipped you will see the various folders and files that were in the “xpi file” (we had four files here but depending on the extension the number may vary). There is only one file that you need to focus on…the “install.rdf” file. Note: At this point you should move the original extension file to a different location (i.e. outside of the folder) so that it is no longer present. Open the file in “Notepad” so that you can change the number for the “maxVersion”. Here the number is listed as “3.5.*” but we needed to make it higher… Replacing the “5” with a “7” is all that we needed to do. Once you have entered your new “maxVersion” number save the file. At this point you will need to re-zip all of the files back into a single file. Make certain that you “create” a file with the “.zip file extension” otherwise this will not work. Once you have the new zip file created you will need to rename the entire file including the “file extension”. For our example we copied and pasted the original extension name. Once you have changed the name click outside of the “text area”. You will see a small message window like this asking for confirmation…click “Yes” to finish the process. Now your modified/updated extension is ready to install. Drag the extension into your browser to install it and watch that wonderful “Restart to complete the installation.” message appear. As soon as your browser starts you can check the “Add-ons Manager Window” and see the version compatibility numbers for the extension. Looking very very nice! And just like that your extension should be up and running without any problems. 
Conclusion If you are looking to try something new, gain some geek cred, or just want to keep your Firefox install as close to the original condition as possible this method should get those extensions working nicely for you again.

    Read the article

  • Improve the Quality of ePub eBooks with Sigil

    - by Matthew Guay
    Would you like to correct errors in your ePub formatted eBooks, or even split them into chapters and create a Table of Contents?  Here’s how you can with the free program Sigil. eBooks are increasingly popular with the rise of eBook readers and reading apps on mobile devices.  We recently showed you how to convert a PDF eBook to ePub format, but as you may have noticed, sometimes the converted file had some glitches or odd formatting.  Additionally, many of the many free ePub books available online from sources like the Project Guttenberg do not include a table of contents.  Sigil is a free application for Windows, OS X, and Linux that lets you edit ePub files, so let’s look at how you can use it to improve your eBooks. Note: Sigil took several moments to open files in our tests, and froze momentarily when we maximized the window.  Sigil is currently pre-release software in active development, so we would expect the bugs to be worked out in future versions.  As usual, only install if you’re comfortable testing pre-release software. Getting Started Download Sigil (link below), making sure to select the correct version for your computer.  Run the installer, and select your preferred setup language when prompted. After a moment the installer will appear; setup as normal. Launch Sigil when it’s finished installing.  It opens with a default blank ePub file, so you could actually start writing an eBook from scratch right here. Edit Your ePub eBooks Now you’re ready to edit your ePub books.  Click Open and browse to the file you want to edit. Now you can double-click any of the HTML or XHTML files on the left sidebar and edit them just like you would in Word. Or you can choose to view it in Code View and edit the actual HTML directly. The sidebar also gives you access to the other parts of the ePub file, such as Images and CSS styles. If your ePub file has a Table of Contents, you can edit it with Sigil as well.  Click Tools in the menu bar, and then select TOC Editor.  Strangely there is no way to create a new table of contents, but you can remove entries from existing one.   Convert TXT Files to ePub Many free eBooks online, especially older, out of copyright titles, are available in plain text format.  One problem with these files is that they usually use hard returns at the end of lines, so they don’t reflow to fill your screen efficiently. Sigil can easily convert these files to the more useful ePub format.  Open the text file in Sigil, and it will automatically reflow the text and convert it ePub.  As you can see in the screenshot below, the text in the eBook does not have hard line-breaks at the end of each line, and will be much more readable on mobile devices. Note that Sigil may take several moments opening the book, and may even become unresponsive while analyzing it.   Now you can edit your eBook, split it into chapters, or just save it as is.  Either way, make sure to select Save as to save your book as ePub format. Conclusion As mentioned before, Sigil seems to run slow at times, especially when editing large eBooks.  But it’s still a nice solution to edit and extend your ePub eBooks, and even convert plain text eBooks to the nicer ePub format.  Now you can make your eBooks work just like you want, and read them on your favorite device! If you feel comfortable editing HTML files, check out our article on how to edit ePub eBooks with your favorite HTML editor. 
Link Download Sigil from Google Code Download free ePub eBooks from Project Guttenberg

    Read the article

  • The ugly evolution of running a background operation in the context of an ASP.NET app

    - by Jeff
    If you’re one of the two people who has followed my blog for many years, you know that I’ve been going at POP Forums now for over almost 15 years. Publishing it as an open source app has been a big help because it helps me understand how people want to use it, and having it translated to six languages is pretty sweet. Despite this warm and fuzzy group hug, there has been an ugly hack hiding in there for years. One of the things we find ourselves wanting to do is hide some kind of regular process inside of an ASP.NET application that runs periodically. The motivation for this has always been that a lot of people simply don’t have a choice, because they’re running the app on shared hosting, or don’t otherwise have access to a box that can run some kind of regular background service. In POP Forums, I “solved” this problem years ago by hiding some static timers in an HttpModule. Truthfully, this works well as long as you don’t run multiple instances of the app, which in the cloud world, is always a possibility. With the arrival of WebJobs in Azure, I’m going to solve this problem. This post isn’t about that. The other little hacky problem that I “solved” was spawning a background thread to queue emails to subscribed users of the forum. This evolved quite a bit over the years, starting with a long running page to mail users in real-time, when I had only a few hundred. By the time it got into the thousands, or tens of thousands, I needed a better way. What I did is launched a new thread that read all of the user data in, then wrote a queued email to the database (as in, the entire body of the email, every time), with the properly formatted opt-out link. It was super inefficient, but it worked. Then I moved my biggest site using it, CoasterBuzz, to an Azure Website, and it stopped working. So let’s start with the first stupid thing I was doing. The new thread was simply created with delegate code inline. As best I can tell, Azure Websites are more aggressive about garbage collection, because that thread didn’t queue even one message. When the calling server response went out of scope, so went the magic background thread. Duh, all I had to do was move the thread to a private static variable in the class. That’s the way I was able to keep stuff running from the HttpModule. (And yes, I know this is still prone to failure, particularly if the app recycles. For as infrequently as it’s used, I have not, however, experienced this.) It was still failing, but this time I wasn’t sure why. It would queue a few dozen messages, then die. Running in Azure, I had to turn on the application logging and FTP in to see what was going on. That led me to a helper method I was using as delegate to build the unsubscribe links. The idea here is that I didn’t want yet another config entry to describe the base URL, appended with the right path that would match the routing table. No, I wanted the app to figure it out for you, so I came up with this little thing: public static string FullUrlHelper(this Controller controller, string actionName, string controllerName, object routeValues = null) { var helper = new UrlHelper(controller.Request.RequestContext); var requestUrl = controller.Request.Url; if (requestUrl == null) return String.Empty; var url = requestUrl.Scheme + "://"; url += requestUrl.Host; url += (requestUrl.Port != 80 ? ":" + requestUrl.Port : ""); url += helper.Action(actionName, controllerName, routeValues); return url; } And yes, that should have been done with a string builder. 
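Since the post calls out that string concatenation, here is roughly what a StringBuilder version of the helper would look like. This is just a sketch with the same behavior, not code from POP Forums itself, and the class and method names are mine.

using System.Text;
using System.Web.Mvc;

public static class ControllerUrlExtensions
{
    // Same output as FullUrlHelper above, built in a single buffer.
    public static string FullUrl(this Controller controller, string actionName,
                                 string controllerName, object routeValues = null)
    {
        var requestUrl = controller.Request.Url;
        if (requestUrl == null)
            return string.Empty;

        var helper = new UrlHelper(controller.Request.RequestContext);
        var url = new StringBuilder();
        url.Append(requestUrl.Scheme).Append("://").Append(requestUrl.Host);
        if (requestUrl.Port != 80)
            url.Append(':').Append(requestUrl.Port);
        url.Append(helper.Action(actionName, controllerName, routeValues));
        return url.ToString();
    }
}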
That helper is useful for sending out the email verification messages, too. As clever as I thought I was with this, I was using a delegate in the admin controller to format these unsubscribe links for tens of thousands of users. I passed that delegate into a service class that did the email work: Func<User, string> unsubscribeLinkGenerator = user => this.FullUrlHelper("Unsubscribe", AccountController.Name, new { id = user.UserID, key = _profileService.GetUnsubscribeHash(user) }); _mailingListService.MailUsers(subject, body, htmlBody, unsubscribeLinkGenerator); Cool, right? Actually, not so much. If you look back at the helper, this delegate will then depend on the controller context to learn the routing and format for the URL. As you might have guessed, those things were turning null after a few dozen formatted links, when the original request to the admin controller went away. That this wasn’t already happening on my dedicated server is surprising, but again, I understand why the Azure environment might be eager to reclaim a thread after servicing the request. It’s already inefficient that I’m building the entire email for every user, but going back to check the routing table for the right link every time isn’t a win either. I put together a little hack to look up one generic URL, and use that as the basis for a string format. If you’re wondering why I didn’t just use the curly braces up front, it’s because they get URL formatted: var baseString = this.FullUrlHelper("Unsubscribe", AccountController.Name, new { id = "--id--", key = "--key--" }); baseString = baseString.Replace("--id--", "{0}").Replace("--key--", "{1}"); Func<User, string> unsubscribeLinkGenerator = user => String.Format(baseString, user.UserID, _profileService.GetUnsubscribeHash(user)); _mailingListService.MailUsers(subject, body, htmlBody, unsubscribeLinkGenerator); And wouldn’t you know it, the new solution works just fine. It’s still kind of hacky and inefficient, but it will work until this somehow breaks too.
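Backing up to the garbage collection fix for a moment: the post says the mail thread survived once it was moved to a private static variable in the class, but doesn’t show that class. A minimal sketch of the pattern might look like the following; the type and member names are illustrative, not POP Forums’ actual code.

using System;
using System.Threading;

public class MailingListService
{
    // A static field keeps a rooted reference to the worker, so it is not
    // collected when the web request that started it goes out of scope.
    // It is still lost on an app pool recycle, as the post acknowledges.
    private static Thread _mailWorker;

    public void MailUsers(string subject, string body, string htmlBody,
                          Func<int, string> unsubscribeLinkForUserId)
    {
        _mailWorker = new Thread(() =>
        {
            // Enumerate the subscribed users, format each message with its
            // unsubscribe link, and write the queued email rows here.
        })
        {
            IsBackground = true
        };
        _mailWorker.Start();
    }
}

The WebJobs work mentioned at the top of the post is the longer-term answer, since it moves this kind of processing out of the web process entirely.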

    Read the article

  • Not so long ago in a city not so far away by Carlos Martin

    - by Maria Sandu
    This is the story of how the EMEA Presales Center turned an Oracle intern into a trusted technology advisor for both Oracle’s Sales and customers. It was the summer of 2011, and as I was finishing my Computer Engineering studies as well as my internship at Oracle, I was offered what could possibly be THE dream job for any young European Computer Engineer. Apart from that, it also seemed like the role was particularly tailored to me as I could leverage almost everything I learned at University and during the internship. And all of it in one of the best cities to live in, not only in my home country but arguably in Europe: Malaga! A day at EPC As part of the EPC Technology pillar, and later on completely focused on WebCenter, there was no way to describe a normal day on the job as each day had something unique. Some days I was researching documentation in order to elaborate accurate answers for a customer’s question within a Request for Information or Proposal (RFI/RFP), other days I was doing heavy programming in order to bring a Proof of Concept (PoC) for a customer to life, and last but not least, some days I presented to the customer via webconference the demo I had built for them over the previous weeks. So as you can see, the role involves research, development and presentation; could you ask for more? Well, don’t worry because there IS more! Internationality As the organization’s name suggests, EMEA Presales Center, it is the Center of Presales within Europe, Middle East and Africa, so I got the chance to work with great professionals from all these regions, expanding my network and learning things from one country to apply them to others. In addition to that, the teams based in the Malaga office are comprised of many young professionals hailing mainly from Western and Central European countries (although there are a couple of exceptions!) with very different backgrounds and personalities which guaranteed many laughs and stories during lunch or coffee breaks (or even while working on projects!). Furthermore, having EPC offices in Bucharest and Bangalore and thanks to today’s tele-presence technologies, I was working every day with people from India or Romania as if they were sitting right next to me and the bonding with them got stronger day by day. Career development Apart from the research and self-study I mentioned earlier, one of the EPC’s Key Performance Indicators (KPIs) is that 15% of your time is spent on training, so you get lots and lots of training in order to develop both your technical product knowledge and your presentation, negotiation and other soft skills. Sometimes the training is via webcast, sometimes the trainer comes to the office and sometimes, the best times, you get to travel abroad in order to attend a training, which also helps you to further develop your network by meeting face to face with many people you only know from some email or instant messaging interaction. And as the months go by, your skills improving at a very fast pace, your relevance increasing with each new project you successfully deliver, it’s only a matter of time (and a bit of self-promoting!) before you get the attention of the manager of a more senior team and are offered the opportunity to take a new step in your professional career. For me it took 2 years to move to my current position, Technology Sales Consultant at the Oracle Direct organization.
During those 2 years I had built a good relationship with the Oracle Direct Spanish sales team and sales managers, who are also based in the Malaga office. I supported their former Sales Consultant in a couple of presentations and demos, and they were very happy with my overall performance and attitude, so even before the position eventually became vacant, I got a heads-up from them in advance that their current Sales Consultant was going to move to a different position. To me it felt like a natural step; same as when I joined EPC, I had at least 50% of the “homework” already done but wanted to experience that extra 50% to add new product and soft skills to my arsenal. The rest is history: I’ve been in the role for more than half a year as I’m writing this, have already achieved some important wins, gained a lot of trust and confidence in front of customers, and broadened my view of Oracle’s Fusion Middleware portfolio. I look back at the 2 years I spent in EPC and think: “boy, I’d recommend that experience to absolutely anyone with the slightest interest in IT; there are so many different things you can do, as there are so many different kinds of roles you can end up taking, thanks to the experience gained at EPC”

    Read the article

  • Oracle SQL Developer v3.2.1 Now Available

    - by thatjeffsmith
    Oracle SQL Developer version 3.2.1 is now available. I recommend that everyone now upgrade to this release. It features more than 200 bug fixes, tweaks, and polish applied to the 3.2 edition. The high profile bug fixes submitted by customers and users on our forums are listed in all their glory for your review. I want to highlight a few of the changes though, as I recognize many of you lack the time and/or patience to ‘read the docs.’ That would include me, which is why I enjoy writing these kinds of blog posts. I’m lazy – just like you! No more artificial line breaks between CREATE OR REPLACE and your PL/SQL In versions 3.2 and older, when you pull up your stored procedural objects in our editor, you would see a line break inserted between the CREATE OR REPLACE and then the body of your code. In version 3.2.1, we have removed the line break. 3.1 3.2.1 Trivia Did You Know? The database doesn’t store the ‘CREATE’ or ‘CREATE OR REPLACE’ bit of your PL/SQL code in the database. If we look at the USER_SOURCE view, we can see that the code begins with the object name. So the CREATE OR REPLACE bit is ‘artificial’ The intent is to give you the code necessary to recreate your object – and have it ‘compile’ into the database. We pretty much HAVE to add the ‘CREATE OR REPLACE.’ From now on it will appear inline with the first line of your code. Exporting Tables & Views When exporting data from your tables or views, previous versions of SQL Developer presented a 3 step wizard. It allows you to choose your columns and apply data filters for what is exported. This was kind of redundant. The grids already allowed you to select your columns and apply filters. Wouldn’t it be more intuitive AND efficient to just make the grids behave in a What You See Is What You Get (WYSIWYG) fashion? In version 3.2.1, that is exactly what will happen. The wizard now only has two steps and the grid will export the data and columns as defined in the visible grid. Let the grid properties define what is actually exported! And here is what is pasted into my worksheet: "BREWERY"|"CITY" "3 Brewers Restaurant Micro-Brewery"|"Toronto" "Amsterdam Brewing Co."|"Toronto" "Ball Brewing Company Ltd."|"Toronto" "Big Ram Brewing Company"|"Toronto" "Black Creek Historic Brewery"|"Toronto" "Black Oak Brewing"|"Toronto" "C'est What?"|"Toronto" "Cool Beer Brewing Company"|"Toronto" "Denison's Brewing"|"Toronto" "Duggan's Brewery"|"Toronto" "Feathers"|"Toronto" "Fermentations! - Danforth"|"Toronto" "Fermentations! - Mount Pleasant"|"Toronto" "Granite Brewery & Restaurant"|"Toronto" "Labatt's Breweries of Canada"|"Toronto" "Mill Street Brew Pub"|"Toronto" "Mill Street Brewery"|"Toronto" "Molson Breweries of Canada"|"Toronto" "Molson Brewery at Air Canada Centre"|"Toronto" "Pioneer Brewery Ltd."|"Toronto" "Post-Production Bistro"|"Toronto" "Rotterdam Brewing"|"Toronto" "Steam Whistle Brewing"|"Toronto" "Strand Brasserie"|"Toronto" "Upper Canada Brewing"|"Toronto" JUST what I wanted And One Last Thing Speaking of export, sometimes I want to send data to Excel. And sometimes I want to send multiple objects to Excel – to a single Excel file that is. In version 3.2.1 you can now do that. Let’s export the bulk of the HR schema to Excel, with each table going to it’s own workbook in the same worksheet. Select many tables, put them in in a single Excel worksheet If you try this in previous versions of SQL Developer it will just write the first table to the Excel file. This is one of the bugs we addressed in v3.2.1. 
Here is what the output Excel file looks like now: Many tables - Many workbooks in an Excel Worksheet I have a sneaky suspicion that this will be a frequently used feature going forward. Excel seems to be the cornerstone of many of our popular features. Imagine that!

    Read the article

  • What's going on with INETA and the Regional Speakers Bureau?

    - by Chris Williams
    For those of you that have been waiting patiently (and not so patiently) I'm happy to say that we're very near completion on some changes/enhancements/improvements that will allow us to finally go live with the INETA Regional Speakers Bureau. I know quite a few of you have already registered, which is great (though some of you may need to come back and update your info) and we've had a few folks submit requests, mostly in a test capacity, but soon we'll be up and live. Here's how it breaks down. Be sure to read this, because things have changed a bit from when we initially announced it. 1. The majority of our speaker/event funding is going into the Regional Speakers Bureau.  The National Bureau still exists, but it's a good bit smaller than it was before, and it's not an "every group" benefit anymore. We'll be using the National Bureau as more of a strategic task force, targeting high impact events and areas that need some community building love from INETA. These will be identified and handled on a case by case basis, and may include more than just user group events. 2. You're going to get more events per group, per year than you did before. Not only are we focusing more resources on this program, but we're also making a lot of efforts to use it more effectively. With the INETA Regional Speakers Bureau, you should be able to get 2-3 INETA speakers per year, on average. Not every geographical area will have exactly the same experience, but we're doing the best we can. 3. It's not a farm team program for the National Bureau. Unsurprisingly, I managed to offend a number of people when I previously made the comment that the Regional Speakers Bureau program was a farm team or stepping stone to the National Bureau. It was a poor choice of words.  Anyone can participate in the Regional Speakers Bureau, and I look forward to working with all of you. 4. There is assistance for your efforts. The exact final details are still being hammered out, but expect it to look something like this: (all distances listed are based on a round trip) Distances < 120 miles = $0 121 miles - 240 miles = $50 (effectively 1 to 2 hours, each way) 241 miles - 360 miles = $100 (effectively 2 to 3 hours, each way) 361 miles - 480 miles = $200 (effectively 3 to 4 hours, each way) For those of you who travel a lot, we're working on a solution to handle group visits when you're away from home. These will (for now) be handled on a case by case basis. 5. We're going to make it as easy as possible to work with the program. In order to do this, we need a few things from you. For speakers, that means your home address. It also means (maybe) filling out a simple 1 line expense report via the INETA website. For user groups, it means making sure your meeting address is up to date as well. 6. Distances will be automatically calculated from your home of record to the user group event and back. We realize that this is not a perfect solution to every instance, but we're not paying you to speak at an event, and you won't be taxed on this money. It's simply some assistance to make your community efforts easier. Our way of saying thanks for everything you do. 7. Sounds good so far, what's the catch? There's always a catch, right? In this case there are two of them: 1) At this time, Microsoft employees are welcome to use the website to line up speaking engagements with user groups, but are not eligible for financial assistance. 
2) Anyone can register and use the website to line up speaking engagements with user groups, however you must receive and maintain a net score of 3+ positive ratings (we're implementing a thumbs up / thumbs down system) in order to receive financial assistance. These ratings are provided by the User Group leaders after the meeting has taken place. 8. Involvement by the User Group leaders is a key factor in the success of this program. Your job isn't done once you request a speaker. After you've had your meeting, it's critical that you go back to the website and take a very small survey. Doing this ensures that the speaker gets rated (and compensated if eligible) and also ensures that you can make another request, since you won't be able to make a new request if you have an old one outstanding. 9. What about Canada? We're definitely working on that. Unfortunately nothing new to report on that front, other than to say that we're trying. So... this is where things stand currently. We're working very quickly to get this in place and get speakers and groups together. If you have any questions, please leave a comment below and I'll answer them as quickly as possible. If I've forgotten anything, or if things change, I'll update it here. Thanks, Chris G. Williams INETA Board of Directors

    Read the article

  • Five Reasons to Attend PLM Summit 2013: The Conference Formerly Known as AGILITY

    - by Terri Hiskey
    As we approach the end of 2012, we are also closing in on the last couple of weeks that Agile customers and prospects can register for the upcoming PLM Summit 2013 for the bargain early bird rate of $195. Register now to secure your spot! The Conference Formerly Known as AGILITY... Long-time Agile customers may remember AGILITY, which was Agile's PLM customer conference held on an annual basis prior to Oracle's acquisition of Agile in 2007. In February 2012, due to feedback we received from our Agile PLM community, we successfully resurrected the AGILITY conference and renamed it the PLM Summit. The PLM Summit was so well received and well attended that we are doing it again in 2013. This upcoming PLM Summit is being co-located in San Francisco under the overarching banner of the Oracle Value Chain Summit, and will be held alongside several other Oracle customer conferences that cover a range of value chain solutions, including Value Chain Planning, Value Chain Execution, Procurement, Maintenance and Manufacturing. This setup offers PLM attendees the best of all worlds--the opportunity to participate and learn about PLM in smaller, focused sessions by product and by industry, while also giving attendees the chance to see how PLM works together with other critical enterprise applications that address other important aspects of the value chain. Top Five Reasons to Attend the PLM Summit 2013: In the spirit of all of the end-of-the-year lists that are currently popping up, here is a list of the top five reasons to attend the PLM Summit, for anyone out there who needs a little extra encouragement to register:
    1. The Best Opportunities for Customer Networking. The PLM Summit offers attendees numerous opportunities to learn and network with fellow Agile users. Customer stories are featured in keynote and breakout presentations, and the schedule allows for plenty of networking time during breakfasts, lunches, breaks and dinners. Customer networking is the number one reason that Agile users attend the PLM Summit. Read what attendees thought of the most recent PLM Summit: "Hearing about the implementation of Agile products from a customers’ perspective is invaluable." - Director of Quality Assurance & Regulatory Affairs, leading medical device manufacturer. "Understanding the scope of other companies’ projects and the lessons learned made attending this event well worth my time." - Director of Test Engineering, global industrial manufacturer. "The most beneficial thing about attending this event is the opportunity to network with other customers with similar experiences." - Director of Business Process Improvement, leading high technology company. Come to the PLM Summit and play an active role within the PLM community: swap war stories and business cards, connect on LinkedIn and Facebook, share your stories and discuss the sessions from each day. Register now!
    2. It's Educational! The PLM Summit is the premier educational event for anyone in the Agile PLM community. There are nearly 40 PLM-focused, in-depth educational sessions led by Agile PLM experts, customers and partners that will cover a range of specific product and industry-focused topics. Keynotes will give attendees a broad overview of the entire Agile PLM footprint, while sessions will delve deeply into specific product functionality and customer case studies. There is truly something for everyone. Check out the latest agenda for a view of all the sessions.
    3. Visit with the PLM Partner Community. Our partners play a significant and important role within the Agile PLM community. At the PLM Summit, attendees will be able to meet and mingle with several of the top Oracle Agile PLM partners, including: Deloitte, Domain, GoEngineer, Hitachi Consulting, IBM, Kalypso, KPIT Cummins (CPG Solutions), Perception Software, Verdant, Xavor and ZeroWaitState. Go here for a complete list of all the Value Chain Summit sponsors.
    4. See Agile PLM in Action at our Dedicated PLM Demo Pods. At the PLM Summit, attendees will have the chance to see Agile PLM in action at dedicated PLM demo pods, manned by expert members of our Agile PLM team. If you would like to see specific Agile PLM functionality up close, if you have a question on how to extend the scope of your current implementation, or if you want a better understanding of how to leverage Agile PLM to address specific use cases, stop by one of the Agile PLM demo pods and engage the Agile PLM experts on hand at the PLM Summit.
    5. Spend Some Time in Lovely San Francisco. Still on the fence about the upcoming PLM Summit? Remember that it is being held in San Francisco, which is a fantastic city for a getaway. After spending time learning and networking about PLM, take an extra day or two to escape the dreary winter and enjoy the beautiful scenery and the unique activities offered only by the City by the Bay. You will walk away from the conference not only with renewed excitement about Agile PLM, but feeling rejuvenated in general.

    Read the article

  • Frameskipping in Android gameloop causing choppy sprites (Open GL ES 2.0)

    - by user22241
    I have written a simple 2D platform game for Android and am wondering how one deals with frame skipping? Are there any alternatives? Let me explain further. My game loop allows rendering to be skipped if game updates and rendering do not fit into my fixed time slice (16.667 ms). This allows my game to run at identically perceived speeds on different devices, and this works great: things do run at the same speed. However, when the game loop skips a render call for even one frame, the sprite glitches. And thinking about it, why wouldn't it? You're seeing a sprite move, say, an average of 10 pixels every frame (roughly every 16 ms); then suddenly there is a pause of roughly 33 ms (two frames' worth), and the sprite appears to jump 20 pixels. When this happens 3 or 4 times in close succession, the result is very ugly and not something I want in my game. Therefore, my question is how does one deal with these 'pauses' and 'jumps'? I've read every article on game loops I can find (see below) and my loops are even based off of code from these articles. The articles specifically mention frame skipping, but they don't make any reference to how to deal with the visual glitches that result from it. I've attempted various game loops. My loop must have a mechanism in place to allow rendering to be skipped to keep game speed constant across multiple devices (or an alternative, if one exists). I've tried interpolation, but this doesn't eliminate this specific problem (although it looks like it may mitigate the issue slightly, as when it eventually draws the sprite it 'moves it back' between the old and current positions, so the 'jump' isn't so big). I've also tried a form of extrapolation which does seem to keep things considerably smoother, but I find it to be next to completely useless because it plays havoc with my collision detection (even when drawing with a 'display only' coordinate - see extrapolation-breaks-collision-detection). I've tried a loop that uses Thread.sleep when drawing / updating completes with time left over, with no frame skipping in this one; again fairly smooth, but it runs differently on different devices, so no good. And I've tried spawning my own, third thread for logic updates, but this was extremely messy to deal with and the performance really wasn't good (upon reading tons of forums, most people seem to agree a 2-thread loop (so UI and GL threads) is safer / easier). Now if I remove frame skipping, then all seems to run nice and smooth, with or without inter/extrapolation. However, this isn't an option because the game then runs at different speeds on different devices as it falls behind from not being able to render fast enough. I'm running logic at 60 ticks per second and rendering as fast as I can. I've read, as far as I can tell, every article out there. I've tried the loops from My Secret Garden and Fix your timestep. I've also read: Against the grain, deWITTERS Game Loop, plus various other articles on game loops. A lot of the others are derived from the above articles or just copied word for word. These are all great, but they don't touch on the issues I'm experiencing. I really have tried everything I can think of over the course of a year to eliminate these glitches, to no avail, so any and all help would be appreciated.
A couple of examples of my game loops (Code follows): From My Secret Room public void onDrawFrame(GL10 gl) { //Re-set loop counter back to 0 to start counting again loops=0; while(System.currentTimeMillis() > nextGameTick && loops < maxFrameskip) { SceneManager.getInstance().getCurrentScene().updateLogic(); nextGameTick += skipTicks; timeCorrection += (1000d / ticksPerSecond) % 1; nextGameTick += timeCorrection; timeCorrection %= 1; loops++; } extrapolation = (float)(System.currentTimeMillis() + skipTicks - nextGameTick) / (float)skipTicks; render(extrapolation); } And from Fix your timestep double t = 0.0; double dt = 0.01; double currentTime = System.currentTimeMillis()*0.001; double accumulator = 0.0; double newTime; double frameTime; @Override public void onDrawFrame(GL10 gl) { newTime = System.currentTimeMillis()*0.001; frameTime = newTime - currentTime; if ( frameTime > (dt*5)) //Allow 5 'skips' frameTime = (dt*5); currentTime = newTime; accumulator += frameTime; while ( accumulator >= dt ) { SceneManager.getInstance().getCurrentScene().updateLogic(); previousState = currentState; accumulator -= dt; } interpolation = (float) (accumulator / dt); render(interpolation); }
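    A rough sketch of the interpolation idea mentioned above (rendering a position blended between the previous and current logic states) might look like this in C#; the Sprite type, its fields and the DrawAt call are placeholders rather than code from the question:
    // Illustrative sketch: render between the last two logic states using
    // alpha = accumulator / dt from the fixed-timestep loop above.
    public sealed class Sprite
    {
        public float PrevX, PrevY;   // position at the previous logic tick
        public float CurrX, CurrY;   // position at the most recent logic tick

        public void Draw(float alpha)
        {
            // Blend between the two known positions instead of drawing CurrX/CurrY
            // directly, so a skipped render shows up as a smaller visual step.
            float drawX = PrevX + (CurrX - PrevX) * alpha;
            float drawY = PrevY + (CurrY - PrevY) * alpha;
            DrawAt(drawX, drawY);
        }

        private void DrawAt(float x, float y)
        {
            // Placeholder: submit the sprite's quad at (x, y) to the renderer here.
        }
    }
    The trade-off, as noted above, is that the drawn position lags the simulated one by up to one tick, so collision logic should keep using the simulation coordinates.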

    Read the article

  • How to install iodbc in Ubuntu 12.10

    - by Hanynowsky
    I need to install the library iodbc (depends on libodbc2) in my Quantal machine. Yet there is creepy dependency problem. This was replaced somehow by unixodbc which I don't have, installed. Here is what I get when I try to install : sudo apt-get install libiodbc2-dev Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: libiodbc2-dev : Depends: libiodbc2 (= 3.52.7-2build2) but it is not going to be installed Depends: iodbc (= 3.52.7-2build2) but it is not going to be installed E: Unable to correct problems, you have held broken packages. I can tell that iodbc conflicts with odbcinst. Yet, I cannot remove it due to the following: sudo apt-get remove odbcinst Reading package lists... Done Building dependency tree Reading state information... Done The following packages were automatically installed and are no longer required: akonadi-backend-mysql calligra-l10n-engb kde-l10n-engb kdevelop-l10n kdevelop-php-docs-l10n kdevelop-php-l10n libakonadi-kabc4 libakonadi-notes4 libakonadiprotocolinternals1 libboost-thread1.49.0 libdmtx0a libgpgme++2 libkcalcore4 libkdgantt2 libkholidays4 libkimap4 libkldap4 libkmbox4 libkmime4 libkolabxml0 libkpgp4 libkresources4 libksieve4 libprison0 libqgpgme1 libqrencode3 libxerces-c3.1 Use 'apt-get autoremove' to remove them. The following packages will be REMOVED: akonadi-server k3b k3b-i18n katepart kde-runtime kdelibs-bin kdelibs5-plugins kdepim-runtime kdepim-strigi-plugins kdepimlibs-kio-plugins kdoctools kget kmag kmail kmix kmousetool ksystemlog kubuntu-debug-installer language-pack-kde-ar language-pack-kde-en libakonadi-calendar4 libakonadi-contact4 libakonadi-kcal4 libakonadi-kde4 libakonadi-kmime4 libcalendarsupport4 libincidenceeditorsng4 libk3b6 libkabc4 libkactivities-bin libkactivities6 libkalarmcal2 libkatepartinterfaces4 libkcal4 libkcalutils4 libkcddb4 libkde3support4 libkdepim4 libkdepimdbusinterfaces4 libkdewebkit5 libkemoticons4 libkfile4 libkgapi0 libkhtml5 libkio5 libkleo4 libkmanagesieve4 libkmediaplayer4 libknewstuff3-4 libknotifyconfig4 libkolab0 libkonq-common libkonq5abi1 libkontactinterface4 libkparts4 libkpimidentities4 libkpimtextedit4 libkpimutils4 libkprintutils4 libksieveui4 libktexteditor4 libktnef4 libktorrent4 libkworkspace4abi2 libkxmlrpcclient4 libmailcommon4 libmailimporter4 libmailtransport4 libmessagecomposer4 libmessagecore4 libmessagelist4 libmessageviewer4 libmicroblog4 libnepomuk4 libnepomukcore4abi1 libnepomukquery4a libnepomuksync4 libnepomukutils4 libplasma3 libsoprano4 libtemplateparser4 libvirtodbc0 nepomuk-core odbcinst odbcinst1debian2 plasma-scriptengine-javascript qapt-batch soprano-daemon virtuoso-minimal 0 upgraded, 0 newly installed, 89 to remove and 0 not upgraded. After this operation, 126 MB disk space will be freed. Do you want to continue [Y/n]? Other info: Why do the "iodbc" and "libmyodbc" packages conflict with each other? The root problem is that I need to use a new feature introduced in the latest MySQL Workbench builds (DB Migration) which uses odbc for that matter. Here is what Mysql doc says about it : Linux The Migration Wizard uses iODBC as a driver manager for all of its ODBC connections in Linux. 
This may give you some troubles because most Linux distributions provide ODBC drivers compiled against unixODBC. This is another driver manager not supported by MySQL Workbench so you won’t be able to use those drivers unless you compile them against iODBC. Here’s what you should do. Make sure that you have iODBC installed. If you are using Debian, Ubuntu or another Debian based distro, type this command in your terminal: $ sudo apt-get install iodbc libiodbc2-dev libpq-dev libssl-dev For RPM based distros (RedHat, Fedora, etc.) type this command instead of the previous one: $ sudo yum install iodbc iodbc-dev libpqxx-devel openssl-devel Now we need to install the PostgreSQL ODBC drivers. Download the psqlODBC source tarball from here. Use the latest version available for download. As of this writing the latest version corresponds to the file psqlodbc-09.01.0200.tar.gz. Extract this tarball to a directory in your hard drive and open a terminal and cd into that directory. Type this in the terminal window: $ ./configure --with-iodbc --enable-pthreads $ make $ sudo make install Verify that you have the file psqlodbcw.so in the /usr/local/lib directory. This package seems to pose the problem probably: dpkg -s odbcinst1debian2 Package: odbcinst1debian2 Status: install ok installed Priority: optional Section: libs Installed-Size: 241 Maintainer: Ubuntu Developers <[email protected]> Architecture: amd64 Multi-Arch: same Source: unixodbc Version: 2.2.14p2-5ubuntu4 Replaces: unixodbc (<< 2.1.1-2) Depends: libc6 (>= 2.14), libltdl7 (>= 2.4.2), odbcinst Pre-Depends: multiarch-support Breaks: libiodbc2, libmyodbc (<< 5.1.6-2), odbc-postgresql (<< 1:09.00.0310-1.1), tdsodbc (<< 0.82-8) Conflicts: odbcinst1, odbcinst1debian1 Description: Support library for accessing odbc ini files This package contains the libodbcinst library from unixodbc, a library used by ODBC drivers for reading their configuration settings from /etc/odbc.ini and ~/.odbc.ini. . Also contained in this package are the driver setup plugins, which describe the features supported by individual ODBC drivers. Homepage: http://www.unixodbc.org/ Original-Maintainer: Steve Langasek <[email protected]> I have no broken packages:

    Read the article

  • Microsoft TechEd 2010 - Day 2 @ Bangalore

    - by sathya
    Microsoft TechEd 2010 - Day 2 @ Bangalore. Today is day 2 @ Microsoft TechEd 2010. We had a lot of technical sessions; as usual there were many tracks going on side by side, and I was attending the Web Simplified track, which comprised the following sessions:
    Developing a scalable Media Application using ASP.NET MVC - This was somewhat advanced stuff. Anyways, I couldn't understand much because this was not my piece of cake and I haven't worked on this before.
    ASP.Net MVC Unplugged - This was really great because this session covered the basics of MVC, showing what Model, View and Controller are and how they work, and the speaker went into the details of the same.
    Building RESTful Applications with the Open Data Protocol - There were some concepts explained about this, from the basics of how to build RESTful Services up to some advanced configurations of the same.
    Developing Scalable Web Applications with AppFabric Caching - This session showed the integration of AppFabric with .NET Web Applications. Instead of using InProc sessions, we can use AppFabric as a substitute for caching and out-of-proc session storage without writing code, just doing a little bit of configuration, which brings high scalability and performance to our applications. (But unfortunately there were no demos for this session.)
    Deep Dive: WCF RIA Services - This session was also an interactive one; the speaker presented from the basics of WCF, took a Book Store application as a sample and explained in detail the concepts of linking with RIA Services.
    Apart from these sessions, in between there were some small events in the breaks, like discussions about technology and innovations, music, jokes, mimicry, etc. And on doing all these things, the developers were given some kool gifts / goodies like USBs, T-Shirts, etc. And today I got a chance to do the following certification: (70-562) Microsoft Certified Technology Specialist in .NET 3.5 Web Applications. Since I already have an MCTS in .NET 2.0, I wanted to do an MCPD, and for doing the same I was required to update my MCTS to the .NET 3.5 framework. I did that, cleared it, and am now an MCTS in .NET 3.5 Web Apps. On doing this I got a T-Shirt, and they gave me something called Learning $, worth 30$. And in various stalls, for attending each quiz or game or for referrals, we got some Learning $ which we could redeem later based on our total Learning $. I got 105 $, which I was able to redeem for a Microsoft Learning backpack, 1 free Microsoft certification offer, a laptop light and e-learning content activated. And after all these sessions and small events, we had something called Demo Extravaganza, like I mentioned yesterday. This was a great fun-filled event with lots of goodies for the attendees. There was a lucky draw which enabled 2 attendees to get Netbooks (sponsored by Intel) and 1 attendee to get an Xbox (sponsored by Citrix). After choosing the raffle in the lucky draw they placed it on a device called Microsoft Surface, which is a kind of big touch-screen device; on putting the raffle on it, it detected the attendee's code and intelligently said how many sessions that person had attended, and if he had attended more than 5 he got a Netbook. This was coded by a guy called Imran.
Apart from that, they showed demos on:
    Research by 2 Tamil Nadu students from Krishna Arts and Science College, who had taken 1200 photographs of their college from different angles and put them up in Bing Maps using Silverlight, linked with Photosynth, which showed a 3D view of their college based on the photos they uploaded.
    Research by Microsoft on panoramic HD views of images. One young guy from Microsoft Research showed a demo of this on Srivilliputhur Andal Temple in Tamil Nadu and its history, with a panoramic view of the temple and the nearby places, with narration of the historical information and with videos embedded in it, with high-definition images which we can zoom to a very detailed level.
    A demo of a business app with Silverlight, Business Intelligence (BI) and maps integrated. It showed the sales of a particular product across locations.
    Some kool demos by 2 geeks who used robots to show their development talents: 2 robots fought with each other, and 2 robots danced in sync to the A.R. Rehman song Humma Humma...
    A dream home project by Raman. He is currently using the same in his home too - robots are controlling his home currently. They showed a video on this. Here is the list of activities that the robot does for him: When he reads a book, the robot automatically scans it and shows the image of that person on the screen (TV or computer) in front of him. It shows a Wikipedia entry about that person. It says that person is not in LinkedIn - do you want to add him? If he sees IPL match news in the book and smiles, it understands he is interested in that, opens a related website and shows the current game and the scorecard. It cooks for him. It cleans the room for him whenever he leaves the house. When he is doing something, if some intruder comes inside his house, his computer automatically switches his screen to show video of the person coming inside. When he wakes up it automatically opens up the system, loads his mails and the news by the side, etc.
    Some demos on Microsoft Pivot. This was in Live Labs but it is now available at getpivot.com; it's a pivoting of pictorial data based on categories and filters on the searches that we do.
    And finally, on filling up some feedback forms we got T-Shirts and Microsoft Visual Studio 2010 Training Kit CDs. What's more on TechEd??? Stay tuned!!! Will update you soon on the other happenings!! PS: I typed a lot of content for more than an hour, but I pressed backspace and it went to the previous page and all my content was lost; I was not able to retrieve it and typed everything again.

    Read the article

  • Stand-Up Desk 2012 Update

    - by BuckWoody
    One of the more popular topics here on my technical blog doesn't have to do with technology, per-se - it's about the choice I made to go to a stand-up desk work environment. If you're interested in the history of those, check here: Stand-Up Desk Part One Stand-Up Desk Part Two I have made some changes and I was asked to post those here.Yes, I'm still standing - I think the experiment has worked well, so I'm continuing to work this way. I've become so used to it that I notice when I sit for a long time. If I'm flying, or driving a long way, or have long meetings, I take breaks to stand up and move around. That being said, I don't stand as much as I did. I started out by standing the entire day - which did not end well. As you can read in my second post, I found that sitting down for a few minutes each hour worked out much better. And over time I would say that I now stand about 70-80% of the day, depending on the day. Some days I don't even notice I'm standing, so I don't sit as often. Other days I find that I really tire quickly - so I sit more often. But in both cases, I stand more than I sit. In the first post you can read about how I used a simple coffee-table from Ikea to elevate my desktop to the right height. I then adjusted the height where I stand by using a small plastic square and some carpet. Over time I found this did not work as well as I'd like. The primary reason is that the front of these are at the same depth - so my knees would hit the desk or table when I sat down. Also, the desk was at a certain height, and I had to adjust, rather than the other way around.  Also, I like a lot of surface area on top of a desk - almost more of a table. Routing cables and wiring was a pain, and of course moving it was out of the question.   So I've changed what I use. I found a perfect solution for what I was looking for - industrial wire shelving: I bought one, built only half of it (for the right height I wanted) and arranged the shelves the way I wanted. I then got a 5'x4' piece of wood from Lowes, and mounted it to where the top was balanced, but had an over-hang  I could get my knees under easily.My wife sewed a piece of fake-leather for the top. This arrangement provides the following benefits: Very strong Rolls easily, wheels can lock to prevent rolling Long, wide shelves Wire-frame allows me to route any kind of wiring and other things all over the desk I plugged in my UPS and ran it's longer power-cable to the wall outlet. I then ran the router's LAN connection along that wire, and covered both with a large insulation sleeve. I then plugged in everything to the UPS, and routed all the wiring. I can now roll the desk almost anywhere in the room so that I can record, look out the window, get closer to or farther away from the door and more. I put a few boxes on the shelves as "drawers" and tidied that part up. Even my printer fits on a shelf. Laser-dog not included - some assembly required In the second post you can read about the bar-stool I purchased from Target for the desk. I cheaped-out on this one, and it proved to be a bad choice. Because I had to raise it so high, and was constantly sitting on it and then standing up, the gas-cylinder in it just gave out. So it became a very short stool that I ended up getting rid of. In the end, this one from Ikea proved to be a better choice: And so this arrangement is working out perfectly. I'm finding myself VERY productive this way. I hope these posts help you if you decide to try working at a stand-up desk. 
Although I was skeptical at first, I've found it to be a very healthy, easy way to code, design and especially present over a web-cam. It's natural to stand to speak when you're presenting, and it feels more energetic than sitting down to talk to others.

    Read the article

  • Modifying Service URLs with LINQ to Twitter

    - by Joe Mayo
    It’s funny that two posts so close together speak about flexibility with the LINQ to Twitter provider. There are certain things you know from experience on when to make software more flexible and when to save time. This is another one of those times when I got lucky and made the right choice up front. I’m talking about the ability to switch URLs. It only makes sense that Twitter should begin versioning their API as it matures. In fact, most of the entire API has moved to the v1 URL at “https://api.twitter.com/1/”, except for search and trends. Recently, Twitter introduced the available and local trends, but hung them off the new v1, and left the rest of the trends API on the old URL. To implement this, I muscled my way into the expression tree during CreateRequestProcessor to figure out which trend I was dealing with; perhaps not elegant, but the code is in the right place and that’s what factories are for. Anyway, the point is that I wouldn’t have to do this kind of stuff (as much fun as it is) if Twitter would have more consistency. Having gone to Chirp last week and seen the evolution of the API, it looks like my wish is coming true. …now if they would just get their stuff together on the mess they made with geo-location and places… but again, that’s all transparent if you’re using LINQ to Twitter because I pulled all of that together in a consistent way so that you don’t have to. Normally, when Twitter makes a change, code breaks and I have to scramble to get the fixes in place. This time, in the case of a URL change, the adjustment is easy and no-one has to wait for me. Essentially, all you need to do is change the URL passed to the TwitterContext constructor. Here’s an example of instantiating a TwitterContext now: using (var twitterCtx = new TwitterContext(auth, "https://api.twitter.com/1/", "https://search.twitter.com/")) The third constructor parameter is the SearchUrl, which is used for the Search and Trend APIs. You probably know what’s coming next; another constructor call, but with the SearchUrl parameter set to the new URL as follows: using (var twitterCtx = new TwitterContext(auth, "https://api.twitter.com/1/", "https://api.twitter.com/1/")) One consequence of setting the URL this way is that you set the URL for both Trends and Search. Since Search is still using the old URL, this is going to break for Search queries. You could always instantiate a special TwitterContext instance for Search queries, with the old URL set. Alternatively, you can use the TwitterContext’s SearchUrl property. Here’s an example: twitterCtx.SearchUrl = "https://api.twitter.com/1/"; var trends = (from trend in twitterCtx.Trends where trend.Type == TrendType.Daily && trend.Date == DateTime.Now.AddDays(-2).Date select trend) .ToList(); Notice how I set the SearchUrl property just-in-time for the query. This allows you to target the URL for each specific query. Whichever way you prefer to configure the URL, it’s your choice. So, now you know how to set the URL to be used for Trend queries and how to prevent whacking your Search queries. I’ll be updating the Trend API to use the same URL as all other APIs soon, so the only API left to use the SearchUrl will be Search, but for the short term, it’s Trends and Search. Until I make this change, you’ll have a viable work-around by setting the URL yourself, as explained above.
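    For the dedicated-context option mentioned above, a rough sketch might look like the following. The three-argument constructor is the one shown earlier; the shape of the Search query itself is illustrative and may differ slightly depending on your LINQ to Twitter version.
    // Sketch: keep a separate TwitterContext whose SearchUrl still points at the
    // old search endpoint, and use it only for Search queries.
    using (var searchCtx = new TwitterContext(auth, "https://api.twitter.com/1/", "https://search.twitter.com/"))
    {
        var searchResults =
            (from search in searchCtx.Search
             where search.Type == SearchType.Search &&
                   search.Query == "LINQ to Twitter"
             select search)
            .SingleOrDefault();
    }
    That leaves Trend queries on your main context free to use the new URL while Search keeps working against the old one.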
These were the Search and Trend URLs, but you might be curious about the second parameter of the TwitterContext constructor; that’s the URL for all other APIs (the BaseUrl), except for Trend and Search. Similarly, you can use the TwitterContext’s BaseUrl property to set the BaseUrl. Setting the BaseUrl can be useful when communicating with other services. In addition to Twitter changing URLs, the Twitter API has been adopted by other companies, such as Identi.ca, Tumblr, and WordPress. This capability lets you use LINQ to Twitter with any of these services. This is a testament to the success of the Twitter API and its popularity. No doubt we’ll have hills and valleys to traverse as the Twitter API matures, but hopefully there will be enough flexibility in LINQ to Twitter to make these changes as transparent as possible for you. @JoeMayo

    Read the article

  • Adventures in Windows 8: Understanding and debugging design time data in Expression Blend

    - by Laurent Bugnion
    One of my favorite features in Expression Blend is the ability to attach a Visual Studio debugger to Blend. First let’s start by answering the question: why exactly do you want to do that? Note: If you are familiar with the creation and usage of design time data, feel free to scroll down to the paragraph titled “When design time data fails”. Creating design time data for your app: When a designer works on an app, he needs to see something to design. For “static” UI such as buttons, backgrounds, etc., the user interface elements are going to show up in Blend just fine. If however the data is fetched dynamically from a service (web, database, etc.) or created dynamically, Blend will most probably show just an empty element. The classical way to design at that stage is to run the application, navigate to the screen that is under construction (which can involve delays, needing to log in, etc…), measure what is on the screen (colors, margins, width and height, etc.) using various tools, go back to Blend, edit the properties of the elements, run again, and so on. Obviously this is not ideal. The solution is to create design time data. For more information about the creation of design time data by mocking services, you can refer to two talks of mine, “Deep dive MVVM” and “MVVM Applied From Silverlight to Windows Phone to Windows 8”. The source code for these talks is here and here. Design time data in MVVM Light: One of the main reasons why I developed MVVM Light is to facilitate the creation of design time data. To illustrate this, let’s create a new MVVM Light application in Visual Studio. Install MVVM Light from here: http://mvvmlight.codeplex.com (use the MSI in the Download section). After installing, make sure to read the Readme that opens up in your favorite browser; you will need one more step to install the Project Templates. Start Visual Studio 2012. Create a new MvvmLight (Win8) app. Run the application. You will see a string showing “Welcome to MVVM Light”. In the Solution Explorer, right click on MainPage.xaml and select Open in Blend. Now you should see “Welcome to MVVM Light [Design]”. What happens here is that Expression Blend runs different code at design time than the application runs at runtime. To do this, we use design-time detection (as explained in a previous article) and use that information to initialize a different data service at design time. To understand this better, open the ViewModelLocator.cs file in the ViewModel folder and see how the DesignDataService is used at design time, while the DataService is used at runtime. In a real-life application, DataService would be used to connect to a web service, for instance. When design time data fails: Sometimes however, the creation of design time data fails. It can be very difficult to understand exactly what is happening, and Expression Blend does not give a lot of information about what happened. Thankfully, we can use a trick: attaching a debugger to Expression Blend and debugging the design time code. In WPF and Silverlight (including Windows Phone 7), you could simply attach the debugger to Blend.exe (using the “Managed (v4.5, v4.0) code” option even for Silverlight!!) In Windows 8 however, things are just a bit different. This is because the designer that renders the actual representation of the Windows 8 app runs in its own process. Let’s illustrate that: Open the file DesignDataService in the Design folder.
Modify the GetData method to look like this: public void GetData(Action<DataItem, Exception> callback) { throw new Exception(); // Use this to create design time data var item = new DataItem("Welcome to MVVM Light [design]"); callback(item, null); } Go to Blend and build the application. The build succeeds, but now the page is empty. The creation of the design time data failed, but we don’t get a warning message. We need to investigate what’s wrong. Close MainPage.xaml Go to Visual Studio and select the menu Debug, Attach to Process. Update: Make sure that you select “Managed (v4.5, v4.0) code” in the “Attach to” field. Find the process named XDesProc.exe. You should have at least two, one for the Visual Studio 2012 designer surface, and one for Expression Blend. Unfortunately in this screen it is not obvious which is which. Let’s find out in the Task Manager. Press Ctrl-Alt-Del and select Task Manager Go to the Details tab and sort the processes by name. Find the one that says “Blend for Microsoft Visual Studio 2012 XAML UI Designer” and write down the process ID. Go back to the Attach to Process dialog in Visual Studio. sort the processes by ID and attach the debugger to the correct instance of XDesProc.exe. Open the MainViewModel (in the ViewModel folder) Place a breakpoint on the first line of the MainViewModel constructor. Go to Blend and open the MainPage.xaml again. At this point, the debugger breaks in Visual Studio and you can execute your code step by step. Simply step inside the dataservice call, and find the exception that you had placed there. Visual Studio gives you additional information which helps you to solve the issue. More info and Conclusion I want to thank the amazing people on the Expression Blend team for being very fast in guiding me in that matter and encouraging me to blog about it. More information about the XDesProc.exe process can be found here. I had to work on a Windows 8 app for a few days without design time data because of an Exception thrown somewhere in the code, and it was really painful. With the debugger, finding the issue was a simple matter of stepping into the code until it threw the exception.   Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn

    Read the article

  • Odd company release cycle: Go Distributed Source Control?

    - by MrLane
    Sorry about this long post, but I think it is worth it! I have just started with a small .NET shop that operates quite a bit differently to other places that I have worked. Unlike any of my previous positions, the software written here is targeted at multiple customers, and not every customer gets the latest release of the software at the same time. As such, there is no "current production version." When a customer does get an update, they also get all of the features added to the software since their last update, which could be a long time ago. The software is highly configurable and features can be turned on and off: so called "feature toggles." Release cycles are very tight here; in fact they are not on a schedule: when a feature is complete the software is deployed to the relevant customer. The team only last year moved from Visual Source Safe to Team Foundation Server. The problem is they still use TFS as if it were VSS and enforce checkout locks on a single code branch. Whenever a bug fix gets put out into the field (even for a single customer) they simply build whatever is in TFS, test that the bug was fixed and deploy to the customer! (Coming myself from a pharma and medical devices software background, this is unbelievable!) The result is that half-baked dev code gets put into production without even being tested. Bugs are always slipping into release builds, but often a customer who just got a build will not see these bugs if they don't use the feature the bug is in. The director knows this is a problem, as the company is starting to grow all of a sudden with some big clients coming on board and more smaller ones. I have been asked to look at source control options in order to eliminate deploying buggy or unfinished code, but not to sacrifice the somewhat asynchronous nature of the team's releases. I have used VSS, TFS, SVN and Bazaar in my career, but TFS is where most of my experience has been. Previously most teams I have worked with used a two- or three-branch solution of Dev-Test-Prod, where for a month developers work directly in Dev and then changes are merged to Test then Prod, or promoted "when it's done" rather than on a fixed cycle. Automated builds were used, using either Cruise Control or Team Build. In my previous job Bazaar was used sitting on top of SVN: devs worked in their own small feature branches then pushed their changes to SVN (which was tied into TeamCity). This was nice in that it was easy to isolate changes and share them with other people's branches. With both of these models there was a central dev and prod (and sometimes test) branch through which code was pushed (and labels were used to mark builds in prod from which releases were made...and these were made into branches for bug fixes to releases and merged back to dev). This doesn't really suit the way of working here, however: there is no order to when various features will be released; they get pushed when they are complete. With this requirement the "continuous integration" approach as I see it breaks down. To get a new feature out with continuous integration it has to be pushed via dev-test-prod, and that will capture any unfinished work in dev. I am thinking that to overcome this we should go down a heavily feature-branched model with NO dev-test-prod branches; rather, the source should exist as a series of feature branches which, when development work is complete, are locked, tested, fixed, locked, tested and then released.
Other feature branches can grab changes from other branches when they need/want, so eventually all changes get absorbed into everyone else's. This fits very much with a pure Bazaar model from what I experienced at my last job. As flexible as this sounds, it just seems odd to not have a dev trunk or prod branch somewhere, and I am worried about branches forking never to re-integrate, or small late changes made that never get pulled across to other branches, and developers complaining about merge disasters... What are people's thoughts on this? A second, final question: I am somewhat confused about the exact definition of distributed source control: some people seem to suggest it is about just not having a central repository like TFS or SVN, some say it is about being disconnected (SVN is 90% disconnected and TFS has a perfectly functional offline mode) and others say it is about feature branching and ease of merging between branches with no parent-child relationship (TFS also has baseless merging!). Perhaps this is a second question!

    Read the article

  • A*, Tile costs and heuristic; How to approach

    - by Kevin Toet
    I'm doing exercises in tile games and AI to improve my programming. I've written a highly unoptimised pathfinder that does the trick, and a simple tile class. The first problem I ran into was that the heuristic was rounded to ints, which resulted in very straight paths. Resorting to a Euclidean heuristic seemed to fix it, as opposed to using the Manhattan approach. The 2nd problem I ran into was when I tried adding tile costs. I was hoping to use the values of the flags that I set on the tiles, but the values were too small to make the pathfinder consider them a huge obstacle, so I increased their values, but that breaks the flags a certain way and no paths were found anymore. So my questions, before posting the code, are: What am I doing wrong that the Manhattan heuristic isn't working? What ways can I store the tile costs? I was hoping to (ab)use the enum flags for this. The pathfinder isn't considering the chance that no path is available; how do I check this? Any code optimisations are welcome as I'd love to improve my coding. public static List<Tile> FindPath( Tile startTile, Tile endTile, Tile[,] map ) { return FindPath( startTile, endTile, map, TileFlags.WALKABLE ); } public static List<Tile> FindPath( Tile startTile, Tile endTile, Tile[,] map, TileFlags acceptedFlags ) { List<Tile> open = new List<Tile>(); List<Tile> closed = new List<Tile>(); open.Add( startTile ); Tile tileToCheck; do { tileToCheck = open[0]; closed.Add( tileToCheck ); open.Remove( tileToCheck ); for( int i = 0; i < tileToCheck.neighbors.Count; i++ ) { Tile tile = tileToCheck.neighbors[ i ]; //has the node been processed if( !closed.Contains( tile ) && ( tile.flags & acceptedFlags ) != 0 ) { //Not in the open list? if( !open.Contains( tile ) ) { //Set G int G = 10; G += tileToCheck.G; //Set Parent tile.parentX = tileToCheck.x; tile.parentY = tileToCheck.y; tile.G = G; //tile.H = Math.Abs(endTile.x - tile.x ) + Math.Abs( endTile.y - tile.y ) * 10; //TODO omg wtf and other incredible stories tile.H = Vector2.Distance( new Vector2( tile.x, tile.y ), new Vector2(endTile.x, endTile.y) ); tile.Cost = tile.G + tile.H + (int)tile.flags; //Calculate H; Manhattan style open.Add( tile ); } //Update the cost if it is else { int G = 10;//cost of going to non-diagonal tiles G += map[ tile.parentX, tile.parentY ].G; //If this path is shorter (G cost is lower) then change //the parent cell, G cost and F cost. if ( G < tile.G ) //if G cost is less, { tile.parentX = tileToCheck.x; //change the square's parent tile.parentY = tileToCheck.y; tile.G = G;//change the G cost tile.Cost = tile.G + tile.H + (int)tile.flags; // add terrain cost } } } } //Sort costs open = open.OrderBy( o => o.Cost).ToList(); } while( tileToCheck != endTile ); closed.Reverse(); List<Tile> validRoute = new List<Tile>(); Tile currentTile = closed[ 0 ]; validRoute.Add( currentTile ); do { //Look up the parent of the current cell.
currentTile = map[ currentTile.parentX, currentTile.parentY ]; currentTile.renderer.material.color = Color.green; //Add tile to list validRoute.Add( currentTile ); } while ( currentTile != startTile ); validRoute.Reverse(); return validRoute; } And my Tile class: [Flags] public enum TileFlags: int { NONE = 0, DIRT = 1, STONE = 2, WATER = 4, BUILDING = 8, //handy WALKABLE = DIRT | STONE | NONE, endofenum } public class Tile : MonoBehaviour { //Tile Properties public int x, y; public TileFlags flags = TileFlags.DIRT; public Transform cachedTransform; //A* properties public int parentX, parentY; public int G; public float Cost; public float H; public List<Tile> neighbors = new List<Tile>(); void Awake() { cachedTransform = transform; } }
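    On the second question above (where to keep the tile costs), one option that avoids (ab)using the flag values themselves is a separate cost lookup keyed by the existing TileFlags. A rough, illustrative sketch only; the TerrainCost class, its example costs and the For method are assumptions and not part of the original code:
    using System.Collections.Generic;

    public static class TerrainCost
    {
        // Example costs only; tune to taste. The flags keep their power-of-two values.
        private static readonly Dictionary<TileFlags, int> costs =
            new Dictionary<TileFlags, int>
            {
                { TileFlags.DIRT,  0 },
                { TileFlags.STONE, 5 },
                { TileFlags.WATER, 20 }
            };

        public static int For(Tile tile)
        {
            int total = 0;
            foreach (var pair in costs)
            {
                // Accumulate the cost of every terrain flag set on this tile.
                if ((tile.flags & pair.Key) != 0)
                    total += pair.Value;
            }
            return total;
        }
    }

    // Then, in FindPath, the cost line could become something like:
    // tile.Cost = tile.G + tile.H + TerrainCost.For(tile);
    This keeps the enum purely as a set of flags and lets the pathfinder weigh terrain independently of the flag bit values.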

    Read the article

  • Breaking 1NF to model subset constraints. Does this sound sane?

    - by Chris Travers
    My first question here. Apologies if it is in the wrong forum, but this seems pretty conceptual. I am looking at doing something that goes against conventional wisdom and want to get some feedback as to whether this is totally insane or will result in problems, so critique away! I am on PostgreSQL 9.1 but may be moving to 9.2 for this part of this project. To reiterate: Does it seem sane to break 1NF in this way? I am not looking for debugging code so much as where people see problems that this might lead to. The Problem: In double entry accounting, financial transactions are journal entries with an arbitrary number of lines. Each line has either a left value (debit) or a right value (credit), which can be modelled as a single value with negatives as debits and positives as credits or vice versa. The sum of all debits and credits must equal zero (so if we go with a single amount field, sum(amount) must equal zero for each financial journal entry). SQL-based databases, pretty much required for this sort of work, have no way to express this sort of constraint natively, and so any approach to enforcing it in the database seems rather complex. The Write Model: The journal entries are append only. There is a possibility we will add a delete model, but it will be subject to a different set of restrictions and so is not applicable here. If and when we allow deletes, we will probably do them using a simple ON DELETE CASCADE designation on the foreign key, and require that deletes go through a dedicated stored procedure which can enforce the other constraints. So inserts and selects have to be accommodated, but updates and deletes do not for this task. My Proposed Solution: My proposed solution is to break first normal form and model constraints on arrays of tuples, with a trigger that breaks the rows out into another table.
CREATE TABLE journal_line ( entry_id bigserial primary key, account_id int not null references account(id), journal_entry_id bigint not null, -- adding references later amount numeric not null ); I would then add "table methods" to extract debits and credits for reporting purposes: CREATE OR REPLACE FUNCTION debits(journal_line) RETURNS numeric LANGUAGE sql IMMUTABLE AS $$ SELECT CASE WHEN $1.amount < 0 THEN $1.amount * -1 ELSE NULL END; $$; CREATE OR REPLACE FUNCTION credits(journal_line) RETURNS numeric LANGUAGE sql IMMUTABLE AS $$ SELECT CASE WHEN $1.amount > 0 THEN $1.amount ELSE NULL END; $$; Then the journal entry table (simplified for this example): CREATE TABLE journal_entry ( entry_id bigserial primary key, -- no natural keys :-( journal_id int not null references journal(id), date_posted date not null, reference text not null, description text not null, journal_lines journal_line[] not null ); Then a table method and check constraints: CREATE OR REPLACE FUNCTION running_total(journal_entry) returns numeric language sql immutable as $$ SELECT sum(amount) FROM unnest($1.journal_lines); $$; ALTER TABLE journal_entry ADD CONSTRAINT CHECK (((journal_entry.running_total) = 0)); ALTER TABLE journal_line ADD FOREIGN KEY (journal_entry_id) REFERENCES journal_entry(entry_id); And finally we'd have a breakout trigger: CREATE OR REPLACE FUNCTION je_breakout() RETURNS TRIGGER LANGUAGE PLPGSQL AS $$ BEGIN IF TG_OP = 'INSERT' THEN INSERT INTO journal_line (journal_entry_id, account_id, amount) SELECT NEW.id, account_id, amount FROM unnest(NEW.journal_lines); RETURN NEW; ELSE RAISE EXCEPTION 'Operation Not Allowed'; END IF; END; $$; And finally CREATE TRIGGER AFTER INSERT OR UPDATE OR DELETE ON journal_entry FOR EACH ROW EXECUTE PROCEDURE je_breakout(); Of course the example above is simplified. There will be a status table that will track approval status, allowing for separation of duties, etc. However the goal here is to prevent unbalanced transactions. Any feedback? Does this sound entirely insane? Standard Solutions? In getting to this point I have to say I have looked at four different current ERP solutions to this problem: 1) Represent every line item as a debit and a credit against different accounts. 2) Use of foreign keys against the line item table to enforce an eventual running total of 0. 3) Use of constraint triggers in PostgreSQL. 4) Forcing all validation here solely through the app logic. My concerns are that #1 is pretty limiting and very hard to audit internally. It's not programmer transparent, and so it strikes me as being difficult to work with in the future. The second strikes me as being very complex and requires a series of constraints and foreign keys against self to make work, and therefore it strikes me as complex, hard to sort out at least in my mind, and thus hard to work with. The fourth could be done, as we force all access through stored procedures anyway, and this is the most common solution (have the app total things up and throw an error otherwise). However, I think proof that a constraint is followed is superior to test cases, and so the question becomes whether this in fact generates insert anomalies rather than solving them. If this is a solved problem, it isn't the case that everyone agrees on the solution....
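    For comparison, the application-logic option (#4 above) usually amounts to a balance check before the insert ever reaches the database. A minimal sketch in C#, assuming a simple JournalLine type with an Amount field; both the type and the method are illustrative and not part of the schema above:
    using System;
    using System.Collections.Generic;
    using System.Linq;

    public sealed class JournalLine
    {
        public decimal Amount { get; set; } // negative = debit, positive = credit
    }

    public static class JournalValidation
    {
        public static void AssertBalanced(IEnumerable<JournalLine> lines)
        {
            // Reject the journal entry if its debits and credits do not cancel out.
            decimal total = lines.Sum(line => line.Amount);
            if (total != 0m)
                throw new InvalidOperationException(
                    "Journal entry is unbalanced: lines sum to " + total);
        }
    }
    As the post notes, the weakness of this approach is that the guarantee lives in test cases and calling discipline rather than in a constraint the database itself enforces.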

    Read the article

  • "Imprinting" as a language feature?

    - by MKO
    Idea: I had this idea for a language feature that I think would be useful; does anyone know of a language that implements something like this? The idea is that besides inheritance a class can also use something called "imprinting" (for lack of a better term). A class can imprint one or several (non-abstract) classes. When a class imprints another class it gets all its properties and all its methods. It's like the class storing an instance of the imprinted class and redirecting its methods/properties to it. A class that imprints another class therefore by definition also implements all its interfaces and its abstract class. So what's the point? Well, inheritance and polymorphism are hard to get right. Often composition gives far more flexibility. Multiple inheritance offers a slew of different problems without much benefit (IMO). I often write adapter classes (in C#) by implementing some interface and passing along the actual methods/properties to an encapsulated object. The downside to that approach is that if the interface changes, the class breaks. You also have to put in a lot of code that does nothing but pass things along to the encapsulated object. A classic example is that you have some class that implements IEnumerable or IList and contains an internal class it uses. With this technique things would be much easier. Example (C#): [imprint List<Person> as peopleList] public class People : PersonBase { public void SomeMethod() { DoSomething(this.Count); //Count is from List } } //Now People can be treated as a List<Person> People people = new People(); foreach(Person person in people) { ... } peopleList is an alias/variable name (of your choice) used internally to alias the instance, but it can be skipped if not needed. One thing that's useful is to override an imprinted method; that could be achieved with the ordinary override syntax: public override void Add(Person person) { DoSomething(); personList.Add(person); } Note that the above is functionally equivalent (and could be rewritten by the compiler) to: public class People : PersonBase , IList<Person> { private List<Person> personList = new List<Person>(); public override void Add(object obj) { this.personList.Add(obj); } public override int IndexOf(object obj) { return personList.IndexOf(obj); } //etc etc for each signature in the interface } Only if IList changes will your class break. IList won't change, but an interface that you, someone in your team, or a third party has designed might just change. Also, this saves you writing a whole lot of code for some interfaces/abstract classes. Caveats: There are a couple of gotchas. First, syntax must be added to call the imprinted classes' constructors from the imprinting class constructor. Also, what happens if a class imprints two classes which have the same method? In that case the compiler would detect it and force the class to define an override of that method (where you could choose whether you wanted to call either imprinted class or both). So what do you think, would it be useful, any caveats? It seems it would be pretty straightforward to implement something like that in the C# language, but I might be missing something :) Sidenote - Why is this different from multiple inheritance? Ok, so some people have asked about this. Why is this different from multiple inheritance, and why not multiple inheritance? In C#, methods are either virtual or not. Say that we have ClassB, which inherits from ClassA. ClassA has the methods MethodA and MethodB. ClassB overrides MethodA but not MethodB.
Now say that MethodB has a call to MethodA. if MethodA is virtual it will call the implementation that ClassB has, if not it will use the base class, ClassA's MethodA and you'll end up wondering why your class doesn't work as it should. By the terminology sofar you might already confused. So what happens if ClassB inherits both from ClassA and another ClassC. I bet both programmers and compilers will be scratching their heads. The benefit of this approach IMO is that the imprinting classes are totally encapsulated and need not be designed with multiple inheritance in mind. You can basically imprint anything.
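    For illustration, here is a minimal hand-written version of the forwarding that the proposed imprint feature would generate. This is a sketch in plain C# (no language extension), reusing the People/Person names from the example above; only Count, Add and enumeration are forwarded to keep it short, and Person is given a placeholder Name property purely so the snippet compiles on its own.

    using System.Collections;
    using System.Collections.Generic;

    public class Person
    {
        public string Name { get; set; }
    }

    // Hand-written composition and forwarding: People wraps a List<Person>
    // and forwards the members it wants to expose. The proposed "imprint"
    // feature would generate this forwarding code automatically.
    public class People : IEnumerable<Person>
    {
        private readonly List<Person> peopleList = new List<Person>();

        public int Count
        {
            get { return peopleList.Count; }
        }

        public void Add(Person person)
        {
            peopleList.Add(person);
        }

        // Forwarded enumeration so People can be used in a foreach loop.
        public IEnumerator<Person> GetEnumerator()
        {
            return peopleList.GetEnumerator();
        }

        IEnumerator IEnumerable.GetEnumerator()
        {
            return GetEnumerator();
        }
    }

    Every further member of IList<Person> would need another forwarding method like Add, which is exactly the repetitive code the idea is trying to eliminate.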

    Read the article

  • Lessons learnt in implementing Scrum in a Large Organization that has traditional values

    - by MarkPearl
    I recently had the experience of being involved in a “test” scrum implementation in a large organization that was used to a traditional project management approach. Here are some lessons that I learnt from it.

    Don’t let the Project Manager be the Product Owner
    First lesson learnt is to identify the correct product owner – in this instance the project manager assumed the role of the product owner, which was a mistake. The product owner is the one who has the most to lose if the project fails. With a methodology that advocates removing the role of the project manager from the process, it is not in the interests of the person who is employed as a project manager to be the product owner – in fact they have the most to gain should the project fail.

    Know the time commitments of team members to the project
    Second lesson learnt is to get a firm time commitment from the members of the team for the sprint and to hold them to it. In this project many of the issues we faced arose from team members having to double up on supporting existing projects/systems and the scrum project. In many situations they just didn’t get round to doing any work on the scrum project for several days while they tried to meet other commitments. Initially this was not made transparent to the team – in stand-up, team members would say they had done some work but would be very vague on how much time they had actually spent, using the black hole of their other legacy projects as an excuse – putting up a time burn-down chart made time allocations transparent and easy to hold the team to. In addition, how can you plan for a sprint without knowing the actual time available from the members? By actual time I mean the result of getting them to go through all their appointments, lunch times and breaks and removing those from their time commitment – this gets you to a realistic amount of time that they can dedicate.

    Make sure you meet your minimum team sizes
    In a recent post I wrote about the difference between a partnership and a team. If you are going to do scrum in a large organization, make sure you have a minimum team size of at least 3 developers. My experience with larger organizations is that people tend to be sick more, take more leave and generally not be around – if you have a team size of two it is very easy to lose momentum on the project – the more people you have in the team (up to about 9), the more momentum the project will have when people are not around.

    Swapping from one methodology to another can seem like waste to the customer
    It sounds bad, but most customers don’t care what methodology you use. Often they have bought into the “big plan upfront”. If you can, avoid taking a project on midstream from a traditional approach unless the customer has not already bought into that process – with this particular project they had a detailed upfront planning breakaway with the customer using the traditional approach, and then before the project started we moved onto a scrum implementation – this seemed like waste to the customer. We should have managed the customer’s expectations properly.

    Don’t play the role of the scrum master if you can’t be the scrum master
    With this particular implementation I was the “scrum master”. But all I did was go through the formal meetings of scrum – I attended stand-up, retrospectives and planning – but I was not hands-on on the ground. I was not performing the most important role of removing blockages – and by the end of the project there were a number of blockages “cropping up”. A better approach would have been to take someone on the team, train them to be the scrum master and be present to coach them. Alternatively, actually be on the team on a full-time basis and be the scrum master. Just going through the meetings of scrum didn’t mean we were doing scrum.

    So we failed with this one; if you fail, look at it from an agile perspective
    As this particular project drew to a close and it became more and more apparent that it was not going to succeed, the failure became depressing. Emotions were expressed by various people on the team that were not encouraging and reinforced the failure. Embracing the failure and looking at it for what it is, instead of taking it as the end of the world, can change how you grow from the experience. Acknowledging that it failed and then focusing on learning why, and on how to avoid the failure in the future, can change how you feel emotionally about the team, the project and the organization.

    Read the article

  • State of the (Commerce) Union: What the healthcare.gov hiccups teach us about the commerce customer experience

    - by Katrina Gosek
    Guest Post by Brenna Johnson, Oracle Commerce Product A lot has been said about the healthcare.gov debacle in the last week. Regardless of your feelings about the Affordable Care Act, there’s a hidden issue in this story that most of the American people don’t understand: delivering a great commerce customer experience (CX) is hard. It shouldn’t be, but it is. The reality of the government’s issues getting the healthcare site up and running smoothly is something we in the online commerce community know too well. If there’s one thing the botched launch of the site has taught us, it’s that regardless of the size of your budget or the power of an executive with a high-profile project, some of the biggest initiatives with the most attention (and the most at stake) don’t go as planned. It may even give you a moment of solace – we have the same issues!

    But why? Organizations engage too many separate vendors with different technologies, each running sections or pieces of a site, just to get live. When things go wrong, it takes time to identify the problem – and who or what is at the center of it. Unfortunately, this is a brittle way of setting up a site, making it susceptible to breaks, bugs, and scaling issues. But it’s the reality of running a site with legacy technology constraints in today’s demanding, customer-centric market. This approach also means there are a lot of cooks in lots of different kitchens. You’ve got development and IT, the business and the marketing team, an external Systems Integrator to bring it all together, a digital agency or consultant, QA, product experts, 3rd party suppliers, and the list goes on. To complicate things, different business units are held responsible for different pieces of the site and for managing different technologies. And again – due to legacy organizational structure and processes, this is all accepted as the normal State of the Union.

    Digital commerce has been commonplace for 15 years. Yet getting a site live, maintained and performing requires orchestrating a cast of thousands (or at least dozens), big dollars, and some finger-crossing. But it shouldn’t. The great thing about the advent of mobile commerce and the continued maturity of online commerce is that it’s forced organizations to think from the outside, in. Consumers – whether they’re shopping for shoes or a new healthcare plan – don’t care about what technology issues or processes you have behind the scenes. They just want it to work. They want their experience to be easy, fast, and tailored to them and their needs – whatever they are. This doesn’t sound like a tall order to the American consumer – especially since they interact with sites that do work smoothly. But the reality is that it takes scores of people, teams, check-ins, late nights, testing, and some good luck to get sites to run, and even more so at Black Friday (or October 1st) traffic levels. The last thing on a customer’s mind is making excuses for why they can’t buy a product – just get it to work.

    So what is the government doing? My guess is that they are working day and night to get the site performing – and throwing big money at the problem. In the meantime they’re sending frustrated online users to the call center, or even to a location where a trained “navigator” can help them in person to complete their selection. Sounds a lot like multichannel commerce (where broken communication between siloed touchpoints will only frustrate the consumer more).
    One thing we’ve learned is that consumers spend their time and money with brands they know and trust. When sites are easy to use and adapt to their needs, they tend to spend more, come back, and even become long-time loyalists. Achieving this may require moving internal mountains, but there’s too much at stake to ignore the sea change in how organizations are thinking about their customer. If the thought of re-thinking your internal teams, technologies, and processes sounds like a headache, think about the pain associated with losing valuable customers – and dollars. Regardless of whether you’re in B2B or B2C, it’s guaranteed that your competitors are making CX a priority. Those early to the game have already begun to outpace their competition. So as you’re planning for 2014, look to the news this week. Make sure the customer experience is a focus at your organization. Expectations are at record highs. Map your customer’s journey, and think from the outside, in. How easy is it for your customers to do business with you? If they interact with many touchpoints across your organization, are the call center, website, mobile environment, and brick-and-mortar location in sync? Do you have the technology in place to achieve this? It’s time to give the people what they want!

    Read the article

  • C# remote web request certificate error

    - by Ben
    Hi. I am currently integrating a payment gateway into an application. Following a successful transaction the remote payment gateway posts details of the transaction back to my application (ITN). It posts to an HttpHandler that is used to read and validate the data. Part of the validation performed is a POST made by the handler to a validation service provided by the payment gateway. This effectively posts some of the original form values back to the payment gateway to ensure they are valid. The URL that I am posting back to is "https://sandbox.payfast.co.za/eng/query/validate" and the code I am using:

    /// <summary>
    /// Posts the data back to the payment processor to validate the data received
    /// </summary>
    public static bool ValidateITNRequestData(NameValueCollection formVariables)
    {
        bool isValid = true;
        try
        {
            using (WebClient client = new WebClient())
            {
                string validateUrl = (UseSandBox) ? SandboxValidateUrl : LiveValidateUrl;
                byte[] responseArray = client.UploadValues(validateUrl, "POST", formVariables);

                // get the response and replace the line breaks with spaces
                string result = Encoding.ASCII.GetString(responseArray);
                result = result.Replace("\r\n", " ").Replace("\r", "").Replace("\n", " ");

                if (result == null || !result.StartsWith("VALID"))
                {
                    isValid = false;
                    LogManager.InsertLog(LogTypeEnum.OrderError, "PayFast ITN validation failed", "The validation response was not valid.");
                }
            }
        }
        catch (Exception ex)
        {
            LogManager.InsertLog(LogTypeEnum.Unknown, "Unable to validate ITN data. Unknown exception", ex);
            isValid = false;
        }
        return isValid;
    }

    However, on calling WebClient.UploadValues the following exception is raised:

    System.Net.WebException: The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel. ---> System.Security.Authentication.AuthenticationException: The remote certificate is invalid according to the validation procedure.
       at System.Net.Security.SslState.StartSendAuthResetSignal(ProtocolToken message, AsyncProtocolRequest asyncRequest, Exception exception)
       at ...

    For the sake of brevity I haven't listed the full call stack (I can do so if anyone thinks it will help). The remote certificate does appear to be valid. To get around the problem I did try adding a new RemoteCertificateValidationCallback that always returned true, but I just ended up getting the following exception:

    System.Security.SecurityException: Request for the permission of type 'System.Security.Permissions.SecurityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed.
       at System.Security.CodeAccessSecurityEngine.Check(Object demand, StackCrawlMark& stackMark, Boolean isPermSet)
       at System.Security.CodeAccessPermission.Demand()
       at System.Net.ServicePointManager.set_ServerCertificateValidationCallback(RemoteCertificateValidationCallback value)
       at NopSolutions.NopCommerce.Payment.Methods.PayFast.PayFastPaymentProcessor.ValidateITNRequestData(NameValueCollection formVariables)
    The action that failed was: Demand
    The type of the first permission that failed was: System.Security.Permissions.SecurityPermission
    The Zone of the assembly that failed was: MyComputer

    So I am not sure this will work in medium trust? Any help would be much appreciated. Thanks, Ben
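    For reference, here is a minimal sketch of the kind of validation callback being discussed. It is an illustration only, not a fix for the medium-trust restriction: assigning ServicePointManager.ServerCertificateValidationCallback demands SecurityPermission, which is exactly what fails in the stack trace above, so it can only run under full trust. The PayFastSandboxHost constant is a hypothetical name introduced for the example.

    using System.Net;
    using System.Net.Security;
    using System.Security.Cryptography.X509Certificates;

    public static class CertificateValidation
    {
        // Hypothetical constant naming the sandbox host used in this example.
        private const string PayFastSandboxHost = "sandbox.payfast.co.za";

        public static void RegisterCallback()
        {
            // Requires full trust: the setter demands SecurityPermission and
            // will throw in a medium-trust environment.
            ServicePointManager.ServerCertificateValidationCallback =
                delegate(object sender, X509Certificate certificate, X509Chain chain, SslPolicyErrors sslPolicyErrors)
                {
                    // Accept any certificate that passes normal validation.
                    if (sslPolicyErrors == SslPolicyErrors.None)
                        return true;

                    // Never blanket-return true; at most, scope the exception to the
                    // specific host being called. For WebClient/HttpWebRequest calls
                    // the sender is typically the originating HttpWebRequest.
                    HttpWebRequest request = sender as HttpWebRequest;
                    return request != null
                        && request.RequestUri.Host == PayFastSandboxHost
                        && sslPolicyErrors == SslPolicyErrors.RemoteCertificateChainErrors;
                };
        }
    }

    In full trust this lets the post to the sandbox proceed without trusting every certificate everywhere; under medium trust the usual alternatives are getting the certificate chain fixed on the server side or asking the host to grant the required permission.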

    Read the article

< Previous Page | 49 50 51 52 53 54 55 56 57 58 59 60  | Next Page >