Search Results

Search found 14852 results on 595 pages for 'email rules'.


  • This is the End of Business as Usual...

    - by Michael Snow
    This week, we'll be hosting our last Social Business Thought Leader Series Webcast for 2012. Our featured guest this week will be Brian Solis of Altimeter Group. As we've been going through the preparations for Brian's webcast, it became very clear that an hour's time barely scrapes the surface of the depth of Brian's insights and analysis. Accordingly, in the spirit of sharing Brian's perspective with all of our readers, we'll be featuring guest posts all this week pulled from Brian's larger collection of blog postings on his own website. If you like what you've read here this week, we highly recommend digging deeper into his tome of wisdom. Guest Post by Brian Solis, Analyst, Altimeter Group, as originally featured on his site, with the minor change of the video addition at the beginning of the post.

    This is the End of Business as Usual and the Beginning of a New Era of Relevance - Brian Solis, Principal Analyst, Altimeter Group

    The Times They Are A-Changin'
    Come gather 'round people
    Wherever you roam
    And admit that the waters
    Around you have grown
    And accept it that soon
    You'll be drenched to the bone
    If your time to you
    Is worth savin'
    Then you better start swimmin'
    Or you'll sink like a stone
    For the times they are a-changin'.
    - Bob Dylan

    I'm sure you are wondering why I chose lyrics to open this article. If you skimmed through them, stop here for a moment. Go back through Dylan's words and take your time. Carefully read, and feel, what it is he's saying and savor the moment to connect the meaning of his words to the challenges you face today. His message is as important and true today as it was when it was first written in 1964. The tide is indeed once again turning. And even though the 60s now live in the history books, right here, right now, Dylan is telling us once again that this is our time to not only sink or swim, but to do something amazing. This is your time. This is our time. But, these times are different and what comes next is difficult to grasp. How people communicate. How people learn and share. How people make decisions. Everything is different now. Think about this…you're reading this article because it was sent to you via email. Yet more people spend their online time in social networks than they do in email. Duh. According to Nielsen, 22.5% of total time spent online is spent connecting and communicating in social networks. To put that in perspective, the time spent in the likes of Facebook, Twitter, and YouTube is greater than online gaming at 9.8%, email at 7.6% and search at 4%. Imagine for a moment if you and I were connected to one another in Facebook, which just so happens to be the largest social network in the world. How big? Well, Facebook is the size today of the entire Internet in 2004. There are over 1 billion people friending, Liking, commenting, sharing, and engaging in Facebook…that's roughly 12% of the world's population. Twitter has over 200 million users. Ever hear of Tumblr? More time is spent on this popular microblogging community than on Twitter. The point is that the landscape for communication and all that's affected by human interaction is profoundly different from how you and I learned, shared or talked to one another yesterday. This transformation is only becoming more pervasive and, it's not going back.

    Survival of the Fitting

    But social media is just one of the channels we can use to reach people. I must be honest. I'm as much a part of tomorrow as I am of yesteryear.
    It's why I spend all of my time researching the evolution of media and its impact on business and culture. Because of you, I share everything I learn in newsletters, emails, blogs, YouTube videos, and also traditional books. I'm dedicated to helping everyone not only understand, but grasp the change that's before you. Technologies such as social, mobile, virtual, augmented, et al compel us to adapt our story and value proposition and extend our reach to be part of communities we don't realize exist. The people who will keep you in business or running tomorrow are the very people you're not reaching today. Before you continue to read on, allow me to clarify my point of view. My inspiration for writing this is to help you augment, not necessarily replace, the programs you're running today. We must still reach those who matter to us in the ways they prefer to be engaged. To reach what I call the connected consumer of Generation-C, we too must reach them in the ways they wish to be engaged. And in all of my work, how they connect, talk to one another, influence others, and make decisions are not at all like the traditional consumers of the past. Nor are they merely the kids…the Millennials. Connected consumers are represented across every age group and demographic. As you can see, use of social networks, media sharing sites, microblogs, blogs, etc. equally spans Gen Y, Gen X, and Baby Boomers. The DNA of connected customers is indiscriminate of age or any other demographic for that matter. This is more about psychographics, the linkage of people through common interests, than it is about their age, gender, education, nationality or level of income. Once someone is introduced to the marvels of connectedness, the sensation becomes a contagion. It touches and affects everyone. And, that's why this isn't going anywhere but normalcy. Social networking isn't just about telling people what you're doing. Nor is it just about generic, meaningless conversation. Today's connected consumer is incredibly influential. They're connected to hundreds and even thousands of other like-minded people. What they experience, what they support, is shared throughout these networks and, as information travels, it shapes and steers the impressions, decisions, and experiences of others. For example, if we revisit the Nielsen research, we get an idea of just how big this is becoming. 75% spend heavily on music. How does that translate to the arts? I'd imagine the number is equally impressive. If 53% follow their favorite brand or organization, imagine what's possible. Just like this email list that connects us, connections in social networks are powerful. The difference, however, is that people spend more time in social networks than they do in email. Everything begins with an understanding of the "5 W's and H.E." – Who, What, When, Where, How, and to What Extent? The data that comes back tells you which networks are important to the people you're trying to reach, how they connect, what they share, what they value, and how to connect with them. From there, your next steps are to create a community strategy that extends your mission, vision, and value, and align it with the interests, behavior, and values of those you wish to reach and galvanize. To help, I've prepared an action list for you, otherwise known as the 10 Steps Toward New Relevance:

    1. Answer why you should engage in social networks and why anyone would want to engage with you
    2. Observe what brings them together and define how you can add value to the conversation
    3. Identify the influential voices that matter to your world, recognize what's important to them, and find a way to start a dialogue that can foster a meaningful and mutually beneficial relationship
    4. Study the best practices of not just organizations like yours, but also those who are successfully reaching the type of people you're trying to reach – it's benchmarking against competitors and benchmarking against undefined opportunities
    5. Translate all you've learned into a convincing presentation written to demonstrate tangible opportunity to your executive board; make the case through numbers, trends, data, insights – understanding they have no idea what's going on out there and you are both the scout and the navigator (start with a recommended pilot so everyone can learn together)
    6. Listen to what they're saying and develop a process to learn from activity, adapt to interests, and steer engagement based on insights
    7. Recognize how they use social media and innovate based on what you observe to captivate their attention
    8. Align your objectives with their objectives. If you're unsure of what they're looking for…ask
    9. Invest in the development of content and engagement
    10. Build a community, invest in values, spark meaningful dialogue, and offer tangible value…the kind of value they can't get anywhere else. Take advantage of the medium and the opportunity!

    The reality is that we live and compete in a perpetual era of Digital Darwinism, the evolution of consumer behavior when society and technology evolve faster than our ability to adapt. This is why it's our time to alter our course. We must connect with those who are defining the future of engagement, commerce, business, and how the arts are appreciated and supported. Even though it is the end of business as usual, it is the beginning of a new age of opportunity. The consumer revolution is already underway, and the question is: how do you better understand the role you play in this production as a connected or social consumer as well as a business professional? Again, this is your time to define a new era of engagement and relevance. Originally written for The National Arts Marketing Project. Connect with Brian via: Twitter | LinkedIn | Facebook | Google+

    --- Note from Michael: If you really like the post above, check out Brian's TEDTalk and his thought process for preparing it in this post: http://www.briansolis.com/2012/10/tedtalk-reinventing-consumer-capitalism-screw-business-as-usual/

    Read the article

  • Security Issues with Single Page Apps

    - by Stephen.Walther
    Last week, I was asked to do a code review of a Single Page App built using the ASP.NET Web API, Durandal, and Knockout (good stuff!). In particular, I was asked to investigate whether there are any special security issues associated with building a Single Page App which are not present in the case of a traditional server-side ASP.NET application. In this blog entry, I discuss two areas in which you need to exercise extra caution when building a Single Page App. I discuss how Single Page Apps are extra vulnerable to both Cross-Site Scripting (XSS) attacks and Cross-Site Request Forgery (CSRF) attacks. The goal of this blog post is NOT to persuade you to avoid writing Single Page Apps. I'm a big fan of Single Page Apps. Instead, the goal is to ensure that you are fully aware of some of the security issues related to Single Page Apps and ensure that you know how to guard against them.

    Cross-Site Scripting (XSS) Attacks

    According to WhiteHat Security, over 65% of public websites are open to XSS attacks. That's bad. By taking advantage of XSS holes in a website, a hacker can steal your credit cards, passwords, or bank account information. Any website that redisplays untrusted information is open to XSS attacks. Let me give you a simple example. Imagine that you want to display the name of the current user on a page. To do this, you create the following server-side ASP.NET page located at http://MajorBank.com/SomePage.aspx:

        <%@ Page Language="C#" %>
        <html>
        <head>
            <title>Some Page</title>
        </head>
        <body>
            Welcome <%= Request["username"] %>
        </body>
        </html>

    Nothing fancy here. Notice that the page displays the current username by using Request["username"]. Using Request["username"] displays the username regardless of whether the username is present in a cookie, a form field, or a query string variable. Unfortunately, by using Request["username"] to redisplay untrusted information, you have now opened your website to XSS attacks. Here's how. Imagine that an evil hacker creates the following link on another website (hackers.com):

        <a href="http://MajorBank.com/SomePage.aspx?username=<script src=Evil.js></script>">Visit MajorBank</a>

    Notice that the link includes a query string variable named username, and the value of the username variable is an HTML <SCRIPT> tag which points to a JavaScript file named Evil.js. When anyone clicks on the link, the <SCRIPT> tag will be injected into SomePage.aspx and the Evil.js script will be loaded and executed. What can a hacker do in the Evil.js script? Anything the hacker wants. For example, the hacker could display a popup dialog on the MajorBank.com site which asks the user to enter their password. The script could then post the password back to hackers.com and now the evil hacker has your secret password. ASP.NET Web Forms and ASP.NET MVC have two automatic safeguards against this type of attack: Request Validation and Automatic HTML Encoding.

    Protecting Coming In (Request Validation)

    In a server-side ASP.NET app, you are protected against the XSS attack described above by a feature named Request Validation. If you attempt to submit "potentially dangerous" content — such as a JavaScript <SCRIPT> tag — in a form field or query string variable then you get an exception. Unfortunately, Request Validation only applies to server-side apps. Request Validation does not help in the case of a Single Page App. In particular, the ASP.NET Web API does not pay attention to Request Validation. You can post any content you want – including <SCRIPT> tags – to an ASP.NET Web API action.
    For example, the following HTML page contains a form. When you submit the form, the form data is submitted to an ASP.NET Web API controller on the server using an Ajax request:

        <!DOCTYPE html>
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
            <title></title>
        </head>
        <body>
            <form data-bind="submit:submit">
                <div>
                    <label>User Name: <input data-bind="value:user.userName" /></label>
                </div>
                <div>
                    <label>Email: <input data-bind="value:user.email" /></label>
                </div>
                <div>
                    <input type="submit" value="Submit" />
                </div>
            </form>
            <script src="Scripts/jquery-1.7.1.js"></script>
            <script src="Scripts/knockout-2.1.0.js"></script>
            <script>
                var viewModel = {
                    user: {
                        userName: ko.observable(),
                        email: ko.observable()
                    },
                    submit: function () {
                        $.post("/api/users", ko.toJS(this.user));
                    }
                };
                ko.applyBindings(viewModel);
            </script>
        </body>
        </html>

    The form above is using Knockout to bind the form fields to a view model. When you submit the form, the view model is submitted to an ASP.NET Web API action on the server. Here's the server-side ASP.NET Web API controller and model class:

        public class UsersController : ApiController
        {
            public HttpResponseMessage Post(UserViewModel user)
            {
                var userName = user.UserName;
                return Request.CreateResponse(HttpStatusCode.OK);
            }
        }

        public class UserViewModel
        {
            public string UserName { get; set; }
            public string Email { get; set; }
        }

    If you submit the HTML form, you don't get an error. The "potentially dangerous" content is passed to the server without any exception being thrown. In the screenshot below, you can see that I was able to post a username form field with the value "<script>alert('boo')</script>". So what this means is that you do not get automatic Request Validation in the case of a Single Page App. You need to be extra careful in a Single Page App about ensuring that you do not display untrusted content, because you don't have the Request Validation safety net which you have in a traditional server-side ASP.NET app.

    Protecting Going Out (Automatic HTML Encoding)

    Server-side ASP.NET also protects you from XSS attacks when you render content. By default, all content rendered by the razor view engine is HTML encoded. For example, the following razor view displays the text "<b>Hello!</b>" instead of the text "Hello!" in bold:

        @{ var message = "<b>Hello!</b>"; }
        @message

    If you don't want to render content as HTML encoded in razor then you need to take the extra step of using the @Html.Raw() helper. In a Web Form page, if you use <%: %> instead of <%= %> then you get automatic HTML Encoding:

        <%@ Page Language="C#" %>
        <% var message = "<b>Hello!</b>"; %>
        <%: message %>

    This automatic HTML Encoding will prevent many types of XSS attacks. It prevents <script> tags from being rendered and only allows &lt;script&gt; tags to be rendered, which are useless for executing JavaScript. (This automatic HTML encoding does not protect you from all forms of XSS attacks. For example, you can assign the value "javascript:alert('evil')" to the Hyperlink control's NavigateUrl property and execute the JavaScript.) The situation with Knockout is more complicated. If you use the Knockout TEXT binding then you get HTML encoded content.
    On the other hand, if you use the HTML binding then you do not:

        <!-- This JavaScript DOES NOT execute -->
        <div data-bind="text:someProp"></div>

        <!-- This JavaScript DOES execute -->
        <div data-bind="html:someProp"></div>

        <script src="Scripts/jquery-1.7.1.js"></script>
        <script src="Scripts/knockout-2.1.0.js"></script>
        <script>
            var viewModel = {
                someProp: "<script>alert('Evil!')<" + "/script>"
            };
            ko.applyBindings(viewModel);
        </script>

    So, in the page above, the DIV element which uses the TEXT binding is safe from XSS attacks. According to the Knockout documentation: "Since this binding sets your text value using a text node, it's safe to set any string value without risking HTML or script injection." Just like server-side HTML encoding, Knockout does not protect you from all types of XSS attacks. For example, there is nothing in Knockout which prevents you from binding JavaScript to a hyperlink like this:

        <a data-bind="attr:{href:homePageUrl}">Go</a>

        <script src="Scripts/jquery-1.7.1.min.js"></script>
        <script src="Scripts/knockout-2.1.0.js"></script>
        <script>
            var viewModel = {
                homePageUrl: "javascript:alert('evil!')"
            };
            ko.applyBindings(viewModel);
        </script>

    In the page above, the value "javascript:alert('evil!')" is bound to the HREF attribute using Knockout. When you click the link, the JavaScript executes.

    Cross-Site Request Forgery (CSRF) Attacks

    Cross-Site Request Forgery (CSRF) attacks rely on the fact that a session cookie does not expire until you close your browser. In particular, if you visit and log in to MajorBank.com and then you navigate to Hackers.com, you will still be authenticated against MajorBank.com even after you navigate to Hackers.com. Because MajorBank.com cannot tell whether a request is coming from MajorBank.com or Hackers.com, Hackers.com can submit requests to MajorBank.com pretending to be you. For example, Hackers.com can post an HTML form from Hackers.com to MajorBank.com and change your email address at MajorBank.com, because Hackers.com can post a form to MajorBank.com using your authentication cookie. After your email address has been changed, by using a password reset page at MajorBank.com, a hacker can access your bank account. To prevent CSRF attacks, you need some mechanism for detecting whether a request is coming from a page loaded from your website or whether the request is coming from some other website. The recommended way of preventing Cross-Site Request Forgery attacks is to use the "Synchronizer Token Pattern" as described here: https://www.owasp.org/index.php/Cross-Site_Request_Forgery_%28CSRF%29_Prevention_Cheat_Sheet When using the Synchronizer Token Pattern, you include a hidden input field which contains a random token whenever you display an HTML form. When the user opens the form, you add a cookie to the user's browser with the same random token. When the user posts the form, you verify that the hidden form token and the cookie token match.

    Preventing Cross-Site Request Forgery Attacks with ASP.NET MVC

    ASP.NET gives you a helper and an action filter which you can use to thwart Cross-Site Request Forgery attacks.
    For example, the following razor form for creating a product shows how you use the @Html.AntiForgeryToken() helper:

        @model MvcApplication2.Models.Product

        <h2>Create Product</h2>

        @using (Html.BeginForm()) {
            @Html.AntiForgeryToken();
            <div>
                @Html.LabelFor(p => p.Name, "Product Name:")
                @Html.TextBoxFor(p => p.Name)
            </div>
            <div>
                @Html.LabelFor(p => p.Price, "Product Price:")
                @Html.TextBoxFor(p => p.Price)
            </div>
            <input type="submit" />
        }

    The @Html.AntiForgeryToken() helper generates a random token and assigns a serialized version of the same random token to both a cookie and a hidden form field. (Actually, if you dive into the source code, AntiForgeryToken() does something a little more complex because it takes advantage of a user's identity when generating the token.) Here's what the hidden form field looks like:

        <input name="__RequestVerificationToken" type="hidden" value="NqqZGAmlDHh6fPTNR_mti3nYGUDgpIkCiJHnEEL59S7FNToyyeSo7v4AfzF2i67Cv0qTB1TgmZcqiVtgdkW2NnXgEcBc-iBts0x6WAIShtM1" />

    And a matching cookie is visible in the Google Chrome developer toolbar. You use the [ValidateAntiForgeryToken] action filter on the controller action which is the recipient of the form post to validate that the token in the hidden form field matches the token in the cookie. If the tokens don't match then validation fails and you can't post the form:

        public ActionResult Create()
        {
            return View();
        }

        [ValidateAntiForgeryToken]
        [HttpPost]
        public ActionResult Create(Product productToCreate)
        {
            if (ModelState.IsValid)
            {
                // save product to db
                return RedirectToAction("Index");
            }
            return View();
        }

    How does this all work? Let's imagine that a hacker has copied the Create Product page from MajorBank.com to Hackers.com – the hacker grabs the HTML source and places it at Hackers.com. Now, imagine that the hacker tricks you into submitting the Create Product form from Hackers.com to MajorBank.com. You'll get an exception: the Cross-Site Request Forgery attack is blocked because the anti-forgery token included in the Create Product form at Hackers.com won't match the anti-forgery token stored in the cookie in your browser. The tokens were generated at different times for different users, so the attack fails.

    Preventing Cross-Site Request Forgery Attacks with a Single Page App

    In a Single Page App, you can't prevent Cross-Site Request Forgery attacks using the same method as a server-side ASP.NET MVC app. In a Single Page App, HTML forms are not generated on the server. Instead, in a Single Page App, forms are loaded dynamically in the browser. Phil Haack has a blog post on this topic where he discusses passing the anti-forgery token in an Ajax header instead of a hidden form field. He also describes how you can create a custom anti-forgery token attribute to compare the token in the Ajax header and the token in the cookie. See: http://haacked.com/archive/2011/10/10/preventing-csrf-with-ajax.aspx Also, take a look at Johan's update to Phil Haack's original post: http://johan.driessen.se/posts/Updated-Anti-XSRF-Validation-for-ASP.NET-MVC-4-RC (Other server frameworks such as Rails and Django do something similar. For example, Rails uses an X-CSRF-Token to prevent CSRF attacks, which you generate on the server – see http://excid3.com/blog/rails-tip-2-include-csrf-token-with-every-ajax-request/#.UTFtgDDkvL8 .)
    For example, if you are creating a Durandal app, then you can use the following razor view for your one and only server-side page:

        @{ Layout = null; }
        <!DOCTYPE html>
        <html>
        <head>
            <title>Index</title>
        </head>
        <body>
            @Html.AntiForgeryToken()
            <div id="applicationHost">
                Loading app....
            </div>
            @Scripts.Render("~/scripts/vendor")
            <script type="text/javascript" src="~/App/durandal/amd/require.js" data-main="/App/main"></script>
        </body>
        </html>

    Notice that this page includes a call to @Html.AntiForgeryToken() to generate the anti-forgery token. Then, whenever you make an Ajax request in the Durandal app, you can retrieve the anti-forgery token from the razor view and pass the token as a header:

        var csrfToken = $("input[name='__RequestVerificationToken']").val();

        $.ajax({
            headers: { __RequestVerificationToken: csrfToken },
            type: "POST",
            dataType: "json",
            contentType: 'application/json; charset=utf-8',
            url: "/api/products",
            data: JSON.stringify({ name: "Milk", price: 2.33 }),
            statusCode: {
                200: function () {
                    alert("Success!");
                }
            }
        });

    Use the following code to create an action filter which you can use to match the header and cookie tokens:

        using System.Linq;
        using System.Net.Http;
        using System.Web.Helpers;
        using System.Web.Http.Controllers;

        namespace MvcApplication2.Infrastructure
        {
            public class ValidateAjaxAntiForgeryToken : System.Web.Http.AuthorizeAttribute
            {
                protected override bool IsAuthorized(HttpActionContext actionContext)
                {
                    var headerToken = actionContext
                        .Request
                        .Headers
                        .GetValues("__RequestVerificationToken")
                        .FirstOrDefault();

                    var cookieToken = actionContext
                        .Request
                        .Headers
                        .GetCookies()
                        .Select(c => c[AntiForgeryConfig.CookieName])
                        .FirstOrDefault();

                    // check for missing cookie or header
                    if (cookieToken == null || headerToken == null)
                    {
                        return false;
                    }

                    // ensure that the cookie matches the header
                    try
                    {
                        AntiForgery.Validate(cookieToken.Value, headerToken);
                    }
                    catch
                    {
                        return false;
                    }

                    return base.IsAuthorized(actionContext);
                }
            }
        }

    Notice that the action filter derives from the base AuthorizeAttribute. The ValidateAjaxAntiForgeryToken only works when the user is authenticated; it will not work for anonymous requests. Add the action filter to your ASP.NET Web API controller actions like this:

        [ValidateAjaxAntiForgeryToken]
        public HttpResponseMessage PostProduct(Product productToCreate)
        {
            // add product to db
            return Request.CreateResponse(HttpStatusCode.OK);
        }

    After you complete these steps, it won't be possible for a hacker to pretend to be you at Hackers.com and submit a form to MajorBank.com. The header token used in the Ajax request won't travel to Hackers.com. This approach works, but I am not entirely happy with it. The one thing that I don't like about this approach is that it creates a hard dependency on using razor. Your single page in your Single Page App must be generated from a server-side razor view. A better solution would be to generate the anti-forgery token in JavaScript. Unfortunately, until all browsers support a way to generate cryptographically strong random numbers – for example, by supporting the window.crypto.getRandomValues() method — there is no good way to generate anti-forgery tokens in JavaScript. So, at least right now, the best solution for generating the tokens is the server-side solution with the (regrettable) dependency on razor.

    Conclusion

    The goal of this blog entry was to explore some ways in which you need to handle security differently in the case of a Single Page App than in the case of a traditional server app.
In particular, I focused on how to prevent Cross-Site Scripting and Cross-Site Request Forgery attacks in the case of a Single Page App. I want to emphasize that I am not suggesting that Single Page Apps are inherently less secure than server-side apps. Whatever type of web application you build – regardless of whether it is a Single Page App, an ASP.NET MVC app, an ASP.NET Web Forms app, or a Rails app – you must constantly guard against security vulnerabilities.
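
    Because the Web API skips Request Validation, encoding untrusted content becomes the app's own responsibility. As a minimal sketch of one option (this example is not from the original post; the controller, model, and field names are illustrative), a Web API action can HTML-encode untrusted input before it is stored or echoed back to other users:

        using System.Net;
        using System.Net.Http;
        using System.Web.Http;

        public class CommentsController : ApiController
        {
            // Hypothetical action: a "<script>" payload arrives intact because
            // Web API performs no Request Validation, so we encode it ourselves
            // and persist the harmless "&lt;script&gt;" form instead.
            public HttpResponseMessage Post(CommentViewModel comment)
            {
                var safeText = WebUtility.HtmlEncode(comment.Text);
                // ... save safeText to the database instead of the raw value ...
                return Request.CreateResponse(HttpStatusCode.OK);
            }
        }

        public class CommentViewModel
        {
            public string Text { get; set; }
        }

    (Encoding on output, at the moment of display, is the more orthodox placement; the point is simply that in a Single Page App the encoding step must happen somewhere, because nothing does it for you.)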

    Read the article

  • HTML5 and Visual Studio 2010

    - by Harish Ranganathan
    All of us work with Visual Studio (or the free Visual Web Developer Express Edition) for developing web applications targeting ASP.NET / ASP.NET MVC or Silverlight etc.  Over the years, Visual Studio has grown to a great extent.  From being a simple, limited-functionality tool in VS.NET 2002 to the multi-faceted, MEF-driven Visual Studio 2010, it has come a long way.  And as much as Visual Studio supports rapid web development by generating HTML markup, it has also added intellisense for some of the HTML specifications that one would otherwise have to monotonously type every time.  Ex. - In Visual Studio 2010, one can just type the angle bracket "<" and then the first keyword "h" or "x" for html or xhtml respectively, press tab twice, and it would render the entire markup required for XHTML or HTML 1.0/1.1 strict/transitional along with the fully qualified W3C URL. The same holds good for specifying the HTML doctype declaration.  Now, the difference between HTML and XHTML has been discussed in detail already; if you are interested to know more, you can read about it at http://www.w3schools.com/xhtml/xhtml_html.asp But the industry trend, or the buzz around, is HTML5.  With browsers like IE9 Beta, Google Chrome, Firefox 4 etc. supporting HTML5 standards big time, everyone wants to start developing HTML5 based websites. VS developers (like me) often get the question around when VS would start supporting HTML5.  VS 2010 was released last year and HTML5 is still a specification under development.  Clearly, with the timelines on which we started developing Visual Studio (way back in 2008), the HTML5 specs were almost non-existent.  Even today, the HTML5 body recommends not fully depending on the entire markup set, as the specs are still under development and might change in the future. However, with Visual Studio 2010 SP1 Beta, there is quite a bit of support for HTML5 based web development.  In fact, one of my colleagues pointed out that SP1 Beta's major enhancement is its ability to support HTML5 tags and even add server mode to them. Let's look at the existing validation schema available in Visual Studio (Tools – Options – Validation). This is before installing Visual Studio 2010 SP1 Beta.  Clearly, the validation options are restricted to HTML 4.01 and XHTML 1.1 Transitional and below. Also, let's consider using some of the new HTML5 input type elements.  (I found this out just today from my friend, also an ASP.NET team member.)  <input type="email"> is one of the new input type elements according to the HTML5 specification.  Now, this works well if you type it as is in Visual Studio and the page renders without any issue (since the default behaviour is that, if there is an undefined type specified for an input tag, it falls back on the default mode, which is text). The moment you add <input type="email" runat="server">, you get an error. Naturally, you don't get intellisense support for these new tags either.  Once you install Visual Studio 2010 Service Pack 1 Beta from here (it takes a while, so you need to be patient for the installation to complete), you will start getting additional validation templates for HTML5, as below. Once you set this, you can start using HTML5 elements in your web page without getting errors/warnings.
    Look at the screenshot below for the new "video" tag showing up in intellisense (video is a part of the new HTML5 specifications). Note that you still need to hook up the <!DOCTYPE html> at the top manually, as it doesn't change automatically (from the default XHTML 1.0 Strict) when you create a new page. Also, the new input type tags in HTML5 are supported as well. One can also use <asp:TextBox type="email">, which would in turn generate the <input type="email"> markup when the page is rendered.  In fact, as of SP1 Beta, this is the only way to put the new input type tags with the runat="server" attribute (otherwise you will get the parser error mentioned above; this issue should be fixed by the final release of SP1). Going further, there may be more support for having server tags for some of the common HTML5 elements, but this is work in progress currently. So, other than not having runat="server" support for the new HTML5 input tags, you can pretty much build and target HTML5 websites with Visual Studio 2010 SP1 Beta today.  For those who are running Visual Studio 2008, there is also the "HTML5 intellisense for Visual Studio 2010 and 2008" package available for download from http://visualstudiogallery.msdn.microsoft.com/d771cbc8-d60a-40b0-a1d8-f19fc393127d/ Note that, if you are running Visual Studio 2010, the recommended approach is to install the SP1 Beta, which is the way forward for HTML5 support in Visual Studio. Of course, you need to test these on a browser supporting HTML5 such as IE9 Beta, Chrome or Firefox 4.  You can download IE9 Beta from here. You can also follow the Visual Web Developer Team Blog for more updates on the stuff they are building. Cheers !!!
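
    To make the workaround concrete, here is a minimal page sketch (illustrative only; the control IDs and field names are made up) showing both flavors: the server control carries the HTML5 type attribute, while the plain HTML5 input works as long as it has no runat="server":

        <%@ Page Language="C#" %>
        <!DOCTYPE html>
        <html>
        <head runat="server">
            <title>HTML5 inputs</title>
        </head>
        <body>
            <form id="form1" runat="server">
                <%-- Renders as <input type="email" ...> on the client --%>
                <asp:TextBox ID="EmailBox" runat="server" type="email" />
                <%-- A plain HTML5 input parses fine without runat="server" --%>
                <input type="url" name="homepage" />
            </form>
        </body>
        </html>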

    Read the article

  • SQL SERVER – DMV sys.dm_os_sys_info Column Name Changed in SQL Server 2012

    - by pinaldave
    Have you ever faced a situation where something does not work, and when you go in to fix it, you end up liking the fix and appreciating the breaking change? Well, that is exactly how I felt yesterday. Before I begin my story of yesterday, I want to state candidly that I do not encourage anybody to use * in the SELECT statement. One of my DBA friends, who always uses my performance tuning scripts, sent me an email yesterday asking the following question - "Every time I want to retrieve OS related information in SQL Server, I use the DMV sys.dm_os_sys_info. I just upgraded my SQL Server edition from 2008 R2 to SQL Server 2012 RC0 and it suddenly stopped working. Well, this is not the production server so the issue is not big yet, but eventually I need to resolve this error. Any suggestion?" The funny thing was that the original email was very long but it did not say what the exact error was, beside the query not working. I think this is the disadvantage of being too friendly on email sometimes. Well, nevertheless, I quickly looked at the DMV on my SQL Server 2008 R2 and SQL Server 2012 RC0 instances. To my surprise I found out that a few columns were renamed in SQL Server 2012 RC0. Usually when people see breaking changes they do not like them, but when I saw these changes I was happy, as the new names are meaningful and, additionally, the new unit is much more practical and useful. Here are the old and new column names:

        Previous Column Name        New Column Name
        physical_memory_in_bytes    physical_memory_kb
        bpool_commit_target         committed_target_kb
        bpool_visible               visible_target_kb
        virtual_memory_in_bytes     virtual_memory_kb
        bpool_committed             committed_kb

    If you read the table carefully you will notice that the new columns display several of the results in KB, whereas the earlier results were in bytes. When I see results in bytes I always get confused, as I cannot guess what exactly they will convert to. I like to see results in KB and I am glad that the new columns now display the results in KB. I sent the details of the new columns to my friend and asked him to check the columns used in the application. From my comment he quickly realized why he was facing the error and fixed it immediately. Overall, all was well at the end and I learned something new. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL DMV, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
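
    If you want to check the renamed columns yourself from an application, here is a small sketch (the connection string is illustrative; the column list is taken from the table above) that queries the DMV on a SQL Server 2012 instance with an explicit column list instead of *:

        using System;
        using System.Data.SqlClient;

        class DmvSysInfoCheck
        {
            static void Main()
            {
                // Illustrative connection string - point it at your SQL Server 2012 instance.
                using (var conn = new SqlConnection(@"Server=.\SQL2012;Integrated Security=true"))
                {
                    conn.Open();
                    // Explicit column list (no SELECT *) using the new _kb column names.
                    var cmd = new SqlCommand(
                        "SELECT physical_memory_kb, virtual_memory_kb, committed_kb, " +
                        "committed_target_kb, visible_target_kb FROM sys.dm_os_sys_info", conn);
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            Console.WriteLine("Physical memory: {0} KB", reader.GetInt64(0));
                            Console.WriteLine("Virtual memory:  {0} KB", reader.GetInt64(1));
                        }
                    }
                }
            }
        }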

    Read the article

  • Autoscaling in a modern world… Part 4

    - by Steve Loethen
    Now that I have the rules and services XML files in the cloud, it is time to sever the bonds of earth and live totally in the cloud.  I have to host the Autoscaling object in Azure as well, point it to the rules, tell it the management certs and get out of the way. A couple of questions.  Where to host?  The most obvious place to me was a worker role.  A simple, single-purpose worker role, doing nothing but watching my app.  Here are the steps I used.

    1) Created a project.  A separate project from my web site.  I wanted to be able to run the web in the cloud and the autoscaler locally for debugging purposes.  Seemed like the easiest way.
    2) Added the Wasabi block to the project.
    3) Configured the settings.  I used the same settings used for the console app.  It points to the same web role and uses the same rules file.
    4) Made sure the certificate needed to manage the role is added to the cert store in the sky ("LocalMachine" and "My" are the default locations).

    I ran the worker role in the local fabric.  It worked.  I then published to the cloud, and verified it worked again.  Here is what my code looked like.

        public override bool OnStart()
        {
            Trace.WriteLine("Set Default Connection Limit", "Information");
            // Set the maximum number of concurrent connections
            ServicePointManager.DefaultConnectionLimit = 12;

            Trace.WriteLine("Set up configuration change code", "Information");
            // set up config
            CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
                configSetter(RoleEnvironment.GetConfigurationSettingValue(configName)));

            Trace.WriteLine("Get current diagnostic configuration", "Information");
            // Get current diagnostic configuration
            DiagnosticMonitorConfiguration dmc = DiagnosticMonitor.GetDefaultInitialConfiguration();

            Trace.WriteLine("Set Diagnostic Buffer Size", "Information");
            // Set Diagnostic Buffer size
            dmc.Logs.BufferQuotaInMB = 4;

            Trace.WriteLine("Set log transfer period", "Information");
            // Set log transfer period
            dmc.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);

            Trace.WriteLine("Set log verbosity", "Information");
            // Set log filter to verbose
            dmc.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;

            Trace.WriteLine("Start the diagnostic monitor", "Information");
            // Start the diagnostic monitor
            DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", dmc);

            Trace.WriteLine("Get the current Autoscaler from the EntLib Container", "Information");
            // Get the current Autoscaler from the EntLib Container
            scaler = EnterpriseLibraryContainer.Current.GetInstance<Autoscaler>();

            Trace.WriteLine("Start the autoscaler", "Information");
            // Start the autoscaler
            scaler.Start();

            Trace.WriteLine("call the base class OnStart", "Information");
            // call the base class OnStart
            return base.OnStart();
        }

        public override void OnStop()
        {
            Trace.WriteLine("Stop the Autoscaler", "Information");
            // Stop the Autoscaler
            scaler.Stop();
        }

    I did have to turn on some basic logging for Wasabi, which I will cover in the next post.  That logging let me figure out that I hadn't done the certificate step.

    Read the article

  • How to speed up this simple mysql query?

    - by Jim Thio
    The query is simple:

        SELECT TB.ID, TB.Latitude, TB.Longitude,
            111151.29341326 * SQRT(POW(-6.185 - TB.Latitude, 2)
                + POW(106.773 - TB.Longitude, 2)
                * COS(-6.185 * 0.017453292519943)
                * COS(TB.Latitude * 0.017453292519943)) AS Distance
        FROM `tablebusiness` AS TB
        WHERE -6.2767668133836 < TB.Latitude AND TB.Latitude < -6.0932331866164
            AND FoursquarePeopleCount > 5
            AND 106.68123318662 < TB.Longitude AND TB.Longitude < 106.86476681338
        ORDER BY Distance

    See, we just look at all businesses within a rectangle. 1.6 million rows. Within that small rectangle there are only 67,565 businesses. The structure of the table is:

        ID                  varchar(250)   utf8_unicode_ci   NOT NULL
        Email               varchar(400)   utf8_unicode_ci   NULL
        InBuildingAddress   varchar(400)   utf8_unicode_ci   NULL
        Price               int(10)                          NULL
        Street              varchar(400)   utf8_unicode_ci   NULL
        Title               varchar(400)   utf8_unicode_ci   NULL
        Website             varchar(400)   utf8_unicode_ci   NULL
        Zip                 varchar(400)   utf8_unicode_ci   NULL
        Rating Star         double                           NULL
        Rating Weight       double                           NULL
        Latitude            double                           NULL
        Longitude           double                           NULL
        Building            varchar(200)   utf8_unicode_ci   NULL
        City                varchar(100)   utf8_unicode_ci   NOT NULL
        OpeningHour         varchar(400)   utf8_unicode_ci   NULL
        TimeStamp           timestamp      NOT NULL   DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
        CountViews          int(11)                          NULL

    The indexes (all BTREE; cardinality in parentheses) are:

        PRIMARY             ID (1,965,990, unique)
        City                City (131,066)
        Building            Building (21)
        OpeningHour         OpeningHour(255) (21)
        Email               Email(255) (21)
        InBuildingAddress   InBuildingAddress(255) (21)
        Price               Price (21)
        Street              Street(255) (982,995)
        Title               Title(255) (1,965,990)
        Website             Website(255) (491,497)
        Zip                 Zip(255) (178,726)
        Rating Star         Rating Star (21)
        Rating Weight       Rating Weight (21)
        Latitude            Latitude (1,965,990)
        Longitude           Longitude (1,965,990)

    The query took forever. I think there has to be something wrong there. Showing rows 0 - 29 (67,565 total, Query took 12.4767 sec)
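
    For readers untangling the arithmetic in the SELECT, here is the same distance expression transcribed into C# (purely for clarity, not part of the original question; the constant 111151.29341326 is roughly the number of metres per degree of latitude, and the expression is an equirectangular-style approximation rather than a true great-circle distance):

        using System;

        static class GeoApprox
        {
            // Mirrors the SQL expression:
            // 111151.29341326 * SQRT(POW(lat1 - lat2, 2)
            //     + POW(lng1 - lng2, 2) * COS(lat1 * rad) * COS(lat2 * rad))
            public static double ApproxDistanceMetres(
                double lat1, double lng1, double lat2, double lng2)
            {
                const double MetresPerDegree = 111151.29341326;
                const double DegToRad = 0.017453292519943;
                double dLat = lat1 - lat2;
                double dLng = lng1 - lng2;
                return MetresPerDegree * Math.Sqrt(
                    dLat * dLat +
                    dLng * dLng * Math.Cos(lat1 * DegToRad) * Math.Cos(lat2 * DegToRad));
            }
        }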

    Read the article

  • Winners of Pete Brown's "Silverlight 5 In Action" Books

    - by Dave Campbell
    It's always a double-edged sword when I get to this point in a give-away... I want to give everyone something, but a deal is a deal :) It's also only through the benevolence of the folks at Manning Press that I can even do this, so thank you! The Winners Getting right to it, the winners are: Jaganadh G Stephen Owens Jan Hannemann Notice there are 3 names, not 2... I was told late last week to pick a 3rd name, so thanks again Manning! I've already received email from my contact, and they've been waiting for me to send them the email. You should be hearing from them shortly I think. For everyone else, keep your eyes on my blog... as I told Manning, I like giving away other people's stuff :) Have a great day, and if you're anywhere near Phoenix and interested in Silverlight, I'll see you tomorrow at the Scott Gu Event, and Stay in the 'Light!

    Read the article

  • Responding to Invites

    - by Daniel Moth
    Following up from my post about Sending Outlook Invites, here is a shorter one on how to respond. Whatever your choice (ACCEPT, TENTATIVE, DECLINE), if the sender has not unchecked the "Request Response" option, then send your response. Always send your response. Even if you think the sender made a mistake in keeping it on, send your response. Seriously, not responding is plain rude. If you knew about the meeting, and you are happy investing your time in it, and the time and location work for you, and there is an implicit/explicit agenda, then ACCEPT and send it. If one or more of those things don't work for you then you have a few options. Send a DECLINE explaining why. Reply with email to ask for further details or for a change to be made; if you don't receive a response to your email, send a DECLINE when you've waited enough. Send a TENTATIVE if you haven't made up your mind yet. Hint: if they really require you there, they'll respond asking "why tentative" and you can have a discussion about it. When you deem it appropriate, instead of the options above, you can also use the counter-propose feature of Outlook, but IMO that feature has a questionable interaction model and UI (on both the sender and recipient side), so many people get confused by it. BTW, two of my Outlook rules are relevant to invites. The first one auto-marks as read the ACCEPT responses if there is no comment in the body of the accept (I check later who has accepted and who hasn't via the "Tracking" button of the invite). I don't have a rule for the DECLINE and TENTATIVE responses because typically I follow up with folks that send those.  The second rule ensures that all invites go to a specific folder. That is the first folder I see when I triage email. It is also the only folder which I have configured to show a count of all items inside it, rather than the unread count - when I send a response to an invite the item disappears from the folder and hence it is empty and not nagging me. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Siebel Open UI Training for Oracle EMEA CRM Partners - Free - Utrecht NL- January 22/23 2012

    - by Richard Lefebvre
    Have you heard about Siebel Open UI? It is the new, state-of-the-art User Interface for Siebel, offering an amazing User Experience on any browser. Oracle is planning a free-of-charge 2-day training, delivered by Oracle Product Development specialists, in Utrecht (NL) on January 22 & 23 2012. Seats are very limited. If you or your colleagues are interested in applying for one, please send an eMail to [email protected] with the contact details of the individuals you would like to nominate. If you would like to know more about Siebel Open UI before applying, please send an eMail to [email protected] to receive a short PPT deck featuring a brief Siebel Open UI description, its benefits for partners (System Integrators), and the detailed agenda. Selected participants will then be invited to register via the Oracle APEX system.

    Read the article

  • PHP Mail() to Gmail = Spam

    - by grantw
    Recently Gmail has started marking emails sent directly from my server (using PHP mail()) as spam and I'm having problems trying to find the issue. If I send an exact copy of the same email from my email client, it goes to the Gmail inbox. The emails are plain text, around 7 lines long, and contain a URL link in plain text. As the emails sent from my client are getting through fine, I'm thinking that the content isn't the issue. It would be greatly appreciated if someone could take a look at the following headers and give me some advice on why the email from the server is being marked as spam.

    Email from Server:

        Delivered-To: [email protected]
        Received: by 10.49.98.228 with SMTP id el4csp101784qeb; Thu, 15 Nov 2012 14:58:52 -0800 (PST)
        Received: by 10.60.27.166 with SMTP id u6mr2296595oeg.86.1353020331940; Thu, 15 Nov 2012 14:58:51 -0800 (PST)
        Return-Path: [email protected]
        Received: from dom.domainbrokerage.co.uk (dom.domainbrokerage.co.uk. [174.120.246.138]) by mx.google.com with ESMTPS id df4si17005013obc.50.2012.11.15.14.58.51 (version=TLSv1/SSLv3 cipher=OTHER); Thu, 15 Nov 2012 14:58:51 -0800 (PST)
        Received-SPF: pass (google.com: domain of [email protected] designates 174.120.246.138 as permitted sender) client-ip=174.120.246.138;
        Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 174.120.246.138 as permitted sender) [email protected]; dkim=pass [email protected]
        DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=domainbrokerage.co.uk; s=default; h=Date:Message-Id:Content-Type:Reply-to:From:Subject:To; bh=2RJ9jsEaGcdcgJ1HMJgQG8QNvWevySWXIFRDqdY7EAM=; b=mGebBVOkyUhv94ONL3EabXeTgVznsT1VAwPdVvpOGDdjBtN1FabnuFi8sWbf5KEg5BUJ/h8fQ+9/2nrj+jbtoVLvKXI6L53HOXPjl7atCX9e41GkrOTAPw5ZFp+1lDbZ;
        Received: from grantw by dom.domainbrokerage.co.uk with local (Exim 4.80) (envelope-from [email protected]) id 1TZ8OZ-0008qC-Gy for [email protected]; Thu, 15 Nov 2012 22:58:51 +0000
        To: [email protected]
        Subject: Offer Accepted
        X-PHP-Script: www.domainbrokerage.co.uk/admin.php for 95.172.231.27
        From: My Name [email protected]
        Reply-to: [email protected]
        Content-Type: text/plain; charset=Windows-1251
        Message-Id: [email protected]
        Date: Thu, 15 Nov 2012 22:58:51 +0000
        X-AntiAbuse: This header was added to track abuse, please include it with any abuse report
        X-AntiAbuse: Primary Hostname - dom.domainbrokerage.co.uk
        X-AntiAbuse: Original Domain - gmail.com
        X-AntiAbuse: Originator/Caller UID/GID - [500 500] / [47 12]
        X-AntiAbuse: Sender Address Domain - domainbrokerage.co.uk
        X-Get-Message-Sender-Via: dom.domainbrokerage.co.uk: authenticated_id: grantw/from_h

    Email from client:

        Delivered-To: [email protected]
        Received: by 10.49.98.228 with SMTP id el4csp101495qeb; Thu, 15 Nov 2012 14:54:49 -0800 (PST)
        Received: by 10.182.197.8 with SMTP id iq8mr2351185obc.66.1353020089244; Thu, 15 Nov 2012 14:54:49 -0800 (PST)
        Return-Path: [email protected]
        Received: from dom.domainbrokerage.co.uk (dom.domainbrokerage.co.uk. [174.120.246.138]) by mx.google.com with ESMTPS id ab5si17000486obc.44.2012.11.15.14.54.48 (version=TLSv1/SSLv3 cipher=OTHER); Thu, 15 Nov 2012 14:54:49 -0800 (PST)
        Received-SPF: pass (google.com: domain of [email protected] designates 174.120.246.138 as permitted sender) client-ip=174.120.246.138;
        Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 174.120.246.138 as permitted sender) [email protected]; dkim=pass [email protected]
        DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=domainbrokerage.co.uk; s=default; h=Content-Transfer-Encoding:Content-Type:Subject:To:MIME-Version:From:Date:Message-ID; bh=bKNjm+yTFZQ7HUjO3lKPp9HosUBfFxv9+oqV+NuIkdU=; b=j0T2XNBuENSFG85QWeRdJ2MUgW2BvGROBNL3zvjwOLoFeyHRU3B4M+lt6m1X+OLHfJJqcoR0+GS9p/TWn4jylKCF13xozAOc6ewZ3/4Xj/YUDXuHkzmCMiNxVcGETD7l;
        Received: from w-27.cust-7941.ip.static.uno.uk.net ([95.172.231.27]:1450 helo=[127.0.0.1]) by dom.domainbrokerage.co.uk with esmtpa (Exim 4.80) (envelope-from [email protected]) id 1TZ8Ke-0001XH-7p for [email protected]; Thu, 15 Nov 2012 22:54:48 +0000
        Message-ID: [email protected]
        Date: Thu, 15 Nov 2012 22:54:50 +0000
        From: My Name [email protected]
        User-Agent: Postbox 3.0.6 (Windows/20121031)
        MIME-Version: 1.0
        To: [email protected]
        Subject: Offer Accepted
        Content-Type: text/plain; charset=ISO-8859-1; format=flowed
        Content-Transfer-Encoding: 7bit
        X-AntiAbuse: This header was added to track abuse, please include it with any abuse report
        X-AntiAbuse: Primary Hostname - dom.domainbrokerage.co.uk
        X-AntiAbuse: Original Domain - gmail.com
        X-AntiAbuse: Originator/Caller UID/GID - [47 12] / [47 12]
        X-AntiAbuse: Sender Address Domain - domainbrokerage.co.uk
        X-Get-Message-Sender-Via: dom.domainbrokerage.co.uk: authenticated_id: [email protected]

    Read the article

  • Analytics: Can I set a goal on multiple events?

    - by David Parks
    We have a popup dialogue that requests the user's email address or Facebook login. The page behind the popup loads, so a page view is counted. We want to measure:

    - How many users ignored the popup completely
    - How many users engaged the popup, but didn't complete the process (we trigger an event when the user performs actions defined as "engaging")
    - How many users completed the popup

    Bounce rates aren't telling because some users won't receive the popup. We are basically triggering the events "PopupDisplayed", "PopupEngaged" and "PopupComplete", with labels to differentiate between email and Facebook. But I don't think I can set goals to count "Users who received 'PopupDisplayed' AND 'PopupComplete'" events, which would let me count how many users both saw the popup and completed it.

    Read the article

  • T-SQL Tuesday #21 - Crap!

    - by Most Valuable Yak (Rob Volk)
    Adam Machanic's (blog | twitter) ever popular T-SQL Tuesday series is being held on Wednesday this time, and the topic is… CRAP. No, not fecal material.  But crap code.  Crap SQL.  Crap ideas that you thought were good at the time, or were forced to do due (doo-doo?) to lack of time. The challenge for me is to look back on my SQL Server career and find something that WASN'T crap.  Well, there's a lot that wasn't, but for some reason I don't remember those that well.  So the additional challenge is to pick one particular turd that I really wish I hadn't squeezed out.  Let's see if this outline fits the bill:

    - An ETL process on text files;
    - That had to interface between SQL Server and an AS/400 system;
    - That didn't use SSIS (should have) or BizTalk (ummm, no) but command-line scripting, using Unix utilities(!) via xp_cmdshell;
    - That had to email reports and financial data, some of it sensitive

    Yep, the smell is coming back to me now, as if it was yesterday… As to why SSIS and BizTalk were not options, basically I didn't know either of them well enough to get the job done (and I still don't).  I also had a strict deadline of 3 days, in addition to all the other responsibilities I had, so no time to learn them.  And seeing how screwed up the rest of the process was:

    - Payment files from multiple vendors in multiple formats;
    - Sent via FTP, PGP encrypted email, or some other wizardry;
    - Manually opened/downloaded and saved to a particular set of folders (couldn't change this);
    - Once processed, had to be placed BACK in the same folders with the original archived;
    - x2 divisions that had to run separately;
    - Plus an additional vendor file in another format on a completely different schedule;
    - So that they could be MANUALLY uploaded into the AS/400 system (couldn't change this either, even if it was technically possible)

    I didn't feel so bad about the solution I came up with, which was naturally:

    - Copy the payment files to the local SQL Server drives, using xp_cmdshell
    - Run batch files (via xp_cmdshell) to parse the different formats using sed, a Unix utility (this was before Powershell)
    - Use other Unix utilities (join, split, grep, wc) to process parsed files and generate metadata (size, date, checksum, line count)
    - Run sqlcmd to execute a stored procedure that passed the parsed file names so it would bulk load the data to do a comparison
    - bcp the compared data out to ANOTHER text file so that I could grep that data out of the original file
    - Run another stored procedure to import the matched data into SQL Server so it could process the payments, including file metadata
    - Process payment batches and log which division and vendor they belong to
    - Email the payment details to the finance group (since it was too hard for them to run a web report with the same data…which they ran anyway to compare the emailed file against…which always matched, surprisingly)
    - Email another report showing unmatched payments so they could manually void them…about 3 months afterward
    - All in "Excel" format, using xp_sendmail (SQL 2000 system)
    - Copy the unmatched data back to the original folder locations, making sure to match the file format exactly (if you've ever worked with ACH files, you'll understand why this sucked)

    If you're one of the 10 people who have read my blog before, you know that I love the DOS "for" command.  Like passionately.  Like fairy-tale love.  So my batch files were riddled with for loops, nested within other for loops, that called other batch files containing for loops.
    I think there was one section that had 4 or 5 nested for commands.  It was wrong, disturbed, and completely un-maintainable by anyone, even myself.  Months, even a year, after I left the company I got calls from someone who had to make a minor change to it, and I had to talk them out of spraying the office with an AK-47 after looking at this code.  (for you Star Trek TOS fans) The funniest part of this, well, one of the funniest, is that I made the deadline…sort of, I was only a day late…and the DAMN THING WORKED practically unchanged for 3 years.  Most of the problems came from the manual parts of the overall process, like forgetting to decrypt the files, or missing/late files, or files saved to the wrong folders.  I'm definitely not trying to toot my own horn here, because this was truly one of the dumbest, crappiest solutions I ever came up with.  Fortunately, as far as I know, it's no longer in use and someone has written a proper replacement.  Today I would knuckle down and do it in SSIS or Powershell, even if it took me weeks to get it right. The real lesson from this crap code is to make things MAINTAINABLE and UNDERSTANDABLE.  sed scripts full of regular expressions don't fit those criteria in any way.  If you ever find yourself under pressure to do something fast at all costs, DON'T DO IT.  Stop and consider long-term maintainability, not just for yourself but for others on your team.  If you can't explain the basic approach in under 5 minutes, it ultimately won't succeed.  And while you may love to leave all that crap behind, it may follow you anyway, and you'll step in it again.   P.S. - if you're wondering about all the manual stuff that couldn't be changed, it was because the entire process had gone through Six Sigma, and was deemed the best possible way.  Phew!  Talk about stink!

    Read the article

  • The Politics of Junk Filtering

    - by mikef
If the national postal service, such as the Royal Mail in the UK, were to go through your letters and throw away all the stuff it considered to be junk instead of delivering it to you, you might be rather pleased until you discovered that it took a too liberal view of what was junk. Catalogs you'd asked for? Junk! Requests from charities? Who needs them! Parcels from competing carriers? Toss them away! The possibility of abuse by an agency in a monopolistic position is just too scary to tolerate. After all, the postal service could charge 'consultancy fees' to any sender who wanted to guarantee that his stuff got delivered, or it could even farm this out to other companies.

Because Microsoft Outlook is just about the only email client used by the international business community in the west, its spam filter is the final arbiter of what gets read. My Outlook 2007, set to the default settings, junks all the perfectly innocent email newsletters that I subscribe to. Whereas Google Mail, Yahoo, and LIVE are all pretty accurate in detecting spam, Outlook makes all sorts of silly mistakes. The documentation speaks techno-babble about 'advanced heuristics', but the result boils down to an inaccurate mess. The more that Microsoft fiddles with it, the stickier the mess. To make matters worse, it still lets through obvious spam. The filter is occasionally updated along with the other 'security' fixes if you opt for automatic updates.

As an editor for a popular online publication that provides a newsletter service, I find this an obvious source of frustration. We follow all the best practices we know about. We ensure that it is a trivial task to opt out of receiving the newsletter. We format it to the requirements of the service providers. We follow up, and resolve, every complaint. As a result, it gets delivered. It is galling to discover that, after all that effort, Outlook then often judges the contents to be junk on a whim, so you don't get to see it.

A few days ago, Microsoft published the PST file format specification, under pressure from a European Union interoperability investigation by ECIS (the European Committee for Interoperable Systems). The objective was that other applications could then access existing PST files so as to migrate from existing Outlook installations to other solutions. Joaquín Almunia, the current competition commissioner, should now turn his attention to the more subtle problems of Microsoft Outlook. The junk problem seems to have come from a clumsy implementation of client-side spam filtering rather than from deliberate exploitation of a monopoly on the desktop email client for businesses, but it is a growing problem nonetheless.

Cheers,
Michael Francis

    Read the article

  • Windows Azure Mobile Services: New support for iOS apps, Facebook/Twitter/Google identity, Emails, SMS, Blobs, Service Bus and more

    - by ScottGu
A few weeks ago I blogged about Windows Azure Mobile Services - a new capability in Windows Azure that makes it incredibly easy to connect your client and mobile applications to a scalable cloud backend. Earlier today we delivered a number of great improvements to Windows Azure Mobile Services. New features include:

- iOS support – enabling you to connect iPhone and iPad apps to Mobile Services
- Facebook, Twitter, and Google authentication support with Mobile Services
- Blob, Table, Queue, and Service Bus support from within your Mobile Service
- Sending emails from your Mobile Service (in partnership with SendGrid)
- Sending SMS messages from your Mobile Service (in partnership with Twilio)
- Ability to deploy mobile services in the West US region

All of these improvements are now live in production and available to start using immediately. Below are more details on them.

iOS Support

This week we delivered initial support for connecting iOS based devices (including iPhones and iPads) to Windows Azure Mobile Services. Like the rest of our Windows Azure SDK, we are delivering the native iOS libraries to enable this under an open source (Apache 2.0) license on GitHub. We're excited to get your feedback on this new library through our forum and GitHub issues list, and we welcome contributions to the SDK.

To create a new iOS app or connect an existing iOS app to your Mobile Service, simply select the “iOS” tab within the Quick Start view of a Mobile Service within the Windows Azure Portal – and then follow either the “Create a new iOS app” or “Connect to an existing iOS app” link below it. Clicking either of these links will expand and display step-by-step instructions for how to build an iOS application that connects with your Mobile Service.

Read this getting started tutorial to walk through how you can build (in less than 5 minutes) a simple iOS “Todo List” app that stores data in Windows Azure. Then follow the below tutorials to explore how to use the iOS client libraries to store data and authenticate users.

- Get Started with data in Mobile Services for iOS
- Get Started with authentication in Mobile Services for iOS

Facebook, Twitter, and Google Authentication Support

Our initial preview of Mobile Services supported the ability to authenticate users of mobile apps using Microsoft Accounts (formerly called Windows Live ID accounts). This week we are adding the ability to also authenticate users using Facebook, Twitter, and Google credentials. These are now supported with both Windows 8 apps as well as iOS apps (and a single app can support multiple forms of identity simultaneously – so you can offer your users a choice of how to login).

The tutorials below walk through how to register your Mobile Service with an identity provider:

- How to register your app with Microsoft Account
- How to register your app with Facebook
- How to register your app with Twitter
- How to register your app with Google

The tutorials above walk through how to obtain a client ID and a secret key from the identity provider. You can then click on the “Identity” tab of your Mobile Service (within the Windows Azure Portal) and save these values to enable server-side authentication with your Mobile Service.

You can then write code within your client or mobile app to authenticate your users to the Mobile Service.
For example, below is the code you would write to have them login to the Mobile Service using their Facebook credentials:

Windows Store App (using C#):

    var user = await App.MobileService
                        .LoginAsync(MobileServiceAuthenticationProvider.Facebook);

iOS app (using Objective C):

    UINavigationController *controller = [self.todoService.client
        loginViewControllerWithProvider:@"facebook"
        completion:^(MSUser *user, NSError *error) {
            //...
        }];

Learn more about authenticating Mobile Services using Microsoft Account, Facebook, Twitter, and Google from these tutorials:

- Get started with authentication in Mobile Services for Windows Store (C#)
- Get started with authentication in Mobile Services for Windows Store (JavaScript)
- Get started with authentication in Mobile Services for iOS

Using Windows Azure Blob, Tables and Service Bus with your Mobile Services

Mobile Services provide a simple but powerful way to add server logic using server scripts. These scripts are associated with the individual CRUD operations on your mobile service's tables. Server scripts are great for data validation, custom authorization logic (e.g. does this user participate in this game session), augmenting CRUD operations, sending push notifications, and other similar scenarios. Server scripts are written in JavaScript and are executed in a secure server-side scripting environment built using Node.js. You can edit these scripts and save them on the server directly within the Windows Azure Portal.

In this week's release we have added the ability to work with other Windows Azure services from your Mobile Service server scripts. This is supported using the existing “azure” module within the Windows Azure SDK for Node.js. For example, the below code could be used in a Mobile Service script to obtain a reference to a Windows Azure Table (after which you could query it or insert data into it):

    var azure = require('azure');
    var tableService = azure.createTableService("<< account name >>",
                                                "<< access key >>");

Follow the tutorials on the Windows Azure Node.js dev center to learn more about working with Blob, Tables, Queues and Service Bus using the azure module.

Sending emails from your Mobile Service

In this week's release we have also added the ability to easily send emails from your Mobile Service, building on our partnership with SendGrid. Whether you want to add a welcome email upon successful user registration, or make your app alert you of certain usage activities, you can do this now by sending email from Mobile Services server scripts.

To get started, sign up for a SendGrid account at http://sendgrid.com. Windows Azure customers receive a special offer of 25,000 free emails per month from SendGrid. To sign up for this offer, or get more information, please visit http://www.sendgrid.com/azure.html.

Once you have signed up, you can add the following script to your Mobile Service server scripts to send email via the SendGrid service:

    var sendgrid = new SendGrid('<< account name >>', '<< password >>');

    sendgrid.send({
        to: '<< enter email address here >>',
        from: '<< enter from address here >>',
        subject: 'New to-do item',
        text: 'A new to-do was added: ' + item.text
    }, function (success, message) {
        if (!success) {
            console.error(message);
        }
    });

Follow the Send email from Mobile Services with SendGrid tutorial to learn more.
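As an aside, since tables and their server scripts sit behind plain REST endpoints, you can exercise an insert script without any client SDK at all. Here is a hypothetical smoke test from a shell; the service name, table name, and application key are placeholders (the X-ZUMO-APPLICATION header carries the application key):

    # Hypothetical smoke test of a table insert (and its insert script)
    # over the service's REST API. Service name, table and key are
    # placeholders you would substitute with your own.
    curl -X POST "https://yourservice.azure-mobile.net/tables/TodoItem" \
         -H "X-ZUMO-APPLICATION: YOUR_APPLICATION_KEY" \
         -H "Content-Type: application/json" \
         -d '{"text": "A new to-do"}'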
Sending SMS messages from your Mobile Service

SMS is a key communication medium for mobile apps - it comes in handy if you want your app to send users a confirmation code during registration, allow your users to invite their friends to install your app, or reach out to mobile users without a smartphone. Using Mobile Service server scripts and Twilio's REST API, you can now easily send SMS messages from your app.

To get started, sign up for a Twilio account. Windows Azure customers receive 1000 free text messages when using Twilio and Windows Azure together. Once signed up, you can add the following to your Mobile Service server scripts to send SMS messages:

    var httpRequest = require('request');
    var account_sid = "<< account SID >>";
    var auth_token = "<< auth token >>";

    // Create the request body
    var body = "From=" + from + "&To=" + to + "&Body=" + message;

    // Make the HTTP request to Twilio
    httpRequest.post({
        url: "https://" + account_sid + ":" + auth_token +
             "@api.twilio.com/2010-04-01/Accounts/" + account_sid + "/SMS/Messages.json",
        headers: { 'content-type': 'application/x-www-form-urlencoded' },
        body: body
    }, function (err, resp, body) {
        console.log(body);
    });

I'm excited to be speaking at the TwilioCon conference this week, and will be showcasing some of the cool scenarios you can now enable with Twilio and Windows Azure Mobile Services.

Mobile Services availability in West US region

Our initial preview of Windows Azure Mobile Services was only supported in the US East region of Windows Azure. As with every Windows Azure service, over time we will extend Mobile Services to all Windows Azure regions. With this week's preview update we've added support so that you can now create your Mobile Service in the West US region as well.

Summary

The above features are all now live in production and are available to use immediately. If you don't already have a Windows Azure account, you can sign up for a free trial and start using Mobile Services today. Visit the Windows Azure Mobile Developer Center to learn more about how to build apps with Mobile Services.

We'll have even more new features and enhancements coming later this week – including .NET 4.5 support for Windows Azure Web Sites. Keep an eye out on my blog for details as new features become available.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Lenovo X220 right click does not work with Ubuntu 12.04

    - by fulop
I am unable to right-click with my new Lenovo X220 sub-notebook. I have read about several workarounds but don't know which one would help me. Can someone help me find the solution or a workaround? This is the output I get when trying to rebuild the synaptics driver package:

dpkg-buildpackage: export CFLAGS from dpkg-buildflags (origin: vendor): -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Wformat-security
dpkg-buildpackage: export CPPFLAGS from dpkg-buildflags (origin: vendor): -D_FORTIFY_SOURCE=2
dpkg-buildpackage: export CXXFLAGS from dpkg-buildflags (origin: vendor): -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Wformat-security
dpkg-buildpackage: export FFLAGS from dpkg-buildflags (origin: vendor): -g -O2
dpkg-buildpackage: export LDFLAGS from dpkg-buildflags (origin: vendor): -Wl,-Bsymbolic-functions -Wl,-z,relro
dpkg-buildpackage: source package xserver-xorg-input-synaptics
dpkg-buildpackage: source version 1.6.2-1ubuntu1~precise2
dpkg-buildpackage: source changed by Timo Aaltonen <[email protected]>
dpkg-buildpackage: host architecture amd64
 dpkg-source --before-build xserver-xorg-input-synaptics-1.6.2
 fakeroot debian/rules clean
dh clean --with quilt,autoreconf,xsf --builddirectory=build/
   dh_testdir -O--builddirectory=build/
   dh_auto_clean -O--builddirectory=build/
   dh_quilt_unpatch -O--builddirectory=build/
Removing patch 131_reset-num_active_touches-on-deviceoff.patch
Restoring src/synaptics.c
Removing patch 130_dont_enable_rightbutton_area.patch
Restoring conf/50-synaptics.conf
Removing patch 129_disable_three_touch_tap.patch
Restoring src/synaptics.c
Removing patch 128_disable_three_click_action.patch
Restoring src/synaptics.c
Removing patch 126_ubuntu_xi22.patch
Restoring configure.ac
Removing patch 125_option_rec_revert.patch
Restoring test/fake-symbols.h
Restoring test/fake-symbols.c
Removing patch 124_syndaemon_events.patch
Restoring tools/syndaemon.c
Removing patch 118_quell_error_msg.patch
Restoring tools/synclient.c
Restoring tools/syndaemon.c
Removing patch 115_evdev_only.patch
Restoring conf/50-synaptics.conf
Removing patch 106_always_enable_vert_edge_scroll.patch
Restoring src/synaptics.c
Removing patch 104_always_enable_tapping.patch
Restoring src/synaptics.c
Removing patch 103_enable_cornertapping.patch
Restoring src/synaptics.c
Removing patch 101_resolution_detect_option.patch
Restoring include/synaptics-properties.h
Restoring man/synaptics.man
Restoring src/synapticsstr.h
Restoring src/properties.c
Restoring src/synaptics.c
Restoring tools/synclient.c
Removing patch 02-do-not-use-synaptics-for-keyboards.patch
Restoring conf/11-x11-synaptics.fdi
No patches applied
   dh_autoreconf_clean -O--builddirectory=build/
   dh_clean -O--builddirectory=build/
 dpkg-source -b xserver-xorg-input-synaptics-1.6.2
dpkg-source: warning: no source format specified in debian/source/format, see dpkg-source(1)
dpkg-source: info: using source format `1.0'
dpkg-source: info: building xserver-xorg-input-synaptics using existing xserver-xorg-input-synaptics_1.6.2.orig.tar.gz
dpkg-source: info: building xserver-xorg-input-synaptics in xserver-xorg-input-synaptics_1.6.2-1ubuntu1~precise2.diff.gz
dpkg-source: warning: the diff modifies the following upstream files:
 autogen.sh
 docs/README.alps
 docs/tapndrag.dia
 docs/trouble-shooting.txt
dpkg-source: info: use the '3.0 (quilt)' format to have separate and documented changes to upstream files, see dpkg-source(1)
dpkg-source: info: building xserver-xorg-input-synaptics in xserver-xorg-input-synaptics_1.6.2-1ubuntu1~precise2.dsc
 debian/rules build
dh build --with quilt,autoreconf,xsf --builddirectory=build/
   dh_testdir -O--builddirectory=build/
   dh_quilt_patch -O--builddirectory=build/
Applying patch 02-do-not-use-synaptics-for-keyboards.patch
patching file conf/11-x11-synaptics.fdi
Hunk #1 succeeded at 9 (offset 7 lines).
Applying patch 101_resolution_detect_option.patch
patching file include/synaptics-properties.h
patching file man/synaptics.man
patching file src/properties.c
Hunk #3 succeeded at 787 (offset 6 lines).
patching file src/synaptics.c
Hunk #2 succeeded at 1403 (offset 3 lines).
Hunk #3 succeeded at 1421 (offset 3 lines).
patching file src/synapticsstr.h
patching file tools/synclient.c
Applying patch 103_enable_cornertapping.patch
patching file src/synaptics.c
Hunk #1 succeeded at 762 with fuzz 1 (offset 202 lines).
Applying patch 104_always_enable_tapping.patch
patching file src/synaptics.c
Hunk #1 succeeded at 662 with fuzz 2 (offset 6 lines).
Applying patch 106_always_enable_vert_edge_scroll.patch
patching file src/synaptics.c
Hunk #1 succeeded at 673 (offset 174 lines).
Applying patch 115_evdev_only.patch
patching file conf/50-synaptics.conf
Hunk #1 succeeded at 14 with fuzz 2.
Applying patch 118_quell_error_msg.patch
patching file tools/synclient.c
patching file tools/syndaemon.c
Applying patch 124_syndaemon_events.patch
patching file tools/syndaemon.c
Applying patch 125_option_rec_revert.patch
patching file test/fake-symbols.c
patching file test/fake-symbols.h
Applying patch 126_ubuntu_xi22.patch
patching file configure.ac
Applying patch 128_disable_three_click_action.patch
patching file src/synaptics.c
Hunk #1 succeeded at 671 (offset 174 lines).
Applying patch 129_disable_three_touch_tap.patch
patching file src/synaptics.c
Hunk #1 succeeded at 665 (offset 32 lines).
Applying patch 130_dont_enable_rightbutton_area.patch
patching file conf/50-synaptics.conf
Applying patch 131_reset-num_active_touches-on-deviceoff.patch
patching file src/synaptics.c
Applying patch 201-wait.patch
patching file src/eventcomm.c
Hunk #1 FAILED at 750.
Hunk #2 FAILED at 775.
Hunk #3 FAILED at 784.
3 out of 3 hunks FAILED -- rejects in file src/eventcomm.c
Patch 201-wait.patch does not apply (enforce with -f)
dh_quilt_patch: quilt --quiltrc /dev/null push -a || test $? = 2 returned exit code 1
make: *** [build] Error 25
dpkg-buildpackage: error: debian/rules build gave error exit status 2
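Going by the log's own hint, the build dies because 201-wait.patch no longer applies to src/eventcomm.c. One hedged starting point, not a fix: quilt will write the rejected hunks to disk if you force it, which at least shows what needs rebasing.

    # Hypothetical diagnosis, based on the "enforce with -f" hint above:
    # force-apply the failing patch so quilt writes the rejects out, then
    # inspect them to see how src/eventcomm.c has drifted.
    cd xserver-xorg-input-synaptics-1.6.2
    quilt push -f 201-wait.patch   # applies what it can, leaves *.rej files
    cat src/eventcomm.c.rej        # the three hunks that failed at 750/775/784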

    Read the article

  • Sharing A Stage: JDeveloper/ADF & NetBeans/Java EE 6?

    - by Geertjan
A highlight for me during last week's Oracle Developer Day in Romania (which I blogged about here) was meeting Jernej Kaše (who is from Slovenia, just like my philosopher hero Slavoj Žižek), who is an Oracle Fusion Middleware evangelist. At the conference, while I was presenting NetBeans and Java EE 6 in one room, Jernej was presenting JDeveloper and ADF in another room. The application he created was a realistic CRUD app, with a master/detail view, a search feature, and validation.

In a conversation during a break, we started imagining a scenario where the two of us would be on the same stage, taking turns talking about NetBeans/Java EE and JDeveloper/ADF. In that way, attendees at a conference wouldn't need to choose which of the two topics to attend, because they'd be handled in the same session, with the session possibly being longer so that sufficient time could be spent on the respective technologies. (The JDeveloper/ADF session would then not be competing with the NetBeans/Java EE 6 session, since they'd be handled simultaneously.) The session would focus on the similarities and differences between the two respective tools and solutions, which would be extremely interesting and also unique.

The crucial question in making this kind of co-presentation possible is whether (and how quickly) an application such as the one created above with JDeveloper/ADF could be created with NetBeans/Java EE 6. The NetBeans/Java EE 6 story is extremely strong on the model and controller levels, but less strong on the view layer. Though there are choices between using PrimeFaces, RichFaces, and IceFaces, that support is quite limited in the absence of a visual designer or of other specific tools (e.g., code generators to generate snippets of PrimeFaces) connected to JSF component libraries. However, it so happens that in recent months we at NetBeans have established really good connections with the PrimeFaces team (more about that another time). So I asked them what it would take to write the above UI in PrimeFaces.

The PrimeFaces team were very helpful. They sent me a screenshot of the UI they created in PrimeFaces, reproducing the ADF application described above. Of course, that is purely the UI layer; there are no EJBs, entity classes, or data connections hooked into it yet.
However, this is the Facelets file that the PrimeFaces team sent me, i.e., using the PrimeFaces component library, that produces that result:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:h="http://java.sun.com/jsf/html"
      xmlns:f="http://java.sun.com/jsf/core"
      xmlns:p="http://primefaces.org/ui">
    <f:view>
        <h:head>
            <style type="text/css">
                .alignRight { text-align: right; }
                .alignLeft { text-align: left; }
                .alignTop { vertical-align: top; }
                .ui-validation-required { color: red; font-size: 14px; margin-right: 5px; position: relative; vertical-align: top; }
                .ui-selectonemenu .ui-selectonemenu-trigger .ui-icon { margin-top: 7px !important; }
            </style>
        </h:head>
        <h:body>
            <h:form prependId="false" id="form">
                <p:panel header="Employees">
                    <h:panelGrid columns="4" id="searchPanel">
                        Search
                        <p:selectOneMenu>
                            <f:selectItem itemLabel="FirstName" itemValue="FirstName" />
                            <f:selectItem itemLabel="LastName" itemValue="LastName" />
                            <f:selectItem itemLabel="Email" itemValue="Email" />
                            <f:selectItem itemLabel="PhoneNumber" itemValue="PhoneNumber" />
                        </p:selectOneMenu>
                        <p:inputText />
                        <p:commandLink process="searchPanel" update="@form">
                            <h:graphicImage name="next.gif" library="img" />
                        </p:commandLink>
                    </h:panelGrid>
                    <h:panelGrid columns="3" columnClasses="alignTop,,alignTop" style="width:90%;margin-left:10%">
                        <h:panelGrid columns="2" columnClasses="alignRight,alignLeft">
                            <h:outputLabel for="firstName">FirstName</h:outputLabel>
                            <p:inputText id="firstName" />
                            <h:outputLabel for="lastName"><sup class="ui-validation-required">*</sup>LastName</h:outputLabel>
                            <p:inputText id="lastName" style="width:250px;" />
                            <h:outputLabel for="email"><sup class="ui-validation-required">*</sup>Email</h:outputLabel>
                            <p:inputText id="email" style="width:250px;" />
                            <h:outputLabel for="phoneNumber" value="PhoneNumber" />
                            <p:inputMask id="phoneNumber" mask="999.999.9999" />
                            <h:outputLabel for="hireDate"><sup class="ui-validation-required">*</sup>HireDate</h:outputLabel>
                            <p:calendar id="hireDate" pattern="MM/dd/yyyy" showOn="button" />
                        </h:panelGrid>
                        <p:outputPanel style="min-width:40px;" />
                        <h:panelGrid columns="2" columnClasses="alignRight,alignLeft">
                            <h:outputLabel for="jobId"><sup class="ui-validation-required">*</sup>JobId</h:outputLabel>
                            <p:selectOneMenu id="jobId">
                                <f:selectItem itemLabel="Administration Vice President" itemValue="Administration Vice President" />
                                <f:selectItem itemLabel="Vice President" itemValue="Vice President" />
                            </p:selectOneMenu>
                            <h:outputLabel for="salary">Salary</h:outputLabel>
                            <p:inputText id="salary" styleClass="alignRight" />
                            <h:outputLabel for="commissionPct">CommissionPct</h:outputLabel>
                            <p:inputText id="commissionPct" style="width:30px;" maxlength="3" />
                            <h:outputLabel for="manager">ManagerId</h:outputLabel>
                            <p:selectOneMenu id="manager">
                                <f:selectItem itemLabel="Steven King" itemValue="Steven" />
                                <f:selectItem itemLabel="Michael Cook" itemValue="Michael" />
                                <f:selectItem itemLabel="John Benjamin" itemValue="John" />
                                <f:selectItem itemLabel="Dav Glass" itemValue="Dav" />
                            </p:selectOneMenu>
                            <h:outputLabel for="department">DepartmentId</h:outputLabel>
                            <p:selectOneMenu id="department">
                                <f:selectItem itemLabel="90" itemValue="90" />
                                <f:selectItem itemLabel="80" itemValue="80" />
                                <f:selectItem itemLabel="70" itemValue="70" />
                                <f:selectItem itemLabel="60" itemValue="60" />
                                <f:selectItem itemLabel="50" itemValue="50" />
                                <f:selectItem itemLabel="40" itemValue="40" />
                                <f:selectItem itemLabel="30" itemValue="30" />
                                <f:selectItem itemLabel="20" itemValue="20" />
                            </p:selectOneMenu>
                        </h:panelGrid>
                    </h:panelGrid>
                    <p:outputPanel id="buttonPanel">
                        <p:commandButton value="First" process="@this" update="@form" />
                        <p:commandButton value="Previous" process="@this" update="@form" style="margin-left:15px;" />
                        <p:commandButton value="Next" process="@this" update="@form" style="margin-left:15px;" />
                        <p:commandButton value="Last" process="@this" update="@form" style="margin-left:15px;" />
                    </p:outputPanel>
                    <p:tabView style="margin-top:25px">
                        <p:tab title="Job History">
                            <p:dataTable var="history">
                                <p:column headerText="StartDate">
                                    <h:outputText value="#{history.startDate}">
                                        <f:convertDateTime pattern="MM/dd/yyyy" />
                                    </h:outputText>
                                </p:column>
                                <p:column headerText="EndDate">
                                    <h:outputText value="#{history.endDate}">
                                        <f:convertDateTime pattern="MM/dd/yyyy" />
                                    </h:outputText>
                                </p:column>
                                <p:column headerText="JobId">
                                    <h:outputText value="#{history.jobId}" />
                                </p:column>
                                <p:column headerText="DepartmentId">
                                    <h:outputText value="#{history.departmentId}" />
                                </p:column>
                            </p:dataTable>
                        </p:tab>
                    </p:tabView>
                </p:panel>
            </h:form>
        </h:body>
    </f:view>
</html>

Right now, NetBeans IDE only has code completion to help create the above, so there's not much tooling support for such a UI yet. I don't believe that a visual designer is mandatory, though; a few code generators and file templates could do the job too. And I'm looking forward to seeing those kinds of tools for PrimeFaces, as well as other JSF component libraries, appearing in NetBeans IDE in upcoming releases. A related option would be for the NetBeans-generated CRUD app to offer a master/detail view and a search feature, i.e., the application generators would provide additional features typical of Java enterprise apps.

In the absence of such tools, there is still room, I believe, for NetBeans/Java EE and JDeveloper/ADF to share a stage at a conference. The above file would have been prepared up front, and the presenter would state that fact. The UI layer is only one aspect of a Java EE 6 application, so the presenter would have ample other features to show (e.g., the entity class generation, the tools for working with servlets, with session beans, etc.) prior to getting to the point where the statement would be made: "On the UI layer, I have prepared this Facelets file, which I will now show you can be connected to the lower layers of the application as follows." At that point, the session beans could be hooked into the Facelets file, the file would be saved, the browser refreshed, and then the whole application would work exactly as the ADF application does.

So, Jernej, let's share a stage soon!

    Read the article

  • Postfix and tmpfs for /var/spool

    - by Rob Fisher
My main disk is an SSD, so in order to preserve its lifetime by reducing writes I followed some advice and made /var/spool a RAM disk by adding this line to /etc/fstab:

tmpfs /var/spool tmpfs defaults,noatime,mode=1777 0 0

Later I configured postfix, because I have a RAID array on my system and mdadm wants to send me email if the RAID array fails, which sounds like a fine idea. Email sending worked fine until I rebooted, at which point:

postfix: fatal: open /etc/postfix-out/main.cf: No such file or directory

The fix for this is apparently:

mkdir /var/spool/postfix
postfix check

Then I found I also had to do:

mkfifo /var/spool/postfix/public/pickup
service postfix restart

Now sending emails works fine... until the next reboot. So: what is the most correct way to recreate the contents of /var/spool/postfix automatically at boot time if it does not exist? I am using Ubuntu Server 12.04.
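Absent a canonical answer, one workable sketch is to script the exact steps above so they run at boot, e.g. from /etc/rc.local on 12.04, and then restart postfix so ordering doesn't matter. Untested, and the paths assume the default postfix layout:

    #!/bin/sh
    # Hypothetical boot-time repair: rebuild the postfix spool on the
    # tmpfs using the same steps that worked interactively above.
    if [ ! -d /var/spool/postfix ]; then
        mkdir -p /var/spool/postfix
        postfix check                               # recreate the queue tree
        mkfifo /var/spool/postfix/public/pickup     # the fifo postfix expects
    fi
    # restart last, in case postfix already failed earlier in the boot
    service postfix restart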

    Read the article

  • A tale from a Stalker

    - by Peter Larsson
Today I thought I should write something about a stalker I've got. Don't get me wrong, I have way more fans than stalkers, but this stalker is particularly persistent towards me. It all started when I wrote about Relational Division with Sets late last year (http://weblogs.sqlteam.com/peterl/archive/2010/07/02/Proper-Relational-Division-With-Sets.aspx) and no matter what he tried, he didn't get a better performing query than me. But this didn't click for me until later in the conversation. He must have saved himself for 9 months before posting to me again.

Well... Some days ago I got an email from someone I thought I didn't know. Here is his first email:

"Hi, I want a proper solution for achievement the result. The solution must be standard query, means no using as any native code like TOP clause, also the query should run in SQL Server 2000 (no CTE use). We have a table with consecutive keys (nbr) that is not exact sequence. We need bringing all values related with nearest key in the current key row. See the DDL:

CREATE TABLE Nums(nbr INTEGER NOT NULL PRIMARY KEY, val INTEGER NOT NULL);
INSERT INTO Nums(nbr, val) VALUES (1, 0),(5, 7),(9, 4);

See the Result:

pre_nbr     pre_val     nbr         val         nxt_nbr     nxt_val
----------- ----------- ----------- ----------- ----------- -----------
NULL        NULL        1           0           5           7
1           0           5           7           9           4
5           7           9           4           NULL        NULL

The goal is suggesting most elegant solution. I would like see your best solution first, after that I will send my best (if not same with yours)"

Notice there is no name, no please, nothing polite about asking for my help. So, off the top of my head, I sent him two solutions, following the rule "must work on SQL Server 2000 and use only standard, non-native code".
-- Peso 1
SELECT  pre_nbr,
        (
            SELECT  x.val
            FROM    dbo.Nums AS x
            WHERE   x.nbr = d.pre_nbr
        ) AS pre_val,
        d.nbr,
        d.val,
        d.nxt_nbr,
        (
            SELECT  x.val
            FROM    dbo.Nums AS x
            WHERE   x.nbr = d.nxt_nbr
        ) AS nxt_val
FROM    (
            SELECT  (
                        SELECT  MAX(x.nbr) AS nbr
                        FROM    dbo.Nums AS x
                        WHERE   x.nbr < n.nbr
                    ) AS pre_nbr,
                    n.nbr,
                    n.val,
                    (
                        SELECT  MIN(x.nbr) AS nbr
                        FROM    dbo.Nums AS x
                        WHERE   x.nbr > n.nbr
                    ) AS nxt_nbr
            FROM    dbo.Nums AS n
        ) AS d

-- Peso 2
CREATE TABLE #Temp
(
    ID INT IDENTITY(1, 1) PRIMARY KEY,
    nbr INT,
    val INT
)

INSERT      #Temp (nbr, val)
SELECT      nbr, val
FROM        dbo.Nums
ORDER BY    nbr

SELECT      pre.nbr AS pre_nbr,
            pre.val AS pre_val,
            t.nbr,
            t.val,
            nxt.nbr AS nxt_nbr,
            nxt.val AS nxt_val
FROM        #Temp AS pre
RIGHT JOIN  #Temp AS t ON t.ID = pre.ID + 1
LEFT JOIN   #Temp AS nxt ON nxt.ID = t.ID + 1

DROP TABLE  #Temp

Notice there are no indexes on the #Temp table yet. And here is where the conversation derailed. First I got this response back:

"Now my solutions:

--My 1st Slt
SELECT T2.*, T1.*, T3.*
  FROM Nums AS T1
       LEFT JOIN Nums AS T2
         ON T2.nbr = (SELECT MAX(nbr)
                        FROM Nums
                       WHERE nbr < T1.nbr)
       LEFT JOIN Nums AS T3
         ON T3.nbr = (SELECT MIN(nbr)
                        FROM Nums
                       WHERE nbr > T1.nbr);

--My 2nd Slt
SELECT MAX(CASE WHEN N1.nbr > N2.nbr THEN N2.nbr ELSE NULL END) AS pre_nbr,
       (SELECT val FROM Nums WHERE nbr = MAX(CASE WHEN N1.nbr > N2.nbr THEN N2.nbr ELSE NULL END)) AS pre_val,
       N1.nbr AS cur_nbr, N1.val AS cur_val,
       MIN(CASE WHEN N1.nbr < N2.nbr THEN N2.nbr ELSE NULL END) AS nxt_nbr,
       (SELECT val FROM Nums WHERE nbr = MIN(CASE WHEN N1.nbr < N2.nbr THEN N2.nbr ELSE NULL END)) AS nxt_val
  FROM Nums AS N1,
       Nums AS N2
 GROUP BY N1.nbr, N1.val;

/*
My 1st Slt:  Table 'Nums'. Scan count 7, logical reads 14
My 2nd Slt:  Table 'Nums'. Scan count 4, logical reads 23
Peso 1:      Table 'Nums'. Scan count 9, logical reads 28
Peso 2:      Table '#Temp'. Scan count 0, logical reads 7
             Table 'Nums'. Scan count 1, logical reads 2
             Table '#Temp'. Scan count 3, logical reads 16
*/"

To this, I emailed him back asking for a scalability test: What if you try with a Nums table with 100,000 rows? His response to that started to get nasty:

"I have to say Peso 2 is not acceptable. As I said before the solution must be standard, ORDER BY is not part of standard SELECT. Try this without ORDER BY:

Truncate Table Nums
INSERT INTO Nums (nbr, val) VALUES (1, 0), (9, 4), (5, 7)"

So now we have new rules. No ORDER BY, because it's not standard SQL! Of course I asked him: Why do you have that idea? ORDER BY is not standard? To this, his replies got stranger and stranger:

"Standard Select = Set-based (no any cursor) It's free to know, just refer to Advanced SQL Programming by Celko or mail to him if you accept comments from him."

What the stalker probably doesn't know is that Mr Celko and I are occasionally involved in conversation, and thus we exchange emails. I don't know if the reference to Mr Celko was meant to intimidate me. So I answered him, still polite: What do you mean? The SELECT itself has a "cursor under the hood". Now the stalker gets rude:

"But however I mean the solution must no containing any order by, top... No problem, I do not like Peso 2, it's very non-intelligent and elementary."

Yes, Peso 2 is elementary, but the best performing queries usually are... And now is when I started to feel the stalker really wanted to achieve something else, so I wrote to him: So what is your goal? Have a query that performs well, or a query that is super-portable? My Peso 2 outperforms any of your code by a factor of 100 when using more than 100,000 rows.
While I awaited his answer, I posted him this query:

-- Peso 3
SELECT      MAX(CASE WHEN d = 1 THEN nbr ELSE NULL END) AS pre_nbr,
            MAX(CASE WHEN d = 1 THEN val ELSE NULL END) AS pre_val,
            MAX(CASE WHEN d = 0 THEN nbr ELSE NULL END) AS nbr,
            MAX(CASE WHEN d = 0 THEN val ELSE NULL END) AS val,
            MAX(CASE WHEN d = -1 THEN nbr ELSE NULL END) AS nxt_nbr,
            MAX(CASE WHEN d = -1 THEN val ELSE NULL END) AS nxt_val
FROM        (
                SELECT  nbr,
                        val,
                        ROW_NUMBER() OVER (ORDER BY nbr) AS SeqID
                FROM    dbo.Nums
            ) AS s
CROSS JOIN  (
                VALUES  (-1),
                        (0),
                        (1)
            ) AS x(d)
GROUP BY    SeqID + x.d
HAVING      COUNT(*) > 1

And here are the stats:

Table 'Nums'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

It beats the hell out of your queries…

Now I finally got a response from my stalker, and now I also clicked who he was. This is his response:

"Why you post my original method with a bit change under you name? I do not like it. See: http://www.sqlservercentral.com/Forums/Topic468501-362-14.aspx

;WITH C AS (
    SELECT seq_nbr, k,
           DENSE_RANK() OVER(ORDER BY seq_nbr ASC) + k AS grp_fct
      FROM [Sample]
           CROSS JOIN
           (VALUES (-1), (0), (1)) AS D(k)
)
SELECT MIN(seq_nbr) AS pre_value,
       MAX(CASE WHEN k = 0 THEN seq_nbr END) AS current_value,
       MAX(seq_nbr) AS next_value
  FROM C
 GROUP BY grp_fct
HAVING min(seq_nbr) < max(seq_nbr);

These posts: Posted Tuesday, April 12, 2011 10:04 AM; Posted Tuesday, April 12, 2011 1:22 PM. Why post a solution where will not work in SQL Server 2000?"

Wait a minute! His own solution uses both a CTE and a ranking function, so his query will not work on SQL Server 2000! Bummer... The "I do not like it" reference is to "Me not like", my exact words from a previous topic on SQLTeam.com, and when I remembered the phrasing, I also knew who he was. See this topic, http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=159262, where he writes a query and posts it under my name, as if I wrote it. So I answered him this (less polite):

Like I keep track of all topics in the whole world… :) So you think you are the only one coming up with this idea? Besides, "M S solution" doesn't work. This is the result I get:

pre_value   current_value   next_value
----------- --------------- -----------
1           1               5
1           5               9
5           9               9

And I did nothing like you did here, where you posted a solution which you "thought" I should write: http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=159262. So why are you yourself using a ranking function when this was not allowed per your original email, and no CTE? You use a CTE in your link above, which does not work in SQL Server 2000. All this makes no sense to me, other than you are trying your best to once in a lifetime create a better performing query than me?

After a few hours I got this email back. I don't fully understand it, but it's probably a language barrier.
>> Like I keep track of all topics in the whole world… :) So you think you are the only one coming up with this idea? <<
You right, but do not think you are the first creator of this.

>> Besides, "M S Solution" doesn't work. This is the result I get <<
Why you get so unimportant mistake? See this post to correct it: Posted 4/12/2011 8:22:23 PM

>> So why are you yourself using ranking function when this was not allowed per your original email, and no cte? You use CTE in your link above, which do not work in SQL Server 2000. <<
Again, why you get some unimportant incompatibility? You offer that solution for current goals not me

>> All this makes no sense to me, other than you are trying your best to once in a lifetime create a better performing query than me? <<
No, I only wanted to know who you will solve it. Now I know you do not have a special solution. No problem.

No problem for me either. So I just answered him:

I am not the first, and you are not the first to come up with this idea. So what is your problem? I am pretty sure other people have come up with the same idea before us. I used this technique all the way back in 2007, see http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=93911

Let's see if he returns... He did!

>> So what is your problem? <<
Nothing

Thanks for all replies; maybe we have some competitions in future, maybe. Also I like you but you do not attend it. Your behavior with me is not friendly. Not any meeting…

Regards
//Peso

    Read the article

  • Gratuitous CRLF in Subject: line - why is it there, and is it legal?

    - by MadHatter
I'm running into a problem with a NAGIOS system sending emails to a popular email-to-SMS service. The email-to-SMS service takes emails with text in the Subject: line, and sends them on to the mobile number encoded in the To: field. So far so good. Sadly, sendmail (and postfix before it) seems to be inserting a gratuitous CRLF into the (necessarily long) Subject: line, and that's causing my SMS messages to be truncated at the CRLF if and only if the Subject: line contains one or more colons past the gratuitous CRLF.

I am confident that the messages are being created correctly, but just to be sure, here's me creating a completely noddy test message to myself, with a long Subject: line:

echo "foo" | mail -s "1234567 101234567 201234567 301234567 401234567 501234567 601234567 701234567 801234567 90123456789" [email protected]

Note there's no extra colon in this Subject: line; all I'm doing here is showing that an extra CRLF is inserted on the wire. Here's the result of sudo ngrep -x port 25:

44 61 74 65 3a 20 46 72    69 2c 20 33 31 20 4d 61    Date: Fri, 31 Ma
79 20 32 30 31 33 20 31    30 3a 34 33 3a 35 35 20    y 2013 10:43:55
2b 30 31 30 30 0d 0a 54    6f 3a 20 72 65 61 70 65    +0100..To: reape
72 40 74 65 61 70 61 72    74 79 2e 6e 65 74 0d 0a    [email protected]..
53 75 62 6a 65 63 74 3a    20 31 32 33 34 35 36 37    Subject: 1234567
20 31 30 31 32 33 34 35    36 37 20 32 30 31 32 33     101234567 20123
34 35 36 37 20 33 30 31    32 33 34 35 36 37 20 34    4567 301234567 4
30 31 32 33 34 35 36 37    20 35 30 31 32 33 34 35    01234567 5012345
36 37 0d 0a 20 36 30 31    32 33 34 35 36 37 20 37    67.. 601234567 7
30 31 32 33 34 35 36 37    20 38 30 31 32 33 34 35    01234567 8012345
36 37 20 39 30 31 32 33    34 35 36 37 38 39 0d 0a    67 90123456789..
55 73 65 72 2d 41 67 65    6e 74 3a 20 48 65 69 72    User-Agent: Heir
6c 6f 6f 6d 20 6d 61 69    6c 78 20 31 32 2e 34 20    loom mailx 12.4
37 2f 32 39 2f 30 38 0d    0a 4d 49 4d 45 2d 56 65    7/29/08..MIME-Ve
72 73 69 6f 6e 3a 20 31    2e 30 0d 0a 43 6f 6e 74    rsion: 1.0..Cont
65 6e 74 2d 54 79 70 65    3a 20 74 65 78 74 2f 70    ent-Type: text/p
6c 61 69 6e 3b 20 63 68    61 72 73 65 74 3d 75 73    lain; charset=us

About half way down (the row beginning 36 37 0d 0a), between the 501234567 and the 601234567 in the original Subject: header, you can see a CRLF being inserted (0x0d 0x0a in the left-hand hex dump, .. in the right-hand plain text). The receiving MTA seems happy to post-process this, and when I look at the on-disc stored mail at the receiving end, I see only a LF (0x0a) in the Subject: line, and the line is parsed correctly and in its entirety by, e.g., alpine. Nevertheless, the CRLF is there on the wire, and between me and the (excellent) email-to-SMS support people, we've established that these CRLFs are the cause of the problem.

So my question is: is it lawful for an MTA to insert a gratuitous CRLF on the wire? If it is, and I can prove it, then it's the email-to-SMS house's problem, because they are being intolerant. If it isn't, or it is but I can't prove it, then it becomes my problem, so an answer with references would be most useful.

Edit: I can now come clean that the email-to-SMS service in question is kapow. Once this problem was explained to them, they got it, worked with me to develop and test a fix, and have deployed the fix. My long subject lines with colons in them now get relayed correctly into SMSes.
I don't normally trumpet individual companies, especially not on SF, but I thought it worthy of note that kapow Did The Right Thing. (Disclaimer: I have no connection with kapow except as a paying customer who's happy about the way they dealt with his problem.)
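For anyone else chasing the same symptom: a fold is, by definition, a CRLF immediately followed by whitespace, so a tolerant receiver can simply unfold headers before parsing them. A rough sketch (message.eml is a hypothetical capture file name):

    # Rough sketch: unfold headers in a captured message before parsing,
    # by removing every CRLF that is immediately followed by a space or tab.
    perl -0777 -pe 's/\r\n(?=[ \t])//g' message.eml > message-unfolded.eml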

    Read the article

  • 3 Performance Presentations from SAE added to the portal

    - by uwes
The following three presentations have been added to the eSTEP portal:

- Oracle's Systems Performance Oct 2012 Update
- Oracle Leads the Way on Realistic Sizing
- Oracle's Performance: Oracle SPARC SuperCluster

All presentations were created by Brad Carlile, Sr. Director Strategic Applications Engineering, SAE.

How to get to the presentations:

URL: http://launch.oracle.com/
Email Address: <provide your email address>
Access URL/Page Token: eSTEP_2011

To get access, push the Agree button on the left side of the page. Then click on eSTEP Download (tab band on the top) ---> presentations at the right hand side, or click on Miscellaneous (menu on the left hand side) ---> presentations at the right hand side.

    Read the article

  • Oracle Social Network Developer Challenge: Fishbowl Solutions

    - by Kellsey Ruppel
Originally posted by Jake Kuramoto on The Apps Lab blog.

Today, I give you the final entry in the Oracle Social Network Developer Challenge, held last week during OpenWorld. This one comes from Friend of the 'Lab and Fishbowl Solutions (@fishbowle20) hacker John Sim (@jrsim_uix), whom you might remember from his XBox Kinect demo at COLLABORATE 12 (presentation slides and abstract) and other hacks and exploits with WebCenter.

We put this challenge together specifically for developers like John, who like to experiment with new tools, push the envelope of what's possible, and build cool things, and as you can see from his entry, John did just that, mashing together Google Maps and Oracle Social Network into a mobile app built with PhoneGap that uses the device's camera and GPS to keep teams on the move in touch. He calls it a Mobile GeoTagging Solution, but I think Avengers Assemble! would have been equally descriptive, given that was obviously his inspiration. Here's his description of the mobile app:

"My proposed solution was to design and simplify GeoLocation mapping, and automate updates for users and teams on the move; who don't have access to a laptop or want to take their ipads out – but allow them to make quick updates to OSN and upload photos taken from their mobile device – there and then. As part of this; the plan was to include a rules engine that could be configured by the user to allow the device to automatically update and post messages when they arrived at a set location(s). Inspiration for this came from on{x} – automate your life."

Unfortunately, John didn't make it to the conference to show off his hard work in person, but luckily, he had a colleague from Fishbowl and a video to showcase his work. Here are some shots of John's mobile app for your viewing pleasure.

John's thinking is sound. Geolocation is usually relegated to consumer use cases, thanks to services like foursquare, but distributed teams working on projects out in the world definitely need a way to stay in contact. Consider a construction job. Different contractors all converge on a single location, and time is money. Rather than calling or texting each other and risking a distracted driving accident, an app like John's allows everyone on the job to see exactly where the other contractors are. Using his GPS rules, they could easily be notified about how close each is to the site, definitely useful when you have a flooring contractor sitting idle, waiting for an electrician to finish the wiring.

The best part is that the project manager or general contractor could stay updated on all the action (or inaction) using Oracle Social Network, either sitting at a desk using the browser app or desktop client or on the go, using one of the native mobile apps built for Oracle Social Network. I can see this being used by insurance adjusters too, and really any team that, erm, assembles at a given spot. Of course, it's also useful for meeting at the pub after the day's work is done.

Beyond people, this solution could also be implemented for physical objects that are en route to a destination. Say you're a customer waiting on a rail shipment or a package delivery. You could track your valuables' whereabouts easily as they report their progress via checkins. If they deviated from the GPS rules, you'd be notified. You might even be able to get a picture into Oracle Social Network with some light hacking.

Thanks to John and his colleagues at Fishbowl for participating in our challenge. We hope everyone had a good experience.
Make sure to check out John's blog post on his work and his experience using Oracle Social Network. Although this is the final official entry we had, tomorrow I'll show you the work of someone who finished his code but wasn't able to make the judging event. Stay tuned.

    Read the article

  • Are there any risks if your DNS's SOA or admin contact uses the same domain as the DNS

    - by Yoga
For example, Google.com [1]. The SOA email is dns-admin.google.com, and the contact is:

Administrative Contact:
DNS Admin
Google Inc.
dns-admin.google.com

As you can see, both use google.com. I am wondering whether it is safe to use the same domain; consider the case where you lose control of the domain: would you still be able to receive email at that contact address? (Of course Google is a public company, so the chance is low, but it might happen to a smaller company whose domain could be stolen.) So, do you recommend using the same domain for the contact, or an outside free service such as Gmail?

[1] http://whois.domaintools.com/google.com
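As an aside, you can see what any domain publishes as its SOA contact with dig; the second field of the SOA answer is the responsible-party mailbox, with the first dot standing in for the '@'. The serial and timer values below are illustrative only:

    # Inspect the SOA record; the second field (dns-admin.google.com. here)
    # is the responsible-party mailbox.
    dig +short google.com soa
    # illustrative output:
    #   ns1.google.com. dns-admin.google.com. 2013060300 7200 1800 1209600 300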

    Read the article

  • Associating your MentionNotifier subscriptions with OAuth

    - by Tim Hibbard
We recently added OAuth to MentionNotifier so that users can quickly view and edit their subscriptions without needing an additional login. This is enabled by default for new users, but existing users will need to do the following steps to associate their subscriptions with OAuth:

1) Go to http://software.engraph.com/ManageMentionNotifier
2) Click "Sign in with Twitter"
3) Verify that your twittername and email are correct
4) Click "Associate with OAuth"

This will also allow you to reply to notification emails and have MentionNotifier tweet on your behalf. This is made possible by @sidePop, written by @ferventcoder. Note that reply-by-email is new and buggy, so make sure that what was tweeted is correct and as expected.

If you run into any issues, send me a reply to @timhibbard. You can also join the MentionNotifier fan page on Facebook, or follow @MentionNotifier on Twitter.

    Read the article

< Previous Page | 210 211 212 213 214 215 216 217 218 219 220 221  | Next Page >