Search Results

Search found 1770 results on 71 pages for 'stupid idiot'.


  • It’s nice to be important, but it’s more important to be nice

    - by BuckWoody
    I’ve been a little “preachy” lately, telling you that you should let people finish their sentences, and always check a problem out before you tell a user that their issue is “impossible”. Well, I’ll round that out with one more tip today. Keep in mind that all of these things are actions I’ve been guilty of, hopefully in the past. I’m kind of a “work in progress”. And yes, I know these tips are coming from someone who picks on people in presentations, but that is of course done in fun, and (hopefully) with the audience’s knowledge.   (No, this isn’t aimed at any one person or event in particular – I just see it happen a lot)   I’ve seen, unfortunately over and over, someone in authority react badly to someone who is incorrect, or at least perceived to be incorrect. This might manifest itself in a comment, post, question or whatever, but the point is that I’ve seen really intelligent people literally attack someone they view as getting something wrong. Don’t misunderstand me; if someone posts that you should always drop a production database in the middle of the day I think you should certainly speak up and mention that this might be a bad idea!  No, I’m talking about generalizations or even incorrect statements done in good faith. Let me explain with an example.   Suppose someone makes the statement: “If you don’t have enough space on your system, you can just use a DBCC command to shrink the database”. Let’s take two responses to this statement.   Response One: “That’s insane. Everyone knows that shrinking a database is a stupid idea, you’re just going to fragment your indexes all over the place.” Response Two: “That’s an interesting take – in my experience and from what I’ve read here (someurl.com) I think this might not be a universal best practice.”   Of course, both responses let the person making the statement and those reading it know that you don’t agree, and that it’s probably wrong. But the person you responded to and the general audience hearing you (or reading your response) might form two different opinions of you.   The first response says to me “this person really needs to be right, and takes arguments personally. They aren’t thinking of the other person at all, or the folks reading or hearing the exchange. They turned an incorrect technical statement into a personal attack. They haven’t left the other party any room to ‘save face’, and they have potentially turned what could be a positive learning experience for everyone into a negative. Also, they sound more than just a little arrogant.”   The second response says to me “this person has left room for everyone to save face, has presented evidence to the contrary and is thinking about moving the ball forward and getting it right rather than attacking someone for getting it wrong.” It’s the idea of questioning a statement rather than attacking a person.   Perhaps you have a different take. Maybe you think the “direct” approach is best – and maybe that’s worked for you. Something to consider is what you’ve really accomplished while using that first method. Sure, the info you provide is correct, and perhaps someone out there won’t shrink a database because of your response – but perhaps you’ve turned a lot more people off, and now they won’t listen to your other valuable information. You’ll be an expert, but another one of the nameless, arrogant jerks in technology. And I don’t think anyone likes to be thought of that way.   OK, I’ll get down off of the high-horse now. 
And I’ll keep the title of this entry (said to me by my grandmother when I was a little kid) in mind when I dismount.

    Read the article

  • Why Software Sucks...and What You Can Do About It – book review

    - by DigiMortal
        How do our users see the products we are writing for them and how happy they are with our work? Are they able to get their work done without fighting with cool features and crashes or are they just switching off resistance part of their brain to survive our software? Yeah, the overall picture of software usability landscape is not very nice. Okay, it is not even nice. But, fortunately, Why Software Sucks...and What You Can Do About It by David S. Platt explains everything. Why Software Sucks… is book for software users but I consider it as a-must reading also for developers and specially for their managers whose politics often kills all usability topics as soon as they may appear. For managers usability is soft topic that can be manipulated the way it is best in current state of project. Although developers are not UI designers and usability experts they are still very often forced to deal with these topics and this is how usability problems start (of course, also designers are able to produce designs that are stupid and too hard to use for users, but this blog here is about development). I found this book to be very interesting and funny reading. It is not humor book but it explains you all so you remember later very well what you just read. It took me about three evenings to go through this book and I am still enjoying what I found and how author explains our weird young working field to end users. I suggest this book to all developers – while you are demanding your management to hire or outsource usability expert you are at least causing less pain to end users. So, go and buy this book, just like I did. And… they thanks to mr. Platt :) There is one book more I suggest you to read if you are interested in usability - Don't Make Me Think: A Common Sense Approach to Web Usability, 2nd Edition by Steve Krug. Editorial review from Amazon Today’s software sucks. There’s no other good way to say it. It’s unsafe, allowing criminal programs to creep through the Internet wires into our very bedrooms. It’s unreliable, crashing when we need it most, wiping out hours or days of work with no way to get it back. And it’s hard to use, requiring large amounts of head-banging to figure out the simplest operations. It’s no secret that software sucks. You know that from personal experience, whether you use computers for work or personal tasks. In this book, programming insider David Platt explains why that’s the case and, more importantly, why it doesn’t have to be that way. And he explains it in plain, jargon-free English that’s a joy to read, using real-world examples with which you’re already familiar. In the end, he suggests what you, as a typical user, without a technical background, can do about this sad state of our software—how you, as an informed consumer, don’t have to take the abuse that bad software dishes out. As you might expect from the book’s title, Dave’s expose is laced with humor—sometimes outrageous, but always dead on. You’ll laugh out loud as you recall incidents with your own software that made you cry. You’ll slap your thigh with the same hand that so often pounded your computer desk and wished it was a bad programmer’s face. But Dave hasn’t written this book just for laughs. He’s written it to give long-overdue voice to your own discovery—that software does, indeed, suck, but it shouldn’t. Table of contents Acknowledgments xiii Introduction Chapter 1: Who’re You Calling a Dummy? 
Where We Came From Why It Still Sucks Today Control versus Ease of Use I Don’t Care How Your Program Works A Bad Feature and a Good One Stopping the Proceedings with Idiocy Testing on Live Animals Where We Are and What You Can Do Chapter 2: Tangled in the Web Where We Came From How It Works Why It Still Sucks Today Client-Centered Design versus Server-Centered Design Where’s My Eye Opener? It’s Obvious—Not! Splash, Flash, and Animation Testing on Live Animals What You Can Do about It Chapter 3: Keep Me Safe The Way It Was Why It Sucks Today What Programmers Need to Know, but Don’t A Human Operation Budgeting for Hassles Users Are Lazy Social Engineering Last Word on Security What You Can Do Chapter 4: Who the Heck Are You? Where We Came From Why It Still Sucks Today Incompatible Requirements OK, So Now What? Chapter 5: Who’re You Looking At? Yes, They Know You Why It Sucks More Than Ever Today Users Don’t Know Where the Risks Are What They Know First Milk You with Cookies? Privacy Policy Nonsense Covering Your Tracks The Google Conundrum Solution Chapter 6: Ten Thousand Geeks, Crazed on Jolt Cola See Them in Their Native Habitat All These Geeks Who Speaks, and When, and about What Selling It The Next Generation of Geeks—Passing It On Chapter 7: Who Are These Crazy Bastards Anyway? Homo Logicus Testosterone Poisoning Control and Contentment Making Models Geeks and Jocks Jargon Brains and Constraints Seven Habits of Geeks Chapter 8: Microsoft: Can’t Live With ’Em and Can’t Live Without ’Em They Run the World Me and Them Where We Came From Why It Sucks Today Damned if You Do, Damned if You Don’t We Love to Hate Them Plus ça Change Growing-Up Pains What You Can Do about It The Last Word Chapter 9: Doing Something About It 1. Buy 2. Tell 3. Ridicule 4. Trust 5. Organize Epilogue About the Author

    Read the article

  • Anti-Forgery Request in ASP.NET MVC and AJAX

    - by Dixin
    Background To secure websites from cross-site request forgery (CSRF, or XSRF) attack, ASP.NET MVC provides an excellent mechanism: The server prints tokens to cookie and inside the form; When the form is submitted to server, token in cookie and token inside the form are sent by the HTTP request; Server validates the tokens. To print tokens to browser, just invoke HtmlHelper.AntiForgeryToken():<% using (Html.BeginForm()) { %> <%: this.Html.AntiForgeryToken(Constants.AntiForgeryTokenSalt)%> <%-- Other fields. --%> <input type="submit" value="Submit" /> <% } %> which writes to token to the form:<form action="..." method="post"> <input name="__RequestVerificationToken" type="hidden" value="J56khgCvbE3bVcsCSZkNVuH9Cclm9SSIT/ywruFsXEgmV8CL2eW5C/gGsQUf/YuP" /> <!-- Other fields. --> <input type="submit" value="Submit" /> </form> and the cookie: __RequestVerificationToken_Lw__=J56khgCvbE3bVcsCSZkNVuH9Cclm9SSIT/ywruFsXEgmV8CL2eW5C/gGsQUf/YuP When the above form is submitted, they are both sent to server. [ValidateAntiForgeryToken] attribute is used to specify the controllers or actions to validate them:[HttpPost] [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public ActionResult Action(/* ... */) { // ... } This is very productive for form scenarios. But recently, when resolving security vulnerabilities for Web products, I encountered 2 problems: It is expected to add [ValidateAntiForgeryToken] to each controller, but actually I have to add it for each POST actions, which is a little crazy; After anti-forgery validation is turned on for server side, AJAX POST requests will consistently fail. Specify validation on controller (not on each action) Problem For the first problem, usually a controller contains actions for both HTTP GET and HTTP POST requests, and usually validations are expected for HTTP POST requests. So, if the [ValidateAntiForgeryToken] is declared on the controller, the HTTP GET requests become always invalid:[ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public class SomeController : Controller { [HttpGet] public ActionResult Index() // Index page cannot work at all. { // ... } [HttpPost] public ActionResult PostAction1(/* ... */) { // ... } [HttpPost] public ActionResult PostAction2(/* ... */) { // ... } // ... } If user sends a HTTP GET request from a link: http://Site/Some/Index, validation definitely fails, because no token is provided. So the result is, [ValidateAntiForgeryToken] attribute must be distributed to each HTTP POST action in the application:public class SomeController : Controller { [HttpGet] public ActionResult Index() // Works. { // ... } [HttpPost] [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public ActionResult PostAction1(/* ... */) { // ... } [HttpPost] [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public ActionResult PostAction2(/* ... */) { // ... } // ... 
} Solution To avoid a large number of [ValidateAntiForgeryToken] attributes (one attribute for one HTTP POST action), I created a wrapper class of ValidateAntiForgeryTokenAttribute, where HTTP verbs can be specified:[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = false, Inherited = true)] public class ValidateAntiForgeryTokenWrapperAttribute : FilterAttribute, IAuthorizationFilter { private readonly ValidateAntiForgeryTokenAttribute _validator; private readonly AcceptVerbsAttribute _verbs; public ValidateAntiForgeryTokenWrapperAttribute(HttpVerbs verbs) : this(verbs, null) { } public ValidateAntiForgeryTokenWrapperAttribute(HttpVerbs verbs, string salt) { this._verbs = new AcceptVerbsAttribute(verbs); this._validator = new ValidateAntiForgeryTokenAttribute() { Salt = salt }; } public void OnAuthorization(AuthorizationContext filterContext) { string httpMethodOverride = filterContext.HttpContext.Request.GetHttpMethodOverride(); if (this._verbs.Verbs.Contains(httpMethodOverride, StringComparer.OrdinalIgnoreCase)) { this._validator.OnAuthorization(filterContext); } } } When this attribute is declared on controller, only HTTP requests with the specified verbs are validated:[ValidateAntiForgeryTokenWrapper(HttpVerbs.Post, Constants.AntiForgeryTokenSalt)] public class SomeController : Controller { // Actions for HTTP GET requests are not affected. // Only HTTP POST requests are validated. } Now one single attribute on controller turns on validation for all HTTP POST actions. Submit token via AJAX Problem For AJAX scenarios, when request is sent by JavaScript instead of form:$.post(url, { productName: "Tofu", categoryId: 1 // Token is not posted. }, callback); This kind of AJAX POST requests will always be invalid, because server side code cannot see the token in the posted data. Solution The token must be printed to browser then submitted back to server. So first of all, HtmlHelper.AntiForgeryToken() must be called in the page where the AJAX POST will be sent. Then jQuery must find the printed token in the page, and post it:$.post(url, { productName: "Tofu", categoryId: 1, __RequestVerificationToken: getToken() // Token is posted. }, callback); To be reusable, this can be encapsulated in a tiny jQuery plugin:(function ($) { $.getAntiForgeryToken = function () { // HtmlHelper.AntiForgeryToken() must be invoked to print the token. return $("input[type='hidden'][name='__RequestVerificationToken']").val(); }; var addToken = function (data) { // Converts data if not already a string. if (data && typeof data !== "string") { data = $.param(data); } data = data ? data + "&" : ""; return data + "__RequestVerificationToken=" + encodeURIComponent($.getAntiForgeryToken()); }; $.postAntiForgery = function (url, data, callback, type) { return $.post(url, addToken(data), callback, type); }; $.ajaxAntiForgery = function (settings) { settings.data = addToken(settings.data); return $.ajax(settings); }; })(jQuery); Then in the application just replace $.post() invocation with $.postAntiForgery(), and replace $.ajax() instead of $.ajaxAntiForgery():$.postAntiForgery(url, { productName: "Tofu", categoryId: 1 }, callback); // Token is posted. This solution looks hard coded and stupid. If you have more elegant solution, please do tell me.
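    To tie the two halves of this together, here is a minimal, hypothetical sketch of what a consuming controller could look like once the wrapper attribute and the jQuery plugin above are in place. ProductController, Edit and Update are made-up names for illustration only; ValidateAntiForgeryTokenWrapper and Constants.AntiForgeryTokenSalt are the ones defined in this post.

    // Hypothetical controller: one wrapper attribute on the class covers every POST
    // action (including the AJAX ones), while GET actions are left alone.
    [ValidateAntiForgeryTokenWrapper(HttpVerbs.Post, Constants.AntiForgeryTokenSalt)]
    public class ProductController : Controller
    {
        [HttpGet]
        public ActionResult Edit(int id)
        {
            // The view rendered here must call Html.AntiForgeryToken(Constants.AntiForgeryTokenSalt)
            // so that $.getAntiForgeryToken() can find the hidden field on the client.
            return View();
        }

        [HttpPost]
        public ActionResult Update(string productName, int categoryId)
        {
            // Reached only when the cookie token and the posted __RequestVerificationToken
            // (from a normal form post or from $.postAntiForgery) both validate.
            return Json(new { success = true });
        }
    }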

    Read the article

  • The ugly evolution of running a background operation in the context of an ASP.NET app

    - by Jeff
    If you’re one of the two people who has followed my blog for many years, you know that I’ve been going at POP Forums now for over almost 15 years. Publishing it as an open source app has been a big help because it helps me understand how people want to use it, and having it translated to six languages is pretty sweet. Despite this warm and fuzzy group hug, there has been an ugly hack hiding in there for years. One of the things we find ourselves wanting to do is hide some kind of regular process inside of an ASP.NET application that runs periodically. The motivation for this has always been that a lot of people simply don’t have a choice, because they’re running the app on shared hosting, or don’t otherwise have access to a box that can run some kind of regular background service. In POP Forums, I “solved” this problem years ago by hiding some static timers in an HttpModule. Truthfully, this works well as long as you don’t run multiple instances of the app, which in the cloud world, is always a possibility. With the arrival of WebJobs in Azure, I’m going to solve this problem. This post isn’t about that. The other little hacky problem that I “solved” was spawning a background thread to queue emails to subscribed users of the forum. This evolved quite a bit over the years, starting with a long running page to mail users in real-time, when I had only a few hundred. By the time it got into the thousands, or tens of thousands, I needed a better way. What I did is launched a new thread that read all of the user data in, then wrote a queued email to the database (as in, the entire body of the email, every time), with the properly formatted opt-out link. It was super inefficient, but it worked. Then I moved my biggest site using it, CoasterBuzz, to an Azure Website, and it stopped working. So let’s start with the first stupid thing I was doing. The new thread was simply created with delegate code inline. As best I can tell, Azure Websites are more aggressive about garbage collection, because that thread didn’t queue even one message. When the calling server response went out of scope, so went the magic background thread. Duh, all I had to do was move the thread to a private static variable in the class. That’s the way I was able to keep stuff running from the HttpModule. (And yes, I know this is still prone to failure, particularly if the app recycles. For as infrequently as it’s used, I have not, however, experienced this.) It was still failing, but this time I wasn’t sure why. It would queue a few dozen messages, then die. Running in Azure, I had to turn on the application logging and FTP in to see what was going on. That led me to a helper method I was using as delegate to build the unsubscribe links. The idea here is that I didn’t want yet another config entry to describe the base URL, appended with the right path that would match the routing table. No, I wanted the app to figure it out for you, so I came up with this little thing: public static string FullUrlHelper(this Controller controller, string actionName, string controllerName, object routeValues = null) { var helper = new UrlHelper(controller.Request.RequestContext); var requestUrl = controller.Request.Url; if (requestUrl == null) return String.Empty; var url = requestUrl.Scheme + "://"; url += requestUrl.Host; url += (requestUrl.Port != 80 ? ":" + requestUrl.Port : ""); url += helper.Action(actionName, controllerName, routeValues); return url; } And yes, that should have been done with a string builder. 
This is useful for sending out the email verification messages, too. As clever as I thought I was with this, I was using a delegate in the admin controller to format these unsubscribe links for tens of thousands of users. I passed that delegate into a service class that did the email work: Func<User, string> unsubscribeLinkGenerator = user => this.FullUrlHelper("Unsubscribe", AccountController.Name, new { id = user.UserID, key = _profileService.GetUnsubscribeHash(user) }); _mailingListService.MailUsers(subject, body, htmlBody, unsubscribeLinkGenerator); Cool, right? Actually, not so much. If you look back at the helper, this delegate then will depend on the controller context to learn the routing and format for the URL. As you might have guessed, those things were turning null after a few dozen formatted links, when the original request to the admin controller went away. That this wasn’t already happening on my dedicated server is surprising, but again, I understand why the Azure environment might be eager to reclaim a thread after servicing the request. It’s already inefficient that I’m building the entire email for every user, but going back to check the routing table for the right link every time isn’t a win either. I put together a little hack to look up one generic URL, and use that as the basis for a string format. If you’re wondering why I didn’t just use the curly braces up front, it’s because they get URL formatted: var baseString = this.FullUrlHelper("Unsubscribe", AccountController.Name, new { id = "--id--", key = "--key--" }); baseString = baseString.Replace("--id--", "{0}").Replace("--key--", "{1}"); Func unsubscribeLinkGenerator = user => String.Format(baseString, user.UserID, _profileService.GetUnsubscribeHash(user)); _mailingListService.MailUsers(subject, body, htmlBody, unsubscribeLinkGenerator); And wouldn’t you know it, the new solution works just fine. It’s still kind of hacky and inefficient, but it will work until this somehow breaks too.
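    For readers who want to see the "move the thread to a private static variable" fix in code, here is a rough sketch of that pattern, under the same caveat as above (an app recycle can still kill it). MailWorker and the queueMailForSubscribers delegate are hypothetical stand-ins, not the actual POP Forums code.

    using System;
    using System.Threading;

    public class MailWorker
    {
        // The reference lives in a static field, so it is not collected
        // when the controller action that started it goes out of scope.
        private static Thread _mailThread;

        public static void Start(Action queueMailForSubscribers)
        {
            // Don't start a second pass if the previous one is still running.
            if (_mailThread != null && _mailThread.IsAlive)
                return;

            _mailThread = new Thread(() => queueMailForSubscribers())
            {
                IsBackground = true
            };
            _mailThread.Start();
        }
    }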

    Read the article

  • MSSQL: Copying data from one database to another

    - by DigiMortal
    I have a database with data imported from another server using the import and export wizard of SQL Server Management Studio. There is also an empty database with the same tables, but that one also has primary keys, foreign keys and indexes. How do I get the data from the first database to the other? Here is the description of my crusade. And believe me – it is not a nice one.

Bugs in the import and export wizard. There are some awful bugs in the import and export wizard that make data imports and exports possible only in a very limited manner: the wizard is not able to analyze foreign keys, and the wizard always wants to create tables, whatever you say in the settings. The result is a faulty and useless package. Now let’s go step by step and make things work in our scenario.

Databases. There are two databases. Let’s name them like this: PLAIN – contains the data imported from the remote server (no indexes, no keys, no nothing, just plain dumb data); CORRECT – an empty database with the same structure as the remote database (indexes, keys and everything else, but no data). Our goal is to get the data from PLAIN to CORRECT.

1. Create the import and export package. At this point we will create the faulty SSIS package using SQL Server Management Studio. Run the import and export wizard and let it create an SSIS package that reads data from CORRECT and writes it to, let’s say, CORRECT-2. Make sure you enable identity insert. Make sure there are no views selected. Make sure you don’t let the package create tables (you can miss this step because it wants to create tables anyway). Save the package to SSIS.

2. Modify the import and export package. Now let’s clean up the package and remove all the faulty crap. Connect SQL Server Management Studio to the SSIS instance. Select the package you just saved and export it to your hard disc. Run Business Intelligence Studio. Create a new SSIS project (DON’T MISS THIS STEP). Add the package from disc as an existing item to the project and open it. Move to the Control Flow page and do one of the following: remove all preparation SQL tasks and connect the Data Flow tasks, or modify all preparation SQL tasks so that the existence of a table is checked before the table is created (yes, you have to do it manually). Add a new Execute SQL task as the first task in the control flow: open the task properties, assign the destination connection as the connection to use, and insert the following SQL as the command:

EXEC sp_MSForEachTable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL'
GO
EXEC sp_MSForEachTable 'DELETE FROM ?'
GO

Save the task. Add a new Execute SQL task as the last task in the control flow: open the task properties, assign the destination connection as the connection to use, and insert the following SQL as the command:

EXEC sp_MSForEachTable 'ALTER TABLE ? CHECK CONSTRAINT ALL'
GO

Save the task. Now connect the first Execute SQL task to the first Data Flow task, and the last Data Flow task to the second Execute SQL task. Then move to the Package Explorer tab and change the connections under the Connection Managers folder: make the source connection use database PLAIN and the destination connection use database CORRECT. Save the package and rebuild the project. Update the package using SQL Server Management Studio. Some hints: make sure you take the package from the solution folder, because that is where it is saved now; don’t overwrite the existing package – use a numeric suffix and let Management Studio create a new version of the package. Now you are done with your package. Run it to test it and clean out all the errors you find.

TRUNCATE vs DELETE. You can see that I used DELETE FROM instead of TRUNCATE. Why? Because TRUNCATE has some nasty limits (taken from MSDN): “You cannot use TRUNCATE TABLE on a table referenced by a FOREIGN KEY constraint; instead, use DELETE statement without a WHERE clause. Because TRUNCATE TABLE is not logged, it cannot activate a trigger. TRUNCATE TABLE may not be used on tables participating in an indexed view.” As I am not sure what tables you have and how they are used, I provided the solution that should work for all scenarios. If you need better performance, then in some cases you can use TRUNCATE TABLE instead of DELETE.

Conclusion. My conclusion is bitter this time, although I am a very positive guy. It is A.D. 2010 and still we have to write stupid hacks for simple things. Simple tools that existed before are long gone, and we have to live with mysterious bloatware that is our only choice when using the default tools. If you take a look at the length of this posting and the number of steps I had to do for one easy thing, you should treat it as a signal that something has gone wrong over the last years. Although I got my job done, I would still be happier if the out-of-the-box tools were more intelligent one day.

References: T-SQL Trick for Deleting All Data in Your Database (Mauro Cardarelli); TRUNCATE TABLE (MSDN Library); Error Handling in SQL 2000 – a Background (Erland Sommarskog); Disable/Enable Foreign Key and Check constraints in SQL Server (Decipher)
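    For completeness, here is a hedged alternative sketch to the SSIS route above (not what this post did): the same NOCHECK / DELETE / copy / CHECK sequence driven from a small ADO.NET program, with SqlBulkCopy's KeepIdentity option playing the role of "enable identity insert". The connection strings and the table list are placeholders you would have to fill in yourself.

    using System.Data.SqlClient;

    class PlainToCorrectCopy
    {
        static void Main()
        {
            var source = "Data Source=.;Initial Catalog=PLAIN;Integrated Security=True";
            var target = "Data Source=.;Initial Catalog=CORRECT;Integrated Security=True";
            var tables = new[] { "dbo.SomeTable", "dbo.AnotherTable" };   // placeholder list

            using (var src = new SqlConnection(source))
            using (var dst = new SqlConnection(target))
            {
                src.Open();
                dst.Open();

                // Same trick as the Execute SQL tasks: disable constraints, empty the tables.
                Exec(dst, "EXEC sp_MSForEachTable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL'");
                Exec(dst, "EXEC sp_MSForEachTable 'DELETE FROM ?'");

                foreach (var table in tables)
                {
                    using (var cmd = new SqlCommand("SELECT * FROM " + table, src))
                    using (var reader = cmd.ExecuteReader())
                    using (var bulk = new SqlBulkCopy(dst, SqlBulkCopyOptions.KeepIdentity, null))
                    {
                        bulk.DestinationTableName = table;
                        bulk.WriteToServer(reader);   // copies the rows, identity values included
                    }
                }

                // Turn the constraints back on once everything is loaded.
                Exec(dst, "EXEC sp_MSForEachTable 'ALTER TABLE ? CHECK CONSTRAINT ALL'");
            }
        }

        static void Exec(SqlConnection conn, string sql)
        {
            using (var cmd = new SqlCommand(sql, conn)) cmd.ExecuteNonQuery();
        }
    }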

    Read the article

  • AppHarbor - Azure Done Right AKA Heroku for .NET

    - by Robz / Fervent Coder
    Easy and Instant deployments and instant scale for .NET? Awhile back a few of us were looking at Ruby Gems as the answer to package management for .NET. The gems platform supported the concept of DLLs as packages although some changes would have needed to happen to have long term use for the entire community. From that we formed a partnership with some folks at Microsoft to make v2 into something that would meet wider adoption across the community, which people now call NuGet. So now we have the concept of package management. What comes next? Heroku Instant deployments and instant scaling. Stupid simple API. This is Heroku. It doesn’t sound like much, but when you think of how fast you can go from an idea to having someone else tinker with it, you can start to see its power. In literally seconds you can be looking at your rails application deployed and online. Then when you are ready to scale, you can do that. This is power. Some may call this “cloud-computing” or PaaS (Platform as a Service). I first ran into Heroku back in July when I met Nick of RubyGems.org. At the time there was no alternative in the .NET-o-sphere. I don’t count Windows Azure, mostly because it is not simple and I don’t believe there is a free version. Heroku itself would not lend itself well to .NET due to the nature of platforms and each language’s specific needs (solution stack).  So I tucked the idea in the back of my head and moved on. AppHarbor Enters The Scene I’m not sure when I first heard about AppHarbor as a possible .NET version of Heroku. It may have been in November, but I didn’t actually try it until January. I was instantly hooked. AppHarbor is awesome! It still has a ways to go to be considered Heroku for .NET, but it already has a growing community. I created a video series (at the bottom of this post) that really highlights how fast you can get a product onto the web and really shows the power and simplicity of AppHarbor. Deploying is as simple as a git/hg push to appharbor. From there they build your code, run any unit tests you have and deploy it if everything succeeds. The screen on the right shows a simple and elegant UI to getting things done. The folks at AppHarbor graciously gave me a limited number of invites to hand out. If you are itching to try AppHarbor then navigate to: https://appharbor.com/account/new?inviteCode=ferventcoder  After playing with it, send feedback if you want more features. Go vote up two features I want that will make it more like Heroku. Disclaimer: I am in no way affiliated with AppHarbor and have not received any funds or favors from anyone at AppHarbor. I just think it is awesome and I want others to know about it. From Zero To Deployed in 15 Minutes (Or Less) Now I have a challenge for you. I created a video series showing how fast I could go from nothing to a deployed application. It could have been from Zero to Deployed in Less than 5 minutes, but I wanted to show you the tools a little more and give you an opportunity to beat my time. And that’s the challenge. Beat my time and show it in a video response. The video series is below (at least one of the videos has to be watched on YouTube). The person with the best time by March 15th @ 11:59PM CST will receive a prize. 
Ground rules: .NET Application with a valid database connection Start from Zero Deployed with AppHarbor or an alternative A timer displayed in the video that runs during the entire process Video response published on YouTube or acceptable alternative Video(s) must be published by March 15th at 11:59PM CST. Either post the link here as a comment or on YouTube as a response (also by 11:59PM CST March 15th) From Zero To Deployed In 15 Minutes (Or Less) Part 1 From Zero To Deployed In 15 Minutes (Or Less) Part 2 From Zero To Deployed In 15 Minutes (Or Less) Part 3

    Read the article

  • UppercuT v1.0 and 1.1&ndash;Linux (Mono), Multi-targeting, SemVer, Nitriq and Obfuscation, oh my!

    - by Robz / Fervent Coder
    Recently UppercuT (UC) quietly released version 1 (in August). I’m pretty happy with where we are, although I think it’s a few months later than I originally planned. I’m glad I held it back, it gave me some more time to think about some things a little more and also the opportunity to receive a patch for running builds with UC on Linux. We also released v1.1 very recently (December). UppercuT v1 Builds On Linux Perhaps the most significant changes to UC going v1 is that it now supports builds on Linux using Mono! This is thanks mostly to Svein Ackenhausen for the patches and working with me on getting it all working while not breaking the windows builds!  This means you can use mono on Windows or Linux. Notice the shell files to execute with Linux that come as part of UC now. Multi-Targeting Perhaps one of the hardest things to do that requires an automated build is multi-targeting. At v1 this is early, and possibly prone to some issues, but available.  We believe in making everything stupid simple, so it’s as simple as adding a comma to the microsoft.framework property. i.e. “net-3.5, net-4.0” to suddenly produce both framework builds. When you build, this is what you get (if you meet each framework’s requirements): At this time you have to let UC override the build location (as it does by default) or this will not work.  Semantic Versioning By now many of you have been using UppercuT for awhile and have watched how we have done versioning. Many of you who use git already know we put the revision hash in the informational/product version as the last octet. At v1, UppercuT has adopted the semantic versioning scheme. What does that mean? This is a short read, but a good one: http://SemVer.org SemVer (Semantic Versioning) is really using versioning what it was meant for. You have three octets. Major.Minor.Patch as in 1.1.0.  UC will use three different versioning concepts, one for the assembly version, one for the file version, and one for the product version. All versions - The first three octects of the version are owned by SemVer. Major.Minor.Patch i.e.: 1.1.0 Assembly Version - The assembly version would much closer follow SemVer. Last digit is always 0. Major.Minor.Patch.0 i.e: 1.1.0.0 File Version - The file version occupies the build number as the last digit. Major.Minor.Patch.Build i.e.: 1.1.0.2650 Product/Informational Version - The last octect of your product/informational version is the source control revision/hash. Major.Minor.Patch.RevisionOrHash i.e. (TFS/SVN): 1.1.0.235 i.e. (Git/HG): 1.1.0.a45ace4346adef0 SemVer is not on by default, the passive versioning scheme is still in effect. Notice that version.use_semanticversioning has been added to the UppercuT.config file (and version.patch in support of the third octet): Gems Support Gems support was added at v1. This will probably be deprecated as some point once there is an announced sunset for Nu v1. Application gems may keep it around since there is no alternative for that yet though (CoApp would be a possible replacement). Nitriq Support Nitriq is a code analysis tool like NDepend. It’s built by Mr. Jon von Gillern. It uses LINQ query language, so you can use a familiar idiom when analyzing your code base. It’s a pretty awesome tool that has a free version for those looking to do code analysis! To use Nitriq with UC, you are going to need the console edition.  To take advantage of Nitriq, you just need to update the location of Nitriq in the config: Then add the nitriq project files at the root of your source. 
Please refer to the Nitriq documentation on how these are created. UppercuT v1.1 Obfuscation One thing I started looking into was an easy way to obfuscate my code. I came across EazFuscator, which is both free and awesome. Plus the GUI for it is super simple to use. How do you make obfuscation even easier? Make it a convention and a configurable property in the UC config file! And the code gets obfuscated! Closing Definitely get out and look at the new release. It contains lots of chocolaty (sp?) goodness. And remember, the upgrade path is almost as simple as drag and drop!
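    As a small illustration of the three versioning flavors described above, this is roughly what a build-stamped AssemblyInfo.cs could end up containing. The attribute names are the standard .NET ones; the values are just the example numbers from this post.

    using System.Reflection;

    // Assembly version: the SemVer octets with a fixed 0 on the end.
    [assembly: AssemblyVersion("1.1.0.0")]

    // File version: the build number rides in the last octet.
    [assembly: AssemblyFileVersion("1.1.0.2650")]

    // Product/informational version: the last part is the source control revision or hash.
    [assembly: AssemblyInformationalVersion("1.1.0.a45ace4346adef0")]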

    Read the article

  • My error with upgrading 4.0 to 4.2- What NOT to do...

    - by Steve Tunstall
    Last week, I was helping a client upgrade from the 2011.1.4.0 code to the newest 2011.1.4.2 code. We downloaded the 4.2 update from MOS, upload and unpacked it on both controllers, and upgraded one of the controllers in the cluster with no issues at all. As this was a brand-new system with no networking or pools made on it yet, there were not any resources to fail back and forth between the controllers. Each controller had it's own, private, management interface (igb0 and igb1) and that's it. So we took controller 1 as the passive controller and upgraded it first. The first controller came back up with no issues and was now on the 4.2 code. Great. We then did a takeover on controller 1, making it the active head (although there were no resources for it to take), and then proceeded to upgrade controller 2. Upon upgrading the second controller, we ran the health check with no issues. We then ran the update and it ran and rebooted normally. However, something strange then happened. It took longer than normal to come back up, and when it did, we got the "cluster controllers on different code" error message that one gets when the two controllers of a cluster are running different code. But we just upgraded the second controller to 4.2, so they should have been the same, right??? Going into the Maintenance-->System screen of controller 2, we saw something very strange. The "current version" was still on 4.0, and the 4.2 code was there but was in the "previous" state with the rollback icon, as if it was the OLDER code and not the newer code. I have never seen this happen before. I would have thought it was a bad 4.2 code file, but it worked just fine with controller 1, so I don't think that was it. Other than the fact the code did not update, there was nothing else going on with this system. It had no yellow lights, no errors in the Problems section, and no errors in any of the logs. It was just out of the box a few hours ago, and didn't even have a storage pool yet. So.... We deleted the 4.2 code, uploaded it from scratch, ran the health check, and ran the upgrade again. once again, it seemed to go great, rebooted, and came back up to the same issue, where it came to 4.0 instead of 4.2. See the picture below.... HERE IS WHERE I MADE A BIG MISTAKE.... I SHOULD have instantly called support and opened a Sev 2 ticket. They could have done a shared shell and gotten the correct Fishwork engineer to look at the files and the code and determine what file was messed up and fixed it. The system was up and working just fine, it was just on an older code version, not really a huge problem at all. Instead, I went ahead and clicked the "Rollback" icon, thinking that the system would rollback to the 4.2 code.   Ouch... What happened was that the system said, "Fine, I will delete the 4.0 code and boot to your 4.2 code"... Which was stupid on my part because something was wrong with the 4.2 code file here and the 4.0 was just fine.  So now the system could not boot at all, and the 4.0 code was completely missing from the system, and even a high-level Fishworks engineer could not help us. I had messed it up good. We could only get to the ILOM, and I had to re-image the system from scratch using a hard-to-get-and-use FishStick USB drive. These are tightly controlled and difficult to get, almost always handcuffed to an engineer who will drive out to re-image a system. This took another day of my client's time.  So.... 
If you see a "previous version" of your system code which is actually a version higher than the current version... DO NOT ROLL IT BACK.... It did not upgrade for a very good reason. In my case, after the system was re-imaged to a code level just 3 back, we once again tried the same 4.2 code update and it worked perfectly the first time and is now great and stable. Lesson learned. By the way, our buddy Ryan Matthews wanted to point out the best practice and supported way of performing an upgrade of an active/active ZFSSA, where both controllers are doing some of the work. These steps would not have helped me with the above issue, but it's important to follow the correct procedure when doing an upgrade:
1) Upload the software to both controllers and wait for it to unpack.
2) On controller "A", navigate to configuration/cluster and click "takeover".
3) Wait for controller "B" to finish restarting, then log in to it, navigate to maintenance/system, and roll forward to the new software.
4) Wait for controller "B" to apply the update and finish rebooting.
5) Log in to controller "B", navigate to configuration/cluster and click "takeover".
6) Wait for controller "A" to finish restarting, then log in to it, navigate to maintenance/system, and roll forward to the new software.
7) Wait for controller "A" to apply the update and finish rebooting.
8) Log in to controller "B", navigate to configuration/cluster and click "failback".

    Read the article

  • Rebuilding CoasterBuzz, Part II: Hot data objects

    - by Jeff
    This is the second post, originally from my personal blog, in a series about rebuilding one of my Web sites, which has been around for 12 years. More: Part I: Evolution, and death to WCF After the rush to get moving on stuff, I temporarily lost interest. I went almost two weeks without touching the project, in part because the next thing on my backlog was doing up a bunch of administrative pages. So boring. Unfortunately, because most of the site's content is user-generated, you need some facilities for editing data. CoasterBuzz has a database full of amusement parks and roller coasters. The entities enjoy the relationships that you would expect, though they're further defined by "instances" of a coaster, to define one that has moved between parks as one, with different names and operational dates. And of course, there are pictures and news items, too. It's not horribly complex, except when you have to account for a name change and display just the newest name. In all previous versions, data access was straight SQL. As so much of the old code was rooted in 2003, with some changes in 2008, there wasn't much in the way of ORM frameworks going on then. Let me rephrase that, I mostly wasn't interested in ORM's. Since that time, I used a little LINQ to SQL in some projects, and a whole bunch of nHibernate while at Microsoft. Through all of that experience, I have to admit that these frameworks are often a bigger pain in the ass than not. They're great for basic crud operations, but when you start having all kinds of exotic relationships, they get difficult, and generate all kinds of weird SQL under the covers. The black box can quickly turn into a black hole. Sometimes you end up having to build all kinds of new expertise to do things "right" with a framework. Still, despite my reservations, I used the newer version of Entity Framework, with the "code first" modeling, in a science project and I really liked it. Since it's just a right-click away with NuGet, I figured I'd give it a shot here. My initial effort was spent defining the context class, which requires a bit of work because I deviate quite a bit from the conventions that EF uses, starting with table names. Then throw some partial querying of certain tables (where you'll find image data), and you're splitting tables across several objects (navigation properties). I won't go into the details, because these are all things that are well documented around the Internet, but there was a minor learning curve there. The basics of reading data using EF are fantastic. For example, a roller coaster object has a park associated with it, as well as a number of instances (if it was ever relocated), and there also might be a big banner image for it. This is stupid easy to use because it takes one line of code in your repository class, and by the time you pass it to the view, you have a rich object graph that has everything you need to display stuff. Likewise, editing simple data is also, well, simple. For this goodness, thank the ASP.NET MVC framework. The UpdateModel() method on the controllers is very elegant. Remember the old days of assigning all kinds of properties to objects in your Webforms code-behind? What a time consuming mess that used to be. Even if you're not using an ORM tool, having hydrated objects come off the wire is such a time saver. Not everything is easy, though. 
When you have to persist a complex graph of objects, particularly if they were composed in the user interface with all kinds of AJAX elements and list boxes, it's not just a simple matter of submitting the form. There were a few instances where I ended up going back to "old-fashioned" SQL just in the interest of time. It's not that I couldn't do what I needed with EF, it's just that the efficiency, both my own and that of the generated SQL, wasn't good. Since EF context objects expose a database connection object, you can use that to do the old school ADO.NET stuff you've done for a decade. Using various extension methods from POP Forums' data project, it was a breeze. You just have to stick to your decision, in this case. When you start messing with SQL directly, you can't go back in the same code to messing with entities because EF doesn't know what you're changing. Not really a big deal. There are a number of take-aways from using EF. The first is that you write a lot less code, which has always been a desired outcome of ORM's. The other lesson, and I particularly learned this the hard way working on the MSDN forums back in the day, is that trying to retrofit an ORM framework into an existing schema isn't fun at all. The CoasterBuzz database isn't bad, but there are design decisions I'd make differently if I were starting from scratch. Now that I have some of this stuff done, I feel like I can start to move on to the more interesting things on the backlog. There's a lot to do, but at least it's fun stuff, and not more forms that will be used infrequently.
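    To make the "one line of code in your repository class" point concrete, here is a minimal code-first sketch in the spirit of what's described above; the entity and property names are guesses for illustration, not the real CoasterBuzz schema.

    using System;
    using System.Collections.Generic;
    using System.Data.Entity;   // EF "code first" (EntityFramework package)
    using System.Linq;

    public class Park
    {
        public int ParkId { get; set; }
        public string Name { get; set; }
    }

    public class CoasterInstance
    {
        public int CoasterInstanceId { get; set; }
        public string Name { get; set; }
        public DateTime? Opened { get; set; }
    }

    public class Coaster
    {
        public int CoasterId { get; set; }
        public string Name { get; set; }
        public virtual Park Park { get; set; }
        public virtual ICollection<CoasterInstance> Instances { get; set; }
    }

    public class CoasterBuzzContext : DbContext
    {
        public DbSet<Coaster> Coasters { get; set; }
        public DbSet<Park> Parks { get; set; }
    }

    public class CoasterRepository
    {
        // One query line, and the view gets the coaster plus its park and instances.
        public Coaster Get(int id)
        {
            using (var db = new CoasterBuzzContext())
                return db.Coasters.Include("Park").Include("Instances")
                         .Single(c => c.CoasterId == id);
        }
    }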

    Read the article

  • What Counts for A DBA - Logic

    - by drsql
    "There are 10 kinds of people in the world. Those who will always wonder why there are only two items in my list and those who will figured it out the first time they saw this very old joke."  Those readers who will give up immediately and get frustrated with me for not explaining it to them are not likely going to be great technical professionals of any sort, much less a programmer or administrator who will be constantly dealing with the common failures that make up a DBA's day.  Many of these people will stare at this like a dog staring at a traffic signal and still have no more idea of how to decipher the riddle. Without explanation they will give up, call the joke "stupid" and, feeling quite superior, walk away indignantly to their job likely flipping patties of meat-by-product. As a data professional or any programmer who has strayed  to this very data-oriented blog, you would, if you are worth your weight in air, either have recognized immediately what was going on, or felt a bit ignorant.  Your friends are chuckling over the joke, but why is it funny? Unfortunately you left your smartphone at home on the dresser because you were up late last night programming and were running late to work (again), so you will either have to fake a laugh or figure it out.  Digging through the joke, you figure out that the word "two" is the most important part, since initially the joke mentioned 10. Hmm, why did they spell out two, but not ten? Maybe 10 could be interpreted a different way?  As a DBA, this sort of logic comes into play every day, and sometimes it doesn't involve nerdy riddles or Star Wars folklore.  When you turn on your computer and get the dreaded blue screen of death, you don't immediately cry to the help desk and sit on your thumbs and whine about not being able to work. Do that and your co-workers will question your nerd-hood; I know I certainly would. You figure out the problem, and when you have it narrowed down, you call the help desk and tell them what the problem is, usually having to explain that yes, you did in fact try to reboot before calling.  Of course, sometimes humility does come in to play when you reach the end of your abilities, but the ‘end of abilities’ is not something any of us recognize readily. It is handy to have the ability to use logic to solve uncommon problems: It becomes especially useful when you are trying to solve a data-related problem such as a query performance issue, and the way that you approach things will tell your coworkers a great deal about your abilities.  The novice is likely to immediately take the approach of  trying to add more indexes or blaming the hardware. As you become more and more experienced, it becomes increasingly obvious that performance issues are a very complex topic. A query may be slow for a myriad of reasons, from concurrency issues, a poor query plan because of a parameter value (like parameter sniffing,) poor coding standards, or just because it is a complex query that is going to be slow sometimes. Some queries that you will deal with may have twenty joins and hundreds of search criteria, and it can take a lot of thought to determine what is going on.  You can usually figure out the problem to almost any query by using basic knowledge of how joins and queries work, together with the help of such things as the query plan, profiler or monitoring tools.  It is not unlikely that it can take a full day’s work to understand some queries, breaking them down into smaller queries to find a very tiny problem. 
Not every time will you actually find the problem, and it is part of the process to occasionally admit that the problem is random, and everything works fine now.  Sometimes, it is necessary to realize that a problem is outside of your current knowledge, and admit temporary defeat: You can, at least, narrow down the source of the problem by looking logically at all of the possible solutions. By doing this, you can satisfy your curiosity and learn more about what the actual problem was. For example, in the joke, had you never been exposed to the concept of binary numbers, there is no way you could have known that binary - 10 = decimal - 2, but you could have logically come to the conclusion that 10 must not mean ten in the context of the joke, and at that point you are that much closer to getting the joke and at least won't feel so ignorant.
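    For anyone who would rather check the riddle in code than on paper, a tiny C# snippet confirms that "10" read as a binary number is decimal 2:

    using System;

    class BinaryJoke
    {
        static void Main()
        {
            int kindsOfPeople = Convert.ToInt32("10", 2);   // parse "10" as base 2
            Console.WriteLine(kindsOfPeople);                // prints 2
        }
    }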

    Read the article

  • The Windows Store... why did I sign up with this mess again?

    - by FransBouma
    Yesterday, Microsoft revealed that the Windows Store is now open to all developers in a wide range of countries and locations. For the people who think "wtf is the 'Windows Store'?", it's the central place where Windows 8 users will be able to find, download and purchase applications (or as we now have to say to not look like a computer illiterate: <accent style="Kentucky">aaaaappss</accent>) for Windows 8. As this is the store which is integrated into Windows 8, it's an interesting place for ISVs, as potential customers might very well look there first. This of course isn't true for all kinds of software, and developer tools in general aren't the kind of applications most users will download from the Windows store, but a presence there can't hurt. Now, this Windows Store hosts two kinds of applications: 'Metro-style' applications and 'Desktop' applications. The 'Metro-style' applications are applications created for the new 'Metro' UI which is present on Windows 8 desktop and Windows RT (the single color/big font fingerpaint-oriented UI). 'Desktop' applications are the applications we all run and use on Windows today. Our software are desktop applications. The Windows Store hosts all Metro-style applications locally in the store and handles the payment for these applications. This means you upload your application (sorry, 'app') to the store, jump through a lot of hoops, Microsoft verifies that your application is not violating a tremendous long list of rules and after everything is OK, it's published and hopefully you get customers and thus earn money. Money which Microsoft will pay you on a regular basis after customers buy your application. Desktop applications are not following this path however. Desktop applications aren't hosted by the Windows Store. Instead, the Windows Store more or less hosts a page with the application's information and where to get the goods. I.o.w.: it's nothing more than a product's Facebook page. Microsoft will simply redirect a visitor of the Windows Store to your website and the visitor will then use your site's system to purchase and download the application. This last bit of information is very important. So, this morning I started with fresh energy to register our company 'Solutions Design bv' at the Windows Store and our two applications, LLBLGen Pro and ORM Profiler. First I went to the Windows Store dashboard page. If you don't have an account, you have to log in or sign up if you don't have a live account. I signed in with my live account. After that, it greeted me with a page where I had to fill in a code which was mailed to me. My local mail server polls every several minutes for email so I had to kick it to get it immediately. I grabbed the code from the email and I was presented with a multi-step process to register myself as a company or as an individual. In red I was warned that this choice was permanent and not changeable. I chuckled: Microsoft apparently stores its data on paper, not in digital form. I chose 'company' and was presented with a lengthy form to fill out. On the form there were two strange remarks: Per company there can just be 1 (one, uno, not zero, not two or more) registered developer, and only that developer is able to upload stuff to the store. I have no idea how this works with large companies, oh the overhead nightmares... "Sorry, but John, our registered developer with the Windows Store is on holiday for 3 months, backpacking through Australia, no, he's not reachable at this point. M'yeah, sorry bud. 
Hey, did you fill in those TPS reports yesterday?" A separate Approver has to be specified, which has to be a different person than the registered developer. Apparently to Microsoft a company with just 1 person is not a company. Luckily we're with two people! *pfew*, dodged that one, otherwise I would be stuck forever: the choice I already made was not reversible! After I had filled out the form and it was all well and good and accepted by the Microsoft lackey who had to write it all down in some paper notebook ("Hey, be warned! It's a permanent choice! Written down in ink, can't be changed!"), I was presented with the question how I wanted to pay for all this. "Pay for what?" I wondered. Must be the paper they were scribbling the information on, I concluded. After all, there's a financial crisis going on! How could I forget! Silly me. "Ok fair enough". The price was 75 Euros, not the end of the world. I could only pay by credit card, so it was accepted quickly. Or so I thought. You see, Microsoft has a different idea about CC payments. In the normal world, you type in your CC number, some date, a name and a security code and that's it. But Microsoft wants to verify this even more. They want to make a verification purchase of a very small amount and are doing that with a special code in the description. You then have to type in that code in a special form in the Windows Store dashboard and after that you're verified. Of course they'll refund the small amount they pull from your card. Sounds simple, right? Well... no. The problem starts with the fact that I can't see the CC activity on some website: I have a bank issued CC card. I get the CC activity once a month on a piece of paper sent to me. The bank's online website doesn't show them. So it's possible I have to wait for this code till October 12th. One month. "So what, I'm not going to use it anyway, Desktop applications don't use the payment system", I thought. "Haha, you're so naive, dear developer!" Microsoft won't allow you to publish any applications till this verification is done. So no application publishing for a month. Wouldn't it be nice if things were, you know, digital, so things got done instantly? But of course, that lackey who scribbled everything in the Big Windows Store Registration Book isn't that quick. Can't blame him though. He's just doing his job. Now, after the payment was done, I was presented with a page which tells me Microsoft is going to use a third party company called 'Symantec', which will verify my identity again. The page explains to me that this could be done through email or phone and that they'll contact the Approver to verify my identity. "Phone?", I thought... that's a little drastic for a developer account to publish a single page of information about an external hosted software product, isn't it? On Facebook I just added a page, done. And paying you, Microsoft, took less information: you were happy to take my money before my identity was even 'verified' by this 3rd party's minions! "Double standards!", I roared. No-one cared. But it's the thought of getting it off your chest, you know. Luckily for me, everyone at Symantec was asleep when I was registering so they went for the fallback option in case phone calls were not possible: my Approver received an email. Imagine you have to explain the idiot web of security theater I was caught in to someone else who then has to reply a random person over the internet that I indeed was who I said I was. 
As she's a true sweetheart, she gave me the benefit of the doubt and assured that for now, I was who I said I was. Remember, this is for a desktop application, which is only a link to a website, some pictures and a piece of text. No file hosting, no payment processing, nothing, just a single page. Yeah, I also thought I was crazy. But we're not at the end of this quest yet. I clicked around in the confusing menus of the Windows Store dashboard and found the 'Desktop' section. I get a helpful screen with a warning in red that it can't find any certified 'apps'. True, I'm just getting started, buddy. I see a link: "Check the Windows apps you submitted for certification". Well, I haven't submitted anything, but let's see where it brings me. Oh the thrill of adventure! I click the link and I end up on this site: the hardware/desktop dashboard account registration. "Erm... but I just registered...", I mumbled to no-one in particular. Apparently for desktop registration / verification I have to register again, it tells me. But not only that, the desktop application has to be signed with a certificate. And not just some random el-cheapo certificate you can get at any mall's discount store. No, this certificate is special. It's precious. This certificate, the 'Microsoft Authenticode' Digital Certificate, is the only certificate that's acceptable, and jolly, it can be purchased from VeriSign for the price of only ... $99.-, but be quick, because this is a limited time offer! After that it's, I kid you not, $499.-. 500 dollars for a certificate to sign an executable. But, I do feel special, I got a special price. Only for me! I'm glowing. Not for long though. Here I started to wonder, what the benefit of it all was. I now again had to pay money for a shiny certificate which will add 'Solutions Design bv' to our installer as the publisher instead of 'unknown', while our customers download the file from our website. Not only that, but this was all about a Desktop application, which wasn't hosted by Microsoft. They only link to it. And make no mistake. These prices aren't single payments. Every year these have to be renewed. Like a membership of an exclusive club: you're special and privileged, but only if you cough up the dough. To give you an example how silly this all is: I added LLBLGen Pro and ORM Profiler to the Visual Studio Gallery some time ago. It's the same thing: it's a central place where one can find software which adds to / extends / works with Visual Studio. I could simply create the pages, add the information and they show up inside Visual Studio. No files are hosted at Microsoft, they're downloaded from our website. Exactly the same system. As I have to wait for the CC transcripts to arrive anyway, I can't proceed with publishing in this new shiny store. After the verification is complete I have to wait for verification of my software by Microsoft. Even Desktop applications need to be verified using a long list of rules which are mainly focused on Metro-style applications. Even while they're not hosted by Microsoft. I wonder what they'll find. "Your application wasn't approved. It violates rule 14 X sub D: it provides more value than our own competing framework". While I was writing this post, I tried to check something in the Windows Store Dashboard, to see whether I remembered it correctly. I was presented again with the question, after logging in with my live account, to enter the code that was just mailed to me. Not the previous code, a brand new one. 
Again I had to kick my mail server to pull the email to proceed. This was it. This 'experience' is so beyond miserable, I'm afraid I have to say goodbye for now to the 'Windows Store'. It's simply not worth my time. Now, about Live accounts. You might know this: Live accounts are tied to everything you do with Microsoft. So if you have an MSDN subscription, e.g. the one which costs over $5000.-, it's tied to this same Live account. But the funny thing is, you can log in to the MSDN subscriptions with your Live account using just the account ID and password. No additional code is mailed to you, even though that gives you access to all available Microsoft software, including your licenses. Why the draconian security theater with this Windows Store, when all I want is to publish some desktop applications, while on other Microsoft sites it's OK to simply sign in with your Live account: no codes, no verification and no certificates? Microsoft, there's one thing you need with this store, and that's apps. Apps, apps, apps, apps, aaaaaaaaapps. Sorry, my bad, got carried away. I just can't stand the word 'app'. This store's shelves have to be filled to the brim with goods. But instead of being welcomed into the store with open arms, I have to fight an uphill battle with an endless list of rules and bullshit to earn the privilege to publish in this shiny store. As if I have to be thrilled to be part of the exclusive club called 'Windows Store Publishers'. As if Microsoft doesn't want it to succeed. Craig Stuntz sent me a link to an old blog post of his regarding code signing and uploading to Microsoft's old mobile store from back in the WinMo5 days: http://blogs.teamb.com/craigstuntz/2006/10/11/28357/. Good read and good background info about how little things have changed over the years. I hope this helps Microsoft make things clearer and smoother and also helps ISVs decide whether to go with the Windows Store scheme or ignore it. For now, I don't see the advantage of publishing there, especially not with the nonsense rules Microsoft cooked up. Perhaps that will change in the future, who knows.

    Read the article

  • ASP.NET 3.5 GridView - row editing - dynamic binding to a DropDownList

    - by marc_s
    This is driving me crazy :-) I'm trying to get an ASP.NET 3.5 GridView to show a selected value as a string when being displayed, and to show a DropDownList to allow me to pick a value from a given list of options when being edited. Seems simple enough? My gridview looks like this (simplified): <asp:GridView ID="grvSecondaryLocations" runat="server" DataKeyNames="ID" OnInit="grvSecondaryLocations_Init" OnRowCommand="grvSecondaryLocations_RowCommand" OnRowCancelingEdit="grvSecondaryLocations_RowCancelingEdit" OnRowDeleting="grvSecondaryLocations_RowDeleting" OnRowEditing="grvSecondaryLocations_RowEditing" OnRowUpdating="grvSecondaryLocations_RowUpdating" > <Columns> <asp:TemplateField> <ItemTemplate> <asp:Label ID="lblPbxTypeCaption" runat="server" Text='<%# Eval("PBXTypeCaptionValue") %>' /> </ItemTemplate> <EditItemTemplate> <asp:DropDownList ID="ddlPBXTypeNS" runat="server" Width="200px" DataTextField="CaptionValue" DataValueField="OID" /> </EditItemTemplate> </asp:TemplateField> </asp:GridView> The grid gets displayed OK when not in editing mode - the selected PBX type shows its value in the asp:Label control. No surprise there. I load the list of values for the DropDownList into a local member called _pbxTypes in the OnLoad event of the form. I verified this - it works, the values are there. Now my challenge is: when the grid goes into editing mode for a particular row, I need to bind the list of PBX's stored in _pbxTypes. Simple enough, I thought - just grab the drop down list object in the RowEditing event and attach the list: protected void grvSecondaryLocations_RowEditing(object sender, GridViewEditEventArgs e) { grvSecondaryLocations.EditIndex = e.NewEditIndex; GridViewRow editingRow = grvSecondaryLocations.Rows[e.NewEditIndex]; DropDownList ddlPbx = (editingRow.FindControl("ddlPBXTypeNS") as DropDownList); if (ddlPbx != null) { ddlPbx.DataSource = _pbxTypes; ddlPbx.DataBind(); } .... (more stuff) } Trouble is - I never get anything back from the FindControl call - seems like the ddlPBXTypeNS doesn't exist (or can't be found). What am I missing? Must be something really stupid... but so far, all my Googling, reading up on GridView controls, and asking buddies hasn't helped. Who can spot the missing link? ;-) Marc
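
    One common explanation, offered here as a guess rather than a confirmed diagnosis: when RowEditing fires, the grid has not been re-bound yet, so the EditItemTemplate and its DropDownList do not exist and FindControl returns null. The usual workaround is to set EditIndex, re-bind the grid, and then populate the drop-down in RowDataBound (wired via OnRowDataBound in the markup). A minimal C# sketch follows; the names _pbxTypes and ddlPBXTypeNS are taken from the question, while PbxType, BindGrid and the field declarations are illustrative assumptions.

        using System;
        using System.Collections.Generic;
        using System.Web.UI.WebControls;

        public partial class LocationsPage : System.Web.UI.Page
        {
            // In a real project the designer file declares the grid; shown here so the sketch stands alone.
            protected GridView grvSecondaryLocations;

            // Hypothetical item type matching the DataTextField/DataValueField used in the markup.
            public class PbxType
            {
                public int OID { get; set; }
                public string CaptionValue { get; set; }
            }

            // Loaded in OnLoad, as described in the question.
            private List<PbxType> _pbxTypes;

            protected void grvSecondaryLocations_RowEditing(object sender, GridViewEditEventArgs e)
            {
                grvSecondaryLocations.EditIndex = e.NewEditIndex;
                BindGrid(); // re-bind so the edit row (and its EditItemTemplate controls) actually exists
            }

            protected void grvSecondaryLocations_RowDataBound(object sender, GridViewRowEventArgs e)
            {
                // Only the row currently in edit mode contains the EditItemTemplate controls.
                if (e.Row.RowType == DataControlRowType.DataRow &&
                    e.Row.RowIndex == grvSecondaryLocations.EditIndex)
                {
                    DropDownList ddlPbx = e.Row.FindControl("ddlPBXTypeNS") as DropDownList;
                    if (ddlPbx != null)
                    {
                        ddlPbx.DataSource = _pbxTypes;
                        ddlPbx.DataBind();
                    }
                }
            }

            private void BindGrid()
            {
                // Hypothetical helper: assign the grid's DataSource and call DataBind() here.
                grvSecondaryLocations.DataBind();
            }
        }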

    Read the article

  • MIPS: removing non-alphanumeric characters from a string

    - by Kron
    I'm in the process of writing a program in MIPS that will determine whether or not a user entered string is a palindrome. It has three subroutines which are under construction. Here is the main block of code, subroutines to follow with relevant info: .data Buffer: .asciiz " " # 80 bytes in Buffer intro: .asciiz "Hello, please enter a string of up to 80 characters. I will then tell you if that string was a palindrome!" .text main: li $v0, 4 # print_string call number la $a0, intro # pointer to string in memory syscall li $v0, 8 #syscall code for reading string la $a0, Buffer #save read string into buffer li $a1, 80 #string is 80 bytes long syscall li $s0, 0 #i = 0 li $t0, 80 #max for i to reach la $a0, Buffer jal stripNonAlpha li $v0, 4 # print_string call number la $a0, Buffer # pointer to string in memory syscall li $s0, 0 jal findEnd jal toUpperCase li $v0, 4 # print_string call number la $a0, Buffer # pointer to string in memory syscall Firstly, it's supposed to remove all non alpha-numeric characters from the string before hand, but when it encounters a character designated for removal, all characters after that are removed. stripNonAlpha: beq $s0, $t0, stripEnd #if i = 80 end add $t4, $s0, $a0 #address of Buffer[i] in $t4 lb $s1, 0($t4) #load value of Buffer[i] addi $s0, $s0, 1 #i = i + 1 slti $t1, $s1, 48 #if ascii code is less than 48 bne $t1, $zero, strip #remove ascii character slti $t1, $s1, 58 #if ascii code is greater than 57 #and slti $t2, $s1, 65 #if ascii code is less than 65 slt $t3, $t1, $t2 bne $t3, $zero, strip #remove ascii character slti $t1, $s1, 91 #if ascii code is greater than 90 #and slti $t2, $s1, 97 #if ascii code is less than 97 slt $t3, $t1, $t2 bne $t3, $zero, strip #remove ascii character slti $t1, $s1, 123 #if ascii character is greater than 122 beq $t1, $zero, strip #remove ascii character j stripNonAlpha #go to stripNonAlpha strip: #add $t5, $s0, $a0 #address of Buffer[i] in $t5 sb $0, 0($t4) #Buffer[i] = 0 #addi $s0, $s0, 1 #i = i + 1 j stripNonAlpha #go to stripNonAlpha stripEnd: la $a0, Buffer #save modified string into buffer jr $ra #return Secondly, it is supposed to convert all lowercase characters to uppercase. toUpperCase: beq $s0, $s2, upperEnd add $t4, $s0, $a0 lb $s1, 0($t4) addi $s1, $s1, 1 slti $t1, $s1, 97 #beq $t1, $zero, upper slti $t2, $s1, 123 slt $t3, $t1, $t2 bne $t1, $zero, upper j toUpperCase upper: add $t5, $s0, $a0 addi $t6, $t6, -32 sb $t6, 0($t5) j toUpperCase upperEnd: la $a0, Buffer jr $ra The final subroutine, which checks if the string is a palindrome isn't anywhere near complete at the moment. I'm having trouble finding the end of the string because I'm not sure what PC-SPIM uses as the carriage return character. Any help is appreciated, I have the feeling most of my problems result from something silly and stupid so feel free to point out anything, no matter how small.
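
    Not an answer to the MIPS-specific register questions, but a reference sketch in C# of the intended logic (strip non-alphanumeric characters, uppercase the rest, then compare against the reverse) can be handy to check the assembly against. Everything here is illustrative; it makes no claim about how PC-SPIM terminates its input strings.

        using System;
        using System.Linq;

        class PalindromeCheck
        {
            static void Main()
            {
                Console.Write("Enter a string: ");
                string input = Console.ReadLine() ?? string.Empty;

                // Keep only letters and digits, uppercased, mirroring stripNonAlpha + toUpperCase.
                string cleaned = new string(input
                    .Where(char.IsLetterOrDigit)
                    .Select(char.ToUpperInvariant)
                    .ToArray());

                // A palindrome reads the same forwards and backwards.
                bool isPalindrome = cleaned.SequenceEqual(cleaned.Reverse());

                Console.WriteLine(isPalindrome
                    ? "That string is a palindrome!"
                    : "That string is not a palindrome.");
            }
        }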

    Read the article

  • Long-running ASP.NET tasks

    - by John Leidegren
    I know there's a bunch of APIs out there that do this, but I also know that the hosting environment (being ASP.NET) puts restrictions on what you can reliably do in a separate thread. I could be completely wrong, so please correct me if I am; this is, however, what I think I know. A request typically times out after 120 seconds (this is configurable), but eventually the ASP.NET runtime will kill a request that's taking too long to complete. The hosting environment, typically IIS, employs process recycling and can at any point decide to recycle your app. When this happens all threads are aborted and the app restarts. I'm however not sure how aggressive it is; it would be kind of stupid to assume that it would abort a normal ongoing HTTP request, but I would expect it to abort a background thread, because it doesn't know anything about that thread's unit of work. If you had to create a programming model that could easily and reliably run a long-running task, one that might have to run for days, how would you accomplish this from within an ASP.NET application? The following are my thoughts on the issue: I've been thinking along the lines of hosting a WCF service in a Win32 service and talking to that service through WCF. This is however not very practical, because the only reason I would choose to do so is to send tasks (units of work) from several different web apps. I'd then eventually ask the service for status updates and act accordingly. My biggest concern with this is that it would NOT be a particularly great experience if I had to deploy every task to the service for it to be able to execute some instructions. There's also the issue of input: how would I feed this service with data if I had a large data set and needed to chew through it? What I typically do right now is this: SELECT TOP 10 * FROM WorkItem WITH (ROWLOCK, UPDLOCK, READPAST) WHERE WorkCompleted IS NULL It allows me to use a SQL Server database as a work queue and periodically poll the database with this query for work. If the work item completed with success, I mark it as done and proceed until there's nothing more to do. What I don't like is that I could theoretically be interrupted at any point, and if I'm in between succeeding and marking it as done, I could end up processing the same work item twice. I might be a bit paranoid and this might all be fine, but as I understand it there's no guarantee that that won't happen... I know there have been similar questions on SO before, but none really ends with a definitive answer. This is a really common thing, yet the ASP.NET hosting environment is ill-equipped to handle long-running work. Please share your thoughts.
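
    One way to address the "processed twice" worry with this kind of SQL work queue, sketched here as a suggestion rather than a definitive design: claim the row and mark it in a single atomic statement, so a crash leaves the row either unclaimed or visibly claimed-but-incomplete (which a separate timeout sweep can re-queue). The ClaimedAt column and the WorkItem schema details are assumptions layered on top of the table named in the question.

        using System;
        using System.Data;
        using System.Data.SqlClient;

        static class WorkQueue
        {
            // Claims one unprocessed work item and returns its id, or null if the queue is empty.
            // The UPDATE both selects and marks the row atomically; READPAST skips rows other
            // workers have locked, so pollers do not block each other.
            public static int? ClaimNextWorkItem(string connectionString)
            {
                const string sql = @"
                    UPDATE TOP (1) WorkItem WITH (ROWLOCK, READPAST)
                    SET    ClaimedAt = SYSUTCDATETIME()
                    OUTPUT inserted.Id
                    WHERE  WorkCompleted IS NULL
                      AND  ClaimedAt IS NULL;";

                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(sql, connection))
                {
                    connection.Open();
                    object result = command.ExecuteScalar();
                    // Assumes Id is an int identity column; adjust the cast to the real key type.
                    return (result == null || result == DBNull.Value) ? (int?)null : (int)result;
                }
            }
        }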

    Read the article

  • Detecting 'stealth' web-crawlers

    - by Jacco
    What options are there to detect web crawlers that do not want to be detected? (I know that listing detection techniques will allow the smart stealth-crawler programmer to make a better spider, but I do not think that we will ever be able to block smart stealth crawlers anyway, only the ones that make mistakes.) I'm not talking about the nice crawlers such as googlebot and Yahoo! Slurp. I consider a bot nice if it: identifies itself as a bot in the user agent string, and reads robots.txt (and obeys it). I'm talking about the bad crawlers, hiding behind common user agents, using my bandwidth and never giving me anything in return. There are some trapdoors that can be constructed, updated list (thanks Chris, gs): adding a directory only listed (marked as disallow) in the robots.txt; adding invisible links (possibly marked as rel="nofollow"?), with style="display: none;" on the link or parent container, or placed underneath another element with a higher z-index; detecting who doesn't understand CaPiTaLiSaTioN; detecting who tries to post replies but always fails the Captcha; detecting GET requests to POST-only resources; detecting the interval between requests; detecting the order of pages requested; detecting who (consistently) requests https resources over http; detecting who does not request image files (this, in combination with a list of user agents of known image-capable browsers, works surprisingly well). Some traps would be triggered by both 'good' and 'bad' bots. You could combine those with a whitelist: it triggers a trap; it requests robots.txt; it does not trigger another trap because it obeyed robots.txt. One other important thing here: please consider blind people using screen readers; give people a way to contact you, or a (non-image) Captcha to solve to continue browsing. What methods are there to automatically detect the web crawlers trying to mask themselves as normal human visitors? Update: the question is not "How do I catch every crawler?" The question is "How can I maximize the chance of detecting a crawler?" Some spiders are really good, and actually parse and understand html, xhtml, CSS, JavaScript, VBScript etc... I have no illusions: I won't be able to beat them. You would however be surprised how stupid some crawlers are, with the best example of stupidity (in my opinion) being: casting all URLs to lower case before requesting them. And then there is a whole bunch of crawlers that are just 'not good enough' to avoid the various trapdoors.
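
    As an illustration of the first trapdoor in that list (a directory disallowed in robots.txt and linked invisibly), here is a hedged ASP.NET sketch of a honeypot handler that records whoever fetches the trap URL. The path, log file location and response text are illustrative choices, not part of the original question.

        using System;
        using System.IO;
        using System.Web;

        // Map this handler to a URL such as /private-listing/trap.ashx, link to it invisibly,
        // and disallow the path in robots.txt. Polite bots never fetch it; anything that does
        // gets logged for later review or blocking.
        public class CrawlerTrapHandler : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                string line = string.Format("{0:u}\t{1}\t{2}",
                    DateTime.UtcNow,
                    context.Request.UserHostAddress,
                    context.Request.UserAgent ?? "(no user agent)");

                // Simple append-only log; a production version would need locking or a database.
                File.AppendAllText(context.Server.MapPath("~/App_Data/trap.log"),
                    line + Environment.NewLine);

                // Return an unremarkable page so the crawler does not realize it has been flagged.
                context.Response.ContentType = "text/html";
                context.Response.Write("<html><body>Nothing to see here.</body></html>");
            }

            public bool IsReusable { get { return true; } }
        }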

    Read the article

  • Team Foundation Server Setup/Access

    - by Angel Brighteyes
    What I need: a TFS 2010 setup that allows 2 application developers to access the TFS from remote locations. How it is set up: Server 2008 Standard, 2 GB RAM, 300 GB HD space; SharePoint Server 2007, using SQL Server 2005; SQL Server 2008 Standard; Team Foundation Server 2010; IIS 7. SharePoint bindings: TFS.DynAccount.Me:80; TFS:80. TFS bindings: TFS.DynAccount.Me:8080; TFS:8080. Using the DynDNS service to account for the dynamic IP address being used; this is a requirement for the moment until I can get a better ISP package. Access is via local accounts. The server is not set up on a domain, or as a domain; consequently I did not set up AD services. Problem: when logged into TFS using my credentials TFS\AdminUser through the DynDNS account TFS.DynAccount.Me, I receive the 'Red X of Death' on the Documents and Reports folder. When logged into TFS through the local peer-to-peer network using the same credentials TFS\AdminUser, I do not receive the 'Red X of Death' problem. Further troubleshooting: when users right-click 'TeamProject1' and click 'Show Project Portal', it tries to take them to http://TFS:8080 instead of http://TFS.DynAccount.Me:8080, which, after further research, I assume is because Team Foundation Server was set up with a local name of TFS instead of 'TFS.DynAccount.Me', as described in Visual Studio Magazine's 'The Red X of Death'. Users can access the Team Portal for SharePoint via http://TFS.DynAccount.Me/TeamCollection/TeamProject, so it is not like we are dead in the water or anything. However, as most employees/staff are prone to do, they have expressed a great distaste for having to do it this way and to just be patient until the current project is finished, since we are under a very strict deadline. Is there a way to set this up differently, change some settings someplace, reinstall it, point a CNAME record for our domain website to the DynAccount (e.g. TFS.OurDomain.com points to TFS.DynAccount.Me, which consequently does allow access to the http site without issues), or something? After all the time and effort I have put into this (first the cost, second the bloody install, third learning SharePoint well enough, fourth the hours and days spent on this, fifth more troubleshooting, sixth employee headaches), I really don't feel like just letting it lie where it is. I figure in my spare/off time I will keep trying to get this to work, so I really appreciate any help anyone can give me. I know this is probably something really stupidly simple that I will 'face palm' over, but at the moment the stress and frustration just have me beat. Thank you again, this community has always been a great help.

    Read the article

  • jQuery UI portlets - toggle portlets to save to a cookie (half way there!)

    - by Gareth
    Hi, I'm a bit of a jQuery n00b so please excuse me if this seems like a stupid question. I am creating a site using the jQuery UI, more specifically the sortable portlets. I have been able to store whether a portlet has been opened or closed in a cookie. This is done using the following code. The slider ID is currently where the controls are stored to turn each portlet on and off. var cookie = $.cookie("hidden"); var hidden = cookie ? cookie.split("|").getUnique() : []; var cookieExpires = 7; // cookie expires in 7 days, or set this as a date object to specify a date // Remember content that was hidden $.each( hidden, function(){ var pid = this; //parseInt(this,10); $('#' + pid).hide(); $("#slider div[name='" + pid + "']").addClass('add'); }) // Add Click functionality $("#slider div").click(function(){ $(this).toggleClass('add'); var el = $("div#" + $(this).attr('name')); el.toggle(); updateCookie(el); }); $('a.toggle').click(function(){ $(this).parents(".portlet").hide(); // *** Below line just needs to select the correct 'id' and insert as selector i.e ('#slider div#block-1') and then update cookie! *** $('#slider div').addClass('add'); }); // Update the Cookie function updateCookie(el){ var indx = el.attr('id'); var tmp = hidden.getUnique(); if (el.is(':hidden')) { // add index of widget to hidden list tmp.push(indx); } else { // remove element id from the list tmp.splice( tmp.indexOf(indx) , 1); } hidden = tmp.getUnique(); $.cookie("hidden", hidden.join('|'), { expires: cookieExpires } ); } }) // Return a unique array. Array.prototype.getUnique = function() { var o = new Object(); var i, e; for (i = 0; e = this[i]; i++) {o[e] = 1}; var a = new Array(); for (e in o) {a.push (e)}; return a; } What I would like to do is also add an [x] in the corner of each portlet to give the user another way of hiding it, but I'm currently unable to get this stored in the cookie using the code above. Can anyone give me a pointer on how I would do this? Thanks in advance! Gareth

    Read the article

  • UITableView having one Cell with multiple UITextFields (some of them in one row) scrolling crazy

    - by Allisone
    I have a UITableView (grouped). In that tableview I have several UITableViewCells, some custom with a nib, some default. One cell (custom, with a nib) has several UITextFields for address information, so one row holds both the zip code and the city. When the keyboard comes up the tableview size seems to be adjusted automatically (vs. another viewController in the app with just a scrollview, where I had to code this functionality on my own) so that I can scroll to the bottom of my tableview (and see it) even though the keyboard is up. That's good. BUT when I click on a textfield the tableview gets scrolled either up or down; I can't figure out the logic. It seems to be rather random up/down scrolling / contentOffset setting. So I have bound the Editing Did Begin events of the textfields to a function that has this code. - (IBAction)textFieldDidBeginEditing:(UITextField *)textField { CGPoint pt; CGRect rc = [textField bounds]; rc = [textField convertRect:rc toView:self.tableView]; pt = rc.origin; pt.x = 0; [self.tableView setContentOffset:pt animated:YES]; ... } This, well, seems to work most of the time, BUT it doesn't work if I click the first textfield (the view jumps so that the second row gets to the top and the first row is out of the currently visible view frame), AND it also doesn't work if I first select the zip textfield and then the city textfield (both in one row) or vice versa. If I do so, the tableview seems to jump to the (grouped tableview) top of my viewForHeaderInSection (the section with the mentioned cell and all my textfields). What is going on? Why is this happening? How do I fix this? Edit: this, on the other hand, behaves as expected (for the two textfields with the same origin.y): if (self.tableView.contentOffset.y == pt.y) { pt.y = pt.y + 1; [self.tableView setContentOffset:pt animated:YES]; }else { [self.tableView setContentOffset:pt animated:YES]; } But this is a stupid solution. I wouldn't like to keep it that way. And it also doesn't fix the wrong jumping when clicking the first textfield first.

    Read the article

  • SQLiteOpenHelper.getWriteableDatabase() null pointer exception on Android

    - by Drew Dara-Abrams
    I've had fine luck using SQLite with straight, direct SQL in Android, but this is the first time I'm wrapping a DB in a ContentProvider. I keep getting a null pointer exception when calling getWritableDatabase() or getReadableDatabase(). Is this just a stupid mistake I've made with initializations in my code or is there a bigger issue? public class DatabaseProvider extends ContentProvider { ... private DatabaseHelper databaseHelper; private SQLiteDatabase db; ... @Override public boolean onCreate() { databaseHelper = new DatabaseProvider.DatabaseHelper(getContext()); return (databaseHelper == null) ? false : true; } ... @Override public Uri insert(Uri uri, ContentValues values) { db = databaseHelper.getWritableDatabase(); // NULL POINTER EXCEPTION HERE ... } private static class DatabaseHelper extends SQLiteOpenHelper { public static final String DATABASE_NAME = "cogsurv.db"; public static final int DATABASE_VERSION = 1; public static final String[] TABLES = { "people", "travel_logs", "travel_fixes", "landmarks", "landmark_visits", "direction_distance_estimates" }; // people._id does not AUTOINCREMENT, because it's set based on server's people.id public static final String[] CREATE_TABLE_SQL = { "CREATE TABLE people (_id INTEGER PRIMARY KEY," + "server_id INTEGER," + "name VARCHAR(255)," + "email_address VARCHAR(255))", "CREATE TABLE travel_logs (_id INTEGER PRIMARY KEY AUTOINCREMENT," + "server_id INTEGER," + "person_local_id INTEGER," + "person_server_id INTEGER," + "start DATE," + "stop DATE," + "type VARCHAR(15)," + "uploaded VARCHAR(1))", "CREATE TABLE travel_fixes (_id INTEGER PRIMARY KEY AUTOINCREMENT," + "datetime DATE, " + "latitude DOUBLE, " + "longitude DOUBLE, " + "altitude DOUBLE," + "speed DOUBLE," + "accuracy DOUBLE," + "travel_mode VARCHAR(50), " + "person_local_id INTEGER," + "person_server_id INTEGER," + "travel_log_local_id INTEGER," + "travel_log_server_id INTEGER," + "uploaded VARCHAR(1))", "CREATE TABLE landmarks (_id INTEGER PRIMARY KEY AUTOINCREMENT," + "server_id INTEGER," + "name VARCHAR(150)," + "latitude DOUBLE," + "longitude DOUBLE," + "person_local_id INTEGER," + "person_server_id INTEGER," + "uploaded VARCHAR(1))", "CREATE TABLE landmark_visits (_id INTEGER PRIMARY KEY AUTOINCREMENT," + "server_id INTEGER," + "person_local_id INTEGER," + "person_server_id INTEGER," + "landmark_local_id INTEGER," + "landmark_server_id INTEGER," + "travel_log_local_id INTEGER," + "travel_log_server_id INTEGER," + "datetime DATE," + "number_of_questions_asked INTEGER," + "uploaded VARCHAR(1))", "CREATE TABLE direction_distance_estimates (_id INTEGER PRIMARY KEY AUTOINCREMENT," + "server_id INTEGER," + "person_local_id INTEGER," + "person_server_id INTEGER," + "travel_log_local_id INTEGER," + "travel_log_server_id INTEGER," + "landmark_visit_local_id INTEGER," + "landmark_visit_server_id INTEGER," + "start_landmark_local_id INTEGER," + "start_landmark_server_id INTEGER," + "target_landmark_local_id INTEGER," + "target_landmark_server_id INTEGER," + "datetime DATE," + "direction_estimate DOUBLE," + "distance_estimate DOUBLE," + "uploaded VARCHAR(1))" }; public DatabaseHelper(Context context) { super(context, DATABASE_NAME, null, DATABASE_VERSION); Log.v(Constants.TAG, "DatabaseHelper()"); } @Override public void onCreate(SQLiteDatabase db) { Log.v(Constants.TAG, "DatabaseHelper.onCreate() starting"); // create the tables int length = CREATE_TABLE_SQL.length; for (int i = 0; i < length; i++) { db.execSQL(CREATE_TABLE_SQL[i]); } Log.v(Constants.TAG, "DatabaseHelper.onCreate() 
finished"); } @Override public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) { for (String tableName : TABLES) { db.execSQL("DROP TABLE IF EXISTS" + tableName); } onCreate(db); } } } As always, thanks for the assistance! -- Not sure if this detail helps, but here's LogCat showing the exception:

    Read the article

  • html widget communicating with server

    - by Nikita Rybak
    I'm making an HTML widget for websites. Let's say it will display current stock indexes. In short, an arbitrary website owner takes a code snippet from me and includes it on his webpage http://website.com/index.html. When an arbitrary user opens http://website.com/index.html, my code sends a request to my server (provider.com), which performs the necessary operations and returns information to the user's browser. When the response has arrived, the user will see the relevant stock value on http://website.com/index.html. In index.html the service could be called like this: <script type="text/javascript" src="provider.com/service.js"> </script> <div id="target_area"></div> <script type="text/javascript"> service.show("target_area", options); </script> Now, the problem is the same-origin policy: I can't just send an ajax request from website.com to provider.com and return html to embed in the client's webpage. I see several solutions, which I list below, but none quite satisfies me. 1) iframe, plain and simple. Disadvantage: it must have fixed dimensions, plus stupid scroll bars appear in some browsers. This can be fixed with javascript, but all this browser-specific tinkering doesn't sound good to me. 2) JSONP. Problem: I can't return a whole chunk of html, only data. Then, on the browser side, I'll have to use javascript to embed the data into an html snippet placed statically in index.html. Doesn't sound nice, because the data format is not very simple and may even change later. 3) Use a hidden iframe to do the ajax requests. A bit tricky, but it sounds like the way to go. Well, those are my thoughts on the subject. Are there any better ways? BTW, I tried to check some existing widgets too, but didn't find much useful information. All domain names used in this text are fictional and any resemblance is purely coincidental :)

    Read the article

  • Using Linq to filter a ComboBox.DataSource ?

    - by Pesche Helfer
    Hi board, in another topic I've stumbled upon this very elegant solution by Darin Dimitrov to filter the DataSource of one ComboBox with the selection of another ComboBox: how to filter combobox in combobox using c# combo2.DataSource = ((IEnumerable<string>)c.DataSource) .Where(x => x == (string)combo1.SelectedValue); I would like to do a similar thing, but instead of filtering by a second combobox, I would like to filter by the text of a TextBox. (Basically, instead of choosing from a second ComboBox, the user simply enters his filter into a TextBox.) However, it turned out not to be as straightforward as I had hoped. I tried things like the following, but failed miserably: cbWohndresse.DataSource = ((IEnumerable<DataSet>)ds) .Where(x => x.Tables["Adresse"].Select("AdrLabel LIKE '%TEST%'")); cbWohndresse.DisplayMember = "Adresse.AdrLabel"; cbWohndresse.ValueMember = "Adresse.adress_id"; ds is the DataSet which I would like to use as the filtered DataSource. "Adresse" is one DataTable in this DataSet. It contains a DataColumn "AdrLabel". Now I would like to display only those "AdrLabel" values which contain the string from the user input. (Currently, %TEST% stands in for the textbox.text.) The above code fails because the lambda expression does not return a bool. But I am sure there are also other problems (which type should I use for the IEnumerable? Now it's DataSet, but Darin used string. But how could I convert a DataSet to a string?). Yes, I am about as newbieish as it gets; my experience is "void", and publicly so. So please forgive my rather stupid questions. Your help is greatly appreciated, because I can't solve this on my own (tried hard already). Thank you very much! Pesche P.S. I am only using Linq to achieve an uncomplicated filter for the ComboBox (avoiding a View). The rest is not based on Linq, but on old-style ADO.NET (ds is filled by an SqlDataAdapter), if that's of any importance.
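
    A possible direction, sketched under the assumption that the project references System.Data.DataSetExtensions (needed for AsEnumerable/CopyToDataTable): query the "Adresse" DataTable directly with LINQ to DataSet and bind the resulting table, using plain column names for DisplayMember/ValueMember. The method and control names other than those quoted from the question are illustrative.

        using System.Data;
        using System.Linq;
        using System.Windows.Forms;

        static class ComboFilter
        {
            // Filters the "Adresse" table on the text typed into txtFilter and rebinds the combo box.
            public static void ApplyAddressFilter(DataSet ds, ComboBox cbWohndresse, TextBox txtFilter)
            {
                var matches = ds.Tables["Adresse"].AsEnumerable()
                    .Where(row => row.Field<string>("AdrLabel") != null &&
                                  row.Field<string>("AdrLabel").Contains(txtFilter.Text));

                if (matches.Any())
                {
                    DataTable filtered = matches.CopyToDataTable();
                    cbWohndresse.DataSource = filtered;
                    cbWohndresse.DisplayMember = "AdrLabel";   // plain column name, not "Adresse.AdrLabel"
                    cbWohndresse.ValueMember = "adress_id";
                }
                else
                {
                    cbWohndresse.DataSource = null;            // nothing matched the filter text
                }
            }
        }

    Calling this from the TextBox's TextChanged event would then re-filter the combo box as the user types.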

    Read the article

  • Visual Web Developer 2008 Express and XHTML 1.1 (not applying)

    - by Mike Valeriano
    Hello. I may be missing something since I'm not used to working with IDEs for web development, so please understand if I'm doing something stupid. I've picked up a copy of Beginning ASP.NET 3.5 in C# and VB to try and learn this thing quickly. I'm no HTML guru, but I know something about the DTDs, and I want to use either XHTML 1.0 Strict or XHTML 1.1 while learning the ins and outs of ASP.NET. Following the book (just started, actually), I'm not having any trouble understanding the concepts, since I have a C# background, but what I don't understand is how VWD goes about applying the schema you select for validation. The book explains that the drop-down list on the toolbar is enough to set this, so that's what I did: I selected XHTML 1.1 (since there is no 1.0 Strict option, which I find really odd) and then started the project. The thing is that the code generated automatically for the Default page had the XHTML 1.0 Transitional DTD. Every other page added had the same DTD, even though XHTML 1.1 was still the selected schema in the drop-down list. I decided to test it out and there it was: inserting some text in the design view and then applying bold formatting just inserted <b> tags in the code. Changing the color does add a span tag with added CSS, though. What I want to know is: if I want strict XHTML (or 1.1), should I just code it manually? Or is there a "fix" for this problem (having a schema selected but not applied by the IDE)? Am I missing something really easy and dumb? I have no problem with coding it manually - I actually prefer doing that. The thing is that I really wanna try this WYSIWYG approach for once, since some developers swear by it. Plus I don't know yet if I'll need special tags for ASP.NET, so I think having them inserted automatically should be the best practice. I couldn't find any similar problems around (MSDN, Google, here), and as far as the book goes (up to the point I've read), this is the expected VWD behavior, which I find suspect at least. Sorry for the long text (and poor English - not my primary language). Thanks in advance for any help/tips.

    Read the article

  • Difficulty getting Saxon into XQuery mode instead of XSLT

    - by Rosarch
    I'm having difficulty getting XQuery to work. I downloaded Saxon-HE 9.2. It seems to only want to work with XSLT. When I type: java -jar saxon9he.jar I get back usage information for XSLT. When I use the command syntax for XQuery, it doesn't recognize the parameters (like -q), and gives XSLT usage information. Here are some command line interactions: >java -jar saxon9he.jar No source file name Saxon-HE 9.2.0.6J from Saxonica Usage: see http://www.saxonica.com/documentation/using-xsl/commandline.html Options: -a Use xml-stylesheet PI, not -xsl argument -c:filename Use compiled stylesheet from file -config:filename Use configuration file -cr:classname Use collection URI resolver class -dtd:on|off Validate using DTD -expand:on|off Expand defaults defined in schema/DTD -explain[:filename] Display compiled expression tree -ext:on|off Allow|Disallow external Java functions -im:modename Initial mode -ief:class;class;... List of integrated extension functions -it:template Initial template -l:on|off Line numbering for source document -m:classname Use message receiver class -now:dateTime Set currentDateTime -o:filename Output file or directory -opt:0..10 Set optimization level (0=none, 10=max) -or:classname Use OutputURIResolver class -outval:recover|fatal Handling of validation errors on result document -p:on|off Recognize URI query parameters -r:classname Use URIResolver class -repeat:N Repeat N times for performance measurement -s:filename Initial source document -sa Use schema-aware processing -strip:all|none|ignorable Strip whitespace text nodes -t Display version and timing information -T[:classname] Use TraceListener class -TJ Trace calls to external Java functions -tree:tiny|linked Select tree model -traceout:file|#null Destination for fn:trace() output -u Names are URLs not filenames -val:strict|lax Validate using schema -versionmsg:on|off Warn when using XSLT 1.0 stylesheet -warnings:silent|recover|fatal Handling of recoverable errors -x:classname Use specified SAX parser for source file -xi:on|off Expand XInclude on all documents -xmlversion:1.0|1.1 Version of XML to be handled -xsd:file;file.. Additional schema documents to be loaded -xsdversion:1.0|1.1 Version of XML Schema to be used -xsiloc:on|off Take note of xsi:schemaLocation -xsl:filename Stylesheet file -y:classname Use specified SAX parser for stylesheet --feature:value Set configuration feature (see FeatureKeys) -? Display this message param=value Set stylesheet string parameter +param=filename Set stylesheet document parameter ?param=expression Set stylesheet parameter using XPath !option=value Set serialization option >java -jar saxon9he.jar -q:"..\w3xQueryTut.xq" Unknown option -q:..\w3xQueryTut.xq Saxon-HE 9.2.0.6J from Saxonica Usage: see http://www.saxonica.com/documentation/using-xsl/commandline.html Options: -a Use xml-stylesheet PI, not -xsl argument // etc... >java net.sf.saxon.Query -q:"..\w3xQueryTut.xq" Exception in thread "main" java.lang.NoClassDefFoundError: net/sf/saxon/Query Caused by: java.lang.ClassNotFoundException: net.sf.saxon.Query // etc... Could not find the main class: net.sf.saxon.Query. Program will exit. I'm probably making some stupid mistake. Do you know what it could be?

    Read the article

  • decrypt an encrypted value ?

    - by jim
    I have an old Paradox database (I can convert it to Access 2007) which contains more than 200,000 records. This database has two columns: the first one is named "Word" and the second one is named "Mean". It is a dictionary database and my client wants to convert this old database to ASP.NET and SQL. However, we don't know what key or method was used to encrypt or encode the "Mean" column, which is in Unicode format. The software itself was written in Delphi 7 and we don't have the source code. My client only knows the credentials for logging in to the database. The problem is decoding the Mean column. What I do have is the compiled Windows application and the Paradox database. This software can decode the "Mean" column for each "Word", so the method and/or key is in its own compiled code (.exe) or one of the files in its directory. For example, we know that in the following row "Zymurgy" exactly means "???? ??? ????? ?? ???? ????, ????? ?????", since the application translates it like that. Here is what the record looks like when I open the database in Access: Word Mean Zymurgy 5OBnGguKPdDAd7L2lnvd9Lnf1mdd2zDBQRxngsCuirK5h91sVmy0kpRcue/+ql9ORmP99Mn/QZ4= Therefore we're trying to discover how the value in the Mean column is converted to "???? ??? ????? ?? ???? ????, ????? ?????". I think the "Mean" column value in the above row is encoded in Base64 string format, but decoding the Base64 string does not yet result in the expected text. The extensions for files in the win app directory are dll, CCC, DAT, exe (other than the main app file), SYS, FAM, MB, PX, TV, VAL. Any kind of help is appreciated. Additional information: guys, the creators are not so stupid as to save the values in merely encoded form; they've definitely encrypted them, so I guess we have to look for the key.
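
    A first diagnostic step, offered only as a guess: Base64-decode the Mean value and inspect the raw bytes (length, repetition, whether the length is a multiple of a typical block size) before worrying about which cipher was used. A minimal C# sketch, reusing the sample value quoted above; any conclusion drawn from it about the algorithm would be speculation.

        using System;

        class MeanColumnProbe
        {
            static void Main()
            {
                // Sample value copied from the question; the decoded bytes are presumably ciphertext,
                // so this only helps with inspecting sizes and byte patterns, not with decryption.
                string mean = "5OBnGguKPdDAd7L2lnvd9Lnf1mdd2zDBQRxngsCuirK5h91sVmy0kpRcue/+ql9ORmP99Mn/QZ4=";

                byte[] raw = Convert.FromBase64String(mean);

                Console.WriteLine("Decoded length: {0} bytes", raw.Length);
                Console.WriteLine(BitConverter.ToString(raw));

                // A length that is a multiple of 8 or 16 would hint at a block cipher with padding,
                // but that remains a guess until the key or algorithm is located in the Delphi binary.
            }
        }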

    Read the article

  • List of objects sent over WCF, but null list received?

    - by GONeale
    Hey there, I have an object containing a list of custom objects which I am returning in a response over WCF; however, on the receiving end the list is null. Yet it contains 112 objects just prior to stepping out of the service on the server. This wasn't always the case; I have seen it return a list. I've just recently upgraded it to use net.tcp bindings, but I can't confirm when I started losing the data or whether it was since the conversion from wsHttpBinding to netTcpBinding, as it moved along with about four other services. I have looked at the WCF service's messages and trace file and also the WCF client's messages and trace file: no exceptions are reported, and both message logs indicate the server is sending the List<T> and the client is receiving it - very frustrating! It's not a super-light array, but not huge either, around 100 KB. Each item has about 12 properties, and as stated, 112 items are being sent. I have tried everything I can think of on client and server, note: Client: this.binding = new NetTcpBinding(SecurityMode.None) { MaxReceivedMessageSize = int.MaxValue, ReaderQuotas = { MaxStringContentLength = int.MaxValue, MaxArrayLength = int.MaxValue } }; ... Server app.config (sorry, I have no idea if the quota settings have any bearing on net.tcp; I only just added them, similar to what I use for wsHttpBinding, to test, but the list is still null): <netTcpBinding> <binding name="SecurityByNetTcpTransportBinding" sendTimeout="00:03:00" maxReceivedMessageSize="2147483647"> <readerQuotas maxStringContentLength="2147483647" maxArrayLength="2147483647" /> <security mode="None" /> </binding> </netTcpBinding> and something else I tried in my net.tcp binding behavior: <dataContractSerializer maxItemsInObjectGraph="2147483647" /> <serviceTimeouts transactionTimeout="05:05:00" /> <serviceThrottling maxConcurrentSessions="400" maxConcurrentInstances="400" maxConcurrentCalls="400"/> I hope somebody can help; I hate 5 steps forward, 3 steps backward, which always seems to be the case with WCF :P In the interim, until I [hopefully] get a response, I will now try reducing this array just to see if it's a sizing issue. OK, it seems I have bigger problems. Because the list was the only thing I was sending, I thought it was an array issue. I am even setting an int to "25" and it's coming back as 0 - anybody? I know I must have done something obviously stupid.
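
    One frequent cause of "no exception, but members arrive as null or 0" in WCF is a serialization mismatch rather than the binding: for example, properties missing the [DataMember] attribute on a [DataContract] type, or the client using an older build of the contract assembly than the server. This is only a guess at the symptom described above; the sketch below shows how such a contract would normally be annotated so the list and the int round-trip, with illustrative type names.

        using System.Collections.Generic;
        using System.Runtime.Serialization;

        // On a type marked [DataContract], only members explicitly marked [DataMember] are
        // serialized; everything else silently arrives as null or default(T) on the other
        // side, with no exception and nothing in the trace logs.
        [DataContract]
        public class SearchResponse
        {
            [DataMember]
            public int TotalCount { get; set; }          // arrives as 0 if the attribute is missing

            [DataMember]
            public List<ResultItem> Items { get; set; }  // arrives as null if the attribute is missing
        }

        [DataContract]
        public class ResultItem
        {
            [DataMember]
            public string Name { get; set; }

            [DataMember]
            public string Description { get; set; }
        }

    It is also worth confirming that both endpoints reference the same version of the assembly containing these contracts, since a stale client copy produces exactly this kind of silent data loss.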

    Read the article

< Previous Page | 61 62 63 64 65 66 67 68 69 70 71  | Next Page >