
  • Anti-Forgery Request in ASP.NET MVC and AJAX

    - by Dixin
Background

To secure websites from cross-site request forgery (CSRF, or XSRF) attacks, ASP.NET MVC provides an excellent mechanism: the server prints tokens to a cookie and inside the form; when the form is submitted to the server, the token in the cookie and the token inside the form are both sent with the HTTP request; the server then validates the tokens. To print tokens to the browser, just invoke HtmlHelper.AntiForgeryToken():

```aspx
<% using (Html.BeginForm()) { %>
    <%: this.Html.AntiForgeryToken(Constants.AntiForgeryTokenSalt) %>
    <%-- Other fields. --%>
    <input type="submit" value="Submit" />
<% } %>
```

which writes the token to the form:

```html
<form action="..." method="post">
    <input name="__RequestVerificationToken" type="hidden" value="J56khgCvbE3bVcsCSZkNVuH9Cclm9SSIT/ywruFsXEgmV8CL2eW5C/gGsQUf/YuP" />
    <!-- Other fields. -->
    <input type="submit" value="Submit" />
</form>
```

and to the cookie:

__RequestVerificationToken_Lw__=J56khgCvbE3bVcsCSZkNVuH9Cclm9SSIT/ywruFsXEgmV8CL2eW5C/gGsQUf/YuP

When the above form is submitted, both tokens are sent to the server. The [ValidateAntiForgeryToken] attribute is used to specify the controllers or actions that validate them:

```csharp
[HttpPost]
[ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)]
public ActionResult Action(/* ... */)
{
    // ...
}
```

This is very productive for form scenarios. But recently, when resolving security vulnerabilities for Web products, I encountered two problems:

- Ideally [ValidateAntiForgeryToken] would be added once per controller, but actually I have to add it to each POST action, which is a little crazy;
- after anti-forgery validation is turned on server side, AJAX POST requests consistently fail.

Specify validation on the controller (not on each action)

Problem: Usually a controller contains actions for both HTTP GET and HTTP POST requests, and usually validation is expected only for the HTTP POST requests. So if [ValidateAntiForgeryToken] is declared on the controller, every HTTP GET request fails validation:

```csharp
[ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)]
public class SomeController : Controller
{
    [HttpGet]
    public ActionResult Index() // Index page cannot work at all.
    {
        // ...
    }

    [HttpPost]
    public ActionResult PostAction1(/* ... */)
    {
        // ...
    }

    [HttpPost]
    public ActionResult PostAction2(/* ... */)
    {
        // ...
    }

    // ...
}
```

If a user sends an HTTP GET request from a link like http://Site/Some/Index, validation definitely fails, because no token is provided. So the result is that the [ValidateAntiForgeryToken] attribute must be distributed to each HTTP POST action in the application:

```csharp
public class SomeController : Controller
{
    [HttpGet]
    public ActionResult Index() // Works.
    {
        // ...
    }

    [HttpPost]
    [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)]
    public ActionResult PostAction1(/* ... */)
    {
        // ...
    }

    [HttpPost]
    [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)]
    public ActionResult PostAction2(/* ... */)
    {
        // ...
    }

    // ...
}
```

Solution: To avoid a large number of [ValidateAntiForgeryToken] attributes (one attribute for each HTTP POST action), I created a wrapper class around ValidateAntiForgeryTokenAttribute, where the HTTP verbs to validate can be specified:

```csharp
[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = false, Inherited = true)]
public class ValidateAntiForgeryTokenWrapperAttribute : FilterAttribute, IAuthorizationFilter
{
    private readonly ValidateAntiForgeryTokenAttribute _validator;
    private readonly AcceptVerbsAttribute _verbs;

    public ValidateAntiForgeryTokenWrapperAttribute(HttpVerbs verbs) : this(verbs, null)
    {
    }

    public ValidateAntiForgeryTokenWrapperAttribute(HttpVerbs verbs, string salt)
    {
        this._verbs = new AcceptVerbsAttribute(verbs);
        this._validator = new ValidateAntiForgeryTokenAttribute() { Salt = salt };
    }

    public void OnAuthorization(AuthorizationContext filterContext)
    {
        string httpMethodOverride = filterContext.HttpContext.Request.GetHttpMethodOverride();
        if (this._verbs.Verbs.Contains(httpMethodOverride, StringComparer.OrdinalIgnoreCase))
        {
            this._validator.OnAuthorization(filterContext);
        }
    }
}
```

When this attribute is declared on a controller, only HTTP requests with the specified verbs are validated:

```csharp
[ValidateAntiForgeryTokenWrapper(HttpVerbs.Post, Constants.AntiForgeryTokenSalt)]
public class SomeController : Controller
{
    // Actions for HTTP GET requests are not affected.
    // Only HTTP POST requests are validated.
}
```

Now one single attribute on the controller turns on validation for all HTTP POST actions.

Submit the token via AJAX

Problem: In AJAX scenarios the request is sent by JavaScript instead of a form:

```javascript
$.post(url, {
    productName: "Tofu",
    categoryId: 1
    // Token is not posted.
}, callback);
```

This kind of AJAX POST request will always be invalid, because the server-side code cannot see the token in the posted data.

Solution: The token must be printed to the browser and then submitted back to the server. So first of all, HtmlHelper.AntiForgeryToken() must be called in the page from which the AJAX POST will be sent. Then jQuery must find the printed token in the page and post it:

```javascript
$.post(url, {
    productName: "Tofu",
    categoryId: 1,
    __RequestVerificationToken: getToken() // Token is posted.
}, callback);
```

To be reusable, this can be encapsulated in a tiny jQuery plugin:

```javascript
(function ($) {
    $.getAntiForgeryToken = function () {
        // HtmlHelper.AntiForgeryToken() must be invoked to print the token.
        return $("input[type='hidden'][name='__RequestVerificationToken']").val();
    };

    var addToken = function (data) {
        // Converts data if not already a string.
        if (data && typeof data !== "string") {
            data = $.param(data);
        }
        data = data ? data + "&" : "";
        return data + "__RequestVerificationToken=" + encodeURIComponent($.getAntiForgeryToken());
    };

    $.postAntiForgery = function (url, data, callback, type) {
        return $.post(url, addToken(data), callback, type);
    };

    $.ajaxAntiForgery = function (settings) {
        settings.data = addToken(settings.data);
        return $.ajax(settings);
    };
})(jQuery);
```

Then in the application just replace each $.post() invocation with $.postAntiForgery(), and each $.ajax() with $.ajaxAntiForgery():

```javascript
$.postAntiForgery(url, {
    productName: "Tofu",
    categoryId: 1
}, callback); // Token is posted.
```

This solution looks hard-coded and stupid. If you have a more elegant solution, please do tell me.
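As a footnote: if this were running on ASP.NET MVC 3 or later, the wrapper above could be registered once as a global filter instead of once per controller. A sketch, not tested against the MVC 2-era code in this post (GlobalFilterCollection is an MVC 3+ API):

```csharp
using System.Web.Mvc;

public class MvcApplication : System.Web.HttpApplication
{
    public static void RegisterGlobalFilters(GlobalFilterCollection filters)
    {
        // One registration covers every POST action in every controller;
        // the wrapper still skips non-POST verbs, as shown above.
        filters.Add(new ValidateAntiForgeryTokenWrapperAttribute(
            HttpVerbs.Post, Constants.AntiForgeryTokenSalt));
    }

    protected void Application_Start()
    {
        RegisterGlobalFilters(GlobalFilters.Filters);
    }
}
```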

  • The ugly evolution of running a background operation in the context of an ASP.NET app

    - by Jeff
If you’re one of the two people who has followed my blog for many years, you know that I’ve been going at POP Forums for almost 15 years now. Publishing it as an open source app has been a big help because it helps me understand how people want to use it, and having it translated to six languages is pretty sweet. Despite this warm and fuzzy group hug, there has been an ugly hack hiding in there for years.

One of the things we find ourselves wanting to do is hide some kind of regular process inside of an ASP.NET application that runs periodically. The motivation for this has always been that a lot of people simply don’t have a choice, because they’re running the app on shared hosting, or don’t otherwise have access to a box that can run some kind of regular background service. In POP Forums, I “solved” this problem years ago by hiding some static timers in an HttpModule. Truthfully, this works well as long as you don’t run multiple instances of the app, which in the cloud world is always a possibility. With the arrival of WebJobs in Azure, I’m going to solve this problem. This post isn’t about that.

The other little hacky problem that I “solved” was spawning a background thread to queue emails to subscribed users of the forum. This evolved quite a bit over the years, starting with a long running page to mail users in real-time, when I had only a few hundred. By the time it got into the thousands, or tens of thousands, I needed a better way. What I did was launch a new thread that read all of the user data in, then wrote a queued email to the database (as in, the entire body of the email, every time), with the properly formatted opt-out link. It was super inefficient, but it worked. Then I moved my biggest site using it, CoasterBuzz, to an Azure Website, and it stopped working.

So let’s start with the first stupid thing I was doing. The new thread was simply created with delegate code inline. As best I can tell, Azure Websites are more aggressive about garbage collection, because that thread didn’t queue even one message. When the calling server response went out of scope, so went the magic background thread. Duh, all I had to do was move the thread to a private static variable in the class. That’s the way I was able to keep stuff running from the HttpModule. (And yes, I know this is still prone to failure, particularly if the app recycles. For as infrequently as it’s used, I have not, however, experienced this.)

It was still failing, but this time I wasn’t sure why. It would queue a few dozen messages, then die. Running in Azure, I had to turn on the application logging and FTP in to see what was going on. That led me to a helper method I was using as a delegate to build the unsubscribe links. The idea here is that I didn’t want yet another config entry to describe the base URL, appended with the right path that would match the routing table. No, I wanted the app to figure it out for you, so I came up with this little thing:

```csharp
public static string FullUrlHelper(this Controller controller, string actionName, string controllerName, object routeValues = null)
{
    var helper = new UrlHelper(controller.Request.RequestContext);
    var requestUrl = controller.Request.Url;
    if (requestUrl == null)
        return String.Empty;
    var url = requestUrl.Scheme + "://";
    url += requestUrl.Host;
    url += (requestUrl.Port != 80 ? ":" + requestUrl.Port : "");
    url += helper.Action(actionName, controllerName, routeValues);
    return url;
}
```

And yes, that should have been done with a string builder.

This is useful for sending out the email verification messages, too. As clever as I thought I was with this, I was using a delegate in the admin controller to format these unsubscribe links for tens of thousands of users. I passed that delegate into a service class that did the email work:

```csharp
Func<User, string> unsubscribeLinkGenerator =
    user => this.FullUrlHelper("Unsubscribe", AccountController.Name,
        new { id = user.UserID, key = _profileService.GetUnsubscribeHash(user) });
_mailingListService.MailUsers(subject, body, htmlBody, unsubscribeLinkGenerator);
```

Cool, right? Actually, not so much. If you look back at the helper, this delegate depends on the controller context to learn the routing and format for the URL. As you might have guessed, those things were turning null after a few dozen formatted links, once the original request to the admin controller went away. That this wasn’t already happening on my dedicated server is surprising, but again, I understand why the Azure environment might be eager to reclaim a thread after servicing the request.

It’s already inefficient that I’m building the entire email for every user, but going back to check the routing table for the right link every time isn’t a win either. I put together a little hack to look up one generic URL, and use that as the basis for a string format. If you’re wondering why I didn’t just use the curly braces up front, it’s because they get URL encoded:

```csharp
var baseString = this.FullUrlHelper("Unsubscribe", AccountController.Name,
    new { id = "--id--", key = "--key--" });
baseString = baseString.Replace("--id--", "{0}").Replace("--key--", "{1}");
Func<User, string> unsubscribeLinkGenerator =
    user => String.Format(baseString, user.UserID, _profileService.GetUnsubscribeHash(user));
_mailingListService.MailUsers(subject, body, htmlBody, unsubscribeLinkGenerator);
```

And wouldn’t you know it, the new solution works just fine. It’s still kind of hacky and inefficient, but it will work until this somehow breaks too.
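For reference, the "move the thread to a private static variable" fix described above boils down to something like this. This is a minimal sketch with hypothetical names, not the actual POP Forums code:

```csharp
using System;
using System.Threading;

public class MailingListService
{
    // Holding the worker in a private static field is the fix described
    // above: an inline local reference disappeared with the request.
    private static Thread _mailWorker;

    public void MailUsers(string subject, string body, string htmlBody,
        Func<User, string> unsubscribeLinkGenerator)
    {
        _mailWorker = new Thread(() =>
        {
            // Queue the emails here. The delegate must not touch
            // request-scoped state (HttpContext, ControllerContext, routing),
            // which is exactly why the String.Format approach below works.
        });
        _mailWorker.IsBackground = true;
        _mailWorker.Start();
    }
}
```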

  • MSSQL: Copying data from one database to another

    - by DigiMortal
I have a database with data imported from another server using the import and export wizard of SQL Server Management Studio. There is also an empty database with the same tables, but that one also has primary keys, foreign keys and indexes. How do I get the data from the first database into the other? Here is the description of my crusade. And believe me – it is not a nice one.

Bugs in the import and export wizard

There are some awful bugs in the import and export wizard that make data imports and exports possible only in a very limited manner:

- the wizard is not able to analyze foreign keys,
- the wizard wants to create tables always, whatever you say in the settings.

The result is a faulty and useless package. Now let’s go step by step and make things work in our scenario.

Databases

There are two databases. Let’s name them like this:

- PLAIN – contains data imported from the remote server (no indexes, no keys, no nothing, just plain dumb data)
- CORRECT – empty database with the same structure as the remote database (indexes, keys and everything else, but no data)

Our goal is to get the data from PLAIN to CORRECT.

1. Create the import and export package

At this point we will create the faulty SSIS package using SQL Server Management Studio. Run the import and export wizard and let it create an SSIS package that reads data from CORRECT and writes it to, let’s say, CORRECT-2. Make sure you enable identity insert. Make sure there are no views selected. Make sure you don’t let the package create tables (you can miss this step because it wants to create tables anyway). Save the package to SSIS.

2. Modify the import and export package

Now let’s clean up the package and remove all the faulty crap.

- Connect SQL Server Management Studio to the SSIS instance. Select the package you just saved and export it to your hard disc.
- Run Business Intelligence Studio. Create a new SSIS project (DON’T MISS THIS STEP). Add the package from disc as an existing item to the project and open it.
- Move to the Control Flow page and do one of the following: remove all preparation SQL tasks and connect the Data Flow tasks, or modify all preparation SQL tasks so the existence of tables is checked before a table is created (yes, you have to do it manually).
- Add a new Execute SQL task as the first task in the control flow: open the task properties, assign the destination connection as the connection to use, and insert the following SQL as the command:

```sql
EXEC sp_MSForEachTable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL'
GO

EXEC sp_MSForEachTable 'DELETE FROM ?'
GO
```

- Add a new Execute SQL task as the last task in the control flow: open the task properties, assign the destination connection as the connection to use, and insert the following SQL as the command:

```sql
EXEC sp_MSForEachTable 'ALTER TABLE ? CHECK CONSTRAINT ALL'
GO
```

- Now connect the first Execute SQL task with the first Data Flow task, and the last Data Flow task with the second Execute SQL task.
- Move to the Package Explorer tab and change the connections under the Connection Managers folder: make the source connection use database PLAIN and the destination connection use database CORRECT.
- Save the package and rebuild the project. Update the package using SQL Server Management Studio. Some hints: make sure you take the package from the solution folder, because it is saved there now; don’t overwrite the existing package – use a numeric suffix and let Management Studio create a new version of the package.

Now you are done with your package. Run it to test it and clean out all the errors you find.
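If you want to smoke-test the two Execute SQL task commands outside of the package first, here is a minimal C# sketch that runs the same sequence. The connection string is a placeholder for your CORRECT database:

```csharp
using System.Data.SqlClient;

class ConstraintToggleDemo
{
    static void Main()
    {
        // Hypothetical connection string pointing at the CORRECT database.
        const string connectionString =
            "Data Source=.;Initial Catalog=CORRECT;Integrated Security=True";

        string[] commands =
        {
            "EXEC sp_MSForEachTable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL'",
            "EXEC sp_MSForEachTable 'DELETE FROM ?'",
            // ... the data copy itself happens between these two steps ...
            "EXEC sp_MSForEachTable 'ALTER TABLE ? CHECK CONSTRAINT ALL'"
        };

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            foreach (string sql in commands)
            {
                using (var command = new SqlCommand(sql, connection))
                {
                    command.ExecuteNonQuery();
                }
            }
        }
    }
}
```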
TRUNCATE vs DELETE

You can see that I used DELETE FROM instead of TRUNCATE. Why? Because TRUNCATE has some nasty limits (taken from MSDN): “You cannot use TRUNCATE TABLE on a table referenced by a FOREIGN KEY constraint; instead, use DELETE statement without a WHERE clause. Because TRUNCATE TABLE is not logged, it cannot activate a trigger. TRUNCATE TABLE may not be used on tables participating in an indexed view.” As I am not sure what tables you have and how they are used, I have provided here the solution that should work for all scenarios. If you need better performance, then in some cases you can use TRUNCATE TABLE instead of DELETE.

Conclusion

My conclusion is bitter this time, although I am a very positive guy. It is A.D. 2010 and we still have to write stupid hacks for simple things. Simple tools that existed before are long gone, and we have to live with mysterious bloatware that is our only choice when using the default tools. If you look at the length of this posting and the number of steps I had to take for one easy thing, you should treat it as a signal that something has gone wrong in recent years. Although I got my job done, I would still be happier if the out-of-the-box tools become more intelligent one day.

References

- T-SQL Trick for Deleting All Data in Your Database (Mauro Cardarelli)
- TRUNCATE TABLE (MSDN Library)
- Error Handling in SQL 2000 – a Background (Erland Sommarskog)
- Disable/Enable Foreign Key and Check constraints in SQL Server (Decipher)

  • AppHarbor - Azure Done Right AKA Heroku for .NET

    - by Robz / Fervent Coder
Easy and instant deployments and instant scale for .NET? A while back a few of us were looking at Ruby Gems as the answer to package management for .NET. The gems platform supported the concept of DLLs as packages, although some changes would have needed to happen for long-term use by the entire community. From that we formed a partnership with some folks at Microsoft to make v2 into something that would meet wider adoption across the community, which people now call NuGet. So now we have the concept of package management. What comes next?

Heroku

Instant deployments and instant scaling. A stupid-simple API. This is Heroku. It doesn’t sound like much, but when you think of how fast you can go from an idea to having someone else tinker with it, you can start to see its power. In literally seconds you can be looking at your Rails application deployed and online. Then when you are ready to scale, you can do that. This is power. Some may call this “cloud computing” or PaaS (Platform as a Service). I first ran into Heroku back in July when I met Nick of RubyGems.org. At the time there was no alternative in the .NET-o-sphere. I don’t count Windows Azure, mostly because it is not simple and I don’t believe there is a free version. Heroku itself would not lend itself well to .NET due to the nature of platforms and each language’s specific needs (solution stack). So I tucked the idea in the back of my head and moved on.

AppHarbor Enters The Scene

I’m not sure when I first heard about AppHarbor as a possible .NET version of Heroku. It may have been in November, but I didn’t actually try it until January. I was instantly hooked. AppHarbor is awesome! It still has a ways to go to be considered Heroku for .NET, but it already has a growing community. I created a video series (at the bottom of this post) that really highlights how fast you can get a product onto the web and really shows the power and simplicity of AppHarbor. Deploying is as simple as a git/hg push to AppHarbor. From there they build your code, run any unit tests you have, and deploy it if everything succeeds. The dashboard is a simple and elegant UI for getting things done. The folks at AppHarbor graciously gave me a limited number of invites to hand out. If you are itching to try AppHarbor then navigate to: https://appharbor.com/account/new?inviteCode=ferventcoder. After playing with it, send feedback if you want more features. Go vote up two features I want that will make it more like Heroku.

Disclaimer: I am in no way affiliated with AppHarbor and have not received any funds or favors from anyone at AppHarbor. I just think it is awesome and I want others to know about it.

From Zero To Deployed in 15 Minutes (Or Less)

Now I have a challenge for you. I created a video series showing how fast I could go from nothing to a deployed application. It could have been from zero to deployed in less than 5 minutes, but I wanted to show you the tools a little more and give you an opportunity to beat my time. And that’s the challenge: beat my time and show it in a video response. The video series is below (at least one of the videos has to be watched on YouTube). The person with the best time by March 15th @ 11:59PM CST will receive a prize.
Ground rules:

- .NET Application with a valid database connection
- Start from zero
- Deployed with AppHarbor or an alternative
- A timer displayed in the video that runs during the entire process
- Video response published on YouTube or an acceptable alternative
- Video(s) must be published by March 15th at 11:59PM CST. Either post the link here as a comment or on YouTube as a response (also by 11:59PM CST March 15th)

From Zero To Deployed In 15 Minutes (Or Less) Part 1
From Zero To Deployed In 15 Minutes (Or Less) Part 2
From Zero To Deployed In 15 Minutes (Or Less) Part 3

  • UppercuT v1.0 and 1.1 – Linux (Mono), Multi-targeting, SemVer, Nitriq and Obfuscation, oh my!

    - by Robz / Fervent Coder
Recently UppercuT (UC) quietly released version 1 (in August). I’m pretty happy with where we are, although I think it’s a few months later than I originally planned. I’m glad I held it back; it gave me some more time to think about some things a little more, and also the opportunity to receive a patch for running builds with UC on Linux. We also released v1.1 very recently (December).

UppercuT v1

Builds On Linux

Perhaps the most significant change to UC going v1 is that it now supports builds on Linux using Mono! This is thanks mostly to Svein Ackenhausen for the patches and for working with me on getting it all working while not breaking the Windows builds! This means you can use Mono on Windows or Linux. Notice the shell files to execute with Linux that come as part of UC now.

Multi-Targeting

Perhaps one of the hardest things to do that requires an automated build is multi-targeting. At v1 this is early, and possibly prone to some issues, but available. We believe in making everything stupid simple, so it’s as simple as adding a comma to the microsoft.framework property, i.e. “net-3.5, net-4.0”, to suddenly produce both framework builds when you build (if you meet each framework’s requirements). At this time you have to let UC override the build location (as it does by default) or this will not work.

Semantic Versioning

By now many of you have been using UppercuT for a while and have watched how we have done versioning. Many of you who use git already know we put the revision hash in the informational/product version as the last octet. At v1, UppercuT has adopted the semantic versioning scheme. What does that mean? This is a short read, but a good one: http://SemVer.org. SemVer (Semantic Versioning) is really using versioning for what it was meant for. You have three octets: Major.Minor.Patch, as in 1.1.0. UC will use three different versioning concepts: one for the assembly version, one for the file version, and one for the product version.

- All versions - The first three octets of the version are owned by SemVer: Major.Minor.Patch, i.e.: 1.1.0
- Assembly version - The assembly version follows SemVer most closely. The last digit is always 0: Major.Minor.Patch.0, i.e.: 1.1.0.0
- File version - The file version occupies the build number as the last digit: Major.Minor.Patch.Build, i.e.: 1.1.0.2650
- Product/informational version - The last octet of your product/informational version is the source control revision/hash: Major.Minor.Patch.RevisionOrHash, i.e. (TFS/SVN): 1.1.0.235, i.e. (Git/HG): 1.1.0.a45ace4346adef0

SemVer is not on by default; the passive versioning scheme is still in effect. Notice that version.use_semanticversioning has been added to the UppercuT.config file (and version.patch in support of the third octet). A concrete sketch of the three versions appears at the end of this entry.

Gems Support

Gems support was added at v1. This will probably be deprecated at some point once there is an announced sunset for Nu v1. Application gems may keep it around, since there is no alternative for that yet (CoApp would be a possible replacement).

Nitriq Support

Nitriq is a code analysis tool like NDepend. It’s built by Mr. Jon von Gillern. It uses the LINQ query language, so you can use a familiar idiom when analyzing your code base. It’s a pretty awesome tool that has a free version for those looking to do code analysis! To use Nitriq with UC, you are going to need the console edition. To take advantage of Nitriq, you just need to update the location of Nitriq in the config. Then add the Nitriq project files at the root of your source.
Please refer to the Nitriq documentation on how these are created.

UppercuT v1.1

Obfuscation

One thing I started looking into was an easy way to obfuscate my code. I came across EazFuscator, which is both free and awesome, plus the GUI for it is super simple to use. How do you make obfuscation even easier? Make it a convention and a configurable property in the UC config file! And the code gets obfuscated!

Closing

Definitely get out and look at the new release. It contains lots of chocolaty (sp?) goodness. And remember, the upgrade path is almost as simple as drag and drop!
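To make the versioning scheme above concrete, here is roughly what UC's three versions would look like if you wrote them into an AssemblyInfo.cs by hand, using the example values from the SemVer section:

```csharp
using System.Reflection;

// Assembly version: SemVer octets only, last digit always 0.
[assembly: AssemblyVersion("1.1.0.0")]
// File version: the build number occupies the last octet.
[assembly: AssemblyFileVersion("1.1.0.2650")]
// Product/informational version: source control revision or hash last.
[assembly: AssemblyInformationalVersion("1.1.0.a45ace4346adef0")]
```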

  • My error with upgrading 4.0 to 4.2 - What NOT to do...

    - by Steve Tunstall
Last week, I was helping a client upgrade from the 2011.1.4.0 code to the newest 2011.1.4.2 code. We downloaded the 4.2 update from MOS, uploaded and unpacked it on both controllers, and upgraded one of the controllers in the cluster with no issues at all. As this was a brand-new system with no networking or pools made on it yet, there were no resources to fail back and forth between the controllers. Each controller had its own, private, management interface (igb0 and igb1) and that's it. So we took controller 1 as the passive controller and upgraded it first. The first controller came back up with no issues and was now on the 4.2 code. Great. We then did a takeover on controller 1, making it the active head (although there were no resources for it to take), and then proceeded to upgrade controller 2.

Upon upgrading the second controller, we ran the health check with no issues. We then ran the update and it ran and rebooted normally. However, something strange then happened. It took longer than normal to come back up, and when it did, we got the "cluster controllers on different code" error message that one gets when the two controllers of a cluster are running different code. But we had just upgraded the second controller to 4.2, so they should have been the same, right??? Going into the Maintenance-->System screen of controller 2, we saw something very strange. The "current version" was still on 4.0, and the 4.2 code was there, but it was in the "previous" state with the rollback icon, as if it were the OLDER code and not the newer code. I have never seen this happen before. I would have thought it was a bad 4.2 code file, but it worked just fine with controller 1, so I don't think that was it. Other than the fact the code did not update, there was nothing else going on with this system. It had no yellow lights, no errors in the Problems section, and no errors in any of the logs. It was just out of the box a few hours ago, and didn't even have a storage pool yet.

So.... We deleted the 4.2 code, uploaded it from scratch, ran the health check, and ran the upgrade again. Once again, it seemed to go great, rebooted, and came back up with the same issue, where it came up to 4.0 instead of 4.2.

HERE IS WHERE I MADE A BIG MISTAKE.... I SHOULD have instantly called support and opened a Sev 2 ticket. They could have done a shared shell and gotten the correct Fishworks engineer to look at the files and the code and determine what file was messed up and fix it. The system was up and working just fine; it was just on an older code version, not really a huge problem at all. Instead, I went ahead and clicked the "Rollback" icon, thinking that the system would roll back to the 4.2 code.

Ouch... What happened was that the system said, "Fine, I will delete the 4.0 code and boot to your 4.2 code"... which was stupid on my part, because something was wrong with the 4.2 code file here and the 4.0 was just fine. So now the system could not boot at all, the 4.0 code was completely missing from the system, and even a high-level Fishworks engineer could not help us. I had messed it up good. We could only get to the ILOM, and I had to re-image the system from scratch using a hard-to-get-and-use FishStick USB drive. These are tightly controlled and difficult to get, almost always handcuffed to an engineer who will drive out to re-image a system. This took another day of my client's time.

So....
If you see a "previous version" of your system code which is actually a version higher than the current version... DO NOT ROLL IT BACK.... It did not upgrade for a very good reason. In my case, after the system was re-imaged to a code level just 3 back, we once again tried the same 4.2 code update and it worked perfectly the first time and is now great and stable.  Lesson learned.  By the way, our buddy Ryan Matthews wanted to point out the best practice and supported way of performing an upgrade of an active/active ZFSSA, where both controllers are doing some of the work. These steps would not have helpped me for the above issue, but it's important to follow the correct proceedure when doing an upgrade. 1) Upload software to both controllers and wait for it to unpack 2) On controller "A" navigate to configuration/cluster and click "takeover" 3) Wait for controller "B" to finish restarting, then login to it, navigate to maintenance/system, and roll forward to the new software. 4) Wait for controller "B" to apply the update and finish rebooting 5) Login to controller "B", navigate to configuration/cluster and click "takeover" 6) Wait for controller "A" to finish restarting, then login to it, navigate to maintenance/system, and roll forward to the new software. 7) Wait for controller "A" to apply the update and finish rebooting 8) Login to controller "B", navigate to configuration/cluster and click "failback"

  • Rebuilding CoasterBuzz, Part II: Hot data objects

    - by Jeff
    This is the second post, originally from my personal blog, in a series about rebuilding one of my Web sites, which has been around for 12 years. More: Part I: Evolution, and death to WCF After the rush to get moving on stuff, I temporarily lost interest. I went almost two weeks without touching the project, in part because the next thing on my backlog was doing up a bunch of administrative pages. So boring. Unfortunately, because most of the site's content is user-generated, you need some facilities for editing data. CoasterBuzz has a database full of amusement parks and roller coasters. The entities enjoy the relationships that you would expect, though they're further defined by "instances" of a coaster, to define one that has moved between parks as one, with different names and operational dates. And of course, there are pictures and news items, too. It's not horribly complex, except when you have to account for a name change and display just the newest name. In all previous versions, data access was straight SQL. As so much of the old code was rooted in 2003, with some changes in 2008, there wasn't much in the way of ORM frameworks going on then. Let me rephrase that, I mostly wasn't interested in ORM's. Since that time, I used a little LINQ to SQL in some projects, and a whole bunch of nHibernate while at Microsoft. Through all of that experience, I have to admit that these frameworks are often a bigger pain in the ass than not. They're great for basic crud operations, but when you start having all kinds of exotic relationships, they get difficult, and generate all kinds of weird SQL under the covers. The black box can quickly turn into a black hole. Sometimes you end up having to build all kinds of new expertise to do things "right" with a framework. Still, despite my reservations, I used the newer version of Entity Framework, with the "code first" modeling, in a science project and I really liked it. Since it's just a right-click away with NuGet, I figured I'd give it a shot here. My initial effort was spent defining the context class, which requires a bit of work because I deviate quite a bit from the conventions that EF uses, starting with table names. Then throw some partial querying of certain tables (where you'll find image data), and you're splitting tables across several objects (navigation properties). I won't go into the details, because these are all things that are well documented around the Internet, but there was a minor learning curve there. The basics of reading data using EF are fantastic. For example, a roller coaster object has a park associated with it, as well as a number of instances (if it was ever relocated), and there also might be a big banner image for it. This is stupid easy to use because it takes one line of code in your repository class, and by the time you pass it to the view, you have a rich object graph that has everything you need to display stuff. Likewise, editing simple data is also, well, simple. For this goodness, thank the ASP.NET MVC framework. The UpdateModel() method on the controllers is very elegant. Remember the old days of assigning all kinds of properties to objects in your Webforms code-behind? What a time consuming mess that used to be. Even if you're not using an ORM tool, having hydrated objects come off the wire is such a time saver. Not everything is easy, though. 
When you have to persist a complex graph of objects, particularly if they were composed in the user interface with all kinds of AJAX elements and list boxes, it's not just a simple matter of submitting the form. There were a few instances where I ended up going back to "old-fashioned" SQL just in the interest of time. It's not that I couldn't do what I needed with EF, it's just that the efficiency, both my own and that of the generated SQL, wasn't good. Since EF context objects expose a database connection object, you can use that to do the old school ADO.NET stuff you've done for a decade. Using various extension methods from POP Forums' data project, it was a breeze. You just have to stick to your decision, in this case. When you start messing with SQL directly, you can't go back in the same code to messing with entities because EF doesn't know what you're changing. Not really a big deal. There are a number of take-aways from using EF. The first is that you write a lot less code, which has always been a desired outcome of ORM's. The other lesson, and I particularly learned this the hard way working on the MSDN forums back in the day, is that trying to retrofit an ORM framework into an existing schema isn't fun at all. The CoasterBuzz database isn't bad, but there are design decisions I'd make differently if I were starting from scratch. Now that I have some of this stuff done, I feel like I can start to move on to the more interesting things on the backlog. There's a lot to do, but at least it's fun stuff, and not more forms that will be used infrequently.
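To illustrate the kind of one-line graph loading described above, here is a minimal EF code-first sketch. The class and property names are hypothetical, not CoasterBuzz's actual schema:

```csharp
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;

// Hypothetical entities standing in for the real CoasterBuzz schema.
public class Park { public int ParkId { get; set; } public string Name { get; set; } }
public class CoasterInstance { public int CoasterInstanceId { get; set; } public string Name { get; set; } }

public class Coaster
{
    public int CoasterId { get; set; }
    public string Name { get; set; }
    // Navigation properties are what make the rich object graph possible.
    public virtual Park Park { get; set; }
    public virtual ICollection<CoasterInstance> Instances { get; set; }
}

public class CoasterContext : DbContext
{
    public DbSet<Coaster> Coasters { get; set; }
}

public class CoasterRepository
{
    // One line in the repository; the view receives a coaster with its
    // park and relocation instances already attached.
    public Coaster Get(CoasterContext db, int id)
    {
        return db.Coasters
            .Include(c => c.Park)
            .Include(c => c.Instances)
            .Single(c => c.CoasterId == id);
    }
}
```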

  • What Counts for A DBA - Logic

    - by drsql
    "There are 10 kinds of people in the world. Those who will always wonder why there are only two items in my list and those who will figured it out the first time they saw this very old joke."  Those readers who will give up immediately and get frustrated with me for not explaining it to them are not likely going to be great technical professionals of any sort, much less a programmer or administrator who will be constantly dealing with the common failures that make up a DBA's day.  Many of these people will stare at this like a dog staring at a traffic signal and still have no more idea of how to decipher the riddle. Without explanation they will give up, call the joke "stupid" and, feeling quite superior, walk away indignantly to their job likely flipping patties of meat-by-product. As a data professional or any programmer who has strayed  to this very data-oriented blog, you would, if you are worth your weight in air, either have recognized immediately what was going on, or felt a bit ignorant.  Your friends are chuckling over the joke, but why is it funny? Unfortunately you left your smartphone at home on the dresser because you were up late last night programming and were running late to work (again), so you will either have to fake a laugh or figure it out.  Digging through the joke, you figure out that the word "two" is the most important part, since initially the joke mentioned 10. Hmm, why did they spell out two, but not ten? Maybe 10 could be interpreted a different way?  As a DBA, this sort of logic comes into play every day, and sometimes it doesn't involve nerdy riddles or Star Wars folklore.  When you turn on your computer and get the dreaded blue screen of death, you don't immediately cry to the help desk and sit on your thumbs and whine about not being able to work. Do that and your co-workers will question your nerd-hood; I know I certainly would. You figure out the problem, and when you have it narrowed down, you call the help desk and tell them what the problem is, usually having to explain that yes, you did in fact try to reboot before calling.  Of course, sometimes humility does come in to play when you reach the end of your abilities, but the ‘end of abilities’ is not something any of us recognize readily. It is handy to have the ability to use logic to solve uncommon problems: It becomes especially useful when you are trying to solve a data-related problem such as a query performance issue, and the way that you approach things will tell your coworkers a great deal about your abilities.  The novice is likely to immediately take the approach of  trying to add more indexes or blaming the hardware. As you become more and more experienced, it becomes increasingly obvious that performance issues are a very complex topic. A query may be slow for a myriad of reasons, from concurrency issues, a poor query plan because of a parameter value (like parameter sniffing,) poor coding standards, or just because it is a complex query that is going to be slow sometimes. Some queries that you will deal with may have twenty joins and hundreds of search criteria, and it can take a lot of thought to determine what is going on.  You can usually figure out the problem to almost any query by using basic knowledge of how joins and queries work, together with the help of such things as the query plan, profiler or monitoring tools.  It is not unlikely that it can take a full day’s work to understand some queries, breaking them down into smaller queries to find a very tiny problem. 
Not every time will you actually find the problem, and it is part of the process to occasionally admit that the problem is random, and everything works fine now. Sometimes, it is necessary to realize that a problem is outside of your current knowledge, and admit temporary defeat: you can, at least, narrow down the source of the problem by looking logically at all of the possible solutions. By doing this, you can satisfy your curiosity and learn more about what the actual problem was. For example, in the joke, had you never been exposed to the concept of binary numbers, there is no way you could have known that binary 10 = decimal 2, but you could have logically come to the conclusion that 10 must not mean ten in the context of the joke, and at that point you are that much closer to getting the joke and at least won't feel so ignorant.
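If the binary explanation above is still opaque, one line of C# settles it:

```csharp
using System;

class BinaryJoke
{
    static void Main()
    {
        // "10" read as a base-2 number is the decimal number 2,
        // so "10 kinds of people" really means two kinds.
        int kinds = Convert.ToInt32("10", 2);
        Console.WriteLine(kinds); // prints 2
    }
}
```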

  • ASP.NET 3.5 GridView - row editing - dynamic binding to a DropDownList

    - by marc_s
This is driving me crazy :-) I'm trying to get an ASP.NET 3.5 GridView to show a selected value as a string when being displayed, and to show a DropDownList that allows me to pick a value from a given list of options when being edited. Seems simple enough? My GridView looks like this (simplified):

```aspx
<asp:GridView ID="grvSecondaryLocations" runat="server" DataKeyNames="ID"
    OnInit="grvSecondaryLocations_Init"
    OnRowCommand="grvSecondaryLocations_RowCommand"
    OnRowCancelingEdit="grvSecondaryLocations_RowCancelingEdit"
    OnRowDeleting="grvSecondaryLocations_RowDeleting"
    OnRowEditing="grvSecondaryLocations_RowEditing"
    OnRowUpdating="grvSecondaryLocations_RowUpdating">
    <Columns>
        <asp:TemplateField>
            <ItemTemplate>
                <asp:Label ID="lblPbxTypeCaption" runat="server"
                    Text='<%# Eval("PBXTypeCaptionValue") %>' />
            </ItemTemplate>
            <EditItemTemplate>
                <asp:DropDownList ID="ddlPBXTypeNS" runat="server" Width="200px"
                    DataTextField="CaptionValue" DataValueField="OID" />
            </EditItemTemplate>
        </asp:TemplateField>
    </Columns>
</asp:GridView>
```

The grid gets displayed OK when not in editing mode - the selected PBX type shows its value in the asp:Label control. No surprise there. I load the list of values for the DropDownList into a local member called _pbxTypes in the OnLoad event of the form. I verified this - it works, the values are there. Now my challenge is: when the grid goes into editing mode for a particular row, I need to bind the list of PBX's stored in _pbxTypes. Simple enough, I thought - just grab the DropDownList object in the RowEditing event and attach the list:

```csharp
protected void grvSecondaryLocations_RowEditing(object sender, GridViewEditEventArgs e)
{
    grvSecondaryLocations.EditIndex = e.NewEditIndex;
    GridViewRow editingRow = grvSecondaryLocations.Rows[e.NewEditIndex];

    DropDownList ddlPbx = (editingRow.FindControl("ddlPBXTypeNS") as DropDownList);
    if (ddlPbx != null)
    {
        ddlPbx.DataSource = _pbxTypes;
        ddlPbx.DataBind();
    }
    // ... (more stuff)
}
```

Trouble is - I never get anything back from the FindControl call - it seems like ddlPBXTypeNS doesn't exist (or can't be found). What am I missing?? Must be something really stupid.... but so far, all my Googling, reading up on GridView controls, and asking buddies hasn't helped. Who can spot the missing link? ;-) Marc
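(For anyone landing here with the same problem: a commonly suggested approach, offered as a sketch rather than a verified fix, is to bind the list in RowDataBound instead. The EditItemTemplate controls are only instantiated after the grid is re-bound with the new EditIndex, which would explain FindControl returning null in RowEditing.)

```csharp
// Sketch: runs during the re-bind that follows setting EditIndex,
// when the EditItemTemplate (and the DropDownList) actually exists.
protected void grvSecondaryLocations_RowDataBound(object sender, GridViewRowEventArgs e)
{
    if (e.Row.RowType == DataControlRowType.DataRow &&
        (e.Row.RowState & DataControlRowState.Edit) != 0)
    {
        var ddlPbx = e.Row.FindControl("ddlPBXTypeNS") as DropDownList;
        if (ddlPbx != null)
        {
            ddlPbx.DataSource = _pbxTypes;
            ddlPbx.DataBind();
        }
    }
}
```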

  • MIPS: removing non alpha-numeric characters from a string

    - by Kron
I'm in the process of writing a program in MIPS that will determine whether or not a user-entered string is a palindrome. It has three subroutines which are under construction. Here is the main block of code, subroutines to follow with relevant info:

```asm
.data
Buffer: .asciiz " "   # 80 bytes in Buffer
intro:  .asciiz "Hello, please enter a string of up to 80 characters. I will then tell you if that string was a palindrome!"

.text
main:
    li $v0, 4             # print_string call number
    la $a0, intro         # pointer to string in memory
    syscall

    li $v0, 8             # syscall code for reading string
    la $a0, Buffer        # save read string into buffer
    li $a1, 80            # string is 80 bytes long
    syscall

    li $s0, 0             # i = 0
    li $t0, 80            # max for i to reach
    la $a0, Buffer
    jal stripNonAlpha

    li $v0, 4             # print_string call number
    la $a0, Buffer        # pointer to string in memory
    syscall

    li $s0, 0
    jal findEnd
    jal toUpperCase

    li $v0, 4             # print_string call number
    la $a0, Buffer        # pointer to string in memory
    syscall
```

Firstly, it's supposed to remove all non-alphanumeric characters from the string beforehand, but when it encounters a character designated for removal, all characters after that are removed.

```asm
stripNonAlpha:
    beq $s0, $t0, stripEnd    # if i = 80 end
    add $t4, $s0, $a0         # address of Buffer[i] in $t4
    lb $s1, 0($t4)            # load value of Buffer[i]
    addi $s0, $s0, 1          # i = i + 1
    slti $t1, $s1, 48         # if ascii code is less than 48
    bne $t1, $zero, strip     # remove ascii character
    slti $t1, $s1, 58         # if ascii code is greater than 57
    # and
    slti $t2, $s1, 65         # if ascii code is less than 65
    slt $t3, $t1, $t2
    bne $t3, $zero, strip     # remove ascii character
    slti $t1, $s1, 91         # if ascii code is greater than 90
    # and
    slti $t2, $s1, 97         # if ascii code is less than 97
    slt $t3, $t1, $t2
    bne $t3, $zero, strip     # remove ascii character
    slti $t1, $s1, 123        # if ascii character is greater than 122
    beq $t1, $zero, strip     # remove ascii character
    j stripNonAlpha           # go to stripNonAlpha

strip:
    #add $t5, $s0, $a0        # address of Buffer[i] in $t5
    sb $0, 0($t4)             # Buffer[i] = 0
    #addi $s0, $s0, 1         # i = i + 1
    j stripNonAlpha           # go to stripNonAlpha

stripEnd:
    la $a0, Buffer            # save modified string into buffer
    jr $ra                    # return
```

Secondly, it is supposed to convert all lowercase characters to uppercase.

```asm
toUpperCase:
    beq $s0, $s2, upperEnd
    add $t4, $s0, $a0
    lb $s1, 0($t4)
    addi $s1, $s1, 1
    slti $t1, $s1, 97
    #beq $t1, $zero, upper
    slti $t2, $s1, 123
    slt $t3, $t1, $t2
    bne $t1, $zero, upper
    j toUpperCase

upper:
    add $t5, $s0, $a0
    addi $t6, $t6, -32
    sb $t6, 0($t5)
    j toUpperCase

upperEnd:
    la $a0, Buffer
    jr $ra
```

The final subroutine, which checks if the string is a palindrome, isn't anywhere near complete at the moment. I'm having trouble finding the end of the string because I'm not sure what PC-SPIM uses as the carriage return character. Any help is appreciated; I have the feeling most of my problems result from something silly and stupid, so feel free to point out anything, no matter how small.

  • Long-running ASP.NET tasks

    - by John Leidegren
I know there's a bunch of APIs out there that do this, but I also know that the hosting environment (being ASP.NET) puts restrictions on what you can reliably do in a separate thread. I could be completely wrong, so please correct me if I am; this is however what I think I know. A request typically times out after 120 seconds (this is configurable), and eventually the ASP.NET runtime will kill a request that's taking too long to complete. The hosting environment, typically IIS, employs process recycling and can at any point decide to recycle your app. When this happens all threads are aborted and the app restarts. I'm however not sure how aggressive it is; it would be kind of stupid to assume that it would abort a normal ongoing HTTP request, but I would expect it to abort a thread because it doesn't know anything about the unit of work of a thread.

If you had to create a programming model that easily and reliably and theoretically put a long running task, that would have to run for days, how would you accomplish this from within an ASP.NET application? The following are my thoughts on the issue:

I've been thinking along the lines of hosting a WCF service in a Win32 service and talking to the service through WCF. This is however not very practical, because the only reason I would choose to do so is to send tasks (units of work) from several different web apps. I'd then eventually ask the service for status updates and act accordingly. My biggest concern with this is that it would NOT be a particularly great experience if I had to deploy every task to the service for it to be able to execute some instructions. There's also this issue of input: how would I feed this service with data if I had a large data set and needed to chew through it?

What I typically do right now is this:

```sql
SELECT TOP 10 *
FROM WorkItem WITH (ROWLOCK, UPDLOCK, READPAST)
WHERE WorkCompleted IS NULL
```

It allows me to use a SQL Server database as a work queue and periodically poll the database with this query for work. If the work item completed with success, I mark it as done and proceed until there's nothing more to do. What I don't like is that I could theoretically be interrupted at any point, and if I'm in-between success and marking it as done, I could end up processing the same work item twice. I might be a bit paranoid and this might be all fine, but as I understand it there's no guarantee that that won't happen...

I know there have been similar questions on SO before, but none really answer with a definitive answer. This is a really common thing, yet the ASP.NET hosting environment is ill-equipped to handle long-running work. Please share your thoughts.
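On the double-processing worry above: one common mitigation is to claim work items atomically in the same statement that reads them, so two pollers can never pick up the same row. A sketch, assuming a nullable WorkStarted column on WorkItem (not part of the schema shown above); a worker that dies mid-item can be handled by a separate sweep over stale WorkStarted values:

```csharp
using System.Data.SqlClient;

class WorkQueuePoller
{
    // UPDATE ... OUTPUT both claims and returns up to 10 rows atomically.
    const string ClaimSql = @"
        UPDATE TOP (10) WorkItem
        SET WorkStarted = GETUTCDATE()
        OUTPUT inserted.*
        WHERE WorkCompleted IS NULL AND WorkStarted IS NULL";

    public static void Poll(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(ClaimSql, connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Process the claimed work item, then mark WorkCompleted.
                }
            }
        }
    }
}
```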

  • Detecting 'stealth' web-crawlers

    - by Jacco
What options are there to detect web-crawlers that do not want to be detected? (I know that listing detection techniques will allow the smart stealth-crawler programmer to make a better spider, but I do not think that we will ever be able to block smart stealth-crawlers anyway, only the ones that make mistakes.)

I'm not talking about the nice crawlers such as googlebot and Yahoo! Slurp. I consider a bot nice if it:

- identifies itself as a bot in the user agent string
- reads robots.txt (and obeys it)

I'm talking about the bad crawlers, hiding behind common user agents, using my bandwidth and never giving me anything in return. There are some trapdoors that can be constructed; updated list (thanks Chris, gs):

- Adding a directory only listed (marked as disallow) in the robots.txt,
- Adding invisible links (possibly marked as rel="nofollow"?), either with style="display: none;" on the link or a parent container, or placed underneath another element with a higher z-index,
- detect who doesn't understand CaPiTaLiSaTioN,
- detect who tries to post replies but always fails the Captcha,
- detect GET requests to POST-only resources,
- detect the interval between requests,
- detect the order of pages requested,
- detect who (consistently) requests https resources over http,
- detect who does not request image files (this in combination with a list of user agents of known image-capable browsers works surprisingly nicely).

Some traps would be triggered by both 'good' and 'bad' bots. You could combine those with a whitelist:

1. It triggers a trap
2. Does it request robots.txt?
3. It doesn't trigger another trap, because it obeyed robots.txt

One other important thing here: please consider blind people using screen readers. Give people a way to contact you, or a (non-image) Captcha to solve to continue browsing.

What methods are there to automatically detect the web crawlers trying to mask themselves as normal human visitors?

Update: The question is not "How do I catch every crawler?" The question is "How can I maximize the chance of detecting a crawler?" Some spiders are really good, and actually parse and understand html, xhtml, css, javascript, VB script etc... I have no illusions: I won't be able to beat them. You would however be surprised how stupid some crawlers are, with the best example of stupidity (in my opinion) being: cast all URLs to lower case before requesting them. And then there is a whole bunch of crawlers that are just 'not good enough' to avoid the various trapdoors.
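To illustrate the first trapdoor in the list above (a directory listed as Disallow in robots.txt and linked invisibly), here is a minimal ASP.NET honeypot sketch. The trap path and the logging helper are assumptions, not parts of any real library:

```csharp
using System.Web;

// Sketch: map this handler to a URL like /trap/ that robots.txt disallows
// and that only invisible links point to. A nice bot never requests it;
// anything that does gets flagged for the whitelist check described above.
public class TrapHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        string ip = context.Request.UserHostAddress;
        string userAgent = context.Request.UserAgent ?? "(none)";

        // BotLog is a hypothetical helper: record the hit so later requests
        // from this client can be throttled or blocked.
        BotLog.Flag(ip, userAgent);

        context.Response.StatusCode = 404; // give the crawler nothing useful
    }

    public bool IsReusable { get { return true; } }
}
```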

  • Team Foundation Server Setup/Access

    - by Angel Brighteyes
What I need: a TFS 2010 setup that allows 2 application developers to access the TFS from remote locations.

How it is set up:

- Server 2008 Standard, 2 GB RAM, 300 GB HD space
- SharePoint Server 2007, using SQL Server 2005
- SQL Server 2008 Standard
- Team Foundation Server 2010
- IIS 7
- SharePoint bindings: TFS.DynAccount.Me:80; TFS:80
- TFS bindings: TFS.DynAccount.Me:8080; TFS:8080
- Using the DynDNS service to account for the dynamic IP address being used; this is a requirement for the moment until I can get a better ISP package.
- Access using local accounts. The server is not set up on a domain, or as a domain. Consequently I did not set up AD services.

Problem: When logged into TFS using my credentials TFS\AdminUser through the DynDNS account TFS.DynAccount.Me, I receive the 'Red X of Death' on the Documents and Reports folder. When logged into the TFS through the local peer-to-peer network using the same credentials TFS\AdminUser, I do not receive the 'Red X of Death' problem.

Further troubleshooting: When users 'Right Click' the 'TeamProject1' and click 'Show Project Portal', it tries to take them to http://TFS:8080 instead of http://TFS.DynAccount.Me:8080, which, doing further research, I am assuming is because Team Foundation Server was set up with a local name of TFS instead of 'TFS.DynAccount.Me', as described in Visual Studio Magazine: The Red X of Death. Users can access the Team Portal for SharePoint via http://TFS.DynAccount.Me/TeamCollection/TeamProject, so it is not like we are dead in the water or anything. However, as most employees/staff are prone to do, they have expressed a great distaste for having to do it this way and would rather not just be patient until the current project is finished, even though we are under a very strict deadline.

Is there a way to set this up differently, or change some settings someplace, reinstall it, point a CNAME record for our domain website to the DynDNS account (e.g. TFS.OurDomain.com points to TFS.DynAccount.Me, which consequently does allow access to the HTTP site without issues), or something? After, first, the cost, second, the bloody install, third, learning SharePoint well enough, fourth, the hours into days spent on this, fifth, more troubleshooting, and sixth, employee headaches, I really don't feel like just letting it lie where it is. I figure in my spare/off time I will keep trying to get this to work. So I really appreciate any help anyone can give me. I know this is probably something really stupid-simple that I will 'Face Palm' over, but at the moment the stress and frustration just has me beat. Thank you again; this community has always been a great help.

  • jQuery UI portlets - toggle portlets to save to a cookie (half way there!)

    - by Gareth
Hi, I'm a bit of a jQuery n00b, so please excuse me if this seems like a stupid question. I am creating a site using the jQuery UI, more specifically the sortable portlets. I have been able to store whether a portlet is open or closed in a cookie. This is done using the following code; the #slider ID is currently where the controls are stored to turn each portlet on and off. (The opening wrapper below is implied by the closing `})` in my original snippet.)

```javascript
$(function () {
    var cookie = $.cookie("hidden");
    var hidden = cookie ? cookie.split("|").getUnique() : [];
    var cookieExpires = 7; // cookie expires in 7 days, or set this as a date object to specify a date

    // Remember content that was hidden
    $.each(hidden, function () {
        var pid = this; // parseInt(this,10);
        $('#' + pid).hide();
        $("#slider div[name='" + pid + "']").addClass('add');
    });

    // Add click functionality
    $("#slider div").click(function () {
        $(this).toggleClass('add');
        var el = $("div#" + $(this).attr('name'));
        el.toggle();
        updateCookie(el);
    });

    $('a.toggle').click(function () {
        $(this).parents(".portlet").hide();
        // *** Below line just needs to select the correct 'id' and insert as
        // selector i.e ('#slider div#block-1') and then update cookie! ***
        $('#slider div').addClass('add');
    });

    // Update the cookie
    function updateCookie(el) {
        var indx = el.attr('id');
        var tmp = hidden.getUnique();
        if (el.is(':hidden')) {
            // add index of widget to hidden list
            tmp.push(indx);
        } else {
            // remove element id from the list
            tmp.splice(tmp.indexOf(indx), 1);
        }
        hidden = tmp.getUnique();
        $.cookie("hidden", hidden.join('|'), { expires: cookieExpires });
    }
});

// Return a unique array.
Array.prototype.getUnique = function () {
    var o = new Object();
    var i, e;
    for (i = 0; e = this[i]; i++) { o[e] = 1; }
    var a = new Array();
    for (e in o) { a.push(e); }
    return a;
};
```

What I would like to do is also add an [x] into the corner of each portlet to give the user another way of hiding it, but I'm currently unable to get this to store within the cookie using the code above. Can anyone give me a pointer on how I would do this? Thanks in advance! Gareth

  • SQLiteOpenHelper.getWriteableDatabase() null pointer exception on Android

    - by Drew Dara-Abrams
I've had fine luck using SQLite with straight, direct SQL in Android, but this is the first time I'm wrapping a DB in a ContentProvider. I keep getting a null pointer exception when calling getWritableDatabase() or getReadableDatabase(). Is this just a stupid mistake I've made with initializations in my code, or is there a bigger issue?

```java
public class DatabaseProvider extends ContentProvider {
    ...
    private DatabaseHelper databaseHelper;
    private SQLiteDatabase db;
    ...
    @Override
    public boolean onCreate() {
        databaseHelper = new DatabaseProvider.DatabaseHelper(getContext());
        return (databaseHelper == null) ? false : true;
    }
    ...
    @Override
    public Uri insert(Uri uri, ContentValues values) {
        db = databaseHelper.getWritableDatabase(); // NULL POINTER EXCEPTION HERE
        ...
    }

    private static class DatabaseHelper extends SQLiteOpenHelper {
        public static final String DATABASE_NAME = "cogsurv.db";
        public static final int DATABASE_VERSION = 1;
        public static final String[] TABLES = {
            "people",
            "travel_logs",
            "travel_fixes",
            "landmarks",
            "landmark_visits",
            "direction_distance_estimates"
        };

        // people._id does not AUTOINCREMENT, because it's set based on server's people.id
        public static final String[] CREATE_TABLE_SQL = {
            "CREATE TABLE people (_id INTEGER PRIMARY KEY,"
                + "server_id INTEGER,"
                + "name VARCHAR(255),"
                + "email_address VARCHAR(255))",
            "CREATE TABLE travel_logs (_id INTEGER PRIMARY KEY AUTOINCREMENT,"
                + "server_id INTEGER,"
                + "person_local_id INTEGER,"
                + "person_server_id INTEGER,"
                + "start DATE,"
                + "stop DATE,"
                + "type VARCHAR(15),"
                + "uploaded VARCHAR(1))",
            "CREATE TABLE travel_fixes (_id INTEGER PRIMARY KEY AUTOINCREMENT,"
                + "datetime DATE, "
                + "latitude DOUBLE, "
                + "longitude DOUBLE, "
                + "altitude DOUBLE,"
                + "speed DOUBLE,"
                + "accuracy DOUBLE,"
                + "travel_mode VARCHAR(50), "
                + "person_local_id INTEGER,"
                + "person_server_id INTEGER,"
                + "travel_log_local_id INTEGER,"
                + "travel_log_server_id INTEGER,"
                + "uploaded VARCHAR(1))",
            "CREATE TABLE landmarks (_id INTEGER PRIMARY KEY AUTOINCREMENT,"
                + "server_id INTEGER,"
                + "name VARCHAR(150),"
                + "latitude DOUBLE,"
                + "longitude DOUBLE,"
                + "person_local_id INTEGER,"
                + "person_server_id INTEGER,"
                + "uploaded VARCHAR(1))",
            "CREATE TABLE landmark_visits (_id INTEGER PRIMARY KEY AUTOINCREMENT,"
                + "server_id INTEGER,"
                + "person_local_id INTEGER,"
                + "person_server_id INTEGER,"
                + "landmark_local_id INTEGER,"
                + "landmark_server_id INTEGER,"
                + "travel_log_local_id INTEGER,"
                + "travel_log_server_id INTEGER,"
                + "datetime DATE,"
                + "number_of_questions_asked INTEGER,"
                + "uploaded VARCHAR(1))",
            "CREATE TABLE direction_distance_estimates (_id INTEGER PRIMARY KEY AUTOINCREMENT,"
                + "server_id INTEGER,"
                + "person_local_id INTEGER,"
                + "person_server_id INTEGER,"
                + "travel_log_local_id INTEGER,"
                + "travel_log_server_id INTEGER,"
                + "landmark_visit_local_id INTEGER,"
                + "landmark_visit_server_id INTEGER,"
                + "start_landmark_local_id INTEGER,"
                + "start_landmark_server_id INTEGER,"
                + "target_landmark_local_id INTEGER,"
                + "target_landmark_server_id INTEGER,"
                + "datetime DATE,"
                + "direction_estimate DOUBLE,"
                + "distance_estimate DOUBLE,"
                + "uploaded VARCHAR(1))"
        };

        public DatabaseHelper(Context context) {
            super(context, DATABASE_NAME, null, DATABASE_VERSION);
            Log.v(Constants.TAG, "DatabaseHelper()");
        }

        @Override
        public void onCreate(SQLiteDatabase db) {
            Log.v(Constants.TAG, "DatabaseHelper.onCreate() starting");
            // create the tables
            int length = CREATE_TABLE_SQL.length;
            for (int i = 0; i < length; i++) {
                db.execSQL(CREATE_TABLE_SQL[i]);
            }
            Log.v(Constants.TAG, "DatabaseHelper.onCreate() finished");
        }

        @Override
        public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
            for (String tableName : TABLES) {
                db.execSQL("DROP TABLE IF EXISTS" + tableName);
            }
            onCreate(db);
        }
    }
}
```

As always, thanks for the assistance! -- Not sure if this detail helps, but here's LogCat showing the exception:

    Read the article

  • UITableView having one Cell with multiple UITextFields (some of them in one row) scrolling crazy

    - by Allisone
    I have a grouped UITableView containing several UITableViewCells, some custom with a nib, some default. One custom cell (with nib) has several UITextFields for address information, including one row that holds the zip code and the city side by side.

    When the keyboard comes up, the table view's size seems to be adjusted automatically (versus another view controller in the app with just a scroll view, where I had to code that behaviour myself), so I can scroll to the bottom of my table view and see it even though the keyboard is up. That's good. BUT when I tap a text field, the table view gets scrolled either up or down, and I can't figure out the logic; it seems to be rather random up/down scrolling / contentOffset setting. So I have bound the Editing Did Begin events of the text fields to a method with this code:

        - (IBAction)textFieldDidBeginEditing:(UITextField *)textField {
            CGPoint pt;
            CGRect rc = [textField bounds];
            rc = [textField convertRect:rc toView:self.tableView];
            pt = rc.origin;
            pt.x = 0;
            [self.tableView setContentOffset:pt animated:YES];
            ...
        }

    This seems to work most of the time, BUT it doesn't work if I tap the first text field (the view jumps so that the second row lands at the top and the first row scrolls out of the visible frame), AND it also doesn't work if I first select the zip text field and then the city text field (both in one row), or vice versa. In that case the table view seems to jump to the top of my viewForHeaderInSection for this section (the one with the cell holding all my text fields).

    What is going on? Why is this happening? How can I fix it?

    Edit: This, on the other hand, behaves as expected for the two text fields with the same origin.y:

        if (self.tableView.contentOffset.y == pt.y) {
            pt.y = pt.y + 1;
            [self.tableView setContentOffset:pt animated:YES];
        } else {
            [self.tableView setContentOffset:pt animated:YES];
        }

    But this is a stupid workaround and I wouldn't like to keep it that way. It also doesn't fix the wrong jump when tapping the first text field first.

    Read the article

  • html widget communicating with server

    - by Nikita Rybak
    I'm making an HTML widget for websites. Let's say it will display current stock indexes.

    In short, an arbitrary website owner takes a code snippet from me and includes it on his webpage http://website.com/index.html. When a user opens http://website.com/index.html, my code sends a request to my server (provider.com), which performs the necessary operations and returns information to the user's browser. When the response has arrived, the user sees the relevant stock value on http://website.com/index.html. In index.html the service could be called like this:

        <script type="text/javascript" src="provider.com/service.js"></script>
        <div id="target_area"></div>
        <script type="text/javascript">
            service.show("target_area", options);
        </script>

    Now, the problem is the same-origin policy: I can't just send an AJAX request from website.com to provider.com and return HTML to embed in the client's webpage. I see several solutions, which I list below, but none quite satisfies me. I wonder if you could suggest something, especially if you have relevant experience.

    1) iframe, plain and simple. Disadvantage: it must have fixed dimensions, plus stupid scroll bars appear in some browsers. That can be fixed with JavaScript, but all this browser-specific tinkering doesn't sound good to me.

    2) JSONP. Problem: I can't return a whole chunk of HTML, only data. Then, on the browser side, I'd have to use JavaScript to embed the data into an HTML snippet placed statically in index.html. That doesn't sound nice, because the data format is not very simple and may even change later.

    3) Use a hidden iframe to do the AJAX requests. A bit tricky, but it sounds like the way to go.

    Well, those are my thoughts on the subject. Are there any better ways? BTW, I tried to check some existing widgets too, but didn't find much useful information. All domain names used in this text are fictional and any resemblance is purely coincidental :)
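
    For what it's worth, the JSONP option only needs a tiny endpoint on the provider side: the server wraps its JSON payload in whatever callback name the page's script tag asked for. A minimal sketch in ASP.NET MVC; the controller name, action and payload here are invented for illustration and are not part of the original widget:

        using System.Web.Mvc;

        public class ServiceController : Controller
        {
            // Hypothetical endpoint: http://provider.com/Service/Quotes?callback=service_cb1
            [HttpGet]
            public ContentResult Quotes(string callback)
            {
                // Stand-in payload; a real widget would serialize live data here.
                string json = "{\"DOW\":10741.98,\"NASDAQ\":2369.62}";

                // Wrapping the JSON in the caller-supplied function lets a plain
                // <script src="..."> tag consume it, sidestepping the same-origin policy.
                return Content(callback + "(" + json + ");", "application/javascript");
            }
        }

    The widget script would then inject <script src="http://provider.com/Service/Quotes?callback=..."></script> and define that callback to render the data into the target div.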

    Read the article

  • Using Linq to filter a ComboBox.DataSource?

    - by Pesche Helfer
    Hi board, in another topic I stumbled over this very elegant solution by Darin Dimitrov to filter the DataSource of one ComboBox with the selection of another ComboBox: how to filter combobox in combobox using c#

        combo2.DataSource = ((IEnumerable<string>)c.DataSource)
            .Where(x => x == (string)combo1.SelectedValue);

    I would like to do a similar thing, but instead of filtering by a second ComboBox, I would like to filter by the text of a TextBox. (Basically, instead of choosing from a second ComboBox, the user simply enters his filter into a TextBox.) However, it turned out to be not as straightforward as I had hoped. I tried stuff like the following, but failed miserably:

        cbWohndresse.DataSource = ((IEnumerable<DataSet>)ds)
            .Where(x => x.Tables["Adresse"].Select("AdrLabel LIKE '%TEST%'"));
        cbWohndresse.DisplayMember = "Adresse.AdrLabel";
        cbWohndresse.ValueMember = "Adresse.adress_id";

    ds is the DataSet which I would like to use as the filtered DataSource. "Adresse" is one DataTable in this DataSet; it contains a DataColumn "AdrLabel". Now I would like to display only those "AdrLabel" values which contain the string from the user input. (Currently, %TEST% stands in for textbox.Text.)

    The above code fails because the lambda expression does not return bool. But I am sure there are other problems too (which type should I use for the IEnumerable? Right now it's DataSet, but Darin used string, and how could I convert a DataSet to a string anyway?). Yes, I am as newbie-ish as it gets, my experience is "void", and publicly so. So please forgive me my rather stupid questions. Your help is greatly appreciated, because I can't solve this on my own (I've tried hard already). Thank you very much! Pesche

    P.S. I am only using Linq to achieve an uncomplicated filter for the ComboBox (avoiding a View). The rest is not based on Linq but on oldstyle Ado.NET (ds is filled by an SqlDataAdapter), if that's of any importance.
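
    For completeness, here is a sketch of one shape that type-checks: filter the rows of the "Adresse" table with LINQ to DataSet and bind the surviving rows. It assumes a reference to System.Data.DataSetExtensions, and the TextBox name txtFilter is made up:

        using System.Data;
        using System.Linq;

        // Filter the Adresse rows on the user's text.
        DataTable adresse = ds.Tables["Adresse"];
        var rows = adresse.AsEnumerable()
            .Where(r => r.Field<string>("AdrLabel") != null
                     && r.Field<string>("AdrLabel").Contains(txtFilter.Text));

        // CopyToDataTable() throws on an empty sequence, hence the guard.
        if (rows.Any())
        {
            cbWohndresse.DataSource = rows.CopyToDataTable();
            // Once the DataSource is the table itself, drop the "Adresse." prefix:
            cbWohndresse.DisplayMember = "AdrLabel";
            cbWohndresse.ValueMember = "adress_id";
        }

    The key difference from the attempt above is that the Where operates on DataRow objects and returns a bool per row, instead of trying to enumerate the DataSet itself.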

    Read the article

  • Visual Web Developer 2008 Express and XHTML 1.1 (not applying)

    - by Mike Valeriano
    Hello. I may be missing something, since I'm not used to working with IDEs for web development, so please understand if I'm doing something stupid. I've picked up a copy of Beginning ASP.NET 3.5 in C# and VB to try and learn this thing quickly. I'm no HTML guru, but I know something about the DTDs, and I want to use either XHTML 1.0 Strict or XHTML 1.1 while learning the ins and outs of ASP.NET.

    Following the book (just started, actually), I'm not having any trouble understanding the concepts, since I have a C# background. What I don't understand is how VWD goes about applying the schema you select for validation. The book explains that the drop-down list on the toolbar is enough to set this, so that's what I did: I selected XHTML 1.1 (since there is no XHTML 1.0 Strict option, which I find really odd) and then started the project.

    The thing is that the code generated automatically for the Default page had the XHTML 1.0 Transitional DTD. Every other page added had the same DTD, even though XHTML 1.1 is still the selected schema in the drop-down list. I decided to test it, and there it was: inserting some text in the design view and then applying bold formatting just inserted <b> tags in the code. (Changing the color does add a span tag with CSS, though.)

    What I want to know is: if I want strict XHTML (or 1.1), should I just code it manually? Or is there a "fix" for this problem (having a schema selected but not applied by the IDE)? Am I missing something really easy and dumb? I have no problem with coding it manually; I actually prefer that. The thing is that I really want to try this WYSIWYG approach for once, since some developers swear by it. Plus, I don't know yet if I'll need special tags for ASP.NET, so having them inserted automatically seemed like the best practice.

    I couldn't find any similar problems around (MSDN, Google, here), and as far as the book goes (up to the point I've read), this is the expected VWD behavior, which I find suspect at least. Sorry for the long text (and poor English, not my primary language). Thanks in advance for any help/tips.
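
    In case it helps anyone with the same question: my understanding (an assumption on my part, not something the book states) is that the validation drop-down only changes what the editor validates and offers IntelliSense against; it never rewrites the page itself, so the doctype stays whatever the page template emitted. Targeting XHTML 1.1 output would then mean replacing the generated doctype by hand with the XHTML 1.1 one:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
            "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">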

    Read the article

  • Difficulty getting Saxon into XQuery mode instead of XSLT

    - by Rosarch
    I'm having difficulty getting XQuery to work. I downloaded Saxon-HE 9.2, and it seems to only want to work with XSLT. When I type java -jar saxon9he.jar, I get back usage information for XSLT. When I use the command syntax for XQuery, it doesn't recognize the parameters (like -q) and gives the XSLT usage information. Here are some command line interactions:

        >java -jar saxon9he.jar
        No source file name
        Saxon-HE 9.2.0.6J from Saxonica
        Usage: see http://www.saxonica.com/documentation/using-xsl/commandline.html
        Options:
          -a                    Use xml-stylesheet PI, not -xsl argument
          -c:filename           Use compiled stylesheet from file
          -config:filename      Use configuration file
          -cr:classname         Use collection URI resolver class
          -dtd:on|off           Validate using DTD
          -expand:on|off        Expand defaults defined in schema/DTD
          -explain[:filename]   Display compiled expression tree
          -ext:on|off           Allow|Disallow external Java functions
          -im:modename          Initial mode
          -ief:class;class;...  List of integrated extension functions
          -it:template          Initial template
          -l:on|off             Line numbering for source document
          -m:classname          Use message receiver class
          -now:dateTime         Set currentDateTime
          -o:filename           Output file or directory
          -opt:0..10            Set optimization level (0=none, 10=max)
          -or:classname         Use OutputURIResolver class
          -outval:recover|fatal Handling of validation errors on result document
          -p:on|off             Recognize URI query parameters
          -r:classname          Use URIResolver class
          -repeat:N             Repeat N times for performance measurement
          -s:filename           Initial source document
          -sa                   Use schema-aware processing
          -strip:all|none|ignorable  Strip whitespace text nodes
          -t                    Display version and timing information
          -T[:classname]        Use TraceListener class
          -TJ                   Trace calls to external Java functions
          -tree:tiny|linked     Select tree model
          -traceout:file|#null  Destination for fn:trace() output
          -u                    Names are URLs not filenames
          -val:strict|lax       Validate using schema
          -versionmsg:on|off    Warn when using XSLT 1.0 stylesheet
          -warnings:silent|recover|fatal  Handling of recoverable errors
          -x:classname          Use specified SAX parser for source file
          -xi:on|off            Expand XInclude on all documents
          -xmlversion:1.0|1.1   Version of XML to be handled
          -xsd:file;file..      Additional schema documents to be loaded
          -xsdversion:1.0|1.1   Version of XML Schema to be used
          -xsiloc:on|off        Take note of xsi:schemaLocation
          -xsl:filename         Stylesheet file
          -y:classname          Use specified SAX parser for stylesheet
          --feature:value       Set configuration feature (see FeatureKeys)
          -?                    Display this message
          param=value           Set stylesheet string parameter
          +param=filename       Set stylesheet document parameter
          ?param=expression     Set stylesheet parameter using XPath
          !option=value         Set serialization option

        >java -jar saxon9he.jar -q:"..\w3xQueryTut.xq"
        Unknown option -q:..\w3xQueryTut.xq
        Saxon-HE 9.2.0.6J from Saxonica
        Usage: see http://www.saxonica.com/documentation/using-xsl/commandline.html
        Options:
          -a                    Use xml-stylesheet PI, not -xsl argument
          // etc...

        >java net.sf.saxon.Query -q:"..\w3xQueryTut.xq"
        Exception in thread "main" java.lang.NoClassDefFoundError: net/sf/saxon/Query
        Caused by: java.lang.ClassNotFoundException: net.sf.saxon.Query
        // etc...
        Could not find the main class: net.sf.saxon.Query. Program will exit.

    I'm probably making some stupid mistake. Do you know what it could be?
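
    A note on what is probably going wrong (my reading, worth checking against the Saxon documentation): java -jar always runs the Main-Class named in the jar's manifest, which for saxon9he.jar is the XSLT entry point (net.sf.saxon.Transform), so -q is an unknown option to it; and running java net.sf.saxon.Query without a classpath cannot find the class at all. Putting the jar on the classpath and naming the XQuery entry point explicitly should get past both errors:

        >java -cp saxon9he.jar net.sf.saxon.Query -q:"..\w3xQueryTut.xq"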

    Read the article

  • decrypt an encrypted value?

    - by jim
    I have an old Paradox database (I can convert it to Access 2007) which contains more than 200,000 records. This database has two columns: the first is named "Word" and the second is named "Mean". It is a dictionary database, and my client wants to convert it to ASP.NET and SQL. However, we don't know what key or method was used to encrypt or encode the "Mean" column, which is in Unicode format.

    The software itself was written in Delphi 7 and we don't have the source code; my client only knows the credentials for logging in to the database. The problem is decoding the Mean column. What I do have is the compiled Windows application and the Paradox database. The software can decode the "Mean" column for each "Word", so the method and/or key is in its own compiled code (.exe) or in one of the files in its directory.

    For example, we know that in the following row "Zymurgy" exactly means "???? ??? ????? ?? ???? ????, ????? ?????", since the application translates it like that. Here is what the record looks like when I open the database in Access:

        Word     Mean
        Zymurgy  5OBnGguKPdDAd7L2lnvd9Lnf1mdd2zDBQRxngsCuirK5h91sVmy0kpRcue/+ql9ORmP99Mn/QZ4=

    Therefore we're trying to discover how the value in the Mean column is converted to that sentence. I think the "Mean" value in the above row is encoded in Base64 string format, but decoding the Base64 string does not yet yield the expected text. The extensions of the files in the app directory are dll, CCC, DAT, exe (other than the main app file), SYS, FAM, MB, PX, TV and VAL.

    Any kind of help is appreciated. Additional information: guys, the creators are not so stupid as to save the values in merely encoded form; they've definitely encrypted them. So I guess we have to look for the key.
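
    As a first diagnostic step, it may be worth confirming that the value really is Base64 and looking at the raw byte length: a length that is a multiple of 8 or 16 would be consistent with a block cipher (DES/Blowfish-style or AES-style) rather than a mere encoding. A small exploratory sketch, not a decryption:

        using System;

        class InspectMean
        {
            static void Main()
            {
                string mean = "5OBnGguKPdDAd7L2lnvd9Lnf1mdd2zDBQRxngsCuirK5h91sVmy0kpRcue/+ql9ORmP99Mn/QZ4=";

                // Throws FormatException if the column is not actually valid Base64.
                byte[] raw = Convert.FromBase64String(mean);

                Console.WriteLine("Decoded length: {0} bytes", raw.Length);
                Console.WriteLine(BitConverter.ToString(raw)); // hex dump for spotting patterns
            }
        }

    If many rows decode to lengths that are multiples of one block size, the key most likely lives in the Delphi executable or one of the files next to it, as suspected.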

    Read the article

  • List of objects sent over WCF, but null list received?

    - by GONeale
    Hey there, I have an object containing a list of custom objects which I am returning in a WCF response; however, on the receiving end the list is null, even though it contains 112 objects just prior to stepping out of the service on the server. This wasn't always the case; I have seen it return a list. I've recently upgraded it to use net.tcp bindings, but I can't confirm whether I started losing the data with the conversion from wsHttpBinding to netTcpBinding, since it moved along with about four other services.

    I have looked at the WCF service's messages and trace file and also at the WCF client's messages and trace file: no exceptions reported, and both message logs indicate they are sending the List<T> and, for the client, receiving the list. Very frustrating! It's not a super light array, but not huge either, around 100KB; the items have about 12 properties each, and as stated 112 items are being sent. I have tried everything I can think of on client and server. Note:

    Client:

        this.binding = new NetTcpBinding(SecurityMode.None)
        {
            MaxReceivedMessageSize = int.MaxValue,
            ReaderQuotas =
            {
                MaxStringContentLength = int.MaxValue,
                MaxArrayLength = int.MaxValue
            }
        };
        ...

    Server app.config (sorry, I have no idea if the quota settings have any bearing on net.tcp; I only just added them, similar to what I use for wsHttpBinding, to test, but the list is still null):

        <netTcpBinding>
          <binding name="SecurityByNetTcpTransportBinding" sendTimeout="00:03:00"
                   maxReceivedMessageSize="2147483647">
            <readerQuotas maxStringContentLength="2147483647" maxArrayLength="2147483647" />
            <security mode="None" />
          </binding>
        </netTcpBinding>

    and something else I tried in my net.tcp binding behavior:

        <dataContractSerializer maxItemsInObjectGraph="2147483647" />
        <serviceTimeouts transactionTimeout="05:05:00" />
        <serviceThrottling maxConcurrentSessions="400" maxConcurrentInstances="400"
                           maxConcurrentCalls="400" />

    I hope somebody can help. I hate "five steps forward, three steps backward", which always seems to be the case with WCF :P In the interim, until I [hopefully] get a response, I will try reducing the array just to see if it's a sizing issue.

    OK, it seems I have bigger problems. Because the list was the only thing I was sending, I thought it was an array issue, but I am even setting an int to "25" and it's coming back as 0. Anybody? I know I must have done something obviously stupid.
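
    For readers hitting the same wall: an int arriving as 0 and a list arriving as null, with no fault raised, is the classic signature of the serializer silently skipping members, for instance when a type is marked [DataContract] but its members lack [DataMember], or when client and server were compiled against different versions of the contract type. A hedged sketch of the shape the contract needs (type and member names invented, not from the original service):

        using System.Collections.Generic;
        using System.Runtime.Serialization;

        [DataContract]
        public class QueryResponse
        {
            // Without [DataMember], DataContractSerializer omits the member entirely:
            // the call "succeeds" but the receiver sees null / default values.
            [DataMember]
            public List<ItemDto> Items { get; set; }

            [DataMember]
            public int Count { get; set; }
        }

        [DataContract]
        public class ItemDto
        {
            [DataMember]
            public string Name { get; set; }
        }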

    Read the article

  • Enable real fixed positioning on Samsung Android browsers

    - by Mr. Shiny and New 安宇
    The Android browser has supported fixed positioning since 2.2, at least under certain circumstances, such as when scaling is turned off. I have a simple HTML file with no JS, but the fixed positioning on three Samsung phones I've tried is simply wrong: instead of true fixed positioning, the header scrolls out of view and then pops back into place after the scrolling is done. This doesn't happen on the Android SDK emulator for any configuration I've tested (2.2, 2.3, 2.3 x86, 4.0.4). It also doesn't happen when using a WebView in an app on the Samsung phones: in those cases the positioning works as expected.

    Is there a way to make the Samsung Android "stock" browser use real fixed positioning?

    I've tested:
    1. Samsung Galaxy 551, Android 2.2
    2. Samsung Galaxy S, Android 2.3
    3. Samsung Galaxy S II, Android 2.3

    Sample code:

        <html>
        <head>
        <meta name="viewport" content="initial-scale=1.0,maximum-scale=1.0,user-scalable=no,width=device-width,height=device-height">
        <style>
        h1 { position: fixed; top: 0; left: 0; height: 32px; background-color: #CDCDCD;
             color: black; font-size: 32px; line-height: 32px; padding: 2px;
             width: 100%; margin: 0; }
        p  { margin-top: 36px; }
        </style>
        </head>
        <body>
        <h1>Header</h1>
        <p>Long text goes here</p>
        </body>
        </html>

    The expected behaviour is that the grey header fills the top of the screen and stays put no matter how much you scroll. On Samsung Android browsers it seems to scroll out of view and then pop back into place once the scrolling is done, as if the fixed positioning were being simulated with JavaScript, which it isn't.

    Edit: Judging by the comments and "answers", it seems I wasn't clear on what I need. I am looking for a meta tag, CSS rule/hack or JavaScript toggle which turns off Samsung's broken fixed positioning and turns on the Android browser's working fixed positioning. I am not looking for a JavaScript solution that adds broken fixed positioning to a browser with no support whatsoever; Samsung's fixed positioning does that already, it just looks stupid.

    Read the article

  • NSObject release destroys local copy of object's data

    - by Spider-Paddy
    I know this is something stupid on my part, but I don't get what's happening. I create an object that fetches data and puts it into an array in a specific format. Since it fetches asynchronously (it has to download and parse data), I put a delegate method into the object that needs the data, so that the data-fetching object copies its formatted array into an array in the calling object. The problem is that when the data-fetching object is released, the copy it created in the caller is erased. The code is:

    In the .h file:

        @property (nonatomic, retain) NSArray *imagesDataSource;

    In the .m file:

        // Fetch item details
        ImagesParser *imagesParserObject = [[ImagesParser alloc] init:self];
        [imagesParserObject getArticleImagesOfArticleId:(NSInteger)currentArticleId];
        [imagesParserObject release]; // <-- problematic release

        // Called by parser when images parsing is finished
        - (void)imagesDataTransferComplete:(ImagesParser *)imagesParserObject
        {
            // copy array to local variable
            self.imagesDataSource = [imagesParserObject.returnedArray copy];

            // If there are more pics, they must be assembled in an array
            // for possible UIImageView animation.
            NSInteger picCount = [imagesDataSource count];
            if (picCount > 1) // 1 image is assumed to be the pic already displayed
            {
                // Build image array
                NSMutableArray *tempPicArray = [[NSMutableArray alloc] init]; // temp space while building
                for (int i = 0; i < picCount; i++)
                {
                    // Get Nr from the only article in detailDataSource and
                    // pic name (Small) from each item in imagesDataSource
                    NSString *picAddress = [NSString stringWithFormat:
                        @"http://some.url.com/shopdata/image/article/%@/%@",
                        [[detailDataSource objectAtIndex:0] objectForKey:@"Nr"],
                        [[imagesDataSource objectAtIndex:i] objectForKey:@"Small"]];
                    NSURL *picURL = [NSURL URLWithString:picAddress];
                    NSData *picData = [NSData dataWithContentsOfURL:picURL];
                    [tempPicArray addObject:[UIImage imageWithData:picData]];
                }
                imagesArray = [tempPicArray copy]; // copy makes an immutable copy of the array
                [tempPicArray release];
                currentPicIndex = 0; // assume the first pic is the pic already being shown
            }
            else
                imagesArray = nil; // no need for a needless pic array

            // Remove "please wait" message
            [pleaseWaitViewControllerObject.view removeFromSuperview];
        }

    I put in tons of NSLog lines to keep track of what was going on, and self.imagesDataSource is populated with the returned array, but when the parser object is released self.imagesDataSource becomes empty. I thought self.imagesDataSource = [imagesParserObject.returnedArray copy]; was supposed to make an independent object, as if it had been alloc/init'ed, so that self.imagesDataSource is not just a pointer to the parser's array but its own array. So why does the release of the parser object clear the copy of the array? (I checked and double-checked that it's not something overwriting self.imagesDataSource; commenting out [imagesParserObject release] consistently fixes the problem.)

    Also, I have exactly the same problem with self.detailDataSource, which is declared and populated in exactly the same way as self.imagesDataSource.

    I thought that once I call the parser I could release it, because the caller no longer needs to refer to it; all further activity is carried out by the parser object through its delegate method. What am I doing wrong?

    Read the article

  • How to bind a WPF CustomControl to a ListBox

    - by VivyG
    I've just started to learn WPF this week, so please accept my apologies if I am being stupid or missing fundamental points! I am trying to build a very simple contact browser. I have a collection of Contact objects displayed in a ListBox control which shows the FullName of the Contact, and to the right I have a custom control called BasicContactCard.

    This is the XAML for the ContactWindow that displays the ListBox:

        <DockPanel Width="auto" Height="auto" Margin="8 8 8 8">
            <Border Height="56" HorizontalAlignment="Stretch" VerticalAlignment="Top"
                    BorderThickness="1" CornerRadius="8" DockPanel.Dock="Top" Background="Beige">
                <TextBox Height="32" Margin="23,5,135,5" Text="Search for contact here"
                         FontStyle="Italic" Foreground="#FFAD9595" FontSize="14"
                         BorderBrush="LightGray"/>
            </Border>
            <ListBox x:Name="contactList" DockPanel.Dock="Left" Width="192" Height="auto"
                     Margin="5 4 0 8" />
            <Grid>
                <Grid.RowDefinitions>
                    <RowDefinition Height="1*" />
                    <RowDefinition Height="0.125*" />
                </Grid.RowDefinitions>
                <local:BasicContactCard Margin="8 8 8 8" />
                <Button Grid.Row="1" x:Name="exit" Content="Exit" HorizontalAlignment="Right"
                        Width="50" Height="25" Click="exit_Click" />
            </Grid>
        </DockPanel>

    and this is the XAML for the custom control:

        <DockPanel Width="auto" Height="auto" Margin="8,8,8,8">
            <Grid Width="auto" Height="auto" DockPanel.Dock="Top">
                <Grid.RowDefinitions>
                    <RowDefinition Height="1*" />
                    <RowDefinition Height="1*" />
                    <RowDefinition Height="1*" />
                    <RowDefinition Height="1*" />
                </Grid.RowDefinitions>
                <TextBlock x:Name="companyField" Grid.Row="0" Width="auto"
                           HorizontalAlignment="Stretch" VerticalAlignment="Stretch"
                           Margin="8,8,8,8" Text="Company"/>
                <TextBlock x:Name="contactField" Grid.Row="1" Width="auto"
                           HorizontalAlignment="Stretch" VerticalAlignment="Stretch"
                           Margin="8,8,8,8" Text="Contact"/>
                <TextBlock x:Name="phoneField" Grid.Row="2" Width="auto"
                           HorizontalAlignment="Stretch" VerticalAlignment="Stretch"
                           Margin="8,8,8,8" Text="Phone"/>
                <TextBlock x:Name="emailField" Grid.Row="3" Width="auto"
                           HorizontalAlignment="Stretch" VerticalAlignment="Stretch"
                           Margin="8,8,8,8" Text="email"/>
            </Grid>
        </DockPanel>

    The problem I have is: how do I bind the individual elements of the custom control to the object behind the SelectedItem in the ListBox?
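
    One low-ceremony way to wire this up (a sketch; it assumes the Contact class exposes Company, FullName, Phone and Email properties, which are guesses, and that the ListBox is fed via ItemsSource) is to point the card's DataContext at the ListBox selection and replace the named TextBlocks with bindings:

        <!-- In ContactWindow: show the name in the list, feed the selection to the card -->
        <ListBox x:Name="contactList" DockPanel.Dock="Left" Width="192"
                 DisplayMemberPath="FullName" />
        <local:BasicContactCard Margin="8 8 8 8"
                 DataContext="{Binding SelectedItem, ElementName=contactList}" />

        <!-- Inside BasicContactCard: bindings instead of x:Name + code-behind -->
        <TextBlock Grid.Row="0" Text="{Binding Company}" />
        <TextBlock Grid.Row="1" Text="{Binding FullName}" />
        <TextBlock Grid.Row="2" Text="{Binding Phone}" />
        <TextBlock Grid.Row="3" Text="{Binding Email}" />

    Because DataContext flows down the element tree, every binding inside the card resolves against whichever Contact is currently selected, and the card updates automatically as the selection changes.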

    Read the article

< Previous Page | 54 55 56 57 58 59 60 61 62 63 64  | Next Page >