Search Results

Search found 15061 results on 603 pages for 'modified date'.


  • How did my email end up in spam? Only this specific email is filtered; other emails are delivered normally

    - by mugetsu
    My website has users buy our products and when the purchase completes, it sends the user an email. However, this email always ends up in spam! When the user first registers, the site also sends an email, this email however is not filtered and goes into the normal inbox. I'm not quite sure why this is so, gmail vaguely tells me that " It's similar to messages that were detected by our spam filters." So I'm thinking that I need to reword the following email better. Can I get some tips? Or could something else be causing this? thanks! here's the unformatted email: Delivered-To: [email protected] Received: by 10.112.32.98 with SMTP id h2csp61953lbi; Tue, 20 Mar 2012 21:09:13 -0700 (PDT) Received: by 10.180.79.72 with SMTP id h8mr22836827wix.1.1332302953175; Tue, 20 Mar 2012 21:09:13 -0700 (PDT) Return-Path: <[email protected]> Received: from mail26.elasticemail.org (mail26.elasticemail.org. [178.32.180.26]) by mx.google.com with SMTP id 6si518487wiz.41.2012.03.20.21.09.12; Tue, 20 Mar 2012 21:09:12 -0700 (PDT) Received-SPF: pass (google.com: domain of [email protected] designates 178.32.180.26 as permitted sender) client-ip=178.32.180.26; Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 178.32.180.26 as permitted sender) [email protected]; dkim=pass [email protected] DKIM-Signature: v=1; a=rsa-sha1; bh=qjc8jxQuGy9pLN1YV9TR2PHQYKg=; c=relaxed/relaxed; d=website.com; s=api; h=DomainKey-Signature:MIME-Version:From:To:List-Unsubscribe:Subject:Date:Reply-To:Message-ID:Content-Type; b=Odt+nYhjntXPl7JPVHeJWjkStemt6so+FPVYY6oMKziMFzmW8YiLhN8WwSLY0faMcn/rirKsO2dOm/kvcHlqUJC7ldhaydE6bPekkBDa9kBovlGwPNm6xy9QWPP9I1fXDLDCwqqeAXv8kN0daXbh3pVyqWNUOk5cgQ35OgpQpKI= DomainKey-Signature: q=dns; a=rsa-sha1; c=simple; d=website.com; s=api; h=MIME-Version:X-Mailer:From:To:X-Priority:List-Unsubscribe:Subject:Date:Reply-To:Message-ID:Content-Type; b=F7NNZIEyEV+64uYD8pVpe91WLP19Tw2Whk4OKpkLeAfkmrNIA7AjP0XYU1JWTlEyibHQJjjbhR62I3MvVJBSGp75eWfOuwb2AqYWZ/jAlMWznnfQLVv7OlYJsErGxYP6GUNNcuJaqlTPFDanJwtaEvR+tqXZRB7xrUisMd8lq2I= MIME-Version: 1.0 X-Mailer: email.website.com From: "Website Contact" <[email protected]> To: [email protected] X-Priority: 3 (Normal) List-Unsubscribe: <http://email.website.com/tracking/unsubscribe?msgid=su6g-8kfd0s0g>, <mailto:[email protected]?subject=unsubscribe> Subject: Website Tickets: event Date: Wed, 21 Mar 2012 04:09:17 +0000 Reply-To: "Website Contact" <[email protected]> Message-ID: <[email protected]> Content-Type: multipart/alternative; boundary="----=_NextPart_000_3F77_7A0DF805.A8C886C0" ------=_NextPart_000_3F77_7A0DF805.A8C886C0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: base64 SGVsbG8hIAoKIEhlcmUgYXJlIHlvdXIgdGlja2V0KHMpIGZvciBDVEFTIGVDc1RBU3kgMjAxMjog CgpodHRwczovL2NhbXB1c2FtcC5jb20vP3RpY2tldHMvNy95aGloZ3Znd3Z3cWR3cXhtdnQKClNp bXBseSBicmluZyBpdCB3aXRoIHlvdSBvbiB5b3VyIHNtYXJ0cGhvbmUsIG9yIHByaW50IHRoZSB0 aWNrZXQgb3V0IHRvIGJlIHNjYW5uZWQgYXQgdGhlIGV2ZW50LiBFbmpveSwgYW5kIHdlIGFwcHJl Y2lhdGUgeW91ciBwdXJjaGFzZS4KClNpbmNlcmVseSwKVGhlIENhbXB1c0FtcCBUZWFt ------=_NextPart_000_3F77_7A0DF805.A8C886C0 Content-Type: text/html; charset="utf-8" Content-Transfer-Encoding: base64 SGVsbG8hIDxici8+PGJyLz4gSGVyZSBhcmUgeW91ciB0aWNrZXQocykgZm9yIENUQVMgZUNzVEFT eSAyMDEyOjxici8+PGEgaHJlZj0iaHR0cDovL2VtYWlsLmNhbXB1c2FtcC5jb20vdHJhY2tpbmcv Y2xpY2s/bXNnaWQ9c3U2Zy04a2ZkMHMwZyZ0YXJnZXQ9aHR0cHMlM2ElMmYlMmZjYW1wdXNhbXAu Y29tJTJmJTNmdGlja2V0cyUyZjclMmZ5aGloZ3Znd3Z3cWR3cXhtdnQiPiBodHRwczovL2NhbXB1 
c2FtcC5jb20vP3RpY2tldHMvNy95aGloZ3Znd3Z3cWR3cXhtdnQgIDwvYT4gPGJyLz48YnIvPlNp bXBseSBicmluZyBpdCB3aXRoIHlvdSBvbiB5b3VyIHNtYXJ0cGhvbmUsIG9yIHByaW50IHRoZSB0 aWNrZXQgb3V0IHRvIGJlIHNjYW5uZWQgYXQgdGhlIGV2ZW50LiBFbmpveSwgYW5kIHdlIGFwcHJl Y2lhdGUgeW91ciBwdXJjaGFzZS48YnIvPjxici8+U2luY2VyZWx5LDxici8+VGhlIENhbXB1c0Ft cCBUZWFtPGltZyBzcmM9Imh0dHA6Ly9lbWFpbC5jYW1wdXNhbXAuY29tL3RyYWNraW5nL29wZW4/ bXNnaWQ9c3U2Zy04a2ZkMHMwZyIgc3R5bGU9IndpZHRoOjFweDtoZWlnaHQ6MXB4IiBhbHQ9IiIg Lz4= ------=_NextPart_000_3F77_7A0DF805.A8C886C0--
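    If it would help to review the exact wording the filter is judging, the two base64 blocks above can be decoded locally. A minimal sketch (assumes a shell with the coreutils base64 tool, and that the text/plain blob has been copied into a file named body.b64, a name chosen here only for illustration):

        # Decode the base64-encoded text/plain part saved from the raw message above
        base64 -d body.b64 > body.txt    # on older macOS use: base64 -D
        cat body.txt

    Reading the decoded body, and the tracking links in the HTML part, with fresh eyes sometimes makes it easier to spot which element a filter might be reacting to.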

    Read the article

  • How to convert my backup.cmd into something I can run in Linux?

    - by blade19899
    Back in the day when i was using windows(and a noob at everything IT) i liked batch scripting so much that i wrote a lot of them and one i am pretty proud of that is my backup.cmd(see below). I am pretty basic with the linux bash sudo/apt-get/sl/ls/locate/updatedb/etc... I don't really know the full power of the terminal. If you see the code below can i get it to work under (Ubuntu)linux :) by rewriting some of the windows code with the linux equivalent (btw:this works under xp/vista/7 | dutch/english) @echo off title back it up :home cls echo ÉÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍ» echo º º echo º typ A/B for the options º echo º º echo ÌÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍ͹ echo º º echo º "A"=backup options º echo º º echo º "B"=HARDDISK Options º echo º º echo º º echo ÈÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍŒ set /p selection=Choose: Goto %selection% :A cls echo ÉÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍ» echo º º echo º typ 1 to start that backup º echo º º echo ÌÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍ͹ echo º º echo º "A"=backup options º echo º È1=Documents,Pictures,Music,Videos,Downloads º echo º º echo º "B"=HARDDISK Options º echo º º echo ÈÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍŒ set /p selection=Choose: Goto %selection% :B cls echo ÉÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍ» echo º º echo º typ HD to start the disk check º echo º º echo ÌÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍ͹ echo º º echo º "A"=backup options º echo º º echo º "B"=HARDDISK Options º echo º ÈHD=find and repair bad sectors º echo º º echo ÈÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍŒ set /p selection=Choose: Goto %selection% :1 cls if exist "%userprofile%\desktop" (set desk=desktop) else (set desk=Bureaublad) if exist "%userprofile%\documents" (set docs=documents) else (set docs=mijn documenten) if exist "%userprofile%\pictures" (set pics=pictures) else (echo cant find %userprofile%\pictures) if exist "%userprofile%\music" (set mus=music) else (echo cant find %userprofile%\music) if exist "%userprofile%\Videos" (set vids=videos) else (echo cant find %userprofile%\videos) if exist "%userprofile%\Downloads" (set down=downloads) else (echo cant find %userprofile%\Downloads) cls echo. examples (D:\) (D:\Backup) (D:\Backup\18-4-2011) echo. echo. if there is no "D:\backup" folder then the folder will be created echo. 
set drive= set /p drive=storage: echo start>>backup.log echo Name:%username%>>backup.log echo Date:%date%>>backup.log echo Time:%time%>>backup.log echo ========================================%docs%===========================================>>backup.log echo %docs% echo Source:"%userprofile%\%docs%" echo Destination:"%drive%\%username%\%docs%" echo %time%>>backup.log xcopy "%userprofile%\%docs%" "%drive%\%username%\%docs%" /E /I>>Backup.log echo 20%% cls echo ========================================"%pics%"=========================================>>backup.log echo "%pics%" echo Source:"%userprofile%\%pics%" echo Destination:"%drive%\%username%\%pics%" echo %time%>>backup.log xcopy "%userprofile%\%pics%" "%drive%\%username%\%pics%" /E /I>>Backup.log echo 40%% cls echo ========================================"%mus%"=========================================>>backup.log echo "%mus%" echo Source:"%userprofile%\%mus%" echo Destination:"%drive%\%username%\%mus%" echo %time%>>backup.log xcopy "%userprofile%\%mus%" "%drive%\%username%\%mus%" /E /I>>Backup.log echo 60%% cls echo ========================================"%vids%"========================================>>backup.log echo %vids% echo Source:"%userprofile%\%vids%" echo Destination:"%drive%\%username%\%vids%" echo %time%>>backup.log xcopy "%userprofile%\%vids%" "%drive%\%username%\%vids%" /E /I>>Backup.log echo 80%% cls echo ========================================"%down%"========================================>>backup.log echo "%down%" echo Source:"%userprofile%\%down%" echo Destination:"%drive%\%username%\%down%" echo %time%>>backup.log xcopy "%userprofile%\%down%" "%drive%\%username%\%down%" /E /I>>Backup.log echo end>>backup.log echo %username% %date% %time%>>backup.log echo 100%% cls echo backup Compleet copy "backup.log" "%drive%\%username%" del "backup.log" pushd "%drive%\%username%" echo close backup.log to continue with backup script "backup.log" echo press any key to retun to the main menu pause>nul goto :home :HD echo finds and repairs bad sectors echo typ in harddisk letter (C: D: E:) set HD= set /p HD=Hard Disk: chkdsk %HD% /F /R /X pause goto :home
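    For reference, here is a rough bash starting point for the backup portion of the script above. It is a minimal sketch, not a full port of the menus: it assumes the destination path is passed as the first argument, that the standard English folder names exist under $HOME, and that rsync is installed (sudo apt-get install rsync).

        #!/bin/bash
        # backup.sh - rough Linux equivalent of the backup section of backup.cmd
        # Usage: ./backup.sh /media/usbdrive   (the destination path is an assumption)
        set -u
        dest="${1:?usage: $0 /path/to/storage}/$USER"
        log="$dest/backup.log"
        mkdir -p "$dest"
        echo "start $(date)" >> "$log"
        for dir in Documents Pictures Music Videos Downloads; do
            if [ -d "$HOME/$dir" ]; then
                echo "==== $dir $(date +%T) ====" >> "$log"
                # rsync -a plays the role of xcopy /E /I and only copies changed files
                rsync -a "$HOME/$dir" "$dest/" >> "$log" 2>&1
            else
                echo "cannot find $HOME/$dir" >> "$log"
            fi
        done
        echo "end $USER $(date)" >> "$log"
        # Rough equivalent of the chkdsk /F /R option: run fsck on an UNMOUNTED partition,
        # e.g. sudo fsck -f /dev/sdXN  (replace sdXN with the real device)

    The menu system itself maps naturally onto a bash select loop or a case statement over read input, and the Dutch/English folder detection is unnecessary on Ubuntu, where the xdg-user-dir command returns the localized folder paths.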

    Read the article

  • Upgrades in 5 Easy Pieces

    - by Anne R.
    Even though there are a few select tasks that I have to do once or twice a year, I can’t remember how to do them! Or where to find the bits and pieces to complete the task. So I love it when someone consolidates everything under one spot. That’s what the CRM On Demand team has done with the upgrade information. Specifically, they have:
    - Provided a “one-stop” area for managing upgrades at your company.
    - Broken down the upgrade process into 5 (yes, 5) steps.
    - Explained when and how to perform each step, with dates specific to your pod.
    - Included details about each step, visible by expanding the step.
    - Translated the steps into 11 languages.
    - Added a list of release-specific resources with links from the page.
    Now, just head for the Training and Support portal, click the Release Info tab, and walk through the “5 Essential Steps to a Successful Upgrade.” Before you continue, though, select your language from the drop-down list on the Release Info page. CRM On Demand now has the upgrade steps translated into 11 languages. On the Step page, you can expand each section in sequence and follow the more detailed instructions that appear. This will ensure that you’ve covered all your bases for each upgrade. Here’s a shortened version of the information that you’ll find:
    1. Verify your Primary Contact Information. Have you checked your primary contact information to make sure you’re being notified of all upgrade information? Or do you want more users to receive upgrade announcements? This section provides you with the navigation path to do that in CRM On Demand.
    2. Review your Key Upgrade Dates. If you expand this step, a nice table appears with your critical dates for the various milestones. IMPORTANT: When your CRM On Demand pod has been officially added to the upgrade schedule, closer to the release date itself, this table will display your specific timetable.
    3. Migrate your Customizations from the Staging Environment before the Snapshot Date. Oracle refreshes the Staging data with a copy of your Production data made on the Production Snapshot Date. So this section lists considerations relevant to this step. It also reminds you of the 2-week period when you should not be making any changes in your Staging environment.
    4. Conduct your Upgrade Validation on the Staging Environment. When the Customer Validation Testing period begins, you need to log in to your Staging Environment to validate that your key business processes and customizations continue to behave as expected. If your company utilizes Web Services, Web Links, Web Applets or Workflow, focus on testing these first. You generally have about two weeks for testing. If you run into problems during this time, follow the instructions shown in this section for logging a service request. It describes exactly how to fill out the fields in the SR for the fastest resolution.
    5. Conduct "White Glove" Testing in your Upgraded Production Environment. Before users start using the upgrade, you should access a few tabs and reports. Doing this actually warms up the cache so that frequently used pages and reports will come up at normal speed on Monday morning, when users log in to the upgraded system. Resources listed under this step help you in further preparing for the upgrade. Now there’s also a new Documentation section on the right with links to these release-specific resources.
“Very nice,” I commented when discussing these improvements with the “responsible party.” She confirmed that, yes, they tried to consolidate the upgrade information, translate it for better communication, simplify it into 5 easy pieces, and drive the admins responsible for handling upgrades to this one site instead of sending out elaborate emails. Yes, I just love it when someone practically reaches out and holds my hand through a process. Next best thing to a wizard!

    Read the article

  • Why I Love the Social Management Platform I Use

    - by Mike Stiles
    Not long ago, I asked the product heads for the various components of the Oracle Social Cloud’s SRM to say what they thought was coolest about their component. And while they did a fine job, it was recently pointed out to me that no one around here uses the platform in a real-world setting more than I do, as I not only blog and podcast my brains out, I also run Oracle Social’s own social properties. Of course I’m pro-Oracle Social’s product. Duh. But if you can get around immediately writing this off as a puff piece, there are real reasons beyond my employment that the Oracle SRM works for me as a community manager. If it didn’t, I could have simply written about something else, like how people love smartphones or something genius like that. Post Grid I like seeing what I want to see. I’m difficult that way. Post grid lets me see all posts for all channels, with custom columns showing me how posts are doing. I can filter the grid by social channel, published, scheduled, draft, suggested, etc. Then there’s a pullout side panel that shows me post details, including engagement analytics. From the pullout, I can preview the post, do a quick edit, a full edit, or (my favorite) copy a post so I can edit it and schedule it for other times so I don’t have to repeat from scratch. I’m not lazy, just time conscious. The Post Creation Environment Given our post volume, I need this to be as easy as it can be. I can highlight which streams I want the post to go out on, edit for the individual streams, maintain a media library that’s easy to upload to and attach from, tag posts, insert links that auto-shorten to an orac.le shortlink, schedule with a nice calendar visual, geo-target, drop photos inline into Twitter, and review each post. Watching My Channels The Engage component of the Oracle SRM brings in and drops into a grid the activity that’s happening on all my channels. I keep this open round-the-clock. Again, I get to see only what I want; social network, stream, unread messages, engagement by how I labeled them, and date range. I can bring up a post with a click, reply, label it, retweet it, assign it, delete it, archive it, etc. So don’t bother trying to be a troll on my channels. Analytics Social publishing and engaging 24/7 would be pretty unrewarding if I couldn’t see how our audience was responding. Frankly, I get more analytics than I know what to do with (I’m a content creator, not a data analyst). But I do know what numbers I care about, and they’re available by channel, date range, and campaigns. I’m seeing fan count, sources and demographics. I’m seeing engagement, what kinds of posts are getting engagement, and top engagers. I’m seeing my reach, both organic and paid. I’m seeing how individual posts performed in terms of engagement and virality, and posting time/date insight. Have I covered all the value propositions? I’ve covered pathetically few of them. It would be impossible in blog length to give shout-outs to the vast number of features and functionalities. From organizing teams and managing permissions with Workflow to the powerful ability to monitor topics (and your competition) across the web in Listen, it’s a major, and increasingly necessary, weapon in your social marketing arsenal. The life of a Community Manager is not for everybody. So if the Oracle SRM can actually make a Community Manager’s life easier, what’s not to love? I invite you to take a look at and participate in our Oracle Social Cloud social channels! 
Facebook Twitter YouTube Google Plus LinkedIn Daily Podcast on iHeartRadio @mikestiles @oraclesocial Photo: freeimages.com

    Read the article

  • Single page not appearing in Google Search

    - by Dan
    Description
    I have a static franchise website which has various sub pages, each dedicated to an individual franchisee. For each franchisee page, the only thing remotely similar between them is the page title; the titles follow this structure: <title> Welcome to THE_COMPANY - PRODUCT_DESCRIPTION Services, THE_LOCATION </title> THE_COMPANY and PRODUCT_DESCRIPTION are the same across all franchisees, however THE_LOCATION changes depending on where they are located in the UK. Each franchisee page has the following <meta /> tags: <meta name="DC.creator" content="user"/> <meta name="DC.format" content="text/html"/> <meta name="DC.language" content="en"/> <meta name="DC.date.modified" content="2014-01-23T11:22:31+00:00"/> <meta name="DC.date.created" content="2014-01-23T11:22:09+00:00"/> <meta name="DC.type" content="Page"/> <meta name="DC.distribution" content="Global"/> <meta name="robots" content="ALL"/> <meta name="distribution" content="Global"/> The main content on each franchisee page is completely different.
    The Problem
    There is one particular franchisee page, located in Area A, which will not appear in Google Search results at all. However every single other franchisee page (if you Google Search for "THE_COMPANY, THE_LOCATION") is number 1. And if I do the same search on Bing, Yahoo or DuckDuckGo, the Area A franchisee is the first result on all of them. Has Google for some reason blacklisted one page on the site?
    What I Have Tried
    - Ensuring the page is referenced in my sitemap.xml file
    - 'Fetching as Google Bot' the link www.the_company.co.uk/areaa; when that came back as OK I would submit to index
    - Resubmitting the sitemap.xml file in Webmaster Tools
    - Linking to the Area A page from another page's content (for this I also waited about 3 weeks before checking again, to give Google time to re-index)
    - Making a change to the page content and waiting another 2 / 3 weeks
    - Removing the page completely and recreating it with an alternative URL
    The closest thing I have found to this issue is this StackOverflow question, but this particular franchisee has existed for almost a year; it used to appear on Google searches but no longer does. I'm guessing the Panda update wasn't too happy with something on the page, but it hasn't affected anything else on the site and I am at a loss for things to try. I would greatly appreciate any information or thoughts as to what could have caused this. Thanks.
    Update
    In line with Daniel Fukuda's answer below, I have followed some of his steps but everything seems to check out alright:
    HTTP Headers: HTTP/1.1 200 OK => Date => Tue, 25 Feb 2014 16:31:29 GMT Server => Zope/(2.12.16, python 2.6.6, linux2) ZServer/1.1 Content-Length => 40078 Expires => Sat, 01 Jan 2000 00:00:00 GMT Content-Type => text/html;charset=utf-8 Content-Language => en Vary => Accept-Encoding Connection => close
    Robots <meta /> tag: <meta name="robots" content="ALL"/> I have updated this <meta /> tag to read content="INDEX" instead now.
    robots.txt: User-agent: * Disallow: User-Agent: Googlebot Disallow: /*sendto_form$ Disallow: /*folder_factories$
    Using site:THE_COMPANY.co.uk: Searching for 'AREA A site:THE_COMPANY.co.uk' does not return the page, but regardless of that, searching just for site:THE_COMPANY.co.uk will not necessarily return every indexed page, or so I understand...
    Update
    It appears Google likes to drop pages from the index every now and then; despite my steps above, I left the site alone and the page appeared back in the SERPs by itself.
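    One extra check that complements the header dump above: confirm the server is not sending an X-Robots-Tag response header for that single URL, since a header-level noindex is not visible in the HTML source. A hedged sketch (www.the_company.co.uk/areaa is the placeholder address used in the question):

        # Fetch only the response headers for the problem page
        curl -sI "http://www.the_company.co.uk/areaa" | grep -iE "^http|x-robots-tag"
        # An "X-Robots-Tag: noindex" line here would keep the page out of the index
        # regardless of the robots meta tag in the page itself.

    The headers pasted above do not show such a tag, so this is mostly a way to rule that cause out against the live server.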

    Read the article

  • Microsoft and jQuery

    - by Rick Strahl
    The jQuery JavaScript library has been steadily getting more popular and, with recent developments from Microsoft, jQuery is also getting ever more exposure on the ASP.NET platform, including now directly from Microsoft. jQuery is a lightweight, open source DOM manipulation library for JavaScript that has changed how many developers think about JavaScript. You can download it and find more information on jQuery at www.jquery.com. For me jQuery has had a huge impact on how I develop Web applications and was probably the main reason I went from dreading JavaScript development to actually looking forward to implementing client side JavaScript functionality. It has also had a profound impact on my JavaScript skill level, simply from seeing how the library accomplishes things (and often reviewing the terse but excellent source code). jQuery made an uncomfortable development platform (JavaScript + DOM) a joy to work on. Although jQuery is by no means the only JavaScript library out there, its ease of use, small size, huge community of plug-ins and pure usefulness have made it easily the most popular JavaScript library available today. As a long time jQuery user, I’ve been excited to see the developments from Microsoft that are bringing jQuery to more ASP.NET developers and providing more integration with jQuery for ASP.NET’s core features rather than relying on the ASP.NET AJAX library.
Microsoft and jQuery – making Friends
jQuery is an open source project, but in the last couple of years Microsoft has really thrown its weight behind this open source library as a supported component on the Microsoft platform. When I say supported I literally mean supported: Microsoft now offers actual tech support for jQuery as part of their Product Support Services (PSS), as jQuery integration has become part of several of the ASP.NET toolkits and ships in several of the default Web project templates in Visual Studio 2010. The ASP.NET MVC 3 framework (still in Beta) also uses jQuery for a variety of client side support features, including client side validation, and we can look forward to more integration of client side functionality via jQuery in both MVC and WebForms in the future. In other words, jQuery is becoming an optional but included component of the ASP.NET platform. PSS support means that support staff will answer jQuery related support questions as part of any support incidents related to ASP.NET, which provides some peace of mind to corporate development shops that require end to end support from Microsoft. In addition to including jQuery and supporting it, Microsoft has also been getting involved in providing development resources for extending jQuery’s functionality via plug-ins. Microsoft’s last version of the Microsoft Ajax Library – which is the successor to the native ASP.NET AJAX Library – included some really cool functionality for client templates, databinding and localization. As it turns out, Microsoft has rebuilt most of that functionality using jQuery as the base API and provided jQuery plug-ins of these components. Very recently these three plug-ins were submitted to and approved for inclusion in the official jQuery plug-in repository, and have been taken over by the jQuery team for further improvements and maintenance. 
Even more surprising: The jQuery-templates component has actually been approved for inclusion in the next major update of the jQuery core in jQuery V1.5, which means it will become a native feature that doesn’t require additional script files to be loaded. Imagine this – an open source contribution from Microsoft that has been accepted into a major open source project for a core feature improvement. Microsoft has come a long way indeed! What the Microsoft Involvement with jQuery means to you For Microsoft jQuery support is a strategic decision that affects their direction in client side development, but nothing stopped you from using jQuery in your applications prior to Microsoft’s official backing and in fact a large chunk of developers did so readily prior to Microsoft’s announcement. Official support from Microsoft brings a few benefits to developers however. jQuery support in Visual Studio 2010 means built-in support for jQuery IntelliSense, automatically added jQuery scripts in many projects types and a common base for client side functionality that actually uses what most developers are already using. If you have already been using jQuery and were worried about straying from the Microsoft line and their internal Microsoft Ajax Library – worry no more. With official support and the change in direction towards jQuery Microsoft is now following along what most in the ASP.NET community had already been doing by using jQuery, which is likely the reason for Microsoft’s shift in direction in the first place. ASP.NET AJAX and the Microsoft AJAX Library weren’t bad technology – there was tons of useful functionality buried in these libraries. However, these libraries never got off the ground, mainly because early incarnations were squarely aimed at control/component developers rather than application developers. For all the functionality that these controls provided for control developers they lacked in useful and easily usable application developer functionality that was easily accessible in day to day client side development. The result was that even though Microsoft shipped support for these tools in the box (in .NET 3.5 and 4.0), other than for the internal support in ASP.NET for things like the UpdatePanel and the ASP.NET AJAX Control Toolkit as well as some third party vendors, the Microsoft client libraries were largely ignored by the developer community opening the door for other client side solutions. Microsoft seems to be acknowledging developer choice in this case: Many more developers were going down the jQuery path rather than using the Microsoft built libraries and there seems to be little sense in continuing development of a technology that largely goes unused by the majority of developers. Kudos for Microsoft for recognizing this and gracefully changing directions. Note that even though there will be no further development in the Microsoft client libraries they will continue to be supported so if you’re using them in your applications there’s no reason to start running for the exit in a panic and start re-writing everything with jQuery. Although that might be a reasonable choice in some cases, jQuery and the Microsoft libraries work well side by side so that you can leave existing solutions untouched even as you enhance them with jQuery. 
The Microsoft jQuery Plug-ins – Solid Core Features One of the most interesting developments in Microsoft’s embracing of jQuery is that Microsoft has started contributing to jQuery via standard mechanism set for jQuery developers: By submitting plug-ins. Microsoft took some of the nicest new features of the unpublished Microsoft Ajax Client Library and re-wrote these components for jQuery and then submitted them as plug-ins to the jQuery plug-in repository. Accepted plug-ins get taken over by the jQuery team and that’s exactly what happened with the three plug-ins submitted by Microsoft with the templating plug-in even getting slated to be published as part of the jQuery core in the next major release (1.5). The following plug-ins are provided by Microsoft: jQuery Templates – a client side template rendering engine jQuery Data Link – a client side databinder that can synchronize changes without code jQuery Globalization – provides formatting and conversion features for dates and numbers The first two are ports of functionality that was slated for the Microsoft Ajax Library while functionality for the globalization library provides functionality that was already found in the original ASP.NET AJAX library. To me all three plug-ins address a pressing need in client side applications and provide functionality I’ve previously used in other incarnations, but with more complete implementations. Let’s take a close look at these plug-ins. jQuery Templates http://api.jquery.com/category/plugins/templates/ Client side templating is a key component for building rich JavaScript applications in the browser. Templating on the client lets you avoid from manually creating markup by creating DOM nodes and injecting them individually into the document via code. Rather you can create markup templates – similar to the way you create classic ASP server markup – and merge data into these templates to render HTML which you can then inject into the document or replace existing content with. Output from templates are rendered as a jQuery matched set and can then be easily inserted into the document as needed. Templating is key to minimize client side code and reduce repeated code for rendering logic. Instead a single template can be used in many places for updating and adding content to existing pages. Further if you build pure AJAX interfaces that rely entirely on client rendering of the initial page content, templates allow you to a use a single markup template to handle all rendering of each specific HTML section/element. I’ve used a number of different client rendering template engines with jQuery in the past including jTemplates (a PHP style templating engine) and a modified version of John Resig’s MicroTemplating engine which I built into my own set of libraries because it’s such a commonly used feature in my client side applications. jQuery templates adds a much richer templating model that allows for sub-templates and access to the data items. Like John Resig’s original Micro Template engine, the core basics of the templating engine create JavaScript code which means that templates can include JavaScript code. To give you a basic idea of how templates work imagine I have an application that downloads a set of stock quotes based on a symbol list then displays them in the document. 
To do this you can create an ‘item’ template that describes how each of the quotes is renderd as a template inside of the document: <script id="stockTemplate" type="text/x-jquery-tmpl"> <div id="divStockQuote" class="errordisplay" style="width: 500px;"> <div class="label">Company:</div><div><b>${Company}(${Symbol})</b></div> <div class="label">Last Price:</div><div>${LastPrice}</div> <div class="label">Net Change:</div><div> {{if NetChange > 0}} <b style="color:green" >${NetChange}</b> {{else}} <b style="color:red" >${NetChange}</b> {{/if}} </div> <div class="label">Last Update:</div><div>${LastQuoteTimeString}</div> </div> </script> The ‘template’ is little more than HTML with some markup expressions inside of it that define the template language. Notice the embedded ${} expressions which reference data from the quote objects returned from an AJAX call on the server. You can embed any JavaScript or value expression in these template expressions. There are also a number of structural commands like {{if}} and {{each}} that provide for rudimentary logic inside of your templates as well as commands ({{tmpl}} and {{wrap}}) for nesting templates. You can find more about the full set of markup expressions available in the documentation. To load up this data you can use code like the following: <script type="text/javascript"> //var Proxy = new ServiceProxy("../PageMethods/PageMethodsService.asmx/"); $(document).ready(function () { $("#btnGetQuotes").click(GetQuotes); }); function GetQuotes() { var symbols = $("#txtSymbols").val().split(","); $.ajax({ url: "../PageMethods/PageMethodsService.asmx/GetStockQuotes", data: JSON.stringify({ symbols: symbols }), // parameter map type: "POST", // data has to be POSTed contentType: "application/json", timeout: 10000, dataType: "json", success: function (result) { var quotes = result.d; var jEl = $("#stockTemplate").tmpl(quotes); $("#quoteDisplay").empty().append(jEl); }, error: function (xhr, status) { alert(status + "\r\n" + xhr.responseText); } }); }; </script> In this case an ASMX AJAX service is called to retrieve the stock quotes. The service returns an array of quote objects. The result is returned as an object with the .d property (in Microsoft service style) that returns the actual array of quotes. The template is applied with: var jEl = $("#stockTemplate").tmpl(quotes); which selects the template script tag and uses the .tmpl() function to apply the data to it. The result is a jQuery matched set of elements that can then be appended to the quote display element in the page. The template is merged against an array in this example. When the result is an array the template is automatically applied to each each array item. If you pass a single data item – like say a stock quote – the template works exactly the same way but is applied only once. Templates also have access to a $data item which provides the current data item and information about the tempalte that is currently executing. This makes it possible to keep context within the context of the template itself and also to pass context from a parent template to a child template which is very powerful. Templates can be evaluated by using the template selector and calling the .tmpl() function on the jQuery matched set as shown above or you can use the static $.tmpl() function to provide a template as a string. This allows you to dynamically create templates in code or – more likely – to load templates from the server via AJAX calls. 
In short, there are options. The above shows off some of the basics, but there’s much more functionality available in the template engine. Check the documentation link for more information and links to additional examples. The plug-in download also comes with a number of examples that demonstrate functionality. jQuery templates will become a native component in jQuery Core 1.5, so it’s definitely worthwhile checking out the engine today and getting familiar with this interface. As much as I’m stoked about templating becoming part of the jQuery core, because it’s such an integral part of many applications, there are also a couple of shortcomings in the current incarnation:
Lack of Error Handling
Currently if you embed an expression that is invalid it’s simply not rendered. There’s no error rendered into the template, nor do the various template functions throw errors, which leaves finding bugs as a runtime exercise. I would like some mechanism – optional if possible – to be able to get error info on what is failing in a template when it’s rendered.
No String Output
Templates are always rendered into a jQuery matched set and there’s no way that I can see to directly render to a string. String output can be useful for debugging as well as opening up templating for creating non-HTML string output.
Limited JavaScript Access
Unlike John Resig’s original MicroTemplating Engine, which was entirely based on JavaScript code generation, these templates are limited to a few structured commands that can ‘execute’. There’s no code execution inside of script code, which means you’re limited to calling expressions available in global objects or the data item passed in. This may or may not be a big deal depending on the complexity of your template logic.
Error handling has been discussed quite a bit and it’s likely there will be some solution to that particular issue by the time jQuery templates ship. The others are relatively minor issues but something to think about anyway.
jQuery Data Link
http://api.jquery.com/category/plugins/data-link/
jQuery Data Link provides the ability to do two-way data binding between input controls and an underlying object’s properties. The typical scenario is linking a textbox to a property of an object, having the object updated when the text in the textbox is changed, and having the textbox change when the value in the object or the entire object changes. The plug-in also supports converter functions that can be applied to provide the conversion logic from string to some other value, typically necessary for mapping things like textbox string input to, say, a number property, and potentially applying additional formatting and calculations. In theory this sounds great, however in reality this plug-in has some serious usability issues. Using the plug-in you can do things like the following to bind data:
person = { firstName: "rick", lastName: "strahl"};
$(document).ready( function() {
    // provide for two-way linking of inputs
    $("form").link(person);
    // bind to non-input elements explicitly
    $("#objFirst").link(person, {
        firstName: {
            name: "objFirst",
            convertBack: function (value, source, target) { $(target).text(value); }
        }
    });
    $("#objLast").link(person, {
        lastName: {
            name: "objLast",
            convertBack: function (value, source, target) { $(target).text(value); }
        }
    });
});
This code hooks up two-way linking between a couple of textboxes on the page and the person object. 
The first line in the .ready() handler provides mapping of object to form field with the same field names as properties on the object. Note that .link() does NOT bind items into the textboxes when you call .link() – changes are mapped only when values change and you move out of the field. Strike one. The two following commands allow manual binding of values to specific DOM elements which is effectively a one-way bind. You specify the object and a then an explicit mapping where name is an ID in the document. The converter is required to explicitly assign the value to the element. Strike two. You can also detect changes to the underlying object and cause updates to the input elements bound. Unfortunately the syntax to do this is not very natural as you have to rely on the jQuery data object. To update an object’s properties and get change notification looks like this: function updateFirstName() { $(person).data("firstName", person.firstName + " (code updated)"); } This works fine in causing any linked fields to be updated. In the bindings above both the firstName input field and objFirst DOM element gets updated. But the syntax requires you to use a jQuery .data() call for each property change to ensure that the changes are tracked properly. Really? Sure you’re binding through multiple layers of abstraction now but how is that better than just manually assigning values? The code savings (if any) are going to be minimal. As much as I would like to have a WPF/Silverlight/Observable-like binding mechanism in client script, this plug-in doesn’t help much towards that goal in its current incarnation. While you can bind values, the ‘binder’ is too limited to be really useful. If initial values can’t be assigned from the mappings you’re going to end up duplicating work loading the data using some other mechanism. There’s no easy way to re-bind data with a different object altogether since updates trigger only through the .data members. Finally, any non-input elements have to be bound via code that’s fairly verbose and frankly may be more voluminous than what you might write by hand for manual binding and unbinding. Two way binding can be very useful but it has to be easy and most importantly natural. If it’s more work to hook up a binding than writing a couple of lines to do binding/unbinding this sort of thing helps very little in most scenarios. In talking to some of the developers the feature set for Data Link is not complete and they are still soliciting input for features and functionality. If you have ideas on how you want this feature to be more useful get involved and post your recommendations. As it stands, it looks to me like this component needs a lot of love to become useful. For this component to really provide value, bindings need to be able to be refreshed easily and work at the object level, not just the property level. It seems to me we would be much better served by a model binder object that can perform these binding/unbinding tasks in bulk rather than a tool where each link has to be mapped first. I also find the choice of creating a jQuery plug-in questionable – it seems a standalone object – albeit one that relies on the jQuery library – would provide a more intuitive interface than the current forcing of options onto a plug-in style interface. Out of the three Microsoft created components this is by far the least useful and least polished implementation at this point. 
jQuery Globalization http://github.com/jquery/jquery-global Globalization in JavaScript applications often gets short shrift and part of the reason for this is that natively in JavaScript there’s little support for formatting and parsing of numbers and dates. There are a number of JavaScript libraries out there that provide some support for globalization, but most are limited to a particular portion of globalization. As .NET developers we’re fairly spoiled by the richness of APIs provided in the framework and when dealing with client development one really notices the lack of these features. While you may not necessarily need to localize your application the globalization plug-in also helps with some basic tasks for non-localized applications: Dealing with formatting and parsing of dates and time values. Dates in particular are problematic in JavaScript as there are no formatters whatsoever except the .toString() method which outputs a verbose and next to useless long string. With the globalization plug-in you get a good chunk of the formatting and parsing functionality that the .NET framework provides on the server. You can write code like the following for example to format numbers and dates: var date = new Date(); var output = $.format(date, "MMM. dd, yy") + "\r\n" + $.format(date, "d") + "\r\n" + // 10/25/2010 $.format(1222.32213, "N2") + "\r\n" + $.format(1222.33, "c") + "\r\n"; alert(output); This becomes even more useful if you combine it with templates which can also include any JavaScript expressions. Assuming the globalization plug-in is loaded you can create template expressions that use the $.format function. Here’s the template I used earlier for the stock quote again with a couple of formats applied: <script id="stockTemplate" type="text/x-jquery-tmpl"> <div id="divStockQuote" class="errordisplay" style="width: 500px;"> <div class="label">Company:</div><div><b>${Company}(${Symbol})</b></div> <div class="label">Last Price:</div> <div>${$.format(LastPrice,"N2")}</div> <div class="label">Net Change:</div><div> {{if NetChange > 0}} <b style="color:green" >${NetChange}</b> {{else}} <b style="color:red" >${NetChange}</b> {{/if}} </div> <div class="label">Last Update:</div> <div>${$.format(LastQuoteTime,"MMM dd, yyyy")}</div> </div> </script> There are also parsing methods that can parse dates and numbers from strings into numbers easily: alert($.parseDate("25.10.2010")); alert($.parseInt("12.222")); // de-DE uses . for thousands separators As you can see culture specific options are taken into account when parsing. The globalization plugin provides rich support for a variety of locales: Get a list of all available cultures Query cultures for culture items (like currency symbol, separators etc.) Localized string names for all calendar related items (days of week, months) Generated off of .NET’s supported locales In short you get much of the same functionality that you already might be using in .NET on the server side. The plugin includes a huge number of locales and an Globalization.all.min.js file that contains the text defaults for each of these locales as well as small locale specific script files that define each of the locale specific settings. It’s highly recommended that you NOT use the huge globalization file that includes all locales, but rather add script references to only those languages you explicitly care about. Overall this plug-in is a welcome helper. 
Even if you use it with a single locale (like en-US) and do no other localization, you’ll gain solid support for number and date formatting which is a vital feature of many applications. Changes for Microsoft It’s good to see Microsoft coming out of its shell and away from the ‘not-built-here’ mentality that has been so pervasive in the past. It’s especially good to see it applied to jQuery – a technology that has stood in drastic contrast to Microsoft’s own internal efforts in terms of design, usage model and… popularity. It’s great to see that Microsoft is paying attention to what customers prefer to use and supporting the customer sentiment – even if it meant drastically changing course of policy and moving into a more open and sharing environment in the process. The additional jQuery support that has been introduced in the last two years certainly has made lives easier for many developers on the ASP.NET platform. It’s also nice to see Microsoft submitting proposals through the standard jQuery process of plug-ins and getting accepted for various very useful projects. Certainly the jQuery Templates plug-in is going to be very useful to many especially since it will be baked into the jQuery core in jQuery 1.5. I hope we see more of this type of involvement from Microsoft in the future. Kudos!© Rick Strahl, West Wind Technologies, 2005-2010Posted in jQuery  ASP.NET  

    Read the article

  • The case of the phantom ADF developer (and other yarns)

    - by Chris Muir
    A few years of ADF experience means I see common mistakes made by different developers, some I regularly make myself.  This post is designed to assist beginners to Oracle JDeveloper Application Development Framework (ADF) avoid a common ADF pitfall, the case of the phantom ADF developer [add Scooby-Doo music here]. ADF Business Components - triggers, default table values and instead of views. Oracle's JDeveloper tutorials help with the A-B-Cs of ADF development, typically built on the nice 'n safe demo schema provided by with the Oracle database such as the HR demo schema. However it's not too long until ADF beginners, having built up some confidence from learning with the tutorials and vanilla demo schemas, start building ADF Business Components based upon their own existing database schema objects.  This is where unexpected problems can sneak in. The crime Developers may encounter a surprising error at runtime when editing a record they just created or updated and committed to the database, based on their own existing tables, namely the error: JBO-25014: Another user has changed the row with primary key oracle.jbo.Key[x] ...where X is the primary key value of the row at hand.  In a production environment with multiple users this error may be legit, one of the other users has updated the row since you queried it.  Yet in a development environment this error is just plain confusing.  If developers are isolated in their own database, creating and editing records they know other users can't possibly be working with, or all the other developers have gone home for the day, how is this error possible? There are no other users?  It must be the phantom ADF developer! [insert dramatic music here] The following picture is what you'll see in the Business Component Browser, and you'll receive a similar error message via an ADF Faces page: A false conclusion What can possibly cause this issue if it isn't our phantom ADF developer?  Doesn't ADF BC implement record locking, locking database records when the row is modified in the ADF middle-tier by a user?  How can our phantom ADF developer even take out a lock if this is the case?  Maybe ADF has a bug, maybe ADF isn't implementing record locking at all?  Shouldn't we see the error "JBO-26030: Failed to lock the record, another user holds the lock" as we attempt to modify the record, why do we see JBO-25014? : Let's verify that ADF is in fact issuing the correct SQL LOCK-FOR-UPDATE statement to the database. First we need to verify ADF's locking strategy.  It is determined by the Application Module's jbo.locking.mode property.  The default (as of JDev 11.1.1.4.0 if memory serves me correct) and recommended value is optimistic, and the other valid value is pessimistic. Next we need a mechanism to check that ADF is issuing the LOCK statements to the database.  We could ask DBAs to monitor locks with OEM, but optimally we'd rather not involve overworked DBAs in this process, so instead we can use the ADF runtime setting –Djbo.debugoutput=console.  At runtime this options turns on instrumentation within the ADF BC layer, which among a lot of extra detail displayed in the log window, will show the actual SQL statement issued to the database, including the LOCK statement we're looking to confirm. 
Setting our locking mode to pessimistic, opening the Business Components Browser of a JSF page allowing us to edit a record, say the CHARGEABLE field within a BOOKINGS record where BOOKING_NO = 1206, upon editing the record we see among others the following log entries:
[421] Built select: 'SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings'
[422] Executing LOCK...SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings WHERE BOOKING_NO=:1 FOR UPDATE NOWAIT
[423] Where binding param 1: 1206
As can be seen on line 422, a LOCK-FOR-UPDATE is indeed issued to the database. Later when we commit the record we see:
[441] OracleSQLBuilder: SAVEPOINT 'BO_SP'
[442] OracleSQLBuilder Executing, Lock 1 DML on: BOOKINGS (Update)
[443] UPDATE buf Bookings>#u SQLStmtBufLen: 210, actual=62
[444] UPDATE BOOKINGS Bookings SET CHARGEABLE=:1 WHERE BOOKING_NO=:2
[445] Update binding param 1: N
[446] Where binding param 2: 1206
[447] BookingsView1 notify COMMIT ...
[448] _LOCAL_VIEW_USAGE_model_Bookings_ResourceTypesView1 notify COMMIT ...
[449] EntityCache close prepared statement
...and as a result the changes are saved to the database, and the lock is released. Let's see what happens when we use the optimistic locking mode, this time to change the same BOOKINGS record CHARGEABLE column again. As soon as we edit the record we see little activity in the logs, nothing to indicate any SQL statement, let alone a LOCK, has been taken out on the row. However when we save our record by issuing a commit, the following is recorded in the logs:
[509] OracleSQLBuilder: SAVEPOINT 'BO_SP'
[510] OracleSQLBuilder Executing doEntitySelect on: BOOKINGS (true)
[511] Built select: 'SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings'
[512] Executing LOCK...SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings WHERE BOOKING_NO=:1 FOR UPDATE NOWAIT
[513] Where binding param 1: 1205
[514] OracleSQLBuilder Executing, Lock 2 DML on: BOOKINGS (Update)
[515] UPDATE buf Bookings>#u SQLStmtBufLen: 210, actual=62
[516] UPDATE BOOKINGS Bookings SET CHARGEABLE=:1 WHERE BOOKING_NO=:2
[517] Update binding param 1: Y
[518] Where binding param 2: 1205
[519] BookingsView1 notify COMMIT ...
[520] _LOCAL_VIEW_USAGE_model_Bookings_ResourceTypesView1 notify COMMIT ...
[521] EntityCache close prepared statement
Again, even though the mid-tier delays the LOCK statement until commit time, the lock is in fact taken out on line 512 and released as part of the commit on line 519. Therefore with either optimistic or pessimistic locking a lock is indeed issued. Our conclusion at this point must be that, unless the unlikely case holds that the LOCK statement never really hits the database, or the even less likely case that the database has a bug, ADF does in fact take out a lock on the record before allowing the current user to update it. So there's no way our phantom ADF developer could modify the record without at least someone receiving a lock error. Hmm, we can only conclude the locking mode is a red herring and not the true cause of our problem.
Who is the phantom? At this point we'll need to conclude that the error message "JBO-25014: Another user has changed" is somehow legit, even though we don't understand yet what's causing it. 
This leads onto two further questions, how does ADF know another user has changed the row, and what's been changed anyway? To answer the first question, how does ADF know another user has changed the row, the Fusion Guide's section 4.10.11 How to Protect Against Losing Simultaneous Updated Data , that details the Entity Object Change-Indicator property, gives us the answer: At runtime the framework provides automatic "lost update" detection for entity objects to ensure that a user cannot unknowingly modify data that another user has updated and committed in the meantime. Typically, this check is performed by comparing the original values of each persistent entity attribute against the corresponding current column values in the database at the time the underlying row is locked. Before updating a row, the entity object verifies that the row to be updated is still consistent with the current state of the database.  The guide further suggests to make this solution more efficient: You can make the lost update detection more efficient by identifying any attributes of your entity whose values you know will be updated whenever the entity is modified. Typical candidates include a version number column or an updated date column in the row.....To detect whether the row has been modified since the user queried it in the most efficient way, select the Change Indicator option to compare only the change-indicator attribute values. We now know that ADF BC doesn't use the locking mechanism at all to protect the current user against updates, but rather it keeps a copy of the original record fetched, separate to the user changed version of the record, and it compares the original record against the one in the database when the lock is taken out.  If values don't match, be it the default compare-all-columns behaviour, or the more efficient Change Indicator mechanism, ADF BC will throw the JBO-25014 error. This leaves one last question.  Now we know the mechanism under which ADF identifies a changed row, what we don't know is what's changed and who changed it? The real culprit What's changed?  We know the record in the mid-tier has been changed by the user, however ADF doesn't use the changed record in the mid-tier to compare to the database record, but rather a copy of the original record before it was changed.  This leaves us to conclude the database record has changed, but how and by who? There are three potential causes: Database triggers The database trigger among other uses, can be configured to fire PLSQL code on a database table insert, update or delete.  In particular in an insert or update the trigger can override the value assigned to a particular column.  The trigger execution is actioned by the database on behalf of the user initiating the insert or update action. Why this causes the issue specific to our ADF use, is when we insert or update a record in the database via ADF, ADF keeps a copy of the record written to the database.  However the cached record is instantly out of date as the database triggers have modified the record that was actually written to the database.  Thus when we update the record we just inserted or updated for a second time to the database, ADF compares its original copy of the record to that in the database, and it detects the record has been changed – giving us JBO-25014. This is probably the most common cause of this problem. Default values A second reason this issue can occur is another database feature, default column values.  
When creating a database table the schema designer can define default values for specific columns.  For example a CREATED_BY column could be set to SYSDATE, or a flag column to Y or N.  Default values are only used by the database when a user inserts a new record and the specific column is assigned NULL.  The database in this case will overwrite the column with the default value. As per the database trigger section, it then becomes apparent why ADF chokes on this feature, though it can only specifically occur in an insert-commit-update-commit scenario, not the update-commit-update-commit scenario. Instead of trigger views I must admit I haven't double checked this scenario but it seems plausible, that of the Oracle database's instead of trigger view (sometimes referred to as instead of views).  A view in the database is based on a query, and dependent on the queries complexity, may support insert, update and delete functionality to a limited degree.  In order to support fully insertable, updateable and deletable views, Oracle introduced the instead of view, that gives the view designer the ability to not only define the view query, but a set of programmatic PLSQL triggers where the developer can define their own logic for inserts, updates and deletes. While this provides the database programmer a very powerful feature, it can cause issues for our ADF application.  On inserting or updating a record in the instead of view, the record and it's data that goes in is not necessarily the data that comes out when ADF compares the records, as the view developer has the option to practically do anything with the incoming data, including throwing it away or pushing it to tables which aren't used by the view underlying query for fetching the data. Readers are at this point reminded that this article is specifically about how the JBO-25014 error occurs in the context of 1 developer on an isolated database.  The article is not considering how the error occurs in a production environment where there are multiple users who can cause this error in a legitimate fashion.  Assuming none of the above features are the cause of the problem, and optimistic locking is turned on (this error is not possible if pessimistic locking is the default mode *and* none of the previous causes are possible), JBO-25014 is quite feasible in a production ADF application if 2 users modify the same record. At this point under project timelines pressure, the obvious fix for developers is to drop both database triggers and default values from the underlying tables.  However we must be careful that these legacy constructs aren't used and assumed to be in place by other legacy systems.  Dropping the database triggers or default value that the existing Oracle Forms  applications assumes and requires to be in place could cause unexpected behaviour and bugs in the Forms application.  Proficient software engineers would recognize such a change may require a partial or full regression test of the existing legacy system, a potentially costly and timely exercise, not ideal. Solving the mystery once and for all Luckily ADF has built in functionality to deal with this issue, though it's not a surprise, as Oracle as the author of ADF also built the database, and are fully aware of the Oracle database's feature set.  At the Entity Object attribute level, the Refresh After Insert and Refresh After Update properties.  
Simply selecting these instructs ADF BC, after inserting or updating a record in the database, to expect the database to modify the said attributes, and to read a copy of the changed attributes back into its cached mid-tier record. Thus the next time the developer modifies the current record, the comparison between the mid-tier record and the database record matches, and "JBO-25014: Another user has changed the row" is no longer an issue. [Post edit: as per the comment from Oracle's Steven Davelaar below, he correctly points out that the above solution will not work for instead-of-trigger views, as it relies on the SQL RETURNING clause, which is incompatible with this type of view.] Alternatively you can set the Change Indicator on one of the attributes. This will work as long as the related column for that attribute in the database itself isn't inadvertently updated. In turn you're possibly just masking the issue rather than solving it, because if another developer turns the Change Indicator option back off, the original issue will return.

    Read the article

  • Adding SQL Cache Dependencies to the Loosely coupled .NET Cache Provider

    - by Rhames
    This post adds SQL Cache Dependency support to the loosely coupled .NET Cache Provider that I described in the previous post (http://geekswithblogs.net/Rhames/archive/2012/09/11/loosely-coupled-.net-cache-provider-using-dependency-injection.aspx). The sample code is available on github at https://github.com/RobinHames/CacheProvider.git. Each time we want to apply a cache dependency to a call to fetch or cache a data item we need to supply an instance of the relevant dependency implementation. This suggests an Abstract Factory will be useful to create cache dependencies as needed. We can then use Dependency Injection to inject the factory into the relevant consumer. Castle Windsor provides a typed factory facility that will be utilised to implement the cache dependency abstract factory (see http://docs.castleproject.org/Windsor.Typed-Factory-Facility-interface-based-factories.ashx). Cache Dependency Interfaces First I created a set of cache dependency interfaces in the domain layer, which can be used to pass a cache dependency into the cache provider. ICacheDependency The ICacheDependency interface is simply an empty interface that is used as a parent for the specific cache dependency interfaces. This will allow us to place a generic constraint on the Cache Dependency Factory, and will give us a type that can be passed into the relevant Cache Provider methods. namespace CacheDiSample.Domain.CacheInterfaces { public interface ICacheDependency { } }   ISqlCacheDependency.cs The ISqlCacheDependency interface provides specific SQL caching details, such as a Sql Command or a database connection and table. It is the concrete implementation of this interface that will be created by the factory and passed into the Cache Provider. using System; using System.Collections.Generic; using System.Linq; using System.Text;   namespace CacheDiSample.Domain.CacheInterfaces { public interface ISqlCacheDependency : ICacheDependency { ISqlCacheDependency Initialise(string databaseConnectionName, string tableName); ISqlCacheDependency Initialise(System.Data.SqlClient.SqlCommand sqlCommand); } } If we want other types of cache dependencies, such as by key or file, interfaces may be created to support these (the sample code includes an IKeyCacheDependency interface). Modifying ICacheProvider to accept Cache Dependencies Next I modified the existing ICacheProvider<T> interface so that cache dependencies may be passed into a Fetch method call. I did this by adding two overloads to the existing Fetch methods, which take an IEnumerable<ICacheDependency> parameter (the IEnumerable allows more than one cache dependency to be included). I also added a method to create cache dependencies. This means that the implementation of the Cache Provider will require a dependency on the Cache Dependency Factory. It is pretty much down to personal choice as to whether this approach is taken, or whether the Cache Dependency Factory is injected directly into the repository or other consumer of the Cache Provider. I think, because the cache dependency cannot be used without the Cache Provider, placing the dependency on the factory into the Cache Provider implementation is cleaner. ICacheProvider.cs using System; using System.Collections.Generic;   namespace CacheDiSample.Domain.CacheInterfaces { public interface ICacheProvider<T> { T Fetch(string key, Func<T> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry); T Fetch(string key, Func<T> retrieveData, DateTime? absoluteExpiry, TimeSpan?
relativeExpiry, IEnumerable<ICacheDependency> cacheDependencies);   IEnumerable<T> Fetch(string key, Func<IEnumerable<T>> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry); IEnumerable<T> Fetch(string key, Func<IEnumerable<T>> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry, IEnumerable<ICacheDependency> cacheDependencies);   U CreateCacheDependency<U>() where U : ICacheDependency; } }   Cache Dependency Factory Next I created the interface for the Cache Dependency Factory in the domain layer. ICacheDependencyFactory.cs namespace CacheDiSample.Domain.CacheInterfaces { public interface ICacheDependencyFactory { T Create<T>() where T : ICacheDependency;   void Release<T>(T cacheDependency) where T : ICacheDependency; } }   I used the ICacheDependency parent interface as a generic constraint on the create and release methods in the factory interface. Now the interfaces are in place, I moved on to the concrete implementations. ISqlCacheDependency Concrete Implementation The concrete implementation of ISqlCacheDependency will need to provide an instance of System.Web.Caching.SqlCacheDependency to the Cache Provider implementation. Unfortunately this class is sealed, so I cannot simply inherit from this. Instead, I created an interface called IAspNetCacheDependency that will provide a Create method to create an instance of the relevant System.Web.Caching Cache Dependency type. This interface is specific to the ASP.NET implementation of the Cache Provider, so it should be defined in the same layer as the concrete implementation of the Cache Provider (the MVC UI layer in the sample code). IAspNetCacheDependency.cs using System.Web.Caching;   namespace CacheDiSample.CacheProviders { public interface IAspNetCacheDependency { CacheDependency CreateAspNetCacheDependency(); } }   Next, I created the concrete implementation of the ISqlCacheDependency interface. This class also implements the IAspNetCacheDependency interface. This concrete implementation also is defined in the same layer as the Cache Provider implementation. AspNetSqlCacheDependency.cs using System.Web.Caching; using CacheDiSample.Domain.CacheInterfaces;   namespace CacheDiSample.CacheProviders { public class AspNetSqlCacheDependency : ISqlCacheDependency, IAspNetCacheDependency { private string databaseConnectionName;   private string tableName;   private System.Data.SqlClient.SqlCommand sqlCommand;   #region ISqlCacheDependency Members   public ISqlCacheDependency Initialise(string databaseConnectionName, string tableName) { this.databaseConnectionName = databaseConnectionName; this.tableName = tableName; return this; }   public ISqlCacheDependency Initialise(System.Data.SqlClient.SqlCommand sqlCommand) { this.sqlCommand = sqlCommand; return this; }   #endregion   #region IAspNetCacheDependency Members   public System.Web.Caching.CacheDependency CreateAspNetCacheDependency() { if (sqlCommand != null) return new SqlCacheDependency(sqlCommand); else return new SqlCacheDependency(databaseConnectionName, tableName); }   #endregion   } }   ICacheProvider Concrete Implementation The ICacheProvider interface is implemented by the CacheProvider class. This implementation is modified to include the changes to the ICacheProvider interface. 
First I needed to inject the Cache Dependency Factory into the Cache Provider: private ICacheDependencyFactory cacheDependencyFactory;   public CacheProvider(ICacheDependencyFactory cacheDependencyFactory) { if (cacheDependencyFactory == null) throw new ArgumentNullException("cacheDependencyFactory");   this.cacheDependencyFactory = cacheDependencyFactory; }   Next I implemented the CreateCacheDependency method, which simply passes on the create request to the factory: public U CreateCacheDependency<U>() where U : ICacheDependency { return this.cacheDependencyFactory.Create<U>(); }   The signature of the FetchAndCache helper method was modified to take an additional IEnumerable<ICacheDependency> parameter:   private U FetchAndCache<U>(string key, Func<U> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry, IEnumerable<ICacheDependency> cacheDependencies) and the following code added to create the relevant System.Web.Caching.CacheDependency object for any dependencies and pass them to the HttpContext Cache: CacheDependency aspNetCacheDependencies = null;   if (cacheDependencies != null) { if (cacheDependencies.Count() == 1) // We know that the implementations of ICacheDependency will also implement IAspNetCacheDependency // so we can use a cast here and call the CreateAspNetCacheDependency() method aspNetCacheDependencies = ((IAspNetCacheDependency)cacheDependencies.ElementAt(0)).CreateAspNetCacheDependency(); else if (cacheDependencies.Count() > 1) { AggregateCacheDependency aggregateCacheDependency = new AggregateCacheDependency(); foreach (ICacheDependency cacheDependency in cacheDependencies) { // We know that the implementations of ICacheDependency will also implement IAspNetCacheDependency // so we can use a cast here and call the CreateAspNetCacheDependency() method aggregateCacheDependency.Add(((IAspNetCacheDependency)cacheDependency).CreateAspNetCacheDependency()); } aspNetCacheDependencies = aggregateCacheDependency; } }   HttpContext.Current.Cache.Insert(key, value, aspNetCacheDependencies, absoluteExpiry.Value, relativeExpiry.Value);   The full code listing for the modified CacheProvider class is shown below: using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.Caching; using CacheDiSample.Domain.CacheInterfaces;   namespace CacheDiSample.CacheProviders { public class CacheProvider<T> : ICacheProvider<T> { private ICacheDependencyFactory cacheDependencyFactory;   public CacheProvider(ICacheDependencyFactory cacheDependencyFactory) { if (cacheDependencyFactory == null) throw new ArgumentNullException("cacheDependencyFactory");   this.cacheDependencyFactory = cacheDependencyFactory; }   public T Fetch(string key, Func<T> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry) { return FetchAndCache<T>(key, retrieveData, absoluteExpiry, relativeExpiry, null); }   public T Fetch(string key, Func<T> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry, IEnumerable<ICacheDependency> cacheDependencies) { return FetchAndCache<T>(key, retrieveData, absoluteExpiry, relativeExpiry, cacheDependencies); }   public IEnumerable<T> Fetch(string key, Func<IEnumerable<T>> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry) { return FetchAndCache<IEnumerable<T>>(key, retrieveData, absoluteExpiry, relativeExpiry, null); }   public IEnumerable<T> Fetch(string key, Func<IEnumerable<T>> retrieveData, DateTime? absoluteExpiry, TimeSpan? 
relativeExpiry, IEnumerable<ICacheDependency> cacheDependencies) { return FetchAndCache<IEnumerable<T>>(key, retrieveData, absoluteExpiry, relativeExpiry, cacheDependencies); }   public U CreateCacheDependency<U>() where U : ICacheDependency { return this.cacheDependencyFactory.Create<U>(); }   #region Helper Methods   private U FetchAndCache<U>(string key, Func<U> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry, IEnumerable<ICacheDependency> cacheDependencies) { U value; if (!TryGetValue<U>(key, out value)) { value = retrieveData(); if (!absoluteExpiry.HasValue) absoluteExpiry = Cache.NoAbsoluteExpiration;   if (!relativeExpiry.HasValue) relativeExpiry = Cache.NoSlidingExpiration;   CacheDependency aspNetCacheDependencies = null;   if (cacheDependencies != null) { if (cacheDependencies.Count() == 1) // We know that the implementations of ICacheDependency will also implement IAspNetCacheDependency // so we can use a cast here and call the CreateAspNetCacheDependency() method aspNetCacheDependencies = ((IAspNetCacheDependency)cacheDependencies.ElementAt(0)).CreateAspNetCacheDependency(); else if (cacheDependencies.Count() > 1) { AggregateCacheDependency aggregateCacheDependency = new AggregateCacheDependency(); foreach (ICacheDependency cacheDependency in cacheDependencies) { // We know that the implementations of ICacheDependency will also implement IAspNetCacheDependency // so we can use a cast here and call the CreateAspNetCacheDependency() method aggregateCacheDependency.Add( ((IAspNetCacheDependency)cacheDependency).CreateAspNetCacheDependency()); } aspNetCacheDependencies = aggregateCacheDependency; } }   HttpContext.Current.Cache.Insert(key, value, aspNetCacheDependencies, absoluteExpiry.Value, relativeExpiry.Value);   } return value; }   private bool TryGetValue<U>(string key, out U value) { object cachedValue = HttpContext.Current.Cache.Get(key); if (cachedValue == null) { value = default(U); return false; } else { try { value = (U)cachedValue; return true; } catch { value = default(U); return false; } } }   #endregion } }   Wiring up the DI Container Now the implementations for the Cache Dependency are in place, I wired them up in the existing Windsor CacheInstaller. First I needed to register the implementation of the ISqlCacheDependency interface: container.Register( Component.For<ISqlCacheDependency>() .ImplementedBy<AspNetSqlCacheDependency>() .LifestyleTransient());   Next I registered the Cache Dependency Factory. Notice that I have not implemented the ICacheDependencyFactory interface. Castle Windsor will do this for me by using the Type Factory Facility. 
I do need to bring the Castle.Facilities.TypedFactory namespace into scope: using Castle.Facilities.TypedFactory;   Then I registered the factory: container.AddFacility<TypedFactoryFacility>();   container.Register( Component.For<ICacheDependencyFactory>() .AsFactory()); The full code for the CacheInstaller class is: using Castle.MicroKernel.Registration; using Castle.MicroKernel.SubSystems.Configuration; using Castle.Windsor; using Castle.Facilities.TypedFactory;   using CacheDiSample.Domain.CacheInterfaces; using CacheDiSample.CacheProviders;   namespace CacheDiSample.WindsorInstallers { public class CacheInstaller : IWindsorInstaller { public void Install(IWindsorContainer container, IConfigurationStore store) { container.Register( Component.For(typeof(ICacheProvider<>)) .ImplementedBy(typeof(CacheProvider<>)) .LifestyleTransient());   container.Register( Component.For<ISqlCacheDependency>() .ImplementedBy<AspNetSqlCacheDependency>() .LifestyleTransient());   container.AddFacility<TypedFactoryFacility>();   container.Register( Component.For<ICacheDependencyFactory>() .AsFactory()); } } }   Configuring the ASP.NET SQL Cache Dependency There are a couple of configuration steps required to enable SQL Cache Dependency for the application and database. From the Visual Studio Command Prompt, the following commands should be used to enable the Cache Polling of the relevant database tables: aspnet_regsql -S <servername> -E -d <databasename> -ed aspnet_regsql -S <servername> -E -d <databasename> -et -t <tablename>   (The -t option should be repeated for each table that is to be made available for cache dependencies). Finally the SQL Cache Polling needs to be enabled by adding the following configuration to the <system.web> section of web.config: <caching> <sqlCacheDependency pollTime="10000" enabled="true"> <databases> <add name="BloggingContext" connectionStringName="BloggingContext"/> </databases> </sqlCacheDependency> </caching>   (obviously the name and connection string name should be altered as required). Using a SQL Cache Dependency Now all the coding is complete. To specify a SQL Cache Dependency, I can modify my BlogRepositoryWithCaching decorator class (see the earlier post) as follows: public IList<Blog> GetAll() { var sqlCacheDependency = cacheProvider.CreateCacheDependency<ISqlCacheDependency>() .Initialise("BloggingContext", "Blogs");   ICacheDependency[] cacheDependencies = new ICacheDependency[] { sqlCacheDependency };   string key = string.Format("CacheDiSample.DataAccess.GetAll");   return cacheProvider.Fetch(key, () => { return parentBlogRepository.GetAll(); }, null, null, cacheDependencies) .ToList(); }   This will add a dependency on the “Blogs” table in the database. The data will remain in the cache until the contents of this table change; then the cache item will be invalidated, and the next call to the GetAll() repository method will be routed to the parent repository to refresh the data from the database.
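    The post mentions that the sample code also contains an IKeyCacheDependency interface but never shows it. As a rough sketch (not taken from the sample, so the interface shape and class name below are assumptions), a key-based dependency could follow exactly the same pattern as AspNetSqlCacheDependency, using the standard System.Web.Caching.CacheDependency constructor that accepts an array of cache keys:

    using System.Web.Caching;
    using CacheDiSample.Domain.CacheInterfaces;

    namespace CacheDiSample.CacheProviders
    {
        // Hypothetical key-based dependency mirroring AspNetSqlCacheDependency.
        // IKeyCacheDependency is assumed to expose a single Initialise(string key) method.
        public class AspNetKeyCacheDependency : IKeyCacheDependency, IAspNetCacheDependency
        {
            private string dependencyKey;

            public IKeyCacheDependency Initialise(string key)
            {
                this.dependencyKey = key;
                return this;
            }

            public CacheDependency CreateAspNetCacheDependency()
            {
                // No file dependencies; invalidate when the named cache entry changes or is removed.
                return new CacheDependency(null, new[] { this.dependencyKey });
            }
        }
    }

    It would be registered in the CacheInstaller in the same way as the SQL dependency, and the typed factory could then create it through CreateCacheDependency<IKeyCacheDependency>().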

    Read the article

  • sort django queryset by latest instance of a subset of related model

    - by rsp
    I have an Order model and an order_event model. Each order_event has a ForeignKey to Order, so from an order instance I can get: myorder.order_event_set. I want to get a list of all orders, but I want them to be sorted by the date of the last event. A statement like this works to sort by the latest event date: queryset = Order.objects.all().annotate(latest_event_date=Max('order_event__event_datetime')).order_by('latest_event_date') However, what I really need is a list of all orders sorted by the latest date of A SUBSET OF EVENTS. For example, my events are categorized into "scheduling", "processing", etc., so I should be able to get a list of all orders sorted by the latest scheduling event. This Django doc (https://docs.djangoproject.com/en/dev/topics/db/aggregation/#filter-and-exclude) shows how I can get the latest scheduling event using a filter, but this excludes orders without a scheduling event. I thought I could combine the filtered queryset with a queryset that adds back those orders that are missing a scheduling event... but I'm not quite sure how to do this. I saw answers related to using a Python list, but it would be much more useful to have a proper Django queryset (i.e. for a view with pagination, etc.).

    Read the article

  • ZF2 Zend\Db Insert/Update Using Mysql Expression (Zend\Db\Sql\Expression?)

    - by Aleross
    Is there any way to include MySQL expressions like NOW() in the current build of ZF2 (2.0.0beta4) through Zend\Db and/or TableGateway insert()/update() statements? Here is a related post on the mailing list, though it hasn't been answered: http://zend-framework-community.634137.n4.nabble.com/Zend-Db-Expr-and-Transactions-in-ZF2-Db-td4472944.html It appears that ZF1 used something like: $data = array( 'update_time' => new \Zend\Db\Expr("NOW()") ); And after looking through Zend\Db\Sql\Expression I tried: $data = array( 'update_time' => new \Zend\Db\Sql\Expression("NOW()"), ); But get the following error: Catchable fatal error: Object of class Zend\Db\Sql\Expression could not be converted to string in /var/www/vendor/ZF2/library/Zend/Db/Adapter/Driver/Pdo/Statement.php on line 256 As recommended here: http://zend-framework-community.634137.n4.nabble.com/ZF2-beta3-Zend-Db-Sql-Insert-new-Expression-now-td4536197.html I'm currently just setting the date with PHP code, but I'd rather use the MySQL server as the single source of truth for date/times. $data = array( 'update_time' => date('Y-m-d H:i:s'), ); Thanks, I appreciate any input!

    Read the article

  • Silverlight DatePicker in DataGrid: Enter does not submit

    - by queen3
    I have a DataGrid with a DataGridTemplateColumn which has a DatePicker as its editing element: <data:DataGridTemplateColumn Header="Due date" CanUserSort="False" > <data:DataGridTemplateColumn.CellTemplate> <DataTemplate> <TextBlock Text="{Binding EndDateFormatted}" /> </DataTemplate> </data:DataGridTemplateColumn.CellTemplate> <data:DataGridTemplateColumn.CellEditingTemplate> <DataTemplate> <controls:DatePicker SelectedDate="{Binding EndDate, Mode=TwoWay}" /> </DataTemplate> </data:DataGridTemplateColumn.CellEditingTemplate> </data:DataGridTemplateColumn> The problem is that the Enter key does not work at all when in textbox editing mode - it just does nothing. Selecting a date from the dropdown panel works. Also, Tab does not keep the value (it resets to the previous one), but with the help of this I can fix it. But I don't know how to make Enter accept the value and preferably move to the next cell. I also tried a third-party date picker, with no change - the same issues with Tab and Enter. Seems like a DataGrid issue. I use Silverlight 3.
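    Not an authoritative fix, but one workaround worth sketching: handle KeyDown on the editing DatePicker and force the grid to commit the cell when Enter is pressed. The grid name and event wiring below are assumptions, and the DatePicker's inner TextBox may still swallow the key in some templates, so treat this as a starting point only.

    // XAML (hypothetical hook): <controls:DatePicker SelectedDate="{Binding EndDate, Mode=TwoWay}" KeyDown="EditingDatePicker_KeyDown" />
    private void EditingDatePicker_KeyDown(object sender, KeyEventArgs e)
    {
        if (e.Key == Key.Enter)
        {
            DatePicker picker = (DatePicker)sender;

            // Push any typed text into SelectedDate before committing, otherwise the
            // pending text can be discarded when the cell leaves edit mode.
            DateTime parsed;
            if (DateTime.TryParse(picker.Text, out parsed))
                picker.SelectedDate = parsed;

            // "myDataGrid" is a placeholder for the x:Name of the grid.
            myDataGrid.CommitEdit(DataGridEditingUnit.Cell, true);
            e.Handled = true;
        }
    }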

    Read the article

  • NHibernate query with Projections.Cast to DateTime

    - by stiank81
    I'm experimenting with using a string for storing different kinds of data types in a database. When I do queries I need to cast the strings to the right type in the query itself. I'm using .NET with NHibernate, and was glad to learn that functionality for this exists. Consider the simple class: public class Foo { public string Text { get; set; } } I successfully use Projections.Cast to cast to numeric values; e.g. the following query correctly returns all Foos with an integer stored in Text between 1 and 10. var result = Session.CreateCriteria<Foo>() .Add(Restrictions.Between(Projections.Cast(NHibernateUtil.Int32, Projections.Property("Text")), 1, 10)) .List<Foo>(); Now if I try using this for DateTime I'm not able to make it work no matter what I try. Why?! var date = new DateTime(2010, 5, 21, 11, 30, 00); AddFooToDb(new Foo { Text = date.ToString() } ); // Will add it to the database... var result = Session .CreateCriteria<Foo>() .Add(Restrictions.Eq(Projections.Cast(NHibernateUtil.DateTime, Projections.Property("Text")), date)) .List<Foo>();
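    A hedged guess at the cause rather than a confirmed answer: date.ToString() uses the current culture, so the text stored in Text may not be something the database-side CAST can parse back to a datetime, whereas the integer case round-trips cleanly. One sketch of a workaround is to store the value in an unambiguous, culture-invariant format and keep the query unchanged; whether the cast accepts this exact format still depends on the database and its session settings.

    using System.Globalization;

    var date = new DateTime(2010, 5, 21, 11, 30, 00);

    // Store a culture-invariant literal so the database-side CAST has a fixed format to parse.
    AddFooToDb(new Foo { Text = date.ToString("yyyy-MM-dd HH:mm:ss", CultureInfo.InvariantCulture) });

    var result = Session
        .CreateCriteria<Foo>()
        .Add(Restrictions.Eq(
            Projections.Cast(NHibernateUtil.DateTime, Projections.Property("Text")),
            date))
        .List<Foo>();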

    Read the article

  • jqgrid not updating data on reload

    - by meepmeep
    I have a jqgrid with data loading from an xml stream (handled by django 1.1.1): jQuery(document).ready(function(){ jQuery("#list").jqGrid({ url:'/downtime/list_xml/', datatype: 'xml', mtype: 'GET', postData:{site:1,date_start:document.getElementById('datepicker_start').value,date_end:document.getElementById('datepicker_end').value}, colNames:[...], colModel :[...], pager: '#pager', rowNum: 25, rowList:[10,25,50], viewrecords: true, height: 500, caption: 'Click on column headers to reorder' }); $("#grid_reload").click(function(){ $("#list").trigger("reloadGrid"); }); $("#tabs").tabs(); $("#datepicker_start").datepicker({dateFormat: 'yy-mm-dd'}); $("#datepicker_end").datepicker({dateFormat: 'yy-mm-dd'}); ... And the html elements: <th>Start Date:</th> <td><input id="datepicker_start" type="text" value="2009-12-01"></input></td> <th>End Date:</th> <td><input id="datepicker_end" type="text" value="2009-12-03"></input></td> <td><input id="grid_reload" type="submit" value="load" /></td> When I click the grid_reload button, the grid reloads, but when it has done so it shows exactly the same data as before, even though the xml is tested to return different data for different timestamps. I have checked using alert(document.getElementById('datepicker_start').value) that the values in the date inputs are passed correctly when the reload event is triggered. Any ideas why the data doesn't update? A caching or browser issue perhaps?

    Read the article

  • ASPXAUTH cookie is not being saved.

    - by kripto_ash
    Hi, I'm working on a web project in ASP.NET MVC 2. In this project we store some info inside an encrypted cookie (the ASPXAUTH cookie) to avoid the need to query the db for every request. The thing is, the code for this part has suddenly stopped working. I reviewed the changes made to the code on the source control server for anything that could be causing it, but found nothing. I even reverted to a known working copy (working on some other person's PC, same code, etc.), but after debugging, it seems the .ASPXAUTH cookie is not getting saved anymore. Instead the ASP.NET_SessionId cookie is being set... (which before wasn't). I changed the web.config file to turn off the sessionState. This stopped the ASP.NET_SessionId cookie from being set, but it is still not saving the auth cookie. I've recently installed some Microsoft Windows XP updates, but the other person (whose PC runs the application just fine) also did. After googling, some info I found pointed to a problem with the expiration date of the cookie: either because the PC didn't have the right time/date (this was not the case), or because the cookie expiration date was being wrongly set (I checked and it is being set correctly)... The problem persists with other browsers besides the one I'm using (Chrome); I tried it with IE6. Any ideas on why this is happening? I'll continue to post any helpful information I can find. Thanks in advance.
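    A diagnostic sketch rather than a definitive fix: the .ASPXAUTH cookie only appears if forms authentication is enabled (<authentication mode="Forms"> in web.config) and a ticket is actually issued after login, so issuing the ticket explicitly and inspecting the response can show whether the problem is in creating the cookie or in persisting it. Method and variable names below are placeholders.

    using System;
    using System.Web;
    using System.Web.Security;

    public static void IssueAuthCookie(string userName, string userData)
    {
        // Build the ticket manually so the expiry and the encrypted userData payload
        // (the info that avoids a DB query per request) are under our control.
        var ticket = new FormsAuthenticationTicket(
            1,                             // version
            userName,
            DateTime.Now,                  // issue date
            DateTime.Now.AddMinutes(30),   // expiry - must be later than the server's current time
            false,                         // not persistent
            userData);

        string encrypted = FormsAuthentication.Encrypt(ticket);
        var cookie = new HttpCookie(FormsAuthentication.FormsCookieName, encrypted);
        HttpContext.Current.Response.Cookies.Add(cookie);
    }

    If the cookie shows up in the response headers but never comes back on later requests, the usual suspects are the ticket expiry (server clock), the cookie domain or path, or requireSSL being enabled on a non-HTTPS site.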

    Read the article

  • Debugger ignores me.

    - by atch
    Having code: Date::Date(const char* day, const char* month, const char* year):is_leap__(false) { my_day_ = lexical_cast<int>(day); my_month_ = static_cast<Month>(lexical_cast<int>(month)); /*Have to check month here, other ctor assumes month is correct*/ if (my_month_ < 1 || my_month_ > 12) { throw std::exception("Incorrect month."); } my_year_ = lexical_cast<int>(year); if (!check_range_year_(my_year_)) { throw std::exception("Year out of range."); } if (check_leap_year_(my_year_))//SKIPS THIS LINE { is_leap__ = true; } if (!check_range_day_(my_day_, my_month_)) { throw std::exception("Day out of range."); } } bool Date::check_leap_year_(int year)const//IF I MARK THIS LINE WITH A BREAKPOINT I GET A MESSAGE THAT NO EXECUTABLE CODE IS ASSOCIATED WITH THIS LINE { if (!(year%400) || (year%100 && !(year%4))) { return true; } else { return false; } } Which is very strange in my opinion. There is a call to this function in my code, so why does the compiler ignore it? P.S. I'm trying to debug in release.

    Read the article

  • git pull not working

    - by dorelal
    I am not using GitHub. We have git set up on our own machine. I created a branch from master called experiment. However, when I try to do git pull I get the following message: > git pull You asked me to pull without telling me which branch you want to merge with, and 'branch.experiment.merge' in your configuration file does not tell me either. Please specify which branch you want to merge on the command line and try again (e.g. 'git pull <repository> <refspec>'). See git-pull(1) for details. Here is the result of git remote show origin: > git remote show origin * remote origin Fetch URL: ssh://git.domain.com/var/git/app.git Push URL: ssh://git.domain.com/var/git/app.git HEAD branch: master Remote branches: experiment tracked master tracked Local branches configured for 'git pull': master merges with remote master Local refs configured for 'git push': experiment pushes to experiment (local out of date) master pushes to master (up to date) As I read the output above, experiment is mapped to origin/experiment, and my local repository knows that it is out of date. Then why am I not able to do git pull?

    Read the article

  • Unable to set variables in bash script in Mac OSX

    - by cohortq
    Hello! I am attempting to automate moving files from a folder to a new folder every night, using a bash script run from AppleScript on a schedule. I am attempting to write a bash script on Mac OS X, and it keeps failing. In short, this is what I have (all my ECHOs are for error checking): !/bin/bash folder = "ABC" useracct = 'test' day = date "+%d" month = date "+%B" year = date "+%Y" folderToBeMoved = "/users/$useracct/Documents/Archive/Primetime.eyetv" newfoldername = "/Volumes/Media/Network/$folder/$month$day$year" ECHO "Network is $network" $network ECHO "day is $day" ECHO "Month is $month" ECHO "YEAR is $year" ECHO "source is $folderToBeMoved" ECHO "dest is $newfoldername" mkdir $newfoldername cp -R $folderToBeMoved $newfoldername if [-f $newfoldername/Primetime.eyetv]; then rm $folderToBeMoved; fi Now my first problem is that I cannot set variables at all, even literal ones where I just make a variable equal some literal. All my echos come out blank. I cannot grab the day, month, or year either; they come out blank as well. I get an error saying that -f is not found. I get an error saying there is an unexpected end of file. I made the file and did a chmod u+x scriptname.sh. I'm not sure why nothing is working at all. I am very new to bash scripting on OS X, and only have experience with Windows VBScript. Any help would be great, thanks!

    Read the article

  • How to fill DataGridView from nested table oracle

    - by arkadiusz85
    I want to create my type: CREATE TYPE t_read AS OBJECT ( id_worker NUMBER(20), how_much NUMBER(5,2), adddate_r DATE, date_from DATE, date_to DATE ); I create a table type of my type: CREATE TYPE t_tab_read AS TABLE OF t_read; The next step is to create a table with my type: CREATE TABLE Reading ( id_watermeter NUMBER(20) constraint Watermeter_fk1 references Watermeters(id_watermeter), read t_tab_read ) NESTED TABLE read STORE AS store_read ; Microsoft Visual Studio cannot display this type in a DataGridView. I use Oracle.DataAccess.Client: C# using Oracle.DataAccess; using Oracle.DataAccess.Client; private void button1_Click(object sender, EventArgs e) { try { //my working class to connect to database ConnectionClass.BeginConnection(); OracleDataAdapter tmp = new OracleDataAdapter(); tmp = ConnectionClass.ReadCommand(ReadClass.test()); DataSet dataset4 = new DataSet(); tmp.Fill(dataset4, "Read1"); dataGridView4.DataSource = dataset4.Tables["Read1"]; } catch (Exception o) { MessageBox.Show(o.Message); } public class ReadClass { public static OracleCommand test() { string sql = "select c.id_watermeter, a.* from reading c , table (c.read) a where id_watermeter=1"; ConnectionClass.Command1= new OracleCommand(sql, ConnectionClass.Connection); ConnectionClass.Command1.CommandType = CommandType.Text; return ConnectionClass.Command1; } } I tried: string sql = "select r.id_watermeter, o.id_worker, o.how_much, o.adddate_r, o.date_from, o.date_to from reading r, table (r.read) o where r.id_watermeter=1" string sql = "select a.* from reading c , Table (c.read) a where id_watermeter=1" string sql = "select a.id_worker, a.how_much, a.adddate_r, a.date_from, a.date_to from reading c , table (c.read) a where id_watermeter=1" string sql = "select c.id_watermeter, a.* from reading c , table (c.read) a where id_watermeter=1" Error: Unsupported Oracle data type USERDEFINED encountered. Can somebody help me fill a DataGridView using data from a nested table? I am using Oracle 10g XE.
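    A heavily hedged sketch, not a verified fix: one way to sidestep the adapter's attempt to map the user-defined type is to select only scalar columns from the unnested collection and build the DataTable manually with a data reader, so the OracleDataAdapter never has to describe the object type itself. Column names follow the t_read attributes above; the alias and the reuse of ConnectionClass are assumptions.

    string sql =
        "select r.id_watermeter, t.id_worker, t.how_much, t.adddate_r, t.date_from, t.date_to " +
        "from reading r, table(r.read) t where r.id_watermeter = 1";

    using (OracleCommand cmd = new OracleCommand(sql, ConnectionClass.Connection))
    using (OracleDataReader reader = cmd.ExecuteReader())
    {
        DataTable table = new DataTable("Read1");
        for (int i = 0; i < reader.FieldCount; i++)
            table.Columns.Add(reader.GetName(i), reader.GetFieldType(i));   // scalar columns only

        while (reader.Read())
        {
            object[] values = new object[reader.FieldCount];
            reader.GetValues(values);
            table.Rows.Add(values);
        }

        dataGridView4.DataSource = table;
    }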

    Read the article

  • Problem with oracle stored procedure - parameters

    - by Nicole
    I have this stored procedure: CREATE OR REPLACE PROCEDURE "LIQUIDACION_OBTENER" ( p_Cuenta IN NUMBER, p_Fecha IN DATE, p_Detalle OUT LIQUIDACION.FILADETALLE%TYPE ) IS BEGIN SELECT FILADETALLE INTO p_Detalle FROM Liquidacion WHERE (FILACUENTA = p_Cuenta) AND (FILAFECHA = p_Fecha); END; / ...and my C# code: string liquidacion = string.Empty; OracleCommand command = new OracleCommand("Liquidacion_Obtener"); command.BindByName = true; command.Parameters.Add(new OracleParameter("p_Cuenta", OracleDbType.Int64)); command.Parameters["p_Cuenta"].Value = cuenta; command.Parameters.Add(new OracleParameter("p_Fecha", OracleDbType.Date)); command.Parameters["p_Fecha"].Value = fecha; command.Parameters.Add("p_Detalle", OracleDbType.Varchar2, ParameterDirection.Output); OracleConnectionHolder connection = null; connection = this.GetConnection(); command.Connection = connection.Connection; command.CommandTimeout = 30; command.CommandType = CommandType.StoredProcedure; OracleDataReader lector = command.ExecuteReader(); while (lector.Read()) { liquidacion += ((OracleString)command.Parameters["p_Detalle"].Value).Value; } The thing is that when I try to put a value into the parameter "Fecha" (which is a date), the code gives me this error (when the line command.ExecuteReader(); is executed): Oracle.DataAccess.Client.OracleException : ORA-06502: PL/SQL: numeric or value error ORA-06512: at "SYSTEM.LIQUIDACION_OBTENER", line 9 ORA-06512: at line 1 I tried with the datetime and it was not the problem; I even tried with no input parameters and just the output and still got the same error. Apparently the problem is with the output parameter. I already tried putting p_Detalle OUT VARCHAR2 instead of p_Detalle OUT LIQUIDACION.FILADETALLE%TYPE, but it didn't work either. I hope my post is understandable. Thanks!
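    One hedged observation on the C# side: the call command.Parameters.Add("p_Detalle", OracleDbType.Varchar2, ParameterDirection.Output) may not be doing what is intended, because an OUT VARCHAR2 parameter normally needs an explicit Direction and a Size buffer, and ORA-06502 is a typical symptom when the output buffer is missing or too small. A sketch of the calling code under that assumption (the Size value and the use of ExecuteNonQuery are assumptions, since the procedure only returns data through the OUT parameter):

    OracleCommand command = new OracleCommand("LIQUIDACION_OBTENER", connection.Connection);
    command.CommandType = CommandType.StoredProcedure;
    command.BindByName = true;

    command.Parameters.Add(new OracleParameter("p_Cuenta", OracleDbType.Int64) { Value = cuenta });
    command.Parameters.Add(new OracleParameter("p_Fecha", OracleDbType.Date) { Value = fecha });

    OracleParameter detalle = new OracleParameter("p_Detalle", OracleDbType.Varchar2)
    {
        Direction = ParameterDirection.Output,
        Size = 4000   // must be at least as large as LIQUIDACION.FILADETALLE
    };
    command.Parameters.Add(detalle);

    command.ExecuteNonQuery();

    // ODP.NET returns the output value as an Oracle.DataAccess.Types.OracleString.
    OracleString result = (OracleString)detalle.Value;
    string liquidacion = result.IsNull ? string.Empty : result.Value;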

    Read the article

  • JQuery DatePicker - pop calendar dialog on 'OnSelect' of another datepicker

    - by sajanyamaha
    I have two jQuery datepickers to select 'StartDate' and 'EndDate'. My requirement is to automatically pop up the end date calendar after the StartDate has been selected. I have tried all the steps COMMENTED HERE. The code here successfully triggers the focus of the 'EndDate' calendar, but it displays for a second and then hides. $(document).ready(function() { //Runs when tab is loaded var dateFormat = "dd/mm/yy"; var today=new Date(); today.setMonth(today.getMonth()-1); $("#ctl00_ContentPlaceHolder1_datepicker1").datepicker({ minDate: 0, dateFormat:dateFormat, //maxDate: '+12M +31D', onSelect: function(dateText, inst){ var the_date = new Date($.datepicker.parseDate(dateFormat,dateText)); //var end=(the_date.getDate()+1) + '/' + (the_date.getMonth()+1) + '/' + the_date.getFullYear(); $("#ctl00_ContentPlaceHolder1_datepicker2").datepicker('option', 'minDate', the_date); // TRIED ALL THESE //document.getElementById('ui-datepicker-div').style.display = 'block'; //document.getElementById('ui-datepicker-div').style.left = '635.5px'; //$("#ctl00_ContentPlaceHolder1_datepicker2").datepicker("show"); //$("#ctl00_ContentPlaceHolder1_datepicker2").datepicker(); //$("#ctl00_ContentPlaceHolder1_datepicker2").focus(); //$('#foo').slideUp(300).delay(800).fadeIn(400); //$("#ctl00_ContentPlaceHolder1_datepicker2").trigger("focus"); $('#ctl00_ContentPlaceHolder1_datepicker2').focus(); } }); $("#ctl00_ContentPlaceHolder1_datepicker2").datepicker({ //maxDate: '+12M +31D', dateFormat:dateFormat, onSelect: function(dateText, inst){ } }); });

    Read the article

  • Django model operating on a queryset

    - by jmoz
    I'm new to Django and somewhat to Python as well. I'm trying to find the idiomatic way to loop over a queryset and set a variable on each model. Basically my model depends on a value from an API, and a model method must multiply one of its attributes by this API value to get an up-to-date correct value. At the moment I am doing it in the view and it works, but I'm not sure it's the correct way to achieve what I want. I have to replicate this looping elsewhere. Is there a way I can encapsulate the looping logic into a queryset method so it can be used in multiple places? I have this at the moment (I am using django-rest-framework): class FooViewSet(viewsets.ModelViewSet): model = Foo serializer_class = FooSerializer bar = # some call to an api def get_queryset(self): # Dynamically set the bar variable on each instance! foos = Foo.objects.filter(baz__pk=1).order_by('date') for item in foos: item.needs_bar = self.bar return items I would think something like this would be better: def get_queryset(self): bar = # some call to an api # Dynamically set the bar variable on each instance! return Foo.objects.filter(baz__pk=1).order_by('date').set_bar(bar) I'm thinking the API hit should be in the controller and then injected into instances of the model, but I'm not sure how you do this. I've been looking at querysets and managers but still can't figure it out, nor have I decided whether it's the best method to achieve what I want. Can anyone suggest the correct way to model this with Django? Thanks.

    Read the article

  • How to manage a One-To-One and a One-To-Many of same type as unidirectional mapping?

    - by user1652438
    I'm trying to implement a model for private messages between two or more users. That means I've got two entities: User and PrivateMessage. The User model shouldn't be edited, so I'm trying to set up a unidirectional relationship: @Entity (name = "User") @Table (name = "user") public class User implements Serializable { @Id String username; String password; // something like this ... } The PrivateMessage model addresses multiple receivers and has exactly one sender, so I need something like this: @Entity (name = "PrivateMessage") @Table (name = "privateMessage") @XmlRootElement @XmlType (propOrder = {"id", "sender", "receivers", "title", "text", "date", "read"}) public class PrivateMessage implements Serializable { private static final long serialVersionUID = -9126766942868177246L; @Id @GeneratedValue private Long id; @NotNull private String title; @NotNull private String text; @NotNull @Temporal(TemporalType.TIMESTAMP) private Date date; @NotNull private boolean read; @NotNull @ElementCollection(fetch = FetchType.EAGER, targetClass = User.class) private Set<User> receivers; @NotNull @OneToOne private User sender; // and so on } The corresponding 'privateMessage' table won't be generated, and only the relationship between the PrivateMessage and the many receivers is satisfied. I'm confused about this. Every time I try to set a 'mappedBy' attribute, my IDE marks it as an error. It seems the problem is that the User entity isn't aware of the private message which maps it. What am I doing wrong here? I've solved some situations similar to this one, but none of those solutions will work here. Thanks in advance!

    Read the article

  • Oracle sqlldr: column not allowed here

    - by Wade Williams
    Can anyone spot the error in this attempted data load? The '\\N' is because this is an import of an OUTFILE dump from mysql, which puts \N for NULL fields. The decode is to catch cases where the field might be an empty string, or might have \N. Using Oracle 10g on Linux. load data infile objects.txt discardfile objects.dsc truncate into table objects fields terminated by x'1F' optionally enclosed by '"' (ID INTEGER EXTERNAL NULLIF (ID='\\N'), TITLE CHAR(128) NULLIF (TITLE='\\N'), PRIORITY CHAR(16) "decode(:PRIORITY, BLANKS, NULL, '\\N', NULL)", STATUS CHAR(64) "decode(:STATUS, BLANKS, NULL, '\\N', NULL)", ORIG_DATE DATE "YYYY-MM-DD HH:MM:SS" NULLIF (ORIG_DATE='\\N'), LASTMOD DATE "YYYY-MM-DD HH:MM:SS" NULLIF (LASTMOD='\\N'), SUBMITTER CHAR(128) NULLIF (SUBMITTER='\\N'), DEVELOPER CHAR(128) NULLIF (DEVELOPER='\\N'), ARCHIVE CHAR(4000) NULLIF (ARCHIVE='\\N'), SEVERITY CHAR(64) "decode(:SEVERITY, BLANKS, NULL, '\\N', NULL)", VALUED CHAR(4000) NULLIF (VALUED='\\N'), SRD DATE "YYYY-MM-DD" NULLIF (SRD='\\N'), TAG CHAR(64) NULLIF (TAG='\\N') ) Sample Data (record 1). The ^_ represents the unprintable 0x1F delimiter. 1987^_Component 1987^_\N^_Done^_2002-10-16 01:51:44^_2002-10-16 01:51:44^_import^_badger^_N^_^_N^_0000-00-00^_none Error: Record 1: Rejected - Error on table objects, column SEVERITY. ORA-00984: column not allowed here

    Read the article

  • Reading, Writing, Editing BLOBs through DataTables and DataRows

    - by Soham
    Consider this piece of code: DataSet ds = new DataSet(); SQLiteDataAdapter Da = new SQLiteDataAdapter(Command); Da.Fill(ds); DataTable dt = ds.Tables[0]; bool PositionExists; if (dt.Rows.Count > 0) { PositionExists = true; } else { PositionExists = false; } if (PositionExists) { //dt.Rows[0].Field<>("Date") } Here the "Date" field is a BLOB. My questions are: a. Will reading through the DataAdapter create any problems later on, when I am working with BLOBs? More specifically, will it read the BLOB properly? b. That was the read part. Now, when I am writing the BLOB to the DB, it's actually a queue, i.e. I am trying to store a queue in SQLite using a BLOB. Will Conn.ExecuteNonQuery() serve my purpose? c. When I read the BLOB back from the DB, can I edit it as the original data type it used to be in the C# environment, i.e. { Queue - BLOB - ? } { C# - MySQL - C# }? So in this context, the Date field was a queue; I wrote it back as a BLOB; when reading it back, can I access it (and edit it) as a queue? Thank you. Soham
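    On the "can I edit it as a queue" part, a sketch under stated assumptions: if the queue is serialized to a byte[] (BinaryFormatter here, but any serializer works) and written through a parameter, the BLOB round-trips intact, and editing it is just a matter of deserializing back to a Queue<T>, changing it, and re-saving. The element type DateTime and the table/column names are assumptions.

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Runtime.Serialization.Formatters.Binary;

    static byte[] SerializeQueue(Queue<DateTime> queue)
    {
        var formatter = new BinaryFormatter();
        using (var stream = new MemoryStream())
        {
            formatter.Serialize(stream, queue);
            return stream.ToArray();
        }
    }

    static Queue<DateTime> DeserializeQueue(byte[] blob)
    {
        var formatter = new BinaryFormatter();
        using (var stream = new MemoryStream(blob))
            return (Queue<DateTime>)formatter.Deserialize(stream);
    }

    // Writing: a parameterised command keeps the BLOB binary-safe, so ExecuteNonQuery is fine:
    //   cmd.CommandText = "UPDATE MyTable SET Date = @blob WHERE Id = @id";
    //   cmd.Parameters.AddWithValue("@blob", SerializeQueue(myQueue));
    //
    // Reading back from the DataTable filled by the adapter, then editing as a normal queue:
    //   var queue = DeserializeQueue(dt.Rows[0].Field<byte[]>("Date"));
    //   queue.Enqueue(DateTime.Now);   // edit, then re-serialize and save again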

    Read the article

  • How to ask BeanUtils to ignore null values

    - by Calm Storm
    Using Commons BeanUtils, I would like to know how to ask any converter, say the DateConverter, to ignore null values and use null as the default. As an example, consider a public class, public class X { private Date date1; private String string1; //add public getters and setters } and my converter test as, public class Apache { @Test public void testSimple() throws Exception { X x1 = new X(), x2 = new X(); x1.setString1("X"); x1.setDate1(null); org.apache.commons.beanutils.BeanUtils.copyProperties(x2, x1); //throws ConversionException System.out.println(x2.getString1()); System.out.println(x2.getDate1()); } } The above throws a ConversionException since the date happens to be null. This looks like a very primitive scenario to me which should be handled by default (as in, I would expect x2 to have a null value for date1). The doco tells me that I can ask the converter to do this. Can someone point me to the best way of doing this? I don't want to get hold of the Converter and set isUseDefault() to true, because then I have to do it for all Date, Enum and many other converters!

    Read the article

< Previous Page | 167 168 169 170 171 172 173 174 175 176 177 178  | Next Page >