Search Results

Search found 3184 results on 128 pages for 'trace'.


  • Web Platform Installer bundles for Visual Studio 2010 SP1 - and how you can build your own WebPI bundles

    - by Jon Galloway
    Visual Studio SP1 is now available via the Web Platform Installer, which means you've got three options:

    - Download the 1.5 GB ISO image
    - Run the 750KB Web Installer (which figures out what you need to download)
    - Install via Web PI

    Note: I covered some tips for installing VS2010 SP1 last week - including some that apply to all of these, such as removing options you don't use prior to installing the service pack to decrease the installation time and download size.

    Two Visual Studio 2010 SP1 Web PI packages

    There are actually two Web PI packages for VS2010 SP1. There's the standard Visual Studio 2010 SP1 package [Web PI link], which includes (quoting ScottGu's post):

    - VS2010 SP1
    - ASP.NET MVC 3 (runtime + tools support)
    - IIS 7.5 Express
    - SQL Server Compact Edition 4.0 (runtime + tools support)
    - Web Deployment 2.0

    The notes on that package sum it up pretty well: "Looking for the latest everything? Look no further. This will get you Visual Studio 2010 Service Pack 1 and the RTM releases of ASP.NET MVC 3, IIS 7.5 Express, SQL Server Compact 4.0 with tooling, and Web Deploy 2.0. It's the value meal of Microsoft products. Tell your friends! Note: This bundle includes the Visual Studio 2010 SP1 web installer, which will dynamically determine the appropriate service pack components to download and install. This is typically in the range of 200-500 MB and will take 30-60 minutes to install, depending on your machine configuration."

    There is also a Visual Studio 2010 SP1 Core package [Web PI link], which includes only the SP without any of the other goodies (MVC 3, IIS Express, etc.). If you're doing any web development, I'd highly recommend the main package, since the other installs are small, simple installs; but if you're working in another space, you might want the core package.

    Installing via the Web Platform Installer

    I generally like to go with Web PI when possible, since it simplifies most software installations with things like:

    - Smart dependency management - installing apps or tools which have software dependencies will automatically figure out which dependencies you don't have and add them to the list (which you can review before install)
    - Simultaneous download and install - if your install includes more than one package, it will automatically pull the dependencies first and begin installing them while downloading the others
    - Lists the latest downloads - no need to search around, as they're all listed based on a live feed
    - Includes open source applications - a lot of popular open source applications are included, as well as Microsoft software and tools
    - No worries about reinstallation - Web PI installations detect what you've got installed, so for instance if you've got MVC 3 installed you don't need to worry about the VS2010 SP1 package install messing anything up

    In addition to the links I included above, you can install the Web PI from http://www.microsoft.com/web/downloads/platform.aspx, and if you have Web PI installed you can just tap the Windows key and type "Web Platform" to bring it up in the Start search list. You'll see Visual Studio SP1 listed in the spotlight list as shown below. That's the standard package, which includes MVC 3 / IIS 7.5 Express / SQL Compact / Web Deploy. If you just want the core install, you can use the search box in the upper right corner, typing in "Visual Studio SP1" as shown.

    Core Install: Use Web PI or the Visual Studio Web Installer?

    I think the big advantage of using Web PI to install VS 2010 SP1 is that it includes the other new bits.
    If you're going to install the SP1 core, I don't think there's as much advantage to using Web PI, as the Web PI core install just downloads the Visual Studio Web Installer anyway. I think Web PI makes it a little easier to find the download, but not a lot. The Visual Studio Web Installer checks dependencies, so there's no big advantage there. If you do happen to hit any problems installing Visual Studio SP1 via Web PI, I'd recommend running the Visual Studio Web Installer, then running the Web PI VS 2010 SP1 package to get all the other goodies. I talked to one person who hit some random snag, recommended that, and it worked out.

    Custom Web Platform Installer bundles

    You can create links that will launch the Web Platform Installer with a custom list of tools. You can see an example of this by clicking through on the install button at http://asp.net/downloads (cancelling the installation dialog). You'll see this in the address bar: http://www.microsoft.com/web/gallery/install.aspx?appsxml=&appid=MVC3;ASPNET;NETFramework4;SQLExpress;VWD

    Notice that the appid querystring parameter includes a semicolon-delimited list, and you can make your own custom Web PI links with your own desired app list. I can think of a lot of cases where that would be handy: linking to a recommended software configuration from a software project or product, setting up a recommended / documented / supported install list for a software development team or IT shop, etc. For instance, here's a link that installs just VS2010 SP1 Core and the SQL CE tools: http://www.microsoft.com/web/gallery/install.aspx?appsxml=&appid=VS2010SP1Core;SQLCETools

    Note: If you've already got all or some of the products installed, the display will reflect that. On my dev box, which has the full SP1 package, here's what the above link gives me. Here's another example - on a fresh box I created a link to install MVC 3 and the Web Farm Framework (http://www.microsoft.com/web/gallery/install.aspx?appsxml=&appid=MVC3;WebFarmFramework) and got the following items added to the cart.

    But where do I get the App ID's?

    Aha, that's the trick. You can link to a list of cool packages, but you need to know the App ID's to link to them. To figure that out, I turned on tracing in the Web Platform Installer (also handy if you're ever having trouble with a Web PI install) and from the trace logs saw that the list of packages is pulled from an XML file:

        DownloadManager Information: 0 : Loading product xml from: https://go.microsoft.com/?linkid=9763242
        DownloadManager Verbose: 0 : Connecting to https://go.microsoft.com/?linkid=9763242 with (partial) headers: Referer: wpi://2.1.0.0/Microsoft Windows NT 6.1.7601 Service Pack 1 If-Modified-Since: Wed, 09 Mar 2011 14:15:27 GMT User-Agent: Platform-Installer/3.0.3.0(Microsoft Windows NT 6.1.7601 Service Pack 1)
        DownloadManager Information: 0 : https://go.microsoft.com/?linkid=9763242 responded with 302
        DownloadManager Information: 0 : Response headers: HTTP/1.1 302 Found Cache-Control: private Content-Length: 175 Content-Type: text/html; charset=utf-8 Expires: Wed, 09 Mar 2011 22:52:28 GMT Location: https://www.microsoft.com/web/webpi/3.0/webproductlist.xml Server: Microsoft-IIS/7.5 X-AspNet-Version: 2.0.50727 X-Powered-By: ASP.NET Date: Wed, 09 Mar 2011 22:53:27 GMT

    Browsing to https://www.microsoft.com/web/webpi/3.0/webproductlist.xml shows the full list. You can search through that in your browser / text editor if you'd like, open it in Excel as an XML table, etc.
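    If you'd rather pull the current list programmatically than eyeball the XML, here's a minimal C# sketch (my own illustration, not from the Web PI toolset) that downloads the product feed and prints the product IDs. It matches elements by the local name "productId", which is an assumption about the feed's schema you should verify against the actual XML:

        using System;
        using System.Linq;
        using System.Net;
        using System.Xml.Linq;

        class WebPiAppIds
        {
            static void Main()
            {
                using (var client = new WebClient())
                {
                    string xml = client.DownloadString(
                        "https://www.microsoft.com/web/webpi/3.0/webproductlist.xml");

                    // Match on local name so we don't have to hard-code the feed's namespace.
                    var appIds = XDocument.Parse(xml)
                                          .Descendants()
                                          .Where(e => e.Name.LocalName == "productId")
                                          .Select(e => e.Value)
                                          .Distinct()
                                          .OrderBy(id => id);

                    foreach (string id in appIds)
                        Console.WriteLine(id);
                }
            }
        }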
Here's a list of the App ID's as of today: SMO SMO32 PHP52ForIISExpress PHP53ForIISExpress StaticContent DefaultDocument DirectoryBrowse HTTPErrors HTTPRedirection ASPNET NETExtensibility ASP CGI ISAPIExtensions ISAPIFilters ServerSideIncludes HTTPLogging LoggingTools RequestMonitor Tracing CustomLogging ODBCLogging BasicAuthentication WindowsAuthentication DigestAuthentication ClientCertificateMappingAuthentication IISClientCertificateMappingAuthentication URLAuthorization RequestFiltering IPSecurity StaticContentCompression DynamicContentCompression IISManagementConsole IISManagementScriptsAndTools ManagementService MetabaseAndIIS6Compatibility WASProcessModel WASNetFxEnvironment WASConfigurationAPI IIS6WPICompatibility IIS6ScriptingTools IIS6ManagementConsole LegacyFTPServer FTPServer WebDAV LegacyFTPManagementConsole FTPExtensibility AdminPack AdvancedLogging WebFarmFrameworkNonLoc ExternalCacheNonLoc WebFarmFramework WebFarmFrameworkv2 WebFarmFrameworkv2_beta ExternalCache ECacheUpdate ARRv1 ARRv2Beta1 ARRv2Beta2 ARRv2RC ARRv2NonLoc ARRv2 ARRv2Update MVC MVCBeta MVCRC1 MVCRC2 DBManager DbManagerUpdate DynamicIPRestrictions DynamicIPRestrictionsUpdate DynamicIPRestrictionsLegacy DynamicIPRestrictionsBeta2 FTPOOB IISPowershellSnapin RemoteManager SEOToolkit VS2008RTM MySQL SQLDriverPHP52IIS SQLDriverPHP53IIS SQLDriverPHP52IISExpress SQLDriverPHP53IISExpress SQLExpress SQLManagementStudio SQLExpressAdv SQLExpressTools UrlRewrite UrlRewrite2 UrlRewrite2NonLoc UrlRewrite2RC UrlRewrite2Beta UrlRewrite10 UrlScan MVC3Installer MVC3 MVC3LocInstaller MVC3Loc MVC2 VWD VWD2010SP1Pack NETFramework4 WebMatrix WebMatrix_v1Refresh IISExpress IISExpress_v1 IIS7 AspWebPagesVS AspWebPagesVS_1_0 Plan9 Plan9Loc WebMatrix_WHP SQLCE SQLCETools SQLCEVSTools SQLCEVSTools_4_0 SQLCEVSToolsInstaller_4_0 SQLCEVSToolsInstallerNew_4_0 SQLCEVSToolsInstallerRepair_EN_4_0 SQLCEVSToolsInstallerRepair_JA_4_0 SQLCEVSToolsInstallerRepair_FR_4_0 SQLCEVSToolsInstallerRepair_DE_4_0 SQLCEVSToolsInstallerRepair_ES_4_0 SQLCEVSToolsInstallerRepair_IT_4_0 SQLCEVSToolsInstallerRepair_RU_4_0 SQLCEVSToolsInstallerRepair_KO_4_0 SQLCEVSToolsInstallerRepair_ZH_CN_4_0 SQLCEVSToolsInstallerRepair_ZH_TW_4_0 VWD2008 WebDAVOOB WDeploy WDeploy_v2 WDeployNoSMO WDeploy11 WinCache52 WinCache53 NETFramework35 WindowsImagingComponent VC9Redist NETFramework20SP2 WindowsInstaller31 PowerShell PowerShellMsu PowerShell2 WindowsInstaller45 FastCGIUpdate FastCGIBackport FastCGIIIS6 IIS51 IIS60 SQLNativeClient SQLNativeClient2008 SQLNativeClient2005 SQLCLRTypes SQLCLRTypes32 SMO_10_1 MySQLConnector PHP52 PHP53 PHPManager VSVWD2010Feature VWD2010WebFeature_0 VWD2010WebFeature_1 VWD2010WebFeature_2 VS2010SP1Prerequisite RIAServicesToolkitMay2010 Silverlight4Toolkit Silverlight4Tools VSLS SSMAMySQL WebsitePanel VS2010SP1Core VS2010SP1Installer VS2010SP1Pack MissingVWDOrVSVWD2010Feature VB2010Beta2Express VCS2010Beta2Express VC2010Beta2Express RIAServicesToolkitApr2010 VS2010Beta1 VS2010RC VS2010Beta2 VS2010Beta2Express VS2k8RTM VSCPP2k8RTM VSVB2k8RTM VSCS2k8RTM VSVWDFeature LegacyWinCache SQLExpress2005 SSMS2005
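    Once you've picked App IDs from a list like the one above, composing a custom Web PI link is just string concatenation over the install.aspx URL format shown earlier - a tiny sketch:

        using System;

        class WebPiLink
        {
            static void Main()
            {
                // Semicolon-delimited appid list, per the install.aspx URL format above.
                string[] appIds = { "VS2010SP1Core", "SQLCETools" };
                string url = "http://www.microsoft.com/web/gallery/install.aspx?appsxml=&appid="
                             + string.Join(";", appIds);

                // Prints: http://www.microsoft.com/web/gallery/install.aspx?appsxml=&appid=VS2010SP1Core;SQLCETools
                Console.WriteLine(url);
            }
        }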


  • Developer’s Life – Attitude and Communication – They Can Cause Problems – Notes from the Field #027

    - by Pinal Dave
    [Note from Pinal]: This is the 27th episode of the Notes from the Field series. The biggest challenge for anyone is to understand human nature. We humans have so many things on our minds at any given moment. There are cases when what we say is not what we mean, and cases where what we mean we do not say. What we say and do follows our mood and the agenda on our minds. Sometimes our attitude creates confusion in the communication, and we end up creating a situation which is absolutely not warranted. In this episode of the Notes from the Field series, database expert Mike Walsh explains a very crucial issue we face in our careers, one which is not technical but relates more to human nature. Read on; this may be the best blog post you read in recent times.

    In this week's note from the field, I'm taking a slight departure from technical knowledge and concepts. We'll be back to it next week, I'm sure. Pinal wanted us to explain some of the issues we bump into, how we see some of our customers arrive at problem situations, and how we have helped get them back on the right track. Often it is a technical problem we are officially solving - but in a lot of cases, as consultants, we are really helping fix some communication difficulties. This is a technical blog and not an "advice column" in a newspaper - but the longer I am a consultant, and the more years I add to my experience in technology, the more I learn that the vast majority of the problems we encounter have "soft skills" included in the chain of causes for the issue we are helping overcome. This is not going to be exhaustive, but I hope that sharing a few pieces of advice inspired by real issues starts a process of searching for places where we can be the cause of these challenges, and of fixing them in ourselves. Or perhaps we can begin looking at resolving them in teams that we manage. I'll share three statements that I've either heard, read or said, and talk about some of the communication or attitude challenges highlighted by each statement.

    1 – "But that's the SAN Administrator's responsibility…"

    I heard that early on in my consulting career, when talking with a customer who had serious corruption and no good recent backups - potentially no good backups at all. The statement doesn't have to be this one exactly, but the attitude here is one of "my job stops here, and I don't care about the intent or principle of why I'm here." It's also the attitude that as long as there is someone else to blame, I'm fine… You see, in this case the DBA had a suspicion that the backups were not being handled right. They were the DBA, and they knew they had a responsibility to ensure SQL backups were good to go - it's a basic requirement of a production DBA. In my "As A DBA Where Do I Start?!" presentation, I argue that it is job #1 of a DBA. But in this case, the thought was that there was someone else to blame. Rather than create extra work and take on responsibility, it was decided to just let it be another team's responsibility. This failed the company and the company's customers, and no one won. As technologists, we should strive to go the extra mile. If there is a lack of clarity around roles and responsibilities and we know it, we should push to get it resolved - especially as DBAs, who should act as the advocates of the data contained in the databases we are responsible for.

    2 – "We've always done it this way, it's never caused a problem before!"

    Complacency.
    I have to say that many of the failures I've been paid good money to help recover from would not have happened had it not been for an attitude of complacency. If any thoughts like this have entered your mind about your situation, you may be suffering from it. If, while reading this, you get that sinking feeling in your stomach about the one thing you know should be fixed but haven't fixed, why don't you stop, go fix it, and then come back?

    "We should have better backups, but we're on a SAN so we should be fine really."
    "Technically speaking that could happen, but what are the chances?"
    "We'll just clean that up as a fast follow."
    …and so on.

    In the age of tightening IT budgets and increased expectations of uptime, availability and performance, there is no room for complacency. Our customers and business units expect - no, demand - the best. Complacency says "we will give you second best, or hopefully good enough, and we accept the risk and know this may hurt us later." Sometimes an organization will opt for "good enough", and I agree with the concept that at times the perfect can be the enemy of the good. But when we make those decisions in a vacuum, and are not reporting them up and discussing them as an organization, that is different. That is us unilaterally choosing to do something less than the best, and purposefully playing a game of chance.

    3 – "This device must accept interference from other devices but not create any"

    I've paraphrased this one - but it's something the Federal Communications Commission - a federal agency in the United States that regulates electronic communication - requires of all manufacturers of any device that could cause or receive interference electronically. I blogged in depth about this here (http://www.straightpathsql.com/archives/2011/07/relationship-advice-from-the-fcc/), so I won't go into much detail other than to say this: if we all operated more on the premise that we should do our best not to be the cause of conflict, and to be less easily offended and less upset when we perceive offense, life would be easier in many areas! This isn't always the direct cause of the issues we are called in to help with. But where we see it is in unhealthy relationships between the various technology teams at a client. We'll see teams hoarding knowledge, not sharing well with others, and almost working against other teams instead of working with them. If you trace these problems back far enough, they often stem from someone or some group of people violating this principle from the FCC.

    To Sum It Up

    Technology problems are easy to solve. At Linchpin People we help many customers get past the toughest technological challenges - and at the end of the day it is really just a repeatable process of pattern-based troubleshooting, logical thinking, and starting at the beginning and carefully stepping through to the end. It's easy at the end of the day. The tough part of what we do as consultants is the people skills. Being able to help get teams working together, being able to help teams take responsibility, to improve team-to-team communication? That is the difficult part, and we get to use those soft skills on every engagement. Work on professional development (http://professionaldevelopment.sqlpass.org/) and see continuing improvement here, not just with technology. I can teach just about anyone how to be an excellent DBA and performance tuner, but some of these soft skills are much more difficult to teach.
    If you want to get started with performance analytics and triage of virtualized SQL Servers with the help of experts, read more over at Fix Your SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T-SQL


  • Beyond Chatting: What ‘Social’ Means for CRM

    - by Natalia Rachelson
    A guest post by Steve Diamond, Senior Director, Outbound Product Management, Oracle

    In a recent post on this blog, my colleague Steve Boese asked three questions related to the widespread popularity and incredibly rapid growth of Facebook, Pinterest, and LinkedIn. Steve then addressed the many applications for collaborative solutions in the area of Human Capital Management. So, in turning to a conversation about Customer Relationship Management (CRM) and Sales Force Automation (SFA), let me ask you one simple question. How many sales people, particularly at business-to-business companies, consistently meet or beat their quotas by working alone, with no collaboration among fellow sales people, sales executives, employees in product groups, in service, in Legal, third-party partners, etc.? Hello? Is anybody out there? What's that cricket noise I hear? That's correct. Nobody!

    When it comes to Sales, introverts arguably have a distinct disadvantage. While it's certainly a truism that "success" in most professional endeavors requires working with people, it's a mandatory success factor in Sales. This fact became abundantly clear to me one early morning in the late 1990s when I joined the former Hyperion Solutions (now part of Oracle) and attended a Sales Award Ceremony. The Head of Sales at that time gave out dozens of awards - none of them to individuals, and all of them to TEAMS of individuals. That's how it works in Sales. Your colleagues help provide you with product intelligence and competitive intelligence. They help you build the best presentations, pitches, and proposals. They help you develop the most killer RFPs. They align you with the best product people to ensure you're matching the best products for the opportunity, and join you in critical meetings. They help knock the socks off your prospects in "bake off" demos. They bring in the best partners to either add complementary products to your opportunity or help you implement a solution. They work with you as a collective team.

    And so how is all this collaboration STILL typically done today? Through email. And yet we all silently or not so silently grimace about email. It's relatively siloed. It's painful to search. It's difficult to align by topic. And it's nearly impossible to re-trace meaningful and helpful conversations that occurred among a group or a team at some point in history. This is where social networking for Sales comes into play. It's about PURPOSEFUL social networking versus chattering. What is purposeful social networking? It's collaboration that's built around opportunities, accounts, and contacts. It's collaboration that delivers valuable context - on the target company, and on key competitors - just to name two examples.
    It's collaboration that can scale to provide coaching for larger numbers of sales representatives, both for general purposes, and as we've largely discussed here, for specific 'deals.' And it's collaboration that allows a team of people to collectively edit and iterate on a document like an RFP or a soon-to-be killer presentation that is maintained in a central repository, with no time wasted searching for it or worrying about version control. But lest we get carried away, let's remember that collaboration "happens" among sales people whether there is specialized software to support it or not. The human practice of sales has not changed much in the last 80 to 90 years. Collaboration has been a mainstay during this entire time. But what social networking in general, and Oracle Social Networking in particular, delivers is the opportunity for sales teams to dramatically increase their effectiveness and efficiency - to identify and close more high quality and lucrative opportunities more quickly. For most sales organizations, this is how the game is won. To learn more, please visit Oracle Social Network and Oracle Fusion Customer Relationship Management on oracle.com


  • New OFM versions released: SOA Suite 11.1.1.4 & BPM 11.1.1.4 & JDeveloper 11.1.1.4, WebLogic on JRockit 10.3.4 – feedback from the community

    - by Jürgen Kress
    Oracle SOA Suite 11g Installations

    This is the latest release of the Oracle SOA Suite 11g. Please see the Documentation tab for Release Notes, Installation Guides and other release-specific information. Please also see the List of New Features and Samples provided for this release. Release 11gR1 (11.1.1.4.0): Microsoft Windows (32-bit JVM) | Linux (32-bit JVM) | Generic

    Oracle JDeveloper 11g Rel 1 (11.1.1.x) (JDeveloper + ADF)

    Integrated development environment certified on Windows, Linux, and Macintosh. License is free (read the Pricing FAQ). Studio Edition for Windows (1.2 GB) | Studio Edition for Linux (1.3 GB) | See All | See Additional Development Tools

    Oracle WebLogic Server 11g Rel 1 (10.3.4) Installers

    The WebLogic Server installers include Oracle Coherence and Oracle Enterprise Pack for Eclipse, and support development with other Fusion Middleware products. The zip includes WebLogic Server only and is intended for WebLogic Server development only. Linux x86 (1.1 GB) | Windows x86 (1 GB) | Zip for Windows x86, Linux x86, Mac OS X (316 MB) | See All

    Oracle WebLogic Server 11gR1 (10.3.4) on JRockit Virtual Edition Download

    For additional downloads please visit the Oracle Fusion Middleware Products Update Center. Share your feedback with the @soacommunity on Twitter:

    @SOASimone (Simone Geib): SOA Suite 11gR1 (11.1.1.4.0) has just been released: http://www.oracle.com/technetwork/middleware/soasuite/downloads/index.html
    @gschmutz (Guido Schmutz): My new blog post: WebLogic Server, JDev, SOA, BPM, OSB and CEP 11.1.1.4 (PS3) available! - http://tinyurl.com/4negnpn
    @simon_haslam (Simon Haslam): I'm very pleased to see WLS 10.3.4 for JRockit VE launched at the same time as the rest of PS3 http://j.mp/gl1nQm (32bit anyway)
    @lucasjellema (Lucas Jellema): See http://www.oracle.com/ocom/groups/public/@otn/documents/webcontent/156082.xml for PS3 extension downloads BPM, SOA Editor, WebCenter
    @demed (Demed): List of new features in @OracleSOA 11gR1 PS3: http://bit.ly/fVRwsP is not extremely long but huge release by # of bugs fixed. Go!
    @biemond (Edwin Biemond): WebLogic 10.3.4 new features http://bit.ly/f7L1Eu Exalogic Elastic Cloud, JPA2, Maven plugin, OWSM policies on WebLogic SCA applications
    @JDeveloper (JDeveloper & ADF): JDeveloper and Oracle ADF 11g Release 1 Patch Set 3 (11.1.1.4.0): New Features and Bug Fixes http://bit.ly/feghnY
    @simon_haslam (Simon Haslam): WebLogic Server 10.3.4 (i.e. 11gR1 PS3) available now too http://bit.ly/eeysZ2
    @JDeveloper (JDeveloper & ADF): Share your impressions on the new JDeveloper 11g Patchset 3 release that came out today! Download it here: http://bit.ly/dogRN8
    @VikasAatOracle (Vikas Anand): SOA Suite 11gR1 PS3 is Hotpluggable ... see list of features that @Demed posted.. #soa #soacommunity

    New versions of Oracle Fusion Middleware 11g R1 (11.1.1.4.x) include:

    - Oracle WebLogic Server 11g R1 (10.3.4)
    - Oracle SOA Suite 11g R1 (11.1.1.4.0)
    - Oracle Business Process Management 11g R1 (11.1.1.4.0)
    - Oracle Complex Event Processing 11g R1 (11.1.1.4.0)
    - Oracle Application Integration Architecture Foundation Pack 11g R1 (11.1.1.4.0)
    - Oracle Service Bus 11g R1 (11.1.1.4.0)
    - Oracle Enterprise Repository 11g R1 (11.1.1.4.0)
    - Oracle Identity Management 11g R1 (11.1.1.4.0)
    - Oracle Enterprise Content Management 11g R1 (11.1.1.4.0)
    - Oracle WebCenter 11g R1 (11.1.1.4.0) - coming soon
    - Oracle Forms, Reports, Portal & Discoverer 11g R1 (11.1.1.4.0)
    - Oracle Repository Creation Utility 11g R1 (11.1.1.4.0)
    - Oracle JDeveloper & Application Development Runtime 11g R1 (11.1.1.4.0)

    Resources: Download (OTN) | Certification | Documentation

    New Features in Oracle SOA Suite 11g Release 1 (11.1.1.4.0), updated January 2011. Go to the Oracle SOA Suite 11g Doc.

    Introduction: Oracle SOA Suite 11gR1 (11.1.1.4.0) includes both bug fixes and the new features listed below - click on the title of each feature for more details. Downloads, documentation links and more information on the Oracle SOA Suite are available on the SOA Suite OTN page and, as always, we welcome your feedback on the SOA OTN forum.

    New in Oracle SOA Suite in this release:

    BPEL Component
    - BPEL 2.0 support in JDeveloper: The BPEL editor in JDeveloper now generates BPEL 2.0 code and introduces several new activities.
    - Augmented XML variables auto-initialization capabilities: The XML variable auto-initialization capabilities have been enhanced to support two additional use cases: to initialize the to-spec node if it doesn't exist during the rule, and to initialize array elements.
    - New Assign Activity dialog: The new Assign Activity supports the same drag & drop paradigm used for the XSLT mapper, greatly streamlining the task of assigning multiple variables.

    Mediator Component
    - Time window parameter for the resequencer: This new parameter lets users initiate a best-effort resequencing based on a time window rather than a number of messages.
    - Support for attachments in the Mediator assign dialog: The Mediator assign dialog now supports attachments, enabling usage of the Mediator to transmit attachments even if source and target schemas are different.

    Adapters & Bindings
    - ChunkSize property added to the File Adapter header properties: The ChunkSize property of the File Adapter is now available as a header property, allowing in-process modification of the value for this property.
    - Improved support for distributed WLS JMS topics through automatic rebalancing of listeners: The JMS Adapter has been enhanced to subscribe to administrative events from WLS JMS. Based on these events, it dynamically rebalances listeners when there are changes to the members of a local or remote WLS JMS distributed destination.
    - JDeveloper configuration wizard for custom JCA adapters: A new wizard is available in JDeveloper to configure custom-built adapters.

    Administration & Enterprise Manager
    - Enhanced purging capabilities to manage database growth: Historical instance data can now be purged using three different strategies: batch script, scheduled batch script or data partitioning.
    - Asynchronous bulk instance deletion in Enterprise Manager: Bulk deletion of instances in Enterprise Manager now executes as an asynchronous operation, returning control to the user as soon as the action has been submitted and acknowledged.

    B2B
    - Ability to schedule partner downtime: This feature allows trading partners to notify each other about planned downtime and to delay delivery of messages during that period.
    - Message sequencing: B2B now supports both inbound and outbound message sequencing.
    - Simplified BAM integration with B2B: B2B ships with various pre-configured artifacts to simplify monitoring in BAM.
    - Instance Message Java API for B2B: The new instance message Java API supports programmatic access to B2B instance message data.

    Oracle Service Bus (OSB)
    - Certification of the File and FTP JCA Adapters: The File and FTP JCA adapters are now certified for use with Oracle Service Bus (in addition to the native transports).
    - Security enhancements: Oracle Service Bus now supports SAML 2.0 as well as the OWSM authorization policies.
    Check the Oracle Service Bus 11.1.1.4 Release Notes for a complete list of new features.

    Installation, Hot-Pluggability & Certifications
    - Ability to run Oracle SOA Suite on IBM WebSphere Application Server: Oracle SOA Suite can now be deployed on IBM WebSphere Application Server Network Deployment (ND) 7.0.11 and IBM WebSphere Application Server 7.0.11.
    - Single JVM developer installation template: Oracle SOA Suite can now be targeted to the WebLogic admin server - there is no requirement to also have a managed server. This topology is intended to minimize the memory footprint of development environments. This is in addition to the list of supported browsers, operating systems and databases already certified in prior releases.

    Complex Event Processing (CEP)
    - IDE enhancements: This release introduces several enhancements to the development IDE, such as adapter wizards and an event-type repository.
    - CQL enhancements: CQL enhancements include JDBC data cartridges and parametrized queries.
    - Tracing and injecting events in the Event Processing Network (EPN): In the development environment you can now trace and inject events.
    Check the Oracle CEP 11.1.1.4 Release Notes for a complete list of new features.

    For more information, see the SOA Suite page on OTN. For more information on SOA Specialization and the SOA Partner Community, please feel free to register at www.oracle.com/goto/emea/soa (OPN account required). Technorati Tags: SOA Suite 11.1.1.4, JDeveloper 11.1.1.4, WebLogic 10.3.4, JRockit 10.3.4, SOA Community, Oracle, OPN, SOA, Simone Geib, Guido Schmutz, Edwin Biemond, Lucas Jellema, Simon Haslam, Demed, Vikas Anand, Jürgen Kress


  • Converting a generic list into a JSON string and then handling it in JavaScript

    - by Jalpesh P. Vadgama
    We all know that JSON (JavaScript Object Notation) is very useful for manipulating strings on the client side with JavaScript, and its performance is good across browsers. So let's create a simple example: we will build a generic list, convert it into a JSON string in a web service, and then call that web service from JavaScript and handle the result there. To do this we need an info class (type) for which we are going to create the generic list. Here is the code for that - a simple class with two properties, UserId and UserName:

        public class UserInfo
        {
            public int UserId { get; set; }
            public string UserName { get; set; }
        }

    Now let's create a web service whose web method will build the list and then convert it into a JSON string with the JavaScriptSerializer class. Here is the web service class:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.Services;

        namespace Experiment.WebService
        {
            /// <summary>
            /// Summary description for WsApplicationUser
            /// </summary>
            [WebService(Namespace = "http://tempuri.org/")]
            [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
            [System.ComponentModel.ToolboxItem(false)]
            // To allow this Web Service to be called from script, using ASP.NET AJAX,
            // uncomment the following line.
            [System.Web.Script.Services.ScriptService]
            public class WsApplicationUser : System.Web.Services.WebService
            {
                [WebMethod]
                public string GetUserList()
                {
                    List<UserInfo> userList = new List<UserInfo>();
                    for (int i = 1; i <= 5; i++)
                    {
                        UserInfo userInfo = new UserInfo();
                        userInfo.UserId = i;
                        userInfo.UserName = string.Format("{0}{1}", "J", i.ToString());
                        userList.Add(userInfo);
                    }
                    System.Web.Script.Serialization.JavaScriptSerializer jSerializer =
                        new System.Web.Script.Serialization.JavaScriptSerializer();
                    return jSerializer.Serialize(userList);
                }
            }
        }

    Note: You must have the attribute '[System.Web.Script.Services.ScriptService]' on the web service class, as it enables the web service to be called from the client side.

    Now that we have created the web service class, let's create a JavaScript function 'GetUserList' which will call the web service, like the following:

        function GetUserList() {
            Experiment.WebService.WsApplicationUser.GetUserList(RequestCompleteCallback, RequestFailedCallback);
        }

    As you can see, we pass two callback functions, RequestCompleteCallback and RequestFailedCallback. RequestCompleteCallback handles the result of the web service, and if an error occurs, RequestFailedCallback prints it. Following is the code for both functions:

        function RequestCompleteCallback(result) {
            result = eval(result);
            var divResult = document.getElementById("divUserList");
            CreateUserListTable(result);
        }

        function RequestFailedCallback(error) {
            var stackTrace = error.get_stackTrace();
            var message = error.get_message();
            var statusCode = error.get_statusCode();
            var exceptionType = error.get_exceptionType();
            var timedout = error.get_timedOut();
            // Display the error.
            var divResult = document.getElementById("divUserList");
            divResult.innerHTML = "Stack Trace: " + stackTrace + "<br/>" +
                "Service Error: " + message + "<br/>" +
                "Status Code: " + statusCode + "<br/>" +
                "Exception Type: " + exceptionType + "<br/>" +
                "Timedout: " + timedout;
        }

    In RequestCompleteCallback you can see that we use the 'eval' function, which parses the JSON string into a JavaScript array.
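    For reference, here is a minimal console sketch (my own illustration, reusing the UserInfo class above and referencing System.Web.Extensions) of what the serialized string handed to 'eval' looks like:

        using System.Collections.Generic;
        using System.Web.Script.Serialization;

        class SerializerDemo
        {
            static void Main()
            {
                var users = new List<UserInfo>
                {
                    new UserInfo { UserId = 1, UserName = "J1" },
                    new UserInfo { UserId = 2, UserName = "J2" }
                };
                var serializer = new JavaScriptSerializer();

                // Prints: [{"UserId":1,"UserName":"J1"},{"UserId":2,"UserName":"J2"}]
                System.Console.WriteLine(serializer.Serialize(users));
            }
        }

    Each list item becomes a JSON object, so after eval the result is a JavaScript array we can index and iterate.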
    Then we call a function named 'CreateUserListTable', which builds an HTML table string and places it in a div. Here is the code for that function:

        function CreateUserListTable(userList) {
            var tablestring = '<table ><tr><td>UserID</td><td>UserName</td></tr>';
            for (var i = 0, len = userList.length; i < len; ++i) {
                tablestring = tablestring + "<tr>";
                tablestring = tablestring + "<td>" + userList[i].UserId + "</td>";
                tablestring = tablestring + "<td>" + userList[i].UserName + "</td>";
                tablestring = tablestring + "</tr>";
            }
            tablestring = tablestring + "</table>";
            var divResult = document.getElementById("divUserList");
            divResult.innerHTML = tablestring;
        }

    Now let's create the div that will hold all the HTML generated by this function. We also need to add a service reference to enable the web service from the client side. Here is all the HTML we have:

        <form id="form1" runat="server">
            <asp:ScriptManager ID="myScriptManager" runat="Server">
                <Services>
                    <asp:ServiceReference Path="~/WebService/WsApplicationUser.asmx" />
                </Services>
            </asp:ScriptManager>
            <div id="divUserList">
            </div>
        </form>

    Since we have not yet defined where the 'GetUserList' function gets called, let's hook it to the window onload event of JavaScript, like the following:

        window.onload = GetUserList;

    That's it. Now let's run it in the browser to see whether it works, and here is the output in the browser, as expected. This was a very basic example, but you can create your own JavaScript-enabled grid from it, and the possibilities are unlimited. Stay tuned for more.. Happy programming.. Technorati Tags: JSON,Javascript,ASP.NET,WebService


  • Lessons from rewriting POP Forums for MVC, open source-like

    - by Jeff
    It has been a ton of work, interrupted over the last two years by unemployment, moving, a baby, failing to sell houses and other life events, but it's really exciting to see POP Forums v9 coming together. I'm not even sure when I decided to really commit to it as an open source project, but working on the same team as the CodePlex folks probably had something to do with it. Moving along the roadmap I set for myself, the app is now running on a quasi-production site... we launched MouseZoom last weekend. (That's a post-beta 1 build of the forum. There's also some nifty Silverlight DeepZoom goodness on that site.)

    I have to make a point to illustrate just how important starting over was for me. I started this forum thing for my sites in old ASP more than ten years ago. What a mess that stuff was, including SQL injection vulnerabilities and all kinds of crap. It went to ASP.NET in 2002, but even then, it felt a little too much like script. More than a year later, in 2003, I did an honest to goodness rewrite. If you've been in this business of writing code for any amount of time, you know how much you hate what you wrote a month ago, so just imagine that with seven years in between. The subsequent versions still carried a fair amount of crap, and that's why I had to start over, to make a clean break. Mind you, much of that crap is still running on some of my production sites in a stable manner, but it's a pain in the ass to maintain. So with that clean break, there is much that I have learned. These are a few of those lessons, in no particular order...

    Avoid shiny object syndrome

    Over the years, I've embraced new things without bothering to ask myself why. I remember spending the better part of a year trying to adapt this app to use the membership and profile API's in ASP.NET, just because they were there. They didn't solve any known problem. Early on in this version, I dabbled in exotic ORM's, even though I already had the fundamental SQL that I knew worked. I bloated up the client side code with all kinds of jQuery UI and plugins just because, and it got in the way. All the new shiny can be distracting, and I've come to realize that I've allowed it to be a distraction most of my professional life.

    Just query what you need

    I've spent a lot of time over-thinking how to query data. In the SQL world, this means exotic joins, special caches, the read-update-commit loop of ORM's, etc. There are times when you have to remind yourself that you aren't Facebook, you'll never be Facebook, and that databases are in fact intended to serve data. In a lot of projects, back in the day, I used to have these big, rich data objects and pass them all over the place, through various application tiers, when in reality, all I needed was some ID from the entity. I try to be mindful of how many queries hit the database on a given request, but I don't obsess over it. I just get what I need.

    Don't spend too much time worrying about your unit tests

    If you've looked at any of the tests for POP Forums, you might offer an audible WTF. That's OK. There's a whole lot of mocking going on. In some cases, it points out where you're doing too much, and that's good for improving your design. In other cases it shows where your design sucks. But the biggest trap of unit testing is that you worry it should be prettier. That's a waste of time. When you write a test, in many cases before the production code, the important part is that you're testing the right thing.
    If you have to mock up a bunch of stuff to test the outcome, so be it, but it's not wasted time. You're still doing up the typical arrange-act-assert deal, and you'll be able to read that later if you need to.

    Get back to your HTTP roots

    ASP.NET Webforms did a reasonably decent job at abstracting us away from the stateless nature of the Web. A lot of people criticize it, but I think it all worked pretty well. These days, with MVC, jQuery, REST services, and what not, we've gone back to thinking about the wire. The nuts and bolts passing between our Web browser and server matter. This doesn't make things harder, in my opinion, it makes them easier. There is something incredibly freeing about how we approach development of Web apps now. HTTP is a really simple protocol, and the stuff we push through it, in particular HTML and JSON, is pretty simple too. The debugging points are really easy to trap and trace.

    Premature optimization is premature

    I'll go back to the data thing for a moment. I've been known to look at a particular action or use case and stress about the number of calls that are made to the database. I'm not suggesting that it's a bad thing to keep these in mind, but if you worry about it outside of the context of the actual impact, you're wasting time. For example, I query the database for last read times in a forum separately from the user and the list of forums. The impact on performance barely exists. If I put it under load, exceeding the kind of load I expect, it still barely has an impact. Then consider it only counts for logged in users. The context of this "inefficient" action is that it doesn't matter. Did I mention I won't be Facebook?

    Solve your own problems first

    This is another trap I've fallen into. I've often thought about what other people might need for some feature or aspect of the app. In other words, I was willing to make design decisions based on non-existent data. How stupid is that? When I decided to truly open source this thing, building for myself first was a stated design goal. This app has to serve the audiences of CoasterBuzz, MouseZoom and other sites first. In this development scenario, you don't have access to mountains of usability studies or user focus groups. You have to start with what you know.

    I'm sure there are other points I could make too. It has been a lot of fun to work on, and I look forward to evolving the UI as time goes on. That's where I hope to see more magic in the future.
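    To make the testing point concrete, here is a minimal arrange-act-assert sketch in that mock-heavy style. The types are hypothetical stand-ins, not the actual POP Forums classes, and Moq and NUnit are assumed as the mocking and test frameworks:

        using Moq;
        using NUnit.Framework;

        public interface IForumRepository
        {
            int GetPostCount(int forumId);
        }

        public class ForumService
        {
            private readonly IForumRepository _repo;
            public ForumService(IForumRepository repo) { _repo = repo; }
            public bool IsEmpty(int forumId) { return _repo.GetPostCount(forumId) == 0; }
        }

        [TestFixture]
        public class ForumServiceTests
        {
            [Test]
            public void IsEmpty_ReturnsTrue_WhenForumHasNoPosts()
            {
                // Arrange: mock up just enough to test the outcome.
                var repo = new Mock<IForumRepository>();
                repo.Setup(r => r.GetPostCount(123)).Returns(0);
                var service = new ForumService(repo.Object);

                // Act
                bool result = service.IsEmpty(123);

                // Assert
                Assert.IsTrue(result);
            }
        }

    It isn't pretty, but it tests the right thing, and that is the part that matters.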


  • At most how many customized P3 attributes could be added into Agile?

    - by Jie Chen
    I have one customer / Oracle Partner consultant asking me this question: how many customized attributes are allowed on Agile's subclass Page Three? I never researched this, because the Agile User Guide never says, and theoretically Agile supports an unlimited number of customized attributes, unless the browser itself cannot handle them in the allocated memory. However, my customer says that when adding almost 1000 attributes, the browser (Web Client) will not show any Page Three attributes, including all the out-of-box attributes. Let's see why.

    Analysis

    It is horrible to add 1000 attributes manually, so let's do it with a batch SQL like the one below, which adds them to the Item subclass's Page Three tab. Do not execute this SQL yourself, because it will not take effect with your different node ids.

        CREATE OR REPLACE PROCEDURE createP3Text(v_name IN VARCHAR2) IS
          v_nid NUMBER;
          v_pid NUMBER;
        BEGIN
          select SEQNODETABLE.nextval into v_nid from dual;
          Insert Into nodeTable ( id,parentID,description,objType,inherit,helpID,version,name ) values ( v_nid,2473003, v_name ,1,0,0,0, v_name);
          Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,0,2,1,0,1,925, null);
          Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,1,0,0,0,0,1,'0');
          Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,1,0,0,0,0,2,'0');
          Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,1,2,2,0,1,3,'50');
          Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,0,2,1,0,1,5, null);
          Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,0,2,2,0,1,6,'50');
          Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,0,2,2,0,0,7,'0');
          Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,0,4,1,451,1,8,'0');
          Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,0,4,1,451,1,9,'1');
          Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,1,2,1,0,1,10,v_name);
          Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,1,0,0,0,0,11,'0');
          Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,1,4,1,11743,1,14,'2');
          Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,0,2,1,0,1,30, null);
          Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,0,2,1,0,1,38, null);
          Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,1,4,1,451,0,59,'1');
          Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,1,4,1,451,0,60,'1');
          Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,1,4,1,724,0,61, null);
          Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,1,2,1,0,0,232,'0');
          Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,1,4,1,451,0,233,'1');
          Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,0,4,1,12239,1,415,'13307');
          Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,1,2,1,0,0,605,'0');
          Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,0,4,1,451,1,610,'0');
          Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,1,4,1,451,0,716,'1');
          Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,0,4,1,451,1,795,'0');
          Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,0,4,1,2000008821,1,864,'2');
          Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,0,4,1,451,1,923,'0');
          Insert Into propertyTable ( ID,parentID,readOnly,attType,dataType,selection,visible,propertyID,value ) values ( SEQPROPERTYTABLE.nextval,v_nid,0,4,1,451,0,719,'0');
          Insert Into tableInfo ( tabID,tableID,classID,att,ordering ) values ( 2473005,1501,2473002,v_nid,9999);
          commit;
        END createP3Text;
        /
        BEGIN
          FOR i in 1..1000 LOOP
            createP3Text('MyText' || i);
          END LOOP;
        END;
        /
        DROP PROCEDURE createP3Text;
        COMMIT;

    Now restart the Agile server and check the server's log; we notice:

        ***** Node Created : 85625
        ***** Property Created : 184579
        +++++++++++++++++++++++++++++++++++++
        + Agile PLM Server Starting Up... +
        +++++++++++++++++++++++++++++++++++++

    However, the log from before the batch SQL read:

        ***** Node Created : 84625
        ***** Property Created : 157579
        +++++++++++++++++++++++++++++++++++++
        + Agile PLM Server Starting Up... +
        +++++++++++++++++++++++++++++++++++++

    Obviously we successfully imported 1000 (85625-84625) attributes. Now go to JavaClient and confirm whether we have them. Theoretically we should be able to open such an item object and see all 1000 attributes and their values, but we get the error below, with no error hints in the server log. Never mind - we have the Java Console for JavaClient. If we open the same item in JavaClient, we get a clear error and a detailed trace in the Java Console:

        ORA-01795: maximum number of expressions in a list is 1000
        java.sql.SQLException: ORA-01795: maximum number of expressions in a list is 1000
        at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:125)
        ... ...
        at weblogic.jdbc.wrapper.PreparedStatement.executeQuery(PreparedStatement.java:128)
        at com.agile.pc.cmserver.base.AgileFlexUtil.setFlexValuesForOneRowTable(AgileFlexUtil.java:1104)
        at com.agile.pc.cmserver.base.BaseFlexTableDAO.loadExtraFlexAttValues(BaseFlexTableDAO.java:111)
        at com.agile.pc.cmserver.base.BasePageThreeDAO.loadTable(BasePageThreeDAO.java:108)

    If you are interested in the background of the problem, you may de-compile the class com.agile.pc.cmserver.base.AgileFlexUtil.setFlexValuesForOneRowTable and find the root cause: Agile happens to hit the Oracle Database limitation that at most 1000 expressions may appear in an "IN" list. Check here: http://ora-01795.ora-code.com If you need Oracle Agile's final solution, please contact Oracle Agile Support.

    Performance

    The two screenshots below show JVM heap usage before and after the batch SQL. There is no big memory gap between the two cases, so there is definitely no performance impact on the Agile Application Server unless you have more than 1000 attributes for EACH of your dozens of subclasses. And for the client, 1000 attributes should not impact the browser's performance, because in HTML we only use dt and dd for each attribute's label/value pair. It is quite lightweight.
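    As background, the usual application-side workaround for ORA-01795 is to split a large IN list into chunks of at most 1000 expressions and OR them together. A hedged C# sketch of the idea, with hypothetical names - this is not Agile's actual fix:

        using System.Collections.Generic;
        using System.Linq;

        static class InListChunker
        {
            // Builds "(col IN (...) OR col IN (...))" with at most 1000 ids
            // per IN list, staying under the ORA-01795 limit.
            public static string BuildInClause(string column, IList<int> ids)
            {
                const int chunkSize = 1000;
                var chunks = new List<string>();
                for (int i = 0; i < ids.Count; i += chunkSize)
                {
                    string list = string.Join(",", ids.Skip(i).Take(chunkSize));
                    chunks.Add(column + " IN (" + list + ")");
                }
                return "(" + string.Join(" OR ", chunks) + ")";
            }
        }

    For example, with 1001 ids, BuildInClause("att", ids) yields "(att IN (first 1000 ids) OR att IN (the rest))". Bind variables would be preferable in real code; the string form just mirrors the query seen in the trace.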


  • WhatsApp & Tasker for Android – Read & Write messages

    - by Shaurya Anand
So, I finally gave up on all my previous Microsoft Mobile/Phone OS devices and made my switch to Android this year. I have been using my Samsung Galaxy Note GT-N7000 with CyanogenMod 9.1.0 (http://get.cm/get/jenkins/7086/cm-9.1.0-n7000.zip) and ClockworkMod 6.0.1.2 (http://download2.clockworkmod.com/recoveries/recovery-clockwork-6.0.1.2-n7000.zip) since August this year, and I am so happy with the performance and the flexibility it offers me. As a software developer by profession, I expect most of my gadgets to be highly customizable and programmable (one time or at intervals) to suit my needs as closely as they can. I was introduced to Automation for Android - Tasker (https://play.google.com/store/apps/details?id=net.dinglisch.android.taskerm&hl=en) via reddit (http://www.reddit.com/r/tasker), and the word 'automation' was enough for me to dive right into this app. The only automation I had done earlier was switching profiles depending on location on other phones. And now, just imagine the complete set of possibilities that can be automated on the phone, or via the phone. I did my research and found a couple of other tools that do the same (or close to) what Tasker can do, and a few of them are even free. There's one even by Microsoft, called on{X} (https://play.google.com/store/apps/details?id=com.microsoft.onx.app&hl=en). Microsoft's on{X} really caught my eye. You can write code for your phone in their web application, deploy it to your phone, and even trace the flow, all from your PC. Really brilliant; I love the fact that it's all JavaScript. Here comes the but: it is still very, very young, and its policy of accessing my News Feed on Facebook is not something that I can digest. On{X} is good, but as I said earlier, the API is not very mature, and hence I gave up on it. I bought Tasker, the best 5,00 € I spent in ages, and I want to talk about it in this post. I am still a "noob" at operating this tool, but I tried my shot at automating WhatsApp (https://play.google.com/store/apps/details?id=com.whatsapp&hl=en), a popular messenger for various platforms. The requirement for the automation: if I send a WhatsApp 'wru' message to the phone, it should respond with the phone's location and battery level. It could be useful if you want to locate a misplaced phone or automatically reply to your partner/friend; honestly, I don't know what you will use it for. Through this post, I am just introducing automating WhatsApp using Tasker. Before we begin: the following script only works when your phone is rooted, as we will be accessing the WhatsApp database and typing special characters like ':'. Let's follow the code line by line:

Profile: Location request from XYZ. (12) // Name of your profile.
Event: Notification [ Owner Application:WhatsApp Title:* ] // When a new notification comes from WhatsApp, this event is fired. Read the end note if you face problems with the Chrome app after enabling Tasker accessibility.
Enter:
A1: Run Shell [ Command:sqlite3 /data/data/com.whatsapp/databases/msgstore.db "SELECT _id, data FROM messages WHERE key_from_me='0' AND key_remote_jid LIKE '%XXXXXXXXXXX%' ORDER BY _id DESC LIMIT 1;" Timeout (Seconds):10 Use Root:On Store Result In:%WHATSAPP_CURRREQ ] // We access the WhatsApp database and check whether the message comes from the designated phone number or not; we mustn't reply to every message. Replace XXXXXXXXXXX with the phone number of your message sender. I set a timeout of 10 seconds, in case WhatsApp is busy accessing the database. The row id and the last message are stored in the variable %WHATSAPP_CURRREQ.
A2: If [ %WHATSAPP_CURRREQ ~R .*[wW][rR][uU].* ] // Check if the pattern of the message is correct and we are all set to send the location.
A3: If [ %WHATSAPP_CURRREQ !~ %WHATSAPP_LASTREQ ] // Verify that the message is different from the last request. Remember, every message has a unique id.
A4: Notify [ Title:WhatsApp location request... Text:Sending location to Krati Gupta... Icon:<icon> Number:0 Permanent:On Priority:3 ] // Just a notification that the location message is being prepared. Note that it is a permanent notification; we will clear it later.
A5: Secure Settings [ Configuration:Pattern Lock Disabled Package:com.intangibleobject.securesettings.plugin Name:Secure Settings ] // I am disabling the pattern lock that I use, via the Secure Settings plugin. You can download the plugin from here: https://play.google.com/store/apps/details?id=com.intangibleobject.securesettings.plugin&hl=en
A6: Secure Settings [ Configuration:Keyguard Disabled Package:com.intangibleobject.securesettings.plugin Name:Secure Settings ] // Disable the keyguard; it is useful when your phone is locked and you want to automate everything, even the typing.
A7: Secure Settings [ Configuration:GPS Enabled Package:com.intangibleobject.securesettings.plugin Name:Secure Settings ] // Pretty clear: turn on the GPS; the location is fetched at A9.
A8: AutoShortcut [ Configuration:WhatsApp: Some One Package:com.joaomgcd.autoshortcut Name:AutoShortcut ] // I am using the AutoShortcut plugin (https://play.google.com/store/apps/details?id=com.joaomgcd.autoshortcut) to start WhatsApp with the intended recipient. Replace Some One (actually choose the right recipient from the plugin).
A9: Get Location [ Source:Any Timeout (Seconds):30 Continue Task Immediately:Off Keep Tracking:Off ] // I am getting the location; the timeout is 30 seconds, adjust it accordingly.
A10: Secure Settings [ Configuration:Screen Dim 5 Seconds Package:com.intangibleobject.securesettings.plugin Name:Secure Settings ] // This extension of the Secure Settings plugin wakes your device so that you can type out the string in the WhatsApp app.
A11: Run Shell [ Command:input text LOCATION:maps.google.com/maps?q=%LOC Timeout (Seconds):0 Use Root:On Store Result In: ] // I am using a shell command to type the text into the window, because the ':' will not be typed by the Type task in Tasker. This is also way faster, but remember you need root for this, not for the other way of typing.
A12: Dpad [ Button:Right Repeat Times:1 ] // Focus the Send button.
A13: Dpad [ Button:Press Repeat Times:1 ] // And press it.
A14: Dpad [ Button:Left Repeat Times:1 ] // Get back to the typing box.
A15: Run Shell [ Command:input text LOCATION_ACCURACY:%LOCACC Timeout (Seconds):0 Use Root:On Store Result In: ]
A16: Dpad [ Button:Right Repeat Times:1 ]
A17: Dpad [ Button:Press Repeat Times:1 ]
A18: Dpad [ Button:Left Repeat Times:1 ]
A19: Run Shell [ Command:input text BATTERY_LEVEL:%BATT% Timeout (Seconds):0 Use Root:On Store Result In: ] // I am adding the battery level in my case as well.
A20: Dpad [ Button:Right Repeat Times:1 ]
A21: Dpad [ Button:Press Repeat Times:1 ]
A22: Variable Set [ Name:%WHATSAPP_LASTREQ To:%WHATSAPP_CURRREQ Do Maths:Off Append:Off ] // And now we record that the request is done.
A23: Button [ Button:Back ] // I am exiting WhatsApp nicely and not killing it. If you are the murderer kind, kill it; just know you don't have any place in heaven.
A24: Button [ Button:Back ]
A25: Notify Cancel [ Title: Warn Not Exist:Off ] // Remove the permanent notification.
A26: Notify [ Title:WhatsApp location request Text:Location sent successfully. Icon:<icon> Number:0 Permanent:Off Priority:3 ] // Make a temporary notification saying the location was sent.
A27: Secure Settings [ Configuration:GPS Disabled Package:com.intangibleobject.securesettings.plugin Name:Secure Settings ] // Disable all the horrible things we turned on earlier.
A28: Secure Settings [ Configuration:Pattern Lock Enabled Package:com.intangibleobject.securesettings.plugin Name:Secure Settings ]
A29: Secure Settings [ Configuration:Keyguard Enabled Package:com.intangibleobject.securesettings.plugin Name:Secure Settings ]
A30: End If
A31: End If

Download this Task from here: http://db.tt/9vRmbhyb

That's it: with the above small example you can read messages from, and write messages to, the WhatsApp app. I am using n7000-cm9.1-cwr6. Oh yeah, and if TalkBack gets auto-enabled for the Chrome browser after enabling Tasker accessibility, you need to turn off its option to run web scripts. Tasker is amazing; I have automated a lot of tasks using this tool. I will share a few non-generic ones with you in my coming posts here.
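As a side note, if you want to prototype the guard logic of steps A1-A3 outside Tasker, here is a minimal C# sketch (C# purely for illustration; the class name WruGuard is mine, and the real checks run inside Tasker's If actions):

using System.Text.RegularExpressions;

class WruGuard
{
    private string lastRequest = "";

    // 'currentRequest' stands in for %WHATSAPP_CURRREQ: "<_id>|<message text>".
    public bool ShouldReply(string currentRequest)
    {
        // Same pattern as the A2 condition: match 'wru' case-insensitively.
        bool matches = Regex.IsMatch(currentRequest, ".*[wW][rR][uU].*");
        // Same idea as A3: only react when the row id differs from the last handled request.
        bool isNew = currentRequest != lastRequest;
        if (matches && isNew)
        {
            lastRequest = currentRequest; // mirrors A22: remember the handled request
            return true;
        }
        return false;
    }
}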

    Read the article

  • ApiChange Is Released!

    - by Alois Kraus
I have been working on a little tool to simplify my life, and perhaps yours as a developer as well. It is basically a command line tool that allows you to execute queries on your compiled .NET code base. The main purpose is to find out how big the impact of an API change would be if you changed this or that. Now you can do high-level operations like:
- Diff public types for breaking changes.
- Who uses a method?
- Who uses a type?
- Who implements an interface?
- Who references me?
- What format does the binary have (32/64, Managed C++, pure IL, unmanaged)?
- Search for all event subscribers and unsubscribers.

A unique feature is the check for event subscription imbalances. Forgotten event subscriptions are the cause of perhaps 90% of managed memory leaks. The check is done at a per-class level: if one class subscribes to an event more often than it unsubscribes, it is treated as a possible event subscription imbalance. Another unique ability is to search for users of string literals, which allows you to track users of a string constant; that is not possible otherwise. For incremental builds, the ShowRebuildTargets command can be used to identify the dependent targets that need a rebuild after you compile one assembly. It has some heuristics in place to determine the impact of breaking changes and finds out which targets need to be recompiled as well. It has a ton of other features, and an API to access these things programmatically, so you can build upon these simple queries and create even better tools. Perhaps we will get a Visual Studio plug-in? You can download it from CodePlex here. It works via XCopy deployment. Simply let it run and check out the command line help. The best feature, in my opinion, is that the output of nearly all commands can be piped to Excel for further analysis. Since it also reads the PDBs, it can show you the source file name and line number for all matches. The following picture shows the output of a -whousestype query. The following command checks where types from BaseLibraryV1.dll are used inside DependantLibV1.dll. All matches are printed out with the reason and matching item, along with file and line number. There is even a hyperlink to the match which will open Visual Studio.

ApiChange -whousestype "*" BaseLibraryV1.dll -in DependantLibV1.dll -excel

The "*" is the actual query, which means all types. The syntax is the same as in C#, except that placeholders are allowed ;-). More info can be found in the CodePlex documentation.

The tool was developed in a TDD-style manner, which means it is heavily tested and already used by quite a large user base inside the company I work for. Luckily for you, I got permission to make it public, so you can take advantage of it. It is fully instrumented with tracing. If you find bugs, simply add the -trace command line switch to find out what is failing and send me the output. How is it done? Your first guess might be that it uses reflection. Wrong. It is based on Mono Cecil, a free IL parser with a fantastic API to access all internals of a managed assembly. The speed is awesome, and to make it even faster I made the tool heavily multithreaded. The query above executed in 1.8s, including the Excel output. On a rather slow machine I can analyze over 1500 assemblies in less than 40s with very low memory consumption. The true power of Mono Cecil is that I can load an assembly like any other data file. I have no problem unloading a file, but if I had used reflection I would need to unload a whole AppDomain just to get rid of one assembly in memory. Just to give you a glimpse of how ApiChange.Api.dll can be used, I show you one of the unit tests:

public void Can_Find_GenericMethodInvocations_With_Type_Parameters()
{
    // 1. Create an aggregator to collect our matches
    UsageQueryAggregator agg = new UsageQueryAggregator();

    // 2. This is the type we want to search for. Load it via the type query
    var decimalType = TypeQuery.GetTypeByName(TestConstants.MscorlibAssembly, "System.Decimal");

    // 3. Register the type query which searches for uses of the Decimal type
    new WhoUsesType(agg, decimalType);

    // 4. Search for all users of the Decimal type in the DependandLibV1Assembly
    agg.Analyze(TestConstants.DependandLibV1Assembly);

    // Extract matches and assert
    Assert.AreEqual(2, agg.MethodMatches.Count, "Method match count");
    Assert.AreEqual("UseGenericMethod", agg.MethodMatches[0].Match.Name);
    Assert.AreEqual("UseGenericMethod", agg.MethodMatches[1].Match.Name);
}

Many thanks go from here to Jb Evain for the creation of Mono.Cecil. Without this fantastic piece of code it would have been much, much harder. There are other options around, like the Common Compiler Infrastructure Metadata API, which should do the same thing, but it was not a real option since the Microsoft reader failed on even simple assemblies (at least in September 2009 this was the case). Besides this, I found the CCI APIs much harder to use. The only real competitor was Reflector, which does support many things but does not let me access its cool high-level analysis commands. So I decided to dig into the IL specs, and as a result you can query your compiled binaries from the command line or programmatically. The best thing is to try it out for yourself and give me some feedback on what you miss. If you want to contribute or have a cool idea about what should be added, drop me a mail at A Kraus1@___No [email protected]. There is much more inside the tool that I did not talk about (yet).
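To make the event subscription imbalance heuristic concrete, here is a minimal sketch (the classes are hypothetical, not taken from ApiChange or its test suite) of the pattern the per-class counting would flag:

using System;

// Hypothetical publisher with a single event.
class Publisher
{
    public event EventHandler Changed;

    public void RaiseChanged()
    {
        var handler = Changed;
        if (handler != null)
        {
            handler(this, EventArgs.Empty);
        }
    }
}

// One subscription (+=) in this class, zero unsubscriptions (-=):
// the publisher keeps the subscriber alive, the classic managed "leak".
class StatusView : IDisposable
{
    private readonly Publisher publisher;

    public StatusView(Publisher publisher)
    {
        this.publisher = publisher;
        publisher.Changed += OnChanged;
    }

    private void OnChanged(object sender, EventArgs e)
    {
        // react to the change
    }

    public void Dispose()
    {
        // Forgotten: publisher.Changed -= OnChanged;
        // A per-class count of 1 subscribe vs. 0 unsubscribes is what gets
        // reported as a possible event subscription imbalance.
    }
}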

    Read the article

  • HTG Explains: Why Does Rebooting a Computer Fix So Many Problems?

    - by Chris Hoffman
Ask a geek how to fix a problem you're having with your Windows computer and they'll likely ask "Have you tried rebooting it?" This seems like a flippant response, but rebooting a computer can actually solve many problems. So what's going on here? Why does resetting a device or restarting a program fix so many problems? And why don't geeks try to identify and fix problems rather than use the blunt hammer of "reset it"?

This Isn't Just About Windows
Bear in mind that this solution isn't just limited to Windows computers; it applies to all types of computing devices. You'll find the advice "try resetting it" applied to wireless routers, iPads, Android phones, and more. The same advice even applies to software: is Firefox acting slow and consuming a lot of memory? Try closing it and reopening it!

Some Problems Require a Restart
To illustrate why rebooting can fix so many problems, let's take a look at the ultimate software problem a Windows computer can face: Windows halts, showing a blue screen of death. The blue screen was caused by a low-level error, likely a problem with a hardware driver or a hardware malfunction. Windows reaches a state where it doesn't know how to recover, so it halts, shows a blue screen of death, gathers information about the problem, and automatically restarts the computer for you. This restart fixes the blue screen of death. Windows has gotten better at dealing with errors; for example, where Windows XP would have frozen if your graphics driver crashed, in Windows Vista and newer versions of Windows the desktop will lose its fancy graphical effects for a few moments before regaining them. Behind the scenes, Windows is restarting the malfunctioning graphics driver. But why doesn't Windows simply fix the problem rather than restarting the driver or the computer itself? Well, because it can't: the code has encountered a problem and stopped working completely, so there's no way for it to continue. By restarting, the code can start from square one, and hopefully it won't encounter the same problem again.

Examples of Restarting Fixing Problems
While certain problems require a complete restart because the operating system or a hardware driver has stopped working, not every problem does. Some problems may be fixable without a restart, though a restart may be the easiest option.

Windows is Slow: Let's say Windows is running very slowly. It's possible that a misbehaving program is using 99% CPU and draining the computer's resources. A geek could head to the task manager and look around, hoping to locate the misbehaving process and end it. If an average user encountered this same problem, they could simply reboot their computer to fix it rather than dig through their running processes.

Firefox or Another Program is Using Too Much Memory: In the past, Firefox has been the poster child for memory leaks on average PCs. Over time, Firefox would often consume more and more memory, getting larger and larger and slowing down. Closing Firefox will cause it to relinquish all of its memory. When it starts again, it will start from a clean state without any leaked memory. This doesn't just apply to Firefox; it applies to any software with memory leaks.

Internet or Wi-Fi Network Problems: If you have a problem with your Wi-Fi or Internet connection, the software on your router or modem may have encountered a problem. Resetting the router, just by unplugging it from its power socket and then plugging it back in, is a common solution for connection problems.

In all cases, a restart wipes away the current state of the software. Any code that's stuck in a misbehaving state will be swept away, too. When you restart, the computer or device will bring the system up from scratch, restarting all the software from square one, so it will work just as well as it was working before.

"Soft Resets" vs. "Hard Resets"
In the mobile device world, there are two types of "resets" you can perform. A "soft reset" is simply restarting a device normally: turning it off and then on again. A "hard reset" is resetting its software state back to its factory default state. When you think about it, both types of resets fix problems for a similar reason. For example, let's say your Windows computer refuses to boot or becomes completely infected with malware. Simply restarting the computer won't fix the problem, as the problem is with the files on the computer's hard drive: it has corrupted files or malware that loads at startup. However, reinstalling Windows (performing a "Refresh or Reset your PC" operation, in Windows 8 terms) will wipe away everything on the computer's hard drive, restoring it to its formerly clean state. This is simpler than looking through the computer's hard drive, trying to identify the exact reason for the problems, or trying to ensure you've obliterated every last trace of malware. It's much faster to simply start over from a known-good, clean state than to locate every possible problem and fix it. Ultimately, the answer is that "resetting a computer wipes away the current state of the software, including any problems that have developed, and allows it to start over from square one." It's easier and faster to start from a clean state than to identify and fix any problems that may be occurring; in fact, in some cases, it may be impossible to fix problems without beginning from that clean state.

Image Credit: Arria Belli on Flickr, DeclanTM on Flickr

    Read the article

  • CodePlex Daily Summary for Monday, May 26, 2014

Popular Releases:
- ClosedXML - The easy way to OpenXML: ClosedXML 0.71.1: More performance improvements. It's faster and consumes less memory.
- Role Based Views in Microsoft Dynamics CRM 2011: Role Based Views in CRM 2011 and 2013 - 1.1.0.0: Issues fixed in this build: 1. Works for CRM 2013. 2. Lookup view not getting blocked.
- SimCityPak: SimCityPak 0.3.1.0: Main new features: fixed importing of instance names (get rid of the Dutch translations); added advanced editor for Decal Dictionaries; added possibility to import .PNG to generate new decals; added advanced editor for Path display entries.
- Simple Connect To Db: SimpleConnectToDb_v1: SimpleConnectToDb_v1.
- CRM 2011 / CRM 2013 Form Helper: v2014.05.25: v2014.05.25 added PhoneFormat & PhoneFormatAreaCode; v2014.05.24 initial release.
- Create Word documents without MS Word: Release 3.0: Adds support for Sections, Section Headers and Footers, and right-to-left languages.
- Corporate News App for SharePoint 2013: CorporateNewsApp v1.6.2.0: Important note: this version contains a major bug fix for the generic error "Request failed. Unexpected response data from server null". This error occurs on SharePoint Online only, following an update of the JavaScript API after May 2014. If you have installed this application manually in your company application catalog, you can download the CorporateNewsApp.app file in the zip archive and update it manually. If you have installed this application directly from the SharePoint Store, it ...
- DevOS: DevOS: Plugin system added, including DevOS.exe and DevOS API.dll. Files must be in the same folder.
- Tiny Deduplicator: Tiny Deduplicator 1.0.1.0: Increased version number to 1.0.1.0. Moved all options to a separate 'Options' dialog window. Allows the user to specify a selection strategy, which will help when dealing with large numbers of duplicate files. Available options are "None," "Keep First," and "Keep Last".
- C64 Studio: 3.5: Add: BASIC renumber function. Add: !PET pseudo op. Add: elseif for !if, } else { pseudo op. Add: !TRACE pseudo op. Add: watches are saved/restored with a solution. Add: Ctrl-A works now in export assembly controls. Add: preliminary graphic import dialog (not fully functional yet). Add: range and block selection in sprite/charset editor (Shift-Click = range, Alt-Click = block). Fix: expression evaluator could miscalculate when both division and multiplication were in an expression without parentheses.
- SEToolbox: SEToolbox 01.031.009 Release 1: Added mirroring of ConveyorTubeCurved. Updated ship cube rotation to rotate the ship back to its original location (cubes are reoriented but the ship appears no different to an outsider), and to rotate grouped items. Repair now fixes the loss of grouped controls due to changes in Space Engineers 01.030. Added export asteroids. Rejoin ships will merge grouping and conveyor systems (even though broken ships currently only maintain the grouping on one part of the ship). Installation of this version wi...
- Player Framework by Microsoft: Player Framework for Windows and WP v2.0: Support for new Universal and Windows Phone 8.1 projects, for both Xaml and JavaScript projects. See a detailed list of improvements, breaking changes, and a general overview of version 2. Additional downloads: Smooth Streaming Client SDK for Windows 8 Applications; Smooth Streaming Client SDK for Windows 8.1 Applications; Smooth Streaming Client SDK for Windows Phone 8.1 Applications; Microsoft PlayReady Client SDK for Windows 8 Applications; Microsoft PlayReady Client SDK for Windows 8.1 Applicat...
- TerraMap (Terraria World Map Viewer): TerraMap 1.0.6: Added support for the new Terraria v1.2.4 update: new items, walls, and tiles. Added the ability to select multiple highlighted block types. Added a dynamic, interactive highlight opacity slider, making it easier to find highlighted tiles with dark colors (and fixed blurriness from 1.0.5 alpha). Added the ability to find Enchanted Swords (in the stone) and Water Bolt books. Fixed Issue 35206: Highlight/Find doesn't work for Demon Altars. Fixed finding Demon Hearts/Shadow Orbs. Fixed inst...
- DotNet.Highcharts: DotNet.Highcharts 4.0 with Examples: DotNet.Highcharts 4.0, tested and adapted to the latest version of Highcharts 4.0.1. Added new chart type: Heatmap. Added new type PointPlacement, which represents an enumeration or number for the padding of the X axis. Changed target framework from .NET Framework 4 to .NET Framework 4.5. Closed issues: 974: Add 'overflow' property to PlotOptionsColumnDataLabels class; 997: Split container from JS; 1006: Series/Categories with numeric names don't render. DotNet.Highcharts.Samples: Updated s...
- ConEmu - Windows console with tabs: ConEmu 140523 [Alpha]: ConEmu developer build, x86 and x64 versions. Written in C++, no additional packages required. Run "ConEmu.exe" or "ConEmu64.exe". Some useful information you may find at: http://superuser.com/questions/tagged/conemu , http://code.google.com/p/conemu-maximus5/wiki/ConEmuFAQ , http://code.google.com/p/conemu-maximus5/wiki/TableOfContents . If you want to use ConEmu in portable mode, just create an empty "ConEmu.xml" file near to "ConEmu.exe".
- Aspose for Apache POI: Missing Features of Apache POI SL - v 1.1: Release contains the missing features in the Apache POI SL SDK in comparison with Aspose.Slides for dealing with Microsoft PowerPoint. What's new? The following examples: Managing Slide Transitions; Manage Smart Art; Adding Media Player; Adding Audio Frame to Slide. Feedback and suggestions: many more examples are yet to come here, so keep visiting us. Raise your queries and suggest more examples via Aspose Forums or via this social coding site.
- PowerShell App Deployment Toolkit: PowerShell App Deployment Toolkit v3.1.3: Added CompressLogs option to the config file: each Install / Uninstall creates a timestamped zip file with all MSI and PSAppDeployToolkit logs contained within. Added variable expansion to all paths in the configuration file. Added documentation for each of the Toolkit internal variables that can be used. Changed Install-MSUpdates to continue if any errors are encountered when installing updates. Implemented /Force parameter on Update-GroupPolicy (ensure that any logoff message is ignored) ...
- WordMat: WordMat v. 1.07: A quick fix, because scientific notation was broken in v. 1.06; read more at http://wordmat.blogspot.com
- Mini SQL Query: Mini SQL Query (1.0.72.457): Apologies for the previous update! FK issue fixed, and also a template data cache issue.

New Projects:
- ASP.Net MCV4 Simplified Code Samples: This project is intended to simplify the same. In this project, each task is implemented with the minimum lines of code to reduce complexity.
- Calvin: net???
- CodeLatino by Latinosoft: A modified version of codeShow -- probably taking more than a month.
- freeasyBackup: A free and easy to use backup tool for everyone. Without any cloud restrictions.
- freeasyExplorer: A free and easy to use file explorer for everyone.
- openPDFspeedreader: #spritz #pdfreader #speedreader
- PDF Editor to Edit PDF Files in your ASP.NET Applications: This sample application allows the users to edit PDF files online using Aspose.Pdf for .NET.
- SharePoint World Cup 2013: world cup 2014
- SSAS Long Running Query Performance Helper: This utility helps investigate long-running multidimensional or mining queries in discovery, de-parameterization and re-parameterization back to source format.

    Read the article

  • BizTalk: Internals: the Partner Direct Ports and the Orchestration Chains

    - by Leonid Ganeline
The Partner Direct Port is one of the BizTalk hidden gems. It opens up simple ways to implement several messaging patterns. This article is based on Kevin Lam's blog article. That article is pretty detailed, but it still leaves several pieces unclear, so I have created a sample and will show how it works from different perspectives.

Requirements
We should create an orchestration chain where the messages are routed from the first stage to the second stage. The messages should not be modified. All messages have the same message type.

Common artifacts
Source code can be downloaded here. It is interesting that all orchestrations use only one port type. This is possible because all ports are one-way ports and use only one operation. I have added a B orchestration. It helps to test the sample, showing all test messages in the channel. The Receive shape Filter is empty. A Receive Port (R_Shema1Direct) is a plain Direct Port. As you can see, the subscription expression of this direct port has only one part, the MessageType for our test schema. The Filter is empty but, as you know, a link from the Receive shape to the Port creates this MessageType expression. I use only one Physical Receive File port to send a message to all processes. Each orchestration outputs a Trace.WriteLine("<Orchestration Name>").

Forward Binding
This sample has three orchestrations: A_1, A_21 and A_22. A_1 is a sender; A_21 and A_22 are receivers. Here is the subscription of the A_1 orchestration. It has two parts:
- A MessageType. The same was true for the B orchestration.
- A ReceivePortID. There was no such parameter for the B orchestration. It was created because I have bound the orchestration port to the Physical Receive File port. This binding means the PortID parameter is added to the subscription.

How to set up the ports? All ports involved in the message exchange should be of the same port type. This forces us to use the same operation and the same message type for the bound ports. The next step is absolutely counter-intuitive. We have to choose a Partner Orchestration parameter for the sending orchestration, A_1. The first strange thing is that it is not a partner orchestration we have to choose, but an orchestration port. But the strangest thing is that we have to choose exactly this orchestration and exactly this port: not a port from the partner, receiving orchestrations, A_21 or A_22, but the A_1 orchestration and its S_SentFromA_1 port. Now we have to choose a Partner Orchestration parameter for the receiving orchestrations, A_21 and A_22. Nothing strange here, except the parameter name: we choose the port of the sender, the A_1 orchestration and its S_SentFromA_1 port. As you can see, the Partner Orchestration parameter is the same for the sender and receiver orchestrations.

Testing
I dropped a test file in a file folder. There we go:
- The dropped file was received by B and by A_1.
- A_1 sent a message forward.
- The message was received by B, A_21 and A_22.

Let's look at the context of the message sent by A_1 on the second step:
- A MessageType part. It is quite expected.
- A PartnerService, a PartnerPort, an Operation. All those parameters were set up in the Partner Orchestration parameter on both bound ports.

Now let's see the subscription of the A_21 and A_22 orchestrations. Now it makes sense. That's why we have chosen such a strange value for the Partner Orchestration parameter of the sending orchestration.

Inverse Binding
This sample has three orchestrations: A_11, A_12 and A_2. A_11 and A_12 are senders; A_2 is the receiver.

How to set up the ports? All ports involved in the message exchange should be of the same port type. This forces us to use the same operation and the same message type for the bound ports. Again, this step is absolutely counter-intuitive. We have to choose a Partner Orchestration parameter for the receiving orchestration, A_2. The first strange thing is that it is not a partner orchestration we have to choose, but an orchestration port. But the strangest thing is that we have to choose exactly this orchestration and exactly this port: not a port from the partner, sending orchestrations, A_11 or A_12, but the A_2 orchestration and its R_SentToA_2 port. Now we have to choose a Partner Orchestration parameter for the sending orchestrations, A_11 and A_12. Nothing strange here, except the parameter name: we choose the port of the receiver, the A_2 orchestration and its R_SentToA_2 port.

Testing
I dropped a test file in a file folder. There we go:
- The dropped file was received by B, A_11 and A_12.
- A_11 and A_12 sent two messages forward.
- The messages were received by B and A_2.

Let's see the context of a message sent by A_11 on the second step:
- A MessageType part. It is quite expected.
- A PartnerService, a PartnerPort, an Operation. All those parameters were set up in the Partner Orchestration parameter on both bound ports.

Here is the subscription of the A_2 orchestration.

Models
I had a hard time trying to explain Partner Direct Ports in simple terms. I finished with this model:

Forward Binding
- Receivers know a Sender. The Sender doesn't know the Receivers.
- Publishers know a Subscriber. The Subscriber doesn't know the Publishers.
- 1 -> 1, 1 -> M

Inverse Binding
- Senders know a Receiver. The Receiver doesn't know the Senders.
- Subscribers know a Publisher. The Publisher doesn't know the Subscribers.
- 1 -> 1, M -> 1

Notes
Orchestration chain: it's worth noting that the Partner Direct Port binding creates a chain open on one side and closed on the other.
- The Forward Binding: a new Receiver can be added at run-time. The Sender cannot be changed without design-time changes in the Receivers.
- The Inverse Binding: a new Sender can be added at run-time. The Receiver cannot be changed without design-time changes in the Senders.
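As a rough illustration of the context properties discussed above, here is a hedged C# sketch of reading them from a message inside a custom pipeline component. The property names (PartnerService, PartnerPort, Operation) follow the article's description of the message context and the BizTalk system-properties namespace; treat the exact strings as assumptions rather than a definitive list:

using System.Diagnostics;
using Microsoft.BizTalk.Message.Interop;

static class PartnerContextProbe
{
    // BizTalk system-properties namespace used for context reads.
    const string SysProps = "http://schemas.microsoft.com/BizTalk/2003/system-properties";

    // 'msg' would come from a pipeline component's Execute method.
    public static void Dump(IBaseMessage msg)
    {
        IBaseMessageContext ctx = msg.Context;
        Trace.WriteLine("MessageType:    " + ctx.Read("MessageType", SysProps));
        Trace.WriteLine("PartnerService: " + ctx.Read("PartnerService", SysProps));
        Trace.WriteLine("PartnerPort:    " + ctx.Read("PartnerPort", SysProps));
        Trace.WriteLine("Operation:      " + ctx.Read("Operation", SysProps));
    }
}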

    Read the article

  • To SYNC or not to SYNC – Part 3

    - by AshishRay
I can't believe it has been almost a year since my last blog post. I know, that's an absolute no-no in the blogosphere. And I know that "I have been busy" is not a good excuse. So - without trying to come up with an excuse - let me state this: my apologies for taking such a long time to write the next Part. Without further ado, here goes. This is Part 3 of a multi-part blog article where we are discussing various aspects of setting up Data Guard synchronous redo transport (SYNC). In Part 1 of this article, I debunked the myth that Data Guard SYNC is similar to a two-phase commit operation. In Part 2, I discussed the various ways that network latency may or may not impact a Data Guard SYNC configuration. In this article, I will talk in detail about why Data Guard SYNC is a good thing. I will also talk about distance implications for setting up such a configuration.

So, Why Good?
Why is Data Guard SYNC a good thing? Because, at the end of the day, it gives you the assurance of zero data loss; it doesn't matter what outage may befall your primary system. Befall! Boy, that sounds theatrical. But seriously, think about this: it minimizes your data risks. That's a big deal. Whether you have an outage due to bad disks, faulty hardware components, hardware/software bugs, physical data corruptions, power failures, lightning that takes out a significant part of your data center, fire that melts your assets, water leakage from the cooling system, or human errors such as accidental deletion of online redo log files - it doesn't matter - you can have that "Om - peace" look on your face and then fail over to the standby system without losing a single bit of data in your Oracle database. You will be a hero, as shown in this not-so-imaginary conversation:

IT Manager: Well, what's the status?
You: John is doing the trace analysis on the storage array.
IT Manager: So? How long is that gonna take?
You: Well, he is stuck, waiting for a response from <insert your not-so-favorite storage vendor here>.
IT Manager: So, no root cause yet?
You: I told you, he is stuck. We have escalated with their Support, but you know how long these things take.
IT Manager: Darn it - the site is down!
You: Not really …
IT Manager: What do you mean?
You: John is stuck, but Sreeni has already done a failover to the Data Guard standby.
IT Manager: Whoa, whoa - wait! Failover means we lost some data. Why did you do this without letting the Business group know?
You: We didn't lose any data. Remember, we had set up Data Guard with SYNC? So now, any problems on the production - we just fail over. No data loss, and we are up and running in minutes. The Business guys don't need to know.
IT Manager: Wow! Are we great or what!!
You: I guess …

Ok, so you get it: SYNC is good. But as my dear friend Larry Carpenter says, "TANSTAAFL", or "There ain't no such thing as a free lunch". Yes, of course, investing in Data Guard SYNC means that you have to invest in a low-latency network, you have to monitor your applications and database especially in peak load conditions, and you cannot under-provision your standby systems. But all these are good and necessary things if you are supporting mission-critical apps that are supposed to be running 24x7. The peace of mind that this investment will give you is priceless, especially if you are serious about HA.

How Far Can We Go?
Someone may say at this point: well, I can't use Data Guard SYNC over my coast-to-coast deployment. Most likely true. So how far can you go? Well, we have customers who have deployed Data Guard SYNC over 300+ miles! Does this mean that you can also deploy over similar distances? Duh - no! I am going to say something here that most IT managers don't like to hear: "It depends!" It depends on your application design, application response time / throughput requirements, network topology, etc. However, because of the optimal way we do SYNC, customers have been able to stretch Data Guard SYNC deployments over longer distances compared to traditional, storage-centric ways of doing this. The MAA Database 10.2 best practices paper Data Guard Redo Transport & Network Configuration, and the Oracle Database 11.2 High Availability Best Practices Manual, talk about some of these SYNC-related metrics. For example, a test deployment of Data Guard SYNC over 330 miles with 10ms latency showed an impact of less than 5% for a busy OLTP application. Even if you can't deploy Data Guard SYNC over your WAN distance, or if you already have an ASYNC standby located thousands of miles away, here's another nifty way to boost your HA: have a local standby, configured SYNC. How local is "local"? Again, it depends. One customer runs a local SYNC standby across the campus. Another customer runs it across 15 miles in another data center. Both of these customers are running Data Guard SYNC as their HA standard. If a localized outage affects their primary system, no problem! They have all the data available on the standby, to which they can fail over. Very fast. In seconds. Wait - did I say "seconds"? Yes, Virginia, there is a Santa Claus. But you have to wait till the next blog article to find out more. I assure you tho' that this time you won't have to wait another year for it.

    Read the article

  • Screenshot Tour: Ubuntu Touch 14.04 on a Nexus 7

    - by Chris Hoffman
Ubuntu 14.04 LTS will "form the basis of the first commercially available Ubuntu tablets," according to Canonical. We installed Ubuntu Touch 14.04 on our own hardware to see what those tablets will be like. We don't recommend installing this yourself, as it's still not a polished, complete experience. We're using "Ubuntu Touch" as shorthand here; apparently this project's new name is "Ubuntu For Devices."

The Welcome Screen
Ubuntu's touch interface is all about edge swipes and hidden interface elements; it has a lot in common with Windows 8, actually. You'll see the welcome screen when you boot up or unlock a Ubuntu tablet or phone. If you have new emails, text messages, or other information, it will appear on this screen along with the time and date. If you don't, you'll just see a message saying "No data sources available."

The Dash
Swipe in from the right edge of the welcome screen to access the Dash, or home screen. This is actually very similar to the Dash on Ubuntu's Unity desktop. This isn't a surprise: Canonical wants the desktop and touch versions of Ubuntu to use the same code. In the future, the desktop and touch versions of Ubuntu will use the same version of Unity, and Unity will adjust its interface depending on what type of device you're using. Here you'll find apps you have installed and apps available to install. Tap an installed app to launch it, or tap an available app to view more details and install it. Tap the My apps or Available headings to view a complete list of apps you have installed or apps you can install. Tap the Search box at the top of the screen to start searching; this is how you'd search for new apps to install. As you'd expect, a touch keyboard appears when you tap in the Search field or any other text field. The launcher isn't just for apps. Tap the Apps heading at the top of the screen and you'll see hidden text appear: Music, Video, and Scopes. This hidden navigation is used throughout Ubuntu's different apps and can be easy to miss at first. Swipe to the left or right to move between these screens. These screens are also similar to the different panels in Unity on the desktop. The Scopes section allows you to view different search scopes you have installed. These are used to search different sources when you start a search from the Dash. Search from the Music or Videos scopes to search for local media files on your device or media files online. For example, searching in the Music scope will show you music results from Grooveshark by default.

Navigating Ubuntu Touch
Swipe in from the left edge anywhere on the system to open the launcher, a bar with shortcuts to apps. This launcher is very similar to the launcher on the left of Ubuntu's Unity desktop; that's the whole idea, after all. Once you've opened an app, you can leave it by swiping in from the left. The launcher will appear; keep moving your finger towards the right edge of the screen. This will swipe the current app off the screen, taking you back to the Dash. Once back on the Dash, you'll see your open apps represented as thumbnails under Recent. Tap a thumbnail here to go back to a running app. To remove an app from here, long-press it and tap the X button that appears. Swipe in from the right edge in any app to quickly switch between recent apps. Swipe in from the right edge and hold your finger down to reveal an application switcher that shows all your recent apps and lets you choose between them. Swipe down from the top of the screen to access the indicator panel. Here you can connect to Wi-Fi networks, view upcoming events, control GPS and Bluetooth hardware, adjust sound settings, see incoming messages, and more. This panel is for quick access to hardware settings and notifications, just like the indicators on Ubuntu's Unity desktop.

The Apps
System settings not included in the pull-down panel are available in the System Settings app. To access it, tap My apps on the Dash and tap System Settings, search for the System Settings app, or open the launcher bar and tap the settings icon. The settings here are a bit limited compared to other operating systems, but many of the important options are available. You can add Evernote, Ubuntu One, Twitter, Facebook, and Google accounts from here. A free Ubuntu One account is mandatory for downloading and updating apps. A Google account can be used to sync contacts and calendar events. Some apps on Ubuntu are native apps, while many are web apps. For example, the Twitter, Gmail, Amazon, Facebook, and eBay apps included by default are all web apps that open each service's mobile website as an app. Other applications, such as the Weather, Calendar, Dialer, Calculator, and Notes apps, are native applications. Theoretically, both types of apps will be able to scale to different screen resolutions. Ubuntu Touch and Ubuntu desktop may one day share the same apps, which will adapt to different display sizes and input methods. Like Windows 8 apps, Ubuntu apps hide interface elements by default, providing you with a full-screen view of the content. Swipe up from the bottom of an app's screen to view its interface elements. For example, swiping up from the bottom of the Web Browser app reveals Back, Forward, and Refresh buttons, along with an address bar and an Activity button so you can view current and recent web pages. Swipe up even more from the bottom and you'll see a button hovering in the middle of the app. Tap the button and you'll see many more settings. This is an overflow area for application options and functions that can't fit on the navigation bar. The Terminal app has a few surprising Easter eggs in this panel, including a "Hack into the NSA" option. Tap it and the following text will appear in the terminal:

That's not very nice, now tracing your location . . . . . . . . . . . .
Trace failed
You got away this time, but don't try again.

We'd expect to see such Easter eggs disappear before Ubuntu Touch actually ships on real devices. Ubuntu Touch has come a long way, but it's still not something you want to use today. For example, it doesn't even have a built-in email client; you'll have to use your email service's mobile website. Few apps are available, and many of the ones that are are just mobile websites. It's not a polished operating system intended for normal users yet; it's more of a preview for developers and device manufacturers. If you really want to try it yourself, you can install it on a Wi-Fi Nexus 7 (2013), Nexus 10, or Nexus 4 device. Follow Ubuntu's installation instructions here.

    Read the article

  • SharePoint logging to a list

    - by Norgean
I recently worked in an environment with several servers. Locating the correct SharePoint log file for error messages, or development trace calls, is cumbersome. And once the solution hit the cloud, it got even worse, as we had no access to the log files at all. Obviously we are not the only ones with this problem, and the current trend seems to be to log to a list. This had become an off-hour project, so rather than do the sensible thing and find a ready-made solution, I decided to do it the hard way. So! Fire up Visual Studio, create yet another empty SharePoint solution, and start to think of some requirements.

Easy on/off: I want to be able to turn list-logging on and off.
Easy logging: For me, this means being able to use string.Format.
Easy filtering: Let's have the possibility to add some filtering columns: category and severity, where severity can be "verbose", "warning" or "error".

Easy on/off
Well, that's easy. Create a new web feature. Add an event receiver, and create the list on activation of the feature. Tear the list down on de-activation. I chose not to create a new content type; I did not feel that it would give me anything extra. I based the list on the generic list - I think a better choice would have been the announcement type. Approximately:

public void CreateLog(SPWeb web)
{
    var list = web.Lists.TryGetList(LogListName);
    if (list == null)
    {
        var listGuid = web.Lists.Add(LogListName, "Logging for the masses", SPListTemplateType.GenericList);
        list = web.Lists[listGuid];
        list.Title = LogListTitle;
        list.Update();
        list.Fields.Add(Category, SPFieldType.Text, false);
        var stringColl = new StringCollection();
        stringColl.AddRange(new[] { Error, Information, Verbose });
        list.Fields.Add(Severity, SPFieldType.Choice, true, false, stringColl);
        ModifyDefaultView(list);
    }
}

Should be self-explanatory, but: only create the list if it does not already exist (d'oh). Best practice: create it with a Url-friendly name and, if necessary, give it a better title... because otherwise you'll have to look for a list with a name like "Simple_x0020_Log". I've added a couple of fields: a field for category, and a 'severity'. Both make it easier to find relevant log messages. Notice that I don't have to call list.Update() after adding the fields - this would cause a nasty error (something along the lines of "List locked by another user"). The function for deleting the log is exactly as onerous as you'd expect:

public void DeleteLog(SPWeb web)
{
    var list = web.Lists.TryGetList(LogListTitle);
    if (list != null)
    {
        list.Delete();
    }
}

So! "All" that remains is to log. Also known as adding items to a list. Lots of different methods with different signatures end up calling the same function. For example, LogVerbose(web, message) calls LogVerbose(web, null, message), which again calls another method, which calls:

private static void Log(SPWeb web, string category, string severity, string textformat, params object[] texts)
{
    if (web != null)
    {
        var list = web.Lists.TryGetList(LogListTitle);
        if (list != null)
        {
            var item = list.AddItem(); // NOTE! NOT list.Items.Add… just don't, mkay?
            var text = string.Format(textformat, texts);
            if (text.Length > 255) // because the title field only holds so many chars. Sigh.
                text = text.Substring(0, 254);
            item[SPBuiltInFieldId.Title] = text;
            item[Degree] = severity;
            item[Category] = category;
            item.Update();
        }
    }
    // omitted: Also log to SharePoint log.
}

By adding a params parameter I can call it as if I were doing a Console.WriteLine:

LogVerbose(web, "demo", "{0} {1}{2}", "hello", "world", '!');

Ok, that was a silly example; a better one might be:

LogError(web, LogCategory, "Exception caught when updating {0}. exception: {1}", listItem.Title, ex);

For performance reasons I use list.AddItem rather than list.Items.Add. For completeness' sake, let us include the "ModifyDefaultView" function that I deliberately skipped earlier:

private void ModifyDefaultView(SPList list)
{
    // Add fields to default view
    var defaultView = list.DefaultView;
    var exists = defaultView.ViewFields.Cast<string>().Any(field => String.CompareOrdinal(field, Severity) == 0);

    if (!exists)
    {
        var field = list.Fields.GetFieldByInternalName(Severity);
        if (field != null)
            defaultView.ViewFields.Add(field);
        field = list.Fields.GetFieldByInternalName(Category);
        if (field != null)
            defaultView.ViewFields.Add(field);
        defaultView.Update();

        var sortDoc = new XmlDocument();
        sortDoc.LoadXml(string.Format("<Query>{0}</Query>", defaultView.Query));
        var orderBy = (XmlElement) sortDoc.SelectSingleNode("//OrderBy");
        if (orderBy != null && sortDoc.DocumentElement != null)
            sortDoc.DocumentElement.RemoveChild(orderBy);
        orderBy = sortDoc.CreateElement("OrderBy");
        sortDoc.DocumentElement.AppendChild(orderBy);
        field = list.Fields[SPBuiltInFieldId.Modified];
        var fieldRef = sortDoc.CreateElement("FieldRef");
        fieldRef.SetAttribute("Name", field.InternalName);
        fieldRef.SetAttribute("Ascending", "FALSE");
        orderBy.AppendChild(fieldRef);

        fieldRef = sortDoc.CreateElement("FieldRef");
        field = list.Fields[SPBuiltInFieldId.ID];
        fieldRef.SetAttribute("Name", field.InternalName);
        fieldRef.SetAttribute("Ascending", "FALSE");
        orderBy.AppendChild(fieldRef);
        defaultView.Query = sortDoc.DocumentElement.InnerXml;
        //defaultView.Query = "<OrderBy><FieldRef Name='Modified' Ascending='FALSE' /><FieldRef Name='ID' Ascending='FALSE' /></OrderBy>";
        defaultView.Update();
    }
}

The first two lines are easy: see if the default view includes the "Severity" column. If it does - quit; our job here is done. Adding "Severity" and "Category" to the view is not exactly rocket science. But then? Then we build the sort order query. Through XML. The lines are numerous, but boring. All to achieve the CAML query which is commented out. The major benefit of using the DOM to build XML is that you may get compile-time errors for spelling mistakes. I say 'may', because although the compiler will not let you forget to close a tag, it will cheerfully let you spell "Name" as "Naem". Whichever you prefer, at the end of the day the view will sort by modified date and ID, both descending. I added the ID as there may be several items with the same time stamp. So! Simple logging to a list, with a sensible view, and with normal functionality for creating your own filters. I should probably have added some more views in code, pre-filtered for "only errors", "errors and warnings" etc. And it would be nice to block verbose logging completely, but I'm not happy with the alternatives. (yetanotherfeature or an admin page seem like overkill - perhaps just removing it as one of the choices, and not logging if it isn't there?) Before you comment - yes, try-catches have been removed for clarity. There is nothing worse than having a logging function that breaks your site!
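For completeness, here is a minimal sketch of the feature event receiver described at the top. The class names (SimpleLogFeatureReceiver, Logger) are mine, since the post never names the class that hosts CreateLog and DeleteLog; adjust them to your own solution:

using Microsoft.SharePoint;

public class SimpleLogFeatureReceiver : SPFeatureReceiver
{
    public override void FeatureActivated(SPFeatureReceiverProperties properties)
    {
        var web = properties.Feature.Parent as SPWeb;
        if (web != null)
        {
            new Logger().CreateLog(web);   // create the log list on activation
        }
    }

    public override void FeatureDeactivating(SPFeatureReceiverProperties properties)
    {
        var web = properties.Feature.Parent as SPWeb;
        if (web != null)
        {
            new Logger().DeleteLog(web);   // tear the log list down on deactivation
        }
    }
}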

    Read the article

  • RTS style fog of war woes

    - by Fricken Hamster
So I'm trying to make an RTS-style line-of-sight fog of war engine for my grid-based game. Currently I am getting a set of vertices by raycasting in 360 degrees. Then I use that list of vertices to do a graphics-style polygon scanline fill to get a list of all points within the polygon. Then I compare the new list of seen tiles with the old one and increment or decrement the world vision array as needed. The polygon scanline function is giving me trouble. I'm mostly following this: http://www.cs.uic.edu/~jbell/CourseNotes/ComputerGraphics/PolygonFilling.html So far this is my code, without cleaning anything up:

var edgeMinX:Vector.<int> = new Vector.<int>;
var edgeMinY:Vector.<int> = new Vector.<int>;
var edgeMaxY:Vector.<int> = new Vector.<int>;
var edgeInvSlope:Vector.<Number> = new Vector.<Number>;
var ilen:int = outvert.length;
var miny:int = -1;
var maxy:int = -1;
for (i = 0; i < ilen; i++)
{
    var curpoint:Point = outvert[i];
    if (i == ilen - 1)
    {
        var nextpoint:Point = outvert[0];
    }
    else
    {
        nextpoint = outvert[i + 1];
    }
    if (nextpoint.y == curpoint.y)
    {
        continue;
    }
    if (curpoint.y < nextpoint.y)
    {
        var curslope:Number = ((nextpoint.y - curpoint.y) / (nextpoint.x - curpoint.x));
        edgeMinY.push(curpoint.y);
        edgeMinX.push(curpoint.x);
        edgeMaxY.push(nextpoint.y);
        edgeInvSlope.push(1 / curslope);
        if (curpoint.y < miny || miny == -1)
        {
            miny = curpoint.y;
        }
        if (nextpoint.y > maxy)
        {
            maxy = nextpoint.y;
        }
    }
    else
    {
        curslope = ((curpoint.y - nextpoint.y) / (curpoint.x - nextpoint.x));
        edgeMinY.push(nextpoint.y);
        edgeMinX.push(nextpoint.x);
        edgeMaxY.push(curpoint.y);
        edgeInvSlope.push(1 / curslope);
        if (nextpoint.y < miny || miny == -1)
        {
            miny = curpoint.y;
        }
        if (curpoint.y > maxy)
        {
            maxy = nextpoint.y;
        }
    }
}
var activeMaxY:Vector.<int> = new Vector.<int>;
var activeCurX:Vector.<Number> = new Vector.<Number>;
var activeInvSlope:Vector.<Number> = new Vector.<Number>;
for (var scanline:int = miny; scanline < maxy + 1; scanline++)
{
    ilen = edgeMinY.length;
    for (i = 0; i < ilen; i++)
    {
        if (edgeMinY[i] == scanline)
        {
            activeMaxY.push(edgeMaxY[i]);
            activeCurX.push(edgeMinX[i]);
            activeInvSlope.push(edgeInvSlope[i]);
            //trace("added(" + edgeMinX[i]);
            edgeMaxY.splice(i, 1);
            edgeMinX.splice(i, 1);
            edgeMinY.splice(i, 1);
            edgeInvSlope.splice(i, 1);
            i--;
            ilen--;
        }
    }
    ilen = activeCurX.length;
    for (i = 0; i < ilen - 1; i++)
    {
        for (var j:int = i; j < ilen - 1; j++)
        {
            if (activeCurX[j] > activeCurX[j + 1])
            {
                var tempint:int = activeMaxY[j];
                activeMaxY[j] = activeMaxY[j + 1];
                activeMaxY[j + 1] = tempint;
                var tempnum:Number = activeCurX[j];
                activeCurX[j] = activeCurX[j + 1];
                activeCurX[j + 1] = tempnum;
                tempnum = activeInvSlope[j];
                activeInvSlope[j] = activeInvSlope[j + 1];
                activeInvSlope[j + 1] = tempnum;
            }
        }
    }
    var prevx:int = -1;
    var jlen:int = activeCurX.length;
    for (j = 0; j < jlen; j++)
    {
        if (prevx == -1)
        {
            prevx = activeCurX[j];
        }
        else
        {
            for (var k:int = prevx; k < activeCurX[j]; k++)
            {
                graphics.lineStyle(2, 0x124132);
                graphics.drawCircle(k * 20 + 10, scanline * 20 + 10, 5);
                if (k == prevx || k > activeCurX[j] - 1)
                {
                    graphics.lineStyle(3, 0x004132);
                    graphics.drawCircle(k * 20 + 10, scanline * 20 + 10, 2);
                }
                prevx = -1;
                //tileLightList.push(k, scanline);
            }
        }
    }
    ilen = activeCurX.length;
    for (i = 0; i < ilen; i++)
    {
        if (activeMaxY[i] == scanline + 1)
        {
            activeCurX.splice(i, 1);
            activeMaxY.splice(i, 1);
            activeInvSlope.splice(i, 1);
            i--;
            ilen--;
        }
        else
        {
            activeCurX[i] += activeInvSlope[i];
        }
    }
}

It works in some cases, but some of the x intersections are skipped, primarily when there are more than 2 x intersections in one scanline, I think. Is there a way to fix this, or a better way to do what I described? Thanks
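One way to sidestep the active-edge bookkeeping entirely is to recompute, sort, and pair all x-intersections per scanline (the standard even-odd fill). Here is a minimal hedged sketch of that approach, in C# rather than ActionScript purely for illustration; all names are mine, and mapping it back to AS3 Vectors is mechanical:

using System;
using System.Collections.Generic;
using System.Drawing;

static class ScanlineFill
{
    // Returns every integer grid cell inside the polygon, scanline by scanline.
    public static IEnumerable<Point> FilledCells(IList<PointF> poly)
    {
        float minY = float.MaxValue, maxY = float.MinValue;
        foreach (var p in poly) { minY = Math.Min(minY, p.Y); maxY = Math.Max(maxY, p.Y); }

        for (int y = (int)Math.Ceiling(minY); y <= (int)Math.Floor(maxY); y++)
        {
            // Collect ALL x-intersections of this scanline with polygon edges,
            // so three-or-more crossings on one line are handled naturally.
            var xs = new List<float>();
            for (int i = 0; i < poly.Count; i++)
            {
                PointF a = poly[i], b = poly[(i + 1) % poly.Count];
                if (a.Y == b.Y) continue;                            // skip horizontal edges
                if ((y >= a.Y && y < b.Y) || (y >= b.Y && y < a.Y))  // half-open rule avoids double-counting vertices
                    xs.Add(a.X + (y - a.Y) * (b.X - a.X) / (b.Y - a.Y));
            }
            xs.Sort();
            // Even-odd rule: fill between successive pairs of intersections.
            for (int i = 0; i + 1 < xs.Count; i += 2)
                for (int x = (int)Math.Ceiling(xs[i]); x <= (int)Math.Floor(xs[i + 1]); x++)
                    yield return new Point(x, y);
        }
    }
}

This recomputes intersections each scanline instead of incrementally updating an active edge table, which is a bit slower but far harder to get wrong; for a fog-of-war polygon with a few hundred vertices it is usually fast enough.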

    Read the article

  • How to use a 3D map ActionScript class in an MXML file to display a map

    - by nemade-vipin
Hello friends. I have created an application in which I use a 3D map ActionScript class from an MXML file to display a map in a form; the map sits in the last tab of a tab navigator. My ActionScript 3D map class (FlyingDirections) is:

package src.SBTSCoreObject {
  import src.SBTSCoreObject.JSONDecoder;
  import com.google.maps.InfoWindowOptions;
  import com.google.maps.LatLng;
  import com.google.maps.LatLngBounds;
  import com.google.maps.Map3D;
  import com.google.maps.MapEvent;
  import com.google.maps.MapOptions;
  import com.google.maps.MapType;
  import com.google.maps.MapUtil;
  import com.google.maps.View;
  import com.google.maps.controls.NavigationControl;
  import com.google.maps.geom.Attitude;
  import com.google.maps.interfaces.IPolyline;
  import com.google.maps.overlays.Marker;
  import com.google.maps.overlays.MarkerOptions;
  import com.google.maps.services.Directions;
  import com.google.maps.services.DirectionsEvent;
  import com.google.maps.services.Route;
  import flash.display.Bitmap;
  import flash.display.DisplayObject;
  import flash.display.DisplayObjectContainer;
  import flash.display.Loader;
  import flash.display.LoaderInfo;
  import flash.display.Sprite;
  import flash.events.Event;
  import flash.events.IOErrorEvent;
  import flash.events.MouseEvent;
  import flash.events.TimerEvent;
  import flash.filters.DropShadowFilter;
  import flash.geom.Point;
  import flash.net.URLLoader;
  import flash.net.URLRequest;
  import flash.net.navigateToURL;
  import flash.text.TextField;
  import flash.text.TextFieldAutoSize;
  import flash.text.TextFormat;
  import flash.utils.Timer;
  import flash.utils.getTimer;

  public class FlyingDirections extends Map3D {
    /** Panoramio home page. */
    private static const PANORAMIO_HOME:String = "http://www.panoramio.com/";

    /** The icon for the car. */
    [Embed("assets/car-icon-24px.png")]
    private static const Car:Class;

    /** The Panoramio icon. */
    [Embed("assets/iw_panoramio.png")]
    private static const PanoramioIcon:Class;

    /**
     * We animate a zoom in to the start of the route before the car starts
     * to move. This constant sets the time in seconds over which this
     * zoom occurs.
     */
    private static const LEAD_IN_DURATION:Number = 3;

    /** Duration of the trip in seconds. */
    private static const TRIP_DURATION:Number = 40;

    /** Constants that define the geometry of the Panoramio image markers. */
    private static const BORDER_T:Number = 3;
    private static const BORDER_L:Number = 10;
    private static const BORDER_R:Number = 10;
    private static const BORDER_B:Number = 3;
    private static const GAP_T:Number = 2;
    private static const GAP_B:Number = 1;
    private static const IMAGE_SCALE:Number = 1;

    /**
     * Trajectory that the camera follows over time. Each element is an object
     * containing properties used to generate parameter values for flyTo(..).
     * fraction = 0 corresponds to the start of the trip; fraction = 1
     * corresponds to the end of the trip.
     */
    private var FLY_TRAJECTORY:Array = [
      { fraction: 0,   zoom: 6,   attitude: new Attitude(0, 0, 0) },
      { fraction: 0.2, zoom: 8.5, attitude: new Attitude(30, 30, 0) },
      { fraction: 0.5, zoom: 9,   attitude: new Attitude(30, 40, 0) },
      { fraction: 1,   zoom: 8,   attitude: new Attitude(50, 50, 0) },
      { fraction: 1.1, zoom: 8,   attitude: new Attitude(130, 50, 0) },
      { fraction: 1.2, zoom: 8,   attitude: new Attitude(220, 50, 0) },
    ];

    /**
     * Number of Panoramio photos for which we load data. We'll select a
     * subset of these approximately evenly spaced along the route.
     */
    private static const NUM_GEOTAGGED_PHOTOS:int = 50;

    /** Number of Panoramio photos that we actually show. */
    private static const NUM_SHOWN_PHOTOS:int = 7;

    /** Scaling between real trip time and animation time. */
    private static const SCALE_TIME:Number = 0.001;

    /**
     * getTimer() value at the instant that we start the trip. If this is 0 then
     * we have not yet started the car moving.
     */
    private var startTimer:int = 0;

    /** The current route. */
    private var route:Route;

    /** The polyline for the route. */
    private var polyline:IPolyline;

    /** The car marker. */
    private var marker:Marker;

    /**
     * The cumulative duration in seconds over each step in the route.
     * cumulativeStepDuration[0] is 0; cumulativeStepDuration[1] adds the
     * duration of step 0; cumulativeStepDuration[2] adds the duration
     * of step 1; etc.
     */
    private var cumulativeStepDuration:/*Number*/Array = [];

    /**
     * The cumulative distance in metres over each vertex in the route polyline.
     * cumulativeVertexDistance[0] is 0; cumulativeVertexDistance[1] adds the
     * distance to vertex 1; cumulativeVertexDistance[2] adds the distance to
     * vertex 2; etc.
     */
    private var cumulativeVertexDistance:Array;

    /**
     * Array of photos loaded from Panoramio. This array has the same format as
     * the 'photos' property within the JSON returned by the Panoramio API
     * (see http://www.panoramio.com/api/), with additional properties added to
     * individual photo elements to hold the loader structures that fetch
     * the actual images.
     */
    private var photos:Array = [];

    /**
     * Array of polyline vertices, where each element is in world coordinates.
     * Several computations can be faster if we can use world coordinates
     * instead of LatLng coordinates.
     */
    private var worldPoly:/*Point*/Array;

    /** Whether the start button has been pressed. */
    private var startButtonPressed:Boolean = false;

    /** Saved event from onDirectionsSuccess call. */
    private var directionsSuccessEvent:DirectionsEvent = null;

    /** Start button. */
    private var startButton:Sprite;

    /** Alpha value used for the Panoramio image markers. */
    private var markerAlpha:Number = 0;

    /**
     * Index of the current driving direction step. Used to update the
     * info window content each time we progress to a new step.
     */
    private var currentStepIndex:int = -1;

    /**
     * The flying directions map constructor.
     * @constructor
     */
    public function FlyingDirections() {
      key = "ABQIAAAA7QUChpcnvnmXxsjC7s1fCxQGj0PqsCtxKvarsoS-iqLdqZSKfxTd7Xf-2rEc_PC9o8IsJde80Wnj4g";
      super();
      addEventListener(MapEvent.MAP_PREINITIALIZE, onMapPreinitialize);
      addEventListener(MapEvent.MAP_READY, onMapReady);
    }

    /**
     * Handles map preinitialize. Initializes the map center and zoom level.
     * @param event The map event.
     */
    private function onMapPreinitialize(event:MapEvent):void {
      setInitOptions(new MapOptions({
        center: new LatLng(-26.1, 135.1),
        zoom: 4,
        viewMode: View.VIEWMODE_PERSPECTIVE,
        mapType: MapType.PHYSICAL_MAP_TYPE
      }));
    }

    /**
     * Handles map ready and looks up directions.
     * @param event The map event.
     */
    private function onMapReady(event:MapEvent):void {
      enableScrollWheelZoom();
      enableContinuousZoom();
      addControl(new NavigationControl());
      // The driving animation will be updated on every frame.
      addEventListener(Event.ENTER_FRAME, enterFrame);
      addStartButton();
      // We start the directions loading now, so that we're ready to go when
      // the user hits the start button.
      var directions:Directions = new Directions();
      directions.addEventListener(
          DirectionsEvent.DIRECTIONS_SUCCESS, onDirectionsSuccess);
      directions.addEventListener(
          DirectionsEvent.DIRECTIONS_FAILURE, onDirectionsFailure);
      directions.load("48 Pirrama Rd, Pyrmont, NSW to Byron Bay, NSW");
    }

    /** Adds a big blue start button. */
    private function addStartButton():void {
      startButton = new Sprite();
      startButton.buttonMode = true;
      startButton.addEventListener(MouseEvent.CLICK, onStartClick);
      startButton.graphics.beginFill(0x1871ce);
      startButton.graphics.drawRoundRect(0, 0, 150, 100, 10, 10);
      startButton.graphics.endFill();
      var startField:TextField = new TextField();
      startField.autoSize = TextFieldAutoSize.LEFT;
      startField.defaultTextFormat = new TextFormat("_sans", 20, 0xffffff, true);
      startField.text = "Start!";
      startButton.addChild(startField);
      startField.x = 0.5 * (startButton.width - startField.width);
      startField.y = 0.5 * (startButton.height - startField.height);
      startButton.filters = [ new DropShadowFilter() ];
      var container:DisplayObjectContainer =
          getDisplayObject() as DisplayObjectContainer;
      container.addChild(startButton);
      startButton.x = 0.5 * (container.width - startButton.width);
      startButton.y = 0.5 * (container.height - startButton.height);
      var panoField:TextField = new TextField();
      panoField.autoSize = TextFieldAutoSize.LEFT;
      panoField.defaultTextFormat = new TextFormat("_sans", 11, 0x000000, true);
      panoField.text =
          "Photos provided by Panoramio are under the copyright of their owners.";
      container.addChild(panoField);
      panoField.x = container.width - panoField.width - 5;
      panoField.y = 5;
    }

    /**
     * Handles directions success. Starts flying the route if everything
     * is ready.
     * @param event The directions event.
     */
    private function onDirectionsSuccess(event:DirectionsEvent):void {
      directionsSuccessEvent = event;
      flyRouteIfReady();
    }

    /**
     * Handles click on the start button. Starts flying the route if everything
     * is ready.
     */
    private function onStartClick(event:MouseEvent):void {
      startButton.removeEventListener(MouseEvent.CLICK, onStartClick);
      var container:DisplayObjectContainer =
          getDisplayObject() as DisplayObjectContainer;
      container.removeChild(startButton);
      startButtonPressed = true;
      flyRouteIfReady();
    }

    /**
     * If we have loaded the directions and the start button has been pressed,
     * start flying the directions route.
     */
    private function flyRouteIfReady():void {
      if (!directionsSuccessEvent || !startButtonPressed) {
        return;
      }
      var directions:Directions = directionsSuccessEvent.directions;
      // Extract the route.
      route = directions.getRoute(0);
      // Draws the polyline showing the route.
      polyline = directions.createPolyline();
      addOverlay(polyline);
      // Creates a car marker that is moved along the route.
      var car:DisplayObject = new Car();
      marker = new Marker(route.startGeocode.point, new MarkerOptions({
        icon: car,
        iconOffset: new Point(-car.width / 2, -car.height)
      }));
      addOverlay(marker);
      transformPolyToWorld();
      createCumulativeArrays();
      // Load Panoramio data for the region covered by the route.
      loadPanoramioData(directions.bounds);
      var duration:Number = route.duration;
      // Start a timer that will trigger the car moving after the lead in time.
      var leadInTimer:Timer = new Timer(LEAD_IN_DURATION * 1000, 1);
      leadInTimer.addEventListener(TimerEvent.TIMER, onLeadInDone);
      leadInTimer.start();
      var flyTime:Number = -LEAD_IN_DURATION;
      // Set up the camera flight trajectory.
      for each (var flyStep:Object in FLY_TRAJECTORY) {
        var time:Number = flyStep.fraction * duration;
        var center:LatLng = latLngAt(time);
        var scaledTime:Number = time * SCALE_TIME;
        var zoom:Number = flyStep.zoom;
        var attitude:Attitude = flyStep.attitude;
        var elapsed:Number = scaledTime - flyTime;
        flyTime = scaledTime;
        flyTo(center, zoom, attitude, elapsed);
      }
    }

    /**
     * Loads Panoramio data for the route bounds. We load data about more photos
     * than we need, then select a subset lying along the route.
     * @param bounds Bounds within which to fetch images.
     */
    private function loadPanoramioData(bounds:LatLngBounds):void {
      var params:Object = {
        order: "popularity",
        set: "full",
        from: "0",
        to: NUM_GEOTAGGED_PHOTOS.toString(10),
        size: "small",
        minx: bounds.getWest(),
        miny: bounds.getSouth(),
        maxx: bounds.getEast(),
        maxy: bounds.getNorth()
      };
      var loader:URLLoader = new URLLoader();
      var request:URLRequest = new URLRequest(
          "http://www.panoramio.com/map/get_panoramas.php?" +
          paramsToString(params));
      loader.addEventListener(Event.COMPLETE, onPanoramioDataLoaded);
      loader.addEventListener(IOErrorEvent.IO_ERROR, onPanoramioDataFailed);
      loader.load(request);
    }

    /** Transforms the route polyline to world coordinates. */
    private function transformPolyToWorld():void {
      var numVertices:int = polyline.getVertexCount();
      worldPoly = new Array(numVertices);
      for (var i:int = 0; i < numVertices; ++i) {
        var vertex:LatLng = polyline.getVertex(i);
        worldPoly[i] = fromLatLngToPoint(vertex, 0);
      }
    }

    /**
     * Returns the time at which the route approaches closest to the
     * given point.
     * @param world Point in world coordinates.
     * @return Route time at which the closest approach occurs.
     */
    private function getTimeOfClosestApproach(world:Point):Number {
      var minDistSqr:Number = Number.MAX_VALUE;
      var numVertices:int = worldPoly.length;
      var x:Number = world.x;
      var y:Number = world.y;
      var minVertex:int = 0;
      for (var i:int = 0; i < numVertices; ++i) {
        var dx:Number = worldPoly[i].x - x;
        var dy:Number = worldPoly[i].y - y;
        var distSqr:Number = dx * dx + dy * dy;
        if (distSqr < minDistSqr) {
          minDistSqr = distSqr;
          minVertex = i;
        }
      }
      return cumulativeVertexDistance[minVertex];
    }

    /**
     * Returns the array index of the first element that compares greater than
     * the given value.
     * @param ordered Ordered array of elements.
     * @param value Value to use for comparison.
     * @return Array index of the first element that compares greater than
     * the given value.
     */
    private function upperBound(ordered:Array, value:Number,
                                first:int = 0, last:int = -1):int {
      if (last < 0) {
        last = ordered.length;
      }
      var count:int = last - first;
      var index:int;
      while (count > 0) {
        var step:int = count >> 1;
        index = first + step;
        if (value >= ordered[index]) {
          first = index + 1;
          count -= step + 1;
        } else {
          count = step;
        }
      }
      return first;
    }

    /**
     * Selects up to a given number of photos approximately evenly spaced along
     * the route.
     * @param ordered Array of photos, each of which is an object with
     * a property 'closestTime'.
     * @param number Number of photos to select.
     */
    private function selectEvenlySpacedPhotos(ordered:Array, number:int):Array {
      var start:Number = cumulativeVertexDistance[0];
      var end:Number =
          cumulativeVertexDistance[cumulativeVertexDistance.length - 2];
      var closestTimes:Array = [];
      for each (var photo:Object in ordered) {
        closestTimes.push(photo.closestTime);
      }
      var selectedPhotos:Array = [];
      for (var i:int = 0; i < number; ++i) {
        var idealTime:Number = start + ((end - start) * (i + 0.5) / number);
        var index:int = upperBound(closestTimes, idealTime);
        if (index < 1) {
          index = 0;
        } else if (index >= ordered.length) {
          index = ordered.length - 1;
        } else {
          var errorToPrev:Number = Math.abs(idealTime - closestTimes[index - 1]);
          var errorToNext:Number = Math.abs(idealTime - closestTimes[index]);
          if (errorToPrev < errorToNext) {
            --index;
          }
        }
        selectedPhotos.push(ordered[index]);
      }
      return selectedPhotos;
    }

    /**
     * Handles completion of loading the Panoramio index data. Selects from the
     * returned photo indices a subset of those that lie along the route and
     * initiates load of each of these.
     * @param event Load completion event.
     */
    private function onPanoramioDataLoaded(event:Event):void {
      var loader:URLLoader = event.target as URLLoader;
      var decoder:JSONDecoder = new JSONDecoder(loader.data as String);
      var allPhotos:Array = decoder.getValue().photos;
      for each (var photo:Object in allPhotos) {
        var latLng:LatLng = new LatLng(photo.latitude, photo.longitude);
        photo.closestTime =
            getTimeOfClosestApproach(fromLatLngToPoint(latLng, 0));
      }
      allPhotos.sortOn("closestTime", Array.NUMERIC);
      photos = selectEvenlySpacedPhotos(allPhotos, NUM_SHOWN_PHOTOS);
      for each (photo in photos) {
        var photoLoader:Loader = new Loader();
        // The images aren't on panoramio.com: we can't acquire pixel access
        // using "new LoaderContext(true)".
        photoLoader.load(new URLRequest(photo.photo_file_url));
        photo.loader = photoLoader;
        // Save the loader info: we use this to find the original element when
        // the load completes.
        photo.loaderInfo = photoLoader.contentLoaderInfo;
        photoLoader.contentLoaderInfo.addEventListener(
            Event.COMPLETE, onPhotoLoaded);
      }
    }

    /**
     * Creates a MouseEvent listener function that will navigate to the given
     * URL in a new window.
     * @param url URL to which to navigate.
     */
    private function createOnClickUrlOpener(url:String):Function {
      return function(event:MouseEvent):void {
        navigateToURL(new URLRequest(url));
      };
    }

    /**
     * Handles completion of loading an individual Panoramio image.
     * Adds a custom marker that displays the image. Initially this is made
     * invisible so that it can be faded in as needed.
     * @param event Load completion event.
     */
    private function onPhotoLoaded(event:Event):void {
      var loaderInfo:LoaderInfo = event.target as LoaderInfo;
      // We need to find which photo element this image corresponds to.
      for each (var photo:Object in photos) {
        if (loaderInfo == photo.loaderInfo) {
          var imageMarker:Sprite = createImageMarker(photo.loader,
              photo.owner_name, photo.owner_url);
          var options:MarkerOptions = new MarkerOptions({
            icon: imageMarker,
            hasShadow: true,
            iconAlignment: MarkerOptions.ALIGN_BOTTOM | MarkerOptions.ALIGN_LEFT
          });
          var latLng:LatLng = new LatLng(photo.latitude, photo.longitude);
          var marker:Marker = new Marker(latLng, options);
          photo.marker = marker;
          addOverlay(marker);
          // A hack: we add the actual image after the overlay has been added,
          // which creates the shadow, so that the shadow is valid even if we
          // don't have security privileges to generate the shadow from the
          // image.
          marker.foreground.visible = false;
          marker.shadow.alpha = 0;
          var imageHolder:Sprite = new Sprite();
          imageHolder.addChild(photo.loader);
          imageHolder.buttonMode = true;
          imageHolder.addEventListener(
              MouseEvent.CLICK, createOnClickUrlOpener(photo.photo_url));
          imageMarker.addChild(imageHolder);
          return;
        }
      }
      trace("An image was loaded which could not be found in the photo array.");
    }

    /** Creates a custom marker showing an image. */
    private function createImageMarker(child:DisplayObject, ownerName:String,
                                       ownerUrl:String):Sprite {
      var content:Sprite = new Sprite();
      var panoramioIcon:Bitmap = new PanoramioIcon();
      var iconHolder:Sprite = new Sprite();
      iconHolder.addChild(panoramioIcon);
      iconHolder.buttonMode = true;
      iconHolder.addEventListener(MouseEvent.CLICK, onPanoramioIconClick);
      panoramioIcon.x = BORDER_L;
      panoramioIcon.y = BORDER_T;
      content.addChild(iconHolder);
      // NOTE: we add the image as a child only after we've added the marker
      // to the map. Currently the API requires this if it's to generate the
      // shadow for unprivileged content.
      // Shrink the image, so that it doesn't obscure too much screen space.
      // Ideally, we'd subsample, but we don't have pixel level access.
      child.scaleX = IMAGE_SCALE;
      child.scaleY = IMAGE_SCALE;
      var imageW:Number = child.width;
      var imageH:Number = child.height;
      child.x = BORDER_L + 30;
      child.y = BORDER_T + iconHolder.height + GAP_T;
      var authorField:TextField = new TextField();
      authorField.autoSize = TextFieldAutoSize.LEFT;
      authorField.defaultTextFormat = new TextFormat("_sans", 12);
      authorField.text = "author:";
      content.addChild(authorField);
      authorField.x = BORDER_L;
      authorField.y = BORDER_T + iconHolder.height + GAP_T + imageH + GAP_B;
      var ownerField:TextField = new TextField();
      ownerField.autoSize = TextFieldAutoSize.LEFT;
      var textFormat:TextFormat = new TextFormat("_sans", 14, 0x0e5f9a);
      ownerField.defaultTextFormat = textFormat;
      ownerField.htmlText =
          "<a href=\"" + ownerUrl + "\" target=\"_blank\">" + ownerName + "</a>";
      content.addChild(ownerField);
      ownerField.x = BORDER_L + authorField.width;
      ownerField.y = BORDER_T + iconHolder.height + GAP_T + imageH + GAP_B;
      var totalW:Number = BORDER_L +
          Math.max(imageW, ownerField.width + authorField.width) + BORDER_R;
      var totalH:Number = BORDER_T + iconHolder.height + GAP_T + imageH +
          GAP_B + ownerField.height + BORDER_B;
      content.graphics.beginFill(0xffffff);
      content.graphics.drawRoundRect(0, 0, totalW, totalH, 10, 10);
      content.graphics.endFill();
      var marker:Sprite = new Sprite();
      marker.addChild(content);
      content.x = 30;
      content.y = 0;
      marker.graphics.lineStyle();
      marker.graphics.beginFill(0xff0000);
      marker.graphics.drawCircle(0, totalH + 30, 3);
      marker.graphics.endFill();
      marker.graphics.lineStyle(2, 0xffffff);
      marker.graphics.moveTo(30 + 10, totalH - 10);
      marker.graphics.lineTo(0, totalH + 30);
      return marker;
    }

    /** Handles click on the Panoramio icon. */
    private function onPanoramioIconClick(event:MouseEvent):void {
      navigateToURL(new URLRequest(PANORAMIO_HOME));
    }

    /** Handles failure of the Panoramio data load. */
    private function onPanoramioDataFailed(event:IOErrorEvent):void {
      trace("Load of image failed: " + event);
    }

    /**
     * Returns a string containing cgi query parameters.
     * @param params Associative array mapping query parameter key to value.
     * @return String containing cgi query parameters.
     */
    private static function paramsToString(params:Object):String {
      var result:String = "";
      var separator:String = "";
      for (var key:String in params) {
        result += separator +
            encodeURIComponent(key) + "=" + encodeURIComponent(params[key]);
        separator = "&";
      }
      return result;
    }

    /**
     * Called once the lead-in flight is done. Starts the car driving along
     * the route and starts a timer to begin fade in of the Panoramio
     * images in 1.5 seconds.
     */
    private function onLeadInDone(event:Event):void {
      // Set startTimer non-zero so that the car starts to move.
      startTimer = getTimer();
      // Start a timer that will fade in the Panoramio images.
      var fadeInTimer:Timer = new Timer(1500, 1);
      fadeInTimer.addEventListener(TimerEvent.TIMER, onFadeInTimer);
      fadeInTimer.start();
    }

    /**
     * Handles the fade in timer's TIMER event. Sets markerAlpha above zero,
     * which causes the frame enter handler to fade in the markers.
     */
    private function onFadeInTimer(event:Event):void {
      markerAlpha = 0.01;
    }

    /** The end time of the flight. */
    private function get endTime():Number {
      if (!cumulativeStepDuration || cumulativeStepDuration.length == 0) {
        return startTimer;
      }
      return startTimer +
          cumulativeStepDuration[cumulativeStepDuration.length - 1];
    }

    /**
     * Creates the cumulative arrays, cumulativeStepDuration and
     * cumulativeVertexDistance.
     */
    private function createCumulativeArrays():void {
      cumulativeStepDuration = new Array(route.numSteps + 1);
      cumulativeVertexDistance = new Array(polyline.getVertexCount() + 1);
      var polylineTotal:Number = 0;
      var total:Number = 0;
      var numVertices:int = polyline.getVertexCount();
      for (var stepIndex:int = 0; stepIndex < route.numSteps; ++stepIndex) {
        cumulativeStepDuration[stepIndex] = total;
        total += route.getStep(stepIndex).duration;
        var startVertex:int = stepIndex >= 0 ?
            route.getStep(stepIndex).polylineIndex : 0;
        var endVertex:int = stepIndex < (route.numSteps - 1) ?
            route.getStep(stepIndex + 1).polylineIndex : numVertices;
        var duration:Number = route.getStep(stepIndex).duration;
        var stepVertices:int = endVertex - startVertex;
        var latLng:LatLng = polyline.getVertex(startVertex);
        for (var vertex:int = startVertex; vertex < endVertex; ++vertex) {
          cumulativeVertexDistance[vertex] = polylineTotal;
          if (vertex < numVertices - 1) {
            var nextLatLng:LatLng = polyline.getVertex(vertex + 1);
            polylineTotal += nextLatLng.distanceFrom(latLng);
          }
          latLng = nextLatLng;
        }
      }
      cumulativeStepDuration[stepIndex] = total;
    }

    /**
     * Opens the info window above the car icon that details the given
     * step of the driving directions.
     * @param stepIndex Index of the current step.
     */
    private function openInfoForStep(stepIndex:int):void {
      // Sets the content of the info window.
      var content:String;
      if (stepIndex >= route.numSteps) {
        content = "<b>" + route.endGeocode.address + "</b>" +
            "<br /><br />" + route.summaryHtml;
      } else {
        content = "<b>" + stepIndex + ".</b> " +
            route.getStep(stepIndex).descriptionHtml;
      }
      marker.openInfoWindow(new InfoWindowOptions({ contentHTML: content }));
    }

    /**
     * Displays the driving directions step appropriate for the given time.
     * Opens the info window showing the step instructions each time we
     * progress to a new step.
     * @param time Time for which to display the step.
     */
    private function displayStepAt(time:Number):void {
      var stepIndex:int = upperBound(cumulativeStepDuration, time) - 1;
      var minStepIndex:int = 0;
      var maxStepIndex:int = route.numSteps - 1;
      if (stepIndex >= 0 && stepIndex <= maxStepIndex &&
          currentStepIndex != stepIndex) {
        openInfoForStep(stepIndex);
        currentStepIndex = stepIndex;
      }
    }

    /**
     * Returns the LatLng at which the car should be positioned at the given
     * time.
     * @param time Time for which LatLng should be found.
     * @return LatLng.
     */
    private function latLngAt(time:Number):LatLng {
      var stepIndex:int = upperBound(cumulativeStepDuration, time) - 1;
      var minStepIndex:int = 0;
      var maxStepIndex:int = route.numSteps - 1;
      if (stepIndex < minStepIndex) {
        return route.startGeocode.point;
      } else if (stepIndex > maxStepIndex) {
        return route.endGeocode.point;
      }
      var stepStart:Number = cumulativeStepDuration[stepIndex];
      var stepEnd:Number = cumulativeStepDuration[stepIndex + 1];
      var stepFraction:Number = (time - stepStart) / (stepEnd - stepStart);
      var startVertex:int = route.getStep(stepIndex).polylineIndex;
      var endVertex:int = (stepIndex + 1) < route.numSteps ?
          route.getStep(stepIndex + 1).polylineIndex : polyline.getVertexCount();
      var stepVertices:int = endVertex - startVertex;
      var stepLeng
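As for the embedding question itself, here is a minimal sketch of how the class could be hosted in the last tab of a tab navigator. Everything beyond what the post shows is an assumption (the mapTab id, the setSize sizing, and the wrapper approach are illustrative): since Map3D is a plain Sprite rather than a Flex component, it has to be wrapped in a UIComponent before an MXML container will accept it.

<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" layout="vertical">
    <mx:Script>
        <![CDATA[
            import flash.geom.Point;
            import mx.core.UIComponent;
            import src.SBTSCoreObject.FlyingDirections;

            // Called once the last tab has been created (TabNavigator
            // defers child creation until the tab is first shown).
            private function initMap():void {
                var map:FlyingDirections = new FlyingDirections();
                map.setSize(new Point(mapTab.width, mapTab.height));
                // Wrap the Sprite-based map so Flex can manage it.
                var wrapper:UIComponent = new UIComponent();
                wrapper.percentWidth = 100;
                wrapper.percentHeight = 100;
                wrapper.addChild(map);
                mapTab.addChild(wrapper);
            }
        ]]>
    </mx:Script>

    <mx:TabNavigator width="100%" height="100%">
        <!-- ... earlier tabs ... -->
        <mx:Canvas id="mapTab" label="3D Map" width="100%" height="100%"
                   creationComplete="initMap()"/>
    </mx:TabNavigator>
</mx:Application>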

    Read the article

  • Troubleshooting Windows Authentication problems (no challenge) in IIS 7.5?

    - by Aaronaught
I know that there are thousands of reports of people having trouble getting Integrated Windows Authentication to work with IIS, but they all seem to lead to web pages that don't apply or solutions that I've already tried. I've deployed dozens of sites like this before, so either there's something bizarre going on with the server/configuration, or I've been looking at this too long and not seeing the obvious. Simply put, everything works perfectly on my local machine, but falls apart on the production server, which as far as I can tell has the exact same configuration.

On the local machine:
- The machine is running Windows 7 Ultimate, Service Pack 1, IIS 7.5.
- The site has been tested successfully, using both IIS and the VS Web Development Server.
- The IIS site config has all authentication methods disabled except Windows Authentication.
- The local machine is not on any domain.
- The Providers set up are Negotiate and NTLM (not Negotiate:Kerberos). Extended Protection is Off.
- All browsers tested (IE, Firefox, Chrome) show the challenge prompt and allow me to log in to the localhost domain with my (local) Windows account.
- All browsers tested also work using an opaque local IP address, so the browsers themselves don't seem to care whether the site appears "local" or "remote".
- I've added a display line to the web page which shows the currently-logged-in user, and it shows exactly what I would expect (whichever local user I logged in with).

On the remote machine:
- The server is running Windows Server 2008 R2, IIS 7.5.
- Loading the web page results in an immediate 401.2 error: "You are not authorized to view this page due to invalid authentication headers." No challenge prompt ever appears.
- The IIS site config has all authentication methods disabled except Windows Authentication.
- The remote machine is not on any domain.
- The Providers set up are Negotiate and NTLM (not Negotiate:Kerberos). Extended Protection is Off.
- On the remote machine (remote desktop session), the same error appears in Internet Explorer regardless of whether the domain is localhost or the external IP address.
- If I try to view the remote web site from my local machine, the error is still 401, but a slightly different 401: no subcode, with the text "Access is denied due to invalid credentials."
- The Windows Authentication IIS role feature is installed.
- The WindowsAuthentication module is added (at the server level).
- The exact same error occurs if I turn off Windows Authentication and enable Basic Authentication.
- The site does load if I turn off Windows Authentication and enable Anonymous (obviously).

I've already followed all of the troubleshooting steps on Microsoft Support: Troubleshooting HTTP 401 errors in IIS. I've already tried the workaround shown on another Microsoft support page (supposedly to force NTLM as the only method). Last but not least, I tried turning on FREB for 401.2 errors, and the results don't seem to tell me anything useful; all I see is the following warning:

MODULE_SET_RESPONSE_ERROR_STATUS
ModuleName           IIS Web Core
Notification         2
HttpStatus           401
HttpReason           Unauthorized
HttpSubStatus        2
ErrorCode            2147942405
ConfigExceptionInfo
Notification         AUTHENTICATE_REQUEST
ErrorCode            Access is denied. (0x80070005)

...this seems to just be telling me what I already know (that it's simply rejecting the request instead of negotiating the credentials).
The trace does indicate that the WindowsAuthentication module is correctly loaded, because there is a NOTIFY_MODULE_START line with ModuleName = WindowsAuthentication (and various other ASP.NET follow-up events; [un]fortunately, no interesting errors or warnings there). Can anyone tell me what I might be missing here?

Quick update: I'm a little uncomfortable sending a whole Wireshark dump as it would reveal IPs, URLs and other stuff, but I did a side-by-side comparison of the HTTP responses from localhost and the remote server in Fiddler, and it seems fairly self-evident what the problem is:

Localhost:
HTTP/1.1 401 Unauthorized
Cache-Control: private
Content-Type: text/html; charset=utf-8
Server: Microsoft-IIS/7.5
WWW-Authenticate: Negotiate
WWW-Authenticate: NTLM
X-Powered-By: ASP.NET
Date: Sat, 17 Dec 2011 23:42:34 GMT
Content-Length: 6399
Proxy-Support: Session-Based-Authentication

Remote:
HTTP/1.1 401 Unauthorized
Content-Type: text/html
Server: Microsoft-IIS/7.5
X-Powered-By: ASP.NET
Date: Sat, 17 Dec 2011 23:43:13 GMT
Content-Length: 1293

Aside from a few seemingly inconsequential differences like Cache-Control, the main difference is that the remote server is not sending the WWW-Authenticate headers back to the client. So, I guess that narrows the question down to: why is IIS not sending WWW-Authenticate headers when Windows Authentication appears to be installed, loaded, and exclusively enabled?
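A sanity check worth running in a case like this is to confirm, on the server itself, what the effective configuration for the site actually is, since a locked or overridden windowsAuthentication section at the server level can win over what IIS Manager displays. A minimal diagnostic sketch, assuming the site is named "Default Web Site" (substitute the real site name):

%windir%\system32\inetsrv\appcmd.exe list config "Default Web Site" -section:system.webServer/security/authentication/windowsAuthentication

rem Expect enabled="true" and a <providers> list containing Negotiate and NTLM.
rem If the section shows enabled="false" here despite what the UI says, the
rem effective configuration disagrees with the UI, which would explain the
rem missing WWW-Authenticate headers.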

    Read the article

  • ERROR: Attempted to read or write protected memory. This is often an indication that other memory is corrupt

    - by SPSamL
I get this error after having edited a few pages in SharePoint 2010. I have to do an IISReset on both front ends to get this to resolve. I don't know how to fix it or even what else to supply here, but please let me know, as the resets now happen several times per day.

Log Name:      Application
Source:        ASP.NET 2.0.50727.0
Date:          1/26/2011 11:12:48 AM
Event ID:      1309
Task Category: Web Event
Level:         Warning
Keywords:      Classic
User:          N/A
Computer:      PINTSPSFE02.samcstl.org

Description:
Event code: 3005
Event message: An unhandled exception has occurred.
Event time: 1/26/2011 11:12:48 AM
Event time (UTC): 1/26/2011 5:12:48 PM
Event ID: c52fb336b7f147a3913fff3617a99d57
Event sequence: 4965
Event occurrence: 2178
Event detail code: 0

Application information:
Application domain: /LM/W3SVC/1449762715/ROOT-2-129405348166941887
Trust level: WSS_Minimal
Application Virtual Path: /
Application Path: C:\inetpub\wwwroot\wss\VirtualDirectories\80\
Machine name: PINTSPSFE02

Process information:
Process ID: 5928
Process name: w3wp.exe
Account name: SAMC\MossAppPool

Exception information:
Exception type: AccessViolationException
Exception message: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.

Request information:
Request URL: http://mosscluster/Pages/Home.aspx
Request path: /Pages/Home.aspx
User host address: 10.3.60.26
User: SAMC\BARNMD
Is authenticated: True
Authentication Type: NTLM
Thread account name: SAMC\MossAppPool

Thread information:
Thread ID: 110
Thread account name: SAMC\MossAppPool
Is impersonating: False

Stack trace:
at Microsoft.Office.Server.ObjectCache.SPCache.MossObjectCache_Tracked.Delete(String key, Boolean recursive, DeletionReason reason)
at Microsoft.Office.Server.ObjectCache.SPCache.MossObjectCache_Tracked.Get(String key)
at Microsoft.Office.Server.ObjectCache.SPCache.Get(String objectTypeName, String id)
at Microsoft.Office.Server.Administration.UserProfileServiceProxy.GetPartitionPropertiesCache(Guid applicationID)
at Microsoft.Office.Server.Administration.UserProfileApplicationProxy.get_PartitionPropertiesCache()
at Microsoft.Office.Server.Administration.UserProfileApplicationProxy.DataCache.get_PartitionProperties()
at Microsoft.Office.Server.Administration.UserProfileApplicationProxy.GetMySitePortalUrl(SPUrlZone zone, Guid partitionID)
at Microsoft.Office.Server.Administration.UserProfileApplicationProxy.GetMySitePortalUrl(SPUrlZone zone, SPServiceContext serviceContext)
at Microsoft.Office.Server.WebControls.MyLinksRibbon.EnsureMySiteUrls()
at Microsoft.Office.Server.WebControls.MyLinksRibbon.get_PortalMySiteUrlAvailable()
at Microsoft.Office.Server.WebControls.MyLinksRibbon.OnLoad(EventArgs e)
at System.Web.UI.Control.LoadRecursive()
at System.Web.UI.Control.LoadRecursive()
at System.Web.UI.Control.LoadRecursive()
at System.Web.UI.Control.LoadRecursive()
at System.Web.UI.Control.LoadRecursive()
at System.Web.UI.Control.LoadRecursive()
at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)

Custom event details:

Event Xml:
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System>
    <Provider Name="ASP.NET 2.0.50727.0" />
    <EventID Qualifiers="32768">1309</EventID>
    <Level>3</Level>
    <Task>3</Task>
    <Keywords>0x80000000000000</Keywords>
    <TimeCreated SystemTime="2011-01-26T17:12:48.000000000Z" />
    <EventRecordID>35834</EventRecordID>
    <Channel>Application</Channel>
    <Computer>PINTSPSFE02.samcstl.org</Computer>
    <Security />
  </System>
  <EventData>
    <Data>3005</Data>
    <Data>An unhandled exception has occurred.</Data>
    <Data>1/26/2011 11:12:48 AM</Data>
    <Data>1/26/2011 5:12:48 PM</Data>
    <Data>c52fb336b7f147a3913fff3617a99d57</Data>
    <Data>4965</Data>
    <Data>2178</Data>
    <Data>0</Data>
    <Data>/LM/W3SVC/1449762715/ROOT-2-129405348166941887</Data>
    <Data>WSS_Minimal</Data>
    <Data>/</Data>
    <Data>C:\inetpub\wwwroot\wss\VirtualDirectories\80\</Data>
    <Data>PINTSPSFE02</Data>
    <Data> </Data>
    <Data>5928</Data>
    <Data>w3wp.exe</Data>
    <Data>SAMC\MossAppPool</Data>
    <Data>AccessViolationException</Data>
    <Data></Data>
    <Data>http://mosscluster/Pages/Home.aspx</Data>
    <Data>/Pages/Home.aspx</Data>
    <Data>10.3.60.26</Data>
    <Data>SAMC\BARNMD</Data>
    <Data>True</Data>
    <Data>NTLM</Data>
    <Data>SAMC\MossAppPool</Data>
    <Data>110</Data>
    <Data>SAMC\MossAppPool</Data>
    <Data>False</Data>
    <Data>at Microsoft.Office.Server.ObjectCache.SPCache.MossObjectCache_Tracked.Delete(String key, Boolean recursive, DeletionReason reason)
         at Microsoft.Office.Server.ObjectCache.SPCache.MossObjectCache_Tracked.Get(String key)
         at Microsoft.Office.Server.ObjectCache.SPCache.Get(String objectTypeName, String id)
         at Microsoft.Office.Server.Administration.UserProfileServiceProxy.GetPartitionPropertiesCache(Guid applicationID)
         at Microsoft.Office.Server.Administration.UserProfileApplicationProxy.get_PartitionPropertiesCache()
         at Microsoft.Office.Server.Administration.UserProfileApplicationProxy.DataCache.get_PartitionProperties()
         at Microsoft.Office.Server.Administration.UserProfileApplicationProxy.GetMySitePortalUrl(SPUrlZone zone, Guid partitionID)
         at Microsoft.Office.Server.Administration.UserProfileApplicationProxy.GetMySitePortalUrl(SPUrlZone zone, SPServiceContext serviceContext)
         at Microsoft.Office.Server.WebControls.MyLinksRibbon.EnsureMySiteUrls()
         at Microsoft.Office.Server.WebControls.MyLinksRibbon.get_PortalMySiteUrlAvailable()
         at Microsoft.Office.Server.WebControls.MyLinksRibbon.OnLoad(EventArgs e)
         at System.Web.UI.Control.LoadRecursive()
         at System.Web.UI.Control.LoadRecursive()
         at System.Web.UI.Control.LoadRecursive()
         at System.Web.UI.Control.LoadRecursive()
         at System.Web.UI.Control.LoadRecursive()
         at System.Web.UI.Control.LoadRecursive()
         at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)</Data>
  </EventData>
</Event>
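The failing frames sit in the publishing object cache (MossObjectCache_Tracked). One commonly cited mitigation for object-cache trouble in SharePoint 2010, offered here as an assumption to verify rather than a confirmed fix for this exact crash, is to configure the object cache super user and super reader accounts for the web application. A sketch with hypothetical account names (claims-style identities shown; use plain DOMAIN\user for classic-mode web applications):

$wa = Get-SPWebApplication "http://mosscluster"
$wa.Properties["portalsuperuseraccount"]   = "i:0#.w|SAMC\sp_superuser"    # hypothetical account
$wa.Properties["portalsuperreaderaccount"] = "i:0#.w|SAMC\sp_superreader"  # hypothetical account
$wa.Update()

The two accounts must also be granted Full Control and Full Read, respectively, in the web application's user policy, and the app pools recycled afterwards.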

    Read the article

  • Postfix Submission port issue

    - by RevSpot
I have set up postfix+mailman on my Debian server, and I have an issue with the postfix submission port. My ISP blocks SMTP on port 25 to prevent spam, so I must use the submission port (587). I have uncommented the following line in master.cf (/etc/postfix/), but nothing happens:

submission inet n       -       -       -       -       smtpd

This is my mail log file when I try to invite a user to a mailman list:

Nov  6 00:35:34 myhostname postfix/qmgr[1763]: C90BF1060D: from=<[email protected]>, size=1743, nrcpt=1 (queue active)
Nov  6 00:35:34 myhostname postfix/qmgr[1763]: DF54B10608: from=<[email protected]>, size=488, nrcpt=1 (queue active)
Nov  6 00:35:34 myhostname postfix/qmgr[1763]: 80F0D10609: from=<[email protected]>, size=483, nrcpt=1 (queue active)
Nov  6 00:35:55 myhostname postfix/smtp[2269]: connect to gmail-smtp-in.l.google.com[173.194.70.27]:25: Connection timed out
Nov  6 00:35:55 myhostname postfix/smtp[2270]: connect to gmail-smtp-in.l.google.com[173.194.70.27]:25: Connection timed out
Nov  6 00:35:55 myhostname postfix/smtp[2271]: connect to gmail-smtp-in.l.google.com[173.194.70.27]:25: Connection timed out
Nov  6 00:36:16 myhostname postfix/smtp[2269]: connect to alt1.gmail-smtp-in.l.google.com[74.125.143.26]:25: Connection timed out
Nov  6 00:36:16 myhostname postfix/smtp[2270]: connect to alt1.gmail-smtp-in.l.google.com[74.125.143.26]:25: Connection timed out
Nov  6 00:36:16 myhostname postfix/smtp[2271]: connect to alt1.gmail-smtp-in.l.google.com[74.125.143.26]:25: Connection timed out
Nov  6 00:36:37 myhostname postfix/smtp[2269]: connect to alt2.gmail-smtp-in.l.google.com[74.125.141.26]:25: Connection timed out
Nov  6 00:36:37 myhostname postfix/smtp[2270]: connect to alt2.gmail-smtp-in.l.google.com[74.125.141.26]:25: Connection timed out
Nov  6 00:36:37 myhostname postfix/smtp[2271]: connect to alt2.gmail-smtp-in.l.google.com[74.125.141.26]:25: Connection timed out
Nov  6 00:36:58 myhostname postfix/smtp[2269]: connect to alt3.gmail-smtp-in.l.google.com[173.194.64.26]:25: Connection timed out
Nov  6 00:36:58 myhostname postfix/smtp[2270]: connect to alt3.gmail-smtp-in.l.google.com[173.194.64.26]:25: Connection timed out
Nov  6 00:36:58 myhostname postfix/smtp[2271]: connect to alt3.gmail-smtp-in.l.google.com[173.194.64.26]:25: Connection timed out
Nov  6 00:37:19 myhostname postfix/smtp[2269]: connect to alt4.gmail-smtp-in.l.google.com[74.125.142.26]:25: Connection timed out
Nov  6 00:37:19 myhostname postfix/smtp[2270]: connect to alt4.gmail-smtp-in.l.google.com[74.125.142.26]:25: Connection timed out
Nov  6 00:37:19 myhostname postfix/smtp[2269]: C90BF1060D: to=<[email protected]>, relay=none, delay=23711, delays=23606/0.03/105/0, dsn=4.4.1, status=deferred (connect to alt4.gmail-smtp-in.l.google.com[74.125.142.26]:25: Connection timed out)
Nov  6 00:37:19 myhostname postfix/smtp[2271]: connect to alt4.gmail-smtp-in.l.google.com[74.125.142.26]:25: Connection timed out
Nov  6 00:37:19 myhostname postfix/smtp[2270]: DF54B10608: to=<[email protected]>, relay=none, delay=23882, delays=23777/0.03/105/0, dsn=4.4.1, status=deferred (connect to alt4.gmail-smtp-in.l.google.com[74.125.142.26]:25: Connection timed out)
Nov  6 00:37:19 myhostname postfix/smtp[2271]: 80F0D10609: to=<[email protected]>, relay=none, delay=23875, delays=23770/0.04/105/0, dsn=4.4.1, status=deferred (connect to alt4.gmail-smtp-in.l.google.com[74.125.142.26]:25: Connection timed out)

main.cf:

smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
biff = no
append_dot_mydomain = no
readme_directory = no
smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
smtpd_use_tls = yes
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
myhostname = mail.mydomain.com
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
myorigin = /etc/mailname
mydestination = mail.mydomain.com, localhost.mydomain.com, localhost
relayhost =
relay_domains = $mydestination, mail.mydomain.com
relay_recipient_maps = hash:/var/lib/mailman/data/virtual-mailman
transport_maps = hash:/etc/postfix/transport
mailman_destination_recipient_limit = 1
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
mailbox_command = procmail -a "$EXTENSION"
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = all
local_recipient_maps =

master.cf:

smtp      inet  n       -       -       -       -       smtpd
submission inet n       -       -       -       -       smtpd
#  -o smtpd_tls_security_level=encrypt
#  -o smtpd_sasl_auth_enable=yes
#  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
#  -o milter_macro_daemon_name=ORIGINATING
#smtps    inet  n       -       -       -       -       smtpd
#  -o smtpd_tls_wrappermode=yes
#  -o smtpd_sasl_auth_enable=yes
#  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
#  -o milter_macro_daemon_name=ORIGINATING
#628      inet  n       -       -       -       -       qmqpd
pickup    fifo  n       -       -       60      1       pickup
cleanup   unix  n       -       -       -       0       cleanup
qmgr      fifo  n       -       n       300     1       qmgr
#qmgr     fifo  n       -       -       300     1       oqmgr
tlsmgr    unix  -       -       -       1000?   1       tlsmgr
rewrite   unix  -       -       -       -       -       trivial-rewrite
bounce    unix  -       -       -       -       0       bounce
defer     unix  -       -       -       -       0       bounce
trace     unix  -       -       -       -       0       bounce
verify    unix  -       -       -       -       1       verify
flush     unix  n       -       -       1000?   0       flush
proxymap  unix  -       -       n       -       -       proxymap
proxywrite unix -       -       n       -       1       proxymap
smtp      unix  -       -       -       -       -       smtp
# When relaying mail as backup MX, disable fallback_relay to avoid MX loops
relay     unix  -       -       -       -       -       smtp
        -o smtp_fallback_relay=
#       -o smtp_helo_timeout=5 -o smtp_connect_timeout=5
showq     unix  n       -       -       -       -       showq
error     unix  -       -       -       -       -       error
retry     unix  -       -       -       -       -       error
discard   unix  -       -       -       -       -       discard
local     unix  -       n       n       -       -       local
virtual   unix  -       n       n       -       -       virtual
lmtp      unix  -       -       -       -       -       lmtp
anvil     unix  -       -       -       -       1       anvil
scache    unix  -       -       -       -       1       scache
#
# ====================================================================
#
# maildrop. See the Postfix MAILDROP_README file for details.
# Also specify in main.cf: maildrop_destination_recipient_limit=1
#
maildrop  unix  -       n       n       -       -       pipe
  flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient}
#
# ====================================================================
#
# See the Postfix UUCP_README file for configuration details.
#
uucp      unix  -       n       n       -       -       pipe
  flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient)
#
# Other external delivery methods.
#
ifmail    unix  -       n       n       -       -       pipe
  flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient)
bsmtp     unix  -       n       n       -       -       pipe
  flags=Fq. user=bsmtp argv=/usr/lib/bsmtp/bsmtp -t$nexthop -f$sender $recipient
scalemail-backend unix - n      n       -       2       pipe
  flags=R user=scalemail argv=/usr/lib/scalemail/bin/scalemail-store ${nexthop} ${user} ${extension}
mailman   unix  -       n       n       -       -       pipe
  flags=FR user=list argv=/usr/lib/mailman/bin/postfix-to-mailman.py ${nexthop} ${user}
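One thing worth noting about the log above: the submission entry in master.cf only controls the port your own Postfix listens on for mail coming in from clients; it does not change the port Postfix uses for outbound delivery, which is why the log still shows outbound connections to gmail-smtp-in.l.google.com on port 25 timing out. A common way around an ISP that blocks outbound 25 is to relay all outbound mail through a smarthost on 587. A minimal main.cf sketch, assuming a Gmail account is used as the smarthost (the credentials file path is illustrative; on older Postfix releases without smtp_tls_security_level, smtp_use_tls = yes is the rough equivalent):

relayhost = [smtp.gmail.com]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = encrypt

# /etc/postfix/sasl_passwd (then run: postmap /etc/postfix/sasl_passwd):
# [smtp.gmail.com]:587 [email protected]:yourpassword

After editing, reload with "postfix reload" and re-run the mailman invitation to see whether the deferred messages go out.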

    Read the article

  • Mozilla Weave can't sync Firefox. What's wrong?

    - by Mehper C. Palavuzlar
For the last few days, Mozilla Weave can't sync. Below is the activity log. Any ideas?

2010-05-02 20:47:15 Service.Main WARN Unknown error while downloading metadata record. Aborting sync.
2010-05-02 20:47:15 Service.Main CONFIG Starting backoff, next sync at:Sun May 02 2010 21:16:09 GMT+0300 (GTB Yaz Saati)
2010-05-02 20:47:15 Service.Main DEBUG Exception: aborting sync, remote setup failed No traceback available
2010-05-02 21:16:09 Service.Main DEBUG Idle timer created for sync, will sync after 5 seconds of inactivity.
2010-05-02 21:16:30 Net.Resource DEBUG GET success 200 https://sj-weave03.services.mozilla.com/1.0/mehper/storage/meta/global
2010-05-02 21:16:30 Service.Main DEBUG Weave Version: 1.2.3 Local Storage: 2 Remote Storage: 2
2010-05-02 21:26:50 Net.Resource DEBUG GET success 200 https://sj-weave03.services.mozilla.com/1.0/mehper/info/collections
2010-05-02 21:26:50 Engine.Clients INFO 0 outgoing items pre-reconciliation
2010-05-02 21:26:50 Engine.Clients INFO Records: 0 applied, 0 reconciled, 0 left to fetch
2010-05-02 21:26:50 Engine.Clients DEBUG Total (ms): sync 6, processIncoming 3, uploadOutgoing 0, syncStartup 3, syncFinish 0
2010-05-02 21:26:50 Engine.Bookmarks INFO 0 outgoing items pre-reconciliation
2010-05-02 21:26:50 Engine.Bookmarks INFO Records: 0 applied, 0 reconciled, 0 left to fetch
2010-05-02 21:26:50 Engine.Bookmarks DEBUG Total (ms): sync 13, processIncoming 5, uploadOutgoing 0, syncStartup 3, syncFinish 3
2010-05-02 21:26:50 Engine.Forms INFO 1 outgoing items pre-reconciliation
2010-05-02 21:26:50 Engine.Forms INFO Records: 0 applied, 0 reconciled, 0 left to fetch
2010-05-02 21:26:50 Engine.Forms INFO Uploading all of 1 records
2010-05-02 21:26:50 Collection DEBUG POST Length: 388
2010-05-02 21:27:06 Collection DEBUG POST success 200 https://sj-weave03.services.mozilla.com/1.0/mehper/storage/forms
2010-05-02 21:27:06 Engine.Forms DEBUG Total (ms): sync 15924, processIncoming 3, uploadOutgoing 15918, syncStartup 3, syncFinish 0, createRecord 1
2010-05-02 21:27:06 Engine.History INFO 55 outgoing items pre-reconciliation
2010-05-02 21:27:06 Engine.History INFO Records: 0 applied, 0 reconciled, 0 left to fetch
2010-05-02 21:27:09 Engine.History INFO Uploading all of 55 records
2010-05-02 21:27:09 Collection DEBUG POST Length: 35337
2010-05-02 21:27:32 Collection DEBUG POST success 200 https://sj-weave03.services.mozilla.com/1.0/mehper/storage/history
2010-05-02 21:27:32 Engine.History DEBUG Total (ms): sync 25588, processIncoming 4, uploadOutgoing 25580, syncStartup 3, syncFinish 0, createRecord 2540
2010-05-02 21:27:32 Engine.Passwords INFO 0 outgoing items pre-reconciliation
2010-05-02 21:27:32 Engine.Passwords INFO Records: 0 applied, 0 reconciled, 0 left to fetch
2010-05-02 21:27:32 Engine.Passwords DEBUG Total (ms): sync 8, processIncoming 4, uploadOutgoing 0, syncStartup 4, syncFinish 0
2010-05-02 21:27:32 Engine.Prefs INFO 0 outgoing items pre-reconciliation
2010-05-02 21:27:32 Engine.Prefs INFO Records: 0 applied, 0 reconciled, 0 left to fetch
2010-05-02 21:27:32 Engine.Prefs DEBUG Total (ms): sync 8, processIncoming 3, uploadOutgoing 0, syncStartup 4, syncFinish 0
2010-05-02 21:27:32 Engine.Tabs INFO 1 outgoing items pre-reconciliation
2010-05-02 21:27:32 Engine.Tabs INFO Records: 0 applied, 0 reconciled, 0 left to fetch
2010-05-02 21:27:32 Engine.Tabs INFO Uploading all of 1 records
2010-05-02 21:27:32 Collection DEBUG POST Length: 393
2010-05-02 21:27:54 Collection DEBUG POST success 200 https://sj-weave03.services.mozilla.com/1.0/mehper/storage/tabs
2010-05-02 21:27:54 Engine.Tabs DEBUG Total (ms): sync 21943, processIncoming 3, uploadOutgoing 21936, syncStartup 3, syncFinish 0, createRecord 8
2010-05-02 21:27:54 Service.Main INFO Sync completed successfully
2010-05-02 22:27:53 Service.Main DEBUG Idle timer created for sync, will sync after 5 seconds of inactivity.
2010-05-02 22:28:14 Net.Resource DEBUG GET success 200 https://sj-weave03.services.mozilla.com/1.0/mehper/storage/meta/global
2010-05-02 22:28:14 Service.Main DEBUG Weave Version: 1.2.3 Local Storage: 2 Remote Storage: 2
2010-05-02 22:28:16 Net.Resource DEBUG GET fail 503 https://sj-weave03.services.mozilla.com/1.0/mehper/info/collections
2010-05-02 22:28:16 Service.Main DEBUG Exception: aborting sync, failed to get collections No traceback available
2010-05-02 23:28:15 Service.Main DEBUG Idle timer created for sync, will sync after 5 seconds of inactivity.
2010-05-03 00:26:42 Service.Main DEBUG Exception: Could not acquire lock No traceback available
2010-05-03 00:31:03 RecordMgr DEBUG Failed to import record: App. Quitting JS Stack trace: Res__request(...)@resource.js:208 < Res_get()@resource.js:271 < RecordMgr_import("https://sj-weave03.services.mozilla.com/1.0/mehper/storage/meta/global")@wbo.js:119 < WeaveSvc__remoteSetup()@service.js:824 < ()@service.js:1187 < WrappedNotify()@util.js:114 < WrappedLock()@util.js:86 < WrappedCatch()@util.js:65 < sync(false)@service.js:1146 < ([object Object])@service.js:414 < notify([object XPCWrappedNative_NoHelper])@util.js:629
2010-05-03 00:31:03 Service.Main DEBUG Weave Version: 1.2.3 Local Storage: 2 Remote Storage:
2010-05-03 00:31:03 Service.Main WARN Unknown error while downloading metadata record. Aborting sync.
2010-05-03 00:31:03 Service.Main DEBUG Exception: aborting sync, remote setup failed No traceback available
2010-05-03 17:26:25 Service.Main INFO Loading Weave 1.2.3
2010-05-03 17:26:25 Engine.Bookmarks DEBUG Engine initialized
2010-05-03 17:26:25 Engine.Forms DEBUG Engine initialized
2010-05-03 17:26:25 Engine.History DEBUG Engine initialized
2010-05-03 17:26:25 Engine.Passwords DEBUG Engine initialized
2010-05-03 17:26:25 Engine.Prefs DEBUG Engine initialized
2010-05-03 17:26:25 Engine.Tabs DEBUG Engine initialized
2010-05-03 17:26:25 Engine.Tabs DEBUG Resetting tabs last sync time
2010-05-03 17:26:25 Service.Main INFO Mozilla/5.0 (Windows; U; Windows NT 6.1; tr; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 (.NET CLR 3.5.30729)
2010-05-03 17:26:26 Service.Main DEBUG Caching URLs under storage user base: https://sj-weave03.services.mozilla.com/1.0/mehper/
2010-05-03 17:26:30 Service.Main DEBUG Autoconnecting in 3 seconds
2010-05-03 17:26:36 Service.Main INFO Logging in user mehper
2010-05-03 17:45:46 Service.Main DEBUG Exception: Could not acquire lock No traceback available
2010-05-03 17:53:18 Service.Main DEBUG Exception: Could not acquire lock No traceback available

    Read the article

  • Intermittent Connection Issues to SQL Server 2008 R2 RTM

    - by Chandan Jha
The problem I am facing is a very complex one, and in spite of trying to gather a root cause, I am standing in the same place after two months with just bits and pieces of information. Here is the scenario: there is a Windows 2003 server which uses a system DSN ODBC connection. I looked into the driver properties, and they are as follows:

Name:    SQL Server
Version: 2000.86.3959.00
File:    SQLSRV32.DLL

Now, this system DSN has been configured with TCP/IP in Network Libraries, and 'determine port dynamically' is checked. Now, let's come to the database destination. It is hosted on Windows 2008 running SQL Server 2008 R2 RTM, 64-bit. I will give you an overview of the events that happen and whatever troubleshooting I could perform:

I get an email saying 'blah blah' failed, and the only message their application gets is 'cannot connect to database'. I go to the SQL Server logs and find the following information:

Login failed for user ''. Reason: An attempt to login using SQL authentication failed. Server is configured for Windows authentication only. [CLIENT: 10.0.0.xx]
Error: 18456, Severity: 14, State: 58.

A quick search shows that this error may come when a SQL Server is configured for Windows authentication only, but that's not true here: we have mixed mode, and the connection issue is intermittent. This SQL Server is configured to run under the local system account, but since we use only SQL Server accounts to connect to it, there should not be any Kerberos errors. When I run a profiler trace and look only at 'existing connections', I see a lot of them coming from my client server displaying the SQL user but NO hostname. The TextData field shows TCP/IP information along with some arithabort and ansi-null settings. Next, I tried looking into the connectivity ring buffer by using the following:

SELECT CAST(record AS XML)
FROM sys.dm_os_ring_buffers
WHERE ring_buffer_type = 'RING_BUFFER_CONNECTIVITY'

One sample output is:

<ConnectivityTraceRecord>
  <RecordType>Error</RecordType>
  <RecordSource>Tds</RecordSource>
  <Spid>118</Spid>
  <SniConnectionId>5124905D-D1EC-460E-AD78-201050B78C67</SniConnectionId>
  <OSError>0</OSError>
  <SniConsumerError>18452</SniConsumerError>
  <SniProvider>7</SniProvider>
  <State>1</State>
  <RemoteHost>10.0.0.21</RemoteHost>
  <RemotePort>5008</RemotePort>
  <LocalHost>10.1.0.38</LocalHost>
  <LocalPort>1433</LocalPort>
  <RecordTime>6/6/2012 21:14:57.527</RecordTime>
  <TdsBuffersInformation>
    <TdsInputBufferError>0</TdsInputBufferError>
    <TdsOutputBufferError>0</TdsOutputBufferError>
    <TdsInputBufferBytes>120</TdsInputBufferBytes>
  </TdsBuffersInformation>
  <TdsDisconnectFlags>
    <PhysicalConnectionIsKilled>0</PhysicalConnectionIsKilled>
    <DisconnectDueToReadError>0</DisconnectDueToReadError>
    <NetworkErrorFoundInInputStream>0</NetworkErrorFoundInInputStream>
    <ErrorFoundBeforeLogin>0</ErrorFoundBeforeLogin>
    <SessionIsKilled>0</SessionIsKilled>
    <NormalDisconnect>0</NormalDisconnect>
  </TdsDisconnectFlags>
</ConnectivityTraceRecord>
<Stack>
  <frame id="0">0X000000000174C34B</frame>
  <frame id="1">0X0000000001748FDD</frame>
  <frame id="2">0X0000000002461001</frame>
  <frame id="3">0X0000000000C47E98</frame>
  <frame id="4">0X00000000008015AD</frame>
  <frame id="5">0X0000000000801492</frame>
  <frame id="6">0X00000000003CBBD8</frame>
  <frame id="7">0X00000000003CB8BA</frame>
  <frame id="8">0X00000000003CB6FF</frame>
  <frame id="9">0X00000000008E8FB6</frame>
  <frame id="10">0X00000000008E9175</frame>
  <frame id="11">0X00000000008E9839</frame>
  <frame id="12">0X00000000008E9502</frame>
  <frame id="13">0X0000000074E437D7</frame>
  <frame id="14">0X0000000074E43894</frame>
  <frame id="15">0X00000000775A652D</frame>

Somehow all of the ring-buffer records show error number 18452, whereas I never find 18452 in my SQL logs, where I see only 18456. I am just stuck at a dead end because this connection issue appears intermittently. Sorry for a long question, but I hope that if you read this you can see that I tried a lot at my end before giving up.

    Read the article

  • PHP+Apache as forward/reverse proxy: how to process client requests and server responses in PHP?

    - by Lightworker
Hi! I'm having a lot of trouble getting the proper configuration of Apache mod_proxy.so to work as desired... The main idea is to create a proxy on a local machine in a network which will have the ability to process a client request (client connected through this Apache proxy) in PHP, and which will also have the capacity to process the server responses in PHP. Those are the two functionalities, and they are independent of each other. Let me present a little schema of what I need to achieve:

As you can see here, there are two ways: the blue one and the red one. For the blue one, I basically connected a client (Machine B - cell phone) on my local network (home) and configured it to go through a proxy, which is Machine A (personal computer) on exactly the same network. So let's say (no DHCP):

Machine A: 192.168.1.40 -- Apache is running on this machine, configured to listen on port 80.
Machine B (cell phone): 192.168.1.75 -- configured to go through a proxy, which is IP 192.168.1.40 and port 80 (basically, Machine A).

After configuring Apache properly, which basically means removing the "#" in httpd.conf from the lines for mod_proxy.so (main worker), mod_proxy_connect.so (SSL, AllowCONNECT, ...) and mod_proxy_http.so (needed to handle HTTP requests/responses), I have, in my case, lines like this:

# Implements a proxy/gateway for Apache.
Include "conf/extra/httpd-proxy.conf"
# Various default settings
Include "conf/extra/httpd-default.conf"
# Secure (SSL/TLS) connections
Include "conf/extra/httpd-ssl.conf"

which gives me the ability to configure the file httpd-proxy.conf to prepare the forward proxy or the reverse proxy. So I'm not sure whether what I need is a forward proxy or a reverse one. For a forward proxy I've done this:

<IfModule proxy_module>
<IfModule proxy_http_module>
#
# FORWARD Proxy
#
#ProxyRequests Off
ProxyRequests On
ProxyVia On
<Proxy *>
Order deny,allow
# Allow from all
Deny from all
Allow from 192.168.1
</Proxy>
</IfModule>
</IfModule>

which basically passes all the packets normally to the server and back to the client. I can trace it perfectly (and test that it works) by looking at Apache's access.log. Any request I make with the cell phone then appears in the Apache log. So it works. But here comes the problem: I need to process those client requests, and I need to do it in PHP. I have read a lot about this. I've read in detail the official Apache documentation about mod_proxy, and I've searched a lot on forums, but without luck. So I came to a first approximation:

1) A forward proxy in Apache passes all the packets through, and it's not possible to process them. This seems to be true, so what about a reverse proxy?

I envisioned something like:

ProxyRequests Off
<Proxy *>
Order deny,allow
Allow from all
</Proxy>
ProxyPass http://www.google.com http://www.yahoo.com
ProxyPassReverse http://www.google.com http://www.yahoo.com

which is just a test, but it should mean that when my cell phone tries to navigate to Google, it ends up at Yahoo, shouldn't it? But no, it doesn't work. And as you can see, ALL the examples of an Apache reverse proxy go like:

ProxyPass /foo http://foo.example.com/bar
ProxyPassReverse /foo http://foo.example.com/bar

which means that any kind of request in a local context will be resolved to a remote location. But what I need is the inverse! It's that when asking for a remote site on my phone, I resolve this request on my local server (the Apache one) to process it with a PHP module.
So, if it's a forward proxy, I need the traffic to pass through PHP first. If it's a reverse proxy, I need to change the "going" direction to my local server first, to process the request in PHP. Then a second option comes to mind:

2) I've seen something like:

<Proxy http://example.com/foo/*>
SetOutputFilter INCLUDES
</Proxy>

and I started to research SetOutputFilter, SetInputFilter, AddOutputFilter and AddInputFilter. But I don't really know how I can use them. They seem to be good, or a solution for me, because with something like this I should be able to add an input filter to process the client requests in PHP and send back to the client whatever I program/want (not the remote server response), which is the BLUE path on the schema; and I should be able to add an output filter which seems to give me the ability to process the remote server response before sending it to the client, which should be the RED path on the schema. The red path is just for reading server responses and playing with them, nothing more. The blue path is the important one, because there I will send the client whatever I want after processing the requests. I'm so sorry for this amazingly big post, but I needed to explain it as well as I can. I hope someone will understand my problem and help me solve it! Lots of thanks in advance!! :)
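For the blue path, one pattern worth testing, offered purely as a sketch (handler.php and the url parameter are hypothetical names, and the Apache wiring that routes proxied requests into the script, e.g. a mod_rewrite rule in the virtual host, still has to be worked out for your setup): a small PHP script that fetches the requested URL itself, so that both the request and the response pass through PHP, where they can be inspected or modified before anything reaches the phone.

<?php
// handler.php - hypothetical PHP pass-through for the proxy.
// Receives the original URL (e.g. via a RewriteRule appending ?url=...),
// fetches it with cURL, and returns the body, optionally modified.
// In real use, validate/whitelist $url to avoid turning this into an
// open relay.
$url = $_GET['url'];

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);   // capture body instead of printing it
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);   // follow redirects transparently
curl_setopt($ch, CURLOPT_HEADER, false);          // body only; we set our own headers
$body = curl_exec($ch);
$contentType = curl_getinfo($ch, CURLINFO_CONTENT_TYPE);
curl_close($ch);

// Process the client request / server response here (blue and red paths).
$body = str_replace('Example', 'EXAMPLE', $body);  // trivial demo rewrite

header('Content-Type: ' . $contentType);
echo $body;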

    Read the article

  • Bind9 as a caching resolver fails with mismatch ID on localhost but not external IP

    - by argibbs
I'm running Ubuntu 12.04 LTS on a machine on my private network. I have bind9 installed (v9.8.1-P1) via aptitude, so it appears to have put all the bits in the right places, and the service starts automatically. I plan on adding some zones later, but first I'm just trying to get it working as a caching resolver. I installed bind, configured it, and started using it. Initially I thought it was working OK, but then I found some sites weren't being resolved. I've pinned it down to being linked to the size of the result and bind failing over to TCP mode. So: I'm trying to find out why bind is failing when I query for domain info and the result is 512 bytes (causing a truncation and retry over TCP). Specifically, it fails with ID mismatches if I point dig at localhost, but works when I query the machine's own IP (192.168.0.2). This appears to be backwards from the problem most people have when using bind (fails on the external IP, works on localhost).

If I do dig @localhost google.com (which has a response of <512 bytes) then it works; I get no warnings and plenty of output:

$ dig @localhost google.com
; <<>> DiG 9.8.1-P1 <<>> @localhost google.com
[snip lots of output]
;; Query time: 39 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Thu Oct 17 23:08:34 2013
;; MSG SIZE  rcvd: 495

If I do dig @localhost play.google.com (which has a larger response) then I get back something like:

$ dig @localhost play.google.com
;; Truncated, retrying in TCP mode.
;; ERROR: ID mismatch: expected ID 3696, got 27130

This seems to be standard, documented behaviour: when the UDP response is large (here 'large' == 512 bytes) it falls back to TCP. The ID mismatch is not expected, though. If I do dig @192.168.0.2 play.google.com then I still get the warning about using TCP mode, but it otherwise works:

$ dig @192.168.0.2 play.google.com
;; Truncated, retrying in TCP mode.
; <<>> DiG 9.8.1-P1 <<>> @192.168.0.2 play.google.com
[snip most of the output]
;; Query time: 5 msec
;; SERVER: 192.168.0.2#53(192.168.0.2)
;; WHEN: Thu Oct 17 23:05:55 2013
;; MSG SIZE  rcvd: 521

At the moment I've not set up any zones in my local instance, so it's just acting as a caching resolver. My options config is pretty much unchanged from standard; I've got the following set:

options {
    directory "/var/cache/bind";
    allow-query { 192.168/16; 127.0.0.1; };
    forwarders { 8.8.8.8; 8.8.4.4; };
    dnssec-validation auto;
    edns-udp-size 4096;
    allow-transfer { any; };
    auth-nxdomain no;    # conform to RFC1035
    listen-on-v6 { any; };
};

And my /etc/resolv.conf is just:

nameserver 127.0.0.1
search .local

The problem definitely seems linked to the failover to TCP mode: if I do dig +bufsize=4096 @localhost play.google.com then it works; no warning about failover to TCP, no ID mismatch, and a standard-looking result. To be honest, if there were a way to force bind to use a much larger UDP buffer, that would probably be good enough for me, but all I've been able to find mention of is max-udp-size 4096, and that doesn't change the behaviour in any way. I've also tried setting edns-udp-size 512 in case the problem is some weird EDNS issue with my router (which seems unlikely since the +bufsize=4096 flag works fine). I've also tried dig +trace @localhost play.google.com; this works: no truncation/TCP warning, and a full result. I've also tried changing the servers used in the forwarders (e.g. to OpenDNS), but that makes no difference.
There's one last data point: if I repeatedly do dig @localhost play.google.com I don't always get an ID mismatch, but sometimes a REFUSED error. I'm much more likely to get a REFUSED error if I dig the non-localhost IP (192.168.0.2) first:

$ dig @localhost play.google.com
;; Truncated, retrying in TCP mode.
; <<>> DiG 9.8.1-P1 <<>> @localhost play.google.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 35104
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;play.google.com. IN A

;; Query time: 4 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Thu Oct 17 23:20:13 2013
;; MSG SIZE  rcvd: 33

Any insights or things to try would be much appreciated.
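Since everything that fails goes through the UDP-truncate-then-TCP-retry path on loopback, it may be worth isolating the TCP leg directly. A small diagnostic sketch (nothing here is specific to this setup beyond the names already used above):

$ dig +tcp @localhost play.google.com      # force TCP from the first query
$ dig +tcp @192.168.0.2 play.google.com    # same, via the LAN address

# In a second terminal, watch the TCP conversation on loopback:
$ sudo tcpdump -n -i lo tcp port 53

If +tcp works against 192.168.0.2 but misbehaves against localhost, the capture should show whether the mismatched/REFUSED answers are really coming from 127.0.0.1#53 or from some other local process answering on loopback.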

    Read the article

  • postfix 5.7.1 Relay access denied when sending mail with cron

    - by zensys
    Reluctant to ask because there is so much here about 'postfix relay access denied' but I cannot find my case: I use php (Zend Framework) to send emails outside my network using the Google mail server because I could not send mail outside my server (user: web). However when I sent out an email via cron (user: root, I believe), still using ZF, using the same mail config/credentials, I get the message: '5.7.1 Relay access denied' I guess I need to know one of two things: 1. How can I use the google smtp server from cron 2. What do I need to change in my config to send mail using my own server instead of google Though the answer to 2. is the more structural solution I assume, I am quite happy with an answer to 1. as well because I think Google is better at server maintaince (security/spam) than I am. Below my ZF application.ini mail section, main.cf and master.cf: application.ini: resources.mail.transport.type = smtp resources.mail.transport.auth = login resources.mail.transport.host = "smtp.gmail.com" resources.mail.transport.ssl = tls resources.mail.transport.port = 587 resources.mail.transport.username = [email protected] resources.mail.transport.password = xxxxxxx resources.mail.defaultFrom.email = [email protected] resources.mail.defaultFrom.name = "my company" main.cf: # Debian specific: Specifying a file name will cause the first # line of that file to be used as the name. The Debian default # is /etc/mailname. #myorigin = /etc/mailname smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) biff = no # appending .domain is the MUA's job. append_dot_mydomain = no # Uncomment the next line to generate "delayed mail" warnings #delay_warning_time = 4h readme_directory = /usr/share/doc/postfix # TLS parameters smtpd_tls_cert_file = /etc/postfix/smtpd.cert smtpd_tls_key_file = /etc/postfix/smtpd.key smtpd_use_tls = yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for # information on enabling SSL in the smtp client. 
myhostname = mail.second-start.nl
mydomain = second-start.nl
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
myorigin = /etc/mailname
mydestination =
relayhost =
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
mailbox_command = procmail -a "$EXTENSION"
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = all
html_directory = /usr/share/doc/postfix/html
message_size_limit = 30720000

virtual_alias_domains =
virtual_alias_maps = proxy:mysql:/etc/postfix/mysql-virtual_forwardings.cf, mysql:/etc/postfix/mysql-virtual_email2email.cf
virtual_mailbox_domains = proxy:mysql:/etc/postfix/mysql-virtual_domains.cf
virtual_mailbox_maps = proxy:mysql:/etc/postfix/mysql-virtual_mailboxes.cf
virtual_mailbox_base = /home/vmail
virtual_uid_maps = static:5000
virtual_gid_maps = static:5000

smtpd_sasl_auth_enable = yes
broken_sasl_auth_clients = yes
smtpd_sasl_authenticated_header = yes

# see under Spam
smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination

proxy_read_maps = $local_recipient_maps $mydestination $virtual_alias_maps $virtual_alias_domains $virtual_mailbox_maps $virtual_mailbox_domains $relay_recipient_maps $relay_domains $canonical_maps $sender_canonical_maps $recipient_canonical_maps $relocated_maps $transport_maps $mynetworks $virtual_mailbox_limit_maps

virtual_transport = dovecot
dovecot_destination_recipient_limit = 1

# Spam
disable_vrfy_command = yes
smtpd_delay_reject = yes
smtpd_helo_required = yes
smtpd_helo_restrictions = permit_mynetworks, check_helo_access hash:/etc/postfix/helo_access, reject_non_fqdn_hostname, reject_invalid_hostname, permit
smtpd_recipient_restrictions = permit_sasl_authenticated, reject_unauth_destination, reject_invalid_hostname, reject_non_fqdn_sender, reject_non_fqdn_recipient, reject_unknown_sender_domain, reject_unknown_recipient_domain, permit_mynetworks, reject_non_fqdn_hostname, reject_rbl_client sbl.spamhaus.org, reject_rbl_client zen.spamhaus.org, reject_rbl_client cbl.abuseat.org, reject_rbl_client bl.spamcop.net, permit
smtpd_error_sleep_time = 1s
smtpd_soft_error_limit = 10
smtpd_hard_error_limit = 20

master.cf:

# ==========================================================================
# service type  private unpriv  chroot  wakeup  maxproc command + args
#               (yes)   (yes)   (yes)   (never) (100)
# ==========================================================================
smtp      inet  n       -       -       -       -       smtpd
#smtp     inet  n       -       -       -       1       postscreen
#smtpd    pass  -       -       -       -       -       smtpd
#dnsblog  unix  -       -       -       -       0       dnsblog
#tlsproxy unix  -       -       -       -       0       tlsproxy
#submission inet n      -       -       -       -       smtpd
#  -o smtpd_tls_security_level=encrypt
#  -o smtpd_sasl_auth_enable=yes
#  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
#  -o milter_macro_daemon_name=ORIGINATING
#smtps    inet  n       -       -       -       -       smtpd
#  -o smtpd_tls_wrappermode=yes
#  -o smtpd_sasl_auth_enable=yes
#  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
#  -o milter_macro_daemon_name=ORIGINATING
#628      inet  n       -       -       -       -       qmqpd
pickup    fifo  n       -       -       60      1       pickup
cleanup   unix  n       -       -       -       0       cleanup
qmgr      fifo  n       -       n       300     1       qmgr
#qmgr     fifo  n       -       -       300     1       oqmgr
tlsmgr    unix  -       -       -       1000?   1       tlsmgr
rewrite   unix  -       -       -       -       -       trivial-rewrite
bounce    unix  -       -       -       -       0       bounce
defer     unix  -       -       -       -       0       bounce
trace     unix  -       -       -       -       0       bounce
verify    unix  -       -       -       -       1       verify
flush     unix  n       -       -       1000?   0       flush
proxymap  unix  -       -       n       -       -       proxymap
proxywrite unix -       -       n       -       1       proxymap
smtp      unix  -       -       -       -       -       smtp
# When relaying mail as backup MX, disable fallback_relay to avoid MX loops
relay     unix  -       -       -       -       -       smtp
  -o smtp_fallback_relay=
#  -o smtp_helo_timeout=5 -o smtp_connect_timeout=5
showq     unix  n       -       -       -       -       showq
error     unix  -       -       -       -       -       error
retry     unix  -       -       -       -       -       error
discard   unix  -       -       -       -       -       discard
local     unix  -       n       n       -       -       local
virtual   unix  -       n       n       -       -       virtual
lmtp      unix  -       -       -       -       -       lmtp
anvil     unix  -       -       -       -       1       anvil
scache    unix  -       -       -       -       1       scache
#
# ====================================================================
# Interfaces to non-Postfix software. Be sure to examine the manual
# pages of the non-Postfix software to find out what options it wants.
#
# Many of the following services use the Postfix pipe(8) delivery
# agent.  See the pipe(8) man page for information about ${recipient}
# and other message envelope options.
# ====================================================================
#
# maildrop. See the Postfix MAILDROP_README file for details.
# Also specify in main.cf: maildrop_destination_recipient_limit=1
#
maildrop  unix  -       n       n       -       -       pipe
  flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient}
#
# ====================================================================
#
# Recent Cyrus versions can use the existing "lmtp" master.cf entry.
#
# Specify in cyrus.conf:
#   lmtp    cmd="lmtpd -a" listen="localhost:lmtp" proto=tcp4
#
# Specify in main.cf one or more of the following:
#  mailbox_transport = lmtp:inet:localhost
#  virtual_transport = lmtp:inet:localhost
#
# ====================================================================
#
# Cyrus 2.1.5 (Amos Gouaux)
# Also specify in main.cf: cyrus_destination_recipient_limit=1
#
#cyrus    unix  -       n       n       -       -       pipe
#  user=cyrus argv=/cyrus/bin/deliver -e -r ${sender} -m ${extension} ${user}
#
# ====================================================================
# Old example of delivery via Cyrus.
#
#old-cyrus unix -       n       n       -       -       pipe
#  flags=R user=cyrus argv=/cyrus/bin/deliver -e -m ${extension} ${user}
#
# ====================================================================
#
# See the Postfix UUCP_README file for configuration details.
#
uucp      unix  -       n       n       -       -       pipe
  flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient)
#
# Other external delivery methods.
#
ifmail    unix  -       n       n       -       -       pipe
  flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient)
bsmtp     unix  -       n       n       -       -       pipe
  flags=Fq. user=bsmtp argv=/usr/lib/bsmtp/bsmtp -t$nexthop -f$sender $recipient
scalemail-backend unix - n      n       -       2       pipe
  flags=R user=scalemail argv=/usr/lib/scalemail/bin/scalemail-store ${nexthop} ${user} ${extension}
mailman   unix  -       n       n       -       -       pipe
  flags=FR user=list argv=/usr/lib/mailman/bin/postfix-to-mailman.py ${nexthop} ${user}
dovecot   unix  -       n       n       -       -       pipe
  flags=DRhu user=vmail:vmail argv=/usr/lib/dovecot/deliver -d ${recipient}
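The direction I'd probably try for option 1 is to point the local postfix at Google as a relayhost, so that anything cron hands to the local MTA gets authenticated upstream. This is only a sketch under assumptions: smtp.gmail.com on port 587 with STARTTLS, a Google account that allows SMTP auth, and placeholder credentials (user@example.com / password below are hypothetical, not from my config):

$ sudo postconf -e 'relayhost = [smtp.gmail.com]:587'
$ sudo postconf -e 'smtp_use_tls = yes'
$ sudo postconf -e 'smtp_sasl_auth_enable = yes'
$ sudo postconf -e 'smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd'
$ sudo postconf -e 'smtp_sasl_security_options = noanonymous'
$ echo '[smtp.gmail.com]:587 user@example.com:password' | sudo tee /etc/postfix/sasl_passwd
$ sudo postmap /etc/postfix/sasl_passwd      # builds /etc/postfix/sasl_passwd.db
$ sudo chmod 600 /etc/postfix/sasl_passwd /etc/postfix/sasl_passwd.db
$ sudo service postfix restart

A quick test from a root shell (i.e. roughly what cron sees) would then be something like echo test | sendmail recipient@example.com, checking /var/log/mail.log to confirm the handoff to smtp.gmail.com instead of a local '5.7.1 Relay access denied'.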
