Search Results


  • Windows Azure: Import/Export Hard Drives, VM ACLs, Web Sockets, Remote Debugging, Continuous Delivery, New Relic, Billing Alerts and More

    - by ScottGu
    Two weeks ago we released a giant set of improvements to Windows Azure, as well as a significant update of the Windows Azure SDK. This morning we released another massive set of enhancements to Windows Azure. Today’s new capabilities include:

    - Storage: Import/Export hard disk drives to your storage accounts
    - HDInsight: General availability of our Hadoop service in the cloud
    - Virtual Machines: New VM gallery, ACL support for VIPs
    - Web Sites: WebSocket and remote debugging support
    - Notification Hubs: Segmented customer push notification support with tag expressions
    - TFS & Git: Continuous delivery support for Web Sites + Cloud Services
    - Developer Analytics: New Relic support for Web Sites + Mobile Services
    - Service Bus: Support for partitioned queues and topics
    - Billing: New Billing Alert Service that sends email notifications when your bill hits a threshold you define

    All of these improvements are now available to use immediately (note that some features are still in preview). Below are more details about them.

    Storage: Import/Export Hard Disk Drives to Windows Azure

    I am excited to announce the preview of our new Windows Azure Import/Export Service! The Windows Azure Import/Export Service enables you to move large amounts of on-premises data into and out of your Windows Azure Storage accounts. It does this by enabling you to securely ship hard disk drives directly to our Windows Azure data centers. Once we receive the drives we’ll automatically transfer the data to or from your Windows Azure Storage account. This enables you to import or export massive amounts of data more quickly and cost effectively (and not be constrained by available network bandwidth).

    Encrypted Transport

    Our Import/Export service provides built-in support for BitLocker disk encryption, which enables you to securely encrypt data on the hard drives before you send them, so you don’t have to worry about the data being compromised even if a disk is lost or stolen in transit (the content on the transported hard drives is completely encrypted, and you are the only one who has the key to it). The drive preparation tool we are shipping today makes setting up BitLocker encryption on these hard drives easy.

    How to Import/Export your first Hard Drive of Data

    You can read our Getting Started Guide to learn more about how to begin using the import/export service. You can create import and export jobs via the Windows Azure Management Portal, as well as programmatically using our Service Management APIs.

    It is really easy to create a new import or export job using the Windows Azure Management Portal. Simply navigate to a Windows Azure storage account and click the new Import/Export tab now available within it (note: if you don’t have this tab, make sure to sign up for the Import/Export preview). Then click the “Create Import Job” or “Create Export Job” commands at the bottom of it. This will launch a wizard that easily walks you through the steps required.

    For more comprehensive information about Import/Export, refer to the Windows Azure Storage team blog. You can also send questions and comments to the [email protected] email address.

    We think you’ll find this new service makes it much easier to move data into and out of Windows Azure, and it will dramatically cut down the network bandwidth required when working on large data migration projects. We hope you like it.
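    For a sense of what the programmatic route looks like, below is a minimal sketch of creating an import job against the Service Management API. Only the certificate-based authentication pattern here is standard Service Management usage; the job-resource URI shape, the API version string, and the XML payload are placeholders assumed for illustration, so check the Import/Export REST reference for the actual contract.

        // Hypothetical sketch of creating an import job via the Service Management API.
        // The job-resource URI and XML payload below are assumptions, not the real contract.
        using System;
        using System.Net;
        using System.Security.Cryptography.X509Certificates;
        using System.Text;

        class ImportJobSketch
        {
            static void Main()
            {
                var subscriptionId = "<subscription-id>";   // placeholder
                var storageAccount = "<storage-account>";   // placeholder
                var jobName = "myimportjob";

                // Assumed URI shape; the real path may differ.
                var uri = "https://management.core.windows.net/" + subscriptionId +
                          "/services/storageservices/" + storageAccount + "/jobs/" + jobName;

                var request = (HttpWebRequest)WebRequest.Create(uri);
                request.Method = "PUT";
                request.ContentType = "application/xml";
                request.Headers["x-ms-version"] = "2013-10-01";   // assumed API version

                // Service Management calls authenticate with a management certificate.
                request.ClientCertificates.Add(new X509Certificate2("management.pfx", "password"));

                // Minimal assumed payload describing the job and the drive being shipped.
                var payload = "<Job><Type>Import</Type><ReturnAddress>...</ReturnAddress></Job>";
                var bytes = Encoding.UTF8.GetBytes(payload);
                using (var stream = request.GetRequestStream())
                    stream.Write(bytes, 0, bytes.Length);

                using (var response = (HttpWebResponse)request.GetResponse())
                    Console.WriteLine(response.StatusCode);
            }
        }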
    HDInsight: 100% Compatible Hadoop Service in the Cloud

    Last week we announced the general availability release of Windows Azure HDInsight. HDInsight is a 100% compatible Hadoop service that allows you to easily provision and manage Hadoop clusters for big data processing in Windows Azure. This release is now live in production, backed by an enterprise SLA, supported 24x7 by Microsoft Support, and is ready to use for production scenarios.

    HDInsight allows you to use Apache Hadoop tools, such as Pig and Hive, to process large amounts of data in Windows Azure Blob Storage. Because data is stored in Windows Azure Blob Storage, you can choose to dynamically create Hadoop clusters only when you need them, and then shut them down when they are no longer required (since you pay only for the time the Hadoop cluster instances are running, this provides a super cost-effective way to use them). You can create Hadoop clusters using either the Windows Azure Management Portal (see below) or our PowerShell and cross-platform command line tools.

    The import/export hard drive support that came out today is a perfect companion service to use with HDInsight – the combination allows you to easily ingest, process, and optionally export a limitless amount of data. We’ve also integrated HDInsight with our Business Intelligence tools, so users can leverage familiar tools like Excel in order to analyze the output of jobs. You can find out more about how to get started with HDInsight here.
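    If you prefer code to the portal, the sketch below shows roughly what cluster provisioning looks like from .NET. The HDInsightClient and ClusterCreateParameters names are approximations of the early HDInsight management SDK surface, reconstructed from memory rather than verified against documentation, so treat this as a sketch of the shape of the call, not a reference.

        // Rough sketch of provisioning an HDInsight cluster from .NET.
        // Class, method, and property names approximate the early HDInsight
        // management SDK and may not match the shipped API exactly.
        using System;
        using System.Security.Cryptography.X509Certificates;
        using Microsoft.WindowsAzure.Management.HDInsight;   // assumed package

        class ProvisionClusterSketch
        {
            static void Main()
            {
                var credential = new HDInsightCertificateCredential(
                    new Guid("<subscription-id>"),
                    new X509Certificate2("management.pfx", "password"));

                var client = HDInsightClient.Connect(credential);

                var parameters = new ClusterCreateParameters
                {
                    Name = "mycluster",
                    Location = "North Europe",
                    DefaultStorageAccountName = "mystorage.blob.core.windows.net",
                    DefaultStorageAccountKey = "<storage-key>",
                    DefaultStorageContainer = "hdinsight",
                    ClusterSizeInNodes = 4,        // you pay only while these nodes run
                    UserName = "admin",
                    Password = "<cluster-password>"
                };

                var cluster = client.CreateCluster(parameters);   // blocks until provisioned
                Console.WriteLine(cluster.Name);
            }
        }

    Deleting the cluster again once a job finishes (DeleteCluster in the same sketch vocabulary) is what makes the pay-for-what-you-run model above work, since the data stays behind in Blob Storage.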
    Virtual Machines: VM Gallery Enhancements

    Today’s update of Windows Azure brings with it a new Virtual Machine gallery that you can use to create new VMs in the cloud. You can launch the gallery by doing New->Compute->Virtual Machine->From Gallery within the Windows Azure Management Portal. The new Virtual Machine Gallery includes some nice enhancements that make it even easier to use:

    - Search: You can now easily search and filter images using the search box in the top-right of the dialog. For example, simply type “SQL” and we’ll filter to show those images in the gallery that contain that substring.
    - Category Tree-view: Each month we add more built-in VM images to the gallery. You can continue to browse these using the “All” view within the VM Gallery – or now quickly filter them using the category tree-view on the left-hand side of the dialog. For example, by selecting “Oracle” in the tree-view you can quickly filter to see the official Oracle-supplied images.
    - MSDN and Supported checkboxes: With today’s update we are also introducing filters that make it easy to exclude types of images that you may not be interested in. The first checkbox is MSDN: using this filter you can exclude any image that is not part of the Windows Azure benefits for MSDN subscribers (which have highly discounted pricing – you can learn more about the MSDN pricing here). The second checkbox is Supported: this filter will exclude any image that contains prerelease software, so you can feel confident that the software you choose to deploy is fully supported by Windows Azure and our partners.
    - Sort options: We sort gallery images by what we think customers are most interested in, but sometimes you might want to sort using different views. So we’re providing some additional sort options, like “Newest,” to customize the image list for what suits you best.
    - Pricing information: We now provide additional pricing information about images and options on how to cost-effectively run them directly within the VM Gallery.

    The above improvements make it even easier to use the VM Gallery and quickly create, launch, and run Virtual Machines in the cloud.

    Virtual Machines: ACL Support for VIPs

    A few months ago we exposed the ability to configure Access Control Lists (ACLs) for Virtual Machines using Windows PowerShell cmdlets and our Service Management API. With today’s release, you can now configure VM ACLs using the Windows Azure Management Portal as well. You can do this by clicking the new Manage ACL command in the Endpoints tab of a virtual machine instance. This will enable you to configure an ordered list of permit and deny rules to scope the traffic that can access your VM’s network endpoints. For example, if you were on a virtual network, you could limit RDP access to a Windows Azure virtual machine to only a few computers attached to your enterprise. Or if you weren’t on a virtual network, you could alternatively limit traffic from public IPs that can access your workloads.

    Here are the default behaviors for ACLs in Windows Azure:

    - By default (i.e. no rules specified), all traffic is permitted.
    - When using only Permit rules, all other traffic is denied.
    - When using only Deny rules, all other traffic is permitted.
    - When there is a combination of Permit and Deny rules, all other traffic is denied.

    Lastly, remember that configuring endpoints does not automatically configure them within the VM if it also has firewall rules enabled at the OS level. So if you create an endpoint using the Windows Azure Management Portal, Windows PowerShell, or the REST API, be sure to also configure your guest VM firewall appropriately as well.
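    Those defaults are easy to misremember, so here is a tiny illustrative predicate that simply restates the rules above in executable form. This is not an Azure API: the types are invented for the example, and real ACL rules match on CIDR subnets rather than the exact-string comparison used here.

        // Illustrative only: restates the documented ACL default behaviors.
        using System.Collections.Generic;
        using System.Linq;

        enum RuleAction { Permit, Deny }

        class AclRule
        {
            public RuleAction Action;
            public string RemoteSubnet;   // real rules use CIDR ranges; simplified here
        }

        static class AclDefaults
        {
            // Returns true if traffic from 'subnet' may reach the endpoint.
            public static bool IsAllowed(IList<AclRule> rules, string subnet)
            {
                // An explicit match wins (rules form an ordered list; first match applies).
                var match = rules.FirstOrDefault(r => r.RemoteSubnet == subnet);
                if (match != null)
                    return match.Action == RuleAction.Permit;

                // Otherwise the documented defaults kick in:
                if (rules.Count == 0)
                    return true;    // no rules at all => all traffic permitted
                if (rules.All(r => r.Action == RuleAction.Deny))
                    return true;    // only Deny rules => all other traffic permitted
                return false;       // any Permit rule present => all other traffic denied
            }
        }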
    Web Sites: Web Sockets Support

    With today’s release you can now use Web Sockets with Windows Azure Web Sites. This feature enables you to easily integrate real-time communication scenarios within your web-based applications, and is available at no extra charge (it even works with the free tier). Higher-level programming libraries like SignalR and socket.io are also now supported with it.

    You can enable Web Sockets support on a web site by navigating to the Configure tab of a Web Site and toggling Web Sockets support to “on”. Once Web Sockets is enabled you can start to integrate some really cool scenarios into your web applications. Check out the new SignalR documentation hub on www.asp.net to learn more about some of the awesome scenarios you can do with it.
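    As a taste of what this enables, here is the canonical SignalR 2 chat hub: a few lines of server code that push a message to every connected browser over WebSockets (falling back to other transports where WebSockets isn’t available). The hub and method names are the usual sample ones, not anything Azure-specific.

        // A minimal SignalR hub: clients call Send, and the server broadcasts
        // the message to every connected client in real time.
        using Microsoft.AspNet.SignalR;

        public class ChatHub : Hub
        {
            public void Send(string name, string message)
            {
                // broadcastMessage is a client-side callback registered in JavaScript.
                Clients.All.broadcastMessage(name, message);
            }
        }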
    Web Sites: Remote Debugging Support

    The Windows Azure SDK 2.2 we released two weeks ago introduced remote debugging support for Windows Azure Cloud Services. With today’s Windows Azure release we are extending this remote debugging support to also work with Windows Azure Web Sites. With live, remote debugging support inside of Visual Studio, you are able to have more visibility than ever before into how your code is operating live in Windows Azure. It is now super easy to attach the debugger and quickly see what is going on with your application in the cloud.

    Remote Debugging of a Windows Azure Web Site using VS 2013

    Enabling the remote debugging of a Windows Azure Web Site using VS 2013 is really easy. Start by opening up your web application’s project within Visual Studio. Then navigate to the “Server Explorer” tab within Visual Studio, and click on the deployed web site you want to debug that is running within Windows Azure, using the Windows Azure->Web Sites node in the Server Explorer. Then right-click and choose the “Attach Debugger” option on it. When you do this, Visual Studio will remotely attach the debugger to the Web Site running within Windows Azure.

    The debugger will then stop the web site’s execution when it hits any breakpoints that you have set within your web application’s project inside Visual Studio. For example, below I set a breakpoint on the “ViewBag.Message” assignment statement within the HomeController of the standard ASP.NET MVC project template. When I hit refresh on the “About” page of the web site within the browser, the breakpoint was triggered and I am now able to debug the app remotely using Visual Studio.

    Note above how we can debug variables (including autos, watch list, etc.), as well as use the Immediate and Command windows. In the debug session above I used the Immediate window to explore some of the request object state, as well as to dynamically change the ViewBag.Message property. When we click the “Continue” button (or press F5) the app will continue execution and the Web Site will render the content back to the browser. This makes it super easy to debug web apps remotely.

    Tips for Better Debugging

    To get the best experience while debugging, we recommend publishing your site using the Debug configuration within Visual Studio’s Web Publish dialog. This will ensure that debug symbol information is uploaded to the Web Site, which will enable a richer debug experience within Visual Studio. You can find this option on the Web Publish dialog on the Settings tab.

    When you ultimately deploy/run the application in production we recommend using the “Release” configuration setting – the release configuration is memory optimized and will provide the best production performance. To learn more about diagnosing and debugging Windows Azure Web Sites, read our new Troubleshooting Windows Azure Web Sites in Visual Studio guide.
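    For reference, the breakpoint in that walkthrough sits on a line like the one below, in the stock About action from the ASP.NET MVC project template (the exact message string varies slightly between template versions):

        using System.Web.Mvc;

        public class HomeController : Controller
        {
            public ActionResult About()
            {
                ViewBag.Message = "Your application description page.";   // <-- breakpoint here
                return View();
            }
        }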
    Notification Hubs: Segmented Push Notification support with tag expressions

    In August we announced the General Availability of Windows Azure Notification Hubs – a powerful mobile push notifications service that makes it easy to send high-volume push notifications with low latency from any mobile app back-end. Notification Hubs can be used with any mobile app back-end (including ones built using our Mobile Services capability) and can also be used with back-ends that run in the cloud as well as on-premises.

    Beginning with the initial release, Notification Hubs allowed developers to send personalized push notifications to both individual users and groups of users by interest, by associating their devices with tags representing the logical target of the notification. For example, by registering all devices of customers interested in a favorite MLB team with a corresponding tag, it is possible to broadcast one message to millions of Boston Red Sox fans and another message to millions of St. Louis Cardinals fans, each with a single API call.

    New support for using tag expressions to enable advanced customer segmentation

    With today’s release we are adding support for even more advanced customer targeting. You can now identify customers that you want to send push notifications to by defining rich tag expressions. With tag expressions, you can not only broadcast notifications to Boston Red Sox fans, but take that segmenting a step further and reach more granular segments. This opens up a variety of scenarios, for example:

    - Offers based on multiple preferences – e.g. send a game day vegetarian special to users tagged as both a Boston Red Sox fan AND a vegetarian.
    - Push content to multiple segments in a single message – e.g. rain delay information only to users who are tagged as either a Boston Red Sox fan OR a St. Louis Cardinals fan.
    - Avoid presenting subsets of a segment with irrelevant content – e.g. a season ticket availability reminder to users who are tagged as a Boston Red Sox fan but NOT also a season ticket holder.

    To illustrate with code, consider a restaurant chain app that sends an offer related to a Red Sox vs Cardinals game for users in Boston. Devices can be tagged by your app with location tags (e.g. “Loc:Boston”) and interest tags (e.g. “Follows:RedSox”, “Follows:Cardinals”), and then a notification can be sent by your back-end to “(Follows:RedSox || Follows:Cardinals) && Loc:Boston” in order to deliver an offer to all devices in Boston that follow either the Red Sox or the Cardinals. This can be done directly in your server back-end send logic using the code below:

        var notification = new WindowsNotification(messagePayload);
        await hub.SendNotificationAsync(notification, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston");

    In your expressions you can use all Boolean operators: AND (&&), OR (||), and NOT (!). Some other cool use cases for tag expressions that are now supported include:

    - Social: to “all my group except me” – group:id && !user:id
    - Events: a touchdown event is sent to everybody following either team or any of the players involved in the action – Followteam:A || Followteam:B || followplayer:1 || followplayer:2 …
    - Hours: send notifications at specific times, e.g. tag devices with their time zone and, when it is 12pm in Seattle, send to GMT8 && follows:thaifood
    - Versions and platforms: send a reminder to people still using your first version for Android – version:1.0 && platform:Android

    For help on getting started with Notification Hubs, visit the Notification Hub documentation center. Then download the latest NuGet package (or use the Notification Hubs REST APIs directly) to start sending push notifications using tag expressions. They are really powerful and enable a bunch of great new scenarios.
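    The tag half of this story happens at registration time. Below is a minimal back-end sketch using the Notification Hubs client from the Service Bus NuGet package; the connection string, hub name, tags, and payload are placeholders:

        // Sketch: registering a Windows Store device channel with tags, then
        // targeting a tag expression from the back-end.
        using System.Threading.Tasks;
        using Microsoft.ServiceBus.Notifications;

        class TagRegistrationSketch
        {
            static async Task RunAsync(string channelUri)
            {
                var hub = NotificationHubClient.CreateClientFromConnectionString(
                    "<notification-hub-connection-string>", "<hub-name>");

                // Associate the device with the tags used by the expressions above.
                await hub.CreateWindowsNativeRegistrationAsync(
                    channelUri, new[] { "Loc:Boston", "Follows:RedSox" });

                // Later, target any Boolean combination of those tags.
                await hub.SendWindowsNativeNotificationAsync(
                    "<toast-xml-payload>",
                    "(Follows:RedSox || Follows:Cardinals) && Loc:Boston");
            }
        }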
    TFS & Git: Continuous Delivery Support for Web Sites + Cloud Services

    With today’s Windows Azure release we are making it really easy to enable continuous delivery support with Windows Azure and Team Foundation Services. Team Foundation Services is a cloud-based offering from Microsoft that provides integrated source control (with both TFS and Git support), build server, test execution, collaboration tools, and agile planning support. It makes it really easy to set up a team project (complete with automated builds and test runners) in the cloud, and it has really rich integration with Visual Studio.

    With today’s Windows Azure release it is now really easy to enable continuous delivery support with both TFS and Git based repositories hosted using Team Foundation Services. This enables a workflow where, when code is checked in, built successfully on an automated build server, and all tests pass on it, the app is automatically deployed to Windows Azure with zero manual intervention or work required. The screen-shots below demonstrate how to quickly set up a continuous delivery workflow to Windows Azure with a Git-based ASP.NET MVC project hosted using Team Foundation Services.

    Enabling Continuous Delivery to Windows Azure with Team Foundation Services

    The project I’m going to enable continuous delivery with is a simple ASP.NET MVC project whose source code I’m hosting using Team Foundation Services. I did this by creating a “SimpleContinuousDeploymentTest” repository there using Git, and then used the new built-in Git tooling support within Visual Studio 2013 to push the source code to it. I can access the repository within Visual Studio 2013 and easily make commits with it (as well as branch, merge, and do other tasks). Using VS 2013 I can also set up automated builds to take place in the cloud using Team Foundation Services every time someone checks in code to the repository.

    The cool thing about this is that I don’t have to buy or rent my own build server – Team Foundation Services automatically maintains its own build server farm and can automatically queue up a build for me (for free) every time someone checks in code using the above settings. This build server (and automated testing) support now works with both TFS and Git based source control repositories.

    Connecting a Team Foundation Services project to Windows Azure

    Once I have a source repository hosted in Team Foundation Services with automated builds and testing set up, I can then go even further and set it up so that it will be automatically deployed to Windows Azure when a source code commit is made to the repository (assuming the build + tests pass). Enabling this is now really easy.

    To set this up with a Windows Azure Web Site, simply use the New->Compute->Web Site->Custom Create command inside the Windows Azure Management Portal. This will open a dialog like the one below. I gave the web site a name and then made sure the “Publish from source control” checkbox was selected. When we click next we’ll be prompted for the location of the source repository; we’ll select “Team Foundation Services”. Once we do this we’ll be prompted for the Team Foundation Services account that our source repository is hosted under (in this case my TFS account is “scottguthrie”). When we click the “Authorize Now” button we’ll be prompted to give Windows Azure permission to connect to the Team Foundation Services account. Once we do this we’ll be prompted to pick the source repository we want to connect to. Starting with today’s Windows Azure release you can now connect to both TFS and Git based source repositories. This new support allows me to connect to the “SimpleContinuousDeploymentTest” repository we created earlier.

    Clicking the finish button will then create the Web Site with the continuous delivery hooks set up with Team Foundation Services. Now every time someone pushes source control to the repository in Team Foundation Services, it will kick off an automated build, run all of the unit tests in the solution, and, if they pass, the app will be automatically deployed to our Web Site in Windows Azure. You can monitor the history and status of these automated deployments using the Deployments tab within the Web Site. This enables a really slick continuous delivery workflow, and enables you to build and deploy apps in a really nice way.
    Developer Analytics: New Relic support for Web Sites + Mobile Services

    With today’s Windows Azure release we are making it really easy to enable developer analytics and monitoring support with both Windows Azure Web Sites and Windows Azure Mobile Services. We are partnering with New Relic, who provide a great dev analytics and app performance monitoring offering, to enable this – and we have updated the Windows Azure Management Portal to make it really easy to configure.

    Enabling New Relic with a Windows Azure Web Site

    Enabling New Relic support with a Windows Azure Web Site is now really easy. Simply navigate to the Configure tab of a Web Site and scroll down to the “developer analytics” section that is now within it. Clicking the “add-on” button will display some additional UI. If you don’t already have a New Relic subscription, you can click the “view windows azure store” button to obtain one (note: New Relic has a perpetually free tier, so you can enable it without paying anything). Clicking the “view windows azure store” button will launch the integrated Windows Azure Store experience we have within the Windows Azure Management Portal. You can use this to browse a variety of great add-on services, including New Relic.

    Select “New Relic” within the dialog, then click the next button, and you’ll be able to choose which type of New Relic subscription you wish to purchase. For this demo we’ll simply select the “Free Standard Version”, which does not cost anything and can be used forever. Once we’ve signed up for our New Relic subscription and added it to our Windows Azure account, we can go back to the Web Site’s Configure tab and choose to use the New Relic add-on with our Windows Azure Web Site. We can do this by simply selecting it from the “add-on” dropdown (it is automatically populated within it once we have a New Relic subscription in our account). Clicking the “Save” button will then cause the Windows Azure Management Portal to automatically populate all of the needed New Relic configuration settings for our Web Site.

    Deploying the New Relic Agent as part of a Web Site

    The final step to enable developer analytics using New Relic is to add the New Relic runtime agent to our web app. We can do this within Visual Studio by right-clicking on our web project and selecting the “Manage NuGet Packages” context menu. This will bring up the NuGet package manager. You can search for “New Relic” within it to find the New Relic agent. Note that there is both a 32-bit and a 64-bit edition of it – make sure to install the version that matches how your Web Site is running within Windows Azure (note: you can configure your Web Site to run in either 32-bit or 64-bit mode using the Web Site’s “Configuration” tab within the Windows Azure Management Portal). Once we install the NuGet package we are all set to go. We’ll simply re-publish the web site to Windows Azure, and New Relic will automatically start monitoring the application.

    Monitoring a Web Site using New Relic

    Now that the application has developer analytics support with New Relic enabled, we can launch the New Relic monitoring portal to start monitoring its health. We can do this by clicking on the “Add Ons” tab in the left-hand side of the Windows Azure Management Portal, then selecting the New Relic add-on we signed up for within it. The Windows Azure Management Portal will provide some default information about the add-on when we do this. Clicking the “Manage” button in the tray at the bottom will launch a new browser tab and single-sign us into the New Relic monitoring portal associated with our account. We can now see insights into how our app is performing – without having written a single line of monitoring code. The New Relic service provides a ton of great built-in monitoring features, allowing us to quickly see:

    - Performance times (including browser rendering speed) for the overall site and individual pages. You can optionally set alert thresholds to trigger if the speed does not meet a threshold you specify.
    - Information about where in the world your customers are hitting the site from (and how performance varies by region).
    - Details on the latency performance of external services your web apps are using (for example: SQL, Storage, Twitter, etc.).
    - Error information, including call stack details for exceptions that have occurred at runtime.
    - SQL Server profiling information, including which queries executed against your database and what their performance was.
    - And a whole bunch more…

    The cool thing about New Relic is that you don’t need to write monitoring code within your application to get all of the above reports (plus a lot more). The New Relic agent automatically enables the CLR profiler within applications and automatically captures the information necessary to produce them. This makes it super easy to get started and immediately have a rich developer analytics view for your solutions with very little effort.

    If you haven’t tried New Relic out yet with Windows Azure, I recommend you do so – I think you’ll find it helps you build even better cloud applications. Following the above steps will help you get started and deliver a really good application monitoring solution in only minutes.

    Service Bus: Support for partitioned queues and topics

    With today’s release, we are enabling support within Service Bus for partitioned queues and topics. Enabling partitioning allows you to achieve higher message throughput and better availability from your queues and topics. Higher message throughput is achieved by implementing multiple message brokers for each partitioned queue and topic; the multiple messaging stores also provide higher availability. You can create a partitioned queue or topic by simply checking the Enable Partitioning option in the custom create wizard for a queue or topic. Read this article to learn more about partitioned queues and topics and how to take advantage of them today.
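    Creating a partitioned entity from code is a one-property change. Here is a short sketch with the Service Bus .NET SDK (the connection string and queue name are placeholders):

        // Sketch: creating a partitioned Service Bus queue from .NET.
        using Microsoft.ServiceBus;
        using Microsoft.ServiceBus.Messaging;

        class PartitionedQueueSketch
        {
            static void Main()
            {
                var ns = NamespaceManager.CreateFromConnectionString(
                    "<service-bus-connection-string>");

                var description = new QueueDescription("ordersqueue")
                {
                    // Spreads the queue across multiple brokers/message stores,
                    // which is what yields the throughput and availability gains.
                    EnablePartitioning = true
                };

                if (!ns.QueueExists(description.Path))
                    ns.CreateQueue(description);
            }
        }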
    Billing: New Billing Alert Service

    Today’s Windows Azure update introduces a new Billing Alert Service Preview that sends you proactive email notifications when your Windows Azure bill goes above a monetary threshold that you configure. This makes it easier to manage your bill and avoid potential surprises at the end of the month.

    With the Billing Alert Service Preview, you can now create email alerts to monitor and manage your monetary credits or your current bill total. To set up an alert, first sign up for the free Billing Alert Service Preview. Then visit the account management page, click on a subscription you have set up, and navigate to the new Alerts tab that is available. The Alerts tab allows you to set up email alerts that will be sent automatically once a certain threshold is hit. For example, by clicking the “add alert” button I can set up a rule to send myself email any time my Windows Azure bill goes above $100 for the month.

    The Billing Alert Service will evolve to support additional aspects of your bill, as well as multiple forms of alerts such as SMS. Try out the new Billing Alert Service Preview today and give us feedback.

    Summary

    Today’s Windows Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier. If you don’t already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu


  • How to deploy the advanced search page using Module in SharePoint 2013

    - by ybbest
    Today, I’d like to show you how to deploy your custom advanced search page using a module in Visual Studio 2012. A module is also how SharePoint itself deploys the publishing pages to the search centre. Browse to the TEMPLATE folder under the 15 hive of SharePoint 2013, then go to SearchCenterFiles under FEATURES (as shown below); the feature is located at C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\15\TEMPLATE\FEATURES\SearchCenterFiles. Open Files.xml and it shows how SharePoint uses a module to deploy the advanced search page. You can download the solution here.

    Now I am going to show you how to deploy your custom advanced search page. To deploy SharePoint advanced search pages, you need to do the following:

    1. Create a SharePoint 2013 project and then create a module item.

    2. Find how out-of-the-box SharePoint deploys the advanced search page in Files.xml, and copy that File element into the module’s Elements.xml (the enclosing Elements/Module wrapper shown here is the one generated by the module item; the Module name and Url are illustrative):

    <Elements xmlns="http://schemas.microsoft.com/sharepoint/">
      <Module Name="AdvancedSearchPage" Url="Pages">
        <File Url="advanced.aspx" Type="GhostableInLibrary">
          <Property Name="PublishingPageLayout" Value="~SiteCollection/_catalogs/masterpage/AdvancedSearchLayout.aspx, $Resources:Microsoft.Office.Server.Search,SearchCenterAdvancedSearchTitle;" />
          <Property Name="Title" Value="$Resources:Microsoft.Office.Server.Search,Search_Advanced_Page_Title;" />
          <Property Name="ContentType" Value="$Resources:Microsoft.Office.Server.Search,contenttype_welcomepage_name;" />
          <AllUsersWebPart WebPartZoneID="MainZone" WebPartOrder="1">
            <![CDATA[
              <WebPart xmlns="http://schemas.microsoft.com/WebPart/v2">
                <Assembly>Microsoft.Office.Server.Search, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c</Assembly>
                <TypeName>Microsoft.Office.Server.Search.WebControls.AdvancedSearchBox</TypeName>
                <Title>$Resources:Microsoft.Office.Server.Search,AdvancedSearch_Webpart_Title;</Title>
                <Description>$Resources:Microsoft.Office.Server.Search,AdvancedSearch_Webpart_Description;</Description>
                <FrameType>None</FrameType>
                <AllowMinimize>true</AllowMinimize>
                <AllowRemove>true</AllowRemove>
                <IsVisible>true</IsVisible>
                <SearchResultPageURL xmlns="urn:schemas-microsoft-com:AdvancedSearchBox">results.aspx</SearchResultPageURL>
                <TextQuerySectionLabelText xmlns="urn:schemas-microsoft-com:AdvancedSearchBox">$Resources:Microsoft.Office.Server.Search,AdvancedSearch_FindDocsWith_Title;</TextQuerySectionLabelText>
                <ShowAndQueryTextBox xmlns="urn:schemas-microsoft-com:AdvancedSearchBox">true</ShowAndQueryTextBox>
                <ShowPhraseQueryTextBox xmlns="urn:schemas-microsoft-com:AdvancedSearchBox">true</ShowPhraseQueryTextBox>
                <ShowOrQueryTextBox xmlns="urn:schemas-microsoft-com:AdvancedSearchBox">true</ShowOrQueryTextBox>
                <ShowNotQueryTextBox xmlns="urn:schemas-microsoft-com:AdvancedSearchBox">true</ShowNotQueryTextBox>
                <ScopeSectionLabelText xmlns="urn:schemas-microsoft-com:AdvancedSearchBox">$Resources:Microsoft.Office.Server.Search,AdvancedSearch_NarrowSearch_Title;</ScopeSectionLabelText>
                <ShowLanguageOptions xmlns="urn:schemas-microsoft-com:AdvancedSearchBox">true</ShowLanguageOptions>
                <ShowResultTypePicker xmlns="urn:schemas-microsoft-com:AdvancedSearchBox">true</ShowResultTypePicker>
                <ShowPropertiesSection xmlns="urn:schemas-microsoft-com:AdvancedSearchBox">true</ShowPropertiesSection>
                <PropertiesSectionLabelText xmlns="urn:schemas-microsoft-com:AdvancedSearchBox">$Resources:Microsoft.Office.Server.Search,AdvancedSearch_AddPropRestrictions_Title;</PropertiesSectionLabelText>
              </WebPart>
            ]]>
          </AllUsersWebPart>
        </File>
      </Module>
    </Elements>

    3.
Customize your SharePoint advanced Search Page by modifying the Advanced Search Box and Export the webpart and copy the webpart file to the elements under module. 4. Export the web part and copy the content of the web part file to the elements.xml in the module. <File Path="AdvancedSearchPage\advanced.aspx" Url="employeeAdvanced.aspx" Type="GhostableInLibrary"> <Property Name="PublishingPageLayout" Value="~SiteCollection/_catalogs/masterpage/AdvancedSearchLayout.aspx, $Resources:Microsoft.Office.Server.Search,SearchCenterAdvancedSearchTitle;" /> <Property Name="Title" Value="$Resources:Microsoft.Office.Server.Search,Search_Advanced_Page_Title;" /> <Property Name="ContentType" Value="$Resources:Microsoft.Office.Server.Search,contenttype_welcomepage_name;" /> <AllUsersWebPart WebPartZoneID="MainZone" WebPartOrder="1"> <![CDATA[ <WebPart xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.microsoft.com/WebPart/v2"> <Title>Advanced Search Box</Title> <FrameType>None</FrameType> <Description>Displays parameterized search options based on properties and combinations of words.</Description> <IsIncluded>true</IsIncluded> <ZoneID>MainZone</ZoneID> <PartOrder>1</PartOrder> <FrameState>Normal</FrameState> <Height /> <Width /> <AllowRemove>true</AllowRemove> <AllowZoneChange>true</AllowZoneChange> <AllowMinimize>true</AllowMinimize> <AllowConnect>true</AllowConnect> <AllowEdit>true</AllowEdit> <AllowHide>true</AllowHide> <IsVisible>true</IsVisible> <DetailLink /> <HelpLink /> <HelpMode>Modeless</HelpMode> <Dir>Default</Dir> <PartImageSmall /> <MissingAssembly>Cannot import this Web Part.</MissingAssembly> <PartImageLarge /> <IsIncludedFilter /> <Assembly>Microsoft.Office.Server.Search, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c</Assembly> <TypeName>Microsoft.Office.Server.Search.WebControls.AdvancedSearchBox</TypeName> <SearchResultPageURL xmlns="urn:schemas-microsoft-com:AdvancedSearchBox">results.aspx</SearchResultPageURL> <TextQuerySectionLabelText xmlns="urn:schemas-microsoft-com:AdvancedSearchBox">Find documents that have...</TextQuerySectionLabelText> <ShowAndQueryTextBox xmlns="urn:schemas-microsoft-com:AdvancedSearchBox">true</ShowAndQueryTextBox> <AndQueryTextBoxLabelText xmlns="urn:schemas-microsoft-com:AdvancedSearchBox" /> <ShowPhraseQueryTextBox xmlns="urn:schemas-microsoft-com:AdvancedSearchBox">true</ShowPhraseQueryTextBox> <PhraseQueryTextBoxLabelText xmlns="urn:schemas-microsoft-com:AdvancedSearchBox" /> <ShowOrQueryTextBox xmlns="urn:schemas-microsoft-com:AdvancedSearchBox">true</ShowOrQueryTextBox> <OrQueryTextBoxLabelText xmlns="urn:schemas-microsoft-com:AdvancedSearchBox" /> <ShowNotQueryTextBox xmlns="urn:schemas-microsoft-com:AdvancedSearchBox">true</ShowNotQueryTextBox> <NotQueryTextBoxLabelText xmlns="urn:schemas-microsoft-com:AdvancedSearchBox" /> <ScopeSectionLabelText xmlns="urn:schemas-microsoft-com:AdvancedSearchBox">Narrow the search...</ScopeSectionLabelText> <ShowScopes xmlns="urn:schemas-microsoft-com:AdvancedSearchBox">false</ShowScopes> <ScopeLabelText xmlns="urn:schemas-microsoft-com:AdvancedSearchBox" /> <DisplayGroup xmlns="urn:schemas-microsoft-com:AdvancedSearchBox">Advanced Search</DisplayGroup> <ShowLanguageOptions xmlns="urn:schemas-microsoft-com:AdvancedSearchBox">false</ShowLanguageOptions> <LanguagesLabelText xmlns="urn:schemas-microsoft-com:AdvancedSearchBox" /> <ShowResultTypePicker xmlns="urn:schemas-microsoft-com:AdvancedSearchBox">true</ShowResultTypePicker> 
<ResultTypeLabelText xmlns="urn:schemas-microsoft-com:AdvancedSearchBox" /> <ShowPropertiesSection xmlns="urn:schemas-microsoft-com:AdvancedSearchBox">true</ShowPropertiesSection> <PropertiesSectionLabelText xmlns="urn:schemas-microsoft-com:AdvancedSearchBox">Add property restrictions...</PropertiesSectionLabelText> <Properties xmlns="urn:schemas-microsoft-com:AdvancedSearchBox">&lt;root xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"&gt;  &lt;LangDefs&gt;    &lt;LangDef DisplayName="Arabic" LangID="ar"/&gt;    &lt;LangDef DisplayName="Bengali" LangID="bn"/&gt;    &lt;LangDef DisplayName="Bulgarian" LangID="bg"/&gt;    &lt;LangDef DisplayName="Catalan" LangID="ca"/&gt;    &lt;LangDef DisplayName="Simplified Chinese" LangID="zh-cn"/&gt;    &lt;LangDef DisplayName="Traditional Chinese" LangID="zh-tw"/&gt;    &lt;LangDef DisplayName="Croatian" LangID="hr"/&gt;    &lt;LangDef DisplayName="Czech" LangID="cs"/&gt;    &lt;LangDef DisplayName="Danish" LangID="da"/&gt;    &lt;LangDef DisplayName="Dutch" LangID="nl"/&gt;    &lt;LangDef DisplayName="English" LangID="en"/&gt;    &lt;LangDef DisplayName="Finnish" LangID="fi"/&gt;    &lt;LangDef DisplayName="French" LangID="fr"/&gt;    &lt;LangDef DisplayName="German" LangID="de"/&gt;    &lt;LangDef DisplayName="Greek" LangID="el"/&gt;    &lt;LangDef DisplayName="Gujarati" LangID="gu"/&gt;    &lt;LangDef DisplayName="Hebrew" LangID="he"/&gt;    &lt;LangDef DisplayName="Hindi" LangID="hi"/&gt;    &lt;LangDef DisplayName="Hungarian" LangID="hu"/&gt;    &lt;LangDef DisplayName="Icelandic" LangID="is"/&gt;    &lt;LangDef DisplayName="Indonesian" LangID="id"/&gt;    &lt;LangDef DisplayName="Italian" LangID="it"/&gt;    &lt;LangDef DisplayName="Japanese" LangID="ja"/&gt;    &lt;LangDef DisplayName="Kannada" LangID="kn"/&gt;    &lt;LangDef DisplayName="Korean" LangID="ko"/&gt;    &lt;LangDef DisplayName="Latvian" LangID="lv"/&gt;    &lt;LangDef DisplayName="Lithuanian" LangID="lt"/&gt;    &lt;LangDef DisplayName="Malay" LangID="ms"/&gt;    &lt;LangDef DisplayName="Malayalam" LangID="ml"/&gt;    &lt;LangDef DisplayName="Marathi" LangID="mr"/&gt;    &lt;LangDef DisplayName="Norwegian" LangID="no"/&gt;    &lt;LangDef DisplayName="Polish" LangID="pl"/&gt;    &lt;LangDef DisplayName="Portuguese" LangID="pt"/&gt;    &lt;LangDef DisplayName="Punjabi" LangID="pa"/&gt;    &lt;LangDef DisplayName="Romanian" LangID="ro"/&gt;    &lt;LangDef DisplayName="Russian" LangID="ru"/&gt;    &lt;LangDef DisplayName="Slovak" LangID="sk"/&gt;    &lt;LangDef DisplayName="Slovenian" LangID="sl"/&gt;    &lt;LangDef DisplayName="Spanish" LangID="es"/&gt;    &lt;LangDef DisplayName="Swedish" LangID="sv"/&gt;    &lt;LangDef DisplayName="Tamil" LangID="ta"/&gt;    &lt;LangDef DisplayName="Telugu" LangID="te"/&gt;    &lt;LangDef DisplayName="Thai" LangID="th"/&gt;    &lt;LangDef DisplayName="Turkish" LangID="tr"/&gt;    &lt;LangDef DisplayName="Ukrainian" LangID="uk"/&gt;    &lt;LangDef DisplayName="Urdu" LangID="ur"/&gt;    &lt;LangDef DisplayName="Vietnamese" LangID="vi"/&gt;  &lt;/LangDefs&gt;  &lt;Languages&gt;    &lt;Language LangRef="en"/&gt;    &lt;Language LangRef="fr"/&gt;    &lt;Language LangRef="de"/&gt;    &lt;Language LangRef="ja"/&gt;    &lt;Language LangRef="zh-cn"/&gt;    &lt;Language LangRef="es"/&gt;    &lt;Language LangRef="zh-tw"/&gt;  &lt;/Languages&gt;  &lt;PropertyDefs&gt;    &lt;PropertyDef Name="Path" DataType="url" DisplayName="URL"/&gt;    &lt;PropertyDef Name="Size" DataType="integer" DisplayName="Size (bytes)"/&gt;    &lt;PropertyDef Name="Write" 
DataType="datetime" DisplayName="Last Modified Date"/&gt;    &lt;PropertyDef Name="FileName" DataType="text" DisplayName="Name"/&gt;    &lt;PropertyDef Name="Description" DataType="text" DisplayName="Description"/&gt;    &lt;PropertyDef Name="Title" DataType="text" DisplayName="Title"/&gt;    &lt;PropertyDef Name="Author" DataType="text" DisplayName="Author"/&gt;    &lt;PropertyDef Name="DocSubject" DataType="text" DisplayName="Subject"/&gt;    &lt;PropertyDef Name="DocKeywords" DataType="text" DisplayName="Keywords"/&gt;    &lt;PropertyDef Name="DocComments" DataType="text" DisplayName="Comments"/&gt;    &lt;PropertyDef Name="CreatedBy" DataType="text" DisplayName="Created By"/&gt;    &lt;PropertyDef Name="ModifiedBy" DataType="text" DisplayName="Last Modified By"/&gt;    &lt;PropertyDef Name="EmployeeNumber" DataType="text" DisplayName="EmployeeNumber"/&gt;    &lt;PropertyDef Name="EmployeeId" DataType="text" DisplayName="EmployeeId"/&gt;    &lt;PropertyDef Name="EmployeeFirstName" DataType="text" DisplayName="EmployeeFirstName"/&gt;    &lt;PropertyDef Name="EmployeeLastName" DataType="text" DisplayName="EmployeeLastName"/&gt;  &lt;/PropertyDefs&gt;  &lt;ResultTypes&gt;    &lt;ResultType DisplayName="Employee Document" Name="default"&gt;      &lt;KeywordQuery/&gt;      &lt;PropertyRef Name="EmployeeNumber" /&gt;      &lt;PropertyRef Name="EmployeeId" /&gt;      &lt;PropertyRef Name="EmployeeFirstName" /&gt;      &lt;PropertyRef Name="EmployeeLastName" /&gt;    &lt;/ResultType&gt;    &lt;ResultType DisplayName="All Results"&gt;      &lt;KeywordQuery/&gt;      &lt;PropertyRef Name="Author" /&gt;      &lt;PropertyRef Name="Description" /&gt;      &lt;PropertyRef Name="FileName" /&gt;      &lt;PropertyRef Name="Size" /&gt;      &lt;PropertyRef Name="Path" /&gt;      &lt;PropertyRef Name="Write" /&gt;      &lt;PropertyRef Name="CreatedBy" /&gt;      &lt;PropertyRef Name="ModifiedBy" /&gt;    &lt;/ResultType&gt;    &lt;ResultType DisplayName="Documents" Name="documents"&gt;      &lt;KeywordQuery&gt;IsDocument="True"&lt;/KeywordQuery&gt;      &lt;PropertyRef Name="Author" /&gt;      &lt;PropertyRef Name="DocComments"/&gt;      &lt;PropertyRef Name="Description" /&gt;      &lt;PropertyRef Name="DocKeywords"/&gt;      &lt;PropertyRef Name="FileName" /&gt;      &lt;PropertyRef Name="Size" /&gt;      &lt;PropertyRef Name="DocSubject"/&gt;      &lt;PropertyRef Name="Path" /&gt;      &lt;PropertyRef Name="Write" /&gt;      &lt;PropertyRef Name="CreatedBy" /&gt;      &lt;PropertyRef Name="ModifiedBy" /&gt;      &lt;PropertyRef Name="Title"/&gt;    &lt;/ResultType&gt;    &lt;ResultType DisplayName="Word Documents" Name="worddocuments"&gt;      &lt;KeywordQuery&gt;FileExtension="doc" OR FileExtension="docx" OR FileExtension="dot" OR FileExtension="docm" OR FileExtension="odt"&lt;/KeywordQuery&gt;      &lt;PropertyRef Name="Author" /&gt;      &lt;PropertyRef Name="DocComments"/&gt;      &lt;PropertyRef Name="Description" /&gt;      &lt;PropertyRef Name="DocKeywords"/&gt;      &lt;PropertyRef Name="FileName" /&gt;      &lt;PropertyRef Name="Size" /&gt;      &lt;PropertyRef Name="DocSubject"/&gt;      &lt;PropertyRef Name="Path" /&gt;      &lt;PropertyRef Name="Write" /&gt;      &lt;PropertyRef Name="CreatedBy" /&gt;      &lt;PropertyRef Name="ModifiedBy" /&gt;      &lt;PropertyRef Name="Title"/&gt;    &lt;/ResultType&gt;    &lt;ResultType DisplayName="Excel Documents" Name="exceldocuments"&gt;      &lt;KeywordQuery&gt;FileExtension="xls" OR FileExtension="xlsx" OR FileExtension="xlsm" OR 
FileExtension="xlsb" OR FileExtension="ods"&lt;/KeywordQuery&gt;      &lt;PropertyRef Name="Author" /&gt;      &lt;PropertyRef Name="DocComments"/&gt;      &lt;PropertyRef Name="Description" /&gt;      &lt;PropertyRef Name="DocKeywords"/&gt;      &lt;PropertyRef Name="FileName" /&gt;      &lt;PropertyRef Name="Size" /&gt;      &lt;PropertyRef Name="DocSubject"/&gt;      &lt;PropertyRef Name="Path" /&gt;      &lt;PropertyRef Name="Write" /&gt;      &lt;PropertyRef Name="CreatedBy" /&gt;      &lt;PropertyRef Name="ModifiedBy" /&gt;      &lt;PropertyRef Name="Title"/&gt;    &lt;/ResultType&gt;    &lt;ResultType DisplayName="PowerPoint Presentations" Name="presentations"&gt;      &lt;KeywordQuery&gt;FileExtension="ppt" OR FileExtension="pptx" OR FileExtension="pptm" OR FileExtension="odp"&lt;/KeywordQuery&gt;      &lt;PropertyRef Name="Author" /&gt;      &lt;PropertyRef Name="DocComments"/&gt;      &lt;PropertyRef Name="Description" /&gt;      &lt;PropertyRef Name="DocKeywords"/&gt;      &lt;PropertyRef Name="FileName" /&gt;      &lt;PropertyRef Name="Size" /&gt;      &lt;PropertyRef Name="DocSubject"/&gt;      &lt;PropertyRef Name="Path" /&gt;      &lt;PropertyRef Name="Write" /&gt;      &lt;PropertyRef Name="CreatedBy" /&gt;      &lt;PropertyRef Name="ModifiedBy" /&gt;      &lt;PropertyRef Name="Title"/&gt;    &lt;/ResultType&gt;  &lt;/ResultTypes&gt;&lt;/root&gt;</Properties> </WebPart> ]]> </AllUsersWebPart> </File> 5.Deploy your custom solution and you will have a custom advanced search page.


  • Some problems with GridView in webpart with multiple filters.

    - by NF_81
    Hello, I'm currently working on a highly configurable database viewer web part for WSS 3.0, which we are going to need for several customized SharePoint sites. Sorry in advance for the large wall of text, but I fear it's necessary to recap the whole issue.

    As background information, and to describe my problem as well as possible, I'll start by telling you what the web part shall do: basically, the web part contains an UpdatePanel, which contains a GridView and an SqlDataSource. The select query the data source uses can be set via web-browsable properties or received from a consumer method of another web part.

    Now I wanted to add a filtering feature to the web part, so I want a DropDownList in the header row for each column that should be filterable. As the select query is completely dynamic and I don't know at design time which columns shall be filterable, I decided to add a web-browsable property containing an XML-formed string with the filter information. So I added the following to OnRowCreated of the GridView:

        void gridView_RowCreated(object sender, GridViewRowEventArgs e)
        {
            if (e.Row.RowType == DataControlRowType.Header)
            {
                for (int i = 0; i < e.Row.Cells.Count; i++)
                {
                    if (e.Row.Cells[i].GetType() == typeof(DataControlFieldHeaderCell))
                    {
                        string headerText = ((DataControlFieldHeaderCell)e.Row.Cells[i]).ContainingField.HeaderText;

                        // add sorting functionality
                        if (_allowSorting && !String.IsNullOrEmpty(headerText))
                        {
                            Label l = new Label();
                            l.Text = headerText;
                            l.ForeColor = Color.Blue;
                            l.Font.Bold = true;
                            l.ID = "Header" + i;
                            l.Attributes["title"] = "Sort by " + headerText;
                            l.Attributes["onmouseover"] = "this.style.cursor = 'pointer'; this.style.color = 'red'";
                            l.Attributes["onmouseout"] = "this.style.color = 'blue'";
                            l.Attributes["onclick"] = "__doPostBack('" + panel.UniqueID + "','SortBy$" + headerText + "');";
                            e.Row.Cells[i].Controls.Add(l);
                        }

                        // check if this column shall be filterable
                        if (!String.IsNullOrEmpty(filterXmlData))
                        {
                            XmlNode columnNode = GetColumnNode(headerText);
                            if (columnNode != null)
                            {
                                string dataValueField = columnNode.Attributes["DataValueField"] == null ? "" : columnNode.Attributes["DataValueField"].Value;
                                string filterQuery = columnNode.Attributes["FilterQuery"] == null ? "" : columnNode.Attributes["FilterQuery"].Value;
                                if (!String.IsNullOrEmpty(dataValueField) && !String.IsNullOrEmpty(filterQuery))
                                {
                                    SqlDataSource ds = new SqlDataSource(_conStr, filterQuery);
                                    DropDownList cbx = new DropDownList();
                                    cbx.ID = "FilterCbx" + i;
                                    cbx.Attributes["onchange"] = "__doPostBack('" + panel.UniqueID + "','SelectionChange$" + headerText + "$' + this.options[this.selectedIndex].value);";
                                    cbx.Width = 150;
                                    cbx.DataValueField = dataValueField;
                                    cbx.DataSource = ds;
                                    cbx.DataBound += new EventHandler(cbx_DataBound);
                                    cbx.PreRender += new EventHandler(cbx_PreRender);
                                    cbx.DataBind();
                                    e.Row.Cells[i].Controls.Add(cbx);
                                }
                            }
                        }
                    }
                }
            }
        }

    GetColumnNode() checks the filter property for a node matching the current column, which contains the field the DropDownList should bind to and the query for filling in its items. In cbx_PreRender() I check ViewState and select an item in case of a postback. In cbx_DataBound() I just add tooltips to the list items, as the DropDownList has a fixed width. Previously, I used AutoPostBack and SelectedIndexChanged of the DDL to filter the grid, but to my disappointment it was not always fired.
    Now I check __EVENTTARGET and __EVENTARGUMENT in OnLoad and call a function when the postback was due to a selection change in a DDL:

        private void FilterSelectionChanged(string columnName, string selectedValue)
        {
            columnName = "[" + columnName + "]";
            if (selectedValue.IndexOf("--") < 0)   // a real value was picked (not "-- All --")
            {
                if (filter.ContainsKey(columnName))
                    filter[columnName] = "='" + selectedValue + "'";
                else
                    filter.Add(columnName, "='" + selectedValue + "'");
            }
            else
            {
                // "-- All --" selected: drop the filter for this column
                filter.Remove(columnName);
            }
            gridView.PageIndex = 0;
        }

    "filter" is a Hashtable which is stored in ViewState for persisting the filters (I got this from a sample somewhere on the web, I don't remember where). In OnPreRender of the web part, I call a function which reads the ViewState and applies the filter expression to the data source if there is one. I assume I had to place it here because, if there is another postback (e.g. for sorting), the filters are not applied any more.

        private void ApplyGridFilter()
        {
            string args = " ";
            int i = 0;
            foreach (object key in filter.Keys)
            {
                if (i == 0)
                    args = key.ToString() + filter[key].ToString();
                else
                    args += " AND " + key.ToString() + filter[key].ToString();
                i++;
            }
            dataSource.FilterExpression = args;
            ViewState.Add("FilterArgs", filter);
        }

        protected override void OnPreRender(EventArgs e)
        {
            EnsureChildControls();
            if (WebPartManager.DisplayMode.Name == "Edit")
            {
                errMsg = "Webpart in Edit mode...";
                return;
            }
            if (useWebPartConnection == true)   // get select-query from consumer webpart
            {
                if (provider != null)
                {
                    dataSource.SelectCommand = provider.strQuery;
                }
            }
            try
            {
                int currentPageIndex = gridView.PageIndex;
                if (!String.IsNullOrEmpty(m_SortExpression))
                {
                    gridView.Sort("[" + m_SortExpression + "]", m_SortDirection);
                }
                gridView.PageIndex = currentPageIndex;   // for some reason, the current pageindex resets after sorting
                ApplyGridFilter();
                gridView.DataBind();
            }
            catch (Exception ex)
            {
                Functions.ShowJavaScriptAlert(Page, ex.Message);
            }
            base.OnPreRender(e);
        }

    So I set the FilterExpression and then call DataBind(). I don't know if this is OK at this late stage... I don't have a lot of ASP.NET experience, after all. If anyone can suggest a better solution, please give me a hint.

    This all works great so far, except when I have two or more filters set to a combination that returns zero records. Bam... GridView gone, completely, without any possibility of changing the filters back. So I googled and found out that I have to subclass GridView in order to always show the header row. I found this solution and implemented it with some modifications. The header row gets displayed and I can change the filters even if the returned result contains no rows.

    But finally, to my current problem: when I have two or more filters set which return zero rows, and I change one filter back to something that should return rows, the GridView remains empty (although the pager is rendered). I have to completely refresh the page to reset the filters. When debugging, I can see in the overridden CreateChildControls of the grid that the base method indeed returns 0, but anyway... gridView.RowCount remains 0 after databinding. Does anyone have an idea what's going wrong here?

FileExtension="xlsb" OR FileExtension="ods"&lt;/KeywordQuery&gt;      &lt;PropertyRef Name="Author" /&gt;      &lt;PropertyRef Name="DocComments"/&gt;      &lt;PropertyRef Name="Description" /&gt;      &lt;PropertyRef Name="DocKeywords"/&gt;      &lt;PropertyRef Name="FileName" /&gt;      &lt;PropertyRef Name="Size" /&gt;      &lt;PropertyRef Name="DocSubject"/&gt;      &lt;PropertyRef Name="Path" /&gt;      &lt;PropertyRef Name="Write" /&gt;      &lt;PropertyRef Name="CreatedBy" /&gt;      &lt;PropertyRef Name="ModifiedBy" /&gt;      &lt;PropertyRef Name="Title"/&gt;    &lt;/ResultType&gt;    &lt;ResultType DisplayName="PowerPoint Presentations" Name="presentations"&gt;      &lt;KeywordQuery&gt;FileExtension="ppt" OR FileExtension="pptx" OR FileExtension="pptm" OR FileExtension="odp"&lt;/KeywordQuery&gt;      &lt;PropertyRef Name="Author" /&gt;      &lt;PropertyRef Name="DocComments"/&gt;      &lt;PropertyRef Name="Description" /&gt;      &lt;PropertyRef Name="DocKeywords"/&gt;      &lt;PropertyRef Name="FileName" /&gt;      &lt;PropertyRef Name="Size" /&gt;      &lt;PropertyRef Name="DocSubject"/&gt;      &lt;PropertyRef Name="Path" /&gt;      &lt;PropertyRef Name="Write" /&gt;      &lt;PropertyRef Name="CreatedBy" /&gt;      &lt;PropertyRef Name="ModifiedBy" /&gt;      &lt;PropertyRef Name="Title"/&gt;    &lt;/ResultType&gt;  &lt;/ResultTypes&gt;&lt;/root&gt;</Properties> </WebPart> ]]> </AllUsersWebPart> </File> 5.Deploy your custom solution and you will have a custom advanced search page.

    Read the article

  • Delphi - Proper way to page through data.

    - by Brad
    I have a string list (TStrings) that has a couple thousand items in it. I need to process them in groups of 100. I basically want to know what the best way to do the loop is in Delphi. I'm hitting a brick wall when I'm trying to figure it out. Thanks unit Unit2; interface uses Windows, Messages, SysUtils, Variants, Classes, Graphics, Controls, Forms, Dialogs, StdCtrls; type TForm2 = class(TForm) Memo1: TMemo; Memo2: TMemo; Button1: TButton; procedure Button1Click(Sender: TObject); private { Private declarations } public { Public declarations } end; var Form2: TForm2; implementation Uses math; {$R *.dfm} procedure TForm2.Button1Click(Sender: TObject); var I:Integer; pages:Integer; str:string; begin pages:= ceil(memo1.Lines.Count/100) ; memo2.Lines.add('Total Pages: '+inttostr(pages)); memo2.Lines.add('Total Items: '+inttostr(memo1.Lines.Count)); // Should just do in batches of 100 VS entire list for I := 0 to memo1.lines.Count - 1 do begin if str '' then str:= str+#10+ memo1.Lines.Strings[i] else str:= memo1.Lines.Strings[i]; end; //I need to stop here every 100 items, then process the items. memo2.Lines.Add(str); end; end. Example form object Form2: TForm2 Left = 0 Top = 0 Caption = 'Form2' ClientHeight = 245 ClientWidth = 527 Color = clBtnFace Font.Charset = DEFAULT_CHARSET Font.Color = clWindowText Font.Height = -11 Font.Name = 'Tahoma' Font.Style = [] OldCreateOrder = False PixelsPerInch = 96 TextHeight = 13 object Memo1: TMemo Left = 16 Top = 8 Width = 209 Height = 175 Lines.Strings = ( '4xlt columbia thunder storm jacket' '5 things about thunder storms' 'a thunder storm with a lot of thunder ' 'and lighting sccreensaver' 'a thunder storm with a lot of thunder ' 'and lighting screensaver with no nag ' 'screens' 'all about thunder storms' 'all about thunderstorms for kids' 'amazing tornado videos and ' 'thunderstorm videos' 'are thunder storms louder in ohio?' 
'bad thunder storms' 'bathing in thunder storm' 'best thunderstorm pictures' 'cartoon thunder storms' 'celtic thunder storm' 'central valley thunder storm' 'chicago thunderstorm pictures' 'cool thunderstorm pictures' 'current thunderstorm warnings' 'does thunder storms in december mean ' 'snow will be coming' 'facts about thunderstorms for kids' 'facts on thunderstorms for kids' 'fedex thunderstorm video' 'florida thunderstorms facts' 'free relaxing thunderstorm music' 'free soothing thunderstorm sounds ' 'online' 'free thunderstorm mp3' 'free thunderstorm mp3 download' 'free thunderstorm mp3 downloads' 'free thunderstorm mp3s' 'free thunderstorm music' 'free thunderstorm pictures' 'free thunderstorm sound effects' 'free thunderstorm sounds' 'free thunderstorm sounds cd' 'free thunderstorm sounds mp3' 'free thunderstorm sounds online' 'free thunderstorm soundscape' 'free thunderstorm video' 'free thunderstorm video download' 'free thunderstorm videos' 'god of storm and thunder' 'horses storm thunder rain' 'how do thunder storms form' 'how far away is a thunder storm' 'how long do thunder storms last' 'ice cube in a thunder storm' 'indoor thunderstorm safety tips' 'information about thunderstorms for kids' 'interesting thunderstorm facts' 'is it dangerous to shower during thunder ' 'storm' 'is there frequently thunder during snow ' 'storms' 'isolated thunderstorms' 'it'#39's just a thunder storm baby there is ' 'nothing you should fear lyrics' 'lightning & thunder storm safety' 'lightning and thunderstorm facts' 'lightning and thunderstorms facts' 'lightning and thunderstorms for kids' 'listen to thunderstorm sounds online' 'mississauga thunder storm' 'nature sounds free mp3 thunder storm' 'only about thunderstorms facts' 'original storm deep thunderstick' 'phone use during thunder storms' 'pictures of thunderstorms' 'pocono thunder storm' 'posters of thunder storms' 'power rangers ninja storm' 'power rangers thunder storm' 'power rangers thunder storm cast' 'power rangers thunder storm games' 'power rangers thunder storm morphers' 'power rangers thunder storm part 1' 'power rangers thunder storm part 2' 'power rangers thunderstorm' 'power rangers thunderstorm cannon' 'power rangers thunderstorm deluxe ' 'megazord' 'power rangers thunderstorm games' 'power rangers thunderstorm megazord' 'power rangers thunderstorm part 2' 'power rangers thunderstorm pictures' 'power rnager ninja storm thunder staff' 'powerful thunder and lightning storms' 'precambrian thunder storms' 'rain thunderstorm mp3' 'rain thunderstorm pictures' 'relaxing thunderstorm music' 'reminds me of ohio river thunder lighten ' 'storms' 'sacramento thunder storm' 'safety tips for when your caught in a ' 'thunder storm' 'scattered thunderstorms' 'schemer puts his head in the thunder ' 'storm' 'sedative thunder storm' 'server thunder storms' 'severe supercell thunderstorm pictures' 'severe thunder storm pictures' 'severe thunder storms' 'severe thunderstorm facts' 'severe thunderstorm pictures' 'severe thunderstorm pictures hail' 'severe thunderstorm pictures in alberta' 'severe thunderstorm pictures tornado' 'severe thunderstorm safety' 'severe thunderstorm safety tips' 'severe thunderstorm videos' 'severe thunderstorm warning' 'severe thunderstorm warning los ' 'angeles' 'severe thunderstorm warning signs' 'severe thunderstorm warnings' 'severe thunderstorms' 'severe thunderstorms facts' 'shakespeare use thunder storm for ' 'cosmic disorder julius caesar' 'soothing thunderstorm sounds online' 'sound effects of severe thunder 
storm' 'sound of rain storm finger snapping ' 'thunder chorus' 'split thunder storm' 'storm 3d thunder power' 'storm dark thunder' 'storm dark thunder bowling ball' 'storm dark thunder bowling ball sale' 'storm dark thunder for sale' 'storm dark thunder pearl' 'storm dark thunder pearl bowling ball' 'storm dark thunder review' 'storm dark thunder shirt' 'storm dark thunderball' 'storm deep thunder' 'storm deep thunder 11' 'storm deep thunder 15' 'storm deep thunder 15 lure' 'storm deep thunder 2' 'storm deep thunder lures' 'storm deep thunderstick' 'storm deep thunderstick crankbaits' 'storm deep thunderstick dts09' 'storm deep thunderstick jr' 'storm deep thunderstick lures' 'storm deep thundersticks' 'storm rolling thunder 3 ball roller' 'storm rolling thunder bowling bag' 'storm rolling thunder three ball bowling ' 'bag' 'storm shallow thunder' 'storm shallow thunder 15' 'storm thunder claw' 'storm thunder craw' 'storm watches thunder' 'storms with constant lightning and ' 'thunder non-stop' 'supercell thunder storms' 'supercell thunderstorm pictures' 'supercell thunderstorms' 'swimming pools thunder storms' 'tampa + lightning strikes + thunder ' 'storms' 'texas thunderstorm pictures' 'texas thunderstorm warnings' 'thunder and lightning storm' 'thunder and lighting storms' 'thunder and lightning storms' 'thunder bay snow storm video' 'thunder storm' 'thunder storm and windmill' 'thunder storm cd' 'thunder storm cloud' 'thunder storm clouds' 'thunder storm dog peppermint oil' 'thunder storm in winter' 'thunder storm in winter and weather ' 'prediction' 'thunder storm lx-3 & road blaster psx ' 'download' 'thunder storm occurances' 'thunder storm photos' 'thunder storm poems' 'thunder storm safety' 'thunder storm sign' 'thunder storm sounds' 'thunder storms' 'thunder storms and deaths' 'thunder storms and ilghting' 'thunder storms and lighting' 'thunder storms cd' 'thunder storms in the arctic arctic ' 'weather' 'thunder storms in winter' 'thunder storms on you tub' 'thunder storms pics' 'thunder storms with rain' 'thunderstorm' 'thunderstorm backgrounds' 'thunderstorm capital' 'thunderstorm capital 2008 dorfman' 'thunderstorm capital in boston' 'thunderstorm capital llc' 'thunderstorm capital of canada' 'thunderstorm capital of the us' 'thunderstorm capital of the world' 'thunderstorm facts' 'thunderstorm facts for kids' 'thunderstorm facts hail' 'thunderstorm facts tornadoes' 'thunderstorm mp3' 'thunderstorm mp3 download' 'thunderstorm mp3 download free' 'thunderstorm mp3 downloads' 'thunderstorm mp3 downloads free' 'thunderstorm mp3 files' 'thunderstorm mp3 free' 'thunderstorm mp3 free download' 'thunderstorm mp3 free downloads' 'thunderstorm mp3 torrent' 'thunderstorm mp3s' 'thunderstorm music' 'thunderstorm music cd' 'thunderstorm music downloads' 'thunderstorm music free' 'thunderstorm music playlists' 'thunderstorm music rain' 'thunderstorm pics' 'thunderstorm pictures' 'thunderstorm pictures for kids' 'thunderstorm safety' 'thunderstorm safety for kids' 'thunderstorm safety precautions' 'thunderstorm safety procedures' 'thunderstorm safety rules' 'thunderstorm safety tips' 'thunderstorm safety tips for kids' 'thunderstorm safety tips shelter' 'thunderstorm safety tips trees' 'thunderstorm sound effects' 'thunderstorm sound effects cd' 'thunderstorm sound effects download' 'thunderstorm sound effects free' 'thunderstorm sound effects free ' 'download' 'thunderstorm sound effects free music ' 'feature audio' 'thunderstorm sound effects mp3' 'thunderstorm sound effects rain' 
'thunderstorm sounds' 'thunderstorm sounds cd' 'thunderstorm sounds download' 'thunderstorm sounds for sleep' 'thunderstorm sounds for sleeping' 'thunderstorm sounds free' 'thunderstorm sounds free download' 'thunderstorm sounds free downloads' 'thunderstorm sounds mp3' 'thunderstorm sounds mp3 download' 'thunderstorm sounds mp3 free' 'thunderstorm sounds online' 'thunderstorm sounds online for free' 'thunderstorm sounds online free' 'thunderstorm sounds sleep' 'thunderstorm sounds streaming' 'thunderstorm sounds torrent' 'thunderstorm soundscape' 'thunderstorm soundscapes' 'thunderstorm video' 'thunderstorm video clips' 'thunderstorm video download' 'thunderstorm video downloads' 'thunderstorm videos' 'thunderstorm videos for kids' 'thunderstorm videos lightning' 'thunderstorm videos online' 'thunderstorm wallpaper' 'thunderstorm warning' 'thunderstorm warning brisbane' 'thunderstorm warning definition' 'thunderstorm warning los angeles' 'thunderstorm warning san diego' 'thunderstorm warning san mateo county' 'thunderstorm warning santa barbara' 'thunderstorm warning santa clara' 'thunderstorm warning santa clara ' 'county' 'thunderstorm warning signal' 'thunderstorm warning signs' 'thunderstorm warning vs watch' 'thunderstorm warnings' 'thunderstorm warnings and watches' 'thunderstorm warnings for nj' 'thunderstorm warnings qld' 'thunderstorms' 'thunderstorms facts' 'thunderstorms facts for kids' 'thunderstorms for kids' 'tornados and thunder storms animated' 'understanding thunderstorms for kids' 'watch thunderstorm videos' 'weather underground forecast ' 'thunderstorms' 'what causes thunder storms' 'what is a thunder storm' 'where d thunder storms occur') TabOrder = 0 end object Memo2: TMemo Left = 240 Top = 8 Width = 265 Height = 129 Lines.Strings = ( 'Memo2') TabOrder = 1 end object Button1: TButton Left = 384 Top = 184 Width = 75 Height = 25 Caption = 'Button1' TabOrder = 2 OnClick = Button1Click end end
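    Two notes on the code as posted. First, the condition "if str '' then" has evidently lost its operator in transit and was presumably "if str <> '' then". Second, here is a minimal sketch of one way to walk the list in slices of 100; it reuses the Math unit the post already pulls in, and the batch handling is only a placeholder:

      procedure TForm2.Button1Click(Sender: TObject);
      var
        I, StartIdx, EndIdx: Integer;
        Batch: string;
      begin
        Memo2.Lines.Add('Total Pages: ' + IntToStr(Ceil(Memo1.Lines.Count / 100)));
        Memo2.Lines.Add('Total Items: ' + IntToStr(Memo1.Lines.Count));
        StartIdx := 0;
        while StartIdx < Memo1.Lines.Count do
        begin
          // take at most 100 items per page; Min comes from the Math unit
          EndIdx := Min(StartIdx + 99, Memo1.Lines.Count - 1);
          Batch := '';
          for I := StartIdx to EndIdx do
            if Batch <> '' then
              Batch := Batch + #10 + Memo1.Lines[I]
            else
              Batch := Memo1.Lines[I];
          Memo2.Lines.Add(Batch); // process one page of up to 100 items here
          StartIdx := EndIdx + 1;
        end;
      end;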

    Read the article

  • How to implement the new Google Maps API v2

    - by bapatla
    Hi every one I came to know that google maps has deprecated its previous version API v1 and introduced a new version of google maps API v2. I tried out one example by following some links in google any how i am pretty sure that i got the api key correctly by providing the exact hash key code and managed to get the correct api key. Now i managed to write some code as well but when i tried to execute the code i am getting the errors please help me to solve this here is my code and i even tried the sample codes provided by google play services an i got the same problem this is the sample that i have done by referring some links in google main activity package com.example.apv; import com.google.android.gms.maps.CameraUpdateFactory; import com.google.android.gms.maps.GoogleMap; import com.google.android.gms.maps.MapFragment; import com.google.android.gms.maps.model.BitmapDescriptorFactory; import com.google.android.gms.maps.model.LatLng; import com.google.android.gms.maps.model.MarkerOptions; import android.os.Bundle; import android.app.Activity; import android.app.FragmentManager; public class MainActivity extends Activity { @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); FragmentManager fragmentManager = getFragmentManager(); MapFragment mapFragment = (MapFragment) fragmentManager.findFragmentById(R.id.map); GoogleMap googleMap = mapFragment.getMap(); LatLng sfLatLng = new LatLng(37.7750, -122.4183); googleMap.setMapType(GoogleMap.MAP_TYPE_NORMAL); googleMap.addMarker(new MarkerOptions() .position(sfLatLng) .title("San Francisco") .snippet("Population: 776733") .icon(BitmapDescriptorFactory.defaultMarker( BitmapDescriptorFactory.HUE_AZURE))); googleMap.getUiSettings().setCompassEnabled(true); googleMap.getUiSettings().setZoomControlsEnabled(true); googleMap.animateCamera(CameraUpdateFactory.newLatLngZoom(sfLatLng, 10)); } } main.xml <?xml version="1.0" encoding="utf-8"?> <fragment xmlns:android="http://schemas.android.com/apk/res/android" android:id="@+id/map" android:layout_width="match_parent" android:layout_height="match_parent" class="com.google.android.gms.maps.MapFragment"/> and finally my manifest file <?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.example.apv" android:versionCode="1" android:versionName="1.0" > <uses-sdk android:minSdkVersion="8" android:targetSdkVersion="17"/> <permission android:name="com.codebybrian.mapsample.permission.MAPS_RECEIVE" android:protectionLevel="signature"/> <!--Required permissions--> permission oid:name="com.codebybrian.mapsample.permission.MAPS_RECEIVE"/> <!--Used by the API to download map tiles from Google Maps servers: --> <uses-permission android:name="android.permission.INTERNET"/> <!--Allows the API to access Google web-based services: --> <uses-permission android:name="com.google.android.providers.gsf.permission.READ_GSERVICES"/> <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/> <!--Optional permissions--> <uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION"/> <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"/> <!--Version 2 of the Google Maps Android API requires OpenGL ES version 2 --> <uses-feature android:glEsVersion="0x00020000" android:required="true"/> application android:label="@string/app_name" android:icon="@drawable/ic_launcher"> <activity android:name=".MyMapActivity" android:label="@string/app_name" > 
<intent-filter> <action android:name="android.intent.action.MAIN"/> <category android:name="android.intent.category.LAUNCHER"/> </intent-filter> </activity> <meta-data android:name="com.google.android.maps.v2.API_KEY" android:value="AZzaSSsBmhi4dXoKSylGGmjkQ5Jev9UdAJBjk"/> </application> </manifest> i run my application in emulator of version 4.2 and api level of 17 i got following error 12-17 10:06:52.590: E/Trace(826): error opening trace file: No such file or directory (2) 12-17 10:06:52.590: W/Trace(826): Unexpected value from nativeGetEnabledTags: 0 12-17 10:06:52.590: W/Trace(826): Unexpected value from nativeGetEnabledTags: 0 12-17 10:06:52.590: W/Trace(826): Unexpected value from nativeGetEnabledTags: 0 12-17 10:06:52.680: I/ActivityThread(826): Pub com.google.android.gms.plus;com.google.android.gms.plus.action: com.google.android.gms.plus.provider.PlusProvider 12-17 10:06:52.740: W/Trace(826): Unexpected value from nativeGetEnabledTags: 0 12-17 10:06:52.740: W/Trace(826): Unexpected value from nativeGetEnabledTags: 0 12-17 10:06:52.760: W/Trace(826): Unexpected value from nativeGetEnabledTags: 0 later i came to know that these version cant execute in emulator so i tried executing it with two devices one is Sony xperia u of android version 2.3.7 and Samsung galaxy tab of android version 4.1.1 and these are my outputs 12-17 14:37:02.468: D/AndroidRuntime(7636): Shutting down VM 12-17 14:37:02.468: W/dalvikvm(7636): threadid=1: thread exiting with uncaught exception (group=0x41f672a0) 12-17 14:37:02.476: E/AndroidRuntime(7636): FATAL EXCEPTION: main 12-17 14:37:02.476: E/AndroidRuntime(7636): java.lang.RuntimeException: Unable to instantiate activity ComponentInfo{com.example.apv/com.example.apv.MyMapActivity}: java.lang.ClassNotFoundException: com.example.apv.MyMapActivity 12-17 14:37:02.476: E/AndroidRuntime(7636): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2021) 12-17 14:37:02.476: E/AndroidRuntime(7636): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2122) 12-17 14:37:02.476: E/AndroidRuntime(7636): at android.app.ActivityThread.access$600(ActivityThread.java:140) 12-17 14:37:02.476: E/AndroidRuntime(7636): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1228) 12-17 14:37:02.476: E/AndroidRuntime(7636): at android.os.Handler.dispatchMessage(Handler.java:99) 12-17 14:37:02.476: E/AndroidRuntime(7636): at android.os.Looper.loop(Looper.java:137) 12-17 14:37:02.476: E/AndroidRuntime(7636): at android.app.ActivityThread.main(ActivityThread.java:4895) 12-17 14:37:02.476: E/AndroidRuntime(7636): at java.lang.reflect.Method.invokeNative(Native Method) 12-17 14:37:02.476: E/AndroidRuntime(7636): at java.lang.reflect.Method.invoke(Method.java:511) 12-17 14:37:02.476: E/AndroidRuntime(7636): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller .run(ZygoteInit.java:994) 12-17 14:37:02.476: E/AndroidRuntime(7636): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:761) 12-17 14:37:02.476: E/AndroidRuntime(7636): at dalvik.system.NativeStart.main(Native Method) 12-17 14:37:02.476: E/AndroidRuntime(7636): Caused by: java.lang.ClassNotFoundException: com.example.apv.MyMapActivity 12-17 14:37:02.476: E/AndroidRuntime(7636): at dalvik.system.BaseDexClassLoader.findClass(BaseDexClassLoader.java:61) 12-17 14:37:02.476: E/AndroidRuntime(7636): at java.lang.ClassLoader.loadClass(ClassLoader.java:501) 12-17 14:37:02.476: E/AndroidRuntime(7636): at java.lang.ClassLoader.loadClass(ClassLoader.java:461) 12-17 14:37:02.476: 
E/AndroidRuntime(7636): at android.app.Instrumentation.newActivity(Instrumentation.java:1068) 12-17 14:37:02.476: E/AndroidRuntime(7636): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2012) 12-17 14:37:02.476: E/AndroidRuntime(7636): ... 11 more could any one please suggest me to how to get this done and give me some links of new version API v2 tutorials of google maps and some examples links please help me
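    One detail worth noting from the log: the ClassNotFoundException names com.example.apv.MyMapActivity, but the posted source defines MainActivity, so the manifest's <activity android:name=".MyMapActivity"> entry points at a class that does not exist in the APK. (The lines reading "permission oid:name=..." and the bare "application android:label=..." also look like tags mangled in transit, presumably <uses-permission android:name="..."/> and <application ...>.) A sketch of the activity entry matching the code as posted, assuming the class name MainActivity is kept:

      <activity
          android:name=".MainActivity"
          android:label="@string/app_name">
          <intent-filter>
              <action android:name="android.intent.action.MAIN"/>
              <category android:name="android.intent.category.LAUNCHER"/>
          </intent-filter>
      </activity>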

    Read the article

  • Android Uncaught Exception

    - by Agrim Asthana
    09-30 02:32:31.474: D/ddm-heap(214): Got feature list request 09-30 02:32:31.634: D/AndroidRuntime(214): Shutting down VM 09-30 02:32:31.634: W/dalvikvm(214): threadid=3: thread exiting with uncaught exception (group=0x4001b188) 09-30 02:32:31.634: E/AndroidRuntime(214): Uncaught handler: thread main exiting due to uncaught exception 09-30 02:32:31.654: E/AndroidRuntime(214): java.lang.RuntimeException: Unable to instantiate activity ComponentInfo{com.mjcet.mjcet/com.mjcet.mjcet.MJCET}: java.lang.ClassNotFoundException: com.mjcet.mjcet.MJCET in loader dalvik.system.PathClassLoader@44e8c820 09-30 02:32:31.654: E/AndroidRuntime(214): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2417) 09-30 02:32:31.654: E/AndroidRuntime(214): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2512) 09-30 02:32:31.654: E/AndroidRuntime(214): at android.app.ActivityThread.access$2200(ActivityThread.java:119) 09-30 02:32:31.654: E/AndroidRuntime(214): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1863) 09-30 02:32:31.654: E/AndroidRuntime(214): at android.os.Handler.dispatchMessage(Handler.java:99) 09-30 02:32:31.654: E/AndroidRuntime(214): at android.os.Looper.loop(Looper.java:123) 09-30 02:32:31.654: E/AndroidRuntime(214): at android.app.ActivityThread.main(ActivityThread.java:4363) 09-30 02:32:31.654: E/AndroidRuntime(214): at java.lang.reflect.Method.invokeNative(Native Method) 09-30 02:32:31.654: E/AndroidRuntime(214): at java.lang.reflect.Method.invoke(Method.java:521) 09-30 02:32:31.654: E/AndroidRuntime(214): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:860) 09-30 02:32:31.654: E/AndroidRuntime(214): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:618) 09-30 02:32:31.654: E/AndroidRuntime(214): at dalvik.system.NativeStart.main(Native Method) 09-30 02:32:31.654: E/AndroidRuntime(214): Caused by: java.lang.ClassNotFoundException: com.mjcet.mjcet.MJCET in loader dalvik.system.PathClassLoader@44e8c820 09-30 02:32:31.654: E/AndroidRuntime(214): at dalvik.system.PathClassLoader.findClass(PathClassLoader.java:243) 09-30 02:32:31.654: E/AndroidRuntime(214): at java.lang.ClassLoader.loadClass(ClassLoader.java:573) 09-30 02:32:31.654: E/AndroidRuntime(214): at java.lang.ClassLoader.loadClass(ClassLoader.java:532) 09-30 02:32:31.654: E/AndroidRuntime(214): at android.app.Instrumentation.newActivity(Instrumentation.java:1021) 09-30 02:32:31.654: E/AndroidRuntime(214): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2409) 09-30 02:32:31.654: E/AndroidRuntime(214): ... 
11 more 09-30 02:32:31.684: I/dalvikvm(214): threadid=7: reacting to signal 3 09-30 02:32:31.684: E/dalvikvm(214): Unable to open stack trace file '/data/anr/traces.txt': Permission denied the above is my debugging output for the below activities LOGINACTIVITY.java package com.agrim.mjcet; import com.agrim.mjcet.R; import android.app.Activity; import android.content.Intent; import android.os.Bundle; import android.view.View; //import android.widget.TextView; import android.widget.Button; import android.widget.EditText; import android.view.View.OnClickListener; public class LoginActivity extends Activity { EditText txtUserName; EditText txtPassword; Button btnLogin; Button btnCancel; public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); // setting default screen to login.xml setContentView(R.layout.main); txtUserName=(EditText)this.findViewById(R.id.txtUname); txtPassword=(EditText)this.findViewById(R.id.txtPwd); btnLogin=(Button)this.findViewById(R.id.btnLogin); btnLogin=(Button)this.findViewById(R.id.btnLogin); btnLogin.setOnClickListener(new OnClickListener() { public void onClick(View v) { // Switching to Register screen if((txtUserName.getText().toString()).equals(txtPassword.getText().toString())) { Intent myIntent = new Intent(getApplicationContext(),SampleActivity.class); startActivityForResult(myIntent, 0); } } } ); } } SAMPLE ACTIVITY.java package com.agrim.mjcet; import com.agrim.mjcet.R; import android.app.Activity; import android.os.Bundle; import android.view.View; import android.widget.TextView; public class SampleActivity extends Activity { @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); // Set View to register.xml setContentView(R.layout.lol); TextView HomeScreen = (TextView) findViewById(R.id.back); HomeScreen.setOnClickListener(new View.OnClickListener() { public void onClick(View arg0) { // Closing registration screen // Switching to Login Screen/closing register screen finish(); } }); } } and here are my 2 layouts MAIN.XML <TableLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="fill_parent" android:layout_height="fill_parent" android:background="#000000" android:stretchColumns="1"> <TableRow> <TextView android:text="@string/user_name" android:textColor="#347235" android:id="@+id/TextView01" android:layout_width="wrap_content" android:layout_marginLeft="25dip" android:layout_height="wrap_content"> </TextView> <EditText android:text="" android:inputType="text" android:id="@+id/txtUname" android:layout_weight="0.75" android:layout_width="wrap_content" android:layout_marginRight="10dip" android:layout_marginBottom="5dip" android:background="#FFFFFF" android:layout_height="wrap_content"> </EditText> </TableRow> <TableRow> <TextView android:text="@string/password" android:textColor="#347235" android:id="@+id/TextView02" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_marginLeft="25dip"> </TextView> <EditText android:text="" android:inputType="textPassword" android:id="@+id/txtPwd" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_marginRight="10dip" android:layout_weight="1" android:background="#FFFFFF" android:gravity="center"> </EditText> </TableRow> <TableRow> <Button android:id="@+id/btnLogin" android:layout_width="fill_parent" android:layout_height="wrap_content" android:background="#FFFFFF" android:layout_marginTop="25dip" android:layout_marginLeft="50dip" 
android:onClick="onClickMyButton" android:text="@string/login" /> <Button android:id="@+id/btnCancel" android:layout_width="fill_parent" android:layout_height="wrap_content" android:background="#FFFFFF" android:layout_marginTop="25dip" android:layout_marginLeft="100dip" android:layout_marginRight="50dip" android:text="@string/cancel" /> </TableRow> <FrameLayout android:layout_width="wrap_content" android:layout_height="match_parent" > <ImageView android:layout_width="fill_parent" android:layout_height="match_parent" android:scaleType="centerInside" android:src="@drawable/logo_invert" android:contentDescription="@drawable/logo_invert"/> </FrameLayout> </TableLayout> and finally LOL.xml <FrameLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="fill_parent" android:background="#000000" android:layout_height="fill_parent" > <ImageView android:layout_width="fill_parent" android:id="@+id/back" android:layout_height="match_parent" android:scaleType="centerInside" android:src="@drawable/lol" android:contentDescription="@drawable/lol"> </ImageView> <RelativeLayout android:layout_width="match_parent" android:layout_height="138dp" android:orientation="vertical" android:gravity="bottom" > <TextView android:id="@+id/TextView02" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_alignParentBottom="true" android:layout_centerHorizontal="true" android:text="@string/coming_soon" android:textColor="#347235" /> </RelativeLayout> </FrameLayout> I get a force close upon initialization.. and yes this is my first android app :)
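    The trace offers a strong hint: it reports ComponentInfo{com.mjcet.mjcet/com.mjcet.mjcet.MJCET}, i.e. the installed manifest declares package com.mjcet.mjcet with a launcher activity named MJCET, while the posted sources live in package com.agrim.mjcet and define LoginActivity. A ClassNotFoundException at launch almost always means this kind of manifest/class mismatch. A sketch of a manifest that matches the code as posted (the package and activity names are taken from the source files above; everything else is illustrative):

      <manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.agrim.mjcet"
          android:versionCode="1"
          android:versionName="1.0">
          <application android:label="@string/app_name">
              <activity android:name=".LoginActivity"
                        android:label="@string/app_name">
                  <intent-filter>
                      <action android:name="android.intent.action.MAIN"/>
                      <category android:name="android.intent.category.LAUNCHER"/>
                  </intent-filter>
              </activity>
              <activity android:name=".SampleActivity"/>
          </application>
      </manifest>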

    Read the article

  • Ant build classpath jar generates "error in opening zip file"

    - by Uberpuppy
    I have a project built in eclipse with a dependencies on 3rd party jars. I'm trying to generate a suitable build file for ant - using eclipses built-in export-ant buildfile feature as a starting block. When I run the build target I get the following error: [javac] error: error reading /base/repo/FabTrace/lib/apache/geronimo/specs/geronimo-j2ee-management_1.0_spec/1.0/geronimo-j2ee-management_1.0_spec-1.0.jar; error in opening zip file And the whole build file (auto-generated by eclipse) looks like this: (NB: the error above always references the first jar listed in the classpath) <project basedir="." default="build" name="FabTrace"> <property environment="env"/> <property name="ECLIPSE_HOME" value="/opt/apps/eclipse"/> <property name="debuglevel" value="source,lines,vars"/> <property name="target" value="1.5"/> <property name="source" value="1.5"/> <path id="JUnit 4.libraryclasspath"> <pathelement location="${ECLIPSE_HOME}/plugins/org.junit4_4.5.0.v20090824/junit.jar"/> <pathelement location="${ECLIPSE_HOME}/plugins/org.hamcrest.core_1.1.0.v20090501071000.jar"/> </path> <path id="FabTrace.classpath"> <pathelement location="bin"/> <pathelement location="lib/apache/geronimo/specs/geronimo-j2ee-management_1.0_spec/1.0/geronimo-j2ee-management_1.0_spec-1.0.jar"/> <pathelement location="lib/apache/geronimo/specs/geronimo-jms_1.1_spec/1.0/geronimo-jms_1.1_spec-1.0.jar"/> <pathelement location="lib/commons-collections/commons-collections/3.2/commons-collections-3.2.jar"/> <pathelement location="lib/commons-io/commons-io/1.4/commons-io-1.4.jar"/> <pathelement location="lib/commons-lang/commons-lang/2.1/commons-lang-2.1.jar"/> <pathelement location="lib/commons-logging/commons-logging/1.1/commons-logging-1.1.jar"/> <pathelement location="lib/commons-logging/commons-logging-api/1.1/commons-logging-api-1.1.jar"/> <pathelement location="lib/javax/activation/activation/1.1/activation-1.1.jar"/> <pathelement location="lib/javax/jms/jms/1.1/jms-1.1.jar"/> <pathelement location="lib/javax/mail/mail/1.4/mail-1.4.jar"/> <pathelement location="lib/javax/xml/bind/jaxb-api/2.1/jaxb-api-2.1.jar"/> <pathelement location="lib/javax/xml/stream/stax-api/1.0-2/stax-api-1.0-2.jar"/> <pathelement location="lib/junit/junit/4.4/junit-4.4.jar"/> <pathelement location="lib/log4j/log4j/1.2.15/log4j-1.2.15.jar"/> <pathelement location="lib/apache/camel/camel-jms-2.0-M1.jar"/> <pathelement location="lib/spring/spring-2.5.6.jar"/> <pathelement location="lib/apache/camel/camel-bundle-2.0-M1.jar"/> <pathelement location="lib/backport-util-concurrent/backport-util-concurrent-3.1.jar"/> <pathelement location="lib/commons-pool/commons-pool-1.4.jar"/> <pathelement location="lib/apache/camel/camel-activemq-1.1.0.jar"/> <pathelement location="lib/apache/activemq/activemq-camel-5.2.0.jar"/> <pathelement location="lib/jencks/jencks-2.2-all.jar"/> <pathelement location="lib/jencks/jencks-amqpool-2.2.jar"/> <pathelement location="lib/activemq/apache-activemq-5.3.1/activemq-all-5.3.1.jar"/> <pathelement location="lib/activemq/apache-activemq-5.3.1/lib/optional/xbean-spring-3.6.jar"/> <pathelement location="lib/activemq/apache-activemq-5.3.1/lib/activemq-core-5.3.1.jar"/> <pathelement location="lib/activemq/apache-activemq-5.3.1/lib/camel-jetty-2.2.0.jar"/> <pathelement location="lib/activemq/apache-activemq-5.3.1/lib/web/jetty-6.1.9.jar"/> <pathelement location="lib/activemq/apache-activemq-5.3.1/lib/web/jetty-util-6.1.9.jar"/> <pathelement location="lib/activemq/apache-activemq-5.3.1/lib/web/jetty-xbean-6.1.9.jar"/> <pathelement 
location="lib/activemq/apache-activemq-5.3.1/lib/optional/activemq-optional-5.3.1.jar"/> <pathelement location="lib/activemq/apache-activemq-5.3.1/lib/web/geronimo-servlet_2.5_spec-1.2.jar"/> <pathelement location="lib/activemq/apache-activemq-5.3.1/lib/optional/spring-beans-2.5.6.jar"/> <pathelement location="lib/activemq/apache-activemq-5.3.1/lib/optional/spring-context-2.5.6.jar"/> <pathelement location="lib/activemq/apache-activemq-5.3.1/lib/optional/spring-core-2.5.6.jar"/> <path refid="JUnit 4.libraryclasspath"/> </path> <target name="init"> <mkdir dir="bin"/> <copy includeemptydirs="false" todir="bin"> <fileset dir="src/main/java"> <exclude name="**/*.launch"/> <exclude name="**/*.java"/> </fileset> </copy> <copy includeemptydirs="false" todir="bin"> <fileset dir="src/test/java"> <exclude name="**/*.launch"/> <exclude name="**/*.java"/> </fileset> </copy> <copy includeemptydirs="false" todir="bin"> <fileset dir="config"> <exclude name="**/*.launch"/> <exclude name="**/*.java"/> </fileset> </copy> </target> <target name="clean"> <delete dir="bin"/> </target> <target depends="clean" name="cleanall"/> <target depends="build-subprojects,build-project" name="build"/> <target name="build-subprojects"/> <target depends="init" name="build-project"> <echo message="${ant.project.name}: ${ant.file}"/> <javac debug="true" debuglevel="${debuglevel}" destdir="bin" source="${source}" target="${target}"> <src path="src/main/java"/> <classpath refid="FabTrace.classpath"/> </javac> <javac debug="true" debuglevel="${debuglevel}" destdir="bin" source="${source}" target="${target}"> <src path="src/test/java"/> <classpath refid="FabTrace.classpath"/> </javac> <javac debug="true" debuglevel="${debuglevel}" destdir="bin" source="${source}" target="${target}"> <src path="config"/> <classpath refid="FabTrace.classpath"/> </javac> </target> </project> (I know there's eclipse specific stuff in here. But I get the same results with or without it.) I've done ye old google search and trawled around without success. I can confirm that all the jars do really exist. I've also tried from the commandline and as sudo - again, same results. Any help would be greatly appreciated. Cheers

    Read the article

  • Save object states in .data or attr - Performance vs CSS?

    - by Neysor
    In response to my answer yesterday about rotating an Image, Jamund told me to use .data() instead of .attr() First I thought that he is right, but then I thought about a bigger context... Is it always better to use .data() instead of .attr()? I looked in some other posts like what-is-better-data-or-attr or jquery-data-vs-attrdata The answers were not satisfactory for me... So I moved on and edited the example by adding CSS. I thought it might be useful to make a different Style on each image if it rotates. My style was the following: .rp[data-rotate="0"] { border:10px solid #FF0000; } .rp[data-rotate="90"] { border:10px solid #00FF00; } .rp[data-rotate="180"] { border:10px solid #0000FF; } .rp[data-rotate="270"] { border:10px solid #00FF00; } Because design and coding are often separated, it could be a nice feature to handle this in CSS instead of adding this functionality into JavaScript. Also in my case the data-rotate is like a special state which the image currently has. So in my opinion it make sense to represent it within the DOM. I also thought this could be a case where it is much better to save with .attr() then with .data(). Never mentioned before in one of the posts I read. But then i thought about performance. Which function is faster? I built my own test following: <!DOCTYPE HTML> <html> <head> <title>test</title> <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min.js"></script> <script type="text/javascript"> function runfirst(dobj,dname){ console.log("runfirst "+dname); console.time(dname+"-attr"); for(i=0;i<10000;i++){ dobj.attr("data-test","a"+i); } console.timeEnd(dname+"-attr"); console.time(dname+"-data"); for(i=0;i<10000;i++){ dobj.data("data-test","a"+i); } console.timeEnd(dname+"-data"); } function runlast(dobj,dname){ console.log("runlast "+dname); console.time(dname+"-data"); for(i=0;i<10000;i++){ dobj.data("data-test","a"+i); } console.timeEnd(dname+"-data"); console.time(dname+"-attr"); for(i=0;i<10000;i++){ dobj.attr("data-test","a"+i); } console.timeEnd(dname+"-attr"); } $().ready(function() { runfirst($("#rp4"),"#rp4"); runfirst($("#rp3"),"#rp3"); runlast($("#rp2"),"#rp2"); runlast($("#rp1"),"#rp1"); }); </script> </head> <body> <div id="rp1">Testdiv 1</div> <div id="rp2" data-test="1">Testdiv 2</div> <div id="rp3">Testdiv 3</div> <div id="rp4" data-test="1">Testdiv 4</div> </body> </html> It should also show if there is a difference with a predefined data-test or not. One result was this: runfirst #rp4 #rp4-attr: 515ms #rp4-data: 268ms runfirst #rp3 #rp3-attr: 505ms #rp3-data: 264ms runlast #rp2 #rp2-data: 260ms #rp2-attr: 521ms runlast #rp1 #rp1-data: 284ms #rp1-attr: 525ms So the .attr() function did always need more time than the .data() function. This is an argument for .data() I thought. Because performance is always an argument! Then I wanted to post my results here with some questions, and in the act of writing I compared with the questions Stack Overflow showed me (similar titles) And true enough, there was one interesting post about performance I read it and run their example. And now I am confused! This test showed that .data() is slower then .attr() !?!! Why is that so? First I thought it is because of a different jQuery library so I edited it and saved the new one. But the result wasn't changing... So now my questions to you: Why are there some differences in the performance in these two examples? Would you prefer to use data- HTML5 attributes instead of data, if it represents a state? 
Although it wouldn't be needed at the time of coding? Why - Why not? Now depending on the performance: Would performance be an argument for you using .attr() instead of data, if it shows that .attr() is better? Although data is meant to be used for .data()? UPDATE 1: I did see that without overhead .data() is much faster. Misinterpreted the data :) But I'm more interested in my second question. :) Would you prefer to use data- HTML5 attributes instead of data, if it represents a state? Although it wouldn't be needed at the time of coding? Why - Why not? Are there some other reasons you can think of, to use .attr() and not .data()? e.g. interoperability? because .data() is jquery style and HTML Attributes can be read by all... UPDATE 2: As we see from T.J Crowder's speed test in his answer attr is much faster then data! which is again confusing me :) But please! Performance is an argument, but not the highest! So give answers to my other questions please too!
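    One concrete difference that bears directly on the CSS-state idea above: jQuery's .data() reads data-* attributes on first access, but writing through .data() never updates the DOM attribute, so attribute selectors such as .rp[data-rotate="90"] only react to .attr(). A small sketch:

      // writes the attribute into the DOM, so CSS attribute
      // selectors like .rp[data-rotate="90"] start matching:
      $('.rp').attr('data-rotate', '90');

      // updates only jQuery's internal store; the DOM attribute,
      // and therefore the CSS rule, is left untouched:
      $('.rp').data('rotate', '90');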

    Read the article

  • Referencing CDI producer method result in h:selectOneMenu

    - by user953217
    I have a named session scoped bean CustomerRegistration which has a named producer method getNewCustomer which returns a Customer object. There is also CustomerListProducer class which produces all customers as list from the database. On the selectCustomer.xhtml page the user is then able to select one of the customers and submit the selection to the application which then simply prints out the last name of the selected customer. Now this only works when I reference the selected customer on the facelets page via #{customerRegistration.newCustomer}. When I simply use #{newCustomer} then the output for the last name is null whenever I submit the form. What's going on here? Is this the expected behavior as according to chapter 7.1 Restriction upon bean instantion of JSR-299 spec? It says: ... However, if the application directly instantiates a bean class, instead of letting the container perform instantiation, the resulting instance is not managed by the container and is not a contextual instance as defined by Section 6.5.2, “Contextual instance of a bean”. Furthermore, the capabilities listed in Section 2.1, “Functionality provided by the container to the bean” will not be available to that particular instance. In a deployed application, it is the container that is responsible for instantiating beans and initializing their dependencies. ... Here's the code: Customer.java: @javax.persistence.Entity @Veto public class Customer implements Serializable, Entity { private static final long serialVersionUID = 122193054725297662L; @Column(name = "first_name") private String firstName; @Column(name = "last_name") private String lastName; @Id @GeneratedValue() private Long id; public String getFirstName() { return firstName; } public void setFirstName(String firstName) { this.firstName = firstName; } public String getLastName() { return lastName; } public void setLastName(String lastName) { this.lastName = lastName; } @Override public String toString() { return firstName + ", " + lastName; } @Override public Long getId() { return this.id; } } CustomerListProducer.java: @SessionScoped public class CustomerListProducer implements Serializable { @Inject private EntityManager em; private List<Customer> customers; @Inject @Category("helloworld_as7") Logger log; // @Named provides access the return value via the EL variable name // "members" in the UI (e.g., // Facelets or JSP view) @Produces @Named public List<Customer> getCustomers() { return customers; } public void onCustomerListChanged( @Observes(notifyObserver = Reception.IF_EXISTS) final Customer customer) { // retrieveAllCustomersOrderedByName(); log.info(customer.toString()); } @PostConstruct public void retrieveAllCustomersOrderedByName() { CriteriaBuilder cb = em.getCriteriaBuilder(); CriteriaQuery<Customer> criteria = cb.createQuery(Customer.class); Root<Customer> customer = criteria.from(Customer.class); // Swap criteria statements if you would like to try out type-safe // criteria queries, a new // feature in JPA 2.0 // criteria.select(member).orderBy(cb.asc(member.get(Member_.name))); criteria.select(customer).orderBy(cb.asc(customer.get("lastName"))); customers = em.createQuery(criteria).getResultList(); } } CustomerRegistration.java: @Named @SessionScoped public class CustomerRegistration implements Serializable { @Inject @Category("helloworld_as7") private Logger log; private Customer newCustomer; @Produces @Named public Customer getNewCustomer() { return newCustomer; } public void selected() { log.info("Customer " + 
newCustomer.getLastName() + " ausgewählt."); } @PostConstruct public void initNewCustomer() { newCustomer = new Customer(); } public void setNewCustomer(Customer newCustomer) { this.newCustomer = newCustomer; } } not working selectCustomer.xhtml: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xmlns:f="http://java.sun.com/jsf/core" xmlns:h="http://java.sun.com/jsf/html" xmlns:ui="http://java.sun.com/jsf/facelets"> <h:head> <title>Auswahl</title> </h:head> <h:body> <h:form> <h:selectOneMenu value="#{newCustomer}" converter="customerConverter"> <f:selectItems value="#{customers}" var="current" itemLabel="#{current.firstName}, #{current.lastName}" /> </h:selectOneMenu> <h:panelGroup id="auswahl"> <h:outputText value="#{newCustomer.lastName}" /> </h:panelGroup> <h:commandButton value="Klick" action="#{customerRegistration.selected}" /> </h:form> </h:body> </html> working selectCustomer.xhtml: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xmlns:f="http://java.sun.com/jsf/core" xmlns:h="http://java.sun.com/jsf/html" xmlns:ui="http://java.sun.com/jsf/facelets"> <h:head> <title>Auswahl</title> </h:head> <h:body> <h:form> <h:selectOneMenu value="#{customerRegistration.newCustomer}" converter="customerConverter"> <f:selectItems value="#{customers}" var="current" itemLabel="#{current.firstName}, #{current.lastName}" /> </h:selectOneMenu> <h:panelGroup id="auswahl"> <h:outputText value="#{newCustomer.lastName}" /> </h:panelGroup> <h:commandButton value="Klick" action="#{customerRegistration.selected}" /> </h:form> </h:body> </html> CustomerConverter.java: @SessionScoped @FacesConverter("customerConverter") public class CustomerConverter implements Converter, Serializable { private static final long serialVersionUID = -6093400626095413322L; @Inject EntityManager entityManager; @Override public Object getAsObject(FacesContext context, UIComponent component, String value) { Long id = Long.valueOf(value); return entityManager.find(Customer.class, id); } @Override public String getAsString(FacesContext context, UIComponent component, Object value) { return ((Customer) value).getId().toString(); } }

    Read the article

  • Use a single FreeMarker template to display tables of arbitrary POJOs

    - by Kevin Pauli
    Attention advanced Freemarker gurus: I want to use a single freemarker template to be able to output tables of arbitrary pojos, with the columns to display defined separately than the data. The problem is that I can't figure out how to get a handle to a function on a pojo at runtime, and then have freemarker invoke that function (lambda style). From skimming the docs it seems that Freemarker supports functional programming, but I can't seem to forumulate the proper incantation. I whipped up a simplistic concrete example. Let's say I have two lists: a list of people with a firstName and lastName, and a list of cars with a make and model. would like to output these two tables: <table> <tr> <th>firstName</th> <th>lastName</th> </tr> <tr> <td>Joe</td> <td>Blow</d> </tr> <tr> <td>Mary</td> <td>Jane</d> </tr> </table> and <table> <tr> <th>make</th> <th>model</th> </tr> <tr> <td>Toyota</td> <td>Tundra</d> </tr> <tr> <td>Honda</td> <td>Odyssey</d> </tr> </table> But I want to use the same template, since this is part of a framework that has to deal with dozens of different pojo types. Given the following code: public class FreemarkerTest { public static class Table { private final List<Column> columns = new ArrayList<Column>(); public Table(Column[] columns) { this.columns.addAll(Arrays.asList(columns)); } public List<Column> getColumns() { return columns; } } public static class Column { private final String name; public Column(String name) { this.name = name; } public String getName() { return name; } } public static class Person { private final String firstName; private final String lastName; public Person(String firstName, String lastName) { this.firstName = firstName; this.lastName = lastName; } public String getFirstName() { return firstName; } public String getLastName() { return lastName; } } public static class Car { String make; String model; public Car(String make, String model) { this.make = make; this.model = model; } public String getMake() { return make; } public String getModel() { return model; } } public static void main(String[] args) throws Exception { final Table personTableDefinition = new Table(new Column[] { new Column("firstName"), new Column("lastName") }); final List<Person> people = Arrays.asList(new Person[] { new Person("Joe", "Blow"), new Person("Mary", "Jane") }); final Table carTable = new Table(new Column[] { new Column("make"), new Column("model") }); final List<Car> cars = Arrays.asList(new Car[] { new Car("Toyota", "Tundra"), new Car("Honda", "Odyssey") }); final Configuration cfg = new Configuration(); cfg.setClassForTemplateLoading(FreemarkerTest.class, ""); cfg.setObjectWrapper(new DefaultObjectWrapper()); final Template template = cfg.getTemplate("test.ftl"); process(template, personTableDefinition, people); process(template, carTable, cars); } private static void process(Template template, Table tableDefinition, List<? extends Object> data) throws Exception { final Map<String, Object> dataMap = new HashMap<String, Object>(); dataMap.put("tableDefinition", tableDefinition); dataMap.put("data", data); final Writer out = new OutputStreamWriter(System.out); template.process(dataMap, out); out.flush(); } } All the above is a given for this problem. So here is the template I have been hacking on. Note the comment where I am having trouble. <table> <tr> <#list tableDefinition.columns as col> <th>${col.name}</th> </#list> </tr> <#list data as pojo> <tr> <#list tableDefinition.columns as col> <td><#-- what goes here? 
--></td> </#list> </tr> </#list> </table> So col.name has the name of the property I want to access from the pojo. I have tried a few things, such as pojo.col.name and <#assign property = col.name/> ${pojo.property} but of course these don't work, I just included them to help convey my intent. I am looking for a way to get a handle to a function and have freemarker invoke it, or perhaps some kind of "evaluate" feature that can take an arbitrary expression as a string and evaluate it at runtime.
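    For what it's worth, FreeMarker's hash lookup accepts a dynamic key, and the default object wrapper exposes bean properties through it, so the cell can resolve the property named by the column at runtime. A sketch of the inner loop under that assumption:

      <#list data as pojo>
        <tr>
          <#list tableDefinition.columns as col>
            <td>${pojo[col.name]}</td>
          </#list>
        </tr>
      </#list>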

    Read the article

  • Am I just not understanding TDD unit testing (Asp.Net MVC project)?

    - by KallDrexx
    I am trying to figure out how to correctly and efficiently unit test my Asp.net MVC project. When I started on this project I bought the Pro ASP.Net MVC, and with that book I learned about TDD and unit testing. After seeing the examples, and the fact that I work as a software engineer in QA in my current company, I was amazed at how awesome TDD seemed to be. So I started working on my project and went gun-ho writing unit tests for my database layer, business layer, and controllers. Everything got a unit test prior to implementation. At first I thought it was awesome, but then things started to go downhill. Here are the issues I started encountering: I ended up writing application code in order to make it possible for unit tests to be performed. I don't mean this in a good way as in my code was broken and I had to fix it so the unit test pass. I mean that abstracting out the database to a mock database is impossible due to the use of linq for data retrieval (using the generic repository pattern). The reason is that with linq-sql or linq-entities you can do joins just by doing: var objs = select p from _container.Projects select p.Objects; However, if you mock the database layer out, in order to have that linq pass the unit test you must change the linq to be var objs = select p from _container.Projects join o in _container.Objects on o.ProjectId equals p.Id select o; Not only does this mean you are changing your application logic just so you can unit test it, but you are making your code less efficient for the sole purpose of testability, and getting rid of a lot of advantages using an ORM has in the first place. Furthermore, since a lot of the IDs for my models are database generated, I proved to have to write additional code to handle the non-database tests since IDs were never generated and I had to still handle those cases for the unit tests to pass, yet they would never occur in real scenarios. Thus I ended up throwing out my database unit testing. Writing unit tests for controllers was easy as long as I was returning views. However, the major part of my application (and the one that would benefit most from unit testing) is a complicated ajax web application. For various reasons I decided to change the app from returning views to returning JSON with the data I needed. After this occurred my unit tests became extremely painful to write, as I have not found any good way to write unit tests for non-trivial json. After pounding my head and wasting a ton of time trying to find a good way to unit test the JSON, I gave up and deleted all of my controller unit tests (all controller actions are focused on this part of the app so far). So finally I was left with testing the Service layer (BLL). Right now I am using EF4, however I had this issue with linq-sql as well. I chose to do the EF4 model-first approach because to me, it makes sense to do it that way (define my business objects and let the framework figure out how to translate it into the sql backend). This was fine at the beginning but now it is becoming cumbersome due to relationships. For example say I have Project, User, and Object entities. One Object must be associated to a project, and a project must be associated to a user. This is not only a database specific rule, these are my business rules as well. However, say I want to do a unit test that I am able to save an object (for a simple example). 
I now have to do the following code just to make sure the save worked: User usr = new User { Name = "Me" }; _userService.SaveUser(usr); Project prj = new Project { Name = "Test Project", Owner = usr }; _projectService.SaveProject(prj); Object obj = new Object { Name = "Test Object" }; _objectService.SaveObject(obj); // Perform verifications There are several issues with having to do all this just to perform one unit test. For starters, if I add a new dependency, such as all projects must belong to a category, I must go into EVERY single unit test that references a project, add code to save the category, then add code to attach the category to the project. This can be a HUGE effort down the road for a very simple business logic change, and yet almost none of the unit tests I will be modifying for this requirement are actually meant to test that feature/requirement. If I then add verifications to my SaveProject method, so that projects cannot be saved unless they have a name with at least 5 characters, I then have to go through every Object and Project unit test to make sure that the new requirement doesn't make any unrelated unit tests fail. If there is an issue in the UserService.SaveUser() method it will cause all project and object unit tests to fail, and the cause won't be immediately noticeable without digging through the exceptions. Thus I have removed all service layer unit tests from my project. I could go on and on, but so far I have not seen any way for unit testing to actually help me and not get in my way. I can see specific cases where I can, and probably will, implement unit tests, such as making sure my data verification methods work correctly, but those cases are few and far between. Some of my issues can probably be mitigated, but not without adding extra layers to my application, and thus making more points of failure just so I can unit test. Thus I have no unit tests left in my code. Luckily I heavily use source control so I can get them back if I need them, but I just don't see the point. Everywhere on the internet I see people talking about how great TDD unit tests are, and I'm not just talking about the fanatical people. The few people who dismiss TDD/unit tests give bad arguments, claiming it is more efficient to debug by hand through the IDE, or that their coding skills are so amazing that they don't need it. I recognize that both of those arguments are utter bollocks, especially for a project that needs to be maintainable by multiple developers, but any valid rebuttals to TDD seem to be few and far between. So the point of this post is to ask, am I just not understanding how to use TDD and automatic unit tests?
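One common mitigation for the setup cascade described above is a test data builder, so that each required relationship is encoded in exactly one place and a new rule (like the hypothetical category) only touches the builder - a minimal C# sketch, assuming the entity and service names from the question:

    // Hypothetical helpers; the entity names (User, Project, Object) come
    // from the question, everything else here is illustrative.
    public static class TestData
    {
        public static User BuildUser(string name = "Me")
        {
            return new User { Name = name };
        }

        public static Project BuildProject(User owner = null)
        {
            // If "projects must have a category" arrives later,
            // it gets added here once, not in every test.
            return new Project { Name = "Test Project", Owner = owner ?? BuildUser() };
        }

        public static Object BuildObject(Project project = null)
        {
            // Assumes Object carries a Project reference, as the
            // business rule in the question implies.
            return new Object { Name = "Test Object", Project = project ?? BuildProject() };
        }
    }

A test can then state only what it cares about (var obj = TestData.BuildObject(); _objectService.SaveObject(obj); // assert...). This does not make the cross-service failure cascade disappear, but it does confine schema changes to one file.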

    Read the article

  • How Should I Generate Trade Statistics For CouchDB/Rails3 Application?

    - by James
    My Problem: I am trying to develop a web application for currency traders. The application allows traders to enter or upload information about their trades, and I want to calculate a wide variety of statistics based on what the user entered. Now, normally I would use a relational database for this, but I have two requirements that don't fit well with a relational database, so I am attempting to use couchdb. Those two requirements are: 1) Primarily, I have a companion desktop application that users will be able to work with and replicate to the site using couchdb's awesome replication feature, and 2) I would like to allow users to be able to define their own custom things to track about trades and generate results based off of what they enter. The schema-less nature of couch seems perfect here, but it may end up being harder than it sounds. (I already know couch requires you to define views in advance and such, so I was just planning on sticking all the custom attributes in an array and then emitting the array in the view and further processing from there.) What I Am Doing: Right now I am just emitting each trade in couch keyed by each user's system and querying with the key of the system to get an array of trades per system. Simple. I am not using a reduce function currently to calculate any stats because I couldn't figure out how to get everything I need without getting a reduce overflow error. Here is an example of rows that are getting emitted from couch: {"total_rows":134,"offset":0,"rows":[ {"id":"5b1dcd47221e160d8721feee4ccc64be", "key":["80e40ba2fa43589d57ec3f1d19db41e6","2010/05/14 04:32:37 +0000"], "value":null, "doc":{ "_id":"5b1dcd47221e160d8721feee4ccc64be", "_rev":"1-bc9fe763e2637694df47d6f5efb58e5b", "couchrest-type":"Trade", "system":"80e40ba2fa43589d57ec3f1d19db41e6", "pair":"EUR/USD", "direction":"Buy", "entry":12600, "exit":12700, "stop_loss":12500, "profit_target":12700, "status":"Closed", "slug":"101332132375", "custom_tracking": [{"name":"signal", "value":"Pin Bar"}], "updated_at":"2010/05/14 04:32:37 +0000", "created_at":"2010/05/14 04:32:37 +0000", "result":100}} ]} In my rails 3 controller I am basically just populating an array of trades such as the one above and then extracting out the relevant data into smaller arrays that I can compute my statistics on. Here is my show action for the page where I want to display the stats and all the trades: def show @trades = Trade.by_system(:startkey => [@system.id], :endkey => [@system.id, Time.now ]) @trades.each do |trade| if trade.result > 0 @winning_trades << trade.result elsif trade.result < 0 @losing_trades << trade.result else @breakeven_trades << trade.result end if trade.direction == "Buy" @long_trades << trade.result else @short_trades << trade.result end if trade["custom_tracking"] != nil @custom_tracking << {"result" => trade.result, "variables" => trade["custom_tracking"]} end end end I am omitting some other stuff that is going on, but that is the gist of what I am doing. 
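(As an aside, the if/elsif cascade above can be collapsed with Ruby's Enumerable - a minimal sketch, assuming each trade responds to result and direction exactly as in the question:

    results = @trades.map(&:result)
    @winning_trades   = results.select { |r| r > 0 }
    @losing_trades    = results.select { |r| r < 0 }
    @breakeven_trades = results.select { |r| r == 0 }
    # partition splits on the Buy/other test in one pass
    longs, shorts = @trades.partition { |t| t.direction == "Buy" }
    @long_trades  = longs.map(&:result)
    @short_trades = shorts.map(&:result)
    @custom_tracking = @trades.reject { |t| t["custom_tracking"].nil? }
                              .map { |t| { "result" => t.result, "variables" => t["custom_tracking"] } }

Same behavior, but with no mutable accumulators to initialize elsewhere.)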
Then I am calculating stuff in the view layer to produce some results: <% winning_long_trades = @long_trades.reject {|trade| trade <= 0 } %> <% winning_short_trades = @short_trades.reject {|trade| trade <= 0 } %> <ul> <li>Total Trades: <%= @trades.count %></li> <li>Winners: <%= @winning_trades.size %></li> <li>Biggest Winner (Pips): <%= @winning_trades.max %></li> <li>Average Win(Pips): <%= @winning_trades.sum/@winning_trades.size %></li> <li>Losers: <%= @losing_trades.size %></li> <li>Biggest Loser (Pips): <%= @losing_trades.min %></li> <li>Average Loss(Pips): <%= @losing_trades.sum/@losing_trades.size %></li> <li>Breakeven Trades: <%= @breakeven_trades.size %></li> <li>Long Trades: <%= @long_trades.size %></li> <li>Winning Long Trades: <%= winning_long_trades.size %></li> <li>Short Trades: <%= @short_trades.size %></li> <li>Winning Short Trades: <%= winning_short_trades.size %></li> <li>Total Pips: <%= @winning_trades.sum + @losing_trades.sum %></li> <li>Win Rate (%): <%= @winning_trades.size/@trades.count.to_f * 100 %></li> </ul> This produces the following results, which aside from a few things is exactly what I want: Total Trades: 134 Winners: 70 Biggest Winner (Pips): 1488 Average Win(Pips): 440 Losers: 58 Biggest Loser (Pips): -516 Average Loss(Pips): -225 Breakeven Trades: 6 Long Trades: 125 Winning Long Trades: 67 Short Trades: 9 Winning Short Trades: 3 Total Pips: 17819 Win Rate (%): 52.23880597014925 What I Am Wondering - Finally The Actual Questions: I am starting to get really skeptical of how well this method will work when a user has 5,000 trades instead of just 134 like in this example. I anticipate most users will only have somewhere under 200 per year, but some users may have a couple thousand trades per year. Probably no more than 5,000 per year. It seems to work ok now, but the page load times are already getting a tad high for my tastes. (About 800ms to generate the page according to rails logs, with about 250ms of that spent in the view layer.) I will end up caching this page I am sure, but I still need to regenerate the page each time a trade is updated and I can't afford to have this be too slow. Sooo..... Is doing something similar here possible with a straight couchdb reduce function? I am assuming handing this off to couch would possibly help with larger data sets. I couldn't figure out how, but I suppose that doesn't mean it isn't possible. If possible, any hints will be helpful. Could I use a list function if a reduce was not available due to reduce constraints? Are couchdb list functions suitable for this type of calculation? Anyone have any idea of whether or not list functions perform well? Any hints on what one would look like for the type of calculations I am trying to achieve? I thought about other options, such as running the calculations at the time each trade is saved, or nightly if I had to, and saving the results to a statistics doc that I could then query so that all the processing was done ahead of time. I would like this to be the last resort because then I can't really filter out trades by time periods dynamically like I would really like to. (I want to have a slider that a user can slide to only show trades from that time period using the startkey and endkey in couchdb if I can.) If I should continue running the calculations inside the rails app at the time of the page view, what can I do to improve my current implementation? I am new to rails, couch and programming in general. I am sure that I could be doing something better here. 
Do I need to create an array for each stat, or is there a better way to do that? I guess I just would really like some advice on how to tackle this problem. I want to keep the page generation time minimal since I anticipate these being some of the highest-trafficked pages. My gut is that I will need to offload the statistics calculation to couch, or run the stats in advance of when they are called, but I am not sure. Lastly: Like I mentioned above, one of the primary reasons for using couch is to allow users to define their own things to track per trade. Getting the data into couch is no problem, but how would I be able to take the custom_tracking array and find how many winning trades there are for each named tracking attribute? If anyone can give me any hints on the possibility of doing this, that would be great. Thanks a bunch. Would really appreciate any help. Willing to fork out some $$$ if someone wants to take on the problem for me. (Don't know if that is allowed on stack overflow or not.)
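On the reduce question specifically: CouchDB's built-in reduce functions (_count, _sum, _stats) are computed incrementally and don't trip the reduce-overflow guard the way a hand-rolled reduce that returns a growing structure does. A sketch of one possible view, keeping the [system, created_at] key from the question so the startkey/endkey time slider still works (the view itself is an assumption, not the poster's code):

    // map: emit the pip result per trade
    function (doc) {
      if (doc["couchrest-type"] === "Trade") {
        emit([doc.system, doc.created_at], doc.result);
      }
    }

    // reduce: just the built-in
    _stats

Queried with group_level=1 (or a startkey/endkey range with reduce=true), _stats returns count, sum, min and max per system - i.e. Total Trades, Total Pips, Biggest Loser and Biggest Winner in one round trip. The winner/loser splits and the custom_tracking counts would need further views along the same lines (for example, emitting a key that includes the sign of result, or one row per tracking name/value pair); this is a sketch of the approach, not a drop-in replacement for all fourteen numbers.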

    Read the article

  • SINGLE SIGN ON SECURITY THREAT! FACEBOOK access_token broadcast in the open/clear

    - by MOKANA
    Subsequent to my posting there was a remark made that this was not really a question, but I thought I did indeed postulate one. So that there is no ambiguity, here is the question with a lead-in: Since there is no data sent from Facebook during the Canvas Load process that is not at some point divulged, including the access_token, session and other data that could uniquely identify a user, does anyone see any way, other than adding one more layer - i.e., a password sent over the wire via HTTPS along with the access_token - that will ensure unique, untampered-with security for the user? Using Wireshark I captured the local broadcast while loading my Canvas Application page. I was hugely surprised to see the access_token broadcast in the open, viewable for anyone to see. This access_token is appended to any https call to the Facebook OpenGraph API. Using Facebook as a single-click log-on has now raised huge concerns for me. It is stored in a session object in memory and the cookie is cleared upon app termination, and after reviewing the FB.Init calls I saw a lot of HTTPS calls, so I assumed the access_token was always encrypted. But last night I saw in the status bar what was simply an http call that included the App ID, so I felt I should sniff the Application Canvas load sequence. Today I did sniff the broadcast, and in the attached image you can see that there are http calls with the access_token being broadcast in the open and clear for anyone to gain access to. Am I missing something, or are what I am seeing and my interpretation really correct? If anyone can sniff and get the access_token they can theoretically make calls to the Graph API via https, even though the callback would still need to be the site established in Facebook's application set up. But what is truly a security threat is anyone using the access_token for access to their own site. I do not see the value of a single sign-on via Facebook if the only thing that was established as secure was the access_token - because from what I can see it clearly is not secure. Access tokens that never expire do not change. Access_tokens are different for every user, so access to another site could be held tight to just a single user, but compromising even a single user's data is unacceptable. http://www.creatingstory.com/images/InTheOpen.png Went back and did more research on this: FINDINGS: Went back and re-ran the canvas application to verify that it was not any of my code that was broadcasting. In this call: HTTP GET /connect.php/en_US/js/CacheData HTTP/1.1 The USER ID is clearly visible in the cookie. So USER_IDs are fully visible, but they effectively are already: anyone can go to pretty much anyone's page, hover over the image, and see the USER ID. So no big threat. APP_IDs are also easily obtainable - but . . . http://www.creatingstory.com/images/InTheOpen2.png The above file clearly shows the FULL ACCESS TOKEN clearly in the OPEN via a Facebook-initiated call. Am I wrong? TELL ME I AM WRONG, because I want to be wrong about this. I have since reset my app secret, so I am showing the real sniff of the Canvas Page being loaded. Additional data 02/20/2011: @ifaour - I appreciate the time you took to compile your response. I am pretty familiar with the OAuth process and have a pretty solid understanding of the signed_request unpacking and utilization of the access_token. 
I perform a substantial amount of my processing on the server, and my Facebook server-side flows are all complete and function without any flaw that I know of. The application secret is secure and never passed to the front end application, and is also changed regularly. I am being as fanatical about security as I can be, knowing there is so much I don't know that could come back and bite me. Two huge access_token issues: The issues concern the possible utilization of the access_token from the USER AGENT (browser). During the FB.INIT() process of the Facebook JavaScript SDK, a cookie is created as well as an object in memory called a session object. This object, along with the cookie, contains the access_token, session, a secret, the uid and the status of the connection. The session object is structured such that it supports both the new OAuth and the legacy flows. With OAuth, the access_token and status are pretty much all that is used in the session object. The first issue is that the access_token is used to make HTTPS calls to the GRAPH API. If you had the access_token, you could do this from any browser: https://graph.facebook.com/220439?access_token=... and it will return a ton of information about the user. So anyone with the access token can gain access to a Facebook account. You can also make additional calls for any info the user has granted the application tied to the access_token access to. At first I thought that a call into the GRAPH had to have a callback to the URL established in the App Setup, but I tested it as mentioned below and it will return info right into the browser. Adding that callback requirement would be a good idea, I think; it would tighten things up a bit. The second issue is utilization of some unique private secured data that identifies the user to the third-party database, i.e., like in my case, I would use a single sign-on to populate user information into my database using this unique secured data item (i.e., the access_token, which contains the APP ID, the USER ID, and a secret-hashed sequence). None of this is a problem on the server side. You get a signed_request, you unpack it with the secret, make HTTPS calls, get HTTPS responses back. When a user has information entered via the USER AGENT (browser) that must be stored via a POST, this unique secured data element would be sent via HTTPS such that it is validated prior to database insertion. However, if there is NO secured piece of unique data supplied via the single sign-on process, then there is no way to guard against unauthorized access. The access_token is the one piece of data that is utilized by Facebook to make the HTTPS calls into the GRAPH API. It is considered unique in regard to BOTH the USER and the APPLICATION, and is initially secure via the signed_request packaging. If, however, it is subsequently transmitted in the clear, and if I can sniff the wire and obtain the access_token, then I can pretend to be the application and gain the information the user has authorized the application to see. I tried the above example from a Safari and an IE browser and it returned all of my information to me in the browser. In conclusion, the access_token is part of the signed_request and that is how the application initially obtains it. 
After OAuth authentication and authorization - i.e., the USER has logged into Facebook and then runs your app - the access_token is stored as mentioned above, and I have sniffed it such that I see it stored in a cookie that is transmitted over the wire. The result is that there is NO UNIQUE, SECURED, IDENTIFIABLE piece of information that can be used to support interaction with the database; in other words, unless one more piece of secure data were sent along with the access_token to my database, i.e., a password, I would not be able to discern whether a call is legitimate. Luckily I utilized secure AJAX via POST and the call has to come from the same domain, but I am sure there is a way to hijack that. I am totally open to any ideas on how to uniquely identify my USERS other than adding another layer (a password) via this single sign-on process - or to someone simply showing me that I read and analyzed my data incorrectly and that the access_token is always secure over the wire. Mahalo nui loa in advance.
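For what it's worth, the standard way to get the "unique secured identifiable piece of information" the post is asking for is to trust only a server-side-verified signed_request, never the bearer access_token on its own, since anything the browser holds can leak. A minimal Python sketch of the documented HMAC-SHA256 unpacking (the function and variable names are mine):

    import base64
    import hashlib
    import hmac
    import json

    def parse_signed_request(signed_request, app_secret):
        """Verify and decode a Facebook signed_request."""
        encoded_sig, payload = signed_request.split(".", 1)

        def b64url(s):
            # URL-safe base64 without padding; restore the padding first
            return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

        data = json.loads(b64url(payload))
        if data.get("algorithm", "").upper() != "HMAC-SHA256":
            raise ValueError("unexpected algorithm")

        expected = hmac.new(app_secret.encode(), payload.encode(),
                            hashlib.sha256).digest()
        if not hmac.compare_digest(b64url(encoded_sig), expected):
            raise ValueError("bad signature - not from Facebook")

        return data  # includes user_id, oauth_token, etc.

Because only Facebook and the server share app_secret, a request that passes this check really did originate from Facebook for that user - a property a sniffed access_token alone cannot provide.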

    Read the article

  • How to solve "403 Forbidden" on CentOS6 with SELinux Disabled?

    - by André
    I have a machine on Linode that is driving me crazy. Linode does not have SELinux on CentOS6... I'm trying to configure Apache to serve my website from "/home/websites/public_html/mysite.com/public". As I don't have SELinux enabled, how can I avoid the "403 Forbidden" that I get when trying to access the webpage? Sorry for my English. Best Regards, Update1, ERROR_LOG [Mon Oct 17 14:04:16 2011] [error] [client 127.0.0.1] (13)Permission denied: access to / denied [Mon Oct 17 14:08:07 2011] [error] [client 127.0.0.1] (13)Permission denied: access to / denied [Mon Oct 17 14:10:25 2011] [error] [client 127.0.0.1] (13)Permission denied: access to / denied [Mon Oct 17 14:10:41 2011] [error] [client 127.0.0.1] (13)Permission denied: access to / denied [Mon Oct 17 14:32:35 2011] [error] [client 127.0.0.1] (13)Permission denied: access to / denied [Mon Oct 17 14:34:45 2011] [error] [client 58.218.199.227] (13)Permission denied: access to /proxy-1.php denied [Mon Oct 17 15:32:25 2011] [error] [client 127.0.0.1] (13)Permission denied: access to / denied [Mon Oct 17 15:37:26 2011] [error] [client 127.0.0.1] (13)Permission denied: access to / denied [Mon Oct 17 15:37:43 2011] [error] [client 127.0.0.1] (13)Permission denied: access to / denied [Mon Oct 17 15:38:32 2011] [error] [client 127.0.0.1] (13)Permission denied: access to / denied [Mon Oct 17 15:42:56 2011] [crit] [client 127.0.0.1] (13)Permission denied: /home/websites/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable [Mon Oct 17 15:43:12 2011] [crit] [client 127.0.0.1] (13)Permission denied: /home/websites/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable [Mon Oct 17 15:45:34 2011] [crit] [client 127.0.0.1] (13)Permission denied: /home/websites/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable [Mon Oct 17 15:51:25 2011] [crit] [client 127.0.0.1] (13)Permission denied: /home/websites/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable Update2, /home/websites directory drwx------ 3 websites websites 4096 Oct 17 14:52 . drwxr-xr-x. 3 root root 4096 Oct 17 13:42 .. 
-rw------- 1 websites websites 372 Oct 17 14:52 .bash_history -rw-r--r-- 1 websites websites 18 May 30 11:46 .bash_logout -rw-r--r-- 1 websites websites 176 May 30 11:46 .bash_profile -rw-r--r-- 1 websites websites 124 May 30 11:46 .bashrc drwxrwxr-x 3 websites apache 4096 Oct 17 13:45 public_html Update3, httpd.conf ### Section 1: Global Environment ServerTokens OS ServerRoot "/etc/httpd" PidFile run/httpd.pid Timeout 60 KeepAlive Off MaxKeepAliveRequests 100 KeepAliveTimeout 15 <IfModule prefork.c> StartServers 8 MinSpareServers 5 MaxSpareServers 20 ServerLimit 256 MaxClients 256 MaxRequestsPerChild 4000 </IfModule> <IfModule worker.c> StartServers 4 MaxClients 300 MinSpareThreads 25 MaxSpareThreads 75 ThreadsPerChild 25 MaxRequestsPerChild 0 </IfModule> #Listen 12.34.56.78:80 Listen 80 LoadModule auth_basic_module modules/mod_auth_basic.so LoadModule auth_digest_module modules/mod_auth_digest.so LoadModule authn_file_module modules/mod_authn_file.so LoadModule authn_alias_module modules/mod_authn_alias.so LoadModule authn_anon_module modules/mod_authn_anon.so LoadModule authn_dbm_module modules/mod_authn_dbm.so LoadModule authn_default_module modules/mod_authn_default.so LoadModule authz_host_module modules/mod_authz_host.so LoadModule authz_user_module modules/mod_authz_user.so LoadModule authz_owner_module modules/mod_authz_owner.so LoadModule authz_groupfile_module modules/mod_authz_groupfile.so LoadModule authz_dbm_module modules/mod_authz_dbm.so LoadModule authz_default_module modules/mod_authz_default.so LoadModule ldap_module modules/mod_ldap.so LoadModule authnz_ldap_module modules/mod_authnz_ldap.so LoadModule include_module modules/mod_include.so LoadModule log_config_module modules/mod_log_config.so LoadModule logio_module modules/mod_logio.so LoadModule env_module modules/mod_env.so LoadModule ext_filter_module modules/mod_ext_filter.so LoadModule mime_magic_module modules/mod_mime_magic.so LoadModule expires_module modules/mod_expires.so LoadModule deflate_module modules/mod_deflate.so LoadModule headers_module modules/mod_headers.so LoadModule usertrack_module modules/mod_usertrack.so LoadModule setenvif_module modules/mod_setenvif.so LoadModule mime_module modules/mod_mime.so LoadModule dav_module modules/mod_dav.so LoadModule status_module modules/mod_status.so LoadModule autoindex_module modules/mod_autoindex.so LoadModule info_module modules/mod_info.so LoadModule dav_fs_module modules/mod_dav_fs.so LoadModule vhost_alias_module modules/mod_vhost_alias.so LoadModule negotiation_module modules/mod_negotiation.so LoadModule dir_module modules/mod_dir.so LoadModule actions_module modules/mod_actions.so LoadModule speling_module modules/mod_speling.so LoadModule userdir_module modules/mod_userdir.so LoadModule alias_module modules/mod_alias.so LoadModule substitute_module modules/mod_substitute.so LoadModule rewrite_module modules/mod_rewrite.so LoadModule proxy_module modules/mod_proxy.so LoadModule proxy_balancer_module modules/mod_proxy_balancer.so LoadModule proxy_ftp_module modules/mod_proxy_ftp.so LoadModule proxy_http_module modules/mod_proxy_http.so LoadModule proxy_ajp_module modules/mod_proxy_ajp.so LoadModule proxy_connect_module modules/mod_proxy_connect.so LoadModule cache_module modules/mod_cache.so LoadModule suexec_module modules/mod_suexec.so LoadModule disk_cache_module modules/mod_disk_cache.so LoadModule cgi_module modules/mod_cgi.so LoadModule version_module modules/mod_version.so Include conf.d/*.conf #ExtendedStatus On User apache Group apache 
ServerAdmin root@localhost #ServerName www.example.com:80 UseCanonicalName Off DocumentRoot "/var/www/html" # # Each directory to which Apache has access can be configured with respect # to which services and features are allowed and/or disabled in that # directory (and its subdirectories). # # First, we configure the "default" to be a very restrictive set of # features. # <Directory /> Options FollowSymLinks AllowOverride None </Directory> # # Note that from this point forward you must specifically allow # particular features to be enabled - so if something's not working as # you might expect, make sure that you have specifically enabled it # below. # # # This should be changed to whatever you set DocumentRoot to. # <Directory "/home/websites/public_html"> # # Possible values for the Options directive are "None", "All", # or any combination of: # Indexes Includes FollowSymLinks SymLinksifOwnerMatch ExecCGI MultiViews # # Note that "MultiViews" must be named *explicitly* --- "Options All" # doesn't give it to you. # # The Options directive is both complicated and important. Please see # http://httpd.apache.org/docs/2.2/mod/core.html#options # for more information. # Options Indexes FollowSymLinks # # AllowOverride controls what directives may be placed in .htaccess files. # It can be "All", "None", or any combination of the keywords: # Options FileInfo AuthConfig Limit # AllowOverride None # # Controls who can get stuff from this server. # Order allow,deny Allow from all </Directory> # # UserDir: The name of the directory that is appended onto a user's home # directory if a ~user request is received. # # The path to the end user account 'public_html' directory must be # accessible to the webserver userid. This usually means that ~userid # must have permissions of 711, ~userid/public_html must have permissions # of 755, and documents contained therein must be world-readable. # Otherwise, the client will only receive a "403 Forbidden" message. # # See also: http://httpd.apache.org/docs/misc/FAQ.html#forbidden # <IfModule mod_userdir.c> # # UserDir is disabled by default since it can confirm the presence # of a username on the system (depending on home directory # permissions). # UserDir disabled # # To enable requests to /~user/ to serve the user's public_html # directory, remove the "UserDir disabled" line above, and uncomment # the following line instead: # #UserDir public_html </IfModule> # # Control access to UserDir directories. The following is an example # for a site where these directories are restricted to read-only. # #<Directory /home/*/public_html> # AllowOverride FileInfo AuthConfig Limit # Options MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec # <Limit GET POST OPTIONS> # Order allow,deny # Allow from all # </Limit> # <LimitExcept GET POST OPTIONS> # Order deny,allow # Deny from all # </LimitExcept> #</Directory> # # DirectoryIndex: sets the file that Apache will serve if a directory # is requested. # # The index.html.var file (a type-map) is used to deliver content- # negotiated documents. The MultiViews Option can be used for the # same purpose, but it is much slower. # DirectoryIndex index.html index.html.var # # AccessFileName: The name of the file to look for in each directory # for additional configuration directives. See also the AllowOverride # directive. # AccessFileName .htaccess # # The following lines prevent .htaccess and .htpasswd files from being # viewed by Web clients. 
# <Files ~ "^\.ht"> Order allow,deny Deny from all Satisfy All </Files> # # TypesConfig describes where the mime.types file (or equivalent) is # to be found. # TypesConfig /etc/mime.types # # DefaultType is the default MIME type the server will use for a document # if it cannot otherwise determine one, such as from filename extensions. # If your server contains mostly text or HTML documents, "text/plain" is # a good value. If most of your content is binary, such as applications # or images, you may want to use "application/octet-stream" instead to # keep browsers from trying to display binary files as though they are # text. # DefaultType text/plain # # The mod_mime_magic module allows the server to use various hints from the # contents of the file itself to determine its type. The MIMEMagicFile # directive tells the module where the hint definitions are located. # <IfModule mod_mime_magic.c> # MIMEMagicFile /usr/share/magic.mime MIMEMagicFile conf/magic </IfModule> # # HostnameLookups: Log the names of clients or just their IP addresses # e.g., www.apache.org (on) or 204.62.129.132 (off). # The default is off because it'd be overall better for the net if people # had to knowingly turn this feature on, since enabling it means that # each client request will result in AT LEAST one lookup request to the # nameserver. # HostnameLookups Off #EnableMMAP off #EnableSendfile off # # ErrorLog: The location of the error log file. # If you do not specify an ErrorLog directive within a <VirtualHost> # container, error messages relating to that virtual host will be # logged here. If you *do* define an error logfile for a <VirtualHost> # container, that host's errors will be logged there and not here. # ErrorLog logs/error_log LogLevel warn # # The following directives define some format nicknames for use with # a CustomLog directive (see below). # LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %b" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent # "combinedio" includes actual counts of actual bytes received (%I) and sent (%O); this # requires the mod_logio module to be loaded. #LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio # # The location and format of the access logfile (Common Logfile Format). # If you do not define any access logfiles within a <VirtualHost> # container, they will be logged here. Contrariwise, if you *do* # define per-<VirtualHost> access logfiles, transactions will be # logged therein and *not* in this file. # #CustomLog logs/access_log common # # If you would like to have separate agent and referer logfiles, uncomment # the following directives. # #CustomLog logs/referer_log referer #CustomLog logs/agent_log agent # # For a single logfile with access, agent, and referer information # (Combined Logfile Format), use the following directive: # CustomLog logs/access_log combined ServerSignature On Alias /icons/ "/var/www/icons/" <Directory "/var/www/icons"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order allow,deny Allow from all </Directory> # # WebDAV module configuration section. # <IfModule mod_dav_fs.c> # Location of the WebDAV lock database. DAVLockDB /var/lib/dav/lockdb </IfModule> # # ScriptAlias: This controls which directories contain server scripts. 
# ScriptAliases are essentially the same as Aliases, except that # documents in the realname directory are treated as applications and # run by the server when requested rather than as documents sent to the client. # The same rules about trailing "/" apply to ScriptAlias directives as to # Alias. # ScriptAlias /cgi-bin/ "/var/www/cgi-bin/" # # "/var/www/cgi-bin" should be changed to whatever your ScriptAliased # CGI directory exists, if you have that configured. # <Directory "/var/www/cgi-bin"> AllowOverride None Options None Order allow,deny Allow from all </Directory> IndexOptions FancyIndexing VersionSort NameWidth=* HTMLTable Charset=UTF-8 AddIconByEncoding (CMP,/icons/compressed.gif) x-compress x-gzip AddIconByType (TXT,/icons/text.gif) text/* AddIconByType (IMG,/icons/image2.gif) image/* AddIconByType (SND,/icons/sound2.gif) audio/* AddIconByType (VID,/icons/movie.gif) video/* AddIcon /icons/binary.gif .bin .exe AddIcon /icons/binhex.gif .hqx AddIcon /icons/tar.gif .tar AddIcon /icons/world2.gif .wrl .wrl.gz .vrml .vrm .iv AddIcon /icons/compressed.gif .Z .z .tgz .gz .zip AddIcon /icons/a.gif .ps .ai .eps AddIcon /icons/layout.gif .html .shtml .htm .pdf AddIcon /icons/text.gif .txt AddIcon /icons/c.gif .c AddIcon /icons/p.gif .pl .py AddIcon /icons/f.gif .for AddIcon /icons/dvi.gif .dvi AddIcon /icons/uuencoded.gif .uu AddIcon /icons/script.gif .conf .sh .shar .csh .ksh .tcl AddIcon /icons/tex.gif .tex AddIcon /icons/bomb.gif core AddIcon /icons/back.gif .. AddIcon /icons/hand.right.gif README AddIcon /icons/folder.gif ^^DIRECTORY^^ AddIcon /icons/blank.gif ^^BLANKICON^^ # # DefaultIcon is which icon to show for files which do not have an icon # explicitly set. # DefaultIcon /icons/unknown.gif # # AddDescription allows you to place a short description after a file in # server-generated indexes. These are only displayed for FancyIndexed # directories. # Format: AddDescription "description" filename # #AddDescription "GZIP compressed document" .gz #AddDescription "tar archive" .tar #AddDescription "GZIP compressed tar archive" .tgz # # ReadmeName is the name of the README file the server will look for by # default, and append to directory listings. # # HeaderName is the name of a file which should be prepended to # directory indexes. ReadmeName README.html HeaderName HEADER.html # # IndexIgnore is a set of filenames which directory indexing should ignore # and not include in the listing. Shell-style wildcarding is permitted. # IndexIgnore .??* *~ *# HEADER* README* RCS CVS *,v *,t # # DefaultLanguage and AddLanguage allows you to specify the language of # a document. You can then use content negotiation to give a browser a # file in a language the user can understand. # # Specify a default language. This means that all data # going out without a specific language tag (see below) will # be marked with this one. You probably do NOT want to set # this unless you are sure it is correct for all cases. # # * It is generally better to not mark a page as # * being a certain language than marking it with the wrong # * language! # # DefaultLanguage nl # # Note 1: The suffix does not have to be the same as the language # keyword --- those with documents in Polish (whose net-standard # language code is pl) may wish to use "AddLanguage pl .po" to # avoid the ambiguity with the common suffix for perl scripts. # # Note 2: The example entries below illustrate that in some cases # the two character 'Language' abbreviation is not identical to # the two character 'Country' code for its country, # E.g. 
'Danmark/dk' versus 'Danish/da'. # # Note 3: In the case of 'ltz' we violate the RFC by using a three char # specifier. There is 'work in progress' to fix this and get # the reference data for rfc1766 cleaned up. # # Catalan (ca) - Croatian (hr) - Czech (cs) - Danish (da) - Dutch (nl) # English (en) - Esperanto (eo) - Estonian (et) - French (fr) - German (de) # Greek-Modern (el) - Hebrew (he) - Italian (it) - Japanese (ja) # Korean (ko) - Luxembourgeois* (ltz) - Norwegian Nynorsk (nn) # Norwegian (no) - Polish (pl) - Portugese (pt) # Brazilian Portuguese (pt-BR) - Russian (ru) - Swedish (sv) # Simplified Chinese (zh-CN) - Spanish (es) - Traditional Chinese (zh-TW) # AddLanguage ca .ca AddLanguage cs .cz .cs AddLanguage da .dk AddLanguage de .de AddLanguage el .el AddLanguage en .en AddLanguage eo .eo AddLanguage es .es AddLanguage et .et AddLanguage fr .fr AddLanguage he .he AddLanguage hr .hr AddLanguage it .it AddLanguage ja .ja AddLanguage ko .ko AddLanguage ltz .ltz AddLanguage nl .nl AddLanguage nn .nn AddLanguage no .no AddLanguage pl .po AddLanguage pt .pt AddLanguage pt-BR .pt-br AddLanguage ru .ru AddLanguage sv .sv AddLanguage zh-CN .zh-cn AddLanguage zh-TW .zh-tw # # LanguagePriority allows you to give precedence to some languages # in case of a tie during content negotiation. # # Just list the languages in decreasing order of preference. We have # more or less alphabetized them here. You probably want to change this. # LanguagePriority en ca cs da de el eo es et fr he hr it ja ko ltz nl nn no pl pt pt-BR ru sv zh-CN zh-TW # # ForceLanguagePriority allows you to serve a result page rather than # MULTIPLE CHOICES (Prefer) [in case of a tie] or NOT ACCEPTABLE (Fallback) # [in case no accepted languages matched the available variants] # ForceLanguagePriority Prefer Fallback # # Specify a default charset for all content served; this enables # interpretation of all content as UTF-8 by default. To use the # default browser choice (ISO-8859-1), or to allow the META tags # in HTML content to override this choice, comment out this # directive: # AddDefaultCharset UTF-8 # # AddType allows you to add to or override the MIME configuration # file mime.types for specific file types. # #AddType application/x-tar .tgz # # AddEncoding allows you to have certain browsers uncompress # information on the fly. Note: Not all browsers support this. # Despite the name similarity, the following Add* directives have nothing # to do with the FancyIndexing customization directives above. # #AddEncoding x-compress .Z #AddEncoding x-gzip .gz .tgz # If the AddEncoding directives above are commented-out, then you # probably should define those extensions to indicate media types: # AddType application/x-compress .Z AddType application/x-gzip .gz .tgz # # MIME-types for downloading Certificates and CRLs # AddType application/x-x509-ca-cert .crt AddType application/x-pkcs7-crl .crl # # AddHandler allows you to map certain file extensions to "handlers": # actions unrelated to filetype. These can be either built into the server # or added with the Action directive (see below) # # To use CGI scripts outside of ScriptAliased directories: # (You will also need to add "ExecCGI" to the "Options" directive.) # #AddHandler cgi-script .cgi # # For files that include their own HTTP headers: # #AddHandler send-as-is asis # # For type maps (negotiated resources): # (This is enabled by default to allow the Apache "It Worked" page # to be distributed in multiple languages.) 
# AddHandler type-map var # # Filters allow you to process content before it is sent to the client. # # To parse .shtml files for server-side includes (SSI): # (You will also need to add "Includes" to the "Options" directive.) # AddType text/html .shtml AddOutputFilter INCLUDES .shtml # # Action lets you define media types that will execute a script whenever # a matching file is called. This eliminates the need for repeated URL # pathnames for oft-used CGI file processors. # Format: Action media/type /cgi-script/location # Format: Action handler-name /cgi-script/location # # # Customizable error responses come in three flavors: # 1) plain text 2) local redirects 3) external redirects # # Some examples: #ErrorDocument 500 "The server made a boo boo." #ErrorDocument 404 /missing.html #ErrorDocument 404 "/cgi-bin/missing_handler.pl" #ErrorDocument 402 http://www.example.com/subscription_info.html # # # Putting this all together, we can internationalize error responses. # # We use Alias to redirect any /error/HTTP_<error>.html.var response to # our collection of by-error message multi-language collections. We use # includes to substitute the appropriate text. # # You can modify the messages' appearance without changing any of the # default HTTP_<error>.html.var files by adding the line: # # Alias /error/include/ "/your/include/path/" # # which allows you to create your own set of files by starting with the # /var/www/error/include/ files and # copying them to /your/include/path/, even on a per-VirtualHost basis. # Alias /error/ "/var/www/error/" <IfModule mod_negotiation.c> <IfModule mod_include.c> <Directory "/var/www/error"> AllowOverride None Options IncludesNoExec AddOutputFilter Includes html AddHandler type-map var Order allow,deny Allow from all LanguagePriority en es de fr ForceLanguagePriority Prefer Fallback </Directory> # ErrorDocument 400 /error/HTTP_BAD_REQUEST.html.var # ErrorDocument 401 /error/HTTP_UNAUTHORIZED.html.var # ErrorDocument 403 /error/HTTP_FORBIDDEN.html.var # ErrorDocument 404 /error/HTTP_NOT_FOUND.html.var # ErrorDocument 405 /error/HTTP_METHOD_NOT_ALLOWED.html.var # ErrorDocument 408 /error/HTTP_REQUEST_TIME_OUT.html.var # ErrorDocument 410 /error/HTTP_GONE.html.var # ErrorDocument 411 /error/HTTP_LENGTH_REQUIRED.html.var # ErrorDocument 412 /error/HTTP_PRECONDITION_FAILED.html.var # ErrorDocument 413 /error/HTTP_REQUEST_ENTITY_TOO_LARGE.html.var # ErrorDocument 414 /error/HTTP_REQUEST_URI_TOO_LARGE.html.var # ErrorDocument 415 /error/HTTP_UNSUPPORTED_MEDIA_TYPE.html.var # ErrorDocument 500 /error/HTTP_INTERNAL_SERVER_ERROR.html.var # ErrorDocument 501 /error/HTTP_NOT_IMPLEMENTED.html.var # ErrorDocument 502 /error/HTTP_BAD_GATEWAY.html.var # ErrorDocument 503 /error/HTTP_SERVICE_UNAVAILABLE.html.var # ErrorDocument 506 /error/HTTP_VARIANT_ALSO_VARIES.html.var </IfModule> </IfModule> # # The following directives modify normal HTTP response behavior to # handle known problems with browser implementations. # BrowserMatch "Mozilla/2" nokeepalive BrowserMatch "MSIE 4\.0b2;" nokeepalive downgrade-1.0 force-response-1.0 BrowserMatch "RealPlayer 4\.0" force-response-1.0 BrowserMatch "Java/1\.0" force-response-1.0 BrowserMatch "JDK/1\.0" force-response-1.0 # # The following directive disables redirects on non-GET requests for # a directory that does not include the trailing slash. This fixes a # problem with Microsoft WebFolders which does not appropriately handle # redirects for folders with DAV methods. 
# Same deal with Apple's DAV filesystem and Gnome VFS support for DAV. # BrowserMatch "Microsoft Data Access Internet Publishing Provider" redirect-carefully BrowserMatch "MS FrontPage" redirect-carefully BrowserMatch "^WebDrive" redirect-carefully BrowserMatch "^WebDAVFS/1.[0123]" redirect-carefully BrowserMatch "^gnome-vfs/1.0" redirect-carefully BrowserMatch "^XML Spy" redirect-carefully BrowserMatch "^Dreamweaver-WebDAV-SCM1" redirect-carefully # # Allow server status reports generated by mod_status, # with the URL of http://servername/server-status # Change the ".example.com" to match your domain to enable. # #<Location /server-status> # SetHandler server-status # Order deny,allow # Deny from all # Allow from .example.com #</Location> # # Allow remote server configuration reports, with the URL of # http://servername/server-info (requires that mod_info.c be loaded). # Change the ".example.com" to match your domain to enable. # #<Location /server-info> # SetHandler server-info # Order deny,allow # Deny from all # Allow from .example.com #</Location> # # Proxy Server directives. Uncomment the following lines to # enable the proxy server: # #<IfModule mod_proxy.c> #ProxyRequests On # #<Proxy *> # Order deny,allow # Deny from all # Allow from .example.com #</Proxy> # # Enable/disable the handling of HTTP/1.1 "Via:" headers. # ("Full" adds the server version; "Block" removes all outgoing Via: headers) # Set to one of: Off | On | Full | Block # #ProxyVia On # # To enable a cache of proxied content, uncomment the following lines. # See http://httpd.apache.org/docs/2.2/mod/mod_cache.html for more details. # #<IfModule mod_disk_cache.c> # CacheEnable disk / # CacheRoot "/var/cache/mod_proxy" #</IfModule> # #</IfModule> # End of proxy directives. ### Section 3: Virtual Hosts # # VirtualHost: If you want to maintain multiple domains/hostnames on your # machine you can setup VirtualHost containers for them. Most configurations # use only name-based virtual hosts so the server doesn't need to worry about # IP addresses. This is indicated by the asterisks in the directives below. # # Please see the documentation at # <URL:http://httpd.apache.org/docs/2.2/vhosts/> # for further details before you try to setup virtual hosts. # # You may use the command line option '-S' to verify your virtual host # configuration. # # Use name-based virtual hosting. # NameVirtualHost *:80 # # NOTE: NameVirtualHost cannot be used without a port specifier # (e.g. :80) if mod_ssl is being used, due to the nature of the # SSL protocol. # # # VirtualHost example: # Almost any Apache directive may go into a VirtualHost container. # The first VirtualHost section is used for requests without a known # server name. # #<VirtualHost *:80> # ServerAdmin [email protected] # DocumentRoot /www/docs/dummy-host.example.com # ServerName dummy-host.example.com # ErrorLog logs/dummy-host.example.com-error_log # CustomLog logs/dummy-host.example.com-access_log common #</VirtualHost> # domain: mysite.com # public: /home/websites/public_html/mysite.com/ <VirtualHost *:80> # Admin email, Server Name (domain name) and any aliases ServerAdmin [email protected] ServerName mysite.com ServerAlias www.mysite.com # Index file and Document Root (where the public files are located) DirectoryIndex index.html DocumentRoot /home/websites/public_html/mysite.com/public # Custom log file locations LogLevel warn ErrorLog /home/websites/public_html/mysite.com/log/error.log CustomLog /home/websites/public_html/mysite.com/log/access.log combined </VirtualHost>
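The error log above actually points at the likely cause: apache cannot even open /home/websites/.htaccess, and the Update2 listing shows /home/websites is mode 700 (drwx------), so the apache user cannot traverse into public_html at all, SELinux or not. A sketch of the usual fix, using the paths from the question (adjust ownership/group choices to taste):

    # every directory from / down to DocumentRoot needs execute (x) for apache;
    # 711 lets apache traverse while keeping the home directory unreadable
    chmod 711 /home/websites
    chmod 755 /home/websites/public_html
    # files must be world-readable; capital X adds execute to directories only
    chmod -R o+rX /home/websites/public_html/mysite.com
    # then reload apache
    service httpd restart

Note also that the vhost writes ErrorLog/CustomLog into .../mysite.com/log/, so that directory has to exist before apache will start.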

    Read the article

  • Squid + Dans Guardian (simple configuration)

    - by The Digital Ninja
    I just built a new proxy server and compiled the latest versions of squid and dansguardian. We use basic authentication to select which users are allowed outside of our network. It seems squid is working just fine and accepts my username and password and lets me out. But if I connect through DansGuardian, it prompts for username and password and then displays a message saying my username is not allowed to access the internet. It's pulling my username for the error message, so I know it knows who I am. The part I get confused on is that I thought that part was handled entirely by squid, and squid is working flawlessly. Can someone please double-check my config files and tell me if I'm missing something, or if there is some new option I must set to get this to work? dansguardian.conf # Web Access Denied Reporting (does not affect logging) # # -1 = log, but do not block - Stealth mode # 0 = just say 'Access Denied' # 1 = report why but not what denied phrase # 2 = report fully # 3 = use HTML template file (accessdeniedaddress ignored) - recommended # reportinglevel = 3 # Language dir where languages are stored for internationalisation. # The HTML template within this dir is only used when reportinglevel # is set to 3. When used, DansGuardian will display the HTML file instead of # using the perl cgi script. This option is faster, cleaner # and easier to customise the access denied page. # The language file is used no matter what setting however. # languagedir = '/etc/dansguardian/languages' # language to use from languagedir. language = 'ukenglish' # Logging Settings # # 0 = none 1 = just denied 2 = all text based 3 = all requests loglevel = 3 # Log Exception Hits # Log if an exception (user, ip, URL, phrase) is matched and so # the page gets let through. Can be useful for diagnosing # why a site gets through the filter. on | off logexceptionhits = on # Log File Format # 1 = DansGuardian format 2 = CSV-style format # 3 = Squid Log File Format 4 = Tab delimited logfileformat = 1 # Log file location # # Defines the log directory and filename. #loglocation = '/var/log/dansguardian/access.log' # Network Settings # # the IP that DansGuardian listens on. If left blank DansGuardian will # listen on all IPs. That would include all NICs, loopback, modem, etc. # Normally you would have your firewall protecting this, but if you want # you can limit it to only 1 IP. Yes only one. filterip = # the port that DansGuardian listens to. filterport = 8080 # the ip of the proxy (default is the loopback - i.e. this server) proxyip = 127.0.0.1 # the port DansGuardian connects to proxy on proxyport = 3128 # accessdeniedaddress is the address of your web server to which the cgi # dansguardian reporting script was copied # Do NOT change from the default if you are not using the cgi. # accessdeniedaddress = 'http://YOURSERVER.YOURDOMAIN/cgi-bin/dansguardian.pl' # Non standard delimiter (only used with accessdeniedaddress) # Default is enabled but to go back to the original standard mode dissable it. nonstandarddelimiter = on # Banned image replacement # Images that are banned due to domain/url/etc reasons including those # in the adverts blacklists can be replaced by an image. This will, # for example, hide images from advert sites and remove broken image # icons from banned domains. # 0 = off # 1 = on (default) usecustombannedimage = 1 custombannedimagefile = '/etc/dansguardian/transparent1x1.gif' # Filter groups options # filtergroups sets the number of filter groups. 
A filter group is a set of content # filtering options you can apply to a group of users. The value must be 1 or more. # DansGuardian will automatically look for dansguardianfN.conf where N is the filter # group. To assign users to groups use the filtergroupslist option. All users default # to filter group 1. You must have some sort of authentication to be able to map users # to a group. The more filter groups the more copies of the lists will be in RAM so # use as few as possible. filtergroups = 1 filtergroupslist = '/etc/dansguardian/filtergroupslist' # Authentication files location bannediplist = '/etc/dansguardian/bannediplist' exceptioniplist = '/etc/dansguardian/exceptioniplist' banneduserlist = '/etc/dansguardian/banneduserlist' exceptionuserlist = '/etc/dansguardian/exceptionuserlist' # Show weighted phrases found # If enabled then the phrases found that made up the total which excedes # the naughtyness limit will be logged and, if the reporting level is # high enough, reported. on | off showweightedfound = on # Weighted phrase mode # There are 3 possible modes of operation: # 0 = off = do not use the weighted phrase feature. # 1 = on, normal = normal weighted phrase operation. # 2 = on, singular = each weighted phrase found only counts once on a page. # weightedphrasemode = 2 # Positive result caching for text URLs # Caches good pages so they don't need to be scanned again # 0 = off (recommended for ISPs with users with disimilar browsing) # 1000 = recommended for most users # 5000 = suggested max upper limit urlcachenumber = # # Age before they are stale and should be ignored in seconds # 0 = never # 900 = recommended = 15 mins urlcacheage = # Smart and Raw phrase content filtering options # Smart is where the multiple spaces and HTML are removed before phrase filtering # Raw is where the raw HTML including meta tags are phrase filtered # CPU usage can be effectively halved by using setting 0 or 1 # 0 = raw only # 1 = smart only # 2 = both (default) phrasefiltermode = 2 # Lower casing options # When a document is scanned the uppercase letters are converted to lower case # in order to compare them with the phrases. However this can break Big5 and # other 16-bit texts. If needed preserve the case. As of version 2.7.0 accented # characters are supported. # 0 = force lower case (default) # 1 = do not change case preservecase = 0 # Hex decoding options # When a document is scanned it can optionally convert %XX to chars. # If you find documents are getting past the phrase filtering due to encoding # then enable. However this can break Big5 and other 16-bit texts. # 0 = disabled (default) # 1 = enabled hexdecodecontent = 0 # Force Quick Search rather than DFA search algorithm # The current DFA implementation is not totally 16-bit character compatible # but is used by default as it handles large phrase lists much faster. # If you wish to use a large number of 16-bit character phrases then # enable this option. # 0 = off (default) # 1 = on (Big5 compatible) forcequicksearch = 0 # Reverse lookups for banned site and URLs. # If set to on, DansGuardian will look up the forward DNS for an IP URL # address and search for both in the banned site and URL lists. This would # prevent a user from simply entering the IP for a banned address. # It will reduce searching speed somewhat so unless you have a local caching # DNS server, leave it off and use the Blanket IP Block option in the # bannedsitelist file instead. reverseaddresslookups = off # Reverse lookups for banned and exception IP lists. 
# If set to on, DansGuardian will look up the forward DNS for the IP # of the connecting computer. This means you can put in hostnames in # the exceptioniplist and bannediplist. # It will reduce searching speed somewhat so unless you have a local DNS server, # leave it off. reverseclientiplookups = off # Build bannedsitelist and bannedurllist cache files. # This will compare the date stamp of the list file with the date stamp of # the cache file and will recreate as needed. # If a bsl or bul .processed file exists, then that will be used instead. # It will increase process start speed by 300%. On slow computers this will # be significant. Fast computers do not need this option. on | off createlistcachefiles = on # POST protection (web upload and forms) # does not block forms without any file upload, i.e. this is just for # blocking or limiting uploads # measured in kibibytes after MIME encoding and header bumph # use 0 for a complete block # use higher (e.g. 512 = 512Kbytes) for limiting # use -1 for no blocking #maxuploadsize = 512 #maxuploadsize = 0 maxuploadsize = -1 # Max content filter page size # Sometimes web servers label binary files as text which can be very # large which causes a huge drain on memory and cpu resources. # To counter this, you can limit the size of the document to be # filtered and get it to just pass it straight through. # This setting also applies to content regular expression modification. # The size is in Kibibytes - eg 2048 = 2Mb # use 0 for no limit maxcontentfiltersize = # Username identification methods (used in logging) # You can have as many methods as you want and not just one. The first one # will be used then if no username is found, the next will be used. # * proxyauth is for when basic proxy authentication is used (no good for # transparent proxying). # * ntlm is for when the proxy supports the MS NTLM authentication # protocol. (Only works with IE5.5 sp1 and later). **NOT IMPLEMENTED** # * ident is for when the others don't work. It will contact the computer # that the connection came from and try to connect to an identd server # and query it for the user owner of the connection. usernameidmethodproxyauth = on usernameidmethodntlm = off # **NOT IMPLEMENTED** usernameidmethodident = off # Preemptive banning - this means that if you have proxy auth enabled and a user accesses # a site banned by URL for example they will be denied straight away without a request # for their user and pass. This has the effect of requiring the user to visit a clean # site first before it knows who they are and thus maybe an admin user. # This is how DansGuardian has always worked but in some situations it is less than # ideal. So you can optionally disable it. Default is on. # As a side effect disabling this makes AD image replacement work better as the mime # type is know. preemptivebanning = on # Misc settings # if on it adds an X-Forwarded-For: <clientip> to the HTTP request # header. This may help solve some problem sites that need to know the # source ip. on | off forwardedfor = on # if on it uses the X-Forwarded-For: <clientip> to determine the client # IP. This is for when you have squid between the clients and DansGuardian. # Warning - headers are easily spoofed. on | off usexforwardedfor = off # if on it logs some debug info regarding fork()ing and accept()ing which # can usually be ignored. These are logged by syslog. 
# It is safe to leave it on or off.
logconnectionhandlingerrors = on

# Fork pool options

# Sets the maximum number of processes to spawn to handle the incoming
# connections. Max value usually 250 depending on OS.
# On large sites you might want to try 180.
maxchildren = 180

# Sets the minimum number of processes to spawn to handle the incoming connections.
# On large sites you might want to try 32.
minchildren = 32

# Sets the minimum number of processes to be kept ready to handle connections.
# On large sites you might want to try 8.
minsparechildren = 8

# Sets the minimum number of processes to spawn when it runs out.
# On large sites you might want to try 10.
preforkchildren = 10

# Sets the maximum number of processes to have doing nothing.
# When this many are spare it will cull some of them.
# On large sites you might want to try 64.
maxsparechildren = 64

# Sets the maximum age of a child process before it croaks it.
# This is the number of connections they handle before exiting.
# On large sites you might want to try 10000.
maxagechildren = 5000

# Process options
# (Change these only if you really know what you are doing.)
# These options allow you to run multiple instances of DansGuardian on a single machine.
# Remember to edit the log file path above also if that is your intention.

# IPC filename
#
# Defines the IPC server directory and filename used to communicate with the log process.
ipcfilename = '/tmp/.dguardianipc'

# URL list IPC filename
#
# Defines the URL list IPC server directory and filename used to communicate with the URL
# cache process.
urlipcfilename = '/tmp/.dguardianurlipc'

# PID filename
#
# Defines the process id directory and filename.
#pidfilename = '/var/run/dansguardian.pid'

# Disable daemoning
# If enabled the process will not fork into the background.
# It is not usually advantageous to do this.
# on|off (defaults to off)
nodaemon = off

# Disable logging process
# on|off (defaults to off)
nologger = off

# Daemon runas user and group
# This is the user that DansGuardian runs as. Normally the user/group nobody.
# Uncomment to use. Defaults to the user set at compile time.
# daemonuser = 'nobody'
# daemongroup = 'nobody'

# Soft restart
# When on, this disables the forced killing of all processes in the process group.
# This is not to be confused with the -g run time option - they are not related.
# on|off (defaults to off)
softrestart = off

maxcontentramcachescansize = 2000
maxcontentfilecachescansize = 20000
downloadmanager = '/etc/dansguardian/downloadmanagers/default.conf'
authplugin = '/etc/dansguardian/authplugins/proxy-basic.conf'

Squid.conf

http_port 3128
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
acl apache rep_header Server ^Apache
#broken_vary_encoding allow apache
access_log /squid/var/logs/access.log squid
hosts_file /etc/hosts
auth_param basic program /squid/libexec/ncsa_auth /squid/etc/userbasic.auth
auth_param basic children 5
auth_param basic realm proxy
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern . 0 20% 4320
acl NoAuthNec src <HIDDEN FOR SECURITY>
acl BrkRm src <HIDDEN FOR SECURITY>
acl Dials src <HIDDEN FOR SECURITY>
acl Comps src <HIDDEN FOR SECURITY>
acl whsws dstdom_regex -i .opensuse.org .novell.com .suse.com mirror.mcs.an1.gov mirrors.kernerl.org www.suse.de suse.mirrors.tds.net mirrros.usc.edu ftp.ale.org suse.cs.utah.edu mirrors.usc.edu mirror.usc.an1.gov linux.nssl.noaa.gov noaa.gov .kernel.org ftp.ale.org ftp.gwdg.de .medibuntu.org mirrors.xmission.com .canonical.com .ubuntu.
acl opensites dstdom_regex -i .mbsbooks.com .bowker.com .usps.com .usps.gov .ups.com .fedex.com go.microsoft.com .microsoft.com .apple.com toolbar.msn.com .contacts.msn.com update.services.openoffice.org fms2.pointroll.speedera.net services.wmdrm.windowsmedia.com windowsupdate.com .adobe.com .symantec.com .vitalbook.com vxn1.datawire.net vxn.datawire.net download.lavasoft.de .download.lavasoft.com .lavasoft.com updates.ls-servers.com .canadapost. .myyellow.com minirick symantecliveupdate.com wm.overdrive.com www.overdrive.com productactivation.one.microsoft.com www.update.microsoft.com testdrive.whoson.com www.columbia.k12.mo.us banners.wunderground.com .kofax.com .gotomeeting.com tools.google.com .dl.google.com .cache.googlevideo.com .gpdl.google.com .clients.google.com cache.pack.google.com kh.google.com maps.google.com auth.keyhole.com .contacts.msn.com .hrblock.com .taxcut.com .merchantadvantage.com .jtv.com .malwarebytes.org www.google-analytics.com dcs.support.xerox.com .dhl.com .webtrendslive.com javadl-esd.sun.com javadl-alt.sun.com .excelsior.edu .dhlglobalmail.com .nessus.org .foxitsoftware.com foxit.vo.llnwd.net installshield.com .mindjet.com .mediascouter.com media.us.elsevierhealth.com .xplana.com .govtrack.us sa.tulsacc.edu .omniture.com fpdownload.macromedia.com webservices.amazon.com
acl password proxy_auth REQUIRED
acl all src all
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 563 631 2001 2005 8731 9001 9080 10000
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443 563     # https, snews
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1936-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 10000
acl Safe_ports port 631
acl Safe_ports port 901         # SWAT
acl purge method PURGE
acl CONNECT method CONNECT
acl UTubeUsers proxy_auth "/squid/etc/utubeusers.list"
acl RestrictUTube dstdom_regex -i youtube.com
acl RestrictFacebook dstdom_regex -i facebook.com
acl FacebookUsers proxy_auth "/squid/etc/facebookusers.list"
acl BuemerKEC src 10.10.128.0/24
acl MBSsortnet src 10.10.128.0/26
acl MSNExplorer browser -i MSN
acl Printers src <HIDDEN FOR SECURITY>
acl SpecialFolks src <HIDDEN FOR SECURITY>
# streaming download
acl fails rep_mime_type ^.*mms.*
acl fails rep_mime_type ^.*ms-hdr.*
acl fails rep_mime_type ^.*x-fcs.*
acl fails rep_mime_type ^.*x-ms-asf.*
acl fails2 urlpath_regex dvrplayer mediastream mms://
acl fails2 urlpath_regex \.asf$ \.afx$ \.flv$ \.swf$
acl deny_rep_mime_flashvideo rep_mime_type -i video/flv
acl deny_rep_mime_shockwave rep_mime_type -i ^application/x-shockwave-flash$
acl x-type req_mime_type -i ^application/octet-stream$
acl x-type req_mime_type -i application/octet-stream
acl x-type req_mime_type -i ^application/x-mplayer2$
acl x-type req_mime_type -i application/x-mplayer2
acl x-type req_mime_type -i ^application/x-oleobject$
acl x-type req_mime_type -i application/x-oleobject
acl x-type req_mime_type -i application/x-pncmd
acl x-type req_mime_type -i ^video/x-ms-asf$
acl x-type2 rep_mime_type -i ^application/octet-stream$
acl x-type2 rep_mime_type -i application/octet-stream
acl x-type2 rep_mime_type -i ^application/x-mplayer2$
acl x-type2 rep_mime_type -i application/x-mplayer2
acl x-type2 rep_mime_type -i ^application/x-oleobject$
acl x-type2 rep_mime_type -i application/x-oleobject
acl x-type2 rep_mime_type -i application/x-pncmd
acl x-type2 rep_mime_type -i ^video/x-ms-asf$
acl RestrictHulu dstdom_regex -i hulu.com
acl broken dstdomain cms.montgomerycollege.edu events.columbiamochamber.com members.columbiamochamber.com public.genexusserver.com
acl RestrictVimeo dstdom_regex -i vimeo.com
acl http_port port 80
#http_reply_access deny deny_rep_mime_flashvideo
#http_reply_access deny deny_rep_mime_shockwave
#streaming files
#http_access deny fails
#http_reply_access deny fails
#http_access deny fails2
#http_reply_access deny fails2
#http_access deny x-type
#http_reply_access deny x-type
#http_access deny x-type2
#http_reply_access deny x-type2
follow_x_forwarded_for allow localhost
acl_uses_indirect_client on
log_uses_indirect_client on
http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access allow SpecialFolks
http_access deny CONNECT !SSL_ports
http_access allow whsws
http_access allow opensites
http_access deny BuemerKEC !MBSsortnet
http_access deny BrkRm RestrictUTube RestrictFacebook RestrictVimeo
http_access allow RestrictUTube UTubeUsers
http_access deny RestrictUTube
http_access allow RestrictFacebook FacebookUsers
http_access deny RestrictFacebook
http_access deny RestrictHulu
http_access allow NoAuthNec
http_access allow BrkRm
http_access allow FacebookUsers RestrictVimeo
http_access deny RestrictVimeo
http_access allow Comps
http_access allow Dials
http_access allow Printers
http_access allow password
http_access deny !Safe_ports
http_access deny SSL_ports !CONNECT
http_access allow http_port
http_access deny all
http_reply_access allow all
icp_access allow all
access_log /squid/var/logs/access.log squid
visible_hostname proxy.site.com
forwarded_for off
coredump_dir /squid/cache/
#header_access Accept-Encoding deny broken
#acl snmppublic snmp_community mysecretcommunity
#snmp_port 3401
#snmp_access allow snmppublic all
cache_mem 3 GB
#acl snmppublic snmp_community mbssquid
#snmp_port 3401
#snmp_access allow snmppublic all
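
A note on reading the long http_access block above: Squid evaluates http_access rules strictly top to bottom and stops at the first rule whose ACLs all match; if no rule matches, the default action is the opposite of the last rule listed. A minimal sketch of that behaviour (the ACL names here are hypothetical, not taken from the config above):

    acl localnet src 192.168.1.0/24
    acl blockedsites dstdomain .example.com
    http_access deny blockedsites   # first match wins: these requests stop here
    http_access allow localnet      # only reached if nothing matched above
    http_access deny all            # explicit catch-all keeps the default unambiguous

This is why the position of lines such as "http_access allow whsws" relative to the later deny rules matters: a request matching whsws is allowed before the restriction ACLs are ever consulted.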


  • Ubuntu 10.04 not detecting multiple monitors

    - by user28837
    I have 2 graphics cards; this is the output from lspci:

        01:00.0 VGA compatible controller: ATI Technologies Inc RV770 [Radeon HD 4850]
        02:00.0 VGA compatible controller: ATI Technologies Inc RV710 [Radeon HD 4350]

    I have one monitor connected to the 4850 and two connected to the 4350. However, when I go into System > Preferences > Monitors, the only monitor shown is the one connected to the 4850. Is there something I need to enable for it to be able to use the other card? How do I get this to work? Thanks. As per request, here is Xorg.0.log:

    X.Org X Server 1.7.6 Release Date: 2010-03-17 X Protocol Version 11, Revision 0 Build Operating System: Linux 2.6.24-25-server i686 Ubuntu Current Operating System: Linux jeff-desktop 2.6.32-22-generic-pae #33-Ubuntu SMP Wed Apr 28 14:57:29 UTC 2010 i686 Kernel command line: BOOT_IMAGE=/boot/vmlinuz-2.6.32-22-generic-pae root=UUID=852e1013-4ed6-40fd-a462-c29087888383 ro quiet splash Build Date: 23 April 2010 05:11:50PM xorg-server 2:1.7.6-2ubuntu7 (Bryce Harrington <[email protected]>) Current version of pixman: 0.16.4 Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. (==) Log file: "/var/log/Xorg.0.log", Time: Tue May 11 08:24:52 2010 (==) Using config file: "/etc/X11/xorg.conf" (==) Using config directory: "/usr/lib/X11/xorg.conf.d" (==) No Layout section. Using the first Screen section. (**) |-->Screen "Default Screen" (0) (**) | |-->Monitor "<default monitor>" (==) No device specified for screen "Default Screen". Using the first device section listed. (**) | |-->Device "Default Device" (==) No monitor specified for screen "Default Screen". Using a default monitor configuration. (==) Automatically adding devices (==) Automatically enabling devices (WW) The directory "/usr/share/fonts/X11/cyrillic" does not exist. Entry deleted from font path. (==) FontPath set to: /usr/share/fonts/X11/misc, /usr/share/fonts/X11/100dpi/:unscaled, /usr/share/fonts/X11/75dpi/:unscaled, /usr/share/fonts/X11/Type1, /usr/share/fonts/X11/100dpi, /usr/share/fonts/X11/75dpi, /var/lib/defoma/x-ttcidfont-conf.d/dirs/TrueType, built-ins (==) ModulePath set to "/usr/lib/xorg/extra-modules,/usr/lib/xorg/modules" (II) The server relies on udev to provide the list of input devices. If no devices become available, reconfigure udev or disable AutoAddDevices. (II) Loader magic: 0x81f0e80 (II) Module ABI versions: X.Org ANSI C Emulation: 0.4 X.Org Video Driver: 6.0 X.Org XInput driver : 7.0 X.Org Server Extension : 2.0 (++) using VT number 7 (--) PCI:*(0:1:0:0) 1002:9442:174b:e104 ATI Technologies Inc RV770 [Radeon HD 4850] rev 0, Mem @ 0xc0000000/268435456, 0xfe7e0000/65536, I/O @ 0x0000a000/256, BIOS @ 0x????????/131072 (--) PCI: (0:2:0:0) 1002:954f:1462:1618 ATI Technologies Inc RV710 [Radeon HD 4350] rev 0, Mem @ 0xd0000000/268435456, 0xfe8e0000/65536, I/O @ 0x0000b000/256, BIOS @ 0x????????/131072 (WW) Open ACPI failed (/var/run/acpid.socket) (No such file or directory) (II) "extmod" will be loaded by default. (II) "dbe" will be loaded by default. (II) "glx" will be loaded. This was enabled by default and also specified in the config file. (II) "record" will be loaded by default. (II) "dri" will be loaded by default. (II) "dri2" will be loaded by default.
(II) LoadModule: "glx" (II) Loading /usr/lib/xorg/extra-modules/modules/extensions/libglx.so (II) Module glx: vendor="FireGL - ATI Technologies Inc." compiled for 7.5.0, module version = 1.0.0 (II) Loading extension GLX (II) LoadModule: "extmod" (II) Loading /usr/lib/xorg/modules/extensions/libextmod.so (II) Module extmod: vendor="X.Org Foundation" compiled for 1.7.6, module version = 1.0.0 Module class: X.Org Server Extension ABI class: X.Org Server Extension, version 2.0 (II) Loading extension MIT-SCREEN-SAVER (II) Loading extension XFree86-VidModeExtension (II) Loading extension XFree86-DGA (II) Loading extension DPMS (II) Loading extension XVideo (II) Loading extension XVideo-MotionCompensation (II) Loading extension X-Resource (II) LoadModule: "dbe" (II) Loading /usr/lib/xorg/modules/extensions/libdbe.so (II) Module dbe: vendor="X.Org Foundation" compiled for 1.7.6, module version = 1.0.0 Module class: X.Org Server Extension ABI class: X.Org Server Extension, version 2.0 (II) Loading extension DOUBLE-BUFFER (II) LoadModule: "record" (II) Loading /usr/lib/xorg/modules/extensions/librecord.so (II) Module record: vendor="X.Org Foundation" compiled for 1.7.6, module version = 1.13.0 Module class: X.Org Server Extension ABI class: X.Org Server Extension, version 2.0 (II) Loading extension RECORD (II) LoadModule: "dri" (II) Loading /usr/lib/xorg/modules/extensions/libdri.so (II) Module dri: vendor="X.Org Foundation" compiled for 1.7.6, module version = 1.0.0 ABI class: X.Org Server Extension, version 2.0 (II) Loading extension XFree86-DRI (II) LoadModule: "dri2" (II) Loading /usr/lib/xorg/modules/extensions/libdri2.so (II) Module dri2: vendor="X.Org Foundation" compiled for 1.7.6, module version = 1.1.0 ABI class: X.Org Server Extension, version 2.0 (II) Loading extension DRI2 (II) LoadModule: "fglrx" (II) Loading /usr/lib/xorg/extra-modules/modules/drivers/fglrx_drv.so (II) Module fglrx: vendor="FireGL - ATI Technologies Inc." compiled for 1.7.1, module version = 8.72.11 Module class: X.Org Video Driver (II) Loading sub module "fglrxdrm" (II) LoadModule: "fglrxdrm" (II) Loading /usr/lib/xorg/extra-modules/modules/linux/libfglrxdrm.so (II) Module fglrxdrm: vendor="FireGL - ATI Technologies Inc." 
compiled for 1.7.1, module version = 8.72.11 (II) ATI Proprietary Linux Driver Version Identifier:8.72.11 (II) ATI Proprietary Linux Driver Release Identifier: 8.723.1 (II) ATI Proprietary Linux Driver Build Date: Apr 8 2010 21:40:29 (II) Primary Device is: PCI 01@00:00:0 (WW) Falling back to old probe method for fglrx (II) Loading PCS database from /etc/ati/amdpcsdb (--) Assigning device section with no busID to primary device (WW) fglrx: No matching Device section for instance (BusID PCI:0@2:0:0) found (--) Chipset Supported AMD Graphics Processor (0x9442) found (WW) fglrx: No matching Device section for instance (BusID PCI:0@1:0:1) found (WW) fglrx: No matching Device section for instance (BusID PCI:0@2:0:1) found (**) ChipID override: 0x954F (**) Chipset Supported AMD Graphics Processor (0x954F) found (II) AMD Video driver is running on a device belonging to a group targeted for this release (II) AMD Video driver is signed (II) fglrx(0): pEnt->device->identifier=0x9428aa0 (II) pEnt->device->identifier=(nil) (II) fglrx(0): === [atiddxPreInit] === begin (II) Loading sub module "vgahw" (II) LoadModule: "vgahw" (II) Loading /usr/lib/xorg/modules/libvgahw.so (II) Module vgahw: vendor="X.Org Foundation" compiled for 1.7.6, module version = 0.1.0 ABI class: X.Org Video Driver, version 6.0 (II) fglrx(0): Creating default Display subsection in Screen section "Default Screen" for depth/fbbpp 24/32 (**) fglrx(0): Depth 24, (--) framebuffer bpp 32 (II) fglrx(0): Pixel depth = 24 bits stored in 4 bytes (32 bpp pixmaps) (==) fglrx(0): Default visual is TrueColor (==) fglrx(0): RGB weight 888 (II) fglrx(0): Using 8 bits per RGB (==) fglrx(0): Buffer Tiling is ON (II) Loading sub module "fglrxdrm" (II) LoadModule: "fglrxdrm" (II) Reloading /usr/lib/xorg/extra-modules/modules/linux/libfglrxdrm.so ukiDynamicMajor: found major device number 251 ukiDynamicMajor: found major device number 251 ukiOpenByBusid: Searching for BusID PCI:1:0:0 ukiOpenDevice: node name is /dev/ati/card0 ukiOpenDevice: open result is 10, (OK) ukiOpenByBusid: ukiOpenMinor returns 10 ukiOpenByBusid: ukiGetBusid reports PCI:2:0:0 ukiOpenDevice: node name is /dev/ati/card1 ukiOpenDevice: open result is 10, (OK) ukiOpenByBusid: ukiOpenMinor returns 10 ukiOpenByBusid: ukiGetBusid reports PCI:1:0:0 ukiDynamicMajor: found major device number 251 ukiDynamicMajor: found major device number 251 ukiOpenByBusid: Searching for BusID PCI:2:0:0 ukiOpenDevice: node name is /dev/ati/card0 ukiOpenDevice: open result is 11, (OK) ukiOpenByBusid: ukiOpenMinor returns 11 ukiOpenByBusid: ukiGetBusid reports PCI:2:0:0 (--) fglrx(0): Chipset: "ATI Radeon HD 4800 Series" (Chipset = 0x9442) (--) fglrx(0): (PciSubVendor = 0x174b, PciSubDevice = 0xe104) (==) fglrx(0): board vendor info: third party graphics adapter - NOT original ATI (--) fglrx(0): Linear framebuffer (phys) at 0xc0000000 (--) fglrx(0): MMIO registers at 0xfe7e0000 (--) fglrx(0): I/O port at 0x0000a000 (==) fglrx(0): ROM-BIOS at 0x000c0000 (II) fglrx(0): AC Adapter is used (II) fglrx(0): Primary V_BIOS segment is: 0xc000 (II) Loading sub module "vbe" (II) LoadModule: "vbe" (II) Loading /usr/lib/xorg/modules/libvbe.so (II) Module vbe: vendor="X.Org Foundation" compiled for 1.7.6, module version = 1.1.0 ABI class: X.Org Video Driver, version 6.0 (II) fglrx(0): VESA BIOS detected (II) fglrx(0): VESA VBE Version 3.0 (II) fglrx(0): VESA VBE Total Mem: 16384 kB (II) fglrx(0): VESA VBE OEM: ATI ATOMBIOS (II) fglrx(0): VESA VBE OEM Software Rev: 11.13 (II) fglrx(0): VESA VBE OEM Vendor: (C) 1988-2005, 
ATI Technologies Inc. (II) fglrx(0): VESA VBE OEM Product: RV770 (II) fglrx(0): VESA VBE OEM Product Rev: 01.00 (II) fglrx(0): ATI Video BIOS revision 9 or later detected (--) fglrx(0): Video RAM: 524288 kByte, Type: GDDR3 (II) fglrx(0): PCIE card detected (--) fglrx(0): Using per-process page tables (PPPT) as GART. (WW) fglrx(0): board is an unknown third party board, chipset is supported (--) fglrx(0): Chipset: "ATI Radeon HD 4300/4500 Series" (Chipset = 0x954f) (--) fglrx(0): (PciSubVendor = 0x1462, PciSubDevice = 0x1618) (==) fglrx(0): board vendor info: third party graphics adapter - NOT original ATI (--) fglrx(0): Linear framebuffer (phys) at 0xd0000000 (--) fglrx(0): MMIO registers at 0xfe8e0000 (--) fglrx(0): I/O port at 0x0000b000 (==) fglrx(0): ROM-BIOS at 0x000c0000 (II) fglrx(0): AC Adapter is used (II) fglrx(0): Invalid ATI BIOS from int10, the adapter is not VGA-enabled (II) fglrx(0): ATI Video BIOS revision 9 or later detected (--) fglrx(0): Video RAM: 524288 kByte, Type: DDR2 (II) fglrx(0): PCIE card detected (--) fglrx(0): Using per-process page tables (PPPT) as GART. (WW) fglrx(0): board is an unknown third party board, chipset is supported (II) fglrx(0): Using adapter: 1:0.0. (II) fglrx(0): [FB] MC range(MCFBBase = 0xf00000000, MCFBSize = 0x20000000) (II) fglrx(0): Interrupt handler installed at IRQ 31. (II) fglrx(0): Using adapter: 2:0.0. (II) fglrx(0): [FB] MC range(MCFBBase = 0xf00000000, MCFBSize = 0x20000000) (II) fglrx(0): RandR 1.2 support is enabled! (II) fglrx(0): RandR 1.2 rotation support is enabled! (==) fglrx(0): Center Mode is disabled (II) Loading sub module "fb" (II) LoadModule: "fb" (II) Loading /usr/lib/xorg/modules/libfb.so (II) Module fb: vendor="X.Org Foundation" compiled for 1.7.6, module version = 1.0.0 ABI class: X.Org ANSI C Emulation, version 0.4 (II) Loading sub module "ddc" (II) LoadModule: "ddc" (II) Module "ddc" already built-in (II) fglrx(0): Finished Initialize PPLIB! 
(II) Loading sub module "ddc" (II) LoadModule: "ddc" (II) Module "ddc" already built-in (II) fglrx(0): Connected Display0: DFP on external TMDS [tmds2] (II) fglrx(0): Display0 EDID data --------------------------- (II) fglrx(0): Manufacturer: DEL Model: a038 Serial#: 810829397 (II) fglrx(0): Year: 2008 Week: 51 (II) fglrx(0): EDID Version: 1.3 (II) fglrx(0): Digital Display Input (II) fglrx(0): Max Image Size [cm]: horiz.: 53 vert.: 30 (II) fglrx(0): Gamma: 2.20 (II) fglrx(0): DPMS capabilities: StandBy Suspend Off (II) fglrx(0): Supported color encodings: RGB 4:4:4 YCrCb 4:4:4 (II) fglrx(0): Default color space is primary color space (II) fglrx(0): First detailed timing is preferred mode (II) fglrx(0): redX: 0.640 redY: 0.330 greenX: 0.300 greenY: 0.600 (II) fglrx(0): blueX: 0.150 blueY: 0.060 whiteX: 0.312 whiteY: 0.329 (II) fglrx(0): Supported established timings: (II) fglrx(0): 720x400@70Hz (II) fglrx(0): 640x480@60Hz (II) fglrx(0): 640x480@75Hz (II) fglrx(0): 800x600@60Hz (II) fglrx(0): 800x600@75Hz (II) fglrx(0): 1024x768@60Hz (II) fglrx(0): 1024x768@75Hz (II) fglrx(0): 1280x1024@75Hz (II) fglrx(0): Manufacturer's mask: 0 (II) fglrx(0): Supported standard timings: (II) fglrx(0): #0: hsize: 1152 vsize 864 refresh: 75 vid: 20337 (II) fglrx(0): #1: hsize: 1280 vsize 1024 refresh: 60 vid: 32897 (II) fglrx(0): #2: hsize: 1920 vsize 1080 refresh: 60 vid: 49361 (II) fglrx(0): Supported detailed timing: (II) fglrx(0): clock: 148.5 MHz Image Size: 531 x 298 mm (II) fglrx(0): h_active: 1920 h_sync: 2008 h_sync_end 2052 h_blank_end 2200 h_border: 0 (II) fglrx(0): v_active: 1080 v_sync: 1084 v_sync_end 1089 v_blanking: 1125 v_border: 0 (II) fglrx(0): Serial No: Y183D8CF0TFU (II) fglrx(0): Monitor name: DELL S2409W (II) fglrx(0): Ranges: V min: 50 V max: 76 Hz, H min: 30 H max: 83 kHz, PixClock max 170 MHz (II) fglrx(0): EDID (in hex): (II) fglrx(0): 00ffffffffffff0010ac38a055465430 (II) fglrx(0): 3312010380351e78eeee91a3544c9926 (II) fglrx(0): 0f5054a54b00714f8180d1c001010101 (II) fglrx(0): 010101010101023a801871382d40582c (II) fglrx(0): 4500132a2100001e000000ff00593138 (II) fglrx(0): 3344384346305446550a000000fc0044 (II) fglrx(0): 454c4c205332343039570a20000000fd (II) fglrx(0): 00324c1e5311000a2020202020200059 (II) fglrx(0): End of Display0 EDID data -------------------- (II) fglrx(0): Output DFP2 has no monitor section (II) fglrx(0): Output DFP_EXTTMDS has no monitor section (II) fglrx(0): Output CRT1 has no monitor section (II) fglrx(0): Output CRT2 has no monitor section (II) fglrx(0): Output DFP2 disconnected (II) fglrx(0): Output DFP_EXTTMDS connected (II) fglrx(0): Output CRT1 disconnected (II) fglrx(0): Output CRT2 disconnected (II) fglrx(0): Using exact sizes for initial modes (II) fglrx(0): Output DFP_EXTTMDS using initial mode 1920x1080 (II) fglrx(0): DPI set to (96, 96) (II) fglrx(0): Adapter ATI Radeon HD 4800 Series has 2 configurable heads and 1 displays connected. 
(==) fglrx(0): QBS disabled (==) fglrx(0): PseudoColor visuals disabled (II) Loading sub module "ramdac" (II) LoadModule: "ramdac" (II) Module "ramdac" already built-in (==) fglrx(0): NoAccel = NO (==) fglrx(0): NoDRI = NO (==) fglrx(0): Capabilities: 0x00000000 (==) fglrx(0): CapabilitiesEx: 0x00000000 (==) fglrx(0): OpenGL ClientDriverName: "fglrx_dri.so" (==) fglrx(0): UseFastTLS=0 (==) fglrx(0): BlockSignalsOnLock=1 (--) Depth 24 pixmap format is 32 bpp (II) Loading extension ATIFGLRXDRI (II) fglrx(0): doing swlDriScreenInit (II) fglrx(0): swlDriScreenInit for fglrx driver ukiDynamicMajor: found major device number 251 ukiDynamicMajor: found major device number 251 ukiDynamicMajor: found major device number 251 ukiOpenByBusid: Searching for BusID PCI:1:0:0 ukiOpenDevice: node name is /dev/ati/card0 ukiOpenDevice: open result is 17, (OK) ukiOpenByBusid: ukiOpenMinor returns 17 ukiOpenByBusid: ukiGetBusid reports PCI:2:0:0 ukiOpenDevice: node name is /dev/ati/card1 ukiOpenDevice: open result is 17, (OK) ukiOpenByBusid: ukiOpenMinor returns 17 ukiOpenByBusid: ukiGetBusid reports PCI:1:0:0 (II) fglrx(0): [uki] DRM interface version 1.0 (II) fglrx(0): [uki] created "fglrx" driver at busid "PCI:1:0:0" (II) fglrx(0): [uki] added 8192 byte SAREA at 0x2000 (II) fglrx(0): [uki] mapped SAREA 0x2000 to 0xb6996000 (II) fglrx(0): [uki] framebuffer handle = 0x3000 (II) fglrx(0): [uki] added 1 reserved context for kernel (II) fglrx(0): swlDriScreenInit done (II) fglrx(0): Kernel Module Version Information: (II) fglrx(0): Name: fglrx (II) fglrx(0): Version: 8.72.11 (II) fglrx(0): Date: Apr 8 2010 (II) fglrx(0): Desc: ATI FireGL DRM kernel module (II) fglrx(0): Kernel Module version matches driver. (II) fglrx(0): Kernel Module Build Time Information: (II) fglrx(0): Build-Kernel UTS_RELEASE: 2.6.32-22-generic-pae (II) fglrx(0): Build-Kernel MODVERSIONS: yes (II) fglrx(0): Build-Kernel __SMP__: yes (II) fglrx(0): Build-Kernel PAGE_SIZE: 0x1000 (II) fglrx(0): [uki] register handle = 0x00004000 (II) fglrx(0): DRI initialization successfull! (II) fglrx(0): FBADPhys: 0xf00000000 FBMappedSize: 0x01068000 (II) fglrx(0): FBMM initialized for area (0,0)-(1920,2240) (II) fglrx(0): FBMM auto alloc for area (0,0)-(1920,1920) (front color buffer - assumption) (II) fglrx(0): Largest offscreen area available: 1920 x 320 (==) fglrx(0): Backing store disabled (II) Loading extension FGLRXEXTENSION (==) fglrx(0): DPMS enabled (II) fglrx(0): Initialized in-driver Xinerama extension (**) fglrx(0): Textured Video is enabled. 
(II) LoadModule: "glesx" (II) Loading /usr/lib/xorg/extra-modules/modules/glesx.so (II) Module glesx: vendor="X.Org Foundation" compiled for 1.7.1, module version = 1.0.0 (II) Loading extension GLESX (II) Loading sub module "xaa" (II) LoadModule: "xaa" (II) Loading /usr/lib/xorg/modules/libxaa.so (II) Module xaa: vendor="X.Org Foundation" compiled for 1.7.6, module version = 1.2.1 ABI class: X.Org Video Driver, version 6.0 (II) fglrx(0): GLESX enableFlags = 94 (II) fglrx(0): Using XFree86 Acceleration Architecture (XAA) Screen to screen bit blits Solid filled rectangles Solid Horizontal and Vertical Lines Driver provided ScreenToScreenBitBlt replacement Driver provided FillSolidRects replacement (II) fglrx(0): GLESX is enabled (II) LoadModule: "amdxmm" (II) Loading /usr/lib/xorg/extra-modules/modules/amdxmm.so (II) Module amdxmm: vendor="X.Org Foundation" compiled for 1.7.1, module version = 1.0.0 (II) Loading extension AMDXVOPL (II) fglrx(0): UVD2 feature is available (II) fglrx(0): Enable composite support successfully (II) fglrx(0): X context handle = 0x1 (II) fglrx(0): [DRI] installation complete (==) fglrx(0): Silken mouse enabled (==) fglrx(0): Using HW cursor of display infrastructure! (II) fglrx(0): Disabling in-server RandR and enabling in-driver RandR 1.2. (--) RandR disabled (II) Found 2 VGA devices: arbiter wrapping enabled (II) Initializing built-in extension Generic Event Extension (II) Initializing built-in extension SHAPE (II) Initializing built-in extension MIT-SHM (II) Initializing built-in extension XInputExtension (II) Initializing built-in extension XTEST (II) Initializing built-in extension BIG-REQUESTS (II) Initializing built-in extension SYNC (II) Initializing built-in extension XKEYBOARD (II) Initializing built-in extension XC-MISC (II) Initializing built-in extension SECURITY (II) Initializing built-in extension XINERAMA (II) Initializing built-in extension XFIXES (II) Initializing built-in extension RENDER (II) Initializing built-in extension RANDR (II) Initializing built-in extension COMPOSITE (II) Initializing built-in extension DAMAGE ukiDynamicMajor: found major device number 251 ukiDynamicMajor: found major device number 251 ukiOpenByBusid: Searching for BusID PCI:1:0:0 ukiOpenDevice: node name is /dev/ati/card0 ukiOpenDevice: open result is 18, (OK) ukiOpenByBusid: ukiOpenMinor returns 18 ukiOpenByBusid: ukiGetBusid reports PCI:2:0:0 ukiOpenDevice: node name is /dev/ati/card1 ukiOpenDevice: open result is 18, (OK) ukiOpenByBusid: ukiOpenMinor returns 18 ukiOpenByBusid: ukiGetBusid reports PCI:1:0:0 (II) AIGLX: Loaded and initialized /usr/lib/dri/fglrx_dri.so (II) GLX: Initialized DRI GL provider for screen 0 (II) fglrx(0): Enable the clock gating! 
(II) fglrx(0): Setting screen physical size to 507 x 285 (II) XKB: reuse xkmfile /var/lib/xkb/server-B20D7FC79C7F597315E3E501AEF10E0D866E8E92.xkm (II) config/udev: Adding input device Power Button (/dev/input/event1) (**) Power Button: Applying InputClass "evdev keyboard catchall" (II) LoadModule: "evdev" (II) Loading /usr/lib/xorg/modules/input/evdev_drv.so (II) Module evdev: vendor="X.Org Foundation" compiled for 1.7.6, module version = 2.3.2 Module class: X.Org XInput Driver ABI class: X.Org XInput driver, version 7.0 (**) Power Button: always reports core events (**) Power Button: Device: "/dev/input/event1" (II) Power Button: Found keys (II) Power Button: Configuring as keyboard (II) XINPUT: Adding extended input device "Power Button" (type: KEYBOARD) (**) Option "xkb_rules" "evdev" (**) Option "xkb_model" "pc105" (**) Option "xkb_layout" "us" (II) config/udev: Adding input device Power Button (/dev/input/event0) (**) Power Button: Applying InputClass "evdev keyboard catchall" (**) Power Button: always reports core events (**) Power Button: Device: "/dev/input/event0" (II) Power Button: Found keys (II) Power Button: Configuring as keyboard (II) XINPUT: Adding extended input device "Power Button" (type: KEYBOARD) (**) Option "xkb_rules" "evdev" (**) Option "xkb_model" "pc105" (**) Option "xkb_layout" "us" (II) config/udev: Adding input device Logitech USB-PS/2 Optical Mouse (/dev/input/event3) (**) Logitech USB-PS/2 Optical Mouse: Applying InputClass "evdev pointer catchall" (**) Logitech USB-PS/2 Optical Mouse: always reports core events (**) Logitech USB-PS/2 Optical Mouse: Device: "/dev/input/event3" (II) Logitech USB-PS/2 Optical Mouse: Found 12 mouse buttons (II) Logitech USB-PS/2 Optical Mouse: Found scroll wheel(s) (II) Logitech USB-PS/2 Optical Mouse: Found relative axes (II) Logitech USB-PS/2 Optical Mouse: Found x and y relative axes (II) Logitech USB-PS/2 Optical Mouse: Configuring as mouse (**) Logitech USB-PS/2 Optical Mouse: YAxisMapping: buttons 4 and 5 (**) Logitech USB-PS/2 Optical Mouse: EmulateWheelButton: 4, EmulateWheelInertia: 10, EmulateWheelTimeout: 200 (II) XINPUT: Adding extended input device "Logitech USB-PS/2 Optical Mouse" (type: MOUSE) (II) Logitech USB-PS/2 Optical Mouse: initialized for relative axes. 
(II) config/udev: Adding input device Logitech USB-PS/2 Optical Mouse (/dev/input/mouse1) (II) No input driver/identifier specified (ignoring) (II) config/udev: Adding input device Logitech USB Multimedia Keyboard (/dev/input/event4) (**) Logitech USB Multimedia Keyboard: Applying InputClass "evdev keyboard catchall" (**) Logitech USB Multimedia Keyboard: always reports core events (**) Logitech USB Multimedia Keyboard: Device: "/dev/input/event4" (II) Logitech USB Multimedia Keyboard: Found keys (II) Logitech USB Multimedia Keyboard: Configuring as keyboard (II) XINPUT: Adding extended input device "Logitech USB Multimedia Keyboard" (type: KEYBOARD) (**) Option "xkb_rules" "evdev" (**) Option "xkb_model" "pc105" (**) Option "xkb_layout" "us" (II) config/udev: Adding input device Logitech USB Multimedia Keyboard (/dev/input/event5) (**) Logitech USB Multimedia Keyboard: Applying InputClass "evdev keyboard catchall" (**) Logitech USB Multimedia Keyboard: always reports core events (**) Logitech USB Multimedia Keyboard: Device: "/dev/input/event5" (II) Logitech USB Multimedia Keyboard: Found keys (II) Logitech USB Multimedia Keyboard: Configuring as keyboard (II) XINPUT: Adding extended input device "Logitech USB Multimedia Keyboard" (type: KEYBOARD) (**) Option "xkb_rules" "evdev" (**) Option "xkb_model" "pc105" (**) Option "xkb_layout" "us" (II) config/udev: Adding input device KEYBOARD (/dev/input/event6) (**) KEYBOARD: Applying InputClass "evdev keyboard catchall" (**) KEYBOARD: always reports core events (**) KEYBOARD: Device: "/dev/input/event6" (II) KEYBOARD: Found keys (II) KEYBOARD: Configuring as keyboard (II) XINPUT: Adding extended input device "KEYBOARD" (type: KEYBOARD) (**) Option "xkb_rules" "evdev" (**) Option "xkb_model" "pc105" (**) Option "xkb_layout" "us" (II) config/udev: Adding input device KEYBOARD (/dev/input/event7) (**) KEYBOARD: Applying InputClass "evdev keyboard catchall" (**) KEYBOARD: always reports core events (**) KEYBOARD: Device: "/dev/input/event7" (II) KEYBOARD: Found 14 mouse buttons (II) KEYBOARD: Found scroll wheel(s) (II) KEYBOARD: Found relative axes (II) KEYBOARD: Found keys (II) KEYBOARD: Configuring as mouse (II) KEYBOARD: Configuring as keyboard (**) KEYBOARD: YAxisMapping: buttons 4 and 5 (**) KEYBOARD: EmulateWheelButton: 4, EmulateWheelInertia: 10, EmulateWheelTimeout: 200 (II) XINPUT: Adding extended input device "KEYBOARD" (type: KEYBOARD) (**) Option "xkb_rules" "evdev" (**) Option "xkb_model" "pc105" (**) Option "xkb_layout" "us" (EE) KEYBOARD: failed to initialize for relative axes. 
(II) config/udev: Adding input device KEYBOARD (/dev/input/mouse2) (II) No input driver/identifier specified (ignoring) (II) config/udev: Adding input device Macintosh mouse button emulation (/dev/input/event2) (**) Macintosh mouse button emulation: Applying InputClass "evdev pointer catchall" (**) Macintosh mouse button emulation: always reports core events (**) Macintosh mouse button emulation: Device: "/dev/input/event2" (II) Macintosh mouse button emulation: Found 3 mouse buttons (II) Macintosh mouse button emulation: Found relative axes (II) Macintosh mouse button emulation: Found x and y relative axes (II) Macintosh mouse button emulation: Configuring as mouse (**) Macintosh mouse button emulation: YAxisMapping: buttons 4 and 5 (**) Macintosh mouse button emulation: EmulateWheelButton: 4, EmulateWheelInertia: 10, EmulateWheelTimeout: 200 (II) XINPUT: Adding extended input device "Macintosh mouse button emulation" (type: MOUSE) (II) Macintosh mouse button emulation: initialized for relative axes. (II) config/udev: Adding input device Macintosh mouse button emulation (/dev/input/mouse0) (II) No input driver/identifier specified (ignoring) (II) fglrx(0): Restoring Recent Mode via PCS is not supported in RANDR 1.2 capable environments
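
    The repeated "(WW) fglrx: No matching Device section for instance (BusID PCI:0@2:0:0)" warnings above suggest the second card is probed but never bound to a configured screen; with no Layout section, the server uses only the first Screen. Below is a sketch of the xorg.conf sections that would bind both adapters. It assumes the fglrx driver and the two bus IDs reported in the log; the Identifier names are made-up labels, not anything from this system:

        # Hypothetical /etc/X11/xorg.conf fragment: one Device per card,
        # matched by the bus IDs the log reports (PCI 01:00.0 and 02:00.0).
        Section "Device"
            Identifier "Radeon4850"
            Driver     "fglrx"
            BusID      "PCI:1:0:0"
        EndSection

        Section "Device"
            Identifier "Radeon4350"
            Driver     "fglrx"
            BusID      "PCI:2:0:0"
        EndSection

        Section "Screen"
            Identifier "Screen0"
            Device     "Radeon4850"
        EndSection

        Section "Screen"
            Identifier "Screen1"
            Device     "Radeon4350"
        EndSection

        # An explicit layout, so the server stops defaulting to the first Screen.
        Section "ServerLayout"
            Identifier "Multihead"
            Screen 0 "Screen0" 0 0
            Screen 1 "Screen1" RightOf "Screen0"
        EndSection

    With the Catalyst driver, something equivalent can usually be generated rather than hand-written, e.g. with aticonfig (aticonfig --initial --adapter=all on releases that support it), after which the monitor arrangement can be adjusted in the vendor control panel.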


  • OpenVPN - Windows 8 to Windows 2008 Server, not connecting

    - by niico
    I have followed this tutorial about setting up an OpenVPN Server on Windows Server - and a client on Windows (in this case Windows 8). The server appears to be running fine - but it is not connecting with this error: Mon Jul 22 19:09:04 2013 Warning: cannot open --log file: C:\Program Files\OpenVPN\log\my-laptop.log: Access is denied. (errno=5) Mon Jul 22 19:09:04 2013 OpenVPN 2.3.2 x86_64-w64-mingw32 [SSL (OpenSSL)] [LZO] [PKCS11] [eurephia] [IPv6] built on Jun 3 2013 Mon Jul 22 19:09:04 2013 MANAGEMENT: TCP Socket listening on [AF_INET]127.0.0.1:25340 Mon Jul 22 19:09:04 2013 Need hold release from management interface, waiting... Mon Jul 22 19:09:05 2013 MANAGEMENT: Client connected from [AF_INET]127.0.0.1:25340 Mon Jul 22 19:09:05 2013 MANAGEMENT: CMD 'state on' Mon Jul 22 19:09:05 2013 MANAGEMENT: CMD 'log all on' Mon Jul 22 19:09:05 2013 MANAGEMENT: CMD 'hold off' Mon Jul 22 19:09:05 2013 MANAGEMENT: CMD 'hold release' Mon Jul 22 19:09:05 2013 Socket Buffers: R=[65536->65536] S=[65536->65536] Mon Jul 22 19:09:05 2013 UDPv4 link local: [undef] Mon Jul 22 19:09:05 2013 UDPv4 link remote: [AF_INET]66.666.66.666:9999 Mon Jul 22 19:09:05 2013 MANAGEMENT: >STATE:1374494945,WAIT,,, Mon Jul 22 19:10:05 2013 TLS Error: TLS key negotiation failed to occur within 60 seconds (check your network connectivity) Mon Jul 22 19:10:05 2013 TLS Error: TLS handshake failed Mon Jul 22 19:10:05 2013 SIGUSR1[soft,tls-error] received, process restarting Mon Jul 22 19:10:05 2013 MANAGEMENT: >STATE:1374495005,RECONNECTING,tls-error,, Mon Jul 22 19:10:05 2013 Restart pause, 2 second(s) Note I have changed the IP and port no (it uses a non-standard port for security reasons). That port is open on the hardware firewall. The server logs are showing a connection attempt from my client: TLS: Initial packet from [AF_INET]118.68.xx.xx:65011, sid=081af4ed xxxxxxxx Mon Jul 22 14:19:15 2013 118.68.xx.xx:65011 TLS Error: TLS key negotiation failed to occur within 60 seconds (check your network connectivity) How can I problem solve this & find the problem? Thx Update - Client config file: ############################################## # Sample client-side OpenVPN 2.0 config file # # for connecting to multi-client server. # # # # This configuration can be used by multiple # # clients, however each client should have # # its own cert and key files. # # # # On Windows, you might want to rename this # # file so it has a .ovpn extension # ############################################## # Specify that we are a client and that we # will be pulling certain config file directives # from the server. client # Use the same setting as you are using on # the server. # On most systems, the VPN will not function # unless you partially or fully disable # the firewall for the TUN/TAP interface. ;dev tap dev tun # Windows needs the TAP-Win32 adapter name # from the Network Connections panel # if you have more than one. On XP SP2, # you may need to disable the firewall # for the TAP adapter. ;dev-node MyTap # Are we connecting to a TCP or # UDP server? Use the same setting as # on the server. ;proto tcp proto udp # The hostname/IP and port of the server. # You can have multiple remote entries # to load balance between the servers. remote 00.00.00.00 1194 ;remote 00.00.00.00 9999 ;remote my-server-2 1194 # Choose a random host from the remote # list for load-balancing. Otherwise # try hosts in the order specified. ;remote-random # Keep trying indefinitely to resolve the # host name of the OpenVPN server. 
Very useful # on machines which are not permanently connected # to the internet such as laptops. resolv-retry infinite # Most clients don't need to bind to # a specific local port number. nobind # Downgrade privileges after initialization (non-Windows only) ;user nobody ;group nobody # Try to preserve some state across restarts. persist-key persist-tun # If you are connecting through an # HTTP proxy to reach the actual OpenVPN # server, put the proxy server/IP and # port number here. See the man page # if your proxy server requires # authentication. ;http-proxy-retry # retry on connection failures ;http-proxy [proxy server] [proxy port #] # Wireless networks often produce a lot # of duplicate packets. Set this flag # to silence duplicate packet warnings. ;mute-replay-warnings # SSL/TLS parms. # See the server config file for more # description. It's best to use # a separate .crt/.key file pair # for each client. A single ca # file can be used for all clients. ca "C:\\Program Files\\OpenVPN\\config\\ca.crt" cert "C:\\Program Files\\OpenVPN\\config\\my-laptop.crt" key "C:\\Program Files\\OpenVPN\\config\\my-laptop.key" # Verify server certificate by checking # that the certicate has the nsCertType # field set to "server". This is an # important precaution to protect against # a potential attack discussed here: # http://openvpn.net/howto.html#mitm # # To use this feature, you will need to generate # your server certificates with the nsCertType # field set to "server". The build-key-server # script in the easy-rsa folder will do this. ns-cert-type server # If a tls-auth key is used on the server # then every client must also have the key. ;tls-auth ta.key 1 # Select a cryptographic cipher. # If the cipher option is used on the server # then you must also specify it here. ;cipher x # Enable compression on the VPN link. # Don't enable this unless it is also # enabled in the server config file. comp-lzo # Set log file verbosity. verb 3 # Silence repeating messages ;mute 20 Server config file: ################################################# # Sample OpenVPN 2.0 config file for # # multi-client server. # # # # This file is for the server side # # of a many-clients <-> one-server # # OpenVPN configuration. # # # # OpenVPN also supports # # single-machine <-> single-machine # # configurations (See the Examples page # # on the web site for more info). # # # # This config should work on Windows # # or Linux/BSD systems. Remember on # # Windows to quote pathnames and use # # double backslashes, e.g.: # # "C:\\Program Files\\OpenVPN\\config\\foo.key" # # # # Comments are preceded with '#' or ';' # ################################################# # Which local IP address should OpenVPN # listen on? (optional) ;local 00.00.00.00 # Which TCP/UDP port should OpenVPN listen on? # If you want to run multiple OpenVPN instances # on the same machine, use a different port # number for each one. You will need to # open up this port on your firewall. std 1194 port 1194 # TCP or UDP server? ;proto tcp proto udp # "dev tun" will create a routed IP tunnel, # "dev tap" will create an ethernet tunnel. # Use "dev tap0" if you are ethernet bridging # and have precreated a tap0 virtual interface # and bridged it with your ethernet interface. # If you want to control access policies # over the VPN, you must create firewall # rules for the the TUN/TAP interface. # On non-Windows systems, you can give # an explicit unit number, such as tun0. # On Windows, use "dev-node" for this. 
# On most systems, the VPN will not function # unless you partially or fully disable # the firewall for the TUN/TAP interface. ;dev tap dev tun # Windows needs the TAP-Win32 adapter name # from the Network Connections panel if you # have more than one. On XP SP2 or higher, # you may need to selectively disable the # Windows firewall for the TAP adapter. # Non-Windows systems usually don't need this. ;dev-node MyTap # SSL/TLS root certificate (ca), certificate # (cert), and private key (key). Each client # and the server must have their own cert and # key file. The server and all clients will # use the same ca file. # # See the "easy-rsa" directory for a series # of scripts for generating RSA certificates # and private keys. Remember to use # a unique Common Name for the server # and each of the client certificates. # # Any X509 key management system can be used. # OpenVPN can also use a PKCS #12 formatted key file # (see "pkcs12" directive in man page). ca "C:\\Program Files\\OpenVPN\\config\\ca.crt" cert "C:\\Program Files\\OpenVPN\\config\\server.crt" key "C:\\Program Files\\OpenVPN\\config\\server.key" # Diffie hellman parameters. # Generate your own with: # openssl dhparam -out dh1024.pem 1024 # Substitute 2048 for 1024 if you are using # 2048 bit keys. dh "C:\\Program Files\\OpenVPN\\config\\dh2048.pem" # Configure server mode and supply a VPN subnet # for OpenVPN to draw client addresses from. # The server will take 10.8.0.1 for itself, # the rest will be made available to clients. # Each client will be able to reach the server # on 10.8.0.1. Comment this line out if you are # ethernet bridging. See the man page for more info. server 10.8.0.0 255.255.255.0 # Maintain a record of client <-> virtual IP address # associations in this file. If OpenVPN goes down or # is restarted, reconnecting clients can be assigned # the same virtual IP address from the pool that was # previously assigned. ifconfig-pool-persist ipp.txt # Configure server mode for ethernet bridging. # You must first use your OS's bridging capability # to bridge the TAP interface with the ethernet # NIC interface. Then you must manually set the # IP/netmask on the bridge interface, here we # assume 10.8.0.4/255.255.255.0. Finally we # must set aside an IP range in this subnet # (start=10.8.0.50 end=10.8.0.100) to allocate # to connecting clients. Leave this line commented # out unless you are ethernet bridging. ;server-bridge 10.8.0.4 255.255.255.0 10.8.0.50 10.8.0.100 # Configure server mode for ethernet bridging # using a DHCP-proxy, where clients talk # to the OpenVPN server-side DHCP server # to receive their IP address allocation # and DNS server addresses. You must first use # your OS's bridging capability to bridge the TAP # interface with the ethernet NIC interface. # Note: this mode only works on clients (such as # Windows), where the client-side TAP adapter is # bound to a DHCP client. ;server-bridge # Push routes to the client to allow it # to reach other private subnets behind # the server. Remember that these # private subnets will also need # to know to route the OpenVPN client # address pool (10.8.0.0/255.255.255.0) # back to the OpenVPN server. ;push "route 192.168.10.0 255.255.255.0" ;push "route 192.168.20.0 255.255.255.0" # To assign specific IP addresses to specific # clients or if a connecting client has a private # subnet behind it that should also have VPN access, # use the subdirectory "ccd" for client-specific # configuration files (see man page for more info). 
# EXAMPLE: Suppose the client # having the certificate common name "Thelonious" # also has a small subnet behind his connecting # machine, such as 192.168.40.128/255.255.255.248. # First, uncomment out these lines: ;client-config-dir ccd ;route 192.168.40.128 255.255.255.248 # Then create a file ccd/Thelonious with this line: # iroute 192.168.40.128 255.255.255.248 # This will allow Thelonious' private subnet to # access the VPN. This example will only work # if you are routing, not bridging, i.e. you are # using "dev tun" and "server" directives. # EXAMPLE: Suppose you want to give # Thelonious a fixed VPN IP address of 10.9.0.1. # First uncomment out these lines: ;client-config-dir ccd ;route 10.9.0.0 255.255.255.252 # Then add this line to ccd/Thelonious: # ifconfig-push 10.9.0.1 10.9.0.2 # Suppose that you want to enable different # firewall access policies for different groups # of clients. There are two methods: # (1) Run multiple OpenVPN daemons, one for each # group, and firewall the TUN/TAP interface # for each group/daemon appropriately. # (2) (Advanced) Create a script to dynamically # modify the firewall in response to access # from different clients. See man # page for more info on learn-address script. ;learn-address ./script # If enabled, this directive will configure # all clients to redirect their default # network gateway through the VPN, causing # all IP traffic such as web browsing and # and DNS lookups to go through the VPN # (The OpenVPN server machine may need to NAT # or bridge the TUN/TAP interface to the internet # in order for this to work properly). ;push "redirect-gateway def1 bypass-dhcp" # Certain Windows-specific network settings # can be pushed to clients, such as DNS # or WINS server addresses. CAVEAT: # http://openvpn.net/faq.html#dhcpcaveats # The addresses below refer to the public # DNS servers provided by opendns.com. ;push "dhcp-option DNS 208.67.222.222" ;push "dhcp-option DNS 208.67.220.220" # Uncomment this directive to allow differenta # clients to be able to "see" each other. # By default, clients will only see the server. # To force clients to only see the server, you # will also need to appropriately firewall the # server's TUN/TAP interface. ;client-to-client # Uncomment this directive if multiple clients # might connect with the same certificate/key # files or common names. This is recommended # only for testing purposes. For production use, # each client should have its own certificate/key # pair. # # IF YOU HAVE NOT GENERATED INDIVIDUAL # CERTIFICATE/KEY PAIRS FOR EACH CLIENT, # EACH HAVING ITS OWN UNIQUE "COMMON NAME", # UNCOMMENT THIS LINE OUT. ;duplicate-cn # The keepalive directive causes ping-like # messages to be sent back and forth over # the link so that each side knows when # the other side has gone down. # Ping every 10 seconds, assume that remote # peer is down if no ping received during # a 120 second time period. keepalive 10 120 # For extra security beyond that provided # by SSL/TLS, create an "HMAC firewall" # to help block DoS attacks and UDP port flooding. # # Generate with: # openvpn --genkey --secret ta.key # # The server and each client must have # a copy of this key. # The second parameter should be '0' # on the server and '1' on the clients. ;tls-auth ta.key 0 # This file is secret # Select a cryptographic cipher. # This config item must be copied to # the client config file as well. ;cipher BF-CBC # Blowfish (default) ;cipher AES-128-CBC # AES ;cipher DES-EDE3-CBC # Triple-DES # Enable compression on the VPN link. 
# If you enable it here, you must also # enable it in the client config file. comp-lzo # The maximum number of concurrently connected # clients we want to allow. ;max-clients 100 # It's a good idea to reduce the OpenVPN # daemon's privileges after initialization. # # You can uncomment this out on # non-Windows systems. ;user nobody ;group nobody # The persist options will try to avoid # accessing certain resources on restart # that may no longer be accessible because # of the privilege downgrade. persist-key persist-tun # Output a short status file showing # current connections, truncated # and rewritten every minute. status openvpn-status.log # By default, log messages will go to the syslog (or # on Windows, if running as a service, they will go to # the "\Program Files\OpenVPN\log" directory). # Use log or log-append to override this default. # "log" will truncate the log file on OpenVPN startup, # while "log-append" will append to it. Use one # or the other (but not both). ;log openvpn.log ;log-append openvpn.log # Set the appropriate level of log # file verbosity. # # 0 is silent, except for fatal errors # 4 is reasonable for general usage # 5 and 6 can help to debug connection problems # 9 is extremely verbose verb 3 # Silence repeating messages. At most 20 # sequential messages of the same message # category will be output to the log. ;mute 20 I have changed IP's for security
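
    Given the symptom above - the server logs the client's initial TLS packet, but the handshake never completes - the usual suspects are the server's UDP replies being dropped on the way back, or a port/proto/tls-auth mismatch between the two configs. A minimal consistency sketch (the port number 9999 here is a placeholder, not the real value):

        # server.conf -- must match what the client dials:
        port 9999
        proto udp
        # if tls-auth is enabled here with key direction 0...
        ;tls-auth ta.key 0

        # client.ovpn -- same port, same protocol:
        remote <server-ip> 9999
        proto udp
        # ...the client needs the same ta.key with direction 1
        ;tls-auth ta.key 1

    In the files posted above, tls-auth is commented out on both sides, which is consistent; so the things to double-check are that the active remote line in the client really points at the nonstandard port the server is listening on, and that the firewall permits the server's UDP replies back out, not just the inbound packets.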


  • .htaccess not working (mod_rewrite)

    - by Mike Curry
    Edit: I am pretty sure my .htaccess file is NOT being executed, and the problem is NOT with my rewrite rules. I am not having any luck getting my .htaccess file with mod_rewrite working. Basically, all I am trying to do is remove 'www' from "http://www.site.com" and "https://www.site.com". If there is anything I am missing (conf files, etc.), let me know and I will update this. I just can't see what's wrong here... I am using a 1&1 VPS III virtual private server... anyone ever have this issue? I am using Ubuntu 8.04 Server LTS. Here is my .htaccess file (located @ /var/www/site/trunk/html/):

        Options +FollowSymLinks
        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^www\.(.*) [NC]
        RewriteRule (.*) //%1/$1 [L,R=301]

    My mod_rewrite is enabled: the auto-regenerated symlink is there in mods-available, and /usr/lib/apache2/modules/ contains mod_rewrite.so:

        root@s15348441:/etc/apache2/mods-available# more rewrite.load
        LoadModule rewrite_module /usr/lib/apache2/modules/mod_rewrite.so
        root@s15348441:/var/log# apache2ctl -t -D DUMP_MODULES
        Loaded Modules: core_module (static) log_config_module (static) logio_module (static) mpm_prefork_module (static) http_module (static) so_module (static) alias_module (shared) auth_basic_module (shared) authn_file_module (shared) authz_default_module (shared) authz_groupfile_module (shared) authz_host_module (shared) authz_user_module (shared) autoindex_module (shared) cgi_module (shared) dir_module (shared) env_module (shared) mime_module (shared) negotiation_module (shared) php5_module (shared) rewrite_module (shared) setenvif_module (shared) ssl_module (shared) status_module (shared)
        Syntax OK

    My apache config files: apache2.conf # # Based upon the NCSA server configuration files originally by Rob McCool. # # This is the main Apache server configuration file. It contains the # configuration directives that give the server its instructions. # See http://httpd.apache.org/docs/2.2/ for detailed information about # the directives. # # Do NOT simply read the instructions in here without understanding # what they do. They're here only as hints or reminders. If you are unsure # consult the online docs. You have been warned. # # The configuration directives are grouped into three basic sections: # 1. Directives that control the operation of the Apache server process as a # whole (the 'global environment'). # 2. Directives that define the parameters of the 'main' or 'default' server, # which responds to requests that aren't handled by a virtual host. # These directives also provide default values for the settings # of all virtual hosts. # 3. Settings for virtual hosts, which allow Web requests to be sent to # different IP addresses or hostnames and have them handled by the # same Apache server process. # # Configuration and logfile names: If the filenames you specify for many # of the server's control files begin with "/" (or "drive:/" for Win32), the # server will use that explicit path. If the filenames do *not* begin # with "/", the value of ServerRoot is prepended -- so "/var/log/apache2/foo.log" # with ServerRoot set to "" will be interpreted by the # server as "//var/log/apache2/foo.log". # ### Section 1: Global Environment # # The directives in this section affect the overall operation of Apache, # such as the number of concurrent requests it can handle or where it # can find its configuration files. # # # ServerRoot: The top of the directory tree under which the server's # configuration, error, and log files are kept. # # NOTE!
If you intend to place this on an NFS (or otherwise network)
# mounted filesystem then please read the LockFile documentation (available
# at <URL:http://httpd.apache.org/docs-2.1/mod/mpm_common.html#lockfile>);
# you will save yourself a lot of trouble.
#
# Do NOT add a slash at the end of the directory path.
#
ServerRoot "/etc/apache2"

#
# The accept serialization lock file MUST BE STORED ON A LOCAL DISK.
#
#<IfModule !mpm_winnt.c>
#<IfModule !mpm_netware.c>
LockFile /var/lock/apache2/accept.lock
#</IfModule>
#</IfModule>

#
# PidFile: The file in which the server should record its process
# identification number when it starts.
# This needs to be set in /etc/apache2/envvars
#
PidFile ${APACHE_PID_FILE}

#
# Timeout: The number of seconds before receives and sends time out.
#
Timeout 300

#
# KeepAlive: Whether or not to allow persistent connections (more than
# one request per connection). Set to "Off" to deactivate.
#
KeepAlive On

#
# MaxKeepAliveRequests: The maximum number of requests to allow
# during a persistent connection. Set to 0 to allow an unlimited amount.
# We recommend you leave this number high, for maximum performance.
#
MaxKeepAliveRequests 100

#
# KeepAliveTimeout: Number of seconds to wait for the next request from the
# same client on the same connection.
#
KeepAliveTimeout 15

##
## Server-Pool Size Regulation (MPM specific)
##

# prefork MPM
# StartServers: number of server processes to start
# MinSpareServers: minimum number of server processes which are kept spare
# MaxSpareServers: maximum number of server processes which are kept spare
# MaxClients: maximum number of server processes allowed to start
# MaxRequestsPerChild: maximum number of requests a server process serves
<IfModule mpm_prefork_module>
    StartServers          5
    MinSpareServers       5
    MaxSpareServers      10
    MaxClients          150
    MaxRequestsPerChild   0
</IfModule>

# worker MPM
# StartServers: initial number of server processes to start
# MaxClients: maximum number of simultaneous client connections
# MinSpareThreads: minimum number of worker threads which are kept spare
# MaxSpareThreads: maximum number of worker threads which are kept spare
# ThreadsPerChild: constant number of worker threads in each server process
# MaxRequestsPerChild: maximum number of requests a server process serves
<IfModule mpm_worker_module>
    StartServers          2
    MaxClients          150
    MinSpareThreads      25
    MaxSpareThreads      75
    ThreadsPerChild      25
    MaxRequestsPerChild   0
</IfModule>

# These need to be set in /etc/apache2/envvars
User ${APACHE_RUN_USER}
Group ${APACHE_RUN_GROUP}

#
# AccessFileName: The name of the file to look for in each directory
# for additional configuration directives. See also the AllowOverride
# directive.
#
AccessFileName .htaccess

#
# The following lines prevent .htaccess and .htpasswd files from being
# viewed by Web clients.
#
<Files ~ "^\.ht">
    Order allow,deny
    Deny from all
</Files>

#
# DefaultType is the default MIME type the server will use for a document
# if it cannot otherwise determine one, such as from filename extensions.
# If your server contains mostly text or HTML documents, "text/plain" is
# a good value. If most of your content is binary, such as applications
# or images, you may want to use "application/octet-stream" instead to
# keep browsers from trying to display binary files as though they are
# text.
#
DefaultType text/plain

#
# HostnameLookups: Log the names of clients or just their IP addresses
# e.g., www.apache.org (on) or 204.62.129.132 (off).
# The default is off because it'd be overall better for the net if people
# had to knowingly turn this feature on, since enabling it means that
# each client request will result in AT LEAST one lookup request to the
# nameserver.
#
HostnameLookups Off

# ErrorLog: The location of the error log file.
# If you do not specify an ErrorLog directive within a <VirtualHost>
# container, error messages relating to that virtual host will be
# logged here. If you *do* define an error logfile for a <VirtualHost>
# container, that host's errors will be logged there and not here.
#
ErrorLog /var/log/apache2/error.log

#
# LogLevel: Control the number of messages logged to the error_log.
# Possible values include: debug, info, notice, warn, error, crit,
# alert, emerg.
#
LogLevel warn

# Include module configuration:
Include /etc/apache2/mods-enabled/*.load
Include /etc/apache2/mods-enabled/*.conf

# Include all the user configurations:
Include /etc/apache2/httpd.conf

# Include ports listing
Include /etc/apache2/ports.conf

#
# The following directives define some format nicknames for use with
# a CustomLog directive (see below).
# If you are behind a reverse proxy, you might want to change %h into %{X-Forwarded-For}i
#
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %l %u %t \"%r\" %>s %b" common
LogFormat "%{Referer}i -> %U" referer
LogFormat "%{User-agent}i" agent

#
# ServerTokens
# This directive configures what you return as the Server HTTP response
# Header. The default is 'Full' which sends information about the OS-Type
# and compiled in modules.
# Set to one of: Full | OS | Minor | Minimal | Major | Prod
# where Full conveys the most information, and Prod the least.
#
ServerTokens Full

#
# Optionally add a line containing the server version and virtual host
# name to server-generated pages (internal error documents, FTP directory
# listings, mod_status and mod_info output etc., but not CGI generated
# documents or custom error documents).
# Set to "EMail" to also include a mailto: link to the ServerAdmin.
# Set to one of: On | Off | EMail
#
ServerSignature On

#
# Customizable error responses come in three flavors:
# 1) plain text 2) local redirects 3) external redirects
#
# Some examples:
#ErrorDocument 500 "The server made a boo boo."
#ErrorDocument 404 /missing.html
#ErrorDocument 404 "/cgi-bin/missing_handler.pl"
#ErrorDocument 402 http://www.example.com/subscription_info.html
#

#
# Putting this all together, we can internationalize error responses.
#
# We use Alias to redirect any /error/HTTP_<error>.html.var response to
# our collection of by-error message multi-language collections. We use
# includes to substitute the appropriate text.
#
# You can modify the messages' appearance without changing any of the
# default HTTP_<error>.html.var files by adding the line:
#
#   Alias /error/include/ "/your/include/path/"
#
# which allows you to create your own set of files by starting with the
# /usr/share/apache2/error/include/ files and copying them to /your/include/path/,
# even on a per-VirtualHost basis. The default include files will display
# your Apache version number and your ServerAdmin email address regardless
# of the setting of ServerSignature.
#
# The internationalized error documents require mod_alias, mod_include
# and mod_negotiation. To activate them, uncomment the following 30 lines.

# Alias /error/ "/usr/share/apache2/error/"
#
# <Directory "/usr/share/apache2/error">
#     AllowOverride None
#     Options IncludesNoExec
#     AddOutputFilter Includes html
#     AddHandler type-map var
#     Order allow,deny
#     Allow from all
#     LanguagePriority en cs de es fr it nl sv pt-br ro
#     ForceLanguagePriority Prefer Fallback
# </Directory>
#
# ErrorDocument 400 /error/HTTP_BAD_REQUEST.html.var
# ErrorDocument 401 /error/HTTP_UNAUTHORIZED.html.var
# ErrorDocument 403 /error/HTTP_FORBIDDEN.html.var
# ErrorDocument 404 /error/HTTP_NOT_FOUND.html.var
# ErrorDocument 405 /error/HTTP_METHOD_NOT_ALLOWED.html.var
# ErrorDocument 408 /error/HTTP_REQUEST_TIME_OUT.html.var
# ErrorDocument 410 /error/HTTP_GONE.html.var
# ErrorDocument 411 /error/HTTP_LENGTH_REQUIRED.html.var
# ErrorDocument 412 /error/HTTP_PRECONDITION_FAILED.html.var
# ErrorDocument 413 /error/HTTP_REQUEST_ENTITY_TOO_LARGE.html.var
# ErrorDocument 414 /error/HTTP_REQUEST_URI_TOO_LARGE.html.var
# ErrorDocument 415 /error/HTTP_UNSUPPORTED_MEDIA_TYPE.html.var
# ErrorDocument 500 /error/HTTP_INTERNAL_SERVER_ERROR.html.var
# ErrorDocument 501 /error/HTTP_NOT_IMPLEMENTED.html.var
# ErrorDocument 502 /error/HTTP_BAD_GATEWAY.html.var
# ErrorDocument 503 /error/HTTP_SERVICE_UNAVAILABLE.html.var
# ErrorDocument 506 /error/HTTP_VARIANT_ALSO_VARIES.html.var

# Include of directories ignores editors' and dpkg's backup files,
# see README.Debian for details.

# Include generic snippets of statements
Include /etc/apache2/conf.d/

# Include the virtual host configurations:
Include /etc/apache2/sites-enabled/

My default config file for www on apache:

NameVirtualHost *:80
<VirtualHost *:80>
    ServerAdmin [email protected]
    #SSLEnable
    #SSLVerifyClient none
    #SSLCertificateFile /usr/local/ssl/crt/public.crt
    #SSLCertificateKeyFile /usr/local/ssl/private/private.key
    DocumentRoot /var/www/site/trunk/html
    <Directory />
        Options FollowSymLinks
        AllowOverride all
    </Directory>
    <Directory /var/www/site/trunk/html>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride all
        Order allow,deny
        allow from all
    </Directory>
    ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
    <Directory "/usr/lib/cgi-bin">
        AllowOverride None
        Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
        Order allow,deny
        Allow from all
    </Directory>
    ErrorLog /var/log/apache2/error.log
    # Possible values include: debug, info, notice, warn, error, crit,
    # alert, emerg.
    LogLevel warn
    CustomLog /var/log/apache2/access.log combined
    ServerSignature On
    Alias /doc/ "/usr/share/doc/"
    <Directory "/usr/share/doc/">
        Options Indexes MultiViews FollowSymLinks
        AllowOverride None
        Order deny,allow
        Deny from all
        Allow from 127.0.0.0/255.0.0.0 ::1/128
    </Directory>
</VirtualHost>

My ssl config file:

NameVirtualHost *:443
<VirtualHost *:443>
    ServerAdmin [email protected]
    #SSLEnable
    #SSLVerifyClient none
    #SSLCertificateFile /usr/local/ssl/crt/public.crt
    #SSLCertificateKeyFile /usr/local/ssl/private/private.key
    DocumentRoot /var/www/site/trunk/html
    <Directory />
        Options FollowSymLinks
        AllowOverride all
    </Directory>
    <Directory /var/www/site/trunk/html>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride all
        Order allow,deny
        allow from all
    </Directory>
    ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
    <Directory "/usr/lib/cgi-bin">
        AllowOverride None
        Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
        Order allow,deny
        Allow from all
    </Directory>
    ErrorLog /var/log/apache2/error.log
    # Possible values include: debug, info, notice, warn, error, crit,
    # alert, emerg.
    LogLevel warn
    SSLEngine On
    SSLCertificateFile /usr/local/ssl/crt/public.crt
    SSLCertificateKeyFile /usr/local/ssl/private/private.key
    CustomLog /var/log/apache2/access.log combined
    ServerSignature On
    Alias /doc/ "/usr/share/doc/"
    <Directory "/usr/share/doc/">
        Options Indexes MultiViews FollowSymLinks
        AllowOverride None
        Order deny,allow
        Deny from all
        Allow from 127.0.0.0/255.0.0.0 ::1/128
    </Directory>
</VirtualHost>

My /etc/apache2/httpd.conf is blank.

The directory /etc/apache2/conf.d has nothing in it but one file (charset). Contents of /etc/apache2/conf.d/charset:

# Read the documentation before enabling AddDefaultCharset.
# In general, it is only a good idea if you know that all your files
# have this encoding. It will override any encoding given in the files
# in meta http-equiv or xml encoding tags.
#AddDefaultCharset UTF-8

My apache error.log:

[Wed Jun 03 00:12:31 2009] [error] [client 216.168.43.234] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:)
[Wed Jun 03 05:03:51 2009] [error] [client 99.247.237.46] File does not exist: /var/www/site/trunk/html/favicon.ico
[Wed Jun 03 05:03:54 2009] [error] [client 99.247.237.46] File does not exist: /var/www/site/trunk/html/favicon.ico
[Wed Jun 03 05:13:48 2009] [error] [client 99.247.237.46] File does not exist: /var/www/site/trunk/html/favicon.ico
[Wed Jun 03 05:13:51 2009] [error] [client 99.247.237.46] File does not exist: /var/www/site/trunk/html/favicon.ico
[Wed Jun 03 05:13:54 2009] [error] [client 99.247.237.46] File does not exist: /var/www/site/trunk/html/favicon.ico
[Wed Jun 03 05:13:57 2009] [error] [client 99.247.237.46] File does not exist: /var/www/site/trunk/html/favicon.ico
[Wed Jun 03 05:17:28 2009] [error] [client 99.247.237.46] File does not exist: /var/www/site/trunk/html/favicon.ico
[Wed Jun 03 05:26:23 2009] [notice] caught SIGWINCH, shutting down gracefully
[Wed Jun 03 05:26:34 2009] [notice] Apache/2.2.8 (Ubuntu) PHP/5.2.4-2ubuntu5.6 with Suhosin-Patch mod_ssl/2.2.8 OpenSSL/0.9.8g configured -- resuming normal operations
[Wed Jun 03 06:03:41 2009] [notice] caught SIGWINCH, shutting down gracefully
[Wed Jun 03 06:03:51 2009] [notice] Apache/2.2.8 (Ubuntu) PHP/5.2.4-2ubuntu5.6 with Suhosin-Patch mod_ssl/2.2.8 OpenSSL/0.9.8g configured -- resuming normal operations
[Wed Jun 03 06:25:07 2009] [notice] caught SIGWINCH, shutting down gracefully
[Wed Jun 03 06:25:17 2009] [notice] Apache/2.2.8 (Ubuntu) PHP/5.2.4-2ubuntu5.6 with Suhosin-Patch mod_ssl/2.2.8 OpenSSL/0.9.8g configured -- resuming normal operations
[Wed Jun 03 12:09:25 2009] [error] [client 61.139.105.163] File does not exist: /var/www/site/trunk/html/fastenv
[Wed Jun 03 15:04:42 2009] [notice] Graceful restart requested, doing restart
[Wed Jun 03 15:04:43 2009] [notice] Apache/2.2.8 (Ubuntu) PHP/5.2.4-2ubuntu5.6 with Suhosin-Patch mod_ssl/2.2.8 OpenSSL/0.9.8g configured -- resuming normal operations
[Wed Jun 03 15:29:51 2009] [error] [client 99.247.237.46] File does not exist: /var/www/site/trunk/html/favicon.ico
[Wed Jun 03 15:29:54 2009] [error] [client 99.247.237.46] File does not exist: /var/www/site/trunk/html/favicon.ico
[Wed Jun 03 15:30:32 2009] [error] [client 99.247.237.46] File does not exist: /var/www/site/trunk/html/favicon.ico
[Wed Jun 03 15:45:54 2009] [notice] caught SIGWINCH, shutting down gracefully
[Wed Jun 03 15:46:05 2009] [notice] Apache/2.2.8 (Ubuntu) PHP/5.2.4-2ubuntu5.6 with Suhosin-Patch mod_ssl/2.2.8 OpenSSL/0.9.8g configured -- resuming normal operations
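
A hedged aside on the configuration above: the stock comments themselves point out that ServerTokens "Full" conveys the most information about the server and "Prod" the least, yet this setup runs with ServerTokens Full and ServerSignature On. If the goal is to reveal as little as possible in response headers and server-generated pages, a minimal tweak of the two directives discussed above (a sketch, not a fix for any specific fault here) would be:

ServerTokens Prod
ServerSignature Off

For what it's worth, the error log above looks routine: the w00tw00t.at.ISC.SANS.DFind entry is a well-known automated vulnerability scanner probing the host, and the repeated favicon.ico errors only mean browsers asked for an icon that isn't in the DocumentRoot.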

    Read the article

  • App cannot start at all in Android 2.2 (Froyo)

    - by Roland Lim
Dear fellow Android developers & Google Engineers,

My app has been running okay until the recent Froyo update. After installing the Android 2.2 SDK, I can compile my code without any errors. However, when I run it, it just force closes. Here's the log:

05-23 10:15:13.463: DEBUG/AndroidRuntime(423): >>>>>>>>>>>>>> AndroidRuntime START <<<<<<<<<<<<<<
05-23 10:15:13.463: DEBUG/AndroidRuntime(423): CheckJNI is ON
05-23 10:15:14.193: DEBUG/AndroidRuntime(423): --- registering native functions ---
05-23 10:15:15.293: DEBUG/AndroidRuntime(423): Shutting down VM
05-23 10:15:15.303: DEBUG/dalvikvm(423): Debugger has detached; object registry had 1 entries
05-23 10:15:15.333: INFO/AndroidRuntime(423): NOTE: attach of thread 'Binder Thread #3' failed
05-23 10:15:16.003: DEBUG/AndroidRuntime(431): >>>>>>>>>>>>>> AndroidRuntime START <<<<<<<<<<<<<<
05-23 10:15:16.013: DEBUG/AndroidRuntime(431): CheckJNI is ON
05-23 10:15:16.273: DEBUG/AndroidRuntime(431): --- registering native functions ---
05-23 10:15:17.392: INFO/ActivityManager(59): Starting activity: Intent { act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] flg=0x10000000 cmp=com.handyapps.easymoney/.EasyMoney }
05-23 10:15:17.602: DEBUG/AndroidRuntime(431): Shutting down VM
05-23 10:15:17.662: DEBUG/dalvikvm(431): Debugger has detached; object registry had 1 entries
05-23 10:15:17.742: INFO/AndroidRuntime(431): NOTE: attach of thread 'Binder Thread #3' failed
05-23 10:15:17.912: INFO/ActivityManager(59): Start proc com.handyapps.easymoney for activity com.handyapps.easymoney/.EasyMoney: pid=438 uid=10035 gids={1006, 1015}
05-23 10:15:19.032: DEBUG/AndroidRuntime(438): Shutting down VM
05-23 10:15:19.032: WARN/dalvikvm(438): threadid=1: thread exiting with uncaught exception (group=0x4001d800)
05-23 10:15:19.062: ERROR/AndroidRuntime(438): FATAL EXCEPTION: main
05-23 10:15:19.062: ERROR/AndroidRuntime(438): java.lang.RuntimeException: Unable to instantiate application com.handyapps.easymoney.EasyMoney: java.lang.ClassCastException: com.handyapps.easymoney.EasyMoney
05-23 10:15:19.062: ERROR/AndroidRuntime(438):     at android.app.ActivityThread$PackageInfo.makeApplication(ActivityThread.java:649)
05-23 10:15:19.062: ERROR/AndroidRuntime(438):     at android.app.ActivityThread.handleBindApplication(ActivityThread.java:4232)
05-23 10:15:19.062: ERROR/AndroidRuntime(438):     at android.app.ActivityThread.access$3000(ActivityThread.java:125)
05-23 10:15:19.062: ERROR/AndroidRuntime(438):     at android.app.ActivityThread$H.handleMessage(ActivityThread.java:2071)
05-23 10:15:19.062: ERROR/AndroidRuntime(438):     at android.os.Handler.dispatchMessage(Handler.java:99)
05-23 10:15:19.062: ERROR/AndroidRuntime(438):     at android.os.Looper.loop(Looper.java:123)
05-23 10:15:19.062: ERROR/AndroidRuntime(438):     at android.app.ActivityThread.main(ActivityThread.java:4627)
05-23 10:15:19.062: ERROR/AndroidRuntime(438):     at java.lang.reflect.Method.invokeNative(Native Method)
05-23 10:15:19.062: ERROR/AndroidRuntime(438):     at java.lang.reflect.Method.invoke(Method.java:521)
05-23 10:15:19.062: ERROR/AndroidRuntime(438):     at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:868)
05-23 10:15:19.062: ERROR/AndroidRuntime(438):     at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:626)
05-23 10:15:19.062: ERROR/AndroidRuntime(438):     at dalvik.system.NativeStart.main(Native Method)
05-23 10:15:19.062: ERROR/AndroidRuntime(438): Caused by: java.lang.ClassCastException: com.handyapps.easymoney.EasyMoney
05-23 10:15:19.062: ERROR/AndroidRuntime(438):     at android.app.Instrumentation.newApplication(Instrumentation.java:957)
05-23 10:15:19.062: ERROR/AndroidRuntime(438):     at android.app.Instrumentation.newApplication(Instrumentation.java:942)
05-23 10:15:19.062: ERROR/AndroidRuntime(438):     at android.app.ActivityThread$PackageInfo.makeApplication(ActivityThread.java:644)
05-23 10:15:19.062: ERROR/AndroidRuntime(438):     ... 11 more
05-23 10:15:19.082: WARN/ActivityManager(59): Force finishing activity com.handyapps.easymoney/.EasyMoney
05-23 10:15:19.592: WARN/ActivityManager(59): Activity pause timeout for HistoryRecord{450018f0 com.handyapps.easymoney/.EasyMoney}

//////////////THE ANDROID MANIFEST FILE////

<uses-permission android:name="android.permission.READ_PHONE_STATE"/>
<uses-permission android:name="android.permission.CAMERA"/>
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-feature android:name="android.hardware.camera" />
<uses-sdk android:minSdkVersion="3" android:targetSdkVersion="4" />
<application android:icon="@drawable/icon" android:name="@string/app_name"
    android:label="@string/app_name" android:debuggable="false">
    <activity android:name=".EasyMoney" android:label="@string/app_name"
        android:theme="@android:style/Theme.NoTitleBar"
        android:launchMode="singleTask" android:clearTaskOnLaunch="true">
        <intent-filter>
            <action android:name="android.intent.action.MAIN" />
            <category android:name="android.intent.category.LAUNCHER" />
        </intent-filter>
    </activity>
    <activity android:name=".TranList" android:label="@string/app_name" android:theme="@android:style/Theme.Light.NoTitleBar"/>
    <activity android:name=".TranEdit" android:theme="@android:style/Theme.Light.NoTitleBar" android:windowSoftInputMode="stateAlwaysHidden"/>
    <activity android:name=".BillReminderEdit" android:theme="@android:style/Theme.Light.NoTitleBar" android:windowSoftInputMode="stateAlwaysHidden"/>
    <activity android:name=".BillReminderList" android:launchMode="singleTop" android:theme="@android:style/Theme.Light.NoTitleBar"/>
    <activity android:name=".BudgetList" android:theme="@android:style/Theme.Light.NoTitleBar"/>
    <activity android:name=".BudgetEdit" android:theme="@android:style/Theme.Light.NoTitleBar" android:windowSoftInputMode="stateAlwaysHidden"/>
    <activity android:name=".Search" android:theme="@style/CustomDialogTheme" android:windowSoftInputMode="stateAlwaysHidden"/>
    <activity android:name=".PasscodeEntry" android:theme="@style/CustomDialogTheme" android:windowSoftInputMode="stateAlwaysHidden" android:screenOrientation="portrait"/>
    <activity android:name=".AccountList" android:theme="@android:style/Theme.Light.NoTitleBar"></activity>
    <activity android:name=".AccountEdit" android:theme="@android:style/Theme.Light.NoTitleBar" android:windowSoftInputMode="stateAlwaysHidden"/>
    <activity android:name=".UserSettingsEdit" android:theme="@android:style/Theme.Light.NoTitleBar" android:windowSoftInputMode="stateAlwaysHidden"/>
    <activity android:name=".CurrencySettingsEdit" android:theme="@android:style/Theme.Light.NoTitleBar" android:windowSoftInputMode="stateAlwaysHidden"/>
    <activity android:name=".DisplaySettingsEdit" android:theme="@android:style/Theme.Light.NoTitleBar" android:windowSoftInputMode="stateAlwaysHidden"/>
    <activity android:name=".BackupSettingsEdit" android:theme="@android:style/Theme.Light.NoTitleBar" android:windowSoftInputMode="stateAlwaysHidden"/>
    <activity android:name=".CategoryList" android:theme="@android:style/Theme.Light.NoTitleBar" />
    <activity android:name=".CategoryEdit" android:theme="@android:style/Theme.Light.NoTitleBar" android:windowSoftInputMode="stateAlwaysHidden"/>
    <activity android:name=".ExpenseByCategory" android:theme="@android:style/Theme.Light.NoTitleBar"/>
    <activity android:name=".BalanceReport" android:theme="@android:style/Theme.Light.NoTitleBar"/>
    <activity android:name=".MonthlyExpenseReport" android:theme="@android:style/Theme.Light.NoTitleBar"/>
    <activity android:name=".MonthlyIncomeReport" android:theme="@android:style/Theme.Light.NoTitleBar"/>
    <activity android:name=".MonthlyCashflowReport" android:theme="@android:style/Theme.Light.NoTitleBar"/>
    <activity android:name=".PhotoList" android:theme="@android:style/Theme.Light.NoTitleBar" />
    <activity android:name=".ExpenseByPayee" android:theme="@android:style/Theme.Light.NoTitleBar"/>
    <activity android:name=".ExpenseBySubCategory" android:theme="@android:style/Theme.Light.NoTitleBar"/>
    <service android:name="StartAlarm_Service">
        <intent-filter>
            <action android:name="com.handyapps.easymoney.StartAlarm_Service" />
        </intent-filter>
    </service>
    <service android:name=".AlarmService_Service" android:process=":remote" />
    <receiver android:name="StartupIntentReceiver">
        <intent-filter>
            <action android:name="android.intent.action.BOOT_COMPLETED" />
            <category android:name="android.intent.category.HOME" />
        </intent-filter>
    </receiver>
    <receiver android:name=".WidgetProvider" android:label="@string/widget_name">
        <intent-filter>
            <action android:name="android.appwidget.action.APPWIDGET_UPDATE" />
        </intent-filter>
        <meta-data android:name="android.appwidget.provider" android:resource="@xml/widget" />
    </receiver>
    <receiver android:name=".WidgetProvider" android:label="@string/widget_name">
        <intent-filter>
            <action android:name="android.appwidget.action.APPWIDGET_UPDATE" />
            <data android:scheme="easymoney_widget" />
        </intent-filter>
        <meta-data android:name="android.appwidget.provider" android:resource="@xml/widget" />
    </receiver>
    <receiver android:name=".WidgetProvider">
        <intent-filter>
            <action android:name="com.handyapps.easymoney.WIDGET_CONTROL" />
            <data android:scheme="easymoney_widget" />
        </intent-filter>
    </receiver>
</application>

The main startup class is com.handyapps.easymoney.EasyMoney. I placed a breakpoint at the start of the onCreate() method but I discovered it didn't even reach there. Somehow, the application just couldn't be loaded in Android 2.2... but it works perfectly fine for all the previous Android versions. Been trying to find the cause for the past 2 days but am totally stumped!! Any help will be greatly appreciated!!!! Thanks!!

Roland

    Read the article

  • Inheritance of templates in WPF

    - by Alxandr
I'm trying to make sure that every child of a given element (MPF.MWindow) gets custom templates. For instance, the button should get the template defined in resMButton.xaml. As of now I'm using the following code (resMWindow.xaml):

<ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:local="clr-namespace:MPF">
  <Style x:Key="SystemKeyAnimations" TargetType="{x:Type Button}">
    <Setter Property="Opacity" Value="0.5" />
    <Setter Property="Background" Value="Transparent" />
    <Style.Triggers>
      <EventTrigger RoutedEvent="Mouse.MouseEnter">
        <BeginStoryboard>
          <Storyboard>
            <DoubleAnimationUsingKeyFrames BeginTime="00:00:00" Storyboard.TargetProperty="Opacity">
              <SplineDoubleKeyFrame KeyTime="00:00:00.2" Value="1.0" />
            </DoubleAnimationUsingKeyFrames>
          </Storyboard>
        </BeginStoryboard>
      </EventTrigger>
      <EventTrigger RoutedEvent="Mouse.MouseLeave">
        <BeginStoryboard>
          <Storyboard>
            <DoubleAnimationUsingKeyFrames BeginTime="00:00:00" Storyboard.TargetProperty="Opacity">
              <SplineDoubleKeyFrame KeyTime="00:00:00.2" Value="0.5" />
            </DoubleAnimationUsingKeyFrames>
          </Storyboard>
        </BeginStoryboard>
      </EventTrigger>
    </Style.Triggers>
  </Style>
  <Style TargetType="{x:Type local:MWindow}">
    <!-- Remove default frame appearance -->
    <Setter Property="WindowStyle" Value="None" />
    <Setter Property="AllowsTransparency" Value="True" />
    <Setter Property="Background" Value="Transparent" />
    <Setter Property="Template">
      <Setter.Value>
        <ControlTemplate TargetType="{x:Type local:MWindow}">
          <Border Background="{TemplateBinding Background}"
                  BorderBrush="{TemplateBinding BorderBrush}"
                  BorderThickness="{TemplateBinding BorderThickness}"
                  x:Name="ChromeBorder">
            <Grid>
              <Grid.ColumnDefinitions>
                <ColumnDefinition Width="4" />
                <ColumnDefinition />
                <ColumnDefinition Width="4" />
              </Grid.ColumnDefinitions>
              <Grid.RowDefinitions>
                <RowDefinition Height="4" />
                <RowDefinition />
                <RowDefinition Height="4" />
              </Grid.RowDefinitions>
              <Thumb Grid.Row="0" Grid.Column="1" x:Name="TopThumb" Cursor="SizeNS" BorderThickness="4" BorderBrush="Transparent" />
              <Thumb Grid.Row="2" Grid.Column="1" x:Name="BottomThumb" Cursor="SizeNS" BorderThickness="4" BorderBrush="Transparent" />
              <Thumb Grid.Row="1" Grid.Column="0" x:Name="LeftThumb" Cursor="SizeWE" BorderThickness="4" BorderBrush="Transparent" />
              <Thumb Grid.Row="1" Grid.Column="2" x:Name="RightThumb" Cursor="SizeWE" BorderThickness="4" BorderBrush="Transparent" />
              <Thumb Grid.Row="0" Grid.Column="0" x:Name="TopLeftThumb" Cursor="SizeNWSE" BorderThickness="5" BorderBrush="Transparent" />
              <Thumb Grid.Row="0" Grid.Column="2" x:Name="TopRightThumb" Cursor="SizeNESW" BorderThickness="5" BorderBrush="Transparent" />
              <Thumb Grid.Row="2" Grid.Column="0" x:Name="BottomLeftThumb" Cursor="SizeNESW" BorderThickness="5" BorderBrush="Transparent" />
              <Thumb Grid.Row="2" Grid.Column="2" x:Name="BottomRightThumb" Cursor="SizeNWSE" BorderThickness="5" BorderBrush="Transparent" />
              <Grid Grid.Row="1" Grid.Column="1">
                <Grid.RowDefinitions>
                  <RowDefinition Height="20" />
                  <RowDefinition />
                </Grid.RowDefinitions>
                <Grid>
                  <Grid.ColumnDefinitions>
                    <ColumnDefinition />
                    <ColumnDefinition Width="Auto" />
                  </Grid.ColumnDefinitions>
                  <StackPanel Orientation="Horizontal" Grid.Column="1">
                    <Button Command="local:WindowCommands.Minimize" Style="{StaticResource ResourceKey=SystemKeyAnimations}">
                      <Button.Template>
                        <ControlTemplate>
                          <Canvas Width="10" Height="10" Margin="5" Background="Transparent">
                            <Line X1="0" X2="10" Y1="5" Y2="5" Stroke="White" StrokeThickness="2" />
                          </Canvas>
                        </ControlTemplate>
                      </Button.Template>
                    </Button>
                    <Button Command="local:WindowCommands.Maximize" x:Name="MaximizeButton" Style="{StaticResource ResourceKey=SystemKeyAnimations}">
                      <Button.Template>
                        <ControlTemplate>
                          <Canvas Width="10" Height="10" Margin="5" Background="Transparent">
                            <Rectangle Width="10" Height="10" Stroke="White" StrokeThickness="2" />
                          </Canvas>
                        </ControlTemplate>
                      </Button.Template>
                    </Button>
                    <Button Command="ApplicationCommands.Close" Style="{StaticResource ResourceKey=SystemKeyAnimations}">
                      <Button.Template>
                        <ControlTemplate>
                          <Canvas Width="10" Height="10" Margin="5" Background="Transparent">
                            <Line X1="0" X2="10" Y1="0" Y2="10" Stroke="White" StrokeThickness="2" />
                            <Line X1="10" X2="0" Y1="0" Y2="10" Stroke="White" StrokeThickness="2" />
                          </Canvas>
                        </ControlTemplate>
                      </Button.Template>
                    </Button>
                  </StackPanel>
                  <ContentControl x:Name="TitleContentControl">
                    <TextBlock Text="{TemplateBinding Title}" Foreground="DarkGray" Margin="5,0" />
                  </ContentControl>
                </Grid>
                <ContentPresenter Content="{TemplateBinding Content}" Grid.Row="1">
                  <ContentPresenter.Resources>
                    <ResourceDictionary>
                      <ResourceDictionary.MergedDictionaries>
                        <ResourceDictionary Source="/MPF;component/Themes/resMWindowContent.xaml" />
                      </ResourceDictionary.MergedDictionaries>
                    </ResourceDictionary>
                  </ContentPresenter.Resources>
                </ContentPresenter>
              </Grid>
            </Grid>
          </Border>
        </ControlTemplate>
      </Setter.Value>
    </Setter>
  </Style>
</ResourceDictionary>

As you can see, in the ContentPresenter that gets the content of the window I merge in a dictionary called resMWindowContent.xaml. The resMWindowContent.xaml looks as follows:

<ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:local="clr-namespace:MPF">
  <ResourceDictionary.MergedDictionaries>
    <ResourceDictionary Source="/MPF;component/Themes/resMButton.xaml" />
  </ResourceDictionary.MergedDictionaries>
</ResourceDictionary>

It simply merges in the resMButton.xaml dictionary (this is done because in the future I will have MTextBox, MList... and I want to separate them). The resMButton.xaml looks as follows:

<ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:local="clr-namespace:MPF">
  <Style TargetType="{x:Type Button}">
    <Setter Property="Template">
      <Setter.Value>
        <ControlTemplate TargetType="{x:Type Button}">
          <Grid Background="Transparent">
            <Rectangle Stroke="{TemplateBinding BorderBrush}" StrokeThickness="{TemplateBinding BorderThickness}" Fill="{TemplateBinding Background}" />
            <ContentPresenter Content="{TemplateBinding Content}" Margin="3" />
          </Grid>
        </ControlTemplate>
      </Setter.Value>
    </Setter>
  </Style>
</ResourceDictionary>

A simple template drawing a square button. However, it isn't applied at all. My buttons remain normal, and I don't understand what I'm doing wrong. I just want every button inside the MWindow to get a special style (and in time every textbox and so forth). How do I achieve this? One note though: it's important that the styles don't apply to elements outside an MWindow.

    Read the article

  • Google Maps API v3 - Different markers/labels on different zoom levels

    - by krikara
I was wondering whether Google Maps has a feature to show different markers at different zoom levels. For example, at zoom level 1 I want one marker over China with a label saying "5". As the user zooms in, let's say to zoom level 4, I want the previous marker and label to disappear and 5 new markers/labels to appear, each on a different city in China, all saying "1". Thus China will show a number, and the cities in China will show numbers adding up to China's number. The key concept I am trying to figure out here is how to hide markers and labels based on zoom level. A constraint for me is that I am currently living in China, where Google is censored, so a lot of online documents are blocked for me, including much of Google's documentation. Here is my code thus far:

<!DOCTYPE html>
<html>
<head>
<meta name="viewport" content="initial-scale=1.0, user-scalable=no" />
<title>TM China</title>
<style type="text/css">
  html, body, #map_canvas { margin: 0; padding: 0; height: 100% }
  .labels {
    color: red;
    background-color: white;
    font-family: "Lucida Grande", "Arial", sans-serif;
    font-size: 10px;
    font-weight: bold;
    text-align: center;
    width: 60px;
    border: 2px solid black;
    white-space: nowrap;
  }
</style>
<script type="text/javascript" src="http://maps.googleapis.com/maps/api/js?key=AIzaSyDV0lcdK7C2GHbQAmdkBID70Uppuf-D030&sensor=true">
</script>
<script type="text/javascript">
// Packed (minified) MarkerWithLabel source, kept verbatim on one line:
eval(function(p,a,c,k,e,r){e=function(c){return(c<a?'':e(parseInt(c/a)))+((c=c%a)>35?String.fromCharCode(c+29):c.toString(36))};if(!''.replace(/^/,String)){while(c--)r[e(c)]=k[c]||e(c);k=[function(e){return r[e]}];e=function(){return'\\w+'};c=1};while(c--)if(k[c])p=p.replace(new RegExp('\\b'+e(c)+'\\b','g'),k[c]);return p}('7 m(a){2.3=a;2.8=V.1E("1u");2.8.4.C="I: 1m; J: 1g;";2.k=V.1E("1u");2.k.4.C=2.8.4.C}m.l=E 6.5.22();m.l.1Y=7(){n c=2;n h=t;n f=t;n j;n b;n d,K;n i;n g=7(e){p(e.1v){e.1v()}e.2b=u;p(e.1t){e.1t()}};2.1s().24.G(2.8);2.1s().20.G(2.k);2.11=[6.5.9.w(V,"1o",7(a){p(f){a.s=j;i=u;6.5.9.r(c.3,"1n",a)}h=t;6.5.9.r(c.3,"1o",a)}),6.5.9.o(c.3.1P(),"1N",7(a){p(h&&c.3.1M()){a.s=E 6.5.1J(a.s.U()-d,a.s.T()-K);j=a.s;p(f){6.5.9.r(c.3,"1i",a)}F{d=a.s.U()-c.3.Z().U();K=a.s.T()-c.3.Z().T();6.5.9.r(c.3,"1e",a)}}}),6.5.9.w(2.k,"1d",7(e){c.k.4.1c="2i";6.5.9.r(c.3,"1d",e)}),6.5.9.w(2.k,"1D",7(e){c.k.4.1c=c.3.2g();6.5.9.r(c.3,"1D",e)}),6.5.9.w(2.k,"1C",7(e){p(i){i=t}F{g(e);6.5.9.r(c.3,"1C",e)}}),6.5.9.w(2.k,"1A",7(e){g(e);6.5.9.r(c.3,"1A",e)}),6.5.9.w(2.k,"1z",7(e){h=u;f=t;d=0;K=0;g(e);6.5.9.r(c.3,"1z",e)}),6.5.9.o(2.3,"1e",7(a){f=u;b=c.3.1b()}),6.5.9.o(2.3,"1i",7(a){c.3.O(a.s);c.3.D(2a)}),6.5.9.o(2.3,"1n",7(a){f=t;c.3.D(b)}),6.5.9.o(2.3,"29",7(){c.O()}),6.5.9.o(2.3,"28",7(){c.D()}),6.5.9.o(2.3,"27",7(){c.N()}),6.5.9.o(2.3,"26",7(){c.N()}),6.5.9.o(2.3,"25",7(){c.16()}),6.5.9.o(2.3,"23",7(){c.15()}),6.5.9.o(2.3,"21",7(){c.13()}),6.5.9.o(2.3,"1Z",7(){c.L()}),6.5.9.o(2.3,"1X",7(){c.L()})]};m.l.1W=7(){n i;2.8.1r.1q(2.8);2.k.1r.1q(2.k);1p(i=0;i<2.11.1V;i++){6.5.9.1U(2.11[i])}};m.l.1T=7(){2.15();2.16();2.L()};m.l.15=7(){n a=2.3.z("Y");p(H a.1S==="P"){2.8.W=a;2.k.W=2.8.W}F{2.8.G(a);a=a.1R(u);2.k.G(a)}};m.l.16=7(){2.k.1Q=2.3.1O()||""};m.l.L=7(){n i,q;2.8.S=2.3.z("R");2.k.S=2.8.S;2.8.4.C="";2.k.4.C="";q=2.3.z("q");1p(i 1L q){p(q.1K(i)){2.8.4[i]=q[i];2.k.4[i]=q[i]}}2.1l()};m.l.1l=7(){2.8.4.I="1m";2.8.4.J="1g";p(H 2.8.4.B!=="P"){2.8.4.1k="1j(B="+(2.8.4.B*1I)+")"}2.k.4.I=2.8.4.I;2.k.4.J=2.8.4.J;2.k.4.B=0.1H;2.k.4.1k="1j(B=1)";2.13();2.O();2.N()};m.l.13=7(){n a=2.3.z("X");2.8.4.1h=-a.x+"v";2.8.4.1f=-a.y+"v";2.k.4.1h=-a.x+"v";2.k.4.1f=-a.y+"v"};m.l.O=7(){n a=2.1G().1F(2.3.Z());2.8.4.12=a.x+"v";2.8.4.M=a.y+"v";2.k.4.12=2.8.4.12;2.k.4.M=2.8.4.M;2.D()};m.l.D=7(){n a=(2.3.z("14")?-1:+1);p(H 2.3.1b()==="P"){2.8.4.A=2h(2.8.4.M,10)+a;2.k.4.A=2.8.4.A}F{2.8.4.A=2.3.1b()+a;2.k.4.A=2.8.4.A}};m.l.N=7(){p(2.3.z("1a")){2.8.4.Q=2.3.2f()?"2e":"1B"}F{2.8.4.Q="1B"}2.k.4.Q=2.8.4.Q};7 19(a){a=a||{};a.Y=a.Y||"";a.X=a.X||E 6.5.2d(0,0);a.R=a.R||"2c";a.q=a.q||{};a.14=a.14||t;p(H a.1a==="P"){a.1a=u}2.1y=E m(2);6.5.18.1x(2,1w)}19.l=E 6.5.18();19.l.17=7(a){6.5.18.l.17.1x(2,1w);2.1y.17(a)};',62,143,'||this|marker_|style|maps|google|function|labelDiv_|event|||||||||||eventDiv_|prototype|MarkerLabel_|var|addListener|if|labelStyle|trigger|latLng|false|true|px|addDomListener|||get|zIndex|opacity|cssText|setZIndex|new|else|appendChild|typeof|position|overflow|cLngOffset|setStyles|top|setVisible|setPosition|undefined|display|labelClass|className|lng|lat|document|innerHTML|labelAnchor|labelContent|getPosition||listeners_|left|setAnchor|labelInBackground|setContent|setTitle|setMap|Marker|MarkerWithLabel|labelVisible|getZIndex|cursor|mouseover|dragstart|marginTop|hidden|marginLeft|drag|alpha|filter|setMandatoryStyles|absolute|dragend|mouseup|for|removeChild|parentNode|getPanes|stopPropagation|div|preventDefault|arguments|apply|label|mousedown|dblclick|none|click|mouseout|createElement|fromLatLngToDivPixel|getProjection|01|100|LatLng|hasOwnProperty|in|getDraggable|mousemove|getTitle|getMap|title|cloneNode|nodeType|draw|removeListener|length|onRemove|labelstyle_changed|onAdd|labelclass_changed|overlayMouseTarget|labelanchor_changed|OverlayView|labelcontent_changed|overlayImage|title_changed|labelvisible_changed|visible_changed|zindex_changed|position_changed|1000000|cancelBubble|markerLabels|Point|block|getVisible|getCursor|parseInt|pointer'.split('|'),0,{}))

var map;
var mapOptions = {
  center: new google.maps.LatLng(35, 105),
  zoom: 3,
  mapTypeId: google.maps.MapTypeId.ROADMAP
};
var locations = [
  ['Hong Kong', 22.39, 114.10, 1885],
  ['Shanghai', 31.232, 121.47, 5885],
  ['Beijing', 39.88, 116.40, 6426],
  ['Guangzhou', 23.129, 113.264, 4067],
  ['Shenzhen', 22.54, 114.05, 3089],
  ['Hangzhou', 30.27, 120.15, 954]
];
var infowindow = new google.maps.InfoWindow();
var i;
/*
for (i = 0; i < locations.length; i++) {
  marker = new google.maps.Marker({
    position: new google.maps.LatLng(locations[i][1], locations[i][2]),
    map: map
  });
  google.maps.event.addListener(marker, 'click', (function(marker, i) {
    return function() {
      infowindow.setContent(locations[i][0]);
      infowindow.open(map, marker);
    }
  })(marker, i));
}
*/
function myMarker(options) {
  if(!options.labelAnchor) {
    options.labelAnchor = new google.maps.Point(30, 50);
  }
  if(!options.labelClass) {
    options.labelClass = "labels";
  }
  options.map = map;
  return new MarkerWithLabel(options);
}
function initialize() {
  map = new google.maps.Map(document.getElementById("map_canvas"), mapOptions);
  for (i = 0; i < locations.length; i++) {
    var marker = new MarkerWithLabel({
      position: new google.maps.LatLng(locations[i][1], locations[i][2]),
      draggable: false,
      map: map,
      labelContent: locations[i][3],
      labelAnchor: new google.maps.Point(30, 0),
      labelClass: "labels", // the CSS class for the label
      labelStyle: {opacity: 0.75}
    });
  }
  /*
  var marker2 = new myMarker({
    position: new google.maps.LatLng(20,20),
    draggable: true,
    labelContent: "second"
  });
  */
}
google.maps.event.addDomListener(window, 'load', initialize);
</script>
</head>
<body onload="initialize()">
<div id="map_canvas" style="width:85%; height:85%"></div>
<script type="text/javascript">
</script>
</body>
</html>

EDIT: I have been trying to experiment with the MarkerManager, but I can't get the markers to be created successfully on different zoom levels. First, I changed my default zoom level to 1, and then I changed my code to what is shown below.

function initialize() {
  map = new google.maps.Map(document.getElementById("map_canvas"), mapOptions);
  /*
  for (i = 0; i < locations.length; i++) {
    var marker = new MarkerWithLabel({
      position: new google.maps.LatLng(locations[i][1], locations[i][2]),
      draggable: false,
      map: map,
      labelContent: locations[i][3],
      labelAnchor: new google.maps.Point(30, 0),
      labelClass: "labels", // the CSS class for the label
      labelStyle: {opacity: 0.75}
    });
  }
  */
  var listener = google.maps.event.addListener(map, 'bounds_changed', function(){
    setupMarkers();
    google.maps.event.removeListener(listener);
  });
}
function createCityMarkers() {
  for (i = 0; i < locations.length; i++) {
    var marker = new MarkerWithLabel({
      position: new google.maps.LatLng(locations[i][1], locations[i][2]),
      draggable: false,
      map: map,
      labelContent: locations[i][3],
      labelAnchor: new google.maps.Point(30, 0),
      labelClass: "labels", // the CSS class for the label
      labelStyle: {opacity: 0.75}
    });
  }
}
function setupMarkers() {
  mgr = new MarkerManager(map);
  google.maps.event.addListener(mgr, 'loaded', function(){
    mgr.addMarkers(createCityMarkers(), 4);
    mgr.refresh();
  });
}

I have also tried applying the source code of this link as well, but nothing is working out. And when I copy the source code directly to my computer and replace all the icons with markers, the markers still don't appear. I can't seem to figure out how to make markers appear using the MarkerManager. http://google-maps-utility-library-v3.googlecode.com/svn/tags/markermanager/1.0/examples/weather_map.html

    Read the article

  • Zen and the Art of File and Folder Organization

    - by Mark Virtue
    Is your desk a paragon of neatness, or does it look like a paper-bomb has gone off? If you’ve been putting off getting organized because the task is too huge or daunting, or you don’t know where to start, we’ve got 40 tips to get you on the path to zen mastery of your filing system. For all those readers who would like to get their files and folders organized, or, if they’re already organized, better organized—we have compiled a complete guide to getting organized and staying organized, a comprehensive article that will hopefully cover every possible tip you could want. Signs that Your Computer is Poorly Organized If your computer is a mess, you’re probably already aware of it.  But just in case you’re not, here are some tell-tale signs: Your Desktop has over 40 icons on it “My Documents” contains over 300 files and 60 folders, including MP3s and digital photos You use the Windows’ built-in search facility whenever you need to find a file You can’t find programs in the out-of-control list of programs in your Start Menu You save all your Word documents in one folder, all your spreadsheets in a second folder, etc Any given file that you’re looking for may be in any one of four different sets of folders But before we start, here are some quick notes: We’re going to assume you know what files and folders are, and how to create, save, rename, copy and delete them The organization principles described in this article apply equally to all computer systems.  However, the screenshots here will reflect how things look on Windows (usually Windows 7).  We will also mention some useful features of Windows that can help you get organized. Everyone has their own favorite methodology of organizing and filing, and it’s all too easy to get into “My Way is Better than Your Way” arguments.  The reality is that there is no perfect way of getting things organized.  When I wrote this article, I tried to keep a generalist and objective viewpoint.  I consider myself to be unusually well organized (to the point of obsession, truth be told), and I’ve had 25 years experience in collecting and organizing files on computers.  So I’ve got a lot to say on the subject.  But the tips I have described here are only one way of doing it.  Hopefully some of these tips will work for you too, but please don’t read this as any sort of “right” way to do it. At the end of the article we’ll be asking you, the reader, for your own organization tips. Why Bother Organizing At All? For some, the answer to this question is self-evident. And yet, in this era of powerful desktop search software (the search capabilities built into the Windows Vista and Windows 7 Start Menus, and third-party programs like Google Desktop Search), the question does need to be asked, and answered. I have a friend who puts every file he ever creates, receives or downloads into his My Documents folder and doesn’t bother filing them into subfolders at all.  He relies on the search functionality built into his Windows operating system to help him find whatever he’s looking for.  And he always finds it.  He’s a Search Samurai.  For him, filing is a waste of valuable time that could be spent enjoying life! It’s tempting to follow suit.  On the face of it, why would anyone bother to take the time to organize their hard disk when such excellent search software is available?  
Well, if all you ever want to do with the files you own is to locate and open them individually (for listening, editing, etc), then there’s no reason to ever bother doing one scrap of organization.  But consider these common tasks that are not achievable with desktop search software: Find files manually.  Often it’s not convenient, speedy or even possible to utilize your desktop search software to find what you want.  It doesn’t work 100% of the time, or you may not even have it installed.  Sometimes its just plain faster to go straight to the file you want, if you know it’s in a particular sub-folder, rather than trawling through hundreds of search results. Find groups of similar files (e.g. all your “work” files, all the photos of your Europe holiday in 2008, all your music videos, all the MP3s from Dark Side of the Moon, all your letters you wrote to your wife, all your tax returns).  Clever naming of the files will only get you so far.  Sometimes it’s the date the file was created that’s important, other times it’s the file format, and other times it’s the purpose of the file.  How do you name a collection of files so that they’re easy to isolate based on any of the above criteria?  Short answer, you can’t. Move files to a new computer.  It’s time to upgrade your computer.  How do you quickly grab all the files that are important to you?  Or you decide to have two computers now – one for home and one for work.  How do you quickly isolate only the work-related files to move them to the work computer? Synchronize files to other computers.  If you have more than one computer, and you need to mirror some of your files onto the other computer (e.g. your music collection), then you need a way to quickly determine which files are to be synced and which are not.  Surely you don’t want to synchronize everything? Choose which files to back up.  If your backup regime calls for multiple backups, or requires speedy backups, then you’ll need to be able to specify which files are to be backed up, and which are not.  This is not possible if they’re all in the same folder. Finally, if you’re simply someone who takes pleasure in being organized, tidy and ordered (me! me!), then you don’t even need a reason.  Being disorganized is simply unthinkable. Tips on Getting Organized Here we present our 40 best tips on how to get organized.  Or, if you’re already organized, to get better organized. Tip #1.  Choose Your Organization System Carefully The reason that most people are not organized is that it takes time.  And the first thing that takes time is deciding upon a system of organization.  This is always a matter of personal preference, and is not something that a geek on a website can tell you.  You should always choose your own system, based on how your own brain is organized (which makes the assumption that your brain is, in fact, organized). We can’t instruct you, but we can make suggestions: You may want to start off with a system based on the users of the computer.  i.e. “My Files”, “My Wife’s Files”, My Son’s Files”, etc.  Inside “My Files”, you might then break it down into “Personal” and “Business”.  You may then realize that there are overlaps.  For example, everyone may want to share access to the music library, or the photos from the school play.  So you may create another folder called “Family”, for the “common” files. You may decide that the highest-level breakdown of your files is based on the “source” of each file.  In other words, who created the files.  
You could have “Files created by ME (business or personal)”, “Files created by people I know (family, friends, etc)”, and finally “Files created by the rest of the world (MP3 music files, downloaded or ripped movies or TV shows, software installation files, gorgeous desktop wallpaper images you’ve collected, etc).”  This system happens to be the one I use myself.  See below:  Mark is for files created by meVC is for files created by my company (Virtual Creations)Others is for files created by my friends and familyData is the rest of the worldAlso, Settings is where I store the configuration files and other program data files for my installed software (more on this in tip #34, below). Each folder will present its own particular set of requirements for further sub-organization.  For example, you may decide to organize your music collection into sub-folders based on the artist’s name, while your digital photos might get organized based on the date they were taken.  It can be different for every sub-folder! Another strategy would be based on “currentness”.  Files you have yet to open and look at live in one folder.  Ones that have been looked at but not yet filed live in another place.  Current, active projects live in yet another place.  All other files (your “archive”, if you like) would live in a fourth folder. (And of course, within that last folder you’d need to create a further sub-system based on one of the previous bullet points). Put some thought into this – changing it when it proves incomplete can be a big hassle!  Before you go to the trouble of implementing any system you come up with, examine a wide cross-section of the files you own and see if they will all be able to find a nice logical place to sit within your system. Tip #2.  When You Decide on Your System, Stick to It! There’s nothing more pointless than going to all the trouble of creating a system and filing all your files, and then whenever you create, receive or download a new file, you simply dump it onto your Desktop.  You need to be disciplined – forever!  Every new file you get, spend those extra few seconds to file it where it belongs!  Otherwise, in just a month or two, you’ll be worse off than before – half your files will be organized and half will be disorganized – and you won’t know which is which! Tip #3.  Choose the Root Folder of Your Structure Carefully Every data file (document, photo, music file, etc) that you create, own or is important to you, no matter where it came from, should be found within one single folder, and that one single folder should be located at the root of your C: drive (as a sub-folder of C:\).  In other words, do not base your folder structure in standard folders like “My Documents”.  If you do, then you’re leaving it up to the operating system engineers to decide what folder structure is best for you.  And every operating system has a different system!  In Windows 7 your files are found in C:\Users\YourName, whilst on Windows XP it was C:\Documents and Settings\YourName\My Documents.  In UNIX systems it’s often /home/YourName. These standard default folders tend to fill up with junk files and folders that are not at all important to you.  “My Documents” is the worst offender.  Every second piece of software you install, it seems, likes to create its own folder in the “My Documents” folder.  These folders usually don’t fit within your organizational structure, so don’t use them!  In fact, don’t even use the “My Documents” folder at all.  
Allow it to fill up with junk, and then simply ignore it.  It sounds heretical, but: Don’t ever visit your “My Documents” folder!  Remove your icons/links to “My Documents” and replace them with links to the folders you created and you care about! Create your own file system from scratch!  Probably the best place to put it would be on your D: drive – if you have one.  This way, all your files live on one drive, while all the operating system and software component files live on the C: drive – simply and elegantly separated.  The benefits of that are profound.  Not only are there obvious organizational benefits (see tip #10, below), but when it comes to migrate your data to a new computer, you can (sometimes) simply unplug your D: drive and plug it in as the D: drive of your new computer (this implies that the D: drive is actually a separate physical disk, and not a partition on the same disk as C:).  You also get a slight speed improvement (again, only if your C: and D: drives are on separate physical disks). Warning:  From tip #12, below, you will see that it’s actually a good idea to have exactly the same file system structure – including the drive it’s filed on – on all of the computers you own.  So if you decide to use the D: drive as the storage system for your own files, make sure you are able to use the D: drive on all the computers you own.  If you can’t ensure that, then you can still use a clever geeky trick to store your files on the D: drive, but still access them all via the C: drive (see tip #17, below). If you only have one hard disk (C:), then create a dedicated folder that will contain all your files – something like C:\Files.  The name of the folder is not important, but make it a single, brief word. There are several reasons for this: When creating a backup regime, it’s easy to decide what files should be backed up – they’re all in the one folder! If you ever decide to trade in your computer for a new one, you know exactly which files to migrate You will always know where to begin a search for any file If you synchronize files with other computers, it makes your synchronization routines very simple.   It also causes all your shortcuts to continue to work on the other machines (more about this in tip #24, below). Once you’ve decided where your files should go, then put all your files in there – Everything!  Completely disregard the standard, default folders that are created for you by the operating system (“My Music”, “My Pictures”, etc).  In fact, you can actually relocate many of those folders into your own structure (more about that below, in tip #6). The more completely you get all your data files (documents, photos, music, etc) and all your configuration settings into that one folder, then the easier it will be to perform all of the above tasks. Once this has been done, and all your files live in one folder, all the other folders in C:\ can be thought of as “operating system” folders, and therefore of little day-to-day interest for us. Here’s a screenshot of a nicely organized C: drive, where all user files are located within the \Files folder:   Tip #4.  Use Sub-Folders This would be our simplest and most obvious tip.  It almost goes without saying.  Any organizational system you decide upon (see tip #1) will require that you create sub-folders for your files.  Get used to creating folders on a regular basis. Tip #5.  Don’t be Shy About Depth Create as many levels of sub-folders as you need.  Don’t be scared to do so.  
Every time you notice an opportunity to group a set of related files into a sub-folder, do so.  Examples might include:  All the MP3s from one music CD, all the photos from one holiday, or all the documents from one client. It’s perfectly okay to put files into a folder called C:\Files\Me\From Others\Services\WestCo Bank\Statements\2009.  That’s only seven levels deep.  Ten levels is not uncommon.  Of course, it’s possible to take this too far.  If you notice yourself creating a sub-folder to hold only one file, then you’ve probably become a little over-zealous.  On the other hand, if you simply create a structure with only two levels (for example C:\Files\Work) then you really haven’t achieved any level of organization at all (unless you own only six files!).  Your “Work” folder will have become a dumping ground, just like your Desktop was, with most likely hundreds of files in it. Tip #6.  Move the Standard User Folders into Your Own Folder Structure Most operating systems, including Windows, create a set of standard folders for each of its users.  These folders then become the default location for files such as documents, music files, digital photos and downloaded Internet files.  In Windows 7, the full list is shown below: Some of these folders you may never use nor care about (for example, the Favorites folder, if you’re not using Internet Explorer as your browser).  Those ones you can leave where they are.  But you may be using some of the other folders to store files that are important to you.  Even if you’re not using them, Windows will still often treat them as the default storage location for many types of files.  When you go to save a standard file type, it can become annoying to be automatically prompted to save it in a folder that’s not part of your own file structure. But there’s a simple solution:  Move the folders you care about into your own folder structure!  If you do, then the next time you go to save a file of the corresponding type, Windows will prompt you to save it in the new, moved location. Moving the folders is easy.  Simply drag-and-drop them to the new location.  Here’s a screenshot of the default My Music folder being moved to my custom personal folder (Mark): Tip #7.  Name Files and Folders Intelligently This is another one that almost goes without saying, but we’ll say it anyway:  Do not allow files to be created that have meaningless names like Document1.doc, or folders called New Folder (2).  Take that extra 20 seconds and come up with a meaningful name for the file/folder – one that accurately divulges its contents without repeating the entire contents in the name. Tip #8.  Watch Out for Long Filenames Another way to tell if you have not yet created enough depth to your folder hierarchy is that your files often require really long names.  If you need to call a file Johnson Sales Figures March 2009.xls (which might happen to live in the same folder as Abercrombie Budget Report 2008.xls), then you might want to create some sub-folders so that the first file could be simply called March.xls, and living in the Clients\Johnson\Sales Figures\2009 folder. A well-placed file needs only a brief filename! Tip #9.  Use Shortcuts!  Everywhere! This is probably the single most useful and important tip we can offer.  A shortcut allows a file to be in two places at once. Why would you want that?  Well, the file and folder structure of every popular operating system on the market today is hierarchical.  
This means that all objects (files and folders) always live within exactly one parent folder.  It’s a bit like a tree.  A tree has branches (folders) and leaves (files).  Each leaf, and each branch, is supported by exactly one parent branch, all the way back to the root of the tree (which, incidentally, is exactly why C:\ is called the “root folder” of the C: drive). That hard disks are structured this way may seem obvious and even necessary, but it’s only one way of organizing data.  There are others:  Relational databases, for example, organize structured data entirely differently.  The main limitation of hierarchical filing structures is that a file can only ever be in one branch of the tree – in only one folder – at a time.  Why is this a problem?  Well, there are two main reasons why this limitation is a problem for computer users: The “correct” place for a file, according to our organizational rationale, is very often a very inconvenient place for that file to be located.  Just because it’s correctly filed doesn’t mean it’s easy to get to.  Your file may be “correctly” buried six levels deep in your sub-folder structure, but you may need regular and speedy access to this file every day.  You could always move it to a more convenient location, but that would mean that you would need to re-file back to its “correct” location it every time you’d finished working on it.  Most unsatisfactory. A file may simply “belong” in two or more different locations within your file structure.  For example, say you’re an accountant and you have just completed the 2009 tax return for John Smith.  It might make sense to you to call this file 2009 Tax Return.doc and file it under Clients\John Smith.  But it may also be important to you to have the 2009 tax returns from all your clients together in the one place.  So you might also want to call the file John Smith.doc and file it under Tax Returns\2009.  The problem is, in a purely hierarchical filing system, you can’t put it in both places.  Grrrrr! Fortunately, Windows (and most other operating systems) offers a way for you to do exactly that:  It’s called a “shortcut” (also known as an “alias” on Macs and a “symbolic link” on UNIX systems).  Shortcuts allow a file to exist in one place, and an icon that represents the file to be created and put anywhere else you please.  In fact, you can create a dozen such icons and scatter them all over your hard disk.  Double-clicking on one of these icons/shortcuts opens up the original file, just as if you had double-clicked on the original file itself. Consider the following two icons: The one on the left is the actual Word document, while the one on the right is a shortcut that represents the Word document.  Double-clicking on either icon will open the same file.  There are two main visual differences between the icons: The shortcut will have a small arrow in the lower-left-hand corner (on Windows, anyway) The shortcut is allowed to have a name that does not include the file extension (the “.docx” part, in this case) You can delete the shortcut at any time without losing any actual data.  The original is still intact.  All you lose is the ability to get to that data from wherever the shortcut was. So why are shortcuts so great?  Because they allow us to easily overcome the main limitation of hierarchical file systems, and put a file in two (or more) places at the same time.  You will always have files that don’t play nice with your organizational rationale, and can’t be filed in only one place.  
They demand to exist in two places.  Shortcuts allow this!  Furthermore, they allow you to collect your most often-opened files and folders together in one spot for convenient access.  The cool part is that the original files stay where they are, safe forever in their perfectly organized location. So your collection of most often-opened files can – and should – become a collection of shortcuts! If you’re still not convinced of the utility of shortcuts, consider the following well-known areas of a typical Windows computer: The Start Menu (and all the programs that live within it) The Quick Launch bar (or the Superbar in Windows 7) The “Favorite folders” area in the top-left corner of the Windows Explorer window (in Windows Vista or Windows 7) Your Internet Explorer Favorites or Firefox Bookmarks Each item in each of these areas is a shortcut!  Each of those areas exist for one purpose only:  For convenience – to provide you with a collection of the files and folders you access most often. It should be easy to see by now that shortcuts are designed for one single purpose:  To make accessing your files more convenient.  Each time you double-click on a shortcut, you are saved the hassle of locating the file (or folder, or program, or drive, or control panel icon) that it represents. Shortcuts allow us to invent a golden rule of file and folder organization: “Only ever have one copy of a file – never have two copies of the same file.  Use a shortcut instead” (this rule doesn’t apply to copies created for backup purposes, of course!) There are also lesser rules, like “don’t move a file into your work area – create a shortcut there instead”, and “any time you find yourself frustrated with how long it takes to locate a file, create a shortcut to it and place that shortcut in a convenient location.” So how to we create these massively useful shortcuts?  There are two main ways: “Copy” the original file or folder (click on it and type Ctrl-C, or right-click on it and select Copy):  Then right-click in an empty area of the destination folder (the place where you want the shortcut to go) and select Paste shortcut: Right-drag (drag with the right mouse button) the file from the source folder to the destination folder.  When you let go of the mouse button at the destination folder, a menu pops up: Select Create shortcuts here. Note that when shortcuts are created, they are often named something like Shortcut to Budget Detail.doc (windows XP) or Budget Detail – Shortcut.doc (Windows 7).   If you don’t like those extra words, you can easily rename the shortcuts after they’re created, or you can configure Windows to never insert the extra words in the first place (see our article on how to do this). And of course, you can create shortcuts to folders too, not just to files! Bottom line: Whenever you have a file that you’d like to access from somewhere else (whether it’s convenience you’re after, or because the file simply belongs in two places), create a shortcut to the original file in the new location. Tip #10.  Separate Application Files from Data Files Any digital organization guru will drum this rule into you.  Application files are the components of the software you’ve installed (e.g. Microsoft Word, Adobe Photoshop or Internet Explorer).  Data files are the files that you’ve created for yourself using that software (e.g. Word Documents, digital photos, emails or playlists). Software gets installed, uninstalled and upgraded all the time.  
Tip #10.  Separate Application Files from Data Files

Any digital organization guru will drum this rule into you.  Application files are the components of the software you’ve installed (e.g. Microsoft Word, Adobe Photoshop or Internet Explorer).  Data files are the files that you’ve created for yourself using that software (e.g. Word documents, digital photos, emails or playlists).

Software gets installed, uninstalled and upgraded all the time.  Hopefully you always have the original installation media (or downloaded set-up file) kept somewhere safe, and can thus reinstall your software at any time.  This means that the software component files are of little importance, whereas the files you have created with that software are, by definition, important.  It’s a good rule to always separate unimportant files from important files.

So when your software prompts you to save a file you’ve just created, take a moment and check out where it’s suggesting that you save the file.  If it’s suggesting that you save the file into the same folder as the software itself, then definitely don’t follow that suggestion.  File it in your own folder!  In fact, see if you can find the program’s configuration option that determines where files are saved by default (if it has one), and change it.

Tip #11.  Organize Files Based on Purpose, Not on File Type

If you have, for example, a folder called Work\Clients\Johnson, and within that folder you have two sub-folders, Word Documents and Spreadsheets (in other words, you’re separating “.doc” files from “.xls” files), then chances are that you’re not optimally organized.  It makes little sense to organize your files based on the program that created them.  Instead, create your sub-folders based on the purpose of the file.  For example, it would make more sense to create sub-folders called Correspondence and Financials.  It may well be that all the files in a given sub-folder are of the same file type, but this should be more of a coincidence and less of a design feature of your organization system.

Tip #12.  Maintain the Same Folder Structure on All Your Computers

In other words, whatever organizational system you create, apply it to every computer that you can.  There are several benefits to this:

- There’s less to remember.  No matter where you are, you always know where to look for your files.
- If you copy or synchronize files from one computer to another, then setting up the synchronization job becomes very simple.
- Shortcuts can be copied or moved from one computer to another with ease (assuming the original files are also copied/moved).  There’s no need to find the target of the shortcut all over again on the second computer.
- Ditto for linked files (e.g. Word documents that link to data in a separate Excel file), playlists, and any files that reference the exact locations of other files.

This applies even to the drive that your files are stored on.  If your files are stored on C: on one computer, make sure they’re stored on C: on all your computers.  Otherwise all your shortcuts, playlists and linked files will stop working!

Tip #13.  Create an “Inbox” Folder

Create yourself a folder where you store all files that you’re currently working on, or that you haven’t gotten around to filing yet.  You can think of this folder as your “to-do” list.  You can call it “Inbox” (making it the same metaphor as your email system), or “Work”, or “To-Do”, or “Scratch”, or whatever name makes sense to you.  It doesn’t matter what you call it – just make sure you have one!

Once you have finished working on a file, you then move it from the “Inbox” to its correct location within your organizational structure.

You may want to use your Desktop as this “Inbox” folder.  Rightly or wrongly, most people do.  It’s not a bad place to put such files, but be careful: if you do decide that your Desktop represents your “to-do” list, then make sure that no other files find their way there.
In other words, make sure that your “Inbox”, wherever it is, Desktop or otherwise, is kept free of junk – stray files that don’t belong there.

So where should you put this folder, which, almost by definition, lives outside the structure of the rest of your filing system?  Well, first and foremost, it has to be somewhere handy.  This will be one of your most-visited folders, so convenience is key.  Putting it on the Desktop is a great option – especially if you don’t have any other folders on your Desktop: the folder then becomes supremely easy to find in Windows Explorer.  You would then create shortcuts to this folder in convenient spots all over your computer (“Favorite Links”, “Quick Launch”, etc).

Tip #14.  Ensure You Have Only One “Inbox” Folder

Once you’ve created your “Inbox” folder, don’t use any other folder location as your “to-do list”.  Throw every incoming or created file into the Inbox folder as you create/receive it.  This keeps the rest of your computer pristine and free of randomly created or downloaded junk.  The last thing you want to be doing is checking multiple folders to see all your current tasks and projects.  Gather them all together into one folder.

Here are some tips to help ensure you only have one Inbox:

- Set the default “save” location of all your programs to this folder.
- Set the default “download” location for your browser to this folder.
- If this folder is not your desktop (recommended), then also see if you can make a point of not putting “to-do” files on your desktop.  This keeps your desktop uncluttered and Zen-like (in the screenshot, the Inbox folder is in the bottom-right corner).

Tip #15.  Be Vigilant about Clearing Your “Inbox” Folder

This is one of the keys to staying organized.  If you let your “Inbox” overflow (i.e. allow there to be more than, say, 30 files or folders in there), then you’re probably going to start feeling overwhelmed: you’re not keeping up with your to-do list.  Once your Inbox gets beyond a certain point (around 30 files, studies have shown), you’ll simply start to avoid it.  You may continue to put files in there, but you’ll be scared to look at it, fearing the “out of control” feeling that all overworked, chaotic or just plain disorganized people regularly feel.

So, here’s what you can do:

- Visit your Inbox/to-do folder regularly (at least five times per day).
- Scan the folder regularly for files that you have completed working on and are ready for filing.  File them immediately.
- Make it a source of pride to keep the number of files in this folder as small as possible.  If you value peace of mind, then make the emptiness of this folder one of your highest (computer) priorities.
- If you know that a particular file has been in the folder for more than, say, six weeks, then admit that you’re not actually going to get around to processing it, and move it to its final resting place.

Tip #16.  File Everything Immediately, and Use Shortcuts for Your Active Projects

As soon as you create, receive or download a new file, store it away in its “correct” folder immediately.  Then, whenever you need to work on it (possibly straight away), create a shortcut to it in your “Inbox” (“to-do”) folder or on your desktop.  That way, all your files are always in their “correct” locations, yet you still have immediate, convenient access to your current, active files.  When you finish working on a file, simply delete the shortcut.

Ideally, your “Inbox” folder – and your Desktop – should contain no actual files or folders.  They should simply contain shortcuts.
Tip #17.  Use Directory Symbolic Links (or Junctions) to Maintain One Unified Folder Structure

Using this tip, we can get around a potential hiccup that we can run into when creating our organizational structure – the issue of having more than one drive on our computer (C:, D:, etc).  We might have files we need to store on the D: drive for space reasons, and yet want to base our organized folder structure on the C: drive (or vice-versa).

Your chosen organizational structure may dictate that all your files must be accessed from the C: drive (for example, the root folder of all your files may be something like C:\Files).  And yet you may still have a D: drive and wish to take advantage of the hundreds of spare gigabytes that it offers.  Did you know that it’s actually possible to store your files on the D: drive and yet access them as if they were on the C: drive?  And no, we’re not talking about shortcuts here (although the concept is very similar).

By using the shell command mklink, you can essentially take a folder that lives on one drive and create an alias for it on a different drive (you can do lots more than that with mklink – for a full rundown on this program’s capabilities, see our dedicated article).  These aliases are called directory symbolic links (closely related to the older “junctions”).  You can think of them as “virtual” folders.  They function exactly like regular folders, except they’re physically located somewhere else.

For example, you may decide that your entire D: drive contains your complete organizational file structure, but that you need to reference all those files as if they were on the C: drive, under C:\Files.  If that were the case, you could create C:\Files as a directory symbolic link – a link to D: – as follows:

mklink /d c:\files d:\

Or it may be that the only files you wish to store on the D: drive are your movie collection.  You could locate all your movie files in the root of your D: drive, and then link it to C:\Files\Media\Movies, as follows:

mklink /d c:\files\media\movies d:\

(Needless to say, you must run these commands from a command prompt – click the Start button, type cmd and press Enter.)
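If you ever need to create these links from code rather than the command line (say, as part of a setup script), the underlying Win32 call is CreateSymbolicLink.  Here is a minimal C# sketch of the second mklink example above – an illustration of mine, not part of the original article – which needs to run elevated on Vista/Windows 7:

using System;
using System.Runtime.InteropServices;

class MklinkSketch
{
    // 0x1 = the link points at a directory rather than a file
    const int SYMBOLIC_LINK_FLAG_DIRECTORY = 0x1;

    [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
    [return: MarshalAs(UnmanagedType.I1)]
    static extern bool CreateSymbolicLink(
        string lpSymlinkFileName,   // the "virtual" folder to create
        string lpTargetFileName,    // the real location
        int dwFlags);

    static void Main()
    {
        // Equivalent to: mklink /d c:\files\media\movies d:\
        if (CreateSymbolicLink(@"c:\files\media\movies", @"d:\", SYMBOLIC_LINK_FLAG_DIRECTORY))
            Console.WriteLine("Link created.");
        else
            Console.WriteLine("Failed, error code " + Marshal.GetLastWin32Error());
    }
}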
Tip #18.  Customize Your Folder Icons

This is not strictly speaking an organizational tip, but having unique icons for each folder does allow you to more quickly visually identify which folder is which, and thus saves you time when you’re finding files.  An example is below (from my folder that contains all files downloaded from the Internet).  To learn how to change your folder icons, please refer to our dedicated article on the subject.

Tip #19.  Tidy Your Start Menu

The Windows Start Menu is usually one of the messiest parts of any Windows computer.  Every program you install seems to adopt a completely different approach to placing icons in this menu.  Some simply put a single program icon.  Others create a folder based on the name of the software.  And others create a folder based on the name of the software manufacturer.  It’s chaos, and can make it hard to find the software you want to run.

Thankfully we can avoid this chaos with useful operating system features like Quick Launch, the Superbar or pinned Start Menu items.  Even so, it would make a lot of sense to get into the guts of the Start Menu itself and give it a good once-over.  All you really need to decide is how you’re going to organize your applications.  A structure based on the purpose of the application is an obvious candidate.  Below is an example of one such structure.

In this structure, Utilities means software whose job it is to keep the computer itself running smoothly (configuration tools, backup software, Zip programs, etc).  Applications refers to any productivity software that doesn’t fit under the headings Multimedia, Graphics, Internet, etc.

In case you’re not aware, every icon in your Start Menu is a shortcut and can be manipulated like any other shortcut (copied, moved, deleted, etc).

With the Windows Start Menu (all versions of Windows), Microsoft decided that there would be two parallel folder structures to store your Start Menu shortcuts: one for you (the logged-in user of the computer) and one for all users of the computer.  Having two parallel structures is often redundant: if you are the only user of the computer, then it’s totally redundant.  Even if several users regularly log into the computer, most of your installed software will need to be made available to all users, and should thus be moved out of the “just you” version of the Start Menu and into the “all users” area.

To take control of your Start Menu, so you can start organizing it, you’ll need to know how to access the actual folders and shortcut files that make up the Start Menu (both versions of it).  To find these folders and files, click the Start button and then right-click on the All Programs text (Windows XP users should right-click on the Start button itself).  The Open option refers to the “just you” version of the Start Menu, while the Open All Users option refers to the “all users” version.  Click on the one you want to organize.

A Windows Explorer window then opens with your chosen version of the Start Menu selected.  From there it’s easy.  Double-click on the Programs folder and you’ll see all your folders and shortcuts.  Now you can delete/rename/move until it’s just the way you want it.

Note: when you’re reorganizing your Start Menu, you may want to have two Explorer windows open at the same time – one showing the “just you” version and one showing the “all users” version.  You can drag-and-drop between the windows.

Tip #20.  Keep Your Start Menu Tidy

Once you have a perfectly organized Start Menu, try to be a little vigilant about keeping it that way.  Every time you install a new piece of software, the icons that get created will almost certainly violate your organizational structure.

So to keep your Start Menu pristine and organized, make sure you do the following whenever you install a new piece of software:

- Check whether the software was installed into the “just you” area of the Start Menu or the “all users” area, and then move it to the correct area.
- Remove all the unnecessary icons (like the “Read me” icon, the “Help” icon (you can always open the help from within the software itself when it’s running), the “Uninstall” icon, the link(s) to the manufacturer’s website, etc).
- Rename the main icon(s) of the software to something brief that makes sense to you.  For example, you might like to rename Microsoft Office Word 2010 to simply Word.
- Move the icon(s) into the correct folder based on your Start Menu organizational structure.

And don’t forget: when you uninstall a piece of software, the software’s uninstall routine is no longer going to be able to remove the software’s icon from the Start Menu (because you moved and/or renamed it), so you’ll need to remove that icon manually.
Tip #21.  Tidy C:\

The root of your C: drive (C:\) is a common dumping ground for files and folders – both by the users of your computer and by the software that you install on your computer.  It can become a mess.

There’s almost no software these days that requires itself to be installed in C:\.  99% of the time it can and should be installed into C:\Program Files.  And as for your own files, well, it’s clear that they can (and almost always should) be stored somewhere else.

In an ideal world, your C:\ folder should look like this (on Windows 7).  Note that there are some system files and folders in C:\ that are usually and deliberately “hidden” (such as the Windows virtual memory file pagefile.sys, the boot loader file bootmgr, and the System Volume Information folder).  Hiding these files and folders is a good idea, as they need to stay where they are and almost never need to be opened or even seen by you, the user.  Hiding them prevents you from accidentally messing with them, and enhances your sense of order and well-being when you look at your C: drive folder.

Tip #22.  Tidy Your Desktop

The Desktop is probably the most abused part of a Windows computer (from an organization point of view).  It usually serves as a dumping ground for all incoming files, as well as holding icons for oft-used applications, plus some regularly opened files and folders.  It often ends up becoming an uncontrolled mess.  See if you can avoid this.  Here’s why…

Application icons (Word, Internet Explorer, etc) are often found on the Desktop, but it’s unlikely that this is the optimum place for them.  The “Quick Launch” bar (or the Superbar in Windows 7) is always visible and so represents a perfect location to put your icons.  You’ll only be able to see the icons on your Desktop when all your programs are minimized.  It might be time to get your application icons off your desktop…

You may have decided that the Inbox/To-do folder on your computer (see tip #13, above) should be your Desktop.  If so, then enough said.  Simply be vigilant about clearing it and preventing it from being polluted by junk files (see tip #15, above).  On the other hand, if your Desktop is not acting as your “Inbox” folder, then there’s no reason for it to have any data files or folders on it at all, except perhaps a couple of shortcuts to often-opened files and folders (either ongoing or current projects).  Everything else should be moved to your “Inbox” folder.  In an ideal world, it might look like this:

Tip #23.  Move Permanent Items on Your Desktop Away from the Top-Left Corner

When files/folders are dragged onto your desktop in a Windows Explorer window, or when shortcuts are created on your Desktop from Internet Explorer, those icons are always placed in the top-left corner – or as close to it as they can get.  If you have other files, folders or shortcuts that you keep on the Desktop permanently, then it’s a good idea to separate these permanent icons from the transient ones, so that you can quickly identify which ones are the transients.  An easy way to do this is to move all your permanent icons to the right-hand side of your Desktop.  That should keep them separated from incoming items.

Tip #24.  Synchronize

If you have more than one computer, you’ll almost certainly want to share files between them.  If the computers are permanently attached to the same local network, then there’s no need to store multiple copies of any one file or folder – shortcuts will suffice.
However, if the computers are not always on the same network, then you will at some point need to copy files between them.  For files that need to permanently live on both computers, the ideal way to do this is to synchronize the files, as opposed to simply copying them.

We only have room here for a brief summary of synchronization, not a full article.  In short, there are several different types of synchronization:

- Where the contents of one folder are accessible anywhere, such as with Dropbox
- Where the contents of any number of folders are accessible anywhere, such as with Windows Live Mesh
- Where any files or folders from anywhere on your computer are synchronized with exactly one other computer, such as with the Windows “Briefcase”, Microsoft SyncToy, or (much more powerful, yet still free) SyncBack from 2BrightSparks.  This only works when both computers are on the same local network, at least temporarily.

A great advantage of synchronization solutions is that once you’ve got one configured the way you want it, the sync process happens automatically, every time.  Click a button (or schedule it to happen automatically) and all your files are automagically put where they’re supposed to be.

If you maintain the same file and folder structure on both computers, then files that depend upon the exact locations of other files – shortcuts, playlists, and Office documents that link to other Office documents – can be synchronized too, and they will still work on the other computer!

Tip #25.  Hide Files You Never Need to See

If you have your files well organized, you will often be able to tell if a file is out of place just by glancing at the contents of a folder (for example, it should be pretty obvious if you look in a folder that contains all the MP3s from one music CD and see a Word document in there).  This is a good thing – it allows you to determine if there are files out of place with a quick glance.  Yet sometimes there are files in a folder that seem out of place but actually need to be there, such as the “folder art” JPEGs in music folders, and various files in the root of the C: drive.  If such files never need to be opened by you, then a good idea is to simply hide them.  Then, the next time you glance at the folder, you won’t have to remember whether that file was supposed to be there or not, because you won’t see it at all!

To hide a file, simply right-click on it and choose Properties, then tick the Hidden tick-box.
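If you have a whole batch of files to hide (say, every folder.jpg in a music collection), doing it one Properties dialog at a time gets old.  As an aside not in the original article, here is a small C# sketch that sets the Hidden attribute in bulk – the music path is hypothetical:

using System.IO;

class HideFilesSketch
{
    static void Main()
    {
        // Hypothetical music folder; adjust to taste.
        foreach (var file in Directory.GetFiles(@"C:\Files\Data\Music\MP3",
                                                "folder.jpg",
                                                SearchOption.AllDirectories))
        {
            // Add the Hidden flag, preserving any attributes already set.
            File.SetAttributes(file, File.GetAttributes(file) | FileAttributes.Hidden);
        }
    }
}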
Tip #26.  Keep Every Setup File

These days most software is downloaded from the Internet.  Whenever you download a piece of software, keep it.  You’ll never know when you need to reinstall the software.  Further, keep with it an Internet shortcut that links back to the website where you originally downloaded it, in case you ever need to check for updates.  See tip #33 below for a full description of the excellence of organizing your setup files.

Tip #27.  Try to Minimize the Number of Folders that Contain Both Files and Sub-folders

Some of the folders in your organizational structure will contain only files.  Others will contain only sub-folders.  And you will also have some folders that contain both files and sub-folders.  You will notice slight improvements in how long it takes you to locate a file if you try to avoid this third type of folder.  It’s not always possible, of course – you’ll always have some of these folders – but see if you can avoid it.  One way of doing this is to take all the leftover files that didn’t end up getting stored in a sub-folder and create a special “Miscellaneous” or “Other” folder for them.

Tip #28.  Starting a Filename with an Underscore Brings It to the Top of a List

Further to the previous tip, if you name that “Miscellaneous” or “Other” folder in such a way that its name begins with an underscore “_”, then it will appear at the top of the list of files/folders.

The screenshot below is an example of this.  Each folder in the list contains a set of digital photos.  The folder at the top of the list, _Misc, contains random photos that didn’t deserve their own dedicated folder.

Tip #29.  Clean Up those CD-ROMs and (shudder!) Floppy Disks

Have you got a pile of CD-ROMs stacked on a shelf in your office?  Old photos, or files you archived off onto CD-ROM (or even worse, floppy disks!) because you didn’t have enough disk space at the time?  In the meantime, have you upgraded your computer and now have 500 Gigabytes of space you don’t know what to do with?  If so, isn’t it time you tidied up that stack of disks and filed them into your gorgeous new folder structure?

So what are you waiting for?  Bite the bullet, copy them all back onto your computer, file them in their appropriate folders, and then back the whole lot up onto a shiny new 1000Gig external hard drive!

Useful Folders to Create

This next section suggests some useful folders that you might want to create within your folder structure.  I’ve personally found them to be indispensable.

The first three are all about convenience – handy folders to create and then put somewhere that you can always access instantly.  For each one, it’s not so important where the actual folder is located, but it’s very important where you put the shortcut(s) to the folder.  You might want to locate the shortcuts:

- On your Desktop
- In your “Quick Launch” area (or pinned to your Windows 7 Superbar)
- In your Windows Explorer “Favorite Links” area

Tip #30.  Create an “Inbox” (“To-Do”) Folder

This has already been mentioned in depth (see tip #13), but we wanted to reiterate its importance here.  This folder contains all the recently created, received or downloaded files that you have not yet had a chance to file away properly, and it may also contain files that you have yet to process.  In effect, it becomes a sort of “to-do list”.  It doesn’t have to be called “Inbox” – you can call it whatever you want.

Tip #31.  Create a Folder where Your Current Projects are Collected

Rather than going hunting for them all the time, or dumping them all on your desktop, create a special folder where you put links (or work folders) for each of the projects you’re currently working on.  You can locate this folder in your “Inbox” folder, on your desktop, or anywhere at all – just so long as there’s a way of getting to it quickly, such as putting a link to it in Windows Explorer’s “Favorite Links” area.

Tip #32.  Create a Folder for Files and Folders that You Regularly Open

You will always have a few files that you open regularly, whether it be a spreadsheet of your current accounts or a favorite playlist.  These are not necessarily “current projects”; rather, they’re simply files that you always find yourself opening.  Typically such files would be located on your desktop (or even better, shortcuts to those files).  Why not collect all such shortcuts together and put them in their own special folder?
As with the “Current Projects” folder (above), you would want to locate that folder somewhere convenient.  Below is an example of a folder called “Quick links”, with about seven files (shortcuts) in it, that is accessible through the Windows Quick Launch bar.  See tip #37 below for a full explanation of the power of the Quick Launch bar.

Tip #33.  Create a “Set-ups” Folder

A typical computer has dozens of applications installed on it.  For each piece of software, there are often many different pieces of information you need to keep track of, including:

- The original installation setup file(s).  This can be anything from a simple 100Kb setup.exe file you downloaded from a website, all the way up to a 4Gig ISO file that you copied from a DVD-ROM that you purchased.
- The home page of the software manufacturer (in case you need to look up something on their support pages, their forum or their online help)
- The page containing the download link for your actual file (in case you need to re-download it, or download an upgraded version)
- The serial number
- Your proof-of-purchase documentation
- Any other template files, plug-ins, themes, etc that also need to get installed

For each piece of software, it’s a great idea to gather all of these files together and put them in a single folder.  The folder can be the name of the software (plus possibly a very brief description of what it’s for – in case you can’t remember what the software does based on its name).  Then you would gather all of these folders together into one place, and call it something like “Software” or “Setups”.

If you have enough of these folders (I have several hundred, being a geek, collected over 20 years), then you may want to further categorize them.  My own categorization structure is based on “platform” (operating system): the last seven folders each represent one platform/operating system, while _Operating Systems contains set-up files for installing the operating systems themselves.  _Hardware contains ROMs for hardware I own, such as routers.

Within the Windows folder (above), you can see the beginnings of the vast library of software I’ve compiled over the years.  An example of a typical application folder looks like this:

Tip #34.  Have a “Settings” Folder

We all know that our documents are important.  So are our photos and music files.  We save all of these files into folders, and then locate them afterwards and double-click on them to open them.  But there are many files that are important to us that can’t be saved into folders, and then searched for and double-clicked later on.  These files certainly contain important information that we need, but are often created internally by an application, and saved wherever that application feels is appropriate.

A good example of this is the “PST” file that Outlook creates for us and uses to store all our emails, contacts, appointments and so forth.  Another example would be the collection of Bookmarks that Firefox stores on your behalf.  And yet another example would be the customized settings and configuration files of all our software.  Granted, most Windows programs store their configuration in the Registry, but there are still many programs that use configuration files to store their settings.

Imagine if you lost all of the above files!  And yet, when people are backing up their computers, they typically only back up the files they know about – those that are stored in the “My Documents” folder, etc.
If they had a hard disk failure or their computer was lost or stolen, their backup files would not include some of the most vital files they owned.  Also, when migrating to a new computer, it’s vital to ensure that these files make the journey.

It can be a very useful idea to create yourself a folder to store all your “settings” – files that are important to you but which you never actually search for by name and double-click on to open.  Otherwise, next time you go to set up a new computer just the way you want it, you’ll need to spend hours recreating the configuration of your previous computer!

So how do we get our important files into this folder?  Well, we have a few options:

- Some programs (such as Outlook and its PST files) allow you to place these files wherever you want.  If you delve into the program’s options, you will find a setting somewhere that controls the location of the important settings files (or “personal storage” – PST – when it comes to Outlook).
- Some programs do not allow you to change such locations in any easy way, but if you get into the Registry, you can sometimes find a registry key that refers to the location of the file(s).  Simply move the file into your Settings folder and adjust the registry key to refer to the new location.
- Some programs stubbornly refuse to allow their settings files to be placed anywhere other than where they stipulate.  When faced with programs like these, you have three choices: (1) you can ignore those files, (2) you can copy the files into your Settings folder (let’s face it – settings don’t change very often), or (3) you can use synchronization software, such as the Windows Briefcase, to make synchronized copies of all your files in your Settings folder.  All you then have to do is remember to run your sync software periodically (perhaps just before you run your backup software!).

There are some other things you may decide to locate inside this new “Settings” folder:

- Exports of registry keys (from the many applications that store their configurations in the Registry).  This is useful for backup purposes or for migrating to a new computer.
- Notes you’ve made about all the specific customizations you have made to a particular piece of software (so that you’ll know how to do it all again on your next computer)
- Shortcuts to webpages that detail how to tweak certain aspects of your operating system or applications so they are just the way you like them (such as how to remove the words “Shortcut to” from the beginning of newly created shortcuts).  In other words, you’d want to create shortcuts to half the pages on the How-To Geek website!

Here’s an example of a “Settings” folder:

Windows Features that Help with Organization

This section details some of the features of Microsoft Windows that are a boon to anyone hoping to stay optimally organized.

Tip #35.  Use the “Favorite Links” Area to Access Oft-Used Folders

Once you’ve created your great new filing system, work out which folders you access most regularly, or which serve as great starting points for locating the rest of the files in your folder structure, and then put links to those folders in the “Favorite Links” area on the left-hand side of the Windows Explorer window (simply called “Favorites” in Windows 7).

Some ideas for folders you might want to add there include:

- Your “Inbox” folder (or whatever you’ve called it) – most important!
- The base of your filing structure (e.g. C:\Files)
- A folder containing shortcuts to often-accessed folders on other computers around the network (shown above as Network Folders)
- A folder containing shortcuts to your current projects (unless that folder is in your “Inbox” folder)

Getting folders into this area is very simple – just locate the folder you’re interested in and drag it there!

Tip #36.  Customize the Places Bar in the File/Open and File/Save Boxes

Consider the screenshot below: the highlighted icons (collectively known as the “Places Bar”) can be customized to refer to any folder location you want, allowing instant access to any part of your organizational structure.

Note: these File/Open and File/Save boxes have been superseded by new versions that use the Windows Vista/Windows 7 “Favorite Links”, but the older versions (shown above) are still used by a surprisingly large number of applications.

The easiest way to customize these icons is to use the Group Policy Editor, but not everyone has access to this program.  If you do, open it up and navigate to:

User Configuration > Administrative Templates > Windows Components > Windows Explorer > Common Open File Dialog

If you don’t have access to the Group Policy Editor, then you’ll need to get into the Registry.  Navigate to:

HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\comdlg32\Placesbar

It should then be easy to make the desired changes.  Log off and log on again to allow the changes to take effect.
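If you’d rather script the registry change than edit it by hand, here is a small C# sketch (an illustration of mine, not from the original article).  It assumes the Places Bar reads string values named Place0 through Place4, each holding a folder path – and the paths shown are just the examples used in this article:

using Microsoft.Win32;

class PlacesBarSketch
{
    static void Main()
    {
        // Creates the key if it doesn't exist, opens it writable if it does.
        using (RegistryKey key = Registry.CurrentUser.CreateSubKey(
            @"Software\Microsoft\Windows\CurrentVersion\Policies\comdlg32\Placesbar"))
        {
            // Hypothetical folders from this article's examples.
            key.SetValue("Place0", @"C:\Files", RegistryValueKind.String);
            key.SetValue("Place1", @"C:\Files\Inbox", RegistryValueKind.String);
        }
        // Log off and on again for the dialogs to pick up the change.
    }
}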
Tip #37.  Use the Quick Launch Bar as an Application and File Launcher

That Quick Launch bar (to the right of the Start button) is a lot more useful than people give it credit for.  Most people simply have half a dozen icons in it, and use it to start just those programs.  But it can actually be used to instantly access just about anything in your filing system.  For complete instructions on how to set this up, visit our dedicated article on this topic.

Tip #38.  Put a Shortcut to Windows Explorer into Your Quick Launch Bar

This is only necessary in Windows Vista and Windows XP.  The Microsoft boffins finally got wise and added it to the Windows 7 Superbar by default.

Windows Explorer – the program used for managing your files and folders – is one of the most useful programs in Windows.  Anyone who considers themselves serious about being organized needs instant access to this program at any time.  A great place to create a shortcut to this program is in the Windows XP and Windows Vista “Quick Launch” bar.  To get it there, locate it in your Start Menu (usually under “Accessories”) and then right-drag it down into your Quick Launch bar (and create a copy).

Tip #39.  Customize the Starting Folder for Your Windows 7 Explorer Superbar Icon

If you’re on Windows 7, your Superbar will include a Windows Explorer icon.  Clicking on the icon will launch Windows Explorer (of course), and will start you off in your “Libraries” folder.  Libraries may be fine as a starting point, but if you have created yourself an “Inbox” folder, then it would probably make more sense to start off in this folder every time you launch Windows Explorer.

To change this default/starting folder location, first right-click the Explorer icon in the Superbar, and then right-click Properties.  Then, in the Target field of the Windows Explorer Properties box that appears, type %windir%\explorer.exe followed by the path of the folder you wish to start in.  For example:

%windir%\explorer.exe C:\Files

If that folder happened to be on the Desktop (and called, say, “Inbox”), then you would use the following cleverness:

%windir%\explorer.exe shell:desktop\Inbox

Then click OK and test it out.

Tip #40.  Ummmmm…

No, that’s it.  I can’t think of another one.  That’s all of the tips I can come up with.  I only created this one because 40 is such a nice round number…

Case Study – An Organized PC

To finish off the article, I have included a few screenshots of my (main) computer (running Vista).  The aim here is twofold:

- To give you a sense of what it looks like when the above, sometimes abstract, tips are applied to a real-life computer, and
- To offer some ideas about folders and structure that you may want to steal to use on your own PC.

Let’s start with the C: drive itself.  Very minimal.  All my files are contained within C:\Files.  I’ll confine the rest of the case study to this folder, which contains the following:

- Mark: My personal files
- VC: My business (Virtual Creations, Australia)
- Others: files created by friends and family
- Data: files from the rest of the world (can be thought of as “public” files, usually downloaded from the Net)
- Settings: described above in tip #34

The Data folder contains the following sub-folders:

- Audio: Radio plays, audio books, podcasts, etc
- Development: Programmer and developer resources, sample source code, etc (see below)
- Humour: Jokes, funnies (those emails that we all receive)
- Movies: Downloaded and ripped movies (all legal, of course!), their scripts, DVD covers, etc.
- Music: (see below)
- Setups: Installation files for software (explained in full in tip #33)
- System: (see below)
- TV: Downloaded TV shows
- Writings: Books, instruction manuals, etc (see below)

The Music folder contains the following sub-folders:

- Album covers: JPEG scans
- Guitar tabs: Text files of guitar sheet music
- Lists: e.g. “Top 1000 songs of all time”
- Lyrics: Text files
- MIDI: Electronic music files
- MP3 (representing 99% of the Music folder): MP3s, either ripped from CDs or downloaded, sorted by artist/album name
- Music Video: Video clips
- Sheet Music: usually PDFs

The Data\Writings folder and the Data\Development folder contain sub-folders that are all pretty self-explanatory (if you’re a geek).  The Data\System folder’s sub-folders are usually themes, plug-ins and other downloadable program-specific resources.

The Mark folder contains the following sub-folders:

- From Others: Usually letters that other people (friends, family, etc) have written to me
- For Others: Letters and other things I have created for other people
- Green Book: None of your business
- Playlists: M3U files that I have compiled of my favorite songs (plus one M3U playlist file for every album I own)
- Writing: Fiction, philosophy and other musings of mine
- Mark Docs: Shortcut to C:\Users\Mark
- Settings: Shortcut to C:\Files\Settings\Mark

The Others folder and the VC (Virtual Creations, my business – I develop websites) folder contain sub-folders that are, again, all pretty self-explanatory.

Conclusion

These tips have saved my sanity and helped keep me a productive geek, but what about you?  What tips and tricks do you have to keep your files organized?  Please share them with us in the comments.
Come on, don’t be shy…


  • An Introduction to ASP.NET Web API

    - by Rick Strahl
Microsoft recently released ASP.NET MVC 4.0 and .NET 4.5, and along with them, the brand spanking new ASP.NET Web API. Web API is an exciting new addition to the ASP.NET stack that provides a new, well-designed HTTP framework for creating REST and AJAX APIs (API is Microsoft’s new jargon for a service, in case you’re wondering). Although Web API ships and installs with ASP.NET MVC 4, you can use Web API functionality in any ASP.NET project, including WebForms, WebPages and MVC – or just a Web API by itself. And you can also self-host Web API in your own applications from Console, Desktop or Service applications.

If you're interested in a high-level overview of what ASP.NET Web API is and how it fits into the ASP.NET stack, you can check out my previous post: Where does ASP.NET Web API fit?

In the following article, I'll focus on a practical, by-example introduction to ASP.NET Web API. All the code discussed in this article is available on GitHub: https://github.com/RickStrahl/AspNetWebApiArticle

[republished from my Code Magazine article and updated for the RTM release of ASP.NET Web API]

Getting Started

To start I’ll create a new empty ASP.NET application to demonstrate that Web API can work with any kind of ASP.NET project. Although you can create a new project based on the ASP.NET MVC/Web API template to quickly get up and running, I’ll take you through the manual setup process, because one common use case is to add Web API functionality to an existing ASP.NET application. This process describes the steps needed to hook up Web API to any ASP.NET 4.0 application. Start by creating an ASP.NET Empty Project. Then create a new folder in the project called Controllers.

Add a Web API Controller Class

Once you have any kind of ASP.NET project open, you can add a Web API Controller class to it. Web API Controllers are very similar to MVC Controller classes, but they work in any kind of project. Add a new item to this folder by using the Add New Item option in Visual Studio and choose Web API Controller Class, as shown in Figure 1.

Figure 1: This is how you create a new Controller Class in Visual Studio

Make sure that the name of the controller class includes Controller at the end of it, which is required in order for Web API routing to find it. Here, the name for the class is AlbumApiController.

For this example, I’ll use a Music Album model to demonstrate the basic behavior of Web API. The model consists of albums and related songs, where an album has properties like Name, Artist and YearReleased, and a list of songs with a SongName and SongLength, as well as an AlbumId that links each song to its album. You can find the code for the model (and the rest of these samples) on GitHub. To add the file manually, create a new folder called Model, add a new class Album.cs and copy the code into it. There’s a static AlbumData class with a static CreateSampleAlbumData() method that creates a short list of albums and exposes it on a static .Current member that I’ll use for the examples.

Before we look at what goes into the controller class though, let’s hook up routing so we can access this new controller.

Hooking up Routing in Global.asax

To start, I need to perform the one required configuration task in order for Web API to work: I need to configure routing to the controller. Like MVC, Web API uses routing to provide clean, extension-less URLs to controller methods.
Using an extension method on ASP.NET’s static RouteTable class, you can use the MapHttpRoute() method (in the System.Web.Http namespace) to hook up the routing during Application_Start in global.asax.cs, as shown in Listing 1.

using System;
using System.Web.Routing;
using System.Web.Http;

namespace AspNetWebApi
{
    public class Global : System.Web.HttpApplication
    {
        protected void Application_Start(object sender, EventArgs e)
        {
            RouteTable.Routes.MapHttpRoute(
                name: "AlbumVerbs",
                routeTemplate: "albums/{title}",
                defaults: new { title = RouteParameter.Optional, controller = "AlbumApi" }
            );
        }
    }
}

This route configures Web API to direct URLs that start with an albums folder to the AlbumApiController class. Routing in ASP.NET is used to create extension-less URLs and allows you to map segments of the URL to specific route value parameters. A route parameter, with a name inside curly brackets like {name}, is mapped to parameters on the controller methods. Route parameters can be optional, and there are two special route parameters – controller and action – that determine the controller to call and the method to activate, respectively.

HTTP Verb Routing

Routing in Web API can route requests by HTTP Verb in addition to standard {controller},{action} routing. For the first examples, I use HTTP Verb routing, as shown in Listing 1. Notice that the route I’ve defined does not include an {action} route value or action value in the defaults. Rather, Web API can use the HTTP Verb in this route to determine the method to call on the controller: a GET request maps to any method that starts with Get, so methods called Get() or GetAlbums() are matched by a GET request, and a POST request maps to a Post() or PostAlbum(). Web API matches a method by name and parameter signature against the route, query string or POST values. In lieu of the method name, the [HttpGet], [HttpPost], [HttpPut], [HttpDelete] (etc.) attributes can also be used to designate the accepted verbs explicitly if you don’t want to follow the verb naming conventions.

Although HTTP Verb routing is a good practice for REST-style resource APIs, it’s not required and you can still use more traditional routes with an explicit {action} route parameter. When {action} is supplied, the HTTP verb routing is ignored. I’ll talk more about alternate routes later.

When you’re finished with the initial creation of files, your project should look like Figure 2.

Figure 2: The initial project has the new API Controller and Album model

Creating a small Album Model

Now it’s time to create some controller methods to serve data. For these examples, I’ll use a very simple Album and Songs model to play with, as shown in Listing 2.
public class Song
{
    public string AlbumId { get; set; }
    [Required, StringLength(80)]
    public string SongName { get; set; }
    [StringLength(5)]
    public string SongLength { get; set; }
}

public class Album
{
    public string Id { get; set; }
    [Required, StringLength(80)]
    public string AlbumName { get; set; }
    [StringLength(80)]
    public string Artist { get; set; }
    public int YearReleased { get; set; }
    public DateTime Entered { get; set; }
    [StringLength(150)]
    public string AlbumImageUrl { get; set; }
    [StringLength(200)]
    public string AmazonUrl { get; set; }

    public virtual List<Song> Songs { get; set; }

    public Album()
    {
        Songs = new List<Song>();
        Entered = DateTime.Now;

        // Poor man's unique Id off GUID hash
        Id = Guid.NewGuid().GetHashCode().ToString("x");
    }

    public void AddSong(string songName, string songLength = null)
    {
        this.Songs.Add(new Song()
        {
            AlbumId = this.Id,
            SongName = songName,
            SongLength = songLength
        });
    }
}

Once the model has been created, I also added an AlbumData class that generates some static data in memory that is loaded onto a static .Current member. The signature of this class looks like this, and that's what I'll access to retrieve the base data:

public static class AlbumData
{
    // sample data - static list
    public static List<Album> Current = CreateSampleAlbumData();

    /// <summary>
    /// Create some sample data
    /// </summary>
    /// <returns></returns>
    public static List<Album> CreateSampleAlbumData()
    { … }
}

You can check out the full code for the data generation online.

Creating an AlbumApiController

Web API shares many concepts with ASP.NET MVC, and the implementation of your API logic is done by implementing a subclass of the System.Web.Http.ApiController class. Each public method in the implemented controller is a potential endpoint for the HTTP API, as long as a matching route can be found to invoke it. The class name you create should end in Controller, which is how Web API matches the controller route value to figure out which class to invoke.

Inside the controller you can implement methods that take standard .NET input parameters and return .NET values as results. Web API’s binding tries to match POST data, route values, form values or query string values to your parameters. Because the controller is configured for HTTP Verb based routing (no {action} parameter in the route), any methods that start with Getxxxx() are called by an HTTP GET operation. You can have multiple methods that match each HTTP Verb as long as the parameter signatures are different and can be matched by Web API. In Listing 3, I create an AlbumApiController with two methods to retrieve a list of albums and a single album by its title.

public class AlbumApiController : ApiController
{
    public IEnumerable<Album> GetAlbums()
    {
        var albums = AlbumData.Current.OrderBy(alb => alb.Artist);
        return albums;
    }

    public Album GetAlbum(string title)
    {
        var album = AlbumData.Current
            .SingleOrDefault(alb => alb.AlbumName.Contains(title));
        return album;
    }
}

To access these two methods, you can use the following URLs in your browser:

http://localhost/aspnetWebApi/albums
http://localhost/aspnetWebApi/albums/Dirty%20Deeds

Note that you’re not specifying the actions of GetAlbum or GetAlbums in these URLs. Instead Web API’s routing uses the HTTP GET verb to route to these methods that start with Getxxx(), with the first mapping to the parameterless GetAlbums() method and the latter to the GetAlbum(title) method that receives the title parameter mapped as optional in the route.
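Before moving on to content negotiation, here is a quick client-side sketch (mine, not from the article) that exercises the verb routing above from code instead of the browser. It assumes .NET 4.5’s System.Net.Http.HttpClient and the local URLs used in the examples:

using System;
using System.Net.Http;

class ClientSketch
{
    static void Main()
    {
        using (var client = new HttpClient())
        {
            // GET with no title -> GetAlbums(); GET with a title -> GetAlbum(title)
            string albums = client.GetStringAsync(
                "http://localhost/aspnetWebApi/albums").Result;
            string single = client.GetStringAsync(
                "http://localhost/aspnetWebApi/albums/Dirty%20Deeds").Result;

            Console.WriteLine(albums);
            Console.WriteLine(single);
        }
    }
}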
Content Negotiation

When you access any of the URLs above from a browser, you get either an XML or JSON result returned back. The album list result for Chrome 17 and Internet Explorer 9 is shown in Figure 3.

Figure 3: Web API responses can vary depending on the browser used, demonstrating Content Negotiation in action as these two browsers send different HTTP Accept headers.

Notice that the results are not the same: Chrome returns an XML response and IE9 returns a JSON response. Whoa, what’s going on here? Shouldn’t we see the same result in both browsers? Actually, no. Web API determines what type of content to return based on Accept headers. HTTP clients, like browsers, use Accept headers to specify what kind of content they’d like to see returned. Browsers generally ask for HTML first, followed by a few additional content types.

Chrome (and most other major browsers) ask for:

Accept: text/html, application/xhtml+xml, application/xml; q=0.9, */*; q=0.8

IE9 asks for:

Accept: text/html, application/xhtml+xml, */*

Note that Chrome’s Accept header includes application/xml, which Web API finds in its list of supported media types, so it returns an XML response. IE9 does not include an Accept header type that Web API supports by default, so it returns the default format, which is JSON.

This is an important and very useful feature that was missing from previous Microsoft REST tools: Web API automatically switches output formats based on HTTP Accept headers. Nowhere in the server code above do you have to explicitly specify the output format. Rather, Web API determines what format the client is requesting based on the Accept headers and automatically returns the result using the available formatters. This means that a single method can handle both XML and JSON results. Using this simple approach makes it very easy to create a single controller method that can return JSON, XML, ATOM or even OData feeds by providing the appropriate Accept header from the client. By default you don’t have to worry about the output format in your code.

Note that you can still specify an explicit output format if you choose, either globally by overriding the installed formatters, or individually by returning a lower-level HttpResponseMessage instance and setting the formatter explicitly. More on that in a minute.

Along the same lines, any content sent to the server via POST/PUT is parsed by Web API based on the HTTP Content-Type of the data sent. The same formats allowed for output are also allowed on input. Again, you don’t have to do anything in your code – Web API automatically performs the deserialization from the content.

Accessing Web API JSON Data with jQuery

A very common scenario for Web API endpoints is to retrieve data for AJAX calls from the Web browser. Because JSON is the default format for Web API, it’s easy to access data from the server using jQuery and its getJSON() method. This example receives the albums array from GetAlbums() and databinds it into the page using knockout.js:

$.getJSON("albums/", function (albums) {
    // make knockout template visible
    $(".album").show();

    // create view object and attach array
    var view = { albums: albums };
    ko.applyBindings(view);
});

Figure 4 shows this and the next example’s HTML output. You can check out the complete HTML and script code at http://goo.gl/Ix33C (.html) and http://goo.gl/tETlg (.js).

Figure 4: The Album Display sample uses JSON data loaded from Web API.
The result from the getJSON() call is a JavaScript object of the server result, which comes back as a JavaScript array. In the code, I use knockout.js to bind this array into the UI, which, as you can see, requires very little code, instead using knockout’s data-bind attributes to bind server data to the UI. Of course, this is just one way to use the data – it’s entirely up to you to decide what to do with the data in your client code.

Along the same lines, I can retrieve a single album to display when the user clicks on an album. The response returns the album information and a child array with all the songs. The code to do this is very similar to the last example where we pulled the albums array:

$(".albumlink").live("click", function () {
    var id = $(this).data("id"); // title
    $.getJSON("albums/" + id, function (album) {
        ko.applyBindings(album, $("#divAlbumDialog")[0]);
        $("#divAlbumDialog").show();
    });
});

Here the URL looks like this: /albums/Dirty%20Deeds, where the title is the ID captured from the clicked element’s data ID attribute.

Explicitly Overriding Output Format

When Web API automatically converts output using content negotiation, it does so by matching Accept header media types to the GlobalConfiguration.Configuration.Formatters collection and the SupportedMediaTypes of each individual formatter. You can add and remove formatters to globally affect what formats are available, and it’s easy to create and plug in custom formatters. The example project includes a JSONP formatter that can be plugged in to provide JSONP support for requests that have a callback= querystring parameter. Adding, removing or replacing formatters is a global option you can use to manipulate content. It’s beyond the scope of this introduction to show how it works, but you can review the sample code or check out my blog entry on the subject (http://goo.gl/UAzaR).
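As a tiny illustration of that global option (my sketch, not the article’s code – wanting JSON-only output is an assumption, and the JSONP formatter class name is hypothetical), this is roughly what formatter manipulation in Application_Start might look like:

// In Application_Start, after the routes are registered:
var formatters = GlobalConfiguration.Configuration.Formatters;

// Remove the XML formatter so every response defaults to JSON,
// regardless of the Accept header.
formatters.Remove(formatters.XmlFormatter);

// Or plug in a custom formatter, such as the sample project's
// JSONP formatter (hypothetical class name here):
// formatters.Add(new JsonpMediaTypeFormatter());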
If automatic processing is not desirable in a particular controller method, you can override the response output explicitly by returning an HttpResponseMessage instance. HttpResponseMessage is similar to ActionResult in ASP.NET MVC in that it’s a common way to return an abstract result message that contains content. HttpResponseMessage is parsed by the Web API framework using standard interfaces to retrieve the response data, status code, headers and so on. Web API turns every response – including those from controller methods that return static results – into an HttpResponseMessage instance. Explicitly returning an HttpResponseMessage instance gives you full control over the output and lets you mostly bypass Web API’s post-processing of the HTTP response on your behalf.

HttpResponseMessage allows you to customize the response in great detail. Web API’s attention to detail in the HTTP spec really shows; many HTTP options are exposed as properties and enumerations with detailed IntelliSense comments. Even if you’re new to building REST-based interfaces, the API guides you in the right direction for returning valid responses and response codes.

For example, assume that I always want to return JSON from the GetAlbums() controller method and ignore the default media type content negotiation. To do this, I can adjust the output format and headers as shown in Listing 4.

public HttpResponseMessage GetAlbums()
{
    var albums = AlbumData.Current.OrderBy(alb => alb.Artist);

    // Create a new HttpResponse with Json Formatter explicitly
    var resp = new HttpResponseMessage(HttpStatusCode.OK);
    resp.Content = new ObjectContent<IEnumerable<Album>>(
        albums, new JsonMediaTypeFormatter());

    // Get Default Formatter based on Content Negotiation
    //var resp = Request.CreateResponse<IEnumerable<Album>>(HttpStatusCode.OK, albums);

    resp.Headers.ConnectionClose = true;
    resp.Headers.CacheControl = new CacheControlHeaderValue();
    resp.Headers.CacheControl.Public = true;

    return resp;
}

This example returns the same IEnumerable<Album> value, but it wraps the response in an HttpResponseMessage so you can control the entire HTTP message result, including the headers, formatter and status code. In Listing 4, I explicitly specify the formatter using the JsonMediaTypeFormatter to always force the content to JSON.

If you prefer to use the default content negotiation with HttpResponseMessage results, you can create the response instance using the Request.CreateResponse method:

var resp = Request.CreateResponse<IEnumerable<Album>>(HttpStatusCode.OK, albums);

This provides you an HttpResponse object that's pre-configured with the default formatter based on content negotiation. Once you have an HttpResponse object, you can easily control most HTTP aspects on it. What's sweet here is that there are many more detailed properties on HttpResponse than on the core ASP.NET Response object, with most options being explicitly configurable via enumerations that make it easy to pick the right headers and response codes from a list of valid codes. It makes HTTP features much more discoverable, even for non-hardcore REST/HTTP geeks.

Non-Serialized Results

The output returned doesn’t have to be a serialized value but can also be raw data, like strings, binary data or streams. You can use the HttpResponseMessage.Content object to set a number of common content classes. Listing 5 shows how to return a binary image using the ByteArrayContent class from a controller method.

[HttpGet]
public HttpResponseMessage AlbumArt(string title)
{
    var album = AlbumData.Current.FirstOrDefault(abl => abl.AlbumName.StartsWith(title));
    if (album == null)
    {
        var resp = Request.CreateResponse<ApiMessageError>(
            HttpStatusCode.NotFound,
            new ApiMessageError("Album not found"));
        return resp;
    }

    // kinda silly - we would normally serve this directly
    // but hey - it's a demo.
    var http = new WebClient();
    var imageData = http.DownloadData(album.AlbumImageUrl);

    // create response and return
    var result = new HttpResponseMessage(HttpStatusCode.OK);
    result.Content = new ByteArrayContent(imageData);
    result.Content.Headers.ContentType = new MediaTypeHeaderValue("image/jpeg");
    return result;
}

The image retrieval from Amazon is contrived, but it shows how to return binary data using ByteArrayContent. It also demonstrates that you can easily return multiple types of content from a single controller method, which is actually quite common. If an error occurs – such as a resource that can’t be found or a validation error – you can return an error response to the client that’s very specific to the error. In AlbumArt(), if the album can’t be found, we want to return a 404 Not Found status (and realistically no error, as it’s an image).
Note that if you are not using HTTP Verb-based routing or not accessing a method that starts with Get/Post etc., you have to specify one or more HTTP Verb attributes on the method explicitly. Here, I used the [HttpGet] attribute to serve the image. Another option to handle the error could be to return a fixed placeholder image if no album could be matched or the album doesn't have an image. When returning an error code, you can also return a strongly typed response to the client. For example, you can set the 404 status code and also return a custom error object (ApiMessageError is a class I defined) like this:

return Request.CreateResponse<ApiMessageError>(
    HttpStatusCode.NotFound,
    new ApiMessageError("Album not found")
);

If the album can be found, the image will be returned. The image is downloaded into a byte[] array and then assigned to the result's Content property. I created a new ByteArrayContent instance and assigned the image's bytes and the content type so that it displays properly in the browser. There are other content classes available: StringContent, StreamContent, ByteArrayContent, MultipartContent, and ObjectContent are at your disposal to return just about any kind of content. You can create your own Content classes if you frequently return custom types and need to handle the default formatter assignments that should be used to send the data out. Although HttpResponseMessage results require more code than returning a plain .NET value from a method, this approach allows much more control over the actual HTTP processing than automatic processing does. It also makes it much easier to test your controller methods, as you get a response object that you can check for specific status codes and output messages rather than just a result value.

Routing Again

Ok, let's get back to the image example. Using the original HTTP Verb routing we set up, there's no good way to serve the image. In order to return my album art image I'd like to use a URL like this: http://localhost/aspnetWebApi/albums/Dirty%20Deeds/image. In order to create a URL like this, I have to create a new Controller, because my earlier routes pointed to the AlbumApiController using HTTP Verb routing. HTTP Verb-based routing is great for representing a single set of resources such as albums. You can map operations like add, delete, update and read easily using HTTP Verbs. But you cannot mix action-based routing into an HTTP Verb routing controller - you can only map HTTP Verbs, and each method has to be unique based on parameter signature. You can't have multiple GET operations to methods with the same signature. So GetImage(string id) and GetAlbum(string title) are in conflict in an HTTP GET routing scenario. In fact, I was unable to make the above image URL work with any combination of HTTP Verb plus custom routing using the single Albums controller. There are a number of ways around this, but all involve additional controllers. Personally, I think it's easier to use explicit Action routing and then add custom routes if you need to simplify your URLs further. So in order to accommodate some of the other examples, I created another controller – AlbumRpcApiController – to handle all requests that are explicitly routed via actions (/albums/rpc/AlbumArt) or are custom routed with explicit routes defined in the HttpConfiguration. I added the AlbumArt() method to this new AlbumRpcApiController class.
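The article doesn't show the definition of the custom route for the image URL itself, but based on the URLs used here it would presumably look something like this (a sketch; the route name is made up):

RouteTable.Routes.MapHttpRoute(
    name: "AlbumRpcApiImage",
    routeTemplate: "albums/{title}/image",
    defaults: new
    {
        controller = "AlbumRpcApi",
        action = "AlbumArt"
    }
);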
In addition, for the action-routed URL to work with the new AlbumRpcApiController, you need a custom route placed before the default route from Listing 1.

RouteTable.Routes.MapHttpRoute(
    name: "AlbumRpcApiAction",
    routeTemplate: "albums/rpc/{action}/{title}",
    defaults: new
    {
        title = RouteParameter.Optional,
        controller = "AlbumRpcApi",
        action = "GetAlbums"
    }
);

Now I can use either of the following URLs to access the image:

Custom route: (/albums/{title}/image) http://localhost/aspnetWebApi/albums/PowerAge/image
Action route: (/albums/rpc/{action}/{title}) http://localhost/aspnetWebAPI/albums/rpc/albumart/PowerAge

Sending Data to the Server

To send data to the server and add a new album, you can use an HTTP POST operation. Since I'm using HTTP Verb-based routing in the original AlbumApiController, I can implement a method called PostAlbum() to accept a new album from the client. Listing 6 shows the Web API code to add a new album.

public HttpResponseMessage PostAlbum(Album album)
{
    if (!this.ModelState.IsValid)
    {
        // my custom error class
        var error = new ApiMessageError() { message = "Model is invalid" };

        // add errors into our client error model for client
        foreach (var prop in ModelState.Values)
        {
            var modelError = prop.Errors.FirstOrDefault();
            if (!string.IsNullOrEmpty(modelError.ErrorMessage))
                error.errors.Add(modelError.ErrorMessage);
            else
                error.errors.Add(modelError.Exception.Message);
        }
        return Request.CreateResponse<ApiMessageError>(HttpStatusCode.Conflict, error);
    }

    // update song ids which aren't provided
    foreach (var song in album.Songs)
        song.AlbumId = album.Id;

    // see if album exists already
    var matchedAlbum = AlbumData.Current
        .SingleOrDefault(alb => alb.Id == album.Id || alb.AlbumName == album.AlbumName);
    if (matchedAlbum == null)
        AlbumData.Current.Add(album);
    else
        matchedAlbum = album;

    // return a string to show that the value got here
    var resp = Request.CreateResponse(HttpStatusCode.OK, string.Empty);
    resp.Content = new StringContent(album.AlbumName + " " + album.Entered.ToString(),
        Encoding.UTF8, "text/plain");
    return resp;
}

The PostAlbum() method receives an album parameter, which is automatically deserialized from the POST buffer the client sent. The data passed from the client can be either XML or JSON. Web API automatically figures out what format it needs to deserialize based on the content type and binds the content to the album object. Web API uses model binding to bind the request content to the parameter(s) of controller methods. Like MVC, you can check the model by looking at ModelState.IsValid. If it's not valid, you can run through the ModelState.Values collection and check each binding for errors. Here I collect the error messages into a string array that gets passed back to the client via the resulting ApiMessageError object. When a binding error occurs, you'll want to return an HTTP error response, and it's best to do that with an HttpResponseMessage result. In Listing 6, I used a custom error class that holds a message and an array of detailed error messages for each binding error. I used this object as the content to return to the client along with my Conflict HTTP status code response. If binding succeeds, the example returns a string with the name and date entered to demonstrate that you captured the data. Normally, a method like this should return a Boolean or no response at all (HttpStatusCode.NoContent). The sample uses a simple static list to hold albums, so once you've added the album using the POST operation, you can hit the /albums/ URL to see that the new album was added.
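As an aside, a more REST-style response for a create operation would be 201 Created with a Location header pointing at the new resource. In this sample that might look something like the following (a sketch, not part of the article's code; it assumes the /albums/{title} URL scheme used earlier):

// Return 201 plus the created album and a link to its resource URL
var resp = Request.CreateResponse<Album>(HttpStatusCode.Created, album);
resp.Headers.Location = new Uri(Request.RequestUri,
    "albums/" + Uri.EscapeDataString(album.AlbumName));
return resp;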
The client jQuery code to call the POST operation is shown in Listing 7.

var id = new Date().getTime().toString();
var album = {
    "Id": id,
    "AlbumName": "Power Age",
    "Artist": "AC/DC",
    "YearReleased": 1977,
    "Entered": "2002-03-11T18:24:43.5580794-10:00",
    "AlbumImageUrl": "http://ecx.images-amazon.com/images/…",
    "AmazonUrl": "http://www.amazon.com/…",
    "Songs": [
        { "SongName": "Rock 'n Roll Damnation", "SongLength": 3.12 },
        { "SongName": "Downpayment Blues", "SongLength": 4.22 },
        { "SongName": "Riff Raff", "SongLength": 2.42 }
    ]
}

$.ajax({
    url: "albums/",
    type: "POST",
    contentType: "application/json",
    data: JSON.stringify(album),
    processData: false,
    beforeSend: function (xhr) {
        // not required since JSON is default output
        xhr.setRequestHeader("Accept", "application/json");
    },
    success: function (result) {
        // reload list of albums
        page.loadAlbums();
    },
    error: function (xhr, status, p3, p4) {
        var err = "Error";
        if (xhr.responseText && xhr.responseText[0] == "{")
            err = JSON.parse(xhr.responseText).message;
        alert(err);
    }
});

The code in Listing 7 creates an album object in JavaScript to match the structure of the .NET Album class. This object is passed to the $.ajax() function to send to the server via POST. The data is turned into JSON and the content type is set to application/json so that the server knows what to convert when deserializing into the Album instance. The jQuery code hooks up success and failure events. On success, the result data is returned and the handler here simply reloads the list of albums. If an error occurs, jQuery returns the XHR instance and status code. You can check the XHR to see if a JSON object is embedded, and if it is, you can extract it by de-serializing it and accessing the .message property. REST standards suggest that updates to existing resources should use PUT operations. REST standards aside, I'm not a big fan of separating out inserts and updates, so I tend to have a single method that handles both. But if you want to follow REST suggestions, you can create a PUT method that handles updates by forwarding the PUT operation to the POST method:

public HttpResponseMessage PutAlbum(Album album)
{
    return PostAlbum(album);
}

To make the corresponding $.ajax() call, all you have to change from Listing 7 is the type: from POST to PUT.

Model Binding with UrlEncoded POST Variables

In the example in Listing 7, I used JSON objects to post a serialized object to a server method that accepted a strongly typed object with the same structure, which is a common way to send data to the server. However, Web API supports a number of different ways that data can be received by server methods. For example, another common way is to send plain UrlEncoded POST values to the server. Web API supports model binding that works similarly to (but not the same as) MVC's model binding, where POST variables are mapped to properties of object parameters of the target method. This is actually quite common for AJAX calls that want to avoid serialization and the potential requirement of a JSON parser on older browsers. For example, using jQuery you might use the $.post() method to send a new album to the server (albeit one without songs) using code like the following:

$.post("albums/", { AlbumName: "Dirty Deeds", YearReleased: 1976 … }, albumPostCallback);

Although the code looks very similar to the client code we used before to pass JSON, here the data passed is UrlEncoded values (AlbumName=Dirty+Deeds&YearReleased=1976 etc.).
Web API then takes this POST data and maps each of the POST values to the properties of the Album object in the method's parameter. Although the client code is different, the server can handle both the JSON object and the UrlEncoded POST values.

Dynamic Access to POST Data

There are also a few options available to dynamically access POST data, if you know what type of data you're dealing with. If you have UrlEncoded POST values, you can access them dynamically using a FormDataCollection:

[HttpPost]
public string PostAlbum(FormDataCollection form)
{
    return string.Format("{0} - released {1}",
        form.Get("AlbumName"), form.Get("YearReleased"));
}

The FormDataCollection is a very simple object that essentially provides the same functionality as Request.Form[] in ASP.NET. Request.Form[] still works if you're running hosted in an ASP.NET application. However, as a general rule, while ASP.NET's functionality is always available when Web API is hosted inside of an ASP.NET application, using the built-in classes specific to Web API makes it possible to run Web API applications in a self-hosted environment outside of ASP.NET. If your client is sending JSON to your server, and you don't want to map the JSON to a strongly typed object because you only want to retrieve a few simple values, you can also accept a JObject parameter in your API methods:

[HttpPost]
public string PostAlbum(JObject jsonData)
{
    dynamic json = jsonData;
    JObject jalbum = json.Album;
    JObject juser = json.User;
    string token = json.UserToken;

    var album = jalbum.ToObject<Album>();
    var user = juser.ToObject<User>();

    return String.Format("{0} {1} {2}", album.AlbumName, user.Name, token);
}

There are quite a few options available to you to receive data with Web API, which gives you more choices for the right tool for the job. Unfortunately, one shortcoming of Web API is that POST data is always mapped to a single parameter. This means you can't pass multiple POST parameters to methods that receive POST data. It's possible to accept multiple parameters, but only one can map to the POST content - the others have to come from the query string or route values. I have a couple of blog posts that explain what works and what doesn't here:

Passing multiple POST parameters to Web API Controller Methods
Mapping UrlEncoded POST Values in ASP.NET Web API

Handling Delete Operations

Finally, to round out the server API code of the album example we've been discussing, here's the DELETE verb controller method that allows removal of an album by its title:

public HttpResponseMessage DeleteAlbum(string title)
{
    var matchedAlbum = AlbumData.Current.Where(alb => alb.AlbumName == title)
        .SingleOrDefault();
    if (matchedAlbum == null)
        return new HttpResponseMessage(HttpStatusCode.NotFound);

    AlbumData.Current.Remove(matchedAlbum);

    return new HttpResponseMessage(HttpStatusCode.NoContent);
}

To call this action method using jQuery, you can use:

$(".removeimage").live("click", function () {
    var $el = $(this).parent(".album");
    var txt = $el.find("a").text();
    $.ajax({
        url: "albums/" + encodeURIComponent(txt),
        type: "Delete",
        success: function (result) {
            $el.fadeOut().remove();
        },
        error: jqError
    });
});

Note the use of the DELETE verb in the $.ajax() call, which routes to DeleteAlbum on the server. DELETE is a non-content operation, so you supply a resource ID (the title) via route value or the querystring.
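For completeness, the same DELETE call from .NET client code might look like this (a sketch using HttpClient; this isn't part of the article's sample):

using System;
using System.Net.Http;

class DeleteAlbumDemo
{
    static void Main()
    {
        // Assumes the Web API sample is hosted at this base address
        var client = new HttpClient { BaseAddress = new Uri("http://localhost/aspnetWebApi/") };

        // DELETE albums/{title} - the album title acts as the resource id
        var response = client.DeleteAsync("albums/" + Uri.EscapeDataString("Power Age")).Result;

        Console.WriteLine((int)response.StatusCode); // expect 204 NoContent on success
    }
}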
Routing Conflicts

In all requests shown so far, with the exception of the AlbumArt image example, I used the HTTP Verb routing that I set up in Listing 1. HTTP Verb routing is a recommendation that is in line with typical REST access to HTTP resources. However, it takes quite a bit of effort to create REST-compliant API implementations based on HTTP Verb routing alone. You saw one example that didn't really fit – the return of an image, where I created a custom route albums/{title}/image that required creation of a second controller and a custom route to work. HTTP Verb routing to a controller does not mix with custom or action routing to the same controller because of the limited mapping of HTTP verbs imposed by HTTP Verb routing. To understand some of the problems with verb routing, let's look at another example. Let's say you create a GetSortableAlbums() method like this and add it to the original AlbumApiController accessed via HTTP Verb routing:

[HttpGet]
public IQueryable<Album> GetSortableAlbums()
{
    var albums = AlbumData.Current;

    // generally this should be done only on actual queryable results (EF etc.)
    // Done here because we're running with a static list, but otherwise it might be slow
    return albums.AsQueryable();
}

If you compile this code and try to now access the /albums/ link, you get an error: Multiple Actions were found that match the request. HTTP Verb routing only allows access to one GET operation per parameter/route value match. If more than one method exists with the same parameter signature, it doesn't work. As I mentioned earlier for the image display, the only solution to get this method to work is to throw it into another controller. Because I already set up the AlbumRpcApiController, I can add the method there. First, I should rename the method to SortableAlbums() so I'm not using a Get prefix for the method. This also makes the action parameter look cleaner in the URL - it looks less like a method and more like a noun. I can then create a new route that handles direct-action mapping:

RouteTable.Routes.MapHttpRoute(
    name: "AlbumRpcApiAction",
    routeTemplate: "albums/rpc/{action}/{title}",
    defaults: new
    {
        title = RouteParameter.Optional,
        controller = "AlbumRpcApi",
        action = "GetAlbums"
    }
);

As I am explicitly adding a route segment – rpc – into the route template, I can now reference explicit methods in the Web API controller using URLs like this: http://localhost/AspNetWebApi/albums/rpc/SortableAlbums

Error Handling

I've already done some minimal error handling in the examples. For example, in Listing 6 I detected some known error scenarios, like model validation failing or a resource not being found, and returned an appropriate HttpResponseMessage result. But what happens if your code just blows up or causes an exception? If you have a controller method like this:

[HttpGet]
public void ThrowException()
{
    throw new UnauthorizedAccessException("Unauthorized Access Sucka");
}

You can call it with this: http://localhost/AspNetWebApi/albums/rpc/ThrowException

The default exception handling displays a 500-status response with the serialized exception on the local computer only. When you connect from a remote computer, Web API throws back a 500 HTTP error with no data returned (IIS then adds its HTML error page). The behavior is configurable in the GlobalConfiguration:

GlobalConfiguration
    .Configuration
    .IncludeErrorDetailPolicy = IncludeErrorDetailPolicy.Never;

If you want more control over the error responses sent from your code, you can throw explicit error responses yourself using HttpResponseException. When you throw an HttpResponseException, the response parameter is used to generate the output for the Controller action.
[HttpGet]
public void ThrowError()
{
    var resp = Request.CreateResponse<ApiMessageError>(
        HttpStatusCode.BadRequest,
        new ApiMessageError("Your code stinks!"));
    throw new HttpResponseException(resp);
}

Throwing an HttpResponseException stops the processing of the controller method and immediately returns the response you passed to the exception. Unlike other exceptions fired inside of Web API, HttpResponseException bypasses the Exception Filters installed and instead just outputs the response you provide. In this case, the serialized ApiMessageError result is returned in the default serialization format – XML or JSON. You can pass any content to HttpResponseMessage, which includes creating your own exception objects and consistently returning error messages to the client. Here's a small helper method on the controller that you might use to send exception info back to the client consistently:

private void ThrowSafeException(string message,
    HttpStatusCode statusCode = HttpStatusCode.BadRequest)
{
    var errResponse = Request.CreateResponse<ApiMessageError>(statusCode,
        new ApiMessageError() { message = message });
    throw new HttpResponseException(errResponse);
}

You can then use it to output any captured errors from code:

[HttpGet]
public void ThrowErrorSafe()
{
    try
    {
        List<string> list = null;
        list.Add("Rick");
    }
    catch (Exception ex)
    {
        ThrowSafeException(ex.Message);
    }
}

Exception Filters

Another, more global solution is to create an Exception Filter. Filters in Web API provide the ability to pre- and post-process controller method operations. An exception filter looks at all exceptions fired and then optionally creates an HttpResponseMessage result. Listing 8 shows an example of a basic Exception Filter implementation.

public class UnhandledExceptionFilter : ExceptionFilterAttribute
{
    public override void OnException(HttpActionExecutedContext context)
    {
        HttpStatusCode status = HttpStatusCode.InternalServerError;

        var exType = context.Exception.GetType();
        if (exType == typeof(UnauthorizedAccessException))
            status = HttpStatusCode.Unauthorized;
        else if (exType == typeof(ArgumentException))
            status = HttpStatusCode.NotFound;

        var apiError = new ApiMessageError() { message = context.Exception.Message };

        // create a new response and attach our ApiError object
        // which now gets returned on ANY exception result
        var errorResponse = context.Request.CreateResponse<ApiMessageError>(status, apiError);
        context.Response = errorResponse;

        base.OnException(context);
    }
}

Exception Filter attributes can be assigned to an ApiController class like this:

[UnhandledExceptionFilter]
public class AlbumRpcApiController : ApiController

or you can globally assign the filter to all controllers by adding it to the HTTP configuration's Filters collection:

GlobalConfiguration.Configuration.Filters.Add(new UnhandledExceptionFilter());

The latter is a great way to get global error trapping, so that all errors (short of hard IIS errors and explicit HttpResponseException errors) return a valid error response that includes error information in the form of a known error object. Using a filter like this allows you to throw an exception as you normally would and have your filter create a response in the appropriate output format that the client expects. For example, an AJAX application can, on failure, expect to see a JSON error result that corresponds to the real error that occurred, rather than a 500 error along with the HTML error page that IIS throws up.
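Building on the filter above, a small custom exception type gives you a clean way to mark errors as safe to show to the client (a sketch; ApiException is a hypothetical class, not part of the article's sample):

// A hypothetical application exception the filter could special-case
public class ApiException : Exception
{
    public HttpStatusCode StatusCode { get; private set; }

    public ApiException(string message,
        HttpStatusCode statusCode = HttpStatusCode.BadRequest)
        : base(message)
    {
        StatusCode = statusCode;
    }
}

Inside OnException(), the filter could then check context.Exception for this type, use its StatusCode and message directly, and return a generic message for everything else.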
Custom exceptions like this let you differentiate your own exceptions from unhandled system exceptions - you often don't want to display error information from 'unknown' exceptions, as they may contain sensitive system information or info that's not generally useful to users of your application/site. This is just one example of how ASP.NET Web API is configurable and extensible. Exception filters are just one example of how you can plug into the Web API request flow to modify output. Many more hooks exist, and I'll take a closer look at extensibility in Part 2 of this article in the future.

Summary

Web API is a big improvement over previous Microsoft REST and AJAX toolkits. The keys to its usefulness are its ease of use with simple controller-based logic, familiar MVC-style routing, low configuration impact, extensibility at all levels, and tight attention to exposing HTTP semantics and making them easily discoverable and easy to use. Although none of the concepts used in Web API are new or radical, Web API combines the best of previous platforms into a single framework that's highly functional, easy to work with, and extensible to boot. I think Microsoft has hit a home run with Web API.

Related Resources

Where does ASP.NET Web API fit?
Sample Source Code on GitHub
Passing multiple POST parameters to Web API Controller Methods
Mapping UrlEncoded POST Values in ASP.NET Web API
Creating a JSONP Formatter for ASP.NET Web API
Removing the XML Formatter from ASP.NET Web API Applications

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in Web Api


  • Oracle Support Master Note for Troubleshooting Advanced Queuing and Oracle Streams Propagation Issues (Doc ID 233099.1)

    - by faye.todd(at)oracle.com
Master Note for Troubleshooting Advanced Queuing and Oracle Streams Propagation Issues (Doc ID 233099.1) Copyright (c) 2010, Oracle Corporation. All Rights Reserved.

In this Document:
Purpose
Last Review Date
Instructions for the Reader
Troubleshooting Details
  1. Scope and Application
  2. Definitions and Classifications
  3. How to Use This Guide
  4. Basic AQ Propagation Troubleshooting
  5. Additional Troubleshooting Steps for AQ Propagation of User-Enqueued and Dequeued Messages
  6. Additional Troubleshooting Steps for Propagation in an Oracle Streams Environment
  7. Performance Issues
References

Applies to: Oracle Server - Enterprise Edition - Version: 8.1.7.0 to 11.2.0.2 - Release: 8.1.7 to 11.2. Information in this document applies to any platform.

Purpose

This document presents a step-by-step methodology for troubleshooting and resolving problems with Advanced Queuing Propagation in both Streams and basic Advanced Queuing environments. It also serves as a master reference for other more specific notes on Oracle Streams Propagation and Advanced Queuing Propagation issues.

Last Review Date: December 20, 2010

Instructions for the Reader

A Troubleshooting Guide is provided to assist in debugging a specific issue. When possible, diagnostic tools are included in the document to assist in troubleshooting.

Troubleshooting Details

1. Scope and Application

This note is intended for Database Administrators of Oracle databases where issues are being encountered with propagating messages between advanced queues, whether the queues are used for user-created messaging systems or for Oracle Streams. It contains troubleshooting steps and links to notes for further problem resolution. It can also be used as a template to document a problem when it is necessary to engage Oracle Support Services. Knowing what is NOT happening can frequently speed up the resolution process by focusing solely on the pertinent problem area. This guide is divided into six parts:

Section 2: Definitions and Classifications (discusses the different types and features of propagations possible - helpful for understanding the rest of the guide)
Section 3: How to Use This Guide (a starting point for determining the scope of the problem and which sections to consult)
Section 4: Basic AQ propagation troubleshooting (applies to both AQ propagation of user-enqueued and dequeued messages as well as Oracle Streams propagations)
Section 5: Additional troubleshooting steps for AQ propagation of user-enqueued and dequeued messages
Section 6: Additional troubleshooting steps for Oracle Streams propagation
Section 7: Performance issues

2. Definitions and Classifications

Given the potential scope of issues that can be encountered with AQ propagation, the first recommended step is to do some basic diagnosis to determine the type of problem that is being encountered.

2.1. What Type of Propagation is Being Used?

2.1.1. Buffered Messaging

For an advanced queue, messages can be maintained on disk (persistent messaging) or in memory (buffered messaging). To determine if a queue is buffered or not, reference the GV_$BUFFERED_QUEUES view. If the queue does not appear in this view, it is persistent.
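A quick check might look like this (a sketch; run as a user with access to the GV$ views):

-- Buffered queues currently known to the instance(s);
-- a queue absent from this list holds persistent messages
select QUEUE_SCHEMA, QUEUE_NAME from GV$BUFFERED_QUEUES;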
2.1.2. Propagation Mode - queue-to-dblink vs queue-to-queue

As of 10.2, an AQ propagation can also be defined as queue-to-dblink or queue-to-queue:

queue-to-dblink: The propagation delivers messages or events from the source queue to all subscribing queues at the destination database identified by the dblink. A single propagation schedule is used to propagate messages to all subscribing queues. Hence any changes made to this schedule will affect message delivery to all the subscribing queues. This mode does not support multiple propagations from the same source queue to the same target database.

queue-to-queue: Added in 10.2, this propagation mode delivers messages or events from the source queue to a specific destination queue identified on the database link. This allows the user to have fine-grained control over the propagation schedule for message delivery. This new propagation mode also supports transparent failover when propagating to a destination Oracle RAC system. With queue-to-queue propagation, you are no longer required to re-point a database link if the owner instance of the queue fails on Oracle RAC. This mode supports multiple propagations to the same target database if the target queues are different.

The default is queue-to-dblink. To verify whether queue-to-queue propagation is being used, in non-Streams environments query DBA_QUEUE_SCHEDULES.DESTINATION - if a remote queue is listed along with the remote database link, then queue-to-queue propagation is being used. For Streams environments, the DBA_PROPAGATION.QUEUE_TO_QUEUE column can be checked. See the following note for a method to switch between the two modes:
Document 827473.1 How to alter propagation from queue-to-queue to queue-to-dblink

2.1.3. Combined Capture and Apply (CCA) for Streams

In 11g Oracle Streams environments, an optimization called Combined Capture and Apply (CCA) is implemented by default when possible. Although a propagation is configured in this case, Streams does not use it; instead, it passes information directly from capture to an apply receiver. To see if CCA is in use:

COLUMN CAPTURE_NAME HEADING 'Capture Name' FORMAT A30
COLUMN OPTIMIZATION HEADING 'CCA Mode?' FORMAT A10
SELECT CAPTURE_NAME, DECODE(OPTIMIZATION, 0, 'No', 'Yes') OPTIMIZATION
FROM V$STREAMS_CAPTURE;

Also, see the following note:
Document 463820.1 Streams Combined Capture and Apply in 11g

2.2. Queue Table Compatibility

There are three types of queue table compatibility. In more recent databases, queue tables may be present in all three modes of compatibility:

8.0 - earliest version, deprecated from 10.2 onwards
8.1 - support added for RAC, asynchronous notification, secure queues, queue-level access control, rule-based subscribers, separate storage of history information
10.0 - if the database is in 10.1-compatible mode, then the default value for queue table compatibility is 10.0

2.3. Single vs Multiple Consumer Queue Tables

If more than one recipient can dequeue a message from a queue, then its queue table is multiple consumer. You can propagate messages from a multiple-consumer queue to a single-consumer queue. Propagation from a single-consumer queue to a multiple-consumer queue is not possible.

3. How to Use This Guide

3.1. Are Messages Being Propagated at All, or is the Propagation Just Slow?

Run the following query on the source database for the propagation (assuming that it is running):

select TOTAL_NUMBER from DBA_QUEUE_SCHEDULES where QNAME='<source_queue_name>';

If TOTAL_NUMBER is increasing, then propagation is most likely functioning, although it may be slow. For performance issues, see Section 7.

3.2. Propagation Between Persistent User-Created Queues

See Sections 4 and 5 (and optionally Section 7 if performance is an issue).
3.3. Propagation Between Buffered User-Created Queues

See Sections 4, 5, and 6 (and optionally Section 7 if performance is an issue).

3.4. Propagation Between Oracle Streams Queues (without Combined Capture and Apply (CCA) Optimization)

See Sections 4 and 6 (and optionally Section 7 if performance is an issue).

3.5. Propagation Between Oracle Streams Queues (with Combined Capture and Apply (CCA) Optimization)

Although an AQ propagation is not used directly in this case, some characteristics of the message transfer are inferred from the propagation parameters used. Some parts of Sections 4 and 6 still apply.

3.6. Messaging Gateway Propagations

This note does not apply to Messaging Gateway propagations.

4. Basic AQ Propagation Troubleshooting

4.1. Double-check Your Code

Make sure that you are consistent in your usage of database link names, queue names, etc. It may be useful to plot a diagram of which queues are connected via which database links to make sure that the logical structure is correct.

4.2. Verify that Job Queue Processes are Running

4.2.1. Versions 10.2 and Lower - DBMS_JOB Package

For versions 10.2 and lower, a scheduled propagation is managed by the DBMS_JOB package. The propagation is performed by job queue background processes, so we need to verify that there are sufficient processes available for the propagation. We should have at least 4 job queue processes running, and preferably more, depending on the number of other jobs running in the database. Note that for AQ-specific work, AQ will only ever use half of the job queue processes available. An issue caused by an inadequate JOB_QUEUE_PROCESSES parameter setting is described in the following note:
Document 298015.1 Kwqjswproc:Excep After Loop: Assigning To Self

4.2.1.1. Job Queue Processes in Initialization Parameter File

The parameter JOB_QUEUE_PROCESSES in the init.ora/spfile should be > 0. The value can be changed dynamically via

connect / as sysdba
alter system set JOB_QUEUE_PROCESSES=10;

4.2.1.2. Job Queue Processes in Memory

The following command will show how many job queue processes are currently in use by this instance (this may be different from what is in the init.ora/spfile):

connect / as sysdba
show parameter job;

4.2.1.3. OS PIDs Corresponding to Job Queue Processes

Identify the operating system process ids (SPIDs) of job queue processes involved in propagation via

select p.SPID, p.PROGRAM
from V$PROCESS p, DBA_JOBS_RUNNING jr, V$SESSION s, DBA_JOBS j
where s.SID=jr.SID
and s.PADDR=p.ADDR
and jr.JOB=j.JOB
and j.WHAT like '%sys.dbms_aqadm.aq$_propaq(job)%';

and these SPIDs can be used to check at the operating system level that they exist. In 8i a job queue process will have a name similar to ora_snp1_<instance_name>. In 9i onwards you will see a coordinator process, ora_cjq0_<instance_name>, and multiple slave processes, ora_jnnn_<instance_name>, where nnn is an integer between 1 and 999.

4.2.2. Version 11.1 and Above - Oracle Scheduler

In version 11.1 and above, Oracle Scheduler is used to perform AQ and Streams propagations. Oracle Scheduler automatically tunes the number of slave processes for these jobs based on the load on the computer system, and the JOB_QUEUE_PROCESSES initialization parameter is only used to specify the maximum number of slave processes. Therefore, the JOB_QUEUE_PROCESSES initialization parameter does not need to be set (it defaults to a very high number), unless you want to limit the number of slaves that can be created.
If JOB_QUEUE_PROCESSES = 0, no propagation jobs will run. See the following note for a discussion of Oracle Streams 11g and Oracle Scheduler:
Document 1083608.1 11g Streams and Oracle Scheduler

4.2.2.1. Job Queue Processes in Initialization Parameter File

The parameter JOB_QUEUE_PROCESSES in the init.ora/spfile should be > 0, and preferably be left at its default value. The value can be changed dynamically via

connect / as sysdba
alter system set JOB_QUEUE_PROCESSES=10;

To set the JOB_QUEUE_PROCESSES parameter to its default value, run:

connect / as sysdba
alter system reset JOB_QUEUE_PROCESSES;

and then bounce the instance.

4.2.2.2. Job Queue Processes in Memory

The following command will show how many job queue processes are currently in use by this instance (this may be different from what is in the init.ora/spfile):

connect / as sysdba
show parameter job;

4.2.2.3. OS PIDs Corresponding to Job Queue Processes

Identify the operating system process ids (SPIDs) of job queue processes involved in propagation via

col PROGRAM for a30
select p.SPID, p.PROGRAM, j.JOB_NAME
from V$PROCESS p, DBA_SCHEDULER_RUNNING_JOBS jr, V$SESSION s, DBA_SCHEDULER_JOBS j
where s.SID=jr.SESSION_ID
and s.PADDR=p.ADDR
and jr.JOB_NAME=j.JOB_NAME
and j.JOB_NAME like '%AQ_JOB$_%';

and these SPIDs can be used to check at the operating system level that they exist. You will see a coordinator process, ora_cjq0_<instance_name>, and multiple slave processes, ora_jnnn_<instance_name>, where nnn is an integer between 1 and 999.

4.3. Check the Alert Log and Any Associated Trace Files

The first place to check for propagation failures is the alert logs at all sites (local and, if relevant, all remote sites). When a job queue process attempts to execute a schedule and fails, it will always write an error stack to the alert log. This error stack will also be written to a job queue process trace file, which is written to the BACKGROUND_DUMP_DEST location for 10.2 and below, and to the DIAGNOSTIC_DEST location for 11g. The fact that errors are written to the alert log demonstrates that the schedule is executing, which means the problem could be with the setup of the schedule itself. Consider the following example:
Thu Feb 14 10:40:05 2002
Propagation Schedule for (AQADM.MULTIPLEQ, SHANE816.WORLD) encountered following error:
ORA-04052: error occurred when looking up Remote object AQADM.MULTIPLEQ@SHANE816.WORLD
ORA-00604: error occurred at recursive SQL level 4
ORA-02068: following severe error from SHANE816
ORA-03114: not connected to ORACLE
ORA-06512: at "SYS.DBMS_AQADM_SYS", line 4770
ORA-06512: at "SYS.DBMS_AQADM", line 548
ORA-06512: at line 1

Here the ORA-02068 demonstrates that the failure was at the remote site. Further investigation revealed that the remote database was not open, hence the ORA-03114 error. Starting the database resolved the problem.

Other potential errors that may be written to the alert log can be found in the following notes:

Document 827184.1 AQ Propagation with CLOB data types Fails with ORA-22990 (11.1)
Document 846297.1 AQ Propagation Fails : ORA-00600[kope2upic2954] or Ora-00600[Kghsstream_copyn] (10.2, 11.1)
Document 731292.1 ORA-25215 Reported on Local Propagation When Using Transformation with ANYDATA queue tables (10.2, 11.1, 11.2)
Document 365093.1 ORA-07445 [kwqppay2aqe()+7360] Reported on Propagation of a Transformed Message (10.1, 10.2)
Document 219416.1 Advanced Queuing Propagation Fails with ORA-22922 (9.0)
Document 1203544.1 AQ Propagation Aborted with ORA-600 [ociksin: invalid status] on SYS.DBMS_AQADM_SYS.AQ$_PROPAGATION_PROCEDURE After Upgrade (11.1, 11.2)
Document 1087324.1 ORA-01405 ORA-01422 reported by Advanced Queuing Propagation schedules after RAC reconfiguration (10.2)
Document 1079577.1 Advanced Queuing Propagation Fails With "ORA-22370 incorrect usage of method" (9.2, 10.2, 11.1, 11.2)
Document 332792.1 ORA-04061 error relating to SYS.DBMS_PRVTAQIP reported when setting up Statspack (8.1, 9.0, 9.2, 10.1)
Document 353325.1 ORA-24056: Internal inconsistency for QUEUE <queue_name> and destination <dblink> (8.1, 9.0, 9.2, 10.1, 10.2, 11.1, 11.2)
Document 787367.1 ORA-22275 reported on Propagating Messages with LOB component when propagating between 10.1 and 10.2 (10.1, 10.2)
Document 566622.1 ORA-22275 when propagating >4K AQ$_JMS_TEXT_MESSAGEs from 9.2.0.8 to 10.2.0.1 (9.2, 10.1)
Document 731539.1 ORA-29268: HTTP client error 401 Unauthorized Error when the AQ Servlet attempts to Propagate a message via HTTP (9.0, 9.2, 10.1, 10.2, 11.1)
Document 253131.1 Concurrent Writes May Corrupt LOB Segment When Using Auto Segment Space Management (ORA-1555) (9.2)
Document 118884.1 How to unschedule a propagation schedule stuck in pending state
Document 222992.1 DBMS_AQADM.DISABLE_PROPAGATION_SCHEDULE Returns ORA-24082
Document 282987.1 Propagated Messages marked UNDELIVERABLE after Drop and Recreate Of Remote Queue
Document 1204080.1 AQ Propagation Failing With ORA-25329 After Upgraded From 8i or 9i to 10g or 11g
Document 1233675.1 AQ Propagation stops after upgrade to 11.2.0.1 ORA-30757

4.3.1. Errors Related to Incorrect Network Configuration

The most common propagation errors result from an incorrect network configuration. The list below contains common errors caused by the tnsnames.ora file or database links being configured incorrectly:

ORA-12154: TNS:could not resolve service name
ORA-12505: TNS:listener does not currently know of SID given in connect descriptor
ORA-12514: TNS:listener could not resolve SERVICE_NAME
ORA-12541: TNS:no listener
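When one of these errors appears, the alias used by the database link can be checked from the operating system before touching the database at all (a quick sanity check; tnsping ships with the Oracle client/server install):

tnsping ORCL102B.WORLD

If tnsping cannot resolve the alias or reach the listener, the propagation failure is a network configuration issue rather than an AQ issue.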
4.4. Check the Database Links Exist and are Functioning Correctly

For schedules to remote databases, confirm the database link exists via:

SQL> col DBLINK for a45
SQL> select QNAME, NVL(REGEXP_SUBSTR(DESTINATION, '[^@]+', 1, 2), DESTINATION) DBLINK
  2  from DBA_QUEUE_SCHEDULES
  3  where MESSAGE_DELIVERY_MODE = 'PERSISTENT';

QNAME                          DBLINK
------------------------------ ---------------------------------------------
MY_QUEUE                       ORCL102B.WORLD

Connect as the owner of the link and select across it to verify that it works and connects to the database we expect, i.e.

select * from ALL_QUEUES@ORCL102B.WORLD;

You need to ensure that the userid that scheduled the propagation (using DBMS_AQADM.SCHEDULE_PROPAGATION, or DBMS_PROPAGATION_ADM.CREATE_PROPAGATION if using Streams) has access to the database link for the destination.

4.5. Has Propagation Been Correctly Scheduled?

Check that the propagation schedule has been created and that a job queue process has been assigned. Look for the entry in DBA_QUEUE_SCHEDULES and SYS.AQ$_SCHEDULES for your schedule. For 10g and below, check that it has a JOBNO entry in SYS.AQ$_SCHEDULES, and that there is an entry in DBA_JOBS with that JOBNO. For 11g and above, check that the schedule has a JOB_NAME entry in SYS.AQ$_SCHEDULES, and that there is an entry in DBA_SCHEDULER_JOBS with that JOB_NAME. Check the destination is as intended and spelled correctly.

SQL> select SCHEMA, QNAME, DESTINATION, SCHEDULE_DISABLED, PROCESS_NAME from DBA_QUEUE_SCHEDULES;

SCHEMA  QNAME      DESTINATION        S PROCESS
------- ---------- ------------------ - -------
AQADM   MULTIPLEQ  AQ$_LOCAL          N J000

AQ$_LOCAL in the DESTINATION column shows that the queue to which we are propagating is in the same database as the source queue. If the propagation were to a remote (different) database, a database link would be in the DESTINATION column. The entry in the SCHEDULE_DISABLED column, N, means that the schedule is NOT disabled. If Y (yes) appears in this column, propagation is disabled and the schedule will not be executed. If not using Oracle Streams, propagation should resume once you have enabled the schedule by invoking DBMS_AQADM.ENABLE_PROPAGATION_SCHEDULE (for 10.2 Oracle Streams and above, the DBMS_PROPAGATION_ADM.START_PROPAGATION procedure should be used). The PROCESS_NAME is the name of the job queue process currently allocated to execute the schedule. This process is allocated dynamically at execution time. If the PROCESS_NAME column is null (empty), the schedule is not currently executing. You may need to execute this statement a number of times to verify whether a process is being allocated. If a process is at some time allocated to the schedule, it is attempting to execute.

SQL> select SCHEMA, QNAME, LAST_RUN_DATE, NEXT_RUN_DATE from DBA_QUEUE_SCHEDULES;

SCHEMA  QNAME      LAST_RUN_DATE            NEXT_RUN_DATE
------- ---------- ------------------------ ------------------------
AQADM   MULTIPLEQ  13-FEB-2002 13:18:57     13-FEB-2002 13:20:30

In 11g, these dates are expressed in TIMESTAMP WITH TIME ZONE datatypes. If the NEXT_RUN_DATE and NEXT_RUN_TIME columns are null when this statement is executed, the scheduled propagation is currently in progress. If they never change, it would suggest that the schedule itself is never executing. If the next scheduled execution is too far away, change the NEXT_TIME parameter of the schedule so that schedules are executed more frequently (assuming that the window is not set to be infinite). Parameters of a schedule can be changed using the DBMS_AQADM.ALTER_PROPAGATION_SCHEDULE call.
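For example, to have the schedule wake up more often, a call like the following could be used (a sketch; the queue, destination and timing values are illustrative only):

begin
  dbms_aqadm.alter_propagation_schedule(
    queue_name  => 'AQADM.MULTIPLEQ',
    destination => 'SHANE816.WORLD',
    duration    => 300,                    -- stay active for 5 minutes per window
    next_time   => 'SYSDATE + 300/86400',  -- restart 5 minutes after each window ends
    latency     => 10);                    -- poll for new messages every 10 seconds
end;
/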
In 10g and below, scheduling propagation posts a job in the DBA_JOBS view. The columns are more or less the same as in DBA_QUEUE_SCHEDULES, so you just need to recognize the job and verify that it exists.

SQL> select JOB, WHAT from DBA_JOBS where WHAT like '%sys.dbms_aqadm.aq$_propaq(job)%';

JOB  WHAT
---- -----------------
720  next_date := sys.dbms_aqadm.aq$_propaq(job);

For 11g, scheduling propagation posts a job in DBA_SCHEDULER_JOBS instead:

SQL> select JOB_NAME from DBA_SCHEDULER_JOBS where JOB_NAME like 'AQ_JOB$_%';

JOB_NAME
------------------------------
AQ_JOB$_41

If no job exists, check DBA_QUEUE_SCHEDULES to make sure that the schedule has not been disabled. For 10g and below, the job number is dynamic for AQ propagation schedules. The procedure that is executed to expedite a propagation schedule runs, removes itself from DBA_JOBS, and then reposts a new job for the next scheduled propagation. The job number should therefore always increment unless the schedule has been set up to run indefinitely.

4.6. Is the Schedule Executing but Failing to Complete?

Run the following query:

SQL> select FAILURES, LAST_ERROR_MSG from DBA_QUEUE_SCHEDULES;

FAILURES  LAST_ERROR_MSG
--------- -----------------------
1         ORA-25207: enqueue failed, queue AQADM.INQ is disabled from enqueueing
          ORA-02063: preceding line from SHANE816

The FAILURES column shows how many times we have attempted to execute the schedule and failed. Oracle will attempt to execute the schedule 16 times, after which it will be removed from the DBA_JOBS or DBA_SCHEDULER_JOBS view and the schedule will become disabled. The column DBA_QUEUE_SCHEDULES.SCHEDULE_DISABLED will show 'Y'. For 11g and above, the DBA_SCHEDULER_JOBS.STATE column will show 'BROKEN' for the job corresponding to DBA_QUEUE_SCHEDULES.JOB_NAME. Prior to 10g the back-off algorithm for failures was exponential, whereas from 10g onwards it is linear. The propagation will become disabled on the 17th attempt. Only the last execution failure will be reflected in the LAST_ERROR_MSG column. That is, if the schedule fails 5 times for 5 different reasons, only the last set of errors will be recorded in DBA_QUEUE_SCHEDULES. Any errors need to be resolved to allow propagation to continue. If propagation has also become disabled due to 17 failures, first resolve the reason for the error and then re-enable the schedule using the DBMS_AQADM.ENABLE_PROPAGATION_SCHEDULE procedure, or DBMS_PROPAGATION_ADM.START_PROPAGATION if using 10.2 or above Oracle Streams. As soon as the schedule executes successfully, the error message entries will be deleted. Oracle does not keep a history of past failures. However, when using Oracle Streams, the errors will be retained in the DBA_PROPAGATION view even after the schedule resumes successfully. See the following note for instructions on how to clear the errors from the DBA_PROPAGATION view:
Document 808136.1 How to clear the old errors from DBA_PROPAGATION view?

If a schedule is active and no errors are being reported, then the source queue may not have any messages to be propagated.

4.7. Do the Propagation Notification Queue Table and Queue Exist?

Check to see that the propagation notification queue table and queue exist and are enabled for enqueue and dequeue. Propagation makes use of the propagation notification queue for handling propagation run-time events, and the messages in this queue are stored in a SYS-owned queue table. This queue should never be stopped or dropped, and the corresponding queue table should never be dropped.
10g and below

The propagation notification queue table is of the format SYS.AQ$_PROP_TABLE_n, where 'n' is the RAC instance number, i.e. '1' for a non-RAC environment. This queue and queue table are created implicitly when propagation is first scheduled. If propagation has been scheduled and these objects do not exist, try unscheduling and rescheduling propagation. If they still do not exist, contact Oracle Support.

SQL> select QUEUE_TABLE from DBA_QUEUE_TABLES
  2  where QUEUE_TABLE like '%PROP_TABLE%' and OWNER = 'SYS';

QUEUE_TABLE
------------------------------
AQ$_PROP_TABLE_1

SQL> select NAME, ENQUEUE_ENABLED, DEQUEUE_ENABLED
  2  from DBA_QUEUES where OWNER='SYS'
  3  and QUEUE_TABLE like '%PROP_TABLE%';

NAME                           ENQUEUE DEQUEUE
------------------------------ ------- -------
AQ$_PROP_NOTIFY_1              YES     YES
AQ$_AQ$_PROP_TABLE_1_E         NO      NO

If the AQ$_PROP_NOTIFY_1 queue is not enabled for enqueue or dequeue, it should be enabled using DBMS_AQADM.START_QUEUE. However, the exception queue AQ$_AQ$_PROP_TABLE_1_E should not be enabled for enqueue or dequeue.

11g and above

The propagation notification queue table is of the format SYS.AQ_PROP_TABLE, and is created when the database is created. If these objects do not exist, contact Oracle Support.

SQL> select QUEUE_TABLE from DBA_QUEUE_TABLES
  2  where QUEUE_TABLE like '%PROP_TABLE%' and OWNER = 'SYS';

QUEUE_TABLE
------------------------------
AQ_PROP_TABLE

SQL> select NAME, ENQUEUE_ENABLED, DEQUEUE_ENABLED
  2  from DBA_QUEUES where OWNER='SYS'
  3  and QUEUE_TABLE like '%PROP_TABLE%';

NAME                           ENQUEUE DEQUEUE
------------------------------ ------- -------
AQ_PROP_NOTIFY                 YES     YES
AQ$_AQ_PROP_TABLE_E            NO      NO

If the AQ_PROP_NOTIFY queue is not enabled for enqueue or dequeue, it should be enabled using DBMS_AQADM.START_QUEUE. However, the exception queue AQ$_AQ_PROP_TABLE_E should not be enabled for enqueue or dequeue.

4.8. Does the Remote Queue Exist and is it Enabled for Enqueueing?

Check that the remote queue the propagation is transferring messages to exists and is enabled for enqueue:

SQL> select DESTINATION from USER_QUEUE_SCHEDULES where QNAME = 'OUTQ';

DESTINATION
-----------------------------------------------------------------------------
"AQADM"."INQ"@M2V102.ES

SQL> select OWNER, NAME, ENQUEUE_ENABLED, DEQUEUE_ENABLED from ALL_QUEUES@M2V102.ES;

OWNER    NAME   ENQUEUE DEQUEUE
-------- ------ ------- -------
AQADM    INQ    YES     YES

4.9. Do the Target and Source Database Charactersets Differ?

If a message fails to propagate, check the database charactersets of the source and target databases. Investigate whether the same message can propagate between databases with the same characterset, or whether it is only a particular combination of charactersets which causes a problem.

4.10. Check the Queue Table Type Agreement

Propagation is not possible between queue tables whose types differ in some respect. One way to determine if this is the case is to run the DBMS_AQADM.VERIFY_QUEUE_TYPES procedure for the two queues that the propagation operates on.
If the types do not agree, DBMS_AQADM.VERIFY_QUEUE_TYPES will return '0'. For AQ propagation between databases which have different NLS_LENGTH_SEMANTICS settings, propagation will not work unless the queues are Oracle Streams ANYDATA queues. See the following notes for issues caused by lack of type agreement:

Document 1079577.1 Advanced Queuing Propagation Fails With "ORA-22370: incorrect usage of method"
Document 282987.1 Propagated Messages marked UNDELIVERABLE after Drop and Recreate Of Remote Queue
Document 353754.1 Streams Messaging Propagation Fails between Single and Multi-byte Charactersets when using Character Length Semantics in the ADT

4.11. Enable Propagation Tracing

4.11.1. System Level

This is set in the init.ora/spfile as follows:

event="24040 trace name context forever, level 10"

and requires an instance restart. This event cannot be set dynamically with an alter system command until version 10.2:

SQL> alter system set events '24040 trace name context forever, level 10';

To unset the event:

SQL> alter system set events '24040 trace name context off';

Debugging information will be logged to job queue trace file(s) (jnnn) as propagation takes place. You can check the trace file for errors, and for statements indicating that messages have been sent. For the most part the trace information is understandable. This trace should also be uploaded to Oracle Support if a service request is created.

4.11.2. Attaching to a Specific Process

We can also attach to an existing job queue process that is running a propagation schedule and trace it individually using the oradebug utility, as follows:

10.2 and below

connect / as sysdba

select p.SPID, p.PROGRAM
from V$PROCESS p, DBA_JOBS_RUNNING jr, V$SESSION s, DBA_JOBS j
where s.SID=jr.SID
and s.PADDR=p.ADDR
and jr.JOB=j.JOB
and j.WHAT like '%sys.dbms_aqadm.aq$_propaq(job)%';

-- For the process id (SPID), attach to it via oradebug and generate the following trace
oradebug setospid <SPID>
oradebug unlimit
oradebug Event 10046 trace name context forever, level 12
oradebug Event 24040 trace name context forever, level 10
-- Trace the process for 5 minutes
oradebug Event 10046 trace name context off
oradebug Event 24040 trace name context off
-- The following command returns the pathname/filename of the file being written to
oradebug tracefile_name

11g

connect / as sysdba

col PROGRAM for a30
select p.SPID, p.PROGRAM, j.JOB_NAME
from V$PROCESS p, DBA_SCHEDULER_RUNNING_JOBS jr, V$SESSION s, DBA_SCHEDULER_JOBS j
where s.SID=jr.SESSION_ID
and s.PADDR=p.ADDR
and jr.JOB_NAME=j.JOB_NAME
and j.JOB_NAME like '%AQ_JOB$_%';

-- For the process id (SPID), attach to it via oradebug and generate the following trace
oradebug setospid <SPID>
oradebug unlimit
oradebug Event 10046 trace name context forever, level 12
oradebug Event 24040 trace name context forever, level 10
-- Trace the process for 5 minutes
oradebug Event 10046 trace name context off
oradebug Event 24040 trace name context off
-- The following command returns the pathname/filename of the file being written to
oradebug tracefile_name

4.11.3. Further Tracing

The previous tracing steps only trace the job queue process executing the propagation on the source.
At times it is useful to trace the propagation receiver process (the session which is enqueueing the messages into the target queue) on the target database, which is associated with the job queue process on the source database. The following queries provide ways of identifying the processes involved in propagation so that you can attach to them via oradebug to generate trace information. In order to identify the propagation receiver process, you need to execute the query as a user with privileges to access the V$ views in both the local and remote databases, so the database link must connect as a user with those privileges in the remote database. The <DBLINK> in the queries should be replaced by the appropriate database link. The queries have two forms due to differences between operating systems. The value returned by 'Rem Process' is the operating system identifier of the propagation receiver on the remote database. Once identified, this process can be attached to and traced on the remote database using the commands given in Section 4.11.2.

10.2 and below - Windows

select pl.SPID "JobQ Process", pl.PROGRAM, sr.PROCESS "Rem Process"
from V$PROCESS pl, DBA_JOBS_RUNNING jr, V$SESSION s, DBA_JOBS j, V$SESSION@<DBLINK> sr
where s.SID=jr.SID
and s.PADDR=pl.ADDR
and jr.JOB=j.JOB
and j.WHAT like '%sys.dbms_aqadm.aq$_propaq(job)%'
and pl.SPID=substr(sr.PROCESS, instr(sr.PROCESS,':')+1);

10.2 and below - Unix

select pl.SPID "JobQ Process", pl.PROGRAM, sr.PROCESS "Rem Process"
from V$PROCESS pl, DBA_JOBS_RUNNING jr, V$SESSION s, DBA_JOBS j, V$SESSION@<DBLINK> sr
where s.SID=jr.SID
and s.PADDR=pl.ADDR
and jr.JOB=j.JOB
and j.WHAT like '%sys.dbms_aqadm.aq$_propaq(job)%'
and pl.SPID=sr.PROCESS;

11g - Windows

select pl.SPID "JobQ Process", pl.PROGRAM, sr.PROCESS "Rem Process"
from V$PROCESS pl, DBA_SCHEDULER_RUNNING_JOBS jr, V$SESSION s, DBA_SCHEDULER_JOBS j, V$SESSION@<DBLINK> sr
where s.SID=jr.SESSION_ID
and s.PADDR=pl.ADDR
and jr.JOB_NAME=j.JOB_NAME
and j.JOB_NAME like '%AQ_JOB$_%'
and pl.SPID=substr(sr.PROCESS, instr(sr.PROCESS,':')+1);

11g - Unix

select pl.SPID "JobQ Process", pl.PROGRAM, sr.PROCESS "Rem Process"
from V$PROCESS pl, DBA_SCHEDULER_RUNNING_JOBS jr, V$SESSION s, DBA_SCHEDULER_JOBS j, V$SESSION@<DBLINK> sr
where s.SID=jr.SESSION_ID
and s.PADDR=pl.ADDR
and jr.JOB_NAME=j.JOB_NAME
and j.JOB_NAME like '%AQ_JOB$_%'
and pl.SPID=sr.PROCESS;

5. Additional Troubleshooting Steps for AQ Propagation of User-Enqueued and Dequeued Messages

5.1. Check the Privileges of All Users Involved

Ensure that the owner of the database link has the necessary privileges on the AQ packages.

SQL> select TABLE_NAME, PRIVILEGE from USER_TAB_PRIVS;

TABLE_NAME                     PRIVILEGE
------------------------------ ----------------------------------------
DBMS_LOCK                      EXECUTE
DBMS_AQ                        EXECUTE
DBMS_AQADM                     EXECUTE
DBMS_AQ_BQVIEW                 EXECUTE
QT52814_BUFFER                 SELECT

Note that when a queue table is created, a view called QT<nnn>_BUFFER is created in the SYS schema, and the queue table owner is given SELECT privileges on it. The <nnn> corresponds to the object_id of the associated queue table.

SQL> select * from USER_ROLE_PRIVS;

USERNAME                       GRANTED_ROLE                   ADM DEF OS_
------------------------------ ------------------------------ --- --- ---
AQ_USER1                       AQ_ADMINISTRATOR_ROLE          NO  YES NO
AQ_USER1                       CONNECT                        NO  YES NO
AQ_USER1                       RESOURCE                       NO  YES NO

It is good practice to configure a central AQ administrative user: all admin and processing jobs are created, executed and administered as this user.
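Setting up such a user might look like this (a sketch; the username and password are illustrative only):

-- Create an AQ administrator (the GRANT ... IDENTIFIED BY form creates the user if absent)
grant connect, resource, aq_administrator_role to aq_admin identified by aq_admin;
grant execute on dbms_aq to aq_admin;
grant execute on dbms_aqadm to aq_admin;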
This configuration is not mandatory, however, and the database link can be owned by any existing queue user. If this latter configuration is used, ensure that the connecting user has the necessary privileges on the AQ packages and objects involved.

Privileges for an AQ administrative user:
Execute on DBMS_AQADM
Execute on DBMS_AQ
Granted the AQ_ADMINISTRATOR_ROLE

Privileges for an AQ user:
Execute on DBMS_AQ
Execute on the message payload
Enqueue privileges on the remote queue
Dequeue privileges on the originating queue

Privileges need to be confirmed on both sites when propagation is scheduled to remote destinations. Verify that the user ID used to log in to the destination through the database link has been granted privileges to use AQ.

5.2. Verify Queue Payload Types

AQ will not propagate messages from one queue to another if the payload types of the two queues are not verified to be equivalent. An AQ administrator can verify whether the source and destination's payload types match by executing the DBMS_AQADM.VERIFY_QUEUE_TYPES procedure. The results of the type checking will be stored in the SYS.AQ$_MESSAGE_TYPES table. This table can be accessed using the object identifier (OID) of the source queue and the address (database link) of the destination queue, i.e. [schema.]queue_name[@destination]. Prior to Oracle 9i, the payload (message type) had to be the same for all the queue tables involved in propagation. From Oracle9i onwards, a transformation can be used so that payloads can be converted from one type to another. The following procedural call made on the source database can verify whether we can propagate between the source and the destination queue tables:

connect aq_user1/aq_user1@<source_database>
set serverout on

DECLARE
  rc_value number;
BEGIN
  DBMS_AQADM.VERIFY_QUEUE_TYPES(
    src_queue_name  => 'AQ_USER1.Q_1',
    dest_queue_name => 'AQ_USER2.Q_2',
    destination     => 'dbl_aq_user2.es',
    rc              => rc_value);
  dbms_output.put_line('rc_value code is '||rc_value);
END;
/

If propagation is possible, then the return code value will be 1. If it is 0, then propagation is not possible and further investigation of the types and transformations used by and in conjunction with the queue tables is required. With regard to comparison of the types, the following SQL can be used to extract the DDL for a specific type (with '%' changed appropriately on the source and target). The output can then be compared between source and target.

SET LONG 20000
set pagesize 50
EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'STORAGE', false);
SELECT DBMS_METADATA.GET_DDL('TYPE', t.type_name) from user_types t WHERE t.type_name like '%';
EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'DEFAULT');

5.3. Check Message State and Destination

The first step in this process is to identify the queue table associated with the problem source queue. Although you schedule propagation for a specific queue, most of the metadata associated with that queue is stored in the underlying queue table. The following statement finds the queue table for a given queue (note that this is a multiple-consumer queue table):
5.3. Check Message State and Destination

The first step in this process is to identify the queue table associated with the problem source queue. Although you schedule propagation for a specific queue, most of the metadata associated with that queue is stored in the underlying queue table. The following statement finds the queue table for a given queue (note that this is a multiple-consumer queue table).

SQL> select QUEUE_TABLE from DBA_QUEUES where NAME = 'MULTIPLEQ';

QUEUE_TABLE
--------------------
MULTIPLEQTABLE

For a small number of messages in a multiple-consumer queue table, the following query can be run:

SQL> select MSG_STATE, CONSUMER_NAME, ADDRESS from AQ$MULTIPLEQTABLE where QUEUE = 'MULTIPLEQ';

MSG_STATE      CONSUMER_NAME           ADDRESS
-------------- ----------------------- -------------------
READY          AQUSER2                 AQADM.INQ@M2V102.ES
READY          AQUSER1
READY          AQUSER3                 AQADM.INQ

In this example we see two messages ready to be propagated to other queues and one that is not. If the ADDRESS column is blank, the message is not scheduled for propagation and can only be dequeued from the queue on which it was enqueued. The MSG_STATE column values are discussed in Document 102330.1 Advanced Queueing MSG_STATE Values and their Interpretation. If the ADDRESS column has a value, the message has been enqueued for propagation to another queue. The first row in the example includes a database link (@M2V102.ES), so the message should be propagated to a queue at a remote database. The third row does not include a database link, so the message will be propagated to a queue that resides on the same database as the source queue. The consumer name is the intended recipient at the target queue. Note that we are not querying the base queue table directly; rather, we are querying a view that is available on top of every queue table, AQ$<queue_table_name>.

A more realistic query in an environment where the queue table contains thousands of messages is:

8.0.3-compatible multiple-consumer queue tables and all compatibility single-consumer queue tables

select count(*), MSG_STATE, QUEUE from AQ$<queue_table_name>
group by MSG_STATE, QUEUE;

8.1.3 and 10.0-compatible queue tables

select count(*), MSG_STATE, QUEUE, CONSUMER_NAME from AQ$<queue_table_name>
group by MSG_STATE, QUEUE, CONSUMER_NAME;

For multiple-consumer queue tables, if you do not see the expected CONSUMER_NAME, check the syntax of the enqueue code and verify that the recipients are declared correctly (see the enqueue sketch at the end of this section). If a recipient list is not used on enqueue, check the subscriber list in the AQ$<queue_table_name>_S view (note that a single-consumer queue table does not have a subscriber view). This view records all members of the default subscription list which were added using the DBMS_AQADM.ADD_SUBSCRIBER procedure, and also those enqueued using a recipient list.

SQL> select QUEUE, NAME, ADDRESS from AQ$MULTIPLEQTABLE_S;

QUEUE      NAME        ADDRESS
---------- ----------- -------------------
MULTIPLEQ  AQUSER2     AQADM.INQ@M2V102.ES
MULTIPLEQ  AQUSER1

In this example we have two subscribers registered with the queue: a local subscriber AQUSER1, and a remote subscriber AQUSER2 on the queue INQ, owned by AQADM, at M2V102.ES. Unless overridden with a recipient list during enqueue, every message enqueued to this queue will be propagated to INQ at M2V102.ES.

For 8.1-style and above multiple-consumer queue tables, you can also check the following information at the target:

select CONSUMER_NAME, DEQ_TXN_ID, DEQ_TIME, DEQ_USER_ID, PROPAGATED_MSGID
from AQ$<queue_table_name> where QUEUE = '<QUEUE_NAME>';

For 8.0-style queues, if the queue table supports multiple consumers, you can obtain the same information from the history column of the queue table:

select h.CONSUMER, h.TRANSACTION_ID, h.DEQ_TIME, h.DEQ_USER, h.PROPAGATED_MSGID
from AQ$<queue_table_name> t, table(t.history) h
where t.Q_NAME = '<QUEUE_NAME>';

A non-NULL TRANSACTION_ID indicates that the message was successfully propagated. Further, DEQ_TIME indicates the time of propagation, DEQ_USER indicates the userid used for propagation, and PROPAGATED_MSGID indicates the message ID of the message that was enqueued at the destination.
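When a recipient list is used, the enqueue code typically looks something like this minimal sketch; the payload type and its single VARCHAR2 attribute are hypothetical, and the consumer names mirror the example above:

DECLARE
  enq_opts  DBMS_AQ.ENQUEUE_OPTIONS_T;
  msg_props DBMS_AQ.MESSAGE_PROPERTIES_T;
  msg_id    RAW(16);
  -- hypothetical payload object type with a single VARCHAR2 attribute
  payload   AQ_USER1.Q1_PAYLOAD_TYPE := AQ_USER1.Q1_PAYLOAD_TYPE('test message');
BEGIN
  -- The recipient list overrides the default subscriber list for this message:
  -- a local consumer, and a remote consumer addressed as queue@dblink
  msg_props.recipient_list(1) := SYS.AQ$_AGENT('AQUSER1', NULL, NULL);
  msg_props.recipient_list(2) := SYS.AQ$_AGENT('AQUSER2', 'AQADM.INQ@M2V102.ES', NULL);
  DBMS_AQ.ENQUEUE(
    queue_name         => 'AQ_USER1.MULTIPLEQ',
    enqueue_options    => enq_opts,
    message_properties => msg_props,
    payload            => payload,
    msgid              => msg_id);
  COMMIT;
END;
/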
6. Additional Troubleshooting Steps for Propagation in an Oracle Streams Environment

6.1. Is the Propagation Enabled?

For a propagation job to propagate messages, the propagation must be enabled. For Streams, a special view called DBA_PROPAGATION exists to convey information about Streams propagations. If messages are not being propagated by a propagation as expected, then the propagation might not be enabled. To query for this:

SELECT p.PROPAGATION_NAME,
       DECODE(s.SCHEDULE_DISABLED, 'Y', 'Disabled', 'N', 'Enabled') SCHEDULE_DISABLED,
       s.PROCESS_NAME, s.FAILURES, s.LAST_ERROR_MSG
FROM DBA_QUEUE_SCHEDULES s, DBA_PROPAGATION p
WHERE p.DESTINATION_DBLINK = NVL(REGEXP_SUBSTR(s.DESTINATION, '[^@]+', 1, 2), s.DESTINATION)
  AND s.SCHEMA = p.SOURCE_QUEUE_OWNER
  AND s.QNAME = p.SOURCE_QUEUE_NAME
  AND MESSAGE_DELIVERY_MODE = 'PERSISTENT'
order by PROPAGATION_NAME;

At times, the propagation job may become "broken" or fail to start after an error has been encountered or after a database restart. If an error is indicated by the above query, an attempt can be made to disable the propagation and then re-enable it. In the examples below, for the propagation named STRMADMIN_PROPAGATE, where the queue name is STREAMS_QUEUE owned by STRMADMIN and the destination database link is ORCL2.WORLD, the commands would be:

10.2 and above

exec dbms_propagation_adm.stop_propagation('STRMADMIN_PROPAGATE');
exec dbms_propagation_adm.start_propagation('STRMADMIN_PROPAGATE');

If the above does not fix the problem, stop the propagation specifying the force parameter (the second parameter of stop_propagation) as TRUE:

exec dbms_propagation_adm.stop_propagation('STRMADMIN_PROPAGATE', true);
exec dbms_propagation_adm.start_propagation('STRMADMIN_PROPAGATE');

The statistics for the propagation, as well as any old error messages, are cleared when the force parameter is set to TRUE. Therefore, if the propagation schedule is stopped with FORCE set to TRUE and upon restart there is still an error message in DBA_PROPAGATION, then the error message is current.

9.2 or 10.1

exec dbms_aqadm.disable_propagation_schedule('STRMADMIN.STREAMS_QUEUE','ORCL2.WORLD');
exec dbms_aqadm.enable_propagation_schedule('STRMADMIN.STREAMS_QUEUE','ORCL2.WORLD');

If the above does not fix the problem, unschedule the propagation and then schedule it again:

exec dbms_aqadm.unschedule_propagation('STRMADMIN.STREAMS_QUEUE','ORCL2.WORLD');
exec dbms_aqadm.schedule_propagation('STRMADMIN.STREAMS_QUEUE','ORCL2.WORLD');

Typically, if the error from the first query in Section 6.1 recurs after restarting the propagation as shown above, further troubleshooting of the error is needed.

6.2. Check Propagation Rule Sets and Transformations

Inspect the configuration of the rules in the rule set that is associated with the propagation process to make sure that they evaluate to TRUE as expected. If not, the object or schema will not be propagated. Remember that when a negative rule evaluates to TRUE, the specified object or schema will not be propagated; a sketch of how such a rule is typically created follows.
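In this sketch the schema HR and the source database ORCL1.WORLD are hypothetical, and the propagation and queue names reuse the example above; inclusion_rule => FALSE is what places the generated rules in the negative rule set:

BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
    schema_name            => 'HR',
    streams_name           => 'STRMADMIN_PROPAGATE',
    source_queue_name      => 'STRMADMIN.STREAMS_QUEUE',
    destination_queue_name => 'STRMADMIN.STREAMS_QUEUE@ORCL2.WORLD',
    include_dml            => TRUE,
    include_ddl            => TRUE,
    source_database        => 'ORCL1.WORLD',   -- hypothetical source
    inclusion_rule         => FALSE);          -- FALSE = add to the negative rule set
END;
/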
Finally, inspect any rule-based transformations that are implemented with propagation to make sure they are changing the data in the intended way.

The following query shows which rule sets are assigned to a propagation:

select PROPAGATION_NAME,
       RULE_SET_OWNER||'.'||RULE_SET_NAME "Positive Rule Set",
       NEGATIVE_RULE_SET_OWNER||'.'||NEGATIVE_RULE_SET_NAME "Negative Rule Set"
from DBA_PROPAGATION;

The next two queries list the propagation rules and their conditions. The first is for the positive rule set, the second for the negative rule set:

set long 4000
select rsr.RULE_SET_OWNER||'.'||rsr.RULE_SET_NAME RULE_SET,
       rsr.RULE_OWNER||'.'||rsr.RULE_NAME RULE_NAME,
       r.RULE_CONDITION CONDITION
from DBA_RULE_SET_RULES rsr, DBA_RULES r
where rsr.RULE_NAME = r.RULE_NAME
  and rsr.RULE_OWNER = r.RULE_OWNER
  and RULE_SET_NAME in (select RULE_SET_NAME from DBA_PROPAGATION)
order by rsr.RULE_SET_OWNER, rsr.RULE_SET_NAME;

set long 4000
select c.PROPAGATION_NAME,
       rsr.RULE_SET_OWNER||'.'||rsr.RULE_SET_NAME RULE_SET,
       rsr.RULE_OWNER||'.'||rsr.RULE_NAME RULE_NAME,
       r.RULE_CONDITION CONDITION
from DBA_RULE_SET_RULES rsr, DBA_RULES r, DBA_PROPAGATION c
where rsr.RULE_NAME = r.RULE_NAME
  and rsr.RULE_OWNER = r.RULE_OWNER
  and rsr.RULE_SET_OWNER = c.NEGATIVE_RULE_SET_OWNER
  and rsr.RULE_SET_NAME = c.NEGATIVE_RULE_SET_NAME
  and rsr.RULE_SET_NAME in (select NEGATIVE_RULE_SET_NAME from DBA_PROPAGATION)
order by rsr.RULE_SET_OWNER, rsr.RULE_SET_NAME;

6.3. Determining the Total Number of Messages and Bytes Propagated

As in Section 3.1, determining whether messages are flowing can show whether the propagation is entirely hung or just slow. If the propagation is not in flow control (see Section 6.5.2) but the statistics are incrementing slowly, there may be a performance issue. For Streams implementations, two views can assist with this by showing the number of messages sent by a propagation, as well as the number of acknowledgements being returned from the target site: the V$PROPAGATION_SENDER view at the source site and the V$PROPAGATION_RECEIVER view at the destination site. It is helpful to query both to determine whether messages are being delivered to the target. Look for the statistics to increase.

Source:

select QUEUE_SCHEMA, QUEUE_NAME, DBLINK, HIGH_WATER_MARK, ACKNOWLEDGEMENT, TOTAL_MSGS, TOTAL_BYTES
from V$PROPAGATION_SENDER;

Target:

select SRC_QUEUE_SCHEMA, SRC_QUEUE_NAME, SRC_DBNAME, DST_QUEUE_SCHEMA, DST_QUEUE_NAME,
       HIGH_WATER_MARK, ACKNOWLEDGEMENT, TOTAL_MSGS
from V$PROPAGATION_RECEIVER;

6.4. Check Buffered Subscribers

The V$BUFFERED_SUBSCRIBERS view displays information about subscribers for all buffered queues in the instance. This view can be queried to make sure that the site being propagated to is listed as a subscriber address for the site being propagated from:

select QUEUE_SCHEMA, QUEUE_NAME, SUBSCRIBER_ADDRESS from V$BUFFERED_SUBSCRIBERS;

The SUBSCRIBER_ADDRESS column will not be populated when the propagation is local (between queues on the same database).

6.5. Common Streams Propagation Errors

6.5.1. ORA-02082: A loopback database link must have a connection qualifier.

This error can occur if you use the Streams Setup Wizard in Oracle Enterprise Manager without first configuring the GLOBAL_NAME for your database.
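A quick check, and the usual fix, is sketched below; the global name shown is hypothetical and should be replaced with a unique name for your database:

-- Check the current global name
select * from GLOBAL_NAME;

-- Set a proper, unique global name before running the wizard
alter database rename global_name to ORCL1.WORLD;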
6.5.2. ORA-25307: Enqueue rate too high. Enable flow control

DBA_QUEUE_SCHEDULES displays this informational message for a propagation when automatic flow control (a 10g feature of Streams) has been invoked. Similar to Streams capture processes, a Streams propagation process can also go into a state of 'flow control'. This is an informative message indicating that flow control has been automatically enabled to reduce the rate at which messages are being enqueued into the target queue. This typically occurs when the target site is unable to keep up with the rate of messages flowing from the source site. Other than checking that the apply process is running normally on the target site, usually no action is required by the DBA. Propagation and the capture process will resume automatically when the target site is able to accept more messages.

The following document contains more information:
Document 302109.1 Streams Propagation Error: ORA-25307 Enqueue rate too high. Enable flow control

See the following document for one potential cause of this situation:
Document 1097115.1 Oracle Streams Apply Reader is in 'Paused' State

6.5.3. ORA-25315: unsupported configuration for propagation of buffered messages

This error typically occurs when the target database is RAC and usually indicates that an attempt was made to propagate buffered messages with the database link pointing to an instance in the destination database which is not the owner instance of the destination queue. To resolve the problem, use queue-to-queue propagation for buffered messages (see the sketch at the end of this section).

6.5.4. ORA-600 [KWQBMCRCPTS101] after dropping / recreating propagation

For causes and fixes refer to:
Document 421237.1 ORA-600 [KWQBMCRCPTS101] reported by a Qmon slave process after dropping a Streams Propagation

6.5.5. Stopping or Dropping a Streams Propagation Hangs

See the following note:
Document 1159787.1 Troubleshooting Streams Propagation When It is Not Functioning and Attempts to Stop It Hang
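For the ORA-25315 case in 6.5.3, a queue-to-queue propagation can be created as in this minimal sketch; the propagation, queue, and database link names reuse the examples above and are otherwise hypothetical, and queue_to_queue => TRUE is the significant part:

BEGIN
  DBMS_PROPAGATION_ADM.CREATE_PROPAGATION(
    propagation_name   => 'STRMADMIN_PROPAGATE',
    source_queue       => 'STRMADMIN.STREAMS_QUEUE',
    destination_queue  => 'STRMADMIN.STREAMS_QUEUE',
    destination_dblink => 'ORCL2.WORLD',
    queue_to_queue     => TRUE);  -- deliver to the owner instance of the destination queue
END;
/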
6.6. Streams Propagation-Related Notes for Common Issues

Document 437838.1 Streams Specific Patches
Document 749181.1 How to Recover Streams After Dropping Propagation
Document 368912.1 Queue to Queue Propagation Schedule encountered ORA-12514 in a RAC environment
Document 564649.1 ORA-02068/ORA-03114/ORA-03113 Errors From Streams Propagation Process - Remote Database is Available and Unschedule/Reschedule Does Not Resolve
Document 553017.1 Stream Propagation Process Errors Ora-4052 Ora-6554 From 11g To 10201
Document 944846.1 Streams Propagation Fails Ora-7445 [kohrsmc]
Document 745601.1 ORA-23603 'STREAMS enqueue aborted due to low SGA' Error from Streams Propagation, and V$STREAMS_CAPTURE.STATE Hanging on 'Enqueuing Message'
Document 333068.1 ORA-23603: Streams Enqueue Aborted Due To Low SGA
Document 363496.1 Ora-25315 Propagating on RAC Streams
Document 368237.1 Unable to Unschedule Propagation. Streams Queue is Invalid
Document 436332.1 dbms_propagation_adm.stop_propagation hangs
Document 727389.1 Propagation Fails With ORA-12528
Document 730911.1 ORA-4063 Is Reported After Dropping Negative Prop. Ruleset
Document 460471.1 Propagation Blocked by Qmon Process - Streams_queue_table / 'library cache lock' waits
Document 1165583.1 ORA-600 [kwqpuspse0-ack] In Streams Environment
Document 1059029.1 Combined Capture and Apply (CCA): Capture aborts: ORA-1422 after schedule_propagation
Document 556309.1 Changing Propagation / queue_to_queue: false -> true does not work; no LCRs propagated
Document 839568.1 Propagation failing with error: ORA-01536: space quota exceeded for tablespace ''
Document 311021.1 Streams Propagation Process: Ora 12154 After Reboot with Transparent Application Failover TAF configured
Document 359971.1 STREAMS propagation to Primary of physical Standby configuration errors with Ora-01033, Ora-02068
Document 1101616.1 DBMS_PROPAGATION_ADM.DROP_PROPAGATION Fails With ORA-1747

7. Performance Issues

A propagation may seem to be slow if the queries from Sections 3.1 and 6.3 show that the message statistics are not changing quickly. In Oracle Streams, this is more usually due to a slow apply process at the target rather than a slow propagation. Propagation could be inferred to be slow if the message statistics are changing, the state of the capture process according to V$STREAMS_CAPTURE.STATE is PAUSED FOR FLOW CONTROL, and an ORA-25307 'Enqueue rate too high. Enable flow control' warning is NOT observed in DBA_QUEUE_SCHEDULES per Section 6.5.2. If this is the case, see the following notes / white papers for suggestions to increase performance:

Document 335516.1 Master Note for Streams Performance Recommendations
Document 730036.1 Overview for Troubleshooting Streams Performance Issues
Document 780733.1 Streams Propagation Tuning with Network Parameters
White Paper: http://www.oracle.com/technetwork/database/features/availability/maa-wp-10gr2-streams-performance-130059.pdf
White Paper: Oracle Streams Configuration Best Practices: Oracle Database 10g Release 10.2, http://www.oracle.com/technetwork/database/features/availability/maa-10gr2-streams-configuration-132039.pdf; see APPENDIX A: USING STREAMS CONFIGURATIONS OVER A NETWORK

For basic AQ propagation, the network tuning in the aforementioned Appendix A of the white paper 'Oracle Streams Configuration Best Practices: Oracle Database 10g Release 10.2' is applicable.
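To check for this combination of symptoms, a sketch of the two relevant queries (run the first at the capture site):

-- Capture state: look for PAUSED FOR FLOW CONTROL
select CAPTURE_NAME, STATE from V$STREAMS_CAPTURE;

-- Schedule errors: look for presence or absence of the ORA-25307 message
select SCHEMA, QNAME, DESTINATION, LAST_ERROR_MSG from DBA_QUEUE_SCHEDULES;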
References

NOTE:102330.1 - Advanced Queueing MSG_STATE Values and their Interpretation
NOTE:102771.1 - Advanced Queueing Propagation using PL/SQL
NOTE:1059029.1 - Combined Capture and Apply (CCA): Capture aborts: ORA-1422 after schedule_propagation
NOTE:1079577.1 - Advanced Queuing Propagation Fails With "ORA-22370: incorrect usage of method"
NOTE:1083608.1 - 11g Streams and Oracle Scheduler
NOTE:1087324.1 - ORA-01405 ORA-01422 reported by Advanced Queueing Propagation schedules after RAC reconfiguration
NOTE:1097115.1 - Oracle Streams Apply Reader is in 'Paused' State
NOTE:1101616.1 - DBMS_PROPAGATION_ADM.DROP_PROPAGATION Fails With ORA-1747
NOTE:1159787.1 - Troubleshooting Streams Propagation When It is Not Functioning and Attempts to Stop It Hang
NOTE:1165583.1 - ORA-600 [kwqpuspse0-ack] In Streams Environment
NOTE:118884.1 - How to unschedule a propagation schedule stuck in pending state
NOTE:1203544.1 - AQ Propagation Aborted With ORA-600 [OCIKSIN: INVALID STATUS] On SYS.DBMS_AQADM_SYS.AQ$_PROPAGATION_PROCEDURE After Upgrade
NOTE:1204080.1 - AQ Propagation Failing With ORA-25329 After Upgrade From 8i or 9i to 10g or 11g
NOTE:219416.1 - Advanced Queuing Propagation fails with ORA-22922
NOTE:222992.1 - DBMS_AQADM.DISABLE_PROPAGATION_SCHEDULE Returns ORA-24082
NOTE:253131.1 - Concurrent Writes May Corrupt LOB Segment When Using Auto Segment Space Management (ORA-1555)
NOTE:282987.1 - Propagated Messages marked UNDELIVERABLE after Drop and Recreate Of Remote Queue
NOTE:298015.1 - Kwqjswproc:Excep After Loop: Assigning To Self
NOTE:302109.1 - Streams Propagation Error: ORA-25307 Enqueue rate too high. Enable flow control
NOTE:311021.1 - Streams Propagation Process: Ora 12154 After Reboot with Transparent Application Failover TAF configured
NOTE:332792.1 - ORA-04061 error relating to SYS.DBMS_PRVTAQIP reported when setting up Statspack
NOTE:333068.1 - ORA-23603: Streams Enqueue Aborted Due To Low SGA
NOTE:335516.1 - Master Note for Streams Performance Recommendations
NOTE:353325.1 - ORA-24056: Internal inconsistency for QUEUE and destination
NOTE:353754.1 - Streams Messaging Propagation Fails between Single and Multi-byte Charactersets when using Character Length Semantics in the ADT
NOTE:359971.1 - STREAMS propagation to Primary of physical Standby configuration errors with Ora-01033, Ora-02068
NOTE:363496.1 - Ora-25315 Propagating on RAC Streams
NOTE:365093.1 - ORA-07445 [kwqppay2aqe()+7360] reported on Propagation of a Transformed Message
NOTE:368237.1 - Unable to Unschedule Propagation. Streams Queue is Invalid
NOTE:368912.1 - Queue to Queue Propagation Schedule encountered ORA-12514 in a RAC environment
NOTE:421237.1 - ORA-600 [KWQBMCRCPTS101] reported by a Qmon slave process after dropping a Streams Propagation
NOTE:436332.1 - dbms_propagation_adm.stop_propagation hangs
NOTE:437838.1 - Streams Specific Patches
NOTE:460471.1 - Propagation Blocked by Qmon Process - Streams_queue_table / 'library cache lock' waits
NOTE:463820.1 - Streams Combined Capture and Apply in 11g
NOTE:553017.1 - Stream Propagation Process Errors Ora-4052 Ora-6554 From 11g To 10201
NOTE:556309.1 - Changing Propagation / queue_to_queue: false -> true does not work; no LCRs propagated
NOTE:564649.1 - ORA-02068/ORA-03114/ORA-03113 Errors From Streams Propagation Process - Remote Database is Available and Unschedule/Reschedule Does Not Resolve
NOTE:566622.1 - ORA-22275 when propagating >4K AQ$_JMS_TEXT_MESSAGEs from 9.2.0.8 to 10.2.0.1
NOTE:727389.1 - Propagation Fails With ORA-12528
NOTE:730036.1 - Overview for Troubleshooting Streams Performance Issues
NOTE:730911.1 - ORA-4063 Is Reported After Dropping Negative Prop. Ruleset
NOTE:731292.1 - ORA-25215 Reported On Local Propagation When Using Transformation with ANYDATA queue tables
NOTE:731539.1 - ORA-29268: HTTP client error 401 Unauthorized Error when the AQ Servlet attempts to Propagate a message via HTTP
NOTE:745601.1 - ORA-23603 'STREAMS enqueue aborted due to low SGA' Error from Streams Propagation, and V$STREAMS_CAPTURE.STATE Hanging on 'Enqueuing Message'
NOTE:749181.1 - How to Recover Streams After Dropping Propagation
NOTE:780733.1 - Streams Propagation Tuning with Network Parameters
NOTE:787367.1 - ORA-22275 reported on Propagating Messages with LOB component when propagating between 10.1 and 10.2
NOTE:808136.1 - How to clear the old errors from DBA_PROPAGATION view?
NOTE:827184.1 - AQ Propagation with CLOB data types Fails with ORA-22990
NOTE:827473.1 - How to alter propagation from queue_to_queue to queue_to_dblink
NOTE:839568.1 - Propagation failing with error: ORA-01536: space quota exceeded for tablespace ''
NOTE:846297.1 - AQ Propagation Fails: ORA-00600 [kope2upic2954] or ORA-00600 [kghsstream_copyn]
NOTE:944846.1 - Streams Propagation Fails Ora-7445 [kohrsmc]
