Search Results

Search found 5875 results on 235 pages for 'https'.


  • DDD North 3 Presentation and source code – 'Event Store - an introduction to a DSD for event sourcing and notifications'

    - by Liam Westley
    Originally posted on: http://geekswithblogs.net/twickers/archive/2013/10/15/ddd-north-3-presentation-and-source-code-ndash-lsquoevent-store.aspx

    Thank you everyone at DDD North
    Thanks to all the people who helped organise the cracking conference that is DDD North 3, returning to Sunderland, with the great facilities at the University of Sunderland and the fine drinks reception at Sunderland Software City. The whole event wouldn't be possible without the sponsors, who ensured over 400 people were kept fed and watered so they could enjoy the impressive range of sessions. And lastly, a thank you to all those delegates who gave up their free time on a Saturday to spend a day dashing between lecture rooms, including a late change to my room which saw 40 people having to brave a journey between buildings in the fine drizzle. The enthusiasm from the delegates always helps recharge my geek batteries.

    Presentation and source code
    My presentation, source code, Event Store runners and text files containing the various command line parameters used for curl are now available on GitHub: https://github.com/westleyl/DDDNorth3-EventStore. Don't worry if you don't have a GitHub account - you don't need one; you can just click on the Download Zip button on the right-hand menu to download all the files as a single ZIP file. If all you want is the PowerPoint presentation, go to https://github.com/westleyl/DDDNorth3-EventStore/blob/master/Powerpoint/DDDNorth-EventStore.pptx and click on the View Raw button.

    Downloading and installing Event Store and tools
    Download Event Store from http://download.geteventstore.com - I unzipped these files into C:\EventStore\v2.0.1. Download curl from http://curl.haxx.se/download.html - I downloaded Win64 Generic (with SSL), version 7.31.0, and unzipped these files into C:\curl.

    Running the tools I used in my presentation

    Demonstration 1 (running Event Store)
    You can use one of my Event Store runner command files to run the single-node version of Event Store, using the default ports of 2213 for HTTP and 1113 for TCP, and with a wildcard HTTP pattern. Both take a single command line parameter to specify the location of the data and log files. The runners assume the single-node executable is located in C:\EventStore\v2.0.1, and will place data files and logs beneath C:\EventStore\Data, i.e.

        RunEventStore.cmd TestData1

    This will create data files in C:\EventStore\Data\TestData1\Data and log files in C:\EventStore\Data\TestData1\logs.

    If, when running Event Store, you see the following message:

        [03288,15,06:23:00.622] Failed to start http server
        Access is denied

    you will either need to run Event Store in an administrator console window, or you can use the netsh command to create a firewall permission to allow HTTP listening (this will need to be run, once, in an administrator console window):

        netsh http add urlacl url=http://*:2213/ user=liam

    You can always delete this later by running:

        netsh http delete urlacl url=http://*:2213/

    If you want to confirm that everything is running OK, open the management console in a browser by navigating to http://127.0.0.1:2213. If at any point you are asked for a user name and password, use the default of 'admin'/'changeit'.

    Demonstration 2 (reading and adding data, curl)
    In my second demonstration I used curl directly from the console to read streams, write events and then read back those events (a hedged sketch of these curl calls appears after the resources list below). On GitHub I have included a set of curl commands, CurlCommandLine.txt, and a sample data file, SampleData.json, to load an event into a DDDNorth3 stream. As there is not much data in the Event Store at this point, I used the $stats-127.0.0.1:2113 stream, which contains performance statistics for Event Store and is updated every 30 seconds (by default).

    Demonstration 3 (projections)
    On GitHub I have included a sample projection, Projection-ByRoom.txt, which will create streams based on the room in which a session was held on the DDDNorth3 agenda. Browse to the management console, http://127.0.0.1:2213. Click on Projections, New Projection, give it a name, Sessions-ByRoom, and copy in the JavaScript in the Projection-ByRoom.txt file. Select Continuous, tick Emit Enabled and then click on Post. It should run immediately. You may be challenged for the administration login for the management console; if so, use the default user name and password, 'admin'/'changeit'.

    Demonstration 4 (C# client)
    The final demonstration was the Visual Studio 2012 project using the Event Store client - referenced directly as C:\EventStore\v2.0.1\EventStore.ClientAPI.dll, although you can switch this to the latest Event Store client NuGet package. The source code provides a console app for viewing projections with the projection manager (HTTP connection), as well as containing a full set of data for the entire DDDNorth3 agenda. It also deals with the strategy for reading newest events backwards to older events and ignoring older events that have been superseded.

    Resources
    Event Store home page: http://www.geteventstore.com/
    Event Store source code on GitHub: https://github.com/eventstore/eventstore
    Event Store documentation on GitHub: https://github.com/eventstore/eventstore/wiki (includes an index to @RobAshton's blog series on Event Store at https://github.com/eventstore/eventstore/wiki#rob-ashton---projections-series)
    Event Store forum in Google Groups: https://groups.google.com/forum/?fromgroups#!forum/event-store
    TopShelf Windows service wrapper is available on GitHub: https://gist.github.com/trbngr/5083266
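    A hedged sketch of the kind of curl calls used in demonstration 2. The stream name, event type, GUID and HTTP headers below are illustrative assumptions rather than the exact commands from the talk, and the JSON envelope Event Store expects varies between versions, so refer to CurlCommandLine.txt in the GitHub repository for the real command lines.

        # Read a stream (here the statistics stream), asking for JSON back; %24 is the URL-encoded $
        curl -i -H "Accept: application/json" "http://127.0.0.1:2213/streams/%24stats-127.0.0.1:2113"

        # Append the event in SampleData.json to a DDDNorth3 stream (header names are assumptions)
        curl -i -d @SampleData.json "http://127.0.0.1:2213/streams/dddnorth3-sessions" \
             -H "Content-Type: application/json" \
             -H "ES-EventType: SessionAdded" \
             -H "ES-EventId: 11111111-2222-3333-4444-555555555555"

        # Read the stream back
        curl -i -H "Accept: application/json" "http://127.0.0.1:2213/streams/dddnorth3-sessions"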

    Read the article

  • How to setup custom DNS with Azure Websites Preview?

    - by husainnz
    I created a new Azure Website, using Umbraco as the CMS. I got a page up and going, and I already have a .co.nz domain with www.domains4less.com. There's a whole lot of stuff on the internet about pointing URLs to Azure, but that seems to be more of a redirection service than anything (i.e. my URLs still use azurewebsites.net once I land on my site). Has anybody had any luck getting it to go? Here's the error I get when I try adding the DNS entry to Azure (I'm in reserved mode, reemdairy is the name of the website): There was an error processing your request. Please try again in a few moments. Browser: 5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.56 Safari/536.5 User language: undefined Portal Version: 6.0.6002.18488 (rd_auxportal_stable.120609-0259) Subscriptions: 3aabe358-d178-4790-a97b-ffba902b2851 User email address: [email protected] Last 10 Requests message: Failure: Ajax call to: Websites/UpdateConfig. failed with status: error (500) in 2.57 seconds. x-ms-client-request-id was: 38834edf-c9f3-46bb-a1f7-b2839c692bcf-2012-06-12 22:25:14Z dateTime: Wed Jun 13 2012 10:25:17 GMT+1200 (New Zealand Standard Time) durationSeconds: 2.57 url: Websites/UpdateConfig status: 500 textStatus: error clientMsRequestId: 38834edf-c9f3-46bb-a1f7-b2839c692bcf-2012-06-12 22:25:14Z sessionId: 09c72263-6ce7-422b-84d7-4c21acded759 referrer: https://manage.windowsazure.com/#Workspaces/WebsiteExtension/Website/reemdairy/configure host: manage.windowsazure.com response: {"message":"Try again. Contact support if the problem persists.","ErrorMessage":"Try again. Contact support if the problem persists.","httpStatusCode":"InternalServerError","operationTrackingId":"","stackTrace":null} message: Complete: Ajax call to: Websites/GetConfig. completed with status: success (200) in 1.021 seconds. x-ms-client-request-id was: a0cdcced-13d0-44e2-866d-e0b061b9461b-2012-06-12 22:24:43Z dateTime: Wed Jun 13 2012 10:24:44 GMT+1200 (New Zealand Standard Time) durationSeconds: 1.021 url: Websites/GetConfig status: 200 textStatus: success clientMsRequestId: a0cdcced-13d0-44e2-866d-e0b061b9461b-2012-06-12 22:24:43Z sessionId: 09c72263-6ce7-422b-84d7-4c21acded759 referrer: https://manage.windowsazure.com/#Workspaces/WebsiteExtension/Website/reemdairy/configure host: manage.windowsazure.com message: Complete: Ajax call to: https://manage.windowsazure.com/Service/OperationTracking?subscriptionId=3aabe358-d178-4790-a97b-ffba902b2851. completed with status: success (200) in 1.887 seconds. x-ms-client-request-id was: a7689fe9-b9f9-4d6c-8926-734ec9a0b515-2012-06-12 22:24:40Z dateTime: Wed Jun 13 2012 10:24:42 GMT+1200 (New Zealand Standard Time) durationSeconds: 1.887 url: https://manage.windowsazure.com/Service/OperationTracking?subscriptionId=3aabe358-d178-4790-a97b-ffba902b2851 status: 200 textStatus: success clientMsRequestId: a7689fe9-b9f9-4d6c-8926-734ec9a0b515-2012-06-12 22:24:40Z sessionId: 09c72263-6ce7-422b-84d7-4c21acded759 referrer: https://manage.windowsazure.com/#Workspaces/WebsiteExtension/Website/reemdairy/configure host: manage.windowsazure.com message: Complete: Ajax call to: /Service/GetUserSettings. completed with status: success (200) in 0.941 seconds. 
x-ms-client-request-id was: 805e554d-1e2e-4214-afd5-be87c0f255d1-2012-06-12 22:24:40Z dateTime: Wed Jun 13 2012 10:24:40 GMT+1200 (New Zealand Standard Time) durationSeconds: 0.941 url: /Service/GetUserSettings status: 200 textStatus: success clientMsRequestId: 805e554d-1e2e-4214-afd5-be87c0f255d1-2012-06-12 22:24:40Z sessionId: 09c72263-6ce7-422b-84d7-4c21acded759 referrer: https://manage.windowsazure.com/#Workspaces/WebsiteExtension/Website/reemdairy/configure host: manage.windowsazure.com message: Complete: Ajax call to: Extensions/ApplicationsExtension/SqlAzure/ClusterSuffix. completed with status: success (200) in 0.483 seconds. x-ms-client-request-id was: 85157ceb-c538-40ca-8c1e-5cc07c57240f-2012-06-12 22:24:39Z dateTime: Wed Jun 13 2012 10:24:40 GMT+1200 (New Zealand Standard Time) durationSeconds: 0.483 url: Extensions/ApplicationsExtension/SqlAzure/ClusterSuffix status: 200 textStatus: success clientMsRequestId: 85157ceb-c538-40ca-8c1e-5cc07c57240f-2012-06-12 22:24:39Z sessionId: 09c72263-6ce7-422b-84d7-4c21acded759 referrer: https://manage.windowsazure.com/#Workspaces/WebsiteExtension/Website/reemdairy/configure host: manage.windowsazure.com message: Complete: Ajax call to: Extensions/ApplicationsExtension/SqlAzure/GetClientIp. completed with status: success (200) in 0.309 seconds. x-ms-client-request-id was: 2eb194b6-66ca-49e2-9016-e0f89164314c-2012-06-12 22:24:39Z dateTime: Wed Jun 13 2012 10:24:40 GMT+1200 (New Zealand Standard Time) durationSeconds: 0.309 url: Extensions/ApplicationsExtension/SqlAzure/GetClientIp status: 200 textStatus: success clientMsRequestId: 2eb194b6-66ca-49e2-9016-e0f89164314c-2012-06-12 22:24:39Z sessionId: 09c72263-6ce7-422b-84d7-4c21acded759 referrer: https://manage.windowsazure.com/#Workspaces/WebsiteExtension/Website/reemdairy/configure host: manage.windowsazure.com message: Complete: Ajax call to: Extensions/ApplicationsExtension/SqlAzure/DefaultServerLocation. completed with status: success (200) in 0.309 seconds. x-ms-client-request-id was: 1bc165ef-2081-48f2-baed-16c6edf8ea67-2012-06-12 22:24:39Z dateTime: Wed Jun 13 2012 10:24:40 GMT+1200 (New Zealand Standard Time) durationSeconds: 0.309 url: Extensions/ApplicationsExtension/SqlAzure/DefaultServerLocation status: 200 textStatus: success clientMsRequestId: 1bc165ef-2081-48f2-baed-16c6edf8ea67-2012-06-12 22:24:39Z sessionId: 09c72263-6ce7-422b-84d7-4c21acded759 referrer: https://manage.windowsazure.com/#Workspaces/WebsiteExtension/Website/reemdairy/configure host: manage.windowsazure.com message: Complete: Ajax call to: Extensions/ApplicationsExtension/SqlAzure/ServerLocations. completed with status: success (200) in 0.309 seconds. x-ms-client-request-id was: e1fba7df-6a12-47f8-9434-bf17ca7d93f4-2012-06-12 22:24:39Z dateTime: Wed Jun 13 2012 10:24:40 GMT+1200 (New Zealand Standard Time) durationSeconds: 0.309 url: Extensions/ApplicationsExtension/SqlAzure/ServerLocations status: 200 textStatus: success clientMsRequestId: e1fba7df-6a12-47f8-9434-bf17ca7d93f4-2012-06-12 22:24:39Z sessionId: 09c72263-6ce7-422b-84d7-4c21acded759 referrer: https://manage.windowsazure.com/#Workspaces/WebsiteExtension/Website/reemdairy/configure host: manage.windowsazure.com

    Read the article

  • Accessing cookies in php which are set in java web application

    - by user866937
    I am setting a cookie at the domain level, over an encrypted connection, in a Java web application running on Tomcat, and I would like it to be accessible in a PHP web application running on the same domain but a different subdomain.

    Java web application running on https://javaapp.mycompany.com
    PHP web application running on https://phpapp.mycompany.com/subpath/index.php

    From Java, I am setting the cookie with the following parameters:

    Domain: .mycompany.com
    Send For: Encrypted connections only
    Expires: After 2 months
    Path: /subpath
    Name: __C
    Value: 1

    When I dump all the cookies from my PHP web application running over https, I do not see the cookie at all. However, if I set the cookie in Java for any type of connection, the PHP web application is able to see it, but only if I run the PHP app on http instead of https. I believe the PHP web app should be able to retrieve cookies set for https only, for that particular domain and all immediate sub-domains. What am I doing wrong here? Thanks in advance for the help.
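    A minimal sketch of the Servlet-API side of this setup; the class name and values are illustrative, not taken from the original application. Note that the browser matches the cookie Path against the request URL, so a cookie scoped to /subpath will only be sent with requests under /subpath; if the PHP page lives elsewhere, a path of "/" (as below) may be what you want. With Secure set, PHP will only receive the cookie (in $_COOKIE['__C']) over https.

        import javax.servlet.http.Cookie;
        import javax.servlet.http.HttpServletResponse;

        public class SharedCookieWriter {
            public static void addSharedCookie(HttpServletResponse response) {
                Cookie c = new Cookie("__C", "1");
                c.setDomain(".mycompany.com");  // leading dot: visible to immediate sub-domains
                c.setSecure(true);              // send over encrypted connections only
                c.setPath("/");                 // "/" rather than "/subpath" so all paths receive it
                c.setMaxAge(60 * 60 * 24 * 60); // roughly two months, in seconds
                response.addCookie(c);
            }
        }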

    Read the article

  • Tomcat SSL Configuration

    - by bdares
    I received an SSL cert to use for a Tomcat 6.0 server, ready to use. I configured Tomcat to use it with the following in server.xml:

        <Connector port="8443" maxThreads="200" scheme="https" secure="true" SSLEnabled="true"
                   keystoreFile="C:\Tomcat 6.0\ssl\cert" keystorePass="*****"
                   clientAuth="false" sslProtocol="TLS"/>

    I started Tomcat from the command prompt so I could see any error messages as they happened. There were none. The results for accessing different URLs:

    http://localhost - normal page loads fine
    https://localhost - browser claims the page cannot be found
    https://localhost:8443 - page cannot be found
    http://localhost:8443 - offers a certificate; after it is accepted, redirects to https://localhost (I suspect the https:// URLs initially offer the certificate, which is automatically accepted by the browser, as it was issued by Verisign)

    How to fix? Edit: I've also tried port="443". Same result.
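    For comparison, a hedged sketch of a keystore-based connector for Tomcat 6. The keystore path, alias and password are placeholders, and it assumes the issued certificate (plus any chain) has been imported into a JKS keystore with keytool, since keystoreFile must point at a keystore rather than a bare certificate file. Also note that https://localhost with no port goes to 443, so with the connector on 8443 the page to test is https://localhost:8443.

        <!-- placeholder paths, alias and password; certificate imported into keystore.jks beforehand -->
        <Connector port="8443" protocol="HTTP/1.1"
                   maxThreads="200" scheme="https" secure="true" SSLEnabled="true"
                   keystoreFile="C:\Tomcat 6.0\ssl\keystore.jks" keystorePass="*****"
                   keyAlias="tomcat"
                   clientAuth="false" sslProtocol="TLS"/>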

    Read the article

  • Hudson: how do i use a parameterized build to do svn checkout and svn tag?

    - by Derick Bailey
    I'm setting up a parameterized build in Hudson v1.362. The parameter I'm creating is used to determine which branch to check out in Subversion. I can set my svn repository URL like this:

        https://my.svn.server/branches/${branch}

    and it does the checkout and the build just fine. Now I want to tag the build after it finishes. I'm using the SVN tagging plugin for Hudson to do this, so I go to the bottom of the project config screen for the Hudson project and turn on "Perform Subversion tagging on successful build". Here, I set my Tag Base URL to

        https://my.svn.server/tags/${branch}-${BUILD_NUMBER}

    and it gives me errors about those properties not being found. So I change them to environment variable usages like this:

        https://my.svn.server/tags/${env['branch']}-${env['BUILD_NUMBER']}

    and the svn tagging plugin is happy. The problem now is that my svn repository at the top is using the ${branch} syntax and the svn tagging plugin barfs on this:

        moduleLocation: Remote -https://my.svn.server/branches/$branch/
        Tag Base URL: 'https://my.svn.server/tags/thebranchiused-1234'.
        There was no old tag at https://my.svn.server/tags/thebranchiused-1234.
        ERROR: Publisher hudson.plugins.svn_tag.SvnTagPublisher aborted due to exception
        java.lang.NullPointerException
            at hudson.plugins.svn_tag.SvnTagPlugin.perform(SvnTagPlugin.java:180)
            at hudson.plugins.svn_tag.SvnTagPublisher.perform(SvnTagPublisher.java:79)
            at hudson.tasks.BuildStepMonitor$3.perform(BuildStepMonitor.java:36)
            at hudson.model.AbstractBuild$AbstractRunner.perform(AbstractBuild.java:601)
            at hudson.model.AbstractBuild$AbstractRunner.performAllBuildSteps(AbstractBuild.java:580)
            at hudson.model.AbstractBuild$AbstractRunner.performAllBuildSteps(AbstractBuild.java:558)
            at hudson.model.Build$RunnerImpl.cleanUp(Build.java:167)
            at hudson.model.Run.run(Run.java:1295)
            at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
            at hudson.model.ResourceController.execute(ResourceController.java:88)
            at hudson.model.Executor.run(Executor.java:124)
        Finished: FAILURE

    Notice the first line there: the svn tag is looking at ${branch} as part of the repository URL... it's not parsing out the property value. I tried to change my original Repository URL for svn to use the ${env['branch']} syntax, but that blows up on the original checkout because this syntax is not getting parsed at all by the checkout.

    Help?! How do I use a parameterized build to set the svn URL for checkout and for tagging my build?!
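    If the tagging plugin won't expand the build parameter, one hedged workaround (not from the original thread) is to tag from a post-build shell step instead, letting the shell expand the parameter and Hudson's BUILD_NUMBER directly. The URLs and credentials below are placeholders.

        # server-side copy from the branch that was built to a tag named after it
        svn copy "https://my.svn.server/branches/${branch}" \
                 "https://my.svn.server/tags/${branch}-${BUILD_NUMBER}" \
                 -m "Tagging build ${BUILD_NUMBER} of ${branch}" \
                 --username builduser --password '*****' --non-interactive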

    Read the article

  • Reading Data from DDFS ValueError: No JSON object could be decoded

    - by secumind
    I'm running dozens of map reduce jobs for a number of different purposes using disco. My data has grown enormous and I thought I would try using DDFS for a change rather than standard txt files. I've followed the DISCO map/reduce example Counting Words as a map/reduce job, without to much difficulty and with the help of others, Reading JSON specific data into DISCO I've gotten past one of my latest problems. I'm trying to read data in/out of ddfs to better chunk and distribute it but am having a bit of trouble. Here's an example file: file.txt {"favorited": false, "in_reply_to_user_id": null, "contributors": null, "truncated": false, "text": "I'll call him back tomorrow I guess", "created_at": "Mon Feb 13 05:34:27 +0000 2012", "retweeted": false, "in_reply_to_status_id_str": null, "coordinates": null, "in_reply_to_user_id_str": null, "entities": {"user_mentions": [], "hashtags": [], "urls": []}, "in_reply_to_status_id": null, "id_str": "168931016843603968", "place": null, "user": {"follow_request_sent": null, "profile_use_background_image": true, "profile_background_image_url_https": "https://si0.twimg.com/profile_background_images/305726905/FASHION-3.png", "verified": false, "profile_image_url_https": "https://si0.twimg.com/profile_images/1818996723/image_normal.jpg", "profile_sidebar_fill_color": "292727", "is_translator": false, "id": 113532729, "profile_text_color": "000000", "followers_count": 78, "protected": false, "location": "With My Niggas In Paris!", "default_profile_image": false, "listed_count": 0, "utc_offset": -21600, "statuses_count": 6733, "description": "Made in CHINA., Educated && Making My Own $$. Fear GOD && Put Him 1st. #TeamFollowBack #TeamiPhone\n", "friends_count": 74, "profile_link_color": "b03f3f", "profile_image_url": "http://a2.twimg.com/profile_images/1818996723/image_normal.jpg", "notifications": null, "show_all_inline_media": false, "geo_enabled": true, "profile_background_color": "1f9199", "id_str": "113532729", "profile_background_image_url": "http://a3.twimg.com/profile_background_images/305726905/FASHION-3.png", "name": "Bee'Jay", "lang": "en", "profile_background_tile": true, "favourites_count": 19, "screen_name": "OohMyBEEsNice", "url": "http://www.bitchimpaid.org", "created_at": "Fri Feb 12 03:32:54 +0000 2010", "contributors_enabled": false, "time_zone": "Central Time (US & Canada)", "profile_sidebar_border_color": "000000", "default_profile": false, "following": null}, "in_reply_to_screen_name": null, "retweet_count": 0, "geo": null, "id": 168931016843603968, "source": "<a href=\"http://twitter.com/#!/download/iphone\" rel=\"nofollow\">Twitter for iPhone</a>"} {"favorited": false, "in_reply_to_user_id": 50940453, "contributors": null, "truncated": false, "text": "@LegaMrvica @MimozaBand makasi om artis :D kadoo kadoo", "created_at": "Mon Feb 13 05:34:27 +0000 2012", "retweeted": false, "in_reply_to_status_id_str": "168653037894770688", "coordinates": null, "in_reply_to_user_id_str": "50940453", "entities": {"user_mentions": [{"indices": [0, 11], "screen_name": "LegaMrvica", "id": 50940453, "name": "Lega_thePianis", "id_str": "50940453"}, {"indices": [12, 23], "screen_name": "MimozaBand", "id": 375128905, "name": "Mimoza", "id_str": "375128905"}], "hashtags": [], "urls": []}, "in_reply_to_status_id": 168653037894770688, "id_str": "168931016868761600", "place": null, "user": {"follow_request_sent": null, "profile_use_background_image": true, "profile_background_image_url_https": 
"https://si0.twimg.com/profile_background_images/347686061/Galungan_dan_Kuningan.jpg", "verified": false, "profile_image_url_https": "https://si0.twimg.com/profile_images/1803845596/Picture_20124_normal.jpg", "profile_sidebar_fill_color": "DDFFCC", "is_translator": false, "id": 48293450, "profile_text_color": "333333", "followers_count": 182, "protected": false, "location": "\u00dcT: -6.906799,107.622383", "default_profile_image": false, "listed_count": 0, "utc_offset": -28800, "statuses_count": 3052, "description": "Fashion design maranatha '11 // traditional dancer (bali) at sanggar tampak siring & Natya Nataraja", "friends_count": 206, "profile_link_color": "0084B4", "profile_image_url": "http://a3.twimg.com/profile_images/1803845596/Picture_20124_normal.jpg", "notifications": null, "show_all_inline_media": false, "geo_enabled": true, "profile_background_color": "9AE4E8", "id_str": "48293450", "profile_background_image_url": "http://a0.twimg.com/profile_background_images/347686061/Galungan_dan_Kuningan.jpg", "name": "nana afiff", "lang": "en", "profile_background_tile": true, "favourites_count": 2, "screen_name": "hasnfebria", "url": null, "created_at": "Thu Jun 18 08:50:29 +0000 2009", "contributors_enabled": false, "time_zone": "Pacific Time (US & Canada)", "profile_sidebar_border_color": "BDDCAD", "default_profile": false, "following": null}, "in_reply_to_screen_name": "LegaMrvica", "retweet_count": 0, "geo": null, "id": 168931016868761600, "source": "<a href=\"http://blackberry.com/twitter\" rel=\"nofollow\">Twitter for BlackBerry\u00ae</a>"} {"favorited": false, "in_reply_to_user_id": 27260086, "contributors": null, "truncated": false, "text": "@justinbieber u were born to be somebody, and u're super important in beliebers' life. thanks for all biebs. I love u. follow me? 84", "created_at": "Mon Feb 13 05:34:27 +0000 2012", "retweeted": false, "in_reply_to_status_id_str": null, "coordinates": null, "in_reply_to_user_id_str": "27260086", "entities": {"user_mentions": [{"indices": [0, 13], "screen_name": "justinbieber", "id": 27260086, "name": "Justin Bieber", "id_str": "27260086"}], "hashtags": [], "urls": []}, "in_reply_to_status_id": null, "id_str": "168931016856178688", "place": null, "user": {"follow_request_sent": null, "profile_use_background_image": true, "profile_background_image_url_https": "https://si0.twimg.com/profile_background_images/416005864/Captura.JPG", "verified": false, "profile_image_url_https": "https://si0.twimg.com/profile_images/1808883280/Captura6_normal.JPG", "profile_sidebar_fill_color": "f5e7f3", "is_translator": false, "id": 406750700, "profile_text_color": "333333", "followers_count": 1122, "protected": false, "location": "Adentro de una supra.", "default_profile_image": false, "listed_count": 0, "utc_offset": -14400, "statuses_count": 20966, "description": "Mi \u00eddolo es @justinbieber , si te gusta \u00a1genial!, si no, solo respetalo. El cambi\u00f3 mi vida completamente y mi sue\u00f1o es conocerlo #TrueBelieber . 
", "friends_count": 1015, "profile_link_color": "9404b8", "profile_image_url": "http://a1.twimg.com/profile_images/1808883280/Captura6_normal.JPG", "notifications": null, "show_all_inline_media": false, "geo_enabled": false, "profile_background_color": "f9fcfa", "id_str": "406750700", "profile_background_image_url": "http://a3.twimg.com/profile_background_images/416005864/Captura.JPG", "name": "neversaynever,right?", "lang": "es", "profile_background_tile": false, "favourites_count": 22, "screen_name": "True_Belieebers", "url": "http://www.wehavebieber-fever.tumblr.com", "created_at": "Mon Nov 07 04:17:40 +0000 2011", "contributors_enabled": false, "time_zone": "Santiago", "profile_sidebar_border_color": "C0DEED", "default_profile": false, "following": null}, "in_reply_to_screen_name": "justinbieber", "retweet_count": 0, "geo": null, "id": 168931016856178688, "source": "<a href=\"http://yfrog.com\" rel=\"nofollow\">Yfrog</a>"} I load it into DDFS with: # ddfs chunk data:test1 ./file.txt created: disco://localhost/ddfs/vol0/blob/44/file_txt-0$549-db27b-125e1 I test that the file is indeed loaded into ddfs with: # ddfs xcat data:test1 {"favorited": false, "in_reply_to_user_id": null, "contributors": null, "truncated": false, "text": "I'll call him back tomorrow I guess", "created_at": "Mon Feb 13 05:34:27 +0000 2012", "retweeted": false, "in_reply_to_status_id_str": null, "coordinates": null, "in_reply_to_user_id_str": null, "entities": {"user_mentions": [], "hashtags": [], "urls": []}, "in_reply_to_status_id": null, "id_str": "168931016843603968", "place": null, "user": {"follow_request_sent": null, "profile_use_background_image": true, "profile_background_image_url_https": "https://si0.twimg.com/profile_background_images/305726905/FASHION-3.png", "verified": false, "profile_image_url_https": "https://si0.twimg.com/profile_images/1818996723/image_normal.jpg", "profile_sidebar_fill_color": "292727", "is_translator": false, "id": 113532729, "profile_text_color": "000000", "followers_count": 78, "protected": false, "location": "With My Niggas In Paris!", "default_profile_image": false, "listed_count": 0, "utc_offset": -21600, "statuses_count": 6733, "description": "Made in CHINA., Educated && Making My Own $$. Fear GOD && Put Him 1st. 
#TeamFollowBack #TeamiPhone\n", "friends_count": 74, "profile_link_color": "b03f3f", "profile_image_url": "http://a2.twimg.com/profile_images/1818996723/image_normal.jpg", "notifications": null, "show_all_inline_media": false, "geo_enabled": true, "profile_background_color": "1f9199", "id_str": "113532729", "profile_background_image_url": "http://a3.twimg.com/profile_background_images/305726905/FASHION-3.png", "name": "Bee'Jay", "lang": "en", "profile_background_tile": true, "favourites_count": 19, "screen_name": "OohMyBEEsNice", "url": "http://www.bitchimpaid.org", "created_at": "Fri Feb 12 03:32:54 +0000 2010", "contributors_enabled": false, "time_zone": "Central Time (US & Canada)", "profile_sidebar_border_color": "000000", "default_profile": false, "following": null}, "in_reply_to_screen_name": null, "retweet_count": 0, "geo": null, "id": 168931016843603968, "source": "<a href=\"http://twitter.com/#!/download/iphone\" rel=\"nofollow\">Twitter for iPhone</a>"} {"favorited": false, "in_reply_to_user_id": 50940453, "contributors": null, "truncated": false, "text": "@LegaMrvica @MimozaBand makasi om artis :D kadoo kadoo", "created_at": "Mon Feb 13 05:34:27 +0000 2012", "retweeted": false, "in_reply_to_status_id_str": "168653037894770688", "coordinates": null, "in_reply_to_user_id_str": "50940453", "entities": {"user_mentions": [{"indices": [0, 11], "screen_name": "LegaMrvica", "id": 50940453, "name": "Lega_thePianis", "id_str": "50940453"}, {"indices": [12, 23], "screen_name": "MimozaBand", "id": 375128905, "name": "Mimoza", "id_str": "375128905"}], "hashtags": [], "urls": []}, "in_reply_to_status_id": 168653037894770688, "id_str": "168931016868761600", "place": null, "user": {"follow_request_sent": null, "profile_use_background_image": true, "profile_background_image_url_https": "https://si0.twimg.com/profile_background_images/347686061/Galungan_dan_Kuningan.jpg", "verified": false, "profile_image_url_https": "https://si0.twimg.com/profile_images/1803845596/Picture_20124_normal.jpg", "profile_sidebar_fill_color": "DDFFCC", "is_translator": false, "id": 48293450, "profile_text_color": "333333", "followers_count": 182, "protected": false, "location": "\u00dcT: -6.906799,107.622383", "default_profile_image": false, "listed_count": 0, "utc_offset": -28800, "statuses_count": 3052, "description": "Fashion design maranatha '11 // traditional dancer (bali) at sanggar tampak siring & Natya Nataraja", "friends_count": 206, "profile_link_color": "0084B4", "profile_image_url": "http://a3.twimg.com/profile_images/1803845596/Picture_20124_normal.jpg", "notifications": null, "show_all_inline_media": false, "geo_enabled": true, "profile_background_color": "9AE4E8", "id_str": "48293450", "profile_background_image_url": "http://a0.twimg.com/profile_background_images/347686061/Galungan_dan_Kuningan.jpg", "name": "nana afiff", "lang": "en", "profile_background_tile": true, "favourites_count": 2, "screen_name": "hasnfebria", "url": null, "created_at": "Thu Jun 18 08:50:29 +0000 2009", "contributors_enabled": false, "time_zone": "Pacific Time (US & Canada)", "profile_sidebar_border_color": "BDDCAD", "default_profile": false, "following": null}, "in_reply_to_screen_name": "LegaMrvica", "retweet_count": 0, "geo": null, "id": 168931016868761600, "source": "<a href=\"http://blackberry.com/twitter\" rel=\"nofollow\">Twitter for BlackBerry\u00ae</a>"} {"favorited": false, "in_reply_to_user_id": 27260086, "contributors": null, "truncated": false, "text": "@justinbieber u were born to be somebody, and u're super 
important in beliebers' life. thanks for all biebs. I love u. follow me? 84", "created_at": "Mon Feb 13 05:34:27 +0000 2012", "retweeted": false, "in_reply_to_status_id_str": null, "coordinates": null, "in_reply_to_user_id_str": "27260086", "entities": {"user_mentions": [{"indices": [0, 13], "screen_name": "justinbieber", "id": 27260086, "name": "Justin Bieber", "id_str": "27260086"}], "hashtags": [], "urls": []}, "in_reply_to_status_id": null, "id_str": "168931016856178688", "place": null, "user": {"follow_request_sent": null, "profile_use_background_image": true, "profile_background_image_url_https": "https://si0.twimg.com/profile_background_images/416005864/Captura.JPG", "verified": false, "profile_image_url_https": "https://si0.twimg.com/profile_images/1808883280/Captura6_normal.JPG", "profile_sidebar_fill_color": "f5e7f3", "is_translator": false, "id": 406750700, "profile_text_color": "333333", "followers_count": 1122, "protected": false, "location": "Adentro de una supra.", "default_profile_image": false, "listed_count": 0, "utc_offset": -14400, "statuses_count": 20966, "description": "Mi \u00eddolo es @justinbieber , si te gusta \u00a1genial!, si no, solo respetalo. El cambi\u00f3 mi vida completamente y mi sue\u00f1o es conocerlo #TrueBelieber . ", "friends_count": 1015, "profile_link_color": "9404b8", "profile_image_url": "http://a1.twimg.com/profile_images/1808883280/Captura6_normal.JPG", "notifications": null, "show_all_inline_media": false, "geo_enabled": false, "profile_background_color": "f9fcfa", "id_str": "406750700", "profile_background_image_url": "http://a3.twimg.com/profile_background_images/416005864/Captura.JPG", "name": "neversaynever,right?", "lang": "es", "profile_background_tile": false, "favourites_count": 22, "screen_name": "True_Belieebers", "url": "http://www.wehavebieber-fever.tumblr.com", "created_at": "Mon Nov 07 04:17:40 +0000 2011", "contributors_enabled": false, "time_zone": "Santiago", "profile_sidebar_border_color": "C0DEED", "default_profile": false, "following": null}, "in_reply_to_screen_name": "justinbieber", "retweet_count": 0, "geo": null, "id": 168931016856178688, "source": "<a href=\"http://yfrog.com\" rel=\"nofollow\">Yfrog</a>

At this point everything is great. I load up the script that resulted from a previous Stack post:

    from disco.core import Job, result_iterator
    import gzip

    def map(line, params):
        import unicodedata
        import json
        r = json.loads(line).get('text')
        s = unicodedata.normalize('NFD', r).encode('ascii', 'ignore')
        for word in s.split():
            yield word, 1

    def reduce(iter, params):
        from disco.util import kvgroup
        for word, counts in kvgroup(sorted(iter)):
            yield word, sum(counts)

    if __name__ == '__main__':
        job = Job().run(input=["tag://data:test1"], map=map, reduce=reduce)
        for word, count in result_iterator(job.wait(show=True)):
            print word, count

NOTE: This script runs fine if the input is ["file.txt"]; however, when I run it with "tag://data:test1" I get the following error:

    # DISCO_EVENTS=1 python count_normal_words.py
    Job@549:db30e:25bd8: Status: [map] 0 waiting, 1 running, 0 done, 0 failed
    2012/11/25 21:43:26 master New job initialized!
    2012/11/25 21:43:26 master Starting job
    2012/11/25 21:43:26 master Starting map phase
    2012/11/25 21:43:26 master map:0 assigned to solice
    2012/11/25 21:43:26 master ERROR: Job failed: Worker at 'solice' died: Traceback (most recent call last):
      File "/home/DISCO/data/solice/01/Job@549:db30e:25bd8/usr/local/lib/python2.7/site-packages/disco/worker/__init__.py", line 329, in main
        job.worker.start(task, job, **jobargs)
      File "/home/DISCO/data/solice/01/Job@549:db30e:25bd8/usr/local/lib/python2.7/site-packages/disco/worker/__init__.py", line 290, in start
        self.run(task, job, **jobargs)
      File "/home/DISCO/data/solice/01/Job@549:db30e:25bd8/usr/local/lib/python2.7/site-packages/disco/worker/classic/worker.py", line 286, in run
        getattr(self, task.mode)(task, params)
      File "/home/DISCO/data/solice/01/Job@549:db30e:25bd8/usr/local/lib/python2.7/site-packages/disco/worker/classic/worker.py", line 299, in map
        for key, val in self['map'](entry, params):
      File "count_normal_words.py", line 12, in map
      File "/usr/lib64/python2.7/json/__init__.py", line 326, in loads
        return _default_decoder.decode(s)
      File "/usr/lib64/python2.7/json/decoder.py", line 366, in decode
        obj, end = self.raw_decode(s, idx=_w(s, 0).end())
      File "/usr/lib64/python2.7/json/decoder.py", line 384, in raw_decode
        raise ValueError("No JSON object could be decoded")
    ValueError: No JSON object could be decoded
    2012/11/25 21:43:26 master WARN: Job killed
    Status: [map] 1 waiting, 0 running, 0 done, 1 failed
    Traceback (most recent call last):
      File "count_normal_words.py", line 28, in <module>
        for word, count in result_iterator(job.wait(show=True)):
      File "/usr/local/lib/python2.7/site-packages/disco/core.py", line 348, in wait
        timeout, poll_interval * 1000)
      File "/usr/local/lib/python2.7/site-packages/disco/core.py", line 309, in check_results
        raise JobError(Job(name=jobname, master=self), "Status %s" % status)
    disco.error.JobError: Job Job@549:db30e:25bd8 failed: Status dead

The error states: ValueError: No JSON object could be decoded. Again, this works fine using the text file as input but not DDFS. Any ideas? I'm open to suggestions.
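A possible diagnostic step (not an official Disco fix, just a sketch): make the map function tolerant of lines that fail to parse and count them, which helps show whether the DDFS chunking has introduced blank, partial or split records compared with reading file.txt directly.

    # diagnostic variant of map(): skip undecodable lines and emit a marker instead of raising
    def map(line, params):
        import unicodedata
        import json
        try:
            record = json.loads(line)
        except ValueError:
            yield '__UNPARSEABLE_LINE__', 1   # count bad lines rather than killing the job
            return
        text = record.get('text') or ''
        ascii_text = unicodedata.normalize('NFD', text).encode('ascii', 'ignore')
        for word in ascii_text.split():
            yield word, 1

If '__UNPARSEABLE_LINE__' shows up in the results, the records reaching map() are not the same lines that are in file.txt, and the chunk boundaries or the reader are the place to look next.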

    Read the article

  • How can I refactor these script tags?

    - by Shpigford
    I have the following script tags in the <head> so that they don't prompt any security errors when going back and forth between SSL and non-SSL pages. But it just looks hairy. Any way I can combine them or reduce some of the code? <script type="text/javascript">document.write(["\<script src='",("https:" == document.location.protocol) ? "https://" : "http://","ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js' type='text/javascript'>\<\/script>"].join(''));</script> <script type="text/javascript">document.write(["\<script src='",("https:" == document.location.protocol) ? "https://" : "http://","html5shiv.googlecode.com/svn/trunk/html5.js' type='text/javascript'>\<\/script>"].join(''));</script> <script type="text/javascript">document.write(["\<script src='",("https:" == document.location.protocol) ? "https://" : "http://","use.typekit.com/12345.js' type='text/javascript'>\<\/script>"].join(''));</script>
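    One hedged way to reduce the repetition (a sketch whose behaviour is intended to match the original document.write approach) is to compute the protocol once and build the tags in a loop; the host list is copied from the snippets above. Protocol-relative URLs ("//ajax.googleapis.com/...") can shorten this further, provided each host serves both http and https.

        <script type="text/javascript">
          (function () {
            var proto = ("https:" == document.location.protocol) ? "https://" : "http://";
            var libs = [
              "ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js",
              "html5shiv.googlecode.com/svn/trunk/html5.js",
              "use.typekit.com/12345.js"
            ];
            for (var i = 0; i < libs.length; i++) {
              document.write("\<script src='" + proto + libs[i] + "' type='text/javascript'>\<\/script>");
            }
          })();
        </script>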

    Read the article

  • Conditional compilation hackery in C# - is there a way to pull this off?

    - by Chris
    I have an internal API that I would like others to reference in their projects as a compiled DLL. When it's a standalone project that's referenced, I use conditional compilation (#if statements) to switch behavior of a key web service class depending on compilation symbols. The problem is, once an assembly is generated, it appears that it's locked into whatever the compilation symbols were when it was originally compiled - for instance, if this assembly is compiled with DEBUG and is referenced by another project, even if the other project is built as RELEASE, the assembly still acts as if it was in DEBUG as it doesn't need recompilation. That makes sense, just giving some background. Now I'm trying to work around that so I can switch the assembly's behavior by some other means, such as scanning the app/web config file for a switch. The problem is, some of the assembly's code I was switching between are attributes on methods, for example: #if PRODUCTION [SoapDocumentMethodAttribute("https://prodServer/Service_Test", RequestNamespace = "https://prodServer", ResponseNamespace = "https://prodServer")] #else [SoapDocumentMethodAttribute("https://devServer/Service_Test", RequestNamespace = "https://devServer", ResponseNamespace = "https://devServer")] #endif public string Service_Test() { // test service } Though there might be some syntactical sugar that allows me to flip between two attributes of the same type in another fashion, I don't know it. Any ideas? The alternative method would be to reference the entire project instead of the assembly, but I'd rather stick with just referencing the compiled DLL if I can. I'm also completely open to a whole new approach to solve the problem if that's what it takes.
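    If the generated proxy derives from SoapHttpClientProtocol, one common alternative (a hedged sketch, not necessarily a drop-in for this particular API) is to keep a single compiled-in attribute and override the endpoint at runtime via the proxy's Url property, read from configuration; "TestService" and the appSettings key are illustrative names. Note this only moves the endpoint URL, not the RequestNamespace/ResponseNamespace values baked into the attribute.

        using System.Configuration;

        public static class ServiceFactory
        {
            public static TestService Create()
            {
                var proxy = new TestService();
                // e.g. <add key="TestServiceUrl" value="https://devServer/Service_Test" /> in app/web.config
                proxy.Url = ConfigurationManager.AppSettings["TestServiceUrl"];
                return proxy;
            }
        }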

    Read the article

  • How to correctly track the analytics when using iframe

    - by Sherry Ann Hernandez
    In our main aspx page we have this analytics code:

        <script type="text/javascript">
          var _gaq = _gaq || [];
          _gaq.push(['_setAccount', 'UA-1301114-2']);
          _gaq.push(['_setDomainName', 'florahospitality.com']);
          _gaq.push(['_setAllowLinker', true]);
          _gaq.push(['_trackPageview']);
          _gaq.push(function() {
            var pageTracker = _gat._getTrackerByName();
            var iframe = document.getElementById('reservationFrame');
            iframe.src = pageTracker._getLinkerUrl('https://reservations.synxis.com/xbe/rez.aspx?Hotel=15159&template=flex&shell=flex&Chain=5375&locale=en&arrive=11/12/2012&depart=11/13/2012&adult=2&child=0&rooms=1&start=availresults&iata=&promo=&group=');
          });
          (function() {
            var ga = document.createElement('script');
            ga.type = 'text/javascript';
            ga.async = true;
            ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
            var s = document.getElementsByTagName('script')[0];
            s.parentNode.insertBefore(ga, s);
          })();
        </script>

    Then inside this aspx page is an iframe. Inside the iframe we set up this analytics code:

        <script type="text/javascript">
          var _gaq = _gaq || [];
          _gaq.push(['_setAccount', 'UA-1301114-2']);
          _gaq.push(['_setDomainName', 'reservations.synxis.com']);
          _gaq.push(['_setAllowLinker', true]);
          _gaq.push(['_trackPageview', 'AvailabilityResults']);
          (function() {
            var ga = document.createElement('script');
            ga.type = 'text/javascript';
            ga.async = true;
            ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
            var s = document.getElementsByTagName('script')[0];
            s.parentNode.insertBefore(ga, s);
          })();
        </script>

    The problem is that I see two pageviews when I go to the AvailabilityResults page. The first one is direct traffic and the other one is cpc. How come they have different sources? I was expecting both of them to be direct traffic.

    Read the article

  • How can I set up Friendly URL to Nginx?

    - by MKK
    I'm trying to use dokuwiki with its friendly URLs on Nginx. The problem I'm facing is that it doesn't show the correct path for any link (even stylesheets and images) on every page. It looks like the paths are missing the wiki/ part. If I click on the image and show its destination, it shows this URL:

    http://foo-sample.com/lib/tpl/dokuwiki/images/logo.png

    But it has to be this instead:

    http://foo-sample.com/wiki/lib/tpl/dokuwiki/images/logo.png

    The login URL is not working either. If I click on the login link, it takes me to http://foo-sample.com/wiki/start?do=login&sectok=ff7d4a68936033ed398a8b82ac9 and it says 404 Not Found. I took a look at https://www.dokuwiki.org/rewrite#nginx and tried as much as possible. However, it still doesn't work. Here are my conf files. How can I fix this problem? dokuwiki is set up in /usr/share/wiki.

    /etc/nginx/conf.d/rails.conf

        upstream sample {
            ip_hash;
            server unix:/var/run/unicorn/unicorn_foo-sample.sock fail_timeout=0;
        }
        server {
            listen 80;
            server_name foo-sample.com;
            root /var/www/html/foo-sample/public;
            location /wiki {
                alias /usr/share/wiki;
                index doku.php;
            }
            location ~ ^/wiki.+\.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index doku.php;
                fastcgi_split_path_info ^/wiki(.+\.php)(.*)$;
                fastcgi_param SCRIPT_FILENAME /usr/share/wiki$fastcgi_script_name;
                include /etc/nginx/fastcgi_params;
            }
        }

    /usr/share/wiki/.htaccess

        ## Enable this to restrict editing to logged in users only
        ## You should disable Indexes and MultiViews either here or in the
        ## global config. Symlinks maybe needed for URL rewriting.
        #Options -Indexes -MultiViews +FollowSymLinks

        ## make sure nobody gets the htaccess files
        <Files ~ "^[\._]ht">
            Order allow,deny
            Deny from all
            Satisfy All
        </Files>

        # Uncomment these rules if you want to have nice URLs using
        # $conf['userewrite'] = 1 - not needed for rewrite mode 2
        # Not all installations will require the following line. If you do,
        # change "/dokuwiki" to the path to your dokuwiki directory relative
        # to your document root.
        # If you enable DokuWikis XML-RPC interface, you should consider to
        # restrict access to it over HTTPS only! Uncomment the following two
        # rules if your server setup allows HTTPS.
        RewriteCond %{HTTPS} !=on
        RewriteRule ^lib/exe/xmlrpc.php$ https://%{SERVER_NAME}%{REQUEST_URI} [L,R=301]

        <IfModule mod_geoip.c>
            GeoIPEnable On
            Order deny,allow
            deny from all
            SetEnvIf GEOIP_COUNTRY_CODE JP AllowCountry
            Allow from .googlebot.com
            Allow from .yahoo.net
            Allow from .msn.com
            Allow from env=AllowCountry
        </IfModule>
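    Since DokuWiki builds links from its own base-URL detection, it may also help (a hedged suggestion, to be checked against the DokuWiki documentation for your version) to pin the base explicitly in conf/local.php so that generated links include the /wiki/ prefix:

        <?php
        // conf/local.php - pin the base URL/dir so stylesheet, image and login links get the /wiki/ prefix
        $conf['baseurl']    = 'http://foo-sample.com';
        $conf['basedir']    = '/wiki/';
        $conf['userewrite'] = 1;   // only if the web server rewrite rules for nice URLs are in place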

    Read the article

  • How to Apache SSL proxy to openerp 7 running in VM?

    - by Johnbritto
    I have installed OpenERP v7 in an Ubuntu 12.04 virtual machine from Launchpad, i.e. server, web, addons. I configured an SSL reverse proxy on the virtual machine, and my configuration for virtual host *:443 is:

        ServerName openerp.mydomain.net
        ServerAdmin openerp@localhost
        SSLEngine on
        SSLCertificateFile /etc/ssl/openerp/server.crt
        SSLCertificateKeyFile /etc/ssl/openerp/server.key
        ProxyRequests Off
        ProxyPreserveHost On
        <Proxy *>
            Order deny,allow
            Allow from all
        </Proxy>
        ProxyVia On
        ProxyPass / http://172.16.150.14:8069/
        ProxyPassReverse / http://172.16.150.14:8069/
        RequestHeader set "X-Forwarded-Proto" "https"
        # Fix IE problem (httpapache proxy dav error 408/409)
        SetEnv proxy-nokeepalive 1
        </VirtualHost>

    On the host, I have configured an Apache reverse proxy for my subdomain in vhost_ssl.conf as:

        SSLEngine On
        SSLProxyEngine On
        ProxyRequests Off
        ProxyPreserveHost On
        <Proxy *>
            Order deny,allow
            Allow from all
        </Proxy>
        ProxyPass / https://172.16.150.14/
        ProxyPassReverse / https://172.16.150.14/
        SetEnv proxy-nokeepalive 1
        <Location />
            Order allow,deny
            Allow from all
        </Location>

    I have set 172.16.150.14 on the netrpc and xmlrcs interfaces in openerp-server.conf. Now, when I access https:// openerp.mydomain.net from the Firefox and Chrome browsers, I get http:// openerp.mydomain.net%2C%20openerp.mydomain.net/?db=testingdb, which gives a 404. But when I access the URL from IE 9, https:// openerp.mydomain.net works OK. Secondly, if I change the parameter list_db = false, then the links work as expected. Kindly let me know what is creating the bottleneck with the URL redirect to http://openerp.mydomain.net, openerp.mydomain.net/?db=testdb on Firefox and Chrome. I am stuck troubleshooting how to get this URL to work.

    Read the article

  • Configure J2EE Agent with OpenAM behind Reverse Proxy

    - by Troy
    I have a reverse proxy with two SSL-enabled NamedVirtualHosts on different ports. The containers on both internal hosts are GF 2.1.1. The proxy configuration is as follows:

        Proxy URL                              -> Internal URL
        https://apps.mydomain.com              -> http://apps.internal.com
        https://secure.otherdomain.com:8080/   -> http://secure.internal.com

    I initially tried configuring the J2EE agent in OpenAM and the web app container to use the internal URLs (I appended /openam and /agentapp respectively). However, I received the following error when trying to access a secured application such as https://apps.mydomain.com/webapp:

        java.lang.RuntimeException: Failed to load configuration: ApplicationSSOTokenProvider.getApplicationSSOToken(): Unable to get Application SSO Token

    A second attempt gives the following error:

        java.lang.NoClassDefFoundError: Could not initialize class com.sun.identity.agents.filter.AmFilterManager

    Along with these in the agent debug.out:

        ERROR: Failed to obtain auth service url from server: null://null:null
        ...
        SiteMonitor: Site URL http://secure.internal.com/openam/namingservice is not available.

    If I specify the server and agent URLs using the proxy URLs, then the agent appears to be working and I am redirected to the OpenAM login page. However, the goto in the URL is http://apps.mydomain.com/webapp instead of https://apps.mydomain.com/webapp (missing https). So after authentication, the redirect fails. Now I could possibly get by with mod_rewrite, but it feels hackish and I really want to know what's going on. Any ideas?

    Read the article

  • ISPConfig 3 SSL automatic rewrite

    - by lol
    I was wondering how you could get apache2 to redirect http://server.com:8080 to https://server.com:8080. I have an ISPConfig 3 setup and the http://server.com:8080 virtual host currently prints a 400 bad request error, given that I've tried adding

        RewriteEngine on
        RewriteCond %{HTTPS} !^on$ [NC]
        RewriteRule . https://%{HTTP_HOST}:8080%{REQUEST_URI} [L]

    to the ispconfig.vhost file (and reloading the conf) with no success.

    --edit!-- I've been playing around with it, and adding an 'always redirect to google' rule into the ispconfig vhost works once you've already started talking SSL to it. This means the non-SSL connections are getting 'bad request' errors before the vhost is loaded... but where...?

    --edit 2!-- Nope, the SSL is handled exclusively by the virtual host - if I turn off the SSL engine then the rewriting works perfectly (but obviously there is no SSL at https://). Thanks!

    Read the article

  • jump to page of a pdf in google docs / drive / apps

    - by Aaron - Solution Evangelist
    I want to jump to a specific page of a PDF file in Google Docs, via either the editor URL https://docs.google.com/file/d/xxx/edit or the embed URL https://docs.google.com/file/d/xxx/preview. I am not looking to use the http://docs.google.com/gview?url= approach referenced in the Stack Overflow question "how to open specific page on Google's docs viewer", as I want to do this for documents where authentication is required and the document is not available via a public URL. Is there some way of appending an anchor (I would have expected it to be https://docs.google.com/file/d/xxx/preview#10) or a query (e.g. https://docs.google.com/file/d/xxx/preview?page=10) to the Google Docs / Drive / Apps viewer?

    Read the article

  • Setting up subdomain to respond on :443 with apache2

    - by compucuke
    I read through some guides on this and I believe it is possible to have apache respond to a subdomain through ssl. I have domain.com responding on 80 and I do not need domain.com responding on 443. Rather, the only use I have for ssl is for the subdomain sub.domain.com. So my site should be http://domain.com http://www.domain.com https://sub.domain.com https://www.sub.domain.com My CNAME records are as follows sub.domain.com xxx.xx.xx.xxx *.sub.domain.com xxx.xx.xx.xxx The A record exists but should not matter for the example. I set up a separate config file in sites-enabled for sub.domain.com NameVirtualHost xxx.xx.xx.xxx:443 <VirtualHost xxx.xx.xx.xxx:443> SSLEngine on SSLStrictSNIVHostCheck on SSLProtocol -ALL +SSLv3 +TLSv1 SSLCipherSuite ALL:!aNULL:!ADH:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:-MEDIUM ServerAlias sub.domain.com DocumentRoot /usr/local/www/ssl/documents/ SSLCertificateFile /root/sub.domain.com.crt SSLCertificateKeyFile /root/sub.domain.com.key Alias /robots.txt /usr/local/www/ssl/documents/robots.txt Alias /favicon.ico /usr/local/www/ssl/documents/favicon.ico Alias /js/libs /usr/local/www/ssl/documents/js/libs Alias /media/ /usr/local/www/documents/media/ Alias /img/ /usr/local/www/ssl/documents/img/ Alias /css/ /usr/local/www/ssl/documents/css/ <Directory /usr/local/www/ssl/documents/> Order allow,deny Allow from all </Directory> WSGIDaemonProcess sub.domain.com processes=2 threads=7 display-name=%{GROUP} WSGIProcessGroup sub.domain.com WSGIScriptAlias / /usr/local/www/wsgi-scripts/script.wsgi <Directory /usr/local/www/wsgi-scripts> Order allow,deny Allow from all </Directory> </VirtualHost> Now, it is important to mention that https://domain.com responds with what I have running from script.wsgi above instead of on https://sub.domain.com. It does not respond to sub.domain.com. checking https://sub.domain.com causes a 105 error. This is a DNS error but I am convinced the DNS does not have a problem with the CNAME records, they just point to my IP. Am I doing something that Apache can not do?

    Read the article

  • Links not using FQDN on Sharepoint Mysite from an external access

    - by Busted Keaton
    Hello all. I've configured external access to some SharePoint applications, including MySites, using AAM and ISA configuration. Everything seems to be working well, but when using the external access (i.e. via https), some links are not working because they use the internal name (http://mysite) instead of the FQDN via https (https://mysite.mydomain.fr*). Any hint or suggestion is welcome. *Yes, I'm French. =)

    EDIT: examples of links that are not working:
    - when clicking on a folder in a library
    - when clicking on "My links", then "My sharepoint sites", and then clicking on one of the links displayed

    Read the article

  • Apache MatchRedirect exception regex

    - by Arash Mousavi
    I want to redirect any URL that is https and doesn't start with "system_" to the same URL over http. For example, this URL:

        https://exsite.tld/some/thing/that/not/start/with/pattern

    should redirect to:

        http://exsite.tld/some/thing/that/not/start/with/pattern

    but this URL:

        https://exsite.tld/system_aas3f4

    shouldn't redirect. I tried:

        RedirectMatch ^/?((?!(system_)).*) http://exsite.tld/$1

    but it doesn't work, and I don't know what the problem is.
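    A hedged alternative, in case the lookahead in RedirectMatch keeps misbehaving: the same exclusion expressed with mod_rewrite, assuming mod_rewrite is enabled and the rules run where the https requests arrive (e.g. the SSL virtual host):

        RewriteEngine On
        RewriteCond %{HTTPS} =on
        RewriteCond %{REQUEST_URI} !^/system_
        RewriteRule ^/?(.*)$ http://exsite.tld/$1 [R=301,L]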

    Read the article

  • Is there a way to use something similar to a capture group for apache2 server name

    - by Zipper
    I have a server that sits behind an AWS load balancer. The LB can't do an automatic redirect from HTTP to HTTPS, and the LB is doing my SSL. So I need to set up Apache on my servers to redirect any request on port 80 to https://FOOBAR, where FOOBAR is the domain that came in. I haven't been able to find a way of doing that so far. I'm an Apache newb though. What I'm trying to do is something similar to this (I'll use regex as an example):

        <VirtualHost *:80>
            ServerName (.*)
            Redirect / https://\1
        </VirtualHost>

    If there's a better way to do this, please let me know.

    EDIT: Sorry, I should have explained why this is happening. I actually have a Tomcat server running my app on port 8080, and the LB points to that. From what I can tell so far, my requests come in on http (which is expected), but when my app server sends redirects (for login purposes) it tries to redirect to http instead of https. I haven't had a chance to fully investigate this, but I wanted to work around it for now by pointing the LB at the Apache server and having any port 80 requests redirect to 443.

    EDIT2: The other reason I'm interested in doing this is that since the LB can't do the redirect, I need another redirect mechanism in place to tell the browser to go to https://FOOBAR
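    A hedged sketch of the usual pattern behind an AWS ELB: rather than capturing the host in ServerName, use a catch-all :80 virtual host and let %{HTTP_HOST} carry the incoming domain. The X-Forwarded-Proto condition matters if both http and https traffic reach Apache through the balancer; if only port 80 is forwarded to Apache, an unconditional rewrite does the same job. The ServerName below is a placeholder.

        <VirtualHost *:80>
            ServerName redirect.placeholder.local
            ServerAlias *
            RewriteEngine On
            RewriteCond %{HTTP:X-Forwarded-Proto} !https
            RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
        </VirtualHost>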

    Read the article

  • Need help making site available externally

    - by White Island
    I'm trying to open a hole in the firewall (ASA 5505, v8.2) to allow external access to a Web application. Via ASDM (6.3?), I've added the server as a Public Server, which creates a static NAT entry [I'm using the public IP that is assigned to 'dynamic NAT--outgoing' for the LAN, after confirming on the Cisco forums that it wouldn't bring everyone's access crashing down] and an incoming rule "any... public_ip... https... allow" but traffic is still not getting through. When I look at the log viewer, it says it's denied by access-group outside_access_in, implicit rule, which is "any any ip deny" I haven't had much experience with Cisco management. I can't see what I'm missing to allow this connection through, and I'm wondering if there's anything else special I have to add. I tried adding a rule (several variations) within that access-group to allow https to the server, but it never made a difference. Maybe I haven't found the right combination? :P I also made sure the Windows firewall is open on port 443, although I'm pretty sure the current problem is Cisco, because of the logs. :) Any ideas? If you need more information, please let me know. Thanks Edit: First of all, I had this backward. (Sorry) Traffic is being blocked by access-group "inside_access_out" which is what confused me in the first place. I guess I confused myself again in the midst of typing the question. Here, I believe, is the pertinent information. Please let me know what you see wrong. access-list acl_in extended permit tcp any host PUBLIC_IP eq https access-list acl_in extended permit icmp CS_WAN_IPs 255.255.255.240 any access-list acl_in remark Allow Vendor connections to LAN access-list acl_in extended permit tcp host Vendor any object-group RemoteDesktop access-list acl_in remark NetworkScanner scan-to-email incoming (from smtp.mail.microsoftonline.com to PCs) access-list acl_in extended permit object-group TCPUDP any object-group Scan-to-email host NetworkScanner object-group Scan-to-email access-list acl_out extended permit icmp any any access-list acl_out extended permit tcp any any access-list acl_out extended permit udp any any access-list SSLVPNSplitTunnel standard permit LAN_Subnet 255.255.255.0 access-list nonat extended permit ip VPN_Subnet 255.255.255.0 LAN_Subnet 255.255.255.0 access-list nonat extended permit ip LAN_Subnet 255.255.255.0 VPN_Subnet 255.255.255.0 access-list inside_access_out remark NetworkScanner Scan-to-email outgoing (from scanner to Internet) access-list inside_access_out extended permit object-group TCPUDP host NetworkScanner object-group Scan-to-email any object-group Scan-to-email access-list inside_access_out extended permit tcp any interface outside eq https static (inside,outside) PUBLIC_IP LOCAL_IP[server object] netmask 255.255.255.255 I wasn't sure if I needed to reverse that "static" entry, since I got my question mixed up... and also with that last access-list entry, I tried interface inside and outside - neither proved successful... and I wasn't sure about whether it should be www, since the site is running on https. I assumed it should only be https.

    Read the article

  • phpmyadmin login redirect fails with custom ssl port

    - by baraboom
    The server is running Ubuntu 10.10, Apache 2.2.16, PHP 5.3.3-1ubuntu9.3, phpMyAdmin 3.3.7deb5build0.10.10.1. Since this same server is also running Zimbra on port 443, I've configured apache to serve SSL on port 81. So far, I have one CMS script running on this virtual host successfully. However, when I access /phpmyadmin (set up with the default alias) on my custom ssl port and submit the login form, I am redirected to http://vhost.domain.com:81/index.php?TOKEN=foo (note the http:// instead of the https:// that the login url was using). This generates an Error 400 Bad Request complaining about "speaking plain HTTP to an SSL-enabled server port." I can then manually change the http:// to https:// in the URL and use phpmyadmin as expected. I was annoyed enough to spend an hour trying to fix it and now even more annoyed that I cannot figure it out. I've tried various things, including: Adding $cfg['PmaAbsoluteUri'] = 'https://vhost.domain.com:81/phpmyadmin/'; to the /usr/share/phpmyadmin/config.inc.php file but this did not correct the problem (even though /usr/share/phpmyadmin/libraries/auth/cookie.auth.lib.php looks like it should honor it and use it as the redirect). Adding $cfg['ForceSSL'] = 1; to the same config.inc.php but then apache spirals into an infinite redirect. Adding a rewrite rule to the vhost-ssl conf file in apache but I was unable to figure out the condition to use when http:// was present along with the correct ssl port of :81. Lots of googling. Here are the relevant Apache configuration pieces: /etc/apache2/ports.conf <IfModule mod_ssl.c> NameVirtualHost *:81 Listen 81 </IfModule> /etc/apache2/sites-enabled/vhost-nonssl <VirtualHost *:80> ServerAdmin webmaster@localhost ServerName vhost.domain.com DocumentRoot /home/xxx/sites/vhost/html RewriteEngine On RewriteCond %{HTTPS} off RewriteRule (.*) https://%{HTTP_HOST}:81%{REQUEST_URI} </Virtualhost> /etc/apache2/sites-enabled/vhost-ssl <VirtualHost *:81> ServerAdmin webmaster@localhost ServerName vhost.domain.com DocumentRoot /home/xxx/sites/vhost/html <Directory /> Options FollowSymLinks AllowOverride None AuthType Basic AuthName "Restricted Vhost" AuthUserFile /home/xxx/sites/vhost/.users Require valid-user </Directory> <Directory /home/xxx/sites/vhost/html/> Options -Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> </VirtualHost> /etc/apache2/conf.d/phpmyadmin.conf Alias /phpmyadmin /usr/share/phpmyadmin (The rest of the default .conf truncated.) Everything in the apache config seems to work ok - the rewrite from non-ssl to ssl, the http authentication, the problem only happens when I am submitting the login form for phpmyadmin from https://vhost.domain.com:81/index.php. Other configs: The phpmyadmin config is completely default and the php.ini has only had some minor changes to memory and timeout limits. These seem to work fine, as mentioned, another php script runs with no problem and phpmyadmin works great once I manually enter in the correct schema after login. I'm looking for either a bandaid I can add to save me the trouble of manually entering in the https:// after login, a real fix that will make phpmyadmin behave as I think it should or some greater understanding of why my desired config is not possible.

    Read the article

  • 502 Bad Gateway with nginx + apache + subversion + ssl (SVN COPY)

    - by theplatz
    I've asked this on stackoverflow, but it may be better suited for serverfault... I'm having a problem running Apache + Subversion with SSL behind an Nginx proxy and I'm hoping someone might have the answer. I've scoured google for hours looking for the answer to my problem and can't seem to figure it out. What I'm seeing are "502 (Bad Gateway)" errors when trying to MOVE or COPY using subversion; however, checkouts and commits work fine. Here are the relevant parts (I think) of the nginx and apache config files in question:
    Nginx
        upstream subversion_hosts {
            server 127.0.0.1:80;
        }
        server {
            listen x.x.x.x:80;
            server_name hostname;
            access_log /srv/log/nginx/http.access_log main;
            error_log /srv/log/nginx/http.error_log info;
            # redirect all requests to https
            rewrite ^/(.*)$ https://hostname/$1 redirect;
        }
        # HTTPS server
        server {
            listen x.x.x.x:443;
            server_name hostname;
            passenger_enabled on;
            root /path/to/rails/root;
            access_log /srv/log/nginx/ssl.access_log main;
            error_log /srv/log/nginx/ssl.error_log info;
            ssl on;
            ssl_certificate server.crt;
            ssl_certificate_key server.key;
            add_header Front-End-Https on;
            location /svn {
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                set $fixed_destination $http_destination;
                if ( $http_destination ~* ^https(.*)$ ) {
                    set $fixed_destination http$1;
                }
                proxy_set_header Destination $fixed_destination;
                proxy_pass http://subversion_hosts;
            }
        }
    Apache
        Listen 127.0.0.1:80
        <VirtualHost *:80>
            # in order to support COPY and MOVE, etc - over https (443),
            # ServerName _must_ be the same as the nginx servername
            # http://trac.edgewall.org/wiki/TracNginxRecipe
            ServerName hostname
            UseCanonicalName on
            <Location /svn>
                DAV svn
                SVNParentPath "/srv/svn"
                Order deny,allow
                Deny from all
                Satisfy any
                # Some config omitted ...
            </Location>
            ErrorLog /var/log/apache2/subversion_error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/subversion_access.log combined
        </VirtualHost>
    From what I could tell while researching this problem, the server name has to match on both the apache server as well as the nginx server, which I've done. Additionally, this problem seems to stick around even if I change the configuration to use http only.
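    Since checkouts and commits work and only COPY/MOVE fail, one way to narrow down which hop returns the 502 is to send a bare WebDAV COPY through the nginx front end and compare it with the same request sent straight to the Apache back end. This is only a sketch - the repository path, branch name, and credentials are placeholders, not values from the question.
        # sketch: exercise a WebDAV COPY through the proxy and watch the status code
        curl -k -i -u user:password -X COPY \
             -H "Destination: https://hostname/svn/repo/branches/copy-test" \
             https://hostname/svn/repo/trunk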

    Read the article

  • Apache NameVirtualHost on port 443 ignores ServerAlias

    - by Ryan
    I've got a name-based virtual host setup on port 443 such that requests on host 'apple.fruitdomain' are proxied to the apple-app and requests on host 'orange.fruitdomain' are proxied to orange-app. This is working, but I'd like to add a ServerAlias for each such that requests on host 'apple' are proxied to apple-app and requests on host 'orange' are proxied to the orange-app. If I simply add a ServerAlias directive to the virtual host it doesn't work. ssl.conf below:
        Listen 443
        NameVirtualHost *:443
        <VirtualHost *:443>
            ServerName apple.fruitdomain
            ServerAlias apple
            SSLProxyEngine on
            ProxyPass /apple-app https://localhost:8181/apple-app
            ProxyPassReverse /apple-app https://localhost:8181/apple-app
            ...
        </VirtualHost>
        <VirtualHost *:443>
            ServerName orange.fruitdomain
            ServerAlias orange
            SSLProxyEngine on
            ProxyPass /orange-app https://localhost:8181/orange-app
            ProxyPassReverse /orange-app https://localhost:8181/orange-app
            ...
        </VirtualHost>
    Interestingly, if I do a similar setup but with port 80 then the ServerAlias works...
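    Two quick checks that may make the behaviour easier to see (commands only as a sketch; -k is there because the certificate will not match the bare alias name):
        # dump how Apache parsed the name-based *:443 vhosts, including aliases
        apachectl -S
        # request the app through the bare alias and watch which vhost answers
        curl -vk https://apple/apple-app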

    Read the article

  • Links not using FQDN on Sharepoint Mysite from an external access

    - by user30934
    I've configured external access to some SharePoint applications, including MySites, using AAM and ISA configuration. Everything seems to be working well, but when using the external access (i.e. via https), some links do not work because they use the internal name (http://mysite) instead of the FQDN via https (https://mysite.mydomain.fr*). Any hints or suggestions are welcome. *Yes, I'm French. =)
    EDIT: examples of links that are not working:
    - when clicking on a folder in a library
    - when clicking on "My links", then "My SharePoint sites", and then clicking on one of the links displayed

    Read the article

  • SVN checkout returns 400 error

    - by eboix
    I'm trying to download the http://code.opencv.org/svn/opencv/trunk/ repository of all of the OpenCV source code, as specified in an OpenCV installation tutorial. In the tutorial, the repository https://code.ros.org/svn/opencv/trunk/ is used, but they moved it to http://code.opencv.org/svn/opencv/trunk/, and now you need a password to access the code.ros.org repository. Anyway, I'm using TortoiseSVN to download the SVN repository (I get the same error with http://sourceforge.net/projects/win32svn/). I get this:
        Checkout from http://code.opencv.org/svn/opencv/trunk, revision HEAD, Fully recursive, Externals included
        Server sent unexpected return value (400 Bad request. Method Unknown) in response to REPORT request for '/svn/opencv/!svn/vcc/default'
    On the TortoiseSVN site I found something about this 400 error: "You're behind a firewall which blocks DAV requests. Most firewalls do that. Either ask your Administrator to change the firewall, or access the repository with https:// instead of http:// like in https://svn.collab.net/repos/svn/. That way you connect to the repository with SSL encryption, which firewalls can't interfere with (if they don't block the SSL port completely). Also some virus scanners (i.e. Kapersky) are known to interfere and cause this error."
    The code.ros.org repository is https://, so I would be able to access it, but I need a password, so I can't. I made an account on ros.org, but it seems that I still need a password (which I don't know) to access the code repository. My username-password combination does not work. I unblocked all of the TortoiseSVN programs in my firewall settings. Nothing changed. I temporarily stopped my firewall to see if it was interfering with my request. I got the same error. How can I do an svn checkout of http://code.opencv.org/svn/opencv/trunk/opencv/ so that I don't get this error? Is there any way to make it https://? Any help would be appreciated!
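    If it helps to test outside of TortoiseSVN, the same checkout and a bare WebDAV request can be tried from a command line; an intermediate proxy or scanner that strips DAV methods such as REPORT usually shows up here as well. The commands below are only a sketch of that check.
        # command-line equivalent of the TortoiseSVN checkout
        svn checkout http://code.opencv.org/svn/opencv/trunk/opencv opencv
        # see whether plain WebDAV requests make it through at all
        curl -i -X OPTIONS http://code.opencv.org/svn/opencv/trunk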

    Read the article

  • RewriteRule Works With "Match Everything" Pattern But Not Directory Pattern

    - by kgrote
    I'm trying to redirect newsletter URLs from my local server to an Amazon S3 bucket. So I want to redirect from:
        https://mysite.com/assets/img/newsletter/Jan12_Newsletter.html
    to:
        https://s3.amazonaws.com/mybucket/newsletters/legacy/Jan12_Newsletter.html
    Here's the first part of my rule:
        RewriteEngine On
        RewriteBase /
        # Is it in the newsletters directory
        RewriteCond %{REQUEST_URI} ^(/assets/img/newsletter/)(.+) [NC]
        # Is not a 2008-2011 newsletter
        RewriteCond %{REQUEST_URI} !(.+)(11|10|09|08)_Newsletter.html$ [NC]
        ## -> RewriteRule to S3 Here <- ##
    If I use this RewriteRule to point to the new subdirectory on S3 it will NOT redirect:
        RewriteRule ^(/assets/img/newsletter/)(.+) https://s3.amazonaws.com/mybucket/newsletters/legacy/$2 [R=301,L]
    However, if I use a blanket expression to capture the entire file path it WILL redirect:
        RewriteRule ^(.*)$ https://s3.amazonaws.com/mybucket/newsletters/legacy/$1 [R=301,L]
    Why does it only work with a "match everything" expression but not a more specific expression?
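    One detail that may be relevant: RewriteBase / suggests these rules live in per-directory (.htaccess) context, and in that context mod_rewrite strips the per-directory prefix, including the leading slash, before testing the RewriteRule pattern, so ^(.*)$ still matches while ^(/assets/img/newsletter/)(.+) never can. Below is a sketch of the same rule written without the leading slash, assuming the rules really are in an .htaccess file at the document root; it illustrates that behaviour rather than being a verified fix.
        RewriteEngine On
        RewriteBase /
        # skip the 2008-2011 newsletters
        RewriteCond %{REQUEST_URI} !(.+)(11|10|09|08)_Newsletter\.html$ [NC]
        # per-directory context: the pattern is matched without the leading slash
        RewriteRule ^assets/img/newsletter/(.+)$ https://s3.amazonaws.com/mybucket/newsletters/legacy/$1 [R=301,L,NC]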

    Read the article

< Previous Page | 33 34 35 36 37 38 39 40 41 42 43 44  | Next Page >