Search Results

Search found 55652 results on 2227 pages for 'http response'.


  • SQL Saturday #220 - Atlanta - Pre-Con Scholarship Winners!

    - by Most Valuable Yak (Rob Volk)
    A few weeks ago, AtlantaMDF offered scholarships for each of our upcoming Pre-conference sessions at SQL Saturday #220. We would like to congratulate the winners!

        David Thomas - SQL Server Security - http://sqlsecurity.eventbrite.com/
        Vince Bible - Surfing the Multicore Wave: Processors, Parallelism, and Performance - http://surfmulticore.eventbrite.com/
        Mostafa Maged - Languages of BI - http://languagesofbi.eventbrite.com/
        Daphne Adams - Practical Self-Service BI with PowerPivot for Excel - http://selfservicebi.eventbrite.com/
        Tim Lawrence - The DBA Skills Upgrade Toolkit - http://dbatoolkit.eventbrite.com/

    Thanks to everyone who applied! Once again we must thank Idera for its generous sponsorship, and Bobby Dimmick (w|t) and Brian Kelley (w|t) of Midlands PASS for the time and effort they put into judging all the applicants. Don't forget, there's still time to attend the Pre-Cons on May 17, 2013! Click on the EventBrite links for more details and to register!

    Read the article

  • MySQL Connector/Net 6.8.0 alpha has been released

    - by Roberto Garcia
    Dear MySQL users, MySQL Connector/Net 6.8.0, a new version of the all-managed .NET driver for MySQL, has been released. This is an alpha release for 6.8.x and is not recommended for production environments. It is appropriate for use with MySQL server versions 5.0-5.6. It is now available in source and binary form from http://dev.mysql.com/downloads/connector/net/#downloads and mirror sites (note that not all mirror sites may be up to date at this point - if you can't find this version on some mirror, please try again later or choose another download site). The 6.8.0 version of MySQL Connector/Net has support for Entity Framework 6.0, including:

        - Async Query and Save
        - Code-Based Configuration
        - Dependency Resolution
        - DbSet.AddRange/RemoveRange
        - Code First Mapping to Insert/Update/Delete Stored Procedures
        - Configurable Migrations History Table
        - DbContext can now be created with a DbConnection that is already opened
        - Custom Code First Conventions

    The release is available to download at http://dev.mysql.com/downloads/connector/net/#downloads

    Documentation
    -------------------------------------
    You can view current Connector/Net documentation at http://dev.mysql.com/doc/refman/5.6/en/connector-net.html You can find our team blog at http://blogs.oracle.com/MySQLOnWindows and you can post questions on our forums at http://forums.mysql.com/ Enjoy, and thanks for the support! Connector/NET Team
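    As a quick illustration of the new async support, the Entity Framework 6 pattern looks roughly like this (a minimal C# sketch - the Product entity, MyDbContext class and "MyConnection" connection string are hypothetical, not part of the release notes):

        using System.Data.Entity;          // EF6: DbContext, DbSet, async extensions
        using System.Threading.Tasks;

        public class Product
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class MyDbContext : DbContext
        {
            // assumes a "MyConnection" connection string configured for the MySQL provider
            public MyDbContext() : base("MyConnection") { }
            public DbSet<Product> Products { get; set; }
        }

        public static class Demo
        {
            public static async Task RunAsync()
            {
                using (var db = new MyDbContext())
                {
                    db.Products.Add(new Product { Name = "Widget" });
                    await db.SaveChangesAsync();               // async save
                    var all = await db.Products.ToListAsync(); // async query
                }
            }
        }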

    Read the article

  • Build Controller status Unavailable issue in TFS2010

    - by jehan
    I ran into this problem a few days back: I was not able to run builds because the Build Controller was showing a status of Unavailable. It was showing the exception below: There was no endpoint listening at http://fullmachinename:9191/Build/v3.0/Services/Controller/2 that could accept the message. This is often caused by an incorrect address or SOAP action. See InnerException, if present, for more details. After trying a few things, I looked at the Build Service properties and made the following modifications:

        1) Changed the local build service endpoint (incoming) from http://machinename.domain.com:9191 to http://machinename:9191
        2) Changed the team project collection connection (outgoing) from localhost to the machine name: http://localhost:8080/tfs/defaultCollection to http://machinename:8080/tfs/DefaultCollection

    After that I started the Build Service, which fixed the issue: the Build Controller showed an Available status and I was able to run builds.

    Read the article

  • redirect non-www to www while preserving protocol

    - by Waleed Hamra
    I am aware that there are tons of questions in this section and on Server Fault dealing with redirections from non-www to www URLs, but I couldn't find one dealing with this issue while preserving the protocol. I am no mod_rewrite expert, and my code is just copy/pasted... here's what I have:

        RewriteCond %{HTTP_HOST} ^domain.tld$ [NC]
        RewriteRule ^(.*)$ http://www.domain.tld$1 [R=301,L]

    So now both http://domain.tld and https://domain.tld are forwarded to http://www.domain.tld. How do I make it so that https stays on https while http stays on http?
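    One common approach (a sketch, assuming mod_rewrite and the same rule location as the original) is to split the rule in two and key off the %{HTTPS} variable:

        # plain-http requests stay on http
        RewriteCond %{HTTP_HOST} ^domain\.tld$ [NC]
        RewriteCond %{HTTPS} off
        RewriteRule ^(.*)$ http://www.domain.tld$1 [R=301,L]

        # https requests stay on https
        RewriteCond %{HTTP_HOST} ^domain\.tld$ [NC]
        RewriteCond %{HTTPS} on
        RewriteRule ^(.*)$ https://www.domain.tld$1 [R=301,L]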

    Read the article

  • Clickworthy tweets, the sequel…

    - by Chris Williams
    Twitter moves fast, and if you don't stay on top of it, you can miss a lot. I don't follow a ton of people, but I combine it with topic searches. Here are a few things I've found that are worth your time and attention, especially if you're into video games, whether development or playing:

        The 15 Greatest Sci-Fi/Horror Games for the Commodore 64 - http://moe.vg/bovATG (via @jlist)
        Practical Tactics for Dealing with Haters! - http://www.fourhourworkweek.com/blog/2010/05/18/tim-ferriss-scam-practical-tactics-for-dealing-with-haters/ (via @The_Zman)
        Assassin's Creed 2 + $10 Video Game Credit + $5 MP3 Credit - $24.99 on Amazon.com - http://amzn.to/bvRI9h (via @Assassin10k)
        Make Small Good - a design article about not trying to compete with ginormous AAA multimillion-dollar titles - http://www.gamasutra.com/blogs/AlexanderBrandon/20100518/5067/Make_Small_Good.php (via @Kei_tchan) (CW: Excellent article, I do this a lot in my roguelike games!)
        Purposes for Randomization in Game Design - http://bit.ly/cAH7PG (via @gamasutra)

    Read the article

  • Filezilla Install Problem: Hash Sum Mismatch

    - by kyleskool
    I'm new to the Ubuntu scene. I tried to install FileZilla today by opening a terminal and typing "sudo apt-get install filezilla", and got this error:

        Failed to fetch http://us.archive.ubuntu.com/ubuntu/pool/universe/w/wxwidgets2.8/libwxbase2.8-0_2.8.12.1-6ubuntu2_amd64.deb Hash Sum mismatch
        Failed to fetch http://us.archive.ubuntu.com/ubuntu/pool/universe/w/wxwidgets2.8/libwxgtk2.8-0_2.8.12.1-6ubuntu2_amd64.deb Hash Sum mismatch
        Failed to fetch http://universe/t/tinyxml/libtinyxml2.6.2_2.6.2-1build1_amd64.deb Hash Sum mismatch
        Failed to fetch http://universe/f/filezilla/filezilla-common_3.5.3-1ubuntu2_all.deb Hash Sum mismatch
        Failed to fetch http://universe/f/filezilla/filezilla_3.5.3-1ubuntu2_amd64.deb Hash Sum mismatch
        E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

    Running it again with "--fix-missing" appended to the command didn't work, nor did running apt-get update. Any suggestions? Thanks!
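    A common remedy for Hash Sum mismatch errors is to clear apt's cached package lists and partial downloads and then retry (a sketch - this is harmless to run, though it won't help if the mirror itself is serving bad files):

        sudo rm -rf /var/lib/apt/lists/*
        sudo apt-get clean
        sudo apt-get update
        sudo apt-get install filezilla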

    Read the article

  • Embed Unity3D and load multiple games from a single app

    - by Rafael Steil
    Is it possible to export an entire Unity3D project/game as an AssetBundle and load it on iOS/Android/Windows in an app that doesn't know anything about that game beforehand? What I have in mind is something like what the web plugin does - it loads a series of .unity3d files over HTTP and renders them inline in the browser window. Is it even possible to do something similar for iOS/Android? I have read a lot of docs so far, but still can't be sure:

        http://floored.com/blog/2013/integrating-unity3d-within-ios-native-application.html
        http://docs.unity3d.com/Manual/LoadingResourcesatRuntime.html
        http://docs.unity3d.com/Manual/AssetBundlesIntro.html

    The code from the post at http://forum.unity3d.com/threads/112703-Override-Unity-Data-folder-path?p=749108&viewfull=1#post749108 works for Android, but what about iOS and other platforms?
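    For reference, runtime bundle loading looks roughly like this (a C# sketch against the Unity API in the docs above; the URL and asset name are placeholders, and the exact call names vary by Unity version - Load in 4.x, LoadAsset in 5+):

        using UnityEngine;
        using System.Collections;

        public class BundleLoader : MonoBehaviour
        {
            IEnumerator Start()
            {
                // download the bundle over http (placeholder URL), caching by version number
                WWW www = WWW.LoadFromCacheOrDownload("http://example.com/scenes.unity3d", 1);
                yield return www;

                AssetBundle bundle = www.assetBundle;
                // load a prefab that was packed into the bundle (placeholder name)
                Object prefab = bundle.Load("MainMenu");
                Instantiate(prefab);
                bundle.Unload(false);
            }
        }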

    Read the article

  • Cookbook: SES and UCM setup

    - by George Maggessy
    The purpose of this post is to guide you through setting up the integration between UCM and SES. In my next post I'll show different approaches to integrating WebCenter Portal, UCM and SES based on some common scenarios. Let's get started.

    WebCenter Content Configuration

    WebCenter Content has a component that adds functionality to the content server to allow it to be searched via Oracle SES. To enable the component installation, go to Administration -> Admin Server and select SESCrawlerExport. Click the update button and restart the UCM_server1 managed server. Once the managed server is back, we'll configure the component. In the menu, under Administration, you should see SESCrawlerExport. Click on the link, then click on Configure SESCrawlerExport and set the values below:

        Hostname: SES hostname.
        Feed Location: Directory where data feeds will be saved.
        Metadata List: List of metadata that will be searchable by SES.

    After updating the values click on the Update button. Come back to the SESCrawlerExport Administration UI and click on the Take Snapshot button. It will create the data feeds in the specified Feed Location. To check that the configuration is correct, access the following URL: http://<ucm_server>:<port>/cs/idcplg?IdcService=SES_CRAWLER_DOWNLOAD_CONFIG&source=default. It should download a config file in the format below:

        <?xml version="1.0" encoding="UTF-8"?>
        <rsscrawler xmlns="http://xmlns.oracle.com/search/rsscrawlerconfig">
          <feedLocation><![CDATA[http://adc6160699.us.oracle.com:16200/cs/idcplg?IdcService=SES_CRAWLER_DOWNLOAD_CONTROL&source=default]]></feedLocation>
          <errorFileLocation><![CDATA[http://adc6160699.us.oracle.com:16200/cs/idcplg?IdcService=SES_CRAWLER_STATUS&IsJava=1&source=default&StatusFeed=]]></errorFileLocation>
          <feedType>controlFeed</feedType>
          <sourceName>default</sourceName>
          <securityType>attributeBased</securityType>
          <securityAttribute name="Account" grant="true"/>
          <securityAttribute name="DocSecurityGroup" grant="true"/>
          <securityAttribute name="Collab" grant="true"/>
        </rsscrawler>

    Make sure the Account and DocSecurityGroup values are true.

    SES Configuration

    Let's start by configuring the Identity Plug-ins in SES. Go to Global Settings -> System -> Identity Management Setup. Select Oracle Content Server, click the Activate button, and populate the following values:

        HTTP endpoint for authentication: URL to WebCenter Content. Notice that /cs/idcplg was added at the end of the URL.
        Admin User: UCM admin user. This user must have access to all CPOE content.
        Password: Password for the admin user.
        Authentication Type: NATIVE.

    Go back to the Home tab and click on Sources on the top left. Select Oracle Content Server on the right, click the Create button, and fill in:

        Configuration URL: URL that points to the configuration file, e.g. http://<ucm_hostname>:<port>/cs/idcplg?IdcService=SES_CRAWLER_DOWNLOAD_CONFIG&source=default
        User ID: UCM admin user.
        Password: Password for the admin user.

    Click on the Authorization tab and add the appropriate values to the fields below. Make sure you see the ACCOUNT and DOCSECURITYGROUP security attributes at the end of the page.

        HTTP endpoint for authorization: http://<ucm_hostname>:<port>/cs/idcplg
        Display URL prefix: http://<ucm_hostname>:<port>/cs
        Administrator user: UCM admin user.
        Administrator password.

    On the Document Types tab, add the documents that should be indexed by SES. As our last step, we'll configure the Federation Trusted Entities under Global Settings:

        Entity Name: The user must be present in both the identity management server configured for your WebCenter application and the identity management server configured for Oracle SES. For instance, I used weblogic in my sample.
        Password: Entity user password.

    Now you are ready to test the integration in the SES UI: http://<ses_hostname>:<port>/search/query/
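    A quick way to sanity-check the crawler endpoint from a shell (hostname and port are placeholders for your environment):

        curl -o ses_config.xml "http://<ucm_server>:<port>/cs/idcplg?IdcService=SES_CRAWLER_DOWNLOAD_CONFIG&source=default"
        grep securityAttribute ses_config.xml   # expect Account and DocSecurityGroup with grant="true"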

    Read the article

  • Move subdomain into subdirectory SEO question

    - by JMC
    I have read this article: http://www.mattcutts.com/blog/subdomains-and-subdirectories/ But I'm not 100% clear whether moving my subdomain website into a subdirectory on the main domain would change anything related to SEO. I inherited this structure:

        Informational site related to our specific industry lives at: http://website.com
        Storefront where we sell products related to our industry lives at: http://store.website.com

    The informational site gives a lot of good information on how to use the products we sell. The storefront is primarily used for the e-commerce function of selling the products, but there is a lot of product-specific info on that site as well.

    Question: Is our main domain http://website.com getting PageRank credit for the product info contained at http://store.website.com? Would there be a benefit to changing the structure?

    Read the article

  • apt-get not working

    - by Dave Daniels
    Everything I try with apt-get fails. I am installing Ubuntu Server for the first time; it is version 12.04 LTS. When I run apt-get update I get failed to fetch http://gb.whatever goes here...... If I run apt-get install build-essential I get "unable to locate package build-essential". I have looked at sources.list but do not know what should and shouldn't be in there. This is the current content of sources.list:

        # See help.ubuntu.com/community/UpgradeNotes for how to upgrade to
        # newer versions of the distribution.
        deb http://gb.archive.ubuntu.com/ubuntu precise main restricted
        deb-src http://gb.archive.ubuntu.com/ubuntu precise main restricted

        ## Major bug fix updates produced after the final release of the
        ## distribution.
        deb http://gb.archive.ubuntu.com/ubuntu precise-updates main restricted
        deb-src http://gb.archive.ubuntu.com/ubuntu precise-updates main restricted
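    If the GB mirror itself is the problem, one thing worth trying (a sketch - it simply points the existing entries at the main Ubuntu archive instead of the gb mirror):

        sudo sed -i 's|http://gb.archive.ubuntu.com|http://archive.ubuntu.com|g' /etc/apt/sources.list
        sudo apt-get update
        sudo apt-get install build-essential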

    Read the article

  • VirtualHost reverse proxy works locally, but not from client

    - by Yep
    Setup: two webservers pointed to 127.0.0.1:8080 and :8081. Curl validates that they work as expected. Apache has the following virtual hosts:

        NameVirtualHost 192.168.1.1:80
        <VirtualHost 192.168.1.1:80>
            ServerAdmin [email protected]
            ProxyPass / http://127.0.0.1:8080/
            ProxyPassReverse / http://127.0.0.1:8080/
            ServerName 192.168.1.1
            ServerAlias http://192.168.1.1
        </VirtualHost>

        NameVirtualHost 192.168.1.2:80
        <VirtualHost 192.168.1.2:80>
            ServerAdmin [email protected]
            ProxyPass / http://127.0.0.1:8081/
            ProxyPassReverse / http://127.0.0.1:8081/
            ServerName 192.168.1.2
            ServerAlias http://192.168.1.2
        </VirtualHost>

    On the server I can curl the virtual hosts and receive appropriate responses (curl 192.168.1.1 gives me the webserver's response from localhost:8080, etc.). Remote hosts, however, cannot connect to 192.168.1.1 or .2 at all. What am I missing?

    Re: comments - Yes, the default directory directive is still in place:

        # Deny access to root file system
        <Directory />
            Options None
            AllowOverride None
            Order Deny,Allow
            deny from all
        </Directory>

    No Apache logs are generated when trying to reach 192.168.1.1 remotely. They do get generated when I curl from local. If I point the webservers to *:8080 and *:8081 instead of binding to localhost, I can access them from a remote host via 192.168.1.1 and 192.168.1.2 if I specify the 8080 and 8081 ports (both ports work on both IPs, which is what I'm trying to avoid by having the Apache reverse proxy bind to 80 on each interface).

    Edit 2: curl verbose output (similar for the second webserver, and for 127.0.0.1:portnum):

        [user@host mingle_12_2_1]$ curl -v 192.168.1.1
        * About to connect() to 192.168.1.1 port 80
        *   Trying 192.168.1.1... connected
        * Connected to 192.168.1.1 (192.168.1.1) port 80
        > GET / HTTP/1.1
        > User-Agent: curl/7.15.5 (x86_64-redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5
        > Host: 192.168.1.1
        > Accept: */*
        >
        < HTTP/1.1 302 Found
        < Date: Tue, 16 Oct 2012 16:22:08 GMT
        < Server: Jetty(6.1.19)
        < Cache-Control: no-cache
        < Location: http://192.168.1.1/install
        < X-Runtime: 130
        < Content-Type: text/html; charset=utf-8
        < Content-Length: 94
        < Connection: close
        Closing connection #0
        <html><body>You are being <a href="http://192.168.1.1/install">redirected</a>.</body></html>

    Log from the local request:

        192.168.1.1 - - [16/Oct/2012:12:22:08 -0400] "GET / HTTP/1.1" 302 94

    No Apache access log or error log entries are generated for requests from remote clients.
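    Since nothing from remote clients ever reaches Apache's logs, the usual suspects are the Listen directive and a host firewall. A quick diagnostic sketch (assuming a RHEL-style box, as the curl banner suggests):

        # is httpd bound to the external IPs, or only to localhost?
        netstat -tlnp | grep :80
        # httpd.conf should contain something like:
        #   Listen 192.168.1.1:80
        #   Listen 192.168.1.2:80
        # and iptables may be dropping port 80 from outside:
        iptables -L -n | grep 80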

    Read the article

  • Google still has record of my old site URL - what to do?

    - by Mayeenul Islam
    I had a blog site, i.e. http://example2.com. Then I bought a new domain, i.e. http://example.com, and 301 (permanent) redirected example2.com to example.com. But in Google Webmaster Tools, when I get a 404, click into the link, and look at the "Linked from" tab, it shows links like:

        http://example.com/post-1
        http://example2.com/feed
        http://example2.com/post-1

    According to Google, if you change your domain you should keep the redirection in place for at least 4-6 months, and that period has almost passed. So why does Google still have traces of my old site? The issue matters because I don't want to pay for the old domain anymore. I tried deleting my existing sitemap.xml and recreating it from the new site, but such links are still stored. What can I do?

    Read the article

  • Live Webcasts of the Transit of Venus

    - by TATWORTH
    Space.com has published a list of webcams for the Transit of Venus at http://www.space.com/14568-venus-transits-sun-2012-skywatching.html

    Live Webcasts Around the World. Here is a list of observatories and organizations providing live webcasts on June 5 of the Venus transit of 2012:

        NASA webcast from Mauna Kea, Hawaii: http://venustransit.nasa.gov/2012/transit/webcast.php
        Exploratorium (in San Francisco, Calif.) webcast from Mauna Loa, Hawaii: http://www.exploratorium.edu/venus/
        Slooh Space Camera telescope feed from around the world: http://www.slooh.com/transit-of-venus/
        Astronomers Without Borders webcast from the Mount Wilson Observatory in California: http://www.astronomerswithoutborders.org/projects/transit-of-venus.html

    I intend to publish a single list later.

    Read the article

  • Dot Net Code Coverage Test Tools - there is now a choice

    - by TATWORTH
    I have been pleasantly surprised this week to discover that there is a choice of tools for measuring code coverage. If you have the Visual Studio Team edition and are using MSTEST, you have built-in code coverage; even then, however, you may need a standalone tool. The tools I have found are (costs are per seat):

        1) NCover - http://www.ncover.com/ ($199 to $658). I have used it, but it is very expensive.
        2) PartCover - http://sourceforge.net/projects/partcover/ - free! Steep initial learning curve to get it to work.
        3) dotCover from http://www.jetbrains.com/dotcover/ - personal licence normally $99 but at an introductory price of $75, and free for open-source developers (details at http://www.jetbrains.com/dotcover/buy/buy.jsp#opensource_).
        4) Test Matrix from http://submain.com/products/testmatrix.aspx - $149 per licence.

    Read the article

  • New Worklist features on 12.1.3

    - by Vijay Shanmugam
    The following new Worklist features are available in E-Business Suite 12.1.3 via Patch 13646173.

    Ability to view comments on top of a notification: If an action is performed on a notification, such as Reassign, Request for Information or Provide Information, the recipient of the notification will see who performed the last action and the associated comment at the top of the notification.

    Reassigning a request-for-information notification: If an approver requests more information on a notification from its submitter, the submitter now has two options: Answer Request for More Information, or Transfer Request for More Information. If the submitter thinks the requested information can be provided by another user, he/she can transfer the request to the other user. Please note that only Transfer is supported for Request for More Information; once transferred, the submitter cannot access the notification and provide the requested information.

    Use actual sent date when reassigning a notification: The Sent field in the notification header always showed the date on which the notification was first created. If the notification was later reassigned, the Sent date was not updated to show the last action date. This caused problems in the following scenario:

        An approval notification was sent to JACK on 01-JAN-2012.
        JACK waited 10 days before reassigning it to JILL on 10-JAN-2012.
        JILL does not see the notification as sent on 10-JAN-2012; instead she sees it as sent on 01-JAN-2012.
        Although the notification was originally created on 01-JAN-2012, it was sent to JILL only on 10-JAN-2012.

    The enhancement now shows the correct sent date in the Worklist and on the Notification Details page. Figure 1 depicts all three features above.

    Related Action History for response-required notifications: So far it was possible to embed the action history of a response-required notification into another FYI notification using the #RELATED_HISTORY attribute (please refer to the Workflow Developer Guide for details about this attribute). The enhancement now enables developers to embed the action history of one response-required notification into another response-required notification. To do so, create the message attribute #RELATED_HISTORY and set its value at run time in the following format:

        {TITLE}[ITEM_TYPE:ITEM_KEY]PROCESS_NAME:ACTIVITY_LABEL_NAME

    The TITLE, ITEM_TYPE and ITEM_KEY parts are optional. TITLE is used as the Related Action History header title; if it is not present, the default title "Related Action History" is shown. If ITEM_TYPE is present and ITEM_KEY is not, for example {TITLE}[ITEM_TYPE]PROCESS_NAME:ACTIVITY_LABEL_NAME, the Related Action History is populated from the parent item type of the current item. If both ITEM_TYPE and ITEM_KEY are present, for example {TITLE}[ITEM_TYPE:ITEM_KEY]PROCESS_NAME:ACTIVITY_LABEL_NAME, the Related Action History is populated from that specific instance activity. Figure 2 depicts the Related Action History feature.
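    A minimal sketch of setting the attribute at run time, using Oracle Workflow's standard wf_engine API (the item type, item key, title and process/activity names below are placeholders):

        -- from the PL/SQL that builds the notification
        begin
          wf_engine.SetItemAttrText(
            itemtype => 'MYITEM',
            itemkey  => '12345',
            aname    => '#RELATED_HISTORY',
            avalue   => '{Approval Trail}[MYITEM:12340]MY_PROCESS:NOTIFY_APPROVER');
        end;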

    Read the article

  • Azure Futures - Distributed Computing and Number Crunching

    - by JoshReuben
    "the biggest Azure customers today are the ones using HPC on-premises at the current time" - http://www.zdnet.com/blog/microsoft/windows-azure-futures-turning-the-cloud-into-a-supercomputer/8592?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+zdnet%2Fmicrosoft+%28ZDNet+All+About+Microsoft%29&utm_content=Google+Reader   Orleans Framework for cloud computing - http://research.microsoft.com/en-us/projects/orleans     HPC on Azure - http://www.zdnet.com/blog/microsoft/microsoft-finalizes-its-latest-supercomputing-operating-system-release/7414   Dryad is Microsoft’s competitor to Google MapReduce and Apache Hadoop  - http://www.zdnet.com/blog/microsoft/microsoft-takes-a-step-toward-commercializing-its-dryad-distributed-computing-technologies/8255?tag=mantle_skin;content   SQL Server Analysis Services DataMining in the cloud - http://www.sqlmag.com/article/reporting2/azure-data-mining-in-the-cloud.aspx

    Read the article

  • YouTube: CoffeeScript Rocks (in NetBeans IDE)

    - by Geertjan
    CoffeeScript is a handy preprocessor for JavaScript, as shown in a quick demo below on YouTube, using the CoffeeScript plugin for NetBeans IDE. Right now the NetBeans Plugin Portal doesn't have a CoffeeScript plugin for NetBeans IDE 7.4, but not to worry - the NetBeans IDE 7.3 plugin works just fine: http://plugins.netbeans.org/plugin/39007/coffeescript-netbeans Here's a small YouTube clip I made today showing how it all works. Also read this very handy and detailed NetBeans tutorial, on which I based the demo above: https://netbeans.org/kb/docs/web/js-toolkits-jquery.html

    Related info:
        http://www.youtube.com/watch?v=QgqVh_KpVKY
        http://www.ibm.com/developerworks/library/wa-coffee1/
        http://blog.sethladd.com/2012/01/vanilla-dart-ftw.html
        http://api.jquery.com/fadeOut/
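    To give a flavour of what the preprocessor buys you, here is a trivial CoffeeScript sketch (not taken from the demo itself):

        # CoffeeScript source...
        square = (x) -> x * x
        alert square 5

        # ...compiles to plain JavaScript roughly like:
        #   var square = function(x) { return x * x; };
        #   alert(square(5));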

    Read the article

  • Trouble with Nginx hotlink protection

    - by Ayaz Malik
    I am trying to implement image hotlink protection in nginx and I need help. I have a huge issue with my site's images being submitted to social networks like StumbleUpon with a direct link like http://example.com/xxxxx.jpg, which sometimes gets huge traffic and increases CPU and bandwidth usage. I want to block direct access to my images from other referrers and protect them from being hotlinked. Here is the code from my vhost.conf:

        server {
            access_log off;
            error_log logs/vhost-error_log warn;
            listen 80;
            server_name mydomain.com www.mydomain.com;

            # uncomment location below to make nginx serve static files instead of Apache
            # NOTE this will cause issues with bandwidth accounting as files wont be logged
            location ~* \.(gif|jpg|jpeg|png|wmv|avi|mpg|mpeg|mp4|htm|html|js|css)$ {
                root /home/username/public_html;
                expires 1d;
            }

            root /home/mydomain/public_html;
        }

        location / {
            client_max_body_size 10m;
            client_body_buffer_size 128k;
            proxy_send_timeout 90;
            proxy_read_timeout 90;
            proxy_buffer_size 4k;
            # you can increase proxy_buffers here to suppress "an upstream response
            # is buffered to a temporary file" warning
            proxy_buffers 16 32k;
            proxy_busy_buffers_size 64k;
            proxy_temp_file_write_size 64k;
            proxy_connect_timeout 30s;
            proxy_redirect http://www.mydomain.com:81 http://www.mydomain.com;
            proxy_redirect http://mydomain.com:81 http://mydomain.com;
            proxy_pass http://ip_address/;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            expires 24h;
        }
        }

    For hotlink protection I added this code:

        location ~* (\.jpg|\.png|\.gif|\.jpeg)$ {
            valid_referers blocked www.mydomain.com mydomain.com;
            if ($invalid_referer) {
                return 403;
            }

    This is the current nginx code for this domain, but it didn't work:

        server {
            access_log off;
            error_log logs/vhost-error_log warn;
            listen 80;
            server_name mydomain.com www.mydomain.com;

            # uncomment location below to make nginx serve static files instead of Apache
            # NOTE this will cause issues with bandwidth accounting as files wont be logged
            location ~* \.(gif|jpg|jpeg|png|wmv|avi|mpg|mpeg|mp4|htm|html|js|css)$ {
                root /home/username/public_html;
                expires 1d;
            }

            root /home/mydomain/public_html;
        }

        location ~* (\.jpg|\.png|\.gif|\.jpeg)$ {
            valid_referers blocked www.mydomain.com mydomain.com;
            if ($invalid_referer) {
                return 403;
            }

        location / {
            client_max_body_size 10m;
            client_body_buffer_size 128k;
            proxy_send_timeout 90;
            proxy_read_timeout 90;
            proxy_buffer_size 4k;
            # you can increase proxy_buffers here to suppress "an upstream response
            # is buffered to a temporary file" warning
            proxy_buffers 16 32k;
            proxy_busy_buffers_size 64k;
            proxy_temp_file_write_size 64k;
            proxy_connect_timeout 30s;
            proxy_redirect http://www.mydomain.com:81 http://www.mydomain.com;
            proxy_redirect http://mydomain.com:81 http://mydomain.com;
            proxy_pass http://ip_address/;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            expires 24h;
        }
        }

    How can I fix this?
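    One likely culprit is structure: in the posted config the stray closing brace after "root /home/mydomain/public_html;" ends the server block early, leaving the hotlink location outside any server context, and the valid_referers location never closes. A minimal sketch of one way to lay it out, assuming nginx should serve the images itself:

        server {
            listen 80;
            server_name mydomain.com www.mydomain.com;
            root /home/username/public_html;

            location ~* \.(gif|jpg|jpeg|png)$ {
                valid_referers none blocked www.mydomain.com mydomain.com;
                if ($invalid_referer) {
                    return 403;
                }
                expires 1d;
            }

            location / {
                # proxy block as before
                proxy_pass http://ip_address/;
            }
        }

    Note that "none" in valid_referers allows direct requests that carry no Referer header at all (typed-in URLs, some proxies); drop it only if you want to block those too.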

    Read the article

  • "Malformed line 6" error in my /etc/apt/sources.list

    - by Odi1215
    I'm new to Ubuntu so I don't really know much yet. I encountered this problem in the terminal:

        E: Malformed line 6 in source list /etc/apt/sources.list (dist parse)
        E: The list of sources could not be read.

    What should I do? Help would be much appreciated. Here's my sources.list:

        # /etc/apt/sources.list
        deb http://archive.ubuntu.com/ubuntu/ precise main restricted universe multiverse
        deb http://security.ubuntu.com/ubuntu/ precise-security main restricted universe multiverse
        deb http://archive.ubuntu.com/ubuntu/ precise-updates main restricted universe multiverse
        deb http://archive.canonical.com/ partner
        deb-src http://archive.canonical.com/ partner
        /etc/apt/sources.list
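    A bare filename on its own line is not valid sources.list syntax, so the stray "/etc/apt/sources.list" at the end is the most likely culprit (the reported line number may differ if the real file has blank lines the paste lost). A quick sketch of removing it and retrying:

        sudo sed -i '$ d' /etc/apt/sources.list   # delete the last line of the file
        sudo apt-get update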

    Read the article

  • How do I stop Google indexing my main page as https [duplicate]

    - by user2897488
    This question already has an answer here: "https:// search results appearing on Google for purely http:// site".

    For historic reasons, we have things set up so that "www.mydomain.com" redirects to "store.mydomain.com". This worked perfectly fine until recently, when Google appears to have started sending visitors to "https:// www.mydomain.com", which doesn't have an SSL certificate (and never has). Strangely, it's only the first link that goes to "https:// www.mydomain.com"; all other links point correctly to "http:// store.mydomain.com". Because there is no certificate on the "www" version, users are getting an error message. How do I make Google revert to pointing the main link at "http:// store.mydomain.com" (or even "http:// www.mydomain.com")? If I remove "https:// www.mydomain.com" from Google Webmaster Tools, will this also remove the redirected page ("http:// store.mydomain.com")? Thanks.

    Read the article

  • How To Track "Similar Product/Page" Links In Internal Site

    - by Petra Barus
    I just created a new widget that shows up on a product page on my site. This widget shows several products similar to the product displayed on the current page; the purpose is to help users compare similar products. Let's say that on product page A, http://domain/products/A, the Similar Products widget will show:

        http://domain/products/B
        http://domain/products/C
        http://domain/products/D
        http://domain/products/E

    My question is: how do I track "product page B was visited X times from product page A via the Similar Products widget"? (There is also a chance that product B will show up in the widget on product C's page.) I have the idea of using the Event feature from Google Analytics, but I'm still not sure whether that is, or what is, the common best practice for this.
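    With classic Google Analytics (ga.js), each widget link could fire an event naming both ends of the click (a sketch - the category/action/label scheme below is just one reasonable convention, and the markup is hypothetical):

        <!-- on product page A, inside the Similar Products widget -->
        <a href="/products/B"
           onclick="_gaq.push(['_trackEvent', 'SimilarProducts', 'click', 'A -> B']);">
          Product B
        </a>

    In GA's Event reports you can then break the 'SimilarProducts' category down by label to see how often each A -> B pair was followed.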

    Read the article

  • NDepend v4 has just been released!

    - by Vincent Maverick Durano
    A few months ago I blogged about the release of NDepend v3 with continuous integration and reporting capabilities here. Recently, the NDepend team released v4, which comes with code rules based on C# LINQ queries (CQLinq); this makes code rules much more powerful and flexible. There are a couple of new rules available, like:

        http://www.ndepend.com/DefaultRules/webframe?Q_UI_layer_shouldn't_use_directly_DB_types.html
        http://www.ndepend.com/DefaultRules/webframe?Q_Types_with_disposable_instance_fields_must_be_disposable.html
        http://www.ndepend.com/DefaultRules/webframe?Q_Avoid_the_Singleton_pattern.html
        http://www.ndepend.com/DefaultRules/webframe?Q_Avoid_making_complex_methods_even_more_complex_(Source_CC).html

    v4 also provides NDepend.API and a dozen open-source tools developed with NDepend.API (the Power Tools): http://www.ndepend.com/API/webframe.html
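    To give a sense of what a CQLinq rule looks like, here is a sketch in the documented rule style (the 30-line threshold is arbitrary, not one of the shipped rules):

        // <Name>Methods too big</Name>
        warnif count > 0
        from m in Application.Methods
        where m.NbLinesOfCode > 30
        orderby m.NbLinesOfCode descending
        select new { m, m.NbLinesOfCode }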

    Read the article

  • "Index of ..." directory's files listing

    - by Tony
    In my courses we've got homework on a site in folders such as:

        http://example.com/files/tasks1-edc34rtgfds
        http://example.com/files/tasks2-0bg454fgerg
        http://example.com/files/tasks3-h1dlkjiojo8
        ...

    Each tasksN-xxxxxxxxxxx is a folder with 11 random characters at the end. When you view the above URLs in a browser you see "Index of /tasksN-xxxxxxxxx" with all the files in that folder. When you view http://example.com/files/ you see only an empty HTML page with the words "Hello, world". The problem is that you can't look at the next task without knowing its URL. So, for example, we've got the URLs for tasks1 and tasks2, but we can't guess what the tasks3 URL will be (as we would need to know the 11 random characters at the end). How can I get the list of all directories? (Is there a way to request something like http://example.com/files/task1-aflafjal343/.. to go up a level, or another way?) I want to see all upcoming homework tasks.

    Read the article

  • Why do we see multiple PIDs related to the same application/owner for httpd, like below? What does this mean?

    - by Muthukumar Alagappan
    Why do we see multiple PIDs related to the same application/owner for httpd, like below? What does this mean?

        $ ps -ef | grep httpd | grep -v grep
        apache    9619 20181  0 07:08 ?  00:00:03 /usr/sbin/httpd
        apache   10092 20181  0 Jan24 ?  00:00:07 /usr/sbin/httpd
        apache   13086 20181  0 06:09 ?  00:00:00 /usr/sbin/httpd
        apache   13717 20181  0 Jan25 ?  00:00:01 /usr/sbin/httpd
        apache   14730 20181  0 07:13 ?  00:00:01 /usr/sbin/httpd
        apache   16359 20181  0 09:54 ?  00:00:00 /usr/sbin/httpd
        root     20181     1  0 2011  ?  00:00:01 /usr/sbin/httpd
        apache   21450 20181  0 09:55 ?  00:00:00 /usr/sbin/httpd

    Read the article

  • How to handle CNAME host redirect to virtual directory?

    - by esac
    I have an internal website and a virtual directory, http://server2012/logs. I created a CNAME on my DNS server: LOGS -> server2012. I would like to set it up so that http://LOGS redirects to http://server2012/logs. Ideally, all pages should still appear in the browser as coming from the LOGS URL, so http://LOGS/network.html?site=32 is what is displayed in the browser, but it is really served from http://server2012/logs/network.html?site=32. I've looked at URL Rewrite but can't seem to get it to work.
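    One way to approach this with the IIS URL Rewrite module (a sketch - the rule name and patterns are illustrative; since LOGS and server2012 are the same machine, a same-server rewrite into the virtual directory keeps the LOGS URL in the browser's address bar):

        <!-- web.config at the site root on server2012 -->
        <system.webServer>
          <rewrite>
            <rules>
              <rule name="logs-host" stopProcessing="true">
                <match url="(.*)" />
                <conditions>
                  <add input="{HTTP_HOST}" pattern="^logs$" />
                </conditions>
                <action type="Rewrite" url="/logs/{R:1}" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>

    The site also needs a binding for the LOGS host name so IIS accepts the request in the first place.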

    Read the article
