Search Results

Search found 20409 results on 817 pages for 'url routing'.


  • How should I deploy my JVM-based web application on ubuntu?

    - by Pieter Breed
    I've developed a web application using Clojure/Compojure (JVM-based), and while developing I tested it using embedded Jetty running on 0.0.0.0:8080. I would now like to deploy it to run on port 80 on Ubuntu. I do dynamic virtual hosting, so any request for any host that arrives on port 80 should be handled by my application. The issues that worry me are: I can still run it embedded, but I'm worried about running my app as root (needed for binding to port 80). I'm not sure if I can 'give up root' from within the JVM. Do I need to be concerned about this? Besides, serving web applications is a known problem and I should be using known solutions for this (Jetty or Tomcat), but Tomcat in particular seems very heavyweight. Also, I only have one application that listens on /* and does routing internally (with Compojure/Ring). What I'm trying to say is that Tomcat by default assigns WARs to subfolders, which I don't want. So basically what I need is some very safe way of binding to port 80 on Ubuntu that can, with minimal interference, send all requests to my app. Any ideas?
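
    A common way to avoid running the JVM as root is to leave the app bound to its unprivileged port and have the kernel forward port 80 to it; authbind or a lightweight reverse proxy (nginx or Apache) in front are the other usual routes. A minimal sketch, assuming the app stays on 8080:

        # Forward inbound TCP 80 to the app's unprivileged port, so the JVM never needs root
        sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080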


  • How Does EoR Design Work with Multi-tiered Data Center Topology

    - by S.C.
    I just did a ton of reading about the different multi-tier network topology options as outlined by Cisco, and now that I'm looking at the physical options (End of Row (EoR) vs. Top of Rack (ToR)), I find myself confused about how these fit into the logical constructs. With ToR the mapping is 1:1: at the top of each rack there is a switch (or switches) that essentially acts as the access layer. They connect via fiber to other switches, maybe chassis-based, that act as the aggregation layer, which then connect to the core layer. With EoR it seems that the servers connect directly to the aggregation layer, skipping the access layer altogether, by plugging directly into what are typically chassis switches. In EoR, then, does the standard three-tier model become a two-tier model: the servers go to the chassis switch, which goes straight to the core switch? The reason it matters to me is that my understanding was that the three-tier model was more desirable due to less complexity. The aggregation switch pair acts as default gateway and does routing; if you use up all of the ports in your aggregation-layer pair, it's much more complicated to add additional switches than it is to simply add more switches at the access layer. Are there other downsides to this layout? Does the three-tier architecture still apply in some way to EoR? Thanks.


  • Is it possible to transfer Google Authorship?

    - by Stephanus Yanaputra
    Is it possible to transfer Google Authorship from one account to another? Scenario: I have user A, who is a legitimate Google author on my WordPress site. When I search the keyword in Google, his name and photo show up in the search results. One day A leaves the company, so we don't want to use his name as an author anymore and want to transfer authorship to another person, B. Technically speaking, I can just alter the display name and Google+ profile URL of A to B, and then I can probably notify Google of the changes. But what will happen then? Will Google get confused and think that I'm running a scam? Is doing this even correct in the first place?


  • Do subdomains need to be defined through domain registrar?

    - by Johnny
    I have bought a new domain name from GoDaddy. Let's say it is abcd.com. On GoDaddy's DNS management page, I changed the A (Host) record to @ = 74.125.232.215, which is www.google.co.uk's IP address. Now if I type www.abcd.com, it goes directly to www.google.co.uk. But if I type http://test.abcd.com, it cannot be loaded. Do I need to define every subdomain through GoDaddy? Is this how it works? P.S. Amazon EC2 directly generates a subdomain for users to reach their virtual PCs; that cannot be registrar-dependent. P.S.2: Same question for using "www2" at the start of the URL.
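
    Each subdomain is just another DNS record in the zone, wherever that zone is hosted: either an explicit A/CNAME entry per name, or a wildcard (*) record that catches everything. A quick way to see what the zone currently answers, using the hostnames from the question:

        # Compare an existing record with the missing subdomain
        dig +short www.abcd.com A
        dig +short test.abcd.com A   # empty unless a 'test' or wildcard record exists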


  • Extracting meta tags attribute using wget [migrated]

    - by Amit
    I have a file containing some URLs, one per line. I need to extract the "keywords" present in each page's meta tags, i.e. if there is a meta tag for "keywords", then I want to get the "content" value for it. Example: if the web page has the meta tag <meta name="keywords" content="wikipedia,encyclopedia">, then for that URL I want "wikipedia,encyclopedia" to be extracted. One approach is to download the web page using wget and then parse it using some standard HTML parser. I was wondering if there is any better way to do this without downloading the entire web page.
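
    A rough sketch with standard tools: curl streams the page and sed quits at the end of the head, which closes the pipe and aborts the transfer early, so most of the body is skipped. The URL is a placeholder, and the patterns assume double-quoted attributes with name= preceding content=:

        curl -s 'http://example.com/' | sed '/<\/head>/q' \
          | grep -io '<meta[^>]*name="keywords"[^>]*>' \
          | sed 's/.*content="\([^"]*\)".*/\1/'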


  • '/var/www/' vs '/home/$USER/public_html/'

    - by OrganizedFellow
    I recently started using Ubuntu as a LAMP server. I've come across plenty of tutorials that say to place the files at '/var/www/' and I've also seen others that put them in '/home/$USER/public_html/'. During my testing and figuring stuff out, I was successfully able to view a test site URL from each location. Is one better than the other? I thought that maybe it was just preference. But the more I think about it, the more I want to keep all my work in my Home folder.
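
    Both locations work as long as Apache is pointed at them and can traverse the path (the home directory needs to be executable by the Apache user). A minimal Apache 2.4 sketch of a vhost serving from a home directory, with placeholder user and hostname:

        <VirtualHost *:80>
            ServerName dev.local
            DocumentRoot /home/youruser/public_html
            <Directory /home/youruser/public_html>
                AllowOverride All
                Require all granted
            </Directory>
        </VirtualHost>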


  • Windows 7 ICS client web failure

    - by n8wrl
    I have several Windows 7 PCs connected on a LAN via a hub. One has a Verizon 3G connection and works great. I have Internet Connection Sharing enabled on it, which automagically set the LAN connection to 192.168.137.1 and enabled DHCP. I am trying to get the client PCs working one at a time; the others are off. The client is able to: get an IP via DHCP with correct settings; ping any web address I can throw at it, so DNS and routing are working; and Windows Update works. But web sites hang in IE. All but google.com! I type www.msn.com, microsoft.com, amazon.com, etc. They all ping via a cmd window, but IE just hangs - it says web site found, but the green progress bar just slowly creeps and no content displays. www.google.com comes up even after clearing the browser and DNS cache. I am pulling my hair out - what am I missing? EDIT: After some more gyrations with a router I'm back to ICS. Same symptoms, only now I have an answer to Andrew's question: YES, I can do Google searches, but clicking on any of the result links hangs! I let one sit for half an hour with no timeout or error.
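
    One avenue worth checking, offered only as a guess: this pattern (ping and DNS fine, small pages load, larger pages stall) is a classic symptom of an MTU mismatch on shared 3G links, where large packets silently vanish. The client's MTU can be inspected and lowered with netsh (the interface name is a placeholder):

        netsh interface ipv4 show subinterfaces
        netsh interface ipv4 set subinterface "Local Area Connection" mtu=1400 store=persistent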


  • Google won't display site

    - by Markasoftware
    My website (markasoftware.getenjoyment.net) doesn't seem to be indexed properly by Google (I haven't tried other search engines). When I type in the URL of my site, it appears right at the top of the list like it should. When I type in the entire contents of the title, however, the site doesn't appear! The title is quite long (Thermonuclear War Game Online: Thermonuclear War By Mark) and it has little (if any) competition. Have I been punished by Google for some reason, or is it something else? I have received zero hits from search engines. Can someone tell me why my site doesn't appear?


  • getting the user back where they came from with mod_auth_form

    - by bmargulies
    Using the mod_auth_form module in Apache HTTPD 2.4.3, I am looking for a way to have the user redirected to their original target after completing a login. That is, if I have

        <Location /protected>
            ... form auth config here ...
        </Location>

    the user might browse to /protected/a or to /protected/b. In either case, they will be presented with the login form. However, as far as I can see, I must specify a single 'success' URL. I'm wondering if I'm missing some Apache feature that would allow me to, for example, make the redirect to the login form go to something like /login.html?origTarget=/protected/a via some syntax on the AuthFormLoginRequiredLocation statement?
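
    The mod_auth_form documentation describes an "inline login" pattern that sidesteps the fixed success URL: serve the login form via ErrorDocument 401 so it appears at the protected URL itself, and the form (fields httpd_username and httpd_password) posts back to the page the user originally requested. A sketch under those assumptions; file paths are placeholders:

        <Location /protected>
            AuthType form
            AuthName "protected"
            AuthFormProvider file
            AuthUserFile /etc/apache2/passwords
            Session On
            SessionCookieName session path=/
            Require valid-user
            # Serve the login form in place of a redirect, preserving the original URL
            ErrorDocument 401 /login.html
        </Location>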


  • help redirecting IP address

    - by Alice
    Google has indexed the IP address of my site rather than the domain, so now I'm trying to set up a 301 redirect that will send the IP address and all subsequent pages to the domain. I currently have something like this in my .htaccess file (however, I don't think it's working correctly):

        RewriteCond %{HTTP_HOST} ^12.34.567.890
        RewriteRule (.*) (domain address)/$1 [R=301,L]

    I've used various redirect checker tools and keep getting the message: "... not redirecting to any URL or the redirect is NOT SEARCH ENGINE FRIENDLY". Am I doing something wrong, or is there something else I should be trying? Thanks! Alice
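
    For comparison, the usual shape of that rule escapes the dots (otherwise they are regex wildcards) and uses an absolute URL in the substitution; example.com stands in for the real domain:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^12\.34\.567\.890$
        RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]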


  • Redirect public traffic to a different subfolder, while local traffic remains unchanged

    - by ecnepsnai
    I would like to have local (intranet) HTTP traffic go to the /var/www/html folder, while any public traffic goes to the subfolder /var/www/html/public. I've tried this configuration, with some variation, in httpd.conf:

        <VirtualHost PRIVATE-IP>
            DocumentRoot /var/www/html
            ServerName ecn
            ErrorLog /var/www/logs/error/private
            CustomLog /var/www/logs/access/private common
        </VirtualHost>

        <VirtualHost PUBLIC-IP>
            DocumentRoot /var/www/html/public
            ServerName PUBLIC-DOMAIN-NAME
            ErrorLog /var/www/logs/error/public
            CustomLog /var/www/logs/access/public common
        </VirtualHost>

    PUBLIC-IP, PRIVATE-IP, and PUBLIC-DOMAIN-NAME are all replaced with the correct values in the actual document. The problem is, local traffic can browse fine, but remote traffic is directed to the root folder and is getting 403'd (because I have that folder blocked off through my .htaccess file). If I append /public to the URL, it works fine.


  • SEO Benefits of adding a Tumblr feed to site

    - by Paul
    A client of ours has a CMS-driven blog on his hotel site. He would like to use the blog to add depth to his site and gain SEO benefits from the blog's content. The current blog is a basic header/text field and doesn't contain any tagging/meta features. Unfortunately, we don't have a .NET developer on our team to alter the existing blog and add meta/tagging, and there isn't budget to hire one, so I considered using a Tumblr blog: setting it up externally, giving it a blog.hotelname.com address, and feeding it into the existing page via Tumblr's JS, which basically does a document.write into the page, which we can style. I understand from a previous post (Poor CMS blog vs Tumblr embed) that as a general rule most search engines ignore JS-created content, but will the above approach act as an improvement on the existing system for now, given that the blog will be set up externally with its own URL and also feed into the existing site? Cheers, Paul


  • View the Replay of Fusion Applications - Financials Highlights Lunchtime Webinar

    - by Theresa Hickman
    If you missed the free one-hour Webinar and training on Feb. 28 that I blogged about in a previous post, you can view the replay at your convenience. Fusion Applications – Financials Highlights URL: http://education.oracle.com/education/downloads/Fusion_Financials_LVCRecording/ If it asks you for a code, enter E1049. For those of you who pre-registered for this Webinar but could not attend due to the time-zone difference, you should have also received an email from Oracle University with a link to the replay. Enjoy!


  • How to tell google a blog article has been updated?

    - by Scott
    The URL of my posts contains the publication date and slugged title, but how can I best show Google search users that an article has been updated since its original publication? I was considering devoting a few characters of the meta description (e.g., "updated 2013-Aug-1"), or doing so under the first h1 tag. I don't want to hurt the SEO value of my site, but I also want to let users know that articles have been substantially updated since their publication. Is there a better way to do this?
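
    A more structured option is to expose the dates as machine-readable markup rather than spending description characters on them; whether search engines surface it is not guaranteed. A sketch using schema.org Article microdata, with placeholder dates (dateModified echoes the question's example):

        <article itemscope itemtype="http://schema.org/Article">
          <meta itemprop="datePublished" content="2013-01-15">
          <meta itemprop="dateModified" content="2013-08-01">
          <!-- post content -->
        </article>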


  • Github Feed affecting my WordPress installation? [on hold]

    - by saul
    Any idea how this fork is affecting my site? I went to check my website log stats and realized this may be the cause of a strange redirect constantly happening on my WordPress installation. Here's a line I found in my log:

        54.81.91.95 - - [07/May/2014:22:52:08 -0400] "GET /category/selfie/feed/ HTTP/1.1" 200 1826 "-" "feedzirra http://github.com/pauldix/feedzirra/tree/master"

    And this is the GitHub fork (or whatever these are called): https://github.com/feedjira/feedjira/tree/master Basically, I think every time I update my categories (selfie, in this case), I get redirected to install.php, probably by triggering some GET function on that feed. To the best of my knowledge, this feed parses all URLs with this structure, blocking them, kind of like a DDoS attack?? Any ideas how to go about it?
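
    If the goal is simply to keep that client out while investigating, one stopgap is a User-Agent block in .htaccess; a sketch, matching the substring from the log line above:

        RewriteEngine On
        RewriteCond %{HTTP_USER_AGENT} feedzirra [NC]
        RewriteRule .* - [F,L]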


  • GA tracking utm query params after hashbang

    - by hybrid9
    We currently use a hashbang for the portion of our site that generates dynamic content, which can also be deep linked. Our analytics team wants to use utm parameters to track referral traffic from social networks. We are using Universal Analytics (analytics.js) as well as GTM. Will GA pick up the query parameters after the hashbang, or do they always have to go before it? For example:

        1. example.com/#!/some/content?utm_source=foo&utm_campaign=bar
        2. example.com?utm_source=foo&utm_campaign=bar/#!/some/content

    In #1, I'm concerned that the utm params won't be recorded, and in #2 the page will break or the URL could be incorrectly written. How does GA pull in those parameters - location.search? Regex? Can I get away with using either?


  • Mangling traffic from a Mikrotik Router

    - by TiernanO
    I have a MikroTik-powered router in the house with a couple of internet connections (two 200/10Mb cable modems and a 100/20Mb VDSL line). I am using mangle rules to set routing marks and NAT rules to do some load balancing, and everything seems to be going grand... but it only works for traffic from outside the router. Let me explain: I have 4 GigE ports on the machine, WAN1, 2, and 3, and a LAN port named LAN1. All traffic from LAN1 is getting mangled (as it should be), but traffic from the router itself (proxy traffic, IPv6 tunnels, VPN connections) is not being mangled. It gets the first route to 0.0.0.0/0, which in my case is WAN2, and sticks with it. So, how do I get traffic from the local router to be mangled? Originally it was proxy traffic that caused the problem, but now with IPv6 and VPN it is more important that they be mangled... The last time I enabled IPv6, all traffic went through WAN2 only and the rest were unused... Any ideas?
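
    One detail that may explain this: traffic the router originates never passes through the prerouting chain, which is where LAN mangle rules usually sit; RouterOS has a separate output chain for locally generated packets. A hedged sketch (the routing-mark name is a placeholder and must match an existing mark, and a real setup would add matchers rather than marking everything):

        /ip firewall mangle add chain=output action=mark-routing new-routing-mark=to_WAN1 passthrough=yes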


  • How to generate Visa checkout token? [on hold]

    - by Muhammad Junaid
    I am in the process of creating a Visa Checkout plugin but am stuck generating the token. Here are the token requirements. Format: alphanumeric; maximum 100 characters, in the form x:UNIX_UTC_Timestamp:SHA256_hash, where UNIX_UTC_Timestamp is a UNIX epoch timestamp and SHA256_hash is an SHA256 hash of the following unseparated items:

        - Your shared secret
        - The timestamp from the transaction; exactly the same as UNIX_UTC_Timestamp
        - The resource path (API name)
        - This HTTPS request's query string

    Note: The query string includes one or more parameters in name-value pair format, whose names are separated from values by equal signs (=); an empty value may be omitted, but the name and equal sign must be present. The initial question mark (?) is not included. Note: All parameters must be present. The parameters must be in lexicographic sort order (UTF-8, uppercase hex characters), with parameters separated from each other by an ampersand (&). Note: The query string must be URL-encoded (excepting the following characters, per RFC 3986: hyp... You can find this on Google under "visa checkout developer updating 1 px image".
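
    A sketch of assembling such a token with common command-line tools. The secret, resource path, and query string are placeholder values, and the query string is assumed to be already sorted and URL-encoded per the notes above:

        SECRET='your-shared-secret'                     # placeholder shared secret
        TS=$(date +%s)                                  # UNIX UTC timestamp
        RESOURCE='payment/info'                         # placeholder resource path (API name)
        QUERY='apikey=ABC123&currency=USD'              # sorted, URL-encoded, no leading '?'
        HASH=$(printf '%s' "${SECRET}${TS}${RESOURCE}${QUERY}" | openssl dgst -sha256 | awk '{print $NF}')
        echo "x:${TS}:${HASH}"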


  • SQL Server 2008 R2: StreamInsight changes at RTM: Event Flow Debugger and Management Interface Security

    - by Greg Low
    In CTP3, I found setting up the StreamInsight Event Flow Debugger fairly easy. For RTM, a number of security changes were made. First config: to be able to connect to the management interface, your user must be added to the Performance Log Users group. After you make this change, you must log off and log back on, as the group membership is only added to your login token when you log on. I forgot this and spent ages trying to work out why I couldn't connect. Second config: You need to reserve the URL that the... (read more)
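
    Reserving an HTTP URL for a non-admin account on Windows is generally done with netsh http add urlacl; a sketch with placeholder URL and account (not necessarily StreamInsight's actual defaults):

        netsh http add urlacl url=http://+:80/StreamInsight user=DOMAIN\YourUser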


  • Can I make query strings produce separate pages?

    - by John Smith
    I have a profile page with a URL like so: localhost/profile.php/?username=Bob I was wondering: if I had a separate <title> which changed according to the username, would the usernames produce separate pages in the Google search results? How do I tell Google to only use the username string, or does it search within the title? On a similar note, how would I create a separate page with the username, like localhost/bob, instead of a query string, the way Facebook does? Do they make a new file for each user?
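
    Sites like Facebook don't create a file per user; the web server rewrites the pretty URL onto the query-string version. A sketch for an Apache .htaccess (the allowed-character class is an assumption):

        RewriteEngine On
        # Skip real files/directories, then map /bob -> /profile.php?username=bob
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^([A-Za-z0-9_.-]+)/?$ profile.php?username=$1 [L,QSA]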


  • Clean SOAP Calls from iOS - SudzC

    - by Richard Jones
    This is worth another mention. If you need to call SOAP web services from iOS or JavaScript (and let's face it, who doesn't?), http://SudzC.com really delivers. You give it the URL to your WSDL file (or upload a file) and it just spits out a ready-to-go Xcode project. I would point out that to get it to work 100%, I changed line 204 in Soap.m (the commented-out line is the old version; mine is below it):

        //if([child respondsToSelector:@selector(name)] && [[child name] isEqual: name]) {
        if([child respondsToSelector:@selector(name)] && [[child name] hasSuffix: name]) {

    I consumed a Microsoft Dynamics NAV set of web-service pages with no problem (and they tend to be fairly complex WSDL definitions).


  • Still no detected structured data in Google Webmaster Tools [on hold]

    - by user6211
    Can you give me some suggestions about what's wrong with my structured data? Google still cannot read it. It looks like this:

        <div class="identity">
          <div itemscope itemtype="http://schema.org/LocalBusiness">
            <a itemprop="url" href="http://MYDOMAIN.co.uk/"><div itemprop="name"><strong>MY_COMPANY</strong></div></a>
            <div itemprop="address" itemscope itemtype="http://schema.org/PostalAddress">
              <span itemprop="streetAddress">MY_ADDRESS</span>,
              <span itemprop="addressLocality">London</span>,
              <span itemprop="postalCode">SE5 MY_XYZ</span>,
              <span itemprop="addressCountry">UK</span>
            </div>
          </div>
        </div>


  • Share on Facebook does not show thumbnail images

    - by matt74tm
    I have a PHP application with a "Share on Facebook" button that, on the development server, shows the thumbnail images correctly and allows the user to select between them; on the live server, it does NOT show the thumbnail images at all. The relevant portion of the .htaccess file is:

        # Set up caching on media files for 2 days
        <FilesMatch "\.(gif|jpg|jpeg|png|flv)$">
            ExpiresDefault A172800
            Header append Cache-Control "public"
        </FilesMatch>

    I'm using the exact same set of PHP files and .htaccess, but the server configuration is different. What could be causing this? Note that the text appears fine. Edit 1: We are also doing some URL rewriting related to images in the .htaccess (on both servers):

        ...
        RewriteRule ^.*/content/image/(.*)$ content/image/$1 [L]
        ...
        RewriteRule ^.*/images/(.*)$ images/$1 [L]
        ...

    Would that be somehow making a difference? Images appear fine throughout the site. (I posted this question earlier at http://stackoverflow.com/questions/4142597/share-on-facebook-does-not-show-thumbnail-images)
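
    One thing worth checking: Facebook's scraper prefers an explicit Open Graph hint over guessing images from the markup, so pinning the thumbnail takes the server-config differences out of the equation. A sketch (the image URL is a placeholder):

        <meta property="og:image" content="http://www.example.com/images/share-thumb.jpg">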


  • 301 re-direct all external links to new domain

    - by Dean Legg
    I have changed the main domain to a subdomain and would like to redirect all external links to the new subdomain. I've read a few articles but am having no luck editing the .htaccess, as my changes might be interfering with the rules already in there. Old: www.example.co.uk New: https://secure.example.co.uk The current rules are quite handy because they seem to have sorted out the structure for all internal links. They have even updated the file path for images (or this could just be WordPress, as the URL was updated under general settings). This is the current .htaccess:

        <files wp-config.php>
        order allow,deny
        deny from all
        </files>

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress
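
    A sketch of the host redirect, using the names from the question; it would sit inside the IfModule block, after RewriteEngine On and before the WordPress rules so it runs first:

        RewriteCond %{HTTP_HOST} ^www\.example\.co\.uk$ [NC]
        RewriteRule ^(.*)$ https://secure.example.co.uk/$1 [R=301,L]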


  • In Stud, which Private RSA Key should be concatenated in the x509 SSL certificate pem file to avoid "self-signed" browser warning?

    - by Aaron
    I'm trying to implement Stud as an SSL termination point in front of HAProxy, as a proof of concept for WebSockets routing. My domain registrar, Gandi.net, offers free 1-year SSL certs. Through OpenSSL, I generated a CSR, which gave me two files: domain.key and domain.csr. I gave domain.csr to my trusted authority and they gave me two files: domain.cert and GandiStandardSSLCA.pem (I think the latter is referred to as the intermediate cert?). This is where I encountered friction: Stud, which uses OpenSSL, expects there to be an "rsa private key" in the "pem-file", which it describes as "SSL x509 certificate file. REQUIRED." If I add domain.key to the bottom of Stud's pem-file, Stud will start, but I receive the browser warning saying "The certificate is self-signed." If I omit domain.key, Stud will not start and throws an error triggered by an OpenSSL function that appears intended to determine whether or not my pem-file contains an RSA private key. At this point I cannot determine whether the problem is: (1) a free SSL cert will always be self-signed and will always cause the browser warning; (2) I'm just not using Stud correctly; (3) I'm using the wrong RSA private key; or (4) the CA domain cert, the intermediate cert, and the private key are in the wrong order.
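
    For what it's worth, the conventional layout for a combined PEM handed to a terminator is the leaf certificate first, then the intermediate chain, then the matching private key; a sketch using the file names from the question (whether Stud tolerates other orderings is not assumed):

        cat domain.cert GandiStandardSSLCA.pem domain.key > stud.pem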

    Read the article
