Search Results

Search found 76098 results on 3044 pages for 'http gdata youtube com'.


  • How to change the mail domain server so it's not displaying IP? Changing name@IPaddress.com to name@domainname.com

    - by Pavel
    Hi guys. I'm kinda a noob as a server admin so please bear with me. I've installed postfix mail server and everything is working fine but the 'from' box is displaying name@IPaddress.com. I want to set it up so it displays domainname.com instead of IP. I just hope you know what I mean. My main.cf in postfix folder looks like this:

      # See /usr/share/postfix/main.cf.dist for a commented, more complete version

      # Debian specific:  Specifying a file name will cause the first
      # line of that file to be used as the name.  The Debian default
      # is /etc/mailname.
      myorigin = /etc/mailname

      smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
      biff = no

      # appending .domain is the MUA's job.
      append_dot_mydomain = no

      # Uncomment the next line to generate "delayed mail" warnings
      #delay_warning_time = 4h

      readme_directory = no

      # TLS parameters
      smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
      smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
      smtpd_use_tls=yes
      smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
      smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

      # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
      # information on enabling SSL in the smtp client.

      myhostname = mail.thevinylfactory
      alias_maps = hash:/etc/aliases
      alias_database = hash:/etc/aliases
      myorigin = /etc/mailname
      mydestination = mail.thevinylfactory.com, thevinylfactory, localhost.localdomain, localhost
      relayhost =
      mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
      mailbox_size_limit = 0
      recipient_delimiter = +
      inet_interfaces = all

    Can anyone help me with this one? If you need any more details please let me know. Thanks in advance!
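
    A minimal sketch of the usual fix, assuming the mail domain is thevinylfactory.com (adjust to the real domain): make myhostname a fully qualified name and let myorigin carry the domain, so unqualified senders become user@thevinylfactory.com instead of user@IPaddress:

      # /etc/mailname should contain just the domain, e.g.:
      #   thevinylfactory.com
      myhostname = mail.thevinylfactory.com
      mydomain   = thevinylfactory.com
      myorigin   = $mydomain   # unqualified From addresses become user@thevinylfactory.com

    After editing main.cf, a "postfix reload" should make the new From domain take effect.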

    Read the article

  • access_log item w/out IP. Starts with "::1 - - [<date>]"

    - by Meltemi
    Looking at our Apache log I see normal requests like:

      174.133.xxx.xxx - - [20/May/2010:17:36:44 -0700] "GET /index.html HTTP/1.1" 200 2004

    but every so often I get a cluster of these without an IP address:

      ::1 - - [20/May/2010:18:47:21 -0700] "OPTIONS * HTTP/1.0" 200 -
      ::1 - - [20/May/2010:18:47:22 -0700] "OPTIONS * HTTP/1.0" 200 -
      ::1 - - [20/May/2010:18:47:23 -0700] "OPTIONS * HTTP/1.0" 200 -

    What do they mean, and what causes them?
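
    For context, ::1 is the IPv6 loopback address, and "OPTIONS *" requests from it are usually Apache's own internal "dummy" connections (the parent process waking up its child processes) rather than real clients. A hedged sketch of how they could be kept out of the log, assuming the standard SetEnvIf/CustomLog directives and a Debian-style log path:

      SetEnvIf Remote_Addr "::1" internal_probe
      CustomLog /var/log/apache2/access.log combined env=!internal_probe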

    Read the article

  • ActiveMQ broker configuration error when specifying persistenceAdapter: "One of '{WC[##other:"http:/

    - by Joe
    I am setting up a simple ActiveMQ embedded broker. It works fine until I try to configure a persistence adapter. I am basically just copying the configuration from http://activemq.apache.org/persistence.html#Persistence-ConfiguringKahaPersistence. When I add this configuration to my Spring configuration, like so:

      <?xml version="1.0" encoding="UTF-8"?>
      <beans xmlns="http://www.springframework.org/schema/beans"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xmlns:amq="http://activemq.apache.org/schema/core"
             xsi:schemaLocation="
               http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
               http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core-5.3.0.xsd">

        <amq:broker useJmx="true" persistent="true" brokerName="localhost">
          <amq:transportConnectors>
            <amq:transportConnector name="vm" uri="vm://localhost"/>
          </amq:transportConnectors>
          <amq:persistenceAdapter>
            <amq:kahaPersistenceAdapter directory="activemq-data" maxDataFileLength="33554432"/>
          </amq:persistenceAdapter>
        </amq:broker>
      </beans>

    I get the error:

      cvc-complex-type.2.4.a: Invalid content was found starting with element 'amq:persistenceAdapter'. One of '{WC[##other:"http://activemq.apache.org/schema/core"]}' is expected.

    When I take out the amq:persistenceAdapter element, it works fine. The same error happens no matter which persistence adapter I include in the body, e.g. jdbc, journal, etc. Any help would be greatly appreciated. Thanks.
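
    Not part of the original question, but a hedged guess at the cause: the generated ActiveMQ XSD validates the children of <amq:broker> in a fixed (alphabetical) order, so simply moving amq:persistenceAdapter above amq:transportConnectors may satisfy the validator:

      <amq:broker useJmx="true" persistent="true" brokerName="localhost">
        <amq:persistenceAdapter>
          <amq:kahaPersistenceAdapter directory="activemq-data" maxDataFileLength="33554432"/>
        </amq:persistenceAdapter>
        <amq:transportConnectors>
          <amq:transportConnector name="vm" uri="vm://localhost"/>
        </amq:transportConnectors>
      </amq:broker>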

    Read the article

  • set a cookie while sending PERL HTTP::Request

    - by dexter
    I have created an HTTP::Request which looks like this:

      #!/usr/bin/perl
      require HTTP::Request;
      require LWP::UserAgent;

      $request = HTTP::Request->new(GET => 'http://www.google.com/');
      $ua = LWP::UserAgent->new;
      $ua->cookie_jar({file => "testcookies.txt", autosave => 1});
      $response = $ua->request($request);

      if ($response->is_success) {
          print "success\n";
          print $response->code;
      } else {
          print "fail\n";
          die $response->code;
      }

    Now, when I send the request:

      $request = HTTP::Request->new(GET => 'http://www.google.com/');
      $ua = LWP::UserAgent->new;
      $ua->cookie_jar({file => "testcookies.txt", autosave => 1});

    I want to set a cookie, which might look like:

      $request = HTTP::Request->new(GET => 'http://www.google.com/');
      $ua = LWP::UserAgent->new;
      $ua->new CGI::Cookie(-name => "testCookie", -value => "cookieValue");
      $ua->cookie_jar({file => "testcookies.txt"});

    but that gives an error. I also want to log the HTTP response codes in a file. Please help, thank you.
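
    A hedged sketch (cookie name, value and log file name are placeholders): LWP's own HTTP::Cookies jar can carry the cookie, so CGI::Cookie isn't needed, and the response code can be appended to a file after each request:

      use HTTP::Request;
      use LWP::UserAgent;
      use HTTP::Cookies;

      my $jar = HTTP::Cookies->new(file => "testcookies.txt", autosave => 1);
      # args: version, name, value, path, domain, port, path_spec, secure, maxage, discard
      $jar->set_cookie(0, 'testCookie', 'cookieValue', '/', '.google.com', 80, 0, 0, 86400, 0);

      my $ua = LWP::UserAgent->new;
      $ua->cookie_jar($jar);
      my $response = $ua->request(HTTP::Request->new(GET => 'http://www.google.com/'));

      # append the response code to a log file
      open my $log, '>>', 'response_codes.log' or die $!;
      print {$log} $response->code, "\n";
      close $log;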

    Read the article

  • Lightbox-style dialog shows below YouTube movie on Mac OS 10.6

    - by Mark
    This is a "but it works on my machine" one and could be tricky: I have a lightbox-style HTML dialog that shows a menu on top of a web page. It can be injected into any web page via a JavaScript bookmarklet. One of my users is trying to use it on YouTube.com with the result that the flash movie is rendered on top of the dialog (a div with high z-index). I can't reproduce this. It works just fine for me. The dialog shows up on top of everything else on youtube.com, the video included. I had him save the page in Safari as Webarchive and send it to me. Even that shows the menu rendered correctly for me. I use the exact same version of Safari (4.0.5/531.22.7) and Flash (10.1 r53, latest beta). Only difference I could find is that he uses Snow Leopard (10.6.6) and I "only" 10.5.8. Has anybody noticed similar problems? I'm afraid that the usual wmode recommendation won't solve this (I tried & it works on my machine anyway)... Thanks! Mark

    Read the article

  • What is the difference between RegSvr and RegServer?

    - by Rahul
    Why is there a difference in registering a COM component on 32-bit and 64-bit? I mean, in one case you have to use something like:

      RegSvr32 COM.exe
      RegSvr32 COM.dll

    while on a 64-bit OS you have to use something like:

      COM.exe /RegServer
      COM.exe /RegSvr

    Are /RegServer and /RegSvr the same or different? If different, what is the difference? Thanks in advance, Rahul
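
    A hedged summary, not from the original post: regsvr32 registers in-process servers (DLLs) by loading them and calling their exported DllRegisterServer, while /RegServer is the conventional switch that an out-of-process (EXE) server handles itself in its own startup code; whether /RegSvr is also accepted depends entirely on how that particular EXE parses its command line. For example:

      rem In-process server (DLL): regsvr32 loads it and calls DllRegisterServer
      regsvr32 COM.dll

      rem On 64-bit Windows, a 32-bit DLL is registered with the 32-bit regsvr32
      C:\Windows\SysWOW64\regsvr32.exe COM.dll

      rem Out-of-process server (EXE): the EXE registers itself when started with /RegServer
      COM.exe /RegServer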

    Read the article

  • SEO + international sites? country.domain.com or domain.country?

    - by Pure.Krome
    Hi folks, is it better to have separate country-specific domains (which cost more money) or subdomains which define the country, for better SEO? e.g.

      stackoverflow.com
      stackoverflow.com.au
      stackoverflow.co.uk

    vs

      stackoverflow.com
      au.stackoverflow.com
      uk.stackoverflow.com

    Assumption: in the search engine webmaster tools, each subdomain is associated with a country. e.g. au.stackoverflow.com is associated with the country Australia. Cheers!

    Update: I understand that both methods do work, especially when I utilize the assumption listed above. The question is about: which method is better? Is there such a small SEO difference between them? Is the first method way way way better than the second at getting better SEO results?

    Update #2: A number of folks have suggested that the following is a good/better approach:

      stackoverflow.com/
      stackoverflow.com/au
      stackoverflow.com/uk

    By adding a country-specific ISO code as the first folder of the URL, that folder can be recognised as the country. But a number of SEO mates have suggested that this is a waste of valuable folder-level space. Er.. how can I explain. OK, it's been suggested by some SEO experts that if the number of levels or folders in the URL exceeds 5 then the page drops dramatically in importance. Basically, you don't want to make it deep. As such, adding the country as the first level can be considered a waste, especially when it can be handled by the domain OR subdomain - hence the question :) So, any more thoughts on this? (Maybe SO is the wrong place to ask this question?)

    Read the article

  • com.jcraft.jsch.JSchException: UnknownHostKey

    - by Alex
    I don't know how SSH works and I think that's a simple question. How do I fix that exception:

      com.jcraft.jsch.JSchException: UnknownHostKey: mywebsite.com. RSA key fingerprint is 22:fb:ee:fe:18:cd:aa:9a:9c:78:89:9f:b4:78:75:b4

    I know I should verify that key or something, but there is like zero documentation for Jsch. Here is my code, it's really straightforward:

      import com.jcraft.jsch.JSch;
      import com.jcraft.jsch.Session;

      public class ssh {
          public static void main(String[] arg) {
              try {
                  JSch jsch = new JSch();

                  // create SSH connection
                  String host = "mywebsite.com";
                  String user = "username";
                  String password = "123456";

                  Session session = jsch.getSession(user, host, 22);
                  session.setPassword(password);
                  session.connect();
              } catch (Exception e) {
                  System.out.println(e);
              }
          }
      }
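
    Two hedged options (the known_hosts path is just an example, and the security trade-off is the reader's to judge): either point JSch at a known_hosts file that already trusts the server, or disable strict host-key checking for testing:

      // (a) reuse a known_hosts file that already contains mywebsite.com's key
      jsch.setKnownHosts("/home/username/.ssh/known_hosts");

      // (b) or, for testing only, skip host-key verification entirely (insecure)
      session.setConfig("StrictHostKeyChecking", "no");

      session.connect();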

    Read the article

  • substitution of someaddress.com on local desktop computer

    - by dev
    There is a VDS server with an IP (for example 105.123.123.123) running a working Apache service. And there is a desktop computer with Linux on board (but really I presume there is no difference). I need to type an address like someaddress.com into the web browser and see the website hosted on my server. My /etc/hosts:

      127.0.0.1       localhost
      105.123.123.123 someaddress.com
      105.123.123.123 www.someaddress.com

    But it doesn't work; I still see the real someaddress.com website. What can be wrong? It would be great if you could help me with that. P.S. Why do I need this? There is one project with fixed links (like someaddress.com/inf), and I need to test it.
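
    A few hedged checks, assuming a stock glibc resolver: they verify that lookups actually consult /etc/hosts before DNS, and let you bypass the browser (which may be using its own cache or a proxy):

      # should print 105.123.123.123 if /etc/hosts is being consulted
      getent hosts someaddress.com

      # 'files' should come before 'dns' on the hosts: line
      grep ^hosts: /etc/nsswitch.conf

      # bypass the browser entirely and ask Apache on the VDS directly
      curl -H 'Host: someaddress.com' http://105.123.123.123/inf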

    Read the article

  • Do I need to use http redirect code 302 or 307?

    - by Iain Fraser
    I am working on a CMS that uses a search facility to output a list of content items. You can use this facility as a search engine, but in this instance I am using it to output the current month's Media Releases from an archive of all Media Releases. The default parameters for these "Data Lists", as they are called, don't allow you to specify "current month" or "current year" for publication date - only "last x days" or "from dateA to dateB". The search facility will accept querystring parameters though, so I intend to code around it like this:

      1. Page loads.
      2. How many days into the current month are we?
      3. Do we have a query string that asks for a list including this many days?
      4. If no, redirect the client back to this page with the appropriate query-string included.
      5. If yes, allow the CMS to process the query.

    Now here's the rub. Suppose the spider from your favourite search engine comes along and tries to index your main Media Releases page. If you were to use a 301 redirect to the default query page, the spider would assume the main page was defunct and choose to add the query page to its index instead of the main page. Now I see that 302 and 307 indicate that a page has been moved temporarily; if I do this, are spiders likely to pop the main page into their index like I want them to? Thanks very much in advance for your help and advice. Kind regards, Iain
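
    Purely for illustration (the parameter name "days" and its value are made up, not from the CMS), the temporary redirect in step 4 would look something like this on the wire:

      HTTP/1.1 302 Found
      Location: /media-releases?days=14
      Cache-Control: no-cache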

    Read the article

  • How do I get uri of HTTP packet with winpcap?

    - by Gtker
    Based on this article I can get all incoming packets.

      /* Callback function invoked by libpcap for every incoming packet */
      void packet_handler(u_char *param, const struct pcap_pkthdr *header, const u_char *pkt_data)
      {
          struct tm *ltime;
          char timestr[16];
          ip_header *ih;
          udp_header *uh;
          u_int ip_len;
          u_short sport, dport;
          time_t local_tv_sec;

          /* convert the timestamp to readable format */
          local_tv_sec = header->ts.tv_sec;
          ltime = localtime(&local_tv_sec);
          strftime(timestr, sizeof timestr, "%H:%M:%S", ltime);

          /* print timestamp and length of the packet */
          printf("%s.%.6d len:%d ", timestr, header->ts.tv_usec, header->len);

          /* retrieve the position of the ip header */
          ih = (ip_header *) (pkt_data + 14); /* length of ethernet header */

          /* retrieve the position of the udp header */
          ip_len = (ih->ver_ihl & 0xf) * 4;
          uh = (udp_header *) ((u_char*)ih + ip_len);

          /* convert from network byte order to host byte order */
          sport = ntohs(uh->sport);
          dport = ntohs(uh->dport);

          /* print ip addresses and udp ports */
          printf("%d.%d.%d.%d.%d -> %d.%d.%d.%d.%d\n",
                 ih->saddr.byte1, ih->saddr.byte2, ih->saddr.byte3, ih->saddr.byte4, sport,
                 ih->daddr.byte1, ih->daddr.byte2, ih->daddr.byte3, ih->daddr.byte4, dport);
      }

    But how do I extract the URI information in packet_handler?
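
    Not from the article: HTTP requests travel over TCP rather than UDP, so the handler has to step over the TCP header (its length comes from the data-offset field) and then parse the request line out of the payload text. A rough sketch to drop into packet_handler (needs <string.h>; the buffer sizes and the GET/POST check are arbitrary assumptions):

      /* sketch: locate the TCP payload and pull out "METHOD URI HTTP/1.x" */
      const u_char *ip_start  = pkt_data + 14;                      /* skip Ethernet header */
      u_int         ip_hlen   = (ip_start[0] & 0x0f) * 4;           /* IHL field, in bytes  */
      const u_char *tcp_start = ip_start + ip_hlen;
      u_int         tcp_hlen  = ((tcp_start[12] & 0xf0) >> 4) * 4;  /* data offset, bytes   */
      const u_char *payload   = tcp_start + tcp_hlen;
      int payload_len = (int)header->caplen - 14 - (int)ip_hlen - (int)tcp_hlen;

      char line[2048];
      char method[8], uri[1024];
      if (payload_len > 0) {
          /* copy into a NUL-terminated buffer before using string functions */
          int n = payload_len < (int)sizeof line - 1 ? payload_len : (int)sizeof line - 1;
          memcpy(line, payload, n);
          line[n] = '\0';
          if (sscanf(line, "%7s %1023s", method, uri) == 2 &&
              (strcmp(method, "GET") == 0 || strcmp(method, "POST") == 0)) {
              printf("URI: %s\n", uri);
          }
      }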

    Read the article

  • How do I get the size of a response from a Spring 2.5 HTTP remoting call?

    - by aarestad
    I've been poking around the org.springframework.remoting.httpinvoker package in Spring 2.5 trying to find a way to get visibility into the size of the response, but I keep going around in circles. Via another question I saw here, I think what I want to do is get a handle on the InputStream that represents the response from the server, and then wrap it with an Apache commons-io CountingInputStream. What's the best way to go about doing this? For the moment, I'd be happy with just printing the size of the response to stdout, but eventually I want to store it in a well-known location in my app for optional display.
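
    A sketch of that idea - it assumes Spring 2.5's protected readRemoteInvocationResult(InputStream, String) hook on AbstractHttpInvokerRequestExecutor and commons-io's CountingInputStream, so the signatures are worth double-checking against the versions in use:

      import java.io.IOException;
      import java.io.InputStream;
      import org.apache.commons.io.input.CountingInputStream;
      import org.springframework.remoting.httpinvoker.SimpleHttpInvokerRequestExecutor;
      import org.springframework.remoting.support.RemoteInvocationResult;

      public class CountingHttpInvokerRequestExecutor extends SimpleHttpInvokerRequestExecutor {

          @Override
          protected RemoteInvocationResult readRemoteInvocationResult(InputStream is, String codebaseUrl)
                  throws IOException, ClassNotFoundException {
              // wrap the response stream so every byte read gets counted
              CountingInputStream counting = new CountingInputStream(is);
              RemoteInvocationResult result = super.readRemoteInvocationResult(counting, codebaseUrl);
              System.out.println("HTTP invoker response size: " + counting.getByteCount() + " bytes");
              return result;
          }
      }

    The custom executor would then be handed to the HttpInvokerProxyFactoryBean via its httpInvokerRequestExecutor property.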

    Read the article

  • Can a http server detect that a client has cancelled their request?

    - by Nick Retallack
    My web app must process and serve a lot of data to display certain pages. Sometimes, the user closes or refreshes a page while the server is still busy processing it. This means the server will continue to process data for several minutes only to send it to a client who is no longer listening. Is it possible to detect that the connection has been broken, and react to it? In this particular project, we're using Django and NginX, or Apache. I assumed this is possible because the Django development server appears to react to cancelled requests by printing Broken Pipe exceptions. I'd love to have it raise an exception that my application code could catch. Alternatively, I could register an unload event handler on the page in question, have it do a synchronous XHR requesting that the previous request from this user be cancelled, and do some kind of inter-process communication to make it so. Perhaps if the slower data processing were handed to another process that I could more easily identify and kill, without killing the responding process...

    Read the article

  • http 301 redirect from htaccess to domain host

    - by neilc
    Hi, I have the following in a .htaccess file:

      redirect 301 /page.php http://domain.com/page

    which works fine and as expected. I want to be able to redirect the following:

      http://domain2.com/page.php to http://domain2.com/page
      http://domain3.com/page.php to http://domain3.com/page
      http://domain4.com/page.php to http://domain4.com/page

    So basically, whatever the domain name is, I want to redirect to it. But the catch is I want to use a 301 redirect. Is this even possible? Or should I be using RewriteCond and RewriteRule?
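
    A hedged mod_rewrite sketch: with a relative substitution and the R=301 flag, Apache issues the redirect against whichever host the request arrived on, so one rule covers every domain pointed at the site:

      RewriteEngine On
      RewriteRule ^page\.php$ /page [R=301,L]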

    Read the article

  • What HTTP headers do I need to stream an ASF file?

    - by SoaperGEM
    I have a simple PHP script that will either serve up a streaming ASF file or not, depending on whether you're logged in and have access to the file. It basically does this:

      <?php
      header('Content-Type: video/x-ms-asf');
      header('Content-Disposition: inline; filename="file.asf"');
      readfile('file.asf');
      ?>

    This already works fine in Firefox; when you navigate to this file it starts the video streaming right away. But Internet Explorer is a different story. When I go to this link in IE, it consistently tries to download the file as if it were an attachment rather than streaming it in the browser. What am I missing that IE's getting hung up on?

    Read the article

  • IIS7 or .Net 301 Redirects from 1 domain to another

    - by RandomBen
    I have 2 domains. For the question, I will call them www.old.com and www.new.com. Both URLs are pointing to the same IIS7 site instance. I need to set it up so that when someone goes to www.old.com they get a 301 redirect to www.new.com. The tricky part is I am using URL rewrites for pages within the site. So www.old.com/About.aspx redirects to www.new.com/About. To get that to work with IIS7 URL rewrite rules, it also means that www.new.com/About.aspx redirects to www.new.com/About. That is fine and is not a big deal. My issue is how do I redirect the main domain without losing the URL rewrites from the sub pages? I don't care if I use a module within IIS7 or if I need to do it in .NET code.
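
    One hedged approach with the IIS URL Rewrite module (the rule name and patterns are illustrative): a {HTTP_HOST} condition restricts the 301 to requests arriving on the old domain, so the existing rewrite rules for the sub pages stay untouched. The rule goes inside <system.webServer>/<rewrite>/<rules> in web.config:

      <rule name="Redirect old.com to new.com" stopProcessing="true">
        <match url=".*" />
        <conditions>
          <add input="{HTTP_HOST}" pattern="^(www\.)?old\.com$" />
        </conditions>
        <action type="Redirect" url="http://www.new.com/{R:0}" redirectType="Permanent" />
      </rule>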

    Read the article

  • asp.net mvc got 405 error on HTTP DELETE request?

    - by DucDigital
    Hi everyone... I'm trying to send a DELETE to a URL in ASP.NET MVC using JavaScript, but I always get a "405 Method Not Allowed" response. Is there any way to make this work? FYI: I've put the [AcceptVerb(HttpVerb.Delete)] attribute on my controller. The request is:

      DELETE /post/delete/8

    Read the article

  • HTTP compression - can I configure a client to compress the data sent to a server?

    - by lgomide
    Hello, I'm using IIS 7 as the web server for my application. If I enable dynamic content compression on the server, will this also enable clients to send compressed data to the server, if they can? I mean, my application uses SOAP web services, and clients usually send large chunks of data to the server. The clients are written in C#/.NET. Is there any kind of configuration I can do in a web reference / service reference in order to tell them to compress the content before they send it to IIS? And do I have to do any kind of configuration in IIS in order for this to work? Thanks in advance

    Read the article

  • What HTTP headers are required to refresh a page on the back button?

    - by cantabilesoftware
    I'm trying to get a page to refresh when navigated to from the back button. From what I understand after reading around a bit, I should just need to mark the page as uncacheable, but I can't get any browser to refresh the page. These are the headers I've currently got:

      Cache-Control: no-cache
      Connection: keep-alive
      Content-Encoding: gzip
      Content-Length: 1832
      Content-Type: text/html; charset=utf-8
      Date: Mon, 07 Jun 2010 14:05:39 GMT
      Expires: -1
      Pragma: no-cache
      Server: Microsoft-IIS/7.5
      Vary: Accept-Encoding
      Via: 1.1 smoothwall:800 (squid/2.7.STABLE6)
      X-AspNet-Version: 2.0.50727
      X-AspNetMvc-Version: 2.0
      X-Cache: MISS from smoothwall
      X-Powered-By: ASP.NET

    Why would the browser pull this page from its history and not refresh it?
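
    For what it's worth, a hedged variant of those cache headers: adding no-store (and must-revalidate) is what most browsers treat as "do not serve this from history", though back-button behaviour still varies by browser:

      Cache-Control: no-cache, no-store, must-revalidate, max-age=0
      Pragma: no-cache
      Expires: -1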

    Read the article
