Search Results

Search found 37539 results on 1502 pages for 'google friend connect'.

Page 512/1502

  • java.lang.OutOfMemoryError: unable to create new native thread

    - by Brad
    I consistently get this exception when trying to run my JUnit tests on my Mac:

        java.lang.OutOfMemoryError: unable to create new native thread
            at java.lang.Thread.start0(Native Method)
            at java.lang.Thread.start(Thread.java:658)
            at java.util.concurrent.ThreadPoolExecutor.addIfUnderMaximumPoolSize(ThreadPoolExecutor.java:727)
            at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:657)
            at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:92)
            at com.google.appengine.tools.development.ApiProxyLocalImpl$PrivilegedApiAction.run(ApiProxyLocalImpl.java:197)
            at com.google.appengine.tools.development.ApiProxyLocalImpl$PrivilegedApiAction.run(ApiProxyLocalImpl.java:184)
            at java.security.AccessController.doPrivileged(Native Method)
            at com.google.appengine.tools.development.ApiProxyLocalImpl.doAsyncCall(ApiProxyLocalImpl.java:172)
            at com.google.appengine.tools.development.ApiProxyLocalImpl.makeAsyncCall(ApiProxyLocalImpl.java:138)

    The same set of unit tests passes perfectly fine on Ubuntu and Windows. Some information about my system resources on the Mac:

        $ ulimit -a
        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        file size               (blocks, -f) unlimited
        max locked memory       (kbytes, -l) unlimited
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 1
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 266
        virtual memory          (kbytes, -v) unlimited

        $ java -version
        java version "1.6.0_24"
        Java(TM) SE Runtime Environment (build 1.6.0_24-b07-334-10M3326)
        Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02-334, mixed mode)

    The reason I don't think this is an application issue is that the same tests pass in different environments. I have tried setting the heap to 1024m and 512m and the stack to 64k and 128k (and each of these combinations) with no luck. My open-files limit was originally 256 and I have bumped it to 1024. I have been googling around for a bit and all the posts say to decrease the heap size and increase the stack size, but that doesn't seem to help. Does anyone have any more ideas?

    EDIT: Here is some environment information from my Ubuntu box:

        $ ulimit -a
        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 20
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 16382
        max locked memory       (kbytes, -l) 64
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) unlimited
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited

        $ java -version
        java version "1.6.0_24"
        Java(TM) SE Runtime Environment (build 1.6.0_24-b07)
        Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02, mixed mode)
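
    One visible difference between the two machines is max user processes (266 on the Mac vs. unlimited on Ubuntu), which is worth ruling out before anything else, since the App Engine dev ApiProxy is spawning threads when the error fires. A hedged sketch of what one might try before the test run; the limit values, classpath and test class name are illustrative, not taken from the project:

        # check / raise the per-user process ceiling for this shell
        # (on OS X the hard limit may also need the kernel sysctls)
        ulimit -u
        sudo sysctl -w kern.maxproc=2048 kern.maxprocperuid=1024
        ulimit -u 1024

        # run the tests with a smaller native stack per thread
        java -Xss256k -Xmx512m -cp "build/classes:lib/*" org.junit.runner.JUnitCore com.example.AllTests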

    Read the article

  • Difference in performance: local machine VS amazon medium instance

    - by user644745
    I see a drastic difference in performance metrics when I run apache benchmark (ab) against my local machine vs. production hosted on an Amazon medium instance. The same concurrency level (5) and the same total number of requests (111) were run against both. Amazon has more memory than my local machine, but my local machine has 2 CPUs vs. 1 CPU in m1.medium. My internet speed is very low at the moment; I am getting a transfer rate of 25.29 KB/s. How can I improve the performance? I also don't know how to interpret Connect, Processing, Waiting and Total in the ab output.

    Here is localhost:

        Server Hostname:        localhost
        Server Port:            9999
        Document Path:          /
        Document Length:        7631 bytes
        Concurrency Level:      5
        Time taken for tests:   1.424 seconds
        Complete requests:      111
        Failed requests:        102
           (Connect: 0, Receive: 0, Length: 102, Exceptions: 0)
        Write errors:           0
        Total transferred:      860808 bytes
        HTML transferred:       847155 bytes
        Requests per second:    77.95 [#/sec] (mean)
        Time per request:       64.148 [ms] (mean)
        Time per request:       12.830 [ms] (mean, across all concurrent requests)
        Transfer rate:          590.30 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:        0    0   0.5      0       1
        Processing:    14   63  99.9     43     562
        Waiting:       14   60  96.7     39     560
        Total:         14   63  99.9     43     563

    And this is production:

        Document Path:          /
        Document Length:        7783 bytes
        Concurrency Level:      5
        Time taken for tests:   33.883 seconds
        Complete requests:      111
        Failed requests:        0
        Write errors:           0
        Total transferred:      877566 bytes
        HTML transferred:       863913 bytes
        Requests per second:    3.28 [#/sec] (mean)
        Time per request:       1526.258 [ms] (mean)
        Time per request:       305.252 [ms] (mean, across all concurrent requests)
        Transfer rate:          25.29 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:      290  297  14.0    293     413
        Processing:   897 1178  63.4   1176    1391
        Waiting:      296  606 135.6    588    1171
        Total:       1191 1475  66.0   1471    1684
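
    As a rough reading of the ab columns: Connect is the time to establish the TCP connection, Waiting is the time from sending the request to the first byte of the response, Processing is Waiting plus receiving the rest of the body, and Total is Connect plus Processing. The ~290 ms Connect times and the 25 KB/s transfer rate in the production run are therefore dominated by the slow client link rather than the server, so a fairer comparison is to benchmark from near the server. A sketch, with an illustrative URL:

        # run from the EC2 instance itself (or another instance in the same region)
        # -k keeps connections alive so the TCP handshake is not paid on every request
        ab -n 111 -c 5 -k http://127.0.0.1/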

    Read the article

  • error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure(35)

    - by ArunS
    We have an online shopping site. When I go to the checkout page I get an error like this: "error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure(35)". From the Apache error log I can see some attempts to connect to api.paypal.com. Here is the relevant part of my Apache error log:

        * About to connect() to api.paypal.com port 443 (#0)
        *   Trying 66.211.168.123... * connected
        * Connected to api.paypal.com (66.211.168.123) port 443 (#0)
        * successfully set certificate verify locations:
        *   CAfile: none
            CApath: /etc/ssl/certs
        * error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure
        * Closing connection #0

    When I try to connect to api.paypal.com using curl I get an error like this:

        curl -iv https://api.paypal.com/
        * About to connect() to api.paypal.com port 443 (#0)
        *   Trying 66.211.168.91... connected
        * Connected to api.paypal.com (66.211.168.91) port 443 (#0)
        * successfully set certificate verify locations:
        *   CAfile: none
            CApath: /etc/ssl/certs
        * SSLv3, TLS handshake, Client hello (1):
        * SSLv3, TLS handshake, Server hello (2):
        * SSLv3, TLS handshake, CERT (11):
        * SSLv3, TLS handshake, Request CERT (13):
        * SSLv3, TLS handshake, Server finished (14):
        * SSLv3, TLS handshake, CERT (11):
        * SSLv3, TLS handshake, Client key exchange (16):
        * SSLv3, TLS change cipher, Client hello (1):
        * SSLv3, TLS handshake, Finished (20):
        * SSLv3, TLS alert, Server hello (2):
        * error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure
        * Closing connection #0
        curl: (35) error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure

    Can anyone help me figure this out? Thanks in advance. Arun S
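
    The trace shows the server sending "Request CERT (13)" and then aborting after the client replies with an empty certificate, which is what happens when an endpoint that expects a client certificate doesn't receive one; api.paypal.com is PayPal's certificate-based API endpoint (api-3t.paypal.com is the signature-based one). A hedged sketch of retrying with the API certificate and key attached; the file paths are illustrative:

        # file names are illustrative; use the API certificate/key downloaded
        # from the PayPal API credentials page
        curl -iv --cert /etc/ssl/paypal/api_cert.pem --key /etc/ssl/paypal/api_key.pem \
             https://api.paypal.com/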

    Read the article

  • Failed to su after making a chroot jail

    - by arepo21
    On a 64-bit CentOS host I am using the script make_chroot_jail.sh to put a user in a jail so that it cannot see anything except its home at /home/jail/home/user1. I did it by typing:

        sudo ./make_chroot_jail.sh user1

    Afterwards, when trying to connect as user1, I first got an error like:

        /bin/su: user guest does not exist

    I fixed this by copying some missing libraries:

        sudo cp /lib64/libnss_compat.so.2 /lib64/libnss_files.so.2 /lib64/libnss_dns.so.2 /lib64/libxcrypt.so.2 /home/jail/lib64/
        sudo cp -r /lib64/security/ /home/jail/lib64/

    But now, when trying to connect as user1 by typing su user1 and then entering its password, I get this error:

        could not open session

    So the question is: how can I connect as user1 in this situation?

    P.S. Here are the permissions of some files; this might be helpful for finding a solution:

        -rwsr-xr-x 1 root root /home/jail/bin/su
        drwxr-xr-x 4 root root /home/jail/etc
        -rw-r--r-- 1 root root /home/jail/etc/pam.d/su
        -rw-r--r-- 1 root root /home/jail/etc/passwd
        -rw------- 1 root root /home/jail/etc/shadow

    UPDATE 1: After some modifications I managed to connect as user1, but the session closes immediately! I guess this is a PAM issue, but I can't find a way to fix it. Here is the log entry for the close action from /var/log/secure:

        Oct  6 15:19:42 localhost su: pam_unix(su:session): session closed for user user1

    What makes the session exit immediately after launching?
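
    "could not open session" from su generally means PAM could not run its session stack inside the jail: configuration that /etc/pam.d/su includes, the modules it references, or the user's shell and home are missing. A hedged sketch of what one might copy and check; the exact file list is an assumption, and ldd on the jailed binaries shows what is really still missing:

        # PAM config that su includes, NSS/login config, plus a writable home
        sudo cp /etc/pam.d/system-auth /home/jail/etc/pam.d/
        sudo cp /etc/nsswitch.conf /etc/login.defs /home/jail/etc/
        sudo mkdir -p /home/jail/home/user1 && sudo chown user1: /home/jail/home/user1

        # list every shared library the jailed binaries still need
        ldd /home/jail/bin/su /home/jail/bin/bash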

    Read the article

  • One network, two macbooks, one is fast and the other is slow

    - by Brendan
    I really need help for my friend. I know next to nothing about computers. My roommate and I both have MacBook Pros from the same year running OS X, and we both connect wirelessly to the same Xfinity WiFi. While mine runs perfectly fine, my roommate complains that his works very slowly and times out every few seconds. I can't seem to figure out why this is. He is trying to get me to switch Internet providers because he is convinced that it is their problem, but this can't possibly be the issue since it works great on mine. He has an Xbox hooked up to the WiFi that he says also works poorly. I really can't see switching providers given that I am experiencing absolutely zero problems. How can I help my friend?

    Read the article

  • Linux or Windows for a server?

    - by Matt
    I'm a Linux guy when it comes to (web) servers, for the following reasons:

        - Legally free
        - Fast software updates (unless you're running CentOS :)
        - Powerful CLI management of services
        - Easy to secure (in terms of users and groups)
        - Web server software is, well, built for Linux... Apache, PHP, Python, etc. are Linux programs that get ported to Windows - I'm 90% sure of this

    Unless the web server needed to run ASP, I wouldn't use Windows. My boss' IT friend is a Windows guy, though. He recently got a server set up in the office to run Microsoft Exchange and some other stuff. What I'm asking is: if he wanted to start running websites on this thing, what would be good reasoning to convince him otherwise? He's not very bright in terms of IT and the IT friend is all Windows. So it's two against one here... What would you say to running a Windows web server?

    Read the article

  • PCMCIA preventing laptop from booting up

    - by Pongus
    My friend has brought his dead laptop over. It is a Sony Vaio with Windows Vista. When booting up, it blue-screens and reboots. I have tried booting up with Knoppix and it freezes when starting up PCMCIA. If I start Knoppix with knoppix nopcmcia it disables PCMCIA and boots up fine, so I presume there is a problem with the PCMCIA controller on the laptop. My friend does not use any PCMCIA cards, so he doesn't really need it. Is there any way I can similarly disable PCMCIA so that Windows will boot up fine?

    Read the article

  • Troubles with MSVC++ redistributable

    - by Karolis Juodele
    A friend of mine is trying to install AutoCAD. In the install log it is written:

        Install Microsoft Visual C++ 2005 Redistributable (x86) Failed
        Installation aborted, Result=1603

    I've found a site which (hopefully) explains how to solve this, but my friend has not been able to uninstall the redistributables. The error message says:

        The feature you are trying to use is on a network resource that is unavailable

    I've already suggested running the cleanup tool it mentions, but that didn't help much; only some of the redistributables could be uninstalled. What can be done? Do you think running some other cleanup tool could work? Or will we have to uninstall manually (and how exactly would we do that)?

    Read the article

  • How can I debug a port/connectivity issue?

    - by rfw21
    I am running a simple WebSocket server on Amazon EC2 (Fedora Core). I've opened the relevant port using ec2-authorize and checked that it's open. iptables is definitely not running. However, I can't connect to the port from outside EC2. I've tried the following (my server is running on port 7000):

        telnet ec2-public-dns.xx.xx.xx.amazon.com 7000
            (from within EC2: connects fine)
        nmap localhost
            (output includes the line: 7000/tcp open afs3-fileserver)
        telnet ec2-public-dns.xx.xx.xx.amazon.com 7000
            (this time from my local machine: I get "connection refused: Unable to connect to remote host")

    The strange thing is this: if I start Nginx on port 7000 then it works and I can connect from outside EC2! And the WebSocket server fails on port 80, where Nginx works fine. To me this suggests a problem with the WebSocket server, BUT I can connect to it successfully from within EC2 (and it works fine on a different VPS account). How can I debug this further? If anybody can stop me tearing my hair out, I'd be very grateful indeed :)
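
    Since Nginx on the same port is reachable from outside while the WebSocket server is not, the security group is evidently fine and the difference most plausibly lies in how (or on which address/protocol) the server binds. A hedged sketch of the usual next checks, nothing here is specific to this server:

        # which address and protocol is the listener actually bound to?
        netstat -tlnp | grep :7000      # 127.0.0.1 vs 0.0.0.0, tcp vs tcp6

        # watch whether outside connection attempts reach the instance at all
        tcpdump -ni eth0 tcp port 7000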

    Read the article

  • 10Gbe sfp+ Cross Over Cable required? Is there such a thing?

    - by dc-patos
    To preface, this is my first experience with 10GbE networking and I have encountered an issue for which research does not seem to document a solution... I have two servers (an older DL580 G5 and a DL380 G5), each with an HP NC522SFP 10GbE dual SFP+ port adapter. I have purchased copper "passive" direct-attach cables (which look like Twinax), which seem to work well when I connect them to the SFP+ ports on my Dell 5524 switch. However, if I directly connect the two servers with the same cable, the link doesn't come up. I am running WS2012 Standard on each server. My intention is to use one of these servers as a home-brew SAN, and I would like to enable multiple 10GbE paths for iSCSI traffic. My question(s): Can I connect the two adapters directly to each other, as I would with older, slower generations of Ethernet? If I can, do I require a crossover cable, or some other type of SFP+ cable to do this? My 10GbE SFP+ switch ports are at a premium, but server-to-server connections are doable in small numbers for me, and I would really like the multiple paths this would give me. Is there a simple solution?

    Read the article

  • Windows Remote Desktop: "configuring remote session" closes without error

    - by icelava
    I have a desktop/laptop pair at home running x64 Windows 7 (the desktop was upgraded from Windows Vista and works just fine). I remote-desktop into them on a daily basis when I'm outside. In recent weeks I have occasionally failed to connect to my desktop: it connects and authenticates fine, but the "configuring remote session" dialog simply closes and does not show me the desktop window or any error message. There is no related error in the event log on the desktop computer.

    Some suggestions call for disabling remote audio, which mine already is, but trying different audio modes did not yield any different result. I am not too sure if this is related to video card drivers (they do get auto-updated), since remote desktop video is supposed to go through a virtual display driver? Nonetheless, the desktop drives three monitors via an ATI Radeon HD5770 (1 DisplayPort, 2 DVI). I do not see a real problem with that, since I can usually connect and operate it remotely. I tried to "remote tunnel" via my home laptop, but obviously that won't work either, as the problem lies in the desktop. What other conditions can cause remote desktop to break without an error?

    UPDATE: I came home and still couldn't connect to the desktop until I restarted the entire system.

    Read the article

  • SQUID proxy - open FTP (and other ports)

    - by gaffcz
    How can I open ports other than HTTP and HTTPS using a Squid proxy? I have the latest version of Squid running on Fedora 10, but I'm not able to open the FTP port. Part of my squid.conf:

        acl manager proto cache_object
        acl localhost src 127.0.0.1/32 ::1
        acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
        acl ftp proto FTP
        acl ftp_port port 21
        always_direct allow FTP
        acl SSL_ports port 443 20 21 22
        acl Safe_ports port 20          # ftp
        acl Safe_ports port 21          # ftp
        acl Safe_ports port 22          # sftp
        acl Safe_ports port 80          # http
        acl Safe_ports port 280         # http-mgmt
        acl Safe_ports port 443         # https
        acl Safe_ports port 1025-65535  # unregistered ports
        acl CONNECT method CONNECT

        http_access allow manager localhost
        http_access deny manager

        # USER privileges (encoded in file passwd)
        auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/passwd
        acl AUTHUSERS proxy_auth REQUIRED

        # BLACKLIST (in file denied.conf)
        acl denied_domains dstdomain "/etc/squid/DNDdomains.conf"
        acl denied_regex url_regex "/etc/squid/DNDregex.conf"

        http_access deny denied_regex
        http_access deny denied_domains
        http_access allow AUTHUSERS
        http_access deny !Safe_ports
        http_access deny CONNECT !SSL_ports
        http_access allow ftp_port CONNECT
        http_access allow ftp
        http_access allow localhost
        http_access deny all
        #http_reply_access allow all
        #http_access allow all

        http_port 3128
        hierarchy_stoplist cgi-bin ?
        cache_dir ufs /var/spool/squid 10000 16 256
        coredump_dir /var/spool/squid
        refresh_pattern ^ftp:           1440    20%     10080
        refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
        refresh_pattern .               0       20%     4320

    I've tried to:

        - add "acl ftp proto FTP" / "acl ftp_port port 21" and "http_access allow ftp"
        - add/remove ports 20 and 21 from the SSL_ports list
        - set up iptables

    But nothing helped. Is it even possible to use a new version of Squid for FTP transfers?
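
    Two things worth keeping in mind here: Squid is an HTTP proxy, so it only fetches ftp:// URLs on behalf of a browser speaking HTTP (a native FTP client cannot connect through it at all; that needs an FTP-aware proxy such as frox), and http_access rules are evaluated top to bottom with the first match winning, so rules placed after a matching deny never run. A hedged sketch of an ordering that lets authenticated users fetch ftp:// URLs; this is an illustration of the principle, not a drop-in replacement for the config above:

        # evaluated top to bottom; first match wins
        http_access deny !Safe_ports            # port 21 is already a Safe_port above
        http_access deny CONNECT !SSL_ports
        http_access allow localhost
        http_access allow AUTHUSERS             # authenticated users may fetch ftp:// URLs
        http_access deny all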

    Read the article

  • Ubuntu 9.10 is not starting on netbook

    - by anonymous
    I installed Ubuntu 9.10 onto my external hard drive because it's cool to be able to borrow a friend's laptop and have my entire system with me. It works on the 2 systems I tested it on: my desktop and my mom's laptop. I had to work on something earlier, so I borrowed my friend's netbook. I started it up, chose Ubuntu 9.10.20, and it got to the Ubuntu loading screen with the 3 people holding hands right before user selection; then it suddenly went black. Naturally, I freaked out because it wasn't my laptop. I held the power button down and reset the netbook, but the screen was still black; it didn't even show the BIOS. I repeated the process without my hard drive, and it was still black without the BIOS showing up. I had to remove the battery, plug it into a power source, and power up to start the netbook again. Can anyone tell me what happened?

    Read the article

  • DNS stops working occasionally

    - by Andrey
    I have tried using Google DNS and the one provided via DHCP. At some point my PC (Windows 7) stops resolving domain names, but the DNS server is perfectly pingable. What can be the reason? Thanks!

    Edit: It is really weird. It can stop and start working again within a few seconds. The problem is that DNS requests are timing out while the DNS server is pingable at the same time. I can't understand how this is possible or what the issue might be.

        C:\Windows\System32\drivers\etc>nslookup google.com
        DNS request timed out.
            timeout was 2 seconds.
        Server:  UnKnown
        Address:  8.8.8.8

        Non-authoritative answer:
        Name:    google.com
        Addresses:  173.194.78.102
                  173.194.78.101
                  173.194.78.139
                  173.194.78.113
                  173.194.78.100
                  173.194.78.138

        C:\Windows\System32\drivers\etc>nslookup google.com
        DNS request timed out.
            timeout was 2 seconds.
        Server:  UnKnown
        Address:  8.8.8.8

        DNS request timed out.
            timeout was 2 seconds.
        DNS request timed out.
            timeout was 2 seconds.
        *** Request to UnKnown timed-out
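
    A server that answers ping but not queries points at UDP port 53 traffic being dropped or delayed somewhere between the PC and 8.8.8.8 (local firewall/antivirus, the router, or the ISP), since ping uses ICMP rather than UDP. A hedged sketch of standard checks from the same prompt; none of it is specific to this machine:

        REM clear the local resolver cache and look at what it held
        ipconfig /flushdns
        ipconfig /displaydns

        REM retry with a longer timeout and an alternative resolver
        nslookup -timeout=10 google.com 8.8.4.4

        REM if only this PC is affected, reset Winsock and the IP stack (reboot afterwards)
        netsh winsock reset
        netsh int ip reset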

    Read the article

  • I think installing PostFix solved my problem, but it seemed *too* easy

    - by Joel Marcey
    Hi, this is a follow-up to a Server Fault post I made a while ago: http://serverfault.com/questions/21633/how-do-i-target-a-different-mail-server-depending-on-domain-with-exim (more context here too: http://forum.slicehost.com/comments.php?DiscussionID=3806).

    I have a slice/VPS at Slicehost. I basically decided to scrap Exim (i.e., purge it) and start anew with my email infrastructure. In case you didn't read any of the above threads, my goal was a send-only mail infrastructure that relays all outgoing email to Google Apps, where email from domain1 (a WordPress installation) shows as coming from domain1.com and email from domain2 (a normal website) shows as coming from domain2.com.

    So I decided to give Postfix a try. I literally followed the surprisingly simple instructions here: http://sudhanshuraheja.com/2009/02/slicehost-setup-outgoing-mail-google-apps-postfix/ and voila, all seems to be working as I expected. My email tests show email coming from the proper locations (either domain1 or domain2, depending on where the emails were sent from).

    But this all seemed too simple to me. So simple, in fact, that I feel something is amiss. When I installed Postfix according to the instructions in the post above and it worked, I was surprised that I didn't have to specify an SMTP server, a port number, any authentication credentials, etc. My slice is set up with MX records for Google Apps (e.g., ASPMX.L.GOOGLE.COM.) in my DNS settings, but I am not sure if that is why it is working. My email infrastructure knowledge is admittedly limited, but with this setup am I open to:

        - Spammers using my email infrastructure?
        - My emails going to people as spam?
        - Something else sinister?

    I have actually stopped running Postfix until I understand this better. Thanks!
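
    Given that no relay host or credentials were ever configured, the most likely explanation is that Postfix is not relaying through Google at all but delivering straight to each recipient's MX records, which is normal MTA behaviour. It is not an open relay as long as mynetworks only covers the local host, but deliverability then rides on the slice's IP reputation, reverse DNS and SPF rather than on Google's. A hedged sketch of how one might confirm which mode it is actually in:

        # show only the settings that differ from Postfix defaults
        postconf -n | egrep 'relayhost|mynetworks|sasl|tls'

        # then send a test message and watch which server it is handed to
        tail -f /var/log/mail.log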

    Read the article

  • PC doesn't boot PC-BSD from USB

    - by turlando
    I've got a problem with a friend's PC: I'm installing a FreeBSD server, and to make the installation easier for my friend I'm using the PC-BSD DVD. Surprise! The CD reader doesn't read DVDs, so I'm using a USB stick to perform the install. The PC seems to support USB boot because I can choose it in the boot sequence, but the PC-BSD installation doesn't start; it boots the OS installed on the primary HD instead. I don't have physical access to the PC and can't get more information at the moment. What do you think? Thanks, and sorry for my terrible English. Tancredi Orlando.

    Read the article

  • few basic questions on webhosting (namservers & dns records)

    - by claws
    I bought a domain name on name.com and I want to use free web hosting on 110mb.com. By default name.com integrates the services of Google Apps. The name server entries are:

        ns1.name.com
        ns2.name.com
        ns3.name.com
        ns4.name.com

    When I registered on 110mb.com it gave me two addresses:

        ns1.110mb.com
        ns2.110mb.com

    This is where I'm lost. The concept is that "the domain name should point to the address of the server where the website is hosted", right? Then why are there these four entries by default, and how exactly does it work? Should I remove these four and then add the 110mb.com servers, or just append the 110mb.com server addresses to the name.com ones?

    I would like to use Google Apps. If I change these name server addresses, would that remove Google Apps? I especially want to use Google's email service. And I really don't understand what CNAME, MX and the other record types are. I want to learn about this stuff and how it exactly works, but when I search for a web host tutorial I'm unable to find any fruitful results.
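
    In short: the NS records (the four name.com entries, or 110mb's two) decide which DNS servers answer for the domain; A/CNAME records on those servers then point the web site at its host, and MX records point mail at Google. Whoever ends up hosting the DNS must carry the Google Apps MX records for Gmail to keep working. A hedged sketch of what such a zone typically contains; the A record uses a documentation IP as a placeholder and the MX hosts are Google's published ones:

        ; illustrative zone records
        @      IN  A      192.0.2.10                     ; web server address (placeholder)
        www    IN  CNAME  yourdomain.com.                ; alias for the bare domain
        @      IN  MX 10  ASPMX.L.GOOGLE.COM.
        @      IN  MX 20  ALT1.ASPMX.L.GOOGLE.COM.
        @      IN  MX 20  ALT2.ASPMX.L.GOOGLE.COM.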

    Read the article

  • How to configure networking on an appliance such that it can plug and play on any corporate network?

    - by Joshua Lim
    I had a chance to configure a Moxa NPort device server appliance on my client's network; it was very easy to do, done in just 2 minutes. Here's what I did:

        - The Moxa device server had a preset IP address of 192.168.127.254 and subnet mask 255.255.255.0 - http://www.moxa.com/doc/manual/nport/5400/NPort_5400_Series_Users_Manual_v4.pdf
        - Moxa provides Windows software which I used to "scan" for the device server. It worked like magic! The software returns a list of device servers found.
        - Each device server is identified by its MAC address, and by selecting the device server in the software I can reset the default IP address and subnet mask of that device server!

    In comparison, during an earlier project I spent 2 hours trying to get KVM to work for a Windows 7 embedded appliance I was trying to install on my client's network - http://superuser.com/questions/380305/how-to-configure-windows-7-professional-appliance-pc-on-my-clients-network-usin. Prior to that, I had already tried pre-configuring the IP address and subnet mask to the ones my client provided, yet the appliance still couldn't connect to the client's network! I also tried a cross cable, which didn't work either. After KVM worked, I discovered that the network settings were "lost" after I plugged the machine into the client's network.

    Now my question is: what can I do to set up my Windows 7 embedded appliance so that it can connect to any network the way the Moxa device server can? I tried experimenting on my own network using a Windows machine configured with an IP address of 192.168.127.254 and subnet mask 255.255.255.0, but it doesn't connect to my network, which uses 192.168.0.*. :(

    EDIT: I would like to point out that the Moxa Windows configuration software seems to be able to connect to any Moxa device connected to the network, even if it is on a different subnet, as long as the network adapter shows "connected". This is important because the Moxa device has no VGA port or other interface through which to configure the IP address.
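
    The Moxa utility can find and re-address devices on foreign subnets because its scan uses layer-2 broadcast discovery and then rewrites the device's IP; a stock Windows 7 Embedded box has no such agent, so the closest plug-and-play behaviour is simply to let the adapter take whatever address the client's DHCP server hands out (keeping a static address only as a fallback where there is no DHCP). A hedged sketch of switching the adapter to DHCP from a script; the connection name is the default English one and may differ on the appliance:

        REM put both the address and the DNS servers under DHCP control
        netsh interface ip set address name="Local Area Connection" source=dhcp
        netsh interface ip set dns name="Local Area Connection" source=dhcp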

    Read the article

  • Multisession burn in Imgburn

    - by blntechie
    Is multisession burning available in ImgBurn? If not, any idea whether it will be implemented in the future? I almost recommended ImgBurn instead of Nero or Roxio to one of my friends. He requires multisession burning and I found no option to enable it in Options, if it is available at all. Note: please don't question the question, e.g. "Why would you want multisession anyway?" or "Isn't a USB stick/RW disc what you need instead of a read-only CD/DVD?" Please keep the answers in context. I know that I can use USB sticks instead of CDs/DVDs, and my friend requires multisession anyway. Maybe I can ask him to keep Nero as a backup for this purpose if ImgBurn doesn't support it.

    Read the article

  • Mouse receiver stopped working after pairing mouse to an unifying receiver

    - by mp19uy
    I bought this mouse, a Logitech M510, which came with a nano receiver (non-Unifying). Yes, I know the mouse is supposed to come with a Unifying receiver, but I bought it knowing that it wouldn't come in its original package and would come with a non-Unifying receiver. When I received it, everything worked fine, but after connecting the mouse to another computer using a Unifying receiver (which also worked fine, by the way), I could no longer connect the mouse back to my computer using the non-Unifying receiver. I tried everything from removing the batteries and reinstalling the drivers to restarting the computer and trying different computers, but I couldn't connect them. What I think happened is this: if you check the documentation of the Logitech M510, it says that it works with Unifying receivers only, and there is the following article explaining it: http://logitech-en-amr.custhelp.com/app/answers/detail/a_id/18001/~/using-my-m510-with-a-different-usb-receiver. So my theory is that the problem was caused by pairing it with a Unifying receiver, and now it isn't recognized by the other receiver. The receiver itself (the non-Unifying one) is recognized by Windows, and if I connect the mouse using a Unifying receiver, it works. I would like to know if there is any known solution for this, or if I can try something else to solve it.

    Read the article

  • Why is my Drupal Registration email considered spam by gmail? (headers included)

    - by Jasper
    I just created a Drupal website on a uni.cc subdomain that is brand-new also (it has barely had the 24 hours to propagate). However, when signing up for a test account, the confirmation email was marked as spam by Gmail. Below are the headers of the email, which may provide some clues.

        Delivered-To: *my_email*@gmail.com
        Received: by 10.213.20.84 with SMTP id e20cs81420ebb;
                Mon, 19 Apr 2010 08:07:33 -0700 (PDT)
        Received: by 10.115.65.19 with SMTP id s19mr3930949wak.203.1271689651710;
                Mon, 19 Apr 2010 08:07:31 -0700 (PDT)
        Return-Path: <[email protected]>
        Received: from bat.unixbsd.info (bat.unixbsd.info [208.87.242.79])
                by mx.google.com with ESMTP id 12si14637941iwn.9.2010.04.19.08.07.31;
                Mon, 19 Apr 2010 08:07:31 -0700 (PDT)
        Received-SPF: pass (google.com: best guess record for domain of [email protected] designates 208.87.242.79 as permitted sender) client-ip=208.87.242.79;
        Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of [email protected] designates 208.87.242.79 as permitted sender) [email protected]
        Received: from nobody by bat.unixbsd.info with local (Exim 4.69)
                (envelope-from <[email protected]>)
                id 1O3sZP-0004mH-Ra
                for *my_email*@gmail.com; Mon, 19 Apr 2010 08:07:32 -0700
        To: *my_email*@gmail.com
        Subject: Account details for Test at YuGiOh Rebirth
        MIME-Version: 1.0
        Content-Type: text/plain; charset=UTF-8; format=flowed; delsp=yes
        Content-Transfer-Encoding: 8Bit
        X-Mailer: Drupal
        Errors-To: info -A T- yugiohrebirth.uni.cc
        From: info -A T- yugiohrebirth.uni.cc
        Message-Id: <[email protected]>
        Date: Mon, 19 Apr 2010 08:07:31 -0700
        X-AntiAbuse: This header was added to track abuse, please include it with any abuse report
        X-AntiAbuse: Primary Hostname - bat.unixbsd.info
        X-AntiAbuse: Original Domain - gmail.com
        X-AntiAbuse: Originator/Caller UID/GID - [99 500] / [47 12]
        X-AntiAbuse: Sender Address Domain - bat.unixbsd.info
        X-Source:
        X-Source-Args: /usr/local/apache/bin/httpd -DSSL
        X-Source-Dir: gmh.ugtech.net:/public_html/YuGiOhRebirth
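
    The headers themselves suggest why Gmail is suspicious: SPF only passes as a "best guess" (no SPF record is published), the From/Errors-To addresses don't match the envelope sender nobody@bat.unixbsd.info, the domain is brand new, and there is no DKIM signature. A hedged sketch of the SPF TXT record one might publish for the sending host seen above; the policy strength (~all) is a judgment call:

        ; DNS TXT record for the site's domain
        yugiohrebirth.uni.cc.   IN  TXT  "v=spf1 a mx ip4:208.87.242.79 ~all"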

    Read the article

  • port forwarding with socks over proxy

    - by Oz123
    I am trying to browse a wiki that runs on a server inside one domain from another domain. The wiki is accessible only on the LAN, but I need to browse it from another LAN, to which I connect with an SSH tunnel. Here are my setup and the steps I have taken so far.

    ~/.ssh/config on wikihost:

        Host gateway
            User kisteuser
            Port 443
            Hostname gateway.companydomain.com
            ProxyCommand /home/myuser/bin/ssh-https-tunnel %h %p
            # ssh-https-tunnel: http://ttcplinux.sourceforge.net/tools/stunnel
            Protocol 2
            IdentityFile ~/.ssh/key_dsa
            LocalForward 11069 localhost:11069

        Host server1
            User kisteuser
            Hostname localhost
            Port 11069
            LocalForward 8022 server1:22
            LocalForward 17001 server1:7100
            LocalForward 8080 www-proxy:3128
            RemoteForward 11069 localhost:22

    From wikihost:

        myuser@wikihost: ssh -XC -t gateway.companydomain.com ssh -L11069:localhost:22 server1

    On another terminal:

        ssh gateway.companydomain.com

    Now, on my companydomain I would like to start Firefox and browse the wiki on wikihost. I did:

        [email protected] ~ $ ssh gateway
        Have a lot of fun...
        kisteuser@gateway ~ $ ssh -D 8383 localhost
        user@localhost's password:
        user@wikiserver:~>

    My .ssh/config on that side looks like this:

        Host server1
            LocalForward 11069 localhost:11069

        Host localhost
            User myuser
            Port 11069

        Host wikiserver
            ForwardAgent yes
            User myuser
            Port 11069
            Hostname localhost

    Now I started Firefox on the server called gateway and edited the proxy settings to use SOCKSv5, specifying that the proxy should be gateway and use the port 8383...

        kisteuser@gateway ~ $ LANG=C firefox -P --no-remote

    And now I get the following error popping up in the terminal of wikiserver:

        myuser@wikiserver:~> channel 3: open failed: connect failed: Connection refused
        channel 3: open failed: connect failed: Connection refused
        channel 3: open failed: connect failed: Connection refused

    Confused? Me too... Please help me understand how to properly build the tunnels and browse the wiki over the SOCKS protocol.

    UPDATE: I managed to browse the wiki on wikiserver with the following changes:

        Host wikiserver
            ForwardAgent yes
            User myuser
            Port 11069
            Hostname localhost
            LocalForward 8339 localhost:8443

    Now when I ssh gateway I launch Firefox, go to localhost:8339 and I hit the start page of the wiki, which is served on port 8443. Now I ask myself: is SOCKS really needed? Can someone elaborate on that?
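
    The "channel 3: open failed: connect failed: Connection refused" lines printed on wikiserver mean the SOCKS chain itself is working; it is the final step, wikiserver opening the host:port that Firefox asked for, that is refused, and the update shows the wiki actually listens on localhost:8443 rather than on whatever the earlier attempts were hitting. A hedged sketch of how the SOCKS variant is normally used (the ssh command is the post's own):

        # SOCKS listener on gateway, exiting on wikiserver via the 11069 hop
        ssh -D 8383 localhost

        # Firefox on gateway: SOCKS v5, host localhost, port 8383, remote DNS on,
        # then browse https://localhost:8443/ - the name is resolved at the SOCKS
        # exit, i.e. wikiserver's own loopback, where the wiki listens

    As for whether SOCKS is really needed: only if many internal hosts and ports have to be reachable through one tunnel. For a single wiki, the LocalForward 8339 localhost:8443 from the update is the simpler and perfectly adequate answer.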

    Read the article

  • Router block some sites

    - by Mahesha999
    Hi, I was using an ADSL modem/router earlier. The device is quite old, a Pronet PN-ADSL 101 E/U model (pics: http://bit.ly/P2YaWy, http://bit.ly/OA700l). Since it had only one RJ45 output, I bought a new wireless router, a TP-Link TL-WR941ND, which has 4 RJ45 outputs and 3 wireless antennas.

    I configured my old router in bridge mode. Now, if I connect my PC to the Internet through the old router, I have to enter the username and password. Then I connected the RJ45 output of the old router to the WAN input of the new router and ran the new router's CD, which configured it for PPPoE, saving the username and password in the router so it dials automatically. So now I just have to plug the wires into any RJ45 output of my new router.

    I am able to access the Internet when I connect through the new router (both wired and wirelessly), but some sites are getting blocked - most notably yahoo.com (though ymail.com works), microsoft.com and msn.com. These sites work perfectly fine when I connect my PC directly to the old router and enter the username and password manually. (Others, like google.com and facebook.com, work fine through the new router.)

    So it seems some parameter needs to be set, but I am unable to find out which. Can anyone help me? My friend said he also faced the same problem. Surprisingly, he advised me to see whether the same websites would work through Opera's Turbo mode - and boom, they worked. So what could be the problem?
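
    Sites that load when connected directly over PPPoE but stall behind the extra router, while a compressing proxy such as Opera Turbo gets through, are the classic signature of an MTU problem: PPPoE leaves only 1492 bytes per packet, and some large sites drop the ICMP messages that path-MTU discovery needs. A hedged sketch of how to confirm it from a Windows PC behind the TP-Link:

        REM find the largest payload that passes without fragmentation;
        REM 1472 fits a plain 1500-byte MTU, behind PPPoE it usually tops out near 1464
        ping -f -l 1464 yahoo.com
        ping -f -l 1472 yahoo.com

    If the larger size fails, lowering the TL-WR941ND's WAN MTU to 1492 (or slightly below, until the large ping passes) is the usual fix.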

    Read the article

  • FTP restrict user access to a specific folder

    - by Mahdi Ghiasi
    I have created an FTP site in the IIS 7.5 panel. Currently I have access to the whole site using the administrator username and password. Now I want to let my friend access only a specific folder of that FTP site (for example, this path: \some\folder\accessible\). I can't create a whole new FTP site for this purpose, since it says the port is being used by another website. How do I create an account for my friend that has access to just a specific folder?

    P.S. I have read about the User Isolation feature of IIS 7.5, but I couldn't find out how to create a user just for FTP and point it at a custom path.
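
    With FTP user isolation in "User name directory" mode, each account lands in a physical or virtual directory named after it beneath the FTP root, so the usual recipe is: create a local Windows account for the friend, add a directory (or a virtual directory in IIS Manager) with that name pointing at the folder, and grant NTFS rights only there. A hedged sketch of the account and permission part from an elevated prompt; the account name, password and path are illustrative:

        REM local account used only for FTP
        net user ftpfriend S0mePassw0rd! /add

        REM grant modify rights on the one folder, inherited by its contents
        icacls "C:\inetpub\ftproot\some\folder\accessible" /grant ftpfriend:(OI)(CI)M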

    Read the article

  • sendmail on Snow Leopard

    - by Jay
    I'm trying to get sendmail working on my MacBook Pro (OS 10.6.4) so that I can send mail with PHP's mail() function. (If you know how to do this without sendmail, I'd be interested in that also.) The plan is to send mail through smtp.gmail.com using my Gmail account, unless you have a better idea. I followed instructions for that, but it didn't work. In /etc/postfix/smtp_sasl_passwords I tried both:

        smtp.yourisp.com username:password

    and

        smtp.yourisp.com [email protected]:password

    The problem seems to be that Google doesn't like me. I don't think my ISP is blocking it, because Mail.app can send email through smtp.gmail.com just fine. $email below is my Gmail address.

        $ printf "Subject: TestMail" | sendmail -f $email $email
        $ tail /var/log/mail.log
        Oct 21 19:38:18 Jays-MacBook-Pro postfix/master[8741]: daemon started -- version 2.5.5, configuration /etc/postfix
        Oct 21 19:38:18 Jays-MacBook-Pro postfix/qmgr[8743]: CAACBFA905: from=<$email>, size=377, nrcpt=1 (queue active)
        Oct 21 19:38:18 Jays-MacBook-Pro postfix/pickup[8742]: C2A68FA93A: uid=501 from=<$email>
        Oct 21 19:38:18 Jays-MacBook-Pro postfix/cleanup[8744]: C2A68FA93A: message-id=<20101021233818.$mydomain>
        Oct 21 19:38:18 Jays-MacBook-Pro postfix/qmgr[8743]: C2A68FA93A: from=<$email>, size=377, nrcpt=1 (queue active)
        Oct 21 19:38:18 Jays-MacBook-Pro postfix/smtp[8746]: initializing the client-side TLS engine
        Oct 21 19:38:18 Jays-MacBook-Pro postfix/smtp[8748]: initializing the client-side TLS engine
        Oct 21 19:38:49 Jays-MacBook-Pro postfix/smtp[8746]: connect to smtp.gmail.com[74.125.157.109]:25: Operation timed out
        Oct 21 19:38:49 Jays-MacBook-Pro postfix/smtp[8748]: connect to smtp.gmail.com[74.125.157.109]:25: Operation timed out
        Oct 21 19:38:49 Jays-MacBook-Pro postfix/smtp[8746]: CAACBFA905: to=<$email>, relay=none, delay=1334, delays=1304/0.04/30/0, dsn=4.4.1, status=deferred (connect to smtp.gmail.com[74.125.157.109]:25: Operation timed out)
        Oct 21 19:38:49 Jays-MacBook-Pro postfix/smtp[8748]: C2A68FA93A: to=<$email>, relay=none, delay=30, delays=0.08/0.05/30/0, dsn=4.4.1, status=deferred (connect to smtp.gmail.com[74.125.157.109]:25: Operation timed out)
        $

    I also tried setting myhostname, mydomain, and myorigin in /etc/postfix/main.cf to the hostname of my public IP (as displayed by http://www.whatismyip.com/, via nslookup), and still no luck. Any ideas?
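
    The log shows Postfix trying smtp.gmail.com on port 25 and timing out; outbound port 25 is commonly blocked, and Gmail submission normally goes over port 587 with TLS and SASL (which is why Mail.app works). A hedged sketch of the usual main.cf settings; note the password map is conventionally called sasl_passwd and must be run through postmap, and the address and password below are placeholders:

        # /etc/postfix/main.cf (sketch)
        relayhost = [smtp.gmail.com]:587
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options = noanonymous
        smtp_use_tls = yes

        # /etc/postfix/sasl_passwd  (then: sudo postmap /etc/postfix/sasl_passwd)
        [smtp.gmail.com]:587    your.address@gmail.com:your-password

    After editing, a sudo postfix reload picks up the changes.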

    Read the article
