Search Results

Search found 8989 results on 360 pages for 'response'.

Page 278/360

  • Windows file association for README, INSTALL, LICENSE and the like [closed]

    - by Lumi
    Possible Duplicate: How to set the default program for opening files without an extension in Windows? Many files originating in the UNIX world come without a file extension. Popular examples include README, INSTALL, LICENSE. We know for a fact that these are text files. It is therefore a bit disappointing not to be able to just double-click them open in Explorer and see them in Notepad (actually, Notepad2 because of the UNIX line endings, which silly Microsoft Notepad doesn't render correctly). Does anyone know of a way to create a file association for, say, README files without extension? This could then be replicated to cover the most frequently occurring file types, and then double-clicking them open would work. Update (sort of in response to all your comments): Thanks, folks, your comments and answers have helped me. @Indrek, yes, I was under the assumption that you could somehow create an association for just README or Makefile, and couldn't do so for files without extension. Turns out the contrary is true, and yes, that is a workaround that neatly solves the issue. Ultimately, I just want to be able to double-click to open a README or Makefile, that's all. @Sampo, the SendMe trick is also useful, although usability is not as great as a straight double-click. (I'm really lazy sometimes.) Turns out the following trick using assoc and ftype from an Administrator prompt does the double-click enabling job:
        assoc .=no_ext
        ftype no_ext=%SystemRoot%\system32\NOTEPAD.EXE %1
        :: You can see it created some entries in the registry:
        reg query hkcr\no_ext /s
        reg query hkcr\. /s

    Read the article

  • IPv6 working fine, IPv4 throws OpenSSL error

    - by jippie
    I am building a webserver ( http://blog.linformatronics.nl/ ), which functions just fine over both IPv4 and IPv6 when using a non-SSL connection. However, when I connect to it through https, IPv6 works as expected, but an IPv4 connection throws a client-side error. Server-side logs are empty for the IPv4/https connection. Summarized in a table:
             |  http  |  https
        -----+--------+--------------------------------------------------------
        IPv4 | works  | OpenSSL error, failed. No server-side logging.
        -----+--------+--------------------------------------------------------
        IPv6 | works  | self-signed certificate warning, but works as expected
    Apparently the SSL tunnel isn't even set up, which accounts for the Apache logs being empty. But why does it work fine for IPv6 and fail for IPv4? My question is: why is this OpenSSL error being thrown and how can I solve it? Below is some extra information about the setup.
    IPv6 https. Command used to reproduce the IPv6/https behaviour:
        $ wget --no-check-certificate -O /dev/null -6 https://blog.linformatronics.nl
        --2012-11-03 15:46:48-- https://blog.linformatronics.nl/
        Resolving blog.linformatronics.nl (blog.linformatronics.nl)... 2001:980:1b7f:1:a00:27ff:fea6:a2e7
        Connecting to blog.linformatronics.nl (blog.linformatronics.nl)|2001:980:1b7f:1:a00:27ff:fea6:a2e7|:443... connected.
        WARNING: cannot verify blog.linformatronics.nl's certificate, issued by `/CN=localhost': Self-signed certificate encountered.
        WARNING: certificate common name `localhost' doesn't match requested host name `blog.linformatronics.nl'.
        HTTP request sent, awaiting response... 200 OK
        Length: 4556 (4.4K) [text/html]
        Saving to: `/dev/null'
        100%[=======================================================================>] 4,556 --.-K/s in 0s
        2012-11-03 15:46:49 (62.5 MB/s) - `/dev/null' saved [4556/4556]
    IPv4 https. Command used to reproduce the IPv4/https behaviour:
        $ wget --no-check-certificate -O /dev/null -4 https://blog.linformatronics.nl
        --2012-11-03 15:47:28-- https://blog.linformatronics.nl/
        Resolving blog.linformatronics.nl (blog.linformatronics.nl)... 82.95.251.247
        Connecting to blog.linformatronics.nl (blog.linformatronics.nl)|82.95.251.247|:443... connected.
        OpenSSL: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol
        Unable to establish SSL connection.
    Notes: I am on Ubuntu Server 12.04.1 LTS.
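
    A quick way to see whether the IPv4 side of port 443 is speaking TLS at all is to attempt a raw handshake over each address family. The following Python sketch uses only the standard library; the host name is the one from the question, everything else is illustrative.

        import socket
        import ssl

        HOST, PORT = "blog.linformatronics.nl", 443

        def try_handshake(family):
            # Self-signed certificate, so skip verification; we only care whether
            # the server answers the ClientHello at all.
            ctx = ssl.create_default_context()
            ctx.check_hostname = False
            ctx.verify_mode = ssl.CERT_NONE
            sockaddr = socket.getaddrinfo(HOST, PORT, family, socket.SOCK_STREAM)[0][4]
            with socket.socket(family, socket.SOCK_STREAM) as raw:
                raw.settimeout(5)
                raw.connect(sockaddr)
                try:
                    with ctx.wrap_socket(raw, server_hostname=HOST) as tls:
                        return "handshake OK (%s)" % tls.version()
                except ssl.SSLError as exc:
                    return "handshake failed: %s" % exc

        for family, label in ((socket.AF_INET, "IPv4"), (socket.AF_INET6, "IPv6")):
            print(label, "->", try_handshake(family))

    If the IPv4 path prints the same "unknown protocol" error that wget shows, whatever answers on 82.95.251.247:443 is most likely serving plain HTTP (for example a port-forward pointing at the wrong backend port), which would also explain the empty Apache SSL logs.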

    Read the article

  • SVN checkout returns 400 error

    - by eboix
    I'm trying to download the http://code.opencv.org/svn/opencv/trunk/ repository of all of the OpenCV source code - as specified in an OpenCV installation tutorial. In the tutorial, the repository https://code.ros.org/svn/opencv/trunk/ is used, but they moved it to http://code.opencv.org/svn/opencv/trunk/, and now you need a password to access the code.ros.org repository. Anyway, I'm using TortoiseSVN to download the SVN repository. (I get the same error with http://sourceforge.net/projects/win32svn/) I get this: Checkout from http://code.opencv.org/svn/opencv/trunk, revision HEAD, Fully recursive, Externals included Server sent unexpected return value (400 Bad request. Method Unknown) in response to REPORT request for '/svn/opencv/!svn/vcc/default' On the TortoiseSVN site I found something about this 400 error: You're behind a firewall which blocks DAV requests. Most firewalls do that. Either ask your Administrator to change the firewall, or access the repository with https:// instead of http:// like in https://svn.collab.net/repos/svn/ That way you connect to the repository with SSL encryption, which firewalls can't interfere with (if they don't block the SSL port completely). Also some virus scanners (i.e. Kapersky) are known to interfere and cause this error. The code.ros.org repository is https://, so I would be able to access it, but I need a password, so I can't. I made an account on ros.org, but it seems that I still need a password (which I don't know) to access the code repository. My username-password combination does not work. I unblocked all of the TortoiseSVN programs in my firewall settings. Nothing changed. I temporarily stopped my firewall to see if it was interfering with my request. I got the same error. How can I do an svn checkout http://code.opencv.org/svn/opencv/trunk/opencv/ so that I don't get this error? Is there any way to make it https://? Any help would be appreciated!
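
    One way to check whether something between the client and the repository is mangling WebDAV requests is to bypass TortoiseSVN entirely and look at what the repository URL answers to a plain OPTIONS request. A rough Python sketch (standard library only; the path is the one from the question):

        import http.client

        conn = http.client.HTTPConnection("code.opencv.org", 80, timeout=10)
        conn.request("OPTIONS", "/svn/opencv/trunk/")
        resp = conn.getresponse()
        print(resp.status, resp.reason)
        # A mod_dav_svn server normally advertises DAV capabilities here; a proxy or
        # antivirus product that filters REPORT/PROPFIND often shows up as an
        # unexpected Server or Via header instead.
        for name in ("Allow", "DAV", "Server", "Via"):
            print(f"{name}: {resp.getheader(name)}")
        conn.close()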

    Read the article

  • Redirection of outbound UDP port for NTP

    - by pboin
    For my residential service, I changed ISPs to Zoom/Armstrong. Just after that, my NTP daemons stopped working. I dug deep and diagnosed the problem: unprivileged ports are getting out. When I run 'ntpdate', for example, I go out on a high, unprivileged port, and get a response on UDP 123. That's fine. The 'ntpd' daemon, though, expects to go out on 123 and get its reply there as well. This must be a common problem, because it's directly addressed in the NTP troubleshooting guide. Just to see what would happen, I wrote a detailed email to the general support address at Armstrong. They replied almost immediately with a complete technical answer! They have everything <1024 blocked, except for a few ports to support outbound VPN. So, the question: can I use iptables to essentially rewrite my outbound UDP 123 up to 2123 or something like that? If I do, does there need to be a corresponding 2123-to-123 rule to translate the reply? This seems like NAT, but with ports, not addresses. True, I could run ntpdate from cron, but that loses all of the adjustment smarts of NTP.
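
    For what it's worth, the ntpdate-style traffic pattern that still works (ephemeral high source port out, reply from UDP 123 back to that same port) can be reproduced in a few lines of Python; the pool server name below is only an illustration. It also shows why a cron-driven one-shot query keeps working while ntpd's symmetric 123-to-123 mode does not.

        import socket
        import struct
        import time

        NTP_SERVER = "pool.ntp.org"          # illustrative server
        NTP_DELTA = 2208988800               # seconds between 1900-01-01 and 1970-01-01

        packet = b"\x1b" + 47 * b"\x00"      # LI=0, VN=3, Mode=3 (client request)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.settimeout(5)
            # No bind(): the kernel picks an unprivileged source port, exactly like ntpdate.
            sock.sendto(packet, (NTP_SERVER, 123))
            data, _ = sock.recvfrom(512)

        transmit = struct.unpack("!I", data[40:44])[0] - NTP_DELTA
        print("server time:", time.ctime(transmit))
        print("local time: ", time.ctime())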

    Read the article

  • Block SMTP sessions from sender domains which don't themselves accept SMTP connections

    - by bignose
    I'm administrating a mail service for a small business. Their mail host's internet connection is an ADSL service with a permanent IP address. Unfortunately, many misconfigured mail systems will happily deliver to this host, but, when the host attempts to send mail back (e.g. a bounce notice, or a normal response from someone), the declared sender's domain has an MX which refuses to receive connections from this host. That misconfiguration makes their system a one-way mail sender, which is a problem. How can I configure Postfix on this customer's mail host to refuse SMTP sessions that declare a sender domain which itself refuses SMTP from this host? That is, if the SMTP client declares a domain that we can't make SMTP connections back to, then there's not much point accepting the incoming connection in the first place. Note that I'm not, as some commenters have assumed, talking about checking whether the SMTP client will receive messages. The check I want is whether the declared sender's domain (regardless of who the current SMTP client is) will accept SMTP connections from here. In other words: when we get around to sending a message back, we'll need the sender's domain to accept SMTP connections; I want to do that check before accepting the incoming session. I'm imagining a late check (after the low-cost checks to winnow most of the rubbish connections) that keeps the client on the other end while it attempts an SMTP client connection back to the declared domain of the sender. If that connection is rejected, the incoming one is also rejected. I'm also open to other suggestions for how this problem might be addressed (short of not using this mail host at all, which isn't an option).
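
    What the check boils down to can be sketched in a few lines: resolve the declared sender domain's MX records and see whether any of them will even take an SMTP connection from this host. The Python sketch below assumes the third-party dnspython package for the MX lookup; in a real deployment you would hang something like this off Postfix's check_policy_service (or look at reject_unverified_sender) rather than run it ad hoc.

        import smtplib
        import dns.resolver   # third-party: dnspython

        def sender_domain_accepts_smtp(domain, timeout=10):
            """Return True if some MX of `domain` accepts an SMTP connection from here."""
            try:
                mx_hosts = sorted((r.preference, str(r.exchange).rstrip("."))
                                  for r in dns.resolver.resolve(domain, "MX"))
            except Exception:
                return False                      # no MX (or no DNS at all): treat as unreachable
            for _, host in mx_hosts:
                try:
                    with smtplib.SMTP(host, 25, timeout=timeout) as smtp:
                        code, _ = smtp.ehlo()
                        return 200 <= code < 300
                except OSError:
                    continue                      # refused or timed out: try the next MX
            return False

        print(sender_domain_accepts_smtp("example.org"))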

    Read the article

  • Unable to access site over HTTPS using self signed certificate

    - by James
    I am developing a REST API which I want to secure with SSL/TLS. I have implemented a large part of the API, which I have tested over HTTP; however, I am now at the stage where I want to switch it over to use HTTPS. At the moment the API is hosted on a Windows XP Professional SP2 box running IIS 5.1 (development environment only), and I used the SelfSSL.exe tool from the IIS 6.0 Resource Kit Tools to generate a server certificate. I then configured my API to use this certificate, which all appeared to work fine: when I attempted to connect to my API using HTTP, I got a 403 response saying "... must be accessed over a secure channel...". However, the problem is that when I attempt to access the same API over HTTPS it just appears to hang! As this is a development environment, at the moment I don't have a domain name (just a static IP address) and the API is running on port 81. Also (in case it matters) the API is the default site (I replaced it). Any ideas why I can't connect using HTTPS?

    Read the article

  • Windows 7 & Photoshop CS5.1 - "Fonts missing" issue - I have the font!! (sort of)

    - by Tigue Von Bond
    I've noticed a really aggravating issue with Adobe Photoshop CS5.1 on at least two occasions. I downloaded a layered PSD file to work with; the release notes directed me to a download page for the font used, which was Futura Medium Condensed. I checked and did not have any Futura fonts at all. So I downloaded and installed the font from the source provided by the provider of the PSD. I closed and reopened Photoshop, and when I open the PSD file I get an error saying: "Some text layers contain fonts that are missing. These layers will need to have the missing fonts replaced before they can be used for vector based output." I then go to edit the text layer and receive: "The following fonts are missing for text layer "discount": Futura CondensedExtraBold. Font substitution will occur. Continue?" If I click OK, it substitutes Myriad Pro for this layer. Didn't I download the right font? I go into the font dropdown and see I have a font with a slightly different name, "Futura-CondensedExtraBold-Th Regular". I have also seen this issue with Helvetica. I received a PSD file, got the same "some text layers contain fonts that are missing..." error dialog when I opened the file, and when I go to edit a layer with text I get: "The following fonts are missing for text layer "Home": Helvetica. Font substitution will occur. Continue?" I click continue, it substitutes Myriad Pro, and I check my font list and sure enough I have a bunch of Helvetica fonts, none exactly named "Helvetica". Is this a common issue? Googling it yielded a few people with similar problems (I think all on Macs) but either no concrete help or no response. Is it that the two font names aren't EXACT matches? If that is the case, is there any way of setting up Photoshop to substitute more intelligently, or even to set up some sort of mapping (if "Helvetica" then substitute "Helvetica Lt Std")? Is there anything else, maybe something that I am not thinking of?

    Read the article

  • SSH Interactive mode not working

    - by Ekin Koc
    I have a Debian based linux server running for a year or so, without any problems. A couple of days ago, ssh interactive mode stopped working for no reason. I mean, I can open an ssh connection just fine, the server greets me with shell but I just can't type anything. However, if I send commands like this: ssh [email protected] cat /var/log/messages, I get the response. I dug through several logs and found one message, which feels remotely relevant to the problem; sh kernel: [10222733.062511] ------------[ cut here ]------------ sh kernel: [10222733.062522] WARNING: at /build/buildd-linux-2.6_2.6.32-39-amd64-7yVIH2/linux-2.6-2.6.32/debian/build/source_amd64_none/drivers/char/tty_ldisc.c:738 tty_ldisc_reinit+0x46/0x7b() sh kernel: [10222733.062526] Hardware name: PowerEdge R210 II sh kernel: [10222733.062528] Modules linked in: ipt_MASQUERADE iptable_nat nf_nat nf_conntrack_ipv4 nf_conntrack nf_defrag_ipv4 ip_tables x_tables sha1_generic arc4 ecb ppp_mppe ppp_async crc_ccitt ppp_generic slhc loop snd_pcm snd_timer snd soundcore snd_page_alloc i2c_i801 i2c_core pcspkr evdev joydev dcdbas container button processor ext3 jbd mbcache sg sd_mod sr_mod crc_t10dif cdrom usb_storage usbhid hid mpt2sas ahci ehci_hcd libata scsi_transport_sas usbcore bnx2 nls_base scsi_mod fan thermal thermal_sys [last unloaded: scsi_wait_scan] sh kernel: [10222733.062568] Pid: 8662, comm: sshd Not tainted 2.6.32-5-amd64 #1 sh kernel: [10222733.062569] Call Trace: sh kernel: [10222733.062572] [<ffffffff811ff056>] ? tty_ldisc_reinit+0x46/0x7b sh kernel: [10222733.062574] [<ffffffff811ff056>] ? tty_ldisc_reinit+0x46/0x7b Is there any way to get back the sshd working in interactive mode? I tried restarting sshd but that is no help. And somehow, I can not reboot the server. Tried sending shutdown -r now and reboot but it refuses to go down. Should I go ahead and request a physical reboot?

    Read the article

  • Ubuntu 10.04->10.10 in failed state - how to recover?

    - by Harvey
    I was running Ubuntu 10.04 and attempted to upgrade to 10.10. I have a really slow connection (DSL 128kbits/sec) and copying the upgrade files took about 26 hours. I of course let it run unattended. When I came back, I notice the following 3 dlgs: (1) "Could not install the upgrades The upgrade has aborted. Your system could be in an unusable state. A recovery will run now (dpkg -- configure -a)." (2) "gpk-update-icon Distribution upgrades available maverick 10.10 (stable) [more information] [Do no show this again] [Cancel] [Ok]" (3) "gpk-update-icon Security updates available The following important updates are available for your computer: libwebkit-1.0-2-dbg - Web content engine library for Gtk+ - Debugging symbols libcupsimage2 - Common UNIX Printing System(tm) - Raster image library ..." What is the best response to all of this? I went through something similar in an attempted network upgrade from 8.04 to 10.04 and had to reload the unbootable machine fresh from distribution media (all data was lost). I'd like to avoid that here. I have not yet responded to the dialogs, and want to make sure the system is still bootable and not lose my data this time.

    Read the article

  • How to enable IPv6 glue records (AAAA) in PowerDNS

    - by aef
    I'm running PowerDNS 3.1 on a Debian Wheezy Beta 4 system. The zone data is accessed through a PostgreSQL database, and the server answers both IPv4 and IPv6 queries. If the DNS server knows the A record for one of the name servers referenced by NS records on a zone, it automatically returns these A records as additional information in the response to an NS query for that zone. But even if it knows the AAAA record for one of the name servers of the NS records, it currently never returns an AAAA record as additional information. How can I enable this? Or is there anything I could be doing wrong?
    Output of dig @ns.mydomain.tld NS mydomain.tld:
        ;; QUESTION SECTION:
        ;mydomain.tld. IN NS
        ;; ANSWER SECTION:
        mydomain.tld. 86400 IN NS ns3.nsprovider.de.
        mydomain.tld. 86400 IN NS ns2.nsprovider.de.
        mydomain.tld. 86400 IN NS ns.mydomain.tld.
        mydomain.tld. 86400 IN NS ns.nsprovider.de.
        ;; ADDITIONAL SECTION:
        ns2.nsprovider.de. 86400 IN A 1.2.3.1
        ns.nsprovider.de. 86400 IN A 1.2.3.2
        ns.mydomain.tld. 600 IN A 192.0.2.194
        ns3.nsprovider.de. 86400 IN A 1.2.3.3
    Output of dig @ns.mydomain.tld A ns.mydomain.tld:
        ;; QUESTION SECTION:
        ;ns.mydomain.tld. IN A
        ;; ANSWER SECTION:
        ns.mydomain.tld. 600 IN A 192.0.2.194
    Output of dig @ns.mydomain.tld AAAA ns.mydomain.tld:
        ;; QUESTION SECTION:
        ;ns.mydomain.tld. IN AAAA
        ;; ANSWER SECTION:
        ns.mydomain.tld. 86400 IN AAAA 2001:db8:100:3022:1::3
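
    To watch what the server actually puts into the additional section (rather than what dig chooses to print), a small query sketch helps; it assumes the third-party dnspython package, with the zone and server address copied from the question.

        import dns.message
        import dns.query

        query = dns.message.make_query("mydomain.tld", "NS")
        response = dns.query.udp(query, "192.0.2.194", timeout=5)   # ns.mydomain.tld

        print("ANSWER:")
        for rrset in response.answer:
            print(" ", rrset)
        print("ADDITIONAL:")
        for rrset in response.additional:
            print(" ", rrset)   # with glue for both families you would expect A and AAAA here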

    Read the article

  • Installing and running two postgresql versions on different ports (or two instances of same server)

    - by Andrius
    I have postgresql 9.1 installed on my machine (Ubuntu). I need another postgresql server that would run next to the old one. Exact version does not matter, but I'm thinking of using 9.2 version. How could I properly install and run another postgresql version without screwing old one (like upgrading). So those versions would run independently on different ports. Old one on 5432 and new one on 5433 for example. The reason I need this is for two OpenERP versions databases. If I run two OpenERP servers (with different versions) on single postgresql port, it crashes because new OpenERP version detects old versions database and tries to run it, but it crashes because it uses another schemes. P.S or maybe I could just run same postgresql server on two ports? Update So far I tried this: /usr/lib/postgresql/9.1/bin/pg_ctl initdb -D main2 It created new cluster. I changed port to 5433 in new clusters directory postgresql.conf file. Then ran this: /usr/lib/postgresql/9.1/bin/pg_ctl -D main2 -l logfile start I got response server starting. But when I tried to enter new cluster's template database with: psql template1 -p 5433 I got this error: psql: could not connect to server: No such file or directory Is the server running locally and accepting connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5433"? Also now when I try to stop server with: /usr/lib/postgresql/9.1/bin/pg_ctl -D main2 -l logfile start I get this error: pg_ctl: PID file "main2/postmaster.pid" does not exist Is server running? So I don't understand if server is running and what I'm missing here? Update Found what was wrong. Stupid me. I didn't notice that when I changed port in .conf file, that line was commented already. So actually I didn't change anything first time, but thought I did and it used default 5432 port.
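
    Since the whole confusion came down to which port the second cluster was really using, a tiny sanity check that reads the cluster's postgresql.conf and postmaster.pid can save a round of head-scratching. A Python sketch (the data directory name main2 is the one used above; the postmaster.pid layout assumed here is the 9.1-era one, where the port sits on the fourth line):

        from pathlib import Path
        import re

        DATADIR = Path("main2")

        def configured_port(conf=None):
            conf = conf or DATADIR / "postgresql.conf"
            for line in conf.read_text().splitlines():
                match = re.match(r"\s*port\s*=\s*(\d+)", line)   # a commented-out '#port = ...' won't match
                if match:
                    return int(match.group(1))
            return 5432                                          # compiled-in default

        def running_port(pidfile=None):
            pidfile = pidfile or DATADIR / "postmaster.pid"
            if not pidfile.exists():
                return None                                      # server not running (or crashed)
            lines = pidfile.read_text().splitlines()
            return int(lines[3]) if len(lines) > 3 else None     # assumed: port on the 4th line

        print("configured port:", configured_port())
        print("running port:   ", running_port())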

    Read the article

  • How to stop Firefox on an SSD from freezing when using the search box or submitting a form?

    - by sblair
    Firefox usually freezes for about a second whenever I search for something from the toolbar search box, when submitting a form, or when clearing the search box history. I suspect it has something to do with the auto-complete feature. Using Windows 7's Resource Monitor, the problem seems to be from the file: C:\Users\<username>\AppData\Roaming\Mozilla\Firefox\Profiles\<profile>\formhistory.sqlite-journal I believe this is a temporary file which caches database writes. The following screenshot shows the very high response times from six different searches, and that the queue length on drive C shoots off the scale: My Firefox profile is on an Intel X25-M G2 SSD. The problem doesn't seem to occur if I create a new profile on a hard disk drive. However, I'd like to know why the problem exists on the SSD in the first place (because it's an annoying problem which contradicts the reason I bought an SSD, and it might happen with other applications too), and how to prevent it. It still occurs if Firefox is started in safe mode, and with the recent beta versions. Updates: VACUUMing the Firefox profile databases does not help with this problem. The SSD Optimizer in the Intel SSD Toolbox does not help either.
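
    One experiment worth trying (back up the profile and close Firefox first) is to switch the form-history database to WAL journaling, so each autocomplete write no longer creates and deletes a separate -journal file. This is only a hedged guess at a mitigation, not an established fix, and whether Firefox keeps the setting is not guaranteed; the path is the placeholder one from the question.

        import sqlite3

        DB = (r"C:\Users\<username>\AppData\Roaming\Mozilla\Firefox"
              r"\Profiles\<profile>\formhistory.sqlite")

        conn = sqlite3.connect(DB)
        before = conn.execute("PRAGMA journal_mode").fetchone()[0]
        after = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]   # default is 'delete'
        print("journal_mode:", before, "->", after)
        conn.close()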

    Read the article

  • Problem with Outlook 2010 (SMTP AUTH LOGIN)

    - by Filipe YaBa Polido
    I have to connect one customer's Outlook 2010 to a remote server on which I have no rights, nor any way to talk to the sysadmin. Here is the thing, after installing and reviewing the logs in Wireshark:
    Outlook Express:
        HELO machine
        AUTH LOGIN
        username base64 encoded
        password base64 encoded
        mails go through.
    Outlook 2010:
        HELO machine
        AUTH DIGEST-MD5
        response from server
        Outlook sends just a *
        AUTH LOGIN
        password base64 encoded
    So... I can send mails within the same domain, but can't send outside; it gives me a relay denied message. My point is: why the h**l doesn't Outlook 2010 send the username AND the password?! It can never log in the right way :| With other versions of Outlook it works fine, and with OE it works great; it authenticates and allows sending mail to a different domain. I've googled and nothing worked. I'm pretty sure that I'm not alone with this one. My last resort will be to configure a local proxy/server that relays to the original one :| Any help would be appreciated. Sorry for my bad English; it is not my native language. Thanks.
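
    To separate the Outlook quirk from the server's relay rules, it helps to authenticate against the server with something that definitely sends both the username and the password, and then try a cross-domain recipient. A throwaway Python sketch (host, port and credentials are placeholders, not values from the question):

        import smtplib

        server = smtplib.SMTP("mail.example.com", 587, timeout=15)
        server.set_debuglevel(1)                 # dump the whole SMTP conversation, much like Wireshark
        server.ehlo()
        if server.has_extn("starttls"):
            server.starttls()
            server.ehlo()
        server.login("user@example.com", "secret")   # smtplib negotiates an AUTH mechanism the server offers
        server.sendmail("user@example.com",
                        "someone@other-domain.example",
                        "Subject: relay test\r\n\r\nIf this arrives, the relay rules are fine.\r\n")
        server.quit()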

    Read the article

  • Simple way to set up port knocking on Linux?

    - by Ace Paus
    There are well-known benefits to port knocking utilities when used in combination with firewall iptables modifications. Port knocking is best used to provide an additional layer of security over other tools such as the OpenSSH server. I would like some help setting it up on an Ubuntu server. I looked at some port knocking implementations here: PORTKNOCKING - A system for stealthy authentication across closed ports. IMPLEMENTATIONS http://www.portknocking.org/view/implementations fwknop looked good. I found an Android client here. And fwknop (both client and server) is in the Ubuntu repos. Unfortunately, setting it up (on the server) looks difficult. I do not have iptables set up. My proficiency with iptables is limited (but I understand the basics). I'm looking for a series of simple steps to set it up. I only want to open the SSH port in response to a valid knock. Alternatively, I would consider other port knocking implementations if they are much simpler to set up and the desired Linux and Android clients are available.
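
    For the simpler knockd-style approach (a fixed port sequence rather than fwknop's single-packet authorization), the client side is trivial; a Python sketch with a made-up host and sequence:

        import socket
        import time

        HOST = "server.example.com"        # placeholder
        SEQUENCE = [7000, 8000, 9000]      # placeholder knock sequence; must match the server's config

        for port in SEQUENCE:
            sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            sock.settimeout(0.5)
            try:
                sock.connect((HOST, port))     # the SYN itself is the knock; a refused/filtered port is fine
            except OSError:
                pass
            finally:
                sock.close()
            time.sleep(0.2)

        print("Knock sent; the daemon should now open port 22 for this source IP.")

    The matching server side is usually a few knockd stanzas plus an iptables rule that inserts a temporary ACCEPT for the knocking source address.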

    Read the article

  • Use subpath internal proxy for subdomains, but redirect external clients if they ask for that subpath?

    - by HostileFork
    I have a VirtualHost that I'd like to have several subdomains on. (For the sake of clarity, let's say my domain is example.com and I'm just trying to get started by making foo.example.com work, and build from there.) The simplest way I found for a subdomain to work non-invasively with the framework I have was to proxy to a sub-path via mod_rewrite. Thus paths would appear in the client's URL bar as http://foo.example.com/(whatever) while they'd actually be served http://foo.example.com/foo/(whatever) under the hood. I've managed to do that inside my VirtualHost config file like this: ServerAlias *.example.com RewriteEngine on RewriteCond %{HTTP_HOST} ^foo\.example\.com [NC] # <--- RewriteCond %{REQUEST_URI} !^/foo/.*$ [NC] # AND is implicit with above RewriteRule ^/(.*)$ /foo/$1 [PT] (Note: It was surprisingly hard to find that particular working combination. Specifically, the [PT] seemed to be necessary on the RewriteRule. I could not get it to work with examples I saw elsewhere like [L] or trying just [P]. It would either not show anything or get in loops. Also some browsers seemed to cache the response pages for the bad loops once they got one... a page reload after fixing it wouldn't show it was working! Feedback welcome—in any case—if this part can be done better.) Now I'd like to make what http://foo.example.com/foo/(whatever) provides depend on who asked. If the request came from outside, I'd like the client to be permanently redirected by Apache so they get the URL http://foo.example.com/(whatever) in their browser. If it came internally from the mod_rewrite, I want the request to be handled by the web framework...which is unaware of subdomains. Is something like that possible?

    Read the article

  • RAID 5 with hot spare or RAID 10 with no hot spare?

    - by Boden
    Yes, this is one of those "do my job for me" questions, have some pity :) I'm at the limit of what I can do with the number of hard drives in a server without spending a substantial amount of money. I have four drives left to configure, and I can either set them up as a RAID 5 and dedicate a hot spare, or as a RAID 10 with no hot spare. The size of each will be the same, and the RAID 5 will offer enough performance. I'm RAID 5 shy, but I also don't like the idea of running without a hot spare. I'm not so much worried about degraded performance as about the amount of time the system is without adequate redundancy. The server and drives are under a 13x5, 4-hour response contract (although I happen to know that the nearest service provider is at least 2-3 hours away by car in the winter). I should note that the server also has two RAID 1 arrays which would also be protected by the hot spare. Why don't they make drive cages with 9 bays! Heh.
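
    The trade-off can be put into numbers quickly; a back-of-the-envelope Python sketch (the 2 TB drive size is made up, the point is the usable fraction and what each layout survives):

        DRIVE_TB = 2   # illustrative drive size

        layouts = {
            # name: (drives in array, hot spares, usable fraction, worst-case failures survived)
            "RAID 5 on 3 drives + 1 hot spare": (3, 1, 2 / 3, 1),
            "RAID 10 on 4 drives, no spare":    (4, 0, 1 / 2, 1),   # 2 failures only if they hit different mirrors
        }

        for name, (drives, spares, fraction, survives) in layouts.items():
            usable = drives * DRIVE_TB * fraction
            print(f"{name}: {usable:.0f} TB usable, survives {survives} failure worst-case, spares: {spares}")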

    Read the article

  • Find slow network nodes between two data centers

    - by 2called-chaos
    I've got a problem with syncing big amount of data between two data centers. Both machines have got a gigabit connection and are not fully occupied but the fastest that I am able to get is something between 6 and 10 Mbit = not acceptable! Yesterday I made some traceroute which indicates huge load on a LEVEL3 router but the problem exists for weeks now and the high response time is gone (20ms instead of 300ms). How can I trace this to find the actual slow node? Thought about a traceroute with bigger packages but will this work? In addition this problem might not be related to one of our servers as there are much higher transmission rates to other servers or clients. Actually office = server is faster than server <= server! Any idea is appreciated ;) Update We actually use rsync over ssh to copy the files. As encryption tends to have more bottlenecks I tried a HTTP request but unfortunately it is just as slow. We have a SLA with one of the data centers. They said they already tried to change the routing because they say this is related to a cheap network where the traffic gets routed through. It is true that it will route through a "cheapnet" but only the other way around. Our direction goes through LEVEL3 and the other way goes through lambdanet (which they said is not a good network). If I got it right (I'm a network intermediate) they simulated a longer path to force routing through LEVEL3 and they announce LEVEL3 in the AS path. I basically want to know if they're right or they're just trying to abdicate their responsibility. The thing is that the problem exists in both directions (while different routes), so I think it is in the responsibility of our hoster. And honestly, I don't believe that there is a DC2DC connection which only can handle 600kb/s - 1,5 MB/s for weeks! The question is how to detect WHERE this bottleneck is
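
    Before blaming a particular router it is worth measuring the raw TCP path with no rsync and no ssh in the way. iperf is the usual tool; when it is not installed, a crude stand-in in a few lines of Python does the job (run "server" on one end and "client <host>" on the other; the port number is arbitrary). Running it in both directions makes it easier to show the provider that the bottleneck follows the route rather than the disks or rsync.

        import socket
        import sys
        import time

        PORT, CHUNK, SECONDS = 5201, 64 * 1024, 10

        def server():
            with socket.create_server(("", PORT)) as srv:          # Python 3.8+
                conn, addr = srv.accept()
                start, total = time.time(), 0
                while True:
                    data = conn.recv(CHUNK)
                    if not data:
                        break
                    total += len(data)
                secs = time.time() - start
                print(f"{total / 1e6:.1f} MB from {addr[0]} in {secs:.1f} s "
                      f"= {total * 8 / secs / 1e6:.1f} Mbit/s")

        def client(host):
            payload = b"\0" * CHUNK
            with socket.create_connection((host, PORT)) as conn:
                deadline = time.time() + SECONDS
                while time.time() < deadline:
                    conn.sendall(payload)

        if __name__ == "__main__":
            server() if sys.argv[1] == "server" else client(sys.argv[1])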

    Read the article

  • Slow Local Network, Windows 7, Snow Leopard, WiFi/Wired

    - by WerkkreW
    I am experiencing really poor local network performance in my home. I was recently using a Linksys WRT54G Router with DD-WRT on it, and a couple comparable Linksys-G PCI cards for connectivity but decided to upgrade hoping it would help with my performance issues. The computers in my house are connected as follows: Comcast Business Class Commercial 25mbps/10mbps (Verified) D-Link DGL-4500 Wireless N Router Windows 7x64 - D-Link DWA-552 Wireless-N Windows 7x64 - D-Link DWA-552 Wireless-N Mac Mini 10.6.2 - AirPort Extreme N Playstation 3, Hard Wired Xbox 360, Hard Wired Essentially the problem is very specific. Web browsing and uploading/downloading files from the internet is fine, more than fine. But if I want to say, Stream a video from one of my Windows 7 computers to my PS3, or copy a large video file between either of the PC's or the Mac, I get a consistent 500-900Kbps throughput at the high end. If I open my network browser, or try to browse my homegroup the response time is horrible. Both of my Windows computers are showing Strong wireless signals with a connection speed of 300Mbps. I know I can never expect to achieve anything near those speeds, but 500Kbps? Here is what I have tried so far: Enabled Single mode N-only and N/G Only on router WPA2 with AES Encrpytion Disabled "Remote Differential Compression" in Windows 7 Disabled TCP "Auto-Tuning" Used other software for file copies such as "Teracopy" I am at the end of my rope. Unfortunately I live in a 75 year old home with plaster walls, so hard-wiring my entire house isn't really an option I can handle right now. Any ideas to help me get decent speed when transferring files across my network would be greatly appreciated.

    Read the article

  • Not getting IP from ISP on Multicast Network

    - by Johan Nielsen
    Im having an odd issue with my ISP (COMX.dk) I have a managed access gateway box (Telsay) with three 8P8C ports for use with Internet and Ip-Tv (respectively on different VLANS (so does my ISP tell me)) To utilize a port you will need to register your device's mac address through an online interface. You will then get your device paired with a static ip. I am using one port actively and I have registered another device (router). The router is configured to listen for an active dhcpd on the network. When my router get a lease I get a private ip 192.168.2.2 (not the one bound to my mac) which is odd! I unconnected my router from the gateway and connected my laptop directly. Same thing happened - I was given a private address. I did a port scan on the gateway and found port 80 to be open and browsed to the ip. I was then presented with a management interface of a Belkin wireless router (HMMM!!!!) <--by the way, not my gear At this point I called the ISP to let them know of my issue/findings - Only to be replied "Well, we cant see any rogue dhcp servers" (thinking to myself, well I can) I then decided that it could be fun to try the other port of my gateway, only to experience the same. So I reconnected my router and used the remaining port to make an observer(wireshark promic etc.) I am able to see my router trying to discover a dhcp server but I can also see my ISP's IGMP and PIMv2 packages just repeating the same pattern. Hello...Hello...Hello :) So I called them again, only to get the same response, "we dont see any rogue dhcp's...we cant see the host you are talking to (mac address of the Belkin router)...you are definitively connected through wireless?!?(no im not, no such thing as a wireless wire - i thought to myself)" My questions is, What is going on? (besides from what im reporting here) What am I seeing that the don't? What can I tell them in order for them to resolve mine/their issue?
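
    To put the rogue DHCP server on the table in front of the ISP, it helps to capture an actual OFFER with the offending server's addresses in it. A sketch assuming the third-party scapy package (run as root on the port connected to the gateway):

        from scapy.all import (BOOTP, DHCP, IP, UDP, Ether, conf,
                               get_if_hwaddr, srp)

        iface = conf.iface
        mac = get_if_hwaddr(iface)

        discover = (Ether(src=mac, dst="ff:ff:ff:ff:ff:ff") /
                    IP(src="0.0.0.0", dst="255.255.255.255") /
                    UDP(sport=68, dport=67) /
                    BOOTP(chaddr=bytes.fromhex(mac.replace(":", ""))) /
                    DHCP(options=[("message-type", "discover"), "end"]))

        answered, _ = srp(discover, iface=iface, multi=True, timeout=5, verbose=False)
        for _, offer in answered:
            # The Ether/IP source here is the box handing out the 192.168.2.x leases.
            print("DHCPOFFER from", offer[IP].src, "(", offer[Ether].src, ") offering", offer[BOOTP].yiaddr)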

    Read the article

  • Server doesn't produce SYN-ACK

    - by steve
    I have a small program that takes packets from the nfqueue, changes the ip.dst to my server's address (and the TTL), recalculates the checksum and returns the packet to the nfqueue. The server and the client are Linux, and an Apache web server runs on the server, listening on port 80. I open telnet on the client to the fake IP on port 80. The packet is changed by my program and sent to the server, but the target server (the new dst IP) gets the SYN yet doesn't generate a SYN-ACK (the server also belongs to me, so I can see that it gets the SYN with a correct checksum, but doesn't generate a SYN-ACK). If I do the same but with the real server IP as the destination, the TCP handshake completes correctly (in this case I just change the TTL and checksum; the TTL change is just a test to see that my checksum calculation is OK). I compared the SYNs but didn't find any difference. Any idea? PS: I saw this topic: Server not sending a SYN/ACK packet in response to a SYN packet, and I set all flags the same, but this didn't help. Thank you.
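
    One detail that bites many NFQUEUE rewriters: the TCP checksum covers a pseudo-header that includes the destination IP, so after changing ip.dst both the IP and the TCP checksums have to be recomputed, not just the IP one. A sketch of the rewrite step with the third-party scapy package (scapy refills any checksum field that has been deleted when the packet is serialized):

        from scapy.all import IP, TCP

        def rewrite(raw_bytes, new_dst, new_ttl=64):
            pkt = IP(raw_bytes)          # payload handed over by the nfqueue binding
            pkt.dst = new_dst
            pkt.ttl = new_ttl
            del pkt.chksum               # IP header checksum
            if TCP in pkt:
                del pkt[TCP].chksum      # TCP checksum: covers the pseudo-header with src/dst IPs
            return bytes(pkt)            # both checksums are recomputed here

    If both checksums really do check out, rp_filter on the target (reverse-path filtering, which can silently drop packets that are still visible in tcpdump) is another setting worth checking.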

    Read the article

  • Sending mail results in "Sender address rejected: Domain not found"

    - by user1281413
    The setup: a WHM/cPanel CentOS 5 server running Exim and Courier for mail services, and BIND for domain name services. I recently moved servers. The old server was running a HIGHLY similar configuration, and all accounts were ported via WHM. However, the server is unable to send, and sometimes receive, email. Errors I am seeing (when I do get an error mail back) state: 450 4.1.8 : Sender address rejected: Domain not found. Edit for clarity: this is the error response from remote mail servers. Numerous independent mail servers come back with the same error. (The email address is merely one valid example.) My first instinct of course was to check the domain records. However, k-t.org appears to have a valid record (including an MX record), even after running it through domain checks on a completely different server elsewhere and online. Note that the issue appears to happen with all the domains hosted on the server, not just k-t.org. I have also ensured that a PTR was created. My Googling has only led me to people who had fairly basic DNS mistakes, but either I'm blind/dumb (possible, DNS is not my strong suit), or it's something that is a bit more archaic. I've run out of ideas, and I can't seem to find anything that could explain why servers are unable to resolve the domains. There doesn't seem to be anything missing or incorrect.
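
    Since the complaint comes from the remote side's resolver, the useful test is to ask outside resolvers, not the local BIND, whether the sender domain resolves at that moment. A sketch assuming the third-party dnspython package (2.x API); the domain is the one from the question. If the outside resolvers disagree with the local BIND, the usual suspects after a server move are missing or stale NS delegation at the registrar.

        import dns.resolver   # third-party: dnspython

        DOMAIN = "k-t.org"

        for upstream in ("8.8.8.8", "1.1.1.1"):
            resolver = dns.resolver.Resolver(configure=False)
            resolver.nameservers = [upstream]
            for rtype in ("NS", "MX", "A"):
                try:
                    records = [str(r) for r in resolver.resolve(DOMAIN, rtype)]
                except Exception as exc:           # NXDOMAIN, SERVFAIL, timeouts, ...
                    records = [type(exc).__name__]
                print(f"{upstream} {rtype:3} {records}")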

    Read the article

  • Correct use of SMTP "Sender" header?

    - by Eric Rath
    Our web application sends email messages to people when someone posts new content. Both sender and recipient have opted into receiving email messages from our application. When preparing such a message, we set the following SMTP headers: FROM: [email protected] TO: [email protected] SENDER: [email protected] We chose to use the author's email address in the FROM header in an attempt to provide the best experience for the recipient; when they see the message in their mail client, the author is clear. To avoid the appearance of spoofing, we added the SENDER header (with our own company email address) to make it clear that we sent the message on the author's behalf. After reading RFCs 822 and 2822, this seems to be an intended use of the sender header. Most receiving mail servers seem to handle this well; the email message is delivered normally (assuming the recipient mailbox exists, is not over quota, etc). However, when sending a message FROM an address in a domain TO an address in the same domain, some receiving domains reject the messages with a response like: 571 incorrect IP - psmtp (in reply to RCPT TO command) I think this means the receiving server only saw that the FROM header address was in its own domain, and that the message originated from a server it didn't consider authorized to send messages for that domain. In other words, the receiving server ignored the SENDER header. We have a workaround in place: the webapp keeps a list of such domains that seem to ignore the SENDER header, and when the FROM and TO headers are both in such a domain, it sets the FROM header to our own email address instead. But this list requires maintenance. Is there a better way to achieve the desired experience? We'd like to be a "good citizen" of the net, and all parties involved -- senders and recipients -- want to participate and receive these messages. One alternative is to always use our company email address in the FROM header, and prepend the author's name/address to the subject, but this seems a little clumsy.
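
    For reference, the header layout described above looks like this with the Python standard library email package (addresses are placeholders). Keeping the envelope sender (MAIL FROM) in the application's own domain is what SPF checks at the receiving end actually evaluate, independent of the From and Sender headers.

        import smtplib
        from email.message import EmailMessage

        msg = EmailMessage()
        msg["From"] = "author@example.com"              # the person who wrote the content
        msg["Sender"] = "notifications@ourapp.example"  # the system sending on their behalf (RFC 5322 Sender)
        msg["To"] = "recipient@example.com"
        msg["Subject"] = "New content posted"
        msg.set_content("Someone posted new content you subscribed to.")

        with smtplib.SMTP("localhost") as smtp:
            smtp.send_message(msg, from_addr="notifications@ourapp.example")  # envelope sender stays ours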

    Read the article

  • OpenLDAP ACLs are not working

    - by Dr I
    First things first, I'm currently working with an OpenLDAP: slapd 2.4.36 on a Fedora release 19 (Schrödinger’s Cat). I've just install the openldap with yum and my configuration is the following one: ##### OpenLDAP Default configuration ##### # ##### OpenLDAP CORE CONFIGURATION ##### include /etc/openldap/schema/core.schema include /etc/openldap/schema/cosine.schema include /etc/openldap/schema/inetorgperson.schema include /etc/openldap/schema/nis.schema pidfile /var/lib/ldap/slapd.pid loglevel trace ##### Default Schema ##### database mdb directory /var/lib/ldap/ maxsize 1073741824 suffix "dc=domain,dc=tld" rootdn "cn=root,dc=domain,dc=tld" rootpw {SSHA}SECRETP@SSWORD ##### Default ACL ##### access to attrs=userpassword by self write by group.exact="cn=administrators,ou=builtin,ou=groups,dc=domain,dc=tld" write by anonymous auth by * none I launch my OpenLDAP service using: /usr/sbin/slapd -u ldap -h ldapi:/// ldap:/// -f /etc/openldap/slapd.conf As you can see it's a pretty simple ACL which aim to allow access to the userPassword attribute to a specific group read only, then to the owner read and write to anonymous requiring auth and refuse the access to everyone else. The problem is: Even using a valid user with correct password my ldapsearch ends with zero informations retrieved from the directory, plus I've got a strange response on the result line. # search result search: 2 result: 32 No such object # numResponses: 1 here is the ldapsearch request: ldapsearch -H ldap.domain.tld -W -b dc=domain,dc=tld -s sub -D cn=user,ou=service,ou=employees,ou=users,dc=domain,dc=tld I did not specify any filter as I want to check that ldapsearch is correctly printing only allowed attribute.

    Read the article

  • Java: very slow Tomcat and overly large WAR file

    - by NaN
    I created some sort of RESTful API backend for a mobile app. It's written completely in Java, using Jersey as the framework. At the moment no database is used; it's all in memory, but this is no problem so far (it's only for prototyping purposes). I ordered the smallest package from DigitalOcean and installed tomcat7. All in all Tomcat works, but I have three major problems:
    1) It takes a long time until Tomcat deploys the app: I deploy it via the Tomcat manager and it takes about 2 minutes until the site works (excluding WAR upload time).
    2) The WAR files are quite big (16 MB): I don't know why they are so big. There are no database dependencies and most logic is written in plain Java. Okay, we are using Jersey, but 16 MB is a lot for the logic of a small web service.
    3) I have to restart Tomcat every 3 days or so. It looks like a memory leak or something similar. If the app runs for a few days the response time is quite high and the server seems to be frozen. It works again if I restart Tomcat over SSH.
    You can find my mvn pom file right here. Do you have some tips? Are there good Tomcat alternatives?
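
    For the 16 MB question, listing the biggest entries inside the WAR usually answers it on the spot; in a Jersey project it is almost always WEB-INF/lib full of transitive dependencies pulled in by Maven. A small sketch with the Python standard library (the path to the build artifact is an assumption); mvn dependency:tree then shows which declared dependency drags each jar in, and marking container-provided jars with <scope>provided</scope> keeps them out of the archive.

        import zipfile

        WAR = "target/myapp.war"   # illustrative path to the build artifact

        with zipfile.ZipFile(WAR) as war:
            entries = sorted(war.infolist(), key=lambda info: info.file_size, reverse=True)
            total = sum(info.file_size for info in entries)
            print(f"{len(entries)} entries, {total / 1_048_576:.1f} MB uncompressed")
            for info in entries[:15]:
                print(f"{info.file_size / 1024:9.0f} KB  {info.filename}")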

    Read the article

  • Best Practice - SQL 2012 & IIS in VMWare

    - by Dan Ribar
    We are pretty new to VMware and looking for some thoughts on our environment. We have a VMware cluster that has, on one host:
    VM#1: MS Windows 2008 R2 Enterprise & SQL Server 2012
    VM#2: MS Windows 2008 R2 Standard & IIS
    The IIS ASP.NET app talks directly to the SQL Server. We had a similar environment on physical servers a few months ago and just recently moved to the virtualized environment. Regarding the setup, we have not tweaked any of the VM resource parameters; all is set as standard and all is working. What is observed is that the VMs seem to spool down and we get lags in response. Of course this isn't as fast as the old physical environment, but I am wondering:
    - Is it a good idea to run the SQL Server and the IIS server on the same host? They are the only two VMs on it. The host is a new Dell R620 with 192 GB of memory.
    - Does it make sense to change any CPU or memory reservations when it doesn't seem like there is any contention?
    - Is there a way to keep the VMs spooled up to eliminate delays?
    This is a brand new, squeaky clean vanilla install. What are your thoughts?

    Read the article
