Search Results

Search found 11042 results on 442 pages for 'side'.

  • ports only available from the outside network

    - by ChrisJ
    This is a counter-intuitive problem for me. I have a new Win 2003 server on a static IP address w.x.y.z. Tomcat 7, PostgreSQL 9.1, and Subversion are installed. All of it appears to be working fine from the server itself. We can also access the Tomcat manager, the web applications, and run "svn ls svn://w.x.y.z/" from outside our network.

    However, when I try from another machine in the office, phpPgAdmin and svn cannot establish connections with the server. http://w.x.y.z:5432/phppgadmin cannot connect, and the svn command from above returns:

        svn: E730061: Unable to connect to a repository at URL 'svn://w.x.y.z/'
        svn: E730061: Can't connect to host 'w.x.y.z': No connection could be made because the target machine actively refused it.

    Tomcat manager and the other web apps we have deployed work fine. "netstat -a" on the server shows this:

        Proto  Local Address    Foreign Address  State
        TCP    SERVERNAME:3690  SERVERNAME:0     LISTENING
        TCP    SERVERNAME:5432  SERVERNAME:0     LISTENING

    Windows Firewall was off, but just in case I also tried to enable it and open ports 3690 (svn) and 5432 (postgres). No change. I don't have access to the router/switch, because it just doesn't work that way in Port-au-Prince and our sysadmin is on R&R. Is there anything that might be causing the problem from the server side?
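
    Since the services are listening and the firewall is off, the usual suspects are address binding or the office router failing to hairpin connections to its own public address. A sketch of how to tell the two apart from an office machine, assuming the server also has an internal LAN address (192.168.0.10 here is a placeholder):

        rem From an office machine: try the server's internal LAN address first
        telnet 192.168.0.10 5432
        telnet 192.168.0.10 3690
        rem Then the public address; if the internal tests connect but these are
        rem refused, the router is not hairpinning NAT and the server is fine
        telnet w.x.y.z 5432
        telnet w.x.y.z 3690

    If even the internal address is refused, check that PostgreSQL has listen_addresses = '*' in postgresql.conf plus a matching pg_hba.conf entry, and that svnserve was not started bound to a single address.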

    Read the article

  • Bypassing SQUID on freebsd with PF

    - by epema
    I have PF + squid31 on FreeBSD 9.0, and I want some hosts (the "goodguys") to bypass the proxy, so that torrents are not logged. Also, I want to confirm what "transparent" means: that I don't have to configure proxy settings on the client side, right? I have tried a redirect exception:

        no rdr on $int_if inet proto {tcp,udp} from 192.168.1.233/32 to any

    However, no luck :( Here is a quick look at my conf files.

    SQUID, /usr/local/etc/squid/squid.conf:

        http_port 192.168.1.1:8080 transparent

    RC, /etc/rc.conf:

        gateway_enable="YES"
        pf_enable="YES"
        pf_rules="/usr/local/etc/pf.conf"
        pflog_enable="YES"
        squid_enable="YES"

    I have squid31 installed from ports with SQUID_PF ("Enable transparent proxying with PF") on.

    PF, /usr/local/etc/pf.conf:

        int_if="re0"
        ext_if="bge0"
        localnet="{ 192.168.1.0/24 }"
        table <goodguys> const { "192.168.1.219", "192.168.1.233" }
        set block-policy drop
        set skip on lo0
        scrub in all fragment reassemble
        scrub out all random-id max-mss 1440
        block in on $ext_if
        pass out on $ext_if keep state
        block in on $int_if
        pass in on $int_if inet proto tcp from $int_if:network to $int_if port 8080 keep state
        pass in on $int_if inet proto udp from $int_if:network to $int_if port 21 keep state
        pass in on $int_if inet proto udp from $int_if:network to $int_if port 22 keep state
        pass in on $int_if inet proto udp from $int_if:network to $int_if port 53 keep state
        pass in on $int_if inet proto tcp from $int_if:network to any port { smtp, pop3 } keep state
        pass in on $int_if inet proto icmp from $int_if:network to $int_if keep state
        pass out on $int_if keep state

    What lines should I add to the conf files? I am assuming that the problem is in the firewall (pf).
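
    The pf.conf above contains no rdr rule at all, so nothing is being redirected to Squid in the first place, and a "no rdr" exception only means something when paired with an actual redirect. A sketch of the usual pair (FreeBSD 9 pf syntax, assuming Squid listens on 192.168.1.1:8080 as in squid.conf):

        # exempt the goodguys from the proxy redirect
        no rdr on $int_if inet proto tcp from <goodguys> to any port 80
        # transparently redirect everyone else's web traffic to squid
        rdr on $int_if inet proto tcp from $int_if:network to any port 80 -> 192.168.1.1 port 8080

    The existing pass rule for port 8080 should match the redirected traffic, since pf filters on the post-translation destination. Transparent mode does mean no client-side proxy settings, but it only captures port 80: BitTorrent uses arbitrary ports and never passes through Squid either way.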

    Read the article

  • .tex file remains in use by a process when a batch file is triggered by .Rnw Sweave processing

    - by drknexus
    This is a pretty specialized question. I'm using the Eclipse IDE on Windows XP with the StatET plug-in, so I can write R code as an R/Sweave document. This produces a .tex file that is then post-processed by pdflatex.exe. When I create the file as normal, everything works great (except that my file named russfnc2.Rnw seems to result in russfnc.pdf, even though pdflatex.exe on the console window correctly says that the output is being written to russfnc2.pdf).

    The big problem is when I trigger a batch file from within my Rnw code. My goal is to spawn a side process that waits for the PDF to be made and uploads it to the server. So the Rnw contains:

        if (file.exists("rsp.finalize.bat")) {
          system("rsp.finalize.bat", wait = FALSE, invisible = FALSE)
        }

    The batch file calls Rterm.exe to run a script:

        setwd("C:/theprojectdirectory")
        while (!file.exists("russfnc.pdf")) {
          Sys.sleep(1)
        }
        Sys.sleep(60)

    At the end of that script, I use a shell call to launch psftp.exe and upload the files. All of this works fine when I use my Eclipse profile to trigger Sweave... that is, unless I have that batch-file call at the end of the .Rnw. When it is there, I get the error message "pdflatex.exe: Permission denied: c:\thepath\thetexfile.tex". After that, the .tex file (as far as XP is concerned) is in use by another process, and I have to reboot in order to delete it (and, of course, the PDF is not made). If I manually trigger the batch file after pdflatex.exe has done its thing, everything works fine. How can I make this work correctly using the tools I'm familiar with, viz. R and DOS-style batch files? I'm not sure if this is a SuperUser question or a StackOverflow question, so I'm starting here.
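
    The symptom (pdflatex suddenly denied write access, and the .tex file stuck "in use" afterwards) is consistent with the spawned child inheriting the parent's open file handles on Windows, so one workaround is to run the batch file outside the Sweave process tree altogether, e.g. via the Task Scheduler; a sketch, with the task name as a hypothetical placeholder:

        rem one-time setup: register the batch file as an on-demand task
        schtasks /create /tn "rspFinalize" /tr "C:\theprojectdirectory\rsp.finalize.bat" /sc once /st 00:00

    Then the .Rnw triggers the task rather than spawning a child process:

        if (file.exists("rsp.finalize.bat")) {
          # runs under the scheduler service, not as a child of R/pdflatex
          system('schtasks /run /tn "rspFinalize"', wait = FALSE)
        }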

    Read the article

  • Intermittent 5.7.1 email bounce to Exchange 2007

    - by Steve Kennaird
    My knowledge of Exchange isn't particularly great, so excuse me if some of the terminology I use isn't quite right. I'm primarily a web developer who's now responsible for a small business's network. We have a server running SBS 2008 and Exchange 2007. Generally, everything works well; emails are able to be sent to both internal and external domains without issue. We've only got ~20 users, and Exchange is sitting on a single server.

    I use SendGrid to send emails generated by our externally hosted website to users in the office. Primarily, order notifications are sent to [email protected]. Without any pattern, and less than once per week on average, an email to [email protected] will bounce back, and the logs on SendGrid detail the following error:

        550 5.7.1 Unable to relay for [email protected]

    Either side of that failed delivery attempt, I'm able to send and receive emails to/from [email protected].

    Having done some research, incorrect reverse DNS seems like it could be a cause of intermittent bounces like this. Having used nslookup, I have found that the reverse DNS doesn't map like it should, e.g.:

        Office IP:   135.325.351.123 (made up IP, for example only)
        Domain:      office.somedomain.com (made up, for example only)
        Reverse DNS: somedomain.gotadsl.co.uk (half made up)

    Could this be a cause? I'm sure that the IP address and the domain should map to each other. Also, it has been suggested to me that as the Exchange server is on a network with an ADSL connection, that could be a potential cause, as the connection "goes up and down all day long". I don't have an opinion on this, as I don't have enough knowledge of Exchange/ADSL to form a reliable one. Can anyone offer any insight as to whether one or both are actually potential causes, or if there is another possible cause?
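
    Broken reverse DNS can certainly hurt deliverability of outbound mail, but "550 5.7.1 Unable to relay" is generated by your own Exchange when a message arrives on a receive connector that is not allowed to accept or relay for that recipient; since SendGrid delivers from a pool of rotating source IPs, a connector scoped to particular RemoteIPRanges could explain why only some deliveries fail. A sketch of what to inspect in the Exchange Management Shell (standard Exchange 2007 cmdlets):

        # list receive connectors, their bindings, and who may submit on them
        Get-ReceiveConnector | Format-List Name,Bindings,RemoteIPRanges,PermissionGroups
        # confirm the recipient domain is accepted as authoritative
        Get-AcceptedDomain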

    Read the article

  • Postfix misconfigured? 550 Sender rejected from receiving server

    - by wnstnsmth
    We use Postfix on our CentOS 6 machine, with the following configuration. We use PHP's mail() function to send rudimentary password-reset emails, but there is a problem. As you will see, mydomain and myhostname are correctly set, afaik.

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        data_directory = /var/lib/postfix
        debug_peer_level = 2
        html_directory = no
        inet_interfaces = localhost
        inet_protocols = all
        mail_owner = postfix
        mailq_path = /usr/bin/mailq.postfix
        manpage_directory = /usr/share/man
        mydestination = $myhostname, localhost.$mydomain, localhost
        mydomain = ***.ch
        myhostname = test.***.ch
        newaliases_path = /usr/bin/newaliases.postfix
        queue_directory = /var/spool/postfix
        readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES
        sample_directory = /usr/share/doc/postfix-2.6.6/samples
        sendmail_path = /usr/sbin/sendmail.postfix
        setgid_group = postdrop
        unknown_local_recipient_reject_code = 550

    Now this is what appears in Postfix's /var/log/maillog upon sending an email to ***.***@***.ch, with ***.ch being the same domain our sending server test.***.ch is on:

        Dec 13 16:55:06 R12X0210 postfix/pickup[6831]: E6D6311406AB: uid=48 from=<apache>
        Dec 13 16:55:06 R12X0210 postfix/cleanup[6839]: E6D6311406AB: message-id=<20121213155506.E6D6311406AB@test.***.ch>
        Dec 13 16:55:07 R12X0210 postfix/qmgr[6832]: E6D6311406AB: from=<apache@test.***.ch>, size=1276, nrcpt=1 (queue active)
        Dec 13 16:55:52 R12X0210 postfix/smtp[6841]: E6D6311406AB: to=<***.***@***.ch>, relay=mail.***.ch[**.**.249.3]:25, delay=46, delays=0.18/0/21/24, dsn=5.0.0, status=bounced (host mail.***.ch[**.**.249.3] said: 550 Sender Rejected (in reply to RCPT TO command))
        Dec 13 16:55:52 R12X0210 postfix/cleanup[6839]: 8562C11406AC: message-id=<20121213155552.8562C11406AC@test.***.ch>
        Dec 13 16:55:52 R12X0210 postfix/bounce[6848]: E6D6311406AB: sender non-delivery notification: 8562C11406AC
        Dec 13 16:55:52 R12X0210 postfix/qmgr[6832]: 8562C11406AC: from=<>, size=3065, nrcpt=1 (queue active)
        Dec 13 16:55:52 R12X0210 postfix/qmgr[6832]: E6D6311406AB: removed
        Dec 13 16:55:52 R12X0210 postfix/local[6850]: 8562C11406AC: to=<root@test.***.ch>, orig_to=<apache@test.***.ch>, relay=local, delay=0.13, delays=0.07/0/0/0.05, dsn=2.0.0, status=sent (delivered to mailbox)
        Dec 13 16:55:52 R12X0210 postfix/qmgr[6832]: 8562C11406AC: removed

    So the receiving server rejects the sender (the fourth log line). We have tested with one other recipient and it worked, so this problem might be completely unrelated to our settings and instead related to the recipient. Still, with this question I want to make sure we're not making an obvious misconfiguration on our side.
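
    Given that the far side rejects the envelope sender apache@test.***.ch (an address at a bare hostname, which receiving servers often treat as unresolvable or forged), one common fix is rewriting the sender to a real mailbox with smtp_generic_maps; a sketch, with noreply@***.ch standing in for whatever mailbox actually exists:

        # /etc/postfix/generic
        apache@test.***.ch    noreply@***.ch

        # added to main.cf
        smtp_generic_maps = hash:/etc/postfix/generic

        # apply
        postmap /etc/postfix/generic && postfix reload

    Alternatively, PHP's mail() accepts a fifth parameter such as "-fnoreply@***.ch" to set the envelope sender directly, assuming sendmail_path is what delivers the mail.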

    Read the article

  • Does btrfs balance also defragment files?

    - by pauldoo
    When I run "btrfs filesystem balance", does this implicitly defragment files? I could imagine that balance simply reallocates each file extent separately, preserving the existing fragmentation. There is an FAQ entry, 'What does "balance" do?', which is unclear on this point:

        btrfs filesystem balance is an operation which simply takes all of the data and metadata on the filesystem, and re-writes it in a different place on the disks, passing it through the allocator algorithm on the way. It was originally designed for multi-device filesystems, to spread data more evenly across the devices (i.e. to "balance" their usage). This is particularly useful when adding new devices to a nearly-full filesystem. Due to the way that balance works, it also has some useful side-effects:

        - If there is a lot of allocated but unused data or metadata chunks, a balance may reclaim some of that allocated space. This is the main reason for running a balance on a single-device filesystem.
        - On a filesystem with damaged replication (e.g. a RAID-1 FS with a dead and removed disk), it will force the FS to rebuild the missing copy of the data on one of the currently active devices, restoring the RAID-1 capability of the filesystem.
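
    Whatever balance does to extent placement, defragmentation has its own dedicated command in btrfs, so the two can be run independently; a sketch (option support varies with the btrfs-progs version):

        # defragment a subtree; -r (recursive) needs a reasonably recent btrfs-progs
        btrfs filesystem defragment -r /mnt/point
        # on older tools, recurse manually instead
        find /mnt/point -type f -exec btrfs filesystem defragment {} +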

    Read the article

  • Compaq motherboard CQ60 AMD - Nvidia chipset graphics problem

    - by Dritan
    Hi! It's nice to have read that you solved this problem this way. I have 2 Compaq CQ60 laptops, AMD Athlon with Nvidia graphics.

    The first one is new: when I press the power button, only the ON LED on the front lights up and nothing else. No fan, blank screen, no beep. I don't know what the problem may be. When I plug in the power adapter, only the side power LED near the power-adapter plug lights up; the front one does not.

    The second one spins the fan and lights the power and ON LEDs, but shows nothing on the screen (even with an external monitor). In this case it may be the known Nvidia graphics chip problem, and it may need a reflow. I have a hot-air station, but I don't know if I should try that or the oven method. Please, can you give me any suggestion on how to solve this? I have read that the oven method is only a temporary fix, lasting a maximum of three months; do you have the same experience? Any suggestion is welcome.

    Read the article

  • Juniper’s Network Connect ncsvc on Linux: “host checker failed, error 10”

    - by hfs
    I’m trying to log in to a Juniper VPN with Network Connect from a headless Linux client. I followed the instructions and used the script from http://mad-scientist.us/juniper.html. When running the script with the --nogui switch, the command that finally gets executed is:

        $HOME/.juniper_networks/network_connect/ncsvc -h HOST -u USER -r REALM -f $HOME/.vpn.default.crt

    I get asked for the password, a line “Connecting to…” is printed, but then the program silently stops. When adding -L 5 (most verbose logging) to the command line, these are the last messages printed to the log:

        dsclient.info state: kStateCacheCleaner (dsclient.cpp:280)
        dsclient.info --> POST /dana-na/cc/ccupdate.cgi (authenticate.cpp:162)
        http_connection.para Entering state_start_connection (http_connection.cpp:282)
        http_connection.para Entering state_continue_connection (http_connection.cpp:299)
        http_connection.para Entering state_ssl_connect (http_connection.cpp:468)
        dsssl.para SSL connect ssl=0x833e568/sd=4 connection using cipher RC4-MD5 (DSSSLSock.cpp:656)
        http_connection.para Returning DSHTTP_COMPLETE from state_ssl_connect (http_connection.cpp:476)
        DSHttp.debug state_reading_response_body - copying 0 buffered bytes (http_requester.cpp:800)
        DSHttp.debug state_reading_response_body - recv'd 0 bytes data (http_requester.cpp:833)
        dsclient.info <-- 200 (authenticate.cpp:194)
        dsclient.error state host checker failed, error 10 (dsclient.cpp:282)
        ncapp.error Failed to authenticate with IVE. Error 10 (ncsvc.cpp:197)
        dsncuiapi.para DsNcUiApi::~DsNcUiApi (dsncuiapi.cpp:72)

    What does "host checker failed" mean? How can I find out what it tried to check and what failed? The HostChecker Configuration Guide mentions that a $HOME/.juniper_networks/tncc.jar gets installed on Linux, but my installation contains no such file. From that I concluded that HostChecker is disabled for my VPN on Linux? Are the POST to /dana-na/cc/ccupdate.cgi and “host checker failed” connected, or independent?

    By running the connection over an SSL proxy I found out that the POST data is status=NOTOK. (Funny side note: the client of the oh-so-secure VPN does not validate the server’s SSL certificate, so it is wide open to MITM attacks…) So it seems that it’s the client that closes the connection, and not the server.
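
    Since the POST body is status=NOTOK, the client-side host-checker step itself is reporting failure before authentication completes. A workaround used by community scripts is to sign in once over plain HTTPS, capture the DSID session cookie, and hand that to ncsvc, skipping the client's own login path; a sketch in which the form fields are the usual IVE login parameters and the -c flag is only community-documented (ncsvc itself is undocumented, so treat every flag here as an assumption):

        # obtain a DSID session cookie from the standard IVE login form
        DSID=$(curl -sk -o /dev/null -c - \
            -d 'username=USER&password=PASS&realm=REALM&btnSubmit=Sign+In' \
            https://HOST/dana-na/auth/url_default/login.cgi | awk '$6 == "DSID" {print $7}')
        # hand the existing session to ncsvc instead of letting it authenticate
        $HOME/.juniper_networks/network_connect/ncsvc -h HOST -c DSID=$DSID \
            -f $HOME/.vpn.default.crt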

    Read the article

  • Which upgrade path for disk IO bound postgres server?

    - by user41679
    Hi all,

    We currently have a Sun x4270 with 2x quad-core Xeon Nehalem 2.93GHz CPUs (16 threads), 72GB of RAM and 16 x 10k SAS disks, split between the OS RAID 1, a RAID 10 partition for the write-ahead logs, and a RAID 10 partition for the database tables and indexes, all xfs. I'm currently evaluating which path to go down in terms of upgrades. We'll be sharding the DB at some point soon, but for now I need to focus on hardware upgrades specifically.

    The machine is not CPU or memory bound at all at the moment; IOWait is what has become the issue. The machine sees mostly write access, as we have a heavy caching layer. We're seeing about 300 write IOPS average on both of the database partitions. We don't have any additional storage infrastructure like a Fibre Channel or iSCSI network. Budget isn't too much of a concern; something in line with the size of this server (i.e. no $1m IBM machines). Space is OK on the DB side of things; we're running out obviously, but there's also some reduction we can do. Additional space would be good, though.

    My current thoughts are either:

    - iSCSI SAN, possibly with a 10Gbit network, that has solid-state acceleration.
    - FusionIO card / Sun F20 card (will the FusionIO card work in the Sun box?)
    - DAS shelf (something like this http://www.broadberry.co.uk/das-direct-attached-storage-servers/cyberstore-224s-das) with a combination of 15k SAS disks and some Intel X25-E drives for DB indexes etc. What would I need to put in the x4270 to add a DAS shelf? I think it's a SAS HBA card; do I have to use Sun's own card, or will any PCI Express card work?

    Anything else? What would you guys do, from your experience? I appreciate it's a lot of questions, but I haven't expanded a DB machine for a number of years and the landscape has changed dramatically since then! Any advice or feedback would be very much appreciated. Let me know if there's anything else I can clarify. Thanks in advance!
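
    Before committing to hardware, it may help to pin down what kind of writes those 300 IOPS are and which partition saturates first; a sketch of the measurements (the scratch database is a placeholder, and pgbench -T needs PostgreSQL 8.4+):

        # per-device utilization and await, sampled every 5 seconds; compare WAL vs data volume
        iostat -x 5
        # rough random-write ceiling of the current arrays, against a throwaway database
        createdb bench
        pgbench -i -s 100 bench
        pgbench -c 16 -T 60 bench

    If the WAL volume is the hot one, a small write-optimized SSD mirror (or a controller with a battery-backed write cache) is often a cheaper first step than a SAN.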

    Read the article

  • replacing Buffalo LinkStations with FreeNAS, overall backup strategy, am I on the right path?

    - by Shreko
    We've been using 2 Buffalo LinkStations of 320GB each for a shared directory and employee file storage (around 20 employees): only documents (Word, Excel, CAD drawings etc.) and database backups of the main application server (ERP, accounting). One Buffalo box serves as the main one, located in the server room next to the main application server; the other Buffalo box is located on the opposite side of the building (for fire protection) in a secure storage room and backs up the first one. We also have several external HDs that back up everything from the Buffalo box for an offsite backup.

    After 3.5 years of using these, capacity is the main limitation. I'm planning a replacement and would like to use FreeNAS (we already use m0n0wall with great success). I would like to keep it simple and continue a similar setup, building two low-power boxes with one 2TB HD each. Is a low-power Atom mobo OK? I'm not sure about the HDs: I've read somebody on this site mentioning the Seagate ES.2 as more reliable and better performing. How would the eco/green drives compare? We've been pretty happy with the speed of the Buffalo boxes and I don't want my users to notice any slowdown. Any suggestions?
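
    If the same two-box topology is kept on FreeNAS, the box-to-box copy can be a scheduled rsync pull on the secondary; a sketch of the cron entry, with hostname and paths as placeholders:

        # nightly pull from the primary NAS; -a preserves ownership and permissions
        0 2 * * * rsync -a --delete primary-nas:/mnt/share/ /mnt/share-mirror/

    Note that --delete keeps the mirror exact, which also means accidental deletions propagate overnight; leaving it off (or adding --backup) gives a grace window at the cost of disk space.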

    Read the article

  • A tale of two user ids: Why does NFS not recognize a new user id?

    - by user76177
    I have two servers running RHEL6. The main server, which I will refer to as "server", is a database server. The application server, which I will refer to as "client", mounts a directory from server via NFS. There is a user, appuser, on both client and server. However, appuser's id on client is 502, and appuser's id on server is 506. Both users need read and write capability on the NFS share. To facilitate this, I made the share owned by appuser on server. Of course, client does not recognize that ownership, since appuser has a different id on client. So I did the following:

    - Changed the id of appuser in /etc/passwd on client to be 506
    - Changed ownership of appuser's $HOME on client to appuser again, so that I could log in

    Now, when I look at the NFS share from the client side, I see that it is owned by 502, which is the OLD id for appuser on client. I can't change ownership of the NFS share from client, since that volume physically resides on server. I need to make sure that the NFS share shows ownership by appuser from both server and client. What step have I missed since changing the appuser id on client? NOTE: I have not rebooted client or done anything else yet.
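
    One explanation consistent with this: NFSv3 transmits numeric uids, so every file appuser created over the mount before the id change was written to the server's disk with owner 502, and those numbers don't change when /etc/passwd does. A sketch of how to confirm and fix that, paths as placeholders:

        # on either machine: list numeric ids instead of names
        ls -ln /path/to/share
        # on server: find what still carries the old uid, then re-own it
        find /path/to/share -xdev -uid 502 -ls
        find /path/to/share -xdev -uid 502 -exec chown 506 {} +
        # if the mount is NFSv4, also bounce the id mapper after the passwd edit
        service rpcidmapd restart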

    Read the article

  • Including email, IMs, configs, etc. in documentation or notes

    - by Jason Antman
    The shop I work in is pretty laid-back. We're on a documentation kick, only because historically we've been very bad at it. We do a lot of our brainstorming in face-to-face meetings, and also communicate heavily via IM in addition to email. While I'm usually pretty good about documentation and keeping copious lab notes, I just finished a build of a host and spent hours searching through IMs, emails, files on my workstation, etc. to pull out anything I'd missed in my lab notes, which formed a large part of the basis for the internal documentation.

    Does anyone have any thoughts on, aside from manually saving things to a project directory, managing various data sources (especially email and IM) and tracking them on a per-project basis? Ideally, I'd like an easy way to put copies of emails, IM logs, etc. into a project-specific directory on my workstation and then just have a cron job that syncs that up with a shared folder. This isn't really a candidate for anything more advanced, as the bulk of the data will be copies of configs, code, etc. The big restrictions: email is via a centralized Zimbra install, so nothing can happen server-side, and my workstation is Linux. Aside from writing Pidgin and Thunderbird plugins that let me tag chats and emails as belonging to a project, and then copy them to the appropriate place... any thoughts? Suggestions?

    Thanks, Jason
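
    For the syncing half, a sketch assuming a per-project tree under $HOME/projects and the shared folder already mounted (both paths are placeholders):

        # crontab entry: push project notes and artifacts to the shared folder hourly
        0 * * * * rsync -a --exclude '.git' "$HOME/projects/" /mnt/shared/projects/

    For the capture half, Zimbra exposes mail over IMAP, so a client-side offlineimap or fetchmail job could drop a tagged folder's messages into the project directory without touching the server.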

    Read the article

  • Dovecot (on Mac OS X Server 3 - Mavericks) giving “Error: Failed to autocreate mailbox INBOX: Permission denied”

    - by user2965240
    I recently upgraded my server to Mavericks and I thought everything had gone perfectly. As it turns out, that was not exactly the case. After everything worked fine for a week or so, I created a new user and my mail server flat out stopped working. Most frustratingly, there were no log entries (in any of the many mail logs) which shed any light on the cause. After much trial and error, I finally got my mail server functioning (partially) by reinstalling Server and restoring the mail folder from the previous day's backup (yay for backups). However, as is often the case, there were MANY permission issues. After slogging through various permission problems, I believe that my system is now receiving mail; however, none of my users can check it, because every time someone attempts to log in the server generates the following error:

        Error: Failed to autocreate mailbox INBOX: Permission denied

    I would assume that this is yet another permissions problem, but after much searching I still cannot resolve it. To be clear, I don't want Dovecot to autocreate any mailboxes at this point, because all of the mailboxes should already exist. Any help on this would be so greatly appreciated.

    P.S. On a side note: why is it that repairing permissions never takes care of difficult issues like these? That would be awfully helpful...
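
    Since autocreate only kicks in when Dovecot can't open the mailbox it expects, comparing the configured mail location against the actual on-disk ownership is a reasonable first step; a sketch (the data path is the usual Mavericks Server location, but treat it as a placeholder):

        # effective mail location and the uid/gid dovecot operates as
        doveconf mail_location mail_uid mail_gid
        # compare with the real ownership of the user mail directories
        sudo ls -ldn /Library/Server/Mail/Data/mail/*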

    Read the article

  • How to export SQL Server data from corrupted database (with disk write error)

    - by damitamit
    IT realised there was a disk write error on our production SQL Server 2005, which was causing the backups to fail. By the time they realised this, the nightly backup was old, so they were not able to just restore the backup on another server. The database is still running and being used constantly. However, DBCC CHECKDB fails. The SQL Server backup task fails too, Copy Database fails, and the Export Data wizard fails. It seems, though, that all the data can be read from the tables (i.e. using bcp etc.). Another observation I have made is that the transaction log is nearly double the size of the database. (Does that mean all the changes aren't being written to the MDF?)

    What would be the best plan of attack to get the database to a state where backups are working and the data is safe?

    1. Take the database offline and use the MDF/LDF to somehow create the database on another SQL Server?
    2. Export the data from the database using bcp. Create the database on another SQL Server (using the Generate Scripts function on the corrupt DB to create the schema on the new DB) and use bcp again to import the data.
    3. Some other option that is the right course of action in this situation?

    The IT manager says the data is safe because, if the server fails, the data can be restored from the MDF/LDF. I'm not so sure, so I insisted that we start exporting the data each night as a failsafe (using bcp, for example). IT are also having issues on the hardware side of things, as supposedly the disk error is on a virtualized disk and can't be rebuilt like a normal RAID array (or something like that). Please excuse my use of incorrect terminology and any incorrect assumptions on how SQL Server operates. I'm the application developer and have been called in to help (as it seems IT know less about SQL Server than I do). Many thanks, Amit
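
    For the nightly failsafe, bcp works per table and only needs read access, so it can run even while BACKUP and DBCC fail; a sketch with all names as placeholders:

        rem export one table in native format over a trusted connection
        bcp "MyDb.dbo.Orders" out Orders.bcp -n -S SERVERNAME -T
        rem re-import into a rebuilt database created from the scripted schema
        bcp "MyDbCopy.dbo.Orders" in Orders.bcp -n -S OTHERSERVER -T

    As for the oversized transaction log: in the FULL recovery model the log grows without bound once log backups stop running, so it more likely reflects the broken backup chain than changes missing from the MDF.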

    Read the article

  • Can't seem to get chassis fans running

    - by TK Kocheran
    I've got an ASUS ROG Maximus V Extreme and I'm trying to connect my fans to the chassis-fan pins so they run under motherboard control. I know for sure that my fans work: when I test them with my Molex connector, they all happily power on. There are two of my chassis fan connectors (there are 3-4 in total), plus a connector that came with either my motherboard or the PSU, can't remember :) I've never seen one of these strange cables before. All I know is that if I plug the 4-pin mobo connector into either of these fan plugs, the fans don't come on and don't show up in the BIOS. (The motherboard has a crazy awesome UEFI BIOS and shows you whether it sees the fans.) If I try plugging the 4-pin connection into the mobo and the other side into the PSU, I can't POST. If I plug the PSU connector in without the mobo connector, the fans come on. What could I be doing wrong here? Is it a problem with the cable I'm using? Is there something I may have missed in the build?

    Read the article

  • Problem in accessing Windows shared folder on Ubuntu using terminal

    - by vikramtheone
    Hi guys,

    Description: I have 2 systems, one running Windows (the host) and one running Ubuntu, both on a LAN. On the Windows host I develop software intended for the Linux system, and because the Linux system has little external storage, my idea is to make the project folder on the host side a shared folder with full access and access it on Ubuntu over the network. To achieve this, I have installed Samba on Ubuntu; when I go to Places -> Network I can see the shared project folder and I simply mount it. A link appears on the desktop, and using Nautilus I can open the link and access the contents of the shared folder.

    Problem: even though I mount the shared project folder, it doesn't appear in /media or /mnt, and as a result I don't know what path to use to access the folder from the terminal. For example, when I mounted my USB stick, as expected, a link for the device appeared on the desktop and I also saw a folder appear under /media. So, similarly, a mounted shared folder should have appeared in /mnt, too. Can anyone suggest what I should do now? There are many posts around, but no solid solution for this problem. Help!!! :)

    Vikram
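
    Nautilus mounts SMB shares through GVFS, which places them under ~/.gvfs in your home directory rather than /mnt or /media; for a stable path, the share can also be mounted with CIFS directly. A sketch, with host, share and user as placeholders:

        # where the Places -> Network mounts actually live
        ls ~/.gvfs
        # or mount to a fixed location instead (smbfs provides mount.cifs on
        # older Ubuntu releases; newer ones use the cifs-utils package)
        sudo apt-get install smbfs
        sudo mkdir -p /mnt/project
        sudo mount -t cifs //WINHOST/project /mnt/project \
            -o username=WINUSER,uid=$(id -u),gid=$(id -g)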

    Read the article

  • Getting rid of your server in a small business environment

    - by andygeers
    In a small business environment, is it still necessary to have a central server? Speaking for my own company (a small charity with about 12 employees), we use our server (Windows Server 2003) for the following:

    - Email via Microsoft Exchange
    - Central storage
    - Acting as a print server
    - User authentication / Active Directory

    There are significant costs associated with running a server like this:

    - Electricity, first for the server itself, then for the air conditioning required (this thing pumps out a lot of heat)
    - Noise (of which there is a lot)
    - IT support bills (both Windows Server and Exchange are pretty complicated, and there are many ways they can go wrong)

    I've found ways to replace many of these functions with cheaper (better?) alternatives:

    - Google Apps / GMail is a clear win for us: we have so many spam related problems it's not even funny, and Outlook is dog slow on our aging computers
    - You can buy networked storage devices with built-in print servers, such as the Netgear ReadyNAS™ RND4210, that would allow us to store/share all of our documents and access printers over the network

    The only thing that I can't figure out how to do away with is the authentication side of things. It seems to me that if we got rid of our server, you'd essentially have a bunch of independent PCs that had no shared pool of user accounts and no central administrator. Is that right? Does that matter? Am I missing any other good reasons to keep a central server? Does anybody know of any good, cost-effective ways of achieving the same end, but without the expensive central server?

    Read the article

  • What are the requirements for Windows Remote Assistance over Teredo?

    - by Jens
    I'm trying to get the Windows 7 (or Vista) Remote Assistance feature to work without using UPnP on the novice's computer. After enabling Teredo on the expert's computer (which is in a corporate network, and therefore has Teredo disabled by default), I tried to connect to the novice both using Easy Connect and via the invitation file, with no success. My troubleshooting so far:

    - A connection to the novice from my home PC was successful, hinting at a misconfiguration on the expert's side.
    - Both computers have a "qualified" connection to the Teredo server.
    - Both computers have a valid Teredo IP, access to the Global_ PNRP cloud, and can resolve names registered with PNRP on the other computer.
    - The expert can resolve the PNRP ID automatically generated with an Easy Connect help request.
    - Both computers can ping the other's PNRP name.
    - Both computers can ping the other's Teredo IP address using ping -6.

    Now I am a little stumped. I expected Remote Assistance to work at this point, since my corporate firewall does no Teredo filtering. What could cause RA not to work in this setting? Thanks in advance!
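
    Remote Assistance needs more than a qualified Teredo interface: the inbound RA firewall rules must also permit edge traversal, and a corporate GPO commonly strips that flag. A sketch of what to compare on the expert's machine:

        netsh interface teredo show state
        netsh p2p pnrp cloud show statistics
        rem inspect the RA firewall rules; look for "Edge traversal: Yes"
        netsh advfirewall firewall show rule name=all | findstr /i /c:"Remote Assistance" /c:"Edge traversal"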

    Read the article

  • Firefox will not remember local site cookie

    - by Campo
    This is a weird one. We have a production server (Server 2008) and two staging servers (Server 2008 and Server 2003), and I have sites on all of these. They all use cookies. On the production server, when browsing to our site www.supernovainteractive.com, there is a cookie that detects when you visited the site, so that on clicking through to another page it does not replay the logo animation (top left-hand side). This works in all browsers against the production server.

    I'm not sure what's going on, but for some reason cookies are not working on one site on the 2008 staging server only, and only when browsing with Firefox (3.6.3); they work fine in all other browsers (IE, Chrome, Safari, Opera). In addition, the 2003 staging server works fine.

    You can test on the Supernova Interactive site by watching the logo in the top left corner. It uses a cookie to detect if you've already seen the animation; once you've seen it, it doesn't animate again until tomorrow. Currently, it's animating every time. I have opened an outside-facing port so others can see the issue: http://exchange.supernova.com:10009. Notice you do not get a cookie from exchange.supernova.com. Any ideas on this one? Firewalls are off on the server.
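
    Comparing the raw Set-Cookie headers from the failing server against the working one usually pinpoints this sort of thing: Firefox silently drops cookies whose Domain attribute doesn't match the host being browsed, or whose Expires date it parses as already in the past. A sketch:

        # dump only the cookie headers the staging server sends
        curl -sI http://exchange.supernova.com:10009/ | grep -i '^set-cookie'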

    Read the article

  • IPSec VPN using ZyWALL IPSec VPN Client: unable to connect from some providers

    - by Reshi
    I'm trying to configure an IPSec VPN to one company from my home. The company's internet service provider is SANET. I was able to create a VPN connection from another company that has the same internet service provider. The problem begins when I try to connect from another ISP, like Orange or Telekom. Here is the log from ZyWALL:

        20120816 10:06:18:359 Default (SA Gateway-P1) SEND phase 1 Main Mode [SA] [VID] [VID] [VID] [VID] [VID]
        20120816 10:06:18:375 Default (SA Gateway-P1) RECV phase 1 Main Mode [SA] [VID] [VID] [VID] [VID] [VID] [VID] [VID] [VID]
        20120816 10:06:18:390 Default (SA Gateway-P1) SEND phase 1 Main Mode [KEY_EXCH] [NONCE] [NAT_D] [NAT_D]
        20120816 10:06:18:718 Default (SA Gateway-P1) RECV phase 1 Main Mode [KEY_EXCH] [NONCE] [NAT_D] [NAT_D]
        20120816 10:06:18:734 Default (SA Gateway-P1) SEND phase 1 Main Mode [HASH] [ID]
        20120816 10:06:18:750 Default (SA Gateway-P1) RECV phase 1 Main Mode [HASH] [ID]
        20120816 10:06:18:750 Default phase 1 done: initiator id [email protected], responder id 111.112.113.114
        20120816 10:06:18:765 Default (SA Gateway-Tunnel-P2) SEND phase 2 Quick Mode [HASH] [SA] [KEY_EXCH] [NONCE] [ID] [ID]
        20120816 10:06:18:953 Default (SA Gateway-Tunnel-P2) RECV phase 2 Quick Mode [HASH] [SA] [KEY_EXCH] [NONCE] [ID] [ID]
        20120816 10:06:18:953 Default (SA Gateway-Tunnel-P2) SEND phase 2 Quick Mode [HASH]
        20120816 10:06:48:968 Default (SA Gateway-P1) SEND Informational [HASH] [NOTIFY] type DPD_R_U_THERE
        20120816 10:06:48:984 Default (SA Gateway-P1) RECV Informational [HASH] [NOTIFY] type DPD_R_U_THERE_ACK

    ZyWALL informs me that the tunnel was opened, but I can't ping or access any computer on the network. My configuration at home:

        ISP: Orange optical connection
        Terminal: GPON Optical Network Terminal G-25E
        Router: TP-Link TL-WR941N
          -> SPI firewall enabled
          -> VPN - IPSec passthrough enabled

    I was wondering whether the problem could be on the ISP's side (that it somehow blocks this connection, because from the SANET ISP it worked fine), or even in my terminal or router. What could I check? Where could the problem be?

    Read the article

  • Looking for Remote Control that works with everything (even Windows 7 Media Center)

    - by T Reddy
    Using my Google-fu, it seems that the most basic thing one gets with any DVR is the remote control. Had I known it would be this difficult just to get a consumer IR receiver for Windows 7, I may not have bothered to build an HTPC. But it's too late: I already have the HTPC ready to go (minus the CETON card...). So I'm moving away from TiVo; I hate paying the monthly fees and my box is ancient. I'm looking for these solutions to my HTPC setup. I want to:

    - Switch audio from HDMI to SPDIF via the remote control (i.e., switch from TV to receiver); as a side note, the built-in audio on the mobo has software to do this.
    - Have the volume button on the remote always change the TV's volume (or the receiver's, if possible) and NOT the PC's volume.
    - Have the remote/receiver work well at around 25 feet.
    - Bonus if the IR receiver can work with my existing TiVo remote (or other remotes laying around the house).

    I read a review of the Bluetooth TiVo remote... it sounds promising, but I'm not sure if it is great for a Windows 7 HTPC?

    Read the article

  • Can two mocha for After Effects X-Spline Layers be merged?

    - by George Profenza
    Hello, I'm new to mocha for After Effects, but I like the auto-tracking feature. There are a few markers I'm adding, but there's one which is giving me a bit of a headache. I'm tracking a circle that moves around a larger object, so it moves in front of the object initially, then behind (occluded by the larger object), then in front again. I tried to track the circle in the first part (when it's in front, before being occluded) and in the second part (when it's in front again, coming out on the other side of the large object) using a single X-Spline. The problem is that the second part starts in a different location, and I don't know how to 'move' the X-Spline to the new position and track from there without affecting the previous keyframes. As a workaround, I use two X-Spline layers, export the data to .txt files, then manually merge the two files into a new one containing keyframes from both layers. Is there an easier way to do this (either merging two X-Spline layers, or using a single X-Spline layer that can move to a new location without affecting previous keyframes)? Any suggestion would help.

    Read the article

  • Data Store/Volume disconnecting. How to resume copy of VMDK?

    - by Serge
    I'm having an issue with my ESXi 4.1 hosts losing their datastore on a FC SAN after a power outage. All 3 hosts disconnect, so it's definitely a SAN issue. I've tried to resolve the issue on the SAN side with the SAN software support and Adaptec hardware support; no luck there. So I'm stuck with a SAN that will randomly disconnect the volume. I need to get the virtual machines (VMDK files) off the datastore. The problem is I can only get 5-20% before the datastore disconnects. I have backups that are slightly older that I can replicate the VMDK differences to.

    What has not worked so far:

    - Powering up the VMs: they boot for 5-15 minutes, then freeze.
    - vCenter migrate or clone of a VM: fails after a similar period of time.
    - vCenter copy/paste of a VMDK: was able to get one 30GB VMDK, and no luck after that.
    - VMware Data Recovery: fails at a low %, can't resume, so the next backup starts from the beginning.
    - Veeam Backup & Replication: same as above, no resume function.

    If I can just find a backup solution that will resume from the failed spot, that would solve my issue. Anyone have any ideas that I could try?

    EDIT 1: The SAN is Open-E DSS 6 running on a Supermicro 24-drive enclosure with a 4-port QLogic FC card and an Adaptec 52445 RAID card.
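
    Since the ESXi 4.1 busybox shell has dd, a copy can be resumed at a byte offset after each disconnect instead of restarting; a sketch run from a Linux box over SSH (Tech Support Mode enabled), with datastore paths as placeholders and offsets kept 1MiB-aligned:

        # trim the local partial copy to a 1 MiB boundary, then note its size
        truncate -s $(( $(stat -c %s vm-flat.vmdk) / 1048576 * 1048576 )) vm-flat.vmdk
        OFF=$(stat -c %s vm-flat.vmdk)
        # resume pulling from that offset and append in place
        ssh root@esxi "dd if=/vmfs/volumes/datastore1/vm/vm-flat.vmdk bs=1M skip=$((OFF / 1048576))" \
            | dd of=vm-flat.vmdk bs=1M seek=$((OFF / 1048576)) conv=notrunc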

    Read the article

  • Ping and crawling not working, site still resolving

    - by Andrew Alexander
    Ok, so we're trying to figure out why the site of one of our clients isn't being crawled by Google (we've ruled out robots.txt and meta tags). When we go to the site, by either IP address or domain name, the site resolves and everything works. However, Google is getting a 302 redirect (which it apparently isn't following for crawling), and when we ping the address, it times out (note: the site still resolves in the browser throughout all of this). The site is built in ASP.NET (I assume C#), so my thought was that it was an errant redirect rule, or some other sort of server-side issue. We also thought that it might be due to incorrect domain pointing (but if we try to ping the IP, it doesn't work, so that sorta rules that out). We're really not sure what is causing all of these errors, or even if they have one single source. Anyone have any ideas what could be going on? Do you need any more information? To boil it down in a TL;DR:

    - Site resolving in browser, both IP and domain name. No problems here.
    - Site not being crawled by Google (gets a 302 it doesn't seem to follow); it is not due to robots.txt or meta tags.
    - Ping is not working for the IP address. This is very odd, because, again, the IP address seems to work fine in the browser.
    - Our thoughts are either a redirect rule issue, a domain pointing issue, or possibly some errant code, or some combination of the three.
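
    The ping failure is likely a red herring: HTTP working while ping times out just means ICMP echo is filtered somewhere, and Google doesn't crawl via ICMP. The 302 is the part to chase, and fetching as Googlebot shows whether the redirect depends on the User-Agent; a sketch, URL as a placeholder:

        # GET the page as Googlebot, following and printing each redirect hop
        curl -s -o /dev/null -D - -L -A "Googlebot/2.1 (+http://www.google.com/bot.html)" \
            http://www.example.com/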

    Read the article

  • Exchange 2010 CAS Removal == Broken???

    - by Doug
    Hi there,

    I recently upgraded to Exchange 2010 and have a setup with 2 of my servers running CAS roles: EXCH01 and EXCH02. EXCH02 also happens to have a mailbox role, and a lot of the users' mailboxes sit there. EXCH01 is my front-facing CAS server, facing the net with SSL etc., and incoming mail moves through it as a hub transport server as well.

    As I was trying to lean things out in my VM environment, I removed the CAS role from EXCH02 and all hell broke loose. All the mail users with a mailbox on EXCH02 had their homeMTA set to a deleted-items folder in AD, and so did their msExchHomeServer properties. After a complete battle I manually fixed these back to the old values, and in the meantime reinstalled CAS on EXCH02 (management was going nuts without Outlook working, so I just put things back the way they were in a hurry). As a strange aside, before I reset these to point at EXCH02, I tried EXCH01 and it failed.

    I still want to remove the CAS role from EXCH02, as it really should not have it (an error in my install/planning), and I would have thought that doing so would not cause the issues it did: I assumed that since there was another CAS server in the admin group, all would be good. Was I wrong in my assumption? What can I do to complete this successfully the second time round? Do I need to rehome all the mailboxes to the CAS server? Is this a bug in the role uninstall?
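
    In Exchange 2010, Outlook no longer connects to the mailbox server directly: each mailbox database carries an RpcClientAccessServer attribute pointing MAPI clients at a CAS (or CAS array), which would explain why pulling the CAS role off EXCH02 stranded the mailboxes homed there. Before removing the role again, repointing EXCH02's databases at the remaining CAS should keep Outlook working; a sketch, with the FQDN as a placeholder:

        # see where each database currently sends MAPI clients
        Get-MailboxDatabase | Format-Table Name,Server,RpcClientAccessServer
        # repoint the databases homed on EXCH02 at the surviving CAS
        Get-MailboxDatabase -Server EXCH02 | Set-MailboxDatabase -RpcClientAccessServer "EXCH01.example.local"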

    Read the article
