Search Results

Search found 7845 results on 314 pages for 'connected'.


  • NX Client for Windows 7 Opens Remote Desktop in Multiple Windows

    - by Corey Kennedy
    What I'm trying to do: access my Ubuntu desktop remotely via NX Client on my Windows 7 laptop.

    My environment:
        Server:   Ubuntu 10.10 on an AMD 1 GHz / 512 MB RAM PC
        Client:   Windows 7 on a ThinkPad SL510
        Software: the server runs NXServer 3.4.0 with the xfce4 window manager; the laptop uses NX Client for Windows

    In my NX Client "Desktop" settings I've selected "Unix" and "Custom" for OS and environment. I've also specified "startxfce4" as the application to launch when NX connects. I am able to authenticate an NX session on my laptop. By this I mean: I can start the client on my laptop, enter credentials for my Linux user, and NX establishes a connection to the server and attempts to open a remote desktop window.

    The problem, though, is that this remote desktop is "fragmented" into many windows. One window displays the bulk of my desktop (complete with desktop icons for "Home," "File System," and "Trash"), another contains the taskbar, and another contains the application strip. I can select each of these windows individually, but I cannot click on any objects within them.

    I've searched Super User, Ubuntu Forums, NX help, Server Fault, and tried many Google searches - none have turned up another case of this particular problem. I'm stumped. Does anyone have any suggestions for what I might try? I'm guessing the problem has to do with my xfce config files, but I've only just set up this server - it's been a long time since I've used Linux and there's a lot I just don't know.

    What I am NOT trying to do: use desktop sharing from Ubuntu, whereby I VNC into a desktop that I've already established on the server. I am trying to configure this Linux box as a headless server that I can stash someplace out-of-the-way in my house, then interact with through my laptop. I don't want a monitor or keyboard connected to the Linux box. Thanks for your help!

    Read the article

  • SharePoint (WSS 3.0) on SBS 2008 broken

    - by tcv
    I recently ran the SharePoint Products and Technologies Wizard. I had hoped this would bring up SharePoint and allow me to access it so I could begin to learn. But it's not working. Here is some data that I hope is relevant.

    I am doing all my testing on the SBS 2008 server itself. I changed the host header in IIS to reflect an external FQDN I plan to deploy. The SBS server is remote and there are no domain-connected workstations.

    If I browse "localhost" over SSL, I can get to the site, albeit with a self-signed cert warning. If I attempt to connect via SSL using either the internal FQDN (.local), the external FQDN (.net), or any other permutation thereof, I am prompted for credentials three times but am not allowed access. My account is a domain admin. The site is inaccessible over port 80 whether using localhost, the internal FQDN (.local), or the external FQDN (.net).

    Right now I suspect my problem is within IIS, but I don't know. My plan is to publish the SharePoint site to the web so my partner and I can check documents in/out. Can someone help point me in the right direction?
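
    One hedged suggestion, since the symptom matches it closely: being prompted three times and then refused when browsing a local site by its FQDN from the server itself is the classic signature of the Windows loopback check. A sketch of the usual KB896861 workaround, assuming that is what is biting here:

        reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa /v DisableLoopbackCheck /t REG_DWORD /d 1
        iisreset

    (Whitelisting specific host names via the BackConnectionHostNames value is the more conservative variant of the same fix.)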

    Read the article

  • Why does sendmail resolve to the ISP domain?

    - by digital illusion
    I wish to set up a local mail server for debugging purposes, using Fedora 15. I set up sendmail, but there is a problem. When I'm not connected to the internet, the local mail server delivers correctly (to localhost), and in /var/log/mail I see that a mail to [email protected] was delivered correctly:

        Jun 21 18:24:56 PowersourceII sendmail[6019]: p5LGOttt006019: [email protected], size=328, class=0, nrcpts=1, msgid=<[email protected]>, relay=adriano@localhost
        Jun 21 18:24:56 PowersourceII sendmail[6020]: p5LGOuSV006020: from=<[email protected]>, size=506, class=0, nrcpts=1, msgid=<[email protected]>, proto=ESMTP, daemon=MTA, relay=PowersourceII.localdomain [127.0.0.1]
        Jun 21 18:24:56 PowersourceII sendmail[6019]: p5LGOttt006019: [email protected], [email protected] (500/500), delay=00:00:01, xdelay=00:00:00, mailer=relay, pri=30328, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (p5LGOuSV006020 Message accepted for delivery)

    When I connect, NetworkManager fills in /etc/resolv.conf with:

        domain fastwebnet.it
        search fastwebnet.it localdomain
        nameserver 62.101.93.101
        nameserver 83.103.25.250

    Now sendmail no longer works and tries to send messages to my ISP's domain, as seen in the log:

        Jun 21 18:40:02 PowersourceII sendmail[6348]: p5LGe1LV006348: [email protected], [email protected] (500/500), delay=00:00:01, xdelay=00:00:01, mailer=relay, pri=30327, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (p5LGe10n006352 Message accepted for delivery)
        Jun 21 18:40:02 PowersourceII sendmail[6354]: p5LGe10n006352: to=<[email protected]>, delay=00:00:01, xdelay=00:00:00, mailer=esmtp, pri=120651, relay=mx3.fastwebnet.it. [85.18.95.21], dsn=5.1.1, stat=User unknown

    As you can see, it tries to deliver a mail to [email protected], and fails. The setup works under other ISPs. How can I avoid the Fastweb ISP DNS relay? Thank you
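
    A hedged interpretation of the logs: once NetworkManager writes "search fastwebnet.it localdomain", sendmail qualifies bare local user names with the resolver's domain, so an unqualified recipient ends up at fastwebnet.it. One sketch of pinning the canonical name in sendmail.mc instead of letting it follow resolv.conf (confDOMAIN_NAME is the standard sendmail m4 define; the value here is an assumption taken from the hostname in the logs):

        dnl pin $j so address qualification ignores resolv.conf's search list
        define(`confDOMAIN_NAME', `PowersourceII.localdomain')dnl

    then rebuild and restart:

        m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf && service sendmail restart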

    Read the article

  • My VPS host (rosehosting) sold me a domain name, but I can't get it to work

    - by Faisal Vali
    My VPS host (RoseHosting) sold me a domain name, but I can't get it to work. They sent me an email with the following (almost a month ago):

        DNS Servers (unless you ordered your own DNS servers):
            ns1.rosehosting.com (216.114.78.148)
            ns2.rosehosting.com (216.114.78.155)
        Operating System:   Ubuntu 9.04
        Domain Name:        mytestdomainfv.com
        Host Name:          mytestdomainfv.com
        IP Address:         ....
        Physical Host Name: Vs####.rosehosting.com

    When I type in the physical host name or the IP from a remote computer, I get connected to my VPS. But when I type 'mytestdomainfv.com' the name is never resolved, and it has been a month now. I thought that they would configure it so that it would, but it seems that they haven't. Does anyone know how I can get 'mytestdomainfv.com' to point to my VPS?

    I looked at some of the other similar questions, but they talk about forwarding GoDaddy domain names - so I'm not sure if it applies - but then again, it might just be my naivete. Any direction will be greatly appreciated. Thanks!

    P.S. mytestdomainfv.com is not the real domain name.
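
    A hedged first diagnostic (standard dig/whois usage, nothing RoseHosting-specific beyond the names quoted above) is to check whether the registrar actually delegated the domain to those name servers, and whether the name servers answer for it:

        whois mytestdomainfv.com | grep -i "name server"
        dig +trace mytestdomainfv.com
        dig @ns1.rosehosting.com mytestdomainfv.com A

    If the whois output shows no name servers, the delegation was never set up at the registrar; if the direct query to ns1 returns nothing, the host never created the zone.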

    Read the article

  • Must all routers really know the routes to every other router?

    - by Philipili
    This is my complicated and long question. First let's talk about the context.

    Network topology:

        PC A --- RT A --- RT C --- RT B --- PC B
        (RT C has a WAN NIC connected to "the cloud")

    The situation:

    - PC A must send a packet to PC B
    - Default routes direct packets to the cloud
    - We have no access to RT C's configuration
    - RT C only knows how to reach network A, not network B
    - RT A knows about network B
    - RT B knows about network A

    RT C's routing table:

        Destination   NIC     Gateway
        0.0.0.0       WAN     Cloud
        Network A     LAN A   RT A's WAN

    RT A's routing table:

        Destination   NIC     Gateway
        0.0.0.0       WAN     LAN A
        Network B     WAN     LAN A

    RT B's routing table:

        Destination   NIC     Gateway
        0.0.0.0       WAN     LAN B
        Network A     WAN     LAN B

    I would like to permit PC A and PC B to communicate, but I don't have access to RT C. Networks B and BC are new.

    Can PC A send a packet to RT B's WAN NIC (which is possible) and "ask RT B to direct the packet to PC B"? I believe replacing RT B with a VPN server should do the trick, but I would like to know if it is possible without establishing a new connection.
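
    Since RT A and RT B can already reach each other's WAN addresses through RT C, one way to get the effect described above without a session-oriented VPN (a hedged sketch, assuming both routers are Linux-based; all addresses in angle brackets are placeholders) is a GRE tunnel, which only encapsulates packets and never "establishes a connection":

        # on RT A (mirror on RT B with the roles swapped)
        ip tunnel add gre1 mode gre local <RT A WAN IP> remote <RT B WAN IP> ttl 64
        ip link set gre1 up
        ip addr add 10.255.0.1/30 dev gre1        # hypothetical tunnel subnet
        ip route add <network B> via 10.255.0.2 dev gre1

    RT C then only ever sees unicast packets between the two WAN addresses it already knows how to deliver.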

    Read the article

  • SCCM 2012 clients no longer detected

    - by user3685428
    Here is the scenario: I had a fully functioning SCCM 2012 site server with the DP, MP, SUP, Application Catalog, etc. roles configured and working. There is only one server on this site. Everything was great, but I was not happy with SUP, so I decided to create a separate WSUS server and configure Windows Updates through GPOs. That setup worked great as well, so I went ahead and removed the SUP role from SCCM and removed the WSUS feature from my SCCM server (they were configured on the same SCCM server). I did not notice any problems right away.

    A couple of days later I noticed that the OSD deployments were giving errors, and after a couple of hours of trying suggestions from Google I was able to uninstall PXE, make a few changes, and reinstall with WDS to get it working again. Again, I thought everything was fine and continued on.

    These last couple of days I have noticed that any machine newly deployed or newly installing the client will show in the SCCM console with Client = "No". The client machines show as connected to a site, but Software Center shows "IT Organization" instead of our site name like the previous clients. The existing clients all seem to be functioning normally; they still receive application distributions, configuration baselines, etc. Reinstalling, uninstalling and reinstalling, and repairing do not fix the problem, and it happens on all new clients. ClientLocation.log shows the client connecting to the correct MP. Nothing odd in any of the logs except for the ClientMessaging.log, which repeats this entry continuously:

        <![LOG[Raising event:
        instance of CCM_CcmHttp_Status
        {
            ClientID = "GUID:0450fde3-ab82-41bf-9c33-87a18113744b";
            DateTime = "20140528214824.993000+000";
            HostName = "SOUNDWAVE.domain.org";
            HRESULT = "0x00000000";
            ProcessID = 4092;
            StatusCode = 0;
            ThreadID = 3720;
        };
        ]LOG]!><time="16:48:24.994+300" date="05-28-2014" component="CcmMessaging" context="" type="1" thread="3720" file="event.cpp:706">

    Thanks

    Read the article

  • FileZilla client unable to get directory listing from FileZilla Server (Windows)

    - by sestocker
    I've set up a self-signed certificate in FileZilla Server and enabled FTP over SSL/TLS. When I connect from the FileZilla client, I am able to authenticate but cannot get a directory listing:

        Status:   Connecting to MY_SERVER_IP:21...
        Status:   Connection established, waiting for welcome message...
        Response: 220-FileZilla Server version 0.9.39 beta
        Response: 220-written by Tim Kosse ([email protected])
        Response: 220 Please visit http://sourceforge.net/projects/filezilla/
        Command:  AUTH TLS
        Response: 234 Using authentication type TLS
        Status:   Initializing TLS...
        Status:   Verifying certificate...
        Command:  USER MYUSER
        Status:   TLS/SSL connection established.
        Response: 331 Password required for MYUSER
        Command:  PASS ********
        Response: 230 Logged on
        Command:  PBSZ 0
        Response: 200 PBSZ=0
        Command:  PROT P
        Response: 200 Protection level set to P
        Status:   Connected
        Status:   Retrieving directory listing...
        Command:  PWD
        Response: 257 "/" is current directory.
        Command:  TYPE I
        Response: 200 Type set to I
        Command:  PORT 10,10,25,85,219,172
        Response: 200 Port command successful
        Command:  MLSD
        Response: 150 Opening data channel for directory list.
        Response: 425 Can't open data connection.
        Error:    Failed to retrieve directory listing

    I have ports 21 and 50001 through 50005 open on the firewall. We are migrating servers - the 50001-50005 range is one of the things that helped get FTPS working on the old server. I'm not sure this installation would use the same ports? What else could be the problem?
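
    Two hedged observations from the log above: the PORT command advertises a private address (10.10.25.85), meaning the client is in active mode, which NAT and firewalls commonly break; and the 50001-50005 range only matters if the server's passive-mode settings actually point at it. A quick cross-check with command-line curl, which defaults to passive mode (MYUSER and MY_SERVER_IP as in the log; curl prompts for the password):

        curl -v --ssl-reqd --list-only -u MYUSER ftp://MY_SERVER_IP/

    If that lists the directory, forcing the FileZilla client into passive mode and pinning FileZilla Server's passive port range to the opened 50001-50005 ports should line everything up.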

    Read the article

  • Can't set screen brightness in Gentoo system

    - by Real Yang
    My system: Linux gentoo 3.10.7-gentoo-r1
    VGA compatible controller: NVIDIA Corporation GT216M [GeForce GT 240M] (rev a2)

    Output of xbacklight:

        No outputs have backlight property

    Output of xrandr:

        xrandr: Failed to get size of gamma for output default
        Screen 0: minimum 640 x 480, current 1280 x 720, maximum 1280 x 768
        default connected 1280x720+0+0 0mm x 0mm
           1280x720       0.0*
           1024x768      61.0
           800x600       61.0
           640x480       60.0
           1280x768       0.0

    Output of ls /proc/acpi:

        button/  event

    When I'm on kernel 3.8.13, I can change my brightness using xbacklight. I compiled 3.10.7-r1 using genkernel all. Before the upgrade I did get a notice about "compatibility issues for Nvidia users" from emerge, but I still don't know the details. Is there any way to let me set the brightness?

    Then I found an ebuild, app-laptop/nvidiabl-0.81, and tried to emerge nvidiabl, but I got this message:

        Your kernel does not support FB_BACKLIGHT. To enable it you can enable any
        frame buffer with backlight control or nouveau. Note that you cannot use
        FB_NVIDIA with nvidia's proprietary driver.
        Please check to make sure these options are set correctly. Failure to do so
        may cause unexpected problems. Once you have satisfied these options, please
        try merging this package again.
        ERROR: app-laptop/nvidiabl-0.81::gentoo failed (pretend phase):
          Incorrect kernel configuration options
        Call stack:
          ebuild.sh, line 93:             Called pkg_pretend
          nvidiabl-0.81.ebuild, line 31:  Called linux-mod_pkg_setup
          linux-mod.eclass, line 559:     Called linux-info_pkg_setup
          linux-info.eclass, line 911:    Called check_extra_config
          linux-info.eclass, line 805:    Called die
        The specific snippet of code:
          die "Incorrect kernel configuration options"

    [SOLVED] I entered menuconfig again, checked Device Drivers -> Graphics support -> Support for frame buffer devices, and found this:

        <*> nVidia Framebuffer Support
        [*]   Support for backlight control (NEW)

    What can I say. Recompiling...
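
    For anyone hitting the same pretend-phase failure, a hedged way to confirm the two kernel options before recompiling (the first command assumes the kernel was built with IKCONFIG; the second assumes the usual /usr/src/linux symlink):

        zgrep -E 'CONFIG_FB_NVIDIA|CONFIG_FB_BACKLIGHT' /proc/config.gz
        grep  -E 'CONFIG_FB_NVIDIA|CONFIG_FB_BACKLIGHT' /usr/src/linux/.config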

    Read the article

  • Shared printer hosted by Windows 7 to XP peers [closed]

    - by Alistair Knock
    Possible Duplicate: Add Network Printer drivers in Windows 7/Server 2008 R2?

    A Canon Pixma iP4600 installed all by itself when connected to a Windows 7 64-bit machine. I then updated it with the add-on Canon provides to give additional functionality. I now wish to share it with a box running Windows XP 32-bit.

    The XP box can see the printer but can't find a driver from the 7 machine, which is fine. I ask the 7 machine to get the x86 drivers, but it can't. I install the 32-bit XP drivers on the XP machine, but unlike previous Canon drivers (which unzip to give an .inf file), these assume a local printer and partially give up. I find the .inf file in the Program Files\CanonBJ\... directory. Neither the XP machine, nor the 7 machine when the CanonBJ directory is shared to it, is happy with the .inf file. I attempt to install the 32-bit drivers on the 64-bit 7 machine, which, understandably, fails.

    Where do we go from here? (I apologise for posting in the first person; I'm not sure why this was.)

    Read the article

  • ntop to analyse bandwidth usage on multiple ASA 5505

    - by dunxd
    I have set up a NetFlow server at our data centre, which is connected via VPN to ~40 remote offices using Cisco ASA 5505s. The aim is to analyse usage data and find out exactly how the remote connections are being used. I followed http://techowto.files.wordpress.com/2008/09/ntop-guide.pdf to set up ntop and https://supportforums.cisco.com/docs/DOC-6114 to set up the ASAs.

    I can see from the Plugin NetFlow Statistics page that NetFlow packets from my ASAs are being received - the counter is increasing. However, I am not seeing any breakdown on the Global Traffic Statistics page after switching to the NetFlow interface; I'm just seeing a pie chart showing 100% of traffic for eth0.

    The interfaces and documentation are a little hard to follow, so I am not sure I have things configured correctly. When setting up my NetFlow-device.2 I can specify a "Virtual NetFlow Interface Network Address" - the web UI says: "This value is in the form of a network address and mask on the network where the actual NetFlow probe is located."

    - Is this a network address (e.g. 192.168.0.0/24) or an actual host IP address (e.g. 192.168.0.1/24)?
    - If it should be a network address, is this the network one of my ASAs is on, or the network my ntop server is on?
    - If a host IP address, is this the IP address used by eth0 on my ntop server, the IP address of an ASA, or something else?
    - Do I need a separate virtual interface for each ASA I am collecting NetFlow data from?

    Any guidance would be greatly welcome.

    Read the article

  • HTTP responses: curl and wget give different results

    - by Fab
    To check the HTTP response headers for a set of URLs, I send the following request headers with curl:

        foreach ( $urls as $url ) {
            // Setup headers - I used the same headers from Firefox version 2.0.0.6
            $header[] = "Accept: text/xml,application/xml,application/xhtml+xml,";
            $header[] = "text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5";
            $header[] = "Cache-Control: max-age=0";
            $header[] = "Connection: keep-alive";
            $header[] = "Keep-Alive: 300";
            $header[] = "Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7";
            $header[] = "Accept-Language: en-us,en;q=0.5";
            $header[] = "Pragma: "; // browsers keep this blank.

            curl_setopt( $ch, CURLOPT_URL, $url );
            curl_setopt( $ch, CURLOPT_USERAGENT, 'Googlebot/2.1 (+http://www.google.com/bot.html)' );
            curl_setopt( $ch, CURLOPT_HTTPHEADER, $header );
            curl_setopt( $ch, CURLOPT_REFERER, 'http://www.google.com' );
            curl_setopt( $ch, CURLOPT_HEADER, true );
            curl_setopt( $ch, CURLOPT_NOBODY, true );
            curl_setopt( $ch, CURLOPT_RETURNTRANSFER, true );
            curl_setopt( $ch, CURLOPT_FOLLOWLOCATION, true );
            curl_setopt( $ch, CURLOPT_HTTPAUTH, CURLAUTH_ANY );
            curl_setopt( $ch, CURLOPT_TIMEOUT, 10 ); // timeout 10 seconds
        }

    Sometimes I receive 200 OK, which is good; other times 301, 302, or 307, which I consider good as well. But other times I receive weird statuses such as 406, 500, or 504, which should indicate an invalid URL, yet when I open the URL in the browser it is fine. For example, the script returns:

        http://www.awe.co.uk/  =>  HTTP/1.1 406 Not Acceptable

    while wget returns:

        wget http://www.awe.co.uk/
        --2011-06-23 15:26:26--  http://www.awe.co.uk/
        Resolving www.awe.co.uk... 77.73.123.140
        Connecting to www.awe.co.uk|77.73.123.140|:80... connected.
        HTTP request sent, awaiting response... 200 OK

    Does anyone know which request header I am missing or adding in excess?
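
    Two hedged things worth testing against the code above: the Firefox Accept value is split across two array entries, so the second fragment reaches the server as its own malformed header line (a plausible trigger for 406 Not Acceptable on strict servers); and $header is never reset inside the loop, so the list grows with every URL. A minimal command-line reproduction to isolate the headers, using only the values from the script:

        curl -sI -A 'Googlebot/2.1 (+http://www.google.com/bot.html)' \
             -H 'Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5' \
             http://www.awe.co.uk/

    Adding the remaining headers one at a time should reveal which one flips the response from 200 to 406.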

    Read the article

  • nxclient crashes when trying to open a terminal from a remote client through "ssh -Y"

    - by user167328
    I support around 150 Linux machines. I have two virtual machines on an ESXi server which I access via nxmachine v3 from a Windows 7 box. These machines run CentOS 5 with KDE and Lubuntu 12.04.1, and they are the admin GUIs from which I support the 150 machines. The Linux machines I manage are Red Hat 4/5, CentOS 5, and Ubuntu 10 and 12. Normally I contact the machines via ssh -Y.

    Today I did an ssh -Y to a remote machine running Ubuntu 12.10 and ssh 6.0p1, then tried to open an lxterminal on the remote machine, which should display on my KDE desktop. This immediately and reproducibly crashed my NX client session. I tried again from my Lubuntu system with the same effect. I have not observed the phenomenon from other machines yet. The message log on my KDE host shows:

        Unexpected termination of nxagent because of signal: 11
        Logger::log nxnode 3920

    Googling for this revealed no usable answer. Does anybody have a clue what is going on here, or a hint how to solve the issue?

    Add-on: I asked the user at the remote machine to export his DISPLAY to my host and open an lxterminal. This worked without problems, i.e. the NX client did not crash. Then the user tried to send me xeyes, and this also killed the NX client with the same error message in the message log as above. This makes me suspect that the problem is not solely connected to ssh but maybe to some library stuff.

    Read the article

  • Western Digital My Book: Can't access the data on the drive

    - by Bryan Denny
    My girlfriend has an external hard drive by Western Digital called a My Book. When the external drive is connected, it does not show up as an accessible disk drive on the computer, although it appears fine in Device Manager. I can also see it in Disk Management, but the volume is not mapped to a drive letter, nor can I change the drive letter: the context menu only offers Delete Volume.

    I would rather not lose the data on the drive if possible. What can I do from here to get to the data?

    Things I've tried/know:

    - Uninstalled the drivers and re-installed them.
    - The device does the same thing when attached to either her Win7 laptop or my Win8 laptop.
    - I don't think there's an issue with the HDD itself: no clicking noises, etc. I ran Western Digital Data LifeGuard Diagnostics (DLGDIAG) and the SMART status was a "PASS"; all of the SMART disk information looked fine. I haven't had time to run the full diagnostic tests yet, but I do not believe it's a mechanical issue.
    - The hard drive is inside an enclosure; I have not attempted to pry the drive out yet.

    How can I get Windows to properly detect this drive?
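
    A hedged next step (standard diskpart usage; the disk and volume numbers below are hypothetical - check them with the list commands first) is to see whether the volume reports a file system at all, and to try assigning a letter from the command line:

        diskpart
        DISKPART> list disk
        DISKPART> select disk 1
        DISKPART> list volume          (a volume showing "RAW" suggests a damaged
                                        file system rather than a drive-letter issue)
        DISKPART> select volume 3
        DISKPART> assign letter=E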

    Read the article

  • YUM error. Is this a cert error?

    - by Julia Roberts
    From the system log:

        Nov 13 13:38:57 host abrt: detected unhandled Python exception in '/usr/bin/yum'
        Nov 13 13:38:57 host abrtd: New client connected
        Nov 13 13:38:57 host abrt-server[3508]: Saved Python crash dump of pid 3151 to /var/spool/abrt/pyhook-2012-11-13-13:38:57-3151
        Nov 13 13:38:57 host abrtd: Directory 'pyhook-2012-11-13-13:38:57-3151' creation detected
        Nov 13 13:38:57 host abrtd: Can't load public GPG key /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-beta
        Nov 13 13:38:57 host abrtd: Can't load public GPG key /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-legacy-former
        Nov 13 13:38:57 host abrtd: Can't load public GPG key /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-legacy-release
        Nov 13 13:38:57 host abrtd: Can't load public GPG key /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-legacy-rhx
        Nov 13 13:38:57 host abrtd: Can't load public GPG key /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
        Nov 13 13:38:57 host abrtd: Package 'yum' isn't signed with proper key
        Nov 13 13:38:57 host abrtd: 'post-create' on '/var/spool/abrt/pyhook-2012-11-13-13:38:57-3151' exited with 1
        Nov 13 13:38:57 host abrtd: Corrupted or bad directory /var/spool/abrt/pyhook-2012-11-13-13:38:57-3151, deleting

    There is also nothing in the crash dump file. Ideas?

        yum update
        Loaded plugins: fastestmirror, rhnplugin, security
        An error has occurred: Internal Server Error
        See /var/log/up2date for more information

    Is yum broken?
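
    A hedged reading, based only on the lines above: the abrtd chatter is secondary - abrt is merely failing to save its crash report because the GPG key files it checks are missing, not because anything certificate-related is wrong with yum. The actual failure is the RHN plugin receiving "Internal Server Error", so the log it names is the place to start, followed by a clean retry:

        tail -n 50 /var/log/up2date
        yum clean all
        yum update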

    Read the article

  • SATA Blu-ray drive not detected via USB adapter

    - by LTR
    I'm using a SATA Blu-ray drive (Lite-On iHOS104), connected via a USB adapter to my Windows 7 (32-bit) notebook. The first time I plugged the adapter in, it worked perfectly: the drive appeared in Explorer, and my player software was able to play an encrypted Blu-ray disc just fine.

    After the first reboot, it never worked again. Now it shows up as "removable media" with a size of 0 bytes, regardless of whether a disc is inserted or not. The drive does not show up under the green "eject USB device" icon in the taskbar. The adapter is powered using a separate power adapter.

    I have tried:

    - disconnecting all other USB devices
    - testing with a DVD instead of a Blu-ray disc
    - letting Windows search for updated drivers for the drive - there are none
    - using the same drive and adapter on another Win7 (but x64) computer; they work perfectly
    - using other USB ports on the same notebook - tried them all
    - using a different SATA/USB adapter

    What else can I do to diagnose this problem?

    Read the article

  • Epson Artisan 800 on Ubuntu/Linux

    - by Tim Lytle
    Update for Ubuntu 10.04: printing should work out of the box; scanning still needs the newer SANE backend.

    I'm looking for a known-good way to set up an Epson Artisan 800 on Ubuntu specifically, or on any Linux box in general. It is a printer/scanner with Ethernet/WiFi/USB. I'd like to use it as a network printer/scanner, doing both from my Windows and Ubuntu machines; however, if it needs to be physically connected to a computer (preferably the Ubuntu machine), that is doable (again, then sharing the print/scan functions to the network).

    Basically, I'm looking for someone who has used this printer/scanner (or a similar one) in a multi-platform environment to share how they set it up and how well it worked.

    Updated: A little more information. Like most printers (I expect), the documentation basically says "don't use plug-and-play, run our setup CD from your Windows/Mac system" for everything, even setting it up for network use. I guess that's to make it easy for most people, but when you're using an OS unsupported by Epson's documentation, you're just stuck on your own. What I was hoping for was someone who could say: "Forget the bundled software, do [this] to set it up on wifi manually, install [this] to connect to the scanner from [os], printing works with [this] driver - at least that's how I set it up." I will (and have so far) use the information here, and will post my own setup when I'm done, if there's no one else out there with that experience.

    Read the article

  • Edit-text-files-over-SSH using a local text editor

    - by Mikko Ohtamaa
    I work in various Linux and UNIX environments. I'd like to elegantly solve the problem of editing remote configuration files over SSH. Instead of using terminal editors (nano), I'd like to open the file in a local text editor on my desktop (Sublime Text 2). CyberDuck, WinSCP and various other SFTP apps can do this.

    Using editors over X11 forwarding has proven problematic, and archaic text editors like Vim or Emacs do not serve my needs well - they could do this, but I prefer other text editing software. SSH mounts (FUSE) are also problematic unless they can happen on demand, triggered by the remote site.

    What I hope to achieve:

    1. I have some kind of easily deployable shell script, etc., which I can copy to the remote server (let's call it mooedit).
    2. I run the mooedit command on the remote server, to which I am connected over SSH.
    3. mooedit sends some kind of signal (over SSH) to my local desktop.
    4. On my local desktop this signal is captured, and it determines: "a-ha! moo wants to edit a file on server X in folder Y".
    5. The file is SFTP-transferred to the local desktop (/tmp).
    6. The file is opened in a nice GUI text editor on the local desktop.
    7. When Save is pressed, the local desktop notices the change and SFTP sends the resulting file back to the server.

    The question is: what signalling mechanisms does SSH provide for this? Any other methods to trigger a local text editor for a remote SSH file? (See the sketch below.)
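
    One existing implementation of exactly this pattern (a hedged pointer, offered as a sketch rather than a full answer): the rmate/rsub convention uses SSH remote port forwarding as the signalling channel. The local editor listens on a TCP port, and ssh -R makes that port reachable from the remote shell, so a tiny remote script can stream the file back and forth and write saves back. Assuming Sublime Text 2 with the rsub package listening on its conventional port 52698:

        # from the local desktop: forward the editor port to the remote host
        ssh -R 52698:localhost:52698 user@server

        # on the remote server: "mooedit" can simply be the rmate script
        mooedit /etc/nginx/nginx.conf    # opens in the local editor; saving writes back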

    Read the article

  • Cannot connect to Solaris Server when Oracle GoldenGate process uses the port

    - by Abdallah Ghrb
    I'm trying to test Oracle GoldenGate replication between two databases on the same server, so I installed two databases and two GoldenGate homes on the same machine. The GoldenGate processes are started from each home and are responsible for the replication: the process from home 1 is configured on port 7809, and the process from home 2 on port 7810. For replication to succeed, processes started from GoldenGate home 1 must communicate with processes started from GoldenGate home 2. But for some reason, this is not happening.

    The GoldenGate log file has the following error:

        OGG-01223 Oracle GoldenGate Capture for Oracle, exthrr.prm: TCP/IP error 131 (Connection reset by peer).

    Googling for this error, it said that the connection occurred but the host terminated it. I tried to telnet the machine on the port in use and got the following:

        bash-3.00$ telnet 10.10.3.124 7810
        Trying 10.10.3.124...
        Connected to 10.10.3.124.
        Escape character is '^]'.
        Connection to 10.10.3.124 closed by foreign host.

    Here the communication occurs, but only for around 3 seconds; then it is closed by the host, which matches the explanation of the error in the GoldenGate log file. I tried changing the port, but got the same error. The problem happens only when the GoldenGate process is using the port; when another process uses the same port, I can telnet successfully.
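
    One hedged thing to verify (standard GoldenGate manager parameters; the port values below follow the setup described above): the manager only listens for initial contact on PORT and hands data connections off to server/collector processes on ports from DYNAMICPORTLIST. If that list is missing or unusable, the peer can accept and then drop the connection, which looks a lot like the telnet behaviour above. In each home's mgr.prm:

        PORT 7810
        DYNAMICPORTLIST 7840-7850
        -- restart the manager after editing (in GGSCI: STOP MGR, then START MGR)

    Note also that a bare telnet to a manager port being closed after a few seconds is not by itself abnormal - the manager drops peers that do not speak its protocol - so the mgr.prm check matters more than the telnet result.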

    Read the article

  • Is current SATA 6 Gb/s equipment simply unreliable?

    - by korkman
    I have a 45-disk array of Seagate Barracuda 3 TB ST3000DM001 drives (yes, these are desktop drives - I'm aware of that) in a Supermicro SC847 JBOD, connected via an LSI 9285. I have found a solution for the problem described below by reducing the link speed via

        MegaCli -PhySetLinkSpeed -phy0 2 -a0
        for i in $(seq 48); do MegaCli -PhySetLinkSpeed -phy${i} 2 -a0; done

    and rebooting. The question remains: is this typical for current 6 Gb/s equipment? Is this the sad state of SATA storage? Or is some of my equipment (the SFF-8088 cables come to mind) bad?

    The problem was: while synchronizing the HW RAID-6, disks kept offlining. Fetching SMART values revealed that those which offlined no longer increased their powered-on hours; that is, their firmware (CC4C) seems to crash. Digging into the matter by switching to software RAID-6, with the disks passed through, I got tons of kernel messages scattered across all disks at 6 Gb/s:

        sd 0:0:9:0: [sdb] Sense Key : No Sense [current]
        Info fld=0x0
        sd 0:0:9:0: [sdb] Add. Sense: No additional sense information

    And finally, when a disk offlines:

        megasas: [ 5]waiting for 160 commands to complete ...
        megasas: [35]waiting for 159 commands to complete ...
        megasas: [155]waiting for 156 commands to complete ...
        megaraid_sas: pending commands remain after waiting, will reset adapter.

    An ugly controller reset here, then minutes later:

        megaraid_sas: Reset successful.
        sd 0:0:28:0: Device offlined - not ready after error recovery
        ...
        sd 0:0:28:0: [sdu] Unhandled error code
        sd 0:0:28:0: [sdu] Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK
        sd 0:0:28:0: [sdu] CDB: Read(10): 28 00 23 21 2f 40 00 00 70 00
        sd 0:0:28:0: [sdu] killing request

    I reduced the speed to 3 Gb/s as described above, and all problems vanished.

    Read the article

  • apt-get: Size mismatch

    - by Cédric Girard
    I created a private deb repository to distribute a piece of software and its updates to 600 Ubuntu netbooks. Each time the network is connected, my script tries to do an apt-get update. But sometimes (quite often, in fact), I get this:

        Failed to fetch https://myserver/ubuntu/dists/maverick/main/binary-i386/voosicomat.deb  Size mismatch

    The server is Apache 2.2, HTTPS only. There are no errors in its logs. Here is the script:

        apt-get update
        apt-get dist-upgrade --force-yes --yes

    Here is the complete output of apt-get:

        Ign https://myserver maverick Release.gpg
        Ign https://myserver/ubuntu/ maverick/main Translation-en
        Ign https://myserver maverick Release
        Ign https://myserver maverick/main i386 Packages/DiffIndex
        Ign https://myserver maverick/main i386 Packages
        Ign https://myserver maverick/main i386 Packages
        Hit https://myserver maverick/main i386 Packages
        Reading package lists...
        Reading package lists...
        Building dependency tree...
        Reading state information...
        The following packages will be upgraded:
          majdb utilitaires voosicomat
        3 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        Need to get 6207kB/6273kB of archives.
        After this operation, 0B of additional disk space will be used.
        WARNING: The following packages cannot be authenticated!
          utilitaires voosicomat majdb
        Get:1 https://myserver/ubuntu/ maverick/main voosicomat all 2.0.1 [4755kB]
        Get:2 https://myserver/ubuntu/ maverick/main majdb all 1.0.17 [1452kB]
        Failed to fetch https://myserver/ubuntu/dists/maverick/main/binary-i386/voosicomat.deb  Size mismatch
        Fetched 7091kB in 21s (324kB/s)
        E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

    Regards
    Cédric
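
    A hedged guess worth ruling out first: "Size mismatch" is often a stale or truncated file left in apt's cache or partial directory by an interrupted fetch, so clearing it before retrying costs little:

        apt-get clean      # empties /var/cache/apt/archives and .../partial
        apt-get update
        apt-get dist-upgrade --force-yes --yes

    If the error persists after a clean, the next suspects are the repository's Packages index being out of sync with the .deb actually on disk, or a proxy between the netbooks and Apache serving truncated responses.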

    Read the article

  • How to prevent samba from holding a file lock after a client disconnects?

    - by Jean-Francois Chevrette
    I have a Samba server (Debian 5.0) configured to host Windows XP profiles. Clients connect to this server and work on their profiles directly on the Samba share (the profile is not copied locally). Every now and then a client may not shut down properly, and thus Windows does not free its file locks. Looking at the Samba locking table, we can see that many files are still locked even though the client is no longer connected. In our case, this seems to occur with lock files created by Mozilla Thunderbird and Firefox.

    Here's an example of the Samba locking table:

        # smbstatus -L | grep DENY_ALL | head -n5
        Pid     Uid     DenyMode   Access    R/W    Oplock           SharePath          Name                                       Time
        ------------------------------------------------------------------------------------------------------------------------------
        15494   10345   DENY_ALL   0x3019f   RDWR   EXCLUSIVE+BATCH  /home/CORP/user1   app.profile/user1.thunderbird/parent.lock  Mon Nov 22 07:12:45 2010
        18040   10454   DENY_ALL   0x3019f   RDWR   EXCLUSIVE+BATCH  /home/CORP/user2   app.profile/user2.thunderbird/parent.lock  Mon Nov 22 11:20:45 2010
        26466   10056   DENY_ALL   0x3019f   RDWR   EXCLUSIVE+BATCH  /home/CORP/user3   app.profile/user3.firefox/parent.lock      Mon Nov 22 08:48:23 2010

    We can see that the files were opened by Windows with a DENY_ALL lock. Now when a client reconnects to this share and tries to open those files, Samba says they are locked and denies access. Is there any way to work around this situation, or am I missing something?

    Edit: We would like to avoid disabling file locks on the Samba server, because there are good reasons to have those enabled.
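
    A hedged smb.conf sketch for expiring dead sessions without disabling locking (all standard Samba 3.x global options; the timeout values are illustrative, not recommendations):

        [global]
           # probe clients so sessions from crashed machines are eventually
           # torn down, releasing their locks
           socket options = TCP_NODELAY SO_KEEPALIVE
           keepalive = 60
           # disconnect sessions that sit idle with no open files
           deadtime = 15
           # when a client reconnects from the same IP, drop its stale sessions
           reset on zero vc = yes

    Of these, "reset on zero vc" is the one aimed squarely at the reconnect-while-locked case; the keepalive settings just shorten how long a vanished client's locks linger.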

    Read the article

  • Network Misconfiguration when adding first host to new vSphere cluster

    - by dunxd
    I am building a new vSphere cluster from scratch. I have installed ESXi on the first host and built a vCenter server on a VM residing on that host (storage is on the local hard drive, although we have iSCSI targets which I can reach from the host). The cluster is configured for HA. When I try to add the host to the cluster, I get an error at the point where HA is configured - "Cannot complete the ."

    I have stripped the host's network configuration down to the most basic: a single NIC attached to a single vSwitch, with the VMkernel port on VLAN 8, our management VLAN. The vCenter server will have a network address on this VLAN, so I also set the initial Virtual Machine Port Group to this VLAN and connected the vCenter server's NIC to that port group.

    I understand I can't connect the vCenter server to the VMkernel port group itself, but shouldn't I be able to connect the vCenter server to a port group on the same VLAN? If not, do I need to create a VLAN specifically for the VMkernel port group? I plan to set up another port group for vMotion with a dedicated and isolated VLAN (i.e. the VLAN isn't routed), so that one wouldn't allow vCenter to communicate.

    Does anyone have any suggestions, or other ideas for what might be causing the problem? I've read through the documentation, but it isn't giving me any pointers, and the error message isn't helping me beyond telling me something is wrong with my network config.
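
    For reference, a VM port group can indeed share a VLAN with the VMkernel port; a hedged sketch of creating one from the host shell (esxcli syntax as of ESXi 5.x; the port group name is hypothetical):

        esxcli network vswitch standard portgroup add --portgroup-name "Mgmt VMs" --vswitch-name vSwitch0
        esxcli network vswitch standard portgroup set --portgroup-name "Mgmt VMs" --vlan-id 8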

    Read the article

  • Keep Windows Installer from using largest drive for temporary files

    - by stefan.at.wpf
    By default, Windows Installer uses the largest drive for temporary storage, regardless of whether that's needed (i.e. even when there would also be enough space on the system drive). Taken from http://msdn.microsoft.com/en-us/library/aa371372%28VS.85%29.aspx:

        During an administrative installation the installer sets ROOTDRIVE to the
        first connected network drive it finds that can be written to. If it is
        not an administrative installation, or if the installer can find no
        network drives, the installer sets ROOTDRIVE to the local drive that can
        be written to having the most free space.

    Now, my system drive is an SSD and my largest drive is a RAID that spins down when it's not used. Remember the SSD as system drive? Everything is silent now - until I install something and Windows Installer wakes up my RAID again, just to put a small .tmp file on it...

    How can I prevent Windows Installer from using the largest drive as temporary storage? Can I maybe set access rights to disallow Windows Installer from writing to my RAID drive? Any other ideas? Thank you!
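
    Per that same MSDN page, ROOTDRIVE is a public property and can therefore be overridden per install; a hedged example for a manual install (the package name is a placeholder):

        msiexec /i somepackage.msi ROOTDRIVE=C:\

    This doesn't help with setup programs that invoke msiexec themselves, but it confirms whether ROOTDRIVE is the property in play before looking at machine-wide workarounds.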

    Read the article

  • SSHing thru an HTTP proxy

    - by Siler
    Typical scenario: I'm trying to SSH through a corporate HTTP proxy to a remote machine using corkscrew, and I get:

        ssh_exchange_identification: Connection closed by remote host

    Obviously, there are a lot of reasons this might happen - the proxy might not allow it, the remote box might not be running sshd, etc. So I tried to tunnel manually via telnet:

        $ telnet proxy.evilcorporation.com 82
        Trying XX.XX.XX.XX...
        Connected to proxy.evilcorporation.com.
        Escape character is '^]'.
        CONNECT myremotehost.com:22 HTTP/1.1

        HTTP/1.1 200 Connection established

    So, unless I'm mistaken... it looks like the connection is working. Why, then, doesn't it work via corkscrew?

        ssh -vvv [email protected] -p 22 -o "ProxyCommand corkscrew proxy.evilcorporation.com 82 myremotehost.com 22"
        OpenSSH_6.6, OpenSSL 1.0.1f 6 Jan 2014
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: /etc/ssh/ssh_config line 19: Applying options for *
        debug1: Executing proxy command: exec corkscrew proxy.evilcorporation.com 82 myremotehost.com 22
        debug1: permanently_set_uid: 0/0
        debug1: permanently_drop_suid: 0
        debug1: identity file /root/.ssh/id_rsa type -1
        debug1: identity file /root/.ssh/id_rsa-cert type -1
        debug1: identity file /root/.ssh/id_dsa type -1
        debug1: identity file /root/.ssh/id_dsa-cert type -1
        debug1: identity file /root/.ssh/id_ecdsa type -1
        debug1: identity file /root/.ssh/id_ecdsa-cert type -1
        debug1: identity file /root/.ssh/id_ed25519 type -1
        debug1: identity file /root/.ssh/id_ed25519-cert type -1
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_6.6p1 Ubuntu-2ubuntu1
        ssh_exchange_identification: Connection closed by remote host
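
    A hedged cross-check that takes corkscrew out of the equation: OpenSSH can use netcat's built-in CONNECT support as the proxy command (the -x/-X flags are standard in OpenBSD netcat; the user name is a placeholder):

        ssh -o 'ProxyCommand nc -X connect -x proxy.evilcorporation.com:82 %h %p' user@myremotehost.com

    If this fails the same way, the likely culprit is the proxy inspecting and dropping the tunneled traffic after the initial 200 response, rather than corkscrew misbehaving.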

    Read the article

  • Measure Total Bandwidth for Billing

    - by TonyZ
    I am setting up a new network on which customers will host their applications. It needs to scale out to a few hundred servers, and each server will have several VMs on it. Right now in my test environment, after the telco router, we are using a Linux router/firewall which is connected to a layer 2 switch (it could be a layer 3 switch in the future).

    I need to track total bandwidth per VM for each machine, and I need to do it in a way that is not part of the VM. Each VM will have a private IP address which is NATted by the gateway, or we may eventually run more than one firewall/reverse proxy off a layer 3 switch. So my thinking is that I can measure either off a promiscuous port on the switches, or at the gateway firewall.

    I would like an out-of-the-box solution, preferably open source. Does anyone have suggestions on the easiest way to set this up, and the easiest tool to use? I have looked at the web sites for Nagios, Zenoss, Zabbix, ntop on the firewall, etc., but it is hard to ascertain from the web sites alone whether they do exactly this. Obviously, performance is also somewhat key here: anything running on the gateway should not drag it down doing traffic accounting.

    Thanks for any thoughts.
    Tony Zakula
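
    For scale, a hedged sketch of the simplest gateway-side approach - per-VM iptables counters, which cost almost nothing per packet and can be scraped into any billing system (the VM address is hypothetical):

        # iptables rules with no jump target only count traffic and let it continue
        iptables -A FORWARD -s 10.0.5.21    # outbound bytes from this VM
        iptables -A FORWARD -d 10.0.5.21    # inbound bytes to this VM

        # read the exact byte counters (e.g. from cron); -Z resets after reading
        iptables -nvxL FORWARD

    The NetFlow-based tools mentioned above (ntop and friends) give richer per-flow detail, but plain counters may be all that billing actually needs.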

    Read the article
