Search Results

Search found 22224 results on 889 pages for 'point of sale'.

  • Setting a subdomain to access home machine with windows remote desktop

    - by ianhales
    I'm trying to remotely connect to my home machine through Windows Remote Desktop (amongst other things, but this is currently my primary focus). I can do this fine using my home WAN's static IP (thank god for cable!) with port forwarding, but I would like to access it from a subdomain of my web site (e.g. home.mydomain.co.uk). In the cPanel for my hosting account, I've gone into DNS zones and altered the A record to point at my WAN's IP, which I thought should do the job, but I still cannot connect. When I ping the subdomain, I get my web host's IP, which I guess is to be expected, as I believe the host domain's DNS is consulted first and then my server handles the redirection of traffic to the IP in the A record. Is this the correct idea? Do A-record changes suffer from the same propagation delays as other DNS changes? I suppose that could explain it. (By the way, this thread confirms my thought that setting the A record should be enough: Hostmonster Subdomain redirected to home server IP: How to ssh into home server using subdomain.)
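    A quick way to check where the name actually resolves is to query DNS directly; a minimal sketch (the hostname and nameserver below are placeholders for your own):

        # what the resolver you normally use sees
        dig +short home.mydomain.co.uk A

        # ask a public resolver, to rule out local caching
        dig +short home.mydomain.co.uk A @8.8.8.8

        # ask the domain's authoritative nameserver directly (no propagation delay)
        dig +short home.mydomain.co.uk A @ns1.yourhost.example

    If the authoritative server already returns your WAN IP while the others still return the web host's IP, it is propagation/TTL; if even the authoritative answer is the web host's IP, the A-record change hasn't actually taken effect.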

  • How to correctly set GNU Screen to display currently running program in hardstatus

    - by johnny_bgoode
    I posted this question on SuperUser but it's hardly getting any views, so I thought I'd ask here as well. In bash, displaying the name of the current program in the GNU Screen hardstatus line takes only two configuration lines. First, tell screen what the end of your prompt normally looks like, and supply a default title for a window when you are sitting in the shell:

        shelltitle "$ |bash"

    Next, place this escape sequence in the PS1 variable, before the characters that normally terminate the prompt ('$ ' in this case):

        \033k\033\\

    This technique works, to a point. The hardstatus window title is updated to the name of the currently running program, and then switches back to the default title shortly after execution is finished. One major problem, however, is that this escape string is not itself escaped, causing line-wrapping problems with commands longer than the initial line. This was annoying, so I set out looking for a solution. It turns out that simply escaping the previous escape sequence corrects the line wrapping:

        \[\033k\]\[\033\\\]

    Great! My hardstatus window title still updates to the name of the currently running program, and now my longer commands wrap to the second line correctly. However, with this new escape sequence in my PS1, screen updates the window title to the actual command I am typing, not simply the name of the current program once it is executed. Has anyone gotten this working correctly - i.e. both line wrapping and proper updating of the hardstatus window title? Thanks!
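    Put together, a minimal sketch of the two pieces (the surrounding prompt and the hardstatus line are illustrative assumptions, not from the post above):

        # ~/.screenrc - recognize an idle shell prompt and show window titles
        shelltitle "$ |bash"
        hardstatus alwayslastline "%n %t"

        # ~/.bashrc - embed the title escape just before the final '$ ' of the
        # prompt; \[ and \] mark it zero-width so bash wraps long lines correctly
        PS1='\u@\h:\w\[\033k\033\\\]$ '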

  • Limited user requires admin rights for plug and play printer?

    - by Kalamane
    I have a small fleet of laptops, not part of a domain, running Windows XP Pro SP3 as limited users. They are used for printing different shipping documents. I have a script that runs at startup which uses devcon and prntmngr to detect and install/configure the currently connected USB printers. This lets us deploy the laptops to any printing station with a USB printer and have the printer 'just work' for the end-user employee. I've taken the original clone image and have added functionality to it. Since then I've discovered a bit of an issue with HP LaserJet P1606dn printers: they have started asking for admin rights during setup, both with and without the script running. Previously they would install automatically, because I had installed WHQL plug-and-play drivers for them. I thought it might have to do with the HP Smart Install utility, but it happens even when that is disabled. I don't have a good point to roll back to before this started happening, because the issue was already present on the image I took at the start of this upgrade. What could be causing this?

  • Cannot get at data in my NAS

    - by Ben
    I've got a bit of an issue that I'm hoping you can help me with. I have an Iomega ix4 as my NAS. This runs Linux and each drive in the box has 2 partitions: one for the OS and RAID info, and the second for the actual data. I had it configured as RAID5. Recently one of the drives failed. At this point all of the data was available, it was just reporting a failed drive. I had a drive of the same capacity (although not the exact same spec) which I swapped in place of the failed drive. It recognised it, and started to rebuild the data protection. So far so good ... or so I thought. The next day, after data protection had finished reconstructing, the NAS was telling me that 4 new drives had been added, and wanted confirmation to overwrite the data. Obviously I declined to do this. I swapped the failed drive back in again, in the hope that it would return to its previous state of the data being accessible, but one failed disk. However it didn't - it still tells me that the NAS has 4 new drives in it. I am hopeful that the actual data is untouched, so what I need to do is get it to rebuild the RAID without touching the data on the disks. I have ssh access, and have run stuff like mdadm --examine to see what I can find. The mdadm.conf file has no entry in the "definitions of existing MD arrays" section. I have not run any actual rebuilding commands as yet, because this is entering an area which I am out of my depth in. Please can someone advise the best way of getting my data? Thanks.
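    For what it's worth, the usual first step here is read-only reconnaissance before attempting any rebuild; a hedged sketch (the device names are assumptions - substitute the data partitions your box actually reports):

        # inspect the RAID superblock on each data partition (read-only)
        for d in /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2; do
            mdadm --examine "$d"
        done

        # if the superblocks agree on the array, try a non-destructive assemble
        mdadm --assemble --scan --verbose

        # or name the members explicitly; --force may be needed after an unclean
        # shutdown, but never use --create on an array you are trying to recover
        mdadm --assemble --force /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2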

  • Getting a D-Link DWL-G122 to work on Ubuntu

    - by User1
    I have a USB WiFi adapter, a D-Link DWL-G122, and I'm running Ubuntu 10.04. My laptop has a built-in wireless card that connects to the router fine; the USB adapter, however, never really connects. Here are some details:

        $ iwconfig
        wlan1   IEEE 802.11bg  ESSID:"\x0B\xE1..."
                Mode:Managed  Frequency:2.457 GHz  Access Point: Not-Associated
                Tx-Power=19 dBm  Retry long limit:7  RTS thr:off  Fragment thr:off
                Power Management:on

        $ lshw -c network
        *-network:1
             description: Wireless interface
             physical id: 3
             logical name: wlan1
             serial: 00:13:46:8b:xx:xx
             capabilities: ethernet physical wireless
             configuration: broadcast=yes multicast=yes wireless=IEEE 802.11bg

        $ dmesg
        [ 1096.814176] wlan1: direct probe to AP xxx (try 1)
        [ 1096.820960] wlan1: direct probe responded
        [ 1096.820969] wlan1: authenticate with AP xxx (try 1)
        [ 1096.823790] wlan1: authenticated
        [ 1096.823869] wlan1: associate with AP xxx (try 1)
        [ 1096.827667] wlan1: RX AssocResp from xxx (capab=0x411 status=0 aid=1)
        [ 1096.827674] wlan1: associated
        [ 1142.590912] wlan1: deauthenticating from xxx by local choice (reason=3)

        $ lsmod | grep rt2
        rt2500usb   19643  0
        rt2x00usb   11260  1 rt2500usb
        rt2x00lib   32133  2 rt2500usb,rt2x00usb
        mac80211   238896  3 ath5k,rt2x00usb,rt2x00lib
        cfg80211   148725  4 ath5k,ath,rt2x00lib,mac80211
        led_class    3764  3 ath5k,rt2x00lib,sdhci

    It looks like the driver loads but the adapter won't stay connected. The behavior is identical even if I blacklist the other WiFi card (which uses the ath5k driver). It's almost as if it were using the wrong password or something. Does anyone know what is happening? Has anyone gotten this adapter working on Ubuntu?
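    The associate-then-"deauthenticating ... by local choice (reason=3)" pattern often shows up when the WPA handshake never completes, which fits the "wrong password" feeling. One way to take the network manager out of the equation is to bring the link up by hand; a hedged sketch (SSID and passphrase are placeholders):

        # write a minimal wpa_supplicant config for the network
        wpa_passphrase "MySSID" "my-passphrase" | sudo tee /tmp/wpa-test.conf

        # stop NetworkManager so it doesn't fight over the interface
        sudo service network-manager stop

        # associate on wlan1 and watch the handshake in the debug output
        sudo wpa_supplicant -i wlan1 -c /tmp/wpa-test.conf -d

        # in another terminal, once associated, request an address
        sudo dhclient wlan1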

  • DBCC CHECKDB fails and quits the job with an ambiguous error message

    - by ddono25
    I received a notice that DBCC CHECKDB for all databases on one of our servers has been failing the past four times it has run. We don't have any data prior to that, but it doesn't look like it has been succeeding for a while. There are no errors in the log file, only:

        DBCC results for 'sys.sysxmlfacet'. [SQLSTATE 01000]
        Msg 0, Sev 0, State 1: Unspecified error occurred on SQL Server.
        Connection may have been terminated by the server. [SQLSTATE HY000]
        There are 112 rows in 1 pages for object "sys.sysxmlfacet". [SQLSTATE 01000]

    I ran DBCC CHECKDB using sp_MSForEachDB to get more accurate results and got the same error on the same database, but at a different point:

        DBCC results for 'NameValuePair_Greek_CI_AS'. [SQLSTATE 01000]
        Msg 0, Sev 0, State 1: Unspecified error occurred on SQL Server.
        Connection may have been terminated by the server. [SQLSTATE HY000]
        There are 0 rows in 0 pages for object "NameValuePair_Greek_CI_AS". [SQLSTATE 01000]

    Also, the error log states that DBCC completed without errors for this database. I can't figure out how to track down this ambiguous issue, which only happens on this database out of the dozens on this server. Any help is appreciated!

  • Logging Remote Server Access via Remote Desktop

    - by Nate Bross
    The objective here is to start a simple .NET application I've written which captures some environment variables (time, username, computername, etc.) upon login. This .NET application subscribes to the Windows "user logout" event. Upon launch, the application captures the above variables and creates a record in my database; upon logout (which I'm capturing) it updates another field in the same record with the logout time. The above is working exactly as I would like: when I launch the binary, it makes its initial log entry, then waits for the logout event and updates the same record. Restrictions: the .NET binary should be able to live on a share (\\server\share\myapp\v1) so I can update the application to (\\server\share\myapp\v2) and simply update the GPO/logon script. My initial thought was to use the \\domaincontroller\sysvol\ directory to store the binary and then update all user accounts to include a call to my application. Can you see any flaws in this approach? My question is this: first, is there anything wrong with my idea above? Second, if so, what is the best way (through group policy or otherwise) to ensure this application launches whenever a session is started on a server?

  • New to server admin, Diagnosing Memory and CPU issues on DV

    - by G Thompson
    Sorry for my ignorance and lack of knowledge. I'm a PHP/front-end developer just now venturing into very minor server management and diagnostics. I have a Media Temple DV account, with 2 sites that run a PHP script through a subscription service against an API. Basically the API hits the site with said script; the script runs, gathers data from the API, and saves it to a SQL database. I noticed that these sites seemed to be causing memory overages on my server (not sure why), so I temporarily disabled them. The memory overage alerts stopped, but my CPU still sits really high, at 115% and above. I'm trying to diagnose this with tutorials and resources but just can't seem to find a solution. I'll attach screenshots (taken without the PHP scripts I assume are responsible for the memory issues) that I assume are important to the diagnosis, but if anyone can point me in the right direction to start (a) figuring out if and why the PHP script may be causing memory overages, and (b) why my CPU is always over 100%, I'd appreciate it. Thanks guys! Links to screenshots (can't post images with low points): http://i.stack.imgur.com/A64k4.png http://i.stack.imgur.com/qm1rV.png
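    A reasonable first step on any Linux box is to see which processes actually hold the CPU and memory; a minimal sketch:

        # processes sorted by CPU use, then by resident memory
        ps aux --sort=-%cpu | head -n 15
        ps aux --sort=-rss | head -n 15

        # live view: press Shift+P to sort by CPU, Shift+M by memory
        top

        # on a Virtuozzo-based container (like a Media Temple DV), the beancounters
        # show which resource limits are being hit - watch the failcnt column
        cat /proc/user_beancounters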

  • Verification of downloaded package with rpm

    - by moooeeeep
    I wanted to install a package on CentOS 6 via rpm (e.g., the current epel-release). EDIT: Of course I would always prefer installation via yum, but somehow I failed to get this specific package installed using the normal approach; as such, the EPEL FAQ recommends Version 2 below. Since I'm downloading the package through an insecure channel (http), I want to make sure the integrity of the file is verified using information that is not provided with the downloaded file itself. I've seen various approaches to this on the internet:

    Version 1:

        rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-7.noarch.rpm

    Version 2:

        rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-7.noarch.rpm

    Version 3:

        wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-7.noarch.rpm
        rpm --import https://fedoraproject.org/static/0608B895.txt
        rpm -K epel-release-6-7.noarch.rpm
        rpm -i epel-release-6-7.noarch.rpm

    I do not know rpm very well, so I wondered how these differ. My guess (after reading the manpage) is that the first should only be used when the package is not previously installed, the second would additionally remove previous versions of the package after installation, and the first two omit some verification steps that rpm -K performs before the actual installation. My main questions at this point are: Are my guesses correct, or am I missing something? Is the rpm --import ... implicitly done for the first two approaches as well, and if not, isn't it necessary to do so after all? Are the additional checks performed by rpm -K at all relevant, and do all of these approaches actually verify integrity independently of the downloaded file? What is the best (most secure, most reliable, most maintainable) way of installing packages via rpm in general?
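    For reference, a flow that verifies the signature before anything is installed might look like this (package and key URLs copied from the question; ideally the key fingerprint is verified out of band):

        # fetch the package, and the signing key over https
        wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-7.noarch.rpm
        rpm --import https://fedoraproject.org/static/0608B895.txt

        # -K (--checksig) verifies digests and the GPG signature against
        # imported keys; only install if the check succeeds
        rpm -K epel-release-6-7.noarch.rpm && rpm -Uvh epel-release-6-7.noarch.rpm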

  • How to setup apache to catch a proxy_pass from nginx?

    - by Paté
    I have a working apache vhost setup such as:

        <VirtualHost localhost:10006>
            DocumentRoot "/home/pate/***/git/kohana_site/public/site/"
        </VirtualHost>

        <VirtualHost *:10006>
            ServerName api.*
            DocumentRoot "/home/pate/***/git/kohana_site/public/api/"
            LogLevel debug
        </VirtualHost>

    If I point at localhost:10006 I get my website, and at api.localhost:10006 I get my API. Then I have haproxy set up on top of that, running on port 10010, and both localhost:10010 and api.localhost:10010 show the expected behaviour. Now I have nginx set up in front of that, with this configuration:

        server {
            listen 10000;
            server_name api.*;
            location / {
                proxy_pass http://legacy_server;
            }
        }

        server {
            listen 10000 default;
            server_name _;
            location /nginx_status {
                stub_status on;
                access_log off;
            }
            # images are accessed via the CDN over HTTP (not https)
            location /n/image {
                proxy_pass http://image_caching_server;
            }
            location / {
                return 301 https://$host:10014$request_uri;
            }
        }

        upstream legacy_server {
            server localhost:10010 fail_timeout=0;
        }

    The problem is that apache does not recognize the vhost properly and redirects api.localhost to the website instead of the API. I tried playing with proxy_set_header Host $host but it doesn't seem to do anything.
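    One way to see where the Host header gets lost is to ask each layer directly; a small sketch (ports taken from the question, run on the box itself):

        # apache directly, with the Host header the API vhost expects
        curl -s -o /dev/null -w '%{http_code}\n' -H 'Host: api.localhost' http://localhost:10006/

        # through haproxy, then through nginx
        curl -s -o /dev/null -w '%{http_code}\n' -H 'Host: api.localhost' http://localhost:10010/
        curl -s -o /dev/null -w '%{http_code}\n' -H 'Host: api.localhost' http://localhost:10000/

    If the first two reach the API but the last does not, the Host header is being dropped at the nginx hop, which is what proxy_set_header Host $host; inside the proxying location block is meant to fix.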

  • crontab not running on VirtualBox unless I'm logged in

    - by Mike
    I am running Ubuntu Server 9.04 in VirtualBox on my work PC as a development environment. I have some scripts in my user's crontab that run throughout the day while I'm SSHed into the VM. Last night, I closed PuTTY and all of my other running applications (except for VirtualBox and the VM) and went home. I came back this morning to discover that my cron jobs didn't run at all, yet when I SSHed into the VM, the next scheduled job ran. I set the schedule to every 5 minutes to test, disconnected again, and the jobs stopped running on schedule. They seem to run only if I'm logged in to the machine. Obviously, I want them to run on schedule even if I'm not logged in to the VM, otherwise there's no point. Is there something I've failed to configure correctly? New information: there are now 3 entries in /var/log/cron.log saying "Mount of private directory return code [256]", and the entries correspond to when the cron jobs were supposed to run. Aren't the jobs supposed to run as my userid? Why would my own userid be unable to run a script in my home directory?
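    For what it's worth, "Mount of private directory return code [256]" is the message logged when an encrypted (ecryptfs) private directory cannot be mounted - and an encrypted home or Private directory is only mounted while its owner is logged in, which would explain jobs running only during SSH sessions. If that is the case here, one workaround is to keep the scripts outside the encrypted area; a hedged sketch (paths and names are placeholders):

        # move the job out of the (encrypted) home directory
        sudo mkdir -p /usr/local/lib/cronjobs
        sudo cp ~/bin/myjob.sh /usr/local/lib/cronjobs/

        # point the crontab entry at the new location
        crontab -l | sed 's#/home/myuser/bin/myjob.sh#/usr/local/lib/cronjobs/myjob.sh#' | crontab -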

  • How to install (old) packages for Ubuntu 9.04?

    - by wchrisjohnson
    Based on some excellent feedback by Mark here (http://serverfault.com/questions/285598/should-i-clone-a-physical-server-to-create-a-vm-for-a-staging-server), today I was able to use the VMware converter to clone my production server for a staging server. However, the NIC won't come up no matter what I do. I attempted to install VMware Tools, as I suspect that its absence might be what is keeping the NIC from working (I have the NIC set as a vmxnet3 card in the VM settings). The install failed because several dependencies, as well as the Linux headers, were missing. Given that Ubuntu 9.04 has been EOL'd, the packages I need to get VMware Tools to install are no longer available from the standard archives, and I doubt the Ubuntu 9.04 install CD has them on it. What are my options? I'd rather not upgrade the version of Ubuntu yet, as the point of the VM right now is to maintain parity with the production server. Might I have better luck switching the driver to vmxnet2 instead of vmxnet3? Thanks in advance! Chris
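    One note: packages for EOL'd Ubuntu releases are kept on old-releases.ubuntu.com, so apt can usually still install them once sources.list is repointed; a minimal sketch for 9.04 ("jaunty"):

        # repoint apt at the old-releases archive
        sudo sed -i 's/\(archive\|security\)\.ubuntu\.com/old-releases.ubuntu.com/g' /etc/apt/sources.list
        sudo apt-get update

        # headers and build tools the vmware-tools installer typically needs
        sudo apt-get install build-essential linux-headers-$(uname -r)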

  • Linux iptables / conntrack performance issue

    - by tim
    I have a test setup in the lab with 4 machines:

        - 2 old P4 machines (t1, t2)
        - 1 Xeon 5420 DP 2.5 GHz, 8 GB RAM (t3), Intel e1000
        - 1 Xeon 5420 DP 2.5 GHz, 8 GB RAM (t4), Intel e1000

    to test Linux firewall performance, since we got bitten by a number of syn-flood attacks in the last months. All machines run Ubuntu 12.04 64bit. t1, t2, and t3 are interconnected through a 1 GB/s switch; t4 is connected to t3 via an extra interface. So t3 simulates the firewall, t4 is the target, and t1 and t2 play the attackers, generating a packet storm through t3 (192.168.4.199 is t4):

        hping3 -I eth1 --rand-source --syn --flood 192.168.4.199 -p 80

    t4 drops all incoming packets to avoid confusion with gateways, performance issues of t4 itself, etc. I watch the packet stats in iptraf. I have configured the firewall (t3) as follows:

        - stock 3.2.0-31-generic #50-Ubuntu SMP kernel
        - rhash_entries=33554432 as a kernel parameter
        - sysctl settings:

            net.ipv4.ip_forward = 1
            net.ipv4.route.gc_elasticity = 2
            net.ipv4.route.gc_timeout = 1
            net.ipv4.route.gc_interval = 5
            net.ipv4.route.gc_min_interval_ms = 500
            net.ipv4.route.gc_thresh = 2000000
            net.ipv4.route.max_size = 20000000

    (I have tweaked a lot to keep t3 running when t1+t2 are sending as many packets as possible.) The results of these efforts are somewhat odd:

        - t1+t2 manage to send about 200k packets/s each. t4 in the best case sees around 200k in total, so half of the packets are lost.
        - t3 is nearly unusable on the console even though packets are flowing through it (high numbers of soft-irqs).
        - The route cache garbage collector is nowhere near predictable, and with the default settings it is overwhelmed by very few packets/s (<50k packets/s).
        - Activating stateful iptables rules makes the packet rate arriving on t4 drop to around 100k packets/s, effectively losing more than 75% of the packets.

    And this - here is my main concern - with two old P4 machines sending as many packets as they can, which means nearly everyone on the net should be capable of this. So here goes my question: did I overlook some important point in the config or in my test setup? Are there alternatives for building a firewall system, especially on SMP systems?
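    Not an answer to the measurement question itself, but for the syn-flood case specifically a few well-known knobs are worth having in the baseline; a hedged sketch (values are illustrative, not tuned):

        # SYN cookies let the kernel answer floods without keeping backlog state
        sysctl -w net.ipv4.tcp_syncookies=1

        # raise the conntrack table and hash so stateful rules don't start
        # dropping under load (the defaults are tiny relative to flood rates)
        sysctl -w net.netfilter.nf_conntrack_max=1048576
        echo 262144 > /sys/module/nf_conntrack/parameters/hashsize

        # or skip connection tracking for the flooded service entirely
        iptables -t raw -A PREROUTING -p tcp --dport 80 -j NOTRACK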

  • Ubuntu 11.04 fresh install - "Input signal out of range" or "Mode not supported..."

    - by Dennis
    I recently installed Ubuntu 11.04 using a CD .iso. Installation went fine. Upon completion I rebooted and after a second or two I got a black screen with the message "Input signal out of range". And there it sits... Read a few things about how this could be related to screen resolution, refresh rate, etc. For the heck of it I tried a different monitor. The result is the same but the message provides some clues - "Mode not supported - H:92.7kHz, V:58.3Hz" (the latter is Hz; not kHz). So my thought is that I should probably be able to use the 11.04 install disc to "Try out Ubuntu", find and edit some file that was created by the install with the correct values. Problem is, I am not too sure what I am supposed to edit. Looked at the xorg.conf file but this is so minimal at this point I am not sure it is where I want to go. By the way, the monitor is an I-Inc ix191a. Anyone have any ideas on how to get around this?

  • Manual HTTP error response code in a non-existent folder via routing

    - by Slytherin
    Apache server running on Ubuntu-like Linux. I am getting unexpected behaviour when I try to manually send an error response. If my .htaccess is responsible for the error response, then the appropriate error document is loaded and displayed, with the corresponding response code in the browser console. However, if my router is the origin of the response code, then I get a blank screen, but the correct response code. My .htaccess looks like this:

        RewriteEngine On
        # RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule !\.(css|js|icon|zip|rar|png|jpg|gif|pdf)$ index.php [L]

        ErrorDocument 404 /err/404.html
        ErrorDocument 403 /err/403.html
        ErrorDocument 500 /err/500.html

    The part of my router that sends the response is the following:

        header("HTTP/1.1 403 Forbidden");

    Trying this format didn't help either:

        header("HTTP/1.1 403 Forbidden", TRUE, 403);

    I also tried HTTP/1.0. Furthermore, I was thinking that maybe the relative path to the error page might be the issue, but I discarded this idea after attempting to access a document that is forbidden via .htaccess. EDIT: I should also point out that this scenario happens when a URL for a non-existent article is requested. Is it possible that the server is looking for a .htaccess file in a folder based on the URL? E.g. for domain/blog/non-existent, is the server looking for a blog folder? I am specifically asking this because there is no blog folder.

  • AT&T Filtering FTP traffic?

    - by xpda
    Using AT&T DSL, I cannot upload or download a few files out of a large set of about 1500 via FTP. The problem is the file name. If I change a few characters of the file name, it uploads fine. If I change the file name from upper to lower case, it uploads fine. If I change back to the original file name, it will not upload again. When it doesn't upload, it starts, transfers about 5% of a 5-10 MB file, and then times out. I have uploaded one of the files under a different name, changed the name back to the original, and it will not download via FTP. It will download in a browser, and it downloads via FTP just fine under a different name - it just will not download via FTP under the original name. I have reproduced this uploading to three different servers on 1and1 and Amazon EC2. When I try it from a non-AT&T ISP client, it works OK. Here is a file that did not upload until I renamed it (I have since changed it back to the original name): "http://xpda.com/nautnew/11302 STOVER POINT TO PORT BROWNSVILLE SIDE A.png" This problem is unrelated to connection, speed, and file content. The only things I can see that make a difference are the file name and AT&T DSL. Does AT&T do some kind of FTP file-name filtering? Is there anything else that could cause this behavior?

  • Maximum file size in an indexed filesystem - what is the correct approach?

    - by jocco
    Hello all. I'm new to the site, and I have a question: I got this question on a test and would really like to know the correct approach to solving it. Here is the question. In an indexed filesystem the first indexblock (inode) has 12 direct pointers and 1 pointer to an indirect indexblock. The filesystem is implemented on a disk with a diskblock size of 1024 bytes. All pointers are 32 bit. Question: what is the maximum file size (in kilobytes) on this filesystem? If possible, I'd like an explanation, not just an answer. EDIT: It was multiple choice, by the way, with 4 answers:

        a. 13 K
        b. 268 K
        c. 524 K
        d. 1036 K

    As for my approach, I only got as far as knowing that 1 pointer is 32 bits. I also found something else here on the site which seems very useful: http://stackoverflow.com/questions/2755006/understanding-the-concept-of-inodes. OK, I got this far: there are 12 direct blocks and each block is 1024 bytes, so 1024 * 12 = 12288 bytes, or 12 KB, directly accessible. Please correct me if I'm wrong. Each pointer is 32 bits = 4 bytes. And to be honest, at this point I'm starting to get confused, especially since my figure doesn't match any of the multiple choice answers.
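    For what it's worth, the arithmetic the question is driving at (a sketch of the standard reasoning, given 12 direct pointers, one single-indirect pointer, 1024-byte blocks, and 4-byte pointers) works out as:

        \text{direct: } 12 \times 1024\,\text{B} = 12\,\text{KB}
        \text{pointers per indirect block: } 1024\,\text{B} / 4\,\text{B} = 256
        \text{indirect: } 256 \times 1024\,\text{B} = 256\,\text{KB}
        \text{maximum file size: } 12\,\text{KB} + 256\,\text{KB} = 268\,\text{KB}

    which corresponds to answer b.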

  • Batch-Remove Specific Text from Photo Descriptions

    - by David
    I've recently upgraded to iPhoto '11 (couldn't resist the pricing on the new app store) and as I'm adding more meta-data to my library and generally organizing things (places, faces, etc... I hadn't upgraded since '08) I've noticed something odd in my photos. Every photo in my library has a description (though many are short), but it would appear that somehow the description of one of the photos has been appended to many. I don't know if maybe I accidentally screwed up a batch change at some point in the past, or if the library upgrade somehow messed up, or what else may have happened. But what I need to do is fix these somehow. Now, manually editing is something of a daunting task. Within a library of 21,248 photos, 18,858 of them have this additional text. The one thing I do have going for me is that it's a specific string. If there's a way to "remove this string from everywhere in the library without removing the rest of any given description" then that would be perfect. Is there anything I can do like this? Maybe even manually editing a library file in a text editor? (Would that break anything else in iPhoto if its library was edited outside of the application, even while it's not running?) Does anybody have any ideas?

  • Debian: Unable to mount a second drive as a subdirectory inside another partition

    - by jkndrkn
    Hello. I have the following /etc/fstab:

        # /etc/fstab: static file system information.
        #
        # <file system>  <mount point>      <type>       <options>                   <dump> <pass>
        proc             /proc              proc         defaults                    0      0
        /dev/md1         /                  ext3         defaults,errors=remount-ro  0      1
        /dev/md0         /boot              ext3         defaults                    0      2
        /dev/md5         /home              ext3         defaults                    0      2
        /dev/md3         /opt               ext3         defaults                    0      2
        /dev/md6         /tmp               ext3         defaults                    0      2
        /dev/md2         /usr               ext3         defaults                    0      2
        /dev/md4         /var               ext3         defaults                    0      2
        /dev/md7         none               swap         sw                          0      0
        /dev/sdc         /home/httpd        ext3         defaults                    0      2
        /dev/hda         /media/cdrom0      udf,iso9660  user,noauto                 0      0
        /dev/sdc1        /mnt/usb/backup-1  auto         defaults                    0      0

    I am unable to get /dev/sdc to mount at /home/httpd on reboot. The /home/httpd directory exists. Mounting via mount -t ext3 /dev/sdc /home/httpd works just fine. Mounting via mount -a generates the following error message:

        mount: you must specify the filesystem type

    This is, incidentally, the same message that I see while booting. The error message goes away if I comment out the line in fstab starting with /dev/sdc.
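    One thing that stands out in the fstab above is that both the whole disk /dev/sdc and the partition /dev/sdc1 are listed, and only one of those nodes can actually carry the ext3 filesystem. A quick way to check which one holds it (and to make the entry robust against device renaming):

        # show which device node carries a filesystem signature
        blkid /dev/sdc /dev/sdc1

        # whichever node reports TYPE="ext3" is the one fstab should mount;
        # referencing it by UUID also survives device-name reshuffles, e.g.:
        #   UUID=<uuid-from-blkid>  /home/httpd  ext3  defaults  0  2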

  • Reverse SSH tunnel: how can I send my port number to the server?

    - by Tom
    I have two machines, Client and Server. Client (who is behind a corporate firewall) opens a reverse SSH tunnel to Server, which has a publicly-accessible IP address, using this command: ssh -nNT -R0:localhost:2222 [email protected] In OpenSSH 5.3+, the 0 occurring just after the -R means "pick an available port" rather than explicitly calling for one. The reason I'm doing this is because I don't want to pick a port that's already in use. In truth, there are actually many Clients out there that need to set up similar tunnels. The problem at this point is that the server does not know which Client is which. If we want to connect back to one of these Clients (via localhost) then how do we know which port refers to which client? I'm aware that ssh reports the port number to the command line when used in the above manner. However, I'd also like to use autossh to keep the sessions alive. autossh runs its child process via fork/exec, presumably, so that the output of the actual ssh command is lost in the ether. Furthermore, I can't think of any other way to get the remote port from Client. Thus, I'm wondering if there is a way to determine this port on Server. One idea I have is to somehow use /etc/sshrc, which is supposedly a script that runs for every connection. However, I don't know how one would get the pertinent information here (perhaps the PID of the particular sshd process handling that connection?) I'd love some pointers. Thanks!
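    For what it's worth, OpenSSH prints "Allocated port NNNNN for remote forward ..." on the client's stderr when -R 0 is used, and that line can be scraped and pushed back to the server so the server ends up with a hostname-to-port map; a hedged sketch (not autossh-ready; user@server and the ~/tunnels path are placeholders):

        #!/bin/bash
        # open the reverse tunnel in the background, capturing ssh's stderr
        LOG=$(mktemp)
        ssh -nNT -R 0:localhost:2222 user@server 2>"$LOG" &
        SSH_PID=$!

        sleep 5   # crude: give ssh time to report the allocated port
        PORT=$(sed -n 's/^Allocated port \([0-9]\+\) for remote forward.*/\1/p' "$LOG")

        # register this client's hostname and port in a file on the server
        echo "$(hostname) $PORT" | ssh user@server 'cat >> ~/tunnels'

        wait $SSH_PID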

  • OpenVZ container is running but does not show in vzlist nor can I find the private/conf files for the container

    - by Kakeakeai
    I was creating a new OpenVZ container on one of our VPS nodes when the power went out for that machine. After bringing the machine back online I could no longer access the container CTID=101. I cannot destroy it using "vzctl destroy 101", I cannot enter or control it, and "vzlist -a" does NOT display any containers at all (this was a fresh node and the first container was being created). I decided to create a new container at this point, assuming that the old container just hadn't been saved for some reason. However, when I went to add the IP/host to the new container, I got a warning that the IP is already in use. After doing a ping to the IP I realized there was a machine on it. I SSHed into that machine and discovered it is the OLD container, somehow orphaned. I cannot find it on the filesystem, I cannot find it using VZ commands, and it is set to start on node boot, so it is impossible to shut down (even SSHing in and typing the "shutdown now" command just reboots the container rather than shutting it down). Is this a flaw in OpenVZ or am I missing something? I have all the outputs and logs if needed. Thank you all so much in advance.
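    A few read-only places to look for traces of the orphaned container; a hedged sketch (standard OpenVZ paths assumed - they can differ between installs):

        # running containers as the kernel sees them, regardless of vzlist
        cat /proc/vz/veinfo

        # where the container's filesystem is mounted or stored on disk
        grep 101 /proc/mounts
        ls -d /vz/private/101 /vz/root/101 /etc/vz/conf/101.conf 2>/dev/null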

  • iptables logging not working?

    - by vps_newcomer
    OS: Ubuntu 10.04. Logging daemon: rsyslog. For some reason I'm not getting any iptables logs, and even though I don't look through them very often, I'd still like to get this working. Here is my /etc/rsyslog.d/iptables.conf:

        :msg, contains, "[IPTABLES]" -/var/log/iptables.log
        & ~

    My iptables logging prefix is "[IPTABLES]" followed by whatever else (for example, "[IPTABLES] Denied xyz"). The /var/log/iptables.log file is being created, but it gets no entries. I can see the logging entries in dmesg but not in syslog or messages. What's going on? EDIT: My iptables logging rules:

        # logging limit
        LoggingLimit=5/min
        LoggingPrefix=IPTABLES

        # logging chain
        iptables -N LOG_REJECT
        iptables -A LOG_REJECT -j LOG

        # join INPUT to LOG_REJECT
        iptables -A INPUT -j LOG_REJECT

        # logging
        iptables -A LOG_REJECT -p tcp -m limit --limit $LoggingLimit -j LOG --log-prefix "$LoggingPrefix Denied TCP: " #--log-level 7
        iptables -A LOG_REJECT -p udp -m limit --limit $LoggingLimit -j LOG --log-prefix "$LoggingPrefix Denied UDP: " #--log-level 7
        iptables -A LOG_REJECT -p icmp -m limit --limit $LoggingLimit -j LOG --log-prefix "$LoggingPrefix Denied ICMP: " #--log-level 7

    Update: I found a thread that has the same symptoms I do; apparently it is a kernel bug. I am using a VPS, so could anyone point me to how to upgrade my kernel or apply a workaround? I couldn't find a 2.6.34 kernel listed in apt-cache. Thread: http://www.linode.com/forums/viewtopic.php?t=5533
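    Independent of the kernel issue, there is a literal-string mismatch worth ruling out: the rsyslog filter matches "[IPTABLES]", but with LoggingPrefix=IPTABLES the kernel messages begin with "IPTABLES Denied ..." - no brackets - so the filter can never match them. A hedged fix is to make the two literals agree:

        # in the firewall script: include the brackets the rsyslog filter expects
        LoggingPrefix="[IPTABLES]"

        # then trigger a denied packet and watch both sinks to confirm
        dmesg | tail
        tail -f /var/log/iptables.log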

  • How to make Windows command prompt treat single quote as though it is a double quote?

    - by mark
    My scenario is simple - I am copying script samples from the Mercurial online book (at http://hgbook.red-bean.com) and pasting them into a Windows command prompt. The problem is that the samples in the book use single-quoted strings. When a single-quoted string is passed on the Windows command prompt, the latter does not recognize that everything between the single quotes belongs to one string. For example, the following command:

        hg commit -m 'Initial commit'

    cannot be pasted as-is into a command prompt, because the latter treats 'Initial commit' as two strings - 'Initial and commit'. I have to edit the command after pasting, and it is annoying. Is it possible to instruct the Windows command prompt to treat single quotes similarly to double quotes? EDIT: Following the reply by JdeBP I have done a little research. Here is the summary. Mercurial's entry point looks like this (it is a Python program):

        def run():
            "run the command in sys.argv"
            sys.exit(dispatch(request(sys.argv[1:])))

    So I created a tiny Python program to mimic the command-line processing used by Mercurial:

        import sys
        print sys.argv[1:]

    Here is the Unix console log:

        [hg@Quake ~]$ python 1.py "1 2 3"
        ['1 2 3']
        [hg@Quake ~]$ python 1.py '1 2 3'
        ['1 2 3']
        [hg@Quake ~]$ python 1.py 1 2 3
        ['1', '2', '3']

    And here is the corresponding Windows console log:

        C:\Work> python 1.py "1 2 3"
        ['1 2 3']
        C:\Work> python 1.py '1 2 3'
        ["'1", '2', "3'"]
        C:\Work> python 1.py 1 2 3
        ['1', '2', '3']

    One can clearly see that Windows does not treat single quotes as double quotes. And this is the essence of my question.

  • Are there any other causes of this error that are NOT related to initial setup?

    - by LordScree
    I'm trying to diagnose an issue at a customer site. They are receiving the following error:

        A network-related or instance-specific error occurred while establishing a connection to SQL Server

    I've seen this a few times, but only during initial setup - it's often caused by one of the following:

        - The database server is turned off
        - The network connection between the database server and the application is closed or somehow blocked (e.g. a firewall)
        - The SQL Server instance is not set up to receive remote connections from the application server (e.g. TCP is turned off, remote connections are disabled, or the "SQL Server Browser" service is stopped/disabled)

    However, if I assume that no configuration changes have been made, what might the reason be for getting this error at a random point after the initial setup? My initial thought: the SQL Server machine has run out of resources (e.g. RAM) and is unable to accept new requests from the application server. Is this a valid theory? What other possible causes are there of this error that are not related to initial setup? Or is it simply impossible for this error to occur without a configuration change having been made, whether on the SQL Server side, the application side, or somewhere in between on the network? NOTE: I believe this question differs from the plethora of questions related to this error message because the application and server have been talking to each other quite happily until now (most, if not all, other questions seem to relate to initial setup).
