Search Results

Search found 14951 results on 599 pages for 'connect by'.

  • Can't see more than the first few lines in an SSH connection

    - by hello
    I need some help with SSH buffer sizes. I have a Vista machine at home with "freeSSHd" installed on it, and I have Dynamic DNS set up so I can access some of my home lab equipment that is connected to this Vista machine. From my work machine, which runs XP, I connect to the home machine using PuTTY. Everything up to this point works without any problem. The issue is that I can't see more than the first few lines of a command's output: I press the space bar to page through, and the output scrolls up and is lost as more output is displayed on the screen. The PuTTY client on my work machine is configured with a large scrollback buffer, but the output still only shows a few lines, and as it moves up, the buffer empties automatically. I have searched the entire web and haven't found a proper solution anywhere. Can someone please help? Thanks.

  • Jailkit not locking down SFTP, working for SSH

    - by doublesharp
    I installed Jailkit on my CentOS 5.8 server and configured it according to the online guides that I found. These are the commands that were executed as root:

        mkdir /var/jail
        jk_init -j /var/jail extshellplusnet
        jk_init -j /var/jail sftp
        adduser testuser; passwd testuser
        jk_jailuser -j /var/jail testuser

    I then edited /var/jail/etc/passwd to change the login shell for testuser to /bin/bash, to give them access to a full bash shell via SSH. Next I edited /var/jail/etc/jailkit/jk_lsh.ini to look like the following (not sure if this is correct):

        [testuser]
        paths= /usr/bin, /usr/lib/
        executables= /usr/bin/scp, /usr/lib/openssh/sftp-server, /usr/bin/sftp

    testuser is able to connect via SSH and is limited to viewing only the chroot jail directory, and is also able to log in via SFTP; however, over SFTP the entire file system is visible and can be traversed.

    SSH output:

        > ssh testuser@server
        Password:
        Last login: Sat Oct 20 03:26:19 2012 from x.x.x.x
        bash-3.2$ pwd
        /home/testuser

    SFTP output:

        > sftp testuser@server
        Password:
        Connected to server.
        sftp> pwd
        Remote working directory: /var/jail/home/testuser

    What can be done to lock down SFTP access to the jail? FWIW, I mostly used this as a guide: http://digitalpatch.blogspot.com.ar/2010/03/openssh-daemon-hardening-part-3-setup.html
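    For what it's worth, newer OpenSSH (4.9+) can jail SFTP natively with ChrootDirectory and the internal-sftp subsystem, which sidesteps jk_lsh entirely. This is a suggestion of mine rather than anything from the guide above, and the stock CentOS 5 sshd may be too old for it; note also that ForceCommand internal-sftp trades away the bash shell for that user. A minimal sketch for /etc/ssh/sshd_config, assuming /var/jail is root-owned and not group-writable as sshd insists:

        # requires OpenSSH 4.9+; every path component of the jail must be root-owned
        Subsystem sftp internal-sftp

        Match User testuser
            ChrootDirectory /var/jail
            ForceCommand internal-sftp
            AllowTcpForwarding no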

  • Why can't a PC with 2 network cards be accessed by hostname?

    - by lewis
    I set up a PC with 2 network cards, both connected to the same LAN. I can connect to this PC (e.g. by Remote Desktop) only via its IP addresses; accessing it by hostname does not work. Why is this the case?

    UPDATE: Full environment:

    1. A PC with 2 hardware network adapters.
    2. VMware Workstation installed on this PC, with 3 VMs networked using the "bridged" network setting in VMware.
    3. All IP addresses on the LAN are assigned by DHCP.
    4. Win2k8 on all hosts (both physical and virtual).

    As a result:

    1. The PC has 2 IP addresses (e.g. 192.168.1.71 and 192.168.1.72). It is reachable on the LAN by IP address, but not by hostname.
    2. Each VM has its own IP address (e.g. 192.168.1.73, *74, *75 etc). They are reachable from the LAN by their IPs, but not by their hostnames.

    How can I access the PC and the VMs by hostname?
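    Not part of the original question, but a quick diagnostic sketch for narrowing this down, assuming Windows name resolution via DNS with NetBIOS fallback ("mypc" is a placeholder hostname):

        rem which adapter registered which address, and under which DNS suffix
        ipconfig /all
        rem force the machine to re-register its A records with the DNS server
        ipconfig /registerdns
        rem does the name resolve at all?
        nslookup mypc
        rem local NetBIOS name table, in case resolution falls back to NetBIOS
        nbtstat -n

    If nslookup fails, the names were never registered in DNS at all (common when clients get addresses from a consumer router's DHCP), and the multihoming isn't really the culprit.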

  • System With Two Network Adapters

    - by Synetech inc.
    Hi, my system has a NIC (Marvell Yukon) built into the motherboard, but I also have a D-Link (RealTek) card. I figure that using the D-Link and disabling the Marvell makes the most sense, though I'm wondering if maybe the built-in one has better throughput (not that my Internet connection is that fast). Also, I'm wondering about the merits of using both at the same time. My router has four ports, and I have experimented with enabling both NICs and plugging both into the router. I was able to connect to the Internet, but the pattern of usage seemed irregular (that is, which adapter was chosen for a transfer at any given point). I also considered bridging the two, but am having difficulty finding out what exactly creating a network bridge does in the context of the Windows Network Connections window. I am familiar with the concept of connecting networks, so it seems to me that bridging two connections on the same segment is pointless at best (and can it cause problems like loops?). Does anyone have any tips on what to do when a system has more than one NIC, and any clarification on the bridge option? Thanks a lot.

  • Can I charge USB devices from a powered hub that isn't connected to a PC?

    - by Anodyne
    This will probably sound familiar to most of you... In my home, we have a whole bunch of devices that can be charged via USB (two iPhones, a BlackBerry, an iPod Touch, etc. ad nauseam). We also have a bunch of USB chargers, each of which has a single USB port on it. I'd like to have something permanently connected to AC power with at least 4 USB ports on it, so we can just plug devices in and don't need to go looking for a free outlet. So here's the question: if I buy a powered USB hub, will it do the job even if I don't connect it to a PC? Ideally, if you have a hub that you can personally verify as suitable, let me know the manufacturer and model :-) Thanks in advance!

    EDIT: The solution I eventually went for was the Kensington 4-Port USB Charger for Mobile Devices (Europe); there's also a US version, the Kensington 4-Port USB Charger for Mobile Devices (USA). It arrived yesterday, so I used it to charge the following devices, all at the same time, overnight last night:

    - 32GB iPhone 3GS
    - 16GB iPhone 3G
    - First-generation iPod Touch
    - Kensington Portable Power Pack for Mobile Devices

    I can't say anything about the charging speed (as I left it overnight), but all devices were fully charged this morning.

  • SSH connection times out unless I tunnel in from a different server

    - by rm-vanda
    OK, so this just started last week. Whenever we try to connect to our server via SSH (we use SFTP as well), the connection times out. However, when you SSH to any other server first and then SSH into the machine from there, it works flawlessly. The mind-blowing thing is that sometimes a direct SSH connection will succeed: moments ago I tried it from another machine, and then from my own, and it worked, only to time out on the next attempt. Last week simply restarting the SSH daemon fixed it, but this week, no such luck. I even went in and changed /etc/hosts.allow to:

        ALL : ALL

    and /etc/hosts.deny is blank. The firewall config hasn't changed, but I even disabled the firewall to see if that would help. It did, for a moment, before connections were cut off again (ufw is set to "ALLOW", not "LIMIT"). When I try SSH'ing in from my phone it works fine, so it seems the problem is with our ISP/router/gateway. However, I see no log entry in the router/gateway that says it is blocking our connections, and that wouldn't explain why we can SSH into any server except this one from our network. I truly appreciate any insight that anyone may have on this matter.
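    Not from the question, just the diagnostic sketch I'd reach for here, assuming a Linux server whose LAN interface is eth0 (adjust to taste):

        # client side: watch where the connection stalls (TCP connect? banner? key exchange?)
        ssh -vvv user@server

        # server side: check whether the SYN even arrives when a direct connection times out
        tcpdump -ni eth0 tcp port 22

    If the SYNs never show up during a failed attempt, the packets are dying upstream, which would fit the ISP/gateway theory.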

  • Trouble setting up incoming VPN in Microsoft SBS 2008 through a Cisco ASA 5505 appliance

    - by Nils
    I have replaced an aging firewall (a custom setup using Linux) with a Cisco ASA 5505 appliance on our network. It's a very simple setup with around 10 workstations and a single Small Business Server 2008. Setting up incoming ports for SMTP, HTTPS, Remote Desktop etc. to the SBS went fine; they are working as they should. However, I have not succeeded in allowing incoming VPN connections. The clients trying to connect (running Windows 7) are stuck at the "Verifying username and password..." dialog before getting an error message 30 seconds later. We have a single external, static IP, so I cannot set up the VPN connection on another IP address. I have forwarded TCP port 1723 the same way as I did for SMTP and the others, by adding a static NAT rule translating traffic from the SBS server on port 1723 to the outside interface. In addition, I set up an access rule allowing all GRE packets (src any, dst any). I have figured that I must somehow forward incoming GRE packets to the SBS server, but this is where I am stuck. I am using ASDM to configure the 5505 (not the console). Any help is very much appreciated!
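    One thing worth checking, as an assumption on my part rather than anything from the question: for PPTP the ASA has a dedicated inspection engine that watches the TCP/1723 control channel and opens and translates the associated GRE session automatically, so manual GRE rules are usually not the way. A sketch of the CLI equivalent (ASDM exposes the same under the service policy settings), assuming the default global_policy is in place:

        policy-map global_policy
         class inspection_default
          inspect pptp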

  • Nginx server 301 Moved Permanently

    - by user145714
    When I did a curl -v http://site-wordpress.com:81, I received this result:

        * About to connect() to site-wordpress.com port 81 (#0)
        *   Trying ip... connected
        * Connected to site-wordpress.com (ip) port 81 (#0)
        > GET / HTTP/1.1
        > User-Agent: curl/7.19.7 (x86_64-unknown-linux-gnu) libcurl/7.19.7 NSS/3.12.6.2 zlib/1.2.3 libidn/1.18 libssh2/1.2.2
        > Host: site-wordpress.com:81
        > Accept: */*
        < HTTP/1.1 301 Moved Permanently
        < Server: nginx/1.2.4
        < Date: Fri, 16 Nov 2012 16:28:19 GMT
        < Content-Type: text/html; charset=UTF-8
        < Transfer-Encoding: chunked
        < Connection: keep-alive
        < X-Pingback: The URL above/xmlrpc.php
        < Location: The URL above

    Seems like this line in my fastcgi_params is causing grief:

        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

    If I remove this line, I get HTTP/1.1 200 OK, but a blank page. This is my config:

        server {
            listen 81;
            server_name site-wordpress.com;
            root /var/www/html/site;
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;
            index index.php;

            if (!-e $request_filename) {
                rewrite ^(.*)$ /index.php break;
            }

            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000; # port where FastCGI processes were spawned
                fastcgi_index index.php;
                include /etc/nginx/fastcgi_params;
                include /etc/nginx/mime.types;
            }

            location ~ \.css {
                add_header Content-Type text/css;
            }

            location ~ \.js {
                add_header Content-Type application/x-javascript;
            }
        }

    This config works with the IP and port 80. But now I need to use a domain name and port 81, which doesn't work. Could someone please help? Thanks.
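    A hunch rather than anything established above: the redirect may be coming from WordPress itself, not nginx. WordPress 301s any request whose Host:port doesn't match its configured siteurl, which would also explain why dropping SCRIPT_FILENAME yields 200 with a blank page (PHP never runs, so neither does the redirect). If that's the cause, the usual override in wp-config.php would look like this, assuming the site really should be served on port 81:

        define('WP_HOME',    'http://site-wordpress.com:81');
        define('WP_SITEURL', 'http://site-wordpress.com:81');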

  • Can ZFS ACLs be used over NFSv3 on a host without /etc/group?

    - by Sandra
    Question at the bottom.

    Background: My server setup is shown in the diagram below. There is an LDAP host with a group called group1 that contains user1 and user2. The NAS is FreeBSD 8.3 with ZFS, with one zpool and one volume. serv1 gets /etc/passwd and /etc/group from the LDAP host. serv2 gets /etc/passwd from the LDAP host, but its /etc/group is local and read-only, so it knows nothing about which groups exist in LDAP. Both servers mount the NAS over NFS 3.

    What I would like to achieve: I would like to be able to create/modify groups in LDAP to allow/deny users read/write access to NFS 3 shared directories on the NAS. Example: group1 should have read/write access to /zfs/vol1/project1 and nothing more.

    Question: The problem is that serv2 doesn't have an LDAP-controlled /etc/group file. The only way I can think of to solve this is to use ZFS permissions with inheritance, but I can't figure out how, or what permissions I should set. Does someone know if this can be solved at all, and if so, any suggestions?

        +----------------------+
        |         LDAP         |
        | group1: user1, user2 |
        +----------------------+
          |        |        |
          |ldap    |ldap    |ldap
          |        v        |
          |  +-----------+  |
          |  |    NAS    |  |
          |  | /zfs/vol1 |  |
          |  +-----------+  |
          |    ^       ^    |
          |    |nfs3   |nfs3|
          v    |       |    v
        +-----------------------+    +----------------------------+
        |         serv1         |    |           serv2            |
        | /etc/passwd from LDAP |    | /etc/passwd from LDAP      |
        | /etc/group from LDAP  |    | /etc/group local/read only |
        +-----------------------+    +----------------------------+
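    An observation of my own, not from the question: with plain AUTH_SYS, NFSv3 clients send numeric UIDs and GIDs on the wire, and the NAS checks permissions against those numbers, so serv2's missing group names may not matter as long as processes there run with the right GID. Under that assumption, ordinary group ownership could be enough; a sketch on the NAS, where 5001 is a made-up stand-in for group1's GID in LDAP:

        # numeric GIDs work even without a matching /etc/group entry
        chgrp 5001 /zfs/vol1/project1
        chmod 0770 /zfs/vol1/project1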

  • iptables: how to allow incoming FTP traffic?

    - by logansama
    Hi, still fighting my way through the jungle that is called iptables. I have managed to allow FTP access out of our LAN; either of these would work (eth0 is the LAN interface and eth1 is the WAN interface):

        iptables -t filter -A FORWARD -i eth0 -p tcp --dport 20:21 -j ACCEPT

    or

        iptables -A FORWARD -i eth0 -o eth1 -p tcp --sport 20:21 --dport 1024:65535 -j ACCEPT

    But when I connect to an external FTP server, I manage to log in, and all is fine until the client wants to list the directory contents. Then nothing happens, because the data connection is blocked; I do not have a rule set up to allow it (the last rule in my FORWARD chain blocks all traffic). I have tried a gazillion rules (many of which I did not understand) to try to allow the FTP traffic back through my server. One such rule, for example, was:

        iptables -A FORWARD -i eth1 -o eth0 -p tcp --sport 20:21 --dport 1024:65535 -j ACCEPT

    But I cannot get the listing to work; it just times out after a while. Would anyone know how to build a rule that would allow the listing / allow such traffic back in? Or have a link to sources I could work through? Thank you.
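    For what it's worth, a sketch of the approach I'd expect to work here, stated as an assumption about the setup rather than a verified fix: FTP data connections (especially passive mode) land on unpredictable ports, so the standard iptables answer is connection tracking with the FTP helper instead of fixed port ranges:

        # load the FTP connection-tracking helper (ip_conntrack_ftp on older kernels)
        modprobe nf_conntrack_ftp

        # let replies and helper-flagged FTP data connections back through
        iptables -A FORWARD -i eth1 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT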

  • Windows 7 SSH file server

    - by Siriss
    Hello all. I have looked at the other posts but have not quite found an answer. I have a question about Windows file sharing over SSH. I have copssh installed, and it is working for Remote Desktop connections. I have port 22 forwarded on my router etc. I connect from a Mac or PuTTY with this command:

        ssh -l copsshusername -L 3391:localhost:3389 [external ip]

    That works fine. I would now like to configure Windows 7 to give the SSH account I log in with access to certain shared folders. I have documents and videos and things that I would like to be able to download externally. I have done this before on Linux, and a long time ago on XP, but I cannot figure out what I am missing on Windows 7. There is a designated SSH user that copssh uses to run the service and that I use to log in. I have googled and googled and have not found a solution that does everything I need, which is why I am turning here for ideas. I hope I am explaining this correctly. Thank you very much for your help!
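    Not from the question, but one low-tech sketch for the folder-access half: copssh ships an SFTP subsystem, and the jailed home directory can be given directory symlinks out to whatever should be reachable. Both paths below are examples, and whether the Cygwin-based SFTP server follows NTFS symlinks can depend on the copssh build, so treat this as an experiment:

        rem run from an elevated command prompt; paths are hypothetical
        mklink /D "C:\Users\copsshusername\videos" "D:\Videos"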

  • Large scale file replication with an option to "unsubscribe" from a replicated file on a given machine

    - by Alexander Gladysh
    I have 100+ GB of files per day incoming on one machine. (File size is arbitrary and can be adjusted as needed.) I have several other machines that do some work on these files. I need to reliably deliver each incoming file to the worker machines. A worker machine should be able to free its HDD of a file once it is done working with it. It is preferable that a file be uploaded to the worker only once, processed in place, and then deleted, without copying it somewhere else, to minimize the already high HDD load. (The worker itself requires quite a bit of bandwidth.) Please advise a solution that is not based on Java. None of the existing replication solutions that I've seen can do the "free the HDD of the file once processed" part, but maybe I'm missing something... A preferable solution should work with files (from the POV of our business logic code), not require the business logic to connect to some queue or other. (Internally the solution may use whatever technology it needs to, except Java.)

  • Is it possible to record a screen-video from a VNC server?

    - by nikie
    I have a computer that's running a VNC server. I would like to record a video of what's going on on this computer, if possible without installing additional software on it. Is there a program that can connect to the VNC server port and, instead of displaying the screen, save it to a video file (e.g. AVI)?

    Background: One of our customers sometimes has problems with the software he bought from us when he's performing a complex procedure. To help him, we offered that someone (a service technician or programmer) watches what he's doing during that procedure, to find out whether he's doing something wrong or there's a bug in the software. Currently this is done live via VNC. That has a few disadvantages:

    - The service technician has to be in the office at the time. As the customers are in different time zones, that can be in the middle of the night.
    - If the service technician forgets something or doesn't notice something, it's lost. There's no way to see what happened again.
    - Only a single computer can be watched by one service technician at a time.

    I know I could install normal screen-grab software on the computer, but we're talking about an embedded system with limited RAM, CPU and HDD space, so installing something new is not an easy decision. And VNC is already there. I could of course open a VNC client on some office PC and capture that PC's screen, but I can only record one remote computer that way, and I often have to watch up to 8 screens in parallel. (I don't think screen-grabbing the VNC client would improve image quality, either.)
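    Tools in exactly this vein do exist; one I'm aware of is vnc2flv, a small Python package whose flvrec.py connects to the VNC port as a client and writes the session to an FLV file, with nothing installed on the observed machine beyond the VNC server it already runs. I haven't verified the exact flags, so treat the invocation as hypothetical:

        # invocation is from memory - check the vnc2flv docs for the real options
        pip install vnc2flv
        flvrec.py 192.168.0.10:0    # host and display number are examples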

  • Upgrading PHP, MySQL old-passwords issue

    - by Rushyo
    I've inherited a Windows 2k3 server running an XAMPP installation from the stone age. I needed to upgrade PHP to facilitate an upgrade to MediaWiki to facilitate a new MediaWiki extension (to facilitate some documentation to facilitate doing my job to facilitate getting paid to facilit... you get the idea). However, installing a new version of PHP resulted in PHP's MySQL libraries refusing to communicate using MySQL's 'old style' 152-bit passwords. Not a problem in theory: the MySQL installation is post-4.1, so it should have the functionality to upgrade the users' passwords from 152-bit to 328-bit (what a weird hashing algorithm...). I ran the following on MySQL:

        SET PASSWORD = PASSWORD('foo');

    but querying:

        SELECT user, password FROM mysql.user;

    returned just the same password I started out with: 152-bit. Now... I suspect you're thinking 'AHA! old-passwords is on!'. Unfortunately it's not: I've disabled it in the configuration (explicitly set it to 0), made doubly sure I have an absolute reference to that configuration file, and ensured the service isn't using the --old-passwords flag. The service was restarted after each and every operation. So I went onto another system, generated the 328-bit hash there, and copied the hash over to the first MySQL instance. Unfortunately, that didn't work either (I did remember to FLUSH PRIVILEGES). The application error is:

        mysqlnd cannot connect to MySQL 4.1+ using the old insecure authentication. Please use an administration tool [...snip...]

    Is there anything else I can try to get PHP to recognise MySQL as not using the 'old insecure authentication'? MySQL seems to be stuck in old-passwords mode and I can't get it out of it.
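    One more avenue, as an assumption on my part (the post doesn't say whether it was tried per-connection): old_passwords is also a session variable, and an init file or client setting can flip it back on for the very connection issuing SET PASSWORD, in which case the old-format hash keeps being regenerated no matter what the global says. A sketch with a placeholder account:

        SET SESSION old_passwords = 0;
        SET PASSWORD FOR 'someuser'@'localhost' = PASSWORD('foo');
        -- 41 characters beginning with '*' means the new format; 16 characters means old
        SELECT user, host, LENGTH(Password) FROM mysql.user;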

  • XP - ping changes routing table?

    - by Corelgott
    Hey folks, I have got really strange behaviour on one of my XP SP3 machines.

    Setup: a server in the LAN (192.168.5.0) provides access to all road warriors in 10.8.0.0. The DHCP server hands out a static route to all clients announcing 192.168.5.235 as the gateway for 10.8.0.0. All clients can ping and access the VPN machines; everything works like a charm.

    But one XP SP3 machine is not willing to connect to them. It gets the same routes as any other system in the LAN, and I triple-checked: there are no static routes on this machine. When I ping any 10.8.0.0 device from this machine, the first two packets work like a charm, but the next two (and any packet after them) fail and get lost. When I look back at the routing table, there is a new route: a special one, just for the device I pinged, which points to the right gateway but which wasn't there earlier... As long as this route exists, the machine can't ping anything in 10.8.0.0. But if I remove the route by hand, the next two ping packets work fine...

    Has anybody got an idea about that? Has anybody ever seen such behaviour? Any hint / help / tip is greatly appreciated! Thanks in advance, Corelgott

    PS: I attach an image of the cmd output to clarify things - it's in German, but reading a routing table shouldn't be that hard...
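    A guess of my own, not something confirmed in the post: host routes that appear by themselves right after a ping are the classic signature of ICMP redirects, which Windows honors by inserting a temporary host route for the redirected destination. If that turns out to be the cause, redirect acceptance can be disabled; the registry value below is real, but treat the change as an experiment and reboot afterwards:

        rem stop TCP/IP from accepting ICMP redirects
        reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v EnableICMPRedirect /t REG_DWORD /d 0 /f

        rem inspect the table and drop the rogue host route (10.8.0.5 is an example)
        route print
        route delete 10.8.0.5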

  • Possible to have different SSLCACertificateFiles under different Locations in Apache (client-side SSL certs)

    - by Mikko Ohtamaa
    I am setting up Apache to do smartcard authentication. The smartcard login is based on client-side SSL certificates handled by an OS driver. I currently have just one smartcard provider, but in the future there will potentially be several of them. I am not sure how Apache 2.2 handles client-side certificates per Location. I did some quick testing, and it somehow seemed that only the last SSLCACertificateFile directive was effective, and this doesn't sound right. Is it possible to have a different SSLCACertificateFile per Location in Apache (2.2, 2.4) as described below, or is the SSL protocol somehow limiting, in that you cannot have more than one SSLCACertificateFile per IP? Below is an example of the potential config for how I wish to handle several SSLCACertificateFiles on the same server, to allow users to log in with different smartcard providers:

        <VirtualHost 127.0.0.1:443>
            # Real men use mod_proxy
            DocumentRoot "/nowhere"
            ServerName local-apache
            ServerAdmin [email protected]

            SSLEngine on
            SSLOptions +StdEnvVars +ExportCertData

            # Server-side HTTPS configuration
            SSLCertificateFile /etc/apache2/certificate-test/server.crt
            SSLCertificateKeyFile /etc/apache2/certificate-test/server.key

            # Normal SSL site traffic does not require verify client
            SSLVerifyClient none
            SSLVerifyDepth 999

            # Provider 1
            <Location /@@smartcard-login>
                SSLVerifyClient require
                SSLCACertificateFile /etc/apache2/certificate-test/ca.crt

                # Apache does not natively pass forward headers
                # created by SSLOptions +StdEnvVars,
                # so we pass them forward to Python using RequestHeader
                # from mod_headers
                RequestHeader set X-Client-DN %{SSL_CLIENT_S_DN}e
                RequestHeader set X-Client-Verify %{SSL_CLIENT_VERIFY}e
            </Location>

            # Provider 2
            <Location /@@smartcard-login-provider-2>
                # For real
                SSLVerifyClient require
                SSLCACertificateFile /etc/apache2/certificate-test/provider2.crt

                # Apache does not natively pass forward headers
                # created by SSLOptions +StdEnvVars,
                # so we pass them forward to Python using RequestHeader
                # from mod_headers
                RequestHeader set X-Client-DN %{SSL_CLIENT_S_DN}e
                RequestHeader set X-Client-Verify %{SSL_CLIENT_VERIFY}e
            </Location>

            # Connect to Plone ZEO client1 running on fg
            ProxyPass / http://localhost:8080/VirtualHostBase/https/local-apache:443/folder_sits/sitsngta/VirtualHostRoot/
            ProxyPassReverse / http://localhost:8080/VirtualHostBase/https/local-apache:443/folder_sits/sitsngta/VirtualHostRoot/
        </VirtualHost>
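    Worth noting as background (my understanding of mod_ssl, not something tested against this config): changing SSL parameters in a per-directory context forces a renegotiation after the Location is known, which is fragile with some clients. A common workaround is a single CA file at the vhost level containing every accepted provider, with the issuer then distinguished per request via the SSL_CLIENT_I_DN variable that mod_ssl already exposes. A sketch, with hypothetical paths:

        # build one bundle holding every accepted smartcard CA (paths are examples)
        cat /etc/apache2/certificate-test/ca.crt \
            /etc/apache2/certificate-test/provider2.crt \
            > /etc/apache2/certificate-test/all-providers.crt

    Then a single SSLCACertificateFile pointing at all-providers.crt would serve both Locations, and the backend could inspect %{SSL_CLIENT_I_DN}e to tell the providers apart.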

  • Lenovo X1 Carbon Windows 8 frequent WiFi disconnect issue

    - by hIpPy
    I'm having frequent WiFi disconnects on my Lenovo X1 Carbon Touch laptop. I got this laptop 2 months back, and it has been happening ever since: about 3-5 times a day, 10 times a week on average. I have Frontier FiOS internet. Whether the power is connected or not does not matter. Once I get disconnected, I try the following to connect again, in this order:

    1. turn Airplane mode on and off,
    2. troubleshoot network problems (Windows troubleshooter),
    3. restart the laptop.

    I'd find that the WiFi adapter would get disabled; sometimes Windows troubleshooting would help, but more often than not I'd end up restarting the laptop. A week back I upgraded my WiFi network adapter drivers (now Intel, version 15.5.6.48, 10/3/2012). I still get disconnected frequently, but turning Airplane mode on and off gets me connected again, so the driver update did help. Windows 8 is up to date. None of the other devices (Nexus and iPhone phones, Nexus 7 and iPad tablets) have WiFi issues when my laptop gets disconnected.

    Config:

        Intel(R) Centrino(R) Advanced-N 6205 (WiFi network adapter)
        Microsoft Windows 8 Pro
        Microsoft Windows [Version 6.2.9200]
        x64-based PC
        LENOVO System Model: 3443CTO
        X1 Carbon Touch

    I recently noticed this log message in Event Viewer when I got disconnected:

        Your computer was not assigned an address from the network (by the DHCP Server)
        for the Network Card with network address 0x[XXXXXXXXXXXX]. The following error
        occurred: 0x79. Your computer will continue to try and obtain an address on its
        own from the network address (DHCP) server.

    Any idea?

  • HAProxy, health checking multiple servers with different host names

    - by Marco Bettiolo
    I need to load balance between multiple running servers with different host names. I cannot set up the same virtual host on each one. Is it possible to have only one listen configuration with multiple servers, and make the health checks apply the http-send-name-header Host directive? I am using HAProxy 1.5. I came up with the working haproxy.cfg below. As you can see, I had to set a different hostname for each health check, as the health check ignores http-send-name-header Host. I would have preferred to use variables or some other method and keep things more concise.

        global
            log 127.0.0.1 local0 notice
            maxconn 2000
            user haproxy
            group haproxy

        defaults
            log global
            mode http
            option httplog
            option dontlognull
            retries 3
            option redispatch
            timeout connect 5000
            timeout client 10000
            timeout server 10000
            stats enable
            stats uri /haproxy?stats
            stats refresh 5s
            balance roundrobin
            option httpclose

        listen inbound :80
            option httpchk HEAD / HTTP/1.1\r\n
            server instance1 127.0.0.101 check inter 3000 fall 1 rise 1
            server instance2 127.0.0.102 check inter 3000 fall 1 rise 1

        listen instance1 127.0.0.101:80
            option forwardfor
            http-send-name-header Host
            option httpchk HEAD / HTTP/1.1\r\nHost:\ www.example.com
            server www.example.com www.example.com:80 check inter 5000 fall 3 rise 2

        listen instance2 127.0.0.102:80
            option forwardfor
            http-send-name-header Host
            option httpchk HEAD / HTTP/1.1\r\nHost:\ www.bing.com
            server www.bing.com www.bing.com:80 check inter 5000 fall 3 rise 2

  • How can one keep secure regular backups of one's desktop on a remote server through ADSL?

    - by Antonis Christofides
    I'm a system administrator, and I use rsnapshot to back up some servers and duplicity for others. Both work fine, each with advantages and disadvantages. Despite that, I am at a loss on how to back up my own private files. I'd use duplicity to automatically back up my files to a remote server, but the problem is that once in a while I must do a full backup. My emails and important files are 9G, and I expect this to increase. Uploading through ADSL at 1Mbit would take 20 hours. Too much. rsnapshot doesn't require periodic full backups (only the first time), but it must be running on the remote server and have a means to connect to my computer; if the server is compromised (or simply if the NSA decides to use it), my own machine is also compromised. Not good. The only solution I've come up with is to use encfs, use unison to synchronize the files to a remote server, and use duplicity or rsnapshot on the remote server to back up these files. In that case, the question is whether I can sync the files on many computers: is it possible for encfs to be used with the same key on many computers? I also think that if I append one character to an unencrypted file, its encrypted encfs counterpart might change a lot, so that incrementals with duplicity would be less efficient, but that's not a big deal. Also, when I need to restore a file, finding the correct file to restore could be a pain, because of filename encryption. I wonder whether there is any other possibility that I've overlooked. Maybe I'm asking too much for my personal use, and I should settle for an external disk?
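    On the many-computers question, my understanding (worth verifying on a throwaway directory first): an encfs tree is portable because the volume key lives in .encfs6.xml inside the encrypted directory itself, protected by the passphrase, so syncing the ciphertext directory syncs the key along with it. A sketch with hypothetical paths:

        # machine A: create the encrypted tree and work in the cleartext view
        encfs ~/.crypt ~/private        # writes ~/.crypt/.encfs6.xml on first run

        # sync the ciphertext (key file included) to the server, then to machine B
        unison ~/.crypt ssh://backuphost//home/me/.crypt

        # machine B: the same passphrase mounts the synced tree
        encfs ~/.crypt ~/private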

  • What are the replacement options for an IDE HD in a DOS-based system?

    - by dummzeuch
    I have got a few "embedded" systems running MS-DOS 6.2 which boot from and store data to IDE hard disks. Since these drives are nearing their end of life, the question arises how we can replace them. The requirements are:

    - DOS must be able to install and boot from these drives.
    - They must be able to sustain heavy (mostly write) access.
    - If possible, they should be able to survive moderate vibrations (not too bad, since the current HDs have survived several years of that).

    I have considered the following options so far:

    - Other IDE hard drives: unfortunately, modern IDE drives are too large, so DOS cannot boot from them even if I create small partitions. Older IDE drives are just that: old, so they are probably not the most reliable any more.
    - SSDs: there are a few SSDs with an IDE interface available. I have not yet tried them. Does anybody have any experience with them? They look like the ideal replacement, provided that DOS can boot from them and that writing speed does not deteriorate too much (the old HDs are no race cars either).
    - Compact Flash: there are adapters for using CF with IDE controllers, and they work fine. DOS can boot from them, and they have no problems at all with vibrations. What I am not sure about is their durability: DOS uses FAT, so the same few sectors are written every time the medium is written to.
    - IDE-to-SATA converters: I have no idea whether they are any good. Has anybody tried them? It might be an option to use one of these to connect a SATA SSD to the system.

    Are there any alternatives that I have missed? (We are working on replacing these systems, but it will still take a few years.)

  • Can Haproxy deny a request by IP if its stick-table is full?

    - by bantic
    In my haproxy config I'm setting a stick-table of size 5 that stores every incoming IP address (for 1 minute), and it is set as nopurge so new entries won't get stored in the table. What I'd like to happen is that they would get denied, but that isn't happening. The stick-table line is:

        stick-table type ip size 5 expire 1m nopurge store gpc0

    And the whole config is:

        global
            maxconn 30000
            ulimit-n 65536
            log 127.0.0.1 local0
            log 127.0.0.1 local1 debug
            stats socket /var/run/haproxy.stat mode 600 level operator

        defaults
            mode http
            timeout connect 5000ms
            timeout client 50000ms
            timeout server 50000ms

        backend fragile_backend
            tcp-request content track-sc2 src
            stick-table type ip size 5 expire 1m nopurge store gpc0
            server fragile_backend1 A.B.C.D:80

        frontend http_proxy
            bind *:80
            mode http
            option forwardfor
            default_backend fragile_backend

    I have confirmed (connecting to haproxy's stats socket with socat readline /var/run/haproxy.stat) that the stick-table fills up with 5 IP addresses, but then every request after that from a new IP just goes straight through: it isn't added to the stick-table, nothing is removed from the stick-table, and the request is not denied. What I'd like to do is deny the request if the stick-table is full. Is this possible? I'm using haproxy 1.5.

  • PC won't boot from IDE HDD when SATA data drive connected

    - by Kevin
    I have an old Pentium 4 system running XP. The machine is set up as an HTPC. It was set up and running well with 1 SATA drive as a boot drive, another SATA drive to store TV recordings, and an IDE drive to store more recordings. Last week the original boot drive (a SATA drive) failed. The BIOS would no longer recognize it. I had a disused IDE drive hanging around that was large enough for the OS, so I reformatted it and installed XP on it. Now the system will only boot if I do not connect the remaining healthy SATA data drive. All three drives are recognized by the BIOS, and I have set the boot order so that the IDE drive with XP on it has top priority, but after the BIOS recognizes the drives, etc. I just get a black screen. I know the SATA drive is functional, because if I hot plug the drive AFTER the system is booted (I know I'm not supposed to do this), I can go into the control panels and mount the drive, and see all the files and folders on it in Windows Explorer. Any suggestions on what is going on and how to fix it? Many thanks.

  • Obtaining clear cable signal with a Hauppauge 1191-950Q TV Tuner

    - by Kyle B.
    I bought a Hauppauge 1191 WinTV-HVR-950Q TV tuner a while back, and I am attempting to use it to watch TV in Windows Media Center. I do not have cable TV service now, but I used to have Comcast. Through experimentation I found that when connecting my coax cable to my TV, I am able to view television (i.e. 2-1 = CBS, 5-1 = NBC, etc.). This also works on a second TV I use in another room. When I connect the coax cable to the TV tuner stick and scan for TV channels in Windows Media Center, it only picks up one station (like 81 or something). My only conclusion is that the hardware in the television is somehow decoding a signal that the TV tuner stick cannot. Is that possible? Should I try a different TV tuner? Any assistance would be appreciated. I also bought a Terk HD indoor antenna, but my reception with it has been flaky, so I would rather go this route if possible.

  • Strategy for incremental datasource fetching in Excel

    - by user1352530
    I am in a scenario with a table that is refreshed by a third-party app every week. I need to keep accumulating all data in Excel, using an ODBC connection to the database. I am wondering:

    Approach 1: Is there a way to force Excel to append results on every update (this update would be triggered according to a parameter that indicates the week)? I tried to define the table which the connection loads using a dynamic reference, but once it is anchored the first time, the table position is never redefined.

    Approach 2: Use an ETL to accumulate all weekly results into a staging table and then connect Excel to it in real time. But I would need a mechanism for caching old data, as I cannot let Excel's opening time grow without bound. Imagine after 10 years: Excel would need to fetch 10 years of data at opening before showing anything. Is there a way to store already-fetched data and increment it in real time (when the book is opened) by selecting only the new data (with a query/filter or something)?

    Thanks

    EDIT: Maybe it's better to ask it this way: what is the optimal strategy for a table that keeps growing and needs to be read in real time by Excel? I just don't want to fetch absolutely all data after some months...

  • How do you set up CRON?

    - by user1723760
    I have never used cron before, but I want to use it to perform scheduled jobs for a PHP script. The PHP script is called "inactivesession.php", and it contains this code:

        <?php
        include('connect.php');

        $createDate = mktime(0, 0, 0, 10, 25, date("Y"));
        // format must match the DATE_FORMAT mask used in the query below
        $selectedDate = date('Y-m-d', $createDate);

        $sql = "UPDATE Session SET Active = ? WHERE DATE_FORMAT(SessionDate, '%Y-%m-%d') <= ?";
        $update = $mysqli->prepare($sql);
        $active = 0; // bind_param needs a variable, not a literal, for the "i" parameter
        $update->bind_param("is", $active, $selectedDate);
        $update->execute();
        ?>

    What I want is that when the above date is reached (25th Oct), the script performs the UPDATE statement. But my question is: how do I use cron to do this? The server I am using is the university's server, known as helios. Does cron need to be set up on helios (do I have to call the admin for this), or is it something else which uses cron? I have never used cron before, so can you explain to me how it can be set up for the example above with the server I am using? Thanks

    UPDATE: I think the name of the OS is actually helios, but I am not sure; I have a Wikipedia page on this: here. I will read the crontab Wikipedia page and see what I can find, but my question is: is cron already set up, so that I can go right ahead and use it, or do I need to set it up first?
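    As a sketch of the cron side: on a typical multi-user Unix host, each account has a personal crontab, edited with crontab -e, and no admin intervention is needed as long as the cron daemon is running (worth confirming with the university admins). The PHP binary path and script path below are assumptions; check them with `which php` and the real location of the file:

        # open your personal crontab for editing
        crontab -e

        # then add a line like this one: run the script every day at 00:05
        5 0 * * * /usr/bin/php /home/youruser/inactivesession.php

        # list what is currently installed
        crontab -l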
