Search Results

Search found 1886 results on 76 pages for 'tom kerr'.

Page 26/76 | < Previous Page | 22 23 24 25 26 27 28 29 30 31 32 33  | Next Page >

  • Connecting to SVN server from a computer outside of my LAN

    - by Tom Auger
    I've got a Fedora server running Subversion with svnserve on port 3690. My repo is at /var/svn/project_name. My router forwards port 3690 to the local server (as well as ports 80, 21, 22 and a few others). When I connect locally to svn://192.168.0.2/project_name it works great. When I connect from an external server to svn://my.static.ip/project_name the connection to the host times out. However, browsing to http://my.static.ip works fine, so port forwarding is working (at least for port 80). I don't want to run WebDAV or svn over HTTP/S; I'd like it to work using svnserve, as documented in the svn book. What have I misconfigured?

    EDIT Here is the last part of my iptables dump. I'm not an expert, but it looks OK to me:

        ACCEPT  tcp  --  anywhere  anywhere  state NEW tcp dpt:svn
        ACCEPT  udp  --  anywhere  anywhere  state NEW udp dpt:svn
        ACCEPT  tcp  --  anywhere  anywhere  state NEW tcp dpts:6680:6699
        ACCEPT  udp  --  anywhere  anywhere  state NEW udp dpts:6680:6699
        REJECT  all  --  anywhere  anywhere  reject-with icmp-host-prohibited

    EDIT 2 Results from sudo netstat -tulpn:

        tcp  0  0  0.0.0.0:3690  0.0.0.0:*  LISTEN  1455/svnserve
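
    A minimal sketch for narrowing this down, assuming the default svnserve port 3690 and that nmap (or any port scanner) is available on an outside host; this only locates where the connection dies, it is not a fix:

        # On the Fedora server: confirm svnserve listens on all interfaces and that the
        # ACCEPT rule for svn sits above the final REJECT in the INPUT chain.
        sudo netstat -tulpn | grep 3690
        sudo iptables -L INPUT --line-numbers | grep -iE 'svn|3690'
        # From a host outside the LAN: "filtered" here points at the router forward or the
        # ISP rather than the server itself.
        nmap -Pn -p 3690 my.static.ip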

    Read the article

  • "Countersigning" a CA with openssl

    - by Tom O'Connor
    I'm pretty used to creating the PKI used for x509 authentication, SSL client verification being the main reason for doing it. I've just started to dabble with OpenVPN (which I suppose uses the Certificate Authority (CA) certificate in much the same way Apache would). We've got a whole bunch of subdomains and appliances which currently all present their own self-signed certificates. We're tired of having to accept exceptions in Chrome, and it must look pretty rough for our clients when our address bar comes up red. For that, I'm comfortable buying an SSL wildcard, CN=*.mycompany.com. That's no problem. What I can't seem to find out is: can we have our internal CA root signed as a child of our wildcard certificate, so that installing that cert into guest devices/browsers doesn't warn about an untrusted root? Also, on a bit of a side point, why does the addition of a wildcard double the cost of the certificate purchase?
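
    For reference, a purchased wildcard certificate is an end-entity certificate: its basicConstraints extension is set to CA:FALSE, so clients will reject any chain in which it acts as an issuer. A quick way to confirm this on the purchased cert (the file name is an assumption):

        openssl x509 -in wildcard.mycompany.com.crt -noout -text | grep -A1 'Basic Constraints'
        # A leaf certificate prints CA:FALSE -- anything it "signs" builds an invalid chain.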

    Read the article

  • Can Squid 2.7 proxy gzipped content?

    - by Tom Styles
    We have a forward proxy for our network which is Squid 2.7. This is managed for us by a third party. We noticed recently that http requests going from our network to the web were having the Accept-Encoding header removed. This was resulting in all web traffic across our network (approx 8000+ PCs) being uncompressed even though the browsers and server on each end were capable. We have asked the third party to look into this and they have said it is because Squid 2.7 does not support compression. I understand this to be true but I was under the impression that the compression happened on the webserver rather than the proxy. So... Can Squid 2.7 proxy and/or cache content that is gzipped? If it can, how/why might it be configured such that the Accept-Encoding header is being removed?
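
    One hedged way to confirm whether the proxy itself is stripping the header, assuming shell access on a client behind it (the proxy address and port below are placeholders):

        # Request the same URL directly and via the proxy, and compare the response encoding:
        curl -sI -H 'Accept-Encoding: gzip' http://example.com/ | grep -i content-encoding
        curl -sI -x http://proxy.internal:3128 -H 'Accept-Encoding: gzip' http://example.com/ | grep -i content-encoding
        # If only the proxied request loses Content-Encoding, the header is being removed by the
        # proxy configuration (in Squid 2.x typically a header_access deny rule), not by the origin
        # server; compression itself still happens on the web server, the proxy only relays the bytes.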

    Read the article

  • SSD performance

    - by Tom
    I recently upgraded to a Kingston Hyper-X 120GB SSD. When I run CrystalDiskMark my scores look really slow. My motherboard (a Gigabyte socket 775 board) does not have an option for AHCI in the BIOS; I'm wondering if that's the issue. The scores were:

        Seq:  read 233,  write 176.8
        512K: read 224,  write 175.8
        4K:   read 25,   write 80
        4K:   read 23,   write 102

    This drive is rated for over 500. Any help or input would be greatly appreciated.

    Read the article

  • Can't register a soft phone to Asterisk 11

    - by Tom
    I have a VM (on Oracle VirtualBox) running Fedora 17. I've installed Asterisk 11 on it from source, following the wiki instructions (https://wiki.asterisk.org/wiki/display/AST/Creating+SIP+Accounts) to the letter. The IP of the VM running Fedora is 192.168.1.7 and I can ping it from the host machine (Ubuntu 12.04), which is at 192.168.1.2. I've tried registering with Ekiga using the following settings:

        user: [email protected]
        password: verysecretpassword
        registrar: 192.168.1.7

    but I'm getting a "transport fail" error. Also, while trying to register I'm logged in to the Asterisk CLI with verbose level 3 and debug level 4, and nothing appears. Some more relevant data: I've added the following to the end of my sip.conf.sample file:

        [demo-alice]
        type=friend
        host=dynamic
        secret=verysecretpassword
        context=users
        deny=0.0.0.0/0
        permit=192.168.1.0/255.255.255.0

        [demo-bob]
        type=friend
        host=dynamic
        secret=othersecretpassword
        context=users
        deny=0.0.0.0/0
        permit=192.168.1.0/255.255.255.0

    After changing sip.conf.sample, I made a copy of it named sip.conf, logged in to the Asterisk CLI and typed sip reload. Then I try to register an Ekiga client from my host machine at 192.168.1.2, but it doesn't work and nothing appears on the Asterisk CLI while in verbose mode level 3. BTW, if there is information missing from my question, please don't close it; comment about what you need to know and I'll edit it into the question. Thanks.
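
    A couple of hedged checks, assuming chan_sip on the default SIP port 5060/udp; the CLI commands go inside asterisk -rvvv, the shell commands run on the Fedora guest:

        # Inside the Asterisk CLI:
        #   sip show peers     -- are demo-alice and demo-bob loaded from sip.conf at all?
        #   sip set debug on   -- dump every SIP packet; silence means the REGISTER never arrives.
        # On the Fedora 17 guest, confirm the port is bound and not blocked by the firewall:
        sudo netstat -ulpn | grep 5060
        sudo iptables -L -n | grep 5060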

    Read the article

  • How can I start any application with Guest permissions by default?

    - by Tom Wijsman
    Here are my two questions: How can I start any application with Guest permissions by default? How can I set certain applications not to launch with Guest permissions? For the first bullet, any non-Microsoft signed application I launch should run as the Guest account. For the second bullet, I'm imagining adding menu entries like this would be a nice approach: Set to run as Guest (= default selected entry) Set to run as User Set to run as Admin But how do I do this?

    Read the article

  • Which program is locking all my executable files?

    - by Tom Wijsman
    When updating any software product, or when manually trying to replace .exe files, I'm told that access to the file is denied, and in fact the System process is holding a handle to the file when I check with Process Explorer. "This must be a driver or something that is malfunctioning" was my first thought, but now I wonder how to figure out which driver or program is doing this, and why. Unlocker doesn't seem to be working for me, unless someone can tell me how to use it properly other than making a magic wand appear in the notification area. This is what Unlocker puts in my event log:

        The description for Event ID 1060 from source Application Popup cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer. If the event originated on another computer, the display information had to be saved with the event. The following information was included with the event:
        \??\C:\Program Files (x86)\Unlocker\UnlockerDriver5.sys
        the message resource is present but the message is not found in the string/message table

    Upon searching event 1060 I get: "<file name> has been blocked from loading due to incompatibility with this system." Perhaps that is because I'm on 64-bit?

    Read the article

  • Ubuntu's gui "Open With" command is not memorizing the application

    - by Tom Brito
    In Ubuntu, when we right-click an icon and use "Open With", there is an option to remember that application for that file type. For some reason, my system is no longer remembering the application (I think it started after some update). And even when I have already used the "Open With" command, the application does not show up in the "Open With" list, so every time I have to go to "Open With" > "Other Application". Any hint on how to solve this?
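
    If the GUI keeps forgetting, the association can be inspected and set from a terminal; per-user choices live in ~/.local/share/applications/mimeapps.list (the exact location varies by release, so treat the path as an assumption, and the file name below is just an example):

        xdg-mime query filetype ~/somefile.pdf            # prints the MIME type, e.g. application/pdf
        xdg-mime query default application/pdf            # which .desktop entry currently wins
        xdg-mime default evince.desktop application/pdf   # set the association by hand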

    Read the article

  • Would a PHP application benefit from being served from a RAM drive?

    - by Tom Marthenal
    I am in charge of hosting a PHP application that is large and slow, but easy to scale. The application is entirely static, with no writable disk storage needed. We've profiled the application, and the main bottleneck appears to be loading the application rather than the work the application does. The application is not CPU-intensive, although it does use a fair amount of memory (think Magento). Currently we distribute it by having a series of servers with the same PHP files on their hard drives and a load balancer in front of them. Easy but expensive. I've been reading about RAM disks and the I/O benefits they offer, and was wondering if they would be well suited to PHP applications. Since PHP applications are loaded from disk for every request and often involve lots of different files (as opposed to being kept in memory like a Java application), I would figure that disk performance can be a severe bottleneck. Would placing the PHP files on a RAM disk and using the mount point as Apache's document root offer performance benefits? A startup script could create the RAM drive and then copy the files (which are plain text and small) from a permanent location to the temporary RAM drive. Does this make sense, or should I just trust the Linux kernel to cache the appropriate files in memory by itself?
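
    A minimal sketch of the tmpfs variant, with illustrative paths; note that the kernel's page cache usually keeps hot PHP files in RAM anyway, and an opcode cache (APC at the time, OPcache today) typically removes far more of the per-request load cost, so it is worth measuring before committing:

        sudo mount -t tmpfs -o size=512m tmpfs /var/www/ramdisk
        rsync -a /var/www/app/ /var/www/ramdisk/   # run from a startup script; the files are small and plain text
        # Then point Apache's DocumentRoot at /var/www/ramdisk and benchmark against the plain-disk setup.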

    Read the article

  • Is it necessary to have firewall rules between trusted nodes communicating on their backend interfaces?

    - by Tom
    I have 6 nodes that have internet access on eth1 and private access to one another on eth0. Currently I have firewall rules for eth0, for things like memcached and NFS. Is this necessary? It's a real headache as NFS for example communicates on loads of different ports, and I recently introduced glusterfs which needs more still. Is the headache of figuring out what backend ports to unblock worth the security enhancement? I should mention that I will of course still have a firewall rule on eth0 to block servers owned by others in the same datacenter. Thanks
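
    One common middle ground, sketched here as an assumption rather than a recommendation, is to stop enumerating NFS/gluster ports and instead trust the backend interface for the private subnet only (the subnet below is a placeholder):

        iptables -A INPUT -i eth0 -s 10.0.0.0/24 -j ACCEPT
        iptables -A INPUT -i eth0 -j DROP
        # The internet-facing rules on eth1 stay as strict as before; the per-service port headache
        # collapses into a single source-address decision, which also covers the neighbouring
        # datacenter servers you want to keep out.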

    Read the article

  • Can I restore Windows 8/install Windows 7 from BIOS?

    - by Tom
    I recently got an ASUS K55A series laptop with Windows 8 on it, and have been trying for days to load Windows 7 on it, to no avail. I recently discovered how to get my Windows 7 install DVD to boot from the BIOS, but I had deleted all of my Windows 8 system information from both partitions of my HDD, and Windows 7 setup says it cannot install on the disk because of a partition format issue. I did not delete the Windows 8 recovery partition on the HDD, but I can't get the HDD to show up in my boot menu in the BIOS, and none of the F keys work to get to recovery mode (only DEL and F2 work, and they only get me into the BIOS).

    Read the article

  • Setup shared internet connection on virtualbox with fixed IP

    - by Tom
    I am a web developer, and until recently I was using Ubuntu as my OS. For various reasons I have switched back to Windows. I still want to keep my server on a Linux platform, so I set up my local server as a virtual machine. Everything works great, but I'm struggling a little with the networking. Since I work in different places and visit clients, I connect to all sorts of networks with different settings. That means the possible IP range is very dynamic, which causes issues when I work on my local server. At the moment I have a dynamic IP on my host and a static IP on my guest. That way I can access the server from my host (by adding a record to the hosts file). I also have an internet connection on the guest. But once I change networks, it stops working (assuming the network has a different configuration). My question is: how do I set up host-guest networking so that no matter what network I connect to, I keep the static IP on the guest, which is registered in the hosts file on my host, so that I can reach the web server and still have an internet connection on the guest? Hope it makes sense. Thank you.
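
    One hedged way to decouple the guest's address from whatever network the laptop joins is to give the VM two adapters: NAT for internet access plus a host-only adapter for a fixed IP (the VM name and addresses below are placeholders):

        VBoxManage modifyvm "dev-server" --nic1 nat
        VBoxManage modifyvm "dev-server" --nic2 hostonly --hostonlyadapter2 vboxnet0
        # Inside the guest, give the second interface a static address on the host-only subnet
        # (e.g. 192.168.56.10) and put that address against the server name in the Windows hosts file;
        # the NAT adapter follows whatever network the host is on, the host-only address never changes.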

    Read the article

  • Changing 127.0.0.1:81 to an Internal Domain Name?

    - by Tom
    Hi, I was wondering what steps I can take to change the localhost name to a test development domain name like "website.dev" on Win7 x64. Currently, when my test website builds, it is assigned to 127.0.0.1:81, but I want it to instead have a name like "website.dev" that is accessible on my local network (and on any virtual PCs built on this local PC). I think this is done via the hosts file, but I am a little unsure how to do it. Would someone be able to assist? Thx
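
    For reference, the hosts-file entry itself is a single line (C:\Windows\System32\drivers\etc\hosts on Windows 7, edited as Administrator); the hosts file cannot carry the :81 port, which has to stay in the URL or move into the web server's site binding, and other machines on the LAN would point website.dev at this PC's LAN IP rather than at loopback:

        127.0.0.1    website.dev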

    Read the article

  • How to detect bots programmatically

    - by Tom
    We have a situation where we log visits and visitors on page hits, and bots are clogging up our database. We can't use captchas or similar techniques because this happens before we even ask for human input; basically we are logging page hits, and we would like to log only page hits made by humans. Is there a list of known bot IPs out there? Does checking known bot user-agents work?
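
    User-agent checks are trivially spoofed and published IP lists go stale, so a hedged, stronger check for the major crawlers is double reverse DNS, sketched here for an address pulled from the logs (the IP is only an example):

        ip=66.249.66.1
        host "$ip"                                  # should land in googlebot.com, search.msn.com, etc.
        host "$(host "$ip" | awk '{print $NF}')"    # forward-confirm: the name must map back to the same IP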

    Read the article

  • How can I make my browser(s) finish AJAX requests instead of stopping them when I switch to another page?

    - by Tom Wijsman
    I usually need to deal with things on a page right before switching to another page; this ranges from liking or upvoting a comment or post up to "an important action", and it doesn't always come with feedback on whether the action actually went through. This is a huge problem! I assume the action proceeds once I start the particular AJAX request, but because I switch to another page it doesn't actually happen, because the AJAX request gets aborted. This has left me several times coming back to a page and seeing that my action didn't take place at all; to give you an idea how bad this is, it even happened once when commenting on Super User! Is there a way to tell my browser not to drop these AJAX connections but simply let them finish?

    Read the article

  • Would upgrading memory from 4GB to 8GB on my laptop solve swapping issues?

    - by Tom
    I have a laptop with 4GB of memory running Windows 7, and I often find that Eclipse has been swapped out to disk. On the net they usually write that 4GB of RAM is more than enough for average use, and aside from Eclipse and the Android emulator I don't really use other extra apps, yet Eclipse always gets swapped out if I haven't used it for a while (say, one day), and it is annoying to wait for it to be resurrected from swap. My question is: would an upgrade to 8GB solve the issue of swapped-out applications? With 8GB, would Windows 7 keep everything in memory? Or would it not change anything, with Eclipse swapped out regardless of the amount of memory, because Windows 7 has a habit of evicting from memory every application that hasn't been used for a while?

    Read the article

  • Automatically change the gnome-terminal title for the window

    - by tom
    Hi. I'm trying to change the title of a current gnome-terminal window (similar to the "Set Title" you can do manually). The system is running Fedora 9. The HowTo Xterm-Title discusses how to set the prompt for an xterm; I tried to implement the escape sequences with no luck (might be something weird). I tried to use gconftool to dump/change/load the changed conf attributes, and again, no luck. I also set PROMPT_COMMAND, just in case the prompt command was somehow changing the title back (which is highly doubtful). Searching the net indicates that a few people have tried to solve this with no luck. I'd also like to figure out how to create a new gnome-terminal with a unique, specified title. Once this is solved, I'll gladly create a quick writeup/post on how to accomplish this for others. Thanks.
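
    For what it's worth, a hedged sketch of the two usual approaches: the escape sequence is the standard xterm title sequence, which gnome-terminal also honours, and gnome-terminal of that era accepts a --title option for new windows:

        echo -ne "\033]0;my custom title\007"                             # set the current window's title once
        export PROMPT_COMMAND='echo -ne "\033]0;my custom title\007"'     # re-assert it at every bash prompt
        gnome-terminal --title="another title" &                          # open a new terminal with its own title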

    Read the article

  • Why do I get swap space related errors when I still have lots of free memory in Solaris 10?

    - by Tom Duckering
    I am seeing a few of my services suffering or crashing with errors along the lines of "Error allocating memory" or "Can't create new process". I'm slightly confused by this, since logs show that at the time the system has lots of free memory (around 26GB in one case) and is not particularly stressed in any other way. After noting a JVM crash with a similar error, with the added query "Out of swap space?", I dug a little deeper. It turns out that someone has configured our zone with a 2GB swap file. Our zone doesn't have capped memory and currently has access to as much of the 128GB of RAM as it needs. Our SAs are planning to cap this at 32GB when they get the chance. My current thinking is that whilst there is memory aplenty for the OS to allocate, the swap space seems grossly undersized (based on other answers here). It seems as though Solaris wants to make sure there's enough swap space in case things have to be swapped out (i.e. it reserves the swap space). Is this thinking right, or is there some other reason that I get memory allocation errors with this much memory free and a seemingly undersized swap space?
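
    A couple of hedged Solaris 10 checks that distinguish reserved virtual swap from free RAM; the ZFS volume name and size are purely illustrative, and adding swap is something to agree with the SAs first:

        swap -s    # allocated / reserved / used / available virtual swap -- reservations fail here first
        swap -l    # the configured swap devices; a lone 2GB device will stand out
        # Temporary relief while waiting for the memory cap to be set:
        zfs create -V 16G rpool/swap2 && swap -a /dev/zvol/dsk/rpool/swap2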

    Read the article

  • How can I figure out which PHP extensions aren't being used?

    - by Tom Marthenal
    I manage a server (running Ubuntu) which hosts a few dozen different PHP-based websites for our clients, mostly small sites but also some installations of CMSes and forums. I used the get_loaded_extensions() function to see what extensions I have loaded. To help streamline the server (remove unnecessary extensions to make upgrading easier and marginally improve speed), I'd like to remove extensions that aren't being used by any of the sites. I currently have 54 different extensions loaded. I can easily eliminate some of these from the list, which I know are used, but I am less sure about others. Is there some way that I can see which extensions have not been used recently?
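
    A crude but hedged approach is to grep the document roots for functions or classes that only a given extension provides, then disable candidates one at a time on a staging copy; the docroot path and the example function names below are assumptions:

        php -m                                                              # the loaded-extension list as the CLI sees it
        grep -rIl --include='*.php' -e 'imap_open' -e 'new Memcache' /var/www/ | head
        # No hits for an extension's functions across all docroots makes it a removal candidate;
        # bear in mind that dynamic calls and CMS plugins can hide usage from a plain grep.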

    Read the article

  • Centos 6, local yum repo, and multiple versions of the same rpm

    - by Tom Skelley
    I'm trying to set up a really simple local repo. I want a basic repo with two versions of only one rpm, so I did:

        mkdir /packages/x64
        # copied the two rpms to /packages/x64
        [root@repo x64]# createrepo --verbose /packages/x64
        1/2 - jre-6u37-linux-amd64.rpm
        2/2 - jre-7u9-linux-x64.rpm
        Saving Primary metadata
        Saving file lists metadata
        Saving other metadata

    I added the repo to /etc/yum.repos.d/local.repo. But when I do:

        [root@repo x64]# yum list jre

    I get:

        Loaded plugins: fastestmirror, security
        Loading mirror speeds from cached hostfile
        Available Packages
        jre.x86_64    1.7.0_09-fcs    local

    i.e. it only shows the latest version. I know that both are in the repo because I've run this:

        [root@repo x64]# rpm -qp jre-6u37-linux-amd64.rpm
        jre-1.6.0_37-fcs.x86_64
        [root@repo x64]# rpm -qp jre-7u9-linux-x64.rpm
        jre-1.7.0_09-fcs.x86_64

    and when I remove the latter version and run createrepo again, the former shows up. Most puzzling. What am I missing?
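
    One hedged detail worth knowing: yum list hides all but the newest version of a package name by default, so the older jre may well be in the repo metadata but simply not displayed:

        yum --showduplicates list jre        # lists every version the enabled repos provide
        yum install jre-1.6.0_37-fcs         # install a specific older version explicitly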

    Read the article

  • mysql 5.1 - innodb - query_cache_size - 9,418,108 queries have been removed from the query cache due to lack of memory

    - by Tom C
    Currently running on a 16GB system (Ubuntu 64-bit). The InnoDB buffer pool is set to 10GB. tuning-primer shows the following:

        QUERY CACHE
        Query cache is enabled
        Current query_cache_size = 512 M
        Current query_cache_used = 501 M
        Current query_cache_limit = 4 M
        Current Query cache Memory fill ratio = 97.87 %
        Current query_cache_min_res_unit = 4 K
        However, 9418108 queries have been removed from the query cache due to lack of memory
        Perhaps you should raise query_cache_size

    That is over 9 million queries removed. System uptime is 8 days. Should I remove the query cache altogether? Our db is always under heavy I/O. Thanks in advance.
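
    A couple of hedged checks before resizing or removing it; on a write-heavy InnoDB box the query cache's global mutex often costs more than the hits are worth, and it can be switched off at runtime to measure the difference (credentials are assumed):

        mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Qcache%';"    # Qcache_lowmem_prunes, hits vs inserts
        mysql -u root -p -e "SET GLOBAL query_cache_size = 0;"      # disable live, watch throughput, revert if worse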

    Read the article

  • Reverse SSH tunnel: how can I send my port number to the server?

    - by Tom
    I have two machines, Client and Server. Client (who is behind a corporate firewall) opens a reverse SSH tunnel to Server, which has a publicly-accessible IP address, using this command: ssh -nNT -R0:localhost:2222 [email protected] In OpenSSH 5.3+, the 0 occurring just after the -R means "pick an available port" rather than explicitly calling for one. The reason I'm doing this is because I don't want to pick a port that's already in use. In truth, there are actually many Clients out there that need to set up similar tunnels. The problem at this point is that the server does not know which Client is which. If we want to connect back to one of these Clients (via localhost) then how do we know which port refers to which client? I'm aware that ssh reports the port number to the command line when used in the above manner. However, I'd also like to use autossh to keep the sessions alive. autossh runs its child process via fork/exec, presumably, so that the output of the actual ssh command is lost in the ether. Furthermore, I can't think of any other way to get the remote port from Client. Thus, I'm wondering if there is a way to determine this port on Server. One idea I have is to somehow use /etc/sshrc, which is supposedly a script that runs for every connection. However, I don't know how one would get the pertinent information here (perhaps the PID of the particular sshd process handling that connection?) I'd love some pointers. Thanks!
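
    As a hedged starting point, each reverse forward is owned by the per-connection sshd child on Server, so the port-to-process mapping is visible even before wiring anything into /etc/sshrc (ss is the netstat successor; both need root to show the process column):

        sudo ss -tlnp | grep sshd        # every listening -R port with the owning sshd pid
        sudo netstat -tlnp | grep sshd   # equivalent on older systems
        # A script run from /etc/sshrc (or a ForceCommand wrapper) could look its own parent sshd up
        # in this output and report the allocated port back to a registry; that part is left as a sketch.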

    Read the article

  • Why does try_files append each path together?

    - by Tom
    I'm using try_files like this:

        http {
            server {
                error_log /var/log/nginx debug;
                listen 127.0.0.1:8080;
                location / {
                    index off;
                    default_type application/octet-stream;
                    try_files /files1$uri /files2/$uri /files3$uri;
                }
            }
        }

    In the error log, it's showing this:

        [error] 15077#0: *45399 rewrite or internal redirection cycle while internally redirecting to "/files1/files2/files3/path/to/my/image.png", client: 127.0.0.1, server: , request: "GET /path/to/my/image.png HTTP/1.1", host: "mydomain.com", referrer: "http://mydomain.com/folder"

    Can anyone tell me why nginx is looking for /files1/files2/files3/path/to/my/image.png instead of /files1/path/to/my/image.png, /files2/path/to/my/image.png and /files3/path/to/my/image.png? Thanks
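
    For context, a hedged sketch of how try_files is usually written: every argument except the last is a file tested against root/alias, while the last one is a fallback URI (or an =code) that nginx internally redirects to, so ending the list with another plain path re-enters the same location and can loop. The paths keep the question's layout; the root is an assumption:

        location / {
            root /var/www;
            try_files /files1$uri /files2$uri /files3$uri =404;
        }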

    Read the article

  • Computer freezes when CD/DVD drive is enabled (HP Pavilion dv98278 notebook, OS: Vista)

    - by tom
    As the title states, when my CD/DVD drive is enabled, my computer freezes, showing a diagonal pattern on the monitor. This is what has been attempted (DF = didn't fix): (1) cleaned the registry - DF; (2) uninstalled and reinstalled the HL-DT-ST DVDRAM GSA T20L ATA driver - DF; (3) yelled profanities at the computer - DF, but I felt better. I want to install Windows 7, but this makes it much more complicated. It looks like a hardware issue to me; however, what do I know. Any suggestions?

    Read the article

  • Amazon S3: allow users to upload on a restricted basis (per bucket maybe)?

    - by Tom
    Hi there, I'm thinking about signing up to the Amazon S3 storage service. What I want to do is create a service where other people can register their own bucket with a certain amount of storage. These users will install my software, which then uploads their files. Of course, the users may only upload what they have paid for. For this to work I would like to create a separate bucket for each customer, each with its own properties. Question 1: is this possible with the API? How? This means that the installed software must have the rights needed to upload to my Amazon S3 account. Question 2: can I create individual authentication IDs for each bucket or customer, so that they can only upload with restrictions I have set? Thanks in advance.
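
    With today's tooling, the usual shape of this is a bucket (or key prefix) per customer plus an IAM user whose policy only reaches that bucket or prefix; the names below are illustrative, and at the time of the question this meant hand-rolled requests against the REST API rather than the CLI:

        aws s3 mb s3://mycompany-customer-42
        aws iam create-user --user-name customer-42
        aws iam put-user-policy --user-name customer-42 --policy-name s3-customer-42 \
            --policy-document file://customer-42-policy.json   # policy limits s3:PutObject to that one bucket/prefix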

    Read the article
