Search Results

Search found 10206 results on 409 pages for 'tooling and testing'.

Page 218/409 | < Previous Page | 214 215 216 217 218 219 220 221 222 223 224 225  | Next Page >

  • How do I serve only internal intranet requests for a site with Apache?

    - by purpletonic
    I have an externally facing web server on our domain that we use for testing multiple sites. I have a site on this server that I want only people from within our intranet to view. How do I prevent requests originating from outside the intranet from seeing this website? I tried the following in my Apache config file, but I get a 403 error.

      <Directory />
          Options FollowSymLinks
          Order Deny,Allow
          Allow from domain.com
          Allow from 10.0.0.0/10.255.255.255
          Deny from All
          AllowOverride None
      </Directory>

      <Directory /var/www/sitename/public>
          Options Indexes FollowSymLinks MultiViews
          Order Deny,Allow
          Allow from domain.com
          Allow from 10.0.0.0/10.255.255.255
          Deny from All
          AllowOverride None
      </Directory>
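    A minimal sketch of an Apache 2.2-style block that restricts the site to the intranet, assuming the intranet is the 10.0.0.0/8 range (the path is the poster's own; the 10.0.0.0/10.255.255.255 form in the question does not look like a network/netmask pair Apache accepts):

      <Directory /var/www/sitename/public>
          Options Indexes FollowSymLinks MultiViews
          AllowOverride None
          Order Deny,Allow
          Deny from all
          # 10.0.0.0/8 spans 10.0.0.0 - 10.255.255.255; adjust to the real intranet range
          Allow from 10.0.0.0/8
      </Directory>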

    Read the article

  • Codec problem: Video noise / squares blinking as it plays

    - by Havenard
    It's a weird and intermittent problem; it comes and goes with time. I thought I could fix it by rebooting, but that doesn't always work. Take a look: this noise keeps blinking on the screen for the entire movie. If I give up and try again an hour from now, it will be fine... weird! I have the K-Lite Codec Pack installed, on my machine and many others. It's very good and everything works perfectly, but mine is the only machine that does this. Does anybody know what's going on? Edit: apparently they've just released the very latest version of K-Lite today (lol), with some bug fixes. I'll be testing it. Last time I reinstalled K-Lite the problem went away, but it came back again...

    Read the article

  • How best to set up MDT development and production?

    - by nray
    What's your MDT 2010 test and prod setup? What do you consider best practice? Is it best to use linked deployment shares, and replicate from development to production when testing is complete? What about backing out, if something breaks? Does anyone run MDT shares in DFS, or is there no support in the WinPE boot image for DFS shares? Or what about moving the production share name from one deployment share to another, as you add and test more OS versions, drivers, attributes, etc?

    Read the article

  • Reputable web based ssh client? [closed]

    - by Doug T.
    I'm connected to a coffee shop's wireless network right now, and I expected I'd be able to use my laptop to ssh somewhere. Unluckily for me, they seem to be blocking everything but web traffic (my testing seems to show that nothing but port 80 is working; I can't ping, ftp, etc.). I googled "web based ssh clients", but I have reservations about entering my login credentials in any Joe Schmoe's web app. Has anyone had experience with a reputable web based ssh client? If so, could you please point me at one that I could trust?

    Read the article

  • CentOS listen to everything on the wire

    - by Poni
    I know there's a native command on Linux that will output (to stdout) every "event" related to a certain network interface (eth0, etc.), the way tail -f <file> listens for file changes; I just can't find it. I want to see all events, incoming packets, even dropped ones, at the lowest level possible, in every protocol (TCP, UDP, etc.). I think Wireshark is a bit too big for this, as I need something very simple just to see the events; it's for testing. What's the command?
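    The command the poster is most likely thinking of is tcpdump; a minimal sketch (the interface name is an assumption):

      # dump every packet on eth0 in real time, without resolving names
      tcpdump -i eth0 -n
      # include link-level headers and capture full packets
      tcpdump -i eth0 -n -e -s 0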

    Read the article

  • Python server does not execute PHP script: permission denied

    - by krisvandenbergh
    I am trying to execute a PHP file through a Python server. However, I get the following error:

      File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/CGIHTTPServer.py", line 255, in run_cgi
        os.execve(scriptfile, args, os.environ)
      OSError: [Errno 13] Permission denied

    The Python server is running, though. What have I done so far? I chmod'ed all directories recursively (chmod -R a+x) for both the Python installation directories and my scripts (I know this is not secure, but it's just for testing purposes), and tried to find out whether the Python server is running as root with ps aux | grep py. I am out of ideas. What else could be going wrong? Thanks for the feedback.
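    A few checks that might narrow down an EACCES from os.execve; a sketch, with placeholder paths, and the interpreter-line note is an assumption about how the PHP script is meant to be launched by CGIHTTPServer:

      # verify the exec/search bits on the script and on each directory above it
      ls -ld /Users/you /Users/you/site /Users/you/site/script.php
      # check mount flags (a noexec mount would also produce "Permission denied")
      mount
      # a script run by CGIHTTPServer is exec'd directly, so it also needs an
      # interpreter line at the top, e.g.:
      #   #!/usr/bin/php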

    Read the article

  • download speed is fast but transfer rate is slow

    - by Ieyasu Sawada
    I've just changed ISP and I'm pretty disappointed with the transfer rate. My previous connection had a download speed of 1.08 Mb/s as measured on http://speedtest.net, and the download transfer rate was about 100 kb/s for sites that don't limit their bandwidth. Now my connection has about a 2 Mb/s download speed, but the transfer rate bounces between 20-50 kb/s. I was expecting something much higher, given the download speed I'm getting when I'm testing. The questions are: what's the difference between transfer rate and download speed, is it normal to have a high download speed but a low transfer rate, and shouldn't the download speed be proportional to the transfer rate?
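    Assuming the speed test reports megabits per second while the download is shown in kilobytes per second (the usual convention), the rough conversion is:

      1.08 Mb/s ≈ 1.08 × 1000 / 8 ≈ 135 kB/s   (so ~100 kB/s downloads were roughly in line)
      2.00 Mb/s ≈ 2.00 × 1000 / 8 ≈ 250 kB/s   (so 20-50 kB/s is well below the link speed)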

    Read the article

  • site timing out when under heavy load

    - by naunu
    My client sends out eblasts at 8am on Monday/Wednesday/Friday. Between 8:15 and 8:45 the site becomes extremely slow and many users' sessions time out. My setup: MediaTemple VE, 2 GB dedicated RAM (3 GB burst), Ubuntu 9.10, apache2-mpm-worker, PHP 5.3 (FastCGI), MySQL 5. I recently tried to remedy the problem by switching from apache2-mpm-prefork to mpm-worker, but am still having the same issues. My Apache settings are:

      Timeout 100
      KeepAlive On
      MaxKeepAliveRequests 100
      <IfModule mpm_worker_module>
          StartServers          12
          MinSpareThreads       25
          MaxSpareThreads       96
          ThreadLimit           96
          ThreadsPerChild       25
          MaxClients           225
          MaxRequestsPerChild    0
      </IfModule>

    The site is only getting ~10,000 page views during the 8am-9am hour, which I don't think should be stressing the server too badly. Maybe it is an error in the PHP settings, or bandwidth per unit time, or the site has outgrown the server? Any suggestions would be very helpful; as you can see, I've given it a good go before looking for help (installed mpm-worker). Also, can anyone suggest some free load testing software, or a tutorial on mod_status? Thank you.
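    On the last two questions, a sketch: ApacheBench (ab) ships with Apache and is a common free load tester, and mod_status can be enabled with a small config block (the URL and allowed address are placeholders; Apache 2.2 syntax):

      # 1000 requests, 50 concurrent, against the page that gets hammered
      ab -n 1000 -c 50 http://www.example.com/

      # minimal mod_status setup, limited to localhost
      ExtendedStatus On
      <Location /server-status>
          SetHandler server-status
          Order deny,allow
          Deny from all
          Allow from 127.0.0.1
      </Location>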

    Read the article

  • OpenVPN Push DNS Not Working Correctly On Windows

    - by woodsbw
    I currently have an OpenVPN server set up on an Ubuntu machine, along with DNSMasq. I want to push DNS to the client (road warrior setup). I had push "dhcp-option DNS x.x.x.x", where x.x.x.x was an OpenDNS server for testing, and everything was working when I connected from my Windows client. But now that I have DNSMasq set up and have changed the "dhcp-option DNS x.x.x.x" line to the DNSMasq server, the client still receives the old OpenDNS server IP when it connects. I'm at a bit of a loss here; I have tried flushing DNS on the client and rebooting the server, and I even grep'd the entire server to see if the OpenDNS IP was in some other config I was missing... it wasn't. One other note: when I connect to the VPN and explicitly run nslookup against the DNSMasq IP, the addresses resolve correctly, so it isn't a DNSMasq issue.
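    A sketch of the relevant server-side line and the usual gotcha (the 10.8.0.1 address is a placeholder for wherever DNSMasq listens):

      # server.conf
      push "dhcp-option DNS 10.8.0.1"

    Pushed options are only delivered when the client builds a new session, so after editing server.conf the OpenVPN server process needs a restart and the Windows client needs to fully disconnect and reconnect before the TAP adapter picks up the new DNS; flushing the client's resolver cache alone will not change the adapter's DNS server.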

    Read the article

  • Are animated GIFs supported in Google Chrome?

    - by James Goodwin
    I have recently been testing a website and found animated GIF images that seem to display fine in IE and Firefox, but in Google Chrome they only show briefly and then disappear! This happens whether I view the image on the page or view the file directly. Are there any reported problems with displaying GIFs in Chrome, or is it just being fussy? There seem to have been some problems in older versions of Chrome, but it's hard to believe something this simple wouldn't have been fixed by now. The version of Google Chrome I am using is 4.1.249.1021. Not sure if this is relevant, but some info about the image: width 216 pixels, height 36 pixels, horizontal resolution 96 dpi, vertical resolution 96 dpi, bit depth 32, frame count 3. EDIT: this seems to be a problem with the latest beta version of Chrome, as it works fine in 4.0.249.

    Read the article

  • Good default for XDG_RUNTIME_DIR?

    - by cadrian
    The XDG Base Directory Specification is a very interesting spec for user directories. It also provides good default values, except for XDG_RUNTIME_DIR. I am writing software that needs to create named pipes; it is a per-user client-server framework (there is a FIFO for the server and a FIFO per client). If XDG_RUNTIME_DIR is not defined, I am currently using a per-user subdirectory in /tmp, but that does not satisfy all the specified conditions (viz. the paragraph starting with "The lifetime of the directory MUST be bound to the user being logged in…"). Is /tmp/myserver-$USER good enough? Edit: I have seen a few suggestions elsewhere. "." is quite unsatisfactory (if only because it is not an absolute path). I also saw /var/run/user/$USER, which is not bad, but that directory does not exist (at least on my box running Debian testing).
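    A minimal sketch of the fallback being discussed, with the ownership and permission properties the spec asks for (the directory name is the poster's own suggestion; the spec's lifetime requirement still isn't met by anything under /tmp):

      dir="${XDG_RUNTIME_DIR:-/tmp/myserver-$USER}"
      # the spec requires the directory be owned by the user with access mode 0700
      mkdir -p -m 0700 "$dir"
      chmod 0700 "$dir"      # in case it already existed with looser permissions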

    Read the article

  • Graphics card failure, anything I could try...

    - by ILMV
    My gaming PC has decided to die. It's not the first time, but usually a quick ATX reset brings it back to life; today it didn't. I disconnected all unnecessary devices so that I only had the case button / LED cables, GPU, CPU, RAM and power connected, and the computer still didn't turn on. I don't have a speaker on my motherboard, so I found a spare one I keep for testing, and when the machine starts up I get one long beep and two short beeps from my Award BIOS, which apparently means a video card error. I swapped in the GPU from another machine and everything works. Q: so I have a faulty graphics card (an nVidia 8800GT OC); is there anything I can try to resurrect it?

    Read the article

  • Ruby, Rails & MySQL parity between Mac Client (10.6) & XServe (10.5)

    - by Meltemi
    We're setting up a Rails environment with development on Mac OS X Client (10.6.3), then using Mac OS X Server (10.5.8) for testing and eventually deployment. I'd like to get as many components in sync across these machines as possible and am wondering if there are any pitfalls. I think I understand what's necessary under Client, but Server has some hardwired stuff that I want to make sure doesn't break... or is updated correctly. Currently installed: OS X Client (10.6.3) has Ruby 1.8.7, Rails 2.3.5, and MySQL not yet installed; OS X Server (10.5.8) has Ruby 1.8.6, Rails 2.3.5, and MySQL Ver 14.12 Distrib 5.0.82. Any suggestions? Ideally from someone who's done this on Leopard Server as well, but I'll listen to general tips & procedures.

    Read the article

  • Visual Query Builder

    - by johnnyArt
    I've been using "dbForge Query Builder" lately and have gotten used to the ease of building and testing a query, especially the complex ones with inner joins, aliases and multiple conditionals. The trial's expiry date is approaching, and while I want to stay on the legal side of this, I'd rather not pay the 50 USD it costs (although I must say it's pretty cheap for what it does). So my question is: are there any free alternatives that could replace this visual query builder? I've failed to find any, and fear that my only two options are paying for it or going to the dark side.

    Read the article

  • rsync to windows (cygwin)

    - by abergmeier
    We have a Windows file storage (don't ask) and now I want to rsync to the machine from Windows, Mac and Linux. So I installed freeSSHd (the login shell is set to C:/cygwin64/bin/sh.exe) and set up certificates. Testing from Linux, test.dat has 0 bytes:

      ssh myuser@winmachinename "C:/cygwin64/bin/true.exe" > test.dat

    Even double-checking with actual output works fine:

      ssh myuser@winmachinename "C:/cygwin64/bin/ls.exe" > test.dat

    Now, when I call rsync:

      rsync --progress -avz -e ssh myuser@winmachinename:/c/Users ~/test

    it fails with:

      protocol version mismatch -- is your shell clean?
      (see the rsync man page for an explanation)
      rsync error: protocol incompatibility (code 2) at compat.c(174) [Receiver=3.1.0]

    As far as I can tell from the docs, this should not happen when the first test is successful!? I am out of ideas by now; any recommendations on how to debug this? EDIT: version details:

      OS       | rsync version
      ---------|------------------------------------------
      Windows  | rsync version 3.0.9  protocol version 30
      Linux    | rsync version 3.1.0  protocol version 31

    Read the article

  • Redmine: reposman.rb succeeds, but does not make SVN repos available to projects

    - by Joey Adams
    I'm testing reposman.rb on the command line (before I make it a cron job):

      /usr/sbin/reposman.rb --svn-dir=/var/svn \
          --redmine-host=http://example.com/projects --key='redacted' \
          --owner='nobody' --group='nobody'

    It succeeded, printing messages for the projects that didn't have repos yet:

      repository /var/svn/project1 created
      repository /var/svn/project2 created

    and printed nothing after running the same command again, indicating it remembered the repos. However, if I look at the Repository settings in Redmine for project1 and project2, they aren't set. Although the SVN repositories are created, the Redmine projects aren't configured. How do I get reposman.rb to automatically configure the Redmine projects to use the repos after they're set up?

    Read the article

  • Win 2008 single server development environment (architecture)

    - by Tommy Jakobsen
    I have a few questions about a test/development environment that I'm setting up on this server: Intel Core i7-920 quad-core with Hyper-Threading, 8 GB DDR3 RAM, 2x 750 GB SATA-II (probably software RAID 1). The server is going to support at most 5 users, maybe 10 when stressed. I was hoping I could run all of the following products on the same server: Windows Server 2008 R2 x64 with IIS, SQL Server 2008 x64 (R2 when released), Team Foundation Server 2010, and SharePoint Foundation 2010. I know this sounds like overkill, but remember that this is for development and testing purposes; it is not a production environment. My question is whether this will be possible at all. Should I run it all on one Windows 2008 installation, or should I run it in multiple virtual environments using Hyper-V? What do you think?

    Read the article

  • Security question pertaining to web application deployment

    - by orokusaki
    I am about to deploy a web application (in a couple of months) with the following setup (perhaps, anyway). Ubuntu Lucid Lynx with:

      - iptables firewall (whitelist style with only 3 ports open)
      - custom SSH port (like 31847 or something)
      - no "root" SSH access
      - a long, random username (not just "admin" or something) with a long password (65 chars)
      - PostgreSQL listening only on localhost
      - 256-bit SSL cert
      - reverse proxy from NGINX to my application server (uWSGI)
      - assume that my colo is secure (physical access isn't my concern for the time being)
      - application-level security (SQL injection, XSS, directory traversal, CSRF, etc.)
      - perhaps IP masquerading (but I don't really understand this yet)

    Does this sound like a secure setup? I hear about people's web apps getting hacked all the time, and part of me thinks, "maybe they're just neglecting something", but the other part thinks, "maybe there's nothing you can do to protect your server, and those things are just measures to make it a little harder for script kiddies to get in". If I told you all of this, gave you my IP address, and told you what ports were available, would it be possible for you to get in (assuming you have a penetration testing tool), or is this really well protected?

    Read the article

  • vncserver too many security failures

    - by cf16
    I am trying to connect to my vncserver running on CentOS from a home computer behind a firewall. I have both Windows 7 and Ubuntu installed on this machine. I get the error "VNC connection failed: vncserver too many security failures", even when logging in with the right credentials (I reset the password on CentOS). Could it be because I'm trying as root? It's probably also important that I have to log in to the remote CentOS box through port 6050; no other port works for me. Do I have to do something with other ports? I see that vncserver is listening on 5901 (and 5902 if another display is added), and I assume the connection is established, because from time to time (after a long time) the password prompt appears... right? Even when the prompt appears and I enter the correct password, I get: authentication failure. Please help; what should I do? How do I disable this lockout for testing purposes?
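    The "too many security failures" message comes from VNC's built-in brute-force blacklist, which locks out a client address after a few failed attempts. A sketch of how it is usually cleared or relaxed (the display number is a placeholder, and the BlacklistThreshold parameter is an assumption about a TigerVNC/RealVNC-style Xvnc; check Xvnc -help on the server):

      # restarting the server clears the current blacklist
      vncserver -kill :1
      vncserver :1
      # for testing only: raise the failure threshold so the lockout rarely triggers
      vncserver :1 -BlacklistThreshold 1000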

    Read the article

  • Problems enabling BitLocker on Windows 7 enterprise

    - by ericl42
    I had BitLocker turned on originally when I loaded the computer, but had to turn it off to do some testing. I recently tried to turn it back on, and I keep getting the following error:

      "A required TPM measurement is missing. If there is a bootable CD or DVD in your computer, remove it, restart the computer, and turn on BitLocker again. If the problem persists, ensure the master boot record is up to date."

    I have verified that there is nothing in the DVD tray and that the laptop is not docked. I have also verified that the TPM is running, and I have no problems enabling BitLocker on a flash drive. I think it's a problem with my MBR, since I am also dual-booting into Fedora, but I am not sure how to fix it (even though it did work a few months ago while I was also dual-booted). Thank you for the help.
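    A couple of commands that may help pin this down; the manage-bde check is standard, while the bootrec step is an assumption about what "ensure the master boot record is up to date" means here, and it would overwrite GRUB and break the Fedora boot entry until GRUB is reinstalled:

      REM from an elevated command prompt: TPM and volume status
      manage-bde -status

      REM from the Windows Recovery Environment, restore the Windows MBR
      REM (only if you accept losing the GRUB boot menu):
      REM   bootrec /fixmbr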

    Read the article

  • Choosing between cloud (Cloudfoundry) and virtual servers - for developers

    - by Mike Z
    I just came across some articles on how to set up your own cloud using Cloudfoundry and Ubuntu, and it got me thinking about choosing our infrastructure. If we want to use our own servers, what's the advantage of a cloud layer on top of virtual servers versus just using virtual servers and a VPN? If we develop for the cloud now, we can quickly move to a cloud provider later if we need the capacity, but other than that, what are the advantages and disadvantages of a private cloud in these areas?

      - speed of development, testing and deployment
      - server management
      - security
      - performance: does the extra layer (cloud) take a toll on server performance, and how big a hit is it?
      - any other advantages/disadvantages?

    Read the article

  • CentOS 6.5 proxy bypass/no_proxy not working

    - by Naruto Uzumaki
    I am running CentOS 6.5 on my desktop. I've set the network proxy using the Network Proxy application provided under Preferences, and I've also set the following exceptions: localhost,127.0.0.0/8,172.16.0.0/12,192.168.0.0./16. But whenever I use wget (I'm testing the proxy settings using wget), it tries to connect to the proxy for private addresses, although wget localhost works fine and doesn't use the proxy. I also removed all the proxy settings and set the proxy in the shell:

      export http_proxy="<proxy_url>:<port>"
      export https_proxy="<proxy_url>:<port>"
      export no_proxy="localhost,127.0.0.0/8,172.16.0.0/12,192.168.0.0./16"

    It works when I use wget <external_url> or wget localhost, but fails when I use wget <private address from the $no_proxy variable>. I tried setting the variables on Ubuntu 14.04 as well and am facing the same issue. Regards,
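    One likely explanation, offered as an assumption: wget's no_proxy is a comma-separated list of host/domain suffixes and literal addresses, so CIDR entries such as 172.16.0.0/12 are not interpreted as ranges and never match. A sketch with suffix/host entries instead (the hosts are placeholders):

      export no_proxy="localhost,127.0.0.1,.corp.example.com,192.168.1.10"
      wget http://192.168.1.10/       # should now bypass the proxy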

    Read the article

  • Virtualmin - Added Virtual Server - Stopped access to Rails app?

    - by Dan
    Hi, sorry if this sounds pretty simple; I'm new to Virtualmin and to running servers in general. I recently purchased a VPS and installed Virtualmin with no problems. I then installed mod_rails and uploaded my first Rails app, which I got working by adding the following to my Apache httpd.conf file:

      <VirtualHost *:80>
          ServerName testing.mydomain.com
          DocumentRoot /home/myapp/public
          <Directory /home/myapp/public>
              Allow from All
              AllowOverride all
              Options -MultiViews
          </Directory>
          RailsBaseURI /
      </VirtualHost>

    I then tried adding a virtual server through Virtualmin, using mydomain.com. The site this created (plus several sub-servers) is working as expected. However, my original Rails app is no longer accessible; its URL now sends me to the parent application (i.e. mydomain.com). The Rails app is not located within the parent's application directory; would that be a problem? Can anyone help? Any advice appreciated. Thanks.
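    One common cause, offered as an assumption: Virtualmin typically creates its VirtualHost blocks against the server's specific IP address, while the hand-added block uses *:80, so the two never compete as name-based virtual hosts and the Virtualmin one catches the request. A sketch of the hand-added block rewritten to match (1.2.3.4 is a placeholder for the VPS address; Apache 2.2 syntax):

      NameVirtualHost 1.2.3.4:80
      <VirtualHost 1.2.3.4:80>
          ServerName testing.mydomain.com
          DocumentRoot /home/myapp/public
          RailsBaseURI /
          <Directory /home/myapp/public>
              Allow from All
              AllowOverride all
              Options -MultiViews
          </Directory>
      </VirtualHost>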

    Read the article

  • How to detect hard disk failure?

    - by Devator
    So, one of my servers has a hard disk failure. It's running software RAID; the system locked up, and according to /proc/mdstat (and /var/log/messages) the disk is really down:

      Personalities : [raid1]
      md2 : active raid1 sdb2[1]
            104320 blocks [2/1] [_U]
      md5 : active raid1 sdb5[1]
            2104448 blocks [2/1] [_U]
      md6 : active raid1 sdb6[1]
            830134656 blocks [2/1] [_U]
      md1 : active raid1 sdb1[1]
            143363968 blocks [2/1] [_U]

    and

      Nov 5 22:04:37 m38501 smartd[4467]: Device: /dev/sda, not capable of SMART self-check

    However, when I run smartctl -H /dev/sda, it passes the test. It also passes with smartctl --test=short /dev/sda. So, is smartctl a broken testing tool, or am I doing something completely off?
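    A few follow-up checks that usually say more than the overall health flag; this is a sketch, and the -d sat device-type hint is only an assumption about why smartd reported the drive as "not capable of SMART self-check":

      smartctl -a /dev/sda                # full attribute and error-log dump
      smartctl -a -d sat /dev/sda         # force SAT pass-through if probing fails
      smartctl --test=long /dev/sda       # surface scan; takes hours
      smartctl -l selftest /dev/sda       # results of previous self-tests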

    Read the article

  • Could not upload file to network mapped drive using asp.net/vb.net

    - by Hasan
    I have tried several times to upload a file to a mapped network drive, but it raises an exception: Could not find a part of the path 'X:\test\testing.wav'. I have read through various internet, blog and Microsoft help sites, but I still don't know what is wrong. Does anyone know what is causing this problem and how I can correct it? It works fine when I upload to a local drive as a test. It also works when I run the code from the development server, but if I try it with the published code, it fails. :(
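    A likely explanation, offered as an assumption: drive letters like X: are mapped per user session, so the IIS application pool identity running the published site never sees them, while the development server runs as the logged-in user and does. A sketch in VB.NET using the UNC path instead (server, share and control names are placeholders; the app pool identity also needs write permission on the share and folder):

      ' code-behind of the upload page
      Protected Sub btnUpload_Click(sender As Object, e As EventArgs)
          ' use the UNC path rather than the mapped drive letter
          Dim uncPath As String = "\\fileserver\audio\test\testing.wav"
          If FileUpload1.HasFile Then
              FileUpload1.SaveAs(uncPath)
          End If
      End Sub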

    Read the article
