Search Results

Search found 47679 results on 1908 pages for 'web admin'.

Page 1048/1908 | < Previous Page | 1044 1045 1046 1047 1048 1049 1050 1051 1052 1053 1054 1055  | Next Page >

  • Configuration for httphandler in classic mode

    - by happyspider
    I have to install an httphandler that needs to run in classic mode. I have created an application in IIS that uses a classic app pool and put the handler assembly there. The vendor gave me a configuration in the deployment document that looks like this:

      <system.web>
        <globalization requestEncoding="iso-8859-1" responseEncoding="iso-8859-1" />
        <httpModules>
        </httpModules>
        <httpHandlers>
          <add verb="*" path="*" type="ProductName.ProductName, ProductName" />
        </httpHandlers>
      </system.web>
      <system.webServer>
        <validation validateIntegratedModeConfiguration="false"/>
        <handlers>
          <add name="someUnspecificName" path="*" verb="*" modules="IsapiModule"
               scriptProcessor="C:\Windows\Microsoft.NET\Framework\v2.0.50727\aspnet_isapi.dll"
               resourceType="Unspecified" requireAccess="None"
               preCondition="classicMode,runtimeVersionv2.0,bitness32" />
        </handlers>
      </system.webServer>

    The error I get when requesting a URL on the application is a 404, so I guess the handler is not being used at all. Does the configuration look OK for a 64-bit system?
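
    One thing worth checking, offered as a hedged guess rather than a confirmed fix: the preCondition="...bitness32" and the 32-bit aspnet_isapi.dll path mean the mapping only applies to a 32-bit worker process. If the app pool on the 64-bit server does not have "Enable 32-Bit Applications" turned on, the handler entry would need the 64-bit equivalents; a sketch (the name and type values are placeholders copied from the vendor config, not verified):

      <handlers>
        <add name="someUnspecificName" path="*" verb="*" modules="IsapiModule"
             scriptProcessor="C:\Windows\Microsoft.NET\Framework64\v2.0.50727\aspnet_isapi.dll"
             resourceType="Unspecified" requireAccess="None"
             preCondition="classicMode,runtimeVersionv2.0,bitness64" />
      </handlers>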

    Read the article

  • What steps should I take to debug this non-starting hvm virtual machine?

    - by Ophidian
    I have a dom0 machine running CentOS 5.4 with all the latest updates, using Xen as my hypervisor. I am using Xen in part because this machine was set up prior to KVM being included in RHEL, and in part because KVM's network bridging configuration is not nearly as simple as Xen's. The dom0 machine is headless and I do all of my VM management via virsh from the command line. I have two hvm domUs: a web server running CentOS 5.4 and a mail server running Gentoo. Both VMs are backed by LVs on the dom0 but do not use LVM in the domU. Both have virtually identical libvirt configurations (differing by expected things like name, UUID, NIC MAC, VNC port, etc.). The web server domU (WSdomU hereafter) does not start since applying the most recent kernel update (kernel-xen-2.6.18-164.15.1.el5.x86_64 and kernel-2.6.18-164.15.1.el5.x86_64 for the dom0 and WSdomU respectively). By 'not start' I mean it appears to be running but it does not use any CPU cycles, does not bring up a graphical console, and does not respond on the network. The WSdomU is listed as "no state" rather than the normal "running" or "blocked" in xentop. The mail server domU starts fine and functions normally. Here are the steps I have taken so far that did not solve the problem:

      - Reboot the dom0 to see if things come up on their own
      - Check xen dmesg on dom0
      - Check xend logs (a cursory viewing did not show anything blatant; specific suggestions of things to look for would be appreciated)
      - Attempt to connect to the WSdomU's graphical (VNC) console from the dom0
      - Shut down the mail server domU and attempt to start the WSdomU
      - Check the SELinux labels on the backing LVs (they're the same)
      - Set SELinux to permissive and attempt to start the WSdomU
      - Use virsh edit to try tweaking the WSdomU config
      - virsh undefine, reboot, virsh define the WSdomU config
      - dd the WSdomU LV to an .img file, copy it to my Fedora desktop and run it under KVM (works fine)

    What steps should I take next to debug this? I will edit in any additional configurations requested in the comments.
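
    Not an answer from the original thread, just a hedged sketch of additional data gathering that usually helps with a domU stuck in "no state" (the domain name WSdomU is taken from the question; log paths assume a stock CentOS 5 Xen install):

      # watch the hypervisor daemon logs while starting the guest
      tail -f /var/log/xen/xend.log /var/log/xen/xend-debug.log &
      virsh start WSdomU

      # hypervisor-level view of the domain and the hypervisor's own boot/error messages
      xm list
      xm dmesg | tail -50

      # dump the effective libvirt config to diff against the working mail server domU
      virsh dumpxml WSdomU > /tmp/wsdomu.xml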

    Read the article

  • What's the best way to run Drupal and Django sites behind the same Varnish server?

    - by Alexis Bellido
    I have a high-traffic website running with Drupal and Apache, five web servers behind a Varnish server doing the load balancing. Let's say this site is example.com. I'm using five backends and a director like this in my default.vcl:

      director balancer round-robin {
        { .backend = web1; }
        { .backend = web2; }
        { .backend = web3; }
        { .backend = web4; }
        { .backend = web5; }
      }

    Now I'm working on a new Django project that will be a new section of this site running on example.com/new-section. After checking the documentation I found I can do something like this:

      sub vcl_recv {
        if (req.url ~ "^/new-section/") {
          set req.backend = newbackend;
        } else {
          set req.backend = default;
        }
      }

    That is, using a different backend for a subdirectory /new-section under the same domain. My question is, how do I make something like this work with my director and load balancing setup? I'm probably going to run two or more web servers (backends) with my new Django project, each one with a mix of Gunicorn, Nginx, and a few Python packages, and would like to put all of those in their own Varnish director to load balance. Is it possible to use the above approach to decide which director to use, like this:

      sub vcl_recv {
        if (req.url ~ "^/new-section/") {
          set req.director = newdirector;
        } else {
          set req.director = balancer;
        }
      }

    All suggestions welcome. Thanks!
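
    For what it's worth, in the Varnish VCL versions of that era a director is addressed exactly like a backend, so there is no req.director variable to set. A hedged sketch of the idea (the director and backend names for the Django pool are made up for illustration):

      director newdirector round-robin {
        { .backend = django1; }
        { .backend = django2; }
      }

      sub vcl_recv {
        if (req.url ~ "^/new-section/") {
          set req.backend = newdirector;
        } else {
          set req.backend = balancer;
        }
      }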

    Read the article

  • Do any well-known CAs issue Elliptic Curve certificates?

    - by erickson
    Background I've seen that Comodo has an elliptic curve root ("COMODO ECC Certification Authority"), but I don't see mention of EC certificates on their web site. Does Certicom have intellectual property rights that prevent other issuers from offering EC certificates? Does a widely-used browser fail to support ECC? Is ECC a bad fit for traditional PKI use like web server authentication? Or is there just no demand for it? I'm interested in switching to elliptic curve because of the NSA Suite B recommendation. But it doesn't seem practical for many applications. Bounty Criteria To claim the bounty, an answer must provide a link to a page or pages at a well-known CA's website that describes the ECC certificate options they offer, prices, and how to purchase one. In this context, "well-known" means that the proper root certificate must be included by default in Firefox 3.5 and IE 8. If multiple qualifying answers are provided (one can hope!), the one with the cheapest certificate from a ubiquitous CA will win the bounty. If that doesn't eliminate any ties (still hoping!), I'll have to choose an answer at my discretion. Remember, someone always claims at least half of the bounty, so please give it a shot even if you don't have all the answers.

    Read the article

  • All virtualhosts serving Apache default files

    - by tj111
    I'm trying to configure Apache as an in-network webserver, and am using the sites-available/sites-enabled feature as opposed to just static vhost files. I set up a couple of VirtualHosts, all with a unique DocumentRoot, however requests for all the VirtualHosts just serve up the "It's Working!" default file. I can't for the life of me figure out why it won't serve the content out of the correct directory. Here are the contents of the virtualhost directive files; let me know if I need to post more.

    default (note that apache renames this to 000-default in sites-enabled, so it's not an ordering issue):

      NameVirtualHost *:80
      ServerName emp
      <VirtualHost *:80>
        ServerAdmin webmaster@localhost
        ServerName emp
        DocumentRoot /var/www
        <Directory />
          Options FollowSymLinks
          AllowOverride None
        </Directory>
        <Directory /var/www/>
          Options Indexes FollowSymLinks MultiViews
          AllowOverride None
          Order allow,deny
          allow from all
        </Directory>
        ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
        <Directory "/usr/lib/cgi-bin">
          AllowOverride None
          Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
          Order allow,deny
          Allow from all
        </Directory>
        ErrorLog /var/log/apache2/error.log
        # Possible values include: debug, info, notice, warn, error, crit,
        # alert, emerg.
        LogLevel warn
        CustomLog /var/log/apache2/access.log combined
        Alias /doc/ "/usr/share/doc/"
        <Directory "/usr/share/doc/">
          Options Indexes MultiViews FollowSymLinks
          AllowOverride None
          Order deny,allow
          Deny from all
          Allow from 127.0.0.0/255.0.0.0 ::1/128
        </Directory>
      </VirtualHost>

    billmed:

      <VirtualHost *:80>
        ServerName billmed.emp
        ServerRoot /home/empression/Projects/billmed/web/httpdocs
        <Directory "/home/empression/Projects/billmed/web/httpdocs">
          Order Allow,Deny
          Allow from All
        </Directory>
      </VirtualHost>

    Note that I have DNS zones for both emp and billmed.emp, as well as entries in /etc/hosts. My ultimate goal is to set up this machine as an in-house webserver with a custom TLD (emp), but progress has been pretty slow.
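
    One hedged observation rather than a confirmed diagnosis: ServerRoot is a server-level directive for Apache's installation directory, not a per-vhost document root, so the billmed vhost as quoted never gets a DocumentRoot of its own and the site falls through to the default content. A sketch of the billmed file using DocumentRoot instead (paths copied from the question, Directory options illustrative):

      <VirtualHost *:80>
        ServerName billmed.emp
        DocumentRoot /home/empression/Projects/billmed/web/httpdocs
        <Directory "/home/empression/Projects/billmed/web/httpdocs">
          Options Indexes FollowSymLinks
          AllowOverride None
          Order Allow,Deny
          Allow from All
        </Directory>
      </VirtualHost>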

    Read the article

  • Why do lighttpd and fastcgi keep sending me the *.scgi file instead of the website content?

    - by e-satis
    I have the following config:

      server.modules = (
        "mod_compress",
        "mod_access",
        "mod_alias",
        "mod_rewrite",
        "mod_redirect",
        "mod_secdownload",
        "mod_h264_streaming",
        "mod_flv_streaming",
        "mod_accesslog",
        "mod_auth",
        "mod_status",
        "mod_expire",
        "mod_fastcgi"
      )

      [...]

      fastcgi.server = (
        ".php" => ((
          "bin-path" => "/usr/bin/php-cgi",
          "socket" => "/var/tmp/lighttpd/php-fastcgi.socket" + var.PID,
          "max-procs" => 1,
          "kill-signal" => 9,
          "idle-timeout" => 10,
          "bin-environment" => (
            "PHP_FCGI_CHILDREN" => "200",
            "PHP_FCGI_MAX_REQUESTS" => "1000"
          ),
          "/pyapps/essai/blondes.fcgi" => (
            "main" => (
              "socket" => "/var/tmp/lighttpd/django-fastcgi.socket",
            ),
          ),
          "bin-copy-environment" => ( "PATH", "SHELL", "USER" ),
          "broken-scriptfilename" => "enable"
        ))
      )

      [...]

      $HTTP["host"] =~ "(^|www\.)cam\.com(\:[0-9]*)?$" {
        server.document-root = "/home/cam/web/"
        accesslog.filename = "/home/cam/log/access.log"
        server.errorlog = "/home/cam/log/error.log"
        server.follow-symlink = "enable"
        # files to check for if .../ is requested
        server.indexfiles = ( "index.php", "index.html", "index.htm", "index.rb")
        url.rewrite = ( "^(/blondes/.*)$" => "/pyapps/essai/blondes.fcgi$1" )
      }

    I have the following dir tree:

      /home/tv/web/
      `-- pyapps
          `-- essai
              `-- __init__.py
              `-- blondes.fcgi
              `-- blondes.pid
              `-- django-fcgi.py
              `-- manage.py
              `-- manage.pyo
              `-- plop
              `-- settings.py
              `-- urls.py

    There are no errors when restarting lighttpd. Then I run:

      ./manage.py runfcgi method=prefork socket=/var/tmp/lighttpd/django-fastcgi.socket daemonize=false pidfile=blondes.pid

    No errors either. I then go to http://cam.com/blondes/. It offers me an empty file to download. I checked permissions, but everything is set to the same user and group, and they work for the PHP site. The file /var/tmp/lighttpd/django-fastcgi.socket exists. When I reload the page, I get no output in the error logs, nor from the manage.py runfcgi command. I probably missed something obvious, but what?
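
    A hedged guess at the cause, based only on the config quoted above: the "/pyapps/essai/blondes.fcgi" entry is nested inside the ".php" handler block, so lighttpd never associates that path with the Django socket and just serves the rewritten file as a download. A sketch with the Django entry as its own top-level key in fastcgi.server (socket paths taken from the question, the PHP entry abbreviated):

      fastcgi.server = (
        ".php" => ((
          "bin-path" => "/usr/bin/php-cgi",
          "socket" => "/var/tmp/lighttpd/php-fastcgi.socket" + var.PID
        )),
        "/pyapps/essai/blondes.fcgi" => ((
          "socket" => "/var/tmp/lighttpd/django-fastcgi.socket",
          "check-local" => "disable"
        ))
      )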

    Read the article

  • What does Apache's "Require all granted" really do?

    - by John Crawford
    I've just updated my Apache server to Apache/2.4.6, which is running under Ubuntu 13.04. I used to have a vhost file that had the following:

      <Directory "/home/john/development/foobar/web">
        AllowOverride All
      </Directory>

    But when I ran that I got a "Forbidden. You don't have permission to access /". After doing a little bit of googling I found out that to get my site working again I needed to add the line "Require all granted", so that my vhost looked like this:

      <Directory "/home/john/development/foobar/web">
        AllowOverride All
        Require all granted
      </Directory>

    I want to know if this is "safe" and does not bring in any security issues. I read on Apache's page that this "mimics the functionality that was previously provided by the 'Allow from all' and 'Deny from all' directives. This provider can take one of two arguments which are 'granted' or 'denied'. The following examples will grant or deny access to all requests." But it didn't say if this was a security issue of some sort, or why we now have to do it when in the past you did not have to.
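
    As a hedged illustration of what the new directive family does (not a statement about this particular site's security posture): in Apache 2.4 the mod_authz_core "Require" directives replace the old Order/Allow/Deny logic, so "Require all granted" is simply the 2.4 spelling of "Allow from all". The same mechanism can restrict access instead; for example, with made-up address ranges:

      <Directory "/home/john/development/foobar/web">
        AllowOverride All
        # allow only the local network and localhost (example ranges)
        Require ip 192.168.1.0/24
        Require ip 127.0.0.1
      </Directory>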

    Read the article

  • Configure one IIS site to handle two separate SSL certificates using external Load Balancing or SSL Acceleration Servers

    - by bmccleary
    I have one web application on our server that needs to be referenced by two different domain names, both of which have their own SSL certificates. The application is exactly the same for both domains, but we have to keep the two domain names for legal reasons. The problem is that, since both domains need their own SSL certificate, inside our IIS 7.5 configuration we have to have two separate IIS applications (both pointing to the same physical location), each with its own unique IP address and SSL certificate installed. Now, I know that, due to the nature of SSL communications, this is by design and you can't assign more than one SSL certificate per IP address and domain name. My question is: is there any way around this limitation to keep one web application in IIS and have it service two SSL certificates based on host name? I know that with the basic IIS configuration this is not possible, but I was thinking that with some combination of external load balancing and/or SSL acceleration servers/services we could have those servers process the SSL requests and leave IIS clean with one single application. I am not familiar at all with these technologies, hence the reason I am asking if it is theoretically possible. If not, does anyone else know how to achieve this?

    Read the article

  • Ports only available from the outside network

    - by ChrisJ
    This is a counter-intuitive problem for me. I have a new Win 2003 server on a static IP address w.x.y.z. Tomcat 7, PostgreSQL 9.1, and Subversion are installed. All of it appears to be working fine from the server itself. We can also access the Tomcat manager, web applications, and run "svn ls svn://w.x.y.z/" from outside our network. However, when I try from another machine in the office, phpPgAdmin and svn cannot establish connections with the server. http://w.x.y.z:5432/phppgadmin cannot connect. The svn command from above returns:

      svn: E730061: Unable to connect to a repository at URL 'svn://w.x.y.z/'
      svn: E730061: Can't connect to host 'w.x.y.z': No connection could be made because the target machine actively refused it.

    Tomcat manager and the other web apps we have deployed work fine. Netstat -a from the server shows this:

      Proto  Local Address    Foreign Address  State
      TCP    SERVERNAME:3690  SERVERNAME:0     LISTENING
      TCP    SERVERNAME:5432  SERVERNAME:0     LISTENING

    Windows Firewall was off, but just in case I also tried to enable it and open ports 3690 (svn) and 5432 (postgres). No change. I don't have access to the router/switch because it just doesn't work that way in Port-au-Prince and our sysadmin is on R&R. Is there anything that might be causing the problem from the server side?

    Read the article

  • How do I secure SQL Server 2008 R2?

    - by Mark Tait
    I have both a dedicated and a VPS (from Fasthosts) virtual server - the web sites/applications I run on these access Sql Server stored on the same web server. Until now, I have logged onto Sql Server on both the dedicated and VPS server from Sql Server Management Studio - until I noticed in my server application logs multiple attempts to log on to Sql Server using the 'sa' username, but with a failed password. So someone/some bot is trying hard (repeatedly every couple of hours, for approx 20 attempts during each instance) to log on... so obviously I have to lock down access to Sql Server remotely. What I have done is gone into Configuration Manager, and in Sql Server Network Configuration - Protocols for Sql2008 and also in Sql Native Client 10.0 Configuration - Client Protocols - I have disabled Named Pipes and TCP/IP (and VIA by default). I have left Shared Memory enabled. I also disabled, in Sql Server Services, the Sql Server Browser. Now the only way I can manage the databases on these servers is by logging on to them via Remote Desktop. Can anyone confirm if this is the correct way of stopping anyone maliciously logging on to Sql Server? (I'm not a DBA or security expert - and there are hundreds of articles advising all different ways - but I was hoping for the experts here to confirm, or otherwise, if what I've done is correct.) Thank you, Mark
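
    Not a confirmation of the approach above, but a hedged complementary step that is often recommended once the remote protocols are off: the brute-force attempts all target the 'sa' login, so disabling or renaming it (after making sure another sysadmin login exists) removes the obvious target. A sketch in T-SQL, with the replacement login name purely illustrative:

      -- run as a sysadmin login other than sa
      ALTER LOGIN sa DISABLE;
      -- or keep it enabled but under a non-obvious name
      ALTER LOGIN sa WITH NAME = [srv_admin_2008];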

    Read the article

  • Upgrading MySQL Connector/Net

    - by Todd Grover
    I am trying to publish a website with our hosting provider. I am getting an error due to the fact that they only allow medium trust, and the MySQL Connector/Net that I am using requires reflection to work. Unfortunately, reflection is not allowed in medium trust. After some research I found out that the newest version of the MySQL Connector/Net may solve this problem: Connector/Net 6.6 includes enhancements to partial trust support to allow hosting services to deploy applications without installing the Connector/Net library in the GAC. I am thinking that will solve my problem. So, I uninstalled MySQL Connector/Net 6.4.4 and installed MySQL Connector/Net 6.6.4. When I run the application in Visual Studio 2010 I get the error:

      ProviderIncompatibleException was unhandled by user code

    The message is "An error occurred while getting provider information from the database. This can be caused by Entity Framework using an incorrect connection string. Check the inner exceptions for details and ensure that the connection string is correct." The InnerException is "The provider did not return a ProviderManifestToken string." Everything works fine when I have Connector/Net 6.4.4 installed: I can access the database and perform Read/Write/Delete actions against it. I have a reference to the following in the project:

      MySql.Data
      MySql.Data.Entity
      MySql.Web

    My connection string in Web.config:

      <connectionStrings>
        <add name="AESSmartEntities"
             connectionString="server=ec2-xxx-xx-xxx-xx.compute-1.amazonaws.com; user=root; database=nunya; port=3306; password=xxxxxxx;"
             providerName="MySql.Data.MySqlClient" />
      </connectionStrings>

    What might I be doing wrong? Do I need any additional setting(s) to work with version 6.6.4 that weren't required in the older version 6.4.4?
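
    A hedged guess at a missing piece rather than a verified fix: when Connector/Net is not installed in the GAC, the ADO.NET provider usually has to be registered in the application's own Web.config so Entity Framework can resolve MySql.Data.MySqlClient. A sketch of that registration (the Version attribute must match the MySql.Data.dll actually deployed in bin; 6.6.4.0 here is an assumption):

      <system.data>
        <DbProviderFactories>
          <remove invariant="MySql.Data.MySqlClient" />
          <add name="MySQL Data Provider"
               invariant="MySql.Data.MySqlClient"
               description=".Net Framework Data Provider for MySQL"
               type="MySql.Data.MySqlClient.MySqlClientFactory, MySql.Data, Version=6.6.4.0, Culture=neutral, PublicKeyToken=c5687fc88969c44d" />
        </DbProviderFactories>
      </system.data>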

    Read the article

  • Windows 2012 RDS Temporary profile for Administrator

    - by Fabio
    I've configured a Windows 2012 RDS farm with two virtual servers (VMware - each one on a different ESX server). Both servers have the Licensing, Web Access, Gateway, Connection Broker and Session Host roles. High Availability is set up and it works fine. RemoteApps are working and even Windows XP clients have access to the web interface. The user profile path is \\vmfiles1\UserProfileDisks\App\ and almost everyone has full rights to it. The problem I have is that I would like to be able to access both servers at the same time with the Administrator account (console), but each time I try, the second server that I log on to gives me access with a temporary profile. I tried to enable/disable multiple sessions per user and forced admin logoff with the GPO, but nothing changed. Another thing is that the server pool is not saved, so each time I restart the RDS server or log off from it, I have to add a server in Server Manager. Do you have any idea? Sorry if my English is not perfect.

    Read the article

  • IIS and ASP.NET

    - by sam
    I'm trying to add the ASP.NET feature on Windows 7. I tried to turn it on using "Turn Windows features on or off", but it fails every time, so I downloaded the Web Platform Installer and tried it that way, and it fails also. Next I uninstalled .NET Framework 4, restarted, reinstalled it, and tried the previous steps again, but it fails the same way. I need this installed so I can view the site in IIS 7. Does anyone know what I can do to get this working? I've searched and searched and everything fails. I get this error in the Web Platform Installer:

      Failed with 0x80070643 – Fatal Error during installation

    Please help, I can't do my work without it working :( OK, I did a few things and now get this error:

      Server Error in '/pulse' Application.
      Parser Error
      Description: An error occurred during the parsing of a resource required to service this request. Please review the following specific parse error details and modify your source file appropriately.
      Parser Error Message: Could not load type 'pulsesite.MvcApplication'.
      Source Error:
      Line 1: <%@ Application Codebehind="Global.asax.vb" Inherits="pulsesite.MvcApplication" Language="VB" %>
      Source File: /pulse/global.asax    Line: 1
      Version Information: Microsoft .NET Framework Version:4.0.30319; ASP.NET Version:4.0.30319.1

    I know it's just about changing the code, but I'm not good with C#. Does anyone know how?
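
    A hedged suggestion for the first half of the problem only (registering ASP.NET 4 with IIS on Windows 7, where aspnet_regiis is still a supported tool); run from an elevated command prompt, and use the Framework directory without "64" on a 32-bit machine:

      cd %windir%\Microsoft.NET\Framework64\v4.0.30319
      aspnet_regiis.exe -i
      iisreset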

    Read the article

  • EC2: is an instance's public DNS stable? Can I rely on it not changing?

    - by Aseem Kishore
    I'm new to Amazon EC2. I've launched my first instance, and am using it as a web server. I see that it has a public DNS (a public URL), e.g.: ec2-123-45-6-789.compute-1.amazonaws.com I can successfully go to this server in my browser, hit it via cURL, etc. I want to use this web server for a back-end service in an app I'm building, so I placed this URL in my app's config, and it works great. But when I manually stop and re-started my instance, I see that the public DNS changes! I've read that this happens when you explicitly stop and re-start, but doesn't happen if you just "reboot". I don't plan on explicitly stopping and re-starting this server ever, but my question is: will this public DNS ever change on its own for any reason? E.g. if the machine abnormally crashes, or whatever. In other words, is it safe to ship an app that's wired to this URL? Thanks!
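
    A hedged note on the underlying behaviour rather than a guarantee: the public DNS name maps to the instance's public IP, which Amazon documents as persisting across reboots but not across a stop/start (and not if the instance has to be replaced after a failure). The usual way to get a stable endpoint is an Elastic IP, which you then bake into the app config instead. A sketch using the current AWS CLI (the instance ID and address are placeholders, and the original question predates this CLI, so the older ec2-api-tools equivalents may apply instead):

      aws ec2 allocate-address
      aws ec2 associate-address --instance-id i-0123456789abcdef0 --public-ip 203.0.113.25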

    Read the article

  • Virus on site but can't find where

    - by Rob
    WARNING! THIS IS ABOUT A VIRUS ON MY SITE. IT APPEARS IT HAS BEEN THERE FOR SOME TIME AND I'VE HAD NO PROBLEMS, BUT PLEASE BE CAREFUL. READ EVERYTHING I SAY AND SEE IF YOU CAN HELP ME WITHOUT VISITING THE LINK. AVG PICKS UP ON IT AND BLOCKS IT; MCAFEE DOES NOT. Sorry about the warning; obviously I'm not here to get anyone infected or anything like that. Basically I run the website sortitoutsi dot net. Ages ago I got a virus on my computer, they got hold of my FTP passwords and added some lines of javascript to the top of my site. I removed them and believed it was fixed. However, I'm using the "Web Developer" extension for Firefox, and when I chose to view all javascript on my page I found there are various links to horrible URLs such as:

      gittigidiyor-com.excite.co.jp.webmasterworld-com.eastmusicdirect.ru:8080/aboutus.org/aboutus.org/google.com/skycn.com/torrents.ru.php
      gittigidiyor-com.excite.co.jp.webmasterworld-com.eastmusicdirect.ru:8080/index.php?jl=

    These terms do not appear anywhere in the source code, in any of the javascript, or in the CSS. I also can't see any rogue images that I don't recognise. So I've no idea where this javascript is coming from. Can anyone suggest how I can find references to these links and remove them? I can see them both in the Web Developer Firefox extension and in the Net tab using Firebug. Any help would be greatly appreciated.
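
    A hedged starting point for tracking the injection down server-side (assuming shell access to the document root; the document-root path is a placeholder and the domain fragment is taken from the question):

      # search every file under the web root for the injected domain
      grep -rn "eastmusicdirect" /path/to/docroot

      # injected code often hides in recently modified or obfuscated PHP files
      find /path/to/docroot -name "*.php" -mtime -30 -print
      grep -rln "eval(base64_decode" /path/to/docroot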

    Read the article

  • MySQL extension of PHP not working

    - by Víctor
    On a Debian server, after installation and removal of SquirrelMail (with some downgrading and upgrading of php5, mysql...), the MySQL extension of PHP has stopped working. I have php5-mysql installed, and when I try to connect to a database through php-cli I connect successfully, but when I try to connect from a web page served by Apache I cannot connect. This script, run by php5-cli:

      echo phpinfo();
      $link = mysql_connect('localhost', 'user', 'password');
      if (!$link) {
          die('Could not connect: ' . mysql_error());
      }
      echo 'Connected successfully';
      mysql_close($link);

    prints the phpinfo output, which includes "/etc/php5/cli/conf.d/mysql.ini" and also the MySQL section with all the configuration (SOCKET, LIBS...), and then it prints "Connected successfully". But when run by Apache and accessed from a web browser, it displays the phpinfo output, which includes "/etc/php5/apache2/conf.d/mysql.ini", but has the MySQL section missing, and the script dies printing "Fatal error: Call to undefined function mysql_connect()". Note that both "/etc/php5/cli/conf.d/mysql.ini" and "/etc/php5/apache2/conf.d/mysql.ini" are in fact the same configuration, because in Debian I have the structure:

      /etc/php5/apache2
      /etc/php5/cgi
      /etc/php5/cli
      /etc/php5/conf.d

    And both point at the same directory:

      /etc/php5/apache2/conf.d -> ../conf.d
      /etc/php5/cli -> ../conf.d

    Where /etc/php5/conf.d/mysql.ini consists of one line:

      extension=mysql.so

    So my question is: why is the MySQL extension for PHP not working if I have the configuration included in just the same way as in php-cli, which is working? Thanks a lot!
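
    A hedged checklist of the kind of things that usually explain a CLI/Apache split like this (commands assume Debian's standard packaging; nothing here is taken from the original thread):

      # confirm the Apache PHP module and the extension package survived the up/downgrades
      dpkg -l libapache2-mod-php5 php5-cli php5-mysql

      # the CLI SAPI clearly loads the extension; see what Apache's SAPI complains about on startup
      /etc/init.d/apache2 restart
      tail -50 /var/log/apache2/error.log

      # a typical culprit after mixed versions: mysql.so built against a different PHP API version
      php -i | grep "PHP API"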

    Read the article

  • White Screen, No Errors.

    - by GruffTech
    So... Interesting problem for you guys, as I'm completely lost as to what to do or where to take the next step.

    Server & Application Environment:

      CentOS release 5.3 (Final)
      Apache 2.2.3-22
        EnableSendfile off
        EnableMMAP off
        ErrorLog logs/error_log
        LogLevel debug
      PHP-5.2.6-2
        error_reporting = E_ALL
        display_errors = on
        log_errors = on
        max_execution_time=300
        max_input_time=60
        memory_limit=512mb
      Kohana 2.3 PHP Environment
      HAProxy 1.3.15.6-2
      MemCacheD 1.2.6-1

    Our application is split between 3 web servers, mounting an NFS storage server, with sticky load balancing between the 3 web servers. The application seemingly runs great, but every so often, instead of loading, the application just shows a pure white page. Not a 404 error or a 500 server error - a clean white page. And it returns instantly, so it's not an execution-time error. Nothing in the error log or server-error log; the proxy log shows a standard proxied connection, just the standard 200 status in the access log, with 256 bytes transferred. To me, this leads me to believe the application itself is having a problem: a rare, unexplainable, seemingly random problem that causes what we've now called the "White Screen of Death." Our developers all say that since there is nothing going to our error logs, it must be a server problem. But I say the same thing: there's nothing going to ANY of our logs (relevant to this anyway), and we're not having httpd children crash from what I can tell. Any ideas on how I can increase my logging, or somehow prove that it's not a bug in PHP, Apache, CentOS, etc.? Or if it is somehow a bug, identify it?
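
    A hedged way to catch whatever is killing those requests, since display_errors output can be swallowed by output buffering and some fatal errors never reach the Apache log: prepend a tiny shutdown hook to every request (error_get_last is available in PHP 5.2). The file name and log path below are illustrative only.

      ; php.ini, or a php_value in the vhost
      auto_prepend_file = /etc/php.d/wsod_catcher.php

      <?php
      // wsod_catcher.php - log the last fatal error at request shutdown (PHP 5.2 compatible)
      function wsod_log_last_error() {
          $e = error_get_last();
          if ($e !== null) {
              error_log("WSOD {$e['file']}:{$e['line']} {$e['message']}\n", 3, '/var/log/php_wsod.log');
          }
      }
      register_shutdown_function('wsod_log_last_error');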

    Read the article

  • Blocking HTTPS and P2P Traffic

    - by Genboy
    I have a Debian server running at the gateway level on a LAN. This runs squid for creating block lists of websites - e.g. blocking social networking on the LAN. It also uses iptables. I am able to do a lot of things with squid & iptables, but a few things seem difficult to achieve.

    1) If I block facebook through their http url, people can still access https://www.facebook.com because squid doesn't proxy https traffic by default. However, if the users set the gateway IP address as the proxy in their web browser, then https is also blocked. So I can do one thing - using iptables, drop all outgoing 443 traffic, so that people are forced to set the proxy in their browser in order to browse any HTTPS site. However, is there a better solution for this? (A rough sketch of this idea follows below.)

    2) As the number of blocked urls increases in squid, I am planning to integrate squidguard. However, the good squidguard lists are not free for commercial use. Does anyone know of a good squidguard list which is free?

    3) Blocking Yahoo Messenger, GTalk etc. There are so many ports on which these instant messenger programs work. You need to drop lots of outgoing ports in iptables. However, new ports get added, so you have to keep adding them. And even if your list of ports is current, people can still use the web version of GTalk etc.

    4) Blocking P2P. I haven't been able to figure out how to do this yet.
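
    For item 1, a hedged iptables sketch of the "force HTTPS through the proxy" idea described above (the LAN interface name and subnet are assumptions; squid on port 3128 is the Debian default):

      # drop HTTPS that tries to leave the LAN directly, forcing browsers to use the proxy setting
      iptables -A FORWARD -p tcp --dport 443 -s 192.168.0.0/24 -j DROP
      # transparently push plain HTTP through squid so the URL block lists still apply
      iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-port 3128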

    Read the article

  • IIS replication - is it possible?

    - by Ian
    Hi all, I have a requirement from a client for a centralised system that all his satellite branches can work on. Currently this is an ASP.NET Web Forms app running under IIS 7 on Win 2008 R2, using a SQL backend. The client has now requested that each branch have a local server, so that in the event that the internet connection is down, the branch's productivity does not suffer. His other request is that everything can be updated via the central hub, and that using some mechanism the updates filter down to the individual sites. What are my options here? I see the following as possible options:

      - Multiple redundant internet connections controlled by load balancers
      - SQL replication for the DB (what is better: snapshot, merge or transactional?)
      - Roll my own IIS sync service that periodically checks if there is a new version of the web app and downloads it (I hope there are better options than this)
      - Something way better I don't yet know about (I hope this is the one I need)

    One of my client's concerns is that the branches are often in very remote areas where everything from technicians to internet is hard to find and very scarce. Any ideas, suggestions, tips etc. are welcome. Thanks all.

    Read the article

  • CSS and JS files not being updated, supposedly because of Nginx Caching

    - by Alberto Elias
    I have my web app working with AppCache, and I would like that when I modify my html/css/js files and then update my cache manifest, users accessing my web app get an updated version of those files. If I change an HTML file, it works perfectly, but when I change CSS and JS files, the old version is still being used. I've been checking everything and I think it's related to my nginx configuration. I have a cache.conf file that contains the following:

      gzip on;
      gzip_types text/css application/x-javascript text/x-component text/richtext image/svg+xml text/plain text/xsd text/xsl text/xml image/x-icon;

      location ~ \.(css|js|htc)$ {
        expires 31536000s;
        add_header Pragma "public";
        add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate";
      }

      location ~ \.(html|htm|rtf|rtx|svg|svgz|txt|xsd|xsl|xml)$ {
        expires 3600s;
        add_header Pragma "public";
        add_header Cache-Control "max-age=3600, public, must-revalidate, proxy-revalidate";
      }

    And in default.conf I have my locations. I would like to have this caching working on all locations except one; how could I configure this? I've tried the following and it isn't working:

      location /dir1/dir2/ {
        root /var/www/dir1;
        add_header Pragma "no-cache";
        add_header Cache-Control "private";
        expires off;
      }

    Thanks
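
    A hedged explanation of why the attempt above may have no effect: in nginx, regular-expression locations (like the ~ \.(css|js|htc)$ block) are checked after prefix locations and win unless the prefix location is marked ^~, so .css and .js files under /dir1/dir2/ still fall into the long-expiry block. A sketch of the exception location with that modifier (paths copied from the question):

      location ^~ /dir1/dir2/ {
        root /var/www/dir1;
        add_header Pragma "no-cache";
        add_header Cache-Control "private, no-cache, must-revalidate";
        expires off;
      }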

    Read the article

  • Can't access port 80 externally

    - by dewacorp.alliances
    Hi there. I have a configuration like this:

      NETGEAR MODEM -> LINKSYS ROUTER -> SERVERS

    In the modem, I've set it up as bridging and all the traffic is controlled by the ROUTER. Prior to this setup, I could access the website from external (port 80) plus the Exchange server (mail) and https. But now with this configuration, I can only send/receive using the Exchange server and access OWA (Outlook Web Access using port 443)... and no internal websites from outside. This is my config for the LINKSYS ROUTER:

      Application  | Start | End | Protocol       | IP Address
      Ms Exchange  | 25    | 25  | Both (TCP/UDP) | 192.168.100.8
      Internets    | 80    | 80  | Both (TCP/UDP) | 192.168.100.11
      SSL          | 443   | 443 | Both (TCP/UDP) | 192.168.100.8
      Exchange     | 110   | 110 | Both (TCP/UDP) | 192.168.100.8

    192.168.100.11 is an UBUNTU web server running Apache, which handles the virtual names (extranet, cms, test) and redirects them to the different servers. As you can see, the home internet is only allowing the public IP address. Now, when I test this scenario on the internal network it works nicely. For instance, if I type in extranet.XXX.local it goes to the right application, or if I try CMS.XXX.local it again goes to the right one. I also asked the ISP, just in case they are blocking inbound port 80 for some unknown reason. They said no. So I don't understand why this happens. I suspect the configuration I have between the MODEM and ROUTER, but I couldn't work out what it is. I don't have documentation of the previous settings and I don't know if there is another port that I need to open as well. I appreciate your comments.

    Read the article

  • Webapp in Jetty can't find properties file after running a couple of days

    - by Cuga
    I have a webapp running in Jetty on Mac OS 10.6. After a few days of it running, and without the server losing power or rebooting, it seems to stop working, saying it can't find a properties file. This properties file is included inside the .war file deployed to the /webapps directory. If I restart Jetty as the superuser the web service works again just fine. Can anyone lend any advice as to what's going on and how I can fix it? The error being shown when it isn't working is:

      Problem accessing /my-web-service. Reason:
          INTERNAL_SERVER_ERROR
      Caused by:
      java.lang.NullPointerException
          at com.company.service.Dao.readFromPropertiesFile(BwDao.java:35)
          at com.company.service.ServletHandler.doGet(ProxyClass.java:66)
          ...
          at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
          at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
          at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)

    The properties file it's trying to read does exist inside the .war file, and this is how the properties are being read from the classpath:

      Properties properties = new Properties();
      properties.load(Thread.currentThread().getContextClassLoader().getResourceAsStream(
              "app.properties"));

    Again, this does work just fine if I have just restarted the server, but it seems to fail after running a few days.
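
    A hedged guess at the cause, given the failure only appears after days of uptime: Jetty 6 (the org.mortbay classes in the trace) unpacks the .war into a temporary directory, and Mac OS X periodically purges old files from the system temp area, after which getResourceAsStream("app.properties") returns null and the DAO throws the NullPointerException. One commonly suggested workaround, assuming Jetty 6's convention that a directory named "work" alongside webapps is used for unpacking, is to give Jetty a persistent extraction directory:

      # create a persistent work directory so the unpacked war is not in the OS-managed temp area
      mkdir $JETTY_HOME/work
      # restart Jetty so the webapp is re-extracted there
      cd $JETTY_HOME && java -jar start.jar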

    Read the article

  • What kind of server attacks should I be aware of nowadays?

    - by Saif Bechan
    I have recently started running a web server, and there is a lot of information online, but it can all be a little confusing. I recently opened my logwatch logs and saw that I get attacked a lot by all sorts of bots. Now I am interested in a list of things I definitely should be aware of nowadays, and possible ways to prevent them. I have read stories about servers crashed by floods, crashed by email, and all sorts of crazy stuff. Things I have already done:

      - I have recently blocked all my ports, except for the http and email ports.
      - I disabled IPv6; this was giving me a lot of named errors.
      - I have turned on spam DNS blackhole lists to fight spam: sbl.spamhaus.org, zen.spamhaus.org, b.barracudacentral.org
      - I installed and configured mod_security2 on apache.
      - There is no remote access possible to my databases.

    That is all I did so far; beyond that, I am not aware of any other threats. I want to know if the following things have to be protected. Can I be flooded by emails? How can I prevent this? Can there be a break-in or flood of my databases? Are there things like http floods or whatever? Are there any other things I should know before I go public with my server? I also want to know if there is some kind of checklist with must-have security protections. I know the OWASP list for writing good web applications; is there something similar for configuring a server?

    Read the article

  • How can I avoid a few seconds of blank video when using -vcodec copy?

    - by arlomedia
    I'm processing user-uploaded videos on a CentOS web server with ffmpeg. I need to convert each video to a standard size and format, then extract a 30-second sample clip from each video. I want to use the "-vcodec copy" flag in the extraction command to avoid encoding a second time. This command works for my initial conversion:

      ffmpeg -i uploaded.mov -f mp4 -vcodec libx264 -vpre medium -acodec libfaac -r 15 -b 360k -ab 48k -ar 22050 -s 480x320 formatted.mp4

    And this sometimes works for the extraction:

      ffmpeg -i formatted.mp4 -vcodec copy -acodec copy -ss 0 -t 30 formatted_sample.mp4

    However, when I run the extraction command on some videos, the extracted sample clip starts with several seconds of blank video. The audio starts right away but the video doesn't start for 3-6 seconds. To demonstrate the problem, I've uploaded two video clips and run the above commands on them. I created the first clip in Final Cut Express and encoded it with Handbrake before uploading to the web server:

      1a) uploaded clip
      1b) converted with first command
      1c) extracted with second command, missing first six seconds

    By comparison, this second clip comes from Apple's website and does not show the problem:

      2a) uploaded clip
      2b) converted with first command
      2c) extracted with second command, no problem

    Can anyone see what's different about the two source clips? And if so, is there anything I can do in my conversion command so that when the extraction command runs, the clip is set up to avoid the missing video? By the way, I initially had the problem with ffmpeg 0.6.1 installed from yum, but I upgraded to the latest git version and the problem remains.
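
    A hedged explanation and workaround sketch, not a confirmed diagnosis: with -vcodec copy, ffmpeg can only start the extracted clip cleanly on a keyframe, so if the first encode produces long GOPs (x264's default keyframe interval is several seconds), players show nothing until the first keyframe arrives while the audio plays immediately. Forcing frequent keyframes in the conversion step makes any later copy-cut land close to a keyframe; for example, adding -g to the first command (a keyframe every 15 frames, i.e. once per second at -r 15, is an arbitrary choice):

      ffmpeg -i uploaded.mov -f mp4 -vcodec libx264 -vpre medium -g 15 -acodec libfaac -r 15 -b 360k -ab 48k -ar 22050 -s 480x320 formatted.mp4

    Newer ffmpeg builds also accept -force_key_frames with explicit timestamps, which would let you pin keyframes exactly at the intended cut points.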

    Read the article

  • Small store infrastructure - where to begin?

    - by KevinM1
    It looks like my older brother is about to change jobs - from lawyer to shooting range proprietor - and since I'm the family 'computer guy' I have the task of coming up with and setting up the in-store equipment. Only problem, I don't know how to start or where to look. I'm a web programmer, not an IT specialist. To that end, I figured I should ask the pros. Users: 3 (myself, my brother, and his business partner) Equipment: 1 Windows (likely 7) desktop for POS software, 1 Windows desktop/laptop for backroom use (bookkeeping, etc.) Other: ?? I'm looking for a reliable and, well, idiot-proof way to handle backups. Neither my brother nor his business partner are tech savvy (A web browser, email, MS Word and Excel are about the extent of their knowledge), so I need something they can handle. On-site would be preferable to off-site, given my brother's hesitance to have sensitive business data be handled by an outside source. I'm also looking for a small on-site server. I estimate that, at most, only 2-3 users will need access. A linux solution would keep costs down, but I'm concerned about Windows <- linux interoperability. Would the store security cameras' storage be handled by the security company, or would we have to stream that data to our own server? I know from my own experience with personal security that the company gives/loans a recording device to the home owner, but I'm not sure about business security. I know this sounds like a shopping list, and it's pretty vague. I wish I could give more detail, but between my own ignorance and things not being 100% nailed down on the business end, I'm a bit stuck. At the very least I'd like a nudge - links on a place to start, what to look for, things I need to think about, etc. - for this endeavor. Thanks.

    Read the article
