Search Results

Search found 21481 results on 860 pages for 'www yegorov p ru'.


  • FTP gives me an error when uploading and deleting files [on hold]

    - by AR Games
    Here's the error I get when trying to delete files:

      Command: DELE index.html
      Response: 550 Delete operation failed.

    Here's the error I get when trying to upload files:

      Command: OPTS UTF8 ON
      Response: 200 Always in UTF8 mode.
      Status: Connected
      Status: Starting upload of C:\wamp\www\.DS_Store
      Command: CWD /var/www/html
      Response: 250 Directory successfully changed.
      Command: TYPE A
      Response: 200 Switching to ASCII mode.
      Command: PASV
      Response: 227 Entering Passive Mode (76,185,76,101,78,222).
      Command: STOR .DS_Store
      Response: 553 Could not create file.
      Error: Critical file transfer error
      Status: Retrieving directory listing...
      Command: TYPE I
      Response: 200 Switching to Binary mode.
      Command: PASV
      Response: 227 Entering Passive Mode (76,185,76,101,23,94).
      Command: LIST
      Response: 150 Here comes the directory listing.
      Response: 226 Directory send OK.
      Status: Directory listing successful
      Response: 421 Timeout.
      Error: Connection closed by server
      Status: Disconnected from server

    I'm running Windows and using the FileZilla FTP client.
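
    Both the 550 on DELE and the 553 on STOR are the server refusing to write, so this is almost always a permissions or ownership problem on /var/www/html rather than a FileZilla setting. A minimal sketch of what to check over SSH, assuming a vsftpd-style server and an FTP account named ftpuser (the account name is an assumption, not from the question):

      # Who owns the web root, and can the FTP account write to it?
      ls -ld /var/www/html
      sudo chown -R ftpuser:ftpuser /var/www/html   # 'ftpuser' is a placeholder
      sudo chmod -R u+rwX /var/www/html

      # If the server runs vsftpd, uploads also need write_enable=YES.
      grep write_enable /etc/vsftpd.conf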

    Read the article

  • How can I configure Adobe Help so it doesn't chatter so much with Adobe's domain?

    - by Michael Prescott
    Adobe Help that came with Creative Suite 5 and/or Flash Builder Pro is constantly creating network traffic with an Adobe site, www.wip4.adobe.com In the Adobe Help application Preferences, I find that I can change the settings so that I must manually download updates, but apparently the application still likes to call home and chatter non-stop with www.wip4.adobe.com. I could use something like Little Snitch to block all this spyware-like behavior, but I'd really prefer to just change the application's behavior. Is there a hidden setting or configuration file to adjust this behavior to something more appropriate and polite?
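
    Short of a hidden preference, one blunt workaround is to blackhole the host at the OS level so the application simply cannot reach it. A minimal sketch for OS X, assuming that cutting off www.wip4.adobe.com entirely doesn't break the parts of Adobe Help you still want (that trade-off is an assumption):

      # Point the chatty host at loopback, then flush the resolver cache.
      echo "127.0.0.1 www.wip4.adobe.com" | sudo tee -a /etc/hosts
      sudo dscacheutil -flushcache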

    Read the article

  • How can I transfer a Zabbix item between hosts and keep the item's statistics

    - by Stepchik
    There are two servers (srv1 and srv2), each running a MySQL server. MySQL on srv1 contains a database (db1). The Zabbix server collects its statistics through a configured agent user parameter (https://www.zabbix.com/documentation/2.0/manual/config/items/userparameters). Yesterday I copied database db1 from MySQL on srv1 to MySQL on srv2. I can clone the Zabbix server item (https://www.zabbix.com/documentation/2.0/manual/config/items) to srv2, but then I lose all of the srv1 db1 statistics. Can you advise how to keep them?
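
    One approach people use for this is to keep the cloned item's history by re-pointing the stored rows at the new item directly in the Zabbix database. This is only a sketch that touches the Zabbix schema by hand, so back up the database first; the itemid values below are placeholders and the table list assumes a Zabbix 2.0 schema:

      # Find the numeric itemids of the old item and the cloned item.
      mysql -u root -p zabbix -e "SELECT itemid, hostid, key_ FROM items WHERE key_ LIKE '%db1%';"

      # Move the history rows from the old itemid (23296) to the new one (23310).
      for t in history history_uint history_str history_text trends trends_uint; do
        mysql -u root -p zabbix -e "UPDATE $t SET itemid = 23310 WHERE itemid = 23296;"
      done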

    Read the article

  • Apache: How can I make my localhost on 192.168.1.101 visible from 192.168.1.102?

    - by takpar
    Hi, I've set up an Apache web server on Ubuntu Linux. I can browse http://localhost fine, but I can't reach the site from other machines in my network using the IP address http://192.168.1.101. I added `Allow from 192.168.1` to my Apache conf, but it did not work; the browser says "the connection has timed out". What should I do? PS:

      adp@adp-desktop:~$ sudo netstat -ap | grep apache
      tcp   0   0 *:www           *:*               LISTEN       10581/apache2
      tcp   0   0 localhost:www   localhost:46017   ESTABLISHED  10586/apache2
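
    Since apache2 is already listening on *:www and the failure is a timeout rather than a 403 Forbidden, the usual culprit is a firewall dropping inbound port 80, not the Allow directive. A minimal sketch of what to check on the Ubuntu box (the assumption being that ufw or iptables is filtering the port):

      # Is a firewall rule dropping inbound HTTP?
      sudo ufw status verbose
      sudo iptables -L -n | grep -E 'dpt:80|http'

      # If ufw is active, open the port and retest from 192.168.1.102.
      sudo ufw allow 80/tcp

      # From the other machine, confirm the port is reachable at all.
      telnet 192.168.1.101 80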

    Read the article

  • configuring mod_proxy_html properly?

    - by tobinjim
    I have an apache2 web server that handles reverse proxying for a Rails 3 app running on another machine. The setup works, except that URLs generated within the webapp aren't getting rewritten by my configuration for mod_proxy_html. The "Reverse Proxy Scenario" is exactly what I'm trying to do, so I've followed the tutorial as completely as I know how. I've applied or tried answers supplied here on Stack Overflow, to no effect. According to the "Reverse Proxy Scenario" you want a number of modules loaded. All those instructions are in my httpd.conf file, and when I examine the output from apachectl -t -D DUMP_MODULES all the expected modules show up in the listing. My external web server doing the reverse proxy is at www.ourdomain.org and the Rails app is internally available at apphost.local (the server is Mac OS X Server 10.6, the Rails app server is Mac OS X 10.6). What's working right now is access to the webapp via the reverse proxy as:

      http://www.ourdomain.org/apphost/railsappname/controllername/action

    But none of the javascript files, css files or other assets get loaded, and links internal to the web app come out missing the apphost portion of the URL, as if my rewrite rule is configured incorrectly (so of course I've focused on that and can't seem to get anything to be added or deleted in the process of passing the html in from the apphost and out through the Apache server). For instance, hovering over an action link in the html returned by the web app you'll get:

      http://www.ourdomain.org/railsappname/controllername/action

    Here's what my Apache directives look like:

      LoadModule proxy_html_module /usr/libexec/apache2/mod_proxy_html.so
      LoadModule xml2enc_module /usr/libexec/apache2/mod_xml2enc.so
      ProxyHTMLLogVerbose On
      LogLevel Debug
      ProxyPass /apphost/ http://apphost.local/
      <Location /apphost/>
        SetOutputFilter INFLATE;proxy-html;DEFLATE
        ProxyPassReverse /
        ProxyHTMLExtended On
        ProxyHTMLURLMap railsappname/ apphost/railsappname/
        RequestHeader unset Accept-Encoding
      </Location>

    After every change I make to httpd.conf I religiously check apachectl -t just to be sane. I'm definitely not an Apache expert, but all the directives that follow mine seem not to overrule what I'm doing here. And yet nothing that I try seems to alter the URLs I see in my browser after hitting the Apache server with a request for my web app. Even if you can't tell what I've done incorrectly, I'd welcome ideas on how to get Apache to help me see what it's working on and doing to the html coming from my web app. That's what I understood ProxyHTMLLogVerbose On and LogLevel Debug to be setting up, but I'm not seeing anything in the log files.
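
    Since the links come back missing only the /apphost/ prefix, the map is probably not matching the root-relative URLs the Rails app actually emits (ProxyHTMLURLMap matches from the start of the URL, and the app's links start with "/", not "railsappname/"). A sketch of a revised block, assuming the Rails app generates root-relative links; the config file path is an assumption, so treat this as a starting point rather than a drop-in fix:

      # Append a revised proxy block to the Apache config (file path is an assumption).
      cat <<'EOF' | sudo tee -a /etc/apache2/other/rails-proxy.conf
      ProxyPass        /apphost/ http://apphost.local/
      ProxyPassReverse /apphost/ http://apphost.local/

      <Location /apphost/>
          SetOutputFilter  INFLATE;proxy-html;DEFLATE
          ProxyHTMLExtended On
          # Rewrite root-relative links emitted by the Rails app back under /apphost/.
          ProxyHTMLURLMap  /  /apphost/
          RequestHeader unset Accept-Encoding
      </Location>
      EOF
      sudo apachectl -t && sudo apachectl graceful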

    Read the article

  • Is there an easy way to always redirect facebook.com to facebook.com/events/list (Firefox, OS X)

    - by Tor Thommesen
    I often use Facebook for communication while working. When I do, I'd rather not see the Facebook news feed. Unfortunately it's hard to remember not to go to the front page and to use facebook.com/messages or facebook.com/events instead. Is there any way to always redirect from the URL https://www.facebook.com/ to https://www.facebook.com/events/list? I use Firefox on OS X. I have tried the Redirector addon and I'm not able to make it work; not sure if I'm missing something obvious or if it's outdated/buggy.

    Read the article

  • Intermediate SSL Certificates on Azure Websites

    - by amhed
    I have successfully configured an Extended-Validation certificate on an Azure Website following this article: http://www.windowsazure.com/en-us/documentation/articles/web-sites-configure-ssl-certificate/ The main (non-technical) stakeholder of the web application went through great lengths to validate that our site is secure. He went to this site to check the validity of our SSL: http://www.whynopadlock.com/ The site throws the following error:

      SSL verification issue (Possibly mis-matched URL or bad intermediate cert.).
      Details: ERROR: no certificate subject alternative name matches

    The certificate is installed using IP Based SSL instead of SNI. This is done because some site visitors still use Internet Explorer 8 on Windows XP, which has no support for SNI and throws a security warning. Is my certificate correctly installed? I received three .CRT files from my SSL provider:

      PrimaryIntermediate.crt
      SecondaryIntermediate.crt
      EndCertificate.crt

    This is how I exported our certificate as a .PFX file for Azure:

      openssl pkcs12 -export -out myserver.pfx -inkey myserver.key -in myserver.crt
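
    If the checker is complaining about a bad intermediate, one common cause is that the .pfx was exported with only the end-entity certificate, so Azure never serves the chain. A minimal sketch of rebuilding the .pfx with the intermediates included, using the file names from the question (whether EndCertificate.crt and myserver.crt are the same certificate is an assumption worth verifying):

      # Combine the intermediates into one chain file.
      cat PrimaryIntermediate.crt SecondaryIntermediate.crt > chain.crt

      # Re-export the PFX with the chain attached, then upload it to Azure again.
      openssl pkcs12 -export -out myserver.pfx \
          -inkey myserver.key -in myserver.crt -certfile chain.crt

      # Check what the site actually serves; the intermediates should appear
      # under "Certificate chain" in the output (hostname is a placeholder).
      openssl s_client -connect www.yourdomain.com:443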

    Read the article

  • Updated Virtual Machine for VS/TFS 2010

    - by Enrique Lima
    If you had downloaded the previous version of the virtual machines, then you are likely aware they are set to expire soon (12/15/2010). Brian Keller announced yesterday (blog post here) the availability of a vm refresh (new expiration set for 6/1/2011). What is part of the refresh? Here is the excerpt from Brian’s post:

    “The version of this virtual machine which was refreshed on December 9, 2010, includes the following additions:
    · Visual Studio 2010 Feature Pack 2
    · Team Foundation Server 2010 Power Tools (September 2010 Release)
    · Visual Studio 2010 Productivity Power Tools (these are disabled in VS so that the screenshots of the hands-on-labs still match; you can quickly enable the Productivity Power Tools via Tools -> Extension Manager from within Visual Studio)
    · Test Scribe for Microsoft Test Manager
    · Visual Studio Scrum 1.0 Process Template
    · All Windows Updates through December 8, 2010
    · Lab Management GDR (KB983578)
    · Visual Studio 2010 Feature Pack 2 pre-requisite hotfix (KB2403277)
    · Microsoft Test Manager hotfix (KB2387011)
    · Minor fit-and-finish fixes based on customer feedback
    · A new expiration date of June 1, 2011”

    The links to download the Virtual Machines are:
    Hyper-V: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=e0198b64-4acb-4709-b07f-359fb4d523bc&displaylang=en
    Windows Virtual PC (Win 7): http://www.microsoft.com/downloads/en/details.aspx?FamilyID=509c3ba1-4efc-42b5-b6d8-0232b2cbb26e&displaylang=en

    Read the article

  • Sharing wireless internet connection between 2 Ubuntu 10.04 systems using cross-over cable

    - by Gary
    I have a Lenovo T60 with Ubuntu 10.04 connected with a cross-over cable to a Dell Vostro 1400, also running Ubuntu 10.04. My internet is coming into the Lenovo through an external wireless antennae, and I want to share the internet with the Dell.

    On the Lenovo:
    - eth0 connection has IPv4 Settings 'Shared to other computers'
    - I can ping the Dell (10.42.43.10) successfully
    - I can use mtr to trace to www.google.com successfully

    On the Dell:
    - eth0 connection has IPv4 Settings 'Automatic DHCP'
    - I can ping the Lenovo (10.42.43.1) successfully
    - when I use mtr to trace to www.google.com, I can only reach 10.42.43.1

    I must be missing some setting, but cannot see what it is; can anyone help me?
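
    Since the Dell can reach 10.42.43.1 but nothing beyond it, forwarding or NAT on the Lenovo is the usual suspect; NetworkManager's 'Shared to other computers' normally sets both up, but it can miss an unusual uplink interface. A sketch of what to verify on the Lenovo, assuming the wireless uplink shows up as wlan0 (the interface name is an assumption; check with ifconfig):

      # 1. Kernel forwarding must be on.
      cat /proc/sys/net/ipv4/ip_forward          # should print 1
      sudo sysctl -w net.ipv4.ip_forward=1

      # 2. Traffic leaving the uplink must be masqueraded.
      sudo iptables -t nat -L POSTROUTING -n -v
      sudo iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE   # wlan0 is assumed

      # 3. The Dell also needs a working DNS server; try a numeric ping first.
      ping -c 3 8.8.8.8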

    Read the article

  • PHP5 giving failed to open stream: HTTP request failed error when using fopen.

    - by mickey
    Hello everyone. This problem seems to have been discussed in the past everywhere on Google and here, but I have yet to find a solution. A very simple fopen gives me a "PHP Warning: fopen(http://www.google.ca): failed to open stream: HTTP request failed!". The URL I am fetching doesn't matter, because even when I fetch http://www.google.com it doesn't work. The exact same script works on a different server. The one failing is Ubuntu 10.04 with PHP 5.3.2. This is not a problem in my script; it's something different on my server, or it might be a bug in PHP. I have tried setting a user_agent in php.ini but no success. My allow_url_fopen is set to On. If you have any ideas, feel free!
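
    Since the same script works on another server, the first thing worth ruling out is whether this box can make outbound HTTP requests at all, and what the PHP it runs actually has configured. A minimal sketch of the checks (nothing here is specific to the asker's setup):

      # Can the server itself fetch the page outside PHP?
      wget -O /dev/null http://www.google.com
      curl -sI http://www.google.com | head -n 1

      # What does PHP report for the relevant settings?
      php -r 'var_dump(ini_get("allow_url_fopen"), ini_get("default_socket_timeout"));'

      # A proxy variable in the environment is a hint that fopen() needs to be
      # told about a proxy via a stream context.
      env | grep -i proxy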

    Read the article

  • Slow website load with CNAME, fast when using IP

    - by Nate Strandberg
    I set up two DNS servers on my network: ns1.byte-werx.com and ns2.byte-werx.com. I can ping the DNS servers and get a fairly good response time, and when I dig them I also get a fairly reasonable response, but any website I filter through them is painfully slow (upwards of 20 seconds), verifiable by performing a tracert or attempting to access the URL in a browser. The DNS servers are running CentOS 6.3 and BIND9 with 500MB of memory (I figure that should be more than enough?). I have a reverse look-up zone (1.168.192) along with two website zones (www.byte-werx.com and www.stayhomedental.com). If I access the websites using their IP the page loads nearly instantly, so I do not believe the issue is with the hosting server, but that is running Ubuntu Server 12.04 and Apache2 with 12GB memory. Any thoughts? I do not have the named.conf file in front of me but I can edit this post to include it if you feel it would be useful. Thanks for any advice!
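
    A delay of roughly 20 seconds that vanishes when you use the raw IP usually means each lookup is timing out against one resolver or name server before something falls back, so timing the lookups directly is the quickest way to prove where the stall is. A sketch using the host names from the question:

      # Time an answer from each of your own name servers.
      dig @ns1.byte-werx.com www.byte-werx.com +noall +answer +stats
      dig @ns2.byte-werx.com www.byte-werx.com +noall +answer +stats

      # Walk the delegation from the root to spot a lame or unreachable NS.
      dig +trace www.byte-werx.com

      # Confirm the registrar's delegation matches the servers you actually run.
      dig NS byte-werx.com +short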

    Read the article

  • transfer code from one server to another server.

    - by Kamlesh Bhure
    I want to transfer new code onto my new production server. I have the code files on my development server. Instead of uploading files over FTP from my local machine, is there another way to transfer code from one server to the other? What I am thinking is that I will make a zip file of the whole codebase and place it in the webroot, so that it would be accessible on the internet at a link like http://www.mydomain.com/code.tar.gz. Then, on the other server, I will just run the command: wget http://www.mydomain.com/code.tar.gz Will this transfer be done in a few seconds? Is this a correct approach?
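
    The wget-from-the-webroot approach will work and is fast, but it briefly exposes the whole codebase to anyone who guesses the URL. If both servers have SSH access, a direct copy avoids that; a minimal sketch, with host names and paths as placeholders:

      # Push straight from the development server to production over SSH.
      rsync -az --exclude '.git' /var/www/myapp/ deploy@prod.example.com:/var/www/myapp/

      # Or, without rsync, stream a tarball over the same channel.
      tar czf - -C /var/www myapp | ssh deploy@prod.example.com 'tar xzf - -C /var/www'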

    Read the article

  • Playing Around with WebLogic Maven Plug-In by Flavius Sana

    - by JuergenKress
    Packaged with WebLogic 12c, wls-maven-plugin lets you install, start and stop servers, create domains, execute WLST scripts, and compile and deploy applications. The plug-in works with Maven 2.x and 3.x. WebLogic 12c can be downloaded from http://www.oracle.com/technetwork/index.html. The developers version is around 180MB (zip archive). To install the plugin we first need to extract wls-maven-plugin.jar.pack and pom.xml:

      $ unzip ~/Downloads/wls1212_dev.zip wls12120/wlserver/server/lib/wls-maven-plugin.jar.pack
      $ unzip ~/Downloads/wls1212_dev.zip wls12120/wlserver/server/lib/pom.xml

    Now, let's unpack the jar file:

      $ unpack200 -r wls12120/wlserver/server/lib/wls-maven-plugin.jar.pack wls12120/wlserver/server/lib/wls-maven-plugin.jar

    Install the plug-in in the local repository. Read the complete article here. WebLogic Partner Community: for regular information become a member of the WebLogic Partner Community, please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: Maven, Flavius Sana, WebLogic, WebLogic Community, Oracle, OPN, Jürgen Kress
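
    The excerpt stops just before the local-repository step; the usual way to register the extracted plug-in with Maven is install:install-file, pointed at the jar and the pom that were unpacked above (a sketch; the plug-in's exact groupId can be confirmed in that pom if the help goal below does not resolve):

      # Install the unpacked plug-in into the local Maven repository.
      mvn install:install-file \
          -Dfile=wls12120/wlserver/server/lib/wls-maven-plugin.jar \
          -DpomFile=wls12120/wlserver/server/lib/pom.xml

      # Afterwards the goals can be invoked, e.g. to list what the plug-in offers:
      mvn com.oracle.weblogic:wls-maven-plugin:help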

    Read the article

  • Best practice, or generally best way to set up web-hosting server, permissions, etc.

    - by Jagot
    Hi, I'm about to set up a server upon which a friend and I will be hosting web sites, and I'll be using Debian. I've set up a LAMP solution many times for local testing purposes, but never for actual production use. I was wondering what the best practices are in terms of setting the server up, specifically with reference to accessing the web root directory. A couple of the options I have seen:

    1. Set up a single user account on the server for us both to use, and use a virtual host pointing somewhere in the home directory, e.g. /home/webdev/www.
    2. Set each of us up a user account, and grant permissions in some way to /var/www (what would be the best way? Set up a new group?)

    I want to get this right when I first set this up, as there won't be any going back for a while once our first site is up and running. Appreciate any guidance in advance.
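
    For the second option, the common pattern is a shared group that owns the web root, with the setgid bit set on directories so new files inherit the group. A minimal sketch on Debian, assuming two accounts named alice and bob (the names are placeholders):

      # Create a group for the developers and put both accounts in it.
      sudo groupadd webdev
      sudo usermod -aG webdev alice
      sudo usermod -aG webdev bob

      # Give the group the web root and make new files inherit it.
      sudo chown -R root:webdev /var/www
      sudo find /var/www -type d -exec chmod 2775 {} \;   # leading 2 sets setgid
      sudo find /var/www -type f -exec chmod 0664 {} \;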

    Read the article

  • How to make Google recognize language for a multilingual website?

    - by Julien Fouilhé
    A few weeks ago, I implemented translation functionality for the website of my company. The website is now available in French and English, and I looked on the internet for the best way to do this without losing any ranking and while keeping our pages on Google. Here is what I did:

    - I set a response header: Content-Language:en and Content-Language:fr
    - My URLs are formatted as http://www.website.com/en/... and http://www.website.com/fr/...
    - My html tag is set with a lang attribute: <html lang="en"> and <html lang="fr">
    - There is a <link rel="alternate" hreflang="en" href="EnglishPageUrl"> on French pages and a <link rel="alternate" hreflang="en" href="frenchPageUrl"> on English pages.

    But Google keeps referring to some English pages when I do a search on the French engine, knowing that the website was first only available in English. Is that normal? Do I still have to wait? It has been almost one month now; I thought it would be okay by now... Thank you.

    Read the article

  • Faster, Simpler access to Azure Tables with Enzo Azure API

    - by Herve Roggero
    After developing the latest version of Enzo Cloud Backup I took the time to create an API that would simplify access to Azure Tables (the Enzo Azure API). At first, my goal was to make the code simpler compared to the Microsoft Azure SDK. But as it turns out it is also a little faster; and when using the specialized methods (the fetch strategies) it is much faster out of the box than the Microsoft SDK, unless you start creating complex parallel and resilient routines yourself. Last but not least, I decided to add a few extension methods that I think you will find attractive, such as the ability to transform a list of entities into a DataTable. So let’s review each area in more detail.

    Simpler Code

    My first objective was to make the API much easier to use than the Azure SDK. I wanted to reduce the amount of code necessary to fetch entities, remove the code needed to add automatic retries and handle transient conditions, and give additional control, such as a way to cancel operations, obtain basic statistics on the calls, and control the maximum number of REST calls the API generates in an attempt to avoid throttling conditions in the first place (something you cannot do with the Azure SDK at this time).

    Strongly Typed

    Before diving into the code, note that the following examples rely on a strongly typed class called MyData. The way MyData is defined for the Azure SDK is similar to the Enzo Azure API, with the exception that they inherit from different classes. With the Azure SDK, classes that represent entities must inherit from TableServiceEntity, while classes with the Enzo Azure API must inherit from BaseAzureTable or implement a specific interface.

      // With the SDK
      public class MyData1 : TableServiceEntity
      {
          public string Message { get; set; }
          public string Level { get; set; }
          public string Severity { get; set; }
      }

      // With the Enzo Azure API
      public class MyData2 : BaseAzureTable
      {
          public string Message { get; set; }
          public string Level { get; set; }
          public string Severity { get; set; }
      }

    Simpler Code

    Now that the classes representing an Azure Table entity are defined, let’s review what the methods would look like when fetching all the entities from an Azure Table with the Azure SDK (note the use of a few variables: the _tableName variable stores the name of the Azure Table, and the ConnectionString property returns the connection string for the Storage Account containing the table):

      // With the Azure SDK
      public List<MyData1> FetchAllEntities()
      {
          CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);
          CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
          TableServiceContext serviceContext = tableClient.GetDataServiceContext();
          CloudTableQuery<MyData1> partitionQuery =
              (from e in serviceContext.CreateQuery<MyData1>(_tableName)
               select new MyData1()
               {
                   PartitionKey = e.PartitionKey,
                   RowKey = e.RowKey,
                   Timestamp = e.Timestamp,
                   Message = e.Message,
                   Level = e.Level,
                   Severity = e.Severity
               }).AsTableServiceQuery<MyData1>();

          return partitionQuery.ToList();
      }

    This code gives you automatic retries because the AsTableServiceQuery does that for you. Also, note that this method is strongly-typed because it is using LINQ. Although this doesn’t look like too much code at first glance, you are actually mapping the strongly-typed object manually. So for larger entities, with dozens of properties, your code will grow. And from a maintenance standpoint, when a new property is added, you may need to change the mapping code. You will also note that the mapping being performed is optional; it is desired when you want to retrieve specific properties of the entities (not all) to reduce the network traffic. If you do not specify the properties you want, all the properties will be returned; in this example we are returning the Message, Level and Severity properties (in addition to the required PartitionKey, RowKey and Timestamp).

    The Enzo Azure API does the mapping automatically and also handles automatic retries when fetching entities. The equivalent code to fetch all the entities (with the same three properties) from the same Azure Table looks like this:

      // With the Enzo Azure API
      public List<MyData2> FetchAllEntities()
      {
          AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
          List<MyData2> res = at.Fetch<MyData2>("", "Message,Level,Severity");
          return res;
      }

    As you can see, the Enzo Azure API returns the entities already strongly typed, so there is no need to map the output. Also, the Enzo Azure API makes it easy to specify the list of properties to return, and to specify a filter as well (no filter was provided in this example; the filter is passed as the first parameter).

    Fetch Strategies

    Both approaches discussed above fetch the data sequentially. In addition to the linear/sequential fetch methods, the Enzo Azure API provides specific fetch strategies. Fetch strategies are designed to prepare a set of REST calls, executed in parallel, in a way that performs faster than if you were to fetch the data sequentially. For example, if the PartitionKey is a GUID string, you could prepare multiple calls, providing appropriate filters ([‘a’, ‘b’[, [‘b’, ‘c’[, [‘c’, ‘d’[, …), and send those calls in parallel. As you can imagine, the code necessary to create these requests would be fairly large. With the Enzo Azure API, two strategies are provided out of the box: the GUID and List strategies. If you are interested in how these strategies work, see the Enzo Azure API Online Help. Here is an example that performs parallel requests using the GUID strategy (which executes 2 to 3 times faster than the sequential methods discussed previously):

      public List<MyData2> FetchAllEntitiesGUID()
      {
          AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
          List<MyData2> res = at.FetchWithGuid<MyData2>("", "Message,Level,Severity");
          return res;
      }

    Faster Results

    With Sequential Fetch Methods

    Developing a faster API wasn’t a primary objective; but it appears that the performance tests performed with the Enzo Azure API deliver the data a little faster out of the box (5%-10% on average, and sometimes up to 50% faster) with the sequential fetch methods. Although the amount of data is the same regardless of the approach (and the REST calls are almost exactly identical), the object mapping approach is different. So it is likely that the slight performance increase is due to a lighter API. Using LINQ offers many advantages and tremendous flexibility; nevertheless when fetching data it seems that the Enzo Azure API delivers faster. For example, the same code previously discussed delivered the following results when fetching 3,000 entities (about 1KB each). The average elapsed time shows that the Azure SDK returned the 3,000 entities in about 5.9 seconds on average, while the Enzo Azure API took 4.2 seconds on average (39% improvement).

    With Fetch Strategies

    When using the fetch strategies we are no longer comparing apples to apples; the Azure SDK is not designed to implement fetch strategies out of the box, so you would need to code the strategies yourself. Nevertheless I wanted to provide out-of-the-box capabilities, and as a result you see a test that returned about 10,000 entities (1KB each entity), with an average execution time over 5 runs. The Azure SDK implemented a sequential fetch while the Enzo Azure API implemented the List fetch strategy. The fetch strategy was 2.3 times faster. Note that the following test hit a limit on my network bandwidth quickly (3.56Mbps), so the results of the fetch strategy are significantly below what they could be with a higher bandwidth.

    Additional Methods

    The API wouldn’t be complete without support for a few important methods other than the fetch methods discussed previously. The Enzo Azure API offers these additional capabilities:

    - Support for batch updates, deletes and inserts
    - Conversion of entities to DataRow, and List<> to a DataTable
    - Extension methods for Delete, Merge, Update, Insert
    - Support for asynchronous calls and cancellation
    - Support for fetch statistics (total bytes, total REST calls, retries…)

    For more information, visit http://www.bluesyntax.net or go directly to the Enzo Azure API page (http://www.bluesyntax.net/EnzoAzureAPI.aspx).

    About Herve Roggero

    Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting, a company specialized in cloud computing products and services. Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" from Apress and runs the Azure Florida Association (on LinkedIn: http://www.linkedin.com/groups?gid=4177626). For more information on Blue Syntax Consulting, visit www.bluesyntax.net.

    Read the article

  • Apache https configurations

    - by sissonb
    I am trying to set up my domain name with a self-signed cert. I created the cert and placed the server.key and server.crt files into C:/apache/config/. Then I updated my httpd.conf to include the following host:

      <VirtualHost 192.168.5.250:443>
          DocumentRoot C:/www
          ServerName mydomain.com:443
          ServerAlias www.mydomain.com:443
          SSLEngine on
          SSLCertificateFile C:/apache/conf/server.crt
          SSLCertificateKeyFile C:/apache/conf/server.key
          SSLVerifyClient none
          SSLProxyEngine off
          SetEnvIf User-Agent ".*MSIE.*" \
                   nokeepalive ssl-unclean-shutdown \
                   downgrade-1.0 force-response-1.0
          CustomLog logs/ssl_request_log \
                    "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"
      </VirtualHost>

    Now when I go to https://mydomain.com I get the following error:

      SSL connection error
      Unable to make a secure connection to the server. This may be a problem with
      the server, or it may be requiring a client authentication certificate that
      you don't have.
      Error 107 (net::ERR_SSL_PROTOCOL_ERROR): SSL protocol error.

    Can anyone see what I'm doing wrong? Thanks!
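
    An ERR_SSL_PROTOCOL_ERROR in the browser often means Apache is answering port 443 with plain HTTP, which happens when mod_ssl isn't loaded or nothing tells Apache to listen on 443, even though the vhost itself looks fine. A sketch of how to check, assuming a stock Windows Apache layout (paths are assumptions):

      # From any machine with OpenSSL, see what port 443 actually speaks.
      openssl s_client -connect mydomain.com:443

      # If the handshake fails immediately, look in httpd.conf for two lines
      # that are commonly left commented out:
      #   LoadModule ssl_module modules/mod_ssl.so
      #   Listen 443
      # and make sure the <VirtualHost> address (192.168.5.250:443) matches the
      # address Apache is really bound to.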

    Read the article

  • Inbound connections using Internet Connection Sharing in Apple/Mac/Leopard

    - by tlianza
    I have a Mac mini which I'm using to give some other devices wireless access, by sharing its Airport connection with the local ethernet, which is plugged into a switch. All devices can get online, no problem. (See how: http://www.macosxhints.com/article.php?story=20041112101646643 and http://www.macosxhints.com/article.php?story=20071223001432304) The issue is that I need to be able to connect in to these machines as well (at least, for the Slingbox to work). All the devices have 192.168.2.* addresses, and the rest of my local network is on 192.168.1.*. I tried setting a static route so that the 192.168.2.* addresses would use a gateway of 192.168.1.50 (my Mac mini's address), but that didn't seem to help. Does anyone know if what I'm trying to do is possible? I admit I'm not certain what Internet Connection Sharing is really doing under the hood... perhaps it just does basic NAT and doesn't do the type of routing I'm looking for. If so, anyone know if this is possible?
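
    For reference, the static-route syntax on another Mac on the 192.168.1.* side looks like the sketch below, but it only helps if the mini is genuinely routing; Internet Sharing on OS X does NAT, in which case inbound connections need a port forward on the mini rather than a route (treat that caveat as an assumption about this particular OS X version):

      # On a 192.168.1.* machine, send traffic for the shared subnet via the mini.
      sudo route -n add -net 192.168.2.0/24 192.168.1.50

      # Quick test against a device on the shared side (the address is a placeholder).
      ping -c 3 192.168.2.10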

    Read the article

  • Existing laravel 4 project gives 404 in browser

    - by Richard A
    I'm trying to set up a development environment on a virtual machine running Ubuntu 14.04 LTS using Nginx and HHVM. To do this, I followed the tutorial here. This goes well with a new installation of Laravel. But when I import an existing Laravel 4 project and try to open that on my actual machine (which will serve as the client running Windows 7), I'm getting a 404 File Not Found error on the screen while connecting to http://sav.savrichard.dev. I did add this to the hosts file with the correct IP address. The virtual machine is receiving the request and responds with a 404 error. How do I solve this error? I'm pretty new to Ubuntu so I'm not exactly sure what's wrong. The project is located at /var/www/sav.savrichard.net. The server configuration is as follows:

      server {
          listen 80 default_server;

          root /var/www/sav.savrichard.net/public;
          index index.html index.htm index.php;

          server_name sav.savrichard.dev;

          access_log /var/log/nginx/localhost.sav.savrichard.dev-access.log;
          error_log  /var/log/nginx/localhost.sav.savrichard.dev-error.log error;

          charset utf-8;

          location / {
              try_files \$uri \$uri/ /index.php?\$query_string;
          }

          location = /favicon.ico { log_not_found off; access_log off; }
          location = /robots.txt  { log_not_found off; access_log off; }

          error_page 404 /index.php;

          include hhvm.conf;

          # Deny .htaccess file access
          location ~ /\.ht {
              deny all;
          }
      }

    And the hhvm.conf file is:

      location ~ \.(hh|php)$ {
          fastcgi_keep_conn on;
          fastcgi_pass 127.0.0.1:9000;
          fastcgi_index index.php;
          fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
          include fastcgi_params;
      }
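
    With an existing Laravel 4 app on a fresh VM, the usual suspects are file permissions on the storage directory and the front controller not being where the root directive points, and nginx's error log will say which. A minimal sketch of the first checks, using the paths from the question:

      # What does nginx say about the 404?
      sudo tail -n 50 /var/log/nginx/localhost.sav.savrichard.dev-error.log

      # Is the front controller where the root directive expects it?
      ls -l /var/www/sav.savrichard.net/public/index.php

      # Laravel 4 needs app/storage writable by the web server user.
      sudo chgrp -R www-data /var/www/sav.savrichard.net/app/storage
      sudo chmod -R ug+rwX /var/www/sav.savrichard.net/app/storage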

    Read the article

  • Taking over and Moving a PHP site

    - by KCavon
    I have an internal-use PHP site at my new position. It only runs a few days a year, off site, so we keep it on laptops. The hardware it has been on, an 8-year-old IBM Thinkpad running Fedora, is dying. I have new Lenovo Thinkpads running the latest and greatest Ubuntu. I have copied the contents of /var to a shared drive, renamed the old www folder in /var on the new machine, and copied over the old www folder. I can get to the login page and into the site, but when I look something up it returns "Cannot Open". I know I cannot get to MySQL on the new machine because the users and passwords don't match. The version of PHP on the old machine is from before the setup script was included. I know very little about PHP. I am looking for input on the proper way to link the old PHP files to my MySQL instance. Any help much appreciated.
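
    Copying /var/www moves the code but not the database or the MySQL account the code logs in with, which is usually what "Cannot Open" means here. A minimal sketch of carrying both across, assuming the old site's database is called sitedb and its config references a user webuser (both names are placeholders; the real ones are in the old site's PHP config file):

      # On the old Thinkpad, dump the database while it still boots.
      mysqldump -u root -p sitedb > sitedb.sql

      # On the new machine, recreate the database and the account PHP expects.
      mysql -u root -p -e "CREATE DATABASE sitedb;
        CREATE USER 'webuser'@'localhost' IDENTIFIED BY 'the-old-password';
        GRANT ALL ON sitedb.* TO 'webuser'@'localhost';"
      mysql -u root -p sitedb < sitedb.sql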

    Read the article

  • How to solve virtual host issue

    - by Webnet
    I have multiple sites, all set up the same as below, except "bk" has something else in its place...

      NameVirtualHost *:80
      <VirtualHost bk:80>
          ServerName bk
          DocumentRoot /var/www/bk.com/
      </VirtualHost>

    and I get these errors when restarting apache:

      [Mon Jan 17 10:28:56 2011] [error] VirtualHost bk:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results
      [Mon Jan 17 10:28:56 2011] [warn] NameVirtualHost bk:80 has no VirtualHosts

    I don't get it... the other 2 sites I have virtual host configurations for, set up this exact same way, don't throw any errors.

    Update: one error message fixed - here's where I'm at now..

      <VirtualHost bk:80>
          ServerName bk
          DocumentRoot /var/www/bk.com/
      </VirtualHost>

      [Mon Jan 17 10:28:56 2011] [error] VirtualHost bk:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results
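
    The mixing error comes from declaring NameVirtualHost *:80 and then opening the vhost as <VirtualHost bk:80>; for name-based hosting the two must use the same address form. A sketch of the conventional layout, keeping the host name from the question (the sites-available path assumes a Debian/Ubuntu layout):

      # Keep the single NameVirtualHost *:80 line you already have, and make the
      # vhost's address match it.
      cat <<'EOF' | sudo tee /etc/apache2/sites-available/bk
      <VirtualHost *:80>
          ServerName   bk
          DocumentRoot /var/www/bk.com/
      </VirtualHost>
      EOF
      sudo apache2ctl configtest && sudo service apache2 restart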

    Read the article

  • Remove trailing slash using redirect directive in vhost

    - by Choy
    I have an issue where URLs that end in a "/" after a file name cause css/js to break. I.e.:

      http://www.mysite.com/index.php/  <-- breaks
      http://www.mysite.com/            <-- OK, only breaks for file names

    To fix it, I tried adding a Redirect 301 directive in the vhost file, checking for an extension followed by a slash:

      <VirtualHost *:80>
          ServerName mysite.com
          Redirect 301 ^(.*?\..+)/$ http://mysite.com/$1
      </VirtualHost>

    The redirect appears to do nothing. Is this an issue with my implementation, or is what I'm trying to accomplish not possible with a Redirect 301 in the vhost file?
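
    Redirect only matches literal URL-path prefixes; the regular-expression form of the directive is RedirectMatch, which is most likely why the rule above never fires. A sketch of the vhost using RedirectMatch, keeping the pattern from the question (the file path is an assumption, and the captured group already starts with "/", so the target needs no extra slash):

      # RedirectMatch understands regular expressions, Redirect does not.
      cat <<'EOF' | sudo tee /etc/apache2/sites-available/mysite
      <VirtualHost *:80>
          ServerName mysite.com
          RedirectMatch 301 ^(.*?\..+)/$ http://mysite.com$1
      </VirtualHost>
      EOF
      sudo apachectl -t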

    Read the article

  • ISP issue browsing "sonos.com" - need to diagnose and prove [closed]

    - by john
    I am unable to browse to the website "sonos.com" with my ISP (Virgin). I have ruled out browsers, PCs, Macs, routers, wifi, etc. Other ISPs (even other Virgin connections in different areas!) supply this site no problem. I am 99% convinced there is a DNS issue lurking here. There is something fishy about the DNS for the site: what I notice is that online DNS sites tell me the right IP address for "sonos.com", but not for "www.sonos.com". Anyway, when I type "sonos.com" the browser (any/all of the 4 I tried) fails to display the page. Firefox gives a "connection was reset" error. If I browse to sonos.com using the IP address it works OK. Browsing to www.sonos.com or sonos.com works fine with other ISPs, of course.

    Questions:
    1. Does anyone have any idea what might be going on here?
    2. Any suggestions as to tools/monitors to help investigate/prove what is going on? I can then take this up with Virgin and/or Sonos.

    Thanks
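
    To pin this on DNS (or rule it out), compare what the connection's default resolver returns for both names against an independent public resolver, and keep the output as evidence for the ISP. A minimal sketch, run from the affected connection (8.8.8.8 is just one example of a public resolver):

      # What the default (ISP) resolver says:
      nslookup sonos.com
      nslookup www.sonos.com

      # What an independent resolver says, for comparison:
      dig @8.8.8.8 sonos.com +short
      dig @8.8.8.8 www.sonos.com +short

      # If the addresses agree, DNS is fine and the reset happens in transit;
      # a traceroute to the site is the next thing to attach to the report.
      traceroute -n www.sonos.com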

    Read the article

  • USB forwarding from dom0 to domU

    - by Karolis T.
    What are my options to forward two USB-connected phones to a Xen guest? I've read about PCI passthrough (http://www.wlug.org.nz/XenPciPassthrough), but I'm sure the USB controller in the server isn't a PCI card. There's device-level forwarding, but I need to forward two devices, and this post doesn't say how to do it: http://www.olivetalks.com/2008/02/03/usb-forwarding-on-xen-it-just-does-not-work/ Would something as simple as: usbdevice = [ 'host:xxx', 'host:yyy', ] work? EDIT: I'm now starting a bounty. This is really important for me and for other people also, hoping someone who has this resolved will be able to help.
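
    For what it's worth, the host: entries take the vendor:product IDs that lsusb reports, and whether a given Xen version accepts more than one entry in usbdevice is exactly the open question here, so treat the sketch below as something to test rather than a confirmed answer (the IDs shown are placeholders):

      # On dom0, find the vendor:product IDs of the two phones.
      lsusb
      # e.g. "Bus 002 Device 004: ID 0fce:612a Sony Ericsson ..."

      # Then, in the HVM guest config, pass both IDs through:
      #   usb = 1
      #   usbdevice = [ 'host:0fce:612a', 'host:04e8:6860' ]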

    Read the article
