Search Results

Search found 3750 results on 150 pages for 'joomla sef urls'.

Page 126/150

  • If spaces in filenames are possible, why do some of us still avoid using them?

    - by Chris W. Rea
    Somebody I know expressed irritation today regarding those of us who tend not to use spaces in our filenames, e.g. NamingThingsLikeThis.txt -- despite most modern operating systems supporting spaces in filenames. Non-technical people must look at filenames created by geeks and wonder where we learned English. So, what are the reasons that spaces in filenames are avoided or discouraged? The most obvious reason I could think of, and why I typically avoid it, is the extra quotes required on the command line when dealing with such files. Are there any other significant reasons, other than the practice being a vestigial preference? UPDATE: Thanks for all your answers! I'm surprised how popular this was. So, here's a summary -- six reasons why geeks prefer filenames without spaces in them:

    1. It's irritating to put quotes around them when referenced on the command line (or elsewhere).
    2. Some older operating systems didn't support them, and us old dogs are used to that.
    3. Some tools still don't support spaces in filenames at all, or don't support them well. (But they should.)
    4. It's irritating to escape spaces when used where spaces must be escaped, such as URLs.
    5. Certain unenlightened services (e.g. file hosting, webmail) remove or replace spaces anyway!
    6. Names without spaces can be shorter, which is sometimes desirable as paths are limited.
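
    To illustrate the quoting point, a minimal shell sketch (the file name is hypothetical):

        $ cat My Notes.txt       # fails: the shell passes "My" and "Notes.txt" as two arguments
        $ cat "My Notes.txt"     # works, but only with the quotes
        $ cat My\ Notes.txt      # or with the space escaped
        $ cat MyNotes.txt        # no quoting needed at all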

    Read the article

  • redirect from mysite.com to www.mysite.com

    - by jml
    Hi there. I know this has been answered many, many times, so if someone wants to point me to another thread that answers my question specifically, that is fine -- for right now, my searches aren't yielding many results. I have a website like mysite.com with a Flash swf embedded in it, and when I go to www.mysite.com, all of a sudden things don't work properly. I would like to get to the bottom of this, because it's not like the page just "doesn't load" at all; it loads, and I can only do certain things, as if certain functionality is disabled (it might be URL requests for specific URLs, etc.). Do I need to manage this in my control panel? I wouldn't assume so, because the site loads, just with crippled functionality inside the swf. I was thinking it might have more to do with my crossdomain.xml file; could this be the case? Thanks for any tips or suggestions.
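
    The usual way to avoid the two-hostname problem entirely is to 301-redirect one form to the other. A sketch for Apache with mod_rewrite, assuming an .htaccess in the site root (mysite.com is the poster's placeholder domain):

        RewriteEngine On
        # send bare-domain requests to the www hostname
        RewriteCond %{HTTP_HOST} ^mysite\.com$ [NC]
        RewriteRule ^(.*)$ http://www.mysite.com/$1 [R=301,L]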

    Read the article

  • DNS setup problems with Windows Azure VPS

    - by jbigelow
    What is the proper way to set up the A record (or CNAME) for a Windows Azure VPS? I can't connect to my website after setting up IIS and believe I don't have the correct DNS setup. I created a small VPS instance with the default Windows Server 2012 configuration. I RDP'd in and added the Web Server role. In my DNSMadeEasy control panel I added an A record with my public Virtual IP address. In IIS I went to the default website and added bindings for the hostname of my website, so I should be able to type mywebsite.com and see the IIS 8 splash screen, but instead my browser cannot connect. I attempted to navigate to the site by typing my Virtual IP address into the browser and still cannot connect. I RDP'd back into the machine and turned off Windows Firewall. No change; still cannot navigate to my website. From within IIS I double-checked my binding. If I click "Browse *:80" I can bring up my website in IE at the http://localhost address. If I click "Browse mywebsite on *:80" IE says "This page cannot be displayed." From within the RDP session I can view the site if I navigate to http://127.0.0.1, but not if I navigate to my Virtual IP, nor can I view the page if I try navigating to http://mywebservername.cloudapp.net. I'm thinking I must be fundamentally misunderstanding how to do DNS setup with an Azure VPS, but my initial Google searches aren't turning up any helpful information.
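
    For reference, a sketch of typical zone entries for an Azure-hosted site (using the poster's hostnames; 203.0.113.10 stands in for the real Virtual IP). Azure's public VIP is not guaranteed to survive a deployment being torn down, so a CNAME to the stable *.cloudapp.net name is generally preferred where possible:

        ; zone file sketch
        www.mywebsite.com.   3600  IN  CNAME  mywebservername.cloudapp.net.
        mywebsite.com.       3600  IN  A      203.0.113.10  ; the current public Virtual IP

    Since http://mywebservername.cloudapp.net doesn't load either, DNS may not be the culprit: it's also worth confirming that a TCP port 80 endpoint is configured for the VM in the Azure management portal, since inbound traffic is blocked until an endpoint exists, regardless of Windows Firewall settings.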

    Read the article

  • Serving images from another hostname vs Apache overload for the rewrites

    - by luison
    We are trying to further improve the speed of some sites with older HTML, partly to obtain better SEO results. We have applied some minification measures, combined HTML, CSS, etc. We use a small virtualized infrastructure, and we've always wanted a light + standard HTTP server configuration, so the first server can serve images and static content while the other handles PHP, rewrites, etc. We can easily do that now with a VM using the same files and vhost configuration (bind mounts) on Apache, but with hardly any modules loaded. This means the light httpd will have a smaller footprint, allowing us to serve more and quicker, run more MinSpareServers, etc. So, as browsers also benefit from loading static content from different hostnames, we've thought about building a rewrite rule on our main server (main.com) to "redirect" all images and CSS -- *.jpg, *.gif, *.css, etc. -- to the same files at, say, cdn.main.com, thus letting the browser open more connections. The question is: assuming we already have a very complex rewrite ruleset (we manually manipulate many old URLs for SEO), will it be worth it? That is, will the additional load on main's Apache from redirecting main.com/image.jpg (I understand we'll have to do a 301) to cdn.main.com/image.jpg, plus cdn.main.com then having to serve it, be larger than the gain we would be achieving in the browser? Could the excess of 301s for all images on a page be penalized by Google? How do large companies work this out -- does the original code already include images linked from the CDN with absolute paths? EDIT: Just to clarify, our concern is not so much server performance or bandwidth. We could obviously employ an external CDN server, but we have plenty of CPU and bandwidth. Our concern is how to have "old" sites with plenty of semi-static HTML content benefit from splitting connections for images and static content via Apache, without having to change the HTML to absolute paths (i.e. image.jpg to cdn.main.com/image.jpg happening on the server, not in the code).
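
    The redirect being discussed might look something like this (a sketch for mod_rewrite on main.com, using the poster's hostnames):

        RewriteEngine On
        # 301 static asset requests on the main hostname over to the static host
        RewriteCond %{HTTP_HOST} ^(www\.)?main\.com$ [NC]
        RewriteRule \.(jpe?g|gif|png|css|js)$ http://cdn.main.com%{REQUEST_URI} [R=301,L]

    Note this costs one extra round-trip per asset, which is part of the trade-off the question is weighing.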

    Read the article

  • HA Proxy won't load balance my web requests. What have I done wrong?

    - by Josh Smeaton
    I've finally got HAProxy set up and running in a way I think I want. However, it is not load balancing the web requests it receives. All requests are currently being forwarded to the first server in the cluster. I'm going to paste my configuration below -- if anyone can see where I may have gone wrong, I'd appreciate it. This is my first stab at configuring web servers in a *nix environment. First up, I have HAProxy running on the same host as the first server in the Apache cluster. We are moving these servers to virtual later on, and they will have different virtual hosts, but I wanted to get this running now. Both web servers are receiving their health checks and are reporting back correctly. The HAProxy stats page correctly reports servers that are up and down. I've tested this by altering the name of the file that is checked. I haven't put any load onto these servers yet. I've just opened up the URLs on several tabs (private browsing), and had several co-workers hit the URL too. All of the traffic goes to WEB1. Am I balancing incorrectly?

        global
            maxconn 10000
            nbproc 8
            pidfile /var/run/haproxy.pid
            log 127.0.0.1 local0 debug
            daemon

        defaults
            log global
            mode http
            retries 3
            option redispatch
            maxconn 5000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000

        listen WEBHAEXT :80,:8443
            mode http
            cookie sessionbalance insert indirect nocache
            balance roundrobin
            option httpclose
            option forwardfor except 127.0.0.1
            option httpchk HEAD health_check.txt
            stats enable
            stats auth rah:rah
            server WEB1 10.90.2.131:81 cookie WEB_1 check
            server WEB2 10.90.2.130:80 cookie WEB_2 check

    Read the article

  • Can I edit the snapshots of websites on most visited sites page in Chrome?

    - by arik
    Can you manually edit the snapshots of the most visited sites on the new tab page in Chrome? When I open a new tab in Chrome, I get the 'new page' tab, where I selected 'most popular', so I have 8 icons with the 8 most popular sites I visited; I can also pin any one of those I see on screen so that they remain 'permanent' there. We are missing one which would point to, say, Hotmail, so I was looking for a way to add Hotmail as one of the 8. (And, by the way, we ticked the 'X' on one of those 8, and now it remains grey / shows nothing in it.) I saw the previous chain of discussion, but did not find a way to continue the thread. I have found the "Preferences" file (I have Windows 7), but did not see clearly what to modify in it; I did find a section called 'URLs pinned' or something like that, but it did not fully match the ones I have. I have also activated profile sync for Chrome -- I don't know if that has any effect. So, my double question: How can I add a URL of my choice into one of those 8 spaces? And how can I restore usage of the last one?

    Read the article

  • How to add URLs to wiki (MediaWiki) powered documentation?

    - by Ian Boyd
    We have an internal company wiki. The wiki engine being used is MediaWiki, the wiki engine that runs Wikipedia. Some of it contains IT stuff. One of the things I want to have is hyperlinks to the various virtual machines. An example of a command, as it needs to run, is:

        vmrc://solo.avatopia.com:5901/Windows 2000 Server

    My first thought was to convert the URL into a link:

        [vmrc://solo.avatopia.com:5901/Windows 2000 Server]

    But the content renders literally as above, square brackets and all. Testing with other URL protocols:

        [http://solo.avatopia.com]
        [ftp://solo.avatopia.com]
        [ldap://solo.avatopia.com]
        [vmrc://solo.avatopia.com]

    Only the first two work and are converted to hyperlinks; the other two remain literal text. How can I add URLs to MediaWiki-powered documentation? Original question: an example of a command, as it needs to run, is:

        \\solo\VMRC Client\vmrc.exe solo.avatopia.com:5901/Windows 2000 Server

    If launching from a command prompt, you have to quote the spaces:

        C:\>"\\solo\VMRC Client\vmrc.exe" solo.avatopia.com:5901/"Windows 2000 Server"

    My first thought in converting the above for use on our wiki site was to simply HTML-ify it:

        file://\\solo\VMRC Client\vmrc.exe solo.avatopia.com:5901/&quot;Windows 2000 Server&quot;

    but MediaWiki only converts file://\solo\VMRC to a hyperlink; the remainder is text. I've tried other random things, including enclosing the URL in square brackets. What is the correct answer? I don't want to randomly stumble on some format that happens to work today and breaks in the future.
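
    MediaWiki only linkifies schemes listed in its $wgUrlProtocols configuration array, which is why the unknown vmrc:// scheme stays literal. A sketch of the usual approach, in LocalSettings.php:

        # allow external links using the vmrc:// scheme
        $wgUrlProtocols[] = 'vmrc://';

    Note too that in an external-link bracket the URL ends at the first space (the rest becomes the link label), so the spaces in the path would need percent-encoding:

        [vmrc://solo.avatopia.com:5901/Windows%202000%20Server Windows 2000 Server]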

    Read the article

  • nginx not serving admin static files?

    - by toto_tico
    First, I want to clarify that this error is just for the admin static files; my problem is specific to the static files that correspond to the Django admin. The rest of the static files work perfectly. Basically, for some reason I cannot access those admin static files through the nginx server. Everything works perfectly with Django's development server, and collectstatic is doing its job: it puts the files in the expected place in the static folder. The URLs are correct, but I cannot access the admin static files directly, while the others I can. So, for example, I am able to access this URL (copying it into the browser): myserver.com:8080/static/css/base/base.css, but I am not able to access this other URL: myserver.com:8080/static/admin/css/admin.css. I also tried copying the admin/ directory structure into other_admin_directory_name/; then I can access myserver.com:8080/static/other_admin_directory_name/css/admin.css. So I checked permissions, and everything is fine. I tried changing ADMIN_MEDIA_PREFIX = '/static/admin/' to ADMIN_MEDIA_PREFIX = '/static/other_admin_directory_name/'; it doesn't work, which is a mystery in itself that I am still exploring, with no luck so far. Finally, and this seems to be an important clue: I tried copying the admin/ directory structure into admin_and_then_any_suffix/, and I cannot access myserver.com:8080/static/admin_and_then_any_suffix/css/admin.css. So if the name of the directory starts with "admin" (for example "administration" or "admin2"), it doesn't work. (Added thanks to sarnold's observation:) the problem seems to be in the nginx configuration file /etc/nginx/sites-available/mysite:

        location /static/admin {
            alias /home/vl3/.virtualenvs/vl3/lib/python2.7/site-packages/django/contrib/admin/media/;
        }
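
    That location block explains the symptom: as a prefix match it captures every URL beginning with /static/admin (including /static/administration and /static/admin2) and serves it from the old django.contrib.admin media directory rather than from the collectstatic output. A sketch of one possible fix, assuming collectstatic writes into a single STATIC_ROOT (the /path/to/static directory here is hypothetical):

        # drop the /static/admin block and serve everything from STATIC_ROOT
        location /static/ {
            alias /path/to/static/;
        }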

    Read the article

  • Is something infecting my Google searches?

    - by hippietrail
    I started doing some experimentation toward making a browser userscript for Google searches, and when opening the JavaScript console I noticed something that strikes me as very fishy: The page at https://www.google.com.au/search?oq=XYZ&sourceid=chrome&ie=UTF-8&q=XYZ displayed insecure content from http://50.116.62.47/js/chromeServerV45.js. The page at about:blank displayed insecure content from http://96.126.107.154/amz/google.php?callback=a&q=XYZ&country=US. (XYZ is a placeholder for whatever the search term really was.) Is it likely that I've picked up something like a drive-by browser infection? I've tried all kinds of searches for these URLs and other keywords, but I've had no luck finding anything conclusive about whether they're malicious or what they are:

        50.116.62.47
        chromeServerV45.js
        96.126.107.154
        amz/google.php

    The only extensions I have installed are either widely used or written by myself. But something else is strange, and I'm not sure if it's just a coincidence: I updated my Windows Chrome browser today to version 23.0.1271.64 m, and now my Extensions tab as well as my Settings tab are blank, so I can't try disabling my extensions. Here's some discussion I've been able to find so far but haven't really been able to make sense of: for 96.126.107.154: "anomalous-javascript-pt2"

    Read the article

  • nginx rewrite regex for API versioning

    - by MSpreij
    What I want is for the first form to be turned into the second:

        /widget              => /widget/index.php
        /widget/             => /widget/index.php
        /widget?act=list     => /widget/index.php?act=list
        /widget/?act=list    => /widget/index.php?act=list
        /widget/list         => /widget/index.php?act=list
        /widget/v2?act=list  => /widget/v2.php?act=list
        /widget/v2/?act=list => /widget/v2.php?act=list
        /widget/v2/list      => /widget/v2.php?act=list

    v2 could also be v45; basically "v\d+". act ("list" in this case) can have many values, and more will be added. Any additional query parameters would just be passed on with $args, I guess. Basically, URLs not specifying the version will go to index.php, which can then decide which specific version file to include. What I am afraid of is loops -- this should sit in location /widget { ... }, right? (As for putting the version of the API in the URL: I'm not trying to be RESTful, and the target audience is small.) Resources on how to do this entirely in index.php using "routers" are also welcome.
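
    One possible shape for this, as a sketch (untested; it assumes a separate "location ~ \.php$" block handles the rewritten URIs -- as a regex location it takes precedence once "last" re-enters location matching, so these rules don't loop back on themselves). nginx automatically appends the original query string after any args in the replacement, so ?act=list and friends are carried along:

        location /widget {
            rewrite ^/widget/(v\d+)/?$        /widget/$1.php            last;
            rewrite ^/widget/(v\d+)/([^/]+)$  /widget/$1.php?act=$2     last;
            rewrite ^/widget/?$               /widget/index.php         last;
            rewrite ^/widget/([^/]+)$         /widget/index.php?act=$1  last;
        }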

    Read the article

  • Nginx try_files or else continue matching against locations?

    - by Yang
    I'm wondering whether this is possible with nginx: I just added a directory with a bunch of HTML files (foo.html, bar.html) that I'd like to serve as /foo, /bar, etc. If the URL doesn't match a file name, I'd like to fall back to whatever the next best matching location would be. So I have:

        # This block is newly added.
        location ~ ^/([^/]+)$ {
            default_type text/html;
            alias /blah/$1.html;
        }

        # Our long list of existing subsystems below....
        location /subscribe {
            proxy_pass http://127.0.0.1:5000;
        }

        location /upload {
            proxy_pass http://127.0.0.1:8090;
            proxy_read_timeout 99999;
        }

        location ~ /(data|garbage|blargh).* {
            proxy_pass http://127.0.0.1:8090;
            proxy_read_timeout 99999;
            auth_basic text;
            auth_basic_user_file /etc/nginx/htpasswd;
        }
        ....

    The problem is that the first regex now eats up the URLs that would've gone to other locations, as per the documented behavior of location. One approach is to maintain the full explicit list of files in the first location block, but this list is quite large and always changing. Is there a way to check whether the file exists first, and if not, continue with what would've been the next-best location match? I took stabs at using try_files (including using a @fallback and nesting locations in there), but I don't think it's capable of doing this. However, I thought I'd ask here in case I'm missing something. (Or maybe there's another, better approach altogether.)
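
    One workaround sketch that sidesteps the regex-precedence problem: make the HTML block the least-specific prefix location instead of a regex. The longer /subscribe and /upload prefixes and the existing regex location then keep winning as before, and only leftover URLs fall through to the flat HTML files (this assumes a plain 404 is acceptable when no file matches, rather than some further fallback):

        location / {
            root /blah;
            default_type text/html;
            # serves /foo from /blah/foo.html when it exists
            try_files $uri.html =404;
        }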

    Read the article

  • IIS 404 custom error

    - by Greg B
    I've deployed an ASP.NET 3.5 app to a 64-bit Windows 2003 R2 server. In the web.config I have the following:

        <customErrors mode="RemoteOnly" defaultRedirect="/404/">
          <error statusCode="404" redirect="/404/"/>
          <error statusCode="500" redirect="/500/"/>
        </customErrors>

    In the website properties in IIS Manager I have set the 404 and 500 errors to Type = "URL" and the same URLs as in the web.config. I have a wildcard application map to the .NET 2.0 aspnet_isapi.dll with "Verify file exists" turned off. If I try to hit a fake .aspx file, I successfully get sent to the 404 page; I believe this is because there is an explicit mapping for .aspx to the .NET DLL. If I try to access a fake directory, I simply receive a plain text response saying:

        The system cannot find the file specified.

    It would appear that these requests for directories are not being routed through the .NET pipeline, which is what I would expect (and need) to happen because of the wildcard application mapping. Any ideas?

    Read the article

  • Installing SilverStripe on 000webhost.com (free web host)

    - by benwad
    Hi, I'm trying to learn how to work with SilverStripe, so I extracted the tar file to my free hosting account. I then went to install.php and edited the permissions to meet the requirements set out there, but I still get two warnings from the 'webserver configuration' section:

        I can't tell what webserver you are running. Without Apache I can't tell if mod_rewrite is enabled.
        I can't tell whether mod_rewrite is running. You may need to configure a rewriting rule yourself.

    I looked in phpinfo() and mod_rewrite appears to be installed. I contacted the web host and they said it was to do with virtual directory paths, and that I should add 'RewriteBase /' to the top of my .htaccess file in the public_html directory. However, I did this and still had the same problem. The install.php script says that I can install it even with these warnings, but when I press 'install' it just refreshes the install.php page. It doesn't even overwrite the .htaccess file. 000webhost.com says they have successfully installed SilverStripe on their user accounts without much configuration, but I can't seem to find out how. EDIT: I managed to get to the next page, but now there is another warning which is stopping the install:

        Friendly URLs are not working. This is most likely because mod_rewrite isn't configured correctly on your site. Please check the following things in your Apache configuration; you may need to get your web host or server administrator to do this for you:
        * mod_rewrite is enabled
        * AllowOverride All is set for your directory

    I also get this error message from the server:

        Warning: unlink(mysite/_config.php) [function.unlink]: Permission denied in /home/a2716553/public_html/install.php on line 701
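
    A quick way to verify that mod_rewrite and AllowOverride are genuinely in effect on a shared host is a throwaway rule in the .htaccess (a sketch; the /rewrite-test path is hypothetical, purely for testing):

        RewriteEngine On
        RewriteBase /
        # if /rewrite-test serves install.php, rewriting works end to end
        RewriteRule ^rewrite-test$ install.php [L]

    The unlink() warning is a separate issue: removing a file requires write permission on the containing mysite/ directory itself, not just on the files inside it.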

    Read the article

  • Make Google Chrome's address bar prefer page titles to domain names when offering completions?

    - by Ryan Thompson
    I've recently switched from Firefox to Chrome, and the thing I miss most from Firefox is the "Awesome Bar" that suggests completions for what I type primarily based on page titles, and only secondarily based on domain names. Chrome offers both matching URLs and titles, just like Firefox, but Chrome seems to always prefer a matching domain name over a matching page title or a match in another part of the URL (besides the domain), no matter how many times I pass over the former for the latter. In fact, Chrome also prefers to suggest a search rather than match anything other than a domain name. So is there any hidden preference I can change to tell Chrome that I care more about page titles than domain names? Example: I want to go to Google Reader, so I press Control+L and begin typing "reader". The URL for Google Reader is http://www.google.com/reader/view/#overview-page, so the domain name is www.google.com, which does not contain the word "reader". So the first option Chrome suggests is either another site that has "reader" in the domain, or a search for "reader" with the default search engine. No matter how many times I scroll down and select Google Reader, Chrome never "learns" that that's what I want.

    Read the article

  • Apache directive for authenticated users?

    - by Alex Leach
    Using Apache 2.2, I would like to use mod_rewrite to redirect unauthenticated users to HTTPS if they are on HTTP. Is there a directive or condition one can test for whether a user is (not) authenticated? For example, I could have set up the restricted /foo location on my server:

        <Location "/foo/">
            Order deny,allow
            # Deny everyone, until authenticated...
            Deny from all

            # Authentication mechanism
            AuthType Basic
            AuthName "Members only"
            # AuthBasicProvider ...
            # ... Other authentication stuff here.

            # Users must be valid.
            Require valid-user
            # Logged-in users authorised to view child URLs:
            Satisfy any

            # If not SSL, respond with HTTP redirect
            RewriteCond %{HTTPS} off
            RewriteRule /foo/?(.*)$ https://%{SERVER_NAME}/foo/$1 [R=301,L]

            # SSL enforcement.
            SSLOptions FakeBasicAuth StrictRequire
            SSLRequireSSL
            SSLRequire %{SSL_CIPHER_USEKEYSIZE} >= 128
        </Location>

    The problem here is that every file, in every subfolder, will be encrypted. This is quite unnecessary, but I see no reason to disallow it. What I would like is for the RewriteRule to be triggered only during authentication; if a user is already authorised to view a folder, then I don't want the RewriteRule to be triggered. Is this possible? EDIT: I am not using any front-end HTML here. This is only using Apache's built-in directory browsing interface and its built-in authentication mechanisms. My <Directory> config is:

        <Directory ~ "/foo/">
            Order allow,deny
            Allow from all
            AllowOverride None
            Options +Indexes +FollowSymLinks +Includes +MultiViews
            IndexOptions +FancyIndexing
            IndexOptions +XHTML
            IndexOptions NameWidth=*
            IndexOptions +TrackModified
            IndexOptions +SuppressHTMLPreamble
            IndexOptions +FoldersFirst
            IndexOptions +IgnoreCase
            IndexOptions Type=text/html
        </Directory>

    Read the article

  • Installing SilverStripe on 000webhost.com (Free web host)?

    - by benwad
    Hi, I'm trying to learn how to work with SilverStripe, so I extracted the tar file to my free hosting account. I then went to install.php and edited the permissions to meet the requirements set out there, but I still get two warnings from the 'webserver configuration' section:

        I can't tell what webserver you are running. Without Apache I can't tell if mod_rewrite is enabled.
        I can't tell whether mod_rewrite is running. You may need to configure a rewriting rule yourself.

    I looked in phpinfo() and mod_rewrite appears to be installed. I contacted the web host and they said it was to do with virtual directory paths, and that I should add 'RewriteBase /' to the top of my .htaccess file in the public_html directory. However, I did this and still had the same problem. The install.php script says that I can install it even with these warnings, but when I press 'install' it brings me to a page with the following errors:

        Friendly URLs are not working. This is most likely because mod_rewrite isn't configured correctly on your site. Please check the following things in your Apache configuration; you may need to get your web host or server administrator to do this for you:
        * mod_rewrite is enabled
        * AllowOverride All is set for your directory

    I also get this error message from the server:

        Warning: unlink(mysite/_config.php) [function.unlink]: Permission denied in /home/a2716553/public_html/install.php on line 701

    000webhost.com says they have successfully installed SilverStripe on their user accounts without much configuration, but I can't seem to find out how.

    Read the article

  • Cannot 301 redirect with IIS URL Rewrite Module

    - by Justin
    I am trying to troubleshoot an issue with the URL Rewrite Module on IIS 7. I migrated a WordPress blog over to BlogEngine.NET. There were only about 5 entries that I wanted to 301-redirect to the new blog, so I wanted to simply create 5 exact-match redirect rules using the rewrite module. For some reason the exact-match rule never seems to take effect; I always get a 404 error when the original URL is navigated to. I verified that my exact-match pattern matches the existing backlinks, and it does. I then tried a simple test and got the same behavior -- no redirection. I created a page, test.html, on my site, and then a second page, test2.html. So my exact-match pattern is "http://www.mydomain.com/test.html", and the rule is supposed to do a 301 redirect to "http://www.mydomain.com/test2.html". The redirect never happens. I created the rule following the steps on this page: http://learn.iis.net/page.aspx/461/creating-rewrite-rules-for-the-url-rewrite-module/ -- I don't see that I left out a step. After applying the rule I've even gone as far as doing an iisreset to make sure it would be in effect, but still no luck. Any thoughts on what I might have left out? (Note: my rewrite rules don't actually include the quotes around them; I had to add those since ServerFault thinks I am trying to spam the system with multiple URLs.)
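
    One detail that trips people up: the URL Rewrite Module matches its pattern against the request path relative to the site root -- no scheme, no hostname, no leading slash -- so an exact-match pattern written as a full http://www.mydomain.com/... URL can never match. A sketch of the test rule as it might appear in web.config (using the poster's test pages):

        <rewrite>
          <rules>
            <rule name="Redirect test page" stopProcessing="true">
              <match url="^test\.html$" />
              <action type="Redirect" url="/test2.html" redirectType="Permanent" />
            </rule>
          </rules>
        </rewrite>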

    Read the article

  • Mac "Steam needs to be online to update" - 404 fetching *_osx.zip.*

    - by Chris Boyle
    Since yesterday evening, when I launch Steam on OS X, a self-update progress bar appears instead (at 0 of 30 MB or so). This bar does not advance, and an error dialog appears:

        Steam needs to be online to update
        Please confirm your network connection and try again.

    The app then exits. This happens whether wifi or ethernet or both are connected, and pings to the outside world succeed throughout. If I look at the logs in Console, they are very similar to this example (though that's not mine). Specifically:

        Success! http://store.steampowered.com/public/client/steam_client_osx?date=718277
        [...]
        Failed! http://cdn.store.steampowered.com/public/client/breakpad_osx.zip.27f59114a86fcd50533e1d7b128f9300947f9969
        Failed! http://cdn.store.steampowered.com/public/client/steam_osx.zip.11a99384214805f2dd3be5084ba6be61d662f8ac
        Failed! http://cdn.store.steampowered.com/public/client/miles_osx.zip.d9fb546541f59c1fdd03962a605236b1021abab8

    Requesting the first URL successfully returns some data, including the filenames of the latter three, and requesting any of those gives me a 404 (I've tried multiple clients on multiple continents). Searches on Google and Twitter show about 10-20 others having this problem in the past 24 hours, but hardly the angry mob I'd expect if the problem affected all Steam OS X users. Things that have already been tried, with no effect:

        * Switching between wifi and ethernet.
        * Killing all Steam processes including ipcserver.
        * Moving the ~/Library/Application Support/Steam/registry.vdf file away.
        * Requesting those URLs with other clients and from other locations.

    Interesting: the first URL with the date parameter returns the same content even without that parameter (and would thus lead to the same 404s), suggesting that the problem is not specific to coming from a particular currently-installed version of Steam.

    Read the article

  • Firefox 29 - how do I delete history entries visited fewer than x times

    - by lousyuser
    Context: I've been using my Firefox profile for a couple of years now, and my history file has naturally become huge. I have Firefox Sync set up between my main desktop PC and my laptop. Hardware configs:

        PC: i5-3450, 8 GB DDR3 RAM, Crucial M4 128 GB SSD
        laptop: Pentium SU4100, 4 GB DDR3 RAM, WD 5400 rpm HDD

    Accessing history entries when typing into the Awesome Bar on my desktop takes quite a long time despite the decent config; the laptop is even slower. The experience is quite unresponsive. I figured that if I cleared the history up a little, I might avoid having to create a new profile to speed things up. The question itself, to illustrate: Is there a way to delete all history entries that have been visited fewer than x (let's say 5) times and, at the same time, whose most recent visit is fewer than y (let's say 120) days old? As far as I know the history file is some kind of SQL database, but I'm not really sure how the data is stored, whether there's a "safe" way to edit it, and what the query to do what I need would look like. Thanks in advance for any help. I kept browsing through previous SuperUser questions to see if I could find relevant information:

        "In my Firefox profile directory, there is a file named places.sqlite. Opening it with sqlite reveals (amongst others) the tables moz_places and moz_historyvisits. It seems that moz_historyvisits uses the primary key of moz_places to refer to the URLs."

    As I'm unfamiliar with databases, I don't really understand how the two tables mentioned in the quote are related. [screenshot of part of the tables] I've noticed the visit_count is in a standard format, making it easy to work with. The last_visit_date looks encrypted to my naked eye, but I can't see in which way. Hope that helps; I'm at my wits' end.
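
    For what it's worth, last_visit_date isn't encrypted: Firefox stores it as microseconds since the Unix epoch, which is why the values look opaque. A sketch of the kind of SQLite queries involved, assuming the standard schema where moz_historyvisits.place_id references moz_places.id (run with Firefox closed, on a backup copy of places.sqlite first; the sketch purges entries whose last visit is more than 120 days old -- flip the comparison if "fewer than 120 days old" really is the intent):

        -- remove the visit rows for rarely-visited, stale pages...
        DELETE FROM moz_historyvisits
        WHERE place_id IN (
            SELECT id FROM moz_places
            WHERE visit_count < 5
              AND last_visit_date < (strftime('%s','now') - 120*86400) * 1000000
        );
        -- ...then the pages themselves
        DELETE FROM moz_places
        WHERE visit_count < 5
          AND last_visit_date < (strftime('%s','now') - 120*86400) * 1000000;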

    Read the article

  • What's wrong with my VirtualHost?

    - by johnlai2004
    I have the following VirtualHost:

        # filename: /etc/apache2/sites-available/ccbbbcc
        <VirtualHost 1.1.1.1:80>
            ServerAdmin [email protected]
            ServerName ccbbbcc.com
            ServerAlias www.ccbbbcc.com
            DocumentRoot /srv/www/ccbbbcc/production/public_html/
            ErrorLog /srv/www/ccbbbcc/production/logs/error.log
            CustomLog /srv/www/ccbbbcc/production/logs/access.log combined
        </VirtualHost>

    And then I also have:

        # filename: /etc/apache2/sites-available/default
        <VirtualHost 1.1.1.1:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www/
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            blah blah blah

    How come when I type http://1.1.1.1 into my browser, it takes me to http://ccbbbcc.com? Even when I point new URLs at the IP 1.1.1.1, webpages are served from http://ccbbbcc.com. Why is ccbbbcc.com overriding all my other virtual hosts? Why am I unable to serve pages from the /var/www directory? I've made sure to use a2ensite and to restart Apache. This is what my /etc/apache2/ports.conf looks like:

        NameVirtualHost 1.1.1.1:80
        Listen 80
        Listen 443
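
    On the "why": with name-based virtual hosts, Apache serves the first vhost loaded for an address whenever a request (such as one for the bare IP) matches no ServerName or ServerAlias, and Debian-style setups read sites-enabled in alphabetical order, so "ccbbbcc" sorts ahead of "default" and becomes the catch-all. A sketch of the usual convention for making the default site win instead:

        # "000-" makes the symlink sort, and therefore load, first
        mv /etc/apache2/sites-enabled/default /etc/apache2/sites-enabled/000-default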

    Read the article

  • I accidentally hijacked my localhost

    - by Zach L
    Opening localhost in the browser points to a local webpage (examplePage) after I played with some config files a while back, and I can't figure out how to restore the default behavior. Background: I have XAMPP installed on my Windows 7 machine, and a webpage at c:/xampp/htdocs/examplePage. A couple of weeks ago, I was on a mission to get site-root-relative URLs (/resource) to work, so I played around with a bunch of Apache conf files, including httpd.conf and httpd-vhosts.conf, and also messed with the Windows hosts file. I gave up at some point, didn't document exactly what I did, and have since probably forgotten some of it. Many of my changes stemmed from suggestions in this StackOverflow post. What I've tried:

        * I commented out my additions to the hosts file.
        * I turned off XAMPP (thus hopefully negating any effect of the Apache config files).
        * I reverted to my original DocumentRoot in httpd.conf anyway (xampp/htdocs).

    localhost still displays examplePage, even with XAMPP turned on (my reverted DocumentRoot isn't taking effect). Does anyone know what I may have done and how I can fix it? Update: It's been resolved -- thank you all so much. In Task Manager there were a couple of instances of httpd.exe (Apache HTTP Server). I ended these, opened XAMPP, and restarted Apache. All references to examplePage in my .conf files that I could find had already been commented out or removed, so I imagine the old versions were still in effect for some reason, and manually ending the Apache processes fixed this. As a point of interest, it's still a mystery why those processes were running -- I cannot reproduce the situation. I must've stumbled upon a XAMPP bug of some sort.

    Read the article

  • Limiting access in Silverlight\Pivotviewer

    - by sparaflAsh
    I'm going to deploy a PivotViewer application. As some of you might know, this Silverlight application loads a .cxml index file for a group of images. My requirement is to make the .cxml file and the image files inaccessible to the user. Until now, without that requirement, I would code it like this in C#, with the file hosted in the document root:

        _cxml = new CxmlCollectionSource(new Uri("http://www.myurl.it/Collection.cxml", UriKind.Absolute));

    This means my cxml, and therefore the images, are available over HTTP to everyone who knows the URI. I'm a newbie at server configuration, so any help or hint would be deeply appreciated. Someone suggested I take the files out of the root, but it seems like I can't point to them from Silverlight if they are not a URL -- at least I didn't manage to understand how. Someone else suggested I play with the web.config file to hide the URLs, but I don't really know where to start. My question is: what's the best practice to hide my stuff? Obviously I can edit the question if you need more details.

    Read the article

  • IIS7 - how to place application in a folder inside application web site

    - by Nir
    I have a static web site with a blog (an ASP.NET application); the blog is in a subdirectory of the web site, so:

        example.com/, example.com/Something.htm, example.com/folder/somefile.htm, etc. -- are all static files
        example.com/blog, example.com/blog/categories.aspx, example.com/blog/2011/11/09/post-name.aspx, etc. -- all go to the blog app

    I'm upgrading the static part of the web site to a dynamic site (also an ASP.NET application), and the blog is incompatible with the new app (the app needs handlers and modules loaded in web.config that don't work with the blog). Also, I have to keep all the old URLs the same, so I can't move the blog to a subdomain or the new app to a folder, and the blog generates links based on its folder, so clever redirection tricks wouldn't work. Is there a way to place an ASP.NET application in a folder inside another application (either as a real or virtual folder) so that the root web.config settings don't apply to the application folder? Or some other trick I didn't think of? The system is running IIS7 on Windows Server 2008 64-bit, and I have full control over the server's configuration. I can't modify the blog's source code, but I can edit its web.config and other configuration. I can modify the source of the new application, but I can't make it compatible with the blog (most of its usefulness comes from a 3rd-party library that is not compatible with the blog). The blog is an ASP.NET 3.5 WebForms application; the new root application is an ASP.NET 4.0 MVC application.
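
    One mechanism worth knowing here: ASP.NET lets a parent application stop its web.config sections from flowing down into child applications via inheritInChildApplications on a location element. A sketch of how the root site's web.config might shield the /blog child app (the section contents are placeholders):

        <!-- in the root application's web.config -->
        <location path="." inheritInChildApplications="false">
          <system.web>
            <!-- the new app's handlers, modules, etc. go here -->
          </system.web>
        </location>

    The blog folder would then be configured in IIS as its own application, so it gets its own configuration scope (and, given the 3.5 vs 4.0 split, its own application pool).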

    Read the article

  • Google responds differently to two identical nginx setups and 200 codes; any ideas?

    - by Yuji Tomita
    I'm rather confused... I have a linode.com VPS which was cloned recently, so the settings are the same between the nginx servers; one lives on a dev subdomain, one on www. I'm trying to run a Google experiment on my live server, which claims:

        Web server rejects utm_expid. Your server doesn't support added query arguments in URLs.

    My logs on the dev server, where it works, show:

        74.125.186.32 - - [13/Sep/2012:13:33:45 -0700] "GET /product/iphone-case/?utm_expid=25706866-0 HTTP/1.1" 200 12521 "-" "Google_Analytics_Content_Experiments
        74.125.186.32 - - [13/Sep/2012:13:33:45 -0700] "GET /product/iphone-case/?ab_reviews=True&utm_expid=25706866-0 HTTP/1.1" 200 14679 "-" "Google_Analytics_Content_Experiments

    My production server shows Google making a second request:

        74.125.186.41 - - [13/Sep/2012:13:34:49 -0700] "GET /product/iphone-case/?ab_reviews=on&utm_expid=25706866-1 HTTP/1.1" 200 12104 "-" "Google_Analytics_Content_Experiments
        74.125.186.41 - - [13/Sep/2012:13:34:49 -0700] "GET /product/iphone-case/?utm_expid=25706866-1 HTTP/1.1" 200 12122 "-" "Google_Analytics_Content_Experiments
        74.125.186.41 - - [13/Sep/2012:13:34:49 -0700] "GET /product/iphone-case/ HTTP/1.1" 200 12522 "-" "Google_Analytics_Content_Experiments   <--- a second request, without the query string, for some reason

    I'm not sure how Google determines that it needs to send a second request without the query string; the original request has clearly been sent a 200 OK status response. Does anybody have any suggestions where to look next? The HTML on the two pages (compared by diff) is exactly the same.

    Read the article

  • Apache2 Virtualhost practice config issue

    - by sisko
    I am practicing virtual host configuration. In my /var/www directory I have created 3 directories called test1, test2 and test3, each of which has a simple index.php script in it, i.e. test1/index.php, etc. In /etc/apache2/sites-available/test1 I have the following configuration:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName test1
            DocumentRoot /var/www/test1
            <Directory />
                Options FollowSymLinks
                AllowOverride All
            </Directory>
            <Directory /var/www/test1/>
                Options -Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    All the other sites have a similar VirtualHost definition. I have enabled the sites (the symlinks appear in sites-enabled) and I have restarted Apache. However, when I visit localhost/test1, I get a 404 error. My error log shows the following message:

        [Wed Oct 23 06:22:52 2013] [error] [client 127.0.0.1] File does not exist: /var/www/test1/test1

    I don't know why I get the double test1/test1 in the error log. I'm trying to find the right VirtualHost setup which will allow all 3 test websites to be served from their URLs, i.e. test1/index.php, test2/index.php and test3/index.php. Can anyone help me out, please?
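
    Worth noting about the doubled path: with ServerName test1, Apache selects the vhost by hostname, not by URL path, so the site would be visited as http://test1/ rather than http://localhost/test1. A request for localhost/test1 lands in whichever vhost wins as the default (apparently test1 here) and is then looked up as the path /test1 under that vhost's DocumentRoot -- hence /var/www/test1/test1. A sketch of the /etc/hosts entry that makes the three hostnames resolve locally:

        127.0.0.1   test1 test2 test3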

    Read the article
