Search Results

Search found 33834 results on 1354 pages for 'site column'.


  • Rack processes taking over CPU under Passenger

    - by pjmorse
    I have a Spree site running the following stack: Nginx 1.0.8, Passenger 3.0.9, Ruby 1.9.2-p290, Rack 1.3.6, Rails 3.1.4, Spree 0.70.5. I recently upgraded from Spree 0.70.3, which also brought a Deface upgrade from 0.7.x to 0.8.0. Since then things have been very unstable. Recently we've seen some CPU-hogging processes which drive load up on the server and grind the whole thing to a stop. They're Rack processes and it looks like Passenger is starting them; they're owned by the site-runner user, an unprivileged user who owns the application code. (Passenger automatically runs the site code as the user who owns it.) If I restart Nginx and kill the runaway processes, it helps for a while, but eventually similar processes return and bog things down again. How can I figure out what's starting these processes, what they're trying to do, and how to stop them?
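
    A first-pass diagnostic sketch, assuming shell access to the server (passenger-status and passenger-memory-stats ship with Passenger 3.x; the PID 12345 is a placeholder for one of the runaway processes):

        # Map the runaway PIDs back to Passenger's application groups and process counts
        passenger-status
        passenger-memory-stats

        # Inspect what a specific runaway Rack process is doing
        ps -o pid,etime,pcpu,command -p 12345
        strace -p 12345 -f -e trace=network,read,write   # live system-call activity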

    Read the article

  • Recommend a company to take care of webserver and WordPress management?

    - by javipas
    I'm interested in setting up a professional WordPress site, but I'd like to explore the possibility of leaving the management of the webserver, and even of WordPress itself, to a company that guarantees great availability and performance of the site (load times, security) and even SEO. My site is currently running on another platform, but I plan on migrating in the next 4 weeks. I've usually done this myself, but I'd like to focus on the content, so I don't have to mess with webserver/mysql/php configs in order to get good performance. Is there some (maybe hosting) company that is dedicated to this? Would it be better to hire a sysadmin with experience in those matters?

    Read the article

  • Can I use a 302 redirect to serve up static content from a URL with escaped_fragment?

    - by Starfs
    We would like to serve up SEO-friendly Ajax-driven content. We are following this documentation. Has anyone ever tried to write a 302 redirect into the .htaccess file that takes the ?_escaped_fragment_= string and sends it to a static page, for example /snapshot/yourfilename/? How will Google react to this? I've gone through the documentation and it's not very clear. The quote below is from Google's documentation; this is what I found. I'm not sure whether they are saying that you can redirect the _escaped_fragment_ URL to a different static page, or whether this is about redirecting the hashbang (#!) URL to static content. Thoughts? From Google's site: Question: Can I use redirects to point the crawler at my static content? Redirects are okay to use, as long as they eventually get you to a page that's equivalent to what the user would see on the #! version of the page. This may be more convenient for some webmasters than serving up the content directly. If you choose this approach, please keep the following in mind: Compared to serving the content directly, using redirects will result in extra traffic because the crawler has to follow redirects to get the content. This will result in a somewhat higher number of fetches/second in crawl activity. Note that if you use a permanent (301) redirect, the url shown in our search results will typically be the target of the redirect, whereas if a temporary (302) redirect is used, we'll typically show the #! url in search results. Depending on how your site is set up, showing #! may produce a better user experience, because the user will be taken straight into the AJAX experience from the Google search results page. Clicking on a static page will take them to the static content, and they may experience avoidable extra page load time if the site later wants to switch them to the AJAX experience.
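
    For reference, a minimal .htaccess sketch of the kind of rewrite being described (mod_rewrite assumed available; the /snapshot/ path is just the example from the question, not a required location):

        RewriteEngine On
        # If the query string carries _escaped_fragment_=<fragment>,
        # send the crawler to a pre-rendered snapshot with a temporary (302) redirect
        RewriteCond %{QUERY_STRING} ^_escaped_fragment_=(.+)$
        RewriteRule ^ /snapshot/%1/? [R=302,L]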

    Read the article

  • AdSense (reports) and custom channels

    - by RobbertT
    Please help me to further understand custom channels. As Google says, it is a way to map your ads, but I still have a few questions: Is it correct that a single custom channel per ad is not very useful, since you can specify ad blocks in the AdSense reports? I have multiple ads in multiple custom channels. After this I created 1 custom channel and added all the ads to it. I made this channel targetable, so people can target all the ads at once through this channel. Is this a good way to do it? In other words, is it possible to have ads in multiple custom channels (without targeting, just for analyzing) and then create 1 custom channel with targeting that embraces all the (desired) ads? Why is it not possible for me to analyze custom channels (or ad blocks & formats) per site in the AdSense reports? Or am I doing something wrong? If not, do I have to create different custom channels per site to see how certain ads are doing at the site level?

    Read the article

  • Which programming language should I choose? I want to build this website... [closed]

    - by Goma
    Assume that I will start with just a photo sharing website. Every user can add comments to any photo. After that the site will contain news (general news); the admin can add any news, and the moderators as well, while the users can also add comments on this news. The website will also provide a photo uploader, so every user will have up to 20 MB to upload any photos they want. Other users may or may not be able to see these photos, depending on the option the main user chose (whether he wants to publish his photos or not). The site should also have a small forum, which provides the ability for the admin to add categories and for users to add topics and replies for each topic in these categories. These are the things that I can think of now, but the website will add other features and services later on. Can you tell me now which programming language can help me to do all that? I need a programming language that provides the following: 1- fast page loads for the site. 2- easy to add more functions quickly and easy to edit code for any reason. 3- secure. 4- fast at displaying information from the database.

    Read the article

  • design question for transportation agency/workflow system

    - by George2
    I am designing a transportation agency/workflow system, and it includes 3 types of people: customers who request to transport some stuff, drivers who deliver the stuff, and a truck manager who manages transport source/destination truck coordination and communicates with/organizes the drivers. The system is expected to be a web site, and the 3 kinds of people could use the web site to submit requests, accept requests, monitor the status of a specific transport, etc. The web site is more like an open agency or a workflow system. I am wondering whether there are any existing technologies, tools or projects (better to be open source, but not a must) on which I could build my application faster? I prefer to use .Net technologies, but that is not a must. Thanks in advance!

    Read the article

  • IIS 7 SSO stops working during high CPU load? [migrated]

    - by DanB
    On our IIS7 site (Windows 2008 Server), we have set up single sign-on (SSO). It seems to work fine most of the time, but when the CPU load becomes high, SSO authentication completely stops working. I did some research and tried this suggestion to increase the max number of worker processes in the default app pool, but the increase did not help. Some details: The site is a WordPress blog. The server has plenty of RAM (2 GB) and free disk space. SSO is achieved by putting a copy of the WordPress login page (wp-login.php) into a subfolder below the root that has anonymous authentication disabled, and then redirecting the browser to it. This was the recommendation of Microsoft given to our consultants. To increase CPU load for testing, I have three scripts hit the home page simultaneously, over and over. This drives CPU to 100%. When these scripts are running, SSO authentication simply doesn't happen. As soon as I stop the scripts, SSO works again. (I should mention that the SSO problem also happens when many users visit the site at once....) The WordPress database process (mysqld) is not stressed at all by the scripts. I would be happy to provide further diagnostics. Any help appreciated!

    Read the article

  • WCF Routing Service Filter Generator

    - by Michael Stephenson
    Recently I've been working with the WCF routing service, and in our case we were simply routing based on the SOAP Action. This is a pretty good approach for a standard redirection of the message when all messages matching a SOAP Action will go to the same endpoint. Using the SOAP Action also lets you be specific about which methods you expose via the router. One of the things which was a pain was the number of routing rules I needed to create, because we were routing for a lot of different methods. I could have explored the option of using a regular expression to match the message to its routing, but I wanted to be very specific about what's routed and not risk exposing methods I shouldn't via the router. I decided to put together a little spreadsheet so that I can generate part of the configuration I would need to put in the configuration file, rather than having to type it by hand. To show how this works, download the spreadsheet from the following URL: https://s3.amazonaws.com/CSCBlogSamples/WCF+Routing+Generator.xlsx In the spreadsheet you will see that the squares in green are the ones which you need to amend. In the picture below you can see that you specify a prefix and suffix for the filter name, the core namespace of the web service you're generating routing rules for, and the WCF endpoint name which you want to route to. In column A you will see the green cells where you add the list of method names which you want to include routing rules for. The spreadsheet will work out what the full SOAP Action would be, and then the name you will use for that filter in your WCF routing filters. In column D the spreadsheet will have generated the XML snippet which you can add to the routing filters section in your configuration file. In column E the spreadsheet will have created the XML snippet which you can add to the routing table to send messages matching each filter to the appropriate WCF client endpoint, which forwards the message to the required destination. Hopefully you can see that with this spreadsheet it would be very easy to produce accurate XML for the WCF routing configuration if you had a large number of routing rules. If you have additional methods in other services, you can simply copy the worksheet and add multiple copies to the Excel workbook. One worksheet per service would work well.
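
    For readers who just want to see the shape of the two snippets being generated, here is a hand-written sketch of the routing configuration (the filter name, namespace and endpoint name are placeholder values, not output from the spreadsheet):

        <system.serviceModel>
          <routing>
            <filters>
              <!-- One filter per exposed method, matching on the full SOAP Action -->
              <filter name="OrderServiceFilter_GetOrder"
                      filterType="Action"
                      filterData="http://mycompany.com/services/IOrderService/GetOrder" />
            </filters>
            <filterTables>
              <filterTable name="routingTable">
                <!-- Forward messages matching the filter to the named WCF client endpoint -->
                <add filterName="OrderServiceFilter_GetOrder" endpointName="OrderServiceEndpoint" />
              </filterTable>
            </filterTables>
          </routing>
        </system.serviceModel>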

    Read the article

  • SharePoint doesn't support this authentication scheme.

    - by EtherDragon
    I have a new Windows Phone 7 phone, and I'm trying to investigate how to connect the Office application to our SharePoint site(s). In the Office application on Phone 7, I flip to the SharePoint page. I go to open a URL and enter the URL for one of my sites, which uses default authentication (Windows Auth). I get a message: "Can't open. SharePoint doesn't support this authentication scheme. For assistance, contact the person who manages this SharePoint site" (that would be me). "You can try opening the content in your web browser instead." When opening it in my browser, I can access the content without any problem (Windows Auth passes). Does anyone have any source material on what I should do to my SharePoint site to "support this authentication scheme"? Note: I am the administrator of our SharePoint server farm(s).

    Read the article

  • Run a single PHP codebase under multiple domains

    - by Acharya
    Hi all, I have a PHP code/site at xyz.com. Now I want to run the same site under multiple domains, meaning that when somebody opens domain1.com, domain2.com, domain4.com, and so on, it should run the code/site that is at xyz.com. I know one way to do this: I can host all these domains on the server where xyz.com is hosted, so all domains will point to the same piece of code/site. But with the above solution I need to set up each domain on the server manually. Is there any other way to do this, as I want to add domains dynamically? Thanks in advance!
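
    If the box happens to run nginx (a guess; Apache has the same idea with a default VirtualHost plus ServerAlias), a hedged sketch of a catch-all server block, so that any domain whose DNS points at the server is served by the xyz.com code without further config changes:

        server {
            listen 80 default_server;
            server_name _;                  # matches any Host header not claimed by another server block
            root /var/www/xyz.com;          # placeholder path to the shared codebase
            index index.php;
            # ... existing PHP/FastCGI configuration for xyz.com ...
        }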

    Read the article

  • Pages partially load on rapid refresh

    - by user101570
    I recently set up a VPS slice with 256MB to run a LAMP stack (Ubuntu 11.04, Apache2, Mysql, PHP5). So far I'm only running a simple Wordpress site on an IP-based virtual host I set up. The performance is excellent, but I've noticed that if I send multiple HTTP requests from the same IP in a short time period, only partial pages are rendered. Then if I wait a bit and refresh the page, the entire page loads again. I noticed this behaviour when accessing the site from two browsers from my office desktop, but it also presents itself if I quickly navigate the site from a single browser (any browser). I'm guessing this is an Apache phenomenon, as the pages are rendered correctly except under the conditions above, but perhaps I'm wrong here. Could it be my hosting company with some kind of DOS protection in place? As a relative Linux/server noob, I'd really appreciate any insight into what settings in Apache could explain this behaviour, and how I might go about changing it.

    Read the article

  • Summary of usage policies for website integration of various social media networks?

    - by Dallas
    To cut to the chase... I look at Twitter's usage policy and see limitations on what can and can't be done with their logo. I also see examples of websites that use icons that have been integrated with the look and feel of their own site. Given Twitter's policy, for example, it would appear that legal conversations/agreements would need to take place to do this, especially on a commercial site. I believe it is perfectly acceptable to have a plain text button that simply has the word "Tweet" on it, that has the same functionality. My question is if anyone can provide online (or other) references that attempt to summarize what can and can't be done when integrating various social networks into your own work? The answer I will mark as the correct one will be the one which provides the best resource(s) giving the best summaries of what can and can't be done with specific logos/icons, with a secondary factor being that a variety of social networking sites are addressed in your answer. Before people point to specific questions, I am looking for a well-rounded approach that considers a breadth of networks and considerations. Background: I would like to incorporate social media icons and functionality, but would like to consider what type of modifications can be done without needing to involve lawyers. For example, can I bring in a standard Facebook logo, but incorporate my site color into the logo? Would the answer differ if I maintained their color, but add in a few pixels of another color to transition? I am not saying I want to do this, but rather using it as an example.

    Read the article

  • Have main website hosted on 3rd party while keeping Google Sites for Users

    - by vinnybozz
    Hi, I want a third party to host my main site with PHP, MySQL, etc., but I don't know which DNS records to modify. Is it possible to have the following mappings: www.example.com = 3rd party hosting; blog.example.com = other 3rd party hosting; mail.example.com = Google Mail; docs.example.com = Google Docs; sites.example.com = Google Sites; sites.example.com/internal-site = Google Sites internal site; ... Right now in TotalDNS, I have www = ghs.google.com. If I modify only this record to point to the IP provided by the 3rd party hosting, is it going to work? Do I also need to add name servers, or remove the ones Google added? Thanks for the help.
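
    A hedged sketch of what the record set could look like while keeping the existing name servers (the IP addresses are placeholders for the 3rd-party hosts; ghs.google.com is the usual Google Apps CNAME target already in use for www):

        www    IN  A      203.0.113.10      ; 3rd-party host serving the main PHP/MySQL site
        blog   IN  A      203.0.113.20      ; other 3rd-party host (or a CNAME to its hostname)
        mail   IN  CNAME  ghs.google.com.   ; Google Mail
        docs   IN  CNAME  ghs.google.com.   ; Google Docs
        sites  IN  CNAME  ghs.google.com.   ; Google Sites (sites.example.com/internal-site rides on this)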

    Read the article

  • IIS and PHP restrict IO permissions

    - by ULTRA_POROV
    I have PHP installed through a FastCGI module. Is there a way to restrict the module's (php.exe) read/write permissions to only the directory (+ subdirectories) of the IIS site that is calling it? I need this to prevent one IIS PHP site from having access to files outside its own directory. How can I do this? Is there a setting in php.ini or in the IIS configuration? I believe such a feature could exist, because when a file on the server is requested, the root path of the site is also known; all it would take is for IIS to pass this path to the PHP module, and the PHP module could, on its end, allow only I/O operations within this path. PS: I know it is possible to achieve this by using a different Windows account for each website; this is not an option.
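
    One commonly suggested approach is PHP's own open_basedir restriction, set per site. A sketch assuming PHP 5.3+ and that the sites live under C:\inetpub\wwwroot (paths are placeholders), using a per-path section in the server-wide php.ini:

        ; php.ini: settings in a [PATH=...] section apply only to scripts under that path
        ; and restrict all PHP file I/O to the listed directories
        [PATH=C:\inetpub\wwwroot\mysite]
        open_basedir = "C:\inetpub\wwwroot\mysite\;C:\Windows\Temp\"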

    Read the article

  • How to make a table structure for products to be available for both wholesale and retail?

    - by kmy
    Ignoring the different column details like colors, shapes, and sizes (I already have an idea of how to deal with that), I'm more interested in dealing with retail pricing, wholesale pricing (with or without a separate table), and possibly discounts. If anything, I want to know what else I should take into account. Will it just be adding a quantity column, two pricing columns, two discount columns, and a quantity limit? If I were to add a pricing table, it would be stored as a JSON file; would this be a bad idea? Please give any insight about this.
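
    A rough relational sketch of one way to model it (MySQL-flavoured, all names hypothetical): a separate price table keyed by tier and quantity break avoids hard-coding retail/wholesale columns and leaves room for per-tier discounts and limits.

        CREATE TABLE product (
            product_id    INT UNSIGNED PRIMARY KEY AUTO_INCREMENT,
            name          VARCHAR(255) NOT NULL
        );

        -- One row per pricing tier (retail, wholesale, ...) and quantity break
        CREATE TABLE product_price (
            price_id      INT UNSIGNED PRIMARY KEY AUTO_INCREMENT,
            product_id    INT UNSIGNED NOT NULL,
            tier          ENUM('retail','wholesale') NOT NULL,
            min_quantity  INT UNSIGNED NOT NULL DEFAULT 1,   -- quantity at which this price applies
            max_quantity  INT UNSIGNED NULL,                 -- optional per-tier quantity limit
            unit_price    DECIMAL(10,2) NOT NULL,
            discount_pct  DECIMAL(5,2)  NULL,                -- optional per-tier discount
            FOREIGN KEY (product_id) REFERENCES product(product_id)
        );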

    Read the article

  • Why do Google search results include pages disallowed in robots.txt?

    - by Ilmari Karonen
    I have some pages on my site that I want to keep search engines away from, so I disallowed them in my robots.txt file like this: User-Agent: * Disallow: /email Yet I recently noticed that Google still sometimes returns links to those pages in their search results. Why does this happen, and how can I stop it? Background: Several years ago, I made a simple web site for a club a relative of mine was involved in. They wanted to have e-mail links on their pages, so, to try and keep those e-mail addresses from ending up on too many spam lists, instead of using direct mailto: links I made those links point to a simple redirector / address harvester trap script running on my own site. This script would return either a 301 redirect to the actual mailto: URL, or, if it detected a suspicious access pattern, a page containing lots of random fake e-mail addresses and links to more such pages. To keep legitimate search bots away from the trap, I set up the robots.txt rule shown above, disallowing the entire space of both legit redirector links and trap pages. Just recently, however, one of the people in the club searched Google for their own name and was quite surprised when one of the results on the first page was a link to the redirector script, with a title consisting of their e-mail address followed by my name. Of course, they immediately e-mailed me and wanted to know how to get their address out of Google's index. I was quite surprised too, since I had no idea that Google would index such URLs at all, seemingly in violation of my robots.txt rule. I did manage to submit a removal request to Google, and it seems to have worked, but I'd like to know why and how Google is circumventing my robots.txt like that and how to make sure that none of the disallowed pages will show up in their search results. Ps. I actually found out a possible explanation and solution, which I'll post below, while preparing this question, but I thought I'd ask it anyway in case someone else might have the same problem. Please do feel free to post your own answers. I'd also be interested in knowing if other search engines do this too, and whether the same solutions work for them also.
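
    For context on the kind of solution alluded to above: robots.txt only blocks crawling, not indexing of a URL Google discovers through external links, so a commonly recommended complement is an explicit noindex signal on those URLs, which Google can only see if crawling is allowed. A sketch assuming Apache with mod_headers, placed in an .htaccess covering the redirector:

        # Allow crawling of the /email URLs, but tell search engines not to index them
        <IfModule mod_headers.c>
            Header set X-Robots-Tag "noindex, nofollow"
        </IfModule>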

    Read the article

  • Non-dynamic CMS [closed]

    - by user20457
    On some of the web sites I visit every day (news, sports, etc.), although the content changes very often (several times per day), the URLs always have an .html extension, which makes me think that the content has been generated once and then published as a static page, rather than generated on every call, or even cached in memory. For example, the fictitious site "mysports.com" has a "futbol.html" page; yesterday Messi got injured and they had another item to put on that page, so I presume they posted the new item in their CMS system, and a publishing action was automatically triggered afterwards that recreated "futbol.html" on a CDN with the new item and probably discarded the oldest one. Then the ETag changes and clients will get the new page if they try to access it. (The site is fictitious, but this is what I believe happened yesterday on the sports site I read.) This would fit the CQRS approach, and I presume they get huge performance from it. I know lots of CMSs (WP, Drupal, BlogEngine.net, DNN, etc.), but I have never seen any able to do this, or at least I was not aware of this feature. What are those kinds of CMS called? Which are the most well known? Cheers.

    Read the article

  • nginx caching per user agent

    - by Tuinslak
    I'm currently using nginx as reverse proxy with caching enabled. However, the main site has two different layouts, depending on the user-agent (mobile or not). I've tried something similar to this: # mobile users if ($http_user_agent ~* '(iPhone|iPod|mobile|Android|2.0\ MMP|240x320|AvantGo|BlackBerry|Blazer|Cellphone|Danger|DoCoMo|Elaine/3.0|EudoraWeb|hiptop|IEMobile)') { set $iphone_request '1'; } if ($iphone_request = '1') { proxy_cache mobile; } if ($iphone_request = '') { proxy_cache site; } proxy_cache_key "$scheme://$host$request_uri"; proxy_pass http://real-site.tld; However, nginx gives an error, stating proxy_cache can't be used in an if-structure. Any other way to serve from a different cache depending on the browser? Thanks, Tuinslak
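
    One hedged alternative that avoids the if blocks entirely: keep a single cache zone and fold the device class into the cache key via a map (the directives are standard nginx; the user-agent regex is abbreviated from the question):

        # In the http {} context
        map $http_user_agent $device_class {
            default                                             "desktop";
            ~*(iPhone|iPod|mobile|Android|BlackBerry|IEMobile)  "mobile";
        }

        # In the server/location block: same zone, but mobile and desktop variants
        # are cached under different keys
        proxy_cache      site;
        proxy_cache_key  "$scheme://$host$request_uri|$device_class";
        proxy_pass       http://real-site.tld;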

    Read the article

  • Restart single uWSGI application (when it's in emperor mode)

    - by Oli
    I'm running uWSGI in emperor mode to host a bunch of Django sites based on their individual configs. These are supposed to reload when it detects a change in the config file, and this largely works when I just touch the relevant uwsgi.ini file. But occasionally I'll mess something up in the Django site and the server won't load. Yeah, yeah, I should be testing better, but that's not really the point. When this happens, uWSGI seems to mark the site as dead and stops trying to run it (which seems to make sense). Even after I fix the underlying issue, no amount of touching will get that site's uWSGI process up and running. I have to reload the whole uWSGI server (knocking dozens of sites out at once for a few seconds). Is there a way to force uWSGI to just reload one of its sites?

    Read the article

  • .htaccess redirect not preserving http_referer header

    - by CodeToaster
    We're merging with another company, and we want to redirect content from their (Apache) website to our (IIS) site. When traffic arrives at our site, we inspect the HTTP_REFERER, and if the visitor was just redirected from the company's site that we just merged with, they'll be presented with a "splash" page announcing the merger. I've added the line... Redirect / http://www.oursite.com/ ...to their .htaccess, which works fine, except that when the browser is redirected it doesn't send the HTTP_REFERER header. I've tried redirecting with redirect codes 301, 302 and 307 (the default, I believe, is 302) and all have the same effect (redirects fine, but no HTTP_REFERER). Can anyone provide some insight into why HTTP_REFERER wouldn't be included?

    Read the article

  • Mitigating the 'firesheep' attack at the network layer?

    - by pobk
    What are the sysadmin's thoughts on mitigating the 'firesheep' attack for servers they manage? Firesheep is a new Firefox extension that allows anyone who installs it to sidejack any session it can discover. It does its discovery by sniffing packets on the network and looking for session cookies from known sites. It is relatively easy to write plugins for the extension to listen for cookies from additional sites. From a systems/network perspective, we've discussed the possibility of encrypting the whole site, but this introduces additional load on servers and interferes with site indexing, assets and general performance. One option we've investigated is to use our firewalls to do SSL offload, but as I mentioned earlier, this would require all of the site to be encrypted. What are the general thoughts on protecting against this attack vector? I've asked a similar question on StackOverflow; however, it would be interesting to see what the systems engineers think.

    Read the article

  • Combine several locations with regex in nginx

    - by AlexAtNet
    I have a dynamic number of Joomla installations in subfolders of the domain. For example: http://site/joomla_1/ http://site/joomla_2/ http://site/joomla_3/ ... Currently I have the following config that works: index index.php; location / { index index.php index.html index.htm; } location /joomla_1/ { try_files $uri $uri/ /joomla_1/index.php?q=$uri&$args; } location /joomla_2/ { try_files $uri $uri/ /joomla_2/index.php?q=$uri&$args; } location ~ \.php$ { fastcgi_pass unix:/var/run/php5-fpm/joomla.sock; ... } I'm trying to combine the joomla_N rules into one: location ~ ^/(joomla_[^/]+)/ { try_files $uri $uri/ /$1/index.php?q=$uri&$args; } but the server starts to return index.php as-is (it does not call php-fpm). It looks like nginx stops processing the regex rules after the first match. Is there any way to combine these rules with a regex like this?
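
    Regex locations are indeed evaluated in the order they appear and the first match wins, so a hedged sketch of a fix is simply to declare the \.php$ block before the combined Joomla rule (directives abbreviated from the question's own config):

        # Checked first, so *.php requests (including the internal redirects
        # produced by try_files) still reach PHP-FPM
        location ~ \.php$ {
            fastcgi_pass unix:/var/run/php5-fpm/joomla.sock;
            # ... remaining fastcgi_* directives as before ...
        }

        # Checked second: pretty URLs for any joomla_N subfolder
        location ~ ^/(joomla_[^/]+)/ {
            try_files $uri $uri/ /$1/index.php?q=$uri&$args;
        }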

    Read the article

  • Location-Based redirection and duplication in sub-directories affecting SEO

    - by Joshua
    I currently own the website www.xyz.com. The website has a sub-directory for each of the 3 target countries: .../en-US/ (United States), .../es-MX/ (Mexico), and .../es-DO/ (Dominican Republic). I have two main questions about this setup: Currently, the main domain/root (xyz.com) contains a blank index.php file, but I would like for a user to be redirected to one of the sub-directories based on their regional location. What is the best way to accomplish this? I have looked at using browser language-based redirection, but how would I know whether to direct a user to the MX or DO site if the browser language is set to Spanish? Is there a way to detect a user's geographic location? Also, the 3 websites are practically identical except that they have 3 unique color schemes and the US site is in English while the MX and DO sites are in Spanish. My problem is that I believe GoogleBot is penalizing/banning my site because the Spanish text on the MX and DO pages is nearly identical and is thus marked as duplicate/spam. Is there a way to avoid this?
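
    On the duplicate-content side, a sketch of the hreflang annotations Google documents for exactly this regional/language setup (the URLs below are placeholders built from the directories in the question):

        <!-- In the <head> of each page, repeated on all three versions -->
        <link rel="alternate" hreflang="en-US" href="http://www.xyz.com/en-US/" />
        <link rel="alternate" hreflang="es-MX" href="http://www.xyz.com/es-MX/" />
        <link rel="alternate" hreflang="es-DO" href="http://www.xyz.com/es-DO/" />
        <link rel="alternate" hreflang="x-default" href="http://www.xyz.com/" />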

    Read the article

  • FTP connects but files aren't visible when browsing

    - by YsoL8
    Hello. If this should be on that other site, please don't shoot me, as I can't remember the name or the URL. I have an FTP account in Dreamweaver that connects to the remote site and appears to be uploading files as normal. But when I browse to the location I can't see any new files or changes to the index page (I've uploaded index.php and connect.php); I'm getting a 404 page. I suspect the host directory is wrong, but looking at the file tree, I can't see the folder I'm supposed to be using, so I'm uploading to the apparent site root. Any guidance on this?

    Read the article

  • Hide directory contents from showing when accessing the URL directly

    - by SoLoGHoST
    On my site, if you browse to http://example.com/images/ the contents of the entire directory are shown as a full directory listing. How can I make it so that this doesn't happen when people browse directly to http://example.com/images/? Can I create an .htaccess file in that directory? Or is there a better way? I really don't want people being able to do this for the entire site (i.e. every directory on that site). What can I do to prevent this? I figure it's either something that has to be done in Apache or using a global .htaccess file placed in the public_html folder, perhaps? EDIT: I worked around this by adding an index.php file, but I still feel that security is an issue here; how can I fix this permanently?
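
    A minimal sketch of the .htaccess approach, assuming Apache with AllowOverride permitting Options; placed in public_html it covers every directory, placed in /images/ it covers just that one:

        # Disable automatic directory listings; a request for a directory with no
        # index file returns 403 Forbidden instead of listing its contents
        Options -Indexes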

    Read the article
