Search Results

Search found 24209 results on 969 pages for 'site'.

Page 193 of 969

  • How do I keep controversy in check?

    - by Aaron Digulla
    This is probably OT but it's less OT here than on any other SO site, so please bear with me. I'm working on a new project, votEm. The goal is to give independent candidates a platform to introduce themselves to get elected to a political office. My main reason is that today it's too expensive to run for office. Some politicians in the US spend as much as 30 million dollars (!) on a single campaign. That money is better spent elsewhere. In a similar fashion, people who want to change countries like Egypt could use such a platform to present themselves. Now I expect a lot of emotions and pressure on my site. People with a lot of money (and a lot to lose) will try to game it (political parties, secret services of ... errr ... "not 100% democratic countries", big companies, ...). To avoid as many mistakes as possible, I need a list of resources, ideas and tips on how to keep such a site out of too much trouble. PS: I'd make this CW but the option seems to be gone...

    Read the article

  • HTG Explains: What Is RSS and How Can I Benefit From Using It?

    - by Jason Fitzpatrick
    If you're trying to keep up with news and content on multiple web sites, you're faced with the never-ending task of visiting those sites to check for new content. Read on to learn about RSS and how it can deliver the content right to your digital doorstep. In many ways, content on the internet is beautifully linked together and accessible, but despite the interconnectivity of it all, we still frequently find ourselves visiting this site, then that site, then another site, all in an effort to check for updates and get the content we want. That's not particularly efficient, and there's a much better way to go about it. Imagine, if you will, a simple hypothetical situation. You're a fan of a web comic, a few tech sites, an infrequently updated but excellent blog about an obscure music genre you love, and you like to keep an eye on announcements from your favorite video game vendor. If you rely on manually visiting all those sites (and, let's be honest, our hypothetical example has a scant half-dozen sites while the average person would have many, many more), then you're either going to waste a lot of time checking the sites every day for new content or you're going to miss out on content as you either forget to visit the sites or find the content after it's no longer useful or relevant to you. RSS can break you free from that cycle of either over-checking or under-finding content by delivering the content to you as it is published. Let's take a look at what RSS is and how it can help.
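
    For readers curious about what an RSS reader actually does with a feed, here is a minimal sketch in Python using only the standard library; the feed URL is a placeholder rather than one of the sites mentioned above.

    ```python
    # Minimal sketch: poll an RSS 2.0 feed and print its items, using only the
    # Python standard library. FEED_URL is a placeholder; substitute any real feed.
    import urllib.request
    import xml.etree.ElementTree as ET

    FEED_URL = "https://example.com/feed.xml"  # hypothetical feed address

    def fetch_items(url):
        """Yield (title, link, pubDate) for every <item> in an RSS 2.0 feed."""
        with urllib.request.urlopen(url) as resp:
            root = ET.fromstring(resp.read())
        for item in root.iter("item"):
            yield (
                item.findtext("title", default=""),
                item.findtext("link", default=""),
                item.findtext("pubDate", default=""),
            )

    if __name__ == "__main__":
        for title, link, published in fetch_items(FEED_URL):
            print(f"{published}  {title}\n    {link}")
    ```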

    Read the article

  • BGP router recommendations for simple redundancy [closed]

    - by Jona
    We have two sites that each have an internet connection, and we have a dedicated dark fibre between them. Each site has its own IP space and we have an AS number. We're looking to be resilient to failure of the internet connection at either site and so need to buy a pair of appropriate routers. Requirements are: able to run 2 BGP sessions (one with the ISP, one with the other site's router); the option to take a full table from the upstream ISPs would be nice; able to provide HA gateways on the LAN side (e.g. 192.168.0.254 will automatically migrate if its host router loses power); a dedicated device rather than a server running Linux / BSD; not crazy expensive. Any help / advice much appreciated.

    Read the article

  • Can't access dfs namespace over vpn

    - by cpf
    Hi Server Fault, I've recently configured 2 servers in AD on the same domain level. They are physically separated and permanently connected through a site-to-site VPN for DFS replication. All is well, but when users connect to either site through VPN (from home, e.g.) they can't use the domain-based path: \\domain.com\data. Internally this works perfectly, and resolving domain.com while connected through the VPN returns the correct IP. I've tried Google to figure things out; what I was able to find was that more people have this issue, but no real solution. Can anyone explain why this is happening? A solution especially would be really helpful! Thanks in advance.

    Read the article

  • Best Architecture for ASP.NET WebForms Application

    - by stack man
    I have written an ASP.NET WebForms portal for a client. The project has kind of evolved rather than being properly planned and structured from the beginning. Consequently, all the code is mashed together within the same project and without any layers. The client is now happy with the functionality, so I would like to refactor the code such that I will be confident about releasing the project. As there seem to be many differing ways to design the architecture, I would like some opinions about the best approach to take. FUNCTIONALITY: The portal allows administrators to configure HTML templates. Other associated "partners" can display these templates by adding IFrame code to their site. Within these templates, customers can register and purchase products. An API has also been implemented using WCF, allowing external companies to interface with the system. An Admin section allows administrators to configure various functionality and view reports for each partner. The system sends out invoices and email notifications to customers. CURRENT ARCHITECTURE: It is currently using EF4 to read/write to the database. The EF objects are used directly within the aspx files. This has facilitated rapid development while I have been writing the site, but it is probably unacceptable to keep it like that, as it tightly couples the DB with the UI. Specific business logic has been added to partial classes of the EF objects. QUESTIONS: The goal of refactoring will be to make the site scalable, easily maintainable and secure. 1) What kind of architecture would be best for this? Please describe what should be in each layer, whether I should use DTOs / POCO / the Active Record pattern, etc. 2) Is there a robust way to auto-generate DTOs / BOs so that any future enhancements will be simple to implement despite the extra layers? 3) Would it be beneficial to convert the project from WebForms to MVC?
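
    As a language-agnostic sketch of the kind of layering being asked about (shown in Python purely for brevity rather than C#, with hypothetical names throughout), the idea behind a DTO-plus-repository seam is that the presentation layer only ever sees flat DTOs returned from a repository interface, never the ORM entities themselves:

    ```python
    # Language-agnostic sketch of a DTO + repository seam (shown in Python for
    # brevity; the real project would use C# classes over EF4). Names are hypothetical.
    from dataclasses import dataclass
    from typing import List, Protocol

    @dataclass(frozen=True)
    class ProductDto:
        """Flat, UI-facing shape; no ORM/entity types leak past this point."""
        id: int
        name: str
        price: float

    class ProductRepository(Protocol):
        """The UI layer codes against this interface, not against the ORM."""
        def list_for_partner(self, partner_id: int) -> List[ProductDto]: ...

    class OrmProductRepository:
        """Adapter that wraps the data context and maps entities to DTOs."""
        def __init__(self, context):
            self._context = context  # e.g. the EF data context in the real project

        def list_for_partner(self, partner_id: int) -> List[ProductDto]:
            rows = self._context.query_products(partner_id)  # hypothetical data-access call
            return [ProductDto(r.id, r.name, r.price) for r in rows]
    ```

    Whatever concrete pattern is chosen, the value of such a seam is that the presentation layer (WebForms today, MVC later if question 3 is answered with a yes) can change without touching the data-access code.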

    Read the article

  • IIS FTP service - download timeouts and restarts getting the data twice

    - by accel229
    We have an IIS FTP site on a Windows Server 2003 x64 machine. The Application Layer Gateway service is disabled (so http://support.microsoft.com/kb/931130 does not apply). The Windows Firewall service is disabled as well. The connection timeout for the FTP site (there is only one) is set to 1,200 seconds = 20 minutes. An external client can connect to the site, list directory contents and download small files. When a client attempts to download a large file (e.g. one whose download takes 3 minutes, still well under 20 minutes but relatively long), the server sends all the data, then the connection times out; the client issues REST / RETR commands attempting to restart the download from just after the last byte (which I believe should succeed and receive exactly 0 bytes), but the server behaves as if the client tried to restart after byte 0, that is, it sends the entire file all over again. Any ideas on how to fix this?
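
    For reference, here is a minimal client-side sketch of the REST/RETR resume exchange described above, using Python's ftplib; the host, credentials and paths are placeholders. The expectation is that REST at the current local size makes the server resend only the missing tail (zero bytes if the file is already complete), which is exactly what the server in the question fails to honour.

    ```python
    # Sketch of resuming an FTP download with REST/RETR via Python's ftplib.
    # Host, credentials and paths are placeholders, not the server from the question.
    import os
    from ftplib import FTP

    HOST, USER, PASSWD = "ftp.example.com", "user", "secret"
    REMOTE, LOCAL = "pub/big-file.zip", "big-file.zip"

    def resume_download():
        local_size = os.path.getsize(LOCAL) if os.path.exists(LOCAL) else 0
        with FTP(HOST) as ftp, open(LOCAL, "ab") as out:
            ftp.login(USER, PASSWD)
            ftp.voidcmd("TYPE I")              # binary mode, also needed for SIZE
            remote_size = ftp.size(REMOTE) or 0
            if local_size >= remote_size:
                return                         # complete: REST at EOF should transfer 0 bytes
            # rest=local_size sends "REST <offset>" before RETR, so only the
            # remaining bytes should come back; a server that ignores the offset
            # resends the whole file, which is the behaviour described above.
            ftp.retrbinary(f"RETR {REMOTE}", out.write, rest=local_size)

    if __name__ == "__main__":
        resume_download()
    ```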

    Read the article

  • Apache htaccess with mod_expires Not Working for certain directories

    - by keyboarddrummer
    I have a Joomla site on which I am trying to enable caching using mod_expires. I have the .htaccess in the root of the site and have added the options found on the page http://www.pactsoftware.nl/tools/joomla-optimization.html. Using the PageSpeed extension in Chrome, prior to adding this to my .htaccess, my site scored 55 (Caching was at the top, listing a lot of images, CSS, and JS files). After these directives, it scores 70, with caching in the yellow, but it still lists some image files (some are two directories deep and the rest are four). I checked for any other .htaccess files in the Joomla root, but none are between those folders and the root. It is almost as if the .htaccess only works in that one directory, not the subfolders. I have tried putting a .htaccess in each affected subdirectory, but it does not work. Does anyone have any ideas?

    Read the article

  • Changing domain name - what are the practical steps involved

    - by Homunculus Reticulli
    I launched a website a couple of years ago, bright-eyed and bushy-tailed, with dreams of conquering the world. Unfortunately it wasn't to be. Now that I am a bit older and wiser, and have spent some money on branding and creating more quality content, I am rebranding and relaunching the site with a new domain name. Although the traffic on the old site is laughable (i.e. non-existent), there are a few pages of good information on there and I don't want to lose any "juice" those pages may have gained, because web crawlers have been seeing them for a few years now. OK, the upshot of all that is this: I want to change my domain name from xyz.com to abc.com. I am maintaining the same friendly URLs I had before; only the domain name part of the URL will change, so that any traffic coming to an old page will be forwarded/redirected to the new page seamlessly. How do I go about achieving this (i.e. what are the steps I need to carry out to minimize any "disruption" to the credibility the existing site has with Googlebot etc.)? I am running Apache 2.x on a headless Linux (Ubuntu) server.

    Read the article

  • Is SCCM overkill for medium-sized organizations?

    - by Le_Quack
    I am an IT technician in a high school with around 1,600 students, 250 staff and 800+ client computers, mostly running Windows 7. Our team is composed of three members. My boss seems content with a network that (just about) works, not necessarily a productive, well-maintained network that is easy to run and maintain. I'm still fairly early on in my I.T. career, so I'm not up to speed on all the different endpoint management solutions that are available. I'm looking for a better way to manage clients (deploy software, track changes, inventory, etc.). I like the look of SCCM 2012's features, but the case studies seem to be aimed at large multi-site infrastructures rather than a single mid-sized site. Is SCCM suitable for a mid-sized single site, or is it aimed at much larger corporations? How can I determine whether or not an endpoint management solution like SCCM is a good fit for our organization? EDIT: Thanks for all the help. I'll take a look at SCE and SCCM and get some proposals drawn up to take to my boss/deputy head.

    Read the article

  • Any problems with using a 301 redirect to force https traffic in IIS?

    - by Jess
    Is there any problem with using a 301 redirect to force all traffic to go to a secure-only site? We originally had redirect rules, but enforcing SSL-only seemed more secure. Here is how we set it up: Site 1 (https://example.com/): Require SSL set, bound to 443 only. Site 2 (http://example.com): bound to 80 only, an empty folder with no actual html or other data, 301 redirects to https://example.com. This seems to work beautifully, but are there any issues with doing this? Would any browsers not recognize the 301 redirect, or could there be security warnings during the redirect?
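
    As a quick sanity check of what clients actually receive from the plain-HTTP site, here is a minimal sketch using Python and the third-party requests library; example.com stands in for the real host, as in the question.

    ```python
    # Sketch: verify that plain-HTTP requests get a 301 pointing at the HTTPS site.
    # "example.com" is the placeholder host used in the question.
    import requests

    def check_https_redirect(host: str) -> None:
        resp = requests.get(f"http://{host}/", allow_redirects=False, timeout=10)
        location = resp.headers.get("Location", "")
        assert resp.status_code == 301, f"expected 301, got {resp.status_code}"
        assert location.startswith(f"https://{host}"), f"unexpected Location: {location}"
        print(f"http://{host}/ -> {location} ({resp.status_code})")

    if __name__ == "__main__":
        check_https_redirect("example.com")
    ```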

    Read the article

  • Poor backlink profile - search rankings not updated for 2+ months

    - by fistameeny
    I am carrying out some work on a website that is a PR2 with a few good-quality, relevant backlinks (PR4-6). It has a presence on Twitter that is updated regularly, a Google Places listing, and listings on some decent directories (Qype etc.). The site was rebuilt in Drupal 7 two months ago, with all the basics done: URL rewriting, an XML sitemap submitted to Google, and, most importantly, good-quality, structured content. I've noticed that Google is still showing "old" URLs from the previous version of the site that was ditched 8 weeks ago. I think the site may be penalised under the Penguin update, as a previous SEO company created many low-quality links from link farms/directories. My question is what the correct way to deal with this is. Bing Webmaster Tools can "disavow" links, and I guess I can attempt to contact the link farms to have them removed. I've already submitted a request to Google asking to have the penalty removed, as we're trying to tidy up a bad history. We submit updated sitemaps to Google and Bing daily, and have built some further decent-quality, relevant links. Is there anything further I can do?

    Read the article

  • How to have publishing, blog and wiki features together?

    - by George2
    Hello everyone, I am using SharePoint 2007 Enterprise + Publishing portal template + Windows Server 2008. I want to have blog and wiki features as well as the publishing portal features. Any ideas on how to integrate the publishing portal, blog and wiki? By integrate, I mean using the same user name and password to pass through authentication on the publishing portal, blog and wiki. And should I set up 3 different site collections for the publishing portal, blog and wiki (I find that if I set up a publishing portal site collection, I cannot create blog and wiki sub-sites)? Thanks in advance, George

    Read the article

  • Multiple URLs going to the same page - kosher for Google?

    - by Ashoka15
    I hear conflicting answers from people about this; I'm a developer by trade, and my SEO knowledge is not what it should be. Here's my situation: I run a website that lists hotels, restaurants, bars, shops, etc. for a small Asian beach town. Lots of establishments here are hotels with a restaurant and bar, as well as restaurants that are also bars. As an example, a Mexican restaurant that also functions as a full cocktail bar. I first set it up so each establishment has one page, but can create multiple pages based on its other areas of business. This forces people to create TWO listings under the same name, and most just add the exact same information onto each page, making things redundant. I am re-arranging the database so that an establishment has only ONE listing (one unique page referenced by the unique code '12345ABCDEF') that is accessible from browsing under "Restaurants" and "Bars", and has the URL structures: site.com/dining/mexican/12345ABCDEF/business-name.html site.com/bars/cocktail_bars/12345ABCDEF/business-name.html I could easily simplify the URL to just the unique code and name: site.com/12345ABCDEF/business-name.html But I found that Google has parsed my URL structure and lists pages like this on its SERP: Home > Dining > Mexican, with each element pointing to the default page for the homepage, restaurants and Mexican restaurants. If I simplify the URL structure, will I lose these associations? Could Google also be picking up this structure from my breadcrumb trail at the top of the page? What is the best way to set up URLs on these pages so I am not penalized by Google for having identical information on two URLs, while still being able to have places show up as they did with the old system?

    Read the article

  • I am the one who needs 5000 CDs!

    - by cabey
    I didn't realise how worldwide the users of this site are. I am based in England and will be helping out at an International Camp for young people in Finland this summer. I will be in charge of a game in which 1,500 young people will search for these CDs, which will be hidden all over the camp site. They will have to find them and bring them back to base one at a time. The young people will be divided into 5 teams, and the team that brings back the most gets a prize. Hope this helps and allows me to put the request back on the site. I have tried to source them in Finland but have had no success.

    Read the article

  • No Obvious Answer - Query Strings and JavaScript

    - by nchaud
    Say I have this main page, /my-site/all-my-bath-soaps, which lists all my products. It has a search filter text box that uses JavaScript to filter the products users want to see on that page (the URL doesn't change as they filter). Now, from many other parts of the site I want to navigate to this products page and see specific products. E.g. <a href="/my-site/all-my-bath-soaps?filter='Nivea-Soap'"> will go to /all-my-bath-soaps and apply JavaScript filtering to show just that product and hide the DOM nodes for the other products. The problem is that if the user changes the text in the filter from 'Nivea-Soap' to 'Lynx', the JavaScript will work fine and show the new products, but the URL stays at ?filter='Nivea-Soap'. Is there anything I can do about this? Of course, I don't want to reload the page with a new query string every time they change the search criteria. Somehow it'd be great to move the ?filter=... criteria into POST data instead, but I don't know how to do that with a link...

    Read the article

  • Geekswithblogs.net | Screen Resolutions of our Readers

    - by Jeff Julian
    Yesterday I talked about the browsers we see being used by our readers, based on our Google Analytics traffic, and today I want to share with you the screen resolutions we see. Having been a web developer most of my life, I know it is hard to decide how large you should build your application, because you typically have a couple of huge high-resolution monitors on your desk, but your typical end user is thought to have 1024x768. With HTML5/CSS3 out, it is a little easier to come up with a design that will scale to all resolutions, but it is still nice to know the numbers when it comes to how much real estate I have on my clients' screens. If you look at these numbers for Geekswithblogs.net, a lot of the users that visit the site have high-resolution monitors. After a little more investigation of the numbers you will notice we do not have as much height available as we do width. If the primary goal of a site is to deliver as much data as possible in the viewable area without scrolling, this becomes a challenge when most of our pages have long pieces of formatted data. So our challenge is to build skins that use more of the sides of the content toward the top on larger-resolution browsers and then entice the reader to scroll to get to the goodies embedded in the content of the posts. It is going to be an interesting battle for sure, but we really need more skin offerings on the site. Technorati Tags: Resolution Statistics, Geekswithblogs.net

    Read the article

  • Chrome - Why am I automatically authenticated to a web app even after clearing browser cookies?

    - by Howiecamp
    I am accessing a web application using Chrome. If I sign out of the app, clear all Chrome history/cookies/etc. (even Flash cookies, which are now handled by Chrome in the same Clear History area) and then re-access the site, I am automatically logged in without being prompted for credentials. I then launched Chrome in Incognito mode and was able to reproduce the same behavior; however, I was prompted for credentials on the first logon while in Incognito mode. The web application behaves as expected in Internet Explorer 10. Some info about the application: It's a SharePoint site using NTLM authentication. The credentials are Active Directory-based, as the username is domain\username. My connection is over the Internet and there is no AD relationship between my local Windows account or my Windows PC and their domain; in other words, neither my locally logged-on user nor my PC is in any way part of their AD domain. The site is running SSL on port 443. Why might Chrome be automatically authenticating me?

    Read the article

  • How to clear unshown OSD notifications

    - by Avatar Parto
    I am the webmaster of 2 active websites. Every time I start Thunderbird, it downloads all the new messages from the admin mail addresses of these sites - [email protected], [email protected], [email protected], etc. There are about 20 of them - this includes my personal email accounts. Now, I am not always connected to the internet, and there are cases where I need to access already-downloaded mail. Thunderbird then tries to connect to the internet, which it can't because I'm offline, and then displays about twenty OSD notifications, one after the other, telling me that the connection for each email address has failed. That's frustrating and it's really eating me up. I'm looking for a way either: 1. to tell Thunderbird NOT to connect to the internet when I'm offline, or 2. to clear all these notifications after the first one shows. If it helps, I'm using Ubuntu 13.04 and Thunderbird 24.0, and I always keep my computer fully updated. Any help is welcome. Thanks.

    Read the article

  • SharePoint 2010 MySites - Simple explanation needed!

    - by Chris W
    I've been playing around with the 2010 beta for a couple of weeks, experimenting with topology options etc. I think I've got myself totally confused as to how it works, so if there are any SharePoint experts out there who can explain things in simple terms for me, I'd appreciate it! I want to set up a farm with 3 servers providing the content and My Sites. I presume that the way to do this is to load-balance or DNS round-robin traffic between the 3 servers. The bit where I'm confused is that the My Site Settings page asks for a specific My Site Host, hence all My Site traffic will be pushed to a single server even though we have 3 in the farm. If this host fails, I presume My Sites will be unavailable. Is this right? How do I configure it so that access to My Sites is load-balanced across the 3 servers in the farm?

    Read the article

  • Fortigate VPN Routing issue

    - by user1571299
    I have a FortiGate 200B unit with 2 internet WAN connections. I also have a remote site which I'm connected to via IPsec VPN through WAN1. This site has only one gateway IP address. I'd also like to set up a VPN on top of WAN2 with that specific site as its destination. The default route on my end is WAN1. My problem is I can't figure out how to have both tunnels up at the same time. What's the best practice for achieving this? Thanks

    Read the article

  • shut down FTP from IIS 6 after <X> failed login attempts

    - by Justin C
    Is there a setting in IIS 6 to turn an FTP site off after a specified number of failed login attempts? It has already been documented on this site that a Windows server sitting on a static IP address can record tens of thousands of failed login attempts a month. One server I maintain has had tens of thousands of attempts made against the FTP port. I have solid passwords in place, so I am not overly concerned. I rarely have to use FTP, so for the most part I turn it on and off as I need it. Sometimes, though, I forget to turn it off when I am done, only to find the next day that my event log is full of audit failures. I would want to set a high number, in case I just messed up the password: something like, if 50 failed login attempts happen, just turn off the FTP site. Then if I need it later I can just start it again.

    Read the article

  • SharePoint Returning a 401.1 for a Specific User/Computer

    - by Joe Gennari
    We have a SharePoint Services 3.0 site set up supporting about 300 users right now. This report is isolated and has never been duplicated: we have one AD user who cannot log into the SharePoint site with his account from his machine and is subsequently returned a 401.1 error. If any other user tries to log on with their account from his machine, it works okay. If he moves to another machine and logs on, it works okay. The only solution to this point has been to install Firefox on the machine; when he authenticates with Firefox, everything is okay. Remedies tried so far: cleared cookies/cache; turned Integrated Windows Authentication in IE off and on; downgraded IE 8 to IE 6; removed the site from the Intranet Sites zone; renamed the machine; disjoined/rejoined the domain.

    Read the article

  • SEO and external sites that serve responsive images (like Re-SRC)

    - by Baumr
    Re-SRC is a tool that allows you to automatically serve responsive images for your website from their cloud servers. It delivers a new image file each time the browser window (viewport) is resized. To use it in your HTML when linking to an image, you would do the following: <img src="http://app.resrc.it//www.your-domain.com/img/img001.jpg"/> Some more background for SEO considerations: As an example, looking at their demo page's code, the src of the Arc de Triomphe photo, when the browser window is resized to tablet width, shows this particular file at its widest. It is found under the following URL: http://app4-uk.resrc.it/s=w560,pd1/ro=h//www.resrc.it/img/demo/demo-image-1.jpg If the viewport is increased to desktop width, then a smaller image is served in line with the design; see this URL: http://app4-uk.resrc.it/s=w320,pd1/ro=h//www.resrc.it/img/demo/demo-image-1.jpg If I change the viewport to be about half-way between those two, then the image's URL is: http://app4-uk.resrc.it/s=w240,pd1/ro=h//www.resrc.it/img/demo/demo-image-1.jpg In other words, I found that there is a separate file for every 10-pixel increment of the image width. Very cool for saving bandwidth on mobile devices and serving responsive/retina images on others, but... Here are two problems I see for SEO: The img on your site, part of your semantic markup, will not be hosted on your site at all, or even on a server you control. Any links to these images will pass on "link juice" to Re-SRC's site instead. You are serving a vast array of different image files to different people; some may link to one size, others to another. Then there's the question of what different search engine crawlers will see. Also: there seems to be no fallback option if their servers are down. Do you see any other concerns? Or, perhaps, do you not see those as concerns?

    Read the article

  • About Web Server

    - by Chathura JAyanath
    I develop web sites, and I hope to host my web site on my own web server. I have a Windows Server 2008 computer with Apache and IIS. I host my web site on IIS, and it is visible on the local area network; now I want to open it to the world. I have my own web URL (www.myurl.lk). The domain name provider asks about name server 1 and name server 2, and I don't know how I can find my name servers. Can anyone help me do this? I have already tried this by uploading the web site to IIS, but it does not work; it can be seen only on the local area network.

    Read the article

  • I bought a domain name at GoDaddy and hosting at Dreamhost but the first doesn't work!

    - by janooChen
    I added Dreamhost's nameservers about 12 hours ago. I went to the following panel: Nameservers -> Set Nameservers (I have specific nameservers for my domains) and added Dreamhost's nameservers like this: Nameserver 1: NS1.DREAMHOST.COM, Nameserver 2: NS2.DREAMHOST.COM, Nameserver 3: NS3.DREAMHOST.COM. So now in the admin panel I see this: Nameservers (Last Update 2/10/2011): NS1.DREAMHOST.COM, NS2.DREAMHOST.COM, NS3.DREAMHOST.COM. But I get this when I run the analysis tools: "Attention Required! There are critical issues accessing your web site. Properly configuring your domain name and hosting account ensures that visitors can access your site." Did I do something wrong, or do I have to wait 24 to 48 hours? (Dreamhost does display my page, because I can access the other domain name I bought together with the hosting.) (By the way, if everyone uses the same nameservers, how will GoDaddy know which hosting space I purchased among all the others?) Thanks in advance.
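
    One way to see whether the nameserver change has propagated yet is to query the domain's NS records directly. Here is a minimal sketch using Python and the third-party dnspython package; the domain below is a placeholder for the one in question.

    ```python
    # Sketch: check which nameservers public DNS currently reports for a domain.
    # Requires the third-party "dnspython" package; "example.com" is a placeholder.
    import dns.resolver

    def current_nameservers(domain: str):
        answer = dns.resolver.resolve(domain, "NS")
        return sorted(str(rr.target).rstrip(".").lower() for rr in answer)

    if __name__ == "__main__":
        for ns in current_nameservers("example.com"):
            print(ns)
        # Once the change at the registrar has propagated, the output should list
        # ns1/ns2/ns3.dreamhost.com.
    ```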

    Read the article
