Search Results

Search found 30217 results on 1209 pages for 'website performance'.

Page 478/1209 | < Previous Page | 474 475 476 477 478 479 480 481 482 483 484 485  | Next Page >

  • An innovative approach to develop web forms - comparison with ASP.NET and MVC

    The article introduces an innovative approach to developing web forms in enterprise software, as an alternative to both ASP.NET Web Forms and MVC, through a step-by-step comparison of development complexity, reusability, performance and maintainability. The approach is implemented as an important UI component of RapidWebDev...

    Read the article

  • Do I need an SSL certificate if just pointing my domain to CloudFront?

    - by hashpipe
    I have a website running on a domain (e.g. site.com). I have an additional domain (e.g. sitecdn.com) which basically points to Amazon CloudFront for delivery. Amazon CloudFront in turn fetches the data from the main domain (site.com). I use this setup primarily so that multiple subdomains of sitecdn.com can point to assets via the CDN. The main website has an SSL certificate, and I intend to serve all assets from the CDN as https links only, something like <img src="https://img.sitecdn.com/image.jpg" />. I'm a little confused about whether I need an SSL certificate for my CDN domain. In CloudFront I can set the distribution to allow both https and http traffic. Do I need an SSL certificate for this? If yes, then where do I install the certificate, since I don't have a server for sitecdn.com?
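
    The answer people usually arrive at: yes, HTTPS on img.sitecdn.com needs a certificate covering that name, but it is attached to the CloudFront distribution (which terminates TLS), not installed on any server of yours. A minimal sketch of how this is commonly wired up with today's AWS tooling, using boto3; the distribution ID and hostnames are placeholders:

        # Attach a certificate to a CloudFront distribution so an alias
        # domain (img.sitecdn.com) can serve HTTPS with no origin server
        # for that name. IDs and hostnames are illustrative.
        import boto3

        # CloudFront only accepts ACM certificates issued in us-east-1.
        acm = boto3.client("acm", region_name="us-east-1")
        cert = acm.request_certificate(
            DomainName="img.sitecdn.com",
            ValidationMethod="DNS",  # prove control via a DNS record; no server needed
        )

        # Once the certificate validates, reference it on the distribution.
        cloudfront = boto3.client("cloudfront")
        resp = cloudfront.get_distribution_config(Id="EDFDVBD6EXAMPLE")
        config, etag = resp["DistributionConfig"], resp["ETag"]
        config["ViewerCertificate"] = {
            "ACMCertificateArn": cert["CertificateArn"],
            "SSLSupportMethod": "sni-only",
            "MinimumProtocolVersion": "TLSv1.2_2021",
        }
        cloudfront.update_distribution(Id="EDFDVBD6EXAMPLE", IfMatch=etag,
                                       DistributionConfig=config)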

    Read the article

  • When SQL Server Nonclustered Indexes Are Faster Than Clustered Indexes

    SQL Server clustered indexes can have enormous implications for the performance of operations on a table. But are there times when a SQL Server nonclustered index would perform better than a clustered index for the same operation? Are there any trade-offs to consider? Check out this tip to learn more.
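
    To make the idea concrete, here is a small analogy in SQLite (via Python) rather than SQL Server itself: a narrow index that covers a query can answer it without ever touching the wide base table, which is precisely the situation where a nonclustered index can beat the clustered one.

        # Analogy in SQLite, not SQL Server: a narrow "covering" index
        # satisfies the query below without reading the wide base table.
        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("""
            CREATE TABLE orders (
                id INTEGER PRIMARY KEY,   -- the table's physical ordering
                customer_id INTEGER,
                placed_at TEXT,
                notes TEXT                -- wide column the query never needs
            )
        """)
        con.executemany(
            "INSERT INTO orders (customer_id, placed_at, notes) VALUES (?, ?, ?)",
            [(i % 100, f"2012-01-{i % 28 + 1:02d}", "x" * 500) for i in range(10000)],
        )
        # Both queried columns live in this narrow index, so it "covers" the query.
        con.execute("CREATE INDEX ix_customer ON orders (customer_id, placed_at)")

        plan = con.execute(
            "EXPLAIN QUERY PLAN SELECT placed_at FROM orders WHERE customer_id = 42"
        ).fetchall()
        print(plan)  # expect: SEARCH ... USING COVERING INDEX ix_customer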

    Read the article

  • Googlebot visit but no cache update - why?

    - by Mick
    I have made a new plain vanilla HTML website. I have been making regular modifications to it on an almost daily basis. The site is hosted by Hostmonster, and as part of their service they offer "awstats" to let you know assorted details of visitors to the site. One thing is puzzling me. According to awstats, a "robot/spider" calling itself "Googlebot" visited my site as recently as today (28th June 2011), but when I find my site on Google (e.g. by searching for "full reserve banking") the cache is dated only the 5th June. I always thought that a visit from the Google robot was synonymous with a cache update. Am I wrong? Or have I accidentally put something in the site telling Google that nothing has been updated? EDIT: It seems a moderator has removed the name of my website, so there is now no chance that anyone could check whether I had made some error on my site :-( ... but anyway, in answer to paulmorriss' question, here is what awstats was telling me:

    Read the article

  • Beware of Link Pedlars Bearing Gifts!

    As more organisations become aware of the value of links, the number of link pedlars is increasing. On the face of it, these people offer an easy, if sometimes expensive, way to improve your website's performance in the search engines.

    Read the article

  • Do I need social networks to be an expert developer? [closed]

    - by Gerald Blizzy
    This question may sound odd, but do I need Twitter, Facebook and Google+ if I am a web developer? I see many expert developers nowadays using them as part of their work. It seems like it's harder to stay in touch with customers, co-workers and potential customers if you don't use social networks. Am I right? The reason I ask is that I am totally not a Facebook/Twitter person; I find them boring and annoying. I understand that LinkedIn is useful for a career, but what about Twitter and Facebook? Are they needed for a web developer career? What I am trying to ask is: if I only use LinkedIn, my own portfolio website, Google Talk, Gmail and something like GitHub, would I actually miss anything professionally/job-wise? My thoughts are that I can just have my portfolio website where I list all my projects, as well as a contacts page with my Google Talk/Gmail account. It can suit a full-time job, freelance work and my own projects. So this way email and Google Talk are just enough. Am I right or not? Thanks in advance!

    Read the article

  • Title of the page in search results and title of Google's cached version are different. Why?

    - by Azmorf
    Check this: http://www.google.com/search?q=site:gunlawsbystate.com+kansas+gun+laws The title of the first result is "Kansas Gun Laws - Gun Laws By State". However, on the page Google has cached, the title is different: <title>Kansas Gun Laws - Kansas Gun Law - Reciprocity Guide</title> Google is showing the title that was on the site 2-3 months ago. Googlebot has visited the website many times since then, and as you can see it has even cached it (the latest version is from 15th Sept); however, for some reason it doesn't change the title to the new one in the search results. We use a hash-bang URL structure on this website. It completely meets Google's requirements for AJAX websites (the _escaped_fragment_ mechanism). The issue I explained is happening with almost all hash-bang pages that got indexed. Questions: Why does it keep the old page title in the search results? Can it be connected to the fact that I'm using hash-bang URLs? There are lots of pages on the site that have the same issue, and all of them have hash-bang URLs. Another thing I noticed is that Google's "Preview" feature doesn't work for any hash-bang URLs on the site. Did I do anything wrong? It has cached versions of the pages, so why wouldn't it generate a preview? Thanks (and sorry for my English). PS. Here's a weird thing I also noticed: this search query https://www.google.com/search?q=Kansas+Gun+Laws+-+Reciprocity+Guide shows the correct title for the same page as in the example above. Why does Google show different titles for the same page when you run different queries?
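
    For context on the mechanism in play: under Google's AJAX crawling scheme, a URL like /laws#!kansas is fetched by the crawler as /laws?_escaped_fragment_=kansas, and whatever HTML snapshot comes back (its <title> included) is what gets indexed, so a stale snapshot means a stale title in the results. A minimal sketch of that contract, in Flask with illustrative page data:

        # Sketch of the _escaped_fragment_ contract for hash-bang URLs.
        from flask import Flask, request, render_template_string

        app = Flask(__name__)
        PAGES = {"kansas": {"title": "Kansas Gun Laws - Reciprocity Guide",
                            "body": "Current Kansas content..."}}

        @app.route("/laws")
        def laws():
            fragment = request.args.get("_escaped_fragment_")
            if fragment is not None:
                # Crawler path: serve a complete, current HTML snapshot.
                page = PAGES.get(fragment, {"title": "Not found", "body": ""})
                return render_template_string(
                    "<title>{{ title }}</title><body>{{ body }}</body>", **page)
            # Browser path: serve the JS shell that reads location.hash.
            return "<title>Reciprocity Guide</title><script>/* boot the app */</script>"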

    Read the article

  • The .NET 4.5 async/await Commands in Promise and Practice

    The .NET 4.5 async/await feature provides an opportunity for improving the scalability and performance of applications, particularly where tasks are more effectively done in parallel. The question is: do the scalability gains come at a cost of slowing individual methods? In this article Jon Smith investigates this issue by conducting a side-by-side evaluation of the standard synchronous methods and the new async methods in real applications.
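
    The article runs this experiment in .NET; the same question can be sketched in miniature with Python's asyncio (the numbers and machinery differ, but it shows the per-call overhead being weighed against the scalability gains):

        # Time a trivial method called directly vs. through async machinery.
        import asyncio
        import time

        def work():
            return sum(range(100))

        async def awork():
            return sum(range(100))

        N = 100_000

        t0 = time.perf_counter()
        for _ in range(N):
            work()
        sync_s = time.perf_counter() - t0

        async def main():
            t0 = time.perf_counter()
            for _ in range(N):
                await awork()
            return time.perf_counter() - t0

        async_s = asyncio.run(main())
        print(f"sync {sync_s:.3f}s  async {async_s:.3f}s  "
              f"overhead/call {(async_s - sync_s) / N * 1e9:.0f} ns")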

    Read the article

  • Legal issues regarding embedding a toolbar into a browser [closed]

    - by OmarOthman
    We are in the process of developing software that provides a service to internet users, and we would like to ask about the legal liabilities of some issues. Of course, everything is to be done with the consent of the user of our software, but our concern is about third-party tools and services that may be invoked/used by our product. In particular, these are the concerns:
    (1) Embedding a toolbar in an existing browser. This screenshot is an example, where the words in the highlighted toolbar are passed to www.google.com for searching, and the contents of the window are the results of the search. I want to know whether any consent should be obtained before such a toolbar can be embedded in a web browser, whether there are any legal requirements from the web browser, and whether different web browsers have different requirements (at least for Internet Explorer, Firefox, Chrome, Opera and Safari).
    (2) Invoking a free website from that toolbar (like Google's search page). The screenshot above demonstrates such an existing toolbar.
    (3) Full ownership of, and unrestricted access to, the data entered into this toolbar. In the screenshot above, I want to take the words (translation english to spanish) and own them, i.e. store them in my database and do some processing on them.
    (4) The ability to track the pages visited by the user starting from that free website. In the screenshot above, you can notice that the user opted only for the third result, whose URL is translate.google.com. I want to have access to this and all URLs clicked from this page, for some processing as well.
    This is a commercial application, so I need a very concrete, precise and reference-supported answer.

    Read the article

  • Erlang web frameworks survey

    - by Zachary K
    (Inspired by a similar question about Haskell.) There are several web frameworks for Erlang, like Nitrogen, Chicago Boss, and Zotonic, and a few more. In what aspects do they differ from each other? For example:
    features (e.g. server only, or also client scripting; easy support for different kinds of database)
    maturity (e.g. stability, documentation quality)
    scalability (e.g. performance, handy abstractions)
    main targets
    Also, what are examples of real-world sites / web apps using these frameworks?

    Read the article

  • Books / blogs that discuss service excellence? [closed]

    - by Bogdan Gavril
    I'm looking for information about service excellence topics such as:
    zero-downtime deployment
    how to deal with versioning (backward and forward compatibility)
    environment strategies (how many staging environments? etc.)
    performance testing
    testing in production
    monitoring
    I am looking at the Microsoft stack, but the concepts should be the same everywhere. Do you have any recommendations of books or blogs on the subject? PS: I have found some good articles from I.M.Wright's "Hard Code" blog. Anything else?

    Read the article

  • Architecture: Tracking lead origin when data is submitted by a server

    - by Kevin
    I'm looking for some assistance in determining the least complex strategy for tracking leads on an affiliate's website. The idea is to make the affiliate's integration with my application as easy as possible. I've run into theoretical barriers, so I'm here to explore other options.
    Application overview: This is a lead aggregation / distribution platform. We will be focusing on the affiliate portion of this website. Essentially, affiliates sign up, enter marketing campaigns and sell us their conversions.
    Problem to be solved: We want to track a lead's origin and other events on the affiliate site. We want to know what pages, ads, and forms they viewed before they converted. This can easily be solved with pixel tracking. Very straightforward.
    Theoretical issues: I thought I would ask affiliates to place the pixel where I could log impressions, and set a third-party cookie when the pixel is first called. Then I could associate future impressions with this cookie. The problem is that when the visitor converts on the affiliate's site and I receive their information via HTTP POST from the affiliate's server, I wouldn't be able to access the cookie and associate it with the lead record unless the lead lands on my processor via a redirect and is then redirected back to the affiliate's landing page. I don't want to force the affiliates to submit their forms directly to my tracking site, so allowing them to make an HTTP POST from their server-side form processor would be ideal. I've considered writing JavaScript to set a first-party cookie, but this seems to make things more complicated for the affiliate. I've also considered having the affiliate submit the lead's data via a conversion pixel. This seems to be the most promising option so far, as almost all pixels are as easy as copy/paste. The only complication comes from the conversion pixel, which would submit all of the lead information; the request would come from the visitor's machine, so I could access my third-party cookie.
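
    For readers following along, a minimal sketch of the impression-pixel half of this design (Flask, with a hypothetical log_impression helper standing in for real storage). The difficulty the poster describes (a later server-to-server POST cannot see this cookie) is exactly why the conversion pixel looks attractive: it fires from the visitor's browser and therefore carries the cookie.

        # Tracking pixel: first hit sets a third-party cookie, every hit
        # logs an impression keyed by it. Names are illustrative.
        import uuid
        from flask import Flask, request, make_response

        app = Flask(__name__)
        GIF = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff"
               b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01"
               b"\x00\x00\x02\x02D\x01\x00;")  # 1x1 transparent GIF

        @app.route("/pixel.gif")
        def pixel():
            visitor = request.cookies.get("visitor") or uuid.uuid4().hex
            log_impression(visitor, page=request.args.get("page"))
            resp = make_response(GIF)
            resp.headers["Content-Type"] = "image/gif"
            resp.set_cookie("visitor", visitor, max_age=90 * 24 * 3600)
            return resp

        def log_impression(visitor, page):
            print(f"impression visitor={visitor} page={page}")  # stand-in for a DB write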

    Read the article

  • I used a 301 Permanent Redirect to a 3rd party site by mistake! Can I stop the redirection?

    - by Dees
    Oh Noes! I've been parking a domain name for a friend/client of mine on my hosting provider (Dreamhost, FWIW) for a while, and they eventually asked me to redirect their domain to a 3rd party website which is currently featuring some relevant promotional content. Once this period ends, we will probably go ahead and set up a proper website for the domain on my hosting account. I used Dreamhost's "redirect" hosting option in their domain configuration panel, not realizing that it would implement a 301 Permanent redirect, or what the implications were. Now it seems that for any client that has visited the site anytime recently, the 301 redirect is still cached/in effect, although I have changed the domain settings back to regular Dreamhost full site hosting. It seems that the only thing that can be done is to wait out the TTL/cache expiration for the redirect. I have no idea how long that might be, so I'm wondering if there is any good way to cache-bust the redirect or otherwise undo its long-term effects. I put a simple html meta refresh in the domain folder to replace the 301 to keep the intended functionality in place, but I'm still not able to access the domain's other content normally, even via FTP, etc. Isn't there anything I can do? Otherwise, how long does it take for a cached redirect to expire? It's gonna be a bummer if it's really permanent.
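
    One thing worth doing while waiting: confirm the server has actually stopped issuing the 301, since after that point any lingering redirect is your own browser replaying its cached response. A quick sketch with Python's requests (placeholder domain):

        # Check the server's *current* answer, ignoring any browser cache.
        import requests

        resp = requests.get("http://example-parked-domain.com/", allow_redirects=False)
        print(resp.status_code)              # hope for 200, not 301
        print(resp.headers.get("Location"))  # should be absent once fixed

        # For one-off browser testing, a cache-busting query string also
        # sidesteps a cached 301: http://example-parked-domain.com/?nocache=1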

    Read the article

  • Why does DVD playback still not work after installing libdvdcss2?

    - by mac9416
    I have installed libdvdcss2, but I still get an error when trying to play DVDs. libdvdread4 was installed by default (this is a new System76 Pangolin Performance). I ran the install-css.sh script, and it completed with no problems. I can confirm that libdvdread4 and libdvdcss2 are installed:
        mac9416@charlotte:~$ dpkg -l | grep dvdcss
        ii  libdvdcss2   1.2.12-0.0medibuntu1   Simple foundation for reading DVDs - runtime libraries
        mac9416@charlotte:~$ dpkg -l | grep dvdread
        ii  libdvdread4  4.2.0-1ubuntu3         library for reading DVDs

    Read the article

  • Basic IIS7 permissions question

    - by Tom Gullen
    We have a website with a file: www.example.com/apis/httpapi.asp. This file is used by the site internally to make requests joining two systems on the website together (one is Classic ASP, the other ASP.NET). However, we do not want the public to be able to access the file. In IIS 7.5, is there a setting I can use to make this file internal-only? I've tried rewriting the URL for it, but the rewrite is also applied internally, so the scripts stop working because they fetch the rewritten URL. Thanks for any help!

    Read the article

  • What is the difference between installing a desktop environment and running that distro directly?

    - by Pandya
    My question is: what is the difference between installing a particular desktop environment on Ubuntu, versus directly using the distro/flavour of Ubuntu that ships with that environment by default? Example, two options: install ubuntu-gnome-desktop, kubuntu-desktop or xubuntu-desktop etc. (official & recognized derivatives) on top of Ubuntu (default Unity desktop); alternatively, run the particular distro/flavour Ubuntu GNOME, Kubuntu or Xubuntu etc. directly. Do both methods give the same performance, and which is the proper method for using a desktop environment?

    Read the article

  • Subdomain still times out after being set up a month ago

    - by user8137
    I would like the subdomain www.high-res.domain.com to be accessed by external customers with specific permissions to access the site (like FTP). We use Network Solutions to house domain.com. We recently added a new IP address to point to www.high-res.domain.com. I gave the IP address to the company that hosts our website. I pinged www.high-res.domain.com and it points to the correct IP address, but it still times out. It's been a few weeks now and when you ping it, it still times out.
        C:\>ping XXX.XXX.X.XXX
        Pinging XXX.XXX.X.XXX with 32 bytes of data:
        Request timed out.
        Request timed out.
        Request timed out.
        Request timed out.
        Ping statistics for XXX.XXX.X.XXX:
            Packets: Sent = 4, Received = 0, Lost = 4 (100% loss)
    tracert times out as well. I even went to DNS tools and a few other sites to check this, and they show the same thing. I recently went into DNSmgmt on our server (Win2k3 SP1) and created an A record under DomainDnsZones, which shows as a CNAME when you look at it. Under the domain it has two entries: one for the subdomain and the other for the website host. Each has a separate IP address. Is this correct?
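
    Two different failures are being conflated here, and it may help to test them separately (a sketch in Python; the hostname is a placeholder). If the name resolves to the right address but nothing answers, DNS is fine and the problem is the host or a firewall; note too that many hosts drop ICMP, so a ping timeout by itself proves little:

        # Separate "does the name resolve?" from "does the host answer?"
        import socket

        host = "www.high-res.domain.com"
        addr = socket.gethostbyname(host)   # DNS lookup of the A record
        print(f"{host} resolves to {addr}")

        try:
            with socket.create_connection((addr, 80), timeout=5):
                print("TCP port 80 answers")
        except OSError as exc:
            print(f"resolved, but no answer on port 80: {exc}")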

    Read the article

  • Live chat solutions

    - by Lèse majesté
    What good live chat / live help solutions are available (preferably free and usable on a site hosted on a LAMP stack)? I'm looking for a way to allow our sales and customer service reps to talk directly with visitors to our site. I've looked at phpopenchat, but it looks very unpolished. The only other free live chat app I've come across looked egregious; the aesthetics and UI design alone made me shudder to think what the underlying code might look like. This isn't a critical feature, and it wouldn't be hard to code up myself, so I'm not really looking for commercial software or paid services (unless there's a really compelling reason to use them). I'm just wondering if any other webmasters have come across a satisfactory free/open source solution for providing live customer support on their website. As a side note, live voice chat would also be an option, but it has to be designed (or customizable) for customer support rather than a public chatroom. Edit: Looking at the responses, it looks like there probably aren't going to be many free solutions for this type of business-oriented chat, so feel free to post answers even if they are commercial solutions, as long as they're a good value. Also feel free to post any alternative live support solutions (such as the Skype recommendation) that could be in some way integrated with a website. This will give me a good lay of the land for what people are actually using for live support, and I think it will be more helpful to others reading this question.

    Read the article

  • International search: how to show different domains in Google+ Local?

    - by Baumr
    Background: A site has multiple ccTLDs: example.com for people in the US, example.co.uk for UK users, example.de for Germany, example.fr for France, etc. Searching for certain city keywords will return a list of Google+ Local (formerly Places) results, each linking to the corresponding company website.
    Problem: When searching on www.google.de, the domain of the site intended for US users (example.com) appears instead of the corresponding ccTLD (example.de) aimed at German users. This applies to all languages. In my opinion, and for the purposes of this business, it's not good user experience: searchers would most likely prefer to book on a site localized for them (e.g. in their language and currency).
    Question: Is it possible to return different ccTLDs in these local search listings for users across the globe? Currently, Google+ Local seems to support adding only a single "Website" field.
    Solutions I have considered: Creating duplicate Google Places listings for each URL would be spammy (and not viable when there are hundreds of locations, each needing a listing in 8 languages). I don't see the hreflang annotation helping either, and GWMT geotargeting is already set.

    Read the article

  • Free eBooks - SQL Server and other Microsoft Technologies

    - by Greg Low
    Great to see the advice from Gail Erickson about the release of a number of SQL Server related eBooks on the new Microsoft eBook Gallery site. It's good to see this sort of content moving over to eBook formats. The e-books that are currently available include:
    SQL Server 2012 Transact-SQL DML Reference
    Master Data Services Capacity Guidelines
    Microsoft SQL Server AlwaysOn Solutions Guide for High Availability and Disaster Recovery
    QuickStart: Learn DAX Basics in 30 Minutes
    Microsoft SQL Server Analysis Services Multidimensional Performance and Operations Guide
    You'll find details of them here: http://social.technet.microsoft.com/wiki/contents/articles/11608.e-book-gallery-for-microsoft-technologies.aspx

    Read the article

  • Windows XP Home Edition SP3 can't recognise PCMCIA SD card

    - by Pozo
    System specifications:
    Laptop: Dell Inspiron 6000
    OS: Windows XP Home Edition SP3
    SD adapter: Hagiwara Smart Media Adapter
    I inserted the card into the slot; Windows XP recognises the device and lists the PCMCIA controller in Device Manager, and an entry also appears under the IDE ATA/ATAPI category. However, the device does not show up under My Computer and the drive does not get assigned a letter. I checked the system logs from Device Manager and there were no logged errors. Checking the Hagiwara support website, the manufacturer indicates that the adapter driver is the same as the Windows XP PCMCIA controller driver. Checking Dell's website, no specific drivers were listed for that either. A general search on the web indicates that multiple people face similar problems with their SD cards, yet none actually spell out the root cause. Please let me know if you have any suggestions for further debugging. Thanks in advance

    Read the article

  • Why is my laptop so sluggish? Or Damn You Facebook and Twitter! Or All Hail Chrome!

    - by John Conwell
    In the past three weeks, I've noticed that my laptop (dual core 2.1GHz, 2Gb RAM) has become amazingly sluggish. I only use it for communications and data lookup workflows, so the slowness was tolerable. But today I finally got fed up with the suckiness and decided to get to the root of the problem (I do have strong performance roots, after all). It actually didn't take all that long to figure it out. About a year ago I converted to Google Chrome (away from Firefox). One of the great tools Chrome has is a "Task Manager" tool, which gives you Windows Task Manager-like details for all the tabs open in the browser (Shift + Esc). Since every tab runs in its own process, it's easy from Task Manager (either Windows' or Chrome's) to identify and kill a single performance-offending tab. This is unlike IE, where you only get aggregate data about all open tabs. Anyway, I digress. Today my laptop sucked. Windows Task Manager told me that I had two memory-hogging Chrome tabs, but couldn't tell me which web pages those tabs were showing. Enter Chrome Task Manager, which tells you the page title, along with the CPU, memory and network utilization of each tab. Enter my amazement. Turns out Facebook was using just shy of half a Gb of RAM. Half a Gigabyte! That's 512 Megabytes! 524,288 Kilobytes! 536,870,912 Bytes! Or 4,294,967,296 Bits! In other words, that's a frackin' boatload of memory. Now consider that Facebook is running on pretty much 96.3% (statistic based on absolutely nothing) of every household desktop, laptop, netbook, and mobile device in America; that is pretty horrific! And I wasn't playing any Facebook games like FarmWars or MafiaVille. I just had my normal, default home page up, showing me who just had breakfast or just finished their morning run. I'm sorry... let me say that again... HALF A GIG OF RAM! That is just unforgivable. I can just see my mom calling me up:
    Mom: "John... I think I need a new computer. Mine is really slow these days"
    John: "What do you have running?"
    Mom: "Oh, just Facebook"
    John: "Ok, close Facebook and tell me how fast your computer feels"
    Mom: "Well... I don't know how fast it is. All I do is use Facebook"
    John: "Ok Mom, I'll send you a new computer by Tuesday"
    Oh yea... and the other offending web page? It was Twitter, using a quarter of a Gigabyte. God I love social networks!

    Read the article

< Previous Page | 474 475 476 477 478 479 480 481 482 483 484 485  | Next Page >