Search Results

Search found 12397 results on 496 pages for 'maybe'.


  • How to create a timer the right way?

    - by mystify
    I always have a class which needs to set up a timer for as long as the object is alive, typically a UIView which does some animation. Now the problem: if I keep a strong reference to the NSTimer I create and invalidate and release the timer in -dealloc, the timer is never invalidated or released, because -dealloc is never called: the run loop maintains a strong reference to the target. So what can I do? If I can't hold a strong ref to the timer object, that is also bad, because maybe I need a ref to it to be able to stop it. And a weak ref on an object is not good either, because maybe I'm going to access it when it's gone, so better to have a retain on what I want to keep around. How are you guys solving this? Must the superview create the timer? Is that better? Or should I really just keep a weak ref on it and keep in mind that the run loop holds a strong ref on my timer for me, as long as it's not invalidated?

  • VS 2012 Code Review – Before Check In OR After Check In?

    - by Tarun Arora
    “Is Code Review Important and Effective?”

    There is a consensus across the industry that code review is an effective and practical way to catch code inconsistencies and possible defects early in the software development life cycle. Among others, some of the advantages of code reviews are:
    - Bugs are found faster
    - Forces developers to write readable code (code that can be read without explanation or introduction!)
    - Optimization methods/tricks/productive programs spread faster
    - Programmers as specialists "evolve" faster
    - It's fun

    “Code review is systematic examination (often known as peer review) of computer source code. It is intended to find and fix mistakes overlooked in the initial development phase, improving both the overall quality of software and the developers' skills. Reviews are done in various forms such as pair programming, informal walkthroughs, and formal inspections.” – Wikipedia

    Nowhere does the definition mention whether it is better to review code before it has been committed to version control or after the commit has been performed. No matter which side you favour, Visual Studio 2012 allows you to request a code review both before check-in and after check-in. Let's weigh the pros and cons of the approaches independently.

    Code Review Before Check In or Code Review After Check In?

    Approach 1 – Code review before check-in
    The developer completes the code and feels the code quality is appropriate for check-in to TFS. The developer raises a code review request to have a second pair of eyes validate whether the code abides by the recommended best practices, will not result in any defects due to common coding mistakes, and whether any optimizations can be made to improve the code quality.
    [Image 1 – code review before check in]
    Pros:
    - Everything that gets committed to source control is reviewed.
    - Minimizes the chances of smelly code making its way into the code base.
    - Decreases the cost of fixing bugs; remember, the earlier you find them, the lesser the pain in fixing them.
    Cons:
    - Development code freeze – since the changes aren't in source control yet, further development can only be done offline.
    - The changes have not been through a CI build, so it is hard to say whether the code abides by all build quality standards.
    - Inconsistent! Cumbersome to track the actual code review process.
    - Not every change to the code base is worth reviewing; a lot of effort is invested for very little gain.

    Approach 2 – Code review after check-in
    The developer checks in, and random code reviews are performed on the checked-in code.
    [Image 2 – code review after check in]
    Pros:
    - The code has already passed the CI build and run through any code analysis plug-ins you may have running on the build server. Instruct the developer to ensure zero FxCop, StyleCop and static code analysis issues before check-in; code is cleaner and smell-free even before the code review.
    - No offline development; developers can continue to develop against source control.
    Cons:
    - Bad code can easily make its way into the code base.
    - Since the review takes place much later in the cycle, the cost of fixing issues can prove to be much higher.

    Approach 3 – Hybrid approach
    The community advocates a more hybrid approach, a blend of tooling and the human accountability quotient.
    [Image 3 – Hybrid Approach]
    1. Code review high-impact check-ins. It is not possible to review everything; by setting up code review check-in policies you can end up slowing your team down. Moreover, the code that you are reviewing before check-in hasn't even been through a green CI build either.
    2. Tooling. Let the tooling work for you. By running static analysis, FxCop, StyleCop and other plug-ins on the build agent, you can identify the real issues that, in my opinion, can't possibly be identified using human reviews. Configure the tooling to report back the top 10 issues every day. Mandate a manual code review for individuals who keep making it onto this list of shame most often.
    3. During merge. I would prefer eliminating some of the other code issues during the merge from the Main branch to the release branch. In a Scrum project this is still easier, because cherry-picking the merges is a possibility and the size of code being reviewed is still limited.

    Let the tooling work for you: if someone breaks the CI build often, put them on gated check-in builds until you see improvement. If someone appears on the top 10 list of shame generated via the build, then ensure that all their code is reviewed until you see improvement. At the end of the day, the goal is to ensure that the code being delivered is top quality. By enforcing a code review before any check-in, you force the developer to work offline or stay put until the review is complete.

    What do the experts say?
    So I asked a few experts what they thought of a “code review quality gate before checking in code”.

    Terje Sandstrom | Microsoft ALM MVP
    You mean a review quality gate BEFORE checking in code????? That would mean a lot of code staying either local or in shelvesets, and not even having been through a CI build, with a green CI build being the main criterion for going further, e.g. to the review state. I would not like code lying around with no check-ins. Having a requirement that code is checked in in small pieces, 4-8 hours of work max, and AT LEAST daily check-ins, a manual code review comes second down the lane. I would expect review quality gates to happen before merging back to main, or before merging to release. But that would all be on checked-in code. Branching is absolutely one way to ease the pain. Another way we are using is automatic quality builds, running metrics, coverage, static code analysis. Unfortunately it takes some time (it would be great to run it on the CIs, but…), so it is done scheduled every night. Based on this we get, among other stuff, top 10 lists of suspicious code, which is then subjected to reviews. If a person seems to be very popular on these top 10 lists, we subject every check-in from that person to a review for a period. That normally helps. None of the clients I have can afford to have every check-in reviewed, so we need to find ways around it. I don't disagree with the nicety of having all the code reviewed, but I find it hard to find those resources in today's enterprises.

    David V. Corbin | Visual Studio ALM Ranger
    I tend to agree with both sides. I hate having code that is not checked in, but at the same time hate having "bad" code in the repository. I have found that branching is one approach to solving this dilemma. Code is checked into the private/feature branch before the review, but is not merged over to the "official" branch until after the review. I advocate both, depending on circumstance (especially team dynamics):
    - The "pre-checkin" review is usually for elements that may impact the project as a whole. Think of it as another "gate" along with passing unit tests.
    - The "post-checkin" review may very well not be at the changeset level, but correlates to a review at the "user story" level.
    Again, this depends on the team dynamics in play.

    Robert MacLean | Microsoft ALM MVP
    I do not think there is a right answer for the industry as a whole. In short, the question is: why do you do reviews? Your question implies risk mitigation, so in low-risk areas you can get away with reviewing after check-in, while in high-risk areas you need to do it before check-in. An example: those new to a team, or juniors, need it much earlier (maybe that is before check-in, maybe that is soon after) than seniors who have shipped twenty sprints on the team.

    Abhimanyu Singhal | Visual Studio ALM Ranger
    It depends on the scenario. We recommend post check-in reviews when:
    1. We don't want to block other checks and processes on manual code reviews. Manual reviews take time, and some pieces may not require manual reviews at all.
    2. We need to trace all changes and track history.
    3. We have a code promotion strategy/process in place. For risk mitigation, post check-in code can be promoted to Accepted branches, or can be rejected.
    Pre check-in reviews are used when:
    1. There is a high risk factor associated.
    2. Reviewers generally (most of the time) have immediate availability.
    3. The team does not have strict tracking needs.
    Simply speaking, no single process fits all scenarios. You need to select what works best for your team/project.

    Thomas Schissler | Visual Studio ALM Ranger
    This is an interesting discussion; I'm right now discussing details about executing code reviews with my teams. I see and understand the aspects you brought in, but there is another side as well that I'd like to point out.
    1.) If you do reviews per check-in, this is not very practical as a hard rule, because it will disturb the flow of the team very often, or it will reduce the check-in frequency of the devs, which I would not accept.
    2.) If you do later reviews, for example if you review PBIs, it is not easy to find out which code you should review. Either you review all changesets associated with the PBI, but then you might review code which has been changed in a later check-in and the dev may already have fixed the issue; or you review the diff of the latest changeset of the PBI against the first, but then you might also review changes of other PBIs.

    Jakob Leander | Sr. Director, Avanade
    In my experience, manual code review:
    1. Does not get done, and at the very least does not get redone after changes (regardless of intentions at the start of the project).
    2. When a project actually does it, they often do not do it right away = errors pile up.
    3. Requires a lot of time discussing/defining the standard and for the team to learn it.
    However, code review is very important since, e.g., even small memory leaks in a high-volume web solution have big consequences. In the last years I have advocated the following approach for code review:
    - Architects up front do "at least one best practice example" of each type of component and tell the team: copy from this one. This should include error handling, logging, security etc.
    - The dev lead on the project continuously browses the code to validate that the best practices are used, especially that patterns etc. are not broken. You can do this formally after each sprint/iteration if you want. Once this is validated, it is unlikely to "go bad" even during later code changes.
    - Agree with the customer to rely on static code analysis from Visual Studio as the one and only coding standard. This has HUUGE benefits:
      - You can easily tweak it to reach the level you desire together with the customer.
      - It is easy to measure for both developers and management.
      - It is 100% consistent across the code base.
      - It gets validated all the time, so you never end up getting hammered by a customer review in the end.
      - It is easy to tell the developer that you do not want code back unless it has zero errors = minimize communication.
    You need to track this at least during nightly builds and make sure the team sees the total number of issues. Do not allow the number of issues to grow uncontrolled. On the project I run, I require code analysis to have run on code before check-in (check-in rule). This means:
    - You have to have a clean compile (or CA won't run), so this is an extra benefit = very few broken builds.
    - You can change a few of the rules to compile as errors instead of warnings. I often do this for "missing dispose" issues, which you REALLY do not want in your app.
    Tip: place your custom CA rules files as part of the solution. That way it works when you do branching etc. (the path to the CA file is relative in VS). Some may argue that CA is not as good as manual inspection. But since manual inspection in reality suffers from the 3 issues above, it is IMO a MUCH better (and much cheaper) approach from a helicopter perspective.

    Tirthankar Dutta | Director, Avanade
    I think code review should be run both before and after check-ins. There are some code metrics that are meant to be run on the entire codebase … Also, especially on multi-site projects, one should strive to architect in a way that lets men manage the framework while boys write the repetitive code… it scales very well with the need to review less by containment and imposing architectural restrictions to emphasise the design.

    Bruno Capuano | Microsoft ALM MVP
    For code reviews (meaning peer reviews) in a distributed team I use http://www.vsanywhere.com/default.aspx

    David Jobling | Global Sr. Director, Avanade
    Peer review is the only way to scale, and it's a great practice for everyone in the team to learn to perform and accept. In my experience you soon learn whose code to watch more than others and tune the attention.

    Mikkel Toudal Kristiansen | Manager, Avanade
    If you have several branches in your code base, you will need to merge often. This requires manual merging when a file has been changed in both branches, and it offers a good opportunity to actually review the changed code. So my advice is: merging between branches should be done as often as possible, it should be done by a senior developer, and he/she should perform a full code review of the code being merged. As for detecting architectural smells and code smells creeping into the code base, one really good third-party tool exists: NDepend (http://www.ndepend.com/, for static code analysis of the current state of the code base). You could also consider adding StyleCop to the solution.

    Jesse Houwing | Visual Studio ALM Ranger
    I gave a presentation on this subject at the TechDays conference in NL last year. See my presentation and slides here (talk in Dutch, but English presentation): http://blog.jessehouwing.nl/2012/03/did-you-miss-my-techdaysnl-talk-on-code.html
    I'd like to add a few more points:
    - Before/after check-in is mostly a trust issue. If you have a team that does diligent peer reviews and regularly talks/sits together or peer reviews, there's no need to enforce a before-check-in policy. The peer programming and regular feedback during development can take care of most of the review requirements, as long as the team isn't under stress.
    - Under stress, enforce pre-check-in reviews. It might sound strange if you're already under time or budgetary constraints, but it is under such conditions that most real issues start to be created or pile up.
    - Use tools to catch the most common errors. Code Analysis/FxCop was already mentioned; HP Fortify, ReSharper, CodeRush etc. can help you there. There are also a lot of 3rd-party rules you can add to Code Analysis. I've written a few myself (http://fccopcontrib.codeplex.com) and various teams from Microsoft have added their own rules (MSOCAF for SharePoint, WSSF for WCF). For common errors that keep cropping up, see if you can define a rule. It's much easier. But more importantly, make sure you have a good help page explaining *WHY* it's wrong.
    - If you have small feature or developer branches/shelvesets, you might want to review pre-merge. It's still better to do peer reviews and peer programming, but the most important thing is that bad-quality code doesn't make it into the important branch.
    So my philosophy:
    - Use tooling as much as possible.
    - Make sure the team understands the tooling and the importance of the things it flags. It's too easy to just click "suppress all" to ignore the warnings.
    - Under stress, tighten the process; it's under stress that the problems of late reviews will really surface.
    - Most importantly, if you do reviews, do them as early as possible, but never later than needed. In other words, pre-check-in/post-check-in doesn't really matter, as long as the review is done before the code is released. It'll just be much more expensive to fix any review outcomes the later you find them.

    I would love to hear what you think!

  • How to Remove UAC Icon Overlays (Blue-Yellow Shields) in Windows 7

    - by nocturne
    Over some of my program shortcuts and install files there is a UAC icon overlay (blue-yellow shield), and I find them really ugly. Is there any way to get rid of them, please? Edit: I turned UAC off to see whether the overlays would go away or not; it's back on now. But maybe there is a way to remove them without turning off UAC, like removing shortcut arrows in this question: Remove shortcut icon overlay from shortcuts on Windows 7

  • How to rearm Microsoft Office in 2010 Information Worker Demonstration and Evaluation Virtual Machine (SP1)

    - by John Assymptoth
    I'm doing some tests in a 2010 Information Worker Demonstration and Evaluation Virtual Machine (SP1). However, after a few days (maybe ~180), Office is now saying that it needs to be activated. I've tried rearming with OSPPREARM.EXE, but I get the following error: "The security processor reported that the maximum allowed number of re-arms has been exceeded. You must re-install the OS before trying to re-arm again." How can I circumvent this, without losing all the data I have in the VM?

  • Bandwidth Limit for IIS FTP 7.0 (or 7.5)

    - by oruchreis
    Hi, FTP Server 7 doesn't support setting a bandwidth limit on upload or download. Is there any way to limit the bandwidth of uploads or downloads? Is there any extension or plugin that adds this feature? I don't want to use third-party FTP servers. Maybe there is an extension for IIS that can do the limiting. Edit: I want to limit per user or per virtual directory. Regards.

  • SharePoint webpart for WebEx

    - by Kelly French
    Is there a SharePoint webpart available for WebEx? We do a lot of web conferencing and want the functionality to be exposed through SharePoint but WebEx hasn't released a webpart yet. The solution provided by WebEx has its critics. I searched for 'SharePoint' in Cisco's WebEx knowledgebase and got back zero (0) results. Has anyone found either a workaround or maybe a third-party webpart?

  • What does the condition "new pull" mean?

    - by Nathan DeWitt
    I'm looking for a hard drive, and some of the conditions are listed as "New Pull" or "System Pull". I figure the System Pull means "taken from a computer and now sold separately" but what does New Pull mean? Does this mean it was assembled and never used? Or maybe it has been freshly pulled from a used machine?

  • Remote Email Access?

    - by Tyler
    I have remote email access from an iPhone or my Android phone, but I cannot set up a Windows email client to check my email using the exact same information I provided on my phones. The email system is Exchange 2003 and I hate using the cheap Outlook Web App that it has. User: [email protected] Password: 1234 Server: mail.domain.com And that works for the phones. So why can't I get it to work in my email client? Maybe a DNS problem?

  • Uploading PHP files into my root folder and sending spam

    - by Mustafa Oenal
    I do not understand how, but someone is uploading a PHP file into the public_html directory of my CentOS 6 server, like statisticsuQPo.php. This PHP file gives me "linux10+cfcd208495d565ef66e7dff9f98764da" and it is sending spam mails without end. I have removed the file maybe 10 times, but I get it back every day. How can I solve this problem? Is there anything wrong with my Apache configuration?

  • Upgrade from Windows Server 2008 Core to full Windows Server 2008

    - by laurens
    Possible Duplicate: Install GUI on Windows Server 2008 Core As far as I've seen, there is not really a topic about this here... My question: is there any means to upgrade from Windows Server 2008 Core to the full Windows Server 2008? The server is used as a Hyper-V host machine. On the internet I mostly find "no, you'll have to reinstall", but maybe there's a workaround? Thanks in advance

  • MAMP Pro Uninstaller Throws "The privileged action failed." Error Even After Entering Password

    - by BigM
    I have followed, and successfully completed, every step in this SO article to remove MAMP Pro in favor of installing the free version as I only need it for one site anyway. However, when I run the uninstaller I still get "The privileged action failed." after providing the password. Can anybody shed a little light on this maybe? I'm just trying to get MAMP Pro uninstalled so I can install the free MAMP stack.

  • Open file - Security warning

    - by joker
    Does anyone know how to disable the unknown publisher security warning when running an application in Windows XP Home? It's pretty annoying to have to click Run every time... I have tried: run gpedit.msc, go to Local Computer Policy - User Configuration - Administrative Templates - Windows Components - Attachment Manager and enable "Default risk level for file attachments", then enable "Inclusion list for low risk file types" and add to this list the file extensions that you want to open without triggering this crap. But the file 'gpedit.msc' does not exist on my computer; I checked the system32 folder as well =/ Maybe it's only in XP Pro.
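    Since gpedit.msc isn't included in XP Home, the Attachment Manager policies can usually be set directly in the registry instead. A hedged sketch, assuming the commonly documented value name LowRiskFileTypes under HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Associations, and that PowerShell (or reg.exe with the same key and value) is available on the machine:

        # Create the Associations policy key if it doesn't exist yet, then list the
        # extensions to treat as low risk (.exe/.msi/.bat here are only examples).
        $key = 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Policies\Associations'
        if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }
        Set-ItemProperty -Path $key -Name 'LowRiskFileTypes' -Value '.exe;.msi;.bat'

    This mirrors the "Inclusion list for low risk file types" policy mentioned above; whether it fully suppresses the prompt on XP Home is worth verifying, and a log off/log on may be needed before it takes effect.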

  • Configure Squid with Windows 2008

    - by G.a.r.y.
    Hi, my problem is this: I have 3 PCs (192.168.1.2, .3, .4) and a Windows 2008 server (192.168.1.100); the router is 192.168.1.1. I just want the 3 PCs, with their gateway set to 192.168.1.100, to be filtered by the Squid proxy running on Windows 2008. On the Windows 2008 machine I have set the proxy 192.168.1.100:3128 in Control Panel, and in the Windows 2008 browser it works, the connection is filtered by the proxy, but on the 3 PCs it does not work. So maybe I should route all incoming requests into Squid, but I don't know how... thanks

  • How to make a multiboot USB key?

    - by zillion
    I want to split my 8 GB USB key into several partitions, use WinToFlash for a Windows XP (maybe nLited beforehand), and also put the Framakey ubuntu-fr remix pack on it as the second bootable OS, then tweak and mod it a little, because if I can I want to switch the included Ubuntu 9.04 to the LTS version... So does someone know how to do this easily? IMP: in short, I want to make a dual-boot USB key with Windows XP SP3 and Ubuntu 8.04.3 LTS.

  • Remove ads from Windows Live Messenger

    - by Mehper C. Palavuzlar
    How can I remove the ads from Windows Live Messenger build 14.0.8089.726? Maybe there is a registry setting for that? Googling this brings lots of results with some applications full of viruses and malware. Please suggest something that you have tried yourself (on Windows 7) and confirmed as successful.

  • Why is Windows sometimes unable to kill a process?

    - by Néstor Sánchez A.
    Right now I'm trying to Run/Debug my app in Visual Studio, but it cannot create it because the last instance of app.vshost.exe is still running. Then I try to kill it using Task Manager, but it just remains there with no sign of activity. Beyond that particular case (maybe a VS bug), I'm very curious about the technical reasons why Windows sometimes cannot kill a process. Can an enlightened OS-related developer please try to explain? (And please don't start a Unix/Linux/Mac battle against Windows.)

  • ssh: "Agent admitted failure to sign using the key"

    - by takeshin
    I'm trying to set up password-less login with ssh on Ubuntu Server, but I keep getting: Agent admitted failure to sign using the key and then a prompt for the password. I have generated new RSA keys. Before the system reboot it worked just fine. All the links lead me to this bug, but nothing works. The SSH agent is still not running. How do I fix that? Maybe the files need specific permissions?

  • Who uses Zimbra Collaboration Suite and why?

    - by AlberT
    I am really curious about other people's experiences and choices. After scouting around for a long time, I found ZCS to be a really impressive solution, maybe the only real alternative to M$ Exchange. I'm very interested in opinions and case histories from users who have already deployed Zimbra on their infrastructure or are planning to do it. Both Community and Network edition cases are appreciated, with pros and cons explained too :) Zimlets, addons, useful skins, Zimbra Desktop and other apps, or mobile integration use cases too, of course.

  • Why does Ubuntu feel so sluggish on my Asus 1000HE netbook?

    - by Pete Hodgson
    I recently purchased a nice Asus 1000HE and installed Ubuntu NBR. However, I'm pretty disappointed with how sluggish it feels. I'm wondering if I maybe need to install a closed-source graphics driver - it feels similar to how my work laptop performed before I installed the restricted NVIDIA driver on that machine. [EDIT] In case it's any use: pete@eliza:~$ uname -a Linux eliza 2.6.28-12-netbook-eeepc #43 SMP Mon Apr 27 16:06:05 MDT 2009 i686 GNU/Linux

  • Deleting entire lines in a text file based on a partial string match with Windows PowerShell

    - by Charles
    So I have several large text files I need to sort through and remove all occurrences of lines which contain a given keyword. So basically, if I have these lines:

        This is not a test
        This is a test
        Maybe a test
        Definitely not a test

    and I run the script with 'not', I need to entirely delete lines 1 and 4. I've been trying with:

        PS C:\Users\Admin> (Get-Content "D:\Logs\co2.txt") | Foreach-Object {$_ -replace "3*Program*", ""} | Set-Content "D:\Logs\co2.txt"

    but it only replaces the 'Program' part and not the entire line.
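    For reference, the usual way to drop whole lines rather than replace text inside them is to filter with a match test instead of -replace. A minimal sketch, using the file path from the question and 'not' as the keyword:

        # Keep only the lines that do NOT contain the keyword. -notmatch is a regex
        # test, so escape the keyword in case it contains regex metacharacters.
        $keyword = 'not'
        (Get-Content "D:\Logs\co2.txt") |
            Where-Object { $_ -notmatch [regex]::Escape($keyword) } |
            Set-Content "D:\Logs\co2.txt"

    Reading the file inside parentheses first, as the question already does, matters here because Set-Content writes back to the same file.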

  • Extracting data from Visual FoxPro databases

    - by whitequark
    I just got some 20 GB of data in a Visual FoxPro database, with a custom frontend probably written in the same framework, and I need to extract that data in any well-known format. I don't know anything about VFP in particular, but as it is SQL-based, there should be a way of opening an SQL console, or maybe a vfpdump utility. How can I do that? Everything I have now is a bunch of obscure binary files and a frontend executable.
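    If the tables are ordinary DBF/DBC files, one commonly used route is the Visual FoxPro OLE DB provider (VFPOLEDB, a free Microsoft download), which lets any .NET-capable script query the tables and dump them to a well-known format. A rough sketch in PowerShell; the data folder and table name are placeholders, and it assumes the provider is installed (it is 32-bit only, so use a 32-bit host):

        # Point the connection at the folder containing the .dbf files (or at a .dbc container).
        $conn = New-Object System.Data.OleDb.OleDbConnection('Provider=VFPOLEDB.1;Data Source=C:\vfp\data\')
        $conn.Open()
        $cmd = $conn.CreateCommand()
        $cmd.CommandText = 'SELECT * FROM customers'   # hypothetical table name
        $table = New-Object System.Data.DataTable
        $table.Load($cmd.ExecuteReader())
        $conn.Close()
        # Dump the result to CSV, keeping only the actual data columns.
        $columns = $table.Columns | ForEach-Object { $_.ColumnName }
        $table | Select-Object -Property $columns | Export-Csv 'customers.csv' -NoTypeInformation

    Finding the free tables is then just a matter of looking for .dbf files, which often live in the custom frontend's installation or data folder.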

  • How can you know what w3wp.exe is doing? (or how to diagnose a performance problem)

    - by Daniel Magliola
    I'm having a performance problem on a site we've made, and I'm not exactly sure how to start diagnosing it.

    The short description is: we have a very small site (http://hearablog.com) with very little traffic, on a crappy dedicated server; CPU is always very high, sometimes it stays at 100% for minutes, and w3wp.exe is taking most of it. A typical scenario is w3wp.exe takes 60% and SQL Server takes about 30%. Our DB is pretty small too.

    Long description and more details: the site is hosted on a very crappy server by Cari.Net. From the beginning we had the feeling that the server didn't quite behave correctly, like some things would take just too long, so this could be a configuration problem from the get-go. It may also be that we are getting a virtual server while we're supposed to have a dedicated one, although we have no evidence that'd indicate this, except for the fact that the server tends to be quite slow.

    The server is Windows 2008 Standard 64-bit, with SQL 2008 Express. Hardware is a Celeron 2.80 GHz with 1 GB RAM. The website is developed in ASP.NET MVC, using Entity Framework for data access.

    Now, this is pretty crappy hardware, but I've had other servers with these guys, with equivalent (or worse) hardware, and performance is much better than this one. That said, the other servers have W2003 and SQL 2005, and I'm using ASP.NET "WebForms" 2.0 there, no MVC, no LINQ, no EF; so I'm not sure whether going to 2008 / the other stuff means a big performance penalty is to be expected.

    I'm serving MP3 files (5-20 MB) regularly, which is a slightly unusual load; maybe that is causing some kind of problem? Would that cause w3wp to use a lot of CPU? Disk usage seems very low. Memory is usually around 90%, but disk usage seems to indicate it's not paging much. I get tons of e-mails every day about SQL timeouts, for queries taking over 30 seconds, although all our queries are pretty straightforward (or should be, but EF may be screwing it up). This is what Resource Monitor looks like in one of these "sprints" of 100% CPU, in case there's anything useful there, along with a snapshot of some performance counters.

    Now, what confuses me very much is that the CPU usage of w3wp is just so high. It shouldn't be doing much, really... So my questions are...
    - Is there any way of finding out "what" it is doing? Maybe even profile it?
    - Any performance counters I should be looking at?
    - Is this to be expected given this hardware/software configuration?
    - Could this be caused by some kind of configuration failure? Where would you start looking?
    Thank you VERY much.
    Daniel Magliola
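    As a starting point for the "what is w3wp actually doing" part, sampling the worker process performance counters can narrow down whether the time goes to requests, garbage collection, or something else. A hedged sketch, assuming PowerShell 2.0 or later is installed on the box (the counter paths below are the standard English names and may differ by locale or ASP.NET version):

        # Sample CPU, ASP.NET request rate and .NET GC time for the IIS worker process.
        # If several app pools run, appcmd.exe (in %windir%\system32\inetsrv) "list wp"
        # maps w3wp process IDs to application pools.
        $counters = @(
            '\Process(w3wp)\% Processor Time',
            '\ASP.NET Applications(__Total__)\Requests/Sec',
            '\.NET CLR Memory(w3wp)\% Time in GC'
        )
        Get-Counter -Counter $counters -SampleInterval 5 -MaxSamples 12 |
            ForEach-Object { $_.CounterSamples | Select-Object Path, CookedValue }

    If CPU stays pinned while Requests/Sec is near zero, the usual suspects are a runaway loop or GC thrashing, which a dump taken with DebugDiag or Process Explorer can then help pin down.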

  • enabling gzip with htaccess...why is it hit or miss?

    - by adam-asdf
    I have shared hosting through Justhost. I use the HTML5 Boilerplate .htaccess (I have tried other methods from here and there without luck); the compression part is as follows:

        <IfModule mod_deflate.c>
          # Force deflate for mangled headers developer.yahoo.com/blogs/ydn/posts/2010/12/pushing-beyond-gzipping/
          <IfModule mod_setenvif.c>
            <IfModule mod_headers.c>
              SetEnvIfNoCase ^(Accept-EncodXng|X-cept-Encoding|X{15}|~{15}|-{15})$ ^((gzip|deflate)\s*,?\s*)+|[X~-]{4,13}$ HAVE_Accept-Encoding
              RequestHeader append Accept-Encoding "gzip,deflate" env=HAVE_Accept-Encoding
            </IfModule>
          </IfModule>
          # Compress all output labeled with one of the following MIME-types
          <IfModule mod_filter.c>
            AddOutputFilterByType DEFLATE application/atom+xml \
                                          application/javascript \
                                          application/json \
                                          application/rss+xml \
                                          application/vnd.ms-fontobject \
                                          application/x-font-ttf \
                                          application/xhtml+xml \
                                          application/xml \
                                          font/opentype \
                                          image/svg+xml \
                                          image/x-icon \
                                          text/css \
                                          text/html \
                                          text/plain \
                                          text/x-component \
                                          text/xml
          </IfModule>
        </IfModule>

    However, it isn't working, at least I don't think so: my home page (HTML) isn't compressed, and the CSS and some of the JS aren't gzipped. It is failing on HTML, CSS and JS. However, some things are (or were, who knows what it will look like when you check) gzipped. My domain is http://adaminfinitum.com/

    What is weird is that the (Google) PageSpeed browser extension for Firefox (whatever the current version is [Nov. 2012]) gives me a 95% speed rating (and no warnings about compression), yet YSlow and the Chrome developer tools both flag me about gzip, as does a tool I found on here while researching this.

    To reduce cookies I set up a subdomain on my site, and I thought maybe that was it, so I added an .htaccess there also, but no luck. To reduce HTTP requests I embedded some of the webfonts and images in the CSS (HTML5 BP stipulates not to compress images, and apparently '.woff' files are already compressed), so I thought maybe that was it, and I spent all day separating and asynchronously loading those portions (via Modernizr.load), but that hasn't helped either... if anything it made it worse due to increasing HTTP requests (I realize speed scores of async resources may be misleading).

    Researching this, it seems to be a fairly common issue, but I haven't found an explanation/solution. I don't think it is a MIME-type issue; I have quadruple-checked (and thrice edited) my .htaccess files. My hosting company said they run Apache 2.2.22 and I have looked at everything I can find. What gives?
