Inevitably I'll stop using an antiquated CSS, script, or image file, especially when a separate designer is tinkering with things and testing out a few versions of images. Before I build one myself, are there any tools out there that will drill through a website and list unlinked files? Specifically, I'm interested in ASP.NET MVC sites, so detecting calls to @Url.Content(...) (among many other things) is important.
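In case nothing suitable exists, here is a rough sketch of the kind of tool I have in mind, in Python. The view/asset folder layout and the Url.Content/src/href regex are simplifying assumptions, so it would miss bundles, layouts, and dynamically built paths:

# Rough sketch: list static files under Content/ and Scripts/ that are never
# referenced from any Razor/WebForms view. Folder names and the regex below
# are assumptions; a real tool would need to handle bundles and dynamic paths.
import re
from pathlib import Path

PROJECT_ROOT = Path(".")                 # assumed: run from the MVC project root
ASSET_DIRS = ["Content", "Scripts"]
VIEW_EXTENSIONS = {".cshtml", ".vbhtml", ".master", ".aspx", ".ascx"}

# Matches @Url.Content("~/Content/site.css") as well as plain src/href attributes.
REFERENCE_RE = re.compile(
    r"""Url\.Content\(\s*["']~?/?([^"']+)["']\s*\)|(?:src|href)=["']~?/?([^"']+)["']""",
    re.IGNORECASE,
)

referenced = set()
for path in PROJECT_ROOT.rglob("*"):
    if path.suffix.lower() in VIEW_EXTENSIONS:
        text = path.read_text(errors="ignore")
        for match in REFERENCE_RE.finditer(text):
            ref = (match.group(1) or match.group(2)).split("?")[0]
            referenced.add(ref.replace("\\", "/").lower())

for asset_dir in ASSET_DIRS:
    for path in (PROJECT_ROOT / asset_dir).rglob("*"):
        if path.is_file():
            rel = path.relative_to(PROJECT_ROOT).as_posix().lower()
            if rel not in referenced:
                print("possibly unlinked:", rel)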
How can I back up and restore my Horde SQL database?
The database is located in /var/lib/mysql/horde. There are many *.frm files and one db.opt.
My server is broken, so I want to reinstall it.
Can I copy these files to a USB stick, reinstall the entire server without Horde, and then simply copy the files back into the same directory? Or do I have to use something like mysqldump to delete and reinstall the database?
Thank you,
Thomas
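For reference, a minimal sketch of the mysqldump route, assuming the old server can still start MySQL, the database is simply named horde, and the USB stick is mounted at /media/usb (adjust credentials and paths to your setup):

# On the old server: dump the horde database to a file on the USB stick.
mysqldump -u root -p --databases horde > /media/usb/horde-backup.sql

# On the freshly reinstalled server: recreate the database from the dump.
mysql -u root -p < /media/usb/horde-backup.sql

Copying the raw files from /var/lib/mysql can work, but only reliably if MySQL is stopped, the server versions match, and (for InnoDB tables) the shared ibdata/ib_logfile files are copied as well, which is why a logical dump is usually the safer route.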
I was checking Webmaster Tools for one of my company's websites to analyze the cause of some soft 404 errors, and discovered that a few of the older errors had affiliate mp referral tags listed as the relative URLs. Since these are older problems and I don't see many of them coming up in the last few months, I don't think it's still an issue. I'm just curious whether it's possible to cause a soft 404 by improperly copying the campaign or referral tag into the URL.
Whereas simple workflows are possible using Microsoft Office SharePoint Designer, you will soon reach the point where you need to use Visual Studio. In the third article in Charles' introduction to workflows in SharePoint, he demonstrates how to create a workflow from scratch using Visual Studio, and discusses the relative merits of the two tools for this sort of development work.
I'm getting a 404 crawl error for mailto:[email protected] in Google Webmaster Tools, under Health > Crawl Errors.
Surely Google should see that mailto: relates to an email address, not a web page.
The HTML I'm using for the mailto link on my page is <a href="mailto:mailto:[email protected]">[email protected]</a>
What's the best way to resolve this? Is mailto still widely used, or is there a newer alternative?
Obviously some cleanup tools work better than others… and sometimes common-sense cleanup is the best tool of all! Note the timeline in the image: we got a sales guy's laptop back… note the times. [via Fail Desk]
I originally wanted to write this post as one, but there is quite a large amount of information, which can be broken down into different areas, so I am going to publish it as three posts:
Minification and Concatenation of JavaScript and CSS Files – this post
Versioning Combined Files Using Subversion – published shortly
Versioning Combined Files Using Mercurial – published shortly
Website Performance
There are many ways to improve website performance; two of them are reducing the amount of data served up by the web server and reducing the number of files that are requested. Here I will outline the process of minifying and concatenating your JavaScript and CSS files automatically at build time of your Visual Studio web site/application.
To edit the project file in Visual Studio, you first need to unload it by right-clicking the project in Solution Explorer. I prefer to do this in a third-party tool such as Notepad++ and save it there, forcing VS to reload it each time I make a change, as the whole process in Visual Studio can be a bit tedious. Once you have the project file open, you will notice that it is an MSBuild project file.
I am going to use a fantastic utility from Microsoft called Ajax Minifier. This tool minifies both JavaScript and CSS.
1. Import the tasks for AjaxMin, choosing the location you installed to. I keep all third-party utilities in a Tools directory within my solution structure and source control. This way I know I can get the entire solution from source control without worrying about what other tools I need to get the project to build locally.
<Import Project="..\Tools\MicrosoftAjaxMinifier\AjaxMin.tasks" />
2. Now create ItemGroups for all your JS and CSS files like this, separating your non-minified files from your minified files. These can go in the AfterBuild target.
<Target Name="AfterBuild">

  <!-- Javascript files that need minifying -->
  <ItemGroup>
    <JSMin Include="Scripts\jqModal.js" />
    <JSMin Include="Scripts\jquery.jcarousel.js" />
    <JSMin Include="Scripts\shadowbox.js" />
  </ItemGroup>

  <!-- CSS files that need minifying -->
  <ItemGroup>
    <CSSMin Include="Content\Site.css" />
    <CSSMin Include="Content\themes\base\jquery-ui.css" />
    <CSSMin Include="Content\shadowbox.css" />
  </ItemGroup>
  <!-- Javascript files to combine -->
  <ItemGroup>
    <JSCat Include="Scripts\jqModal.min.js" />
    <JSCat Include="Scripts\jquery.jcarousel.min.js" />
    <JSCat Include="Scripts\shadowbox.min.js" />
  </ItemGroup>

  <!-- CSS files to combine -->
  <ItemGroup>
    <CSSCat Include="Content\Site.min.css" />
    <CSSCat Include="Content\themes\base\jquery-ui.min.css" />
    <CSSCat Include="Content\shadowbox.min.css" />
  </ItemGroup>
3. Call AjaxMin to do the crunching.
1: <Message Text="Minimizing JS and CSS Files..." Importance="High" />
2: <AjaxMin JsSourceFiles="@(JSMin)" JsSourceExtensionPattern="\.js$"
3: JsTargetExtension=".min.js" JsEvalTreatment="MakeImmediateSafe"
4: CssSourceFiles="@(CSSMin)" CssSourceExtensionPattern="\.css$"
5: CssTargetExtension=".min.css" />
This will create the *.min.css and *.min.js files in the same directory as the original files.
4. Now concatenate the minified files into one file for JavaScript and another for CSS. Here we write out the files with default file names; in later posts I will cover versioning these files in step with your project assembly, again to help performance.
1: <Message Text="Concat JS Files..." Importance="High" />
2: <ReadLinesFromFile File="%(JSCat.Identity)">
3: <Output TaskParameter="Lines" ItemName="JSLinesSite" />
4: </ReadLinesFromFile>
5: <WriteLinestoFile File="Scripts\site-script.combined.min.js" Lines="@(JSLinesSite)"
6: Overwrite="true" />
7: <Message Text="Concat CSS Files..." Importance="High" />
8: <ReadLinesFromFile File="%(CSSCat.Identity)">
9: <Output TaskParameter="Lines" ItemName="CSSLinesSite" />
10: </ReadLinesFromFile>
11: <WriteLinestoFile File="Content\site-style.combined.min.css" Lines="@(CSSLinesSite)"
12: Overwrite="true" />
5. Save the project file; if you have Visual Studio open, it will ask you to reload the project. You can now run a build, and the minified and combined files will be created automatically.
6. Finally, reference the minified, combined files in your web pages.
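For example, the references might look something like this in plain HTML (using the combined file names from step 4; adjust the paths if you output the files elsewhere):

<link href="/Content/site-style.combined.min.css" rel="stylesheet" type="text/css" />
<script src="/Scripts/site-script.combined.min.js" type="text/javascript"></script>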
In the next two posts I will cover versioning these files to match your assembly.
I am getting a DNS error in Google Webmaster Tools, even after testing with this: http://dnscheck.pingdom.com/?domain=ansoftsys.com&timestamp=1372108107&view=1
Name server details:
Here is a screenshot of my DNS management page.
How can I solve this issue?
My DNS error image below was generated from this link: http://dnscheck.pingdom.com/?domain=ansoftsys.com&timestamp=1372108107&view=1
MySQL 5.6 comes with significant improvements for the compression support inside InnoDB. The enhancements that we'll talk about in this piece are also a good example of community contributions. The work on these was conceived, implemented and contributed by the engineers at Facebook. Before we plunge into the details let us familiarize ourselves with some of the key concepts surrounding InnoDB compression.
In InnoDB, compressed pages have a fixed size; supported sizes are 1K, 2K, 4K, 8K and 16K. The compressed page size is specified at table creation time.
InnoDB uses zlib for compression.
The InnoDB buffer pool will attempt to cache compressed pages like normal pages. However, whenever a page is actively used by a transaction, we'll always have the uncompressed version of the page as well; i.e., a page in the buffer pool can be present in compressed-only form, or as both the compressed page and its uncompressed version, but never in uncompressed-only form. On disk we only ever have the compressed page.
When both compressed and uncompressed images are present in the buffer pool, they are always kept in sync, i.e., changes are applied to both atomically.
Recompression happens when changes are made to the compressed data. In order to minimize recompressions, InnoDB maintains a modification log within each compressed page: the extra space available in the page after compression is used to log modifications to the compressed data, thus avoiding recompressions.
DELETE (and ROLLBACK of DELETE) and purge can be performed without recompressing the page. This is because the delete-mark bit and the system fields DB_TRX_ID and DB_ROLL_PTR are stored in uncompressed format on the compressed page. A record can be purged by shuffling entries in the compressed page directory. This can also be useful for updates of indexed columns, because UPDATE of a key is mapped to INSERT+DELETE+purge.
A compression failure happens when we attempt to recompress a page and it does not fit in the fixed size. In that case, we first try to reorganize the page and attempt to recompress it, and if that fails as well, we split the page into two and recompress both pages.
Now let's talk about the three major improvements that we made in MySQL 5.6.
Logging of Compressed Page Images:
InnoDB used to log the entire compressed data on the page to the redo logs when recompression happens. This was an extra safety measure to guard against the rare case where an attempt is made to do recovery using a different zlib version from the one that was used before the crash. Because recovery is a page-level operation in InnoDB, we have to be sure that all recompression attempts succeed without causing a B-tree page split. However, writing entire compressed data images to the redo log files not only makes the operation heavy duty but can also adversely affect flushing activity. This happens because redo space is used in a circular fashion: when we generate much more redo than normal, we fill up the space much more quickly, and in order to reuse the redo space we have to flush the corresponding dirty pages from the buffer pool. Starting with MySQL 5.6 there is a new global configuration parameter, innodb_log_compressed_pages. The default value is true, which is the same as the previous behavior. If you are sure that you are not going to attempt to recover from a crash using a different version of zlib, then you should set this parameter to false. This is a dynamic parameter.
Compression Level:
You can now set the compression level that zlib should use to compress the data. The global parameter is innodb_compression_level; the default value is 6 (the zlib default) and the allowed values are 1 to 9. Again the parameter is dynamic, i.e., you can change it on the fly.
Dynamic Padding to Reduce Compression Failures:
Compression failures are expensive in terms of CPU. We go through the hoops of recompress, failure, reorganize, recompress, failure and finally page split. At the same time, how often we encounter compression failures depends largely on the compressibility of the data. In MySQL 5.6, courtesy of Facebook engineers, we have an adaptive algorithm based on per-index statistics that we gather about compression operations. The idea is that if a certain index/table is experiencing too many compression failures, then we should pack the 16K uncompressed version of the page less densely, i.e., we let some space in the 16K page go unused in the hope that recompression won't end up in a failure. In other words, we dynamically keep adding 'pad' to the 16K page until compression failures fall within an acceptable range. It works the other way as well: we keep removing the pad if the failure rate is fairly low. To tune the padding effort, two configuration variables are exposed.
innodb_compression_failure_threshold_pct: default 5, range 0 - 100, dynamic; the percentage of compression operations that can fail before we start padding. A value of 0 has the special meaning of disabling padding.
innodb_compression_pad_pct_max: default 50, range 0 - 75, dynamic; the maximum percentage of each uncompressed data page that can be reserved as padding.
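As an illustration, this is roughly how the compression options above might be used on a 5.6 server; the table and column names are invented, and the values are examples rather than recommendations:

-- Create a table stored as 8K compressed pages (requires innodb_file_per_table=ON
-- and innodb_file_format=Barracuda).
CREATE TABLE compressed_docs (
  id   INT NOT NULL PRIMARY KEY,
  body TEXT
) ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;

-- Skip logging full compressed page images, if you are sure you will never
-- recover with a different zlib version.
SET GLOBAL innodb_log_compressed_pages = OFF;

-- Trade more CPU for better compression (zlib level, default 6).
SET GLOBAL innodb_compression_level = 9;

-- Start padding pages once more than 5% of compression attempts fail,
-- reserving at most 50% of each page as padding.
SET GLOBAL innodb_compression_failure_threshold_pct = 5;
SET GLOBAL innodb_compression_pad_pct_max = 50;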
We are currently developing a site that has 8 million unique pages; that will grow to about 20 million right away, and eventually to about 50 million or more.
Before you criticize... Yes, it provides unique, useful content. We continually process raw data from public records and by doing some data scrubbing, entity rollups, and relationship mapping, we've been able to generate quality content, developing a site that's quite useful and also unique, in part due to the breadth of the data.
Its PR is 0 (new domain, no links), and we're getting spidered at a rate of about 500 pages per day, putting us at about 30,000 pages indexed thus far. At this rate, it would take over 400 years to index all of our data.
I have two questions:
Is the rate of indexing directly correlated with PR? And by that I mean: is it correlated enough that purchasing an old domain with good PR would get us to a workable indexing rate (in the neighborhood of 100,000 pages per day)?
Are there any SEO consultants who specialize in aiding the indexing process itself? We're otherwise doing very well with SEO, on-page especially; besides, the competition for our "long-tail" keyword phrases is pretty low, so our success hinges mostly on the number of pages indexed.
Our main competitor has achieved approx 20MM pages indexed in just over one year's time, along with an Alexa 2000-ish ranking.
Noteworthy qualities we have in place:
page download speed is pretty good (250-500 ms)
no errors (no 404 or 500 errors when getting spidered)
we use Google Webmaster Tools and log in daily
friendly URLs in place
I'm afraid to submit sitemaps. Some SEO community postings suggest a new site with millions of pages and no PR is suspicious. There is a Google video of Matt Cutts speaking of a staged on-boarding of large sites, too, in order to avoid increased scrutiny (at approx 2:30 in the video).
Clickable site links deliver all pages, no more than four pages deep and typically no more than 250(-ish) internal links on a page.
Anchor text for internal links is logical and adds relevance hierarchically to the data on the detail pages.
We had previously set the crawl rate to the highest on webmaster tools (only about a page every two seconds, max). I recently turned it back to "let Google decide" which is what is advised.
I saw lots of questions about this topic, and all of them were about whether someone who wants to use PHP for "building web pages" should learn HTML first or not.
Most of them said yes, because most of the time you build web pages with both PHP and HTML (and maybe CSS).
But if I just want to use PHP for connecting to my database (for example MySQL) and nothing more, should I learn any HTML or CSS first or not?
Are there developers out there who (ab)use the CaptureScreenshot() function of their automated GUI tests to also create up-to-date screenshots for the user documentation?
Background: Within the lifetime of an application, its GUI elements are constantly changing. It takes a lot of work to keep the user documentation up to date, especially if the example data in the pictures should match the textual description. If you already have automated BDD GUI tests, why not let them take screenshots at certain points?
I am currently playing with web apps in .NET + SpecFlow + Selenium, but this topic also applies to other BDD engines (JRuby-Cucumber, MSpec, RSpec, ...) and GUI test frameworks (WatiN, Watir, White, ...).
Any experience, thoughts or links on this topic would be helpful. How is the cost/benefit relation? Is it worth the effort? What are the drawbacks?
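For what it's worth, a minimal sketch of the idea in Python + Selenium; the URL, element IDs and output paths are made up, and the same pattern works from SpecFlow/WatiN via their own screenshot calls:

# Minimal sketch: drive a scripted UI scenario and save screenshots at the
# points the user documentation needs. save_screenshot() is a standard
# Selenium WebDriver call.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
try:
    driver.get("https://example.com/login")               # hypothetical page
    driver.save_screenshot("docs/images/login-page.png")  # hypothetical output path

    driver.find_element(By.ID, "username").send_keys("demo.user")  # hypothetical IDs
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()

    driver.save_screenshot("docs/images/after-login.png")
finally:
    driver.quit()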
See also:
Is it practical to retroactively write specifications documenting a system via automated acceptance tests?
www.google.com/webmasters/tools
When I use Google Webmaster Tools to check one of my websites, I have found that there are 37 links under Sitelinks which do not belong to my website at all. The sitelinks should be inner pages of my website!
I have not used Google Webmaster Tools for half a year, and I know these links are the result of hacking, or attempts at it. Please tell me how to delete them and how to prevent this type of hacking.
I am checking Google Webmaster Tools. I went to the Search Queries section, where I found a lot of keywords along with their impressions, CTR, etc. I clicked on one of the query keywords, and it shows the keyword and its position in the search results, but when I go to google.com and type that keyword, my site doesn't show up at all...
How do I measure/find my site's impressions on google.com?
my site: http://www.trekkingandtoursnepal.com
keyword: trekking nepal
Originally posted on: http://geekswithblogs.net/Aligned/archive/2013/06/25/quick-hint-formatting-json-for-debugging.aspx
I needed a way to quickly format JSON that I copied from the Network view in Google Chrome Developer Tools. A co-worker pointed me to the JSMin plugin for Notepad++ (you can use Chocolatey to install Notepad++). Now all I have to do is copy the JSON into Notepad++, press Alt + Ctrl + M, and I can read it easily.
Hello everyone,
I'd like to share a new quick article, entitled "Introduction to JPA, applied to loading data from a MySQL database", available at the following address:
http://thierry-leriche-dessirier.dev...sql-jpa-intro/
This mini-article shows, by example, how to load data from a MySQL database using JPA (Java Persistence API) in a few minutes, limiting ourselves to the simple features.
Note: JPA (Java Persistence API) is a relatively complex technology. In this article we only cover the easy parts; this is therefore not a complete tutorial...
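Not from the article itself, but as a rough idea of what the JPA approach looks like in Java: the entity, table, and persistence-unit names below are invented, and a real project also needs a persistence.xml with the MySQL connection settings plus the MySQL JDBC driver on the classpath.

import java.util.List;
import javax.persistence.*;

// Hypothetical entity mapped to a "person" table in the MySQL database.
@Entity
@Table(name = "person")
class Person {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String name;

    public Long getId() { return id; }
    public String getName() { return name; }
}

public class LoadPersons {
    public static void main(String[] args) {
        // "demo-pu" is a hypothetical persistence unit declared in persistence.xml.
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("demo-pu");
        EntityManager em = emf.createEntityManager();
        try {
            List<Person> persons =
                em.createQuery("SELECT p FROM Person p", Person.class).getResultList();
            for (Person p : persons) {
                System.out.println(p.getId() + " " + p.getName());
            }
        } finally {
            em.close();
            emf.close();
        }
    }
}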
The question is about delivering working code faster without any regard for design, quality, maintainability, etc.
Here is the list of things that help me to write and read code faster:
Language: static typing, support for object-oriented and functional programming styles, embedded documentation, short compile-debug-fix cycle or REPL, automatic memory management
Platform: "batteries" included (text, regex, IO, threading, networking), thriving community, tons of open-source libs
Tools: IDE, visual debugger, code completion, code navigation, refactoring
I'm in charge of a relatively big corporate website (circa 95K pages) and need to perform a cookie audit. I can see cookies issued on a per-page basis with Chrome or Firefox console, but given the amount of pages I need a tool to automate the process.
I tried googling for a website cookie scanner, but my search was fruitless; I only found:
either online tools which only scan the home page
or paid services (ex1, ex2)
Does anyone know of a tool that can scan an entire website and generate a report showing which cookies are being used and which pages set them?
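In case it helps, here is a rough sketch of the crawl-and-report approach in Python (requests and BeautifulSoup are assumptions, as are the start URL and limits). It only sees cookies set via HTTP Set-Cookie headers, so cookies set by JavaScript tags would still need a headless browser:

# Rough sketch: crawl same-site links and report which pages set which cookies
# via HTTP headers. JavaScript-set cookies (e.g. analytics tags) won't show up.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin, urlparse

START_URL = "https://www.example.com/"   # hypothetical site
MAX_PAGES = 200                           # keep the crawl bounded

seen, queue = set(), [START_URL]
report = {}                               # url -> set of cookie names

while queue and len(seen) < MAX_PAGES:
    url = queue.pop(0)
    if url in seen:
        continue
    seen.add(url)

    response = requests.get(url, timeout=10)
    report[url] = {c.name for c in response.cookies}

    soup = BeautifulSoup(response.text, "html.parser")
    for link in soup.find_all("a", href=True):
        target = urljoin(url, link["href"]).split("#")[0]
        if urlparse(target).netloc == urlparse(START_URL).netloc:
            queue.append(target)

for url, cookies in report.items():
    if cookies:
        print(url, "->", ", ".join(sorted(cookies)))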
It's hard to ever find an objective review of a piece of marketing software when it offers an affiliate program, so I thought I might try on Quora. It just boggles my mind that it can cost only $97 flat, when other SEO or keyword research tools like Wordtracker cost almost the same PER MONTH, and don't seem to offer much, if anything, more... Can anyone explain this, and would anyone recommend Market Samurai WITHOUT posting a link to it in their review? :)
I have installed a fresh desktop with Ubuntu Quantal and the following package versions:
MySQL: 5.5.28-0ubuntu0.12.10.1
Apache: 2.2.22 (Ubuntu)
phpMyAdmin: 3.4.11.1deb1
I would like phpMyAdmin to display all the queries I run. How can I do that?
Thank you very much.
UPDATE
To be more specific: some queries do show up in the query box, but I would like phpMyAdmin to show ALL of them, including when I export a database (if possible). Thank you.
I previously was using Rhythmbox in 10.04 and recently installed it (version 2.90.1) in my system now running 11.10. I've discovered the following issues:
Sometimes if I start Rhythmbox from the command line e.g. rhythmbox [uri of radio station], the GUI does not appear although I get the audio stream and I am not able to access the GUI when I click on the icon in the Unity launcher. Previously in 10.04, I was able to access the GUI after starting from command line by clicking the icon in the notification tray but it no longer appears there.
Sometimes after running Rhythmbox from command line, when I click on the icon in the Unity launcher the GUI does not appear (even though I am clicking with the middle button on my mouse) and an icon-sized space appears under the Rhythmbox icon in the launcher. When I right click this space, I get a menu with a blank line followed by "Keep in launcher".
Although I can play the URIs linking to .m3u and .pls files for radio streams in the GUI, they do not work from the command line. Instead I have to download the .m3u and .pls files, take the URI inside those files, and use that as the argument when running from the command line.
Is there any way to fix these issues?
I am trying to get Google to index an AJAX site (davidelifestyle.com). It is crawlable with JavaScript turned off, and I have also recently implemented the _escaped_content_ snapshot mechanism, but all that gets indexed is the home page and PDF files that are not directly available from the home page. Also, when I use Fetch as Google in Webmaster Tools, it downloads the dynamic page but does not index it ("Submit to Index" just reloads the page).
Any ideas what might be wrong?