Search Results

Search found 40933 results on 1638 pages for 'database tools'.

Page 31 of 1638

  • What are some best practices and "rules of thumb" for creating database indexes?

    - by Ash
    I have an app which cycles through a huge number of records in a database table and performs a number of SQL and .NET operations on records within that database (currently I am using Castle.ActiveRecord on PostgreSQL). I added some basic btree indexes on a couple of the fields, and as you would expect, the performance of the SQL operations increased substantially. Wanting to make the most of dbms performance, I want to make some better educated choices about what I should index on all my projects. I understand that there is a detriment to performance when doing inserts (as the database needs to update the index as well as the data), but what suggestions and best practices should I consider when creating database indexes? How do I best select the fields/combination of fields for a set of database indexes (rules of thumb)? Also, how do I best select which index to use as a clustered index? And when it comes to the access method, under what conditions should I use a btree over a hash or a gist or a gin (what are they anyway?).
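
    For illustration, here is a minimal PostgreSQL sketch of the access methods mentioned above. The table and column names are invented; only the index syntax is the point:

      -- Hypothetical table, used only to show where each index type fits
      CREATE TABLE orders (
          id          serial PRIMARY KEY,
          customer_id int         NOT NULL,
          status      text        NOT NULL,
          tags        text[]      NOT NULL DEFAULT '{}',
          created_at  timestamptz NOT NULL DEFAULT now()
      );

      -- btree (the default): equality and range predicates, ORDER BY, leading-column lookups
      CREATE INDEX idx_orders_customer_created ON orders (customer_id, created_at);

      -- hash: equality-only lookups on a single column
      CREATE INDEX idx_orders_status ON orders USING hash (status);

      -- GIN: "inverted" index for values containing many elements (arrays, full-text tsvector)
      CREATE INDEX idx_orders_tags ON orders USING gin (tags);

      -- GiST: generalized search tree, typically for geometric, range, and similarity searches
      -- e.g. CREATE INDEX idx_places_location ON places USING gist (location);

    A common rule of thumb is to index the columns that appear in WHERE clauses and join conditions, putting the most selective column first in a composite index, and to measure with EXPLAIN ANALYZE before and after.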

    Read the article

  • Tools for analyzing performance of SQL Server/Express?

    - by Adam Crossland
    The application that I have customized and continue to support for my client is seeing dramatic performance problems in the field. Simple queries on rather small datasets take over a minute when I would expect them to complete with sub-second times. My current theory is that SQL Server Express 2005 is too limited for the rather non-trivial demands being made of it, but I am not sure how to go about gathering data that I can use to either prove my point or allow me to move on to finding another cause. Can anyone point me toward some tools that would allow me to analyze the load on this database? Information such as simultaneous connections, execution times of individual queries, memory usage, heck, just any profiling data at all would be a help. Many thanks.
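
    One free starting point, independent of any third-party tool, is the set of dynamic management views that ship with SQL Server 2005 and later, including Express. A sketch of a plan-cache query follows; treat the exact columns selected as an example rather than a prescription:

      -- Top 10 cached statements by total elapsed time (SQL Server 2005+, including Express)
      SELECT TOP (10)
          qs.execution_count,
          qs.total_elapsed_time / 1000 AS total_elapsed_ms,
          qs.total_logical_reads,
          SUBSTRING(st.text,
                    qs.statement_start_offset / 2 + 1,
                    (CASE WHEN qs.statement_end_offset = -1
                          THEN DATALENGTH(st.text)
                          ELSE qs.statement_end_offset END
                     - qs.statement_start_offset) / 2 + 1) AS statement_text
      FROM sys.dm_exec_query_stats AS qs
      CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
      ORDER BY qs.total_elapsed_time DESC;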

    Read the article

  • how to design this relation in a DB schema

    - by raticulin
    I have a table Car in my db; one of the columns is purchaseDate. I want to be able to tag every car with a number of Policies (limited to 10 policies). Each policy has a time to live (ttl, a duration of time, like '5 years', '10 months' etc), that is, for how long since the car's purchaseDate the policy can be applied. I need to perform the following actions:

      - when inserting a Car, it will be set with a number of Policies (at least one is set)
      - sometimes a Car will be updated to add/remove a Policy
      - searches must be done taking into account date/policies, for example: 'select all cars that are not covered by any policy as of today'

    My current design is (pol0..pol9 are the policies):

      CREATE TABLE Car (
        id int NOT NULL IDENTITY(1,1),
        purchaseDate datetime NOT NULL,
        // more stuff...
        pol0 smallint default NULL,
        pol1 smallint default NULL,
        pol2 smallint default NULL,
        pol3 smallint default NULL,
        pol4 smallint default NULL,
        pol5 smallint default NULL,
        pol6 smallint default NULL,
        pol7 smallint default NULL,
        pol8 smallint default NULL,
        pol9 smallint default NULL,
        PRIMARY KEY (id)
      )

      CREATE TABLE Policy (
        id smallint NOT NULL,
        name varchar(50) collate Latin1_General_BIN NOT NULL,
        ttl varchar(100) collate Latin1_General_BIN NOT NULL,
        PRIMARY KEY (id)
      )

    The problem I am facing is that the SQL to perform the query above is a nightmare to write: as I don't know in which column each policy can be, I have to check all columns for every policy, etc. So I am wondering whether it is worth changing this. My questions are: The smallint as Policy id was chosen instead of an 'int IDENTITY' in order to save some space, as there are going to be millions of Car records. It just adds complexity when creating a Policy, as we must handle the id etc. Was it worth doing this? I am thinking that maybe there is a much better design? Obviously we could move the policy/car relation to its own table CarPolicy (a sketch of that approach follows below); the benefits would be: no limit of 10 policies per car; adding/removing etc. much easier; and when only the default policy applies (when no others are applied, one called Default policy is applied), we could signal that by not having any entry in CarPolicy, whereas now this is done by inserting the Default policy id in one of the columns. The cons are that we would need to change the DB, ORM classes etc. What would you recommend? Maybe there is another smart way to implement this that we are not aware of, without using the CarPolicy table?
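
    For reference, a minimal sketch of the CarPolicy variant discussed above. It assumes the policy's ttl is normalized into a numeric column (ttlMonths below, which does not exist in the original schema) so that it can be used in date arithmetic; the other names follow the question:

      CREATE TABLE CarPolicy (
        carId    int      NOT NULL REFERENCES Car(id),
        policyId smallint NOT NULL REFERENCES Policy(id),
        PRIMARY KEY (carId, policyId)
      )

      -- 'all cars that are not covered by any policy as of today'
      SELECT c.id
      FROM Car c
      WHERE NOT EXISTS (
            SELECT 1
            FROM CarPolicy cp
            JOIN Policy p ON p.id = cp.policyId
            WHERE cp.carId = c.id
              AND DATEADD(month, p.ttlMonths, c.purchaseDate) >= GETDATE()
      )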

    Read the article

  • PostgreSQL: Why does this simple query not use the index?

    - by David
    I have a table t with a column c, which is an int and has a btree index on it. Why does the following query not utilize this index?

      explain select c from t group by c;

    The result I get is:

      HashAggregate  (cost=1005817.55..1005817.71 rows=16 width=4)
        ->  Seq Scan on t  (cost=0.00..946059.84 rows=23903084 width=4)

    My understanding of indexes is limited, but I thought such queries were the purpose of indexes.
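
    As a hedged aside: in PostgreSQL versions before index-only scans were introduced (9.2), every row still had to be fetched from the heap to check visibility, so sequentially scanning the table is usually cheaper than walking the index for a query that touches all ~24 million rows. One way to see what the planner thinks the index plan would cost is to disable sequential scans for the session and compare the estimates:

      -- Experiment only; do not leave this set in production sessions
      SET enable_seqscan = off;
      EXPLAIN SELECT c FROM t GROUP BY c;
      RESET enable_seqscan;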

    Read the article

  • jQuery Tools - Range Input with Mousewheel support

    - by pspahn
    I've got a nice looking horizontal scroller that uses the Range Input feature of jQuery Tools. Mouse wheel scrolling support would be awesome, and I'm sure it can work, just not sure how to get it set up. I've got the generic setup:

      var scroll = jQuery('#scroll');
      jQuery(':range').rangeinput({
        speed: 0,
        progress: true,
        onSlide: function(ev, step) {
          scroll.css({left: -step});
        },
        change: function(e, i) {
          scroll.animate({left: -i}, "fast");
        }
      });

    The doc page for mousewheel ( http://jquerytools.org/documentation/toolbox/mousewheel.html ) gives an example:

      $("#myelement").mousewheel(function(event, delta) {
      });

    But I'm not sure how this hooks in. Other tools in the package simply allow the option mousewheel: true. Unfortunately that doesn't work on range input. Any advice appreciated. Thank you.

    Read the article

  • postgresql duplicate table names best practice

    - by veilig
    My company has a handful of apps that we deploy in the websites we build. Recently a very old app needed to be included alongside a newer app, and there was a conflict with a duplicate table name that needed to be used by both apps. We are now in the process of updating an old app and there will be some DB updates. I'm curious what people consider best practice (or how you do it) to help ensure these name collisions don't happen. I've looked at schemas but am not sure if that's the right path we want to take. As the documentation prescribes, I don't want to "wire" a particular schema name into an application, and if I add schemas to the user's search path, how would it know which table I was referring to if two schemas have the same table name? Although maybe I'm reading too much into this. Any insights or words of wisdom would be greatly appreciated!
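
    For illustration, a minimal sketch of the schema-per-app approach in PostgreSQL (the schema, role, and table names below are invented). An unqualified table reference resolves to the first schema on the role's search_path that contains a table of that name, so neither application needs the schema name wired into its code:

      CREATE SCHEMA old_app;
      CREATE SCHEMA new_app;

      -- The same table name can now exist once per application
      CREATE TABLE old_app.settings (id serial PRIMARY KEY, value text);
      CREATE TABLE new_app.settings (id serial PRIMARY KEY, value text);

      -- Each app connects as its own role, which resolves unqualified names through its own schema first
      ALTER ROLE old_app_user SET search_path = old_app, public;
      ALTER ROLE new_app_user SET search_path = new_app, public;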

    Read the article

  • Using a user-defined type as a primary key

    - by Chris Kaminski
    Suppose I have a system where I have metadata such as:

      table:
      ======
      key
      name
      address
      ...

    Then suppose I have a user-defined type described as so:

      datasource
      datasource-key

    A) are there systems where it's possible to have keys based on user-defined types?
    B) if so, how do you decompose the keys into a form suitable for querying?
    C) is this a case where I'm just better off with a composite primary key?
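
    PostgreSQL, for one, does let you declare a column whose type is a user-defined composite type (CREATE TYPE ... AS), though index and key support for such columns is uneven across databases, so the portable answer to (C) is usually a plain composite primary key. A sketch with invented column sizes:

      CREATE TABLE metadata (
        datasource     varchar(100) NOT NULL,
        datasource_key varchar(100) NOT NULL,
        name           varchar(200),
        address        varchar(200),
        PRIMARY KEY (datasource, datasource_key)
      );

      -- Querying decomposes naturally into the two key columns
      SELECT name, address
      FROM metadata
      WHERE datasource = 'crm' AND datasource_key = '42';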

    Read the article

  • rake db:migrate and rake db:create both work on test database, not development database

    - by geography_guy
    I am new to Stack Overflow and Ruby on Rails. My problem is, when I run the command rake db:create or rake db:migrate, the test database is affected, but the development database is not. rails (3.2.2). My database.yml:

      # Warning: The database defined as "test" will be erased and
      # re-generated from your development database when you run "rake".
      # Do not set this db to the same as development or production.
      test: &test
        adapter: postgresql
        encoding: unicode
        database: ticketee_test
        pool: 5
        username: ticketee
        password: my_password_here

      development:
        adapter: postgresql
        encoding: unicode
        database: ticketee_development
        pool: 5
        username: ticketee
        password: my_password_here

      production:
        adapter: postgresql
        encoding: unicode
        database: ticketee_production
        pool: 5
        username: ticketee
        password: my_password_here

      cucumber:
        <<: *test

    Read the article

  • What db fits me?

    - by afvasd
    Dear everyone, I am currently using MySQL. I am finding that my schema is getting incredibly complicated, so I am looking for a new db that will suit my needs. Let's assume I am building a news aggregator (which collects news from multiple websites). I then run algorithms to determine if two news items from different sites are actually referring to the same topic, and I run this algorithm to cluster news together. The relationship is depicted below:

      cluster
        \--news1
            \--word1
            \--word2
        \--news2
            \--word3
        \--news3
            \--word1
            \--word3

    And then I will apply some magic and determine the importance of each word. Summing the importance of each word gives me the importance of a news article. Summing the importance of each news article gives me the importance of a cluster. Note that above cluster there are also subgroups (like split by region etc.) and categories (like sports, etc.) whose importance I have to determine per day. I have used views in the past to do so, but I realized that views are very slow, so I will normally do an insert into an actual table and index it for better performance. As you can see, this leads to multiple derived tables like (cluster, importance), (news, importance), (words, importance) etc., which can get pretty messy. Also, the "importance" metric will change. It has become increasingly difficult to alter tables and update data (which I do using TRUNCATE TABLE and then re-inserting from scratch). I am currently looking into something schemaless like MongoDB. I do not need distribution. I would very much like something that is reasonably fast (and can be indexed) and a lot more flexible than a traditional RDBMS. Also, I need something that has some kind of ORM, because I personally like ORMs a lot; I am currently using SQLAlchemy. Please help!

    Read the article

  • SQL SERVER – Fastest Way to Restore the Database

    - by pinaldave
    A few days ago, I received the following email: “Pinal, We are in an emergency situation. We have a large database of around 80+ GB and its backup is 50+ GB in size. We need to restore this database ASAP and use it; however, restoring the database takes forever. Do you think a compressed backup would solve our problem? Any other ideas you got?” First of all, the asker has already answered his own question. Yes, I have seen that if you are using a compressed backup, it takes less time to restore a database. I have previously blogged about the same subject. Here are the links to those blog posts:

      - SQL SERVER – Data and Page Compressions – Data Storage and IO Improvement
      - SQL SERVER – 2008 – Introduction to Row Compression
      - SQL SERVER – 2008 – Introduction to New Feature of Backup Compression

    However, if your database is so large that it still takes a few minutes to restore even when you use the features listed above, then it will really take some time to restore the database. If there is urgency and there is no time you can spare for restoring the database, then you can use the wonderful tool developed by Idera called virtual database. This tool restores a certain database in just a few seconds, so it will readily be available for usage. I have written in depth about my experience with this tool in the article SQL SERVER – Retrieve and Explore Database Backup without Restoring Database – Idera virtual database. Let me know your experience in this scenario. Have you ever needed your database backup restored very quickly? What did you do in that scenario? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, Readers Question, SQL, SQL Authority, SQL Backup and Restore, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
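
    For reference, the backup-compression route the asker is considering looks like this in T-SQL (SQL Server 2008+ on editions that support compression; the database name and path below are placeholders, not from the original email):

      BACKUP DATABASE BigDb
      TO DISK = N'D:\Backups\BigDb.bak'
      WITH COMPRESSION, INIT, STATS = 10;

      -- Restoring a compressed backup needs no extra options
      RESTORE DATABASE BigDb
      FROM DISK = N'D:\Backups\BigDb.bak'
      WITH REPLACE, STATS = 10;

    A smaller backup file mostly saves I/O, so the restore tends to speed up roughly in proportion to the compression ratio; it does not change the time spent in the recovery (redo/undo) phase.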

    Read the article

  • Sitemaps - do I need to submit each sitemap in sitemap_index.xml to Google Webmaster tools?

    - by iSumitG
    I am having a Wordpress blog on my CentOS server. There is no sitemap.xml in the root directory, but there is a sitemap_index.xml file in the root directory which contains the following code:

      <?xml-stylesheet type="text/xsl" href="http://mywebsite.com/wp-content/plugins/wordpress-seo/css/xml-sitemap-xsl.php"?>
      <sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
        <sitemap>
          <loc>http://mywebsite.com/post-sitemap.xml</loc>
          <lastmod>2012-12-18T19:47:47+00:00</lastmod>
        </sitemap>
        <sitemap>
          <loc>http://mywebsite.com/page-sitemap.xml</loc>
          <lastmod>2012-12-18T17:32:49+00:00</lastmod>
        </sitemap>
      </sitemapindex>

    My question: which sitemap should I submit to Google Webmaster Tools? Options are:

      1. Only sitemap_index.xml
      2. Only post-sitemap.xml and page-sitemap.xml
      3. All 3 (sitemap_index.xml, post-sitemap.xml and page-sitemap.xml)
      4. Any other, please let me know.

    Read the article

  • I'm trying to install VMWare tools on Ubuntu 12.04.2 LTS and I seem to have a problem with Kernel headers

    - by Pedro Irusta
    I have Ubuntu 12.04.2 LTS installed on a VMware machine on a Windows 7 host. I seem to have a problem with kernel headers. When trying to install them I did:

      sudo apt-get install gcc make build-essential linux-headers-$(uname -r)

      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      gcc is already the newest version.
      build-essential is already the newest version.
      linux-headers-3.5.0-28-generic is already the newest version.
      make is already the newest version.
      0 upgraded, 0 newly installed, 0 to remove and 100 not upgraded.

    However, when installing VMware tools I get the following error:

      make[1]: Entering directory `/usr/src/linux-headers-3.5.0-28-generic'
        CC [M]  /tmp/vmware-root/modules/vmhgfs-only/backdoor.o
        CC [M]  /tmp/vmware-root/modules/vmhgfs-only/backdoorGcc32.o
        CC [M]  /tmp/vmware-root/modules/vmhgfs-only/bdhandler.o
        CC [M]  /tmp/vmware-root/modules/vmhgfs-only/cpName.o
        CC [M]  /tmp/vmware-root/modules/vmhgfs-only/cpNameLinux.o
        CC [M]  /tmp/vmware-root/modules/vmhgfs-only/cpNameLite.o
        CC [M]  /tmp/vmware-root/modules/vmhgfs-only/dentry.o
        CC [M]  /tmp/vmware-root/modules/vmhgfs-only/dir.o
        CC [M]  /tmp/vmware-root/modules/vmhgfs-only/file.o
      /tmp/vmware-root/modules/vmhgfs-only/file.c:122:4: warning: initialization from incompatible pointer type [enabled by default]
      /tmp/vmware-root/modules/vmhgfs-only/file.c:122:4: warning: (near initialization for ‘HgfsFileFileOperations.fsync’) [enabled by default]
        CC [M]  /tmp/vmware-root/modules/vmhgfs-only/filesystem.o
      /tmp/vmware-root/modules/vmhgfs-only/filesystem.c:48:28: fatal error: linux/smp_lock.h: No such file or directory
      compilation terminated.
      make[2]: *** [/tmp/vmware-root/modules/vmhgfs-only/filesystem.o] Error 1
      make[1]: *** [_module_/tmp/vmware-root/modules/vmhgfs-only] Error 2
      make[1]: Leaving directory `/usr/src/linux-headers-3.5.0-28-generic'
      make: *** [vmhgfs.ko] Error 2
      make: Leaving directory `/tmp/vmware-root/modules/vmhgfs-only'

    Any help appreciated!

    Read the article

  • Need help in determining what, if any, tools can be used to create a free Flash game

    - by ReaperOscuro
    Yes, I proudly - and sadly - declare that I am a complete nincompoop when it comes to Flash, and I have been fishing around the big wide web for information. The reason for this is that I have been contracted to create a game (or games) for a website - the usual flash-based games caveat. I do not mean games built with those game-generator websites; I mean small yet professional games. But the caveat, as always, is that impossible dream: it needs to be done all for free. The budget... well, imagine it as not there. Annoyingly, I am a game designer, yes, but with a ridiculously tight deadline I haven't got much time to re-learn (ah, the heady days of programming at uni) everything by the end of March, so I'd like to ask some people who know their stuff rather than keep looking at a gazillion different things. This is my understanding: with the Flash SDK you can create a game, albeit you need to be pretty programming savvy. FlashDevelop helps there - yet I am not entirely sure how. Yet even FD says to use Flash for the animation/graphics. Yes, it's undeniably powerful, but as I said there is the unattainable demand of no money. The million dollar question: what, if any, tools can I use to create a free Flash game?

    Read the article

  • How Many Google +1's Does a Website need in order for Google WebMaster's Tools to Show Characteristics

    - by Asaph
    I have added the Google +1 Button to my website and discovered the new Social Activity section in Google WebMaster's Tools. Apparently, one of the interesting things you can learn about your audience is demographic data. But in GWT, the Social Activity Audience section for my site (currently 82 +1's), says the following: Your site doesn’t have enough +1's yet to show characteristics But I'm not sure how many +1's is enough. Google's official help page for the Audience section offers little insight: The Audience page displays information about people who have +1'd your pages, including the total number of unique users, their location, and their age and gender. All information is anonymized; Google doesn't share personal information about people who have +1’d your pages. To protect privacy, Google won't display age, gender, or location data unless a certain minimum number of people have +1'd your content. But what is that "certain minimum number"? I've tried Googling this but all I could find to date was this page which doesn't answer the question. So how many +1's does a site need before GWT will show me audience demographic characteristics?

    Read the article

  • EM12c Release 4: Database as a Service Enhancements

    - by Adeesh Fulay
    Oracle Enterprise Manager 12.1.0.4 (or simply put, EM12c R4) is the latest update to the product. As with previous versions, this release provides tons of enhancements and bug fixes, contributing to improved stability and quality. One of the areas that is most exciting and has seen tremendous growth in the last few years is that of Database as a Service. EM12c R4 provides a significant update to Database as a Service. The key themes are:

      - Comprehensive Database Service Catalog (includes single instance, RAC, and Data Guard)
      - Additional Storage Options for Snap Clone (includes support for the Database feature CloneDB)
      - Improved Rapid Start Kits
      - Extensible Metering and Chargeback
      - Miscellaneous Enhancements

    1. Comprehensive Database Service Catalog

    Before we get deep into implementation of a service catalog, let's first understand what it is and what benefits it provides. Per ITIL, a service catalog is an exhaustive list of IT services that an organization provides or offers to its employees or customers. Service catalogs have been widely popular in the space of cloud computing, primarily as the medium to provide standardized and pre-approved service definitions. There is already some good collateral out there that talks about Oracle database service catalogs. The two whitepapers I recommend reading are:

      - Service Catalogs: Defining Standardized Database Service
      - High Availability Best Practices for Database Consolidation: The Foundation for Database as a Service [Oracle MAA]

    EM12c has come with an out-of-the-box service catalog and self service portal since release 1. For customers, it provides the following benefits: present a collection of standardized database service definitions, define standardized pools of hardware and software for provisioning, role based access to cater to different classes of users, automated procedures to provision the predefined database definitions, setup of chargeback plans based on service tiers and database configuration sizes, etc.

    Starting with Release 4, the scope of services offered via the service catalog has been expanded to include databases with varying levels of availability - Single Instance (SI) or Real Application Clusters (RAC) databases with multiple Data Guard based standby databases. Some salient points of the Data Guard integration:

      - Standby pools can now be defined across different datacenters or within the same datacenter as the primary (this helps in modelling the concept of near and far DR sites)
      - The standby databases can be single instance, RAC, or RAC One Node databases
      - Multiple standby databases can be provisioned, where the maximum limit is determined by the version of database software
      - The standby databases can be in either mount or read only (requires Active Data Guard option) mode
      - All database versions 10g to 12c supported (as certified with EM 12c)
      - All 3 protection modes can be used - maximum availability, performance, security
      - Log apply can be set to sync or async along with the required apply lag

    The different service levels or service tiers are popularly represented using metals - Platinum, Gold, Silver, Bronze, and so on. The Oracle MAA whitepaper (referenced above) calls out the various service tiers as defined by Oracle's best practices, but customers can choose any logical combinations from the table below (these are the combinations supported in EM 12cR4):

      Primary   Standby [1 or more]
      SI        -
      SI        SI
      RAC       -
      RAC       SI
      RAC       RAC
      RON       -
      RON       RON

    where RON = RAC One Node is supported via custom post-scripts in the service template. A sample service catalog would look like the image below.
Here we have defined 4 service levels, which have been deployed across 2 data centers, and have 3 standardized sizes. Again, it is important to note that this is just an example to get the creative juices flowing. I imagine each customer would come up with their own catalog based on the application requirements, their RTO/RPO goals, and the product licenses they own. In the screenwatch titled 'Build Service Catalog using EM12c DBaaS', I walk through the complete steps required to setup this sample service catalog in EM12c. 2. Additional Storage Options for Snap Clone In my previous blog posts, i have described the snap clone feature in detail. Essentially, it provides a storage agnostic, self service, rapid, and space efficient approach to solving your data cloning problems. The net benefit is that you get incredible amounts of storage savings (on average 90%) all while cloning databases in a matter of minutes. Space and Time, two things enterprises would love to save on. This feature has been designed with the goal of providing data cloning capabilities while protecting your existing investments in server, storage, and software. With this in mind, we have pursued with the dual solution approach of Hardware and Software. In the hardware approach, we connect directly to your storage appliances and perform all low level actions required to rapidly clone your databases. While in the software approach, we use an intermediate software layer to talk to any storage vendor or any storage configuration to perform the same low level actions. Thus delivering the benefits of database thin cloning, without requiring you to drastically changing the infrastructure or IT's operating style. In release 4, we expand the scope of options supported by snap clone with the addition of database CloneDB. While CloneDB is not a new feature, it was first introduced in 11.2.0.2 patchset, it has over the years become more stable and mature. CloneDB leverages a combination of Direct NFS (or dNFS) feature of the database, RMAN image copies, sparse files, and copy-on-write technology to create thin clones of databases from existing backups in a matter of minutes. It essentially has all the traits that we want to present to our customers via the snap clone feature. For more information on cloneDB, i highly recommend reading the following sources: Blog by Tim Hall: Direct NFS (DNFS) CloneDB in Oracle Database 11g Release 2 Oracle OpenWorld Presentation by Cern: Efficient Database Cloning using Direct NFS and CloneDB The advantages of the new CloneDB integration with EM12c Snap Clone are: Space and time savings Ease of setup - no additional software is required other than the Oracle database binary Works on all platforms Reduce the dependence on storage administrators Cloning process fully orchestrated by EM12c, and delivered to developers/DBAs/QA Testers via the self service portal Uses dNFS to delivers better performance, availability, and scalability over kernel NFS Complete lifecycle of the clones managed by EM12c - performance, configuration, etc 3. Improved Rapid Start Kits DBaaS deployments tend to be complex and its setup requires a series of steps. These steps are typically performed across different users and different UIs. The Rapid Start Kit provides a single command solution to setup Database as a Service (DBaaS) and Pluggable Database as a Service (PDBaaS). 
One command creates all the Cloud artifacts like Roles, Administrators, Credentials, Database Profiles, PaaS Infrastructure Zone, Database Pools and Service Templates. Once the Rapid Start Kit has been successfully executed, requests can be made to provision databases and PDBs from the self service portal. Rapid start kit can create complex topologies involving multiple zones, pools and service templates. It also supports standby databases and use of RMAN image backups. The Rapid Start Kit in reality is a simple emcli script which takes a bunch of xml files as input and executes the complete automation in a matter of seconds. On a full rack Exadata, it took only 40 seconds to setup PDBaaS end-to-end. This kit works for both Oracle's engineered systems like Exadata, SuperCluster, etc and also on commodity hardware. One can draw parallel to the Exadata One Command script, which again takes a bunch of inputs from the administrators and then runs a simple script that configures everything from network to provisioning the DB software. Steps to use the kit: The kit can be found under the SSA plug-in directory on the OMS: EM_BASE/oracle/MW/plugins/oracle.sysman.ssa.oms.plugin_12.1.0.8.0/dbaas/setup It can be run from this default location or from any server which has emcli client installed For most scenarios, you would use the script dbaas/setup/database_cloud_setup.py For Exadata, special integration is provided to reduce the number of inputs even further. The script to use for this scenario would be dbaas/setup/exadata_cloud_setup.py The database_cloud_setup.py script takes two inputs: Cloud boundary xml: This file defines the cloud topology in terms of the zones and pools along with host names, oracle home locations or container database names that would be used as infrastructure for provisioning database services. This file is optional in case of Exadata, as the boundary is well know via the Exadata system target available in EM. Input xml: This file captures inputs for users, roles, profiles, service templates, etc. Essentially, all inputs required to define the DB services and other settings of the self service portal. Once all the xml files have been prepared, invoke the script as follows for PDBaaS: emcli @database_cloud_setup.py -pdbaas -cloud_boundary=/tmp/my_boundary.xml -cloud_input=/tmp/pdb_inputs.xml          The script will prompt for passwords a few times for key users like sysman, cloud admin, SSA admin, etc. Once complete, you can simply log into EM as the self service user and request for databases from the portal. More information available in the Rapid Start Kit chapter in Cloud Administration Guide.  4. Extensible Metering and Chargeback  Last but not the least, Metering and Chargeback in release 4 has been made extensible in all possible regards. The new extensibility features allow customer, partners, system integrators, etc to : Extend chargeback to any target type managed in EM Promote any metric in EM as a chargeback entity Extend list of charge items via metric or configuration extensions Model abstract entities like no. of backup requests, job executions, support requests, etc  A slew of emcli verbs have also been added that allows administrators to create, edit, delete, import/export charge plans, and assign cost centers all via the command line. More information available in the Chargeback API chapter in Cloud Administration Guide. 5. Miscellaneous Enhancements There are other miscellaneous, yet important, enhancements that are worth a mention. 
These mostly have been asked by customers like you. These are: Custom naming of DB Services Self service users can provide custom names for DB SID, DB service, schemas, and tablespaces Every custom name is validated for uniqueness in EM 'Create like' of Service Templates Now creating variants of a service template is only a click away. This would be vital when you publish service templates to represent different database sizes or service levels. Profile viewer View the details of a profile like datafile, control files, snapshot ids, export/import files, etc prior to its selection in the service template Cleanup automation - for failed and successful requests Single emcli command to cleanup all remnant artifacts of a failed request Cleanup can be performed on a per request bases or by the entire pool As an extension, you can also delete successful requests Improved delete user workflow Allows administrators to reassign cloud resources to another user or delete all of them Support for multiple tablespaces for schema as a service In addition to multiple schemas, user can also specify multiple tablespaces per request I hope this was a good introduction to the new Database as a Service enhancements in EM12c R4. I encourage you to explore many of these new and existing features and give us feedback. Good luck! References: Cloud Management Page on OTN Cloud Administration Guide [Documentation] -- Adeesh Fulay (@adeeshf)

    Read the article

  • How can I decrease relevancy of Creative Commons footer text? (In Google Webmaster Tools)

    - by anonymous coward
    I know that I may just have to link the image to make this happen, but I figured it was worth asking, just in case there's some other semantic markup or tips I could use... I have a site that uses the textual Creative Commons blurb in the footer. The markup is like so:

      <div class="footer">
        <!-- snip -->
        <!-- Creative Commons License -->
        <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/3.0/us/"><img alt="Creative Commons License" style="border-width:0" src="http://i.creativecommons.org/l/by-nc-sa/3.0/us/80x15.png" /></a><br />This work by <a xmlns:cc="http://creativecommons.org/ns#" href="http://www.xmemphisx.com/" property="cc:attributionName" rel="cc:attributionURL">xMEMPHISx.com</a> is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/3.0/us/">Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License</a>.
        <!-- /Creative Commons License -->
      </div>

    Within Google Webmaster Tools, the list of relevant keywords is heavily saturated with the text from that blurb. For instance, 50% of my top-ten most relevant keywords (including the site name) are:

      [site name], license, [keyword], commons, creative, [keyword], alike, [keyword], attribution, [keyword]

    I have not done any extensive testing to find out whether or not this list even matters, and so far this doesn't impact performance in any way. The site is well designed for humans, and it is as findable as it needs to be at the moment. But, out of mostly curiosity: do you have any tips for decreasing the relevancy of the text from the Creative Commons footer blurb?

    Read the article

  • How to Fix this specific Google "Fetch as Googlebot" error appearing on my Webmaster Tools?

    - by UXdesigner
    Good day, I'm currently trying to find out why I have lost all of my website's rank in Google. I don't even appear in Google results for the domain, but other sites do link to me and they appear in the Google results. I think it's all down to leaving my site alone for two months and finding out I had 20k comment-spam messages, which I completely deleted and fixed with filters and a new Disqus comment service. Thing is, I added my site to Google Webmaster Tools and I'm finding out several awful things. For example, when I click on Google Fetch as Googlebot, I receive the error message below in response to my request, and I don't even know what the real problem is or how to fix it. I simply don't get it. This is what appears:

      Date: Wednesday, July 20, 2011 9:43:35 AM PDT
      Googlebot Type: Web
      Download Time (in milliseconds): 55

      HTTP/1.1 403 Forbidden
      Date: Wed, 20 Jul 2011 16:43:36 GMT
      Server: Apache
      Vary: Accept-Encoding
      Content-Encoding: gzip
      Content-Length: 248
      Keep-Alive: timeout=2, max=100
      Connection: Keep-Alive
      Content-Type: text/html; charset=iso-8859-1

      403 Forbidden
      Forbidden
      You don't have permission to access / on this server.
      Additionally, a 403 Forbidden error was encountered while trying to use an ErrorDocument to handle the request.

    Do you guys know anything about this problem? I need to have Google crawl my site again. I used to have a really nice Google result in the past three years. Now, there's nothing. Thanks,

    Read the article

  • 250k 404 & 410 errors in Webmaster Tools. Bad backlinks?

    - by Natália
    Our Webmaster Tools account is showing 250,000 errors related to weird links from other sites. These URLs are coming mostly from non-existent sites or are being generated directly by our website. Here are some examples of these URLs:

      oursite.com/&q=videos+caseros+sexo+pornos+gratis&sa=X&ei=R638T8eTO8WphAfF2vG8Bg&ved=0CCAQFjAC%2F%2Fpage%2F2%2Fpage%2F3%2Fpage%2F4%2Fpage%2F3%2Fpage%2F4%2Fpage%2F3%2Fpage%2F4%2Fpage%2F5%2Fpage%2F4/page/3

    Our site is a popular Spanish adult site, yet we don't have keywords which are being mentioned in this URL. Apparently this link comes from our site. Some more examples:

      oursite.com/&q=losmejoresvideosporno&sa=X&ei=U__8T-BnqK7RBdjmhYsH&ved=0CBUQFjAA%2F%2Fpage%2F2%2Fpage%2F3%2Fpage%2F2%2Fpage%2F3%2Fpage%2F2%2Fpage%2F3%2Fpage%2F4%2Fpage%2F3%2Fpage%2F2%2Fpage%2F3/page/4

    Once again: not our queries, not our URLs.

      oursite/tag/tetonas

    We think that it might be another site, which has a policy of extremely bad SEO based on other sites' branding and keyword usage:

      thirdsite/buscador/tetonas-oursite

    The question is: if other sites are generating these URLs, how can we prevent this? Why is the tag being generated if no link was added to the other site? What should we do with these errors? 301? 410 Gone? I have read all similar Q&A here but none of them seems to solve our problem. It is not likely to be a bad ad (I inspected them all). Maybe some old content which Google decided to recrawl suddenly? Maybe a third party's bad SEO policy? Maybe all of them? Any help will be highly appreciated.

    Read the article

  • Are bad backlinks causing thousands of 404 and 410 errors in webmaster tools?

    - by Natália
    Our Webmaster Tools account is showing 250,000 errors related to weird links from other sites. These URLs are coming mostly from non-existent sites or are being generated directly by our website. Here are some examples of these URLs:

      oursite.com/&q=videos+caseros+sexo+pornos+gratis&sa=X&ei=R638T8eTO8WphAfF2vG8Bg&ved=0CCAQFjAC%2F%2Fpage%2F2%2Fpage%2F3%2Fpage%2F4%2Fpage%2F3%2Fpage%2F4%2Fpage%2F3%2Fpage%2F4%2Fpage%2F5%2Fpage%2F4/page/3

    Our site is a popular Spanish adult site, yet we don't have keywords which are being mentioned in this URL. Apparently this link comes from our site. Some more examples:

      oursite.com/&q=losmejoresvideosporno&sa=X&ei=U__8T-BnqK7RBdjmhYsH&ved=0CBUQFjAA%2F%2Fpage%2F2%2Fpage%2F3%2Fpage%2F2%2Fpage%2F3%2Fpage%2F2%2Fpage%2F3%2Fpage%2F4%2Fpage%2F3%2Fpage%2F2%2Fpage%2F3/page/4

    Once again: not our queries, not our URLs.

      oursite/tag/tetonas

    We think that it might be another site, which has a policy of extremely bad SEO based on other sites' branding and keyword usage:

      thirdsite/buscador/tetonas-oursite

    The question is: if other sites are generating these URLs, how can we prevent this? Why is the tag being generated if no link was added to the other site? What should we do with these errors? 301? 410 Gone? I have read all similar Q&A here but none of them seems to solve our problem. It is not likely to be a bad ad (I inspected them all). Maybe some old content which Google decided to recrawl suddenly? Maybe a third party's bad SEO policy? Maybe all of them?

    Read the article

  • Automated backups for Windows Azure SQL Database

    - by Greg Low
    One of the questions that I've often been asked is about how you can backup databases in Windows Azure SQL Database. What we have had access to was the ability to export a database to a BACPAC. A BACPAC is basically just a zip file that contains a bunch of metadata along with a set of bcp files for each of the tables in the database. Each table in the database is exported one after the other, so this does not produce a transactionally-consistent backup at a specific point in time. To get a transactionally-consistent copy, you need a database that isn't in use.The easiest way to get a database that isn't in use is to use CREATE DATABASE AS COPY OF. This creates a new database as a transactionally-consistent copy of the database that you are copying. You can then use the export options to get a consistent BACPAC created.Previously, I've had to automate this process by myself. Given there was also no SQL Agent in Azure, I used a job in my on-premises SQL Server to do this, using a linked server configuration.Now there's a much simpler way. Windows Azure SQL Database now supports an automated export function. On the Configuration tab for the database, you need to enable the Automated Export function. You can configure how often the operation is performed for you, and which storage account will be used for the backups.It's important to consider the cost impacts of this as well. You are charged for how ever many databases are on your server on a given day. So if you enable a daily backup, you will double your database costs. Do not schedule the backups just before midnight UTC, as that could cause you to have three databases each day instead of one.This is a much needed addition to the capabilities. Scott Guthrie also posted about some other notable changes today, including a preview of a new premium offering for SQL Database. In addition to the Web and Business editions, there will now be a Premium edition that has reserved (rather than shared) resources. You can read about it all in Scott's post here: http://weblogs.asp.net/scottgu/archive/2013/07/23/windows-azure-july-updates-sql-database-traffic-manager-autoscale-virtual-machines.aspx
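
    For context, the copy-then-export step described above is plain T-SQL against the Azure SQL Database server; the database names below are placeholders:

      -- Transactionally-consistent copy of the source database
      CREATE DATABASE MyDb_copy AS COPY OF MyDb;

      -- The copy runs asynchronously; progress can be checked from the master database
      SELECT name, state_desc FROM sys.databases WHERE name = 'MyDb_copy';
      SELECT * FROM sys.dm_database_copies;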

    Read the article

  • When is it ever ok to write your own development tools? (editor into IDE)

    - by mario
    So I'm foremost using a text editor for coding. It's a very bare-bones editor; it provides mostly just syntax highlighting. But on rare occasions I also need to debug something, and that's when I have to resort to an IDE (mostly NetBeans, but I got fiddly Eclipse/Aptana working as a second fallback). For general use, however, IDEs feel unworkable to me. It's a visual thing, being used to console UIs etc. And switching back and forth between a text editor and an IDE is slightly cumbersome too. That's why I'm considering extending the editor, not really into a full-fledged IDE, but at the very least integrating a debug feature. Since I'm working in PHP, it seems not that much effort. The DBGp protocol allows you to externalize a debug handler from the editor, so it's just minor integration work and figuring out how to shoehorn a breakpoint feature into the editor (joe, btw). And while I've also got time to do that, I'm wondering if this is really worthwhile. In this case it's not a needed development tool. It's just for convenience, and the cause for doing it is basically just not liking the existing solution. While over time I might extend and adapt this debugger thing, it initially will be as circumstantial as Eclipse. It inevitably starts out as a poor development tool. Furthermore, there is likely not much reuse. (Okay, this is not an important point. Most such software exists sans much of a use case. And also obviously, similar extensions already exist for emacs and vim, so it cannot be completely pointless.) But what's a general guideline on attempting to concoct custom development tools, particularly if they are not really needed but satisfy personal preferences? (Usability enhancement not certain.)

    Read the article

  • Java Magazine: Developer Tools and More

    - by Tori Wieldt
    The May/June issue of Java Magazine explores the tools and techniques that can help you bring your ideas to fruition and make you more productive. In “Seven Open Source Tools for Java Deployment,” Bruno Souza and Edson Yanaga present a set of tools that you can use now to drastically improve the deployment process on projects big or small—enabling you and your team to focus on building better and more-innovative software in a less stressful environment. We explore the future of application development tools at Oracle in our interview with Oracle’s Chris Tonas, who discusses plans for NetBeans IDE 9, Oracle’s support for Eclipse, and key trends in the software development space. For more on NetBeans IDE, don’t miss “Quick and Easy Conversion to Java SE 8 with NetBeans IDE 8” and “Build with NetBeans IDE, Deploy to Oracle Java Cloud Service.” We also give you insight into Scrum, an iterative and incremental agile process, with a tour of a development team’s Scrum sprint. Find out if Scrum will work for your team. Other article topics include mastering binaries in Maven-based projects, creating sophisticated applications with HTML5 and JSF, and learning to program with BlueJ. At the end of the day, tools don’t make great code—you do. What tools are vital to your development process? How are you innovating today? Let us know. Send a tweet to @oraclejavamag. The next big thing is always just around the corner—maybe it’s even an idea that’s percolating in *your* brain. Get started today with this issue of Java Magazine. Java Magazine is a FREE, bi-monthly, online publication. It includes technical articles on the Java language and platform; Java innovations and innovators; JUG and JCP news; Java events; links to online Java communities; and videos and multimedia demos. Subscriptions are free, registration required.

    Read the article

  • Why won't Webmaster Tools let me set a preferred domain? (says to verify, but it should already be)

    - by Su'
    I've got a domain that I apparently forgot to set a preferred domain for, so I just tried to do it. Webmaster Tools instead popped up a little box: Part of the process of setting a preferred domain is to verify that you own http://www.example.com/. Please verify http://www.example.com/. I'm running into some problems following these instructions: As far as I can tell I already did verify sometime in the past. There's a TXT DNS record with the gibberish Google tells you to set for it that I couldn't have come up with myself. …and nothing is telling me this information is bad. So let's assume the site is somehow not actually verified. All the various methods' instructions start with "click the Manage Site button next to the site you want, and then click Verify this site." That button doesn't even exist on my screens. (It presumably goes away when you successfully verify?) Those instructions were all updated pretty recently, and the DNS one in particular just a couple weeks ago so it seems a bit unlikely they're inaccurate. I am not using Apps, and won't be, so can't try out the verification through there. Note that I also have another domain that I have not done any verification for which is showing the same behavior(no such button, being told to verify when it seems impossible etc.) so something appears to just be broken. I already have a no-www process in place server-side, so we can skip that. I'm just trying to get the box checked off in GWT. If I don't get any resolution, I'll eventually scrap the TXT record and see if the site gets un-verified(or whatever since it thinks it isn't), and see if I can just restart the process. It's not urgent so I'm just trying to figure out if I've gone blind to something or what. Did the button move?

    Read the article

  • 10 Essential Tools for building ASP.NET Websites

    - by Stephen Walther
    I recently put together a simple public website created with ASP.NET for my company at Superexpert.com. I was surprised by the number of free tools that I ended up using to put together the website. Therefore, I thought it would be interesting to create a list of essential tools for building ASP.NET websites. These tools work equally well with both ASP.NET Web Forms and ASP.NET MVC. Performance Tools After reading Steve Souders two (very excellent) books on front-end website performance High Performance Web Sites and Even Faster Web Sites, I have been super sensitive to front-end website performance. According to Souders’ Performance Golden Rule: “Optimize front-end performance first, that's where 80% or more of the end-user response time is spent” You can use the tools below to reduce the size of the images, JavaScript files, and CSS files used by an ASP.NET application. 1. Sprite and Image Optimization Framework CSS sprites were first described in an article written for A List Apart entitled CSS sprites: Image Slicing’s Kiss of Death. When you use sprites, you combine multiple images used by a website into a single image. Next, you use CSS trickery to display particular sub-images from the combined image in a webpage. The primary advantage of sprites is that they reduce the number of requests required to display a webpage. Requesting a single large image is faster than requesting multiple small images. In general, the more resources – images, JavaScript files, CSS files – that must be moved across the wire, the slower your website. However, most people avoid using sprites because they require a lot of work. You need to combine all of the images and write just the right CSS rules to display the sub-images. The Microsoft Sprite and Image Optimization Framework enables you to avoid all of this work. The framework combines the images for you automatically. Furthermore, the framework includes an ASP.NET Web Forms control and an ASP.NET MVC helper that makes it easy to display the sub-images. You can download the Sprite and Image Optimization Framework from CodePlex at http://aspnet.codeplex.com/releases/view/50869. The Sprite and Image Optimization Framework was written by Morgan McClean who worked in the office next to mine at Microsoft. Morgan was a scary smart Intern from Canada and we discussed the Framework while he was building it (I was really excited to learn that he was working on it). Morgan added some great advanced features to this framework. For example, the Sprite and Image Optimization Framework supports something called image inlining. When you use image inlining, the actual image is stored in the CSS file. Here’s an example of what image inlining looks like: .Home_StephenWalther_small-jpg { width:75px; height:100px; background: url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAEsAAABkCAIAAABB1lpeAAAAB GdBTUEAALGOfPtRkwAAACBjSFJNAACHDwAAjA8AAP1SAACBQAAAfXkAAOmLAAA85QAAGcxzPIV3AAAKL s+zNfREAAAAASUVORK5CYII=) no-repeat 0% 0%; } The actual image (in this case a picture of me that is displayed on the home page of the Superexpert.com website) is stored in the CSS file. If you visit the Superexpert.com website then very few separate images are downloaded. For example, all of the images with a red border in the screenshot below take advantage of CSS sprites: Unfortunately, there are some significant Gotchas that you need to be aware of when using the Sprite and Image Optimization Framework. There are workarounds for these Gotchas. 
I plan to write about these Gotchas and workarounds in a future blog entry. 2. Microsoft Ajax Minifier Whenever possible you should combine, minify, compress, and cache with a far future header all of your JavaScript and CSS files. The Microsoft Ajax Minifier makes it easy to minify JavaScript and CSS files. Don’t confuse minification and compression. You need to do both. According to Souders, you can reduce the size of a JavaScript file by an additional 20% (on average) by minifying a JavaScript file after you compress the file. When you minify a JavaScript or CSS file, you use various tricks to reduce the size of the file before you compress the file. For example, you can minify a JavaScript file by replacing long JavaScript variables names with short variables names and removing unnecessary white space and comments. You can minify a CSS file by doing such things as replacing long color names such as #ffffff with shorter equivalents such as #fff. The Microsoft Ajax Minifier was created by Microsoft employee Ron Logan. Internally, this tool was being used by several large Microsoft websites. We also used the tool heavily on the ASP.NET team. I convinced Ron to publish the tool on CodePlex so that everyone in the world could take advantage of it. You can download the tool from the ASP.NET Ajax website and read documentation for the tool here. I created the installer for the Microsoft Ajax Minifier. When creating the installer, I also created a Visual Studio build task to make it easy to minify all of your JavaScript and CSS files whenever you do a build within Visual Studio automatically. Read the Ajax Minifier Quick Start to learn how to configure the build task. 3. ySlow The ySlow tool is a free add-on for Firefox created by Yahoo that enables you to test the front-end of your website. For example, here are the current test results for the Superexpert.com website: The Superexpert.com website has an overall score of B (not perfect but not bad). The ySlow tool is not perfect. For example, the Superexpert.com website received a failing grade of F for not using a Content Delivery Network even though the website using the Microsoft Ajax Content Delivery Network for JavaScript files such as jQuery. Uptime After publishing a website live to the world, you want to ensure that the website does not encounter any issues and that it stays live. I use the following tools to monitor the Superexpert.com website now that it is live. 4. ELMAH ELMAH stands for Error Logging Modules and Handlers for ASP.NET. ELMAH enables you to record any errors that happen at your website so you can review them in the future. You can download ELMAH for free from the ELMAH project website. ELMAH works great with both ASP.NET Web Forms and ASP.NET MVC. You can configure ELMAH to store errors in a number of different stores including XML files, the Event Log, an Access database, a SQL database, an Oracle database, or in computer RAM. You also can configure ELMAH to email error messages to you when they happen. By default, you can access ELMAH by requesting the elmah.axd page from a website with ELMAH installed. Here’s what the elmah page looks like from the Superexpert.com website (this page is password-protected because secret information can be revealed in an error message): If you click on a particular error message, you can view the original Yellow Screen ASP.NET error message (even when the error message was never displayed to the actual user). 
I installed ELMAH by taking advantage of the new package manager for ASP.NET named NuGet (originally named NuPack). You can read the details about NuGet in the following blog entry by Scott Guthrie. You can download NuGet from CodePlex. 5. Pingdom I use Pingdom to verify that the Superexpert.com website is always up. You can sign up for Pingdom by visiting Pingdom.com. You can use Pingdom to monitor a single website for free. At the Pingdom website, you configure the frequency that your website gets pinged. I verify that the Superexpert.com website is up every 5 minutes. I have the Pingdom service verify that it can retrieve the string “Contact Us” from the website homepage. If your website goes down, you can configure Pingdom so that it sends an email, Twitter, SMS, or iPhone alert. I use the Pingdom iPhone app which looks like this: 6. Host Tracker If your website does go down then you need some way of determining whether it is a problem with your local network or if your website is down for everyone. I use a website named Host-Tracker.com to check how badly a website is down. Here’s what the Host-Tracker website displays for the Superexpert.com website when the website can be successfully pinged from everywhere in the world: Notice that Host-Tracker pinged the Superexpert.com website from 68 locations including Roubaix, France and Scranton, PA. Debugging I mean debugging in the broadest possible sense. I use the following tools when building a website to verify that I have not made a mistake. 7. HTML Spell Checker Why doesn’t Visual Studio have a built-in spell checker? Don’t know – I’ve always found this mysterious. Fortunately, however, a former member of the ASP.NET team wrote a free spell checker that you can use with your ASP.NET pages. I find a spell checker indispensible. It is easy to delude yourself that you are capable of perfect spelling. I’m always super embarrassed when I actually run the spell checking tool and discover all of my spelling mistakes. The fastest way to add the HTML Spell Checker extension to Visual Studio is to select the menu option Tools, Extension Manager within Visual Studio. Click on Online Gallery and search for HTML Spell Checker: 8. IIS SEO Toolkit If people cannot find your website through Google then you should not even bother to create it. Microsoft has a great extension for IIS named the IIS Search Engine Optimization Toolkit that you can use to identify issue with your website that would hurt its page rank. You also can use this tool to quickly create a sitemap for your website that you can submit to Google or Bing. You can even generate the sitemap for an ASP.NET MVC website. Here’s what the report overview for the Superexpert.com website looks like: Notice that the Sueprexpert.com website had plenty of violations. For example, there are 65 cases in which a page has a broken hyperlink. You can drill into these violations to identity the exact page and location where these violations occur. 9. LinqPad If your ASP.NET website accesses a database then you should be using LINQ to Entities with the Entity Framework. Using LINQ involves some magic. LINQ queries written in C# get converted into SQL queries for you. If you are not careful about how you write your LINQ queries, you could unintentionally build a really badly performing website. LinqPad is a free tool that enables you to experiment with your LINQ queries. It even works with Microsoft SQL CE 4 and Azure. You can use LinqPad to execute a LINQ to Entities query and see the results. 
You also can use it to see the resulting SQL that gets executed against the database: 10. .NET Reflector I use .NET Reflector daily. The .NET Reflector tool enables you to take any assembly and disassemble the assembly into C# or VB.NET code. You can use .NET Reflector to see the “Source Code” of an assembly even when you do not have the actual source code. You can download a free version of .NET Reflector from the Redgate website. I use .NET Reflector primarily to help me understand what code is doing internally. For example, I used .NET Reflector with the Sprite and Image Optimization Framework to better understand how the MVC Image helper works. Here’s part of the disassembled code from the Image helper class: Summary In this blog entry, I’ve discussed several of the tools that I used to create the Superexpert.com website. These are tools that I use to improve the performance, improve the SEO, verify the uptime, or debug the Superexpert.com website. All of the tools discussed in this blog entry are free. Furthermore, all of these tools work with both ASP.NET Web Forms and ASP.NET MVC. Let me know if there are any tools that you use daily when building ASP.NET websites.

    Read the article
