Search Results

Search found 30217 results on 1209 pages for 'website performance'.

Page 76 of 1209

  • SQL SERVER – Server Side Paging in SQL Server 2011 – Part2

    - by pinaldave
    The best part of having a blog is that the SQL community helps keep it running with new ideas. Earlier I wrote SQL SERVER – Server Side Paging in SQL Server 2011 – A Better Alternative, a very popular article on the subject. There I had used variables for the “number of rows” and the “number of pages”. A blog reader sent me an email saying that in their organization these values are stored in a table, and asked whether the new syntax can read them from a table. Absolutely YES! Here is the quick script:

        USE AdventureWorks2008R2
        GO
        CREATE TABLE PagingSetting (RowsPerPage INT, PageNumber INT)
        INSERT INTO PagingSetting (RowsPerPage, PageNumber)
        VALUES(10,5)
        GO
        SELECT *
        FROM Sales.SalesOrderDetail
        ORDER BY SalesOrderDetailID
        OFFSET (SELECT RowsPerPage*PageNumber FROM PagingSetting) ROWS
        FETCH NEXT (SELECT RowsPerPage FROM PagingSetting) ROWS ONLY
        GO

    This is really an easy trick. I also wrote a blog post comparing the performance of the two approaches: SQL SERVER – Server Side Paging in SQL Server 2011 Performance Comparison. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology Tagged: SQL Paging
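    A minimal variant (a sketch of my own, not from the original post): if PageNumber is treated as 1-based, computing the offset as (PageNumber - 1) * RowsPerPage keeps page 1 starting at the first row. It reuses the same PagingSetting table from the script above:

        SELECT *
        FROM Sales.SalesOrderDetail
        ORDER BY SalesOrderDetailID
        -- skip (PageNumber - 1) * RowsPerPage rows so that PageNumber = 1 returns the first page
        OFFSET (SELECT (PageNumber - 1) * RowsPerPage FROM PagingSetting) ROWS
        FETCH NEXT (SELECT RowsPerPage FROM PagingSetting) ROWS ONLY
        GO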

    Read the article

  • SQLAuthority News – Public Training Classes In Hyderabad 12-14 May – SQL and 10-11 May SharePoint

    - by pinaldave
    There were lots of requests, sent through the email address specified in the article SQLAuthority News – Public Training Classes In Hyderabad 12-14 May – Microsoft SQL Server 2005/2008 Query Optimization & Performance Tuning, for more details about these classes. Here is the complete brochure of the course. Two different courses are offered by Solid Quality Mentors:
    1) Microsoft SQL Server 2005/2008 Query Optimization & Performance Tuning – Pinal Dave. Date: May 12-14, 2010. Price: Rs. 14,000/person for 3 days. Discount Code: ‘SQLAuthority.com’. Effective Price: Rs. 11,000/person for 3 days.
    2) SharePoint 2010 – Joy Rathnayake. Date: May 10-11, 2010. Price: Rs. 11,000/person for 3 days. Discount Code: ‘SQLAuthority.com’. Effective Price: Rs. 8,000/person for 3 days.
    Download the complete PDF brochure. Additionally, there is a special SolidQ India Insider program; I will provide the details for it very soon. Please do send me an email if you need any additional details. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, SQLAuthority News, T SQL, Technology

    Read the article

  • Oracle Linux Delivers Top CPU Benchmark Results on Sun Blades

    - by sergio.leunissen
    From the Performance and Best Practices blog: fresh SPEC CPU2006 results for Sun Blade X6275 M2 Server Modules running Oracle Linux 5.5. The highlights: The dual-node Sun Blade X6275 M2 server module, equipped with two Intel Xeon X5670 2.93 GHz processors per node and running the Oracle Enterprise Linux 5.5 operating system, delivered the best SPECint_rate2006 and SPECfp_rate2006 benchmark results among all systems based on the Intel Xeon processor 5000 sequence. With a SPECint_rate2006 benchmark result of 679, the Sun Blade X6275 M2 server module, with two compute nodes per blade, delivers maximum performance for space-constrained environments. Comparing Oracle's dual-node blade to HP's dual-node blade server on the basis of their single-node performance, the Sun Blade X6275 M2 server module's SPECfp_rate2006 score of 241 outperforms the best published HP ProLiant BL2X220c G5 server score by 3.2x. A single node of a Sun Blade X6275 M2 server module using 2.93 GHz Intel Xeon X5670 processors delivered a 37% improvement in SPECint_rate2006 benchmark results and a 22% improvement in SPECfp_rate2006 benchmark results compared to the previous-generation Sun Blade X6275 server module. Both nodes of a Sun Blade X6275 M2 server module using 2.93 GHz Intel Xeon X5670 processors delivered a 59% improvement on the SPECint_rate2006 benchmark and a 40% improvement on the SPECfp_rate2006 benchmark compared to the previous-generation Sun Blade X6275 server module.

    Read the article

  • SQL SERVER – GUID vs INT – Your Opinion

    - by pinaldave
    I think the title makes it clear what I am going to write about in this post. This is an age-old problem, and I want to compile a list of the advantages and disadvantages of using GUID and INT as a Primary Key or Clustered Index, or both (the usual case). Let me start the list by suggesting one advantage and one disadvantage for each. INT Advantage: Numeric values (and specifically integers) are better for performance when used in joins, indexes and conditions, and numeric values are easier for application users to understand if they are displayed. Disadvantage: If your table is very large, it is quite possible you will run out of values; beyond the maximum numeric value there is no additional identity value to use. GUID Advantage: Unique across servers. Disadvantage: GUID values are not as optimal as integer values for performance when used in joins, indexes and conditions, and more storage space is required than for INT. Please note that I am looking to create a list of generic comparisons; there can be special cases where the stated information does not hold, so feel free to comment on those. Please leave your opinion and advice in the comment section. I will compile a final list and update this blog after a week, and by listing your name in the post I will also give due credit. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Constraint and Keys, SQL Data Storage, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
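    A hypothetical side-by-side sketch (my own example, not part of the post) of the two key choices being compared; the table and column names are illustrative only:

        -- INT key: 4 bytes, sequential, but capped at 2,147,483,647 values
        CREATE TABLE OrdersInt
        (
            OrderID INT IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED,
            OrderDate DATETIME NOT NULL
        );

        -- GUID key: 16 bytes, unique across servers; NEWSEQUENTIALID() avoids the random
        -- insert pattern (and page splits) that NEWID() would cause on a clustered index
        CREATE TABLE OrdersGuid
        (
            OrderID UNIQUEIDENTIFIER NOT NULL DEFAULT NEWSEQUENTIALID() PRIMARY KEY CLUSTERED,
            OrderDate DATETIME NOT NULL
        );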

    Read the article

  • SQL SERVER – Cleaning Up SQL Server Indexes – Defragmentation, Fillfactor – Video

    - by pinaldave
    Storing data non-contiguously on disk is known as fragmentation. Before learning how to eliminate fragmentation, you should have a clear understanding of the types of fragmentation. When records are stored non-contiguously inside a page, it is called internal fragmentation; when the physical storage of pages and extents on disk is not contiguous, it is called external fragmentation. We can measure both types of fragmentation using the DMV sys.dm_db_index_physical_stats. Here is the generic advice for reducing fragmentation. If avg_fragmentation_in_percent is > 5% and < 30%, use ALTER INDEX REORGANIZE: this statement is the replacement for DBCC INDEXDEFRAG and reorders the leaf-level pages of the index into logical order. As this is an online operation, the index is available while the statement is running. If avg_fragmentation_in_percent is > 30%, use ALTER INDEX REBUILD: this is the replacement for DBCC DBREINDEX and rebuilds the index online or offline. In that case, we can also use the drop and re-create index method. (Ref: MSDN) Here is a quick video which covers many of the above mentioned topics. While Vinod and I were planning the Indexing course, we had plenty of fun and learning. We often recorded a few of our statements and just left them aside; afterwards we thought they were really funny. Here is a funny video shot by Vinod and myself on the same subject, and here is the link to the SQL Server Performance: Indexing Basics course. Here is additional reading material on the same subject: SQL SERVER – Fragmentation – Detect Fragmentation and Eliminate Fragmentation; SQL SERVER – 2005 – Display Fragmentation Information of Data and Indexes of Database Table; SQL SERVER – De-fragmentation of Database at Operating System to Improve Performance. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology, Video
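    A minimal sketch (my own, following the thresholds above, not from the post); the table and index names are just examples from AdventureWorks:

        -- Find fragmented indexes in the current database
        SELECT OBJECT_NAME(ips.object_id) AS TableName,
               i.name AS IndexName,
               ips.avg_fragmentation_in_percent
        FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
        JOIN sys.indexes AS i
            ON i.object_id = ips.object_id AND i.index_id = ips.index_id
        WHERE ips.avg_fragmentation_in_percent > 5
        ORDER BY ips.avg_fragmentation_in_percent DESC;

        -- Between 5% and 30%: reorganize (always online, works on the leaf level only)
        ALTER INDEX IX_SalesOrderDetail_ProductID ON Sales.SalesOrderDetail REORGANIZE;

        -- Above 30%: rebuild (online or offline, depending on edition and options)
        ALTER INDEX IX_SalesOrderDetail_ProductID ON Sales.SalesOrderDetail REBUILD;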

    Read the article

  • The enterprise vendor con - connecting SSD's using SATA 2 (3Gbits) thus limiting their performance

    - by tonyrogerson
    When comparing SSD against hard drive performance it really makes me cross when folk think comparing an array of SSDs running on 3Gbits/sec to hard drives running on 6Gbits/sec is somehow valid. In a paper from DELL (http://www.dell.com/downloads/global/products/pvaul/en/PowerEdge-PowerVaultH800-CacheCade-final.pdf) on increasing database performance using the DELL PERC H800 with Solid State Drives, they compare four SSD drives connected at 3Gbits/sec against ten 10Krpm drives connected at 6Gbits/sec [Tony slaps forehead while shouting DOH!]. It is true that in the case of hard drives it probably doesn’t make much difference whether it is 3Gbit or 6Gbit, because SAS and SATA are both end-to-end protocols rather than a shared bus architecture like SCSI, so the hard drive doesn’t share bandwidth and probably can’t get near the 600MiBytes/second throughput that 6Gbit gives unless you are doing contiguous reads. In my own tests on a single 15Krpm SAS disk using IOMeter (8 worker threads, queue depth of 16 with a stripe size of 64KiB, an 8KiB transfer size on a drive formatted with an allocation size of 8KiB, for a 100% sequential read test) I only get 347MiBytes per second sustained throughput at an average latency of 2.87ms per IO, equating to 44.5K IOps; ok, if that was 3Gbits it would be less – around 280MiBytes per second – oh, but wait a minute [...fingers tap desk].
    You’ll struggle to find in the commodity space an SSD that doesn’t have the SATA 3 (6Gbits) interface. SSDs are fast: not only low latency and high IOps, they also offer a very large sustained transfer rate. Consider the OCZ Agility 3 – it so happens that in my masters dissertation I did the same test, but on a different box, and got 374MiBytes per second at an average latency of 2.67ms per IO, equating to 47.9K IOps. The cost of a 240GB Agility 3 is £174.24 (http://www.scan.co.uk/products/240gb-ocz-agility-3-ssd-25-sata-6gb-s-sandforce-2281-read-525mb-s-write-500mb-s-85k-iops), but that same drive in a box connected with SATA 2 (3Gbits) would only yield around 280MiBytes per second, thus losing almost 100MiBytes per second of throughput and a ton of IOps too. So why the hell are “enterprise” vendors still only connecting SSDs at 3Gbits? Well, my conspiracy theory states that they have no interest in you moving to SSD because they’ll lose so much money. The argument that they use SATA 2 doesn’t wash; SATA 3 has been out for some time now and all the commodity stuff you buy uses it.
    Consider the cost, not in terms of price per GB but price per IOps: SSDs absolutely thrash hard drives on that. It used to be true that the opposite also held – that hard drives thrashed SSDs on price per GB – but is that true now? I’m not so sure. A 300GByte 2.5” 15Krpm SAS drive costs £329.76 ex VAT (http://www.scan.co.uk/products/300gb-seagate-st9300653ss-savvio-15k3-25-hdd-sas-6gb-s-15000rpm-64mb-cache-27ms), which equates to £1.09 per GB, compared to a 480GB OCZ Agility 3 costing £422.10 ex VAT (http://www.scan.co.uk/products/480gb-ocz-agility-3-ssd-25-sata-6gb-s-sandforce-2281-read-525mb-s-write-410mb-s-30k-iops), which equates to £0.88 per GB. Ok, I compared an “enterprise” hard drive with a “commodity” SSD, so things get a little more complicated here: most “enterprise” SSDs are SLC and most commodity ones are MLC, and SLC gives more performance and wear – I’ll talk about that another day. For now though, don’t get sucked in by vendor marketing: SATA 2 (3Gbit) just doesn’t cut it; SSDs need 6Gbit to breathe, and even that they are pushing.
    Alas, SSDs are connected using SATA, and all the controllers I’ve seen thus far from HP and DELL only do SATA 2 – deliberate? Well, I’ll let you decide on that one.

    Read the article

  • SQL SERVER – Performance Tuning Resolution

    - by pinaldave
    This blog post is written in response to T-SQL Tuesday hosted by MidnightDBAs. Taking resolutions is such an interesting subject. I think that, just like records, these are broken far more often than they are kept. I find it the funniest thing that we all take resolutions every year, but not every year can we manage to keep them. Well, does that mean we should not take resolutions? In fact, I support resolutions. Every year, I take a resolution that I will strive to reduce my body weight, and I usually manage to keep eating healthy till the end of January. When February begins, I begin to lose focus on my goal, and as March starts, the “as usual” eating habits return. Looking at the positive side, if I did not eat healthy in January every year, I think that might cause terrible consequences for my health in the long run. So keeping resolutions is a good practice, and following them to the extent one can is commendable. Let us come back to the world of SQL Server. What is my resolution for the year 2011 for SQL Server? There are many; I am going to list three very important resolutions that I have taken this new year over here.
    To understand SQL Server Performance Tuning at a deeper level: I think I am already halfway through. In any given month I am busy doing hands-on performance tuning for at least 12 days on average. That means I am doing this activity for almost 2 weeks a month. I believe that I have a good understanding of the subject. Note that the word I have used is “good,” and not “best.” There are often cases when I am stumped, and I have no clue what to do next. Then I usually go for my “trial and error” method - whichever method works, I make sure to keep a note on my blog. My goal is that I should never ever have to go for the trial and error method again to achieve the same solution; I should know the solution right away when I see the problem. I do understand that performance tuning can be a strange animal at times and one cannot guess the right step every time. However, aiming for a high goal never hurts, and I am going to learn more and more in this focused area.
    Going further from a basic BI understanding: I do fairly decently with BI concepts. I know the basics of SSIS, SSRS, SSAS, PowerPivot and SharePoint (and a few other things - MDS, StreamInsight, etc.). However, I still consider myself a beginner. I do not have hands-on experience like many other BI gurus around. I want to take my learning further in this direction. I do not want to become a BI expert as the first step, but the goal is to move ahead from the basic level towards an advanced level. I am going to start presenting in User Group sessions and other places on this subject. When I have to prepare a new subject for presentations, I force myself to learn more. I am committed to learning a bit more in this direction.
    Learning the new features of SQL Server 2011 (Denali): this is the new thing from Microsoft for all the SQL geeks. I am eagerly waiting for the final product later this year and I am planning to learn it well. If I follow my above two goals, I think this goal will be automatically covered. I am eager and excited about this new offering from Microsoft.
    I guess these are my resolutions; maybe next year around the same time, I should revisit this post and see how successful I have been in following my goals. On a lighter note, I am particularly a fan of the following cartoon strip (courtesy: Calvin and Hobbes). I think when we cannot resolve our resolutions, we tend to act like Calvin.
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Visual Studio 2010 on Macbook Air

    - by Kyle B.
    Does anyone here run Visual Studio 2010 (or VS12 RC) on a MacBook Air? I have the current model with 4GB RAM, 13" screen, and a 256GB SSD. Before I go through the effort of configuring this, I'd like to know if anyone from the community has done it and: Was the performance acceptable? If it was, I plan to get a larger Cinema Display as a second monitor and do all my coding on this machine, ditching my desktop. Did you use Boot Camp, Parallels, or VMware? I feel that to maximize performance Boot Camp would be necessary to make the best use of the memory, but I am not sure if this is completely necessary. I'd prefer to use a VM, but wasn't sure if this was practical and would value your input before buying a license. Did you also run anything else on the Windows installation, such as SQL Server Express, IIS Express, etc.? Did performance lag after a certain point? Note: I would have asked this on superuser.com, but felt this applied more directly to the programming community.

    Read the article

  • WebLogic Server Performance and Tuning: Part II - Thread Management

    - by Gokhan Gungor
    WebLogic Server, like any other Java application server, provides resources that your applications use to provide services. Unfortunately none of these resources are unlimited, and they must be managed carefully. One of these resources is threads, which are pooled to provide better throughput and performance along with fast response times, and to avoid deadlocks. Threads are the execution points through which WebLogic Server delivers its power and executes work. Managing threads is very important because it can affect the overall performance of the entire system. In releases prior to WebLogic Server 9.0 we had multiple execute queues and user-defined thread pools. There were different queues for different types of work, each with a fixed number of execute threads. Tuning these thread pools and finding the proper number of threads was time consuming and required many trials. WebLogic Server 9.0 and the following releases use a single thread pool and a single priority-based execute queue. All types of work are executed in this single thread pool. Its size (thread count) is automatically decreased or increased (self-tuned). The new “self-tuning” system simplifies getting the proper number of threads and utilizing them. A work manager allows your applications to run concurrently in multiple threads. A work manager is a mechanism that allows you to manage and utilize threads and create rules/guidelines to follow when assigning requests to threads. We can set a scheduling guideline or prioritize a request with a work manager and then associate this work manager with one or more applications. At run time, WebLogic Server uses these guidelines to assign pending work/requests to execution threads. The position of a request in the execute queue is determined by its priority. There is a default work manager that is provided, and it should be sufficient for most applications. However, there can be cases where you want to change this default configuration. Your application(s) may be providing services that need a mixture of fast response times and long running processes like batch updates. However, wrong configuration of work managers can lead to a performance penalty while you are expecting an improvement. We can define/configure work managers at:
    • Domain level: config.xml
    • Application level: weblogic-application.xml
    • Component level: weblogic-ejb-jar.xml or weblogic.xml (for a specific web application use weblogic.xml)
    We can use the following predefined rules/constraints to manage the work:
    • Fair Share Request Class: specifies the average thread-use time required to process requests. The default is 50.
    • Response Time Request Class: specifies a response time goal in milliseconds.
    • Context Request Class: assigns request classes to requests based on context information.
    • Min Threads Constraint: guarantees the number of threads the server will allocate to requests.
    • Max Threads Constraint: limits the number of concurrent threads executing requests.
    • Capacity Constraint: causes the server to reject requests only when it has reached its capacity.
    Let's create a work manager for our application for long running work. Go to the WebLogic console and select Environment | Work Managers from the domain structure tree. Click the New button, select Work Manager and click Next. Enter the name for the work manager and click Next. Then select the managed server instance(s) or clusters from the available targets (the ones where your long running application is deployed) and finish.
    Click on MyWorkManager, open the Configuration tab, check Ignore Stuck Threads and save. This will prevent WebLogic from treating long running processes (those taking more than a specified time) as stuck, and allow the process to finish.

    Read the article

  • Information about how much time in spent in a function, based on the input of this function

    - by olchauvin
    Is there a (quantitative) tool to measure the performance of functions based on their input? So far, the tools I have used to measure the performance of my code (like JetBrains dotTrace for .NET) tell me how much time I spent in each function, but I'd like to have more information about the parameters passed to the function in order to know which parameters impact performance the most. Let's say that I have a function like this:

        int myFunction(int myParam1, int myParam2)
        {
            // Do and return something based on the value of myParam1 and myParam2.
            // The code is likely to use if, for, while, switch, etc....
        }

    I would like a tool that would tell me how much time is spent in myFunction based on the values of myParam1 and myParam2. For example, the tool would give me a result looking like this for "myFunction":

        value    | value    | Number of | Average
        myParam1 | myParam2 | calls     | time
        ---------|----------|-----------|--------
        1        | 5        | 500       | 301 ms
        2        | 5        | 250       | 1253 ms
        3        | 7        | 1268      | 538 ms
        ...

    That would mean that myFunction has been called 500 times with myParam1=1 and myParam2=5, and that with those parameters it took on average 301 ms to return a value. The idea behind this is to do some statistical optimization by organizing my code such that the blocks of code that are most likely to be executed are tested before the ones that are less likely to be executed. To put it bluntly, if I know which values are used the most, I can reorganize the if/while/for etc. structure of the function (and the whole program) to optimize it. I'd like to find such tools for C++, Java or .NET. Note: I am not looking for technical tips to optimize the code (like passing parameters as const, inlining functions, initializing the capacity of vectors and the like).

    Read the article

  • Helping to Reduce Page Compression Failures Rate

    - by Vasil Dimov
    When InnoDB compresses a page it needs the result to fit into its predetermined compressed page size (specified with KEY_BLOCK_SIZE). When the result does not fit we call that a compression failure. In this case InnoDB needs to split up the page and try to compress again. That said, compression failures are bad for performance and should be minimized. Whether the result of the compression will fit largely depends on the data being compressed, and some tables and/or indexes may contain more compressible data than others. And so it would be nice if the compression failure rate, along with other compression stats, could be monitored on a per table or even on a per index basis, wouldn't it? This is where the new INFORMATION_SCHEMA table in MySQL 5.6 kicks in. INFORMATION_SCHEMA.INNODB_CMP_PER_INDEX provides exactly this helpful information. It contains the following fields:

        +-----------------+--------------+------+
        | Field           | Type         | Null |
        +-----------------+--------------+------+
        | database_name   | varchar(192) | NO   |
        | table_name      | varchar(192) | NO   |
        | index_name      | varchar(192) | NO   |
        | compress_ops    | int(11)      | NO   |
        | compress_ops_ok | int(11)      | NO   |
        | compress_time   | int(11)      | NO   |
        | uncompress_ops  | int(11)      | NO   |
        | uncompress_time | int(11)      | NO   |
        +-----------------+--------------+------+

    similarly to INFORMATION_SCHEMA.INNODB_CMP, but this time the data is grouped by "database_name,table_name,index_name" instead of by "page_size". So a query like

        SELECT database_name, table_name, index_name,
               compress_ops - compress_ops_ok AS failures
        FROM information_schema.innodb_cmp_per_index
        ORDER BY failures DESC;

    would reveal the most problematic tables and indexes that have the highest compression failure rate. From there on, the way to improving performance would be to try to increase the compressed page size or change the structure of the table/indexes or the data being stored, and see if it will have a positive impact on performance.
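    A minimal MySQL 5.6 sketch (my own example, not from the post), assuming innodb_file_per_table=ON, innodb_file_format=Barracuda and the innodb_cmp_per_index_enabled option turned on so that the per-index statistics are collected; the table name is illustrative only:

        -- Create a compressed table with an 8KB compressed page size
        CREATE TABLE t_compressed (
            id BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
            payload VARCHAR(4000)
        ) ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;

        -- After some load, check how often compression fails for this table's indexes
        SELECT index_name,
               compress_ops,
               compress_ops_ok,
               compress_ops - compress_ops_ok AS failures
        FROM information_schema.innodb_cmp_per_index
        WHERE database_name = DATABASE()
          AND table_name = 't_compressed';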

    Read the article

  • How do you display a list of images, from a folder on hard drive, on ASP.NET website?

    - by Jordan S
    I am trying to make a simple photo gallery website using ASP.NET and C#. Right now I don't have a server set up; I am just using the development server that Visual Studio starts when you make a website project and run it. I have a folder on my hard drive that contains an unknown number of images. I want to write a piece of code that will go through each image and add them to the default webpage. I have tried this code but it doesn't work. What am I doing wrong? Should I be using a ListView control or a DataView or something like that? Do I need to add a virtual directory in order to access the images? If so, how do I do that on this test server? ALSO, how do I set the position and alignment of these pictures? For example, how would I make it so that the pictures are in a line vertically and centered on the webpage?

        protected void Page_Load(object sender, EventArgs e)
        {
            string[] filesindirectory = Directory.GetFiles(@"C:\Users\Jordan\Desktop\Web Images");
            int i = 1;
            foreach (string s in filesindirectory)
            {
                Image img = new Image();
                img.ID = "image" + i.ToString();
                img.ImageUrl = s;
                img.Visible = true;
                Page.Controls.Add(img);
                i++;
            }
        }

    Read the article

  • How to improve Samba performance on VirtualBox machine?

    - by ColinM
    I am running a Windows 7 64-bit host and an Ubuntu 9.04 32-bit guest inside VirtualBox 4.0.0 on a laptop which has internet connectivity via WiFi. The main use is writing code, for which I use NetBeans. My dev environment is the virtual machine, and I use Samba on the VM to share the code directory so that I can use NetBeans on the host as my IDE. Unfortunately NetBeans does a lot of disk access, and due to the poor Samba performance it makes the IDE hardly usable. How can I improve the performance of the Samba share? On my desktop it isn't so bad, but I don't know what the difference would be since they are similar setups (Win 7 hosts, cloned guests, SSDs, VBox guests using SATA in AHCI mode, etc.). With bridged networking, is the performance between the host and guest limited by the physical hardware (Intel 6200 AGN on the laptop)? I switched to host-only and it didn't seem to improve performance at all. To clarify "bad performance": I used 7-Zip to zip a project directory and got 19KB/s to 500KB/s depending on the size of the files being zipped. On my desktop it was in the ~10MB/s range. Any tips for VirtualBox/Samba configuration to improve the performance? I am using Samba 3.3.2. Hopefully Samba with SMB2 support will be released soon..

    Read the article

  • Embed Google’s Pac Man Game On Your Website

    - by Gopinath
    Google is celebrating the 30th anniversary of Pac-Man with a playable Pac-Man game doodle on its home page. You can play the full game (255 levels) at http://google.com. This is the first time ever that Google has released an interactive doodle. How To Embed the Pac Man Game In Your Web Pages? I'm surprised to see that this game is a non-Flash version; it seems to be pure JavaScript + HTML. Michael at RustyBricks.com published an unofficial way of embedding Google's Pac Man game in any website, along with a link to a demo page. Check out How To Get Google's Pac Man Game On Your Page for a quick script to add this game for your website users.

    Read the article

  • Integrating Twitter Into An ASP.NET Website Using OAuth

    Earlier this year I wrote an article about Twitterizer (http://www.twitterizer.net/), an open-source .NET library that can be used to integrate your application with Twitter (http://twitter.com/). Using Twitterizer you can allow your visitors to post tweets, view their timeline, and much more, all without leaving your website. The original article, Integrating Twitter Into An ASP.NET Website (http://www.4guysfromrolla.com/articles/021710-1.aspx), showed how to post tweets and view a timeline to a particular Twitter account using Twitterizer 1.0. To post a tweet to a specific account, Twitterizer 1.0 uses basic authentication. Basic authentication is a very simple

    Read the article

  • Residential Care Homes website launched based on the Umbraco CMS

    - by Vizioz Limited
    This week we have launched a new website for a local company called Ascot Residential Homes. They run two homes for the elderly and for those who suffer with dementia. Before we built the site for Ascot Residential Homes, they asked us to visit the homes to get a feeling for how special they are. If you are looking for a home for an elderly relative, I would highly recommend having a look at their website and, if you are in the area, arranging to visit them. If you look on our site you will also find an Ascot Residential Homes case study in our Umbraco case studies section.

    Read the article

  • How to Use Firefox’s Web Developer Tools to View Website Structures in 3D

    - by Chris Hoffman
    Firefox 11 added two new web developer tools to Firefox’s already impressive arsenal. The Tilt feature visualizes website structures in 3D, while the Style Editor can edit CSS stylesheets on the fly. The 3D feature, known as Tilt, is a way of visualizing a website’s DOM. It integrates with the existing Document Inspector and uses WebGL to display rich 3D graphics in your browser.

    Read the article

  • SEO techniques for a complete Flex Website

    - by Bobby Francis Joseph
    I am planning to build a website completely in Flex. All the contents will be static; no DB will be used. Unfortunately I am not building the website for PUMA or NIKE, and so SEO is important. There is an overwhelming amount of confusing information out there about Flex and SEO. The following is a piece of information I found on the web: “FLEX (Flash) uses XML as a primary source of content, and XHTML is just a custom XML. The idea is to use the HTML pages as XML content for the FLEX (Flash) application. The XML can be read and indexed by the search engines, and it’s also the ideal content source for your FLEX (Flash) application.” It goes on to explain how this can be done. Is this really that simple? Could someone give some credible links? SEO is important for me since I am planning to build the site for a resort.

    Read the article

  • MySQL server with website hosting with managed hosting

    - by BlackSheep
    I work for a small business, and I am the "IT Department". I also happen to be a summer intern, so after I leave, the number of IT staff will be 0 for an indefinite period of time. I need a place where I can reliably put up the company's website as well as the MySQL+PHP backend. Good redundancy is a plus, as well as easy administration for my IT-challenged colleagues. Managed hosting would be good, so the PHP versions can be updated without my company having to hire an admin. EDIT: The company already has a MySQL+PHP server running locally which hosts the existing website. My assignment is to find a remote server where the latest versions of MySQL+PHP will be maintained and where there is a very small chance of unintended downtime. Can you recommend anything?

    Read the article

  • What Contents in a Young Programmer's Personal Website

    - by DotNetStudent
    I recently stumbled upon this question, in which the contents a professional programmer's website should have were discussed, and I agree with most of the answers there. However, I am by no means a professional programmer (I just came out of university), and so I am a bit lost as to what content I should provide on the personal website I am designing for myself now. I do have a pretty nice job at a fast-growing software company, but I would really like to present myself to the outside world in a nice but humble manner, since my curriculum is by no means a long one. Any ideas?

    Read the article

  • The sharp decline in website statistics

    - by Erfan Safarpoor
    Ten months ago my website's traffic statistics were very high. Very high... But after 10 days of server failure, traffic was 20 times lower. It stayed that low for a long time even though I made no mistakes. I have built source links and hired a writer, but the final results are yet to be seen. And a strange thing: approximately every two months traffic climbs about 20 times again, and then drops low again after 10 days! My website url: www.sooran.com (food.sooran.com)

    Read the article
