Search Results

Search found 2042 results on 82 pages for 'average'.

Page 20 of 82

  • Link tags in iframe widget

    - by john Smith
    I have a community rating site, and I'm offering small iframe widgets that show the average rating and a few other details. Does it make sense (for visibility, SEO) to add link tags to the head, like: <link rel="alternate" type="application/rss+xml" title="RSS 2.0" href="rssfeed" /> <link rel="index" title="main-profile" href="main-profile"> to create a logical association between the widget and its related pages? How would you do this?

    Read the article

  • How exactly is Google Webmaster Tools measuring "Site Performance"?

    - by Rémi
    I've been working for two months now on improving our response time (mainly server side) on a new forum (a brand-new product from a technical point of view) we launched in Germany a few months ago, and I'm quite surprised by the results I get. I monitor our response time using Apache logs and our own implementation of a Boomerang beacon. Using my stats, I can see that our new product responds in about 680 ms, where our old product was responding in about 1050 ms.

    On the other side, Google Webmaster Tools tells us that our pages have an average response time of about 1500 ms today, where it was 700 ms three months ago with our old product. I figured that GWT was taking client-side metrics into account, so I added some measures to our Boomerang beacon, and everything looks just fine. I've also run some random pages through YSlow and Google's Page Speed, and everything looks better than it was before. We even have an 82% on Google's Page Speed tool, which is quite good for a site with some ads in it :)

    Lately, we signed a deal with Akamai to use two of their products: a CDN for our static files (we were using another CDN before, but it wasn't very effective) and RMA to improve network routes. We have also introduced a new aggressive cache mechanism to ensure that most of the pages served to crawlers are cached by our memcache grid. After checking my metrics, it seems these changes have improved our response time from 650 ms to about 500 ms, which is good (still not great, but it is definitely an improvement). But Webmaster Tools continues to report an increasing average response time where we see it decreasing over the same period.

    Have you ever seen this kind of weird behavior on your sites while doing performance improvements? Do you have any idea how to monitor the same thing Google does with Site Performance in Google Webmaster Tools, so that we could improve our site and constantly check whether it is what Google wants?

    Edit 2011/07/26: Thanks for your answers, guys! Nevertheless, I was not precise enough. The main issue we have is not with the Site Performance page but with the Crawl Stats one for now. We probably found an issue on our side with some very slow pages (around 3000 ms!) and we are trying to fix them. I'll keep you posted as soon as I have more information. Thanks again!
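
    On the monitoring question, the server-side half is easy to track yourself from the Apache logs; below is a minimal sketch, assuming a LogFormat that ends with %D (response time in microseconds), which is an assumption to adjust for your own format. Keep in mind it measures server time only, while GWT's Site Performance numbers also include network and client-side rendering time, which is a common source of exactly this kind of discrepancy.

```python
# Minimal sketch: average server response time from an Apache access log.
# Assumes the LogFormat ends with %D (response time in microseconds), e.g.
#   LogFormat "%h %l %u %t \"%r\" %>s %b %D" timed
# The field position is an assumption; adjust it for your own format.
import sys

def average_response_ms(log_path):
    total_us = 0
    count = 0
    with open(log_path) as f:
        for line in f:
            fields = line.split()
            if not fields:
                continue
            try:
                total_us += int(fields[-1])  # %D sits in the last field here
                count += 1
            except ValueError:
                continue  # skip malformed lines
    return total_us / count / 1000.0 if count else 0.0

if __name__ == "__main__":
    print("average response time: %.0f ms" % average_response_ms(sys.argv[1]))
```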

    Read the article

  • What different ways are there to model restitution in a physics engine?

    - by Mikael Högström
    In my physics engine I give each body a restitution value between 0 and 1. When two bodies collide, there seem to be different views on how the restitution of the collision should be calculated. To me the most intuitive approach is to take the average of the two, but some engines take only the larger one. Are there other ways to do it? Also, could the closing velocity or some other parameter come into play?
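
    For reference, here is a sketch of the combine modes commonly seen in practice; the mode names echo the "combine modes" exposed by engines such as Box2D and PhysX, and the exact defaults vary by engine.

```python
# Sketch of common ways two per-body restitution values are combined
# into one coefficient for a contact.

def combine_restitution(e1, e2, mode="max"):
    if mode == "average":
        return (e1 + e2) / 2.0
    if mode == "min":
        return min(e1, e2)
    if mode == "max":        # a superball stays bouncy even on soft clay
        return max(e1, e2)
    if mode == "multiply":   # both bodies must be bouncy for the pair to bounce
        return e1 * e2
    raise ValueError("unknown mode: %s" % mode)

# The combined value scales the separating speed along the contact normal:
# v_separate = -e * v_approach.
print(combine_restitution(0.9, 0.1, "average"))  # 0.5
print(combine_restitution(0.9, 0.1, "max"))      # 0.9
```

    On the closing-velocity question: many engines also clamp restitution to zero below a small approach-speed threshold, so resting contacts settle instead of jittering.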

    Read the article

  • WebCenter Customer Spotlight: Guizhou Power Grid Company

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Solution Summary
    Guizhou Power Grid Company is responsible for power grid planning, construction, management, and power distribution in Guizhou Province, serving 39 million people. Guizhou has 49,823 employees and an annual revenue of over $5 billion. The business objectives were to consolidate information contained in disparate systems into a single knowledge repository and provide a safe and efficient way for staff and managers to access, query, share, manage, and store business information. Guizhou Power Grid Company saved more than US$693,000 in storage costs, reduced average search times from 180 seconds to 5 seconds, and solved 80% to 90% of technology and maintenance issues by searching the Oracle WebCenter Content management system.

    Company Overview
    A wholly owned subsidiary of China Southern Power Grid Company Limited, Guizhou Power Grid Company is responsible for power grid planning, construction, management, and power distribution in Guizhou Province, serving 39 million people. Guizhou has 49,823 employees and an annual revenue of over $5 billion.

    Business Challenges
    The business objectives were to consolidate information contained in disparate systems, such as the customer relationship management and power grid management systems, into a single knowledge repository and provide a safe and efficient way for staff and managers to access, query, share, manage, and store business information.

    Solution Deployed
    Guizhou Power Grid Company implemented Oracle WebCenter Content to build a content management system that enabled the secure, integrated management and storage of information, such as documents, records, images, Web content, and digital assets. The content management solution was integrated with the power grid, customer service, maintenance, and other business systems, as well as the corporate Web site.

    Business Results
    - Saved more than US$693,000 in storage costs and shortened material distribution time by integrating the knowledge management solution with the power grid, customer service, maintenance, and other business systems, as well as the corporate Web site
    - Enabled staff to search 31,650 documents using catalogs, multidimensional attributes, and knowledge maps, reducing average search times from 180 seconds to 5 seconds and saving approximately 1,539 hours in annual search time
    - Gained comprehensive document management, format transformation, security, and auditing capabilities
    - Enabled users to upload new documents and supervisors to check the accuracy of these documents online, resulting in improved information quality control
    - Solved 80% to 90% of technology and maintenance issues by searching the Oracle content management system for information, ensuring IT staff can respond quickly to users' technical problems
    - Improved security by using role-based access controls to restrict access to confidential documents and information
    - Supported the efficient classification of corporate knowledge by using Oracle's metadata functions to collect, tag, and archive documents, images, Web content, and digital assets

    "We chose Oracle WebCenter Content, as it is an outstanding integrated content management platform. It has allowed us to establish a system to access, query, share, manage, and store our corporate assets. This has laid a solid foundation for Guizhou Power Grid Company to improve management practices." - Luo Sixi, Senior Information Consultant, Guizhou Power Grid Company

    Additional Information
    - Guizhou Power Grid Company Customer Snapshot
    - Oracle WebCenter Content

    Read the article

  • What algorithms can I use to detect if articles or posts are duplicates?

    - by michael
    I'm trying to detect whether an article or forum post is a duplicate entry within the database. I've given this some thought, coming to the conclusion that someone who duplicates content will do so using one of three methods (in descending difficulty to detect):

    1. simply copy and paste the whole text
    2. copy and paste parts of the text, merging them with their own
    3. copy an article from an external site and masquerade it as their own

    Prepping Text for Analysis
    Basically, the goal is to remove any anomalies and make the text as "pure" as possible. For more accurate results, the text is "standardized" by:
    - stripping duplicate white space and trimming leading and trailing white space
    - standardizing newlines to \n
    - removing HTML tags
    - stripping URLs, using the Daring Fireball regex
    - removing BB code (I use BB code in my application, so that goes too)
    - converting accented and other non-English characters to their plain equivalents

    I store information about each article in (1) a statistics table and (2) a keywords table.

    (1) Statistics Table
    The following statistics are stored about the textual content (much like this post):
    - text length
    - letter count
    - word count
    - sentence count
    - average words per sentence
    - Automated Readability Index
    - Gunning fog score
    For European languages, the Coleman-Liau and Automated Readability Index should be used, as they do not use syllable counting and so should produce a reasonably accurate score.

    (2) Keywords Table
    The keywords are generated by excluding a huge list of stop words (common words), e.g. 'the', 'a', 'of', 'to', etc.

    Sample Data
    - text_length: 3963
    - letter_count: 3052
    - word_count: 684
    - sentence_count: 33
    - word_per_sentence: 21
    - gunning_fog: 11.5
    - auto_read_index: 9.9
    - keyword 1: killed
    - keyword 2: officers
    - keyword 3: police

    It should be noted that once an article gets updated, all of the above statistics are regenerated and could be completely different values.

    How could I use the above information to detect whether an article being published for the first time already exists within the database? I'm aware that anything I design will not be perfect, the biggest risks being that (1) content that is not a duplicate gets flagged as a duplicate and (2) the system lets duplicate content through. So the algorithm should generate a risk-assessment number from 0 (no duplicate risk) through 5 (possible duplicate) to 10 (duplicate). Anything above 5 means there is a good possibility that the content is a duplicate. In that case the content could be flagged and linked to the articles that are possible duplicates, and a human could decide whether to delete or allow it.

    As I said before, I'm storing keywords for the whole article; I wonder whether I could do the same on a per-paragraph basis. That would also mean further separating my data in the DB, but it would make it easier to detect case (2) above. I'm thinking of a weighted average between the statistics, but with what weights, and what would the consequences be...
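
    One way to put those pieces together, as a rough sketch; the stop-word list is a tiny sample, and the weights and 0-10 scaling are illustrative assumptions, not tuned values.

```python
# Minimal sketch of the pipeline described above: normalize the text,
# then score a new article against a stored one.
import re
import unicodedata

STOP_WORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "it"}  # tiny sample list

def normalize(text):
    text = re.sub(r"<[^>]+>", " ", text)           # strip HTML tags
    text = re.sub(r"\[/?\w+[^\]]*\]", " ", text)   # strip BB code like [b]...[/b]
    text = re.sub(r"https?://\S+", " ", text)      # strip URLs (simplified pattern)
    text = unicodedata.normalize("NFKD", text)     # fold accents: ä -> a
    text = text.encode("ascii", "ignore").decode()
    return re.sub(r"\s+", " ", text).strip().lower()

def keywords(text):
    return {w for w in re.findall(r"[a-z']+", text) if w not in STOP_WORDS}

def duplicate_risk(new_text, stored_text):
    a, b = normalize(new_text), normalize(stored_text)
    ka, kb = keywords(a), keywords(b)
    overlap = len(ka & kb) / max(len(ka | kb), 1)              # Jaccard similarity, 0..1
    length_ratio = min(len(a), len(b)) / max(len(a), len(b), 1)
    return round(10 * (0.8 * overlap + 0.2 * length_ratio), 1)  # 0 = no risk, 10 = duplicate

print(duplicate_risk("Two officers were killed...", "Officers killed: police report..."))
```

    Running the same comparison per paragraph, as suggested above, is what would catch case (2), the partial copy-and-paste.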

    Read the article

  • Impatient Customers Make Flawless Service Mission Critical for Midsize Companies

    - by Richard Lefebvre
    At times, I can be an impatient customer. But I'm not alone. Research by The Social Habit shows that among customers who contact a brand, product, or company through social media for support, 32% expect a response within 30 minutes and 42% expect a response within 60 minutes! 70% of respondents to another study expected their complaints to be addressed within 24 hours, irrespective of how they contacted the company.

    I was intrigued when I read a recent blog post by David Vap, Group Vice President of Product Development for Oracle Service Cloud. It's about "Three Secrets to Innovation" in customer service. In David's words:
    1) Focus on making what's hard simple
    2) Solve real problems for real people
    3) Don't just spin a good vision. Do something about it.

    I believe midsize companies have a leg up in delivering on these three points, mainly because they have no other choice. How can you grow a business without listening to your customers and providing flawless service? Big companies are often weighed down by customer service practices that have been churning in bureaucracy for years or even decades. When the all-in-one printer/fax/scanner I bought my wife for Christmas (call me a romantic) failed after sixty days, I wasted hours of my time navigating the big-brand manufacturer's complex support and contact policies, only to be offered a refurbished replacement after I shipped mine back to them. There was not a happy ending. Let's just say my wife still doesn't have a printer.

    Young midsize companies need to innovate to grow. Established midsize company brands need to innovate to survive and reach the next level.

    Midsize Customer Case Study: The Boston Globe
    The Boston Globe, established in 1872 and the winner of 22 Pulitzer Prizes, is fighting the prevailing decline in the newspaper industry. Businessman John Henry invested in the Globe in 2013 because he "...believes deeply in the future of this great community, and the Globe should play a vital role in determining that future." How well the paper executes on its bold new strategy is truly mission critical, a matter of life or death for an industry icon. This customer case study tells how Oracle's Service Cloud is helping The Boston Globe "do something about" and not just "spin" its strategy and vision via improved customer service. For example, Oracle RightNow Chat Cloud Service is now the preferred support channel for its online environments. The average e-mail or phone call can take three to four minutes to complete, while the average chat is only 30 to 40 seconds. It's a great example of one company leveraging technology to make things simpler and to solve real problems for real people.

    Related: Oracle Cloud Service a leader in The Forrester Wave™: Customer Service Solutions For Small And Midsize Teams, Q2 2014

    Read the article

  • What is an acceptable level of FPS in a browser workflow editor?

    - by Theo Walcott
    I'm developing a diagramming tool and need some metrics to test it against. Unfortunately I couldn't find information regarding an acceptable average FPS level for this kind of web app. We all know such levels for action games (60 fps minimum) and video streaming (25 fps). Can anyone give me some information regarding a minimal FPS level for drawing web apps? What tools would you recommend to test my app?

    Read the article

  • Do programmers have a higher IQ? [closed]

    - by Laurent Pireyn
    Do programmers have a higher intelligence quotient than the average of 100? Has anybody conducted studies on this topic? Don't get me wrong! I consider IQ a limited measure that only evaluates the analytical part of one's intelligence. Furthermore, I think that intelligence is only one among many characteristics, and that it should not be used to judge or discriminate against people. My question should be read in that context.

    Read the article

  • Future of Website Development

    In a country like India, the Internet industry has at last come of age. Once the exclusive domain of people who knew HTML and web development scripting, building a website has now become so simple that the average person can accomplish it with just the proper software.

    Read the article

  • Do Online Businesses Really Need High Bandwidth Hosting?

    However much information is available on the Internet, it still feels as though we need more. People say that what distinguishes an aspiring businessman from an already successful one is complete knowledge of one's business. One of the most difficult decisions a new entrepreneur must make is whether to go for high-bandwidth hosting or an average hosting plan.

    Read the article

  • Introducing SEO

    Gone are the days when having your own website was a matter of prestige for a company. Now, on average, 9 out of 10 companies have websites. So selling online now takes a lot more than just running your own web portal.

    Read the article

  • Organization & Architecture UNISA Studies – Chap 4

    - by MarkPearl
    Learning Outcomes
    - Explain the characteristics of memory systems
    - Describe the memory hierarchy
    - Discuss cache memory principles
    - Discuss issues relevant to cache design
    - Describe the cache organization of the Pentium

    Computer Memory Systems
    There are key characteristics of memory:
    - Location - internal or external
    - Capacity - expressed in terms of bytes
    - Unit of Transfer - the number of bits read out of or written into memory at a time
    - Access Method - sequential, direct, random, or associative

    From a user's perspective, the two most important characteristics of memory are:
    - Capacity
    - Performance - access time, memory cycle time, transfer rate

    The trade-off for memory happens along three axes:
    - Faster access time, greater cost per bit
    - Greater capacity, smaller cost per bit
    - Greater capacity, slower access time

    This leads to people using a tiered approach in their use of memory. As one goes down the hierarchy, the following occurs:
    1. Decreasing cost per bit
    2. Increasing capacity
    3. Increasing access time
    4. Decreasing frequency of access of the memory by the processor

    The use of two levels of memory to reduce average access time works in principle, but only if conditions 1 to 4 apply. A variety of technologies exist that allow us to accomplish this. Thus it is possible to organize data across the hierarchy such that the percentage of accesses to each successively lower level is substantially less than that of the level above.

    A portion of main memory can be used as a buffer to temporarily hold data that is to be read out to disk. This is sometimes referred to as a disk cache, and it improves performance in two ways:
    - Disk writes are clustered. Instead of many small transfers of data, we have a few large transfers of data. This improves disk performance and minimizes processor involvement.
    - Some data designated for write-out may be referenced by a program before the next dump to disk. In that case the data is retrieved rapidly from the software cache rather than slowly from disk.

    Cache Memory Principles
    Cache memory is substantially faster than main memory. A caching system works as follows: when a processor attempts to read a word of memory, a check is made to see if it is in cache memory. If it is, the data is supplied. If it is not in the cache, a block of main memory, consisting of a fixed number of words, is loaded into the cache. Because of the phenomenon of locality of reference, when a block of data is fetched into the cache, it is likely that there will be future references to that same memory location or to other words in the block.

    Elements of Cache Design
    While there are a large number of cache implementations, there are a few basic design elements that serve to classify and differentiate cache architectures:
    - Cache Addresses
    - Cache Size
    - Mapping Function
    - Replacement Algorithm
    - Write Policy
    - Line Size
    - Number of Caches

    Cache Addresses
    Almost all non-embedded processors support virtual memory. Virtual memory in essence allows a program to address memory from a logical point of view without needing to worry about the amount of physical memory available. When virtual addresses are used, the designer may choose to place the cache between the MMU (memory management unit) and the processor, or between the MMU and main memory. The disadvantage of a virtual (logical) cache is that most virtual memory systems supply each application with the same virtual memory address space (each application sees virtual memory starting at address 0), which means the cache memory must be completely flushed on each application context switch, or extra bits must be added to each line of the cache to identify which virtual address space the address refers to.

    Cache Size
    We would like the size of the cache to be small enough that the overall average cost per bit is close to that of main memory alone, and large enough that the overall average access time is close to that of the cache alone. Also, larger caches are slightly slower than smaller ones.

    Mapping Function
    Because there are fewer cache lines than main memory blocks, an algorithm is needed for mapping main memory blocks into cache lines. The choice of mapping function dictates how the cache is organized. Three techniques can be used:
    - Direct - the simplest technique; maps each block of main memory into only one possible cache line
    - Associative - permits each main memory block to be loaded into any line of the cache
    - Set Associative - exhibits the strengths of both the direct and associative approaches while reducing their disadvantages
    For detailed explanations of each approach, read the textbook (pages 148-154).

    Replacement Algorithm
    For associative and set-associative mapping, a replacement algorithm is needed to determine which of the existing blocks in the cache must be replaced by a new block. There are four common approaches:
    - LRU (least recently used)
    - FIFO (first in, first out)
    - LFU (least frequently used)
    - Random selection

    Write Policy
    When a block resident in the cache is to be replaced, there are two cases to consider:
    - If no writes to that block have happened in the cache, discard it.
    - If a write has occurred, the changes in the cache must be propagated back to main memory.
    There are several approaches to achieve this, including:
    - Write Through - all writes to the cache are done to main memory as well, at the point of the change
    - Write Back - when a block is replaced, all dirty lines are written back to main memory
    The problem is complicated when we have multiple caches; there are techniques to accommodate this, but I have not summarized them.

    Line Size
    When a block of data is retrieved and placed in the cache, not only the desired word but also some number of adjacent words are retrieved. As the block size increases from very small to larger sizes, the hit ratio will at first increase because of the principle of locality, which states that data in the vicinity of a referenced word are likely to be referenced in the near future. As the block size increases, more useful data are brought into the cache. The hit ratio will begin to decrease, however, as the block becomes even bigger and the probability of using the newly fetched information becomes less than the probability of reusing the information that has to be replaced. Two specific effects come into play:
    - Larger blocks reduce the number of blocks that fit into a cache. Because each block fetch overwrites older cache contents, a small number of blocks results in data being overwritten shortly after it is fetched.
    - As a block becomes larger, each additional word is farther from the requested word and therefore less likely to be needed in the near future.
    The relationship between block size and hit ratio is complex, and no set approach is judged to be the best in all circumstances.

    Pentium 4 and ARM cache organizations
    The processor core consists of four major components:
    - Fetch/decode unit - fetches program instructions in order from the L2 cache, decodes them into a series of micro-operations, and stores the results in the L1 instruction cache
    - Out-of-order execution logic - schedules execution of the micro-operations subject to data dependencies and resource availability; thus micro-operations may be scheduled for execution in a different order than they were fetched from the instruction stream. As time permits, this unit schedules speculative execution of micro-operations that may be required in the future
    - Execution units - these units execute micro-operations, fetching the required data from the L1 data cache and temporarily storing results in registers
    - Memory subsystem - this unit includes the L2 and L3 caches and the system bus, which is used to access main memory when the L1 and L2 caches have a cache miss and to access the system I/O resources
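
    To make the mapping-function and replacement-algorithm sections concrete, here is a small illustrative sketch (mine, not the textbook's): a set-associative cache with LRU replacement. Setting ways=1 gives a direct-mapped cache; setting num_sets=1 gives a fully associative one.

```python
# Toy set-associative cache with LRU replacement. Addresses map as:
#   block number = address // block_size
#   set index    = block number % num_sets
#   tag          = block number // num_sets
from collections import OrderedDict

class SetAssociativeCache:
    def __init__(self, num_sets=64, ways=4, block_size=32):
        self.num_sets, self.ways, self.block_size = num_sets, ways, block_size
        # one OrderedDict per set, ordered from least to most recently used
        self.sets = [OrderedDict() for _ in range(num_sets)]
        self.hits = self.misses = 0

    def access(self, address):
        block = address // self.block_size
        index = block % self.num_sets
        tag = block // self.num_sets
        lines = self.sets[index]
        if tag in lines:
            lines.move_to_end(tag)         # mark as most recently used
            self.hits += 1
        else:
            self.misses += 1
            if len(lines) >= self.ways:
                lines.popitem(last=False)  # evict the least recently used line
            lines[tag] = True

cache = SetAssociativeCache()
for addr in [0, 4, 8, 64, 0, 4096, 0]:  # nearby addresses share one block
    cache.access(addr)
print(cache.hits, cache.misses)  # 4 3 - locality turns most accesses into hits
```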

    Read the article

  • IBM Reinvents x86 Platform with eX5 Servers

    The amount of data involved in the average Web-based workload today doubles every year, increasing costs and straining IT resources. The traditional response to this dilemma from IT organizations is to throw more servers at the problem, which furthers server sprawl and increases power and management costs. As a result, the typical x86 server is only running at 10 percent utilization.

    Read the article

  • Moms on Mobile: Are They Way Ahead of You?

    - by Mike Stiles
    You may have no idea how much, and how fast, moms are embracing mobile. Of all the demographics that can be targeted by marketers, moms have always been at or near the top of the list. And why not? They're running households, they're all over town, they're making buying decisions, and they're influencing family and friends. They, out of necessity, become masters of efficiency and time management. So when a technology tool like mobile comes along that assists with that efficiency and time management, we would obviously expect them to take advantage of it. So if it's obvious, why are so many big, sophisticated brands left choking on the dust of moms who have zoomed past them in the adoption of mobile, and social on mobile? Let's break down some hard truths as presented by a Mojiava report:
    - Moms spend 6.1 hours per day on average on their smartphones - more than on magazines, TV, or radio.
    - 46% took action after seeing a mobile ad.
    - 51% self-identify as "addicted" to their smartphone.
    - Households with an income of $25K-$50K have about the same mobile penetration among moms as those with incomes of $50K-$75K. So mobile is regarded as a necessity for middle-class moms.
    - Even moms without smartphones spend 2.5 hours on average per day on some connected mobile device.
    - Of moms with such devices, 9.8% have an iPad, 9.5% a Kindle, and 5.7% an iPod Touch.
    - Of tablet-owning moms, 97% bought something using their tablet in the last month.
    - 31% spend over 10 hours per week on their tablet, but less than 2 hours per week on their PCs.
    - 62% of connected moms use shopping apps.
    - 46% want to get info on their mobile while in a store.
    - Half of connected moms use social on their mobile. And they're engaged: 81% are brand fans, 86% post updates, and 84% comment.

    If women and moms are among your primary targets and you find yourself with no strong social channels where content is driving engagement and relationship-building, with sites not optimized for mobile, or with no tablet or smartphone apps, you have been solidly left behind by your customers and prospects. And their adoption of mobile and social on mobile is only exponentially speeding up, not slowing down. How much sense does it make when your customer is ready to act on your mobile ad, wants to use your iPad app to buy something from you, wants to be your fan on Facebook, wants to get messages and deals from you while they're in your store... but you're completely absent? I'll help you cheat on the test by giving you the answer: no sense at all. Catch up to momma.

    Read the article

  • Efficient Website Design Brings Constructive Traffic

    Nothing on the World Wide Web is more important than good website design for creating a lucrative presence on the Internet. Well-designed websites with focused content help attract the average visitor or targeted web surfer to evaluate your products or services and eventually convert into a loyal and interactive customer.

    Read the article

  • Is sqlite3 faster than MySQL on shared hosting?

    - by Osvaldo
    Can SQLite3 be faster than MySQL on shared hosting for small to average websites (fewer than 500 visitors a day)? I have an account with a popular shared hosting provider and I've noticed that it has become quite slow rendering pages. My suspicion is that this happens because the MySQL server is overloaded. Some CMSes work fine with SQLite too, so I was wondering whether I should use SQLite instead of MySQL for the new sites.
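
    A rough way to test that suspicion is to time the same query against both engines from the same host. Below is a minimal sketch for the SQLite side using only the standard library; the database file, table, and query are hypothetical placeholders, and you would run the equivalent query against the host's MySQL for comparison.

```python
# Time the average cost of one query against a local SQLite database.
import sqlite3
import time

def avg_query_time(db_path, query, runs=100):
    conn = sqlite3.connect(db_path)
    start = time.perf_counter()
    for _ in range(runs):
        conn.execute(query).fetchall()
    conn.close()
    return (time.perf_counter() - start) / runs

# Example (hypothetical database and table):
# print("%.4f s/query" % avg_query_time("site.db", "SELECT * FROM posts LIMIT 50"))
```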

    Read the article

  • What's the relation between website's traffic and Google Adsense revenue?

    - by user1592845
    Is there a relation between a website's daily traffic and its Google AdSense revenue? In other words, suppose the same ad is published on two different websites: the first has an average daily traffic of 2000 visits, while the other has only 100 visits. Will one click on that ad on the first website generate more revenue than on the second website? I've had trouble understanding Google's documentation and I need to get a clear idea about this subject.

    Read the article

  • Is the TCP protocol good enough for real-time multiplayer games?

    - by kevin42
    Back in the day, TCP connections over dialup/ISDN/slow broadband resulted in choppy, laggy games because a single dropped packet caused a resync. That meant a lot of game developers had to implement their own reliability layer on top of UDP, or they used UDP for messages that could be dropped or received out of order and a parallel TCP connection for information that must be reliable. Given that the average user has a faster network connection now, can a real-time game such as an FPS give good performance over a TCP connection?
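
    For context, if a game does stay on TCP, the usual first tweak is disabling Nagle's algorithm so small, frequent updates are sent immediately rather than coalesced; the host and payload in this sketch are hypothetical.

```python
# Disable Nagle's algorithm (TCP_NODELAY) on a TCP socket so small game
# updates are not buffered waiting for more data.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
# sock.connect(("game.example.com", 7777))  # hypothetical game server
# sock.sendall(b"POS 10.5 3.2\n")           # hypothetical position update
```

    Note that TCP_NODELAY only removes buffering delay; it does nothing about head-of-line blocking, where one lost segment stalls delivery of everything behind it until retransmission, which is exactly the choppiness described above.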

    Read the article

  • How would one build a relational database on a key-value store, a-la Berkeley DB's SQL interface?

    - by coleifer
    I've been checking out Berkeley DB and was impressed to find that it supports a SQL interface that is "nearly identical" to SQLite: http://docs.oracle.com/cd/E17076_02/html/bdb-sql/dbsqlbasics.html#identicalusage

    I'm very curious, at a high level, how this kind of interface might have been architected. For instance:
    - since values are "transparent", how do you efficiently query and sort by value?
    - how are limits and offsets performed efficiently on large result sets?
    - how would the keys be structured and serialized for good average-case performance?
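
    On the key-structure question, one common general technique (a sketch of the idea, not Berkeley DB's actual internal layout) is to encode (table, primary key) into byte strings whose lexicographic order matches the logical order, so a B-tree key-value store can serve both point lookups and ordered range scans.

```python
# Order-preserving key encoding for a relational layer on a key-value store.
import struct

def row_key(table_id, row_id):
    # Big-endian fixed-width integers keep numeric order under byte-wise
    # comparison, and keep each table's rows contiguous in the store.
    return struct.pack(">II", table_id, row_id)

def index_key(index_id, value, row_id):
    # Secondary index entry: value first, primary key as tie-breaker, so
    # scanning an index prefix yields rows in value order. (Production
    # encodings also escape/terminate variable-length values so distinct
    # keys can never collide or inter-sort incorrectly.)
    return struct.pack(">I", index_id) + value.encode("utf-8") + b"\x00" + struct.pack(">I", row_id)

# LIMIT n OFFSET m then becomes a cursor walk: seek to the first key with
# the right prefix, skip m entries, and return the next n.
```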

    Read the article

  • Manipulating XML Data in SQL Server

    When the average database developer is obliged to manipulate XML, either shredding it into relational format or creating it from SQL, it is often done at arm's length. That is a shame, since effective use of techniques that go beyond the basics can save much code.
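
    To illustrate the shred-to-rows idea in general terms, here is a sketch in Python's standard library rather than SQL Server's own xml methods (nodes()/value()); the sample document and columns are invented for illustration.

```python
# Shred an XML document into relational rows (tuples).
import xml.etree.ElementTree as ET

doc = """<orders>
  <order id="1"><customer>Ann</customer><total>19.99</total></order>
  <order id="2"><customer>Bob</customer><total>5.00</total></order>
</orders>"""

rows = [
    (order.get("id"), order.findtext("customer"), float(order.findtext("total")))
    for order in ET.fromstring(doc).iter("order")
]
print(rows)  # [('1', 'Ann', 19.99), ('2', 'Bob', 5.0)]
```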

    Read the article

  • Botnet Malware Sleeps Eight Months Before Activation, Child Concerns

    Daily Safety Check experts used a computer forensic analysis of a significant botnet that consisted of Carberp and SpyEye malware to come up with the details for their report. The analysis found that the botnet profiled the behavior of the slave computers it infected, similar to surveillance techniques used by law enforcement agencies, for an average of eight months. During the eight months, the botnet analyzed each computer's users and assigned ratings to certain activities to form a complete profile for each. Doing so allowed those behind the scheme to determine which were the most favora...

    Read the article
