Search Results

Search found 48586 results on 1944 pages for 'page performance'.

Page 51/1944 | < Previous Page | 47 48 49 50 51 52 53 54 55 56 57 58  | Next Page >

  • Make The Web Fast - The HAR Show: Capturing and Analyzing performance data with HTTP Archive format

    Need a flexible format to record, export, and analyze network performance data? Well, that's exactly what the HTTP Archive format (HAR) is designed to do! Even better, did you know that Chrome DevTools supports it? In this episode we'll take a deep dive into the format (as you'll see, it's very simple), and explore the many different ways it can help you capture and analyze your site's performance. Join +Ilya Grigorik and +Peter Lubbers to find out how to capture HAR network traces in Chrome, visualize the data via an online tool, share the reports with your clients and coworkers, automate the logging and capture of HAR data for your build scripts, and even adapt it to server-side analysis use cases! Yes, a rapid-fire session of awesome demos - see you there. From: GoogleDevelopers Views: 0 6 ratings Time: 00:00 More in Science & Technology
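
    For a sense of how simple the format is, here is a minimal sketch (TypeScript on Node; the file name "capture.har" is just an example) that summarizes a HAR capture exported from Chrome DevTools, using the standard HAR 1.2 fields (log.entries[].time is in milliseconds):

    import { readFileSync } from "fs";

    // Load a HAR capture exported from Chrome DevTools.
    const har = JSON.parse(readFileSync("capture.har", "utf8"));

    // Each entry records one request; entry.time is its total elapsed time in ms.
    for (const entry of har.log.entries) {
      console.log(`${entry.time.toFixed(0)} ms  ${entry.request.url}`);
    }

    // Aggregate time across all captured requests.
    const total = har.log.entries.reduce((sum: number, e: any) => sum + e.time, 0);
    console.log(`total: ${total.toFixed(0)} ms over ${har.log.entries.length} requests`);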

    Read the article

  • Retrieve Performance Data from SOA Infrastructure Database

    - by fip
    My earlier blog posting shows how to enable, retrieve and interpret BPEL engine performance statistics to aid performance troubleshooting. The strength of the BPEL engine statistics in EM is their per-request breakdown, but the statistics described in that posting have some limitations:

    - They are stored in memory instead of being persisted. To avoid memory overflow, the data go into a buffer of limited size; when the entries exceed that limit, old data is flushed out to make way for new statistics. Only the last X entries are kept, so the statistics from 5 hours ago may not be there anymore.
    - They only include latencies; they do not provide throughput.

    Fortunately, Oracle SOA Suite runs with the SOA Infrastructure database, and a lot of performance data is naturally persisted there. It is coarser grained than the in-memory BPEL statistics, but it has its own strength: it is persisted. Here I would like to offer examples of some basic SQL queries you can run against the infrastructure database of Oracle SOA Suite 11g to acquire performance statistics for a given period of time. You can run them immediately after you modify the date range to match your actual system.

    1. Asynchronous/one-way message incoming rates

    The following query shows the number of messages sent to one-way/async BPEL processes during a given time period, organized by process name and state:

    select composite_name composite, state, count(*) Count
    from dlv_message
    where receive_date >= to_timestamp('2012-10-24 21:00:00','YYYY-MM-DD HH24:MI:SS')
    and receive_date <= to_timestamp('2012-10-24 21:59:59','YYYY-MM-DD HH24:MI:SS')
    group by composite_name, state
    order by Count;

    2. Throughput of BPEL process instances

    The following query shows the number of synchronous and asynchronous process instances created during a given time period. It lists instances of all states, including unfinished and faulted ones, and the results include all composites across all SOA partitions:

    select state, count(*) Count, composite_name composite, component_name, componenttype
    from cube_instance
    where creation_date >= to_timestamp('2012-10-24 21:00:00','YYYY-MM-DD HH24:MI:SS')
    and creation_date <= to_timestamp('2012-10-24 21:59:59','YYYY-MM-DD HH24:MI:SS')
    group by composite_name, component_name, componenttype
    order by count(*) desc;

    3. Throughput and latencies of BPEL process instances

    This query builds on the previous one and provides more comprehensive information: not only throughput but also the maximum, minimum and average elapsed times of BPEL process instances:

    select composite_name Composite, component_name Process, componenttype, state, count(*) Count,
      trunc(Max(extract(day from (modify_date-creation_date))*24*60*60
              + extract(hour from (modify_date-creation_date))*60*60
              + extract(minute from (modify_date-creation_date))*60
              + extract(second from (modify_date-creation_date))),4) MaxTime,
      trunc(Min(extract(day from (modify_date-creation_date))*24*60*60
              + extract(hour from (modify_date-creation_date))*60*60
              + extract(minute from (modify_date-creation_date))*60
              + extract(second from (modify_date-creation_date))),4) MinTime,
      trunc(AVG(extract(day from (modify_date-creation_date))*24*60*60
              + extract(hour from (modify_date-creation_date))*60*60
              + extract(minute from (modify_date-creation_date))*60
              + extract(second from (modify_date-creation_date))),4) AvgTime
    from cube_instance
    where creation_date >= to_timestamp('2012-10-24 21:00:00','YYYY-MM-DD HH24:MI:SS')
    and creation_date <= to_timestamp('2012-10-24 21:59:59','YYYY-MM-DD HH24:MI:SS')
    group by composite_name, component_name, componenttype, state
    order by count(*) desc;

    4. Combining it all

    Now let's combine all three queries and parameterize the start and end timestamps to make the script a bit more robust. The following script prompts for the start and end time before querying the database:

    accept startTime prompt 'Enter start time (YYYY-MM-DD HH24:MI:SS)'
    accept endTime prompt 'Enter end time (YYYY-MM-DD HH24:MI:SS)'

    Prompt "==== Rejected Messages ====";
    REM 2012-10-24 21:00:00
    REM 2012-10-24 21:59:59
    select count(*), composite_dn
    from rejected_message
    where created_time >= to_timestamp('&&startTime','YYYY-MM-DD HH24:MI:SS')
    and created_time <= to_timestamp('&&endTime','YYYY-MM-DD HH24:MI:SS')
    group by composite_dn;

    Prompt " ";
    Prompt "==== Throughput of one-way/asynchronous messages ====";
    select state, count(*) Count, composite_name composite
    from dlv_message
    where receive_date >= to_timestamp('&&startTime','YYYY-MM-DD HH24:MI:SS')
    and receive_date <= to_timestamp('&&endTime','YYYY-MM-DD HH24:MI:SS')
    group by composite_name, state
    order by Count;

    Prompt " ";
    Prompt "==== Throughput and latency of BPEL process instances ====";
    select state, count(*) Count,
      trunc(Max(extract(day from (modify_date-creation_date))*24*60*60
              + extract(hour from (modify_date-creation_date))*60*60
              + extract(minute from (modify_date-creation_date))*60
              + extract(second from (modify_date-creation_date))),4) MaxTime,
      trunc(Min(extract(day from (modify_date-creation_date))*24*60*60
              + extract(hour from (modify_date-creation_date))*60*60
              + extract(minute from (modify_date-creation_date))*60
              + extract(second from (modify_date-creation_date))),4) MinTime,
      trunc(AVG(extract(day from (modify_date-creation_date))*24*60*60
              + extract(hour from (modify_date-creation_date))*60*60
              + extract(minute from (modify_date-creation_date))*60
              + extract(second from (modify_date-creation_date))),4) AvgTime,
      composite_name Composite, component_name Process, componenttype
    from cube_instance
    where creation_date >= to_timestamp('&&startTime','YYYY-MM-DD HH24:MI:SS')
    and creation_date <= to_timestamp('&&endTime','YYYY-MM-DD HH24:MI:SS')
    group by composite_name, component_name, componenttype, state
    order by count(*) desc;

    Read the article

  • Performance tuning of tabular data models in Analysis Services

    - by Greg Low
    More and more practical information about working with tabular data models is appearing as more and more sites get deployed. At SQL Down Under, we've already helped quite a few customers move to tabular data models in Analysis Services and have started to collect quite a bit of information on what works well (and what doesn't) in terms of performance of these models. We've also been running a lot of training on tabular data models. It was great to see a whitepaper on the performance of these models released today. Performance Tuning of Tabular Models in SQL Server 2012 Analysis Services was written by John Sirmon, Greg Galloway, Cindy Gross and Karan Gulati. You'll find it here: http://msdn.microsoft.com/en-us/library/dn393915.aspx

    Read the article

  • Gartner: Magic Quadrant for Corporate Performance Management Suites, 2012

    - by Mike.Hallett(at)Oracle-BI&EPM
    Hyperion clearly leads the pack again in Gartner’s analysis of the CPM/EPM market. Gartner says: “Oracle is a Leader in CPM suites, with one of the most widely distributed solutions in the market. Oracle Hyperion Enterprise Performance Management is recognized by CFOs worldwide. The vendor has a well-established partner channel, with both large and smaller CPM SI specialists. Hyperion skills are also plentiful among the independent consultant community, given the well-established products.” “Oracle continues to innovate, bringing incremental improvements across the portfolio as well as new financial close management, disclosure management and predictive planning additions. Furthermore, Oracle has improved integration of Hyperion with the Oracle BI platform, and has improved planning performance, enabling Hyperion Planning to use the Oracle Exalytics In-Memory Machine.” For the full article, see here: Gartner: Magic Quadrant for Corporate Performance Management Suites, 2012. And if you missed it, here is also the MQ for BI: Gartner: Magic Quadrant for Business Intelligence Platforms, 2012

    Read the article

  • Common JavaScript mistakes that severely affect performance?

    - by melee
    At a recent UI/UX meetup I attended, I gave some feedback on a website that used JavaScript (jQuery) for its interaction and UI - it was fairly simple animation and manipulation, but the performance on a decent computer was horrific. It reminded me of a lot of sites/programs I've seen with the same issue, where certain actions just absolutely destroy performance. It is mostly in (or at least more noticeable in) situations where JavaScript is almost serving as a Flash replacement. This is in stark contrast to some of the webapps I have used that have far more JavaScript and functionality but run very smoothly (COGNOS by IBM is one I can think of off the top of my head). I'd love to know some of the common issues that aren't considered when developing JS and that will kill the performance of a site.
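
    One concrete example of the kind of mistake in question - not from the original post, just an illustrative sketch (TypeScript, assuming elements with class "item") - is interleaving DOM reads and writes in a loop, which forces the browser to recalculate layout on every iteration:

    const items = document.querySelectorAll<HTMLElement>(".item");

    // Slow: each iteration reads a layout property (offsetWidth) right after the
    // previous iteration's style write, forcing a synchronous reflow per pass.
    items.forEach((el) => {
      el.style.width = `${el.parentElement!.offsetWidth / 2}px`;
    });

    // Faster: do the read once, then batch all the writes; layout settles once.
    const half = (items[0]?.parentElement?.offsetWidth ?? 0) / 2;
    items.forEach((el) => {
      el.style.width = `${half}px`;
    });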

    Read the article

  • Maximize Performance and Availability with Oracle Data Integration

    - by Tanu Sood
    Alert: Oracle is hosting the 12c Launch Webcast for Oracle Data Integration and Oracle GoldenGate on Tuesday, November 12 (tomorrow) to discuss the new capabilities in detail and share customer perspectives. Hear directly from customer experts and executives from SolarWorld Industries America, British Telecom and Rittman Mead, and get your questions answered live by product experts. Register for this complimentary webcast today and join in the discussion tomorrow.

    Author: Irem Radzik, Senior Principal Product Director, Oracle

    Organizations that want to use IT as a strategic point of differentiation prefer Oracle's complete application offering to drive better business performance and optimize their IT investments. These enterprise applications are at the center of business operations and contain critical data that needs to be accessed continuously, as well as analyzed and acted upon in a timely manner. These systems also need to operate with high performance and availability, which means analytical functions should not degrade application performance, and even system maintenance and upgrades should not interrupt availability.

    Oracle's data integration products - Oracle Data Integrator, Oracle GoldenGate, and Oracle Enterprise Data Quality - provide the core foundation for bringing data from various business-critical systems together to gain a broader, unified view. Going beyond third-party offerings, Oracle's data integration products facilitate real-time reporting for Oracle Applications without impacting application performance, and provide the ability to upgrade and maintain the system without taking downtime. Oracle GoldenGate is certified for Oracle Applications, including E-Business Suite, Siebel CRM, PeopleSoft, and JD Edwards, for moving transactional data in real time to a dedicated operational reporting environment. This solution allows application users to offload resource-heavy queries to the reporting instance(s), reducing CPU utilization, improving OLTP performance, and extending the lifetime of existing IT assets. In addition, having a dedicated reporting instance with up-to-the-second transactional data allows optimizing the reporting environment and even decreasing costs, as GoldenGate can move only the required data from expensive mainframe environments to cost-efficient open-system platforms.

    With its real-time data replication capabilities, GoldenGate is also certified to enable application upgrades and database/hardware/OS migration without impacting business operations. GoldenGate is certified for Siebel CRM, Communications Billing and Revenue Management, and JD Edwards for supporting zero-downtime upgrades to the latest application version: GoldenGate synchronizes a parallel, upgraded system with the old version in real time, thus enabling continuous operations during the process. Oracle GoldenGate is also certified for minimal-downtime database migrations for Oracle E-Business Suite and other key applications. GoldenGate's solution also minimizes risk by offering a failback option after the switchover to the new environment. Furthermore, Oracle GoldenGate's bidirectional active-active data replication is certified for Oracle ATG Web Commerce to enable geographic load balancing and high availability for ATG customers.

    For enabling better business insight, Oracle Data Integration products power Oracle BI Applications with high-performance bulk and real-time data integration. Oracle Data Integrator (ODI) is embedded in Oracle BI Applications version 11.1.1.7.1 and helps to integrate data end-to-end across the full BI Applications architecture, supporting capabilities such as data lineage, which helps business users trace report data back to its source. ODI is integrated with Oracle GoldenGate, giving Oracle BI Applications customers the option to use real-time transactional data in analytics, and to do so non-intrusively. By using Oracle GoldenGate with the latest release of Oracle BI Applications, organizations not only leverage fresh data in analytics, but also eliminate the need for an ETL batch window and minimize the impact on OLTP systems. You can learn more about the latest 12c release of the Oracle Data Integration products in our upcoming launch webcast, and access the application-specific free resources in the new Data Integration for Oracle Applications Resource Center.

    Read the article

  • The convergence of Risk and Performance Management

    Historically, the market has viewed Enterprise Performance Management (EPM) and Governance, Risk and Compliance (GRC) as separate processes and solutions. But these two worlds are coming together – in fact industry analyst firms such as AMR Research believe that by the end of 2009, risk management will be part of every EPM discussion. Tune into this conversation with John O'Rourke, VP of Product Marketing for Oracle Enterprise Performance Management Solutions, and Karen dela Torre, Senior Director of Product Marketing for Financial Applications to learn how EPM and GRC are converging, what the integration points are, and what Oracle is doing to help customers perform more effective risk and performance management.

    Read the article

  • How to recognize my performance plateau?

    - by Dat Chu
    A performance plateau happens right after one becomes "adequately" proficient at a certain task. For example: you learn a new language/framework/technology, you get progressively better, and then all of a sudden you realize that you have spent quite some time on this technology and you are not getting better at it. As a programmer who is conscious of my performance/knowledge/skill, how do I detect when I am on a performance plateau? What can I do to jump out of it (and keep going upward)?

    Read the article

  • Google I/O 2010 - Architecting for performance with GWT

    GWT 201. Joel Webber, Adam Schuck. Modern web applications are quickly evolving to an architecture that has to account for the performance characteristics of the client, the server, and the global network connecting them. Should you render HTML on the server, build DOM structures with JS in the browser, or both? This session discusses this, as well as several other key architectural considerations to keep in mind when building your Next Big Thing. For all I/O 2010 sessions, please go to code.google.com From: GoogleDevelopers Views: 9 1 ratings Time: 01:01:09 More in Science & Technology

    Read the article

  • Understanding the 'High Performance' meaning in Extreme Transaction Processing

    - by kyap
    Despite my previous blog entries on SOA/BPM and Identity Management, the domain I'm most passionate about is definitely Extreme Transaction Processing, commonly called XTP. I came across XTP back in 2007, while I was still an FMW Product Manager in EMEA. At that time Oracle acquired a company called Tangosol, which owned a unique product called Coherence that we renamed Oracle Coherence. Beyond this innovative renaming of the product, to be honest, I didn't know much about it, except that it was a "distributed in-memory cache for Extreme Transaction Processing"... not very helpful still. In general, when people don't fully understand a technology or a concept, they tend to find shortcuts, correct or not, to justify their lack of understanding... and of course I was part of this category of individuals. And the shortcut was "Oracle Coherence Cache helps to improve performance". An excellent marketing slogan... but still not very meaningful. By chance I was able to get away quickly from that group in July 2007 at Thames Valley Park (UK), after I attended one of the most interesting workshops of my 10-year career at Oracle, delivered by Brian Oliver.

    The biggest mistake I made was to assume that performance improvement with Coherence was about response time. That seemed legitimate at the time, because after all, caches help reduce latency on cached data access, and hence reduce response time. But like all caches, you need to define caching and expiration policies, think about the cache-miss strategy, and most of the time partially rewrite your application in order to work with the cache. As a result, the expected benefit vanishes... so, not very useful then?

    The key mistake was my perception of, or obsession with, how performance improvement should be driven, and I strongly believe this is still a common problem for most developers. We all know that the performance of a system is generally presented as Capacity (or Throughput), with two important dimensions, Speed (response time) and Volume (load):

    Capacity (TPS) = Volume (T) / Speed (S)

    To increase Capacity, we can either reduce the Speed (in terms of response time) or increase the Volume. However, we tend to focus only on the Speed dimension, perhaps because it is more concrete and tangible to measure, and nicer to present to management because it has a direct impact on the end-user experience. On the other hand, we assume the Volume can be addressed by the underlying hardware or software stack: if we need more capacity (scale-out), we just add more hardware or software. Unfortunately, reality proves that IT is never as ideal as we assume...

    The challenge with the Speed-improvement approach is that it is generally difficult and costly to make things that are already fast... faster. And adding Coherence will not necessarily help either. Even if we manage it, the Capacity cannot increase forever, because... the Speed can be influenced by the Volume. In all traditional systems, an increase in Volume (transactions) will also increase the Speed (response time) at some point. The reason is simple: most of the time, the application logic was not designed to scale. As an example, if you have a while-loop in your application, it is natural that parsing 200 entries will require double the execution time of parsing 100 entries.

    If you need to "speed up" the execution, you can only upgrade your hardware (scale-up) with faster CPUs and/or network to reduce latency. That is technically limited and economically inefficient. And this is exactly where XTP and Coherence kick in. The primary objective of XTP is designing applications that can scale out to increase Volume, by applying coding techniques that keep the execution time as constant as possible, independent of the amount of runtime data being manipulated. It is not just about having an application run as fast as possible, but about having a much more predictable system, with constant response time that scales linearly, so we can increase throughput simply by adding more hardware in parallel. It is generally combined with the Low Latency Programming model, in which we optimize network usage as much as possible, from the programmatic angle (fewer network hops to complete a task) and/or from the hardware angle (faster network equipment). In this picture, Oracle Coherence can be considered a software-level XTP enabler, via its Distributed Cache, because it can guarantee:
    - constant data-object access time, independent of the number of objects and the Coherence cluster size
    - data-object distribution by affinity for in-memory data grouping
    - in-place data processing for parallel execution

    To summarize, Oracle Coherence is indeed useful for improving your application performance, just not in the way we commonly think. It's not about the Speed itself, but about the overall Capacity under extreme load while keeping the Speed consistent. In the future I will keep adding new blog entries around this topic, with some sample-code experiences I have captured over the last few years. In the meanwhile, if you want to know more about Oracle Coherence, I strongly suggest you start by checking how our worldwide customers are using Oracle Coherence, and then start playing with the product through our tutorial. Have fun!
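
    To make the Capacity formula concrete, here is a small illustrative sketch (TypeScript, with made-up numbers):

    // Capacity (TPS) = Volume (in-flight transactions) / Speed (response time, s)
    function capacityTps(volume: number, responseTimeSec: number): number {
      return volume / responseTimeSec;
    }

    // Scale-up thinking: squeeze the response time to gain capacity.
    console.log(capacityTps(100, 0.5));  // 200 TPS
    console.log(capacityTps(100, 0.25)); // 400 TPS - if you can even get there

    // XTP thinking: hold the response time constant and scale out the volume.
    console.log(capacityTps(400, 0.5));  // 800 TPS, with 4x the parallel volume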

    Read the article


  • ASP.NET Performance Framework

    At the start of the year, I finished a 5-part series on ASP.NET performance, focusing largely on generic ways to improve website performance rather than specific ASP.NET performance tricks. The series covered a number of topics, including merging and shrinking files, using modules to remove unnecessary headers and set caching headers, enabling cache busting and automatically generating cache-busted references in CSS, as well as an introduction to nginx. Yesterday I managed to put a number...
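
    As an aside, the cache-busting technique mentioned there is commonly implemented along these lines (a minimal Node/TypeScript sketch; the helper name and paths are invented for illustration, not taken from the series):

    import { createHash } from "crypto";
    import { readFileSync } from "fs";

    // Embed a content hash in the asset reference so browsers can cache the file
    // "forever" yet pick up a fresh copy the moment its contents change.
    function cacheBustedUrl(filePath: string, publicPath: string): string {
      const hash = createHash("md5").update(readFileSync(filePath)).digest("hex");
      return `${publicPath}?v=${hash.slice(0, 8)}`;
    }

    console.log(cacheBustedUrl("site.css", "/static/site.css"));
    // e.g. /static/site.css?v=3f2a9c01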

    Read the article

  • How many custom tabs can I add to a Facebook page?

    - by Maxi Ferreira
    I'm building a web application to create custom tabs and add them to the user's Facebook fan pages. I know how the process of "installing" FB apps into FB pages so they show up as Page Tabs works, but the problem is the client wants to allow the user to create unlimited Page Tabs for a single FB Page. So I basically have two questions. 1 - Can I reuse a single FB App so it is included in the same page several times? If so, is there a way to know the "id" of that Page Tab? If I have my FB App look for the tab content at http://www.mywebapp.com/tab/, I know I get a signed_request with the App ID and the Page ID, but if that same App is installed several times on the same Page, I don't know which Tab the user has clicked on. I know it's a little bit messy, and I don't think there's a way to do this. So my next question is probably more relevant. 2 - Is there a limit on how many Tabs I can add to a single Facebook page? That way, if there's a limit of, say, 12 Tabs, I can create 12 FB Tab Apps, store the IDs, and then I know which Tab of which Page the user is currently viewing. Thanks in advance! Maxi
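
    For context, the signed_request mentioned above follows Facebook's documented "signature.payload" format (both parts base64url-encoded, the signature an HMAC-SHA256 over the payload using the app secret). A decoding sketch in TypeScript on Node follows; verify the details against the current Facebook docs:

    import { createHmac } from "crypto";

    // signed_request = "<base64url HMAC-SHA256 signature>.<base64url JSON payload>"
    function decodeSignedRequest(signedRequest: string, appSecret: string): any {
      const [sig, payload] = signedRequest.split(".");
      const expected = createHmac("sha256", appSecret)
        .update(payload)
        .digest("base64url");
      if (sig !== expected) throw new Error("invalid signature");
      return JSON.parse(Buffer.from(payload, "base64url").toString("utf8"));
    }

    // For a Page Tab app the payload carries page.id, identifying the Page -
    // but, as the question notes, not a per-tab id.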

    Read the article

  • GlusterFS - high load 90-107% CPU

    - by Sara
    I have tried everything I can think of to improve performance and fix this problem with Gluster. I serve web pages, PHP files, images, etc. from Gluster. The problem appeared after updating from 3.3.0 to 3.3.1; I tried 3.4 thinking it might fix it, but the problem is still the same. I temporarily have 1 brick, but before the upgrade everything was fine. Config:

    Volume Name: ...
    Type: Replicate
    Volume ID: ...
    Status: Started
    Number of Bricks: 0 x 2 = 1
    Transport-type: tcp
    Bricks:
    Brick1: ...:/...
    Options Reconfigured:
    cluster.stripe-block-size: 128KB
    performance.cache-max-file-size: 100MB
    performance.flush-behind: on
    performance.io-thread-count: 16
    performance.cache-size: 256MB
    auth.allow: ...
    performance.cache-refresh-timeout: 5
    performance.write-behind-window-size: 1024MB

    I use FUSE. I suspect "maybe the high load is due to the unavailable brick", but I can't find information on how to safely change the type of the volume. Maybe you know how?

    Read the article

  • How to send web browser a loading page, then some time later a results page

    - by Kurt W. Leucht
    I've wasted at least a half day of my company's time searching the Internet for an answer and I'm getting wrapped around the axle here. I can't figure out the difference between all the different technology choices (long polling, Ajax streaming, Comet, XMPP, etc.) and I can't get a simple hello world example working on my PC. I am running Apache 2.2 and ActivePerl 5.10.0. JavaScript is completely acceptable for this solution. All I want to do is write a simple Perl CGI script that, when accessed, immediately returns some HTML that tells the user to wait, or maybe sends an animated GIF. Then, without any user intervention (no mouse clicks or anything), I want the CGI script to replace the wait message or the animated GIF some time later with the actual results of their query. I know this is simple stuff and websites do it all the time using JavaScript, but I can't find a single working example that I can cut and paste onto my machine that will work in Perl. Here is my simple hello world example, compiled from various Internet sources, but it doesn't seem to work. When I refresh this Perl CGI script in my web browser it prints nothing for 5 seconds, then it prints the PLEASE BE PATIENT web page, but not the results web page. So the Ajax XMLHttpRequest stuff obviously isn't working right. What am I doing wrong?

    #!C:\Perl\bin\perl.exe
    use CGI;
    use CGI::Carp qw/fatalsToBrowser warningsToBrowser/;

    sub Create_HTML {
        my $html = <<EOHTML;
    <html>
    <head>
    <meta http-equiv="pragma" content="no-cache" />
    <meta http-equiv="expires" content="-1" />
    <script type="text/javascript">
    var xmlhttp = false;
    /*@cc_on @*/
    /*@if (@_jscript_version >= 5)
    // JScript gives us conditional compilation, so we can cope with old IE
    // versions and security-blocked creation of the objects.
    try {
      xmlhttp = new ActiveXObject("Msxml2.XMLHTTP");
    } catch (e) {
      try {
        xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
      } catch (E) {
        xmlhttp = false;
      }
    }
    @end @*/
    if (!xmlhttp && typeof XMLHttpRequest != 'undefined') {
      try {
        xmlhttp = new XMLHttpRequest();
      } catch (e) {
        xmlhttp = false;
      }
    }
    if (!xmlhttp && window.createRequest) {
      try {
        xmlhttp = window.createRequest();
      } catch (e) {
        xmlhttp = false;
      }
    }
    </script>
    <title>Ajax Streaming Connection Demo</title>
    </head>
    <body>
    Some header text.
    <p>
    <div id="response">PLEASE BE PATIENT</div>
    <p>
    Some footer text.
    </body>
    </html>
    EOHTML
        return $html;
    }

    my $cgi = new CGI;
    print $cgi->header;
    print Create_HTML();
    sleep(5);
    print "<script type=\"text/javascript\">\n";
    print "\$('response').innerHTML = 'Here are your results!';\n";
    print "</script>\n";

    Read the article

  • Google is displaying "Translate this page" based on a previously registered domain's inbound links

    - by crnm
    I recently started a new project on a newly registered generic TLD domain. As soon as Google started indexing the site, it displayed a "Translate this page" link in SERPs, offering to translate the page from the language the site actually uses into the language of a small Eastern European country. I tried everything to prevent this: language meta headers and attributes, localisation through Google Webmaster Tools... all to no avail - nothing helped. After a couple of weeks I spotted dozens of inbound links popping up in Google Webmaster Tools, all coming from that small Eastern European country, pointing at sub-pages that are not active anymore (they either send 404s or 301s to the main page) and that had been written in that other language. So the domain had been registered before, and by the looks of it, it picked up a lot of possibly spammy links in that language. I can't even ask the linking sites to remove the links, since the target pages no longer physically exist - they survive only in Google Webmaster Tools and/or Google's internal data. Now I'm at a loss about what to do. As my site is pretty new, it does not have many links pointing towards it in my targeted language, so those are probably not enough to convince Google to attach the right language to it, while Google ignores all other signals about the page language. I'm also unsure whether I should use the "disavow" tool, or a reconsideration request... or what else to do about this miserable state. I have never used these tools before, so I don't have any experience with them. Somehow I have to convince Google about the right language of the page, and also not to count all those historical links from the previous owner. (The domain had been deleted without any traces in Google before I registered it.) Has anyone here ever dealt with a similar "Translate this page" problem? (I've also looked at this thread: How can I prevent Google mistakenly offering to translate a page? but didn't find a solution there.)

    Read the article

  • Deletes that Split Pages and Forwarded Ghosts

    - by Paul White
    Can DELETE operations cause pages to split? Yes. It sounds counter-intuitive on the face of it: deleting rows frees up space on a page, and page splitting occurs when a page needs additional space. Nevertheless, there are circumstances when deleting rows causes them to expand before they can be deleted. The mechanism at work here is row versioning - under a row-versioning isolation level, each affected row has to carry an extra 14-byte versioning tag, which can grow a row beyond the free space left on its page (extract from Books Online below). The relationship between row-versioning isolation levels and page splits is...(read more)

    Read the article

  • Page Lifecycle - Using FindControl to reference a control created programmatically during page load

    - by Jay Wilde
    Hi, I'm creating some text boxes on my form programmatically, which I need to reference later using FindControl. I've put the FindControl call in the Page_Load method after the code that creates them, but I get an error: "Object reference not set to an instance of an object." I assume this is because the textbox controls are not created until later in the lifecycle and therefore cannot be referenced from within Page_Load. Can someone advise where in my code-behind I would need to place the FindControl call so that it can find these programmatically created text boxes?

    Read the article

  • Page uploads data again on page refresh in ASP.NET

    - by Etienne
    For some reason, when the user clicks on the submit button and then refreshes the page, the same data gets uploaded again to my SQL Server 2005 database. I do not want this to happen. Why is it happening? I am making use of a SQL Data Source. My code:

    Try
        'See if user typed the correct code.
        If Me.txtSecurity.Text = Session("Captcha") Then
            If Session("NotifyMe") = "Yes" Then
                SendEmailNS()
            End If
            RaterRate.Insert()
            RaterRate.Update()
            DisableItems()
            lblResultNS.Text = "Thank you for leaving a comment"
            LoadCompanyList()
            LoadRateRecords()
            txtCommentNS.Text = ""
            txtSecurity.Text = ""
            lblResultNS.Focus()
        Else
            Session("Captcha") = GenerateCAPTCHACode()
            txtSecurity.Text = ""
            txtSecurity.Focus()
            Validator10.Validate()
        End If
    Catch ex As Exception
        lblResultNS.Visible = True
        lblResultNS.Text = ex.Message.ToString
        lblResultNS.Focus()
    End Try

    Read the article

  • I Cannot Use Session In Page_Load And I Get The Error Below

    - by LostLord
    hi my dear friends ....
    Why do I get this error: (Object reference not set to an instance of an object.) when I put this code in my Page_Load?

    protected void Page_Load(object sender, EventArgs e)
    {
        BackEndUtils.OverallLoader();
        string Teststr = Session["Co_ID"].ToString();
    }

    This session variable is set when the user logs in to my web site, and the session works in other areas... thanks for your attention

    Read the article

  • MVVM using Page Navigation On Windows Mobile 7

    - by anon
    The navigation framework in Windows Mobile 7 is a cut-down version of what is in Silverlight. You can only navigate to a Uri, not pass in a view. Since the NavigationService is tied to the view, how do people get this to fit into MVVM? For example:

    public class ViewModel : IViewModel
    {
        private IUnityContainer container;
        private IView view;

        public ViewModel(IUnityContainer container, IView view)
        {
            this.container = container;
            this.view = view;
        }

        public ICommand GoToNextPageCommand { get { ... } }

        public IView View { get { return this.view; } }

        public void GoToNextPage()
        {
            // What do I put here?
        }
    }

    public class View : PhoneApplicationPage, IView
    {
        ...
        public void SetModel(IViewModel model) { ... }
    }

    I am using the Unity IoC container. I have to resolve my view model first and then use the View property to get hold of the view and then show it. However, using the NavigationService, I have to pass in a view Uri. There is no way for me to create the view model first. Is there a way to get around this?

    Read the article

  • Django "Page not found" error page shows only one of two expected urls

    - by Frank V
    I'm working with Django, admittedly for the first time doing anything real. The URL config looks like the following:

    urlpatterns = patterns('my_site.core_prototype.views',
        (r'^newpost/$', 'newPost'),
        (r'^$', 'NewPostAndDisplayList'), # capture nothing...
        # more here... - perhaps the perma-links?
    )

    This is in an app's urls.py, which is loaded from the project's urls.py via:

    urlpatterns = patterns('',
        # only app for now.
        (r'^$', include('my_site.core_prototype.urls')),
    )

    The problem is, when I receive a 404 attempting to use newpost, the error page only shows the ^$ pattern - it seems to ignore the newpost pattern... I'm sure the solution is probably stupid-simple, but right now I'm missing it. Can someone help get me on the right track?

    Read the article

  • What performance degradation to expect with Nginx over raw Gunicorn+Gevent?

    - by bouke
    I'm trying to get a very high-performing web server setup for handling long-polling, websockets, etc. I have a VM running (Rackspace) with 1GB RAM / 4 cores. I've set up a very simple gunicorn 'hello world' application with (async) gevent workers. In front of gunicorn, I put nginx with a simple proxy to gunicorn. Using ab, gunicorn spits out 7700 requests/sec, where nginx only does 5000 requests/sec. Is such a performance degradation expected?

    Hello world:

    #!/usr/bin/env python
    def application(environ, start_response):
        start_response("200 OK", [("Content-type", "text/plain")])
        return [ "Hello World!" ]

    Gunicorn:

    gunicorn -w8 -k gevent --keep-alive 60 application:application

    Nginx (stripped):

    user www-data;
    worker_processes 4;
    pid /var/run/nginx.pid;

    events {
        worker_connections 768;
    }

    http {
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 65;
        types_hash_max_size 2048;

        upstream app_server {
            server 127.0.0.1:8000 fail_timeout=0;
        }

        server {
            listen 8080 default;
            keepalive_timeout 5;
            root /home/app/app/static;

            location / {
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_pass http://app_server;
            }
        }
    }

    Benchmark (results: nginx TCP, nginx UNIX, gunicorn):

    ab -c 32 -n 12000 -k http://localhost:[8000|8080]/

    Running gunicorn over a Unix socket gives somewhat higher throughput (5500 r/s), but it still doesn't match raw gunicorn's performance.

    Read the article

  • Printer Page Size Problem

    - by mRt
    I am trying to set a custom paper size by doing:

    Printer.Height = 2160
    Printer.Width = 11900

    But it doesn't seem to have any effect. After setting these values, I query them back and get the defaults. And this:

    Printer.PaperSize = 256

    returns an error... Any ideas?

    Read the article
