Search Results

Search found 7821 results on 313 pages for 'high dpi'.


  • Very High CPU usage (100%) from just browsing the Web

    - by cole
    I tested on Firefox and Chromium. I'm at 100% CPU while loading pages, which makes them load slowly, and even with no application running I'm at 40% CPU (at least). Everything is slow, basically. I'm also already on Ubuntu Classic, so I'm not using Unity. Should I go to 10.04? Is that more stable? On Windows this wasn't an issue. I have a dual boot with XP on a 2.4 GHz Intel Celeron with 768 MB RAM and an Nvidia 6200 graphics card. I heard 10.04 was the most stable. Any suggestions?

    Read the article

  • High I/O wait after login

    - by Jackson Tan
    I've noticed that the ubuntuone-syncdaemon hogs the hard disk every time I log in to Ubuntu (10.04). This takes up to two or three minutes, which makes Ubuntu insufferably slow. Opening Firefox is okay, but the browser is constantly greyed out and lags horribly. Given that I often shut down my laptop when I don't use it (about 3 to 4 times a day), this makes Ubuntu lose much of its lustre because of its long boot time. Is this normal behaviour for Ubuntu One? Is it intended? Note that I've actually posted this in the forums here, but I received only a few replies.

    Read the article

  • Bug once in a while, but high priority

    - by Shirish11
    I am working on a CNC (computer numerical control) project which cuts shapes into metal with the help of a laser. My problem is that once in a while (1-2 times in 20-odd days) the cutting goes wrong, or not according to what is set. This causes a loss, so the client is not very happy about it. I tried to find the cause by adding log files, debugging, and repeating the same environment, but it won't reproduce. A pause-and-continue operation will again make it run smoothly, with the bug reappearing later. How do I tackle this issue? Should I state it as a hardware problem?

    Read the article

  • Learn How to Create High-Converting Landing Pages - Part 4

    Once you have placed a call to action through your pay-per-click ads on the landing pages, make sure that, as part of your SEO plan, you include the paid keyword several times in your landing pages; this leads to well-optimized pages for your search engine marketing efforts. As a smart SEO expert, it is important to include the keywords in a way that cannot be skipped by your readers, because if they are missed, it will affect your PPC campaign management and search engine optimization results.

    Read the article

  • SEO title tag and earning a high rank on search engines [closed]

    - by Josh White
    Possible Duplicate: What are the best ways to increase your site's position in Google? One of the most basic SEO techniques is including an accurate description of under 64 characters in the title tags of each page. I was wondering whether it is considered ethical SEO to set up the title based on a search keyword. So if the user searches for 'apple pictures', for example, then the title of the webpage would be 'apple pictures'. Note that the search keywords accurately describe my website contents, because the title will always relate to the body of the webpage, and 85-90% of the terms searched for will return corresponding results. Is this considered good SEO practice, and is it ethical? Also, can someone explain the idea behind "linking"? I read somewhere that it is good SEO practice to link to other websites, and it is good when other websites link to you. Does this mean that I should include as many links to other websites as possible (that are somehow relevant to my website's goal)? Also, if I joined forums/services and posted my website URL in the signature, would that still be considered other websites linking to me?

    Read the article

  • High Traffic Web Host Solution? [duplicate]

    - by Calsy
    Possible Duplicate: How to find web hosting that meets my requirements? I'm currently shopping around for a web host for a website we are hoping to release in the near future. This is my first real step into this area, and I'm just wondering what I should be looking for. It is an ASP.NET MVC website with an MS SQL Server backend. I need to know that the server will not buckle if traffic booms. Currently I'm looking at a managed dedicated server from SingleHop. Does anyone know anything better, or have any advice?

    Read the article

  • How to do pragmatic high-level/meta-programming?

    - by Lenny222
    Imagine you have implemented the creation of a nice path-based star shape in Lisp. Then you discover Processing and you re-implement the whole thing, because Processing/Java/Java2D is different. Then you want to tinker with libcinder, so you port your code to C++/Cairo. You are (re)writing a lot of boilerplate code, while the actual requirement "create a star shape" (or "create a path, moveto x y, lineto x y") has not changed. What are the options for encapsulating those implementation details? Some sort of pragmatic meta-programming? Maybe an expert system? How would you define your core business logic in as language-independent a way as possible?
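
    One way to read the question is as a call for a tiny, language-independent path description plus small per-backend adapters. Below is a minimal JavaScript sketch of that idea; the starPath(), drawOnCanvas() and toSvgPath() names, and the command-list format, are illustrative assumptions rather than anything from the original post.

    //Describe the shape as data: a list of ["moveto"/"lineto"/"close", x, y]
    //commands, independent of any particular drawing library.
    function starPath(cx, cy, outer, inner, points) {
        var cmds = [];
        for (var i = 0; i < points * 2; i++) {
            var r = (i % 2 === 0) ? outer : inner;
            var a = (Math.PI / points) * i - Math.PI / 2;
            var x = cx + r * Math.cos(a);
            var y = cy + r * Math.sin(a);
            cmds.push(i === 0 ? ["moveto", x, y] : ["lineto", x, y]);
        }
        cmds.push(["close"]);
        return cmds;
    }

    //Backend adapter for an HTML5 canvas 2D context.
    function drawOnCanvas(ctx, cmds) {
        ctx.beginPath();
        cmds.forEach(function(cmd) {
            if (cmd[0] === "moveto") ctx.moveTo(cmd[1], cmd[2]);
            else if (cmd[0] === "lineto") ctx.lineTo(cmd[1], cmd[2]);
            else ctx.closePath();
        });
        ctx.stroke();
    }

    //Backend adapter that emits an SVG path string from the same data.
    function toSvgPath(cmds) {
        return cmds.map(function(cmd) {
            if (cmd[0] === "moveto") return "M " + cmd[1] + " " + cmd[2];
            if (cmd[0] === "lineto") return "L " + cmd[1] + " " + cmd[2];
            return "Z";
        }).join(" ");
    }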

    Read the article

  • High level overview of Visual Studio Extensibility APIs

    - by Daniel Cazzulino
    If your head is dizzy with the myriad VS services and APIs, from EnvDTE to Shell.Interop, this should clarify a couple of things. First, a bit of background: EnvDTE (DTE for short, since that's the entry-point service you request from the environment) was originally an API intended to be used by macros. It's also called the automation API. Most of the time it is a simplified API that is easier to work with, but which doesn't expose 100% of what VS is capable of doing. It's also kind of the "rookie" way of doing VS extensibility (VSX for short), since most hardcore VSX devs sooner or later realize that they need to make the leap to the "serious" APIs. The "real" VSX APIs virtually always start with IVs, make heavy use of uint, ref/out parameters and HRESULTs. These are the APIs that have been evolving for years and years, and there is a lot of COM baggage. ...Read full article

    Read the article

  • Botnets Keep Spam Volume High: Google

    eSecurityPlanet: "Botnets cranked out more spam and larger individual files containing spam in the first quarter of this year, according to the latest report from Postini, Google's e-mail filtering and security service."

    Read the article

  • High quality/performance shared hosting (in northern Europe)

    - by Bente
    I work as a web developer at almost all levels. However, my typical customer is 1-5 people running some sort of consulting business. They have (or want) a web page with some kind of CMS so they can do most (or all) of the editing themselves. I normally opt for Concrete5 as my default CMS because it's the most user-friendly (and free) CMS I have found. My good recurring customers I host on my own server as a service, but I need a good host for the customers where I want to deliver a product and not be responsible for whatever may happen in the future. However, I still struggle with hosting! Experience shows that typical ~$1 shared hosting is way too slow to run Concrete5 smoothly, and a VPS is out of the question because I don't want to maintain it. So, where can I find a fast (from northern Europe), reliable shared host where I can put a site and not have to worry about the server going down or being unmaintained? I expect this should cost around $10-$20, but I'm open to all kinds of suggestions because different customers have different budgets.

    Read the article

  • Understanding the 'High Performance' meaning in Extreme Transaction Processing

    - by kyap
    Despite my previous blog entries on SOA/BPM and Identity Management, the domain I am most passionate about is definitely Extreme Transaction Processing, commonly called XTP. I came across XTP back in 2007, while I was still an FMW Product Manager in EMEA. At that time Oracle acquired a company called Tangosol, which owned a unique product called Coherence that we renamed Oracle Coherence. Beyond this innovative renaming of the product, to be honest, I didn't know much about it, except that it was a "distributed in-memory cache for Extreme Transaction Processing"... which was still not very helpful.

    In general, when people don't fully understand a technology or a concept, they tend to find some shortcut, correct or not, to justify their lack of understanding... and of course I was part of this category of individuals. My shortcut was "Oracle Coherence Cache helps to improve performance". An excellent marketing slogan... but still not very meaningful. By chance I got away from that group quickly, in July 2007 at Thames Valley Park (UK), after attending one of the most interesting workshops of my 10-year career at Oracle, delivered by Brian Oliver.

    The biggest mistake I made was to assume that performance improvement with Coherence was about response time. That seemed legitimate at the time, because after all, caches reduce latency on cached data access and hence reduce response time. But like all caches, you need to define caching and expiration policies, think about the cache-miss strategy, and most of the time partially rewrite your application to work with the cache. As a result, the expected benefit vanishes... so, not very useful then?

    The key mistake was my perception of, or obsession with, how performance improvement should be driven, and I strongly believe this is still a common problem for most developers. We all know that the performance of a system is generally expressed as Capacity (or Throughput), with two important dimensions, Speed (response time) and Volume (load):

    Capacity (TPS) = Volume (T) / Speed (S)

    To increase Capacity, we can either reduce Speed (in terms of response time) or increase Volume. However, we tend to focus only on reducing the Speed dimension, perhaps because it is more concrete and tangible to measure, and nicer to present to management, since there is a direct impact on the end-user experience. On the other hand, we assume that Volume can be addressed by the underlying hardware or software stack, so if we need more capacity we just scale out by adding more hardware or software. Unfortunately, reality proves that IT is never as ideal as we assume...

    The challenge with the Speed-improvement approach is that it is generally difficult and costly to make things that are already fast... faster, and adding Coherence will not necessarily help either. Even when we manage to do so, Capacity cannot increase forever, because... Speed can be influenced by Volume. Every system shows the same kind of performance curve: in a traditional system, increasing the Volume (transactions) will, at some point, also increase the Speed (response time). The reason is simple: most of the time the application logic was not designed to scale. As an example, if you have a while-loop in your application, it is natural to expect that parsing 200 entries will take double the execution time of parsing 100 entries.

    If you need to "speed up" the execution, you can only upgrade your hardware (scale up) with a faster CPU and/or network to reduce network latency. That is technically limited and economically inefficient. And this is exactly where XTP and Coherence kick in. The primary objective of XTP is to design applications that can scale out to increase Volume, by applying coding techniques that keep the execution time as constant as possible, independently of the amount of runtime data being manipulated. It is not just about having an application that runs as fast as possible, but about having a much more predictable system, with constant response time and linear scaling, so we can easily increase throughput by adding more hardware in parallel. It is generally combined with the Low Latency Programming model, where we try to optimize network usage as much as possible, either from a programmatic angle (fewer network hops to complete a task) and/or from a hardware angle (faster network equipment).

    In this picture, Oracle Coherence can be considered a software-level XTP enabler, via its Distributed Cache, because it can guarantee:
    - constant data-object access time, independent of the number of objects and the Coherence cluster size
    - data-object distribution by affinity, for in-memory data grouping
    - in-place data processing, for parallel execution

    To summarize, Oracle Coherence is indeed useful for improving your application's performance, just not in the way we commonly think. It is not about Speed itself, but about overall Capacity under extreme load while keeping Speed consistent. In the future I will keep adding blog entries around this topic, sharing some of the sample code and experiences I have captured over the last few years. In the meanwhile, if you want to know more about Oracle Coherence, I strongly suggest you start by checking how our worldwide customers are using Oracle Coherence, and then start playing with the product through our tutorial. Have fun!
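
    A toy JavaScript sketch of the scaling argument above (an illustrative assumption, not Coherence code): a linear scan over a list slows down as the data volume grows, while a keyed lookup stays roughly constant, which is the "constant execution time, independent of volume" property the article describes. The order data and repetition counts are made up for the example.

    //Build n orders both as a plain list and as a Map keyed by id.
    function buildOrders(n) {
        var list = [];
        var byId = new Map();
        for (var i = 0; i < n; i++) {
            var order = { id: i, amount: i % 100 };
            list.push(order);
            byId.set(order.id, order);
        }
        return { list: list, byId: byId };
    }

    //Crude timing helper.
    function timeIt(label, fn) {
        var t0 = Date.now();
        fn();
        console.log(label, (Date.now() - t0) + " ms");
    }

    [100000, 200000, 400000].forEach(function(n) {
        var data = buildOrders(n);
        var wanted = n - 1;
        //Linear scan: execution time grows with the number of entries.
        timeIt("scan   n=" + n, function() {
            for (var i = 0; i < 200; i++) {
                data.list.find(function(o) { return o.id === wanted; });
            }
        });
        //Keyed access: execution time is roughly independent of n.
        timeIt("lookup n=" + n, function() {
            for (var i = 0; i < 200; i++) {
                data.byId.get(wanted);
            }
        });
    });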

    Read the article

  • A High Level Comparison Between Oracle and SQL Server

    Organisations often employ a number of database platforms in their information system architecture. It is not uncommon to see medium to large sized companies using three to four different RDBMS packages. Consequently the DBAs these companies look for often ... [Read Full Article]

    Read the article

  • High Traffic Web Host Solution?

    - by Calsy
    Hi all, I'm currently shopping around for a web host for a website we are hoping to release in the near future. This is my first real step into this area, and I'm just wondering what I should be looking for. It is an ASP.NET MVC website with an MS SQL Server backend. I need to know that the server will not buckle if traffic booms. Currently I'm looking at a managed dedicated server from SingleHop. Does anyone know anything better, or have any advice? Thanks in advance.

    Read the article

  • Top Search Engine Optimization Strategies to Get High Rankings in Google For Your Business Websites

    Search engine optimization strategies are mainly used to push your website's rank into the top 10 positions. Having a higher rank in a search engine means that your business website will receive more traffic from Google and other search engines. This eventually also means that you will be able to attract more customers and make more money from your business, all at no additional cost to you!

    Read the article

  • Diagramming software with API allowing high customisation of shapes and actions

    - by jenson-button-event
    I am after something like Visio or Lucid: a relatively simple charting/diagramming tool for building tree-like structures from (my) pre-defined nodes (squares), but with a powerful API. Requirements:
    - limit the types of objects allowed to be dropped on the diagram
    - validate a model (e.g. a node of type A must precede a node of type B; a node title must be entered)
    - export a model
    - import a model
    Our domain is very specific, and it's a tool we'd want to offer to some of our power users. The $500 Visio licence isn't really within the business model. I'll put no constraints on framework or deployment (web or desktop) - is there anything out there?
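
    As a rough illustration of the "validate a model" requirement (type ordering plus a required title), here is a minimal JavaScript sketch; the node shape and the two rules are hypothetical examples, not taken from the question or from any particular diagramming product.

    //Validate a sequence of nodes against two example rules:
    //1) every node needs a title, 2) type "B" may only appear after a type "A".
    function validateModel(nodes) {
        var errors = [];
        var seenA = false;
        nodes.forEach(function(node, i) {
            if (!node.title || node.title.trim() === "") {
                errors.push("node " + i + ": title is required");
            }
            if (node.type === "A") seenA = true;
            if (node.type === "B" && !seenA) {
                errors.push("node " + i + ": a node of type A must precede a node of type B");
            }
        });
        return errors; //an empty array means the model is valid
    }

    //Example usage with a tiny in-memory model.
    console.log(validateModel([
        { type: "B", title: "Review" },  //invalid: no preceding type A node
        { type: "A", title: "" },        //invalid: missing title
        { type: "B", title: "Approve" }  //valid once a type A node has appeared
    ]));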

    Read the article

  • Xorg constant high CPU usage

    - by user157342
    CPU: AMD Athlon(tm) 64 X2 Dual Core Processor 4400+
    Kernel: 2.6.38-7.dmz.1-liquorix-amd64
    X server version: X.Org X Server 1.9.0
    OpenGL direct rendering: Yes
    OpenGL vendor: NVIDIA Corporation
    OpenGL renderer: GeForce 8400 GS/PCI/SSE2
    OpenGL version: 3.3.0 NVIDIA 270.41.06
    GCC version: 4.4.5
    Java version: 1.6.0_20
    Python version: 2.6.6
    GTK version: 2.22.0
    PyGTK version: 2.21.0
    Firefox version: Mozilla Firefox 5.0
    Ubuntu version: 10.10
    GNOME version: 2.32.0
    The issue is that the Xorg process always seems to be active, with over 6% CPU and 50+ MB RAM usage, which in turn keeps the fans blowing all the time.

    Read the article

  • Best Practices for High Volume CPA Import Operations with ebXML in B2B 11g

    - by Shub Lahiri, A-Team
    Background
    B2B 11g supports the ebXML messaging protocol, where multiple CPAs can be imported via command-line utilities. This note highlights one aspect of the best practices for CPA import when large numbers of CPAs, in excess of several hundred, need to be maintained within the B2B repository.

    Symptoms
    The import of a CPA is usually a 2-step process: first a soa.zip file is created with the b2bcpaimport utility from a CPA properties file, and then b2bimport is used to import it into the B2B repository. The commands are provided below:
    ant -f ant-b2b-util.xml b2bcpaimport -Dpropfile="<Path to cpp_cpa.properties>" -Dstandard=true
    ant -f ant-b2b-util.xml b2bimport -Dlocalfile=true -Dexportfile="<Path to soa.zip>" -Doverwrite=true
    Usually the first command completes fairly quickly regardless of the number of CPAs in the repository. However, as the number of trading partners in the repository grows, the time to complete the second command can rise to ~30 secs per operation. This can add up to a significant amount if hundreds of CPAs need to be imported into a production system within a limited downtime/maintenance window.

    Remedy
    In situations where a large number of entries must be imported, it is best to set up a staging environment and run the import of each individual CPA against an empty repository. Since this is done in an empty repository, the time taken should be reasonable. After all the partner profiles have been imported, a full repository export can be taken to capture the metadata for all the entries in one file. If this single file with all the partner entries is then imported into a loaded repository, the total time taken to import all the CPAs should see a dramatic reduction.

    Results
    Let us look at the numbers to see the benefit of this approach. With a pre-loaded repository of ~400 partners, each individual import takes ~30 secs. So, if we had to import another 100 partners, the individual entries would take ~50 minutes (100 times ~30 secs). On the other hand, if we prepare the repository export file of the same 100 partners in a staging environment beforehand, the import takes about ~5 mins. The total processing time for loading the metadata, especially in a production environment, can thus be shortened by almost a factor of 10.

    Summary
    The following diagram summarizes the entire approach and process.

    Acknowledgements
    The material posted here has been compiled with the help of the B2B Engineering and Product Management teams.

    Read the article

  • Ubuntu One has high CPU usage but no syncing

    - by Peter
    Over the weekend I updated my computer to Windows 8. Up to then Ubuntu One had been running smoothly, but ever since the update (a clean, new install) Ubuntu One doesn't sync any more. In Windows 7 it would start to sync at full internet speed as soon as I dropped a file. But now in Windows 8, as soon as I drop a new file into the Ubuntu One folder, CPU usage goes up to about 50% and no network traffic occurs. It stays like that for a couple of minutes, CPU usage goes back to normal, and then the client says that all is in sync - which isn't true. Is it too early for Windows 8? Do others have the same problem, or is there something I can do about it? I tried a couple of different things and realized that if the file size is 20 MB the files do get uploaded. The original file was 1.5 GB. It also didn't work with 200, 100 and 50 MB files. But even with 20 MB files, the upload is very slow and not steady. The log gives plenty of this error: "- twisted - ERROR - Failure: exceptions.TypeError", whose meaning I don't know. By the way, the account works just fine on the Ubuntu 12.04 partition. Any help is greatly appreciated. -Peter

    Read the article

  • Project Management Techniques (high level)

    - by Sam J
    Our software dev team is currently using kanban for our development lifecycle and, from the reasonably short experience of a few months, I think it's going quite well (certainly compared to a few months ago, when we didn't really have a methodology). Our team, however, is directed to do work defined by project managers (not software project managers, just general business), and they're using the PMBOK methodology. The question is: how does a traditional methodology like PMBOK or PRINCE2 fit with a lean software development methodology like kanban or scrum? Is it just wasting everyone's time, since all the requirements are effectively drawn up to start with (although inevitably changed along the way)?

    Read the article

  • 2D Collision in Canvas - Balls Overlapping When Velocity is High

    - by kushsolitary
    I am doing a simple experiment in canvas using JavaScript in which some balls are thrown onto the screen with some initial velocity and then bounce when they collide with each other or with the walls. I managed to do the collision with the walls perfectly, but now the problem is with the collision between balls. I am using the following code for it:

    //Check collision between two bodies
    function collides(b1, b2) {
        //Find the distance between their mid-points
        var dx = b1.x - b2.x,
            dy = b1.y - b2.y,
            dist = Math.round(Math.sqrt(dx*dx + dy*dy));
        //Check if it is a collision
        if(dist <= (b1.r + b2.r)) {
            //Calculate the angles
            var angle = Math.atan2(dy, dx),
                sin = Math.sin(angle),
                cos = Math.cos(angle);
            //Calculate the old velocity components
            var v1x = b1.vx * cos,
                v2x = b2.vx * cos,
                v1y = b1.vy * sin,
                v2y = b2.vy * sin;
            //Calculate the new velocity components
            var vel1x = ((b1.m - b2.m) / (b1.m + b2.m)) * v1x + (2 * b2.m / (b1.m + b2.m)) * v2x,
                vel2x = (2 * b1.m / (b1.m + b2.m)) * v1x + ((b2.m - b1.m) / (b2.m + b1.m)) * v2x,
                vel1y = v1y,
                vel2y = v2y;
            //Set the new velocities
            b1.vx = vel1x;
            b2.vx = vel2x;
            b1.vy = vel1y;
            b2.vy = vel2y;
        }
    }

    You can see the experiment here. The problem is that some balls overlap each other and stick together, while others rebound perfectly. I don't know what is causing this issue. Here's my Ball object, if that matters:

    function Ball() {
        //Random positions
        this.x = 50 + Math.random() * W;
        this.y = 50 + Math.random() * H;
        //Random radii
        this.r = 15 + Math.random() * 30;
        this.m = this.r;
        //Random velocity components
        this.vx = 1 + Math.random() * 4;
        this.vy = 1 + Math.random() * 4;
        //Random shade of grey color
        this.c = Math.round(Math.random() * 200);
        this.draw = function() {
            ctx.beginPath();
            ctx.fillStyle = "rgb(" + this.c + ", " + this.c + ", " + this.c + ")";
            ctx.arc(this.x, this.y, this.r, 0, Math.PI*2, false);
            ctx.fill();
            ctx.closePath();
        }
    }
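
    One common cause of the "stick together" symptom is that two overlapping balls keep re-triggering the collision code on every frame. A frequently used remedy is to push the balls apart along the line between their centres so they no longer intersect on the next frame. The following is only a hedged sketch of that idea, not part of the original question; the separate() helper is hypothetical and reuses the same ball fields (x, y, r):

    //Hypothetical helper: separate two overlapping balls along the collision
    //normal so they do not intersect on the next frame.
    function separate(b1, b2) {
        var dx = b2.x - b1.x,
            dy = b2.y - b1.y,
            dist = Math.sqrt(dx*dx + dy*dy) || 1, //avoid division by zero
            overlap = (b1.r + b2.r) - dist;
        if(overlap > 0) {
            var nx = dx / dist,
                ny = dy / dist,
                half = overlap / 2;
            //Move each ball half the overlap in opposite directions.
            b1.x -= nx * half;
            b1.y -= ny * half;
            b2.x += nx * half;
            b2.y += ny * half;
        }
    }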

    Read the article
