Search Results

Search found 8328 results on 334 pages for 'dour high arch'.

Page 15/334

  • How to do pragmatic high-level/meta-programming?

    - by Lenny222
    Imagine you have implemented the creation of a nice path-based star shape in Lisp. Then you discover Processing and you re-implement the whole code, because Processing/Java/Java2D is different. Then you want to tinker with libcinder, so you port your code to C++/Cairo. You are (re)writing a lot of boilerplate code, while the actual requirement "create a star shape" (or "create a path, moveto x y, lineto x y") has not changed. What are the options for encapsulating those implementation details? Some sort of pragmatic meta-programming? Maybe an expert system? How would you define your core business logic in as language-independent a way as possible?
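    One pragmatic direction, sketched here under my own assumptions (the command vocabulary and the starPath/render helpers are invented for illustration, not taken from the question): express the shape as plain data, and keep one small interpreter per backend, so that porting to Processing, Cairo or canvas means rewriting only the interpreter, never the star logic.

        // A path is plain data: an array of commands, independent of any backend.
        // The command names ("moveto"/"lineto"/"close") are illustrative.
        function starPath(cx, cy, outer, inner, points) {
          const cmds = [];
          for (let i = 0; i < points * 2; i++) {
            const r = (i % 2 === 0) ? outer : inner;      // alternate radii
            const a = Math.PI * i / points - Math.PI / 2; // start at the top
            cmds.push([i === 0 ? "moveto" : "lineto",
                       cx + r * Math.cos(a), cy + r * Math.sin(a)]);
          }
          cmds.push(["close"]);
          return cmds;
        }

        // One tiny interpreter per platform; this one targets canvas 2D.
        // A Cairo or Java2D interpreter would be the only code to rewrite.
        function render(ctx, cmds) {
          ctx.beginPath();
          for (const [op, x, y] of cmds) {
            if (op === "moveto") ctx.moveTo(x, y);
            else if (op === "lineto") ctx.lineTo(x, y);
            else ctx.closePath();
          }
          ctx.stroke();
        }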

    Read the article

  • High level overview of Visual Studio Extensibility APIs

    - by Daniel Cazzulino
    If your head is dizzy with the myriad VS services and APIs, from EnvDTE to Shell.Interop, this should clarify a couple of things. First, a bit of background: EnvDTE (DTE for short, since that's the entry-point service you request from the environment) was originally an API intended to be used by macros. It's also called the automation API. Most of the time it is a simplified API that is easier to work with, but which doesn't expose 100% of what VS is capable of doing. It's also kind of the "rookie" way of doing VS extensibility (VSX for short), since most hardcore VSX devs sooner or later realize that they need to make the leap to the "serious" APIs. The "real" VSX APIs virtually always start with IVs, and make heavy use of uint, ref/out parameters and HRESULTs. These are the APIs that have been evolving for years and years, and there is a lot of COM baggage. ...

    Read the article

  • Botnets Keep Spam Volume High: Google

    eSecurityPlanet: "Botnets cranked out more spam and larger individual files containing spam in the first quarter of this year, according to the latest report from Postini, Google's e-mail filtering and security service."

    Read the article

  • High quality/performance shared hosting (in northern Europe)

    - by Bente
    I work as a web developer at almost all levels. However, my typical customer is 1-5 guys running some sort of consulting business. They have (or want) a web page with some kind of CMS so they can perform most (or all) editing themselves. I normally opt for Concrete5 as my default CMS because it's the most user-friendly (and free) CMS I have found. My good recurring customers I host on my own server as a service, but I need a good host for the customers where I want to deliver a product and not be responsible for whatever may happen in the future. However, I still struggle with hosting! Experience shows that the typical ~$1 shared hosting is waaay too slow to run Concrete5 smoothly, and a VPS is out of the question because I don't want to maintain it. So, where can I find a fast (from northern Europe), reliable shared host where I can put a site and not have to worry about the server going down or being unmaintained? I expect this should cost around $10-$20, but I'm open to all kinds of suggestions because different customers have different budgets.

    Read the article

  • Understanding the 'High Performance' meaning in Extreme Transaction Processing

    - by kyap
    Despite my previous blog entries on SOA/BPM and Identity Management, the domain I am most passionate about is definitely Extreme Transaction Processing, commonly called XTP.

    I came across XTP back in 2007, while I was still an FMW Product Manager in EMEA. At that time Oracle acquired a company called Tangosol, which owned a unique product called Coherence that we renamed Oracle Coherence. Beyond this innovative renaming of the product, to be honest, I didn't know much about it, except that it was a "distributed in-memory cache for Extreme Transaction Processing"... still not very helpful.

    In general, when people don't fully understand a technology or a concept, they tend to find shortcuts, correct or not, to justify their lack of understanding... and of course I was part of this category of individuals. My shortcut was "Oracle Coherence Cache helps to improve performance". An excellent marketing slogan... but still not very meaningful. By chance I was able to get away quickly from that group in July 2007 at Thames Valley Park (UK), after I attended one of the most interesting workshops of my 10-year career at Oracle, delivered by Brian Oliver.

    The biggest mistake I made was to assume that performance improvement with Coherence was about response time. Which seemed legitimate at the time: after all, caches reduce latency on cached data access, and hence reduce response time. But like all caches, you need to define caching and expiration policies, think about the cache-miss strategy, and most of the time partially rewrite your application to work with the cache. As a result, the expected benefit vanishes... so, not very useful then?

    The key mistake was my perception, or obsession, about how performance improvement should be driven, and I strongly believe this is still a common problem for most developers. The performance of a system is generally expressed as its Capacity (or Throughput), with two important dimensions, Speed (response time) and Volume (load):

        Capacity (TPS) = Volume (T) / Speed (S)

    To increase the Capacity, we can either reduce the Speed (the response time) or increase the Volume. However, we tend to focus only on the Speed dimension, perhaps because it is more concrete and tangible to measure, and nicer to present to management because there is a direct impact on the end-user experience. The Volume, on the other hand, we assume can be addressed by the underlying hardware or software stack: if we need more capacity (scale out), we just add more hardware or software. Unfortunately, reality proves that IT is never as ideal as we assume...

    The challenge with the Speed-improvement approach is that it is generally difficult and costly to make things that are already fast... faster. Adding Coherence will not necessarily help either. And even if we manage it, the Capacity cannot increase forever, because the Speed is itself influenced by the Volume. Every system shows the same performance curve [figure omitted]: in a traditional system, increasing the Volume (transactions) also increases the Speed (response time) at some point. The reason is simple: most of the time the application logic was not designed to scale. For example, if you have a while-loop in your application, it is natural that parsing 200 entries takes double the execution time of 100 entries.
    If you need to "speed up" the execution, you can only upgrade your hardware (scale up) with a faster CPU and/or network to reduce latency. That is technically limited and economically inefficient. And this is exactly where XTP and Coherence kick in. The primary objective of XTP is to design applications that scale out to absorb an increasing Volume, by applying coding techniques that keep the execution time as constant as possible, independent of the amount of runtime data being manipulated. It is not just about having an application that runs as fast as possible, but about having a much more predictable system, with constant response time and linear scaling, so we can easily increase throughput by adding more hardware in parallel. It is generally combined with the low-latency programming model, where we optimize network usage as much as possible, either from the programming angle (fewer network hops to complete a task) and/or from the hardware angle (faster network equipment). In this picture, Oracle Coherence can be considered a software-level XTP enabler, via its distributed cache, because it can guarantee:

    - constant data-object access time, independent of the number of objects and the Coherence cluster size
    - data-object distribution by affinity, for in-memory data grouping
    - in-place data processing, for parallel execution

    To summarize, Oracle Coherence is indeed useful for improving your application's performance, just not in the way we commonly think. It's not about the Speed itself, but about the overall Capacity under extreme load while keeping the Speed consistent. In the future I will keep adding blog entries around this topic, with some sample code from experiences I have captured over the last few years. In the meantime, if you want to know more about Oracle Coherence, I strongly suggest you start by checking how our worldwide customers are using it, then start playing with the product through our tutorial.

    Have fun!

    Read the article

  • A High Level Comparison Between Oracle and SQL Server

    Organisations often employ a number of database platforms in their information system architecture. It is not uncommon to see medium to large sized companies using three to four different RDBMS packages. Consequently the DBAs these companies look for often ... [Read Full Article]

    Read the article

  • Top Search Engine Optimization Strategies to Get High Rankings in Google For Your Business Websites

    Search engine optimization strategies are mainly used to push your website into the top 10 positions. A higher rank in a search engine means that your business website will receive more traffic from Google and other search engines. This will eventually also mean that you will be able to get more customers and make more money off of your business, all at no additional cost to you!

    Read the article

  • High Traffic Web Host Solution?

    - by Calsy
    Hi all, I'm currently shopping around for a web host for a website we are hoping to release in the near future. This is my first real step into this area, so I'm just wondering what I should be looking for. It is an ASP.NET MVC website with an MS SQL Server backend. I need to know that the server will not buckle if traffic booms. Currently I'm looking at a managed dedicated server from SingleHop. Does anyone know any better, or have any advice? Thanks in advance

    Read the article

  • Best Practices for High Volume CPA Import Operations with ebXML in B2B 11g

    - by Shub Lahiri, A-Team
    Background

    B2B 11g supports the ebXML messaging protocol, where multiple CPAs can be imported via command-line utilities. This note highlights one aspect of the best practices for CPA import when a large number of CPAs, in excess of several hundred, must be maintained within the B2B repository.

    Symptoms

    The import of a CPA is usually a 2-step process: first create a soa.zip file using the b2bcpaimport utility based on a CPA properties file, then use b2bimport to import it into the B2B repository. The commands are provided below:

        ant -f ant-b2b-util.xml b2bcpaimport -Dpropfile="<Path to cpp_cpa.properties>" -Dstandard=true
        ant -f ant-b2b-util.xml b2bimport -Dlocalfile=true -Dexportfile="<Path to soa.zip>" -Doverwrite=true

    The first command usually completes fairly quickly regardless of the number of CPAs in the repository. However, as the number of trading partners in the repository goes up, the second command can take up to ~30 secs per operation. This adds up to a significant amount of time if hundreds of CPAs need to be imported into a production system within a limited downtime or maintenance window.

    Remedy

    Where there is a large number of entries to import, it is best to set up a staging environment and run the import of each individual CPA against an empty repository. Since the repository is empty, each import completes in reasonable time. After all the partner profiles have been imported, take a full repository export to capture the metadata for all the entries in one file. If this single file with all the partner entries is then imported into a loaded repository, the total import time for all the CPAs drops dramatically.

    Results

    Let us look at the numbers to see the benefit of this approach. With a pre-loaded repository of ~400 partners, each individual import takes ~30 secs, so importing another 100 partners one by one would take ~50 minutes (100 x ~30 secs). If instead we prepare a repository export file of the same 100 partners in a staging environment beforehand, the import takes about ~5 mins. The total loading time for the metadata, especially in a production environment, can thus be shortened by almost a factor of 10.

    Summary

    [Diagram summarizing the entire approach and process omitted.]

    Acknowledgements

    The material posted here has been compiled with help from the B2B Engineering and Product Management teams.

    Read the article

  • Diagramming software with API allowing high customisation of shapes and actions

    - by jenson-button-event
    I am after something like Visio or Lucid: a relatively simple charting/diagramming tool for building tree-like structures from (my) pre-defined nodes (squares), but with a powerful API. Requirements:

    - limit the types of objects allowed to be dropped on the diagram
    - validate a model (e.g. a node of type A must precede a node of type B; a node title must be entered)
    - export a model
    - import a model

    Our domain is very specific, and it's a tool we'd want to offer to some of our power users. The $500 Visio licence isn't really within the business model. I'll put no constraints on framework or deployment (web or desktop) - is there anything out there?
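    Purely to illustrate the kind of API hook being asked for (the node shape and the ordering rule below are hypothetical, not any product's API), a model validation could be as small as:

        // Hypothetical validation pass over a tree-like diagram model.
        // Assumed node shape: { id, type, title, children }.
        function validate(node, errors = [], parentType = null) {
          if (!node.title) {
            errors.push("Node " + node.id + ": missing title");
          }
          // Example ordering rule: type B must be preceded by type A.
          if (node.type === "B" && parentType !== "A") {
            errors.push("Node " + node.id + ": type B must follow type A");
          }
          for (const child of node.children || []) {
            validate(child, errors, node.type);
          }
          return errors;
        }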

    Read the article

  • Xorg constant high CPU usage

    - by user157342
    CPU: AMD Athlon(tm) 64 X2 Dual Core Processor 4400+
    Kernel: 2.6.38-7.dmz.1-liquorix-amd64
    X server version: X.Org X Server 1.9.0
    OpenGL direct rendering: Yes
    OpenGL vendor: NVIDIA Corporation
    OpenGL renderer: GeForce 8400 GS/PCI/SSE2
    OpenGL version: 3.3.0 NVIDIA 270.41.06
    GCC version: 4.4.5
    Java version: 1.6.0_20
    Python version: 2.6.6
    GTK version: 2.22.0
    PyGTK version: 2.21.0
    Firefox version: Mozilla Firefox 5.0
    Ubuntu version: 10.10
    GNOME version: 2.32.0

    The issue is that the Xorg process always seems to be active, with over 6% CPU and 50+ MB RAM usage, which in turn keeps the fans blowing all the time.

    Read the article

  • Ubuntu One has high CPU usage but no syncing

    - by Peter
    Over the weekend I updated my computer to Windows 8. Until then Ubuntu One had been running smoothly, but ever since the update (a clean, new install) Ubuntu One doesn't sync any more. In Windows 7 it would start to sync at full internet speed as soon as I dropped a file. Now in Windows 8, as soon as I drop a new file into the Ubuntu One folder, CPU usage goes up to about 50% and no network traffic occurs. This stays like that for a couple of minutes, CPU usage goes back to normal, and then the client says that all is in sync - which isn't true. Is it too early for Windows 8? Do others have the same problem, or is there something I can do about it? I tried a couple of different things and realized that if the file size is 20 MB, the files get uploaded. The original file was 1.5 GB. It also didn't work with 200, 100 and 50 MB files. But even with 20 MB files, the upload is very slow and not steady. The log gives plenty of this error:

        twisted - ERROR - Failure: exceptions.TypeError

    I don't know what it means. By the way, the account works just fine on the Ubuntu 12.04 partition. Any help is greatly appreciated. -Peter

    Read the article

  • Project Management Techniques (high level)

    - by Sam J
    Our software dev team is currently using kanban for our development lifecycle, and, from the reasonably short experience of a few months, I think it's going quite well (certainly compared to a few months ago when we didn't really have a methodology). Our team, however, is directed to do work defined by project managers (not software project managers, just general business), and they're using the PMBOK methodology. The question is: how does a traditional methodology like PMBOK or PRINCE2 fit with a lean software development methodology like kanban or scrum? Is it just wasting everyone's time, as all the requirements are effectively drawn up at the start (although inevitably changed along the way)?

    Read the article

  • 2D Collision in Canvas - Balls Overlapping When Velocity is High

    - by kushsolitary
    I am doing a simple experiment in canvas using Javascript in which some balls are thrown on the screen with some initial velocity and then bounce on colliding with each other or with the walls. I managed to do the collision with walls perfectly, but now the problem is with the collision between balls. I am using the following code for it:

        //Check collision between two bodies
        function collides(b1, b2) {
          //Find the distance between their mid-points
          var dx = b1.x - b2.x,
              dy = b1.y - b2.y,
              dist = Math.round(Math.sqrt(dx*dx + dy*dy));

          //Check if it is a collision
          if(dist <= (b1.r + b2.r)) {
            //Calculate the angles
            var angle = Math.atan2(dy, dx),
                sin = Math.sin(angle),
                cos = Math.cos(angle);

            //Calculate the old velocity components
            var v1x = b1.vx * cos,
                v2x = b2.vx * cos,
                v1y = b1.vy * sin,
                v2y = b2.vy * sin;

            //Calculate the new velocity components
            var vel1x = ((b1.m - b2.m) / (b1.m + b2.m)) * v1x + (2 * b2.m / (b1.m + b2.m)) * v2x,
                vel2x = (2 * b1.m / (b1.m + b2.m)) * v1x + ((b2.m - b1.m) / (b2.m + b1.m)) * v2x,
                vel1y = v1y,
                vel2y = v2y;

            //Set the new velocities
            b1.vx = vel1x;
            b2.vx = vel2x;
            b1.vy = vel1y;
            b2.vy = vel2y;
          }
        }

    You can see the experiment here. The problem is that some balls overlap each other and stick together, while some of them rebound perfectly. I don't know what is causing this issue. Here's my ball object, in case that matters:

        function Ball() {
          //Random positions
          this.x = 50 + Math.random() * W;
          this.y = 50 + Math.random() * H;

          //Random radii
          this.r = 15 + Math.random() * 30;
          this.m = this.r;

          //Random velocity components
          this.vx = 1 + Math.random() * 4;
          this.vy = 1 + Math.random() * 4;

          //Random shade of grey color
          this.c = Math.round(Math.random() * 200);

          this.draw = function() {
            ctx.beginPath();
            ctx.fillStyle = "rgb(" + this.c + ", " + this.c + ", " + this.c + ")";
            ctx.arc(this.x, this.y, this.r, 0, Math.PI*2, false);
            ctx.fill();
            ctx.closePath();
          }
        }
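    Not an answer from the article, but a common cause of this exact symptom: at high speed a ball travels deep into another between frames, and because the velocity exchange runs on every frame while the circles still overlap, the pair keeps reversing velocities and locks together. A usual remedy, sketched here (the resolve name is mine), is to push the circles apart along the collision normal first, and to exchange velocities only when the balls are actually approaching:

        // Sketch of a sturdier response: separate overlapping circles first,
        // then resolve only if the balls are moving toward each other.
        function resolve(b1, b2) {
          var dx = b2.x - b1.x, dy = b2.y - b1.y;
          var dist = Math.sqrt(dx*dx + dy*dy) || 1; // avoid divide-by-zero
          var overlap = b1.r + b2.r - dist;
          if (overlap <= 0) return;

          var nx = dx / dist, ny = dy / dist; // collision normal

          // 1. Positional correction: push each ball out by half the overlap.
          b1.x -= nx * overlap / 2; b1.y -= ny * overlap / 2;
          b2.x += nx * overlap / 2; b2.y += ny * overlap / 2;

          // 2. Relative velocity along the normal; skip if already separating.
          var rvn = (b2.vx - b1.vx) * nx + (b2.vy - b1.vy) * ny;
          if (rvn > 0) return;

          // 3. 1D elastic collision along the normal, using the masses.
          var impulse = 2 * rvn / (b1.m + b2.m);
          b1.vx += impulse * b2.m * nx; b1.vy += impulse * b2.m * ny;
          b2.vx -= impulse * b1.m * nx; b2.vy -= impulse * b1.m * ny;
        }

    With the positional correction, a fast pair is pushed apart in the same frame it is detected, so the velocity exchange cannot fire repeatedly while the balls still overlap.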

    Read the article

  • CPU temperature is high on Ubuntu

    - by Kaspar
    So I have a machine which has both the latest Ubuntu and Windows 8 on it. On Windows 8 my CPU temperature is roughly 26 degrees when idle. When I boot into Ubuntu, the CPU temperature is suddenly 43 degrees when idle, and my fans are making a lot of noise, probably because of the CPU temperature. Why is that? Everywhere I read, it says Linux is much better at managing the CPU and so on, yet it seems something is wrong. It is a stock installation of Ubuntu 14.04, and my CPU is an Intel i5-4570.

    Read the article

  • Identify the thread consuming high CPU in a Java app

    - by Vincent Ma
    The following Java code emulates a busy thread and an idle thread, and starts both:

        import java.util.concurrent.*;

        public class ThreadTest {
            public static void main(String[] args) {
                new Thread(new Idle(), "Idle").start();
                new Thread(new Busy(), "Busy").start();
            }
        }

        // Sleeps for an hour; consumes essentially no CPU.
        class Idle implements Runnable {
            @Override
            public void run() {
                try {
                    TimeUnit.HOURS.sleep(1);
                } catch (InterruptedException e) {
                }
            }
        }

        // Spins forever on a regex match; pegs one CPU core.
        class Busy implements Runnable {
            @Override
            public void run() {
                while (true) {
                    "Test".matches("T.*");
                }
            }
        }

    Use Process Explorer to find this busy Java process and the thread ID that is costing lots of CPU (see the screenshot, not reproduced here). Converting the thread ID 4044 to hexadecimal gives 0xfcc. Then use VisualVM to take a thread dump and locate that thread by its hex ID (see the second screenshot). On Linux you can use top -H to get per-thread information. That's it! Any questions, let me know. Thanks

    Read the article

  • How do I fix slow scrolling in browsers and high Xorg CPU usage

    - by Virgil
    I am facing an issue when scrolling in browsers (Firefox, Chrome, and Opera): scrolling is jagged and slow, and CPU usage spikes while scrolling. I am currently running Ubuntu Natty (beta 1), having switched from Ubuntu 10.10 where the problem was worse. I am using the NVIDIA beta driver, which Ubuntu installed automatically. My graphics card is an NVIDIA Quadro NVS 150M. I tried running Ubuntu with the effects turned off, but when using multiple applications at the same time Xorg usage spikes again. Additional info: 2 GB of RAM and an Intel Core 2 Duo processor. Any help would be greatly appreciated. Thanks in advance!

    Read the article

  • High CPU load for 1:30 minutes when mounting ext4-raid partition

    - by sirion
    I have a RAID 5 (software) with 5x 2TB drives. I encrypted the RAID with cryptsetup and put an ext4 partition on top. In the beginning, opening and mounting the RAID took less than 10 seconds; now (for a few weeks) mounting alone takes 1:30 minutes and the CPU stays around 93% the whole time. The output of "time sudo mount /dev/mapper/8000 /media/8000" is:

        real 1m31.952s
        user 0m0.008s
        sys  1m25.229s

    At the same time only one line is added to /var/log/syslog:

        kernel: [ 2240.921381] EXT4-fs (dm-1): mounted filesystem with ordered data mode. Opts: (null)

    My Ubuntu version is 12.04.1 LTS and no updates are pending. I checked the partition with fsck, but it says all is OK. The "cryptsetup luksOpen" command only takes a few seconds. I also tried changing the RAID bitmap (as was suggested in some forum), but it did not change the behaviour:

        sudo mdadm --grow /dev/md0 -b internal
        sudo mdadm --grow /dev/md0 -b none

    I had the idea that it might be the hardware being slow, but a read test with "sudo hdparm -t /dev/md0" spat out values between 62 and 159 MB/sec:

        Timing buffered disk reads: 382 MB in 3.00 seconds = 127.14 MB/sec
        Timing buffered disk reads: 482 MB in 3.02 seconds = 159.62 MB/sec
        Timing buffered disk reads: 190 MB in 3.03 seconds =  62.65 MB/sec
        Timing buffered disk reads: 474 MB in 3.02 seconds = 157.12 MB/sec

    Although I think it is strange that the read rate jumps by more than 100% - could that mean something? A speed test reading from the mapped (decrypted) device shows similar behaviour, although it is of course much slower. "sudo hdparm -t /dev/mapper/8000":

        Timing buffered disk reads:  56 MB in 3.02 seconds = 18.54 MB/sec
        Timing buffered disk reads: 122 MB in 3.09 seconds = 39.43 MB/sec
        Timing buffered disk reads: 134 MB in 3.02 seconds = 44.35 MB/sec

    The output of a verbose mount, "mount -vvv /dev/mapper/8000 /media/8000", does not help much:

        mount: fstab path: "/etc/fstab"
        mount: mtab path:  "/etc/mtab"
        mount: lock path:  "/etc/mtab~"
        mount: temp path:  "/etc/mtab.tmp"
        mount: UID: 0
        mount: eUID: 0
        mount: spec:  "/dev/mapper/8000"
        mount: node:  "/media/8000"
        mount: types: "(null)"
        mount: opts:  "(null)"
        mount: you didn't specify a filesystem type for /dev/mapper/8000
               I will try type ext4
        mount: mount(2) syscall: source: "/dev/mapper/8000", target: "/media/8000", filesystemtype: "ext4", mountflags: -1058209792, data: (null)

    Any idea where I could find additional information on why mounting takes so long, or what additional tests I could run?

    Read the article

  • Actual High Speed USB flash drive

    - by CSkau
    I'm looking to upgrade my EEE 1000H, possibly by replacing the HDD with simple (internal) USB-connected storage. The problem I'm having is that I can't seem to find any actually high-speed USB sticks. They all proclaim high speeds, but usually turn out to be ~30 MB/s - much lower than the 60 MB/s (480 Mbit/s / 8) I understand USB 2.0 is capable of, no? Can anyone enlighten me as to why no USB sticks seem to go past that low bar, or alternatively point me in the direction of some actual high-speed USB sticks? Any help is greatly appreciated :) Cheers!

    Read the article

  • PC noisy due to high load - why?

    - by Jinx
    I just installed Ubuntu on my PC (a Dell Inspiron I560SR-358, with an E5700 3 GHz CPU, 4 GB of memory, and an NVIDIA GeForce G310). The PC has become noisy; before that it was quiet, with Windows 7 on it. How come? How do I make it quiet again in Ubuntu 10.04? One of the two CPU cores is always at 100% usage; I think that is the reason.

    Edit: everything is OK again after I restarted the computer, but the fan is still running, which makes it noisy; if I switch to Windows 7, it becomes quiet again.

    Read the article
