Search Results

Search found 9233 results on 370 pages for 'high school'.

Page 15/370 | < Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >

  • New host, high load?

    - by dotancohen
    A few minutes ago I signed up with a new web host; I have yet to move my sites over. Upon my initial SSH connection, I checked the load and memory usage, and both seem rather higher than I would like:

        # uptime
         12:06:51 up 71 days, 23:23,  1 user,  load average: 9.02, 9.49, 9.45
        # free
                     total       used       free     shared    buffers     cached
        Mem:      33014800   31927192    1087608          0    2384812   17729816
        -/+ buffers/cache:   11812564   21202236
        Swap:     16787916       8584   16779332

    Is that a bit too packed? I'm only paying about $5 USD per month, so I don't expect <0.1 loads, but ~10 is worrisome. Is it not? Also, there is no /etc/issue file, so I tried other methods to guess the OS:

        # uname -a
        Linux box358.bluehost.com 2.6.32-20120131.55.1.bh6.x86_64 #1 SMP Tue Jan 31 15:43:27 EST 2012 x86_64 x86_64 x86_64 GNU/Linux
        # which yum
        /usr/bin/yum
        # which apt-get
        #

    That looks like CentOS / RHEL 6.2, possibly?
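
    A raw load average only means something relative to the number of CPU cores on the box; on a 16-core shared server, a load of ~9 is far less alarming than it would be on a 2-core VPS. Below is a minimal sketch (mine, not from the original post) that compares the 1/5/15-minute load averages against the visible core count using only Python's standard library; values consistently above the core count suggest the host is oversubscribed.

        import os

        def load_report():
            """Print the 1/5/15-minute load averages normalised by CPU core count."""
            cores = os.cpu_count() or 1          # logical cores visible to this account
            one, five, fifteen = os.getloadavg() # Unix-only; raises OSError elsewhere
            for label, value in (("1m", one), ("5m", five), ("15m", fifteen)):
                ratio = value / cores
                verdict = "saturated" if ratio >= 1.0 else "ok"
                print(f"{label}: load {value:.2f} / {cores} cores = {ratio:.2f} ({verdict})")

        if __name__ == "__main__":
            load_report()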

    Read the article

  • Simple OpenGL program: major slowdown at high resolution

    - by Grieverheart
    I have created a small OpenGL 3.3 (Core) program using freeglut. The whole geometry is two boxes and one plane with some textures, and I can move around like in an FPS; that's it. The problem is that I see a big drop in fps when I make my window large (i.e. above 1920x1080). I have monitored GPU usage in full-screen and it shows a GPU load of nearly 100% and a memory controller load of ~85%. At 600x600, these numbers are at about 45%; my CPU is also at full load. I use deferred rendering at the moment, but even with forward rendering the slowdown was nearly as severe. I can't imagine my GPU is not powerful enough for something this simple when I play many games at 1080p (I have a GeForce GT 120M, btw). Below are my shaders.

    First Pass:

        #VS
        #version 330 core

        uniform mat4 ModelViewMatrix;
        uniform mat3 NormalMatrix;
        uniform mat4 MVPMatrix;
        uniform float scale;

        layout(location = 0) in vec3 in_Position;
        layout(location = 1) in vec3 in_Normal;
        layout(location = 2) in vec2 in_TexCoord;

        smooth out vec3 pass_Normal;
        smooth out vec3 pass_Position;
        smooth out vec2 TexCoord;

        void main(void){
            pass_Position = (ModelViewMatrix * vec4(scale * in_Position, 1.0)).xyz;
            pass_Normal = NormalMatrix * in_Normal;
            TexCoord = in_TexCoord;
            gl_Position = MVPMatrix * vec4(scale * in_Position, 1.0);
        }

        #FS
        #version 330 core

        uniform sampler2D inSampler;

        smooth in vec3 pass_Normal;
        smooth in vec3 pass_Position;
        smooth in vec2 TexCoord;

        layout(location = 0) out vec3 outPosition;
        layout(location = 1) out vec3 outDiffuse;
        layout(location = 2) out vec3 outNormal;

        void main(void){
            outPosition = pass_Position;
            outDiffuse = texture(inSampler, TexCoord).xyz;
            outNormal = pass_Normal;
        }

    Second Pass:

        #VS
        #version 330 core

        uniform float scale;

        layout(location = 0) in vec3 in_Position;

        void main(void){
            gl_Position = mat4(1.0) * vec4(scale * in_Position, 1.0);
        }

        #FS
        #version 330 core

        struct Light{
            vec3 direction;
        };

        uniform ivec2 ScreenSize;
        uniform Light light;
        uniform sampler2D PositionMap;
        uniform sampler2D ColorMap;
        uniform sampler2D NormalMap;

        out vec4 out_Color;

        vec2 CalcTexCoord(void){
            return gl_FragCoord.xy / ScreenSize;
        }

        vec4 CalcLight(vec3 position, vec3 normal){
            vec4 DiffuseColor = vec4(0.0);
            vec4 SpecularColor = vec4(0.0);

            vec3 light_Direction = -normalize(light.direction);
            float diffuse = max(0.0, dot(normal, light_Direction));

            if(diffuse > 0.0){
                DiffuseColor = diffuse * vec4(1.0);

                vec3 camera_Direction = normalize(-position);
                vec3 half_vector = normalize(camera_Direction + light_Direction);

                float specular = max(0.0, dot(normal, half_vector));
                float fspecular = pow(specular, 128.0);
                SpecularColor = fspecular * vec4(1.0);
            }
            return DiffuseColor + SpecularColor + vec4(0.1);
        }

        void main(void){
            vec2 TexCoord = CalcTexCoord();
            vec3 Position = texture(PositionMap, TexCoord).xyz;
            vec3 Color = texture(ColorMap, TexCoord).xyz;
            vec3 Normal = normalize(texture(NormalMap, TexCoord).xyz);

            out_Color = vec4(Color, 1.0) * CalcLight(Position, Normal);
        }

    Is it normal for the GPU to be used that much under the described circumstances? Is it due to poor performance of freeglut? I understand that the problem could be specific to my code; I can't paste the whole code here, but if you need more info, please tell me.
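
    For context, a back-of-the-envelope calculation (mine, not from the original post): going from 600x600 to 1920x1080 multiplies the number of shaded fragments by roughly 5.8x, and with a three-target G-buffer plus a full-screen lighting pass the bandwidth grows with it, so a fill-rate-limited GPU hitting 100% at 1080p is plausible. A quick sketch of the arithmetic:

        # Rough fill-rate comparison between the two window sizes (illustrative only).
        small = 600 * 600            # 360,000 fragments per full-screen pass
        large = 1920 * 1080          # 2,073,600 fragments per full-screen pass
        gbuffer_targets = 3          # position, diffuse, normal written in the first pass

        ratio = large / small        # ~5.76x more fragments to shade and write
        print(f"fragment ratio: {ratio:.2f}x")

        # Each deferred frame touches the G-buffer twice: once to write it in the
        # geometry pass, once to sample it in the lighting pass.
        writes = large * gbuffer_targets
        reads = large * gbuffer_targets
        print(f"per-frame G-buffer texel accesses at 1080p: {writes + reads:,}")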

    Read the article

  • Is anyone teaching the application of ethics to programming?

    - by blueberryfields
    Ethical questions come up more and more in the news, and they can be central to a software developer's life. On several occasions this year I was asked for advice that amounted to answering ethical questions in a computer programming context, and I was surprised to see how much fretting and trouble questions I consider trivial can cause. Are there any courses or programs specializing in teaching what is expected of an ethical programmer? Has anyone put together a formal curriculum anywhere?

    Read the article

  • Very High CPU usage (100%) from just browsing the Web

    - by cole
    I tested in Firefox and Chromium. I'm at 100% while loading pages, which causes them to load slowly, and even when I don't have an application running I'm at 40% CPU (at least). Everything is slow, basically. I'm also already on Ubuntu Classic, so I'm not using Unity. Should I go to 10.04? Is that more stable? On Windows this wasn't an issue. I dual-boot with XP and have a 2.4 GHz Intel Celeron with 768 MB RAM and an Nvidia 6200 graphics card. I heard 10.04 was the most stable. Any suggestions?

    Read the article

  • Constructive criticism for my bounce rate being so high [closed]

    - by Daniel
    The bounce rate on my website's product pages is 80%, which is terrible. Could you offer any opinions on whether you consider the user experience to be bad, and how I could possibly improve it? Other pages, such as the home and category pages, have acceptable bounce rates, but the vast majority of my traffic lands on the product pages. I already tried removing some Google ads for a couple of days, but this didn't seem to help at all. I'm working on doing A/B testing at the moment. (It's tricky, as the site is based on a CMS - I custom coded the [Joomla] component, so hopefully I can get this testing working.)

    Read the article

  • High resolution CLI?

    - by Mike Williamson
    I want the resolution of my console to match my screen resolution (1440x900). 1024x768 works fine, but for some reason when I set 1440x900 and switch to ttyX, the command prompt sits almost off the bottom of the screen! The Ubuntu splash screen goes off the edge of the screen during boot as well. Here is my /etc/default/grub:

        GRUB_DEFAULT=0
        GRUB_HIDDEN_TIMEOUT=0
        GRUB_HIDDEN_TIMEOUT_QUIET=true
        GRUB_TIMEOUT=10
        GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
        GRUB_CMDLINE_LINUX=""
        GRUB_GFXMODE=1440x900
        GRUB_GFXPAYLOAD_LINUX=keep

    How do I get my CLI resolution to be 1440x900?

    Read the article

  • High I/O wait after login

    - by Jackson Tan
    I've noticed that ubuntuone-syncdaemon hogs the hard disk every time I log in to Ubuntu (10.04). This goes on for two or three minutes, which makes Ubuntu insufferably slow. Opening Firefox is okay, but the browser is constantly greyed out and lags horribly. Given that I often shut down my laptop when I'm not using it (about 3 to 4 times a day), this makes Ubuntu lose much of its lustre because of the long effective boot time. Is this normal behaviour for Ubuntu One? Is it intended? Note that I've already posted this in the forums, but I received only a few replies.

    Read the article

  • About plagiarism

    - by user20018
    Hi, I don't know if it is appropriate to ask this question here; the moderator can delete it after it is answered if it's found inappropriate for this forum. I'm doing project coursework in my computer science department. Suppose I don't know how to develop the logic for some requirement in my project and I find the code in some tutorial. If I implement the feature along the same lines as that tutorial, does it constitute academic plagiarism? Thank you in anticipation.

    Read the article

  • Bug once in a while, but high priority

    - by Shirish11
    I am working on a CNC (computer numerical control) project that cuts shapes into metal with a laser. My problem is that once in a while (1-2 times in 20-odd days) the cutting goes wrong, or not according to what was set. This causes losses, so the client is not very happy about it. I have tried to find the cause by including log files, debugging, and reproducing the same environment, but it won't repeat. Pausing and continuing the operation makes it run smoothly again, until the bug eventually reappears. How do I tackle this issue? Should I report it as a hardware problem?
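
    Since the failure cannot be reproduced on demand, one practical tactic (my suggestion, not from the question) is to make every occurrence self-documenting: keep a rolling in-memory trace of the most recent machine commands and parameters, and dump it the instant a bad cut is detected, so the 1-2 failures per run arrive with their full context and can be diffed against normal runs. A minimal sketch, with illustrative command names:

        import collections
        import json
        import time

        # Keep the last N commands with their parameters in memory; dump them the
        # moment a cut deviates, so the rare failure is captured with its context.
        TRACE = collections.deque(maxlen=500)

        def record(command: str, **params):
            TRACE.append({"t": time.time(), "cmd": command, **params})

        def dump_trace(path: str, reason: str):
            with open(path, "w") as fh:
                json.dump({"reason": reason, "trace": list(TRACE)}, fh, indent=2)

        # Example usage around each machine instruction (names are illustrative):
        record("move", x=12.5, y=40.0, feed=1200)
        record("laser_on", power=0.8)
        # ...and when a deviation from the programmed contour is detected:
        dump_trace("cut_failure_trace.json", "contour deviated from programmed path")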

    Read the article

  • Procedurally generated 2d terrain for side scroller on Sega Genesis hardware?

    - by DJCouchyCouch
    I'm working on the Sega Genesis, which has an 8 MHz Motorola 68000 CPU. Any ideas on how to generate fast, decent 2D tile terrain for a side scroller in real time? The game would generate new columns or rows depending on the direction the player is scrolling in. The generation would have to be deterministic: the same seed value should generate the same terrain. I'm looking for algorithms that would satisfy the memory and CPU constraints of the hardware.
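
    One family of approaches that fits these constraints (my suggestion, not from the original question) is seeded value noise over column indices: each column's height is a pure function of (seed, column), so columns can be generated independently in either scrolling direction, and only integer adds, shifts, multiplies and a small hash are needed, which maps reasonably onto a 68000. A minimal Python sketch of the idea, kept integer-only so it could be ported to fixed-point or assembly:

        # Deterministic per-column terrain heights: height(seed, column) is a pure
        # function, so scrolling left or right regenerates identical columns.
        def hash32(x: int) -> int:
            """Cheap integer hash; any decent avalanche mix works here."""
            x = (x ^ 0x9E3779B9) & 0xFFFFFFFF
            x = (x * 0x85EBCA6B) & 0xFFFFFFFF
            x ^= x >> 13
            x = (x * 0xC2B2AE35) & 0xFFFFFFFF
            return (x ^ (x >> 16)) & 0xFFFFFFFF

        def lattice(seed: int, cell: int, max_height: int) -> int:
            """Random-but-repeatable height at a coarse lattice point."""
            return hash32(seed ^ (cell * 0x27D4EB2F)) % (max_height + 1)

        def column_height(seed: int, column: int, period: int = 8, max_height: int = 20) -> int:
            """Linearly interpolate between lattice heights every `period` columns."""
            cell, frac = divmod(column, period)
            h0 = lattice(seed, cell, max_height)
            h1 = lattice(seed, cell + 1, max_height)
            return h0 + (h1 - h0) * frac // period   # integer-only interpolation

        # Same seed, same columns -> identical terrain, regardless of visit order.
        print([column_height(1234, c) for c in range(16)])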

    Read the article

  • Learn How to Create High-Converting Landing Pages - Part 4

    Once you have placed a call to action through your pay-per-click ads on the landing pages, make sure that, as part of your SEO plan, you include the paid keyword several times in your landing pages; this leads to well-optimized pages under your search engine marketing efforts. As a smart SEO expert, it is important to include the keywords in a way that cannot be skipped by your readers, because if they are missed, it will affect your PPC campaign management and search engine optimization results.

    Read the article

  • SEO title tag and earning a high rank on search engines [closed]

    - by Josh White
    Possible Duplicate: What are the best ways to increase your site's position in Google? One of the most basic SEO techniques is including an accurate description of under 64 characters in the title tag of each page. I was wondering whether it is considered ethical SEO to set up the contents based on a search keyword, for example. So if the user searches for 'apple pictures', then the title of the webpage would be 'apple pictures'. Note that the search keywords accurately describe my website's contents, because the title will always relate to the body of the webpage and 85-90% of the terms searched for will return corresponding results. Is this considered good SEO practice, and is it ethical? Also, can someone explain the idea behind "linking"? I read somewhere that it is good SEO practice to link to other websites and that it is good when other websites link to you. Does this mean I should include as many links to other websites as possible (that are somehow relevant to my website's goal)? Also, if I joined forums/services and posted my website URL in the signature, would that still count as other websites linking to me?

    Read the article

  • How to mimic the same fixed-size horizon as in this racing game?

    - by Aybe
    I am trying to replicate the horizon (buildings and sky) shown in the first image: as you can see, the player has advanced in a straight line, yet the horizon still has the same size. This is my attempt using 3D; while it's okay when the player is on the start line, it's not so great once the player has advanced as far as in the second image. There is also an overview of where the horizon buildings and sky are located. Obviously this won't achieve such an effect when one is close to it, so I've tried scaling up the horizon on all axes, but the problem is that the buildings end up too small depending on where you look at them from. How can one mimic such rendering?
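
    A common way to get this effect in old racing games (my suggestion, not from the question) is to draw the skyline as a 2D background layer rather than as distant 3D geometry: the layer never reacts to camera translation, only to camera yaw, which is exactly why it keeps a fixed on-screen size no matter how far the player drives. A minimal sketch of the scrolling logic, assuming a horizontally tiling skyline texture:

        import math

        def skyline_offset(camera_yaw: float, texture_width: int) -> int:
            """Horizontal pixel offset of the background layer for a given camera yaw.

            The skyline only scrolls when the camera rotates; translating the camera
            (driving forward) leaves it untouched, so it always appears the same size.
            """
            turns = camera_yaw / (2.0 * math.pi)        # fraction of a full rotation
            return int(turns * texture_width) % texture_width

        # Example: a 1024px-wide tiling skyline, camera turned 90 degrees to the right.
        print(skyline_offset(math.pi / 2.0, 1024))       # -> 256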

    Read the article

  • How to do pragmatic high-level/meta-programming?

    - by Lenny222
    Imagine you have implemented the creation of a nice path-based star shape in Lisp. Then you discover Processing and reimplement the whole thing, because Processing/Java/Java2D is different. Then you want to tinker with libcinder, so you port your code to C++/Cairo. You are (re)writing a lot of boilerplate code, while the actual requirement "create a star shape" (or "create a path, moveto x y, lineto x y") has not changed. What are the options for encapsulating those implementation details? Some sort of pragmatic meta-programming? Maybe an expert system? How would you define your core business logic in as language-independent a way as possible?
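
    One pragmatic option (my sketch, not something proposed in the question) is to express the core logic as data: the star generator emits an abstract list of path commands ("moveto", "lineto", "closepath"), and each environment (Java2D, Cairo, Processing, SVG, ...) gets a thin interpreter that replays those commands against its own API. Only the interpreter is rewritten per platform; the business logic is never ported. A minimal sketch:

        import math

        def star_path(cx: float, cy: float, outer: float, inner: float, points: int = 5):
            """Return a star as backend-neutral path commands: ('moveto'|'lineto', x, y)."""
            cmds = []
            for i in range(2 * points):
                radius = outer if i % 2 == 0 else inner
                angle = math.pi * i / points - math.pi / 2
                x = cx + radius * math.cos(angle)
                y = cy + radius * math.sin(angle)
                cmds.append(("moveto" if i == 0 else "lineto", x, y))
            cmds.append(("closepath", None, None))
            return cmds

        def render_as_svg(cmds) -> str:
            """One tiny backend: turn the abstract path into an SVG 'd' attribute."""
            ops = {"moveto": "M", "lineto": "L", "closepath": "Z"}
            parts = []
            for op, x, y in cmds:
                parts.append(ops[op] if op == "closepath" else f"{ops[op]} {x:.1f} {y:.1f}")
            return " ".join(parts)

        # The star definition never changes; only the interpreter does per platform.
        print(render_as_svg(star_path(50, 50, 40, 16)))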

    Read the article

  • High Traffic Web Host Solution? [duplicate]

    - by Calsy
    Possible Duplicate: How to find web hosting that meets my requirements? I'm currently shopping around for a web host for a website we are hoping to release in the near future. This is my first real step into this area, so I'm just wondering what I should be looking for. It is an ASP.NET MVC website with an MS SQL Server backend, and I need to know that the server will not buckle if traffic booms. Currently I'm looking at a managed dedicated server from SingleHop. Does anyone know anything better, or have any advice?

    Read the article

  • High level overview of Visual Studio Extensibility APIs

    - by Daniel Cazzulino
    If your head is dizzy with the myriad VS services and APIs, from EnvDTE to Shell.Interop, this should clarify a couple of things. First, a bit of background: the API on EnvDTE (DTE for short, since that’s the entry-point service you request from the environment) was originally intended to be used by macros. It’s also called the automation API. Most of the time this is a simplified API that is easier to work with, but which doesn’t expose 100% of what VS is capable of doing. It’s also kind of the “rookie” way of doing VS extensibility (VSX for short), since most hardcore VSX devs sooner or later realize that they need to make the leap to the “serious” APIs. The “real” VSX APIs virtually always start with IVs, and make heavy use of uint, ref/out parameters and HRESULTs. These are the APIs that have been evolving for years and years, and there is a lot of COM baggage. ...Read full article

    Read the article

  • Botnets Keep Spam Volume High: Google

    <b>eSecurityPlanet:</b> "Botnets cranked out more spam and larger individual files containing spam in the first quarter of this year, according to the latest report from Postini, Google's e-mail filtering and security service."

    Read the article

  • High quality/performance shared hosting (in northern Europe)

    - by Bente
    I work as a web developer on almost all levels. However, my typical customer is 1-5 guys running some sort of consulting business. They have (or want) a web page with some kind of CMS so they can perform most (or all) editing themselves. I normally opt for Concrete5 as my default CMS because it's the most user-friendly (and free) CMS I have found. My good recurring customers I host on my own server as a service, but I need a good host for the customers where I want to deliver a product and not be responsible for whatever may happen in the future. However, I still struggle with hosting! Experience shows that the typical ~$1 shared hosting is way too slow to run Concrete5 smoothly, and a VPS is out of the question because I don't want to maintain it. So, where can I find a fast (from northern Europe), reliable shared host where I can put a site and not have to worry about the server going down or being unmaintained? I expect this to cost around $10-$20, but I'm open to all kinds of suggestions because different customers have different budgets.

    Read the article

  • Understanding the 'High Performance' meaning in Extreme Transaction Processing

    - by kyap
    Despite my previous blog entries on SOA/BPM and Identity Management, the domain I'm most passionate about is definitely Extreme Transaction Processing, commonly called XTP. I came across XTP back in 2007, while I was still FMW Product Manager in EMEA. At that time Oracle acquired a company called Tangosol, which owned a unique product called Coherence that we renamed to Oracle Coherence. Beside this innovative renaming of the product, to be honest, I didn't know much about it, except that it was a "distributed in-memory cache for Extreme Transaction Processing"... not very helpful.

    In general, when people don't fully understand a technology or a concept, they tend to find shortcuts, correct or not, to justify their lack of understanding... and of course I was part of this category of individuals. My shortcut was "Oracle Coherence Cache helps to improve performance". An excellent marketing slogan... but still not very meaningful. By chance I was able to get away from that group quickly in July 2007* at Thames Valley Park (UK), after attending one of the most interesting workshops of my 10-year career in Oracle, delivered by Brian Oliver.

    The biggest mistake I made was to assume that performance improvement with Coherence was about response time. That could be considered legitimate at the time, because after all caches help reduce latency on cached data access, and hence reduce response time. But like all caches, you need to define caching and expiration policies, think about the cache-miss strategy, and most of the time partially rewrite your application to work with the cache. As a result, the expected benefit vanishes... so, not very useful then?

    The key mistake I made was my perception of, or obsession with, how performance improvement should be driven, and I strongly believe this is still a common problem for most developers. In fact, we all know that the performance of a system is generally presented as Capacity (or Throughput), with two important dimensions: Speed (response time) and Volume (load):

        Capacity (TPS) = Volume (T) / Speed (S)

    To increase the Capacity, we can either reduce the Speed (in terms of response time) or increase the Volume. However, we tend to focus only on reducing the Speed dimension, perhaps because it is more concrete and tangible to measure, and nicer to present to our management, since there is a direct impact on the end-user experience. On the other hand, we assume the Volume can be addressed by the underlying hardware or software stack: if we need more capacity (scale out), we just add more hardware or software. Unfortunately, reality proves that IT is never as ideal as we assume...

    The challenge with the Speed-improvement approach is that it is generally difficult and costly to make things that are already fast... faster. And adding Coherence will not necessarily help either. Even if we manage to do so, the Capacity cannot increase forever, because the Speed can be influenced by the Volume. Every system has a performance profile like the following: in any traditional system, increasing the Volume (Transactions) will also increase the Speed (Response Time) at some point. The reason is simple: most of the time the application logic was not designed to scale. As an example, if you have a while-loop in your application, it is natural to expect that parsing 200 entries will require double the execution time compared to 100 entries.

    If you need to "speed up" the execution, you can only upgrade your hardware (scale up) with a faster CPU and/or network to reduce latency. That is technically limited and economically inefficient. And this is exactly where XTP and Coherence kick in. The primary objective of XTP is designing applications that can scale out to increase the Volume, by applying coding techniques that keep the execution time as constant as possible, independently of the amount of runtime data being manipulated. It is not just about having an application run as fast as possible, but about having a much more predictable system, with constant response time and linear scaling, so we can easily increase throughput by adding more hardware in parallel. It is generally combined with the Low Latency Programming model, where we try to optimize network usage as much as possible, either from the programmatic angle (fewer network hops to complete a task) and/or from the hardware angle (faster network equipment). In this picture, Oracle Coherence can be considered a software-level XTP enabler, via the distributed cache, because it can guarantee:
    - constant data-object access time, independently of the number of objects and the Coherence cluster size
    - data-object distribution by affinity for in-memory data grouping
    - in-place data processing for parallel execution

    To summarize, Oracle Coherence is indeed useful for improving your application's performance, just not in the way we commonly think. It's not about the Speed itself, but about the overall Capacity under extreme load while keeping a consistent Speed. In the future I will keep adding new blog entries around this topic, sharing some sample code and experiences I have captured over the last few years. In the meanwhile, if you want to know more about Oracle Coherence, I strongly suggest you start by checking how our worldwide customers are using Oracle Coherence, and then start playing with the product through our tutorial. Have fun!
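
    To make the scaling argument concrete (my illustration, not code from the post): the while-loop example behaves like a linear scan, whose cost grows with the data volume, whereas the "constant access time" promised above corresponds to a keyed, partitioned lookup whose cost stays flat as entries are added. A tiny sketch of the contrast in plain Python:

        import time

        def linear_lookup(entries, wanted):
            """Cost grows with the number of entries -- the 'while-loop' anti-pattern."""
            for key, value in entries:          # O(n) scan
                if key == wanted:
                    return value
            return None

        def keyed_lookup(index, wanted):
            """Cost stays (amortised) constant regardless of volume -- the XTP goal."""
            return index.get(wanted)            # O(1) hash lookup

        for n in (100_000, 200_000, 400_000):
            entries = [(i, i * i) for i in range(n)]
            index = dict(entries)
            t0 = time.perf_counter(); linear_lookup(entries, n - 1); t1 = time.perf_counter()
            t2 = time.perf_counter(); keyed_lookup(index, n - 1);    t3 = time.perf_counter()
            print(f"n={n:>7}: scan {1000*(t1-t0):6.2f} ms   keyed {1000*(t3-t2):6.4f} ms")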

    Read the article

  • A High Level Comparison Between Oracle and SQL Server

    Organisations often employ a number of database platforms in their information system architecture. It is not uncommon to see medium to large sized companies using three to four different RDBMS packages. Consequently the DBAs these companies look for often ... [Read Full Article]

    Read the article

  • Top Search Engine Optimization Strategies to Get High Rankings in Google For Your Business Websites

    Search engine optimization strategies are mainly used to take your website's rank to a top-10 position. Having a higher rank in a search engine means that your business website will receive more traffic from Google and other search engines. This eventually also means that you will be able to have more customers and make more money off your business, all at no additional cost to you!

    Read the article

  • Best Practices for High Volume CPA Import Operations with ebXML in B2B 11g

    - by Shub Lahiri, A-Team
    Background: B2B 11g supports the ebXML messaging protocol, where multiple CPAs can be imported via command-line utilities. This note highlights one aspect of the best practices for CPA import when a large number of CPAs, in excess of several hundred, must be maintained within the B2B repository.

    Symptoms: The import of a CPA is usually a 2-step process, namely creating a soa.zip file using the b2bcpaimport utility based on a CPA properties file, and then using b2bimport to import it into the B2B repository. The commands are provided below:

        ant -f ant-b2b-util.xml b2bcpaimport -Dpropfile="<Path to cpp_cpa.properties>" -Dstandard=true
        ant -f ant-b2b-util.xml b2bimport -Dlocalfile=true -Dexportfile="<Path to soa.zip>" -Doverwrite=true

    Usually the first command completes fairly quickly regardless of the number of CPAs in the repository. However, as the number of trading partners within the repository goes up, the time to complete the second command can grow to ~30 secs per operation. This can add up to a significant amount if hundreds of CPAs need to be imported into a production system within a limited downtime or maintenance window.

    Remedy: In situations where there is a large number of entries to be imported, it is best to set up a staging environment and go through the import operation of each individual CPA in an empty repository. Since this is done in an empty repository, the time taken for completion should be reasonable. After all the partner profiles have been imported, a full repository export can be taken to capture the metadata for all the entries in one file. If this single file with all the partner entries is imported into a loaded repository, the total time taken for import of all the CPAs should see a dramatic reduction.

    Results: Let us take a look at the numbers to see the benefit of this approach. With a pre-loaded repository of ~400 partners, each individual import takes ~30 secs. So, if we had to import another 100 partners, the individual entries would take ~50 minutes (100 times ~30 secs). On the other hand, if we prepare the repository export file of the same 100 partners in a staging environment beforehand, the import takes about ~5 mins. The total processing time for loading the metadata, especially in a production environment, can thus be shortened by almost a factor of 10.

    Summary: The diagram in the original post summarizes the entire approach and process.

    Acknowledgements: The material posted here has been compiled with help from the B2B Engineering and Product Management teams.
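
    If the staging-side imports themselves need to be scripted, a simple loop over the CPA property files that shells out to the two ant targets quoted above is usually enough. The sketch below is illustrative only: the directory layout, file names and the location of the generated soa.zip are assumptions; the ant commands are the ones from the note.

        import pathlib
        import subprocess

        STAGING_CPA_DIR = pathlib.Path("/staging/cpas")   # assumed location of *.properties files

        def run(args):
            """Run a command and fail loudly if it returns non-zero."""
            print("+", " ".join(args))
            subprocess.run(args, check=True)

        for propfile in sorted(STAGING_CPA_DIR.glob("*.properties")):
            # Step 1: build the soa.zip for this CPA from its properties file.
            run(["ant", "-f", "ant-b2b-util.xml", "b2bcpaimport",
                 f"-Dpropfile={propfile}", "-Dstandard=true"])
            # Step 2: import the generated soa.zip into the (empty) staging repository.
            run(["ant", "-f", "ant-b2b-util.xml", "b2bimport",
                 "-Dlocalfile=true", "-Dexportfile=soa.zip",  # assumption: adjust to where b2bcpaimport writes the archive
                 "-Doverwrite=true"])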

    Read the article
