Search Results

Search found 20388 results on 816 pages for 'nvidia current'.

Page 296/816 | < Previous Page | 292 293 294 295 296 297 298 299 300 301 302 303  | Next Page >

  • Will the Global Demand for Water Outstrip the Supply by 2030?

    - by Evelyn Neumayr
    A recent study conducted by the Economist Intelligence Unit and sponsored by Oracle Utilities, titled “Water for All?”, considers the preparedness of utilities to supply water to the current global population of over 7 billion people, with a further 1 billion expected by 2030. It compares the strategies utilities in 10 major countries are using to address this challenge. The study's findings show that wide-ranging water management efforts and large-scale investments must be made if utilities are to cope with near-certain water stress (demand outstripping supply) by 2030. The report is based on an online survey of 244 water utility executives in these countries, supplemented by in-depth interviews with 20 water utility executives and independent experts. The research concludes that utilities worldwide expect to meet future demand, despite increased pressure on supplies, thanks to improvements in water productivity driven by the wide range of measures utilities and governments will take to ensure that water is used more efficiently. Read more about this here.

    Read the article

  • Ubuntu odd mouse and keyboard behavior when window gains inner focus

    - by Scott
    This morning on my Ubuntu 10.10 system, when I open a window - for example, System > Preferences > About Me - and click into a field such as "Work email", I can no longer close the window with the mouse! Clicking the X on the window does not work. I also lose the ability to click on anything else - clicking on the desktop, icons, menus, workspaces, etc. does nothing. Even the highlight effect when you hover over a folder on the desktop stops until the window is closed. If I open the same screen but do not click into a field, everything is fine - I can close the window with the X and everything else works. The same thing happens with several other windows I tried. Even Calculator - I can open it, and everything is fine until I click a button in the calculator; then I can't click on anything else and have to Alt-F4 to close the window. The system is only about a week old from a fresh install (64-bit Ubuntu, quad-core AMD machine). I uninstalled Wine, turned off remote desktop (disabled in startup), and also disabled visual assistance, Bluetooth, Dropbox, and Klipper in startup. Reboot, no difference. The only other non-standard thing I see in startup is nvidia. I'm using a Logitech USB mouse and a Saitek USB keyboard. It was working fine yesterday, and I can't think of anything I did or installed since. I switched themes, then went to Update Manager, saw two X server / X.Org related updates, installed them, rebooted, and NOW IT IS FINE! However, I then re-enabled Dropbox, Klipper, and remote desktop, rebooted, and the problem was back. Again I disabled them and rebooted. The problem is still there!! So somehow I fixed it, at least for a few minutes, but now it is back and I am out of ideas.

    Read the article

  • git changing head not reflected on co-dev's branch

    - by stevekrzysiak
    Basically, we undid history. I know this is bad, and I am already committed to avoiding it at all costs in the future, but what is done is done. Anyway, I issued a git push origin <1_week_old_sha>:master to undo some bad commits. I then deleted a buggered branch called release (which had also received some bad commits) from the remote and branched a new release off master. I pushed this to the remote. So basically, the remote master and release are clones, and just how I want them. The issue is that if I clone the repo anew (or work in my current repo), everything looks great... but when my co-devs delete their release branch and create a new one based off the new remote release I created, they still see all the old junk I tried to remove. I feel this has to do with some local .git files mistaking the new release branch for the old release. Any thoughts? Thanks.
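
    A hedged sketch of what each co-dev can run to rule out stale local state (branch names as above; the delete and reset steps are destructive to local-only commits):

        # Refresh remote-tracking refs and drop refs that were deleted on origin
        git fetch --prune origin

        # Rebuild the local release branch exactly at the rewritten remote tip
        git checkout master
        git branch -D release
        git checkout -b release origin/release

        # Sanity check: these commits should match the rewritten remote
        git log --oneline -5 release

    If the old junk still shows up, it probably also lives in their local master (for example via an earlier merge); running git reset --hard origin/master while on master realigns that too, at the cost of discarding any local commits.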

    Read the article

  • Increasing efficiency of N-Body gravity simulation

    - by Postman
    I'm making a space-exploration type game; it will have many planets and other objects that all have realistic gravity. I currently have a system in place that works, but if the number of planets goes above 70, the FPS decreases at a practically exponential rate. I'm making it in C# and XNA. My guess is that I should be able to do gravity calculations between 100 objects without this kind of strain, so clearly my method is not as efficient as it should be. I have two files, Gravity.cs and EntityEngine.cs. Gravity manages JUST the gravity calculations; EntityEngine creates an instance of Gravity and runs it, along with other entity-related methods.

    EntityEngine.cs (the only relevant piece of code, self-explanatory; when an instance of Gravity is made in EntityEngine, it passes itself (this) in, so that Gravity has access to entityEngine.Entities, a dictionary of all planet objects):

        public void Update()
        {
            foreach (KeyValuePair<string, Entity> e in Entities)
            {
                e.Value.Update();
            }
            gravity.Update();
        }

    Gravity.cs:

        namespace ExplorationEngine
        {
            public class Gravity
            {
                private EntityEngine entityEngine;
                private Vector2 Force;
                private Vector2 VecForce;
                private float distance;
                private float mult;

                public Gravity(EntityEngine e)
                {
                    entityEngine = e;
                }

                public void Update()
                {
                    // First loop
                    foreach (KeyValuePair<string, Entity> e in entityEngine.Entities)
                    {
                        // Reset the force vector
                        Force = new Vector2();

                        // Second loop
                        foreach (KeyValuePair<string, Entity> e2 in entityEngine.Entities)
                        {
                            // Make sure the second value is not the current value from the first loop
                            if (e2.Value != e.Value)
                            {
                                // Find the distance between the two objects. Because Fg = G * ((M1 * M2) / r^2),
                                // using Vector2.Distance() and then squaring it is pointless and inefficient,
                                // because Distance uses a sqrt and squaring the result simply cancels that sqrt.
                                distance = Vector2.DistanceSquared(e2.Value.Position, e.Value.Position);

                                // This makes sure that two planets do not attract each other if they are touching.
                                // Completely unnecessary once I add collision; for now it just keeps the planets
                                // from being glitchy. Performance is not significantly improved by removing this if.
                                if (Math.Sqrt(distance) > (e.Value.Texture.Width / 2 + e2.Value.Texture.Width / 2))
                                {
                                    // Calculate the magnitude of Fg (I'm using my own gravitational constant (G)
                                    // for the sake of time; I know it's 1 at the moment, but I've been changing it).
                                    mult = 1.0f * ((e.Value.Mass * e2.Value.Mass) / distance);

                                    // Calculate the direction of the force. Simply subtracting the positions and
                                    // normalizing works; this fixes diagonal vectors having a larger value and
                                    // basically makes VecForce a direction.
                                    VecForce = e2.Value.Position - e.Value.Position;
                                    VecForce.Normalize();

                                    // Add the vector for each planet in the second loop to a force var.
                                    // (I have tried Force += VecForce * mult and have not noticed much of an
                                    // increase in speed.)
                                    Force = Vector2.Add(Force, VecForce * mult);
                                }
                            }
                        }

                        // Add that force to the first loop's planet's position (later on I'll instead
                        // add to acceleration, to account for inertia).
                        e.Value.Position += Force;
                    }
                }
            }
        }

    I have used various tips (about gravity optimizing, not threading) from THIS question (that I asked yesterday). I've made this gravity method (Gravity.Update) as efficient as I know how to make it, but this O(N^2) algorithm still seems to be eating up all of my CPU power. Here is a LINK (Google Drive; go to File > Download, keep the .exe with the Content folder; you will need the XNA Framework 4.0 Redistributable if you don't already have it) to the current version of my game. Left click makes a planet, right click removes the last planet. The mouse moves the camera, and the scroll wheel zooms in and out. Watch the FPS and planet count to see what I mean about performance issues past 70 planets. (ALL 70 planets must be moving; I've had 100 stationary planets and only 5 or so moving ones while still having 300 FPS. The issue arises when 70+ are moving around.) After 70 planets are made, performance tanks exponentially: with fewer than 70 planets I get 330 FPS (I have it capped at 300); at 90 planets the FPS is about 2; more than that and it sticks around 0 FPS. Strangely enough, when all planets are stationary, the FPS climbs back up to around 300, but as soon as something moves it goes right back down to what it was; I have no systems in place to make this happen, it just does. I considered multithreading, but that previous question taught me a thing or two, and I see now that it's not a viable option. I've also thought maybe I could do the calculations on my GPU instead, though I don't think it should be necessary. I also do not know how to do this; it is not a simple concept, and I want to avoid it unless someone knows a really noob-friendly, simple way to do it that will work for an n-body gravity calculation. (I have an Nvidia GTX 660.) Lastly, I've considered using a quadtree-type system (a Barnes-Hut simulation). I've been told (in the previous question) that this is a good method that is commonly used, and it seems logical and straightforward; however, the implementation is way over my head, and I haven't found a good C# tutorial yet that explains it in a way I can understand or uses code I can eventually figure out. So my question is this: how can I make my gravity method more efficient, allowing me to use more than 100 objects (I can render 1000 planets at a constant 300+ FPS without gravity calculations)? And if I can't do much to improve performance (including with some kind of quadtree system), could I use my GPU to do the calculations?
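
    One constant-factor win that keeps the same physics is visiting each pair only once and applying the force to both bodies (Newton's third law), which halves the inner-loop work. A minimal sketch, assuming an Entity type with Position, Mass, a Radius property standing in for Texture.Width / 2, and a Force accumulator field (Radius and Force are assumptions, not members from the original code):

        // Symmetric O(N^2 / 2) gravity pass. Copying the dictionary values into a
        // list once per frame avoids enumerating the dictionary N^2 times and lets
        // us iterate pairs (i, j) with j > i.
        using System.Collections.Generic;
        using Microsoft.Xna.Framework;

        public static class GravityPass
        {
            private const float G = 1.0f; // stand-in gravitational constant, as in the post

            public static void Apply(List<Entity> bodies)
            {
                foreach (Entity b in bodies)
                    b.Force = Vector2.Zero;

                for (int i = 0; i < bodies.Count; i++)
                {
                    for (int j = i + 1; j < bodies.Count; j++)
                    {
                        Vector2 delta = bodies[j].Position - bodies[i].Position;
                        float distSq = delta.LengthSquared();

                        // Skip touching bodies; comparing squared distances avoids the sqrt.
                        float minDist = bodies[i].Radius + bodies[j].Radius;
                        if (distSq <= minDist * minDist)
                            continue;

                        float magnitude = G * (bodies[i].Mass * bodies[j].Mass) / distSq;
                        Vector2 dir = Vector2.Normalize(delta);

                        // Newton's third law: one computed pair updates both bodies.
                        bodies[i].Force += dir * magnitude;
                        bodies[j].Force -= dir * magnitude;
                    }
                }
            }
        }

    This stays O(N^2); it buys roughly a 2x speedup plus whatever the repeated dictionary enumeration was costing. A Barnes-Hut quadtree is what eventually drops the complexity to O(N log N).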

    Read the article

  • USB Storage Device Automount

    - by matto1990
    Under Ubuntu 10.04, one of the problems that appeared is that USB devices no longer automatically mount when plugged in. Normally I would get a pop-up asking what application I wanted to open the newly plugged-in device with, but now that doesn't happen. This happens regardless of how the device is formatted (NTFS or FAT32), and all other USB devices (printer, keyboard and mouse) work perfectly. My current workaround is to mount them manually using sudo mount /dev/... /media/..., but honestly I'm getting tired of having to do this. I'm happy to post any extra information you are likely to need. I know there will be lots of places I could look to find out what's going wrong, but I have no idea where to start really.
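
    Until the automounter itself is fixed, a slightly less manual workaround is to go through udisks, which creates the mount point under /media and handles permissions for you (this assumes the udisks package that ships with 10.04; replace /dev/sdb1 with your device, e.g. from dmesg | tail):

        udisks --mount /dev/sdb1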

    Read the article

  • Case Management Model and Notation (CMMN) by Torsten Winterberg

    - by JuergenKress
    The beta version of the current working draft of the new OMG paper can be found here. Figure 72 shows an example of how a case (here: writing a document) can be modeled using CMMN elements. Table 43 explains where the different types of decorators can be used. The meaning of the elements and the decorators is explained in the CMMN beta document. Read the full article here. SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags: ACM,BPM,Torsten Winterberg,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • What is a modern software engineer's resume supposed to look like? [on hold]

    - by willOEM
    My post-secondary education was primarily focused on the sciences (Molecular Biology and Bioinformatics), but now I work primarily as a software developer. I feel like my resume is totally inappropriate for my current career goals. Listing laboratory skills and accomplishments is not the same as listing software development skills and accomplishments. Oddly enough, I can't seem to find any resources on writing an engineer's resume online that aren't more than 5 years old. GitHub and StackExchange profiles seem like resume items as important as anything, yet I don't know how to list them on a resume, or whether they are appropriate at all. So what is a modern software engineer's resume supposed to look like?

    Read the article

  • Gigabit network limited to 25MB/s by CPU. How to make it faster?

    - by netvope
    I have an Acer Aspire R1600-U910H with an nForce gigabit network adapter. Its maximum TCP throughput is about 25 MB/s, and apparently it is limited by the single-core Intel Atom 230: when the maximum throughput is reached, the CPU usage is about 50%-60%, which corresponds to full utilization considering this is a Hyper-Threading enabled CPU. The same problem occurs on both Windows XP and Ubuntu 8.04. On Windows, I have installed the latest nForce chipset driver, disabled power saving features, and enabled checksum offload. On Linux, the default driver has checksum offload enabled; there is no Linux driver available on Nvidia's website. ethtool -k eth0 shows that checksum offload is enabled:

        Offload parameters for eth0:
        rx-checksumming: on
        tx-checksumming: on
        scatter-gather: on
        tcp segmentation offload: on
        udp fragmentation offload: off
        generic segmentation offload: off

    The following is the output of powertop when the network is idle:

        Wakeups-from-idle per second : 61.9     interval: 10.0s
        no ACPI power usage estimate available
        Top causes for wakeups:
          90.9% (101.3)   <interrupt> : eth0
           4.5% (  5.0)   iftop : schedule_timeout (process_timeout)
           1.8% (  2.0)   <kernel core> : clocksource_register (clocksource_watchdog)
           0.9% (  1.0)   dhcdbd : schedule_timeout (process_timeout)
           0.5% (  0.6)   <kernel core> : neigh_table_init_no_netlink (neigh_periodic_timer)

    And when the maximum throughput of about 25 MB/s is reached:

        Wakeups-from-idle per second : 11175.5  interval: 10.0s
        no ACPI power usage estimate available
        Top causes for wakeups:
          99.9% (22097.4) <interrupt> : eth0
           0.0% (  5.0)   iftop : schedule_timeout (process_timeout)
           0.0% (  2.0)   <kernel core> : clocksource_register (clocksource_watchdog)
           0.0% (  1.0)   dhcdbd : schedule_timeout (process_timeout)
           0.0% (  0.6)   <kernel core> : neigh_table_init_no_netlink (neigh_periodic_timer)

    Notice the 20000+ interrupts per second. Could this be the cause of the high CPU usage and low throughput? If so, how can I improve the situation? The other computers in the network can usually transfer at 50+ MB/s without problems. And a minor question: how can I find out which driver is in use for eth0?
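
    Two hedged starting points (whether -C is accepted depends on the driver, here likely forcedeth, so it may be rejected):

        # Answers the minor question: show the kernel driver bound to eth0
        ethtool -i eth0

        # Ask the NIC to coalesce interrupts, i.e. batch packets per interrupt;
        # the microsecond values are illustrative, not tuned
        sudo ethtool -C eth0 rx-usecs 100 tx-usecs 100

    Fewer interrupts per second means fewer context switches for the Atom to service, which is usually the first lever to try when a single core saturates at modest throughput.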

    Read the article

  • Programming for Digital frames

    - by spartan2417
    A project has recently come to my attention, but I have no idea where to start or even if it's possible. The idea revolves around programming a clock that is displayed inside a digital photo frame. The user would then be able to put different pictures corresponding to different times on a USB pen drive, for example, which would load as soon as you plug the USB in. The project itself would be a really neat project - if it were just on a computer. I have no idea whether what I'm talking about is even possible on a digital photo frame, and if it is, in what language? Anyone who has any input at all would be great. My current idea is to maybe have a small device at the back (an SSD?) that runs the program through a screen, completely bypassing standard digital photo frames - though again I don't know where to begin with this. And yes, I've tried Google (although it helps to know what to Google).

    Read the article

  • Google Analytics Content Experiments for non-simultaneous tests

    - by mnort9
    I really like how Google Analytics displays the results of content experiments. However, it seems the tool only works for simultaneous tests. I'd like to use the tool without implementing the page-variation code on my site. For example, I want to test copy on an ecommerce category page. The original page variation would be the current page for the past 2,500 visits. After making the copy changes, the new variation would be for the next 2,500 visits. I realize I can simply record the metrics before and after each variation, but I'd like to take advantage of Google's presentation of the experiment. Is it possible to use Content Experiments in this way?

    Read the article

  • Lining things up while using columns

    - by Charles
    I have a request that may not be possible. I'd like to line up the elements of a form so that the inputs all start at the same place:

        Name:                             [          ]
        Company:                          [          ]
        Some question with a long name:   [          ]

    But my list is (somewhat) long and I would like to show it in multiple columns on screens that are wide enough. Ideally, I'd find a POSH method (table-free is semantically appropriate, I think) that works on a reasonable number of browsers. My current page uses a table. I tried CSS with:

        columns: auto;
        -moz-column-count: auto;
        -moz-column-width: auto;
        -webkit-column-count: auto;
        -webkit-column-width: auto;

    but Firefox (at least) won't break a table across columns.
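
    As a table-free sketch (class names are hypothetical): fixed-width inline-block labels do the lining up, and break-inside keeps each label/input pair inside one column.

        .form-columns {
          -moz-column-count: 2;
          -webkit-column-count: 2;
          column-count: 2;
        }
        .form-columns p {
          /* keep each label/input pair on one line within one column */
          -webkit-column-break-inside: avoid;
          page-break-inside: avoid;
          break-inside: avoid;
          margin: 0 0 .5em;
        }
        .form-columns label {
          display: inline-block;
          width: 14em;        /* tune to the longest question */
          text-align: right;
        }

    The markup would be one <p><label>...</label> <input></p> per row inside a <div class="form-columns">; the column properties apply to the wrapper instead of a table, which sidesteps the Firefox limitation.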

    Read the article

  • Are there other ways to do insert/update/delete on a remote Oracle database?

    - by gunbuster363
    I asked a question recently concerning the speed of insert/update/delete execution through the JDBC driver on a remote machine, but the problem cannot be solved easily. I would like to ask: is there any other way to execute the inserts/updates/deletes against Oracle? The current situation is this: the DB is on a separate machine from the Java program used to update it. I looked around the internet and found people suggesting pure SQL or PL/SQL to do the updates; is that possible? And do we need to run the SQL or PL/SQL on the local machine? I have no knowledge of PL/SQL, so I am not sure whether we can create some kind of script and call it against a remote machine. Let's say the situation is like this: the input data is on machine A, and the original Java program is also on machine A, but Oracle is on machine B. Is there any approach other than JDBC?
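
    If the cost is one network round trip per row, JDBC batching often helps more than changing languages; the other common route is a PL/SQL stored procedure, which runs inside Oracle on machine B and is merely invoked from machine A. A minimal batching sketch (the connection string, table, and columns are hypothetical):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;

        public class BatchInsert {
            public static void main(String[] args) throws Exception {
                try (Connection conn = DriverManager.getConnection(
                        "jdbc:oracle:thin:@machineB:1521:ORCL", "user", "password")) {
                    conn.setAutoCommit(false); // commit per batch, not per row
                    try (PreparedStatement ps = conn.prepareStatement(
                            "INSERT INTO my_table (id, val) VALUES (?, ?)")) {
                        for (int i = 0; i < 10000; i++) {
                            ps.setInt(1, i);
                            ps.setString(2, "row " + i);
                            ps.addBatch();           // queue locally
                            if (i % 500 == 499) {
                                ps.executeBatch();   // one round trip per 500 rows
                            }
                        }
                        ps.executeBatch();           // flush the remainder
                    }
                    conn.commit();
                }
            }
        }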

    Read the article

  • Problem with the screen resolution on 12.04

    - by sveinn
    I just installed Ubuntu on my laptop. The screen resolution is stuck at 1024x768, but the screen is made for 1280x800. When I run xrandr I get:

        xrandr: Failed to get size of gamma for output default
        Screen 0: minimum 800 x 600, current 1024 x 768, maximum 1024 x 768
        default connected 1024x768+0+0 0mm x 0mm
           1024x768       61.0*
           800x600        61.0

    1280x800 isn't offered, and I get the gamma-size error. I was going to look into the Xorg.conf file but I couldn't locate it. 1280x800 was displayed in Windows 7, and I think it is also being displayed in GRUB before Ubuntu starts. Here are some details about my computer:

        CPU: Intel Atom D2500 1.86 GHz
        Chipset: Intel 945GSE + ICH7M
        LCD: 14" TFT 16:9
        Resolution ratio: 1280x800
        Video card: Intel integrated GMA950

    Does anyone know how to fix this?
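
    A common workaround is to add the missing mode by hand; with the unaccelerated "default" driver this can fail (the gamma error hints the proper Intel driver is not loaded), in which case installing the Intel GMA driver is the real fix. The modeline numbers below are what cvt 1280 800 typically prints; use the output from your own machine:

        cvt 1280 800
        xrandr --newmode "1280x800_60.00" 83.50 1280 1352 1480 1680 800 803 809 831 -hsync +vsync
        xrandr --addmode default 1280x800_60.00
        xrandr --output default --mode 1280x800_60.00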

    Read the article

  • $ON_USER returning root instead of $USER

    - by Nathanel Titane
    Hello everybody! With Natty coming out soon, I've been at work updating my deployment and self-configuration script to make my desktop on 11.04 run and look the way I want it to. One bummer is that dbus seems to have changed and no longer permits, in the same manner Lucid and Maverick did, identifying the current user from a terminal call using grep and cat. Ideally, to run the script, I would sudo -s and then launch it as:

        # chmod +x install && ./install

    Instead of returning my user name, it now returns root, applies changes to the root profile, and aborts whenever paths do not correspond. Here is my script header:

        #!/bin/bash
        ON_USER=$(echo ~ | awk -F'/' '{ print $1 $2 $3 }' | sed 's/home//g')
        export $(grep -v "^#" ~/.dbus/session-bus/`cat /var/lib/dbus/machine-id`-0)
        if sudo -u $ON_USER test -z "$DBUS_SESSION_BUS_ADDRESS" ; then
            eval `sudo -u $ON_USER dbus-launch --sh-syntax --exit-with-session`
        fi
        RELEASE=$(lsb_release -cs)

    How could I make it return the actual user now that Natty is coming? Thanks for the help.
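
    Under sudo -s, both ~ and $USER resolve to root, which is why the awk pipeline returns root. $SUDO_USER, however, still holds the invoking user's name. A minimal sketch of the header rewritten around it (the getent lookup avoids assuming /home/<user>):

        #!/bin/bash
        # Fall back to $USER when the script is not run through sudo
        ON_USER=${SUDO_USER:-$USER}
        ON_HOME=$(getent passwd "$ON_USER" | cut -d: -f6)
        export $(grep -v "^#" "$ON_HOME/.dbus/session-bus/$(cat /var/lib/dbus/machine-id)-0")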

    Read the article

  • test coverage reality

    - by iPhoneDeveloper
    I am NOT doing test-driven development; I write my test classes after the actual code is written. In my current project I have test coverage (line coverage) of 70% for 3,000 lines of Java code (using JUnit, Mockito and Sonar for testing). But I feel I am not actually covering and catching 70% of the problems that can occur. So my question is: in theory, is it possible to have 100% line coverage that is meaningless in practice because of the low quality of the test code, so that a well-written 40% is much better than a bad 100%? Or can we always say that line coverage more or less gives the percentage of all covered issues?
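
    A hypothetical illustration of why a high line-coverage number can verify nothing: a test that executes code without asserting on it is counted as coverage all the same.

        import org.junit.Test;

        public class CoverageIllusionTest {

            static int divide(int a, int b) {
                return a / b; // "covered" by the test below, yet never verified
            }

            @Test
            public void executesButVerifiesNothing() {
                divide(10, 2); // no assertion: full line coverage, zero protection
            }
        }

    Mutation testing (deliberately breaking the code and checking that the tests fail) is one way to measure what line coverage alone cannot.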

    Read the article

  • Windows 7 Upgrades to Come at Cheaper Prices

    Are you a Windows 7 user? Did you know that you can upgrade your current version of Windows 7 to a more advanced version with little effort? Well, if you didn't know that by now, now you know. Many others left in the dark about this ease of upgrading will likely be informed soon, as Microsoft launched an upgrade assault of sorts on consumers on April 4....

    Read the article

  • Expected salary for software engineer? Am I under or over paid? [closed]

    - by Asdasd Asdasd
    I work for a reasonably large tech company in Boston, MA. My company has about 1.2 billion in revenue and around 3,500 employees. I have 6 years of industry experience and my current pay package is as follows:

        Base salary: 97,000
        Bonus: 10,000/year (everyone always gets 100% of this... I don't know why they bother to call it a bonus)
        RSU stock: 8,000/year at present-day valuation; my vesting schedule covers me for the next 5 years

    That brings my total pay to ~115,000/year. Given that, would folks say I am under/average/over paid? I read so much about how engineers at Google and Facebook are making ridiculous sums of money (almost 200k with bonuses included), and it makes me question my pay package. Thanks

    Read the article

  • Data architecture for event log metrics?

    - by elliot42
    My service has a large ongoing number of user events, and we would like to do things like "count occurrences of event type T since date D." We are trying to make two basic decisions:

    1. What to store? Storing every event vs. only storing aggregates:
       - (Event-log style) log every event and count them later, vs.
       - (Time-series style) store a single aggregated "count of event E for date D" for every day

    2. Where to store the data:
       - In a relational database (particularly MySQL)
       - In a non-relational (NoSQL) database
       - In flat log files (collected centrally over the network via syslog-ng)

    What is standard practice / where can I read more about comparing the different types of systems? Additional details:

    - The total event stream is large, potentially hundreds of thousands of entries per day
    - But our current need is only to count certain types of events within it
    - We don't necessarily need real-time access to the raw data or the aggregation results

    IMHO, "log all events to files, crawl them at a later time to filter and aggregate the stream" is a pretty standard UNIX way, but my Rails-y compatriots seem to think that nothing is real unless it's in MySQL.
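
    For scale, a sketch of the time-series option in MySQL (table and column names are hypothetical); the event-log option is the same data minus the rollup:

        CREATE TABLE event_counts (
          event_type VARCHAR(64) NOT NULL,
          day        DATE        NOT NULL,
          n          BIGINT      NOT NULL DEFAULT 0,
          PRIMARY KEY (event_type, day)
        );

        -- streaming or nightly rollup: one upsert per (type, day)
        INSERT INTO event_counts (event_type, day, n)
        VALUES ('signup', '2012-06-01', 1)
        ON DUPLICATE KEY UPDATE n = n + 1;

        -- "count of event type T since date D" becomes a range scan
        SELECT SUM(n) FROM event_counts
        WHERE event_type = 'signup' AND day >= '2012-06-01';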

    Read the article

  • Real Widget Adds WP7-like Tiles to Android

    - by Jason Fitzpatrick
    Android: If you want the look of Windows Phone 7 tiles on your Android phone without completely replacing your launcher and interface, Real Widget offers the shortcut tiles without the total overhaul. You can customize the widgets to launch apps, system functions, and more, enjoying the WP7 tiled look without sacrificing the functionality of your current Android launcher. Hit up the link below to check out more screenshots and grab a free copy to take for a spin. Real Widget is Android 4.0+ only. Real Widget [via Addictive Tips]

    Read the article

  • Best place to learn Component Object Model

    - by Ritwik G
    I need to learn COM for my current project. I am a noob, and I haven't been able to find good starting points for COM. I have been looking into MSDN and googling. I did find some interesting articles on codeproject.com, but I am not satisfied. Why do we use COM? Why does it exist? In what places does it exist? And so on. So, will you please tell me where I can find answers to these questions?

    Read the article

  • Java Certification Programs in India

    - by user2948246
    I am a current Masters student in the USA, majoring in Computer Science. I did not get a summer internship and want to update my resume, so I decided to come to India to do some courses. I want to know whether a Java certification program from NIIT or APTECH carries any weight in the US when I apply for a full-time position. I also plan on taking a course on Hadoop and learning a couple of languages on my own.

    Read the article

  • Programmer's career path

    - by kender
    I've been working as a programmer for the last few years - different companies and freelancing, mostly developing internal business web applications (well, that seems to be the current model of development). Besides simple coding I was working on specs, designing applications, and all those around-it things. My question is: what career path should I be aiming for? Is it working on code for the rest of my life? :) Or do programmers make good manager-position people (I know, those require quite a different set of skills), and should I try to improve myself in that direction? I know it's very subjective. Thing is, lately I find myself much more into the designing/working-on-specs part of a development project than the coding itself. How do you see it? Would you like to go from development to management? Would you like to work on a project with a manager who used to be a coder? Would you like to hire one? :)

    Read the article

  • Beginner help: where to begin

    - by shad
    I want to learn how to program. Mainstream programming languages like Java and C++/C# are my primary target. Currently, I am a high school student planning to take programming and digital electronics courses next semester. My biggest problem is that I do not know where to start, and I have no one to consult with. Should I take a course at my local community college this summer, get some books, or try learning from some internet websites? What would be the best option, a book or a website?

    Read the article

  • Engines of Loss and Gain

    - by andyleonard
    Introduction This post is the fortieth part of a ramble-rant about the software business. The current posts in this series can be found on the series landing page. This post is about winning (no really). NASCAR I like watching NASCAR races. On the surface, a race looks like a bunch of folks driving fast on a circuitous course. But there's much more to it than that. There's engineering and strategy and, frankly, a little luck. A NASCAR race is a lot like life when you look beneath the surface. Forty-three...(read more)

    Read the article

  • Combine auto-syncing cloud and VCS

    - by ComFreek
    This question brought me to another question: is there any VCS, or tool for a VCS, which automatically backs up your source code between the last checkout and the current changes? I had the problem of losing uncommitted source code changes just one week ago. I did not want to commit yet because the changes were incomplete. But then an error when moving the data to a USB stick caused the data loss. That's the opposite of what a cloud service (like Google Drive, SkyDrive, Dropbox, ...) does: it tracks each change you make! Have you lost your data? That's no problem, because you have the latest version online. So what would a combined solution look like? It would offer the full functionality of a VCS, including auto-syncing of any intermediate changes between two commits/checkouts to a temporary online location.
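
    Plain git can approximate the cloud half of this today: git stash create snapshots a dirty working tree as an unreferenced commit without touching HEAD, the index, or the stash list, and pointing a ref at it keeps it from being garbage-collected. A minimal sketch one could run from cron every few minutes (the ref name is made up):

        #!/bin/sh
        # Snapshot uncommitted changes, if any, into a backup ref
        snapshot=$(git stash create "autosave $(date -u +%FT%TZ)")
        if [ -n "$snapshot" ]; then
            git update-ref refs/backups/autosave "$snapshot"
        fi
        # Recover later with: git stash apply refs/backups/autosave

    Pushing that ref to a remote on the same schedule would add the "temporary online location" half.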

    Read the article
