Search Results

Search found 4580 results on 184 pages for 'faster'.


  • How to document and teach others "optimized beyond recognition" computationally intensive code?

    - by rwong
    Occasionally there is the 1% of code that is computationally intensive enough that it needs the heaviest kind of low-level optimization. Examples are video processing, image processing, and signal processing in general. The goals are to document and to teach the optimization techniques, so that the code does not become unmaintainable and prone to removal by newer developers. (*) (*) Notwithstanding the possibility that the particular optimization is completely useless on some unforeseeable future CPUs, such that the code will be deleted anyway. Considering that software offerings (commercial or open-source) retain their competitive advantage by having the fastest code and making use of the newest CPU architectures, software writers often need to tweak their code to make it run faster while getting the same output for a certain task, whilst tolerating a small amount of rounding error. Typically, a software writer can keep many versions of a function as documentation of each optimization / algorithm rewrite that takes place. How does one make these versions available for others to study their optimization techniques?
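
    One practical way to keep old versions around as teaching material is to leave the readable reference implementation in the tree next to the tuned one and pin them together with a regression test. The sketch below is illustrative only; the function names and the tolerance are assumptions, not from the post:

        // A readable reference version kept beside the optimized version, with a
        // test asserting they agree within a documented rounding tolerance.
        #include <cassert>
        #include <cmath>
        #include <cstddef>
        #include <vector>

        // Reference version: slow but obviously correct; doubles as documentation.
        std::vector<float> scale_ref(const std::vector<float>& in, float gain) {
            std::vector<float> out(in.size());
            for (std::size_t i = 0; i < in.size(); ++i) out[i] = in[i] * gain;
            return out;
        }

        // Optimized version: same contract; the tuned body (SIMD, reordered math,
        // cache blocking, ...) would replace this plain loop.
        std::vector<float> scale_fast(const std::vector<float>& in, float gain) {
            std::vector<float> out(in.size());
            for (std::size_t i = 0; i < in.size(); ++i) out[i] = in[i] * gain;
            return out;
        }

        int main() {
            std::vector<float> data(1024, 1.5f);
            auto a = scale_ref(data, 2.0f);
            auto b = scale_fast(data, 2.0f);
            for (std::size_t i = 0; i < a.size(); ++i)
                assert(std::fabs(a[i] - b[i]) <= 1e-5f);   // documented tolerance
            return 0;
        }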

    Read the article

  • BI&EPM in Focus Oct 2012

    - by Mike.Hallett(at)Oracle-BI&EPM
    Customers:
    - Iluka Resources Improves Business Insight into Mining Operations Through Significantly Faster, Customized Analyses
    - Banco do Brasil Monitors Budgets in Real Time, Generates Financial Reports in Minutes Instead of Months
    - General Dynamics Improves Budgeting and Planning and Accelerates Rate Changes by Using Integrated Enterprise Performance Management Suite
    - Facebook achieves world-wide automation of financial close task tracking and management of account reconciliations with Oracle Hyperion Financial Close Management (link)
    - Hess Consolidates Multiple SAP General Ledgers with Oracle Hyperion (link)
    - Navistar Leads with Cutting Edge Hyperion Platform, Including HSF, HPCM (link)
    Enterprise Performance Management:
    - Oct 10: Navistar Leverages DRM (Rolta Solutions) (link)
    - Replay: Integrated Business Planning, Featuring Leggett & Platt (link)
    Business Intelligence:
    - Report: From Overload to Impact: An Industry Scorecard on Big Data Business Challenges (link | press release)
    - Oct 10: The Top Five Things You Should Know When Migrating from an Old BI Technology to Oracle Business Intelligence Enterprise Edition (Performance Architects) (link)

    Read the article

  • Jaroslav Tulach's Report on NetBeans at OSGiCon

    - by Geertjan
    The latest NetBeans Podcast was recorded over the last few weeks and released yesterday. Aside from the NetBeans news items and interviews (interesting stuff about Joel Murach's new Java book using NetBeans, as well as the new developments in the NetBeans Groovy editor), there is, as always, an "API Design Tip" segment. That's always worth listening to, of course, but especially this time, because here Jaroslav Tulach talks at some length about his recent trip to OSGiCon, as well as the history and status of OSGi support in NetBeans IDE. Start listening from just before the 30th minute (i.e., the final segment) if you're interested in this particular topic: https://blogs.oracle.com/nbpodcast/entry/netbeans_podcast_60 For example, hear about how JDeveloper got faster by switching from Equinox to Netbinox. And... will Eclipse find itself on the same OSGi container too?

    Read the article

  • Using IE 9 as my primary browser

    - by Robert May
    With the release of Internet Explorer 9 RC, the browser looks to be in a usable state. So far, my experience has been positive. However, one area where I am having problems is when sites are using the jQuery UI library. Versions older than 1.8 cause IE 9.0 to be unable to drag and drop. This is a real pain, especially at sites like Agile Zen, where dragging and dropping is a primary bit of functionality. Now that IE 9 is a release candidate, we’ll see how quickly these things improve. I expect things to be rough, but so far, I’m really liking IE 9. There’s more real estate than Chrome (it’s the tabs inline with the address bar) and it’s faster than Chrome 9.0 and FF 3.6.8 (as tested on my own machine). The biggest drawback so far is that because IE has behaved so badly in the past, sites still expect it to behave badly, which breaks things today. Technorati Tags: Internet Explorer

    Read the article

  • Is IE9 a modern browser?

    - by TATWORTH
    At http://people.mozilla.com/~prouget/ie9/ there is a very provocative article entitled "Is IE9 a modern browser?". There is a rebuttal by Tim Sneath at http://blogs.msdn.com/b/tims/archive/2011/02/15/a-modern-browser.aspx that is well worth a look. Certainly IE9 is already superior to its predecessors. My comment on the matter is that those who consider IE9 to be non-standards-compliant should submit tests to the W3C to demonstrate the non-compliance. Upon acceptance by the W3C, all the competing browsers can then be re-tested. I prefer objective tests to subjective opinion. I have used IE9, and on some sites such as Hotmail it is noticeably faster. I have so far been unable to apply the promised IE9 lockout of spyware cookies. With Firefox, I just install NoScript and never enable spyware sites.

    Read the article

  • What makes an application memory bandwidth bound?

    - by TheLQ
    This has been something that's been bothering me for a while: what makes an application memory bandwidth bound? For example, take this monstrosity of a computer that calculated the 5 trillionth digit of pi (and later the 10 trillionth digit). I was surprised that they chose the smaller but faster 98 GB of RAM at 1066 MHz instead of the larger but slower 144 GB at 800 MHz. This is especially surprising considering they are using a 22 TB HD array to store the results of the computation; more RAM means less need for hard drives. Maybe it's because I don't write applications for HPC servers, but how would RAM be the bottleneck? Are there any other non-HPC applications that usually run into this problem?
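
    Roughly speaking, a workload becomes memory bandwidth bound when it performs only a few arithmetic operations per byte it pulls from RAM, so the cores spend most of their time waiting on memory rather than computing. A minimal sketch of such a kernel follows (a STREAM-style triad over arrays far larger than cache; the sizes and the bandwidth estimate are illustrative, not from the question):

        // STREAM-style "triad": 2 flops per 24 bytes moved, so throughput is set
        // by how fast RAM can stream data, not by how fast the ALUs can multiply.
        #include <chrono>
        #include <cstddef>
        #include <cstdio>
        #include <vector>

        int main() {
            const std::size_t n = std::size_t(1) << 25;   // ~33M doubles per array, far larger than cache
            std::vector<double> a(n, 1.0), b(n, 2.0), c(n, 3.0);

            auto t0 = std::chrono::steady_clock::now();
            for (std::size_t i = 0; i < n; ++i)
                a[i] = b[i] + 3.0 * c[i];                 // one add + one multiply per element
            auto t1 = std::chrono::steady_clock::now();

            double secs   = std::chrono::duration<double>(t1 - t0).count();
            double gbytes = 3.0 * n * sizeof(double) / 1e9;   // read b, read c, write a
            std::printf("approx. effective bandwidth: %.1f GB/s\n", gbytes / secs);
            return 0;
        }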

    Read the article

  • Google I/O 2012 - Optimizing Your Code Using Features of Google APIs

    Google I/O 2012 - Optimizing Your Code Using Features of Google APIs. Sven Mawson. Google APIs support a variety of features designed to enable state-of-the-art development. In this session, you will learn how to create applications that use performance-enhancing features to make your code run faster and use fewer resources. Some features we'll describe include batching, requests for partial response, and efficient ways to handle media. For all I/O 2012 sessions, go to developers.google.com. From: GoogleDevelopers. Time: 44:50.

    Read the article

  • How to capture the screen in DirectX 9 to a raw bitmap in memory without using D3DXSaveSurfaceToFile

    - by cloudraven
    I know that in OpenGL I can do something like this:

        glReadBuffer( GL_FRONT );
        glReadPixels( 0, 0, _width, _height, GL_RGB, GL_UNSIGNED_BYTE, _buffer );

    and it's pretty fast; I get the raw bitmap in _buffer. When I try to do this in DirectX, assuming that I have a D3DDevice object, I can do something like this:

        if (SUCCEEDED(D3DDevice->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &pBackbuffer))) {
            HRESULT hr = D3DXSaveSurfaceToFileA(filename, D3DXIFF_BMP, pBackbuffer, NULL, NULL);
        }

    But D3DXSaveSurfaceToFile is pretty slow, and I don't need to write the capture to disk anyway, so I was wondering if there was a faster way to do this.
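
    One common alternative (a sketch, not necessarily the only or fastest answer) is to copy the back buffer into a system-memory surface with GetRenderTargetData and then read the raw pixels via LockRect, skipping the BMP encoding and disk I/O entirely. The 32-bit pixel format is an assumption:

        // Copy the D3D9 back buffer into system memory and return the raw pixels.
        // Error handling is trimmed for brevity; a 32-bit (XRGB/ARGB) back buffer
        // is assumed for the bytes-per-pixel calculation.
        #include <d3d9.h>
        #include <cstring>
        #include <vector>

        std::vector<unsigned char> CaptureBackBuffer(IDirect3DDevice9* device)
        {
            IDirect3DSurface9* backBuffer = NULL;
            device->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &backBuffer);

            D3DSURFACE_DESC desc;
            backBuffer->GetDesc(&desc);

            // System-memory surface with the same size and format as the back buffer.
            IDirect3DSurface9* sysMem = NULL;
            device->CreateOffscreenPlainSurface(desc.Width, desc.Height, desc.Format,
                                                D3DPOOL_SYSTEMMEM, &sysMem, NULL);

            // GPU-to-CPU copy; still the expensive step, but no file encoding or disk write.
            device->GetRenderTargetData(backBuffer, sysMem);

            D3DLOCKED_RECT lr;
            sysMem->LockRect(&lr, NULL, D3DLOCK_READONLY);

            const int bpp = 4;   // assumption: X8R8G8B8 / A8R8G8B8 back buffer
            std::vector<unsigned char> pixels(desc.Width * desc.Height * bpp);
            for (UINT y = 0; y < desc.Height; ++y) {
                // Copy row by row, since Pitch may be wider than Width * bpp.
                const unsigned char* src =
                    static_cast<const unsigned char*>(lr.pBits) + y * lr.Pitch;
                std::memcpy(&pixels[y * desc.Width * bpp], src, desc.Width * bpp);
            }

            sysMem->UnlockRect();
            sysMem->Release();
            backBuffer->Release();
            return pixels;
        }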

    Read the article

  • Best practices for launching a new software version

    - by steve
    I rebuilt a web app to replace a version that we have been using for the last 3-4 years. We have a few thousand clients and a few hundred active users per day. The functionality is basically the same. The new version is a little bit faster, with a few new features, and there are a lot of behind-the-scenes changes that the clients will never see. The UI is quite different but ultimately much easier to use and navigate. How should I go about having our clients stop using the old system and start using the new one? I am currently putting together a video that will play on the web site as well as within the app. The video will go through the pages and focus on some key changes. I was also thinking about an intro page that will display once the user logs in and explains some of the features.

    Read the article

  • Slow wireless about 1/3 bandwidth of wired from cable modem

    - by Rhino
    HP DV6000 with a Broadcom wireless-G adapter; Netgear WNDR3400 router, properly configured. Windows laptops using wireless G and N are considerably faster than mine. Ubuntu 12.04 LTS, all restricted drivers installed and everything up to date. 50 Mbps wired and 9-21 Mbps wireless on Ubuntu.

        00:14.0 Bridge [0680]: NVIDIA Corporation MCP51 Ethernet Controller [10de:0269] (rev a3)
            Subsystem: Hewlett-Packard Company Presario V6133CL [103c:30b7]
            Kernel driver in use: forcedeth
        03:00.0 Network controller [0280]: Broadcom Corporation BCM4311 802.11a/b/g [14e4:4312] (rev 02)
            Subsystem: Hewlett-Packard Company Broadcom 802.11a/b/g WLAN [103c:1370]
            Kernel driver in use: wl

    Read the article

  • Learning low latency C++ and Java?

    - by user997112
    I'm currently in a role where I don't get to write any C++ or Java. However, the role is good because it provides me with exposure to the business side (I'm interested in finance). Eventually I would like to get into high-frequency trading infrastructure. Therefore, outside of work hours I'd like to maximise the knowledge I can gain about high-performance Java and C++. I already have the Java Performance Tuning book, which is OK but not impressive. Can people recommend any more low-latency blogs/books/websites for learning about making C++/C/Java or even Unix very fast? Or perhaps about making the network parts of the OS (if rewriting Unix components) faster? EDIT: Or perhaps we could make this THE thread for advice on writing fast code.

    Read the article

  • Building a Map based WebApp fast?

    - by NLemay
    I want to build a web app which is basically a map with points of interest (POI), filters, and a list of those points. Something really similar to Airbnb, or any other map-based app. Of course, I could just take the Google Maps API and build what's around it. But I guess a lot of people have already done that, and maybe I could use their work to make mine faster. Here's what I need:
    - Adding multiple POI
    - A list of the POI that are shown on the map
    - A way to filter POI
    - Must handle a lot of POI gracefully
    - Must work on mobile and tablet
    I already know one template that can do nearly all of this; it is called Bootleaf. But I would like to know if you know of others that might work better.

    Read the article

  • How To Create Custom Keyboard Shortcuts For Browser Actions and Extensions in Google Chrome

    - by Chris Hoffman
    Geeks love keyboard shortcuts – they can make you faster and more productive than clicking everything with your mouse. We’ve previously covered keyboard shortcuts for Chrome and other browsers, but you can assign your own custom keyboard shortcuts, too. Google Chrome includes a built-in way to assign custom keyboard shortcuts to your browser extensions. You can also use an extension created by a Google employee to create custom keyboard shortcuts for common browser actions – and less common ones.

    Read the article

  • Is there a purpose for using pull requests on my own repo if I am the only developer?

    - by marco-fiset
    So I got started with a real project of mine on GitHub, and things are going pretty well; ideas are flowing a lot faster than I initially thought. In order to keep things organized, I set up some branches so I can develop different features separately. Now when I push a branch to GitHub, I have that section where I have two buttons: Pull Request and Compare, with the name of the branch I recently pushed. I understand the purpose of the Compare button, but I don't get why I would want to create a pull request on my own repo. Can someone explain to me why I would do that? Is it useful to make pull requests on my own repo if I am the only developer?

    Read the article

  • How to implement the light trails for a tron game?

    - by Link
    Well, I was creating a TRON-style game, but had an issue with creating the actual light trails for the game. What I'm doing currently is this: I have an array the same size as my window in pixels, declared like this:

        int* collision[800][600];

    Then when the bike goes over a certain pixel, it is marked with a 1 for traveled on. However, what is the most efficient way to create a working light trail display? I tried to do something like this:

        int i, j;
        for(i=0; i<800; i++)
            for(j=0; j<600; j++)
                if(*collision[i][j] == 1)
                    Image::applySurface(i, j, trailSurface, gameScreen);

    But it isn't working properly. It just fills the whole screen with a sprite instead. What's a better/faster/working way to do this?
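
    Two things stand out in the snippet above: collision is an 800x600 array of uninitialized pointers (so dereferencing them is undefined behaviour), and blitting a full-screen trailSurface at any position will cover the whole window. Below is a sketch of a working approach that reuses the question's Image::applySurface, trailSurface and gameScreen names; the SDL-style surface type is an assumption, and trailSurface is assumed to be a small tile such as 1x1:

        // Store plain flag values instead of pointers, and blit a small trail
        // tile at every marked pixel. Only forward declarations are given for
        // the question's own types and helpers.
        struct SDL_Surface;   // assumption: an SDL-style surface type
        namespace Image {
            void applySurface(int x, int y, SDL_Surface* src, SDL_Surface* dst);
        }

        const int kWidth  = 800;
        const int kHeight = 600;

        // Zero-initialized grid; 1 marks a pixel the bike has traveled on.
        static unsigned char collision[kWidth][kHeight] = {};

        void markVisited(int x, int y) {
            if (x >= 0 && x < kWidth && y >= 0 && y < kHeight)
                collision[x][y] = 1;
        }

        // Draw a small trail tile at every marked pixel; a full-screen trailSurface
        // would cover the whole window regardless of position, which matches the
        // symptom described in the question.
        void drawTrails(SDL_Surface* trailSurface, SDL_Surface* gameScreen) {
            for (int x = 0; x < kWidth; ++x)
                for (int y = 0; y < kHeight; ++y)
                    if (collision[x][y] == 1)
                        Image::applySurface(x, y, trailSurface, gameScreen);
        }

    A further speed-up is to skip the per-pixel loop entirely: draw each new trail segment once onto a persistent off-screen surface as the bike moves, and blit that single surface every frame.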

    Read the article

  • Designing call center applications, what to consider.

    - by Espen Schulstad
    We have customers calling in to place orders. What sort of considerations should I make when building a call center application? Speed is a factor here. We had a PowerBuilder application that was extremely fast for a trained user. We want to have the same sort of speed in our new production system. So some thoughts I've made are: hotkeys are important. Is it faster to use a "wizard", step by step, or should I try to place everything important about the order logically on one screen, and have another screen where you do all the searches pertinent to that order?

    Read the article

  • At what point is asynchronous reading of disk I/O more efficient than synchronous?

    - by blesh
    Assuming there is some bit of code that reads files for multiple consumers, and the files are of any arbitrary size: at what size does it become more efficient to read the file asynchronously? Or, to put it another way, how small must a file be for it to be faster just to read it synchronously? I've noticed (and perhaps I'm incorrect) that when reading very small files, it takes longer to read them asynchronously than synchronously (in particular with .NET). I'm assuming this has to do with setup time for things like I/O completion ports, threads, etc. Is there any rule of thumb to help out here? Or is it dependent on the system and the environment?
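
    There is no universal cut-off; the crossover depends on the setup cost (threads, completion ports, task scheduling) versus how long the read itself takes, so measuring on the target system is the usual advice. The question is about .NET, but the shape of the trade-off can be sketched in a few lines of C++ (file name and sizes are placeholders):

        // Time a plain synchronous read against the same read handed to another
        // thread via std::async. For tiny files the thread/future setup cost tends
        // to dominate; for large files overlapping the read with other work can pay off.
        #include <chrono>
        #include <cstdio>
        #include <fstream>
        #include <future>
        #include <iterator>
        #include <string>
        #include <vector>

        static std::vector<char> readWholeFile(const std::string& path) {
            std::ifstream in(path, std::ios::binary);
            return std::vector<char>(std::istreambuf_iterator<char>(in),
                                     std::istreambuf_iterator<char>());
        }

        int main(int argc, char** argv) {
            const std::string path = argc > 1 ? argv[1] : "test.dat";   // placeholder file
            using clock = std::chrono::steady_clock;

            auto t0 = clock::now();
            auto syncData = readWholeFile(path);            // synchronous: read in place
            auto t1 = clock::now();

            auto fut = std::async(std::launch::async, readWholeFile, path);
            // ... the caller could do useful work here; that overlap is the whole point ...
            auto asyncData = fut.get();                     // wait for the background read
            auto t2 = clock::now();

            auto us = [](clock::time_point a, clock::time_point b) {
                return std::chrono::duration_cast<std::chrono::microseconds>(b - a).count();
            };
            std::printf("sync:  %lld us (%zu bytes)\n", (long long)us(t0, t1), syncData.size());
            std::printf("async: %lld us (%zu bytes)\n", (long long)us(t1, t2), asyncData.size());
            return 0;
        }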

    Read the article

  • Slow transfer to external USB3 hard drive

    - by JMP
    Trying to back up data from the hard drive before reloading Windows, following some issue with its load. Having trouble with the file transfer to a USB 3.0/2.0 external hard drive (NTFS). Getting a transfer speed of about 116.7 kB/sec; in other words, it's taking about 5 hours to transfer 1.4 GB. I've got about 80 GB to go, so the transfer is going to take 11 days. Seems a little on the slow side. Am I missing something? Is there a way to make this faster? There is no issue with the external drive transferring this amount in Windows, but I don't have that option at the moment.

    Read the article

  • For business information and web traffic T4 and Solaris 11 stand head and shoulders above the crowd

    - by rituchhibber
    Everyone is talking about encryption of business information and web traffic, and T4 and Solaris 11 stand head and shoulders above the crowd. Each T4 chip has 8 crypto accelerators inside the chip - that means there are 32 in a T4-4. These are faster and offer more algorithms than almost all standalone devices, and it is all free with T4! What are you waiting for? Please contact Lucy Hillman or Graham Scattergood for more details. Your weekly tea time soundbite of the latest UK news, updates and initiatives on the SPARC T Series servers. T4 good news, best practice and feedback is always welcome.

    Read the article

  • Rules of Holes #3 -A Better Shovel is NOT the Answer!

    - by ArnieRowland
    You stopped digging. You looked around and saw that you were still in the Hole. You needed to get out. AHA! Problem solved, you thought. You'll just get a better and more efficient shovel! Sorry, I have to tell you that switching to a more efficient shovel is unlikely to help you get out of the Hole. Yes, your resumed digging may be faster, more directed, and even well planned and articulated. But you will still be in the Hole, and digging. And that's just not the solution. A new process (scrum,...(read more)

    Read the article

  • How do I optimize searching for the nearest point?

    - by Rootosaurus
    For a little project of mine I'm trying to implement a space colonization algorithm in order to grow trees. The current implementation of this algorithm works fine, but I have to optimize the whole thing in order to make it generate faster. I work with 1 to 300K random attraction points to generate one tree, and it takes a lot of time to compute and compare the distances between attraction points and tree nodes in order to keep only the closest tree node for each attraction point. So I was wondering if solutions exist (I know they must) to avoid the time lost looping over every tree node for every attraction point to find the closest... and so on until the tree is finished.
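
    The standard fix is a spatial index, so that each attraction point only examines tree nodes near it instead of all of them; k-d trees, octrees and uniform grids all work. Below is a minimal sketch of the uniform-grid (spatial hash) variant, with made-up type and class names rather than anything from the original project:

        // Bucket tree nodes by grid cell so a nearest-node query only scans the
        // 3x3x3 block of cells around the query point instead of every node.
        #include <cmath>
        #include <limits>
        #include <unordered_map>
        #include <vector>

        struct Vec3 { float x, y, z; };

        class NodeGrid {
        public:
            explicit NodeGrid(float cellSize) : cell_(cellSize) {}

            void insert(const Vec3& p) { cells_[key(p)].push_back(p); }

            // Returns false if no node lies in the surrounding cells; the caller
            // can then widen the search or fall back to a brute-force scan.
            bool nearest(const Vec3& q, Vec3& out) const {
                float best = std::numeric_limits<float>::max();
                bool found = false;
                long cx = coord(q.x), cy = coord(q.y), cz = coord(q.z);
                for (long dx = -1; dx <= 1; ++dx)
                  for (long dy = -1; dy <= 1; ++dy)
                    for (long dz = -1; dz <= 1; ++dz) {
                        auto it = cells_.find(pack(cx + dx, cy + dy, cz + dz));
                        if (it == cells_.end()) continue;
                        for (const Vec3& p : it->second) {
                            float d = dist2(p, q);
                            if (d < best) { best = d; out = p; found = true; }
                        }
                    }
                return found;
            }

        private:
            static float dist2(const Vec3& a, const Vec3& b) {
                float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
                return dx * dx + dy * dy + dz * dz;
            }
            long coord(float v) const { return static_cast<long>(std::floor(v / cell_)); }
            static long long pack(long x, long y, long z) {
                // Classic spatial-hash mix; collisions only add extra candidates.
                return ((long long)x * 73856093) ^ ((long long)y * 19349663) ^ ((long long)z * 83492791);
            }
            long long key(const Vec3& p) const { return pack(coord(p.x), coord(p.y), coord(p.z)); }

            float cell_;
            std::unordered_map<long long, std::vector<Vec3>> cells_;
        };

    Choosing the cell size close to the algorithm's influence radius keeps each bucket small, which is what turns the points-times-nodes loop into something close to linear in practice.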

    Read the article

  • Using High Level Abstractions

    - by Jonn
    I'm not sure if I'm using the correct term, but would you program using high-level abstractions like PowerBuilder, or some CMS like MODx or DotNetNuke? I haven't dabbled in any of these yet. The reason I'm asking is that I feel kind of intimidated by the whole notion of using any abstraction over the languages I'm using. I'm thinking that my job might be over-simplified. While it may deliver business solutions faster, I'd rather be coding directly in, in my case, .NET. Do/would you use abstractions like these, or prefer them over programming in lower-level languages?

    Read the article

  • International multi-OS keyboard layout for both coding and surfing?

    - by Nikolai Prokoschenko
    So yes, the problem has been raised in parts multiple times already. Still, I'm looking for a keyboard layout that has the following features:
    - Easy on the fingers (Dvorak-like layouts welcome)
    - Easy for coding
    - Includes German characters (typing ä with AltGr-p is not OK)
    - Works well with web browsing (Ctrl-t and Ctrl-w on one hand, the left one very much preferred, since that's where my ex-CapsLock, now Ctrl, lies)
    - Works well with default Emacs bindings
    - Works on both Windows and Linux (or is at least easily installable)
    I've looked at Dvorak and Neo; they both have a "shortcut problem", i.e. web browsing and the most frequent Emacs combinations use both halves of the keyboard. Using the right Ctrl is usually not an option, since it'll give me RSI much faster than keeping QWERTY/Z. Funnily enough, mirroring the default Neo layout would probably be enough for me. So, any ideas?

    Read the article

  • Migrate from Thunderbird to Mutt

    - by deshmukh
    I am contemplating moving from Thunderbird to Mutt (provided it is feasible) to move to a faster, simpler application. My current Thunderbird set-up consists of multiple IMAP accounts (gmail and google apps). Only selected folders (read labels) in each IMAP account are stored locally. For all other folders, I glance through the headers and open a message only if I find it interesting. I also use folder bookmarks to navigate to folders quickly. I also move messages across folders with keyboard shortcuts. Is it possible to replicate the set-up in Mutt? Can someone share/ point to a sample muttrc file that does the same thing? It would be great if the muttrc file is adequately commented. On a side note, will it also be possible to import my messages from Thunderbird locally? That will save me considerable network traffic (about 2GB data stored locally).
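
    As a starting point, here is a minimal, commented muttrc sketch for one Gmail/Google Apps IMAP account; the address, folder names and macro keys are placeholders, and multi-account setups would add account-hook/folder-hook lines on top of this:

        # Minimal muttrc sketch for a single Gmail-style IMAP account (placeholders).
        set imap_user    = "you@example.com"
        set folder       = "imaps://imap.gmail.com/"
        set spoolfile    = "+INBOX"
        set record       = "+[Gmail]/Sent Mail"
        set postponed    = "+[Gmail]/Drafts"

        # Poll only selected folders instead of everything on the server.
        unset imap_check_subscribed
        mailboxes =INBOX "=[Gmail]/Starred" =work =family

        # Local caches speed up header listing and cut repeat network traffic.
        set header_cache     = "~/.mutt/cache/headers"
        set message_cachedir = "~/.mutt/cache/bodies"

        # Folder "bookmarks": single-key jumps, like Thunderbird folder bookmarks.
        macro index gi "<change-folder>=INBOX<enter>"           "go to Inbox"
        macro index gs "<change-folder>=[Gmail]/Starred<enter>" "go to Starred"

        # Move the current/tagged message to a folder with one keystroke.
        macro index Mw "<save-message>=work<enter>"             "move to work"

    Note that the header and body caches only avoid re-downloading; they are not full offline copies the way Thunderbird's local folders are, so existing local mail is usually brought over with a separate offline sync tool rather than through mutt itself.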

    Read the article

  • Atheros AR9285 wireless extremely slow

    - by ignacio
    I recently upgraded to Ubuntu 12.10, and things were going great. But suddenly, the wifi connection became extremely slow. I have a 20 M connection and normally download files at 1 MB/s or so, but today I don't get any faster than 120 kB/s. Also, if I use a wired connection, the speed is normal. Following some advice on the net I replaced network-manager with wicd, but the issue hasn't gone away. Any clues? PS: my wireless card is an Atheros AR9285.

    Read the article
