Search Results

Search found 11913 results on 477 pages for 'fail fast fail early'.


  • Install/upgrade Ubuntu from another system

    - by Samarth Agarwal
    I have a new laptop with the latest Ubuntu (12.04) preinstalled on it. I have another laptop with Ubuntu 10.04 on it. What I lack is a fast internet connection. I want to upgrade the 10.04 laptop to 12.04. How is this possible without an internet connection? Can I move/copy the installation from the new laptop to the older one? Is there a way for the newer laptop to upgrade the older one using a USB disk or DVD/CD?
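    A hedged sketch of one commonly documented route, assuming you can download the 12.04 alternate install ISO somewhere with connectivity and carry it over on a USB disk (the ISO name and mount point below are illustrative):

        # Offline LTS-to-LTS upgrade (10.04 -> 12.04) from the alternate install ISO
        sudo mkdir -p /media/cdrom
        sudo mount -o loop ~/ubuntu-12.04-alternate-i386.iso /media/cdrom
        sudo /media/cdrom/cdromupgrade    # upgrade script shipped on the alternate CD

    Copying the installation from one machine to another is rarely worth it: drivers, bootloader and partition layout all differ between laptops.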

    Read the article

  • How can I exclude content in my notifications bar from being indexed?

    - by Liam E-p
    Of course I want my content to be indexed quickly by search engines, but not my notifications bar. The bar shows the last 30 changes to content on the site, and I don't want it to show up in my search-result snippets. The notifications are generic, so they rarely carry relevant information: if an article named "123" is created, the bar shows "Article "123" was created by xxx at 12:00AM". I'm now wondering if this is a content-design problem, since only a third of that text (the title and what happened) is actually relevant to users. What I was wondering is how I could optimise this so search engines wouldn't show this generic text in their snippets.
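    A hedged sketch of one option: Google supports a data-nosnippet attribute that excludes marked-up text from snippets (other engines may ignore it; the markup below is illustrative):

        <!-- keep the notifications bar out of search snippets -->
        <div class="notifications" data-nosnippet>
          Article "123" was created by xxx at 12:00AM
        </div>

    For engines that ignore the attribute, the bar can instead be loaded into an iframe whose URL is disallowed in robots.txt, so its text is never crawled as part of the page.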

    Read the article

  • Choosing the right version control system for .NET projects [closed]

    - by madxpol
    I'm getting ready for my first "bigger" .NET project (ASP.NET MVC 3/4) on which I'm going to lead another 2 programmers, and right now I'm choosing the right version control system for the job (plus I'm going to use it for my future development too). My problem is that I didn't use any version control system before, so I would like one with as gentle a learning curve and as intuitive merging as possible. So far I have quickly looked at VisualSVN (I like its Visual Studio integration), but I'm reading everywhere how awesome Git is, and I don't know which one to choose (not limited to these two). Maybe I'm overthinking this, but I like it when everything goes smoothly :) I'd like to hear some opinions from people who have used multiple version control systems (preferably on VS projects): which do you think is the least complicated and most effective version control system for such use (one- to five-person projects)?
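    For a sense of the Git learning curve, the whole day-to-day loop for a small team fits in a handful of commands (a minimal sketch; branch and commit-message names are illustrative):

        git init                          # create a repository
        git add . && git commit -m "Initial commit"
        git checkout -b feature/login     # isolate work on a branch
        git commit -am "Add login form"
        git checkout master
        git merge feature/login           # merge back; trivial in the common case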

    Read the article

  • Are there any specific workflows or design patterns that are commonly used to create large functional programming applications?

    - by Andrew
    I have been exploring Clojure for a while now, although I haven't used it on any nontrivial projects. Basically, I have just been getting comfortable with the syntax and some of the idioms. Coming from an OOP background, with Clojure being the first functional language I have looked into in any depth, I'm naturally not as comfortable with the functional way of doing things. With that in mind: are there any specific workflows or design patterns that are common when creating large functional applications? I'd really like to start using functional programming "for real", but I'm afraid that with my current lack of expertise it would result in an epic fail. The "Gang of Four" book is the standard for OO programmers, but is there anything similar that is directed at the functional paradigm? Most of the resources I have found have great programming nuggets, but they don't step back to give a broader, more architectural view.

    Read the article

  • Coherence Webcast for Developers July 11

    - by jeckels
    Coming on July 11th, we look forward to having you join us for a special Coherence webcast, just for developers! Want to learn how you, the developer, can make applications Big Data- and Fast Data-ready? Want to be able to customize and manage your applications and services to provide real-time data and processing with ease? Then this webcast is for you. The live webcast "Developers: Deploy Highly-Available Custom Services on Your Data Grid" airs July 11, 10am Pacific Time. >> Register now! << (of course, it's free) Join Brian Oliver of the Coherence team to see how you can create and deploy customized, highly-available services for your data grid, and how real-time data processing will allow you to provide unmatched end-user experiences. We look forward to having you join us.

    Read the article

  • Programmer logbook application?

    - by jsoldi
    I've just released my application to the public, and I'm working on an updated version, but I really think I should keep track of ALL the code changes. If some functionality suddenly starts failing, a history of all the changes I made would make it a lot easier to figure out where I messed up, assuming the problem wasn't already there. The ideal would be a super-fast computer with a huge hard drive and an application that automatically backs up the whole project every time I change a line of code, plus a file-comparison tool that shows every difference between any two backed-up projects, but that's not really possible for now. So, do you know any application that makes it easy for a programmer to keep track of the changes made to the source code?
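    What is described here is exactly what a version control system does, without the full-copy overhead: each commit stores a snapshot, and any two snapshots can be diffed. A minimal sketch with Git (file and message names are illustrative):

        git init && git add . && git commit -m "Release 1.0"
        # ...edit code, then record the change...
        git commit -am "Fix crash in exporter"
        git diff HEAD~1 HEAD         # every difference between two snapshots
        git log -p -- src/main.cpp   # full change history of a single file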

    Read the article

  • How can I store all my level data in a single file instead of spread out over many files?

    - by Jon
    I am currently generating my level data and saving it to disk, to ensure that any modifications done to the level are saved. I store "chunks" of 2048x2048 pixels in a file. Whenever the player moves over a section that doesn't have a file associated with the position, a new file is created. This works great, and is very fast. My issue is that as you play, the file count gets larger and larger. I'm wondering what techniques can be used to reduce the file count without taking a performance hit. I am interested in how you would efficiently store/seek/update this data in a single file instead of multiple files.
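    One common single-file layout, sketched below under my own assumptions (fixed-size chunks, an in-memory index from chunk coordinates to file offsets; all names are illustrative): new chunks are appended, existing chunks are overwritten in place, so the file never needs rewriting. The index itself would also have to be persisted, e.g. in a small header or sidecar file, which the sketch omits.

        // Hedged sketch: all chunks live in one file, found via an offset index.
        #include <cstdint>
        #include <fstream>
        #include <map>
        #include <string>
        #include <utility>
        #include <vector>

        static const std::streamsize kChunkBytes = 2048 * 2048 * 4; // RGBA, fixed size

        class ChunkFile {
            std::fstream file_;
            std::map<std::pair<int32_t, int32_t>, std::streamoff> index_;
        public:
            explicit ChunkFile(const std::string& path) {
                file_.open(path.c_str(), std::ios::in | std::ios::out | std::ios::binary);
                if (!file_) { // first run: create the file, then reopen read/write
                    std::ofstream(path.c_str(), std::ios::binary);
                    file_.clear();
                    file_.open(path.c_str(), std::ios::in | std::ios::out | std::ios::binary);
                }
            }
            // Overwrite an existing chunk in place, or append a brand-new one.
            void Save(int32_t cx, int32_t cy, const std::vector<char>& data) {
                std::map<std::pair<int32_t, int32_t>, std::streamoff>::iterator
                    it = index_.find(std::make_pair(cx, cy));
                std::streamoff off;
                if (it != index_.end()) {
                    off = it->second;                 // known chunk: update in place
                } else {
                    file_.seekp(0, std::ios::end);    // new chunk: append at the end
                    off = file_.tellp();
                    index_[std::make_pair(cx, cy)] = off;
                }
                file_.seekp(off);
                file_.write(&data[0], kChunkBytes);
                file_.flush();
            }
            // Read one chunk back; returns false if we never stored it.
            bool Load(int32_t cx, int32_t cy, std::vector<char>* out) {
                std::map<std::pair<int32_t, int32_t>, std::streamoff>::const_iterator
                    it = index_.find(std::make_pair(cx, cy));
                if (it == index_.end()) return false;
                out->resize(kChunkBytes);
                file_.seekg(it->second);
                file_.read(&(*out)[0], kChunkBytes);
                return file_.good();
            }
        };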

    Read the article

  • What's the best language to pair with an Adobe Flex-based GUI for math crunching?

    - by gkdsp
    Hi, I'm not a software expert but I need to outsource a web-based scientific GUI application, and I'm considering Adobe Flex. My math routines are currently in JavaScript and C/C++. Having no experience with Flex, I was hoping someone could help me understand what options are available for performing (preferably fast and efficient) CLIENT-side calculations. That is, can Flex interact with JavaScript and/or C easily? If not, is ActionScript or another language preferred? Downsides/tradeoffs? I need functions like LOG10, LN and SQRT, and it would be nice to also have the error function (ERF) and complementary error function (ERFC), although I may be able to derive those last two from more basic functions if necessary. Thanks!

    Read the article

  • Mounting Samba share whenever it's available, unmounting when it's not

    - by Laurynas Biveinis
    I am trying to set up permanent Samba share mounts. That's not too hard using these instructions. But I want them to: 1. automatically remount whenever I join the network where the shares are available; 2. automatically unmount (or make access requests fail immediately instead of hanging) whenever I leave the network. Googling suggests that AutoFS might be helpful. I gather it takes care of 1. above, but I am not sure about 2. The other questions about automated Samba mounts, e.g. How to mount a samba share permanently?, do not seem to address automatic remounts/unmounts, so I think this is not a duplicate. Thanks.
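    A minimal autofs sketch (server name, paths and credentials file are illustrative): shares are mounted on first access and unmounted again after the idle timeout, and an off-network access attempt fails after a bounded wait instead of hanging an already-mounted share, which covers point 2 reasonably well.

        # /etc/auto.master
        /mnt/smb  /etc/auto.smbshares  --timeout=30 --ghost

        # /etc/auto.smbshares
        share  -fstype=cifs,rw,credentials=/etc/samba/creds.txt  ://fileserver/share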

    Read the article

  • ASP.NET Reports: How To Setup A Master Detail Report

    Check out this How-to Setup An ASP.NET Master-Detail Report video. The screencast shows how easy it is to add master-detail information using the ASP.NET XtraReports Suite. The video pace is not too fast and covers what you need to build your first master-detail report. The video also builds on the previous ASP.NET Data-Aware Report, but don't worry, I cover that in the video too. Watch the How-to Setup An ASP.NET Master-Detail Report video and then drop me a line below with your thoughts. Thanks!

    Read the article

  • Problems installing 13.10 inside VMWare Player 4

    - by Thomas S.
    In the past I had no problems installing Ubuntu 12.04 inside VMware Player 4.0.4 on my Windows 7 Pro machine. Now I installed Ubuntu 13.10 and have a couple of problems:

    - The default screen size (even during the installation) is gigantic.
    - I told Ubuntu to log in automatically; after reboot the gigantic screen appears without showing anything, mouse clicks do nothing, and Ctrl+Alt+Backspace does not work.
    - Switching to a console using Ctrl+Alt+F1...F6 works, but is incredibly slow.
    - Invoking 'sudo reboot now' does not succeed; it prints:

          Killing all remaining processes...   [fail]
          Restoring resolver state...          [ OK ]
          Will now switch to single-user mode
          root@ubuntu1310:~#

    - Invoking 'shutdown now' shows the same error.

    What do I need to do to get Ubuntu 13.10 up and running in VMware Player?

    Read the article

  • Can't single boot Ubuntu 13.04 64 bit

    - by stanleyhunk
    I'm new to Ubuntu. I recently bought an Asus A45VS laptop pre-installed with Windows 8, but I have already uninstalled it and wiped the whole HDD. I plan to install Ubuntu 13.04 64-bit on it. I have tried several times to install and uninstall Ubuntu again and again with a bootable USB, but it still fails to boot. The installation process goes fine, but after rebooting my laptop it just sticks at the purple screen. Then I booted with the USB again, tried boot-repair, tried making an EFI partition, still the same. I have searched the web, and everything I found was about dual booting with Windows 7 or Windows 8. I don't want to dual boot; I wish to have a single OS, Ubuntu, on this laptop. Please help, thanks in advance.

    Read the article

  • Friday Fun: Dynamite Train

    - by Asian Angel
    This week’s game involves an ‘explosive’ combination of trains, bridges, and dynamite! Your mission is to stop these trains from crossing the various bridges using ingenuity and a limited supply of explosives. Can you destroy all the bridge designs and building materials you encounter or will your carefully thought out plans of destruction fail?

    Read the article

  • APress Deal of the Day 9/Jan/2011 - Pro Silverlight 3 in VB

    - by TATWORTH
    Today's $10 deal from APress at http://www.apress.com/info/dailydeal is Pro Silverlight 3 in VB Silverlight is a lightweight browser plug-in that frees your code from the traditional confines of the browser. It's a rules-changing, groundbreaking technology that allows you to run rich client applications right inside the browser. Even more impressively, it's able to host true .NET applications in non-Microsoft browsers (like Firefox) and on non-Microsoft platforms (like Mac OS X). Silverlight is still new and evolving fast, and you need a reliable guidebook to make sense of it.

    Read the article

  • Sales tracker that allows complex queries?

    - by feklee
    On a site, every click on a product should be registered by a sales tracker: price, type, etc. The sales tracker should provide an API so that complex queries can be performed, such as: which products of type "teapot" had a price below 20 EUR?

    Requirements:
    - Recorded data should be available for querying no later than two hours after it has been recorded. (There are reports that Google Analytics may take up to 24h to update data; that is not acceptable.)
    - Querying doesn't need to be fast, but recording does (of course).

    Which sales tracker allows complex queries against collected data?
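    If the clicks end up in any SQL-queryable store, the example question is a one-liner; a hedged sketch with illustrative table and column names:

        -- one row per tracked click
        SELECT product_id, price
        FROM   click_events
        WHERE  product_type = 'teapot'
          AND  currency = 'EUR'
          AND  price < 20;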

    Read the article

  • Do search engines crawl PDFs and if so are there any rules to follow when making them

    - by RandomBen
    The website I am working on has a few hundred PDFs in it. I don't think I have ever seen any of them come back in a search, but they are linked to directly from our site. They are also full of keywords because they are product documents. Is there anything special we need to do to get Google or other search engines to crawl them? Are there any hard and fast rules for making PDFs to help Google like them more? For instance, should I run them through Ghostscript to clean up broken PDF tags that Adobe creates during generation?
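    Google does crawl and index PDFs, extracting both their text and links. Two cheap checks, sketched with illustrative paths: make sure robots.txt doesn't block the documents, and list them in the sitemap so the crawler discovers them even without strong internal links.

        # robots.txt: nothing should disallow the PDF directory
        User-agent: *
        Allow: /docs/

        <!-- sitemap.xml: PDFs can be listed like any other URL -->
        <url><loc>http://www.example.com/docs/product-manual.pdf</loc></url>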

    Read the article

  • Understanding the 'High Performance' meaning in Extreme Transaction Processing

    - by kyap
    Despite my previous blog entries on SOA/BPM and Identity Management, the domain I'm most passionate about is definitely Extreme Transaction Processing, commonly called XTP. I came across XTP back in 2007 while I was still FMW Product Manager in EMEA. At that time Oracle acquired a company called Tangosol, which owned a unique product called Coherence that we renamed to Oracle Coherence. Beside this innovative renaming of the product, to be honest, I didn't know much about it, except that it was a "distributed in-memory cache for Extreme Transaction Processing"... not very helpful still. In general, when people don't fully understand a technology or a concept, they tend to find some shortcuts, correct or not, to justify their lack of understanding... and of course I was part of this category of individuals. And the shortcut was "Oracle Coherence Cache helps to improve Performance". Excellent marketing slogan... but still not very meaningful. By chance I was able to get away from that group quickly in July 2007* at Thames Valley Park (UK), after I attended one of the most interesting workshops of my 10-year career at Oracle, delivered by Brian Oliver.

    The biggest mistake I made was to assume that performance improvement with Coherence was related to the response time. That seemed legitimate at the time, because after all, caches help to reduce latency on cached data access and hence reduce the response time. But like all caches, you need to define caching and expiration policies, think about the cache-miss strategy, and most of the time you have to partially rewrite your application in order to work with the cache. As a result, the expected benefit vanishes... so, not very useful then?

    The key mistake was my perception, or obsession, about how performance improvement should be driven, but I strongly believe this is still a common problem for most developers. In fact, we all know that the performance of a system is generally presented by the Capacity (or Throughput), with the 2 important dimensions Speed (response time) and Volume (load):

    Capacity (TPS) = Volume (T) / Speed (S)

    To increase the Capacity, we can either reduce the Speed (in terms of response time) or increase the Volume. However, we tend to focus only on reducing the Speed dimension, perhaps because it is more concrete and tangible to measure, and nicer to present to our management because there's a direct impact on the end-user experience. On the other hand, we assume the Volume can be addressed by the underlying hardware or software stack, so if we need more capacity (scale out), we just add more hardware or software. Unfortunately, reality proves that IT is never as ideal as we assume...

    The challenge with the Speed-improvement approach is that it is generally difficult and costly to make things that are already fast... faster. And adding Coherence will not necessarily help either. Even if we manage to do so, the Capacity cannot increase forever, because... the Speed can be influenced by the Volume. Every system shows the same performance pattern: in any traditional system, the increase of Volume (Transactions) will also increase the Speed (Response Time) at some point. The reason is simple: most of the time the application logic was not designed to scale. As an example, if you have a while-loop in your application, it is natural to expect that parsing 200 entries will require double the execution time of 100 entries.
    If you need to "speed up" the execution, you can only upgrade your hardware (scale up) with a faster CPU and/or network to reduce latency. That is technically limited and economically inefficient. And this is exactly where XTP and Coherence kick in. The primary objective of XTP is to design applications that can scale out to increase the Volume, by applying coding techniques that keep the execution time as constant as possible, independently of the amount of runtime data being manipulated. It is actually not just about having an application run as fast as possible, but about having a much more predictable system, with constant response time that scales linearly, so we can easily increase throughput by adding more hardware in parallel. It is in general combined with the Low Latency Programming model, where we try to optimize network usage as much as possible, either from the programmatic angle (fewer network hops to complete a task) and/or from a hardware angle (faster network equipment). In this picture, Oracle Coherence can be considered a software-level XTP enabler, via the Distributed Cache, because it can guarantee:

    - constant data-object access time, independently of the number of objects and the Coherence cluster size
    - data-object distribution by affinity, for in-memory data grouping
    - in-place data processing, for parallel execution

    To summarize, Oracle Coherence is indeed useful to improve your application performance, just not in the way we commonly think. It's not about the Speed itself, but about the overall Capacity under extreme load while keeping a consistent Speed. In the future I will keep adding new blog entries around this topic, with some sample-code experiences I have captured over the last few years. In the meanwhile, if you want to know more about Oracle Coherence, I strongly suggest you start by checking how our worldwide customers are using Oracle Coherence, and then start playing with the product through our tutorial. Have fun!

    Read the article

  • Java is very slow on my laptop

    - by Ryan McClure
    I have the 1.6.0_30 JRE on my 11.10 install. I have 3 GB of RAM and an Intel Core2 Duo CPU T6600 @ 2.20GHz × 2. Whenever I use Java to play a game, it runs at about 4-5 FPS. When I used Windows, I found that I could get around 40 FPS. I'm not too terribly worried about this, but are there settings I can tweak that I don't know about? If not, why can't the JRE do as much on Ubuntu as it can on Windows? Also, this may be related, but I'm not sure: my fan runs very fast when running a Java application. Is there a correlation?
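    A quick, hedged first check: on Linux this symptom (single-digit FPS plus a racing fan) often means the GPU driver isn't providing accelerated OpenGL, so Java falls back to software rendering on the CPU. The package name below is the usual one:

        sudo apt-get install mesa-utils
        glxinfo | grep -i "direct rendering"
        glxinfo | grep -i "renderer"
        # "direct rendering: No", or a software renderer, would explain ~5 FPS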

    Read the article

  • Fastest way to set up a JSON server on my local machine [closed]

    - by Mohsen
    I am a front-end developer. For many experiments I do, I need a server that talks JSON with my client-side app. Normally that server is a simple one that responds to my POSTs and GETs. For example, I need to set up a server that saves, modifies and reads data from a "library" database, like this: POST /books creates a book, GET /book/:id gets a book, and so on. What is the fastest-to-set-up and easiest technology stack for the database and server in this case? I am open to using Ruby, Node.js and anything that does the job fast and easily. Is there any framework (in any language) that does this kind of thing for me?
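    On the Node side, one hedged suggestion: the json-server npm package turns a single JSON file into a full fake REST API with zero code (note it uses plural routes such as /books/1 rather than /book/:id):

        npm install -g json-server
        echo '{ "books": [] }' > db.json
        json-server --watch db.json
        # POST /books creates a book; GET /books/1 reads it back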

    Read the article

  • Very different I/O performance in C++ on Windows

    - by Mr.Gate
    Hi all, I'm a new user and my English is not so good, so I hope to be clear. We're facing a performance problem using large files (1 GB or more), especially (as it seems) when you try to grow them in size. Anyway... to verify our impressions, we tried the following (on Win 7 64-bit, 4 cores, 8 GB RAM, 32-bit code compiled with VC2008):

    a) Open a nonexistent file. Write it from the beginning up to 1 GB in 1 MB slots. Now you have a 1 GB file. Now randomize 10000 positions within that file, seek to each position and write 50 bytes there, no matter what you write. Close the file and look at the results. The time to create the file is quite fast (about 0.3"), and the time for the 10000 writes is fast all the same (about 0.03"). Very good, this is the beginning. Now try something else...

    b) Open a nonexistent file, seek to 1 GB minus 1 byte and write just 1 byte. Now you have another 1 GB file. Follow the next steps exactly as in case 'a', close the file and look at the results. The time to create the file is the fastest you can imagine (about 0.00009"), but the write time is something you can't believe... about 90"!!!!!

    b.1) Open a nonexistent file, don't write any bytes. Act as before, randomizing, seeking and writing, close the file and look at the result. The write time is long all the same: about 90"!!!!!

    Ok... this is quite amazing. But there's more!

    c) Open again the file you created in case 'a', don't truncate it... randomize again 10000 positions and act as before. You're as fast as before, about 0.03" for the 10000 writes. This sounds OK... try another step.

    d) Now open the file you created in case 'b', don't truncate it... randomize again 10000 positions and act as before. You're slow again, but the time is reduced to... 45"!! Maybe, trying again, the time will reduce further. I actually wonder why... any idea?

    The following is part of the code I used to test what I described in the previous cases (you'll have to change something in order to get a clean compile; I just cut & pasted from some source code, sorry). The sample can read and write in random, ordered or reverse-ordered mode, but writing in random order is the clearest test. We tried using std::fstream but also CreateFile(), WriteFile() and so on directly, and the results are the same (even if std::fstream is actually a little slower).

    Parameters for case 'a': -f_tempdir_\casea.dat -n10000 -t -p -w
    Parameters for case 'b': -f_tempdir_\caseb.dat -n10000 -t -v -w
    Parameters for case 'b.1': -f_tempdir_\caseb.dat -n10000 -t -w
    Parameters for case 'c': -f_tempdir_\casea.dat -n10000 -w
    Parameters for case 'd': -f_tempdir_\caseb.dat -n10000 -w

    Run the test (and even others) and see...

    // iotest.cpp : Defines the entry point for the console application.
    //
    #include <windows.h>
    #include <iostream>
    #include <fstream>      // added: std::fstream
    #include <string>       // added: std::string
    #include <algorithm>    // added: std::min
    #include <cstring>      // added: memset
    #include <cstdlib>      // added: rand/srand
    #include <stdexcept>    // added: std::runtime_error
    #include <set>
    #include <vector>
    #include "stdafx.h"

    double RealTime_Microsecs()
    {
        LARGE_INTEGER fr = {0, 0};
        LARGE_INTEGER ti = {0, 0};
        double time = 0.0;
        QueryPerformanceCounter(&ti);
        QueryPerformanceFrequency(&fr);
        time = (double) ti.QuadPart / (double) fr.QuadPart;
        return time;
    }

    int main(int argc, char* argv[])
    {
        std::string sFileName;
        size_t stSize, stTimes, stBytes;
        int retval = 0;
        char *p = NULL;
        char *pPattern = NULL;
        char *pReadBuf = NULL;
        try {
            // Defaults
            stSize  = 1 << 30;   // 1Gb
            stTimes = 1000;
            stBytes = 50;
            bool bTruncate  = false;
            bool bPre       = false;
            bool bPreFast   = false;
            bool bOrdered   = false;
            bool bReverse   = false;
            bool bWriteOnly = false;
            // Consume the parameters
            for (int index = 1; index < argc; ++index) {
                if ('-' != argv[index][0]) throw std::runtime_error("bad option");
                switch (argv[index][1]) {
                    case 'f': sFileName = argv[index] + 2;                  break;
                    case 's': stSize  = xw::str::strtol(argv[index] + 2);   break;
                    case 'n': stTimes = xw::str::strtol(argv[index] + 2);   break;
                    case 'b': stBytes = xw::str::strtol(argv[index] + 2);   break;
                    case 't': bTruncate = true;                  break;
                    case 'p': bPre = true,     bPreFast = false; break;
                    case 'v': bPreFast = true, bPre = false;     break;
                    case 'o': bOrdered = true, bReverse = false; break;
                    case 'r': bReverse = true, bOrdered = false; break;
                    case 'w': bWriteOnly = true;                 break;
                    default:  throw std::runtime_error("bad option");
                }
            }
            if (sFileName.empty()) {
                std::cout << "Usage: -f<File Name> -s<File Size> -n<Number of Reads and Writes> -b<Bytes per Read and Write> -t -p -v -o -r -w" << std::endl;
                std::cout << "-t truncates the file, -p pre load the file, -v pre load 'veloce', -o writes in order mode, -r write in reverse order mode, -w Write Only" << std::endl;
                std::cout << "Default: 1Gb, 1000 times, 50 bytes" << std::endl;
                throw std::runtime_error("missing file name");
            }
            if (!stSize || !stTimes || !stBytes) {
                std::cout << "Invalid Parameters" << std::endl;
                return -1;
            }

            size_t stBestSize = 0x00100000;
            std::fstream fFile;
            fFile.open(sFileName.c_str(), std::ios_base::binary | std::ios_base::out | std::ios_base::in
                                          | (bTruncate ? std::ios_base::trunc : std::ios_base::openmode(0)));
            p        = new char[stBestSize];
            pPattern = new char[stBytes];
            pReadBuf = new char[stBytes];
            memset(p, 0, stBestSize);
            memset(pPattern, (int)(stBytes & 0x000000ff), stBytes);

            double dTime = RealTime_Microsecs();
            size_t stCopySize, stSizeToCopy = stSize;
            if (bPre) {
                // Pre-fill the whole file by writing 1Mb blocks (case 'a')
                do {
                    stCopySize = std::min(stSizeToCopy, stBestSize);
                    fFile.write(p, stCopySize);
                    stSizeToCopy -= stCopySize;
                } while (stSizeToCopy);
                std::cout << "Creating time is: " << xw::str::itoa(RealTime_Microsecs() - dTime, 5, 'f') << std::endl;
            } else if (bPreFast) {
                // "Fast" creation: seek to the last byte and write it (case 'b')
                fFile.seekp(stSize - 1);
                fFile.write(p, 1);
                std::cout << "Creating Fast time is: " << xw::str::itoa(RealTime_Microsecs() - dTime, 5, 'f') << std::endl;
            }

            size_t stPos;
            ::srand((unsigned int)dTime);
            double dReadTime, dWriteTime;
            stCopySize = stTimes;
            std::vector<size_t> inVect;
            std::vector<size_t> outVect;
            std::set<size_t>    outSet;
            std::set<size_t>    inSet;
            // Prepare vector and set of random positions
            do {
                stPos = (size_t)(::rand() << 16) % stSize;
                outVect.push_back(stPos);
                outSet.insert(stPos);
                stPos = (size_t)(::rand() << 16) % stSize;
                inVect.push_back(stPos);
                inSet.insert(stPos);
            } while (--stCopySize);

            // Write & read in random order, using the vectors
            if (!bReverse && !bOrdered) {
                std::vector<size_t>::iterator outI, inI;
                outI = outVect.begin();
                inI  = inVect.begin();
                stCopySize = stTimes;
                dReadTime  = 0.0;
                dWriteTime = 0.0;
                do {
                    dTime = RealTime_Microsecs();
                    fFile.seekp(*outI);
                    fFile.write(pPattern, stBytes);
                    dWriteTime += RealTime_Microsecs() - dTime;
                    ++outI;
                    if (!bWriteOnly) {
                        dTime = RealTime_Microsecs();
                        fFile.seekg(*inI);
                        fFile.read(pReadBuf, stBytes);
                        dReadTime += RealTime_Microsecs() - dTime;
                        ++inI;
                    }
                } while (--stCopySize);
                std::cout << "Write time is " << xw::str::itoa(dWriteTime, 5, 'f') << " (Ave: " << xw::str::itoa(dWriteTime / stTimes, 10, 'f') << ")" << std::endl;
                if (!bWriteOnly) {
                    std::cout << "Read time is " << xw::str::itoa(dReadTime, 5, 'f') << " (Ave: " << xw::str::itoa(dReadTime / stTimes, 10, 'f') << ")" << std::endl;
                }
            } // End random order

            // Write in ascending order
            if (bOrdered) {
                std::set<size_t>::iterator i = outSet.begin();
                dWriteTime = 0.0;
                stCopySize = 0;
                for (; i != outSet.end(); ++i) {
                    stPos = *i;
                    dTime = RealTime_Microsecs();
                    fFile.seekp(stPos);
                    fFile.write(pPattern, stBytes);
                    dWriteTime += RealTime_Microsecs() - dTime;
                    ++stCopySize;
                }
                std::cout << "Ordered Write time is " << xw::str::itoa(dWriteTime, 5, 'f') << " in " << xw::str::itoa(stCopySize) << " (Ave: " << xw::str::itoa(dWriteTime / stCopySize, 10, 'f') << ")" << std::endl;
                if (!bWriteOnly) {
                    i = inSet.begin();
                    dReadTime = 0.0;
                    stCopySize = 0;
                    for (; i != inSet.end(); ++i) {
                        stPos = *i;
                        dTime = RealTime_Microsecs();
                        fFile.seekg(stPos);
                        fFile.read(pReadBuf, stBytes);
                        dReadTime += RealTime_Microsecs() - dTime;
                        ++stCopySize;
                    }
                    std::cout << "Ordered Read time is " << xw::str::itoa(dReadTime, 5, 'f') << " in " << xw::str::itoa(stCopySize) << " (Ave: " << xw::str::itoa(dReadTime / stCopySize, 10, 'f') << ")" << std::endl;
                }
            } // End ordered

            // Write in reverse order
            if (bReverse) {
                std::set<size_t>::reverse_iterator i = outSet.rbegin();
                dWriteTime = 0.0;
                stCopySize = 0;
                for (; i != outSet.rend(); ++i) {
                    stPos = *i;
                    dTime = RealTime_Microsecs();
                    fFile.seekp(stPos);
                    fFile.write(pPattern, stBytes);
                    dWriteTime += RealTime_Microsecs() - dTime;
                    ++stCopySize;
                }
                std::cout << "Reverse ordered Write time is " << xw::str::itoa(dWriteTime, 5, 'f') << " in " << xw::str::itoa(stCopySize) << " (Ave: " << xw::str::itoa(dWriteTime / stCopySize, 10, 'f') << ")" << std::endl;
                if (!bWriteOnly) {
                    i = inSet.rbegin();
                    dReadTime = 0.0;
                    stCopySize = 0;
                    for (; i != inSet.rend(); ++i) {
                        stPos = *i;
                        dTime = RealTime_Microsecs();
                        fFile.seekg(stPos);
                        fFile.read(pReadBuf, stBytes);
                        dReadTime += RealTime_Microsecs() - dTime;
                        ++stCopySize;
                    }
                    std::cout << "Reverse ordered Read time is " << xw::str::itoa(dReadTime, 5, 'f') << " in " << xw::str::itoa(stCopySize) << " (Ave: " << xw::str::itoa(dReadTime / stCopySize, 10, 'f') << ")" << std::endl;
                }
            } // End reverse

            dTime = RealTime_Microsecs();
            fFile.close();
            std::cout << "Flush/Close Time is " << xw::str::itoa(RealTime_Microsecs() - dTime, 5, 'f') << std::endl;
            std::cout << "Program Terminated" << std::endl;
        } catch (...) {
            std::cout << "Something wrong or wrong parameters" << std::endl;
            retval = -1;
        }
        if (p)        delete [] p;
        if (pPattern) delete [] pPattern;
        if (pReadBuf) delete [] pReadBuf;
        return retval;
    }

    Read the article

  • Lessons learnt in implementing Scrum in a Large Organization that has traditional values

    - by MarkPearl
    I recently had the experience of being involved in a “test” Scrum implementation in a large organization that was used to a traditional project management approach. Here are some lessons I learnt from it.

    Don’t let the Project Manager be the Product Owner. The first lesson learnt is to identify the correct product owner – in this instance the project manager assumed the role of the product owner, which was a mistake. The product owner is the one who has the most to lose if the project fails. With a methodology that advocates removing the role of the project manager from the process, it is not in the interests of the person employed as a project manager to be the product owner – in fact they have the most to gain should the project fail.

    Know the time commitments of team members to the project. The second lesson learnt is to get a firm time commitment from the members of the team for the sprint and to hold them to it. In this project, many of the issues we faced came from team members having to double up on supporting existing projects/systems and the Scrum project. In many situations they just didn’t get round to doing any work on the Scrum project for several days while they tried to meet other commitments. Initially this was not made transparent to the team – in stand-up, team members would say they had done some work but would be very vague on how much time they had actually spent, using the black hole of their other legacy projects as an excuse – putting up a time burn-down chart made time allocations transparent and easy to hold the team to. In addition, how can you plan for a sprint without knowing the actual time available from the members? By actual time, I mean the result of going through all their appointments, lunch times and breaks and removing those from their time commitment; this helps you get to a realistic time they can dedicate.

    Make sure you meet your minimum team sizes. In a recent post I wrote about the difference between a partnership and a team. If you are going to do Scrum in a large organization, make sure you have a minimum team size of at least 3 developers. My experience with larger organizations is that people tend to be sick more often, take more leave and generally not be around – if you have a team size of two, it is very easy to lose momentum on the project – the more people you have in the team (up to about 9), the more momentum the project will have when people are not around.

    Swapping from one methodology to another can seem like waste to the customer. It sounds bad, but most customers don’t care what methodology you use. Often they have bought into the “big plan upfront”. If you can, avoid taking on a project midstream from a traditional approach unless the customer has not bought into the process – with this particular project there was a detailed upfront planning breakaway with the customer using the traditional approach, and then before the project started we moved onto a Scrum implementation – this seemed like waste to the customer. We should have managed the customer's expectations properly.

    Don’t play the role of the scrum master if you can’t be the scrum master. With this particular implementation I was the “scrum master”, but all I did was go through the formal meetings of Scrum – I attended stand-up, retrospectives and planning – and I was not hands-on on the ground. I was not performing the most important role, removing blockages, and by the end of the project a number of blockages were cropping up. A better approach would have been to take someone on the team, train them to be the scrum master and be present to coach them; alternatively, actually be on the team on a full-time basis and be the scrum master. Just going through the meetings of Scrum didn’t mean we were doing Scrum.

    So we failed with this one; if you fail, look at it from an agile perspective. As this particular project drew to a close and it became more and more apparent that it was not going to succeed, the failure of it became depressing. Emotions were expressed by various people on the team that were not encouraging and reinforced the failure. Embracing the failure and looking at it for what it is, instead of taking it as the end of the world, can change how you grow from the experience. Acknowledging that it failed and then focusing on learning why, and on how to avoid the failure in the future, can change how you feel emotionally about the team, the project and the organization.

    Read the article

  • New computer - AMD with which mobo? [closed]

    - by RhZ
    I need to buy new computers for the office, all running Ubuntu 10.04 or 11.10, whichever works. I am looking at an Asus mobo with the AMD 870 northbridge and SB850 southbridge. Can anyone tell me whether that is buggy or not? And with maybe an Athlon II X4 640 processor. At home I am running an Asus mobo with AMD 880/SB850, which is good, although the onboard ATI video card was buggy; I put an Nvidia card in as well and it's great now. But for the office I want to save cost; I don't need killer systems. Still, I need the machines to be fast and look good, and I don't want to skimp on performance. Can anyone provide me with some advice about this? I will buy custom machines, not from one of the big manufacturers. Thanks! :-)

    Read the article

  • Deprecate a web API: Best Practices?

    - by TheLQ
    Eventually you need to deprecate parts of your public web API. However, I'm confused about what the best way to do it would be. If you have a large third-party app base, just yanking old versions of the API seems like the wrong way to do it, as almost all apps would fail overnight. However, you can't keep ancient web APIs available forever, as they might be outdated or there may be significant changes that make working with them impossible. What are some best practices for deprecating old web APIs?
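    One pattern often described, sketched here as an assumption rather than settled practice: keep the version in the URL, and let the old version announce its own retirement in machine-readable response headers (the Sunset header was standardized later than this question, so support varies):

        GET /v1/widgets/42 HTTP/1.1
        Host: api.example.com

        HTTP/1.1 200 OK
        Deprecation: true
        Sunset: Sat, 01 Dec 2012 00:00:00 GMT
        Link: <https://api.example.com/v2/widgets/42>; rel="successor-version"

    Client authors then get a warning on every call during a published grace period, instead of an overnight failure.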

    Read the article

  • Ubuntu 14.04 doesn't detect my discrete GPU

    - by user258887
    I recently purchased a laptop with an Nvidia GeForce 860M, and have installed Ubuntu 14.04. On my old laptop I had 12.04, which automatically filled Additional Drivers with Nvidia drivers. But on this computer, the only thing in Additional Drivers is Qualcomm. So I manually installed the Nvidia driver, but X Server Settings doesn't seem to detect any GPU... lspci | grep VGA reports only my integrated Intel GPU, but lspci -v reports many things, including the Nvidia GPU:

        01:00.0 3D controller: NVIDIA Corporation GM107M [GeForce GTX 860M] (rev a2)
            Subsystem: ASUSTeK Computer Inc. Device 157d
            Flags: fast devsel, IRQ 16
            Memory at ec000000 (32-bit, non-prefetchable) [size=16M]
            Memory at c0000000 (64-bit, prefetchable) [size=256M]
            Memory at d0000000 (64-bit, prefetchable) [size=32M]
            I/O ports at e000 [size=128]
            Expansion ROM at ed000000 [disabled] [size=512K]
            Capabilities: <access denied>

    I don't know what any of that means, and I'm not sure if it's supposed to say 'access denied'... I need my GPU to do CUDA and OpenGL programming. What else can I do to figure out why this isn't working?
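    A hedged note: this listing is actually normal for an Optimus laptop; the Intel GPU owns the display, and the "access denied" line usually just means lspci wasn't run as root. On 14.04, the usual setup (package names are the ones I believe shipped for that release) is:

        sudo apt-get install nvidia-331 nvidia-prime
        sudo prime-select nvidia    # render on the discrete GPU
        sudo reboot
        prime-select query          # confirm which GPU is active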

    Read the article

  • Simple solution now to a problem from 8 years ago. Use SQL windowing function

    - by Kevin Shyr
    Originally posted on: http://geekswithblogs.net/LifeLongTechie/archive/2014/06/10/simple-solution-now-to-a-problem-from-8-years-ago.aspx

    I remember having this problem 8 years ago. We had to find the top 5 donors per month and send out some awards. The SQL we came up with was clunky and had lots of limitations: it could only do one year at a time; then we switched the WHERE clause and ran it again. Fast forward 8 years: I got a similar problem where we had to find the top 3 combinations of 2 fields for every single day. And the solution is this elegant (the ranking goes in a CTE, since a window function's result cannot be filtered in the same query's WHERE clause):

        WITH ranked AS (
            SELECT CAST(eff_dt AS DATE) AS RecordDate
                 , status_cd
                 , nbr
                 , COUNT(*) AS occurance
                 , ROW_NUMBER() OVER (PARTITION BY CAST(eff_dt AS DATE)
                                      ORDER BY COUNT(*) DESC) AS RowNum
            FROM table1
            GROUP BY CAST(eff_dt AS DATE), status_cd, nbr
        )
        SELECT RecordDate, status_cd, nbr, occurance
        FROM ranked
        WHERE RowNum < 4

    If only I had this 8 years ago. :) Life is good now!

    Read the article
