Search Results

Search found 75401 results on 3017 pages for 'doing software right'.


  • Swimlane Diagram Software with Expand/Collapse Features

    - by louis xie
    I've been searching hard for software that can fulfill my needs, but to no avail. I have a swimlane diagram which is extremely large, and almost impossible to model using Visio or any traditional swimlane tool. I need to model both the operational process and the interactions within an application and between different applications. Therefore, rather than wasting effort modelling these separately, I am looking for a solution in which I can combine both views, that is, one where I can expand/collapse/group/ungroup processes and subprocesses. Take a typical credit card process, for instance; a hypothetical description of the swimlane could be as follows: Customer submits application form to the bank. Bank Officer A receives the application form and validates that it was correctly filled in. Bank Officer A submits the application form to Bank Officer B for processing. Bank Officer B checks the credit quality of the customer through Application X. Application X submits a query to Application Y to retrieve the credit report. Application X retrieves the credit report and submits it to Application Z for computation of credit scores. Bank Officer B validates that the customer is creditworthy, and submits the application to Bank Officer C for processing. The above is an over-simplified, purely hypothetical credit card request process. What I'm trying to drive at is that each of the above steps has sub-processes, and I want to be able to switch between a "detailed" view and an "aggregated" view, and if possible also add in time dependencies between the different tasks. I haven't been able to find any software which can do this.
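    There may be no off-the-shelf tool for this, but the expand/collapse requirement itself is straightforward to prototype: model each swimlane step as a node owned by a lane, with optional sub-steps, and render either view from the same data. The sketch below is a minimal, hypothetical Python illustration (the step names come from the example above; nothing here is tied to Visio or any other diagramming product):

      # Minimal sketch: a swimlane step with optional sub-steps, rendered either
      # "aggregated" (top level only) or "detailed" (sub-steps expanded).
      from dataclasses import dataclass, field
      from typing import List

      @dataclass
      class Step:
          lane: str                          # swimlane owner, e.g. "Bank Officer B"
          name: str                          # step description
          substeps: List["Step"] = field(default_factory=list)

      def render(steps, detailed=False, indent=0):
          """Print an outline view; expand sub-steps only in the detailed view."""
          for s in steps:
              print("  " * indent + f"[{s.lane}] {s.name}")
              if detailed and s.substeps:
                  render(s.substeps, detailed, indent + 1)

      process = [
          Step("Customer", "Submit application form to the bank"),
          Step("Bank Officer B", "Check credit quality of the customer", substeps=[
              Step("Application X", "Query Application Y for the credit report"),
              Step("Application Z", "Compute credit scores from the report"),
          ]),
      ]

      render(process)                  # aggregated view
      render(process, detailed=True)   # detailed view with sub-processes expanded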

    Read the article

  • How to determine the right amount of up front design?

    - by Gian
    Software developers occasionally are called upon to write fairly complex bits of software under tight deadlines. Often, it seems like the quickest thing to do is to simply start coding, and solve the problems as they arise. However, this approach can come back to bite you—often costing time or money in the long run! How do we determine the right amount of up front design work? If your work environment actively discourages you from thinking about things up front, how do you handle that? How can we manage risk if we eschew up-front thinking (by choice or under duress) and figure out the problems as they arise? Does the amount of up front design depend entirely on the size or complexity of the task, or is it based on something else?

    Read the article

  • How can I know if programming is right for me?

    - by user66414
    I have an IT background and was pretty confident until an opportunity came up at work to go into programming (C#). I had never programmed before this, and the software I am programming for is a program I have never used before (a 3D modeling application). It has been 6 months since then and I feel like giving up. I didn't get much training: about 3 weeks of training spread out over the last 6 months. I think I would be good at programming, but this experience is making me rethink my decision. I'm not sure if it's just me, or if this frustration is normal. How can I tell if programming is right for me?

    Read the article

  • Am I making the right decision to take Information Technology/Systems as my course in college?

    - by 123rainfan
    I am a student who finished high school last year. I will be entering college some time between March and August, and I am thinking of studying Information Technology/Systems as my course in college. The problem is, I am unsure if this is the right path for me. I don't know if this is what I really want for my future later on! Yes, I do love learning more about computers (I prefer software to hardware). But what if I don't find them interesting later on while studying? I'm worried about that, as I don't want to regret it later. To add to that, my knowledge of programming and other software development is actually quite low. Can someone advise me on what I should do, or tell me more about Information Technology (what will I study later on in college, and what is the career path)?

    Read the article

  • How can I figure out if programming is right for me? [closed]

    - by user66414
    I have an IT background and was pretty confident until an opportunity came up at work to go into programming (C#). I had never programmed before this, and the software I am programming for is a program I have never used before (a 3D modeling application). It has been 6 months since then and I feel like giving up. I didn't get much training: about 3 weeks of training spread out over the last 6 months. I think I would be good at programming, but this experience is making me rethink my decision. I'm not sure if it's just me, or if this frustration is normal. How can I tell if programming is right for me?

    Read the article

  • Customizing Fantasy Remote .INI file

    - by karthik
    I am using Fantasy Remote to remotely view other machines. I have attached the default .INI file that Fantasy Remote uses. When I connect to a machine, the client user should not have mouse and keyboard access to the remote machine; it should be a view-only remote connection. I also want the remote viewer screen to be in full-screen mode, because I don't want the user to do anything with the Fantasy Remote menu bars (this is a kiosk application). What should I change in the configuration file (.ini) in order to achieve the above? If anyone has used this software before, kindly help.
      [APP] iVersion= 101 pcVersion=1.01a pcBuildDate=Mar 27 2009
      [MAIN] iFirstSetup= 0 rcMain.rcLeft= 676 rcMain.rcTop= 378 rcMain.rcRight= 1004 rcMain.rcBottom= 672 iShowLog= 0 iMode= 1
      [GENERAL] iTips= 1 iTrayAnimation= 1 iCheckColor= 1 iPriority= 1 iSsememcpy= 1 iAutoOpenRecv= 1 pcRecvPath=C:\Documents and Settings\karthikeyan\My Documents\Downloads\fremote101a\FantasyRemote101a\recv pcFileName=FantasyRemote iLanguage= 1
      [SERVER] iAcceptVideo= 1 iAcceptAudio= 1 iAcceptInput= 1 iAutoAccept= 1 iAutoTray= 0 iConnectSound= 1 iEnablePassword= 0 pcPassword= pcPort=7902
      [CLIENT] iAutoConnect= 0 pcPassword= pcDefaultPort=7902
      [NETWORK] pcConnectAddr=192.168.1.1 pcPort=7902
      [VIDEO] iEnable= 1 pcFcc=AMV3 pcFccServer= pcDiscription= pcDiscriptionServer= iFps= 30 iMouse= 2 iHalfsize= 0 iCapturblt= 0 iShared= 0 iSharedTime= 5 iVsync= 1 iCodecSendState= 1 iCompress= 2 pcPlugin= iPluginScan= 0 iPluginAspectW= 16 iPluginAspectH= 9 iPluginMouse= 1 iActiveClient= 0 iDesktop1= 1 iDesktop2= 2 iDesktop3= 0 iDesktop4= 3 iScan= 1 iFixW= 16 iFixH= 8
      [AUDIO] iEnable= 1 iFps= 30 iVolume= 6 iRecDevice= 0 iPlayDevice= 0 pcSamplesPerSec=44100Hz pcChannels=2ch:Stereo pcBitsPerSample=16bit iRecBuffNum= 150 iPlayBuffNum= 4
      [INPUT] iEnable= 1 iFps= 30 iMoe= 0 iAtlTab= 1
      [MENU] iAlwaysOnTop= 0 iWindowMode= 0 iFrameSize= 4 iSnap= 1
      [HOTKEY] iEnable= 1
      key_IDM_HELP=0x00000070 mod_IDM_HELP=0x00000000
      key_IDM_ALWAYSONTOP=0x00000071 mod_IDM_ALWAYSONTOP=0x00000000
      key_IDM_CONNECT=0x00000072 mod_IDM_CONNECT=0x00000000
      key_IDM_DISCONNECT=0x00000073 mod_IDM_DISCONNECT=0x00000000
      key_IDM_CONFIG=0x00000000 mod_IDM_CONFIG=0x00000000
      key_IDM_CODEC_SELECT=0x00000000 mod_IDM_CODEC_SELECT=0x00000000
      key_IDM_CODEC_CONFIG=0x00000000 mod_IDM_CODEC_CONFIG=0x00000000
      key_IDM_SIZE_50=0x00000074 mod_IDM_SIZE_50=0x00000000
      key_IDM_SIZE_100=0x00000075 mod_IDM_SIZE_100=0x00000000
      key_IDM_SIZE_200=0x00000076 mod_IDM_SIZE_200=0x00000000
      key_IDM_SIZE_300=0x00000000 mod_IDM_SIZE_300=0x00000000
      key_IDM_SIZE_400=0x00000000 mod_IDM_SIZE_400=0x00000000
      key_IDM_CAPTUREWINDOW=0x00000077 mod_IDM_CAPTUREWINDOW=0x00000004
      key_IDM_REGION=0x00000077 mod_IDM_REGION=0x00000000
      key_IDM_DESKTOP1=0x00000078 mod_IDM_DESKTOP1=0x00000000
      key_IDM_ACTIVE_MENU=0x00000079 mod_IDM_ACTIVE_MENU=0x00000000
      key_IDM_PLUGIN=0x0000007A mod_IDM_PLUGIN=0x00000000
      key_IDM_PLUGIN_SCAN=0x00000000 mod_IDM_PLUGIN_SCAN=0x00000000
      key_IDM_DESKTOP2=0x00000078 mod_IDM_DESKTOP2=0x00000004
      key_IDM_DESKTOP3=0x00000079 mod_IDM_DESKTOP3=0x00000004
      key_IDM_DESKTOP4=0x0000007A mod_IDM_DESKTOP4=0x00000004
      key_IDM_WINDOW_NORMAL=0x0000000D mod_IDM_WINDOW_NORMAL=0x00000004
      key_IDM_WINDOW_NOFRAME=0x0000000D mod_IDM_WINDOW_NOFRAME=0x00000002
      key_IDM_WINDOW_FULLSCREEN=0x0000000D mod_IDM_WINDOW_FULLSCREEN=0x00000001
      key_IDM_MINIMIZE=0x00000000 mod_IDM_MINIMIZE=0x00000000
      key_IDM_MAXIMIZE=0x00000000 mod_IDM_MAXIMIZE=0x00000000
      key_IDM_REC_START=0x00000000 mod_IDM_REC_START=0x00000000
      key_IDM_REC_STOP=0x00000000 mod_IDM_REC_STOP=0x00000000
      key_IDM_SCREENSHOT=0x0000002C mod_IDM_SCREENSHOT=0x00000002
      key_IDM_AUDIO_MUTE=0x00000073 mod_IDM_AUDIO_MUTE=0x00000004
      key_IDM_AUDIO_VOLUME_DOWN=0x00000074 mod_IDM_AUDIO_VOLUME_DOWN=0x00000004
      key_IDM_AUDIO_VOLUME_UP=0x00000075 mod_IDM_AUDIO_VOLUME_UP=0x00000004
      key_IDM_CTRLALTDEL=0x00000023 mod_IDM_CTRLALTDEL=0x00000003
      key_IDM_QUIT=0x00000000 mod_IDM_QUIT=0x00000000
      key_IDM_MENU=0x0000007B mod_IDM_MENU=0x00000000
      [OVERLAY] iIndicator= 1 iAlphaBlt= 1 iEnterHide= 0 pcFont=MS UI Gothic
      [AVI] iSound= 1 iFileSizeLimit= 100000 iPool= 4 iBuffSize= 32 iStartDiskSpaceCheck= 1 iStartDiskSpace= 1000 iRecDiskSpaceCheck= 1 iRecDiskSpace= 100 iCache= 0 iAutoOpen= 1 pcPath=C:\Documents and Settings\karthikeyan\My Documents\Downloads\fremote101a\FantasyRemote101a\avi
      [SCREENSHOT] iSound= 1 iAutoOpen= 1 pcPath=C:\Documents and Settings\karthikeyan\My Documents\Downloads\fremote101a\FantasyRemote101a\ss pcPlugin=BMP
      [CDLG_SERVER] mrcWnd.rcLeft= 667 mrcWnd.rcTop= 415 mrcWnd.rcRight= 1013 mrcWnd.rcBottom= 634
      [CWND_CLIENT] miShowLog= 0 m_iOverlayLock= 0
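    For what it's worth, once the right keys are identified they can be flipped with a short script instead of editing by hand. The sketch below uses Python's configparser; note that the specific keys it touches (iAcceptInput under [SERVER], iAlwaysOnTop under [MENU]) and the effect of changing them are guesses based purely on their names in the file above, not documented Fantasy Remote behaviour, and the file name is assumed:

      # Hedged sketch: programmatically editing the Fantasy Remote .ini.
      # Key semantics are assumptions inferred from the key names; verify
      # against the application before deploying to a kiosk.
      import configparser

      INI_PATH = "FantasyRemote.ini"   # assumed file name/location

      config = configparser.ConfigParser()
      config.optionxform = str         # keep the mixed-case key names as-is
      config.read(INI_PATH)

      config["SERVER"]["iAcceptInput"] = "0"   # guess: stop forwarding mouse/keyboard input
      config["MENU"]["iAlwaysOnTop"] = "1"     # guess: keep the viewer window on top

      with open(INI_PATH, "w") as f:
          config.write(f)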

    Read the article

  • Expendable, Redundant, Easily recoverable

    - by MeIr
    I am desperate at this point; I have been looking for a "big storage" solution on my own for a while, and I can't find anything that suits my needs. But now push has come to shove. Current situation: I have about 6TB of data storage (already full) on a Drobo. Yesterday the Drobo died on me, which put me in a bad situation: I can't recover my data without buying another Drobo. From extensive research online I realized that Drobo is not the safest bet, and by now it seems a very poor choice. I ordered a new Drobo to try to get my data back, but I don't want to be in this situation again, and continuing to use Drobo practically invites it to recur. What I am looking for:
    1) Inexpensive setup.
    2) Dynamically extendable: add more drives and/or replace a drive with one of bigger capacity.
    3) Redundant: set up to survive 1-3 drive failures, depending on the total number of drives. For the sake of argument, let's assume that for every 4 drives one should be able to fail without data loss.
    4) Easy data recovery: let's say the unforeseen happens; I would like to be able to recover the information without buying new tools or replacements (for example, a new Drobo).
    5) Should be USB or Network Attached Storage.
    6) No demands on speed. It doesn't have to be fast; I am not doing video editing on the setup. However, if the option exists, decent speed would be nice to have.
    Afterthoughts: I reviewed a few options and FreeNAS looks nice, but it doesn't satisfy #2, dynamic extendability. There is a workaround with pools, but it seems a bit complicated and unnecessary. Moreover, data safety seems to be a big question; I saw some horror stories. Please advise on what options I have and what seems like an optimal solution (if any). I don't care if it has to be a Windows or Linux box or any other OS and/or software running on top, but a simple solution is more attractive. Thank you! P.S.: Feel free to ignore the "Afterthoughts".

    Read the article

  • How to push changes from Test server to Live server?

    - by anonymous
    As a beginner, I finally noticed the issue with making changes to the live server I've been working on, now that I have a couple of users on it, since I bring it down so often. I created an EC2 image of my live server and set up a separate instance on EC2, so now I have two EC2 instances, Stage and Production. I set up GitHub and push changes to stage and test my code there, and when it's all done and working, I push it to the production branch, and everything is good. There is a slight wrinkle here: I name my files config_stage.js and config_production.js and set up .gitignore on each server, and in my code I have it read the ENV flags and load the appropriate config. Is this the correct approach? My main question is: how do you keep track of non-code changes to the server? For example, I installed HAProxy, Stunnel, Redis, MongoDB and several other things onto the Stage server for testing, and now that it's all working and good, how do I deploy them to production? Right now, I'm just keeping track of everything I installed and copying configuration files over, which is very tedious, and I'm afraid I may have missed a step somewhere. Is there a better way to port these changes over from my test server to my live server?
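    On the first point, reading an environment flag to select the configuration is a common pattern. Here is a minimal sketch of it in Python, purely illustrative: the question's setup uses Node-style config_stage.js / config_production.js files, and the variable and file names below are assumptions:

      # Minimal sketch of environment-flag based config selection.
      import json
      import os

      APP_ENV = os.environ.get("APP_ENV", "stage")   # e.g. export APP_ENV=production on the live box

      CONFIG_FILES = {
          "stage": "config_stage.json",          # assumed names, mirroring the .js files
          "production": "config_production.json",
      }

      def load_config():
          path = CONFIG_FILES.get(APP_ENV)
          if path is None:
              raise RuntimeError(f"Unknown APP_ENV: {APP_ENV!r}")
          with open(path) as f:
              return json.load(f)

      config = load_config()
      print(f"Running with {APP_ENV} settings")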

    Read the article

  • Is proprietary vendor software required to use a wireless printer?

    - by sajee
    On Windows Vista, other than the driver, one can use a regular printer w/o any extra software, either plugged in via USB or over the network. I don't have to install extra software from the printer vendor to use the basic printer functionality. Is the same true for wireless printers? I need to install the HP 6000 wireless printer, and I hate installing vendor software other than the required drivers. So I'm wondering whether I need to install HP software on every PC that wants to use this printer? I haven't had a chance to read the manual but I'm hoping I don't have to. Can folks that have experience with wireless printers comment on whether vendor software, other than the printer driver, is required to use a wireless printer?

    Read the article

  • Is there a simple context-menu add-in that could make-up for the Windows-7 status bar deficiency?

    - by DanO
    Edit: I initially asked about free disk space and selected item size. It has since been pointed out that the selected-item size summary is still available natively in the details pane. I had read elsewhere (Wikipedia) that this was removed along with free disk space, which is not the case: only free disk space has been completely removed; selection size is still available. Is there a context-menu add-in out there that could show the free disk space of the relevant drive when you right-click? This would go a long way toward compensating for one of the only steps backward I've discovered in Windows 7 so far. I doubt anyone had created one specifically for this need before Windows 7, because this information was previously easily accessible in the status bar. I thought about creating one, but it has been a while since I have messed with the Shell API, and I know there are coders out there who could do it faster and better. If you've heard of one, or know of something else to make up for this Microsoft misstep, I'd appreciate hearing about it. If MS were listening to the community, they would already have a powertoy or add-in of some kind to un-break this (they could even release it unsupported), as there seem to be many power users who are extremely annoyed by this feature removal. If anyone has seen something, please post it here. As it has been only 4 days since the official Windows 7 release, I'll wait at least a week to choose an answer. Here's a prototype screenshot: SU question 19232 is related.
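    For reference, the data such an add-in would display is a one-line query; below is a minimal sketch in Python, purely to illustrate the lookup (an actual Explorer context-menu extension would be a COM shell extension calling the Win32 API, e.g. GetDiskFreeSpaceEx):

      # Minimal sketch: free-space lookup for the drive containing a path.
      import shutil

      def free_space_report(path="C:\\"):
          usage = shutil.disk_usage(path)        # named tuple: total, used, free (bytes)
          gib = 1024 ** 3
          return f"{path}  free: {usage.free / gib:.1f} GiB of {usage.total / gib:.1f} GiB"

      print(free_space_report())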

    Read the article

  • KVM guest io is much slower than host io: is that normal?

    - by Evolver
    I have a Qemu-KVM host system set up on CentOS 6.3, with four 1TB SATA HDDs in software RAID 10. The guest, also CentOS 6.3, is installed on a separate LVM volume. People say that they see guest performance almost equal to host performance, but I don't: my I/O tests show 30-70% lower performance on the guest than on the host system. I tried changing the scheduler (elevator=deadline on the host and elevator=noop on the guest), setting blkio.weight to 1000 in the cgroup, and switching the disk bus to virtio, but none of these changes gave me any significant results. This is the relevant part of the guest .xml config:
      <disk type='file' device='disk'>
        <driver name='qemu' type='raw'/>
        <source file='/dev/vgkvmnode/lv2'/>
        <target dev='vda' bus='virtio'/>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
      </disk>
    Here are my tests.
    Host system, iozone test:
      # iozone -a -i0 -i1 -i2 -s8G -r64k
                                                        random  random
            KB  reclen   write  rewrite    read   reread    read   write
       8388608      64  189930   197436  266786   267254   28644   66642
    Host system, dd read test (one process, then four simultaneous processes):
      # dd if=/dev/vgkvmnode/lv2 of=/dev/null bs=1M count=1024 iflag=direct
      1073741824 bytes (1.1 GB) copied, 4.23044 s, 254 MB/s
      # dd if=/dev/vgkvmnode/lv2 of=/dev/null bs=1M count=1024 iflag=direct skip=1024 & dd if=/dev/vgkvmnode/lv2 of=/dev/null bs=1M count=1024 iflag=direct skip=2048 & dd if=/dev/vgkvmnode/lv2 of=/dev/null bs=1M count=1024 iflag=direct skip=3072 & dd if=/dev/vgkvmnode/lv2 of=/dev/null bs=1M count=1024 iflag=direct skip=4096
      1073741824 bytes (1.1 GB) copied, 14.4528 s, 74.3 MB/s
      1073741824 bytes (1.1 GB) copied, 14.562 s, 73.7 MB/s
      1073741824 bytes (1.1 GB) copied, 14.6341 s, 73.4 MB/s
      1073741824 bytes (1.1 GB) copied, 14.7006 s, 73.0 MB/s
    Host system, dd write test (one process, then four simultaneous processes):
      # dd if=/dev/zero of=test bs=1M count=1024 oflag=direct
      1073741824 bytes (1.1 GB) copied, 6.2039 s, 173 MB/s
      # dd if=/dev/zero of=test bs=1M count=1024 oflag=direct & dd if=/dev/zero of=test2 bs=1M count=1024 oflag=direct & dd if=/dev/zero of=test3 bs=1M count=1024 oflag=direct & dd if=/dev/zero of=test4 bs=1M count=1024 oflag=direct
      1073741824 bytes (1.1 GB) copied, 32.7173 s, 32.8 MB/s
      1073741824 bytes (1.1 GB) copied, 32.8868 s, 32.6 MB/s
      1073741824 bytes (1.1 GB) copied, 32.9097 s, 32.6 MB/s
      1073741824 bytes (1.1 GB) copied, 32.9688 s, 32.6 MB/s
    Guest system, iozone test:
      # iozone -a -i0 -i1 -i2 -s512M -r64k
                                                        random  random
            KB  reclen   write  rewrite    read   reread    read   write
        524288      64   93374   154596  141193   149865   21394   46264
    Guest system, dd read test (one process, then four simultaneous processes):
      # dd if=/dev/mapper/VolGroup-lv_home of=/dev/null bs=1M count=1024 iflag=direct skip=1024
      1073741824 bytes (1.1 GB) copied, 5.04356 s, 213 MB/s
      # dd if=/dev/mapper/VolGroup-lv_home of=/dev/null bs=1M count=1024 iflag=direct skip=1024 & dd if=/dev/mapper/VolGroup-lv_home of=/dev/null bs=1M count=1024 iflag=direct skip=2048 & dd if=/dev/mapper/VolGroup-lv_home of=/dev/null bs=1M count=1024 iflag=direct skip=3072 & dd if=/dev/mapper/VolGroup-lv_home of=/dev/null bs=1M count=1024 iflag=direct skip=4096
      1073741824 bytes (1.1 GB) copied, 24.7348 s, 43.4 MB/s
      1073741824 bytes (1.1 GB) copied, 24.7378 s, 43.4 MB/s
      1073741824 bytes (1.1 GB) copied, 24.7408 s, 43.4 MB/s
      1073741824 bytes (1.1 GB) copied, 24.744 s, 43.4 MB/s
    Guest system, dd write test (one process, then four simultaneous processes):
      # dd if=/dev/zero of=test bs=1M count=1024 oflag=direct
      1073741824 bytes (1.1 GB) copied, 10.415 s, 103 MB/s
      # dd if=/dev/zero of=test bs=1M count=1024 oflag=direct & dd if=/dev/zero of=test2 bs=1M count=1024 oflag=direct & dd if=/dev/zero of=test3 bs=1M count=1024 oflag=direct & dd if=/dev/zero of=test4 bs=1M count=1024 oflag=direct
      1073741824 bytes (1.1 GB) copied, 49.8874 s, 21.5 MB/s
      1073741824 bytes (1.1 GB) copied, 49.8608 s, 21.5 MB/s
      1073741824 bytes (1.1 GB) copied, 49.8693 s, 21.5 MB/s
      1073741824 bytes (1.1 GB) copied, 49.9427 s, 21.5 MB/s
    Is this normal, or did I miss something?
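    One variation of that <disk> block that is commonly benchmarked in this situation (a sketch using standard libvirt attributes, not a confirmed fix for this particular setup) is to address the LV as a block device and make the caching and I/O mode explicit on the driver element:

      <disk type='block' device='disk'>
        <driver name='qemu' type='raw' cache='none' io='native'/>
        <source dev='/dev/vgkvmnode/lv2'/>
        <target dev='vda' bus='virtio'/>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
      </disk>

    cache='none' bypasses the host page cache and io='native' uses Linux AIO; whether either actually closes the gap here would have to be verified with the same iozone/dd runs.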

    Read the article

  • Why is this one div container blocking the other from floating right?

    - by user2824289
    I know the answer is very simple (it's probably one little CSS property), but I've tried to find the solution without asking it here, with no luck. There are two div containers within a parent div container, and they aren't playing nicely. One is positioned to float right in the upper right-hand corner of the parent div, and it won't let any other container float to the right of it. I tried display:inline and display:inline-block, but no luck. Here's the code, though something tells me the answer is so easy you won't need it! The parent div, the upper right-hand corner div, and the poor div trying to float right:
      #um-home-section4 { width:100%; height:300px; background-color: green; }
      #um-title-right { float:right; width:500px; height:50px; margin-right:20px; margin-top:20px; background-color: fuchsia; }
      #take-me-there { float:right; margin-top:240px; margin-right:0px; height:50px; width:100px; background-color: gray; }
      <div id="um-home-section4">
        <div id="um-title-right"></div>
        <div id="take-me-there"></div>
      </div>

    Read the article

  • How should I remember what I was doing and why on a project 3 months back?

    - by TheIndependentAquarius
    I was working on a project 3 months back, and then suddenly another urgent project appeared and I was asked to shift my attention there. Now, from tomorrow I'll be heading back to my old project, and I realize that I do not remember what exactly I was doing or where to start! I want to know how to document a project such that any time I look back, it shouldn't take me more than a few minutes to get going from wherever I left off.

    Read the article

  • The source code of the engine behind Doom 3 is available! id Software releases id Tech 4 under the GPLv3 license

    id Software has released the source code of id Tech 4 under the GPLv3 license; the source code of the engine behind Doom 3 is available! A few months ago, id Software shipped the game Rage, which uses the new version of id Tech. The studio has a habit of open-sourcing the previous version of its engine once the latest one is available. So, as of today, we have access to the fourth version of this fabulous engine, here. As a reminder, this version powered the following games: Doom 3, Quake IV, Prey, Enemy Territory: Quake Wars, Wolfenstein, Brink.

    Read the article

  • How do software projects go over budget and under-deliver?

    - by Carlos
    I've come across this story quite a few times here in the UK: NHS Computer System. Summary: we're blowing £12 billion on some health software with barely anything working. I was sitting in the office discussing this with my colleagues, and we had a little think about it. From what I can see, all the NHS needs is a database plus a middle tier of drugs/hospitals/patients/prescriptions objects, and various GUIs for doctors and nurses to look at. You'd also need to think about security and scalability. And you'd need to sit around a hospital/pharmacy/GP's office for a bit to figure out what they need. But, all told, I'd say I could knock together something with that kind of structure in a couple of days, and maybe throw in a month or two to make it work at scale.* If I had a few million quid, I could probably hire some really excellent designers to make a maintainable codebase, and also buy appropriate hardware to run the system on. I hate to trivialize something that seems to have caused so much trouble, but to me it looks like just a big distributed CRUD + UI system. So how on earth did this project bloat to £12B without producing much useful software? As the software doesn't sound that complicated, I can only imagine that something about how it was organised caused this mess. Is outsourcing the problem? Is it that the software designers were not made to understand the medical business? What are your experiences with projects that have gone over budget and under-delivered? What are best practices for large projects? Have you ever worked on such a project? EDIT: *This bit seemed to get a lot of attention. What I mean is that I could probably do this for, say, 30 users, spending a few tens of thousands of pounds. I'm not including the things I don't know about the medical industry and government, but I think most people who've been around programming are familiar with that kind of database/front-end design. My point is that the NHS project looks like a BIG version of this, with bells and whistles, notably security. But surely a budget millions of times larger than mine could provide this?

    Read the article

  • How do you demo software with No UI in the Sprint Review?

    - by Jeff Martin
    We are doing agile software development, basically following Scrum. We are trying to do sprint reviews but are finding it difficult. Our software does a lot of data processing, and the stories are often about changing the various rules around this. What are some options for demoing the changes made in the sprint when there isn't a UI or visible workflow change, but instead the change is a subtle business rule in a processing job that can take tens of minutes or even a couple of hours?

    Read the article

  • Is anyone doing "real" TDD with Visual C++, and if so, how do they do it?

    - by Martin
    Test-driven development implies writing the test before the code and following a certain cycle:
    Write test
    Check test (run)
    Write production code
    Check test (run)
    Clean up production code
    Check test (run)
    As far as I'm concerned, this is only possible if your development solution allows you to switch very quickly between the production and test code, and to execute the test for a certain production code part extremely quickly. Now, while there exist lots of unit testing frameworks for C++ (I'm using Boost.Test at the moment), it seems that there doesn't really exist any decent Visual Studio (plugin) solution for native C++ that makes the TDD cycle bearable, regardless of the framework used. "Bearable" means that it's a one-click action to run a test for a certain cpp file without having to manually set up a separate testing project, etc. "Bearable" also means that a simple test starts (linking!) and runs very quickly. So, what tools (plugins) and techniques are out there that make the TDD cycle possible for native C++ development with Visual Studio? Note: I'm fine with free or "commercial" tools. Please: no framework recommendations (unless the framework has a dedicated Visual Studio plugin and you want to recommend the plugin). Edit note: The answers so far have provided links on how to integrate a unit testing framework into Visual Studio. The resources more or less describe how to get the UT framework to compile and get your first tests running. This is not what this question is about. I'm of the opinion that to really work productively, having the unit tests in a manually maintained(!), separate vcproj from your production classes will add so much overhead that TDD "isn't possible". As far as I am aware, you do not add extra "projects" to a Java or C# solution to enable unit tests and TDD, and for a good reason. This should be possible with C++ given the right tools, but it seems (and this question is about that) that there are very few tools for TDD/C++/VS. Googling around, I've found one tool, VisualAssert, that seems to aim in the right direction. However, as far as I can see, it doesn't seem to be in widespread use (compared to CppUnit, Boost.Test, etc.). Edit: I would like to add a comment for context on this question; I think it gives a good summary of (part of) the problem (comment by Billy ONeal): Visual Studio does not use "build scripts" that are reasonably editable by the user. One project produces one binary. Moreover, Java has the property that Java never builds a complete binary: the binary you build is just a ZIP of the class files. Therefore it's possible to compile separately and then JAR together manually (using e.g. 7z). C++ and C# both actually link their binaries, so generally speaking you can't write a script like that. The closest you can get is to compile everything separately and then do two linkings (one for production, one for testing).

    Read the article

  • Why is my CPU being used while doing nothing?

    - by Jop
    I have installed Ubuntu GNOME in BIOS mode on my MacBook (BIOS mode so that the proprietary NVIDIA drivers work; I need them for gaming). For some reason, a lot of CPU is being used while not really doing anything. It usually swings between 20-30% on both cores. But when I look at the list of processes and sort by CPU usage, I do not see anything special; no processes are intensively doing anything. How can I fix this? EDIT: Output of the top command:
      jop@jop-MacBook:~$ top
      top - 17:08:02 up 41 min, 2 users, load average: 0,51, 0,69, 0,95
      Tasks: 202 total, 2 running, 200 sleeping, 0 stopped, 0 zombie
      %Cpu(s): 11,9 us, 5,8 sy, 0,0 ni, 80,3 id, 0,5 wa, 0,0 hi, 1,5 si, 0,0 st
      KiB Mem: 7908316 total, 2919940 used, 4988376 free, 153248 buffers
      KiB Swap: 3906244 total, 0 used, 3906244 free, 1326544 cached
       PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+ COMMAND
      3785 root  20   0  195m  82m  26m S 22,9  1,1 2:43.77 Xorg
      4429 jop   20   0 1543m 150m  60m S  7,3  1,9 1:26.26 compiz
      4198 jop   20   0  633m  21m  11m S  1,7  0,3 0:04.96 unity-panel-ser
      7425 jop   20   0  564m  18m  12m S  1,7  0,2 0:00.84 gnome-terminal
      7019 jop   20   0  806m  89m  46m S  1,0  1,2 0:10.01 chrome
      7323 jop   20   0  966m  93m  23m S  1,0  1,2 0:06.85 chrome
      6742 root  20   0     0    0    0 S  0,7  0,0 0:00.43 kworker/0:3
         3 root  20   0     0    0    0 S  0,3  0,0 0:06.01 ksoftirqd/0
      7008 root  20   0     0    0    0 S  0,3  0,0 0:00.27 kworker/1:3
      7302 jop   20   0  972m  96m  28m S  0,3  1,2 0:06.32 chrome
      7310 jop   20   0  382m  63m  39m S  0,3  0,8 0:00.34 chrome
      7498 jop   20   0 24840 1600 1120 R  0,3  0,0 0:00.22 top
         1 root  20   0 27176 2944 1412 S  0,0  0,0 0:01.58 init
         2 root  20   0     0    0    0 S  0,0  0,0 0:00.00 kthreadd
         5 root   0 -20     0    0    0 S  0,0  0,0 0:00.00 kworker/0:0H
         6 root  20   0     0    0    0 S  0,0  0,0 0:00.00 kworker/u4:0
         7 root  rt   0     0    0    0 S  0,0  0,0 0:02.04 migration/0
    Even when Xorg isn't as busy as when I captured this output, the CPU usage is higher than what the processes account for.

    Read the article

  • Why doesn't Mozilla release .deb and .rpm packages for their software?

    - by ushabtay
    I use and enjoy Firefox on my Ubuntu 10.04.2 laptop (although the Linux/Ubuntu version of Firefox needs work). Yet I notice that, in comparison to other pieces of software that ship an "Ubuntu/Debian" version (a .deb file, and usually .rpm files as well), I don't see this for one of the most prominent assets of the FLOSS world. The question is: why? If Chrome/Chromium can, why can't they? It would make it easier to get up-to-date software and features, and so forth. Cheers,

    Read the article

  • How do we provide valid time estimates during Sprint Planning without doing "too much" design?

    - by Michael Edenfield
    My team is getting up to speed with Scrum, but most of us are more familiar with non-agile or "pseudo-"agile methodologies. The part that is the biggest hurdle for us is running an efficient sprint planning meeting where we break our backlog items into tasks and estimate hours. (I'm using the terminology from the VS2010 Scrum Template; apologies if I use the wrong word somewhere.) When we try to figure out how long a task is going to take, we often fall into the trap of designing the feature at the code level -- table layout, interfaces, etc. -- in order to figure out how long that's going to take. I'm pretty sure this is not the appropriate place to be doing that kind of design; we should be scheduling tasks for these design meetings during the sprint. However, we are having trouble figuring out how else to come up with meaningful estimates for the tasks. Are there any practical habits/techniques for making a judgement call about how long a feature is going to take without knowing how you plan to implement it? If our time estimates are going to change significantly once the design has been completed, how can we properly budget our sprint backlog ahead of time? EDIT: Just to clarify, since some of the comments/answers are valid but I think are addressing the wrong question: we know that what we're doing is not right and that we should be building time into the sprint for this design. Conceptually all of the developers understand that. We are also bringing in a team member with Scrum experience to keep us on track if we start going off into the weeds. The problem is that, without going through this design process, we are finding it difficult to provide concrete time estimates for anything. We are constantly saying things like "well, if we design it this way it might take 8 hours, but if we end up having to do it this other way instead, that will take about 32, but it might not be as bad once we start trying to write it...". I also assume that this process will get better once we have some historical velocity to work from, but many of the technologies and architectural patterns we are using are new to us. If potentially-wildly-wrong estimates are just a natural part of adopting this process, then we will just need to recondition ourselves to accept that :)

    Read the article

  • Is there a term for quasi-open source proprietary software?

    - by mwhite
    Say a company wants to keep development of new features of a piece of software internal, but wants to make the source code for previous versions public, up to and including existing public features, so that other people can benefit from using and modifying the software themselves, and even possibly contribute changes that can be applied to the development branch. Is there a term for this sort of arrangement, and what is the best way of accomplishing it using existing version control tools and platforms?

    Read the article

  • Run vintage applications in your browser thanks to the Historical Software Collection, a project of the Internet Archive

    Run vintage applications in your browser thanks to the Historical Software Collection, a project of the Internet Archive. The Historical Software Collection is a nod to the applications of the 1980s: an invitation from the Internet Archive, for some, to dive back into their childhood for the length of a video game session; others will see it as an opportunity to observe the technological evolution of the last 30 years. It is made possible by JMESS, a port of the MESS emulator (Multi Emulator Super System)...

    Read the article

  • “It Isn’t Easy At All; Otherwise, Everyone Would Be Doing It”

    - by Kathryn Perry
    A few months ago, JP Saunders (pictured left), who leads the go-to-market initiatives for the Oracle CX Service offering, kicked off a series of articles about modern customer service. He contends that to take care of customers, and of the people who support those customers, companies need to make it easy to deliver consistently great experiences. But it's not easy; it's an art. The six posts in The Art of Easy series will help you better understand some of the customer service challenges you face and how to avoid common pitfalls. We pulled them all together here in one post for continuity and easy access. Saunders introduces the series with The Art of Easy: Make It Easy To Deliver Great Customer Service Experiences (Part 1). The Art of Easy: Offer Self Service With the Emphasis on Service (Part 2) by David Fulton (pictured left): David Fulton, Director of Product Management, Oracle Service Cloud, shares five tenets of customer self-service that move an organization closer to becoming a modern customer service business. Easy Decisions For Complex Problems (Part 3) by Heike Lorenz (pictured right): Heike Lorenz, Director of Global Product Marketing, Policy Automation, writes about automating service policies to ensure that the correct decisions are being applied to the right people. The goal is to nurture trusted relationships with customers during complex decision-making processes. Moving at the Speed of Easy (Part 4) by Chris Omland (pictured left): Chris Omland, Director of Product Management, Oracle Service Cloud, addresses the need for speed to keep up with customers' expectations. His advice: start with a platform that enables agile innovation, respects a company's unique needs, and has proven reliability to protect customer relationships. Knowledge Makes It Easy For Everyone (Part 5) by Nav Chakravarti (pictured right): Vice President Nav Chakravarti, Oracle Service Cloud, talks about managing the knowledge that customers need and want. He coaches readers on delivering answers to customers' questions easily, in context, with relevance, reliably, and accurately. Making Easy, Both Effective and Efficient (Part 6) by Melinda Uhland (pictured left): Melinda Uhland, Oracle CX Product Management, teaches us that happy agents produce happy customers. A modern customer service organization is one that invests in its agents and empowers them with tools to make them efficient and effective, which, in turn, improves customer results.

    Read the article

  • Multiplication for MVP matrices: Any benefits to doing so within the vertex shader?

    - by Nick Wiggill
    I'd like to understand under what circumstances (if any) it is worth doing the MVP matrix multiplication inside a vertex shader. The vertex shader is run once per vertex, and a single mesh typically contains many vertices, while all MVP inputs remain the same for every vertex in the batch belonging to a given draw call (model). Surely, then, you're always better off keeping the matrix-matrix multiplications in the client code and passing the whole precalculated MVP in as a single uniform, avoiding redundant operations across individual vertices?
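    To put rough numbers on that reasoning (a back-of-the-envelope count; note that some drivers and shader compilers may hoist expressions that depend only on uniforms out of the per-vertex path, which would hide the cost):

      A 4x4 by 4x4 matrix multiply is 64 multiplications + 48 additions = 112 operations,
      so composing P * V * M costs 2 x 112 = 224 operations.
      Composed once on the CPU per draw call: 224 operations total.
      Composed in the vertex shader for a 10,000-vertex mesh: 10,000 x 224 = 2,240,000
      redundant operations per draw call, on top of the unavoidable matrix-vector
      transform (16 multiplies + 12 additions, about 28 operations) that every vertex
      pays either way.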

    Read the article
