Search Results

Search found 30764 results on 1231 pages for 'copy control'.

  • Execute code at specific intervals, only once?

    - by Mathias Lykkegaard Lorenzen
    I am having an issue with XNA, where I want to execute some code in my Update method, but only at a given interval, and only once. I would like to avoid booleans to check if I've already called it once, if possible. My code is here:

        if ((gameTime.TotalGameTime.TotalMilliseconds % 500) == 0)
        {
            Caret.Visible = !Caret.Visible;
        }

    As you may have guessed, it's for a TextBox control, to animate the caret between invisible and visible states. I have reason to believe that it is called twice or maybe even three times in a single update call, which is bad, and makes it look unstable and jumpy.
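    A minimal sketch of one alternative, assuming the question's Caret property and a standard Game.Update override (the field names here are made up): accumulate elapsed time instead of testing TotalGameTime modulo 500, so the toggle fires exactly once per interval no matter how often, or how irregularly, Update runs.

        // Sketch only: "Caret" is the question's own property; the fields are placeholders.
        private double caretTimer;                    // milliseconds since the last toggle
        private const double CaretBlinkIntervalMs = 500;

        protected override void Update(GameTime gameTime)
        {
            caretTimer += gameTime.ElapsedGameTime.TotalMilliseconds;
            if (caretTimer >= CaretBlinkIntervalMs)
            {
                caretTimer -= CaretBlinkIntervalMs;   // keep the remainder so the blink rate stays steady
                Caret.Visible = !Caret.Visible;
            }
            base.Update(gameTime);
        }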

  • Why does Clang/LLVM warn me about using default in a switch statement where all enumerated cases are covered?

    - by Thomas Catterall
    Consider the following enum and switch statement:

        typedef enum { MaskValueUno, MaskValueDos } testingMask;

        void myFunction(testingMask theMask) {
            switch (theMask) {
                case MaskValueUno: { break; }  // deal with it
                case MaskValueDos: { break; }  // deal with it
                default: { break; }            // deal with an unexpected or uninitialized value
            }
        }

    I'm an Objective-C programmer, but I've written this in pure C for a wider audience. Clang/LLVM 4.1 with -Weverything warns me at the default line: "Default label in switch which covers all enumeration values". Now, I can sort of see why this is there: in a perfect world, the only values passed in as the argument theMask would be in the enum, so no default is necessary. But what if some hack comes along and throws an uninitialized int into my beautiful function? My function will be provided as a drop-in library, and I have no control over what could go in there. Using default is a very neat way of handling this. Why do the LLVM gods deem this behaviour unworthy of their infernal device? Should I be preceding this with an if statement to check the argument?
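    A sketch of the pattern the warning nudges toward, reusing the question's enum: leave the default label out so the compiler can flag any enumerator you forget, and handle out-of-range values after the switch instead of inside it.

        typedef enum { MaskValueUno, MaskValueDos } testingMask;

        void myFunction(testingMask theMask)
        {
            switch (theMask) {
            case MaskValueUno:
                /* deal with it */
                return;
            case MaskValueDos:
                /* deal with it */
                return;
            }
            /* Only reached when the caller passed something that is not a valid
             * enumerator (e.g. an uninitialized int), so handle that case here. */
        }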

  • Encourage the use of markup files as documentation in enterprise [closed]

    - by linquize
    To make it easier to version-control and diff documentation, use markup files, such as HTML (html/xhtml), XML (DocBook), or wiki syntax (Markdown), instead of doc/docx. docx is too complex and lengthy. For HTML, no extra document generation is required: programmers can write HTML directly, and end users / managers can view the document in any web browser. For custom XML or wiki formats, viewers are needed to read the document, or converters are used to export it to pdf/doc. Is such a move becoming popular in an enterprise context? Why or why not?

  • Git in non-distributed, independent, lone programming: best practice(s)?

    - by explorest
    I am currently studying the git documentation to get the hang of the distributed version control workflow and the git command line. I want to start using git with small, personal pet projects to gain experience before doing it on a larger scale (i.e., bigger projects, team development). What areas of the git system should I, as a lone player, devote most of my study time to, and what parts should I leave for the larger-scale work later on? In other words, which features of the git system can only fully be grasped in team work, and therefore aren't worth getting too involved with at an individual level?
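    A minimal sketch of the solo workflow worth practising first (the branch name, tag and commit messages are placeholders): all of this pays off immediately for a lone developer, while remotes, pull requests and conflict-heavy merges can reasonably wait for team work.

        git init                          # turn the project folder into a repository
        git add . && git commit -m "Initial commit"
        git checkout -b experiment        # try an idea on a throwaway branch
        git commit -am "Describe the change"
        git checkout master               # back to the main line
        git merge experiment              # keep the idea...
        git branch -d experiment          # ...and delete the branch
        git log --oneline --graph         # review history
        git tag v0.1                      # mark a known-good state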

  • How do you avoid working on the wrong branch?

    - by henginy
    Being careful is usually enough to prevent problems, but sometimes I need to double check the branch I'm working on (e.g. "hmm... I'm in the dev branch, right?") by checking the source control path of a random file. In looking for an easier way, I thought of naming the solution files accordingly (e.g. MySolution_Dev.sln) but with different file names in each branch, I can't merge the solution files. It's not that big of a deal but are there any methods or "small tricks" you use to quickly ensure you're in the correct branch? I'm using Visual Studio 2010 with TFS 2008.
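    One small trick, sketched here on the assumption that tf.exe is available from a Visual Studio command prompt: before starting work, ask TFS which server path the current folder is mapped to, since the branch name is part of that path.

        rem Hedged sketch; the local path and the mapping shown in the comment are placeholders.
        cd /d C:\src\MySolution
        rem Lists the workspace mappings for the current folder, e.g.
        rem   $/MyProject/Dev -> C:\src\MySolution
        rem which makes the branch you are sitting in explicit.
        tf workfold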

  • ALSA mixer applet?

    - by David Given
    I have recently installed Narwhal. Everything looks good so far, but as usual sound via PulseAudio is deeply unsatisfactory: scratchy sound from Skype, choppy sound from command-line apps, etc. So I've removed it, and sound now works fine. Unfortunately I now discover that the standard Gnome mixer applet has changed to being PulseAudio-only, and trying to run it just hangs waiting for a PulseAudio connection. Does anyone know of a replacement volume control applet that I could use which manipulates the ALSA mixer directly? I've found xfce4-volumed, which handles the hotkeys very nicely (once I disable the keyboard shortcuts in Gnome), but of course the XFCE4 mixer applet isn't compatible with Gnome, so I can't use that; and nothing else seems to be readily available...
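    If no Gnome-compatible applet turns up, one stopgap sketch (assuming alsa-utils is installed and the relevant control is named Master) is to bind keyboard shortcuts directly to amixer, which talks to the ALSA mixer without PulseAudio:

        amixer set Master 5%+ unmute    # volume up
        amixer set Master 5%- unmute    # volume down
        amixer set Master toggle        # mute / unmute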

  • Bitbucket and a small development house

    - by Marlon
    I am in the process of finally rolling out Mercurial as our version control system at work. This is a huge deal for everyone as, shockingly, they have never used a VCS. After months of putting a bug in management's ear, they finally saw the light and now realise how much better it is than working with a network of shared folders! In the process of rolling this out, I am considering different strategies for managing our work, and I am leaning towards using Bitbucket as our "central" repository. The projects in Bitbucket will be private only, and everyone will push and pull from there. I am open to different suggestions, but has anyone got a similar setup? If so, what caveats have you encountered?
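    A minimal sketch of the hub model being described, with a placeholder repository URL: every developer clones from the private Bitbucket repository, commits locally, and synchronises through it.

        hg clone https://bitbucket.org/yourteam/yourproject     # placeholder URL
        cd yourproject
        # edit, then:
        hg commit -m "Describe the change"
        hg pull -u        # fetch colleagues' changesets and update the working copy
        hg push           # publish local changesets back to the central repository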

  • Are these GitHub features implemented in BitBucket?

    - by doug
    I recently joined a company that, while using git for version control, uses BitBucket as the remote/master plus git interface for its projects. This is my first exposure to BitBucket. There are a couple of GitHub features I rely heavily on in my daily workflow, and I am trying to find their counterparts in BitBucket, or else work out how to recreate the same functionality if it is not provided out of the box. In particular, in GitHub I rely heavily on tags (which I realize reside in git) to link commits to issues (feature requests, bug reports, etc.). In addition, since project specs are often decomposed into milestones, I use the milestone feature in GitHub Issues to track progress towards our project milestones (i.e., in GitHub a milestone comprises a sequence of issues, and the commit tagged with the last remaining issue under that milestone causes the milestone to be marked as completed). I suspect this workflow can be recreated using Jira, which my new employer also uses, but before trying that I want to learn whether it's already implemented in BitBucket and I just can't find it.

  • Help my graphic artists share their work

    - by Andy M
    As a developer I'm used to Subversion for source control, and I think it's great for sharing source code between developers. Thinking about my graphic artists and game designers, though, they need a slightly different approach: they need to share binary files, they need thumbnails and previews of their work, and I don't want to include their binaries in my game repository (it would make updating much too heavy for developers). I've seen that some artists use personally created websites to share their work, but I was wondering whether some "standard" application exists that would give my artists a good way of working together. Is there a common way of dealing with this? Is my approach (only final sprites in the game repo) correct? How do you handle this as game developers?

  • Workflow of sharing code for small teams

    - by Mihalis Bagos
    The problem is that we have developed a small CMS that currently differs per implementation, and of course its development is never complete. Sometimes we are working on more than one project that implements it (by copy-pasting the CMS code files into each project), and we add a new feature that we want to share with the other projects as well (these can be small ones too, e.g. a custom AJAX JSON controller; we use MVC). What we want is to quickly and uniformly share the code with all other projects via a version control system (or something similar), and generally organize the workflow, because we know the one we have isn't very good. What would you suggest? Also, at the moment we use Visual Studio 2010, so we are strongly considering TFS, but even if we get it we still don't know the ideal workflow, or even whether TFS supports what we want to do. Edit: Note also that we have specific implementations with modifications over the CMS base that we want to KEEP only in that project area (i.e. a specific feature that we DON'T want to share with the base CMS code).

  • Function keys on an external keyboard

    - by asymptotically
    So I bought a keyboard for my laptop. Unfortunately, it doesn't have the function key (though I know many people say it's useless). On my laptop, I control volume with the function key and F9-11. How can I get the same functionality on my external keyboard? The advanced keyboard settings don't have an option related to the function key. More specifically, it would be great if I could map it to my 'Menu' key which I'm never going to use. Or is there a way to get full functionality without it?
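    There is no general way to add an Fn key in software, but as a sketch of the "reuse the Menu key" idea: under X, xmodmap can rebind the Menu key to a media keysym the desktop already understands. The keycode below is a common value for the Menu key, not a certainty; confirm it with xev first.

        # Hedged sketch: check the Menu key's keycode with `xev` before using it.
        xmodmap -e "keycode 135 = XF86AudioMute"    # Menu key now acts as the mute key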

  • Cruise Control and Mercurial. Setup problems

    - by Bernard Larouche
    I am a complete newbie at continuous integration, but I am trying to learn it. Here is my setup. I have a main production location, Computer A, and a virtual machine hosting Windows Server 2008 with CruiseControl.NET installed and running, Computer B. I use Mercurial as my source control tool, installed on Computer A. Everything is configured: I can see the Cruise Control report on Computer B's web server and I can see my repository on Computer A's web server. The problem is the following. From Computer B's Cruise Control console I get this error:

        Source control failure : Unable to execute file [c:\develop\CCnet\WorkingDir\hg].
        The file may not exist or may not be executable.
        File not found 'C:\develop\CCnet\CCnetArtifacts\msbuild-results.xml'.

    What I did is copy my project directory onto the Cruise Control computer at c:\develop\CCnet\WorkingDir\. Here is the source control block from my ccnet.config:

        <sourcecontrol type="hg">
          <repo>http://mylocalmachinewebserver</repo>
          <workingDirectory>c:\develop\CCnet\WorkingDirectory</workingDirectory>
        </sourcecontrol>

    Could someone help me understand my problem? Many thanks.
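    The first error suggests CCNet is looking for an hg executable inside the working directory. A hedged sketch of the kind of block that avoids that, assuming the Mercurial source control block accepts an executable element (the paths and the repository URL are placeholders):

        <sourcecontrol type="hg">
          <executable>C:\Program Files\Mercurial\hg.exe</executable>
          <repo>http://mylocalmachinewebserver/myproject</repo>
          <workingDirectory>c:\develop\CCnet\WorkingDirectory</workingDirectory>
        </sourcecontrol>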

  • OpenCV: How to copy CvSeq data into CvMat?

    - by Can Bal
    I have a CvSeq structure at hand, which is the output of an existing OpenCV function. It holds 128 bytes of data in each sequence element. I want to copy each of these 128-byte elements into the rows of a CvMat structure to form an N-by-128 matrix of type CV_32FC1. What would be the most efficient way to do this? I thought of using memcpy but couldn't come up with a working solution. For the details: I want to compute the SURF features of an image with the cvExtractSURF() function and copy the SURF descriptors into a matrix so I can pass it to cvKMeans2().
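    A minimal sketch of the memcpy approach, assuming the sequence filled in by cvExtractSURF() holds 128-float descriptors and using placeholder variable names:

        /* descriptors: the CvSeq filled in by cvExtractSURF() */
        int n = descriptors->total;
        CvMat *mat = cvCreateMat(n, 128, CV_32FC1);

        for (int i = 0; i < n; i++) {
            const float *descr = (const float *)cvGetSeqElem(descriptors, i);
            float *row = (float *)(mat->data.ptr + i * mat->step);   /* respect the row stride */
            memcpy(row, descr, 128 * sizeof(float));
        }
        /* mat can now be handed to cvKMeans2() */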

  • Change present working directory of a calling shell from a ruby script

    - by Erik Kastman
    I'm writing a simple ruby sandbox command-line utility to copy and unzip directories from a remote filesystem to a local scratch directory in order to unzip them and let users edit the files. I'm using Dir.mktmpdir as the default scratch directory, which gives a really ugly path (for example: /var/folders/zz/zzzivhrRnAmviuee+++1vE+++yo/-Tmp-/d20100311-70034-abz5zj) I'd like the last action of the copy-and-unzip script to cd the calling shell into the new scratch directory so people can access it easily, but I can't figure out how to change the PWD of the calling shell. One possibility is to have the utility print out the new path to stdout and then run the script as part of a subshell (i.e. cd $(sandbox my_dir) ), but I want to print out progress on the copy-and-unzipping since it can take up to 10 minutes, so this won't work. Should I just have it go to a pre-determined, easy-to-find scratch directory? Does anyone have a better suggestion? Thanks in advance for your help. -Erik
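    A child process can never change its parent shell's working directory, so the cd $(sandbox my_dir) idea is still the practical route; it works if progress goes to stderr and only the final path goes to stdout. A minimal sketch, with the copy/unzip steps stubbed out (sandbox and my_dir are the question's own names):

        require 'tmpdir'

        scratch = Dir.mktmpdir
        $stderr.puts "Copying into #{scratch} ..."   # progress stays visible; $() does not capture stderr
        # ... copy and unzip here, reporting progress to $stderr ...
        $stderr.puts "Done."
        puts scratch                                 # the only line the calling shell captures

        # In the shell: cd "$(sandbox my_dir)"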

  • ClearCase: Copy old versions with Snapshot Views under Windows

    - by cogmios
    Using IBM Rational ClearCase: I only have access to snapshot views, so NO dynamic views. I want to copy ALL versions from a certain changeset to c:\temp. I have already listed the changeset versions in a file (a couple of hundred versions, of which I only need the latest one), and I do not have a baseline over this older set. What I have now, and which does not work:

        #!/usr/bin/perl -w
        #
        # PROGRAM: copytest.pl
        $filename = "Design test123.doc";
        $view     = "D:\\AdminViews\\ABC_R1_READ_2\\ABCD002\\ABC_DESIGN\\BLA Framework\\P0\\";
        $version  = "\\main\\ABC_R1_READ\\1";
        $printhet = 'cleartool find . -name "' . $filename . '" -version version(' . $version . ') -exec "cmd /c copy %CLEARCASE_XPN% D:\temp\%CLEARCASE_PN%"';
        system($printhet);

  • Unlocking SVN working copy with unversioned resources

    - by Vijay Dev
    I have a repository that contains some unversioned directories and files. The server running svn was recently changed, and since the checkout was done using the URL svn://OLD-IP, I relocated my svn working copy to the URL svn://NEW-DOMAIN-NAME. Now, because of the unversioned resources, the switch did not go through properly and the working copy got locked. A cleanup operation did not work either, because of those unversioned resources. I looked it up on the net, found out about svn:ignore, and tried that, but to no avail. I am unable to release all the locks. Any ideas on solving the problem? Once I release the locks, I believe I can use svn:ignore and carry on with the relocate operation.
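    A sketch of the usual unlocking sequence, run from the working copy root; since cleanup has reportedly failed once already, the extra step is to move the offending unversioned items out of the way first:

        svn status                 # locked paths show an 'L' flag; '?' marks the unversioned items
        # move the unversioned directories/files somewhere outside the working copy, then:
        svn cleanup                # releases the working-copy locks left by the interrupted relocate
        svn status                 # confirm the 'L' flags are gone before retrying the relocate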

  • XAML Serialization object not using asp.net shadow copy

    - by mrwayne
    Hi, I'm having a problem where I use the XAML serializer / deserializer for a configuration-type file that I have. The problem I'm getting is that the XAML serializer returns objects from the assembly in the /Bin directory, while the rest of the web application uses the assemblies stored in the ..../Temporary Files/.. directory. Is there any way to prevent this from happening? Is this a bug in the XAML serializer / assembly loading routines? Every time I compile I need to stop and start the ASP.NET application so the shadow copy and the bin contain exactly the same file. Even recompiling without making any change to the dll still causes the problem. Any thoughts on how to get around this? Currently I've tried turning shadow copy off, but then I have the same problem of needing to shut down / start up the web app every time I compile. Help!

  • Assigning a GD reference to a new variable fails to copy

    - by Stomped
    This is a contrived example, but it illustrates my problem much more concisely than the code I'm actually using, and I've tested it and it exhibits the problem:

        $image = imagecreatefromjpeg('test.jpg');
        $copy_of_image = $image; // The important bit
        imagedestroy($image);
        header('Content-type: image/jpeg');
        imagejpeg($copy_of_image);

    Now, my expectation is that $copy_of_image is exactly that, but when I run this it fails, printing out the URL of the script of all things. Comment out the imagedestroy() and it works just fine. A var_dump of $image gives:

        resource(3) of type (gd)

    So why can't I copy this? Apparently the assignment $copy_of_image = $image creates a reference rather than a copy. Is there a way to prevent that?
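    A minimal sketch of one way to get a genuinely independent image: GD resources are handles, so assignment only copies the handle, but allocating a second image and blitting the pixels into it gives $copy its own data.

        <?php
        $image = imagecreatefromjpeg('test.jpg');

        $w = imagesx($image);
        $h = imagesy($image);
        $copy = imagecreatetruecolor($w, $h);
        imagecopy($copy, $image, 0, 0, 0, 0, $w, $h);   // dst, src, dst x/y, src x/y, width, height

        imagedestroy($image);                           // safe now: $copy no longer shares the resource

        header('Content-type: image/jpeg');
        imagejpeg($copy);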

  • Copy a hashtable to another hashtable using C++

    - by zengr
    I am starting out with C++ and need to know: what is the right approach to copy one hashtable into another in C++? We can easily do this in Java using:

        HashMap copyOfOriginal = new HashMap(original);

    But what about C++? How should I go about it? UPDATE: I am doing this at a very basic level; perhaps the Java example was the wrong one to give. This is what I am trying to implement in C++: I have a hash array, and each element of the array points to the head of a linked list, which has its individual nodes (data and a next pointer). I now need to copy the complete hash array and the linked list each element points to.
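    A minimal sketch of the deep copy described in the update, assuming a fixed-size bucket array of singly linked lists (the names and bucket count are placeholders; with std::unordered_map the copy constructor would do all of this for you):

        #include <cstddef>

        struct Node {
            int   data;
            Node *next;
        };

        const std::size_t kBuckets = 101;   // placeholder table size

        // Deep-copy every chain so the destination table owns its own nodes.
        void copyTable(Node *const src[], Node *dst[], std::size_t buckets = kBuckets) {
            for (std::size_t i = 0; i < buckets; ++i) {
                dst[i] = 0;
                Node **tail = &dst[i];
                for (const Node *cur = src[i]; cur != 0; cur = cur->next) {
                    *tail = new Node;
                    (*tail)->data = cur->data;
                    (*tail)->next = 0;
                    tail = &(*tail)->next;
                }
            }
        }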

  • Python Windows File Copy with Wildcard Support

    - by Wang Dingwei
    I've been doing this all the time:

        result = subprocess.call(['copy', '123*.xml', 'out_folder\\.', '/y'])
        if result == 0:
            do_something()
        else:
            do_something_else()

    Today I started to look into the pywin32 modules and saw functions like win32file.CopyFiles(), but it seems they may not support copying files to a directory. Maybe that functionality is hidden somewhere, but I haven't found it yet. I've also tried a "glob" and "shutil" combination, but "glob" is incredibly slow when there are many files. So, how do you emulate this Windows command in Python?

        copy 123*.xml out_folder\. /y
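    A minimal sketch of a pure-Python equivalent, reusing the question's own paths as placeholders; iglob yields matches one at a time instead of building the whole list, which helps when the directory is large.

        import glob
        import os
        import shutil

        def copy_pattern(pattern, dest):
            if not os.path.isdir(dest):
                os.makedirs(dest)
            for path in glob.iglob(pattern):   # one match at a time
                shutil.copy(path, dest)        # overwrites silently, like copy ... /y

        copy_pattern('123*.xml', 'out_folder')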

  • QObject cloning

    - by Olorin
    I know that QObjects are supposed to be identities, not values; e.g. you cannot copy them, and by default the copy constructor and assignment operator are disabled, as explained in the Qt documentation. But is it possible to create a new QObject from an existing one using a clone method? Would this be a logic error? If I write

        QObject b;
        QObject a;
        b.cloneFrom(a);

    or

        QObject *a = new QObject();
        QObject *b = new QObject();
        b->cloneFrom(a);

    and the clone method copies things like members etc., would this be wrong? And if it is OK, can I write my own copy constructor and assignment operator that do just that? Note: I actually want to try this with classes that inherit QObject.
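    One hedged sketch of what a clone helper could look like: copy the declared Qt properties of one instance onto another of the same class through the meta-object system. This deliberately ignores children, signal/slot connections and dynamic properties, which is exactly where such cloning tends to turn into the logic error the question worries about.

        #include <QObject>
        #include <QMetaObject>
        #include <QMetaProperty>

        // Copy every readable+writable property of 'from' onto 'to'.
        void copyProperties(const QObject *from, QObject *to)
        {
            const QMetaObject *meta = from->metaObject();
            for (int i = 0; i < meta->propertyCount(); ++i) {
                QMetaProperty prop = meta->property(i);
                if (prop.isReadable() && prop.isWritable())
                    to->setProperty(prop.name(), prop.read(from));
            }
        }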

  • Dynamically add files to a Visual Studio deployment project

    - by Graeme Yeo
    I've been desperately looking for the answer to this, and I feel I'm missing something obvious. I need to copy a folder full of data files into the TARGETDIR of my deployment project at compile time. I can see how I would add individual files (i.e. right-click in File System and go to Add - File), but I have a folder full of data files which constantly gets added to, and I'd prefer not to have to add the new files each time I compile. I have tried using a PreBuildEvent to copy the files:

        copy $(ProjectDir)..\Data*.* $(TargetDir)Data\

    which fails with error code 1 when I build. I can't help but feel I'm missing the point here, though. Any suggestions? Thanks in advance. Graeme
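    A hedged sketch of a pre-build event that usually behaves better for whole folders; the macros are the standard Visual Studio ones, but whether $(TargetDir) is what the deployment project actually packages is an assumption worth checking:

        rem /S copies subfolders, /I treats the destination as a folder, /Y suppresses
        rem the overwrite prompt; the quotes protect paths that contain spaces.
        xcopy "$(ProjectDir)..\Data\*.*" "$(TargetDir)Data\" /S /I /Y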
