Search Results

Search found 42218 results on 1689 pages for 'os version'.


  • Trade-offs of local vs remote development workflows for a web development team

    - by lamp_scaler
    We currently have SVN set up on a remote development server. Developers SSH into the server and develop in their sandbox environments there. Each one has a virtual host pointed at their sandbox so they can preview their changes in a web browser by connecting to developer-sandbox1.domain.com. This has worked well so far because the team is small and everyone uses computers with varying specs and OSs. I've heard some web shops use a workflow where developers work off a VM on their local machine and only push finished changes to the remote server that hosts SVN. The downside is that everyone needs a machine powerful enough to run both the VM and all their development tools. It would also mean creating images that mirror the server environment (we use CentOS), having developers install them into their VMs, and building new images every time the server environment is updated. What are some other trade-offs? Ultimately, why did you choose one workflow over the other?
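
    One way teams reduce the image-maintenance burden described above is to script the local VM setup. The sketch below uses Vagrant purely as an illustration; the box name and URL are hypothetical and the original question does not mention Vagrant.

        # A rough sketch, assuming the CentOS server environment has been packaged as a Vagrant box.
        vagrant init centos-server http://files.example.com/boxes/centos-server.box
        vagrant up     # boot a local VM that mirrors the server environment
        vagrant ssh    # develop inside the VM with the team's standard stack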

    Read the article

  • Are there any subversion "dash board" web applications that can show me a list of recent commits from all my repositories?

    - by Joe
    I am looking for something like a Subversion dashboard that, at the very least, can show me commits from across a group of repositories. Is there anything like this available? Since it could be dead simple and I can't find anything immediately, I am thinking of just scratching my own itch here, but I am hoping someone has wanted this before. Are there any Subversion "dashboards" that can show me even a simple Twitter-like list of commits from across my repositories?
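
    For the "dead simple" end of the spectrum, recent commits can be pulled from several repositories with plain svn log; a dashboard could be little more than this loop run on a schedule. The repository URLs below are placeholders.

        # A minimal sketch: list the five most recent commits from each repository.
        for repo in http://svn.example.com/projectA http://svn.example.com/projectB; do
            echo "== $repo =="
            svn log -l 5 "$repo"
        done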

    Read the article

  • git tagging comments - best practices

    - by Evan
    I've adopted a tagging system of x.x.x.x, and this works fine. However, you also need to leave a comment with your git tag. I've been using descriptions such as "fixes bug Y" or "feature X", but is this the best sort of comment to be leaving? In particular, if a tag encompasses several fixes, it seems not to make sense to have a very long tag comment. Does this mean that I should be creating a tag for every bug fix or feature, or should the tag comments reflect something else? I have a few ideas that may be good, but I'd love some advice from seasoned git tagging veterans :) For those who prefer specific examples: 1.0.0.0 - initial release; 1.0.0.1 - bug fix for issue X; 1.0.0.2 - (what if this is a bug fix for multiple issues? the comment would be too long, no?). In the following example, the comments are more or less the same as the tags, which seems redundant. Is there something else we could be describing? https://github.com/osCommerce/oscommerce2/tags
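
    If a release covers several issues, one option is to keep the summary short and list the issues in separate paragraphs of an annotated tag. A minimal sketch (the version and issue numbers are illustrative):

        # git concatenates multiple -m options as separate paragraphs of the tag message.
        git tag -a 1.0.0.2 -m "Bug-fix release" -m "Fixes issues #12, #15 and #18"
        git push origin 1.0.0.2
        git show 1.0.0.2    # the full message stays attached to the tag, not just the one-line summary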

    Read the article

  • Is there tool agnostic terminology for source control activities?

    - by C. Ross
    My team is entering into some discussions on source control (process and possibly tools) and we would like tool-agnostic terminology for the various activities. The environment has multiple (old) VCSs and multiple desired (new) VCSs. Is there a standard definition of activities, or at least some commonly accepted set? Example activities (in CVS terminology): Branch, Check out, Update, Merge.

    Read the article

  • Do you need all that data?

    - by BuckWoody
    I read an amazing post over on Ars Technica (link: http://arstechnica.com/science/news/2010/03/the-software-brains-behind-the-particle-colliders.ars?utm_source=rss&utm_medium=rss&utm_campaign=rss) about the LHC, or as they are also known, the "particle colliders". Beyond just the pure scientific geek awesomeness, these instruments have the potential to collect more data than you can (or possibly should) store. Actually, this problem has a lot in common with a BI system. There's so much granular detail available in the source systems that a designer has to decide how, and how much, to roll up the data. Whenever you do that, you lose fidelity, but in many cases that's OK. Take, for example, your car's speedometer. You don't actually need to track each and every point of speed as it happens. You only need to know that you're hovering around the speed limit at a certain point in time. Since this is the way that humans perceive data, is there some lesson we should take in the design of data "flows" - and what implications does this have for new technologies like StreamInsight?

    Read the article

  • How do I Integrate Production Database Hot Fixes into Shared Database Development model?

    - by TetonSig
    We are using SQL Source Control 3, SQL Compare, and SQL Data Compare from RedGate, Mercurial repositories, TeamCity, and a set of four environments including production. I am working on getting us to a dedicated environment per developer, but for at least the next 6 months we are stuck with a shared model. To summarize our current system: we have a DEV SQL Server where developers first make changes and additions. They commit their changes through SQL Source Control to a local hgdev repository. When they execute an hg push to the main repository, TeamCity listens for that and then (among other things) pushes the hgdev repository to hgrc. Another TeamCity process listens for that, does a pull from hgrc, and deploys the latest to a QA SQL Server where regression and integration tests are run. When those pass, a push from hgrc to hgprod occurs. We do a compare of hgprod to our PREPROD SQL Server and generate deployment/rollback scripts for our production release. Separate from the above, we have database hot fixes that need to be applied between releases. The process there is for our Operations team to make changes on the PREPROD database and then, after testing, to use SQL Source Control to commit their hot fix changes to hgprod from the PREPROD database, then do a compare from hgprod to PRODUCTION, create deployment scripts, and run them on PRODUCTION. If we were in a dedicated-database-per-developer model, we could simply push hgprod back to hgdev automatically and merge in the hot fix change (through TeamCity monitoring for hgprod check-ins), and developers would pick it up and merge it into their local repository and database periodically. However, given that with a shared model the DEV database itself is the source of all changes, this won't work. Pushing hot fixes back to hgdev will show up in SQL Source Control as being different from the DEV SQL Server, and therefore we would need to overwrite the repository with the "change" from the DEV SQL Server. My only workaround so far is to just have Ops assign a developer the hot fix ticket with a script attached, and then we run their hot fixes against DEV ourselves to merge them back in. I'm not happy with that solution. Other than working faster to get to a dedicated environment, are there other ways to keep this loop going automatically?
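
    As a rough illustration of the loop in question, the Mercurial side of propagating a hot fix back towards DEV can be scripted along these lines. This is only a sketch of the version-control half (repository URLs are hypothetical); it deliberately ignores the SQL Source Control/DEV database conflict that the question is really about.

        hg clone http://hg.example.com/hgdev work && cd work
        hg pull http://hg.example.com/hgprod   # hot-fix changesets arrive as a second head
        hg merge                               # reconcile them with ongoing DEV work
        hg commit -m "Merge production hot fix back into the DEV line"
        hg push http://hg.example.com/hgdev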

    Read the article

  • The Windows Azure Software Development Kit (SDK) and the Windows Azure Training Kit (WATK)

    - by BuckWoody
    Windows Azure is a platform that allows you to write software, run software, or use software that we've already written. We provide lots of resources to help you do that - many can be found right here in this blog series. There are two primary resources you can use, and it's important to understand what they are and what they do.

    The Windows Azure Software Development Kit (SDK)

    Actually, this isn't one resource. We have SDKs for multiple development environments, such as Visual Studio and also Eclipse, along with SDKs for iOS, Android and other environments. Windows Azure is a "back end", so almost any technology or front-end system can use it to solve a problem. The SDKs are primarily for development. In the case of Visual Studio, you'll get a runtime environment for Windows Azure which allows you to develop, test and even run code all locally - you do not have to be connected to Windows Azure at all until you're ready to deploy. You'll also get a few samples and code blocks, along with all of the libraries you need to code with Windows Azure in .NET, PHP, Ruby, Java and more. The SDK is updated frequently, so check this location to find the latest for your environment and language - just click the bar that corresponds to what you want: http://www.windowsazure.com/en-us/develop/downloads/

    The Windows Azure Training Kit (WATK)

    Whether you're writing code, using Windows Azure Virtual Machines (VMs) or working with Hadoop, you can use the WATK to get examples, code, PowerShell scripts, PowerPoint decks, training videos and much more. This should be your second download after the SDK. This is all of the training you need to get started, and even beyond. The WATK is updated frequently - and you can find the latest one here: http://www.windowsazure.com/en-us/develop/net/other-resources/training-kit/

    There are many other resources - again, check the http://windowsazure.com site, the community newsletter (which introduces the latest features), and my blog for more.

    Read the article

  • Examples of continuous integration workflow using git

    - by Andrew Barinov
    Can anyone provide a rough outline of their git workflow that works well with continuous integration? E.g., how do you branch? Do you fast-forward commits to the master branch? I am primarily working with Rails as well as client- and server-side JavaScript. If anyone can recommend a solid CI technology that's compatible with those, that'd be great. I've looked into Jenkins but would like to check out other good alternatives. To put some context into this, I am planning on transitioning from working as a single developer to working as part of a team. I'd like to start standardizing my own personal workflow so that I can onboard new devs quickly.
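
    For reference, one common shape such a workflow takes is short-lived feature branches that the CI server builds on every push, merged into master with a merge commit rather than a fast-forward so the feature's history stays visible. A minimal sketch (branch names are illustrative):

        git checkout -b feature/signup        # isolate the work
        git commit -am "Add signup form"
        git push origin feature/signup        # CI builds and tests the branch
        git checkout master
        git merge --no-ff feature/signup      # explicit merge commit; no fast-forward
        git push origin master                # CI builds master and deploys on green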

    Read the article

  • How can I merge two SubVersion branches to one working copy without committing?

    - by Eric Belair
    My current Subversion workflow is like so: the trunk is used to make small content changes and bug fixes to the main source code, while branches are used for adding/editing enhancements and projects. So, trunk changes are made, tested, committed and deployed pretty quickly, whereas enhancements and projects need additional user testing and approval. At times, I have two branches that need testing and approval at the same time. I don't want to merge to the trunk and commit until the changes are fully tested and approved. What I need to do is merge both branches into one working copy without any commits. I am using TortoiseSVN, and when I try to merge the second branch, I get an error message: "Cannot merge into a working copy that has local modifications". Is there a way that I can do this without committing either merge?
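
    For comparison, the command-line client does not enforce TortoiseSVN's clean-working-copy check, so successive merges can be staged in a single checkout without committing (at the cost of messier conflict resolution). A rough sketch with illustrative branch paths:

        svn checkout http://svn.example.com/repo/trunk wc-combined
        cd wc-combined
        svn merge ^/branches/enhancement-A .   # first merge: the working copy now has local mods
        svn merge ^/branches/enhancement-B .   # second merge on top; resolve any conflicts
        svn status                             # review the combined result; nothing has been committed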

    Read the article

  • How should I manage "reverting" a branch done with bookmarks in mercurial?

    - by Earlz
    I have an open source project on Bitbucket. Recently, I've been working on an experimental branch which I (for whatever reason) didn't make an actual branch for. Instead, what I did was use bookmarks. So I made two bookmarks at the same revision: "test", the new code I worked on that should now be abandoned (due to an experiment failure), and "main", the stable old code that works. I worked in test. I also pushed from test to my server, which ended up switching the tip tag to the new unstable code, when I really would rather it had stayed at main. I "switched" back to the main bookmark by doing an hg update main and then committing an insignificant change. So, I pushed this with hg push -f and now my source control is "correct" on the server. I know that there should be a cleaner way to "switch" branches. What should I do in the future for this kind of operation?
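
    For the record, the bookmark-based switch can usually be done without the dummy commit or the forced push; a sketch of the commands involved (bookmark names taken from the question):

        hg update main        # move the working copy back to the stable bookmark
        hg push -B main       # push the bookmark itself so the server knows which head matters
        hg bookmark -d test   # optionally delete the abandoned bookmark locally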

    Read the article

  • Database source control

    - by Bojan Skrchevski
    Should database files (scripts, etc.) be under source control? If so, what is the best method to keep them there and update them? Is there even a need for database files to be under source control, since we can put the database on a development server where everyone can use it and make changes to it if needed? But then we can't get it back if someone messes it up. What approach works best for keeping databases under source control?
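
    To make the question concrete, the simplest widely used approach is to keep the schema (and any reference data) as plain scripts in the repository and treat the development server as disposable. A rough sketch, using MySQL and Subversion purely as examples:

        # Dump the schema only (no row data) so the script diffs cleanly between revisions.
        mysqldump --no-data --routines mydb > schema.sql
        svn add schema.sql
        svn commit -m "Track the database schema as a script"
        # Anyone can now rebuild a broken dev database from the last known-good revision:
        mysql mydb < schema.sql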

    Read the article

  • TortoiseSVN post-commit.bat doesn't work [migrated]

    - by user565739
    I am using TortoiseSVN on Windows 7 x64. I tried to put a post-commit.bat in the hooks folder of a repository, but it doesn't work at all. So I tried to put a pre-commit.bat (the content is exactly the same as post-commit.bat) in the hooks folder, and it worked fine. This is very strange. The .bat file is very simple:

        @echo off
        setlocal
        set REPOS=%1
        set TXN=%2
        xcopy C:\a C:\b\ /S /F
        exit 0

    Has anyone made post-commit work with TortoiseSVN?

    Read the article

  • Are there advantages to using a DVCS for a solo developer?

    - by SnOrfus
    Right now, I use VisualSVN on my server and have AnkhSVN/TortoiseSVN on my personal machine. It works well enough, and I don't have to change, but if I can see some benefits of using a DVCS, then I might give it a go. However, if there's no point or difference when using it without other people, then I won't bother. So again I ask: are there any benefits to using a DVCS when you're the only developer?

    Read the article

  • How can I convince cowboy programmers to use source control?

    - by P.Brian.Mackey
    UPDATE: I work on a small team of four devs. They have all used source control. Most of them can't stand source control and instead choose not to use it. I strongly believe source control is a necessary part of professional development. Several issues make it very difficult to convince them to use source control:

    - The team is not used to using TFS. I've had 2 training sessions, but was only allotted 1 hour, which is insufficient.
    - Team members directly modify code on the server. This keeps code out of sync, requires comparisons just to be sure you are working with the latest code, and leads to complex merge problems.
    - Time estimates offered by developers exclude the time required to fix any of these problems. So, if I say "no, it will take 10x longer...", I have to constantly explain these issues and put myself at risk, because management may now perceive me as "slow".
    - The physical files on the server differ in unknown ways across ~100 files. Merging requires knowledge of the project at hand and, therefore, developer cooperation, which I am not able to obtain.
    - Other projects are falling out of sync. Developers continue to distrust source control and therefore compound the issue by not using it.
    - Developers argue that using source control is wasteful because merging is error prone and difficult. This is a difficult point to argue, because when source control is so badly misused and continually bypassed, it is error prone indeed. Therefore, the evidence "speaks for itself" in their view.
    - Developers argue that directly modifying server code, bypassing TFS, saves time. This is also difficult to argue, because the merge required to synchronize the code in the first place is time consuming. Multiply this by the 10+ projects we manage.
    - Permanent files are often stored in the same directory as the web project, so publishing (full publish) erases these files that are not in source control. This also drives distrust of source control, because "publishing breaks the project". Fixing this (moving stored files out of the solution subfolders) takes a great deal of time and debugging, as these locations are not set in web.config and often exist across multiple code points.

    So the culture perpetuates itself: bad practice begets more bad practice, and bad solutions drive new hacks to "fix" much deeper, much more time consuming problems. Servers and hard drive space are extremely difficult to come by, yet user expectations are rising. What can be done in this situation?

    Read the article

  • Managing Eclipse projects in source control

    - by Matt Phillips
    I've been using Eclipse for a long time to do development. One of the problems I've come across when working on other people's projects is that, if they come from source control, some of the Eclipse project files ("default.properties" and other XML config files) are missing. It's usually a big pain in the butt to get the project running in Eclipse. I understand the reasoning for not tracking certain files, because they may be full of stuff specific to a particular Eclipse install. How do all of you manage that?
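
    A common compromise is to commit the project files that every checkout needs and ignore only the machine-specific ones. A sketch using git as an example (the exact file list varies by project type and is only illustrative):

        # Track the shared Eclipse project metadata...
        git add .project .classpath .settings/
        # ...and ignore build output and per-machine files.
        printf 'bin/\n.metadata/\nlocal.properties\n' >> .gitignore
        git add .gitignore
        git commit -m "Track shared Eclipse project files, ignore machine-specific ones"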

    Read the article

  • Github Organization Repositories, Issues, Multiple Developers, and Forking - Best Workflow Practices

    - by Jim Rubenstein
    A weird title, yes, but I've got a bit of ground to cover, I think. We have an organization account on GitHub with private repositories. We want to use GitHub's native issues/pull-requests features (pull requests are basically exactly what we want as far as code reviews and feature discussions). We found the tool hub by defunkt, which has a cool little feature of being able to convert an existing issue to a pull request and automatically associate your current branch with it. I'm wondering if it is best practice to have each developer in the organization fork the organization's repository to do their feature work/bug fixes/etc. This seems like a pretty solid workflow (as it's basically what every open source project on GitHub does), but we want to be sure that we can track issues and pull requests from ONE source, the organization's repository. So I have a few questions: Is a fork-per-developer approach appropriate in this case? It seems like it could be a little overkill. I'm not sure that we need a fork for every developer, unless we introduce developers who don't have direct push access and need all their code reviewed. In which case, we would want to institute a policy like that for those developers only. So, which is better: all developers in a single repository, or a fork for everyone? Does anyone have experience with the hub tool, specifically the pull-request feature? If we do a fork-per-developer (or even just for less-privileged devs), will the pull-request feature of hub operate on the pull requests from the upstream master repository (the organization's repository), or does it have different behavior? EDIT: I did some testing with issues, forks, and pull requests and found the following. If you create an issue on your organization's repository, then fork the repository from your organization to your own GitHub account, make some changes, and merge to your fork's master branch, then when you try to run hub -i <issue #> you get an error: "User is not authorized to modify the issue". So, apparently that workflow won't work.
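
    For context on the hub feature being discussed, the issue-to-pull-request conversion is normally invoked roughly as below. The issue number and branch name are illustrative, and the exact flags depend on the hub version, so treat this as an approximation rather than a reference:

        git checkout -b fix-login-timeout
        # ...commit the fix...
        git push origin fix-login-timeout
        # Attach the pushed branch to existing issue #17 instead of opening a brand-new pull request.
        hub pull-request -i 17 -b master -h fix-login-timeout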

    Read the article

  • When should I make the first commit to source control?

    - by Kendall Frey
    I'm never sure when a project is far enough along to first commit to source control. I tend to put off committing until the project is 'framework-complete' and primarily commit features from then on. (I haven't done any personal projects large enough to have a core framework too big for this.) I have a feeling this isn't best practice, though I'm not sure what all could go wrong. Let's say, for example, I have a project which consists of a single code file. It will take about 10 lines of boilerplate code, and 100 lines to get the project working with extremely basic functionality (1 or 2 features). Should I first check in: The empty file? The boilerplate code? The first features? At some other point? Also, what are the reasons to check in at a specific point?
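
    For what it's worth, a first commit doesn't have to be ceremonial; many people record the skeleton as soon as there is anything they would mind losing. A trivial sketch:

        git init
        git add README.md .gitignore
        git commit -m "Initial commit: project skeleton"
        # ...then commit the boilerplate and each small feature as separate commits.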

    Read the article

  • Enable DreamScene in Any Version of Vista or Windows 7

    - by DigitalGeekery
    Windows DreamScene was a utility available for Vista Ultimate that allowed users to set video as desktop wallpaper. It was dropped in Windows 7, but we’ll take a look at how to play DreamScenes in all versions of Windows 7 or Vista. Downloading DreamScenes First, you’ll need to find some DreamScenes to download. We’ve found some nice ones at both DreamScene.org and DeviantArt. You can find those download links at the end of the article. They’ll come as compressed files, so you’ll need to extract them after downloading. Windows 7 DreamScene Activator If you are running Windows 7 you can use Windows 7 DreamScene Activator. This free portable utility enables DreamScene in both 32 & 64 bit versions of Windows 7. Users can then set either MPG or WMV files as desktop wallpaper. Download and extract the Windows 7 DreamScene Activator (link below). Once extracted, you’ll need to run the application as administrator. Right-click on the .exe and select Run as administrator. Click on Enable DreamScene. This will also restart Windows Explorer if it is open. To play your DreamScene, browse for the file in Windows Explorer, right-click the file and select Set as Desktop Background. Enjoy your new Windows 7 DreamScene.   Although it says it is for Windows 7 only, we were able to get it to work with no problems on Vista Home Premium x32 as well.   You can Pause the DreamScene at anytime by right-clicking on the desktop and selecting Pause DreamScene.   When you are ready for a change, click Disable DreamScene and switch back to your previous wallpaper. Using VLC Media Player Users of all versions of Windows 7 & Vista can enable a DreamScene using VLC. Recently, we showed you how to set a video as your desktop wallpaper in VLC.  Since DreamScenes are in MPEG or WMV format, we will use the same tactic to display them as desktop wallpaper. We’ll just need to make a few additional tweaks to the VLC settings. You’ll need to download and install VLC media player if you don’t already have it. You can find the download link below. Next, select Tools > Preferences from the Menu. Select the Video button on the left and then choose DirectX video output from the Output dropdown list. Next, select All under Show Settings at the lower left, then select the Video button on the left pane. Uncheck Show media title on video. This will prevent VLC from constantly showing the title of the video on the screen each time the video loops. Click Save and the restart VLC.   Now we will add the video to our playlist and set it to continuously loop. Select View > Playlist from the Menu. Select the Add file button from the bottom of the Playlist window and select Add file.   Browse for your file and click Open.   Click the Loop button at the bottom so the video plays in a continuous loop.   Now, we’re ready to play the video. After the video starts playing, select Video > DirectX Wallpaper from the Menu, then minimize VLC.   If you’re using Aero Themes, you may get a pop-up warning and Windows will switch automatically to a basic theme.   If looping one video gets to be a little repetitive, you can add multiple videos to your playlist in VLC and loop the entire playlist. Just make sure you toggle the Loop button on the playlist window to Loop All. Now you’ve got a nice DreamScene playing on your desktop. Another cool trick you can do with VLC is take snapshots of favorite movie scenes and set them as backgrounds. 
    When you’re ready to go back to your old wallpaper, maximize VLC, select Video and click DirectX Wallpaper again to turn off the video background. Occasionally we were left with a black screen and had to manually change our wallpaper back to normal even after turning off the DirectX Wallpaper. Note: Keep in mind that using the VLC method takes up a lot of resources, so if you try to run it on older hardware, or say a netbook, you’re not going to get good results. We also tried to use the VLC method in XP, but couldn’t get it to work. If you have, leave a comment and let us know. While the DreamScene feature never really caught on in Vista, we find DreamScenes to be a cool way to pump a little life into your desktop on any version of Vista or Windows 7.

    Downloads: DreamScenes from Dreamscene.org | DreamScenes from DeviantArt | Download VLC media player | Windows 7 DreamScene Activator

    Read the article

  • How and where do you store your private work/sourcecode?

    - by Amir Rezaei
    I have worked as a developer for over 10 years now. During that time I have had my own small projects where I have developed tools, applications and games. I have not found any robust solution for storing my work. It's always fun to get back to your code and see how you did things before and how you would do them now, and it's simply work that would be unfortunate to lose. There are SVN solutions such as Google's Project Hosting; however, I'm not interested in sharing my code or making it open source. Currently I'm hosting my own SVN server. So here comes my question: how and where do you store your private work/source code? Requirements: source code versioning, backup, preferably free, and (edit) remote access. Edit: I have used Dropbox + TrueCrypt + SVN. Unfortunately you are limited to 5 GB.

    Read the article

  • Tracing Silex from PHP to the OS with DTrace

    - by cj
    In this blog post I show the full stack tracing of Brendan Gregg's php_syscolors.d script in the DTrace Toolkit. The Toolkit contains a dozen very useful PHP DTrace scripts and many more scripts for other languages and the OS. For this example, I'll trace the PHP micro framework Silex, which was the topic of the second of two talks by Dustin Whittle at a recent SF PHP Meetup. His slides are at Silex: From Micro to Full Stack.

    Installing DTrace and PHP

    The php_syscolors.d script uses some static PHP probes and some kernel probes. For Oracle Linux I discussed installing DTrace and PHP in DTrace PHP Using Oracle Linux 'playground' Pre-Built Packages. On other platforms with DTrace support, follow your standard procedures to enable DTrace and load the correct providers. The sdt and systrace providers are required in addition to fasttrap. On Oracle Linux, I loaded the DTrace modules like:

        # modprobe fasttrap
        # modprobe sdt
        # modprobe systrace
        # chmod 666 /dev/dtrace/helper

    Installing the DTrace Toolkit

    I downloaded DTraceToolkit-0.99.tar.gz and extracted it:

        $ tar -zxf DTraceToolkit-0.99.tar.gz

    The PHP scripts are in the Php directory and examples in the Examples directory.

    Installing Silex

    I downloaded the "fat" Silex .tgz file from the download page and extracted it:

        $ tar -zxf silex_fat.tgz

    I changed the demonstration silex/web/index.php so I could use the PHP development web server:

        <?php
        // web/index.php
        $filename = __DIR__.preg_replace('#(\?.*)$#', '', $_SERVER['REQUEST_URI']);
        if (php_sapi_name() === 'cli-server' && is_file($filename)) {
            return false;
        }
        require_once __DIR__.'/../vendor/autoload.php';

        $app = new Silex\Application();
        //$app['debug'] = true;

        $app->get('/hello', function() {
            return 'Hello!';
        });

        $app->run();
        ?>

    Running DTrace

    The php_syscolors.d script uses the -Z option to dtrace, so it can be started before PHP, i.e. when there are zero of the requested probes available to be traced. I ran DTrace like:

        # cd DTraceToolkit-0.99/Php
        # ./php_syscolors.d

    Next, I started the PHP developer web server in a second terminal:

        $ cd silex
        $ php -S localhost:8080 -t web web/index.php

    At this point, the web server is idle, waiting for requests. DTrace is idle, waiting for the probes in php_syscolors.d to be fired, at which time the action associated with each probe will run. I then loaded the demonstration page in a browser: http://localhost:8080/hello

    When the request was fulfilled and the simple output of "Hello" was displayed, I ^C'd php and dtrace in their terminals to stop them. DTrace output over a thousand lines long had been generated. Here is one snippet from when run() was invoked:

        C PID/TID DELTA(us) FILE:LINE TYPE -- NAME
        ...
        1 4765/4765 21 Application.php:487 func -> run
        1 4765/4765 29 ClassLoader.php:182 func -> loadClass
        1 4765/4765 17 ClassLoader.php:198 func -> findFile
        1 4765/4765 31 ":- syscall -> access
        1 4765/4765 26 ":- syscall <- access
        1 4765/4765 16 ClassLoader.php:198 func <- findFile
        1 4765/4765 25 ":- syscall -> newlstat
        1 4765/4765 15 ":- syscall <- newlstat
        1 4765/4765 13 ":- syscall -> newlstat
        1 4765/4765 13 ":- syscall <- newlstat
        1 4765/4765 22 ":- syscall -> newlstat
        1 4765/4765 14 ":- syscall <- newlstat
        1 4765/4765 15 ":- syscall -> newlstat
        1 4765/4765 60 ":- syscall <- newlstat
        1 4765/4765 13 ":- syscall -> newlstat
        1 4765/4765 13 ":- syscall <- newlstat
        1 4765/4765 20 ":- syscall -> open
        1 4765/4765 16 ":- syscall <- open
        1 4765/4765 26 ":- syscall -> newfstat
        1 4765/4765 12 ":- syscall <- newfstat
        1 4765/4765 17 ":- syscall -> newfstat
        1 4765/4765 12 ":- syscall <- newfstat
        1 4765/4765 12 ":- syscall -> newfstat
        1 4765/4765 12 ":- syscall <- newfstat
        1 4765/4765 20 ":- syscall -> mmap
        1 4765/4765 14 ":- syscall <- mmap
        1 4765/4765 3201 ":- syscall -> mmap
        1 4765/4765 27 ":- syscall <- mmap
        1 4765/4765 1233 ":- syscall -> munmap
        1 4765/4765 53 ":- syscall <- munmap
        1 4765/4765 15 ":- syscall -> close
        1 4765/4765 13 ":- syscall <- close
        1 4765/4765 34 Request.php:32 func -> main
        1 4765/4765 22 Request.php:32 func <- main
        1 4765/4765 31 ClassLoader.php:182 func <- loadClass
        1 4765/4765 33 Request.php:249 func -> createFromGlobals
        1 4765/4765 29 Request.php:198 func -> __construct
        1 4765/4765 24 Request.php:218 func -> initialize
        1 4765/4765 26 ClassLoader.php:182 func -> loadClass
        1 4765/4765 89 ClassLoader.php:198 func -> findFile
        1 4765/4765 43 ":- syscall -> access
        ...

    The output shows PHP functions being called and returning (and where they are located) and which system calls the PHP functions in turn invoked. The time each line took from the previous one is displayed in the third column. The first column is the CPU number. In this example, the process was always on CPU 1, so the output is naturally ordered without requiring post-processing, or the D script requiring to be modified to display a time stamp. On a terminal, the output of php_syscolors.d is color-coded according to whether each function is a PHP or system one, hence the file name.

    Summary

    With one tool, I was able to trace the interaction of a user application with the operating system. I was able to do this to an application running "live" in a web context. The DTrace Toolkit provides a very handy repository of DTrace information. Even though the PHP scripts were created in the time frame of the original PHP DTrace PECL extension, which only had PHP function entry and return probes, the scripts provide core examples for custom investigation and resolution scripts. You can easily adapt the ideas and create scripts using the other PHP static probes, which are listed in the PHP Manual. Because DTrace is "always on", you can take advantage of it to resolve development questions or fix production situations.

    Read the article

  • Is there an established or defined best practice for source control branching between development and production builds?

    - by Matthew Patrick Cashatt
    Thanks for looking. I struggled with how to phrase my question, so let me give an example in hopes of making clearer what I am after. I currently work on a dev team responsible for maintaining and adding features to a web application. We have a development server and we use source control (TFS). Each day everyone checks in their code, and when the code (running on the dev server) passes our QA/QC program, it goes to production. Recently, however, we had a bug in production which required an immediate fix. The problem was that several of us developers had code checked in that was not ready for production, so we had to either quickly complete and QA the code, or roll back everything, undo pending changes, etc. In other words, it was a mess. This made me wonder: is there an established design pattern that prevents this type of scenario? It seems like there must be some "textbook" answer to this, but I am unsure what that would be. Perhaps a development branch of the code and a "release-ready" or production branch of the code?
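
    The "release-ready branch" idea floated at the end is essentially the common release-branch pattern: development continues on the main line while each release is stabilized and hot-fixed on its own branch, with fixes merged back. A sketch of the shape, expressed with git commands only for brevity (branch names are illustrative; the same structure can be set up in TFS):

        git checkout -b release/2.3 master       # cut a release branch once the code passes QA
        # ...a production bug is found...
        git checkout -b hotfix/payment-rounding release/2.3
        # ...fix, test, commit...
        git checkout release/2.3 && git merge hotfix/payment-rounding   # ship from the release branch
        git checkout master      && git merge hotfix/payment-rounding   # keep development in sync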

    Read the article

  • Should my colleagues review each other's code from the source control system?

    - by Daniel Excinsky
    Hi everybody. So here's my story: one of my colleagues reviews all the code that gets committed to the revision control system. I'm not talking about an appropriate review of changes in the parts he is responsible for; he goes through the code file by file, line by line, every new file and every modified one. I feel like I'm being spied on! My view is that if code has already been committed to the version control system, you should at least trust it to be workable. My question is: maybe I'm just too paranoid, and the practice of reviewing each other's code is good? P.S.: We're a team of only three developers, and I fear that if there are more of us, this colleague just won't have time to review all the code we'll write.

    Read the article

  • How can I best manage making open source code releases from my company's confidential research code?

    - by DeveloperDon
    My company (let's call them Acme Technology) has a library of approximately one thousand source files that originally came from its Acme Labs research group, incubated in a development group for a couple years, and has more recently been provided to a handful of customers under non-disclosure. Acme is getting ready to release perhaps 75% of the code to the open source community. The other 25% would be released later, but for now, is either not ready for customer use or contains code related to future innovations they need to keep out of the hands of competitors. The code is presently formatted with #ifdefs that permit the same code base to work with the pre-production platforms that will be available to university researchers and a much wider range of commercial customers once it goes to open source, while at the same time being available for experimentation and prototyping and forward compatibility testing with the future platform. Keeping a single code base is considered essential for the economics (and sanity) of my group who would have a tough time maintaining two copies in parallel. Files in our current base look something like this: > // Copyright 2012 (C) Acme Technology, All Rights Reserved. > // Very large, often varied and restrictive copyright license in English and French, > // sometimes also embedded in make files and shell scripts with varied > // comment styles. > > > ... Usual header stuff... > > void initTechnologyLibrary() { > nuiInterface(on); > #ifdef UNDER_RESEARCH > holographicVisualization(on); > #endif > } And we would like to convert them to something like: > // GPL Copyright (C) Acme Technology Labs 2012, Some rights reserved. > // Acme appreciates your interest in its technology, please contact [email protected] > // for technical support, and www.acme.com/emergingTech for updates and RSS feed. > > ... Usual header stuff... > > void initTechnologyLibrary() { > nuiInterface(on); > } Is there a tool, parse library, or popular script that can replace the copyright and strip out not just #ifdefs, but variations like #if defined(UNDER_RESEARCH), etc.? The code is presently in Git and would likely be hosted somewhere that uses Git. Would there be a way to safely link repositories together so we can efficiently reintegrate our improvements with the open source versions? Advice about other pitfalls is welcome.
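
    On the #ifdef question specifically: this is the job the classic unifdef tool was written for, and it also understands expressions like #if defined(UNDER_RESEARCH); coan is a more featureful alternative. A hedged sketch (file names are illustrative; the copyright-header swap would be a separate sed or script pass):

        # Strip only the UNDER_RESEARCH code paths; all other preprocessor logic is left untouched.
        unifdef -UUNDER_RESEARCH -o initTechnologyLibrary_public.c initTechnologyLibrary.c

    unifdef reports through its exit status whether it changed anything, which makes it straightforward to script the pass over a whole tree and flag files that still contained research-only sections.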

    Read the article

  • how to integrate plastic scm with jira? [closed]

    - by bilal fazlani
    I am trying to migrate from VSS to Plastic SCM and want to use it with JIRA. I have gotten this far: http://i.stack.imgur.com/h1wSw.png. I tried referring to their help documentation, but that did not help. Does someone know how to link a new branch to an issue in JIRA? I tried giving the same name to the issue and the branch; that didn't work. If the issue key is "DEMO-7", what should the "Branch Prefix" and "Branch Name" be in Plastic SCM? I am sure I am missing something.

    Read the article

  • When to delete a branch in Git

    - by Jo-Herman Haugholt
    I have a script project I've been managing with Git. Besides two main branches, several minor branches have been introduced over time to cover minor features, tweaks or temporary changes. Some of these branches are nearing end-of-life, and I won't be updating them any more. What are the different philosophies for handling branches like this? Should they be removed, or left in the repository unmaintained? If I leave them, won't I end up with a cluttered repository?
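
    One relevant detail when weighing the options: deleting a branch in Git only removes the label, and commits that were merged stay reachable through the merge. A short sketch of the usual cleanup (the branch name is illustrative):

        git branch --merged master             # list branches whose work is already in master
        git branch -d old-feature              # delete the local branch; refuses if it isn't merged
        git push origin --delete old-feature   # remove it from the shared repository as well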

    Read the article
