Search Results

Search found 988 results on 40 pages for 'branching and merging'.

  • Linux LVM snapshot commit or revert?

    - by Shewfig
    Hi, I'm about to perform an experimental upgrade on my CentOS 5 server. If the upgrade fails, I want to be able to back out the changes to the filesystem. This scenario seems similar to the example in Section 3.8 of the LVM HOWTO for LVM2 read-write snapshots - but the example is rather lacking in actual how-to. 1) How would I commit the changes, merging them back into the original partition? 2) How would I revert the changes, restoring the filesystem back to its original state? Should I assume that I'll need to restart several services, if not outright reboot? 3) Is it possible to snapshot only certain directories on a partition, or is it a partition-wide operation? Thanks...
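
    For reference, a minimal sketch of both paths with LVM2 (the volume group and LV names are placeholders, and note that lvconvert --merge needs a newer LVM2/kernel than a stock CentOS 5 install may ship, so check whether your lvconvert supports it):

        # take a writable snapshot of the root LV before the upgrade
        lvcreate --size 5G --snapshot --name root_pre_upgrade /dev/vg0/root

        # "commit": the upgrade went fine, keep the origin as it now is and drop the snapshot
        lvremove /dev/vg0/root_pre_upgrade

        # "revert": merge the pre-upgrade snapshot back over the origin, discarding the upgrade
        lvconvert --merge /dev/vg0/root_pre_upgrade
        # merging an in-use root volume only completes on the next activation, so expect a reboot

    Snapshots are per logical volume, so there is no way to snapshot only certain directories of a partition.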

    Read the article

  • We're Subversion Geeks and we want to know the benefits of Mercurial

    - by Matt
    Having read "I'm a Subversion geek, why should I consider or not consider Mercurial or Git or any other DVCS?", I have a related follow-up question. I read that question, the recommended links and the videos, and I see the benefits, but I don't see the overall mind shift people are talking about. Our team consists of 8-10 developers who work on one large code base consisting of 60 projects. We use Subversion and have a main trunk. When a developer starts a new FogBugz case they create an SVN branch, do the work on the branch, and when they're done they merge back to the trunk. Occasionally they may stay on the branch for an extended time and merge the trunk into the branch to pick up its changes. When I watched Linus talk about people creating a branch and never doing it again, that's not us at all. We create probably 50-100 branches a week without issue. The biggest challenge is the merging, but we've gotten pretty good at that as well. I tend to merge by FogBugz case and check-in rather than the entire root of the branch. We never work remotely and we never make branches off of branches. If you're the only one working in that section of the code base then the merge to the trunk goes smoothly. If someone else has modified the same section of code then the merge can get messy and you might need to do some surgery. Conflicts are conflicts; I don't see how any system could get it right most of the time unless it was smart enough to understand the code. After creating a branch, the subsequent checkout of 60k+ files takes some time, but that would be an issue with any source control system we'd use. Is there some benefit of any DVCS that we're not seeing that would be of great help to us?
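
    For what it's worth, a rough sketch of the same branch-per-case workflow with local branches in Git (Mercurial looks much the same with bookmarks or named branches); the URL and case number are placeholders:

        git clone https://example.com/big-repo.git    # the 60k-file checkout happens once, not per branch
        git checkout -b case-4711 master              # branch creation is instant and purely local
        # ...work and commit as often as you like, entirely offline...
        git fetch origin
        git rebase origin/master                      # pick up trunk changes on the branch
        git checkout master
        git merge case-4711                           # merge back when the case is done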

    Read the article

  • Toon/cel shading with variable line width?

    - by Nick Wiggill
    I see a few broad approaches out there to doing cel shading: duplication and enlargement of the model with flipped normals (not an option for me); Sobel filter / fragment shader approaches to edge detection; stencil buffer approaches to edge detection; and geometry (or vertex) shader approaches that calculate face and edge normals. Am I correct in assuming the geometry-centric approach gives the greatest amount of control over lighting and line thickness, as well as, e.g., for terrain where you might see the silhouette line of a hill merging gradually into a plain? What if I didn't need per-pixel lighting on my terrain surfaces? (And I probably won't, as I plan to use cell-based vertex- or texture-map-based lighting/shadowing.) Would I then be better off sticking with the geometry-type approach, or going for a screen-space / fragment approach instead to keep things simpler? If so, how would I get the "inking" of hills within the mesh silhouette, rather than only the outline of the entire mesh (with no "ink" details inside that outline)? Lastly, is it possible to cheaply emulate the flipped-normals approach using a geometry shader? Is that exactly what the GS approaches do? What I want: varying line thickness with intrusive lines inside the silhouette... What I don't want...
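
    For the screen-space route, a rough sketch of the Sobel idea as a post-process fragment shader over the depth buffer (uniform names are illustrative). Because it keys off depth discontinuities, it also inks interior silhouettes such as a hill edge seen against a plain, not just the outer outline; varying the line width, however, would need an extra dilation pass or a distance-dependent threshold that this sketch does not show:

        uniform sampler2D depthTex;   // scene depth from a first pass
        uniform vec2 texelSize;       // 1.0 / screen resolution
        uniform float edgeThreshold;  // tune for line sensitivity

        varying vec2 vUV;

        float d(vec2 o) { return texture2D(depthTex, vUV + o * texelSize).r; }

        void main() {
            float gx = -d(vec2(-1.0,-1.0)) - 2.0*d(vec2(-1.0,0.0)) - d(vec2(-1.0,1.0))
                     +  d(vec2( 1.0,-1.0)) + 2.0*d(vec2( 1.0,0.0)) + d(vec2( 1.0,1.0));
            float gy = -d(vec2(-1.0,-1.0)) - 2.0*d(vec2(0.0,-1.0)) - d(vec2(1.0,-1.0))
                     +  d(vec2(-1.0, 1.0)) + 2.0*d(vec2(0.0, 1.0)) + d(vec2(1.0, 1.0));
            float edge = step(edgeThreshold, length(vec2(gx, gy)));
            gl_FragColor = vec4(vec3(1.0 - edge), 1.0);   // black "ink" where an edge was found
        }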

    Read the article

  • How to mail merge a hyperlink in Microsoft Word or Publisher 2010

    - by hjoelr
    I am trying to do an e-mail merge in Microsoft Publisher 2010 (which appears to handle mail merging like Microsoft Word), and I want the merged email address to be hyperlinked automatically in the resulting email. For example, one of the merge fields could be "EmailAddress", with an example address being [email protected]. In the document, I would want the merge field "EmailAddress" to display as the default text of a hyperlink and also to set the target of the hyperlink to "mailto:EmailAddress" (e.g. mailto:[email protected]). I can't figure out how to get Publisher 2010 to do that. I would think it's possible, though. Any help or pointers would be greatly appreciated!
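
    The usual Word trick (Publisher merge fields behave similarly) is to nest a MERGEFIELD inside a HYPERLINK field; every pair of braces below must be inserted with Ctrl+F9 rather than typed:

        { HYPERLINK "mailto:{ MERGEFIELD EmailAddress }" }

    A known caveat is that the displayed text and target of merged HYPERLINK fields can stick to the first record, so the merged output may need an Alt+F9 / F9 refresh or a small macro pass afterwards; treat this as a starting point rather than a guaranteed recipe.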

    Read the article

  • github team workflow - to fork or not?

    - by aporat
    We're a small team of web developers currently using Subversion, but soon we're making a switch to GitHub. I'm looking at different types of GitHub workflows, and we're not sure whether the whole forking concept in GitHub, one fork per developer, is such a good idea for us. If we use forks, I understand each developer will have his own private remote and local repositories. I'm worried it will make pushing changesets hard and too complex. Also, my biggest concern is that it will force each developer to have two remotes: origin (the remote fork) and upstream (used to "sync" changes from the main repository). I'm not sure it's such an easy way to do things. This is similar to the workflow explained here: https://github.com/usm-data-analysis/usm-data-analysis.github.com/wiki/Git-workflow If we don't use forks, we can probably get by fine with a central repo, creating a branch for each task we're working on and merging them into the development branch on the same repository. It means we won't be able to restrict merging of branches, and it might be a little messy to have many branches on the central repository. Any suggestions from teams who have tried both workflows?
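
    For comparison, a rough sketch of the two models in Git commands (organisation, repository and branch names are made up):

        # shared-repository model: everyone pushes topic branches to one origin
        git clone git@github.com:ourorg/ourapp.git
        git checkout -b feature/ticket-123
        # ...commit work...
        git push -u origin feature/ticket-123        # then open a pull request from this branch
        # after review, merge into the development branch and delete the topic branch

        # fork model: the only extra moving part is the second remote
        git remote add upstream git@github.com:ourorg/ourapp.git
        git fetch upstream
        git merge upstream/master                     # keep the fork's master in sync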

    Read the article

  • Single Exchange 2007 server - two AD domains

    - by TheCleaner
    CURRENT: single domain, single Exchange 2007 NEW: two domains, single Exchange 2007 Can this be done? Details: Current setup is a single W2k3 domain with a single Exchange 2007 server. We are merging with another company that currently hosts their email with their ISP via POP3. We'd like to start hosting their email on our Exchange server using their existing domain SMTP addresses. They don't have an AD domain at all at the moment. Recommendations? Can I do this with a trust between the 2 domains? Requirements: They can't have multiple SMTP addresses on both domains...such as I've seen with articles pointing to "hosting multiple domains". I want companyA to have the same account settings they've always had...companyB to have the same SMTP address they've had and not an additional one on the current companyA Exchange domain. They should be able to collaborate (calendar, contacts, GALs) but should still be distinguishable based on which company they "work for". Please help...thanks!
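
    Purely as an illustration of the mail-hosting side (it does not settle the question of whether the companyB users live in the existing forest or a trusted one): in the Exchange 2007 Management Shell, adding a second authoritative domain with its own scoped address policy looks roughly like the sketch below, assuming the companyB recipients can be identified by their Company attribute:

        New-AcceptedDomain -Name "CompanyB" -DomainName "companyb.com" -DomainType Authoritative

        New-EmailAddressPolicy -Name "CompanyB addresses" `
            -IncludedRecipients MailboxUsers `
            -ConditionalCompany "CompanyB" `
            -EnabledEmailAddressTemplates "SMTP:%m@companyb.com" `
            -Priority 1

        Update-EmailAddressPolicy -Identity "CompanyB addresses"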

    Read the article

  • Two different domains as one user session in Google Analytics

    - by Mathew Foscarini
    I have two websites that are run as the same service. Each domain offers articles from a different market. At the top of each page the two domains are shown as menu options; if a user clicks one, they can switch to the other domain. See here: http://www.cgtag.com Each domain has a different Google Analytics account, and when a user switches domains Google counts this as a new session, listing the other domain as the "referral" for that new session. When the user switches back to the first domain, Google counts this as a returning visitor. This is messing up my reports, showing returning-visitor numbers that are higher than reality. It's also inflating hits on landing pages when the user switches, and listing the other domain as a referral site. I've found tips on how to list two domains as one website, but that results in merging the data. I want to keep the two domains separate so that I can track each one's performance, but I don't want to count domain changes as new sessions. Maybe something like treating the two domains as subdomains?
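
    With the classic ga.js tracker that was current at the time, cross-domain linking looks roughly like the sketch below (the tracker ID is a placeholder); whether it behaves the way you want with two separate accounts is worth verifying on a test profile first:

        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-XXXXXXX-1']);
        _gaq.push(['_setDomainName', 'none']);       // two unrelated top-level domains
        _gaq.push(['_setAllowLinker', true]);
        _gaq.push(['_trackPageview']);

        // on the menu link that switches domains, carry the visitor cookies across:
        // <a href="http://www.other-domain.com/"
        //    onclick="_gaq.push(['_link', this.href]); return false;">other site</a>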

    Read the article

  • iSCSI RAID1 on two servers, fail scenario

    - by Franz Kafka
    Hello, a simple question: imagine I have two servers, and each server has two disks in RAID1. Now I want to merge the two arrays over iSCSI into one RAID1 volume. Two questions: Can I do the merging of the 4 disks in one go? I can't imagine how: first I will have to install the OS, and by then the RAID controller is already set up for RAID1. If a whole server fails, would the other server continue working without any problems? Does iSCSI notice that the other server is missing and treat this as if those two disks were broken? When the server comes back online, is the data resynced, as if I had installed new disks into an array? Is that the right way to picture it? Thanks a lot.

    Read the article

  • GLSL billboard move center of rotation

    - by Jacob Kofoed
    I have successfully set up a billboard shader that works: it can take in a quad and rotate it so it always points toward the screen. I am using this vertex shader: void main(){ vec4 tmpPos = (MVP * bufferMatrix * vec4(0.0, 0.0, 0.0, 1.0)) + (MV * vec4( vertexPosition.x * 1.0 * bufferMatrix[0][0], vertexPosition.y * 1.0 * bufferMatrix[1][1], vertexPosition.z * 1.0 * bufferMatrix[2][2], 0.0) ); UV = UVOffset + vertexUV * UVScale; gl_Position = tmpPos; } bufferMatrix is the model matrix; it is an attribute to support instanced drawing. The problem is best explained through pictures: This is the start position of the camera: And this is the position, looking in from 45 degrees to the right: Obviously, as each character is its own quad, the shader rotates each one around its own center towards the camera. What I in fact want is for them to rotate around a shared center; how would I do this? What I have been trying thus far is: mat4 translation = mat4(1.0); translation = glm::translate(translation, vec3(pos)*1.f * 2.f); translation = glm::scale(translation, vec3(scale, 1.f)); translation = glm::translate(translation, vec3(anchorPoint - pos) / vec3(scale, 1.f)); where translation is the bufferMatrix sent to the shader. What I am trying to do is offset the center, but this might not be possible with a single matrix...? I am interested in a solution that doesn't require CPU calculations each frame, but rather sets things up once and then lets the shader do the billboard rotation. I realize there are many different solutions, like merging all the quads together, but I would first like to know if the approach of offsetting the center is possible. If it all seems a bit confusing, it's because I'm a little confused myself.
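
    One way to rotate the whole string around one pivot without per-frame CPU work is to billboard the shared anchor rather than each quad: project the anchor once, then apply each quad's offset from that anchor (plus the corner offset) un-rotated in view space. A sketch along those lines; anchorPoint (the shared centre in world space) and P (the projection matrix on its own) are assumed extra inputs that the original shader does not have:

        void main() {
            vec4 pivotView = MV * vec4(anchorPoint, 1.0);                 // shared pivot in view space

            vec3 quadCentre = (bufferMatrix * vec4(0.0, 0.0, 0.0, 1.0)).xyz;
            vec3 scale = vec3(bufferMatrix[0][0], bufferMatrix[1][1], bufferMatrix[2][2]);

            // offset of this corner from the pivot, kept un-rotated so the whole
            // string stays in a single camera-facing plane
            vec3 offset = (quadCentre - anchorPoint) + vertexPosition * scale;

            UV = UVOffset + vertexUV * UVScale;
            gl_Position = P * (pivotView + vec4(offset, 0.0));
        }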

    Read the article

  • Is it correct to fix bugs without adding new features when releasing software for system testing?

    - by Pratik
    This question is for experienced testers or test leads. This is a scenario from a software project: say the dev team has completed the first iteration of 10 features and released it to system testing. The test team has created test cases for these 10 features and estimated 5 days for testing. The dev team of course cannot sit idle for 5 days, so they start creating 10 new features for the next iteration. During this time the test team found defects and raised some bugs. The bugs are prioritised and some of them have to be fixed before the next iteration. The catch is that the test team would not accept a new release with any new features or changes to existing features until all those bugs are fixed. The test team's argument is: how can we guarantee a stable release for testing if we also introduce new features along with the bug fixes? They also cannot run regression tests of all their test cases each iteration. Apparently this is proper testing process according to the ISTQB. This means the dev team has to create a branch of code solely for bug fixing and another branch where they continue development. There is more merging overhead, especially with refactoring and architectural changes. Do you agree that this is a common testing principle? Is the test team's concern valid? Have you encountered this in practice in your projects?
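
    If the team does end up keeping a bug-fix line next to ongoing development, the branching itself is cheap in any modern VCS and the overhead is mostly merge discipline; a rough sketch in Git (branch names are illustrative):

        git checkout -b release/iteration-1 master   # the exact build handed to system test
        git checkout master                          # development of the next 10 features continues here
        # a defect raised by system test is fixed on the release branch...
        git checkout release/iteration-1
        git commit -am "Fix defect #42"
        # ...and merged back so the fix is not lost in the next iteration
        git checkout master
        git merge release/iteration-1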

    Read the article

  • Colour table cells in Microsoft Word after mail merge

    - by James
    I have an Excel spreadsheet of student data. For each of 30 topics, students are traffic lighted R, A or G (for red, amber, green) in the spreadsheet. I am mail merging individual result print-outs in Word 2010. However, rather than printing the letter R/A/G next to each topic, I would rather change the background colour of the cell to that colour. How can I do this? Is there an option with merge fields or can it be done with a macro (please provide sample code if so - I don't have experience with macros!)
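
    If a macro pass over the merged output is acceptable, a rough, untested VBA sketch (Word object model) that colours cells according to their R/A/G letter might look like the following, assuming each letter ends up alone in its own cell:

        Sub ColourRagCells()
            Dim t As Table, c As Cell, txt As String
            For Each t In ActiveDocument.Tables
                For Each c In t.Range.Cells
                    ' cell text ends with a CR + cell marker, strip it before comparing
                    txt = UCase(Trim(Replace(c.Range.Text, Chr(13) & Chr(7), "")))
                    If txt = "R" Then c.Shading.BackgroundPatternColor = wdColorRed
                    If txt = "A" Then c.Shading.BackgroundPatternColor = wdColorGold
                    If txt = "G" Then c.Shading.BackgroundPatternColor = wdColorBrightGreen
                Next c
            Next t
        End Sub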

    Read the article

  • Using "mv" or "ditto" to merge folders in OS X

    - by Sandro Dzneladze
    Used to the Windows way of doing things, I just found out OS X has no merge function: moving means replacing the folder. While this does make sense, I miss merging! I have two WordPress directories: folder 1 contains the default source and folder 2 contains my working version with plugins, a custom theme, etc. I want to see the difference between these two, so I'm putting them in SVN. Folder 1 is already checked in; now, in theory, I should simply merge the contents of 2 into 1, replacing everything with the contents of 2 but leaving the hidden SVN files untouched. Unfortunately OS X, when moving, replaces the folder, so my SVN client goes crazy and doesn't understand the folder structure anymore. So, I believe my options are mv and ditto, but which one would you use in my situation and how? sudo mv wordpress /Documents/svn/wwwholiday/trunk/wordpress I want mv to overwrite everything it finds, but leave alone whatever is already inside folder 1 and has no duplicate in folder 2.
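
    For what it's worth, ditto merges the source tree into the destination rather than replacing it, so files that exist only in the destination (such as the hidden .svn directories) are left alone; rsync does the same and can preview the changes first. A sketch using the paths from the question:

        # ditto: copy folder 2 over folder 1, overwriting duplicates, keeping everything else
        sudo ditto wordpress /Documents/svn/wwwholiday/trunk/wordpress

        # rsync alternative: dry-run first to see what would change, then run for real
        rsync -av --dry-run wordpress/ /Documents/svn/wwwholiday/trunk/wordpress/
        rsync -av wordpress/ /Documents/svn/wwwholiday/trunk/wordpress/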

    Read the article

  • Steps to send patch to Launchpad

    - by Alois Mahdal
    With a Git/GitHub background and knowing very little about the Bazaar VCS, I would like to occasionally report a bug to Launchpad and even send a patch. I'd like to do it in a "proper" way so that it's ready for merging or improvement while not getting in the way. I can't seem to find a decent, simple how-to suited to my needs. So, what I have done so far: I have created a Launchpad account, reported the bug, installed Bazaar and set up SSH keys, etc. Now, if this were GitHub, I'd fork the repo, clone the forked repo, create a sanely named branch and do the work, commit + push, and create a pull request using the GitHub web UI. But it's not GitHub, and both the LP and Bazaar architectures seem quite different from their GitHub/Git counterparts. So could a kind soul save me from drowning in tons of documents and compile a straightforward sequence of steps, mainly for the second part? Possibly including the relevant CLI commands where they are needed? Edit: It seems that I should clarify whether I'm asking specifically about Ubuntu packages (whatever that means) or Launchpad packages. I don't really care much about the distinction between Ubuntu packages and non-Ubuntu packages. Any software could be in Ubuntu today and out of it tomorrow, or vice versa; the development is what matters much more than the distribution. So I was assuming that, even though not every single package distributed in Ubuntu is hosted on Launchpad, an "official" or "default" workflow for Launchpad exists (well, if all the devs can agree on using Bazaar, why couldn't most of them agree on a patching workflow?), so I'm asking about the Launchpad way, not the Ubuntu way. And I chose Ask Ubuntu because the intersection is vast, so I guess it's pretty on topic here.
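
    For a project whose trunk is hosted in Bazaar on Launchpad, the usual merge-proposal flow looks roughly like this (project name, branch name and bug number are placeholders):

        bzr branch lp:theproject fix-123456          # grab trunk, like a clone
        cd fix-123456
        # ...hack, then...
        bzr commit -m "Fix the frobnication (LP: #123456)"
        bzr push lp:~yourid/theproject/fix-123456    # push to your own branch on Launchpad
        # finally, open the branch page on launchpad.net, propose it for merging into trunk,
        # and link it to the bug you reported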

    Read the article

  • How to change internal page numbers in the meta data of a PDF?

    - by YGA
    I have a pdf document I created through non-Acrobat means (printing to pdf, then merging a bunch of pdfs), but I'd like to manually change the page numbers (i.e. the first several pages are simply title pages, the page that is labeled "page 1" is really the 7th sheet of the pdf). What's the simplest (and ideally, free) way to do this? To be clear, I am not trying to change the numbers on the pages themselves, but the page numbers in the "metadata" that the pdf stores (the pages themselves are already numbered correctly; I just want "go to page 1" to go to the page labeled 1, which could be sheet 7). For what it's worth, I'm on Windows, though I have access to Macs as well.
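
    One free route is to edit the /PageLabels number tree in the PDF catalogue; a rough sketch with the pikepdf Python library (an assumption; any tool that can write the PageLabels entry would do), where the first six sheets get roman numerals and sheet 7 becomes "page 1":

        import pikepdf
        from pikepdf import Dictionary, Name

        with pikepdf.open("merged.pdf") as pdf:
            pdf.Root.PageLabels = Dictionary(
                Nums=[
                    0, Dictionary(S=Name("/r")),        # sheets 1-6: i, ii, iii, ...
                    6, Dictionary(S=Name("/D"), St=1),  # sheet 7 onwards: 1, 2, 3, ...
                ]
            )
            pdf.save("merged-relabelled.pdf")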

    Read the article

  • Need advice concerning Feature Based Development when knowledge DB is involved

    - by voroninp
    We develop a BackOffice application which is used to edit our knowledge DB. Now our main product's development team is shifting to feature-based development, and we need to support several DBs whose data schemas are not identical (the schema changes slightly from DB to DB). The information from the knowledge DB is extracted by a script and then distributed to the clients. We also need to support merging these DBs. We are now analyzing the pros and cons of different approaches. We are discussing this one: one working DB (WDB) with one DB for each feature branch (FDB). The approved data is moved from the WDB to the FDB, so we need to support only one script for each branch; this script will extract data from the corresponding FDB. Nevertheless, we would have to code the differences between the FDBs and the WDB manually. Maybe some automatic mapping tools exist? I also wish to know whether classic solutions to problems like this already exist. Can anyone share best practices for this case?

    Read the article

  • EDQ Technical Enablement for OPN (Prague - June 17-19)

    - by milomir.vojvodic
    Oracle Enterprise Data Quality (EDQ) Technical Enablement and Partner Training: Trusted Data for Your Enterprise Applications
    Oracle Enterprise Data Quality helps organizations achieve maximum value from their business-critical applications by delivering fit-for-purpose data. These products also enable individuals and collaborative teams to quickly and easily identify and resolve any problems in underlying data. With Oracle Enterprise Data Quality, customers can identify new opportunities, improve operational efficiency, and more efficiently comply with industry or governmental regulation. Oracle Enterprise Data Quality is designed to serve as a very channel friendly platform to OPN. This means that pre-built extensions, components and even complete business solutions can readily be built and shared. This allows our customers/partners to be highly efficient in how they deploy custom business solutions, but also allows our partners to develop specialized components, domain knowledge and even complete business solutions.
    Training is suitable for: · Database administrators · Architects · Technical staff
    Objectives of the training: After completing this course, participants should: · Have an understanding of the core functionality of EDQ across profiling, auditing, transforming, parsing and matching data · Be able to describe some of the key capabilities and benefits delivered by EDQ · Be able to create and run standalone EDQ processes and jobs · Be ready to start working with data from customers and (with practice) be able to demonstrate EDQ to customers
    Agenda:
    17th June, Fundamentals For Demoing (Profile, Audit, Transform and More): Profiling · Auditing · Transforming · Writing and exporting data · Jobs and scheduling · Publishing, packaging and copying EDQ processes · Introduction to the Customer Data Extension Pack · Realtime Processing via Web Services · The Server Console · Run Profiles · Data Interfaces · Sampling · Publishing metrics to the Dashboard · Users and security
    18th June, Matching: Matching overview · Basic matching configuration · Matching rule hierarchies · Clustering · Merging · Reviewing possible matches · Outputting Match Data · Case study
    19th June, Address Verification and Parsing: Address Verification Overview · Configuration · Accuracy Flags · Parsing Overview · Phrase profiling · Tailoring a CDEP Parser · Base Tokenization · Classification · Reclassification · Selection · Resolution
    Register Here. Don’t miss this FREE event. Space is limited. Oracle University, V Parku 2294/4, 148 00 Praha 4, 17.6.–19.6.2014, 09:00–17:30.

    Read the article

  • Why not commit unresolved changes?

    - by Explosion Pills
    In a traditional VCS, I can understand why you would not commit unresolved files because you could break the build. However, I don't understand why you shouldn't commit unresolved files in a DVCS (some of them will actually prevent you from committing the files). Instead, I think that your repository should be locked from pushing and pulling, but not committing. Being able to commit during the merging process has several advantages (as I see it): The actual merge changes are in history. If the merge was very large, you could make periodic commits. If you made a mistake, it would be much easier to roll back (without having to redo the entire merge). The files could remain flagged as unresolved until they were marked as resolved. This would prevent pushing/pulling. You could also potentially have a set of changesets act as the merge instead of just a single one. This would allow you to still use tools such as git rerere. So why is committing with unresolved files frowned upon/prevented? Is there any reason other than tradition?

    Read the article

  • I'm a Subversion geek, why should I consider or not consider Mercurial or Git or any other DVCS?

    - by user2567
    I'm trying to understand the benefits of distributed version control systems (DVCS). I found Subversion Re-education and this article by Martin Fowler very useful. Mercurial and other DVCSs promote a new way of working on code, with changesets and local commits; it prevents merge hell and other collaboration issues. We are not affected by this, as I practice continuous integration, and working alone in a private branch is not an option unless we are experimenting. We use a branch for every major version, in which we fix bugs merged from the trunk. Mercurial allows you to have lieutenants; I understand this can be useful for very large projects like Linux, but I don't see the value in small and highly collaborative teams (5 to 7 people). Mercurial is faster, takes less disk space, and a full local copy allows faster log and diff operations; I'm not concerned by this either, as I didn't notice speed or space problems with SVN, even with the very large projects I'm working on. I'm seeking your personal experiences and/or opinions from former SVN geeks, especially regarding the changesets concept and the overall performance boost you measured. UPDATE (12th Jan): I'm now convinced that it's worth a try. UPDATE (12th Jun): I kissed Mercurial and I liked it. The taste of his cherry local commits. I kissed Mercurial just to try it. I hope my SVN Server don't mind it. It felt so wrong. It felt so right. Don't mean I'm in love tonight. FINAL UPDATE (29th Jul): I had the privilege of reviewing Eric Sink's next book, called Version Control by Example. He finished convincing me. I'll go for Mercurial.

    Read the article

  • .htaccess redirect not preserving http_referer header

    - by CodeToaster
    We're merging with another company, and we want to redirect content from their (Apache) website to our (IIS) site. When traffic arrives at our site, we inspect the HTTP_REFERER, and if the visitor was just redirected from the company's site that we just merged with, they'll be presented with a "splash" page announcing the merger. I've added the line... Redirect / http://www.oursite.com/ ...to their .htaccess, which works fine, except that when the browser is redirected it doesn't send the HTTP_REFERER header. I've tried redirecting with redirect codes 301, 302 and 307 (the default, I believe, is 302) and all have the same effect (redirects fine, but no HTTP_REFERER). Can anyone provide some insight into why HTTP_REFERER wouldn't be included?
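
    Referer behaviour across redirects varies between browsers, so a more robust approach is to carry the information in the URL rather than relying on the header: replace the Redirect line with a rewrite that tags the request (the parameter name mergedfrom is made up here) and have the IIS side check for that parameter instead of HTTP_REFERER. A sketch for their .htaccess:

        RewriteEngine On
        RewriteRule ^(.*)$ http://www.oursite.com/$1?mergedfrom=oldsite [R=301,L,QSA]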

    Read the article

  • Do you think we will ever settle on a "standard" platform? [closed]

    - by GazTheDestroyer
    The recent explosion of phone platforms has depressed me (slightly), and made me wonder whether we will ever reach any kind of standard for presentation. I don't mean language or IDE. Different languages have different strengths and I can see that there may always be a need for disparity, although I do note that languages are merging somewhat in functionality, with traditional imperative languages like C++ now supporting things like lambdas. What I'm really talking about is a common presentation mechanism. Before smartphones and tablets came along, the web seemed to be finally becoming a reasonable platform for presenting an application that was globally accessible, not just geographically, but by platform too. Sure there are still (sometimes infuriating) implementation differences and quirks, but if you wrote a decent site you knew it could be accessed on anything from a PC to a phone to a C64 running the right software. "Write Once Run Anywhere" seemed to finally be becoming a reality. However, in the last few years we've seen an explosion of mobile operating systems, and the ubiquitous "app". A good site is no longer enough; you need a native "app", and of course we have a sudden massive disparity in OS, language, and APIs needed to write them as each battles for supremacy. It's kind of weird how the cycle of popularity goes. Mainframes with terminals - thin client. PC - thick client. Web browser - thin client. Phone app - thick(ish) client. I just wonder if you think there will ever be a global standard for clients, or whether the "shiny and different" cycle will always continue along with the battle of the tech du jour.

    Read the article

  • How can I test linkable/executable files that require re-hosting or retargeting?

    - by hagubear
    Due to data protection, I cannot discuss fine details of the work itself, so apologies. PROBLEM CASE: Sometimes my software projects require merging/integration with third-party (customer or other suppliers') software. This software is often delivered as linkable executables or object code (which requires that my source code be retargeted and linked with it). When I get the executables or object code, I cannot validate their operation fully without integrating them with my system. My initial idea is that executables are not meant to be unit tested; they are meant to be linkable with another system, but what is the guarantee that post-linkage and integration behaviour will be okay? There is also insufficient documentation available (from the customer) to indicate how to go about integrating the executables or object files. I know this is a philosophical question, but apparently not enough research could be found at this moment to arrive at a solution. I was hoping that people could help point me in the right direction by suggesting approaches. To start, I have found out that avionics OEM software is often rehosted and retargeted by third parties, e.g. simulator makers. I wonder how they test it. Surely the source code will not be supplied, due to IPR regulations. UPDATE: I have received reasonable and very useful suggestions regarding this area. My current struggle has shifted to testing 3rd-party OBJECT code that needs to be linked with my own source code (retargeted) on my host machine. How can I even test object code? Surely I need to link it first to even think about doing anything. Is it the post-link behaviour that needs to be determined and scripted (using Perl, Tcl, etc.) so that inputs and outputs can be verified? No clue!! :( thanks,

    Read the article

  • Apache configuration file visualization/testing

    - by Matt Holgate
    Is there a tool available (or a debug mode built into Apache) that will allow me to interactively test and explain an Apache configuration for a given request? In particular, I'd like to be able to see which directives will apply when requesting a specific URL. For example, the output for the URL http://myserver.com/foo/bar/bar.html might look something like: Allow from 192.168.0.3 <-- From <Location /foo/bar> in myserver.com vhost Require valid-user <-- From <Directory /var/www/foo> in global configuration Satisfy any <-- From <Files bar.html> in global configuration [Background: why do I want this? The Apache merging rules for configuration directives are quite complex to get right. It would be great to have a tool which allows you to check that your rules are doing exactly what you want, and it would be a good learning tool.] If there isn't such a tool, is there a debug option in Apache that will log such information for each incoming request?

    Read the article

  • Enable [command] key to register as something other than just [ctrl]?

    - by gojomo
    I'm running 10.04 LTS inside VMware Fusion on a Mac. The [command] key (aka [windows] on many keyboards) is almost always behaving as if it were [ctrl], even though I haven't done anything explicit to request that behavior. In fact, in System > Preferences > Keyboard > Layouts > Options > Alt/Win key behavior, 'default' is chosen (rather than the 'Control is mapped to Win keys' option). However, choosing other options there does not seem to change the handling of [command], at least not as tested in the System > Preferences > Keyboard Shortcuts app. (No matter what I've tried, [command]-x is always detected as [ctrl]-x in that app.) I've tried: various options under System > Preferences > Keyboard > Layouts > Options > Alt/Win key behavior; toggling the VMware Fusion > Preferences > Keyboard & Mouse > Key Mappings setup which claims to map '[command]' to '[windows]', restarting the VM in each position; and the xmodmap lines suggested at https://help.ubuntu.com/community/MappingWindowsKey And yet, it's clear that not all Ubuntu apps are merging [ctrl] and [command], because in 'Terminal', [shift]-[ctrl]-c will Copy, but [shift]-[command]-c will not. If the [command]/[windows] key were recognized as anything else ('Super', 'Meta', 'Hyper'? I don't care as long as it's not 'Control'), then I could achieve my real goal (which happens to be enabling CMD-based cut/copy/paste in PyCharm, while leaving CTRL-X etc. available for emacs-like bindings). I think any solution which manages to make [command]-x appear as something other than [ctrl]-x in Preferences > Keyboard Shortcuts will probably do the trick.
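
    Before remapping anything, it is worth confirming what the guest actually receives; a sketch (keycode 133 is only an example, substitute whatever xev reports):

        xev           # press the key and note the keycode and keysym reported
        xmodmap -pm   # show which modifier (control, mod4, ...) that keysym is attached to

        # if the key is arriving as a Control keysym, remap its keycode to Super and
        # register it as the mod4 modifier
        xmodmap -e 'keycode 133 = Super_L'
        xmodmap -e 'add mod4 = Super_L'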

    Read the article
