Search Results

Search found 2805 results on 113 pages for 'automated refactoring'.

Page 61 of 113

  • Hibernate - moving annotations from property (method) level to field level

    - by kan
    How do I generate Hibernate domain classes from tables with annotations at the field level? I used the Hibernate Tools project and generated domain classes from the tables in the database. The generated classes have annotations on the getter methods rather than at the field level. Kindly advise a way to generate domain classes that have the fields annotated. Is there any refactoring facility available in Eclipse, IDEA, etc. to move the annotations from method level to field level? I appreciate your help and time.
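
    For reference, the field-level form being asked about looks like the sketch below under JPA/Hibernate Annotations (a hedged example; the entity and column names are hypothetical). Access type follows annotation placement, so moving @Id and @Column onto the fields - or forcing it with JPA 2.0's @Access - gives field access:

        import javax.persistence.*;

        @Entity
        @Access(AccessType.FIELD) // optional: force field access explicitly
        @Table(name = "CUSTOMER") // hypothetical table
        public class Customer {

            @Id
            @GeneratedValue(strategy = GenerationType.IDENTITY)
            @Column(name = "CUSTOMER_ID")
            private Long id;

            @Column(name = "NAME", length = 100)
            private String name;

            // Plain accessors carry no mapping annotations.
            public Long getId() { return id; }
            public String getName() { return name; }
            public void setName(String name) { this.name = name; }
        }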

  • What's the best way to change the namespace of a highly referenced class?

    - by vanslly
    I am attempting to move a highly referenced class from one namespace to another. Simply moving the file into the new project, which has a different root namespace, results in over 1100 errors throughout my solution. Some references to the class involve fully qualified namespace references and others involve importing the namespace. I have tried using a refactoring tool (Refactor Pro) to rename the namespace, in the hope that all references to the class would change, but this resulted in the aforementioned problem. Does anyone have ideas on how to tackle this challenge without needing to drill into every file manually and change the fully qualified namespace, or import the new one if it doesn't exist already? Thanks.
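
    One transitional trick worth weighing (a hedged sketch, not from the question; Widget and both namespaces are hypothetical) is to leave an obsolete stub in the old namespace that derives from the moved class, so both fully qualified and imported references keep compiling while you migrate them in batches. Note this only works for non-sealed, non-static classes, and any non-default constructors would need forwarding:

        namespace Old.Namespace
        {
            // Temporary bridge to the relocated class; delete once all
            // references have been updated to New.Namespace.Widget.
            [System.Obsolete("Moved to New.Namespace.Widget")]
            public class Widget : New.Namespace.Widget { }
        }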

  • What are the best tools for SQL Server version control

    - by Mendy
    After reading this post, and the suggestion to use Team Edition for Database Professionals, I want to know: is there any equivalent to this for SQL Server 2008 / Visual Studio 2010 Ultimate? I'm looking for a tool that does all the things Jeff mentions in his article:

        - Create test data.
        - Schema comparison.
        - Data comparison.
        - Database unit testing.
        - Refactoring.
        - An integrated T-SQL editor, a first-class language construct in the IDE, just like C# and VB.NET.

  • LINQ to SQL repository - caching data

    - by creativeincode
    I have built my first MVC solution and used the repository pattern for retrieving/inserting/updating my database. I am now in the process of refactoring and I've noticed that a lot of (in fact all) the methods within my repository hit the database every time. This seems overkill. What I'd ideally like is to 'cache' the main data object, e.g. 'GetAllAdverts', from the database and to then query against this cached object for things like 'FindAdvert(id)', 'AddAdvert()', 'DeleteAdvert()', etc. I'd also need to consider updating/deleting/adding records to both this cached object and the database. What is the best approach for something like this? My knowledge of this type of thing is minimal and I'm really looking for advice/guidance/a tutorial to point me in the right direction. Thanks in advance.
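
    One way to shape this (a hedged sketch; IAdvertRepository, Advert and the five-minute expiry are hypothetical) is a read-through wrapper that caches the full list with .NET 4's System.Runtime.Caching and invalidates it on writes:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Runtime.Caching;

        public class CachedAdvertRepository
        {
            private readonly IAdvertRepository inner; // the existing LINQ to SQL repository
            private readonly MemoryCache cache = MemoryCache.Default;

            public CachedAdvertRepository(IAdvertRepository inner)
            {
                this.inner = inner;
            }

            private List<Advert> AllAdverts
            {
                get
                {
                    var adverts = cache.Get("AllAdverts") as List<Advert>;
                    if (adverts == null)
                    {
                        adverts = inner.GetAllAdverts().ToList(); // single database hit
                        cache.Set("AllAdverts", adverts, DateTimeOffset.Now.AddMinutes(5));
                    }
                    return adverts;
                }
            }

            public Advert FindAdvert(int id)
            {
                return AllAdverts.FirstOrDefault(a => a.Id == id);
            }

            public void AddAdvert(Advert advert)
            {
                inner.AddAdvert(advert);    // write through to the database...
                cache.Remove("AllAdverts"); // ...then invalidate the cached list
            }
        }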

  • When to use a module, and when to use a class

    - by Matt Briggs
    I am currently working through Gregory Brown's Ruby Best Practices book. Early on, he talks about refactoring some functionality from helper methods on a related class into methods on a module, and then having the module extend self. I hadn't seen that before; after a quick google, I found out that extend self on a module lets methods defined on the module see each other, which makes sense. Now, my question is: when would you do something like this

        module StyleParser
          extend self

          def process(text)
            ...
          end

          def style_tag?(text)
            ...
          end
        end

    and then refer to it in tests with

        @parser = Prawn::Document::Text::StyleParser

    as opposed to just using a class with some class methods on it? Is it so that you can use it as a mixin? Or are there other reasons I'm not seeing?
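
    For what it's worth, the practical difference shows in a small hedged sketch (the method body and the Renderer class are hypothetical): extend self lets the same methods be called directly on the module or mixed into a class, which a class with class methods cannot offer.

        module StyleParser
          extend self

          def style_tag?(text)
            text.start_with?("<")
          end
        end

        # Called directly, like class methods:
        StyleParser.style_tag?("<b>")    # => true

        # Or mixed into a class as instance methods:
        class Renderer
          include StyleParser
        end
        Renderer.new.style_tag?("plain") # => false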

  • Building an OpenStack Cloud for Solaris Engineering, Part 1

    - by Dave Miner
    One of the signature features of the recently-released Solaris 11.2 is the OpenStack cloud computing platform. Over on the Solaris OpenStack blog the development team is publishing lots of details about our version of OpenStack Havana as well as some tips on specific features, and I highly recommend reading those to get a feel for how we've leveraged Solaris's features to build a top-notch cloud platform. In this and some subsequent posts I'm going to look at it from a different perspective, which is that of the enterprise administrator deploying an OpenStack cloud. But this won't be just a theoretical perspective: I've spent the past several months putting together a deployment of OpenStack for use by the Solaris engineering organization, and now that it's in production we'll share how we built it and what we've learned so far.

    In the Solaris engineering organization we've long had dedicated lab systems dispersed among our various sites and a home-grown reservation tool for developers to reserve those systems; various teams also have private systems for specific testing purposes. But as a developer, it can still be difficult to find systems you need, especially since most Solaris changes require testing on both SPARC and x86 systems before they can be integrated. We've added virtual resources over the years as well in the form of LDOMs and zones (both traditional non-global zones and the new kernel zones). Fundamentally, though, these were all still deployed in the same model: our overworked lab administrators set up pre-configured resources and we then reserve them. Sounds like pretty much every traditional IT shop, right? Which means that there's a lot of opportunity for efficiencies from greater use of virtualization and the self-service style of cloud computing. As we were well into development of OpenStack on Solaris, I was recruited to figure out how we could deploy it to provide both more (and more efficient) development and test resources for the organization and a test environment for Solaris OpenStack.

    At this point, let's acknowledge one fact: deploying OpenStack is hard. It's a very complex piece of software that makes use of sophisticated networking features and runs as a ton of service daemons with myriad configuration files. The web UI, Horizon, often doesn't do a good job of providing detailed errors. Even the command-line clients are not as transparent as you'd like, though at least you can turn on verbose and debug messaging and often get some clues as to what to look for, though it helps if you're good at reading JSON structure dumps. I'd already learned all of this in doing a single-system Grizzly-on-Linux deployment for the development team to reference when they were getting started, so I at least came to this job with some appreciation for what I was taking on. The good news is that both we and the community have done a lot to make deployment much easier in the last year; probably the easiest approach is to download the OpenStack Unified Archive from OTN to get your hands on a single-system demonstration environment. I highly recommend getting started with something like it to get some understanding of OpenStack before you embark on a more complex deployment. For some situations, it may in fact be all you ever need. If so, you don't need to read the rest of this series of posts!

    In the Solaris engineering case, we need a lot more horsepower than a single-system cloud can provide. We need to support both SPARC and x86 VMs, and we have hundreds of developers, so we want to be able to scale to support thousands of VMs, though we're going to build to that scale over time, not immediately. We also want to be able to test both Solaris 11 updates and a release such as Solaris 12 that's under development, so that we can work out any upgrade issues before release. One thing we don't have is a requirement for extremely high availability, at least at this point. We surely don't want a lot of down time, but we can tolerate scheduled outages and brief (as in an hour or so) unscheduled ones. Thus I didn't need to spend effort on trying to get high availability everywhere.

    The diagram below shows our initial deployment design. We're using six systems, most of which are x86 because we had more of those immediately available. All of those systems reside on a management VLAN and are connected with a two-way link aggregation of 1 Gb links (we don't yet have 10 Gb switching infrastructure in place, but we'll get there). A separate VLAN provides "public" (as in connected to the rest of Oracle's internal network) addresses, while we use VxLANs for the tenant networks.

    One system is more or less the control node, providing the MySQL database, RabbitMQ, Keystone, and the Nova API and scheduler, as well as the Horizon console. We're curious how this will perform, and I anticipate eventually splitting at least the database off to another node to help simplify upgrades, but at our present scale this works.

    I had a couple of systems with lots of disk space, one of which was already configured as the Automated Installation server for the lab, so it's just providing the Glance image repository for OpenStack. The other node with lots of disks provides the Cinder block storage service; we also have a ZFS Storage Appliance that will help back-end Cinder in the near future, I just haven't had time to get it configured in yet.

    There's a separate system for Neutron, which is our Elastic Virtual Switch controller and handles the routing and NAT for the guests. We don't have any need for firewalling in this deployment, so we're not doing so. We presently have only two tenants defined: one for the Solaris organization that's funding this cloud, and a separate tenant for other Oracle organizations that would like to try out OpenStack on Solaris. Each tenant has one VxLAN defined initially, but we can of course add more. Right now we have just a single /24 network for the floating IPs; once demand gets up to where we need more, then we'll add them.

    Finally, we have started with just two compute nodes; one is an x86 system, the other is an LDOM on a SPARC T5-2. We'll be adding more when demand reaches the level where we need them, but as we're still ramping up the user base, it's less work to manage fewer nodes until then.

    My next post will delve into the details of building this OpenStack cloud's infrastructure, including how we're using various Solaris features such as Automated Installation, IPS packaging, SMF, and Puppet to deploy and manage the nodes. After that we'll get into the specifics of configuring and running OpenStack itself.

  • Working Solo On Small Projects: Cowboy Coding The Way To Go?

    - by snicker
    I am a big advocate of agile methods when working on teams and/or large projects. However, I find that for smaller projects, when working solo, I usually start the project writing unit tests, documenting extensively, and refactoring. As time wears on, I stop, because I feel like I'm wasting time. I find that cowboy coding with an agile spin (testing often, writing human-readable code) often works extremely well for me on small, solo projects that I don't expect others to have to work with. Do other people share my sentiment? Or do you think that one should never stick to their guns (get it? cowboys)? So the real question: are there any agile methodologies that are particularly tailored to a solo project (other than my "agile cowboy" method above)?

  • How to study programming with the C language

    - by gurugio
    I have been using only C for 5 years, so I am sure that I know C grammar, but I have no idea how to advance my programming skills. There are many books for modern languages (such as C++ and Java) that teach programming skills like refactoring, patterns, and software architecture, but no such book is written using the C language. Book authors say their books are not language-dependent, but I don't think so. How can I advance my programming skills? Do I have to study a modern language and read those books? Are there books about software design or programming skill written in C?

  • Limited options for accessing events in derived classes?

    - by maxp
    I'm refactoring a class and moving sections into a base class. I have a few events similar to

        public event EventHandler GridBinding;

    which are now in the base class, but I am finding I cannot now check whether the event is null in my derived class. Doing so gives me the error: "The event 'xyz.GridBinding' can only appear on the left hand side of += or -= (except when used from within the type 'xyz._MyBaseClass')". Is this correct, am I missing anything, is there any way around this, or is writing an accessor the only way to do it? I am using C#/.NET 4.0.
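
    That restriction is by design: outside the declaring type, a field-like event only exposes += and -=. The usual pattern is for the base class to expose a protected raiser method; here is a hedged sketch (GridViewBase and CustomerGrid are hypothetical names):

        using System;

        public class GridViewBase
        {
            public event EventHandler GridBinding;

            // Only the declaring type may read the delegate, so it
            // provides a protected method for derived classes to raise it.
            protected virtual void OnGridBinding(EventArgs e)
            {
                EventHandler handler = GridBinding; // local copy for thread safety
                if (handler != null)
                    handler(this, e);
            }
        }

        public class CustomerGrid : GridViewBase
        {
            public void Bind()
            {
                // ... bind the grid, then raise via the base class:
                OnGridBinding(EventArgs.Empty);
            }
        }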

  • What is more interesting or powerful: Curry, Mercury, Lambda-Prolog, or your suggestion?

    - by Bubba88
    Hi! I would like to ask what formal system could be most interesting to implement from scratch/reverse engineer. I've looked through some existing and rather open (open in the sense of free/open-source) projects of logical/declarative programming systems. I've decided to make up something similar in my free time, or at least to catch the general idea of the implementation. It would be great if some of these systems provided most of the expressive power and conciseness of modern academic investigations in logic and its relation to computational models. What would you recommend studying, at least at the conceptual level? For example, Lambda-Prolog is interesting particularly because it allows for higher-order relations, but AFAIK (I might really be mistaken :)) it is based on intuitionistic logic and therefore lacks the law of the excluded middle; that's generally a disadvantage for me. I would also welcome any suggestions about modern logic programming systems which are less popular but more expressive/powerful. I guess this question will need refactoring, but thank you in advance! :)

  • Developer oriented hardware benchmarks?

    - by Promit
    Perhaps I'm looking in the wrong places, but every hardware benchmark I've found, for nearly any component, is oriented towards gamers and/or workstations (video editing, etc.). Is there anyone doing benchmarks that are relevant to software developers? For example, take SSDs. I don't care how fast Crysis loads off an SSD -- that is completely worthless information. What I want to know is: which drive yields the quickest build times? What about IntelliSense and refactoring operations? What RAID configuration has the biggest benefit? I could probably come up with more examples, but you get the point. Long story short, where are the benchmarks that tell me which hardware will be most effective in helping me be a productive software developer?

  • Code understanding, reverse engineering, best concepts and tools. Java.

    - by core07
    One of the most demanding tasks for any programmer or architect is understanding someone else's code. E.g., I am a contractor hired to rescue a project very quickly: fix bugs, plan global refactoring. I therefore need the most efficient way to understand the code. What is the list of concepts, their priority, and the best tools for this? Of what I know:

        - reverse engineering the code to create object models (creating a diagram per package is not so convenient);
        - creating sequence diagrams (the tool connects to the system in debug mode and generates diagrams from the runtime);
        - some visualization techniques;
        - using tools that work not just with .java but also with, e.g., JPA implementors like Hibernate;
        - generating a diagram not for the whole codebase, but adding some class and then the classes it uses.

    Is Sparx Enterprise Architect the state of the art in reverse engineering, or far from it? Any other, better tools? Ideally, the tool would make me understand the code as if I had written it myself :)

  • iPhone and Vertex Buffer Objects

    - by dancer
    I've just started playing around with OpenGL ES on the iPhone over the past couple of weeks, and I'm looking at refactoring some of my code to use Vertex Buffer Objects (VBOs). Before I do, though, I would like to make sure it'll be worth it. The problem is that AFAIK the only reason you create VBOs is to shift a chunk of data onto the graphics card so that it doesn't need to be retrieved from system RAM when it's used. The iPhone, however, does not have any dedicated video RAM that I'm aware of, so I'm struggling to see why I would benefit at all from using VBOs. I have seen talk around the internet with conflicting opinions, and Apple certainly wants devs to use them, so there's probably still a reason to use them, but I just wanted to see if anyone on SO had an opinion to add.
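
    For context, here is roughly what the VBO path looks like on OpenGL ES 1.1, which the iPhone supports (a hedged sketch; the triangle data is hypothetical and the GL context is assumed to exist):

        #include <OpenGLES/ES1/gl.h>

        static const GLfloat triangle[] = { 0.0f, 1.0f, -1.0f, -1.0f, 1.0f, -1.0f };
        static GLuint vbo;

        // Call once, after the GL context has been set up.
        static void createVBO(void) {
            glGenBuffers(1, &vbo);
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            // GL_STATIC_DRAW hints that the data is written once and drawn many
            // times, which is where a VBO can pay off even on shared-memory
            // hardware: the driver can keep it in a GPU-friendly layout and
            // skip per-frame copies.
            glBufferData(GL_ARRAY_BUFFER, sizeof(triangle), triangle, GL_STATIC_DRAW);
        }

        // Call per frame.
        static void drawVBO(void) {
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glVertexPointer(2, GL_FLOAT, 0, (const GLvoid *)0);
            glEnableClientState(GL_VERTEX_ARRAY);
            glDrawArrays(GL_TRIANGLES, 0, 3);
        }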

  • Clouds Everywhere But not a Drop of Rain – Part 3

    - by sxkumar
    I was sharing with you how a broad-based transformation such as cloud will increase the agility and efficiency of an organization if process re-engineering is part of the plan. I have also stressed key enterprise requirements such as "broad and deep solutions", "running your mission critical applications" and an "automated and integrated set of capabilities". Let me walk you through some key cloud attributes such as "elasticity" and "self-service" and what they mean for an enterprise-class cloud. I will also talk about how we at Oracle have taken a very enterprise-centric view to developing cloud solutions and how our products have been specifically engineered to address enterprise cloud needs.

    Cloud Elasticity and Enterprise Application Requirements

    Easy and quick scalability for a short period of time is the signature of cloud-based solutions. It is this elasticity that allows you to dynamically redistribute your resources according to business priorities, helps increase your overall resource utilization, and reduces operational costs by allowing you to get the most out of your existing investment. Most public clouds offer an instant provisioning mechanism for compute power (CPU, RAM, disk); customers pay for the instance-hours (and bandwidth) they use, adding computing resources at peak times and removing them when they are no longer needed. This type of "just-in-time" serving of compute resources is well known for "stateless" mid-tier servers such as web application servers and web servers that just need another machine to start and run on, but what does it really mean for an enterprise application and its underlying data? Most enterprise applications are not quite as "stateless", and justifiably so. As such, how do you take advantage of cloud elasticity and make it relevant for your enterprise apps?

    This is where cloud meets grid computing. At Oracle, we have invested an enormous amount of time, energy and resources in creating enterprise grid solutions. All our technology products offer built-in elasticity via clustering and dynamic scaling. With products like Real Application Clusters (RAC), Automatic Storage Management, WebLogic Clustering, and the Coherence In-Memory Grid, we allow all your enterprise applications to benefit from cloud elasticity - both vertically and horizontally - without requiring any application changes. A number of technology vendors take the rather simplistic route of starting up additional VMs, or removing unneeded ones, as the "cloud scale-out" solution. While this may work for stateless mid-tier servers, where load balancers can handle the addition and removal of instances transparently, following a similar approach for the database tier - often called "database sharding" - requires significant application modification and typically does not work with off-the-shelf packaged applications. Technologies like Oracle Database Real Application Clusters, Automatic Storage Management, etc., on the other hand, bring the benefits of incremental scalability and on-demand elasticity to ANY application by providing a simplified abstraction layer where the application does not need to deal with data spread over multiple database instances. Rather, the application just talks to a single database, and the database software takes care of aggregating resources across multiple hardware components. It is technologies like these that truly make a cloud solution relevant for enterprises.

    For customers who are looking for a next-generation hardware consolidation platform, our engineered systems (e.g. Exadata, Exalogic) not only provide an incredible amount of performance and capacity, they also reduce data center complexity and simplify operations.

    Assemble, Deploy and Manage Enterprise Applications for the Cloud

    Products like Oracle Virtual Assembly Builder (OVAB) resolve the complex problem of bringing cloud speed to complex multi-tier applications. With assemblies, you can not only provision all components of a multi-tier application and wire them together at the push of a button; other aspects of the application lifecycle, such as real-time application testing, scale-up/scale-down, and performance and availability monitoring, are also automated using Oracle Enterprise Manager.

    An essential criterion for an enterprise cloud to succeed is the ability to ensure business service levels, especially when business users have full visibility of the usage cost via "showback" or "chargeback". With Oracle Enterprise Manager 12c, we have created the most comprehensive cloud management solution in the industry, capable of managing business service levels "applications-to-disk" in an enterprise private cloud - all from a single console. It is the only cloud management platform in the industry that allows you to deliver infrastructure, platform and application cloud services out of the box. Moreover, it offers integrated and complete lifecycle management of the cloud - including planning and set-up, service delivery, operations management, metering and chargeback, etc. Sounds unbelievable? Well, just watch this space for more details on how Oracle Enterprise Manager 12c is the nerve center of the Oracle Cloud!

    Our cloud solution portfolio is also the broadest and deepest in the industry - covering public, private and hybrid clouds at the infrastructure, platform and application levels. It is no coincidence, therefore, that the Oracle Cloud today offers the most comprehensive set of public cloud services in the industry. And to a large part, this has been made possible thanks to our years of investment in creating cloud-enabling technologies.

    Summary

    But the intent of this blog post isn't to dwell on how great our solutions are (these are just some examples to illustrate how we at Oracle have approached this problem space). Rather, it is to help you ask the right questions before you embark on your cloud journey. So to summarize, here are the key takeaways:

        - It is critical that you are clear on why you are building the cloud. Successful organizations keep business benefits as the first and foremost cloud objective. On the other hand, those who approach this purely as a technology project are more likely to fail.
        - Think about where you want to be in 3-5 years before you get started. Your long-term objectives should determine what your first step ought to be. As obvious as it may seem, more people than not make the first move without knowing where they are headed.
        - Don't make the mistake of equating cloud with virtualization and Infrastructure-as-a-Service (IaaS). Spinning up a VM on demand will give some short-term relief to your IT staff but is unlikely to solve your larger business problems. As such, even if IaaS is your first step towards a more comprehensive cloud, plan the roadmap around those higher-level services before you begin. And ask your vendors how they are going to be your partners in this journey.
        - Capabilities like self-service access and chargeback/showback are absolutely critical if you really expect your cloud to be transformational. Your business won't see the full benefits of the cloud until it empowers them with the same kind of control and transparency that they are used to when using a public cloud service.
        - Evaluate the benefits of integration, as opposed to blindly following a best-of-breed strategy. Integration is a huge challenge, and more so in a cloud environment. There are enormous costs associated with stitching a solution out of disparate components, and even more in maintaining it.

    Hope you found these ideas helpful. Looking forward to hearing your thoughts and experiences.

  • How to replace for-loops with a functional statement in C#?

    - by Lernkurve
    A colleague once said that God is killing a kitten every time I write a for-loop. When asked how to avoid for-loops, his answer was to use a functional language. However, if you are stuck with a non-functional language, say C#, what techniques are there to avoid for-loops or to get rid of them by refactoring? With lambda expressions and LINQ, perhaps? If so, how? So the question boils down to: Why are for-loops bad? Or, in what contexts are for-loops to be avoided, and why? Can you provide C# code examples of how it looks before, i.e. with a loop, and afterwards without a loop?
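
    As a hedged before/after illustration (the data and names are hypothetical, not from the question): the loop builds its result imperatively with an index and mutable state, while the LINQ pipeline states the same query declaratively.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Program
        {
            static void Main()
            {
                var people = new List<string> { "Ada", "Alan", "Grace", "Anita" };

                // Before: an explicit for-loop with a mutable intermediate list.
                var withA = new List<string>();
                for (int i = 0; i < people.Count; i++)
                {
                    if (people[i].StartsWith("A"))
                        withA.Add(people[i].ToUpper());
                }

                // After: the same query as a declarative LINQ pipeline.
                var withALinq = people
                    .Where(name => name.StartsWith("A"))
                    .Select(name => name.ToUpper())
                    .ToList();

                Console.WriteLine(string.Join(", ", withALinq)); // ADA, ALAN, ANITA
            }
        }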

  • ASP.NET membership provider API: usability and best practices

    - by Andrew Florko
    Hello everybody. The Membership/Role/Profile provider API appeared in the early days of ASP.NET. Nearly every time, I find I can't live with the standard API and have to add some extra functionality (for sorting, retrieving, etc.). I also often have to use a different database structure (with foreign keys to certain tables, for example) or think about performance improvements. These considerations forced the teams I took part in to build their own providers, but I can't stand implementing the provider API (because we don't use even 70% of the standard functionality). Moreover, providers that were built for specific projects were rarely reused. I wonder if someone has found a swiss-army-knife early-days-API provider implementation that is useful for any kind of project without refactoring... Or do you use your own implementations of the early-days APIs? Or maybe you abandon the standard architecture and use lightweight implementations? Thank you in advance.

  • How to iterate JavaScript object properties in the order they were written

    - by Jenea
    Hi. I identified a bug in my code which I hope to solve with minimal refactoring effort. This bug occurs in the Chrome and Opera browsers.

    Problem:

        var obj = {23:"AA", 12:"BB"};
        // iterating through obj's properties
        for (i in obj) document.write("Key: " + i + " " + "Value: " + obj[i]);

    Output in FF, IE:

        Key: 23 Value: AA Key: 12 Value: BB

    Output in Opera and Chrome (wrong):

        Key: 12 Value: BB Key: 23 Value: AA

    I attempted to make an inversely ordered object like this:

        var obj1 = {"AA":23, "BB":12};
        for (i in obj1) document.write("Key: " + obj1[i] + " " + "Value: " + i);

    However, the output is the same. Is there a way to get the same behaviour in all browsers with small changes?
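
    One small-change workaround (a hedged sketch, not from the question): for-in enumeration order is not guaranteed for numeric-looking keys, but arrays do preserve insertion order, so the entries can be stored as an array of key/value pairs:

        // An array of pairs keeps the order in which the entries were written.
        var entries = [
            { key: 23, value: "AA" },
            { key: 12, value: "BB" }
        ];

        for (var i = 0; i < entries.length; i++) {
            document.write("Key: " + entries[i].key + " Value: " + entries[i].value);
        }
        // Always: Key: 23 Value: AA Key: 12 Value: BB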

  • How to diff two regions of the same file in Eclipse

    - by Thomas Nilsson
    I'm a TDDer and often have a need to refactor out common or similar code. If it is exactly the same, there is no big problem: Eclipse can almost always do that by itself. But to get there, I often find myself looking at similar, but not identical, code fragments in the same, or even different, files. It would be very handy if there were a way to mark two regions and get Eclipse (or some other tool) to mark the differences. With this information it would be much simpler to iteratively move the regions closer until they are the same and then activate the Extract Method refactoring. It can be done in Emacs of course, but I'd like to have this readily available from Eclipse. Any pointers?

  • Moving from WCF RIA RC to Release: best practices?

    - by Duncan Bayne
    I have an existing WCF RIA project built on the Release Candidate; I'm now moving to the release version and have discovered many changes. David Scruggs made the following comment on his (MSDN) blog: "If you've written anything in Silverlight 4 RIA Services, you'll need to rewrite it. There has been a lot of refactoring and namespace moves." Having made a brief attempt to compile the old solution with the new RIA framework, I'm inclined to agree. My current plan is to:

        1. remove the Silverlight Business Application projects from the solution
        2. rebuild the EF4 items from the database
        3. create a new Silverlight Business Application project
        4. re-add the files (XAML, CS) from the old Silverlight Business Application project

    Does this sound like a reasonable approach? I think it's cleaner than trying to manually alter the existing project.

  • Processing an n-ary ANTLR AST one child at a time

    - by Chris Lieb
    I currently have a compiler that uses an AST where all children of a code block are on the same level (i.e., block.children == {stm1, stm2, stm3, etc...}). I am trying to do liveness analysis on this tree, which means that I need to take the value returned from the processing of stm1 and then pass it to stm2, then take the value returned by stm2 and pass it to stm3, and so on. I do not see a way of executing the child rules in this fashion when the AST is structured this way. Is there a way to chain the execution of the child grammar items with my given AST, or am I going to have to go through the painful process of refactoring the parser to generate a nested structure and updating the rest of the compiler to work with the new AST?

    Example ANTLR grammar fragment:

        block
            : ^(BLOCK statement*)
            ;
        statement
            : // stuff
            ;

    What I hope I don't have to go to:

        block
            : ^(BLOCK statementList)
            ;
        statementList
            : ^(StmLst statement statement+)
            | ^(StmLst statement)
            ;
        statement
            : // stuff
            ;
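
    For what it's worth, ANTLR 3 tree-grammar rules can take arguments and return values, so the flat child list can still be chained left to right with an embedded action; a hedged sketch (LivenessSet is a hypothetical type, and the exact syntax should be checked against your ANTLR version):

        block returns [LivenessSet out]
            @init { LivenessSet current = new LivenessSet(); }
            : ^(BLOCK ( s=statement[current] { current = $s.out; } )* )
              { $out = current; }
            ;

        statement[LivenessSet in] returns [LivenessSet out]
            : // stuff: compute $out from $in for each statement kind
            ;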

  • Servlets, long operations

    - by asrijaal
    Hi there, I'm refactoring a big piece of code at the moment, where a long-running operation is executed in a servlet. Sometimes I don't get a response after the operation has finished (it has finished, because it is printed to the logs). What I wish to achieve is some "fire and forget" behaviour from the servlet: I would pass my params to the action, and the servlet would immediately return a status (something like: "the operation has started, check your logs for further info"). Is this possible with the Servlet 2.5 spec? I think I could get such behaviour with JMS; maybe there are other solutions out there?
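
    The Servlet 2.5 spec has no asynchronous support (that arrived in Servlet 3.0), but a common workaround is to hand the work to a thread pool and return immediately; below is a hedged sketch (the servlet and worker names are hypothetical, and some managed containers prefer their own work-manager APIs over raw threads):

        import java.io.IOException;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class FireAndForgetServlet extends HttpServlet {

            private ExecutorService executor;

            @Override
            public void init() {
                executor = Executors.newFixedThreadPool(4);
            }

            @Override
            protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                    throws IOException {
                // Copy everything needed from the request before returning;
                // the request object must not be touched from the worker thread.
                final String job = req.getParameter("job");

                executor.submit(new Runnable() {
                    public void run() {
                        runLongOperation(job); // hypothetical long-running worker
                    }
                });

                resp.setStatus(HttpServletResponse.SC_ACCEPTED); // 202
                resp.getWriter().println("Operation started; check the logs for results.");
            }

            @Override
            public void destroy() {
                executor.shutdown();
            }

            private void runLongOperation(String job) { /* ... */ }
        }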

  • Ruby: would using Fibers increase my DB insert throughput?

    - by Zombies
    Currently I am using Ruby 1.9.1 and the 'ruby-mysql' gem, which, unlike the 'mysql' gem, is written in Ruby only. This is pretty slow, actually, as it seems to insert at a rate of almost 1 row per second (SLOOOOOWWWWWW). And I have a lot of inserts to make too; it's pretty much what this script does, ultimately. I am using just 1 connection (since I am using just one thread). I am hoping to speed things up by creating fibers, each of which will:

        - create a new DB connection
        - insert 1-3 records
        - close the DB connection

    I would imagine launching 20-50 of these would greatly increase DB throughput. Am I correct to go along this route? I feel that this is the best option, as opposed to refactoring all of my DB code :(
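
    An aside worth weighing (a hedged sketch, not from the question): plain fibers are cooperatively scheduled and won't overlap blocking database I/O by themselves, so batching rows into a single multi-row INSERT is often the simpler throughput win. This assumes a hypothetical things table and the classic API that ruby-mysql mimics:

        require 'mysql'

        db = Mysql.connect('localhost', 'user', 'password', 'mydb')

        rows = [[1, 'alpha'], [2, 'beta'], [3, 'gamma']] # hypothetical data

        # One multi-row INSERT replaces many single-row round trips.
        values = rows.map { |id, name|
          "(#{id}, '#{db.escape_string(name)}')"
        }.join(', ')

        db.query("INSERT INTO things (id, name) VALUES #{values}")
        db.close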

  • Emacs: Define a function which loads the file where the function itself is defined

    - by damd
    I'm refactoring a bit in my Emacs setup and have come to the conclusion that I want to use a different init file than the default one. So basically, in my ~/.emacs file, I have this:

        (load "/some/directory/init.el")

    Up until now, that's been working just fine. However, now I want to redefine an old command that I've used for ages, which opens my init file:

        (defun conf ()
          "Open a buffer with the user init file."
          (interactive)
          (find-file user-init-file))

    As you can see, this will open ~/.emacs no matter what I do. I want it to open /some/directory/init.el, or wherever the conf command itself is defined. How would I do that?
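
    One standard approach (a hedged sketch; my-init-file is just an illustrative name): while a file is being loaded, load-file-name holds its path, so init.el can capture it at load time and conf can reuse the captured value:

        ;; In /some/directory/init.el: `load-file-name' is non-nil while
        ;; this file is being loaded, so record it for later.
        (defvar my-init-file (or load-file-name buffer-file-name)
          "The file this init code was loaded from.")

        (defun conf ()
          "Open a buffer with the real init file."
          (interactive)
          (find-file my-init-file))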

  • How do I temporarily change the require path in Ruby ($:)?

    - by John Feminella
    I'm doing some trickery with a bunch of Rake tasks for a complex project, gradually refactoring away some of the complexity a chunk at a time. This has exposed the bizarre web of dependencies left behind by the previous project maintainer. What I'd like to be able to do is add a specific path in the project to require's list of paths to be searched, aka $:. However, I only want that path to be searched in the context of one particular method. Right now I'm doing something like this:

        def foo()
          # Remember the old paths, add the new special path.
          paths = $:.dup
          $: << special_path

          # Do work ...
          bar()
          baz()
          quux()

          # Reset.
          $:.replace(paths)
        end

        def bar()
          require '...' # If called from within foo(), will also search special_path.
          ...
        end

    This is clearly a monstrous hack. Is there a better way?
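
    A slightly safer shape for the same hack (a hedged sketch): wrap the manipulation in a block helper with ensure, so the load path is restored even if the work raises:

        # Temporarily add a directory to the load path for the duration of a block.
        def with_load_path(path)
          saved = $:.dup
          $: << path
          yield
        ensure
          $:.replace(saved) # restored even if the block raises
        end

        def foo
          with_load_path(special_path) do
            bar
            baz
            quux
          end
        end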

  • Renaming files: Visual Studio vs Version control

    - by Benjol
    The problem with renaming files is that if you want to take advantage of Visual Studio refactoring, you really need to do it from inside Visual Studio. But most (not all*) version control systems also want to be the ones doing the renaming. One solution is to use integrated source control, but this is not always available, and in some cases is pretty clunky. I'd personally be more comfortable using source control separately, outside of Visual Studio, but I'm not sure how to manage this question of file renames. So, for those of you who use Visual Studio, which source control do you use? Do you use a VS integration (which one?), and otherwise how do you resolve this renaming problem? (* git is smart enough to work it out for itself)
