Search Results

Search found 11114 results on 445 pages for 'dynamic websites'.

Page 55/445 | < Previous Page | 51 52 53 54 55 56 57 58 59 60 61 62  | Next Page >

  • Namespaces are obsolete

    - by Bertrand Le Roy
    To those of us who have been around for a while, namespaces have been part of the landscape. One could even say that they have been defining the large-scale features of the landscape in question. However, something happened fairly recently that I think makes this venerable structure obsolete. Before I explain this development and why it's a superior concept to namespaces, let me recapitulate what namespaces are and why they've been so good to us over the years. Namespaces are used for a few different things:

    1. Scope: a namespace delimits the portion of code where a name (for a class, sub-namespace, etc.) has the specified meaning. Namespaces are usually the highest-level scoping structures in a software package.
    2. Collision prevention: name collisions are a universal problem. Some systems, such as jQuery, wave it away, but the problem remains. Namespaces provide a reasonable approach to global uniqueness (and some implementations, such as XML, enforce it). In .NET, there are ways to relocate a namespace to avoid those rare collision cases.
    3. Hierarchy: programmers like neat little boxes, and especially boxes within boxes within boxes. For some reason. Regular human beings, on the other hand, tend to think linearly, which is why Windows Explorer, for example, has tried in a few different ways to flatten the file system hierarchy for the user.

    Point 1 is clearly useful because we need to protect our code from bleeding effects from the rest of the application (and vice versa). A language with only global constructs may be what some of us started programming on, but it's not desirable in any way today.

    Point 2 may not always be worth the trouble (jQuery is doing fine with its global plug-in namespace), but we still need it in many cases. One should note, however, that globally unique names are not the only possible implementation. In fact, they are a rather extreme solution. What we really care about is collision prevention within our application. What happens outside is irrelevant.

    Point 3 is, more than anything, an aesthetic choice. A common convention has been to encode the whole pedigree of the code into the namespace. Come to think of it, we never think "I need to import Microsoft.SqlServer.Management.Smo.Agent", and that would be very hard to remember. What we want to do is bring NHibernate into our app. And this is precisely what you'll do with modern package managers and module loaders. I want to take the specific example of the require() module system used by Node (RequireJS brings a similar pattern to the browser). Here is how you import a module:

    var http = require("http");

    This is of course importing an HTTP stack module into the code. There is no noise here. Let's break this down. Scope (1) is provided by the one scoping mechanism in JavaScript: the closure surrounding the module's code. Whatever scoping mechanism is provided by the language would be fine here.
    Collision prevention (2) is very elegantly handled. Whereas relocating is an afterthought, and an exceptional measure, with namespaces, here it is on the front line. You always relocate, using an extremely familiar pattern: variable assignment. We are very much used to managing our local variable names, and any possible collision gets solved very easily by picking a different name. Wait a minute, I hear some of you say. This is only taking care of collisions on the client side, on the left of that assignment. What if I have two libraries with the name "http"? Well, you can better qualify the path to the module, which is what the require parameter really is. As for hierarchical organization, you don't really want that, do you?

    This module pattern does elegantly cover the bases that namespaces used to cover, but it also promotes additional good practices. First, it promotes usage of self-contained, single-responsibility units of code through the closure-based, stricter scoping mechanism. Namespaces are somewhat more porous, as using/import statements can be used bi-directionally, which leads us to my second point: sane dependency graphs are easier to achieve and sustain with such a structure. With namespaces, it is easy to construct dependency cycles (that's bad, mmkay?). With this pattern, the equivalent would be to build mega-components, which are an easier problem to spot than a decay into inter-dependent namespaces, for which you need specialized tools.

    I really like this pattern very much, and I would like to see more environments implement it. One could argue that dependency injection has some commonalities with this, for example. What do you think? This is the half-baked result of some morning shower reflections, and I'd love to read your thoughts about it. What am I missing?
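    A rough .NET analogue of this relocate-by-assignment idea is the using alias directive mentioned above ("ways to relocate a namespace"). The sketch below is purely illustrative; the Acme.Logging and Contoso.Logging namespaces in it are made up for the example.

        // Two hypothetical namespaces that both define a Logger type.
        namespace Acme.Logging { public class Logger { } }
        namespace Contoso.Logging { public class Logger { } }

        namespace Demo
        {
            // "Relocation" via aliases: we pick the local names, much like the
            // left-hand side of a require() assignment.
            using AcmeLog = Acme.Logging;
            using ContosoLog = Contoso.Logging;

            class Program
            {
                static void Main()
                {
                    var a = new AcmeLog.Logger();    // no collision: each Logger has its own local name
                    var b = new ContosoLog.Logger();
                    System.Console.WriteLine(a.GetType().FullName + " / " + b.GetType().FullName);
                }
            }
        }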

    Read the article

  • Need to make animation whereby the character shatters into a bunch of pieces

    - by theprojectabot
    I would like to take a 3d character model, cut out a bunch of shapes (or a bunch of triangles in the shape of the pieces I want) and then have the pieces separate from each other at the beginning of the animation and fall apart with gravity so it looks like the model is falling apart in shattered pieces. Is there a way to run a script on a mesh, cut out these pieces, instantiate all of them as separate models and then run gravity on them during the simulation?
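    One possible runtime half of this, sketched below, assumes Unity and assumes the fracturing itself has already been done offline (the model was pre-cut into shard meshes in a modeling tool or with a fracture plugin); at shatter time each shard is simply detached and handed to the physics engine so gravity pulls the pieces apart. Cutting the mesh into shards at runtime is a much harder problem and is usually left to a dedicated fracturing tool or library.

        // Minimal Unity sketch (an assumption; the question does not name an engine).
        // The character object is assumed to already contain one child mesh per shard.
        using UnityEngine;

        public class ShatterOnStart : MonoBehaviour
        {
            public float explosionForce = 2f;    // small outward push so the pieces separate visibly
            public float explosionRadius = 5f;

            void Start()
            {
                foreach (MeshFilter piece in GetComponentsInChildren<MeshFilter>())
                {
                    GameObject shard = piece.gameObject;
                    shard.transform.parent = null;                     // detach so each shard moves on its own
                    MeshCollider col = shard.AddComponent<MeshCollider>();
                    col.convex = true;                                 // rigidbodies need convex mesh colliders
                    Rigidbody body = shard.AddComponent<Rigidbody>();  // gravity is enabled by default
                    body.AddExplosionForce(explosionForce, transform.position, explosionRadius);
                }
            }
        }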

    Read the article

  • SQL Strings vs. Conditional SQL Statements

    - by Yatrix
    Is there an advantage to piecing SQL strings together versus using conditional SQL statements in SQL Server itself? I have only about 10 months of SQL experience, so I could be speaking out of pure ignorance here. Where I work, I see people building entire queries in strings and concatenating strings together depending on conditions. For example: Set @sql = 'Select column1, column2 from Table1 ' If SomeCondition @sql = @sql + 'where column3 = ' + @param1 else @sql = @sql + 'where column4 = ' + @param2 That's a really simple example, but what I'm seeing here is multiple joins and huge queries built from strings and then executed. Some of them even write out what's basically a function to execute, including DECLARE statements, variables, etc. Is there an advantage to doing it this way when you could do it with just conditions in the SQL itself? To me, it seems a lot harder to debug, change and even write versus adding CASEs, IF-ELSEs or additional WHERE parameters to branch the query.
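    For what it's worth, the biggest practical difference is usually not style but safety: concatenating values into the string invites SQL injection, while conditional logic (whether in T-SQL or in application code) can keep the values parameterized. A hedged C# sketch of the application-side version, with purely illustrative table and column names:

        // Branch on the condition in code, but pass the values as parameters instead of
        // concatenating them into the SQL text. All names here are hypothetical.
        using System.Data.SqlClient;

        static class QueryBuilder
        {
            public static SqlCommand Build(SqlConnection conn, bool someCondition, int param1, int param2)
            {
                SqlCommand cmd = conn.CreateCommand();
                if (someCondition)
                {
                    cmd.CommandText = "SELECT column1, column2 FROM Table1 WHERE column3 = @p";
                    cmd.Parameters.AddWithValue("@p", param1);
                }
                else
                {
                    cmd.CommandText = "SELECT column1, column2 FROM Table1 WHERE column4 = @p";
                    cmd.Parameters.AddWithValue("@p", param2);
                }
                return cmd;
            }
        }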

    Read the article

  • Dynamically vs Statically typed languages studies

    - by Winston Ewert
    Do there exist studies done on the effectiveness of statically vs dynamically typed languages? In particular: Measurements of programmer productivity Defect Rate Also including the effects of whether or not unit testing is employed. I've seen lots of discussion of the merits of either side but I'm wondering whether anyone has done a study on it. Edit Sadly, only one of the papers shown is actually a study and it does nothing but conclude that the language matters. This leads me to ponder: what if I proposed doing such a study with volunteers from this site?

    Read the article

  • Dynamic DNS Updates with Wireless and Wired interfaces

    - by Phaedrus
    We have offices full of Windows & Mac users who obtain IP addresses from a Windows DHCP server, which in turn updates Dynamic DNS entries. We are noticing major inconsistencies with the entries, and have found that the problem occurs more on Macs than on Windows, and even more when users frequently switch between the wired and wireless adapter, which makes sense, as this sequence occurs: the user enables the wired adapter and registers a proper DNS entry; the user enables the wireless adapter and registers a second proper DNS entry; the user switches off wireless manually and the second entry improperly remains until scavenging. Our help desk folks rely heavily (maybe more than they should) on the dynamic entries as part of their business process. For example, the user submits a help desk ticket, and the staff member expects to be able to remote desktop to their machine by hostname, which is hyperlinked in the helpdesk ticketing app. We have implemented multiple solutions & band-aids for different symptoms of the problem, such as using DNS reservations for Macintosh PCs, using DNS scavenging to remove old records, and switching from a Cisco DHCP server to the Windows DHCP server. But no matter what we do, it seems impossible to maintain perfect records. Has anyone encountered this problem before? What is industry best practice? Comments & suggestions are much appreciated, /P

    Read the article

  • Dynamically load and call delegates based on source data

    - by makerofthings7
    Assume I have a stream of records that need to have some computation. Records will have a combination of these functions run Sum, Aggregate, Sum over the last 90 seconds, or ignore. A data record looks like this: Date;Data;ID Question Assuming that ID is an int of some kind, and that int corresponds to a matrix of some delegates to run, how should I use C# to dynamically build that launch map? I'm sure this idea exists... it is used in Windows Forms which has many delegates/events, most of which will never actually be invoked in a real application. The sample below includes a few delegates I want to run (sum, count, and print) but I don't know how to make the quantity of delegates fire based on the source data. (say print the evens, and sum the odds in this sample) using System; using System.Threading; using System.Collections.Generic; internal static class TestThreadpool { delegate int TestDelegate(int parameter); private static void Main() { try { // this approach works is void is returned. //ThreadPool.QueueUserWorkItem(new WaitCallback(PrintOut), "Hello"); int c = 0; int w = 0; ThreadPool.GetMaxThreads(out w, out c); bool rrr =ThreadPool.SetMinThreads(w, c); Console.WriteLine(rrr); // perhaps the above needs time to set up6 Thread.Sleep(1000); DateTime ttt = DateTime.UtcNow; TestDelegate d = new TestDelegate(PrintOut); List<IAsyncResult> arDict = new List<IAsyncResult>(); int count = 1000000; for (int i = 0; i < count; i++) { IAsyncResult ar = d.BeginInvoke(i, new AsyncCallback(Callback), d); arDict.Add(ar); } for (int i = 0; i < count; i++) { int result = d.EndInvoke(arDict[i]); } // Give the callback time to execute - otherwise the app // may terminate before it is called //Thread.Sleep(1000); var res = DateTime.UtcNow - ttt; Console.WriteLine("Main program done----- Total time --> " + res.TotalMilliseconds); } catch (Exception e) { Console.WriteLine(e); } Console.ReadKey(true); } static int PrintOut(int parameter) { // Console.WriteLine(Thread.CurrentThread.ManagedThreadId + " Delegate PRINTOUT waited and printed this:"+parameter); var tmp = parameter * parameter; return tmp; } static int Sum(int parameter) { Thread.Sleep(5000); // Pretend to do some math... maybe save a summary to disk on a separate thread return parameter; } static int Count(int parameter) { Thread.Sleep(5000); // Pretend to do some math... maybe save a summary to disk on a separate thread return parameter; } static void Callback(IAsyncResult ar) { TestDelegate d = (TestDelegate)ar.AsyncState; //Console.WriteLine("Callback is delayed and returned") ;//d.EndInvoke(ar)); } }
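    One hedged way to build the kind of "launch map" the question describes is a plain dictionary from a key derived from the ID to an array of delegates; everything below, including the record shape and the even/odd rule, is invented for illustration.

        using System;
        using System.Collections.Generic;

        class Record
        {
            public DateTime Date;
            public int Data;
            public int Id;
        }

        static class Dispatcher
        {
            // Placeholders for the real computations.
            static int Sum(int x)   { return x; }
            static int Count(int x) { return 1; }
            static int Print(int x) { Console.WriteLine(x); return x; }

            // The launch map: which delegates run for which key (here: even vs. odd IDs).
            static readonly Dictionary<int, Func<int, int>[]> LaunchMap =
                new Dictionary<int, Func<int, int>[]>
                {
                    { 0, new Func<int, int>[] { Print } },       // even IDs: print
                    { 1, new Func<int, int>[] { Sum, Count } },  // odd IDs: sum and count
                };

            public static void Process(Record r)
            {
                Func<int, int>[] actions;
                if (LaunchMap.TryGetValue(r.Id % 2, out actions))
                {
                    foreach (Func<int, int> action in actions)
                        action(r.Data);   // or fan out with BeginInvoke/Task.Run, as in the sample above
                }
            }
        }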

    Read the article

  • Do unused vertices in a 3D object affect performance?

    - by Gajet
    For my game I need to generate a mesh dynamically. Now I'm wondering: does it have a noticeable effect on FPS if I allocate more vertices than I'm actually using, and does it matter whether I'm using DirectX or OpenGL? Edit: The final output will be a w*h cell grid, but for technical reasons it's much easier for me to allocate (w+1)*(h+1) vertices. Sure, I'll only use w*h vertices in indexing, and I know there is some memory wasted there, but I want to know if it also affects FPS or not. (Note that the mesh is only generated once each time you play the game.)

    Read the article


  • High Profile ASP.NET websites

    - by nandos
    About twice a month I get asked to justify the reason "Why are we using ASP.NET and not PHP or Java, or buzz-word-of-the-month here, etc.". 100% of the time the questions come from people who do not understand anything about technology, people who would not know the difference between FTP and HTTP. The best approach I've found (so far) to justify it to people without getting into technical details is to just say "XXX website uses it", to which I get back "Oh... I did not know that, so ASP.NET must be good". I know, I know, it hurts. But it works. So, without getting into the merits of why I'm using ASP.NET (which could trigger an endless argument for other platforms), I'm trying to compile a list of high-profile websites that are implemented in ASP.NET. (No, they would have no idea what StackOverflow is.) Can you name a high-profile website implemented in ASP.NET? EDIT: Current list (thanks for all the responses; trying to avoid tech sites and prioritizing retail sites): Costco - http://www.costco.com/ Crate & Barrel - http://www.crateandbarrel.com/ Home Shopping Network - http://www.hsn.com/ Buy.com - http://www.buy.com/ Dell - http://www.dell.com Nasdaq - http://www.nasdaq.com/ Virgin - http://www.virgin.com/ 7-Eleven - http://www.7-eleven.com/ Carnival Cruise Lines - http://www.carnival.com/ L'Oreal - http://www.loreal.com/ The White House - http://www.whitehouse.gov/ Remax - http://www.remax.com/ Monster Jobs - http://www.monster.com/ USA Today - http://www.usatoday.com/ ComputerJobs.com - http://computerjobs.com/ Match.com - http://www.match.com National Health Service (UK) - http://www.nhs.uk/ CareerBuilder.com - http://www.careerbuilder.com/

    Read the article

  • razor websites not working and all dlls are present

    - by Michael Tot Korsgaard
    I've uploaded a .cshtml website to a surftown server, and I got some problems running the code. But I have a problem with it running the Razor code. This is how the page renders:(Default.cshtml) I've already checked for internal communication problems. And this is my result: But then why isn't it working, and how can I fix it? I've heard that it can be a problem with views but how whould I fix this if that's the case? My websites folder tree: (And some files too) App_Code App_Data packages Microsoft.AspNet.Razor.2.0.20710.0 Microsoft.Asp.Net.WebPages.2.0.20710.0 Microsoft.Asp.Net.WebPages.Administration.2.0.20710.0 Microsoft.Asp.Net.WebPages.Data.2.0.20710.0 Microsoft.Asp.Net.WebPages.WebData.2.0.20710.0 Microsoft.Web.Infrastructure.1.0.0.0 NuGet.Core.1.6.2 bin packages jQuery.2.0.3 Content Scripts Tools Microsoft.AspNet.Mvc.4.0.30506.0 lib net40 Microsoft.AspNet.Razor.2.0.30506.0 lib net40 Microsoft.AspNet.WebPages.2.0.30506.0 lib net40 Pages Chapters Read.cshtml Edit Move Chapter.cshtml Entry.cshtml Entries EnterEntry.cshtml EnterNote.cshtml Login Login.cshtml Search Result.cshtml Scripts Addons TinyMCE Styles CSS Views _Layout.cshtml Default.cshtml My web.config file looks like this: <?xml version="1.0"?> <configuration> <system.web> <compilation debug="true" targetFramework="4.0"> <buildProviders> <add extension=".cshtml" type="System.Web.WebPages.Razor.RazorBuildProvider, System.Web.WebPages.Razor"/> </buildProviders> <assemblies> <add assembly="System.Web.Mvc, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </assemblies> </compilation> </system.web> <connectionStrings> <add connectionString="database connection" providerName="System.Data.SqlClient"/> </connectionStrings> </configuration> EDIT: Is it a problem that all my files are .cshtml?

    Read the article

  • code to ping websites works sometimes ...

    - by trustfundbaby
    I'm testing out a piece of code to ping a bunch of websites I own on a regular basis, to make sure they're up. I'm using rails and so far I have this hideous test action that I'm using to try it out (see below). The problem though, is that sometimes it works, and other times it won't ... sometimes it runs through the code just fine, other times, it seems to completely ignore the begin/rescue block ... a. I need help figuring out what the problem is b. And refactoring this to make it look respectable. Your help is much appreciated. require 'net/http' require 'uri' def ping @sites = NewsSource.all @sites.each do |site| if site.uri and !site.uri.empty? uri = URI.parse(site.uri) response = nil path = uri.path.blank? ? '/' : uri.path path = uri.query.blank? ? path : "#{path}?#{uri.query}" begin Net::HTTP.start(uri.host, uri.port) {|http| http.open_timeout = 30 http.read_timeout = 30 response = http.head(path) } if response.code.eql?('200') or response.code.eql?('301') or response.code.eql?('302') site.up = true else site.up = false end site.up_check_msg = response.message site.up_check_code = response.code rescue Errno::EBADF rescue Timeout::Error site.up = false site.up_check_msg = 'timeout' site.up_check_code = '408' end site.up_check_time = 0.seconds.ago site.save end end end

    Read the article

  • Setting up padding for websites in mobile devices

    - by ambrelasweb
    I had finished this website a while ago, but something happened to it and I've now spent all day fixing it and getting it back from scratch, as my backup wasn't done correctly. I don't quite understand what it's doing, as I've used this technique on many other websites with no trouble; maybe I've looked at this website too long? Here is the website. I want to put some space on the left- and right-hand side; however, I don't just have one container, as I needed the dark grey bar at 100% of the screen and always under the banner no matter where it was. So there are 4 "containing" divs that I want to have the space. I've placed some CSS3 media queries in, but now I'm getting a gap to the right. I was thinking it was because my background images are going all the way across, but they are set at 100%, so I'm just not understanding what's going on. It's something simple that I'm just not seeing right now. This is what I have for the media queries: /* Smartphones (portrait and landscape) ----------- */ @media only screen and (min-device-width : 320px) and (max-device-width : 480px) { #header, #banner, #main, #footer-widget-area { padding: 0 2em 0 2em; } } This is what it looks like on my iPhone. Any advice is helpful and appreciated.

    Read the article

  • PropertyGrid: Merging multiple dynamic properties when editing multiple objects

    - by Andrei Stanescu
    Hi, let's say I have a class A and a class B. I would like to edit multiple instances of A and B simultaneously using the .NET PropertyGrid. The desired behavior would be to have the intersection of their properties displayed. If A and B have static (written in the source code) properties, everything works fine: selecting A and B instances will only display the intersection of properties. However, if A and B also have dynamic properties (returned as a PropertyDescriptorCollection through the GetProperties() method), the behavior is wrong. When selecting multiple objects I will only see the static properties and none of the dynamic ones. When I select only one instance I can see all properties (static and dynamic). Anybody have any ideas? I couldn't find anything on the internet.
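    For context, here is a minimal sketch of the kind of setup being described, with the dynamic properties surfaced through GetProperties(); all names are invented, and whether the multi-select merge keeps such descriptors depends on how the grid matches them across objects, which is exactly the behavior in question.

        using System;
        using System.ComponentModel;

        // A hand-rolled descriptor for one dynamic, string-valued property backed by a dictionary.
        class DynamicProperty : PropertyDescriptor
        {
            public DynamicProperty(string name) : base(name, null) { }
            public override Type ComponentType { get { return typeof(DynamicBag); } }
            public override Type PropertyType { get { return typeof(string); } }
            public override bool IsReadOnly { get { return false; } }
            public override object GetValue(object component) { return ((DynamicBag)component).Get(Name); }
            public override void SetValue(object component, object value) { ((DynamicBag)component).Set(Name, (string)value); }
            public override bool CanResetValue(object component) { return false; }
            public override void ResetValue(object component) { }
            public override bool ShouldSerializeValue(object component) { return false; }
        }

        // Stands in for class A or B: ordinary static properties plus dynamic ones.
        class DynamicBag : CustomTypeDescriptor
        {
            readonly System.Collections.Generic.Dictionary<string, string> values =
                new System.Collections.Generic.Dictionary<string, string>();

            public string Name { get; set; }   // a static property, visible in multi-select

            public string Get(string key) { string v; values.TryGetValue(key, out v); return v; }
            public void Set(string key, string value) { values[key] = value; }

            public override PropertyDescriptorCollection GetProperties()
            {
                return GetProperties(null);
            }

            public override PropertyDescriptorCollection GetProperties(Attribute[] attributes)
            {
                // Reflection-based (static) properties first, then the dynamic one(s).
                PropertyDescriptorCollection props = new PropertyDescriptorCollection(new PropertyDescriptor[0]);
                foreach (PropertyDescriptor p in TypeDescriptor.GetProperties(this, attributes, true))
                    props.Add(p);
                props.Add(new DynamicProperty("Color"));
                return props;
            }
        }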

    Read the article

  • Algorithm and data structure learning resources for dynamic programming

    - by Pranav
    I'm learning dynamic programming now, and while I know the theory well, designing DP algorithms for new problems is still difficult. This is what I would really like now: a book or a website which poses a problem that can be solved by dynamic programming, and which also has the solution with an explanation available, which I would like to see if I can't solve the problem even after banging my head against it for a few hours. Is there some resource that provides this sort of thing for several categories of algorithms, like graph algorithms, dynamic programming, etc.? P.S. I considered TopCoder, but the solutions there are not really appropriate for learning to implement efficient solutions.

    Read the article

  • How to make schema and code dynamic?

    - by Jonarch
    I want to make my database schema and application code as dynamic as possible to handle "unknown" use cases and changes. I am developing in PHP and MySQL. Twice now I have had to change my entire schema, including table and column names, and this means the developers have to go back to the application code and modify all the SQL queries and table/column names. So to prevent this I want to know: just as we make page content, the title bar, etc. dynamic with something like a %variable%, can we do the same for the schema, and maybe even for the PHP functions and classes somehow? Re-doing all the changes like this takes weeks, whereas if it were dynamic it could be done in under a day.

    Read the article

  • Problems with dynamic programming

    - by xan
    I've got difficulties understanding dynamic programming, so I decided to solve some problems. I know basic dynamic programming algorithms like longest common subsequence and the knapsack problem, but only because I've read about them; I can't come up with something on my own :-( For example: we have a sequence of natural numbers, and each number can be taken with a plus or a minus. At the end we take the absolute value of the sum; for each sequence, find the lowest possible result. in1: 10 3 5 4; out1: 2 in2: 4 11 5 5 5; out2: 0 in3: 10 50 60 65 90 100; out3: 5 explanation for the 3rd: 5 = |10+50+60+65-90-100| What's worse, my friend told me that it is a simple knapsack problem, but I can't see any knapsack here. Is dynamic programming really that difficult, or is it just me having big problems with it?
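    For what it's worth, a hedged sketch of the knapsack connection: choosing a plus or a minus for each number so that the absolute value of the sum is minimized is the same as choosing a subset whose sum is as close as possible to half the total, which is the classic subset-sum DP. C# is used here purely for concreteness.

        using System;

        static class MinAbsSum
        {
            public static int Solve(int[] a)
            {
                int total = 0;
                foreach (int x in a) total += x;

                // reachable[s] == true  <=>  some subset of a sums to exactly s
                bool[] reachable = new bool[total + 1];
                reachable[0] = true;
                foreach (int x in a)
                    for (int s = total; s >= x; s--)
                        if (reachable[s - x]) reachable[s] = true;

                // The "plus" subset sums to s and the "minus" subset to total - s,
                // so the result for that split is |s - (total - s)| = |total - 2s|.
                int best = total;
                for (int s = 0; s <= total; s++)
                    if (reachable[s]) best = Math.Min(best, Math.Abs(total - 2 * s));
                return best;
            }

            static void Main()
            {
                Console.WriteLine(Solve(new[] { 10, 3, 5, 4 }));              // 2
                Console.WriteLine(Solve(new[] { 4, 11, 5, 5, 5 }));           // 0
                Console.WriteLine(Solve(new[] { 10, 50, 60, 65, 90, 100 }));  // 5
            }
        }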

    Read the article

  • Dynamic SQL queries in code possible?

    - by SeanD
    Instead of hard coding SQL queries like Select * from users where user_id = 220202, can these be made dynamic, like Select * from $users where $user_id = $input? The reason I ask is that when changes are needed to table/column names I could just update them in one place and wouldn't have to ask developers to go line by line to find all the references to update, which is very time consuming. And I do not like the idea of exposing database details in the code. My major concern is load time: just as the database has to fetch the page content for dynamic pages, if the queries are dynamic the system first has to look up the references and then execute the queries, so does that impact load times? I am using CodeIgniter (PHP). If it is possible, then the next question is where to store all the references: in the app, in a file, in the DB, and how?

    Read the article

  • Dynamic control

    - by Geetha
    Hi all, I am creating dynamic labels and textboxes based on the number of values from the database for the selected item of the dropdownlist. The dynamic labels hold the names and the textboxes the values. To retain the values of these controls I'm using the Page_Init event, and I'm using the Cache to hold the selected item from the dropdownlist. Problem: the process works fine, but if I refresh the page no item is selected in the dropdown list, yet the Cache does not get cleared, so the dynamic controls are still created from the cached value. Geetha
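    A hedged sketch of one way around this follows; the control IDs and the field data are invented, and placeHolder1/ddlItems are assumed to exist in the page markup. The idea is to keep the selection per user in Session rather than in the Cache (which is shared by every user and survives a refresh), and to clear it on a plain GET so stale controls are not rebuilt.

        using System;
        using System.Web.UI;
        using System.Web.UI.WebControls;

        public partial class DynamicFields : Page
        {
            protected PlaceHolder placeHolder1;   // normally wired up from the markup

            protected void Page_Init(object sender, EventArgs e)
            {
                // A refresh is a plain GET, so there is no posted selection to honor:
                // forget the remembered one instead of rebuilding controls from it.
                if (!IsPostBack)
                    Session.Remove("SelectedItem");

                string selected = Session["SelectedItem"] as string;
                if (!string.IsNullOrEmpty(selected))
                    BuildControls(selected);
            }

            protected void ddlItems_SelectedIndexChanged(object sender, EventArgs e)
            {
                Session["SelectedItem"] = ((DropDownList)sender).SelectedValue;
                BuildControls((string)Session["SelectedItem"]);
            }

            void BuildControls(string selectedItem)
            {
                placeHolder1.Controls.Clear();
                // Pretend these names and values came from the database for the selected item.
                string[] fields = { "Name", "Price" };
                foreach (string f in fields)
                {
                    placeHolder1.Controls.Add(new Label { ID = "lbl" + f, Text = f });
                    placeHolder1.Controls.Add(new TextBox { ID = "txt" + f });
                }
            }
        }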

    Read the article

  • Returning objects with dynamic memory

    - by Caulibrot
    I'm having trouble figuring out a way to return an object (declared locally within the function) which has dynamic memory attached to it. The problem is the destructor, which runs and deletes the dynamic memory when the object goes out of scope, i.e. when I return it and want to use the data in the memory that has been deleted! I'm doing this for an overloaded addition operator. I'm trying to do something like: MyObj operator+( const MyObj& x, const MyObj& y ) { MyObj z; // code to add x and y and store in dynamic memory of z return z; } My destructor is simply: MyObj::~MyObj() { delete [] ptr; } Any suggestions would be much appreciated!

    Read the article

  • Test disk recovery

    - by AIB
    I had a 250GB hard disk with several NTFS partitions. The disk was a dynamic disk (created in Windows). Now that I have formatted Windows (which was on another disk), the dynamic disk is shown as offline. I tried using the TestDisk tool to recover the data and created a partial backup. TestDisk is able to list all partitions on the disk. All partitions are shown as type 'D' (Deleted). I want to change the 'D' to 'P' (Primary), 'L' (Logical) or 'E' (Extended) as appropriate and build a new partition table. If I can write the partition table to disk, the disk will be of 'basic' type and should be readable in any OS. What should the appropriate partition types be? I checked the files on the partitions and no OS was found, so none of the partitions were bootable. Will randomly selecting P, L, E hurt the data in any way?

    Read the article

  • Which software raid modes does each version of Windows 7 support?

    - by Goyuix
    Being familiar with the software RAID modes and dynamic disks from the server versions, I was wondering if there is a document, or even just common crowd knowledge, that indicates what software RAID support is available for each version of Windows 7. Also: are all the various RAID levels supported for booting, or only as a data recovery mechanism (e.g. you can connect three RAID-5 dynamic disks to an already booted system)? I would prefer to stay away from modified/copied DLLs from the server variants. Please note: this is about Windows software RAID, not fake RAID from your BIOS or an add-on card.

    Read the article

  • Excel 2007 Pivot Tables: Overlapping issue hampers my summary sheet

    - by Mike
    I've created a workbook that has 5 pivot tables (PTs). I want to make a summary sheet that holds all these PTs, but when they expand, the 'not allowed to overlap' issue causes updating problems: they don't update/expand properly and therefore can't be printed off easily. The sheet would basically help my users give their bosses a quick, simple overview of the larger worksheet, so they would be more inclined to fill it in (a give-a-little-to-get-a-little philosophy). I had thought about using the Camera tool, but I'm not sure how you could make it dynamic, or whether it can be dynamic with a PT. Any advice, links or step-by-steps are greatly appreciated. Thanks, Mike.

    Read the article

  • virtualbox 2 vmware disk

    - by anol
    I have a VirtualBox disk I'd like to convert to a VMware disk. The disk is dynamic, which makes it a lot trickier. If I follow the instructions at http://xpapad.wordpress.com/2010/02/21/migrating-from-virtualbox-to-vmware-in-linux, the VDI-to-raw conversion will result in a 2 TB file. I don't even have that much disk space! The first step therefore seems to be a dynamic-to-static conversion of the VirtualBox disk, right? How do I do that, or is there perhaps a better way to convert to VMware? Help!

    Read the article

  • Hardware needed for 2000 users? [closed]

    - by Trcx
    I have a school assignment that is fairly well defined, requiring us to come up with a plan for an environment serving dynamic web applications to 2000 users, which should be able to scale up to six thousand. I have done plenty of research as far as load balancing, redundancy, UPSs, etc., but am having a hard time figuring out how much hardware is actually needed in the way of physical servers, RAM, processing power, etc. The assignment states that a lot of dynamic code, email, and a database are required, all utilizing the appropriate Microsoft service (MS SQL, Exchange, IIS). I already plan on splitting them out onto separate servers, but can't even fathom the hardware requirements of something that large-scale. Could someone with experience weigh in on this, or point me to some good articles?

    Read the article

  • Git for Websites / post-receive / Separation of Test and Production Sites

    - by Walt W
    Hi all, I'm using Git to manage my website's source code and deployment, and currently have the test and live sites running on the same box. Following this resource http://toroid.org/ams/git-website-howto originally, I came up with the following post-receive hook script to differentiate between pushes to my live site and pushes to my test site: while read ref do #echo "Ref updated:" #echo $ref -- would print something like example at top of file result=`echo $ref | gawk -F' ' '{ print $3 }'` if [ $result != "" ]; then echo "Branch found: " echo $result case $result in refs/heads/master ) git --work-tree=c:/temp/BLAH checkout -f master echo "Updated master" ;; refs/heads/testbranch ) git --work-tree=c:/temp/BLAH2 checkout -f testbranch echo "Updated testbranch" ;; * ) echo "No update known for $result" ;; esac fi done echo "Post-receive updates complete" However, I have doubts that this is actually safe :) I'm by no means a Git expert, but I am guessing that Git probably keeps track of the current checked-out branch head, and this approach probably has the potential to confuse it to no end. So a few questions: IS this safe? Would a better approach be to have my base repository be the test site repository (with corresponding working directory), and then have that repository push changes to a new live site repository, which has a corresponding working directory to the live site base? This would also allow me to move the production to a different server and keep the deployment chain intact. Is there something I'm missing? Is there a different, clean way to differentiate between test and production deployments when using Git for managing websites? As an additional note in light of Vi's answer, is there a good way to do this that would handle deletions without mucking with the file system much? Thank you, -Walt PS - The script I came up with for the multiple repos (and am using unless I hear better) is as follows: sitename=`basename \`pwd\`` while read ref do #echo "Ref updated:" #echo $ref -- would print something like example at top of file result=`echo $ref | gawk -F' ' '{ print $3 }'` if [ $result != "" ]; then echo "Branch found: " echo $result case $result in refs/heads/master ) git checkout -q -f master if [ $? -eq 0 ]; then echo "Test Site checked out properly" else echo "Failed to checkout test site!" fi ;; refs/heads/live-site ) git push -q ../Live/$sitename live-site:master if [ $? -eq 0 ]; then echo "Live Site received updates properly" else echo "Failed to push updates to Live Site" fi ;; * ) echo "No update known for $result" ;; esac fi done echo "Post-receive updates complete" And then the repo in ../Live/$sitename (these are "bare" repos with working trees added after init) has the basic post-receive: git checkout -f if [ $? -eq 0 ]; then echo "Live site `basename \`pwd\`` checked out successfully" else echo "Live site failed to checkout" fi

    Read the article

< Previous Page | 51 52 53 54 55 56 57 58 59 60 61 62  | Next Page >