Search Results

Search found 14900 results on 596 pages for 'git remote repository'.


  • VEMap and a GeoRSS feed (hosted separately)

    - by Alexis Abril
    The scenario is as follows: a WCF web service exists that outputs a valid GeoRSS feed. It lives in its own domain, as a number of different applications have access to it. A web page (on a different site) has been created with an instance of a VEMap (Bing/Virtual Earth map object). VEMap can accept an input feed in this format via the following:

        var layer = new VEShapeLayer();
        var veLayerSpec = new VEShapeSourceSpecification(VEDataType.GeoRSS, "someurl", layer);
        map.ImportShapeLayerData(veLayerSpec, onComplete, true);

    onComplete is a callback function I'm using to replace the default pin graphic with something custom. The question is in regard to "someurl", which is a path to a local XML file containing the geographic information (GeoRSS Simple format). I've realized this feed and the map must be hosted in the same domain, so I've created a generic handler that reads the remote feed and returns it in the same format:

        var veLayerSpec = new VEShapeSourceSpecification(VEDataType.GeoRSS, "/somelocalhandler.ashx", layer);

    When I do this, I get the VEMap error "z is null", the same error one receives when trying to access a remote feed. When I copy the feed into a local XML file (e.g. "feed.xml") there is no error. The order of operations is currently: remote feed -> local handler -> VEMap import. If I'm overcomplicating this procedure, let me know! I'm a bit new to the Bing Maps API and might have missed something. Any assistance is appreciated.
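    Since the handler is the piece that differs between the working and failing setups, a minimal sketch of one is shown below (the class name and feed URL are placeholders, and the diagnosis is a guess rather than a certainty): a frequent reason VEMap reports "z is null" against a proxied feed is the proxy answering with a text/html content type instead of XML.

        <%@ WebHandler Language="C#" Class="GeoRssProxy" %>

        using System.Net;
        using System.Web;

        // Hypothetical same-domain proxy: fetches the remote GeoRSS feed and
        // re-serves it so VEMap sees a local URL with an XML content type.
        public class GeoRssProxy : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                using (WebClient client = new WebClient())
                {
                    byte[] feed = client.DownloadData("http://feeds.example.com/georss"); // placeholder URL
                    context.Response.ContentType = "text/xml"; // VEMap rejects feeds served as text/html
                    context.Response.BinaryWrite(feed);
                }
            }

            public bool IsReusable { get { return false; } }
        }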


  • Homebrew path issue

    - by Shaun Stanislaus
    Master:~ shaunstanislaus$ ruby <(curl -fsSkL raw.github.com/mxcl/homebrew/go)
    ==> This script will install:
    /usr/local/bin/brew
    /usr/local/Library/...
    /usr/local/share/man/man1/brew.1
    Press enter to continue
    ==> Downloading and Installing Homebrew...
    remote: Counting objects: 82368, done.
    remote: Compressing objects: 100% (39323/39323), done.
    remote: Total 82368 (delta 56782), reused 65301 (delta 42220)
    Receiving objects: 100% (82368/82368), 11.68 MiB | 1.59 MiB/s, done.
    Resolving deltas: 100% (56782/56782), done.
    From https://github.com/mxcl/homebrew
     * [new branch]      master     -> origin/master
    HEAD is now at 2ea1a0e smpeg: depends on gtk
    ==> Installation successful!
    You should run `brew doctor' *before* you install anything.
    Now type: brew help

    Master:~ shaunstanislaus$ brew doctor
    -bash: /usr/local/bin/brew: /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby: bad interpreter: No such file or directory

    Master:~ shaunstanislaus$ echo $PATH
    /usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/opt/X11/bin:/Users/shaunstanislaus/Library/Application Support/GoodSync:/opt/local/bin:/opt/local/sbin:/usr/local/sbin:/Users/shaunstanislaus/.ec2/bin:/Users/shaunstanislaus/.rvm/bin

    /usr/local/bin/brew: /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby: bad interpreter: No such file or directory

    How do I fix this path issue? I can't use the brew command, and I think I previously symlinked it to the wrong location. Please advise. Thank you.
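    The error message suggests this is not a PATH problem at all: /usr/local/bin/brew is being found, but its shebang points at the Apple-supplied Ruby 1.8 framework path, which no longer exists on this machine. A sketch of one fix, under the assumption that a working Ruby lives at /usr/bin/ruby (check that first):

        # See which interpreter the brew launcher asks for
        head -1 /usr/local/bin/brew

        # Point the shebang at a Ruby that actually exists (BSD sed syntax on OS X)
        sudo sed -i '' '1s|.*|#!/usr/bin/ruby|' /usr/local/bin/brew
        brew doctor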


  • Update a tableView with a plist taken from another table

    - by Pheel
    Background: I have a tab bar application whose "heart" is a tableView. It loads data from a plist and, through a button that checks whether there are any updates to the remote plist file, updates the local plist with the remote contents. I also have another tableView, which should display only those plist items that have a bool value set to YES. Now I want to add a button to the second table that reloads the plist taken from the first table.

    Expected: when I update the local plist from the first table and then press the button on the second table, the second table should update and show the cells with that bool value set to YES. (Note: I set YES as the default for some items in the plist.)

    What happens: the first table updates its content from the remote. The second table shows the old items with the value set to YES. When I press the button to refresh the data, it reads the plist fine (by logging it, it has the same contents as the first table, only those set to YES), but it doesn't update the data even though I have [self.tableView reloadData];. When I close the app and open it again, the second table is filled with the right items. :\

    Code I'm using:

        // Reading the plist
        {
            NSArray *documentPaths = NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES);
            NSString *plistPath = [[documentPaths lastObject] stringByAppendingPathComponent:@"myPlist.plist"];
            NSFileManager *fMgr = [NSFileManager defaultManager];
            if (![fMgr fileExistsAtPath:plistPath]) {
                plistPath = [[NSBundle mainBundle] pathForResource:@"myPlist" ofType:@"plist"];
            }
            NSMutableArray *returnArr = [NSMutableArray arrayWithContentsOfFile:plistPath];
            NSPredicate *predicate = [NSPredicate predicateWithFormat:@"isFav == YES"];
            for (NSDictionary *sect in returnArr) {
                NSArray *arr = [sect objectForKey:@"Rows"];
                [sect setValue:[arr filteredArrayUsingPredicate:predicate] forKey:@"Rows"];
            }
            [self.tableView reloadData];
        }

        // Refresh data button
        - (void)refreshTable:(id)sender {
            NSLog(@"plist read");
            [self readPlist];
            NSLog(@"refreshed plist: %@", [self readPlist]);
            [self.tableView reloadData];
        }

    Does anyone know why the table is not updating?
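    One thing worth checking, offered as a guess since the table's data-source array isn't shown: the filtered sections above are built into a local returnArr and never assigned to the array that cellForRowAtIndexPath: actually reads, so reloadData would redraw from stale data until a relaunch rebuilds everything. A minimal sketch of that fix, with a hypothetical self.sections property backing the table:

        // Hypothetical: store the freshly filtered plist where the
        // data-source methods look, then reload.
        self.sections = returnArr;
        [self.tableView reloadData];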


  • Different Scala Actor Implementations Overview

    - by Stefan K.
    I'm trying to find the 'right' actor implementation for my thesis. I realized there are a bunch of them, and it's a bit confusing to pick one. Personally I'm especially interested in remote actors, but I guess a complete overview would be helpful to many others. This is a pretty general question, so feel free to answer just for the implementation you know about. I know about the following Scala Actor Implementations (SAIs); please add the missing ones.

    - Scala 2.7 (and what changed in Scala 2.8)
    - Akka (http://www.akkasource.org/)
    - Lift (http://liftweb.net/)
    - Scalaz (http://code.google.com/p/scalaz/)

    For each of these:

    - What are the target use cases (lightweight vs. "heavy" enterprise framework)?
    - Do they support remote actors? What shortcomings do remote actors have in the SAIs?
    - How is their performance?
    - How active is their community?
    - How easy are they to get started with? How good is the documentation?
    - How easy are they to extend?
    - How stable are they? Which projects are using them?
    - What are their shortcomings? What are their design principles?
    - Are they thread-based or event-based (receive/react) or both?
    - Do they support nested receives?
    - Do they support hotswapping the actor's message loop?
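    Since remote actors are the stated focus, a minimal sketch of the baseline the frameworks compete against may help the comparison: standard-library remote actors as of Scala 2.8 (host, port, and names are placeholders).

        import scala.actors.Actor._
        import scala.actors.remote.RemoteActor._
        import scala.actors.remote.Node

        // Server: binds an actor to a symbolic name on TCP port 9000.
        object EchoServer {
          def main(args: Array[String]) {
            actor {
              alive(9000)             // open the listening port
              register('echo, self)   // make this actor reachable as 'echo
              loop { react { case msg => println("echo got: " + msg) } }
            }
          }
        }

        // Client: looks the actor up by node + name and fires a message.
        object EchoClient {
          def main(args: Array[String]) {
            actor {
              val echo = select(Node("127.0.0.1", 9000), 'echo)
              echo ! "hello"
            }
          }
        }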


  • Exposed onsite vs IFD deployments for MS Dynamics CRM

    - by Greg McGuffey
    I'm working for the first time on a MS Dynamics CRM 4.0 project. Our company has a high number of remote employees and even more remote consultants, so it will be necessary to make the CRM solution available over the internet. As near as I can tell, I have three options:

    1. Have everyone use a VPN to access an intranet site (typical onsite deployment). However, we have found that VPNs are far from trouble-free and cause many support issues; we avoid them like the plague.
    2. Use IFD to expose the CRM on the internet. I don't know much about this except that the URL will be different from the onsite URL, which could cause some headaches (see below).
    3. Expose the CRM site by opening it to the internet, using SSL to encrypt traffic. We currently do this with our MS SharePoint sites. I'm not sure how secure this would be (one of the reasons for this question).

    I'd like to avoid using both the onsite intranet deployment and the IFD together, for two reasons. One of the requests for the solution is to use email to notify users that they've been assigned a task, and to include the URL to the task within the email; if both deployments are used, I'll need to include two URLs, and the user would need to know which to use. Which leads to the second reason: the main users of the solution split their time between being in the office and being remote, so they would need to access the solution two different ways and know when to use which. Bad.

    So, what are the advantages/disadvantages of any of these methods? Any other options? Is there any issue using IFD from within the intranet? Security issues? Thanks!


  • Java - Problem deploying a web application

    - by Yatendra Goel
    I have built a Java web application, packed it into a .war file, and tested it on my local Tomcat server, where it runs fine. But when I deployed it on my client's server, it showed an error. According to the remote server (my client's server), it cannot find a TLD file packed in a JAR file which I had placed in the WEB-INF/lib directory. But when I checked the WEB-INF/lib directory for the JAR file, I found that it was there. The contents of META-INF/MANIFEST.MF are as follows:

        Manifest-Version: 1.0
        Class-Path:

    I think there is no need to explicitly mention the classpath of the WEB-INF/lib directory, as it is on the classpath of any web application by default. Then why can't the server find the JAR file in the lib directory when I deploy to the remote server, and why does it work when I deploy the same application on my local server? I posted a question about this at http://stackoverflow.com/questions/2441254/struts-1-struts-taglib-jar-is-not-being-found-by-my-web-application but found that the problem is unusual, as nobody could answer it. So my questions are as follows:

    Q1. Does WEB-INF/lib still remain on the classpath if I leave the Class-Path entry blank, as shown above in the MANIFEST.MF file? Or should I delete the Class-Path entry completely from the file, or explicitly enter "Class-Path: /WEB-INF/lib"?

    Q2. I have JSP pages, servlets, and some helper classes in the web application. The JSP pages are located at the root; the servlets and helper classes are located in the WEB-INF/classes folder. Is there any problem with my helper classes being located in the WEB-INF/classes folder?

    Note: please note that this question is not the same as my previous question; it is a follow-up to it. Both servers (local and remote) are Tomcat servers.


  • Message driven bean not responding until client method is complete

    - by poijoi
    Hi, I have an MDB deployed on JBoss 4.2.2 and a client on the same server that produces messages and expects a reply from the MDB via a temporary queue created before the message is sent. When I run the client, I see that it creates the message, puts it in the queue, and waits for the reply (no problem so far), but when I check the logs I see that the timeout is reached and no response is received. When the timeout occurs and the client's method completes, the MDB starts processing the message that should have been processed the moment the client put it in the queue. As a consequence of this timing issue, when the MDB tries to reply to the temp queue, it fails, since the client is already gone.

    If I run the same client from a remote server, I have no problem: the MDB picks up the message from the queue right away, and the client receives its response right after the processing is complete.

    I'm using container-managed transactions, and I suspect it has something to do with that. I think the client's "send message/receive reply" might all be considered one transaction that has to commit before the message is actually put in the queue, but I'm not sure if this is correct. If it is, why did I not see the same behavior from the remote client? Is client-managed transaction the default setting, and is that what my remote server was using? Any idea how to fix this? Thanks in advance! PJ
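    If the send really is being held back by a container-managed transaction (the behavior described is consistent with that), one approach is to run the request/reply exchange outside the caller's transaction, so the request reaches the queue before the client blocks on the reply. A sketch, assuming an EJB3 session-bean client; all names are illustrative:

        import javax.ejb.Stateless;
        import javax.ejb.TransactionAttribute;
        import javax.ejb.TransactionAttributeType;
        import javax.jms.*;

        // Hypothetical client bean: NOT_SUPPORTED suspends any surrounding
        // transaction, so the request is delivered immediately instead of
        // only after the whole method commits.
        @Stateless
        public class RequestReplyClient {

            @TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
            public Message sendAndWait(ConnectionFactory cf, Queue requestQueue) throws JMSException {
                Connection con = cf.createConnection();
                try {
                    Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
                    TemporaryQueue replyQueue = session.createTemporaryQueue();
                    TextMessage request = session.createTextMessage("ping");
                    request.setJMSReplyTo(replyQueue);
                    session.createProducer(requestQueue).send(request); // visible to the MDB at once
                    con.start();
                    return session.createConsumer(replyQueue).receive(5000); // wait up to 5s for the reply
                } finally {
                    con.close();
                }
            }
        }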


  • Deploying a WAR to Tomcat using only a context descriptor

    - by DanglingElse
    I need to deploy a web application in WAR format to a remote Tomcat 6 server. The thing is that I don't want to do it the easy way, meaning not just copying the WAR file to /webapps. So the second choice is to create a unique "context descriptor" pointing to the WAR file. (I hope I got that right so far.) So I have a few questions:

    1. Is the WAR file allowed to be anywhere in the file system? That is, can I copy the WAR file anywhere in the remote file system, except /webapps or any other folder of the Tomcat 6 installation?
    2. Is there an easy way to test whether the deployment was successful, without using a browser? I'm reaching the remote server only via SSH and a terminal. (I'm thinking ping?)
    3. Is it normal that startup.sh/shutdown.sh don't exist? I'm not the admin of the server and don't know how Tomcat 6 is installed, but I'm sure that in my local Tomcat installations these files are in /bin and ready to use. I mean, you can still start/restart/stop Tomcat, just not with these standard scripts.

    Thanks a lot.
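    For reference, a sketch of what that setup could look like (paths and names are placeholders, not taken from the question): the descriptor lives under the engine/host configuration directory, and the WAR can sit anywhere the Tomcat user can read, as long as it is outside the host's appBase.

        <!-- /etc/tomcat6/Catalina/localhost/myapp.xml
             deploys /opt/deployments/myapp.war under the path /myapp -->
        <Context docBase="/opt/deployments/myapp.war" />

    As for checking over SSH: ping only proves the host is up, not the webapp. Something like curl -I http://localhost:8080/myapp/ (or wget --spider) run on the server itself will show whether the application answers with a 200.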


  • Real Time Sound Capturing in J2ME

    - by Abdul jalil
    I am capturing sound in J2ME and sending the bytes to a remote system, which then plays them back. Five seconds of voice are captured and sent to the remote system at a time, but I get the same sound repeated again. I am making a voice messenger; please help me find where I am going wrong. I am using the following code:

        String remoteTimeServerAddress = "192.168.137.179";
        sc = (SocketConnection) Connector.open("socket://" + remoteTimeServerAddress + ":13");
        p = Manager.createPlayer("capture://audio?encoding=pcm&rate=11025&bits=16&channels=1");
        p.realize();
        RecordControl rc = (RecordControl) p.getControl("RecordControl");
        ByteArrayOutputStream output = new ByteArrayOutputStream();
        OutputStream outstream = sc.openOutputStream();
        rc.setRecordStream(output);
        rc.startRecord();
        p.start();
        int size = output.size();
        int offset = 0;
        while (true) {
            Thread.currentThread().sleep(5000);
            rc.commit();
            output.flush();
            size = output.size();
            if (size > 0) { // appeared as "if(size0)" in the post; the comparison was lost in formatting
                recordedSoundArray = output.toByteArray();
                outstream.write(recordedSoundArray, 0, size);
            }
            output.reset();
            rc.reset();
            rc.setRecordStream(output);
            rc.startRecord();
        }


  • Searching Natural Language Sentence Structure

    - by Cerin
    What's the best way to store and search a database of natural-language sentence structure trees? Using OpenNLP's English Treebank parser, I can get fairly reliable sentence structure parses for arbitrary sentences. What I'd like to do is create a tool that can extract all the docstrings from my source code, generate these trees for all sentences in the docstrings, store the trees and their associated function names in a database, and then allow a user to search the database using natural-language queries.

    So, given the sentence "This uploads files to a remote machine." for the function upload_files(), I'd have the tree:

        (TOP (S (NP (DT This)) (VP (VBZ uploads) (NP (NNS files)) (PP (TO to) (NP (DT a) (JJ remote) (NN machine)))) (. .)))

    If someone entered the query "How can I upload files?", equating to the tree:

        (TOP (SBARQ (WHADVP (WRB How)) (SQ (MD can) (NP (PRP I)) (VP (VB upload) (NP (NNS files)))) (. ?)))

    how would I store and query these trees in a SQL database? I've written a simple proof-of-concept script that can perform this search using a mix of regular expressions and network graph parsing, but I'm not sure how I'd implement it in a scalable way. And yes, I realize my example would be trivial to retrieve using a simple keyword search. The idea I'm trying to test is how I might take advantage of grammatical structure, so I can weed out entries with similar keywords but a different sentence structure. For example, with the above query, I wouldn't want to retrieve the entry associated with the sentence "Checks a remote machine to find a user that uploads files.", which has similar keywords but obviously describes completely different behavior.
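    One way to make the structure queryable in plain SQL is to store each parse-tree node as a row (an adjacency list). A sketch with hypothetical table and column names:

        -- One row per tree node; structural queries become self-joins.
        CREATE TABLE parse_node (
            id        INTEGER PRIMARY KEY,
            tree_id   INTEGER NOT NULL,                    -- one tree per docstring sentence
            parent_id INTEGER REFERENCES parse_node(id),   -- NULL for the TOP node
            label     VARCHAR(16) NOT NULL,                -- 'VP', 'NP', 'VBZ', ...
            token     VARCHAR(64)                          -- the word, leaf nodes only
        );

        -- Example: trees whose VP directly contains a verb "upload"/"uploads".
        SELECT DISTINCT vp.tree_id
        FROM parse_node vp
        JOIN parse_node vb ON vb.parent_id = vp.id
        WHERE vp.label = 'VP'
          AND vb.label LIKE 'VB%'
          AND vb.token IN ('upload', 'uploads');

    The join above only matches direct children; matching arbitrarily nested structure needs recursive queries or a materialized-path column, and that choice is where the scalability question really lives. Lemmatizing tokens before storage (upload vs. uploads) would also be needed for the two example trees to meet.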


  • Joomla! migration

    - by tim
    I have a Joomla! installation on a remote site and I want to run it locally. I'm running Joomla! locally on a Mac through a standard MAMP installation:

        Joomla! 1.5.12
        PHP 5.2.6
        MySQL 5.0.41
        Apache 2.0.59
        OS X 10.5.8

    I've added a configuration file to the local Joomla! directory with all the correct local settings: database name, database user name, database password, etc. I've tried a lot of different settings. I've also recreated the remote database locally, ensuring everything copied correctly, and I followed a few different sets of instructions, all with roughly the same steps, on how to do the migration. None of the above has worked for me; at best I get bits of text from the site rendered in the browser, and other times I get SQL errors. What I want is for an already-set-up remote Joomla! installation to run on my own local machine. Does anyone have any advice on how to get this working? It'd be very much appreciated. Thanks.
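    For reference, a sketch of the Joomla! 1.5 configuration.php fields that most often break a copied site (all values are placeholders): stale absolute paths and a hard-coded live_site are classic causes of half-rendered pages, and a wrong MySQL host produces the SQL errors.

        <?php
        class JConfig {
            var $host = 'localhost';      // MAMP's MySQL often needs 'localhost:8889'
            var $user = 'root';
            var $password = 'root';
            var $db = 'joomla_local';
            var $dbprefix = 'jos_';
            var $live_site = '';          // leave empty locally; a remote URL here breaks links
            var $log_path = '/Applications/MAMP/htdocs/joomla/logs';  // must exist locally
            var $tmp_path = '/Applications/MAMP/htdocs/joomla/tmp';   // must exist locally
        }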


  • Access problems with IIS 7 and a WCF service

    - by Steve
    I have a Silverlight app that calls a WCF service; the service calls some stored procedures in a SQL database using Visual Studio 2008's LINQ to SQL classes and returns the information to whatever called it. I have set up the compiled project (website with embedded app and the WCF service) on a remote IIS 7 server. I recompiled my local copy to use the WCF service that is now hosted on the IIS box rather than the one on the local dev server that Visual Studio provides. If I use the local version of the website (hosted on the dev server, and using the remote WCF service), it is able to make the calls it needs and display the information. However, if I use the website being hosted by the remote IIS server, the app will not get the information it needs from the service.

    On the IIS server I have the application pool and the website running under my credentials, which have access to the database. Users connecting to the webpage use anonymous authentication. Any ideas as to why I can only access the service when running from the dev server, and not through the remotely hosted webpage, are appreciated. If anything needs clarification, please ask.
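    Worth checking first, offered as an assumption since the config isn't shown: the endpoint address baked into the Silverlight app's ServiceReferences.ClientConfig at compile time. If it still points at a host that only resolves from the dev machine, the remotely hosted page would fail exactly this way. One way to avoid baking the host in is to derive the address from wherever the XAP was served; a sketch with hypothetical names:

        using System;
        using System.ServiceModel;
        using System.Windows.Browser;

        public static class ServiceClientFactory
        {
            // MyServiceClient = the generated proxy class (assumed name);
            // "/MyService.svc" = hypothetical service path on the same host.
            public static MyServiceClient Create()
            {
                Uri pageUri = HtmlPage.Document.DocumentUri;       // where the page came from
                Uri serviceUri = new Uri(pageUri, "/MyService.svc");
                return new MyServiceClient(new BasicHttpBinding(), new EndpointAddress(serviceUri));
            }
        }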


  • shell script stopped working --- need to rewrite?

    - by OopsForgotMyOtherUserName
    The script below worked on my Mac (OS X). I'm now using Ubuntu, and the script is no longer working. I'm wondering if there's something I need to change here. I did change the first line from #!/bin/bash to #!/bin/sh, but it's still throwing an error. Essentially, I get this when I try to run it:

        Syntax error: end of file unexpected (expecting ")")

    The script:

        #!/bin/sh

        REMOTE='ftp.example.com'
        USER='USERNAME'
        PASSWORD='PASSWORD'
        CMDFILE='/jtmp/rc.ftp'
        FTPLOG='/jtmp/ftplog'
        PATTERN='SampFile*'

        date > $FTPLOG
        rm $CMDFILE 2>/dev/null

        LISTING=$(ftp -in $REMOTE <<EOF
        user $USER $PASSWORD
        cd download
        ls $PATTERN
        quit
        EOF
        )

        echo "open $REMOTE" >> $CMDFILE
        echo "user $USER $PASSWORD" >> $CMDFILE
        echo "verbose" >> $CMDFILE
        echo "bin" >> $CMDFILE
        echo "cd download" >> $CMDFILE
        for FILE in $LISTING
        do
          echo "get $FILE" >> $CMDFILE
        done
        echo "quit" >> $CMDFILE

        ftp -in < $CMDFILE >> $FTPLOG 2>&1
        rm $CMDFILE
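    A likely explanation, offered as a strong hunch rather than a certainty: on Ubuntu, /bin/sh is dash, and dash chokes on this script's heredoc-inside-$(...) construct, reporting exactly this 'expecting ")"' error; bash parses it fine. Since bash exists on Ubuntu too, the smallest fix is to switch the shebang back rather than rewrite anything:

        #!/bin/bash
        # rest of the script unchanged; bash parses
        #   LISTING=$(ftp -in $REMOTE <<EOF ... EOF)
        # correctly, whereas dash (Ubuntu's /bin/sh) does not.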


  • Syncing objects between two devices with different system times

    - by Mike Weller
    Hi there. I'm syncing objects between two devices. Objects have a lastModified property. If both devices have modified an object, then during the next sync the version of the object with the most recent lastModified is chosen on both devices. So we don't do fine-grained merging, only "most recent version" merging.

    The problem is this: when one device receives a list of changed objects, it can't reliably compare the lastModified of received objects to its own, because the system times on the two devices may be different. I considered having each device send its current date/time during the sync; each device then calculates the difference between the remote time and the local time to compare the dates properly. But if there is lag between sending a date and the remote device receiving it, this causes incorrect comparisons for objects that were modified at the same time (or very close together in time), i.e. both devices think the remote object is newer, and they end up with different objects.

    I hope I have explained this clearly enough. There must be a common solution to this kind of problem, but my brain isn't coming up with anything. Any suggestions? Thanks in advance...
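    One standard remedy, sketched here under the assumption that the sync handshake can carry timestamps both ways, is NTP-style offset estimation: instead of ignoring the lag, measure it and cancel it out. With t0 = local send time, t1 = remote receive time, t2 = remote send time, and t3 = local receive time (each by its own clock), the offset estimate is accurate to about half the round-trip time:

        // Hypothetical helper; all four timestamps are in milliseconds.
        // t0: local send, t1: remote receive, t2: remote send, t3: local receive.
        static long clockOffset(long t0, long t1, long t2, long t3) {
            // How far the remote clock runs ahead of the local one,
            // with the one-way network delay cancelled out on average.
            return ((t1 - t0) + (t2 - t3)) / 2;
        }

        // Bring a remote lastModified into local clock terms before comparing.
        static long normalize(long remoteLastModified, long offset) {
            return remoteLastModified - offset;
        }

    Even with the offset cancelled, modifications inside the residual error band can still tie; a deterministic tie-breaker (for example, comparing device IDs when the normalized times are within the error margin) keeps both sides choosing the same winner.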


  • Add comment to subversion commit automatically

    - by Paul Alexander
    I've already got my Subversion repository set up to require comments of a minimum length to accept a commit. Now I'd like to start tagging those comments with information from our bug-tracking system when they are committed. I've already got the scripts set up to pull data from the bug tracker, and I just need a way to get that info into the Subversion commit comments. How can I append to the existing comment in Subversion automatically? For reference, the Subversion repository is hosted on a Linux server with Ubuntu 9-something installed, and I have complete root access to the machine.
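    A sketch of one way to do the appending with stock Subversion tooling, given root access on the repository host (paths and the bug-tracker call are placeholders): a post-commit hook that rewrites the log message, which Subversion stores as the svn:log revision property.

        #!/bin/sh
        # post-commit hook: Subversion passes the repository path and revision.
        REPOS="$1"
        REV="$2"

        MSG=$(mktemp)
        {
          svnlook log -r "$REV" "$REPOS"          # the comment the developer wrote
          echo
          /usr/local/bin/bugtracker-note "$REV"   # placeholder for your lookup script
        } > "$MSG"

        # Rewrite the revision's log message with the tracker info appended.
        svnadmin setlog "$REPOS" -r "$REV" --bypass-hooks "$MSG"
        rm -f "$MSG"

    If the rewrite were instead done from a client with svn propset --revprop -r REV svn:log, the repository's pre-revprop-change hook would first have to exist and exit 0, since revision-property changes are refused by default.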


  • TFS and shared projects in multiple solutions

    - by David Stratton
    Our .NET team works on projects for our company that fall into distinct categories. Some are internal web apps, some are external (publicly facing) web apps; we also have internal Windows applications for our corporate office users and Windows Forms apps for our retail locations (stores). Of course, because we hate code reuse, we have a ton of code that is shared among the different applications. Currently we're using SVN as our source control, and we've got our repository laid out like this (- = folder, | = Visual Studio solution):

        - SVN
          - Internet
            | Ourcompany.com
            | Oursecondcompany.com
          - Intranet
            | UniformOrdering website
            | MessageCenter website
          - Shared
            | ErrorLoggingModule
            | RegularExpressionGenerator
            | Anti-Xss
            | OrgChartModule
            etc...

    So the OurCompany.com solution in the Internet folder would have a website project, and it would also include the ErrorLoggingModule, RegularExpressionGenerator, and Anti-Xss projects from the Shared directory. Similarly, our UniformOrdering website solution would include each of these projects as well. We prefer a project reference to a .dll reference because, first of all, if we need to add or fix a function in the ErrorLoggingModule while working on the OurCompany.com website, it's right there. It also allows us to build each solution and see whether changes to shared code break any other applications. This should work well on a build server too, if I'm correct.

    In SVN, there is no problem with this. SVN and Visual Studio aren't tied together the way TFS's source control is. We never figured out how to make this structure work in TFS when we were using it, because in TFS the TFS project was always tied to a Visual Studio solution. The source code repository was a child of the TFS project, so if we wanted to do this, we had to duplicate the shared code in each TFS project's source code repository. As my co-worker put it, this "breaks every known best practice about code reuse and simplicity". It was enough of a deal-breaker for us that we switched to SVN.

    Now, however, we're faced with truly fixing our development processes, and the Application Lifecycle Management of TFS is pretty close to exactly what we want, and how we want to work. Our one sticking point is the shared-code issue. We're evaluating other commercial and open-source solutions, but since we're already paying for TFS with our MSDN subscriptions, and TFS is pretty much exactly what we want, we'd REALLY like to find a way around this issue. Has anybody else faced this and come up with a solution? If you've seen an article or posting on this that you can share with me, that would help as well. As always, I'm open to answers like "You're looking at it all wrong, bonehead, HERE'S the way it SHOULD be done."


  • IntelliJ Community can't use HTTP proxy for Maven

    - by MikeHoss
    I have IntelliJ IDEA Community installed on a Linux box that needs to use an authenticated proxy to get to the internet. There is a system-wide proxy on the box that works, and I have the proxy configured in ~/.m2/settings.xml; Maven correctly uses the proxy when I run it from the command line. I have the same proxy configured within IntelliJ, and it gives me the plugins listing correctly. But when I try to sync with the Maven repository within IntelliJ, I keep getting this:

        [WARNING] Unable to get resource 'org.codehaus.mojo:hibernate3-maven-plugin:pom:2.2' from repository restlet (http://maven.restlet.org): Authorization failed: Not authorized by proxy.

    I went to Settings - Maven and put in the proxy info as properties, and that didn't work. I can tell by looking at those settings that IntelliJ is reading my ~/.m2/settings.xml fine, because it knows where my local repo is (it's in a non-standard place). Does anyone know how I can get this working?
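    For reference, this is the shape of the proxy block Maven itself honors (hosts and credentials are placeholders); the IDE-level HTTP proxy in IntelliJ's settings does not flow into the bundled Maven, so the bundled Maven has to be pointed at the same settings.xml (Settings - Maven - "User settings file") for the sync to authenticate:

        <settings>
          <proxies>
            <proxy>
              <id>corp-proxy</id>
              <active>true</active>
              <protocol>http</protocol>
              <host>proxy.example.com</host>
              <port>8080</port>
              <username>proxyuser</username>
              <password>somepassword</password>
              <nonProxyHosts>localhost|*.internal.example.com</nonProxyHosts>
            </proxy>
          </proxies>
        </settings>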


  • How can you inject an ASP.NET (MVC2) custom membership provider using Ninject?

    - by AlDev
    OK, so I've been working on this for hours. I've found a couple of posts here, but nothing that actually resolves the problem. So, let me try again... I have an MVC2 app using Ninject and a custom membership provider. If I try to inject the provider's dependencies through the constructor, I get the error "No parameterless constructor defined for this object."

        public class MyMembershipProvider : MembershipProvider
        {
            IMyRepository _repository;

            public MyMembershipProvider(IMyRepository repository)
            {
                _repository = repository;
            }
        }

    I've also been playing around with factories and Initialize(), but everything is coming up blank. Any thoughts/examples?
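    The "no parameterless constructor" error comes from ASP.NET itself: providers declared in web.config are instantiated by the framework via Activator, not by Ninject, so constructor injection cannot work there. A common workaround is a parameterless constructor plus property injection requested from the kernel; a sketch, assuming the application's kernel is reachable through a static bootstrapper (the bootstrapper name is hypothetical):

        using Ninject;
        using System.Web.Security;

        public class MyMembershipProvider : MembershipProvider
        {
            [Inject]
            public IMyRepository Repository { get; set; }

            public MyMembershipProvider()
            {
                // ASP.NET created this instance via Activator; ask the kernel
                // to fill in the [Inject]-marked properties after the fact.
                NinjectBootstrapper.Kernel.Inject(this);
            }

            // ... MembershipProvider overrides using Repository ...
        }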


  • Mercurial client error 255 and HTTP error 404 when attempting to push large files to server

    - by coderunner
    Problem: when attempting to push a changeset that contains 6 large files (.exe, .dmg, etc.) to my remote server, my client (MacHG) reports the error:

        Error During Push. Mercurial reported error number 255: abort: HTTP Error 404: Not Found

    What does the error even mean?! The only thing unique (that I can tell) about this commit is the size, type, and filenames of the files. How can I determine which exact file within the changeset is failing? How can I delete the corrupt changeset from the repository? Someone reported using the "mq" extension, but it looks overly complicated for what I'm trying to achieve.

    Background: I can push and pull the following to and from the server, using both MacHG and TortoiseHG: source files, directories, .class files, and a .jar file. I successfully committed to my local repository, for the first time, the addition of the 6 large .exe, .dmg, etc. installer files (about 130 MB total). In the following commit to my local repository, I removed ("untracked"/forget) the 6 files causing the problem; however, the previous (failing) changeset is still queued to be pushed to the server (i.e. my local host is trying to push the "add" and then the "remove" to the remote server, in keeping with the "keep everything in history" philosophy of the source control system). I can commit .txt, .java, and similar files using TortoiseHG from Windows PCs. I haven't actually tested committing or pushing the same large files using TortoiseHG. Please help!

    Setup:
    - Client applications: MacHG v0.9.7 (SCM 1.5.4) and TortoiseHG v1.0.4 (SCM 1.5.4)
    - Server: HTTPS, IIS 7.5, Mercurial 1.5.4, Python 2.6.5, set up using these instructions: http://www.jeremyskinner.co.uk/mercurial-on-iis7/
    - In IIS 7.5 the CGI handler is configured to handle ALL verbs (not just GET, POST, and HEAD).

    My hgweb.cgi file on the server is as follows:

        #!/usr/bin/env python
        #
        # An example hgweb CGI script, edit as necessary

        # Path to repo or hgweb config to serve (see 'hg help hgweb')
        #config = "/path/to/repo/or/config"

        # Uncomment and adjust if Mercurial is not installed system-wide:
        #import sys; sys.path.insert(0, "/path/to/python/lib")

        # Uncomment to send python tracebacks to the browser if an error occurs:
        #import cgitb; cgitb.enable()

        from mercurial import demandimport; demandimport.enable()
        from mercurial.hgweb import hgweb, wsgicgi
        application = hgweb('C:\inetpub\wwwroot\hg\hgweb.config')
        wsgicgi.launch(application)

    My hgweb.config file on the server is as follows:

        [collections]
        C:\Mercurial Repositories = C:\Mercurial Repositories

        [web]
        baseurl = /hg
        allow_push = usernamea
        allow_push = usernameb
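    Given that small pushes succeed and only the roughly 130 MB changeset fails, one plausible cause (an assumption worth testing, not a diagnosis) is IIS request filtering: IIS 7.5 rejects request bodies larger than maxAllowedContentLength, which defaults to about 30 MB, and it reports the rejection as a 404 (sub-status 404.13), which the Mercurial client surfaces as "HTTP Error 404". Raising the limit in the site's web.config would look roughly like this:

        <configuration>
          <system.webServer>
            <security>
              <requestFiltering>
                <!-- bytes; the default is 30000000 (~30 MB) -->
                <requestLimits maxAllowedContentLength="524288000" />
              </requestFiltering>
            </security>
          </system.webServer>
        </configuration>

    The IIS log's sub-status code would confirm or rule this out before changing anything.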


  • Some help needed with setting up the PERFECT workflow for web development with 2-3 guys using Subversion

    - by Roeland
    Hey guys! I run a small web development company along with my brother and a friend. After doing extensive research, I have decided on using Subversion for version control. Here is how I currently plan on running typical development. Keep in mind there are three of us, each in a separate location.

    I set up an account with Springloops (springloops.com) Subversion hosting. Each time I work on a new project, I create a repository for it. So let's say in this case I am working on site1. I want to have three versions of the site on the internet:

    1. Web development - the server my fellow developers and I publish to (site1.dev.bythepixel.com)
    2. Client preview - the server we update every few days with a good revision for the client to see (site1.bythepixel.com)
    3. Live site - the site I publish to when going live (site1.com)

    Each web development machine (at each location) has a local copy of XAMPP running virtual hosts to allow multiple websites to be worked on. The root of each local copy is set up to be the same as the local copy of the Subversion repository, so we can make small tweaks and preview them immediately. When some work has been done, a commit is made to the repository for the site. I will have the dev site pushed automatically (it's an option in Springloops). Then, whenever I feel ready to push to the client site, I will do so. Now, I have a few concerns with this workflow (see the sketch after this list for the first one):

    1. I am currently using CodeIgniter, and in the config file I generally set the root of the site, e.g. http://www.site1.com. So it looks like each time I publish to one of the internet servers, I will have to modify the config file? Is there any way to designate certain files per server, so that when I hit publish to client preview, it just uploads the config file for the client preview server?
    2. I don't want the live site, the client preview site, and the dev site to share the same MySQL server, for a variety of reasons. So does this once again mean that I have to adjust the DB server info each time I push to a different site?

    Does this workflow make sense? If you have any suggestions, please let me know. I plan for this to be the workflow I use for the next few years. I just need to put a system in place that allows for future expansion! Thanks a bunch!!
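    One common pattern avoids swapping config files per server entirely: branch on the host name inside the config, commit it once, and every checkout picks up the right values. A sketch for CodeIgniter's config.php (the host names below are taken from the plan above; the structure is an illustration, not the only way):

        // application/config/config.php
        switch ($_SERVER['HTTP_HOST']) {
            case 'site1.dev.bythepixel.com':      // dev server
                $config['base_url'] = 'http://site1.dev.bythepixel.com/';
                break;
            case 'site1.bythepixel.com':          // client preview
                $config['base_url'] = 'http://site1.bythepixel.com/';
                break;
            default:                              // live
                $config['base_url'] = 'http://www.site1.com/';
        }

    The same switch works in application/config/database.php for the credentials, which answers the second concern: dev, preview, and live can each point at their own MySQL server from a single committed file.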


  • Maven Assembly Plugin - install the created assembly

    - by Walter White
    I have a project that simply consists of files. I want to package those files into a zip and store it in a Maven repository. I have the assembly plugin configured to build the zip file, and that part works just fine, but I cannot seem to figure out how to install the zip file. Also, if I want to use this assembly in another artifact, how would I do that? I intend to call dependency:unpack, but I don't have an artifact in the repository to unpack. How can I get a zip file into my repository so that I may reuse it in another artifact?

    Parent POM:

        <build>
          <plugins>
            <plugin>
              <!--<groupId>org.apache.maven.plugins</groupId>-->
              <artifactId>maven-assembly-plugin</artifactId>
              <version>2.2-beta-5</version>
              <configuration>
                <filters>
                  <filter></filter>
                </filters>
                <descriptors>
                  <descriptor>../packaging.xml</descriptor>
                </descriptors>
              </configuration>
            </plugin>
          </plugins>
        </build>

    Child POM:

        <parent>
          <groupId>com. ... .virtualHost</groupId>
          <artifactId>pom</artifactId>
          <version>0.0.1</version>
          <relativePath>../pom.xml</relativePath>
        </parent>
        <name>Virtual Host - ***</name>
        <groupId>com. ... .virtualHost</groupId>
        <artifactId>***</artifactId>
        <version>0.0.1</version>
        <packaging>pom</packaging>

    I filtered the names out. Is this POM correct? I just want to bundle files for a particular virtual host together. Thanks, Walter
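    A sketch of the wiring that appears to be missing (execution id is arbitrary; plugin version as in the parent POM above): binding the assembly plugin's single goal to the package phase makes the zip part of the build's artifacts, so mvn install and mvn deploy put it in the repository automatically.

        <plugin>
          <artifactId>maven-assembly-plugin</artifactId>
          <version>2.2-beta-5</version>
          <configuration>
            <descriptors>
              <descriptor>../packaging.xml</descriptor>
            </descriptors>
          </configuration>
          <executions>
            <execution>
              <id>make-zip</id>
              <phase>package</phase>
              <goals>
                <goal>single</goal>  <!-- builds and attaches the zip -->
              </goals>
            </execution>
          </executions>
        </plugin>

    Once installed, another artifact can reference the zip as a dependency with <type>zip</type> and hand it to dependency:unpack (or dependency:unpack-dependencies), which was the intended consumption path above.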


  • m2eclipse workspace resolution

    - by Bartosz Radaczynski
    Hi all, I am using m2eclipse for managing Maven projects in Eclipse. It seems that in the previous release I was using (0.9.8), workspace resolution did not work at all, and right now it also does not work quite as I would expect. Namely, when the "resolve dependencies from workspace" setting for a project is not checked, the project turns red and cannot be built. The message says: artifact xxx x.y-SNAPSHOT cannot be found in the local repository (or something to that effect). The trouble is that m2eclipse is putting information about workspace projects into my local repo. Is there a way to change this behaviour?

    P.S. The workaround for this is to close the xxx project; m2eclipse then resolves the dependency to whatever version I had previously in the local repository (i.e. the non-snapshot version).


  • Handling Model Inheritance in ASP.NET MVC2 & NHibernate

    - by enth
    I've gotten myself stuck on how to handle inheritance in my model when it comes to my controllers/views.

    Basic model:

        public class Procedure : Entity
        {
            public Procedure() { }
            public int Id { get; set; }
            public DateTime ProcedureDate { get; set; }
            public ProcedureType Type { get; set; }
        }

        public class ProcedureA : Procedure
        {
            public double VariableA { get; set; }
            public int VariableB { get; set; }
            public int Total { get; set; }
        }

        public class ProcedureB : Procedure
        {
            public int Score { get; set; }
        }

        // ... many different procedures, eventually

    So, I do things like list all the procedures:

        public class ProcedureController : Controller
        {
            public virtual ActionResult List()
            {
                IEnumerable<Procedure> procedures = _repository.GetAll();
                return View(procedures);
            }
        }

    But now I'm stuck. Basically, from the list page I need to link to pages where the specific subclass details can be viewed/edited, and I'm not sure what the best strategy is. I thought I could add an action on the ProcedureController that would conjure up the right subclass by dynamically figuring out which repository to use and loading the subclass to pass to the view. I had to store the class name in the ProcedureType object, and I had to create/implement a non-generic IRepository, since I can't cast dynamically to a generic one:

        public virtual ActionResult Details(int procedureID)
        {
            Procedure stub = _repository.GetById(procedureID, false);
            string className = stub.Type.Class;
            Type type = Type.GetType(className, true);
            Type repositoryType = typeof(IRepository<>).MakeGenericType(type);
            var repository = (IRepository)DependencyRegistrar.Resolve(repositoryType);
            Entity procedure = repository.GetById(procedureID, false);
            return View(procedure);
        }

    I haven't even started sorting out how the view will determine which partial to load to display the subclass details. I'm wondering if this is a good approach: it makes determining the URL easy, and it makes reusing the Procedure display code easy. Another approach is specific controllers for each subclass. That simplifies the controller code, but it also means many simple controllers for the many procedure subclasses; the shared Procedure details could be worked out with a partial view. But then how do I construct the URL to get to the controller/action in the first place? Time to stop thinking about it alone. Hopefully someone can show me the light. Thanks in advance.


  • Subversion lock-modify-unlock solution for SSIS .dtsx

    - by EasyDot
    Hello! I wonder how I could set up a developer environment for SSIS .dtsx packages in Subversion. I read about the Subversion "svn:needs-lock" property and the ability to set auto-props by putting "enable-auto-props = yes" in the Subversion config file. The "svn:needs-lock" property is necessary when working with SSIS .dtsx files, since they must be handled like binary files and locked to avoid merge conflicts. How should I configure the Subversion config file for this kind of development? An example of setting svn:needs-lock automatically on .doc files (I think this works):

        [miscellany]
        enable-auto-props = yes

        [auto-props]
        *.doc = svn:mime-type=application/msword;svn:needs-lock=*
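    Applying the same pattern to the packages themselves could look like the following sketch (the mime type is one reasonable choice, not the only one; anything outside text/* makes Subversion treat the file as non-mergeable):

        [miscellany]
        enable-auto-props = yes

        [auto-props]
        *.dtsx = svn:mime-type=application/xml;svn:needs-lock=*

    Two caveats: auto-props live in each developer's client config, so every team member needs the entry, and they only fire on svn add, so .dtsx files already under version control need the property set once by hand, e.g. svn propset svn:needs-lock '*' followed by the file names.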


  • Best Practices for Source Control Dependencies

    - by VirtuosiMedia
    How do you handle the source control setup of a project that depends on a separate framework or library? For example, Project A uses Framework B. Should Project A also include Framework B's code in its repository? Is there a way for it to be included automatically from a different repository, or would I have to update it manually? What general approaches are usually taken for this scenario? Assume that I control the repositories for both Project A and Framework B, and that the source code for both is not compiled. Any resources or suggestions would be greatly appreciated. I'm currently using Subversion (on a very basic level), but I would like to switch to Mercurial so that I can try out Kiln with FogBugz.

    Edit: in Mercurial, would you use parent repositories for this function?
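    Since Mercurial is on the table: its native answer is subrepositories, which pin Framework B at a specific changeset inside Project A. A sketch (the URL is hypothetical): create a .hgsub file at the root of Project A, mapping a working-directory path to the framework's repository, and commit it.

        # .hgsub -- path in Project A's working copy = source repository
        frameworkB = https://hg.example.com/frameworkB

    On commit, Mercurial records in .hgsubstate exactly which Framework B changeset each Project A revision was built against, so clones and updates bring in the matching framework code automatically. The rough Subversion analogue is the svn:externals property on a directory.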

