Search Results

Search found 1031 results on 42 pages for 'rewriting'.

Page 27/42

  • JQuery / JSON + .Net Service Layer - to WCF or Not to WCF?

    - by hanzolo
    I recently had a discussion with a colleague of mine about the pros and cons of WCF. He mentioned how much code is generated to support WCF, and also the overhead it requires. He suggested that a simple jQuery/Ajax post to an .aspx page (or a handler, for that matter) that returns JSON would work more efficiently and take much less code to implement. I am also aware of the new WCF Web API and feel that technology may solve the "bloated"-ness required in attaining a proxy etc. by just outputting JSON.

    So, when developing against a relational DB (MSSQL) storage model, with a fairly complex business layer (C#) and data access layer (Entity Framework), what's a good technology for creating a "service layer" that will spit out view models represented as JSON, with a CQRS (Command Query Responsibility Segregation) approach in mind? The app would use the service layer to support its required UI, as well as provide an available subset of services (outputting JSON data) for service subscribers. In other words: an admin panel to support the admin UI, and service endpoints that return JSON to access the configurations made from the administration UI.

    What are some potential technologies to use as the transport/communication layer? I'd like to use a pure RESTful approach, but am not against doing some URL rewriting with IIS. Obviously some of the available technologies are:

    - WCF
    - WCF Web API (should this even be separate?)
    - Straight request/response (query string to an .aspx page / handler)

    Would using ASP.NET MVC solve this entire problem? Maybe its single-page app approach? Any suggestions / feedback from developing this type of application? Thanks,

    Read the article

  • Apache mod_proxy_html Substitute: how to re-use part of regex match? (regex variables?)

    - by goober
    Hi all, I have a unique URL-rewriting situation in Apache. I need to be able to take a URL in the proxied content that starts with "\u002f[X]" or '\u002f[X]' (where X is the rest of some URL) and substitute the text "\u002fmeis2\u002f[X]". I'm not sure how the regex works in Apache -- I think it's the same as Perl 5? -- but even then I'm a little unsure how this would be done. My hunch is that it has to do with regex grouping and then using $1 to pull the variable out, but I'm entirely unfamiliar with this process in Apache. Hoping someone can help -- thanks!
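
    Apache's regexes are PCRE, so Perl-style grouping does apply. If the Substitute directive (mod_substitute) is what's in play here, captured groups are referenced as $1, $2 and so on in the replacement. A rough sketch, assuming mod_substitute is loaded and the literal text \u002f appears in the proxied HTML -- the backslash escaping in particular may need adjustment for your config:

        AddOutputFilterByType SUBSTITUTE text/html
        # Capture the opening quote, then re-insert it ahead of the
        # injected \u002fmeis2 prefix:
        Substitute "s|([\"'])\\u002f|$1\\u002fmeis2\\u002f|"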

    Read the article

  • Inheriting projects - General Rules? [closed]

    - by pspahn
    Possible duplicates:

    - When is a BIG Rewrite the answer?
    - Software rewriting alternatives
    - Are there any actual case studies on rewrites of software success/failure rates?
    - When should you rewrite?
    - We're not a software company. Is a complete re-write still a bad idea?
    - Have you ever been involved in a BIG Rewrite?

    This is an area of discussion I have long been curious about, but overall, I generally lack the experience to give myself an answer that I would fully trust. We've all been there: a new client shows up with a half-complete project they are looking to finish and launch. For whatever reason, they fired their previous developer, and it's now up to you to save the day.

    I am just finishing up a code review for a new client, and in my estimation it would be better to scrap what the previous developers built and start from scratch. There are a ton of reasons why I am leaning this way, but it still makes me nervous, since the client isn't going to want to hear "those last guys built you a big turd, and I can either polish it, or throw it in the trash".

    What are your general rules for accepting these projects? How do you determine whether it will be better to start from scratch or continue with the existing code base? What other extra steps might you take to help control client expectations, since the previous developer may have inflated those expectations beyond a reasonable level? Any other general advice?

    Read the article

  • unit/integration testing web service proxy client

    - by cori
    I'm rewriting a PHP client/proxy library that provides an interface to a SOAP-based .Net web service, and in the process I want to add some unit and integration tests so future modifications are less risky. The work the library performs is to marshal calls to the web service and do a little reorganizing of the responses, to present a slightly more object-oriented interface to the underlying service. Since this library is little else than a thin layer on top of web service calls, my basic assumption is that I'll really be writing integration tests more than unit tests -- for example, I don't see any reason to mock away the web service. The work performed by the code I'm working on is very light; it's almost passing the response from the service right back to its consumer.

    Most of the calls are basic CRUD operations: CreateRole(), CreateUser(), DeleteUser(), FindUser(), etc. I'll be starting from a known database state -- the system I'm using for these tests is isolated for testing purposes, so the results will be more or less predictable.

    My question is this: is it natural to use web service calls to confirm the results of operations within the tests, and to reset the state of the application within the scope of each test? Here's an example. One test might be createUserReturnsValidUserId() and might go like this:

        public function createUserReturnsValidUserId()
        {
            // we're assuming a global connection to the service
            $newUserId = $client->CreateUser("user1");
            assertNotNull($newUserId);
            assertNotNull($client->FindUser($newUserId));
            $client->deleteUser($newUserId);
        }

    So I'm creating a user, making sure I get an ID back and that it represents a user in the system, and then cleaning up after myself (so that later tests don't rely on the success or failure of this test with respect to, say, the number of users in the system). However, this still seems pretty fragile -- lots of dependencies and opportunities for tests to fail and affect the results of later tests, which I definitely want to avoid. Am I missing some options or ways to decouple these tests from the system under test, or is this really the best I can do? I think this is a fairly general unit/integration testing question, but if it matters I'm using PHPUnit for the testing framework.
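
    One way to make the cleanup less fragile is to move it out of the test body and into PHPUnit's fixture hooks, so created records are removed even when an assertion fails midway through. A minimal sketch, assuming a client exposing the CreateUser/FindUser/DeleteUser methods above (the client constructor is hypothetical):

        class UserServiceIntegrationTest extends PHPUnit_Framework_TestCase
        {
            protected $client;
            protected $createdUserIds = array();

            protected function setUp()
            {
                $this->client = new SoapServiceClient(); // hypothetical constructor
            }

            protected function tearDown()
            {
                // Runs even if the test failed, so no user leaks into later tests.
                foreach ($this->createdUserIds as $id) {
                    $this->client->DeleteUser($id);
                }
                $this->createdUserIds = array();
            }

            public function testCreateUserReturnsValidUserId()
            {
                $newUserId = $this->client->CreateUser("user1");
                $this->createdUserIds[] = $newUserId;

                $this->assertNotNull($newUserId);
                $this->assertNotNull($this->client->FindUser($newUserId));
            }
        }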

    Read the article

  • htaccess rewrite and auth conflict

    - by Michael
    I have two directories, each with a .htaccess file.

    html/.htaccess has a rewrite to send almost everything to url.php:

        RewriteCond %{REQUEST_URI} !(exported/?|\.(php|gif|jpe?g|png|css|js|pdf|doc|xml|ico))$
        RewriteRule (.*)$ /url.php [L]

    and html/exported/.htaccess:

        AuthType Basic
        AuthName "exported"
        AuthUserFile "/home/siteuser/.htpasswd"
        require valid-user

    If I remove html/exported/.htaccess, the rewriting works fine and the exported directory can be accessed. If I remove html/.htaccess, the authentication works fine. However, when I have both .htaccess files, exported/ is being rewritten to /url.php. Any ideas how I can prevent that?
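
    A common suggestion for this kind of conflict -- a sketch, untested here -- is to short-circuit the rewrite for the protected directory before the catch-all rule, and to make sure the 401 response generated by the auth challenge is not itself routed into the rewrite:

        # In html/.htaccess, before the catch-all rule:
        RewriteRule ^exported(/|$) - [L]

        # Serve Apache's built-in 401 page so the auth challenge's
        # error document is not rewritten to /url.php:
        ErrorDocument 401 default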

    Read the article

  • Suddenly the internet is not accessible

    - by user189708
    I am going crazy here. One day everything was working fine. I turned the PC off and went to sleep. The next day I turned the PC on and could not access the internet (from any browser). The situation: I cannot open any web page from a browser (tried Firefox and Epiphany) and cannot receive emails in Thunderbird. BUT if I run Firefox from the console as sudo, I can use it as usual. I can access Skype and pretty much any other network stuff (like installing software with apt-get etc.), and if I use the Astrill VPN software I can access web pages even when running without sudo. I haven't installed any software or anything like that for several days, so I have not a clue what could cause this. Just by the way, the other Windows PC in our home has no issue.

    Here is what I have tried to fix this:

    - restarting my PC, router and modem - multiple times
    - changing the permissions on my Firefox profile
    - completely re-installing Firefox and starting with a blank profile, thus no addons
    - changing /etc/resolv.conf to the IP of my router (it was 127.0.1.1)
    - changing my hostname (from tomino-NB to tominoNB)

    I think I might try even more stuff. None of it works. Can someone please try to help me? Thank you.

    UPDATE 1: I have tried removing resolv.conf - it didn't help. Also, the "ping" and "dig" commands cannot resolve hosts.

    UPDATE 2: I have tried to edit the nameservers in resolv.conf, but still no effect. I can ping the router as well as an outside IP, so it is definitely just a DNS issue. Is it possible that something is rewriting the path to resolv.conf and using a different file?

    UPDATE 3: I have just restarted the PC and everything works now... resolv.conf went back to nameserver 127.0.1.1. I have no clue what happened that it works again...

    Read the article

  • faster ( squid + apache httpd + apache tomcat )

    - by letronje
    We have a production setup with three servers in the request path:

    - Squid in front (caching images, js, css, etc.)
    - Apache httpd in the middle (prefork + mod_rewrite + mod_jk/AJP + mod_deflate + mod_php for a few PHP pages)
    - Apache Tomcat 5.5 at the end, serving all the dynamic stuff

    What would be the best way to reduce the overhead of having three servers in the request path? I'm wondering if replacing httpd with a faster web server like nginx or lighttpd would help. httpd right now does the job of URL rewriting (for clean URLs), talking to Tomcat (via mod_jk), compressing output (mod_deflate) and serving some low-traffic PHP pages. What would be the ideal replacement for httpd given that we need these features?

    Is there a way to replace (squid + apache) with a single entity that caches static content well (like Squid), rewrites URLs, compresses responses and forwards dynamic requests directly to Tomcat? I've heard about Varnish Cache and am wondering if it can help.
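
    For reference, a minimal nginx sketch of the collapsed setup -- paths and ports are hypothetical, and note that nginx talks plain HTTP to Tomcat rather than AJP (the low-traffic PHP pages would additionally need a FastCGI backend):

        server {
            listen 80;

            # Compress responses (mod_deflate equivalent).
            gzip on;
            gzip_types text/css application/javascript application/json;

            # Serve static assets directly, with client-side caching.
            location ~* \.(gif|jpe?g|png|css|js)$ {
                root /var/www/static;
                expires 7d;
            }

            # Clean-URL handling and everything dynamic goes to Tomcat.
            location / {
                proxy_pass http://127.0.0.1:8080;
                proxy_set_header Host $host;
            }
        }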

    Read the article

  • Order of mod_rewrite rules in .htaccess not being followed

    - by user39461
    We're trying to enforce HTTPS on certain URLs and HTTP on others. We are also rewriting URLs so all requests go through our index.php. Here is our .htaccess file:

        # enable mod_rewrite
        RewriteEngine on

        # define the base url for accessing this folder
        RewriteBase /

        # Enforce http and https for certain pages
        RewriteCond %{HTTPS} on
        RewriteCond %{REQUEST_URI} !^/(en|fr)/(customer|checkout)(.*)$ [NC]
        RewriteRule ^(.*)$ http://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

        RewriteCond %{HTTPS} off
        RewriteCond %{REQUEST_URI} ^/(en|fr)/(customer|checkout)(.*)$ [NC]
        RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

        # rewrite all requests for files and folders that do not exist
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php?query=$1 [L,QSA]

    If we don't include the last rule (RewriteRule ^(.*)$ index.php?query=$1 [L,QSA]), the HTTPS and HTTP rules work perfectly; however, when we add the last three lines our other rules stop working properly. For example, if we go to https://www.domain.com/en/customer/login, it redirects to http://www.domain.com/index.php?query=en/customer/login. It's as if the last rule is being applied before the redirection happens, despite the [L] flag indicating that the redirection should be the last rule applied.
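
    A likely explanation: in per-directory (.htaccess) context, [L] only ends the current pass; mod_rewrite then restarts the whole rule set with the rewritten URL, so /index.php?query=... is fed back through the scheme rules. One common workaround -- a sketch, with the same guard to be added to the HTTPS-enforcing block as well -- is to apply the redirects only to the original client request:

        # REDIRECT_STATUS is empty on the first pass and set once
        # mod_rewrite has internally restarted processing:
        RewriteCond %{ENV:REDIRECT_STATUS} ^$
        RewriteCond %{HTTPS} on
        RewriteCond %{REQUEST_URI} !^/(en|fr)/(customer|checkout)(.*)$ [NC]
        RewriteRule ^(.*)$ http://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]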

    Read the article

  • Basic IIS7 permissions question

    - by Tom Gullen
    We have a website with a file: www.example.com/apis/httpapi.asp. This file is used by the site internally to make requests joining two systems on the website together (one is Classic ASP, the other ASP.NET). However, we do not want the public to be able to access the file. In IIS 7.5, is there a setting I can use to make this file internal-only? I've tried rewriting the URL for it, but the rewrite is also applied internally, so the scripts stop working as they fetch the rewritten URL. Thanks for any help!
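
    One possible approach -- a sketch, assuming the internal requests reach the server over loopback and that IIS's "IP and Domain Restrictions" feature is installed -- is to lock the file down by source address in web.config:

        <!-- Deny all remote clients for this one file; the server's own
             internal requests via 127.0.0.1 still succeed. -->
        <location path="apis/httpapi.asp">
          <system.webServer>
            <security>
              <ipSecurity allowUnlisted="false">
                <add ipAddress="127.0.0.1" allowed="true" />
              </ipSecurity>
            </security>
          </system.webServer>
        </location>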

    Read the article

  • Infrastructure to effectively set up experiments and learn from them

    - by David
    Open-org.com is in the early stages of creating our first product: a place on the web where one can ask lawyers questions at a fraction of their normal costs. An early-stage front page can be found here. I was inspired by this video, recommended by Jeff Atwood, which talks about getting feedback faster; that is the reason for this question.

    The problem: Needless to say, we want our conversion rates to be as high as possible. Therefore, we want to be able to rapidly set up a new experiment where we change something on the site (like moving an image slightly, rewriting a sentence, etc.). We then want to present the modified page to a random subset of the users. After that, we will compare the conversion rates of the experiment with the other version. I could very well imagine that we want to run 10-100 experiments simultaneously, and it would be nice to have features where experiments that clearly have worse results are ended before schedule.

    My question: Does infrastructure to support the whole process exist?

    A short description of our infrastructure: we use EC2 and PHP and have a script to automatically start up new instances with all needed software. Still, starting up a new server for every experiment seems like a bit of overkill, so I am wondering what other options exist.

    Btw, if you feel like working for Open-org.com, you can pick a task and start working, or suggest a new task. All profits are given out to the contributors.
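
    Whatever tool ends up managing the experiments, the core mechanic is small. A minimal PHP sketch of cookie-based bucketing (all names here are hypothetical), so that a visitor keeps seeing the same variant across visits:

        // Assign a visitor to a variant once, then keep them in that bucket.
        function assignVariant($experiment, array $variants)
        {
            $cookie = 'exp_' . $experiment;
            if (isset($_COOKIE[$cookie]) && in_array($_COOKIE[$cookie], $variants)) {
                return $_COOKIE[$cookie];
            }
            $variant = $variants[mt_rand(0, count($variants) - 1)];
            setcookie($cookie, $variant, time() + 30 * 24 * 3600, '/');
            return $variant;
        }

        $variant = assignVariant('frontpage_headline', array('control', 'rewrite'));
        if ($variant === 'rewrite') {
            // render the experimental headline; log $variant with each conversion
        }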

    Read the article

  • Can I save an Apache environment variable value with SetEnv?

    - by Nicholas Tolley Cottrell
    I am running Apache 2.2 with Tomcat 6 and have several layers of URL rewriting going on, both in Apache with RewriteRule and in Tomcat. I want to pass through the original REQUEST_URI that Apache sees, so that I can log it properly for "page not found" errors etc.

    In httpd.conf I have the line:

        SetEnv ORIG_URL %{REQUEST_URI}

    and in mod_jk.conf I have:

        JkEnvVar ORIG_URL

    which I thought should make the value available via request.getAttribute("ORIG_URL") in servlets. However, all that I see is "%{REQUEST_URI}", so I assume that SetEnv doesn't interpret the %{...} syntax. What is the right way to get the URL the user requested in Tomcat?
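
    That assumption is correct: SetEnv assigns literal text only and performs no expansion. A sketch of two ways to capture the original URI instead, using either mod_setenvif's regex capture or mod_rewrite's E= flag:

        # Option 1: mod_setenvif captures from the request.
        SetEnvIf Request_URI "^(.*)$" ORIG_URL=$1

        # Option 2: mod_rewrite, using a condition capture.
        RewriteCond %{REQUEST_URI} ^(.*)$
        RewriteRule . - [E=ORIG_URL:%1]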

    Read the article

  • sSMTP Configuration Question

    - by SevenCentral
    I've installed sSMTP on Ubuntu 10.04 via:

        sudo apt-get install ssmtp

    My configuration file is:

        #
        # Config file for sSMTP sendmail
        #
        # The person who gets all mail for userids < 1000
        # Make this empty to disable rewriting.
        root=[email protected]

        # The place where the mail goes. The actual machine name is required;
        # no MX records are consulted. Commonly mailhosts are named mail.domain.com
        mailhub=smtp.gmail.com:587

        # Where will the mail seem to come from?
        #rewriteDomain=

        # The full hostname
        hostname=somedomain.com

        # Are users allowed to set their own From: address?
        # YES - Allow the user to specify their own From: address
        # NO - Use the system generated From: address
        #FromLineOverride=YES

        authuser=[email protected]
        authpass=****
        usestarttls=yes

    Am I transmitting my credentials in clear text? Is calling ssmtp a secure operation? Thanks.

    Read the article

  • Change Envelope From to match From header in Postfix

    - by lid
    I am using Postfix as a gateway for my domain and need it to change or rewrite the Envelope From address to match the From header. For example, the From: header is "[email protected]" and the Envelope From is "[email protected]". I want Postfix to make the Envelope From "[email protected]" before relaying it on. I took a look at the Postfix Address Rewriting document but couldn't find anything that matched my use case. (In case you're curious why I need to do this: Gmail uses the same Envelope From when sending from a particular account, no matter which From: address you choose to use. I would prefer not to disclose the account being used to send the email. Also, it messes with SPF/DMARC domain alignment - see 4.2.2 of the DMARC draft spec.)
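
    If the set of sending addresses is known in advance, one option -- a sketch with hypothetical addresses, untested here -- is a sender_canonical map restricted to the envelope sender, which leaves the From: header alone:

        # /etc/postfix/main.cf -- rewrite only envelope senders, not headers
        sender_canonical_classes = envelope_sender
        sender_canonical_maps = hash:/etc/postfix/sender_canonical

        # /etc/postfix/sender_canonical (hypothetical addresses)
        envelope-account@gmail.com    desired-from@yourdomain.com

    followed by postmap /etc/postfix/sender_canonical and a Postfix reload.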

    Read the article

  • Mod_rewrite pretty url when domain/foo is a directory

    - by ModRewriter
    Starting with something as simple as:

        RewriteRule ^(.*)$ index.php?page=$1

    what if I also want the following to work?

        RewriteRule ^/foo$ /index.php?page=foo   # /foo IS a directory

    This seems to work ONLY if the R flag is set, but then the full non-pretty URL is written. Thus it seems I can REDIRECT an existing directory, but not rewrite it... Maybe with an .htaccess inside the directory itself? Or some PHP magic in /foo/index.php, like header("Location: /index.php?page=foo")? Will it work? Will it be HTTP-standard / search engine optimized? Please help!

    PS: The oddest idea occurred to me: redirecting /foo to /not-a-dir, and then rewriting /not-a-dir to /index.php?p=foo should theoretically work... But... Come on... Really?!?
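
    The usual culprit here is mod_dir: when a request maps to a real directory, Apache first issues an external redirect to add the trailing slash, and that round trip is what exposes the non-pretty URL. A sketch of one common workaround in the top-level .htaccess:

        # Stop mod_dir from redirecting /foo to /foo/ so the rewrite
        # can handle it internally (keep directory listings disabled,
        # since DirectorySlash Off can otherwise expose them):
        DirectorySlash Off
        Options -Indexes
        RewriteEngine On
        # Don't rewrite the front controller itself.
        RewriteCond %{REQUEST_URI} !^/index\.php
        RewriteRule ^(.+)$ index.php?page=$1 [L,QSA]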

    Read the article

  • Apache 410 Gone instructions not working with mod_alias nor mod_rewrite

    - by Peter Boughton
    Apache 2.2 seems to be ignoring instructions to return a 410 status. This happens for both mod_alias's Redirect (using 410 or gone) and mod_rewrite's RewriteRule (using [G]), being used inside a .htaccess file.

    This works:

        Redirect 302 /somewhere /gone

    But this doesn't:

        Redirect 410 /somewhere

    That line is ignored (as if it had been commented out) and the request falls through to other rules (which direct it to an unrelated generic error-handling script). Similarly, trying to use a RewriteRule with a [G] flag doesn't work, but the same rule rewriting to a script that generates a 410 does -- so the rules aren't the problem, and it seems instead to be something about 410/gone that isn't behaving. I can work around it by having a script send the 410, but that's annoying and I don't get why it's not working. Any ideas?

    Read the article

  • Regular expression htaccess RewriteRule

    - by Rick
    I am new to using regular expressions for rewriting URLs in .htaccess. I need to redirect mysite.com/123 to mysite.com/, IF a cookie named 'ref' is set. My current .htaccess is:

        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{HTTP_COOKIE} ref=true [NC]
        RewriteRule ^/([0-9]+)/$ http://www.mysite.com
        </IfModule>

    The goal is that when someone enters the site with mysite.com/111 (some number), they are redirected to the home page of the site after the cookie is set. Be nice... I'm new! ;o)
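
    Two things commonly trip this up: in per-directory (.htaccess) context the pattern is matched without the leading slash, and a request like /123 has no trailing slash either. A sketch with both loosened:

        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{HTTP_COOKIE} ref=true [NC]
        # No leading slash, optional trailing slash, explicit redirect:
        RewriteRule ^[0-9]+/?$ http://www.mysite.com/ [R=302,L]
        </IfModule>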

    Read the article

  • How to deal with a CEO making all technical decisions but having little technical knowledge?

    - by anonymous
    Hi, this question is posted anonymously for obvious reasons. I am working in a company with a dev group of 5-6 developers, and I am in a situation which I have a hard time dealing with. Every technical choice (language, framework, database, database schema, configuration scheme, etc.) is decided by the CEO, often without much rationale. It is very hard to modify those choices, and his main argument consists of "I don't like this", even though we propose several alternatives with detailed pros/cons. He will also decide to rewrite our core product from scratch without giving a reason why, and he never participates in dev meetings because he considers they make things slower...

    I am already looking at alternative job opportunities, but I was wondering if there is anything we (the developers) could do to improve the situation. Two examples which shocked me:

    - He will ask us to implement something akin to configuration management, but he rejects any existing framework because it is not written in the language he likes (even though the implementation language is irrelevant). He also expects us to be able to write those systems in a couple of days, "because it is very simple".
    - He keeps rewriting our core product from scratch on his own because the current codebase is too bad (a codebase whose design was his). We are at our third rewrite in one year, each rewrite worse than the previous one.

    Things I have tried so far include doing elaborate benchmarks on our product (he keeps complaining that our software is too slow, and justifies rewrites as making it faster) and implementing solutions with existing products as working proofs instead of just making pros/cons charts. But still 90% of those efforts go to the trash bin (never with any kind of rationale beyond him not liking it, again), and I often get reprimanded because I don't do exactly what he wants (he does not realize that what he wants is impossible).

    Read the article

  • Benefits of Server-side Coding

    There are numerous advantages to server-side scripting languages over client-side languages when it comes to creating web sites that are more compelling than a standard static site. Server-side scripts are executed on a web server while the response to a client is being assembled. These scripts allow developers to modify the content being sent to the user before it is returned, as well as store information about the user. In addition, server-side scripts run in a controllable environment, which cannot be said for client-side languages: the developer can control a web server, but not the user's environment. Some users may turn off client scripts, some may only be allowed limited access on their system, and others may be able to gain full control of the environment.

    I have been developing web applications for over 9 years, and I have used server-side languages for most of the applications I have built. Here is a list of common things I have developed with server-side scripts:

    - Send email
    - FTP files
    - Security / access control
    - Encryption
    - URL rewriting
    - Data access
    - Data creation
    - I/O access

    The one important feature server-side languages will help me with on my website is data access, because my component will be backed by a SQL Server database. I believe that form validation is one instance where server-side scripts and JavaScript might be used interchangeably, because it does not matter how or where the data is validated, as long as the data that gets inserted is valid. However, my personal experience would sway me in deciding which type of language to use for form validation, because both have advantages and disadvantages depending on the situation.
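
    As an illustration of the validation point, a minimal server-side sketch in PHP (the field name is hypothetical); even when JavaScript validates the form first, the server must re-check, since client scripts can be disabled:

        $errors = array();

        $email = isset($_POST['email']) ? trim($_POST['email']) : '';
        if (filter_var($email, FILTER_VALIDATE_EMAIL) === false) {
            $errors[] = 'A valid email address is required.';
        }

        if (empty($errors)) {
            // Safe to continue, e.g. insert the record with a
            // parameterized query against the database.
        } else {
            // Redisplay the form along with $errors.
        }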

    Read the article

  • Is a "model" branch a common practice?

    - by dukeofgaming
    I just thought it could be a good thing to have a dedicated version control branch for all database schema changes, and I wanted to know if anyone else is doing the same and what the results have been. Say that you are working with:

    1. Schema model/documentation (some file where you model the database visually to generate the schema source, say MySQL Workbench, with a .mwb file, which is binary)
    2. Schema source (a .sql file)
    3. Schema-based code generation

    The normal way we were working was with feature branches, so we would make changes to the model files (the database-specific ones), and then have to regenerate points 2 and 3, dealing with the possible conflicts (or even code rewriting). Now say that your workflow goes the same way as the previous item numbering. With a model branch, you wouldn't have to reconcile the schema model with binaries in other feature branches, or have to regenerate the schema source and regenerate code (which might have human code on top of it). It makes so much sense to me, it feels weird not having seen this earlier as a common practice.

    Edit: I'm counting on branch merges to be the assertions for the model matching the code. I use a DVCS, so I don't fear long-lived branches or scary-looking merges. I'm also doing feature branching.

    Read the article

  • Class Design - Space Simulator

    - by Peteyslatts
    I have pretty much taught myself everything I know about programming, so while I know how to teach myself (books, the internet and reading APIs), I'm finding that there hasn't been a whole lot in the way of learning good programming. So I have two questions.

    First, the broad one: does anyone have suggestions as to sources for learning about good programming habits and techniques? I'd prefer it if the resource wasn't a 5000-page tome. The more I can read it in installments, the better.

    More specifically: I am finishing up learning the basics of XNA, and I want to create a space simulator to test my knowledge. This isn't a full-scale simulator, just something that covers everything I learned. It's also going to be modular, so I can build on it after I get the basics down. One of the early features I want to implement is AI, and I want to take this into account as I'm designing my classes so I can minimize rewriting code. So my question: how should I design ship classes so that both the player and the AI can use them? The only idea I have so far is: create a ship class that contains stats, models, textures, collision data, etc. The player and AI would then have the data for position, rotation, health, etc., and would base their status off of the ship stats.
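
    One common shape for this is composition -- a sketch only, using XNA types (ShipStats and the controller names are hypothetical): the Ship owns the shared state, and player input or AI is just a different controller driving the same class:

        // assumes: using Microsoft.Xna.Framework;

        // Shared, per-design data (limits, model, textures).
        public class ShipStats
        {
            public float MaxHealth;
            public float MaxSpeed;
            // model, textures, collision data...
        }

        // Per-instance state, used identically by player and AI ships.
        public class Ship
        {
            public ShipStats Stats { get; private set; }
            public Vector3 Position { get; set; }
            public Quaternion Rotation { get; set; }
            public float Health { get; set; }

            public Ship(ShipStats stats)
            {
                Stats = stats;
                Health = stats.MaxHealth;
            }
        }

        // The only difference between player and AI is who steers.
        public interface IShipController
        {
            void Update(Ship ship, GameTime gameTime);
        }

        public class PlayerController : IShipController
        {
            public void Update(Ship ship, GameTime gameTime)
            {
                // read keyboard/gamepad state and adjust ship.Position/Rotation
            }
        }

        public class AiController : IShipController
        {
            public void Update(Ship ship, GameTime gameTime)
            {
                // steer toward a target, decide when to fire, etc.
            }
        }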

    Read the article

  • Are the only types of data "sources" static and dynamic?

    - by blunders
    Thinking that there might be others, but not sure -- but before getting into that, let me explain what I mean by static and dynamic data sources.

    Static (or datastore) - meaning that the data's state is non-changing; if it were changed, that would be a new state, and the old data would be considered stateless, meaning it is no longer known to exist or not exist. Another way of looking at a static data source might be: if the data is read and written back without modification, the checksums before and after should be exactly the same, regardless of the duration of time between the reading and rewriting of the data. Examples: photos, files, database records.

    Dynamic (or datastream) - meaning that the data's state is known to be in flux and never expected to be the same per input. Examples: live video/audio feeds, stock market feeds.

    First let me say the above is a very loose mapping of the concepts, and I'd welcome any feedback. Next, onto the core of the question: are these the only two types of data sources? My guess is that yes, they are -- but that there are hybrid versions of the two. That being: streaming data that has a fixed state. For example, the data being streamed has a checksum given, and each unique checksum is known to be a single instance of static data. On the flip side, static data could be chained via, say, a version control system; when played back, each version might be viewed as a segment of a stream; thing is, the very fact that it can be played back makes the data source static. Another type might be a data source that is being organically discovered, where it's simply unknown what the state is. Questions, feedback, requests -- just comment, thanks!

    Read the article

  • How do I rewrite *.example.com to www.example.com?

    - by Lekensteyn
    In my network I have some Ubuntu machines which need to download files from nl.archive.ubuntu.com. Since it's quite a waste of time to download everything multiple times, I've set up a squid proxy for caching the data. Another use for this proxy was rewriting requests for archive.ubuntu.com or *.archive.ubuntu.com to nl.archive.ubuntu.com, because this mirror is faster than the US mirrors. This worked quite well, but after a recent reinstall of my caching machine the configuration was lost. I remember having a separate Perl program for handling this rewrite. How do I set up such a squid proxy, which rewrites the host *.example.com to www.example.com and caches the result of the latter?
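
    A sketch of the classic url_rewrite_program setup (file names are hypothetical, and the exact helper protocol differs slightly between squid versions). In squid.conf:

        url_rewrite_program /etc/squid/rewrite-mirror.pl
        url_rewrite_children 5

    and the helper itself:

        #!/usr/bin/perl
        # Rewrite any *.archive.ubuntu.com (or archive.ubuntu.com) URL to
        # the nl. mirror; echo everything else back unchanged.
        $| = 1;    # unbuffered stdout, required for squid helpers
        while (<STDIN>) {
            my ($url) = split;    # first field of each request line is the URL
            $url =~ s{^http://([\w.-]+\.)?archive\.ubuntu\.com/}{http://nl.archive.ubuntu.com/};
            print "$url\n";
        }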

    Read the article

  • Testing of visualization projects

    - by paxRoman
    We develop small to large visualization projects for different tasks and industries, and sometimes, while rewriting them a couple of times in the process, we hit walls because we discover that we need to add a lot of code to support new requirements. We have now established a design process that seems to work well (at least we reduced the development time for each new project quite a bit), but we're still left scratching our heads over this question: what exactly should we test when testing visualizations?

    - That everything we want to explore is on the screen (bounded visualizations)?
    - That the data is ok, i.e. valid (that's one of the nice things about visualizations: you can spot errors in your datasets)?
    - Usability?
    - User interaction?
    - Code quality?

    I can tell you for sure that a simple check of code quality is certainly not enough! Is there a classic paper or book about how to test visualizations? Also, do you happen to know about classic design patterns for visualizations (except the obvious ones like Pub-Sub)?

    Read the article

  • Poor backlink profile - search rankings not updated for 2+ months

    - by fistameeny
    I am carrying out some work on a website that is a PR2 with a few good-quality, relevant backlinks (PR4-6). It has a presence on Twitter that is updated regularly, a Google Places listing, and listings in some decent directories (Qype etc.). The site was rebuilt in Drupal 7 two months ago, with all the basics done: URL rewriting, an XML sitemap submitted to Google, and most importantly, good-quality, structured content.

    I've noticed that Google is still showing "old" URLs from the previous version of the site that was ditched 8 weeks ago. I think the site may be penalised under the Penguin update, as a previous SEO company created many low-quality links from link farms and directories. My question is what the correct way to deal with this is. Bing Webmaster Tools can "disavow" links, and I guess I can attempt to contact the link farms to have them removed. I've already submitted a request to Google asking that the penalty be removed, as we're trying to tidy up a bad history. We submit updated sitemaps to Google and Bing daily, and have built some further decent-quality, relevant links. Is there anything further I can do?

    Read the article

  • Tomcat + Spring + CI workflow

    - by ex3v
    We're starting our very first project with Spring and the Java web stack. This project will mainly be about rewriting a quite large ERP/CRM from Zend Framework to Java. An important factor in my question is that I come from PHP territory, where things (in terms of quality) tend to look different than in the Java world.

    Facts:

    - there will be 2-3 developers,
    - at least one of the developers uses Windows, the rest use Linux,
    - there is one remote Linux-based machine, which should handle the test and production instances,
    - after struggling with buggy legacy code, we want to introduce good programming and development practices (CI, tests, clean code and so on),
    - client: internal, frequent business logic changes, scrum, daily deployments.

    What I want to achieve is a good workflow across as many development stages as possible (coding - committing - testing - deploying). The problem is that I've never done this before, so I don't know what the best practices are. What I have so far is:

    - developers code locally,
    - there is a vagrant instance on every development machine, managed by puppet; it contains the same Linux, Jenkins and Tomcat versions as the production machine,
    - while coding, the developer deploys to the vagrant machine,
    - after a local merge to the test branch, Jenkins on vagrant handles tests,
    - when everything is fine, the developer pushes commits and merges,
    - Jenkins on the remote machine pulls the commit from the test branch, runs tests and so on,
    - if everything looks green, Jenkins deploys to the test Tomcat instance.

    Deployment to production is manual (although it can be done with helper scripts), once the business logic has been tested by other divisions and everything looks fine to the client.

    Now, the real question: does the above make any sense? Things that I'm not sure about:

    - Remote machine: won't there be any problems with two (or even three, as Jenkins might need one) instances of the same app on Tomcat?
    - Using vagrant to develop in a PHP environment makes sense, but isn't it overkill with Tomcat? I mean, is there a higher probability that Tomcat will act the same on every machine?
    - Is there any sense in having a local Jenkins on vagrant?

    Read the article
