Search Results

Search found 13713 results on 549 pages for 'production environment'.

  • Spring MVC - JSP - Place to Store Environment-Specific Constants

    - by jboyd
    Where in a Spring MVC/JSP application would you store things that need to be accessed by both the controllers and the views, such as environment-specific base URLs, application IDs to be used in JavaScript, and so on? I've tried creating an application-scoped bean and then referencing it at the top of my JSPs, but that doesn't seem to be working.

        <!-- Environment -->
        <bean id="myEnv" class="com.myapp.MyAppEnvironment" scope="application">
            <property name="baseUrl" value="http://localhost:8080/myapp/"/>
            <property name="videoPlayerId" value="234346565"/>
        </bean>

    And using it in the following manner:

        <jsp:useBean id="myEnv" scope="application" type="com.myapp.MyAppEnvironment"/>
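
    A sketch of one commonly suggested alternative, assuming Spring 2.5 or later (the prefix/suffix values below are illustrative): let the view resolver expose context beans to JSPs automatically, so EL such as ${myEnv.baseUrl} works in every view without a jsp:useBean declaration.

        <bean class="org.springframework.web.servlet.view.InternalResourceViewResolver">
            <!-- Make every context bean (including myEnv) visible to the view as an attribute -->
            <property name="exposeContextBeansAsAttributes" value="true"/>
            <property name="prefix" value="/WEB-INF/jsp/"/>
            <property name="suffix" value=".jsp"/>
        </bean>

    Views can then read ${myEnv.baseUrl} and ${myEnv.videoPlayerId} directly; the related exposedContextBeanNames property restricts the exposure to named beans only.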

    Read the article

  • Should we create a virtual machine environment so a consultant can develop in a similar environment?

    - by ChrisNel52
    This is a large project, and currently there are only 3 developers working on it. We have some money in the budget to contract development help from a software consulting firm. However, because of the location of our business, it would be beneficial if the consultant could do their development off-site. Also, our company policy doesn't allow contract help to VPN into our network, so that is not an option. My question is: would it be a good idea to create a virtual machine that copies our internal environment (particularly our database and WCF service) and give the consultant the virtual machine image, so that they can replicate the environment at their place of work? I haven't worked much with virtual machines, so I'm not sure whether this is a good idea or whether there are huge obstacles that I'm not thinking of. If anyone has ever done anything like this, it would be great to hear the pros and cons. Any help would be appreciated.

    Read the article

  • Autotest notifications on Ubuntu virtual environment

    - by Luciano
    I am having trouble getting Rails autotest notifications to work in the Engine Yard Vagrant environment. On the Mac, I normally get the notifications via Growl. However, in the virtual environment (which runs Ubuntu), that doesn't work. I tried Linux notification setups such as libnotify + autotest-notification, but I get the following errors:

        libnotify-Message: Unable to get session bus: /bin/dbus-launch terminated abnormally with the following error: Autolaunch error: X11 initialization failed.
        ** (notify-send:1004): CRITICAL **: dbus_g_proxy_connect_signal: assertion `DBUS_IS_G_PROXY (proxy)' failed
        ** (notify-send:1004): CRITICAL **: dbus_g_proxy_connect_signal: assertion `DBUS_IS_G_PROXY (proxy)' failed
        ** (notify-send:1004): CRITICAL **: dbus_g_proxy_call: assertion `DBUS_IS_G_PROXY (proxy)' failed

    Another path would be to have Growl receive the notifications remotely, but I don't even know where to begin with that... Any suggestions?

    Read the article

  • Preferred Windows Java Development Environment

    - by JF
    I've been a Linux Java developer for years and have loved it. I just got a new laptop which is running Windows 7. I could wipe the drive and go back to my typical Linux dev setup: vim for editing, tabbed Bash windows running javac and java for smaller projects, ant for big projects. That said, I'm really thinking it couldn't hurt to learn to develop in a new environment. So, with that in mind, are there any Windows-based Java devs out there? What setup do you like to use to get things done? It'd be interesting to hear both ways to emulate my Linux-based environment as well as completely different styles that I might benefit from trying.

    Read the article

  • Passenger problem: "no such file to load" -- /config/environment

    - by Mason Jones
    I've been researching this one and found references to similar problems here and there, but none of them has led to a solution yet. I've installed Passenger (2.2.11) and nginx (0.7.64), and when I start things up and hit a Rails URL, I get an error page informing me of a load error:

        no such file to load -- /path/to/app/config/environment

    From what I've found online, this appears to be some sort of user/permissions error, but I've tried all the logical fixes: I've made sure that config/environment.rb is not owned by root but by a webapp user. I've tried setting passenger_default_user, and I've tried setting passenger_user_switching off. I've even tried setting the nginx user, though that shouldn't matter much. I've gotten some differing results, but nothing has actually worked. I'm hoping someone has the magical combination of settings and permissions for this. I may try backing down to an earlier version of Passenger, because I've never had this issue before, though it's been a little while since I set up Passenger. Thanks for any suggestions.
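
    For reference, a minimal sketch of the vhost this setup needs (the paths, server name, and user are illustrative); a root that points at the application directory rather than its public/ subdirectory is a common cause of Passenger failing to find config/environment:

        server {
            listen 80;
            server_name example.com;
            # Must point at the app's public/ directory, not the app root
            root /path/to/app/public;
            passenger_enabled on;
            # Run the app as the user that owns config/environment.rb
            passenger_user webapp;
        }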

    Read the article

  • Setting Environment Variables For NMAKE Before Building A 'Makefile Solution'

    - by John Dibling
    I have an MSVC makefile project in which I need to set an environment variable before running NMAKE. For x64 builds I need to set it to one value, and for x86 builds I need to set it to something else. So, for example, when doing a build I would want to SET PLATFORM=win64 if I'm building a 64-bit compile, or SET PLATFORM=win32 if I'm building 32-bit. There does not appear to be an option to set environment variables or to add a pre-build event for makefile projects. How do I do this? EDIT: Running MSVC 2008.
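
    A sketch of one common workaround, assuming a makefile named makefile.mak: put the SET in the project's NMake "Build Command Line" for each configuration. If the makefile reads the value as a macro rather than as a process environment variable, passing it on the NMAKE command line works too.

        rem Build Command Line for the x64 configuration
        set PLATFORM=win64&& nmake /f makefile.mak

        rem Build Command Line for the Win32 configuration
        set PLATFORM=win32&& nmake /f makefile.mak

        rem Alternative: define it as an NMAKE macro instead
        nmake /f makefile.mak PLATFORM=win64

    (The && placed immediately after the value keeps a trailing space out of the variable.)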

    Read the article

  • How to prevent a command/script from changing the global environment

    - by guillermooo
    I need to run scriptblocks/scripts from the current top-level shell, and I want them to leave the global environment unmodified. So far, I've only been able to think of the following possibilities:

        powershell -file <script>
        powershell -noprofile -command <scriptblock>

    The problem is that they are very slow. For instance, I would like to be able to do:

        mkdir newdir
        cd newdir
        $env:NEW_VAR = 100
        ni -item f 'newfile.txt'

    ...so that my shell's working dir wouldn't change and $env:NEW_VAR wouldn't be set in the global environment. Are there any more alternatives to accomplish this?
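
    One in-process alternative, as a sketch: because environment variables and the working directory are process-wide, it snapshots and restores them by hand around the scriptblock rather than truly sandboxing it.

        # Snapshot the location and the environment
        $savedLocation = Get-Location
        $savedEnv = @{}
        Get-ChildItem Env: | ForEach-Object { $savedEnv[$_.Name] = $_.Value }
        try {
            & {
                mkdir newdir | Out-Null
                cd newdir
                $env:NEW_VAR = 100
                ni -ItemType File 'newfile.txt' | Out-Null
            }
        }
        finally {
            # Restore the location, drop added variables, reset changed ones
            Set-Location $savedLocation
            Get-ChildItem Env: |
                Where-Object { -not $savedEnv.ContainsKey($_.Name) } |
                ForEach-Object { Remove-Item "Env:$($_.Name)" }
            foreach ($name in $savedEnv.Keys) { Set-Item "Env:$name" $savedEnv[$name] }
        }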

    Read the article

  • How to check whether your code is running on Windows, Linux, or another OS

    - by justjoe
    Hi, right now I'm coding a custom WordPress theme and testing it with XAMPP on Windows XP, on an Apache server. But as far as I can tell, there's no WP built-in function to identify the code environment. Is there a PHP built-in function to identify such a thing? For the record, what I want to code needs to read a directory. Under my Apache (on Windows) the path will be c:/xampp/htdocs, whereas on Linux it will be /somepath/somepath. So, is there any code solution to determine the OS environment without having to compare the paths? I hope it will also work on other OSes, and with web servers other than Apache, such as IIS.
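
    A sketch using only PHP built-ins, which needs no path comparison and is independent of the web server:

        <?php
        // PHP_OS names the OS PHP was built on, e.g. "WINNT", "Linux", "Darwin"
        if (strtoupper(substr(PHP_OS, 0, 3)) === 'WIN') {
            echo 'Running on Windows';
        } else {
            echo 'Running on ' . PHP_OS;
        }
        // DIRECTORY_SEPARATOR ('\' on Windows, '/' elsewhere) and php_uname()
        // are other built-ins that can distinguish the platform.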

    Read the article

  • ExpandEnvironmentStrings Not Expanding My Variables

    - by Adam Driscoll
    I have a process under the Run key in the registry. It is trying to access an environment variable that I defined in a previous session. I'm using ExpandEnvironmentStrings to expand the variable within a path. The environment variable is a user profile variable. When I run my process from the command line it does not expand either. If I call 'set' I can see the variable. Some code:

        CString strPath = "\\\\server\\%share%";
        TCHAR cOutputPath[32000];
        DWORD result = ExpandEnvironmentStrings((LPSTR)&strPath, (LPSTR)&cOutputPath, _tcslen(strPath) + 1);
        if ( !result )
        {
            int lastError = GetLastError();
            pLog->Log(_T("Failed to expand environment strings. GetLastError=%d"), 1, lastError);
        }

    When debugging, the output path is exactly the same as the input path. No error code is returned. What is going on?
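
    For comparison, a sketch of the call with conventional argument handling (assuming a TCHAR build): the CString converts implicitly to LPCTSTR instead of having the object's address cast, and the size argument describes the output buffer rather than the input string.

        CString strPath = _T("\\\\server\\%share%");
        TCHAR cOutputPath[32000];
        // nSize is the capacity of the destination buffer, in TCHARs
        DWORD result = ExpandEnvironmentStrings(strPath, cOutputPath,
                                                sizeof(cOutputPath) / sizeof(TCHAR));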

    Read the article

  • What are the differences between Cygwin on Windows and a real Unix environment?

    - by Tarun
    Hi, I am a C/C++ developer. I have never done C++ programming on Unix; I have done it only on Windows. I want to practice C++ on Unix (because all the big companies ask for C++ with Unix). I have a laptop on which I do not want to install another OS (because I have installed very important software on it and I don't have the setup files). So I searched and found Cygwin, which provides a Unix-like environment on Windows. I am thinking of practicing C++ on this. Please help me: how can I practice/learn in an environment closer to the Unix environment that is used in big companies like IBM? What will be the differences between Unix and Cygwin?

    Read the article

  • How do I Integrate Production Database Hot Fixes into a Shared Database Development Model?

    - by TetonSig
    We are using SQL Source Control 3, SQL Compare, and SQL Data Compare from Redgate, Mercurial repositories, TeamCity, and a set of 4 environments including production. I am working on getting us to a dedicated environment per developer, but for at least the next 6 months we are stuck with a shared model.

    To summarize our current system: we have a DEV SQL Server where developers first make changes/additions. They commit their changes through SQL Source Control to a local hgdev repository. When they execute an hg push to the main repository, TeamCity listens for that and then (among other things) pushes the hgdev repository to hgrc. Another TeamCity process listens for that, does a pull from hgrc, and deploys the latest to a QA SQL Server where regression and integration tests are run. When those pass, a push from hgrc to hgprod occurs. We do a compare of hgprod to our PREPROD SQL Server and generate deployment/rollback scripts for our production release.

    Separate from the above, we have database hot fixes that need to be applied between releases. The process there is for our Operations team to make changes on the PREPROD database and then, after testing, to use SQL Source Control to commit their hot fix changes to hgprod from the PREPROD database, then do a compare from hgprod to PRODUCTION, create deployment scripts, and run them on PRODUCTION.

    If we were in a dedicated-database-per-developer model, we could simply push hgprod back to hgdev automatically and merge in the hot fix change (through TeamCity monitoring for hgprod check-ins), and developers would then pick it up and merge it into their local repository and database periodically. However, given that with a shared model the DEV database itself is the source of all changes, this won't work. Pushing hot fixes back to hgdev will show up in SQL Source Control as being different from the DEV SQL Server, and therefore we would need to overwrite the repository with the "change" from the DEV SQL Server.

    My only workaround so far is to have OPS assign a developer the hotfix ticket with a script attached, and then we run their hotfixes against DEV ourselves to merge them back in. I'm not happy with that solution. Other than working faster to get to a dedicated environment, are there other ways to keep this loop going automatically?

    Read the article

  • JSP et Servlets efficaces : production de sites dynamiques en Java by Jean-Luc Déléage, reviewed by Benwit

    On the occasion of my review of the book JSP et Servlets efficaces : Production de sites dynamiques en Java, I would like to ask you: how did you learn to code websites in Java? Quote: This book is aimed at developers who use Java to build sites, and at those who want to discover the server side of the web. It will also give computer science students, at the end of a bachelor's degree or in a mas..., a concrete introduction to these technologies.

    Read the article

  • How to test, in a local development environment, issues that can only be introduced by clustering in production?

    - by Brian Reindel
    We recently clustered an application, and it came to light that, because of how we're doing SSL offloading via the load balancer in production, it didn't work right. I had to mimic this functionality on my local machine by SSL offloading with an Apache proxy, but it still isn't a 1-to-1 comparison. Similar issues can arise when dealing with stateful applications and sticky sessions. What would be the industry standard for testing this kind of production "black box" scenario in a local environment, especially as it relates to clustering?

    Read the article

  • jQuery Mobile ready for production: the final 1.0 release of the mobile-device UI framework is 30 to 50% faster than RC2

    jQuery Mobile is ready for production: the final 1.0 release of the mobile-device UI framework is 30 to 50% faster than RC2. Update of November 18, 2011, by Idelways. After more than a year of "refinements", jQuery Mobile has moved past its testing phases and ships for production "rock solid", announced Todd Parker, a member of the jQuery project's core team and lead of jQuery UI. After 5 alphas, 3 betas, and 3 RCs, jQuery Mobile 1.0 supports all the popular mobile platforms and browsers for smartphones, tablets, and e-readers. It has also been tested on the various ...

    Read the article

  • Azure: Mobile Services and Web Sites enter production; the infrastructure stores 8.5 trillion objects and handles 900,000 transactions per second

    Windows Azure: Mobile Services and Web Sites enter production. The infrastructure stores 8.5 trillion objects and handles 900,000 transactions per second. Available in preview since August 2012, Windows Azure Mobile Services has reached general availability (GA) together with Windows Azure Web Sites, a step that marks these services' entry into production. As a reminder, Windows Azure Mobile Services is a Backend-as-a-Service (BaaS) platform that provides a turnkey cloud solution for accelerating the development of connected client-side applications.

    Read the article

  • Git push current branch to a remote with Heroku

    - by cmaughan
    I'm trying to create a staging branch on Heroku, but there's something I don't quite get. Assuming I've already created a Heroku app and set up the remote to point to staging-remote, if I do:

        git checkout -b staging staging-remote/master

    I get a local branch called 'staging' which tracks staging-remote/master -- or that's what I thought... But:

        git remote show staging-remote

    gives me this:

        * remote staging
          Fetch URL: git@heroku.com:myappname.git
          Push URL: git@heroku.com:myappname.git
          HEAD branch: master
          Remote branch: master tracked
          Local branch configured for 'git pull': staging-remote merges with remote master
          Local ref configured for 'git push': master pushes to master (up to date)

    As you can see, the pull looks reasonable, but the default push does not. It implies that if I do:

        git push staging-remote

    I'm going to push my local master branch up to the staging branch. But that's not what I want... Basically, I want to merge updates into my staging branch and then easily push it to Heroku without having to specify the branch, like so:

        git push staging-remote mybranch:master

    The above isn't hard to do, but I want to avoid accidentally doing the previous push and pushing the wrong branch... This is doubly important for the production branch I'd like to create! I've tried messing with git config, but haven't figured out how to get this right yet...
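
    One way to pin this down, sketched under the assumption that the remote really is named staging-remote: give the remote an explicit push refspec, so a bare git push sends the local staging branch to the remote master.

        git config remote.staging-remote.push refs/heads/staging:refs/heads/master

        # Verify: 'git remote show staging-remote' should now report
        # that staging pushes to master.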

    Read the article

  • XP machines on Domain not reporting WMI Data in a 2003 Server Environment

    - by Az
    I am running into a very quirky issue, and I hope someone out there can help. We use a monitoring program for several networks we oversee that is heavily dependent on WMI data for its functionality. The Windows 2000 Professional workstations, as well as the 2003 servers on our network, report WMI data fine; the Windows XP Professional machines will not let me view them from within the WMI snap-in for MMC (they return a Win32: Access Denied error). I am, of course, logged in with an account with domain admin privileges on the domain controller when I attempt it. DCOM is enabled in Component Services, and the remote security option is set to allow as well. If we remove a machine from the domain and rejoin it, some workstations show up as WMI-enabled temporarily, and then when I try to access them again later I get the access denied error again, out of the blue. I've had this problem with the firewall turned on or off. Hoping someone out there has had a similar problem or has advice. Thanks for your time! -Az
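
    A quick way to reproduce the failure outside the monitoring tool is to query WMI remotely with wmic from the server (a sketch; MACHINENAME is a placeholder):

        rem Query a remote machine's OS info over WMI/DCOM
        wmic /node:"MACHINENAME" os get caption,version

    If this also returns Access Denied for the XP boxes only, the problem lies in their DCOM/WMI security settings rather than in the monitoring product.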

    Read the article

  • Minimizing SQL transaction log file size on a developer box running the simple recovery model

    - by Anders Rask
    We have a lot of SQL Servers in our development environment where we never take backups of the databases (TFS for code is enough). The (SharePoint) databases are all set to the simple recovery model, but the log files, especially for the SharePoint configuration database, are growing quite large and filling up our data drive on the SQL Server. Since these log files are never used for anything, I would like advice on how best to minimize their size, or even disable them if possible. I'm not completely sure why the log files grow so large even with simple logging (I checked for long-running transactions (DBCC OPENTRAN) but found none). I guess the reason the log files are not being truncated is that we don't take any backups, and hence checkpoints aren't reached. The autogrowth for the log files is set to grow by 10%, restricted to 2 GB, so I guess that is why the checkpoint (70%) isn't reached here either. What would be the best strategy to keep log files small (best case 0) without sacrificing performance (e.g. VLF fragmentation)?
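
    For the immediate space problem, a sketch of the usual one-off shrink under the simple recovery model; the database and logical file names here are assumptions, so look them up first:

        USE SharePoint_Config;
        GO
        -- Find the logical name of the log file
        SELECT name FROM sys.database_files WHERE type_desc = 'LOG';
        GO
        -- Force a checkpoint so the inactive log can be reused, then shrink to 128 MB
        CHECKPOINT;
        DBCC SHRINKFILE (N'SharePoint_Config_log', 128);
        GO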

    Read the article

  • grep is inconsistently defaulting to grep -P?

    - by Sammitch
    I have a script that does some housekeeping; it works perfectly well when invoked from an interactive shell, but did nothing when invoked by cron. To troubleshoot this I started a shell with a 'blank' environment with the command:

        env -i /bin/bash --noprofile --norc

    Using this blank environment I've dug into my script and found that the following grep will not match any files:

        grep -il "^ws_status\s*=\s*[\"']remove[\"']$"

    However, when run from an interactive shell the command will return the filenames of the matching files. As a note, the expression is matching lines like:

        WS_STATUS = "remove"

    Through trial and error I discovered that after adding -P to the options [Perl regex] the command started working normally in the 'blank' shell. However, I have no idea why my login shell appears to default to grep -P. There is only one grep binary, /bin/grep. There are no aliases defined for grep=pgrep or grep="grep -P". There is no env variable GREP_OPTIONS defined. What's the deal here? Note: the OS is RHEL 5.10, Bash is v3.2.25, grep is v2.5.1.
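
    A few checks run from the login shell can expose a wrapper that the blank shell doesn't pick up (a sketch):

        type -a grep            # reveals aliases, functions, and every grep on PATH
        alias grep 2>/dev/null  # prints the alias definition, if one exists
        declare -f grep         # prints a shell function named grep, if one exists
        env | grep -i '^grep'   # shows GREP_OPTIONS or similar variables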

    Read the article

  • P2V options within a Hyper-V environment

    - by tony roth
    I have a server that SAN-boots that I want to P2V. I have many options (Disk2vhd, SCVMM, etc.), but I was thinking about cloning the LUN (FlexClone, NetApp) and presenting it to my Hyper-V R2 server. Within the HV manager, do a "create new disk" and then have it copy the cloned LUN to a VHD file, then do the bcdedit/bootsect stuff to it. Should work, right? I'm also curious whether anybody is booting VHDs that sit on bootable LUNs. I've booted native VHDs just fine; I was just curious about running them off a bootable LUN. I think this has quite a few advantages, like instant P2V, etc. Any thoughts on this? Hmm, dang, as I was typing this I realized that I should not use the HV manager's new-disk copy routine; I should just Disk2vhd the mounted LUN. That has the advantage that it should be a lot faster! I also discovered that Disk2vhd may be flaky; it crashed the first time I ran it! Thanks.
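
    For the Disk2vhd route, the command-line form is handy once the cloned LUN is mounted (a sketch; the drive letter and target path are illustrative):

        disk2vhd e: d:\vhds\server.vhd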

    Read the article

  • Django Dying on Shared Hosting Environment (Too Many MySQL Connections)

    - by Tom
    I've had a Django site up and running on HostGator (client requirement), following these instructions, for a few weeks now. I had seen two error emails about pages dying with (1040: Too many MySQL connections), but had never been able to recreate the problem. As of today, the site is completely unresponsive and all pages, even the static files, are dying with that error. Two questions: What can I do to fix this (other than caching more stuff)? And why would static files be dying like that? I can request them directly without a problem, so how are they getting run through Django? The shared hosting setup doesn't allow for a <Location> block, but there's a flag in the rewrite rule that says only requests for files that don't exist in the filesystem should be processed. All of my static files exist on the system, though they are symbolic links, if that matters.
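
    For reference, the rewrite on such setups usually looks something like this sketch (mysite.fcgi is an assumed name for the dispatcher); the !-f/!-d conditions are what should keep requests for existing static files away from Django, so it's worth confirming they are present:

        RewriteEngine On
        # Hand the request to Django only when no real file or directory matches
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ mysite.fcgi/$1 [QSA,L]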

    Read the article

  • Can't access the internet using a domain-joined computer outside the domain environment

    - by Mike Walsh
    We had an unused box at work, so I took it home. It had been joined to the domain and hasn't been unjoined. When I try to use it at home (logging in with a local admin account), I can't seem to access internet pages. It gets the correct IP and gateway for the local network and the correct DNS servers for the home ADSL connection. I can happily ping the home router (which doesn't have any tricky firewall settings). I can't seem to ping outside, get any DNS to resolve, or (obviously) get any web pages. Is there some problem here with this machine having been joined to the domain?
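
    A quick sketch for separating DNS trouble from routing trouble on the box (8.8.8.8 is just a convenient public address):

        rem Does raw outbound connectivity work?
        ping -n 2 8.8.8.8
        rem Does resolution work when the configured DNS servers are bypassed?
        nslookup www.google.com 8.8.8.8
        rem Double-check what the box actually got from DHCP
        ipconfig /all

    If the bypassed lookup succeeds while normal resolution fails, the machine may still be pointing at the domain's DNS servers (for example via a static entry or group policy) rather than the router's.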

    Read the article
