Search Results

Search found 10921 results on 437 pages for 'latex environment'.


  • How is your Mac set up for Windows development?

    - by Matt Brailsford
    Hi guys, I'm looking at buying a MacBook Pro to replace my aging laptop. My day-to-day job is as a .NET web developer, so I am looking to use VMware Fusion to run Visual Studio, SQL Server, etc. As I've not run my dev environment in a VM before, I would like to know how others are set up. What apps do you have installed, and in which environment? Where do you store your files: within each environment, or on some shared drive? Are there any gotchas or essentials I should know about? Many thanks, Matt

    Read the article

  • The item you tried to buy is no longer available [Environment: Sandbox]

    - by Ansari
    I am trying to add In-App Purchase to my application. I had set up a consumable product which was working fine in the Sandbox environment. Then I created a new non-consumable product with a new price tier, deleted the old one, and updated my code with the new product ID. When the request is sent, it correctly shows the right product with the newly added price tier, but when you tap the Buy button it gives the error "The item you tried to buy is no longer available [Environment: Sandbox]". Any ideas?

    Read the article

  • Spring MVC - JSP - Place to Store Environment Specific Constants

    - by jboyd
    Where in a Spring MVC/JSP application would you store things that need to be accessed by both the controllers and the views, such as environment-specific base URLs, application IDs to be used in JavaScript, and so on? I've tried creating an application-scoped bean and then referencing it at the top of my JSPs, but that doesn't seem to be working.

        <!-- Environment -->
        <bean id="myEnv" class="com.myapp.MyAppEnvironment" scope="application">
          <property name="baseUrl" value="http://localhost:8080/myapp/"/>
          <property name="videoPlayerId" value="234346565"/>
        </bean>

    And using it in the following manner:

        <jsp:useBean id="myEnv" scope="application" type="com.myapp.MyAppEnvironment"/>

    Read the article

  • Should we create a Virtual Machine environment so a consultant can develop in a similar environment?

    - by ChrisNel52
    This is a large project and currently there are only 3 developers working on it. We have some money in the budget to contract development help from a software consulting firm. However, because of the location of our business, it would be beneficial if the consultant could do their development off-site. Also, our company policy doesn't allow contract help to VPN into our network, so that is not an option. My question is: would it be a good idea to create a Virtual Machine that copies our internal environment (particularly our database and WCF service) and give the consultant the Virtual Machine image so that they can replicate the environment at their place of work? I haven't worked much with Virtual Machines, so I'm not sure if this is a good idea or if there are huge obstacles that I'm not thinking of. If anyone has ever done anything like this, it would be great to hear the pros and cons. Any help would be appreciated.

    Read the article

  • Autotest notifications on Ubuntu virtual environment

    - by Luciano
    I am having trouble getting Rails autotest notifications to work in the Engine Yard Vagrant environment. On the Mac, I normally get the notifications via Growl. However, in the virtual environment (which runs Ubuntu) that doesn't work. I tried Linux notification setups such as libnotify + autotest-notification, but I get the following error:

        libnotify-Message: Unable to get session bus: /bin/dbus-launch terminated abnormally with the following error: Autolaunch error: X11 initialization failed.
        ** (notify-send:1004): CRITICAL **: dbus_g_proxy_connect_signal: assertion `DBUS_IS_G_PROXY (proxy)' failed
        ** (notify-send:1004): CRITICAL **: dbus_g_proxy_connect_signal: assertion `DBUS_IS_G_PROXY (proxy)' failed
        ** (notify-send:1004): CRITICAL **: dbus_g_proxy_call: assertion `DBUS_IS_G_PROXY (proxy)' failed

    Another path would be to have Growl receive the notifications remotely, but I don't even know where to begin with that. Any suggestions?

    Read the article

  • Preferred Windows Java Development Environment

    - by JF
    I've been a Linux Java developer for years and have loved it. I just got a new laptop which is running Windows 7. I could wipe the drive and go back to my typical Linux dev setup: vim for editing, tabbed Bash windows running javac and java for smaller projects, and ant for big projects. That said, I'm really thinking it couldn't hurt to learn to develop in a new environment. So, with that in mind, are there any Windows-based Java devs out there? What setup do you like to use to get things done? It'd be interesting to hear both ways to emulate my Linux-based environment and completely different styles that I might benefit from trying.

    Read the article

  • Deserialization error in a new environment

    - by cerhart
    I have a web application that calls a third-party web service. When I run it locally, I have no problems, but when I move it to my production environment, I get the following error: There is an error in XML document (2, 428).

    Stack:
        at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle, XmlDeserializationEvents events)
        at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle)
        at System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall)
        at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)
        at RMXClasses.RMXContactService.ContactService.getActiveSessions(String user, String pass) in C:\Users\hp\Documents\Visual Studio 2008\Projects\ReklamStore\RMXClasses\Web References\RMXContactService\Reference.cs:line 257
        at

    I have used the same web.config file from the production environment, but it still works locally. My local machine is running Vista Home edition and the production environment is Windows Server 2003. The application is written in ASP.NET 3.5; weirdly, under the ASP.NET config tab in IIS, 3.5 doesn't show up in the drop-down list, although that version of the framework is installed. The error is not being thrown in my code; it happens during deserialization. I called the method on the proxy, and I have checked the arguments and they are OK. I have also logged the SOAP request and response, and they both look OK as well. I am really at a loss here. Any ideas? SOAP log: this is the SOAP response that the program seems to have trouble parsing only on Server 2003. On my machine the SOAP is identical, and yet it parses with no problems.

    SoapResponse BeforeDeserialize;
        <?xml version="1.0" encoding="UTF-8"?>
        <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ns1="urn:ContactService" xmlns:ns2="http://api.yieldmanager.com/types" xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
        <SOAP-ENV:Body><ns1:getActiveSessionsResponse>
        <sessions SOAP-ENC:arrayType="ns2:session[1]" xsi:type="ns2:array_of_session">
        <item xsi:type="ns2:session">
        <token xsi:type="xsd:string">xxxxxxxxxxxxxxxxxxxx1ae12517584b</token>
        <creation_time xsi:type="xsd:dateTime">2009-09-25T05:51:19Z</creation_time>
        <modification_time xsi:type="xsd:dateTime">2009-09-25T05:51:19Z</modification_time>
        <ip_address xsi:type="xsd:string">xxxxxxxxxx</ip_address>
        <contact_id xsi:type="xsd:long">xxxxxx</contact_id></item></sessions>
        </ns1:getActiveSessionsResponse></SOAP-ENV:Body></SOAP-ENV:Envelope>

    Read the article

  • Passenger problem: "no such file to load" -- /config/environment

    - by Mason Jones
    I've been researching this one and have found references to similar problems here and there, but none of them has led to a solution yet. I've installed Passenger (2.2.11) and nginx (0.7.64), and when I start things up and hit a Rails URL, I get an error page informing me of a load error: no such file to load -- /path/to/app/config/environment. From what I've found online this appears to be some sort of user/permissions error, but I've tried all the logical fixes: I've made sure that /config/environment.rb is not owned by root but by a webapp user; I've tried setting passenger_default_user; I've tried setting passenger_user_switching off. I've even tried setting the nginx user, though that shouldn't matter much. I've gotten some differing results, but nothing has actually worked. I'm hoping someone may have the magical combination of settings and permissions for this. I may try backing down to an earlier version of Passenger, because I've never had this issue before; it's been a little while since I set up Passenger, though. Thanks for any suggestions.

    Read the article

  • Setting Environment Variables For NMAKE Before Building A 'Makefile Solution'

    - by John Dibling
    I have an MSVC Makefile Project in which I need to set an environment variable before running NMAKE. For x64 builds I need to set it to one value, and for x86 builds I need to set it to something else. So, for example, when doing a build I would want to SET PLATFORM=win64 if I'm building a 64-bit compile, or SET PLATFORM=win32 if I'm building 32-bit. There does not appear to be an option to set environment variables or add a pre-build event for makefile projects. How do I do this? EDIT: Running MSVC 2008.

    Read the article

  • How to prevent command/script from changing global environment

    - by guillermooo
    I need to run scriptblocks/scripts from the current top-level shell and I want them to leave the global environment unmodified. So far, I've only been able to think of the following possibilities:

        powershell -file <script>
        powershell -noprofile -command <scriptblock>

    The problem is that they are very slow. For instance, I would like to be able to do:

        mkdir newdir
        cd newdir
        $env:NEW_VAR = 100
        ni -item f 'newfile.txt'

    ...so that my shell's working directory wouldn't change and $env:NEW_VAR wouldn't be set in the global environment. Are there any more alternatives to accomplish this?

    Read the article

  • How to check whether your code is running on Windows, Linux, or another OS

    - by justjoe
    Hi, right now I am coding a custom WordPress theme and testing it in XAMPP on Windows XP with an Apache server. But as far as I can tell, there's no WP built-in function to identify the code environment. Is there any PHP built-in function to identify such a thing? For the record, the code I want to write needs to read a directory. In my Apache setup (on Windows), the path will be c:/xampp/htdocs, where on Linux it will be /somepath/somepath/. So, is there any code solution to find out what the OS environment is without having to compare the path? I hope it will also work on other OSes with web servers other than Apache, such as IIS.

    Read the article

  • ExpandEnvironmentStrings Not Expanding My Variables

    - by Adam Driscoll
    I have a process under the Run key in the registry. It is trying to access an environment variable that I defined in a previous session. I'm using ExpandEnvironmentStrings to expand the variable within a path. The environment variable is a user profile variable. When I run my process from the command line it does not expand either. If I call 'set' I can see the variable. Some code...

        CString strPath = "\\\\server\\%share%";
        TCHAR cOutputPath[32000];
        DWORD result = ExpandEnvironmentStrings((LPSTR)&strPath, (LPSTR)&cOutputPath, _tcslen(strPath) + 1);
        if ( !result )
        {
            int lastError = GetLastError();
            pLog->Log(_T( "Failed to expand environment strings. GetLastError=%d"), 1, lastError);
        }

    When debugging, the output path is exactly the same as the input path. No error code is returned. What is going on?
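
    For reference, there are two likely culprits in the snippet above: (LPSTR)&strPath casts the address of the CString object rather than its character buffer, and the last argument to ExpandEnvironmentStrings should be the size of the destination buffer in TCHARs, not the length of the source string. A minimal standalone sketch of the intended call might look like the following (the %share% variable and the console output are illustrative assumptions, not part of the original question):

        #include <windows.h>
        #include <tchar.h>
        #include <stdio.h>

        int _tmain()
        {
            // Source string containing an environment variable to expand.
            const TCHAR *source = _T("\\\\server\\%share%");
            TCHAR expanded[MAX_PATH];

            // Second argument is the destination buffer; third is the size of
            // that buffer in TCHARs (not the length of the source string).
            DWORD needed = ExpandEnvironmentStrings(source, expanded,
                                                    sizeof(expanded) / sizeof(TCHAR));
            if (needed == 0 || needed > sizeof(expanded) / sizeof(TCHAR))
            {
                _tprintf(_T("ExpandEnvironmentStrings failed, GetLastError=%lu\n"),
                         GetLastError());
                return 1;
            }

            _tprintf(_T("%s\n"), expanded);
            return 0;
        }

    On success the return value is the number of TCHARs written including the terminator; a value larger than the buffer size means the buffer was too small.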

    Read the article

  • What are the differences between Cygwin on Windows and a real UNIX environment?

    - by Tarun
    Hi, I am a C/C++ developer. I have never done C++ programming on UNIX; I have only done it on Windows. I want to practice C++ on Unix (because all the big companies ask for C++ with Unix). I have a laptop on which I do not want to install any other OS (because I have installed very important software on it and I don't have the setups). So I searched and found Cygwin, which is a Unix emulator for Windows. I am thinking of practicing C++ on this. Please help me: how can I practice and learn in an environment closer to the Unix environment used in big companies like IBM? What will be the differences between Unix and Cygwin?
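
    One practical difference worth knowing while practicing: Cygwin provides a POSIX-style layer on top of Windows, and its gcc defines the __CYGWIN__ macro rather than behaving like a plain Linux or native Windows toolchain. The small sketch below (an illustration added here, not part of the original question) shows how the environments can be told apart at compile time using standard predefined macros:

        #include <iostream>

        int main()
        {
        // Report which platform the binary was built for, based on
        // compiler-defined macros.
        #if defined(__CYGWIN__)
            std::cout << "Built under Cygwin: POSIX-style APIs emulated on Windows\n";
        #elif defined(__linux__)
            std::cout << "Built on Linux: native POSIX environment\n";
        #elif defined(_WIN32)
            std::cout << "Built with a native Windows toolchain (no POSIX layer)\n";
        #else
            std::cout << "Some other platform\n";
        #endif
            return 0;
        }

    The same source should compile unchanged with g++ under Cygwin and on a real Linux box, which is what makes Cygwin a reasonable practice environment, even though the underlying kernel, filesystem paths, and performance characteristics differ from a genuine Unix system.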

    Read the article

  • XP machines on Domain not reporting WMI Data in a 2003 Server Environment

    - by Az
    I am running into a very quirky issue and I hope someone out there can help. We use a monitoring program for several networks we oversee that is dependent on WMI data for a great deal of its functionality. The Windows 2000 Professional workstations, as well as the 2003 servers on our network, report WMI data fine; the Windows XP Professional machines will not let me view them from within the WMI snap-in for MMC (they return a Win32: Access Denied error). I am of course logged in with an account with domain admin privileges on the domain controller when I attempt it. DCOM is enabled in Component Services, and the remote security option is set to allow as well. If we remove a machine from the domain and rejoin it, some workstations will show up as WMI-enabled temporarily, and then when I try to access them again later I get the access denied error again out of the blue. Hoping someone out there has had a similar problem or has advice. I have had this problem with the firewall turned on or off. Thanks for your time! -Az

    Read the article

  • Minimizing SQL transaction log file size on developer box running simple recovery model

    - by Anders Rask
    We have a lot of SQL Servers in our development environment where we never take backups of the databases (TFS for the code is enough). The (SharePoint) databases are all set to the simple recovery model, but the log files, especially for the SharePoint configuration database, are growing quite large and filling up the data drive on the SQL Server. Since these log files are never used for anything, I would like advice on how best to minimize their size, or even disable them if possible. I'm not completely sure why the log files grow so large even under simple recovery (I checked for long-running transactions with DBCC OPENTRAN but found none). I guess the reason the log files are not being truncated is that we don't take any backups, and hence checkpoints aren't reached. The autogrowth for the log files is set to grow by 10%, restricted to 2 GB, so I guess that is why the checkpoint (70%) isn't reached here either. What would be the best strategy to keep the log files small (best case 0) without sacrificing performance (e.g. VLF fragmentation)?

    Read the article

  • grep is inconsistently defaulting to grep -P?

    - by Sammitch
    I have a script that does some housekeeping that works perfectly well when invoked from an interactive shell, but does nothing when invoked by cron. To troubleshoot this I started a shell with a 'blank' environment with the command:

        env -i /bin/bash --noprofile --norc

    Using this blank env I've dug into my script and found that the following grep will not match any files:

        grep -il "^ws_status\s*=\s*[\"']remove[\"']$"

    However, when run from an interactive shell the command will return the filenames of the matching files. As a note, the expression is matching lines like:

        WS_STATUS = "remove"

    Through trial and error I discovered that after adding -P (Perl regex) to the options, the command started working normally in the 'blank' shell. However, I have no idea why my login shell appears to default to grep -P. There is only one grep binary, /bin/grep. There are no aliases defined for grep=pgrep or grep="grep -P". There is no env variable GREP_OPTIONS defined. What's the deal here? Note: OS is RHEL v5.10, Bash is v3.2.25, grep is v2.5.1.

    Read the article

  • P2V options within a Hyper-V environment

    - by tony roth
    I have a server that boots from a SAN that I want to P2V. I have many options (disk2vhd, SCVMM, etc.), but I was thinking about cloning the LUN (FlexClone, NetApp) and presenting it to my Hyper-V R2 server. Within the HV manager I would do a "create new disk" and have it copy the cloned LUN to a VHD file, then do the bcdedit/bootsect stuff to it. Should work, right? I'm also curious whether anybody is booting VHDs that sit on bootable LUNs. I've booted native VHDs just fine; I was just curious about running them off a bootable LUN. I think this has quite a few advantages, like instant P2V, etc. Any thoughts on this? Hmm, as I was typing this I realized that I should not use the HV manager's new-disk copy routine; I should just disk2vhd the mounted LUN. This has the advantage that it should be a lot faster! Update: disk2vhd may be flaky; it crashed the first time I ran it! Thanks

    Read the article

  • Django Dying on Shared Hosting Environment (Too Many MySQL Connections)

    - by Tom
    I've had a Django site up and running on HostGator (a client requirement), following these instructions, for a few weeks now. I had seen two error emails about pages dying with (1040: Too many MySQL connections) but had never been able to recreate the problem. As of today, the site is completely unresponsive and all pages, even the static files, are dying with that error. Two questions: What can I do to fix this (other than caching more stuff)? And why would static files be dying like that? I can request them directly without a problem, so how are they getting run through Django? The shared hosting setup doesn't allow for a <Location> block, but there's a flag in the rewrite rule that says only requests for files that don't exist in the filesystem should be processed. All of my static files exist on the system, though they are symbolically linked files, if that matters.

    Read the article

  • Can't access the internet using a domain-joined computer outside the domain environment

    - by Mike Walsh
    We had an unused box at work, so we took it home. It had been joined to the domain and hasn't been unjoined. When I try to use it at home (logging in with a local admin account) I can't seem to access internet pages. It gets the correct IP and gateway for the local network and the correct DNS servers for the home ADSL connection. I can happily ping the home router (which doesn't have any tricky firewall settings). I can't seem to ping outside, get any DNS names to resolve, or (obviously) get any web pages. Is there some problem here with this machine having been joined to the domain?

    Read the article

  • How to manage credentials in a multi-server environment

    - by rush
    I have some software that uses its own encrypted file for password storage (such as FTP, web, and other passwords used to log in to external systems; there is no way to use certificates). On each server I have several instances of this software, and each instance has its own password file. The number of servers keeps growing, and it's getting harder and harder to keep all the passwords in all the instances up to date. Unfortunately, some servers are in a segregated network and there is no access from them to any centralized storage, although it works the other way around. My first idea was to create a git repository, encrypt each password with gpg, store it there, and deliver it through the deployment system, but the security team was not satisfied with this idea because, in their words, it is insecure to store passwords in a repository even in encrypted form. Nothing else comes to mind. Is there any way to implement safe and secure password storage with minimal effort to keep all passwords up to date? P.S. If it matters, I have Red Hat everywhere.

    Read the article

  • hosting environment for delivering FLVs

    - by Gotys
    What would be the ideal hardware setup for pushing lots of bandwidth on a tube site? We have ever-expanding cloud storage where users upload the movies, and then we have these web-delivery machines which cache the FLV files on their local hard drives and deliver them to users. Each cache machine can deliver 1200 Mbit/s if it has 8 SAS hard drives. Such a cache machine costs us $550/month for 8x160 GB, so each machine can cache only 160 GB at any given time. If we want to cache more than 160 GB, we need to add another machine: another $550/month, and so on. This is very uneconomical, so I am wondering if we have any experts here who can figure out a better setup. I've been looking into GlusterFS, but I am not sure if it can push a lot of bandwidth. Any ideas highly appreciated. Thank you!

    Read the article

  • Two large, linked Excel files take 30 minutes to save, except in VMWare environment

    - by Gerald L
    I support some tax consultants who love to use Excel when they should probably be using Access. Anyway, they have created two Excel files, A and B. File B has cells linked to file A. File A is 27 MB and file B is 16 MB. One worksheet has roughly 1 million rows, and another worksheet does a whole bunch of SUMIFs on those 1 million rows. Not the best idea, but whatever. Both Excel files open and recalculate within a reasonable amount of time (1-2 minutes). For files that large, this is acceptable. Here is the problem: once you change a cell and save file B, it takes a solid 30 minutes to save the file, and the processors run at full speed. I've tried this on 6 different machines, all running Windows XP SP3 with Office 2007 SP2 and all patches. The specs vary from one machine with 512 MB of RAM to a machine with 4 GB of RAM and quad processors. Same result every time. Here is the clincher: if I do the same save operation on a VMware virtual machine, the file gets saved in 1 minute. I've tried this with my ESX servers at the office, VMware Fusion on my Mac at home, and VMware Workstation at the office. It does not matter how much RAM the virtual machine has; it saves in about 1 minute every time. Does anybody have any idea why this is happening and how to fix it?

    Read the article
